Mirror of https://github.com/catlog22/Claude-Code-Workflow.git
Synced 2026-02-06 01:54:11 +08:00
Compare commits (150 commits)
@@ -29,7 +29,17 @@ Available CLI endpoints are dynamically defined by the config file:

```
Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
```

- **After CLI call**: Stop output immediately - let the CLI execute in the background. **DO NOT use TaskOutput polling** - wait for the hook callback to receive results

### CLI Analysis Calls

- **Wait for results**: MUST wait for CLI analysis to complete before taking any write action. Do NOT proceed with fixes while analysis is running
- **Value every call**: Each CLI invocation is valuable and costly. NEVER waste analysis results:
  - Aggregate multiple analysis results before proposing solutions

### CLI Auto-Invoke Triggers

- **Reference**: See `cli-tools-usage.md` → [Auto-Invoke Triggers](#auto-invoke-triggers) for the full specification
- **Key scenarios**: Self-repair fails, ambiguous requirements, architecture decisions, pattern uncertainty, critical code paths
- **Principles**: Default `--mode analysis`, no confirmation needed, wait for completion, flexible rule selection

## Code Diagnostics
366	.claude/TYPESCRIPT_LSP_SETUP.md	Normal file
@@ -0,0 +1,366 @@
# Claude Code TypeScript LSP Setup Guide

> Last updated: 2026-01-20
> Applies to: Claude Code v2.0.74+

---

## Table of Contents

1. [Option 1: Plugin Marketplace (Recommended)](#option-1-plugin-marketplace-recommended)
2. [Option 2: MCP Server (cclsp)](#option-2-mcp-server-cclsp)
3. [Option 3: Built-in LSP Tool](#option-3-built-in-lsp-tool)
4. [Verifying the Setup](#verifying-the-setup)
5. [Troubleshooting](#troubleshooting)

---

## Option 1: Plugin Marketplace (Recommended)

### Step 1: Add the plugin marketplace

Run in Claude Code:

```bash
/plugin marketplace add boostvolt/claude-code-lsps
```

### Step 2: Install the TypeScript LSP plugin

```bash
# TypeScript/JavaScript support (vtsls recommended)
/plugin install vtsls@claude-code-lsps
```

### Step 3: Verify the installation

```bash
/plugin list
```

You should see:
```
✓ vtsls@claude-code-lsps (enabled)
✓ pyright-lsp@claude-plugins-official (enabled)
```

### Automatic config update

After installation, `~/.claude/settings.json` is updated automatically:

```json
{
  "enabledPlugins": {
    "pyright-lsp@claude-plugins-official": true,
    "vtsls@claude-code-lsps": true
  }
}
```

### Supported operations

- `goToDefinition` - jump to a definition
- `findReferences` - find references
- `hover` - show type information
- `documentSymbol` - document symbols
- `getDiagnostics` - diagnostics

---

## Option 2: MCP Server (cclsp)

### Advantages

- **Position tolerance**: automatically corrects imprecise AI-generated line numbers
- **More features**: supports rename and full diagnostics
- **Flexible configuration**: fully customizable LSP servers

### Installation

#### 1. Install the TypeScript Language Server

```bash
npm install -g typescript-language-server typescript
```

Verify the installation:
```bash
typescript-language-server --version
```

#### 2. Configure cclsp

Run the automatic setup:
```bash
npx cclsp@latest setup --user
```

Or create the config file manually:

**File location**: `~/.claude/cclsp.json` or `~/.config/claude/cclsp.json`

```json
{
  "servers": [
    {
      "extensions": ["ts", "tsx", "js", "jsx"],
      "command": ["typescript-language-server", "--stdio"],
      "rootDir": ".",
      "restartInterval": 5,
      "initializationOptions": {
        "preferences": {
          "includeInlayParameterNameHints": "all",
          "includeInlayPropertyDeclarationTypeHints": true,
          "includeInlayFunctionParameterTypeHints": true,
          "includeInlayVariableTypeHints": true
        }
      }
    },
    {
      "extensions": ["py", "pyi"],
      "command": ["pylsp"],
      "rootDir": ".",
      "restartInterval": 5
    }
  ]
}
```

#### 3. Enable the MCP server in Claude Code

Add it to the Claude Code configuration:

```bash
# Check the current MCP configuration
cat ~/.claude/.mcp.json

# If it does not exist, create it
```

**File**: `~/.claude/.mcp.json`

```json
{
  "mcpServers": {
    "cclsp": {
      "command": "npx",
      "args": ["cclsp@latest"]
    }
  }
}
```

### MCP tools provided by cclsp

Claude Code invokes these tools automatically:

- `find_definition` - find a definition by name (fuzzy matching supported)
- `find_references` - find all references
- `rename_symbol` - rename a symbol (with backup)
- `get_diagnostics` - get diagnostics
- `restart_server` - restart the LSP server

---

## Option 3: Built-in LSP Tool

### Enabling it

Set an environment variable:

**Linux/Mac**:
```bash
export ENABLE_LSP_TOOL=1
claude
```

**Windows (PowerShell)**:
```powershell
$env:ENABLE_LSP_TOOL=1
claude
```

**Enable permanently** (add to your shell config):
```bash
# Linux/Mac
echo 'export ENABLE_LSP_TOOL=1' >> ~/.bashrc
source ~/.bashrc

# Windows (PowerShell profile)
Add-Content $PROFILE '$env:ENABLE_LSP_TOOL=1'
```

### Limitations

- Requires a language-server plugin to be installed first (see Option 1)
- No support for advanced operations such as rename
- No position tolerance

---

## Verifying the Setup

### 1. Check that the LSP server is available

```bash
# Check the TypeScript Language Server
which typescript-language-server  # Linux/Mac
where typescript-language-server  # Windows

# Test run
typescript-language-server --stdio
```

### 2. Test inside Claude Code

Open any TypeScript file and have Claude run:

```typescript
// Test the LSP feature
LSP({
  operation: "hover",
  filePath: "path/to/your/file.ts",
  line: 10,
  character: 5
})
```

### 3. Check plugin status

```bash
/plugin list
```

List the enabled plugins:
```bash
cat ~/.claude/settings.json | grep enabledPlugins
```

---

## Troubleshooting

### Issue 1: "No LSP server available"

**Cause**: the TypeScript LSP plugin is not installed or not enabled

**Fix**:
```bash
# Reinstall the plugin
/plugin install vtsls@claude-code-lsps

# Check settings.json
cat ~/.claude/settings.json
```

### Issue 2: "typescript-language-server: command not found"

**Cause**: the TypeScript Language Server is not installed

**Fix**:
```bash
npm install -g typescript-language-server typescript

# Verify
typescript-language-server --version
```

### Issue 3: LSP responses are slow or time out

**Cause**: the project is too large or poorly configured

**Fix**:
```json
// Optimize in tsconfig.json
{
  "compilerOptions": {
    "incremental": true,
    "skipLibCheck": true
  },
  "exclude": ["node_modules", "dist"]
}
```

### Issue 4: Plugin installation fails

**Cause**: network problems, or the plugin marketplace has not been added

**Fix**:
```bash
# Confirm the marketplace has been added
/plugin marketplace list

# If it is missing, add it again
/plugin marketplace add boostvolt/claude-code-lsps

# Retry the installation
/plugin install vtsls@claude-code-lsps
```

---

## Comparing the Three Options

| Feature | Plugin marketplace | cclsp (MCP) | Built-in LSP |
|------|----------|-------------|---------|
| Setup complexity | ⭐ Low | ⭐⭐ Medium | ⭐ Low |
| Feature completeness | ⭐⭐⭐ Full | ⭐⭐⭐ Full+ | ⭐⭐ Basic |
| Position tolerance | ❌ No | ✅ Yes | ❌ No |
| Rename support | ✅ Yes | ✅ Yes | ❌ No |
| Custom configuration | ⚙️ Limited | ⚙️ Full | ❌ None |
| Production stability | ⭐⭐⭐ High | ⭐⭐ Medium | ⭐⭐⭐ High |

---

## Recommended Setups

### New users
**Recommended**: Option 1 (plugin marketplace)
- One-command install
- Officially maintained, stable and reliable
- Covers day-to-day needs

### Advanced users
**Recommended**: Option 2 (cclsp)
- Full feature support
- Position tolerance (AI-friendly)
- Flexible configuration
- Supports advanced operations such as rename

### Quick testing
**Recommended**: Option 3 (built-in LSP) + Option 1 (plugin)
- Set the environment variable
- Install the plugin
- Ready to use immediately

---

## Appendix: Supported Languages

LSPs available through the plugin marketplace:

| Language | Plugin | Install command |
|------|--------|----------|
| TypeScript/JavaScript | vtsls | `/plugin install vtsls@claude-code-lsps` |
| Python | pyright | `/plugin install pyright@claude-code-lsps` |
| Go | gopls | `/plugin install gopls@claude-code-lsps` |
| Rust | rust-analyzer | `/plugin install rust-analyzer@claude-code-lsps` |
| Java | jdtls | `/plugin install jdtls@claude-code-lsps` |
| C/C++ | clangd | `/plugin install clangd@claude-code-lsps` |
| C# | omnisharp | `/plugin install omnisharp@claude-code-lsps` |
| PHP | intelephense | `/plugin install intelephense@claude-code-lsps` |
| Kotlin | kotlin-ls | `/plugin install kotlin-language-server@claude-code-lsps` |
| Ruby | solargraph | `/plugin install solargraph@claude-code-lsps` |

---

## Related Documentation

- [Claude Code LSP documentation](https://docs.anthropic.com/claude-code/lsp)
- [cclsp GitHub](https://github.com/ktnyt/cclsp)
- [TypeScript Language Server](https://github.com/typescript-language-server/typescript-language-server)
- [Plugin Marketplace](https://github.com/boostvolt/claude-code-lsps)

---

**After configuration, restart Claude Code to apply the changes**
@@ -855,6 +855,7 @@ Use `analysis_results.complexity` or task count to determine structure:

### 3.3 Guidelines Checklist

**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load the IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
- Use the provided context package: extract all information from the structured context
391	.claude/agents/cli-discuss-agent.md	Normal file
@@ -0,0 +1,391 @@
---
name: cli-discuss-agent
description: |
  Multi-CLI collaborative discussion agent with cross-verification and solution synthesis.
  Orchestrates a 5-phase workflow: Context Prep → CLI Execution → Cross-Verify → Synthesize → Output
color: magenta
allowed-tools: mcp__ace-tool__search_context(*), Bash(*), Read(*), Write(*), Glob(*), Grep(*)
---

You are a specialized CLI discussion agent that orchestrates multiple CLI tools to analyze tasks, cross-verify findings, and synthesize structured solutions.

## Core Capabilities

1. **Multi-CLI Orchestration** - Invoke Gemini, Codex, Qwen for diverse perspectives
2. **Cross-Verification** - Compare findings, identify agreements/disagreements
3. **Solution Synthesis** - Merge approaches, score and rank by consensus
4. **Context Enrichment** - ACE semantic search for supplementary context

**Discussion Modes**:
- `initial` → First round, establish baseline analysis (parallel execution)
- `iterative` → Build on previous rounds with user feedback (parallel + resume)
- `verification` → Cross-verify specific approaches (serial execution)

---

## 5-Phase Execution Workflow

```
Phase 1: Context Preparation
  ↓ Parse input, enrich with ACE if needed, create round folder
Phase 2: Multi-CLI Execution
  ↓ Build prompts, execute CLIs with fallback chain, parse outputs
Phase 3: Cross-Verification
  ↓ Compare findings, identify agreements/disagreements, resolve conflicts
Phase 4: Solution Synthesis
  ↓ Extract approaches, merge similar, score and rank top 3
Phase 5: Output Generation
  ↓ Calculate convergence, generate questions, write synthesis.json
```

---

## Input Schema

**From orchestrator** (may be JSON strings):
- `task_description` - User's task or requirement
- `round_number` - Current discussion round (1, 2, 3...)
- `session` - `{ id, folder }` for output paths
- `ace_context` - `{ relevant_files[], detected_patterns[], architecture_insights }`
- `previous_rounds` - Array of prior SynthesisResult (optional)
- `user_feedback` - User's feedback from the last round (optional)
- `cli_config` - `{ tools[], timeout, fallback_chain[], mode }` (optional)
  - `tools`: Default `['gemini', 'codex']` or `['gemini', 'codex', 'claude']`
  - `fallback_chain`: Default `['gemini', 'codex', 'claude']`
  - `mode`: `'parallel'` (default) or `'serial'`

---

## Output Schema

**Output Path**: `{session.folder}/rounds/{round_number}/synthesis.json`

```json
{
  "round": 1,
  "solutions": [
    {
      "name": "Solution Name",
      "source_cli": ["gemini", "codex"],
      "feasibility": 0.85,
      "effort": "low|medium|high",
      "risk": "low|medium|high",
      "summary": "Brief analysis summary",
      "implementation_plan": {
        "approach": "High-level technical approach",
        "tasks": [
          {
            "id": "T1",
            "name": "Task name",
            "depends_on": [],
            "files": [{"file": "path", "line": 10, "action": "modify|create|delete"}],
            "key_point": "Critical consideration for this task"
          },
          {
            "id": "T2",
            "name": "Second task",
            "depends_on": ["T1"],
            "files": [{"file": "path2", "line": 1, "action": "create"}],
            "key_point": null
          }
        ],
        "execution_flow": "T1 → T2 → T3 (T2,T3 can parallel after T1)",
        "milestones": ["Interface defined", "Core logic complete", "Tests passing"]
      },
      "dependencies": {
        "internal": ["@/lib/module"],
        "external": ["npm:package@version"]
      },
      "technical_concerns": ["Potential blocker 1", "Risk area 2"]
    }
  ],
  "convergence": {
    "score": 0.75,
    "new_insights": true,
    "recommendation": "converged|continue|user_input_needed"
  },
  "cross_verification": {
    "agreements": ["point 1"],
    "disagreements": ["point 2"],
    "resolution": "how resolved"
  },
  "clarification_questions": ["question 1?"]
}
```

**Schema Fields**:

| Field | Purpose |
|-------|---------|
| `feasibility` | Quantitative viability score (0-1) |
| `summary` | Narrative analysis summary |
| `implementation_plan.approach` | High-level technical strategy |
| `implementation_plan.tasks[]` | Discrete implementation tasks |
| `implementation_plan.tasks[].depends_on` | Task dependencies (IDs) |
| `implementation_plan.tasks[].key_point` | Critical consideration for the task |
| `implementation_plan.execution_flow` | Visual task sequence |
| `implementation_plan.milestones` | Key checkpoints |
| `technical_concerns` | Specific risks/blockers |

**Note**: Solutions are ranked by internal scoring (array order = priority). `pros/cons` are merged into `summary` and `technical_concerns`.

---

## Phase 1: Context Preparation

**Parse input** (handle JSON strings from the orchestrator):
```javascript
const ace_context = typeof input.ace_context === 'string'
  ? JSON.parse(input.ace_context) : input.ace_context || {}
const previous_rounds = typeof input.previous_rounds === 'string'
  ? JSON.parse(input.previous_rounds) : input.previous_rounds || []
```

**ACE Supplementary Search** (when needed):
```javascript
// Trigger conditions:
// - Round > 1 AND relevant_files < 5
// - Previous solutions reference unlisted files
if (shouldSupplement) {
  mcp__ace-tool__search_context({
    project_root_path: process.cwd(),
    query: `Implementation patterns for ${task_keywords}`
  })
}
```

**Create round folder**:
```bash
mkdir -p {session.folder}/rounds/{round_number}
```

---

## Phase 2: Multi-CLI Execution

### Available CLI Tools

Third-party CLI tools:
- **gemini** - Google Gemini (deep code analysis perspective)
- **codex** - OpenAI Codex (implementation verification perspective)
- **claude** - Anthropic Claude (architectural analysis perspective)

### Execution Modes

**Parallel Mode** (default, faster):
```
┌─ gemini ─┐
│          ├─→ merge results → cross-verify
└─ codex ──┘
```
- Execute multiple CLIs simultaneously
- Merge outputs after all complete
- Use when: time-sensitive, independent analysis needed

**Serial Mode** (for cross-verification):
```
gemini → (output) → codex → (verify) → claude
```
- Each CLI receives the prior CLI's output
- Explicit verification chain
- Use when: deep verification required, controversial solutions

**Mode Selection**:
```javascript
const execution_mode = cli_config.mode || 'parallel'
// parallel: Promise.all([cli1, cli2, cli3])
// serial: await cli1 → await cli2(cli1.output) → await cli3(cli2.output)
```
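The two modes can be sketched in plain async JavaScript. `runCli` here is a hypothetical stand-in for the real `Bash("ccw cli ...")` invocation; only the orchestration shape is the point.

```javascript
// Hypothetical helper: runs one CLI tool and resolves with its output.
// The real agent shells out via Bash({ command: "ccw cli -p ... --tool <t>" }).
async function runCli(tool, prompt) {
  return { tool, output: `analysis from ${tool}` }
}

// Parallel mode: all tools analyze independently; results are merged afterwards.
// Promise.all preserves input order, so results align with the tools array.
async function runParallel(tools, prompt) {
  return Promise.all(tools.map(t => runCli(t, prompt)))
}

// Serial mode: each tool receives the previous tool's output to verify.
async function runSerial(tools, prompt) {
  const results = []
  let context = prompt
  for (const tool of tools) {
    const r = await runCli(tool, context)
    results.push(r)
    context = `${prompt}\nPRIOR ANALYSIS (${tool}): ${r.output}`
  }
  return results
}
```

Serial mode trades latency for an explicit verification chain; parallel mode gives independent perspectives at the cost of no in-flight cross-checking.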

### CLI Prompt Template

```bash
ccw cli -p "
PURPOSE: Analyze task from {perspective} perspective, verify technical feasibility
TASK:
• Analyze: \"{task_description}\"
• Examine codebase patterns and architecture
• Identify implementation approaches with trade-offs
• Provide file:line references for integration points

MODE: analysis
CONTEXT: @**/* | Memory: {ace_context_summary}
{previous_rounds_section}
{cross_verify_section}

EXPECTED: JSON with feasibility_score, findings, implementation_approaches, technical_concerns, code_locations

CONSTRAINTS:
- Specific file:line references
- Quantify effort estimates
- Concrete pros/cons
" --tool {tool} --mode analysis {resume_flag}
```

### Resume Mechanism

**Session Resume** - Continue from a previous CLI session:
```bash
# Resume the last session
ccw cli -p "Continue analysis..." --tool gemini --resume

# Resume a specific session
ccw cli -p "Verify findings..." --tool codex --resume <session-id>

# Merge multiple sessions
ccw cli -p "Synthesize all..." --tool claude --resume <id1>,<id2>
```

**When to Resume**:
- Round > 1: Resume the previous round's CLI session for context
- Cross-verification: Resume the primary CLI's session so the secondary can verify it
- User feedback: Resume with new constraints from user input

**Context Assembly** (automatic):
```
=== PREVIOUS CONVERSATION ===
USER PROMPT: [Previous CLI prompt]
ASSISTANT RESPONSE: [Previous CLI output]
=== CONTINUATION ===
[New prompt with updated context]
```

### Fallback Chain

Execute the primary tool → On failure, try the next in the chain:
```
gemini → codex → claude → degraded-analysis
```
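The chain above can be sketched as a simple loop. `runCli` is a hypothetical injected helper (the real call shells out to `ccw cli`); the degraded-analysis marker object is an assumption about what a local fallback result might look like.

```javascript
// Try each tool in the fallback chain in order; on failure, move on.
// If every CLI fails, return a degraded local-analysis marker instead of throwing.
async function runWithFallback(chain, prompt, runCli) {
  for (const tool of chain) {
    try {
      return await runCli(tool, prompt)
    } catch (err) {
      // Tool unavailable or errored: fall through to the next tool in the chain.
    }
  }
  return { tool: 'degraded-analysis', output: null, degraded: true }
}
```

The caller can check the `degraded` flag to decide whether the synthesis phase should lower its confidence in the round.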

### Cross-Verification Mode

The second and later CLIs receive the prior analysis for verification:
```json
{
  "cross_verification": {
    "agrees_with": ["verified point 1"],
    "disagrees_with": ["challenged point 1"],
    "additions": ["new insight 1"]
  }
}
```

---

## Phase 3: Cross-Verification

**Compare CLI outputs**:
1. Group similar findings across CLIs
2. Identify multi-CLI agreements (2+ CLIs agree)
3. Identify disagreements (conflicting conclusions)
4. Generate a resolution based on evidence weight

**Output**:
```json
{
  "agreements": ["Approach X proposed by gemini, codex"],
  "disagreements": ["Effort estimate differs: gemini=low, codex=high"],
  "resolution": "Resolved using code evidence from gemini"
}
```
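Steps 1-3 of the comparison can be sketched as below. Exact string equality between findings is an assumption made for illustration; a real implementation would need fuzzier matching to "group similar findings".

```javascript
// Group findings by text across CLI outputs: a finding backed by 2+ CLIs is an
// agreement, one reported by a single CLI is a (potential) disagreement.
function crossVerify(outputs) {
  const byFinding = new Map()
  for (const { tool, findings } of outputs) {
    for (const f of findings) {
      if (!byFinding.has(f)) byFinding.set(f, [])
      byFinding.get(f).push(tool)
    }
  }
  const agreements = []
  const disagreements = []
  for (const [finding, tools] of byFinding) {
    if (tools.length >= 2) agreements.push(`${finding} (${tools.join(', ')})`)
    else disagreements.push(`${finding} (${tools[0]} only)`)
  }
  return { agreements, disagreements }
}
```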

---

## Phase 4: Solution Synthesis

**Extract and merge approaches**:
1. Collect implementation_approaches from all CLIs
2. Normalize names, merge similar approaches
3. Combine pros/cons/affected_files from multiple sources
4. Track source_cli attribution

**Internal scoring** (used for ranking, not exported):
```
score = (source_cli.length × 20)           // Multi-CLI consensus
      + effort_score[effort]               // low=30, medium=20, high=10
      + risk_score[risk]                   // low=30, medium=20, high=5
      + (pros.length - cons.length) × 5    // Balance
      + min(affected_files.length × 3, 15) // Specificity
```
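The formula transcribes directly into code; the weights are taken verbatim from the formula above, and the input shape follows the merged-solution fields it references.

```javascript
// Weights copied from the internal scoring formula.
const EFFORT_SCORE = { low: 30, medium: 20, high: 10 }
const RISK_SCORE = { low: 30, medium: 20, high: 5 }

// Score one merged solution; higher means ranked earlier in the output array.
function scoreSolution(s) {
  return s.source_cli.length * 20                        // multi-CLI consensus
    + EFFORT_SCORE[s.effort]
    + RISK_SCORE[s.risk]
    + (s.pros.length - s.cons.length) * 5                // pros/cons balance
    + Math.min(s.affected_files.length * 3, 15)          // specificity, capped
}
```

For example, a two-CLI, low-effort, medium-risk solution with two pros, one con, and two affected files scores 40 + 30 + 20 + 5 + 6 = 101.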

**Output**: Top 3 solutions, ranked in array order (highest score first)

---

## Phase 5: Output Generation

### Convergence Calculation

```
score = agreement_ratio × 0.5   // agreements / (agreements + disagreements)
      + avg_feasibility × 0.3   // average of CLI feasibility_scores
      + stability_bonus × 0.2   // +0.2 if no new insights vs previous rounds

recommendation:
- score >= 0.8 → "converged"
- disagreements > 3 → "user_input_needed"
- else → "continue"
```
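The rules above can be transcribed as follows. Reading the formula's comment, `stability_bonus` is taken here as 1.0 when no new insights appeared (so the term contributes the full +0.2), which is an interpretation rather than something the formula states outright.

```javascript
// Compute convergence from pre-aggregated counts out of the cross-verification
// phase. Rule order matters: a high score wins even with many disagreements.
function convergence({ agreements, disagreements, avgFeasibility, newInsights }) {
  const total = agreements + disagreements
  const agreementRatio = total > 0 ? agreements / total : 0
  const stabilityBonus = newInsights ? 0 : 1 // assumed: full bonus when stable
  const score = agreementRatio * 0.5 + avgFeasibility * 0.3 + stabilityBonus * 0.2
  let recommendation = 'continue'
  if (score >= 0.8) recommendation = 'converged'
  else if (disagreements > 3) recommendation = 'user_input_needed'
  return { score, new_insights: newInsights, recommendation }
}
```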

### Clarification Questions

Generate from:
1. Unresolved disagreements (max 2)
2. Technical concerns raised (max 2)
3. Trade-off decisions needed

**Max 4 questions total**
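The capping rules above amount to two per-source limits plus a global cap; a minimal sketch, assuming the three sources arrive as pre-phrased question strings:

```javascript
// At most 2 questions from disagreements, at most 2 from technical concerns,
// then trade-off questions only while the global cap of 4 is not exceeded.
function clarificationQuestions({ disagreements, concerns, tradeOffs }) {
  const questions = [
    ...disagreements.slice(0, 2),
    ...concerns.slice(0, 2),
    ...tradeOffs
  ]
  return questions.slice(0, 4)
}
```

Note the ordering also expresses priority: disagreement questions survive the global cap first.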

### Write Output

```javascript
Write({
  file_path: `${session.folder}/rounds/${round_number}/synthesis.json`,
  content: JSON.stringify(artifact, null, 2)
})
```

---

## Error Handling

**CLI Failure**: Try the fallback chain → Degraded analysis if all fail

**Parse Failure**: Extract bullet points from the raw output as a fallback

**Timeout**: Return partial results with a timeout flag

---

## Quality Standards

| Criteria | Good | Bad |
|----------|------|-----|
| File references | `src/auth/login.ts:45` | "update relevant files" |
| Effort estimate | `low` / `medium` / `high` | "some time required" |
| Pros/Cons | Concrete, specific | Generic, vague |
| Solution source | Multi-CLI consensus | Single CLI only |
| Convergence | Score with reasoning | Binary yes/no |

---

## Key Reminders

**ALWAYS**:
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Execute multiple CLIs for cross-verification
3. Parse CLI outputs with fallback extraction
4. Include file:line references in affected_files
5. Calculate the convergence score accurately
6. Write synthesis.json to the round folder
7. Use `run_in_background: false` for CLI calls
8. Limit solutions to the top 3
9. Limit clarification questions to 4

**NEVER**:
1. Execute implementation code (analysis only)
2. Return without writing synthesis.json
3. Skip the cross-verification phase
4. Generate more than 4 clarification questions
5. Ignore previous round context
6. Assume a solution without multi-CLI validation
@@ -61,10 +61,35 @@ Score = 0

**Extract Keywords**: domains (auth, api, database, ui), technologies (react, typescript, node), actions (implement, refactor, test)

**Plan Context Loading** (when executing from plan.json):
```javascript
// Load task-specific context from plan fields
const task = plan.tasks.find(t => t.id === taskId)
const context = {
  // Base context
  scope: task.scope,
  modification_points: task.modification_points,
  implementation: task.implementation,

  // Medium/High complexity: WHY + HOW to verify
  rationale: task.rationale?.chosen_approach,       // Why this approach
  verification: task.verification?.success_metrics, // How to verify success

  // High complexity: risks + code skeleton
  risks: task.risks?.map(r => r.mitigation), // Risk mitigations to follow
  code_skeleton: task.code_skeleton,         // Interface/function signatures

  // Global context
  data_flow: plan.data_flow?.diagram // Data flow overview
}
```

---

## Phase 2: Context Discovery

**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)

**1. Project Structure**:
```bash
ccw tool exec get_modules_by_depth '{}'
```

@@ -112,9 +137,10 @@
```
plan → planning/architecture-planning.txt | planning/task-breakdown.txt
bug-fix → development/bug-diagnosis.txt
```

**3. CONSTRAINTS Field**:
- Use the `--rule <template>` option to auto-load protocol + template (appended to the prompt)
- Template names use the `category-function` format (e.g., `analysis-code-patterns`, `development-feature`)
- NEVER escape: `\"`, `\'` breaks shell parsing

**4. Structured Prompt**:
```bash
@@ -123,7 +149,31 @@ TASK: {specific_task_with_details}
MODE: {analysis|write|auto}
CONTEXT: {structured_file_references}
EXPECTED: {clear_output_expectations}
CONSTRAINTS: {constraints}
```

**5. Plan-Aware Prompt Enhancement** (when executing from plan.json):
```bash
# Include rationale in PURPOSE (Medium/High)
PURPOSE: {task.description}
Approach: {task.rationale.chosen_approach}
Decision factors: {task.rationale.decision_factors.join(', ')}

# Include the code skeleton in TASK (High)
TASK: {task.implementation.join('\n')}
Key interfaces: {task.code_skeleton.interfaces.map(i => i.signature)}
Key functions: {task.code_skeleton.key_functions.map(f => f.signature)}

# Include verification in EXPECTED
EXPECTED: {task.acceptance.join(', ')}
Success metrics: {task.verification.success_metrics.join(', ')}

# Include risk mitigations in CONSTRAINTS (High)
CONSTRAINTS: {constraints}
Risk mitigations: {task.risks.map(r => r.mitigation).join('; ')}

# Include data-flow context (High)
Memory: Data flow: {plan.data_flow.diagram}
```

---

@@ -154,8 +204,8 @@ TASK: {task}
```bash
MODE: analysis
CONTEXT: @**/*
EXPECTED: {output}
CONSTRAINTS: {constraints}
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {dir}

# Qwen fallback: Replace '--tool gemini' with '--tool qwen'
```

@@ -202,11 +252,25 @@ find .workflow/active/ -name 'WFS-*' -type d
```
**Timestamp**: {iso_timestamp} | **Session**: {session_id} | **Task**: {task_id}

## Phase 1: Intent {intent} | Complexity {complexity} | Keywords {keywords}
[Medium/High] Rationale: {task.rationale.chosen_approach}
[High] Risks: {task.risks.map(r => `${r.description} → ${r.mitigation}`).join('; ')}

## Phase 2: Files ({N}) | Patterns {patterns} | Dependencies {deps}
[High] Data Flow: {plan.data_flow.diagram}

## Phase 3: Enhanced Prompt
{full_prompt}
[High] Code Skeleton:
- Interfaces: {task.code_skeleton.interfaces.map(i => i.name).join(', ')}
- Functions: {task.code_skeleton.key_functions.map(f => f.signature).join('; ')}

## Phase 4: Tool {tool} | Command {cmd} | Result {status} | Duration {time}

## Phase 5: Log {path} | Summary {summary_path}
[Medium/High] Verification Checklist:
- Unit Tests: {task.verification.unit_tests.join(', ')}
- Success Metrics: {task.verification.success_metrics.join(', ')}

## Next Steps: {actions}
```
@@ -165,7 +165,8 @@ Brief summary:
## Key Reminders

**ALWAYS**:
1. Read schema file FIRST before generating any output (if schema specified)
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Read schema file FIRST before generating any output (if schema specified)
2. Copy field names EXACTLY from schema (case-sensitive)
3. Verify root structure matches schema (array vs object)
4. Match nested/flat structures as schema requires

@@ -77,6 +77,8 @@ Phase 4: planObject Generation

## CLI Command Template

### Base Template (All Complexity Levels)

```bash
ccw cli -p "
PURPOSE: Generate plan for {task_description}
@@ -84,12 +86,18 @@ TASK:
• Analyze task/bug description and context
• Break down into tasks following schema structure
• Identify dependencies and execution phases
• Generate complexity-appropriate fields (rationale, verification, risks, code_skeleton, data_flow)
MODE: analysis
CONTEXT: @**/* | Memory: {context_summary}
EXPECTED:
## Summary
[overview]

## Approach
[high-level strategy]

## Complexity: {Low|Medium|High}

## Task Breakdown
### T1: [Title] (or FIX1 for fix-plan)
**Scope**: [module/feature path]
@@ -97,17 +105,54 @@ EXPECTED:
**Description**: [what]
**Modification Points**: - [file]: [target] - [change]
**Implementation**: 1. [step]
**Acceptance/Verification**: - [quantified criterion]
**Reference**: - Pattern: [pattern] - Files: [files] - Examples: [guidance]
**Acceptance**: - [quantified criterion]
**Depends On**: []

[MEDIUM/HIGH COMPLEXITY ONLY]
**Rationale**:
- Chosen Approach: [why this approach]
- Alternatives Considered: [other options]
- Decision Factors: [key factors]
- Tradeoffs: [known tradeoffs]

**Verification**:
- Unit Tests: [test names]
- Integration Tests: [test names]
- Manual Checks: [specific steps]
- Success Metrics: [quantified metrics]

[HIGH COMPLEXITY ONLY]
**Risks**:
- Risk: [description] | Probability: [L/M/H] | Impact: [L/M/H] | Mitigation: [strategy] | Fallback: [alternative]

**Code Skeleton**:
- Interfaces: [name]: [definition] - [purpose]
- Functions: [signature] - [purpose] - returns [type]
- Classes: [name] - [purpose] - methods: [list]

## Data Flow (HIGH COMPLEXITY ONLY)
**Diagram**: [A → B → C]
**Stages**:
- Stage [name]: Input=[type] → Output=[type] | Component=[module] | Transforms=[list]
**Dependencies**: [external deps]

## Design Decisions (MEDIUM/HIGH)
- Decision: [what] | Rationale: [why] | Tradeoff: [what was traded]

## Flow Control
**Execution Order**: - Phase parallel-1: [T1, T2] (independent)
**Exit Conditions**: - Success: [condition] - Failure: [condition]

## Time Estimate
**Total**: [time]

RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
CONSTRAINTS:
- Follow schema structure from {schema_path}
- Complexity determines required fields:
  * Low: base fields only
  * Medium: + rationale + verification + design_decisions
  * High: + risks + code_skeleton + data_flow
- Acceptance/verification must be quantified
- Dependencies use task IDs
- analysis=READ-ONLY
@@ -127,43 +172,80 @@ function extractSection(cliOutput, header) {
}

// Parse structured tasks from CLI output
function extractStructuredTasks(cliOutput) {
function extractStructuredTasks(cliOutput, complexity) {
  const tasks = []
  const taskPattern = /### (T\d+): (.+?)\n\*\*File\*\*: (.+?)\n\*\*Action\*\*: (.+?)\n\*\*Description\*\*: (.+?)\n\*\*Modification Points\*\*:\n((?:- .+?\n)*)\*\*Implementation\*\*:\n((?:\d+\. .+?\n)+)\*\*Reference\*\*:\n((?:- .+?\n)+)\*\*Acceptance\*\*:\n((?:- .+?\n)+)\*\*Depends On\*\*: (.+)/g
  // Split by task headers
  const taskBlocks = cliOutput.split(/### (T\d+):/).slice(1)

  for (let i = 0; i < taskBlocks.length; i += 2) {
    const taskId = taskBlocks[i].trim()
    const taskText = taskBlocks[i + 1]

    // Extract base fields
    const titleMatch = /^(.+?)(?=\n)/.exec(taskText)
    const scopeMatch = /\*\*Scope\*\*: (.+?)(?=\n)/.exec(taskText)
    const actionMatch = /\*\*Action\*\*: (.+?)(?=\n)/.exec(taskText)
    const descMatch = /\*\*Description\*\*: (.+?)(?=\n)/.exec(taskText)
    const depsMatch = /\*\*Depends On\*\*: (.+?)(?=\n|$)/.exec(taskText)

  let match
  while ((match = taskPattern.exec(cliOutput)) !== null) {
    // Parse modification points
    const modPoints = match[6].trim().split('\n').filter(s => s.startsWith('-')).map(s => {
      const m = /- \[(.+?)\]: \[(.+?)\] - (.+)/.exec(s)
      return m ? { file: m[1], target: m[2], change: m[3] } : null
    }).filter(Boolean)

    // Parse reference
    const refText = match[8].trim()
    const reference = {
      pattern: (/- Pattern: (.+)/m.exec(refText) || [])[1]?.trim() || "No pattern",
      files: ((/- Files: (.+)/m.exec(refText) || [])[1] || "").split(',').map(f => f.trim()).filter(Boolean),
      examples: (/- Examples: (.+)/m.exec(refText) || [])[1]?.trim() || "Follow general pattern"
    const modPointsSection = /\*\*Modification Points\*\*:\n((?:- .+?\n)*)/.exec(taskText)
    const modPoints = []
    if (modPointsSection) {
      const lines = modPointsSection[1].split('\n').filter(s => s.trim().startsWith('-'))
      lines.forEach(line => {
        const m = /- \[(.+?)\]: \[(.+?)\] - (.+)/.exec(line)
        if (m) modPoints.push({ file: m[1].trim(), target: m[2].trim(), change: m[3].trim() })
      })
    }

    // Parse depends_on
    const depsText = match[10].trim()
    const depends_on = depsText === '[]' ? [] : depsText.replace(/[\[\]]/g, '').split(',').map(s => s.trim()).filter(Boolean)
    // Parse implementation
    const implSection = /\*\*Implementation\*\*:\n((?:\d+\. .+?\n)+)/.exec(taskText)
    const implementation = implSection
      ? implSection[1].split('\n').map(s => s.replace(/^\d+\. /, '').trim()).filter(Boolean)
      : []

    tasks.push({
      id: match[1].trim(),
      title: match[2].trim(),
      file: match[3].trim(),
      action: match[4].trim(),
      description: match[5].trim(),
    // Parse reference
    const refSection = /\*\*Reference\*\*:\n((?:- .+?\n)+)/.exec(taskText)
    const reference = refSection ? {
      pattern: (/- Pattern: (.+)/m.exec(refSection[1]) || [])[1]?.trim() || "No pattern",
      files: ((/- Files: (.+)/m.exec(refSection[1]) || [])[1] || "").split(',').map(f => f.trim()).filter(Boolean),
      examples: (/- Examples: (.+)/m.exec(refSection[1]) || [])[1]?.trim() || "Follow pattern"
    } : {}

    // Parse acceptance
    const acceptSection = /\*\*Acceptance\*\*:\n((?:- .+?\n)+)/.exec(taskText)
    const acceptance = acceptSection
      ? acceptSection[1].split('\n').map(s => s.replace(/^- /, '').trim()).filter(Boolean)
      : []

    const task = {
      id: taskId,
      title: titleMatch?.[1].trim() || "Untitled",
      scope: scopeMatch?.[1].trim() || "",
      action: actionMatch?.[1].trim() || "Implement",
      description: descMatch?.[1].trim() || "",
      modification_points: modPoints,
      implementation: match[7].trim().split('\n').map(s => s.replace(/^\d+\. /, '')).filter(Boolean),
      implementation,
      reference,
      acceptance: match[9].trim().split('\n').map(s => s.replace(/^- /, '')).filter(Boolean),
      depends_on
    })
      acceptance,
      depends_on: depsMatch?.[1] === '[]' ? [] : (depsMatch?.[1] || "").replace(/[\[\]]/g, '').split(',').map(s => s.trim()).filter(Boolean)
    }

    // Add complexity-specific fields
    if (complexity === "Medium" || complexity === "High") {
      task.rationale = extractRationale(taskText)
      task.verification = extractVerification(taskText)
    }

    if (complexity === "High") {
      task.risks = extractRisks(taskText)
      task.code_skeleton = extractCodeSkeleton(taskText)
    }

    tasks.push(task)
  }

  return tasks
}

@@ -186,14 +268,155 @@ function extractFlowControl(cliOutput) {
  }
}

// Parse rationale section for a task
function extractRationale(taskText) {
  const rationaleMatch = /\*\*Rationale\*\*:\n- Chosen Approach: (.+?)\n- Alternatives Considered: (.+?)\n- Decision Factors: (.+?)\n- Tradeoffs: (.+)/s.exec(taskText)
  if (!rationaleMatch) return null

  return {
    chosen_approach: rationaleMatch[1].trim(),
    alternatives_considered: rationaleMatch[2].split(',').map(s => s.trim()).filter(Boolean),
    decision_factors: rationaleMatch[3].split(',').map(s => s.trim()).filter(Boolean),
    tradeoffs: rationaleMatch[4].trim()
  }
}

// Parse verification section for a task
function extractVerification(taskText) {
  const verificationMatch = /\*\*Verification\*\*:\n- Unit Tests: (.+?)\n- Integration Tests: (.+?)\n- Manual Checks: (.+?)\n- Success Metrics: (.+)/s.exec(taskText)
  if (!verificationMatch) return null

  return {
    unit_tests: verificationMatch[1].split(',').map(s => s.trim()).filter(Boolean),
    integration_tests: verificationMatch[2].split(',').map(s => s.trim()).filter(Boolean),
    manual_checks: verificationMatch[3].split(',').map(s => s.trim()).filter(Boolean),
    success_metrics: verificationMatch[4].split(',').map(s => s.trim()).filter(Boolean)
  }
}

// Parse risks section for a task
function extractRisks(taskText) {
  const risksPattern = /- Risk: (.+?) \| Probability: ([LMH]) \| Impact: ([LMH]) \| Mitigation: (.+?)(?: \| Fallback: (.+?))?(?=\n|$)/g
  const risks = []
  let match

  while ((match = risksPattern.exec(taskText)) !== null) {
    risks.push({
      description: match[1].trim(),
      probability: match[2] === 'L' ? 'Low' : match[2] === 'M' ? 'Medium' : 'High',
      impact: match[3] === 'L' ? 'Low' : match[3] === 'M' ? 'Medium' : 'High',
      mitigation: match[4].trim(),
      fallback: match[5]?.trim() || undefined
    })
  }

  return risks.length > 0 ? risks : null
}

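The risk-line grammar above can be exercised in isolation. A minimal sketch (the sample risk line and its values are invented for illustration):

```javascript
// Standalone check of the risk-line grammar used by extractRisks.
// The sample line below is hypothetical.
const risksPattern = /- Risk: (.+?) \| Probability: ([LMH]) \| Impact: ([LMH]) \| Mitigation: (.+?)(?: \| Fallback: (.+?))?(?=\n|$)/g;
const sample = "- Risk: API breakage | Probability: M | Impact: H | Mitigation: contract tests | Fallback: pin version\n";
const expand = c => (c === 'L' ? 'Low' : c === 'M' ? 'Medium' : 'High');
const risks = [];
let m;
while ((m = risksPattern.exec(sample)) !== null) {
  risks.push({
    description: m[1].trim(),
    probability: expand(m[2]),   // single letter expanded to full word
    impact: expand(m[3]),
    mitigation: m[4].trim(),
    fallback: m[5]?.trim()       // optional segment may be absent
  });
}
console.log(risks[0].probability); // "Medium"
console.log(risks[0].fallback);    // "pin version"
```

Because the `Fallback` group is optional and lazy, a line without it still matches and yields `fallback: undefined`.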
// Parse code skeleton section for a task
function extractCodeSkeleton(taskText) {
  const skeletonSection = /\*\*Code Skeleton\*\*:\n([\s\S]*?)(?=\n\*\*|$)/.exec(taskText)
  if (!skeletonSection) return null

  const text = skeletonSection[1]
  const skeleton = {}

  // Parse interfaces
  const interfacesPattern = /- Interfaces: (.+?): (.+?) - (.+?)(?=\n|$)/g
  const interfaces = []
  let match
  while ((match = interfacesPattern.exec(text)) !== null) {
    interfaces.push({ name: match[1].trim(), definition: match[2].trim(), purpose: match[3].trim() })
  }
  if (interfaces.length > 0) skeleton.interfaces = interfaces

  // Parse functions
  const functionsPattern = /- Functions: (.+?) - (.+?) - returns (.+?)(?=\n|$)/g
  const functions = []
  while ((match = functionsPattern.exec(text)) !== null) {
    functions.push({ signature: match[1].trim(), purpose: match[2].trim(), returns: match[3].trim() })
  }
  if (functions.length > 0) skeleton.key_functions = functions

  // Parse classes
  const classesPattern = /- Classes: (.+?) - (.+?) - methods: (.+?)(?=\n|$)/g
  const classes = []
  while ((match = classesPattern.exec(text)) !== null) {
    classes.push({
      name: match[1].trim(),
      purpose: match[2].trim(),
      methods: match[3].split(',').map(s => s.trim()).filter(Boolean)
    })
  }
  if (classes.length > 0) skeleton.classes = classes

  return Object.keys(skeleton).length > 0 ? skeleton : null
}

// Parse data flow section
function extractDataFlow(cliOutput) {
  const dataFlowSection = /## Data Flow.*?\n([\s\S]*?)(?=\n## |$)/.exec(cliOutput)
  if (!dataFlowSection) return null

  const text = dataFlowSection[1]
  const diagramMatch = /\*\*Diagram\*\*: (.+?)(?=\n|$)/.exec(text)
  const depsMatch = /\*\*Dependencies\*\*: (.+?)(?=\n|$)/.exec(text)

  // Parse stages
  const stagesPattern = /- Stage (.+?): Input=(.+?) → Output=(.+?) \| Component=(.+?)(?: \| Transforms=(.+?))?(?=\n|$)/g
  const stages = []
  let match
  while ((match = stagesPattern.exec(text)) !== null) {
    stages.push({
      stage: match[1].trim(),
      input: match[2].trim(),
      output: match[3].trim(),
      component: match[4].trim(),
      transformations: match[5] ? match[5].split(',').map(s => s.trim()).filter(Boolean) : undefined
    })
  }

  return {
    diagram: diagramMatch?.[1].trim() || null,
    stages: stages.length > 0 ? stages : undefined,
    dependencies: depsMatch ? depsMatch[1].split(',').map(s => s.trim()).filter(Boolean) : undefined
  }
}

// Parse design decisions section
function extractDesignDecisions(cliOutput) {
  const decisionsSection = /## Design Decisions.*?\n([\s\S]*?)(?=\n## |$)/.exec(cliOutput)
  if (!decisionsSection) return null

  const decisionsPattern = /- Decision: (.+?) \| Rationale: (.+?)(?: \| Tradeoff: (.+?))?(?=\n|$)/g
  const decisions = []
  let match

  while ((match = decisionsPattern.exec(decisionsSection[1])) !== null) {
    decisions.push({
      decision: match[1].trim(),
      rationale: match[2].trim(),
      tradeoff: match[3]?.trim() || undefined
    })
  }

  return decisions.length > 0 ? decisions : null
}

// Parse all sections
function parseCLIOutput(cliOutput) {
  const complexity = (extractSection(cliOutput, "Complexity") || "Medium").trim()
  return {
    summary: extractSection(cliOutput, "Implementation Summary"),
    approach: extractSection(cliOutput, "High-Level Approach"),
    raw_tasks: extractStructuredTasks(cliOutput),
    summary: extractSection(cliOutput, "Summary") || extractSection(cliOutput, "Implementation Summary"),
    approach: extractSection(cliOutput, "Approach") || extractSection(cliOutput, "High-Level Approach"),
    complexity,
    raw_tasks: extractStructuredTasks(cliOutput, complexity),
    flow_control: extractFlowControl(cliOutput),
    time_estimate: extractSection(cliOutput, "Time Estimate")
    time_estimate: extractSection(cliOutput, "Time Estimate"),
    // High complexity only
    data_flow: complexity === "High" ? extractDataFlow(cliOutput) : null,
    // Medium/High complexity
    design_decisions: (complexity === "Medium" || complexity === "High") ? extractDesignDecisions(cliOutput) : null
  }
}
```
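The complexity gating added to parseCLIOutput can be sketched on its own — a hedged illustration with stub extractors (the stub return values are hypothetical; the real extractors parse the CLI markdown):

```javascript
// Standalone sketch of the complexity gating: Medium unlocks
// design_decisions, High additionally unlocks data_flow.
function gateByComplexity(complexity, extractors) {
  const mediumOrHigher = complexity === "Medium" || complexity === "High";
  return {
    complexity,
    design_decisions: mediumOrHigher ? extractors.designDecisions() : null,
    data_flow: complexity === "High" ? extractors.dataFlow() : null
  };
}

// Stubs stand in for extractDesignDecisions/extractDataFlow.
const stubs = {
  designDecisions: () => [{ decision: "stub" }],
  dataFlow: () => ({ diagram: "A → B" })
};

console.log(gateByComplexity("Medium", stubs).data_flow);        // null
console.log(gateByComplexity("High", stubs).data_flow.diagram);  // "A → B"
```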
@@ -326,7 +549,8 @@ function inferFlowControl(tasks) {

```javascript
function generatePlanObject(parsed, enrichedContext, input, schemaType) {
  const tasks = validateAndEnhanceTasks(parsed.raw_tasks, enrichedContext)
  const complexity = parsed.complexity || input.complexity || "Medium"
  const tasks = validateAndEnhanceTasks(parsed.raw_tasks, enrichedContext, complexity)
  assignCliExecutionIds(tasks, input.session.id) // MANDATORY: Assign CLI execution IDs
  const flow_control = parsed.flow_control?.execution_order?.length > 0 ? parsed.flow_control : inferFlowControl(tasks)
  const focus_paths = [...new Set(tasks.flatMap(t => [t.file || t.scope, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
@@ -338,7 +562,7 @@ function generatePlanObject(parsed, enrichedContext, input, schemaType) {
    flow_control,
    focus_paths,
    estimated_time: parsed.time_estimate || `${tasks.length * 30} minutes`,
    recommended_execution: (input.complexity === "Low" || input.severity === "Low") ? "Agent" : "Codex",
    recommended_execution: (complexity === "Low" || input.severity === "Low") ? "Agent" : "Codex",
    _metadata: {
      timestamp: new Date().toISOString(),
      source: "cli-lite-planning-agent",
@@ -348,6 +572,15 @@
    }
  }

  // Add complexity-specific top-level fields
  if (complexity === "Medium" || complexity === "High") {
    base.design_decisions = parsed.design_decisions || []
  }

  if (complexity === "High") {
    base.data_flow = parsed.data_flow || null
  }

  // Schema-specific fields
  if (schemaType === 'fix-plan') {
    return {
@@ -361,10 +594,63 @@
  return {
    ...base,
    approach: parsed.approach || "Step-by-step implementation",
    complexity: input.complexity || "Medium"
    complexity
  }
}
}

// Enhanced task validation with complexity-specific fields
function validateAndEnhanceTasks(rawTasks, enrichedContext, complexity) {
  return rawTasks.map((task, idx) => {
    const enhanced = {
      id: task.id || `T${idx + 1}`,
      title: task.title || "Unnamed task",
      scope: task.scope || task.file || inferFile(task, enrichedContext),
      action: task.action || inferAction(task.title),
      description: task.description || task.title,
      modification_points: task.modification_points?.length > 0
        ? task.modification_points
        : [{ file: task.scope || task.file, target: "main", change: task.description }],
      implementation: task.implementation?.length >= 2
        ? task.implementation
        : [`Analyze ${task.scope || task.file}`, `Implement ${task.title}`, `Add error handling`],
      reference: task.reference || { pattern: "existing patterns", files: enrichedContext.relevant_files.slice(0, 2), examples: "Follow existing structure" },
      acceptance: task.acceptance?.length >= 1
        ? task.acceptance
        : [`${task.title} completed`, `Follows conventions`],
      depends_on: task.depends_on || []
    }

    // Add Medium/High complexity fields
    if (complexity === "Medium" || complexity === "High") {
      enhanced.rationale = task.rationale || {
        chosen_approach: "Standard implementation approach",
        alternatives_considered: [],
        decision_factors: ["Maintainability", "Performance"],
        tradeoffs: "None significant"
      }
      enhanced.verification = task.verification || {
        unit_tests: [`test_${task.id.toLowerCase()}_basic`],
        integration_tests: [],
        manual_checks: ["Verify expected behavior"],
        success_metrics: ["All tests pass"]
      }
    }

    // Add High complexity fields
    if (complexity === "High") {
      enhanced.risks = task.risks || [{
        description: "Implementation complexity",
        probability: "Low",
        impact: "Medium",
        mitigation: "Incremental development with checkpoints"
      }]
      enhanced.code_skeleton = task.code_skeleton || null
    }

    return enhanced
  })
}
```

### Error Handling
@@ -428,6 +714,7 @@ function validateTask(task) {
## Key Reminders

**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Read schema first** to determine output structure
- Generate task IDs (T1/T2 for plan, FIX1/FIX2 for fix-plan)
- Include depends_on (even if empty [])

@@ -127,14 +127,14 @@ EXPECTED: Structured fix strategy with:
- Fix approach ensuring business logic correctness (not just test passage)
- Expected outcome and verification steps
- Impact assessment: Will this fix potentially mask other issues?
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
CONSTRAINTS:
- For {test_type} tests: {layer_specific_guidance}
- Avoid 'surgical fixes' that mask underlying issues
- Provide specific line numbers for modifications
- Consider previous iteration failures
- Validate fix doesn't introduce new vulnerabilities
- analysis=READ-ONLY
" --tool {cli_tool} --mode analysis --cd {project_root} --timeout {timeout_value}
" --tool {cli_tool} --mode analysis --rule {template} --cd {project_root} --timeout {timeout_value}
```

**Layer-Specific Guidance Injection**:
@@ -436,6 +436,7 @@ See: `.process/iteration-{iteration}-cli-output.txt`
## Key Reminders

**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Validate context package**: Ensure all required fields present before CLI execution
- **Handle CLI errors gracefully**: Use fallback chain (Gemini → Qwen → degraded mode)
- **Parse CLI output structurally**: Extract specific sections (RCA, 修复建议 "fix recommendations", 验证建议 "verification recommendations")

@@ -385,10 +385,15 @@ Before completing any task, verify:
- Make assumptions - verify with existing code
- Create unnecessary complexity

**Bash Tool**:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**Bash Tool (CLI Execution in Agent)**:
- Use `run_in_background=false` for all Bash/CLI calls - agent cannot receive task hook callbacks
- Set timeout ≥60 minutes for CLI commands (hooks don't propagate to subagents):
  ```javascript
  Bash(command="ccw cli -p '...' --tool codex --mode write", timeout=3600000) // 60 min
  ```

**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Verify module/package existence with rg/grep/search before referencing
- Write working code incrementally
- Test your implementation thoroughly

@@ -27,6 +27,8 @@ You are a conceptual planning specialist focused on **dedicated single-role** st

## Core Responsibilities

**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)

1. **Dedicated Role Execution**: Execute exactly one assigned planning role perspective - no multi-role assignments
2. **Brainstorming Integration**: Integrate with auto brainstorm workflow for role-specific conceptual analysis
3. **Template-Driven Analysis**: Use planning role templates loaded via `$(cat template)`
@@ -306,3 +308,14 @@ When analysis is complete, ensure:
- **Relevance**: Directly addresses user's specified requirements
- **Actionability**: Provides concrete next steps and recommendations

## Output Size Limits

**Per-role limits** (prevent context overflow):
- `analysis.md`: < 3000 words
- `analysis-*.md`: < 2000 words each (max 5 sub-documents)
- Total: < 15000 words per role

**Strategies**: Be concise, use bullet points, reference don't repeat, prioritize top 3-5 items, defer details

**If exceeded**: Split essential vs nice-to-have, move extras to `analysis-appendix.md` (counts toward limit), use executive summary style

@@ -565,6 +565,7 @@ Output: .workflow/session/{session}/.process/context-package.json
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution

**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Initialize CodexLens in Phase 0
- Execute get_modules_by_depth.sh
- Load CLAUDE.md/README.md (unless in memory)

@@ -10,6 +10,8 @@ You are an intelligent debugging specialist that autonomously diagnoses bugs thr

## Tool Selection Hierarchy

**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)

1. **Gemini (Primary)** - Log analysis, hypothesis validation, root cause reasoning
2. **Qwen (Fallback)** - Same capabilities as Gemini; use when Gemini is unavailable
3. **Codex (Alternative)** - Fix implementation, code modification
@@ -103,7 +105,7 @@ TASK: • Analyze error pattern • Identify potential root causes • Suggest t
MODE: analysis
CONTEXT: @{affected_files}
EXPECTED: Structured hypothesis list with priority ranking
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt) | Focus on testable conditions
CONSTRAINTS: Focus on testable conditions
" --tool gemini --mode analysis --cd {project_root}
```

@@ -211,7 +213,7 @@ EXPECTED:
- Evidence summary
- Root cause identification (if confirmed)
- Next steps (if inconclusive)
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt) | Evidence-based reasoning only
CONSTRAINTS: Evidence-based reasoning only
" --tool gemini --mode analysis
```

@@ -269,7 +271,7 @@ TASK:
MODE: write
CONTEXT: @{affected_files}
EXPECTED: Working fix that addresses root cause
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/development/02-implement-feature.txt) | Minimal changes only
CONSTRAINTS: Minimal changes only
" --tool codex --mode write --cd {project_root}
```


@@ -70,8 +70,8 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
CONTEXT: @**/* ./src/modules/auth|code|code:5|dirs:2
./src/modules/api|code|code:3|dirs:0
EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
" --tool gemini --mode write --cd src/modules
CONSTRAINTS: Mirror source structure
" --tool gemini --mode write --rule documentation-module --cd src/modules
```

4. **CLI Execution** (Gemini CLI):
@@ -216,7 +216,7 @@ Before completion, verify:
{
  "step": "analyze_module_structure",
  "action": "Deep analysis of module structure and API",
  "command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\" --tool gemini --mode analysis --cd src/auth",
  "command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nCONSTRAINTS: Mirror source structure\" --tool gemini --mode analysis --rule documentation-module --cd src/auth",
  "output_to": "module_analysis",
  "on_error": "fail"
}
@@ -311,6 +311,7 @@ Before completing the task, you must verify the following:
## Key Reminders

**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Detect Mode**: Check `meta.cli_execute` to determine execution mode (Agent or CLI).
- **Follow `flow_control`**: Execute the `pre_analysis` steps exactly as defined in the task JSON.
- **Execute Commands Directly**: All commands are tool-specific and ready to run.

@@ -16,7 +16,7 @@ color: green
- 5-phase task lifecycle (analyze → implement → test → optimize → commit)
- Conflict-aware planning (isolate file modifications across issues)
- Dependency DAG validation
- Auto-bind for single solution, return for selection on multiple
- Execute bind command for single solution, return for selection on multiple

**Key Principle**: Generate tasks conforming to schema with quantified acceptance criteria.

@@ -56,14 +56,61 @@ Phase 4: Validation & Output (15%)
ccw issue status <issue-id> --json
```

**Step 2**: Analyze and classify
**Step 2**: Analyze failure history (if present)
```javascript
function analyzeFailureHistory(issue) {
  if (!issue.feedback || issue.feedback.length === 0) {
    return { has_failures: false };
  }

  // Extract execution failures
  const failures = issue.feedback.filter(f => f.type === 'failure' && f.stage === 'execute');

  if (failures.length === 0) {
    return { has_failures: false };
  }

  // Parse failure details
  const failureAnalysis = failures.map(f => {
    const detail = JSON.parse(f.content);
    return {
      solution_id: detail.solution_id,
      task_id: detail.task_id,
      error_type: detail.error_type, // test_failure, compilation, timeout, etc.
      message: detail.message,
      stack_trace: detail.stack_trace,
      timestamp: f.created_at
    };
  });

  // Identify patterns
  const errorTypes = failureAnalysis.map(f => f.error_type);
  const repeatedErrors = errorTypes.filter((e, i, arr) => arr.indexOf(e) !== i);

  return {
    has_failures: true,
    failure_count: failures.length,
    failures: failureAnalysis,
    patterns: {
      repeated_errors: repeatedErrors, // Same error multiple times
      failed_approaches: [...new Set(failureAnalysis.map(f => f.solution_id))]
    }
  };
}
```

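A condensed, self-contained rendition of the core pattern detection can be run against a hypothetical feedback payload (all IDs and messages below are invented; de-duplication of repeated errors is added here for clarity):

```javascript
// Condensed sketch of analyzeFailureHistory's pattern detection.
function summarizeFailures(feedback) {
  const failures = feedback
    .filter(f => f.type === 'failure' && f.stage === 'execute')
    .map(f => JSON.parse(f.content)); // each content is a JSON failure detail
  const types = failures.map(f => f.error_type);
  return {
    has_failures: failures.length > 0,
    failure_count: failures.length,
    // an error type is "repeated" if it occurs more than once
    repeated_errors: [...new Set(types.filter((e, i, arr) => arr.indexOf(e) !== i))],
    failed_approaches: [...new Set(failures.map(f => f.solution_id))]
  };
}

// Hypothetical payload: two execute-stage failures with the same error_type.
const feedback = [
  { type: 'failure', stage: 'execute', content: JSON.stringify({ solution_id: 'S1', error_type: 'test_failure' }) },
  { type: 'failure', stage: 'execute', content: JSON.stringify({ solution_id: 'S2', error_type: 'test_failure' }) }
];
const summary = summarizeFailures(feedback);
console.log(summary.failure_count);   // 2
console.log(summary.repeated_errors); // [ 'test_failure' ]
```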
**Step 3**: Analyze and classify
```javascript
function analyzeIssue(issue) {
  const failureAnalysis = analyzeFailureHistory(issue);

  return {
    issue_id: issue.id,
    requirements: extractRequirements(issue.context),
    scope: inferScope(issue.title, issue.context),
    complexity: determineComplexity(issue) // Low | Medium | High
    complexity: determineComplexity(issue), // Low | Medium | High
    failure_analysis: failureAnalysis, // Failure context for planning
    is_replan: failureAnalysis.has_failures // Flag for replanning
  }
}
```
@@ -104,6 +151,41 @@ mcp__ace-tool__search_context({
|
||||
|
||||
#### Phase 3: Solution Planning
|
||||
|
||||
**Failure-Aware Planning** (when `issue.failure_analysis.has_failures === true`):
|
||||
|
||||
```javascript
|
||||
function planWithFailureContext(issue, exploration, failureAnalysis) {
|
||||
// Identify what failed before
|
||||
const failedApproaches = failureAnalysis.patterns.failed_approaches;
|
||||
const rootCauses = failureAnalysis.failures.map(f => ({
|
||||
error: f.error_type,
|
||||
message: f.message,
|
||||
task: f.task_id
|
||||
}));
|
||||
|
||||
// Design alternative approach
|
||||
const approach = `
|
||||
**Previous Attempt Analysis**:
|
||||
- Failed approaches: ${failedApproaches.join(', ')}
|
||||
- Root causes: ${rootCauses.map(r => `${r.error} (${r.task}): ${r.message}`).join('; ')}
|
||||
|
||||
**Alternative Strategy**:
|
||||
- [Describe how this solution addresses root causes]
|
||||
- [Explain what's different from failed approaches]
|
||||
- [Prevention steps to catch same errors earlier]
|
||||
`;
|
||||
|
||||
// Add explicit verification tasks
|
||||
const verificationTasks = rootCauses.map(rc => ({
|
||||
verification_type: rc.error,
|
||||
check: `Prevent ${rc.error}: ${rc.message}`,
|
||||
method: `Add unit test / compile check / timeout limit`
|
||||
}));
|
||||
|
||||
return { approach, verificationTasks };
|
||||
}
|
||||
```

**Multi-Solution Generation**:

Generate multiple candidate solutions when:
@@ -111,30 +193,30 @@ Generate multiple candidate solutions when:
- Multiple valid implementation approaches exist
- Trade-offs exist between approaches (performance vs. simplicity, etc.)

| Condition | Solutions | Binding Action |
|-----------|-----------|----------------|
| Low complexity, single approach | 1 solution | Execute bind |
| Medium complexity, clear path | 1-2 solutions | Execute bind if 1, return if 2+ |
| High complexity, multiple approaches | 2-3 solutions | Return for selection |

**Binding Decision** (based SOLELY on final `solutions.length`):
```javascript
// After generating all solutions
if (solutions.length === 1) {
  exec(`ccw issue bind ${issueId} ${solutions[0].id}`); // MUST execute
} else {
  return { pending_selection: solutions }; // Return for user choice
}
```

**Solution Evaluation** (for each candidate):
```javascript
{
  analysis: { risk: "low|medium|high", impact: "low|medium|high", complexity: "low|medium|high" },
  score: 0.0-1.0 // Higher = recommended
}
```

**Selection Flow**:
1. Generate all candidate solutions
2. Evaluate and score each
3. Single solution → auto-bind
4. Multiple solutions → return `pending_selection` for user choice
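The four steps above can be sketched end-to-end; `bindFn` stands in for the `ccw issue bind` side effect and is a hypothetical parameter, not part of the documented CLI:

```javascript
// Sketch of the selection flow; bindFn is a hypothetical stand-in for `ccw issue bind`.
function selectOrReturn(solutions, bindFn) {
  // Step 2: score each candidate (higher = recommended), best first
  const scored = solutions
    .map(s => ({ ...s, score: s.score ?? 0 }))
    .sort((a, b) => b.score - a.score);
  // Steps 3-4: a single solution auto-binds; multiple are returned for user choice
  if (scored.length === 1) {
    bindFn(scored[0].id);
    return { bound: [scored[0].id], pending_selection: [] };
  }
  return { bound: [], pending_selection: scored };
}
```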

**Task Decomposition** following schema:
```javascript
function decomposeTasks(issue, exploration) {
@@ -248,8 +330,8 @@ Write({ file_path: filePath, content: newContent })
```

**Step 2: Bind decision**
- 1 solution → Execute `ccw issue bind <issue-id> <solution-id>`
- 2+ solutions → Return `pending_selection` (no bind)

---

@@ -264,14 +346,7 @@ Write({ file_path: filePath, content: newContent })

Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
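For illustration only, one JSONL line might look like the following; every field except `id`, `analysis`, and `score` is an assumption here, and the schema file remains authoritative:

```json
{"id":"SOL-GH-123-a7x9","approach":"...","analysis":{"risk":"low","impact":"medium","complexity":"low"},"score":0.85,"tasks":[{"id":"T1","depends_on":[]}]}
```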

### 2.2 Return Summary

```json
{
@@ -308,16 +383,19 @@ Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cl
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution

**ALWAYS**:
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Read schema first: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
3. Use ACE semantic search as PRIMARY exploration tool
4. Fetch issue details via `ccw issue status <id> --json`
5. **Analyze failure history**: Check `issue.feedback` for type='failure', stage='execute'
6. **For replanning**: Reference previous failures in `solution.approach`, add prevention steps
7. Quantify acceptance.criteria with testable conditions
8. Validate DAG before output
9. Evaluate each solution with `analysis` and `score`
10. Write solutions to `.workflow/issues/solutions/{issue-id}.jsonl` (append mode)
11. For HIGH complexity: generate 2-3 candidate solutions
12. **Solution ID format**: `SOL-{issue-id}-{uid}` where uid is 4 random alphanumeric chars (e.g., `SOL-GH-123-a7x9`)
13. **GitHub Reply Task**: If the issue has `github_url` or `github_number`, add a final task to comment on the GitHub issue with a completion summary

**CONFLICT AVOIDANCE** (for batch processing of similar issues):
1. **File isolation**: Each issue's solution should target distinct files when possible
@@ -331,9 +409,9 @@ Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cl
2. Use vague criteria ("works correctly", "good performance")
3. Create circular dependencies
4. Generate more than 10 tasks per issue
5. Skip bind when `solutions.length === 1` (MUST execute the bind command)

**OUTPUT**:
1. Write solutions to `.workflow/issues/solutions/{issue-id}.jsonl`
2. Execute bind or return `pending_selection` based on solution count
3. Return JSON: `{ bound: [...], pending_selection: [...] }`
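The `SOL-{issue-id}-{uid}` ID rule above can be sketched as follows (the random source and lowercase alphabet are implementation choices, not mandated by the rule):

```javascript
// Sketch of the solution ID format: SOL-{issue-id}-{uid}, uid = 4 random alphanumerics.
function makeSolutionId(issueId) {
  const chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
  let uid = '';
  for (let i = 0; i < 4; i++) {
    uid += chars[Math.floor(Math.random() * chars.length)];
  }
  return `SOL-${issueId}-${uid}`;
}
```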

@@ -87,7 +87,7 @@ TASK: • Detect file conflicts (same file modified by multiple solutions)
MODE: analysis
CONTEXT: @.workflow/issues/solutions/**/*.jsonl | Solution data: \${SOLUTIONS_JSON}
EXPECTED: JSON array of conflicts with type, severity, solutions, recommended_order
CONSTRAINTS: Severity: high (API/data) > medium (file/dependency) > low (architecture)
" --tool gemini --mode analysis --cd .workflow/issues
```
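For illustration, the EXPECTED output might look like the following; the exact field values are assumptions consistent with the severity ordering above:

```json
[
  {
    "type": "file",
    "severity": "medium",
    "solutions": ["SOL-GH-123-a7x9", "SOL-GH-124-b2k1"],
    "recommended_order": ["SOL-GH-123-a7x9", "SOL-GH-124-b2k1"]
  }
]
```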

@@ -275,7 +275,8 @@ Return brief summaries; full conflict details in separate files:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution

**ALWAYS**:
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Build dependency graph before ordering
3. Detect file overlaps between solutions
4. Apply resolution rules consistently
5. Calculate semantic priority for all solutions

@@ -75,6 +75,8 @@ Examples:

## Execution Rules

**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)

1. **Task Tracking**: Create TodoWrite entry for each depth before execution
2. **Parallelism**: Max 4 jobs per depth, sequential across depths
3. **Strategy Assignment**: Assign strategy based on depth:

@@ -28,6 +28,8 @@ You are a test context discovery specialist focused on gathering test coverage i

## Tool Arsenal

**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)

### 1. Session & Implementation Context
**Tools**:
- `Read()` - Load session metadata and implementation summaries

@@ -332,6 +332,7 @@ When generating test results for orchestrator (saved to `.process/test-results.j
## Important Reminders

**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Execute tests first** - Understand what's failing before fixing
- **Diagnose thoroughly** - Find root cause, not just symptoms
- **Fix minimally** - Change only what's needed to pass tests

@@ -284,6 +284,8 @@ You execute 6 distinct task types organized into 3 patterns. Each task includes

### ALWAYS

**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)

**W3C Format Compliance**: ✅ Include $schema in all token files | ✅ Use $type metadata for all tokens | ✅ Use $value wrapper for color (light/dark), duration, easing | ✅ Validate token structure against W3C spec

**Pattern Recognition**: ✅ Identify pattern from [TASK_TYPE_IDENTIFIER] first | ✅ Apply pattern-specific execution rules | ✅ Follow autonomy level

@@ -124,6 +124,7 @@ Before completing any task, verify:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution

**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Verify resource/dependency existence before referencing
- Execute tasks systematically and incrementally
- Test and validate work thoroughly

361
.claude/commands/cli/codex-review.md
Normal file
@@ -0,0 +1,361 @@
---
name: codex-review
description: Interactive code review using Codex CLI via ccw endpoint with configurable review target, model, and custom instructions
argument-hint: "[--uncommitted|--base <branch>|--commit <sha>] [--model <model>] [--title <title>] [prompt]"
allowed-tools: Bash(*), AskUserQuestion(*), Read(*)
---

# Codex Review Command (/cli:codex-review)

## Overview
Interactive code review command that invokes `codex review` via the ccw cli endpoint with guided parameter selection.

**Codex Review Parameters** (from `codex review --help`):
| Parameter | Description |
|-----------|-------------|
| `[PROMPT]` | Custom review instructions (positional) |
| `-c model=<model>` | Override model via config |
| `--uncommitted` | Review staged, unstaged, and untracked changes |
| `--base <BRANCH>` | Review changes against a base branch |
| `--commit <SHA>` | Review changes introduced by a commit |
| `--title <TITLE>` | Optional commit title for review summary |

## Prompt Template Format

Follow the standard ccw cli prompt template:

```
PURPOSE: [what] + [why] + [success criteria] + [constraints/scope]
TASK: • [step 1] • [step 2] • [step 3]
MODE: review
CONTEXT: [review target description] | Memory: [relevant context]
EXPECTED: [deliverable format] + [quality criteria]
CONSTRAINTS: [focus constraints]
```

## EXECUTION INSTRUCTIONS - START HERE

**When this command is triggered, follow these exact steps:**

### Step 1: Parse Arguments

Check if the user provided arguments directly:
- `--uncommitted` → Record target = uncommitted
- `--base <branch>` → Record target = base, branch name
- `--commit <sha>` → Record target = commit, sha value
- `--model <model>` → Record model selection
- `--title <title>` → Record title
- Remaining text → Use as custom focus/prompt

If no target is specified → Continue to Step 2 for interactive selection.
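The parsing rules above can be sketched as follows; `parseReviewArgs` is a hypothetical helper, not part of the command definition:

```javascript
// Sketch of Step 1 argument parsing; flag names match the codex review CLI.
function parseReviewArgs(argv) {
  const result = { target: null, branch: null, sha: null, model: null, title: null, prompt: '' };
  const rest = [];
  for (let i = 0; i < argv.length; i++) {
    const a = argv[i];
    if (a === '--uncommitted') result.target = 'uncommitted';
    else if (a === '--base') { result.target = 'base'; result.branch = argv[++i]; }
    else if (a === '--commit') { result.target = 'commit'; result.sha = argv[++i]; }
    else if (a === '--model') result.model = argv[++i];
    else if (a === '--title') result.title = argv[++i];
    else rest.push(a); // remaining text becomes the custom focus/prompt
  }
  result.prompt = rest.join(' ');
  return result; // target === null → fall through to interactive selection
}
```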

### Step 2: Interactive Parameter Selection

**2.1 Review Target Selection**

```javascript
AskUserQuestion({
  questions: [{
    question: "What do you want to review?",
    header: "Review Target",
    options: [
      { label: "Uncommitted changes (Recommended)", description: "Review staged, unstaged, and untracked changes" },
      { label: "Compare to branch", description: "Review changes against a base branch (e.g., main)" },
      { label: "Specific commit", description: "Review changes introduced by a specific commit" }
    ],
    multiSelect: false
  }]
})
```

**2.2 Branch/Commit Input (if needed)**

If "Compare to branch" selected:
```javascript
AskUserQuestion({
  questions: [{
    question: "Which base branch to compare against?",
    header: "Base Branch",
    options: [
      { label: "main", description: "Compare against main branch" },
      { label: "master", description: "Compare against master branch" },
      { label: "develop", description: "Compare against develop branch" }
    ],
    multiSelect: false
  }]
})
```

If "Specific commit" selected:
- Run `git log --oneline -10` to show recent commits
- Ask the user to provide a commit SHA or select from the list
**2.3 Model Selection (Optional)**

```javascript
AskUserQuestion({
  questions: [{
    question: "Which model to use for review?",
    header: "Model",
    options: [
      { label: "Default", description: "Use codex default model (gpt-5.2)" },
      { label: "o3", description: "OpenAI o3 reasoning model" },
      { label: "gpt-4.1", description: "GPT-4.1 model" },
      { label: "o4-mini", description: "OpenAI o4-mini (faster)" }
    ],
    multiSelect: false
  }]
})
```

**2.4 Review Focus Selection**

```javascript
AskUserQuestion({
  questions: [{
    question: "What should the review focus on?",
    header: "Focus Area",
    options: [
      { label: "General review (Recommended)", description: "Comprehensive review: correctness, style, bugs, docs" },
      { label: "Security focus", description: "Security vulnerabilities, input validation, auth issues" },
      { label: "Performance focus", description: "Performance bottlenecks, complexity, resource usage" },
      { label: "Code quality", description: "Readability, maintainability, SOLID principles" }
    ],
    multiSelect: false
  }]
})
```

### Step 3: Build Prompt and Command

**3.1 Construct Prompt Based on Focus**

**General Review Prompt:**
```
PURPOSE: Comprehensive code review to identify issues, improve quality, and ensure best practices; success = actionable feedback with clear priorities
TASK: • Review code correctness and logic errors • Check coding standards and consistency • Identify potential bugs and edge cases • Evaluate documentation completeness
MODE: review
CONTEXT: {target_description} | Memory: Project conventions from CLAUDE.md
EXPECTED: Structured review report with: severity levels (Critical/High/Medium/Low), file:line references, specific improvement suggestions, priority ranking
CONSTRAINTS: Focus on actionable feedback
```

**Security Focus Prompt:**
```
PURPOSE: Security-focused code review to identify vulnerabilities and security risks; success = all security issues documented with remediation
TASK: • Scan for injection vulnerabilities (SQL, XSS, command) • Check authentication and authorization logic • Evaluate input validation and sanitization • Identify sensitive data exposure risks
MODE: review
CONTEXT: {target_description} | Memory: Security best practices, OWASP Top 10
EXPECTED: Security report with: vulnerability classification, CVE references where applicable, remediation code snippets, risk severity matrix
CONSTRAINTS: Security-first analysis | Flag all potential vulnerabilities
```

**Performance Focus Prompt:**
```
PURPOSE: Performance-focused code review to identify bottlenecks and optimization opportunities; success = measurable improvement recommendations
TASK: • Analyze algorithmic complexity (Big-O) • Identify memory allocation issues • Check for N+1 queries and blocking operations • Evaluate caching opportunities
MODE: review
CONTEXT: {target_description} | Memory: Performance patterns and anti-patterns
EXPECTED: Performance report with: complexity analysis, bottleneck identification, optimization suggestions with expected impact, benchmark recommendations
CONSTRAINTS: Performance optimization focus
```

**Code Quality Focus Prompt:**
```
PURPOSE: Code quality review to improve maintainability and readability; success = cleaner, more maintainable code
TASK: • Assess SOLID principles adherence • Identify code duplication and abstraction opportunities • Review naming conventions and clarity • Evaluate test coverage implications
MODE: review
CONTEXT: {target_description} | Memory: Project coding standards
EXPECTED: Quality report with: principle violations, refactoring suggestions, naming improvements, maintainability score
CONSTRAINTS: Code quality and maintainability focus
```

**3.2 Build Target Description**

Based on the selection, set `{target_description}`:
- Uncommitted: `Reviewing uncommitted changes (staged + unstaged + untracked)`
- Base branch: `Reviewing changes against {branch} branch`
- Commit: `Reviewing changes introduced by commit {sha}`
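The mapping above can be sketched as a small helper (hypothetical, for illustration):

```javascript
// Sketch: build {target_description} from the selected review target.
function targetDescription(sel) {
  switch (sel.target) {
    case 'uncommitted':
      return 'Reviewing uncommitted changes (staged + unstaged + untracked)';
    case 'base':
      return `Reviewing changes against ${sel.branch} branch`;
    case 'commit':
      return `Reviewing changes introduced by commit ${sel.sha}`;
    default:
      throw new Error(`unknown review target: ${sel.target}`);
  }
}
```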

### Step 4: Execute via CCW CLI

Build and execute the ccw cli command:

```bash
# Base structure
ccw cli -p "<PROMPT>" --tool codex --mode review [OPTIONS]
```

**Command Construction:**

```bash
# Variables from user selection
TARGET_FLAG=""   # --uncommitted | --base <branch> | --commit <sha>
MODEL_FLAG=""    # --model <model> (if not default)
TITLE_FLAG=""    # --title "<title>" (if provided)

# Build target flag
if [ "$target" = "uncommitted" ]; then
  TARGET_FLAG="--uncommitted"
elif [ "$target" = "base" ]; then
  TARGET_FLAG="--base $branch"
elif [ "$target" = "commit" ]; then
  TARGET_FLAG="--commit $sha"
fi

# Build model flag (only if not default)
if [ "$model" != "default" ] && [ -n "$model" ]; then
  MODEL_FLAG="--model $model"
fi

# Build title flag (if provided)
if [ -n "$title" ]; then
  TITLE_FLAG="--title \"$title\""
fi

# Execute
ccw cli -p "$PROMPT" --tool codex --mode review $TARGET_FLAG $MODEL_FLAG $TITLE_FLAG
```

**Full Example Commands:**

**Option 1: With custom prompt (reviews uncommitted by default):**
```bash
ccw cli -p "
PURPOSE: Comprehensive code review to identify issues and improve quality; success = actionable feedback with priorities
TASK: • Review correctness and logic • Check standards compliance • Identify bugs and edge cases • Evaluate documentation
MODE: review
CONTEXT: Reviewing uncommitted changes | Memory: Project conventions
EXPECTED: Structured report with severity levels, file:line refs, improvement suggestions
CONSTRAINTS: Actionable feedback
" --tool codex --mode review --rule analysis-review-code-quality
```

**Option 2: Target flag only (no prompt allowed):**
```bash
ccw cli --tool codex --mode review --uncommitted
```

### Step 5: Execute and Display Results

```javascript
Bash({
  command: "ccw cli -p \"$PROMPT\" --tool codex --mode review $FLAGS",
  run_in_background: true
})
```

Wait for completion and display the formatted results.

## Quick Usage Examples

### Direct Execution (No Interaction)

```bash
# Review uncommitted changes with default settings
/cli:codex-review --uncommitted

# Review against main branch
/cli:codex-review --base main

# Review specific commit
/cli:codex-review --commit abc123

# Review with custom model
/cli:codex-review --uncommitted --model o3

# Review with security focus
/cli:codex-review --uncommitted security

# Full options
/cli:codex-review --base main --model o3 --title "Auth Feature" security
```

### Interactive Mode

```bash
# Start interactive selection (guided flow)
/cli:codex-review
```

## Focus Area Mapping

| User Selection | Prompt Focus | Key Checks |
|----------------|--------------|------------|
| General review | Comprehensive | Correctness, style, bugs, docs |
| Security focus | Security-first | Injection, auth, validation, exposure |
| Performance focus | Optimization | Complexity, memory, queries, caching |
| Code quality | Maintainability | SOLID, duplication, naming, tests |

## Error Handling

### No Changes to Review
```
No changes found for review target. Suggestions:
- For --uncommitted: Make some code changes first
- For --base: Ensure the branch exists and has diverged
- For --commit: Verify the commit SHA exists
```

### Invalid Branch
```bash
# Show available branches
git branch -a --list | head -20
```

### Invalid Commit
```bash
# Show recent commits
git log --oneline -10
```

## Integration Notes

- Uses `ccw cli --tool codex --mode review` endpoint
- Model passed via prompt (codex uses `-c model=` internally)
- Target flags (`--uncommitted`, `--base`, `--commit`) passed through to codex
- Prompt follows the standard ccw cli template format for consistency

## Validation Constraints

**IMPORTANT: Target flags and prompt are mutually exclusive**

The codex CLI has a constraint where target flags (`--uncommitted`, `--base`, `--commit`) cannot be used with a positional `[PROMPT]` argument:

```
error: the argument '--uncommitted' cannot be used with '[PROMPT]'
error: the argument '--base <BRANCH>' cannot be used with '[PROMPT]'
error: the argument '--commit <SHA>' cannot be used with '[PROMPT]'
```

**Behavior:**
- When ANY target flag is specified, ccw cli automatically skips template concatenation (systemRules/roles)
- The review uses codex's default review behavior for the specified target
- Custom prompts are only supported WITHOUT target flags (reviews uncommitted changes by default)

**Valid combinations:**
| Command | Result |
|---------|--------|
| `codex review "Focus on security"` | ✓ Custom prompt, reviews uncommitted (default) |
| `codex review --uncommitted` | ✓ No prompt, uses default review |
| `codex review --base main` | ✓ No prompt, uses default review |
| `codex review --commit abc123` | ✓ No prompt, uses default review |
| `codex review --uncommitted "prompt"` | ✗ Invalid - mutually exclusive |
| `codex review --base main "prompt"` | ✗ Invalid - mutually exclusive |
| `codex review --commit abc123 "prompt"` | ✗ Invalid - mutually exclusive |

**Examples:**
```bash
# ✓ Valid: prompt only (reviews uncommitted by default)
ccw cli -p "Focus on security" --tool codex --mode review

# ✓ Valid: target flag only (no prompt)
ccw cli --tool codex --mode review --uncommitted
ccw cli --tool codex --mode review --base main
ccw cli --tool codex --mode review --commit abc123

# ✗ Invalid: target flag with prompt (will fail)
ccw cli -p "Review this" --tool codex --mode review --uncommitted
ccw cli -p "Review this" --tool codex --mode review --base main
ccw cli -p "Review this" --tool codex --mode review --commit abc123
```
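The mutual-exclusivity rule can be checked before invoking codex; a minimal sketch with a hypothetical helper:

```javascript
// Sketch: reject invalid flag/prompt combinations before calling codex review.
function validateReviewInvocation({ prompt, targetFlag }) {
  if (prompt && targetFlag) {
    // Mirrors the codex CLI error shape shown above
    return { ok: false, error: `the argument '${targetFlag}' cannot be used with '[PROMPT]'` };
  }
  return { ok: true };
}
```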

@@ -267,7 +267,7 @@ EXPECTED: JSON exploration plan following exploration-plan-schema.json:
  "estimated_iterations": N,
  "termination_conditions": [...]
}
CONSTRAINTS: Use ACE context to inform targets | Focus on actionable plan
`;

// Step 3: Execute Gemini planning

@@ -1,7 +1,7 @@
---
name: execute
description: Execute queue with DAG-based parallel orchestration (one commit per solution)
argument-hint: "--queue <queue-id> [--worktree [<existing-path>]]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---

@@ -17,21 +17,64 @@ Minimal orchestrator that dispatches **solution IDs** to executors. Each executo
- `done <id>` → update solution completion status
- No race conditions: status changes only via `done`
- **Executor handles all tasks within a solution sequentially**
- **Single worktree for entire queue**: One worktree isolates ALL queue execution from the main workspace

## Queue ID Requirement (MANDATORY)

**Queue ID is REQUIRED.** You MUST specify which queue to execute via `--queue <queue-id>`.

### If Queue ID Not Provided

When the `--queue` parameter is missing, you MUST:

1. **List available queues** by running:
   ```javascript
   const result = Bash('ccw issue queue list --brief --json');
   const index = JSON.parse(result);
   ```

2. **Display available queues** to the user:
   ```
   Available Queues:
   ID                 Status     Progress   Issues
   -----------------------------------------------------------
   → QUE-20251215-001 active     3/10       ISS-001, ISS-002
     QUE-20251210-002 active     0/5        ISS-003
     QUE-20251205-003 completed  8/8        ISS-004
   ```

3. **Stop and ask the user** to specify which queue to execute:
   ```javascript
   AskUserQuestion({
     questions: [{
       question: "Which queue would you like to execute?",
       header: "Queue",
       multiSelect: false,
       options: index.queues
         .filter(q => q.status === 'active')
         .map(q => ({
           label: q.id,
           description: `${q.status}, ${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
         }))
     }]
   })
   ```

4. **After user selection**, continue execution with the selected queue ID.

**DO NOT auto-select queues.** Explicit user confirmation is required to prevent accidental execution of the wrong queue.

## Usage

```bash
/issue:execute --queue QUE-xxx                                        # Execute specific queue (REQUIRED)
/issue:execute --queue QUE-xxx --worktree                             # Execute in isolated worktree
/issue:execute --queue QUE-xxx --worktree /path/to/existing/worktree  # Resume in existing worktree
```

**Parallelism**: Determined automatically by the task dependency DAG (no manual control)
**Executor & Dry-run**: Selected via interactive prompt (AskUserQuestion)
**Worktree**: Creates ONE worktree for the entire queue execution (not per-solution)

**⭐ Recommended Executor**: **Codex** - Best for long-running autonomous work (2hr timeout), supports background execution and full write access
@@ -44,37 +87,101 @@ Minimal orchestrator that dispatches **solution IDs** to executors. Each executo
|
||||
## Execution Flow
|
||||
|
||||
```
|
||||
Phase 0 (if --worktree): Setup Worktree Base
|
||||
└─ Ensure .worktrees directory exists
|
||||
Phase 0: Validate Queue ID (REQUIRED)
|
||||
├─ If --queue provided → use specified queue
|
||||
├─ If --queue missing → list queues, prompt user to select
|
||||
└─ Store QUEUE_ID for all subsequent commands
|
||||
|
||||
Phase 0.5 (if --worktree): Setup Queue Worktree
|
||||
├─ Create ONE worktree for entire queue: .ccw/worktrees/queue-<timestamp>
|
||||
├─ All subsequent execution happens in this worktree
|
||||
└─ Main workspace remains clean and untouched
|
||||
|
||||
Phase 1: Get DAG & User Selection
|
||||
├─ ccw issue queue dag [--queue QUE-xxx] → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
|
||||
├─ ccw issue queue dag --queue ${QUEUE_ID} → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
|
||||
└─ AskUserQuestion → executor type (codex|gemini|agent), dry-run mode, worktree mode
|
||||
|
||||
Phase 2: Dispatch Parallel Batch (DAG-driven)
|
||||
├─ Parallelism determined by DAG (no manual limit)
|
||||
├─ All executors work in the SAME worktree (or main if no worktree)
|
||||
├─ For each solution ID in batch (parallel - all at once):
|
||||
│ ├─ (if worktree) Create isolated worktree: git worktree add
|
||||
│ ├─ Executor calls: ccw issue detail <id> (READ-ONLY)
|
||||
│ ├─ Executor gets FULL SOLUTION with all tasks
|
||||
│ ├─ Executor implements all tasks sequentially (T1 → T2 → T3)
|
||||
│ ├─ Executor tests + verifies each task
|
||||
│ ├─ Executor commits ONCE per solution (with formatted summary)
|
||||
│ ├─ Executor calls: ccw issue done <id>
|
||||
│ └─ (if worktree) Cleanup: merge branch, remove worktree
|
||||
│ └─ Executor calls: ccw issue done <id>
|
||||
└─ Wait for batch completion
|
||||
|
||||
Phase 3: Next Batch
|
||||
Phase 3: Next Batch (repeat Phase 2)
|
||||
└─ ccw issue queue dag → check for newly-ready solutions
|
||||
|
||||
Phase 4 (if --worktree): Worktree Completion
|
||||
├─ All batches complete → prompt for merge strategy
|
||||
└─ Options: Create PR / Merge to main / Keep branch
|
||||
```
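The Phase 1→3 loop above amounts to repeatedly asking the DAG for the next ready batch until nothing remains. The selection step can be isolated as a pure function; a sketch, assuming the `parallel_batches` shape shown above and a hypothetical `completed` set tracked by the orchestrator:

```javascript
// Given the DAG's parallel_batches and the set of already-completed
// solution IDs, return the next batch that still has work, or null when
// everything is done (→ Phase 4).
function nextBatch(dag, completed) {
  for (const batch of dag.parallel_batches) {
    const remaining = batch.filter(id => !completed.has(id));
    if (remaining.length > 0) return remaining;
  }
  return null;
}
```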

## Implementation

### Phase 0: Validate Queue ID

```javascript
// Check if --queue was provided
let QUEUE_ID = args.queue;

if (!QUEUE_ID) {
  // List available queues
  const listResult = Bash('ccw issue queue list --brief --json').trim();
  const index = JSON.parse(listResult);

  if (index.queues.length === 0) {
    console.log('No queues found. Use /issue:queue to create one first.');
    return;
  }

  // Filter active queues only
  const activeQueues = index.queues.filter(q => q.status === 'active');

  if (activeQueues.length === 0) {
    console.log('No active queues found.');
    console.log('Available queues:', index.queues.map(q => `${q.id} (${q.status})`).join(', '));
    return;
  }

  // Display and prompt user
  console.log('\nAvailable Queues:');
  console.log('ID'.padEnd(22) + 'Status'.padEnd(12) + 'Progress'.padEnd(12) + 'Issues');
  console.log('-'.repeat(70));
  for (const q of index.queues) {
    const marker = q.id === index.active_queue_id ? '→ ' : '  ';
    console.log(marker + q.id.padEnd(20) + q.status.padEnd(12) +
      `${q.completed_solutions || 0}/${q.total_solutions || 0}`.padEnd(12) +
      q.issue_ids.join(', '));
  }

  const answer = AskUserQuestion({
    questions: [{
      question: "Which queue would you like to execute?",
      header: "Queue",
      multiSelect: false,
      options: activeQueues.map(q => ({
        label: q.id,
        description: `${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
      }))
    }]
  });

  QUEUE_ID = answer['Queue'];
}

console.log(`\n## Executing Queue: ${QUEUE_ID}\n`);
```

### Phase 1: Get DAG & User Selection

```javascript
// Get dependency graph and parallel batches
const dagJson = Bash(`ccw issue queue dag`).trim();
// Get dependency graph and parallel batches (QUEUE_ID required)
const dagJson = Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim();
const dag = JSON.parse(dagJson);

if (dag.error || dag.ready_count === 0) {
@@ -115,12 +222,12 @@ const answer = AskUserQuestion({
  ]
},
{
  question: 'Use git worktrees for parallel isolation?',
  question: 'Use git worktree for queue isolation?',
  header: 'Worktree',
  multiSelect: false,
  options: [
    { label: 'Yes (Recommended for parallel)', description: 'Each executor works in isolated worktree branch' },
    { label: 'No', description: 'Work directly in current directory (serial only)' }
    { label: 'Yes (Recommended)', description: 'Create ONE worktree for entire queue - main stays clean' },
    { label: 'No', description: 'Work directly in current directory' }
  ]
}
]
@@ -140,7 +247,7 @@ if (isDryRun) {
}
```

### Phase 2: Dispatch Parallel Batch (DAG-driven)
### Phase 0 & 2: Setup Queue Worktree & Dispatch

```javascript
// Parallelism determined by DAG - no manual limit
@@ -158,24 +265,40 @@ TodoWrite({

console.log(`\n### Executing Solutions (DAG batch 1): ${batch.join(', ')}`);

// Setup worktree base directory if needed (using absolute paths)
if (useWorktree) {
  // Use absolute paths to avoid issues when running from subdirectories
  const repoRoot = Bash('git rev-parse --show-toplevel').trim();
  const worktreeBase = `${repoRoot}/.ccw/worktrees`;
  Bash(`mkdir -p "${worktreeBase}"`);
  // Prune stale worktrees from previous interrupted executions
  Bash('git worktree prune');
}

// Parse existing worktree path from args if provided
// Example: --worktree /path/to/existing/worktree
const existingWorktree = args.worktree && typeof args.worktree === 'string' ? args.worktree : null;

// Setup ONE worktree for entire queue (not per-solution)
let worktreePath = null;
let worktreeBranch = null;

if (useWorktree) {
  const repoRoot = Bash('git rev-parse --show-toplevel').trim();
  const worktreeBase = `${repoRoot}/.ccw/worktrees`;
  Bash(`mkdir -p "${worktreeBase}"`);
  Bash('git worktree prune'); // Cleanup stale worktrees

  if (existingWorktree) {
    // Resume mode: Use existing worktree
    worktreePath = existingWorktree;
    worktreeBranch = Bash(`git -C "${worktreePath}" branch --show-current`).trim();
    console.log(`Resuming in existing worktree: ${worktreePath} (branch: ${worktreeBranch})`);
  } else {
    // Create mode: ONE worktree for the entire queue
    const timestamp = new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14);
    worktreeBranch = `queue-exec-${dag.queue_id || timestamp}`;
    worktreePath = `${worktreeBase}/${worktreeBranch}`;
    Bash(`git worktree add "${worktreePath}" -b "${worktreeBranch}"`);
    console.log(`Created queue worktree: ${worktreePath}`);
  }
}

// Launch ALL solutions in batch in parallel (DAG guarantees no conflicts)
// All executors work in the SAME worktree (or main if no worktree)
const executions = batch.map(solutionId => {
  updateTodo(solutionId, 'in_progress');
  return dispatchExecutor(solutionId, executor, useWorktree, existingWorktree);
  return dispatchExecutor(solutionId, executor, worktreePath);
});

await Promise.all(executions);
@@ -185,126 +308,20 @@ batch.forEach(id => updateTodo(id, 'completed'));
### Executor Dispatch

```javascript
function dispatchExecutor(solutionId, executorType, useWorktree = false, existingWorktree = null) {
  // Worktree setup commands (if enabled) - using absolute paths
  // Supports both creating new worktrees and resuming in existing ones
  const worktreeSetup = useWorktree ? `
### Step 0: Setup Isolated Worktree
\`\`\`bash
# Use absolute paths to avoid issues when running from subdirectories
REPO_ROOT=$(git rev-parse --show-toplevel)
WORKTREE_BASE="\${REPO_ROOT}/.ccw/worktrees"

# Check if existing worktree path was provided
EXISTING_WORKTREE="${existingWorktree || ''}"

if [[ -n "\${EXISTING_WORKTREE}" && -d "\${EXISTING_WORKTREE}" ]]; then
  # Resume mode: Use existing worktree
  WORKTREE_PATH="\${EXISTING_WORKTREE}"
  WORKTREE_NAME=$(basename "\${WORKTREE_PATH}")

  # Verify it's a valid git worktree
  if ! git -C "\${WORKTREE_PATH}" rev-parse --is-inside-work-tree &>/dev/null; then
    echo "Error: \${EXISTING_WORKTREE} is not a valid git worktree"
    exit 1
  fi

  echo "Resuming in existing worktree: \${WORKTREE_PATH}"
else
  # Create mode: New worktree with timestamp
  WORKTREE_NAME="exec-${solutionId}-$(date +%H%M%S)"
  WORKTREE_PATH="\${WORKTREE_BASE}/\${WORKTREE_NAME}"

  # Ensure worktree base exists
  mkdir -p "\${WORKTREE_BASE}"

  # Prune stale worktrees
  git worktree prune

  # Create worktree
  git worktree add "\${WORKTREE_PATH}" -b "\${WORKTREE_NAME}"

  echo "Created new worktree: \${WORKTREE_PATH}"
fi

# Setup cleanup trap for graceful failure handling
cleanup_worktree() {
  echo "Cleaning up worktree due to interruption..."
  cd "\${REPO_ROOT}" 2>/dev/null || true
  git worktree remove "\${WORKTREE_PATH}" --force 2>/dev/null || true
  echo "Worktree removed. Branch '\${WORKTREE_NAME}' kept for inspection."
}
trap cleanup_worktree EXIT INT TERM

cd "\${WORKTREE_PATH}"
\`\`\`
` : '';

  const worktreeCleanup = useWorktree ? `
### Step 5: Worktree Completion (User Choice)

After all tasks complete, prompt for merge strategy:

\`\`\`javascript
AskUserQuestion({
  questions: [{
    question: "Solution ${solutionId} completed. What to do with worktree branch?",
    header: "Merge",
    multiSelect: false,
    options: [
      { label: "Create PR (Recommended)", description: "Push branch and create pull request - safest for parallel execution" },
      { label: "Merge to main", description: "Merge branch and cleanup worktree (requires clean main)" },
      { label: "Keep branch", description: "Cleanup worktree, keep branch for manual handling" }
    ]
  }]
})
\`\`\`

**Based on selection:**
\`\`\`bash
# Disable cleanup trap before intentional cleanup
trap - EXIT INT TERM

# Return to repo root (use REPO_ROOT from setup)
cd "\${REPO_ROOT}"

# Validate main repo state before merge
validate_main_clean() {
  if [[ -n \$(git status --porcelain) ]]; then
    echo "⚠️ Warning: Main repo has uncommitted changes."
    echo "Cannot auto-merge. Falling back to 'Create PR' option."
    return 1
  fi
  return 0
}

# Create PR (Recommended for parallel execution):
git push -u origin "\${WORKTREE_NAME}"
gh pr create --title "Solution ${solutionId}" --body "Issue queue execution"
git worktree remove "\${WORKTREE_PATH}"

# Merge to main (only if main is clean):
if validate_main_clean; then
  git merge --no-ff "\${WORKTREE_NAME}" -m "Merge solution ${solutionId}"
  git worktree remove "\${WORKTREE_PATH}" && git branch -d "\${WORKTREE_NAME}"
else
  # Fallback to PR if main is dirty
  git push -u origin "\${WORKTREE_NAME}"
  gh pr create --title "Solution ${solutionId}" --body "Issue queue execution (main had uncommitted changes)"
  git worktree remove "\${WORKTREE_PATH}"
fi

# Keep branch:
git worktree remove "\${WORKTREE_PATH}"
echo "Branch \${WORKTREE_NAME} kept for manual handling"
\`\`\`

**Parallel Execution Safety**: "Create PR" is the default and safest option for parallel executors, avoiding merge race conditions.
` : '';
// worktreePath: path to shared worktree (null if not using worktree)
function dispatchExecutor(solutionId, executorType, worktreePath = null) {
  // If worktree is provided, executor works in that directory
  // No per-solution worktree creation - ONE worktree for entire queue
  const cdCommand = worktreePath ? `cd "${worktreePath}"` : '';

  const prompt = `
## Execute Solution ${solutionId}
${worktreeSetup}
${worktreePath ? `
### Step 0: Enter Queue Worktree
\`\`\`bash
cd "${worktreePath}"
\`\`\`
` : ''}
### Step 1: Get Solution (read-only)
\`\`\`bash
ccw issue detail ${solutionId}
@@ -352,16 +369,21 @@ If any task failed:
\`\`\`bash
ccw issue done ${solutionId} --fail --reason '{"task_id": "TX", "error_type": "test_failure", "message": "..."}'
\`\`\`
${worktreeCleanup}`;

**Note**: Do NOT cleanup worktree after this solution. Worktree is shared by all solutions in the queue.
`;

  // For CLI tools, pass --cd to set working directory
  const cdOption = worktreePath ? ` --cd "${worktreePath}"` : '';

  if (executorType === 'codex') {
    return Bash(
      `ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${solutionId}`,
      `ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${solutionId}${cdOption}`,
      { timeout: 7200000, run_in_background: true } // 2hr for full solution
    );
  } else if (executorType === 'gemini') {
    return Bash(
      `ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${solutionId}`,
      `ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${solutionId}${cdOption}`,
      { timeout: 3600000, run_in_background: true }
    );
  } else {
@@ -369,7 +391,7 @@ ${worktreeCleanup}`;
    subagent_type: 'code-developer',
    run_in_background: false,
    description: `Execute solution ${solutionId}`,
    prompt: prompt
    prompt: worktreePath ? `Working directory: ${worktreePath}\n\n${prompt}` : prompt
  });
}
}
@@ -378,8 +400,8 @@ ${worktreeCleanup}`;
### Phase 3: Check Next Batch

```javascript
// Refresh DAG after batch completes
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag`).trim());
// Refresh DAG after batch completes (use same QUEUE_ID)
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim());

console.log(`
## Batch Complete
@@ -389,46 +411,117 @@ console.log(`
`);

if (refreshedDag.ready_count > 0) {
  console.log('Run `/issue:execute` again for next batch.');
  console.log(`Run \`/issue:execute --queue ${QUEUE_ID}\` again for next batch.`);
  // Note: If resuming, pass existing worktree path:
  // /issue:execute --queue ${QUEUE_ID} --worktree <worktreePath>
}
```

### Phase 4: Worktree Completion (after ALL batches)

```javascript
// Only run when ALL solutions completed AND using worktree
if (useWorktree && refreshedDag.ready_count === 0 && refreshedDag.completed_count === refreshedDag.total) {
  console.log('\n## All Solutions Completed - Worktree Cleanup');

  const answer = AskUserQuestion({
    questions: [{
      question: `Queue complete. What to do with worktree branch "${worktreeBranch}"?`,
      header: 'Merge',
      multiSelect: false,
      options: [
        { label: 'Create PR (Recommended)', description: 'Push branch and create pull request' },
        { label: 'Merge to main', description: 'Merge all commits and cleanup worktree' },
        { label: 'Keep branch', description: 'Cleanup worktree, keep branch for manual handling' }
      ]
    }]
  });

  const repoRoot = Bash('git rev-parse --show-toplevel').trim();

  if (answer['Merge'].includes('Create PR')) {
    Bash(`git -C "${worktreePath}" push -u origin "${worktreeBranch}"`);
    Bash(`gh pr create --title "Queue ${dag.queue_id}" --body "Issue queue execution - all solutions completed" --head "${worktreeBranch}"`);
    Bash(`git worktree remove "${worktreePath}"`);
    console.log(`PR created for branch: ${worktreeBranch}`);
  } else if (answer['Merge'].includes('Merge to main')) {
    // Check main is clean
    const mainDirty = Bash('git status --porcelain').trim();
    if (mainDirty) {
      console.log('Warning: Main has uncommitted changes. Falling back to PR.');
      Bash(`git -C "${worktreePath}" push -u origin "${worktreeBranch}"`);
      Bash(`gh pr create --title "Queue ${dag.queue_id}" --body "Issue queue execution (main had uncommitted changes)" --head "${worktreeBranch}"`);
      Bash(`git worktree remove "${worktreePath}"`);
    } else {
      Bash(`git merge --no-ff "${worktreeBranch}" -m "Merge queue ${dag.queue_id}"`);
      // Remove the worktree BEFORE deleting the branch - the branch is
      // still checked out there and `git branch -d` would refuse otherwise
      Bash(`git worktree remove "${worktreePath}"`);
      Bash(`git branch -d "${worktreeBranch}"`);
    }
  } else {
    Bash(`git worktree remove "${worktreePath}"`);
    console.log(`Branch ${worktreeBranch} kept for manual handling`);
  }
}
```
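The "main is clean" guard above keys off the output of `git status --porcelain`; the predicate is trivial to isolate and test on its own (a sketch, with `porcelainOutput` standing in for the trimmed command output):

```javascript
// True when `git status --porcelain` reports nothing, i.e. the working
// tree has no uncommitted changes and an auto-merge into main is safe.
function isMainClean(porcelainOutput) {
  return porcelainOutput.trim().length === 0;
}
```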

## Parallel Execution Model

```
┌─────────────────────────────────────────────────────────────┐
│                        Orchestrator                         │
├─────────────────────────────────────────────────────────────┤
│ 1. ccw issue queue dag                                      │
│    → { parallel_batches: [["S-1","S-2"], ["S-3"]] }         │
│                                                             │
│ 2. Dispatch batch 1 (parallel):                             │
│    ┌──────────────────────┐  ┌──────────────────────┐       │
│    │ Executor 1           │  │ Executor 2           │       │
│    │ detail S-1           │  │ detail S-2           │       │
│    │ → gets full solution │  │ → gets full solution │       │
│    │ [T1→T2→T3 sequential]│  │ [T1→T2 sequential]   │       │
│    │ commit (1x solution) │  │ commit (1x solution) │       │
│    │ done S-1             │  │ done S-2             │       │
│    └──────────────────────┘  └──────────────────────┘       │
│                                                             │
│ 3. ccw issue queue dag (refresh)                            │
│    → S-3 now ready (S-1 completed, file conflict resolved)  │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│                          Orchestrator                           │
├─────────────────────────────────────────────────────────────────┤
│ 0. Validate QUEUE_ID (required, or prompt user to select)       │
│                                                                 │
│ 0.5 (if --worktree) Create ONE worktree for entire queue        │
│     → .ccw/worktrees/queue-exec-<queue-id>                      │
│                                                                 │
│ 1. ccw issue queue dag --queue ${QUEUE_ID}                      │
│    → { parallel_batches: [["S-1","S-2"], ["S-3"]] }             │
│                                                                 │
│ 2. Dispatch batch 1 (parallel, SAME worktree):                  │
│    ┌──────────────────────────────────────────────────────┐     │
│    │ Shared Queue Worktree (or main)                      │     │
│    │   ┌──────────────────┐  ┌──────────────────┐         │     │
│    │   │ Executor 1       │  │ Executor 2       │         │     │
│    │   │ detail S-1       │  │ detail S-2       │         │     │
│    │   │ [T1→T2→T3]       │  │ [T1→T2]          │         │     │
│    │   │ commit S-1       │  │ commit S-2       │         │     │
│    │   │ done S-1         │  │ done S-2         │         │     │
│    │   └──────────────────┘  └──────────────────┘         │     │
│    └──────────────────────────────────────────────────────┘     │
│                                                                 │
│ 3. ccw issue queue dag (refresh)                                │
│    → S-3 now ready → dispatch batch 2 (same worktree)           │
│                                                                 │
│ 4. (if --worktree) ALL batches complete → cleanup worktree      │
│    → Prompt: Create PR / Merge to main / Keep branch            │
└─────────────────────────────────────────────────────────────────┘
```

**Why this works for parallel:**
- **ONE worktree for entire queue** → all solutions share same isolated workspace
- `detail <id>` is READ-ONLY → no race conditions
- Each executor handles **all tasks within a solution** sequentially
- **One commit per solution** with formatted summary (not per-task)
- `done <id>` updates only its own solution status
- `queue dag` recalculates ready solutions after each batch
- Solutions in same batch have NO file conflicts
- Solutions in same batch have NO file conflicts (DAG guarantees)
- **Main workspace stays clean** until merge/PR decision
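The batches themselves come from the file-conflict DAG; conceptually this is a topological leveling (Kahn-style). A sketch of that grouping in isolation, where `deps` is a hypothetical map from each solution ID to the solution IDs it must wait on:

```javascript
// Group solutions into parallel batches: each batch contains only
// solutions whose dependencies were all satisfied by earlier batches.
function parallelBatches(deps) {
  const remaining = new Set(Object.keys(deps));
  const done = new Set();
  const batches = [];
  while (remaining.size > 0) {
    const batch = [...remaining].filter(id => deps[id].every(d => done.has(d)));
    if (batch.length === 0) throw new Error('dependency cycle');
    batch.forEach(id => { remaining.delete(id); done.add(id); });
    batches.push(batch);
  }
  return batches;
}
```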

## CLI Endpoint Contract

### `ccw issue queue dag`
Returns dependency graph with parallel batches (solution-level):
### `ccw issue queue list --brief --json`
Returns queue index for selection (used when --queue not provided):
```json
{
  "active_queue_id": "QUE-20251215-001",
  "queues": [
    { "id": "QUE-20251215-001", "status": "active", "issue_ids": ["ISS-001"], "total_solutions": 5, "completed_solutions": 2 }
  ]
}
```

### `ccw issue queue dag --queue <queue-id>`
Returns dependency graph with parallel batches (solution-level, **--queue required**):
```json
{
  "queue_id": "QUE-...",
@@ -131,7 +131,7 @@ TASK: • Analyze issue titles/tags semantically • Identify functional/archite
MODE: analysis
CONTEXT: Issue metadata only
EXPECTED: JSON with groups array, each containing max 4 issue_ids, theme, rationale
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Each issue in exactly one group | Max 4 issues per group | Balance group sizes
CONSTRAINTS: Each issue in exactly one group | Max 4 issues per group | Balance group sizes

INPUT:
${JSON.stringify(issueSummaries, null, 2)}
@@ -195,12 +195,26 @@ ${issueList}

### Workflow
1. Fetch issue details: ccw issue status <id> --json
2. Load project context files
3. Explore codebase (ACE semantic search)
4. Plan solution with tasks (schema: solution-schema.json)
5. **If github_url exists**: Add final task to comment on GitHub issue
6. Write solution to: .workflow/issues/solutions/{issue-id}.jsonl
7. Single solution → auto-bind; Multiple → return for selection
2. **Analyze failure history** (if issue.feedback exists):
   - Extract failure details from issue.feedback (type='failure', stage='execute')
   - Parse error_type, message, task_id, solution_id from content JSON
   - Identify failure patterns: repeated errors, root causes, blockers
   - **Constraint**: Avoid repeating failed approaches
3. Load project context files
4. Explore codebase (ACE semantic search)
5. Plan solution with tasks (schema: solution-schema.json)
   - **If previous solution failed**: Reference failure analysis in solution.approach
   - Add explicit verification steps to prevent same failure mode
6. **If github_url exists**: Add final task to comment on GitHub issue
7. Write solution to: .workflow/issues/solutions/{issue-id}.jsonl
8. Single solution → auto-bind; Multiple → return for selection
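The JSONL write in step 7 is append-oriented: one solution object per line. A minimal sketch of the serialization step, where the solution object shape is illustrative only:

```javascript
// Serialize one solution record as a single JSONL line, ready to be
// appended to .workflow/issues/solutions/{issue-id}.jsonl.
function toJsonlLine(solution) {
  return JSON.stringify(solution) + '\n';
}

const line = toJsonlLine({
  id: 'SOL-ISS-001-a7x9', // SOL-{issue-id}-{uid}
  issue_id: 'ISS-001',
  approach: 'Refactor auth middleware',
  tasks: [{ id: 'T1', title: 'Add guard' }]
});
```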

### Failure-Aware Planning Rules
- **Extract failure patterns**: Parse issue.feedback where type='failure' and stage='execute'
- **Identify root causes**: Analyze error_type (test_failure, compilation, timeout, etc.)
- **Design alternative approach**: Create solution that addresses root cause
- **Add prevention steps**: Include explicit verification to catch same error earlier
- **Document lessons**: Reference previous failures in solution.approach
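The extraction step these rules describe can be isolated: filter feedback entries down to execute-stage failures and parse their `content` JSON. A sketch, assuming the feedback entry shape implied above:

```javascript
// Pull structured failure records out of issue.feedback for
// failure-aware planning. Entries whose content is not valid JSON are
// skipped rather than aborting planning.
function extractFailures(feedback) {
  return (feedback || [])
    .filter(f => f.type === 'failure' && f.stage === 'execute')
    .map(f => {
      try { return JSON.parse(f.content); } catch { return null; }
    })
    .filter(Boolean);
}
```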

### Rules
- Solution ID format: SOL-{issue-id}-{uid} (uid: 4 random alphanumeric chars, e.g., a7x9)
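The ID rule above is easy to mechanize; a sketch of a generator honoring the SOL-{issue-id}-{uid} format with 4 random alphanumeric characters:

```javascript
// Generate a solution ID like SOL-ISS-001-a7x9.
function makeSolutionId(issueId) {
  const chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
  let uid = '';
  for (let i = 0; i < 4; i++) {
    uid += chars[Math.floor(Math.random() * chars.length)];
  }
  return `SOL-${issueId}-${uid}`;
}
```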
@@ -65,9 +65,13 @@ Queue formation command using **issue-queue-agent** that analyzes all bound solu
--queues <n>       Number of parallel queues (default: 1)
--issue <id>       Form queue for specific issue only
--append <id>      Append issue to active queue (don't create new)
--force            Skip active queue check, always create new queue

# CLI subcommands (ccw issue queue ...)
ccw issue queue list                          List all queues with status
ccw issue queue add <issue-id>                Add issue to queue (interactive if active queue exists)
ccw issue queue add <issue-id> -f             Add to new queue without prompt (force)
ccw issue queue merge <src> --queue <target>  Merge source queue into target queue
ccw issue queue switch <queue-id>             Switch active queue
ccw issue queue archive                       Archive current queue
ccw issue queue delete <queue-id>             Delete queue from history
@@ -92,7 +96,7 @@ Phase 2-4: Agent-Driven Queue Formation (issue-queue-agent)
│ ├─ Build dependency DAG from conflicts
│ ├─ Calculate semantic priority per solution
│ └─ Assign execution groups (parallel/sequential)
└─ Each agent writes: queue JSON + index update
└─ Each agent writes: queue JSON + index update (NOT active yet)

Phase 5: Conflict Clarification (if needed)
├─ Collect `clarifications` arrays from all agents
@@ -102,7 +106,24 @@ Phase 5: Conflict Clarification (if needed)

Phase 6: Status Update & Summary
├─ Update issue statuses to 'queued'
└─ Display queue summary (N queues), next step: /issue:execute
└─ Display new queue summary (N queues)

Phase 7: Active Queue Check & Decision (REQUIRED)
├─ Read queue index: ccw issue queue list --brief
├─ Get generated queue ID from agent output
├─ If NO active queue exists:
│ ├─ Set generated queue as active_queue_id
│ ├─ Update index.json
│ └─ Display: "Queue created and activated"
│
└─ If active queue exists with items:
   ├─ Display both queues to user
   ├─ Use AskUserQuestion to prompt:
   │ ├─ "Use new queue (keep existing)" → Set new as active, keep old inactive
   │ ├─ "Merge: add new items to existing" → Merge new → existing, delete new
   │ ├─ "Merge: add existing items to new" → Merge existing → new, archive old
   │ └─ "Cancel" → Delete new queue, keep existing active
   └─ Execute chosen action
```

## Implementation
@@ -306,6 +327,41 @@ ccw issue update <issue-id> --status queued
- Show unplanned issues (planned but NOT in queue)
- Show next step: `/issue:execute`

### Phase 7: Active Queue Check & Decision

**After agent completes Phase 1-6, check for active queue:**

```bash
ccw issue queue list --brief
```

**Decision:**
- If `active_queue_id` is null → `ccw issue queue switch <new-queue-id>` (activate new queue)
- If active queue exists → Use **AskUserQuestion** to prompt user

**AskUserQuestion:**
```javascript
AskUserQuestion({
  questions: [{
    question: "Active queue exists. How would you like to proceed?",
    header: "Queue Action",
    options: [
      { label: "Merge into existing queue", description: "Add new items to active queue, delete new queue" },
      { label: "Use new queue", description: "Switch to new queue, keep existing in history" },
      { label: "Cancel", description: "Delete new queue, keep existing active" }
    ],
    multiSelect: false
  }]
})
```

**Action Commands:**

| User Choice | Commands |
|-------------|----------|
| **Merge into existing** | `ccw issue queue merge <new-queue-id> --queue <active-queue-id>` then `ccw issue queue delete <new-queue-id>` |
| **Use new queue** | `ccw issue queue switch <new-queue-id>` |
| **Cancel** | `ccw issue queue delete <new-queue-id>` |
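The table above is a pure mapping from user choice to CLI commands, which can be expressed directly (a sketch; the queue IDs in the test are placeholders):

```javascript
// Map the AskUserQuestion choice to the ccw commands to run, in order.
function queueActionCommands(choice, newId, activeId) {
  switch (choice) {
    case 'Merge into existing queue':
      return [
        `ccw issue queue merge ${newId} --queue ${activeId}`,
        `ccw issue queue delete ${newId}`
      ];
    case 'Use new queue':
      return [`ccw issue queue switch ${newId}`];
    case 'Cancel':
      return [`ccw issue queue delete ${newId}`];
    default:
      throw new Error(`Unknown choice: ${choice}`);
  }
}
```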

## Storage Structure (Queue History)

@@ -360,6 +416,9 @@ ccw issue update <issue-id> --status queued
| User cancels clarification | Abort queue formation |
| **index.json not updated** | Auto-fix: Set active_queue_id to new queue |
| **Queue file missing solutions** | Abort with error, agent must regenerate |
| **User cancels queue add** | Display message, return without changes |
| **Merge with empty source** | Skip merge, display warning |
| **All items duplicate** | Skip merge, display "All items already exist" |

## Quality Checklist

@@ -223,8 +223,8 @@ TASK:
|
||||
MODE: analysis
|
||||
CONTEXT: @src/**/*.controller.ts @src/**/*.routes.ts @src/**/*.dto.ts @src/**/middleware/**/*
|
||||
EXPECTED: JSON format API structure analysis report with modules, endpoints, security schemes, and error codes
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Strict RESTful standards | Identify all public endpoints | Document output language: {lang}
|
||||
" --tool gemini --mode analysis --cd {project_root}
|
||||
CONSTRAINTS: Strict RESTful standards | Identify all public endpoints | Document output language: {lang}
|
||||
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {project_root}
|
||||
```
|
||||
|
||||
**Update swagger-planning-data.json** with analysis results:
|
||||
@@ -387,7 +387,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
|
||||
"step": 1,
|
||||
"title": "Generate OpenAPI spec file",
|
||||
"description": "Create complete swagger.yaml specification file",
|
||||
"cli_prompt": "PURPOSE: Generate OpenAPI 3.0.3 specification file from analyzed API structure\nTASK:\n• Define openapi version: 3.0.3\n• Define info: title, description, version, contact, license\n• Define servers: development, staging, production environments\n• Define tags: organized by business modules\n• Define paths: all API endpoints with complete specifications\n• Define components: schemas, securitySchemes, parameters, responses\nMODE: write\nCONTEXT: @[api_analysis]\nEXPECTED: Complete swagger.yaml file following OpenAPI 3.0.3 specification\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(cat ~/.claude/workflows/cli-templates/prompts/documentation/swagger-api.txt) | Use {lang} for all descriptions | Strict RESTful standards",
|
||||
"cli_prompt": "PURPOSE: Generate OpenAPI 3.0.3 specification file from analyzed API structure\nTASK:\n• Define openapi version: 3.0.3\n• Define info: title, description, version, contact, license\n• Define servers: development, staging, production environments\n• Define tags: organized by business modules\n• Define paths: all API endpoints with complete specifications\n• Define components: schemas, securitySchemes, parameters, responses\nMODE: write\nCONTEXT: @[api_analysis]\nEXPECTED: Complete swagger.yaml file following OpenAPI 3.0.3 specification\nCONSTRAINTS: Use {lang} for all descriptions | Strict RESTful standards\n--rule documentation-swagger-api",
|
||||
"output": "swagger.yaml"
|
||||
}
|
||||
],
|
||||
@@ -429,7 +429,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Generate authentication documentation",
|
||||
"cli_prompt": "PURPOSE: Generate comprehensive authentication documentation for API security\nTASK:\n• Document authentication mechanism: JWT Bearer Token\n• Explain header format: Authorization: Bearer <token>\n• Describe token lifecycle: acquisition, refresh, expiration handling\n• Define permission levels: public, user, admin, super_admin\n• Document authentication failure responses: 401/403 error handling\nMODE: write\nCONTEXT: @[auth_patterns] @src/**/auth/**/* @src/**/guard/**/*\nEXPECTED: Complete authentication guide in {lang}\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Include code examples | Clear step-by-step instructions",
|
||||
"cli_prompt": "PURPOSE: Generate comprehensive authentication documentation for API security\nTASK:\n• Document authentication mechanism: JWT Bearer Token\n• Explain header format: Authorization: Bearer <token>\n• Describe token lifecycle: acquisition, refresh, expiration handling\n• Define permission levels: public, user, admin, super_admin\n• Document authentication failure responses: 401/403 error handling\nMODE: write\nCONTEXT: @[auth_patterns] @src/**/auth/**/* @src/**/guard/**/*\nEXPECTED: Complete authentication guide in {lang}\nCONSTRAINTS: Include code examples | Clear step-by-step instructions\n--rule development-feature",
"output": "{auth_doc_name}"
}
],
@@ -464,7 +464,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
{
"step": 1,
"title": "Generate error code specification document",
"cli_prompt": "PURPOSE: Generate comprehensive error code specification for consistent API error handling\nTASK:\n• Define error response format: {code, message, details, timestamp}\n• Document authentication errors (AUTH_xxx): 401/403 series\n• Document parameter errors (PARAM_xxx): 400 series\n• Document business errors (BIZ_xxx): business logic errors\n• Document system errors (SYS_xxx): 500 series\n• For each error code: HTTP status, error message, possible causes, resolution suggestions\nMODE: write\nCONTEXT: @src/**/*.exception.ts @src/**/*.filter.ts\nEXPECTED: Complete error code specification in {lang} with tables and examples\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Include response examples | Clear categorization",
"cli_prompt": "PURPOSE: Generate comprehensive error code specification for consistent API error handling\nTASK:\n• Define error response format: {code, message, details, timestamp}\n• Document authentication errors (AUTH_xxx): 401/403 series\n• Document parameter errors (PARAM_xxx): 400 series\n• Document business errors (BIZ_xxx): business logic errors\n• Document system errors (SYS_xxx): 500 series\n• For each error code: HTTP status, error message, possible causes, resolution suggestions\nMODE: write\nCONTEXT: @src/**/*.exception.ts @src/**/*.filter.ts\nEXPECTED: Complete error code specification in {lang} with tables and examples\nCONSTRAINTS: Include response examples | Clear categorization\n--rule development-feature",
"output": "{error_doc_name}"
}
],
@@ -523,7 +523,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
"step": 1,
"title": "Generate module API documentation",
"description": "Generate complete API documentation for ${module_name}",
"cli_prompt": "PURPOSE: Generate complete RESTful API documentation for ${module_name} module\nTASK:\n• Create module overview: purpose, use cases, prerequisites\n• Generate endpoint index: grouped by functionality\n• For each endpoint document:\n - Functional description: purpose and business context\n - Request method: GET/POST/PUT/DELETE\n - URL path: complete API path\n - Request headers: Authorization and other required headers\n - Path parameters: {id} and other path variables\n - Query parameters: pagination, filters, etc.\n - Request body: JSON Schema format\n - Response body: success and error responses\n - Field description table: type, required, example, description\n• Add usage examples: cURL, JavaScript, Python\n• Add version info: v1.0.0, last updated date\nMODE: write\nCONTEXT: @[module_endpoints] @[source_code]\nEXPECTED: Complete module API documentation in {lang} with all endpoints fully documented\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(cat ~/.claude/workflows/cli-templates/prompts/documentation/swagger-api.txt) | RESTful standards | Include all response codes",
"cli_prompt": "PURPOSE: Generate complete RESTful API documentation for ${module_name} module\nTASK:\n• Create module overview: purpose, use cases, prerequisites\n• Generate endpoint index: grouped by functionality\n• For each endpoint document:\n - Functional description: purpose and business context\n - Request method: GET/POST/PUT/DELETE\n - URL path: complete API path\n - Request headers: Authorization and other required headers\n - Path parameters: {id} and other path variables\n - Query parameters: pagination, filters, etc.\n - Request body: JSON Schema format\n - Response body: success and error responses\n - Field description table: type, required, example, description\n• Add usage examples: cURL, JavaScript, Python\n• Add version info: v1.0.0, last updated date\nMODE: write\nCONTEXT: @[module_endpoints] @[source_code]\nEXPECTED: Complete module API documentation in {lang} with all endpoints fully documented\nCONSTRAINTS: RESTful standards | Include all response codes\n--rule documentation-swagger-api",
"output": "${module_doc_name}"
}
],
@@ -559,7 +559,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
{
"step": 1,
"title": "Generate API overview",
"cli_prompt": "PURPOSE: Generate API overview document with navigation and quick start guide\nTASK:\n• Create introduction: system features, tech stack, version\n• Write quick start guide: authentication, first request example\n• Build module navigation: categorized links to all modules\n• Document environment configuration: development, staging, production\n• List SDKs and tools: client libraries, Postman collection\nMODE: write\nCONTEXT: @[all_module_docs] @.workflow/docs/${project_name}/api/swagger.yaml\nEXPECTED: Complete API overview in {lang} with navigation links\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Clear structure | Quick start focus",
"cli_prompt": "PURPOSE: Generate API overview document with navigation and quick start guide\nTASK:\n• Create introduction: system features, tech stack, version\n• Write quick start guide: authentication, first request example\n• Build module navigation: categorized links to all modules\n• Document environment configuration: development, staging, production\n• List SDKs and tools: client libraries, Postman collection\nMODE: write\nCONTEXT: @[all_module_docs] @.workflow/docs/${project_name}/api/swagger.yaml\nEXPECTED: Complete API overview in {lang} with navigation links\nCONSTRAINTS: Clear structure | Quick start focus\n--rule development-feature",
"output": "README.md"
}
],
@@ -602,7 +602,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
{
"step": 1,
"title": "Generate test report",
"cli_prompt": "PURPOSE: Generate comprehensive API test validation report\nTASK:\n• Document test environment configuration\n• Calculate endpoint coverage statistics\n• Report test results: pass/fail counts\n• Document boundary tests: parameter limits, null values, special characters\n• Document exception tests: auth failures, permission denied, resource not found\n• List issues found with recommendations\nMODE: write\nCONTEXT: @[swagger_spec]\nEXPECTED: Complete test report in {lang} with detailed results\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Include test cases | Clear pass/fail status",
"cli_prompt": "PURPOSE: Generate comprehensive API test validation report\nTASK:\n• Document test environment configuration\n• Calculate endpoint coverage statistics\n• Report test results: pass/fail counts\n• Document boundary tests: parameter limits, null values, special characters\n• Document exception tests: auth failures, permission denied, resource not found\n• List issues found with recommendations\nMODE: write\nCONTEXT: @[swagger_spec]\nEXPECTED: Complete test report in {lang} with detailed results\nCONSTRAINTS: Include test cases | Clear pass/fail status\n--rule development-tests",
"output": "{test_doc_name}"
}
],

@@ -147,8 +147,8 @@ You are generating path-conditional rules for Claude Code.

## Instructions

Read the agent prompt template for detailed instructions:
$(cat ~/.claude/workflows/cli-templates/prompts/rules/tech-rules-agent-prompt.txt)
Read the agent prompt template for detailed instructions.
Use --rule rules-tech-rules-agent-prompt to load the template automatically.

## Execution Steps

@@ -424,6 +424,17 @@ CONTEXT_VARS:
- **Agent execution failure**: Agent-specific retry with minimal dependencies
- **Template loading issues**: Agent handles graceful degradation
- **Synthesis conflicts**: Synthesis highlights disagreements without resolution
- **Context overflow protection**: See below for automatic context management

## Context Overflow Protection

**Per-role limits**: See `conceptual-planning-agent.md` (< 3000 words main, < 2000 words sub-docs, max 5 sub-docs)

**Synthesis protection**: If total analysis > 100KB, synthesis reads only `analysis.md` files (not sub-documents)

**Recovery**: Check logs → reduce scope (--count 2) → use --summary-only → manual synthesis

**Prevention**: Start with --count 3, use structured topic format, review output sizes before synthesis

## Reference Information

@@ -132,7 +132,7 @@ Scan and analyze workflow session directories:

**Staleness criteria**:
- Active sessions: No modification >7 days + no related git commits
- Archives: >30 days old + no feature references in project.json
- Archives: >30 days old + no feature references in project-tech.json
- Lite-plan: >7 days old + plan.json not executed
- Debug: >3 days old + issue not in recent commits
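The age thresholds above can be sketched as a small lookup; this is a hedged illustration only (category names and the stubbed-out git/reference checks are assumptions, not part of the command):

```javascript
// Hypothetical sketch: map each session category to its staleness window in days
// (thresholds taken from the criteria above; git-commit and feature-reference
// checks are omitted for brevity).
const STALE_DAYS = { active: 7, archive: 30, 'lite-plan': 7, debug: 3 };

function isStale(category, lastModifiedMs, nowMs = Date.now()) {
  const ageDays = (nowMs - lastModifiedMs) / (24 * 60 * 60 * 1000);
  return ageDays > (STALE_DAYS[category] ?? Infinity); // unknown category: never stale
}

const demoNow = Date.parse('2025-01-21T00:00:00Z');
console.log(isStale('debug', Date.parse('2025-01-17T00:00:00Z'), demoNow)); // → true (4 days > 3)
```

In the real command these date checks are combined with the git and project.json conditions listed above before a session is treated as stale.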

@@ -443,8 +443,8 @@ if (selectedCategories.includes('Sessions')) {
}
}

// Update project.json if features referenced deleted sessions
const projectPath = '.workflow/project.json'
// Update project-tech.json if features referenced deleted sessions
const projectPath = '.workflow/project-tech.json'
if (fileExists(projectPath)) {
const project = JSON.parse(Read(projectPath))
const deletedPaths = new Set(results.deleted)

666
.claude/commands/workflow/debug-with-file.md
Normal file
@@ -0,0 +1,666 @@
---
name: debug-with-file
description: Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and Gemini-assisted correction
argument-hint: "\"bug description or error message\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---

# Workflow Debug-With-File Command (/workflow:debug-with-file)

## Overview

Enhanced evidence-based debugging with a **documented exploration process**. Records understanding evolution, consolidates insights, and uses Gemini to correct misunderstandings.

**Core workflow**: Explore → Document → Log → Analyze → Correct Understanding → Fix → Verify

**Key enhancements over /workflow:debug**:
- **understanding.md**: Timeline of exploration and learning
- **Gemini-assisted correction**: Validates and corrects hypotheses
- **Consolidation**: Simplifies proven-wrong understanding to avoid clutter
- **Learning retention**: Preserves what was learned, even from failed attempts

## Usage

```bash
/workflow:debug-with-file <BUG_DESCRIPTION>

# Arguments
<bug-description>  Bug description, error message, or stack trace (required)
```

## Execution Process

```
Session Detection:
├─ Check if debug session exists for this bug
├─ EXISTS + understanding.md exists → Continue mode
└─ NOT_FOUND → Explore mode

Explore Mode:
├─ Locate error source in codebase
├─ Document initial understanding in understanding.md
├─ Generate testable hypotheses with Gemini validation
├─ Add NDJSON logging instrumentation
└─ Output: Hypothesis list + await user reproduction

Analyze Mode:
├─ Parse debug.log, validate each hypothesis
├─ Use Gemini to analyze evidence and correct understanding
├─ Update understanding.md with:
│  ├─ New evidence
│  ├─ Corrected misunderstandings (strikethrough + correction)
│  └─ Consolidated current understanding
└─ Decision:
   ├─ Confirmed → Fix root cause
   ├─ Inconclusive → Add more logging, iterate
   └─ All rejected → Gemini-assisted new hypotheses

Fix & Cleanup:
├─ Apply fix based on confirmed hypothesis
├─ User verifies
├─ Document final understanding + lessons learned
├─ Remove debug instrumentation
└─ If not fixed → Return to Analyze mode
```

## Implementation

### Session Setup & Mode Detection

```javascript
// Note: shifts the clock by +8h and formats via toISOString(), so the result
// still carries a "Z" suffix — a UTC+8 display convention, not a true offset.
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)

const sessionId = `DBG-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.debug/${sessionId}`
const debugLogPath = `${sessionFolder}/debug.log`
const understandingPath = `${sessionFolder}/understanding.md`
const hypothesesPath = `${sessionFolder}/hypotheses.json`

// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const hasUnderstanding = sessionExists && fs.existsSync(understandingPath)
const logHasContent = sessionExists && fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0

const mode = logHasContent ? 'analyze' : (hasUnderstanding ? 'continue' : 'explore')

if (!sessionExists) {
  bash(`mkdir -p ${sessionFolder}`)
}
```
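As a self-contained illustration, the slug and mode derivation above behave like this (the `fs` checks are replaced by plain booleans for clarity; the bug description is an invented example):

```javascript
// Sketch of the session-ID slug derivation used above.
const bugDescription = "TypeError: cannot read 'config' of undefined";
const bugSlug = bugDescription.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30);
console.log(bugSlug); // → typeerror-cannot-read-config-o  (truncated at 30 chars)

// Mode detection is a two-level ternary: log content wins over an existing understanding.md.
function detectMode(logHasContent, hasUnderstanding) {
  return logHasContent ? 'analyze' : (hasUnderstanding ? 'continue' : 'explore');
}
console.log(detectMode(false, false)); // → explore
console.log(detectMode(true, true));   // → analyze
```

Note that truncation can cut the slug mid-word; the date suffix keeps session IDs unique per day.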

---

### Explore Mode

**Step 1.1: Locate Error Source**

```javascript
// Extract keywords from bug description
const keywords = extractErrorKeywords(bug_description)

// Search codebase for error locations
const searchResults = []
for (const keyword of keywords) {
  const results = Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
  searchResults.push({ keyword, results })
}

// Identify affected files and functions
const affectedLocations = analyzeSearchResults(searchResults)
```
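`extractErrorKeywords` is not defined in this command; a minimal sketch of what such a helper might do (the heuristics here — quoted strings, `*Error` names, camelCase identifiers — are assumptions, not the actual implementation):

```javascript
// Hypothetical sketch of extractErrorKeywords: pull quoted strings, error-class
// names, and camelCase identifiers out of a free-form bug description.
function extractErrorKeywords(description) {
  const quoted = [...description.matchAll(/'([^']+)'|"([^"]+)"/g)].map(m => m[1] || m[2]);
  const errorNames = description.match(/\b[A-Z][A-Za-z]*Error\b/g) || [];
  const identifiers = description.match(/\b[a-z]+[A-Z][A-Za-z]*\b/g) || []; // camelCase words
  return [...new Set([...errorNames, ...quoted, ...identifiers])];
}

console.log(extractErrorKeywords("TypeError: 'config' is undefined in loadSettings"));
// → [ 'TypeError', 'config', 'loadSettings' ]
```

Each returned keyword then drives one `Grep` call in the loop above.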

**Step 1.2: Document Initial Understanding**

Create `understanding.md` with exploration timeline:

```markdown
# Understanding Document

**Session ID**: ${sessionId}
**Bug Description**: ${bug_description}
**Started**: ${getUtc8ISOString()}

---

## Exploration Timeline

### Iteration 1 - Initial Exploration (${timestamp})

#### Current Understanding

Based on bug description and initial code search:

- Error pattern: ${errorPattern}
- Affected areas: ${affectedLocations.map(l => l.file).join(', ')}
- Initial hypothesis: ${initialThoughts}

#### Evidence from Code Search

${searchResults.map(r => `
**Keyword: "${r.keyword}"**
- Found in: ${r.results.files.join(', ')}
- Key findings: ${r.insights}
`).join('\n')}

#### Next Steps

- Generate testable hypotheses
- Add instrumentation
- Await reproduction

---

## Current Consolidated Understanding

${initialConsolidatedUnderstanding}
```

**Step 1.3: Gemini-Assisted Hypothesis Generation**

```bash
ccw cli -p "
PURPOSE: Generate debugging hypotheses for: ${bug_description}
Success criteria: Testable hypotheses with clear evidence criteria

TASK:
• Analyze error pattern and code search results
• Identify 3-5 most likely root causes
• For each hypothesis, specify:
  - What might be wrong
  - What evidence would confirm/reject it
  - Where to add instrumentation
• Rank by likelihood

MODE: analysis

CONTEXT: @${sessionFolder}/understanding.md | Search results in understanding.md

EXPECTED:
- Structured hypothesis list (JSON format)
- Each hypothesis with: id, description, testable_condition, logging_point, evidence_criteria
- Likelihood ranking (1=most likely)

CONSTRAINTS: Focus on testable conditions
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```

Save Gemini output to `hypotheses.json`:

```json
{
  "iteration": 1,
  "timestamp": "2025-01-21T10:00:00+08:00",
  "hypotheses": [
    {
      "id": "H1",
      "description": "Data structure mismatch - expected key not present",
      "testable_condition": "Check if target key exists in dict",
      "logging_point": "file.py:func:42",
      "evidence_criteria": {
        "confirm": "data shows missing key",
        "reject": "key exists with valid value"
      },
      "likelihood": 1,
      "status": "pending"
    }
  ],
  "gemini_insights": "...",
  "corrected_assumptions": []
}
```
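Downstream steps can consume this file directly; a small sketch that selects still-pending hypotheses in likelihood order (field names taken from the schema above, the helper name is an assumption):

```javascript
// Select pending hypotheses, most likely first (likelihood 1 = most likely).
function pendingByLikelihood(hypothesesDoc) {
  return hypothesesDoc.hypotheses
    .filter(h => h.status === 'pending')
    .sort((a, b) => a.likelihood - b.likelihood)
    .map(h => h.id);
}

const doc = {
  iteration: 1,
  hypotheses: [
    { id: 'H2', likelihood: 2, status: 'pending' },
    { id: 'H1', likelihood: 1, status: 'pending' },
    { id: 'H3', likelihood: 3, status: 'rejected' },
  ],
};
console.log(pendingByLikelihood(doc)); // → [ 'H1', 'H2' ]
```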

**Step 1.4: Add NDJSON Instrumentation**

For each hypothesis, add logging (same as the original debug command).

**Step 1.5: Update understanding.md**

Append hypothesis section:

```markdown
#### Hypotheses Generated (Gemini-Assisted)

${hypotheses.map(h => `
**${h.id}** (Likelihood: ${h.likelihood}): ${h.description}
- Logging at: ${h.logging_point}
- Testing: ${h.testable_condition}
- Evidence to confirm: ${h.evidence_criteria.confirm}
- Evidence to reject: ${h.evidence_criteria.reject}
`).join('\n')}

**Gemini Insights**: ${geminiInsights}
```

---

### Analyze Mode

**Step 2.1: Parse Debug Log**

```javascript
// Parse NDJSON log
const entries = Read(debugLogPath).split('\n')
  .filter(l => l.trim())
  .map(l => JSON.parse(l))

// Group by hypothesis
const byHypothesis = groupBy(entries, 'hid')
```
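`groupBy` is not a JavaScript built-in; a minimal implementation matching the call above:

```javascript
// Group parsed log entries by a key field, e.g. the hypothesis id "hid".
function groupBy(items, key) {
  return items.reduce((acc, item) => {
    (acc[item[key]] ??= []).push(item); // create the bucket on first sight
    return acc;
  }, {});
}

const entries = [{ hid: 'H1', v: 1 }, { hid: 'H2', v: 2 }, { hid: 'H1', v: 3 }];
console.log(groupBy(entries, 'hid'));
// → { H1: [ { hid: 'H1', v: 1 }, { hid: 'H1', v: 3 } ], H2: [ { hid: 'H2', v: 2 } ] }
```

Each bucket then holds all probe entries for one hypothesis, ready for the per-hypothesis verdicts below.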

**Step 2.2: Gemini-Assisted Evidence Analysis**

```bash
ccw cli -p "
PURPOSE: Analyze debug log evidence to validate/correct hypotheses for: ${bug_description}
Success criteria: Clear verdict per hypothesis + corrected understanding

TASK:
• Parse log entries by hypothesis
• Evaluate evidence against expected criteria
• Determine verdict: confirmed | rejected | inconclusive
• Identify incorrect assumptions from previous understanding
• Suggest corrections to understanding

MODE: analysis

CONTEXT:
@${debugLogPath}
@${understandingPath}
@${hypothesesPath}

EXPECTED:
- Per-hypothesis verdict with reasoning
- Evidence summary
- List of incorrect assumptions with corrections
- Updated consolidated understanding
- Root cause if confirmed, or next investigation steps

CONSTRAINTS: Evidence-based reasoning only, no speculation
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```

**Step 2.3: Update Understanding with Corrections**

Append new iteration to `understanding.md`:

```markdown
### Iteration ${n} - Evidence Analysis (${timestamp})

#### Log Analysis Results

${results.map(r => `
**${r.id}**: ${r.verdict.toUpperCase()}
- Evidence: ${JSON.stringify(r.evidence)}
- Reasoning: ${r.reason}
`).join('\n')}

#### Corrected Understanding

Previous misunderstandings identified and corrected:

${corrections.map(c => `
- ~~${c.wrong}~~ → ${c.corrected}
  - Why wrong: ${c.reason}
  - Evidence: ${c.evidence}
`).join('\n')}

#### New Insights

${newInsights.join('\n- ')}

#### Gemini Analysis

${geminiAnalysis}

${confirmedHypothesis ? `
#### Root Cause Identified

**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}

Evidence supporting this conclusion:
${confirmedHypothesis.supportingEvidence}
` : `
#### Next Steps

${nextSteps}
`}

---

## Current Consolidated Understanding (Updated)

${consolidatedUnderstanding}
```

**Step 2.4: Consolidate Understanding**

At the bottom of `understanding.md`, update the consolidated section:

- Remove or simplify proven-wrong assumptions
- Keep them in strikethrough for reference
- Focus on current valid understanding
- Avoid repeating details from timeline

```markdown
## Current Consolidated Understanding

### What We Know

- ${validUnderstanding1}
- ${validUnderstanding2}

### What Was Disproven

- ~~Initial assumption: ${wrongAssumption}~~ (Evidence: ${disproofEvidence})

### Current Investigation Focus

${currentFocus}

### Remaining Questions

- ${openQuestion1}
- ${openQuestion2}
```

**Step 2.5: Update hypotheses.json**

```json
{
  "iteration": 2,
  "timestamp": "2025-01-21T10:15:00+08:00",
  "hypotheses": [
    {
      "id": "H1",
      "status": "rejected",
      "verdict_reason": "Evidence shows key exists with valid value",
      "evidence": {...}
    },
    {
      "id": "H2",
      "status": "confirmed",
      "verdict_reason": "Log data confirms timing issue",
      "evidence": {...}
    }
  ],
  "gemini_corrections": [
    {
      "wrong_assumption": "...",
      "corrected_to": "...",
      "reason": "..."
    }
  ]
}
```

---

### Fix & Verification

**Step 3.1: Apply Fix**

(Same as original debug command)

**Step 3.2: Document Resolution**

Append to `understanding.md`:

```markdown
### Iteration ${n} - Resolution (${timestamp})

#### Fix Applied

- Modified files: ${modifiedFiles.join(', ')}
- Fix description: ${fixDescription}
- Root cause addressed: ${rootCause}

#### Verification Results

${verificationResults}

#### Lessons Learned

What we learned from this debugging session:

1. ${lesson1}
2. ${lesson2}
3. ${lesson3}

#### Key Insights for Future

- ${insight1}
- ${insight2}
```

**Step 3.3: Cleanup**

Remove debug instrumentation (same as original command).

---

## Session Folder Structure

```
.workflow/.debug/DBG-{slug}-{date}/
├── debug.log          # NDJSON log (execution evidence)
├── understanding.md   # NEW: Exploration timeline + consolidated understanding
├── hypotheses.json    # NEW: Hypothesis history with verdicts
└── resolution.md      # Optional: Final summary
```

## Understanding Document Template

```markdown
# Understanding Document

**Session ID**: DBG-xxx-2025-01-21
**Bug Description**: [original description]
**Started**: 2025-01-21T10:00:00+08:00

---

## Exploration Timeline

### Iteration 1 - Initial Exploration (2025-01-21 10:00)

#### Current Understanding
...

#### Evidence from Code Search
...

#### Hypotheses Generated (Gemini-Assisted)
...

### Iteration 2 - Evidence Analysis (2025-01-21 10:15)

#### Log Analysis Results
...

#### Corrected Understanding
- ~~[wrong]~~ → [corrected]

#### Gemini Analysis
...

---

## Current Consolidated Understanding

### What We Know
- [valid understanding points]

### What Was Disproven
- ~~[disproven assumptions]~~

### Current Investigation Focus
[current focus]

### Remaining Questions
- [open questions]
```

## Iteration Flow

```
First Call (/workflow:debug-with-file "error"):
├─ No session exists → Explore mode
├─ Extract error keywords, search codebase
├─ Document initial understanding in understanding.md
├─ Use Gemini to generate hypotheses
├─ Add logging instrumentation
└─ Await user reproduction

After Reproduction (/workflow:debug-with-file "error"):
├─ Session exists + debug.log has content → Analyze mode
├─ Parse log, use Gemini to evaluate hypotheses
├─ Update understanding.md with:
│  ├─ Evidence analysis results
│  ├─ Corrected misunderstandings (strikethrough)
│  ├─ New insights
│  └─ Updated consolidated understanding
├─ Update hypotheses.json with verdicts
└─ Decision:
   ├─ Confirmed → Fix → Document resolution
   ├─ Inconclusive → Add logging, document next steps
   └─ All rejected → Gemini-assisted new hypotheses

Output:
├─ .workflow/.debug/DBG-{slug}-{date}/debug.log
├─ .workflow/.debug/DBG-{slug}-{date}/understanding.md (evolving document)
└─ .workflow/.debug/DBG-{slug}-{date}/hypotheses.json (history)
```

## Gemini Integration Points

### 1. Hypothesis Generation (Explore Mode)

**Purpose**: Generate evidence-based, testable hypotheses

**Prompt Pattern**:
```
PURPOSE: Generate debugging hypotheses + evidence criteria
TASK: Analyze error + code → testable hypotheses with clear pass/fail criteria
CONTEXT: @understanding.md (search results)
EXPECTED: JSON with hypotheses, likelihood ranking, evidence criteria
```

### 2. Evidence Analysis (Analyze Mode)

**Purpose**: Validate hypotheses and correct misunderstandings

**Prompt Pattern**:
```
PURPOSE: Analyze debug log evidence + correct understanding
TASK: Evaluate each hypothesis → identify wrong assumptions → suggest corrections
CONTEXT: @debug.log @understanding.md @hypotheses.json
EXPECTED: Verdicts + corrections + updated consolidated understanding
```

### 3. New Hypothesis Generation (After All Rejected)

**Purpose**: Generate new hypotheses based on what was disproven

**Prompt Pattern**:
```
PURPOSE: Generate new hypotheses given disproven assumptions
TASK: Review rejected hypotheses → identify knowledge gaps → new investigation angles
CONTEXT: @understanding.md (with disproven section) @hypotheses.json
EXPECTED: New hypotheses avoiding previously rejected paths
```

## Error Correction Mechanism

### Correction Format in understanding.md

```markdown
#### Corrected Understanding

- ~~Assumed dict key "config" was missing~~ → Key exists, but value is None
  - Why wrong: Only checked existence, not value validity
  - Evidence: H1 log shows {"config": null, "exists": true}

- ~~Thought error occurred in initialization~~ → Error happens during runtime update
  - Why wrong: Stack trace misread as init code
  - Evidence: H2 timestamp shows 30s after startup
```

### Consolidation Rules

When updating "Current Consolidated Understanding":

1. **Simplify disproven items**: Move to "What Was Disproven" with single-line summary
2. **Keep valid insights**: Promote confirmed findings to "What We Know"
3. **Avoid duplication**: Don't repeat timeline details in consolidated section
4. **Focus on current state**: What do we know NOW, not the journey
5. **Preserve key corrections**: Keep important wrong→right transformations for learning

**Bad (cluttered)**:
```markdown
## Current Consolidated Understanding

In iteration 1 we thought X, but in iteration 2 we found Y, then in iteration 3...
Also we checked A and found B, and then we checked C...
```

**Good (consolidated)**:
```markdown
## Current Consolidated Understanding

### What We Know
- Error occurs during runtime update, not initialization
- Config value is None (not missing key)

### What Was Disproven
- ~~Initialization error~~ (Timing evidence)
- ~~Missing key hypothesis~~ (Key exists)

### Current Investigation Focus
Why is config value None during update?
```

## Post-Completion Expansion

After completion, ask the user whether to expand the work into follow-up issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.

---

## Error Handling

| Situation | Action |
|-----------|--------|
| Empty debug.log | Verify reproduction triggered the code path |
| All hypotheses rejected | Use Gemini to generate new hypotheses based on disproven assumptions |
| Fix doesn't work | Document failed fix attempt, iterate with refined understanding |
| >5 iterations | Review consolidated understanding, escalate to `/workflow:lite-fix` with full context |
| Gemini unavailable | Fall back to manual hypothesis generation, document without Gemini insights |
| Understanding too long | Consolidate aggressively, archive old iterations to a separate file |

## Comparison with /workflow:debug

| Feature | /workflow:debug | /workflow:debug-with-file |
|---------|-----------------|---------------------------|
| NDJSON logging | ✅ | ✅ |
| Hypothesis generation | Manual | Gemini-assisted |
| Exploration documentation | ❌ | ✅ understanding.md |
| Understanding evolution | ❌ | ✅ Timeline + corrections |
| Error correction | ❌ | ✅ Strikethrough + reasoning |
| Consolidated learning | ❌ | ✅ Current understanding section |
| Hypothesis history | ❌ | ✅ hypotheses.json |
| Gemini validation | ❌ | ✅ At key decision points |

## Usage Recommendations

Use `/workflow:debug-with-file` when:
- Complex bugs require multiple investigation rounds
- Learning from the debugging process is valuable
- The team needs to understand the debugging rationale
- The bug might recur, and documentation helps prevention

Use `/workflow:debug` when:
- Bugs are simple and quick
- Issues are one-off
- Documentation overhead is not needed
@@ -311,6 +311,12 @@ Output:
└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
```

## Post-Completion Expansion

After completion, ask the user whether to expand the work into follow-up issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.

---

## Error Handling

| Situation | Action |

@@ -275,6 +275,10 @@ AskUserQuestion({
- **"Enter Review"**: Execute `/workflow:review`
- **"Complete Session"**: Execute `/workflow:session:complete`

### Post-Completion Expansion

After completion, ask the user whether to expand the work into follow-up issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.

## Execution Strategy (IMPL_PLAN-Driven)

### Strategy Priority

@@ -108,11 +108,24 @@ Analyze project for workflow initialization and generate .workflow/project-tech.
2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)

## Task
Generate complete project-tech.json with:
- project_metadata: {name: ${projectName}, root_path: ${projectRoot}, initialized_at, updated_at}
- technology_analysis: {description, languages, frameworks, build_tools, test_frameworks, architecture, key_components, dependencies}
- development_status: ${regenerate ? 'preserve from backup' : '{completed_features: [], development_index: {feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}, statistics: {total_features: 0, total_sessions: 0, last_updated}}'}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp, analysis_mode}
Generate complete project-tech.json following the schema structure:
- project_name: "${projectName}"
- initialized_at: ISO 8601 timestamp
- overview: {
    description: "Brief project description",
    technology_stack: {
      languages: [{name, file_count, primary}],
      frameworks: ["string"],
      build_tools: ["string"],
      test_frameworks: ["string"]
    },
    architecture: {style, layers: [], patterns: []},
    key_components: [{name, path, description, importance}]
  }
- features: []
- development_index: ${regenerate ? 'preserve from backup' : '{feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}'}
- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated: ISO timestamp}'}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp: ISO timestamp, analysis_mode: "deep-scan"}

## Analysis Requirements

@@ -132,7 +145,7 @@ Generate complete project-tech.json with:
1. Structural scan: get_modules_by_depth.sh, find, wc -l
2. Semantic analysis: Gemini for patterns/architecture
3. Synthesis: Merge findings
4. ${regenerate ? 'Merge with preserved development_status from .workflow/project-tech.json.backup' : ''}
4. ${regenerate ? 'Merge with preserved development_index and statistics from .workflow/project-tech.json.backup' : ''}
5. Write JSON: Write('.workflow/project-tech.json', jsonContent)
6. Report: Return brief completion summary

@@ -181,16 +194,16 @@ console.log(`
✓ Project initialized successfully

## Project Overview
Name: ${projectTech.project_metadata.name}
Description: ${projectTech.technology_analysis.description}
Name: ${projectTech.project_name}
Description: ${projectTech.overview.description}

### Technology Stack
Languages: ${projectTech.technology_analysis.languages.map(l => l.name).join(', ')}
Frameworks: ${projectTech.technology_analysis.frameworks.join(', ')}
Languages: ${projectTech.overview.technology_stack.languages.map(l => l.name).join(', ')}
Frameworks: ${projectTech.overview.technology_stack.frameworks.join(', ')}

### Architecture
Style: ${projectTech.technology_analysis.architecture.style}
Components: ${projectTech.technology_analysis.key_components.length} core modules
Style: ${projectTech.overview.architecture.style}
Components: ${projectTech.overview.key_components.length} core modules

---
Files created:

@@ -81,6 +81,7 @@ AskUserQuestion({
options: [
  { label: "Skip", description: "No review" },
  { label: "Gemini Review", description: "Gemini CLI tool" },
  { label: "Codex Review", description: "Git-aware review (prompt OR --uncommitted)" },
  { label: "Agent Review", description: "Current agent review" }
]
}
@@ -171,10 +172,23 @@ Output:
**Operations**:
- Initialize result tracking for multi-execution scenarios
- Set up `previousExecutionResults` array for context continuity
- **In-Memory Mode**: Echo execution strategy from lite-plan for transparency

```javascript
// Initialize result tracking
previousExecutionResults = []

// In-Memory Mode: Echo execution strategy (transparency before execution)
if (executionContext) {
  console.log(`
📋 Execution Strategy (from lite-plan):
  Method: ${executionContext.executionMethod}
  Review: ${executionContext.codeReviewTool}
  Tasks: ${executionContext.planObject.tasks.length}
  Complexity: ${executionContext.planObject.complexity}
${executionContext.executorAssignments ? `  Assignments: ${JSON.stringify(executionContext.executorAssignments)}` : ''}
`)
}
```

### Step 2: Task Grouping & Batch Creation
@@ -313,7 +327,7 @@ for (const call of sequential) {

```javascript
function buildExecutionPrompt(batch) {
  // Task template (4 parts: Modification Points → How → Reference → Done)
  // Task template (6 parts: Modification Points → Why → How → Reference → Risks → Done)
  const formatTask = (t) => `
## ${t.title}

@@ -322,18 +336,38 @@ function buildExecutionPrompt(batch) {
### Modification Points
${t.modification_points.map(p => `- **${p.file}** → \`${p.target}\`: ${p.change}`).join('\n')}

${t.rationale ? `
### Why this approach (Medium/High)
${t.rationale.chosen_approach}
${t.rationale.decision_factors?.length > 0 ? `\nKey factors: ${t.rationale.decision_factors.join(', ')}` : ''}
${t.rationale.tradeoffs ? `\nTradeoffs: ${t.rationale.tradeoffs}` : ''}
` : ''}

### How to do it
${t.description}

${t.implementation.map(step => `- ${step}`).join('\n')}

${t.code_skeleton ? `
### Code skeleton (High)
${t.code_skeleton.interfaces?.length > 0 ? `**Interfaces**: ${t.code_skeleton.interfaces.map(i => `\`${i.name}\` - ${i.purpose}`).join(', ')}` : ''}
${t.code_skeleton.key_functions?.length > 0 ? `\n**Functions**: ${t.code_skeleton.key_functions.map(f => `\`${f.signature}\` - ${f.purpose}`).join(', ')}` : ''}
${t.code_skeleton.classes?.length > 0 ? `\n**Classes**: ${t.code_skeleton.classes.map(c => `\`${c.name}\` - ${c.purpose}`).join(', ')}` : ''}
` : ''}

### Reference
- Pattern: ${t.reference?.pattern || 'N/A'}
- Files: ${t.reference?.files?.join(', ') || 'N/A'}
${t.reference?.examples ? `- Notes: ${t.reference.examples}` : ''}

${t.risks?.length > 0 ? `
### Risk mitigations (High)
${t.risks.map(r => `- ${r.description} → **${r.mitigation}**`).join('\n')}
` : ''}

### Done when
${t.acceptance.map(c => `- [ ] ${c}`).join('\n')}`
${t.acceptance.map(c => `- [ ] ${c}`).join('\n')}
${t.verification?.success_metrics?.length > 0 ? `\n**Success metrics**: ${t.verification.success_metrics.join(', ')}` : ''}`

// Build prompt
const sections = []

@@ -350,9 +384,14 @@ ${t.acceptance.map(c => `- [ ] ${c}`).join('\n')}`
if (clarificationContext) {
  context.push(`### Clarifications\n${Object.entries(clarificationContext).map(([q, a]) => `- ${q}: ${a}`).join('\n')}`)
}
if (executionContext?.planObject?.data_flow?.diagram) {
  context.push(`### Data Flow\n${executionContext.planObject.data_flow.diagram}`)
}
if (executionContext?.session?.artifacts?.plan) {
  context.push(`### Artifacts\nPlan: ${executionContext.session.artifacts.plan}`)
}
// Project guidelines (user-defined constraints from /workflow:session:solidify)
context.push(`### Project Guidelines\n@.workflow/project-guidelines.json`)
if (context.length > 0) sections.push(`## Context\n${context.join('\n\n')}`)

sections.push(`Complete each task according to its "Done when" checklist.`)
@@ -392,16 +431,8 @@ ccw cli -p "${buildExecutionPrompt(batch)}" --tool codex --mode write

**Execution with fixed IDs** (predictable ID pattern):
```javascript
// Launch CLI in foreground (NOT background)
// Timeout based on complexity: Low=40min, Medium=60min, High=100min
const timeoutByComplexity = {
  "Low": 2400000,    // 40 minutes
  "Medium": 3600000, // 60 minutes
  "High": 6000000    // 100 minutes
}

// Launch CLI in background, wait for task hook callback
// Generate fixed execution ID: ${sessionId}-${groupId}
// This enables predictable ID lookup without relying on resume context chains
const sessionId = executionContext?.session?.id || 'standalone'
const fixedExecutionId = `${sessionId}-${batch.groupId}` // e.g., "implement-auth-2025-12-13-P1"

@@ -413,16 +444,12 @@ const cli_command = previousCliId
  ? `ccw cli -p "${buildExecutionPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId} --resume ${previousCliId}`
  : `ccw cli -p "${buildExecutionPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId}`

bash_result = Bash(
// Execute in background, stop output and wait for task hook callback
Bash(
  command=cli_command,
  timeout=timeoutByComplexity[planObject.complexity] || 3600000
  run_in_background=true
)

// Execution ID is now predictable: ${fixedExecutionId}
// Can also extract from output: "ID: implement-auth-2025-12-13-P1"
const cliExecutionId = fixedExecutionId

// Update TodoWrite when execution completes
// STOP HERE - CLI executes in background, task hook will notify on completion
```

**Resume on Failure** (with fixed ID):
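As a minimal sketch of the resume-aware command construction above (the helper name `buildCliCommand` is illustrative and not part of the codebase; flag names follow the documented `ccw cli` usage):

```javascript
// Build the ccw command, appending --resume only when a previous CLI ID exists.
function buildCliCommand(prompt, fixedExecutionId, previousCliId) {
  const base = `ccw cli -p "${prompt}" --tool codex --mode write --id ${fixedExecutionId}`
  return previousCliId ? `${base} --resume ${previousCliId}` : base
}
```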
@@ -460,32 +487,41 @@ Progress tracked at batch level (not individual task level). Icons: ⚡ (paralle

**Skip Condition**: Only run if `codeReviewTool ≠ "Skip"`

**Review Focus**: Verify implementation against plan acceptance criteria
- Read plan.json for task acceptance criteria
**Review Focus**: Verify implementation against plan acceptance criteria and verification requirements
- Read plan.json for task acceptance criteria and verification checklist
- Check each acceptance criterion is fulfilled
- Verify success metrics from verification field (Medium/High complexity)
- Run unit/integration tests specified in verification field
- Validate code quality and identify issues
- Ensure alignment with planned approach
- Ensure alignment with planned approach and risk mitigations

**Operations**:
- Agent Review: Current agent performs direct review
- Gemini Review: Execute gemini CLI with review prompt
- Custom tool: Execute specified CLI tool (qwen, codex, etc.)
- Codex Review: Two options - (A) with prompt for complex reviews, (B) `--uncommitted` flag only for quick reviews
- Custom tool: Execute specified CLI tool (qwen, etc.)

**Unified Review Template** (All tools use same standard):

**Review Criteria**:
- **Acceptance Criteria**: Verify each criterion from plan.tasks[].acceptance
- **Verification Checklist** (Medium/High): Check unit_tests, integration_tests, success_metrics from plan.tasks[].verification
- **Code Quality**: Analyze quality, identify issues, suggest improvements
- **Plan Alignment**: Validate implementation matches planned approach
- **Plan Alignment**: Validate implementation matches planned approach and risk mitigations

**Shared Prompt Template** (used by all CLI tools):
```
PURPOSE: Code review for implemented changes against plan acceptance criteria
TASK: • Verify plan acceptance criteria fulfillment • Analyze code quality • Identify issues • Suggest improvements • Validate plan adherence
PURPOSE: Code review for implemented changes against plan acceptance criteria and verification requirements
TASK: • Verify plan acceptance criteria fulfillment • Check verification requirements (unit tests, success metrics) • Analyze code quality • Identify issues • Suggest improvements • Validate plan adherence and risk mitigations
MODE: analysis
CONTEXT: @**/* @{plan.json} [@{exploration.json}] | Memory: Review lite-execute changes against plan requirements
EXPECTED: Quality assessment with acceptance criteria verification, issue identification, and recommendations. Explicitly check each acceptance criterion from plan.json tasks.
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on plan acceptance criteria and plan adherence | analysis=READ-ONLY
CONTEXT: @**/* @{plan.json} [@{exploration.json}] | Memory: Review lite-execute changes against plan requirements including verification checklist
EXPECTED: Quality assessment with:
- Acceptance criteria verification (all tasks)
- Verification checklist validation (Medium/High: unit_tests, integration_tests, success_metrics)
- Issue identification
- Recommendations
Explicitly check each acceptance criterion and verification item from plan.json tasks.
CONSTRAINTS: Focus on plan acceptance criteria, verification requirements, and plan adherence | analysis=READ-ONLY
```

**Tool-Specific Execution** (Apply shared prompt template above):
@@ -504,8 +540,17 @@ ccw cli -p "[Shared Prompt Template with artifacts]" --tool gemini --mode analys
ccw cli -p "[Shared Prompt Template with artifacts]" --tool qwen --mode analysis
# Same prompt as Gemini, different execution engine

# Method 4: Codex Review (autonomous)
ccw cli -p "[Verify plan acceptance criteria at ${plan.json}]" --tool codex --mode write
# Method 4: Codex Review (git-aware) - Two mutually exclusive options:

# Option A: With custom prompt (reviews uncommitted by default)
ccw cli -p "[Shared Prompt Template with artifacts]" --tool codex --mode review
# Use for complex reviews with specific focus areas

# Option B: Target flag only (no prompt allowed)
ccw cli --tool codex --mode review --uncommitted
# Quick review of uncommitted changes without custom instructions

# ⚠️ IMPORTANT: -p prompt and target flags (--uncommitted/--base/--commit) are MUTUALLY EXCLUSIVE
```

**Multi-Round Review with Fixed IDs**:
@@ -531,11 +576,11 @@ if (hasUnresolvedIssues(reviewResult)) {

**Trigger**: After all executions complete (regardless of code review)

**Skip Condition**: Skip if `.workflow/project.json` does not exist
**Skip Condition**: Skip if `.workflow/project-tech.json` does not exist

**Operations**:
```javascript
const projectJsonPath = '.workflow/project.json'
const projectJsonPath = '.workflow/project-tech.json'
if (!fileExists(projectJsonPath)) return // Silent skip

const projectJson = JSON.parse(Read(projectJsonPath))
@@ -664,6 +709,10 @@ Collected after each execution call completes:

Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.

## Post-Completion Expansion

After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, invoke `/issue:new "{summary} - {dimension}"`.

**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.

**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:

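The fixed ID pattern can be sketched as a small helper (the function name is illustrative, not part of the repo; the `'standalone'` fallback mirrors the session-less case documented earlier):

```javascript
// Compose the predictable execution ID `${sessionId}-${groupId}`.
function buildFixedExecutionId(sessionId, groupId) {
  return `${sessionId || 'standalone'}-${groupId}`
}
```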
@@ -380,6 +380,7 @@ if (uniqueClarifications.length > 0) {
const schema = Bash(`cat ~/.claude/workflows/cli-templates/schemas/fix-plan-json-schema.json`)

// Step 2: Generate fix-plan following schema (Claude directly, no agent)
// For Medium complexity: include rationale + verification (optional, but recommended)
const fixPlan = {
  summary: "...",
  root_cause: "...",
@@ -389,13 +390,67 @@ const fixPlan = {
  recommended_execution: "Agent",
  severity: severity,
  risk_level: "...",
  _metadata: { timestamp: getUtc8ISOString(), source: "direct-planning", planning_mode: "direct" }

  // Medium complexity fields (optional for direct planning, auto-filled for Low)
  ...(severity === "Medium" ? {
    design_decisions: [
      {
        decision: "Use immediate_patch strategy for minimal risk",
        rationale: "Keeps changes localized and quick to review",
        tradeoff: "Defers comprehensive refactoring"
      }
    ],
    tasks_with_rationale: {
      // Each task gets rationale if Medium
      task_rationale_example: {
        rationale: {
          chosen_approach: "Direct fix approach",
          alternatives_considered: ["Workaround", "Refactor"],
          decision_factors: ["Minimal impact", "Quick turnaround"],
          tradeoffs: "Doesn't address underlying issue"
        },
        verification: {
          unit_tests: ["test_bug_fix_basic"],
          integration_tests: [],
          manual_checks: ["Reproduce issue", "Verify fix"],
          success_metrics: ["Issue resolved", "No regressions"]
        }
      }
    }
  } : {}),

  _metadata: {
    timestamp: getUtc8ISOString(),
    source: "direct-planning",
    planning_mode: "direct",
    complexity: severity === "Medium" ? "Medium" : "Low"
  }
}

// Step 3: Write fix-plan to session folder
// Step 3: Merge task rationale into tasks array
if (severity === "Medium") {
  fixPlan.tasks = fixPlan.tasks.map(task => ({
    ...task,
    rationale: fixPlan.tasks_with_rationale[task.id]?.rationale || {
      chosen_approach: "Standard fix",
      alternatives_considered: [],
      decision_factors: ["Correctness", "Simplicity"],
      tradeoffs: "None"
    },
    verification: fixPlan.tasks_with_rationale[task.id]?.verification || {
      unit_tests: [`test_${task.id}_basic`],
      integration_tests: [],
      manual_checks: ["Verify fix works"],
      success_metrics: ["Test pass"]
    }
  }))
  delete fixPlan.tasks_with_rationale // Clean up temp field
}

// Step 4: Write fix-plan to session folder
Write(`${sessionFolder}/fix-plan.json`, JSON.stringify(fixPlan, null, 2))

// Step 4: MUST continue to Phase 4 (Confirmation) - DO NOT execute code here
// Step 5: MUST continue to Phase 4 (Confirmation) - DO NOT execute code here
```

**High/Critical Severity** - Invoke cli-lite-planning-agent:
@@ -451,11 +506,41 @@ Generate fix-plan.json with:
- description
- modification_points: ALL files to modify for this fix (group related changes)
- implementation (2-5 steps covering all modification_points)
- verification (test criteria)
- acceptance: Quantified acceptance criteria
- depends_on: task IDs this task depends on (use sparingly)

**High/Critical complexity fields per task** (REQUIRED):
- rationale:
  - chosen_approach: Why this fix approach (not alternatives)
  - alternatives_considered: Other approaches evaluated
  - decision_factors: Key factors influencing choice
  - tradeoffs: Known tradeoffs of this approach
- verification:
  - unit_tests: Test names to add/verify
  - integration_tests: Integration test names
  - manual_checks: Manual verification steps
  - success_metrics: Quantified success criteria
- risks:
  - description: Risk description
  - probability: Low|Medium|High
  - impact: Low|Medium|High
  - mitigation: How to mitigate
  - fallback: Fallback if fix fails
- code_skeleton (optional): Key interfaces/functions to implement
  - interfaces: [{name, definition, purpose}]
  - key_functions: [{signature, purpose, returns}]

**Top-level High/Critical fields** (REQUIRED):
- data_flow: How data flows through affected code
  - diagram: "A → B → C" style flow
  - stages: [{stage, input, output, component}]
- design_decisions: Global fix decisions
  - [{decision, rationale, tradeoff}]

- estimated_time, recommended_execution, severity, risk_level
- _metadata:
  - timestamp, source, planning_mode
  - complexity: "High" | "Critical"
  - diagnosis_angles: ${JSON.stringify(manifest.diagnoses.map(d => d.angle))}

## Task Grouping Rules
@@ -467,11 +552,21 @@ Generate fix-plan.json with:

## Execution
1. Read ALL diagnosis files for comprehensive context
2. Execute CLI planning using Gemini (Qwen fallback)
2. Execute CLI planning using Gemini (Qwen fallback) with --rule planning-fix-strategy template
3. Synthesize findings from multiple diagnosis angles
4. Parse output and structure fix-plan
5. Write JSON: Write('${sessionFolder}/fix-plan.json', jsonContent)
6. Return brief completion summary
4. Generate fix-plan with:
   - For High/Critical: REQUIRED new fields (rationale, verification, risks, code_skeleton, data_flow, design_decisions)
   - Each task MUST have rationale (why this fix), verification (how to verify success), and risks (potential issues)
5. Parse output and structure fix-plan
6. Write JSON: Write('${sessionFolder}/fix-plan.json', jsonContent)
7. Return brief completion summary

## Output Format for CLI
Include these sections in your fix-plan output:
- Summary, Root Cause, Strategy (existing)
- Data Flow: Diagram showing affected code paths
- Design Decisions: Key architectural choices in the fix
- Tasks: Each with rationale (Medium/High), verification (Medium/High), risks (High), code_skeleton (High)
`
)
```
@@ -565,7 +660,11 @@ const fixPlan = JSON.parse(Read(`${sessionFolder}/fix-plan.json`))
executionContext = {
  mode: "bugfix",
  severity: fixPlan.severity,
  planObject: fixPlan,
  planObject: {
    ...fixPlan,
    // Ensure complexity is set based on severity for new field consumption
    complexity: fixPlan.complexity || (fixPlan.severity === 'Critical' ? 'High' : (fixPlan.severity === 'High' ? 'High' : 'Medium'))
  },
  diagnosisContext: diagnoses,
  diagnosisAngles: manifest.diagnoses.map(d => d.angle),
  diagnosisManifest: manifest,
.claude/commands/workflow/lite-lite-lite.md (new file, 461 lines)
@@ -0,0 +1,461 @@
---
name: workflow:lite-lite-lite
description: Ultra-lightweight multi-tool analysis and direct execution. No artifacts for simple tasks; auto-creates planning docs in .workflow/.scratchpad/ for complex tasks. Auto tool selection based on task analysis, user-driven iteration via AskUser.
argument-hint: "<task description>"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), Write(*), mcp__ace-tool__search_context(*), mcp__ccw-tools__write_file(*)
---

# Ultra-Lite Multi-Tool Workflow

## Quick Start

```bash
/workflow:lite-lite-lite "Fix the login bug"
/workflow:lite-lite-lite "Refactor payment module for multi-gateway support"
```

**Core Philosophy**: Minimal friction, maximum velocity. Simple tasks = no artifacts. Complex tasks = lightweight planning doc in `.workflow/.scratchpad/`.

## Overview

**Complexity-aware workflow**: Clarify → Assess Complexity → Select Tools → Multi-Mode Analysis → Decision → Direct Execution

**vs multi-cli-plan**: No IMPL_PLAN.md, plan.json, or synthesis.json; state lives in memory, or in a lightweight scratchpad doc for complex tasks.

## Execution Flow

```
Phase 1: Clarify Requirements → AskUser for missing details
Phase 1.5: Assess Complexity → Determine if planning doc needed
Phase 2: Select Tools (CLI → Mode → Agent) → 3-step selection
Phase 3: Multi-Mode Analysis → Execute with --resume chaining
Phase 4: User Decision → Execute / Refine / Change / Cancel
Phase 5: Direct Execution → No plan files (simple) or scratchpad doc (complex)
```

## Phase 1: Clarify Requirements

```javascript
const taskDescription = $ARGUMENTS

if (taskDescription.length < 20 || isAmbiguous(taskDescription)) {
  AskUserQuestion({
    questions: [{
      question: "Please provide more details: target files/modules, expected behavior, constraints?",
      header: "Details",
      options: [
        { label: "I'll provide more", description: "Add more context" },
        { label: "Continue analysis", description: "Let tools explore autonomously" }
      ],
      multiSelect: false
    }]
  })
}

// Optional: Quick ACE Context for complex tasks
mcp__ace-tool__search_context({
  project_root_path: process.cwd(),
  query: `${taskDescription} implementation patterns`
})
```

## Phase 1.5: Assess Complexity

| Level | Creates Plan Doc | Trigger Keywords |
|-------|------------------|------------------|
| **simple** | ❌ | (default) |
| **moderate** | ✅ | module, system, service, integration, multiple |
| **complex** | ✅ | refactor, migrate, security, auth, payment, database |

```javascript
// Complexity detection (after ACE query)
const isComplex = /refactor|migrate|security|auth|payment|database/i.test(taskDescription)
const isModerate = /module|system|service|integration|multiple/i.test(taskDescription) || aceContext?.relevant_files?.length > 2

if (isComplex || isModerate) {
  const planPath = `.workflow/.scratchpad/lite3-${taskSlug}-${dateStr}.md`
  // Create planning doc with: Task, Status, Complexity, Analysis Summary, Execution Plan, Progress Log
}
```

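The keyword heuristics above can be sketched as a single classifier; collapsing the two flags into one level is an assumption for illustration (the doc only defines the flags), and `assessComplexity` is a hypothetical name:

```javascript
// Classify a task description into simple / moderate / complex
// using the documented trigger-keyword regexes.
function assessComplexity(taskDescription, relevantFileCount = 0) {
  const isComplex = /refactor|migrate|security|auth|payment|database/i.test(taskDescription)
  const isModerate = /module|system|service|integration|multiple/i.test(taskDescription) || relevantFileCount > 2
  return isComplex ? 'complex' : (isModerate ? 'moderate' : 'simple')
}
```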
## Phase 2: Select Tools

### Tool Definitions

**CLI Tools** (from cli-tools.json):
```javascript
const cliConfig = JSON.parse(Read("~/.claude/cli-tools.json"))
const cliTools = Object.entries(cliConfig.tools)
  .filter(([_, config]) => config.enabled)
  .map(([name, config]) => ({
    name, type: 'cli',
    tags: config.tags || [],
    model: config.primaryModel,
    toolType: config.type // builtin, cli-wrapper, api-endpoint
  }))
```

**Sub Agents**:

| Agent | Strengths | canExecute |
|-------|-----------|------------|
| **code-developer** | Code implementation, test writing | ✅ |
| **Explore** | Fast code exploration, pattern discovery | ❌ |
| **cli-explore-agent** | Dual-source analysis (Bash+CLI) | ❌ |
| **cli-discuss-agent** | Multi-CLI collaboration, cross-verification | ❌ |
| **debug-explore-agent** | Hypothesis-driven debugging | ❌ |
| **context-search-agent** | Multi-layer file discovery, dependency analysis | ❌ |
| **test-fix-agent** | Test execution, failure diagnosis, code fixing | ✅ |
| **universal-executor** | General execution, multi-domain adaptation | ✅ |

**Analysis Modes**:

| Mode | Pattern | Use Case | minCLIs |
|------|---------|----------|---------|
| **Parallel** | `A \|\| B \|\| C → Aggregate` | Fast multi-perspective | 1+ |
| **Sequential** | `A → B(resume) → C(resume)` | Incremental deepening | 2+ |
| **Collaborative** | `A → B → A → B → Synthesize` | Multi-round refinement | 2+ |
| **Debate** | `A(propose) → B(challenge) → A(defend)` | Adversarial validation | 2 |
| **Challenge** | `A(analyze) → B(challenge)` | Find flaws and risks | 2 |

### Three-Step Selection Flow

```javascript
// Step 1: Select CLIs (multiSelect)
AskUserQuestion({
  questions: [{
    question: "Select CLI tools for analysis (1-3 for collaboration modes)",
    header: "CLI Tools",
    options: cliTools.map(cli => ({
      label: cli.name,
      description: cli.tags.length > 0 ? cli.tags.join(', ') : cli.model || 'general'
    })),
    multiSelect: true
  }]
})

// Step 2: Select Mode (filtered by CLI count)
const availableModes = analysisModes.filter(m => selectedCLIs.length >= m.minCLIs)
AskUserQuestion({
  questions: [{
    question: "Select analysis mode",
    header: "Mode",
    options: availableModes.map(m => ({
      label: m.label,
      description: `${m.description} [${m.pattern}]`
    })),
    multiSelect: false
  }]
})

// Step 3: Select Agent for execution
AskUserQuestion({
  questions: [{
    question: "Select Sub Agent for execution",
    header: "Agent",
    options: agents.map(a => ({ label: a.name, description: a.strength })),
    multiSelect: false
  }]
})

// Confirm selection
AskUserQuestion({
  questions: [{
    question: "Confirm selection?",
    header: "Confirm",
    options: [
      { label: "Confirm and continue", description: `${selectedMode.label} with ${selectedCLIs.length} CLIs` },
      { label: "Re-select CLIs", description: "Choose different CLI tools" },
      { label: "Re-select Mode", description: "Choose different analysis mode" },
      { label: "Re-select Agent", description: "Choose different Sub Agent" }
    ],
    multiSelect: false
  }]
})
```

## Phase 3: Multi-Mode Analysis

### Universal CLI Prompt Template

```javascript
// Unified prompt builder - used by all modes
function buildPrompt({ purpose, tasks, expected, rules, taskDescription }) {
  return `
PURPOSE: ${purpose}: ${taskDescription}
TASK: ${tasks.map(t => `• ${t}`).join(' ')}
MODE: analysis
CONTEXT: @**/*
EXPECTED: ${expected}
CONSTRAINTS: ${rules}
`
}

// Execute CLI with prompt
function execCLI(cli, prompt, options = {}) {
  const { resume, background = false } = options
  const resumeFlag = resume ? `--resume ${resume}` : ''
  return Bash({
    command: `ccw cli -p "${prompt}" --tool ${cli.name} --mode analysis ${resumeFlag}`,
    run_in_background: background
  })
}
```

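A minimal usage sketch of the prompt builder, applied to the `initial` role preset (a self-contained copy of `buildPrompt` is repeated here so the snippet runs on its own; the resulting prompt shape is illustrative):

```javascript
// Same builder as above, reproduced for a standalone example.
function buildPrompt({ purpose, tasks, expected, rules, taskDescription }) {
  return `
PURPOSE: ${purpose}: ${taskDescription}
TASK: ${tasks.map(t => `• ${t}`).join(' ')}
MODE: analysis
CONTEXT: @**/*
EXPECTED: ${expected}
CONSTRAINTS: ${rules}
`
}

const prompt = buildPrompt({
  purpose: 'Initial analysis',
  tasks: ['Identify affected files', 'Analyze implementation approach'],
  expected: 'Root cause, files to modify, key changes, risks',
  rules: 'Focus on actionable insights',
  taskDescription: 'Fix the login bug'
})
```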
### Prompt Presets by Role

| Role | PURPOSE | TASKS | EXPECTED | RULES |
|------|---------|-------|----------|-------|
| **initial** | Initial analysis | Identify files, Analyze approach, List changes | Root cause, files, changes, risks | Focus on actionable insights |
| **extend** | Build on previous | Review previous, Extend, Add insights | Extended analysis building on findings | Build incrementally, avoid repetition |
| **synthesize** | Refine and synthesize | Review, Identify gaps, Synthesize | Refined synthesis with new perspectives | Add value not repetition |
| **propose** | Propose comprehensive analysis | Analyze thoroughly, Propose solution, State assumptions | Well-reasoned proposal with trade-offs | Be clear about assumptions |
| **challenge** | Challenge and stress-test | Identify weaknesses, Question assumptions, Suggest alternatives | Critique with counter-arguments | Be adversarial but constructive |
| **defend** | Respond to challenges | Address challenges, Defend valid aspects, Propose refined solution | Refined proposal incorporating feedback | Be open to criticism, synthesize |
| **criticize** | Find flaws ruthlessly | Find logical flaws, Identify edge cases, Rate criticisms | Critique with severity: [CRITICAL]/[HIGH]/[MEDIUM]/[LOW] | Be ruthlessly critical |

```javascript
const PROMPTS = {
  initial: { purpose: 'Initial analysis', tasks: ['Identify affected files', 'Analyze implementation approach', 'List specific changes'], expected: 'Root cause, files to modify, key changes, risks', rules: 'Focus on actionable insights' },
  extend: { purpose: 'Build on previous analysis', tasks: ['Review previous findings', 'Extend analysis', 'Add new insights'], expected: 'Extended analysis building on previous', rules: 'Build incrementally, avoid repetition' },
  synthesize: { purpose: 'Refine and synthesize', tasks: ['Review previous', 'Identify gaps', 'Add insights', 'Synthesize findings'], expected: 'Refined synthesis with new perspectives', rules: 'Build collaboratively, add value' },
  propose: { purpose: 'Propose comprehensive analysis', tasks: ['Analyze thoroughly', 'Propose solution', 'State assumptions clearly'], expected: 'Well-reasoned proposal with trade-offs', rules: 'Be clear about assumptions' },
  challenge: { purpose: 'Challenge and stress-test', tasks: ['Identify weaknesses', 'Question assumptions', 'Suggest alternatives', 'Highlight overlooked risks'], expected: 'Constructive critique with counter-arguments', rules: 'Be adversarial but constructive' },
  defend: { purpose: 'Respond to challenges', tasks: ['Address each challenge', 'Defend valid aspects', 'Acknowledge valid criticisms', 'Propose refined solution'], expected: 'Refined proposal incorporating alternatives', rules: 'Be open to criticism, synthesize best ideas' },
  criticize: { purpose: 'Stress-test and find weaknesses', tasks: ['Find logical flaws', 'Identify missed edge cases', 'Propose alternatives', 'Rate criticisms (Critical/High/Medium/Low)'], expected: 'Detailed critique with severity ratings', rules: 'Be ruthlessly critical, find every flaw' }
}
```

### Mode Implementations

```javascript
// Parallel: All CLIs run simultaneously
async function executeParallel(clis, task) {
  return await Promise.all(clis.map(cli =>
    execCLI(cli, buildPrompt({ ...PROMPTS.initial, taskDescription: task }), { background: true })
  ))
}

// Sequential: Each CLI builds on previous via --resume
async function executeSequential(clis, task) {
  const results = []
  let prevId = null
  for (const cli of clis) {
    const preset = prevId ? PROMPTS.extend : PROMPTS.initial
    const result = await execCLI(cli, buildPrompt({ ...preset, taskDescription: task }), { resume: prevId })
    results.push(result)
    prevId = extractSessionId(result)
  }
  return results
}

// Collaborative: Multi-round synthesis
async function executeCollaborative(clis, task, rounds = 2) {
  const results = []
  let prevId = null
  for (let r = 0; r < rounds; r++) {
    for (const cli of clis) {
      const preset = !prevId ? PROMPTS.initial : PROMPTS.synthesize
      const result = await execCLI(cli, buildPrompt({ ...preset, taskDescription: task }), { resume: prevId })
      results.push({ cli: cli.name, round: r, result })
      prevId = extractSessionId(result)
    }
  }
  return results
}

// Debate: Propose → Challenge → Defend
async function executeDebate(clis, task) {
  const [cliA, cliB] = clis
  const results = []

  const propose = await execCLI(cliA, buildPrompt({ ...PROMPTS.propose, taskDescription: task }))
  results.push({ phase: 'propose', cli: cliA.name, result: propose })

  const challenge = await execCLI(cliB, buildPrompt({ ...PROMPTS.challenge, taskDescription: task }), { resume: extractSessionId(propose) })
  results.push({ phase: 'challenge', cli: cliB.name, result: challenge })

  const defend = await execCLI(cliA, buildPrompt({ ...PROMPTS.defend, taskDescription: task }), { resume: extractSessionId(challenge) })
  results.push({ phase: 'defend', cli: cliA.name, result: defend })

  return results
}

// Challenge: Analyze → Criticize
async function executeChallenge(clis, task) {
  const [cliA, cliB] = clis
  const results = []

  const analyze = await execCLI(cliA, buildPrompt({ ...PROMPTS.initial, taskDescription: task }))
  results.push({ phase: 'analyze', cli: cliA.name, result: analyze })

  const criticize = await execCLI(cliB, buildPrompt({ ...PROMPTS.criticize, taskDescription: task }), { resume: extractSessionId(analyze) })
  results.push({ phase: 'challenge', cli: cliB.name, result: criticize })

  return results
}
```

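The sequential, collaborative, debate, and challenge modes all rely on an `extractSessionId` helper that the command does not define. A tolerant sketch, assuming the CLI echoes a line such as `session_id: <id>` (the exact output format is an assumption, not documented ccw behavior):

```javascript
// Hypothetical helper: pull a session id out of CLI output so the next call
// can pass --resume. The "session_id: <id>" line format is an assumption.
function extractSessionId(output) {
  const text = typeof output === 'string' ? output : (output && output.stdout) || ''
  const match = text.match(/session[_-]?id\s*[:=]\s*([A-Za-z0-9-]+)/i)
  return match ? match[1] : null
}
```

Returning `null` when no id is found lets the next round fall back to a fresh session instead of failing.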
### Mode Router & Result Aggregation

```javascript
async function executeAnalysis(mode, clis, taskDescription) {
  switch (mode.name) {
    case 'parallel': return await executeParallel(clis, taskDescription)
    case 'sequential': return await executeSequential(clis, taskDescription)
    case 'collaborative': return await executeCollaborative(clis, taskDescription)
    case 'debate': return await executeDebate(clis, taskDescription)
    case 'challenge': return await executeChallenge(clis, taskDescription)
  }
}

function aggregateResults(mode, results) {
  const base = { mode: mode.name, pattern: mode.pattern, tools_used: results.map(r => r.cli || 'unknown') }

  switch (mode.name) {
    case 'parallel':
      return { ...base, findings: results.map(parseOutput), consensus: findCommonPoints(results), divergences: findDifferences(results) }
    case 'sequential':
      return { ...base, evolution: results.map((r, i) => ({ step: i + 1, analysis: parseOutput(r) })), finalAnalysis: parseOutput(results.at(-1)) }
    case 'collaborative':
      return { ...base, rounds: groupByRound(results), synthesis: extractSynthesis(results.at(-1)) }
    case 'debate':
      return { ...base, proposal: parseOutput(results.find(r => r.phase === 'propose')?.result),
        challenges: parseOutput(results.find(r => r.phase === 'challenge')?.result),
        resolution: parseOutput(results.find(r => r.phase === 'defend')?.result), confidence: calculateDebateConfidence(results) }
    case 'challenge':
      return { ...base, originalAnalysis: parseOutput(results.find(r => r.phase === 'analyze')?.result),
        critiques: parseCritiques(results.find(r => r.phase === 'challenge')?.result), riskScore: calculateRiskScore(results) }
  }
}

// If planPath exists: update Analysis Summary & Execution Plan sections
```

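`findCommonPoints` and `findDifferences` are referenced above but never defined. A naive sketch, assuming each result has already been reduced (e.g. via `parseOutput`) to an array of normalized finding strings:

```javascript
// Naive consensus helpers (sketch, not the command's actual implementation).
// Assumes each CLI result is already an array of finding strings.
function findCommonPoints(findings) {
  const [first = [], ...rest] = findings
  return first.filter(point => rest.every(other => other.includes(point)))
}

function findDifferences(findings) {
  return findings.flatMap((own, i) =>
    own.filter(point => findings.every((other, j) => i === j || !other.includes(point))))
}

const parsed = [
  ['use cache', 'add index'],
  ['use cache', 'rewrite query']
]
console.log(findCommonPoints(parsed)) // → [ 'use cache' ]
console.log(findDifferences(parsed))  // → [ 'add index', 'rewrite query' ]
```

Real implementations would likely use fuzzy or semantic matching rather than exact string equality.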
## Phase 4: User Decision

```javascript
function presentSummary(analysis) {
  console.log(`## Analysis Result\n**Mode**: ${analysis.mode} (${analysis.pattern})\n**Tools**: ${analysis.tools_used.join(' → ')}`)

  switch (analysis.mode) {
    case 'parallel':
      console.log(`### Consensus\n${analysis.consensus.map(c => `- ${c}`).join('\n')}\n### Divergences\n${analysis.divergences.map(d => `- ${d}`).join('\n')}`)
      break
    case 'sequential':
      console.log(`### Evolution\n${analysis.evolution.map(e => `**Step ${e.step}**: ${e.analysis.summary}`).join('\n')}\n### Final\n${analysis.finalAnalysis.summary}`)
      break
    case 'collaborative':
      console.log(`### Rounds\n${Object.entries(analysis.rounds).map(([r, a]) => `**Round ${r}**: ${a.map(x => x.cli).join(' + ')}`).join('\n')}\n### Synthesis\n${analysis.synthesis}`)
      break
    case 'debate':
      console.log(`### Debate\n**Proposal**: ${analysis.proposal.summary}\n**Challenges**: ${analysis.challenges.points?.length || 0} points\n**Resolution**: ${analysis.resolution.summary}\n**Confidence**: ${analysis.confidence}%`)
      break
    case 'challenge':
      console.log(`### Challenge\n**Original**: ${analysis.originalAnalysis.summary}\n**Critiques**: ${analysis.critiques.length} issues\n${analysis.critiques.map(c => `- [${c.severity}] ${c.description}`).join('\n')}\n**Risk Score**: ${analysis.riskScore}/100`)
      break
  }
}

AskUserQuestion({
  questions: [{
    question: "How to proceed?",
    header: "Next Step",
    options: [
      { label: "Execute directly", description: "Implement immediately" },
      { label: "Refine analysis", description: "Add constraints, re-analyze" },
      { label: "Change tools", description: "Different tool combination" },
      { label: "Cancel", description: "End workflow" }
    ],
    multiSelect: false
  }]
})
// If planPath exists: record decision to Decisions Made table
// Routing: Execute → Phase 5 | Refine → Phase 3 | Change → Phase 2 | Cancel → End
```

## Phase 5: Direct Execution

```javascript
// Simple tasks: No artifacts | Complex tasks: Update scratchpad doc
const executionAgents = agents.filter(a => a.canExecute)
const executionTool = selectedAgent.canExecute ? selectedAgent : selectedCLIs[0]

if (executionTool.type === 'agent') {
  Task({
    subagent_type: executionTool.name,
    run_in_background: false,
    description: `Execute: ${taskDescription.slice(0, 30)}`,
    prompt: `## Task\n${taskDescription}\n\n## Analysis Results\n${JSON.stringify(aggregatedAnalysis, null, 2)}\n\n## Instructions\n1. Apply changes to identified files\n2. Follow recommended approach\n3. Handle identified risks\n4. Verify changes work correctly`
  })
} else {
  Bash({
    command: `ccw cli -p "
PURPOSE: Implement solution: ${taskDescription}
TASK: ${extractedTasks.join(' • ')}
MODE: write
CONTEXT: @${affectedFiles.join(' @')}
EXPECTED: Working implementation with all changes applied
CONSTRAINTS: Follow existing patterns
" --tool ${executionTool.name} --mode write`,
    run_in_background: false
  })
}
// If planPath exists: update Status to completed/failed, append to Progress Log
```

## TodoWrite Structure

```javascript
TodoWrite({ todos: [
  { content: "Phase 1: Clarify requirements", status: "in_progress", activeForm: "Clarifying requirements" },
  { content: "Phase 1.5: Assess complexity", status: "pending", activeForm: "Assessing complexity" },
  { content: "Phase 2: Select tools", status: "pending", activeForm: "Selecting tools" },
  { content: "Phase 3: Multi-mode analysis", status: "pending", activeForm: "Running analysis" },
  { content: "Phase 4: User decision", status: "pending", activeForm: "Awaiting decision" },
  { content: "Phase 5: Direct execution", status: "pending", activeForm: "Executing" }
]})
```

## Iteration Patterns

| Pattern | Flow |
|---------|------|
| **Direct** | Phase 1 → 2 → 3 → 4(execute) → 5 |
| **Refinement** | Phase 3 → 4(refine) → 3 → 4 → 5 |
| **Tool Adjust** | Phase 2(adjust) → 3 → 4 → 5 |

## Error Handling

| Error | Resolution |
|-------|------------|
| CLI timeout | Retry with secondary model |
| No enabled tools | Ask user to enable tools in cli-tools.json |
| Task unclear | Default to first CLI + code-developer |
| Ambiguous task | Force clarification via AskUserQuestion |
| Execution fails | Present error, ask user for direction |
| Plan doc write fails | Continue without doc (degrade to zero-artifact mode) |
| Scratchpad dir missing | Auto-create `.workflow/.scratchpad/` |

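The first row ("CLI timeout → retry with secondary model") can be sketched as a simple fallback loop. The tool names and the `run` callback here are illustrative assumptions, not part of the command spec:

```javascript
// Illustrative fallback loop for the "CLI timeout" row: try each tool in
// order until one succeeds. Tool names and the run callback are assumptions.
async function execWithFallback(chain, run) {
  let lastError
  for (const tool of chain) {
    try {
      return await run(tool)
    } catch (err) {
      lastError = err // timeout or failure: fall through to the next tool
    }
  }
  throw lastError
}

// Usage sketch:
// const result = await execWithFallback(['gemini', 'codex', 'claude'],
//   tool => execCLI({ name: tool }, prompt))
```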
## Comparison with multi-cli-plan

| Aspect | lite-lite-lite | multi-cli-plan |
|--------|----------------|----------------|
| **Artifacts** | Conditional (scratchpad doc for complex tasks) | Always (IMPL_PLAN.md, plan.json, synthesis.json) |
| **Session** | Stateless (--resume chaining) | Persistent session folder |
| **Tool Selection** | 3-step (CLI → Mode → Agent) | Config-driven fixed tools |
| **Analysis Modes** | 5 modes with --resume | Fixed synthesis rounds |
| **Complexity** | Auto-detected (simple/moderate/complex) | Assumed complex |
| **Best For** | Quick analysis, simple-to-moderate tasks | Complex multi-step implementations |

## Post-Completion Expansion

After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, invoke `/issue:new "{summary} - {dimension}"`.

## Related Commands

```bash
/workflow:multi-cli-plan "complex task"   # Full planning workflow
/workflow:lite-plan "task"                # Single CLI planning
/workflow:lite-execute --in-memory        # Direct execution
```

@@ -497,6 +497,7 @@ ${plan.tasks.map((t, i) => `${i+1}. ${t.title} (${t.file})`).join('\n')}

**Step 4.2: Collect Confirmation**
```javascript
// Note: Execution "Other" option allows specifying CLI tools from ~/.claude/cli-tools.json
AskUserQuestion({
  questions: [
    {
@@ -524,8 +525,9 @@ AskUserQuestion({
      header: "Review",
      multiSelect: false,
      options: [
        { label: "Gemini Review", description: "Gemini CLI review" },
        { label: "Codex Review", description: "Git-aware review (prompt OR --uncommitted)" },
        { label: "Agent Review", description: "@code-reviewer agent" },
        { label: "Skip", description: "No review" }
      ]
    }
```

.claude/commands/workflow/multi-cli-plan.md (new file, 568 lines)
@@ -0,0 +1,568 @@

---
name: workflow:multi-cli-plan
description: Multi-CLI collaborative planning workflow with ACE context gathering and iterative cross-verification. Uses cli-discuss-agent for Gemini+Codex+Claude analysis to converge on optimal execution plan.
argument-hint: "<task description> [--max-rounds=3] [--tools=gemini,codex] [--mode=parallel|serial]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), Write(*), mcp__ace-tool__search_context(*)
---

# Multi-CLI Collaborative Planning Command

## Quick Start

```bash
# Basic usage
/workflow:multi-cli-plan "Implement user authentication"

# With options
/workflow:multi-cli-plan "Add dark mode support" --max-rounds=3
/workflow:multi-cli-plan "Refactor payment module" --tools=gemini,codex,claude
/workflow:multi-cli-plan "Fix memory leak" --mode=serial
```

**Context Source**: ACE semantic search + Multi-CLI analysis
**Output Directory**: `.workflow/.multi-cli-plan/{session-id}/`
**Default Max Rounds**: 3 (convergence may complete earlier)
**CLI Tools**: @cli-discuss-agent (analysis), @cli-lite-planning-agent (plan generation)
**Execution**: Auto-hands off to `/workflow:lite-execute --in-memory` after plan approval

## What & Why

### Core Concept

Multi-CLI collaborative planning with a **three-phase architecture**: ACE context gathering → iterative multi-CLI discussion → plan generation. The orchestrator delegates analysis to agents and handles only user decisions and session management.

**Process**:
- **Phase 1**: ACE semantic search gathers codebase context
- **Phase 2**: cli-discuss-agent orchestrates Gemini/Codex/Claude for cross-verified analysis
- **Phase 3-5**: User decision → Plan generation → Execution handoff

**vs Single-CLI Planning**:
- **Single**: One model perspective, potential blind spots
- **Multi-CLI**: Cross-verification catches inconsistencies, builds consensus on solutions

### Value Proposition

1. **Multi-Perspective Analysis**: Gemini + Codex + Claude analyze from different angles
2. **Cross-Verification**: Identify agreements/disagreements, build confidence
3. **User-Driven Decisions**: Every round ends with a user decision point
4. **Iterative Convergence**: Progressive refinement until consensus is reached

### Orchestrator Boundary (CRITICAL)

- **ONLY command** for multi-CLI collaborative planning
- Manages: Session state, user decisions, agent delegation, phase transitions
- Delegates: CLI execution to @cli-discuss-agent, plan generation to @cli-lite-planning-agent

### Execution Flow

```
Phase 1: Context Gathering
└─ ACE semantic search, extract keywords, build context package

Phase 2: Multi-CLI Discussion (Iterative, via @cli-discuss-agent)
├─ Round N: Agent executes Gemini + Codex + Claude
├─ Cross-verify findings, synthesize solutions
├─ Write synthesis.json to rounds/{N}/
└─ Loop until convergence or max rounds

Phase 3: Present Options
└─ Display solutions with trade-offs from agent output

Phase 4: User Decision
├─ Select solution approach
├─ Select execution method (Agent/Codex/Auto)
├─ Select code review tool (Skip/Gemini/Codex/Agent)
└─ Route:
   ├─ Approve → Phase 5
   ├─ Need More Analysis → Return to Phase 2
   └─ Cancel → Save session

Phase 5: Plan Generation & Execution Handoff
├─ Generate plan.json (via @cli-lite-planning-agent)
├─ Build executionContext with user selections
└─ Hand off to /workflow:lite-execute --in-memory
```

### Agent Roles

| Agent | Responsibility |
|-------|---------------|
| **Orchestrator** | Session management, ACE context, user decisions, phase transitions, executionContext assembly |
| **@cli-discuss-agent** | Multi-CLI execution (Gemini/Codex/Claude), cross-verification, solution synthesis, synthesis.json output |
| **@cli-lite-planning-agent** | Task decomposition, plan.json generation following schema |

## Core Responsibilities

### Phase 1: Context Gathering

**Session Initialization**:
```javascript
const sessionId = `MCP-${taskSlug}-${date}`
const sessionFolder = `.workflow/.multi-cli-plan/${sessionId}`
Bash(`mkdir -p ${sessionFolder}/rounds`)
```

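The snippet above assumes `taskSlug` and `date` are already derived; one plausible derivation (a hypothetical helper, not part of the command spec):

```javascript
// Hypothetical derivation of the MCP-{taskSlug}-{date} session id used above.
function makeSessionId(taskDescription, now = new Date()) {
  const taskSlug = taskDescription
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '')
    .slice(0, 30)
  const date = now.toISOString().slice(0, 10) // YYYY-MM-DD
  return `MCP-${taskSlug}-${date}`
}

console.log(makeSessionId('Refactor payment module', new Date('2026-01-14')))
// → MCP-refactor-payment-module-2026-01-14
```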
**ACE Context Queries**:
```javascript
const aceQueries = [
  `Project architecture related to ${keywords}`,
  `Existing implementations of ${keywords[0]}`,
  `Code patterns for ${keywords} features`,
  `Integration points for ${keywords[0]}`
]
// Execute via mcp__ace-tool__search_context
```

**Context Package** (passed to agent):
- `relevant_files[]` - Files identified by ACE
- `detected_patterns[]` - Code patterns found
- `architecture_insights` - Structure understanding

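The queries above interpolate a `keywords` array; a minimal extraction sketch (the stopword list and limit are illustrative assumptions):

```javascript
// Minimal keyword extraction feeding the ACE queries (illustrative sketch).
const STOPWORDS = new Set(['the', 'a', 'an', 'for', 'to', 'of', 'and', 'in', 'with', 'on'])

function extractKeywords(taskDescription, limit = 4) {
  return taskDescription
    .toLowerCase()
    .split(/[^a-z0-9]+/)
    .filter(word => word.length > 2 && !STOPWORDS.has(word))
    .slice(0, limit)
}

console.log(extractKeywords('Refactor payment processing for multi-gateway support'))
// → [ 'refactor', 'payment', 'processing', 'multi' ]
```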
### Phase 2: Agent Delegation

**Core Principle**: The orchestrator only delegates and reads output - NO direct CLI execution.

**Agent Invocation**:
```javascript
Task({
  subagent_type: "cli-discuss-agent",
  run_in_background: false,
  description: `Discussion round ${currentRound}`,
  prompt: `
## Input Context
- task_description: ${taskDescription}
- round_number: ${currentRound}
- session: { id: "${sessionId}", folder: "${sessionFolder}" }
- ace_context: ${JSON.stringify(contextPackage)}
- previous_rounds: ${JSON.stringify(analysisResults)}
- user_feedback: ${userFeedback || 'None'}
- cli_config: { tools: ["gemini", "codex"], mode: "parallel", fallback_chain: ["gemini", "codex", "claude"] }

## Execution Process
1. Parse input context (handle JSON strings)
2. Check if ACE supplementary search needed
3. Build CLI prompts with context
4. Execute CLIs (parallel or serial per cli_config.mode)
5. Parse CLI outputs, handle failures with fallback
6. Perform cross-verification between CLI results
7. Synthesize solutions, calculate scores
8. Calculate convergence, generate clarification questions
9. Write synthesis.json

## Output
Write: ${sessionFolder}/rounds/${currentRound}/synthesis.json

## Completion Checklist
- [ ] All configured CLI tools executed (or fallback triggered)
- [ ] Cross-verification completed with agreements/disagreements
- [ ] 2-3 solutions generated with file:line references
- [ ] Convergence score calculated (0.0-1.0)
- [ ] synthesis.json written with all Primary Fields
`
})
```

**Read Agent Output**:
```javascript
const synthesis = JSON.parse(Read(`${sessionFolder}/rounds/${round}/synthesis.json`))
// Access top-level fields: solutions, convergence, cross_verification, clarification_questions
```

**Convergence Decision**:
```javascript
if (synthesis.convergence.recommendation === 'converged') {
  // Proceed to Phase 3
} else if (synthesis.convergence.recommendation === 'user_input_needed') {
  // Collect user feedback, return to Phase 2
} else {
  // Continue to next round if new_insights && round < maxRounds
}
```

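The convergence branch above can be packaged as a small router; a sketch where the returned labels are illustrative, not part of the synthesis.json contract:

```javascript
// Sketch of the convergence routing above; the returned labels are
// illustrative, not part of the synthesis.json contract.
function routeAfterRound(synthesis, round, maxRounds) {
  const { recommendation, new_insights } = synthesis.convergence
  if (recommendation === 'converged') return 'present_options'          // Phase 3
  if (recommendation === 'user_input_needed') return 'collect_feedback' // back to Phase 2
  return new_insights && round < maxRounds ? 'next_round' : 'present_options'
}

console.log(routeAfterRound({ convergence: { recommendation: 'continue', new_insights: true } }, 1, 3))
// → next_round
```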
### Phase 3: Present Options

**Display from Agent Output** (no processing):
```javascript
console.log(`
## Solution Options

${synthesis.solutions.map((s, i) => `
**Option ${i+1}: ${s.name}**
Source: ${s.source_cli.join(' + ')}
Effort: ${s.effort} | Risk: ${s.risk}

Pros: ${s.pros.join(', ')}
Cons: ${s.cons.join(', ')}

Files: ${s.affected_files.slice(0,3).map(f => `${f.file}:${f.line}`).join(', ')}
`).join('\n')}

## Cross-Verification
Agreements: ${synthesis.cross_verification.agreements.length}
Disagreements: ${synthesis.cross_verification.disagreements.length}
`)
```

### Phase 4: User Decision

**Decision Options**:
```javascript
AskUserQuestion({
  questions: [
    {
      question: "Which solution approach?",
      header: "Solution",
      multiSelect: false,
      options: solutions.map((s, i) => ({
        label: `Option ${i+1}: ${s.name}`,
        description: `${s.effort} effort, ${s.risk} risk`
      })).concat([
        { label: "Need More Analysis", description: "Return to Phase 2" }
      ])
    },
    {
      question: "Execution method:",
      header: "Execution",
      multiSelect: false,
      options: [
        { label: "Agent", description: "@code-developer agent" },
        { label: "Codex", description: "codex CLI tool" },
        { label: "Auto", description: "Auto-select based on complexity" }
      ]
    },
    {
      question: "Code review after execution?",
      header: "Review",
      multiSelect: false,
      options: [
        { label: "Skip", description: "No review" },
        { label: "Gemini Review", description: "Gemini CLI tool" },
        { label: "Codex Review", description: "codex review --uncommitted" },
        { label: "Agent Review", description: "Current agent review" }
      ]
    }
  ]
})
```

**Routing**:
- Approve + execution method → Phase 5
- Need More Analysis → Phase 2 with feedback
- Cancel → Save session for resumption

### Phase 5: Plan Generation & Execution Handoff

**Step 1: Build Context-Package** (Orchestrator responsibility):
```javascript
// Extract key information from user decision and synthesis
const contextPackage = {
  // Core solution details
  solution: {
    name: selectedSolution.name,
    source_cli: selectedSolution.source_cli,
    feasibility: selectedSolution.feasibility,
    effort: selectedSolution.effort,
    risk: selectedSolution.risk,
    summary: selectedSolution.summary
  },
  // Implementation plan (tasks, flow, milestones)
  implementation_plan: selectedSolution.implementation_plan,
  // Dependencies
  dependencies: selectedSolution.dependencies || { internal: [], external: [] },
  // Technical concerns
  technical_concerns: selectedSolution.technical_concerns || [],
  // Consensus from cross-verification
  consensus: {
    agreements: synthesis.cross_verification.agreements,
    resolved_conflicts: synthesis.cross_verification.resolution
  },
  // User constraints (from Phase 4 feedback)
  constraints: userConstraints || [],
  // Task context
  task_description: taskDescription,
  session_id: sessionId
}

// Write context-package for traceability
Write(`${sessionFolder}/context-package.json`, JSON.stringify(contextPackage, null, 2))
```

**Context-Package Schema**:

| Field | Type | Description |
|-------|------|-------------|
| `solution` | object | User-selected solution from synthesis |
| `solution.name` | string | Solution identifier |
| `solution.feasibility` | number | Viability score (0-1) |
| `solution.summary` | string | Brief analysis summary |
| `implementation_plan` | object | Task breakdown with flow and dependencies |
| `implementation_plan.approach` | string | High-level technical strategy |
| `implementation_plan.tasks[]` | array | Discrete tasks with id, name, depends_on, files |
| `implementation_plan.execution_flow` | string | Task sequence (e.g., "T1 → T2 → T3") |
| `implementation_plan.milestones` | string[] | Key checkpoints |
| `dependencies` | object | Module and package dependencies |
| `technical_concerns` | string[] | Risks and blockers |
| `consensus` | object | Cross-verified agreements from multi-CLI |
| `constraints` | string[] | User-specified constraints from Phase 4 |

```json
{
  "solution": {
    "name": "Strategy Pattern Refactoring",
    "source_cli": ["gemini", "codex"],
    "feasibility": 0.88,
    "effort": "medium",
    "risk": "low",
    "summary": "Extract payment gateway interface, implement strategy pattern for multi-gateway support"
  },
  "implementation_plan": {
    "approach": "Define interface → Create concrete strategies → Implement factory → Migrate existing code",
    "tasks": [
      {"id": "T1", "name": "Define PaymentGateway interface", "depends_on": [], "files": [{"file": "src/types/payment.ts", "line": 1, "action": "create"}], "key_point": "Include all existing Stripe methods"},
      {"id": "T2", "name": "Implement StripeGateway", "depends_on": ["T1"], "files": [{"file": "src/payment/stripe.ts", "line": 1, "action": "create"}], "key_point": "Wrap existing logic"},
      {"id": "T3", "name": "Create GatewayFactory", "depends_on": ["T1"], "files": [{"file": "src/payment/factory.ts", "line": 1, "action": "create"}], "key_point": null},
      {"id": "T4", "name": "Migrate processor to use factory", "depends_on": ["T2", "T3"], "files": [{"file": "src/payment/processor.ts", "line": 45, "action": "modify"}], "key_point": "Backward compatible"}
    ],
    "execution_flow": "T1 → (T2 | T3) → T4",
    "milestones": ["Interface defined", "Gateway implementations complete", "Migration done"]
  },
  "dependencies": {
    "internal": ["@/lib/payment-gateway", "@/types/payment"],
    "external": ["stripe@^14.0.0"]
  },
  "technical_concerns": ["Existing tests must pass", "No breaking API changes"],
  "consensus": {
    "agreements": ["Use strategy pattern", "Keep existing API"],
    "resolved_conflicts": "Factory over DI for simpler integration"
  },
  "constraints": ["backward compatible", "no breaking changes to PaymentResult type"],
  "task_description": "Refactor payment processing for multi-gateway support",
  "session_id": "MCP-payment-refactor-2026-01-14"
}
```

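Before invoking the planning agent, the orchestrator could sanity-check the package. This guard is illustrative, not part of the spec; it only verifies the fields the planning agent's checklist depends on:

```javascript
// Illustrative guard (not part of the spec): verify the context-package
// carries the fields the planning agent's checklist depends on.
function validateContextPackage(pkg) {
  const missing = []
  if (!pkg.solution || !pkg.solution.name) missing.push('solution.name')
  if (!pkg.implementation_plan || !Array.isArray(pkg.implementation_plan.tasks)) {
    missing.push('implementation_plan.tasks')
  }
  if (!Array.isArray(pkg.constraints)) missing.push('constraints')
  return { valid: missing.length === 0, missing }
}

console.log(validateContextPackage({ solution: { name: 'X' }, implementation_plan: { tasks: [] }, constraints: [] }))
// → { valid: true, missing: [] }
```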
**Step 2: Invoke Planning Agent**:
```javascript
Task({
  subagent_type: "cli-lite-planning-agent",
  run_in_background: false,
  description: "Generate implementation plan",
  prompt: `
## Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json

## Context-Package (from orchestrator)
${JSON.stringify(contextPackage, null, 2)}

## Execution Process
1. Read plan-json-schema.json for output structure
2. Read project-tech.json and project-guidelines.json
3. Parse context-package fields:
   - solution: name, feasibility, summary
   - implementation_plan: tasks[], execution_flow, milestones
   - dependencies: internal[], external[]
   - technical_concerns: risks/blockers
   - consensus: agreements, resolved_conflicts
   - constraints: user requirements
4. Use implementation_plan.tasks[] as task foundation
5. Preserve task dependencies (depends_on) and execution_flow
6. Expand tasks with detailed acceptance criteria
7. Generate plan.json following schema exactly

## Output
- ${sessionFolder}/plan.json

## Completion Checklist
- [ ] plan.json preserves task dependencies from implementation_plan
- [ ] Task execution order follows execution_flow
- [ ] key_point values reflected in task descriptions
- [ ] User constraints applied to implementation
- [ ] Acceptance criteria are testable
- [ ] Schema fields match plan-json-schema.json exactly
`
})
```

**Step 3: Build executionContext**:
```javascript
// After plan.json is generated by cli-lite-planning-agent
const plan = JSON.parse(Read(`${sessionFolder}/plan.json`))

// Build executionContext (same structure as lite-plan)
executionContext = {
  planObject: plan,
  explorationsContext: null,  // Multi-CLI doesn't use exploration files
  explorationAngles: [],      // No exploration angles
  explorationManifest: null,  // No manifest
  clarificationContext: null, // Store user feedback from Phase 2 if exists
  executionMethod: userSelection.execution_method, // From Phase 4
  codeReviewTool: userSelection.code_review_tool,  // From Phase 4
  originalUserInput: taskDescription,

  // Optional: Task-level executor assignments
  executorAssignments: null, // Could be enhanced in future

  session: {
    id: sessionId,
    folder: sessionFolder,
    artifacts: {
      explorations: [], // No explorations in multi-CLI workflow
      explorations_manifest: null,
      plan: `${sessionFolder}/plan.json`,
      synthesis_rounds: Array.from({length: currentRound}, (_, i) =>
        `${sessionFolder}/rounds/${i+1}/synthesis.json`
      ),
      context_package: `${sessionFolder}/context-package.json`
    }
  }
}
```

**Step 4: Hand off to Execution**:
```javascript
// Hand off to lite-execute with in-memory context
SlashCommand("/workflow:lite-execute --in-memory")
```

## Output File Structure

```
.workflow/.multi-cli-plan/{MCP-task-slug-YYYY-MM-DD}/
├── session-state.json         # Session tracking (orchestrator)
├── rounds/
│   ├── 1/synthesis.json       # Round 1 analysis (cli-discuss-agent)
│   ├── 2/synthesis.json       # Round 2 analysis (cli-discuss-agent)
│   └── .../
├── context-package.json       # Extracted context for planning (orchestrator)
└── plan.json                  # Structured plan (cli-lite-planning-agent)
```

**File Producers**:

| File | Producer | Content |
|------|----------|---------|
| `session-state.json` | Orchestrator | Session metadata, rounds, decisions |
| `rounds/*/synthesis.json` | cli-discuss-agent | Solutions, convergence, cross-verification |
| `context-package.json` | Orchestrator | Extracted solution, dependencies, consensus for planning |
| `plan.json` | cli-lite-planning-agent | Structured tasks for lite-execute |

## synthesis.json Schema

```json
{
  "round": 1,
  "solutions": [{
    "name": "Solution Name",
    "source_cli": ["gemini", "codex"],
    "feasibility": 0.85,
    "effort": "low|medium|high",
    "risk": "low|medium|high",
    "summary": "Brief analysis summary",
    "implementation_plan": {
      "approach": "High-level technical approach",
      "tasks": [
        {"id": "T1", "name": "Task", "depends_on": [], "files": [], "key_point": "..."}
      ],
      "execution_flow": "T1 → T2 → T3",
      "milestones": ["Checkpoint 1", "Checkpoint 2"]
    },
    "dependencies": {"internal": [], "external": []},
    "technical_concerns": ["Risk 1", "Blocker 2"]
  }],
  "convergence": {
    "score": 0.85,
    "new_insights": false,
    "recommendation": "converged|continue|user_input_needed"
  },
  "cross_verification": {
    "agreements": [],
    "disagreements": [],
    "resolution": "..."
  },
  "clarification_questions": []
}
```

**Key Planning Fields**:

| Field | Purpose |
|-------|---------|
| `feasibility` | Viability score (0-1) |
| `implementation_plan.tasks[]` | Discrete tasks with dependencies |
| `implementation_plan.execution_flow` | Task sequence visualization |
| `implementation_plan.milestones` | Key checkpoints |
| `technical_concerns` | Risks and blockers |

**Note**: Solutions are ranked by internal scoring (array order = priority).

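The `convergence` block is what drives the round loop. A minimal sketch of how an orchestrator might act on it — the function name and the 0.8 threshold are illustrative, not part of the schema:

```javascript
// Decide the next action from a round's synthesis.json convergence block.
// Assumes the schema above; thresholds and the helper name are illustrative.
function nextAction(synthesis, currentRound, maxRounds = 3) {
  const { score, new_insights, recommendation } = synthesis.convergence;
  if (recommendation === "user_input_needed") return "ask_user";
  // Converged: high agreement and no new insights in this round
  if (recommendation === "converged" || (score >= 0.8 && !new_insights)) {
    return "present_options";
  }
  // Round budget exhausted: present best-effort options, flag uncertainty
  if (currentRound >= maxRounds) return "present_options";
  return "continue_discussion";
}
```

In this sketch, "present_options" corresponds to moving into Phase 3 regardless of whether convergence was reached or the round budget ran out.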
## TodoWrite Structure

**Initialization**:
```javascript
TodoWrite({ todos: [
  { content: "Phase 1: Context Gathering", status: "in_progress", activeForm: "Gathering context" },
  { content: "Phase 2: Multi-CLI Discussion", status: "pending", activeForm: "Running discussion" },
  { content: "Phase 3: Present Options", status: "pending", activeForm: "Presenting options" },
  { content: "Phase 4: User Decision", status: "pending", activeForm: "Awaiting decision" },
  { content: "Phase 5: Plan Generation", status: "pending", activeForm: "Generating plan" }
]})
```

**During Discussion Rounds**:
```javascript
TodoWrite({ todos: [
  { content: "Phase 1: Context Gathering", status: "completed", activeForm: "Gathering context" },
  { content: "Phase 2: Multi-CLI Discussion", status: "in_progress", activeForm: "Running discussion" },
  { content: " → Round 1: Initial analysis", status: "completed", activeForm: "Analyzing" },
  { content: " → Round 2: Deep verification", status: "in_progress", activeForm: "Verifying" },
  { content: "Phase 3: Present Options", status: "pending", activeForm: "Presenting options" },
  // ...
]})
```

## Error Handling

| Error | Resolution |
|-------|------------|
| ACE search fails | Fall back to Glob/Grep for file discovery |
| Agent fails | Retry once, then present partial results |
| CLI timeout (in agent) | Agent uses fallback: gemini → codex → claude |
| No convergence | Present best options, flag uncertainty |
| synthesis.json parse error | Request agent retry |
| User cancels | Save session for later resumption |

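The CLI-timeout row above describes a fallback chain. A hedged sketch of that pattern — `runTool` stands in for the actual CLI invocation, which this document does not specify:

```javascript
// Try each CLI tool in order until one succeeds (gemini → codex → claude).
// runTool(tool, prompt) is a placeholder for the real CLI call.
async function analyzeWithFallback(prompt, runTool, tools = ["gemini", "codex", "claude"]) {
  const errors = [];
  for (const tool of tools) {
    try {
      return { tool, result: await runTool(tool, prompt) };
    } catch (err) {
      errors.push(`${tool}: ${err.message}`); // record the failure and fall through
    }
  }
  throw new Error(`All CLI tools failed: ${errors.join("; ")}`);
}
```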
## Configuration

| Flag | Default | Description |
|------|---------|-------------|
| `--max-rounds` | 3 | Maximum discussion rounds |
| `--tools` | gemini,codex | CLI tools for analysis |
| `--mode` | parallel | Execution mode: parallel or serial |
| `--auto-execute` | false | Auto-execute after approval |

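As an illustration of how the flags combine, assuming the multi-CLI planning command is invoked as `/workflow:multi-cli-plan` (the command name itself is not shown in this section, and the task text is a placeholder):

```bash
# Two rounds max, serial CLI calls, auto-execute the approved plan
/workflow:multi-cli-plan --max-rounds 2 --tools gemini,codex --mode serial --auto-execute "add rate limiting to the API"
```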
## Best Practices

1. **Be Specific**: Detailed task descriptions improve ACE context quality
2. **Provide Feedback**: Use clarification rounds to refine requirements
3. **Trust Cross-Verification**: Multi-CLI consensus indicates high confidence
4. **Review Trade-offs**: Consider pros/cons before selecting a solution
5. **Check synthesis.json**: Review agent output for detailed analysis
6. **Iterate When Needed**: Don't hesitate to request more analysis

## Related Commands

```bash
# Simpler single-round planning
/workflow:lite-plan "task description"

# Issue-driven discovery
/issue:discover-by-prompt "find issues"

# View session files
cat .workflow/.multi-cli-plan/{session-id}/plan.json
cat .workflow/.multi-cli-plan/{session-id}/rounds/1/synthesis.json
cat .workflow/.multi-cli-plan/{session-id}/context-package.json

# Direct execution (if you have plan.json)
/workflow:lite-execute plan.json
```

@@ -585,6 +585,10 @@ TodoWrite({
- Mark completed immediately after each group finishes
- Update parent phase status when all child items complete

## Post-Completion Expansion

After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.

## Best Practices

1. **Trust AI Planning**: The planning agent's grouping and execution strategy are based on dependency analysis

@@ -107,13 +107,13 @@ rm -f .workflow/archives/$SESSION_ID/.archiving
Manifest: Updated with N total sessions
```

### Phase 4: Update project-tech.json (Optional)

**Skip if**: `.workflow/project-tech.json` doesn't exist

```bash
# Check
test -f .workflow/project-tech.json || echo "SKIP"
```

**If exists**, add feature entry:

@@ -134,6 +134,32 @@ test -f .workflow/project.json || echo "SKIP"
✓ Feature added to project registry
```

### Phase 5: Ask About Solidify (Always)

After successful archival, prompt the user to capture learnings:

```javascript
AskUserQuestion({
  questions: [{
    question: "Would you like to solidify learnings from this session into project guidelines?",
    header: "Solidify",
    options: [
      { label: "Yes, solidify now", description: "Extract learnings and update project-guidelines.json" },
      { label: "Skip", description: "Archive complete, no learnings to capture" }
    ],
    multiSelect: false
  }]
})
```

**If "Yes, solidify now"**: Execute `/workflow:session:solidify` with the archived session ID.

**Output**:
```
Session archived successfully.
→ Run /workflow:session:solidify to capture learnings (recommended)
```

## Error Recovery

| Phase | Symptom | Recovery |
@@ -149,5 +175,6 @@ test -f .workflow/project.json || echo "SKIP"
Phase 1: find session → create .archiving marker
Phase 2: read key files → build manifest entry (no writes)
Phase 3: mkdir → mv → update manifest.json → rm marker
Phase 4: update project-tech.json features array (optional)
Phase 5: ask user → solidify learnings (optional)
```

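Phases 1 and 3 of the summary above amount to a marker-guarded move. A minimal shell sketch under the paths shown in the phase summary — the function name is illustrative, and manifest updating plus error handling are elided:

```shell
# archive_session: marker → mkdir → mv → clear marker (Phases 1 and 3 above).
# $1 = session id, $2 = workflow dir (defaults to .workflow)
archive_session() {
  sid="$1"; wf="${2:-.workflow}"
  touch "$wf/active/$sid/.archiving"         # Phase 1: crash-recovery breadcrumb
  mkdir -p "$wf/archives"                    # Phase 3: ensure archive root exists
  mv "$wf/active/$sid" "$wf/archives/$sid"   # Phase 3: move the whole session
  rm -f "$wf/archives/$sid/.archiving"       # Phase 3: clear the marker
}
```

If the process dies between the `mv` and the `rm`, the leftover `.archiving` marker is what the Error Recovery table uses to detect a half-finished archive.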
@@ -16,7 +16,7 @@ examples:
Manages workflow sessions with three operation modes: discovery (manual), auto (intelligent), and force-new.

**Dual Responsibility**:
1. **Project-level initialization** (first-time only): Creates `.workflow/project-tech.json` for feature registry
2. **Session-level initialization** (always): Creates session directory structure

## Session Types

@@ -37,6 +37,44 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
7. **Task Attachment Model**: SlashCommand execution **attaches** sub-tasks to the current workflow. The orchestrator **executes** these attached tasks itself, then **collapses** them after completion
8. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute the next phase

## TDD Compliance Requirements

### The Iron Law

```
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
```

**Enforcement Method**:
- Phase 5: `implementation_approach` includes test-first steps (Red → Green → Refactor)
- Green phase: Includes test-fix-cycle configuration (max 3 iterations)
- Auto-revert: Triggered when max iterations are reached without passing tests

**Verification**: Phase 6 validates the Red-Green-Refactor structure in all generated tasks

### TDD Compliance Checkpoint

| Checkpoint | Validation Phase | Evidence Required |
|------------|------------------|-------------------|
| Test-first structure | Phase 5 | `implementation_approach` has 3 steps |
| Red phase exists | Phase 6 | Step 1: `tdd_phase: "red"` |
| Green phase with test-fix | Phase 6 | Step 2: `tdd_phase: "green"` + test-fix-cycle |
| Refactor phase exists | Phase 6 | Step 3: `tdd_phase: "refactor"` |

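The checkpoints above assume each generated task carries a three-step `implementation_approach`. A minimal sketch of that structure — field names follow the checkpoint table and the jq query later in this document, with surrounding task fields elided:

```json
{
  "id": "IMPL-1",
  "flow_control": {
    "implementation_approach": [
      { "step": 1, "tdd_phase": "red", "description": "Write a failing test", "test_files": ["tests/feature.test.ts"] },
      { "step": 2, "tdd_phase": "green", "description": "Implement until tests pass", "test_fix_cycle": { "max_iterations": 3, "auto_revert": true } },
      { "step": 3, "tdd_phase": "refactor", "description": "Clean up while tests stay green" }
    ]
  }
}
```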
### Core TDD Principles (from ref skills)

**Red Flags - STOP and Reassess**:
- Code written before test
- Test passes immediately (no Red phase witnessed)
- Cannot explain why the test should fail
- "Just this once" rationalization
- "Tests after achieve same goals" thinking

**Why Order Matters**:
- Tests written after code pass immediately → proves nothing
- Test-first forces edge case discovery before implementation
- Tests-after verify what was built, not what's required

## 6-Phase Execution (with Conflict Resolution)

### Phase 1: Session Discovery

@@ -183,7 +221,7 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
{"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
{"content": "Phase 4: Conflict Resolution", "status": "in_progress", "activeForm": "Executing conflict resolution"},
{"content": " → Detect conflicts with CLI analysis", "status": "in_progress", "activeForm": "Detecting conflicts"},
{"content": " → Log and analyze detected conflicts", "status": "pending", "activeForm": "Analyzing conflicts"},
{"content": " → Apply resolution strategies", "status": "pending", "activeForm": "Applying resolution strategies"},
{"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}

@@ -251,6 +289,13 @@ SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
- IMPL_PLAN.md contains workflow_type: "tdd" in frontmatter
- Task count ≤10 (compliance with task limit)

**Red Flag Detection** (Non-Blocking Warnings):
- Task count >10: `⚠️ High task count may indicate insufficient decomposition`
- Missing test-fix-cycle: `⚠️ Green phase lacks auto-revert configuration`
- Generic task names: `⚠️ Vague task names suggest unclear TDD cycles`

**Action**: Log warnings to `.workflow/active/[sessionId]/.process/tdd-warnings.log` (non-blocking)

<!-- TodoWrite: When task-generate-tdd executed, INSERT 3 task-generate-tdd tasks -->

**TodoWrite Update (Phase 5 SlashCommand executed - tasks attached)**:

@@ -302,6 +347,42 @@ SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
5. Test-fix cycle: Green phase step includes test-fix-cycle logic with max_iterations
6. Task count: Total tasks ≤10 (simple + subtasks)

**Red Flag Checklist** (from TDD best practices):
- [ ] No tasks skip the Red phase (`tdd_phase: "red"` exists in step 1)
- [ ] Test files referenced in Red phase (explicit paths, not placeholders)
- [ ] Green phase has test-fix-cycle with `max_iterations` configured
- [ ] Refactor phase has clear completion criteria

**Non-Compliance Warning Format**:
```
⚠️ TDD Red Flag: [issue description]
Task: [IMPL-N]
Recommendation: [action to fix]
```

|
||||
**Evidence Gathering** (Before Completion Claims):
|
||||
|
||||
```bash
|
||||
# Verify session artifacts exist
|
||||
ls -la .workflow/active/[sessionId]/{IMPL_PLAN.md,TODO_LIST.md}
|
||||
ls -la .workflow/active/[sessionId]/.task/IMPL-*.json
|
||||
|
||||
# Count generated artifacts
|
||||
echo "IMPL tasks: $(ls .workflow/active/[sessionId]/.task/IMPL-*.json 2>/dev/null | wc -l)"
|
||||
|
||||
# Sample task structure verification (first task)
|
||||
jq '{id, tdd: .meta.tdd_workflow, phases: [.flow_control.implementation_approach[].tdd_phase]}' \
|
||||
"$(ls .workflow/active/[sessionId]/.task/IMPL-*.json | head -1)"
|
||||
```
|
||||
|
||||
**Evidence Required Before Summary**:
|
||||
| Evidence Type | Verification Method | Pass Criteria |
|
||||
|---------------|---------------------|---------------|
|
||||
| File existence | `ls -la` artifacts | All files present |
|
||||
| Task count | Count IMPL-*.json | Count matches claims |
|
||||
| TDD structure | jq sample extraction | Shows red/green/refactor |
|
||||
| Warning log | Check tdd-warnings.log | Logged (may be empty) |
|
||||
|
||||
**Return Summary**:
|
||||
```
|
||||
TDD Planning complete for session: [sessionId]
|
||||
@@ -333,6 +414,9 @@ TDD Configuration:
|
||||
- Green phase includes test-fix cycle (max 3 iterations)
|
||||
- Auto-revert on max iterations reached
|
||||
|
||||
⚠️ ACTION REQUIRED: Before execution, ensure you understand WHY each Red phase test is expected to fail.
|
||||
This is crucial for valid TDD - if you don't know why the test fails, you can't verify it tests the right thing.
|
||||
|
||||
Recommended Next Steps:
|
||||
1. /workflow:action-plan-verify --session [sessionId] # Verify TDD plan quality and dependencies
|
||||
2. /workflow:execute --session [sessionId] # Start TDD execution
|
||||
@@ -400,7 +484,7 @@ TDD Workflow Orchestrator
│ IF conflict_risk ≥ medium:
│   └─ /workflow:tools:conflict-resolution ← ATTACHED (3 tasks)
│       ├─ Phase 4.1: Detect conflicts with CLI
│       ├─ Phase 4.2: Log and analyze detected conflicts
│       └─ Phase 4.3: Apply resolution strategies
│   └─ Returns: conflict-resolution.json ← COLLAPSED
│ ELSE:

@@ -439,6 +523,34 @@ Convert user input to TDD-structured format:
- **Command failure**: Keep phase in_progress, report error
- **TDD validation failure**: Report incomplete chains or wrong dependencies

### TDD Warning Patterns

| Pattern | Warning Message | Recommended Action |
|---------|----------------|-------------------|
| Task count >10 | High task count detected | Consider splitting into multiple sessions |
| Missing test-fix-cycle | Green phase lacks auto-revert | Add `max_iterations: 3` to task config |
| Red phase missing test path | Test file path not specified | Add explicit test file paths |
| Generic task names | Vague names like "Add feature" | Use specific behavior descriptions |
| No refactor criteria | Refactor phase lacks completion criteria | Define clear refactor scope |

### Non-Blocking Warning Policy

**All warnings are advisory** - they do not halt execution:
1. Warnings are logged to `.process/tdd-warnings.log`
2. A summary is displayed in Phase 6 output
3. The user decides whether to address them before `/workflow:execute`

### Error Handling Quick Reference

| Error Type | Detection | Recovery Action |
|------------|-----------|-----------------|
| Parsing failure | Empty/malformed output | Retry once, then report |
| Missing context-package | File read error | Re-run `/workflow:tools:context-gather` |
| Invalid task JSON | jq parse error | Report malformed file path |
| High task count (>10) | Count validation | Log warning, continue (non-blocking) |
| Test-context missing | File not found | Re-run `/workflow:tools:test-context-gather` |
| Phase timeout | No response | Retry phase, check CLI connectivity |

## Related Commands

**Prerequisite Commands**:
@@ -458,3 +570,28 @@ Convert user input to TDD-structured format:
- `/workflow:execute` - Begin TDD implementation
- `/workflow:tdd-verify` - Post-execution: Verify TDD compliance and generate quality report

## Next Steps Decision Table

| Situation | Recommended Command | Purpose |
|-----------|---------------------|---------|
| First time planning | `/workflow:action-plan-verify` | Validate task structure before execution |
| Warnings in tdd-warnings.log | Review log, refine tasks | Address Red Flags before proceeding |
| High task count warning | Consider `/workflow:session:start` | Split into focused sub-sessions |
| Ready to implement | `/workflow:execute` | Begin TDD Red-Green-Refactor cycles |
| After implementation | `/workflow:tdd-verify` | Generate TDD compliance report |
| Need to review tasks | `/workflow:status --session [id]` | Inspect current task breakdown |
| Plan needs changes | `/task:replan` | Update task JSON with new requirements |

### TDD Workflow State Transitions

```
/workflow:tdd-plan
        ↓
[Planning Complete] ──→ /workflow:action-plan-verify (recommended)
        ↓
[Verified/Ready] ─────→ /workflow:execute
        ↓
[Implementation] ─────→ /workflow:tdd-verify (post-execution)
        ↓
[Quality Report] ─────→ Done or iterate
```

@@ -491,6 +491,10 @@ The orchestrator automatically creates git commits at key checkpoints to enable

**Note**: Final session completion creates an additional commit with the full summary.

## Post-Completion Expansion

After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.

## Best Practices

1. **Default Settings Work**: 10 iterations are sufficient for most cases

@@ -154,8 +154,8 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
- Validation of exploration conflict_indicators
- ModuleOverlap conflicts with overlap_analysis
- Targeted clarification questions
CONSTRAINTS: Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {project_root}

Fallback: Qwen (same prompt) → Claude (manual analysis)

@@ -237,7 +237,7 @@ Execute complete context-search-agent workflow for implementation planning:

### Phase 1: Initialization & Pre-Analysis
1. **Project State Loading**:
   - Read and parse `.workflow/project-tech.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components.
   - Read and parse `.workflow/project-guidelines.json`. Load `conventions`, `constraints`, and `learnings` into a `project_guidelines` section.
   - If the files don't exist, proceed with fresh analysis.
2. **Detection**: Check for existing context-package (early exit if valid)
@@ -255,7 +255,7 @@ Execute all discovery tracks:
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project-tech.json`** for architecture and tech stack unless code analysis reveals it's outdated.
3. **Populate `project_context`**: Directly use the `overview` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
4. **Populate `project_guidelines`**: Load conventions, constraints, and learnings from `project-guidelines.json` into a dedicated section.
5. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
6. Perform conflict detection with risk assessment

@@ -90,7 +90,7 @@ Template: ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.t

## EXECUTION STEPS
1. Execute Gemini analysis:
   ccw cli -p "..." --tool gemini --mode write --rule test-test-concept-analysis --cd .workflow/active/{test_session_id}/.process

2. Generate TEST_ANALYSIS_RESULTS.md:
   Synthesize gemini-test-analysis.md into standardized format for task generation

@@ -1,139 +1,86 @@
---
name: ccw-help
description: CCW command help system. Search, browse, recommend commands. Triggers "ccw-help", "ccw-issue".
allowed-tools: Read, Grep, Glob, AskUserQuestion
version: 7.0.0
---

# CCW-Help Skill

CCW command help system providing command search, recommendations, and documentation viewing.

## Trigger Conditions

- Keywords: "ccw-help", "ccw-issue", "帮助" (help), "命令" (command), "怎么用" (how to use)
- Scenarios: asking about command usage, searching for commands, requesting next-step suggestions

## Operation Modes

### Mode 1: Command Search

**Triggers**: "搜索命令" (search commands), "find command", "search"

**Process**:
1. Query the `command.json` commands array
2. Filter by name, description, category
3. Present the top 3-5 relevant commands

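The filter step can be sketched as a simple relevance ranking over the `command.json` shape described under Data Source below. The scoring weights and helper name are illustrative, not part of the skill:

```javascript
// Rank commands by substring relevance over name, description, and category.
function searchCommands(index, query, maxResults = 5) {
  const q = query.toLowerCase();
  return index.commands
    .map(cmd => {
      let score = 0;
      if (cmd.name.toLowerCase().includes(q)) score += 3;              // name match weighs most
      if ((cmd.description || "").toLowerCase().includes(q)) score += 2;
      if ((cmd.category || "").toLowerCase().includes(q)) score += 1;
      return { cmd, score };
    })
    .filter(r => r.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, maxResults)
    .map(r => r.cmd);
}
```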
### Mode 2: Smart Recommendations

**Triggers**: "下一步" (next step), "what's next", "推荐" (recommend)

**Process**:
1. Query the command's `flow.next_steps` in `command.json`
2. Explain WHY each recommendation fits

### Mode 3: Documentation

**Triggers**: "怎么用" (how to use), "how to use", "详情" (details)

**Process**:
1. Locate the command in `command.json`
2. Read the source file via its `source` path
3. Provide context-specific examples

### Mode 4: Beginner Onboarding

**Triggers**: "新手" (beginner), "getting started", "常用命令" (common commands)

**Process**:
1. Query the `essential_commands` array
2. Guide to the appropriate workflow entry point

### Mode 5: Issue Reporting

**Triggers**: "ccw-issue", "报告 bug" (report a bug)

**Process**:
1. Use AskUserQuestion to gather context
2. Generate a structured issue template
3. Provide actionable next steps

## Data Source

Single source of truth: **[command.json](command.json)**

| Field | Purpose |
|-------|---------|
| `commands[]` | Flat command list with metadata |
| `commands[].flow` | Relationships (next_steps, prerequisites) |
| `commands[].essential` | Essential flag for onboarding |
| `agents[]` | Agent directory |
| `essential_commands[]` | Core commands list |

### Source Path Format

The `source` field is a relative path (from the `skills/ccw-help/` directory):

```json
{
  "name": "lite-plan",
  "source": "../../../commands/workflow/lite-plan.md"
}
```

## Configuration

| Parameter | Default | Description |
|-----------|---------|-------------|
| max_results | 5 | Maximum number of search results |
| show_source | true | Whether to show source file paths |

## CLI Integration

| Scenario | CLI Hint | Purpose |
|----------|----------|---------|
| Complex queries | `gemini --mode analysis` | Multi-file analysis and comparison |
| Doc generation | - | Read source files directly |

## Slash Commands

@@ -145,33 +92,25 @@ CCW-Help 使用 JSON 索引实现快速查询(无 reference 文件夹,直接

## Maintenance

### Update Index

```bash
cd D:/Claude_dms3/.claude/skills/ccw-help
python scripts/analyze_commands.py
```

Script behavior: scans the `commands/` and `agents/` directories and generates the unified `command.json`.

## Statistics

- **Commands**: 88+
- **Agents**: 16
- **Essential**: 10 core commands

## Core Principle

**Intelligent synthesis, not template copying**

- Understand the user's specific situation
- Integrate information from multiple sources
- Tailor examples and explanations

520
.claude/skills/ccw-help/command.json
Normal file

@@ -0,0 +1,520 @@
{
  "_metadata": {
    "version": "2.0.0",
    "total_commands": 45,
    "total_agents": 16,
    "description": "Unified CCW-Help command index"
  },

  "essential_commands": [
    "/workflow:lite-plan",
    "/workflow:lite-fix",
    "/workflow:plan",
    "/workflow:execute",
    "/workflow:session:start",
    "/workflow:review-session-cycle",
    "/memory:docs",
    "/workflow:brainstorm:artifacts",
    "/workflow:action-plan-verify",
    "/version"
  ],

  "commands": [
    {
      "name": "lite-plan",
      "command": "/workflow:lite-plan",
      "description": "Lightweight interactive planning with in-memory plan, dispatches to lite-execute",
      "arguments": "[-e|--explore] \"task\"|file.md",
      "category": "workflow",
      "difficulty": "Intermediate",
      "essential": true,
      "flow": {
        "next_steps": ["/workflow:lite-execute"],
        "alternatives": ["/workflow:plan"]
      },
      "source": "../../../commands/workflow/lite-plan.md"
    },
    {
      "name": "lite-execute",
      "command": "/workflow:lite-execute",
      "description": "Execute based on in-memory plan or prompt",
      "arguments": "[--in-memory] \"task\"|file-path",
      "category": "workflow",
      "difficulty": "Intermediate",
      "flow": {
        "prerequisites": ["/workflow:lite-plan", "/workflow:lite-fix"]
      },
      "source": "../../../commands/workflow/lite-execute.md"
    },
    {
      "name": "lite-fix",
      "command": "/workflow:lite-fix",
      "description": "Lightweight bug diagnosis and fix with optional hotfix mode",
      "arguments": "[--hotfix] \"bug description\"",
      "category": "workflow",
      "difficulty": "Intermediate",
      "essential": true,
      "flow": {
        "next_steps": ["/workflow:lite-execute"],
        "alternatives": ["/workflow:lite-plan"]
      },
      "source": "../../../commands/workflow/lite-fix.md"
    },
    {
      "name": "plan",
      "command": "/workflow:plan",
      "description": "5-phase planning with task JSON generation",
      "arguments": "\"description\"|file.md",
      "category": "workflow",
      "difficulty": "Intermediate",
      "essential": true,
      "flow": {
        "next_steps": ["/workflow:action-plan-verify", "/workflow:execute"],
        "alternatives": ["/workflow:tdd-plan"]
      },
      "source": "../../../commands/workflow/plan.md"
    },
    {
      "name": "execute",
      "command": "/workflow:execute",
      "description": "Coordinate agent execution with DAG parallel processing",
      "arguments": "[--resume-session=\"session-id\"]",
      "category": "workflow",
      "difficulty": "Intermediate",
      "essential": true,
      "flow": {
        "prerequisites": ["/workflow:plan", "/workflow:tdd-plan"],
        "next_steps": ["/workflow:review"]
      },
      "source": "../../../commands/workflow/execute.md"
    },
    {
      "name": "action-plan-verify",
      "command": "/workflow:action-plan-verify",
      "description": "Cross-artifact consistency analysis",
      "arguments": "[--session session-id]",
      "category": "workflow",
      "difficulty": "Intermediate",
      "essential": true,
      "flow": {
        "prerequisites": ["/workflow:plan"],
        "next_steps": ["/workflow:execute"]
      },
      "source": "../../../commands/workflow/action-plan-verify.md"
    },
    {
      "name": "init",
      "command": "/workflow:init",
      "description": "Initialize project-level state",
      "arguments": "[--regenerate]",
      "category": "workflow",
      "difficulty": "Intermediate",
      "source": "../../../commands/workflow/init.md"
    },
    {
      "name": "clean",
      "command": "/workflow:clean",
      "description": "Intelligent code cleanup with stale artifact discovery",
      "arguments": "[--dry-run] [\"focus\"]",
      "category": "workflow",
      "difficulty": "Intermediate",
      "source": "../../../commands/workflow/clean.md"
    },
    {
      "name": "debug",
      "command": "/workflow:debug",
      "description": "Hypothesis-driven debugging with NDJSON logging",
      "arguments": "\"bug description\"",
      "category": "workflow",
      "difficulty": "Intermediate",
      "source": "../../../commands/workflow/debug.md"
    },
    {
      "name": "replan",
      "command": "/workflow:replan",
      "description": "Interactive workflow replanning",
      "arguments": "[--session id] [task-id] \"requirements\"",
      "category": "workflow",
      "difficulty": "Intermediate",
      "source": "../../../commands/workflow/replan.md"
    },
    {
      "name": "session:start",
      "command": "/workflow:session:start",
      "description": "Start or discover workflow sessions",
      "arguments": "[--type <workflow|review|tdd>] [--auto|--new]",
      "category": "workflow",
      "subcategory": "session",
      "difficulty": "Intermediate",
      "essential": true,
      "flow": {
        "next_steps": ["/workflow:plan", "/workflow:execute"]
      },
      "source": "../../../commands/workflow/session/start.md"
    },
    {
      "name": "session:list",
      "command": "/workflow:session:list",
      "description": "List all workflow sessions",
      "arguments": "",
      "category": "workflow",
      "subcategory": "session",
      "difficulty": "Beginner",
      "source": "../../../commands/workflow/session/list.md"
    },
    {
      "name": "session:resume",
      "command": "/workflow:session:resume",
      "description": "Resume paused workflow session",
      "arguments": "",
      "category": "workflow",
      "subcategory": "session",
      "difficulty": "Intermediate",
      "source": "../../../commands/workflow/session/resume.md"
    },
    {
      "name": "session:complete",
      "command": "/workflow:session:complete",
      "description": "Mark session complete and archive",
      "arguments": "",
      "category": "workflow",
      "subcategory": "session",
      "difficulty": "Intermediate",
      "source": "../../../commands/workflow/session/complete.md"
    },
    {
      "name": "brainstorm:auto-parallel",
      "command": "/workflow:brainstorm:auto-parallel",
      "description": "Parallel brainstorming with multi-role analysis",
      "arguments": "\"topic\" [--count N]",
      "category": "workflow",
      "subcategory": "brainstorm",
      "difficulty": "Advanced",
      "source": "../../../commands/workflow/brainstorm/auto-parallel.md"
    },
    {
      "name": "brainstorm:artifacts",
      "command": "/workflow:brainstorm:artifacts",
      "description": "Interactive clarification with guidance specification",
      "arguments": "\"topic\" [--count N]",
      "category": "workflow",
      "subcategory": "brainstorm",
      "difficulty": "Intermediate",
      "essential": true,
      "source": "../../../commands/workflow/brainstorm/artifacts.md"
    },
    {
      "name": "brainstorm:synthesis",
      "command": "/workflow:brainstorm:synthesis",
      "description": "Refine role analyses through Q&A",
      "arguments": "[--session session-id]",
      "category": "workflow",
      "subcategory": "brainstorm",
      "difficulty": "Advanced",
      "source": "../../../commands/workflow/brainstorm/synthesis.md"
    },
    {
      "name": "tdd-plan",
      "command": "/workflow:tdd-plan",
      "description": "TDD planning with Red-Green-Refactor cycles",
      "arguments": "\"feature\"|file.md",
      "category": "workflow",
      "difficulty": "Advanced",
      "flow": {
        "next_steps": ["/workflow:execute", "/workflow:tdd-verify"],
        "alternatives": ["/workflow:plan"]
      },
      "source": "../../../commands/workflow/tdd-plan.md"
    },
    {
      "name": "tdd-verify",
      "command": "/workflow:tdd-verify",
      "description": "Verify TDD compliance with coverage analysis",
      "arguments": "[session-id]",
      "category": "workflow",
      "difficulty": "Advanced",
      "flow": {
        "prerequisites": ["/workflow:execute"]
      },
      "source": "../../../commands/workflow/tdd-verify.md"
    },
    {
      "name": "review",
      "command": "/workflow:review",
      "description": "Post-implementation review (security/architecture/quality)",
      "arguments": "[--type=<type>] [session-id]",
      "category": "workflow",
      "difficulty": "Intermediate",
      "source": "../../../commands/workflow/review.md"
    },
    {
      "name": "review-session-cycle",
      "command": "/workflow:review-session-cycle",
      "description": "Multi-dimensional code review across 7 dimensions",
      "arguments": "[session-id] [--dimensions=...]",
      "category": "workflow",
      "difficulty": "Intermediate",
      "essential": true,
      "flow": {
        "prerequisites": ["/workflow:execute"],
        "next_steps": ["/workflow:review-fix"]
      },
      "source": "../../../commands/workflow/review-session-cycle.md"
    },
    {
      "name": "review-module-cycle",
      "command": "/workflow:review-module-cycle",
      "description": "Module-based multi-dimensional review",
      "arguments": "<path-pattern> [--dimensions=...]",
      "category": "workflow",
      "difficulty": "Intermediate",
      "source": "../../../commands/workflow/review-module-cycle.md"
    },
    {
      "name": "review-fix",
      "command": "/workflow:review-fix",
      "description": "Automated fixing of review findings",
      "arguments": "<export-file|review-dir>",
      "category": "workflow",
      "difficulty": "Intermediate",
      "flow": {
        "prerequisites": ["/workflow:review-session-cycle", "/workflow:review-module-cycle"]
      },
      "source": "../../../commands/workflow/review-fix.md"
    },
    {
      "name": "test-gen",
      "command": "/workflow:test-gen",
      "description": "Generate test session from implementation",
      "arguments": "source-session-id",
      "category": "workflow",
      "difficulty": "Intermediate",
      "source": "../../../commands/workflow/test-gen.md"
    },
    {
      "name": "test-fix-gen",
      "command": "/workflow:test-fix-gen",
      "description": "Create test-fix session with strategy",
      "arguments": "session-id|\"description\"|file",
      "category": "workflow",
      "difficulty": "Intermediate",
      "source": "../../../commands/workflow/test-fix-gen.md"
    },
    {
      "name": "test-cycle-execute",
      "command": "/workflow:test-cycle-execute",
      "description": "Execute test-fix with iterative cycles",
      "arguments": "[--resume-session=id] [--max-iterations=N]",
      "category": "workflow",
      "difficulty": "Intermediate",
      "source": "../../../commands/workflow/test-cycle-execute.md"
    },
    {
      "name": "issue:new",
      "command": "/issue:new",
      "description": "Create issue from GitHub URL or text",
      "arguments": "<url|text> [--priority 1-5]",
      "category": "issue",
      "difficulty": "Intermediate",
      "source": "../../../commands/issue/new.md"
    },
    {
      "name": "issue:discover",
      "command": "/issue:discover",
      "description": "Discover issues from multiple perspectives",
      "arguments": "<path> [--perspectives=...]",
      "category": "issue",
      "difficulty": "Intermediate",
      "source": "../../../commands/issue/discover.md"
    },
    {
      "name": "issue:plan",
      "command": "/issue:plan",
      "description": "Batch plan issue resolution",
      "arguments": "--all-pending|<ids>",
      "category": "issue",
      "difficulty": "Intermediate",
      "flow": {
        "next_steps": ["/issue:queue"]
      },
      "source": "../../../commands/issue/plan.md"
    },
    {
      "name": "issue:queue",
      "command": "/issue:queue",
      "description": "Form execution queue from solutions",
      "arguments": "[--rebuild]",
      "category": "issue",
      "difficulty": "Intermediate",
      "flow": {
        "prerequisites": ["/issue:plan"],
        "next_steps": ["/issue:execute"]
      },
      "source": "../../../commands/issue/queue.md"
    },
    {
      "name": "issue:execute",
      "command": "/issue:execute",
      "description": "Execute queue with DAG parallel",
      "arguments": "[--worktree]",
      "category": "issue",
      "difficulty": "Intermediate",
      "flow": {
        "prerequisites": ["/issue:queue"]
      },
      "source": "../../../commands/issue/execute.md"
    },
    {
      "name": "docs",
      "command": "/memory:docs",
      "description": "Plan documentation workflow",
      "arguments": "[path] [--tool <tool>]",
      "category": "memory",
      "difficulty": "Intermediate",
      "essential": true,
      "flow": {
        "next_steps": ["/workflow:execute"]
      },
      "source": "../../../commands/memory/docs.md"
    },
    {
      "name": "update-related",
      "command": "/memory:update-related",
      "description": "Update docs for git-changed modules",
      "arguments": "[--tool <tool>]",
      "category": "memory",
      "difficulty": "Intermediate",
      "source": "../../../commands/memory/update-related.md"
    },
    {
      "name": "update-full",
      "command": "/memory:update-full",
      "description": "Update all CLAUDE.md files",
      "arguments": "[--tool <tool>]",
      "category": "memory",
      "difficulty": "Intermediate",
      "source": "../../../commands/memory/update-full.md"
    },
    {
      "name": "skill-memory",
      "command": "/memory:skill-memory",
      "description": "Generate SKILL.md with loading index",
      "arguments": "[path] [--regenerate]",
      "category": "memory",
      "difficulty": "Intermediate",
      "source": "../../../commands/memory/skill-memory.md"
    },
    {
      "name": "load-skill-memory",
      "command": "/memory:load-skill-memory",
      "description": "Activate SKILL package for task",
      "arguments": "[skill_name] \"task intent\"",
      "category": "memory",
      "difficulty": "Intermediate",
      "source": "../../../commands/memory/load-skill-memory.md"
    },
    {
      "name": "load",
      "command": "/memory:load",
      "description": "Load project context via CLI",
      "arguments": "[--tool <tool>] \"context\"",
      "category": "memory",
      "difficulty": "Intermediate",
      "source": "../../../commands/memory/load.md"
    },
    {
      "name": "compact",
      "command": "/memory:compact",
      "description": "Compact session memory for recovery",
      "arguments": "[description]",
      "category": "memory",
      "difficulty": "Intermediate",
      "source": "../../../commands/memory/compact.md"
    },
    {
      "name": "task:create",
      "command": "/task:create",
      "description": "Generate task JSON from description",
      "arguments": "\"task title\"",
      "category": "task",
      "difficulty": "Intermediate",
      "source": "../../../commands/task/create.md"
    },
    {
      "name": "task:execute",
      "command": "/task:execute",
      "description": "Execute task JSON with agent",
      "arguments": "task-id",
      "category": "task",
      "difficulty": "Intermediate",
      "source": "../../../commands/task/execute.md"
    },
    {
      "name": "task:breakdown",
      "command": "/task:breakdown",
      "description": "Decompose task into subtasks",
      "arguments": "task-id",
      "category": "task",
      "difficulty": "Intermediate",
      "source": "../../../commands/task/breakdown.md"
    },
    {
      "name": "task:replan",
      "command": "/task:replan",
      "description": "Update task with new requirements",
      "arguments": "task-id [\"text\"|file]",
      "category": "task",
      "difficulty": "Intermediate",
      "source": "../../../commands/task/replan.md"
    },
    {
      "name": "version",
      "command": "/version",
      "description": "Display version and check updates",
      "arguments": "",
      "category": "general",
      "difficulty": "Beginner",
      "essential": true,
      "source": "../../../commands/version.md"
    },
    {
      "name": "enhance-prompt",
      "command": "/enhance-prompt",
      "description": "Transform prompts with session memory",
      "arguments": "user input",
      "category": "general",
      "difficulty": "Intermediate",
      "source": "../../../commands/enhance-prompt.md"
    },
    {
      "name": "cli-init",
      "command": "/cli:cli-init",
      "description": "Initialize CLI tool configurations (.gemini/, .qwen/) with technology-aware ignore rules",
      "arguments": "[--tool gemini|qwen|all] [--preview] [--output path]",
      "category": "cli",
      "difficulty": "Intermediate",
      "source": "../../../commands/cli/cli-init.md"
    }
  ],

  "agents": [
    { "name": "action-planning-agent", "description": "Task planning and generation", "source": "../../../agents/action-planning-agent.md" },
    { "name": "cli-execution-agent", "description": "CLI tool execution", "source": "../../../agents/cli-execution-agent.md" },
    { "name": "cli-explore-agent", "description": "Codebase exploration", "source": "../../../agents/cli-explore-agent.md" },
    { "name": "cli-lite-planning-agent", "description": "Lightweight planning", "source": "../../../agents/cli-lite-planning-agent.md" },
    { "name": "cli-planning-agent", "description": "CLI-based planning", "source": "../../../agents/cli-planning-agent.md" },
    { "name": "code-developer", "description": "Code implementation", "source": "../../../agents/code-developer.md" },
    { "name": "conceptual-planning-agent", "description": "Conceptual analysis", "source": "../../../agents/conceptual-planning-agent.md" },
    { "name": "context-search-agent", "description": "Context discovery", "source": "../../../agents/context-search-agent.md" },
    { "name": "doc-generator", "description": "Documentation generation", "source": "../../../agents/doc-generator.md" },
    { "name": "issue-plan-agent", "description": "Issue planning", "source": "../../../agents/issue-plan-agent.md" },
    { "name": "issue-queue-agent", "description": "Issue queue formation", "source": "../../../agents/issue-queue-agent.md" },
    { "name": "memory-bridge", "description": "Documentation coordination", "source": "../../../agents/memory-bridge.md" },
    { "name": "test-context-search-agent", "description": "Test context collection", "source": "../../../agents/test-context-search-agent.md" },
    { "name": "test-fix-agent", "description": "Test execution and fixing", "source": "../../../agents/test-fix-agent.md" },
    { "name": "ui-design-agent", "description": "UI design and prototyping", "source": "../../../agents/ui-design-agent.md" },
    { "name": "universal-executor", "description": "Universal task execution", "source": "../../../agents/universal-executor.md" }
  ],

  "categories": ["workflow", "issue", "memory", "task", "general", "cli"]
}
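A consumer of this index (such as the ccw-help skill) can answer "what should I run first?" and "what comes next?" directly from the `essential` flags and `flow` links. The sketch below is illustrative, not the skill's actual code; the helper names and the inline sample index are assumptions.

```python
def essential_commands(index):
    """Return commands flagged `essential: true`, in index order.
    Non-essential entries may omit the key entirely, so use .get()."""
    return [c["command"] for c in index["commands"] if c.get("essential")]

def next_steps(index, command):
    """Follow the `flow.next_steps` links of a command entry, if any."""
    for entry in index["commands"]:
        if entry["command"] == command:
            return entry.get("flow", {}).get("next_steps", [])
    return []

# Tiny in-memory sample shaped like command.json; in practice the skill
# would load the real file with json.load().
index = {"commands": [
    {"command": "/workflow:plan", "essential": True,
     "flow": {"next_steps": ["/workflow:execute"]}},
    {"command": "/workflow:review"},
]}
```

With the real command.json loaded, `next_steps(index, "/workflow:plan")` would surface the planned follow-ups (`/workflow:action-plan-verify`, `/workflow:execute`).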
@@ -1,82 +0,0 @@
[
  {
    "name": "action-planning-agent",
    "description": "|",
    "source": "../../../agents/action-planning-agent.md"
  },
  {
    "name": "cli-execution-agent",
    "description": "|",
    "source": "../../../agents/cli-execution-agent.md"
  },
  {
    "name": "cli-explore-agent",
    "description": "|",
    "source": "../../../agents/cli-explore-agent.md"
  },
  {
    "name": "cli-lite-planning-agent",
    "description": "|",
    "source": "../../../agents/cli-lite-planning-agent.md"
  },
  {
    "name": "cli-planning-agent",
    "description": "|",
    "source": "../../../agents/cli-planning-agent.md"
  },
  {
    "name": "code-developer",
    "description": "|",
    "source": "../../../agents/code-developer.md"
  },
  {
    "name": "conceptual-planning-agent",
    "description": "|",
    "source": "../../../agents/conceptual-planning-agent.md"
  },
  {
    "name": "context-search-agent",
    "description": "|",
    "source": "../../../agents/context-search-agent.md"
  },
  {
    "name": "doc-generator",
    "description": "|",
    "source": "../../../agents/doc-generator.md"
  },
  {
    "name": "issue-plan-agent",
    "description": "|",
    "source": "../../../agents/issue-plan-agent.md"
  },
  {
    "name": "issue-queue-agent",
    "description": "|",
    "source": "../../../agents/issue-queue-agent.md"
  },
  {
    "name": "memory-bridge",
    "description": "Execute complex project documentation updates using script coordination",
    "source": "../../../agents/memory-bridge.md"
  },
  {
    "name": "test-context-search-agent",
    "description": "|",
    "source": "../../../agents/test-context-search-agent.md"
  },
  {
    "name": "test-fix-agent",
    "description": "|",
    "source": "../../../agents/test-fix-agent.md"
  },
  {
    "name": "ui-design-agent",
    "description": "|",
    "source": "../../../agents/ui-design-agent.md"
  },
  {
    "name": "universal-executor",
    "description": "|",
    "source": "../../../agents/universal-executor.md"
  }
]
@@ -1,882 +0,0 @@
|
||||
[
|
||||
{
|
||||
"name": "cli-init",
|
||||
"command": "/cli:cli-init",
|
||||
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
|
||||
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
|
||||
"category": "cli",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/cli/cli-init.md"
|
||||
},
|
||||
{
|
||||
"name": "enhance-prompt",
|
||||
"command": "/enhance-prompt",
|
||||
"description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
|
||||
"arguments": "user input to enhance",
|
||||
"category": "general",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/enhance-prompt.md"
|
||||
},
|
||||
{
|
||||
"name": "issue:discover",
|
||||
"command": "/issue:discover",
|
||||
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
|
||||
"arguments": "<path-pattern> [--perspectives=bug,ux,...] [--external]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/discover.md"
|
||||
},
|
||||
{
|
||||
"name": "execute",
|
||||
"command": "/issue:execute",
|
||||
"description": "Execute queue with codex using DAG-based parallel orchestration (solution-level)",
|
||||
"arguments": "[--worktree] [--queue <queue-id>]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "new",
|
||||
"command": "/issue:new",
|
||||
"description": "Create structured issue from GitHub URL or text description",
|
||||
"arguments": "<github-url | text-description> [--priority 1-5]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/new.md"
|
||||
},
|
||||
{
|
||||
"name": "plan",
|
||||
"command": "/issue:plan",
|
||||
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
|
||||
"arguments": "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3] ",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/plan.md"
|
||||
},
|
||||
{
|
||||
"name": "queue",
|
||||
"command": "/issue:queue",
|
||||
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
|
||||
"arguments": "[--rebuild] [--issue <id>]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/queue.md"
|
||||
},
|
||||
{
|
||||
"name": "code-map-memory",
|
||||
"command": "/memory:code-map-memory",
|
||||
"description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)",
|
||||
"arguments": "\\\"feature-keyword\\\" [--regenerate] [--tool <gemini|qwen>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/code-map-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "compact",
|
||||
"command": "/memory:compact",
|
||||
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
|
||||
"arguments": "[optional: session description]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/compact.md"
|
||||
},
|
||||
{
|
||||
"name": "docs-full-cli",
|
||||
"command": "/memory:docs-full-cli",
|
||||
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/docs-full-cli.md"
|
||||
},
|
||||
{
|
||||
"name": "docs-related-cli",
|
||||
"command": "/memory:docs-related-cli",
|
||||
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
|
||||
"arguments": "[--tool <gemini|qwen|codex>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/docs-related-cli.md"
|
||||
},
|
||||
{
|
||||
"name": "docs",
|
||||
"command": "/memory:docs",
|
||||
"description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/docs.md"
|
||||
},
|
||||
{
|
||||
"name": "load-skill-memory",
|
||||
"command": "/memory:load-skill-memory",
|
||||
"description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords",
|
||||
"arguments": "[skill_name] \\\"task intent description\\",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/load-skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "load",
|
||||
"command": "/memory:load",
|
||||
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
|
||||
"arguments": "[--tool gemini|qwen] \\\"task context description\\",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/load.md"
|
||||
},
|
||||
{
|
||||
"name": "skill-memory",
|
||||
"command": "/memory:skill-memory",
|
||||
"description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "style-skill-memory",
|
||||
"command": "/memory:style-skill-memory",
|
||||
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
|
||||
"arguments": "[package-name] [--regenerate]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/style-skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "swagger-docs",
|
||||
"command": "/memory:swagger-docs",
|
||||
"description": "Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/swagger-docs.md"
|
||||
},
|
||||
{
|
||||
"name": "tech-research-rules",
|
||||
"command": "/memory:tech-research-rules",
|
||||
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
|
||||
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/tech-research-rules.md"
|
||||
},
|
||||
{
|
||||
"name": "update-full",
|
||||
"command": "/memory:update-full",
|
||||
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
|
||||
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/update-full.md"
|
||||
},
|
||||
{
|
||||
"name": "update-related",
|
||||
"command": "/memory:update-related",
|
||||
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
|
||||
"arguments": "[--tool gemini|qwen|codex]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/update-related.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow-skill-memory",
|
||||
"command": "/memory:workflow-skill-memory",
|
||||
"description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)",
|
||||
"arguments": "session <session-id> | all",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/workflow-skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "breakdown",
|
||||
"command": "/task:breakdown",
|
||||
"description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order",
|
||||
"arguments": "task-id",
|
||||
"category": "task",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/breakdown.md"
|
||||
},
|
||||
{
"name": "create",
"command": "/task:create",
"description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
"arguments": "\\\"task title\\\"",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/create.md"
},
{
"name": "execute",
"command": "/task:execute",
"description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/execute.md"
},
{
"name": "replan",
"command": "/task:replan",
"description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json",
"arguments": "task-id [\\\"text\\\"|file.md] | --batch [verification-report.md]",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/task/replan.md"
},
{
"name": "version",
"command": "/version",
"description": "Display Claude Code version information and check for updates",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/version.md"
},
{
"name": "action-plan-verify",
"command": "/workflow:action-plan-verify",
"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/action-plan-verify.md"
},
{
"name": "api-designer",
"command": "/workflow:brainstorm:api-designer",
"description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/api-designer.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "data-architect",
"command": "/workflow:brainstorm:data-architect",
"description": "Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/data-architect.md"
},
{
"name": "product-manager",
"command": "/workflow:brainstorm:product-manager",
"description": "Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-manager.md"
},
{
"name": "product-owner",
"command": "/workflow:brainstorm:product-owner",
"description": "Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-owner.md"
},
{
"name": "scrum-master",
"command": "/workflow:brainstorm:scrum-master",
"description": "Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/scrum-master.md"
},
{
"name": "subject-matter-expert",
"command": "/workflow:brainstorm:subject-matter-expert",
"description": "Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/subject-matter-expert.md"
},
{
"name": "synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
},
{
"name": "system-architect",
"command": "/workflow:brainstorm:system-architect",
"description": "Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/system-architect.md"
},
{
"name": "ui-designer",
"command": "/workflow:brainstorm:ui-designer",
"description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ui-designer.md"
},
{
"name": "ux-expert",
"command": "/workflow:brainstorm:ux-expert",
"description": "Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ux-expert.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
"arguments": "[--dry-run] [\\\"focus area\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug",
"command": "/workflow:debug",
"description": "Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved",
"arguments": "\\\"bug description or error message\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
"arguments": "[--regenerate]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
"arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[--hotfix] \\\"bug description or issue reference\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
"arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "review-fix",
"command": "/workflow:review-fix",
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-fix.md"
},
{
"name": "review-module-cycle",
"command": "/workflow:review-module-cycle",
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "review",
"command": "/workflow:review",
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review.md"
},
{
"name": "complete",
"command": "/workflow:session:complete",
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/complete.md"
},
{
"name": "list",
"command": "/workflow:session:list",
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "resume",
"command": "/workflow:session:resume",
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/resume.md"
},
{
"name": "solidify",
"command": "/workflow:session:solidify",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
"arguments": "[--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\\"",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/solidify.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis",
"arguments": "[optional: WFS-session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-verify.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-fix-gen.md"
},
{
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-gen.md"
},
{
"name": "conflict-resolution",
"command": "/workflow:tools:conflict-resolution",
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
"arguments": "--session WFS-session-id --context path/to/context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/conflict-resolution.md"
},
{
"name": "gather",
"command": "/workflow:tools:gather",
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
"arguments": "--session WFS-session-id \\\"task description\\\"",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/context-gather.md"
},
{
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-agent.md"
},
{
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
},
{
"name": "tdd-coverage-analysis",
"command": "/workflow:tools:tdd-coverage-analysis",
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
},
{
"name": "test-concept-enhanced",
"command": "/workflow:tools:test-concept-enhanced",
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
},
{
"name": "test-context-gather",
"command": "/workflow:tools:test-context-gather",
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-context-gather.md"
},
{
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-task-generate.md"
},
{
"name": "animation-extract",
"command": "/workflow:ui-design:animation-extract",
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/animation-extract.md"
},
{
"name": "workflow:ui-design:codify-style",
"command": "/workflow:ui-design:codify-style",
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/codify-style.md"
},
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/design-sync.md"
},
{
"name": "explore-auto",
"command": "/workflow:ui-design:explore-auto",
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/explore-auto.md"
},
{
"name": "generate",
"command": "/workflow:ui-design:generate",
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
"arguments": "[--design-id <id>] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/generate.md"
},
{
"name": "imitate-auto",
"command": "/workflow:ui-design:imitate-auto",
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
"arguments": "[--input \"<value>\"] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
},
{
"name": "workflow:ui-design:import-from-code",
"command": "/workflow:ui-design:import-from-code",
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/import-from-code.md"
},
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/layout-extract.md"
},
{
"name": "workflow:ui-design:reference-page-generator",
"command": "/workflow:ui-design:reference-page-generator",
"description": "Generate multi-component reference pages and documentation from design run extraction",
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
},
{
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/style-extract.md"
}
]
@@ -1,914 +0,0 @@
{
|
||||
"cli": {
|
||||
"_root": [
|
||||
{
|
||||
"name": "cli-init",
|
||||
"command": "/cli:cli-init",
|
||||
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
|
||||
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
|
||||
"category": "cli",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/cli/cli-init.md"
|
||||
}
|
||||
]
|
||||
},
|
  "general": {
    "_root": [
      {
        "name": "enhance-prompt",
        "command": "/enhance-prompt",
        "description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
        "arguments": "user input to enhance",
        "category": "general",
        "subcategory": null,
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/enhance-prompt.md"
      },
      {
        "name": "version",
        "command": "/version",
        "description": "Display Claude Code version information and check for updates",
        "arguments": "",
        "category": "general",
        "subcategory": null,
        "usage_scenario": "general",
        "difficulty": "Beginner",
        "source": "../../../commands/version.md"
      }
    ]
  },
  "issue": {
    "_root": [
      {
        "name": "issue:discover",
        "command": "/issue:discover",
        "description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
        "arguments": "<path-pattern> [--perspectives=bug,ux,...] [--external]",
        "category": "issue",
        "subcategory": null,
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/issue/discover.md"
      },
      {
        "name": "execute",
        "command": "/issue:execute",
        "description": "Execute queue with codex using DAG-based parallel orchestration (solution-level)",
        "arguments": "[--worktree] [--queue <queue-id>]",
        "category": "issue",
        "subcategory": null,
        "usage_scenario": "implementation",
        "difficulty": "Intermediate",
        "source": "../../../commands/issue/execute.md"
      },
      {
        "name": "new",
        "command": "/issue:new",
        "description": "Create structured issue from GitHub URL or text description",
        "arguments": "<github-url | text-description> [--priority 1-5]",
        "category": "issue",
        "subcategory": null,
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/issue/new.md"
      },
      {
        "name": "plan",
        "command": "/issue:plan",
        "description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
        "arguments": "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]",
        "category": "issue",
        "subcategory": null,
        "usage_scenario": "planning",
        "difficulty": "Intermediate",
        "source": "../../../commands/issue/plan.md"
      },
      {
        "name": "queue",
        "command": "/issue:queue",
        "description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
        "arguments": "[--rebuild] [--issue <id>]",
        "category": "issue",
        "subcategory": null,
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/issue/queue.md"
      }
    ]
  },
  "memory": {
    "_root": [
      {
        "name": "code-map-memory",
        "command": "/memory:code-map-memory",
        "description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)",
        "arguments": "\\\"feature-keyword\\\" [--regenerate] [--tool <gemini|qwen>]",
        "category": "memory",
        "subcategory": null,
        "usage_scenario": "documentation",
        "difficulty": "Intermediate",
        "source": "../../../commands/memory/code-map-memory.md"
      },
      {
        "name": "compact",
        "command": "/memory:compact",
        "description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
        "arguments": "[optional: session description]",
        "category": "memory",
        "subcategory": null,
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/memory/compact.md"
      },
      {
        "name": "docs-full-cli",
        "command": "/memory:docs-full-cli",
        "description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
        "arguments": "[path] [--tool <gemini|qwen|codex>]",
        "category": "memory",
        "subcategory": null,
        "usage_scenario": "documentation",
        "difficulty": "Intermediate",
        "source": "../../../commands/memory/docs-full-cli.md"
      },
      {
        "name": "docs-related-cli",
        "command": "/memory:docs-related-cli",
        "description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
        "arguments": "[--tool <gemini|qwen|codex>]",
        "category": "memory",
        "subcategory": null,
        "usage_scenario": "documentation",
        "difficulty": "Intermediate",
        "source": "../../../commands/memory/docs-related-cli.md"
      },
      {
        "name": "docs",
        "command": "/memory:docs",
        "description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
        "arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
        "category": "memory",
        "subcategory": null,
        "usage_scenario": "documentation",
        "difficulty": "Intermediate",
        "source": "../../../commands/memory/docs.md"
      },
      {
        "name": "load-skill-memory",
        "command": "/memory:load-skill-memory",
        "description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords",
        "arguments": "[skill_name] \\\"task intent description\\\"",
        "category": "memory",
        "subcategory": null,
        "usage_scenario": "documentation",
        "difficulty": "Intermediate",
        "source": "../../../commands/memory/load-skill-memory.md"
      },
      {
        "name": "load",
        "command": "/memory:load",
        "description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
        "arguments": "[--tool gemini|qwen] \\\"task context description\\\"",
        "category": "memory",
        "subcategory": null,
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/memory/load.md"
      },
      {
        "name": "skill-memory",
        "command": "/memory:skill-memory",
        "description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)",
        "arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]",
        "category": "memory",
        "subcategory": null,
        "usage_scenario": "documentation",
        "difficulty": "Intermediate",
        "source": "../../../commands/memory/skill-memory.md"
      },
      {
        "name": "style-skill-memory",
        "command": "/memory:style-skill-memory",
        "description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
        "arguments": "[package-name] [--regenerate]",
        "category": "memory",
        "subcategory": null,
        "usage_scenario": "documentation",
        "difficulty": "Intermediate",
        "source": "../../../commands/memory/style-skill-memory.md"
      },
      {
        "name": "swagger-docs",
        "command": "/memory:swagger-docs",
        "description": "Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests",
        "arguments": "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]",
        "category": "memory",
        "subcategory": null,
        "usage_scenario": "documentation",
        "difficulty": "Intermediate",
        "source": "../../../commands/memory/swagger-docs.md"
      },
      {
        "name": "tech-research-rules",
        "command": "/memory:tech-research-rules",
        "description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
        "arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
        "category": "memory",
        "subcategory": null,
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/memory/tech-research-rules.md"
      },
      {
        "name": "update-full",
        "command": "/memory:update-full",
        "description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
        "arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
        "category": "memory",
        "subcategory": null,
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/memory/update-full.md"
      },
      {
        "name": "update-related",
        "command": "/memory:update-related",
        "description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
        "arguments": "[--tool gemini|qwen|codex]",
        "category": "memory",
        "subcategory": null,
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/memory/update-related.md"
      },
      {
        "name": "workflow-skill-memory",
        "command": "/memory:workflow-skill-memory",
        "description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)",
        "arguments": "session <session-id> | all",
        "category": "memory",
        "subcategory": null,
        "usage_scenario": "documentation",
        "difficulty": "Intermediate",
        "source": "../../../commands/memory/workflow-skill-memory.md"
      }
    ]
  },
  "task": {
    "_root": [
      {
        "name": "breakdown",
        "command": "/task:breakdown",
        "description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order",
        "arguments": "task-id",
        "category": "task",
        "subcategory": null,
        "usage_scenario": "planning",
        "difficulty": "Intermediate",
        "source": "../../../commands/task/breakdown.md"
      },
      {
        "name": "create",
        "command": "/task:create",
        "description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
        "arguments": "\\\"task title\\\"",
        "category": "task",
        "subcategory": null,
        "usage_scenario": "implementation",
        "difficulty": "Intermediate",
        "source": "../../../commands/task/create.md"
      },
      {
        "name": "execute",
        "command": "/task:execute",
        "description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
        "arguments": "task-id",
        "category": "task",
        "subcategory": null,
        "usage_scenario": "implementation",
        "difficulty": "Intermediate",
        "source": "../../../commands/task/execute.md"
      },
      {
        "name": "replan",
        "command": "/task:replan",
        "description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json",
        "arguments": "task-id [\\\"text\\\"|file.md] | --batch [verification-report.md]",
        "category": "task",
        "subcategory": null,
        "usage_scenario": "planning",
        "difficulty": "Intermediate",
        "source": "../../../commands/task/replan.md"
      }
    ]
  },
  "workflow": {
    "_root": [
      {
        "name": "action-plan-verify",
        "command": "/workflow:action-plan-verify",
        "description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
        "arguments": "[optional: --session session-id]",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "planning",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/action-plan-verify.md"
      },
      {
        "name": "clean",
        "command": "/workflow:clean",
        "description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
        "arguments": "[--dry-run] [\\\"focus area\\\"]",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/clean.md"
      },
      {
        "name": "debug",
        "command": "/workflow:debug",
        "description": "Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved",
        "arguments": "\\\"bug description or error message\\\"",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/debug.md"
      },
      {
        "name": "execute",
        "command": "/workflow:execute",
        "description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
        "arguments": "[--resume-session=\\\"session-id\\\"]",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "implementation",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/execute.md"
      },
      {
        "name": "init",
        "command": "/workflow:init",
        "description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
        "arguments": "[--regenerate]",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/init.md"
      },
      {
        "name": "lite-execute",
        "command": "/workflow:lite-execute",
        "description": "Execute tasks based on in-memory plan, prompt description, or file content",
        "arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "implementation",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/lite-execute.md"
      },
      {
        "name": "lite-fix",
        "command": "/workflow:lite-fix",
        "description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
        "arguments": "[--hotfix] \\\"bug description or issue reference\\\"",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/lite-fix.md"
      },
      {
        "name": "lite-plan",
        "command": "/workflow:lite-plan",
        "description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
        "arguments": "[-e|--explore] \\\"task description\\\"|file.md",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "planning",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/lite-plan.md"
      },
      {
        "name": "plan",
        "command": "/workflow:plan",
        "description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
        "arguments": "\\\"text description\\\"|file.md",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "planning",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/plan.md"
      },
      {
        "name": "replan",
        "command": "/workflow:replan",
        "description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
        "arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "planning",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/replan.md"
      },
      {
        "name": "review-fix",
        "command": "/workflow:review-fix",
        "description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
        "arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "analysis",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/review-fix.md"
      },
      {
        "name": "review-module-cycle",
        "command": "/workflow:review-module-cycle",
        "description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
        "arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "analysis",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/review-module-cycle.md"
      },
      {
        "name": "review-session-cycle",
        "command": "/workflow:review-session-cycle",
        "description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
        "arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "session-management",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/review-session-cycle.md"
      },
      {
        "name": "review",
        "command": "/workflow:review",
        "description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
        "arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "analysis",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/review.md"
      },
      {
        "name": "tdd-plan",
        "command": "/workflow:tdd-plan",
        "description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
        "arguments": "\\\"feature description\\\"|file.md",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "planning",
        "difficulty": "Advanced",
        "source": "../../../commands/workflow/tdd-plan.md"
      },
      {
        "name": "tdd-verify",
        "command": "/workflow:tdd-verify",
        "description": "Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis",
        "arguments": "[optional: WFS-session-id]",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "testing",
        "difficulty": "Advanced",
        "source": "../../../commands/workflow/tdd-verify.md"
      },
      {
        "name": "test-cycle-execute",
        "command": "/workflow:test-cycle-execute",
        "description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
        "arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "implementation",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/test-cycle-execute.md"
      },
      {
        "name": "test-fix-gen",
        "command": "/workflow:test-fix-gen",
        "description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
        "arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "testing",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/test-fix-gen.md"
      },
      {
        "name": "test-gen",
        "command": "/workflow:test-gen",
        "description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
        "arguments": "source-session-id",
        "category": "workflow",
        "subcategory": null,
        "usage_scenario": "testing",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/test-gen.md"
      }
    ],
    "brainstorm": [
      {
        "name": "api-designer",
        "command": "/workflow:brainstorm:api-designer",
        "description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective",
        "arguments": "optional topic - uses existing framework if available",
        "category": "workflow",
        "subcategory": "brainstorm",
        "usage_scenario": "planning",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/brainstorm/api-designer.md"
      },
      {
        "name": "artifacts",
        "command": "/workflow:brainstorm:artifacts",
        "description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
        "arguments": "topic or challenge description [--count N]",
        "category": "workflow",
        "subcategory": "brainstorm",
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/brainstorm/artifacts.md"
      },
      {
        "name": "auto-parallel",
        "command": "/workflow:brainstorm:auto-parallel",
        "description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
        "arguments": "topic or challenge description [--count N]",
        "category": "workflow",
        "subcategory": "brainstorm",
        "usage_scenario": "general",
        "difficulty": "Advanced",
        "source": "../../../commands/workflow/brainstorm/auto-parallel.md"
      },
      {
        "name": "data-architect",
        "command": "/workflow:brainstorm:data-architect",
        "description": "Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective",
        "arguments": "optional topic - uses existing framework if available",
        "category": "workflow",
        "subcategory": "brainstorm",
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/brainstorm/data-architect.md"
      },
      {
        "name": "product-manager",
        "command": "/workflow:brainstorm:product-manager",
        "description": "Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective",
        "arguments": "optional topic - uses existing framework if available",
        "category": "workflow",
        "subcategory": "brainstorm",
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/brainstorm/product-manager.md"
      },
      {
        "name": "product-owner",
        "command": "/workflow:brainstorm:product-owner",
        "description": "Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective",
        "arguments": "optional topic - uses existing framework if available",
        "category": "workflow",
        "subcategory": "brainstorm",
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/brainstorm/product-owner.md"
      },
      {
        "name": "scrum-master",
        "command": "/workflow:brainstorm:scrum-master",
        "description": "Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective",
        "arguments": "optional topic - uses existing framework if available",
        "category": "workflow",
        "subcategory": "brainstorm",
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/brainstorm/scrum-master.md"
      },
      {
        "name": "subject-matter-expert",
        "command": "/workflow:brainstorm:subject-matter-expert",
        "description": "Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective",
        "arguments": "optional topic - uses existing framework if available",
        "category": "workflow",
        "subcategory": "brainstorm",
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/brainstorm/subject-matter-expert.md"
      },
      {
        "name": "synthesis",
        "command": "/workflow:brainstorm:synthesis",
        "description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
        "arguments": "[optional: --session session-id]",
        "category": "workflow",
        "subcategory": "brainstorm",
        "usage_scenario": "general",
        "difficulty": "Advanced",
        "source": "../../../commands/workflow/brainstorm/synthesis.md"
      },
      {
        "name": "system-architect",
        "command": "/workflow:brainstorm:system-architect",
        "description": "Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective",
        "arguments": "optional topic - uses existing framework if available",
        "category": "workflow",
        "subcategory": "brainstorm",
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/brainstorm/system-architect.md"
      },
      {
        "name": "ui-designer",
        "command": "/workflow:brainstorm:ui-designer",
        "description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective",
        "arguments": "optional topic - uses existing framework if available",
        "category": "workflow",
        "subcategory": "brainstorm",
        "usage_scenario": "planning",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/brainstorm/ui-designer.md"
      },
      {
        "name": "ux-expert",
        "command": "/workflow:brainstorm:ux-expert",
        "description": "Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective",
        "arguments": "optional topic - uses existing framework if available",
        "category": "workflow",
        "subcategory": "brainstorm",
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/brainstorm/ux-expert.md"
      }
    ],
    "session": [
      {
        "name": "complete",
        "command": "/workflow:session:complete",
        "description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
        "arguments": "",
        "category": "workflow",
        "subcategory": "session",
        "usage_scenario": "session-management",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/session/complete.md"
      },
      {
        "name": "list",
        "command": "/workflow:session:list",
        "description": "List all workflow sessions with status filtering, shows session metadata and progress information",
        "arguments": "",
        "category": "workflow",
        "subcategory": "session",
        "usage_scenario": "general",
        "difficulty": "Beginner",
        "source": "../../../commands/workflow/session/list.md"
      },
      {
        "name": "resume",
        "command": "/workflow:session:resume",
        "description": "Resume the most recently paused workflow session with automatic session discovery and status update",
        "arguments": "",
        "category": "workflow",
        "subcategory": "session",
        "usage_scenario": "session-management",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/session/resume.md"
      },
      {
        "name": "solidify",
        "command": "/workflow:session:solidify",
        "description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
        "arguments": "[--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\\"",
        "category": "workflow",
        "subcategory": "session",
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/session/solidify.md"
      },
      {
        "name": "start",
        "command": "/workflow:session:start",
        "description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
        "arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
        "category": "workflow",
        "subcategory": "session",
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/session/start.md"
      }
    ],
    "tools": [
      {
        "name": "conflict-resolution",
        "command": "/workflow:tools:conflict-resolution",
        "description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
        "arguments": "--session WFS-session-id --context path/to/context-package.json",
        "category": "workflow",
        "subcategory": "tools",
        "usage_scenario": "general",
        "difficulty": "Advanced",
        "source": "../../../commands/workflow/tools/conflict-resolution.md"
      },
      {
        "name": "gather",
        "command": "/workflow:tools:gather",
        "description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
        "arguments": "--session WFS-session-id \\\"task description\\\"",
        "category": "workflow",
        "subcategory": "tools",
        "usage_scenario": "general",
        "difficulty": "Intermediate",
        "source": "../../../commands/workflow/tools/context-gather.md"
      },
      {
        "name": "task-generate-agent",
        "command": "/workflow:tools:task-generate-agent",
        "description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
        "arguments": "--session WFS-session-id",
        "category": "workflow",
        "subcategory": "tools",
        "usage_scenario": "implementation",
        "difficulty": "Advanced",
        "source": "../../../commands/workflow/tools/task-generate-agent.md"
      },
      {
        "name": "task-generate-tdd",
        "command": "/workflow:tools:task-generate-tdd",
        "description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
        "arguments": "--session WFS-session-id",
        "category": "workflow",
        "subcategory": "tools",
        "usage_scenario": "implementation",
        "difficulty": "Advanced",
        "source": "../../../commands/workflow/tools/task-generate-tdd.md"
      },
      {
        "name": "tdd-coverage-analysis",
        "command": "/workflow:tools:tdd-coverage-analysis",
        "description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
        "arguments": "--session WFS-session-id",
        "category": "workflow",
        "subcategory": "tools",
        "usage_scenario": "testing",
        "difficulty": "Advanced",
        "source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
      },
      {
        "name": "test-concept-enhanced",
        "command": "/workflow:tools:test-concept-enhanced",
        "description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
        "arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
        "category": "workflow",
        "subcategory": "tools",
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
|
||||
},
|
||||
{
|
||||
"name": "test-context-gather",
|
||||
"command": "/workflow:tools:test-context-gather",
|
||||
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
|
||||
"arguments": "--session WFS-test-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-context-gather.md"
|
||||
},
|
||||
{
|
||||
"name": "test-task-generate",
|
||||
"command": "/workflow:tools:test-task-generate",
|
||||
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
|
||||
"arguments": "--session WFS-test-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-task-generate.md"
|
||||
}
|
||||
],
|
||||
"ui-design": [
|
||||
{
|
||||
"name": "animation-extract",
|
||||
"command": "/workflow:ui-design:animation-extract",
|
||||
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/animation-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:codify-style",
|
||||
"command": "/workflow:ui-design:codify-style",
|
||||
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
|
||||
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/codify-style.md"
|
||||
},
|
||||
{
|
||||
"name": "design-sync",
|
||||
"command": "/workflow:ui-design:design-sync",
|
||||
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
|
||||
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/design-sync.md"
|
||||
},
|
||||
{
|
||||
"name": "explore-auto",
|
||||
"command": "/workflow:ui-design:explore-auto",
|
||||
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
|
||||
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/explore-auto.md"
|
||||
},
|
||||
{
|
||||
"name": "generate",
|
||||
"command": "/workflow:ui-design:generate",
|
||||
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
|
||||
"arguments": "[--design-id <id>] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/generate.md"
|
||||
},
|
||||
{
|
||||
"name": "imitate-auto",
|
||||
"command": "/workflow:ui-design:imitate-auto",
|
||||
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
|
||||
"arguments": "[--input \"<value>\"] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:import-from-code",
|
||||
"command": "/workflow:ui-design:import-from-code",
|
||||
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/import-from-code.md"
|
||||
},
|
||||
{
|
||||
"name": "layout-extract",
|
||||
"command": "/workflow:ui-design:layout-extract",
|
||||
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/layout-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:reference-page-generator",
|
||||
"command": "/workflow:ui-design:reference-page-generator",
|
||||
"description": "Generate multi-component reference pages and documentation from design run extraction",
|
||||
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
|
||||
},
|
||||
{
|
||||
"name": "style-extract",
|
||||
"command": "/workflow:ui-design:style-extract",
|
||||
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/style-extract.md"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
@@ -1,896 +0,0 @@
{
"general": [
{
"name": "cli-init",
"command": "/cli:cli-init",
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
"category": "cli",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
},
{
"name": "enhance-prompt",
"command": "/enhance-prompt",
"description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
"arguments": "user input to enhance",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/enhance-prompt.md"
},
{
"name": "issue:discover",
"command": "/issue:discover",
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
"arguments": "<path-pattern> [--perspectives=bug,ux,...] [--external]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover.md"
},
{
"name": "new",
"command": "/issue:new",
"description": "Create structured issue from GitHub URL or text description",
"arguments": "<github-url | text-description> [--priority 1-5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/new.md"
},
{
"name": "queue",
"command": "/issue:queue",
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
"arguments": "[--rebuild] [--issue <id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/queue.md"
},
{
"name": "compact",
"command": "/memory:compact",
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
"arguments": "[optional: session description]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/compact.md"
},
{
"name": "load",
"command": "/memory:load",
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
"arguments": "[--tool gemini|qwen] \\\"task context description\\\"",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load.md"
},
{
"name": "tech-research-rules",
"command": "/memory:tech-research-rules",
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/tech-research-rules.md"
},
{
"name": "update-full",
"command": "/memory:update-full",
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-full.md"
},
{
"name": "update-related",
"command": "/memory:update-related",
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
"arguments": "[--tool gemini|qwen|codex]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-related.md"
},
{
"name": "version",
"command": "/version",
"description": "Display Claude Code version information and check for updates",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/version.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "data-architect",
"command": "/workflow:brainstorm:data-architect",
"description": "Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/data-architect.md"
},
{
"name": "product-manager",
"command": "/workflow:brainstorm:product-manager",
"description": "Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-manager.md"
},
{
"name": "product-owner",
"command": "/workflow:brainstorm:product-owner",
"description": "Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-owner.md"
},
{
"name": "scrum-master",
"command": "/workflow:brainstorm:scrum-master",
"description": "Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/scrum-master.md"
},
{
"name": "subject-matter-expert",
"command": "/workflow:brainstorm:subject-matter-expert",
"description": "Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/subject-matter-expert.md"
},
{
"name": "synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
},
{
"name": "system-architect",
"command": "/workflow:brainstorm:system-architect",
"description": "Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/system-architect.md"
},
{
"name": "ux-expert",
"command": "/workflow:brainstorm:ux-expert",
"description": "Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ux-expert.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
"arguments": "[--dry-run] [\\\"focus area\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug",
"command": "/workflow:debug",
"description": "Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved",
"arguments": "\\\"bug description or error message\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
"arguments": "[--regenerate]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[--hotfix] \\\"bug description or issue reference\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "list",
"command": "/workflow:session:list",
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "solidify",
"command": "/workflow:session:solidify",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
"arguments": "[--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\\"",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/solidify.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "conflict-resolution",
"command": "/workflow:tools:conflict-resolution",
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
"arguments": "--session WFS-session-id --context path/to/context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/conflict-resolution.md"
},
{
"name": "gather",
"command": "/workflow:tools:gather",
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
"arguments": "--session WFS-session-id \\\"task description\\\"",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/context-gather.md"
},
{
"name": "animation-extract",
"command": "/workflow:ui-design:animation-extract",
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/animation-extract.md"
},
{
"name": "explore-auto",
"command": "/workflow:ui-design:explore-auto",
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/explore-auto.md"
},
{
"name": "imitate-auto",
"command": "/workflow:ui-design:imitate-auto",
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
"arguments": "[--input \"<value>\"] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
},
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/layout-extract.md"
},
{
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/style-extract.md"
}
],
"implementation": [
{
"name": "execute",
"command": "/issue:execute",
"description": "Execute queue with codex using DAG-based parallel orchestration (solution-level)",
"arguments": "[--worktree] [--queue <queue-id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/issue/execute.md"
},
{
"name": "create",
"command": "/task:create",
"description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
"arguments": "\\\"task title\\\"",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/create.md"
},
{
"name": "execute",
"command": "/task:execute",
"description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/execute.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
"arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-agent.md"
},
{
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
},
{
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-task-generate.md"
},
{
"name": "generate",
"command": "/workflow:ui-design:generate",
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
"arguments": "[--design-id <id>] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/generate.md"
}
],
"planning": [
{
"name": "plan",
"command": "/issue:plan",
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
"arguments": "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/plan.md"
},
{
"name": "breakdown",
"command": "/task:breakdown",
"description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/task/breakdown.md"
},
{
"name": "replan",
"command": "/task:replan",
"description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json",
"arguments": "task-id [\\\"text\\\"|file.md] | --batch [verification-report.md]",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/replan.md"
|
||||
},
|
||||
{
|
||||
"name": "action-plan-verify",
|
||||
"command": "/workflow:action-plan-verify",
|
||||
"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
|
||||
"arguments": "[optional: --session session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/action-plan-verify.md"
|
||||
},
|
||||
{
|
||||
"name": "api-designer",
|
||||
"command": "/workflow:brainstorm:api-designer",
|
||||
"description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/api-designer.md"
|
||||
},
|
||||
{
|
||||
"name": "ui-designer",
|
||||
"command": "/workflow:brainstorm:ui-designer",
|
||||
"description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/ui-designer.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-plan",
|
||||
"command": "/workflow:lite-plan",
|
||||
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
|
||||
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/lite-plan.md"
|
||||
},
|
||||
{
|
||||
"name": "plan",
|
||||
"command": "/workflow:plan",
|
||||
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
|
||||
"arguments": "\\\"text description\\\"|file.md",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/plan.md"
|
||||
},
|
||||
{
|
||||
"name": "replan",
|
||||
"command": "/workflow:replan",
|
||||
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
|
||||
"arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/replan.md"
|
||||
},
|
||||
{
|
||||
"name": "tdd-plan",
|
||||
"command": "/workflow:tdd-plan",
|
||||
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
|
||||
"arguments": "\\\"feature description\\\"|file.md",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tdd-plan.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:codify-style",
|
||||
"command": "/workflow:ui-design:codify-style",
|
||||
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
|
||||
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/codify-style.md"
|
||||
},
|
||||
{
|
||||
"name": "design-sync",
|
||||
"command": "/workflow:ui-design:design-sync",
|
||||
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
|
||||
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/design-sync.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:import-from-code",
|
||||
"command": "/workflow:ui-design:import-from-code",
|
||||
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/import-from-code.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:reference-page-generator",
|
||||
"command": "/workflow:ui-design:reference-page-generator",
|
||||
"description": "Generate multi-component reference pages and documentation from design run extraction",
|
||||
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
|
||||
}
|
||||
],
|
||||
"documentation": [
|
||||
{
|
||||
"name": "code-map-memory",
|
||||
"command": "/memory:code-map-memory",
|
||||
"description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)",
|
||||
"arguments": "\\\"feature-keyword\\\" [--regenerate] [--tool <gemini|qwen>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/code-map-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "docs-full-cli",
|
||||
"command": "/memory:docs-full-cli",
|
||||
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/docs-full-cli.md"
|
||||
},
|
||||
{
|
||||
"name": "docs-related-cli",
|
||||
"command": "/memory:docs-related-cli",
|
||||
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
|
||||
"arguments": "[--tool <gemini|qwen|codex>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/docs-related-cli.md"
|
||||
},
|
||||
{
|
||||
"name": "docs",
|
||||
"command": "/memory:docs",
|
||||
"description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/docs.md"
|
||||
},
|
||||
{
|
||||
"name": "load-skill-memory",
|
||||
"command": "/memory:load-skill-memory",
|
||||
"description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords",
|
||||
"arguments": "[skill_name] \\\"task intent description\\",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/load-skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "skill-memory",
|
||||
"command": "/memory:skill-memory",
|
||||
"description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "style-skill-memory",
|
||||
"command": "/memory:style-skill-memory",
|
||||
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
|
||||
"arguments": "[package-name] [--regenerate]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/style-skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "swagger-docs",
|
||||
"command": "/memory:swagger-docs",
|
||||
"description": "Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/swagger-docs.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow-skill-memory",
|
||||
"command": "/memory:workflow-skill-memory",
|
||||
"description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)",
|
||||
"arguments": "session <session-id> | all",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/workflow-skill-memory.md"
|
||||
}
|
||||
],
|
||||
"analysis": [
|
||||
{
|
||||
"name": "review-fix",
|
||||
"command": "/workflow:review-fix",
|
||||
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
|
||||
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "analysis",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-fix.md"
|
||||
},
|
||||
{
|
||||
"name": "review-module-cycle",
|
||||
"command": "/workflow:review-module-cycle",
|
||||
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
|
||||
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "analysis",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-module-cycle.md"
|
||||
},
|
||||
{
|
||||
"name": "review",
|
||||
"command": "/workflow:review",
|
||||
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
|
||||
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "analysis",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review.md"
|
||||
}
|
||||
],
|
||||
"session-management": [
|
||||
{
|
||||
"name": "review-session-cycle",
|
||||
"command": "/workflow:review-session-cycle",
|
||||
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
|
||||
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-session-cycle.md"
|
||||
},
|
||||
{
|
||||
"name": "complete",
|
||||
"command": "/workflow:session:complete",
|
||||
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/complete.md"
|
||||
},
|
||||
{
|
||||
"name": "resume",
|
||||
"command": "/workflow:session:resume",
|
||||
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/resume.md"
|
||||
}
|
||||
],
|
||||
"testing": [
|
||||
{
|
||||
"name": "tdd-verify",
|
||||
"command": "/workflow:tdd-verify",
|
||||
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis",
|
||||
"arguments": "[optional: WFS-session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tdd-verify.md"
|
||||
},
|
||||
{
|
||||
"name": "test-fix-gen",
|
||||
"command": "/workflow:test-fix-gen",
|
||||
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
|
||||
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/test-fix-gen.md"
|
||||
},
|
||||
{
|
||||
"name": "test-gen",
|
||||
"command": "/workflow:test-gen",
|
||||
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
|
||||
"arguments": "source-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/test-gen.md"
|
||||
},
|
||||
{
|
||||
"name": "tdd-coverage-analysis",
|
||||
"command": "/workflow:tools:tdd-coverage-analysis",
|
||||
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
|
||||
"arguments": "--session WFS-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
|
||||
},
|
||||
{
|
||||
"name": "test-concept-enhanced",
|
||||
"command": "/workflow:tools:test-concept-enhanced",
|
||||
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
|
||||
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
|
||||
},
|
||||
{
|
||||
"name": "test-context-gather",
|
||||
"command": "/workflow:tools:test-context-gather",
|
||||
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
|
||||
"arguments": "--session WFS-test-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-context-gather.md"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,160 +0,0 @@
{
"workflow:plan": {
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather",
"workflow:tools:conflict-resolution",
"workflow:tools:task-generate-agent"
],
"next_steps": [
"workflow:action-plan-verify",
"workflow:status",
"workflow:execute"
],
"alternatives": [
"workflow:tdd-plan"
],
"prerequisites": []
},
"workflow:tdd-plan": {
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather",
"workflow:tools:task-generate-tdd"
],
"next_steps": [
"workflow:tdd-verify",
"workflow:status",
"workflow:execute"
],
"alternatives": [
"workflow:plan"
],
"prerequisites": []
},
"workflow:execute": {
"prerequisites": [
"workflow:plan",
"workflow:tdd-plan"
],
"related": [
"workflow:status",
"workflow:resume"
],
"next_steps": [
"workflow:review",
"workflow:tdd-verify"
]
},
"workflow:action-plan-verify": {
"prerequisites": [
"workflow:plan"
],
"next_steps": [
"workflow:execute"
],
"related": [
"workflow:status"
]
},
"workflow:tdd-verify": {
"prerequisites": [
"workflow:execute"
],
"related": [
"workflow:tools:tdd-coverage-analysis"
]
},
"workflow:session:start": {
"next_steps": [
"workflow:plan",
"workflow:execute"
],
"related": [
"workflow:session:list",
"workflow:session:resume"
]
},
"workflow:session:resume": {
"alternatives": [
"workflow:resume"
],
"related": [
"workflow:session:list",
"workflow:status"
]
},
"workflow:lite-plan": {
"calls_internally": [
"workflow:lite-execute"
],
"next_steps": [
"workflow:lite-execute",
"workflow:status"
],
"alternatives": [
"workflow:plan"
],
"prerequisites": []
},
"workflow:lite-fix": {
"next_steps": [
"workflow:lite-execute",
"workflow:status"
],
"alternatives": [
"workflow:lite-plan"
],
"related": [
"workflow:test-cycle-execute"
]
},
"workflow:lite-execute": {
"prerequisites": [
"workflow:lite-plan",
"workflow:lite-fix"
],
"related": [
"workflow:execute",
"workflow:status"
]
},
"workflow:review-session-cycle": {
"prerequisites": [
"workflow:execute"
],
"next_steps": [
"workflow:review-fix"
],
"related": [
"workflow:review-module-cycle"
]
},
"workflow:review-fix": {
"prerequisites": [
"workflow:review-module-cycle",
"workflow:review-session-cycle"
],
"related": [
"workflow:test-cycle-execute"
]
},
"memory:docs": {
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather"
],
"next_steps": [
"workflow:execute"
]
},
"memory:skill-memory": {
"next_steps": [
"workflow:plan",
"cli:analyze"
],
"related": [
"memory:load-skill-memory"
]
}
}
@@ -1,112 +0,0 @@
[
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[--hotfix] \\\"bug description or issue reference\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "docs",
"command": "/memory:docs",
"description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "action-plan-verify",
"command": "/workflow:action-plan-verify",
"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/action-plan-verify.md"
},
{
"name": "version",
"command": "/version",
"description": "Display Claude Code version information and check for updates",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/version.md"
}
]
.claude/skills/ccw-loop/README.md (new file, 303 lines)
@@ -0,0 +1,303 @@
# CCW Loop Skill

A stateless, iterative development loop workflow with three phases: Develop, Debug, and Validate. Each phase records its progress in its own files.

## Overview

CCW Loop is an autonomous-mode skill that helps developers complete development tasks systematically through a file-driven, stateless loop.

### Core Features

1. **Stateless loop**: each run reads its state from files; nothing depends on in-memory state
2. **File-driven**: all progress is recorded in Markdown files, so it is auditable and reviewable
3. **Gemini-assisted**: key decision points use CLI tools for deep analysis
4. **Resumable**: the loop can be continued after an interruption at any point
5. **Dual mode**: supports both interactive and automatic looping

### The Three Phases

- **Develop**: task breakdown → code implementation → progress recording
- **Debug**: hypothesis generation → evidence collection → root-cause analysis → fix verification
- **Validate**: test execution → coverage check → quality assessment

## Installation

Already included in `.claude/skills/ccw-loop/`; no additional installation is required.

## Usage

### Basic Usage

```bash
# Start a new loop
/ccw-loop "Implement user authentication"

# Resume an existing loop
/ccw-loop --resume LOOP-auth-2026-01-22

# Automatic loop mode
/ccw-loop --auto "Fix the login bug and add tests"
```

### Interactive Flow

```
1. Start: /ccw-loop "task description"
2. Initialize: the task is analyzed and a subtask list is generated automatically
3. Show the menu:
   - 📝 Continue development (Develop)
   - 🔍 Start debugging (Debug)
   - ✅ Run validation (Validate)
   - 📊 View details (Status)
   - 🏁 Complete the loop (Complete)
   - 🚪 Exit (Exit)
4. Execute the selected action
5. Repeat steps 3-4 until done
```

### Automatic Loop Flow

```
Develop (all tasks) → Debug (if needed) → Validate → Complete
```

## Directory Structure

```
.workflow/.loop/{session-id}/
├── meta.json              # Session metadata (immutable)
├── state.json             # Current state (updated on every run)
├── summary.md             # Completion report (generated at the end)
├── develop/
│   ├── progress.md        # Development progress timeline
│   ├── tasks.json         # Task list
│   └── changes.log        # Code change log (NDJSON)
├── debug/
│   ├── understanding.md   # Evolving-understanding document
│   ├── hypotheses.json    # Hypothesis history
│   └── debug.log          # Debug log (NDJSON)
└── validate/
    ├── validation.md      # Validation report
    ├── test-results.json  # Test results
    └── coverage.json      # Coverage data
```
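
Since the change log is NDJSON (one JSON object per line), appending an entry is a single `printf`. The sketch below is only an illustration: the field names (`ts`, `task`, `file`, `action`) are assumptions, not the skill's actual schema.

```bash
# Hypothetical sketch: append one NDJSON record to the develop change log.
# Field names are illustrative, not the skill's real schema.
LOOP_DIR=".workflow/.loop/LOOP-demo/develop"
mkdir -p "$LOOP_DIR"
printf '{"ts":"2026-01-22T10:00:00Z","task":"task-001","file":"src/auth.ts","action":"edit"}\n' \
  >> "$LOOP_DIR/changes.log"
# Each line is a standalone JSON object, so the log can be replayed line by line.
wc -l < "$LOOP_DIR/changes.log"
```

Because every record is self-contained, a crashed run never corrupts earlier entries; the file only ever grows by whole lines.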

## Action Reference

| Action | Description | Trigger |
|--------|-------------|---------|
| action-init | Initialize the session | First launch |
| action-menu | Show the action menu | Every loop iteration in interactive mode |
| action-develop-with-file | Execute development tasks | Pending tasks exist |
| action-debug-with-file | Hypothesis-driven debugging | Debugging is needed |
| action-validate-with-file | Run test validation | Validation is needed |
| action-complete | Finish and generate the report | All tasks completed |

See [specs/action-catalog.md](specs/action-catalog.md) for details.
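
The stateless dispatch behind these actions boils down to a read-decide-act-write cycle over `state.json`. A simplified illustration follows; the state keys and selection logic shown are assumptions for demonstration, not the skill's real implementation.

```bash
# Hypothetical sketch of one stateless loop iteration:
# read state from disk, pick the next action, persist the new state.
mkdir -p loop-demo && cd loop-demo
echo '{"phase": "develop", "pending_tasks": 2}' > state.json

# Decide the next action purely from the file contents (no in-memory state).
if grep -q '"pending_tasks": 0' state.json; then
  action="action-validate-with-file"
else
  action="action-develop-with-file"
fi
echo "next action: $action"

# After the action runs, the updated state is written back for the next run.
echo '{"phase": "develop", "pending_tasks": 1}' > state.json
```

Because each iteration starts from the file, any run (or any interruption) leaves the loop in a state the next run can pick up unchanged.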
|
||||
|
||||
## CLI Integration
|
||||
|
||||
CCW Loop 在关键决策点集成 CLI 工具:
|
||||
|
||||
### 任务分解 (action-init)
|
||||
```bash
|
||||
ccw cli -p "PURPOSE: 分解开发任务..."
|
||||
--tool gemini
|
||||
--mode analysis
|
||||
--rule planning-breakdown-task-steps
|
||||
```
|
||||
|
||||
### 代码实现 (action-develop)
|
||||
```bash
|
||||
ccw cli -p "PURPOSE: 实现功能代码..."
|
||||
--tool gemini
|
||||
--mode write
|
||||
--rule development-implement-feature
|
||||
```
|
||||
|
||||
### 假设生成 (action-debug - 探索)
|
||||
```bash
|
||||
ccw cli -p "PURPOSE: Generate debugging hypotheses..."
|
||||
--tool gemini
|
||||
--mode analysis
|
||||
--rule analysis-diagnose-bug-root-cause
|
||||
```
|
||||
|
||||
### 证据分析 (action-debug - 分析)
|
||||
```bash
|
||||
ccw cli -p "PURPOSE: Analyze debug log evidence..."
|
||||
--tool gemini
|
||||
--mode analysis
|
||||
--rule analysis-diagnose-bug-root-cause
|
||||
```
|
||||
|
||||
### 质量评估 (action-validate)
|
||||
```bash
|
||||
ccw cli -p "PURPOSE: Analyze test results and coverage..."
|
||||
--tool gemini
|
||||
--mode analysis
|
||||
--rule analysis-review-code-quality
|
||||
```
|
||||
|
||||
## State Management
|
||||
|
||||
### State Schema
|
||||
|
||||
参见 [phases/state-schema.md](phases/state-schema.md)

### State Transitions

```
pending → running → completed
             ↓
         user_exit
             ↓
          failed
```
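
The diagram above can be read as a small transition table. The sketch below is illustrative only: the `allowed` map is inferred from the diagram, not a documented schema, and `canTransition` is a hypothetical helper name.

```javascript
// Transitions inferred from the state diagram above (assumption, not a spec).
const allowed = {
  pending: ['running'],
  running: ['completed', 'user_exit', 'failed'],
  user_exit: ['failed'],
  completed: [],
  failed: []
}

// Returns true when moving from `from` to `to` is permitted.
function canTransition(from, to) {
  return (allowed[from] || []).includes(to)
}

console.log(canTransition('pending', 'running'))   // true
console.log(canTransition('completed', 'running')) // false
```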

### State Recovery

If `state.json` is corrupted, it can be rebuilt from the other files:

- develop/tasks.json → develop.*
- debug/hypotheses.json → debug.*
- validate/test-results.json → validate.*
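
The mapping above can be sketched as a pure rebuild function. This is a minimal sketch under assumptions: the three arguments are the already-parsed file contents, and any field not shown elsewhere in this document (for example the shape of a test-result entry) is hypothetical.

```javascript
// Rebuild state fragments from the three source files (hedged sketch).
function rebuildState(tasks, hypotheses, testResults) {
  return {
    develop: {
      tasks,
      total_count: tasks.length,
      completed_count: tasks.filter(t => t.status === 'completed').length
    },
    debug: {
      hypotheses: hypotheses.hypotheses || [],
      iteration: hypotheses.iteration || 0
    },
    validate: {
      test_results: testResults,
      // Assumed shape: each result has a boolean `passed` field.
      passed: testResults.length > 0 && testResults.every(r => r.passed)
    }
  }
}
```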

## Examples

### Example 1: Feature Development

```bash
# 1. Start the loop
/ccw-loop "Add user profile page"

# 2. The system initializes and generates tasks:
# - task-001: Create profile component
# - task-002: Add API endpoints
# - task-003: Implement tests

# 3. Choose "Continue development"
# → Execute task-001 (Gemini-assisted implementation)
# → Update progress.md

# 4. Repeat development until all tasks are complete

# 5. Choose "Run validation"
# → Run tests
# → Check coverage
# → Generate validation.md

# 6. Choose "Complete loop"
# → Generate summary.md
# → Ask whether to expand into Issues
```

### Example 2: Bug Fix

```bash
# 1. Start the loop
/ccw-loop "Fix login timeout issue"

# 2. Choose "Start debugging"
# → Enter bug description: "Login times out after 30s"
# → Gemini generates hypotheses (H1, H2, H3)
# → Add NDJSON logging
# → Prompt to reproduce the bug

# 3. Reproduce the bug (operate the application)

# 4. Choose "Start debugging" again
# → Parse debug.log
# → Gemini analyzes the evidence
# → H2 confirmed as root cause
# → Generate fix code
# → Update understanding.md

# 5. Choose "Run validation"
# → Tests pass

# 6. Complete
```

## Templates

- [progress-template.md](templates/progress-template.md): development progress document template
- [understanding-template.md](templates/understanding-template.md): debugging understanding document template
- [validation-template.md](templates/validation-template.md): validation report template

## Specifications

- [loop-requirements.md](specs/loop-requirements.md): loop requirements specification
- [action-catalog.md](specs/action-catalog.md): action catalog

## Integration

### Dashboard Integration

CCW Loop integrates with the Dashboard Loop Monitor:

- Dashboard creates a Loop → triggers this Skill
- state.json → displayed live on the Dashboard
- Task lists are synchronized in both directions
- Control buttons map to actions

### Issue System Integration

After completion, results can be expanded into Issues:

- Dimensions: test, enhance, refactor, doc
- Automatically invokes `/issue:new`
- Context is filled in automatically

## Error Handling

| Situation | Handling |
|-----------|----------|
| Session does not exist | Create a new session |
| state.json corrupted | Rebuild from files |
| CLI tool fails | Fall back to manual mode |
| Tests fail | Loop back to develop/debug |
| >10 iterations | Warn the user, suggest splitting the task |

## Limitations

1. **Single session**: only one active session at a time
2. **Iteration cap**: no more than 10 iterations is recommended
3. **CLI dependency**: some features depend on Gemini CLI availability
4. **Test framework**: requires a test script defined in package.json

## Troubleshooting

### Q: How do I check the current session status?

A: Choose "View details (Status)" from the menu.

### Q: How do I resume an interrupted session?

A: Use the `--resume` flag:
```bash
/ccw-loop --resume LOOP-xxx-2026-01-22
```

### Q: What if the CLI tool fails?

A: The Skill automatically degrades to manual mode and prompts the user for input.

### Q: How do I add a custom action?

A: See the "Action Extensions" section of [specs/action-catalog.md](specs/action-catalog.md).

## Contributing

To add a new feature:

1. Create an action file in `phases/actions/`
2. Update the orchestrator decision logic
3. Add it to action-catalog.md
4. Update action-menu.md

## License

MIT

---

**Version**: 1.0.0
**Last Updated**: 2026-01-22
**Author**: CCW Team
259 .claude/skills/ccw-loop/SKILL.md Normal file
@@ -0,0 +1,259 @@
---
name: ccw-loop
description: Stateless iterative development loop workflow with documented progress. Supports develop, debug, and validate phases with file-based state tracking. Triggers on "ccw-loop", "dev loop", "development loop", "开发循环", "迭代开发".
allowed-tools: Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*), TodoWrite(*)
---

# CCW Loop - Stateless Iterative Development Workflow

A stateless iterative development loop workflow with three phases (develop, debug, validate); each phase records its progress in dedicated files.

## Arguments

| Arg | Required | Description |
|-----|----------|-------------|
| task | No | Task description (for a new loop; mutually exclusive with --loop-id) |
| --loop-id | No | Existing loop ID to continue (from API or a previous session) |
| --auto | No | Auto-cycle mode (develop → debug → validate → complete) |

## Unified Architecture (API + Skill Integration)

```
┌─────────────────────────────────────────────────────────────────┐
│                        Dashboard (UI)                           │
│   [Create] [Start] [Pause] [Resume] [Stop] [View Progress]      │
└─────────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────────┐
│                loop-v2-routes.ts (Control Plane)                │
│                                                                 │
│   State: .loop/{loopId}.json (MASTER)                           │
│   Tasks: .loop/{loopId}.tasks.jsonl                             │
│                                                                 │
│   /start  → Trigger ccw-loop skill with --loop-id               │
│   /pause  → Set status='paused' (skill checks before action)    │
│   /stop   → Set status='failed' (skill terminates)              │
│   /resume → Set status='running' (skill continues)              │
└─────────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────────┐
│                 ccw-loop Skill (Execution Plane)                │
│                                                                 │
│   Reads/Writes: .loop/{loopId}.json (unified state)             │
│   Writes: .loop/{loopId}.progress/* (progress files)            │
│                                                                 │
│   BEFORE each action:                                           │
│     → Check status: paused/stopped → exit gracefully            │
│     → running → continue with action                            │
│                                                                 │
│   Actions: init → develop → debug → validate → complete         │
└─────────────────────────────────────────────────────────────────┘
```

## Key Design Principles

1. **Unified state**: API and Skill share the `.loop/{loopId}.json` state file
2. **Control signals**: the Skill checks the status field (paused/stopped) before each Action
3. **File-driven**: all progress, understanding, and results are recorded in `.loop/{loopId}.progress/`
4. **Resumable**: a previous loop can be continued at any time (`--loop-id`)
5. **Dual trigger**: supports API triggering (`--loop-id`) and direct invocation (task description)
6. **Gemini-assisted**: uses CLI tools for deep analysis and hypothesis validation
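
The control-signal check in principle 2 can be sketched as a small gate function. This is a hedged sketch: the function name `checkControlSignals` and the return shape are assumptions; only the `status` values come from this document.

```javascript
// Gate evaluated before every action (illustrative sketch, not the real API).
function checkControlSignals(state) {
  if (state.status === 'paused') return { proceed: false, reason: 'paused: wait for resume' }
  if (state.status === 'failed') return { proceed: false, reason: 'stopped: terminate' }
  return { proceed: true }
}

// The orchestrator would call this with the parsed .loop/{loopId}.json state.
console.log(checkControlSignals({ status: 'paused' }).proceed) // false
```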

## Execution Modes

### Mode 1: Interactive

The user manually selects each action; suitable for complex tasks.

```
User → choose action → execute → review result → choose next action
```

### Mode 2: Auto-Loop

Actions execute automatically in a preset order; suitable for standard development flows.

```
Develop → Debug → Validate → (if issues) → Develop → ...
```
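
The auto-loop ordering above can be sketched as a next-action selector. This is illustrative only: the `debug.pending` field and the function name are assumptions; `develop.*` and `validate.passed` match fields used elsewhere in this document.

```javascript
// Pick the next auto-loop action from the current state (hedged sketch).
function nextAutoAction(state) {
  if (state.develop.completed_count < state.develop.total_count) return 'develop'
  if (!state.validate.passed && state.debug.pending) return 'debug'
  if (!state.validate.passed) return 'validate'
  return 'complete'
}

console.log(nextAutoAction({
  develop: { completed_count: 2, total_count: 3 },
  debug: { pending: false },
  validate: { passed: false }
})) // 'develop'
```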

## Session Structure (Unified Location)

```
.loop/
├── {loopId}.json           # Master state file (shared by API + Skill)
├── {loopId}.tasks.jsonl    # Task list (managed by the API)
└── {loopId}.progress/      # Skill progress files
    ├── develop.md          # Development progress record
    ├── debug.md            # Understanding-evolution document
    ├── validate.md         # Validation report
    ├── changes.log         # Code change log (NDJSON)
    └── debug.log           # Debug log (NDJSON)
```
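
The `.log` files are NDJSON: one JSON object per line. The exact record shape for `changes.log` is not specified in this document, so the fields below are purely illustrative.

```javascript
// Hypothetical changes.log record (fields are assumptions).
const line = JSON.stringify({ ts: Date.now(), file: 'src/app.ts', change: 'edit' })

// NDJSON files are parsed line by line:
const records = (line + '\n').split('\n').filter(l => l.trim()).map(l => JSON.parse(l))
console.log(records.length) // 1
```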

## Directory Setup

```javascript
// loopId sources:
// 1. API-triggered: taken from the --loop-id argument
// 2. Direct invocation: generate a new loop-v2-{timestamp}-{random}

const loopId = args['--loop-id'] || generateLoopId()
const loopFile = `.loop/${loopId}.json`
const progressDir = `.loop/${loopId}.progress`

// Create the progress directory
Bash(`mkdir -p "${progressDir}"`)
```

## Action Catalog

| Action | Purpose | Output Files | CLI Integration |
|--------|---------|--------------|-----------------|
| [action-init](phases/actions/action-init.md) | Initialize the loop session | meta.json, state.json | - |
| [action-develop-with-file](phases/actions/action-develop-with-file.md) | Execute development tasks | progress.md, tasks.json | gemini --mode write |
| [action-debug-with-file](phases/actions/action-debug-with-file.md) | Hypothesis-driven debugging | understanding.md, hypotheses.json | gemini --mode analysis |
| [action-validate-with-file](phases/actions/action-validate-with-file.md) | Testing and validation | validation.md, test-results.json | gemini --mode analysis |
| [action-complete](phases/actions/action-complete.md) | Complete the loop | summary.md | - |
| [action-menu](phases/actions/action-menu.md) | Show the action menu | - | - |

## Usage

```bash
# Start a new loop (direct invocation)
/ccw-loop "Implement user authentication"

# Continue an existing loop (API-triggered or manual resume)
/ccw-loop --loop-id loop-v2-20260122-abc123

# Auto-loop mode
/ccw-loop --auto "Fix the login bug and add tests"

# API-triggered auto loop
/ccw-loop --loop-id loop-v2-20260122-abc123 --auto
```

## Execution Flow

```
┌─────────────────────────────────────────────────────────────────┐
│ /ccw-loop [<task> | --loop-id <id>] [--auto]                    │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│ 1. Parameter Detection:                                         │
│    ├─ IF --loop-id provided:                                    │
│    │  ├─ Read .loop/{loopId}.json                               │
│    │  ├─ Validate status === 'running'                          │
│    │  └─ Continue from skill_state.current_action               │
│    └─ ELSE (task description):                                  │
│       ├─ Generate new loopId                                    │
│       ├─ Create .loop/{loopId}.json                             │
│       └─ Initialize with action-init                            │
│                                                                 │
│ 2. Orchestrator Loop:                                           │
│    ├─ Read state from .loop/{loopId}.json                       │
│    ├─ Check control signals:                                    │
│    │  ├─ status === 'paused' → Exit (wait for resume)           │
│    │  ├─ status === 'failed' → Exit with error                  │
│    │  └─ status === 'running' → Continue                        │
│    ├─ Show menu / auto-select next action                       │
│    ├─ Execute action                                            │
│    ├─ Update .loop/{loopId}.progress/{action}.md                │
│    ├─ Update .loop/{loopId}.json (skill_state)                  │
│    └─ Loop or exit based on user choice / completion            │
│                                                                 │
│ 3. Action Execution:                                            │
│    ├─ BEFORE: checkControlSignals() → exit if paused/stopped    │
│    ├─ Develop: Plan → Implement → Document progress             │
│    ├─ Debug: Hypothesize → Instrument → Analyze → Fix           │
│    ├─ Validate: Test → Check → Report                           │
│    └─ AFTER: Update skill_state in .loop/{loopId}.json          │
│                                                                 │
│ 4. Termination:                                                 │
│    ├─ Control signal: paused (graceful exit, wait resume)       │
│    ├─ Control signal: stopped (failed state)                    │
│    ├─ User exits (interactive mode)                             │
│    ├─ All tasks completed (status → completed)                  │
│    └─ Max iterations reached                                    │
└─────────────────────────────────────────────────────────────────┘
```

## Reference Documents

| Document | Purpose |
|----------|---------|
| [phases/orchestrator.md](phases/orchestrator.md) | Orchestrator: state reading + action selection |
| [phases/state-schema.md](phases/state-schema.md) | State structure definition |
| [specs/loop-requirements.md](specs/loop-requirements.md) | Loop requirements specification |
| [specs/action-catalog.md](specs/action-catalog.md) | Action catalog |
| [templates/progress-template.md](templates/progress-template.md) | Progress document template |
| [templates/understanding-template.md](templates/understanding-template.md) | Understanding document template |

## Integration with Loop Monitor (Dashboard)

This Skill and the CCW Dashboard Loop Monitor implement a **control plane + execution plane** separation:

### Control Plane (Dashboard/API → loop-v2-routes.ts)

1. **Create loop**: `POST /api/loops/v2` → creates `.loop/{loopId}.json`
2. **Start execution**: `POST /api/loops/v2/:loopId/start` → triggers `/ccw-loop --loop-id {loopId} --auto`
3. **Pause execution**: `POST /api/loops/v2/:loopId/pause` → sets `status='paused'` (the Skill exits at its next check)
4. **Resume execution**: `POST /api/loops/v2/:loopId/resume` → sets `status='running'` → re-triggers the Skill
5. **Stop execution**: `POST /api/loops/v2/:loopId/stop` → sets `status='failed'`

### Execution Plane (ccw-loop Skill)

1. **Read state**: reads the API-set state from `.loop/{loopId}.json`
2. **Check control**: checks the `status` field before each Action
3. **Execute actions**: develop → debug → validate → complete
4. **Update progress**: writes `.loop/{loopId}.progress/*.md` and updates `skill_state`
5. **Sync state**: the Dashboard reads `.loop/{loopId}.json` to obtain progress

## CLI Integration Points

### Develop Phase
```bash
ccw cli -p "PURPOSE: Implement {task}...
TASK: • Analyze requirements • Write code • Update progress
MODE: write
CONTEXT: @progress.md @tasks.json
EXPECTED: Implementation + updated progress.md
" --tool gemini --mode write --rule development-implement-feature
```

### Debug Phase
```bash
ccw cli -p "PURPOSE: Generate debugging hypotheses...
TASK: • Analyze error • Generate hypotheses • Add instrumentation
MODE: analysis
CONTEXT: @understanding.md @debug.log
EXPECTED: Hypotheses + instrumentation plan
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```

### Validate Phase
```bash
ccw cli -p "PURPOSE: Validate implementation...
TASK: • Run tests • Check coverage • Verify requirements
MODE: analysis
CONTEXT: @validation.md @test-results.json
EXPECTED: Validation report
" --tool gemini --mode analysis --rule analysis-review-code-quality
```

## Error Handling

| Situation | Action |
|-----------|--------|
| Session not found | Create new session |
| State file corrupted | Rebuild from file contents |
| CLI tool fails | Fallback to manual analysis |
| Tests fail | Loop back to develop/debug |
| >10 iterations | Warn user, suggest break |

## Post-Completion Expansion

After completion, ask the user whether to expand findings into issues (test/enhance/refactor/doc); for each selected dimension invoke `/issue:new "{summary} - {dimension}"`.
320 .claude/skills/ccw-loop/phases/actions/action-complete.md Normal file
@@ -0,0 +1,320 @@
# Action: Complete

Complete the CCW Loop session and generate a summary report.

## Purpose

- Generate the completion report
- Aggregate results from all phases
- Provide follow-up recommendations
- Ask whether to expand into Issues

## Preconditions

- [ ] state.initialized === true
- [ ] state.status === 'running'

## Execution

### Step 1: Aggregate Statistics

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const sessionFolder = `.workflow/.loop/${state.session_id}`

const stats = {
  // Timing
  duration: Date.now() - new Date(state.created_at).getTime(),
  iterations: state.iteration_count,

  // Development
  develop: {
    total_tasks: state.develop.total_count,
    completed_tasks: state.develop.completed_count,
    completion_rate: state.develop.total_count > 0
      ? (state.develop.completed_count / state.develop.total_count * 100).toFixed(1)
      : 0
  },

  // Debugging
  debug: {
    iterations: state.debug.iteration,
    hypotheses_tested: state.debug.hypotheses.length,
    root_cause_found: state.debug.confirmed_hypothesis !== null
  },

  // Validation
  validate: {
    runs: state.validate.test_results.length,
    passed: state.validate.passed,
    coverage: state.validate.coverage,
    failed_tests: state.validate.failed_tests.length
  }
}

console.log('\nGenerating completion report...')
```

### Step 2: Generate the Summary Report

```javascript
const summaryReport = `# CCW Loop Session Summary

**Session ID**: ${state.session_id}
**Task**: ${state.task_description}
**Started**: ${state.created_at}
**Completed**: ${getUtc8ISOString()}
**Duration**: ${formatDuration(stats.duration)}

---

## Executive Summary

${state.validate.passed
  ? '✅ **Task completed successfully** - all tests passed, validation succeeded'
  : state.develop.completed_count === state.develop.total_count
    ? '⚠️ **Development complete, validation failed** - further debugging needed'
    : '⏸️ **Task partially complete** - pending items remain'}

---

## Development Phase

| Metric | Value |
|--------|-------|
| Total Tasks | ${stats.develop.total_tasks} |
| Completed | ${stats.develop.completed_tasks} |
| Completion Rate | ${stats.develop.completion_rate}% |

### Completed Tasks

${state.develop.tasks.filter(t => t.status === 'completed').map(t => `
- ✅ ${t.description}
  - Files: ${t.files_changed?.join(', ') || 'N/A'}
  - Completed: ${t.completed_at}
`).join('\n')}

### Pending Tasks

${state.develop.tasks.filter(t => t.status !== 'completed').map(t => `
- ⏳ ${t.description}
`).join('\n') || '_None_'}

---

## Debug Phase

| Metric | Value |
|--------|-------|
| Iterations | ${stats.debug.iterations} |
| Hypotheses Tested | ${stats.debug.hypotheses_tested} |
| Root Cause Found | ${stats.debug.root_cause_found ? 'Yes' : 'No'} |

${stats.debug.root_cause_found ? `
### Confirmed Root Cause

**${state.debug.confirmed_hypothesis}**: ${state.debug.hypotheses.find(h => h.id === state.debug.confirmed_hypothesis)?.description || 'N/A'}
` : ''}

### Hypothesis Summary

${state.debug.hypotheses.map(h => `
- **${h.id}**: ${h.status.toUpperCase()}
  - ${h.description}
`).join('\n') || '_No hypotheses tested_'}

---

## Validation Phase

| Metric | Value |
|--------|-------|
| Test Runs | ${stats.validate.runs} |
| Status | ${stats.validate.passed ? 'PASSED' : 'FAILED'} |
| Coverage | ${stats.validate.coverage || 'N/A'}% |
| Failed Tests | ${stats.validate.failed_tests} |

${stats.validate.failed_tests > 0 ? `
### Failed Tests

${state.validate.failed_tests.map(t => `- ❌ ${t}`).join('\n')}
` : ''}

---

## Files Modified

${listModifiedFiles(sessionFolder)}

---

## Key Learnings

${state.debug.iteration > 0 ? `
### From Debugging

${extractLearnings(state.debug.hypotheses)}
` : ''}

---

## Recommendations

${generateRecommendations(stats, state)}

---

## Session Artifacts

| File | Description |
|------|-------------|
| \`develop/progress.md\` | Development progress timeline |
| \`develop/tasks.json\` | Task list with status |
| \`debug/understanding.md\` | Debug exploration and learnings |
| \`debug/hypotheses.json\` | Hypothesis history |
| \`validate/validation.md\` | Validation report |
| \`validate/test-results.json\` | Test execution results |

---

*Generated by CCW Loop at ${getUtc8ISOString()}*
`

Write(`${sessionFolder}/summary.md`, summaryReport)
console.log(`\nReport saved: ${sessionFolder}/summary.md`)
```

### Step 3: Ask About Follow-up Expansion

```javascript
console.log('\n' + '═'.repeat(60))
console.log('  Task completed')
console.log('═'.repeat(60))

const expansionResponse = await AskUserQuestion({
  questions: [{
    question: "Expand the findings into Issues?",
    header: "Expansion options",
    multiSelect: true,
    options: [
      { label: "Test", description: "Add more test cases" },
      { label: "Enhance", description: "Feature enhancement suggestions" },
      { label: "Refactor", description: "Code refactoring suggestions" },
      { label: "Doc", description: "Documentation update needs" },
      { label: "No, finish now", description: "Do not create Issues" }
    ]
  }]
})

const selectedExpansions = expansionResponse["Expansion options"]

if (selectedExpansions && !selectedExpansions.includes("No, finish now")) {
  for (const expansion of selectedExpansions) {
    const dimension = expansion.split(' ')[0].toLowerCase()
    const issueSummary = `${state.task_description} - ${dimension}`

    console.log(`\nCreating Issue: ${issueSummary}`)

    // Invoke /issue:new to create the issue
    await Bash({
      command: `/issue:new "${issueSummary}"`,
      run_in_background: false
    })
  }
}
```

### Step 4: Final Output

```javascript
console.log(`
═══════════════════════════════════════════════════════════
  ✅ CCW Loop session complete
═══════════════════════════════════════════════════════════

  Session ID: ${state.session_id}
  Duration:   ${formatDuration(stats.duration)}
  Iterations: ${stats.iterations}

  Develop:  ${stats.develop.completed_tasks}/${stats.develop.total_tasks} tasks completed
  Debug:    ${stats.debug.iterations} iterations
  Validate: ${stats.validate.passed ? 'passed ✅' : 'failed ❌'}

  Report: ${sessionFolder}/summary.md

═══════════════════════════════════════════════════════════
`)
```

## State Updates

```javascript
return {
  stateUpdates: {
    status: 'completed',
    completed_at: getUtc8ISOString(),
    summary: stats
  },
  continue: false,
  message: `Session ${state.session_id} completed`
}
```

## Helper Functions

```javascript
function formatDuration(ms) {
  const seconds = Math.floor(ms / 1000)
  const minutes = Math.floor(seconds / 60)
  const hours = Math.floor(minutes / 60)

  if (hours > 0) {
    return `${hours}h ${minutes % 60}m`
  } else if (minutes > 0) {
    return `${minutes}m ${seconds % 60}s`
  } else {
    return `${seconds}s`
  }
}

function generateRecommendations(stats, state) {
  const recommendations = []

  if (stats.develop.completion_rate < 100) {
    recommendations.push('- Finish the remaining development tasks')
  }

  if (!stats.validate.passed) {
    recommendations.push('- Fix the failing tests')
  }

  if (stats.validate.coverage && stats.validate.coverage < 80) {
    recommendations.push(`- Raise test coverage (current: ${stats.validate.coverage}%)`)
  }

  if (stats.debug.iterations > 3 && !stats.debug.root_cause_found) {
    recommendations.push('- Consider refactoring to simplify debugging')
  }

  if (recommendations.length === 0) {
    recommendations.push('- Consider a code review')
    recommendations.push('- Update related documentation')
    recommendations.push('- Prepare for deployment')
  }

  return recommendations.join('\n')
}
```

## Error Handling

| Error Type | Recovery |
|------------|----------|
| Report generation fails | Show basic statistics, skip the file write |
| Issue creation fails | Log the error, continue completion |

## Next Actions

- None (terminal state)
- To continue: reopen the session with `ccw-loop --resume {session-id}`
485 .claude/skills/ccw-loop/phases/actions/action-debug-with-file.md Normal file
@@ -0,0 +1,485 @@
# Action: Debug With File

Hypothesis-driven debugging that records the evolution of understanding in understanding.md, with Gemini-assisted analysis and hypothesis generation.

## Purpose

Run a hypothesis-driven debugging flow:

- Locate the error source
- Generate testable hypotheses
- Add NDJSON logging
- Analyze log evidence
- Correct mistaken understanding
- Apply the fix

## Preconditions

- [ ] state.initialized === true
- [ ] state.status === 'running'

## Session Setup

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const sessionFolder = `.workflow/.loop/${state.session_id}`
const debugFolder = `${sessionFolder}/debug`
const understandingPath = `${debugFolder}/understanding.md`
const hypothesesPath = `${debugFolder}/hypotheses.json`
const debugLogPath = `${debugFolder}/debug.log`
```

---

## Mode Detection

```javascript
// Detect the mode automatically
const understandingExists = fs.existsSync(understandingPath)
const logHasContent = fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0

const debugMode = logHasContent ? 'analyze' : (understandingExists ? 'continue' : 'explore')

console.log(`Debug mode: ${debugMode}`)
```

---

## Explore Mode (first debugging pass)

### Step 1.1: Locate the Error Source

```javascript
if (debugMode === 'explore') {
  // Ask the user for a bug description
  const bugInput = await AskUserQuestion({
    questions: [{
      question: "Describe the bug or error message:",
      header: "Bug description",
      multiSelect: false,
      options: [
        { label: "Manual input", description: "Enter the error description or stack trace" },
        { label: "From failing tests", description: "Take it from failing tests in the validation phase" }
      ]
    }]
  })

  const bugDescription = bugInput["Bug description"]

  // Extract keywords and search
  const searchResults = await Task({
    subagent_type: 'Explore',
    run_in_background: false,
    prompt: `Search codebase for error patterns related to: ${bugDescription}`
  })

  // Analyze search results and identify affected locations
  const affectedLocations = analyzeSearchResults(searchResults)
}
```

### Step 1.2: Record the Initial Understanding

```javascript
// Create understanding.md
const initialUnderstanding = `# Understanding Document

**Session ID**: ${state.session_id}
**Bug Description**: ${bugDescription}
**Started**: ${getUtc8ISOString()}

---

## Exploration Timeline

### Iteration 1 - Initial Exploration (${getUtc8ISOString()})

#### Current Understanding

Based on bug description and initial code search:

- Error pattern: ${errorPattern}
- Affected areas: ${affectedLocations.map(l => l.file).join(', ')}
- Initial hypothesis: ${initialThoughts}

#### Evidence from Code Search

${searchResults.map(r => `
**Keyword: "${r.keyword}"**
- Found in: ${r.files.join(', ')}
- Key findings: ${r.insights}
`).join('\n')}

#### Next Steps

- Generate testable hypotheses
- Add instrumentation
- Await reproduction

---

## Current Consolidated Understanding

${initialConsolidatedUnderstanding}
`

Write(understandingPath, initialUnderstanding)
```

### Step 1.3: Gemini-Assisted Hypothesis Generation

```bash
ccw cli -p "
PURPOSE: Generate debugging hypotheses for: ${bugDescription}
Success criteria: Testable hypotheses with clear evidence criteria

TASK:
• Analyze error pattern and code search results
• Identify 3-5 most likely root causes
• For each hypothesis, specify:
  - What might be wrong
  - What evidence would confirm/reject it
  - Where to add instrumentation
• Rank by likelihood

MODE: analysis

CONTEXT: @${understandingPath} | Search results in understanding.md

EXPECTED:
- Structured hypothesis list (JSON format)
- Each hypothesis with: id, description, testable_condition, logging_point, evidence_criteria
- Likelihood ranking (1=most likely)

CONSTRAINTS: Focus on testable conditions
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```

### Step 1.4: Save the Hypotheses

```javascript
const hypotheses = {
  iteration: 1,
  timestamp: getUtc8ISOString(),
  bug_description: bugDescription,
  hypotheses: [
    {
      id: "H1",
      description: "...",
      testable_condition: "...",
      logging_point: "file.ts:func:42",
      evidence_criteria: {
        confirm: "...",
        reject: "..."
      },
      likelihood: 1,
      status: "pending"
    }
    // ...
  ],
  gemini_insights: "...",
  corrected_assumptions: []
}

Write(hypothesesPath, JSON.stringify(hypotheses, null, 2))
```

### Step 1.5: Add NDJSON Logging

```javascript
// Add a logging point for each hypothesis
for (const hypothesis of hypotheses.hypotheses) {
  const [file, func, line] = hypothesis.logging_point.split(':')

  const logStatement = `console.log(JSON.stringify({
    hid: "${hypothesis.id}",
    ts: Date.now(),
    func: "${func}",
    data: { /* relevant data */ }
  }))`

  // Use the Edit tool to insert the log statement
  // ...
}
```

---

## Analyze Mode (once logs exist)

### Step 2.1: Parse the Debug Log

```javascript
if (debugMode === 'analyze') {
  // Read the NDJSON log
  const logContent = Read(debugLogPath)
  const entries = logContent.split('\n')
    .filter(l => l.trim())
    .map(l => JSON.parse(l))

  // Group by hypothesis
  const byHypothesis = groupBy(entries, 'hid')
}
```
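
The `groupBy` helper used above is not defined in this document; a minimal implementation could look like this.

```javascript
// Group an array of objects by the value of one key.
function groupBy(entries, key) {
  return entries.reduce((acc, entry) => {
    const k = entry[key]
    ;(acc[k] = acc[k] || []).push(entry)
    return acc
  }, {})
}

// Example with NDJSON-style log entries:
const grouped = groupBy(
  [{ hid: 'H1', ts: 1 }, { hid: 'H2', ts: 2 }, { hid: 'H1', ts: 3 }],
  'hid'
)
console.log(Object.keys(grouped)) // ['H1', 'H2']
```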

### Step 2.2: Gemini-Assisted Evidence Analysis

```bash
ccw cli -p "
PURPOSE: Analyze debug log evidence to validate/correct hypotheses for: ${bugDescription}
Success criteria: Clear verdict per hypothesis + corrected understanding

TASK:
• Parse log entries by hypothesis
• Evaluate evidence against expected criteria
• Determine verdict: confirmed | rejected | inconclusive
• Identify incorrect assumptions from previous understanding
• Suggest corrections to understanding

MODE: analysis

CONTEXT:
@${debugLogPath}
@${understandingPath}
@${hypothesesPath}

EXPECTED:
- Per-hypothesis verdict with reasoning
- Evidence summary
- List of incorrect assumptions with corrections
- Updated consolidated understanding
- Root cause if confirmed, or next investigation steps

CONSTRAINTS: Evidence-based reasoning only, no speculation
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```

### Step 2.3: Update the Understanding Document

```javascript
// Append the new iteration to understanding.md
const iteration = state.debug.iteration + 1

const analysisEntry = `
### Iteration ${iteration} - Evidence Analysis (${getUtc8ISOString()})

#### Log Analysis Results

${results.map(r => `
**${r.id}**: ${r.verdict.toUpperCase()}
- Evidence: ${JSON.stringify(r.evidence)}
- Reasoning: ${r.reason}
`).join('\n')}

#### Corrected Understanding

Previous misunderstandings identified and corrected:

${corrections.map(c => `
- ~~${c.wrong}~~ → ${c.corrected}
  - Why wrong: ${c.reason}
  - Evidence: ${c.evidence}
`).join('\n')}

#### New Insights

${newInsights.join('\n- ')}

#### Gemini Analysis

${geminiAnalysis}

${confirmedHypothesis ? `
#### Root Cause Identified

**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}

Evidence supporting this conclusion:
${confirmedHypothesis.supportingEvidence}
` : `
#### Next Steps

${nextSteps}
`}

---

## Current Consolidated Understanding (Updated)

### What We Know

- ${validUnderstanding1}
- ${validUnderstanding2}

### What Was Disproven

- ~~${wrongAssumption}~~ (Evidence: ${disproofEvidence})

### Current Investigation Focus

${currentFocus}

### Remaining Questions

- ${openQuestion1}
- ${openQuestion2}
`

const existingContent = Read(understandingPath)
Write(understandingPath, existingContent + analysisEntry)
```
|
||||
|
||||
### Step 2.4: Update Hypothesis Status

```javascript
const hypothesesData = JSON.parse(Read(hypothesesPath))

// Update each hypothesis with the latest verdict, evidence, and reasoning
hypothesesData.hypotheses = hypothesesData.hypotheses.map(h => {
  const result = results.find(r => r.id === h.id)
  return {
    ...h,
    status: result?.verdict || h.status,
    evidence: result?.evidence || h.evidence,
    verdict_reason: result?.reason || h.verdict_reason
  }
})

hypothesesData.iteration++
hypothesesData.timestamp = getUtc8ISOString()

Write(hypothesesPath, JSON.stringify(hypothesesData, null, 2))
```
---

## Fix & Verification

### Step 3.1: Apply the Fix

```javascript
if (confirmedHypothesis) {
  console.log(`\nRoot cause confirmed: ${confirmedHypothesis.description}`)
  console.log('Preparing to apply the fix...')

  // Use Gemini to generate the fix
  const fixPrompt = `
PURPOSE: Fix the identified root cause
Root Cause: ${confirmedHypothesis.description}
Evidence: ${confirmedHypothesis.supportingEvidence}

TASK:
• Generate fix code
• Ensure backward compatibility
• Add tests if needed

MODE: write

CONTEXT: @${confirmedHypothesis.logging_point.split(':')[0]}

EXPECTED: Fixed code + verification steps
`

  await Bash({
    command: `ccw cli -p "${fixPrompt}" --tool gemini --mode write --rule development-debug-runtime-issues`,
    run_in_background: false
  })
}
```
### Step 3.2: Record the Resolution

```javascript
const resolutionEntry = `
### Resolution (${getUtc8ISOString()})

#### Fix Applied

- Modified files: ${modifiedFiles.join(', ')}
- Fix description: ${fixDescription}
- Root cause addressed: ${rootCause}

#### Verification Results

${verificationResults}

#### Lessons Learned

1. ${lesson1}
2. ${lesson2}

#### Key Insights for Future

- ${insight1}
- ${insight2}
`

const existingContent = Read(understandingPath)
Write(understandingPath, existingContent + resolutionEntry)
```
### Step 3.3: Clean Up Debug Logging

```javascript
// Remove the debug logging added during the investigation
// (optional, based on user choice)
```
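If the debug statements were tagged with a marker when they were inserted, the cleanup can be a simple line filter. A minimal sketch, assuming a `[DEBUG-LOOP]` tag (the marker name is illustrative, not part of the skill):

```javascript
// Hypothetical helper: strip lines containing a debug marker from file contents.
// Apply per file via the skill's Read/Write primitives, e.g.
//   Write(file, stripDebugLines(Read(file)))
function stripDebugLines(source, marker = '[DEBUG-LOOP]') {
  return source
    .split('\n')
    .filter(line => !line.includes(marker))
    .join('\n')
}
```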
---

## State Updates

```javascript
return {
  stateUpdates: {
    debug: {
      current_bug: bugDescription,
      hypotheses: hypothesesData.hypotheses,
      confirmed_hypothesis: confirmedHypothesis?.id || null,
      iteration: hypothesesData.iteration,
      last_analysis_at: getUtc8ISOString(),
      understanding_updated: true
    },
    last_action: 'action-debug-with-file'
  },
  continue: true,
  message: confirmedHypothesis
    ? `Root cause confirmed: ${confirmedHypothesis.description}\nFix applied; please verify`
    : `Analysis complete; more evidence needed\nReproduce the bug and run this action again`
}
```
## Error Handling

| Error Type | Recovery |
|------------|----------|
| Empty debug.log | Ask the user to reproduce the bug |
| All hypotheses rejected | Use Gemini to generate new hypotheses |
| Fix ineffective | Record the failed attempt and iterate |
| More than 5 iterations | Suggest escalating to /workflow:lite-fix |
| Gemini unavailable | Fall back to manual analysis |

## Understanding Document Template

See [templates/understanding-template.md](../../templates/understanding-template.md)

## CLI Integration

### Hypothesis Generation

```bash
ccw cli -p "PURPOSE: Generate debugging hypotheses..." --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```

### Evidence Analysis

```bash
ccw cli -p "PURPOSE: Analyze debug log evidence..." --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```

### Fix Generation

```bash
ccw cli -p "PURPOSE: Fix the identified root cause..." --tool gemini --mode write --rule development-debug-runtime-issues
```

## Next Actions (Hints)

- Root cause confirmed: `action-validate-with-file` (verify the fix)
- More evidence needed: wait for the user to reproduce the bug, then run this action again
- All hypotheses rejected: run this action again to generate new hypotheses
- User choice: `action-menu` (return to the menu)
# Action: Develop With File

Incremental execution of development tasks; progress is recorded to progress.md, with Gemini-assisted implementation.

## Purpose

Execute development tasks and record progress:
- Analyze task requirements
- Implement code via Gemini/CLI
- Record code changes
- Update the progress document

## Preconditions

- [ ] state.status === 'running'
- [ ] state.skill_state !== null
- [ ] state.skill_state.develop.tasks.some(t => t.status === 'pending')

## Session Setup (Unified Location)

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Unified location: .loop/{loopId}
const loopId = state.loop_id
const loopFile = `.loop/${loopId}.json`
const progressDir = `.loop/${loopId}.progress`
const progressPath = `${progressDir}/develop.md`
const changesLogPath = `${progressDir}/changes.log`
```
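Note that `getUtc8ISOString` shifts the clock by eight hours but still emits a `Z`-suffixed string, while the sample timestamps elsewhere in this document use an explicit `+08:00` offset. A variant that labels the offset correctly (an illustrative alternative, not what the skill currently does):

```javascript
// Illustrative variant: same UTC+8 wall-clock shift, but with the offset
// spelled out instead of a misleading trailing "Z".
const getUtc8ISOStringLabeled = () =>
  new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString().replace('Z', '+08:00')
```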
---

## Execution
### Step 0: Check Control Signals (CRITICAL)

```javascript
/**
 * CRITICAL: every action must check control signals before doing anything else.
 * If the API has set paused/stopped, the skill must exit immediately.
 */
function checkControlSignals(loopId) {
  const state = JSON.parse(Read(`.loop/${loopId}.json`))

  switch (state.status) {
    case 'paused':
      console.log('⏸️ Loop paused by API. Exiting action.')
      return { continue: false, reason: 'paused' }

    case 'failed':
      console.log('⏹️ Loop stopped by API. Exiting action.')
      return { continue: false, reason: 'stopped' }

    case 'running':
      return { continue: true, reason: 'running' }

    default:
      return { continue: false, reason: 'unknown_status' }
  }
}

// Execute the check
const control = checkControlSignals(loopId)
if (!control.continue) {
  return {
    skillStateUpdates: { current_action: null },
    continue: false,
    message: `Action terminated: ${control.reason}`
  }
}
```
### Step 1: Load the Task List

```javascript
// Read the task list (from skill_state)
let tasks = state.skill_state?.develop?.tasks || []

// If the task list is empty, generate one
if (tasks.length === 0) {
  // Use Gemini to analyze the task description and generate a task list
  const analysisPrompt = `
PURPOSE: Analyze the development task and break it into executable steps
Success: Generate 3-7 concrete, verifiable subtasks

TASK:
• Analyze the task description: ${state.task_description}
• Identify key functional points
• Break the work into independent subtasks
• Assign a tool and mode to each subtask

MODE: analysis

CONTEXT: @package.json @src/**/*.ts | Memory: project structure

EXPECTED:
JSON format:
{
  "tasks": [
    {
      "id": "task-001",
      "description": "Task description",
      "tool": "gemini",
      "mode": "write",
      "files": ["src/xxx.ts"]
    }
  ]
}
`

  const result = await Task({
    subagent_type: 'cli-execution-agent',
    run_in_background: false,
    prompt: `Execute Gemini CLI with prompt: ${analysisPrompt}`
  })

  tasks = JSON.parse(result).tasks
}

// Find the first pending task
const currentTask = tasks.find(t => t.status === 'pending')

if (!currentTask) {
  return {
    skillStateUpdates: {
      develop: { ...state.skill_state.develop, current_task: null }
    },
    continue: true,
    message: 'All development tasks completed'
  }
}
```
### Step 2: Execute the Development Task

```javascript
console.log(`\nExecuting task: ${currentTask.description}`)

// Update the task status
currentTask.status = 'in_progress'

// Implement with Gemini
const implementPrompt = `
PURPOSE: Implement the development task
Task: ${currentTask.description}
Success criteria: implementation complete, tests passing

TASK:
• Analyze the existing code structure
• Implement the feature code
• Add any necessary type definitions
• Keep the code style consistent

MODE: write

CONTEXT: @${currentTask.files?.join(' @') || 'src/**/*.ts'}

EXPECTED:
- Complete code implementation
- List of code changes
- Brief implementation notes

CONSTRAINTS: Follow existing code style | Do not break existing functionality
`

const implementResult = await Bash({
  command: `ccw cli -p "${implementPrompt}" --tool gemini --mode write --rule development-implement-feature`,
  run_in_background: false
})

// Record the code change
const timestamp = getUtc8ISOString()
const changeEntry = {
  timestamp,
  task_id: currentTask.id,
  description: currentTask.description,
  files_changed: currentTask.files || [],
  result: 'success'
}

// Append to changes.log (NDJSON format)
const changesContent = Read(changesLogPath) || ''
Write(changesLogPath, changesContent + JSON.stringify(changeEntry) + '\n')
```
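Because changes.log is append-only NDJSON (one JSON object per line), later steps can rebuild the change history with a line-wise parse. A minimal sketch (the helper name is illustrative):

```javascript
// Hypothetical helper: parse NDJSON changes.log content back into entry objects.
// Skips blank lines, e.g. the trailing newline left by the append above.
function parseChangesLog(content) {
  return content
    .split('\n')
    .filter(line => line.trim() !== '')
    .map(line => JSON.parse(line))
}
```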
### Step 3: Update the Progress Document

```javascript
const timestamp = getUtc8ISOString()
const iteration = state.develop.completed_count + 1

// Read the existing progress document
let progressContent = Read(progressPath) || ''

// If this is a new document, add a header
if (!progressContent) {
  progressContent = `# Development Progress

**Session ID**: ${state.session_id}
**Task**: ${state.task_description}
**Started**: ${timestamp}

---

## Progress Timeline

`
}

// Append this iteration's progress
const progressEntry = `
### Iteration ${iteration} - ${currentTask.description} (${timestamp})

#### Task Details

- **ID**: ${currentTask.id}
- **Tool**: ${currentTask.tool}
- **Mode**: ${currentTask.mode}

#### Implementation Summary

${implementResult.summary || 'Implementation complete'}

#### Files Changed

${currentTask.files?.map(f => `- \`${f}\``).join('\n') || '- No files specified'}

#### Status: COMPLETED

---

`

Write(progressPath, progressContent + progressEntry)

// Update the task status
currentTask.status = 'completed'
currentTask.completed_at = timestamp
```
### Step 4: Update the Task List File

```javascript
// Update tasks.json (tasksPath is not defined in Session Setup above; define it here)
const tasksPath = `${progressDir}/tasks.json`

const updatedTasks = tasks.map(t =>
  t.id === currentTask.id ? currentTask : t
)

Write(tasksPath, JSON.stringify(updatedTasks, null, 2))
```
## State Updates

```javascript
return {
  stateUpdates: {
    develop: {
      tasks: updatedTasks,
      current_task_id: null,
      completed_count: state.develop.completed_count + 1,
      total_count: updatedTasks.length,
      last_progress_at: getUtc8ISOString()
    },
    last_action: 'action-develop-with-file'
  },
  continue: true,
  message: `Task completed: ${currentTask.description}\nProgress: ${state.develop.completed_count + 1}/${updatedTasks.length}`
}
```
## Error Handling

| Error Type | Recovery |
|------------|----------|
| Gemini CLI fails | Ask the user to implement manually; record it in progress.md |
| File write fails | Retry once; if it fails again, record the error |
| Task parsing fails | Ask the user to enter the tasks manually |
## Progress Document Template

```markdown
# Development Progress

**Session ID**: LOOP-xxx-2026-01-22
**Task**: Implement user authentication
**Started**: 2026-01-22T10:00:00+08:00

---

## Progress Timeline

### Iteration 1 - Analyze the login component (2026-01-22T10:05:00+08:00)

#### Task Details

- **ID**: task-001
- **Tool**: gemini
- **Mode**: analysis

#### Implementation Summary

Analyzed the existing login component structure and identified the files and dependencies that need changes.

#### Files Changed

- `src/components/Login.tsx`
- `src/hooks/useAuth.ts`

#### Status: COMPLETED

---

### Iteration 2 - Implement the login API (2026-01-22T10:15:00+08:00)

...

---

## Current Statistics

| Metric | Value |
|--------|-------|
| Total Tasks | 5 |
| Completed | 2 |
| In Progress | 1 |
| Pending | 2 |
| Progress | 40% |

---

## Next Steps

- [ ] Complete remaining tasks
- [ ] Run tests
- [ ] Code review
```
## CLI Integration

### Task Analysis

```bash
ccw cli -p "PURPOSE: Break the development task into subtasks
TASK: • Analyze the task description • Identify functional points • Generate a task list
MODE: analysis
CONTEXT: @package.json @src/**/*
EXPECTED: JSON task list
" --tool gemini --mode analysis --rule planning-breakdown-task-steps
```

### Code Implementation

```bash
ccw cli -p "PURPOSE: Implement the feature code
TASK: • Analyze requirements • Write code • Add types
MODE: write
CONTEXT: @src/xxx.ts
EXPECTED: Complete implementation
" --tool gemini --mode write --rule development-implement-feature
```

## Next Actions (Hints)

- All tasks completed: `action-debug-with-file` (start debugging)
- Task failed: `action-develop-with-file` (retry or move to the next task)
- User choice: `action-menu` (return to the menu)
`.claude/skills/ccw-loop/phases/actions/action-init.md` (new file, 200 lines)
# Action: Initialize

Initialize the CCW Loop session: create the directory structure and the initial state.

## Purpose

- Create the session directory structure
- Initialize the state files
- Analyze the task description to generate an initial task list
- Prepare the execution environment

## Preconditions

- [ ] state.status === 'pending'
- [ ] state.initialized === false

## Execution
### Step 1: Create the Directory Structure

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const taskSlug = state.task_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `LOOP-${taskSlug}-${dateStr}`
const sessionFolder = `.workflow/.loop/${sessionId}`

Bash(`mkdir -p "${sessionFolder}/develop"`)
Bash(`mkdir -p "${sessionFolder}/debug"`)
Bash(`mkdir -p "${sessionFolder}/validate"`)

console.log(`Session created: ${sessionId}`)
console.log(`Location: ${sessionFolder}`)
```
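For illustration, the slug logic above maps a free-form description to a filesystem-safe session ID; note that a non-alphanumeric character at the 30-character cut-off leaves a trailing dash:

```javascript
// Standalone copy of the slug step above, applied to a sample description.
const slugify = (desc) =>
  desc.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)

const sessionId = `LOOP-${slugify('Implement User Authentication!')}-2026-01-22`
// slugify('Implement User Authentication!') → 'implement-user-authentication-'
```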
### Step 2: Create the Metadata File

```javascript
const meta = {
  session_id: sessionId,
  task_description: state.task_description,
  created_at: getUtc8ISOString(),
  mode: state.mode || 'interactive'
}

Write(`${sessionFolder}/meta.json`, JSON.stringify(meta, null, 2))
```
### Step 3: Analyze the Task and Generate the Development Task List

```javascript
// Use Gemini to analyze the task description
console.log('\nAnalyzing task description...')

const analysisPrompt = `
PURPOSE: Analyze the development task and break it into executable steps
Success: Generate 3-7 concrete, verifiable subtasks

TASK:
• Analyze the task description: ${state.task_description}
• Identify key functional points
• Break the work into independent subtasks
• Assign a tool and mode to each subtask

MODE: analysis

CONTEXT: @package.json @src/**/*.ts (if present)

EXPECTED:
JSON format:
{
  "tasks": [
    {
      "id": "task-001",
      "description": "Task description",
      "tool": "gemini",
      "mode": "write",
      "priority": 1
    }
  ],
  "estimated_complexity": "low|medium|high",
  "key_files": ["file1.ts", "file2.ts"]
}

CONSTRAINTS: Generate tasks that are actually executable
`

const result = await Bash({
  command: `ccw cli -p "${analysisPrompt}" --tool gemini --mode analysis --rule planning-breakdown-task-steps`,
  run_in_background: false
})

const analysis = JSON.parse(result.stdout)
const tasks = analysis.tasks.map((t, i) => ({
  ...t,
  id: t.id || `task-${String(i + 1).padStart(3, '0')}`,
  status: 'pending',
  created_at: getUtc8ISOString(),
  completed_at: null,
  files_changed: []
}))

// Save the task list
Write(`${sessionFolder}/develop/tasks.json`, JSON.stringify(tasks, null, 2))
```
### Step 4: Initialize the Progress Document

```javascript
const progressInitial = `# Development Progress

**Session ID**: ${sessionId}
**Task**: ${state.task_description}
**Started**: ${getUtc8ISOString()}
**Estimated Complexity**: ${analysis.estimated_complexity}

---

## Task List

${tasks.map((t, i) => `${i + 1}. [ ] ${t.description}`).join('\n')}

## Key Files

${analysis.key_files?.map(f => `- \`${f}\``).join('\n') || '- To be determined'}

---

## Progress Timeline

`

Write(`${sessionFolder}/develop/progress.md`, progressInitial)
```
### Step 5: Report the Initialization Result

```javascript
console.log(`\n✅ Session initialized`)
console.log(`\nTask list (${tasks.length} items):`)
tasks.forEach((t, i) => {
  console.log(`  ${i + 1}. ${t.description} [${t.tool}/${t.mode}]`)
})
console.log(`\nEstimated complexity: ${analysis.estimated_complexity}`)
console.log(`\nRun 'develop' to start development, or 'menu' for more options`)
```
## State Updates

```javascript
return {
  stateUpdates: {
    session_id: sessionId,
    status: 'running',
    initialized: true,
    develop: {
      tasks: tasks,
      current_task_id: null,
      completed_count: 0,
      total_count: tasks.length,
      last_progress_at: null
    },
    debug: {
      current_bug: null,
      hypotheses: [],
      confirmed_hypothesis: null,
      iteration: 0,
      last_analysis_at: null,
      understanding_updated: false
    },
    validate: {
      test_results: [],
      coverage: null,
      passed: false,
      failed_tests: [],
      last_run_at: null
    },
    context: {
      estimated_complexity: analysis.estimated_complexity,
      key_files: analysis.key_files
    }
  },
  continue: true,
  message: `Session ${sessionId} initialized\n${tasks.length} development tasks pending`
}
```
## Error Handling

| Error Type | Recovery |
|------------|----------|
| Directory creation fails | Check permissions and retry |
| Gemini analysis fails | Ask the user to enter tasks manually |
| Task parsing fails | Fall back to a default task list |

## Next Actions

- Success: `action-menu` (show the action menu) or `action-develop-with-file` (start development directly)
- Failure: report the error and exit
`.claude/skills/ccw-loop/phases/actions/action-menu.md` (new file, 192 lines)
# Action: Menu

Show an interactive action menu and let the user choose the next step.

## Purpose

- Show a summary of the current state
- Present the action options
- Receive the user's selection
- Return the next action

## Preconditions

- [ ] state.initialized === true
- [ ] state.status === 'running'

## Execution
### Step 1: Generate the Status Summary

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Development progress
const developProgress = state.develop.total_count > 0
  ? `${state.develop.completed_count}/${state.develop.total_count} (${(state.develop.completed_count / state.develop.total_count * 100).toFixed(0)}%)`
  : 'Not started'

// Debug status
const debugStatus = state.debug.confirmed_hypothesis
  ? `✅ Root cause confirmed`
  : state.debug.iteration > 0
    ? `🔍 Iteration ${state.debug.iteration}`
    : 'Not started'

// Validation status
const validateStatus = state.validate.passed
  ? `✅ Passed`
  : state.validate.test_results.length > 0
    ? `❌ ${state.validate.failed_tests.length} failed`
    : 'Not run'

const statusSummary = `
═══════════════════════════════════════════════════════════
  CCW Loop - ${state.session_id}
═══════════════════════════════════════════════════════════

Task: ${state.task_description}
Iteration: ${state.iteration_count}

┌─────────────────────────────────────────────────────┐
│ Develop     │ ${developProgress.padEnd(20)} │
│ Debug       │ ${debugStatus.padEnd(20)} │
│ Validate    │ ${validateStatus.padEnd(20)} │
└─────────────────────────────────────────────────────┘

═══════════════════════════════════════════════════════════
`

console.log(statusSummary)
```
### Step 2: Show the Action Options

```javascript
const options = [
  {
    label: "📝 Continue development (Develop)",
    description: state.develop.completed_count < state.develop.total_count
      ? `Execute the next development task`
      : "All tasks completed; new tasks can be added",
    action: "action-develop-with-file"
  },
  {
    label: "🔍 Start debugging (Debug)",
    description: state.debug.iteration > 0
      ? "Continue hypothesis-driven debugging"
      : "Start a new debugging session",
    action: "action-debug-with-file"
  },
  {
    label: "✅ Run validation (Validate)",
    description: "Run tests and check coverage",
    action: "action-validate-with-file"
  },
  {
    label: "📊 View details (Status)",
    description: "View detailed progress and files",
    action: "action-status"
  },
  {
    label: "🏁 Complete the loop (Complete)",
    description: "End the current loop",
    action: "action-complete"
  },
  {
    label: "🚪 Exit",
    description: "Save state and exit",
    action: "exit"
  }
]

const response = await AskUserQuestion({
  questions: [{
    question: "Choose the next action:",
    header: "Action",
    multiSelect: false,
    options: options.map(o => ({
      label: o.label,
      description: o.description
    }))
  }]
})

const selectedLabel = response["Action"]
const selectedOption = options.find(o => o.label === selectedLabel)
const nextAction = selectedOption?.action || 'action-menu'
```
### Step 3: Handle Special Options

```javascript
if (nextAction === 'exit') {
  console.log('\nSaving state and exiting...')
  return {
    stateUpdates: {
      status: 'user_exit'
    },
    continue: false,
    message: 'Session saved; use --resume to continue'
  }
}

if (nextAction === 'action-status') {
  // Show detailed status
  const sessionFolder = `.workflow/.loop/${state.session_id}`

  console.log('\n=== Development Progress ===')
  const progress = Read(`${sessionFolder}/develop/progress.md`)
  console.log(progress?.substring(0, 500) + '...')

  console.log('\n=== Debug Status ===')
  if (state.debug.hypotheses.length > 0) {
    state.debug.hypotheses.forEach(h => {
      console.log(`  ${h.id}: ${h.status} - ${h.description.substring(0, 50)}...`)
    })
  } else {
    console.log('  Debugging not started yet')
  }

  console.log('\n=== Validation Results ===')
  if (state.validate.test_results.length > 0) {
    const latest = state.validate.test_results[state.validate.test_results.length - 1]
    console.log(`  Last run: ${latest.timestamp}`)
    console.log(`  Pass rate: ${latest.summary.pass_rate}%`)
  } else {
    console.log('  Validation not run yet')
  }

  // Return to the menu
  return {
    stateUpdates: {},
    continue: true,
    nextAction: 'action-menu',
    message: ''
  }
}
```
## State Updates

```javascript
return {
  stateUpdates: {
    // No state changes; just return the next action
  },
  continue: true,
  nextAction: nextAction,
  message: `Executing: ${selectedOption?.label || nextAction}`
}
```
## Error Handling

| Error Type | Recovery |
|------------|----------|
| User cancels | Return to the menu |
| Invalid selection | Show the menu again |

## Next Actions

The next action is decided dynamically by the user's selection.
# Action: Validate With File

Run tests and validate the implementation; results are recorded to validation.md, with Gemini-assisted analysis of test coverage and quality.

## Purpose

Execute the test validation flow:
- Run unit tests
- Run integration tests
- Check code coverage
- Generate a validation report
- Analyze failure causes

## Preconditions

- [ ] state.initialized === true
- [ ] state.status === 'running'
- [ ] state.develop.completed_count > 0 || state.debug.confirmed_hypothesis !== null
## Session Setup

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const sessionFolder = `.workflow/.loop/${state.session_id}`
const validateFolder = `${sessionFolder}/validate`
const validationPath = `${validateFolder}/validation.md`
const testResultsPath = `${validateFolder}/test-results.json`
const coveragePath = `${validateFolder}/coverage.json`
```

---

## Execution
### Step 1: Run Tests

```javascript
console.log('\nRunning tests...')

// Detect the test setup
const packageJson = JSON.parse(Read('package.json'))
const testScript = packageJson.scripts?.test || 'npm test'

// Run the tests and capture output
const testResult = await Bash({
  command: testScript,
  timeout: 300000 // 5 minutes
})

// Parse the test output
const testResults = parseTestOutput(testResult.stdout)
```
### Step 2: Check Coverage

```javascript
// Run the coverage check
let coverageData = null

if (packageJson.scripts?.['test:coverage']) {
  const coverageResult = await Bash({
    command: 'npm run test:coverage',
    timeout: 300000
  })

  // Parse the coverage report
  coverageData = parseCoverageReport(coverageResult.stdout)

  Write(coveragePath, JSON.stringify(coverageData, null, 2))
}
```
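`parseCoverageReport` is referenced above but not defined in this document. A minimal sketch against Istanbul's text summary output (lines like `Statements : 82.5% ( 33/40 )`), assuming that reporter is in use:

```javascript
// Hypothetical parser for Istanbul's text summary. Per-file rows are
// omitted in this sketch; only the overall percentages are extracted.
function parseCoverageReport(stdout) {
  const pick = (label) => {
    const m = stdout.match(new RegExp(label + '\\s*:\\s*([\\d.]+)%'))
    return m ? parseFloat(m[1]) : null
  }
  return {
    overall: {
      statements: pick('Statements'),
      branches: pick('Branches'),
      functions: pick('Functions'),
      lines: pick('Lines')
    },
    files: []
  }
}
```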
### Step 3: Gemini-Assisted Analysis

```bash
ccw cli -p "
PURPOSE: Analyze test results and coverage
Success criteria: Identify quality issues and suggest improvements

TASK:
• Analyze test execution results
• Review code coverage metrics
• Identify missing test cases
• Suggest quality improvements
• Verify requirements coverage

MODE: analysis

CONTEXT:
@${testResultsPath}
@${coveragePath}
@${sessionFolder}/develop/progress.md

EXPECTED:
- Quality assessment report
- Failed tests analysis
- Coverage gaps identification
- Improvement recommendations
- Pass/Fail decision with rationale

CONSTRAINTS: Evidence-based quality assessment
" --tool gemini --mode analysis --rule analysis-review-code-quality
```
### Step 4: Generate the Validation Report

```javascript
const timestamp = getUtc8ISOString()
const iteration = (state.validate.test_results?.length || 0) + 1

const validationReport = `# Validation Report

**Session ID**: ${state.session_id}
**Task**: ${state.task_description}
**Validated**: ${timestamp}

---

## Iteration ${iteration} - Validation Run

### Test Execution Summary

| Metric | Value |
|--------|-------|
| Total Tests | ${testResults.total} |
| Passed | ${testResults.passed} |
| Failed | ${testResults.failed} |
| Skipped | ${testResults.skipped} |
| Duration | ${testResults.duration_ms}ms |
| **Pass Rate** | **${(testResults.passed / testResults.total * 100).toFixed(1)}%** |

### Coverage Report

${coverageData ? `
| File | Statements | Branches | Functions | Lines |
|------|------------|----------|-----------|-------|
${coverageData.files.map(f => `| ${f.path} | ${f.statements}% | ${f.branches}% | ${f.functions}% | ${f.lines}% |`).join('\n')}

**Overall Coverage**: ${coverageData.overall.statements}%
` : '_No coverage data available_'}

### Failed Tests

${testResults.failed > 0 ? `
${testResults.failures.map(f => `
#### ${f.test_name}

- **Suite**: ${f.suite}
- **Error**: ${f.error_message}
- **Stack**:
\`\`\`
${f.stack_trace}
\`\`\`
`).join('\n')}
` : '_All tests passed_'}

### Gemini Quality Analysis

${geminiAnalysis}

### Recommendations

${recommendations.map(r => `- ${r}`).join('\n')}

---

## Validation Decision

**Result**: ${testResults.passed === testResults.total ? '✅ PASS' : '❌ FAIL'}

**Rationale**: ${validationDecision}

${testResults.passed !== testResults.total ? `
### Next Actions

1. Review failed tests
2. Debug failures using action-debug-with-file
3. Fix issues and re-run validation
` : `
### Next Actions

1. Consider code review
2. Prepare for deployment
3. Update documentation
`}
`

// Write the validation report
Write(validationPath, validationReport)
```
### Step 5: Save Test Results

```javascript
const testResultsData = {
  iteration,
  timestamp,
  summary: {
    total: testResults.total,
    passed: testResults.passed,
    failed: testResults.failed,
    skipped: testResults.skipped,
    pass_rate: (testResults.passed / testResults.total * 100).toFixed(1),
    duration_ms: testResults.duration_ms
  },
  tests: testResults.tests,
  failures: testResults.failures,
  coverage: coverageData?.overall || null
}

Write(testResultsPath, JSON.stringify(testResultsData, null, 2))
```

---
## State Updates

```javascript
const validationPassed = testResults.failed === 0 && testResults.passed > 0

return {
  stateUpdates: {
    validate: {
      test_results: [...(state.validate.test_results || []), testResultsData],
      coverage: coverageData?.overall.statements || null,
      passed: validationPassed,
      failed_tests: testResults.failures.map(f => f.test_name),
      last_run_at: getUtc8ISOString()
    },
    last_action: 'action-validate-with-file'
  },
  continue: true,
  message: validationPassed
    ? `Validation passed ✅\nTests: ${testResults.passed}/${testResults.total}\nCoverage: ${coverageData?.overall.statements || 'N/A'}%`
    : `Validation failed ❌\nFailures: ${testResults.failed}/${testResults.total}\nConsider entering debug mode`
}
```
## Test Output Parsers

### Jest/Vitest Parser

```javascript
function parseJestOutput(stdout) {
  // Jest summary line, e.g. "Tests: 10 passed, 2 failed, 12 total".
  // Note: group order and presence vary between runs; this minimal sketch
  // assumes the "N passed ... N failed ... N total" shape matched here.
  const testPattern = /Tests:\s+(\d+) passed.*?(\d+) failed.*?(\d+) total/
  const match = stdout.match(testPattern)
  if (!match) return { total: 0, passed: 0, failed: 0 }

  return {
    total: parseInt(match[3], 10),
    passed: parseInt(match[1], 10),
    failed: parseInt(match[2], 10),
    // ... parse individual test results
  }
}
```

### Pytest Parser

```javascript
function parsePytestOutput(stdout) {
  const summaryPattern = /(\d+) passed.*?(\d+) failed.*?(\d+) error/
  // ... implementation
}
```
## Error Handling

| Error Type | Recovery |
|------------|----------|
| Tests don't run | Check the test script configuration; prompt the user |
| All tests fail | Suggest entering debug mode |
| Coverage tool missing | Skip the coverage check; run tests only |
| Timeout | Increase the timeout or split the tests |

## Validation Report Template

See [templates/validation-template.md](../../templates/validation-template.md)

## CLI Integration

### Quality Analysis

```bash
ccw cli -p "PURPOSE: Analyze test results and coverage...
TASK: • Review results • Identify gaps • Suggest improvements
MODE: analysis
CONTEXT: @test-results.json @coverage.json
EXPECTED: Quality assessment
" --tool gemini --mode analysis --rule analysis-review-code-quality
```

### Test Generation (when coverage is low)

```bash
ccw cli -p "PURPOSE: Generate missing test cases...
TASK: • Analyze uncovered code • Write tests
MODE: write
CONTEXT: @coverage.json @src/**/*
EXPECTED: Test code
" --tool gemini --mode write --rule development-generate-tests
```

## Next Actions (Hints)

- Validation passed: `action-complete` (finish the loop)
- Validation failed: `action-debug-with-file` (debug the failing tests)
- Low coverage: `action-develop-with-file` (add tests)
- User choice: `action-menu` (return to the menu)

---

`.claude/skills/ccw-loop/phases/orchestrator.md` (new file, 486 lines)

# Orchestrator

Selects and executes the next action based on the current state, implementing a stateless loop workflow. Cooperates with the API (loop-v2-routes.ts) to separate the control plane from the execution plane.

## Role

Check control signals → read file state → select an action → execute → update files → loop, until the task completes or an external pause/stop occurs.

## State Management (Unified Location)

### Reading State

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

/**
 * Read the loop state (unified location)
 * @param loopId - Loop ID (e.g., "loop-v2-20260122-abc123")
 */
function readLoopState(loopId) {
  const stateFile = `.loop/${loopId}.json`

  if (!fs.existsSync(stateFile)) {
    return null
  }

  return JSON.parse(Read(stateFile))
}
```

### Updating State

```javascript
/**
 * Update the loop state (only the skill_state section; API fields are untouched)
 * @param loopId - Loop ID
 * @param updates - Fields to merge into skill_state
 */
function updateLoopState(loopId, updates) {
  const stateFile = `.loop/${loopId}.json`
  const currentState = readLoopState(loopId)

  if (!currentState) {
    throw new Error(`Loop state not found: ${loopId}`)
  }

  // Only skill_state and updated_at change
  const newState = {
    ...currentState,
    updated_at: getUtc8ISOString(),
    skill_state: {
      ...currentState.skill_state,
      ...updates
    }
  }

  Write(stateFile, JSON.stringify(newState, null, 2))
  return newState
}
```

### Creating New Loop State (direct invocation)

```javascript
/**
 * Create a new loop state (used only when invoked directly;
 * when triggered via the API the state already exists)
 */
function createLoopState(loopId, taskDescription) {
  const stateFile = `.loop/${loopId}.json`
  const now = getUtc8ISOString()

  const state = {
    // API-compatible fields
    loop_id: loopId,
    title: taskDescription.substring(0, 100),
    description: taskDescription,
    max_iterations: 10,
    status: 'running', // set to running on direct invocation
    current_iteration: 0,
    created_at: now,
    updated_at: now,

    // Skill extension fields
    skill_state: null // initialized by action-init
  }

  // Ensure directories exist
  Bash(`mkdir -p ".loop"`)
  Bash(`mkdir -p ".loop/${loopId}.progress"`)

  Write(stateFile, JSON.stringify(state, null, 2))
  return state
}
```

## Control Signal Checking

```javascript
/**
 * Check API control signals.
 * Must be called before every action starts.
 * @returns { continue: boolean, reason: string }
 */
function checkControlSignals(loopId) {
  const state = readLoopState(loopId)

  if (!state) {
    return { continue: false, reason: 'state_not_found' }
  }

  switch (state.status) {
    case 'paused':
      // The API paused the loop; the skill should exit and wait for resume
      console.log(`⏸️ Loop paused by API. Waiting for resume...`)
      return { continue: false, reason: 'paused' }

    case 'failed':
      // The API stopped the loop (manual stop by the user)
      console.log(`⏹️ Loop stopped by API.`)
      return { continue: false, reason: 'stopped' }

    case 'completed':
      // Already finished
      console.log(`✅ Loop already completed.`)
      return { continue: false, reason: 'completed' }

    case 'created':
      // Created by the API but not started (should not reach here)
      console.log(`⚠️ Loop not started by API.`)
      return { continue: false, reason: 'not_started' }

    case 'running':
      // Normal continuation
      return { continue: true, reason: 'running' }

    default:
      console.log(`⚠️ Unknown status: ${state.status}`)
      return { continue: false, reason: 'unknown_status' }
  }
}
```

## Decision Logic

```javascript
/**
 * Select the next action (based on skill_state)
 */
function selectNextAction(state, mode = 'interactive') {
  const skillState = state.skill_state

  // 1. Termination checks (API status)
  if (state.status === 'completed') return null
  if (state.status === 'failed') return null
  if (state.current_iteration >= state.max_iterations) {
    console.warn(`Maximum iterations reached (${state.max_iterations})`)
    return 'action-complete'
  }

  // 2. Initialization check
  if (!skillState || !skillState.current_action) {
    return 'action-init'
  }

  // 3. Mode check
  if (mode === 'interactive') {
    return 'action-menu' // show the menu and let the user choose
  }

  // 4. Auto mode: choose based on state
  if (mode === 'auto') {
    // Priority order: develop → debug → validate

    // Pending development tasks
    const hasPendingDevelop = skillState.develop?.tasks?.some(t => t.status === 'pending')
    if (hasPendingDevelop) {
      return 'action-develop-with-file'
    }

    // Development finished but not yet debugged
    if (skillState.last_action === 'action-develop-with-file') {
      const needsDebug = skillState.develop?.completed < skillState.develop?.total
      if (needsDebug) {
        return 'action-debug-with-file'
      }
    }

    // Debugging finished but not yet validated
    if (skillState.last_action === 'action-debug-with-file' ||
        skillState.debug?.confirmed_hypothesis) {
      return 'action-validate-with-file'
    }

    // Validation failed: go back to development
    if (skillState.last_action === 'action-validate-with-file') {
      if (!skillState.validate?.passed) {
        return 'action-develop-with-file'
      }
    }

    // Everything passed: complete
    if (skillState.validate?.passed && !hasPendingDevelop) {
      return 'action-complete'
    }

    // Default: develop
    return 'action-develop-with-file'
  }

  // 5. Default: complete
  return 'action-complete'
}
```

## Execution Loop

```javascript
/**
 * Run the orchestrator
 * @param options.loopId - Existing loop ID (when triggered by the API)
 * @param options.task - Task description (when invoked directly)
 * @param options.mode - 'interactive' | 'auto'
 */
async function runOrchestrator(options = {}) {
  const { loopId: existingLoopId, task, mode = 'interactive' } = options

  console.log('=== CCW Loop Orchestrator Started ===')

  // 1. Determine the loopId
  let loopId
  let state

  if (existingLoopId) {
    // API-triggered: use the existing loopId
    loopId = existingLoopId
    state = readLoopState(loopId)

    if (!state) {
      console.error(`Loop not found: ${loopId}`)
      return { status: 'error', message: 'Loop not found' }
    }

    console.log(`Resuming loop: ${loopId}`)
    console.log(`Status: ${state.status}`)

  } else if (task) {
    // Direct invocation: create a new loopId
    const timestamp = getUtc8ISOString().replace(/[-:]/g, '').split('.')[0]
    const random = Math.random().toString(36).substring(2, 10)
    loopId = `loop-v2-${timestamp}-${random}`

    console.log(`Creating new loop: ${loopId}`)
    console.log(`Task: ${task}`)

    state = createLoopState(loopId, task)

  } else {
    console.error('Either --loop-id or task description is required')
    return { status: 'error', message: 'Missing loopId or task' }
  }

  const progressDir = `.loop/${loopId}.progress`

  // 2. Main loop
  let iteration = state.current_iteration || 0

  while (iteration < state.max_iterations) {
    iteration++

    // ========================================
    // CRITICAL: Check control signals first
    // ========================================
    const control = checkControlSignals(loopId)
    if (!control.continue) {
      console.log(`\n🛑 Loop terminated: ${control.reason}`)
      break
    }

    // Re-read the state (it may have been updated by the API)
    state = readLoopState(loopId)

    console.log(`\n[Iteration ${iteration}] Status: ${state.status}`)

    // Select the next action
    const actionId = selectNextAction(state, mode)

    if (!actionId) {
      console.log('No action selected, terminating.')
      break
    }

    console.log(`[Iteration ${iteration}] Executing: ${actionId}`)

    // Update current_iteration
    state = {
      ...state,
      current_iteration: iteration,
      updated_at: getUtc8ISOString()
    }
    Write(`.loop/${loopId}.json`, JSON.stringify(state, null, 2))

    // Execute the action
    try {
      const actionPromptFile = `.claude/skills/ccw-loop/phases/actions/${actionId}.md`

      if (!fs.existsSync(actionPromptFile)) {
        console.error(`Action file not found: ${actionPromptFile}`)
        continue
      }

      const actionPrompt = Read(actionPromptFile)

      // Build the agent prompt
      const agentPrompt = `
[LOOP CONTEXT]
Loop ID: ${loopId}
State File: .loop/${loopId}.json
Progress Dir: ${progressDir}

[CURRENT STATE]
${JSON.stringify(state, null, 2)}

[ACTION INSTRUCTIONS]
${actionPrompt}

[TASK]
You are executing ${actionId} for loop: ${state.title || state.description}

[CONTROL SIGNALS]
Before executing, check if status is still 'running'.
If status is 'paused' or 'failed', exit gracefully.

[RETURN]
Return JSON with:
- skillStateUpdates: Object with skill_state fields to update
- continue: Boolean indicating if loop should continue
- message: String with user message
`

      const result = await Task({
        subagent_type: 'universal-executor',
        run_in_background: false,
        description: `Execute ${actionId}`,
        prompt: agentPrompt
      })

      // Parse the result
      const actionResult = JSON.parse(result)

      // Update state (skill_state only)
      updateLoopState(loopId, {
        current_action: null,
        last_action: actionId,
        completed_actions: [
          ...(state.skill_state?.completed_actions || []),
          actionId
        ],
        ...actionResult.skillStateUpdates
      })

      // Show the message
      if (actionResult.message) {
        console.log(`\n${actionResult.message}`)
      }

      // Check whether to continue
      if (actionResult.continue === false) {
        console.log('Action requested termination.')
        break
      }

    } catch (error) {
      console.error(`Error executing ${actionId}: ${error.message}`)

      // Record the error
      updateLoopState(loopId, {
        current_action: null,
        errors: [
          ...(state.skill_state?.errors || []),
          {
            action: actionId,
            message: error.message,
            timestamp: getUtc8ISOString()
          }
        ]
      })
    }
  }

  if (iteration >= state.max_iterations) {
    console.log(`\n⚠️ Reached maximum iterations (${state.max_iterations})`)
    console.log('Consider breaking down the task or taking a break.')
  }

  console.log('\n=== CCW Loop Orchestrator Finished ===')

  // Return the final state
  const finalState = readLoopState(loopId)
  return {
    status: finalState.status,
    loop_id: loopId,
    iterations: iteration,
    final_state: finalState
  }
}
```

## Action Catalog

| Action | Purpose | Preconditions | Effects |
|--------|---------|---------------|---------|
| [action-init](actions/action-init.md) | Initialize the session | status=pending | initialized=true |
| [action-menu](actions/action-menu.md) | Show the action menu | initialized=true | user picks the next action |
| [action-develop-with-file](actions/action-develop-with-file.md) | Development task | initialized=true | updates progress.md |
| [action-debug-with-file](actions/action-debug-with-file.md) | Hypothesis debugging | initialized=true | updates understanding.md |
| [action-validate-with-file](actions/action-validate-with-file.md) | Test validation | initialized=true | updates validation.md |
| [action-complete](actions/action-complete.md) | Finish the loop | validation_passed=true | status=completed |

## Termination Conditions

1. **API pause**: `state.status === 'paused'` (the skill exits and waits for resume)
2. **API stop**: `state.status === 'failed'` (the skill terminates)
3. **Task complete**: `state.status === 'completed'`
4. **Iteration limit**: `state.current_iteration >= state.max_iterations`
5. **Action requests termination**: `actionResult.continue === false`

## Error Recovery

| Error Type | Recovery Strategy |
|------------|-------------------|
| Action execution fails | Log the error, increment error_count, continue with the next action |
| State file corrupted | Rebuild state from the other files (progress.md, understanding.md, etc.) |
| User aborts | Save current state; allow recovery via --resume |
| CLI tool fails | Fall back to manual analysis mode |

## Mode Strategies

### Interactive Mode (default)

Shows a menu each time and lets the user pick an action:

```
Current state: developing
Available actions:
1. Continue development (develop)
2. Start debugging (debug)
3. Run validation (validate)
4. Show progress (status)
5. Exit (exit)

Choose:
```

### Auto Mode (automatic loop)

Runs the preset flow automatically:

```
Develop → Debug → Validate →
        ↓ (if validation fails)
Develop (fix) → Debug → Validate → done
```

## State Machine (API Status)

```mermaid
stateDiagram-v2
    [*] --> created: API creates loop
    created --> running: API /start → Trigger Skill
    running --> paused: API /pause → Set status
    running --> completed: action-complete
    running --> failed: API /stop OR error
    paused --> running: API /resume → Re-trigger Skill
    completed --> [*]
    failed --> [*]

    note right of paused
        Skill checks status before each action
        If paused, Skill exits gracefully
    end note

    note right of running
        Skill executes: init → develop → debug → validate
    end note
```

---

`.claude/skills/ccw-loop/phases/state-schema.md` (new file, 474 lines)

# State Schema

State structure definition for the CCW Loop (unified version).

## State File

**Location**: `.loop/{loopId}.json` (unified location, shared by the API and the Skill)

**Legacy location** (backward compatibility only): `.workflow/.loop/{session-id}/state.json`

## Structure Definition

### Unified Loop State

```typescript
/**
 * Unified Loop State - shared by the API and the Skill.
 * The API (loop-v2-routes.ts) owns the state;
 * the Skill (ccw-loop) reads and updates it.
 */
interface LoopState {
  // =====================================================
  // API FIELDS (from loop-v2-routes.ts)
  // Managed by the API; read-only for the Skill
  // =====================================================

  loop_id: string              // Loop ID, e.g., "loop-v2-20260122-abc123"
  title: string                // Loop title
  description: string          // Loop description
  max_iterations: number       // Maximum number of iterations
  status: 'created' | 'running' | 'paused' | 'completed' | 'failed'
  current_iteration: number    // Current iteration count
  created_at: string           // Creation time (ISO8601)
  updated_at: string           // Last update time (ISO8601)
  completed_at?: string        // Completion time (ISO8601)
  failure_reason?: string      // Failure reason

  // =====================================================
  // SKILL EXTENSION FIELDS
  // Managed by the Skill; read-only for the API
  // =====================================================

  skill_state?: {
    // Currently executing action
    current_action: 'init' | 'develop' | 'debug' | 'validate' | 'complete' | null
    last_action: string | null
    completed_actions: string[]
    mode: 'interactive' | 'auto'

    // === Development phase ===
    develop: {
      total: number
      completed: number
      current_task?: string
      tasks: DevelopTask[]
      last_progress_at: string | null
    }

    // === Debugging phase ===
    debug: {
      active_bug?: string
      hypotheses_count: number
      hypotheses: Hypothesis[]
      confirmed_hypothesis: string | null
      iteration: number
      last_analysis_at: string | null
    }

    // === Validation phase ===
    validate: {
      pass_rate: number        // Test pass rate (0-100)
      coverage: number         // Coverage (0-100)
      test_results: TestResult[]
      passed: boolean
      failed_tests: string[]
      last_run_at: string | null
    }

    // === Error tracking ===
    errors: Array<{
      action: string
      message: string
      timestamp: string
    }>
  }
}

interface DevelopTask {
  id: string
  description: string
  tool: 'gemini' | 'qwen' | 'codex' | 'bash'
  mode: 'analysis' | 'write'
  status: 'pending' | 'in_progress' | 'completed' | 'failed'
  files_changed: string[]
  created_at: string
  completed_at: string | null
}

interface Hypothesis {
  id: string                   // H1, H2, ...
  description: string
  testable_condition: string
  logging_point: string
  evidence_criteria: {
    confirm: string
    reject: string
  }
  likelihood: number           // 1 = most likely
  status: 'pending' | 'confirmed' | 'rejected' | 'inconclusive'
  evidence: Record<string, any> | null
  verdict_reason: string | null
}

interface TestResult {
  test_name: string
  suite: string
  status: 'passed' | 'failed' | 'skipped'
  duration_ms: number
  error_message: string | null
  stack_trace: string | null
}
```

## Initial State

### Created by the API (Dashboard-triggered)

```json
{
  "loop_id": "loop-v2-20260122-abc123",
  "title": "Implement user authentication",
  "description": "Add login/logout functionality",
  "max_iterations": 10,
  "status": "created",
  "current_iteration": 0,
  "created_at": "2026-01-22T10:00:00+08:00",
  "updated_at": "2026-01-22T10:00:00+08:00"
}
```

### After Skill Initialization (action-init)

```json
{
  "loop_id": "loop-v2-20260122-abc123",
  "title": "Implement user authentication",
  "description": "Add login/logout functionality",
  "max_iterations": 10,
  "status": "running",
  "current_iteration": 0,
  "created_at": "2026-01-22T10:00:00+08:00",
  "updated_at": "2026-01-22T10:00:05+08:00",

  "skill_state": {
    "current_action": "init",
    "last_action": null,
    "completed_actions": [],
    "mode": "auto",

    "develop": {
      "total": 3,
      "completed": 0,
      "current_task": null,
      "tasks": [
        { "id": "task-001", "description": "Create auth component", "status": "pending" }
      ],
      "last_progress_at": null
    },

    "debug": {
      "active_bug": null,
      "hypotheses_count": 0,
      "hypotheses": [],
      "confirmed_hypothesis": null,
      "iteration": 0,
      "last_analysis_at": null
    },

    "validate": {
      "pass_rate": 0,
      "coverage": 0,
      "test_results": [],
      "passed": false,
      "failed_tests": [],
      "last_run_at": null
    },

    "errors": []
  }
}
```

## Control Signals

The Skill must check control signals before every action:

```javascript
/**
 * Check API control signals
 * @returns { continue: boolean, action: 'pause_exit' | 'stop_exit' | 'continue' }
 */
function checkControlSignals(loopId) {
  const state = JSON.parse(Read(`.loop/${loopId}.json`))

  switch (state.status) {
    case 'paused':
      // The API paused the loop; the Skill should exit and wait for resume
      return { continue: false, action: 'pause_exit' }

    case 'failed':
      // The API stopped the loop (manual stop by the user)
      return { continue: false, action: 'stop_exit' }

    case 'running':
      // Normal continuation
      return { continue: true, action: 'continue' }

    default:
      // Unexpected status
      return { continue: false, action: 'stop_exit' }
  }
}
```

### Usage in an Action

```markdown
## Execution

### Step 1: Check Control Signals

\`\`\`javascript
const control = checkControlSignals(loopId)
if (!control.continue) {
  // Report the exit reason
  console.log(`Loop ${control.action}: status = ${state.status}`)

  // On pause_exit, save current progress
  if (control.action === 'pause_exit') {
    updateSkillState(loopId, { current_action: 'paused' })
  }

  return // exit the action
}
\`\`\`

### Step 2: Execute Action Logic
...
```

## State Transition Rules

### 1. Initialization (action-init)

```javascript
// After Skill initialization
{
  // API field updates
  status: 'created' → 'running', // or keep 'running' if the API already set it
  updated_at: timestamp,

  // Skill field initialization
  skill_state: {
    current_action: 'init',
    mode: 'auto',
    develop: {
      tasks: [...parsed_tasks],
      total: N,
      completed: 0
    }
  }
}
```

### 2. Development in Progress (action-develop-with-file)

```javascript
// After a development task runs
{
  updated_at: timestamp,
  current_iteration: state.current_iteration + 1,

  skill_state: {
    current_action: 'develop',
    last_action: 'action-develop-with-file',
    completed_actions: [...state.skill_state.completed_actions, 'action-develop-with-file'],
    develop: {
      current_task: 'task-xxx',
      completed: N+1,
      last_progress_at: timestamp
    }
  }
}
```

### 3. Debugging in Progress (action-debug-with-file)

```javascript
// After a debugging run
{
  updated_at: timestamp,
  current_iteration: state.current_iteration + 1,

  skill_state: {
    current_action: 'debug',
    last_action: 'action-debug-with-file',
    debug: {
      active_bug: '...',
      hypotheses_count: N,
      hypotheses: [...new_hypotheses],
      iteration: N+1,
      last_analysis_at: timestamp
    }
  }
}
```

### 4. Validation Complete (action-validate-with-file)

```javascript
// After a validation run
{
  updated_at: timestamp,
  current_iteration: state.current_iteration + 1,

  skill_state: {
    current_action: 'validate',
    last_action: 'action-validate-with-file',
    validate: {
      test_results: [...results],
      pass_rate: 95.5,
      coverage: 85.0,
      passed: true | false,
      failed_tests: ['test1', 'test2'],
      last_run_at: timestamp
    }
  }
}
```

### 5. Completion (action-complete)

```javascript
// After the loop completes
{
  status: 'running' → 'completed',
  completed_at: timestamp,
  updated_at: timestamp,

  skill_state: {
    current_action: 'complete',
    last_action: 'action-complete'
  }
}
```

## Derived State Fields

The following fields can be computed from the state and do not need to be stored (field names follow the skill_state schema above):

```javascript
// Development progress
const developProgress = state.develop.total > 0
  ? (state.develop.completed / state.develop.total) * 100
  : 0

// Any pending development tasks?
const hasPendingDevelop = state.develop.tasks.some(t => t.status === 'pending')

// Debugging complete?
const debugCompleted = state.debug.confirmed_hypothesis !== null

// Validation passed?
const validationPassed = state.validate.passed && state.validate.test_results.length > 0

// Overall progress
const overallProgress = (
  (developProgress * 0.5) +
  (debugCompleted ? 25 : 0) +
  (validationPassed ? 25 : 0)
)
```

## File Synchronization

### Unified Location

Mapping between state fields and files:

| State Field | Synced File | When |
|-------------|-------------|------|
| Entire LoopState | `.loop/{loopId}.json` | on every state change (main file) |
| `skill_state.develop` | `.loop/{loopId}.progress/develop.md` | after every development action |
| `skill_state.debug` | `.loop/{loopId}.progress/debug.md` | after every debugging action |
| `skill_state.validate` | `.loop/{loopId}.progress/validate.md` | after every validation action |
| Code change log | `.loop/{loopId}.progress/changes.log` | on every file change (NDJSON) |
| Debug log | `.loop/{loopId}.progress/debug.log` | on every debug log entry (NDJSON) |

### Example File Layout

```
.loop/
├── loop-v2-20260122-abc123.json         # Main state file (API + Skill)
├── loop-v2-20260122-abc123.tasks.jsonl  # Task list (API-managed)
└── loop-v2-20260122-abc123.progress/    # Skill progress files
    ├── develop.md                       # Development progress
    ├── debug.md                         # Debugging understanding
    ├── validate.md                      # Validation report
    ├── changes.log                      # Code changes (NDJSON)
    └── debug.log                        # Debug log (NDJSON)
```

## State Recovery

If the main state file `.loop/{loopId}.json` is corrupted, skill_state can be rebuilt from the progress files:

```javascript
function rebuildSkillStateFromProgress(loopId) {
  const progressDir = `.loop/${loopId}.progress`

  // Try to parse state from the progress files
  const skill_state = {
    develop: parseProgressFile(`${progressDir}/develop.md`),
    debug: parseProgressFile(`${progressDir}/debug.md`),
    validate: parseProgressFile(`${progressDir}/validate.md`)
  }

  return skill_state
}

// Parse a progress Markdown file
function parseProgressFile(filePath) {
  const content = Read(filePath)
  if (!content) return null

  // Extract data from Markdown tables and structure
  // ... implementation
}
```

### Recovery Strategy

1. **API fields**: not recoverable - re-fetch from the API or ask the user
2. **skill_state fields**: can be parsed from the Markdown files under `.progress/`
3. **Task list**: restore from `.loop/{loopId}.tasks.jsonl`

## State Validation

```javascript
function validateState(state) {
  const errors = []

  // Required fields (unified schema)
  if (!state.loop_id) errors.push('Missing loop_id')
  if (!state.description) errors.push('Missing description')

  // State consistency
  if (state.skill_state && state.status === 'created') {
    errors.push('Inconsistent: skill_state initialized but status is created')
  }

  if (state.status === 'completed' && !state.skill_state?.validate?.passed) {
    errors.push('Inconsistent: completed but validation not passed')
  }

  // Development task consistency
  const completedTasks = state.skill_state?.develop?.tasks?.filter(t => t.status === 'completed').length ?? 0
  if (completedTasks !== (state.skill_state?.develop?.completed ?? 0)) {
    errors.push('Inconsistent: develop.completed mismatch')
  }

  return { valid: errors.length === 0, errors }
}
```

---

`.claude/skills/ccw-loop/specs/action-catalog.md` (new file, 300 lines)
|
||||
# Action Catalog
|
||||
|
||||
CCW Loop 所有可用动作的目录和说明。
|
||||
|
||||
## Available Actions
|
||||
|
||||
| Action | Purpose | Preconditions | Effects | CLI Integration |
|
||||
|--------|---------|---------------|---------|-----------------|
|
||||
| [action-init](../phases/actions/action-init.md) | 初始化会话 | status=pending, initialized=false | status→running, initialized→true, 创建目录和任务列表 | Gemini 任务分解 |
|
||||
| [action-menu](../phases/actions/action-menu.md) | 显示操作菜单 | initialized=true, status=running | 返回用户选择的动作 | - |
|
||||
| [action-develop-with-file](../phases/actions/action-develop-with-file.md) | 执行开发任务 | initialized=true, pending tasks > 0 | 更新 progress.md, 完成一个任务 | Gemini 代码实现 |
|
||||
| [action-debug-with-file](../phases/actions/action-debug-with-file.md) | 假设驱动调试 | initialized=true | 更新 understanding.md, hypotheses.json | Gemini 假设生成和证据分析 |
|
||||
| [action-validate-with-file](../phases/actions/action-validate-with-file.md) | 运行测试验证 | initialized=true, develop > 0 or debug confirmed | 更新 validation.md, test-results.json | Gemini 质量分析 |
|
||||
| [action-complete](../phases/actions/action-complete.md) | 完成循环 | initialized=true | status→completed, 生成 summary.md | - |
|
||||
|
||||
## Action Dependencies Graph
|
||||
|
||||
```mermaid
|
||||
graph TD
|
||||
START([用户启动 /ccw-loop]) --> INIT[action-init]
|
||||
INIT --> MENU[action-menu]
|
||||
|
||||
MENU --> DEVELOP[action-develop-with-file]
|
||||
MENU --> DEBUG[action-debug-with-file]
|
||||
MENU --> VALIDATE[action-validate-with-file]
|
||||
MENU --> STATUS[action-status]
|
||||
MENU --> COMPLETE[action-complete]
|
||||
MENU --> EXIT([退出])
|
||||
|
||||
DEVELOP --> MENU
|
||||
DEBUG --> MENU
|
||||
VALIDATE --> MENU
|
||||
STATUS --> MENU
|
||||
COMPLETE --> END([结束])
|
||||
EXIT --> END
|
||||
|
||||
style INIT fill:#e1f5fe
|
||||
style MENU fill:#fff3e0
|
||||
style DEVELOP fill:#e8f5e9
|
||||
style DEBUG fill:#fce4ec
|
||||
style VALIDATE fill:#f3e5f5
|
||||
style COMPLETE fill:#c8e6c9
|
||||
```
|
||||
|
||||
## Action Execution Matrix
|
||||
|
||||
### Interactive Mode
|
||||
|
||||
| State | Auto-Selected Action | User Options |
|
||||
|-------|---------------------|--------------|
|
||||
| pending | action-init | - |
|
||||
| running, !initialized | action-init | - |
|
||||
| running, initialized | action-menu | All actions |
|
||||
|
||||
### Auto Mode
|
||||
|
||||
| Condition | Selected Action |
|
||||
|-----------|----------------|
|
||||
| pending_develop_tasks > 0 | action-develop-with-file |
|
||||
| last_action=develop, !debug_completed | action-debug-with-file |
|
||||
| last_action=debug, !validation_completed | action-validate-with-file |
|
||||
| validation_failed | action-develop-with-file (fix) |
|
||||
| validation_passed, no pending | action-complete |
|
||||
|
||||
## Action Inputs/Outputs
|
||||
|
||||
### action-init
|
||||
|
||||
**Inputs**:
|
||||
- state.task_description
|
||||
- User input (optional)
|
||||
|
||||
**Outputs**:
|
||||
- meta.json
|
||||
- state.json (初始化)
|
||||
- develop/tasks.json
|
||||
- develop/progress.md
|
||||
|
||||
**State Changes**:
|
||||
```javascript
|
||||
{
|
||||
status: 'pending' → 'running',
|
||||
initialized: false → true,
|
||||
develop.tasks: [] → [task1, task2, ...]
|
||||
}
|
||||
```
|
||||
|
||||
### action-develop-with-file

**Inputs**:
- state.develop.tasks
- User selection (if multiple tasks are pending)

**Outputs**:
- develop/progress.md (append)
- develop/tasks.json (update)
- develop/changes.log (append)

**State Changes**:
```javascript
{
  develop.current_task_id: null → 'task-xxx' → null,
  develop.completed_count: N → N+1,
  last_action: X → 'action-develop-with-file'
}
```

### action-debug-with-file

**Inputs**:
- Bug description (user input or taken from test failures)
- debug.log (if one already exists)

**Outputs**:
- debug/understanding.md (append)
- debug/hypotheses.json (update)
- Code changes (added logging or fixes)

**State Changes**:
```javascript
{
  debug.current_bug: null → 'bug description',
  debug.hypotheses: [...updated],
  debug.iteration: N → N+1,
  debug.confirmed_hypothesis: null → 'H1' (if confirmed)
}
```

### action-validate-with-file

**Inputs**:
- Test scripts (from package.json)
- Coverage tool (optional)

**Outputs**:
- validate/validation.md (append)
- validate/test-results.json (update)
- validate/coverage.json (update)

**State Changes**:
```javascript
{
  validate.test_results: [...new results],
  validate.coverage: null → 85.5,
  validate.passed: false → true,
  validate.failed_tests: ['test1', 'test2'] → []
}
```

### action-complete

**Inputs**:
- state (full state)
- User choices (extension options)

**Outputs**:
- summary.md
- Issues (if extensions are selected)

**State Changes**:
```javascript
{
  status: 'running' → 'completed',
  completed_at: null → timestamp
}
```

## Action Sequences

### Typical Happy Path

```
action-init
  → action-develop-with-file (task 1)
  → action-develop-with-file (task 2)
  → action-develop-with-file (task 3)
  → action-validate-with-file
  → PASS
  → action-complete
```

### Debug Iteration Path

```
action-init
  → action-develop-with-file (task 1)
  → action-validate-with-file
  → FAIL
  → action-debug-with-file (explore)
  → action-debug-with-file (analyze)
  → Root cause found
  → action-validate-with-file
  → PASS
  → action-complete
```

### Multi-Iteration Path

```
action-init
  → action-develop-with-file (task 1)
  → action-debug-with-file
  → action-develop-with-file (task 2)
  → action-validate-with-file
  → FAIL
  → action-debug-with-file
  → action-validate-with-file
  → PASS
  → action-complete
```

## Error Scenarios

### CLI Tool Failure

```
action-develop-with-file
  → Gemini CLI fails
  → Fallback to manual implementation
  → Prompt user for code
  → Continue
```

### Test Failure

```
action-validate-with-file
  → Tests fail
  → Record failed tests
  → Suggest action-debug-with-file
  → User chooses debug or manual fix
```

### Max Iterations Reached

```
state.iteration_count >= 10
  → Warning message
  → Suggest break or task split
  → Allow continue or exit
```

## Action Extensions

### Adding New Actions

To add a new action:

1. Create `phases/actions/action-{name}.md`
2. Define preconditions, execution, state updates
3. Add to this catalog
4. Update orchestrator.md decision logic
5. Add to action-menu.md options

### Action Template

```markdown
# Action: {Name}

{Brief description}

## Purpose

{Detailed purpose}

## Preconditions

- [ ] condition1
- [ ] condition2

## Execution

### Step 1: {Step Name}

\`\`\`javascript
// code
\`\`\`

## State Updates

\`\`\`javascript
return {
  stateUpdates: {
    // updates
  },
  continue: true,
  message: "..."
}
\`\`\`

## Error Handling

| Error Type | Recovery |
|------------|----------|
| ... | ... |

## Next Actions (Hints)

- condition: next_action
```

192
.claude/skills/ccw-loop/specs/loop-requirements.md
Normal file
@@ -0,0 +1,192 @@

# Loop Requirements Specification

Core requirements and constraints for CCW Loop.

## Core Requirements

### 1. Stateless Loop

**Requirement**: Each execution reads state from files and writes it back afterward; nothing depends on in-memory state.

**Rationale**: Supports interruption and resumption at any time; state is persisted.

**Validation**:
- [ ] Every action reads state from files when it starts
- [ ] Every action writes state back to files when it ends
- [ ] No global variables or in-memory state dependencies

### 2. File-Driven Progress

**Requirement**: All progress, understanding, and validation results are recorded in dedicated Markdown files.

**Rationale**: Auditable, reviewable, and visible to the team.

**Validation**:
- [ ] develop/progress.md records development progress
- [ ] debug/understanding.md records the evolution of understanding
- [ ] validate/validation.md records validation results
- [ ] All files use readable Markdown

### 3. CLI Tool Integration

**Requirement**: Key decision points use Gemini/CLI for deep analysis.

**Rationale**: Leverage LLM capabilities to improve quality.

**Validation**:
- [ ] Task decomposition uses Gemini
- [ ] Hypothesis generation uses Gemini
- [ ] Evidence analysis uses Gemini
- [ ] Quality assessment uses Gemini

### 4. User-Controlled Loop

**Requirement**: Support both interactive and automatic loop modes; the user can step in at any time.

**Rationale**: Flexibility to fit different scenarios.

**Validation**:
- [ ] Interactive mode: show a menu at each step
- [ ] Auto mode: follow the preset flow
- [ ] The user can exit at any time
- [ ] State is recoverable

### 5. Recoverability

**Requirement**: After any interruption, the loop can continue from where it left off.

**Rationale**: Supports long-running tasks and recovery from unexpected interruptions.

**Validation**:
- [ ] State is saved in state.json
- [ ] `--resume` continues the session
- [ ] History is fully preserved

## Quality Standards

### Completeness

| Dimension | Threshold |
|-----------|-----------|
| Progress document completeness | Every task is recorded |
| Understanding document evolution | Updated on every iteration |
| Validation report detail | Includes all test results |

### Consistency

| Dimension | Threshold |
|-----------|-----------|
| Consistent file format | All Markdown files use the same templates |
| Consistent state sync | state.json matches file contents |
| Timestamp format | ISO 8601 throughout |

### Usability

| Dimension | Threshold |
|-----------|-----------|
| Menu usability | Options are clear and accurately described |
| Progress visibility | Current status can be checked at any time |
| Error messaging | Errors are clear and include recovery suggestions |

## Constraints

### 1. File Structure Constraints

```
.workflow/.loop/{session-id}/
├── meta.json              # Written once, never modified
├── state.json             # Updated after every action
├── develop/
│   ├── progress.md        # Append-only, no deletions
│   ├── tasks.json         # Task status updates
│   └── changes.log        # NDJSON format, append-only
├── debug/
│   ├── understanding.md   # Append-only, timeline record
│   ├── hypotheses.json    # Hypothesis status updates
│   └── debug.log          # NDJSON format
└── validate/
    ├── validation.md      # Appended on each validation
    ├── test-results.json  # Accumulated test results
    └── coverage.json      # Latest coverage
```

### 2. Naming Constraints

- Session ID: `LOOP-{slug}-{YYYY-MM-DD}`
- Task ID: `task-{NNN}` (three digits)
- Hypothesis ID: `H{N}` (letter plus number)

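These naming rules are mechanical enough to check with regular expressions. A sketch (patterns inferred from the formats listed; calendar validity of the date is deliberately not checked):

```javascript
// Hypothetical validators for the ID naming constraints.
const SESSION_ID = /^LOOP-[a-z0-9-]+-\d{4}-\d{2}-\d{2}$/
const TASK_ID = /^task-\d{3}$/
const HYPOTHESIS_ID = /^H\d+$/

function isValidSessionId(id) { return SESSION_ID.test(id) }
function isValidTaskId(id) { return TASK_ID.test(id) }
function isValidHypothesisId(id) { return HYPOTHESIS_ID.test(id) }
```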
### 3. State Transition Constraints

```
pending → running → completed
              ↓
          user_exit
              ↓
          failed
```

Allowed transitions only: `pending→running`, `running→completed/user_exit/failed`

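The allowed transitions can be enforced with a small lookup table. A sketch (a guard like this is an assumption for illustration, not part of the skill's shipped code):

```javascript
// Allowed status transitions, per the constraint above.
const TRANSITIONS = {
  pending: ['running'],
  running: ['completed', 'user_exit', 'failed']
}

// Returns true only for a transition the constraint permits.
function canTransition(from, to) {
  return (TRANSITIONS[from] || []).includes(to)
}
```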
### 4. Error Limit Constraints

- Maximum error count: 3
- More than 3 errors → automatic termination
- Each error → recorded in state.errors[]

### 5. Iteration Limit Constraints

- Maximum iterations: 10 (warning)
- Beyond 10 iterations → warn the user, but do not force a stop
- Suggest splitting the task or taking a break

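The error limit is hard (terminate) while the iteration limit is soft (warn only), so the two combine into one guard the loop can evaluate before each step. A sketch under the thresholds stated above:

```javascript
// Combined limit guard: errors terminate, iterations only warn.
function checkLimits(state) {
  const errors = (state.errors || []).length
  if (errors >= 3) {
    return { action: 'terminate', reason: `error limit reached (${errors}/3)` }
  }
  if (state.iteration_count >= 10) {
    return { action: 'warn', reason: 'consider splitting the task or taking a break' }
  }
  return { action: 'continue' }
}
```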
## Integration Requirements

### 1. Dashboard Integration

**Requirement**: Integrate seamlessly with the CCW Dashboard Loop Monitor.

**Specification**:
- Dashboard creates a Loop → invokes this Skill
- state.json → displayed live in the Dashboard
- Task list syncs in both directions
- Status control buttons map to actions

### 2. Issue System Integration

**Requirement**: Completed work can be extended into Issues.

**Specification**:
- Supported dimensions: test, enhance, refactor, doc
- Invoke `/issue:new "{summary} - {dimension}"`
- Context is filled in automatically

### 3. CLI Tool Integration

**Requirement**: Use CCW CLI tools for analysis and implementation.

**Specification**:
- Task decomposition: `--rule planning-breakdown-task-steps`
- Code implementation: `--rule development-implement-feature`
- Root cause analysis: `--rule analysis-diagnose-bug-root-cause`
- Quality assessment: `--rule analysis-review-code-quality`

## Non-Functional Requirements

### Performance

- Session initialization: < 5 s
- Action execution: < 30 s (excluding CLI calls)
- State read/write: < 1 s

### Reliability

- Corrupt state file recovery: rebuild from the other files
- CLI tool failure degradation: fall back to manual mode
- Error retry: one automatic retry is supported

### Maintainability

- Documented: every action has a clear description
- Modular: every action is independently testable
- Extensible: new actions are easy to add

175
.claude/skills/ccw-loop/templates/progress-template.md
Normal file
@@ -0,0 +1,175 @@

# Progress Document Template

Standard template for the development progress document.

## Template Structure

```markdown
# Development Progress

**Session ID**: {{session_id}}
**Task**: {{task_description}}
**Started**: {{started_at}}
**Estimated Complexity**: {{complexity}}

---

## Task List

{{#each tasks}}
{{@index}}. [{{#if completed}}x{{else}} {{/if}}] {{description}}
{{/each}}

## Key Files

{{#each key_files}}
- `{{this}}`
{{/each}}

---

## Progress Timeline

{{#each iterations}}
### Iteration {{@index}} - {{task_name}} ({{timestamp}})

#### Task Details

- **ID**: {{task_id}}
- **Tool**: {{tool}}
- **Mode**: {{mode}}

#### Implementation Summary

{{summary}}

#### Files Changed

{{#each files_changed}}
- `{{this}}`
{{/each}}

#### Status: {{status}}

---
{{/each}}

## Current Statistics

| Metric | Value |
|--------|-------|
| Total Tasks | {{total_tasks}} |
| Completed | {{completed_tasks}} |
| In Progress | {{in_progress_tasks}} |
| Pending | {{pending_tasks}} |
| Progress | {{progress_percentage}}% |

---

## Next Steps

{{#each next_steps}}
- [ ] {{this}}
{{/each}}
```

## Template Variables

| Variable | Type | Source | Description |
|----------|------|--------|-------------|
| `session_id` | string | state.session_id | Session ID |
| `task_description` | string | state.task_description | Task description |
| `started_at` | string | state.created_at | Start time |
| `complexity` | string | state.context.estimated_complexity | Estimated complexity |
| `tasks` | array | state.develop.tasks | Task list |
| `key_files` | array | state.context.key_files | Key files |
| `iterations` | array | Parsed from the file | Iteration history |
| `total_tasks` | number | state.develop.total_count | Total task count |
| `completed_tasks` | number | state.develop.completed_count | Completed count |

## Usage Example

```javascript
const progressTemplate = Read('.claude/skills/ccw-loop/templates/progress-template.md')

function renderProgress(state) {
  let content = progressTemplate

  // Replace simple variables
  content = content.replace('{{session_id}}', state.session_id)
  content = content.replace('{{task_description}}', state.task_description)
  content = content.replace('{{started_at}}', state.created_at)
  content = content.replace('{{complexity}}', state.context?.estimated_complexity || 'unknown')

  // Replace the task list
  const taskList = state.develop.tasks.map((t, i) => {
    const checkbox = t.status === 'completed' ? 'x' : ' '
    return `${i + 1}. [${checkbox}] ${t.description}`
  }).join('\n')
  content = content.replace('{{#each tasks}}...{{/each}}', taskList)

  // Replace statistics
  content = content.replace('{{total_tasks}}', state.develop.total_count)
  content = content.replace('{{completed_tasks}}', state.develop.completed_count)
  // ...

  return content
}
```

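Note that `String.prototype.replace` with a string pattern substitutes only the first occurrence, so a placeholder that appears more than once (such as `{{session_id}}` in headers and footers) needs a global regex. A small helper along these lines could handle that (a sketch, not part of the template's shipped code):

```javascript
// Replace every {{name}} placeholder in a template from a flat value map;
// placeholders with no matching value are left untouched.
function fillTemplate(template, values) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in values ? String(values[name]) : match)
}
```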
## Section Templates
|
||||
|
||||
### Task Entry
|
||||
|
||||
```markdown
|
||||
### Iteration {{N}} - {{task_name}} ({{timestamp}})
|
||||
|
||||
#### Task Details
|
||||
|
||||
- **ID**: {{task_id}}
|
||||
- **Tool**: {{tool}}
|
||||
- **Mode**: {{mode}}
|
||||
|
||||
#### Implementation Summary
|
||||
|
||||
{{summary}}
|
||||
|
||||
#### Files Changed
|
||||
|
||||
{{#each files}}
|
||||
- `{{this}}`
|
||||
{{/each}}
|
||||
|
||||
#### Status: COMPLETED
|
||||
|
||||
---
|
||||
```
|
||||
|
||||
### Statistics Table
|
||||
|
||||
```markdown
|
||||
## Current Statistics
|
||||
|
||||
| Metric | Value |
|
||||
|--------|-------|
|
||||
| Total Tasks | {{total}} |
|
||||
| Completed | {{completed}} |
|
||||
| In Progress | {{in_progress}} |
|
||||
| Pending | {{pending}} |
|
||||
| Progress | {{percentage}}% |
|
||||
```
|
||||
|
||||
### Next Steps
|
||||
|
||||
```markdown
|
||||
## Next Steps
|
||||
|
||||
{{#if all_completed}}
|
||||
- [ ] Run validation tests
|
||||
- [ ] Code review
|
||||
- [ ] Update documentation
|
||||
{{else}}
|
||||
- [ ] Complete remaining {{pending}} tasks
|
||||
- [ ] Review completed work
|
||||
{{/if}}
|
||||
```
|
||||

303
.claude/skills/ccw-loop/templates/understanding-template.md
Normal file
@@ -0,0 +1,303 @@

# Understanding Document Template

Standard template for the debugging understanding-evolution document.

## Template Structure

```markdown
# Understanding Document

**Session ID**: {{session_id}}
**Bug Description**: {{bug_description}}
**Started**: {{started_at}}

---

## Exploration Timeline

{{#each iterations}}
### Iteration {{number}} - {{title}} ({{timestamp}})

{{#if is_exploration}}
#### Current Understanding

Based on bug description and initial code search:

- Error pattern: {{error_pattern}}
- Affected areas: {{affected_areas}}
- Initial hypothesis: {{initial_thoughts}}

#### Evidence from Code Search

{{#each search_results}}
**Keyword: "{{keyword}}"**
- Found in: {{files}}
- Key findings: {{insights}}
{{/each}}
{{/if}}

{{#if has_hypotheses}}
#### Hypotheses Generated (Gemini-Assisted)

{{#each hypotheses}}
**{{id}}** (Likelihood: {{likelihood}}): {{description}}
- Logging at: {{logging_point}}
- Testing: {{testable_condition}}
- Evidence to confirm: {{confirm_criteria}}
- Evidence to reject: {{reject_criteria}}
{{/each}}

**Gemini Insights**: {{gemini_insights}}
{{/if}}

{{#if is_analysis}}
#### Log Analysis Results

{{#each results}}
**{{id}}**: {{verdict}}
- Evidence: {{evidence}}
- Reasoning: {{reason}}
{{/each}}

#### Corrected Understanding

Previous misunderstandings identified and corrected:

{{#each corrections}}
- ~~{{wrong}}~~ → {{corrected}}
- Why wrong: {{reason}}
- Evidence: {{evidence}}
{{/each}}

#### New Insights

{{#each insights}}
- {{this}}
{{/each}}

#### Gemini Analysis

{{gemini_analysis}}
{{/if}}

{{#if root_cause_found}}
#### Root Cause Identified

**{{hypothesis_id}}**: {{description}}

Evidence supporting this conclusion:
{{supporting_evidence}}
{{else}}
#### Next Steps

{{next_steps}}
{{/if}}

---
{{/each}}

## Current Consolidated Understanding

### What We Know

{{#each valid_understandings}}
- {{this}}
{{/each}}

### What Was Disproven

{{#each disproven}}
- ~~{{assumption}}~~ (Evidence: {{evidence}})
{{/each}}

### Current Investigation Focus

{{current_focus}}

### Remaining Questions

{{#each questions}}
- {{this}}
{{/each}}
```

## Template Variables

| Variable | Type | Source | Description |
|----------|------|--------|-------------|
| `session_id` | string | state.session_id | Session ID |
| `bug_description` | string | state.debug.current_bug | Bug description |
| `iterations` | array | Parsed from the file | Iteration history |
| `hypotheses` | array | state.debug.hypotheses | Hypothesis list |
| `valid_understandings` | array | From Gemini analysis | Validated understandings |
| `disproven` | array | From hypothesis status | Disproven hypotheses |

## Section Templates

### Exploration Section

```markdown
### Iteration {{N}} - Initial Exploration ({{timestamp}})

#### Current Understanding

Based on bug description and initial code search:

- Error pattern: {{pattern}}
- Affected areas: {{areas}}
- Initial hypothesis: {{thoughts}}

#### Evidence from Code Search

{{#each search_results}}
**Keyword: "{{keyword}}"**
- Found in: {{files}}
- Key findings: {{insights}}
{{/each}}

#### Next Steps

- Generate testable hypotheses
- Add instrumentation
- Await reproduction
```

### Hypothesis Section

```markdown
#### Hypotheses Generated (Gemini-Assisted)

| ID | Description | Likelihood | Status |
|----|-------------|------------|--------|
{{#each hypotheses}}
| {{id}} | {{description}} | {{likelihood}} | {{status}} |
{{/each}}

**Details:**

{{#each hypotheses}}
**{{id}}**: {{description}}
- Logging at: `{{logging_point}}`
- Testing: {{testable_condition}}
- Confirm: {{evidence_criteria.confirm}}
- Reject: {{evidence_criteria.reject}}
{{/each}}
```

### Analysis Section

```markdown
### Iteration {{N}} - Evidence Analysis ({{timestamp}})

#### Log Analysis Results

{{#each results}}
**{{id}}**: **{{verdict}}**
- Evidence: \`{{evidence}}\`
- Reasoning: {{reason}}
{{/each}}

#### Corrected Understanding

| Previous Assumption | Corrected To | Reason |
|---------------------|--------------|--------|
{{#each corrections}}
| ~~{{wrong}}~~ | {{corrected}} | {{reason}} |
{{/each}}

#### Gemini Analysis

{{gemini_analysis}}
```

### Consolidated Understanding Section

```markdown
## Current Consolidated Understanding

### What We Know

{{#each valid}}
- {{this}}
{{/each}}

### What Was Disproven

{{#each disproven}}
- ~~{{this.assumption}}~~ (Evidence: {{this.evidence}})
{{/each}}

### Current Investigation Focus

{{focus}}

### Remaining Questions

{{#each questions}}
- {{this}}
{{/each}}
```

### Resolution Section

```markdown
### Resolution ({{timestamp}})

#### Fix Applied

- Modified files: {{files}}
- Fix description: {{description}}
- Root cause addressed: {{root_cause}}

#### Verification Results

{{verification}}

#### Lessons Learned

{{#each lessons}}
{{@index}}. {{this}}
{{/each}}

#### Key Insights for Future

{{#each insights}}
- {{this}}
{{/each}}
```

## Consolidation Rules

Follow these rules when updating "Current Consolidated Understanding":

1. **Condense disproven items**: Move them to "What Was Disproven", keeping only a one-line summary
2. **Keep valid insights**: Promote confirmed findings to "What We Know"
3. **Avoid repetition**: Do not repeat timeline details in the consolidated section
4. **Focus on current state**: Describe what is known now, not the process
5. **Preserve key corrections**: Keep the important wrong→right transitions for learning

## Anti-Patterns

**Bad example (redundant)**:
```markdown
## Current Consolidated Understanding

In iteration 1 we thought X, but in iteration 2 we found Y, then in iteration 3...
Also we checked A and found B, and then we checked C...
```

**Good example (concise)**:
```markdown
## Current Consolidated Understanding

### What We Know
- Error occurs during runtime update, not initialization
- Config value is None (not missing key)

### What Was Disproven
- ~~Initialization error~~ (Timing evidence)
- ~~Missing key hypothesis~~ (Key exists)

### Current Investigation Focus
Why is config value None during update?
```

258
.claude/skills/ccw-loop/templates/validation-template.md
Normal file
@@ -0,0 +1,258 @@

# Validation Report Template

Standard template for the validation report.

## Template Structure

```markdown
# Validation Report

**Session ID**: {{session_id}}
**Task**: {{task_description}}
**Validated**: {{timestamp}}

---

## Iteration {{iteration}} - Validation Run

### Test Execution Summary

| Metric | Value |
|--------|-------|
| Total Tests | {{total_tests}} |
| Passed | {{passed_tests}} |
| Failed | {{failed_tests}} |
| Skipped | {{skipped_tests}} |
| Duration | {{duration}}ms |
| **Pass Rate** | **{{pass_rate}}%** |

### Coverage Report

{{#if has_coverage}}
| File | Statements | Branches | Functions | Lines |
|------|------------|----------|-----------|-------|
{{#each coverage_files}}
| {{path}} | {{statements}}% | {{branches}}% | {{functions}}% | {{lines}}% |
{{/each}}

**Overall Coverage**: {{overall_coverage}}%
{{else}}
_No coverage data available_
{{/if}}

### Failed Tests

{{#if has_failures}}
{{#each failures}}
#### {{test_name}}

- **Suite**: {{suite}}
- **Error**: {{error_message}}
- **Stack**:
\`\`\`
{{stack_trace}}
\`\`\`
{{/each}}
{{else}}
_All tests passed_
{{/if}}

### Gemini Quality Analysis

{{gemini_analysis}}

### Recommendations

{{#each recommendations}}
- {{this}}
{{/each}}

---

## Validation Decision

**Result**: {{#if passed}}✅ PASS{{else}}❌ FAIL{{/if}}

**Rationale**: {{rationale}}

{{#if not_passed}}
### Next Actions

1. Review failed tests
2. Debug failures using action-debug-with-file
3. Fix issues and re-run validation
{{else}}
### Next Actions

1. Consider code review
2. Prepare for deployment
3. Update documentation
{{/if}}
```

## Template Variables

| Variable | Type | Source | Description |
|----------|------|--------|-------------|
| `session_id` | string | state.session_id | Session ID |
| `task_description` | string | state.task_description | Task description |
| `timestamp` | string | Current time | Validation time |
| `iteration` | number | Computed from the file | Validation iteration count |
| `total_tests` | number | Test output | Total test count |
| `passed_tests` | number | Test output | Passed count |
| `failed_tests` | number | Test output | Failed count |
| `pass_rate` | number | Computed | Pass rate |
| `coverage_files` | array | Coverage report | Per-file coverage |
| `failures` | array | Test output | Failed test details |
| `gemini_analysis` | string | Gemini CLI | Quality analysis |
| `recommendations` | array | Gemini CLI | Recommendation list |

## Section Templates

### Test Summary

```markdown
### Test Execution Summary

| Metric | Value |
|--------|-------|
| Total Tests | {{total}} |
| Passed | {{passed}} |
| Failed | {{failed}} |
| Skipped | {{skipped}} |
| Duration | {{duration}}ms |
| **Pass Rate** | **{{rate}}%** |
```

### Coverage Table

```markdown
### Coverage Report

| File | Statements | Branches | Functions | Lines |
|------|------------|----------|-----------|-------|
{{#each files}}
| `{{path}}` | {{statements}}% | {{branches}}% | {{functions}}% | {{lines}}% |
{{/each}}

**Overall Coverage**: {{overall}}%

**Coverage Thresholds**:
- ✅ Good: ≥ 80%
- ⚠️ Warning: 60-79%
- ❌ Poor: < 60%
```

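The thresholds in the coverage table map directly to a status function. A minimal sketch (the function name is illustrative):

```javascript
// Map an overall coverage percentage to the threshold labels above.
function coverageStatus(percent) {
  if (percent >= 80) return '✅ Good'
  if (percent >= 60) return '⚠️ Warning'
  return '❌ Poor'
}
```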

### Failed Test Details

```markdown
### Failed Tests

{{#each failures}}
#### ❌ {{test_name}}

| Field | Value |
|-------|-------|
| Suite | {{suite}} |
| Error | {{error_message}} |
| Duration | {{duration}}ms |

**Stack Trace**:
\`\`\`
{{stack_trace}}
\`\`\`

**Possible Causes**:
{{#each possible_causes}}
- {{this}}
{{/each}}

---
{{/each}}
```

### Quality Analysis

```markdown
### Gemini Quality Analysis

#### Code Quality Assessment

| Dimension | Score | Status |
|-----------|-------|--------|
| Correctness | {{correctness}}/10 | {{correctness_status}} |
| Completeness | {{completeness}}/10 | {{completeness_status}} |
| Reliability | {{reliability}}/10 | {{reliability_status}} |
| Maintainability | {{maintainability}}/10 | {{maintainability_status}} |

#### Key Findings

{{#each findings}}
- **{{severity}}**: {{description}}
{{/each}}

#### Recommendations

{{#each recommendations}}
{{@index}}. {{this}}
{{/each}}
```

### Decision Section

```markdown
## Validation Decision

**Result**: {{#if passed}}✅ PASS{{else}}❌ FAIL{{/if}}

**Rationale**:
{{rationale}}

**Confidence Level**: {{confidence}}

### Decision Matrix

| Criteria | Status | Weight | Score |
|----------|--------|--------|-------|
| All tests pass | {{tests_pass}} | 40% | {{tests_score}} |
| Coverage ≥ 80% | {{coverage_pass}} | 30% | {{coverage_score}} |
| No critical issues | {{no_critical}} | 20% | {{critical_score}} |
| Quality analysis pass | {{quality_pass}} | 10% | {{quality_score}} |
| **Total** | | 100% | **{{total_score}}** |

**Threshold**: 70% to pass

### Next Actions

{{#if passed}}
1. ✅ Code review (recommended)
2. ✅ Update documentation
3. ✅ Prepare for deployment
{{else}}
1. ❌ Review failed tests
2. ❌ Debug failures
3. ❌ Fix issues and re-run
{{/if}}
```

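The decision matrix in the template above is a weighted sum checked against the 70% threshold. One way that score could be computed (a sketch; the criterion keys are illustrative names, not fields the skill defines):

```javascript
// Weighted pass/fail decision mirroring the matrix weights above.
const WEIGHTS = { tests_pass: 0.4, coverage_ok: 0.3, no_critical: 0.2, quality_ok: 0.1 }

function validationDecision(criteria) {
  // Sum the weight of every criterion that holds, then compare to 70%.
  const score = Object.entries(WEIGHTS)
    .reduce((sum, [name, weight]) => sum + (criteria[name] ? weight : 0), 0)
  return { score: Math.round(score * 100), passed: score >= 0.7 }
}
```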
## Historical Comparison

```markdown
## Validation History

| Iteration | Date | Pass Rate | Coverage | Status |
|-----------|------|-----------|----------|--------|
{{#each history}}
| {{iteration}} | {{date}} | {{pass_rate}}% | {{coverage}}% | {{status}} |
{{/each}}

### Trend Analysis

{{#if improving}}
📈 **Improving**: Pass rate increased from {{previous_rate}}% to {{current_rate}}%
{{else if declining}}
📉 **Declining**: Pass rate decreased from {{previous_rate}}% to {{current_rate}}%
{{else}}
➡️ **Stable**: Pass rate remains at {{current_rate}}%
{{/if}}
```

@@ -1,462 +1,522 @@
|
||||
---
|
||||
name: ccw
|
||||
description: Stateless workflow orchestrator that automatically selects and executes the optimal workflow combination based on task intent. Supports rapid (lite-plan+execute), full (brainstorm+plan+execute), coupled (plan+execute), bugfix (lite-fix), and issue (multi-point fixes) workflows. Triggers on "ccw", "workflow", "自动工作流", "智能调度".
|
||||
allowed-tools: Task(*), SlashCommand(*), AskUserQuestion(*), Read(*), Bash(*), Grep(*)
|
||||
description: Stateless workflow orchestrator. Auto-selects optimal workflow based on task intent. Triggers "ccw", "workflow".
|
||||
allowed-tools: Task(*), SlashCommand(*), AskUserQuestion(*), Read(*), Bash(*), Grep(*), TodoWrite(*)
|
||||
---
|
||||
|
||||
# CCW - Claude Code Workflow Orchestrator
|
||||
|
||||
无状态工作流协调器,根据任务意图自动选择并执行最优工作流组合。
|
||||
无状态工作流协调器,根据任务意图自动选择最优工作流。
|
||||
|
||||
## Architecture Overview
|
||||
## Workflow System Overview
|
||||
|
||||
CCW 提供两个工作流系统:**Main Workflow** 和 **Issue Workflow**,协同覆盖完整的软件开发生命周期。
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────────────────┐
|
||||
│ Main Workflow │
|
||||
│ │
|
||||
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
|
||||
│ │ Level 1 │ → │ Level 2 │ → │ Level 3 │ → │ Level 4 │ │
|
||||
│ │ Rapid │ │ Lightweight │ │ Standard │ │ Brainstorm │ │
|
||||
│ │ │ │ │ │ │ │ │ │
|
||||
│ │ lite-lite- │ │ lite-plan │ │ plan │ │ brainstorm │ │
|
||||
│ │ lite │ │ lite-fix │ │ tdd-plan │ │ :auto- │ │
|
||||
│ │ │ │ multi-cli- │ │ test-fix- │ │ parallel │ │
|
||||
│ │ │ │ plan │ │ gen │ │ ↓ │ │
|
||||
│ │ │ │ │ │ │ │ plan │ │
|
||||
│ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘ │
|
||||
│ │
|
||||
│ Complexity: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━▶ │
|
||||
│ Low High │
|
||||
└─────────────────────────────────────────────────────────────────────────────┘
|
||||
│
|
||||
│ After development
|
||||
▼
|
||||
┌─────────────────────────────────────────────────────────────────────────────┐
|
||||
│ Issue Workflow │
|
||||
│ │
|
||||
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
|
||||
│ │ Accumulate │ → │ Plan │ → │ Execute │ │
|
||||
│ │ Discover & │ │ Batch │ │ Parallel │ │
|
||||
│ │ Collect │ │ Planning │ │ Execution │ │
|
||||
│ └──────────────┘ └──────────────┘ └──────────────┘ │
|
||||
│ │
|
||||
│ Supplementary role: Maintain main branch stability, worktree isolation │
|
||||
└─────────────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│ CCW Orchestrator (CLI-Enhanced + Requirement Analysis) │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Input Analysis │
│ ├─ Intent Classification (bugfix/feature/refactor/issue/...) │
│ ├─ Complexity Assessment (low/medium/high) │
│ ├─ Context Detection (codebase familiarity needed?) │
│ └─ Constraint Extraction (time/scope/quality) │
│ │
│ Workflow Selection (Decision Tree) │
│ ├─ 🐛 Bug? → lite-fix / lite-fix --hotfix │
│ ├─ ❓ Unclear? → brainstorm → plan → execute │
│ ├─ ⚡ Simple? → lite-plan → lite-execute │
│ ├─ 🔧 Complex? → plan → execute │
│ ├─ 📋 Issue? → issue:plan → issue:queue → issue:execute │
│ └─ 🎨 UI? → ui-design → plan → execute │
│ │
│ Execution Dispatch │
│ └─ SlashCommand("/workflow:xxx") or Task(agent) │
│ │
│ Phase 1 │ Input Analysis (rule-based, fast path) │
│ Phase 1.5 │ CLI Classification (semantic, smart path) │
│ Phase 1.75 │ Requirement Clarification (clarity < 2) │
│ Phase 2 │ Level Selection (intent → level → workflow) │
│ Phase 2.5 │ CLI Action Planning (high complexity) │
│ Phase 3 │ User Confirmation (optional) │
│ Phase 4 │ TODO Tracking Setup │
│ Phase 5 │ Execution Loop │
└─────────────────────────────────────────────────────────────────┘
```

## Level Quick Reference

| Level | Name | Workflows | Artifacts | Execution |
|-------|------|-----------|-----------|-----------|
| **1** | Rapid | `lite-lite-lite` | None | Direct execute |
| **2** | Lightweight | `lite-plan`, `lite-fix`, `multi-cli-plan` | Memory/Lightweight files | → `lite-execute` |
| **3** | Standard | `plan`, `tdd-plan`, `test-fix-gen` | Session persistence | → `execute` / `test-cycle-execute` |
| **4** | Brainstorm | `brainstorm:auto-parallel` → `plan` | Multi-role analysis + Session | → `execute` |
| **-** | Issue | `discover` → `plan` → `queue` → `execute` | Issue records | Worktree isolation (optional) |

## Workflow Combinations

### 1. Rapid ⚡

**Pattern**: Multi-model collaborative analysis + direct execution

**Commands**: `/workflow:lite-plan` → `/workflow:lite-execute`

**When to use**:

- You know exactly what to build and how to build it
- A single feature or a small change
- Rapid prototype validation

### 2. Full 📋

**Pattern**: Analysis + brainstorming + planning + execution

**Commands**: `/workflow:brainstorm:auto-parallel` → `/workflow:plan` → `/workflow:execute`

**When to use**:

- Product direction or technical approach is uncertain
- Multi-role perspective analysis is needed
- Complex new feature development

### 3. Coupled 🔗

**Pattern**: Full planning + verification + execution

**Commands**: `/workflow:plan` → `/workflow:action-plan-verify` → `/workflow:execute`

**When to use**:

- Cross-module dependencies
- Architecture-level changes
- Team collaboration projects

### 4. Bugfix 🐛

**Pattern**: Intelligent diagnosis + fix

**Commands**: `/workflow:lite-fix` or `/workflow:lite-fix --hotfix`

**When to use**:

- Any bug with clear symptoms
- Urgent production incident fixes
- Root cause is unclear and diagnosis is needed

### 5. Issue 📌

**Pattern**: Issue planning + queue + batch execution

**Commands**: `/issue:plan` → `/issue:queue` → `/issue:execute`

**When to use**:

- Multiple related problems to process as a batch
- Fix tasks spanning a long time horizon
- Prioritization and conflict resolution are needed

### 6. UI-First 🎨

**Pattern**: UI design + planning + execution

**Commands**: `/workflow:ui-design:*` → `/workflow:plan` → `/workflow:execute`

**When to use**:

- Frontend feature development
- Visual references are needed
- Design system integration

## Workflow Selection Decision Tree

```
Start
│
├─ Is it post-development maintenance?
│   ├─ Yes → Issue Workflow
│   └─ No ↓
│
├─ Are requirements clear?
│   ├─ Uncertain → Level 4 (brainstorm:auto-parallel)
│   └─ Clear ↓
│
├─ Need persistent Session?
│   ├─ Yes → Level 3 (plan / tdd-plan / test-fix-gen)
│   └─ No ↓
│
├─ Need multi-perspective / solution comparison?
│   ├─ Yes → Level 2 (multi-cli-plan)
│   └─ No ↓
│
├─ Is it a bug fix?
│   ├─ Yes → Level 2 (lite-fix)
│   └─ No ↓
│
├─ Need planning?
│   ├─ Yes → Level 2 (lite-plan)
│   └─ No → Level 1 (lite-lite-lite)
```

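The decision tree above can be flattened into a small selector function. This is an illustrative sketch, not CCW's actual implementation; the boolean fields on `q` are assumptions about what upstream analysis provides:

```javascript
// Sketch: the workflow-selection decision tree as a straight-line function.
// The input flags are hypothetical outputs of the input-analysis phase.
function selectLevel(q) {
  if (q.postDevMaintenance) return 'Issue Workflow'
  if (!q.requirementsClear) return 'L4: brainstorm:auto-parallel'
  if (q.needsSession) return 'L3: plan / tdd-plan / test-fix-gen'
  if (q.needsMultiPerspective) return 'L2: multi-cli-plan'
  if (q.isBugFix) return 'L2: lite-fix'
  if (q.needsPlanning) return 'L2: lite-plan'
  return 'L1: lite-lite-lite'
}

console.log(selectLevel({ requirementsClear: true, isBugFix: true }))
// → L2: lite-fix
```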
## Intent Classification

```javascript
function classifyIntent(input) {
  const text = input.toLowerCase()

  // Priority 1: Bug keywords
  if (/\b(fix|bug|error|issue|crash|broken|fail|wrong|incorrect)\b/.test(text)) {
    if (/\b(hotfix|urgent|production|critical|emergency)\b/.test(text)) {
      return { type: 'bugfix', mode: 'hotfix', workflow: 'lite-fix --hotfix' }
    }
    return { type: 'bugfix', mode: 'standard', workflow: 'lite-fix' }
  }

  // Priority 2: Issue batch keywords
  if (/\b(issues?|batch|queue|多个|批量)\b/.test(text) && /\b(fix|resolve|处理)\b/.test(text)) {
    return { type: 'issue', workflow: 'issue:plan → issue:queue → issue:execute' }
  }

  // Priority 3: Uncertainty keywords → Full workflow
  if (/\b(不确定|不知道|explore|研究|分析一下|怎么做|what if|should i|探索)\b/.test(text)) {
    return { type: 'exploration', workflow: 'brainstorm → plan → execute' }
  }

  // Priority 4: UI/Design keywords
  if (/\b(ui|界面|design|设计|component|组件|style|样式|layout|布局)\b/.test(text)) {
    return { type: 'ui', workflow: 'ui-design → plan → execute' }
  }

  // Priority 5: Complexity assessment for the remaining cases
  const complexity = assessComplexity(text)

  if (complexity === 'high') {
    return { type: 'feature', complexity: 'high', workflow: 'plan → verify → execute' }
  }

  if (complexity === 'medium') {
    return { type: 'feature', complexity: 'medium', workflow: 'lite-plan → lite-execute' }
  }

  return { type: 'feature', complexity: 'low', workflow: 'lite-plan → lite-execute' }
}
```

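A trimmed, runnable sketch of the classifier can be sanity-checked standalone. Only the bug branch and the complexity fallback are reproduced here, with `assessComplexity` inlined from the keyword scorer described later in this document:

```javascript
// Trimmed sketch of classifyIntent for sanity-checking (English keywords only).
function assessComplexity(text) {
  let score = 0
  if (/refactor|migrate|architect|system/.test(text)) score += 2
  if (/multiple|across|all|entire/.test(text)) score += 2
  if (/integrate|api|database/.test(text)) score += 1
  if (/security|performance|scale/.test(text)) score += 1
  return score >= 4 ? 'high' : score >= 2 ? 'medium' : 'low'
}

function classifyIntent(input) {
  const text = input.toLowerCase()
  if (/\b(fix|bug|error|issue|crash|broken|fail|wrong|incorrect)\b/.test(text)) {
    if (/\b(hotfix|urgent|production|critical|emergency)\b/.test(text)) {
      return { type: 'bugfix', mode: 'hotfix', workflow: 'lite-fix --hotfix' }
    }
    return { type: 'bugfix', mode: 'standard', workflow: 'lite-fix' }
  }
  const complexity = assessComplexity(text)
  const workflow = complexity === 'high' ? 'plan → verify → execute' : 'lite-plan → lite-execute'
  return { type: 'feature', complexity, workflow }
}

console.log(classifyIntent('urgent production bug: login crash'))
// → { type: 'bugfix', mode: 'hotfix', workflow: 'lite-fix --hotfix' }
console.log(classifyIntent('refactor the entire payment system to integrate a new api'))
// → { type: 'feature', complexity: 'high', workflow: 'plan → verify → execute' }
```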
### Priority Order (with Level Mapping)

| Priority | Intent | Patterns | Level | Flow |
|----------|--------|----------|-------|------|
| 1 | bugfix/hotfix | `urgent,production,critical` + bug | L2 | `bugfix.hotfix` |
| 1 | bugfix | `fix,bug,error,crash,fail` | L2 | `bugfix.standard` |
| 2 | issue batch | `issues,batch` + `fix,resolve` | Issue | `issue` |
| 3 | exploration | `不确定,explore,研究,what if` | L4 | `full` |
| 3 | multi-perspective | `多视角,权衡,比较方案,cross-verify` | L2 | `multi-cli-plan` |
| 4 | quick-task | `快速,简单,small,quick` + feature | L1 | `lite-lite-lite` |
| 5 | ui design | `ui,design,component,style` | L3/L4 | `ui` |
| 6 | tdd | `tdd,test-driven,先写测试` | L3 | `tdd` |
| 7 | test-fix | `测试失败,test fail,fix test` | L3 | `test-fix-gen` |
| 8 | review | `review,审查,code review` | L3 | `review-fix` |
| 9 | documentation | `文档,docs,readme` | L2 | `docs` |
| 99 | feature | complexity-based | L2/L3 | `rapid`/`coupled` |

### Quick Selection Guide

| Scenario | Recommended Workflow | Level |
|----------|---------------------|-------|
| Quick fixes, config adjustments | `lite-lite-lite` | 1 |
| Clear single-module features | `lite-plan → lite-execute` | 2 |
| Bug diagnosis and fix | `lite-fix` | 2 |
| Production emergencies | `lite-fix --hotfix` | 2 |
| Technology selection, solution comparison | `multi-cli-plan → lite-execute` | 2 |
| Multi-module changes, refactoring | `plan → verify → execute` | 3 |
| Test-driven development | `tdd-plan → execute → tdd-verify` | 3 |
| Test failure fixes | `test-fix-gen → test-cycle-execute` | 3 |
| New features, architecture design | `brainstorm:auto-parallel → plan → execute` | 4 |
| Post-development issue fixes | Issue Workflow | - |

### Complexity Assessment

```javascript
function assessComplexity(text) {
  let score = 0

  // Architecture keywords
  if (/refactor|重构|migrate|迁移|architect|架构|system|系统/.test(text)) score += 2

  // Multi-module keywords
  if (/multiple|多个|across|跨|all|所有|entire|整个/.test(text)) score += 2

  // Integration keywords
  if (/integrate|集成|connect|连接|api|database|数据库/.test(text)) score += 1

  // Security/Performance keywords
  if (/security|安全|performance|性能|scale|扩展/.test(text)) score += 1

  if (score >= 4) return 'high'
  if (score >= 2) return 'medium'
  return 'low'
}
```

| Complexity | Flow |
|------------|------|
| high | `coupled` (plan → verify → execute) |
| medium/low | `rapid` (lite-plan → lite-execute) |

### Dimension Extraction (WHAT/WHERE/WHY/HOW)

Four dimensions are extracted from user input to drive requirement clarification and workflow selection:

| Dimension | Extracted content | Example patterns |
|-----------|-------------------|------------------|
| **WHAT** | action + target | `create/fix/refactor/optimize/analyze` + target object |
| **WHERE** | scope + paths | `file/module/system` + file paths |
| **WHY** | goal + motivation | "in order to... / because... / the goal is..." |
| **HOW** | constraints + preferences | "must... / don't... / should..." |

**Clarity Score** (0-3):

- +0.5: explicit action present
- +0.5: concrete target present
- +0.5: file paths present
- +0.5: scope is not unknown
- +0.5: explicit goal present
- +0.5: constraints present
- -0.5: contains uncertainty words (`不知道/maybe/怎么`)

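The scoring rules above can be sketched as a small function. The shape of the `dim` object (fields like `action`, `paths`, `uncertain`) is an assumption for illustration, not CCW's actual data model:

```javascript
// Sketch of the clarity scoring described above; field names are hypothetical.
function clarityScore(dim) {
  let score = 0
  if (dim.action) score += 0.5                              // explicit action
  if (dim.target) score += 0.5                              // concrete target
  if (dim.paths && dim.paths.length > 0) score += 0.5       // file paths
  if (dim.scope && dim.scope !== 'unknown') score += 0.5    // known scope
  if (dim.goal) score += 0.5                                // explicit goal
  if (dim.constraints && dim.constraints.length > 0) score += 0.5
  if (dim.uncertain) score -= 0.5                           // uncertainty words present
  return Math.max(0, Math.min(3, score))                    // clamp to [0, 3]
}

console.log(clarityScore({
  action: 'fix', target: 'login API', paths: ['src/auth.ts'],
  scope: 'module', goal: null, constraints: [], uncertain: false
}))
// → 2
```

A score below 2 would trigger the requirement-clarification step described next.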
### Requirement Clarification

Requirement clarification is triggered when `clarity_score < 2`:

```javascript
if (dimensions.clarity_score < 2) {
  // Generate questions: What is the goal? What is the scope? What constraints apply?
  const questions = generateClarificationQuestions(dimensions)
  AskUserQuestion({ questions })
}
```

**Clarification question types**:

- Unclear target → "What do you want to operate on?"
- Unclear scope → "What is the scope of the operation?"
- Unclear goal → "What is the main objective of this operation?"
- Complex operation → "Are there any special requirements or constraints?"

## TODO Tracking Protocol

### CRITICAL: Append-Only Rule

Todos created by CCW **must be appended to the existing list**; they must never overwrite the user's other todos.

### Implementation

```javascript
// 1. Isolate workflow todos with a CCW prefix
const prefix = `CCW:${flowName}`

// 2. Use the prefix format when creating new todos
TodoWrite({
  todos: [
    ...existingNonCCWTodos, // preserve the user's own todos
    { content: `${prefix}: [1/N] /command:step1`, status: "in_progress", activeForm: "..." },
    { content: `${prefix}: [2/N] /command:step2`, status: "pending", activeForm: "..." }
  ]
})

// 3. When updating status, modify only todos matching the prefix
```

### Todo Format

```
CCW:{flow}: [{N}/{Total}] /command:name
```

### Visual Example

```
✓ CCW:rapid: [1/2] /workflow:lite-plan
→ CCW:rapid: [2/2] /workflow:lite-execute
  The user's own todos (left untouched)
```

### Status Management

- Workflow start: create todos for all steps, first step `in_progress`
- Step completion: current step `completed`, next step `in_progress`
- Workflow end: all CCW todos marked `completed`

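The append-only rule can be modeled as a pure list merge. This is a sketch; `mergeTodos` is a hypothetical helper (the real mechanism is the TodoWrite tool call shown above), and it assumes stale CCW entries are replaced while user entries survive:

```javascript
// Sketch: merge CCW workflow todos into an existing list without touching
// the user's own entries. Previous CCW entries are replaced by the new flow.
function mergeTodos(existing, flowName, steps) {
  const prefix = `CCW:${flowName}`
  const userTodos = existing.filter(t => !t.content.startsWith('CCW:'))
  const ccwTodos = steps.map((cmd, i) => ({
    content: `${prefix}: [${i + 1}/${steps.length}] ${cmd}`,
    status: i === 0 ? 'in_progress' : 'pending'
  }))
  return [...userTodos, ...ccwTodos]
}

const merged = mergeTodos(
  [{ content: 'Buy milk', status: 'pending' }],
  'rapid',
  ['/workflow:lite-plan', '/workflow:lite-execute']
)
console.log(merged.map(t => t.content))
// → [ 'Buy milk',
//     'CCW:rapid: [1/2] /workflow:lite-plan',
//     'CCW:rapid: [2/2] /workflow:lite-execute' ]
```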
## Execution Flow

```javascript
// 1. Check explicit command
if (input.startsWith('/workflow:') || input.startsWith('/issue:')) {
  // User explicitly requested a workflow, pass through
  SlashCommand(input)
  return
}

// 2. Classify intent
const intent = classifyIntent(input) // See command.json intent_rules

// 3. Select flow
const flow = selectFlow(intent) // See command.json flows

// 4. Create todos with CCW prefix
createWorkflowTodos(flow)

// 5. Dispatch first command
SlashCommand(flow.steps[0].command, args: input)
```

### User Confirmation (Optional)

```javascript
// For high-complexity or ambiguous intents, confirm with user
if (intent.complexity === 'high' || intent.type === 'exploration') {
  const confirmation = AskUserQuestion({
    questions: [{
      question: `Recommended: ${intent.workflow}. Proceed?`,
      header: "Workflow",
      multiSelect: false,
      options: [
        { label: `${intent.workflow} (Recommended)`, description: "Use recommended workflow" },
        { label: "Rapid (lite-plan)", description: "Quick iteration" },
        { label: "Full (brainstorm+plan)", description: "Complete exploration" },
        { label: "Manual", description: "I'll specify the commands" }
      ]
    }]
  })

  // Adjust workflow based on user selection
  intent.workflow = mapSelectionToWorkflow(confirmation)
}
```

### Workflow Dispatch

```javascript
switch (intent.workflow) {
  case 'lite-fix':
    SlashCommand('/workflow:lite-fix', args: input)
    break

  case 'lite-fix --hotfix':
    SlashCommand('/workflow:lite-fix --hotfix', args: input)
    break

  case 'lite-plan → lite-execute':
    SlashCommand('/workflow:lite-plan', args: input)
    // lite-plan will automatically dispatch to lite-execute
    break

  case 'plan → verify → execute':
    SlashCommand('/workflow:plan', args: input)
    // After plan, prompt for verify and execute
    break

  case 'brainstorm → plan → execute':
    SlashCommand('/workflow:brainstorm:auto-parallel', args: input)
    // After brainstorm, continue with plan
    break

  case 'issue:plan → issue:queue → issue:execute':
    SlashCommand('/issue:plan', args: input)
    // Issue workflow handles queue and execute
    break

  case 'ui-design → plan → execute':
    // Determine UI design subcommand
    if (hasReference(input)) {
      SlashCommand('/workflow:ui-design:imitate-auto', args: input)
    } else {
      SlashCommand('/workflow:ui-design:explore-auto', args: input)
    }
    break
}
```

## CLI Tool Integration

CCW **implicitly invokes** CLI tools, automatically injecting CLI calls under specific conditions:

| Condition | CLI injected |
|-----------|--------------|
| Large code context (≥50k chars) | `gemini --mode analysis` |
| High-complexity task | `gemini --mode analysis` |
| Bug diagnosis | `gemini --mode analysis` |
| Multi-task execution (≥3 tasks) | `codex --mode write` |

### 1. Context Efficiency

CLI tools run in separate processes, so they can digest large amounts of code context without consuming main-session tokens:

| Scenario | Trigger | Auto-injected |
|----------|---------|---------------|
| Large code context | file reads ≥ 50k chars | `gemini --mode analysis` |
| Multi-module analysis | ≥ 5 modules involved | `gemini --mode analysis` |
| Code review | review step | `gemini --mode analysis` |

### 2. Multi-Model Perspectives

Different models have different strengths; CCW auto-selects by task type:

| Tool | Core strengths | Best scenarios | Trigger keywords |
|------|----------------|----------------|------------------|
| Gemini | Very long context, deep analysis, architecture understanding, execution-flow tracing | Codebase understanding, architecture assessment, root-cause analysis | "analyze", "understand", "design", "architecture", "diagnose" |
| Qwen | Code pattern recognition, multi-dimensional analysis | Gemini fallback, second-perspective verification | "evaluate", "compare", "verify" |
| Codex | Precise code generation, autonomous execution, mathematical reasoning | Feature implementation, refactoring, testing | "implement", "refactor", "fix", "generate", "test" |

### 3. Enhanced Capabilities

#### Debug Enhancement

```
Trigger:     intent === 'bugfix' AND root_cause_unclear
Auto-inject: gemini --mode analysis (execution-flow tracing)
Use:         hypothesis-driven debugging, state-machine error diagnosis, concurrency troubleshooting
```

#### Planning Enhancement

```
Trigger:     complexity === 'high' OR intent === 'exploration'
Auto-inject: gemini --mode analysis (architecture analysis)
Use:         run CLI analysis first on complex tasks to gather multi-model perspectives
```

### CLI Enhancement Phases

**Phase 1.5: CLI-Assisted Classification** — used when rule matching is ambiguous:

| Trigger | Description |
|---------|-------------|
| matchCount < 2 | Intent pattern matching is ambiguous |
| complexity = high | High-complexity task |
| input > 100 chars | Long input requiring semantic understanding |

**Phase 2.5: CLI-Assisted Action Planning** — workflow optimization for high-complexity tasks:

| Trigger | Description |
|---------|-------------|
| complexity = high | High-complexity task |
| steps >= 3 | Multi-step workflow |
| input > 200 chars | Complex requirement description |

The CLI may return a recommendation: `use_default` | `modify` (adjust steps) | `upgrade` (upgrade the workflow)

### Implicit Injection Rules

CCW auto-injects CLI calls under the following conditions (no explicit user request required):

```javascript
const implicitRules = {
  // Context gathering: offload large code reads to a CLI to save main-session tokens
  context_gathering: {
    trigger: 'file_read >= 50k chars OR module_count >= 5',
    inject: 'gemini --mode analysis'
  },

  // Pre-planning analysis: analyze complex tasks with a CLI first
  pre_planning_analysis: {
    trigger: 'complexity === "high" OR intent === "exploration"',
    inject: 'gemini --mode analysis'
  },

  // Debug diagnosis: leverage Gemini's execution-flow tracing
  debug_diagnosis: {
    trigger: 'intent === "bugfix" AND root_cause_unclear',
    inject: 'gemini --mode analysis'
  },

  // Code review: use a CLI to reduce token usage
  code_review: {
    trigger: 'step === "review"',
    inject: 'gemini --mode analysis'
  },

  // Multi-task execution: let Codex complete tasks autonomously
  implementation: {
    trigger: 'step === "execute" AND task_count >= 3',
    inject: 'codex --mode write'
  }
}
```

### Semantic Tool Assignment

```javascript
// Users can state a tool preference in natural language
const toolHints = {
  gemini: /用\s*gemini|gemini\s*分析|让\s*gemini|深度分析|架构理解/i,
  qwen: /用\s*qwen|qwen\s*评估|让\s*qwen|第二视角/i,
  codex: /用\s*codex|codex\s*实现|让\s*codex|自主完成|批量修改/i
}

function detectToolPreference(input) {
  for (const [tool, pattern] of Object.entries(toolHints)) {
    if (pattern.test(input)) return tool
  }
  return null // Auto-select based on task type
}
```

### Standalone CLI Workflows

Invoke a CLI directly for specific tasks:

| Workflow | Command | Use |
|----------|---------|-----|
| CLI Analysis | `ccw cli --tool gemini` | Rapid understanding of large codebases, architecture assessment |
| CLI Implement | `ccw cli --tool codex` | Autonomous implementation of well-specified requirements |
| CLI Debug | `ccw cli --tool gemini` | Root-cause analysis of complex bugs, execution-flow tracing |

## Index Files (Dynamic Coordination)

CCW uses index files for intelligent command coordination:

| Index | Purpose |
|-------|---------|
| [index/command-capabilities.json](index/command-capabilities.json) | Command capability categories (explore, plan, execute, test, review...) |
| [index/workflow-chains.json](index/workflow-chains.json) | Predefined workflow chains (rapid, full, coupled, bugfix, issue, tdd, ui...) |

### Capability Categories

```
capabilities:
├── explore     - code exploration, context gathering
├── brainstorm  - multi-role analysis, solution exploration
├── plan        - task planning, decomposition
├── verify      - plan verification, quality checks
├── execute     - task execution, code implementation
├── bugfix      - bug diagnosis, fixing
├── test        - test generation, execution
├── review      - code review, quality analysis
├── issue       - batch issue management
├── ui-design   - UI design, prototyping
├── memory      - documentation, knowledge management
├── session     - session management
└── debug       - debugging, troubleshooting
```

## TODO Tracking Integration

CCW automatically tracks workflow progress with TodoWrite:

```javascript
// A TODO list is created automatically when a workflow starts
TodoWrite({
  todos: [
    { content: "CCW: Rapid Iteration (2 steps)", status: "in_progress", activeForm: "Running workflow" },
    { content: "[1/2] /workflow:lite-plan", status: "in_progress", activeForm: "Executing lite-plan" },
    { content: "[2/2] /workflow:lite-execute", status: "pending", activeForm: "Executing lite-execute" }
  ]
})

// Status is updated automatically after each step completes
// Pause, continue, and skip operations are supported
```

**Progress visualization**:

```
✓ CCW: Rapid Iteration (2 steps)
  ✓ [1/2] /workflow:lite-plan
  → [2/2] /workflow:lite-execute
```

## Continuation Commands

User control commands during workflow execution:

| Input | Action |
|-------|--------|
| `continue` | Execute the next step |
| `skip` | Skip the current step |
| `abort` | Stop the workflow |
| `/workflow:*` | Switch to the specified command |
| natural language | Re-analyze intent |

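The control-command table above can be sketched as a tiny dispatcher. The returned action names are illustrative assumptions, not CCW's internal identifiers:

```javascript
// Sketch of continuation-command handling; action strings are hypothetical.
function handleContinuation(input) {
  if (input === 'continue') return 'run_next_step'
  if (input === 'skip') return 'skip_current_step'
  if (input === 'abort') return 'stop_workflow'
  if (input.startsWith('/workflow:')) return `dispatch:${input}` // switch to the named command
  return 'reanalyze_intent' // anything else is treated as natural language
}

console.log(handleContinuation('/workflow:plan'))
// → dispatch:/workflow:plan
```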
## Reference Documents

| Document | Purpose |
|----------|---------|
| [phases/orchestrator.md](phases/orchestrator.md) | Orchestrator decision logic + TODO tracking |
| [phases/actions/rapid.md](phases/actions/rapid.md) | Rapid iteration combination |
| [phases/actions/full.md](phases/actions/full.md) | Full-process combination |
| [phases/actions/coupled.md](phases/actions/coupled.md) | Complex-coupling combination |
| [phases/actions/bugfix.md](phases/actions/bugfix.md) | Bugfix combination |
| [phases/actions/issue.md](phases/actions/issue.md) | Issue workflow combination |
| [specs/intent-classification.md](specs/intent-classification.md) | Intent classification spec |
| [WORKFLOW_DECISION_GUIDE.md](/WORKFLOW_DECISION_GUIDE.md) | Workflow decision guide |

## Workflow Flow Details

### Issue Workflow (a supplement to the Main Workflow)

The Issue Workflow is a **supplementary mechanism** to the Main Workflow, focused on post-development, ongoing maintenance.

#### Design Philosophy

| Aspect | Main Workflow | Issue Workflow |
|--------|---------------|----------------|
| **Purpose** | Primary development cycle | Post-development maintenance |
| **Timing** | Feature development phase | After the main workflow completes |
| **Scope** | Full feature implementation | Targeted fixes/enhancements |
| **Parallelism** | Dependency analysis → parallel agents | Worktree isolation (optional) |
| **Branch model** | Works on the current branch | May use isolated worktrees |

#### Why Doesn't the Main Workflow Use Worktrees Automatically?

**Dependency analysis already solves the parallelism problem**:

1. The planning phase (`/workflow:plan`) performs dependency analysis
2. Task dependencies and the critical path are identified automatically
3. Tasks are partitioned into **parallel groups** (independent tasks) and **serial chains** (dependent tasks)
4. Agents execute independent tasks in parallel, with no need for filesystem isolation

#### Two-Phase Lifecycle

```
┌─────────────────────────────────────────────────────────────────────┐
│ Phase 1: Accumulation │
│ │
│ Triggers: post-task review, code-review findings, test failures │
│ │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ │
│ │ discover │ │ discover- │ │ new │ │
│ │ Auto-find │ │ by-prompt │ │ Manual │ │
│ └────────────┘ └────────────┘ └────────────┘ │
│ │
│ Issues accumulate continuously into the pending queue │
└─────────────────────────────────────────────────────────────────────┘
│
│ once enough have accumulated
▼
┌─────────────────────────────────────────────────────────────────────┐
│ Phase 2: Batch Resolution │
│ │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ │
│ │ plan │ ──→ │ queue │ ──→ │ execute │ │
│ │ --all- │ │ Optimize │ │ Parallel │ │
│ │ pending │ │ order │ │ execution │ │
│ └────────────┘ └────────────┘ └────────────┘ │
│ │
│ Supports worktree isolation to keep the main branch stable │
└─────────────────────────────────────────────────────────────────────┘
```

#### Collaboration with the Main Workflow

Development iteration loop:

```
┌─────────────────────────────────────────────────────────────────────┐
│ │
│ ┌─────────┐ ┌─────────┐ │
│ │ Feature │ ──→ Main Workflow ──→ Done ──→│ Review │ │
│ │ Request │ (Level 1-4) └────┬────┘ │
│ └─────────┘ │ │
│ ▲ │ Issues found │
│ │ ▼ │
│ │ ┌─────────┐ │
│ Continue │ │ Issue │ │
│ with new │ │ Workflow│ │
│ features │ └────┬────┘ │
│ │ ┌──────────────────────────────┘ │
│ │ │ fixes complete │
│ │ ▼ │
│ ┌────┴────┐◀────── │
│ │ Main │ Merge │
│ │ Branch │ back │
│ └─────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────┘
```

#### Command List

**Accumulation phase:**

```bash
/issue:discover            # Multi-perspective auto-discovery
/issue:discover-by-prompt  # Prompt-based discovery
/issue:new                 # Manual creation
```

**Batch resolution phase:**

```bash
/issue:plan --all-pending  # Batch-plan all pending issues
/issue:queue               # Generate an optimized execution queue
/issue:execute             # Execute in parallel
```

### lite-lite-lite vs multi-cli-plan

| Dimension | lite-lite-lite | multi-cli-plan |
|-----------|----------------|----------------|
| **Artifacts** | No files | IMPL_PLAN.md + plan.json + synthesis.json |
| **State** | Stateless | Persisted session |
| **CLI selection** | Auto-selected from task analysis | Configuration-driven |
| **Iteration** | Via AskUser | Multi-round convergence |
| **Execution** | Direct execution | Via lite-execute |
| **Best for** | Quick fixes, simple features | Complex multi-step implementations |

**Selection guide**:

- Task is clear with a small change surface → `lite-lite-lite`
- Multi-perspective analysis or complex architecture is needed → `multi-cli-plan`

### multi-cli-plan vs lite-plan

| Dimension | multi-cli-plan | lite-plan |
|-----------|----------------|-----------|
| **Context** | ACE semantic search | Manual file patterns |
| **Analysis** | Multi-CLI cross-verification | Single-pass planning |
| **Iteration** | Multiple rounds until convergence | Single round |
| **Confidence** | High (consensus-driven) | Medium (single perspective) |
| **Best for** | Complex tasks needing multiple perspectives | Direct, well-specified implementations |

**Selection guide**:

- Requirements clear, path obvious → `lite-plan`
- Trade-offs or solution comparison needed → `multi-cli-plan`

## Artifact Flow Protocol

An automatic hand-off mechanism for workflow artifacts, supporting intent extraction and completion assessment across different artifact formats.

### Artifact Formats

| Command | Output location | Format | Key fields |
|---------|-----------------|--------|------------|
| `/workflow:lite-plan` | memory://plan | structured_plan | tasks, files, dependencies |
| `/workflow:plan` | .workflow/{session}/IMPL_PLAN.md | markdown_plan | phases, tasks, risks |
| `/workflow:execute` | execution_log.json | execution_report | completed_tasks, errors |
| `/workflow:test-cycle-execute` | test_results.json | test_report | pass_rate, failures, coverage |
| `/workflow:review-session-cycle` | review_report.md | review_report | findings, severity_counts |

### Intent Extraction

When handing off to the next step, key information is extracted automatically:

```
plan → execute:
  extract: tasks (incomplete), priority_order, files_to_modify, context_summary

execute → test:
  extract: modified_files, test_scope (inferred), pending_verification

test → fix:
  condition: pass_rate < 0.95
  extract: failures, error_messages, affected_files, suggested_fixes

review → fix:
  condition: critical > 0 OR high > 3
  extract: findings (critical/high), fix_priority, affected_files
```

### Completion Assessment

**Test completion routing**:

```
pass_rate >= 0.95 AND coverage >= 0.80 → complete
pass_rate >= 0.95 AND coverage < 0.80 → add_more_tests
pass_rate >= 0.80                      → fix_failures_then_continue
pass_rate < 0.80                       → major_fix_required
```

**Review completion routing**:

```
critical == 0 AND high <= 3 → complete_or_optional_fix
critical > 0                → mandatory_fix
high > 3                    → recommended_fix
```

### Hand-off Decision Patterns

**plan_execute_test**:

```
plan → execute → test
         ↓ (if test fail)
extract_failures → fix → test (max 3 iterations)
         ↓ (if still fail)
manual_intervention
```

**iterative_improvement**:

```
execute → test → fix → test → ...
loop until: pass_rate >= 0.95 OR iterations >= 3
```

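The two routing tables above can be expressed directly as functions. This is a sketch with thresholds copied from the tables; the function names are illustrative, not CCW's actual API:

```javascript
// Sketch of the completion-routing rules; thresholds are from the tables above.
function routeTestResult({ pass_rate, coverage }) {
  if (pass_rate >= 0.95 && coverage >= 0.80) return 'complete'
  if (pass_rate >= 0.95) return 'add_more_tests'
  if (pass_rate >= 0.80) return 'fix_failures_then_continue'
  return 'major_fix_required'
}

function routeReviewResult({ critical, high }) {
  if (critical > 0) return 'mandatory_fix'
  if (high > 3) return 'recommended_fix'
  return 'complete_or_optional_fix'
}

console.log(routeTestResult({ pass_rate: 0.85, coverage: 0.70 }))
// → fix_failures_then_continue
console.log(routeReviewResult({ critical: 0, high: 2 }))
// → complete_or_optional_fix
```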
### Usage Example

```javascript
// After execution completes, decide the next step from the artifact
const result = await execute(plan)

// Extract intent and hand off to testing
const testContext = extractIntent('execute_to_test', result)
// testContext = { modified_files, test_scope, pending_verification }

// After testing, route based on completion
const testResult = await test(testContext)
const nextStep = evaluateCompletion('test', testResult)
// nextStep = 'fix_failures_then_continue' if pass_rate = 0.85
```

## Examples

### Example 1: Bug Fix

```
User: User login fails with a 401 error
CCW: Intent=bugfix, Workflow=lite-fix
→ /workflow:lite-fix "User login fails with a 401 error"
```

### Example 2: New Feature (Simple)

```
User: Add a user avatar upload feature
CCW: Intent=feature, Complexity=low, Workflow=lite-plan→lite-execute
→ /workflow:lite-plan "Add a user avatar upload feature"
```

### Example 3: Complex Refactoring

```
User: Refactor the entire auth module and migrate to OAuth2
CCW: Intent=feature, Complexity=high, Workflow=plan→verify→execute
→ /workflow:plan "Refactor the entire auth module and migrate to OAuth2"
```

### Example 4: Exploration

```
User: I want to optimize system performance but don't know where to start
CCW: Intent=exploration, Workflow=brainstorm→plan→execute
→ /workflow:brainstorm:auto-parallel "Explore system performance optimization directions"
```

### Example 5: Multi-Model Collaboration

```
User: Use gemini to analyze the current architecture, then have codex implement the optimizations
CCW: Detects tool preferences, executes in sequence
→ Gemini CLI (analysis) → Codex CLI (implementation)
```

## Reference

- [command.json](command.json) - Command metadata, flow definitions, intent rules, and artifact flow

---

New file: `.claude/skills/ccw/command.json` (641 lines)

{
  "_metadata": {
    "version": "2.0.0",
    "description": "Unified CCW command index with capabilities, flows, and intent rules"
  },

  "capabilities": {
    "explore": {
      "description": "Codebase exploration and context gathering",
      "commands": ["/workflow:init", "/workflow:tools:gather", "/memory:load"],
      "agents": ["cli-explore-agent", "context-search-agent"]
    },
    "brainstorm": {
      "description": "Multi-perspective analysis and ideation",
      "commands": ["/workflow:brainstorm:auto-parallel", "/workflow:brainstorm:artifacts", "/workflow:brainstorm:synthesis"],
      "roles": ["product-manager", "system-architect", "ux-expert", "data-architect", "api-designer"]
    },
    "plan": {
      "description": "Task planning and decomposition",
      "commands": ["/workflow:lite-plan", "/workflow:plan", "/workflow:tdd-plan", "/task:create", "/task:breakdown"],
      "agents": ["cli-lite-planning-agent", "action-planning-agent"]
    },
    "verify": {
      "description": "Plan and quality verification",
      "commands": ["/workflow:action-plan-verify", "/workflow:tdd-verify"]
    },
    "execute": {
      "description": "Task execution and implementation",
      "commands": ["/workflow:lite-execute", "/workflow:execute", "/task:execute"],
      "agents": ["code-developer", "cli-execution-agent", "universal-executor"]
    },
    "bugfix": {
      "description": "Bug diagnosis and fixing",
      "commands": ["/workflow:lite-fix"],
      "agents": ["code-developer"]
    },
    "test": {
      "description": "Test generation and execution",
      "commands": ["/workflow:test-gen", "/workflow:test-fix-gen", "/workflow:test-cycle-execute"],
      "agents": ["test-fix-agent"]
    },
    "review": {
      "description": "Code review and quality analysis",
      "commands": ["/workflow:review-session-cycle", "/workflow:review-module-cycle", "/workflow:review", "/workflow:review-fix"]
    },
    "issue": {
      "description": "Issue lifecycle management - discover, accumulate, batch resolve",
      "commands": ["/issue:new", "/issue:discover", "/issue:discover-by-prompt", "/issue:plan", "/issue:queue", "/issue:execute", "/issue:manage"],
      "agents": ["issue-plan-agent", "issue-queue-agent", "cli-explore-agent"],
      "lifecycle": {
        "accumulation": {
          "description": "任务完成后进行需求扩展、bug分析、测试发现",
          "triggers": ["post-task review", "code review findings", "test failures"],
          "commands": ["/issue:discover", "/issue:discover-by-prompt", "/issue:new"]
        },
        "batch_resolution": {
          "description": "积累的issue集中规划和并行执行",
          "flow": ["plan", "queue", "execute"],
          "commands": ["/issue:plan --all-pending", "/issue:queue", "/issue:execute"]
        }
      }
    },
    "ui-design": {
      "description": "UI design and prototyping",
      "commands": ["/workflow:ui-design:explore-auto", "/workflow:ui-design:imitate-auto", "/workflow:ui-design:design-sync"],
      "agents": ["ui-design-agent"]
    },
    "memory": {
      "description": "Documentation and knowledge management",
      "commands": ["/memory:docs", "/memory:update-related", "/memory:update-full", "/memory:skill-memory"],
      "agents": ["doc-generator", "memory-bridge"]
    }
  },

  "flows": {
    "_level_guide": {
      "L1": "Rapid - No artifacts, direct execution",
      "L2": "Lightweight - Memory/lightweight files, → lite-execute",
      "L3": "Standard - Session persistence, → execute/test-cycle-execute",
      "L4": "Brainstorm - Multi-role analysis + Session, → execute"
    },
    "lite-lite-lite": {
      "name": "Ultra-Rapid Execution",
      "level": "L1",
      "description": "零文件 + 自动CLI选择 + 语义描述 + 直接执行",
      "complexity": ["low"],
      "artifacts": "none",
      "steps": [
        { "phase": "clarify", "description": "需求澄清 (AskUser if needed)" },
        { "phase": "auto-select", "description": "任务分析 → 自动选择CLI组合" },
        { "phase": "multi-cli", "description": "并行多CLI分析" },
        { "phase": "decision", "description": "展示结果 → AskUser决策" },
        { "phase": "execute", "description": "直接执行 (无中间文件)" }
      ],
      "cli_hints": {
        "analysis": { "tool": "auto", "mode": "analysis", "parallel": true },
        "execution": { "tool": "auto", "mode": "write" }
      },
      "estimated_time": "10-30 min"
    },
    "rapid": {
      "name": "Rapid Iteration",
      "level": "L2",
      "description": "内存规划 + 直接执行",
      "complexity": ["low", "medium"],
      "artifacts": "memory://plan",
      "steps": [
        { "command": "/workflow:lite-plan", "optional": false, "auto_continue": true },
        { "command": "/workflow:lite-execute", "optional": false }
      ],
      "cli_hints": {
        "explore_phase": { "tool": "gemini", "mode": "analysis", "trigger": "needs_exploration" },
        "execution": { "tool": "codex", "mode": "write", "trigger": "complexity >= medium" }
},
|
||||
"estimated_time": "15-45 min"
|
||||
},
|
||||
"multi-cli-plan": {
|
||||
"name": "Multi-CLI Collaborative Planning",
|
||||
"level": "L2",
|
||||
"description": "ACE上下文 + 多CLI协作分析 + 迭代收敛 + 计划生成",
|
||||
"complexity": ["medium", "high"],
|
||||
"artifacts": ".workflow/.multi-cli-plan/{session}/",
|
||||
"steps": [
|
||||
{ "command": "/workflow:multi-cli-plan", "optional": false, "phases": [
|
||||
"context_gathering: ACE语义搜索",
|
||||
"multi_cli_discussion: cli-discuss-agent多轮分析",
|
||||
"present_options: 展示解决方案",
|
||||
"user_decision: 用户选择",
|
||||
"plan_generation: cli-lite-planning-agent生成计划"
|
||||
]},
|
||||
{ "command": "/workflow:lite-execute", "optional": false }
|
||||
],
|
||||
"vs_lite_plan": {
|
||||
"context": "ACE semantic search vs Manual file patterns",
|
||||
"analysis": "Multi-CLI cross-verification vs Single-pass planning",
|
||||
"iteration": "Multiple rounds until convergence vs Single round",
|
||||
"confidence": "High (consensus-based) vs Medium (single perspective)",
|
||||
"best_for": "Complex tasks needing multiple perspectives vs Straightforward implementations"
|
||||
},
|
||||
"agents": ["cli-discuss-agent", "cli-lite-planning-agent"],
|
||||
"cli_hints": {
|
||||
"discussion": { "tools": ["gemini", "codex", "claude"], "mode": "analysis", "parallel": true },
|
||||
"planning": { "tool": "gemini", "mode": "analysis" }
|
||||
},
|
||||
"estimated_time": "30-90 min"
|
||||
},
|
||||
"coupled": {
|
||||
"name": "Standard Planning",
|
||||
"level": "L3",
|
||||
"description": "完整规划 + 验证 + 执行",
|
||||
"complexity": ["medium", "high"],
|
||||
"artifacts": ".workflow/active/{session}/",
|
||||
"steps": [
|
||||
{ "command": "/workflow:plan", "optional": false },
|
||||
{ "command": "/workflow:action-plan-verify", "optional": false, "auto_continue": true },
|
||||
{ "command": "/workflow:execute", "optional": false },
|
||||
{ "command": "/workflow:review", "optional": true }
|
||||
],
|
||||
"cli_hints": {
|
||||
"pre_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
|
||||
"execution": { "tool": "codex", "mode": "write", "trigger": "always" }
|
||||
},
|
||||
"estimated_time": "2-4 hours"
|
||||
},
|
||||
"full": {
|
||||
"name": "Full Exploration (Brainstorm)",
|
||||
"level": "L4",
|
||||
"description": "头脑风暴 + 规划 + 执行",
|
||||
"complexity": ["high"],
|
||||
"artifacts": ".workflow/active/{session}/.brainstorming/",
|
||||
"steps": [
|
||||
{ "command": "/workflow:brainstorm:auto-parallel", "optional": false, "confirm_before": true },
|
||||
{ "command": "/workflow:plan", "optional": false },
|
||||
{ "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
|
||||
{ "command": "/workflow:execute", "optional": false }
|
||||
],
|
||||
"cli_hints": {
|
||||
"role_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
|
||||
"execution": { "tool": "codex", "mode": "write", "trigger": "task_count >= 3" }
|
||||
},
|
||||
"estimated_time": "1-3 hours"
|
||||
},
|
||||
"bugfix": {
|
||||
"name": "Bug Fix",
|
||||
"level": "L2",
|
||||
"description": "智能诊断 + 修复 (5 phases)",
|
||||
"complexity": ["low", "medium"],
|
||||
"artifacts": ".workflow/.lite-fix/{bug-slug}-{date}/",
|
||||
"variants": {
|
||||
"standard": [{ "command": "/workflow:lite-fix", "optional": false }],
|
||||
"hotfix": [{ "command": "/workflow:lite-fix --hotfix", "optional": false }]
|
||||
},
|
||||
"phases": [
|
||||
"Phase 1: Bug Analysis & Diagnosis (severity pre-assessment)",
|
||||
"Phase 2: Clarification (optional, AskUserQuestion)",
|
||||
"Phase 3: Fix Planning (Low/Medium → Claude, High/Critical → cli-lite-planning-agent)",
|
||||
"Phase 4: Confirmation & Selection",
|
||||
"Phase 5: Execute (→ lite-execute --mode bugfix)"
|
||||
],
|
||||
"cli_hints": {
|
||||
"diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
|
||||
"fix": { "tool": "codex", "mode": "write", "trigger": "severity >= medium" }
|
||||
},
|
||||
"estimated_time": "10-30 min"
|
||||
},
|
||||
"issue": {
|
||||
"name": "Issue Lifecycle",
|
||||
"level": "Supplementary",
|
||||
"description": "发现积累 → 批量规划 → 队列优化 → 并行执行 (Main Workflow 补充机制)",
|
||||
"complexity": ["medium", "high"],
|
||||
"artifacts": ".workflow/.issues/",
|
||||
"purpose": "Post-development continuous maintenance, maintain main branch stability",
|
||||
"phases": {
|
||||
"accumulation": {
|
||||
"description": "项目迭代中持续发现和积累issue",
|
||||
"commands": ["/issue:discover", "/issue:discover-by-prompt", "/issue:new"],
|
||||
"trigger": "post-task, code-review, test-failure"
|
||||
},
|
||||
"resolution": {
|
||||
"description": "集中规划和执行积累的issue",
|
||||
"steps": [
|
||||
{ "command": "/issue:plan --all-pending", "optional": false },
|
||||
{ "command": "/issue:queue", "optional": false },
|
||||
{ "command": "/issue:execute", "optional": false }
|
||||
]
|
||||
}
|
||||
},
|
||||
"worktree_support": {
|
||||
"description": "可选的 worktree 隔离,保持主分支稳定",
|
||||
"use_case": "主开发完成后的 issue 修复"
|
||||
},
|
||||
"cli_hints": {
|
||||
"discovery": { "tool": "gemini", "mode": "analysis", "trigger": "perspective_analysis", "parallel": true },
|
||||
"solution_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
|
||||
"batch_execution": { "tool": "codex", "mode": "write", "trigger": "always" }
|
||||
},
|
||||
"estimated_time": "1-4 hours"
|
||||
},
|
||||
"tdd": {
|
||||
"name": "Test-Driven Development",
|
||||
"level": "L3",
|
||||
"description": "TDD规划 + 执行 + 验证 (6 phases)",
|
||||
"complexity": ["medium", "high"],
|
||||
"artifacts": ".workflow/active/{session}/",
|
||||
"steps": [
|
||||
{ "command": "/workflow:tdd-plan", "optional": false },
|
||||
{ "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
|
||||
{ "command": "/workflow:execute", "optional": false },
|
||||
{ "command": "/workflow:tdd-verify", "optional": false }
|
||||
],
|
||||
"tdd_structure": {
|
||||
"description": "Each IMPL task contains complete internal Red-Green-Refactor cycle",
|
||||
"meta": "tdd_workflow: true",
|
||||
"flow_control": "implementation_approach contains 3 steps (red/green/refactor)"
|
||||
},
|
||||
"cli_hints": {
|
||||
"test_strategy": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
|
||||
"red_green_refactor": { "tool": "codex", "mode": "write", "trigger": "always" }
|
||||
},
|
||||
"estimated_time": "1-3 hours"
|
||||
},
|
||||
"test-fix": {
|
||||
"name": "Test Fix Generation",
|
||||
"level": "L3",
|
||||
"description": "测试修复生成 + 执行循环 (5 phases)",
|
||||
"complexity": ["medium", "high"],
|
||||
"artifacts": ".workflow/active/WFS-test-{session}/",
|
||||
"dual_mode": {
|
||||
"session_mode": { "input": "WFS-xxx", "context_source": "Source session summaries" },
|
||||
"prompt_mode": { "input": "Text/file path", "context_source": "Direct codebase analysis" }
|
||||
},
|
||||
"steps": [
|
||||
{ "command": "/workflow:test-fix-gen", "optional": false },
|
||||
{ "command": "/workflow:test-cycle-execute", "optional": false }
|
||||
],
|
||||
"task_structure": [
|
||||
"IMPL-001.json (test understanding & generation)",
|
||||
"IMPL-001.5-review.json (quality gate)",
|
||||
"IMPL-002.json (test execution & fix cycle)"
|
||||
],
|
||||
"cli_hints": {
|
||||
"analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
|
||||
"fix_cycle": { "tool": "codex", "mode": "write", "trigger": "pass_rate < 0.95" }
|
||||
},
|
||||
"estimated_time": "1-2 hours"
|
||||
},
|
||||
"ui": {
|
||||
"name": "UI-First Development",
|
||||
"level": "L3/L4",
|
||||
"description": "UI设计 + 规划 + 执行",
|
||||
"complexity": ["medium", "high"],
|
||||
"artifacts": ".workflow/active/{session}/",
|
||||
"variants": {
|
||||
"explore": [
|
||||
{ "command": "/workflow:ui-design:explore-auto", "optional": false },
|
||||
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
|
||||
{ "command": "/workflow:plan", "optional": false },
|
||||
{ "command": "/workflow:execute", "optional": false }
|
||||
],
|
||||
"imitate": [
|
||||
{ "command": "/workflow:ui-design:imitate-auto", "optional": false },
|
||||
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
|
||||
{ "command": "/workflow:plan", "optional": false },
|
||||
{ "command": "/workflow:execute", "optional": false }
|
||||
]
|
||||
},
|
||||
"estimated_time": "2-4 hours"
|
||||
},
|
||||
"review-fix": {
|
||||
"name": "Review and Fix",
|
||||
"level": "L3",
|
||||
"description": "多维审查 + 自动修复",
|
||||
"complexity": ["medium"],
|
||||
"artifacts": ".workflow/active/{session}/review_report.md",
|
||||
"steps": [
|
||||
{ "command": "/workflow:review-session-cycle", "optional": false },
|
||||
{ "command": "/workflow:review-fix", "optional": true }
|
||||
],
|
||||
"cli_hints": {
|
||||
"multi_dimension_review": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
|
||||
"auto_fix": { "tool": "codex", "mode": "write", "trigger": "findings_count >= 3" }
|
||||
},
|
||||
"estimated_time": "30-90 min"
|
||||
},
|
||||
"docs": {
|
||||
"name": "Documentation",
|
||||
"level": "L2",
|
||||
"description": "批量文档生成",
|
||||
"complexity": ["low", "medium"],
|
||||
"variants": {
|
||||
"incremental": [{ "command": "/memory:update-related", "optional": false }],
|
||||
"full": [
|
||||
{ "command": "/memory:docs", "optional": false },
|
||||
{ "command": "/workflow:execute", "optional": false }
|
||||
]
|
||||
},
|
||||
"estimated_time": "15-60 min"
|
||||
}
|
||||
},
|
||||
|
||||
"intent_rules": {
|
||||
"_level_mapping": {
|
||||
"description": "Intent → Level → Flow mapping guide",
|
||||
"L1": ["lite-lite-lite"],
|
||||
"L2": ["rapid", "bugfix", "multi-cli-plan", "docs"],
|
||||
"L3": ["coupled", "tdd", "test-fix", "review-fix", "ui"],
|
||||
"L4": ["full"],
|
||||
"Supplementary": ["issue"]
|
||||
},
|
||||
"bugfix": {
|
||||
"priority": 1,
|
||||
"level": "L2",
|
||||
"variants": {
|
||||
"hotfix": {
|
||||
"patterns": ["hotfix", "urgent", "production", "critical", "emergency", "紧急", "生产环境", "线上"],
|
||||
"flow": "bugfix.hotfix"
|
||||
},
|
||||
"standard": {
|
||||
"patterns": ["fix", "bug", "error", "issue", "crash", "broken", "fail", "wrong", "修复", "错误", "崩溃"],
|
||||
"flow": "bugfix.standard"
|
||||
}
|
||||
}
|
||||
},
|
||||
"issue_batch": {
|
||||
"priority": 2,
|
||||
"level": "Supplementary",
|
||||
"patterns": {
|
||||
"batch": ["issues", "batch", "queue", "多个", "批量"],
|
||||
"action": ["fix", "resolve", "处理", "解决"]
|
||||
},
|
||||
"require_both": true,
|
||||
"flow": "issue"
|
||||
},
|
||||
"exploration": {
|
||||
"priority": 3,
|
||||
"level": "L4",
|
||||
"patterns": ["不确定", "不知道", "explore", "研究", "分析一下", "怎么做", "what if", "探索"],
|
||||
"flow": "full"
|
||||
},
|
||||
"multi_perspective": {
|
||||
"priority": 3,
|
||||
"level": "L2",
|
||||
"patterns": ["多视角", "权衡", "比较方案", "cross-verify", "多CLI", "协作分析"],
|
||||
"flow": "multi-cli-plan"
|
||||
},
|
||||
"quick_task": {
|
||||
"priority": 4,
|
||||
"level": "L1",
|
||||
"patterns": ["快速", "简单", "small", "quick", "simple", "trivial", "小改动"],
|
||||
"flow": "lite-lite-lite"
|
||||
},
|
||||
"ui_design": {
|
||||
"priority": 5,
|
||||
"level": "L3/L4",
|
||||
"patterns": ["ui", "界面", "design", "设计", "component", "组件", "style", "样式", "layout", "布局"],
|
||||
"variants": {
|
||||
"imitate": { "triggers": ["参考", "模仿", "像", "类似"], "flow": "ui.imitate" },
|
||||
"explore": { "triggers": [], "flow": "ui.explore" }
|
||||
}
|
||||
},
|
||||
"tdd": {
|
||||
"priority": 6,
|
||||
"level": "L3",
|
||||
"patterns": ["tdd", "test-driven", "测试驱动", "先写测试", "test first"],
|
||||
"flow": "tdd"
|
||||
},
|
||||
"test_fix": {
|
||||
"priority": 7,
|
||||
"level": "L3",
|
||||
"patterns": ["测试失败", "test fail", "fix test", "test error", "pass rate", "coverage gap"],
|
||||
"flow": "test-fix"
|
||||
},
|
||||
"review": {
|
||||
"priority": 8,
|
||||
"level": "L3",
|
||||
"patterns": ["review", "审查", "检查代码", "code review", "质量检查"],
|
||||
"flow": "review-fix"
|
||||
},
|
||||
"documentation": {
|
||||
"priority": 9,
|
||||
"level": "L2",
|
||||
"patterns": ["文档", "documentation", "docs", "readme"],
|
||||
"variants": {
|
||||
"incremental": { "triggers": ["更新", "增量"], "flow": "docs.incremental" },
|
||||
"full": { "triggers": ["全部", "完整"], "flow": "docs.full" }
|
||||
}
|
||||
},
|
||||
"feature": {
|
||||
"priority": 99,
|
||||
"complexity_map": {
|
||||
"high": { "level": "L3", "flow": "coupled" },
|
||||
"medium": { "level": "L2", "flow": "rapid" },
|
||||
"low": { "level": "L1", "flow": "lite-lite-lite" }
|
||||
}
|
||||
}
|
||||
},
|
||||
|
||||
"complexity_indicators": {
|
||||
"high": {
|
||||
"threshold": 4,
|
||||
"patterns": {
|
||||
"architecture": { "keywords": ["refactor", "重构", "migrate", "迁移", "architect", "架构", "system", "系统"], "weight": 2 },
|
||||
"multi_module": { "keywords": ["multiple", "多个", "across", "跨", "all", "所有", "entire", "整个"], "weight": 2 },
|
||||
"integration": { "keywords": ["integrate", "集成", "api", "database", "数据库"], "weight": 1 },
|
||||
"quality": { "keywords": ["security", "安全", "performance", "性能", "scale", "扩展"], "weight": 1 }
|
||||
}
|
||||
},
|
||||
"medium": { "threshold": 2 },
|
||||
"low": { "threshold": 0 }
|
||||
},
|
||||
|
||||
"cli_tools": {
|
||||
"gemini": {
|
||||
"strengths": ["超长上下文", "深度分析", "架构理解", "执行流追踪"],
|
||||
"triggers": ["分析", "理解", "设计", "架构", "诊断"],
|
||||
"mode": "analysis"
|
||||
},
|
||||
"qwen": {
|
||||
"strengths": ["代码模式识别", "多维度分析"],
|
||||
"triggers": ["评估", "对比", "验证"],
|
||||
"mode": "analysis"
|
||||
},
|
||||
"codex": {
|
||||
"strengths": ["精确代码生成", "自主执行"],
|
||||
"triggers": ["实现", "重构", "修复", "生成"],
|
||||
"mode": "write"
|
||||
}
|
||||
},
|
||||
|
||||
"cli_injection_rules": {
|
||||
"context_gathering": { "trigger": "file_read >= 50k OR module_count >= 5", "inject": "gemini --mode analysis" },
|
||||
"pre_planning_analysis": { "trigger": "complexity === high", "inject": "gemini --mode analysis" },
|
||||
"debug_diagnosis": { "trigger": "intent === bugfix AND root_cause_unclear", "inject": "gemini --mode analysis" },
|
||||
"code_review": { "trigger": "step === review", "inject": "gemini --mode analysis" },
|
||||
"implementation": { "trigger": "step === execute AND task_count >= 3", "inject": "codex --mode write" }
|
||||
},
|
||||
|
||||
"artifact_flow": {
|
||||
"_description": "定义工作流产出的格式、意图提取和流转规则",
|
||||
|
||||
"outputs": {
|
||||
"/workflow:lite-plan": {
|
||||
"artifact": "memory://plan",
|
||||
"format": "structured_plan",
|
||||
"fields": ["tasks", "files", "dependencies", "approach"]
|
||||
},
|
||||
"/workflow:plan": {
|
||||
"artifact": ".workflow/{session}/IMPL_PLAN.md",
|
||||
"format": "markdown_plan",
|
||||
"fields": ["phases", "tasks", "dependencies", "risks", "test_strategy"]
|
||||
},
|
||||
"/workflow:multi-cli-plan": {
|
||||
"artifact": ".workflow/.multi-cli-plan/{session}/",
|
||||
"format": "multi_file",
|
||||
"files": ["IMPL_PLAN.md", "plan.json", "synthesis.json"],
|
||||
"fields": ["consensus", "divergences", "recommended_approach", "tasks"]
|
||||
},
|
||||
"/workflow:lite-execute": {
|
||||
"artifact": "git_changes",
|
||||
"format": "code_diff",
|
||||
"fields": ["modified_files", "added_files", "deleted_files", "build_status"]
|
||||
},
|
||||
"/workflow:execute": {
|
||||
"artifact": ".workflow/{session}/execution_log.json",
|
||||
"format": "execution_report",
|
||||
"fields": ["completed_tasks", "pending_tasks", "errors", "warnings"]
|
||||
},
|
||||
"/workflow:test-cycle-execute": {
|
||||
"artifact": ".workflow/{session}/test_results.json",
|
||||
"format": "test_report",
|
||||
"fields": ["pass_rate", "failures", "coverage", "duration"]
|
||||
},
|
||||
"/workflow:review-session-cycle": {
|
||||
"artifact": ".workflow/{session}/review_report.md",
|
||||
"format": "review_report",
|
||||
"fields": ["findings", "severity_counts", "recommendations"]
|
||||
},
|
||||
"/workflow:lite-fix": {
|
||||
"artifact": "git_changes",
|
||||
"format": "fix_report",
|
||||
"fields": ["root_cause", "fix_applied", "files_modified", "verification_status"]
|
||||
}
|
||||
},
|
||||
|
||||
"intent_extraction": {
|
||||
"plan_to_execute": {
|
||||
"from": ["lite-plan", "plan", "multi-cli-plan"],
|
||||
"to": ["lite-execute", "execute"],
|
||||
"extract": {
|
||||
"tasks": "$.tasks[] | filter(status != 'completed')",
|
||||
"priority_order": "$.tasks | sort_by(priority)",
|
||||
"files_to_modify": "$.tasks[].files | flatten | unique",
|
||||
"dependencies": "$.dependencies",
|
||||
"context_summary": "$.approach OR $.recommended_approach"
|
||||
}
|
||||
},
|
||||
"execute_to_test": {
|
||||
"from": ["lite-execute", "execute"],
|
||||
"to": ["test-cycle-execute", "test-fix-gen"],
|
||||
"extract": {
|
||||
"modified_files": "$.modified_files",
|
||||
"test_scope": "infer_from($.modified_files)",
|
||||
"build_status": "$.build_status",
|
||||
"pending_verification": "$.completed_tasks | needs_test"
|
||||
}
|
||||
},
|
||||
"test_to_fix": {
|
||||
"from": ["test-cycle-execute"],
|
||||
"to": ["lite-fix", "review-fix"],
|
||||
"condition": "$.pass_rate < 0.95",
|
||||
"extract": {
|
||||
"failures": "$.failures",
|
||||
"error_messages": "$.failures[].message",
|
||||
"affected_files": "$.failures[].file",
|
||||
"suggested_fixes": "$.failures[].suggested_fix"
|
||||
}
|
||||
},
|
||||
"review_to_fix": {
|
||||
"from": ["review-session-cycle", "review-module-cycle"],
|
||||
"to": ["review-fix"],
|
||||
"condition": "$.severity_counts.critical > 0 OR $.severity_counts.high > 3",
|
||||
"extract": {
|
||||
"findings": "$.findings | filter(severity in ['critical', 'high'])",
|
||||
"fix_priority": "$.findings | group_by(category) | sort_by(severity)",
|
||||
"affected_files": "$.findings[].file | unique"
|
||||
}
|
||||
}
|
||||
},
|
||||
|
||||
"completion_criteria": {
|
||||
"plan": {
|
||||
"required": ["has_tasks", "has_files"],
|
||||
"optional": ["has_tests", "no_blocking_risks"],
|
||||
"threshold": 0.8,
|
||||
"routing": {
|
||||
"complete": "proceed_to_execute",
|
||||
"incomplete": "clarify_requirements"
|
||||
}
|
||||
},
|
||||
"execute": {
|
||||
"required": ["all_tasks_attempted", "no_critical_errors"],
|
||||
"optional": ["build_passes", "lint_passes"],
|
||||
"threshold": 1.0,
|
||||
"routing": {
|
||||
"complete": "proceed_to_test_or_review",
|
||||
"partial": "continue_execution",
|
||||
"failed": "diagnose_and_retry"
|
||||
}
|
||||
},
|
||||
"test": {
|
||||
"metrics": {
|
||||
"pass_rate": { "target": 0.95, "minimum": 0.80 },
|
||||
"coverage": { "target": 0.80, "minimum": 0.60 }
|
||||
},
|
||||
"routing": {
|
||||
"pass_rate >= 0.95 AND coverage >= 0.80": "complete",
|
||||
"pass_rate >= 0.95 AND coverage < 0.80": "add_more_tests",
|
||||
"pass_rate >= 0.80": "fix_failures_then_continue",
|
||||
"pass_rate < 0.80": "major_fix_required"
|
||||
}
|
||||
},
|
||||
"review": {
|
||||
"metrics": {
|
||||
"critical_findings": { "target": 0, "maximum": 0 },
|
||||
"high_findings": { "target": 0, "maximum": 3 }
|
||||
},
|
||||
"routing": {
|
||||
"critical == 0 AND high <= 3": "complete_or_optional_fix",
|
||||
"critical > 0": "mandatory_fix",
|
||||
"high > 3": "recommended_fix"
|
||||
}
|
||||
}
|
||||
},
|
||||
|
||||
"flow_decisions": {
|
||||
"_description": "根据产出完成度决定下一步",
|
||||
"patterns": {
|
||||
"plan_execute_test": {
|
||||
"sequence": ["plan", "execute", "test"],
|
||||
"on_test_fail": {
|
||||
"action": "extract_failures_and_fix",
|
||||
"max_iterations": 3,
|
||||
"fallback": "manual_intervention"
|
||||
}
|
||||
},
|
||||
"plan_execute_review": {
|
||||
"sequence": ["plan", "execute", "review"],
|
||||
"on_review_issues": {
|
||||
"action": "prioritize_and_fix",
|
||||
"auto_fix_threshold": "severity < high"
|
||||
}
|
||||
},
|
||||
"iterative_improvement": {
|
||||
"sequence": ["execute", "test", "fix"],
|
||||
"loop_until": "pass_rate >= 0.95 OR iterations >= 3",
|
||||
"on_loop_exit": "report_status"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
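The `complexity_indicators` table above implies a weighted-keyword scoring scheme: each pattern group contributes its weight when one of its keywords appears in the request, and the total is mapped to a level via the thresholds (high: 4, medium: 2, low: 0). A minimal sketch of how an orchestrator might apply it; the function name and the exact matching semantics (case-insensitive substring match, each group counted at most once) are assumptions, not part of the config:

```python
# Weighted-keyword complexity scoring as implied by "complexity_indicators":
# a pattern group contributes its weight once if any of its keywords occurs
# in the request; the total score is compared against the level thresholds.
COMPLEXITY_PATTERNS = {
    "architecture": (["refactor", "重构", "migrate", "迁移", "architect", "架构", "system", "系统"], 2),
    "multi_module": (["multiple", "多个", "across", "跨", "all", "所有", "entire", "整个"], 2),
    "integration": (["integrate", "集成", "api", "database", "数据库"], 1),
    "quality": (["security", "安全", "performance", "性能", "scale", "扩展"], 1),
}

def assess_complexity(request: str) -> str:
    text = request.lower()
    score = sum(
        weight
        for keywords, weight in COMPLEXITY_PATTERNS.values()
        if any(kw in text for kw in keywords)
    )
    if score >= 4:   # "high": threshold 4
        return "high"
    if score >= 2:   # "medium": threshold 2
        return "medium"
    return "low"     # "low": threshold 0

print(assess_complexity("refactor the auth system across multiple modules"))  # high
print(assess_complexity("migrate the database"))                              # medium
print(assess_complexity("rename a variable"))                                 # low
```

Under this reading, the `feature.complexity_map` rule then routes the resulting level to `coupled`, `rapid`, or `lite-lite-lite`.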
@@ -1,127 +0,0 @@
{
  "_metadata": {
    "version": "1.0.0",
    "generated": "2026-01-03",
    "description": "CCW command capability index for intelligent workflow coordination"
  },
  "capabilities": {
    "explore": {
      "description": "Codebase exploration and context gathering",
      "commands": [
        { "command": "/workflow:init", "weight": 1.0, "tags": ["project-setup", "context"] },
        { "command": "/workflow:tools:gather", "weight": 0.9, "tags": ["context", "analysis"] },
        { "command": "/memory:load", "weight": 0.8, "tags": ["context", "memory"] }
      ],
      "agents": ["cli-explore-agent", "context-search-agent"]
    },
    "brainstorm": {
      "description": "Multi-perspective analysis and ideation",
      "commands": [
        { "command": "/workflow:brainstorm:auto-parallel", "weight": 1.0, "tags": ["exploration", "multi-role"] },
        { "command": "/workflow:brainstorm:artifacts", "weight": 0.9, "tags": ["clarification", "guidance"] },
        { "command": "/workflow:brainstorm:synthesis", "weight": 0.8, "tags": ["consolidation", "refinement"] }
      ],
      "roles": ["product-manager", "system-architect", "ux-expert", "data-architect", "api-designer"]
    },
    "plan": {
      "description": "Task planning and decomposition",
      "commands": [
        { "command": "/workflow:lite-plan", "weight": 1.0, "complexity": "low-medium", "tags": ["fast", "interactive"] },
        { "command": "/workflow:plan", "weight": 0.9, "complexity": "medium-high", "tags": ["comprehensive", "persistent"] },
        { "command": "/workflow:tdd-plan", "weight": 0.7, "complexity": "medium-high", "tags": ["test-first", "quality"] },
        { "command": "/task:create", "weight": 0.6, "tags": ["single-task", "manual"] },
        { "command": "/task:breakdown", "weight": 0.5, "tags": ["decomposition", "subtasks"] }
      ],
      "agents": ["cli-lite-planning-agent", "action-planning-agent"]
    },
    "verify": {
      "description": "Plan and quality verification",
      "commands": [
        { "command": "/workflow:action-plan-verify", "weight": 1.0, "tags": ["plan-quality", "consistency"] },
        { "command": "/workflow:tdd-verify", "weight": 0.8, "tags": ["tdd-compliance", "coverage"] }
      ]
    },
    "execute": {
      "description": "Task execution and implementation",
      "commands": [
        { "command": "/workflow:lite-execute", "weight": 1.0, "complexity": "low-medium", "tags": ["fast", "agent-or-cli"] },
        { "command": "/workflow:execute", "weight": 0.9, "complexity": "medium-high", "tags": ["dag-parallel", "comprehensive"] },
        { "command": "/task:execute", "weight": 0.7, "tags": ["single-task"] }
      ],
      "agents": ["code-developer", "cli-execution-agent", "universal-executor"]
    },
    "bugfix": {
      "description": "Bug diagnosis and fixing",
      "commands": [
        { "command": "/workflow:lite-fix", "weight": 1.0, "tags": ["diagnosis", "fix", "standard"] },
        { "command": "/workflow:lite-fix --hotfix", "weight": 0.9, "tags": ["emergency", "production", "fast"] }
      ],
      "agents": ["code-developer"]
    },
    "test": {
      "description": "Test generation and execution",
      "commands": [
        { "command": "/workflow:test-gen", "weight": 1.0, "tags": ["post-implementation", "coverage"] },
        { "command": "/workflow:test-fix-gen", "weight": 0.9, "tags": ["from-description", "flexible"] },
        { "command": "/workflow:test-cycle-execute", "weight": 0.8, "tags": ["iterative", "fix-cycle"] }
      ],
      "agents": ["test-fix-agent"]
    },
    "review": {
      "description": "Code review and quality analysis",
      "commands": [
        { "command": "/workflow:review-session-cycle", "weight": 1.0, "tags": ["session-based", "comprehensive"] },
        { "command": "/workflow:review-module-cycle", "weight": 0.9, "tags": ["module-based", "targeted"] },
        { "command": "/workflow:review", "weight": 0.8, "tags": ["single-pass", "type-specific"] },
        { "command": "/workflow:review-fix", "weight": 0.7, "tags": ["auto-fix", "findings"] }
      ]
    },
    "issue": {
      "description": "Batch issue management",
      "commands": [
        { "command": "/issue:new", "weight": 1.0, "tags": ["create", "import"] },
        { "command": "/issue:discover", "weight": 0.9, "tags": ["find", "analyze"] },
        { "command": "/issue:plan", "weight": 0.8, "tags": ["solutions", "planning"] },
        { "command": "/issue:queue", "weight": 0.7, "tags": ["prioritize", "order"] },
        { "command": "/issue:execute", "weight": 0.6, "tags": ["batch-execute", "dag"] }
      ],
      "agents": ["issue-plan-agent", "issue-queue-agent"]
    },
    "ui-design": {
      "description": "UI design and prototyping",
      "commands": [
        { "command": "/workflow:ui-design:explore-auto", "weight": 1.0, "tags": ["from-scratch", "variants"] },
        { "command": "/workflow:ui-design:imitate-auto", "weight": 0.9, "tags": ["reference-based", "copy"] },
        { "command": "/workflow:ui-design:design-sync", "weight": 0.7, "tags": ["sync", "finalize"] },
        { "command": "/workflow:ui-design:generate", "weight": 0.6, "tags": ["assemble", "prototype"] }
      ],
      "agents": ["ui-design-agent"]
    },
    "memory": {
      "description": "Documentation and knowledge management",
      "commands": [
        { "command": "/memory:docs", "weight": 1.0, "tags": ["generate", "planning"] },
        { "command": "/memory:update-related", "weight": 0.9, "tags": ["incremental", "git-based"] },
        { "command": "/memory:update-full", "weight": 0.8, "tags": ["comprehensive", "all-modules"] },
        { "command": "/memory:skill-memory", "weight": 0.7, "tags": ["package", "reusable"] }
      ],
      "agents": ["doc-generator", "memory-bridge"]
    },
    "session": {
      "description": "Workflow session management",
      "commands": [
        { "command": "/workflow:session:start", "weight": 1.0, "tags": ["init", "discover"] },
        { "command": "/workflow:session:list", "weight": 0.9, "tags": ["view", "status"] },
        { "command": "/workflow:session:resume", "weight": 0.8, "tags": ["continue", "restore"] },
        { "command": "/workflow:session:complete", "weight": 0.7, "tags": ["finish", "archive"] }
      ]
    },
    "debug": {
      "description": "Debugging and problem solving",
      "commands": [
        { "command": "/workflow:debug", "weight": 1.0, "tags": ["hypothesis", "iterative"] },
        { "command": "/workflow:clean", "weight": 0.6, "tags": ["cleanup", "artifacts"] }
      ]
    }
  }
}
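The capability index above pairs each command with a static weight and a set of tags. One plausible use is to rank a capability's commands by tag overlap with the current request, breaking ties by weight; the selection function below is an illustrative assumption, not CCW's actual algorithm:

```python
# Illustrative ranking over the capability index: for a given capability,
# prefer the command whose tags overlap most with the desired tags,
# falling back to the index's static weight as a tie-breaker.
CAPABILITIES = {
    "plan": [
        {"command": "/workflow:lite-plan", "weight": 1.0, "tags": ["fast", "interactive"]},
        {"command": "/workflow:plan", "weight": 0.9, "tags": ["comprehensive", "persistent"]},
        {"command": "/workflow:tdd-plan", "weight": 0.7, "tags": ["test-first", "quality"]},
    ],
}

def select_command(capability: str, desired_tags: set[str]) -> str:
    candidates = CAPABILITIES[capability]
    best = max(
        candidates,
        # rank by (number of matching tags, static weight)
        key=lambda c: (len(desired_tags & set(c["tags"])), c["weight"]),
    )
    return best["command"]

print(select_command("plan", {"test-first"}))  # /workflow:tdd-plan
print(select_command("plan", set()))           # /workflow:lite-plan (highest weight)
```

With no tag preferences the static weights alone decide, which matches the index's convention of giving the default command weight 1.0.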
@@ -1,136 +0,0 @@
|
||||
{
|
||||
"_metadata": {
|
||||
"version": "1.0.0",
|
||||
"description": "Externalized intent classification rules for CCW orchestrator"
|
||||
},
|
||||
"intent_patterns": {
|
||||
"bugfix": {
|
||||
"priority": 1,
|
||||
"description": "Bug修复意图",
|
||||
"variants": {
|
||||
"hotfix": {
|
||||
"patterns": ["hotfix", "urgent", "production", "critical", "emergency", "紧急", "生产环境", "线上"],
|
||||
"workflow": "lite-fix --hotfix"
|
||||
},
|
||||
"standard": {
|
||||
"patterns": ["fix", "bug", "error", "issue", "crash", "broken", "fail", "wrong", "incorrect", "修复", "错误", "崩溃", "失败"],
|
||||
"workflow": "lite-fix"
|
||||
}
|
||||
}
|
||||
},
|
||||
"issue_batch": {
|
||||
"priority": 2,
|
||||
"description": "批量Issue处理意图",
|
||||
"patterns": {
|
||||
"batch_keywords": ["issues", "issue", "batch", "queue", "多个", "批量", "一批"],
|
||||
"action_keywords": ["fix", "resolve", "处理", "解决", "修复"]
|
||||
},
|
||||
"require_both": true,
|
||||
"workflow": "issue:plan → issue:queue → issue:execute"
|
||||
},
|
||||
"exploration": {
|
||||
"priority": 3,
|
||||
"description": "探索/不确定意图",
|
||||
"patterns": ["不确定", "不知道", "explore", "研究", "分析一下", "怎么做", "what if", "should i", "探索", "可能", "或许", "建议"],
|
||||
"workflow": "brainstorm → plan → execute"
|
||||
},
|
||||
"ui_design": {
|
||||
"priority": 4,
|
||||
"description": "UI/设计意图",
|
||||
"patterns": ["ui", "界面", "design", "设计", "component", "组件", "style", "样式", "layout", "布局", "前端", "frontend", "页面"],
|
||||
"variants": {
|
||||
"imitate": {
|
||||
"triggers": ["参考", "模仿", "像", "类似", "reference", "like"],
|
||||
"workflow": "ui-design:imitate-auto → plan → execute"
|
||||
},
|
||||
"explore": {
|
||||
"triggers": [],
|
||||
"workflow": "ui-design:explore-auto → plan → execute"
|
||||
}
|
||||
}
|
||||
},
|
||||
"tdd": {
|
||||
"priority": 5,
|
||||
"description": "测试驱动开发意图",
|
||||
"patterns": ["tdd", "test-driven", "测试驱动", "先写测试", "red-green", "test first"],
|
||||
"workflow": "tdd-plan → execute → tdd-verify"
|
||||
},
|
||||
"review": {
|
||||
"priority": 6,
|
||||
"description": "代码审查意图",
|
||||
"patterns": ["review", "审查", "检查代码", "code review", "质量检查", "安全审查"],
|
||||
"workflow": "review-session-cycle → review-fix"
|
||||
},
|
||||
"documentation": {
|
||||
"priority": 7,
|
||||
"description": "文档生成意图",
|
||||
"patterns": ["文档", "documentation", "docs", "readme", "注释", "api doc", "说明"],
|
||||
"variants": {
|
||||
"incremental": {
|
||||
"triggers": ["更新", "增量", "相关"],
|
||||
"workflow": "memory:update-related"
|
||||
},
|
||||
"full": {
|
||||
"triggers": ["全部", "完整", "所有"],
|
||||
"workflow": "memory:docs → execute"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"complexity_indicators": {
|
||||
"high": {
|
||||
"score_threshold": 4,
|
||||
"patterns": {
|
||||
"architecture": {
|
||||
"keywords": ["refactor", "重构", "migrate", "迁移", "architect", "架构", "system", "系统"],
|
||||
"weight": 2
|
||||
},
|
||||
"multi_module": {
|
||||
"keywords": ["multiple", "多个", "across", "跨", "all", "所有", "entire", "整个"],
|
||||
"weight": 2
|
||||
},
|
||||
"integration": {
|
||||
"keywords": ["integrate", "集成", "connect", "连接", "api", "database", "数据库"],
|
||||
"weight": 1
|
||||
},
|
||||
"quality": {
|
||||
"keywords": ["security", "安全", "performance", "性能", "scale", "扩展", "优化"],
|
||||
"weight": 1
|
||||
}
|
||||
},
|
||||
"workflow": "plan → verify → execute"
|
||||
},
|
||||
"medium": {
|
||||
"score_threshold": 2,
|
||||
"workflow": "lite-plan → lite-execute"
|
||||
},
|
||||
"low": {
|
||||
"score_threshold": 0,
|
||||
"workflow": "lite-plan → lite-execute"
|
||||
}
|
||||
},
|
||||
"cli_tool_triggers": {
|
||||
"gemini": {
|
||||
"explicit": ["用 gemini", "gemini 分析", "让 gemini", "用gemini"],
|
||||
"semantic": ["深度分析", "架构理解", "执行流追踪", "根因分析"]
|
||||
},
|
||||
"qwen": {
|
||||
"explicit": ["用 qwen", "qwen 评估", "让 qwen", "用qwen"],
|
||||
"semantic": ["第二视角", "对比验证", "模式识别"]
|
||||
},
|
||||
"codex": {
|
||||
"explicit": ["用 codex", "codex 实现", "让 codex", "用codex"],
|
||||
"semantic": ["自主完成", "批量修改", "自动实现"]
|
||||
}
|
||||
},
|
||||
"fallback_rules": {
|
||||
"no_match": {
|
||||
"default_workflow": "lite-plan → lite-execute",
|
||||
"use_complexity_assessment": true
|
||||
},
|
||||
"ambiguous": {
|
||||
"action": "ask_user",
|
||||
"message": "检测到多个可能意图,请确认工作流选择"
|
||||
}
|
||||
}
|
||||
}
|
||||
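The intent table above can be exercised with a small matcher. This is a minimal sketch, assuming the JSON has been loaded as an `intents` object keyed by intent name (with `priority` and `patterns` as shown); the function name `classifyIntent` is hypothetical and not part of CCW itself:

```javascript
// Minimal sketch of intent classification against the table above.
// Assumes `intents` mirrors the JSON: { name: { priority, patterns, workflow } }.
// Among intents whose patterns match the input, the lowest priority value wins.
function classifyIntent(input, intents) {
  const text = input.toLowerCase();
  const matches = Object.entries(intents)
    .filter(([, def]) => def.patterns.some(p => text.includes(p.toLowerCase())))
    .sort(([, a], [, b]) => a.priority - b.priority);
  // null falls through to fallback_rules.no_match
  return matches.length > 0 ? matches[0][0] : null;
}
```

Multiple matches would correspond to the `ambiguous` fallback rule; this sketch simply picks the highest-priority one.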
@@ -1,451 +0,0 @@
{
"_metadata": {
"version": "1.1.0",
"description": "Predefined workflow chains with CLI tool integration for CCW orchestration"
},
"cli_tools": {
"_doc": "CLI工具是CCW的核心能力,在合适时机自动调用以获得:1)较少token获取大量上下文 2)引入不同模型视角 3)增强debug和规划能力",
"gemini": {
"strengths": ["超长上下文", "深度分析", "架构理解", "执行流追踪"],
"triggers": ["分析", "理解", "设计", "架构", "评估", "诊断"],
"mode": "analysis",
"token_efficiency": "high",
"use_when": [
"需要理解大型代码库结构",
"执行流追踪和数据流分析",
"架构设计和技术方案评估",
"复杂问题诊断(root cause analysis)"
]
},
"qwen": {
"strengths": ["超长上下文", "代码模式识别", "多维度分析"],
"triggers": ["评估", "对比", "验证"],
"mode": "analysis",
"token_efficiency": "high",
"use_when": [
"Gemini 不可用时作为备选",
"需要第二视角验证分析结果",
"代码模式识别和重复检测"
]
},
"codex": {
"strengths": ["精确代码生成", "自主执行", "数学推理"],
"triggers": ["实现", "重构", "修复", "生成", "测试"],
"mode": "write",
"token_efficiency": "medium",
"use_when": [
"需要自主完成多步骤代码修改",
"复杂重构和迁移任务",
"测试生成和修复循环"
]
}
},
"cli_injection_rules": {
"_doc": "隐式规则:在特定条件下自动注入CLI调用",
"context_gathering": {
"trigger": "file_read >= 50k chars OR module_count >= 5",
"inject": "gemini --mode analysis",
"reason": "大量代码上下文使用CLI可节省主会话token"
},
"pre_planning_analysis": {
"trigger": "complexity === 'high' OR intent === 'exploration'",
"inject": "gemini --mode analysis",
"reason": "复杂任务先用CLI分析获取多模型视角"
},
"debug_diagnosis": {
"trigger": "intent === 'bugfix' AND root_cause_unclear",
"inject": "gemini --mode analysis",
"reason": "深度诊断利用Gemini的执行流追踪能力"
},
"code_review": {
"trigger": "step === 'review'",
"inject": "gemini --mode analysis",
"reason": "代码审查用CLI减少token占用"
},
"implementation": {
"trigger": "step === 'execute' AND task_count >= 3",
"inject": "codex --mode write",
"reason": "多任务执行用Codex自主完成"
}
},
"chains": {
"rapid": {
"name": "Rapid Iteration",
"description": "多模型协作分析 + 直接执行",
"complexity": ["low", "medium"],
"steps": [
{
"command": "/workflow:lite-plan",
"optional": false,
"auto_continue": true,
"cli_hint": {
"explore_phase": { "tool": "gemini", "mode": "analysis", "trigger": "needs_exploration" },
"planning_phase": { "tool": "gemini", "mode": "analysis", "trigger": "complexity >= medium" }
}
},
{
"command": "/workflow:lite-execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"execution": { "tool": "codex", "mode": "write", "trigger": "user_selects_codex OR complexity >= medium" },
"review": { "tool": "gemini", "mode": "analysis", "trigger": "user_selects_review" }
}
}
],
"total_steps": 2,
"estimated_time": "15-45 min"
},
"full": {
"name": "Full Exploration",
"description": "多模型深度分析 + 头脑风暴 + 规划 + 执行",
"complexity": ["medium", "high"],
"steps": [
{
"command": "/workflow:brainstorm:auto-parallel",
"optional": false,
"confirm_before": true,
"cli_hint": {
"role_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true }
}
},
{
"command": "/workflow:plan",
"optional": false,
"auto_continue": false,
"cli_hint": {
"context_gather": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"task_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
}
},
{ "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
{
"command": "/workflow:execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"execution": { "tool": "codex", "mode": "write", "trigger": "task_count >= 3" }
}
}
],
"total_steps": 4,
"estimated_time": "1-3 hours"
},
"coupled": {
"name": "Coupled Planning",
"description": "CLI深度分析 + 完整规划 + 验证 + 执行",
"complexity": ["high"],
"steps": [
{
"command": "/workflow:plan",
"optional": false,
"auto_continue": false,
"cli_hint": {
"pre_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "purpose": "架构理解和依赖分析" },
"conflict_detection": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
}
},
{ "command": "/workflow:action-plan-verify", "optional": false, "auto_continue": true },
{
"command": "/workflow:execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"execution": { "tool": "codex", "mode": "write", "trigger": "always", "purpose": "自主多任务执行" }
}
},
{
"command": "/workflow:review",
"optional": true,
"auto_continue": false,
"cli_hint": {
"review": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
}
}
],
"total_steps": 4,
"estimated_time": "2-4 hours"
},
"bugfix": {
"name": "Bug Fix",
"description": "CLI诊断 + 智能修复",
"complexity": ["low", "medium"],
"variants": {
"standard": {
"steps": [
{
"command": "/workflow:lite-fix",
"optional": false,
"auto_continue": true,
"cli_hint": {
"diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "purpose": "根因分析和执行流追踪" },
"fix": { "tool": "codex", "mode": "write", "trigger": "severity >= medium" }
}
}
]
},
"hotfix": {
"steps": [
{
"command": "/workflow:lite-fix --hotfix",
"optional": false,
"auto_continue": true,
"cli_hint": {
"quick_diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "timeout": "60s" }
}
}
]
}
},
"total_steps": 1,
"estimated_time": "10-30 min"
},
"issue": {
"name": "Issue Batch",
"description": "CLI批量分析 + 队列优化 + 并行执行",
"complexity": ["medium", "high"],
"steps": [
{
"command": "/issue:plan",
"optional": false,
"auto_continue": false,
"cli_hint": {
"solution_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true }
}
},
{
"command": "/issue:queue",
"optional": false,
"auto_continue": false,
"cli_hint": {
"conflict_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "issue_count >= 3" }
}
},
{
"command": "/issue:execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"batch_execution": { "tool": "codex", "mode": "write", "trigger": "always", "purpose": "DAG并行执行" }
}
}
],
"total_steps": 3,
"estimated_time": "1-4 hours"
},
"tdd": {
"name": "Test-Driven Development",
"description": "TDD规划 + 执行 + CLI验证",
"complexity": ["medium", "high"],
"steps": [
{
"command": "/workflow:tdd-plan",
"optional": false,
"auto_continue": false,
"cli_hint": {
"test_strategy": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
}
},
{
"command": "/workflow:execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"red_green_refactor": { "tool": "codex", "mode": "write", "trigger": "always" }
}
},
{
"command": "/workflow:tdd-verify",
"optional": false,
"auto_continue": false,
"cli_hint": {
"coverage_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
}
}
],
"total_steps": 3,
"estimated_time": "1-3 hours"
},
"ui": {
"name": "UI-First Development",
"description": "UI设计 + 规划 + 执行",
"complexity": ["medium", "high"],
"variants": {
"explore": {
"steps": [
{ "command": "/workflow:ui-design:explore-auto", "optional": false, "auto_continue": false },
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
{ "command": "/workflow:plan", "optional": false, "auto_continue": false },
{ "command": "/workflow:execute", "optional": false, "auto_continue": false }
]
},
"imitate": {
"steps": [
{ "command": "/workflow:ui-design:imitate-auto", "optional": false, "auto_continue": false },
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
{ "command": "/workflow:plan", "optional": false, "auto_continue": false },
{ "command": "/workflow:execute", "optional": false, "auto_continue": false }
]
}
},
"total_steps": 4,
"estimated_time": "2-4 hours"
},
"review-fix": {
"name": "Review and Fix",
"description": "CLI多维审查 + 自动修复",
"complexity": ["medium"],
"steps": [
{
"command": "/workflow:review-session-cycle",
"optional": false,
"auto_continue": false,
"cli_hint": {
"multi_dimension_review": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true }
}
},
{
"command": "/workflow:review-fix",
"optional": true,
"auto_continue": false,
"cli_hint": {
"auto_fix": { "tool": "codex", "mode": "write", "trigger": "findings_count >= 3" }
}
}
],
"total_steps": 2,
"estimated_time": "30-90 min"
},
"docs": {
"name": "Documentation",
"description": "CLI批量文档生成",
"complexity": ["low", "medium"],
"variants": {
"incremental": {
"steps": [
{
"command": "/memory:update-related",
"optional": false,
"auto_continue": false,
"cli_hint": {
"doc_generation": { "tool": "gemini", "mode": "write", "trigger": "module_count >= 5" }
}
}
]
},
"full": {
"steps": [
{ "command": "/memory:docs", "optional": false, "auto_continue": false },
{
"command": "/workflow:execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"batch_doc": { "tool": "gemini", "mode": "write", "trigger": "always" }
}
}
]
}
},
"total_steps": 2,
"estimated_time": "15-60 min"
},
"cli-analysis": {
"name": "CLI Direct Analysis",
"description": "直接CLI分析,获取多模型视角,节省主会话token",
"complexity": ["low", "medium", "high"],
"standalone": true,
"steps": [
{
"command": "ccw cli",
"tool": "gemini",
"mode": "analysis",
"optional": false,
"auto_continue": false
}
],
"use_cases": [
"大型代码库快速理解",
"执行流追踪和数据流分析",
"架构评估和技术方案对比",
"性能瓶颈诊断"
],
"total_steps": 1,
"estimated_time": "5-15 min"
},
"cli-implement": {
"name": "CLI Direct Implementation",
"description": "直接Codex实现,自主完成多步骤任务",
"complexity": ["medium", "high"],
"standalone": true,
"steps": [
{
"command": "ccw cli",
"tool": "codex",
"mode": "write",
"optional": false,
"auto_continue": false
}
],
"use_cases": [
"明确需求的功能实现",
"代码重构和迁移",
"测试生成",
"批量代码修改"
],
"total_steps": 1,
"estimated_time": "15-60 min"
},
"cli-debug": {
"name": "CLI Debug Session",
"description": "CLI调试会话,利用Gemini深度诊断能力",
"complexity": ["medium", "high"],
"standalone": true,
"steps": [
{
"command": "ccw cli",
"tool": "gemini",
"mode": "analysis",
"purpose": "hypothesis-driven debugging",
"optional": false,
"auto_continue": false
}
],
"use_cases": [
"复杂bug根因分析",
"执行流异常追踪",
"状态机错误诊断",
"并发问题排查"
],
"total_steps": 1,
"estimated_time": "10-30 min"
}
},
"chain_selection_rules": {
"intent_mapping": {
"bugfix": ["bugfix"],
"feature_simple": ["rapid"],
"feature_unclear": ["full"],
"feature_complex": ["coupled"],
"issue_batch": ["issue"],
"test_driven": ["tdd"],
"ui_design": ["ui"],
"code_review": ["review-fix"],
"documentation": ["docs"],
"analysis_only": ["cli-analysis"],
"implement_only": ["cli-implement"],
"debug": ["cli-debug", "bugfix"]
},
"complexity_fallback": {
"low": "rapid",
"medium": "coupled",
"high": "full"
},
"cli_preference_rules": {
"_doc": "用户语义触发CLI工具选择",
"gemini_triggers": ["用 gemini", "gemini 分析", "让 gemini", "深度分析", "架构理解"],
"qwen_triggers": ["用 qwen", "qwen 评估", "让 qwen", "第二视角"],
"codex_triggers": ["用 codex", "codex 实现", "让 codex", "自主完成", "批量修改"]
}
}
}
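The `chain_selection_rules` block above reduces to a short lookup: intent first, complexity fallback second. A minimal sketch, assuming the JSON is loaded as `rules` (`selectChain` is an illustrative helper name, not a shipped CCW function):

```javascript
// Sketch of chain selection per chain_selection_rules above.
// Assumes `rules` mirrors the JSON: { intent_mapping, complexity_fallback }.
function selectChain(intent, complexity, rules) {
  const candidates = rules.intent_mapping[intent];
  if (candidates && candidates.length > 0) return candidates[0]; // first mapped chain
  return rules.complexity_fallback[complexity]; // e.g. "high" → "full"
}
```

Intents that map to several chains (such as `debug`) would need a tiebreak; this sketch just takes the first entry.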
@@ -1,218 +0,0 @@

# Action: Bugfix Workflow

Bug-fix workflow: intelligent diagnosis + impact assessment + fix

## Pattern

```
lite-fix [--hotfix]
```

## Trigger Conditions

- Keywords: "fix", "bug", "error", "crash", "broken", "fail", "修复", "报错"
- Problem symptoms described
- Error messages present

## Execution Flow

### Standard Mode

```mermaid
sequenceDiagram
    participant U as User
    participant O as CCW Orchestrator
    participant LF as lite-fix
    participant CLI as CLI Tools

    U->>O: Bug description
    O->>O: Classify: bugfix (standard)
    O->>LF: /workflow:lite-fix "bug"

    Note over LF: Phase 1: Diagnosis
    LF->>CLI: Root cause analysis (Gemini)
    CLI-->>LF: diagnosis.json

    Note over LF: Phase 2: Impact Assessment
    LF->>LF: Risk scoring (0-10)
    LF->>LF: Severity classification
    LF-->>U: Impact report

    Note over LF: Phase 3: Fix Strategy
    LF->>LF: Generate fix options
    LF-->>U: Present strategies
    U->>LF: Select strategy

    Note over LF: Phase 4: Verification Plan
    LF->>LF: Generate test plan
    LF-->>U: Verification approach

    Note over LF: Phase 5: Confirmation
    LF->>U: Execution method?
    U->>LF: Confirm

    Note over LF: Phase 6: Execute
    LF->>CLI: Execute fix (Agent/Codex)
    CLI-->>LF: Results
    LF-->>U: Fix complete
```

### Hotfix Mode

```mermaid
sequenceDiagram
    participant U as User
    participant O as CCW Orchestrator
    participant LF as lite-fix
    participant CLI as CLI Tools

    U->>O: Urgent bug + "hotfix"
    O->>O: Classify: bugfix (hotfix)
    O->>LF: /workflow:lite-fix --hotfix "bug"

    Note over LF: Minimal Diagnosis
    LF->>CLI: Quick root cause
    CLI-->>LF: Known issue?

    Note over LF: Surgical Fix
    LF->>LF: Single optimal fix
    LF-->>U: Quick confirmation
    U->>LF: Proceed

    Note over LF: Smoke Test
    LF->>CLI: Minimal verification
    CLI-->>LF: Pass/Fail

    Note over LF: Follow-up Generation
    LF->>LF: Generate follow-up tasks
    LF-->>U: Fix deployed + follow-ups created
```

## When to Use

### Standard Mode (/workflow:lite-fix)
✅ **Use for**:
- Bugs with known symptoms
- Localized fixes (1-5 files)
- Non-urgent issues
- Cases that need a full diagnosis

### Hotfix Mode (/workflow:lite-fix --hotfix)
✅ **Use for**:
- Production incidents
- Urgent fixes
- Clear single-point failures
- Time-sensitive situations

❌ **Don't use** (for either mode):
- Architectural changes needed → `/workflow:plan --mode bugfix`
- Multiple related problems → `/issue:plan`

## Severity Classification

| Score | Severity | Response | Verification |
|-------|----------|----------|--------------|
| 8-10 | Critical | Immediate | Smoke test only |
| 6-7.9 | High | Fast track | Integration tests |
| 4-5.9 | Medium | Normal | Full test suite |
| 0-3.9 | Low | Scheduled | Comprehensive |

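The severity table can be sketched as a plain score-to-tier mapping. This is illustrative only; the function name `classifySeverity` and the returned shape are assumptions, not part of lite-fix:

```javascript
// Maps a 0-10 risk score to the severity tiers in the table above.
function classifySeverity(score) {
  if (score >= 8) return { severity: 'Critical', verification: 'smoke' };
  if (score >= 6) return { severity: 'High', verification: 'integration' };
  if (score >= 4) return { severity: 'Medium', verification: 'full' };
  return { severity: 'Low', verification: 'comprehensive' };
}
```

The `verification` field corresponds to the `autoSelect` behavior described in the Configuration section: higher severity trades verification depth for speed.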
## Configuration

```javascript
const bugfixConfig = {
  standard: {
    diagnosis: {
      tool: 'gemini',
      depth: 'comprehensive',
      timeout: 300000 // 5 min
    },
    impact: {
      riskThreshold: 6.0, // High risk threshold
      autoEscalate: true
    },
    verification: {
      levels: ['smoke', 'integration', 'full'],
      autoSelect: true // Based on severity
    }
  },

  hotfix: {
    diagnosis: {
      tool: 'gemini',
      depth: 'minimal',
      timeout: 60000 // 1 min
    },
    fix: {
      strategy: 'single', // Single optimal fix
      surgical: true
    },
    followup: {
      generate: true,
      types: ['comprehensive-fix', 'post-mortem']
    }
  }
}
```

## Example Invocations

```bash
# Standard bug fix
ccw "用户头像上传失败,返回 413 错误"
→ lite-fix
→ Diagnosis: File size limit in nginx
→ Impact: 6.5 (High)
→ Fix: Update nginx config + add client validation
→ Verify: Integration test

# Production hotfix
ccw "紧急:支付网关返回 5xx 错误,影响所有用户"
→ lite-fix --hotfix
→ Quick diagnosis: API key expired
→ Surgical fix: Rotate key
→ Smoke test: Payment flow
→ Follow-ups: Key rotation automation, monitoring alert

# Unknown root cause
ccw "购物车随机丢失商品,原因不明"
→ lite-fix
→ Deep diagnosis (auto)
→ Root cause: Race condition in concurrent updates
→ Fix: Add optimistic locking
→ Verify: Concurrent test suite
```

## Output Artifacts

```
.workflow/.lite-fix/{bug-slug}-{timestamp}/
├── diagnosis.json     # Root cause analysis
├── impact.json        # Risk assessment
├── fix-plan.json      # Fix strategy
├── task.json          # Enhanced task for execution
└── followup.json      # Follow-up tasks (hotfix only)
```

## Follow-up Tasks (Hotfix Mode)

```json
{
  "followups": [
    {
      "id": "FOLLOWUP-001",
      "type": "comprehensive-fix",
      "title": "Complete fix for payment gateway issue",
      "due": "3 days",
      "description": "Implement full solution with proper error handling"
    },
    {
      "id": "FOLLOWUP-002",
      "type": "post-mortem",
      "title": "Post-mortem analysis",
      "due": "1 week",
      "description": "Document incident and prevention measures"
    }
  ]
}
```
@@ -1,194 +0,0 @@

# Action: Coupled Workflow

Complex coupled workflow: full planning + verification + execution

## Pattern

```
plan → action-plan-verify → execute
```

## Trigger Conditions

- Complexity: High
- Keywords: "refactor", "重构", "migrate", "迁移", "architect", "架构"
- Cross-module changes
- System-level modifications

## Execution Flow

```mermaid
sequenceDiagram
    participant U as User
    participant O as CCW Orchestrator
    participant PL as plan
    participant VF as verify
    participant EX as execute
    participant RV as review

    U->>O: Complex task
    O->>O: Classify: coupled (high complexity)

    Note over PL: Phase 1: Comprehensive Planning
    O->>PL: /workflow:plan
    PL->>PL: Multi-phase planning
    PL->>PL: Generate IMPL_PLAN.md
    PL->>PL: Generate task JSONs
    PL-->>U: Present plan

    Note over VF: Phase 2: Verification
    U->>VF: /workflow:action-plan-verify
    VF->>VF: Cross-artifact consistency
    VF->>VF: Dependency validation
    VF->>VF: Quality gate checks
    VF-->>U: Verification report

    alt Verification failed
        U->>PL: Replan with issues
    else Verification passed
        Note over EX: Phase 3: Execution
        U->>EX: /workflow:execute
        EX->>EX: DAG-based parallel execution
        EX-->>U: Execution complete
    end

    Note over RV: Phase 4: Review
    U->>RV: /workflow:review
    RV-->>U: Review findings
```

## When to Use

✅ **Ideal scenarios**:
- Large-scale refactoring
- Architecture migrations
- Cross-module feature development
- Tech-stack upgrades
- Team collaboration projects

❌ **Avoid when**:
- Simple localized changes
- Tight deadlines
- Small independent features

## Verification Checks

| Check | Description | Severity |
|-------|-------------|----------|
| Dependency Cycles | Detect circular dependencies | Critical |
| Missing Tasks | Plan does not match actual tasks | High |
| File Conflicts | Multiple tasks modify the same file | Medium |
| Coverage Gaps | Requirements left uncovered | Medium |

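The "Dependency Cycles" check above amounts to cycle detection over the `depends_on` edges of the task JSONs. A minimal DFS sketch, assuming `tasks` is an array of `{ id, depends_on }` objects (the function name `hasDependencyCycle` is illustrative):

```javascript
// Sketch of the "Dependency Cycles" quality gate: DFS over depends_on edges.
function hasDependencyCycle(tasks) {
  const deps = new Map(tasks.map(t => [t.id, t.depends_on || []]));
  const state = new Map(); // undefined = unvisited, 1 = on stack, 2 = done
  const visit = (id) => {
    if (state.get(id) === 1) return true;  // back edge → cycle
    if (state.get(id) === 2) return false; // already cleared
    state.set(id, 1);
    const cyclic = (deps.get(id) || []).some(visit);
    state.set(id, 2);
    return cyclic;
  };
  return tasks.some(t => visit(t.id));
}
```

A failed check would map to the Replan Flow at the end of this document rather than proceeding to execution.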
## Configuration

```javascript
const coupledConfig = {
  plan: {
    phases: 5, // Full 5-phase planning
    taskGeneration: 'action-planning-agent',
    outputFormat: {
      implPlan: '.workflow/plans/IMPL_PLAN.md',
      taskJsons: '.workflow/tasks/IMPL-*.json'
    }
  },

  verify: {
    required: true,   // Always verify before execute
    autoReplan: false, // Manual replan on failure
    qualityGates: ['no-cycles', 'no-conflicts', 'complete-coverage']
  },

  execute: {
    dagParallel: true,
    checkpointInterval: 3, // Checkpoint every 3 tasks
    rollbackOnFailure: true
  },

  review: {
    types: ['architecture', 'security'],
    required: true
  }
}
```

## Task JSON Structure

```json
{
  "id": "IMPL-001",
  "title": "重构认证模块核心逻辑",
  "scope": "src/auth/**",
  "action": "refactor",
  "depends_on": [],
  "modification_points": [
    {
      "file": "src/auth/service.ts",
      "target": "AuthService",
      "change": "Extract OAuth2 logic"
    }
  ],
  "acceptance": [
    "所有现有测试通过",
    "OAuth2 流程可用"
  ]
}
```

## Example Invocations

```bash
# Architecture refactoring
ccw "重构整个认证模块,从 session 迁移到 JWT"
→ plan (5 phases)
→ verify
→ execute

# System migration
ccw "将数据库从 MySQL 迁移到 PostgreSQL"
→ plan (migration strategy)
→ verify (data integrity checks)
→ execute (staged migration)

# Cross-module feature
ccw "实现跨服务的分布式事务支持"
→ plan (architectural design)
→ verify (consistency checks)
→ execute (incremental rollout)
```

## Output Artifacts

```
.workflow/
├── plans/
│   └── IMPL_PLAN.md              # Comprehensive plan
├── tasks/
│   ├── IMPL-001.json
│   ├── IMPL-002.json
│   └── ...
├── verify/
│   └── verification-report.md    # Verification results
└── reviews/
    └── {review-type}.md          # Review findings
```

## Replan Flow

When verification fails:

```javascript
if (verificationResult.status === 'failed') {
  console.log(`
## Verification Failed

**Issues found**:
${verificationResult.issues.map(i => `- ${i.severity}: ${i.message}`).join('\n')}

**Options**:
1. /workflow:replan - Address issues and regenerate plan
2. /workflow:plan --force - Proceed despite issues (not recommended)
3. Review issues manually and fix plan files
`)
}
```
@@ -1,93 +0,0 @@

# Documentation Workflow Action

## Pattern
```
memory:docs → execute (full)
memory:update-related (incremental)
```

## Trigger Conditions

- Keywords: "文档", "documentation", "docs", "readme", "注释"
- Variant triggers:
  - `incremental`: "更新", "增量", "相关"
  - `full`: "全部", "完整", "所有"

## Variants

### Full Documentation
```mermaid
graph TD
    A[User Input] --> B[memory:docs]
    B --> C[Project structure analysis]
    C --> D[Module grouping ≤10/task]
    D --> E[execute: parallel generation]
    E --> F[README.md]
    E --> G[ARCHITECTURE.md]
    E --> H[API Docs]
    E --> I[Module CLAUDE.md]
```

### Incremental Update
```mermaid
graph TD
    A[Git Changes] --> B[memory:update-related]
    B --> C[Changed-module detection]
    C --> D[Locate related docs]
    D --> E[Incremental update]
```

## Configuration

| Parameter | Default | Description |
|------|--------|------|
| batch_size | 4 | Modules processed per agent |
| format | markdown | Output format |
| include_api | true | Generate API docs |
| include_diagrams | true | Generate Mermaid diagrams |

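The `batch_size` parameter above drives how modules are split across doc-generation agents. A minimal sketch of that chunking (the helper name `batchModules` is illustrative, not part of the `/memory:docs` implementation):

```javascript
// Splits a module list into chunks of at most batchSize,
// so each documentation agent handles a bounded amount of work.
function batchModules(modules, batchSize = 4) {
  const batches = [];
  for (let i = 0; i < modules.length; i += batchSize) {
    batches.push(modules.slice(i, i + batchSize));
  }
  return batches;
}
```

With the default of 4, a 10-module project yields three batches (4, 4, 2), which is how the "≤10/task" grouping in the diagram stays bounded per agent.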
## CLI Integration

| Phase | CLI Hint | Purpose |
|------|----------|------|
| memory:docs | `gemini --mode analysis` | Project structure analysis |
| execute | `gemini --mode write` | Documentation generation |
| update-related | `gemini --mode write` | Incremental updates |

## Slash Commands

```bash
/memory:docs              # Plan full documentation generation
/memory:docs-full-cli     # Generate full docs via CLI
/memory:docs-related-cli  # Generate incremental docs via CLI
/memory:update-related    # Update docs related to recent changes
/memory:update-full       # Update all CLAUDE.md files
```

## Output Structure

```
project/
├── README.md           # Project overview
├── ARCHITECTURE.md     # Architecture docs
├── docs/
│   └── api/            # API docs
└── src/
    └── module/
        └── CLAUDE.md   # Module docs
```

## When to Use

- Initial documentation for a new project
- Documentation refresh before a major release
- Syncing docs after code changes
- API documentation generation

## Risk Assessment

| Risk | Mitigation |
|------|----------|
| Docs drift out of sync with code | git hook integration |
| Generated content too verbose | batch_size control |
| Important modules missed | Full-scan verification |
@@ -1,154 +0,0 @@

# Action: Full Workflow

Full exploration workflow: analysis + brainstorming + planning + execution

## Pattern

```
brainstorm:auto-parallel → plan → [verify] → execute
```

## Trigger Conditions

- Intent: Exploration (uncertainty detected)
- Keywords: "不确定", "不知道", "explore", "怎么做", "what if"
- No clear implementation path

## Execution Flow

```mermaid
sequenceDiagram
    participant U as User
    participant O as CCW Orchestrator
    participant BS as brainstorm
    participant PL as plan
    participant VF as verify
    participant EX as execute

    U->>O: Unclear task
    O->>O: Classify: full

    Note over BS: Phase 1: Brainstorm
    O->>BS: /workflow:brainstorm:auto-parallel
    BS->>BS: Multi-role parallel analysis
    BS->>BS: Synthesis & recommendations
    BS-->>U: Present options
    U->>BS: Select direction

    Note over PL: Phase 2: Plan
    BS->>PL: /workflow:plan
    PL->>PL: Generate IMPL_PLAN.md
    PL->>PL: Generate task JSONs
    PL-->>U: Review plan

    Note over VF: Phase 3: Verify (optional)
    U->>VF: /workflow:action-plan-verify
    VF->>VF: Cross-artifact consistency
    VF-->>U: Verification report

    Note over EX: Phase 4: Execute
    U->>EX: /workflow:execute
    EX->>EX: DAG-based parallel execution
    EX-->>U: Execution complete
```

## When to Use

✅ **Ideal scenarios**:
- Product direction exploration
- Technology selection evaluation
- Architecture design decisions
- Complex feature planning
- Tasks needing multi-role perspectives

❌ **Avoid when**:
- The task is clear and simple
- Deadlines are tight
- A mature solution already exists

## Brainstorm Roles

| Role | Focus | Typical Questions |
|------|-------|-------------------|
| Product Manager | User value, market positioning | "What are the user pain points?" |
| System Architect | Technical approach, architecture design | "How do we ensure scalability?" |
| UX Expert | User experience, interaction design | "Is the user flow smooth?" |
| Security Expert | Security risks, compliance requirements | "What are the security risks?" |
| Data Architect | Data models, storage design | "How is the data organized?" |

## Configuration

```javascript
const fullConfig = {
  brainstorm: {
    defaultRoles: ['product-manager', 'system-architect', 'ux-expert'],
    maxRoles: 5,
    synthesis: true // Always generate synthesis
  },

  plan: {
    verifyBeforeExecute: true, // Recommend verification
    taskFormat: 'json' // Generate task JSONs
  },

  execute: {
    dagParallel: true, // DAG-based parallel execution
    testGeneration: 'optional' // Suggest test-gen after
  }
}
```

## Continuation Points

After each phase, CCW can continue to the next:

```javascript
// After brainstorm completes
console.log(`
## Brainstorm Complete

**Next steps**:
1. /workflow:plan "基于头脑风暴结果规划实施"
2. Or refine: /workflow:brainstorm:synthesis
`)

// After plan completes
console.log(`
## Plan Complete

**Next steps**:
1. /workflow:action-plan-verify (recommended)
2. /workflow:execute (execute directly)
`)
```

## Example Invocations

```bash
# Product exploration
ccw "我想做一个团队协作工具,但不确定具体方向"
→ brainstorm:auto-parallel (5 roles)
→ plan
→ execute

# Technical exploration
ccw "如何设计一个高可用的消息队列系统?"
→ brainstorm:auto-parallel (system-architect, data-architect)
→ plan
→ verify
→ execute
```

## Output Artifacts

```
.workflow/
├── brainstorm/
│   ├── {session}/
│   │   ├── role-{role}.md
│   │   └── synthesis.md
├── plans/
│   └── IMPL_PLAN.md
└── tasks/
    └── IMPL-*.json
```
@@ -1,201 +0,0 @@
|
# Action: Issue Workflow

Batch issue-processing workflow: planning + queueing + batch execution

## Pattern

```
issue:plan → issue:queue → issue:execute
```

## Trigger Conditions

- Keywords: "issues", "batch", "queue", "多个", "批量"
- Multiple related problems
- Long-running fix campaigns
- Priority-based processing needed
## Execution Flow

```mermaid
sequenceDiagram
    participant U as User
    participant O as CCW Orchestrator
    participant IP as issue:plan
    participant IQ as issue:queue
    participant IE as issue:execute

    U->>O: Multiple issues / batch fix
    O->>O: Classify: issue

    Note over IP: Phase 1: Issue Planning
    O->>IP: /issue:plan
    IP->>IP: Load unplanned issues
    IP->>IP: Generate solutions per issue
    IP->>U: Review solutions
    U->>IP: Bind selected solutions

    Note over IQ: Phase 2: Queue Formation
    IP->>IQ: /issue:queue
    IQ->>IQ: Conflict analysis
    IQ->>IQ: Priority calculation
    IQ->>IQ: DAG construction
    IQ->>U: High-severity conflicts?
    U->>IQ: Resolve conflicts
    IQ->>IQ: Generate execution queue

    Note over IE: Phase 3: Execution
    IQ->>IE: /issue:execute
    IE->>IE: DAG-based parallel execution
    IE->>IE: Per-solution progress tracking
    IE-->>U: Batch execution complete
```
## When to Use

✅ **Ideal scenarios**:
- Batch-fixing multiple related bugs
- Processing GitHub Issues in bulk
- Technical debt cleanup
- Batch-patching security vulnerabilities
- Code quality improvement campaigns

❌ **Avoid when**:
- A single issue → `/workflow:lite-fix`
- Independent, unrelated tasks → handle them separately
- Urgent production incidents → `/workflow:lite-fix --hotfix`
## Issue Lifecycle

```
draft → planned → queued → executing → completed
            ↓                  ↓
         skipped            on-hold
```
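The lifecycle above can be sketched as a small transition table. This is an illustrative helper, not part of the workflow implementation; the exact branch points for `skipped` and `on-hold` are assumptions read off the diagram.

```javascript
// Allowed transitions between lifecycle states. Which states may branch to
// `skipped` and `on-hold` is an assumption for illustration.
const TRANSITIONS = {
  draft: ['planned'],
  planned: ['queued', 'skipped'],
  queued: ['executing'],
  executing: ['completed', 'on-hold'],
  'on-hold': ['executing'],
  skipped: [],
  completed: []
}

// True when `to` is a legal successor of `from`.
function canTransition(from, to) {
  return (TRANSITIONS[from] || []).includes(to)
}
```

A queue implementation could reject any status update that fails this check before persisting it.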
## Conflict Types

| Type | Description | Resolution |
|------|-------------|------------|
| File | Multiple solutions modify the same file | Sequential execution |
| API | API signature changes affect callers | Dependency ordering |
| Data | Conflicting data-structure changes | User decision |
| Dependency | Package dependency conflicts | Version negotiation |
| Architecture | Conflicting architectural directions | User decision (high severity) |
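File-type conflicts are the simplest to detect mechanically. A minimal sketch, assuming each solution carries a `files` list (a hypothetical field name):

```javascript
// Flag solution pairs that touch the same file; such pairs must be
// scheduled sequentially rather than in a parallel group.
function findFileConflicts(solutions) {
  const conflicts = []
  for (let i = 0; i < solutions.length; i++) {
    for (let j = i + 1; j < solutions.length; j++) {
      const shared = solutions[i].files.filter(f => solutions[j].files.includes(f))
      if (shared.length > 0) {
        conflicts.push({ type: 'File', pair: [solutions[i].id, solutions[j].id], files: shared })
      }
    }
  }
  return conflicts
}
```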
## Configuration

```javascript
const issueConfig = {
  plan: {
    solutionsPerIssue: 3,       // Generate up to 3 solutions
    autoSelect: false,          // User must bind solution
    planningAgent: 'issue-plan-agent'
  },

  queue: {
    conflictAnalysis: true,
    priorityCalculation: true,
    clarifyThreshold: 'high',   // Ask user for high-severity conflicts
    queueAgent: 'issue-queue-agent'
  },

  execute: {
    dagParallel: true,
    executionLevel: 'solution', // Execute by solution, not task
    executor: 'codex',
    resumable: true
  }
}
```
## Example Invocations

```bash
# From GitHub Issues
ccw "批量处理所有 label:bug 的 GitHub Issues"
→ issue:new (import from GitHub)
→ issue:plan (generate solutions)
→ issue:queue (form execution queue)
→ issue:execute (batch execute)

# Tech debt cleanup
ccw "处理所有 TODO 注释和已知技术债务"
→ issue:discover (find issues)
→ issue:plan (plan solutions)
→ issue:queue (prioritize)
→ issue:execute (execute)

# Security vulnerabilities
ccw "修复所有 npm audit 报告的安全漏洞"
→ issue:new (from audit report)
→ issue:plan (upgrade strategies)
→ issue:queue (conflict resolution)
→ issue:execute (staged upgrades)
```
## Queue Structure

```json
{
  "queue_id": "QUE-20251227-143000",
  "status": "active",
  "execution_groups": [
    {
      "id": "P1",
      "type": "parallel",
      "solutions": ["SOL-ISS-001-1", "SOL-ISS-002-1"],
      "description": "Independent fixes, no file overlap"
    },
    {
      "id": "S1",
      "type": "sequential",
      "solutions": ["SOL-ISS-003-1"],
      "depends_on": ["P1"],
      "description": "Depends on P1 completion"
    }
  ]
}
```
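Group scheduling follows `depends_on`: a group becomes runnable only once all groups it depends on have finished. A simplified scheduling sketch (sequential, no real parallelism, illustrative only):

```javascript
// Return group IDs in an order where every group runs after its dependencies.
// Throws if `depends_on` contains a cycle.
function orderGroups(groups) {
  const done = new Set()
  const order = []
  let remaining = [...groups]
  while (remaining.length > 0) {
    // A group is ready when all of its dependencies are already done.
    const ready = remaining.filter(g => (g.depends_on || []).every(d => done.has(d)))
    if (ready.length === 0) throw new Error('Cycle in depends_on')
    for (const g of ready) { order.push(g.id); done.add(g.id) }
    remaining = remaining.filter(g => !done.has(g.id))
  }
  return order
}
```

For the example queue above this yields `P1` before `S1`; within a `parallel` group the solutions could then be dispatched concurrently.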
## Output Artifacts

```
.workflow/issues/
├── issues.jsonl          # All issues (one per line)
├── solutions/
│   ├── ISS-001.jsonl     # Solutions for ISS-001
│   └── ISS-002.jsonl
├── queues/
│   ├── index.json        # Queue index
│   └── QUE-xxx.json      # Queue details
└── execution/
    └── {queue-id}/
        ├── progress.json
        └── results/
```
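`issues.jsonl` stores one JSON object per line, so loading it is a split-and-parse. A minimal sketch (field names in the sample data are hypothetical, apart from the lifecycle `status`):

```javascript
// Parse a JSONL string into an array of issue objects, skipping blank lines.
function parseIssuesJsonl(text) {
  return text
    .split('\n')
    .filter(line => line.trim() !== '')
    .map(line => JSON.parse(line))
}
```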
## Progress Tracking

```javascript
// Real-time progress during execution
const progress = {
  queue_id: "QUE-xxx",
  total_solutions: 5,
  completed: 2,
  in_progress: 1,
  pending: 2,
  current_group: "P1",
  eta: "15 minutes"
}
```
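A tiny helper for turning such a progress object into a one-line status (field names as in the example above; the output format is illustrative):

```javascript
// Render a one-line human-readable summary of a queue progress object.
function summarizeProgress(p) {
  const pct = Math.round((p.completed / p.total_solutions) * 100)
  return `${p.queue_id}: ${p.completed}/${p.total_solutions} solutions (${pct}%), ` +
         `${p.in_progress} running, ${p.pending} pending, ETA ${p.eta}`
}
```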
## Resume Capability

```bash
# If execution interrupted
ccw "继续执行 issue 队列"
→ Detects active queue: QUE-xxx
→ Resumes from last checkpoint
→ /issue:execute --resume
```
# Action: Rapid Workflow

Rapid-iteration workflow combo: multi-model collaborative analysis + direct execution

## Pattern

```
lite-plan → lite-execute
```

## Trigger Conditions

- Complexity: Low to Medium
- Intent: Feature development
- Context: Clear requirements, known implementation path
- No uncertainty keywords
## Execution Flow

```mermaid
sequenceDiagram
    participant U as User
    participant O as CCW Orchestrator
    participant LP as lite-plan
    participant LE as lite-execute
    participant CLI as CLI Tools

    U->>O: Task description
    O->>O: Classify: rapid
    O->>LP: /workflow:lite-plan "task"

    LP->>LP: Complexity assessment
    LP->>CLI: Parallel explorations (if needed)
    CLI-->>LP: Exploration results
    LP->>LP: Generate plan.json
    LP->>U: Display plan, ask confirmation
    U->>LP: Confirm + select execution method

    LP->>LE: /workflow:lite-execute --in-memory
    LE->>CLI: Execute tasks (Agent/Codex)
    CLI-->>LE: Results
    LE->>LE: Optional code review
    LE-->>U: Execution complete
```
## When to Use

✅ **Ideal scenarios**:
- Adding a single feature (e.g. user avatar upload)
- Modifying an existing feature (e.g. updating form validation)
- Small refactors (e.g. extracting a shared method)
- Adding test cases
- Documentation updates

❌ **Avoid when**:
- The implementation approach is uncertain
- The work spans multiple modules
- Architectural decisions are needed
- Complex dependencies are involved
## Configuration

```javascript
const rapidConfig = {
  explorationThreshold: {
    // Force exploration if task mentions specific files
    forceExplore: /\b(file|文件|module|模块|class|类)\s*[::]?\s*\w+/i,
    // Skip exploration for simple tasks
    skipExplore: /\b(add|添加|create|创建)\s+(comment|注释|log|日志)/i
  },

  defaultExecution: 'Agent', // Agent for low complexity

  codeReview: {
    default: 'Skip',         // Skip review for simple tasks
    threshold: 'medium'      // Enable for medium+ complexity
  }
}
```
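How the two thresholds could interact when lite-plan decides whether to explore first. This is a sketch, and giving `skipExplore` precedence over `forceExplore` is an assumption, not documented behavior:

```javascript
const rapidConfig = {
  explorationThreshold: {
    forceExplore: /\b(file|文件|module|模块|class|类)\s*[::]?\s*\w+/i,
    skipExplore: /\b(add|添加|create|创建)\s+(comment|注释|log|日志)/i
  }
}

// Decide whether lite-plan should run exploration for a given task string.
function shouldExplore(task, config) {
  // Assumption: an explicit "simple task" match wins over a file/module mention.
  if (config.explorationThreshold.skipExplore.test(task)) return false
  if (config.explorationThreshold.forceExplore.test(task)) return true
  return false // default for rapid tasks: no exploration
}
```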
## Example Invocations

```bash
# Simple feature
ccw "添加用户退出登录按钮"
→ lite-plan → lite-execute (Agent)

# With exploration
ccw "优化 AuthService 的 token 刷新逻辑"
→ lite-plan -e → lite-execute (Agent, Gemini review)

# Medium complexity
ccw "实现用户偏好设置的本地存储"
→ lite-plan -e → lite-execute (Codex)
```
## Output Artifacts

```
.workflow/.lite-plan/{task-slug}-{date}/
├── exploration-*.json           # If exploration was triggered
├── explorations-manifest.json
└── plan.json                    # Implementation plan
```
# Review-Fix Workflow Action

## Pattern
```
review-session-cycle → review-fix
```

## Trigger Conditions

- Keywords: "review", "审查", "检查代码", "code review", "质量检查"
- Scenarios: PR review, code quality improvement, security audits
## Execution Flow

```mermaid
graph TD
    A[User Input] --> B[review-session-cycle]
    B --> C{7-dimension analysis}
    C --> D[Security]
    C --> E[Performance]
    C --> F[Maintainability]
    C --> G[Architecture]
    C --> H[Code Style]
    C --> I[Test Coverage]
    C --> J[Documentation]
    D & E & F & G & H & I & J --> K[Findings Aggregation]
    K --> L{Quality Gate}
    L -->|Pass| M[Report Only]
    L -->|Fail| N[review-fix]
    N --> O[Auto Fix]
    O --> P[Re-verify]
```
## Configuration

| Parameter | Default | Description |
|------|--------|------|
| dimensions | all | Review dimensions (security, performance, etc.) |
| quality_gate | 80 | Quality gate score threshold |
| auto_fix | true | Automatically fix discovered issues |
| severity_threshold | medium | Minimum severity to report |
## CLI Integration

| Phase | CLI Hint | Purpose |
|------|----------|------|
| review-session-cycle | `gemini --mode analysis` | Multi-dimensional deep analysis |
| review-fix | `codex --mode write` | Automatically fix findings |
## Slash Commands

```bash
/workflow:review-session-cycle   # Session-level code review
/workflow:review-module-cycle    # Module-level code review
/workflow:review-fix             # Auto-fix review findings
/workflow:review --type security # Targeted security review
```
## Review Dimensions

| Dimension | Checks |
|------|--------|
| Security | Injection, XSS, sensitive-data exposure |
| Performance | N+1 queries, memory leaks, algorithmic complexity |
| Maintainability | Code duplication, complexity, naming |
| Architecture | Dependency direction, layering violations, coupling |
| Code Style | Formatting, conventions, consistency |
| Test Coverage | Coverage rate, edge cases |
| Documentation | Comments, API docs, README |
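One way the quality gate could aggregate per-dimension scores against the `quality_gate` threshold. Equal weighting across dimensions is an assumption for illustration; the actual scoring model is not specified here:

```javascript
// Average the per-dimension scores and compare against the gate threshold
// (80 by default, matching the `quality_gate` setting). Equal weights assumed.
function passesQualityGate(scores, threshold = 80) {
  const values = Object.values(scores)
  const avg = values.reduce((a, b) => a + b, 0) / values.length
  return { avg, pass: avg >= threshold }
}
```

A failing gate would route the session into `review-fix`; a passing one stops at the report.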
## When to Use

- Pre-merge PR review
- Post-refactor quality verification
- Security compliance audits
- Technical debt assessment
## Risk Assessment

| Risk | Mitigation |
|------|----------|
| Too many false positives | Filtering via severity_threshold |
| Fixes introduce new issues | Re-verify loop |
| Incomplete review | 7-dimension coverage |
# TDD Workflow Action

## Pattern
```
tdd-plan → execute → tdd-verify
```

## Trigger Conditions

- Keywords: "tdd", "test-driven", "测试驱动", "先写测试", "red-green"
- Scenarios: strong quality guarantees needed, critical business logic, high regression risk
## Execution Flow

```mermaid
graph TD
    A[User Input] --> B[tdd-plan]
    B --> C{Generate test task chain}
    C --> D[Red phase: write failing tests]
    D --> E[execute: implement code]
    E --> F[Green phase: tests pass]
    F --> G{Refactor needed?}
    G -->|Yes| H[Refactor Phase]
    H --> F
    G -->|No| I[tdd-verify]
    I --> J[Quality report]
```
## Configuration

| Parameter | Default | Description |
|------|--------|------|
| coverage_target | 80% | Target coverage |
| cycle_limit | 10 | Maximum Red-Green-Refactor cycles |
| strict_mode | false | Strict mode (red must precede green) |
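The `cycle_limit` guard can be sketched as a driver loop around Red-Green-Refactor. The `runCycle` callback is hypothetical; in the real workflow each cycle would run tests and implementation via the execute phase:

```javascript
// Repeat Red-Green-Refactor cycles until the tests stay green or the cap
// (cycle_limit, default 10) is reached. runCycle returns 'red' or 'green'.
function runTddLoop(runCycle, cycleLimit = 10) {
  for (let cycle = 1; cycle <= cycleLimit; cycle++) {
    if (runCycle(cycle) === 'green') return { done: true, cycles: cycle }
  }
  return { done: false, cycles: cycleLimit } // hit the cap: escalate to the user
}
```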
## CLI Integration

| Phase | CLI Hint | Purpose |
|------|----------|------|
| tdd-plan | `gemini --mode analysis` | Analyze test strategy |
| execute | `codex --mode write` | Implement code |
| tdd-verify | `gemini --mode analysis` | Verify TDD compliance |
## Slash Commands

```bash
/workflow:tdd-plan    # Generate the TDD task chain
/workflow:execute     # Run Red-Green-Refactor
/workflow:tdd-verify  # Verify TDD compliance + coverage
```
## When to Use

- Core business logic development
- Modules that require high test coverage
- Refactoring existing code without breaking behavior
- Teams that mandate TDD practice
## Risk Assessment

| Risk | Mitigation |
|------|----------|
| Poor test granularity | Assess test boundaries during tdd-plan |
| Over-testing | Focus on behavior, not implementation |
| Too many cycles | Capped by cycle_limit |
# UI Design Workflow Action

## Pattern
```
ui-design:[explore|imitate]-auto → design-sync → plan → execute
```

## Trigger Conditions

- Keywords: "ui", "界面", "design", "组件", "样式", "布局", "前端"
- Variant triggers:
  - `imitate`: "参考", "模仿", "像", "类似"
  - `explore`: default when no specific reference is given
## Variants

### Explore (exploratory design)
```mermaid
graph TD
    A[User Input] --> B[ui-design:explore-auto]
    B --> C[Design system analysis]
    C --> D[Component structure planning]
    D --> E[design-sync]
    E --> F[plan]
    F --> G[execute]
```

### Imitate (reference-based design)
```mermaid
graph TD
    A[User Input + Reference] --> B[ui-design:imitate-auto]
    B --> C[Reference analysis]
    C --> D[Style extraction]
    D --> E[design-sync]
    E --> F[plan]
    F --> G[execute]
```
## Configuration

| Parameter | Default | Description |
|------|--------|------|
| design_system | auto | Design system (auto/tailwind/mui/custom) |
| responsive | true | Responsive design |
| accessibility | true | Accessibility support |
## CLI Integration

| Phase | CLI Hint | Purpose |
|------|----------|------|
| explore/imitate | `gemini --mode analysis` | Design analysis, style extraction |
| design-sync | - | Sync design decisions with the codebase |
| plan | - | Built-in planning |
| execute | `codex --mode write` | Component implementation |
## Slash Commands

```bash
/workflow:ui-design:explore-auto   # Exploratory UI design
/workflow:ui-design:imitate-auto   # Reference-based UI design
/workflow:ui-design:design-sync    # Sync design with code (key step)
/workflow:ui-design:style-extract  # Extract existing styles
/workflow:ui-design:codify-style   # Codify styles
```
## When to Use

- New page/component development
- UI refactoring or modernization
- Establishing a design system
- Drawing on another product's design
## Risk Assessment

| Risk | Mitigation |
|------|----------|
| Inconsistent design | style-extract ensures reuse |
| Responsive issues | Multi-breakpoint verification |
| Missing accessibility | Integrated a11y checks |
# CCW Orchestrator

Stateless orchestrator: analyze input → select workflow chain → execute with TODO tracking

## Architecture

```
┌──────────────────────────────────────────────────────────────────┐
│                         CCW Orchestrator                         │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│ Phase 1: Input Analysis                                          │
│ ├─ Parse input (natural language / explicit command)             │
│ ├─ Classify intent (bugfix / feature / issue / ui / docs)        │
│ └─ Assess complexity (low / medium / high)                       │
│                                                                  │
│ Phase 2: Chain Selection                                         │
│ ├─ Load index/workflow-chains.json                               │
│ ├─ Match intent → chain(s)                                       │
│ ├─ Filter by complexity                                          │
│ └─ Select optimal chain                                          │
│                                                                  │
│ Phase 3: User Confirmation (optional)                            │
│ ├─ Display selected chain and steps                              │
│ └─ Allow modification or manual selection                        │
│                                                                  │
│ Phase 4: TODO Tracking Setup                                     │
│ ├─ Create TodoWrite with chain steps                             │
│ └─ Mark first step as in_progress                                │
│                                                                  │
│ Phase 5: Execution Loop                                          │
│ ├─ Execute current step (SlashCommand)                           │
│ ├─ Update TODO status (completed)                                │
│ ├─ Check auto_continue flag                                      │
│ └─ Proceed to next step or wait for user                         │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘
```
## Implementation

### Phase 1: Input Analysis

```javascript
// Load external configuration (externalized for flexibility)
const intentRules = JSON.parse(Read('.claude/skills/ccw/index/intent-rules.json'))
const capabilities = JSON.parse(Read('.claude/skills/ccw/index/command-capabilities.json'))

function analyzeInput(userInput) {
  const input = userInput.trim()

  // Check for explicit command passthrough
  if (input.match(/^\/(?:workflow|issue|memory|task):/)) {
    return { type: 'explicit', command: input, passthrough: true }
  }

  // Classify intent using external rules
  const intent = classifyIntent(input, intentRules.intent_patterns)

  // Assess complexity using external indicators
  const complexity = assessComplexity(input, intentRules.complexity_indicators)

  // Detect tool preferences using external triggers
  const toolPreference = detectToolPreference(input, intentRules.cli_tool_triggers)

  return {
    type: 'natural',
    text: input,
    intent,
    complexity,
    toolPreference,
    passthrough: false
  }
}

function classifyIntent(text, patterns) {
  // Sort by priority
  const sorted = Object.entries(patterns)
    .sort((a, b) => a[1].priority - b[1].priority)

  for (const [intentType, config] of sorted) {
    // Handle variants (bugfix, ui, docs)
    if (config.variants) {
      for (const [variant, variantConfig] of Object.entries(config.variants)) {
        const variantPatterns = variantConfig.patterns || variantConfig.triggers || []
        if (matchesAnyPattern(text, variantPatterns)) {
          // For bugfix, check if standard patterns also match
          if (intentType === 'bugfix') {
            const standardMatch = matchesAnyPattern(text, config.variants.standard?.patterns || [])
            if (standardMatch) {
              return { type: intentType, variant, workflow: variantConfig.workflow }
            }
          } else {
            return { type: intentType, variant, workflow: variantConfig.workflow }
          }
        }
      }
      // Check default variant
      if (config.variants.standard) {
        if (matchesAnyPattern(text, config.variants.standard.patterns)) {
          return { type: intentType, variant: 'standard', workflow: config.variants.standard.workflow }
        }
      }
    }

    // Handle simple patterns (exploration, tdd, review)
    if (config.patterns && !config.require_both) {
      if (matchesAnyPattern(text, config.patterns)) {
        return { type: intentType, workflow: config.workflow }
      }
    }

    // Handle dual-pattern matching (issue_batch)
    if (config.require_both && config.patterns) {
      const matchBatch = matchesAnyPattern(text, config.patterns.batch_keywords)
      const matchAction = matchesAnyPattern(text, config.patterns.action_keywords)
      if (matchBatch && matchAction) {
        return { type: intentType, workflow: config.workflow }
      }
    }
  }

  // Default to feature
  return { type: 'feature' }
}

function matchesAnyPattern(text, patterns) {
  if (!Array.isArray(patterns)) return false
  const lowerText = text.toLowerCase()
  return patterns.some(p => lowerText.includes(p.toLowerCase()))
}

function assessComplexity(text, indicators) {
  let score = 0

  for (const [level, config] of Object.entries(indicators)) {
    if (config.patterns) {
      for (const [category, patternConfig] of Object.entries(config.patterns)) {
        if (matchesAnyPattern(text, patternConfig.keywords)) {
          score += patternConfig.weight || 1
        }
      }
    }
  }

  if (score >= indicators.high.score_threshold) return 'high'
  if (score >= indicators.medium.score_threshold) return 'medium'
  return 'low'
}

function detectToolPreference(text, triggers) {
  for (const [tool, config] of Object.entries(triggers)) {
    // Check explicit triggers
    if (matchesAnyPattern(text, config.explicit)) return tool
    // Check semantic triggers
    if (matchesAnyPattern(text, config.semantic)) return tool
  }
  return null
}
```
### Phase 2: Chain Selection

```javascript
// Load workflow chains index
const chains = JSON.parse(Read('.claude/skills/ccw/index/workflow-chains.json'))

function selectChain(analysis) {
  const { intent, complexity } = analysis

  // Map intent type (from intent-rules.json) to chain ID (from workflow-chains.json)
  const chainMapping = {
    'bugfix': 'bugfix',
    'issue_batch': 'issue',  // intent-rules.json key → chains.json chain ID
    'exploration': 'full',
    'ui_design': 'ui',       // intent-rules.json key → chains.json chain ID
    'tdd': 'tdd',
    'review': 'review-fix',
    'documentation': 'docs', // intent-rules.json key → chains.json chain ID
    'feature': null          // Use complexity fallback
  }

  let chainId = chainMapping[intent.type]

  // Fallback to complexity-based selection
  if (!chainId) {
    chainId = chains.chain_selection_rules.complexity_fallback[complexity]
  }

  const chain = chains.chains[chainId]

  // Handle variants
  let steps = chain.steps
  if (chain.variants) {
    const variant = intent.variant || Object.keys(chain.variants)[0]
    steps = chain.variants[variant].steps
  }

  return {
    id: chainId,
    name: chain.name,
    description: chain.description,
    steps,
    complexity: chain.complexity,
    estimated_time: chain.estimated_time
  }
}
```
### Phase 3: User Confirmation

```javascript
function confirmChain(selectedChain, analysis) {
  // Skip confirmation for simple chains
  if (selectedChain.steps.length <= 2 && analysis.complexity === 'low') {
    return selectedChain
  }

  console.log(`
## CCW Workflow Selection

**Task**: ${analysis.text.substring(0, 80)}...
**Intent**: ${analysis.intent.type}${analysis.intent.variant ? ` (${analysis.intent.variant})` : ''}
**Complexity**: ${analysis.complexity}

**Selected Chain**: ${selectedChain.name}
**Description**: ${selectedChain.description}
**Estimated Time**: ${selectedChain.estimated_time}

**Steps**:
${selectedChain.steps.map((s, i) => `${i + 1}. ${s.command}${s.optional ? ' (optional)' : ''}`).join('\n')}
`)

  const response = AskUserQuestion({
    questions: [{
      question: `Proceed with ${selectedChain.name}?`,
      header: "Confirm",
      multiSelect: false,
      options: [
        { label: "Proceed", description: `Execute ${selectedChain.steps.length} steps` },
        { label: "Rapid", description: "Use lite-plan → lite-execute" },
        { label: "Full", description: "Use brainstorm → plan → execute" },
        { label: "Manual", description: "Specify commands manually" }
      ]
    }]
  })

  // Handle alternative selection
  if (response.Confirm === 'Rapid') {
    return selectChain({ intent: { type: 'feature' }, complexity: 'low' })
  }
  if (response.Confirm === 'Full') {
    return chains.chains['full']
  }
  if (response.Confirm === 'Manual') {
    return null // User will specify
  }

  return selectedChain
}
```
### Phase 4: TODO Tracking Setup

```javascript
function setupTodoTracking(chain, analysis) {
  const todos = chain.steps.map((step, index) => ({
    content: `[${index + 1}/${chain.steps.length}] ${step.command}`,
    status: index === 0 ? 'in_progress' : 'pending',
    activeForm: `Executing ${step.command}`
  }))

  // Add header todo
  todos.unshift({
    content: `CCW: ${chain.name} (${chain.steps.length} steps)`,
    status: 'in_progress',
    activeForm: `Running ${chain.name} workflow`
  })

  TodoWrite({ todos })

  return {
    chain,
    currentStep: 0,
    todos
  }
}
```
### Phase 5: Execution Loop

```javascript
async function executeChain(execution, analysis) {
  const { chain, todos } = execution
  let currentStep = 0

  while (currentStep < chain.steps.length) {
    const step = chain.steps[currentStep]

    // Update TODO: mark current as in_progress
    const updatedTodos = todos.map((t, i) => ({
      ...t,
      status: i === 0
        ? 'in_progress'
        : i === currentStep + 1
          ? 'in_progress'
          : i <= currentStep
            ? 'completed'
            : 'pending'
    }))
    TodoWrite({ todos: updatedTodos })

    console.log(`\n### Step ${currentStep + 1}/${chain.steps.length}: ${step.command}\n`)

    // Check for confirmation requirement
    if (step.confirm_before) {
      const proceed = AskUserQuestion({
        questions: [{
          question: `Ready to execute ${step.command}?`,
          header: "Step",
          multiSelect: false,
          options: [
            { label: "Execute", description: "Run this step" },
            { label: "Skip", description: "Skip to next step" },
            { label: "Abort", description: "Stop workflow" }
          ]
        }]
      })

      if (proceed.Step === 'Skip') {
        currentStep++
        continue
      }
      if (proceed.Step === 'Abort') {
        break
      }
    }

    // Execute the command
    const args = analysis.text
    SlashCommand(step.command, { args })

    // Mark step as completed
    updatedTodos[currentStep + 1].status = 'completed'
    TodoWrite({ todos: updatedTodos })

    currentStep++

    // Check auto_continue
    if (!step.auto_continue && currentStep < chain.steps.length) {
      console.log(`
Step completed. Next: ${chain.steps[currentStep].command}
Type "continue" to proceed or specify a different action.
`)
      // Wait for user input before continuing
      break
    }
  }

  // Final status
  if (currentStep >= chain.steps.length) {
    const finalTodos = todos.map(t => ({ ...t, status: 'completed' }))
    TodoWrite({ todos: finalTodos })

    console.log(`\n✓ ${chain.name} workflow completed (${chain.steps.length} steps)`)
  }

  return { completed: currentStep, total: chain.steps.length }
}
```
## Main Orchestration Entry

```javascript
async function ccwOrchestrate(userInput) {
  console.log('## CCW Orchestrator\n')

  // Phase 1: Analyze input
  const analysis = analyzeInput(userInput)

  // Handle explicit command passthrough
  if (analysis.passthrough) {
    console.log(`Direct command: ${analysis.command}`)
    return SlashCommand(analysis.command)
  }

  // Phase 2: Select chain
  const selectedChain = selectChain(analysis)

  // Phase 3: Confirm (for complex workflows)
  const confirmedChain = confirmChain(selectedChain, analysis)
  if (!confirmedChain) {
    console.log('Manual mode selected. Specify commands directly.')
    return
  }

  // Phase 4: Setup TODO tracking
  const execution = setupTodoTracking(confirmedChain, analysis)

  // Phase 5: Execute
  const result = await executeChain(execution, analysis)

  return result
}
```
## Decision Matrix

| Intent | Complexity | Chain | Steps |
|--------|------------|-------|-------|
| bugfix (standard) | * | bugfix | lite-fix |
| bugfix (hotfix) | * | bugfix | lite-fix --hotfix |
| issue | * | issue | plan → queue → execute |
| exploration | * | full | brainstorm → plan → execute |
| ui (explore) | * | ui | ui-design:explore → sync → plan → execute |
| ui (imitate) | * | ui | ui-design:imitate → sync → plan → execute |
| tdd | * | tdd | tdd-plan → execute → tdd-verify |
| review | * | review-fix | review-session-cycle → review-fix |
| docs | low | docs | update-related |
| docs | medium+ | docs | docs → execute |
| feature | low | rapid | lite-plan → lite-execute |
| feature | medium | coupled | plan → verify → execute |
| feature | high | full | brainstorm → plan → execute |
## Continuation Commands

After each step pauses:

| User Input | Action |
|------------|--------|
| `continue` | Execute next step |
| `skip` | Skip current step |
| `abort` | Stop workflow |
| `/workflow:*` | Execute specific command |
| Natural language | Re-analyze and potentially switch chains |
# Intent Classification Specification

CCW intent-classification spec: defines how task intent is recognized from user input and how the optimal workflow is selected.

## Classification Hierarchy

```
Intent Classification
├── Priority 1: Explicit Commands
│   └── /workflow:*, /issue:*, /memory:*, /task:*
├── Priority 2: Bug Keywords
│   ├── Hotfix: urgent + bug keywords
│   └── Standard: bug keywords only
├── Priority 3: Issue Batch
│   └── Multiple + fix keywords
├── Priority 4: Exploration
│   └── Uncertainty keywords
├── Priority 5: UI/Design
│   └── Visual/component keywords
└── Priority 6: Complexity Fallback
    ├── High → Coupled
    ├── Medium → Rapid
    └── Low → Rapid
```
## Keyword Patterns

### Bug Detection

```javascript
const BUG_PATTERNS = {
  core: /\b(fix|bug|error|issue|crash|broken|fail|wrong|incorrect|修复|报错|错误|问题|异常|崩溃|失败)\b/i,

  urgency: /\b(hotfix|urgent|production|critical|emergency|asap|immediately|紧急|生产|线上|马上|立即)\b/i,

  symptoms: /\b(not working|doesn't work|can't|cannot|won't|stopped|stopped working|无法|不能|不工作)\b/i,

  errors: /\b(\d{3}\s*error|exception|stack\s*trace|undefined|null\s*pointer|timeout)\b/i
}

function detectBug(text) {
  const isBug = BUG_PATTERNS.core.test(text) || BUG_PATTERNS.symptoms.test(text)
  const isUrgent = BUG_PATTERNS.urgency.test(text)
  const hasError = BUG_PATTERNS.errors.test(text)

  if (!isBug && !hasError) return null

  return {
    type: 'bugfix',
    mode: isUrgent ? 'hotfix' : 'standard',
    confidence: (isBug && hasError) ? 'high' : 'medium'
  }
}
```
### Issue Batch Detection

```javascript
const ISSUE_PATTERNS = {
  batch: /\b(issues?|batch|queue|multiple|several|all|多个|批量|一系列|所有|这些)\b/i,
  action: /\b(fix|resolve|handle|process|处理|解决|修复)\b/i,
  source: /\b(github|jira|linear|backlog|todo|待办)\b/i
}

function detectIssueBatch(text) {
  const hasBatch = ISSUE_PATTERNS.batch.test(text)
  const hasAction = ISSUE_PATTERNS.action.test(text)
  const hasSource = ISSUE_PATTERNS.source.test(text)

  if (hasBatch && hasAction) {
    return {
      type: 'issue',
      confidence: hasSource ? 'high' : 'medium'
    }
  }
  return null
}
```
### Exploration Detection

```javascript
const EXPLORATION_PATTERNS = {
  uncertainty: /\b(不确定|不知道|not sure|unsure|how to|怎么|如何|what if|should i|could i|是否应该)\b/i,

  exploration: /\b(explore|research|investigate|分析|研究|调研|评估|探索|了解)\b/i,

  options: /\b(options|alternatives|approaches|方案|选择|方向|可能性)\b/i,

  questions: /\b(what|which|how|why|什么|哪个|怎样|为什么)\b.*\?/i
}

function detectExploration(text) {
  const hasUncertainty = EXPLORATION_PATTERNS.uncertainty.test(text)
  const hasExploration = EXPLORATION_PATTERNS.exploration.test(text)
  const hasOptions = EXPLORATION_PATTERNS.options.test(text)
  const hasQuestion = EXPLORATION_PATTERNS.questions.test(text)

  const score = [hasUncertainty, hasExploration, hasOptions, hasQuestion].filter(Boolean).length

  if (score >= 2 || hasUncertainty) {
    return {
      type: 'exploration',
      confidence: score >= 3 ? 'high' : 'medium'
    }
  }
  return null
}
```
### UI/Design Detection

```javascript
const UI_PATTERNS = {
  components: /\b(ui|界面|component|组件|button|按钮|form|表单|modal|弹窗|dialog|对话框)\b/i,

  design: /\b(design|设计|style|样式|layout|布局|theme|主题|color|颜色)\b/i,

  visual: /\b(visual|视觉|animation|动画|responsive|响应式|mobile|移动端)\b/i,

  frontend: /\b(frontend|前端|react|vue|angular|css|html|page|页面)\b/i
}

function detectUI(text) {
  const hasComponents = UI_PATTERNS.components.test(text)
  const hasDesign = UI_PATTERNS.design.test(text)
  const hasVisual = UI_PATTERNS.visual.test(text)
  const hasFrontend = UI_PATTERNS.frontend.test(text)

  const score = [hasComponents, hasDesign, hasVisual, hasFrontend].filter(Boolean).length

  if (score >= 2) {
    return {
      type: 'ui',
      hasReference: /参考|reference|based on|像|like|模仿|imitate/.test(text),
      confidence: score >= 3 ? 'high' : 'medium'
    }
  }
  return null
}
```
||||
|
||||
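The same spot-check applies here; patterns are copied from the block above and the sample requests are illustrative.

```javascript
// Illustrative check of detectUI (patterns copied from above).
const UI_PATTERNS = {
  components: /\b(ui|界面|component|组件|button|按钮|form|表单|modal|弹窗|dialog|对话框)\b/i,
  design: /\b(design|设计|style|样式|layout|布局|theme|主题|color|颜色)\b/i,
  visual: /\b(visual|视觉|animation|动画|responsive|响应式|mobile|移动端)\b/i,
  frontend: /\b(frontend|前端|react|vue|angular|css|html|page|页面)\b/i
}

function detectUI(text) {
  const score = [
    UI_PATTERNS.components.test(text),
    UI_PATTERNS.design.test(text),
    UI_PATTERNS.visual.test(text),
    UI_PATTERNS.frontend.test(text)
  ].filter(Boolean).length
  if (score >= 2) {
    return {
      type: 'ui',
      hasReference: /参考|reference|based on|像|like|模仿|imitate/.test(text),
      confidence: score >= 3 ? 'high' : 'medium'
    }
  }
  return null
}

// components + design + visual keywords → score 3 → high confidence, no reference
const hit = detectUI('Restyle the login form button and modal layout for mobile')
// backend-only request → null
const miss = detectUI('Fix the database timeout')
```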
## Complexity Assessment

### Indicators

```javascript
const COMPLEXITY_INDICATORS = {
  high: {
    patterns: [
      /\b(refactor|重构|restructure|重新组织)\b/i,
      /\b(migrate|迁移|upgrade|升级|convert|转换)\b/i,
      /\b(architect|架构|system|系统|infrastructure|基础设施)\b/i,
      /\b(entire|整个|complete|完整|all\s+modules?|所有模块)\b/i,
      /\b(security|安全|scale|扩展|performance\s+critical|性能关键)\b/i,
      /\b(distributed|分布式|microservice|微服务|cluster|集群)\b/i
    ],
    weight: 2
  },

  medium: {
    patterns: [
      /\b(integrate|集成|connect|连接|link|链接)\b/i,
      /\b(api|database|数据库|service|服务|endpoint|接口)\b/i,
      /\b(test|测试|validate|验证|coverage|覆盖)\b/i,
      /\b(multiple\s+files?|多个文件|several\s+components?|几个组件)\b/i,
      /\b(authentication|认证|authorization|授权)\b/i
    ],
    weight: 1
  },

  low: {
    patterns: [
      /\b(add|添加|create|创建|simple|简单)\b/i,
      /\b(update|更新|modify|修改|change|改变)\b/i,
      /\b(single|单个|one|一个|small|小)\b/i,
      /\b(comment|注释|log|日志|print|打印)\b/i
    ],
    weight: -1
  }
}

function assessComplexity(text) {
  let score = 0

  for (const [level, config] of Object.entries(COMPLEXITY_INDICATORS)) {
    for (const pattern of config.patterns) {
      if (pattern.test(text)) {
        score += config.weight
      }
    }
  }

  // File count indicator
  const fileMatches = text.match(/\b\d+\s*(files?|文件)/i)
  if (fileMatches) {
    const count = parseInt(fileMatches[0])
    if (count > 10) score += 2
    else if (count > 5) score += 1
  }

  // Module count indicator
  const moduleMatches = text.match(/\b\d+\s*(modules?|模块)/i)
  if (moduleMatches) {
    const count = parseInt(moduleMatches[0])
    if (count > 3) score += 2
    else if (count > 1) score += 1
  }

  if (score >= 4) return 'high'
  if (score >= 2) return 'medium'
  return 'low'
}
```

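The weighted scoring can be spot-checked in Node. This is a condensed copy of the indicators and function above (the module-count branch is omitted for brevity); the two sample inputs are illustrative.

```javascript
// Condensed copy of assessComplexity for illustration.
const COMPLEXITY_INDICATORS = {
  high: {
    patterns: [
      /\b(refactor|重构|restructure|重新组织)\b/i,
      /\b(migrate|迁移|upgrade|升级|convert|转换)\b/i,
      /\b(architect|架构|system|系统|infrastructure|基础设施)\b/i,
      /\b(entire|整个|complete|完整|all\s+modules?|所有模块)\b/i,
      /\b(security|安全|scale|扩展|performance\s+critical|性能关键)\b/i,
      /\b(distributed|分布式|microservice|微服务|cluster|集群)\b/i
    ],
    weight: 2
  },
  medium: {
    patterns: [
      /\b(integrate|集成|connect|连接|link|链接)\b/i,
      /\b(api|database|数据库|service|服务|endpoint|接口)\b/i,
      /\b(test|测试|validate|验证|coverage|覆盖)\b/i,
      /\b(multiple\s+files?|多个文件|several\s+components?|几个组件)\b/i,
      /\b(authentication|认证|authorization|授权)\b/i
    ],
    weight: 1
  },
  low: {
    patterns: [
      /\b(add|添加|create|创建|simple|简单)\b/i,
      /\b(update|更新|modify|修改|change|改变)\b/i,
      /\b(single|单个|one|一个|small|小)\b/i,
      /\b(comment|注释|log|日志|print|打印)\b/i
    ],
    weight: -1
  }
}

function assessComplexity(text) {
  let score = 0
  for (const config of Object.values(COMPLEXITY_INDICATORS)) {
    for (const pattern of config.patterns) {
      if (pattern.test(text)) score += config.weight
    }
  }
  // File count indicator (parseInt reads the leading digits of the match)
  const fileMatches = text.match(/\b\d+\s*(files?|文件)/i)
  if (fileMatches) {
    const count = parseInt(fileMatches[0])
    if (count > 10) score += 2
    else if (count > 5) score += 1
  }
  if (score >= 4) return 'high'
  if (score >= 2) return 'medium'
  return 'low'
}

// refactor/system/entire (+6), authentication (+1), 12 files (+2) → 'high'
const big = assessComplexity('Refactor the entire authentication system across 12 files')
// add/small/one/log all hit the -1 patterns → 'low'
const small = assessComplexity('Add a small log line to one helper')
```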
## Workflow Selection Matrix

| Intent | Complexity | Workflow | Commands |
|--------|------------|----------|----------|
| bugfix (hotfix) | * | bugfix | `lite-fix --hotfix` |
| bugfix (standard) | * | bugfix | `lite-fix` |
| issue | * | issue | `issue:plan → queue → execute` |
| exploration | * | full | `brainstorm → plan → execute` |
| ui (reference) | * | ui | `ui-design:imitate-auto → plan` |
| ui (explore) | * | ui | `ui-design:explore-auto → plan` |
| feature | high | coupled | `plan → verify → execute` |
| feature | medium | rapid | `lite-plan → lite-execute` |
| feature | low | rapid | `lite-plan → lite-execute` |

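One possible encoding of the matrix as a lookup function, an illustrative sketch only; the command strings come from the table, not from any real configuration file.

```javascript
// Illustrative encoding of the workflow selection matrix.
function selectWorkflow(intent) {
  const { type, mode, hasReference, complexity } = intent
  if (type === 'bugfix') {
    return { workflow: 'bugfix', commands: mode === 'hotfix' ? 'lite-fix --hotfix' : 'lite-fix' }
  }
  if (type === 'issue') return { workflow: 'issue', commands: 'issue:plan → queue → execute' }
  if (type === 'exploration') return { workflow: 'full', commands: 'brainstorm → plan → execute' }
  if (type === 'ui') {
    return {
      workflow: 'ui',
      commands: hasReference ? 'ui-design:imitate-auto → plan' : 'ui-design:explore-auto → plan'
    }
  }
  // feature fallback: only here does complexity change the route
  return complexity === 'high'
    ? { workflow: 'coupled', commands: 'plan → verify → execute' }
    : { workflow: 'rapid', commands: 'lite-plan → lite-execute' }
}

const hotfix = selectWorkflow({ type: 'bugfix', mode: 'hotfix' })
const bigFeature = selectWorkflow({ type: 'feature', complexity: 'high' })
```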
## Confidence Levels

| Level | Description | Action |
|-------|-------------|--------|
| **high** | Multiple strong indicators match | Direct dispatch |
| **medium** | Some indicators match | Confirm with user |
| **low** | Fallback classification | Always confirm |

## Tool Preference Detection

```javascript
const TOOL_PREFERENCES = {
  gemini: {
    pattern: /用\s*gemini|gemini\s*(分析|理解|设计)|让\s*gemini/i,
    capability: 'analysis'
  },
  qwen: {
    pattern: /用\s*qwen|qwen\s*(分析|评估)|让\s*qwen/i,
    capability: 'analysis'
  },
  codex: {
    pattern: /用\s*codex|codex\s*(实现|重构|修复)|让\s*codex/i,
    capability: 'implementation'
  }
}

function detectToolPreference(text) {
  for (const [tool, config] of Object.entries(TOOL_PREFERENCES)) {
    if (config.pattern.test(text)) {
      return { tool, capability: config.capability }
    }
  }
  return null
}
```

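The patterns intentionally match Chinese phrasings such as "用 gemini ..." ("use gemini ..."); a quick check, with the definitions copied from the block above:

```javascript
// Illustrative check of detectToolPreference (definitions copied from above).
const TOOL_PREFERENCES = {
  gemini: { pattern: /用\s*gemini|gemini\s*(分析|理解|设计)|让\s*gemini/i, capability: 'analysis' },
  qwen:   { pattern: /用\s*qwen|qwen\s*(分析|评估)|让\s*qwen/i, capability: 'analysis' },
  codex:  { pattern: /用\s*codex|codex\s*(实现|重构|修复)|让\s*codex/i, capability: 'implementation' }
}

function detectToolPreference(text) {
  for (const [tool, config] of Object.entries(TOOL_PREFERENCES)) {
    if (config.pattern.test(text)) return { tool, capability: config.capability }
  }
  return null
}

// "用 gemini 分析这个模块" = "use gemini to analyze this module"
const pref = detectToolPreference('用 gemini 分析这个模块')
// no tool named → null
const none = detectToolPreference('refactor the parser')
```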
## Multi-Tool Collaboration Detection

```javascript
const COLLABORATION_PATTERNS = {
  sequential: /先.*(分析|理解).*然后.*(实现|重构)|分析.*后.*实现/i,
  parallel: /(同时|并行).*(分析|实现)|一边.*一边/i,
  hybrid: /(分析|设计).*和.*(实现|测试).*分开/i
}

function detectCollaboration(text) {
  if (COLLABORATION_PATTERNS.sequential.test(text)) {
    return { mode: 'sequential', description: 'Analysis first, then implementation' }
  }
  if (COLLABORATION_PATTERNS.parallel.test(text)) {
    return { mode: 'parallel', description: 'Concurrent analysis and implementation' }
  }
  if (COLLABORATION_PATTERNS.hybrid.test(text)) {
    return { mode: 'hybrid', description: 'Mixed parallel and sequential' }
  }
  return null
}
```

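These patterns match Chinese ordering words ("先 ... 然后 ..." = "first ... then ..."). A minimal check with the definitions copied from above:

```javascript
// Illustrative check of detectCollaboration (patterns copied from above).
const COLLABORATION_PATTERNS = {
  sequential: /先.*(分析|理解).*然后.*(实现|重构)|分析.*后.*实现/i,
  parallel: /(同时|并行).*(分析|实现)|一边.*一边/i,
  hybrid: /(分析|设计).*和.*(实现|测试).*分开/i
}

function detectCollaboration(text) {
  if (COLLABORATION_PATTERNS.sequential.test(text)) return { mode: 'sequential' }
  if (COLLABORATION_PATTERNS.parallel.test(text)) return { mode: 'parallel' }
  if (COLLABORATION_PATTERNS.hybrid.test(text)) return { mode: 'hybrid' }
  return null
}

// "first analyze the existing code, then refactor this module" → sequential
const seq = detectCollaboration('先分析现有代码,然后重构这个模块')
// a plain fix request names no collaboration pattern → null
const none = detectCollaboration('修复这个 bug')
```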
## Classification Pipeline

```javascript
function classify(userInput) {
  const text = userInput.trim()

  // Step 1: Check explicit commands
  if (/^\/(?:workflow|issue|memory|task):/.test(text)) {
    return { type: 'explicit', command: text }
  }

  // Step 2: Priority-based classification
  const bugResult = detectBug(text)
  if (bugResult) return bugResult

  const issueResult = detectIssueBatch(text)
  if (issueResult) return issueResult

  const explorationResult = detectExploration(text)
  if (explorationResult) return explorationResult

  const uiResult = detectUI(text)
  if (uiResult) return uiResult

  // Step 3: Complexity-based fallback
  const complexity = assessComplexity(text)
  return {
    type: 'feature',
    complexity,
    workflow: complexity === 'high' ? 'coupled' : 'rapid',
    confidence: 'low'
  }
}
```

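Step 1 is worth isolating: an explicit slash command short-circuits every detector. A minimal sketch of just that step, with illustrative inputs:

```javascript
// Minimal sketch of Step 1: explicit slash commands bypass all detectors.
function classifyExplicit(text) {
  const trimmed = text.trim()
  if (/^\/(?:workflow|issue|memory|task):/.test(trimmed)) {
    return { type: 'explicit', command: trimmed }
  }
  return null // fall through to the priority detectors in classify()
}

const explicit = classifyExplicit('/workflow:plan add dark mode')
const implicit = classifyExplicit('add dark mode')
```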
## Examples

### Input → Classification

| Input | Classification | Workflow |
|-------|----------------|----------|
| "用户登录失败,401错误" (user login fails with a 401 error) | bugfix/standard | lite-fix |
| "紧急:支付网关挂了" (urgent: the payment gateway is down) | bugfix/hotfix | lite-fix --hotfix |
| "批量处理这些 GitHub issues" (batch-process these GitHub issues) | issue | issue:plan → queue |
| "不确定要怎么设计缓存系统" (unsure how to design the cache system) | exploration | brainstorm → plan |
| "添加一个深色模式切换按钮" (add a dark-mode toggle button) | ui | ui-design → plan |
| "重构整个认证模块" (refactor the entire auth module) | feature/high | plan → verify |
| "添加用户头像功能" (add a user avatar feature) | feature/low | lite-plan |

### Protocol Reference Templates

```bash
# Analysis mode - use --rule to auto-load protocol and template (appended to prompt)
ccw cli -p "
CONSTRAINTS: ...
..." --tool gemini --mode analysis --rule analysis-code-patterns

# Write mode - use --rule to auto-load protocol and template (appended to prompt)
ccw cli -p "
CONSTRAINTS: ...
..." --tool codex --mode write --rule development-feature
```

### Dynamic Template Construction

```javascript
function buildPrompt(config) {
  const { purpose, task, mode, context, expected, constraints } = config;

  return `
PURPOSE: ${purpose}
TASK: ${task.map(t => `• ${t}`).join('\n')}
MODE: ${mode}
CONTEXT: ${context}
EXPECTED: ${expected}
CONSTRAINTS: ${constraints || ''}
`; // Use --rule option to auto-append protocol + template
}
```

- Use `--resume` for related tasks to preserve context
- Do not use `--resume` for independent tasks

### 4. Prompt Specification

- Always use the PURPOSE/TASK/MODE/CONTEXT/EXPECTED/CONSTRAINTS structure
- Use `--rule <template>` to auto-append protocol + template to the prompt
- Template name format: `category-function` (e.g., `analysis-code-patterns`)

### 5. Result Handling

**.claude/skills/skill-tuning/SKILL.md** (new file, 303 lines)

---
name: skill-tuning
description: Universal skill diagnosis and optimization tool. Detect and fix skill execution issues including context explosion, long-tail forgetting, data flow disruption, and agent coordination failures. Supports Gemini CLI for deep analysis. Triggers on "skill tuning", "tune skill", "skill diagnosis", "optimize skill", "skill debug".
allowed-tools: Task, AskUserQuestion, Read, Write, Bash, Glob, Grep, mcp__ace-tool__search_context
---

# Skill Tuning

Universal skill diagnosis and optimization tool that identifies and resolves skill execution problems through iterative multi-agent analysis.

## Architecture Overview

```
┌─────────────────────────────────────────────────────────────────────────────┐
│ Skill Tuning Architecture (Autonomous Mode + Gemini CLI)                    │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│ ⚠️ Phase 0: Specification → Read specs + understand the target skill        │
│    Study                     structure (mandatory prerequisite)             │
│     ↓                                                                       │
│ ┌───────────────────────────────────────────────────────────────────────┐   │
│ │ Orchestrator (state-driven decisions)                                 │   │
│ │ Read diagnosis state → choose next action → execute → update state →  │   │
│ │ loop until complete                                                   │   │
│ └───────────────────────────────────────────────────────────────────────┘   │
│        │                                                                    │
│   ┌────────────┬───────────┼───────────┬────────────┬────────────┐          │
│   ↓            ↓           ↓           ↓            ↓            ↓          │
│ ┌──────┐  ┌──────────┐  ┌─────────┐  ┌────────┐  ┌────────┐  ┌─────────┐    │
│ │ Init │→ │ Analyze  │→ │Diagnose │  │Diagnose│  │Diagnose│  │ Gemini  │    │
│ │      │  │Requiremts│  │ Context │  │ Memory │  │DataFlow│  │Analysis │    │
│ └──────┘  └──────────┘  └─────────┘  └────────┘  └────────┘  └─────────┘    │
│                │             │           │           │            │         │
│                │             └───────────┴───────────┴────────────┘         │
│                ↓                                                            │
│ ┌───────────────────────────────────────────────────────────────────────┐   │
│ │ Requirement Analysis (NEW)                                            │   │
│ │ • Phase 1: Dimension decomposition (Gemini CLI) - one description →   │   │
│ │   multiple dimensions of concern                                      │   │
│ │ • Phase 2: Spec matching - each dimension → taxonomy + strategy       │   │
│ │ • Phase 3: Coverage assessment - "has a fix strategy" = satisfied     │   │
│ │ • Phase 4: Ambiguity detection - flag ambiguous descriptions and      │   │
│ │   request clarification when necessary                                │   │
│ └───────────────────────────────────────────────────────────────────────┘   │
│                ↓                                                            │
│       ┌──────────────────┐                                                  │
│       │ Apply Fixes +    │                                                  │
│       │ Verify Results   │                                                  │
│       └──────────────────┘                                                  │
│                                                                             │
│ ┌───────────────────────────────────────────────────────────────────────┐   │
│ │ Gemini CLI Integration                                                │   │
│ │ Invoke the gemini cli on demand for deep analysis:                    │   │
│ │ • Requirement dimension decomposition                                 │   │
│ │ • Complex problem analysis (prompt engineering, architecture review)  │   │
│ │ • Code pattern recognition (pattern matching, anti-pattern detection) │   │
│ │ • Fix strategy generation (fix generation, refactoring suggestions)   │   │
│ └───────────────────────────────────────────────────────────────────────┘   │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```

## Problem Domain

Based on comprehensive analysis, skill-tuning addresses **core skill issues** and **general optimization areas**:

### Core Skill Issues (auto-detected)

| Priority | Problem | Root Cause | Solution Strategy |
|----------|---------|------------|-------------------|
| **P0** | Authoring Principles Violation | Intermediate file storage, state bloat, file-based handoff | eliminate_intermediate_files, minimize_state, context_passing |
| **P1** | Data Flow Disruption | Scattered state, inconsistent formats | state_centralization, schema_enforcement |
| **P2** | Agent Coordination | Fragile call chains, merge complexity | error_wrapping, result_validation |
| **P3** | Context Explosion | Token accumulation, multi-turn bloat | sliding_window, context_summarization |
| **P4** | Long-tail Forgetting | Early constraint loss | constraint_injection, checkpoint_restore |
| **P5** | Token Consumption | Verbose prompts, excessive state, redundant I/O | prompt_compression, lazy_loading, output_minimization |

### General Optimization Areas (on demand via Gemini CLI)

| Category | Issues | Gemini Analysis Scope |
|----------|--------|----------------------|
| **Prompt Engineering** | Vague instructions, inconsistent output formats, hallucination risk | Prompt optimization, structured output design |
| **Architecture** | Poor phase decomposition, tangled dependencies, low extensibility | Architecture review, modularization advice |
| **Performance** | Slow execution, high token consumption, redundant computation | Performance analysis, caching strategy |
| **Error Handling** | Poor error recovery, no degradation strategy, insufficient logging | Fault-tolerant design, observability improvements |
| **Output Quality** | Unstable output, format drift, quality fluctuation | Quality gates, validation mechanisms |
| **User Experience** | Clunky interaction, unclear feedback, invisible progress | UX optimization, progress tracking |

## Key Design Principles

1. **Problem-First Diagnosis**: Systematic identification before any fix attempt
2. **Data-Driven Analysis**: Record execution traces, token counts, state snapshots
3. **Iterative Refinement**: Multiple tuning rounds until quality gates pass
4. **Non-Destructive**: All changes are reversible with backup checkpoints
5. **Agent Coordination**: Use specialized sub-agents for each diagnosis type
6. **Gemini CLI On-Demand**: Deep analysis via CLI for complex/custom issues

---

## Gemini CLI Integration

Invoke the Gemini CLI on demand for deep analysis, driven by user needs.

### Trigger Conditions

| Condition | Action | CLI Mode |
|-----------|--------|----------|
| User describes a complex problem | Ask Gemini to analyze the root cause | `analysis` |
| Auto-diagnosis finds a critical issue | Request deep analysis to confirm | `analysis` |
| User requests an architecture review | Run architecture analysis | `analysis` |
| Fix code needs to be generated | Generate a fix proposal | `write` |
| Standard strategies do not apply | Request a customized strategy | `analysis` |

### CLI Command Template

```bash
ccw cli -p "
PURPOSE: ${purpose}
TASK: ${task_steps}
MODE: ${mode}
CONTEXT: @${skill_path}/**/*
EXPECTED: ${expected_output}
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/${mode}-protocol.md) | ${constraints}
" --tool gemini --mode ${mode} --cd ${skill_path}
```

### Analysis Types

#### 1. Problem Root Cause Analysis

```bash
ccw cli -p "
PURPOSE: Identify root cause of skill execution issue: ${user_issue_description}
TASK: • Analyze skill structure and phase flow • Identify anti-patterns • Trace data flow issues
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON with { root_causes: [], patterns_found: [], recommendations: [] }
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Focus on execution flow
" --tool gemini --mode analysis
```

#### 2. Architecture Review

```bash
ccw cli -p "
PURPOSE: Review skill architecture for scalability and maintainability
TASK: • Evaluate phase decomposition • Check state management patterns • Assess agent coordination
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: Architecture assessment with improvement recommendations
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Focus on modularity
" --tool gemini --mode analysis
```

#### 3. Fix Strategy Generation

```bash
ccw cli -p "
PURPOSE: Generate fix strategy for issue: ${issue_id} - ${issue_description}
TASK: • Analyze issue context • Design fix approach • Generate implementation plan
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON with { strategy: string, changes: [], verification_steps: [] }
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Minimal invasive changes
" --tool gemini --mode analysis
```

---

## Mandatory Prerequisites

> **CRITICAL**: Read these documents before executing any action.

### Core Specs (Required)

| Document | Purpose | Priority |
|----------|---------|----------|
| [specs/skill-authoring-principles.md](specs/skill-authoring-principles.md) | **Primary principles: concise and efficient, eliminate storage, pass data via context** | **P0** |
| [specs/problem-taxonomy.md](specs/problem-taxonomy.md) | Problem classification and detection patterns | **P0** |
| [specs/tuning-strategies.md](specs/tuning-strategies.md) | Fix strategies for each problem type | **P0** |
| [specs/dimension-mapping.md](specs/dimension-mapping.md) | Dimension to Spec mapping rules | **P0** |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality thresholds and verification criteria | P1 |

### Templates (Reference)

| Document | Purpose |
|----------|---------|
| [templates/diagnosis-report.md](templates/diagnosis-report.md) | Diagnosis report structure |
| [templates/fix-proposal.md](templates/fix-proposal.md) | Fix proposal format |

---

## Execution Flow

```
┌─────────────────────────────────────────────────────────────────────────────┐
│ Phase 0: Specification Study (mandatory prerequisite - do not skip)         │
│   → Read: specs/problem-taxonomy.md (problem classification)                │
│   → Read: specs/tuning-strategies.md (tuning strategies)                    │
│   → Read: specs/dimension-mapping.md (dimension mapping rules)              │
│   → Read: Target skill's SKILL.md and phases/*.md                           │
│   → Output: internalize the specs, understand the target skill structure    │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-init: Initialize Tuning Session                                      │
│   → Create work directory: .workflow/.scratchpad/skill-tuning-{timestamp}   │
│   → Initialize state.json with target skill info                            │
│   → Create backup of target skill files                                     │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-analyze-requirements: Requirement Analysis                           │
│   → Phase 1: Dimension decomposition (Gemini CLI) - one description →       │
│     multiple dimensions of concern                                          │
│   → Phase 2: Spec matching - each dimension → taxonomy + strategy           │
│   → Phase 3: Coverage assessment - "has a fix strategy" = satisfied         │
│   → Phase 4: Ambiguity detection - flag ambiguity, ask for clarification    │
│   → Output: state.json (requirement_analysis field)                         │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-diagnose-*: Diagnosis Actions (context/memory/dataflow/agent/docs/   │
│   token_consumption)                                                        │
│   → Execute pattern-based detection for each category                       │
│   → Output: state.json (diagnosis.{category} field)                         │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-generate-report: Consolidated Report                                 │
│   → Generate markdown summary from state.diagnosis                          │
│   → Prioritize issues by severity                                           │
│   → Output: state.json (final_report field)                                 │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-propose-fixes: Fix Proposal Generation                               │
│   → Generate fix strategies for each issue                                  │
│   → Create implementation plan                                              │
│   → Output: state.json (proposed_fixes field)                               │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-apply-fix: Apply Selected Fix                                        │
│   → User selects fix to apply                                               │
│   → Execute fix with backup                                                 │
│   → Update state with fix result                                            │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-verify: Verification                                                 │
│   → Re-run affected diagnosis                                               │
│   → Check quality gates                                                     │
│   → Update iteration count                                                  │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-complete: Finalization                                               │
│   → Set status='completed'                                                  │
│   → Final report already in state.json (final_report field)                 │
│   → Output: state.json (final)                                              │
└─────────────────────────────────────────────────────────────────────────────┘
```

## Directory Setup

```javascript
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const workDir = `.workflow/.scratchpad/skill-tuning-${timestamp}`;

// Simplified: Only backups dir needed, diagnosis results go into state.json
Bash(`mkdir -p "${workDir}/backups"`);
```

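The timestamp expression above yields a compact 14-digit UTC stamp: the ISO-8601 string is truncated to second precision and the `-`, `:` and `T` separators are stripped. A fixed date is used below purely for illustration:

```javascript
// Work-directory timestamp: ISO time with date/time separators stripped.
const iso = new Date('2026-02-06T01:54:11Z').toISOString() // fixed date for illustration
const ts = iso.slice(0, 19).replace(/[-:T]/g, '')
// ts is now a 14-character string: YYYYMMDDhhmmss
```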
## Output Structure

```
.workflow/.scratchpad/skill-tuning-{timestamp}/
├── state.json                 # Single source of truth (all results consolidated)
│   ├── diagnosis.*            # All diagnosis results embedded
│   ├── issues[]               # Found issues
│   ├── proposed_fixes[]       # Fix proposals
│   └── final_report           # Markdown summary (on completion)
└── backups/
    └── {skill-name}-backup/   # Original skill files backup
```

> **Token Optimization**: All outputs consolidated into state.json. No separate diagnosis files or report files.

## State Schema

See [phases/state-schema.md](phases/state-schema.md) for the detailed state structure definition.

Core state fields:
- `status`: workflow status (pending/running/completed/failed)
- `target_skill`: target skill info
- `diagnosis`: per-dimension diagnosis results
- `issues`: list of discovered issues
- `proposed_fixes`: suggested fixes

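An illustrative skeleton of these fields is shown below; the field names come from the list above, but the exact shape (and any additional fields) is defined in phases/state-schema.md, so treat the values here as placeholders only.

```json
{
  "status": "running",
  "target_skill": { "name": "my-skill", "path": ".claude/skills/my-skill" },
  "diagnosis": { "context": null, "memory": null, "dataflow": null, "agent": null },
  "issues": [],
  "proposed_fixes": []
}
```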
## Reference Documents

| Document | Purpose |
|----------|---------|
| [phases/orchestrator.md](phases/orchestrator.md) | Orchestrator decision logic |
| [phases/state-schema.md](phases/state-schema.md) | State structure definition |
| [phases/actions/action-init.md](phases/actions/action-init.md) | Initialize tuning session |
| [phases/actions/action-analyze-requirements.md](phases/actions/action-analyze-requirements.md) | Requirement analysis (NEW) |
| [phases/actions/action-diagnose-context.md](phases/actions/action-diagnose-context.md) | Context explosion diagnosis |
| [phases/actions/action-diagnose-memory.md](phases/actions/action-diagnose-memory.md) | Long-tail forgetting diagnosis |
| [phases/actions/action-diagnose-dataflow.md](phases/actions/action-diagnose-dataflow.md) | Data flow diagnosis |
| [phases/actions/action-diagnose-agent.md](phases/actions/action-diagnose-agent.md) | Agent coordination diagnosis |
| [phases/actions/action-diagnose-docs.md](phases/actions/action-diagnose-docs.md) | Documentation structure diagnosis |
| [phases/actions/action-diagnose-token-consumption.md](phases/actions/action-diagnose-token-consumption.md) | Token consumption diagnosis |
| [phases/actions/action-generate-report.md](phases/actions/action-generate-report.md) | Report generation |
| [phases/actions/action-propose-fixes.md](phases/actions/action-propose-fixes.md) | Fix proposal |
| [phases/actions/action-apply-fix.md](phases/actions/action-apply-fix.md) | Fix application |
| [phases/actions/action-verify.md](phases/actions/action-verify.md) | Verification |
| [phases/actions/action-complete.md](phases/actions/action-complete.md) | Finalization |
| [specs/problem-taxonomy.md](specs/problem-taxonomy.md) | Problem classification |
| [specs/tuning-strategies.md](specs/tuning-strategies.md) | Fix strategies |
| [specs/dimension-mapping.md](specs/dimension-mapping.md) | Dimension to Spec mapping (NEW) |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality criteria |

**.claude/skills/skill-tuning/phases/actions/action-abort.md** (new file, 164 lines)

# Action: Abort

Abort the tuning session due to unrecoverable errors.

## Purpose

- Safely terminate on critical failures
- Preserve diagnostic information for debugging
- Ensure backup remains available
- Notify user of failure reason

## Preconditions

- [ ] state.error_count >= state.max_errors
- [ ] OR critical failure detected

## Execution

```javascript
async function execute(state, workDir) {
  console.log('Aborting skill tuning session...');

  const errors = state.errors;
  const targetSkill = state.target_skill;

  // Generate abort report
  const abortReport = `# Skill Tuning Aborted

**Target Skill**: ${targetSkill?.name || 'Unknown'}
**Aborted At**: ${new Date().toISOString()}
**Reason**: Too many errors or critical failure

---

## Error Log

${errors.length === 0 ? '_No errors recorded_' :
  errors.map((err, i) => `
### Error ${i + 1}
- **Action**: ${err.action}
- **Message**: ${err.message}
- **Time**: ${err.timestamp}
- **Recoverable**: ${err.recoverable ? 'Yes' : 'No'}
`).join('\n')}

---

## Session State at Abort

- **Status**: ${state.status}
- **Iteration Count**: ${state.iteration_count}
- **Completed Actions**: ${state.completed_actions.length}
- **Issues Found**: ${state.issues.length}
- **Fixes Applied**: ${state.applied_fixes.length}

---

## Recovery Options

### Option 1: Restore Original Skill
If any changes were made, restore from backup:
\`\`\`bash
cp -r "${state.backup_dir}/${targetSkill?.name || 'backup'}-backup"/* "${targetSkill?.path || 'target'}/"
\`\`\`

### Option 2: Resume from Last State
The session state is preserved at:
\`${workDir}/state.json\`

To resume:
1. Fix the underlying issue
2. Reset error_count in state.json
3. Re-run skill-tuning with --resume flag

### Option 3: Manual Investigation
Review the following files:
- Diagnosis results: \`${workDir}/diagnosis/*.json\`
- Error log: \`${workDir}/errors.json\`
- State snapshot: \`${workDir}/state.json\`

---

## Diagnostic Information

### Last Successful Action
${state.completed_actions.length > 0 ? state.completed_actions[state.completed_actions.length - 1] : 'None'}

### Current Action When Failed
${state.current_action || 'Unknown'}

### Partial Diagnosis Results
- Context: ${state.diagnosis.context ? 'Completed' : 'Not completed'}
- Memory: ${state.diagnosis.memory ? 'Completed' : 'Not completed'}
- Data Flow: ${state.diagnosis.dataflow ? 'Completed' : 'Not completed'}
- Agent: ${state.diagnosis.agent ? 'Completed' : 'Not completed'}

---

*Skill tuning aborted - please review errors and retry*
`;

  // Write abort report
  Write(`${workDir}/abort-report.md`, abortReport);

  // Save error log
  Write(`${workDir}/errors.json`, JSON.stringify(errors, null, 2));

  // Notify user
  await AskUserQuestion({
    questions: [{
      question: `Skill tuning aborted due to ${errors.length} errors. Would you like to restore the original skill?`,
      header: 'Restore',
      multiSelect: false,
      options: [
        { label: 'Yes, restore', description: 'Restore original skill from backup' },
        { label: 'No, keep changes', description: 'Keep any partial changes made' }
      ]
    }]
  }).then(async response => {
    if (response['Restore'] === 'Yes, restore') {
      // Restore from backup
      if (state.backup_dir && targetSkill?.path) {
        Bash(`cp -r "${state.backup_dir}/${targetSkill.name}-backup"/* "${targetSkill.path}/"`);
        console.log('Original skill restored from backup.');
      }
    }
  }).catch(() => {
    // User cancelled, don't restore
  });

  return {
    stateUpdates: {
      status: 'failed',
      completed_at: new Date().toISOString()
    },
    outputFiles: [`${workDir}/abort-report.md`, `${workDir}/errors.json`],
    summary: `Tuning aborted: ${errors.length} errors. Check abort-report.md for details.`
  };
}
```

## State Updates

```javascript
return {
  stateUpdates: {
    status: 'failed',
    completed_at: '<timestamp>'
  }
};
```

## Output

- **File**: `abort-report.md`
- **Location**: `${workDir}/abort-report.md`

## Error Handling

This action should not fail - it's the final error handler.

## Next Actions

- None (terminal state)

**(new file, 406 lines)**

# Action: Analyze Requirements

Decompose the user's problem description into analysis dimensions, match them to Specs, assess coverage, and detect ambiguity.

## Purpose

- Decompose a single user description into multiple independent dimensions of concern
- Match each dimension to problem-taxonomy (detection) + tuning-strategies (fix)
- Judge whether a requirement is satisfied by the criterion "a fix strategy exists"
- Detect ambiguity and ask the user for clarification when necessary

## Preconditions

- [ ] `state.status === 'running'`
- [ ] `state.target_skill !== null`
- [ ] `state.completed_actions.includes('action-init')`
- [ ] `!state.completed_actions.includes('action-analyze-requirements')`

## Execution
|
||||
|
||||
### Phase 1: 维度拆解 (Gemini CLI)
|
||||
|
||||
调用 Gemini 对用户描述进行语义分析,拆解为独立维度:
|
||||
|
||||
```javascript
async function analyzeDimensions(state, workDir) {
  const prompt = `
PURPOSE: Analyze the user's issue description and decompose it into independent dimensions of concern
TASK:
• Identify the distinct concerns in the user description (each concern should be independent and analyzable on its own)
• Extract keywords for each concern (Chinese or English)
• Infer the likely problem category:
  - context_explosion: context/token related
  - memory_loss: forgetting/lost-constraint related
  - dataflow_break: state/data-flow related
  - agent_failure: agent/subtask related
  - prompt_quality: prompt/output-quality related
  - architecture: architecture/structure related
  - performance: performance/efficiency related
  - error_handling: error/exception-handling related
  - output_quality: output quality/validation related
  - user_experience: interaction/UX related
• Rate the inference confidence (0-1)

INPUT:
User description: ${state.user_issue_description}
Target skill: ${state.target_skill.name}
Skill structure: ${JSON.stringify(state.target_skill.phases)}

MODE: analysis
CONTEXT: @specs/problem-taxonomy.md @specs/dimension-mapping.md
EXPECTED: JSON (do not wrap in markdown code fences)
{
  "dimensions": [
    {
      "id": "DIM-001",
      "description": "short description of the concern",
      "keywords": ["keyword1", "keyword2"],
      "inferred_category": "problem category",
      "confidence": 0.85,
      "reasoning": "rationale for the inference"
    }
  ],
  "analysis_notes": "overall analysis notes"
}
RULES:
- Dimensions must be independent and non-overlapping
- Inferences below 0.5 confidence should be flagged as needing clarification
- If the user description is very vague, extract at least one "general" dimension
`;

  const cliCommand = `ccw cli -p "${escapeForShell(prompt)}" --tool gemini --mode analysis --cd "${state.target_skill.path}"`;

  console.log('Phase 1: running Gemini dimension decomposition...');

  const result = Bash({
    command: cliCommand,
    run_in_background: true,
    timeout: 300000
  });

  return result;
}
```
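The prompt is shell-escaped via `escapeForShell` before being embedded in the `ccw cli` command. That helper is not defined in this action; the sketch below is an assumption about what it might look like, assuming the caller wraps the result in double quotes as the `cliCommand` template above does:

```javascript
// Hypothetical sketch: escape a string for safe embedding inside
// double quotes in a POSIX shell command. Not the actual helper.
function escapeForShell(text) {
  return text
    .replace(/\\/g, '\\\\')   // backslashes first
    .replace(/"/g, '\\"')     // double quotes
    .replace(/\$/g, '\\$')    // parameter expansion
    .replace(/`/g, '\\`');    // command substitution
}

// Example: quotes and $variables in the prompt survive intact.
console.log(escapeForShell('say "hi" to $USER'));
```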
### Phase 2: Spec Matching

Match detection patterns and fix strategies for each dimension, driven by the `specs/category-mappings.json` configuration:

```javascript
// Load the centralized mapping configuration
const mappings = JSON.parse(Read('specs/category-mappings.json'));

function matchSpecs(dimensions) {
  return dimensions.map(dim => {
    // Match a taxonomy pattern
    const taxonomyMatch = findTaxonomyMatch(dim.inferred_category);

    // Match a strategy
    const strategyMatch = findStrategyMatch(dim.inferred_category);

    // Decide whether the dimension is satisfied (core criterion: a fix strategy exists)
    const hasFix = strategyMatch !== null && strategyMatch.strategies.length > 0;

    return {
      dimension_id: dim.id,
      taxonomy_match: taxonomyMatch,
      strategy_match: strategyMatch,
      has_fix: hasFix,
      needs_gemini_analysis: taxonomyMatch === null || mappings.categories[dim.inferred_category]?.needs_gemini_analysis
    };
  });
}

function findTaxonomyMatch(category) {
  const config = mappings.categories[category];
  if (!config || config.pattern_ids.length === 0) return null;

  return {
    category: category,
    pattern_ids: config.pattern_ids,
    severity_hint: config.severity_hint
  };
}

function findStrategyMatch(category) {
  const config = mappings.categories[category];
  if (!config) {
    // Fall back to the custom entry from the config
    return mappings.fallback;
  }

  return {
    strategies: config.strategies,
    risk_levels: config.risk_levels
  };
}
```
### Phase 3: Coverage Evaluation

Evaluate Spec coverage across all dimensions:

```javascript
function evaluateCoverage(specMatches) {
  const total = specMatches.length;
  const withDetection = specMatches.filter(m => m.taxonomy_match !== null).length;
  const withFix = specMatches.filter(m => m.has_fix).length;

  const rate = total > 0 ? Math.round((withFix / total) * 100) : 0;

  let status;
  if (rate >= 80) {
    status = 'satisfied';
  } else if (rate >= 50) {
    status = 'partial';
  } else {
    status = 'unsatisfied';
  }

  return {
    total_dimensions: total,
    with_detection: withDetection,
    with_fix_strategy: withFix,
    coverage_rate: rate,
    status: status
  };
}
```
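As a usage illustration of the thresholds above: two of three dimensions having a fix strategy rounds to 67%, landing in the 'partial' band. `evaluateCoverage` is repeated from the snippet above so this runs standalone; the sample matches are hypothetical:

```javascript
// evaluateCoverage reproduced from Phase 3 so this snippet is self-contained.
function evaluateCoverage(specMatches) {
  const total = specMatches.length;
  const withDetection = specMatches.filter(m => m.taxonomy_match !== null).length;
  const withFix = specMatches.filter(m => m.has_fix).length;
  const rate = total > 0 ? Math.round((withFix / total) * 100) : 0;
  const status = rate >= 80 ? 'satisfied' : rate >= 50 ? 'partial' : 'unsatisfied';
  return { total_dimensions: total, with_detection: withDetection, with_fix_strategy: withFix, coverage_rate: rate, status };
}

// Hypothetical matches: two dimensions fixable, one unmatched.
const sample = [
  { taxonomy_match: { category: 'context_explosion' }, has_fix: true },
  { taxonomy_match: { category: 'memory_loss' }, has_fix: true },
  { taxonomy_match: null, has_fix: false }
];

console.log(evaluateCoverage(sample).coverage_rate); // 67
console.log(evaluateCoverage(sample).status);        // 'partial'
```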
### Phase 4: Ambiguity Detection

Identify ambiguities that require user clarification:

```javascript
function detectAmbiguities(dimensions, specMatches) {
  const ambiguities = [];

  for (const dim of dimensions) {
    const match = specMatches.find(m => m.dimension_id === dim.id);

    // Check 1: low confidence (< 0.5)
    if (dim.confidence < 0.5) {
      ambiguities.push({
        dimension_id: dim.id,
        type: 'vague_description',
        description: `Dimension "${dim.description}" is vague; inference confidence is low (${dim.confidence})`,
        possible_interpretations: suggestInterpretations(dim),
        needs_clarification: true
      });
    }

    // Check 2: no matching category
    if (!match || (!match.taxonomy_match && !match.strategy_match)) {
      ambiguities.push({
        dimension_id: dim.id,
        type: 'no_category_match',
        description: `Dimension "${dim.description}" cannot be matched to any known problem category`,
        possible_interpretations: ['custom'],
        needs_clarification: true
      });
    }

    // Check 3: conflicting keywords (may belong to multiple categories)
    if (dim.keywords.length > 3 && hasConflictingKeywords(dim.keywords)) {
      ambiguities.push({
        dimension_id: dim.id,
        type: 'conflicting_keywords',
        description: `Keywords of dimension "${dim.description}" may point to several different problems`,
        possible_interpretations: inferMultipleCategories(dim.keywords),
        needs_clarification: true
      });
    }
  }

  return ambiguities;
}

function suggestInterpretations(dim) {
  // Recommend likely interpretations based on the mappings configuration
  const categories = Object.keys(mappings.categories).filter(
    cat => cat !== 'authoring_principles_violation' // exclude internal detection categories
  );
  return categories.slice(0, 4); // return the four most common as options
}

function hasConflictingKeywords(keywords) {
  // Check whether the keywords point in different directions
  const categoryHints = keywords.map(k => getKeywordCategoryHint(k));
  const uniqueCategories = [...new Set(categoryHints.filter(c => c))];
  return uniqueCategories.length > 1;
}

function getKeywordCategoryHint(keyword) {
  // Build a lookup table from mappings.keywords (merging Chinese and English keywords)
  const keywordMap = {
    ...mappings.keywords.chinese,
    ...mappings.keywords.english
  };
  return keywordMap[keyword.toLowerCase()];
}
```
## User Interaction

If ambiguities that require clarification are detected, pause and ask the user:

```javascript
async function handleAmbiguities(ambiguities, dimensions) {
  const needsClarification = ambiguities.filter(a => a.needs_clarification);

  if (needsClarification.length === 0) {
    return null; // no clarification needed
  }

  const questions = needsClarification.slice(0, 4).map(a => {
    const dim = dimensions.find(d => d.id === a.dimension_id);

    return {
      question: `Regarding "${dim.description}", what exactly do you mean?`,
      header: a.dimension_id,
      options: a.possible_interpretations.map(interp => ({
        label: getCategoryLabel(interp),
        description: getCategoryDescription(interp)
      })),
      multiSelect: false
    };
  });

  return await AskUserQuestion({ questions });
}

function getCategoryLabel(category) {
  // Load labels from the mappings configuration
  return mappings.category_labels_chinese[category] || category;
}

function getCategoryDescription(category) {
  // Load descriptions from the mappings configuration
  return mappings.category_descriptions[category] || 'Requires further analysis';
}
```
## Output

### State Updates

```javascript
return {
  stateUpdates: {
    requirement_analysis: {
      status: ambiguities.some(a => a.needs_clarification) ? 'needs_clarification' : 'completed',
      analyzed_at: new Date().toISOString(),
      dimensions: dimensions,
      spec_matches: specMatches,
      coverage: coverageResult,
      ambiguities: ambiguities
    },
    // Automatically optimize focus_areas based on the analysis result
    focus_areas: deriveOptimalFocusAreas(specMatches)
  },
  outputFiles: [
    `${workDir}/requirement-analysis.json`,
    `${workDir}/requirement-analysis.md`
  ],
  summary: generateSummary(dimensions, coverageResult, ambiguities)
};

function deriveOptimalFocusAreas(specMatches) {
  const coreCategories = ['context', 'memory', 'dataflow', 'agent'];
  const matched = specMatches
    .filter(m => m.taxonomy_match !== null)
    .map(m => {
      // Map to a diagnostic focus_area
      const category = m.taxonomy_match.category;
      if (category === 'context_explosion' || category === 'performance') return 'context';
      if (category === 'memory_loss') return 'memory';
      if (category === 'dataflow_break') return 'dataflow';
      if (category === 'agent_failure' || category === 'error_handling') return 'agent';
      return null;
    })
    .filter(f => f && coreCategories.includes(f));

  // De-duplicate
  return [...new Set(matched)];
}

function generateSummary(dimensions, coverage, ambiguities) {
  const dimCount = dimensions.length;
  const coverageStatus = coverage.status;
  const ambiguityCount = ambiguities.filter(a => a.needs_clarification).length;

  let summary = `Analysis complete: ${dimCount} dimensions`;
  summary += `, coverage ${coverage.coverage_rate}% (${coverageStatus})`;

  if (ambiguityCount > 0) {
    summary += `, ${ambiguityCount} ambiguities awaiting clarification`;
  }

  return summary;
}
```
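A usage sketch of the category-to-focus-area mapping above. `deriveOptimalFocusAreas` is repeated from the snippet so this runs standalone; the sample matches are hypothetical:

```javascript
// deriveOptimalFocusAreas reproduced from the State Updates snippet above.
function deriveOptimalFocusAreas(specMatches) {
  const coreCategories = ['context', 'memory', 'dataflow', 'agent'];
  const matched = specMatches
    .filter(m => m.taxonomy_match !== null)
    .map(m => {
      const category = m.taxonomy_match.category;
      if (category === 'context_explosion' || category === 'performance') return 'context';
      if (category === 'memory_loss') return 'memory';
      if (category === 'dataflow_break') return 'dataflow';
      if (category === 'agent_failure' || category === 'error_handling') return 'agent';
      return null;
    })
    .filter(f => f && coreCategories.includes(f));
  return [...new Set(matched)];
}

// Hypothetical matches: two categories fold into 'context', one into 'agent'.
const matches = [
  { taxonomy_match: { category: 'context_explosion' } },
  { taxonomy_match: { category: 'performance' } },      // also maps to 'context'
  { taxonomy_match: { category: 'agent_failure' } }
];

console.log(deriveOptimalFocusAreas(matches)); // ['context', 'agent']
```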
### Output Files

#### requirement-analysis.json

```json
{
  "timestamp": "2024-01-01T00:00:00Z",
  "target_skill": "skill-name",
  "user_description": "original user description",
  "dimensions": [...],
  "spec_matches": [...],
  "coverage": {...},
  "ambiguities": [...],
  "derived_focus_areas": [...]
}
```

#### requirement-analysis.md

```markdown
# Requirement Analysis Report

## User Description
> ${user_issue_description}

## Dimension Decomposition

| ID | Description | Category | Confidence |
|----|-------------|----------|------------|
| DIM-001 | ... | ... | 0.85 |

## Spec Matching

| Dimension | Detection Patterns | Fix Strategy | Satisfied |
|-----------|--------------------|--------------|-----------|
| DIM-001 | CTX-001,002 | sliding_window | ✓ |

## Coverage Evaluation

- Total dimensions: N
- With detection: M
- With fix strategy: K (satisfaction criterion)
- Coverage rate: X%
- Status: satisfied/partial/unsatisfied

## Ambiguities

(if any)
```
## Error Handling

| Error | Recovery |
|-------|----------|
| Gemini CLI timeout | Retry once; if it still fails, fall back to a simplified analysis |
| JSON parse failure | Attempt to repair the JSON or use default dimensions |
| No category matches | Classify everything as custom and trigger deep Gemini analysis |

## Next Actions

- If `requirement_analysis.status === 'completed'`: proceed to `action-diagnose-*`
- If `requirement_analysis.status === 'needs_clarification'`: re-run after the user clarifies
- If `coverage.status === 'unsatisfied'`: automatically trigger `action-gemini-analysis` for deep analysis
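The "retry once, then fall back" recovery in the timeout row can be sketched as a small wrapper. This is a minimal illustration; `runGemini` (throws on timeout) and `simplifiedAnalysis` (the degraded fallback) are hypothetical names, not functions defined by this action:

```javascript
// Hypothetical sketch of the Gemini-timeout recovery described above:
// try twice, then degrade gracefully instead of aborting the workflow.
async function analyzeWithRecovery(runGemini, simplifiedAnalysis) {
  for (let attempt = 1; attempt <= 2; attempt++) {
    try {
      return await runGemini(); // may throw on timeout
    } catch (err) {
      console.log(`Gemini attempt ${attempt} failed: ${err.message}`);
    }
  }
  // Both attempts failed: fall back to the simplified analysis.
  return simplifiedAnalysis();
}
```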
`.claude/skills/skill-tuning/phases/actions/action-apply-fix.md` (new file, 206 lines)
# Action: Apply Fix

Apply a selected fix to the target skill with backup and rollback capability.

## Purpose

- Apply fix changes to target skill files
- Create backup before modifications
- Track applied fixes for verification
- Support rollback if needed

## Preconditions

- [ ] state.status === 'running'
- [ ] state.pending_fixes.length > 0
- [ ] state.proposed_fixes contains the fix to apply
## Execution

```javascript
async function execute(state, workDir) {
  const pendingFixes = state.pending_fixes;
  const proposedFixes = state.proposed_fixes;
  const targetPath = state.target_skill.path;
  const backupDir = state.backup_dir;

  if (pendingFixes.length === 0) {
    return {
      stateUpdates: {},
      outputFiles: [],
      summary: 'No pending fixes to apply'
    };
  }

  // Get next fix to apply
  const fixId = pendingFixes[0];
  const fix = proposedFixes.find(f => f.id === fixId);

  if (!fix) {
    return {
      stateUpdates: {
        pending_fixes: pendingFixes.slice(1),
        errors: [...state.errors, {
          action: 'action-apply-fix',
          message: `Fix ${fixId} not found in proposals`,
          timestamp: new Date().toISOString(),
          recoverable: true
        }]
      },
      outputFiles: [],
      summary: `Fix ${fixId} not found, skipping`
    };
  }

  console.log(`Applying fix ${fix.id}: ${fix.description}`);

  // Create fix-specific backup
  const fixBackupDir = `${backupDir}/before-${fix.id}`;
  Bash(`mkdir -p "${fixBackupDir}"`);

  const appliedChanges = [];
  let success = true;

  for (const change of fix.changes) {
    try {
      // Resolve file path (handle wildcards)
      let targetFiles = [];
      if (change.file.includes('*')) {
        targetFiles = Glob(`${targetPath}/${change.file}`);
      } else {
        targetFiles = [`${targetPath}/${change.file}`];
      }

      for (const targetFile of targetFiles) {
        // Backup original
        const relativePath = targetFile.replace(targetPath + '/', '');
        const backupPath = `${fixBackupDir}/${relativePath}`;

        if (Glob(targetFile).length > 0) {
          const originalContent = Read(targetFile);
          Bash(`mkdir -p "$(dirname "${backupPath}")"`);
          Write(backupPath, originalContent);
        }

        // Apply change based on action type
        if (change.action === 'modify' && change.diff) {
          // For now, append the diff as a comment/note
          // Real implementation would parse and apply the diff
          const existingContent = Read(targetFile);

          // Simple diff application: look for context and apply
          // This is a simplified version - real implementation would be more sophisticated
          const newContent = existingContent + `\n\n<!-- Applied fix ${fix.id}: ${fix.description} -->\n`;

          Write(targetFile, newContent);

          appliedChanges.push({
            file: relativePath,
            action: 'modified',
            backup: backupPath
          });
        } else if (change.action === 'create') {
          Write(targetFile, change.new_content || '');
          appliedChanges.push({
            file: relativePath,
            action: 'created',
            backup: null
          });
        }
      }
    } catch (error) {
      console.log(`Error applying change to ${change.file}: ${error.message}`);
      success = false;
    }
  }

  // Record applied fix
  const appliedFix = {
    fix_id: fix.id,
    applied_at: new Date().toISOString(),
    success: success,
    backup_path: fixBackupDir,
    verification_result: 'pending',
    rollback_available: true,
    changes_made: appliedChanges
  };

  // Update applied fixes log
  const appliedFixesPath = `${workDir}/fixes/applied-fixes.json`;
  let existingApplied = [];
  try {
    existingApplied = JSON.parse(Read(appliedFixesPath));
  } catch (e) {
    existingApplied = [];
  }
  existingApplied.push(appliedFix);
  Write(appliedFixesPath, JSON.stringify(existingApplied, null, 2));

  return {
    stateUpdates: {
      applied_fixes: [...state.applied_fixes, appliedFix],
      pending_fixes: pendingFixes.slice(1) // Remove applied fix from pending
    },
    outputFiles: [appliedFixesPath],
    summary: `Applied fix ${fix.id}: ${success ? 'success' : 'partial'}, ${appliedChanges.length} files modified`
  };
}
```
## State Updates

```javascript
return {
  stateUpdates: {
    applied_fixes: [...existingApplied, newAppliedFix],
    pending_fixes: remainingPendingFixes
  }
};
```

## Rollback Function

```javascript
async function rollbackFix(fixId, state, workDir) {
  const appliedFix = state.applied_fixes.find(f => f.fix_id === fixId);

  if (!appliedFix || !appliedFix.rollback_available) {
    throw new Error(`Cannot rollback fix ${fixId}`);
  }

  const backupDir = appliedFix.backup_path;
  const targetPath = state.target_skill.path;

  // Restore from backup
  const backupFiles = Glob(`${backupDir}/**/*`);
  for (const backupFile of backupFiles) {
    const relativePath = backupFile.replace(backupDir + '/', '');
    const targetFile = `${targetPath}/${relativePath}`;
    const content = Read(backupFile);
    Write(targetFile, content);
  }

  return {
    stateUpdates: {
      applied_fixes: state.applied_fixes.map(f =>
        f.fix_id === fixId
          ? { ...f, rollback_available: false, verification_result: 'rolled_back' }
          : f
      )
    }
  };
}
```
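The restore loop in `rollbackFix` rebuilds each target path by stripping the backup-directory prefix and re-rooting the remainder under the skill path. A standalone illustration of that remapping, using hypothetical paths:

```javascript
// Hypothetical paths illustrating the backup -> target remapping above.
const backupDir = '/work/backups/before-FIX-001';
const targetPath = '/skills/my-skill';
const backupFile = '/work/backups/before-FIX-001/phases/actions/run.md';

// Strip the backup prefix, then re-root under the target skill path.
const relativePath = backupFile.replace(backupDir + '/', '');
const targetFile = `${targetPath}/${relativePath}`;

console.log(targetFile); // /skills/my-skill/phases/actions/run.md
```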
## Error Handling

| Error Type | Recovery |
|------------|----------|
| File not found | Skip file, log warning |
| Write permission error | Retry with sudo or report |
| Backup creation failed | Abort fix, don't modify |

## Next Actions

- If pending_fixes.length > 0: action-apply-fix (continue)
- If all fixes applied: action-verify
`.claude/skills/skill-tuning/phases/actions/action-complete.md` (new file, 195 lines)
# Action: Complete

Finalize the tuning session with summary report and cleanup.

## Purpose

- Generate final summary report
- Record tuning statistics
- Clean up temporary files (optional)
- Provide recommendations for future maintenance

## Preconditions

- [ ] state.status === 'running'
- [ ] quality_gate === 'pass' OR max_iterations reached
## Execution

```javascript
async function execute(state, workDir) {
  console.log('Finalizing skill tuning session...');

  const targetSkill = state.target_skill;
  const startTime = new Date(state.started_at);
  const endTime = new Date();
  const duration = Math.round((endTime - startTime) / 1000);

  // Generate final summary
  const summary = `# Skill Tuning Summary

**Target Skill**: ${targetSkill.name}
**Path**: ${targetSkill.path}
**Session Duration**: ${duration} seconds
**Completed**: ${endTime.toISOString()}

---

## Final Status

| Metric | Value |
|--------|-------|
| Final Health Score | ${state.quality_score}/100 |
| Quality Gate | ${state.quality_gate.toUpperCase()} |
| Total Iterations | ${state.iteration_count} |
| Issues Found | ${state.issues.length + state.applied_fixes.flatMap(f => f.issues_resolved || []).length} |
| Issues Resolved | ${state.applied_fixes.flatMap(f => f.issues_resolved || []).length} |
| Fixes Applied | ${state.applied_fixes.length} |
| Fixes Verified | ${state.applied_fixes.filter(f => f.verification_result === 'pass').length} |

---

## Diagnosis Summary

| Area | Issues Found | Severity |
|------|--------------|----------|
| Context Explosion | ${state.diagnosis.context?.issues_found || 'N/A'} | ${state.diagnosis.context?.severity || 'N/A'} |
| Long-tail Forgetting | ${state.diagnosis.memory?.issues_found || 'N/A'} | ${state.diagnosis.memory?.severity || 'N/A'} |
| Data Flow | ${state.diagnosis.dataflow?.issues_found || 'N/A'} | ${state.diagnosis.dataflow?.severity || 'N/A'} |
| Agent Coordination | ${state.diagnosis.agent?.issues_found || 'N/A'} | ${state.diagnosis.agent?.severity || 'N/A'} |

---

## Applied Fixes

${state.applied_fixes.length === 0 ? '_No fixes applied_' :
  state.applied_fixes.map((fix, i) => `
### ${i + 1}. ${fix.fix_id}

- **Applied At**: ${fix.applied_at}
- **Success**: ${fix.success ? 'Yes' : 'No'}
- **Verification**: ${fix.verification_result}
- **Rollback Available**: ${fix.rollback_available ? 'Yes' : 'No'}
`).join('\n')}

---

## Remaining Issues

${state.issues.length === 0 ? '✅ All issues resolved!' :
  `${state.issues.length} issues remain:\n\n` +
  state.issues.map(issue =>
    `- **[${issue.severity.toUpperCase()}]** ${issue.description} (${issue.id})`
  ).join('\n')}

---

## Recommendations

${generateRecommendations(state)}

---

## Backup Information

Original skill files backed up to:
\`${state.backup_dir}\`

To restore original skill:
\`\`\`bash
cp -r "${state.backup_dir}/${targetSkill.name}-backup"/* "${targetSkill.path}/"
\`\`\`

---

## Session Files

| File | Description |
|------|-------------|
| ${workDir}/tuning-report.md | Full diagnostic report |
| ${workDir}/diagnosis/*.json | Individual diagnosis results |
| ${workDir}/fixes/fix-proposals.json | Proposed fixes |
| ${workDir}/fixes/applied-fixes.json | Applied fix history |
| ${workDir}/tuning-summary.md | This summary |

---

*Skill tuning completed by skill-tuning*
`;

  Write(`${workDir}/tuning-summary.md`, summary);

  // Update final state
  return {
    stateUpdates: {
      status: 'completed',
      completed_at: endTime.toISOString()
    },
    outputFiles: [`${workDir}/tuning-summary.md`],
    summary: `Tuning complete: ${state.quality_gate} with ${state.quality_score}/100 health score`
  };
}

function generateRecommendations(state) {
  const recommendations = [];

  // Based on remaining issues
  if (state.issues.some(i => i.type === 'context_explosion')) {
    recommendations.push('- **Context Management**: Consider implementing a context summarization agent to prevent token growth');
  }

  if (state.issues.some(i => i.type === 'memory_loss')) {
    recommendations.push('- **Constraint Tracking**: Add explicit constraint injection to each phase prompt');
  }

  if (state.issues.some(i => i.type === 'dataflow_break')) {
    recommendations.push('- **State Centralization**: Migrate to single state.json with schema validation');
  }

  if (state.issues.some(i => i.type === 'agent_failure')) {
    recommendations.push('- **Error Handling**: Wrap all Task calls in try-catch blocks');
  }

  // General recommendations
  if (state.iteration_count >= state.max_iterations) {
    recommendations.push('- **Deep Refactoring**: Consider architectural review if issues persist after multiple iterations');
  }

  if (state.quality_score < 80) {
    recommendations.push('- **Regular Tuning**: Schedule periodic skill-tuning runs to catch issues early');
  }

  if (recommendations.length === 0) {
    recommendations.push('- Skill is in good health! Monitor for regressions during future development.');
  }

  return recommendations.join('\n');
}
```
## State Updates

```javascript
return {
  stateUpdates: {
    status: 'completed',
    completed_at: '<timestamp>'
  }
};
```

## Output

- **File**: `tuning-summary.md`
- **Location**: `${workDir}/tuning-summary.md`
- **Format**: Markdown

## Error Handling

| Error Type | Recovery |
|------------|----------|
| Summary write failed | Write to alternative location |

## Next Actions

- None (terminal state)
# Action: Diagnose Agent Coordination

Analyze the target skill for agent coordination failures: call-chain fragility and result-passing issues.

## Purpose

- Detect fragile agent call patterns
- Identify result passing issues
- Find missing error handling in agent calls
- Analyze agent return format consistency

## Preconditions

- [ ] state.status === 'running'
- [ ] state.target_skill.path is set
- [ ] 'agent' in state.focus_areas OR state.focus_areas is empty

## Detection Patterns

### Pattern 1: Unhandled Agent Failures

```regex
# Task calls without try-catch or error handling
/Task\s*\(\s*\{[^}]*\}\s*\)(?![^;]*catch)/
```

### Pattern 2: Missing Return Validation

```regex
# Agent result used directly without validation
/const\s+\w+\s*=\s*(?:await\s+)?Task\([^)]+\);\s*(?!.*(?:if|try|JSON\.parse))/
```

### Pattern 3: Inconsistent Agent Configuration

```regex
# Different agent configurations in same skill
/subagent_type:\s*['"](\w+)['"]/g
```

### Pattern 4: Deeply Nested Agent Calls

```regex
# Agent calling another agent (nested)
/Task\s*\([^)]*prompt:[^)]*Task\s*\(/
```
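As a quick sanity check, the patterns above can be exercised directly in JavaScript. The snippets being matched below are hypothetical examples, not taken from any real skill:

```javascript
// Pattern 4: detect an agent prompt that embeds another Task call.
const nestedCall = /Task\s*\([^)]*prompt:[^)]*Task\s*\(/;

const flat = `Task({ subagent_type: 'worker', prompt: 'summarize file' })`;
const nested = `Task({ prompt: 'first run Task({ subagent_type: "inner" })' })`;

console.log(nestedCall.test(flat));   // false - flat call, no nesting
console.log(nestedCall.test(nested)); // true - nested call detected
```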
## Execution

```javascript
async function execute(state, workDir) {
  const skillPath = state.target_skill.path;
  const startTime = Date.now();
  const issues = [];
  const evidence = [];

  console.log(`Diagnosing agent coordination in ${skillPath}...`);

  // 1. Find all Task/agent calls
  const allFiles = Glob(`${skillPath}/**/*.md`);
  const agentCalls = [];
  const agentTypes = new Set();

  for (const file of allFiles) {
    const content = Read(file);
    const relativePath = file.replace(skillPath + '/', '');

    // Find Task calls
    const taskMatches = content.matchAll(/Task\s*\(\s*\{([^}]+)\}/g);
    for (const match of taskMatches) {
      const config = match[1];

      // Extract agent type
      const typeMatch = config.match(/subagent_type:\s*['"]([^'"]+)['"]/);
      const agentType = typeMatch ? typeMatch[1] : 'unknown';
      agentTypes.add(agentType);

      // Check for error handling context
      const hasErrorHandling = /try\s*\{.*Task|\.catch\(|await\s+Task.*\.then/s.test(
        content.slice(Math.max(0, match.index - 100), match.index + match[0].length + 100)
      );

      // Check for result validation
      const hasResultValidation = /JSON\.parse|if\s*\(\s*result|result\s*\?\./s.test(
        content.slice(match.index, match.index + match[0].length + 200)
      );

      // Check for background execution
      const runsInBackground = /run_in_background:\s*true/.test(config);

      agentCalls.push({
        file: relativePath,
        agentType,
        hasErrorHandling,
        hasResultValidation,
        runsInBackground,
        config: config.slice(0, 200)
      });
    }
  }

  // 2. Analyze agent call patterns
  const totalCalls = agentCalls.length;
  const callsWithoutErrorHandling = agentCalls.filter(c => !c.hasErrorHandling);
  const callsWithoutValidation = agentCalls.filter(c => !c.hasResultValidation);

  // Issue: Missing error handling
  if (callsWithoutErrorHandling.length > 0) {
    issues.push({
      id: `AGT-${issues.length + 1}`,
      type: 'agent_failure',
      severity: callsWithoutErrorHandling.length > 2 ? 'high' : 'medium',
      location: { file: 'multiple' },
      description: `${callsWithoutErrorHandling.length}/${totalCalls} agent calls lack error handling`,
      evidence: callsWithoutErrorHandling.slice(0, 3).map(c =>
        `${c.file}: ${c.agentType}`
      ),
      root_cause: 'Agent failures not caught, may crash workflow',
      impact: 'Unhandled agent errors cause cascading failures',
      suggested_fix: 'Wrap Task calls in try-catch with graceful fallback'
    });
    evidence.push({
      file: 'multiple',
      pattern: 'missing_error_handling',
      context: `${callsWithoutErrorHandling.length} calls affected`,
      severity: 'high'
    });
  }
  // Issue: Missing result validation
  if (callsWithoutValidation.length > 0) {
    issues.push({
      id: `AGT-${issues.length + 1}`,
      type: 'agent_failure',
      severity: 'medium',
      location: { file: 'multiple' },
      description: `${callsWithoutValidation.length}/${totalCalls} agent calls lack result validation`,
      evidence: callsWithoutValidation.slice(0, 3).map(c =>
        `${c.file}: ${c.agentType} result not validated`
      ),
      root_cause: 'Agent results used directly without type checking',
      impact: 'Invalid agent output may corrupt state',
      suggested_fix: 'Add JSON.parse with try-catch and schema validation'
    });
  }

  // 3. Check for inconsistent agent types usage
  if (agentTypes.size > 3 && state.target_skill.execution_mode === 'autonomous') {
    issues.push({
      id: `AGT-${issues.length + 1}`,
      type: 'agent_failure',
      severity: 'low',
      location: { file: 'multiple' },
      description: `Using ${agentTypes.size} different agent types`,
      evidence: [...agentTypes].slice(0, 5),
      root_cause: 'Multiple agent types increase coordination complexity',
      impact: 'Different agent behaviors may cause inconsistency',
      suggested_fix: 'Standardize on fewer agent types with clear roles'
    });
  }

  // 4. Check for nested agent calls
  for (const file of allFiles) {
    const content = Read(file);
    const relativePath = file.replace(skillPath + '/', '');

    // Detect nested Task calls
    const hasNestedTask = /Task\s*\([^)]*prompt:[^)]*Task\s*\(/s.test(content);

    if (hasNestedTask) {
      issues.push({
        id: `AGT-${issues.length + 1}`,
        type: 'agent_failure',
        severity: 'high',
        location: { file: relativePath },
        description: 'Nested agent calls detected',
        evidence: ['Agent prompt contains another Task call'],
        root_cause: 'Agent calls another agent, creating deep nesting',
        impact: 'Context explosion, hard to debug, unpredictable behavior',
        suggested_fix: 'Flatten agent calls, use orchestrator to coordinate'
      });
    }
  }

  // 5. Check SKILL.md for agent configuration consistency
  const skillMd = Read(`${skillPath}/SKILL.md`);

  // Check if allowed-tools includes Task
  const allowedTools = skillMd.match(/allowed-tools:\s*([^\n]+)/i);
  if (allowedTools && !allowedTools[1].includes('Task') && totalCalls > 0) {
    issues.push({
      id: `AGT-${issues.length + 1}`,
      type: 'agent_failure',
      severity: 'medium',
      location: { file: 'SKILL.md' },
      description: 'Task tool used but not declared in allowed-tools',
      evidence: [`${totalCalls} Task calls found, but Task not in allowed-tools`],
      root_cause: 'Tool declaration mismatch',
      impact: 'May cause runtime permission issues',
      suggested_fix: 'Add Task to allowed-tools in SKILL.md front matter'
    });
  }
  // 6. Check for agent result format consistency
  const returnFormats = new Set();
  for (const file of allFiles) {
    const content = Read(file);

    // Look for return format definitions
    const returnMatch = content.match(/\[RETURN\][^[]*|return\s*\{[^}]+\}/gi);
    if (returnMatch) {
      returnMatch.forEach(r => {
        const format = r.includes('JSON') ? 'json' :
                       r.includes('summary') ? 'summary' :
                       r.includes('file') ? 'file_path' : 'other';
        returnFormats.add(format);
      });
    }
  }

  if (returnFormats.size > 2) {
    issues.push({
      id: `AGT-${issues.length + 1}`,
      type: 'agent_failure',
      severity: 'medium',
      location: { file: 'multiple' },
      description: 'Inconsistent agent return formats',
      evidence: [...returnFormats],
      root_cause: 'Different agents return data in different formats',
      impact: 'Orchestrator must handle multiple format types',
      suggested_fix: 'Standardize return format: {status, output_file, summary}'
    });
  }

  // 7. Calculate severity
  const criticalCount = issues.filter(i => i.severity === 'critical').length;
  const highCount = issues.filter(i => i.severity === 'high').length;
  const severity = criticalCount > 0 ? 'critical' :
                   highCount > 1 ? 'high' :
                   highCount > 0 ? 'medium' :
                   issues.length > 0 ? 'low' : 'none';

  // 8. Write diagnosis result
  const diagnosisResult = {
    status: 'completed',
    issues_found: issues.length,
    severity: severity,
    execution_time_ms: Date.now() - startTime,
    details: {
      patterns_checked: [
        'error_handling',
        'result_validation',
        'agent_type_consistency',
        'nested_calls',
        'return_format_consistency'
      ],
      patterns_matched: evidence.map(e => e.pattern),
      evidence: evidence,
      agent_analysis: {
        total_agent_calls: totalCalls,
        unique_agent_types: agentTypes.size,
        calls_without_error_handling: callsWithoutErrorHandling.length,
        calls_without_validation: callsWithoutValidation.length,
        agent_types_used: [...agentTypes]
      },
      recommendations: [
        callsWithoutErrorHandling.length > 0
          ? 'Add try-catch to all Task calls' : null,
        callsWithoutValidation.length > 0
          ? 'Add result validation with JSON.parse and schema check' : null,
        agentTypes.size > 3
|
||||
? 'Consolidate agent types for consistency' : null
|
||||
].filter(Boolean)
|
||||
}
|
||||
};
|
||||
|
||||
Write(`${workDir}/diagnosis/agent-diagnosis.json`,
|
||||
JSON.stringify(diagnosisResult, null, 2));
|
||||
|
||||
return {
|
||||
stateUpdates: {
|
||||
'diagnosis.agent': diagnosisResult,
|
||||
issues: [...state.issues, ...issues]
|
||||
},
|
||||
outputFiles: [`${workDir}/diagnosis/agent-diagnosis.json`],
|
||||
summary: `Agent diagnosis: ${issues.length} issues found (severity: ${severity})`
|
||||
};
|
||||
}
|
||||
```
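The suggested fix for inconsistent return formats names a `{status, output_file, summary}` shape. A minimal sketch of a validator for that shape (the helper name `isStandardAgentResult` is an assumption, not part of the skill):

```javascript
// Hypothetical helper: checks that an agent result follows the
// standardized {status, output_file, summary} shape suggested above.
function isStandardAgentResult(result) {
  return result !== null &&
    typeof result === 'object' &&
    typeof result.status === 'string' &&
    typeof result.output_file === 'string' &&
    typeof result.summary === 'string';
}
```

An orchestrator could call this right after `Task(...)` returns and raise an `agent_failure` issue when it yields `false`.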
## State Updates

```javascript
return {
  stateUpdates: {
    'diagnosis.agent': {
      status: 'completed',
      issues_found: <count>,
      severity: '<critical|high|medium|low|none>',
      // ... full diagnosis result
    },
    issues: [...existingIssues, ...newIssues]
  }
};
```

## Error Handling

| Error Type | Recovery |
|------------|----------|
| Regex match error | Use simpler patterns |
| File access error | Skip and continue |

## Next Actions

- Success: action-generate-report
- Skipped: If 'agent' not in focus_areas
# Action: Diagnose Context Explosion

Analyze the target skill for context explosion issues: token accumulation and multi-turn dialogue bloat.

## Purpose

- Detect patterns that cause context growth
- Identify multi-turn accumulation points
- Find missing context compression mechanisms
- Measure potential token waste

## Preconditions

- [ ] state.status === 'running'
- [ ] state.target_skill.path is set
- [ ] 'context' in state.focus_areas OR state.focus_areas is empty

## Detection Patterns

### Pattern 1: Unbounded History Accumulation

```regex
# Patterns that suggest history accumulation
/\bhistory\b.*\.push\b/
/\bmessages\b.*\.concat\b/
/\bconversation\b.*\+=/
/\bappend.*context\b/i
```
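A quick check of how these expressions behave on two invented sample lines, one accumulating and one bounded:

```javascript
// Sample lines the Pattern 1 expressions should and should not flag.
const accumulating = 'history.push(turn); messages = messages.concat(reply);';
const bounded = 'const window = messages.slice(-10);';

const patterns = [
  /\bhistory\b.*\.push\b/,
  /\bmessages\b.*\.concat\b/
];

const flagsAccumulating = patterns.some(p => p.test(accumulating)); // true
const flagsBounded = patterns.some(p => p.test(bounded));           // false
```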
### Pattern 2: Full Content Passing

```regex
# Patterns that pass full content instead of references
/Read\([^)]+\).*\+.*Read\(/
/JSON\.stringify\(.*state\)/   # Full state serialization
/\$\{.*content\}/              # Template literal with full content
```

### Pattern 3: Missing Summarization

```regex
# Absence of compression/summarization
# Check for lack of: summarize, compress, truncate, slice
```

### Pattern 4: Agent Return Bloat

```regex
# Agent returning full content instead of path + summary
/return\s*\{[^}]*content:/
/return.*JSON\.stringify/
```
## Execution

```javascript
async function execute(state, workDir) {
  const skillPath = state.target_skill.path;
  const startTime = Date.now();
  const issues = [];
  const evidence = [];

  console.log(`Diagnosing context explosion in ${skillPath}...`);

  // 1. Scan all phase files
  const phaseFiles = Glob(`${skillPath}/phases/**/*.md`);

  for (const file of phaseFiles) {
    const content = Read(file);
    const relativePath = file.replace(skillPath + '/', '');

    // Check Pattern 1: History accumulation
    const historyPatterns = [
      /history\s*[.=].*(?:push|concat|append)/gi,
      /messages\s*=\s*\[.*\.\.\..*messages/gi,
      /conversation.*\+=/gi
    ];

    for (const pattern of historyPatterns) {
      const matches = content.match(pattern);
      if (matches) {
        issues.push({
          id: `CTX-${issues.length + 1}`,
          type: 'context_explosion',
          severity: 'high',
          location: { file: relativePath },
          description: 'Unbounded history accumulation detected',
          evidence: matches.slice(0, 3),
          root_cause: 'History/messages array grows without bounds',
          impact: 'Token count increases linearly with iterations',
          suggested_fix: 'Implement sliding window or summarization'
        });
        evidence.push({
          file: relativePath,
          pattern: 'history_accumulation',
          context: matches[0],
          severity: 'high'
        });
      }
    }

    // Check Pattern 2: Full content passing
    const contentPatterns = [
      /Read\s*\([^)]+\)\s*[+,]/g,
      /JSON\.stringify\s*\(\s*state\s*\)/g,
      /\$\{[^}]*content[^}]*\}/g
    ];

    for (const pattern of contentPatterns) {
      const matches = content.match(pattern);
      if (matches) {
        issues.push({
          id: `CTX-${issues.length + 1}`,
          type: 'context_explosion',
          severity: 'medium',
          location: { file: relativePath },
          description: 'Full content passed instead of reference',
          evidence: matches.slice(0, 3),
          root_cause: 'Entire file/state content included in prompts',
          impact: 'Unnecessary token consumption',
          suggested_fix: 'Pass file paths and summaries instead of full content'
        });
        evidence.push({
          file: relativePath,
          pattern: 'full_content_passing',
          context: matches[0],
          severity: 'medium'
        });
      }
    }

    // Check Pattern 3: Missing summarization
    const hasSummarization = /summariz|compress|truncat|slice.*context/i.test(content);
    const hasLongPrompts = content.length > 5000;

    if (hasLongPrompts && !hasSummarization) {
      issues.push({
        id: `CTX-${issues.length + 1}`,
        type: 'context_explosion',
        severity: 'medium',
        location: { file: relativePath },
        description: 'Long phase file without summarization mechanism',
        evidence: [`File length: ${content.length} chars`],
        root_cause: 'No context compression for large content',
        impact: 'Potential token overflow in long sessions',
        suggested_fix: 'Add context summarization before passing to agents'
      });
    }

    // Check Pattern 4: Agent return bloat
    const returnPatterns = /return\s*\{[^}]*(?:content|full_output|complete_result):/g;
    const returnMatches = content.match(returnPatterns);
    if (returnMatches) {
      issues.push({
        id: `CTX-${issues.length + 1}`,
        type: 'context_explosion',
        severity: 'high',
        location: { file: relativePath },
        description: 'Agent returns full content instead of path+summary',
        evidence: returnMatches.slice(0, 3),
        root_cause: 'Agent output includes complete content',
        impact: 'Context bloat when orchestrator receives full output',
        suggested_fix: 'Return {output_file, summary} instead of {content}'
      });
    }
  }

  // 2. Calculate severity
  const criticalCount = issues.filter(i => i.severity === 'critical').length;
  const highCount = issues.filter(i => i.severity === 'high').length;
  const severity = criticalCount > 0 ? 'critical' :
                   highCount > 2 ? 'high' :
                   highCount > 0 ? 'medium' :
                   issues.length > 0 ? 'low' : 'none';

  // 3. Write diagnosis result
  const diagnosisResult = {
    status: 'completed',
    issues_found: issues.length,
    severity: severity,
    execution_time_ms: Date.now() - startTime,
    details: {
      patterns_checked: [
        'history_accumulation',
        'full_content_passing',
        'missing_summarization',
        'agent_return_bloat'
      ],
      patterns_matched: evidence.map(e => e.pattern),
      evidence: evidence,
      recommendations: [
        issues.length > 0 ? 'Implement context summarization agent' : null,
        highCount > 0 ? 'Add sliding window for conversation history' : null,
        evidence.some(e => e.pattern === 'full_content_passing')
          ? 'Refactor to pass file paths instead of content' : null
      ].filter(Boolean)
    }
  };

  Write(`${workDir}/diagnosis/context-diagnosis.json`,
    JSON.stringify(diagnosisResult, null, 2));

  return {
    stateUpdates: {
      'diagnosis.context': diagnosisResult,
      issues: [...state.issues, ...issues],
      'issues_by_severity.critical': state.issues_by_severity.critical + criticalCount,
      'issues_by_severity.high': state.issues_by_severity.high + highCount
    },
    outputFiles: [`${workDir}/diagnosis/context-diagnosis.json`],
    summary: `Context diagnosis: ${issues.length} issues found (severity: ${severity})`
  };
}
```
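The "sliding window" fix suggested for history accumulation can be sketched as follows (function and parameter names are illustrative, not part of the skill):

```javascript
// Keep only the most recent `maxTurns` entries, replacing older ones
// with a single summary marker so early context is not silently lost.
function slideWindow(history, maxTurns) {
  if (history.length <= maxTurns) return history;
  const dropped = history.length - maxTurns;
  return [`[summary of ${dropped} earlier turns]`, ...history.slice(-maxTurns)];
}
```

In a real skill the marker entry would be produced by a summarization step rather than a placeholder string.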
## State Updates

```javascript
return {
  stateUpdates: {
    'diagnosis.context': {
      status: 'completed',
      issues_found: <count>,
      severity: '<critical|high|medium|low|none>',
      // ... full diagnosis result
    },
    issues: [...existingIssues, ...newIssues]
  }
};
```

## Error Handling

| Error Type | Recovery |
|------------|----------|
| File read error | Skip file, log warning |
| Pattern matching error | Use fallback patterns |
| Write error | Retry to alternative path |

## Next Actions

- Success: action-diagnose-memory (or next in focus_areas)
- Skipped: If 'context' not in focus_areas
# Action: Diagnose Data Flow Issues

Analyze the target skill for data flow disruption: state inconsistencies and format variations.

## Purpose

- Detect inconsistent data formats between phases
- Identify scattered state storage
- Find missing data contracts
- Measure state transition integrity

## Preconditions

- [ ] state.status === 'running'
- [ ] state.target_skill.path is set
- [ ] 'dataflow' in state.focus_areas OR state.focus_areas is empty

## Detection Patterns

### Pattern 1: Multiple Storage Locations

```regex
# Data written to multiple paths without centralization
/Write\s*\(\s*[`'"][^`'"]+[`'"]/g
```

### Pattern 2: Inconsistent Field Names

```regex
# Same concept with different names: title/name, id/identifier
```

### Pattern 3: Missing Schema Validation

```regex
# Absence of validation before state write
# Look for lack of: validate, schema, check, verify
```

### Pattern 4: Format Transformation Without Normalization

```regex
# Direct JSON.parse without error handling or normalization
/JSON\.parse\([^)]+\)(?!\s*\|\|)/
```
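The guarded parse whose absence Pattern 4 flags can be sketched like this (the helper name `safeParse` is an assumption):

```javascript
// Parse JSON and check the minimal shape before letting the value
// propagate into state; returns a fallback instead of throwing.
function safeParse(text, fallback) {
  try {
    const value = JSON.parse(text);
    return value !== null && typeof value === 'object' ? value : fallback;
  } catch {
    return fallback;
  }
}
```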
## Execution

```javascript
async function execute(state, workDir) {
  const skillPath = state.target_skill.path;
  const startTime = Date.now();
  const issues = [];
  const evidence = [];

  console.log(`Diagnosing data flow in ${skillPath}...`);

  // 1. Collect all Write operations to map data storage
  const allFiles = Glob(`${skillPath}/**/*.md`);
  const writeLocations = [];
  const readLocations = [];

  for (const file of allFiles) {
    const content = Read(file);
    const relativePath = file.replace(skillPath + '/', '');

    // Find Write operations
    const writeMatches = content.matchAll(/Write\s*\(\s*[`'"]([^`'"]+)[`'"]/g);
    for (const match of writeMatches) {
      writeLocations.push({
        file: relativePath,
        target: match[1],
        isStateFile: match[1].includes('state.json') || match[1].includes('config.json')
      });
    }

    // Find Read operations
    const readMatches = content.matchAll(/Read\s*\(\s*[`'"]([^`'"]+)[`'"]/g);
    for (const match of readMatches) {
      readLocations.push({
        file: relativePath,
        source: match[1]
      });
    }
  }

  // 2. Check for scattered state storage
  const stateTargets = writeLocations
    .filter(w => w.isStateFile)
    .map(w => w.target);

  const uniqueStateFiles = [...new Set(stateTargets)];

  if (uniqueStateFiles.length > 2) {
    issues.push({
      id: `DF-${issues.length + 1}`,
      type: 'dataflow_break',
      severity: 'high',
      location: { file: 'multiple' },
      description: `State stored in ${uniqueStateFiles.length} different locations`,
      evidence: uniqueStateFiles.slice(0, 5),
      root_cause: 'No centralized state management',
      impact: 'State inconsistency between phases',
      suggested_fix: 'Centralize state to single state.json with state manager'
    });
    evidence.push({
      file: 'multiple',
      pattern: 'scattered_state',
      context: uniqueStateFiles.join(', '),
      severity: 'high'
    });
  }

  // 3. Check for inconsistent field naming
  const fieldNamePatterns = {
    'name_vs_title': [/\.name\b/, /\.title\b/],
    'id_vs_identifier': [/\.id\b/, /\.identifier\b/],
    'status_vs_state': [/\.status\b/, /\.state\b/],
    'error_vs_errors': [/\.error\b/, /\.errors\b/]
  };

  const fieldUsage = {};

  for (const file of allFiles) {
    const content = Read(file);
    const relativePath = file.replace(skillPath + '/', '');

    for (const [patternName, patterns] of Object.entries(fieldNamePatterns)) {
      for (const pattern of patterns) {
        if (pattern.test(content)) {
          if (!fieldUsage[patternName]) fieldUsage[patternName] = [];
          fieldUsage[patternName].push({
            file: relativePath,
            pattern: pattern.toString()
          });
        }
      }
    }
  }

  for (const [patternName, usages] of Object.entries(fieldUsage)) {
    const uniquePatterns = [...new Set(usages.map(u => u.pattern))];
    if (uniquePatterns.length > 1) {
      issues.push({
        id: `DF-${issues.length + 1}`,
        type: 'dataflow_break',
        severity: 'medium',
        location: { file: 'multiple' },
        description: `Inconsistent field naming: ${patternName.replace('_vs_', ' vs ')}`,
        evidence: usages.slice(0, 3).map(u => `${u.file}: ${u.pattern}`),
        root_cause: 'Same concept referred to with different field names',
        impact: 'Data may be lost during field access',
        suggested_fix: 'Standardize to single field name, add normalization function'
      });
    }
  }

  // 4. Check for missing schema validation
  for (const file of allFiles) {
    const content = Read(file);
    const relativePath = file.replace(skillPath + '/', '');

    // Find JSON.parse without validation
    const unsafeParses = content.match(/JSON\.parse\s*\([^)]+\)(?!\s*\?\?|\s*\|\|)/g);
    const hasValidation = /validat|schema|type.*check/i.test(content);

    if (unsafeParses && unsafeParses.length > 0 && !hasValidation) {
      issues.push({
        id: `DF-${issues.length + 1}`,
        type: 'dataflow_break',
        severity: 'medium',
        location: { file: relativePath },
        description: 'JSON parsing without validation',
        evidence: unsafeParses.slice(0, 2),
        root_cause: 'No schema validation after parsing',
        impact: 'Invalid data may propagate through phases',
        suggested_fix: 'Add schema validation after JSON.parse'
      });
    }
  }

  // 5. Check state schema if exists
  const stateSchemaFile = Glob(`${skillPath}/phases/state-schema.md`)[0];
  if (stateSchemaFile) {
    const schemaContent = Read(stateSchemaFile);

    // Check for type definitions
    const hasTypeScript = /interface\s+\w+|type\s+\w+\s*=/i.test(schemaContent);
    const hasValidationFunction = /function\s+validate|validateState/i.test(schemaContent);

    if (hasTypeScript && !hasValidationFunction) {
      issues.push({
        id: `DF-${issues.length + 1}`,
        type: 'dataflow_break',
        severity: 'low',
        location: { file: 'phases/state-schema.md' },
        description: 'Type definitions without runtime validation',
        evidence: ['TypeScript interfaces defined but no validation function'],
        root_cause: 'Types are compile-time only, not enforced at runtime',
        impact: 'Schema violations may occur at runtime',
        suggested_fix: 'Add validateState() function using Zod or manual checks'
      });
    }
  } else if (state.target_skill.execution_mode === 'autonomous') {
    issues.push({
      id: `DF-${issues.length + 1}`,
      type: 'dataflow_break',
      severity: 'high',
      location: { file: 'phases/' },
      description: 'Autonomous skill missing state-schema.md',
      evidence: ['No state schema definition found'],
      root_cause: 'State structure undefined for orchestrator',
      impact: 'Inconsistent state handling across actions',
      suggested_fix: 'Create phases/state-schema.md with explicit type definitions'
    });
  }

  // 6. Check read-write alignment
  const writtenFiles = new Set(writeLocations.map(w => w.target));
  const readFiles = new Set(readLocations.map(r => r.source));

  const writtenButNotRead = [...writtenFiles].filter(f =>
    !readFiles.has(f) && !f.includes('output') && !f.includes('report')
  );

  if (writtenButNotRead.length > 0) {
    issues.push({
      id: `DF-${issues.length + 1}`,
      type: 'dataflow_break',
      severity: 'low',
      location: { file: 'multiple' },
      description: 'Files written but never read',
      evidence: writtenButNotRead.slice(0, 3),
      root_cause: 'Orphaned output files',
      impact: 'Wasted storage and potential confusion',
      suggested_fix: 'Remove unused writes or add reads where needed'
    });
  }

  // 7. Calculate severity
  const criticalCount = issues.filter(i => i.severity === 'critical').length;
  const highCount = issues.filter(i => i.severity === 'high').length;
  const severity = criticalCount > 0 ? 'critical' :
                   highCount > 1 ? 'high' :
                   highCount > 0 ? 'medium' :
                   issues.length > 0 ? 'low' : 'none';

  // 8. Write diagnosis result
  const diagnosisResult = {
    status: 'completed',
    issues_found: issues.length,
    severity: severity,
    execution_time_ms: Date.now() - startTime,
    details: {
      patterns_checked: [
        'scattered_state',
        'inconsistent_naming',
        'missing_validation',
        'read_write_alignment'
      ],
      patterns_matched: evidence.map(e => e.pattern),
      evidence: evidence,
      data_flow_map: {
        write_locations: writeLocations.length,
        read_locations: readLocations.length,
        unique_state_files: uniqueStateFiles.length
      },
      recommendations: [
        uniqueStateFiles.length > 2 ? 'Implement centralized state manager' : null,
        issues.some(i => i.description.includes('naming'))
          ? 'Create normalization layer for field names' : null,
        issues.some(i => i.description.includes('validation'))
          ? 'Add Zod or JSON Schema validation' : null
      ].filter(Boolean)
    }
  };

  Write(`${workDir}/diagnosis/dataflow-diagnosis.json`,
    JSON.stringify(diagnosisResult, null, 2));

  return {
    stateUpdates: {
      'diagnosis.dataflow': diagnosisResult,
      issues: [...state.issues, ...issues]
    },
    outputFiles: [`${workDir}/diagnosis/dataflow-diagnosis.json`],
    summary: `Data flow diagnosis: ${issues.length} issues found (severity: ${severity})`
  };
}
```
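The "normalization layer for field names" recommended above could look like this (the alias table is illustrative; a real skill would derive it from its own field conventions):

```javascript
// Map known field-name aliases onto one canonical spelling so that
// downstream phases can rely on a single field name.
const FIELD_ALIASES = { title: 'name', identifier: 'id' };

function normalizeFields(obj) {
  const normalized = {};
  for (const [key, value] of Object.entries(obj)) {
    normalized[FIELD_ALIASES[key] || key] = value;
  }
  return normalized;
}
```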
## State Updates

```javascript
return {
  stateUpdates: {
    'diagnosis.dataflow': {
      status: 'completed',
      issues_found: <count>,
      severity: '<critical|high|medium|low|none>',
      // ... full diagnosis result
    },
    issues: [...existingIssues, ...newIssues]
  }
};
```

## Error Handling

| Error Type | Recovery |
|------------|----------|
| Glob pattern error | Use fallback patterns |
| File read error | Skip and continue |

## Next Actions

- Success: action-diagnose-agent (or next in focus_areas)
- Skipped: If 'dataflow' not in focus_areas
# Action: Diagnose Documentation Structure

Detect documentation redundancy and conflict issues in the target skill.

## Purpose

- Detect duplicate definitions (state schema, mapping tables, type definitions, etc.)
- Detect conflicting definitions (inconsistent priority definitions, implementation/documentation drift, etc.)
- Generate recommendations for consolidation and conflict resolution

## Preconditions

- [ ] `state.status === 'running'`
- [ ] `state.target_skill !== null`
- [ ] `!state.diagnosis.docs`
- [ ] The user-specified focus_areas includes 'docs' or 'all', or a full diagnosis is required

## Detection Patterns

### DOC-RED-001: Duplicated Core Definitions

Detect state schemas, core interfaces, and the like defined in multiple places:

```javascript
async function detectDefinitionDuplicates(skillPath) {
  const patterns = [
    { name: 'state_schema', regex: /interface\s+(TuningState|State)\s*\{/g },
    { name: 'fix_strategy', regex: /type\s+FixStrategy\s*=/g },
    { name: 'issue_type', regex: /type:\s*['"]?(context_explosion|memory_loss|dataflow_break)/g }
  ];

  const files = Glob('**/*.md', { cwd: skillPath });
  const duplicates = [];

  for (const pattern of patterns) {
    const matches = [];
    for (const file of files) {
      const content = Read(`${skillPath}/${file}`);
      pattern.regex.lastIndex = 0; // reset: /g regexes keep state across test() calls
      if (pattern.regex.test(content)) {
        matches.push({ file, pattern: pattern.name });
      }
    }
    if (matches.length > 1) {
      duplicates.push({
        type: pattern.name,
        files: matches.map(m => m.file),
        severity: 'high'
      });
    }
  }

  return duplicates;
}
```
### DOC-RED-002: Duplicated Hardcoded Configuration

Detect hardcoded values in action files that duplicate the spec documents:

```javascript
async function detectHardcodedDuplicates(skillPath) {
  const actionFiles = Glob('phases/actions/*.md', { cwd: skillPath });
  const specFiles = Glob('specs/*.md', { cwd: skillPath });

  const duplicates = [];

  for (const actionFile of actionFiles) {
    const content = Read(`${skillPath}/${actionFile}`);

    // Detect hardcoded mapping objects
    const hardcodedPatterns = [
      /const\s+\w*[Mm]apping\s*=\s*\{/g,
      /patternMapping\s*=\s*\{/g,
      /strategyMapping\s*=\s*\{/g
    ];

    for (const pattern of hardcodedPatterns) {
      pattern.lastIndex = 0; // reset: /g regexes keep state across test() calls
      if (pattern.test(content)) {
        duplicates.push({
          type: 'hardcoded_mapping',
          file: actionFile,
          description: 'Hardcoded mapping may duplicate definitions in specs/',
          severity: 'high'
        });
      }
    }
  }

  return duplicates;
}
```
### DOC-CON-001: Conflicting Priority Definitions

Detect inconsistent definitions of priorities such as P0-P3 across files:

```javascript
async function detectPriorityConflicts(skillPath) {
  const files = Glob('**/*.md', { cwd: skillPath });
  const priorityDefs = {};

  const priorityPattern = /\*\*P(\d+)\*\*[:\s]+([^|]+)/g;

  for (const file of files) {
    const content = Read(`${skillPath}/${file}`);
    let match;
    while ((match = priorityPattern.exec(content)) !== null) {
      const priority = `P${match[1]}`;
      const definition = match[2].trim();

      if (!priorityDefs[priority]) {
        priorityDefs[priority] = [];
      }
      priorityDefs[priority].push({ file, definition });
    }
  }

  const conflicts = [];
  for (const [priority, defs] of Object.entries(priorityDefs)) {
    const uniqueDefs = [...new Set(defs.map(d => d.definition))];
    if (uniqueDefs.length > 1) {
      conflicts.push({
        key: priority,
        definitions: defs,
        severity: 'critical'
      });
    }
  }

  return conflicts;
}
```
### DOC-CON-002: Implementation/Documentation Drift

Detect inconsistencies between hardcoded values and documentation tables:

```javascript
async function detectImplementationDrift(skillPath) {
  // Compare category-mappings.json against the tables in specs/*.md
  const mappingsFile = `${skillPath}/specs/category-mappings.json`;

  if (!fileExists(mappingsFile)) {
    return []; // No centralized config; skip
  }

  const mappings = JSON.parse(Read(mappingsFile));
  const conflicts = [];

  // Compare against dimension-mapping.md
  const dimMapping = Read(`${skillPath}/specs/dimension-mapping.md`);

  for (const [category, config] of Object.entries(mappings.categories)) {
    // Check whether each strategy is mentioned in the documentation
    for (const strategy of config.strategies || []) {
      if (!dimMapping.includes(strategy)) {
        conflicts.push({
          type: 'mapping',
          key: `${category}.strategies`,
          issue: `Strategy ${strategy} is defined in JSON but not mentioned in the documentation`
        });
      }
    }
  }

  return conflicts;
}
```
## Execution

```javascript
async function executeDiagnosis(state, workDir) {
  console.log('=== Diagnosing Documentation Structure ===');

  const startTime = Date.now();
  const skillPath = state.target_skill.path;
  const issues = [];

  // 1. Detect redundancy
  const definitionDups = await detectDefinitionDuplicates(skillPath);
  const hardcodedDups = await detectHardcodedDuplicates(skillPath);

  for (const dup of [...definitionDups, ...hardcodedDups]) {
    issues.push({
      id: `DOC-RED-${issues.length + 1}`,
      type: 'doc_redundancy',
      severity: dup.severity,
      location: { files: dup.files || [dup.file] },
      description: dup.description || `${dup.type} is defined in multiple places`,
      evidence: dup.files || [dup.file],
      root_cause: 'No single source of truth',
      impact: 'Hard to maintain; inconsistencies arise easily',
      suggested_fix: 'consolidate_to_ssot'
    });
  }

  // 2. Detect conflicts
  const priorityConflicts = await detectPriorityConflicts(skillPath);
  const driftConflicts = await detectImplementationDrift(skillPath);

  for (const conflict of priorityConflicts) {
    issues.push({
      id: `DOC-CON-${issues.length + 1}`,
      type: 'doc_conflict',
      severity: 'critical',
      location: { files: conflict.definitions.map(d => d.file) },
      description: `${conflict.key} is defined inconsistently across files`,
      evidence: conflict.definitions.map(d => `${d.file}: ${d.definition}`),
      root_cause: 'Definitions updated without synchronization',
      impact: 'Unpredictable behavior',
      suggested_fix: 'reconcile_conflicting_definitions'
    });
  }

  // 3. Generate report
  const severity = issues.some(i => i.severity === 'critical') ? 'critical' :
                   issues.some(i => i.severity === 'high') ? 'high' :
                   issues.length > 0 ? 'medium' : 'none';

  const result = {
    status: 'completed',
    issues_found: issues.length,
    severity: severity,
    execution_time_ms: Date.now() - startTime,
    details: {
      patterns_checked: ['DOC-RED-001', 'DOC-RED-002', 'DOC-CON-001', 'DOC-CON-002'],
      patterns_matched: issues.map(i => i.id.split('-').slice(0, 2).join('-')),
      evidence: issues.flatMap(i => i.evidence),
      recommendations: generateRecommendations(issues)
    },
    redundancies: issues.filter(i => i.type === 'doc_redundancy'),
    conflicts: issues.filter(i => i.type === 'doc_conflict')
  };

  // Write the diagnosis result
  Write(`${workDir}/diagnosis/docs-diagnosis.json`, JSON.stringify(result, null, 2));

  return {
    stateUpdates: {
      'diagnosis.docs': result,
      issues: [...state.issues, ...issues]
    },
    outputFiles: [`${workDir}/diagnosis/docs-diagnosis.json`],
    summary: `Documentation diagnosis complete: ${issues.length} issues found (${severity})`
  };
}

function generateRecommendations(issues) {
  const recommendations = [];

  if (issues.some(i => i.type === 'doc_redundancy')) {
    recommendations.push('Use the consolidate_to_ssot strategy to merge duplicate definitions');
    recommendations.push('Consider creating specs/category-mappings.json to centralize configuration');
  }

  if (issues.some(i => i.type === 'doc_conflict')) {
    recommendations.push('Use the reconcile_conflicting_definitions strategy to resolve conflicts');
    recommendations.push('Establish a documentation synchronization check');
  }

  return recommendations;
}
```
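The "documentation synchronization check" recommended above can be sketched as a simple assertion that every key defined in a config also appears in the prose (the function name is illustrative):

```javascript
// Return the config keys that the documentation text never mentions;
// an empty result means docs and config are in sync for these keys.
function findUnsyncedKeys(configKeys, docText) {
  return configKeys.filter(key => !docText.includes(key));
}
```

Run on each `specs/*.md` file, this catches DOC-CON-002 style drift before it lands.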
## Output

### State Updates

```javascript
{
  stateUpdates: {
    'diagnosis.docs': {
      status: 'completed',
      issues_found: N,
      severity: 'critical|high|medium|low|none',
      redundancies: [...],
      conflicts: [...]
    },
    issues: [...existingIssues, ...newIssues]
  }
}
```

### Output Files

- `${workDir}/diagnosis/docs-diagnosis.json` - full diagnosis result

## Error Handling

| Error | Recovery |
|-------|----------|
| File read failure | Log a warning and continue with the remaining files |
| Regex match timeout | Skip the pattern and record it as skipped |
| JSON parse failure | Skip the config comparison and run pattern detection only |

## Next Actions

- If critical issues are found → proceed to action-propose-fixes first
- If no issues → continue to the next diagnosis or action-generate-report
# Action: Diagnose Long-tail Forgetting

Analyze the target skill for long-tail effect and constraint forgetting issues.

## Purpose

- Detect loss of early instructions in long execution chains
- Identify missing constraint propagation mechanisms
- Find weak goal alignment between phases
- Measure instruction retention across phases

## Preconditions

- [ ] state.status === 'running'
- [ ] state.target_skill.path is set
- [ ] 'memory' in state.focus_areas OR state.focus_areas is empty

## Detection Patterns

### Pattern 1: Missing Constraint References

```regex
# Phases that don't reference original requirements
# Look for absence of: requirements, constraints, original, initial, user_request
```

### Pattern 2: Goal Drift

```regex
# Later phases focus on the immediate task without global context
/\[TASK\][^[]*(?!\[CONSTRAINTS\]|\[REQUIREMENTS\])/
```

### Pattern 3: No Checkpoint Mechanism

```regex
# Absence of state preservation at key points
# Look for lack of: checkpoint, snapshot, preserve, restore
```
|
||||
|
||||
### Pattern 4: Implicit State Passing
|
||||
|
||||
```regex
|
||||
# State passed implicitly through conversation rather than explicitly
|
||||
/(?<!state\.)context\./
|
||||
```
|
||||
|
||||
## Execution
|
||||
|
||||
```javascript
async function execute(state, workDir) {
  const skillPath = state.target_skill.path;
  const startTime = Date.now();
  const issues = [];
  const evidence = [];

  console.log(`Diagnosing long-tail forgetting in ${skillPath}...`);

  // 1. Analyze phase chain for constraint propagation
  const phaseFiles = Glob(`${skillPath}/phases/*.md`)
    .filter(f => !f.includes('orchestrator') && !f.includes('state-schema'))
    .sort();

  // Extract phase order (for sequential) or action dependencies (for autonomous)
  const isAutonomous = state.target_skill.execution_mode === 'autonomous';

  // 2. Check each phase for constraint awareness
  let firstPhaseConstraints = [];

  for (let i = 0; i < phaseFiles.length; i++) {
    const file = phaseFiles[i];
    const content = Read(file);
    const relativePath = file.replace(skillPath + '/', '');
    const phaseNum = i + 1;

    // Extract constraints from first phase
    if (i === 0) {
      const constraintMatch = content.match(/\[CONSTRAINTS?\]([^[]*)/i);
      if (constraintMatch) {
        firstPhaseConstraints = constraintMatch[1]
          .split('\n')
          .filter(l => l.trim().startsWith('-'))
          .map(l => l.trim().replace(/^-\s*/, ''));
      }
    }

    // Check if later phases reference original constraints
    if (i > 0 && firstPhaseConstraints.length > 0) {
      const mentionsConstraints = firstPhaseConstraints.some(c =>
        content.toLowerCase().includes(c.toLowerCase().slice(0, 20))
      );

      if (!mentionsConstraints) {
        issues.push({
          id: `MEM-${issues.length + 1}`,
          type: 'memory_loss',
          severity: 'high',
          location: { file: relativePath, phase: `Phase ${phaseNum}` },
          description: `Phase ${phaseNum} does not reference original constraints`,
          evidence: [`Original constraints: ${firstPhaseConstraints.slice(0, 3).join(', ')}`],
          root_cause: 'Constraint information not propagated to later phases',
          impact: 'May produce output violating original requirements',
          suggested_fix: 'Add explicit constraint injection or reference to state.original_constraints'
        });
        evidence.push({
          file: relativePath,
          pattern: 'missing_constraint_reference',
          context: `Phase ${phaseNum} of ${phaseFiles.length}`,
          severity: 'high'
        });
      }
    }

    // Check for goal drift - task without constraints
    const hasTask = /\[TASK\]/i.test(content);
    const hasConstraints = /\[CONSTRAINTS?\]|\[REQUIREMENTS?\]|\[RULES?\]/i.test(content);

    if (hasTask && !hasConstraints && i > 1) {
      issues.push({
        id: `MEM-${issues.length + 1}`,
        type: 'memory_loss',
        severity: 'medium',
        location: { file: relativePath },
        description: 'Phase has TASK but no CONSTRAINTS/RULES section',
        evidence: ['Task defined without boundary constraints'],
        root_cause: 'Agent may not adhere to global constraints',
        impact: 'Potential goal drift from original intent',
        suggested_fix: 'Add [CONSTRAINTS] section referencing global rules'
      });
    }

    // Check for checkpoint mechanism
    const hasCheckpoint = /checkpoint|snapshot|preserve|savepoint/i.test(content);
    const isKeyPhase = i === Math.floor(phaseFiles.length / 2) || i === phaseFiles.length - 1;

    if (isKeyPhase && !hasCheckpoint && phaseFiles.length > 3) {
      issues.push({
        id: `MEM-${issues.length + 1}`,
        type: 'memory_loss',
        severity: 'low',
        location: { file: relativePath },
        description: 'Key phase without checkpoint mechanism',
        evidence: [`Phase ${phaseNum} is a key milestone but has no state preservation`],
        root_cause: 'Cannot recover from failures or verify constraint adherence',
        impact: 'No rollback capability if constraints violated',
        suggested_fix: 'Add checkpoint before major state changes'
      });
    }
  }

  // 3. Check for explicit state schema with constraints field
  const stateSchemaFile = Glob(`${skillPath}/phases/state-schema.md`)[0];
  if (stateSchemaFile) {
    const schemaContent = Read(stateSchemaFile);
    const hasConstraintsField = /constraints|requirements|original_request/i.test(schemaContent);

    if (!hasConstraintsField) {
      issues.push({
        id: `MEM-${issues.length + 1}`,
        type: 'memory_loss',
        severity: 'medium',
        location: { file: 'phases/state-schema.md' },
        description: 'State schema lacks constraints/requirements field',
        evidence: ['No dedicated field for preserving original requirements'],
        root_cause: 'State structure does not support constraint persistence',
        impact: 'Constraints may be lost during state transitions',
        suggested_fix: 'Add original_requirements field to state schema'
      });
    }
  }

  // 4. Check SKILL.md for constraint enforcement in execution flow
  const skillMd = Read(`${skillPath}/SKILL.md`);
  const hasConstraintVerification = /constraint.*verif|verif.*constraint|quality.*gate/i.test(skillMd);

  if (!hasConstraintVerification && phaseFiles.length > 3) {
    issues.push({
      id: `MEM-${issues.length + 1}`,
      type: 'memory_loss',
      severity: 'medium',
      location: { file: 'SKILL.md' },
      description: 'No constraint verification step in execution flow',
      evidence: ['Execution flow lacks quality gate or constraint check'],
      root_cause: 'No mechanism to verify output matches original intent',
      impact: 'Constraint violations may go undetected',
      suggested_fix: 'Add verification phase comparing output to original requirements'
    });
  }

  // 5. Calculate severity
  const criticalCount = issues.filter(i => i.severity === 'critical').length;
  const highCount = issues.filter(i => i.severity === 'high').length;
  const severity = criticalCount > 0 ? 'critical' :
    highCount > 2 ? 'high' :
    highCount > 0 ? 'medium' :
    issues.length > 0 ? 'low' : 'none';

  // 6. Write diagnosis result
  const diagnosisResult = {
    status: 'completed',
    issues_found: issues.length,
    severity: severity,
    execution_time_ms: Date.now() - startTime,
    details: {
      patterns_checked: [
        'constraint_propagation',
        'goal_drift',
        'checkpoint_mechanism',
        'state_schema_constraints'
      ],
      patterns_matched: evidence.map(e => e.pattern),
      evidence: evidence,
      phase_analysis: {
        total_phases: phaseFiles.length,
        first_phase_constraints: firstPhaseConstraints.length,
        phases_with_constraint_ref: phaseFiles.length - issues.filter(i =>
          i.description.includes('does not reference')).length
      },
      recommendations: [
        highCount > 0 ? 'Implement constraint injection at each phase' : null,
        issues.some(i => i.description.includes('checkpoint'))
          ? 'Add checkpoint/restore mechanism' : null,
        issues.some(i => i.description.includes('State schema'))
          ? 'Add original_requirements to state schema' : null
      ].filter(Boolean)
    }
  };

  Write(`${workDir}/diagnosis/memory-diagnosis.json`,
    JSON.stringify(diagnosisResult, null, 2));

  return {
    stateUpdates: {
      'diagnosis.memory': diagnosisResult,
      issues: [...state.issues, ...issues]
    },
    outputFiles: [`${workDir}/diagnosis/memory-diagnosis.json`],
    summary: `Memory diagnosis: ${issues.length} issues found (severity: ${severity})`
  };
}
```

## State Updates

```javascript
return {
  stateUpdates: {
    'diagnosis.memory': {
      status: 'completed',
      issues_found: <count>,
      severity: '<critical|high|medium|low|none>',
      // ... full diagnosis result
    },
    issues: [...existingIssues, ...newIssues]
  }
};
```

## Error Handling

| Error Type | Recovery |
|------------|----------|
| Phase file read error | Skip file, continue analysis |
| No phases found | Report as structure issue |

## Next Actions

- Success: action-diagnose-dataflow (or next in focus_areas)
- Skipped: If 'memory' not in focus_areas

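The suggested fix for MEM issues above, explicit constraint injection, can be sketched as below. The prompt layout and the `state.original_constraints` field name are assumptions for illustration, not the skill's actual template.

```javascript
// Sketch: prepend the original constraints (kept in state) to each phase
// prompt so later phases cannot silently drop them.
function injectConstraints(phasePrompt, state) {
  const constraints = state.original_constraints || [];
  if (constraints.length === 0) return phasePrompt;
  const block = ['[CONSTRAINTS]', ...constraints.map(c => `- ${c}`)].join('\n');
  return `${block}\n\n${phasePrompt}`;
}
```

Injecting at every phase trades a few tokens per call for guaranteed constraint visibility, which is usually the right trade in long chains.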
# Action: Diagnose Token Consumption

Analyze target skill for token consumption inefficiencies and output optimization opportunities.

## Purpose

Detect patterns that cause excessive token usage:
- Verbose prompts without compression
- Large state objects with unnecessary fields
- Full content passing instead of references
- Unbounded arrays without sliding windows
- Redundant file I/O (write-then-read patterns)

## Detection Patterns

| Pattern ID | Name | Detection Logic | Severity |
|------------|------|-----------------|----------|
| TKN-001 | Verbose Prompts | Prompt files > 4KB or high static/variable ratio | medium |
| TKN-002 | Excessive State Fields | State schema > 15 top-level keys | medium |
| TKN-003 | Full Content Passing | `Read()` result embedded directly in prompt | high |
| TKN-004 | Unbounded Arrays | `.push`/`concat` without `.slice(-N)` | high |
| TKN-005 | Redundant Write→Read | `Write(file)` followed by `Read(file)` | medium |

## Execution Steps

```javascript
async function diagnoseTokenConsumption(state, workDir) {
  const startTime = Date.now();
  const evidence = [];
  const skillPath = state.target_skill.path;

  // 1. Scan for verbose prompts (TKN-001)
  const mdFiles = Glob(`${skillPath}/**/*.md`);
  for (const file of mdFiles) {
    const content = Read(file);
    if (content.length > 4000) {
      evidence.push({
        file: file,
        pattern: 'TKN-001',
        severity: 'medium',
        context: `File size: ${content.length} chars (threshold: 4000)`
      });
    }
  }

  // 2. Check state schema field count (TKN-002)
  const stateSchema = Glob(`${skillPath}/**/state-schema.md`)[0];
  if (stateSchema) {
    const schemaContent = Read(stateSchema);
    const fieldMatches = schemaContent.match(/^\s*\w+:/gm) || [];
    if (fieldMatches.length > 15) {
      evidence.push({
        file: stateSchema,
        pattern: 'TKN-002',
        severity: 'medium',
        context: `State has ${fieldMatches.length} fields (threshold: 15)`
      });
    }
  }

  // 3. Detect full content passing (TKN-003)
  const fullContentPattern = /Read\([^)]+\)\s*[\+,]|`\$\{.*Read\(/g;
  for (const file of mdFiles) {
    const content = Read(file);
    const matches = content.match(fullContentPattern);
    if (matches) {
      evidence.push({
        file: file,
        pattern: 'TKN-003',
        severity: 'high',
        context: `Full content passing detected: ${matches[0]}`
      });
    }
  }

  // 4. Detect unbounded arrays (TKN-004)
  const unboundedPattern = /\.(push|concat)\([^)]+\)(?!.*\.slice)/g;
  for (const file of mdFiles) {
    const content = Read(file);
    const matches = content.match(unboundedPattern);
    if (matches) {
      evidence.push({
        file: file,
        pattern: 'TKN-004',
        severity: 'high',
        context: `Unbounded array growth: ${matches[0]}`
      });
    }
  }

  // 5. Detect write-then-read patterns (TKN-005)
  const writeReadPattern = /Write\([^)]+\)[\s\S]{0,100}Read\([^)]+\)/g;
  for (const file of mdFiles) {
    const content = Read(file);
    const matches = content.match(writeReadPattern);
    if (matches) {
      evidence.push({
        file: file,
        pattern: 'TKN-005',
        severity: 'medium',
        context: `Write-then-read pattern detected`
      });
    }
  }

  // Calculate severity
  const highCount = evidence.filter(e => e.severity === 'high').length;
  const mediumCount = evidence.filter(e => e.severity === 'medium').length;

  let severity = 'none';
  if (highCount > 0) severity = 'high';
  else if (mediumCount > 2) severity = 'medium';
  else if (mediumCount > 0) severity = 'low';

  return {
    status: 'completed',
    issues_found: evidence.length,
    severity: severity,
    execution_time_ms: Date.now() - startTime,
    details: {
      patterns_checked: ['TKN-001', 'TKN-002', 'TKN-003', 'TKN-004', 'TKN-005'],
      patterns_matched: [...new Set(evidence.map(e => e.pattern))],
      evidence: evidence,
      recommendations: generateRecommendations(evidence)
    }
  };
}

function generateRecommendations(evidence) {
  const recs = [];
  const patterns = [...new Set(evidence.map(e => e.pattern))];

  if (patterns.includes('TKN-001')) {
    recs.push('Apply prompt_compression: Extract static instructions to templates, use placeholders');
  }
  if (patterns.includes('TKN-002')) {
    recs.push('Apply state_field_reduction: Remove debug/cache fields, consolidate related fields');
  }
  if (patterns.includes('TKN-003')) {
    recs.push('Apply lazy_loading: Pass file paths instead of content, let agents read if needed');
  }
  if (patterns.includes('TKN-004')) {
    recs.push('Apply sliding_window: Add .slice(-N) to array operations to bound growth');
  }
  if (patterns.includes('TKN-005')) {
    recs.push('Apply output_minimization: Use in-memory data passing, eliminate temporary files');
  }

  return recs;
}
```

## Output

Write diagnosis result to `${workDir}/diagnosis/token-consumption-diagnosis.json`:

```json
{
  "status": "completed",
  "issues_found": 3,
  "severity": "medium",
  "execution_time_ms": 1500,
  "details": {
    "patterns_checked": ["TKN-001", "TKN-002", "TKN-003", "TKN-004", "TKN-005"],
    "patterns_matched": ["TKN-001", "TKN-003"],
    "evidence": [
      {
        "file": "phases/orchestrator.md",
        "pattern": "TKN-001",
        "severity": "medium",
        "context": "File size: 5200 chars (threshold: 4000)"
      }
    ],
    "recommendations": [
      "Apply prompt_compression: Extract static instructions to templates"
    ]
  }
}
```

## State Update

```javascript
updateState({
  diagnosis: {
    ...state.diagnosis,
    token_consumption: diagnosisResult
  }
});
```

## Fix Strategies Mapping

| Pattern | Strategy | Implementation |
|---------|----------|----------------|
| TKN-001 | prompt_compression | Extract static text to variables, use template inheritance |
| TKN-002 | state_field_reduction | Audit and consolidate fields, remove non-essential data |
| TKN-003 | lazy_loading | Pass paths instead of content, agents load when needed |
| TKN-004 | sliding_window | Add `.slice(-N)` after push/concat operations |
| TKN-005 | output_minimization | Use return values instead of file relay |

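The sliding_window strategy for TKN-004 amounts to bounding every append. A minimal sketch (the helper name and default window size are illustrative):

```javascript
// Sketch of sliding_window: append, then keep only the most recent
// maxLen entries so arrays in state cannot grow without bound.
function appendBounded(arr, items, maxLen = 50) {
  return [...arr, ...items].slice(-maxLen);
}
```

Replacing bare `state.history.push(entry)` calls with a bounded append like this is what turns a TKN-004 finding into a pass.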
# Action: Gemini Analysis

Dynamically invoke the Gemini CLI for deep analysis, selecting the analysis type from the user's request or from diagnosis results.

## Role

- Receive the user-specified analysis request, or infer it from diagnosis results
- Build the appropriate CLI command
- Run the analysis and parse its results
- Update state for downstream actions

## Preconditions

- `state.status === 'running'`
- Any one of the following holds:
  - `state.gemini_analysis_requested === true` (requested by the user)
  - `state.issues.some(i => i.severity === 'critical')` (critical issues found)
  - `state.analysis_type !== null` (analysis type already specified)

## Analysis Types

### 1. root_cause - Root Cause Analysis

Deep analysis of the issue described by the user.

```javascript
const analysisPrompt = `
PURPOSE: Identify root cause of skill execution issue: ${state.user_issue_description}
TASK:
• Analyze skill structure at: ${state.target_skill.path}
• Identify anti-patterns in phase files
• Trace data flow through state management
• Check agent coordination patterns
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON with structure:
{
  "root_causes": [
    { "id": "RC-001", "description": "...", "severity": "high", "evidence": ["file:line"] }
  ],
  "patterns_found": [
    { "pattern": "...", "type": "anti-pattern|best-practice", "locations": [] }
  ],
  "recommendations": [
    { "priority": 1, "action": "...", "rationale": "..." }
  ]
}
RULES: Focus on execution flow, state management, agent coordination
`;
```

### 2. architecture - Architecture Review

Evaluate the skill's overall architecture design.

```javascript
const analysisPrompt = `
PURPOSE: Review skill architecture for: ${state.target_skill.name}
TASK:
• Evaluate phase decomposition and responsibility separation
• Check state schema design and data flow
• Assess agent coordination and error handling
• Review scalability and maintainability
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: Markdown report with sections:
- Executive Summary
- Phase Architecture Assessment
- State Management Evaluation
- Agent Coordination Analysis
- Improvement Recommendations (prioritized)
RULES: Focus on modularity, extensibility, maintainability
`;
```

### 3. prompt_optimization - Prompt Optimization

Analyze and optimize the prompts in each phase.

```javascript
const analysisPrompt = `
PURPOSE: Optimize prompts in skill phases for better output quality
TASK:
• Analyze existing prompts for clarity and specificity
• Identify ambiguous instructions
• Check output format specifications
• Evaluate constraint communication
MODE: analysis
CONTEXT: @phases/**/*.md
EXPECTED: JSON with structure:
{
  "prompt_issues": [
    { "file": "...", "issue": "...", "severity": "...", "suggestion": "..." }
  ],
  "optimized_prompts": [
    { "file": "...", "original": "...", "optimized": "...", "rationale": "..." }
  ]
}
RULES: Preserve intent, improve clarity, add structured output requirements
`;
```

### 4. performance - Performance Analysis

Analyze token consumption and execution efficiency.

```javascript
const analysisPrompt = `
PURPOSE: Analyze performance bottlenecks in skill execution
TASK:
• Estimate token consumption per phase
• Identify redundant data passing
• Check for unnecessary full-content transfers
• Evaluate caching opportunities
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON with structure:
{
  "token_estimates": [
    { "phase": "...", "estimated_tokens": 1000, "breakdown": {} }
  ],
  "bottlenecks": [
    { "type": "...", "location": "...", "impact": "high|medium|low", "fix": "..." }
  ],
  "optimization_suggestions": []
}
RULES: Focus on token efficiency, reduce redundancy
`;
```

### 5. custom - Custom Analysis

A user-specified custom analysis request.

```javascript
const analysisPrompt = `
PURPOSE: ${state.custom_analysis_purpose}
TASK: ${state.custom_analysis_tasks}
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: ${state.custom_analysis_expected}
RULES: ${state.custom_analysis_rules || 'Follow best practices'}
`;
```

## Execution

```javascript
async function executeGeminiAnalysis(state, workDir) {
  // 1. Determine the analysis type
  const analysisType = state.analysis_type || determineAnalysisType(state);

  // 2. Build the prompt
  const prompt = buildAnalysisPrompt(analysisType, state);

  // 3. Build the CLI command
  const cliCommand = `ccw cli -p "${escapeForShell(prompt)}" --tool gemini --mode analysis --cd "${state.target_skill.path}"`;

  console.log(`Executing Gemini analysis: ${analysisType}`);
  console.log(`Command: ${cliCommand}`);

  // 4. Run the CLI (in the background)
  const result = Bash({
    command: cliCommand,
    run_in_background: true,
    timeout: 300000 // 5 minutes
  });

  // 5. Await results
  // Note: per the CLAUDE.md guidance, stop polling once the CLI runs in the background;
  // results are written to state after the CLI completes

  return {
    stateUpdates: {
      gemini_analysis: {
        type: analysisType,
        status: 'running',
        started_at: new Date().toISOString(),
        task_id: result.task_id
      }
    },
    outputFiles: [],
    summary: `Gemini ${analysisType} analysis started in background`
  };
}

function determineAnalysisType(state) {
  // Infer the analysis type from state
  if (state.user_issue_description && state.user_issue_description.length > 100) {
    return 'root_cause';
  }
  if (state.issues.some(i => i.severity === 'critical')) {
    return 'root_cause';
  }
  if (state.focus_areas.includes('architecture')) {
    return 'architecture';
  }
  if (state.focus_areas.includes('prompt')) {
    return 'prompt_optimization';
  }
  if (state.focus_areas.includes('performance')) {
    return 'performance';
  }
  return 'root_cause'; // default
}

function buildAnalysisPrompt(type, state) {
  const templates = {
    root_cause: () => `
PURPOSE: Identify root cause of skill execution issue: ${state.user_issue_description}
TASK: • Analyze skill structure • Identify anti-patterns • Trace data flow issues • Check agent coordination
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON { root_causes: [], patterns_found: [], recommendations: [] }
RULES: Focus on execution flow, be specific about file:line locations
`,
    architecture: () => `
PURPOSE: Review skill architecture for ${state.target_skill.name}
TASK: • Evaluate phase decomposition • Check state design • Assess agent coordination • Review extensibility
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: Markdown architecture assessment report
RULES: Focus on modularity and maintainability
`,
    prompt_optimization: () => `
PURPOSE: Optimize prompts in skill for better output quality
TASK: • Analyze prompt clarity • Check output specifications • Evaluate constraint handling
MODE: analysis
CONTEXT: @phases/**/*.md
EXPECTED: JSON { prompt_issues: [], optimized_prompts: [] }
RULES: Preserve intent, improve clarity
`,
    performance: () => `
PURPOSE: Analyze performance bottlenecks in skill
TASK: • Estimate token consumption • Identify redundancy • Check data transfer efficiency
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON { token_estimates: [], bottlenecks: [], optimization_suggestions: [] }
RULES: Focus on token efficiency
`,
    custom: () => `
PURPOSE: ${state.custom_analysis_purpose}
TASK: ${state.custom_analysis_tasks}
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: ${state.custom_analysis_expected}
RULES: ${state.custom_analysis_rules || 'Best practices'}
`
  };

  return templates[type]();
}

function escapeForShell(str) {
  // Escape shell special characters; backslashes must be escaped first
  // so the escapes added below are not themselves doubled
  return str
    .replace(/\\/g, '\\\\')
    .replace(/"/g, '\\"')
    .replace(/\$/g, '\\$')
    .replace(/`/g, '\\`');
}
```

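Double-quote escaping like the above is fragile once inputs contain unusual characters. A common alternative, shown here as a self-contained sketch (not what the `ccw` CLI requires), is to single-quote the whole argument and escape only embedded single quotes:

```javascript
// Sketch: POSIX single-quote quoting. Wrap the string in '...' and
// replace each embedded ' with '\'' (close quote, escaped quote, reopen),
// so no other character is special to the shell.
function shellQuote(str) {
  return "'" + str.replace(/'/g, "'\\''") + "'";
}
```

The trade-off: single-quoting neutralizes `$`, backticks, and backslashes for free, at the cost of slightly noisier output when quotes appear in the prompt.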
## Output

### State Updates

```javascript
{
  gemini_analysis: {
    type: 'root_cause' | 'architecture' | 'prompt_optimization' | 'performance' | 'custom',
    status: 'running' | 'completed' | 'failed',
    started_at: '2024-01-01T00:00:00Z',
    completed_at: '2024-01-01T00:05:00Z',
    task_id: 'xxx',
    result: { /* analysis result */ },
    error: null
  },
  // analysis findings are merged into issues
  issues: [
    ...state.issues,
    ...newIssuesFromAnalysis
  ]
}
```

### Output Files

- `${workDir}/diagnosis/gemini-analysis-${type}.json` - Raw analysis result
- `${workDir}/diagnosis/gemini-analysis-${type}.md` - Formatted report

## Post-Execution

After the analysis completes:
1. Parse the CLI output into structured data
2. Extract newly found issues and merge them into state.issues
3. Update recommendations in state
4. Trigger the next action (usually action-generate-report or action-propose-fixes)

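The parse-and-merge steps above (items 1-2) can be sketched as follows. The JSON shape mirrors the EXPECTED structure of the root_cause template; the `GEM-` id prefix, the `gemini_finding` type, and the `medium` severity fallback are assumptions for illustration.

```javascript
// Sketch: parse the CLI's JSON output and merge its root causes into
// state.issues as new issue entries. Field names follow the EXPECTED
// block of the root_cause prompt; everything else is assumed.
function mergeAnalysis(state, rawOutput) {
  const parsed = JSON.parse(rawOutput);
  const newIssues = (parsed.root_causes || []).map((rc, idx) => ({
    id: `GEM-${idx + 1}`,
    type: 'gemini_finding',
    severity: rc.severity || 'medium',
    description: rc.description,
    evidence: rc.evidence || []
  }));
  return { ...state, issues: [...state.issues, ...newIssues] };
}
```

Returning a new state object rather than mutating `state` keeps the merge compatible with the stateUpdates pattern used elsewhere in this skill.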
## Error Handling

| Error | Recovery |
|-------|----------|
| CLI timeout | Retry once; if it still fails, skip the Gemini analysis |
| Parse failure | Save the raw output for manual handling |
| No result | Mark as skipped, continue the flow |

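The first recovery rule in the table above (retry once, then skip) can be sketched as a small wrapper; the result shape here is an assumption, not the skill's actual contract.

```javascript
// Sketch: run an async analysis, retry once on failure, and mark the
// step as skipped if the retry also fails (matches the recovery table).
async function runWithRetry(analyze) {
  for (let attempt = 1; attempt <= 2; attempt++) {
    try {
      return { status: 'completed', result: await analyze() };
    } catch (err) {
      if (attempt === 2) return { status: 'skipped', error: err.message };
    }
  }
}
```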
## User Interaction

If `state.analysis_type === null` and the type cannot be inferred automatically, ask the user:

```javascript
AskUserQuestion({
  questions: [{
    question: 'Select a Gemini analysis type',
    header: 'Analysis type',
    options: [
      { label: 'Root cause analysis', description: 'Deep analysis of the user-described issue' },
      { label: 'Architecture review', description: 'Evaluate the overall architecture design' },
      { label: 'Prompt optimization', description: 'Analyze and optimize phase prompts' },
      { label: 'Performance analysis', description: 'Analyze token consumption and execution efficiency' }
    ],
    multiSelect: false
  }]
});
```

# Action: Generate Consolidated Report

Generate a comprehensive tuning report merging all diagnosis results with prioritized recommendations.

## Purpose

- Merge all diagnosis results into a unified report
- Prioritize issues by severity and impact
- Generate actionable recommendations
- Create a human-readable markdown report

## Preconditions

- [ ] state.status === 'running'
- [ ] All diagnoses in focus_areas are completed
- [ ] state.issues.length > 0 OR generate summary report

## Execution

```javascript
async function execute(state, workDir) {
  console.log('Generating consolidated tuning report...');

  const targetSkill = state.target_skill;
  const issues = state.issues;

  // 1. Group issues by type
  const issuesByType = {
    context_explosion: issues.filter(i => i.type === 'context_explosion'),
    memory_loss: issues.filter(i => i.type === 'memory_loss'),
    dataflow_break: issues.filter(i => i.type === 'dataflow_break'),
    agent_failure: issues.filter(i => i.type === 'agent_failure')
  };

  // 2. Group issues by severity
  const issuesBySeverity = {
    critical: issues.filter(i => i.severity === 'critical'),
    high: issues.filter(i => i.severity === 'high'),
    medium: issues.filter(i => i.severity === 'medium'),
    low: issues.filter(i => i.severity === 'low')
  };

  // 3. Calculate overall health score
  const weights = { critical: 25, high: 15, medium: 5, low: 1 };
  const deductions = Object.entries(issuesBySeverity)
    .reduce((sum, [sev, arr]) => sum + arr.length * weights[sev], 0);
  const healthScore = Math.max(0, 100 - deductions);

  // 4. Generate report content
  const report = `# Skill Tuning Report

**Target Skill**: ${targetSkill.name}
**Path**: ${targetSkill.path}
**Execution Mode**: ${targetSkill.execution_mode}
**Generated**: ${new Date().toISOString()}

---

## Executive Summary

| Metric | Value |
|--------|-------|
| Health Score | ${healthScore}/100 |
| Total Issues | ${issues.length} |
| Critical | ${issuesBySeverity.critical.length} |
| High | ${issuesBySeverity.high.length} |
| Medium | ${issuesBySeverity.medium.length} |
| Low | ${issuesBySeverity.low.length} |

### User Reported Issue
> ${state.user_issue_description}

### Overall Assessment
${healthScore >= 80 ? '✅ Skill is in good health with minor issues.' :
  healthScore >= 60 ? '⚠️ Skill has significant issues requiring attention.' :
  healthScore >= 40 ? '🔶 Skill has serious issues affecting reliability.' :
  '❌ Skill has critical issues requiring immediate fixes.'}

---

## Diagnosis Results

### Context Explosion Analysis
${state.diagnosis.context ?
  `- **Status**: ${state.diagnosis.context.status}
- **Severity**: ${state.diagnosis.context.severity}
- **Issues Found**: ${state.diagnosis.context.issues_found}
- **Key Findings**: ${state.diagnosis.context.details.recommendations.join('; ') || 'None'}` :
  '_Not analyzed_'}

### Long-tail Memory Analysis
${state.diagnosis.memory ?
  `- **Status**: ${state.diagnosis.memory.status}
- **Severity**: ${state.diagnosis.memory.severity}
- **Issues Found**: ${state.diagnosis.memory.issues_found}
- **Key Findings**: ${state.diagnosis.memory.details.recommendations.join('; ') || 'None'}` :
  '_Not analyzed_'}

### Data Flow Analysis
${state.diagnosis.dataflow ?
  `- **Status**: ${state.diagnosis.dataflow.status}
- **Severity**: ${state.diagnosis.dataflow.severity}
- **Issues Found**: ${state.diagnosis.dataflow.issues_found}
- **Key Findings**: ${state.diagnosis.dataflow.details.recommendations.join('; ') || 'None'}` :
  '_Not analyzed_'}

### Agent Coordination Analysis
${state.diagnosis.agent ?
  `- **Status**: ${state.diagnosis.agent.status}
- **Severity**: ${state.diagnosis.agent.severity}
- **Issues Found**: ${state.diagnosis.agent.issues_found}
- **Key Findings**: ${state.diagnosis.agent.details.recommendations.join('; ') || 'None'}` :
  '_Not analyzed_'}

---

## Critical & High Priority Issues

${issuesBySeverity.critical.length + issuesBySeverity.high.length === 0 ?
  '_No critical or high priority issues found._' :
  [...issuesBySeverity.critical, ...issuesBySeverity.high].map((issue, i) => `
### ${i + 1}. [${issue.severity.toUpperCase()}] ${issue.description}

- **ID**: ${issue.id}
- **Type**: ${issue.type}
- **Location**: ${typeof issue.location === 'object' ? issue.location.file : issue.location}
- **Root Cause**: ${issue.root_cause}
- **Impact**: ${issue.impact}
- **Suggested Fix**: ${issue.suggested_fix}

**Evidence**:
${issue.evidence.map(e => `- \`${e}\``).join('\n')}
`).join('\n')}

---

## Medium & Low Priority Issues

${issuesBySeverity.medium.length + issuesBySeverity.low.length === 0 ?
  '_No medium or low priority issues found._' :
  [...issuesBySeverity.medium, ...issuesBySeverity.low].map((issue, i) => `
### ${i + 1}. [${issue.severity.toUpperCase()}] ${issue.description}

- **ID**: ${issue.id}
- **Type**: ${issue.type}
- **Suggested Fix**: ${issue.suggested_fix}
`).join('\n')}

---

## Recommended Fix Order

Based on severity and dependencies, apply fixes in this order:

${[...issuesBySeverity.critical, ...issuesBySeverity.high, ...issuesBySeverity.medium]
  .slice(0, 10)
  .map((issue, i) => `${i + 1}. **${issue.id}**: ${issue.suggested_fix}`)
  .join('\n')}

---

## Quality Gates

| Gate | Threshold | Current | Status |
|------|-----------|---------|--------|
| Critical Issues | 0 | ${issuesBySeverity.critical.length} | ${issuesBySeverity.critical.length === 0 ? '✅ PASS' : '❌ FAIL'} |
| High Issues | ≤ 2 | ${issuesBySeverity.high.length} | ${issuesBySeverity.high.length <= 2 ? '✅ PASS' : '❌ FAIL'} |
| Health Score | ≥ 60 | ${healthScore} | ${healthScore >= 60 ? '✅ PASS' : '❌ FAIL'} |

**Overall Quality Gate**: ${
  issuesBySeverity.critical.length === 0 &&
  issuesBySeverity.high.length <= 2 &&
  healthScore >= 60 ? '✅ PASS' : '❌ FAIL'}

---

*Report generated by skill-tuning*
|
||||
`;
|
||||
|
||||
// 5. Write report
|
||||
Write(`${workDir}/tuning-report.md`, report);
|
||||
|
||||
// 6. Calculate quality gate
|
||||
const qualityGate = issuesBySeverity.critical.length === 0 &&
|
||||
issuesBySeverity.high.length <= 2 &&
|
||||
healthScore >= 60 ? 'pass' :
|
||||
healthScore >= 40 ? 'review' : 'fail';
|
||||
|
||||
return {
|
||||
stateUpdates: {
|
||||
quality_score: healthScore,
|
||||
quality_gate: qualityGate,
|
||||
issues_by_severity: {
|
||||
critical: issuesBySeverity.critical.length,
|
||||
high: issuesBySeverity.high.length,
|
||||
medium: issuesBySeverity.medium.length,
|
||||
low: issuesBySeverity.low.length
|
||||
}
|
||||
},
|
||||
outputFiles: [`${workDir}/tuning-report.md`],
|
||||
summary: `Report generated: ${issues.length} issues, health score ${healthScore}/100, gate: ${qualityGate}`
|
||||
};
|
||||
}
|
||||
```
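The pass/review/fail tiering in step 6 can be factored into a small pure helper, which makes the thresholds (zero critical issues, at most two high issues, health score ≥ 60 to pass; health score ≥ 40 to warrant review) easy to test in isolation. This is a sketch; the function name is illustrative, not part of the skill's API.

```javascript
// Sketch of the step-6 quality-gate tiering as a pure function.
// Thresholds mirror the Quality Gates table in the report.
function computeQualityGate(criticalCount, highCount, healthScore) {
  // Passing requires all three gates simultaneously.
  if (criticalCount === 0 && highCount <= 2 && healthScore >= 60) {
    return 'pass';
  }
  // A borderline health score flags the skill for manual review.
  return healthScore >= 40 ? 'review' : 'fail';
}
```

Note that a single critical issue forces at best a `review` outcome regardless of the health score, matching the strict ordering of the gates.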

## State Updates

```javascript
return {
  stateUpdates: {
    quality_score: <0-100>,
    quality_gate: '<pass|review|fail>',
    issues_by_severity: { critical: N, high: N, medium: N, low: N }
  }
};
```

## Error Handling

| Error Type | Recovery |
|------------|----------|
| Write error | Retry to an alternative path |
| Empty issues | Generate a summary noting no issues were found |

## Next Actions

- If issues.length > 0: action-propose-fixes
- If issues.length === 0: action-complete
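The routing above keys off the flat issue list, while the report sections consume an `issuesBySeverity` map. How that map is built is not shown in this chunk; a minimal sketch, assuming issues carry a `severity` field as referenced throughout the report template, could look like this:

```javascript
// Sketch: group a flat issue list into the issuesBySeverity shape the
// report template consumes. The input shape is an assumption drawn from
// the fields referenced above (id, severity, etc.).
function groupBySeverity(issues) {
  const groups = { critical: [], high: [], medium: [], low: [] };
  for (const issue of issues) {
    // Tolerate unexpected severity values rather than dropping issues.
    if (!groups[issue.severity]) {
      groups[issue.severity] = [];
    }
    groups[issue.severity].push(issue);
  }
  return groups;
}
```

Pre-seeding all four buckets keeps the report template simple: every `issuesBySeverity.<level>.length` access is safe even when a level has no issues.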

---

.claude/skills/skill-tuning/phases/actions/action-init.md (new file, 149 lines)
@@ -0,0 +1,149 @@

# Action: Initialize Tuning Session

Initialize the skill-tuning session by collecting target skill information, creating the work directory structure, and setting up initial state.

## Purpose

- Identify the target skill to tune
- Collect the user's problem description
- Create the work directory structure
- Back up the original skill files
- Initialize state for the orchestrator

## Preconditions

- [ ] state.status === 'pending'

## Execution

```javascript
async function execute(state, workDir) {
  // 1. Ask user for the target skill
  const skillInput = await AskUserQuestion({
    questions: [{
      question: "Which skill do you want to tune?",
      header: "Target Skill",
      multiSelect: false,
      options: [
        { label: "Specify path", description: "Enter skill directory path" }
      ]
    }]
  });

  const skillPath = skillInput["Target Skill"];

  // 2. Validate that the skill exists
  const skillMdPath = `${skillPath}/SKILL.md`;
  if (!Glob(skillMdPath).length) {
    throw new Error(`Invalid skill path: ${skillPath} - SKILL.md not found`);
  }

  // 3. Read skill metadata from the SKILL.md frontmatter;
  //    fall back to the directory name when no frontmatter exists
  const skillMd = Read(skillMdPath);
  const frontMatterMatch = skillMd.match(/^---\n([\s\S]*?)\n---/);
  const skillName = frontMatterMatch
    ? frontMatterMatch[1].match(/name:\s*(.+)/)?.[1]?.trim()
    : skillPath.split('/').pop();

  // 4. Detect execution mode: an orchestrator phase implies autonomous mode
  const hasOrchestrator = Glob(`${skillPath}/phases/orchestrator.md`).length > 0;
  const executionMode = hasOrchestrator ? 'autonomous' : 'sequential';

  // 5. Scan the skill structure (paths relative to the skill root)
  const phases = Glob(`${skillPath}/phases/**/*.md`).map(f => f.replace(skillPath + '/', ''));
  const specs = Glob(`${skillPath}/specs/**/*.md`).map(f => f.replace(skillPath + '/', ''));

  // 6. Ask for the problem description
  const issueInput = await AskUserQuestion({
    questions: [{
      question: "Describe the issue or what you want to optimize:",
      header: "Issue",
      multiSelect: false,
      options: [
        { label: "Context grows too large", description: "Token explosion over multiple turns" },
        { label: "Instructions forgotten", description: "Early constraints lost in long execution" },
        { label: "Data inconsistency", description: "State format changes between phases" },
        { label: "Agent failures", description: "Sub-agent calls fail or return unexpected results" }
      ]
    }]
  });

  // 7. Ask for focus areas
  const focusInput = await AskUserQuestion({
    questions: [{
      question: "Which areas should be diagnosed? (Select all that apply)",
      header: "Focus",
      multiSelect: true,
      options: [
        { label: "context", description: "Context explosion analysis" },
        { label: "memory", description: "Long-tail forgetting analysis" },
        { label: "dataflow", description: "Data flow analysis" },
        { label: "agent", description: "Agent coordination analysis" }
      ]
    }]
  });

  // Normalize to an array once, so both focus_areas and the summary use it
  const focusAreas = focusInput["Focus"] || ['context', 'memory', 'dataflow', 'agent'];
  const focusList = Array.isArray(focusAreas) ? focusAreas : [focusAreas];

  // 8. Back up the original skill files
  const backupDir = `${workDir}/backups/${skillName}-backup`;
  Bash(`mkdir -p "${backupDir}"`);
  Bash(`cp -r "${skillPath}"/* "${backupDir}/"`);

  // 9. Return state updates
  return {
    stateUpdates: {
      status: 'running',
      started_at: new Date().toISOString(),
      target_skill: {
        name: skillName,
        path: skillPath,
        execution_mode: executionMode,
        phases: phases,
        specs: specs
      },
      user_issue_description: issueInput["Issue"],
      focus_areas: focusList,
      work_dir: workDir,
      backup_dir: backupDir
    },
    outputFiles: [],
    summary: `Initialized tuning for "${skillName}" (${executionMode} mode), focus: ${focusList.join(', ')}`
  };
}
```
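The frontmatter parsing in step 3 relies on two regexes: one to isolate the `---`-delimited block, one to pull the `name:` field from it. As a self-contained sketch (the function name and sample inputs are illustrative; the skill itself runs this inline):

```javascript
// Sketch of the step-3 skill-name extraction as a standalone function.
// Falls back to the supplied default when no frontmatter or no name field
// is present (slightly more defensive than the inline version above).
function extractSkillName(skillMd, fallback) {
  // Capture the body of a leading ---\n...\n--- frontmatter block.
  const frontMatterMatch = skillMd.match(/^---\n([\s\S]*?)\n---/);
  if (!frontMatterMatch) return fallback;
  // Pull the value of the first "name:" line, trimming whitespace.
  return frontMatterMatch[1].match(/name:\s*(.+)/)?.[1]?.trim() ?? fallback;
}
```

One caveat worth noting about the inline version: because the fallback only applies when the frontmatter block is absent entirely, a frontmatter block without a `name:` field yields `undefined` there, whereas this sketch falls back in both cases.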

## State Updates

```javascript
return {
  stateUpdates: {
    status: 'running',
    started_at: '<timestamp>',
    target_skill: {
      name: '<skill-name>',
      path: '<skill-path>',
      execution_mode: '<sequential|autonomous>',
      phases: ['...'],
      specs: ['...']
    },
    user_issue_description: '<user description>',
    focus_areas: ['context', 'memory', ...],
    work_dir: '<work-dir>',
    backup_dir: '<backup-dir>'
  }
};
```

## Error Handling

| Error Type | Recovery |
|------------|----------|
| Skill path not found | Ask user to re-enter a valid path |
| SKILL.md missing | Suggest a path correction |
| Backup creation failed | Retry with an alternative location |

## Next Actions

- Success: Continue to the first diagnosis action based on focus_areas
- Failure: action-abort
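The "retry with an alternative location" recovery in the table above is not spelled out in the code. One way to sketch it is a helper that walks a list of candidate locations and returns the first one where the attempt succeeds; the helper name, candidate paths, and error format here are all illustrative assumptions, not part of the skill.

```javascript
// Sketch: try each candidate location in order; return the first that
// works, or throw with all accumulated failures. attempt(location) is any
// side-effecting operation, e.g. creating the backup directory.
function tryLocations(candidates, attempt) {
  const errors = [];
  for (const location of candidates) {
    try {
      attempt(location);
      return location; // first successful location wins
    } catch (err) {
      errors.push(`${location}: ${err.message}`);
    }
  }
  throw new Error(`All backup locations failed: ${errors.join('; ')}`);
}
```

In the action-init context, the candidate list might be the default `${workDir}/backups/...` path followed by a temp-directory fallback, with the chosen location recorded in `backup_dir`.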

Some files were not shown because too many files have changed in this diff.