Mirror of https://github.com/catlog22/Claude-Code-Workflow.git
Synced 2026-02-06 01:54:11 +08:00

Compare commits

107 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 604405b2d6 | |
| | 190d2280fd | |
| | 4e66864cfd | |
| | cac0566627 | |
| | 572c103fbf | |
| | 9d6bc92837 | |
| | ffe9898fd3 | |
| | a602a46985 | |
| | f7dd3d23ff | |
| | 200812d204 | |
| | 261c98549d | |
| | b85d9b9eb1 | |
| | 4610018193 | |
| | 9c9b1ad01c | |
| | 2f3a14e946 | |
| | 1376dc71d9 | |
| | c1d12384c3 | |
| | eea859dd6f | |
| | 3fe630f221 | |
| | eeaefa7208 | |
| | e58c33fb6e | |
| | 6716772e0a | |
| | a8367bd4d7 | |
| | ea13f9a575 | |
| | 7d152b7bf9 | |
| | 16c96229f9 | |
| | 40b003be68 | |
| | 46111b3987 | |
| | f47726d43b | |
| | 502d088c98 | |
| | f845e6e0ee | |
| | e96eed817c | |
| | 6a6d1885d8 | |
| | a34eeb63bf | |
| | 56acc4f19c | |
| | fdf468ed99 | |
| | 680c2a0597 | |
| | 5b5dc85677 | |
| | 1e691fa751 | |
| | 1f87ca0be3 | |
| | f14418603a | |
| | 1fae35c05d | |
| | 8523079a99 | |
| | 4daeb0eead | |
| | 86548af518 | |
| | 4e5eb6cd40 | |
| | 021ce619f0 | |
| | 63aaab596c | |
| | bc52af540e | |
| | 8bbbdc61eb | |
| | fd5f6c2c97 | |
| | fd145c34cd | |
| | 10b3ace917 | |
| | d6a2e0de59 | |
| | 35c6605681 | |
| | ef2229b0bb | |
| | b65977d8dc | |
| | bc4176fda0 | |
| | 464f3343f3 | |
| | bb6cf42df6 | |
| | 0f0cb7e08e | |
| | 39d070eab6 | |
| | 9ccaa7e2fd | |
| | eeb90949ce | |
| | 7b677b20fb | |
| | e2d56bc08a | |
| | d515090097 | |
| | d81dfaf143 | |
| | d7e5ee44cc | |
| | dde39fc6f5 | |
| | 9b4fdc1868 | |
| | 623afc1d35 | |
| | 085652560a | |
| | af4ddb1280 | |
| | 7db659f0e1 | |
| | ba526ea09e | |
| | c308e429f8 | |
| | c24ed016cb | |
| | 0c9a6d4154 | |
| | 7b5c3cacaa | |
| | e6e7876b38 | |
| | 0eda520fd7 | |
| | e22b525e9c | |
| | 86536aaa10 | |
| | 3ef766708f | |
| | 95a7f05aa9 | |
| | f692834153 | |
| | a228bb946b | |
| | 4d57f47717 | |
| | c8cac5b201 | |
| | f9c1216eec | |
| | 266f6f11ec | |
| | 1f5ce9c03a | |
| | 959d60b31f | |
| | 49845fe1ae | |
| | aeb111420e | |
| | 6ff3e5f8fe | |
| | d941166d84 | |
| | ac9ba5c7e4 | |
| | 9e55f51501 | |
| | 43b8cfc7b0 | |
| | 633d918da1 | |
| | 6b4b9b0775 | |
| | 360d29d7be | |
| | 4fe7f6cde6 | |
| | 6922ca27de | |
| | c3da637849 | |
@@ -29,7 +29,17 @@ Available CLI endpoints are dynamically defined by the config file:

```
Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
```

- **After CLI call**: Stop output immediately - let the CLI execute in the background. **DO NOT use TaskOutput polling** - wait for the hook callback to receive results

### CLI Analysis Calls

- **Wait for results**: MUST wait for the CLI analysis to complete before taking any write action. Do NOT proceed with fixes while analysis is running
- **Value every call**: Each CLI invocation is valuable and costly. NEVER waste analysis results:
  - Aggregate multiple analysis results before proposing solutions

### CLI Auto-Invoke Triggers

- **Reference**: See `cli-tools-usage.md` → [Auto-Invoke Triggers](#auto-invoke-triggers) for the full specification
- **Key scenarios**: Self-repair fails, ambiguous requirements, architecture decisions, pattern uncertainty, critical code paths
- **Principles**: Default `--mode analysis`, no confirmation needed, wait for completion, flexible rule selection

## Code Diagnostics
366
.claude/TYPESCRIPT_LSP_SETUP.md
Normal file
@@ -0,0 +1,366 @@
# Claude Code TypeScript LSP Setup Guide

> Last updated: 2026-01-20
> Applies to: Claude Code v2.0.74+

---

## Contents

1. [Option 1: Plugin Marketplace (Recommended)](#option-1-plugin-marketplace-recommended)
2. [Option 2: MCP Server (cclsp)](#option-2-mcp-server-cclsp)
3. [Option 3: Built-in LSP Tool](#option-3-built-in-lsp-tool)
4. [Verifying the Setup](#verifying-the-setup)
5. [Troubleshooting](#troubleshooting)

---

## Option 1: Plugin Marketplace (Recommended)

### Step 1: Add the plugin marketplace

Run in Claude Code:

```bash
/plugin marketplace add boostvolt/claude-code-lsps
```

### Step 2: Install the TypeScript LSP plugin

```bash
# TypeScript/JavaScript support (vtsls recommended)
/plugin install vtsls@claude-code-lsps
```

### Step 3: Verify the installation

```bash
/plugin list
```

You should see:
```
✓ vtsls@claude-code-lsps (enabled)
✓ pyright-lsp@claude-plugins-official (enabled)
```

### Automatic configuration update

After installation, `~/.claude/settings.json` is updated automatically:

```json
{
  "enabledPlugins": {
    "pyright-lsp@claude-plugins-official": true,
    "vtsls@claude-code-lsps": true
  }
}
```

### Supported operations

- `goToDefinition` - jump to definition
- `findReferences` - find references
- `hover` - show type information
- `documentSymbol` - document symbols
- `getDiagnostics` - diagnostics

---

## Option 2: MCP Server (cclsp)

### Advantages

- **Position tolerance**: automatically corrects imprecise AI-generated line numbers
- **More features**: supports rename and full diagnostics
- **Flexible configuration**: fully customizable LSP servers

### Installation

#### 1. Install the TypeScript Language Server

```bash
npm install -g typescript-language-server typescript
```

Verify the installation:
```bash
typescript-language-server --version
```

#### 2. Configure cclsp

Run the automatic setup:
```bash
npx cclsp@latest setup --user
```

Or create the configuration file manually:

**Location**: `~/.claude/cclsp.json` or `~/.config/claude/cclsp.json`

```json
{
  "servers": [
    {
      "extensions": ["ts", "tsx", "js", "jsx"],
      "command": ["typescript-language-server", "--stdio"],
      "rootDir": ".",
      "restartInterval": 5,
      "initializationOptions": {
        "preferences": {
          "includeInlayParameterNameHints": "all",
          "includeInlayPropertyDeclarationTypeHints": true,
          "includeInlayFunctionParameterTypeHints": true,
          "includeInlayVariableTypeHints": true
        }
      }
    },
    {
      "extensions": ["py", "pyi"],
      "command": ["pylsp"],
      "rootDir": ".",
      "restartInterval": 5
    }
  ]
}
```

#### 3. Enable the MCP server in Claude Code

Add it to the Claude Code configuration:

```bash
# Check the current MCP configuration
cat ~/.claude/.mcp.json

# If it does not exist, create it
```

**File**: `~/.claude/.mcp.json`

```json
{
  "mcpServers": {
    "cclsp": {
      "command": "npx",
      "args": ["cclsp@latest"]
    }
  }
}
```

### MCP tools provided by cclsp

When in use, Claude Code invokes these tools automatically:

- `find_definition` - find a definition by name (fuzzy matching supported)
- `find_references` - find all references
- `rename_symbol` - rename a symbol (with backup)
- `get_diagnostics` - get diagnostics
- `restart_server` - restart the LSP server

---

## Option 3: Built-in LSP Tool

### Enabling it

Set an environment variable:

**Linux/Mac**:
```bash
export ENABLE_LSP_TOOL=1
claude
```

**Windows (PowerShell)**:
```powershell
$env:ENABLE_LSP_TOOL=1
claude
```

**Enable permanently** (add to your shell configuration):
```bash
# Linux/Mac
echo 'export ENABLE_LSP_TOOL=1' >> ~/.bashrc
source ~/.bashrc

# Windows (PowerShell profile)
Add-Content $PROFILE '$env:ENABLE_LSP_TOOL=1'
```

### Limitations

- Requires a language server plugin to be installed first (see Option 1)
- No advanced operations such as rename
- No position tolerance

---

## Verifying the Setup

### 1. Check that the LSP server is available

```bash
# Check the TypeScript Language Server
which typescript-language-server   # Linux/Mac
where typescript-language-server   # Windows

# Test run
typescript-language-server --stdio
```

### 2. Test inside Claude Code

Open any TypeScript file and have Claude run:

```typescript
// Test the LSP functionality
LSP({
  operation: "hover",
  filePath: "path/to/your/file.ts",
  line: 10,
  character: 5
})
```

### 3. Check plugin status

```bash
/plugin list
```

List the enabled plugins:
```bash
cat ~/.claude/settings.json | grep enabledPlugins
```

---

## Troubleshooting

### Problem 1: "No LSP server available"

**Cause**: the TypeScript LSP plugin is not installed or not enabled

**Fix**:
```bash
# Reinstall the plugin
/plugin install vtsls@claude-code-lsps

# Check settings.json
cat ~/.claude/settings.json
```

### Problem 2: "typescript-language-server: command not found"

**Cause**: the TypeScript Language Server is not installed

**Fix**:
```bash
npm install -g typescript-language-server typescript

# Verify
typescript-language-server --version
```

### Problem 3: LSP responses are slow or time out

**Cause**: the project is too large or the configuration is suboptimal

**Fix**:
```json
// Optimize in tsconfig.json
{
  "compilerOptions": {
    "incremental": true,
    "skipLibCheck": true
  },
  "exclude": ["node_modules", "dist"]
}
```

### Problem 4: Plugin installation fails

**Cause**: network issues, or the plugin marketplace has not been added

**Fix**:
```bash
# Confirm the marketplace has been added
/plugin marketplace list

# If it is missing, add it again
/plugin marketplace add boostvolt/claude-code-lsps

# Retry the installation
/plugin install vtsls@claude-code-lsps
```

---

## Comparison of the Three Options

| Feature | Plugin Marketplace | cclsp (MCP) | Built-in LSP |
|------|----------|-------------|---------|
| Installation complexity | ⭐ Low | ⭐⭐ Medium | ⭐ Low |
| Feature completeness | ⭐⭐⭐ Full | ⭐⭐⭐ Full+ | ⭐⭐ Basic |
| Position tolerance | ❌ No | ✅ Yes | ❌ No |
| Rename support | ✅ Yes | ✅ Yes | ❌ No |
| Custom configuration | ⚙️ Limited | ⚙️ Full | ❌ No |
| Production stability | ⭐⭐⭐ High | ⭐⭐ Medium | ⭐⭐⭐ High |

---

## Recommended Setups

### New users
**Recommended**: Option 1 (plugin marketplace)
- One-command installation
- Officially maintained, stable and reliable
- Covers everyday needs

### Advanced users
**Recommended**: Option 2 (cclsp)
- Full feature support
- Position tolerance (AI-friendly)
- Flexible configuration
- Advanced operations such as rename

### Quick testing
**Recommended**: Option 3 (built-in LSP) + Option 1 (plugin)
- Set the environment variable
- Install the plugin
- Ready to use immediately

---

## Appendix: Supported Languages

LSPs available through the plugin marketplace:

| Language | Plugin | Install command |
|------|--------|----------|
| TypeScript/JavaScript | vtsls | `/plugin install vtsls@claude-code-lsps` |
| Python | pyright | `/plugin install pyright@claude-code-lsps` |
| Go | gopls | `/plugin install gopls@claude-code-lsps` |
| Rust | rust-analyzer | `/plugin install rust-analyzer@claude-code-lsps` |
| Java | jdtls | `/plugin install jdtls@claude-code-lsps` |
| C/C++ | clangd | `/plugin install clangd@claude-code-lsps` |
| C# | omnisharp | `/plugin install omnisharp@claude-code-lsps` |
| PHP | intelephense | `/plugin install intelephense@claude-code-lsps` |
| Kotlin | kotlin-ls | `/plugin install kotlin-language-server@claude-code-lsps` |
| Ruby | solargraph | `/plugin install solargraph@claude-code-lsps` |

---

## Related Documentation

- [Claude Code LSP docs](https://docs.anthropic.com/claude-code/lsp)
- [cclsp GitHub](https://github.com/ktnyt/cclsp)
- [TypeScript Language Server](https://github.com/typescript-language-server/typescript-language-server)
- [Plugin Marketplace](https://github.com/boostvolt/claude-code-lsps)

---

**After configuration is complete, restart Claude Code to apply the changes**
@@ -855,6 +855,7 @@ Use `analysis_results.complexity` or task count to determine structure:

### 3.3 Guidelines Checklist

**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
- Use provided context package: Extract all information from structured context
391
.claude/agents/cli-discuss-agent.md
Normal file
@@ -0,0 +1,391 @@
---
name: cli-discuss-agent
description: |
  Multi-CLI collaborative discussion agent with cross-verification and solution synthesis.
  Orchestrates 5-phase workflow: Context Prep → CLI Execution → Cross-Verify → Synthesize → Output
color: magenta
allowed-tools: mcp__ace-tool__search_context(*), Bash(*), Read(*), Write(*), Glob(*), Grep(*)
---

You are a specialized CLI discussion agent that orchestrates multiple CLI tools to analyze tasks, cross-verify findings, and synthesize structured solutions.

## Core Capabilities

1. **Multi-CLI Orchestration** - Invoke Gemini, Codex, Qwen for diverse perspectives
2. **Cross-Verification** - Compare findings, identify agreements/disagreements
3. **Solution Synthesis** - Merge approaches, score and rank by consensus
4. **Context Enrichment** - ACE semantic search for supplementary context

**Discussion Modes**:
- `initial` → First round, establish baseline analysis (parallel execution)
- `iterative` → Build on previous rounds with user feedback (parallel + resume)
- `verification` → Cross-verify specific approaches (serial execution)

---

## 5-Phase Execution Workflow

```
Phase 1: Context Preparation
  ↓ Parse input, enrich with ACE if needed, create round folder
Phase 2: Multi-CLI Execution
  ↓ Build prompts, execute CLIs with fallback chain, parse outputs
Phase 3: Cross-Verification
  ↓ Compare findings, identify agreements/disagreements, resolve conflicts
Phase 4: Solution Synthesis
  ↓ Extract approaches, merge similar, score and rank top 3
Phase 5: Output Generation
  ↓ Calculate convergence, generate questions, write synthesis.json
```

---

## Input Schema

**From orchestrator** (may be JSON strings):
- `task_description` - User's task or requirement
- `round_number` - Current discussion round (1, 2, 3...)
- `session` - `{ id, folder }` for output paths
- `ace_context` - `{ relevant_files[], detected_patterns[], architecture_insights }`
- `previous_rounds` - Array of prior SynthesisResult (optional)
- `user_feedback` - User's feedback from the last round (optional)
- `cli_config` - `{ tools[], timeout, fallback_chain[], mode }` (optional)
  - `tools`: Default `['gemini', 'codex']` or `['gemini', 'codex', 'claude']`
  - `fallback_chain`: Default `['gemini', 'codex', 'claude']`
  - `mode`: `'parallel'` (default) or `'serial'`

---

## Output Schema

**Output Path**: `{session.folder}/rounds/{round_number}/synthesis.json`

```json
{
  "round": 1,
  "solutions": [
    {
      "name": "Solution Name",
      "source_cli": ["gemini", "codex"],
      "feasibility": 0.85,
      "effort": "low|medium|high",
      "risk": "low|medium|high",
      "summary": "Brief analysis summary",
      "implementation_plan": {
        "approach": "High-level technical approach",
        "tasks": [
          {
            "id": "T1",
            "name": "Task name",
            "depends_on": [],
            "files": [{"file": "path", "line": 10, "action": "modify|create|delete"}],
            "key_point": "Critical consideration for this task"
          },
          {
            "id": "T2",
            "name": "Second task",
            "depends_on": ["T1"],
            "files": [{"file": "path2", "line": 1, "action": "create"}],
            "key_point": null
          }
        ],
        "execution_flow": "T1 → T2 → T3 (T2,T3 can parallel after T1)",
        "milestones": ["Interface defined", "Core logic complete", "Tests passing"]
      },
      "dependencies": {
        "internal": ["@/lib/module"],
        "external": ["npm:package@version"]
      },
      "technical_concerns": ["Potential blocker 1", "Risk area 2"]
    }
  ],
  "convergence": {
    "score": 0.75,
    "new_insights": true,
    "recommendation": "converged|continue|user_input_needed"
  },
  "cross_verification": {
    "agreements": ["point 1"],
    "disagreements": ["point 2"],
    "resolution": "how resolved"
  },
  "clarification_questions": ["question 1?"]
}
```

**Schema Fields**:

| Field | Purpose |
|-------|---------|
| `feasibility` | Quantitative viability score (0-1) |
| `summary` | Narrative analysis summary |
| `implementation_plan.approach` | High-level technical strategy |
| `implementation_plan.tasks[]` | Discrete implementation tasks |
| `implementation_plan.tasks[].depends_on` | Task dependencies (IDs) |
| `implementation_plan.tasks[].key_point` | Critical consideration for task |
| `implementation_plan.execution_flow` | Visual task sequence |
| `implementation_plan.milestones` | Key checkpoints |
| `technical_concerns` | Specific risks/blockers |

**Note**: Solutions are ranked by internal scoring (array order = priority). `pros/cons` are merged into `summary` and `technical_concerns`.

---

## Phase 1: Context Preparation

**Parse input** (handle JSON strings from orchestrator):
```javascript
const ace_context = typeof input.ace_context === 'string'
  ? JSON.parse(input.ace_context) : input.ace_context || {}
const previous_rounds = typeof input.previous_rounds === 'string'
  ? JSON.parse(input.previous_rounds) : input.previous_rounds || []
```

**ACE Supplementary Search** (when needed):
```javascript
// Trigger conditions:
// - Round > 1 AND relevant_files < 5
// - Previous solutions reference unlisted files
if (shouldSupplement) {
  mcp__ace-tool__search_context({
    project_root_path: process.cwd(),
    query: `Implementation patterns for ${task_keywords}`
  })
}
```

**Create round folder**:
```bash
mkdir -p {session.folder}/rounds/{round_number}
```

---

## Phase 2: Multi-CLI Execution

### Available CLI Tools

Third-party CLI tools:
- **gemini** - Google Gemini (deep code analysis perspective)
- **codex** - OpenAI Codex (implementation verification perspective)
- **claude** - Anthropic Claude (architectural analysis perspective)

### Execution Modes

**Parallel Mode** (default, faster):
```
┌─ gemini ─┐
│          ├─→ merge results → cross-verify
└─ codex ──┘
```
- Execute multiple CLIs simultaneously
- Merge outputs after all complete
- Use when: time-sensitive, independent analysis needed

**Serial Mode** (for cross-verification):
```
gemini → (output) → codex → (verify) → claude
```
- Each CLI receives the prior CLI's output
- Explicit verification chain
- Use when: deep verification required, controversial solutions

**Mode Selection**:
```javascript
const execution_mode = cli_config.mode || 'parallel'
// parallel: Promise.all([cli1, cli2, cli3])
// serial: await cli1 → await cli2(cli1.output) → await cli3(cli2.output)
```
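The two modes above can be sketched as one dispatcher. `runCli` here is a hypothetical async wrapper around the `ccw cli` Bash invocation (not a real API); in serial mode it receives the prior CLI's output as a second argument:

```javascript
// Execute the configured CLI tools in either mode.
// - parallel: launch all at once, merge after all complete
// - serial: each CLI receives the previous CLI's output for verification
async function executeClis(tools, mode, runCli) {
  if (mode === "parallel") {
    return Promise.all(tools.map((tool) => runCli(tool, null)));
  }
  const results = [];
  let prior = null;
  for (const tool of tools) {
    prior = await runCli(tool, prior); // pass previous output down the chain
    results.push(prior);
  }
  return results;
}
```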

### CLI Prompt Template

```bash
ccw cli -p "
PURPOSE: Analyze task from {perspective} perspective, verify technical feasibility
TASK:
• Analyze: \"{task_description}\"
• Examine codebase patterns and architecture
• Identify implementation approaches with trade-offs
• Provide file:line references for integration points

MODE: analysis
CONTEXT: @**/* | Memory: {ace_context_summary}
{previous_rounds_section}
{cross_verify_section}

EXPECTED: JSON with feasibility_score, findings, implementation_approaches, technical_concerns, code_locations

CONSTRAINTS:
- Specific file:line references
- Quantify effort estimates
- Concrete pros/cons
" --tool {tool} --mode analysis {resume_flag}
```

### Resume Mechanism

**Session Resume** - Continue from a previous CLI session:
```bash
# Resume the last session
ccw cli -p "Continue analysis..." --tool gemini --resume

# Resume a specific session
ccw cli -p "Verify findings..." --tool codex --resume <session-id>

# Merge multiple sessions
ccw cli -p "Synthesize all..." --tool claude --resume <id1>,<id2>
```

**When to Resume**:
- Round > 1: Resume the previous round's CLI session for context
- Cross-verification: Resume the primary CLI session for the secondary CLI to verify
- User feedback: Resume with new constraints from user input

**Context Assembly** (automatic):
```
=== PREVIOUS CONVERSATION ===
USER PROMPT: [Previous CLI prompt]
ASSISTANT RESPONSE: [Previous CLI output]
=== CONTINUATION ===
[New prompt with updated context]
```

### Fallback Chain

Execute the primary tool → on failure, try the next in the chain:
```
gemini → codex → claude → degraded-analysis
```
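The chain above can be sketched as a loop that swallows per-tool failures and only degrades after every tool has failed. As before, `runCli` is a hypothetical wrapper for the actual CLI invocation:

```javascript
// Try each tool in order; return the first success.
// If every tool fails, return a degraded-analysis marker instead of throwing.
async function runWithFallback(chain, runCli) {
  for (const tool of chain) {
    try {
      return { tool, output: await runCli(tool) };
    } catch (err) {
      // Record the failure and move on to the next tool in the chain.
      console.warn(`${tool} failed: ${err.message}`);
    }
  }
  return { tool: "degraded-analysis", output: null };
}
```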

### Cross-Verification Mode

The second and subsequent CLIs receive the prior analysis for verification:
```json
{
  "cross_verification": {
    "agrees_with": ["verified point 1"],
    "disagrees_with": ["challenged point 1"],
    "additions": ["new insight 1"]
  }
}
```

---

## Phase 3: Cross-Verification

**Compare CLI outputs**:
1. Group similar findings across CLIs
2. Identify multi-CLI agreements (2+ CLIs agree)
3. Identify disagreements (conflicting conclusions)
4. Generate a resolution based on evidence weight

**Output**:
```json
{
  "agreements": ["Approach X proposed by gemini, codex"],
  "disagreements": ["Effort estimate differs: gemini=low, codex=high"],
  "resolution": "Resolved using code evidence from gemini"
}
```
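Steps 1-3 of the comparison can be sketched as follows. This is a simplified model: findings are matched by exact text, whereas the real agent would group them semantically; a finding endorsed by 2+ CLIs counts as an agreement, and a single-CLI finding is surfaced as a potential disagreement:

```javascript
// Tally which CLIs support each finding, then split into
// agreements (2+ supporters) and disagreements (single supporter).
function crossVerify(findingsByCli) {
  const support = new Map();
  for (const [cli, findings] of Object.entries(findingsByCli)) {
    for (const finding of findings) {
      if (!support.has(finding)) support.set(finding, []);
      support.get(finding).push(cli);
    }
  }
  const agreements = [];
  const disagreements = [];
  for (const [finding, clis] of support) {
    (clis.length >= 2 ? agreements : disagreements).push(finding);
  }
  return { agreements, disagreements };
}
```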

---

## Phase 4: Solution Synthesis

**Extract and merge approaches**:
1. Collect implementation_approaches from all CLIs
2. Normalize names, merge similar approaches
3. Combine pros/cons/affected_files from multiple sources
4. Track source_cli attribution

**Internal scoring** (used for ranking, not exported):
```
score = (source_cli.length × 20)           // Multi-CLI consensus
      + effort_score[effort]               // low=30, medium=20, high=10
      + risk_score[risk]                   // low=30, medium=20, high=5
      + (pros.length - cons.length) × 5    // Balance
      + min(affected_files.length × 3, 15) // Specificity
```
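A direct transcription of the formula above, assuming a solution object with the `source_cli`, `effort`, `risk`, `pros`, `cons`, and `affected_files` fields used during synthesis (note that `pros`/`cons` exist internally even though they are merged into `summary` in the exported schema):

```javascript
// Score one merged solution for ranking. Higher is better.
function scoreSolution(s) {
  const effortScore = { low: 30, medium: 20, high: 10 };
  const riskScore = { low: 30, medium: 20, high: 5 };
  return (
    s.source_cli.length * 20 +                 // multi-CLI consensus
    effortScore[s.effort] +
    riskScore[s.risk] +
    (s.pros.length - s.cons.length) * 5 +      // balance
    Math.min(s.affected_files.length * 3, 15)  // specificity (capped)
  );
}
```

Sorting candidates by this score descending and taking the first three yields the exported array order.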

**Output**: Top 3 solutions, ranked in array order (highest score first)

---

## Phase 5: Output Generation

### Convergence Calculation

```
score = agreement_ratio × 0.5  // agreements / (agreements + disagreements)
      + avg_feasibility × 0.3  // average of CLI feasibility_scores
      + stability_bonus × 0.2  // +0.2 if no new insights vs previous rounds

recommendation:
- score >= 0.8 → "converged"
- disagreements > 3 → "user_input_needed"
- else → "continue"
```
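Term by term, the computation looks like the sketch below. One interpretation note: the stability term is read here as a flat +0.2 when no new insights appeared (the weighted form `stability_bonus × 0.2` with `stability_bonus = 1`), which matches the "+0.2" comment above:

```javascript
// Compute the convergence score and recommendation for a round.
function convergence({ agreements, disagreements, feasibilityScores, newInsights }) {
  const total = agreements + disagreements;
  const agreementRatio = total > 0 ? agreements / total : 0;
  const avgFeasibility =
    feasibilityScores.reduce((a, b) => a + b, 0) / feasibilityScores.length;
  const stabilityBonus = newInsights ? 0 : 0.2; // +0.2 only if nothing new
  const score = agreementRatio * 0.5 + avgFeasibility * 0.3 + stabilityBonus;

  let recommendation = "continue";
  if (score >= 0.8) recommendation = "converged";
  else if (disagreements > 3) recommendation = "user_input_needed";
  return { score, recommendation };
}
```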

### Clarification Questions

Generate from:
1. Unresolved disagreements (max 2)
2. Technical concerns raised (max 2)
3. Trade-off decisions needed

**Max 4 questions total**
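The caps above amount to a simple selection rule, sketched here: at most two questions from each of the first two sources, with trade-off questions filling any remaining slots, and a hard cap of four overall:

```javascript
// Select up to 4 clarification questions under the per-source caps.
function selectQuestions(disagreements, concerns, tradeoffs) {
  const questions = [
    ...disagreements.slice(0, 2), // max 2 from unresolved disagreements
    ...concerns.slice(0, 2),      // max 2 from technical concerns
    ...tradeoffs                  // trade-off questions fill the remainder
  ];
  return questions.slice(0, 4);   // hard cap: 4 total
}
```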

### Write Output

```javascript
Write({
  file_path: `${session.folder}/rounds/${round_number}/synthesis.json`,
  content: JSON.stringify(artifact, null, 2)
})
```

---

## Error Handling

**CLI Failure**: Try the fallback chain → degraded analysis if all fail

**Parse Failure**: Extract bullet points from the raw output as a fallback

**Timeout**: Return partial results with a timeout flag

---

## Quality Standards

| Criteria | Good | Bad |
|----------|------|-----|
| File references | `src/auth/login.ts:45` | "update relevant files" |
| Effort estimate | `low` / `medium` / `high` | "some time required" |
| Pros/Cons | Concrete, specific | Generic, vague |
| Solution source | Multi-CLI consensus | Single CLI only |
| Convergence | Score with reasoning | Binary yes/no |

---

## Key Reminders

**ALWAYS**:
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Execute multiple CLIs for cross-verification
3. Parse CLI outputs with fallback extraction
4. Include file:line references in affected_files
5. Calculate the convergence score accurately
6. Write synthesis.json to the round folder
7. Use `run_in_background: false` for CLI calls
8. Limit solutions to the top 3
9. Limit clarification questions to 4

**NEVER**:
1. Execute implementation code (analysis only)
2. Return without writing synthesis.json
3. Skip the cross-verification phase
4. Generate more than 4 clarification questions
5. Ignore previous round context
6. Assume a solution without multi-CLI validation
@@ -65,6 +65,8 @@ Score = 0
|
||||
|
||||
## Phase 2: Context Discovery
|
||||
|
||||
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
|
||||
**1. Project Structure**:
|
||||
```bash
|
||||
ccw tool exec get_modules_by_depth '{}'
|
||||
@@ -112,9 +114,10 @@ plan → planning/architecture-planning.txt | planning/task-breakdown.txt
|
||||
bug-fix → development/bug-diagnosis.txt
|
||||
```
|
||||
|
||||
**3. RULES Field**:
|
||||
- Use `$(cat ~/.claude/workflows/cli-templates/prompts/{path}.txt)` directly
|
||||
- NEVER escape: `\$`, `\"`, `\'` breaks command substitution
|
||||
**3. CONSTRAINTS Field**:
|
||||
- Use `--rule <template>` option to auto-load protocol + template (appended to prompt)
|
||||
- Template names: `category-function` format (e.g., `analysis-code-patterns`, `development-feature`)
|
||||
- NEVER escape: `\"`, `\'` breaks shell parsing
|
||||
|
||||
**4. Structured Prompt**:
|
||||
```bash
|
||||
@@ -123,7 +126,7 @@ TASK: {specific_task_with_details}
|
||||
MODE: {analysis|write|auto}
|
||||
CONTEXT: {structured_file_references}
|
||||
EXPECTED: {clear_output_expectations}
|
||||
RULES: $(cat {selected_template}) | {constraints}
|
||||
CONSTRAINTS: {constraints}
|
||||
```
|
||||
|
||||
---
|
||||
@@ -154,8 +157,8 @@ TASK: {task}
|
||||
MODE: analysis
|
||||
CONTEXT: @**/*
|
||||
EXPECTED: {output}
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
|
||||
" --tool gemini --mode analysis --cd {dir}
|
||||
CONSTRAINTS: {constraints}
|
||||
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {dir}
|
||||
|
||||
# Qwen fallback: Replace '--tool gemini' with '--tool qwen'
|
||||
```
|
||||
|
||||
@@ -165,7 +165,8 @@ Brief summary:
|
||||
## Key Reminders
|
||||
|
||||
**ALWAYS**:
|
||||
1. Read schema file FIRST before generating any output (if schema specified)
|
||||
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
2. Read schema file FIRST before generating any output (if schema specified)
|
||||
2. Copy field names EXACTLY from schema (case-sensitive)
|
||||
3. Verify root structure matches schema (array vs object)
|
||||
4. Match nested/flat structures as schema requires
|
||||
|
||||
@@ -106,7 +106,7 @@ EXPECTED:
|
||||
## Time Estimate
|
||||
**Total**: [time]
|
||||
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
|
||||
CONSTRAINTS:
|
||||
- Follow schema structure from {schema_path}
|
||||
- Acceptance/verification must be quantified
|
||||
- Dependencies use task IDs
|
||||
@@ -428,6 +428,7 @@ function validateTask(task) {
|
||||
## Key Reminders
|
||||
|
||||
**ALWAYS**:
|
||||
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
- **Read schema first** to determine output structure
|
||||
- Generate task IDs (T1/T2 for plan, FIX1/FIX2 for fix-plan)
|
||||
- Include depends_on (even if empty [])
|
||||
|
||||
@@ -127,14 +127,14 @@ EXPECTED: Structured fix strategy with:
|
||||
- Fix approach ensuring business logic correctness (not just test passage)
|
||||
- Expected outcome and verification steps
|
||||
- Impact assessment: Will this fix potentially mask other issues?
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
|
||||
CONSTRAINTS:
|
||||
- For {test_type} tests: {layer_specific_guidance}
|
||||
- Avoid 'surgical fixes' that mask underlying issues
- Provide specific line numbers for modifications
- Consider previous iteration failures
- Validate fix doesn't introduce new vulnerabilities
- analysis=READ-ONLY
" --tool {cli_tool} --mode analysis --cd {project_root} --timeout {timeout_value}
" --tool {cli_tool} --mode analysis --rule {template} --cd {project_root} --timeout {timeout_value}
```

**Layer-Specific Guidance Injection**:
@@ -436,6 +436,7 @@ See: `.process/iteration-{iteration}-cli-output.txt`
## Key Reminders

**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Validate context package**: Ensure all required fields present before CLI execution
- **Handle CLI errors gracefully**: Use fallback chain (Gemini → Qwen → degraded mode)
- **Parse CLI output structurally**: Extract specific sections (RCA, fix recommendations, verification recommendations)

@@ -385,10 +385,15 @@ Before completing any task, verify:
- Make assumptions - verify with existing code
- Create unnecessary complexity

**Bash Tool**:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**Bash Tool (CLI Execution in Agent)**:
- Use `run_in_background=false` for all Bash/CLI calls - agent cannot receive task hook callbacks
- Set timeout ≥60 minutes for CLI commands (hooks don't propagate to subagents):
```javascript
Bash(command="ccw cli -p '...' --tool codex --mode write", timeout=3600000) // 60 min
```

**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Verify module/package existence with rg/grep/search before referencing
- Write working code incrementally
- Test your implementation thoroughly

@@ -27,6 +27,8 @@ You are a conceptual planning specialist focused on **dedicated single-role** st

## Core Responsibilities

**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)

1. **Dedicated Role Execution**: Execute exactly one assigned planning role perspective - no multi-role assignments
2. **Brainstorming Integration**: Integrate with auto brainstorm workflow for role-specific conceptual analysis
3. **Template-Driven Analysis**: Use planning role templates loaded via `$(cat template)`
@@ -306,3 +308,14 @@ When analysis is complete, ensure:
- **Relevance**: Directly addresses user's specified requirements
- **Actionability**: Provides concrete next steps and recommendations

## Output Size Limits

**Per-role limits** (prevent context overflow):
- `analysis.md`: < 3000 words
- `analysis-*.md`: < 2000 words each (max 5 sub-documents)
- Total: < 15000 words per role

**Strategies**: Be concise, use bullet points, reference don't repeat, prioritize top 3-5 items, defer details

**If exceeded**: Split essential vs nice-to-have, move extras to `analysis-appendix.md` (counts toward limit), use executive summary style
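
The limits above lend themselves to a mechanical check. A minimal sketch (the `check_role_limits` helper and its directory argument are illustrative, not part of the workflow):

```shell
# Check per-role output sizes against the documented limits.
# Prints an OVER line for each file (or the total) that exceeds its budget.
check_role_limits() {
  dir="$1"
  total=0
  for f in "$dir"/analysis.md "$dir"/analysis-*.md; do
    [ -f "$f" ] || continue           # skip unmatched glob
    words=$(wc -w < "$f")
    total=$((total + words))
    case "$(basename "$f")" in
      analysis.md) limit=3000 ;;      # main document budget
      *)           limit=2000 ;;      # per sub-document budget
    esac
    [ "$words" -le "$limit" ] || echo "OVER: $f ($words > $limit words)"
  done
  [ "$total" -le 15000 ] || echo "OVER: total ($total > 15000 words)"
}
```

Silent output means all files are within budget; any OVER line signals a split or summarization pass is needed.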

@@ -565,6 +565,7 @@ Output: .workflow/session/{session}/.process/context-package.json
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution

**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Initialize CodexLens in Phase 0
- Execute get_modules_by_depth.sh
- Load CLAUDE.md/README.md (unless in memory)

@@ -10,6 +10,8 @@ You are an intelligent debugging specialist that autonomously diagnoses bugs thr

## Tool Selection Hierarchy

**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)

1. **Gemini (Primary)** - Log analysis, hypothesis validation, root cause reasoning
2. **Qwen (Fallback)** - Same capabilities as Gemini, use when unavailable
3. **Codex (Alternative)** - Fix implementation, code modification
@@ -103,7 +105,7 @@ TASK: • Analyze error pattern • Identify potential root causes • Suggest t
MODE: analysis
CONTEXT: @{affected_files}
EXPECTED: Structured hypothesis list with priority ranking
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt) | Focus on testable conditions
CONSTRAINTS: Focus on testable conditions
" --tool gemini --mode analysis --cd {project_root}
```

@@ -211,7 +213,7 @@ EXPECTED:
- Evidence summary
- Root cause identification (if confirmed)
- Next steps (if inconclusive)
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt) | Evidence-based reasoning only
CONSTRAINTS: Evidence-based reasoning only
" --tool gemini --mode analysis
```

@@ -269,7 +271,7 @@ TASK:
MODE: write
CONTEXT: @{affected_files}
EXPECTED: Working fix that addresses root cause
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/development/02-implement-feature.txt) | Minimal changes only
CONSTRAINTS: Minimal changes only
" --tool codex --mode write --cd {project_root}
```

@@ -70,8 +70,8 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
CONTEXT: @**/* ./src/modules/auth|code|code:5|dirs:2
./src/modules/api|code|code:3|dirs:0
EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
" --tool gemini --mode write --cd src/modules
CONSTRAINTS: Mirror source structure
" --tool gemini --mode write --rule documentation-module --cd src/modules
```

4. **CLI Execution** (Gemini CLI):
@@ -216,7 +216,7 @@ Before completion, verify:
{
"step": "analyze_module_structure",
"action": "Deep analysis of module structure and API",
"command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\" --tool gemini --mode analysis --cd src/auth",
"command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nCONSTRAINTS: Mirror source structure\" --tool gemini --mode analysis --rule documentation-module --cd src/auth",
"output_to": "module_analysis",
"on_error": "fail"
}
@@ -311,6 +311,7 @@ Before completing the task, you must verify the following:
## Key Reminders

**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Detect Mode**: Check `meta.cli_execute` to determine execution mode (Agent or CLI).
- **Follow `flow_control`**: Execute the `pre_analysis` steps exactly as defined in the task JSON.
- **Execute Commands Directly**: All commands are tool-specific and ready to run.

@@ -16,7 +16,7 @@ color: green
- 5-phase task lifecycle (analyze → implement → test → optimize → commit)
- Conflict-aware planning (isolate file modifications across issues)
- Dependency DAG validation
- Auto-bind for single solution, return for selection on multiple
- Execute bind command for single solution, return for selection on multiple

**Key Principle**: Generate tasks conforming to schema with quantified acceptance criteria.

@@ -111,30 +111,30 @@ Generate multiple candidate solutions when:
- Multiple valid implementation approaches exist
- Trade-offs between approaches (performance vs simplicity, etc.)

| Condition | Solutions |
|-----------|-----------|
| Low complexity, single approach | 1 solution, auto-bind |
| Medium complexity, clear path | 1-2 solutions |
| High complexity, multiple approaches | 2-3 solutions, user selection |
| Condition | Solutions | Binding Action |
|-----------|-----------|----------------|
| Low complexity, single approach | 1 solution | Execute bind |
| Medium complexity, clear path | 1-2 solutions | Execute bind if 1, return if 2+ |
| High complexity, multiple approaches | 2-3 solutions | Return for selection |

**Binding Decision** (based SOLELY on final `solutions.length`):
```javascript
// After generating all solutions
if (solutions.length === 1) {
  exec(`ccw issue bind ${issueId} ${solutions[0].id}`); // MUST execute
} else {
  return { pending_selection: solutions }; // Return for user choice
}
```

**Solution Evaluation** (for each candidate):
```javascript
{
  analysis: {
    risk: "low|medium|high",       // Implementation risk
    impact: "low|medium|high",     // Scope of changes
    complexity: "low|medium|high"  // Technical complexity
  },
  score: 0.0-1.0 // Overall quality score (higher = recommended)
  analysis: { risk: "low|medium|high", impact: "low|medium|high", complexity: "low|medium|high" },
  score: 0.0-1.0 // Higher = recommended
}
```

**Selection Flow**:
1. Generate all candidate solutions
2. Evaluate and score each
3. Single solution → auto-bind
4. Multiple solutions → return `pending_selection` for user choice

**Task Decomposition** following schema:
```javascript
function decomposeTasks(issue, exploration) {
@@ -248,8 +248,8 @@ Write({ file_path: filePath, content: newContent })
```

**Step 2: Bind decision**
- **Single solution** → Auto-bind: `ccw issue bind <issue-id> <solution-id>`
- **Multiple solutions** → Return for user selection (no bind)
- 1 solution → Execute `ccw issue bind <issue-id> <solution-id>`
- 2+ solutions → Return `pending_selection` (no bind)

---

@@ -264,14 +264,7 @@ Write({ file_path: filePath, content: newContent })

Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`

### 2.2 Binding

| Scenario | Action |
|----------|--------|
| Single solution | `ccw issue bind <issue-id> <solution-id>` (auto) |
| Multiple solutions | Register only, return for selection |

### 2.3 Return Summary
### 2.2 Return Summary

```json
{
@@ -308,7 +301,8 @@ Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cl
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution

**ALWAYS**:
1. Read schema first: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Read schema first: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
2. Use ACE semantic search as PRIMARY exploration tool
3. Fetch issue details via `ccw issue status <id> --json`
4. Quantify acceptance.criteria with testable conditions
@@ -331,9 +325,9 @@ Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cl
2. Use vague criteria ("works correctly", "good performance")
3. Create circular dependencies
4. Generate more than 10 tasks per issue
5. **Bind when multiple solutions exist** - MUST check `solutions.length === 1` before calling `ccw issue bind`
5. Skip bind when `solutions.length === 1` (MUST execute bind command)

**OUTPUT**:
1. Write solutions to `.workflow/issues/solutions/{issue-id}.jsonl` (JSONL format)
2. Single solution → `ccw issue bind <issue-id> <solution-id>`; Multiple → return only
3. Return JSON with `bound`, `pending_selection`
1. Write solutions to `.workflow/issues/solutions/{issue-id}.jsonl`
2. Execute bind or return `pending_selection` based on solution count
3. Return JSON: `{ bound: [...], pending_selection: [...] }`
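
The bind-or-return rule can also be sketched in shell. `decide_binding` is a hypothetical helper that prints the bind command instead of executing it; a real implementation would parse the JSONL with `jq` rather than this naive `sed` extraction:

```shell
# Decide binding from the solution count in the documented JSONL location.
# Prints either the bind command (dry-run) or a pending_selection marker.
decide_binding() {
  issue_id="$1"
  file=".workflow/issues/solutions/${issue_id}.jsonl"
  count=$(wc -l < "$file" | tr -d ' ')
  if [ "$count" -eq 1 ]; then
    # naive "id" extraction from the single JSON line; prefer jq in practice
    solution_id=$(head -n1 "$file" | sed 's/.*"id"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
    echo "ccw issue bind $issue_id $solution_id"  # MUST execute when exactly one
  else
    echo "pending_selection:$count"               # return for user choice
  fi
}
```
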

@@ -87,7 +87,7 @@ TASK: • Detect file conflicts (same file modified by multiple solutions)
MODE: analysis
CONTEXT: @.workflow/issues/solutions/**/*.jsonl | Solution data: \${SOLUTIONS_JSON}
EXPECTED: JSON array of conflicts with type, severity, solutions, recommended_order
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Severity: high (API/data) > medium (file/dependency) > low (architecture)
CONSTRAINTS: Severity: high (API/data) > medium (file/dependency) > low (architecture)
" --tool gemini --mode analysis --cd .workflow/issues
```

@@ -275,7 +275,8 @@ Return brief summaries; full conflict details in separate files:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution

**ALWAYS**:
1. Build dependency graph before ordering
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Build dependency graph before ordering
2. Detect file overlaps between solutions
3. Apply resolution rules consistently
4. Calculate semantic priority for all solutions

@@ -75,6 +75,8 @@ Examples:

## Execution Rules

**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)

1. **Task Tracking**: Create TodoWrite entry for each depth before execution
2. **Parallelism**: Max 4 jobs per depth, sequential across depths
3. **Strategy Assignment**: Assign strategy based on depth:

@@ -28,6 +28,8 @@ You are a test context discovery specialist focused on gathering test coverage i

## Tool Arsenal

**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)

### 1. Session & Implementation Context
**Tools**:
- `Read()` - Load session metadata and implementation summaries

@@ -332,6 +332,7 @@ When generating test results for orchestrator (saved to `.process/test-results.j
## Important Reminders

**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Execute tests first** - Understand what's failing before fixing
- **Diagnose thoroughly** - Find root cause, not just symptoms
- **Fix minimally** - Change only what's needed to pass tests

@@ -284,6 +284,8 @@ You execute 6 distinct task types organized into 3 patterns. Each task includes

### ALWAYS

**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)

**W3C Format Compliance**: ✅ Include $schema in all token files | ✅ Use $type metadata for all tokens | ✅ Use $value wrapper for color (light/dark), duration, easing | ✅ Validate token structure against W3C spec

**Pattern Recognition**: ✅ Identify pattern from [TASK_TYPE_IDENTIFIER] first | ✅ Apply pattern-specific execution rules | ✅ Follow autonomy level

@@ -124,6 +124,7 @@ Before completing any task, verify:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution

**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Verify resource/dependency existence before referencing
- Execute tasks systematically and incrementally
- Test and validate work thoroughly

361
.claude/commands/cli/codex-review.md
Normal file
@@ -0,0 +1,361 @@
---
name: codex-review
description: Interactive code review using Codex CLI via ccw endpoint with configurable review target, model, and custom instructions
argument-hint: "[--uncommitted|--base <branch>|--commit <sha>] [--model <model>] [--title <title>] [prompt]"
allowed-tools: Bash(*), AskUserQuestion(*), Read(*)
---

# Codex Review Command (/cli:codex-review)

## Overview
Interactive code review command that invokes `codex review` via the ccw cli endpoint with guided parameter selection.

**Codex Review Parameters** (from `codex review --help`):
| Parameter | Description |
|-----------|-------------|
| `[PROMPT]` | Custom review instructions (positional) |
| `-c model=<model>` | Override model via config |
| `--uncommitted` | Review staged, unstaged, and untracked changes |
| `--base <BRANCH>` | Review changes against base branch |
| `--commit <SHA>` | Review changes introduced by a commit |
| `--title <TITLE>` | Optional commit title for review summary |

## Prompt Template Format

Follow the standard ccw cli prompt template:

```
PURPOSE: [what] + [why] + [success criteria] + [constraints/scope]
TASK: • [step 1] • [step 2] • [step 3]
MODE: review
CONTEXT: [review target description] | Memory: [relevant context]
EXPECTED: [deliverable format] + [quality criteria]
CONSTRAINTS: [focus constraints]
```

## EXECUTION INSTRUCTIONS - START HERE

**When this command is triggered, follow these exact steps:**

### Step 1: Parse Arguments

Check if user provided arguments directly:
- `--uncommitted` → Record target = uncommitted
- `--base <branch>` → Record target = base, branch name
- `--commit <sha>` → Record target = commit, sha value
- `--model <model>` → Record model selection
- `--title <title>` → Record title
- Remaining text → Use as custom focus/prompt

If no target specified → Continue to Step 2 for interactive selection.
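
A minimal sketch of this parsing step (the function and variable names are illustrative, not part of the command spec):

```shell
# Parse /cli:codex-review arguments into target/model/title/prompt variables.
parse_review_args() {
  target=""; branch=""; sha=""; model=""; title=""; prompt=""
  while [ $# -gt 0 ]; do
    case "$1" in
      --uncommitted) target="uncommitted" ;;
      --base)   target="base";   branch="$2"; shift ;;
      --commit) target="commit"; sha="$2";    shift ;;
      --model)  model="$2"; shift ;;
      --title)  title="$2"; shift ;;
      *) prompt="${prompt:+$prompt }$1" ;;  # remaining text → custom focus
    esac
    shift
  done
}
```

An empty `$target` after parsing is the signal to fall through to the interactive selection in Step 2.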

### Step 2: Interactive Parameter Selection

**2.1 Review Target Selection**

```javascript
AskUserQuestion({
  questions: [{
    question: "What do you want to review?",
    header: "Review Target",
    options: [
      { label: "Uncommitted changes (Recommended)", description: "Review staged, unstaged, and untracked changes" },
      { label: "Compare to branch", description: "Review changes against a base branch (e.g., main)" },
      { label: "Specific commit", description: "Review changes introduced by a specific commit" }
    ],
    multiSelect: false
  }]
})
```

**2.2 Branch/Commit Input (if needed)**

If "Compare to branch" selected:
```javascript
AskUserQuestion({
  questions: [{
    question: "Which base branch to compare against?",
    header: "Base Branch",
    options: [
      { label: "main", description: "Compare against main branch" },
      { label: "master", description: "Compare against master branch" },
      { label: "develop", description: "Compare against develop branch" }
    ],
    multiSelect: false
  }]
})
```

If "Specific commit" selected:
- Run `git log --oneline -10` to show recent commits
- Ask user to provide commit SHA or select from list

**2.3 Model Selection (Optional)**

```javascript
AskUserQuestion({
  questions: [{
    question: "Which model to use for review?",
    header: "Model",
    options: [
      { label: "Default", description: "Use codex default model (gpt-5.2)" },
      { label: "o3", description: "OpenAI o3 reasoning model" },
      { label: "gpt-4.1", description: "GPT-4.1 model" },
      { label: "o4-mini", description: "OpenAI o4-mini (faster)" }
    ],
    multiSelect: false
  }]
})
```

**2.4 Review Focus Selection**

```javascript
AskUserQuestion({
  questions: [{
    question: "What should the review focus on?",
    header: "Focus Area",
    options: [
      { label: "General review (Recommended)", description: "Comprehensive review: correctness, style, bugs, docs" },
      { label: "Security focus", description: "Security vulnerabilities, input validation, auth issues" },
      { label: "Performance focus", description: "Performance bottlenecks, complexity, resource usage" },
      { label: "Code quality", description: "Readability, maintainability, SOLID principles" }
    ],
    multiSelect: false
  }]
})
```

### Step 3: Build Prompt and Command

**3.1 Construct Prompt Based on Focus**

**General Review Prompt:**
```
PURPOSE: Comprehensive code review to identify issues, improve quality, and ensure best practices; success = actionable feedback with clear priorities
TASK: • Review code correctness and logic errors • Check coding standards and consistency • Identify potential bugs and edge cases • Evaluate documentation completeness
MODE: review
CONTEXT: {target_description} | Memory: Project conventions from CLAUDE.md
EXPECTED: Structured review report with: severity levels (Critical/High/Medium/Low), file:line references, specific improvement suggestions, priority ranking
CONSTRAINTS: Focus on actionable feedback
```

**Security Focus Prompt:**
```
PURPOSE: Security-focused code review to identify vulnerabilities and security risks; success = all security issues documented with remediation
TASK: • Scan for injection vulnerabilities (SQL, XSS, command) • Check authentication and authorization logic • Evaluate input validation and sanitization • Identify sensitive data exposure risks
MODE: review
CONTEXT: {target_description} | Memory: Security best practices, OWASP Top 10
EXPECTED: Security report with: vulnerability classification, CVE references where applicable, remediation code snippets, risk severity matrix
CONSTRAINTS: Security-first analysis | Flag all potential vulnerabilities
```

**Performance Focus Prompt:**
```
PURPOSE: Performance-focused code review to identify bottlenecks and optimization opportunities; success = measurable improvement recommendations
TASK: • Analyze algorithmic complexity (Big-O) • Identify memory allocation issues • Check for N+1 queries and blocking operations • Evaluate caching opportunities
MODE: review
CONTEXT: {target_description} | Memory: Performance patterns and anti-patterns
EXPECTED: Performance report with: complexity analysis, bottleneck identification, optimization suggestions with expected impact, benchmark recommendations
CONSTRAINTS: Performance optimization focus
```

**Code Quality Focus Prompt:**
```
PURPOSE: Code quality review to improve maintainability and readability; success = cleaner, more maintainable code
TASK: • Assess SOLID principles adherence • Identify code duplication and abstraction opportunities • Review naming conventions and clarity • Evaluate test coverage implications
MODE: review
CONTEXT: {target_description} | Memory: Project coding standards
EXPECTED: Quality report with: principle violations, refactoring suggestions, naming improvements, maintainability score
CONSTRAINTS: Code quality and maintainability focus
```

**3.2 Build Target Description**

Based on selection, set `{target_description}`:
- Uncommitted: `Reviewing uncommitted changes (staged + unstaged + untracked)`
- Base branch: `Reviewing changes against {branch} branch`
- Commit: `Reviewing changes introduced by commit {sha}`

### Step 4: Execute via CCW CLI

Build and execute the ccw cli command:

```bash
# Base structure
ccw cli -p "<PROMPT>" --tool codex --mode review [OPTIONS]
```

**Command Construction:**

```bash
# Variables from user selection
TARGET_FLAG=""  # --uncommitted | --base <branch> | --commit <sha>
MODEL_FLAG=""   # --model <model> (if not default)
TITLE_FLAG=""   # --title "<title>" (if provided)

# Build target flag
if [ "$target" = "uncommitted" ]; then
  TARGET_FLAG="--uncommitted"
elif [ "$target" = "base" ]; then
  TARGET_FLAG="--base $branch"
elif [ "$target" = "commit" ]; then
  TARGET_FLAG="--commit $sha"
fi

# Build model flag (only if not default)
if [ "$model" != "default" ] && [ -n "$model" ]; then
  MODEL_FLAG="--model $model"
fi

# Build title flag (if provided)
if [ -n "$title" ]; then
  TITLE_FLAG="--title \"$title\""
fi

# Execute
ccw cli -p "$PROMPT" --tool codex --mode review $TARGET_FLAG $MODEL_FLAG $TITLE_FLAG
```

**Full Example Commands:**

**Option 1: With custom prompt (reviews uncommitted by default):**
```bash
ccw cli -p "
PURPOSE: Comprehensive code review to identify issues and improve quality; success = actionable feedback with priorities
TASK: • Review correctness and logic • Check standards compliance • Identify bugs and edge cases • Evaluate documentation
MODE: review
CONTEXT: Reviewing uncommitted changes | Memory: Project conventions
EXPECTED: Structured report with severity levels, file:line refs, improvement suggestions
CONSTRAINTS: Actionable feedback
" --tool codex --mode review --rule analysis-review-code-quality
```

**Option 2: Target flag only (no prompt allowed):**
```bash
ccw cli --tool codex --mode review --uncommitted
```

### Step 5: Execute and Display Results

```javascript
Bash({
  command: "ccw cli -p \"$PROMPT\" --tool codex --mode review $FLAGS",
  run_in_background: true
})
```

Wait for completion and display formatted results.

## Quick Usage Examples

### Direct Execution (No Interaction)

```bash
# Review uncommitted changes with default settings
/cli:codex-review --uncommitted

# Review against main branch
/cli:codex-review --base main

# Review specific commit
/cli:codex-review --commit abc123

# Review with custom model
/cli:codex-review --uncommitted --model o3

# Review with security focus
/cli:codex-review --uncommitted security

# Full options
/cli:codex-review --base main --model o3 --title "Auth Feature" security
```

### Interactive Mode

```bash
# Start interactive selection (guided flow)
/cli:codex-review
```

## Focus Area Mapping

| User Selection | Prompt Focus | Key Checks |
|----------------|--------------|------------|
| General review | Comprehensive | Correctness, style, bugs, docs |
| Security focus | Security-first | Injection, auth, validation, exposure |
| Performance focus | Optimization | Complexity, memory, queries, caching |
| Code quality | Maintainability | SOLID, duplication, naming, tests |

## Error Handling

### No Changes to Review
```
No changes found for review target. Suggestions:
- For --uncommitted: Make some code changes first
- For --base: Ensure branch exists and has diverged
- For --commit: Verify commit SHA exists
```
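
One way to avoid hitting this error is a pre-flight check before invoking the review. `check_target` is a hypothetical helper, not part of ccw or codex; it only uses standard git plumbing:

```shell
# Return success only if the chosen review target has something to review.
check_target() {
  target="$1"; ref="$2"
  case "$target" in
    uncommitted)
      # any staged, unstaged, or untracked entry means there is work to review
      [ -n "$(git status --porcelain)" ] ;;
    base)
      # branch must exist AND HEAD must have diverged from it
      git rev-parse --verify --quiet "$ref" >/dev/null &&
        [ -n "$(git diff --name-only "$ref...HEAD")" ] ;;
    commit)
      # commit SHA must resolve to a real commit
      git rev-parse --verify --quiet "$ref^{commit}" >/dev/null ;;
    *) return 1 ;;
  esac
}
```

Calling `check_target uncommitted` (or `base main`, `commit abc123`) before building the command lets the flow fall back to the suggestions above instead of surfacing a raw codex error.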
|
||||
|
||||
### Invalid Branch
|
||||
```bash
|
||||
# Show available branches
|
||||
git branch -a --list | head -20
|
||||
```
|
||||
|
||||
### Invalid Commit
|
||||
```bash
|
||||
# Show recent commits
|
||||
git log --oneline -10
|
||||
```
|
||||
|
||||
## Integration Notes
|
||||
|
||||
- Uses `ccw cli --tool codex --mode review` endpoint
|
||||
- Model passed via prompt (codex uses `-c model=` internally)
|
||||
- Target flags (`--uncommitted`, `--base`, `--commit`) passed through to codex
|
||||
- Prompt follows standard ccw cli template format for consistency
|
||||
|
||||
## Validation Constraints
|
||||
|
||||
**IMPORTANT: Target flags and prompt are mutually exclusive**
|
||||
|
||||
The codex CLI has a constraint where target flags (`--uncommitted`, `--base`, `--commit`) cannot be used with a positional `[PROMPT]` argument:
|
||||
|
||||
```
|
||||
error: the argument '--uncommitted' cannot be used with '[PROMPT]'
|
||||
error: the argument '--base <BRANCH>' cannot be used with '[PROMPT]'
|
||||
error: the argument '--commit <SHA>' cannot be used with '[PROMPT]'
|
||||
```
|
||||
|
||||
**Behavior:**
|
||||
- When ANY target flag is specified, ccw cli automatically skips template concatenation (systemRules/roles)
|
||||
- The review uses codex's default review behavior for the specified target
|
||||
- Custom prompts are only supported WITHOUT target flags (reviews uncommitted changes by default)
|
||||
|
||||
**Valid combinations:**
|
||||
| Command | Result |
|
||||
|---------|--------|
|
||||
| `codex review "Focus on security"` | ✓ Custom prompt, reviews uncommitted (default) |
|
||||
| `codex review --uncommitted` | ✓ No prompt, uses default review |
|
||||
| `codex review --base main` | ✓ No prompt, uses default review |
|
||||
| `codex review --commit abc123` | ✓ No prompt, uses default review |
|
||||
| `codex review --uncommitted "prompt"` | ✗ Invalid - mutually exclusive |
|
||||
| `codex review --base main "prompt"` | ✗ Invalid - mutually exclusive |
|
||||
| `codex review --commit abc123 "prompt"` | ✗ Invalid - mutually exclusive |
|
||||
|
||||
**Examples:**
|
||||
```bash
|
||||
# ✓ Valid: prompt only (reviews uncommitted by default)
|
||||
ccw cli -p "Focus on security" --tool codex --mode review
|
||||
|
||||
# ✓ Valid: target flag only (no prompt)
|
||||
ccw cli --tool codex --mode review --uncommitted
|
||||
ccw cli --tool codex --mode review --base main
|
||||
ccw cli --tool codex --mode review --commit abc123
|
||||
|
||||
# ✗ Invalid: target flag with prompt (will fail)
|
||||
ccw cli -p "Review this" --tool codex --mode review --uncommitted
|
||||
ccw cli -p "Review this" --tool codex --mode review --base main
|
||||
ccw cli -p "Review this" --tool codex --mode review --commit abc123
|
||||
```
|
||||
@@ -267,7 +267,7 @@ EXPECTED: JSON exploration plan following exploration-plan-schema.json:
|
||||
"estimated_iterations": N,
|
||||
"termination_conditions": [...]
|
||||
}
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Use ACE context to inform targets | Focus on actionable plan
|
||||
CONSTRAINTS: Use ACE context to inform targets | Focus on actionable plan
|
||||
`;
|
||||
|
||||
// Step 3: Execute Gemini planning
|
||||
|
||||
@@ -1,7 +1,7 @@
---
name: execute
description: Execute queue with DAG-based parallel orchestration (one commit per solution)
argument-hint: "[--worktree [<existing-path>]] [--queue <queue-id>]"
argument-hint: "--queue <queue-id> [--worktree [<existing-path>]]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---

@@ -17,21 +17,64 @@ Minimal orchestrator that dispatches **solution IDs** to executors. Each executo
- `done <id>` → update solution completion status
- No race conditions: status changes only via `done`
- **Executor handles all tasks within a solution sequentially**
- **Worktree isolation**: Each executor can work in its own git worktree
- **Single worktree for entire queue**: One worktree isolates ALL queue execution from main workspace

## Queue ID Requirement (MANDATORY)

**Queue ID is REQUIRED.** You MUST specify which queue to execute via `--queue <queue-id>`.

### If Queue ID Not Provided

When the `--queue` parameter is missing, you MUST:

1. **List available queues** by running:
   ```javascript
   const result = Bash('ccw issue queue list --brief --json');
   const index = JSON.parse(result);
   ```

2. **Display available queues** to user:
   ```
   Available Queues:
   ID                  Status      Progress    Issues
   -----------------------------------------------------------
   → QUE-20251215-001  active      3/10        ISS-001, ISS-002
     QUE-20251210-002  active      0/5         ISS-003
     QUE-20251205-003  completed   8/8         ISS-004
   ```

3. **Stop and ask user** to specify which queue to execute:
   ```javascript
   AskUserQuestion({
     questions: [{
       question: "Which queue would you like to execute?",
       header: "Queue",
       multiSelect: false,
       options: index.queues
         .filter(q => q.status === 'active')
         .map(q => ({
           label: q.id,
           description: `${q.status}, ${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
         }))
     }]
   })
   ```

4. **After user selection**, continue execution with the selected queue ID.

**DO NOT auto-select queues.** Explicit user confirmation is required to prevent accidental execution of the wrong queue.

## Usage

```bash
/issue:execute                                       # Execute active queue(s)
/issue:execute --queue QUE-xxx                       # Execute specific queue
/issue:execute --worktree                            # Use git worktrees for parallel isolation
/issue:execute --worktree --queue QUE-xxx
/issue:execute --worktree /path/to/existing/worktree # Resume in existing worktree
/issue:execute --queue QUE-xxx                       # Execute specific queue (REQUIRED)
/issue:execute --queue QUE-xxx --worktree            # Execute in isolated worktree
/issue:execute --queue QUE-xxx --worktree /path/to/existing/worktree  # Resume
```

**Parallelism**: Determined automatically by the task dependency DAG (no manual control)
**Executor & Dry-run**: Selected via interactive prompt (AskUserQuestion)
**Worktree**: Creates isolated git worktrees for each parallel executor
**Worktree**: Creates ONE worktree for the entire queue execution (not per-solution)

**⭐ Recommended Executor**: **Codex** - Best for long-running autonomous work (2hr timeout); supports background execution and full write access

@@ -44,37 +87,101 @@ Minimal orchestrator that dispatches **solution IDs** to executors. Each executo
## Execution Flow

```
Phase 0 (if --worktree): Setup Worktree Base
└─ Ensure .worktrees directory exists
Phase 0: Validate Queue ID (REQUIRED)
├─ If --queue provided → use specified queue
├─ If --queue missing → list queues, prompt user to select
└─ Store QUEUE_ID for all subsequent commands

Phase 0.5 (if --worktree): Setup Queue Worktree
├─ Create ONE worktree for entire queue: .ccw/worktrees/queue-<timestamp>
├─ All subsequent execution happens in this worktree
└─ Main workspace remains clean and untouched

Phase 1: Get DAG & User Selection
├─ ccw issue queue dag [--queue QUE-xxx] → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
├─ ccw issue queue dag --queue ${QUEUE_ID} → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
└─ AskUserQuestion → executor type (codex|gemini|agent), dry-run mode, worktree mode

Phase 2: Dispatch Parallel Batch (DAG-driven)
├─ Parallelism determined by DAG (no manual limit)
├─ All executors work in the SAME worktree (or main if no worktree)
├─ For each solution ID in batch (parallel - all at once):
│  ├─ (if worktree) Create isolated worktree: git worktree add
│  ├─ Executor calls: ccw issue detail <id> (READ-ONLY)
│  ├─ Executor gets FULL SOLUTION with all tasks
│  ├─ Executor implements all tasks sequentially (T1 → T2 → T3)
│  ├─ Executor tests + verifies each task
│  ├─ Executor commits ONCE per solution (with formatted summary)
│  ├─ Executor calls: ccw issue done <id>
│  └─ (if worktree) Cleanup: merge branch, remove worktree
│  └─ Executor calls: ccw issue done <id>
└─ Wait for batch completion

Phase 3: Next Batch
Phase 3: Next Batch (repeat Phase 2)
└─ ccw issue queue dag → check for newly-ready solutions

Phase 4 (if --worktree): Worktree Completion
├─ All batches complete → prompt for merge strategy
└─ Options: Create PR / Merge to main / Keep branch
```
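The batching logic in Phases 1-3 is easiest to picture as a topological-level pass over solution dependencies. The sketch below is a hypothetical illustration of how `parallel_batches` could be derived; the real computation happens inside `ccw issue queue dag` and is not shown in this file, so the function name and input shape here are assumptions.

```javascript
// Hypothetical sketch: derive parallel_batches (the shape returned by
// `ccw issue queue dag`) from a solution -> dependencies map.
// Each batch holds solutions whose dependencies are satisfied by earlier batches.
function parallelBatches(deps) {
  const done = new Set();
  const batches = [];
  let remaining = Object.keys(deps);
  while (remaining.length > 0) {
    // A solution is ready when every dependency has already completed
    const ready = remaining.filter(id => deps[id].every(d => done.has(d)));
    if (ready.length === 0) throw new Error('Dependency cycle detected');
    batches.push(ready);
    ready.forEach(id => done.add(id));
    remaining = remaining.filter(id => !done.has(id));
  }
  return batches;
}

// Example: S-3 conflicts with S-1 (same files); S-1 and S-2 are independent.
const batches = parallelBatches({ 'S-1': [], 'S-2': [], 'S-3': ['S-1'] });
// batches → [["S-1", "S-2"], ["S-3"]]
```

Solutions that land in the same batch have no unmet dependencies on each other, which is exactly why a batch can be dispatched in parallel.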

## Implementation

### Phase 0: Validate Queue ID

```javascript
// Check if --queue was provided
let QUEUE_ID = args.queue;

if (!QUEUE_ID) {
  // List available queues
  const listResult = Bash('ccw issue queue list --brief --json').trim();
  const index = JSON.parse(listResult);

  if (index.queues.length === 0) {
    console.log('No queues found. Use /issue:queue to create one first.');
    return;
  }

  // Filter active queues only
  const activeQueues = index.queues.filter(q => q.status === 'active');

  if (activeQueues.length === 0) {
    console.log('No active queues found.');
    console.log('Available queues:', index.queues.map(q => `${q.id} (${q.status})`).join(', '));
    return;
  }

  // Display and prompt user
  console.log('\nAvailable Queues:');
  console.log('ID'.padEnd(22) + 'Status'.padEnd(12) + 'Progress'.padEnd(12) + 'Issues');
  console.log('-'.repeat(70));
  for (const q of index.queues) {
    const marker = q.id === index.active_queue_id ? '→ ' : '  ';
    console.log(marker + q.id.padEnd(20) + q.status.padEnd(12) +
      `${q.completed_solutions || 0}/${q.total_solutions || 0}`.padEnd(12) +
      q.issue_ids.join(', '));
  }

  const answer = AskUserQuestion({
    questions: [{
      question: "Which queue would you like to execute?",
      header: "Queue",
      multiSelect: false,
      options: activeQueues.map(q => ({
        label: q.id,
        description: `${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
      }))
    }]
  });

  QUEUE_ID = answer['Queue'];
}

console.log(`\n## Executing Queue: ${QUEUE_ID}\n`);
```
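The filter/map logic above can be pulled into a small pure helper, which makes it easy to test in isolation. This is an illustrative sketch: the `index` shape follows the `ccw issue queue list --brief --json` contract documented later in this file, but `buildQueueOptions` is a hypothetical name, not part of the ccw codebase.

```javascript
// Build AskUserQuestion options from a queue index, keeping only active queues.
// The input shape mirrors the `ccw issue queue list --brief --json` contract.
function buildQueueOptions(index) {
  return index.queues
    .filter(q => q.status === 'active')
    .map(q => ({
      label: q.id,
      description: `${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
    }));
}

const options = buildQueueOptions({
  active_queue_id: 'QUE-20251215-001',
  queues: [
    { id: 'QUE-20251215-001', status: 'active', issue_ids: ['ISS-001'], total_solutions: 5, completed_solutions: 2 },
    { id: 'QUE-20251205-003', status: 'completed', issue_ids: ['ISS-004'], total_solutions: 8, completed_solutions: 8 }
  ]
});
// The completed queue is excluded; only the active queue becomes an option.
```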

### Phase 1: Get DAG & User Selection

```javascript
// Get dependency graph and parallel batches
const dagJson = Bash(`ccw issue queue dag`).trim();
// Get dependency graph and parallel batches (QUEUE_ID required)
const dagJson = Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim();
const dag = JSON.parse(dagJson);

if (dag.error || dag.ready_count === 0) {
@@ -115,12 +222,12 @@ const answer = AskUserQuestion({
    ]
  },
  {
    question: 'Use git worktrees for parallel isolation?',
    question: 'Use git worktree for queue isolation?',
    header: 'Worktree',
    multiSelect: false,
    options: [
      { label: 'Yes (Recommended for parallel)', description: 'Each executor works in isolated worktree branch' },
      { label: 'No', description: 'Work directly in current directory (serial only)' }
      { label: 'Yes (Recommended)', description: 'Create ONE worktree for entire queue - main stays clean' },
      { label: 'No', description: 'Work directly in current directory' }
    ]
  }
]
@@ -140,7 +247,7 @@ if (isDryRun) {
}
```

### Phase 2: Dispatch Parallel Batch (DAG-driven)
### Phase 0 & 2: Setup Queue Worktree & Dispatch

```javascript
// Parallelism determined by DAG - no manual limit
@@ -158,24 +265,40 @@ TodoWrite({

console.log(`\n### Executing Solutions (DAG batch 1): ${batch.join(', ')}`);

// Setup worktree base directory if needed (using absolute paths)
if (useWorktree) {
  // Use absolute paths to avoid issues when running from subdirectories
  const repoRoot = Bash('git rev-parse --show-toplevel').trim();
  const worktreeBase = `${repoRoot}/.ccw/worktrees`;
  Bash(`mkdir -p "${worktreeBase}"`);
  // Prune stale worktrees from previous interrupted executions
  Bash('git worktree prune');
}

// Parse existing worktree path from args if provided
// Example: --worktree /path/to/existing/worktree
const existingWorktree = args.worktree && typeof args.worktree === 'string' ? args.worktree : null;

// Setup ONE worktree for entire queue (not per-solution)
let worktreePath = null;
let worktreeBranch = null;

if (useWorktree) {
  const repoRoot = Bash('git rev-parse --show-toplevel').trim();
  const worktreeBase = `${repoRoot}/.ccw/worktrees`;
  Bash(`mkdir -p "${worktreeBase}"`);
  Bash('git worktree prune'); // Cleanup stale worktrees

  if (existingWorktree) {
    // Resume mode: Use existing worktree
    worktreePath = existingWorktree;
    worktreeBranch = Bash(`git -C "${worktreePath}" branch --show-current`).trim();
    console.log(`Resuming in existing worktree: ${worktreePath} (branch: ${worktreeBranch})`);
  } else {
    // Create mode: ONE worktree for the entire queue
    const timestamp = new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14);
    worktreeBranch = `queue-exec-${dag.queue_id || timestamp}`;
    worktreePath = `${worktreeBase}/${worktreeBranch}`;
    Bash(`git worktree add "${worktreePath}" -b "${worktreeBranch}"`);
    console.log(`Created queue worktree: ${worktreePath}`);
  }
}

// Launch ALL solutions in batch in parallel (DAG guarantees no conflicts)
// All executors work in the SAME worktree (or main if no worktree)
const executions = batch.map(solutionId => {
  updateTodo(solutionId, 'in_progress');
  return dispatchExecutor(solutionId, executor, useWorktree, existingWorktree);
  return dispatchExecutor(solutionId, executor, worktreePath);
});

await Promise.all(executions);
@@ -185,126 +308,20 @@ batch.forEach(id => updateTodo(id, 'completed'));
|
||||
### Executor Dispatch
|
||||
|
||||
```javascript
|
||||
function dispatchExecutor(solutionId, executorType, useWorktree = false, existingWorktree = null) {
|
||||
// Worktree setup commands (if enabled) - using absolute paths
|
||||
// Supports both creating new worktrees and resuming in existing ones
|
||||
const worktreeSetup = useWorktree ? `
|
||||
### Step 0: Setup Isolated Worktree
|
||||
\`\`\`bash
|
||||
# Use absolute paths to avoid issues when running from subdirectories
|
||||
REPO_ROOT=$(git rev-parse --show-toplevel)
|
||||
WORKTREE_BASE="\${REPO_ROOT}/.ccw/worktrees"
|
||||
|
||||
# Check if existing worktree path was provided
|
||||
EXISTING_WORKTREE="${existingWorktree || ''}"
|
||||
|
||||
if [[ -n "\${EXISTING_WORKTREE}" && -d "\${EXISTING_WORKTREE}" ]]; then
|
||||
# Resume mode: Use existing worktree
|
||||
WORKTREE_PATH="\${EXISTING_WORKTREE}"
|
||||
WORKTREE_NAME=$(basename "\${WORKTREE_PATH}")
|
||||
|
||||
# Verify it's a valid git worktree
|
||||
if ! git -C "\${WORKTREE_PATH}" rev-parse --is-inside-work-tree &>/dev/null; then
|
||||
echo "Error: \${EXISTING_WORKTREE} is not a valid git worktree"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "Resuming in existing worktree: \${WORKTREE_PATH}"
|
||||
else
|
||||
# Create mode: New worktree with timestamp
|
||||
WORKTREE_NAME="exec-${solutionId}-$(date +%H%M%S)"
|
||||
WORKTREE_PATH="\${WORKTREE_BASE}/\${WORKTREE_NAME}"
|
||||
|
||||
# Ensure worktree base exists
|
||||
mkdir -p "\${WORKTREE_BASE}"
|
||||
|
||||
# Prune stale worktrees
|
||||
git worktree prune
|
||||
|
||||
# Create worktree
|
||||
git worktree add "\${WORKTREE_PATH}" -b "\${WORKTREE_NAME}"
|
||||
|
||||
echo "Created new worktree: \${WORKTREE_PATH}"
|
||||
fi
|
||||
|
||||
# Setup cleanup trap for graceful failure handling
|
||||
cleanup_worktree() {
|
||||
echo "Cleaning up worktree due to interruption..."
|
||||
cd "\${REPO_ROOT}" 2>/dev/null || true
|
||||
git worktree remove "\${WORKTREE_PATH}" --force 2>/dev/null || true
|
||||
echo "Worktree removed. Branch '\${WORKTREE_NAME}' kept for inspection."
|
||||
}
|
||||
trap cleanup_worktree EXIT INT TERM
|
||||
|
||||
cd "\${WORKTREE_PATH}"
|
||||
\`\`\`
|
||||
` : '';
|
||||
|
||||
const worktreeCleanup = useWorktree ? `
|
||||
### Step 5: Worktree Completion (User Choice)
|
||||
|
||||
After all tasks complete, prompt for merge strategy:
|
||||
|
||||
\`\`\`javascript
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Solution ${solutionId} completed. What to do with worktree branch?",
|
||||
header: "Merge",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Create PR (Recommended)", description: "Push branch and create pull request - safest for parallel execution" },
|
||||
{ label: "Merge to main", description: "Merge branch and cleanup worktree (requires clean main)" },
|
||||
{ label: "Keep branch", description: "Cleanup worktree, keep branch for manual handling" }
|
||||
]
|
||||
}]
|
||||
})
|
||||
\`\`\`
|
||||
|
||||
**Based on selection:**
|
||||
\`\`\`bash
|
||||
# Disable cleanup trap before intentional cleanup
|
||||
trap - EXIT INT TERM
|
||||
|
||||
# Return to repo root (use REPO_ROOT from setup)
|
||||
cd "\${REPO_ROOT}"
|
||||
|
||||
# Validate main repo state before merge
|
||||
validate_main_clean() {
|
||||
if [[ -n \$(git status --porcelain) ]]; then
|
||||
echo "⚠️ Warning: Main repo has uncommitted changes."
|
||||
echo "Cannot auto-merge. Falling back to 'Create PR' option."
|
||||
return 1
|
||||
fi
|
||||
return 0
|
||||
}
|
||||
|
||||
# Create PR (Recommended for parallel execution):
|
||||
git push -u origin "\${WORKTREE_NAME}"
|
||||
gh pr create --title "Solution ${solutionId}" --body "Issue queue execution"
|
||||
git worktree remove "\${WORKTREE_PATH}"
|
||||
|
||||
# Merge to main (only if main is clean):
|
||||
if validate_main_clean; then
|
||||
git merge --no-ff "\${WORKTREE_NAME}" -m "Merge solution ${solutionId}"
|
||||
git worktree remove "\${WORKTREE_PATH}" && git branch -d "\${WORKTREE_NAME}"
|
||||
else
|
||||
# Fallback to PR if main is dirty
|
||||
git push -u origin "\${WORKTREE_NAME}"
|
||||
gh pr create --title "Solution ${solutionId}" --body "Issue queue execution (main had uncommitted changes)"
|
||||
git worktree remove "\${WORKTREE_PATH}"
|
||||
fi
|
||||
|
||||
# Keep branch:
|
||||
git worktree remove "\${WORKTREE_PATH}"
|
||||
echo "Branch \${WORKTREE_NAME} kept for manual handling"
|
||||
\`\`\`
|
||||
|
||||
**Parallel Execution Safety**: "Create PR" is the default and safest option for parallel executors, avoiding merge race conditions.
|
||||
` : '';
|
||||
// worktreePath: path to shared worktree (null if not using worktree)
|
||||
function dispatchExecutor(solutionId, executorType, worktreePath = null) {
|
||||
// If worktree is provided, executor works in that directory
|
||||
// No per-solution worktree creation - ONE worktree for entire queue
|
||||
const cdCommand = worktreePath ? `cd "${worktreePath}"` : '';
|
||||
|
||||
const prompt = `
|
||||
## Execute Solution ${solutionId}
|
||||
${worktreeSetup}
|
||||
${worktreePath ? `
|
||||
### Step 0: Enter Queue Worktree
|
||||
\`\`\`bash
|
||||
cd "${worktreePath}"
|
||||
\`\`\`
|
||||
` : ''}
|
||||
### Step 1: Get Solution (read-only)
|
||||
\`\`\`bash
|
||||
ccw issue detail ${solutionId}
|
||||
@@ -352,16 +369,21 @@ If any task failed:
|
||||
\`\`\`bash
|
||||
ccw issue done ${solutionId} --fail --reason '{"task_id": "TX", "error_type": "test_failure", "message": "..."}'
|
||||
\`\`\`
|
||||
${worktreeCleanup}`;
|
||||
|
||||
**Note**: Do NOT cleanup worktree after this solution. Worktree is shared by all solutions in the queue.
|
||||
`;
|
||||
|
||||
// For CLI tools, pass --cd to set working directory
|
||||
const cdOption = worktreePath ? ` --cd "${worktreePath}"` : '';
|
||||
|
||||
if (executorType === 'codex') {
|
||||
return Bash(
|
||||
`ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${solutionId}`,
|
||||
`ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${solutionId}${cdOption}`,
|
||||
{ timeout: 7200000, run_in_background: true } // 2hr for full solution
|
||||
);
|
||||
} else if (executorType === 'gemini') {
|
||||
return Bash(
|
||||
`ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${solutionId}`,
|
||||
`ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${solutionId}${cdOption}`,
|
||||
{ timeout: 3600000, run_in_background: true }
|
||||
);
|
||||
} else {
|
||||
@@ -369,7 +391,7 @@ ${worktreeCleanup}`;
|
||||
subagent_type: 'code-developer',
|
||||
run_in_background: false,
|
||||
description: `Execute solution ${solutionId}`,
|
||||
prompt: prompt
|
||||
prompt: worktreePath ? `Working directory: ${worktreePath}\n\n${prompt}` : prompt
|
||||
});
|
||||
}
|
||||
}
|
||||
@@ -378,8 +400,8 @@ ${worktreeCleanup}`;
### Phase 3: Check Next Batch

```javascript
// Refresh DAG after batch completes
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag`).trim());
// Refresh DAG after batch completes (use same QUEUE_ID)
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim());

console.log(`
## Batch Complete
@@ -389,46 +411,117 @@ console.log(`
`);

if (refreshedDag.ready_count > 0) {
  console.log('Run `/issue:execute` again for next batch.');
  console.log(`Run \`/issue:execute --queue ${QUEUE_ID}\` again for next batch.`);
  // Note: If resuming, pass existing worktree path:
  // /issue:execute --queue ${QUEUE_ID} --worktree <worktreePath>
}
```
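Phases 2-3 together form a drain loop: dispatch the first ready batch, refresh the DAG, and repeat until nothing is ready. A schematic, synchronous sketch, where `getDag` and `runBatch` are stand-ins for the `ccw issue queue dag` call and the parallel dispatch above (stubbed here, since the real calls are interactive):

```javascript
// Schematic drain loop: take the first ready batch, run it, refresh, repeat.
function drainQueue(getDag, runBatch) {
  const order = [];
  let dag = getDag();
  while (dag.ready_count > 0) {
    const batch = dag.parallel_batches[0];
    runBatch(batch);   // dispatch all solutions in the batch in parallel
    order.push(batch);
    dag = getDag();    // refresh: newly-ready solutions appear here
  }
  return order;
}

// Stubbed DAG states simulating two refreshes before the queue is drained
const states = [
  { ready_count: 2, parallel_batches: [['S-1', 'S-2'], ['S-3']] },
  { ready_count: 1, parallel_batches: [['S-3']] },
  { ready_count: 0, parallel_batches: [] }
];
let i = 0;
const order = drainQueue(() => states[i++], () => {});
// order → [['S-1', 'S-2'], ['S-3']]
```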

### Phase 4: Worktree Completion (after ALL batches)

```javascript
// Only run when ALL solutions completed AND using worktree
if (useWorktree && refreshedDag.ready_count === 0 && refreshedDag.completed_count === refreshedDag.total) {
  console.log('\n## All Solutions Completed - Worktree Cleanup');

  const answer = AskUserQuestion({
    questions: [{
      question: `Queue complete. What to do with worktree branch "${worktreeBranch}"?`,
      header: 'Merge',
      multiSelect: false,
      options: [
        { label: 'Create PR (Recommended)', description: 'Push branch and create pull request' },
        { label: 'Merge to main', description: 'Merge all commits and cleanup worktree' },
        { label: 'Keep branch', description: 'Cleanup worktree, keep branch for manual handling' }
      ]
    }]
  });

  const repoRoot = Bash('git rev-parse --show-toplevel').trim();

  if (answer['Merge'].includes('Create PR')) {
    Bash(`git -C "${worktreePath}" push -u origin "${worktreeBranch}"`);
    Bash(`gh pr create --title "Queue ${dag.queue_id}" --body "Issue queue execution - all solutions completed" --head "${worktreeBranch}"`);
    Bash(`git worktree remove "${worktreePath}"`);
    console.log(`PR created for branch: ${worktreeBranch}`);
  } else if (answer['Merge'].includes('Merge to main')) {
    // Check main is clean
    const mainDirty = Bash('git status --porcelain').trim();
    if (mainDirty) {
      console.log('Warning: Main has uncommitted changes. Falling back to PR.');
      Bash(`git -C "${worktreePath}" push -u origin "${worktreeBranch}"`);
      Bash(`gh pr create --title "Queue ${dag.queue_id}" --body "Issue queue execution (main had uncommitted changes)" --head "${worktreeBranch}"`);
      Bash(`git worktree remove "${worktreePath}"`);
    } else {
      Bash(`git merge --no-ff "${worktreeBranch}" -m "Merge queue ${dag.queue_id}"`);
      // Remove the worktree BEFORE deleting the branch: git refuses to delete
      // a branch that is still checked out in a worktree
      Bash(`git worktree remove "${worktreePath}"`);
      Bash(`git branch -d "${worktreeBranch}"`);
    }
  } else {
    Bash(`git worktree remove "${worktreePath}"`);
    console.log(`Branch ${worktreeBranch} kept for manual handling`);
  }
}
```
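The dispatch code above relies on an `escapePrompt` helper that is never defined in this file. A minimal sketch, under the assumption that the prompt is interpolated into the double-quoted `-p "..."` argument and therefore needs backslashes, double quotes, backticks, and `$` escaped; this is not the actual ccw implementation:

```javascript
// Hypothetical escapePrompt: make a multi-line prompt safe inside the
// double-quoted "-p" argument of `ccw cli -p "..."`.
// Escapes the four characters the shell treats specially inside double quotes.
function escapePrompt(prompt) {
  return prompt.replace(/([\\"`$])/g, '\\$1');
}

const escaped = escapePrompt('Run `ls` in "$HOME"');
```

Anything beyond these four characters (for example newline handling on some shells) would need extra care; treat this as an assumption about the helper's contract.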

## Parallel Execution Model

```
┌─────────────────────────────────────────────────────────────┐
│                        Orchestrator                         │
├─────────────────────────────────────────────────────────────┤
│ 1. ccw issue queue dag                                      │
│    → { parallel_batches: [["S-1","S-2"], ["S-3"]] }         │
│                                                             │
│ 2. Dispatch batch 1 (parallel):                             │
│    ┌──────────────────────┐  ┌──────────────────────┐       │
│    │ Executor 1           │  │ Executor 2           │       │
│    │ detail S-1           │  │ detail S-2           │       │
│    │ → gets full solution │  │ → gets full solution │       │
│    │ [T1→T2→T3 sequential]│  │ [T1→T2 sequential]   │       │
│    │ commit (1x solution) │  │ commit (1x solution) │       │
│    │ done S-1             │  │ done S-2             │       │
│    └──────────────────────┘  └──────────────────────┘       │
│                                                             │
│ 3. ccw issue queue dag (refresh)                            │
│    → S-3 now ready (S-1 completed, file conflict resolved)  │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│                          Orchestrator                           │
├─────────────────────────────────────────────────────────────────┤
│ 0. Validate QUEUE_ID (required, or prompt user to select)       │
│                                                                 │
│ 0.5 (if --worktree) Create ONE worktree for entire queue        │
│    → .ccw/worktrees/queue-exec-<queue-id>                       │
│                                                                 │
│ 1. ccw issue queue dag --queue ${QUEUE_ID}                      │
│    → { parallel_batches: [["S-1","S-2"], ["S-3"]] }             │
│                                                                 │
│ 2. Dispatch batch 1 (parallel, SAME worktree):                  │
│    ┌──────────────────────────────────────────────────────┐     │
│    │           Shared Queue Worktree (or main)            │     │
│    │  ┌──────────────────┐  ┌──────────────────┐          │     │
│    │  │ Executor 1       │  │ Executor 2       │          │     │
│    │  │ detail S-1       │  │ detail S-2       │          │     │
│    │  │ [T1→T2→T3]       │  │ [T1→T2]          │          │     │
│    │  │ commit S-1       │  │ commit S-2       │          │     │
│    │  │ done S-1         │  │ done S-2         │          │     │
│    │  └──────────────────┘  └──────────────────┘          │     │
│    └──────────────────────────────────────────────────────┘     │
│                                                                 │
│ 3. ccw issue queue dag (refresh)                                │
│    → S-3 now ready → dispatch batch 2 (same worktree)           │
│                                                                 │
│ 4. (if --worktree) ALL batches complete → cleanup worktree      │
│    → Prompt: Create PR / Merge to main / Keep branch            │
└─────────────────────────────────────────────────────────────────┘
```

**Why this works for parallel:**
- **ONE worktree for entire queue** → all solutions share same isolated workspace
- `detail <id>` is READ-ONLY → no race conditions
- Each executor handles **all tasks within a solution** sequentially
- **One commit per solution** with formatted summary (not per-task)
- `done <id>` updates only its own solution status
- `queue dag` recalculates ready solutions after each batch
- Solutions in same batch have NO file conflicts
- Solutions in same batch have NO file conflicts (DAG guarantees)
- **Main workspace stays clean** until merge/PR decision

## CLI Endpoint Contract

### `ccw issue queue dag`
Returns dependency graph with parallel batches (solution-level):
### `ccw issue queue list --brief --json`
Returns queue index for selection (used when --queue not provided):
```json
{
  "active_queue_id": "QUE-20251215-001",
  "queues": [
    { "id": "QUE-20251215-001", "status": "active", "issue_ids": ["ISS-001"], "total_solutions": 5, "completed_solutions": 2 }
  ]
}
```

### `ccw issue queue dag --queue <queue-id>`
Returns dependency graph with parallel batches (solution-level, **--queue required**):
```json
{
  "queue_id": "QUE-...",

@@ -131,7 +131,7 @@ TASK: • Analyze issue titles/tags semantically • Identify functional/archite
MODE: analysis
CONTEXT: Issue metadata only
EXPECTED: JSON with groups array, each containing max 4 issue_ids, theme, rationale
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Each issue in exactly one group | Max 4 issues per group | Balance group sizes
CONSTRAINTS: Each issue in exactly one group | Max 4 issues per group | Balance group sizes

INPUT:
${JSON.stringify(issueSummaries, null, 2)}

@@ -65,9 +65,13 @@ Queue formation command using **issue-queue-agent** that analyzes all bound solu
--queues <n>    Number of parallel queues (default: 1)
--issue <id>    Form queue for specific issue only
--append <id>   Append issue to active queue (don't create new)
--force         Skip active queue check, always create new queue

# CLI subcommands (ccw issue queue ...)
ccw issue queue list                          List all queues with status
ccw issue queue add <issue-id>                Add issue to queue (interactive if active queue exists)
ccw issue queue add <issue-id> -f             Add to new queue without prompt (force)
ccw issue queue merge <src> --queue <target>  Merge source queue into target queue
ccw issue queue switch <queue-id>             Switch active queue
ccw issue queue archive                       Archive current queue
ccw issue queue delete <queue-id>             Delete queue from history
@@ -92,7 +96,7 @@ Phase 2-4: Agent-Driven Queue Formation (issue-queue-agent)
│  ├─ Build dependency DAG from conflicts
│  ├─ Calculate semantic priority per solution
│  └─ Assign execution groups (parallel/sequential)
└─ Each agent writes: queue JSON + index update
└─ Each agent writes: queue JSON + index update (NOT active yet)

Phase 5: Conflict Clarification (if needed)
├─ Collect `clarifications` arrays from all agents
@@ -102,7 +106,24 @@ Phase 5: Conflict Clarification (if needed)

Phase 6: Status Update & Summary
├─ Update issue statuses to 'queued'
└─ Display queue summary (N queues), next step: /issue:execute
└─ Display new queue summary (N queues)

Phase 7: Active Queue Check & Decision (REQUIRED)
├─ Read queue index: ccw issue queue list --brief
├─ Get generated queue ID from agent output
├─ If NO active queue exists:
│  ├─ Set generated queue as active_queue_id
│  ├─ Update index.json
│  └─ Display: "Queue created and activated"
│
└─ If active queue exists with items:
   ├─ Display both queues to user
   ├─ Use AskUserQuestion to prompt:
   │  ├─ "Use new queue (keep existing)" → Set new as active, keep old inactive
   │  ├─ "Merge: add new items to existing" → Merge new → existing, delete new
   │  ├─ "Merge: add existing items to new" → Merge existing → new, archive old
   │  └─ "Cancel" → Delete new queue, keep existing active
   └─ Execute chosen action
```

## Implementation

@@ -306,6 +327,41 @@ ccw issue update <issue-id> --status queued
- Show unplanned issues (planned but NOT in queue)
- Show next step: `/issue:execute`

### Phase 7: Active Queue Check & Decision

**After the agent completes Phases 1-6, check for an active queue:**

```bash
ccw issue queue list --brief
```

**Decision:**
- If `active_queue_id` is null → `ccw issue queue switch <new-queue-id>` (activate new queue)
- If active queue exists → Use **AskUserQuestion** to prompt user

**AskUserQuestion:**
```javascript
AskUserQuestion({
  questions: [{
    question: "Active queue exists. How would you like to proceed?",
    header: "Queue Action",
    options: [
      { label: "Merge into existing queue", description: "Add new items to active queue, delete new queue" },
      { label: "Use new queue", description: "Switch to new queue, keep existing in history" },
      { label: "Cancel", description: "Delete new queue, keep existing active" }
    ],
    multiSelect: false
  }]
})
```

**Action Commands:**

| User Choice | Commands |
|-------------|----------|
| **Merge into existing** | `ccw issue queue merge <new-queue-id> --queue <active-queue-id>` then `ccw issue queue delete <new-queue-id>` |
| **Use new queue** | `ccw issue queue switch <new-queue-id>` |
| **Cancel** | `ccw issue queue delete <new-queue-id>` |

## Storage Structure (Queue History)

@@ -360,6 +416,9 @@ ccw issue update <issue-id> --status queued
| User cancels clarification | Abort queue formation |
| **index.json not updated** | Auto-fix: Set active_queue_id to new queue |
| **Queue file missing solutions** | Abort with error, agent must regenerate |
| **User cancels queue add** | Display message, return without changes |
| **Merge with empty source** | Skip merge, display warning |
| **All items duplicate** | Skip merge, display "All items already exist" |

## Quality Checklist

@@ -223,8 +223,8 @@ TASK:
MODE: analysis
CONTEXT: @src/**/*.controller.ts @src/**/*.routes.ts @src/**/*.dto.ts @src/**/middleware/**/*
EXPECTED: JSON format API structure analysis report with modules, endpoints, security schemes, and error codes
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Strict RESTful standards | Identify all public endpoints | Document output language: {lang}
" --tool gemini --mode analysis --cd {project_root}
CONSTRAINTS: Strict RESTful standards | Identify all public endpoints | Document output language: {lang}
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {project_root}
```

**Update swagger-planning-data.json** with analysis results:
@@ -387,7 +387,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
"step": 1,
"title": "Generate OpenAPI spec file",
"description": "Create complete swagger.yaml specification file",
"cli_prompt": "PURPOSE: Generate OpenAPI 3.0.3 specification file from analyzed API structure\nTASK:\n• Define openapi version: 3.0.3\n• Define info: title, description, version, contact, license\n• Define servers: development, staging, production environments\n• Define tags: organized by business modules\n• Define paths: all API endpoints with complete specifications\n• Define components: schemas, securitySchemes, parameters, responses\nMODE: write\nCONTEXT: @[api_analysis]\nEXPECTED: Complete swagger.yaml file following OpenAPI 3.0.3 specification\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(cat ~/.claude/workflows/cli-templates/prompts/documentation/swagger-api.txt) | Use {lang} for all descriptions | Strict RESTful standards",
"cli_prompt": "PURPOSE: Generate OpenAPI 3.0.3 specification file from analyzed API structure\nTASK:\n• Define openapi version: 3.0.3\n• Define info: title, description, version, contact, license\n• Define servers: development, staging, production environments\n• Define tags: organized by business modules\n• Define paths: all API endpoints with complete specifications\n• Define components: schemas, securitySchemes, parameters, responses\nMODE: write\nCONTEXT: @[api_analysis]\nEXPECTED: Complete swagger.yaml file following OpenAPI 3.0.3 specification\nCONSTRAINTS: Use {lang} for all descriptions | Strict RESTful standards\n--rule documentation-swagger-api",
"output": "swagger.yaml"
}
],
@@ -429,7 +429,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
{
"step": 1,
"title": "Generate authentication documentation",
"cli_prompt": "PURPOSE: Generate comprehensive authentication documentation for API security\nTASK:\n• Document authentication mechanism: JWT Bearer Token\n• Explain header format: Authorization: Bearer <token>\n• Describe token lifecycle: acquisition, refresh, expiration handling\n• Define permission levels: public, user, admin, super_admin\n• Document authentication failure responses: 401/403 error handling\nMODE: write\nCONTEXT: @[auth_patterns] @src/**/auth/**/* @src/**/guard/**/*\nEXPECTED: Complete authentication guide in {lang}\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Include code examples | Clear step-by-step instructions",
"cli_prompt": "PURPOSE: Generate comprehensive authentication documentation for API security\nTASK:\n• Document authentication mechanism: JWT Bearer Token\n• Explain header format: Authorization: Bearer <token>\n• Describe token lifecycle: acquisition, refresh, expiration handling\n• Define permission levels: public, user, admin, super_admin\n• Document authentication failure responses: 401/403 error handling\nMODE: write\nCONTEXT: @[auth_patterns] @src/**/auth/**/* @src/**/guard/**/*\nEXPECTED: Complete authentication guide in {lang}\nCONSTRAINTS: Include code examples | Clear step-by-step instructions\n--rule development-feature",
"output": "{auth_doc_name}"
}
],
@@ -464,7 +464,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
{
"step": 1,
"title": "Generate error code specification document",
"cli_prompt": "PURPOSE: Generate comprehensive error code specification for consistent API error handling\nTASK:\n• Define error response format: {code, message, details, timestamp}\n• Document authentication errors (AUTH_xxx): 401/403 series\n• Document parameter errors (PARAM_xxx): 400 series\n• Document business errors (BIZ_xxx): business logic errors\n• Document system errors (SYS_xxx): 500 series\n• For each error code: HTTP status, error message, possible causes, resolution suggestions\nMODE: write\nCONTEXT: @src/**/*.exception.ts @src/**/*.filter.ts\nEXPECTED: Complete error code specification in {lang} with tables and examples\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Include response examples | Clear categorization",
|
||||
"cli_prompt": "PURPOSE: Generate comprehensive error code specification for consistent API error handling\nTASK:\n• Define error response format: {code, message, details, timestamp}\n• Document authentication errors (AUTH_xxx): 401/403 series\n• Document parameter errors (PARAM_xxx): 400 series\n• Document business errors (BIZ_xxx): business logic errors\n• Document system errors (SYS_xxx): 500 series\n• For each error code: HTTP status, error message, possible causes, resolution suggestions\nMODE: write\nCONTEXT: @src/**/*.exception.ts @src/**/*.filter.ts\nEXPECTED: Complete error code specification in {lang} with tables and examples\nCONSTRAINTS: Include response examples | Clear categorization\n--rule development-feature",
|
||||
"output": "{error_doc_name}"
|
||||
}
|
||||
],
|
||||
@@ -523,7 +523,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
|
||||
"step": 1,
|
||||
"title": "Generate module API documentation",
|
||||
"description": "Generate complete API documentation for ${module_name}",
|
||||
"cli_prompt": "PURPOSE: Generate complete RESTful API documentation for ${module_name} module\nTASK:\n• Create module overview: purpose, use cases, prerequisites\n• Generate endpoint index: grouped by functionality\n• For each endpoint document:\n - Functional description: purpose and business context\n - Request method: GET/POST/PUT/DELETE\n - URL path: complete API path\n - Request headers: Authorization and other required headers\n - Path parameters: {id} and other path variables\n - Query parameters: pagination, filters, etc.\n - Request body: JSON Schema format\n - Response body: success and error responses\n - Field description table: type, required, example, description\n• Add usage examples: cURL, JavaScript, Python\n• Add version info: v1.0.0, last updated date\nMODE: write\nCONTEXT: @[module_endpoints] @[source_code]\nEXPECTED: Complete module API documentation in {lang} with all endpoints fully documented\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(cat ~/.claude/workflows/cli-templates/prompts/documentation/swagger-api.txt) | RESTful standards | Include all response codes",
|
||||
"cli_prompt": "PURPOSE: Generate complete RESTful API documentation for ${module_name} module\nTASK:\n• Create module overview: purpose, use cases, prerequisites\n• Generate endpoint index: grouped by functionality\n• For each endpoint document:\n - Functional description: purpose and business context\n - Request method: GET/POST/PUT/DELETE\n - URL path: complete API path\n - Request headers: Authorization and other required headers\n - Path parameters: {id} and other path variables\n - Query parameters: pagination, filters, etc.\n - Request body: JSON Schema format\n - Response body: success and error responses\n - Field description table: type, required, example, description\n• Add usage examples: cURL, JavaScript, Python\n• Add version info: v1.0.0, last updated date\nMODE: write\nCONTEXT: @[module_endpoints] @[source_code]\nEXPECTED: Complete module API documentation in {lang} with all endpoints fully documented\nCONSTRAINTS: RESTful standards | Include all response codes\n--rule documentation-swagger-api",
|
||||
"output": "${module_doc_name}"
|
||||
}
|
||||
],
|
||||
@@ -559,7 +559,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Generate API overview",
|
||||
"cli_prompt": "PURPOSE: Generate API overview document with navigation and quick start guide\nTASK:\n• Create introduction: system features, tech stack, version\n• Write quick start guide: authentication, first request example\n• Build module navigation: categorized links to all modules\n• Document environment configuration: development, staging, production\n• List SDKs and tools: client libraries, Postman collection\nMODE: write\nCONTEXT: @[all_module_docs] @.workflow/docs/${project_name}/api/swagger.yaml\nEXPECTED: Complete API overview in {lang} with navigation links\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Clear structure | Quick start focus",
|
||||
"cli_prompt": "PURPOSE: Generate API overview document with navigation and quick start guide\nTASK:\n• Create introduction: system features, tech stack, version\n• Write quick start guide: authentication, first request example\n• Build module navigation: categorized links to all modules\n• Document environment configuration: development, staging, production\n• List SDKs and tools: client libraries, Postman collection\nMODE: write\nCONTEXT: @[all_module_docs] @.workflow/docs/${project_name}/api/swagger.yaml\nEXPECTED: Complete API overview in {lang} with navigation links\nCONSTRAINTS: Clear structure | Quick start focus\n--rule development-feature",
|
||||
"output": "README.md"
|
||||
}
|
||||
],
|
||||
@@ -602,7 +602,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Generate test report",
|
||||
"cli_prompt": "PURPOSE: Generate comprehensive API test validation report\nTASK:\n• Document test environment configuration\n• Calculate endpoint coverage statistics\n• Report test results: pass/fail counts\n• Document boundary tests: parameter limits, null values, special characters\n• Document exception tests: auth failures, permission denied, resource not found\n• List issues found with recommendations\nMODE: write\nCONTEXT: @[swagger_spec]\nEXPECTED: Complete test report in {lang} with detailed results\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Include test cases | Clear pass/fail status",
|
||||
"cli_prompt": "PURPOSE: Generate comprehensive API test validation report\nTASK:\n• Document test environment configuration\n• Calculate endpoint coverage statistics\n• Report test results: pass/fail counts\n• Document boundary tests: parameter limits, null values, special characters\n• Document exception tests: auth failures, permission denied, resource not found\n• List issues found with recommendations\nMODE: write\nCONTEXT: @[swagger_spec]\nEXPECTED: Complete test report in {lang} with detailed results\nCONSTRAINTS: Include test cases | Clear pass/fail status\n--rule development-tests",
|
||||
"output": "{test_doc_name}"
|
||||
}
|
||||
],
|
||||
```diff
@@ -147,8 +147,8 @@ You are generating path-conditional rules for Claude Code.
 
 ## Instructions
 
-Read the agent prompt template for detailed instructions:
-$(cat ~/.claude/workflows/cli-templates/prompts/rules/tech-rules-agent-prompt.txt)
+Read the agent prompt template for detailed instructions.
+Use --rule rules-tech-rules-agent-prompt to load the template automatically.
 
 ## Execution Steps
 
```
```diff
@@ -424,6 +424,17 @@ CONTEXT_VARS:
 - **Agent execution failure**: Agent-specific retry with minimal dependencies
 - **Template loading issues**: Agent handles graceful degradation
 - **Synthesis conflicts**: Synthesis highlights disagreements without resolution
+- **Context overflow protection**: See below for automatic context management
+
+## Context Overflow Protection
+
+**Per-role limits**: See `conceptual-planning-agent.md` (< 3000 words main, < 2000 words sub-docs, max 5 sub-docs)
+
+**Synthesis protection**: If total analysis > 100KB, synthesis reads only `analysis.md` files (not sub-documents)
+
+**Recovery**: Check logs → reduce scope (--count 2) → use --summary-only → manual synthesis
+
+**Prevention**: Start with --count 3, use structured topic format, review output sizes before synthesis
 
 ## Reference Information
 
```
```diff
@@ -132,7 +132,7 @@ Scan and analyze workflow session directories:
 
 **Staleness criteria**:
 - Active sessions: No modification >7 days + no related git commits
-- Archives: >30 days old + no feature references in project.json
+- Archives: >30 days old + no feature references in project-tech.json
 - Lite-plan: >7 days old + plan.json not executed
 - Debug: >3 days old + issue not in recent commits
 
```
```diff
@@ -443,8 +443,8 @@ if (selectedCategories.includes('Sessions')) {
   }
 }
 
-// Update project.json if features referenced deleted sessions
-const projectPath = '.workflow/project.json'
+// Update project-tech.json if features referenced deleted sessions
+const projectPath = '.workflow/project-tech.json'
 if (fileExists(projectPath)) {
   const project = JSON.parse(Read(projectPath))
   const deletedPaths = new Set(results.deleted)
 
```
666 .claude/commands/workflow/debug-with-file.md (Normal file)

@@ -0,0 +1,666 @@
---
name: debug-with-file
description: Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and Gemini-assisted correction
argument-hint: "\"bug description or error message\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---

# Workflow Debug-With-File Command (/workflow:debug-with-file)

## Overview

Enhanced evidence-based debugging with a **documented exploration process**. Records how understanding evolves, consolidates insights, and uses Gemini to correct misunderstandings.

**Core workflow**: Explore → Document → Log → Analyze → Correct Understanding → Fix → Verify

**Key enhancements over /workflow:debug**:
- **understanding.md**: Timeline of exploration and learning
- **Gemini-assisted correction**: Validates and corrects hypotheses
- **Consolidation**: Simplifies proven-wrong understanding to avoid clutter
- **Learning retention**: Preserves what was learned, even from failed attempts

## Usage

```bash
/workflow:debug-with-file <BUG_DESCRIPTION>

# Arguments
<bug-description>    Bug description, error message, or stack trace (required)
```

## Execution Process

```
Session Detection:
├─ Check if debug session exists for this bug
├─ EXISTS + understanding.md exists → Continue mode
└─ NOT_FOUND → Explore mode

Explore Mode:
├─ Locate error source in codebase
├─ Document initial understanding in understanding.md
├─ Generate testable hypotheses with Gemini validation
├─ Add NDJSON logging instrumentation
└─ Output: Hypothesis list + await user reproduction

Analyze Mode:
├─ Parse debug.log, validate each hypothesis
├─ Use Gemini to analyze evidence and correct understanding
├─ Update understanding.md with:
│  ├─ New evidence
│  ├─ Corrected misunderstandings (strikethrough + correction)
│  └─ Consolidated current understanding
└─ Decision:
   ├─ Confirmed → Fix root cause
   ├─ Inconclusive → Add more logging, iterate
   └─ All rejected → Gemini-assisted new hypotheses

Fix & Cleanup:
├─ Apply fix based on confirmed hypothesis
├─ User verifies
├─ Document final understanding + lessons learned
├─ Remove debug instrumentation
└─ If not fixed → Return to Analyze mode
```

## Implementation

### Session Setup & Mode Detection

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)

const sessionId = `DBG-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.debug/${sessionId}`
const debugLogPath = `${sessionFolder}/debug.log`
const understandingPath = `${sessionFolder}/understanding.md`
const hypothesesPath = `${sessionFolder}/hypotheses.json`

// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const hasUnderstanding = sessionExists && fs.existsSync(understandingPath)
const logHasContent = sessionExists && fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0

const mode = logHasContent ? 'analyze' : (hasUnderstanding ? 'continue' : 'explore')

if (!sessionExists) {
  bash(`mkdir -p ${sessionFolder}`)
}
```
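For illustration, the slug and session-ID derivation behaves like this (a runnable sketch; the bug description and date are hypothetical):

```javascript
// Same slug logic as above, with a fixed date for reproducibility
const bugDescription = "TypeError: Cannot read property 'config' of undefined"
const bugSlug = bugDescription.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = '2025-01-21' // would be getUtc8ISOString().substring(0, 10) at runtime
const sessionId = `DBG-${bugSlug}-${dateStr}`
console.log(sessionId) // → DBG-typeerror-cannot-read-property-2025-01-21
```

Note the 30-character cap can truncate mid-word (or leave a trailing hyphen), which is acceptable for a folder name.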

---

### Explore Mode

**Step 1.1: Locate Error Source**

```javascript
// Extract keywords from bug description
const keywords = extractErrorKeywords(bug_description)

// Search codebase for error locations
const searchResults = []
for (const keyword of keywords) {
  const results = Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
  searchResults.push({ keyword, results })
}

// Identify affected files and functions
const affectedLocations = analyzeSearchResults(searchResults)
```
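`extractErrorKeywords` and `analyzeSearchResults` are left abstract above. A minimal sketch of what the keyword extraction might do (the heuristics here are an assumption for illustration, not part of the command spec):

```javascript
// Extract searchable tokens from an error message: error class names,
// quoted identifiers, and camelCase symbols (heuristic sketch)
function extractErrorKeywords(description) {
  const errorNames = description.match(/\b[A-Z][a-zA-Z]*Error\b/g) || []
  const quoted = [...description.matchAll(/'([^']+)'|"([^"]+)"/g)].map(m => m[1] || m[2])
  const camelCase = description.match(/\b[a-z]+[A-Z][a-zA-Z]*\b/g) || []
  return [...new Set([...errorNames, ...quoted, ...camelCase])]
}

extractErrorKeywords("TypeError: Cannot read property 'config' of undefined")
// → ["TypeError", "config"]
```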

**Step 1.2: Document Initial Understanding**

Create `understanding.md` with exploration timeline:

```markdown
# Understanding Document

**Session ID**: ${sessionId}
**Bug Description**: ${bug_description}
**Started**: ${getUtc8ISOString()}

---

## Exploration Timeline

### Iteration 1 - Initial Exploration (${timestamp})

#### Current Understanding

Based on bug description and initial code search:

- Error pattern: ${errorPattern}
- Affected areas: ${affectedLocations.map(l => l.file).join(', ')}
- Initial hypothesis: ${initialThoughts}

#### Evidence from Code Search

${searchResults.map(r => `
**Keyword: "${r.keyword}"**
- Found in: ${r.results.files.join(', ')}
- Key findings: ${r.insights}
`).join('\n')}

#### Next Steps

- Generate testable hypotheses
- Add instrumentation
- Await reproduction

---

## Current Consolidated Understanding

${initialConsolidatedUnderstanding}
```

**Step 1.3: Gemini-Assisted Hypothesis Generation**

```bash
ccw cli -p "
PURPOSE: Generate debugging hypotheses for: ${bug_description}
Success criteria: Testable hypotheses with clear evidence criteria

TASK:
• Analyze error pattern and code search results
• Identify 3-5 most likely root causes
• For each hypothesis, specify:
  - What might be wrong
  - What evidence would confirm/reject it
  - Where to add instrumentation
• Rank by likelihood

MODE: analysis

CONTEXT: @${sessionFolder}/understanding.md | Search results in understanding.md

EXPECTED:
- Structured hypothesis list (JSON format)
- Each hypothesis with: id, description, testable_condition, logging_point, evidence_criteria
- Likelihood ranking (1=most likely)

CONSTRAINTS: Focus on testable conditions
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```

Save Gemini output to `hypotheses.json`:

```json
{
  "iteration": 1,
  "timestamp": "2025-01-21T10:00:00+08:00",
  "hypotheses": [
    {
      "id": "H1",
      "description": "Data structure mismatch - expected key not present",
      "testable_condition": "Check if target key exists in dict",
      "logging_point": "file.py:func:42",
      "evidence_criteria": {
        "confirm": "data shows missing key",
        "reject": "key exists with valid value"
      },
      "likelihood": 1,
      "status": "pending"
    }
  ],
  "gemini_insights": "...",
  "corrected_assumptions": []
}
```

**Step 1.4: Add NDJSON Instrumentation**

For each hypothesis, add logging (same as the original debug command).

**Step 1.5: Update understanding.md**

Append hypothesis section:

```markdown
#### Hypotheses Generated (Gemini-Assisted)

${hypotheses.map(h => `
**${h.id}** (Likelihood: ${h.likelihood}): ${h.description}
- Logging at: ${h.logging_point}
- Testing: ${h.testable_condition}
- Evidence to confirm: ${h.evidence_criteria.confirm}
- Evidence to reject: ${h.evidence_criteria.reject}
`).join('\n')}

**Gemini Insights**: ${geminiInsights}
```

---

### Analyze Mode

**Step 2.1: Parse Debug Log**

```javascript
// Parse NDJSON log
const entries = Read(debugLogPath).split('\n')
  .filter(l => l.trim())
  .map(l => JSON.parse(l))

// Group entries by hypothesis id
const groupBy = (arr, key) =>
  arr.reduce((acc, e) => ((acc[e[key]] = acc[e[key]] || []).push(e), acc), {})
const byHypothesis = groupBy(entries, 'hid')
```

**Step 2.2: Gemini-Assisted Evidence Analysis**

```bash
ccw cli -p "
PURPOSE: Analyze debug log evidence to validate/correct hypotheses for: ${bug_description}
Success criteria: Clear verdict per hypothesis + corrected understanding

TASK:
• Parse log entries by hypothesis
• Evaluate evidence against expected criteria
• Determine verdict: confirmed | rejected | inconclusive
• Identify incorrect assumptions from previous understanding
• Suggest corrections to understanding

MODE: analysis

CONTEXT:
@${debugLogPath}
@${understandingPath}
@${hypothesesPath}

EXPECTED:
- Per-hypothesis verdict with reasoning
- Evidence summary
- List of incorrect assumptions with corrections
- Updated consolidated understanding
- Root cause if confirmed, or next investigation steps

CONSTRAINTS: Evidence-based reasoning only, no speculation
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```

**Step 2.3: Update Understanding with Corrections**

Append a new iteration to `understanding.md`:

```markdown
### Iteration ${n} - Evidence Analysis (${timestamp})

#### Log Analysis Results

${results.map(r => `
**${r.id}**: ${r.verdict.toUpperCase()}
- Evidence: ${JSON.stringify(r.evidence)}
- Reasoning: ${r.reason}
`).join('\n')}

#### Corrected Understanding

Previous misunderstandings identified and corrected:

${corrections.map(c => `
- ~~${c.wrong}~~ → ${c.corrected}
  - Why wrong: ${c.reason}
  - Evidence: ${c.evidence}
`).join('\n')}

#### New Insights

- ${newInsights.join('\n- ')}

#### Gemini Analysis

${geminiAnalysis}

${confirmedHypothesis ? `
#### Root Cause Identified

**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}

Evidence supporting this conclusion:
${confirmedHypothesis.supportingEvidence}
` : `
#### Next Steps

${nextSteps}
`}

---

## Current Consolidated Understanding (Updated)

${consolidatedUnderstanding}
```

**Step 2.4: Consolidate Understanding**

At the bottom of `understanding.md`, update the consolidated section:

- Remove or simplify proven-wrong assumptions
- Keep them in strikethrough for reference
- Focus on current valid understanding
- Avoid repeating details from the timeline

```markdown
## Current Consolidated Understanding

### What We Know

- ${validUnderstanding1}
- ${validUnderstanding2}

### What Was Disproven

- ~~Initial assumption: ${wrongAssumption}~~ (Evidence: ${disproofEvidence})

### Current Investigation Focus

${currentFocus}

### Remaining Questions

- ${openQuestion1}
- ${openQuestion2}
```

**Step 2.5: Update hypotheses.json**

```json
{
  "iteration": 2,
  "timestamp": "2025-01-21T10:15:00+08:00",
  "hypotheses": [
    {
      "id": "H1",
      "status": "rejected",
      "verdict_reason": "Evidence shows key exists with valid value",
      "evidence": {...}
    },
    {
      "id": "H2",
      "status": "confirmed",
      "verdict_reason": "Log data confirms timing issue",
      "evidence": {...}
    }
  ],
  "gemini_corrections": [
    {
      "wrong_assumption": "...",
      "corrected_to": "...",
      "reason": "..."
    }
  ]
}
```

---

### Fix & Verification

**Step 3.1: Apply Fix**

(Same as original debug command)

**Step 3.2: Document Resolution**

Append to `understanding.md`:

```markdown
### Iteration ${n} - Resolution (${timestamp})

#### Fix Applied

- Modified files: ${modifiedFiles.join(', ')}
- Fix description: ${fixDescription}
- Root cause addressed: ${rootCause}

#### Verification Results

${verificationResults}

#### Lessons Learned

What we learned from this debugging session:

1. ${lesson1}
2. ${lesson2}
3. ${lesson3}

#### Key Insights for Future

- ${insight1}
- ${insight2}
```

**Step 3.3: Cleanup**

Remove debug instrumentation (same as original command).

---

## Session Folder Structure

```
.workflow/.debug/DBG-{slug}-{date}/
├── debug.log          # NDJSON log (execution evidence)
├── understanding.md   # NEW: Exploration timeline + consolidated understanding
├── hypotheses.json    # NEW: Hypothesis history with verdicts
└── resolution.md      # Optional: Final summary
```

## Understanding Document Template

```markdown
# Understanding Document

**Session ID**: DBG-xxx-2025-01-21
**Bug Description**: [original description]
**Started**: 2025-01-21T10:00:00+08:00

---

## Exploration Timeline

### Iteration 1 - Initial Exploration (2025-01-21 10:00)

#### Current Understanding
...

#### Evidence from Code Search
...

#### Hypotheses Generated (Gemini-Assisted)
...

### Iteration 2 - Evidence Analysis (2025-01-21 10:15)

#### Log Analysis Results
...

#### Corrected Understanding
- ~~[wrong]~~ → [corrected]

#### Gemini Analysis
...

---

## Current Consolidated Understanding

### What We Know
- [valid understanding points]

### What Was Disproven
- ~~[disproven assumptions]~~

### Current Investigation Focus
[current focus]

### Remaining Questions
- [open questions]
```

## Iteration Flow

```
First Call (/workflow:debug-with-file "error"):
├─ No session exists → Explore mode
├─ Extract error keywords, search codebase
├─ Document initial understanding in understanding.md
├─ Use Gemini to generate hypotheses
├─ Add logging instrumentation
└─ Await user reproduction

After Reproduction (/workflow:debug-with-file "error"):
├─ Session exists + debug.log has content → Analyze mode
├─ Parse log, use Gemini to evaluate hypotheses
├─ Update understanding.md with:
│  ├─ Evidence analysis results
│  ├─ Corrected misunderstandings (strikethrough)
│  ├─ New insights
│  └─ Updated consolidated understanding
├─ Update hypotheses.json with verdicts
└─ Decision:
   ├─ Confirmed → Fix → Document resolution
   ├─ Inconclusive → Add logging, document next steps
   └─ All rejected → Gemini-assisted new hypotheses

Output:
├─ .workflow/.debug/DBG-{slug}-{date}/debug.log
├─ .workflow/.debug/DBG-{slug}-{date}/understanding.md (evolving document)
└─ .workflow/.debug/DBG-{slug}-{date}/hypotheses.json (history)
```

## Gemini Integration Points

### 1. Hypothesis Generation (Explore Mode)

**Purpose**: Generate evidence-based, testable hypotheses

**Prompt Pattern**:
```
PURPOSE: Generate debugging hypotheses + evidence criteria
TASK: Analyze error + code → testable hypotheses with clear pass/fail criteria
CONTEXT: @understanding.md (search results)
EXPECTED: JSON with hypotheses, likelihood ranking, evidence criteria
```

### 2. Evidence Analysis (Analyze Mode)

**Purpose**: Validate hypotheses and correct misunderstandings

**Prompt Pattern**:
```
PURPOSE: Analyze debug log evidence + correct understanding
TASK: Evaluate each hypothesis → identify wrong assumptions → suggest corrections
CONTEXT: @debug.log @understanding.md @hypotheses.json
EXPECTED: Verdicts + corrections + updated consolidated understanding
```

### 3. New Hypothesis Generation (After All Rejected)

**Purpose**: Generate new hypotheses based on what was disproven

**Prompt Pattern**:
```
PURPOSE: Generate new hypotheses given disproven assumptions
TASK: Review rejected hypotheses → identify knowledge gaps → new investigation angles
CONTEXT: @understanding.md (with disproven section) @hypotheses.json
EXPECTED: New hypotheses avoiding previously rejected paths
```

## Error Correction Mechanism

### Correction Format in understanding.md

```markdown
#### Corrected Understanding

- ~~Assumed dict key "config" was missing~~ → Key exists, but value is None
  - Why wrong: Only checked existence, not value validity
  - Evidence: H1 log shows {"config": null, "exists": true}

- ~~Thought error occurred in initialization~~ → Error happens during runtime update
  - Why wrong: Stack trace misread as init code
  - Evidence: H2 timestamp shows 30s after startup
```

### Consolidation Rules

When updating "Current Consolidated Understanding":

1. **Simplify disproven items**: Move to "What Was Disproven" with a single-line summary
2. **Keep valid insights**: Promote confirmed findings to "What We Know"
3. **Avoid duplication**: Don't repeat timeline details in the consolidated section
4. **Focus on current state**: What we know NOW, not the journey
5. **Preserve key corrections**: Keep important wrong→right transformations for learning

**Bad (cluttered)**:
```markdown
## Current Consolidated Understanding

In iteration 1 we thought X, but in iteration 2 we found Y, then in iteration 3...
Also we checked A and found B, and then we checked C...
```

**Good (consolidated)**:
```markdown
## Current Consolidated Understanding

### What We Know
- Error occurs during runtime update, not initialization
- Config value is None (not missing key)

### What Was Disproven
- ~~Initialization error~~ (Timing evidence)
- ~~Missing key hypothesis~~ (Key exists)

### Current Investigation Focus
Why is the config value None during update?
```

## Post-Completion Expansion

After completion, ask the user whether to expand the session into issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.

---

## Error Handling

| Situation | Action |
|-----------|--------|
| Empty debug.log | Verify reproduction triggered the code path |
| All hypotheses rejected | Use Gemini to generate new hypotheses based on disproven assumptions |
| Fix doesn't work | Document the failed fix attempt, iterate with refined understanding |
| >5 iterations | Review consolidated understanding, escalate to `/workflow:lite-fix` with full context |
| Gemini unavailable | Fall back to manual hypothesis generation, document without Gemini insights |
| Understanding too long | Consolidate aggressively, archive old iterations to a separate file |

## Comparison with /workflow:debug

| Feature | /workflow:debug | /workflow:debug-with-file |
|---------|-----------------|---------------------------|
| NDJSON logging | ✅ | ✅ |
| Hypothesis generation | Manual | Gemini-assisted |
| Exploration documentation | ❌ | ✅ understanding.md |
| Understanding evolution | ❌ | ✅ Timeline + corrections |
| Error correction | ❌ | ✅ Strikethrough + reasoning |
| Consolidated learning | ❌ | ✅ Current understanding section |
| Hypothesis history | ❌ | ✅ hypotheses.json |
| Gemini validation | ❌ | ✅ At key decision points |

## Usage Recommendations

Use `/workflow:debug-with-file` when:
- The bug is complex and requires multiple investigation rounds
- Learning from the debugging process is valuable
- The team needs to understand the debugging rationale
- The bug might recur, so documentation helps prevention

Use `/workflow:debug` when:
- The bug is simple and quick to fix
- It is a one-off issue
- Documentation overhead is not needed
||||
@@ -311,6 +311,12 @@ Output:
└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
```

## Post-Completion Expansion

After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, invoke `/issue:new "{summary} - {dimension}"`.

---

## Error Handling

| Situation | Action |
@@ -275,6 +275,10 @@ AskUserQuestion({
- **"Enter Review"**: Execute `/workflow:review`
- **"Complete Session"**: Execute `/workflow:session:complete`

### Post-Completion Expansion

After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, invoke `/issue:new "{summary} - {dimension}"`.

## Execution Strategy (IMPL_PLAN-Driven)

### Strategy Priority
@@ -108,11 +108,24 @@ Analyze project for workflow initialization and generate .workflow/project-tech.
2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)

## Task
Generate complete project-tech.json with:
- project_metadata: {name: ${projectName}, root_path: ${projectRoot}, initialized_at, updated_at}
- technology_analysis: {description, languages, frameworks, build_tools, test_frameworks, architecture, key_components, dependencies}
- development_status: ${regenerate ? 'preserve from backup' : '{completed_features: [], development_index: {feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}, statistics: {total_features: 0, total_sessions: 0, last_updated}}'}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp, analysis_mode}
Generate complete project-tech.json following the schema structure:
- project_name: "${projectName}"
- initialized_at: ISO 8601 timestamp
- overview: {
    description: "Brief project description",
    technology_stack: {
      languages: [{name, file_count, primary}],
      frameworks: ["string"],
      build_tools: ["string"],
      test_frameworks: ["string"]
    },
    architecture: {style, layers: [], patterns: []},
    key_components: [{name, path, description, importance}]
  }
- features: []
- development_index: ${regenerate ? 'preserve from backup' : '{feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}'}
- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated: ISO timestamp}'}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp: ISO timestamp, analysis_mode: "deep-scan"}

## Analysis Requirements

@@ -132,7 +145,7 @@ Generate complete project-tech.json with:
1. Structural scan: get_modules_by_depth.sh, find, wc -l
2. Semantic analysis: Gemini for patterns/architecture
3. Synthesis: Merge findings
4. ${regenerate ? 'Merge with preserved development_status from .workflow/project-tech.json.backup' : ''}
4. ${regenerate ? 'Merge with preserved development_index and statistics from .workflow/project-tech.json.backup' : ''}
5. Write JSON: Write('.workflow/project-tech.json', jsonContent)
6. Report: Return brief completion summary
@@ -181,16 +194,16 @@ console.log(`
✓ Project initialized successfully

## Project Overview
Name: ${projectTech.project_metadata.name}
Description: ${projectTech.technology_analysis.description}
Name: ${projectTech.project_name}
Description: ${projectTech.overview.description}

### Technology Stack
Languages: ${projectTech.technology_analysis.languages.map(l => l.name).join(', ')}
Frameworks: ${projectTech.technology_analysis.frameworks.join(', ')}
Languages: ${projectTech.overview.technology_stack.languages.map(l => l.name).join(', ')}
Frameworks: ${projectTech.overview.technology_stack.frameworks.join(', ')}

### Architecture
Style: ${projectTech.technology_analysis.architecture.style}
Components: ${projectTech.technology_analysis.key_components.length} core modules
Style: ${projectTech.overview.architecture.style}
Components: ${projectTech.overview.key_components.length} core modules

---
Files created:
@@ -81,6 +81,7 @@ AskUserQuestion({
options: [
  { label: "Skip", description: "No review" },
  { label: "Gemini Review", description: "Gemini CLI tool" },
  { label: "Codex Review", description: "Git-aware review (prompt OR --uncommitted)" },
  { label: "Agent Review", description: "Current agent review" }
]
}
@@ -171,10 +172,23 @@ Output:
**Operations**:
- Initialize result tracking for multi-execution scenarios
- Set up `previousExecutionResults` array for context continuity
- **In-Memory Mode**: Echo execution strategy from lite-plan for transparency

```javascript
// Initialize result tracking
previousExecutionResults = []

// In-Memory Mode: Echo execution strategy (transparency before execution)
if (executionContext) {
  console.log(`
📋 Execution Strategy (from lite-plan):
  Method: ${executionContext.executionMethod}
  Review: ${executionContext.codeReviewTool}
  Tasks: ${executionContext.planObject.tasks.length}
  Complexity: ${executionContext.planObject.complexity}
${executionContext.executorAssignments ? `  Assignments: ${JSON.stringify(executionContext.executorAssignments)}` : ''}
`)
}
```

### Step 2: Task Grouping & Batch Creation
@@ -392,16 +406,8 @@ ccw cli -p "${buildExecutionPrompt(batch)}" --tool codex --mode write

**Execution with fixed IDs** (predictable ID pattern):
```javascript
// Launch CLI in foreground (NOT background)
// Timeout based on complexity: Low=40min, Medium=60min, High=100min
const timeoutByComplexity = {
  "Low": 2400000,    // 40 minutes
  "Medium": 3600000, // 60 minutes
  "High": 6000000    // 100 minutes
}

// Launch CLI in background, wait for task hook callback
// Generate fixed execution ID: ${sessionId}-${groupId}
// This enables predictable ID lookup without relying on resume context chains
const sessionId = executionContext?.session?.id || 'standalone'
const fixedExecutionId = `${sessionId}-${batch.groupId}` // e.g., "implement-auth-2025-12-13-P1"

@@ -413,16 +419,12 @@ const cli_command = previousCliId
  ? `ccw cli -p "${buildExecutionPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId} --resume ${previousCliId}`
  : `ccw cli -p "${buildExecutionPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId}`

bash_result = Bash(
// Execute in background, stop output and wait for task hook callback
Bash(
  command=cli_command,
  timeout=timeoutByComplexity[planObject.complexity] || 3600000
  run_in_background=true
)

// Execution ID is now predictable: ${fixedExecutionId}
// Can also extract from output: "ID: implement-auth-2025-12-13-P1"
const cliExecutionId = fixedExecutionId

// Update TodoWrite when execution completes
// STOP HERE - CLI executes in background, task hook will notify on completion
```

**Resume on Failure** (with fixed ID):
@@ -469,7 +471,8 @@ Progress tracked at batch level (not individual task level). Icons: ⚡ (paralle
**Operations**:
- Agent Review: Current agent performs direct review
- Gemini Review: Execute gemini CLI with review prompt
- Custom tool: Execute specified CLI tool (qwen, codex, etc.)
- Codex Review: Two options - (A) with prompt for complex reviews, (B) `--uncommitted` flag only for quick reviews
- Custom tool: Execute specified CLI tool (qwen, etc.)

**Unified Review Template** (All tools use same standard):

@@ -485,7 +488,7 @@ TASK: • Verify plan acceptance criteria fulfillment • Analyze code quality
MODE: analysis
CONTEXT: @**/* @{plan.json} [@{exploration.json}] | Memory: Review lite-execute changes against plan requirements
EXPECTED: Quality assessment with acceptance criteria verification, issue identification, and recommendations. Explicitly check each acceptance criterion from plan.json tasks.
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on plan acceptance criteria and plan adherence | analysis=READ-ONLY
CONSTRAINTS: Focus on plan acceptance criteria and plan adherence | analysis=READ-ONLY
```
**Tool-Specific Execution** (Apply shared prompt template above):
@@ -504,8 +507,17 @@ ccw cli -p "[Shared Prompt Template with artifacts]" --tool gemini --mode analys
ccw cli -p "[Shared Prompt Template with artifacts]" --tool qwen --mode analysis
# Same prompt as Gemini, different execution engine

# Method 4: Codex Review (autonomous)
ccw cli -p "[Verify plan acceptance criteria at ${plan.json}]" --tool codex --mode write
# Method 4: Codex Review (git-aware) - Two mutually exclusive options:

# Option A: With custom prompt (reviews uncommitted by default)
ccw cli -p "[Shared Prompt Template with artifacts]" --tool codex --mode review
# Use for complex reviews with specific focus areas

# Option B: Target flag only (no prompt allowed)
ccw cli --tool codex --mode review --uncommitted
# Quick review of uncommitted changes without custom instructions

# ⚠️ IMPORTANT: -p prompt and target flags (--uncommitted/--base/--commit) are MUTUALLY EXCLUSIVE
```

**Multi-Round Review with Fixed IDs**:
@@ -531,11 +543,11 @@ if (hasUnresolvedIssues(reviewResult)) {

**Trigger**: After all executions complete (regardless of code review)

**Skip Condition**: Skip if `.workflow/project.json` does not exist
**Skip Condition**: Skip if `.workflow/project-tech.json` does not exist

**Operations**:
```javascript
const projectJsonPath = '.workflow/project.json'
const projectJsonPath = '.workflow/project-tech.json'
if (!fileExists(projectJsonPath)) return // Silent skip

const projectJson = JSON.parse(Read(projectJsonPath))
@@ -664,6 +676,10 @@ Collected after each execution call completes:

Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.

## Post-Completion Expansion

After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, invoke `/issue:new "{summary} - {dimension}"`.

**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.

**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:
.claude/commands/workflow/lite-lite-lite.md (new file, 461 lines)
@@ -0,0 +1,461 @@
---
name: workflow:lite-lite-lite
description: Ultra-lightweight multi-tool analysis and direct execution. No artifacts for simple tasks; auto-creates planning docs in .workflow/.scratchpad/ for complex tasks. Auto tool selection based on task analysis, user-driven iteration via AskUser.
argument-hint: "<task description>"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), Write(*), mcp__ace-tool__search_context(*), mcp__ccw-tools__write_file(*)
---

# Ultra-Lite Multi-Tool Workflow

## Quick Start

```bash
/workflow:lite-lite-lite "Fix the login bug"
/workflow:lite-lite-lite "Refactor payment module for multi-gateway support"
```

**Core Philosophy**: Minimal friction, maximum velocity. Simple tasks = no artifacts. Complex tasks = lightweight planning doc in `.workflow/.scratchpad/`.

## Overview

**Complexity-aware workflow**: Clarify → Assess Complexity → Select Tools → Multi-Mode Analysis → Decision → Direct Execution

**vs multi-cli-plan**: No IMPL_PLAN.md, plan.json, or synthesis.json - state lives in memory, or in a lightweight scratchpad doc for complex tasks.
## Execution Flow

```
Phase 1: Clarify Requirements → AskUser for missing details
Phase 1.5: Assess Complexity → Determine if planning doc needed
Phase 2: Select Tools (CLI → Mode → Agent) → 3-step selection
Phase 3: Multi-Mode Analysis → Execute with --resume chaining
Phase 4: User Decision → Execute / Refine / Change / Cancel
Phase 5: Direct Execution → No plan files (simple) or scratchpad doc (complex)
```

## Phase 1: Clarify Requirements

```javascript
const taskDescription = $ARGUMENTS

if (taskDescription.length < 20 || isAmbiguous(taskDescription)) {
  AskUserQuestion({
    questions: [{
      question: "Please provide more details: target files/modules, expected behavior, constraints?",
      header: "Details",
      options: [
        { label: "I'll provide more", description: "Add more context" },
        { label: "Continue analysis", description: "Let tools explore autonomously" }
      ],
      multiSelect: false
    }]
  })
}

// Optional: Quick ACE Context for complex tasks
mcp__ace-tool__search_context({
  project_root_path: process.cwd(),
  query: `${taskDescription} implementation patterns`
})
```

## Phase 1.5: Assess Complexity

| Level | Creates Plan Doc | Trigger Keywords |
|-------|------------------|------------------|
| **simple** | ❌ | (default) |
| **moderate** | ✅ | module, system, service, integration, multiple |
| **complex** | ✅ | refactor, migrate, security, auth, payment, database |

```javascript
// Complexity detection (after ACE query)
const isComplex = /refactor|migrate|security|auth|payment|database/i.test(taskDescription)
const isModerate = /module|system|service|integration|multiple/i.test(taskDescription) || aceContext?.relevant_files?.length > 2

if (isComplex || isModerate) {
  const planPath = `.workflow/.scratchpad/lite3-${taskSlug}-${dateStr}.md`
  // Create planning doc with: Task, Status, Complexity, Analysis Summary, Execution Plan, Progress Log
}
```

## Phase 2: Select Tools

### Tool Definitions

**CLI Tools** (from cli-tools.json):
```javascript
const cliConfig = JSON.parse(Read("~/.claude/cli-tools.json"))
const cliTools = Object.entries(cliConfig.tools)
  .filter(([_, config]) => config.enabled)
  .map(([name, config]) => ({
    name, type: 'cli',
    tags: config.tags || [],
    model: config.primaryModel,
    toolType: config.type // builtin, cli-wrapper, api-endpoint
  }))
```

**Sub Agents**:

| Agent | Strengths | canExecute |
|-------|-----------|------------|
| **code-developer** | Code implementation, test writing | ✅ |
| **Explore** | Fast code exploration, pattern discovery | ❌ |
| **cli-explore-agent** | Dual-source analysis (Bash+CLI) | ❌ |
| **cli-discuss-agent** | Multi-CLI collaboration, cross-verification | ❌ |
| **debug-explore-agent** | Hypothesis-driven debugging | ❌ |
| **context-search-agent** | Multi-layer file discovery, dependency analysis | ❌ |
| **test-fix-agent** | Test execution, failure diagnosis, code fixing | ✅ |
| **universal-executor** | General execution, multi-domain adaptation | ✅ |

**Analysis Modes**:

| Mode | Pattern | Use Case | minCLIs |
|------|---------|----------|---------|
| **Parallel** | `A \|\| B \|\| C → Aggregate` | Fast multi-perspective | 1+ |
| **Sequential** | `A → B(resume) → C(resume)` | Incremental deepening | 2+ |
| **Collaborative** | `A → B → A → B → Synthesize` | Multi-round refinement | 2+ |
| **Debate** | `A(propose) → B(challenge) → A(defend)` | Adversarial validation | 2 |
| **Challenge** | `A(analyze) → B(challenge)` | Find flaws and risks | 2 |

### Three-Step Selection Flow

```javascript
// Step 1: Select CLIs (multiSelect)
AskUserQuestion({
  questions: [{
    question: "Select CLI tools for analysis (1-3 for collaboration modes)",
    header: "CLI Tools",
    options: cliTools.map(cli => ({
      label: cli.name,
      description: cli.tags.length > 0 ? cli.tags.join(', ') : cli.model || 'general'
    })),
    multiSelect: true
  }]
})

// Step 2: Select Mode (filtered by CLI count)
const availableModes = analysisModes.filter(m => selectedCLIs.length >= m.minCLIs)
AskUserQuestion({
  questions: [{
    question: "Select analysis mode",
    header: "Mode",
    options: availableModes.map(m => ({
      label: m.label,
      description: `${m.description} [${m.pattern}]`
    })),
    multiSelect: false
  }]
})

// Step 3: Select Agent for execution
AskUserQuestion({
  questions: [{
    question: "Select Sub Agent for execution",
    header: "Agent",
    options: agents.map(a => ({ label: a.name, description: a.strength })),
    multiSelect: false
  }]
})

// Confirm selection
AskUserQuestion({
  questions: [{
    question: "Confirm selection?",
    header: "Confirm",
    options: [
      { label: "Confirm and continue", description: `${selectedMode.label} with ${selectedCLIs.length} CLIs` },
      { label: "Re-select CLIs", description: "Choose different CLI tools" },
      { label: "Re-select Mode", description: "Choose different analysis mode" },
      { label: "Re-select Agent", description: "Choose different Sub Agent" }
    ],
    multiSelect: false
  }]
})
```

## Phase 3: Multi-Mode Analysis

### Universal CLI Prompt Template

```javascript
// Unified prompt builder - used by all modes
function buildPrompt({ purpose, tasks, expected, rules, taskDescription }) {
  return `
PURPOSE: ${purpose}: ${taskDescription}
TASK: ${tasks.map(t => `• ${t}`).join(' ')}
MODE: analysis
CONTEXT: @**/*
EXPECTED: ${expected}
CONSTRAINTS: ${rules}
`
}

// Execute CLI with prompt
function execCLI(cli, prompt, options = {}) {
  const { resume, background = false } = options
  const resumeFlag = resume ? `--resume ${resume}` : ''
  return Bash({
    command: `ccw cli -p "${prompt}" --tool ${cli.name} --mode analysis ${resumeFlag}`,
    run_in_background: background
  })
}
```

### Prompt Presets by Role

| Role | PURPOSE | TASKS | EXPECTED | RULES |
|------|---------|-------|----------|-------|
| **initial** | Initial analysis | Identify files, Analyze approach, List changes | Root cause, files, changes, risks | Focus on actionable insights |
| **extend** | Build on previous | Review previous, Extend, Add insights | Extended analysis building on findings | Build incrementally, avoid repetition |
| **synthesize** | Refine and synthesize | Review, Identify gaps, Synthesize | Refined synthesis with new perspectives | Add value, not repetition |
| **propose** | Propose comprehensive analysis | Analyze thoroughly, Propose solution, State assumptions | Well-reasoned proposal with trade-offs | Be clear about assumptions |
| **challenge** | Challenge and stress-test | Identify weaknesses, Question assumptions, Suggest alternatives | Critique with counter-arguments | Be adversarial but constructive |
| **defend** | Respond to challenges | Address challenges, Defend valid aspects, Propose refined solution | Refined proposal incorporating feedback | Be open to criticism, synthesize |
| **criticize** | Find flaws ruthlessly | Find logical flaws, Identify edge cases, Rate criticisms | Critique with severity: [CRITICAL]/[HIGH]/[MEDIUM]/[LOW] | Be ruthlessly critical |

```javascript
const PROMPTS = {
  initial: { purpose: 'Initial analysis', tasks: ['Identify affected files', 'Analyze implementation approach', 'List specific changes'], expected: 'Root cause, files to modify, key changes, risks', rules: 'Focus on actionable insights' },
  extend: { purpose: 'Build on previous analysis', tasks: ['Review previous findings', 'Extend analysis', 'Add new insights'], expected: 'Extended analysis building on previous', rules: 'Build incrementally, avoid repetition' },
  synthesize: { purpose: 'Refine and synthesize', tasks: ['Review previous', 'Identify gaps', 'Add insights', 'Synthesize findings'], expected: 'Refined synthesis with new perspectives', rules: 'Build collaboratively, add value' },
  propose: { purpose: 'Propose comprehensive analysis', tasks: ['Analyze thoroughly', 'Propose solution', 'State assumptions clearly'], expected: 'Well-reasoned proposal with trade-offs', rules: 'Be clear about assumptions' },
  challenge: { purpose: 'Challenge and stress-test', tasks: ['Identify weaknesses', 'Question assumptions', 'Suggest alternatives', 'Highlight overlooked risks'], expected: 'Constructive critique with counter-arguments', rules: 'Be adversarial but constructive' },
  defend: { purpose: 'Respond to challenges', tasks: ['Address each challenge', 'Defend valid aspects', 'Acknowledge valid criticisms', 'Propose refined solution'], expected: 'Refined proposal incorporating alternatives', rules: 'Be open to criticism, synthesize best ideas' },
  criticize: { purpose: 'Stress-test and find weaknesses', tasks: ['Find logical flaws', 'Identify missed edge cases', 'Propose alternatives', 'Rate criticisms (High/Medium/Low)'], expected: 'Detailed critique with severity ratings', rules: 'Be ruthlessly critical, find every flaw' }
}
```

### Mode Implementations

```javascript
// Parallel: All CLIs run simultaneously
async function executeParallel(clis, task) {
  return await Promise.all(clis.map(cli =>
    execCLI(cli, buildPrompt({ ...PROMPTS.initial, taskDescription: task }), { background: true })
  ))
}

// Sequential: Each CLI builds on previous via --resume
async function executeSequential(clis, task) {
  const results = []
  let prevId = null
  for (const cli of clis) {
    const preset = prevId ? PROMPTS.extend : PROMPTS.initial
    const result = await execCLI(cli, buildPrompt({ ...preset, taskDescription: task }), { resume: prevId })
    results.push(result)
    prevId = extractSessionId(result)
  }
  return results
}

// Collaborative: Multi-round synthesis
async function executeCollaborative(clis, task, rounds = 2) {
  const results = []
  let prevId = null
  for (let r = 0; r < rounds; r++) {
    for (const cli of clis) {
      const preset = !prevId ? PROMPTS.initial : PROMPTS.synthesize
      const result = await execCLI(cli, buildPrompt({ ...preset, taskDescription: task }), { resume: prevId })
      results.push({ cli: cli.name, round: r, result })
      prevId = extractSessionId(result)
    }
  }
  return results
}

// Debate: Propose → Challenge → Defend
async function executeDebate(clis, task) {
  const [cliA, cliB] = clis
  const results = []

  const propose = await execCLI(cliA, buildPrompt({ ...PROMPTS.propose, taskDescription: task }))
  results.push({ phase: 'propose', cli: cliA.name, result: propose })

  const challenge = await execCLI(cliB, buildPrompt({ ...PROMPTS.challenge, taskDescription: task }), { resume: extractSessionId(propose) })
  results.push({ phase: 'challenge', cli: cliB.name, result: challenge })

  const defend = await execCLI(cliA, buildPrompt({ ...PROMPTS.defend, taskDescription: task }), { resume: extractSessionId(challenge) })
  results.push({ phase: 'defend', cli: cliA.name, result: defend })

  return results
}

// Challenge: Analyze → Criticize
async function executeChallenge(clis, task) {
  const [cliA, cliB] = clis
  const results = []

  const analyze = await execCLI(cliA, buildPrompt({ ...PROMPTS.initial, taskDescription: task }))
  results.push({ phase: 'analyze', cli: cliA.name, result: analyze })

  const criticize = await execCLI(cliB, buildPrompt({ ...PROMPTS.criticize, taskDescription: task }), { resume: extractSessionId(analyze) })
  results.push({ phase: 'challenge', cli: cliB.name, result: criticize })

  return results
}
```

### Mode Router & Result Aggregation

```javascript
async function executeAnalysis(mode, clis, taskDescription) {
  switch (mode.name) {
    case 'parallel': return await executeParallel(clis, taskDescription)
    case 'sequential': return await executeSequential(clis, taskDescription)
    case 'collaborative': return await executeCollaborative(clis, taskDescription)
    case 'debate': return await executeDebate(clis, taskDescription)
    case 'challenge': return await executeChallenge(clis, taskDescription)
  }
}

function aggregateResults(mode, results) {
  const base = { mode: mode.name, pattern: mode.pattern, tools_used: results.map(r => r.cli || 'unknown') }

  switch (mode.name) {
    case 'parallel':
      return { ...base, findings: results.map(parseOutput), consensus: findCommonPoints(results), divergences: findDifferences(results) }
    case 'sequential':
      return { ...base, evolution: results.map((r, i) => ({ step: i + 1, analysis: parseOutput(r) })), finalAnalysis: parseOutput(results.at(-1)) }
    case 'collaborative':
      return { ...base, rounds: groupByRound(results), synthesis: extractSynthesis(results.at(-1)) }
    case 'debate':
      return { ...base, proposal: parseOutput(results.find(r => r.phase === 'propose')?.result),
        challenges: parseOutput(results.find(r => r.phase === 'challenge')?.result),
        resolution: parseOutput(results.find(r => r.phase === 'defend')?.result), confidence: calculateDebateConfidence(results) }
    case 'challenge':
      return { ...base, originalAnalysis: parseOutput(results.find(r => r.phase === 'analyze')?.result),
        critiques: parseCritiques(results.find(r => r.phase === 'challenge')?.result), riskScore: calculateRiskScore(results) }
  }
}

// If planPath exists: update Analysis Summary & Execution Plan sections
```

## Phase 4: User Decision

```javascript
function presentSummary(analysis) {
  console.log(`## Analysis Result\n**Mode**: ${analysis.mode} (${analysis.pattern})\n**Tools**: ${analysis.tools_used.join(' → ')}`)

  switch (analysis.mode) {
    case 'parallel':
      console.log(`### Consensus\n${analysis.consensus.map(c => `- ${c}`).join('\n')}\n### Divergences\n${analysis.divergences.map(d => `- ${d}`).join('\n')}`)
      break
    case 'sequential':
      console.log(`### Evolution\n${analysis.evolution.map(e => `**Step ${e.step}**: ${e.analysis.summary}`).join('\n')}\n### Final\n${analysis.finalAnalysis.summary}`)
      break
    case 'collaborative':
      console.log(`### Rounds\n${Object.entries(analysis.rounds).map(([r, a]) => `**Round ${r}**: ${a.map(x => x.cli).join(' + ')}`).join('\n')}\n### Synthesis\n${analysis.synthesis}`)
      break
    case 'debate':
      console.log(`### Debate\n**Proposal**: ${analysis.proposal.summary}\n**Challenges**: ${analysis.challenges.points?.length || 0} points\n**Resolution**: ${analysis.resolution.summary}\n**Confidence**: ${analysis.confidence}%`)
      break
    case 'challenge':
      console.log(`### Challenge\n**Original**: ${analysis.originalAnalysis.summary}\n**Critiques**: ${analysis.critiques.length} issues\n${analysis.critiques.map(c => `- [${c.severity}] ${c.description}`).join('\n')}\n**Risk Score**: ${analysis.riskScore}/100`)
      break
  }
}

AskUserQuestion({
  questions: [{
    question: "How to proceed?",
    header: "Next Step",
    options: [
      { label: "Execute directly", description: "Implement immediately" },
      { label: "Refine analysis", description: "Add constraints, re-analyze" },
      { label: "Change tools", description: "Different tool combination" },
      { label: "Cancel", description: "End workflow" }
    ],
    multiSelect: false
  }]
})
// If planPath exists: record decision to Decisions Made table
// Routing: Execute → Phase 5 | Refine → Phase 3 | Change → Phase 2 | Cancel → End
```

## Phase 5: Direct Execution

```javascript
// Simple tasks: No artifacts | Complex tasks: Update scratchpad doc
const executionAgents = agents.filter(a => a.canExecute)
const executionTool = selectedAgent.canExecute ? selectedAgent : selectedCLIs[0]

if (executionTool.type === 'agent') {
  Task({
    subagent_type: executionTool.name,
    run_in_background: false,
    description: `Execute: ${taskDescription.slice(0, 30)}`,
    prompt: `## Task\n${taskDescription}\n\n## Analysis Results\n${JSON.stringify(aggregatedAnalysis, null, 2)}\n\n## Instructions\n1. Apply changes to identified files\n2. Follow recommended approach\n3. Handle identified risks\n4. Verify changes work correctly`
  })
} else {
  Bash({
    command: `ccw cli -p "
PURPOSE: Implement solution: ${taskDescription}
TASK: ${extractedTasks.join(' • ')}
MODE: write
CONTEXT: @${affectedFiles.join(' @')}
EXPECTED: Working implementation with all changes applied
CONSTRAINTS: Follow existing patterns
" --tool ${executionTool.name} --mode write`,
    run_in_background: false
  })
}
// If planPath exists: update Status to completed/failed, append to Progress Log
```
|
||||
|
||||
## TodoWrite Structure

```javascript
TodoWrite({ todos: [
  { content: "Phase 1: Clarify requirements", status: "in_progress", activeForm: "Clarifying requirements" },
  { content: "Phase 1.5: Assess complexity", status: "pending", activeForm: "Assessing complexity" },
  { content: "Phase 2: Select tools", status: "pending", activeForm: "Selecting tools" },
  { content: "Phase 3: Multi-mode analysis", status: "pending", activeForm: "Running analysis" },
  { content: "Phase 4: User decision", status: "pending", activeForm: "Awaiting decision" },
  { content: "Phase 5: Direct execution", status: "pending", activeForm: "Executing" }
]})
```

## Iteration Patterns

| Pattern | Flow |
|---------|------|
| **Direct** | Phase 1 → 2 → 3 → 4(execute) → 5 |
| **Refinement** | Phase 3 → 4(refine) → 3 → 4 → 5 |
| **Tool Adjust** | Phase 2(adjust) → 3 → 4 → 5 |

## Error Handling

| Error | Resolution |
|-------|------------|
| CLI timeout | Retry with secondary model |
| No enabled tools | Ask user to enable tools in cli-tools.json |
| Task unclear | Default to first CLI + code-developer |
| Ambiguous task | Force clarification via AskUser |
| Execution fails | Present error, ask user for direction |
| Plan doc write fails | Continue without doc (degrade to zero-artifact mode) |
| Scratchpad dir missing | Auto-create `.workflow/.scratchpad/` |

## Comparison with multi-cli-plan

| Aspect | lite-lite-lite | multi-cli-plan |
|--------|----------------|----------------|
| **Artifacts** | Conditional (scratchpad doc for complex tasks) | Always (IMPL_PLAN.md, plan.json, synthesis.json) |
| **Session** | Stateless (--resume chaining) | Persistent session folder |
| **Tool Selection** | 3-step (CLI → Mode → Agent) | Config-driven fixed tools |
| **Analysis Modes** | 5 modes with --resume | Fixed synthesis rounds |
| **Complexity** | Auto-detected (simple/moderate/complex) | Assumed complex |
| **Best For** | Quick analysis, simple-to-moderate tasks | Complex multi-step implementations |

## Post-Completion Expansion

After completion, ask the user whether to expand the work into follow-up issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.

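As a sketch, the expansion step above amounts to building one `/issue:new` command per selected dimension (the helper name is illustrative):

```javascript
// Illustrative helper: build /issue:new commands for each selected dimension.
function expansionCommands(summary, dimensions) {
  return dimensions.map(d => `/issue:new "${summary} - ${d}"`);
}

console.log(expansionCommands('Add dark mode', ['test', 'doc']));
// one command per selected dimension
```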
## Related Commands

```bash
/workflow:multi-cli-plan "complex task"   # Full planning workflow
/workflow:lite-plan "task"                # Single CLI planning
/workflow:lite-execute --in-memory        # Direct execution
```

@@ -497,6 +497,7 @@ ${plan.tasks.map((t, i) => `${i+1}. ${t.title} (${t.file})`).join('\n')}

**Step 4.2: Collect Confirmation**
```javascript
// Note: Execution "Other" option allows specifying CLI tools from ~/.claude/cli-tools.json
AskUserQuestion({
  questions: [
    {
@@ -524,8 +525,9 @@ AskUserQuestion({
      header: "Review",
      multiSelect: false,
      options: [
        { label: "Gemini Review", description: "Gemini CLI review" },
        { label: "Codex Review", description: "Git-aware review (prompt OR --uncommitted)" },
        { label: "Agent Review", description: "@code-reviewer agent" },
        { label: "Skip", description: "No review" }
      ]
    }

`.claude/commands/workflow/multi-cli-plan.md` (new file, 568 lines)
@@ -0,0 +1,568 @@

---
name: workflow:multi-cli-plan
description: Multi-CLI collaborative planning workflow with ACE context gathering and iterative cross-verification. Uses cli-discuss-agent for Gemini+Codex+Claude analysis to converge on optimal execution plan.
argument-hint: "<task description> [--max-rounds=3] [--tools=gemini,codex] [--mode=parallel|serial]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), Write(*), mcp__ace-tool__search_context(*)
---

# Multi-CLI Collaborative Planning Command

## Quick Start

```bash
# Basic usage
/workflow:multi-cli-plan "Implement user authentication"

# With options
/workflow:multi-cli-plan "Add dark mode support" --max-rounds=3
/workflow:multi-cli-plan "Refactor payment module" --tools=gemini,codex,claude
/workflow:multi-cli-plan "Fix memory leak" --mode=serial
```

**Context Source**: ACE semantic search + Multi-CLI analysis
**Output Directory**: `.workflow/.multi-cli-plan/{session-id}/`
**Default Max Rounds**: 3 (convergence may complete earlier)
**CLI Tools**: @cli-discuss-agent (analysis), @cli-lite-planning-agent (plan generation)
**Execution**: Auto-hands off to `/workflow:lite-execute --in-memory` after plan approval

## What & Why

### Core Concept

Multi-CLI collaborative planning with a **three-phase architecture**: ACE context gathering → iterative multi-CLI discussion → plan generation. The orchestrator delegates analysis to agents and handles only user decisions and session management.

**Process**:
- **Phase 1**: ACE semantic search gathers codebase context
- **Phase 2**: cli-discuss-agent orchestrates Gemini/Codex/Claude for cross-verified analysis
- **Phase 3-5**: User decision → Plan generation → Execution handoff

**vs Single-CLI Planning**:
- **Single**: One model perspective, potential blind spots
- **Multi-CLI**: Cross-verification catches inconsistencies, builds consensus on solutions

### Value Proposition

1. **Multi-Perspective Analysis**: Gemini + Codex + Claude analyze from different angles
2. **Cross-Verification**: Identify agreements/disagreements, build confidence
3. **User-Driven Decisions**: Every round ends with a user decision point
4. **Iterative Convergence**: Progressive refinement until consensus is reached

### Orchestrator Boundary (CRITICAL)

- **ONLY command** for multi-CLI collaborative planning
- Manages: Session state, user decisions, agent delegation, phase transitions
- Delegates: CLI execution to @cli-discuss-agent, plan generation to @cli-lite-planning-agent

### Execution Flow

```
Phase 1: Context Gathering
└─ ACE semantic search, extract keywords, build context package

Phase 2: Multi-CLI Discussion (Iterative, via @cli-discuss-agent)
├─ Round N: Agent executes Gemini + Codex + Claude
├─ Cross-verify findings, synthesize solutions
├─ Write synthesis.json to rounds/{N}/
└─ Loop until convergence or max rounds

Phase 3: Present Options
└─ Display solutions with trade-offs from agent output

Phase 4: User Decision
├─ Select solution approach
├─ Select execution method (Agent/Codex/Auto)
├─ Select code review tool (Skip/Gemini/Codex/Agent)
└─ Route:
   ├─ Approve → Phase 5
   ├─ Need More Analysis → Return to Phase 2
   └─ Cancel → Save session

Phase 5: Plan Generation & Execution Handoff
├─ Generate plan.json (via @cli-lite-planning-agent)
├─ Build executionContext with user selections
└─ Hand off to /workflow:lite-execute --in-memory
```

### Agent Roles

| Agent | Responsibility |
|-------|---------------|
| **Orchestrator** | Session management, ACE context, user decisions, phase transitions, executionContext assembly |
| **@cli-discuss-agent** | Multi-CLI execution (Gemini/Codex/Claude), cross-verification, solution synthesis, synthesis.json output |
| **@cli-lite-planning-agent** | Task decomposition, plan.json generation following schema |

## Core Responsibilities

### Phase 1: Context Gathering

**Session Initialization**:
```javascript
const sessionId = `MCP-${taskSlug}-${date}`
const sessionFolder = `.workflow/.multi-cli-plan/${sessionId}`
Bash(`mkdir -p ${sessionFolder}/rounds`)
```

**ACE Context Queries**:
```javascript
const aceQueries = [
  `Project architecture related to ${keywords}`,
  `Existing implementations of ${keywords[0]}`,
  `Code patterns for ${keywords} features`,
  `Integration points for ${keywords[0]}`
]
// Execute via mcp__ace-tool__search_context
```

**Context Package** (passed to agent):
- `relevant_files[]` - Files identified by ACE
- `detected_patterns[]` - Code patterns found
- `architecture_insights` - Structure understanding

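A minimal sketch of the session-ID convention above, producing IDs like `MCP-refactor-payment-process-2026-01-14`. The exact slugging rule is an assumption; the real command may normalize differently:

```javascript
// Assumed slugging: lowercase, non-alphanumeric runs collapsed to hyphens,
// truncated to keep directory names short.
function makeSessionId(taskDescription, date = new Date()) {
  const slug = taskDescription
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '')
    .slice(0, 24);
  return `MCP-${slug}-${date.toISOString().slice(0, 10)}`;
}

console.log(makeSessionId('Refactor payment processing', new Date('2026-01-14')));
// → MCP-refactor-payment-process-2026-01-14
```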
### Phase 2: Agent Delegation

**Core Principle**: The orchestrator only delegates and reads output - NO direct CLI execution.

**Agent Invocation**:
```javascript
Task({
  subagent_type: "cli-discuss-agent",
  run_in_background: false,
  description: `Discussion round ${currentRound}`,
  prompt: `
## Input Context
- task_description: ${taskDescription}
- round_number: ${currentRound}
- session: { id: "${sessionId}", folder: "${sessionFolder}" }
- ace_context: ${JSON.stringify(contextPackage)}
- previous_rounds: ${JSON.stringify(analysisResults)}
- user_feedback: ${userFeedback || 'None'}
- cli_config: { tools: ["gemini", "codex"], mode: "parallel", fallback_chain: ["gemini", "codex", "claude"] }

## Execution Process
1. Parse input context (handle JSON strings)
2. Check if ACE supplementary search is needed
3. Build CLI prompts with context
4. Execute CLIs (parallel or serial per cli_config.mode)
5. Parse CLI outputs, handle failures with fallback
6. Perform cross-verification between CLI results
7. Synthesize solutions, calculate scores
8. Calculate convergence, generate clarification questions
9. Write synthesis.json

## Output
Write: ${sessionFolder}/rounds/${currentRound}/synthesis.json

## Completion Checklist
- [ ] All configured CLI tools executed (or fallback triggered)
- [ ] Cross-verification completed with agreements/disagreements
- [ ] 2-3 solutions generated with file:line references
- [ ] Convergence score calculated (0.0-1.0)
- [ ] synthesis.json written with all Primary Fields
`
})
```

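The parallel-vs-serial switch in step 4 of the execution process can be sketched as follows, where `exec` stands in for the real CLI invocation:

```javascript
// Sketch: run CLI tools per cli_config.mode. `exec` is a placeholder for
// the real CLI invocation and resolves with that tool's output.
async function runClis(tools, mode, exec) {
  if (mode === 'parallel') {
    return Promise.all(tools.map(exec)); // all tools at once
  }
  const results = [];
  for (const tool of tools) {
    results.push(await exec(tool)); // one tool at a time
  }
  return results;
}

runClis(['gemini', 'codex'], 'serial', async t => `${t}:ok`)
  .then(r => console.log(r)); // → [ 'gemini:ok', 'codex:ok' ]
```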
**Read Agent Output**:
```javascript
const synthesis = JSON.parse(Read(`${sessionFolder}/rounds/${round}/synthesis.json`))
// Access top-level fields: solutions, convergence, cross_verification, clarification_questions
```

**Convergence Decision**:
```javascript
if (synthesis.convergence.recommendation === 'converged') {
  // Proceed to Phase 3
} else if (synthesis.convergence.recommendation === 'user_input_needed') {
  // Collect user feedback, return to Phase 2
} else {
  // Continue to the next round if new_insights && round < maxRounds
}
```

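The branching above condenses into a single routing function (a sketch; names are illustrative):

```javascript
// Sketch of the convergence routing at the end of each discussion round.
function routeAfterRound(convergence, round, maxRounds) {
  if (convergence.recommendation === 'converged') return 'phase-3';
  if (convergence.recommendation === 'user_input_needed') return 'collect-feedback';
  // 'continue': another round is only worthwhile if it produced new insights
  // and the round budget is not exhausted.
  return (convergence.new_insights && round < maxRounds) ? 'next-round' : 'phase-3';
}

console.log(routeAfterRound({ recommendation: 'continue', new_insights: true }, 1, 3));
// → next-round
```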
### Phase 3: Present Options

**Display from Agent Output** (no processing):
```javascript
console.log(`
## Solution Options

${synthesis.solutions.map((s, i) => `
**Option ${i+1}: ${s.name}**
Source: ${s.source_cli.join(' + ')}
Effort: ${s.effort} | Risk: ${s.risk}

Pros: ${s.pros.join(', ')}
Cons: ${s.cons.join(', ')}

Files: ${s.affected_files.slice(0,3).map(f => `${f.file}:${f.line}`).join(', ')}
`).join('\n')}

## Cross-Verification
Agreements: ${synthesis.cross_verification.agreements.length}
Disagreements: ${synthesis.cross_verification.disagreements.length}
`)
```

### Phase 4: User Decision

**Decision Options**:
```javascript
AskUserQuestion({
  questions: [
    {
      question: "Which solution approach?",
      header: "Solution",
      multiSelect: false,
      options: solutions.map((s, i) => ({
        label: `Option ${i+1}: ${s.name}`,
        description: `${s.effort} effort, ${s.risk} risk`
      })).concat([
        { label: "Need More Analysis", description: "Return to Phase 2" }
      ])
    },
    {
      question: "Execution method:",
      header: "Execution",
      multiSelect: false,
      options: [
        { label: "Agent", description: "@code-developer agent" },
        { label: "Codex", description: "codex CLI tool" },
        { label: "Auto", description: "Auto-select based on complexity" }
      ]
    },
    {
      question: "Code review after execution?",
      header: "Review",
      multiSelect: false,
      options: [
        { label: "Skip", description: "No review" },
        { label: "Gemini Review", description: "Gemini CLI tool" },
        { label: "Codex Review", description: "codex review --uncommitted" },
        { label: "Agent Review", description: "Current agent review" }
      ]
    }
  ]
})
```

**Routing**:
- Approve + execution method → Phase 5
- Need More Analysis → Phase 2 with feedback
- Cancel → Save session for resumption

### Phase 5: Plan Generation & Execution Handoff

**Step 1: Build Context-Package** (orchestrator responsibility):
```javascript
// Extract key information from the user decision and synthesis
const contextPackage = {
  // Core solution details
  solution: {
    name: selectedSolution.name,
    source_cli: selectedSolution.source_cli,
    feasibility: selectedSolution.feasibility,
    effort: selectedSolution.effort,
    risk: selectedSolution.risk,
    summary: selectedSolution.summary
  },
  // Implementation plan (tasks, flow, milestones)
  implementation_plan: selectedSolution.implementation_plan,
  // Dependencies
  dependencies: selectedSolution.dependencies || { internal: [], external: [] },
  // Technical concerns
  technical_concerns: selectedSolution.technical_concerns || [],
  // Consensus from cross-verification
  consensus: {
    agreements: synthesis.cross_verification.agreements,
    resolved_conflicts: synthesis.cross_verification.resolution
  },
  // User constraints (from Phase 4 feedback)
  constraints: userConstraints || [],
  // Task context
  task_description: taskDescription,
  session_id: sessionId
}

// Write context-package for traceability
Write(`${sessionFolder}/context-package.json`, JSON.stringify(contextPackage, null, 2))
```

**Context-Package Schema**:

| Field | Type | Description |
|-------|------|-------------|
| `solution` | object | User-selected solution from synthesis |
| `solution.name` | string | Solution identifier |
| `solution.feasibility` | number | Viability score (0-1) |
| `solution.summary` | string | Brief analysis summary |
| `implementation_plan` | object | Task breakdown with flow and dependencies |
| `implementation_plan.approach` | string | High-level technical strategy |
| `implementation_plan.tasks[]` | array | Discrete tasks with id, name, depends_on, files |
| `implementation_plan.execution_flow` | string | Task sequence (e.g., "T1 → T2 → T3") |
| `implementation_plan.milestones` | string[] | Key checkpoints |
| `dependencies` | object | Module and package dependencies |
| `technical_concerns` | string[] | Risks and blockers |
| `consensus` | object | Cross-verified agreements from multi-CLI |
| `constraints` | string[] | User-specified constraints from Phase 4 |

```json
{
  "solution": {
    "name": "Strategy Pattern Refactoring",
    "source_cli": ["gemini", "codex"],
    "feasibility": 0.88,
    "effort": "medium",
    "risk": "low",
    "summary": "Extract payment gateway interface, implement strategy pattern for multi-gateway support"
  },
  "implementation_plan": {
    "approach": "Define interface → Create concrete strategies → Implement factory → Migrate existing code",
    "tasks": [
      {"id": "T1", "name": "Define PaymentGateway interface", "depends_on": [], "files": [{"file": "src/types/payment.ts", "line": 1, "action": "create"}], "key_point": "Include all existing Stripe methods"},
      {"id": "T2", "name": "Implement StripeGateway", "depends_on": ["T1"], "files": [{"file": "src/payment/stripe.ts", "line": 1, "action": "create"}], "key_point": "Wrap existing logic"},
      {"id": "T3", "name": "Create GatewayFactory", "depends_on": ["T1"], "files": [{"file": "src/payment/factory.ts", "line": 1, "action": "create"}], "key_point": null},
      {"id": "T4", "name": "Migrate processor to use factory", "depends_on": ["T2", "T3"], "files": [{"file": "src/payment/processor.ts", "line": 45, "action": "modify"}], "key_point": "Backward compatible"}
    ],
    "execution_flow": "T1 → (T2 | T3) → T4",
    "milestones": ["Interface defined", "Gateway implementations complete", "Migration done"]
  },
  "dependencies": {
    "internal": ["@/lib/payment-gateway", "@/types/payment"],
    "external": ["stripe@^14.0.0"]
  },
  "technical_concerns": ["Existing tests must pass", "No breaking API changes"],
  "consensus": {
    "agreements": ["Use strategy pattern", "Keep existing API"],
    "resolved_conflicts": "Factory over DI for simpler integration"
  },
  "constraints": ["backward compatible", "no breaking changes to PaymentResult type"],
  "task_description": "Refactor payment processing for multi-gateway support",
  "session_id": "MCP-payment-refactor-2026-01-14"
}
```

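In the example, the `depends_on` fields imply the `execution_flow` "T1 → (T2 | T3) → T4". As a sketch, a valid order can be derived from `tasks[].depends_on` with Kahn's algorithm:

```javascript
// Sketch: derive a valid task order from tasks[].depends_on (Kahn's algorithm).
function topoOrder(tasks) {
  const indegree = new Map(tasks.map(t => [t.id, t.depends_on.length]));
  const order = [];
  const queue = tasks.filter(t => t.depends_on.length === 0).map(t => t.id);
  while (queue.length) {
    const id = queue.shift();
    order.push(id);
    for (const t of tasks) {
      // Release tasks whose last unmet dependency was just scheduled.
      if (t.depends_on.includes(id) && indegree.get(t.id) > 0) {
        indegree.set(t.id, indegree.get(t.id) - 1);
        if (indegree.get(t.id) === 0) queue.push(t.id);
      }
    }
  }
  if (order.length !== tasks.length) throw new Error('Dependency cycle');
  return order;
}

const tasks = [
  { id: 'T1', depends_on: [] },
  { id: 'T2', depends_on: ['T1'] },
  { id: 'T3', depends_on: ['T1'] },
  { id: 'T4', depends_on: ['T2', 'T3'] }
];
console.log(topoOrder(tasks).join(' → ')); // → T1 → T2 → T3 → T4
```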
**Step 2: Invoke Planning Agent**:
```javascript
Task({
  subagent_type: "cli-lite-planning-agent",
  run_in_background: false,
  description: "Generate implementation plan",
  prompt: `
## Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json

## Context-Package (from orchestrator)
${JSON.stringify(contextPackage, null, 2)}

## Execution Process
1. Read plan-json-schema.json for output structure
2. Read project-tech.json and project-guidelines.json
3. Parse context-package fields:
   - solution: name, feasibility, summary
   - implementation_plan: tasks[], execution_flow, milestones
   - dependencies: internal[], external[]
   - technical_concerns: risks/blockers
   - consensus: agreements, resolved_conflicts
   - constraints: user requirements
4. Use implementation_plan.tasks[] as the task foundation
5. Preserve task dependencies (depends_on) and execution_flow
6. Expand tasks with detailed acceptance criteria
7. Generate plan.json following the schema exactly

## Output
- ${sessionFolder}/plan.json

## Completion Checklist
- [ ] plan.json preserves task dependencies from implementation_plan
- [ ] Task execution order follows execution_flow
- [ ] key_point values reflected in task descriptions
- [ ] User constraints applied to implementation
- [ ] Acceptance criteria are testable
- [ ] Schema fields match plan-json-schema.json exactly
`
})
```

**Step 3: Build executionContext**:
```javascript
// After plan.json is generated by cli-lite-planning-agent
const plan = JSON.parse(Read(`${sessionFolder}/plan.json`))

// Build executionContext (same structure as lite-plan)
executionContext = {
  planObject: plan,
  explorationsContext: null,   // Multi-CLI doesn't use exploration files
  explorationAngles: [],       // No exploration angles
  explorationManifest: null,   // No manifest
  clarificationContext: null,  // Store user feedback from Phase 2 if it exists
  executionMethod: userSelection.execution_method,  // From Phase 4
  codeReviewTool: userSelection.code_review_tool,   // From Phase 4
  originalUserInput: taskDescription,

  // Optional: Task-level executor assignments
  executorAssignments: null,   // Could be enhanced in the future

  session: {
    id: sessionId,
    folder: sessionFolder,
    artifacts: {
      explorations: [],        // No explorations in multi-CLI workflow
      explorations_manifest: null,
      plan: `${sessionFolder}/plan.json`,
      synthesis_rounds: Array.from({length: currentRound}, (_, i) =>
        `${sessionFolder}/rounds/${i+1}/synthesis.json`
      ),
      context_package: `${sessionFolder}/context-package.json`
    }
  }
}
```

**Step 4: Hand off to Execution**:
```javascript
// Hand off to lite-execute with the in-memory context
SlashCommand("/workflow:lite-execute --in-memory")
```

## Output File Structure

```
.workflow/.multi-cli-plan/{MCP-task-slug-YYYY-MM-DD}/
├── session-state.json           # Session tracking (orchestrator)
├── rounds/
│   ├── 1/synthesis.json         # Round 1 analysis (cli-discuss-agent)
│   ├── 2/synthesis.json         # Round 2 analysis (cli-discuss-agent)
│   └── .../
├── context-package.json         # Extracted context for planning (orchestrator)
└── plan.json                    # Structured plan (cli-lite-planning-agent)
```

**File Producers**:

| File | Producer | Content |
|------|----------|---------|
| `session-state.json` | Orchestrator | Session metadata, rounds, decisions |
| `rounds/*/synthesis.json` | cli-discuss-agent | Solutions, convergence, cross-verification |
| `context-package.json` | Orchestrator | Extracted solution, dependencies, consensus for planning |
| `plan.json` | cli-lite-planning-agent | Structured tasks for lite-execute |

## synthesis.json Schema

```json
{
  "round": 1,
  "solutions": [{
    "name": "Solution Name",
    "source_cli": ["gemini", "codex"],
    "feasibility": 0.85,
    "effort": "low|medium|high",
    "risk": "low|medium|high",
    "summary": "Brief analysis summary",
    "implementation_plan": {
      "approach": "High-level technical approach",
      "tasks": [
        {"id": "T1", "name": "Task", "depends_on": [], "files": [], "key_point": "..."}
      ],
      "execution_flow": "T1 → T2 → T3",
      "milestones": ["Checkpoint 1", "Checkpoint 2"]
    },
    "dependencies": {"internal": [], "external": []},
    "technical_concerns": ["Risk 1", "Blocker 2"]
  }],
  "convergence": {
    "score": 0.85,
    "new_insights": false,
    "recommendation": "converged|continue|user_input_needed"
  },
  "cross_verification": {
    "agreements": [],
    "disagreements": [],
    "resolution": "..."
  },
  "clarification_questions": []
}
```

**Key Planning Fields**:

| Field | Purpose |
|-------|---------|
| `feasibility` | Viability score (0-1) |
| `implementation_plan.tasks[]` | Discrete tasks with dependencies |
| `implementation_plan.execution_flow` | Task sequence visualization |
| `implementation_plan.milestones` | Key checkpoints |
| `technical_concerns` | Risks and blockers |

**Note**: Solutions are ranked by internal scoring (array order = priority)

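One plausible reading of `convergence.score` (an assumption for illustration; the agent's actual metric may weigh claims differently) is the share of cross-verified claims the CLIs agree on:

```javascript
// Assumed metric: agreements / (agreements + disagreements).
function convergenceScore(crossVerification) {
  const { agreements, disagreements } = crossVerification;
  const total = agreements.length + disagreements.length;
  return total === 0 ? 0 : agreements.length / total;
}

console.log(convergenceScore({
  agreements: ['use strategy pattern', 'keep existing API', 'factory over DI'],
  disagreements: ['migration order']
})); // → 0.75
```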
## TodoWrite Structure

**Initialization**:
```javascript
TodoWrite({ todos: [
  { content: "Phase 1: Context Gathering", status: "in_progress", activeForm: "Gathering context" },
  { content: "Phase 2: Multi-CLI Discussion", status: "pending", activeForm: "Running discussion" },
  { content: "Phase 3: Present Options", status: "pending", activeForm: "Presenting options" },
  { content: "Phase 4: User Decision", status: "pending", activeForm: "Awaiting decision" },
  { content: "Phase 5: Plan Generation", status: "pending", activeForm: "Generating plan" }
]})
```

**During Discussion Rounds**:
```javascript
TodoWrite({ todos: [
  { content: "Phase 1: Context Gathering", status: "completed", activeForm: "Gathering context" },
  { content: "Phase 2: Multi-CLI Discussion", status: "in_progress", activeForm: "Running discussion" },
  { content: " → Round 1: Initial analysis", status: "completed", activeForm: "Analyzing" },
  { content: " → Round 2: Deep verification", status: "in_progress", activeForm: "Verifying" },
  { content: "Phase 3: Present Options", status: "pending", activeForm: "Presenting options" },
  // ...
]})
```

## Error Handling

| Error | Resolution |
|-------|------------|
| ACE search fails | Fall back to Glob/Grep for file discovery |
| Agent fails | Retry once, then present partial results |
| CLI timeout (in agent) | Agent uses fallback: gemini → codex → claude |
| No convergence | Present best options, flag uncertainty |
| synthesis.json parse error | Request agent retry |
| User cancels | Save session for later resumption |

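The fallback chain in the table (gemini → codex → claude) can be sketched as a loop that tries each tool until one succeeds; `exec` is a stand-in for the real CLI call:

```javascript
// Sketch: try each CLI in the fallback chain until one succeeds.
async function runWithFallback(chain, exec) {
  const errors = [];
  for (const tool of chain) {
    try {
      return { tool, output: await exec(tool) };
    } catch (err) {
      errors.push(`${tool}: ${err.message}`); // record failure, try next tool
    }
  }
  throw new Error(`All CLI tools failed: ${errors.join('; ')}`);
}

runWithFallback(['gemini', 'codex', 'claude'], async tool => {
  if (tool === 'gemini') throw new Error('timeout');
  return `${tool} analysis`;
}).then(r => console.log(r.tool)); // → codex
```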
## Configuration

| Flag | Default | Description |
|------|---------|-------------|
| `--max-rounds` | 3 | Maximum discussion rounds |
| `--tools` | gemini,codex | CLI tools for analysis |
| `--mode` | parallel | Execution mode: parallel or serial |
| `--auto-execute` | false | Auto-execute after approval |

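A minimal sketch of parsing these flags (behavior assumed for illustration; the real command parser may differ):

```javascript
// Sketch: parse "--max-rounds=3 --tools=gemini,codex --mode=serial" style flags,
// falling back to the documented defaults.
function parseFlags(args) {
  const out = { maxRounds: 3, tools: ['gemini', 'codex'], mode: 'parallel', autoExecute: false };
  for (const arg of args) {
    const [key, value] = arg.replace(/^--/, '').split('=');
    if (key === 'max-rounds') out.maxRounds = Number(value);
    else if (key === 'tools') out.tools = value.split(',');
    else if (key === 'mode') out.mode = value;
    else if (key === 'auto-execute') out.autoExecute = value !== 'false';
  }
  return out;
}

console.log(parseFlags(['--max-rounds=2', '--tools=gemini,codex,claude']));
```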
## Best Practices

1. **Be Specific**: Detailed task descriptions improve ACE context quality
2. **Provide Feedback**: Use clarification rounds to refine requirements
3. **Trust Cross-Verification**: Multi-CLI consensus indicates high confidence
4. **Review Trade-offs**: Consider pros/cons before selecting a solution
5. **Check synthesis.json**: Review agent output for detailed analysis
6. **Iterate When Needed**: Don't hesitate to request more analysis

## Related Commands

```bash
# Simpler single-round planning
/workflow:lite-plan "task description"

# Issue-driven discovery
/issue:discover-by-prompt "find issues"

# View session files
cat .workflow/.multi-cli-plan/{session-id}/plan.json
cat .workflow/.multi-cli-plan/{session-id}/rounds/1/synthesis.json
cat .workflow/.multi-cli-plan/{session-id}/context-package.json

# Direct execution (if you have plan.json)
/workflow:lite-execute plan.json
```

@@ -585,6 +585,10 @@ TodoWrite({
- Mark completed immediately after each group finishes
- Update parent phase status when all child items complete

## Post-Completion Expansion

After completion, ask the user whether to expand the work into follow-up issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.

## Best Practices

1. **Trust AI Planning**: The planning agent's grouping and execution strategy are based on dependency analysis

@@ -107,13 +107,13 @@ rm -f .workflow/archives/$SESSION_ID/.archiving
Manifest: Updated with N total sessions
```

### Phase 4: Update project-tech.json (Optional)

**Skip if**: `.workflow/project-tech.json` doesn't exist

```bash
# Check
test -f .workflow/project-tech.json || echo "SKIP"
```

**If exists**, add feature entry:
@@ -134,6 +134,32 @@ test -f .workflow/project.json || echo "SKIP"
✓ Feature added to project registry
```

### Phase 5: Ask About Solidify (Always)

After successful archival, prompt the user to capture learnings:

```javascript
AskUserQuestion({
  questions: [{
    question: "Would you like to solidify learnings from this session into project guidelines?",
    header: "Solidify",
    options: [
      { label: "Yes, solidify now", description: "Extract learnings and update project-guidelines.json" },
      { label: "Skip", description: "Archive complete, no learnings to capture" }
    ],
    multiSelect: false
  }]
})
```

**If "Yes, solidify now"**: Execute `/workflow:session:solidify` with the archived session ID.

**Output**:
```
Session archived successfully.
→ Run /workflow:session:solidify to capture learnings (recommended)
```

## Error Recovery

| Phase | Symptom | Recovery |
@@ -149,5 +175,6 @@ test -f .workflow/project.json || echo "SKIP"
Phase 1: find session → create .archiving marker
Phase 2: read key files → build manifest entry (no writes)
Phase 3: mkdir → mv → update manifest.json → rm marker
Phase 4: update project-tech.json features array (optional)
Phase 5: ask user → solidify learnings (optional)
```

@@ -16,7 +16,7 @@ examples:
Manages workflow sessions with three operation modes: discovery (manual), auto (intelligent), and force-new.

**Dual Responsibility**:
1. **Project-level initialization** (first-time only): Creates `.workflow/project-tech.json` for feature registry
2. **Session-level initialization** (always): Creates session directory structure

## Session Types

@@ -37,6 +37,44 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
7. **Task Attachment Model**: SlashCommand execution **attaches** sub-tasks to the current workflow. The orchestrator **executes** these attached tasks itself, then **collapses** them after completion
8. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute the next phase

## TDD Compliance Requirements

### The Iron Law

```
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
```

**Enforcement Method**:
- Phase 5: `implementation_approach` includes test-first steps (Red → Green → Refactor)
- Green phase: Includes test-fix-cycle configuration (max 3 iterations)
- Auto-revert: Triggered when max iterations are reached without passing tests

**Verification**: Phase 6 validates Red-Green-Refactor structure in all generated tasks

### TDD Compliance Checkpoint

| Checkpoint | Validation Phase | Evidence Required |
|------------|------------------|-------------------|
| Test-first structure | Phase 5 | `implementation_approach` has 3 steps |
| Red phase exists | Phase 6 | Step 1: `tdd_phase: "red"` |
| Green phase with test-fix | Phase 6 | Step 2: `tdd_phase: "green"` + test-fix-cycle |
| Refactor phase exists | Phase 6 | Step 3: `tdd_phase: "refactor"` |

### Core TDD Principles (from ref skills)

**Red Flags - STOP and Reassess**:
- Code written before test
- Test passes immediately (no Red phase witnessed)
- Cannot explain why the test should fail
- "Just this once" rationalization
- "Tests after achieve same goals" thinking

**Why Order Matters**:
- Tests written after code pass immediately → proves nothing
- Test-first forces edge-case discovery before implementation
- Tests-after verify what was built, not what's required

## 6-Phase Execution (with Conflict Resolution)
|
||||
|
||||
### Phase 1: Session Discovery
|
||||
@@ -183,7 +221,7 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
|
||||
{"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
|
||||
{"content": "Phase 4: Conflict Resolution", "status": "in_progress", "activeForm": "Executing conflict resolution"},
|
||||
{"content": " → Detect conflicts with CLI analysis", "status": "in_progress", "activeForm": "Detecting conflicts"},
|
||||
{"content": " → Present conflicts to user", "status": "pending", "activeForm": "Presenting conflicts"},
|
||||
{"content": " → Log and analyze detected conflicts", "status": "pending", "activeForm": "Analyzing conflicts"},
|
||||
{"content": " → Apply resolution strategies", "status": "pending", "activeForm": "Applying resolution strategies"},
|
||||
{"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
|
||||
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
|
||||
@@ -251,6 +289,13 @@ SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
|
||||
- IMPL_PLAN.md contains workflow_type: "tdd" in frontmatter
|
||||
- Task count ≤10 (compliance with task limit)
|
||||
|
||||
**Red Flag Detection** (Non-Blocking Warnings):
|
||||
- Task count >10: `⚠️ High task count may indicate insufficient decomposition`
|
||||
- Missing test-fix-cycle: `⚠️ Green phase lacks auto-revert configuration`
|
||||
- Generic task names: `⚠️ Vague task names suggest unclear TDD cycles`
|
||||
|
||||
**Action**: Log warnings to `.workflow/active/[sessionId]/.process/tdd-warnings.log` (non-blocking)
|
||||
|
||||
<!-- TodoWrite: When task-generate-tdd executed, INSERT 3 task-generate-tdd tasks -->
|
||||
|
||||
**TodoWrite Update (Phase 5 SlashCommand executed - tasks attached)**:
|
||||
@@ -302,6 +347,42 @@ SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
|
||||
5. Test-fix cycle: Green phase step includes test-fix-cycle logic with max_iterations
|
||||
6. Task count: Total tasks ≤10 (simple + subtasks)
|
||||
|
||||
**Red Flag Checklist** (from TDD best practices):
|
||||
- [ ] No tasks skip Red phase (`tdd_phase: "red"` exists in step 1)
|
||||
- [ ] Test files referenced in Red phase (explicit paths, not placeholders)
|
||||
- [ ] Green phase has test-fix-cycle with `max_iterations` configured
|
||||
- [ ] Refactor phase has clear completion criteria
|
||||
|
||||
**Non-Compliance Warning Format**:
|
||||
```
|
||||
⚠️ TDD Red Flag: [issue description]
|
||||
Task: [IMPL-N]
|
||||
Recommendation: [action to fix]
|
||||
```
|
||||
|
||||
**Evidence Gathering** (Before Completion Claims):
|
||||
|
||||
```bash
|
||||
# Verify session artifacts exist
|
||||
ls -la .workflow/active/[sessionId]/{IMPL_PLAN.md,TODO_LIST.md}
|
||||
ls -la .workflow/active/[sessionId]/.task/IMPL-*.json
|
||||
|
||||
# Count generated artifacts
|
||||
echo "IMPL tasks: $(ls .workflow/active/[sessionId]/.task/IMPL-*.json 2>/dev/null | wc -l)"
|
||||
|
||||
# Sample task structure verification (first task)
|
||||
jq '{id, tdd: .meta.tdd_workflow, phases: [.flow_control.implementation_approach[].tdd_phase]}' \
|
||||
"$(ls .workflow/active/[sessionId]/.task/IMPL-*.json | head -1)"
|
||||
```
|
||||
|
||||
**Evidence Required Before Summary**:
|
||||
| Evidence Type | Verification Method | Pass Criteria |
|
||||
|---------------|---------------------|---------------|
|
||||
| File existence | `ls -la` artifacts | All files present |
|
||||
| Task count | Count IMPL-*.json | Count matches claims |
|
||||
| TDD structure | jq sample extraction | Shows red/green/refactor |
|
||||
| Warning log | Check tdd-warnings.log | Logged (may be empty) |
|
||||
|
||||
**Return Summary**:
|
||||
```
|
||||
TDD Planning complete for session: [sessionId]
|
||||
@@ -333,6 +414,9 @@ TDD Configuration:
|
||||
- Green phase includes test-fix cycle (max 3 iterations)
|
||||
- Auto-revert on max iterations reached
|
||||
|
||||
⚠️ ACTION REQUIRED: Before execution, ensure you understand WHY each Red phase test is expected to fail.
|
||||
This is crucial for valid TDD - if you don't know why the test fails, you can't verify it tests the right thing.
|
||||
|
||||
Recommended Next Steps:
|
||||
1. /workflow:action-plan-verify --session [sessionId] # Verify TDD plan quality and dependencies
|
||||
2. /workflow:execute --session [sessionId] # Start TDD execution
|
||||
@@ -400,7 +484,7 @@ TDD Workflow Orchestrator
|
||||
│ IF conflict_risk ≥ medium:
|
||||
│ └─ /workflow:tools:conflict-resolution ← ATTACHED (3 tasks)
|
||||
│ ├─ Phase 4.1: Detect conflicts with CLI
|
||||
│ ├─ Phase 4.2: Present conflicts to user
|
||||
│ ├─ Phase 4.2: Log and analyze detected conflicts
|
||||
│ └─ Phase 4.3: Apply resolution strategies
|
||||
│ └─ Returns: conflict-resolution.json ← COLLAPSED
|
||||
│ ELSE:
|
||||
@@ -439,6 +523,34 @@ Convert user input to TDD-structured format:
|
||||
- **Command failure**: Keep phase in_progress, report error
|
||||
- **TDD validation failure**: Report incomplete chains or wrong dependencies
|
||||
|
||||
### TDD Warning Patterns
|
||||
|
||||
| Pattern | Warning Message | Recommended Action |
|
||||
|---------|----------------|-------------------|
|
||||
| Task count >10 | High task count detected | Consider splitting into multiple sessions |
|
||||
| Missing test-fix-cycle | Green phase lacks auto-revert | Add `max_iterations: 3` to task config |
|
||||
| Red phase missing test path | Test file path not specified | Add explicit test file paths |
|
||||
| Generic task names | Vague names like "Add feature" | Use specific behavior descriptions |
|
||||
| No refactor criteria | Refactor phase lacks completion criteria | Define clear refactor scope |
|
||||
|
||||
### Non-Blocking Warning Policy
|
||||
|
||||
**All warnings are advisory** - they do not halt execution:
|
||||
1. Warnings logged to `.process/tdd-warnings.log`
|
||||
2. Summary displayed in Phase 6 output
|
||||
3. User decides whether to address before `/workflow:execute`
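
The advisory policy above can be sketched as a quick, non-blocking log check. This is a sketch under stated assumptions: the log path follows the `.process/tdd-warnings.log` convention, and the line format follows the Non-Compliance Warning Format; a temp file stands in for the real session log.

```shell
#!/bin/sh
# Sketch: count advisory TDD warnings and continue regardless.
# Real sessions write to .workflow/active/[sessionId]/.process/tdd-warnings.log
log=$(mktemp)
cat > "$log" <<'EOF'
⚠️ TDD Red Flag: Green phase lacks auto-revert configuration
⚠️ TDD Red Flag: Vague task names suggest unclear TDD cycles
EOF

count=$(grep -c 'TDD Red Flag' "$log")
echo "Found $count advisory warning(s); execution continues."
rm -f "$log"
```

Because the warnings are advisory, the exit status of the check never gates `/workflow:execute`.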

### Error Handling Quick Reference

| Error Type | Detection | Recovery Action |
|------------|-----------|-----------------|
| Parsing failure | Empty/malformed output | Retry once, then report |
| Missing context-package | File read error | Re-run `/workflow:tools:context-gather` |
| Invalid task JSON | jq parse error | Report malformed file path |
| High task count (>10) | Count validation | Log warning, continue (non-blocking) |
| Test-context missing | File not found | Re-run `/workflow:tools:test-context-gather` |
| Phase timeout | No response | Retry phase, check CLI connectivity |

## Related Commands

**Prerequisite Commands**:

@@ -458,3 +570,28 @@ Convert user input to TDD-structured format:
- `/workflow:execute` - Begin TDD implementation
- `/workflow:tdd-verify` - Post-execution: Verify TDD compliance and generate quality report

## Next Steps Decision Table

| Situation | Recommended Command | Purpose |
|-----------|---------------------|---------|
| First time planning | `/workflow:action-plan-verify` | Validate task structure before execution |
| Warnings in tdd-warnings.log | Review log, refine tasks | Address Red Flags before proceeding |
| High task count warning | Consider `/workflow:session:start` | Split into focused sub-sessions |
| Ready to implement | `/workflow:execute` | Begin TDD Red-Green-Refactor cycles |
| After implementation | `/workflow:tdd-verify` | Generate TDD compliance report |
| Need to review tasks | `/workflow:status --session [id]` | Inspect current task breakdown |
| Plan needs changes | `/task:replan` | Update task JSON with new requirements |

### TDD Workflow State Transitions

```
/workflow:tdd-plan
        ↓
[Planning Complete] ──→ /workflow:action-plan-verify (recommended)
        ↓
[Verified/Ready] ─────→ /workflow:execute
        ↓
[Implementation] ─────→ /workflow:tdd-verify (post-execution)
        ↓
[Quality Report] ─────→ Done or iterate
```

@@ -491,6 +491,10 @@ The orchestrator automatically creates git commits at key checkpoints to enable

**Note**: Final session completion creates additional commit with full summary.

## Post-Completion Expansion

After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`

## Best Practices

1. **Default Settings Work**: 10 iterations sufficient for most cases

@@ -154,8 +154,8 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
- Validation of exploration conflict_indicators
- ModuleOverlap conflicts with overlap_analysis
- Targeted clarification questions
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
" --tool gemini --mode analysis --cd {project_root}
CONSTRAINTS: Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {project_root}

Fallback: Qwen (same prompt) → Claude (manual analysis)


@@ -237,7 +237,7 @@ Execute complete context-search-agent workflow for implementation planning:

### Phase 1: Initialization & Pre-Analysis
1. **Project State Loading**:
   - Read and parse `.workflow/project-tech.json`. Use its `technology_analysis` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components.
   - Read and parse `.workflow/project-tech.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components.
   - Read and parse `.workflow/project-guidelines.json`. Load `conventions`, `constraints`, and `learnings` into a `project_guidelines` section.
   - If files don't exist, proceed with fresh analysis.
2. **Detection**: Check for existing context-package (early exit if valid)

@@ -255,7 +255,7 @@ Execute all discovery tracks:

### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project-tech.json`** for architecture and tech stack unless code analysis reveals it's outdated.
3. **Populate `project_context`**: Directly use the `technology_analysis` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
3. **Populate `project_context`**: Directly use the `overview` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
4. **Populate `project_guidelines`**: Load conventions, constraints, and learnings from `project-guidelines.json` into a dedicated section.
5. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
6. Perform conflict detection with risk assessment

@@ -90,7 +90,7 @@ Template: ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.t

## EXECUTION STEPS
1. Execute Gemini analysis:
   ccw cli -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --tool gemini --mode write --cd .workflow/active/{test_session_id}/.process
   ccw cli -p "..." --tool gemini --mode write --rule test-test-concept-analysis --cd .workflow/active/{test_session_id}/.process

2. Generate TEST_ANALYSIS_RESULTS.md:
   Synthesize gemini-test-analysis.md into standardized format for task generation

@@ -1,139 +1,86 @@
---
name: ccw-help
description: Workflow command guide for Claude Code Workflow (78 commands). Search/browse commands, get next-step recommendations, view documentation, report issues. Triggers "CCW-help", "CCW-issue", "ccw-help", "ccw-issue", "ccw"
description: CCW command help system. Search, browse, recommend commands. Triggers "ccw-help", "ccw-issue".
allowed-tools: Read, Grep, Glob, AskUserQuestion
version: 6.0.0
version: 7.0.0
---

# CCW-Help Skill

CCW command help system providing command search, recommendations, documentation viewing, and issue reporting.
CCW command help system providing command search, recommendations, and documentation viewing.

## Trigger Conditions

- Keywords: "CCW-help", "CCW-issue", "ccw-help", "ccw-issue", "帮助", "命令", "怎么用"
- Scenarios: user asks about command usage, searches for commands, requests next-step suggestions, reports issues

## Execution Flow

```mermaid
graph TD
    A[User Query] --> B{Intent Classification}
    B -->|Search| C[Command Search]
    B -->|Recommend| D[Smart Recommendations]
    B -->|Docs| E[Documentation]
    B -->|Onboarding| F[Onboarding]
    B -->|Issue| G[Issue Reporting]
    B -->|Analysis| H[Deep Analysis]

    C --> I[Query Index]
    D --> J[Query Relationships]
    E --> K[Read Source File]
    F --> L[Essential Commands]
    G --> M[Generate Template]
    H --> N[CLI Analysis]

    I & J & K & L & M & N --> O[Synthesize Response]
```
- Keywords: "ccw-help", "ccw-issue", "帮助", "命令", "怎么用"
- Scenarios: asking about command usage, searching for commands, requesting next-step suggestions

## Operation Modes

### Mode 1: Command Search 🔍
### Mode 1: Command Search

**Triggers**: "搜索命令", "find command", "planning 相关", "search"
**Triggers**: "搜索命令", "find command", "search"

**Process**:
1. Query `index/all-commands.json` or `index/by-category.json`
2. Filter and rank results based on user context
3. Present top 3-5 relevant commands with usage hints
1. Query `command.json` commands array
2. Filter by name, description, category
3. Present top 3-5 relevant commands

### Mode 2: Smart Recommendations 🤖
### Mode 2: Smart Recommendations

**Triggers**: "下一步", "what's next", "after /workflow:plan", "推荐"
**Triggers**: "下一步", "what's next", "推荐"

**Process**:
1. Query `index/command-relationships.json`
2. Evaluate context and prioritize recommendations
3. Explain WHY each recommendation fits
1. Query command's `flow.next_steps` in `command.json`
2. Explain WHY each recommendation fits

### Mode 3: Full Documentation 📖
### Mode 3: Documentation

**Triggers**: "参数说明", "怎么用", "how to use", "详情"
**Triggers**: "怎么用", "how to use", "详情"

**Process**:
1. Locate command in index
2. Read source file via `source` path (e.g., `commands/workflow/lite-plan.md`)
3. Extract relevant sections and provide context-specific examples
1. Locate command in `command.json`
2. Read source file via `source` path
3. Provide context-specific examples

### Mode 4: Beginner Onboarding 🎓
### Mode 4: Beginner Onboarding

**Triggers**: "新手", "getting started", "如何开始", "常用命令"
**Triggers**: "新手", "getting started", "常用命令"

**Process**:
1. Query `index/essential-commands.json`
2. Assess project stage (building from scratch vs. adding features)
3. Guide appropriate workflow entry point
1. Query `essential_commands` array
2. Guide appropriate workflow entry point

### Mode 5: Issue Reporting 📝
### Mode 5: Issue Reporting

**Triggers**: "CCW-issue", "报告 bug", "功能建议", "问题咨询"
**Triggers**: "ccw-issue", "报告 bug"

**Process**:
1. Use AskUserQuestion to gather context
2. Generate structured issue template
3. Provide actionable next steps

### Mode 6: Deep Analysis 🔬
## Data Source

**Triggers**: "详细说明", "命令原理", "agent 如何工作", "实现细节"
Single source of truth: **[command.json](command.json)**

**Process**:
1. Read source documentation directly
2. For complex queries, use CLI for multi-file analysis:
```bash
ccw cli -p "PURPOSE: Analyze command documentation..." --tool gemini --mode analysis --cd ~/.claude
```

## Index Files

CCW-Help uses JSON indexes for fast lookup (no reference folder; source files are referenced directly):

| File | Content | Purpose |
|------|------|------|
| `index/all-commands.json` | Full command catalog | Keyword search |
| `index/all-agents.json` | Full agent catalog | Agent lookup |
| `index/by-category.json` | Grouped by category | Category browsing |
| `index/by-use-case.json` | Grouped by use case | Scenario recommendations |
| `index/essential-commands.json` | Core commands | Beginner onboarding |
| `index/command-relationships.json` | Command relationships | Next-step recommendations |
| Field | Purpose |
|-------|---------|
| `commands[]` | Flat command list with metadata |
| `commands[].flow` | Relationships (next_steps, prerequisites) |
| `commands[].essential` | Essential flag for onboarding |
| `agents[]` | Agent directory |
| `essential_commands[]` | Core commands list |

### Source Path Format

The `source` field in the index is a relative path from the `index/` directory (go up first, then locate):
The `source` field is a relative path (from the `skills/ccw-help/` directory):

```json
{
  "name": "workflow:lite-plan",
  "name": "lite-plan",
  "source": "../../../commands/workflow/lite-plan.md"
}
```

Path structure: `index/` → `ccw-help/` → `skills/` → `.claude/` → `commands/...`
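
The resolution convention can be checked with a small sketch. This is a sketch, not part of the skill: it builds a throwaway directory tree with `mktemp -d` (the real base directory differs between the old `index/`-anchored and new `skills/ccw-help/`-anchored layouts shown in this diff; the old `index/` anchoring is used here to match the path structure line above).

```shell
#!/bin/sh
# Sketch: resolve a "source" path relative to the index/ directory,
# mirroring index/ -> ccw-help/ -> skills/ -> .claude/ -> commands/...
root=$(mktemp -d)
mkdir -p "$root/.claude/skills/ccw-help/index" \
         "$root/.claude/commands/workflow"
touch "$root/.claude/commands/workflow/lite-plan.md"

base="$root/.claude/skills/ccw-help/index"       # anchor for source paths
source_rel="../../../commands/workflow/lite-plan.md"

# cd through the relative directory part, then reattach the filename.
resolved=$(cd "$base/$(dirname "$source_rel")" && pwd)/$(basename "$source_rel")

ok=no
[ -f "$resolved" ] && ok=yes && echo "source resolves OK"
rm -rf "$root"
```

Resolving via `cd … && pwd` normalizes the `..` components without requiring GNU `realpath`.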

## Configuration

| Parameter | Default | Description |
|------|--------|------|
| max_results | 5 | Maximum number of search results |
| show_source | true | Whether to show source file paths |

## CLI Integration

| Scenario | CLI Hint | Purpose |
|------|----------|------|
| Complex queries | `gemini --mode analysis` | Multi-file analysis and comparison |
| Documentation | - | Read source files directly |

## Slash Commands

```bash
@@ -145,33 +92,25 @@ CCW-Help 使用 JSON 索引实现快速查询(无 reference 文件夹,直接

## Maintenance

### 更新索引
### Update Index

```bash
cd D:/Claude_dms3/.claude/skills/ccw-help
python scripts/analyze_commands.py
```

Script functions:
1. Scan the `commands/` and `agents/` directories
2. Extract YAML frontmatter metadata
3. Generate relative path references (no reference copies)
4. Rebuild all index files
Script function: scan the commands/ and agents/ directories and generate a unified command.json

## System Statistics
## Statistics

- **Commands**: 78
- **Agents**: 14
- **Categories**: 5 (workflow, cli, memory, task, general)
- **Essential**: 14 core commands
- **Commands**: 88+
- **Agents**: 16
- **Essential**: 10 core commands

## Core Principle

**⚠️ Intelligent synthesis, not template copying**
**Intelligent synthesis, not template copying**

- ✅ Understand the user's specific situation
- ✅ Synthesize information from multiple sources
- ✅ Customize examples and explanations
- ✅ Provide progressive depth
- ❌ Copy documentation verbatim
- ❌ Return unprocessed JSON
- Understand the user's specific situation
- Synthesize information from multiple sources
- Customize examples and explanations

.claude/skills/ccw-help/command.json (new file, 520 lines)
@@ -0,0 +1,520 @@
|
||||
{
|
||||
"_metadata": {
|
||||
"version": "2.0.0",
|
||||
"total_commands": 45,
|
||||
"total_agents": 16,
|
||||
"description": "Unified CCW-Help command index"
|
||||
},
|
||||
|
||||
"essential_commands": [
|
||||
"/workflow:lite-plan",
|
||||
"/workflow:lite-fix",
|
||||
"/workflow:plan",
|
||||
"/workflow:execute",
|
||||
"/workflow:session:start",
|
||||
"/workflow:review-session-cycle",
|
||||
"/memory:docs",
|
||||
"/workflow:brainstorm:artifacts",
|
||||
"/workflow:action-plan-verify",
|
||||
"/version"
|
||||
],
|
||||
|
||||
"commands": [
|
||||
{
|
||||
"name": "lite-plan",
|
||||
"command": "/workflow:lite-plan",
|
||||
"description": "Lightweight interactive planning with in-memory plan, dispatches to lite-execute",
|
||||
"arguments": "[-e|--explore] \"task\"|file.md",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"flow": {
|
||||
"next_steps": ["/workflow:lite-execute"],
|
||||
"alternatives": ["/workflow:plan"]
|
||||
},
|
||||
"source": "../../../commands/workflow/lite-plan.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-execute",
|
||||
"command": "/workflow:lite-execute",
|
||||
"description": "Execute based on in-memory plan or prompt",
|
||||
"arguments": "[--in-memory] \"task\"|file-path",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"flow": {
|
||||
"prerequisites": ["/workflow:lite-plan", "/workflow:lite-fix"]
|
||||
},
|
||||
"source": "../../../commands/workflow/lite-execute.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-fix",
|
||||
"command": "/workflow:lite-fix",
|
||||
"description": "Lightweight bug diagnosis and fix with optional hotfix mode",
|
||||
"arguments": "[--hotfix] \"bug description\"",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"flow": {
|
||||
"next_steps": ["/workflow:lite-execute"],
|
||||
"alternatives": ["/workflow:lite-plan"]
|
||||
},
|
||||
"source": "../../../commands/workflow/lite-fix.md"
|
||||
},
|
||||
{
|
||||
"name": "plan",
|
||||
"command": "/workflow:plan",
|
||||
"description": "5-phase planning with task JSON generation",
|
||||
"arguments": "\"description\"|file.md",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"flow": {
|
||||
"next_steps": ["/workflow:action-plan-verify", "/workflow:execute"],
|
||||
"alternatives": ["/workflow:tdd-plan"]
|
||||
},
|
||||
"source": "../../../commands/workflow/plan.md"
|
||||
},
|
||||
{
|
||||
"name": "execute",
|
||||
"command": "/workflow:execute",
|
||||
"description": "Coordinate agent execution with DAG parallel processing",
|
||||
"arguments": "[--resume-session=\"session-id\"]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"flow": {
|
||||
"prerequisites": ["/workflow:plan", "/workflow:tdd-plan"],
|
||||
"next_steps": ["/workflow:review"]
|
||||
},
|
||||
"source": "../../../commands/workflow/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "action-plan-verify",
|
||||
"command": "/workflow:action-plan-verify",
|
||||
"description": "Cross-artifact consistency analysis",
|
||||
"arguments": "[--session session-id]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"flow": {
|
||||
"prerequisites": ["/workflow:plan"],
|
||||
"next_steps": ["/workflow:execute"]
|
||||
},
|
||||
"source": "../../../commands/workflow/action-plan-verify.md"
|
||||
},
|
||||
{
|
||||
"name": "init",
|
||||
"command": "/workflow:init",
|
||||
"description": "Initialize project-level state",
|
||||
"arguments": "[--regenerate]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/init.md"
|
||||
},
|
||||
{
|
||||
"name": "clean",
|
||||
"command": "/workflow:clean",
|
||||
"description": "Intelligent code cleanup with stale artifact discovery",
|
||||
"arguments": "[--dry-run] [\"focus\"]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/clean.md"
|
||||
},
|
||||
{
|
||||
"name": "debug",
|
||||
"command": "/workflow:debug",
|
||||
"description": "Hypothesis-driven debugging with NDJSON logging",
|
||||
"arguments": "\"bug description\"",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/debug.md"
|
||||
},
|
||||
{
|
||||
"name": "replan",
|
||||
"command": "/workflow:replan",
|
||||
"description": "Interactive workflow replanning",
|
||||
"arguments": "[--session id] [task-id] \"requirements\"",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/replan.md"
|
||||
},
|
||||
{
|
||||
"name": "session:start",
|
||||
"command": "/workflow:session:start",
|
||||
"description": "Start or discover workflow sessions",
|
||||
"arguments": "[--type <workflow|review|tdd>] [--auto|--new]",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"flow": {
|
||||
"next_steps": ["/workflow:plan", "/workflow:execute"]
|
||||
},
|
||||
"source": "../../../commands/workflow/session/start.md"
|
||||
},
|
||||
{
|
||||
"name": "session:list",
|
||||
"command": "/workflow:session:list",
|
||||
"description": "List all workflow sessions",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"difficulty": "Beginner",
|
||||
"source": "../../../commands/workflow/session/list.md"
|
||||
},
|
||||
{
|
||||
"name": "session:resume",
|
||||
"command": "/workflow:session:resume",
|
||||
"description": "Resume paused workflow session",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/resume.md"
|
||||
},
|
||||
{
|
||||
"name": "session:complete",
|
||||
"command": "/workflow:session:complete",
|
||||
"description": "Mark session complete and archive",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/complete.md"
|
||||
},
|
||||
{
|
||||
"name": "brainstorm:auto-parallel",
|
||||
"command": "/workflow:brainstorm:auto-parallel",
|
||||
"description": "Parallel brainstorming with multi-role analysis",
|
||||
"arguments": "\"topic\" [--count N]",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
|
||||
},
|
||||
{
|
||||
"name": "brainstorm:artifacts",
|
||||
"command": "/workflow:brainstorm:artifacts",
|
||||
"description": "Interactive clarification with guidance specification",
|
||||
"arguments": "\"topic\" [--count N]",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"source": "../../../commands/workflow/brainstorm/artifacts.md"
|
||||
},
|
||||
{
|
||||
"name": "brainstorm:synthesis",
|
||||
"command": "/workflow:brainstorm:synthesis",
|
||||
"description": "Refine role analyses through Q&A",
|
||||
"arguments": "[--session session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/brainstorm/synthesis.md"
|
||||
},
|
||||
{
|
||||
"name": "tdd-plan",
|
||||
"command": "/workflow:tdd-plan",
|
||||
"description": "TDD planning with Red-Green-Refactor cycles",
|
||||
"arguments": "\"feature\"|file.md",
|
||||
"category": "workflow",
|
||||
"difficulty": "Advanced",
|
||||
"flow": {
|
||||
"next_steps": ["/workflow:execute", "/workflow:tdd-verify"],
|
||||
"alternatives": ["/workflow:plan"]
|
||||
},
|
||||
"source": "../../../commands/workflow/tdd-plan.md"
|
||||
},
|
||||
{
|
||||
"name": "tdd-verify",
|
||||
"command": "/workflow:tdd-verify",
|
||||
"description": "Verify TDD compliance with coverage analysis",
|
||||
"arguments": "[session-id]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Advanced",
|
||||
"flow": {
|
||||
"prerequisites": ["/workflow:execute"]
|
||||
},
|
||||
"source": "../../../commands/workflow/tdd-verify.md"
|
||||
},
|
||||
{
|
||||
"name": "review",
|
||||
"command": "/workflow:review",
|
||||
"description": "Post-implementation review (security/architecture/quality)",
|
||||
"arguments": "[--type=<type>] [session-id]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review.md"
|
||||
},
|
||||
{
|
||||
"name": "review-session-cycle",
|
||||
"command": "/workflow:review-session-cycle",
|
||||
"description": "Multi-dimensional code review across 7 dimensions",
|
||||
"arguments": "[session-id] [--dimensions=...]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"flow": {
|
||||
"prerequisites": ["/workflow:execute"],
|
||||
"next_steps": ["/workflow:review-fix"]
|
||||
},
|
||||
"source": "../../../commands/workflow/review-session-cycle.md"
|
||||
},
|
||||
{
|
||||
"name": "review-module-cycle",
|
||||
"command": "/workflow:review-module-cycle",
|
||||
"description": "Module-based multi-dimensional review",
|
||||
"arguments": "<path-pattern> [--dimensions=...]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-module-cycle.md"
|
||||
},
|
||||
{
|
||||
"name": "review-fix",
|
||||
"command": "/workflow:review-fix",
|
||||
"description": "Automated fixing of review findings",
|
||||
"arguments": "<export-file|review-dir>",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"flow": {
|
||||
"prerequisites": ["/workflow:review-session-cycle", "/workflow:review-module-cycle"]
|
||||
},
|
||||
"source": "../../../commands/workflow/review-fix.md"
|
||||
},
|
||||
{
|
||||
"name": "test-gen",
|
||||
"command": "/workflow:test-gen",
|
||||
    "description": "Generate test session from implementation",
    "arguments": "source-session-id",
    "category": "workflow",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/test-gen.md"
  },
  {
    "name": "test-fix-gen",
    "command": "/workflow:test-fix-gen",
    "description": "Create test-fix session with strategy",
    "arguments": "session-id|\"description\"|file",
    "category": "workflow",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/test-fix-gen.md"
  },
  {
    "name": "test-cycle-execute",
    "command": "/workflow:test-cycle-execute",
    "description": "Execute test-fix with iterative cycles",
    "arguments": "[--resume-session=id] [--max-iterations=N]",
    "category": "workflow",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/test-cycle-execute.md"
  },
  {
    "name": "issue:new",
    "command": "/issue:new",
    "description": "Create issue from GitHub URL or text",
    "arguments": "<url|text> [--priority 1-5]",
    "category": "issue",
    "difficulty": "Intermediate",
    "source": "../../../commands/issue/new.md"
  },
  {
    "name": "issue:discover",
    "command": "/issue:discover",
    "description": "Discover issues from multiple perspectives",
    "arguments": "<path> [--perspectives=...]",
    "category": "issue",
    "difficulty": "Intermediate",
    "source": "../../../commands/issue/discover.md"
  },
  {
    "name": "issue:plan",
    "command": "/issue:plan",
    "description": "Batch plan issue resolution",
    "arguments": "--all-pending|<ids>",
    "category": "issue",
    "difficulty": "Intermediate",
    "flow": {
      "next_steps": ["/issue:queue"]
    },
    "source": "../../../commands/issue/plan.md"
  },
  {
    "name": "issue:queue",
    "command": "/issue:queue",
    "description": "Form execution queue from solutions",
    "arguments": "[--rebuild]",
    "category": "issue",
    "difficulty": "Intermediate",
    "flow": {
      "prerequisites": ["/issue:plan"],
      "next_steps": ["/issue:execute"]
    },
    "source": "../../../commands/issue/queue.md"
  },
  {
    "name": "issue:execute",
    "command": "/issue:execute",
    "description": "Execute queue with DAG parallel",
    "arguments": "[--worktree]",
    "category": "issue",
    "difficulty": "Intermediate",
    "flow": {
      "prerequisites": ["/issue:queue"]
    },
    "source": "../../../commands/issue/execute.md"
  },
  {
    "name": "docs",
    "command": "/memory:docs",
    "description": "Plan documentation workflow",
    "arguments": "[path] [--tool <tool>]",
    "category": "memory",
    "difficulty": "Intermediate",
    "essential": true,
    "flow": {
      "next_steps": ["/workflow:execute"]
    },
    "source": "../../../commands/memory/docs.md"
  },
  {
    "name": "update-related",
    "command": "/memory:update-related",
    "description": "Update docs for git-changed modules",
    "arguments": "[--tool <tool>]",
    "category": "memory",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/update-related.md"
  },
  {
    "name": "update-full",
    "command": "/memory:update-full",
    "description": "Update all CLAUDE.md files",
    "arguments": "[--tool <tool>]",
    "category": "memory",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/update-full.md"
  },
  {
    "name": "skill-memory",
    "command": "/memory:skill-memory",
    "description": "Generate SKILL.md with loading index",
    "arguments": "[path] [--regenerate]",
    "category": "memory",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/skill-memory.md"
  },
  {
    "name": "load-skill-memory",
    "command": "/memory:load-skill-memory",
    "description": "Activate SKILL package for task",
    "arguments": "[skill_name] \"task intent\"",
    "category": "memory",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/load-skill-memory.md"
  },
  {
    "name": "load",
    "command": "/memory:load",
    "description": "Load project context via CLI",
    "arguments": "[--tool <tool>] \"context\"",
    "category": "memory",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/load.md"
  },
  {
    "name": "compact",
    "command": "/memory:compact",
    "description": "Compact session memory for recovery",
    "arguments": "[description]",
    "category": "memory",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/compact.md"
  },
  {
    "name": "task:create",
    "command": "/task:create",
    "description": "Generate task JSON from description",
    "arguments": "\"task title\"",
    "category": "task",
    "difficulty": "Intermediate",
    "source": "../../../commands/task/create.md"
  },
  {
    "name": "task:execute",
    "command": "/task:execute",
    "description": "Execute task JSON with agent",
    "arguments": "task-id",
    "category": "task",
    "difficulty": "Intermediate",
    "source": "../../../commands/task/execute.md"
  },
  {
    "name": "task:breakdown",
    "command": "/task:breakdown",
    "description": "Decompose task into subtasks",
    "arguments": "task-id",
    "category": "task",
    "difficulty": "Intermediate",
    "source": "../../../commands/task/breakdown.md"
  },
  {
    "name": "task:replan",
    "command": "/task:replan",
    "description": "Update task with new requirements",
    "arguments": "task-id [\"text\"|file]",
    "category": "task",
    "difficulty": "Intermediate",
    "source": "../../../commands/task/replan.md"
  },
  {
    "name": "version",
    "command": "/version",
    "description": "Display version and check updates",
    "arguments": "",
    "category": "general",
    "difficulty": "Beginner",
    "essential": true,
    "source": "../../../commands/version.md"
  },
  {
    "name": "enhance-prompt",
    "command": "/enhance-prompt",
    "description": "Transform prompts with session memory",
    "arguments": "user input",
    "category": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/enhance-prompt.md"
  },
  {
    "name": "cli-init",
    "command": "/cli:cli-init",
    "description": "Initialize CLI tool configurations (.gemini/, .qwen/) with technology-aware ignore rules",
    "arguments": "[--tool gemini|qwen|all] [--preview] [--output path]",
    "category": "cli",
    "difficulty": "Intermediate",
    "source": "../../../commands/cli/cli-init.md"
  }
  ],

  "agents": [
    { "name": "action-planning-agent", "description": "Task planning and generation", "source": "../../../agents/action-planning-agent.md" },
    { "name": "cli-execution-agent", "description": "CLI tool execution", "source": "../../../agents/cli-execution-agent.md" },
    { "name": "cli-explore-agent", "description": "Codebase exploration", "source": "../../../agents/cli-explore-agent.md" },
    { "name": "cli-lite-planning-agent", "description": "Lightweight planning", "source": "../../../agents/cli-lite-planning-agent.md" },
    { "name": "cli-planning-agent", "description": "CLI-based planning", "source": "../../../agents/cli-planning-agent.md" },
    { "name": "code-developer", "description": "Code implementation", "source": "../../../agents/code-developer.md" },
    { "name": "conceptual-planning-agent", "description": "Conceptual analysis", "source": "../../../agents/conceptual-planning-agent.md" },
    { "name": "context-search-agent", "description": "Context discovery", "source": "../../../agents/context-search-agent.md" },
    { "name": "doc-generator", "description": "Documentation generation", "source": "../../../agents/doc-generator.md" },
    { "name": "issue-plan-agent", "description": "Issue planning", "source": "../../../agents/issue-plan-agent.md" },
    { "name": "issue-queue-agent", "description": "Issue queue formation", "source": "../../../agents/issue-queue-agent.md" },
    { "name": "memory-bridge", "description": "Documentation coordination", "source": "../../../agents/memory-bridge.md" },
    { "name": "test-context-search-agent", "description": "Test context collection", "source": "../../../agents/test-context-search-agent.md" },
    { "name": "test-fix-agent", "description": "Test execution and fixing", "source": "../../../agents/test-fix-agent.md" },
    { "name": "ui-design-agent", "description": "UI design and prototyping", "source": "../../../agents/ui-design-agent.md" },
    { "name": "universal-executor", "description": "Universal task execution", "source": "../../../agents/universal-executor.md" }
  ],

  "categories": ["workflow", "issue", "memory", "task", "general", "cli"]
}
@@ -1,82 +0,0 @@
[
  {
    "name": "action-planning-agent",
    "description": "|",
    "source": "../../../agents/action-planning-agent.md"
  },
  {
    "name": "cli-execution-agent",
    "description": "|",
    "source": "../../../agents/cli-execution-agent.md"
  },
  {
    "name": "cli-explore-agent",
    "description": "|",
    "source": "../../../agents/cli-explore-agent.md"
  },
  {
    "name": "cli-lite-planning-agent",
    "description": "|",
    "source": "../../../agents/cli-lite-planning-agent.md"
  },
  {
    "name": "cli-planning-agent",
    "description": "|",
    "source": "../../../agents/cli-planning-agent.md"
  },
  {
    "name": "code-developer",
    "description": "|",
    "source": "../../../agents/code-developer.md"
  },
  {
    "name": "conceptual-planning-agent",
    "description": "|",
    "source": "../../../agents/conceptual-planning-agent.md"
  },
  {
    "name": "context-search-agent",
    "description": "|",
    "source": "../../../agents/context-search-agent.md"
  },
  {
    "name": "doc-generator",
    "description": "|",
    "source": "../../../agents/doc-generator.md"
  },
  {
    "name": "issue-plan-agent",
    "description": "|",
    "source": "../../../agents/issue-plan-agent.md"
  },
  {
    "name": "issue-queue-agent",
    "description": "|",
    "source": "../../../agents/issue-queue-agent.md"
  },
  {
    "name": "memory-bridge",
    "description": "Execute complex project documentation updates using script coordination",
    "source": "../../../agents/memory-bridge.md"
  },
  {
    "name": "test-context-search-agent",
    "description": "|",
    "source": "../../../agents/test-context-search-agent.md"
  },
  {
    "name": "test-fix-agent",
    "description": "|",
    "source": "../../../agents/test-fix-agent.md"
  },
  {
    "name": "ui-design-agent",
    "description": "|",
    "source": "../../../agents/ui-design-agent.md"
  },
  {
    "name": "universal-executor",
    "description": "|",
    "source": "../../../agents/universal-executor.md"
  }
]
@@ -1,882 +0,0 @@
[
  {
    "name": "cli-init",
    "command": "/cli:cli-init",
    "description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
    "arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
    "category": "cli",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/cli/cli-init.md"
  },
  {
    "name": "enhance-prompt",
    "command": "/enhance-prompt",
    "description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
    "arguments": "user input to enhance",
    "category": "general",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/enhance-prompt.md"
  },
  {
    "name": "issue:discover",
    "command": "/issue:discover",
    "description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
    "arguments": "<path-pattern> [--perspectives=bug,ux,...] [--external]",
    "category": "issue",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/issue/discover.md"
  },
  {
    "name": "execute",
    "command": "/issue:execute",
    "description": "Execute queue with codex using DAG-based parallel orchestration (solution-level)",
    "arguments": "[--worktree] [--queue <queue-id>]",
    "category": "issue",
    "subcategory": null,
    "usage_scenario": "implementation",
    "difficulty": "Intermediate",
    "source": "../../../commands/issue/execute.md"
  },
  {
    "name": "new",
    "command": "/issue:new",
    "description": "Create structured issue from GitHub URL or text description",
    "arguments": "<github-url | text-description> [--priority 1-5]",
    "category": "issue",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/issue/new.md"
  },
  {
    "name": "plan",
    "command": "/issue:plan",
    "description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
    "arguments": "--all-pending | <issue-id>[,<issue-id>,...] [--batch-size 3]",
    "category": "issue",
    "subcategory": null,
    "usage_scenario": "planning",
    "difficulty": "Intermediate",
    "source": "../../../commands/issue/plan.md"
  },
  {
    "name": "queue",
    "command": "/issue:queue",
    "description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
    "arguments": "[--rebuild] [--issue <id>]",
    "category": "issue",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/issue/queue.md"
  },
  {
    "name": "code-map-memory",
    "command": "/memory:code-map-memory",
    "description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)",
    "arguments": "\"feature-keyword\" [--regenerate] [--tool <gemini|qwen>]",
    "category": "memory",
    "subcategory": null,
    "usage_scenario": "documentation",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/code-map-memory.md"
  },
  {
    "name": "compact",
    "command": "/memory:compact",
    "description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
    "arguments": "[optional: session description]",
    "category": "memory",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/compact.md"
  },
  {
    "name": "docs-full-cli",
    "command": "/memory:docs-full-cli",
    "description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
    "arguments": "[path] [--tool <gemini|qwen|codex>]",
    "category": "memory",
    "subcategory": null,
    "usage_scenario": "documentation",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/docs-full-cli.md"
  },
  {
    "name": "docs-related-cli",
    "command": "/memory:docs-related-cli",
    "description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
    "arguments": "[--tool <gemini|qwen|codex>]",
    "category": "memory",
    "subcategory": null,
    "usage_scenario": "documentation",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/docs-related-cli.md"
  },
  {
    "name": "docs",
    "command": "/memory:docs",
    "description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
    "arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
    "category": "memory",
    "subcategory": null,
    "usage_scenario": "documentation",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/docs.md"
  },
  {
    "name": "load-skill-memory",
    "command": "/memory:load-skill-memory",
    "description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords",
    "arguments": "[skill_name] \"task intent description\"",
    "category": "memory",
    "subcategory": null,
    "usage_scenario": "documentation",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/load-skill-memory.md"
  },
  {
    "name": "load",
    "command": "/memory:load",
    "description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
    "arguments": "[--tool gemini|qwen] \"task context description\"",
    "category": "memory",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/load.md"
  },
  {
    "name": "skill-memory",
    "command": "/memory:skill-memory",
    "description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)",
    "arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]",
    "category": "memory",
    "subcategory": null,
    "usage_scenario": "documentation",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/skill-memory.md"
  },
  {
    "name": "style-skill-memory",
    "command": "/memory:style-skill-memory",
    "description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
    "arguments": "[package-name] [--regenerate]",
    "category": "memory",
    "subcategory": null,
    "usage_scenario": "documentation",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/style-skill-memory.md"
  },
  {
    "name": "swagger-docs",
    "command": "/memory:swagger-docs",
    "description": "Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests",
    "arguments": "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]",
    "category": "memory",
    "subcategory": null,
    "usage_scenario": "documentation",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/swagger-docs.md"
  },
  {
    "name": "tech-research-rules",
    "command": "/memory:tech-research-rules",
    "description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
    "arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
    "category": "memory",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/tech-research-rules.md"
  },
  {
    "name": "update-full",
    "command": "/memory:update-full",
    "description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
    "arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
    "category": "memory",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/update-full.md"
  },
  {
    "name": "update-related",
    "command": "/memory:update-related",
    "description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
    "arguments": "[--tool gemini|qwen|codex]",
    "category": "memory",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/update-related.md"
  },
  {
    "name": "workflow-skill-memory",
    "command": "/memory:workflow-skill-memory",
    "description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)",
    "arguments": "session <session-id> | all",
    "category": "memory",
    "subcategory": null,
    "usage_scenario": "documentation",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/workflow-skill-memory.md"
  },
  {
    "name": "breakdown",
    "command": "/task:breakdown",
    "description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order",
    "arguments": "task-id",
    "category": "task",
    "subcategory": null,
    "usage_scenario": "planning",
    "difficulty": "Intermediate",
    "source": "../../../commands/task/breakdown.md"
  },
  {
    "name": "create",
    "command": "/task:create",
    "description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
    "arguments": "\"task title\"",
    "category": "task",
    "subcategory": null,
    "usage_scenario": "implementation",
    "difficulty": "Intermediate",
    "source": "../../../commands/task/create.md"
  },
  {
    "name": "execute",
    "command": "/task:execute",
    "description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
    "arguments": "task-id",
    "category": "task",
    "subcategory": null,
    "usage_scenario": "implementation",
    "difficulty": "Intermediate",
    "source": "../../../commands/task/execute.md"
  },
  {
    "name": "replan",
    "command": "/task:replan",
    "description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json",
    "arguments": "task-id [\"text\"|file.md] | --batch [verification-report.md]",
    "category": "task",
    "subcategory": null,
    "usage_scenario": "planning",
    "difficulty": "Intermediate",
    "source": "../../../commands/task/replan.md"
  },
  {
    "name": "version",
    "command": "/version",
    "description": "Display Claude Code version information and check for updates",
    "arguments": "",
    "category": "general",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Beginner",
    "source": "../../../commands/version.md"
  },
  {
    "name": "action-plan-verify",
    "command": "/workflow:action-plan-verify",
    "description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
    "arguments": "[optional: --session session-id]",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "planning",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/action-plan-verify.md"
  },
  {
    "name": "api-designer",
    "command": "/workflow:brainstorm:api-designer",
    "description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective",
    "arguments": "optional topic - uses existing framework if available",
    "category": "workflow",
    "subcategory": "brainstorm",
    "usage_scenario": "planning",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/brainstorm/api-designer.md"
  },
  {
    "name": "artifacts",
    "command": "/workflow:brainstorm:artifacts",
    "description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
    "arguments": "topic or challenge description [--count N]",
    "category": "workflow",
    "subcategory": "brainstorm",
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/brainstorm/artifacts.md"
  },
  {
    "name": "auto-parallel",
    "command": "/workflow:brainstorm:auto-parallel",
    "description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
    "arguments": "topic or challenge description [--count N]",
    "category": "workflow",
    "subcategory": "brainstorm",
    "usage_scenario": "general",
    "difficulty": "Advanced",
    "source": "../../../commands/workflow/brainstorm/auto-parallel.md"
  },
  {
    "name": "data-architect",
    "command": "/workflow:brainstorm:data-architect",
    "description": "Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective",
    "arguments": "optional topic - uses existing framework if available",
    "category": "workflow",
    "subcategory": "brainstorm",
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/brainstorm/data-architect.md"
  },
  {
    "name": "product-manager",
    "command": "/workflow:brainstorm:product-manager",
    "description": "Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective",
    "arguments": "optional topic - uses existing framework if available",
    "category": "workflow",
    "subcategory": "brainstorm",
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/brainstorm/product-manager.md"
  },
  {
    "name": "product-owner",
    "command": "/workflow:brainstorm:product-owner",
    "description": "Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective",
    "arguments": "optional topic - uses existing framework if available",
    "category": "workflow",
    "subcategory": "brainstorm",
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/brainstorm/product-owner.md"
  },
  {
    "name": "scrum-master",
    "command": "/workflow:brainstorm:scrum-master",
    "description": "Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective",
    "arguments": "optional topic - uses existing framework if available",
    "category": "workflow",
    "subcategory": "brainstorm",
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/brainstorm/scrum-master.md"
  },
  {
    "name": "subject-matter-expert",
    "command": "/workflow:brainstorm:subject-matter-expert",
    "description": "Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective",
    "arguments": "optional topic - uses existing framework if available",
    "category": "workflow",
    "subcategory": "brainstorm",
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/brainstorm/subject-matter-expert.md"
  },
  {
    "name": "synthesis",
    "command": "/workflow:brainstorm:synthesis",
    "description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
    "arguments": "[optional: --session session-id]",
    "category": "workflow",
    "subcategory": "brainstorm",
    "usage_scenario": "general",
    "difficulty": "Advanced",
    "source": "../../../commands/workflow/brainstorm/synthesis.md"
  },
  {
    "name": "system-architect",
    "command": "/workflow:brainstorm:system-architect",
    "description": "Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective",
    "arguments": "optional topic - uses existing framework if available",
    "category": "workflow",
    "subcategory": "brainstorm",
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/brainstorm/system-architect.md"
  },
  {
    "name": "ui-designer",
    "command": "/workflow:brainstorm:ui-designer",
    "description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective",
    "arguments": "optional topic - uses existing framework if available",
    "category": "workflow",
    "subcategory": "brainstorm",
    "usage_scenario": "planning",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/brainstorm/ui-designer.md"
  },
  {
    "name": "ux-expert",
    "command": "/workflow:brainstorm:ux-expert",
    "description": "Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective",
    "arguments": "optional topic - uses existing framework if available",
    "category": "workflow",
    "subcategory": "brainstorm",
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/brainstorm/ux-expert.md"
  },
  {
    "name": "clean",
    "command": "/workflow:clean",
    "description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
    "arguments": "[--dry-run] [\"focus area\"]",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/clean.md"
  },
  {
    "name": "debug",
    "command": "/workflow:debug",
    "description": "Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved",
    "arguments": "\"bug description or error message\"",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/debug.md"
  },
  {
    "name": "execute",
    "command": "/workflow:execute",
    "description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
    "arguments": "[--resume-session=\"session-id\"]",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "implementation",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/execute.md"
  },
  {
    "name": "init",
    "command": "/workflow:init",
    "description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
    "arguments": "[--regenerate]",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/init.md"
  },
  {
    "name": "lite-execute",
    "command": "/workflow:lite-execute",
    "description": "Execute tasks based on in-memory plan, prompt description, or file content",
    "arguments": "[--in-memory] [\"task description\"|file-path]",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "implementation",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/lite-execute.md"
  },
  {
    "name": "lite-fix",
    "command": "/workflow:lite-fix",
    "description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
    "arguments": "[--hotfix] \"bug description or issue reference\"",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/lite-fix.md"
  },
  {
    "name": "lite-plan",
    "command": "/workflow:lite-plan",
    "description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
    "arguments": "[-e|--explore] \"task description\"|file.md",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "planning",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/lite-plan.md"
  },
  {
    "name": "plan",
    "command": "/workflow:plan",
    "description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
    "arguments": "\"text description\"|file.md",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "planning",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/plan.md"
  },
  {
    "name": "replan",
    "command": "/workflow:replan",
    "description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
    "arguments": "[--session session-id] [task-id] \"requirements\"|file.md [--interactive]",
    "category": "workflow",
    "subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/replan.md"
|
||||
},
|
||||
{
|
||||
"name": "review-fix",
|
||||
"command": "/workflow:review-fix",
|
||||
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
|
||||
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "analysis",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-fix.md"
|
||||
},
|
||||
{
|
||||
"name": "review-module-cycle",
|
||||
"command": "/workflow:review-module-cycle",
|
||||
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
|
||||
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "analysis",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-module-cycle.md"
|
||||
},
|
||||
{
|
||||
"name": "review-session-cycle",
|
||||
"command": "/workflow:review-session-cycle",
|
||||
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
|
||||
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-session-cycle.md"
|
||||
},
|
||||
{
|
||||
"name": "review",
|
||||
"command": "/workflow:review",
|
||||
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
|
||||
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "analysis",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review.md"
|
||||
},
|
||||
{
|
||||
"name": "complete",
|
||||
"command": "/workflow:session:complete",
|
||||
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/complete.md"
|
||||
},
|
||||
{
|
||||
"name": "list",
|
||||
"command": "/workflow:session:list",
|
||||
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Beginner",
|
||||
"source": "../../../commands/workflow/session/list.md"
|
||||
},
|
||||
{
|
||||
"name": "resume",
|
||||
"command": "/workflow:session:resume",
|
||||
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/resume.md"
|
||||
},
|
||||
{
|
||||
"name": "solidify",
|
||||
"command": "/workflow:session:solidify",
|
||||
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
|
||||
"arguments": "[--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/solidify.md"
|
||||
},
|
||||
{
|
||||
"name": "start",
|
||||
"command": "/workflow:session:start",
|
||||
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
|
||||
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/start.md"
|
||||
},
|
||||
{
|
||||
"name": "tdd-plan",
|
||||
"command": "/workflow:tdd-plan",
|
||||
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
|
||||
"arguments": "\\\"feature description\\\"|file.md",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tdd-plan.md"
|
||||
},
|
||||
{
|
||||
"name": "tdd-verify",
|
||||
"command": "/workflow:tdd-verify",
|
||||
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis",
|
||||
"arguments": "[optional: WFS-session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tdd-verify.md"
|
||||
},
|
||||
{
|
||||
"name": "test-cycle-execute",
|
||||
"command": "/workflow:test-cycle-execute",
|
||||
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
|
||||
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/test-cycle-execute.md"
|
||||
},
|
||||
{
|
||||
"name": "test-fix-gen",
|
||||
"command": "/workflow:test-fix-gen",
|
||||
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
|
||||
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/test-fix-gen.md"
|
||||
},
|
||||
{
|
||||
"name": "test-gen",
|
||||
"command": "/workflow:test-gen",
|
||||
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
|
||||
"arguments": "source-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/test-gen.md"
|
||||
},
|
||||
{
|
||||
"name": "conflict-resolution",
|
||||
"command": "/workflow:tools:conflict-resolution",
|
||||
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
|
||||
"arguments": "--session WFS-session-id --context path/to/context-package.json",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/conflict-resolution.md"
|
||||
},
|
||||
{
|
||||
"name": "gather",
|
||||
"command": "/workflow:tools:gather",
|
||||
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
|
||||
"arguments": "--session WFS-session-id \\\"task description\\",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/context-gather.md"
|
||||
},
|
||||
{
|
||||
"name": "task-generate-agent",
|
||||
"command": "/workflow:tools:task-generate-agent",
|
||||
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
|
||||
"arguments": "--session WFS-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/task-generate-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "task-generate-tdd",
|
||||
"command": "/workflow:tools:task-generate-tdd",
|
||||
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
|
||||
"arguments": "--session WFS-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
|
||||
},
|
||||
{
|
||||
"name": "tdd-coverage-analysis",
|
||||
"command": "/workflow:tools:tdd-coverage-analysis",
|
||||
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
|
||||
"arguments": "--session WFS-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
|
||||
},
|
||||
{
|
||||
"name": "test-concept-enhanced",
|
||||
"command": "/workflow:tools:test-concept-enhanced",
|
||||
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
|
||||
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
|
||||
},
|
||||
{
|
||||
"name": "test-context-gather",
|
||||
"command": "/workflow:tools:test-context-gather",
|
||||
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
|
||||
"arguments": "--session WFS-test-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-context-gather.md"
|
||||
},
|
||||
{
|
||||
"name": "test-task-generate",
|
||||
"command": "/workflow:tools:test-task-generate",
|
||||
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
|
||||
"arguments": "--session WFS-test-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-task-generate.md"
|
||||
},
|
||||
{
|
||||
"name": "animation-extract",
|
||||
"command": "/workflow:ui-design:animation-extract",
|
||||
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/animation-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:codify-style",
|
||||
"command": "/workflow:ui-design:codify-style",
|
||||
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
|
||||
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/codify-style.md"
|
||||
},
|
||||
{
|
||||
"name": "design-sync",
|
||||
"command": "/workflow:ui-design:design-sync",
|
||||
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
|
||||
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/design-sync.md"
|
||||
},
|
||||
{
|
||||
"name": "explore-auto",
|
||||
"command": "/workflow:ui-design:explore-auto",
|
||||
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
|
||||
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/explore-auto.md"
|
||||
},
|
||||
{
|
||||
"name": "generate",
|
||||
"command": "/workflow:ui-design:generate",
|
||||
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
|
||||
"arguments": "[--design-id <id>] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/generate.md"
|
||||
},
|
||||
{
|
||||
"name": "imitate-auto",
|
||||
"command": "/workflow:ui-design:imitate-auto",
|
||||
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
|
||||
"arguments": "[--input \"<value>\"] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:import-from-code",
|
||||
"command": "/workflow:ui-design:import-from-code",
|
||||
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/import-from-code.md"
|
||||
},
|
||||
{
|
||||
"name": "layout-extract",
|
||||
"command": "/workflow:ui-design:layout-extract",
|
||||
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/layout-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:reference-page-generator",
|
||||
"command": "/workflow:ui-design:reference-page-generator",
|
||||
"description": "Generate multi-component reference pages and documentation from design run extraction",
|
||||
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
|
||||
},
|
||||
{
|
||||
"name": "style-extract",
|
||||
"command": "/workflow:ui-design:style-extract",
|
||||
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/style-extract.md"
|
||||
}
|
||||
]
|
||||
@@ -1,914 +0,0 @@
|
||||
{
|
||||
"cli": {
|
||||
"_root": [
|
||||
{
|
||||
"name": "cli-init",
|
||||
"command": "/cli:cli-init",
|
||||
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
|
||||
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
|
||||
"category": "cli",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/cli/cli-init.md"
|
||||
}
|
||||
]
|
||||
},
|
||||
"general": {
|
||||
"_root": [
|
||||
{
|
||||
"name": "enhance-prompt",
|
||||
"command": "/enhance-prompt",
|
||||
"description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
|
||||
"arguments": "user input to enhance",
|
||||
"category": "general",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/enhance-prompt.md"
|
||||
},
|
||||
{
|
||||
"name": "version",
|
||||
"command": "/version",
|
||||
"description": "Display Claude Code version information and check for updates",
|
||||
"arguments": "",
|
||||
"category": "general",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Beginner",
|
||||
"source": "../../../commands/version.md"
|
||||
}
|
||||
]
|
||||
},
|
||||
"issue": {
|
||||
"_root": [
|
||||
{
|
||||
"name": "issue:discover",
|
||||
"command": "/issue:discover",
|
||||
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
|
||||
"arguments": "<path-pattern> [--perspectives=bug,ux,...] [--external]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/discover.md"
|
||||
},
|
||||
{
|
||||
"name": "execute",
|
||||
"command": "/issue:execute",
|
||||
"description": "Execute queue with codex using DAG-based parallel orchestration (solution-level)",
|
||||
"arguments": "[--worktree] [--queue <queue-id>]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "new",
|
||||
"command": "/issue:new",
|
||||
"description": "Create structured issue from GitHub URL or text description",
|
||||
"arguments": "<github-url | text-description> [--priority 1-5]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/new.md"
|
||||
},
|
||||
{
|
||||
"name": "plan",
|
||||
"command": "/issue:plan",
|
||||
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
|
||||
"arguments": "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3] ",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/plan.md"
|
||||
},
|
||||
{
|
||||
"name": "queue",
|
||||
"command": "/issue:queue",
|
||||
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
|
||||
"arguments": "[--rebuild] [--issue <id>]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/queue.md"
|
||||
}
|
||||
]
|
||||
},
|
||||
"memory": {
|
||||
"_root": [
|
||||
{
|
||||
"name": "code-map-memory",
|
||||
"command": "/memory:code-map-memory",
|
||||
"description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)",
|
||||
"arguments": "\\\"feature-keyword\\\" [--regenerate] [--tool <gemini|qwen>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/code-map-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "compact",
|
||||
"command": "/memory:compact",
|
||||
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
|
||||
"arguments": "[optional: session description]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/compact.md"
|
||||
},
|
||||
{
|
||||
"name": "docs-full-cli",
|
||||
"command": "/memory:docs-full-cli",
|
||||
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/docs-full-cli.md"
|
||||
},
|
||||
{
|
||||
"name": "docs-related-cli",
|
||||
"command": "/memory:docs-related-cli",
|
||||
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
|
||||
"arguments": "[--tool <gemini|qwen|codex>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/docs-related-cli.md"
|
||||
},
|
||||
{
|
||||
"name": "docs",
|
||||
"command": "/memory:docs",
|
||||
"description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/docs.md"
|
||||
},
|
||||
{
|
||||
"name": "load-skill-memory",
|
||||
"command": "/memory:load-skill-memory",
|
||||
"description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords",
|
||||
"arguments": "[skill_name] \\\"task intent description\\",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/load-skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "load",
|
||||
"command": "/memory:load",
|
||||
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
|
||||
"arguments": "[--tool gemini|qwen] \\\"task context description\\",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/load.md"
|
||||
},
|
||||
{
|
||||
"name": "skill-memory",
|
||||
"command": "/memory:skill-memory",
|
||||
"description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "style-skill-memory",
|
||||
"command": "/memory:style-skill-memory",
|
||||
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
|
||||
"arguments": "[package-name] [--regenerate]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/style-skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "swagger-docs",
|
||||
"command": "/memory:swagger-docs",
|
||||
"description": "Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/swagger-docs.md"
|
||||
},
|
||||
{
|
||||
"name": "tech-research-rules",
|
||||
"command": "/memory:tech-research-rules",
|
||||
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
|
||||
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/tech-research-rules.md"
|
||||
},
|
||||
{
|
||||
"name": "update-full",
|
||||
"command": "/memory:update-full",
|
||||
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
|
||||
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/update-full.md"
|
||||
},
|
||||
{
|
||||
"name": "update-related",
|
||||
"command": "/memory:update-related",
|
||||
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
|
||||
"arguments": "[--tool gemini|qwen|codex]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/update-related.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow-skill-memory",
|
||||
"command": "/memory:workflow-skill-memory",
|
||||
"description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)",
|
||||
"arguments": "session <session-id> | all",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/workflow-skill-memory.md"
|
||||
}
|
||||
]
|
||||
},
|
||||
"task": {
|
||||
"_root": [
|
||||
{
|
||||
"name": "breakdown",
|
||||
"command": "/task:breakdown",
|
||||
"description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order",
|
||||
"arguments": "task-id",
|
||||
"category": "task",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/breakdown.md"
|
||||
},
|
||||
{
|
||||
"name": "create",
|
||||
"command": "/task:create",
|
||||
"description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
|
||||
"arguments": "\\\"task title\\",
|
||||
"category": "task",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/create.md"
|
||||
},
|
||||
{
|
||||
"name": "execute",
|
||||
"command": "/task:execute",
|
||||
"description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
|
||||
"arguments": "task-id",
|
||||
"category": "task",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "replan",
|
||||
"command": "/task:replan",
|
||||
"description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json",
|
||||
"arguments": "task-id [\\\"text\\\"|file.md] | --batch [verification-report.md]",
|
||||
"category": "task",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/replan.md"
|
||||
}
|
||||
]
|
||||
},
|
||||
"workflow": {
|
||||
"_root": [
|
||||
{
|
||||
"name": "action-plan-verify",
|
||||
"command": "/workflow:action-plan-verify",
|
||||
"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
|
||||
"arguments": "[optional: --session session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/action-plan-verify.md"
|
||||
},
|
||||
{
|
||||
"name": "clean",
|
||||
"command": "/workflow:clean",
|
||||
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
|
||||
"arguments": "[--dry-run] [\\\"focus area\\\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/clean.md"
|
||||
},
|
||||
{
|
||||
"name": "debug",
|
||||
"command": "/workflow:debug",
|
||||
"description": "Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved",
|
||||
"arguments": "\\\"bug description or error message\\",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/debug.md"
|
||||
},
|
||||
{
|
||||
"name": "execute",
|
||||
"command": "/workflow:execute",
|
||||
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
|
||||
"arguments": "[--resume-session=\\\"session-id\\\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "init",
|
||||
"command": "/workflow:init",
|
||||
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
|
||||
"arguments": "[--regenerate]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/init.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-execute",
|
||||
"command": "/workflow:lite-execute",
|
||||
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
|
||||
"arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/lite-execute.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-fix",
|
||||
"command": "/workflow:lite-fix",
|
||||
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
|
||||
"arguments": "[--hotfix] \\\"bug description or issue reference\\",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/lite-fix.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-plan",
|
||||
"command": "/workflow:lite-plan",
|
||||
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
|
||||
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/lite-plan.md"
|
||||
},
|
||||
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
"arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "review-fix",
"command": "/workflow:review-fix",
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-fix.md"
},
{
"name": "review-module-cycle",
"command": "/workflow:review-module-cycle",
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "review",
"command": "/workflow:review",
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis",
"arguments": "[optional: WFS-session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-verify.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-fix-gen.md"
},
{
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-gen.md"
}
],
"brainstorm": [
|
||||
{
|
||||
"name": "api-designer",
|
||||
"command": "/workflow:brainstorm:api-designer",
|
||||
"description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/api-designer.md"
|
||||
},
|
||||
{
|
||||
"name": "artifacts",
|
||||
"command": "/workflow:brainstorm:artifacts",
|
||||
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
|
||||
"arguments": "topic or challenge description [--count N]",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/artifacts.md"
|
||||
},
|
||||
{
|
||||
"name": "auto-parallel",
|
||||
"command": "/workflow:brainstorm:auto-parallel",
|
||||
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
|
||||
"arguments": "topic or challenge description\" [--count N]",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
|
||||
},
|
||||
{
|
||||
"name": "data-architect",
|
||||
"command": "/workflow:brainstorm:data-architect",
|
||||
"description": "Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/data-architect.md"
|
||||
},
|
||||
{
|
||||
"name": "product-manager",
|
||||
"command": "/workflow:brainstorm:product-manager",
|
||||
"description": "Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/product-manager.md"
|
||||
},
|
||||
{
|
||||
"name": "product-owner",
|
||||
"command": "/workflow:brainstorm:product-owner",
|
||||
"description": "Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/product-owner.md"
|
||||
},
|
||||
{
"name": "scrum-master",
"command": "/workflow:brainstorm:scrum-master",
"description": "Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/scrum-master.md"
},
{
"name": "subject-matter-expert",
"command": "/workflow:brainstorm:subject-matter-expert",
"description": "Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/subject-matter-expert.md"
},
{
"name": "synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
},
{
"name": "system-architect",
"command": "/workflow:brainstorm:system-architect",
"description": "Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/system-architect.md"
},
{
"name": "ui-designer",
"command": "/workflow:brainstorm:ui-designer",
"description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ui-designer.md"
},
{
"name": "ux-expert",
"command": "/workflow:brainstorm:ux-expert",
"description": "Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ux-expert.md"
}
],
"session": [
|
||||
{
|
||||
"name": "complete",
|
||||
"command": "/workflow:session:complete",
|
||||
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/complete.md"
|
||||
},
|
||||
{
|
||||
"name": "list",
|
||||
"command": "/workflow:session:list",
|
||||
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Beginner",
|
||||
"source": "../../../commands/workflow/session/list.md"
|
||||
},
|
||||
{
|
||||
"name": "resume",
|
||||
"command": "/workflow:session:resume",
|
||||
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/resume.md"
|
||||
},
|
||||
{
|
||||
"name": "solidify",
|
||||
"command": "/workflow:session:solidify",
|
||||
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
|
||||
"arguments": "[--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/solidify.md"
|
||||
},
|
||||
{
|
||||
"name": "start",
|
||||
"command": "/workflow:session:start",
|
||||
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
|
||||
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/start.md"
|
||||
}
|
||||
],
|
||||
"tools": [
|
||||
{
|
||||
"name": "conflict-resolution",
|
||||
"command": "/workflow:tools:conflict-resolution",
|
||||
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
|
||||
"arguments": "--session WFS-session-id --context path/to/context-package.json",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/conflict-resolution.md"
|
||||
},
|
||||
{
|
||||
"name": "gather",
|
||||
"command": "/workflow:tools:gather",
|
||||
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
|
||||
"arguments": "--session WFS-session-id \\\"task description\\",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/context-gather.md"
|
||||
},
|
||||
{
|
||||
"name": "task-generate-agent",
|
||||
"command": "/workflow:tools:task-generate-agent",
|
||||
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
|
||||
"arguments": "--session WFS-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/task-generate-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "task-generate-tdd",
|
||||
"command": "/workflow:tools:task-generate-tdd",
|
||||
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
|
||||
"arguments": "--session WFS-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
|
||||
},
|
||||
{
|
||||
"name": "tdd-coverage-analysis",
|
||||
"command": "/workflow:tools:tdd-coverage-analysis",
|
||||
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
|
||||
"arguments": "--session WFS-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
|
||||
},
|
||||
{
|
||||
"name": "test-concept-enhanced",
|
||||
"command": "/workflow:tools:test-concept-enhanced",
|
||||
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
|
||||
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
|
||||
},
|
||||
{
|
||||
"name": "test-context-gather",
|
||||
"command": "/workflow:tools:test-context-gather",
|
||||
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
|
||||
"arguments": "--session WFS-test-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-context-gather.md"
|
||||
},
|
||||
{
|
||||
"name": "test-task-generate",
|
||||
"command": "/workflow:tools:test-task-generate",
|
||||
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
|
||||
"arguments": "--session WFS-test-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-task-generate.md"
|
||||
}
|
||||
],
|
||||
"ui-design": [
|
||||
{
|
||||
"name": "animation-extract",
|
||||
"command": "/workflow:ui-design:animation-extract",
|
||||
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/animation-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:codify-style",
|
||||
"command": "/workflow:ui-design:codify-style",
|
||||
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
|
||||
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/codify-style.md"
|
||||
},
|
||||
{
|
||||
"name": "design-sync",
|
||||
"command": "/workflow:ui-design:design-sync",
|
||||
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
|
||||
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/design-sync.md"
|
||||
},
|
||||
{
|
||||
"name": "explore-auto",
|
||||
"command": "/workflow:ui-design:explore-auto",
|
||||
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
|
||||
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/explore-auto.md"
|
||||
},
|
||||
{
|
||||
"name": "generate",
|
||||
"command": "/workflow:ui-design:generate",
|
||||
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
|
||||
"arguments": "[--design-id <id>] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/generate.md"
|
||||
},
|
||||
{
"name": "imitate-auto",
"command": "/workflow:ui-design:imitate-auto",
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
"arguments": "[--input \"<value>\"] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
},
{
"name": "workflow:ui-design:import-from-code",
"command": "/workflow:ui-design:import-from-code",
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/import-from-code.md"
},
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/layout-extract.md"
},
{
"name": "workflow:ui-design:reference-page-generator",
"command": "/workflow:ui-design:reference-page-generator",
"description": "Generate multi-component reference pages and documentation from design run extraction",
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
},
{
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/style-extract.md"
}
]
}
}
@@ -1,896 +0,0 @@
{
"general": [
{
"name": "cli-init",
"command": "/cli:cli-init",
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
"category": "cli",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
},
{
"name": "enhance-prompt",
"command": "/enhance-prompt",
"description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
"arguments": "user input to enhance",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/enhance-prompt.md"
},
{
"name": "issue:discover",
"command": "/issue:discover",
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
"arguments": "<path-pattern> [--perspectives=bug,ux,...] [--external]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover.md"
},
{
"name": "new",
"command": "/issue:new",
"description": "Create structured issue from GitHub URL or text description",
"arguments": "<github-url | text-description> [--priority 1-5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/new.md"
},
{
"name": "queue",
"command": "/issue:queue",
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
"arguments": "[--rebuild] [--issue <id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/queue.md"
},
{
"name": "compact",
"command": "/memory:compact",
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
"arguments": "[optional: session description]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/compact.md"
},
{
"name": "load",
"command": "/memory:load",
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
"arguments": "[--tool gemini|qwen] \"task context description\"",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load.md"
},
{
"name": "tech-research-rules",
"command": "/memory:tech-research-rules",
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/tech-research-rules.md"
},
{
"name": "update-full",
"command": "/memory:update-full",
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-full.md"
},
{
"name": "update-related",
"command": "/memory:update-related",
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
"arguments": "[--tool gemini|qwen|codex]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-related.md"
},
{
"name": "version",
"command": "/version",
"description": "Display Claude Code version information and check for updates",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/version.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "data-architect",
"command": "/workflow:brainstorm:data-architect",
"description": "Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/data-architect.md"
},
{
"name": "product-manager",
"command": "/workflow:brainstorm:product-manager",
"description": "Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-manager.md"
},
{
"name": "product-owner",
"command": "/workflow:brainstorm:product-owner",
"description": "Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-owner.md"
},
{
"name": "scrum-master",
"command": "/workflow:brainstorm:scrum-master",
"description": "Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/scrum-master.md"
},
{
"name": "subject-matter-expert",
"command": "/workflow:brainstorm:subject-matter-expert",
"description": "Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/subject-matter-expert.md"
},
{
"name": "synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
},
{
"name": "system-architect",
"command": "/workflow:brainstorm:system-architect",
"description": "Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/system-architect.md"
},
{
"name": "ux-expert",
"command": "/workflow:brainstorm:ux-expert",
"description": "Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ux-expert.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
"arguments": "[--dry-run] [\"focus area\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug",
"command": "/workflow:debug",
"description": "Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved",
"arguments": "\"bug description or error message\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
"arguments": "[--regenerate]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[--hotfix] \"bug description or issue reference\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "list",
"command": "/workflow:session:list",
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "solidify",
"command": "/workflow:session:solidify",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
"arguments": "[--type <convention|constraint|learning>] [--category <category>] \"rule or insight\"",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/solidify.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "conflict-resolution",
"command": "/workflow:tools:conflict-resolution",
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
"arguments": "--session WFS-session-id --context path/to/context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/conflict-resolution.md"
},
{
"name": "gather",
"command": "/workflow:tools:gather",
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
"arguments": "--session WFS-session-id \"task description\"",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/context-gather.md"
},
{
"name": "animation-extract",
"command": "/workflow:ui-design:animation-extract",
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/animation-extract.md"
},
{
"name": "explore-auto",
"command": "/workflow:ui-design:explore-auto",
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/explore-auto.md"
},
{
"name": "imitate-auto",
"command": "/workflow:ui-design:imitate-auto",
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
"arguments": "[--input \"<value>\"] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
},
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/layout-extract.md"
},
{
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/style-extract.md"
}
],
"implementation": [
{
"name": "execute",
"command": "/issue:execute",
"description": "Execute queue with codex using DAG-based parallel orchestration (solution-level)",
"arguments": "[--worktree] [--queue <queue-id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/issue/execute.md"
},
{
"name": "create",
"command": "/task:create",
"description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
"arguments": "\"task title\"",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/create.md"
},
{
"name": "execute",
"command": "/task:execute",
"description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/execute.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[--resume-session=\"session-id\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
"arguments": "[--in-memory] [\"task description\"|file-path]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\"session-id\"] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
||||
{
|
||||
"name": "task-generate-agent",
|
||||
"command": "/workflow:tools:task-generate-agent",
|
||||
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
|
||||
"arguments": "--session WFS-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/task-generate-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "task-generate-tdd",
|
||||
"command": "/workflow:tools:task-generate-tdd",
|
||||
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
|
||||
"arguments": "--session WFS-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
|
||||
},
|
||||
{
|
||||
"name": "test-task-generate",
|
||||
"command": "/workflow:tools:test-task-generate",
|
||||
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
|
||||
"arguments": "--session WFS-test-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-task-generate.md"
|
||||
},
|
||||
{
|
||||
"name": "generate",
|
||||
"command": "/workflow:ui-design:generate",
|
||||
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
|
||||
"arguments": "[--design-id <id>] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/generate.md"
|
||||
}
|
||||
],
|
||||
"planning": [
{
"name": "plan",
"command": "/issue:plan",
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
"arguments": "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/plan.md"
},
{
"name": "breakdown",
"command": "/task:breakdown",
"description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/task/breakdown.md"
},
{
"name": "replan",
"command": "/task:replan",
"description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json",
"arguments": "task-id [\"text\"|file.md] | --batch [verification-report.md]",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/task/replan.md"
},
{
"name": "action-plan-verify",
"command": "/workflow:action-plan-verify",
"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/action-plan-verify.md"
},
{
"name": "api-designer",
"command": "/workflow:brainstorm:api-designer",
"description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/api-designer.md"
},
{
"name": "ui-designer",
"command": "/workflow:brainstorm:ui-designer",
"description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ui-designer.md"
},
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
"arguments": "[-e|--explore] \"task description\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "\"text description\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
"arguments": "[--session session-id] [task-id] \"requirements\"|file.md [--interactive]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "\"feature description\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "workflow:ui-design:codify-style",
"command": "/workflow:ui-design:codify-style",
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/codify-style.md"
},
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/design-sync.md"
},
{
"name": "workflow:ui-design:import-from-code",
"command": "/workflow:ui-design:import-from-code",
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/import-from-code.md"
},
{
"name": "workflow:ui-design:reference-page-generator",
"command": "/workflow:ui-design:reference-page-generator",
"description": "Generate multi-component reference pages and documentation from design run extraction",
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
}
],
"documentation": [
{
"name": "code-map-memory",
"command": "/memory:code-map-memory",
"description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)",
"arguments": "\"feature-keyword\" [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/code-map-memory.md"
},
{
"name": "docs-full-cli",
"command": "/memory:docs-full-cli",
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[path] [--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-full-cli.md"
},
{
"name": "docs-related-cli",
"command": "/memory:docs-related-cli",
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
"arguments": "[--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-related-cli.md"
},
{
"name": "docs",
"command": "/memory:docs",
"description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs.md"
},
{
"name": "load-skill-memory",
"command": "/memory:load-skill-memory",
"description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords",
"arguments": "[skill_name] \"task intent description\"",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load-skill-memory.md"
},
{
"name": "skill-memory",
"command": "/memory:skill-memory",
"description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/skill-memory.md"
},
{
"name": "style-skill-memory",
"command": "/memory:style-skill-memory",
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
"arguments": "[package-name] [--regenerate]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/style-skill-memory.md"
},
{
"name": "swagger-docs",
"command": "/memory:swagger-docs",
"description": "Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/swagger-docs.md"
},
{
"name": "workflow-skill-memory",
"command": "/memory:workflow-skill-memory",
"description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)",
"arguments": "session <session-id> | all",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/workflow-skill-memory.md"
}
],
"analysis": [
|
||||
{
|
||||
"name": "review-fix",
|
||||
"command": "/workflow:review-fix",
|
||||
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
|
||||
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "analysis",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-fix.md"
|
||||
},
|
||||
{
|
||||
"name": "review-module-cycle",
|
||||
"command": "/workflow:review-module-cycle",
|
||||
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
|
||||
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "analysis",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-module-cycle.md"
|
||||
},
|
||||
{
|
||||
"name": "review",
|
||||
"command": "/workflow:review",
|
||||
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
|
||||
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "analysis",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review.md"
|
||||
}
|
||||
],
|
||||
"session-management": [
|
||||
{
|
||||
"name": "review-session-cycle",
|
||||
"command": "/workflow:review-session-cycle",
|
||||
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
|
||||
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-session-cycle.md"
|
||||
},
|
||||
{
|
||||
"name": "complete",
|
||||
"command": "/workflow:session:complete",
|
||||
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/complete.md"
|
||||
},
|
||||
{
|
||||
"name": "resume",
|
||||
"command": "/workflow:session:resume",
|
||||
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/resume.md"
|
||||
}
|
||||
],
|
||||
"testing": [
|
||||
{
|
||||
"name": "tdd-verify",
|
||||
"command": "/workflow:tdd-verify",
|
||||
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis",
|
||||
"arguments": "[optional: WFS-session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tdd-verify.md"
|
||||
},
|
||||
{
|
||||
"name": "test-fix-gen",
|
||||
"command": "/workflow:test-fix-gen",
|
||||
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
|
||||
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/test-fix-gen.md"
|
||||
},
|
||||
{
|
||||
"name": "test-gen",
|
||||
"command": "/workflow:test-gen",
|
||||
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
|
||||
"arguments": "source-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/test-gen.md"
|
||||
},
|
||||
{
|
||||
"name": "tdd-coverage-analysis",
|
||||
"command": "/workflow:tools:tdd-coverage-analysis",
|
||||
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
|
||||
"arguments": "--session WFS-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
|
||||
},
|
||||
{
|
||||
"name": "test-concept-enhanced",
|
||||
"command": "/workflow:tools:test-concept-enhanced",
|
||||
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
|
||||
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
|
||||
},
|
||||
{
|
||||
"name": "test-context-gather",
|
||||
"command": "/workflow:tools:test-context-gather",
|
||||
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
|
||||
"arguments": "--session WFS-test-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-context-gather.md"
|
||||
}
|
||||
]
|
||||
}
|
||||
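The command index above groups entries by `usage_scenario`, so scenario-based lookup is a plain key access. A minimal sketch, assuming the index is already parsed into an object (the inlined `index` literal below is a tiny illustrative subset, not the full file):

```javascript
// Illustrative subset of the command index, keyed by usage_scenario
const index = {
  testing: [
    { name: 'tdd-verify', command: '/workflow:tdd-verify', difficulty: 'Advanced' },
    { name: 'test-gen', command: '/workflow:test-gen', difficulty: 'Intermediate' }
  ]
}

// Return the slash commands registered for a given scenario
function commandsFor(scenario) {
  return (index[scenario] || []).map(entry => entry.command)
}
```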
@@ -1,160 +0,0 @@

{
  "workflow:plan": {
    "calls_internally": [
      "workflow:session:start",
      "workflow:tools:context-gather",
      "workflow:tools:conflict-resolution",
      "workflow:tools:task-generate-agent"
    ],
    "next_steps": [
      "workflow:action-plan-verify",
      "workflow:status",
      "workflow:execute"
    ],
    "alternatives": [
      "workflow:tdd-plan"
    ],
    "prerequisites": []
  },
  "workflow:tdd-plan": {
    "calls_internally": [
      "workflow:session:start",
      "workflow:tools:context-gather",
      "workflow:tools:task-generate-tdd"
    ],
    "next_steps": [
      "workflow:tdd-verify",
      "workflow:status",
      "workflow:execute"
    ],
    "alternatives": [
      "workflow:plan"
    ],
    "prerequisites": []
  },
  "workflow:execute": {
    "prerequisites": [
      "workflow:plan",
      "workflow:tdd-plan"
    ],
    "related": [
      "workflow:status",
      "workflow:resume"
    ],
    "next_steps": [
      "workflow:review",
      "workflow:tdd-verify"
    ]
  },
  "workflow:action-plan-verify": {
    "prerequisites": [
      "workflow:plan"
    ],
    "next_steps": [
      "workflow:execute"
    ],
    "related": [
      "workflow:status"
    ]
  },
  "workflow:tdd-verify": {
    "prerequisites": [
      "workflow:execute"
    ],
    "related": [
      "workflow:tools:tdd-coverage-analysis"
    ]
  },
  "workflow:session:start": {
    "next_steps": [
      "workflow:plan",
      "workflow:execute"
    ],
    "related": [
      "workflow:session:list",
      "workflow:session:resume"
    ]
  },
  "workflow:session:resume": {
    "alternatives": [
      "workflow:resume"
    ],
    "related": [
      "workflow:session:list",
      "workflow:status"
    ]
  },
  "workflow:lite-plan": {
    "calls_internally": [
      "workflow:lite-execute"
    ],
    "next_steps": [
      "workflow:lite-execute",
      "workflow:status"
    ],
    "alternatives": [
      "workflow:plan"
    ],
    "prerequisites": []
  },
  "workflow:lite-fix": {
    "next_steps": [
      "workflow:lite-execute",
      "workflow:status"
    ],
    "alternatives": [
      "workflow:lite-plan"
    ],
    "related": [
      "workflow:test-cycle-execute"
    ]
  },
  "workflow:lite-execute": {
    "prerequisites": [
      "workflow:lite-plan",
      "workflow:lite-fix"
    ],
    "related": [
      "workflow:execute",
      "workflow:status"
    ]
  },
  "workflow:review-session-cycle": {
    "prerequisites": [
      "workflow:execute"
    ],
    "next_steps": [
      "workflow:review-fix"
    ],
    "related": [
      "workflow:review-module-cycle"
    ]
  },
  "workflow:review-fix": {
    "prerequisites": [
      "workflow:review-module-cycle",
      "workflow:review-session-cycle"
    ],
    "related": [
      "workflow:test-cycle-execute"
    ]
  },
  "memory:docs": {
    "calls_internally": [
      "workflow:session:start",
      "workflow:tools:context-gather"
    ],
    "next_steps": [
      "workflow:execute"
    ]
  },
  "memory:skill-memory": {
    "next_steps": [
      "workflow:plan",
      "cli:analyze"
    ],
    "related": [
      "memory:load-skill-memory"
    ]
  }
}
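The relations file above is a directed graph over commands (`calls_internally`, `next_steps`, `alternatives`, `prerequisites`), so suggesting follow-up commands is a single lookup. A minimal sketch, assuming the JSON has been parsed; the `relations` literal below is an illustrative two-entry subset and `suggestNext` is a hypothetical helper, not part of the original file:

```javascript
// Illustrative subset of the workflow relations graph
const relations = {
  "workflow:plan": { next_steps: ["workflow:action-plan-verify", "workflow:status", "workflow:execute"] },
  "workflow:execute": { next_steps: ["workflow:review", "workflow:tdd-verify"] }
}

// After `command` finishes, suggest the recorded next steps (empty if none)
function suggestNext(command) {
  return (relations[command] && relations[command].next_steps) || []
}
```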
@@ -1,112 +0,0 @@

[
  {
    "name": "lite-plan",
    "command": "/workflow:lite-plan",
    "description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
    "arguments": "[-e|--explore] \\\"task description\\\"|file.md",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "planning",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/lite-plan.md"
  },
  {
    "name": "lite-fix",
    "command": "/workflow:lite-fix",
    "description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
    "arguments": "[--hotfix] \\\"bug description or issue reference\\\"",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/lite-fix.md"
  },
  {
    "name": "plan",
    "command": "/workflow:plan",
    "description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
    "arguments": "\\\"text description\\\"|file.md",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "planning",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/plan.md"
  },
  {
    "name": "execute",
    "command": "/workflow:execute",
    "description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
    "arguments": "[--resume-session=\\\"session-id\\\"]",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "implementation",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/execute.md"
  },
  {
    "name": "start",
    "command": "/workflow:session:start",
    "description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
    "arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
    "category": "workflow",
    "subcategory": "session",
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/session/start.md"
  },
  {
    "name": "review-session-cycle",
    "command": "/workflow:review-session-cycle",
    "description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
    "arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "session-management",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/review-session-cycle.md"
  },
  {
    "name": "docs",
    "command": "/memory:docs",
    "description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
    "arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
    "category": "memory",
    "subcategory": null,
    "usage_scenario": "documentation",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/docs.md"
  },
  {
    "name": "artifacts",
    "command": "/workflow:brainstorm:artifacts",
    "description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
    "arguments": "topic or challenge description [--count N]",
    "category": "workflow",
    "subcategory": "brainstorm",
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/brainstorm/artifacts.md"
  },
  {
    "name": "action-plan-verify",
    "command": "/workflow:action-plan-verify",
    "description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
    "arguments": "[optional: --session session-id]",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "planning",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/action-plan-verify.md"
  },
  {
    "name": "version",
    "command": "/version",
    "description": "Display Claude Code version information and check for updates",
    "arguments": "",
    "category": "general",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Beginner",
    "source": "../../../commands/version.md"
  }
]
@@ -1,462 +1,522 @@

---
name: ccw
description: Stateless workflow orchestrator. Auto-selects the optimal workflow based on task intent. Triggers on "ccw", "workflow".
allowed-tools: Task(*), SlashCommand(*), AskUserQuestion(*), Read(*), Bash(*), Grep(*), TodoWrite(*)
---

# CCW - Claude Code Workflow Orchestrator

Stateless workflow coordinator that automatically selects the optimal workflow based on task intent.

## Workflow System Overview

CCW provides two workflow systems, **Main Workflow** and **Issue Workflow**, which together cover the complete software development lifecycle.

```
┌─────────────────────────────────────────────────────────────────────────────┐
│ Main Workflow │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Level 1 │ → │ Level 2 │ → │ Level 3 │ → │ Level 4 │ │
│ │ Rapid │ │ Lightweight │ │ Standard │ │ Brainstorm │ │
│ │ │ │ │ │ │ │ │ │
│ │ lite-lite- │ │ lite-plan │ │ plan │ │ brainstorm │ │
│ │ lite │ │ lite-fix │ │ tdd-plan │ │ :auto- │ │
│ │ │ │ multi-cli- │ │ test-fix- │ │ parallel │ │
│ │ │ │ plan │ │ gen │ │ ↓ │ │
│ │ │ │ │ │ │ │ plan │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
│ Complexity: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━▶ │
│ Low High │
└─────────────────────────────────────────────────────────────────────────────┘
│
│ After development
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ Issue Workflow │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Accumulate │ → │ Plan │ → │ Execute │ │
│ │ Discover & │ │ Batch │ │ Parallel │ │
│ │ Collect │ │ Planning │ │ Execution │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │
│ Supplementary role: Maintain main branch stability, worktree isolation │
└─────────────────────────────────────────────────────────────────────────────┘
```
## Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│ CCW Orchestrator (CLI-Enhanced + Requirement Analysis) │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Phase 1 │ Input Analysis (rule-based, fast path) │
│ Phase 1.5 │ CLI Classification (semantic, smart path) │
│ Phase 1.75 │ Requirement Clarification (clarity < 2) │
│ Phase 2 │ Level Selection (intent → level → workflow) │
│ Phase 2.5 │ CLI Action Planning (high complexity) │
│ Phase 3 │ User Confirmation (optional) │
│ Phase 4 │ TODO Tracking Setup │
│ Phase 5 │ Execution Loop │
└─────────────────────────────────────────────────────────────────┘
```
## Level Quick Reference

| Level | Name | Workflows | Artifacts | Execution |
|-------|------|-----------|-----------|-----------|
| **1** | Rapid | `lite-lite-lite` | None | Direct execute |
| **2** | Lightweight | `lite-plan`, `lite-fix`, `multi-cli-plan` | Memory/Lightweight files | → `lite-execute` |
| **3** | Standard | `plan`, `tdd-plan`, `test-fix-gen` | Session persistence | → `execute` / `test-cycle-execute` |
| **4** | Brainstorm | `brainstorm:auto-parallel` → `plan` | Multi-role analysis + Session | → `execute` |
| **-** | Issue | `discover` → `plan` → `queue` → `execute` | Issue records | Worktree isolation (optional) |

## Workflow Selection Decision Tree

```
Start
│
├─ Is it post-development maintenance?
│ ├─ Yes → Issue Workflow
│ └─ No ↓
│
├─ Are requirements clear?
│ ├─ Uncertain → Level 4 (brainstorm:auto-parallel)
│ └─ Clear ↓
│
├─ Need persistent Session?
│ ├─ Yes → Level 3 (plan / tdd-plan / test-fix-gen)
│ └─ No ↓
│
├─ Need multi-perspective / solution comparison?
│ ├─ Yes → Level 2 (multi-cli-plan)
│ └─ No ↓
│
├─ Is it a bug fix?
│ ├─ Yes → Level 2 (lite-fix)
│ └─ No ↓
│
├─ Need planning?
│ ├─ Yes → Level 2 (lite-plan)
│ └─ No → Level 1 (lite-lite-lite)
```
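The decision tree above can be sketched as a chain of guard clauses. A minimal sketch; the context flags (`postDevelopment`, `requirementsClear`, and so on) are illustrative names, not fields CCW actually defines:

```javascript
// Walk the selection decision tree top to bottom; first matching
// guard wins, mirroring the tree's ordering.
function selectLevel(ctx) {
  if (ctx.postDevelopment) return 'Issue Workflow'
  if (!ctx.requirementsClear) return 'Level 4 (brainstorm:auto-parallel)'
  if (ctx.needsSession) return 'Level 3 (plan / tdd-plan / test-fix-gen)'
  if (ctx.needsMultiPerspective) return 'Level 2 (multi-cli-plan)'
  if (ctx.isBugFix) return 'Level 2 (lite-fix)'
  if (ctx.needsPlanning) return 'Level 2 (lite-plan)'
  return 'Level 1 (lite-lite-lite)'
}
```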
## Intent Classification

### Priority Order (with Level Mapping)

| Priority | Intent | Patterns | Level | Flow |
|----------|--------|----------|-------|------|
| 1 | bugfix/hotfix | `urgent,production,critical` + bug | L2 | `bugfix.hotfix` |
| 1 | bugfix | `fix,bug,error,crash,fail` | L2 | `bugfix.standard` |
| 2 | issue batch | `issues,batch` + `fix,resolve` | Issue | `issue` |
| 3 | exploration | `不确定,explore,研究,what if` | L4 | `full` |
| 3 | multi-perspective | `多视角,权衡,比较方案,cross-verify` | L2 | `multi-cli-plan` |
| 4 | quick-task | `快速,简单,small,quick` + feature | L1 | `lite-lite-lite` |
| 5 | ui design | `ui,design,component,style` | L3/L4 | `ui` |
| 6 | tdd | `tdd,test-driven,先写测试` | L3 | `tdd` |
| 7 | test-fix | `测试失败,test fail,fix test` | L3 | `test-fix-gen` |
| 8 | review | `review,审查,code review` | L3 | `review-fix` |
| 9 | documentation | `文档,docs,readme` | L2 | `docs` |
| 99 | feature | complexity-based | L2/L3 | `rapid`/`coupled` |

### Quick Selection Guide

| Scenario | Recommended Workflow | Level |
|----------|---------------------|-------|
| Quick fixes, config adjustments | `lite-lite-lite` | 1 |
| Clear single-module features | `lite-plan → lite-execute` | 2 |
| Bug diagnosis and fix | `lite-fix` | 2 |
| Production emergencies | `lite-fix --hotfix` | 2 |
| Technology selection, solution comparison | `multi-cli-plan → lite-execute` | 2 |
| Multi-module changes, refactoring | `plan → verify → execute` | 3 |
| Test-driven development | `tdd-plan → execute → tdd-verify` | 3 |
| Test failure fixes | `test-fix-gen → test-cycle-execute` | 3 |
| New features, architecture design | `brainstorm:auto-parallel → plan → execute` | 4 |
| Post-development issue fixes | Issue Workflow | - |
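The priority-ordered matching above can be sketched as a rule list scanned in order. A minimal sketch, assuming rule patterns live in an ordered array; the patterns shown are a small illustrative subset of the table, not the full `command.json` rule set:

```javascript
// Ordered rules: first match wins, mirroring the priority column above
const rules = [
  { intent: 'bugfix.hotfix', level: 'L2', test: t => /urgent|production|critical/.test(t) && /fix|bug|error/.test(t) },
  { intent: 'bugfix.standard', level: 'L2', test: t => /fix|bug|error|crash|fail/.test(t) },
  { intent: 'issue', level: 'Issue', test: t => /issues?|batch/.test(t) && /fix|resolve/.test(t) },
  { intent: 'full', level: 'L4', test: t => /不确定|explore|研究|what if/.test(t) }
]

function classifyIntent(input) {
  const text = input.toLowerCase()
  for (const rule of rules) {
    if (rule.test(text)) return rule
  }
  // Priority 99: fall through to complexity-based feature handling
  return { intent: 'feature', level: 'L2/L3' }
}
```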
### Complexity Assessment

```javascript
function assessComplexity(text) {
  let score = 0
  if (/refactor|重构|migrate|迁移|architect|架构|system|系统/.test(text)) score += 2
  if (/multiple|多个|across|跨|all|所有|entire|整个/.test(text)) score += 2
  if (/integrate|集成|api|database|数据库/.test(text)) score += 1
  if (/security|安全|performance|性能|scale|扩展/.test(text)) score += 1
  return score >= 4 ? 'high' : score >= 2 ? 'medium' : 'low'
}
```

| Complexity | Flow |
|------------|------|
| high | `coupled` (plan → verify → execute) |
| medium/low | `rapid` (lite-plan → lite-execute) |
### Dimension Extraction (WHAT/WHERE/WHY/HOW)

Four dimensions are extracted from user input and used for requirement clarification and workflow selection:

| Dimension | Extracted Content | Example Patterns |
|------|----------|----------|
| **WHAT** | action + target | `创建/修复/重构/优化/分析` (create/fix/refactor/optimize/analyze) + target object |
| **WHERE** | scope + paths | `file/module/system` + file paths |
| **WHY** | goal + motivation | "in order to... / because... / the goal is..." |
| **HOW** | constraints + preferences | "must... / do not... / should..." |

**Clarity Score** (0-3):
- +0.5: explicit action present
- +0.5: concrete target present
- +0.5: file path present
- +0.5: scope is not unknown
- +0.5: explicit goal present
- +0.5: constraints stated
- -0.5: contains uncertainty words (`不知道/maybe/怎么`)
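The clarity scoring above can be sketched as a straight sum over the extracted dimensions. A minimal sketch, assuming dimension extraction has already produced an object with these illustrative field names:

```javascript
// Sum the clarity increments listed above and clamp to the 0-3 range
function clarityScore(d) {
  let s = 0
  if (d.action) s += 0.5                          // explicit action
  if (d.target) s += 0.5                          // concrete target
  if (d.paths && d.paths.length) s += 0.5         // file path present
  if (d.scope && d.scope !== 'unknown') s += 0.5  // scope known
  if (d.goal) s += 0.5                            // explicit goal
  if (d.constraints && d.constraints.length) s += 0.5 // constraints stated
  if (d.uncertain) s -= 0.5                       // uncertainty words
  return Math.max(0, Math.min(3, s))
}
```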
### Requirement Clarification

Requirement clarification is triggered when `clarity_score < 2`:

```javascript
if (dimensions.clarity_score < 2) {
  const questions = generateClarificationQuestions(dimensions)
  // Generated questions: What is the goal? What is the scope? What are the constraints?
  AskUserQuestion({ questions })
}
```

**Clarification question types**:
- Unclear target → "What do you want to operate on?"
- Unclear scope → "What is the scope of the operation?"
- Unclear goal → "What is the main goal of this operation?"
- Complex operation → "Are there any special requirements or constraints?"
## TODO Tracking Protocol

### CRITICAL: Append-Only Rule

Todos created by CCW **must be appended to the existing list**; they must never overwrite the user's other todos.

### Implementation

```javascript
// 1. Use a CCW prefix to isolate workflow todos
const prefix = `CCW:${flowName}`

// 2. Create new todos in the prefixed format
TodoWrite({
  todos: [
    ...existingNonCCWTodos, // preserve the user's own todos
    { content: `${prefix}: [1/N] /command:step1`, status: "in_progress", activeForm: "..." },
    { content: `${prefix}: [2/N] /command:step2`, status: "pending", activeForm: "..." }
  ]
})

// 3. When updating status, only modify todos matching the prefix
```

### Todo Format

```
CCW:{flow}: [{N}/{Total}] /command:name
```

### Visual Example

```
✓ CCW:rapid: [1/2] /workflow:lite-plan
→ CCW:rapid: [2/2] /workflow:lite-execute
  (user's own todos, left untouched)
```

### Status Management

- Workflow start: create todos for all steps, first step `in_progress`
- Step completion: mark the current step `completed`, the next step `in_progress`
- Workflow end: mark all CCW todos `completed`
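The step-completion transition can be sketched as a pure function over the todo list: complete the current prefixed step, start the next pending one, and leave everything else alone. A minimal sketch; `advanceCcwTodos` is a hypothetical helper, not a CCW API:

```javascript
// Complete the in-progress CCW step and promote the next pending one;
// non-CCW todos pass through unchanged.
function advanceCcwTodos(todos, prefix) {
  const current = todos.findIndex(t => t.content.startsWith(prefix) && t.status === 'in_progress')
  if (current === -1) return todos
  const next = todos.map(t => ({ ...t }))
  next[current].status = 'completed'
  const upcoming = next.findIndex((t, i) => i > current && t.content.startsWith(prefix) && t.status === 'pending')
  if (upcoming !== -1) next[upcoming].status = 'in_progress'
  return next
}
```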
## Execution Flow

```javascript
// 1. Check explicit command
if (input.startsWith('/workflow:') || input.startsWith('/issue:')) {
  // User explicitly requested a workflow, pass through
  SlashCommand(input)
  return
}

// 2. Classify intent
const intent = classifyIntent(input) // See command.json intent_rules

// 3. Select flow
const flow = selectFlow(intent) // See command.json flows

// 4. Create todos with CCW prefix
createWorkflowTodos(flow)
```
### Phase 2: User Confirmation (Optional)
|
||||
|
||||
```javascript
|
||||
// For high-complexity or ambiguous intents, confirm with user
|
||||
if (intent.complexity === 'high' || intent.type === 'exploration') {
|
||||
const confirmation = AskUserQuestion({
|
||||
questions: [{
|
||||
question: `Recommended: ${intent.workflow}. Proceed?`,
|
||||
header: "Workflow",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: `${intent.workflow} (Recommended)`, description: "Use recommended workflow" },
|
||||
{ label: "Rapid (lite-plan)", description: "Quick iteration" },
|
||||
{ label: "Full (brainstorm+plan)", description: "Complete exploration" },
|
||||
{ label: "Manual", description: "I'll specify the commands" }
|
||||
]
|
||||
}]
|
||||
})
|
||||
|
||||
// Adjust workflow based on user selection
|
||||
intent.workflow = mapSelectionToWorkflow(confirmation)
|
||||
}
|
||||
```
|
||||
|
||||
### Phase 3: Workflow Dispatch
|
||||
|
||||
```javascript
|
||||
switch (intent.workflow) {
|
||||
case 'lite-fix':
|
||||
SlashCommand('/workflow:lite-fix', args: input)
|
||||
break
|
||||
|
||||
case 'lite-fix --hotfix':
|
||||
SlashCommand('/workflow:lite-fix --hotfix', args: input)
|
||||
break
|
||||
|
||||
case 'lite-plan → lite-execute':
|
||||
SlashCommand('/workflow:lite-plan', args: input)
|
||||
// lite-plan will automatically dispatch to lite-execute
|
||||
break
|
||||
|
||||
case 'plan → verify → execute':
|
||||
SlashCommand('/workflow:plan', args: input)
|
||||
// After plan, prompt for verify and execute
|
||||
break
|
||||
|
||||
case 'brainstorm → plan → execute':
|
||||
SlashCommand('/workflow:brainstorm:auto-parallel', args: input)
|
||||
// After brainstorm, continue with plan
|
||||
break
|
||||
|
||||
case 'issue:plan → issue:queue → issue:execute':
|
||||
SlashCommand('/issue:plan', args: input)
|
||||
// Issue workflow handles queue and execute
|
||||
break
|
||||
|
||||
case 'ui-design → plan → execute':
|
||||
// Determine UI design subcommand
|
||||
if (hasReference(input)) {
|
||||
SlashCommand('/workflow:ui-design:imitate-auto', args: input)
|
||||
} else {
|
||||
SlashCommand('/workflow:ui-design:explore-auto', args: input)
|
||||
}
|
||||
break
|
||||
}
|
||||
// 5. Dispatch first command
|
||||
SlashCommand(flow.steps[0].command, args: input)
|
||||
```
|
||||
|
||||
## CLI Tool Integration

CCW **implicitly invokes** CLI tools for three advantages, injecting CLI calls automatically under specific conditions:

| Condition | CLI Inject |
|-----------|------------|
| Large code context (≥50k chars) | `gemini --mode analysis` |
| High-complexity task | `gemini --mode analysis` |
| Bug diagnosis | `gemini --mode analysis` |
| Multi-task execution (≥3 tasks) | `codex --mode write` |

### 1. Token Efficiency (Context Efficiency)

CLI tools run in separate processes, so they can digest large amounts of code context without consuming main-session tokens:

| Scenario | Trigger | Auto-Injected |
|----------|---------|---------------|
| Large code context | File reads ≥ 50k chars | `gemini --mode analysis` |
| Multi-module analysis | ≥ 5 modules involved | `gemini --mode analysis` |
| Code review | review step | `gemini --mode analysis` |

### 2. Multi-Model Perspectives

Different models have different strengths; CCW selects automatically by task type:

| Tool | Core Strengths | Best For | Trigger Keywords |
|------|----------------|----------|------------------|
| Gemini | Very long context, deep analysis, architecture understanding, execution-flow tracing | Codebase understanding, architecture assessment, root-cause analysis | "analyze", "understand", "design", "architecture", "diagnose" |
| Qwen | Code pattern recognition, multi-dimensional analysis | Gemini fallback, second-perspective verification | "evaluate", "compare", "verify" |
| Codex | Precise code generation, autonomous execution, mathematical reasoning | Feature implementation, refactoring, testing | "implement", "refactor", "fix", "generate", "test" |

### 3. Enhanced Capabilities

#### Debug Enhancement

```
Trigger: intent === 'bugfix' AND root_cause_unclear
Auto-inject: gemini --mode analysis (execution-flow tracing)
Use: hypothesis-driven debugging, state-machine error diagnosis, concurrency troubleshooting
```

#### Planning Enhancement

```
Trigger: complexity === 'high' OR intent === 'exploration'
Auto-inject: gemini --mode analysis (architecture analysis)
Use: run CLI analysis first on complex tasks to gain multi-model perspectives
```

### CLI Enhancement Phases

**Phase 1.5: CLI-Assisted Classification**

When rule matching is ambiguous, use the CLI to assist classification:

| Trigger | Description |
|---------|-------------|
| matchCount < 2 | No single intent pattern matched clearly |
| complexity = high | High-complexity task |
| input > 100 chars | Long input requiring semantic understanding |

**Phase 2.5: CLI-Assisted Action Planning**

Workflow optimization for high-complexity tasks:

| Trigger | Description |
|---------|-------------|
| complexity = high | High-complexity task |
| steps >= 3 | Multi-step workflow |
| input > 200 chars | Complex requirement description |

The CLI may return a recommendation: `use_default` | `modify` (adjust steps) | `upgrade` (upgrade the workflow)

### Implicit Injection Rules

CCW automatically injects CLI calls under the following conditions (no explicit user request needed):

```javascript
const implicitRules = {
  // Context gathering: using the CLI for large code reads saves main-session tokens
  context_gathering: {
    trigger: 'file_read >= 50k chars OR module_count >= 5',
    inject: 'gemini --mode analysis'
  },

  // Pre-planning analysis: run CLI analysis first on complex tasks
  pre_planning_analysis: {
    trigger: 'complexity === "high" OR intent === "exploration"',
    inject: 'gemini --mode analysis'
  },

  // Debug diagnosis: leverage Gemini's execution-flow tracing
  debug_diagnosis: {
    trigger: 'intent === "bugfix" AND root_cause_unclear',
    inject: 'gemini --mode analysis'
  },

  // Code review: use the CLI to reduce token usage
  code_review: {
    trigger: 'step === "review"',
    inject: 'gemini --mode analysis'
  },

  // Multi-task execution: let Codex complete the work autonomously
  implementation: {
    trigger: 'step === "execute" AND task_count >= 3',
    inject: 'codex --mode write'
  }
}
```

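The rule table above could be evaluated with a small matcher. This is an illustrative sketch only: here the string trigger conditions are modeled as predicate functions, and the context field names (`fileReadChars`, `taskCount`, etc.) are our assumptions, not CCW's actual schema.

```javascript
// Illustrative sketch: evaluate implicit injection rules against a task context.
// Predicates stand in for the string trigger conditions above; field names are hypothetical.
const rules = [
  { name: 'context_gathering', when: ctx => ctx.fileReadChars >= 50000 || ctx.moduleCount >= 5, inject: 'gemini --mode analysis' },
  { name: 'pre_planning_analysis', when: ctx => ctx.complexity === 'high' || ctx.intent === 'exploration', inject: 'gemini --mode analysis' },
  { name: 'debug_diagnosis', when: ctx => ctx.intent === 'bugfix' && ctx.rootCauseUnclear, inject: 'gemini --mode analysis' },
  { name: 'code_review', when: ctx => ctx.step === 'review', inject: 'gemini --mode analysis' },
  { name: 'implementation', when: ctx => ctx.step === 'execute' && ctx.taskCount >= 3, inject: 'codex --mode write' }
]

// Return the CLI commands to inject for a given context
function selectInjections(ctx) {
  return rules.filter(r => r.when(ctx)).map(r => r.inject)
}

// Example: an unclear bug being executed as 4 tasks triggers two injections
const injections = selectInjections({
  intent: 'bugfix', rootCauseUnclear: true,
  step: 'execute', taskCount: 4,
  fileReadChars: 0, moduleCount: 1
})
```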
### Semantic Tool Assignment

```javascript
// Users can state a tool preference in natural language (English or Chinese)
const toolHints = {
  gemini: /用\s*gemini|gemini\s*分析|让\s*gemini|深度分析|架构理解/i,
  qwen: /用\s*qwen|qwen\s*评估|让\s*qwen|第二视角/i,
  codex: /用\s*codex|codex\s*实现|让\s*codex|自主完成|批量修改/i
}

function detectToolPreference(input) {
  for (const [tool, pattern] of Object.entries(toolHints)) {
    if (pattern.test(input)) return tool
  }
  return null // Auto-select based on task type
}
```

### Standalone CLI Workflows

Invoke a CLI directly for specific tasks:

| Workflow | Command | Use |
|----------|---------|-----|
| CLI Analysis | `ccw cli --tool gemini` | Rapid understanding of large codebases, architecture assessment |
| CLI Implement | `ccw cli --tool codex` | Autonomous implementation of well-specified requirements |
| CLI Debug | `ccw cli --tool gemini` | Root-cause analysis of complex bugs, execution-flow tracing |

## Index Files (Dynamic Coordination)

CCW uses index files for intelligent command coordination:

| Index | Purpose |
|-------|---------|
| [index/command-capabilities.json](index/command-capabilities.json) | Command capability categories (explore, plan, execute, test, review...) |
| [index/workflow-chains.json](index/workflow-chains.json) | Predefined workflow chains (rapid, full, coupled, bugfix, issue, tdd, ui...) |

### Capability Categories

```
capabilities:
├── explore     - Code exploration, context gathering
├── brainstorm  - Multi-role analysis, solution exploration
├── plan        - Task planning, decomposition
├── verify      - Plan verification, quality checks
├── execute     - Task execution, code implementation
├── bugfix      - Bug diagnosis, fixing
├── test        - Test generation, execution
├── review      - Code review, quality analysis
├── issue       - Batch issue management
├── ui-design   - UI design, prototyping
├── memory      - Documentation, knowledge management
├── session     - Session management
└── debug       - Debugging, troubleshooting
```

## TODO Tracking Integration

CCW automatically tracks workflow progress with TodoWrite:

```javascript
// Automatically create the TODO list when a workflow starts
TodoWrite({
  todos: [
    { content: "CCW: Rapid Iteration (2 steps)", status: "in_progress", activeForm: "Running workflow" },
    { content: "[1/2] /workflow:lite-plan", status: "in_progress", activeForm: "Executing lite-plan" },
    { content: "[2/2] /workflow:lite-execute", status: "pending", activeForm: "Executing lite-execute" }
  ]
})

// Status is updated automatically after each step completes
// Supports pause, continue, and skip operations
```

**Progress visualization**:
```
✓ CCW: Rapid Iteration (2 steps)
  ✓ [1/2] /workflow:lite-plan
  → [2/2] /workflow:lite-execute
```

## Continuation Commands

User control commands during workflow execution:

| Input | Action |
|-------|--------|
| `continue` | Execute the next step |
| `skip` | Skip the current step |
| `abort` | Stop the workflow |
| `/workflow:*` | Switch to the specified command |
| Natural language | Re-analyze intent |

## Reference Documents

| Document | Purpose |
|----------|---------|
| [phases/orchestrator.md](phases/orchestrator.md) | Orchestrator decision logic + TODO tracking |
| [phases/actions/rapid.md](phases/actions/rapid.md) | Rapid iteration composition |
| [phases/actions/full.md](phases/actions/full.md) | Full-process composition |
| [phases/actions/coupled.md](phases/actions/coupled.md) | Complex coupled composition |
| [phases/actions/bugfix.md](phases/actions/bugfix.md) | Bug-fix composition |
| [phases/actions/issue.md](phases/actions/issue.md) | Issue workflow composition |
| [specs/intent-classification.md](specs/intent-classification.md) | Intent classification spec |
| [WORKFLOW_DECISION_GUIDE.md](/WORKFLOW_DECISION_GUIDE.md) | Workflow decision guide |

## Workflow Flow Details

### Issue Workflow (Supplement to the Main Workflow)

The Issue Workflow is a **supplementary mechanism** to the Main Workflow, focused on continuous post-development maintenance.

#### Design Philosophy

| Aspect | Main Workflow | Issue Workflow |
|--------|---------------|----------------|
| **Purpose** | Primary development cycle | Post-development maintenance |
| **Timing** | Feature development phase | After the main workflow completes |
| **Scope** | Full feature implementation | Targeted fixes/enhancements |
| **Parallelism** | Dependency analysis → parallel agents | Worktree isolation (optional) |
| **Branch model** | Work on the current branch | Can use isolated worktrees |

#### Why Doesn't the Main Workflow Use Worktrees Automatically?

**Dependency analysis already solves the parallelism problem**:
1. The planning phase (`/workflow:plan`) performs dependency analysis
2. Task dependencies and the critical path are identified automatically
3. Tasks are split into **parallel groups** (independent tasks) and **serial chains** (dependent tasks)
4. Agents execute independent tasks in parallel; no filesystem isolation is needed

#### Two-Phase Lifecycle

```
┌─────────────────────────────────────────────────────────────────────┐
│ Phase 1: Accumulation                                               │
│                                                                     │
│ Triggers: post-task review, code-review findings, test failures     │
│                                                                     │
│   ┌────────────┐    ┌────────────┐    ┌────────────┐                │
│   │ discover   │    │ discover-  │    │ new        │                │
│   │ Auto-find  │    │ by-prompt  │    │ Manual     │                │
│   └────────────┘    └────────────┘    └────────────┘                │
│                                                                     │
│ Issues accumulate continuously into the pending queue               │
└─────────────────────────────────────────────────────────────────────┘
                          │
                          │ once enough have accumulated
                          ▼
┌─────────────────────────────────────────────────────────────────────┐
│ Phase 2: Batch Resolution                                           │
│                                                                     │
│   ┌────────────┐    ┌────────────┐    ┌────────────┐                │
│   │ plan       │ ──→│ queue      │ ──→│ execute    │                │
│   │ --all-     │    │ Optimize   │    │ Parallel   │                │
│   │ pending    │    │ order      │    │ execution  │                │
│   └────────────┘    └────────────┘    └────────────┘                │
│                                                                     │
│ Supports worktree isolation, keeping the main branch stable         │
└─────────────────────────────────────────────────────────────────────┘
```

#### Collaboration with the Main Workflow

```
Development iteration loop
┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│  ┌─────────┐                                ┌─────────┐             │
│  │ Feature │ ──→ Main Workflow ──→ Done ──→ │ Review  │             │
│  │ Request │     (Level 1-4)                └────┬────┘             │
│  └─────────┘                                     │ issues found     │
│       ▲                                          ▼                  │
│       │ continue with                       ┌─────────┐             │
│       │ new features                        │ Issue   │             │
│       │                                     │ Workflow│             │
│       │                                     └────┬────┘             │
│  ┌────┴────┐          merge back                 │ fixes complete   │
│  │ Main    │◀────────────────────────────────────┘                  │
│  │ Branch  │                                                        │
│  └─────────┘                                                        │
└─────────────────────────────────────────────────────────────────────┘
```

#### Command List

**Accumulation phase:**
```bash
/issue:discover             # Multi-perspective auto-discovery
/issue:discover-by-prompt   # Prompt-based discovery
/issue:new                  # Manual creation
```

**Batch resolution phase:**
```bash
/issue:plan --all-pending   # Plan all pending issues in batch
/issue:queue                # Generate an optimized execution queue
/issue:execute              # Execute in parallel
```

## Examples

### Example 1: Bug Fix
```
User: User login fails with a 401 error
CCW: Intent=bugfix, Workflow=lite-fix
  → /workflow:lite-fix "User login fails with a 401 error"
```

### Example 2: New Feature (Simple)
```
User: Add a user avatar upload feature
CCW: Intent=feature, Complexity=low, Workflow=lite-plan→lite-execute
  → /workflow:lite-plan "Add a user avatar upload feature"
```

### Example 3: Complex Refactoring
```
User: Refactor the entire auth module and migrate to OAuth2
CCW: Intent=feature, Complexity=high, Workflow=plan→verify→execute
  → /workflow:plan "Refactor the entire auth module and migrate to OAuth2"
```

### Example 4: Exploration
```
User: I want to optimize system performance but don't know where to start
CCW: Intent=exploration, Workflow=brainstorm→plan→execute
  → /workflow:brainstorm:auto-parallel "Explore system performance optimization directions"
```

### Example 5: Multi-Model Collaboration
```
User: Use gemini to analyze the current architecture, then have codex implement the optimizations
CCW: Detects tool preferences, executes in sequence
  → Gemini CLI (analysis) → Codex CLI (implementation)
```

### lite-lite-lite vs multi-cli-plan

| Dimension | lite-lite-lite | multi-cli-plan |
|-----------|----------------|----------------|
| **Artifacts** | None | IMPL_PLAN.md + plan.json + synthesis.json |
| **State** | Stateless | Persistent session |
| **CLI selection** | Automatic, by task-type analysis | Configuration-driven |
| **Iteration** | Via AskUser | Multi-round convergence |
| **Execution** | Direct | Via lite-execute |
| **Best for** | Quick fixes, simple features | Complex multi-step implementations |

**Selection guide**:
- Clear task, small change surface → `lite-lite-lite`
- Multi-perspective analysis, complex architecture → `multi-cli-plan`

### multi-cli-plan vs lite-plan

| Dimension | multi-cli-plan | lite-plan |
|-----------|----------------|-----------|
| **Context** | ACE semantic search | Manual file patterns |
| **Analysis** | Multi-CLI cross-verification | Single-pass planning |
| **Iteration** | Multiple rounds until convergence | Single round |
| **Confidence** | High (consensus-driven) | Medium (single perspective) |
| **Best for** | Complex tasks needing multiple perspectives | Direct, well-specified implementations |

**Selection guide**:
- Clear requirements, clear path → `lite-plan`
- Trade-offs and option comparison needed → `multi-cli-plan`

## Artifact Flow Protocol

An automatic hand-off mechanism for workflow artifacts, supporting intent extraction and completion assessment across different artifact formats.

### Artifact Formats

| Command | Output Location | Format | Key Fields |
|---------|-----------------|--------|------------|
| `/workflow:lite-plan` | memory://plan | structured_plan | tasks, files, dependencies |
| `/workflow:plan` | .workflow/{session}/IMPL_PLAN.md | markdown_plan | phases, tasks, risks |
| `/workflow:execute` | execution_log.json | execution_report | completed_tasks, errors |
| `/workflow:test-cycle-execute` | test_results.json | test_report | pass_rate, failures, coverage |
| `/workflow:review-session-cycle` | review_report.md | review_report | findings, severity_counts |

### Intent Extraction

When flowing to the next step, key information is extracted automatically:

```
plan → execute:
  Extract: tasks (incomplete), priority_order, files_to_modify, context_summary

execute → test:
  Extract: modified_files, test_scope (inferred), pending_verification

test → fix:
  Condition: pass_rate < 0.95
  Extract: failures, error_messages, affected_files, suggested_fixes

review → fix:
  Condition: critical > 0 OR high > 3
  Extract: findings (critical/high), fix_priority, affected_files
```

### Completion Assessment

**Test completion routing**:
```
pass_rate >= 0.95 AND coverage >= 0.80 → complete
pass_rate >= 0.95 AND coverage < 0.80  → add_more_tests
pass_rate >= 0.80                      → fix_failures_then_continue
pass_rate < 0.80                       → major_fix_required
```

**Review completion routing**:
```
critical == 0 AND high <= 3 → complete_or_optional_fix
critical > 0                → mandatory_fix
high > 3                    → recommended_fix
```

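The routing tables above can be expressed as a small function. This is a sketch under the thresholds stated above; the name `evaluateCompletion` matches the usage example in this section, but its exact signature is our assumption.

```javascript
// Sketch of the completion routing above; thresholds come from the tables.
function evaluateCompletion(kind, result) {
  if (kind === 'test') {
    if (result.pass_rate >= 0.95 && result.coverage >= 0.80) return 'complete'
    if (result.pass_rate >= 0.95) return 'add_more_tests'
    if (result.pass_rate >= 0.80) return 'fix_failures_then_continue'
    return 'major_fix_required'
  }
  if (kind === 'review') {
    if (result.critical > 0) return 'mandatory_fix'
    if (result.high > 3) return 'recommended_fix'
    return 'complete_or_optional_fix'
  }
  throw new Error(`unknown artifact kind: ${kind}`)
}

// A test run with pass_rate 0.85 routes to fixing failures first
const route = evaluateCompletion('test', { pass_rate: 0.85, coverage: 0.60 })
```

Note the rule order matters for the review branch: `critical > 0` must be checked before the `high` count, since a report can violate both.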
### Flow Decision Patterns

**plan_execute_test**:
```
plan → execute → test
         ↓ (if tests fail)
extract_failures → fix → test (max 3 iterations)
         ↓ (if still failing)
manual_intervention
```

**iterative_improvement**:
```
execute → test → fix → test → ...
loop until: pass_rate >= 0.95 OR iterations >= 3
```

### Usage Example

```javascript
// After execution completes, decide the next step from the artifacts
const result = await execute(plan)

// Extract intent and flow into testing
const testContext = extractIntent('execute_to_test', result)
// testContext = { modified_files, test_scope, pending_verification }

// After tests complete, route based on completion
const testResult = await test(testContext)
const nextStep = evaluateCompletion('test', testResult)
// nextStep = 'fix_failures_then_continue' if pass_rate = 0.85
```

## Reference

- [command.json](command.json) - Command metadata, flow definitions, intent rules, Artifact Flow

---

`.claude/skills/ccw/command.json` (new file, 641 lines):

{
  "_metadata": {
    "version": "2.0.0",
    "description": "Unified CCW command index with capabilities, flows, and intent rules"
  },

  "capabilities": {
    "explore": {
      "description": "Codebase exploration and context gathering",
      "commands": ["/workflow:init", "/workflow:tools:gather", "/memory:load"],
      "agents": ["cli-explore-agent", "context-search-agent"]
    },
    "brainstorm": {
      "description": "Multi-perspective analysis and ideation",
      "commands": ["/workflow:brainstorm:auto-parallel", "/workflow:brainstorm:artifacts", "/workflow:brainstorm:synthesis"],
      "roles": ["product-manager", "system-architect", "ux-expert", "data-architect", "api-designer"]
    },
    "plan": {
      "description": "Task planning and decomposition",
      "commands": ["/workflow:lite-plan", "/workflow:plan", "/workflow:tdd-plan", "/task:create", "/task:breakdown"],
      "agents": ["cli-lite-planning-agent", "action-planning-agent"]
    },
    "verify": {
      "description": "Plan and quality verification",
      "commands": ["/workflow:action-plan-verify", "/workflow:tdd-verify"]
    },
    "execute": {
      "description": "Task execution and implementation",
      "commands": ["/workflow:lite-execute", "/workflow:execute", "/task:execute"],
      "agents": ["code-developer", "cli-execution-agent", "universal-executor"]
    },
    "bugfix": {
      "description": "Bug diagnosis and fixing",
      "commands": ["/workflow:lite-fix"],
      "agents": ["code-developer"]
    },
    "test": {
      "description": "Test generation and execution",
      "commands": ["/workflow:test-gen", "/workflow:test-fix-gen", "/workflow:test-cycle-execute"],
      "agents": ["test-fix-agent"]
    },
    "review": {
      "description": "Code review and quality analysis",
      "commands": ["/workflow:review-session-cycle", "/workflow:review-module-cycle", "/workflow:review", "/workflow:review-fix"]
    },
    "issue": {
      "description": "Issue lifecycle management - discover, accumulate, batch resolve",
      "commands": ["/issue:new", "/issue:discover", "/issue:discover-by-prompt", "/issue:plan", "/issue:queue", "/issue:execute", "/issue:manage"],
      "agents": ["issue-plan-agent", "issue-queue-agent", "cli-explore-agent"],
      "lifecycle": {
        "accumulation": {
          "description": "Requirement expansion, bug analysis, and test discovery after task completion",
          "triggers": ["post-task review", "code review findings", "test failures"],
          "commands": ["/issue:discover", "/issue:discover-by-prompt", "/issue:new"]
        },
        "batch_resolution": {
          "description": "Centralized planning and parallel execution of accumulated issues",
          "flow": ["plan", "queue", "execute"],
          "commands": ["/issue:plan --all-pending", "/issue:queue", "/issue:execute"]
        }
      }
    },
    "ui-design": {
      "description": "UI design and prototyping",
      "commands": ["/workflow:ui-design:explore-auto", "/workflow:ui-design:imitate-auto", "/workflow:ui-design:design-sync"],
      "agents": ["ui-design-agent"]
    },
    "memory": {
      "description": "Documentation and knowledge management",
      "commands": ["/memory:docs", "/memory:update-related", "/memory:update-full", "/memory:skill-memory"],
      "agents": ["doc-generator", "memory-bridge"]
    }
  },

  "flows": {
    "_level_guide": {
      "L1": "Rapid - No artifacts, direct execution",
      "L2": "Lightweight - Memory/lightweight files, → lite-execute",
      "L3": "Standard - Session persistence, → execute/test-cycle-execute",
      "L4": "Brainstorm - Multi-role analysis + Session, → execute"
    },
    "lite-lite-lite": {
      "name": "Ultra-Rapid Execution",
      "level": "L1",
      "description": "Zero files + automatic CLI selection + semantic description + direct execution",
      "complexity": ["low"],
      "artifacts": "none",
      "steps": [
        { "phase": "clarify", "description": "Requirement clarification (AskUser if needed)" },
        { "phase": "auto-select", "description": "Task analysis → automatic CLI combination selection" },
        { "phase": "multi-cli", "description": "Parallel multi-CLI analysis" },
        { "phase": "decision", "description": "Present results → AskUser decision" },
        { "phase": "execute", "description": "Direct execution (no intermediate files)" }
      ],
      "cli_hints": {
        "analysis": { "tool": "auto", "mode": "analysis", "parallel": true },
        "execution": { "tool": "auto", "mode": "write" }
      },
      "estimated_time": "10-30 min"
    },
    "rapid": {
      "name": "Rapid Iteration",
      "level": "L2",
      "description": "In-memory planning + direct execution",
      "complexity": ["low", "medium"],
      "artifacts": "memory://plan",
      "steps": [
        { "command": "/workflow:lite-plan", "optional": false, "auto_continue": true },
        { "command": "/workflow:lite-execute", "optional": false }
      ],
      "cli_hints": {
        "explore_phase": { "tool": "gemini", "mode": "analysis", "trigger": "needs_exploration" },
        "execution": { "tool": "codex", "mode": "write", "trigger": "complexity >= medium" }
      },
      "estimated_time": "15-45 min"
    },
    "multi-cli-plan": {
      "name": "Multi-CLI Collaborative Planning",
      "level": "L2",
      "description": "ACE context + multi-CLI collaborative analysis + iterative convergence + plan generation",
      "complexity": ["medium", "high"],
      "artifacts": ".workflow/.multi-cli-plan/{session}/",
      "steps": [
        { "command": "/workflow:multi-cli-plan", "optional": false, "phases": [
          "context_gathering: ACE semantic search",
          "multi_cli_discussion: multi-round analysis via cli-discuss-agent",
          "present_options: present candidate solutions",
          "user_decision: user selection",
          "plan_generation: plan generated by cli-lite-planning-agent"
        ]},
        { "command": "/workflow:lite-execute", "optional": false }
      ],
      "vs_lite_plan": {
        "context": "ACE semantic search vs Manual file patterns",
        "analysis": "Multi-CLI cross-verification vs Single-pass planning",
        "iteration": "Multiple rounds until convergence vs Single round",
        "confidence": "High (consensus-based) vs Medium (single perspective)",
        "best_for": "Complex tasks needing multiple perspectives vs Straightforward implementations"
      },
      "agents": ["cli-discuss-agent", "cli-lite-planning-agent"],
      "cli_hints": {
        "discussion": { "tools": ["gemini", "codex", "claude"], "mode": "analysis", "parallel": true },
        "planning": { "tool": "gemini", "mode": "analysis" }
      },
      "estimated_time": "30-90 min"
    },
    "coupled": {
      "name": "Standard Planning",
      "level": "L3",
      "description": "Full planning + verification + execution",
      "complexity": ["medium", "high"],
      "artifacts": ".workflow/active/{session}/",
      "steps": [
        { "command": "/workflow:plan", "optional": false },
        { "command": "/workflow:action-plan-verify", "optional": false, "auto_continue": true },
        { "command": "/workflow:execute", "optional": false },
        { "command": "/workflow:review", "optional": true }
      ],
      "cli_hints": {
        "pre_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
        "execution": { "tool": "codex", "mode": "write", "trigger": "always" }
      },
      "estimated_time": "2-4 hours"
    },
    "full": {
      "name": "Full Exploration (Brainstorm)",
      "level": "L4",
      "description": "Brainstorming + planning + execution",
      "complexity": ["high"],
      "artifacts": ".workflow/active/{session}/.brainstorming/",
      "steps": [
        { "command": "/workflow:brainstorm:auto-parallel", "optional": false, "confirm_before": true },
        { "command": "/workflow:plan", "optional": false },
        { "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
        { "command": "/workflow:execute", "optional": false }
      ],
      "cli_hints": {
        "role_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
        "execution": { "tool": "codex", "mode": "write", "trigger": "task_count >= 3" }
      },
      "estimated_time": "1-3 hours"
    },
    "bugfix": {
      "name": "Bug Fix",
      "level": "L2",
      "description": "Intelligent diagnosis + fix (5 phases)",
      "complexity": ["low", "medium"],
      "artifacts": ".workflow/.lite-fix/{bug-slug}-{date}/",
      "variants": {
        "standard": [{ "command": "/workflow:lite-fix", "optional": false }],
        "hotfix": [{ "command": "/workflow:lite-fix --hotfix", "optional": false }]
      },
      "phases": [
        "Phase 1: Bug Analysis & Diagnosis (severity pre-assessment)",
        "Phase 2: Clarification (optional, AskUserQuestion)",
        "Phase 3: Fix Planning (Low/Medium → Claude, High/Critical → cli-lite-planning-agent)",
        "Phase 4: Confirmation & Selection",
        "Phase 5: Execute (→ lite-execute --mode bugfix)"
      ],
      "cli_hints": {
        "diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
        "fix": { "tool": "codex", "mode": "write", "trigger": "severity >= medium" }
      },
      "estimated_time": "10-30 min"
    },
    "issue": {
      "name": "Issue Lifecycle",
      "level": "Supplementary",
      "description": "Discovery/accumulation → batch planning → queue optimization → parallel execution (supplement to the Main Workflow)",
      "complexity": ["medium", "high"],
      "artifacts": ".workflow/.issues/",
      "purpose": "Post-development continuous maintenance, maintain main branch stability",
      "phases": {
        "accumulation": {
          "description": "Continuously discover and accumulate issues during project iteration",
          "commands": ["/issue:discover", "/issue:discover-by-prompt", "/issue:new"],
          "trigger": "post-task, code-review, test-failure"
        },
        "resolution": {
          "description": "Centralized planning and execution of accumulated issues",
          "steps": [
            { "command": "/issue:plan --all-pending", "optional": false },
            { "command": "/issue:queue", "optional": false },
            { "command": "/issue:execute", "optional": false }
          ]
        }
      },
      "worktree_support": {
        "description": "Optional worktree isolation to keep the main branch stable",
        "use_case": "Issue fixes after main development completes"
      },
      "cli_hints": {
        "discovery": { "tool": "gemini", "mode": "analysis", "trigger": "perspective_analysis", "parallel": true },
        "solution_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
        "batch_execution": { "tool": "codex", "mode": "write", "trigger": "always" }
      },
      "estimated_time": "1-4 hours"
    },
    "tdd": {
      "name": "Test-Driven Development",
      "level": "L3",
      "description": "TDD planning + execution + verification (6 phases)",
      "complexity": ["medium", "high"],
      "artifacts": ".workflow/active/{session}/",
      "steps": [
        { "command": "/workflow:tdd-plan", "optional": false },
        { "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
        { "command": "/workflow:execute", "optional": false },
        { "command": "/workflow:tdd-verify", "optional": false }
      ],
      "tdd_structure": {
        "description": "Each IMPL task contains complete internal Red-Green-Refactor cycle",
        "meta": "tdd_workflow: true",
        "flow_control": "implementation_approach contains 3 steps (red/green/refactor)"
      },
      "cli_hints": {
        "test_strategy": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
        "red_green_refactor": { "tool": "codex", "mode": "write", "trigger": "always" }
      },
      "estimated_time": "1-3 hours"
    },
    "test-fix": {
      "name": "Test Fix Generation",
      "level": "L3",
      "description": "Test fix generation + execution cycle (5 phases)",
      "complexity": ["medium", "high"],
      "artifacts": ".workflow/active/WFS-test-{session}/",
      "dual_mode": {
        "session_mode": { "input": "WFS-xxx", "context_source": "Source session summaries" },
        "prompt_mode": { "input": "Text/file path", "context_source": "Direct codebase analysis" }
      },
      "steps": [
        { "command": "/workflow:test-fix-gen", "optional": false },
        { "command": "/workflow:test-cycle-execute", "optional": false }
      ],
      "task_structure": [
        "IMPL-001.json (test understanding & generation)",
        "IMPL-001.5-review.json (quality gate)",
        "IMPL-002.json (test execution & fix cycle)"
      ],
      "cli_hints": {
        "analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
        "fix_cycle": { "tool": "codex", "mode": "write", "trigger": "pass_rate < 0.95" }
      },
      "estimated_time": "1-2 hours"
    },
    "ui": {
      "name": "UI-First Development",
      "level": "L3/L4",
      "description": "UI design + planning + execution",
      "complexity": ["medium", "high"],
      "artifacts": ".workflow/active/{session}/",
      "variants": {
        "explore": [
          { "command": "/workflow:ui-design:explore-auto", "optional": false },
          { "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
          { "command": "/workflow:plan", "optional": false },
          { "command": "/workflow:execute", "optional": false }
        ],
        "imitate": [
          { "command": "/workflow:ui-design:imitate-auto", "optional": false },
          { "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
          { "command": "/workflow:plan", "optional": false },
          { "command": "/workflow:execute", "optional": false }
        ]
      },
      "estimated_time": "2-4 hours"
    },
    "review-fix": {
      "name": "Review and Fix",
      "level": "L3",
      "description": "Multi-dimensional review + automatic fixes",
      "complexity": ["medium"],
      "artifacts": ".workflow/active/{session}/review_report.md",
      "steps": [
        { "command": "/workflow:review-session-cycle", "optional": false },
        { "command": "/workflow:review-fix", "optional": true }
      ],
      "cli_hints": {
        "multi_dimension_review": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
        "auto_fix": { "tool": "codex", "mode": "write", "trigger": "findings_count >= 3" }
      },
      "estimated_time": "30-90 min"
    },
    "docs": {
      "name": "Documentation",
      "level": "L2",
      "description": "Batch documentation generation",
      "complexity": ["low", "medium"],
      "variants": {
        "incremental": [{ "command": "/memory:update-related", "optional": false }],
        "full": [
          { "command": "/memory:docs", "optional": false },
          { "command": "/workflow:execute", "optional": false }
        ]
      },
      "estimated_time": "15-60 min"
    }
  },

  "intent_rules": {
    "_level_mapping": {
      "description": "Intent → Level → Flow mapping guide",
      "L1": ["lite-lite-lite"],
      "L2": ["rapid", "bugfix", "multi-cli-plan", "docs"],
      "L3": ["coupled", "tdd", "test-fix", "review-fix", "ui"],
      "L4": ["full"],
      "Supplementary": ["issue"]
    },
    "bugfix": {
      "priority": 1,
      "level": "L2",
      "variants": {
        "hotfix": {
          "patterns": ["hotfix", "urgent", "production", "critical", "emergency", "紧急", "生产环境", "线上"],
          "flow": "bugfix.hotfix"
        },
        "standard": {
          "patterns": ["fix", "bug", "error", "issue", "crash", "broken", "fail", "wrong", "修复", "错误", "崩溃"],
          "flow": "bugfix.standard"
        }
      }
    },
    "issue_batch": {
      "priority": 2,
      "level": "Supplementary",
      "patterns": {
        "batch": ["issues", "batch", "queue", "多个", "批量"],
        "action": ["fix", "resolve", "处理", "解决"]
      },
      "require_both": true,
      "flow": "issue"
    },
    "exploration": {
      "priority": 3,
      "level": "L4",
      "patterns": ["不确定", "不知道", "explore", "研究", "分析一下", "怎么做", "what if", "探索"],
      "flow": "full"
    },
    "multi_perspective": {
      "priority": 3,
      "level": "L2",
      "patterns": ["多视角", "权衡", "比较方案", "cross-verify", "多CLI", "协作分析"],
      "flow": "multi-cli-plan"
    },
    "quick_task": {
      "priority": 4,
      "level": "L1",
      "patterns": ["快速", "简单", "small", "quick", "simple", "trivial", "小改动"],
      "flow": "lite-lite-lite"
    },
    "ui_design": {
      "priority": 5,
      "level": "L3/L4",
      "patterns": ["ui", "界面", "design", "设计", "component", "组件", "style", "样式", "layout", "布局"],
      "variants": {
        "imitate": { "triggers": ["参考", "模仿", "像", "类似"], "flow": "ui.imitate" },
        "explore": { "triggers": [], "flow": "ui.explore" }
|
||||
}
|
||||
},
|
||||
"tdd": {
|
||||
"priority": 6,
|
||||
"level": "L3",
|
||||
"patterns": ["tdd", "test-driven", "测试驱动", "先写测试", "test first"],
|
||||
"flow": "tdd"
|
||||
},
|
||||
"test_fix": {
|
||||
"priority": 7,
|
||||
"level": "L3",
|
||||
"patterns": ["测试失败", "test fail", "fix test", "test error", "pass rate", "coverage gap"],
|
||||
"flow": "test-fix"
|
||||
},
|
||||
"review": {
|
||||
"priority": 8,
|
||||
"level": "L3",
|
||||
"patterns": ["review", "审查", "检查代码", "code review", "质量检查"],
|
||||
"flow": "review-fix"
|
||||
},
|
||||
"documentation": {
|
||||
"priority": 9,
|
||||
"level": "L2",
|
||||
"patterns": ["文档", "documentation", "docs", "readme"],
|
||||
"variants": {
|
||||
"incremental": { "triggers": ["更新", "增量"], "flow": "docs.incremental" },
|
||||
"full": { "triggers": ["全部", "完整"], "flow": "docs.full" }
|
||||
}
|
||||
},
|
||||
"feature": {
|
||||
"priority": 99,
|
||||
"complexity_map": {
|
||||
"high": { "level": "L3", "flow": "coupled" },
|
||||
"medium": { "level": "L2", "flow": "rapid" },
|
||||
"low": { "level": "L1", "flow": "lite-lite-lite" }
|
||||
}
|
||||
}
|
||||
},
|
||||
|
||||
"complexity_indicators": {
|
||||
"high": {
|
||||
"threshold": 4,
|
||||
"patterns": {
|
||||
"architecture": { "keywords": ["refactor", "重构", "migrate", "迁移", "architect", "架构", "system", "系统"], "weight": 2 },
|
||||
"multi_module": { "keywords": ["multiple", "多个", "across", "跨", "all", "所有", "entire", "整个"], "weight": 2 },
|
||||
"integration": { "keywords": ["integrate", "集成", "api", "database", "数据库"], "weight": 1 },
|
||||
"quality": { "keywords": ["security", "安全", "performance", "性能", "scale", "扩展"], "weight": 1 }
|
||||
}
|
||||
},
|
||||
"medium": { "threshold": 2 },
|
||||
"low": { "threshold": 0 }
|
||||
},
|
||||
|
||||
"cli_tools": {
|
||||
"gemini": {
|
||||
"strengths": ["超长上下文", "深度分析", "架构理解", "执行流追踪"],
|
||||
"triggers": ["分析", "理解", "设计", "架构", "诊断"],
|
||||
"mode": "analysis"
|
||||
},
|
||||
"qwen": {
|
||||
"strengths": ["代码模式识别", "多维度分析"],
|
||||
"triggers": ["评估", "对比", "验证"],
|
||||
"mode": "analysis"
|
||||
},
|
||||
"codex": {
|
||||
"strengths": ["精确代码生成", "自主执行"],
|
||||
"triggers": ["实现", "重构", "修复", "生成"],
|
||||
"mode": "write"
|
||||
}
|
||||
},
|
||||
|
||||
"cli_injection_rules": {
|
||||
"context_gathering": { "trigger": "file_read >= 50k OR module_count >= 5", "inject": "gemini --mode analysis" },
|
||||
"pre_planning_analysis": { "trigger": "complexity === high", "inject": "gemini --mode analysis" },
|
||||
"debug_diagnosis": { "trigger": "intent === bugfix AND root_cause_unclear", "inject": "gemini --mode analysis" },
|
||||
"code_review": { "trigger": "step === review", "inject": "gemini --mode analysis" },
|
||||
"implementation": { "trigger": "step === execute AND task_count >= 3", "inject": "codex --mode write" }
|
||||
},
|
||||
|
||||
"artifact_flow": {
|
||||
"_description": "定义工作流产出的格式、意图提取和流转规则",
|
||||
|
||||
"outputs": {
|
||||
"/workflow:lite-plan": {
|
||||
"artifact": "memory://plan",
|
||||
"format": "structured_plan",
|
||||
"fields": ["tasks", "files", "dependencies", "approach"]
|
||||
},
|
||||
"/workflow:plan": {
|
||||
"artifact": ".workflow/{session}/IMPL_PLAN.md",
|
||||
"format": "markdown_plan",
|
||||
"fields": ["phases", "tasks", "dependencies", "risks", "test_strategy"]
|
||||
},
|
||||
"/workflow:multi-cli-plan": {
|
||||
"artifact": ".workflow/.multi-cli-plan/{session}/",
|
||||
"format": "multi_file",
|
||||
"files": ["IMPL_PLAN.md", "plan.json", "synthesis.json"],
|
||||
"fields": ["consensus", "divergences", "recommended_approach", "tasks"]
|
||||
},
|
||||
"/workflow:lite-execute": {
|
||||
"artifact": "git_changes",
|
||||
"format": "code_diff",
|
||||
"fields": ["modified_files", "added_files", "deleted_files", "build_status"]
|
||||
},
|
||||
"/workflow:execute": {
|
||||
"artifact": ".workflow/{session}/execution_log.json",
|
||||
"format": "execution_report",
|
||||
"fields": ["completed_tasks", "pending_tasks", "errors", "warnings"]
|
||||
},
|
||||
"/workflow:test-cycle-execute": {
|
||||
"artifact": ".workflow/{session}/test_results.json",
|
||||
"format": "test_report",
|
||||
"fields": ["pass_rate", "failures", "coverage", "duration"]
|
||||
},
|
||||
"/workflow:review-session-cycle": {
|
||||
"artifact": ".workflow/{session}/review_report.md",
|
||||
"format": "review_report",
|
||||
"fields": ["findings", "severity_counts", "recommendations"]
|
||||
},
|
||||
"/workflow:lite-fix": {
|
||||
"artifact": "git_changes",
|
||||
"format": "fix_report",
|
||||
"fields": ["root_cause", "fix_applied", "files_modified", "verification_status"]
|
||||
}
|
||||
},
|
||||
|
||||
"intent_extraction": {
|
||||
"plan_to_execute": {
|
||||
"from": ["lite-plan", "plan", "multi-cli-plan"],
|
||||
"to": ["lite-execute", "execute"],
|
||||
"extract": {
|
||||
"tasks": "$.tasks[] | filter(status != 'completed')",
|
||||
"priority_order": "$.tasks | sort_by(priority)",
|
||||
"files_to_modify": "$.tasks[].files | flatten | unique",
|
||||
"dependencies": "$.dependencies",
|
||||
"context_summary": "$.approach OR $.recommended_approach"
|
||||
}
|
||||
},
|
||||
"execute_to_test": {
|
||||
"from": ["lite-execute", "execute"],
|
||||
"to": ["test-cycle-execute", "test-fix-gen"],
|
||||
"extract": {
|
||||
"modified_files": "$.modified_files",
|
||||
"test_scope": "infer_from($.modified_files)",
|
||||
"build_status": "$.build_status",
|
||||
"pending_verification": "$.completed_tasks | needs_test"
|
||||
}
|
||||
},
|
||||
"test_to_fix": {
|
||||
"from": ["test-cycle-execute"],
|
||||
"to": ["lite-fix", "review-fix"],
|
||||
"condition": "$.pass_rate < 0.95",
|
||||
"extract": {
|
||||
"failures": "$.failures",
|
||||
"error_messages": "$.failures[].message",
|
||||
"affected_files": "$.failures[].file",
|
||||
"suggested_fixes": "$.failures[].suggested_fix"
|
||||
}
|
||||
},
|
||||
"review_to_fix": {
|
||||
"from": ["review-session-cycle", "review-module-cycle"],
|
||||
"to": ["review-fix"],
|
||||
"condition": "$.severity_counts.critical > 0 OR $.severity_counts.high > 3",
|
||||
"extract": {
|
||||
"findings": "$.findings | filter(severity in ['critical', 'high'])",
|
||||
"fix_priority": "$.findings | group_by(category) | sort_by(severity)",
|
||||
"affected_files": "$.findings[].file | unique"
|
||||
}
|
||||
}
|
||||
},
|
||||
|
||||
"completion_criteria": {
|
||||
"plan": {
|
||||
"required": ["has_tasks", "has_files"],
|
||||
"optional": ["has_tests", "no_blocking_risks"],
|
||||
"threshold": 0.8,
|
||||
"routing": {
|
||||
"complete": "proceed_to_execute",
|
||||
"incomplete": "clarify_requirements"
|
||||
}
|
||||
},
|
||||
"execute": {
|
||||
"required": ["all_tasks_attempted", "no_critical_errors"],
|
||||
"optional": ["build_passes", "lint_passes"],
|
||||
"threshold": 1.0,
|
||||
"routing": {
|
||||
"complete": "proceed_to_test_or_review",
|
||||
"partial": "continue_execution",
|
||||
"failed": "diagnose_and_retry"
|
||||
}
|
||||
},
|
||||
"test": {
|
||||
"metrics": {
|
||||
"pass_rate": { "target": 0.95, "minimum": 0.80 },
|
||||
"coverage": { "target": 0.80, "minimum": 0.60 }
|
||||
},
|
||||
"routing": {
|
||||
"pass_rate >= 0.95 AND coverage >= 0.80": "complete",
|
||||
"pass_rate >= 0.95 AND coverage < 0.80": "add_more_tests",
|
||||
"pass_rate >= 0.80": "fix_failures_then_continue",
|
||||
"pass_rate < 0.80": "major_fix_required"
|
||||
}
|
||||
},
|
||||
"review": {
|
||||
"metrics": {
|
||||
"critical_findings": { "target": 0, "maximum": 0 },
|
||||
"high_findings": { "target": 0, "maximum": 3 }
|
||||
},
|
||||
"routing": {
|
||||
"critical == 0 AND high <= 3": "complete_or_optional_fix",
|
||||
"critical > 0": "mandatory_fix",
|
||||
"high > 3": "recommended_fix"
|
||||
}
|
||||
}
|
||||
},
|
||||
|
||||
"flow_decisions": {
|
||||
"_description": "根据产出完成度决定下一步",
|
||||
"patterns": {
|
||||
"plan_execute_test": {
|
||||
"sequence": ["plan", "execute", "test"],
|
||||
"on_test_fail": {
|
||||
"action": "extract_failures_and_fix",
|
||||
"max_iterations": 3,
|
||||
"fallback": "manual_intervention"
|
||||
}
|
||||
},
|
||||
"plan_execute_review": {
|
||||
"sequence": ["plan", "execute", "review"],
|
||||
"on_review_issues": {
|
||||
"action": "prioritize_and_fix",
|
||||
"auto_fix_threshold": "severity < high"
|
||||
}
|
||||
},
|
||||
"iterative_improvement": {
|
||||
"sequence": ["execute", "test", "fix"],
|
||||
"loop_until": "pass_rate >= 0.95 OR iterations >= 3",
|
||||
"on_loop_exit": "report_status"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
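The `complexity_indicators` table above reads as a weighted keyword score: each matched category contributes its weight once, and the total is compared against the `threshold` values (high ≥ 4, medium ≥ 2, otherwise low). A minimal sketch of how an orchestrator might apply it — the function name and the English-only keyword subset are illustrative, not taken from the repo:

```python
# Illustrative scoring sketch for the complexity_indicators table.
# Keyword groups and weights mirror the JSON (English keywords only here);
# assess_complexity is a hypothetical helper, not a CCW API.
INDICATORS = {
    "architecture": (["refactor", "migrate", "architect", "system"], 2),
    "multi_module": (["multiple", "across", "all", "entire"], 2),
    "integration": (["integrate", "api", "database"], 1),
    "quality": (["security", "performance", "scale"], 1),
}

def assess_complexity(request: str) -> str:
    text = request.lower()
    # Each category scores its weight at most once, regardless of
    # how many of its keywords appear in the request.
    score = sum(
        weight
        for keywords, weight in INDICATORS.values()
        if any(k in text for k in keywords)
    )
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"
```

Under this reading, "refactor the entire system" scores 2 (architecture) + 2 (multi_module) = 4 and routes to the `coupled` flow via `feature.complexity_map`.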
@@ -1,127 +0,0 @@
{
  "_metadata": {
    "version": "1.0.0",
    "generated": "2026-01-03",
    "description": "CCW command capability index for intelligent workflow coordination"
  },
  "capabilities": {
    "explore": {
      "description": "Codebase exploration and context gathering",
      "commands": [
        { "command": "/workflow:init", "weight": 1.0, "tags": ["project-setup", "context"] },
        { "command": "/workflow:tools:gather", "weight": 0.9, "tags": ["context", "analysis"] },
        { "command": "/memory:load", "weight": 0.8, "tags": ["context", "memory"] }
      ],
      "agents": ["cli-explore-agent", "context-search-agent"]
    },
    "brainstorm": {
      "description": "Multi-perspective analysis and ideation",
      "commands": [
        { "command": "/workflow:brainstorm:auto-parallel", "weight": 1.0, "tags": ["exploration", "multi-role"] },
        { "command": "/workflow:brainstorm:artifacts", "weight": 0.9, "tags": ["clarification", "guidance"] },
        { "command": "/workflow:brainstorm:synthesis", "weight": 0.8, "tags": ["consolidation", "refinement"] }
      ],
      "roles": ["product-manager", "system-architect", "ux-expert", "data-architect", "api-designer"]
    },
    "plan": {
      "description": "Task planning and decomposition",
      "commands": [
        { "command": "/workflow:lite-plan", "weight": 1.0, "complexity": "low-medium", "tags": ["fast", "interactive"] },
        { "command": "/workflow:plan", "weight": 0.9, "complexity": "medium-high", "tags": ["comprehensive", "persistent"] },
        { "command": "/workflow:tdd-plan", "weight": 0.7, "complexity": "medium-high", "tags": ["test-first", "quality"] },
        { "command": "/task:create", "weight": 0.6, "tags": ["single-task", "manual"] },
        { "command": "/task:breakdown", "weight": 0.5, "tags": ["decomposition", "subtasks"] }
      ],
      "agents": ["cli-lite-planning-agent", "action-planning-agent"]
    },
    "verify": {
      "description": "Plan and quality verification",
      "commands": [
        { "command": "/workflow:action-plan-verify", "weight": 1.0, "tags": ["plan-quality", "consistency"] },
        { "command": "/workflow:tdd-verify", "weight": 0.8, "tags": ["tdd-compliance", "coverage"] }
      ]
    },
    "execute": {
      "description": "Task execution and implementation",
      "commands": [
        { "command": "/workflow:lite-execute", "weight": 1.0, "complexity": "low-medium", "tags": ["fast", "agent-or-cli"] },
        { "command": "/workflow:execute", "weight": 0.9, "complexity": "medium-high", "tags": ["dag-parallel", "comprehensive"] },
        { "command": "/task:execute", "weight": 0.7, "tags": ["single-task"] }
      ],
      "agents": ["code-developer", "cli-execution-agent", "universal-executor"]
    },
    "bugfix": {
      "description": "Bug diagnosis and fixing",
      "commands": [
        { "command": "/workflow:lite-fix", "weight": 1.0, "tags": ["diagnosis", "fix", "standard"] },
        { "command": "/workflow:lite-fix --hotfix", "weight": 0.9, "tags": ["emergency", "production", "fast"] }
      ],
      "agents": ["code-developer"]
    },
    "test": {
      "description": "Test generation and execution",
      "commands": [
        { "command": "/workflow:test-gen", "weight": 1.0, "tags": ["post-implementation", "coverage"] },
        { "command": "/workflow:test-fix-gen", "weight": 0.9, "tags": ["from-description", "flexible"] },
        { "command": "/workflow:test-cycle-execute", "weight": 0.8, "tags": ["iterative", "fix-cycle"] }
      ],
      "agents": ["test-fix-agent"]
    },
    "review": {
      "description": "Code review and quality analysis",
      "commands": [
        { "command": "/workflow:review-session-cycle", "weight": 1.0, "tags": ["session-based", "comprehensive"] },
        { "command": "/workflow:review-module-cycle", "weight": 0.9, "tags": ["module-based", "targeted"] },
        { "command": "/workflow:review", "weight": 0.8, "tags": ["single-pass", "type-specific"] },
        { "command": "/workflow:review-fix", "weight": 0.7, "tags": ["auto-fix", "findings"] }
      ]
    },
    "issue": {
      "description": "Batch issue management",
      "commands": [
        { "command": "/issue:new", "weight": 1.0, "tags": ["create", "import"] },
        { "command": "/issue:discover", "weight": 0.9, "tags": ["find", "analyze"] },
        { "command": "/issue:plan", "weight": 0.8, "tags": ["solutions", "planning"] },
        { "command": "/issue:queue", "weight": 0.7, "tags": ["prioritize", "order"] },
        { "command": "/issue:execute", "weight": 0.6, "tags": ["batch-execute", "dag"] }
      ],
      "agents": ["issue-plan-agent", "issue-queue-agent"]
    },
    "ui-design": {
      "description": "UI design and prototyping",
      "commands": [
        { "command": "/workflow:ui-design:explore-auto", "weight": 1.0, "tags": ["from-scratch", "variants"] },
        { "command": "/workflow:ui-design:imitate-auto", "weight": 0.9, "tags": ["reference-based", "copy"] },
        { "command": "/workflow:ui-design:design-sync", "weight": 0.7, "tags": ["sync", "finalize"] },
        { "command": "/workflow:ui-design:generate", "weight": 0.6, "tags": ["assemble", "prototype"] }
      ],
      "agents": ["ui-design-agent"]
    },
    "memory": {
      "description": "Documentation and knowledge management",
      "commands": [
        { "command": "/memory:docs", "weight": 1.0, "tags": ["generate", "planning"] },
        { "command": "/memory:update-related", "weight": 0.9, "tags": ["incremental", "git-based"] },
        { "command": "/memory:update-full", "weight": 0.8, "tags": ["comprehensive", "all-modules"] },
        { "command": "/memory:skill-memory", "weight": 0.7, "tags": ["package", "reusable"] }
      ],
      "agents": ["doc-generator", "memory-bridge"]
    },
    "session": {
      "description": "Workflow session management",
      "commands": [
        { "command": "/workflow:session:start", "weight": 1.0, "tags": ["init", "discover"] },
        { "command": "/workflow:session:list", "weight": 0.9, "tags": ["view", "status"] },
        { "command": "/workflow:session:resume", "weight": 0.8, "tags": ["continue", "restore"] },
        { "command": "/workflow:session:complete", "weight": 0.7, "tags": ["finish", "archive"] }
      ]
    },
    "debug": {
      "description": "Debugging and problem solving",
      "commands": [
        { "command": "/workflow:debug", "weight": 1.0, "tags": ["hypothesis", "iterative"] },
        { "command": "/workflow:clean", "weight": 0.6, "tags": ["cleanup", "artifacts"] }
      ]
    }
  }
}
@@ -1,136 +0,0 @@
{
  "_metadata": {
    "version": "1.0.0",
    "description": "Externalized intent classification rules for CCW orchestrator"
  },
  "intent_patterns": {
    "bugfix": {
      "priority": 1,
      "description": "Bug-fix intent",
      "variants": {
        "hotfix": {
          "patterns": ["hotfix", "urgent", "production", "critical", "emergency", "紧急", "生产环境", "线上"],
          "workflow": "lite-fix --hotfix"
        },
        "standard": {
          "patterns": ["fix", "bug", "error", "issue", "crash", "broken", "fail", "wrong", "incorrect", "修复", "错误", "崩溃", "失败"],
          "workflow": "lite-fix"
        }
      }
    },
    "issue_batch": {
      "priority": 2,
      "description": "Batch issue handling intent",
      "patterns": {
        "batch_keywords": ["issues", "issue", "batch", "queue", "多个", "批量", "一批"],
        "action_keywords": ["fix", "resolve", "处理", "解决", "修复"]
      },
      "require_both": true,
      "workflow": "issue:plan → issue:queue → issue:execute"
    },
    "exploration": {
      "priority": 3,
      "description": "Exploration / uncertainty intent",
      "patterns": ["不确定", "不知道", "explore", "研究", "分析一下", "怎么做", "what if", "should i", "探索", "可能", "或许", "建议"],
      "workflow": "brainstorm → plan → execute"
    },
    "ui_design": {
      "priority": 4,
      "description": "UI/design intent",
      "patterns": ["ui", "界面", "design", "设计", "component", "组件", "style", "样式", "layout", "布局", "前端", "frontend", "页面"],
      "variants": {
        "imitate": {
          "triggers": ["参考", "模仿", "像", "类似", "reference", "like"],
          "workflow": "ui-design:imitate-auto → plan → execute"
        },
        "explore": {
          "triggers": [],
          "workflow": "ui-design:explore-auto → plan → execute"
        }
      }
    },
    "tdd": {
      "priority": 5,
      "description": "Test-driven development intent",
      "patterns": ["tdd", "test-driven", "测试驱动", "先写测试", "red-green", "test first"],
      "workflow": "tdd-plan → execute → tdd-verify"
    },
    "review": {
      "priority": 6,
      "description": "Code review intent",
      "patterns": ["review", "审查", "检查代码", "code review", "质量检查", "安全审查"],
      "workflow": "review-session-cycle → review-fix"
    },
    "documentation": {
      "priority": 7,
      "description": "Documentation generation intent",
      "patterns": ["文档", "documentation", "docs", "readme", "注释", "api doc", "说明"],
      "variants": {
        "incremental": {
          "triggers": ["更新", "增量", "相关"],
          "workflow": "memory:update-related"
        },
        "full": {
          "triggers": ["全部", "完整", "所有"],
          "workflow": "memory:docs → execute"
        }
      }
    }
  },
  "complexity_indicators": {
    "high": {
      "score_threshold": 4,
      "patterns": {
        "architecture": {
          "keywords": ["refactor", "重构", "migrate", "迁移", "architect", "架构", "system", "系统"],
          "weight": 2
        },
        "multi_module": {
          "keywords": ["multiple", "多个", "across", "跨", "all", "所有", "entire", "整个"],
          "weight": 2
        },
        "integration": {
          "keywords": ["integrate", "集成", "connect", "连接", "api", "database", "数据库"],
          "weight": 1
        },
        "quality": {
          "keywords": ["security", "安全", "performance", "性能", "scale", "扩展", "优化"],
          "weight": 1
        }
      },
      "workflow": "plan → verify → execute"
    },
    "medium": {
      "score_threshold": 2,
      "workflow": "lite-plan → lite-execute"
    },
    "low": {
      "score_threshold": 0,
      "workflow": "lite-plan → lite-execute"
    }
  },
  "cli_tool_triggers": {
    "gemini": {
      "explicit": ["用 gemini", "gemini 分析", "让 gemini", "用gemini"],
      "semantic": ["深度分析", "架构理解", "执行流追踪", "根因分析"]
    },
    "qwen": {
      "explicit": ["用 qwen", "qwen 评估", "让 qwen", "用qwen"],
      "semantic": ["第二视角", "对比验证", "模式识别"]
    },
    "codex": {
      "explicit": ["用 codex", "codex 实现", "让 codex", "用codex"],
      "semantic": ["自主完成", "批量修改", "自动实现"]
    }
  },
  "fallback_rules": {
    "no_match": {
      "default_workflow": "lite-plan → lite-execute",
      "use_complexity_assessment": true
    },
    "ambiguous": {
      "action": "ask_user",
      "message": "检测到多个可能意图,请确认工作流选择"
    }
  }
}
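The classification rules above imply a priority-ordered match: bugfix first, then issue_batch (which, via `require_both`, needs a keyword from both the batch and action groups), with `fallback_rules.no_match` catching everything else. A minimal sketch under those assumptions, using a small English-only subset of the pattern lists (the `classify` function itself is illustrative, not a CCW API):

```python
# Sketch of priority-ordered intent matching per the rules above.
# Pattern lists are abbreviated; classify() is a hypothetical helper.
def classify(request: str) -> str:
    text = request.lower()
    # Priority 1: bugfix (hotfix variant checked before standard).
    if any(p in text for p in ["hotfix", "urgent", "production", "critical"]):
        return "lite-fix --hotfix"
    if any(p in text for p in ["fix", "bug", "error", "crash"]):
        return "lite-fix"
    # Priority 2: issue_batch requires a hit in BOTH keyword groups.
    batch = any(p in text for p in ["issues", "batch", "queue"])
    action = any(p in text for p in ["resolve", "handle"])
    if batch and action:
        return "issue:plan → issue:queue → issue:execute"
    # fallback_rules.no_match: default workflow.
    return "lite-plan → lite-execute"
```

Note that because bugfix outranks issue_batch, a request containing "fix" short-circuits to `lite-fix` even when batch keywords are also present, which matches the priority ordering in the JSON.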
@@ -1,451 +0,0 @@
{
  "_metadata": {
    "version": "1.1.0",
    "description": "Predefined workflow chains with CLI tool integration for CCW orchestration"
  },
  "cli_tools": {
    "_doc": "CLI tools are a core CCW capability, invoked automatically at the right moments to: 1) gather large amounts of context with fewer tokens, 2) bring in different model perspectives, 3) strengthen debugging and planning",
    "gemini": {
      "strengths": ["very long context", "deep analysis", "architecture understanding", "execution-flow tracing"],
      "triggers": ["分析", "理解", "设计", "架构", "评估", "诊断"],
      "mode": "analysis",
      "token_efficiency": "high",
      "use_when": [
        "Understanding the structure of a large codebase",
        "Execution-flow tracing and data-flow analysis",
        "Architecture design and technical-approach evaluation",
        "Diagnosing complex problems (root cause analysis)"
      ]
    },
    "qwen": {
      "strengths": ["very long context", "code pattern recognition", "multi-dimensional analysis"],
      "triggers": ["评估", "对比", "验证"],
      "mode": "analysis",
      "token_efficiency": "high",
      "use_when": [
        "Fallback when Gemini is unavailable",
        "Second-perspective verification of analysis results",
        "Code pattern recognition and duplication detection"
      ]
    },
    "codex": {
      "strengths": ["precise code generation", "autonomous execution", "mathematical reasoning"],
      "triggers": ["实现", "重构", "修复", "生成", "测试"],
      "mode": "write",
      "token_efficiency": "medium",
      "use_when": [
        "Autonomous multi-step code modifications",
        "Complex refactoring and migration tasks",
        "Test generation and fix cycles"
      ]
    }
  },
  "cli_injection_rules": {
    "_doc": "Implicit rules: automatically inject CLI calls under specific conditions",
    "context_gathering": {
      "trigger": "file_read >= 50k chars OR module_count >= 5",
      "inject": "gemini --mode analysis",
      "reason": "Routing large code context through the CLI saves main-session tokens"
    },
    "pre_planning_analysis": {
      "trigger": "complexity === 'high' OR intent === 'exploration'",
      "inject": "gemini --mode analysis",
      "reason": "Analyze complex tasks with the CLI first to gain multi-model perspectives"
    },
    "debug_diagnosis": {
      "trigger": "intent === 'bugfix' AND root_cause_unclear",
      "inject": "gemini --mode analysis",
      "reason": "Deep diagnosis leverages Gemini's execution-flow tracing"
    },
    "code_review": {
      "trigger": "step === 'review'",
      "inject": "gemini --mode analysis",
      "reason": "Running code review through the CLI reduces token usage"
    },
    "implementation": {
      "trigger": "step === 'execute' AND task_count >= 3",
      "inject": "codex --mode write",
      "reason": "Multi-task execution is completed autonomously by Codex"
    }
  },
  "chains": {
    "rapid": {
      "name": "Rapid Iteration",
      "description": "Multi-model collaborative analysis + direct execution",
      "complexity": ["low", "medium"],
      "steps": [
        {
          "command": "/workflow:lite-plan",
          "optional": false,
          "auto_continue": true,
          "cli_hint": {
            "explore_phase": { "tool": "gemini", "mode": "analysis", "trigger": "needs_exploration" },
            "planning_phase": { "tool": "gemini", "mode": "analysis", "trigger": "complexity >= medium" }
          }
        },
        {
          "command": "/workflow:lite-execute",
          "optional": false,
          "auto_continue": false,
          "cli_hint": {
            "execution": { "tool": "codex", "mode": "write", "trigger": "user_selects_codex OR complexity >= medium" },
            "review": { "tool": "gemini", "mode": "analysis", "trigger": "user_selects_review" }
          }
        }
      ],
      "total_steps": 2,
      "estimated_time": "15-45 min"
    },
    "full": {
      "name": "Full Exploration",
      "description": "Multi-model deep analysis + brainstorming + planning + execution",
      "complexity": ["medium", "high"],
      "steps": [
        {
          "command": "/workflow:brainstorm:auto-parallel",
          "optional": false,
          "confirm_before": true,
          "cli_hint": {
            "role_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true }
          }
        },
        {
          "command": "/workflow:plan",
          "optional": false,
          "auto_continue": false,
          "cli_hint": {
            "context_gather": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
            "task_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
          }
        },
        { "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
        {
          "command": "/workflow:execute",
          "optional": false,
          "auto_continue": false,
          "cli_hint": {
            "execution": { "tool": "codex", "mode": "write", "trigger": "task_count >= 3" }
          }
        }
      ],
      "total_steps": 4,
      "estimated_time": "1-3 hours"
    },
    "coupled": {
      "name": "Coupled Planning",
      "description": "Deep CLI analysis + full planning + verification + execution",
      "complexity": ["high"],
      "steps": [
        {
          "command": "/workflow:plan",
          "optional": false,
          "auto_continue": false,
          "cli_hint": {
            "pre_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "purpose": "Architecture understanding and dependency analysis" },
            "conflict_detection": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
          }
        },
        { "command": "/workflow:action-plan-verify", "optional": false, "auto_continue": true },
        {
          "command": "/workflow:execute",
          "optional": false,
          "auto_continue": false,
          "cli_hint": {
            "execution": { "tool": "codex", "mode": "write", "trigger": "always", "purpose": "Autonomous multi-task execution" }
          }
        },
        {
          "command": "/workflow:review",
          "optional": true,
          "auto_continue": false,
          "cli_hint": {
            "review": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
          }
        }
      ],
      "total_steps": 4,
      "estimated_time": "2-4 hours"
    },
    "bugfix": {
      "name": "Bug Fix",
      "description": "CLI diagnosis + intelligent fixes",
      "complexity": ["low", "medium"],
      "variants": {
        "standard": {
          "steps": [
            {
              "command": "/workflow:lite-fix",
              "optional": false,
              "auto_continue": true,
              "cli_hint": {
                "diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "purpose": "Root-cause analysis and execution-flow tracing" },
                "fix": { "tool": "codex", "mode": "write", "trigger": "severity >= medium" }
              }
            }
          ]
        },
        "hotfix": {
          "steps": [
            {
              "command": "/workflow:lite-fix --hotfix",
              "optional": false,
              "auto_continue": true,
              "cli_hint": {
                "quick_diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "timeout": "60s" }
              }
            }
          ]
        }
      },
      "total_steps": 1,
      "estimated_time": "10-30 min"
    },
    "issue": {
      "name": "Issue Batch",
      "description": "Batch CLI analysis + queue optimization + parallel execution",
      "complexity": ["medium", "high"],
      "steps": [
        {
          "command": "/issue:plan",
          "optional": false,
          "auto_continue": false,
          "cli_hint": {
            "solution_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true }
          }
        },
        {
          "command": "/issue:queue",
          "optional": false,
          "auto_continue": false,
          "cli_hint": {
            "conflict_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "issue_count >= 3" }
          }
        },
        {
          "command": "/issue:execute",
          "optional": false,
          "auto_continue": false,
          "cli_hint": {
            "batch_execution": { "tool": "codex", "mode": "write", "trigger": "always", "purpose": "DAG parallel execution" }
          }
        }
      ],
      "total_steps": 3,
      "estimated_time": "1-4 hours"
    },
    "tdd": {
      "name": "Test-Driven Development",
      "description": "TDD planning + execution + CLI verification",
      "complexity": ["medium", "high"],
      "steps": [
        {
          "command": "/workflow:tdd-plan",
          "optional": false,
          "auto_continue": false,
          "cli_hint": {
            "test_strategy": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
          }
        },
        {
          "command": "/workflow:execute",
          "optional": false,
          "auto_continue": false,
          "cli_hint": {
            "red_green_refactor": { "tool": "codex", "mode": "write", "trigger": "always" }
          }
        },
        {
          "command": "/workflow:tdd-verify",
          "optional": false,
          "auto_continue": false,
          "cli_hint": {
            "coverage_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
          }
        }
      ],
      "total_steps": 3,
      "estimated_time": "1-3 hours"
    },
    "ui": {
      "name": "UI-First Development",
      "description": "UI design + planning + execution",
      "complexity": ["medium", "high"],
      "variants": {
        "explore": {
          "steps": [
            { "command": "/workflow:ui-design:explore-auto", "optional": false, "auto_continue": false },
            { "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
            { "command": "/workflow:plan", "optional": false, "auto_continue": false },
            { "command": "/workflow:execute", "optional": false, "auto_continue": false }
          ]
        },
        "imitate": {
          "steps": [
            { "command": "/workflow:ui-design:imitate-auto", "optional": false, "auto_continue": false },
            { "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
            { "command": "/workflow:plan", "optional": false, "auto_continue": false },
            { "command": "/workflow:execute", "optional": false, "auto_continue": false }
          ]
        }
      },
      "total_steps": 4,
      "estimated_time": "2-4 hours"
    },
    "review-fix": {
      "name": "Review and Fix",
      "description": "Multi-dimensional CLI review + automated fixes",
      "complexity": ["medium"],
      "steps": [
        {
          "command": "/workflow:review-session-cycle",
          "optional": false,
          "auto_continue": false,
          "cli_hint": {
            "multi_dimension_review": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true }
          }
        },
        {
          "command": "/workflow:review-fix",
          "optional": true,
          "auto_continue": false,
          "cli_hint": {
            "auto_fix": { "tool": "codex", "mode": "write", "trigger": "findings_count >= 3" }
          }
        }
      ],
      "total_steps": 2,
      "estimated_time": "30-90 min"
    },
    "docs": {
      "name": "Documentation",
      "description": "Batch CLI documentation generation",
      "complexity": ["low", "medium"],
      "variants": {
        "incremental": {
          "steps": [
            {
              "command": "/memory:update-related",
              "optional": false,
              "auto_continue": false,
              "cli_hint": {
                "doc_generation": { "tool": "gemini", "mode": "write", "trigger": "module_count >= 5" }
              }
            }
          ]
        },
        "full": {
          "steps": [
            { "command": "/memory:docs", "optional": false, "auto_continue": false },
            {
              "command": "/workflow:execute",
              "optional": false,
              "auto_continue": false,
              "cli_hint": {
                "batch_doc": { "tool": "gemini", "mode": "write", "trigger": "always" }
              }
            }
          ]
        }
      },
      "total_steps": 2,
      "estimated_time": "15-60 min"
    },
    "cli-analysis": {
      "name": "CLI Direct Analysis",
      "description": "Direct CLI analysis for multi-model perspectives, saving main-session tokens",
      "complexity": ["low", "medium", "high"],
      "standalone": true,
      "steps": [
        {
          "command": "ccw cli",
          "tool": "gemini",
          "mode": "analysis",
          "optional": false,
          "auto_continue": false
        }
      ],
      "use_cases": [
        "Rapid understanding of large codebases",
        "Execution-flow tracing and data-flow analysis",
        "Architecture evaluation and technical-approach comparison",
        "Performance bottleneck diagnosis"
      ],
      "total_steps": 1,
      "estimated_time": "5-15 min"
    },
    "cli-implement": {
      "name": "CLI Direct Implementation",
      "description": "Direct Codex implementation, autonomously completing multi-step tasks",
      "complexity": ["medium", "high"],
      "standalone": true,
      "steps": [
        {
          "command": "ccw cli",
          "tool": "codex",
          "mode": "write",
          "optional": false,
          "auto_continue": false
        }
      ],
      "use_cases": [
        "Feature implementation with clear requirements",
        "Code refactoring and migration",
        "Test generation",
        "Batch code modification"
      ],
      "total_steps": 1,
      "estimated_time": "15-60 min"
    },
    "cli-debug": {
      "name": "CLI Debug Session",
      "description": "CLI debug session leveraging Gemini's deep diagnostic capabilities",
      "complexity": ["medium", "high"],
      "standalone": true,
      "steps": [
        {
          "command": "ccw cli",
          "tool": "gemini",
          "mode": "analysis",
          "purpose": "hypothesis-driven debugging",
|
||||
"optional": false,
|
||||
"auto_continue": false
|
||||
}
|
||||
],
|
||||
"use_cases": [
|
||||
"复杂bug根因分析",
|
||||
"执行流异常追踪",
|
||||
"状态机错误诊断",
|
||||
"并发问题排查"
|
||||
],
|
||||
"total_steps": 1,
|
||||
"estimated_time": "10-30 min"
|
||||
}
|
||||
},
|
||||
"chain_selection_rules": {
|
||||
"intent_mapping": {
|
||||
"bugfix": ["bugfix"],
|
||||
"feature_simple": ["rapid"],
|
||||
"feature_unclear": ["full"],
|
||||
"feature_complex": ["coupled"],
|
||||
"issue_batch": ["issue"],
|
||||
"test_driven": ["tdd"],
|
||||
"ui_design": ["ui"],
|
||||
"code_review": ["review-fix"],
|
||||
"documentation": ["docs"],
|
||||
"analysis_only": ["cli-analysis"],
|
||||
"implement_only": ["cli-implement"],
|
||||
"debug": ["cli-debug", "bugfix"]
|
||||
},
|
||||
"complexity_fallback": {
|
||||
"low": "rapid",
|
||||
"medium": "coupled",
|
||||
"high": "full"
|
||||
},
|
||||
"cli_preference_rules": {
|
||||
"_doc": "用户语义触发CLI工具选择",
|
||||
"gemini_triggers": ["用 gemini", "gemini 分析", "让 gemini", "深度分析", "架构理解"],
|
||||
"qwen_triggers": ["用 qwen", "qwen 评估", "让 qwen", "第二视角"],
|
||||
"codex_triggers": ["用 codex", "codex 实现", "让 codex", "自主完成", "批量修改"]
|
||||
}
|
||||
}
|
||||
}
|
||||
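The `intent_mapping` and `complexity_fallback` tables above can be read as a two-stage lookup: try the classified intent first, then fall back on complexity. A minimal sketch of that lookup, where `selectChain` is a hypothetical helper (not part of the CCW codebase) and only a subset of the mapping is reproduced:

```javascript
// Hypothetical illustration of applying chain_selection_rules.
const intentMapping = {
  bugfix: ["bugfix"],
  feature_simple: ["rapid"],
  feature_unclear: ["full"],
  feature_complex: ["coupled"],
  debug: ["cli-debug", "bugfix"]
};
const complexityFallback = { low: "rapid", medium: "coupled", high: "full" };

// Return candidate chains for a classified intent; fall back to the
// complexity table when the intent is not in the mapping.
function selectChain(intent, complexity) {
  return intentMapping[intent] ?? [complexityFallback[complexity]];
}
```

Note that `debug` maps to two candidates, so the orchestrator would still have to rank them; the fallback path always yields exactly one.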
@@ -1,218 +0,0 @@
# Action: Bugfix Workflow

Bug-fix workflow: intelligent diagnosis + impact assessment + fix

## Pattern

```
lite-fix [--hotfix]
```

## Trigger Conditions

- Keywords: "fix", "bug", "error", "crash", "broken", "fail", "修复", "报错"
- Problem symptoms described
- Error messages present

## Execution Flow

### Standard Mode

```mermaid
sequenceDiagram
    participant U as User
    participant O as CCW Orchestrator
    participant LF as lite-fix
    participant CLI as CLI Tools

    U->>O: Bug description
    O->>O: Classify: bugfix (standard)
    O->>LF: /workflow:lite-fix "bug"

    Note over LF: Phase 1: Diagnosis
    LF->>CLI: Root cause analysis (Gemini)
    CLI-->>LF: diagnosis.json

    Note over LF: Phase 2: Impact Assessment
    LF->>LF: Risk scoring (0-10)
    LF->>LF: Severity classification
    LF-->>U: Impact report

    Note over LF: Phase 3: Fix Strategy
    LF->>LF: Generate fix options
    LF-->>U: Present strategies
    U->>LF: Select strategy

    Note over LF: Phase 4: Verification Plan
    LF->>LF: Generate test plan
    LF-->>U: Verification approach

    Note over LF: Phase 5: Confirmation
    LF->>U: Execution method?
    U->>LF: Confirm

    Note over LF: Phase 6: Execute
    LF->>CLI: Execute fix (Agent/Codex)
    CLI-->>LF: Results
    LF-->>U: Fix complete
```

### Hotfix Mode

```mermaid
sequenceDiagram
    participant U as User
    participant O as CCW Orchestrator
    participant LF as lite-fix
    participant CLI as CLI Tools

    U->>O: Urgent bug + "hotfix"
    O->>O: Classify: bugfix (hotfix)
    O->>LF: /workflow:lite-fix --hotfix "bug"

    Note over LF: Minimal Diagnosis
    LF->>CLI: Quick root cause
    CLI-->>LF: Known issue?

    Note over LF: Surgical Fix
    LF->>LF: Single optimal fix
    LF-->>U: Quick confirmation
    U->>LF: Proceed

    Note over LF: Smoke Test
    LF->>CLI: Minimal verification
    CLI-->>LF: Pass/Fail

    Note over LF: Follow-up Generation
    LF->>LF: Generate follow-up tasks
    LF-->>U: Fix deployed + follow-ups created
```

## When to Use

### Standard Mode (/workflow:lite-fix)
✅ **Use for**:
- Bugs with known symptoms
- Localized fixes (1-5 files)
- Non-urgent issues
- Cases that need a full diagnosis

### Hotfix Mode (/workflow:lite-fix --hotfix)
✅ **Use for**:
- Production incidents
- Emergency fixes
- A clear single point of failure
- Time-sensitive situations

❌ **Don't use** (for either mode):
- Architectural changes required → `/workflow:plan --mode bugfix`
- Multiple related issues → `/issue:plan`

## Severity Classification

| Score | Severity | Response | Verification |
|-------|----------|----------|--------------|
| 8-10 | Critical | Immediate | Smoke test only |
| 6-7.9 | High | Fast track | Integration tests |
| 4-5.9 | Medium | Normal | Full test suite |
| 0-3.9 | Low | Scheduled | Comprehensive |
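The severity table above is a simple threshold ladder over the 0-10 risk score. A minimal sketch of how it could be encoded; `classifySeverity` is a hypothetical helper for illustration, not a function in lite-fix:

```javascript
// Map a 0-10 risk score onto the severity table's rows.
function classifySeverity(score) {
  if (score >= 8) return { severity: "Critical", response: "Immediate", verification: "Smoke test only" };
  if (score >= 6) return { severity: "High", response: "Fast track", verification: "Integration tests" };
  if (score >= 4) return { severity: "Medium", response: "Normal", verification: "Full test suite" };
  return { severity: "Low", response: "Scheduled", verification: "Comprehensive" };
}
```

For example, the 6.5 impact score from the nginx upload example later in this doc lands in the High row, which is why it gets integration-test verification.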
## Configuration

```javascript
const bugfixConfig = {
  standard: {
    diagnosis: {
      tool: 'gemini',
      depth: 'comprehensive',
      timeout: 300000 // 5 min
    },
    impact: {
      riskThreshold: 6.0, // High risk threshold
      autoEscalate: true
    },
    verification: {
      levels: ['smoke', 'integration', 'full'],
      autoSelect: true // Based on severity
    }
  },

  hotfix: {
    diagnosis: {
      tool: 'gemini',
      depth: 'minimal',
      timeout: 60000 // 1 min
    },
    fix: {
      strategy: 'single', // Single optimal fix
      surgical: true
    },
    followup: {
      generate: true,
      types: ['comprehensive-fix', 'post-mortem']
    }
  }
}
```

## Example Invocations

```bash
# Standard bug fix
ccw "User avatar upload fails with a 413 error"
→ lite-fix
→ Diagnosis: File size limit in nginx
→ Impact: 6.5 (High)
→ Fix: Update nginx config + add client validation
→ Verify: Integration test

# Production hotfix
ccw "Urgent: payment gateway returns 5xx errors, affecting all users"
→ lite-fix --hotfix
→ Quick diagnosis: API key expired
→ Surgical fix: Rotate key
→ Smoke test: Payment flow
→ Follow-ups: Key rotation automation, monitoring alert

# Unknown root cause
ccw "Shopping cart randomly loses items, cause unknown"
→ lite-fix
→ Deep diagnosis (auto)
→ Root cause: Race condition in concurrent updates
→ Fix: Add optimistic locking
→ Verify: Concurrent test suite
```

## Output Artifacts

```
.workflow/.lite-fix/{bug-slug}-{timestamp}/
├── diagnosis.json     # Root cause analysis
├── impact.json        # Risk assessment
├── fix-plan.json      # Fix strategy
├── task.json          # Enhanced task for execution
└── followup.json      # Follow-up tasks (hotfix only)
```

## Follow-up Tasks (Hotfix Mode)

```json
{
  "followups": [
    {
      "id": "FOLLOWUP-001",
      "type": "comprehensive-fix",
      "title": "Complete fix for payment gateway issue",
      "due": "3 days",
      "description": "Implement full solution with proper error handling"
    },
    {
      "id": "FOLLOWUP-002",
      "type": "post-mortem",
      "title": "Post-mortem analysis",
      "due": "1 week",
      "description": "Document incident and prevention measures"
    }
  ]
}
```
@@ -1,194 +0,0 @@
# Action: Coupled Workflow

Complex coupled workflow: full planning + verification + execution

## Pattern

```
plan → action-plan-verify → execute
```

## Trigger Conditions

- Complexity: High
- Keywords: "refactor", "重构", "migrate", "迁移", "architect", "架构"
- Cross-module changes
- System-level modifications

## Execution Flow

```mermaid
sequenceDiagram
    participant U as User
    participant O as CCW Orchestrator
    participant PL as plan
    participant VF as verify
    participant EX as execute
    participant RV as review

    U->>O: Complex task
    O->>O: Classify: coupled (high complexity)

    Note over PL: Phase 1: Comprehensive Planning
    O->>PL: /workflow:plan
    PL->>PL: Multi-phase planning
    PL->>PL: Generate IMPL_PLAN.md
    PL->>PL: Generate task JSONs
    PL-->>U: Present plan

    Note over VF: Phase 2: Verification
    U->>VF: /workflow:action-plan-verify
    VF->>VF: Cross-artifact consistency
    VF->>VF: Dependency validation
    VF->>VF: Quality gate checks
    VF-->>U: Verification report

    alt Verification failed
        U->>PL: Replan with issues
    else Verification passed
        Note over EX: Phase 3: Execution
        U->>EX: /workflow:execute
        EX->>EX: DAG-based parallel execution
        EX-->>U: Execution complete
    end

    Note over RV: Phase 4: Review
    U->>RV: /workflow:review
    RV-->>U: Review findings
```

## When to Use

✅ **Ideal scenarios**:
- Large-scale refactoring
- Architecture migration
- Cross-module feature development
- Tech-stack upgrades
- Team collaboration projects

❌ **Avoid when**:
- Simple localized changes
- Tight deadlines
- Small independent features

## Verification Checks

| Check | Description | Severity |
|-------|-------------|----------|
| Dependency Cycles | Detect circular dependencies | Critical |
| Missing Tasks | Plan out of sync with actual tasks | High |
| File Conflicts | Multiple tasks modify the same file | Medium |
| Coverage Gaps | Requirements left uncovered | Medium |
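The Critical check above, Dependency Cycles, amounts to a depth-first search over the `depends_on` edges of the task JSONs. A minimal sketch under that assumption; `hasCycle` is illustrative, not the actual verifier implementation:

```javascript
// Detect circular depends_on references among task JSONs via DFS.
function hasCycle(tasks) {
  const deps = Object.fromEntries(tasks.map(t => [t.id, t.depends_on]));
  const state = {}; // undefined = unvisited, 1 = on current path, 2 = finished
  const visit = id => {
    if (state[id] === 1) return true;  // back edge → cycle
    if (state[id] === 2) return false; // already cleared
    state[id] = 1;
    const found = (deps[id] || []).some(visit);
    state[id] = 2;
    return found;
  };
  return tasks.some(t => visit(t.id));
}
```

A cycle here is fatal for DAG-based parallel execution, which is why the table ranks it Critical rather than Medium.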
## Configuration

```javascript
const coupledConfig = {
  plan: {
    phases: 5, // Full 5-phase planning
    taskGeneration: 'action-planning-agent',
    outputFormat: {
      implPlan: '.workflow/plans/IMPL_PLAN.md',
      taskJsons: '.workflow/tasks/IMPL-*.json'
    }
  },

  verify: {
    required: true, // Always verify before execute
    autoReplan: false, // Manual replan on failure
    qualityGates: ['no-cycles', 'no-conflicts', 'complete-coverage']
  },

  execute: {
    dagParallel: true,
    checkpointInterval: 3, // Checkpoint every 3 tasks
    rollbackOnFailure: true
  },

  review: {
    types: ['architecture', 'security'],
    required: true
  }
}
```

## Task JSON Structure

```json
{
  "id": "IMPL-001",
  "title": "Refactor the auth module's core logic",
  "scope": "src/auth/**",
  "action": "refactor",
  "depends_on": [],
  "modification_points": [
    {
      "file": "src/auth/service.ts",
      "target": "AuthService",
      "change": "Extract OAuth2 logic"
    }
  ],
  "acceptance": [
    "All existing tests pass",
    "OAuth2 flow works"
  ]
}
```

## Example Invocations

```bash
# Architecture refactoring
ccw "Refactor the entire auth module, migrating from sessions to JWT"
→ plan (5 phases)
→ verify
→ execute

# System migration
ccw "Migrate the database from MySQL to PostgreSQL"
→ plan (migration strategy)
→ verify (data integrity checks)
→ execute (staged migration)

# Cross-module feature
ccw "Implement cross-service distributed transaction support"
→ plan (architectural design)
→ verify (consistency checks)
→ execute (incremental rollout)
```

## Output Artifacts

```
.workflow/
├── plans/
│   └── IMPL_PLAN.md             # Comprehensive plan
├── tasks/
│   ├── IMPL-001.json
│   ├── IMPL-002.json
│   └── ...
├── verify/
│   └── verification-report.md   # Verification results
└── reviews/
    └── {review-type}.md         # Review findings
```

## Replan Flow

When verification fails:

```javascript
if (verificationResult.status === 'failed') {
  console.log(`
## Verification Failed

**Issues found**:
${verificationResult.issues.map(i => `- ${i.severity}: ${i.message}`).join('\n')}

**Options**:
1. /workflow:replan - Address issues and regenerate plan
2. /workflow:plan --force - Proceed despite issues (not recommended)
3. Review issues manually and fix plan files
`)
}
```
@@ -1,93 +0,0 @@
# Documentation Workflow Action

## Pattern
```
memory:docs → execute (full)
memory:update-related (incremental)
```

## Trigger Conditions

- Keywords: "文档", "documentation", "docs", "readme", "注释"
- Variant triggers:
  - `incremental`: "更新", "增量", "相关"
  - `full`: "全部", "完整", "所有"

## Variants

### Full Documentation
```mermaid
graph TD
    A[User Input] --> B[memory:docs]
    B --> C[Project structure analysis]
    C --> D[Module grouping, ≤10 per task]
    D --> E[execute: parallel generation]
    E --> F[README.md]
    E --> G[ARCHITECTURE.md]
    E --> H[API Docs]
    E --> I[Module CLAUDE.md]
```

### Incremental Update
```mermaid
graph TD
    A[Git Changes] --> B[memory:update-related]
    B --> C[Detect changed modules]
    C --> D[Locate related docs]
    D --> E[Incremental update]
```

## Configuration

| Parameter | Default | Description |
|------|--------|------|
| batch_size | 4 | Modules handled per agent |
| format | markdown | Output format |
| include_api | true | Generate API docs |
| include_diagrams | true | Generate Mermaid diagrams |
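The full-documentation flow above groups modules before fanning them out to agents, with `batch_size` modules per agent. A minimal sketch of that grouping step, assuming plain chunking; `groupModules` is a hypothetical helper, not the actual planner code:

```javascript
// Split the module list into per-agent batches (batch_size from the table above).
function groupModules(modules, batchSize = 4) {
  const batches = [];
  for (let i = 0; i < modules.length; i += batchSize) {
    batches.push(modules.slice(i, i + batchSize));
  }
  return batches;
}
```

With the default `batch_size` of 4, a project with 9 modules would spawn 3 doc-generation batches, each well under the ≤10-per-task ceiling.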
## CLI Integration

| Phase | CLI Hint | Purpose |
|------|----------|------|
| memory:docs | `gemini --mode analysis` | Project structure analysis |
| execute | `gemini --mode write` | Doc generation |
| update-related | `gemini --mode write` | Incremental updates |

## Slash Commands

```bash
/memory:docs              # Plan full doc generation
/memory:docs-full-cli     # Run full doc generation via CLI
/memory:docs-related-cli  # Run incremental doc generation via CLI
/memory:update-related    # Update docs related to recent changes
/memory:update-full       # Update all CLAUDE.md files
```

## Output Structure

```
project/
├── README.md            # Project overview
├── ARCHITECTURE.md      # Architecture doc
├── docs/
│   └── api/             # API docs
└── src/
    └── module/
        └── CLAUDE.md    # Module doc
```

## When to Use

- Bootstrapping docs for a new project
- Doc refresh before a major release
- Syncing docs after code changes
- API documentation generation

## Risk Assessment

| Risk | Mitigation |
|------|----------|
| Docs drift out of sync with code | git hook integration |
| Generated content too verbose | Controlled via batch_size |
| Important modules missed | Full-scan verification |
@@ -1,154 +0,0 @@
# Action: Full Workflow

Full exploration workflow: analysis + brainstorming + planning + execution

## Pattern

```
brainstorm:auto-parallel → plan → [verify] → execute
```

## Trigger Conditions

- Intent: Exploration (uncertainty detected)
- Keywords: "不确定", "不知道", "explore", "怎么做", "what if"
- No clear implementation path

## Execution Flow

```mermaid
sequenceDiagram
    participant U as User
    participant O as CCW Orchestrator
    participant BS as brainstorm
    participant PL as plan
    participant VF as verify
    participant EX as execute

    U->>O: Unclear task
    O->>O: Classify: full

    Note over BS: Phase 1: Brainstorm
    O->>BS: /workflow:brainstorm:auto-parallel
    BS->>BS: Multi-role parallel analysis
    BS->>BS: Synthesis & recommendations
    BS-->>U: Present options
    U->>BS: Select direction

    Note over PL: Phase 2: Plan
    BS->>PL: /workflow:plan
    PL->>PL: Generate IMPL_PLAN.md
    PL->>PL: Generate task JSONs
    PL-->>U: Review plan

    Note over VF: Phase 3: Verify (optional)
    U->>VF: /workflow:action-plan-verify
    VF->>VF: Cross-artifact consistency
    VF-->>U: Verification report

    Note over EX: Phase 4: Execute
    U->>EX: /workflow:execute
    EX->>EX: DAG-based parallel execution
    EX-->>U: Execution complete
```

## When to Use

✅ **Ideal scenarios**:
- Product direction exploration
- Technology selection and evaluation
- Architecture design decisions
- Planning complex features
- Multi-role perspectives needed

❌ **Avoid when**:
- The task is clear and simple
- Deadlines are tight
- A proven solution already exists

## Brainstorm Roles

| Role | Focus | Typical Questions |
|------|-------|-------------------|
| Product Manager | User value, market positioning | "What are the users' pain points?" |
| System Architect | Technical approach, architecture design | "How do we ensure scalability?" |
| UX Expert | User experience, interaction design | "Is the user flow smooth?" |
| Security Expert | Security risks, compliance requirements | "What are the security risks?" |
| Data Architect | Data model, storage approach | "How is the data organized?" |

## Configuration

```javascript
const fullConfig = {
  brainstorm: {
    defaultRoles: ['product-manager', 'system-architect', 'ux-expert'],
    maxRoles: 5,
    synthesis: true // Always generate synthesis
  },

  plan: {
    verifyBeforeExecute: true, // Recommend verification
    taskFormat: 'json' // Generate task JSONs
  },

  execute: {
    dagParallel: true, // DAG-based parallel execution
    testGeneration: 'optional' // Suggest test-gen after
  }
}
```

## Continuation Points

After each phase, CCW can continue to the next:

```javascript
// After brainstorm completes
console.log(`
## Brainstorm Complete

**Next steps**:
1. /workflow:plan "Plan implementation based on brainstorm results"
2. Or refine: /workflow:brainstorm:synthesis
`)

// After plan completes
console.log(`
## Plan Complete

**Next steps**:
1. /workflow:action-plan-verify (recommended)
2. /workflow:execute (execute directly)
`)
```

## Example Invocations

```bash
# Product exploration
ccw "I want to build a team collaboration tool but am unsure of the direction"
→ brainstorm:auto-parallel (5 roles)
→ plan
→ execute

# Technical exploration
ccw "How do I design a highly available message queue system?"
→ brainstorm:auto-parallel (system-architect, data-architect)
→ plan
→ verify
→ execute
```

## Output Artifacts

```
.workflow/
├── brainstorm/
│   └── {session}/
│       ├── role-{role}.md
│       └── synthesis.md
├── plans/
│   └── IMPL_PLAN.md
└── tasks/
    └── IMPL-*.json
```
@@ -1,201 +0,0 @@
# Action: Issue Workflow

Batch issue-processing workflow: planning + queueing + batch execution

## Pattern

```
issue:plan → issue:queue → issue:execute
```

## Trigger Conditions

- Keywords: "issues", "batch", "queue", "多个", "批量"
- Multiple related problems
- Long-running fix campaigns
- Priority-based processing needed

## Execution Flow

```mermaid
sequenceDiagram
    participant U as User
    participant O as CCW Orchestrator
    participant IP as issue:plan
    participant IQ as issue:queue
    participant IE as issue:execute

    U->>O: Multiple issues / batch fix
    O->>O: Classify: issue

    Note over IP: Phase 1: Issue Planning
    O->>IP: /issue:plan
    IP->>IP: Load unplanned issues
    IP->>IP: Generate solutions per issue
    IP->>U: Review solutions
    U->>IP: Bind selected solutions

    Note over IQ: Phase 2: Queue Formation
    IP->>IQ: /issue:queue
    IQ->>IQ: Conflict analysis
    IQ->>IQ: Priority calculation
    IQ->>IQ: DAG construction
    IQ->>U: High-severity conflicts?
    U->>IQ: Resolve conflicts
    IQ->>IQ: Generate execution queue

    Note over IE: Phase 3: Execution
    IQ->>IE: /issue:execute
    IE->>IE: DAG-based parallel execution
    IE->>IE: Per-solution progress tracking
    IE-->>U: Batch execution complete
```

## When to Use

✅ **Ideal scenarios**:
- Multiple related bugs needing batch fixes
- Batch processing of GitHub Issues
- Tech-debt cleanup
- Batch security-vulnerability fixes
- Code-quality improvement campaigns

❌ **Avoid when**:
- A single issue → `/workflow:lite-fix`
- Independent, unrelated tasks → handle them separately
- Urgent production issues → `/workflow:lite-fix --hotfix`

## Issue Lifecycle

```
draft → planned → queued → executing → completed
           ↓          ↓
        skipped    on-hold
```

## Conflict Types

| Type | Description | Resolution |
|------|-------------|------------|
| File | Multiple solutions modify the same file | Sequential execution |
| API | Impact of API signature changes | Dependency ordering |
| Data | Conflicting data-structure changes | User decision |
| Dependency | Package dependency conflicts | Version negotiation |
| Architecture | Conflicting architectural directions | User decision (high severity) |
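The conflict table above splits into conflicts the queue can resolve mechanically and those that must be escalated to the user. A minimal sketch of that lookup; the `resolutions` map and `needsUserDecision` are illustrative names, not part of the issue-queue-agent:

```javascript
// Mirror of the conflict table: each conflict type maps to one resolution strategy.
const resolutions = {
  file: "sequential-execution",
  api: "dependency-ordering",
  data: "user-decision",
  dependency: "version-negotiation",
  architecture: "user-decision"
};

// Conflicts resolved by "user-decision" are the ones queue formation must surface.
function needsUserDecision(conflictType) {
  return resolutions[conflictType] === "user-decision";
}
```

This matches the flow diagram, where only high-severity conflicts (architecture, data) interrupt queue formation with a question to the user.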
## Configuration

```javascript
const issueConfig = {
  plan: {
    solutionsPerIssue: 3, // Generate up to 3 solutions
    autoSelect: false, // User must bind solution
    planningAgent: 'issue-plan-agent'
  },

  queue: {
    conflictAnalysis: true,
    priorityCalculation: true,
    clarifyThreshold: 'high', // Ask user for high-severity conflicts
    queueAgent: 'issue-queue-agent'
  },

  execute: {
    dagParallel: true,
    executionLevel: 'solution', // Execute by solution, not task
    executor: 'codex',
    resumable: true
  }
}
```

## Example Invocations

```bash
# From GitHub Issues
ccw "Batch-process all GitHub Issues with label:bug"
→ issue:new (import from GitHub)
→ issue:plan (generate solutions)
→ issue:queue (form execution queue)
→ issue:execute (batch execute)

# Tech debt cleanup
ccw "Work through all TODO comments and known tech debt"
→ issue:discover (find issues)
→ issue:plan (plan solutions)
→ issue:queue (prioritize)
→ issue:execute (execute)

# Security vulnerabilities
ccw "Fix all security vulnerabilities reported by npm audit"
→ issue:new (from audit report)
→ issue:plan (upgrade strategies)
→ issue:queue (conflict resolution)
→ issue:execute (staged upgrades)
```

## Queue Structure

```json
{
  "queue_id": "QUE-20251227-143000",
  "status": "active",
  "execution_groups": [
    {
      "id": "P1",
      "type": "parallel",
      "solutions": ["SOL-ISS-001-1", "SOL-ISS-002-1"],
      "description": "Independent fixes, no file overlap"
    },
    {
      "id": "S1",
      "type": "sequential",
      "solutions": ["SOL-ISS-003-1"],
      "depends_on": ["P1"],
      "description": "Depends on P1 completion"
    }
  ]
}
```

## Output Artifacts

```
.workflow/issues/
├── issues.jsonl          # All issues (one per line)
├── solutions/
│   ├── ISS-001.jsonl     # Solutions for ISS-001
│   └── ISS-002.jsonl
├── queues/
│   ├── index.json        # Queue index
│   └── QUE-xxx.json      # Queue details
└── execution/
    └── {queue-id}/
        ├── progress.json
        └── results/
```

## Progress Tracking

```javascript
// Real-time progress during execution
const progress = {
  queue_id: "QUE-xxx",
  total_solutions: 5,
  completed: 2,
  in_progress: 1,
  pending: 2,
  current_group: "P1",
  eta: "15 minutes"
}
```

## Resume Capability

```bash
# If execution interrupted
ccw "Resume the issue queue"
→ Detects active queue: QUE-xxx
→ Resumes from last checkpoint
→ /issue:execute --resume
```
@@ -1,104 +0,0 @@
# Action: Rapid Workflow

Rapid iteration workflow: multi-model collaborative analysis + direct execution

## Pattern

```
lite-plan → lite-execute
```

## Trigger Conditions

- Complexity: Low to Medium
- Intent: Feature development
- Context: Clear requirements, known implementation path
- No uncertainty keywords

## Execution Flow

```mermaid
sequenceDiagram
    participant U as User
    participant O as CCW Orchestrator
    participant LP as lite-plan
    participant LE as lite-execute
    participant CLI as CLI Tools

    U->>O: Task description
    O->>O: Classify: rapid
    O->>LP: /workflow:lite-plan "task"

    LP->>LP: Complexity assessment
    LP->>CLI: Parallel explorations (if needed)
    CLI-->>LP: Exploration results
    LP->>LP: Generate plan.json
    LP->>U: Display plan, ask confirmation
    U->>LP: Confirm + select execution method

    LP->>LE: /workflow:lite-execute --in-memory
    LE->>CLI: Execute tasks (Agent/Codex)
    CLI-->>LE: Results
    LE->>LE: Optional code review
    LE-->>U: Execution complete
```

## When to Use

✅ **Ideal scenarios**:
- Adding a single feature (e.g. user avatar upload)
- Modifying an existing feature (e.g. updating form validation)
- Small refactors (e.g. extracting a shared method)
- Adding test cases
- Documentation updates

❌ **Avoid when**:
- The implementation approach is unclear
- The change spans multiple modules
- Architectural decisions are required
- Complex dependencies are involved

## Configuration

```javascript
const rapidConfig = {
  explorationThreshold: {
    // Force exploration if task mentions specific files
    forceExplore: /\b(file|文件|module|模块|class|类)\s*[::]?\s*\w+/i,
    // Skip exploration for simple tasks
    skipExplore: /\b(add|添加|create|创建)\s+(comment|注释|log|日志)/i
  },

  defaultExecution: 'Agent', // Agent for low complexity

  codeReview: {
    default: 'Skip', // Skip review for simple tasks
    threshold: 'medium' // Enable for medium+ complexity
  }
}
```
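The two regexes in the config above gate whether lite-plan runs an exploration pass. A minimal sketch of how they could be combined, assuming the skip pattern takes precedence when both match; `shouldExplore` is a hypothetical helper, not lite-plan's actual logic:

```javascript
// Same patterns as rapidConfig.explorationThreshold above.
const forceExplore = /\b(file|文件|module|模块|class|类)\s*[::]?\s*\w+/i;
const skipExplore = /\b(add|添加|create|创建)\s+(comment|注释|log|日志)/i;

// Decide whether the task description warrants a code exploration pass.
function shouldExplore(task) {
  if (skipExplore.test(task)) return false; // trivially simple task
  return forceExplore.test(task);           // mentions a concrete file/module/class
}
```

So "Optimize module AuthService" triggers exploration (it names a module), while "add comment to the parser" skips it.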
## Example Invocations

```bash
# Simple feature
ccw "Add a user logout button"
→ lite-plan → lite-execute (Agent)

# With exploration
ccw "Optimize AuthService's token refresh logic"
→ lite-plan -e → lite-execute (Agent, Gemini review)

# Medium complexity
ccw "Implement local storage for user preference settings"
→ lite-plan -e → lite-execute (Codex)
```

## Output Artifacts

```
.workflow/.lite-plan/{task-slug}-{date}/
├── exploration-*.json           # If exploration was triggered
├── explorations-manifest.json
└── plan.json                    # Implementation plan
```
@@ -1,84 +0,0 @@
# Review-Fix Workflow Action

## Pattern
```
review-session-cycle → review-fix
```

## Trigger Conditions

- Keywords: "review", "审查", "检查代码", "code review", "质量检查"
- Scenarios: PR review, code-quality improvement, security audits

## Execution Flow

```mermaid
graph TD
    A[User Input] --> B[review-session-cycle]
    B --> C{7-dimension analysis}
    C --> D[Security]
    C --> E[Performance]
    C --> F[Maintainability]
    C --> G[Architecture]
    C --> H[Code Style]
    C --> I[Test Coverage]
    C --> J[Documentation]
    D & E & F & G & H & I & J --> K[Findings Aggregation]
    K --> L{Quality Gate}
    L -->|Pass| M[Report Only]
    L -->|Fail| N[review-fix]
    N --> O[Auto Fix]
    O --> P[Re-verify]
```

## Configuration

| Parameter | Default | Description |
|------|--------|------|
| dimensions | all | Review dimensions (security, performance, etc.) |
| quality_gate | 80 | Quality-gate score threshold |
| auto_fix | true | Automatically fix findings |
| severity_threshold | medium | Minimum severity to report |
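The `quality_gate` and `severity_threshold` parameters above combine into a pass/fail decision over the aggregated findings. A minimal sketch under an assumed scoring rule (a flat 10-point penalty per relevant finding, which is an illustration, not the actual formula); `qualityGate` is likewise a hypothetical name:

```javascript
// Filter findings by severity_threshold, then score against quality_gate.
function qualityGate(findings, { gate = 80, threshold = "medium" } = {}) {
  const rank = { low: 0, medium: 1, high: 2, critical: 3 };
  const relevant = findings.filter(f => rank[f.severity] >= rank[threshold]);
  const score = 100 - relevant.length * 10; // assumed penalty per finding
  return { score, pass: score >= gate, relevant };
}
```

In the flow diagram above, a failing gate is what routes the session into `review-fix`; a passing gate ends with a report only.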
## CLI Integration

| Phase | CLI Hint | Purpose |
|------|----------|------|
| review-session-cycle | `gemini --mode analysis` | Multi-dimension deep analysis |
| review-fix | `codex --mode write` | Automatic issue fixing |

## Slash Commands

```bash
/workflow:review-session-cycle   # Session-level code review
/workflow:review-module-cycle    # Module-level code review
/workflow:review-fix             # Auto-fix review findings
/workflow:review --type security # Targeted security review
```

## Review Dimensions

| Dimension | Checks |
|------|--------|
| Security | Injection, XSS, sensitive-data exposure |
| Performance | N+1 queries, memory leaks, algorithmic complexity |
| Maintainability | Code duplication, complexity, naming |
| Architecture | Dependency direction, layering violations, coupling |
| Code Style | Formatting, conventions, consistency |
| Test Coverage | Coverage rate, edge cases |
| Documentation | Comments, API docs, README |

## When to Use

- Pre-merge PR review
- Post-refactor quality verification
- Security and compliance audits
- Tech-debt assessment

## Risk Assessment

| Risk | Mitigation |
|------|----------|
| Too many false positives | Filtered via severity_threshold |
| Fixes introduce new issues | Re-verify loop |
| Incomplete review coverage | 7-dimension coverage |
@@ -1,66 +0,0 @@
# TDD Workflow Action

## Pattern
```
tdd-plan → execute → tdd-verify
```

## Trigger Conditions

- Keywords: "tdd", "test-driven", "测试驱动", "先写测试", "red-green"
- Scenarios: strong code-quality guarantees needed, critical business logic, high regression risk

## Execution Flow

```mermaid
graph TD
    A[User Input] --> B[tdd-plan]
    B --> C{Generate test task chain}
    C --> D[Red Phase: write failing tests]
    D --> E[execute: implement code]
    E --> F[Green Phase: tests pass]
    F --> G{Refactor needed?}
    G -->|Yes| H[Refactor Phase]
    H --> F
    G -->|No| I[tdd-verify]
    I --> J[Quality report]
```

## Configuration

| Parameter | Default | Description |
|------|--------|------|
| coverage_target | 80% | Target coverage |
| cycle_limit | 10 | Max Red-Green-Refactor cycles |
| strict_mode | false | Strict mode (must go red before green) |
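The `cycle_limit` parameter above bounds the Red-Green-Refactor loop from the flow diagram. A minimal sketch of that guard; `tddLoop` and its `runCycle` callback (returning `{ green, needsRefactor }`) are hypothetical names for illustration:

```javascript
// Bound the Red-Green-Refactor loop by cycle_limit; stop on a clean green cycle.
function tddLoop(runCycle, cycleLimit = 10) {
  for (let cycle = 1; cycle <= cycleLimit; cycle++) {
    const { green, needsRefactor } = runCycle(cycle);
    if (green && !needsRefactor) return { done: true, cycles: cycle };
  }
  return { done: false, cycles: cycleLimit }; // limit hit; escalate instead of spinning
}
```

Hitting the limit corresponds to the "Too many cycles" risk in the table below: the loop stops and the situation is surfaced rather than iterating forever.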
## CLI Integration
|
||||
|
||||
| 阶段 | CLI Hint | 用途 |
|
||||
|------|----------|------|
|
||||
| tdd-plan | `gemini --mode analysis` | 分析测试策略 |
|
||||
| execute | `codex --mode write` | 实现代码 |
|
||||
| tdd-verify | `gemini --mode analysis` | 验证TDD合规性 |
|
||||
|
||||
## Slash Commands
|
||||
|
||||
```bash
|
||||
/workflow:tdd-plan # 生成TDD任务链
|
||||
/workflow:execute # 执行Red-Green-Refactor
|
||||
/workflow:tdd-verify # 验证TDD合规性+覆盖率
|
||||
```
|
||||
|
||||
## When to Use
|
||||
|
||||
- 核心业务逻辑开发
|
||||
- 需要高测试覆盖率的模块
|
||||
- 重构现有代码时确保不破坏功能
|
||||
- 团队要求TDD实践
|
||||
|
||||
## Risk Assessment
|
||||
|
||||
| 风险 | 缓解措施 |
|
||||
|------|----------|
|
||||
| 测试粒度不当 | tdd-plan阶段评估测试边界 |
|
||||
| 过度测试 | 聚焦行为而非实现 |
|
||||
| 循环过多 | cycle_limit限制 |
|
||||
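The cycle_limit setting caps the Red-Green-Refactor loop. A minimal sketch of that guard, under stated assumptions: `runTests` and `applyFix` are hypothetical stand-ins for the execute and tdd-verify steps, not part of the workflow API.

```javascript
// Hypothetical sketch of the cycle_limit guard from the Configuration table.
// runTests() returns true once the suite is green; applyFix() is one
// implementation attempt (one Red→Green iteration).
function redGreenRefactor({ runTests, applyFix, cycleLimit = 10 }) {
  for (let cycle = 1; cycle <= cycleLimit; cycle++) {
    if (runTests()) return { status: 'green', cycles: cycle }
    applyFix()
  }
  // Give up rather than loop forever on a test that never passes
  return { status: 'cycle_limit_exceeded', cycles: cycleLimit }
}
```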
@@ -1,79 +0,0 @@

# UI Design Workflow Action

## Pattern

```
ui-design:[explore|imitate]-auto → design-sync → plan → execute
```

## Trigger Conditions

- Keywords: "ui", "界面", "design", "组件", "样式", "布局", "前端"
- Variant triggers:
  - `imitate`: "参考", "模仿", "像", "类似"
  - `explore`: default when no specific reference is given

## Variants

### Explore (exploratory design)

```mermaid
graph TD
    A[User Input] --> B[ui-design:explore-auto]
    B --> C[Design-system analysis]
    C --> D[Component structure planning]
    D --> E[design-sync]
    E --> F[plan]
    F --> G[execute]
```

### Imitate (reference-based design)

```mermaid
graph TD
    A[User Input + Reference] --> B[ui-design:imitate-auto]
    B --> C[Reference analysis]
    C --> D[Style extraction]
    D --> E[design-sync]
    E --> F[plan]
    F --> G[execute]
```

## Configuration

| Parameter | Default | Description |
|-----------|---------|-------------|
| design_system | auto | Design system (auto/tailwind/mui/custom) |
| responsive | true | Responsive design |
| accessibility | true | Accessibility support |

## CLI Integration

| Stage | CLI Hint | Purpose |
|-------|----------|---------|
| explore/imitate | `gemini --mode analysis` | Design analysis, style extraction |
| design-sync | - | Sync design decisions with the codebase |
| plan | - | Built-in planning |
| execute | `codex --mode write` | Component implementation |

## Slash Commands

```bash
/workflow:ui-design:explore-auto   # Exploratory UI design
/workflow:ui-design:imitate-auto   # Reference-based UI design
/workflow:ui-design:design-sync    # Sync design with code (key step)
/workflow:ui-design:style-extract  # Extract existing styles
/workflow:ui-design:codify-style   # Codify styles
```

## When to Use

- New page/component development
- UI refactoring or modernization
- Establishing a design system
- Borrowing design from another product

## Risk Assessment

| Risk | Mitigation |
|------|------------|
| Inconsistent design | style-extract ensures reuse |
| Responsive-layout issues | Validate at multiple breakpoints |
| Missing accessibility | Integrated a11y checks |
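The imitate-vs-explore variant triggers can be sketched as a tiny selector. This is a hedged sketch: the keyword list restates the documented triggers, and "reference" is an assumed English counterpart, not taken from the workflow spec.

```javascript
// Hypothetical sketch: pick the ui-design variant from trigger keywords.
// The Chinese keywords restate the Trigger Conditions list above;
// "reference" is an assumed English addition for illustration.
const IMITATE_TRIGGERS = ['参考', '模仿', '像', '类似', 'reference']

function pickUiVariant(text) {
  const t = text.toLowerCase()
  // Any reference-style keyword selects imitate; otherwise explore is default
  return IMITATE_TRIGGERS.some(k => t.includes(k)) ? 'imitate' : 'explore'
}
```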
@@ -1,435 +0,0 @@

# CCW Orchestrator

Stateless orchestrator: analyze input → select a workflow chain → execute with TODO tracking.

## Architecture

```
┌────────────────────────────────────────────────────────────┐
│                      CCW Orchestrator                      │
├────────────────────────────────────────────────────────────┤
│                                                            │
│  Phase 1: Input Analysis                                   │
│  ├─ Parse input (natural language / explicit command)      │
│  ├─ Classify intent (bugfix / feature / issue / ui / docs) │
│  └─ Assess complexity (low / medium / high)                │
│                                                            │
│  Phase 2: Chain Selection                                  │
│  ├─ Load index/workflow-chains.json                        │
│  ├─ Match intent → chain(s)                                │
│  ├─ Filter by complexity                                   │
│  └─ Select optimal chain                                   │
│                                                            │
│  Phase 3: User Confirmation (optional)                     │
│  ├─ Display selected chain and steps                       │
│  └─ Allow modification or manual selection                 │
│                                                            │
│  Phase 4: TODO Tracking Setup                              │
│  ├─ Create TodoWrite with chain steps                      │
│  └─ Mark first step as in_progress                         │
│                                                            │
│  Phase 5: Execution Loop                                   │
│  ├─ Execute current step (SlashCommand)                    │
│  ├─ Update TODO status (completed)                         │
│  ├─ Check auto_continue flag                               │
│  └─ Proceed to next step or wait for user                  │
│                                                            │
└────────────────────────────────────────────────────────────┘
```

## Implementation

### Phase 1: Input Analysis

```javascript
// Load external configuration (externalized for flexibility)
const intentRules = JSON.parse(Read('.claude/skills/ccw/index/intent-rules.json'))
const capabilities = JSON.parse(Read('.claude/skills/ccw/index/command-capabilities.json'))

function analyzeInput(userInput) {
  const input = userInput.trim()

  // Check for explicit command passthrough
  if (input.match(/^\/(?:workflow|issue|memory|task):/)) {
    return { type: 'explicit', command: input, passthrough: true }
  }

  // Classify intent using external rules
  const intent = classifyIntent(input, intentRules.intent_patterns)

  // Assess complexity using external indicators
  const complexity = assessComplexity(input, intentRules.complexity_indicators)

  // Detect tool preferences using external triggers
  const toolPreference = detectToolPreference(input, intentRules.cli_tool_triggers)

  return {
    type: 'natural',
    text: input,
    intent,
    complexity,
    toolPreference,
    passthrough: false
  }
}

function classifyIntent(text, patterns) {
  // Sort by priority
  const sorted = Object.entries(patterns)
    .sort((a, b) => a[1].priority - b[1].priority)

  for (const [intentType, config] of sorted) {
    // Handle variants (bugfix, ui, docs)
    if (config.variants) {
      for (const [variant, variantConfig] of Object.entries(config.variants)) {
        const variantPatterns = variantConfig.patterns || variantConfig.triggers || []
        if (matchesAnyPattern(text, variantPatterns)) {
          // For bugfix, check if standard patterns also match
          if (intentType === 'bugfix') {
            const standardMatch = matchesAnyPattern(text, config.variants.standard?.patterns || [])
            if (standardMatch) {
              return { type: intentType, variant, workflow: variantConfig.workflow }
            }
          } else {
            return { type: intentType, variant, workflow: variantConfig.workflow }
          }
        }
      }
      // Check default variant
      if (config.variants.standard) {
        if (matchesAnyPattern(text, config.variants.standard.patterns)) {
          return { type: intentType, variant: 'standard', workflow: config.variants.standard.workflow }
        }
      }
    }

    // Handle simple patterns (exploration, tdd, review)
    if (config.patterns && !config.require_both) {
      if (matchesAnyPattern(text, config.patterns)) {
        return { type: intentType, workflow: config.workflow }
      }
    }

    // Handle dual-pattern matching (issue_batch)
    if (config.require_both && config.patterns) {
      const matchBatch = matchesAnyPattern(text, config.patterns.batch_keywords)
      const matchAction = matchesAnyPattern(text, config.patterns.action_keywords)
      if (matchBatch && matchAction) {
        return { type: intentType, workflow: config.workflow }
      }
    }
  }

  // Default to feature
  return { type: 'feature' }
}

function matchesAnyPattern(text, patterns) {
  if (!Array.isArray(patterns)) return false
  const lowerText = text.toLowerCase()
  return patterns.some(p => lowerText.includes(p.toLowerCase()))
}

function assessComplexity(text, indicators) {
  let score = 0

  for (const [level, config] of Object.entries(indicators)) {
    if (config.patterns) {
      for (const [category, patternConfig] of Object.entries(config.patterns)) {
        if (matchesAnyPattern(text, patternConfig.keywords)) {
          score += patternConfig.weight || 1
        }
      }
    }
  }

  if (score >= indicators.high.score_threshold) return 'high'
  if (score >= indicators.medium.score_threshold) return 'medium'
  return 'low'
}

function detectToolPreference(text, triggers) {
  for (const [tool, config] of Object.entries(triggers)) {
    // Check explicit triggers
    if (matchesAnyPattern(text, config.explicit)) return tool
    // Check semantic triggers
    if (matchesAnyPattern(text, config.semantic)) return tool
  }
  return null
}
```
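The helpers above can be exercised against a stub rule set. The indicator data below is illustrative only: the real intent-rules.json contents are not shown in this document, so its shape is inferred from the lookups above.

```javascript
// matchesAnyPattern / assessComplexity restated from Phase 1 so this
// check runs standalone; stubIndicators is made-up illustrative data.
function matchesAnyPattern(text, patterns) {
  if (!Array.isArray(patterns)) return false
  const lowerText = text.toLowerCase()
  return patterns.some(p => lowerText.includes(p.toLowerCase()))
}

function assessComplexity(text, indicators) {
  let score = 0
  for (const [, config] of Object.entries(indicators)) {
    for (const [, patternConfig] of Object.entries(config.patterns || {})) {
      if (matchesAnyPattern(text, patternConfig.keywords)) {
        score += patternConfig.weight || 1
      }
    }
  }
  if (score >= indicators.high.score_threshold) return 'high'
  if (score >= indicators.medium.score_threshold) return 'medium'
  return 'low'
}

const stubIndicators = {
  high:   { score_threshold: 4, patterns: {
    arch:  { keywords: ['refactor', 'architecture'], weight: 2 },
    scale: { keywords: ['distributed', 'microservice'], weight: 2 } } },
  medium: { score_threshold: 2, patterns: {
    api:   { keywords: ['api', 'endpoint'], weight: 1 },
    db:    { keywords: ['database', 'schema'], weight: 1 } } },
  low:    { score_threshold: 0, patterns: {
    small: { keywords: ['add', 'simple'], weight: 1 } } }
}
```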

### Phase 2: Chain Selection

```javascript
// Load workflow chains index
const chains = JSON.parse(Read('.claude/skills/ccw/index/workflow-chains.json'))

function selectChain(analysis) {
  const { intent, complexity } = analysis

  // Map intent type (from intent-rules.json) to chain ID (from workflow-chains.json)
  const chainMapping = {
    'bugfix': 'bugfix',
    'issue_batch': 'issue',    // intent-rules.json key → chains.json chain ID
    'exploration': 'full',
    'ui_design': 'ui',         // intent-rules.json key → chains.json chain ID
    'tdd': 'tdd',
    'review': 'review-fix',
    'documentation': 'docs',   // intent-rules.json key → chains.json chain ID
    'feature': null            // Use complexity fallback
  }

  let chainId = chainMapping[intent.type]

  // Fallback to complexity-based selection
  if (!chainId) {
    chainId = chains.chain_selection_rules.complexity_fallback[complexity]
  }

  const chain = chains.chains[chainId]

  // Handle variants
  let steps = chain.steps
  if (chain.variants) {
    const variant = intent.variant || Object.keys(chain.variants)[0]
    steps = chain.variants[variant].steps
  }

  return {
    id: chainId,
    name: chain.name,
    description: chain.description,
    steps,
    complexity: chain.complexity,
    estimated_time: chain.estimated_time
  }
}
```
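The chain-selection logic can be checked against a stub index. The stub below is illustrative; the real workflow-chains.json shape is inferred from the lookups above, and the chains index is injected as a parameter here purely so the sketch runs standalone.

```javascript
// selectChain restated with the chains index injected (an assumption made
// for testability; the documented version reads a module-level `chains`).
function selectChain(analysis, chains) {
  const { intent, complexity } = analysis
  const chainMapping = {
    bugfix: 'bugfix', issue_batch: 'issue', exploration: 'full',
    ui_design: 'ui', tdd: 'tdd', review: 'review-fix',
    documentation: 'docs', feature: null
  }

  let chainId = chainMapping[intent.type]
  if (!chainId) {
    chainId = chains.chain_selection_rules.complexity_fallback[complexity]
  }

  const chain = chains.chains[chainId]
  let steps = chain.steps
  if (chain.variants) {
    const variant = intent.variant || Object.keys(chain.variants)[0]
    steps = chain.variants[variant].steps
  }
  return { id: chainId, name: chain.name, steps }
}

// Illustrative stub index, not the real workflow-chains.json
const stubChains = {
  chain_selection_rules: { complexity_fallback: { low: 'rapid', medium: 'coupled', high: 'full' } },
  chains: {
    rapid: { name: 'Rapid', steps: [{ command: '/workflow:lite-plan' }, { command: '/workflow:lite-execute' }] },
    tdd:   { name: 'TDD',   steps: [{ command: '/workflow:tdd-plan' }, { command: '/workflow:execute' }, { command: '/workflow:tdd-verify' }] }
  }
}
```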

### Phase 3: User Confirmation

```javascript
function confirmChain(selectedChain, analysis) {
  // Skip confirmation for simple chains
  if (selectedChain.steps.length <= 2 && analysis.complexity === 'low') {
    return selectedChain
  }

  console.log(`
## CCW Workflow Selection

**Task**: ${analysis.text.substring(0, 80)}...
**Intent**: ${analysis.intent.type}${analysis.intent.variant ? ` (${analysis.intent.variant})` : ''}
**Complexity**: ${analysis.complexity}

**Selected Chain**: ${selectedChain.name}
**Description**: ${selectedChain.description}
**Estimated Time**: ${selectedChain.estimated_time}

**Steps**:
${selectedChain.steps.map((s, i) => `${i + 1}. ${s.command}${s.optional ? ' (optional)' : ''}`).join('\n')}
`)

  const response = AskUserQuestion({
    questions: [{
      question: `Proceed with ${selectedChain.name}?`,
      header: "Confirm",
      multiSelect: false,
      options: [
        { label: "Proceed", description: `Execute ${selectedChain.steps.length} steps` },
        { label: "Rapid", description: "Use lite-plan → lite-execute" },
        { label: "Full", description: "Use brainstorm → plan → execute" },
        { label: "Manual", description: "Specify commands manually" }
      ]
    }]
  })

  // Handle alternative selection
  if (response.Confirm === 'Rapid') {
    return selectChain({ intent: { type: 'feature' }, complexity: 'low' })
  }
  if (response.Confirm === 'Full') {
    return chains.chains['full']
  }
  if (response.Confirm === 'Manual') {
    return null // User will specify
  }

  return selectedChain
}
```

### Phase 4: TODO Tracking Setup

```javascript
function setupTodoTracking(chain, analysis) {
  const todos = chain.steps.map((step, index) => ({
    content: `[${index + 1}/${chain.steps.length}] ${step.command}`,
    status: index === 0 ? 'in_progress' : 'pending',
    activeForm: `Executing ${step.command}`
  }))

  // Add header todo
  todos.unshift({
    content: `CCW: ${chain.name} (${chain.steps.length} steps)`,
    status: 'in_progress',
    activeForm: `Running ${chain.name} workflow`
  })

  TodoWrite({ todos })

  return {
    chain,
    currentStep: 0,
    todos
  }
}
```
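The TodoWrite payload built in Phase 4 has one header entry plus one entry per step. With the tool call stubbed out (TodoWrite is a captured stand-in here, not the real tool), the payload shape can be inspected directly:

```javascript
// setupTodoTracking restated with TodoWrite injected as a stub so the
// payload can be inspected outside the orchestrator runtime.
function setupTodoTracking(chain, TodoWrite) {
  const todos = chain.steps.map((step, index) => ({
    content: `[${index + 1}/${chain.steps.length}] ${step.command}`,
    status: index === 0 ? 'in_progress' : 'pending',
    activeForm: `Executing ${step.command}`
  }))

  // Header todo summarizing the whole chain
  todos.unshift({
    content: `CCW: ${chain.name} (${chain.steps.length} steps)`,
    status: 'in_progress',
    activeForm: `Running ${chain.name} workflow`
  })

  TodoWrite({ todos })
  return { chain, currentStep: 0, todos }
}
```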

### Phase 5: Execution Loop

```javascript
async function executeChain(execution, analysis) {
  const { chain, todos } = execution
  let currentStep = 0

  while (currentStep < chain.steps.length) {
    const step = chain.steps[currentStep]

    // Update TODO: header stays in_progress, current step in_progress,
    // earlier steps completed, later steps pending
    const updatedTodos = todos.map((t, i) => ({
      ...t,
      status: i === 0
        ? 'in_progress'
        : i === currentStep + 1
          ? 'in_progress'
          : i <= currentStep
            ? 'completed'
            : 'pending'
    }))
    TodoWrite({ todos: updatedTodos })

    console.log(`\n### Step ${currentStep + 1}/${chain.steps.length}: ${step.command}\n`)

    // Check for confirmation requirement
    if (step.confirm_before) {
      const proceed = AskUserQuestion({
        questions: [{
          question: `Ready to execute ${step.command}?`,
          header: "Step",
          multiSelect: false,
          options: [
            { label: "Execute", description: "Run this step" },
            { label: "Skip", description: "Skip to next step" },
            { label: "Abort", description: "Stop workflow" }
          ]
        }]
      })

      if (proceed.Step === 'Skip') {
        currentStep++
        continue
      }
      if (proceed.Step === 'Abort') {
        break
      }
    }

    // Execute the command
    const args = analysis.text
    SlashCommand(step.command, { args })

    // Mark step as completed
    updatedTodos[currentStep + 1].status = 'completed'
    TodoWrite({ todos: updatedTodos })

    currentStep++

    // Check auto_continue
    if (!step.auto_continue && currentStep < chain.steps.length) {
      console.log(`
Step completed. Next: ${chain.steps[currentStep].command}
Type "continue" to proceed or specify a different action.
`)
      // Wait for user input before continuing
      break
    }
  }

  // Final status
  if (currentStep >= chain.steps.length) {
    const finalTodos = todos.map(t => ({ ...t, status: 'completed' }))
    TodoWrite({ todos: finalTodos })

    console.log(`\n✓ ${chain.name} workflow completed (${chain.steps.length} steps)`)
  }

  return { completed: currentStep, total: chain.steps.length }
}
```

## Main Orchestration Entry

```javascript
async function ccwOrchestrate(userInput) {
  console.log('## CCW Orchestrator\n')

  // Phase 1: Analyze input
  const analysis = analyzeInput(userInput)

  // Handle explicit command passthrough
  if (analysis.passthrough) {
    console.log(`Direct command: ${analysis.command}`)
    return SlashCommand(analysis.command)
  }

  // Phase 2: Select chain
  const selectedChain = selectChain(analysis)

  // Phase 3: Confirm (for complex workflows)
  const confirmedChain = confirmChain(selectedChain, analysis)
  if (!confirmedChain) {
    console.log('Manual mode selected. Specify commands directly.')
    return
  }

  // Phase 4: Setup TODO tracking
  const execution = setupTodoTracking(confirmedChain, analysis)

  // Phase 5: Execute
  const result = await executeChain(execution, analysis)

  return result
}
```

## Decision Matrix

| Intent | Complexity | Chain | Steps |
|--------|------------|-------|-------|
| bugfix (standard) | * | bugfix | lite-fix |
| bugfix (hotfix) | * | bugfix | lite-fix --hotfix |
| issue | * | issue | plan → queue → execute |
| exploration | * | full | brainstorm → plan → execute |
| ui (explore) | * | ui | ui-design:explore → sync → plan → execute |
| ui (imitate) | * | ui | ui-design:imitate → sync → plan → execute |
| tdd | * | tdd | tdd-plan → execute → tdd-verify |
| review | * | review-fix | review-session-cycle → review-fix |
| docs | low | docs | update-related |
| docs | medium+ | docs | docs → execute |
| feature | low | rapid | lite-plan → lite-execute |
| feature | medium | coupled | plan → verify → execute |
| feature | high | full | brainstorm → plan → execute |

## Continuation Commands

After each step pause:

| User Input | Action |
|------------|--------|
| `continue` | Execute next step |
| `skip` | Skip current step |
| `abort` | Stop workflow |
| `/workflow:*` | Execute specific command |
| Natural language | Re-analyze and potentially switch chains |
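The continuation table maps naturally onto a small dispatcher. `parseContinuation` is a hypothetical name sketched here to show the matching order; the skill does not expose such a function.

```javascript
// Hypothetical dispatcher for the continuation commands table.
function parseContinuation(input) {
  const t = input.trim()
  if (t === 'continue') return { action: 'next' }
  if (t === 'skip') return { action: 'skip' }
  if (t === 'abort') return { action: 'abort' }
  if (t.startsWith('/workflow:')) return { action: 'command', command: t }
  // Anything else is treated as natural language and re-analyzed,
  // which may switch to a different chain
  return { action: 'reanalyze', text: t }
}
```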
@@ -1,336 +0,0 @@

# Intent Classification Specification

CCW intent classification spec: defines how to recognize task intent from user input and select the optimal workflow.

## Classification Hierarchy

```
Intent Classification
├── Priority 1: Explicit Commands
│   └── /workflow:*, /issue:*, /memory:*, /task:*
├── Priority 2: Bug Keywords
│   ├── Hotfix: urgent + bug keywords
│   └── Standard: bug keywords only
├── Priority 3: Issue Batch
│   └── Multiple + fix keywords
├── Priority 4: Exploration
│   └── Uncertainty keywords
├── Priority 5: UI/Design
│   └── Visual/component keywords
└── Priority 6: Complexity Fallback
    ├── High → Coupled
    ├── Medium → Rapid
    └── Low → Rapid
```

## Keyword Patterns

### Bug Detection

```javascript
// CJK alternatives sit outside the \b anchors: \b only matches at ASCII
// word boundaries, so a pattern like \b(修复)\b would never match.
const BUG_PATTERNS = {
  core: /\b(fix|bug|error|issue|crash|broken|fail|wrong|incorrect)\b|修复|报错|错误|问题|异常|崩溃|失败/i,

  urgency: /\b(hotfix|urgent|production|critical|emergency|asap|immediately)\b|紧急|生产|线上|马上|立即/i,

  symptoms: /\b(not working|doesn't work|can't|cannot|won't|stopped|stopped working)\b|无法|不能|不工作/i,

  errors: /\b(\d{3}\s*error|exception|stack\s*trace|undefined|null\s*pointer|timeout)\b/i
}

function detectBug(text) {
  const isBug = BUG_PATTERNS.core.test(text) || BUG_PATTERNS.symptoms.test(text)
  const isUrgent = BUG_PATTERNS.urgency.test(text)
  const hasError = BUG_PATTERNS.errors.test(text)

  if (!isBug && !hasError) return null

  return {
    type: 'bugfix',
    mode: isUrgent ? 'hotfix' : 'standard',
    confidence: (isBug && hasError) ? 'high' : 'medium'
  }
}
```
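A quick standalone check of the classifier. The patterns are restated from the Bug Detection block (with the CJK terms outside `\b`) so this sketch runs on its own:

```javascript
// Patterns restated from Bug Detection so this check is self-contained.
const BUG_PATTERNS = {
  core: /\b(fix|bug|error|issue|crash|broken|fail|wrong|incorrect)\b|修复|报错|错误|问题|异常|崩溃|失败/i,
  urgency: /\b(hotfix|urgent|production|critical|emergency|asap|immediately)\b|紧急|生产|线上|马上|立即/i,
  symptoms: /\b(not working|doesn't work|can't|cannot|won't|stopped)\b|无法|不能|不工作/i,
  errors: /\b(\d{3}\s*error|exception|stack\s*trace|undefined|null\s*pointer|timeout)\b/i
}

function detectBug(text) {
  const isBug = BUG_PATTERNS.core.test(text) || BUG_PATTERNS.symptoms.test(text)
  const isUrgent = BUG_PATTERNS.urgency.test(text)
  const hasError = BUG_PATTERNS.errors.test(text)
  if (!isBug && !hasError) return null
  return {
    type: 'bugfix',
    mode: isUrgent ? 'hotfix' : 'standard',
    confidence: (isBug && hasError) ? 'high' : 'medium'
  }
}
```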

### Issue Batch Detection

```javascript
// CJK terms kept outside \b (see Bug Detection)
const ISSUE_PATTERNS = {
  batch: /\b(issues?|batch|queue|multiple|several|all)\b|多个|批量|一系列|所有|这些/i,
  action: /\b(fix|resolve|handle|process)\b|处理|解决|修复/i,
  source: /\b(github|jira|linear|backlog|todo)\b|待办/i
}

function detectIssueBatch(text) {
  const hasBatch = ISSUE_PATTERNS.batch.test(text)
  const hasAction = ISSUE_PATTERNS.action.test(text)
  const hasSource = ISSUE_PATTERNS.source.test(text)

  if (hasBatch && hasAction) {
    return {
      type: 'issue',
      confidence: hasSource ? 'high' : 'medium'
    }
  }
  return null
}
```
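The dual-pattern requirement (batch keyword plus action keyword) can be checked standalone; the patterns below restate the Issue Batch Detection block with the CJK terms kept outside `\b`:

```javascript
// Patterns restated from Issue Batch Detection so this check is self-contained.
const ISSUE_PATTERNS = {
  batch: /\b(issues?|batch|queue|multiple|several|all)\b|多个|批量|一系列|所有|这些/i,
  action: /\b(fix|resolve|handle|process)\b|处理|解决|修复/i,
  source: /\b(github|jira|linear|backlog|todo)\b|待办/i
}

function detectIssueBatch(text) {
  const hasBatch = ISSUE_PATTERNS.batch.test(text)
  const hasAction = ISSUE_PATTERNS.action.test(text)
  const hasSource = ISSUE_PATTERNS.source.test(text)
  if (hasBatch && hasAction) {
    // A known tracker name upgrades confidence
    return { type: 'issue', confidence: hasSource ? 'high' : 'medium' }
  }
  return null
}
```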

### Exploration Detection

```javascript
// CJK terms kept outside \b (see Bug Detection)
const EXPLORATION_PATTERNS = {
  uncertainty: /\b(not sure|unsure|how to|what if|should i|could i)\b|不确定|不知道|怎么|如何|是否应该/i,

  exploration: /\b(explore|research|investigate)\b|分析|研究|调研|评估|探索|了解/i,

  options: /\b(options|alternatives|approaches)\b|方案|选择|方向|可能性/i,

  questions: /(\b(what|which|how|why)\b|什么|哪个|怎样|为什么).*\?/i
}

function detectExploration(text) {
  const hasUncertainty = EXPLORATION_PATTERNS.uncertainty.test(text)
  const hasExploration = EXPLORATION_PATTERNS.exploration.test(text)
  const hasOptions = EXPLORATION_PATTERNS.options.test(text)
  const hasQuestion = EXPLORATION_PATTERNS.questions.test(text)

  const score = [hasUncertainty, hasExploration, hasOptions, hasQuestion].filter(Boolean).length

  if (score >= 2 || hasUncertainty) {
    return {
      type: 'exploration',
      confidence: score >= 3 ? 'high' : 'medium'
    }
  }
  return null
}
```

### UI/Design Detection

```javascript
// CJK terms kept outside \b (see Bug Detection)
const UI_PATTERNS = {
  components: /\b(ui|component|button|form|modal|dialog)\b|界面|组件|按钮|表单|弹窗|对话框/i,

  design: /\b(design|style|layout|theme|color)\b|设计|样式|布局|主题|颜色/i,

  visual: /\b(visual|animation|responsive|mobile)\b|视觉|动画|响应式|移动端/i,

  frontend: /\b(frontend|react|vue|angular|css|html|page)\b|前端|页面/i
}

function detectUI(text) {
  const hasComponents = UI_PATTERNS.components.test(text)
  const hasDesign = UI_PATTERNS.design.test(text)
  const hasVisual = UI_PATTERNS.visual.test(text)
  const hasFrontend = UI_PATTERNS.frontend.test(text)

  const score = [hasComponents, hasDesign, hasVisual, hasFrontend].filter(Boolean).length

  if (score >= 2) {
    return {
      type: 'ui',
      hasReference: /参考|reference|based on|像|like|模仿|imitate/.test(text),
      confidence: score >= 3 ? 'high' : 'medium'
    }
  }
  return null
}
```

## Complexity Assessment

### Indicators

```javascript
// CJK terms kept outside \b (see Bug Detection)
const COMPLEXITY_INDICATORS = {
  high: {
    patterns: [
      /\b(refactor|restructure)\b|重构|重新组织/i,
      /\b(migrate|upgrade|convert)\b|迁移|升级|转换/i,
      /\b(architect|system|infrastructure)\b|架构|系统|基础设施/i,
      /\b(entire|complete|all\s+modules?)\b|整个|完整|所有模块/i,
      /\b(security|scale|performance\s+critical)\b|安全|扩展|性能关键/i,
      /\b(distributed|microservice|cluster)\b|分布式|微服务|集群/i
    ],
    weight: 2
  },

  medium: {
    patterns: [
      /\b(integrate|connect|link)\b|集成|连接|链接/i,
      /\b(api|database|service|endpoint)\b|数据库|服务|接口/i,
      /\b(test|validate|coverage)\b|测试|验证|覆盖/i,
      /\b(multiple\s+files?|several\s+components?)\b|多个文件|几个组件/i,
      /\b(authentication|authorization)\b|认证|授权/i
    ],
    weight: 1
  },

  low: {
    patterns: [
      /\b(add|create|simple)\b|添加|创建|简单/i,
      /\b(update|modify|change)\b|更新|修改|改变/i,
      /\b(single|one|small)\b|单个|一个|小/i,
      /\b(comment|log|print)\b|注释|日志|打印/i
    ],
    weight: -1
  }
}

function assessComplexity(text) {
  let score = 0

  for (const [level, config] of Object.entries(COMPLEXITY_INDICATORS)) {
    for (const pattern of config.patterns) {
      if (pattern.test(text)) {
        score += config.weight
      }
    }
  }

  // File count indicator (capture the number, e.g. "12 files")
  const fileMatches = text.match(/\b(\d+)\s*(files?|文件)/i)
  if (fileMatches) {
    const count = parseInt(fileMatches[1], 10)
    if (count > 10) score += 2
    else if (count > 5) score += 1
  }

  // Module count indicator
  const moduleMatches = text.match(/\b(\d+)\s*(modules?|模块)/i)
  if (moduleMatches) {
    const count = parseInt(moduleMatches[1], 10)
    if (count > 3) score += 2
    else if (count > 1) score += 1
  }

  if (score >= 4) return 'high'
  if (score >= 2) return 'medium'
  return 'low'
}
```
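The weighting can be checked with a trimmed restatement: the indicator table below keeps only a few patterns per level (same weights and thresholds), so it is illustrative rather than the full documented list.

```javascript
// Trimmed restatement of COMPLEXITY_INDICATORS plus the scoring function,
// so the weighting can be exercised standalone.
const INDICATORS = {
  high: {
    patterns: [/\b(refactor|restructure)\b|重构/i,
               /\b(entire|complete)\b|整个/i,
               /\b(architect|system|infrastructure)\b|架构|系统/i],
    weight: 2
  },
  medium: {
    patterns: [/\b(api|database|service|endpoint)\b|数据库|服务/i,
               /\b(authentication|authorization)\b|认证|授权/i],
    weight: 1
  },
  low: {
    patterns: [/\b(add|create|simple)\b|添加|创建/i,
               /\b(comment|log|print)\b|注释|日志/i],
    weight: -1
  }
}

function assessComplexity(text) {
  let score = 0
  for (const [, config] of Object.entries(INDICATORS)) {
    for (const pattern of config.patterns) {
      if (pattern.test(text)) score += config.weight
    }
  }
  // "N files" pushes the score up, as in the full version
  const fileMatches = text.match(/\b(\d+)\s*(files?|文件)/i)
  if (fileMatches) {
    const count = parseInt(fileMatches[1], 10)
    if (count > 10) score += 2
    else if (count > 5) score += 1
  }
  if (score >= 4) return 'high'
  if (score >= 2) return 'medium'
  return 'low'
}
```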

## Workflow Selection Matrix

| Intent | Complexity | Workflow | Commands |
|--------|------------|----------|----------|
| bugfix (hotfix) | * | bugfix | `lite-fix --hotfix` |
| bugfix (standard) | * | bugfix | `lite-fix` |
| issue | * | issue | `issue:plan → queue → execute` |
| exploration | * | full | `brainstorm → plan → execute` |
| ui (reference) | * | ui | `ui-design:imitate-auto → plan` |
| ui (explore) | * | ui | `ui-design:explore-auto → plan` |
| feature | high | coupled | `plan → verify → execute` |
| feature | medium | rapid | `lite-plan → lite-execute` |
| feature | low | rapid | `lite-plan → lite-execute` |

## Confidence Levels

| Level | Description | Action |
|-------|-------------|--------|
| **high** | Multiple strong indicators match | Direct dispatch |
| **medium** | Some indicators match | Confirm with user |
| **low** | Fallback classification | Always confirm |

## Tool Preference Detection

```javascript
const TOOL_PREFERENCES = {
  gemini: {
    pattern: /用\s*gemini|gemini\s*(分析|理解|设计)|让\s*gemini/i,
    capability: 'analysis'
  },
  qwen: {
    pattern: /用\s*qwen|qwen\s*(分析|评估)|让\s*qwen/i,
    capability: 'analysis'
  },
  codex: {
    pattern: /用\s*codex|codex\s*(实现|重构|修复)|让\s*codex/i,
    capability: 'implementation'
  }
}

function detectToolPreference(text) {
  for (const [tool, config] of Object.entries(TOOL_PREFERENCES)) {
    if (config.pattern.test(text)) {
      return { tool, capability: config.capability }
    }
  }
  return null
}
```
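The preference lookup is first-match-wins over the tool table. Restated here so it runs standalone:

```javascript
// TOOL_PREFERENCES restated from the block above for a self-contained check.
const TOOL_PREFERENCES = {
  gemini: { pattern: /用\s*gemini|gemini\s*(分析|理解|设计)|让\s*gemini/i, capability: 'analysis' },
  qwen:   { pattern: /用\s*qwen|qwen\s*(分析|评估)|让\s*qwen/i, capability: 'analysis' },
  codex:  { pattern: /用\s*codex|codex\s*(实现|重构|修复)|让\s*codex/i, capability: 'implementation' }
}

function detectToolPreference(text) {
  for (const [tool, config] of Object.entries(TOOL_PREFERENCES)) {
    if (config.pattern.test(text)) return { tool, capability: config.capability }
  }
  return null
}
```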

## Multi-Tool Collaboration Detection

```javascript
const COLLABORATION_PATTERNS = {
  sequential: /先.*(分析|理解).*然后.*(实现|重构)|分析.*后.*实现/i,
  parallel: /(同时|并行).*(分析|实现)|一边.*一边/i,
  hybrid: /(分析|设计).*和.*(实现|测试).*分开/i
}

function detectCollaboration(text) {
  if (COLLABORATION_PATTERNS.sequential.test(text)) {
    return { mode: 'sequential', description: 'Analysis first, then implementation' }
  }
  if (COLLABORATION_PATTERNS.parallel.test(text)) {
    return { mode: 'parallel', description: 'Concurrent analysis and implementation' }
  }
  if (COLLABORATION_PATTERNS.hybrid.test(text)) {
    return { mode: 'hybrid', description: 'Mixed parallel and sequential' }
  }
  return null
}
```

## Classification Pipeline

```javascript
function classify(userInput) {
  const text = userInput.trim()

  // Step 1: Check explicit commands
  if (/^\/(?:workflow|issue|memory|task):/.test(text)) {
    return { type: 'explicit', command: text }
  }

  // Step 2: Priority-based classification
  const bugResult = detectBug(text)
  if (bugResult) return bugResult

  const issueResult = detectIssueBatch(text)
  if (issueResult) return issueResult

  const explorationResult = detectExploration(text)
  if (explorationResult) return explorationResult

  const uiResult = detectUI(text)
  if (uiResult) return uiResult

  // Step 3: Complexity-based fallback
  const complexity = assessComplexity(text)
  return {
    type: 'feature',
    complexity,
    workflow: complexity === 'high' ? 'coupled' : 'rapid',
    confidence: 'low'
  }
}
```

## Examples

### Input → Classification

| Input | Classification | Workflow |
|-------|----------------|----------|
| "用户登录失败,401错误" | bugfix/standard | lite-fix |
| "紧急:支付网关挂了" | bugfix/hotfix | lite-fix --hotfix |
| "批量处理这些 GitHub issues" | issue | issue:plan → queue |
| "不确定要怎么设计缓存系统" | exploration | brainstorm → plan |
| "添加一个深色模式切换按钮" | ui | ui-design → plan |
| "重构整个认证模块" | feature/high | plan → verify |
| "添加用户头像功能" | feature/low | lite-plan |
@@ -304,28 +304,22 @@ async function runWithTool(tool, context) {
 ### 引用协议模板

 ```bash
-# 分析模式 - 必须引用 analysis-protocol.md
+# Analysis mode - use --rule to auto-load protocol and template (appended to prompt)
 ccw cli -p "
-RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md)
-$(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt)
-..." --tool gemini --mode analysis
+CONSTRAINTS: ...
+..." --tool gemini --mode analysis --rule analysis-code-patterns

-# 写入模式 - 必须引用 write-protocol.md
+# Write mode - use --rule to auto-load protocol and template (appended to prompt)
 ccw cli -p "
-RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md)
-$(cat ~/.claude/workflows/cli-templates/prompts/development/02-implement-feature.txt)
-..." --tool codex --mode write
+CONSTRAINTS: ...
+..." --tool codex --mode write --rule development-feature
 ```

 ### 动态模板构建

 ```javascript
 function buildPrompt(config) {
-  const { purpose, task, mode, context, expected, template } = config;
-
-  const protocolPath = mode === 'write'
-    ? '~/.claude/workflows/cli-templates/protocols/write-protocol.md'
-    : '~/.claude/workflows/cli-templates/protocols/analysis-protocol.md';
+  const { purpose, task, mode, context, expected, constraints } = config;

   return `
 PURPOSE: ${purpose}
@@ -333,8 +327,8 @@ TASK: ${task.map(t => `• ${t}`).join('\n')}
 MODE: ${mode}
 CONTEXT: ${context}
 EXPECTED: ${expected}
-RULES: $(cat ${protocolPath}) $(cat ${template})
-`;
+CONSTRAINTS: ${constraints || ''}
+`; // Use --rule option to auto-append protocol + template
 }
 ```

@@ -435,11 +429,11 @@ CLI 调用 (Bash + ccw cli):
 - 相关任务使用 `--resume` 保持上下文
 - 独立任务不使用 `--resume`

-### 4. 提示词规范
+### 4. Prompt Specification

-- 始终使用 PURPOSE/TASK/MODE/CONTEXT/EXPECTED/RULES 结构
-- 必须引用协议模板(analysis-protocol 或 write-protocol)
-- 使用 `$(cat ...)` 动态加载模板
+- Always use PURPOSE/TASK/MODE/CONTEXT/EXPECTED/CONSTRAINTS structure
+- Use `--rule <template>` to auto-append protocol + template to prompt
+- Template name format: `category-function` (e.g., `analysis-code-patterns`)

 ### 5. 结果处理

303 .claude/skills/skill-tuning/SKILL.md Normal file
@@ -0,0 +1,303 @@
|
||||
---
|
||||
name: skill-tuning
|
||||
description: Universal skill diagnosis and optimization tool. Detect and fix skill execution issues including context explosion, long-tail forgetting, data flow disruption, and agent coordination failures. Supports Gemini CLI for deep analysis. Triggers on "skill tuning", "tune skill", "skill diagnosis", "optimize skill", "skill debug".
|
||||
allowed-tools: Task, AskUserQuestion, Read, Write, Bash, Glob, Grep, mcp__ace-tool__search_context
|
||||
---
|
||||
|
||||
# Skill Tuning
|
||||
|
||||
Universal skill diagnosis and optimization tool that identifies and resolves skill execution problems through iterative multi-agent analysis.
|
||||
|
||||
## Architecture Overview

```
┌─────────────────────────────────────────────────────────────────────────────┐
│          Skill Tuning Architecture (Autonomous Mode + Gemini CLI)           │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ⚠️ Phase 0: Specification Study → read specs + understand the target       │
│              skill's structure (mandatory prerequisite)                     │
│                                   ↓                                         │
│  ┌───────────────────────────────────────────────────────────────────────┐  │
│  │ Orchestrator (state-driven decisions)                                 │  │
│  │ read diagnosis state → pick next action → execute → update state →    │  │
│  │ loop until complete                                                   │  │
│  └───────────────────────────────────────────────────────────────────────┘  │
│         │                                                                   │
│  ┌──────┴─────┬───────────┬───────────┬────────────┬────────────┐           │
│  ↓            ↓           ↓           ↓            ↓            ↓           │
│ ┌──────┐ ┌──────────┐ ┌─────────┐ ┌────────┐ ┌────────┐ ┌─────────┐         │
│ │ Init │→│ Analyze  │→│Diagnose │ │Diagnose│ │Diagnose│ │ Gemini  │         │
│ │      │ │Requiremts│ │ Context │ │ Memory │ │DataFlow│ │Analysis │         │
│ └──────┘ └──────────┘ └─────────┘ └────────┘ └────────┘ └─────────┘         │
│               │            └──────────┴──────────┴───────────┘              │
│               ↓                                                             │
│  ┌───────────────────────────────────────────────────────────────────────┐  │
│  │ Requirement Analysis (NEW)                                            │  │
│  │ • Phase 1: dimension decomposition (Gemini CLI) - single description  │  │
│  │   → multiple concern dimensions                                       │  │
│  │ • Phase 2: Spec matching - each dimension → taxonomy + strategy       │  │
│  │ • Phase 3: coverage evaluation - "has a fix strategy" is the          │  │
│  │   satisfaction criterion                                              │  │
│  │ • Phase 4: ambiguity detection - flag ambiguous descriptions,         │  │
│  │   request clarification when necessary                                │  │
│  └───────────────────────────────────────────────────────────────────────┘  │
│               ↓                                                             │
│        ┌──────────────────┐                                                 │
│        │  Apply Fixes +   │                                                 │
│        │  Verify Results  │                                                 │
│        └──────────────────┘                                                 │
│                                                                             │
│  ┌───────────────────────────────────────────────────────────────────────┐  │
│  │ Gemini CLI Integration                                                │  │
│  │ Invoke the Gemini CLI on demand for deep analysis:                    │  │
│  │ • requirement decomposition                                           │  │
│  │ • complex problem analysis (prompt engineering, architecture review)  │  │
│  │ • code pattern recognition (pattern matching, anti-pattern detection) │  │
│  │ • fix strategy generation (fix generation, refactoring suggestions)   │  │
│  └───────────────────────────────────────────────────────────────────────┘  │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```
## Problem Domain

Based on comprehensive analysis, skill-tuning addresses **core skill issues** and **general optimization areas**:

### Core Skill Issues (auto-detected)

| Priority | Problem | Root Cause | Solution Strategy |
|----------|---------|------------|-------------------|
| **P0** | Authoring Principles Violation | Intermediate file storage, state bloat, file-based handoff | eliminate_intermediate_files, minimize_state, context_passing |
| **P1** | Data Flow Disruption | Scattered state, inconsistent formats | state_centralization, schema_enforcement |
| **P2** | Agent Coordination | Fragile call chains, merge complexity | error_wrapping, result_validation |
| **P3** | Context Explosion | Token accumulation, multi-turn bloat | sliding_window, context_summarization |
| **P4** | Long-tail Forgetting | Early constraint loss | constraint_injection, checkpoint_restore |
| **P5** | Token Consumption | Verbose prompts, excessive state, redundant I/O | prompt_compression, lazy_loading, output_minimization |

### General Optimization Areas (analyzed on demand via Gemini CLI)

| Category | Issues | Gemini Analysis Scope |
|----------|--------|----------------------|
| **Prompt Engineering** | Vague instructions, inconsistent output format, hallucination risk | Prompt optimization, structured output design |
| **Architecture** | Poor phase decomposition, tangled dependencies, low extensibility | Architecture review, modularization advice |
| **Performance** | Slow execution, high token consumption, redundant computation | Performance analysis, caching strategy |
| **Error Handling** | Poor error recovery, no fallback strategy, insufficient logging | Fault-tolerance design, observability improvements |
| **Output Quality** | Unstable output, format drift, quality fluctuation | Quality gates, validation mechanisms |
| **User Experience** | Clunky interaction, unclear feedback, invisible progress | UX optimization, progress tracking |
## Key Design Principles

1. **Problem-First Diagnosis**: Systematic identification before any fix attempt
2. **Data-Driven Analysis**: Record execution traces, token counts, state snapshots
3. **Iterative Refinement**: Multiple tuning rounds until quality gates pass
4. **Non-Destructive**: All changes are reversible with backup checkpoints
5. **Agent Coordination**: Use specialized sub-agents for each diagnosis type
6. **Gemini CLI On-Demand**: Deep analysis via CLI for complex/custom issues

---
## Gemini CLI Integration

Invoke the Gemini CLI on demand for deep analysis, driven by user needs.

### Trigger Conditions

| Condition | Action | CLI Mode |
|-----------|--------|----------|
| User describes a complex problem | Call Gemini to analyze the root cause | `analysis` |
| Auto-diagnosis finds a critical issue | Request deep analysis for confirmation | `analysis` |
| User requests an architecture review | Run architecture analysis | `analysis` |
| Fix code needs to be generated | Generate a fix proposal | `write` |
| Standard strategies do not apply | Request a customized strategy | `analysis` |
### CLI Command Template

```bash
ccw cli -p "
PURPOSE: ${purpose}
TASK: ${task_steps}
MODE: ${mode}
CONTEXT: @${skill_path}/**/*
EXPECTED: ${expected_output}
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/${mode}-protocol.md) | ${constraints}
" --tool gemini --mode ${mode} --cd ${skill_path}
```
### Analysis Types

#### 1. Problem Root Cause Analysis

```bash
ccw cli -p "
PURPOSE: Identify root cause of skill execution issue: ${user_issue_description}
TASK: • Analyze skill structure and phase flow • Identify anti-patterns • Trace data flow issues
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON with { root_causes: [], patterns_found: [], recommendations: [] }
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Focus on execution flow
" --tool gemini --mode analysis
```

#### 2. Architecture Review

```bash
ccw cli -p "
PURPOSE: Review skill architecture for scalability and maintainability
TASK: • Evaluate phase decomposition • Check state management patterns • Assess agent coordination
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: Architecture assessment with improvement recommendations
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Focus on modularity
" --tool gemini --mode analysis
```

#### 3. Fix Strategy Generation

```bash
ccw cli -p "
PURPOSE: Generate fix strategy for issue: ${issue_id} - ${issue_description}
TASK: • Analyze issue context • Design fix approach • Generate implementation plan
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON with { strategy: string, changes: [], verification_steps: [] }
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Minimal invasive changes
" --tool gemini --mode analysis
```

---
## Mandatory Prerequisites

> **CRITICAL**: Read these documents before executing any action.

### Core Specs (Required)

| Document | Purpose | Priority |
|----------|---------|----------|
| [specs/skill-authoring-principles.md](specs/skill-authoring-principles.md) | **Primary principles: keep it concise and efficient, eliminate storage, pass data via context** | **P0** |
| [specs/problem-taxonomy.md](specs/problem-taxonomy.md) | Problem classification and detection patterns | **P0** |
| [specs/tuning-strategies.md](specs/tuning-strategies.md) | Fix strategies for each problem type | **P0** |
| [specs/dimension-mapping.md](specs/dimension-mapping.md) | Dimension to Spec mapping rules | **P0** |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality thresholds and verification criteria | P1 |

### Templates (Reference)

| Document | Purpose |
|----------|---------|
| [templates/diagnosis-report.md](templates/diagnosis-report.md) | Diagnosis report structure |
| [templates/fix-proposal.md](templates/fix-proposal.md) | Fix proposal format |

---
## Execution Flow

```
┌─────────────────────────────────────────────────────────────────────────────┐
│ Phase 0: Specification Study (mandatory prerequisite - do not skip)         │
│   → Read: specs/problem-taxonomy.md (problem taxonomy)                      │
│   → Read: specs/tuning-strategies.md (tuning strategies)                    │
│   → Read: specs/dimension-mapping.md (dimension mapping rules)              │
│   → Read: Target skill's SKILL.md and phases/*.md                           │
│   → Output: internalized specs, understanding of the target skill structure │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-init: Initialize Tuning Session                                      │
│   → Create work directory: .workflow/.scratchpad/skill-tuning-{timestamp}   │
│   → Initialize state.json with target skill info                            │
│   → Create backup of target skill files                                     │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-analyze-requirements: Requirement Analysis                           │
│   → Phase 1: dimension decomposition (Gemini CLI) - single description →    │
│     multiple concern dimensions                                             │
│   → Phase 2: Spec matching - each dimension → taxonomy + strategy           │
│   → Phase 3: coverage evaluation - "has a fix strategy" is the criterion    │
│   → Phase 4: ambiguity detection - flag ambiguous descriptions, request     │
│     clarification when necessary                                            │
│   → Output: state.json (requirement_analysis field)                         │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-diagnose-*: Diagnosis Actions (context/memory/dataflow/agent/docs/   │
│   token_consumption)                                                        │
│   → Execute pattern-based detection for each category                       │
│   → Output: state.json (diagnosis.{category} field)                         │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-generate-report: Consolidated Report                                 │
│   → Generate markdown summary from state.diagnosis                          │
│   → Prioritize issues by severity                                           │
│   → Output: state.json (final_report field)                                 │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-propose-fixes: Fix Proposal Generation                               │
│   → Generate fix strategies for each issue                                  │
│   → Create implementation plan                                              │
│   → Output: state.json (proposed_fixes field)                               │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-apply-fix: Apply Selected Fix                                        │
│   → User selects fix to apply                                               │
│   → Execute fix with backup                                                 │
│   → Update state with fix result                                            │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-verify: Verification                                                 │
│   → Re-run affected diagnosis                                               │
│   → Check quality gates                                                     │
│   → Update iteration count                                                  │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-complete: Finalization                                               │
│   → Set status='completed'                                                  │
│   → Final report already in state.json (final_report field)                 │
│   → Output: state.json (final)                                              │
└─────────────────────────────────────────────────────────────────────────────┘
```
## Directory Setup

```javascript
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const workDir = `.workflow/.scratchpad/skill-tuning-${timestamp}`;

// Simplified: only a backups dir is needed; diagnosis results go into state.json
Bash(`mkdir -p "${workDir}/backups"`);
```
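For reference, the timestamp expression above collapses an ISO date-time into a compact 14-digit stamp. A standalone sketch with a fixed date for illustration:

```javascript
// Same expression as in Directory Setup, applied to a fixed date so the
// resulting directory-name stamp is easy to see.
const d = new Date('2024-01-02T03:04:05Z');
const timestamp = d.toISOString().slice(0, 19).replace(/[-:T]/g, '');
console.log(timestamp); // → "20240102030405"
```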
## Output Structure

```
.workflow/.scratchpad/skill-tuning-{timestamp}/
├── state.json                    # Single source of truth (all results consolidated)
│   ├── diagnosis.*               # All diagnosis results embedded
│   ├── issues[]                  # Found issues
│   ├── proposed_fixes[]          # Fix proposals
│   └── final_report              # Markdown summary (on completion)
└── backups/
    └── {skill-name}-backup/      # Original skill files backup
```

> **Token Optimization**: All outputs are consolidated into state.json. No separate diagnosis or report files.
## State Schema

See [phases/state-schema.md](phases/state-schema.md) for the full state structure definition.

Core state fields:
- `status`: workflow status (pending/running/completed/failed)
- `target_skill`: target skill info
- `diagnosis`: per-dimension diagnosis results
- `issues`: list of discovered issues
- `proposed_fixes`: suggested fixes
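A minimal sketch of a state.json carrying these core fields (the nested values are illustrative assumptions; phases/state-schema.md is the authoritative definition):

```json
{
  "status": "running",
  "target_skill": { "name": "my-skill", "path": ".claude/skills/my-skill" },
  "diagnosis": { "context": null, "memory": null, "dataflow": null, "agent": null },
  "issues": [],
  "proposed_fixes": []
}
```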
## Reference Documents

| Document | Purpose |
|----------|---------|
| [phases/orchestrator.md](phases/orchestrator.md) | Orchestrator decision logic |
| [phases/state-schema.md](phases/state-schema.md) | State structure definition |
| [phases/actions/action-init.md](phases/actions/action-init.md) | Initialize tuning session |
| [phases/actions/action-analyze-requirements.md](phases/actions/action-analyze-requirements.md) | Requirement analysis (NEW) |
| [phases/actions/action-diagnose-context.md](phases/actions/action-diagnose-context.md) | Context explosion diagnosis |
| [phases/actions/action-diagnose-memory.md](phases/actions/action-diagnose-memory.md) | Long-tail forgetting diagnosis |
| [phases/actions/action-diagnose-dataflow.md](phases/actions/action-diagnose-dataflow.md) | Data flow diagnosis |
| [phases/actions/action-diagnose-agent.md](phases/actions/action-diagnose-agent.md) | Agent coordination diagnosis |
| [phases/actions/action-diagnose-docs.md](phases/actions/action-diagnose-docs.md) | Documentation structure diagnosis |
| [phases/actions/action-diagnose-token-consumption.md](phases/actions/action-diagnose-token-consumption.md) | Token consumption diagnosis |
| [phases/actions/action-generate-report.md](phases/actions/action-generate-report.md) | Report generation |
| [phases/actions/action-propose-fixes.md](phases/actions/action-propose-fixes.md) | Fix proposal |
| [phases/actions/action-apply-fix.md](phases/actions/action-apply-fix.md) | Fix application |
| [phases/actions/action-verify.md](phases/actions/action-verify.md) | Verification |
| [phases/actions/action-complete.md](phases/actions/action-complete.md) | Finalization |
| [specs/problem-taxonomy.md](specs/problem-taxonomy.md) | Problem classification |
| [specs/tuning-strategies.md](specs/tuning-strategies.md) | Fix strategies |
| [specs/dimension-mapping.md](specs/dimension-mapping.md) | Dimension to Spec mapping (NEW) |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality criteria |
164  .claude/skills/skill-tuning/phases/actions/action-abort.md  Normal file
@@ -0,0 +1,164 @@
# Action: Abort

Abort the tuning session due to unrecoverable errors.

## Purpose

- Safely terminate on critical failures
- Preserve diagnostic information for debugging
- Ensure backup remains available
- Notify user of failure reason

## Preconditions

- [ ] state.error_count >= state.max_errors
- [ ] OR critical failure detected

## Execution

```javascript
async function execute(state, workDir) {
  console.log('Aborting skill tuning session...');

  const errors = state.errors;
  const targetSkill = state.target_skill;

  // Generate abort report
  const abortReport = `# Skill Tuning Aborted

**Target Skill**: ${targetSkill?.name || 'Unknown'}
**Aborted At**: ${new Date().toISOString()}
**Reason**: Too many errors or critical failure

---

## Error Log

${errors.length === 0 ? '_No errors recorded_' :
  errors.map((err, i) => `
### Error ${i + 1}
- **Action**: ${err.action}
- **Message**: ${err.message}
- **Time**: ${err.timestamp}
- **Recoverable**: ${err.recoverable ? 'Yes' : 'No'}
`).join('\n')}

---

## Session State at Abort

- **Status**: ${state.status}
- **Iteration Count**: ${state.iteration_count}
- **Completed Actions**: ${state.completed_actions.length}
- **Issues Found**: ${state.issues.length}
- **Fixes Applied**: ${state.applied_fixes.length}

---

## Recovery Options

### Option 1: Restore Original Skill
If any changes were made, restore from backup:
\`\`\`bash
cp -r "${state.backup_dir}/${targetSkill?.name || 'backup'}-backup"/* "${targetSkill?.path || 'target'}/"
\`\`\`

### Option 2: Resume from Last State
The session state is preserved at:
\`${workDir}/state.json\`

To resume:
1. Fix the underlying issue
2. Reset error_count in state.json
3. Re-run skill-tuning with --resume flag

### Option 3: Manual Investigation
Review the following files:
- Diagnosis results: \`${workDir}/diagnosis/*.json\`
- Error log: \`${workDir}/errors.json\`
- State snapshot: \`${workDir}/state.json\`

---

## Diagnostic Information

### Last Successful Action
${state.completed_actions.length > 0 ? state.completed_actions[state.completed_actions.length - 1] : 'None'}

### Current Action When Failed
${state.current_action || 'Unknown'}

### Partial Diagnosis Results
- Context: ${state.diagnosis.context ? 'Completed' : 'Not completed'}
- Memory: ${state.diagnosis.memory ? 'Completed' : 'Not completed'}
- Data Flow: ${state.diagnosis.dataflow ? 'Completed' : 'Not completed'}
- Agent: ${state.diagnosis.agent ? 'Completed' : 'Not completed'}

---

*Skill tuning aborted - please review errors and retry*
`;

  // Write abort report
  Write(`${workDir}/abort-report.md`, abortReport);

  // Save error log
  Write(`${workDir}/errors.json`, JSON.stringify(errors, null, 2));

  // Notify user
  await AskUserQuestion({
    questions: [{
      question: `Skill tuning aborted due to ${errors.length} errors. Would you like to restore the original skill?`,
      header: 'Restore',
      multiSelect: false,
      options: [
        { label: 'Yes, restore', description: 'Restore original skill from backup' },
        { label: 'No, keep changes', description: 'Keep any partial changes made' }
      ]
    }]
  }).then(async response => {
    if (response['Restore'] === 'Yes, restore') {
      // Restore from backup
      if (state.backup_dir && targetSkill?.path) {
        Bash(`cp -r "${state.backup_dir}/${targetSkill.name}-backup"/* "${targetSkill.path}/"`);
        console.log('Original skill restored from backup.');
      }
    }
  }).catch(() => {
    // User cancelled, don't restore
  });

  return {
    stateUpdates: {
      status: 'failed',
      completed_at: new Date().toISOString()
    },
    outputFiles: [`${workDir}/abort-report.md`, `${workDir}/errors.json`],
    summary: `Tuning aborted: ${errors.length} errors. Check abort-report.md for details.`
  };
}
```

## State Updates

```javascript
return {
  stateUpdates: {
    status: 'failed',
    completed_at: '<timestamp>'
  }
};
```

## Output

- **File**: `abort-report.md`
- **Location**: `${workDir}/abort-report.md`

## Error Handling

This action should not fail - it's the final error handler.

## Next Actions

- None (terminal state)
@@ -0,0 +1,406 @@
# Action: Analyze Requirements

Decompose the user's problem description into multiple analysis dimensions, match Specs, evaluate coverage, and detect ambiguity.

## Purpose

- Decompose a single user description into multiple independent concern dimensions
- Match each dimension to problem-taxonomy (detection) + tuning-strategies (fixes)
- Judge whether the requirement is satisfied using "has a fix strategy" as the criterion
- Detect ambiguity and request user clarification when necessary

## Preconditions

- [ ] `state.status === 'running'`
- [ ] `state.target_skill !== null`
- [ ] `state.completed_actions.includes('action-init')`
- [ ] `!state.completed_actions.includes('action-analyze-requirements')`

## Execution
### Phase 1: Dimension Decomposition (Gemini CLI)

Call Gemini to semantically analyze the user description and decompose it into independent dimensions:

```javascript
async function analyzeDimensions(state, workDir) {
  const prompt = `
PURPOSE: Analyze the user's problem description and decompose it into independent concern dimensions
TASK:
• Identify the multiple concerns in the user description (each should be independent and analyzable on its own)
• Extract keywords for each concern (Chinese or English)
• Infer the likely problem category:
  - context_explosion: context/token related
  - memory_loss: forgetting/constraint-loss related
  - dataflow_break: state/data-flow related
  - agent_failure: agent/subtask related
  - prompt_quality: prompt/output-quality related
  - architecture: architecture/structure related
  - performance: performance/efficiency related
  - error_handling: error/exception-handling related
  - output_quality: output quality/validation related
  - user_experience: interaction/UX related
• Score inference confidence (0-1)

INPUT:
User description: ${state.user_issue_description}
Target skill: ${state.target_skill.name}
Skill structure: ${JSON.stringify(state.target_skill.phases)}

MODE: analysis
CONTEXT: @specs/problem-taxonomy.md @specs/dimension-mapping.md
EXPECTED: JSON (do not wrap in markdown code fences)
{
  "dimensions": [
    {
      "id": "DIM-001",
      "description": "short description of the concern",
      "keywords": ["keyword1", "keyword2"],
      "inferred_category": "problem category",
      "confidence": 0.85,
      "reasoning": "reasoning for the inference"
    }
  ],
  "analysis_notes": "overall analysis notes"
}
RULES:
- Each dimension must be independent, with no overlap
- Inferences below 0.5 confidence should be flagged as needing clarification
- If the user description is very vague, extract at least one "general" dimension
`;

  const cliCommand = `ccw cli -p "${escapeForShell(prompt)}" --tool gemini --mode analysis --cd "${state.target_skill.path}"`;

  console.log('Phase 1: running Gemini dimension decomposition analysis...');

  const result = Bash({
    command: cliCommand,
    run_in_background: true,
    timeout: 300000
  });

  return result;
}
```
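`escapeForShell` is referenced above but not defined in this file. A minimal sketch of what such a helper might do (an assumption for illustration, not the actual implementation) is to escape the characters that are special inside a double-quoted bash string:

```javascript
// Hypothetical sketch of escapeForShell (the real helper lives elsewhere):
// escape backslash, double quote, $ and backtick so the prompt can be
// embedded safely between double quotes in a bash command line.
function escapeForShell(s) {
  return s
    .replace(/\\/g, '\\\\')
    .replace(/"/g, '\\"')
    .replace(/\$/g, '\\$')
    .replace(/`/g, '\\`');
}

console.log(escapeForShell('say "hi" to $USER')); // → say \"hi\" to \$USER
```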
### Phase 2: Spec Matching

Match detection patterns and fix strategies to each dimension, driven by the `specs/category-mappings.json` configuration:

```javascript
// Load the centralized mapping configuration
const mappings = JSON.parse(Read('specs/category-mappings.json'));

function matchSpecs(dimensions) {
  return dimensions.map(dim => {
    // Match taxonomy pattern
    const taxonomyMatch = findTaxonomyMatch(dim.inferred_category);

    // Match strategy
    const strategyMatch = findStrategyMatch(dim.inferred_category);

    // Decide satisfaction (core criterion: a fix strategy exists)
    const hasFix = strategyMatch !== null && strategyMatch.strategies.length > 0;

    return {
      dimension_id: dim.id,
      taxonomy_match: taxonomyMatch,
      strategy_match: strategyMatch,
      has_fix: hasFix,
      needs_gemini_analysis: taxonomyMatch === null || mappings.categories[dim.inferred_category]?.needs_gemini_analysis
    };
  });
}

function findTaxonomyMatch(category) {
  const config = mappings.categories[category];
  if (!config || config.pattern_ids.length === 0) return null;

  return {
    category: category,
    pattern_ids: config.pattern_ids,
    severity_hint: config.severity_hint
  };
}

function findStrategyMatch(category) {
  const config = mappings.categories[category];
  if (!config) {
    // Fall back to the custom entry from the config
    return mappings.fallback;
  }

  return {
    strategies: config.strategies,
    risk_levels: config.risk_levels
  };
}
```
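The code above implies a shape for `specs/category-mappings.json` roughly like the following sketch. The keys are derived from how the matching functions read the config; the concrete values are illustrative assumptions:

```json
{
  "categories": {
    "context_explosion": {
      "pattern_ids": ["CTX-001", "CTX-002"],
      "severity_hint": "high",
      "strategies": ["sliding_window", "context_summarization"],
      "risk_levels": ["low", "medium"],
      "needs_gemini_analysis": false
    }
  },
  "fallback": { "strategies": ["custom"], "risk_levels": ["unknown"] },
  "keywords": {
    "chinese": { "上下文": "context_explosion" },
    "english": { "context": "context_explosion" }
  },
  "category_labels_chinese": { "context_explosion": "上下文爆炸" },
  "category_descriptions": { "context_explosion": "Context/token growth issues" }
}
```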
### Phase 3: Coverage Evaluation

Evaluate Spec coverage across all dimensions:

```javascript
function evaluateCoverage(specMatches) {
  const total = specMatches.length;
  const withDetection = specMatches.filter(m => m.taxonomy_match !== null).length;
  const withFix = specMatches.filter(m => m.has_fix).length;

  const rate = total > 0 ? Math.round((withFix / total) * 100) : 0;

  let status;
  if (rate >= 80) {
    status = 'satisfied';
  } else if (rate >= 50) {
    status = 'partial';
  } else {
    status = 'unsatisfied';
  }

  return {
    total_dimensions: total,
    with_detection: withDetection,
    with_fix_strategy: withFix,
    coverage_rate: rate,
    status: status
  };
}
```
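As a concrete illustration of the thresholds: three of four dimensions having a fix strategy yields 75% coverage, which lands in the 'partial' band. A self-contained sketch restating the computation with hypothetical spec matches:

```javascript
// Same computation as evaluateCoverage, exercised on sample data.
function evaluateCoverage(specMatches) {
  const total = specMatches.length;
  const withDetection = specMatches.filter(m => m.taxonomy_match !== null).length;
  const withFix = specMatches.filter(m => m.has_fix).length;
  const rate = total > 0 ? Math.round((withFix / total) * 100) : 0;
  const status = rate >= 80 ? 'satisfied' : rate >= 50 ? 'partial' : 'unsatisfied';
  return { total_dimensions: total, with_detection: withDetection, with_fix_strategy: withFix, coverage_rate: rate, status };
}

// Hypothetical matches: three dimensions covered, one unmatched.
const sample = [
  { dimension_id: 'DIM-001', taxonomy_match: { category: 'context_explosion' }, has_fix: true },
  { dimension_id: 'DIM-002', taxonomy_match: { category: 'memory_loss' }, has_fix: true },
  { dimension_id: 'DIM-003', taxonomy_match: { category: 'dataflow_break' }, has_fix: true },
  { dimension_id: 'DIM-004', taxonomy_match: null, has_fix: false }
];

console.log(evaluateCoverage(sample));
// → { total_dimensions: 4, with_detection: 3, with_fix_strategy: 3, coverage_rate: 75, status: 'partial' }
```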
### Phase 4: Ambiguity Detection

Identify ambiguities that require user clarification:

```javascript
function detectAmbiguities(dimensions, specMatches) {
  const ambiguities = [];

  for (const dim of dimensions) {
    const match = specMatches.find(m => m.dimension_id === dim.id);

    // Check 1: low confidence (< 0.5)
    if (dim.confidence < 0.5) {
      ambiguities.push({
        dimension_id: dim.id,
        type: 'vague_description',
        description: `Dimension "${dim.description}" is vague; inference confidence is low (${dim.confidence})`,
        possible_interpretations: suggestInterpretations(dim),
        needs_clarification: true
      });
    }

    // Check 2: no category match
    if (!match || (!match.taxonomy_match && !match.strategy_match)) {
      ambiguities.push({
        dimension_id: dim.id,
        type: 'no_category_match',
        description: `Dimension "${dim.description}" cannot be matched to any known problem category`,
        possible_interpretations: ['custom'],
        needs_clarification: true
      });
    }

    // Check 3: conflicting keywords (may belong to multiple categories)
    if (dim.keywords.length > 3 && hasConflictingKeywords(dim.keywords)) {
      ambiguities.push({
        dimension_id: dim.id,
        type: 'conflicting_keywords',
        description: `The keywords of dimension "${dim.description}" may point to several different problems`,
        possible_interpretations: inferMultipleCategories(dim.keywords),
        needs_clarification: true
      });
    }
  }

  return ambiguities;
}

function suggestInterpretations(dim) {
  // Suggest likely interpretations based on the mappings config
  const categories = Object.keys(mappings.categories).filter(
    cat => cat !== 'authoring_principles_violation' // exclude the internal detection category
  );
  return categories.slice(0, 4); // return the 4 most common as options
}

function hasConflictingKeywords(keywords) {
  // Check whether the keywords point in different directions
  const categoryHints = keywords.map(k => getKeywordCategoryHint(k));
  const uniqueCategories = [...new Set(categoryHints.filter(c => c))];
  return uniqueCategories.length > 1;
}

function getKeywordCategoryHint(keyword) {
  // Build a lookup table from mappings.keywords (merging Chinese and English keywords)
  const keywordMap = {
    ...mappings.keywords.chinese,
    ...mappings.keywords.english
  };
  return keywordMap[keyword.toLowerCase()];
}
```
## User Interaction

If any detected ambiguity needs clarification, pause and ask the user:

```javascript
async function handleAmbiguities(ambiguities, dimensions) {
  const needsClarification = ambiguities.filter(a => a.needs_clarification);

  if (needsClarification.length === 0) {
    return null; // no clarification needed
  }

  const questions = needsClarification.slice(0, 4).map(a => {
    const dim = dimensions.find(d => d.id === a.dimension_id);

    return {
      question: `Regarding "${dim.description}", what exactly do you mean?`,
      header: a.dimension_id,
      options: a.possible_interpretations.map(interp => ({
        label: getCategoryLabel(interp),
        description: getCategoryDescription(interp)
      })),
      multiSelect: false
    };
  });

  return await AskUserQuestion({ questions });
}

function getCategoryLabel(category) {
  // Load the label from the mappings config
  return mappings.category_labels_chinese[category] || category;
}

function getCategoryDescription(category) {
  // Load the description from the mappings config
  return mappings.category_descriptions[category] || 'Requires further analysis';
}
```
## Output

### State Updates

```javascript
return {
  stateUpdates: {
    requirement_analysis: {
      status: ambiguities.some(a => a.needs_clarification) ? 'needs_clarification' : 'completed',
      analyzed_at: new Date().toISOString(),
      dimensions: dimensions,
      spec_matches: specMatches,
      coverage: coverageResult,
      ambiguities: ambiguities
    },
    // Automatically refine focus_areas based on the analysis results
    focus_areas: deriveOptimalFocusAreas(specMatches)
  },
  outputFiles: [
    `${workDir}/requirement-analysis.json`,
    `${workDir}/requirement-analysis.md`
  ],
  summary: generateSummary(dimensions, coverageResult, ambiguities)
};

function deriveOptimalFocusAreas(specMatches) {
  const coreCategories = ['context', 'memory', 'dataflow', 'agent'];
  const matched = specMatches
    .filter(m => m.taxonomy_match !== null)
    .map(m => {
      // Map to a diagnosis focus_area
      const category = m.taxonomy_match.category;
      if (category === 'context_explosion' || category === 'performance') return 'context';
      if (category === 'memory_loss') return 'memory';
      if (category === 'dataflow_break') return 'dataflow';
      if (category === 'agent_failure' || category === 'error_handling') return 'agent';
      return null;
    })
    .filter(f => f && coreCategories.includes(f));

  // Deduplicate
  return [...new Set(matched)];
}

function generateSummary(dimensions, coverage, ambiguities) {
  const dimCount = dimensions.length;
  const coverageStatus = coverage.status;
  const ambiguityCount = ambiguities.filter(a => a.needs_clarification).length;

  let summary = `Analysis complete: ${dimCount} dimensions`;
  summary += `, coverage ${coverage.coverage_rate}% (${coverageStatus})`;

  if (ambiguityCount > 0) {
    summary += `, ${ambiguityCount} ambiguities awaiting clarification`;
  }

  return summary;
}
```
### Output Files

#### requirement-analysis.json

```json
{
  "timestamp": "2024-01-01T00:00:00Z",
  "target_skill": "skill-name",
  "user_description": "original user description",
  "dimensions": [...],
  "spec_matches": [...],
  "coverage": {...},
  "ambiguities": [...],
  "derived_focus_areas": [...]
}
```

#### requirement-analysis.md

```markdown
# Requirement Analysis Report

## User Description
> ${user_issue_description}

## Dimension Decomposition

| ID | Description | Category | Confidence |
|----|-------------|----------|------------|
| DIM-001 | ... | ... | 0.85 |

## Spec Matching

| Dimension | Detection Patterns | Fix Strategy | Satisfied |
|-----------|--------------------|--------------|-----------|
| DIM-001 | CTX-001,002 | sliding_window | ✓ |

## Coverage Evaluation

- Total dimensions: N
- With detection: M
- With fix strategy: K (satisfaction criterion)
- Coverage rate: X%
- Status: satisfied/partial/unsatisfied

## Ambiguities

(if any)
```
## Error Handling

| Error | Recovery |
|-------|----------|
| Gemini CLI timeout | Retry once; if it still fails, fall back to simplified analysis |
| JSON parse failure | Attempt to repair the JSON, or use default dimensions |
| No category matches | Classify everything as custom and trigger Gemini deep analysis |

## Next Actions

- If `requirement_analysis.status === 'completed'`: continue to `action-diagnose-*`
- If `requirement_analysis.status === 'needs_clarification'`: wait for user clarification, then re-run
- If `coverage.status === 'unsatisfied'`: automatically trigger `action-gemini-analysis` for deep analysis
206  .claude/skills/skill-tuning/phases/actions/action-apply-fix.md  Normal file
@@ -0,0 +1,206 @@
# Action: Apply Fix

Apply a selected fix to the target skill with backup and rollback capability.

## Purpose

- Apply fix changes to target skill files
- Create backup before modifications
- Track applied fixes for verification
- Support rollback if needed

## Preconditions

- [ ] state.status === 'running'
- [ ] state.pending_fixes.length > 0
- [ ] state.proposed_fixes contains the fix to apply

## Execution

```javascript
async function execute(state, workDir) {
  const pendingFixes = state.pending_fixes;
  const proposedFixes = state.proposed_fixes;
  const targetPath = state.target_skill.path;
  const backupDir = state.backup_dir;

  if (pendingFixes.length === 0) {
    return {
      stateUpdates: {},
      outputFiles: [],
      summary: 'No pending fixes to apply'
    };
  }

  // Get next fix to apply
  const fixId = pendingFixes[0];
  const fix = proposedFixes.find(f => f.id === fixId);

  if (!fix) {
    return {
      stateUpdates: {
        pending_fixes: pendingFixes.slice(1),
        errors: [...state.errors, {
          action: 'action-apply-fix',
          message: `Fix ${fixId} not found in proposals`,
          timestamp: new Date().toISOString(),
          recoverable: true
        }]
      },
      outputFiles: [],
      summary: `Fix ${fixId} not found, skipping`
    };
  }

  console.log(`Applying fix ${fix.id}: ${fix.description}`);

  // Create fix-specific backup
  const fixBackupDir = `${backupDir}/before-${fix.id}`;
  Bash(`mkdir -p "${fixBackupDir}"`);

  const appliedChanges = [];
  let success = true;

  for (const change of fix.changes) {
    try {
      // Resolve file path (handle wildcards)
      let targetFiles = [];
      if (change.file.includes('*')) {
        targetFiles = Glob(`${targetPath}/${change.file}`);
      } else {
        targetFiles = [`${targetPath}/${change.file}`];
      }

      for (const targetFile of targetFiles) {
        // Backup original
        const relativePath = targetFile.replace(targetPath + '/', '');
        const backupPath = `${fixBackupDir}/${relativePath}`;

        if (Glob(targetFile).length > 0) {
          const originalContent = Read(targetFile);
          Bash(`mkdir -p "$(dirname "${backupPath}")"`);
          Write(backupPath, originalContent);
        }

        // Apply change based on action type
        if (change.action === 'modify' && change.diff) {
          // For now, append the diff as a comment/note
          // Real implementation would parse and apply the diff
          const existingContent = Read(targetFile);
|
||||
|
||||
// Simple diff application: look for context and apply
|
||||
// This is a simplified version - real implementation would be more sophisticated
|
||||
const newContent = existingContent + `\n\n<!-- Applied fix ${fix.id}: ${fix.description} -->\n`;
|
||||
|
||||
Write(targetFile, newContent);
|
||||
|
||||
appliedChanges.push({
|
||||
file: relativePath,
|
||||
action: 'modified',
|
||||
backup: backupPath
|
||||
});
|
||||
} else if (change.action === 'create') {
|
||||
Write(targetFile, change.new_content || '');
|
||||
appliedChanges.push({
|
||||
file: relativePath,
|
||||
action: 'created',
|
||||
backup: null
|
||||
});
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
console.log(`Error applying change to ${change.file}: ${error.message}`);
|
||||
success = false;
|
||||
}
|
||||
}
|
||||
|
||||
// Record applied fix
|
||||
const appliedFix = {
|
||||
fix_id: fix.id,
|
||||
applied_at: new Date().toISOString(),
|
||||
success: success,
|
||||
backup_path: fixBackupDir,
|
||||
verification_result: 'pending',
|
||||
rollback_available: true,
|
||||
changes_made: appliedChanges
|
||||
};
|
||||
|
||||
// Update applied fixes log
|
||||
const appliedFixesPath = `${workDir}/fixes/applied-fixes.json`;
|
||||
let existingApplied = [];
|
||||
try {
|
||||
existingApplied = JSON.parse(Read(appliedFixesPath));
|
||||
} catch (e) {
|
||||
existingApplied = [];
|
||||
}
|
||||
existingApplied.push(appliedFix);
|
||||
Write(appliedFixesPath, JSON.stringify(existingApplied, null, 2));
|
||||
|
||||
return {
|
||||
stateUpdates: {
|
||||
applied_fixes: [...state.applied_fixes, appliedFix],
|
||||
pending_fixes: pendingFixes.slice(1) // Remove applied fix from pending
|
||||
},
|
||||
outputFiles: [appliedFixesPath],
|
||||
summary: `Applied fix ${fix.id}: ${success ? 'success' : 'partial'}, ${appliedChanges.length} files modified`
|
||||
};
|
||||
}
|
||||
```
|
||||
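The modify branch above only appends a marker comment and notes that a real implementation would apply the diff. One way "look for context and apply" could work is an exact-context replacement; this is a sketch under that assumption, with a hypothetical function name:

```javascript
// Hypothetical context-based patch: replace an exact "before" snippet with "after".
// Illustrative only; the action above does not yet implement this.
function applyContextPatch(content, beforeSnippet, afterSnippet) {
  const idx = content.indexOf(beforeSnippet);
  // Context not found: leave the content untouched so the caller can report failure
  if (idx === -1) return { ok: false, content };
  return {
    ok: true,
    content: content.slice(0, idx) + afterSnippet + content.slice(idx + beforeSnippet.length)
  };
}
```

Failing closed when the context is missing keeps the file unmodified, which matches the backup-first philosophy of this action.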
## State Updates

```javascript
return {
  stateUpdates: {
    applied_fixes: [...existingApplied, newAppliedFix],
    pending_fixes: remainingPendingFixes
  }
};
```

## Rollback Function

```javascript
async function rollbackFix(fixId, state, workDir) {
  const appliedFix = state.applied_fixes.find(f => f.fix_id === fixId);

  if (!appliedFix || !appliedFix.rollback_available) {
    throw new Error(`Cannot rollback fix ${fixId}`);
  }

  const backupDir = appliedFix.backup_path;
  const targetPath = state.target_skill.path;

  // Restore from backup
  const backupFiles = Glob(`${backupDir}/**/*`);
  for (const backupFile of backupFiles) {
    const relativePath = backupFile.replace(backupDir + '/', '');
    const targetFile = `${targetPath}/${relativePath}`;
    const content = Read(backupFile);
    Write(targetFile, content);
  }

  return {
    stateUpdates: {
      applied_fixes: state.applied_fixes.map(f =>
        f.fix_id === fixId
          ? { ...f, rollback_available: false, verification_result: 'rolled_back' }
          : f
      )
    }
  };
}
```

## Error Handling

| Error Type | Recovery |
|------------|----------|
| File not found | Skip file, log warning |
| Write permission error | Retry with elevated permissions or report to user |
| Backup creation failed | Abort fix, don't modify |

## Next Actions

- If pending_fixes.length > 0: action-apply-fix (continue)
- If all fixes applied: action-verify
195
.claude/skills/skill-tuning/phases/actions/action-complete.md
Normal file
@@ -0,0 +1,195 @@
# Action: Complete

Finalize the tuning session with a summary report and cleanup.

## Purpose

- Generate final summary report
- Record tuning statistics
- Clean up temporary files (optional)
- Provide recommendations for future maintenance

## Preconditions

- [ ] state.status === 'running'
- [ ] quality_gate === 'pass' OR max_iterations reached

## Execution

```javascript
async function execute(state, workDir) {
  console.log('Finalizing skill tuning session...');

  const targetSkill = state.target_skill;
  const startTime = new Date(state.started_at);
  const endTime = new Date();
  const duration = Math.round((endTime - startTime) / 1000);

  // Generate final summary
  const summary = `# Skill Tuning Summary

**Target Skill**: ${targetSkill.name}
**Path**: ${targetSkill.path}
**Session Duration**: ${duration} seconds
**Completed**: ${endTime.toISOString()}

---

## Final Status

| Metric | Value |
|--------|-------|
| Final Health Score | ${state.quality_score}/100 |
| Quality Gate | ${state.quality_gate.toUpperCase()} |
| Total Iterations | ${state.iteration_count} |
| Issues Found | ${state.issues.length + state.applied_fixes.flatMap(f => f.issues_resolved || []).length} |
| Issues Resolved | ${state.applied_fixes.flatMap(f => f.issues_resolved || []).length} |
| Fixes Applied | ${state.applied_fixes.length} |
| Fixes Verified | ${state.applied_fixes.filter(f => f.verification_result === 'pass').length} |

---

## Diagnosis Summary

| Area | Issues Found | Severity |
|------|--------------|----------|
| Context Explosion | ${state.diagnosis.context?.issues_found || 'N/A'} | ${state.diagnosis.context?.severity || 'N/A'} |
| Long-tail Forgetting | ${state.diagnosis.memory?.issues_found || 'N/A'} | ${state.diagnosis.memory?.severity || 'N/A'} |
| Data Flow | ${state.diagnosis.dataflow?.issues_found || 'N/A'} | ${state.diagnosis.dataflow?.severity || 'N/A'} |
| Agent Coordination | ${state.diagnosis.agent?.issues_found || 'N/A'} | ${state.diagnosis.agent?.severity || 'N/A'} |

---

## Applied Fixes

${state.applied_fixes.length === 0 ? '_No fixes applied_' :
  state.applied_fixes.map((fix, i) => `
### ${i + 1}. ${fix.fix_id}

- **Applied At**: ${fix.applied_at}
- **Success**: ${fix.success ? 'Yes' : 'No'}
- **Verification**: ${fix.verification_result}
- **Rollback Available**: ${fix.rollback_available ? 'Yes' : 'No'}
`).join('\n')}

---

## Remaining Issues

${state.issues.length === 0 ? '✅ All issues resolved!' :
  `${state.issues.length} issues remain:\n\n` +
  state.issues.map(issue =>
    `- **[${issue.severity.toUpperCase()}]** ${issue.description} (${issue.id})`
  ).join('\n')}

---

## Recommendations

${generateRecommendations(state)}

---

## Backup Information

Original skill files backed up to:
\`${state.backup_dir}\`

To restore original skill:
\`\`\`bash
cp -r "${state.backup_dir}/${targetSkill.name}-backup"/* "${targetSkill.path}/"
\`\`\`

---

## Session Files

| File | Description |
|------|-------------|
| ${workDir}/tuning-report.md | Full diagnostic report |
| ${workDir}/diagnosis/*.json | Individual diagnosis results |
| ${workDir}/fixes/fix-proposals.json | Proposed fixes |
| ${workDir}/fixes/applied-fixes.json | Applied fix history |
| ${workDir}/tuning-summary.md | This summary |

---

*Skill tuning completed by skill-tuning*
`;

  Write(`${workDir}/tuning-summary.md`, summary);

  // Update final state
  return {
    stateUpdates: {
      status: 'completed',
      completed_at: endTime.toISOString()
    },
    outputFiles: [`${workDir}/tuning-summary.md`],
    summary: `Tuning complete: ${state.quality_gate} with ${state.quality_score}/100 health score`
  };
}

function generateRecommendations(state) {
  const recommendations = [];

  // Based on remaining issues
  if (state.issues.some(i => i.type === 'context_explosion')) {
    recommendations.push('- **Context Management**: Consider implementing a context summarization agent to prevent token growth');
  }

  if (state.issues.some(i => i.type === 'memory_loss')) {
    recommendations.push('- **Constraint Tracking**: Add explicit constraint injection to each phase prompt');
  }

  if (state.issues.some(i => i.type === 'dataflow_break')) {
    recommendations.push('- **State Centralization**: Migrate to single state.json with schema validation');
  }

  if (state.issues.some(i => i.type === 'agent_failure')) {
    recommendations.push('- **Error Handling**: Wrap all Task calls in try-catch blocks');
  }

  // General recommendations
  if (state.iteration_count >= state.max_iterations) {
    recommendations.push('- **Deep Refactoring**: Consider architectural review if issues persist after multiple iterations');
  }

  if (state.quality_score < 80) {
    recommendations.push('- **Regular Tuning**: Schedule periodic skill-tuning runs to catch issues early');
  }

  if (recommendations.length === 0) {
    recommendations.push('- Skill is in good health! Monitor for regressions during future development.');
  }

  return recommendations.join('\n');
}
```

## State Updates

```javascript
return {
  stateUpdates: {
    status: 'completed',
    completed_at: '<timestamp>'
  }
};
```

## Output

- **File**: `tuning-summary.md`
- **Location**: `${workDir}/tuning-summary.md`
- **Format**: Markdown

## Error Handling

| Error Type | Recovery |
|------------|----------|
| Summary write failed | Write to alternative location |

## Next Actions

- None (terminal state)
@@ -0,0 +1,317 @@
# Action: Diagnose Agent Coordination

Analyze the target skill for agent coordination failures: call-chain fragility and result-passing issues.

## Purpose

- Detect fragile agent call patterns
- Identify result passing issues
- Find missing error handling in agent calls
- Analyze agent return format consistency

## Preconditions

- [ ] state.status === 'running'
- [ ] state.target_skill.path is set
- [ ] 'agent' in state.focus_areas OR state.focus_areas is empty

## Detection Patterns

### Pattern 1: Unhandled Agent Failures

```regex
# Task calls without try-catch or error handling
/Task\s*\(\s*\{[^}]*\}\s*\)(?![^;]*catch)/
```

### Pattern 2: Missing Return Validation

```regex
# Agent result used directly without validation
/const\s+\w+\s*=\s*await?\s*Task\([^)]+\);\s*(?!.*(?:if|try|JSON\.parse))/
```

### Pattern 3: Inconsistent Agent Configuration

```regex
# Different agent configurations in same skill
/subagent_type:\s*['"](\w+)['"]/g
```

### Pattern 4: Deeply Nested Agent Calls

```regex
# Agent calling another agent (nested)
/Task\s*\([^)]*prompt:[^)]*Task\s*\(/
```
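Pattern 1 can be exercised as a small runnable check. This is a sketch, not the skill's actual detector: the function name is hypothetical, and the 100-character window mirrors the heuristic used in the Execution section.

```javascript
// Sketch of Pattern 1: flag Task({...}) calls that have no try/catch nearby.
// Window size and function name are illustrative assumptions.
function findUnhandledTaskCalls(source) {
  const unhandled = [];
  for (const m of source.matchAll(/Task\s*\(\s*\{[^}]*\}\s*\)/g)) {
    // Look for error-handling constructs within +/-100 characters of the call
    const windowText = source.slice(Math.max(0, m.index - 100), m.index + m[0].length + 100);
    if (!/try\s*\{|\.catch\s*\(/.test(windowText)) unhandled.push(m.index);
  }
  return unhandled;
}
```

A window-based heuristic keeps the check cheap but can miss handlers further away; the Execution code below accepts the same trade-off.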
## Execution

```javascript
async function execute(state, workDir) {
  const skillPath = state.target_skill.path;
  const startTime = Date.now();
  const issues = [];
  const evidence = [];

  console.log(`Diagnosing agent coordination in ${skillPath}...`);

  // 1. Find all Task/agent calls
  const allFiles = Glob(`${skillPath}/**/*.md`);
  const agentCalls = [];
  const agentTypes = new Set();

  for (const file of allFiles) {
    const content = Read(file);
    const relativePath = file.replace(skillPath + '/', '');

    // Find Task calls
    const taskMatches = content.matchAll(/Task\s*\(\s*\{([^}]+)\}/g);
    for (const match of taskMatches) {
      const config = match[1];

      // Extract agent type
      const typeMatch = config.match(/subagent_type:\s*['"]([^'"]+)['"]/);
      const agentType = typeMatch ? typeMatch[1] : 'unknown';
      agentTypes.add(agentType);

      // Check for error handling context
      const hasErrorHandling = /try\s*\{.*Task|\.catch\(|await\s+Task.*\.then/s.test(
        content.slice(Math.max(0, match.index - 100), match.index + match[0].length + 100)
      );

      // Check for result validation
      const hasResultValidation = /JSON\.parse|if\s*\(\s*result|result\s*\?\./s.test(
        content.slice(match.index, match.index + match[0].length + 200)
      );

      // Check for background execution
      const runsInBackground = /run_in_background:\s*true/.test(config);

      agentCalls.push({
        file: relativePath,
        agentType,
        hasErrorHandling,
        hasResultValidation,
        runsInBackground,
        config: config.slice(0, 200)
      });
    }
  }

  // 2. Analyze agent call patterns
  const totalCalls = agentCalls.length;
  const callsWithoutErrorHandling = agentCalls.filter(c => !c.hasErrorHandling);
  const callsWithoutValidation = agentCalls.filter(c => !c.hasResultValidation);

  // Issue: Missing error handling
  if (callsWithoutErrorHandling.length > 0) {
    issues.push({
      id: `AGT-${issues.length + 1}`,
      type: 'agent_failure',
      severity: callsWithoutErrorHandling.length > 2 ? 'high' : 'medium',
      location: { file: 'multiple' },
      description: `${callsWithoutErrorHandling.length}/${totalCalls} agent calls lack error handling`,
      evidence: callsWithoutErrorHandling.slice(0, 3).map(c =>
        `${c.file}: ${c.agentType}`
      ),
      root_cause: 'Agent failures not caught, may crash workflow',
      impact: 'Unhandled agent errors cause cascading failures',
      suggested_fix: 'Wrap Task calls in try-catch with graceful fallback'
    });
    evidence.push({
      file: 'multiple',
      pattern: 'missing_error_handling',
      context: `${callsWithoutErrorHandling.length} calls affected`,
      severity: 'high'
    });
  }

  // Issue: Missing result validation
  if (callsWithoutValidation.length > 0) {
    issues.push({
      id: `AGT-${issues.length + 1}`,
      type: 'agent_failure',
      severity: 'medium',
      location: { file: 'multiple' },
      description: `${callsWithoutValidation.length}/${totalCalls} agent calls lack result validation`,
      evidence: callsWithoutValidation.slice(0, 3).map(c =>
        `${c.file}: ${c.agentType} result not validated`
      ),
      root_cause: 'Agent results used directly without type checking',
      impact: 'Invalid agent output may corrupt state',
      suggested_fix: 'Add JSON.parse with try-catch and schema validation'
    });
  }

  // 3. Check for inconsistent agent types usage
  if (agentTypes.size > 3 && state.target_skill.execution_mode === 'autonomous') {
    issues.push({
      id: `AGT-${issues.length + 1}`,
      type: 'agent_failure',
      severity: 'low',
      location: { file: 'multiple' },
      description: `Using ${agentTypes.size} different agent types`,
      evidence: [...agentTypes].slice(0, 5),
      root_cause: 'Multiple agent types increase coordination complexity',
      impact: 'Different agent behaviors may cause inconsistency',
      suggested_fix: 'Standardize on fewer agent types with clear roles'
    });
  }

  // 4. Check for nested agent calls
  for (const file of allFiles) {
    const content = Read(file);
    const relativePath = file.replace(skillPath + '/', '');

    // Detect nested Task calls
    const hasNestedTask = /Task\s*\([^)]*prompt:[^)]*Task\s*\(/s.test(content);

    if (hasNestedTask) {
      issues.push({
        id: `AGT-${issues.length + 1}`,
        type: 'agent_failure',
        severity: 'high',
        location: { file: relativePath },
        description: 'Nested agent calls detected',
        evidence: ['Agent prompt contains another Task call'],
        root_cause: 'Agent calls another agent, creating deep nesting',
        impact: 'Context explosion, hard to debug, unpredictable behavior',
        suggested_fix: 'Flatten agent calls, use orchestrator to coordinate'
      });
    }
  }

  // 5. Check SKILL.md for agent configuration consistency
  const skillMd = Read(`${skillPath}/SKILL.md`);

  // Check if allowed-tools includes Task
  const allowedTools = skillMd.match(/allowed-tools:\s*([^\n]+)/i);
  if (allowedTools && !allowedTools[1].includes('Task') && totalCalls > 0) {
    issues.push({
      id: `AGT-${issues.length + 1}`,
      type: 'agent_failure',
      severity: 'medium',
      location: { file: 'SKILL.md' },
      description: 'Task tool used but not declared in allowed-tools',
      evidence: [`${totalCalls} Task calls found, but Task not in allowed-tools`],
      root_cause: 'Tool declaration mismatch',
      impact: 'May cause runtime permission issues',
      suggested_fix: 'Add Task to allowed-tools in SKILL.md front matter'
    });
  }

  // 6. Check for agent result format consistency
  const returnFormats = new Set();
  for (const file of allFiles) {
    const content = Read(file);

    // Look for return format definitions
    const returnMatch = content.match(/\[RETURN\][^[]*|return\s*\{[^}]+\}/gi);
    if (returnMatch) {
      returnMatch.forEach(r => {
        const format = r.includes('JSON') ? 'json' :
                       r.includes('summary') ? 'summary' :
                       r.includes('file') ? 'file_path' : 'other';
        returnFormats.add(format);
      });
    }
  }

  if (returnFormats.size > 2) {
    issues.push({
      id: `AGT-${issues.length + 1}`,
      type: 'agent_failure',
      severity: 'medium',
      location: { file: 'multiple' },
      description: 'Inconsistent agent return formats',
      evidence: [...returnFormats],
      root_cause: 'Different agents return data in different formats',
      impact: 'Orchestrator must handle multiple format types',
      suggested_fix: 'Standardize return format: {status, output_file, summary}'
    });
  }

  // 7. Calculate severity
  const criticalCount = issues.filter(i => i.severity === 'critical').length;
  const highCount = issues.filter(i => i.severity === 'high').length;
  const severity = criticalCount > 0 ? 'critical' :
                   highCount > 1 ? 'high' :
                   highCount > 0 ? 'medium' :
                   issues.length > 0 ? 'low' : 'none';

  // 8. Write diagnosis result
  const diagnosisResult = {
    status: 'completed',
    issues_found: issues.length,
    severity: severity,
    execution_time_ms: Date.now() - startTime,
    details: {
      patterns_checked: [
        'error_handling',
        'result_validation',
        'agent_type_consistency',
        'nested_calls',
        'return_format_consistency'
      ],
      patterns_matched: evidence.map(e => e.pattern),
      evidence: evidence,
      agent_analysis: {
        total_agent_calls: totalCalls,
        unique_agent_types: agentTypes.size,
        calls_without_error_handling: callsWithoutErrorHandling.length,
        calls_without_validation: callsWithoutValidation.length,
        agent_types_used: [...agentTypes]
      },
      recommendations: [
        callsWithoutErrorHandling.length > 0
          ? 'Add try-catch to all Task calls' : null,
        callsWithoutValidation.length > 0
          ? 'Add result validation with JSON.parse and schema check' : null,
        agentTypes.size > 3
          ? 'Consolidate agent types for consistency' : null
      ].filter(Boolean)
    }
  };

  Write(`${workDir}/diagnosis/agent-diagnosis.json`,
    JSON.stringify(diagnosisResult, null, 2));

  return {
    stateUpdates: {
      'diagnosis.agent': diagnosisResult,
      issues: [...state.issues, ...issues]
    },
    outputFiles: [`${workDir}/diagnosis/agent-diagnosis.json`],
    summary: `Agent diagnosis: ${issues.length} issues found (severity: ${severity})`
  };
}
```

## State Updates

```javascript
return {
  stateUpdates: {
    'diagnosis.agent': {
      status: 'completed',
      issues_found: <count>,
      severity: '<critical|high|medium|low|none>',
      // ... full diagnosis result
    },
    issues: [...existingIssues, ...newIssues]
  }
};
```

## Error Handling

| Error Type | Recovery |
|------------|----------|
| Regex match error | Use simpler patterns |
| File access error | Skip and continue |

## Next Actions

- Success: action-generate-report
- Skipped: If 'agent' not in focus_areas
@@ -0,0 +1,243 @@
# Action: Diagnose Context Explosion

Analyze the target skill for context explosion issues: token accumulation and multi-turn dialogue bloat.

## Purpose

- Detect patterns that cause context growth
- Identify multi-turn accumulation points
- Find missing context compression mechanisms
- Measure potential token waste

## Preconditions

- [ ] state.status === 'running'
- [ ] state.target_skill.path is set
- [ ] 'context' in state.focus_areas OR state.focus_areas is empty

## Detection Patterns

### Pattern 1: Unbounded History Accumulation

```regex
# Patterns that suggest history accumulation
/\bhistory\b.*\.push\b/
/\bmessages\b.*\.concat\b/
/\bconversation\b.*\+=/
/\bappend.*context\b/i
```

### Pattern 2: Full Content Passing

```regex
# Patterns that pass full content instead of references
/Read\([^)]+\).*\+.*Read\(/
/JSON\.stringify\(.*state\)/  # Full state serialization
/\$\{.*content\}/             # Template literal with full content
```

### Pattern 3: Missing Summarization

```regex
# Absence of compression/summarization
# Check for lack of: summarize, compress, truncate, slice
```

### Pattern 4: Agent Return Bloat

```regex
# Agent returning full content instead of path + summary
/return\s*\{[^}]*content:/
/return.*JSON\.stringify/
```
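The history-accumulation patterns can be tried out in isolation. The sketch below is illustrative (function name and exact regexes are assumptions); it groups the method-name alternatives explicitly so the match is anchored to the `history`/`messages`/`conversation` identifier:

```javascript
// Sketch of Pattern 1: report which history-accumulation patterns match a phase file.
// Names and regex details are illustrative, not the skill's actual detector.
function detectHistoryAccumulation(text) {
  const patterns = [
    /\bhistory\b[^\n]*\.(?:push|concat|append)\s*\(/,
    /\bmessages\b[^\n]*\.concat\s*\(/,
    /\bconversation\b[^\n]*\+=/
  ];
  // Return the source of each pattern that matched, for evidence reporting
  return patterns.filter(p => p.test(text)).map(p => p.source);
}
```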
## Execution

```javascript
async function execute(state, workDir) {
  const skillPath = state.target_skill.path;
  const startTime = Date.now();
  const issues = [];
  const evidence = [];

  console.log(`Diagnosing context explosion in ${skillPath}...`);

  // 1. Scan all phase files
  const phaseFiles = Glob(`${skillPath}/phases/**/*.md`);

  for (const file of phaseFiles) {
    const content = Read(file);
    const relativePath = file.replace(skillPath + '/', '');

    // Check Pattern 1: History accumulation
    const historyPatterns = [
      /history\s*[.=].*(?:push|concat|append)/gi,
      /messages\s*=\s*\[.*\.\.\..*messages/gi,
      /conversation.*\+=/gi
    ];

    for (const pattern of historyPatterns) {
      const matches = content.match(pattern);
      if (matches) {
        issues.push({
          id: `CTX-${issues.length + 1}`,
          type: 'context_explosion',
          severity: 'high',
          location: { file: relativePath },
          description: 'Unbounded history accumulation detected',
          evidence: matches.slice(0, 3),
          root_cause: 'History/messages array grows without bounds',
          impact: 'Token count increases linearly with iterations',
          suggested_fix: 'Implement sliding window or summarization'
        });
        evidence.push({
          file: relativePath,
          pattern: 'history_accumulation',
          context: matches[0],
          severity: 'high'
        });
      }
    }

    // Check Pattern 2: Full content passing
    const contentPatterns = [
      /Read\s*\([^)]+\)\s*[\+,]/g,
      /JSON\.stringify\s*\(\s*state\s*\)/g,
      /\$\{[^}]*content[^}]*\}/g
    ];

    for (const pattern of contentPatterns) {
      const matches = content.match(pattern);
      if (matches) {
        issues.push({
          id: `CTX-${issues.length + 1}`,
          type: 'context_explosion',
          severity: 'medium',
          location: { file: relativePath },
          description: 'Full content passed instead of reference',
          evidence: matches.slice(0, 3),
          root_cause: 'Entire file/state content included in prompts',
          impact: 'Unnecessary token consumption',
          suggested_fix: 'Pass file paths and summaries instead of full content'
        });
        evidence.push({
          file: relativePath,
          pattern: 'full_content_passing',
          context: matches[0],
          severity: 'medium'
        });
      }
    }

    // Check Pattern 3: Missing summarization
    const hasSummarization = /summariz|compress|truncat|slice.*context/i.test(content);
    const hasLongPrompts = content.length > 5000;

    if (hasLongPrompts && !hasSummarization) {
      issues.push({
        id: `CTX-${issues.length + 1}`,
        type: 'context_explosion',
        severity: 'medium',
        location: { file: relativePath },
        description: 'Long phase file without summarization mechanism',
        evidence: [`File length: ${content.length} chars`],
        root_cause: 'No context compression for large content',
        impact: 'Potential token overflow in long sessions',
        suggested_fix: 'Add context summarization before passing to agents'
      });
    }

    // Check Pattern 4: Agent return bloat
    const returnPatterns = /return\s*\{[^}]*(?:content|full_output|complete_result):/g;
    const returnMatches = content.match(returnPatterns);
    if (returnMatches) {
      issues.push({
        id: `CTX-${issues.length + 1}`,
        type: 'context_explosion',
        severity: 'high',
        location: { file: relativePath },
        description: 'Agent returns full content instead of path+summary',
        evidence: returnMatches.slice(0, 3),
        root_cause: 'Agent output includes complete content',
        impact: 'Context bloat when orchestrator receives full output',
        suggested_fix: 'Return {output_file, summary} instead of {content}'
      });
    }
  }

  // 2. Calculate severity
  const criticalCount = issues.filter(i => i.severity === 'critical').length;
  const highCount = issues.filter(i => i.severity === 'high').length;
  const severity = criticalCount > 0 ? 'critical' :
                   highCount > 2 ? 'high' :
                   highCount > 0 ? 'medium' :
                   issues.length > 0 ? 'low' : 'none';

  // 3. Write diagnosis result
  const diagnosisResult = {
    status: 'completed',
    issues_found: issues.length,
    severity: severity,
    execution_time_ms: Date.now() - startTime,
    details: {
      patterns_checked: [
        'history_accumulation',
        'full_content_passing',
        'missing_summarization',
        'agent_return_bloat'
      ],
      patterns_matched: evidence.map(e => e.pattern),
      evidence: evidence,
      recommendations: [
        issues.length > 0 ? 'Implement context summarization agent' : null,
        highCount > 0 ? 'Add sliding window for conversation history' : null,
        evidence.some(e => e.pattern === 'full_content_passing')
          ? 'Refactor to pass file paths instead of content' : null
      ].filter(Boolean)
    }
  };

  Write(`${workDir}/diagnosis/context-diagnosis.json`,
    JSON.stringify(diagnosisResult, null, 2));

  return {
    stateUpdates: {
      'diagnosis.context': diagnosisResult,
      issues: [...state.issues, ...issues],
      'issues_by_severity.critical': state.issues_by_severity.critical + criticalCount,
      'issues_by_severity.high': state.issues_by_severity.high + highCount
    },
    outputFiles: [`${workDir}/diagnosis/context-diagnosis.json`],
    summary: `Context diagnosis: ${issues.length} issues found (severity: ${severity})`
  };
}
```

## State Updates

```javascript
return {
  stateUpdates: {
    'diagnosis.context': {
      status: 'completed',
      issues_found: <count>,
      severity: '<critical|high|medium|low|none>',
      // ... full diagnosis result
    },
    issues: [...existingIssues, ...newIssues]
  }
};
```

## Error Handling

| Error Type | Recovery |
|------------|----------|
| File read error | Skip file, log warning |
| Pattern matching error | Use fallback patterns |
| Write error | Retry to alternative path |

## Next Actions

- Success: action-diagnose-memory (or next in focus_areas)
- Skipped: If 'context' not in focus_areas
# Action: Diagnose Data Flow Issues

Analyze the target skill for data flow disruption: state inconsistencies and format variations.

## Purpose

- Detect inconsistent data formats between phases
- Identify scattered state storage
- Find missing data contracts
- Measure state transition integrity

## Preconditions

- [ ] state.status === 'running'
- [ ] state.target_skill.path is set
- [ ] 'dataflow' in state.focus_areas OR state.focus_areas is empty

## Detection Patterns

### Pattern 1: Multiple Storage Locations

```regex
# Data written to multiple paths without centralization
/Write\s*\(\s*[`'"][^`'"]+[`'"]/g
```

### Pattern 2: Inconsistent Field Names

```regex
# Same concept with different names: title/name, id/identifier
```
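Pattern 2 only flags the mismatch; the usual remedy is a thin normalization layer that maps known aliases onto one canonical field name before any access. A minimal sketch (the alias table and `normalizeRecord` are illustrative names, not part of the skill):

```javascript
// Illustrative alias table covering the pairs Pattern 2 looks for.
const FIELD_ALIASES = {
  title: 'name',
  identifier: 'id',
  state: 'status'
};

// Rewrite every aliased key to its canonical form; pass others through.
function normalizeRecord(record) {
  const out = {};
  for (const [key, value] of Object.entries(record)) {
    out[FIELD_ALIASES[key] || key] = value;
  }
  return out;
}

console.log(normalizeRecord({ title: 'Fix login', identifier: 7 }));
// { name: 'Fix login', id: 7 }
```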
### Pattern 3: Missing Schema Validation

```regex
# Absence of validation before state write
# Look for lack of: validate, schema, check, verify
```
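A hand-rolled runtime check of the kind this pattern looks for can be quite small. A sketch, assuming a state shape with `status` and `issues` fields (illustrative, not the skill's actual schema):

```javascript
// Minimal runtime validation before a state write (illustrative schema).
function validateState(state) {
  const errors = [];
  if (state === null || typeof state !== 'object') {
    errors.push('state must be an object');
  } else {
    if (!['running', 'completed', 'failed'].includes(state.status)) {
      errors.push(`invalid status: ${state.status}`);
    }
    if (!Array.isArray(state.issues)) {
      errors.push('issues must be an array');
    }
  }
  return { valid: errors.length === 0, errors };
}

console.log(validateState({ status: 'running', issues: [] }).valid); // true
console.log(validateState({ status: 'done', issues: null }).errors.length); // 2
```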
### Pattern 4: Format Transformation Without Normalization

```regex
# Direct JSON.parse without error handling or normalization
/JSON\.parse\([^)]+\)(?!\s*\|\|)/
```
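Pattern 4's fix is to wrap parsing so a malformed payload degrades to a known default instead of throwing mid-phase. A minimal sketch (`safeParse` is our name, used for illustration):

```javascript
// Parse JSON defensively: fall back on bad input, and only accept
// plain objects so downstream field access is safe.
function safeParse(text, fallback = {}) {
  try {
    const value = JSON.parse(text);
    return value !== null && typeof value === 'object' ? value : fallback;
  } catch {
    return fallback;
  }
}

console.log(safeParse('{"a":1}').a);            // 1
console.log(safeParse('not json', { a: 0 }).a); // 0
```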

## Execution

```javascript
async function execute(state, workDir) {
  const skillPath = state.target_skill.path;
  const startTime = Date.now();
  const issues = [];
  const evidence = [];

  console.log(`Diagnosing data flow in ${skillPath}...`);

  // 1. Collect all Write operations to map data storage
  const allFiles = Glob(`${skillPath}/**/*.md`);
  const writeLocations = [];
  const readLocations = [];

  for (const file of allFiles) {
    const content = Read(file);
    const relativePath = file.replace(skillPath + '/', '');

    // Find Write operations
    const writeMatches = content.matchAll(/Write\s*\(\s*[`'"]([^`'"]+)[`'"]/g);
    for (const match of writeMatches) {
      writeLocations.push({
        file: relativePath,
        target: match[1],
        isStateFile: match[1].includes('state.json') || match[1].includes('config.json')
      });
    }

    // Find Read operations
    const readMatches = content.matchAll(/Read\s*\(\s*[`'"]([^`'"]+)[`'"]/g);
    for (const match of readMatches) {
      readLocations.push({
        file: relativePath,
        source: match[1]
      });
    }
  }

  // 2. Check for scattered state storage
  const stateTargets = writeLocations
    .filter(w => w.isStateFile)
    .map(w => w.target);

  const uniqueStateFiles = [...new Set(stateTargets)];

  if (uniqueStateFiles.length > 2) {
    issues.push({
      id: `DF-${issues.length + 1}`,
      type: 'dataflow_break',
      severity: 'high',
      location: { file: 'multiple' },
      description: `State stored in ${uniqueStateFiles.length} different locations`,
      evidence: uniqueStateFiles.slice(0, 5),
      root_cause: 'No centralized state management',
      impact: 'State inconsistency between phases',
      suggested_fix: 'Centralize state to single state.json with state manager'
    });
    evidence.push({
      file: 'multiple',
      pattern: 'scattered_state',
      context: uniqueStateFiles.join(', '),
      severity: 'high'
    });
  }

  // 3. Check for inconsistent field naming
  const fieldNamePatterns = {
    'name_vs_title': [/\.name\b/, /\.title\b/],
    'id_vs_identifier': [/\.id\b/, /\.identifier\b/],
    'status_vs_state': [/\.status\b/, /\.state\b/],
    'error_vs_errors': [/\.error\b/, /\.errors\b/]
  };

  const fieldUsage = {};

  for (const file of allFiles) {
    const content = Read(file);
    const relativePath = file.replace(skillPath + '/', '');

    for (const [patternName, patterns] of Object.entries(fieldNamePatterns)) {
      for (const pattern of patterns) {
        if (pattern.test(content)) {
          if (!fieldUsage[patternName]) fieldUsage[patternName] = [];
          fieldUsage[patternName].push({
            file: relativePath,
            pattern: pattern.toString()
          });
        }
      }
    }
  }

  for (const [patternName, usages] of Object.entries(fieldUsage)) {
    const uniquePatterns = [...new Set(usages.map(u => u.pattern))];
    if (uniquePatterns.length > 1) {
      issues.push({
        id: `DF-${issues.length + 1}`,
        type: 'dataflow_break',
        severity: 'medium',
        location: { file: 'multiple' },
        description: `Inconsistent field naming: ${patternName.replace('_vs_', ' vs ')}`,
        evidence: usages.slice(0, 3).map(u => `${u.file}: ${u.pattern}`),
        root_cause: 'Same concept referred to with different field names',
        impact: 'Data may be lost during field access',
        suggested_fix: 'Standardize to a single field name and add a normalization function'
      });
    }
  }

  // 4. Check for missing schema validation
  for (const file of allFiles) {
    const content = Read(file);
    const relativePath = file.replace(skillPath + '/', '');

    // Find JSON.parse without validation
    const unsafeParses = content.match(/JSON\.parse\s*\([^)]+\)(?!\s*\?\?|\s*\|\|)/g);
    const hasValidation = /validat|schema|type.*check/i.test(content);

    if (unsafeParses && unsafeParses.length > 0 && !hasValidation) {
      issues.push({
        id: `DF-${issues.length + 1}`,
        type: 'dataflow_break',
        severity: 'medium',
        location: { file: relativePath },
        description: 'JSON parsing without validation',
        evidence: unsafeParses.slice(0, 2),
        root_cause: 'No schema validation after parsing',
        impact: 'Invalid data may propagate through phases',
        suggested_fix: 'Add schema validation after JSON.parse'
      });
    }
  }

  // 5. Check state schema if exists
  const stateSchemaFile = Glob(`${skillPath}/phases/state-schema.md`)[0];
  if (stateSchemaFile) {
    const schemaContent = Read(stateSchemaFile);

    // Check for type definitions
    const hasTypeScript = /interface\s+\w+|type\s+\w+\s*=/i.test(schemaContent);
    const hasValidationFunction = /function\s+validate|validateState/i.test(schemaContent);

    if (hasTypeScript && !hasValidationFunction) {
      issues.push({
        id: `DF-${issues.length + 1}`,
        type: 'dataflow_break',
        severity: 'low',
        location: { file: 'phases/state-schema.md' },
        description: 'Type definitions without runtime validation',
        evidence: ['TypeScript interfaces defined but no validation function'],
        root_cause: 'Types are compile-time only, not enforced at runtime',
        impact: 'Schema violations may occur at runtime',
        suggested_fix: 'Add validateState() function using Zod or manual checks'
      });
    }
  } else if (state.target_skill.execution_mode === 'autonomous') {
    issues.push({
      id: `DF-${issues.length + 1}`,
      type: 'dataflow_break',
      severity: 'high',
      location: { file: 'phases/' },
      description: 'Autonomous skill missing state-schema.md',
      evidence: ['No state schema definition found'],
      root_cause: 'State structure undefined for orchestrator',
      impact: 'Inconsistent state handling across actions',
      suggested_fix: 'Create phases/state-schema.md with explicit type definitions'
    });
  }

  // 6. Check read-write alignment
  const writtenFiles = new Set(writeLocations.map(w => w.target));
  const readFiles = new Set(readLocations.map(r => r.source));

  const writtenButNotRead = [...writtenFiles].filter(f =>
    !readFiles.has(f) && !f.includes('output') && !f.includes('report')
  );

  if (writtenButNotRead.length > 0) {
    issues.push({
      id: `DF-${issues.length + 1}`,
      type: 'dataflow_break',
      severity: 'low',
      location: { file: 'multiple' },
      description: 'Files written but never read',
      evidence: writtenButNotRead.slice(0, 3),
      root_cause: 'Orphaned output files',
      impact: 'Wasted storage and potential confusion',
      suggested_fix: 'Remove unused writes or add reads where needed'
    });
  }

  // 7. Calculate severity
  const criticalCount = issues.filter(i => i.severity === 'critical').length;
  const highCount = issues.filter(i => i.severity === 'high').length;
  const severity = criticalCount > 0 ? 'critical' :
                   highCount > 1 ? 'high' :
                   highCount > 0 ? 'medium' :
                   issues.length > 0 ? 'low' : 'none';

  // 8. Write diagnosis result
  const diagnosisResult = {
    status: 'completed',
    issues_found: issues.length,
    severity: severity,
    execution_time_ms: Date.now() - startTime,
    details: {
      patterns_checked: [
        'scattered_state',
        'inconsistent_naming',
        'missing_validation',
        'read_write_alignment'
      ],
      patterns_matched: evidence.map(e => e.pattern),
      evidence: evidence,
      data_flow_map: {
        write_locations: writeLocations.length,
        read_locations: readLocations.length,
        unique_state_files: uniqueStateFiles.length
      },
      recommendations: [
        uniqueStateFiles.length > 2 ? 'Implement centralized state manager' : null,
        issues.some(i => i.description.includes('naming'))
          ? 'Create normalization layer for field names' : null,
        issues.some(i => i.description.includes('validation'))
          ? 'Add Zod or JSON Schema validation' : null
      ].filter(Boolean)
    }
  };

  Write(`${workDir}/diagnosis/dataflow-diagnosis.json`,
    JSON.stringify(diagnosisResult, null, 2));

  return {
    stateUpdates: {
      'diagnosis.dataflow': diagnosisResult,
      issues: [...state.issues, ...issues]
    },
    outputFiles: [`${workDir}/diagnosis/dataflow-diagnosis.json`],
    summary: `Data flow diagnosis: ${issues.length} issues found (severity: ${severity})`
  };
}
```
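The severity rollup in step 7 recurs in every diagnosis action; lifted into a standalone helper, the thresholds live in one place (a sketch; `rollupSeverity` is our name):

```javascript
// Same thresholds as step 7: any critical => critical, more than one
// high => high, one high => medium, anything else => low, empty => none.
function rollupSeverity(issues) {
  const critical = issues.filter(i => i.severity === 'critical').length;
  const high = issues.filter(i => i.severity === 'high').length;
  return critical > 0 ? 'critical' :
         high > 1 ? 'high' :
         high > 0 ? 'medium' :
         issues.length > 0 ? 'low' : 'none';
}

console.log(rollupSeverity([]));                     // 'none'
console.log(rollupSeverity([{ severity: 'high' }])); // 'medium'
```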
## State Updates

```javascript
return {
  stateUpdates: {
    'diagnosis.dataflow': {
      status: 'completed',
      issues_found: <count>,
      severity: '<critical|high|medium|low|none>',
      // ... full diagnosis result
    },
    issues: [...existingIssues, ...newIssues]
  }
};
```

## Error Handling

| Error Type | Recovery |
|------------|----------|
| Glob pattern error | Use fallback patterns |
| File read error | Skip and continue |

## Next Actions

- Success: action-diagnose-agent (or next in focus_areas)
- Skipped: If 'dataflow' not in focus_areas
# Action: Diagnose Documentation Structure

Detect documentation redundancy and conflict issues in the target skill.

## Purpose

- Detect duplicate definitions (State Schema, mapping tables, type definitions, etc.)
- Detect conflicting definitions (inconsistent priority definitions, implementation-documentation drift, etc.)
- Generate recommendations for consolidating duplicates and resolving conflicts

## Preconditions

- [ ] `state.status === 'running'`
- [ ] `state.target_skill !== null`
- [ ] `!state.diagnosis.docs`
- [ ] User-specified focus_areas includes 'docs' or 'all', or a full diagnosis is requested

## Detection Patterns

### DOC-RED-001: Duplicated Core Definitions

Detect State Schema, core interfaces, and similar definitions that appear in multiple places:

```javascript
async function detectDefinitionDuplicates(skillPath) {
  const patterns = [
    { name: 'state_schema', regex: /interface\s+(TuningState|State)\s*\{/g },
    { name: 'fix_strategy', regex: /type\s+FixStrategy\s*=/g },
    { name: 'issue_type', regex: /type:\s*['"]?(context_explosion|memory_loss|dataflow_break)/g }
  ];

  const files = Glob('**/*.md', { cwd: skillPath });
  const duplicates = [];

  for (const pattern of patterns) {
    const matches = [];
    for (const file of files) {
      const content = Read(`${skillPath}/${file}`);
      pattern.regex.lastIndex = 0; // /g regexes are stateful; reset before re-testing
      if (pattern.regex.test(content)) {
        matches.push({ file, pattern: pattern.name });
      }
    }
    if (matches.length > 1) {
      duplicates.push({
        type: pattern.name,
        files: matches.map(m => m.file),
        severity: 'high'
      });
    }
  }

  return duplicates;
}
```
### DOC-RED-002: Duplicated Hardcoded Configuration

Detect hardcoded values in action files that duplicate the spec documents:

```javascript
async function detectHardcodedDuplicates(skillPath) {
  const actionFiles = Glob('phases/actions/*.md', { cwd: skillPath });
  const specFiles = Glob('specs/*.md', { cwd: skillPath });

  const duplicates = [];

  for (const actionFile of actionFiles) {
    const content = Read(`${skillPath}/${actionFile}`);

    // Detect hardcoded mapping objects
    const hardcodedPatterns = [
      /const\s+\w*[Mm]apping\s*=\s*\{/g,
      /patternMapping\s*=\s*\{/g,
      /strategyMapping\s*=\s*\{/g
    ];

    for (const pattern of hardcodedPatterns) {
      pattern.lastIndex = 0; // reset stateful /g regex between files
      if (pattern.test(content)) {
        duplicates.push({
          type: 'hardcoded_mapping',
          file: actionFile,
          description: 'Hardcoded mapping may duplicate definitions in specs/',
          severity: 'high'
        });
      }
    }
  }

  return duplicates;
}
```
### DOC-CON-001: Priority Definition Conflicts

Detect priority levels (P0-P3, etc.) defined inconsistently across files:

```javascript
async function detectPriorityConflicts(skillPath) {
  const files = Glob('**/*.md', { cwd: skillPath });
  const priorityDefs = {};

  const priorityPattern = /\*\*P(\d+)\*\*[:\s]+([^\|]+)/g;

  for (const file of files) {
    const content = Read(`${skillPath}/${file}`);
    let match;
    while ((match = priorityPattern.exec(content)) !== null) {
      const priority = `P${match[1]}`;
      const definition = match[2].trim();

      if (!priorityDefs[priority]) {
        priorityDefs[priority] = [];
      }
      priorityDefs[priority].push({ file, definition });
    }
  }

  const conflicts = [];
  for (const [priority, defs] of Object.entries(priorityDefs)) {
    const uniqueDefs = [...new Set(defs.map(d => d.definition))];
    if (uniqueDefs.length > 1) {
      conflicts.push({
        key: priority,
        definitions: defs,
        severity: 'critical'
      });
    }
  }

  return conflicts;
}
```
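The extraction at the heart of DOC-CON-001 can be exercised in isolation; here the same `**P0**` pattern is run over two hypothetical snippets that disagree:

```javascript
// Same regex as detectPriorityConflicts above.
const priorityPattern = /\*\*P(\d+)\*\*[:\s]+([^\|]+)/g;

function extractPriorities(text) {
  const defs = [];
  let match;
  while ((match = priorityPattern.exec(text)) !== null) {
    defs.push({ priority: `P${match[1]}`, definition: match[2].trim() });
  }
  return defs;
}

const fileA = extractPriorities('**P0**: blocks release');
const fileB = extractPriorities('**P0**: fix within one week');
console.log(fileA[0].definition !== fileB[0].definition); // true => conflict
```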
### DOC-CON-002: Implementation-Documentation Drift

Detect inconsistencies between hardcoded values and the documentation tables:

```javascript
async function detectImplementationDrift(skillPath) {
  // Compare category-mappings.json against the tables in specs/*.md
  const mappingsFile = `${skillPath}/specs/category-mappings.json`;

  if (!fileExists(mappingsFile)) {
    return []; // No centralized config; skip
  }

  const mappings = JSON.parse(Read(mappingsFile));
  const conflicts = [];

  // Compare against dimension-mapping.md
  const dimMapping = Read(`${skillPath}/specs/dimension-mapping.md`);

  for (const [category, config] of Object.entries(mappings.categories)) {
    // Check whether each strategy is mentioned in the docs
    for (const strategy of config.strategies || []) {
      if (!dimMapping.includes(strategy)) {
        conflicts.push({
          type: 'mapping',
          key: `${category}.strategies`,
          issue: `Strategy ${strategy} is defined in JSON but not mentioned in the docs`
        });
      }
    }
  }

  return conflicts;
}
```
## Execution

```javascript
async function executeDiagnosis(state, workDir) {
  console.log('=== Diagnosing Documentation Structure ===');

  const startTime = Date.now(); // needed for execution_time_ms below
  const skillPath = state.target_skill.path;
  const issues = [];

  // 1. Detect redundancies
  const definitionDups = await detectDefinitionDuplicates(skillPath);
  const hardcodedDups = await detectHardcodedDuplicates(skillPath);

  for (const dup of [...definitionDups, ...hardcodedDups]) {
    issues.push({
      id: `DOC-RED-${issues.length + 1}`,
      type: 'doc_redundancy',
      severity: dup.severity,
      location: { files: dup.files || [dup.file] },
      description: dup.description || `${dup.type} defined in multiple places`,
      evidence: dup.files || [dup.file],
      root_cause: 'No single source of truth',
      impact: 'Hard to maintain; prone to inconsistency',
      suggested_fix: 'consolidate_to_ssot'
    });
  }

  // 2. Detect conflicts
  const priorityConflicts = await detectPriorityConflicts(skillPath);
  const driftConflicts = await detectImplementationDrift(skillPath);

  for (const conflict of priorityConflicts) {
    issues.push({
      id: `DOC-CON-${issues.length + 1}`,
      type: 'doc_conflict',
      severity: 'critical',
      location: { files: conflict.definitions.map(d => d.file) },
      description: `${conflict.key} defined inconsistently across files`,
      evidence: conflict.definitions.map(d => `${d.file}: ${d.definition}`),
      root_cause: 'Definitions not kept in sync after updates',
      impact: 'Unpredictable behavior',
      suggested_fix: 'reconcile_conflicting_definitions'
    });
  }

  for (const conflict of driftConflicts) {
    issues.push({
      id: `DOC-CON-${issues.length + 1}`,
      type: 'doc_conflict',
      severity: 'high',
      location: { files: ['specs/category-mappings.json'] },
      description: conflict.issue,
      evidence: [conflict.key],
      root_cause: 'Implementation drifted from documentation',
      impact: 'Docs no longer describe actual behavior',
      suggested_fix: 'reconcile_conflicting_definitions'
    });
  }

  // 3. Generate report
  const severity = issues.some(i => i.severity === 'critical') ? 'critical' :
                   issues.some(i => i.severity === 'high') ? 'high' :
                   issues.length > 0 ? 'medium' : 'none';

  const result = {
    status: 'completed',
    issues_found: issues.length,
    severity: severity,
    execution_time_ms: Date.now() - startTime,
    details: {
      patterns_checked: ['DOC-RED-001', 'DOC-RED-002', 'DOC-CON-001', 'DOC-CON-002'],
      patterns_matched: issues.map(i => i.id.split('-').slice(0, 2).join('-')),
      evidence: issues.flatMap(i => i.evidence),
      recommendations: generateRecommendations(issues)
    },
    redundancies: issues.filter(i => i.type === 'doc_redundancy'),
    conflicts: issues.filter(i => i.type === 'doc_conflict')
  };

  // Write diagnosis result
  Write(`${workDir}/diagnosis/docs-diagnosis.json`, JSON.stringify(result, null, 2));

  return {
    stateUpdates: {
      'diagnosis.docs': result,
      issues: [...state.issues, ...issues]
    },
    outputFiles: [`${workDir}/diagnosis/docs-diagnosis.json`],
    summary: `Docs diagnosis: ${issues.length} issues found (severity: ${severity})`
  };
}

function generateRecommendations(issues) {
  const recommendations = [];

  if (issues.some(i => i.type === 'doc_redundancy')) {
    recommendations.push('Use the consolidate_to_ssot strategy to merge duplicate definitions');
    recommendations.push('Consider creating specs/category-mappings.json to centralize configuration');
  }

  if (issues.some(i => i.type === 'doc_conflict')) {
    recommendations.push('Use the reconcile_conflicting_definitions strategy to resolve conflicts');
    recommendations.push('Establish a documentation sync check mechanism');
  }

  return recommendations;
}
```
## Output

### State Updates

```javascript
{
  stateUpdates: {
    'diagnosis.docs': {
      status: 'completed',
      issues_found: N,
      severity: 'critical|high|medium|low|none',
      redundancies: [...],
      conflicts: [...]
    },
    issues: [...existingIssues, ...newIssues]
  }
}
```

### Output Files

- `${workDir}/diagnosis/docs-diagnosis.json` - full diagnosis result

## Error Handling

| Error | Recovery |
|-------|----------|
| File read failure | Log warning, continue with remaining files |
| Regex match timeout | Skip the pattern, record as skipped |
| JSON parse failure | Skip config comparison, run pattern detection only |

## Next Actions

- If critical issues found → go to action-propose-fixes first
- If no issues → continue to the next diagnosis or action-generate-report
# Action: Diagnose Long-tail Forgetting

Analyze the target skill for long-tail effects and constraint forgetting.

## Purpose

- Detect loss of early instructions in long execution chains
- Identify missing constraint propagation mechanisms
- Find weak goal alignment between phases
- Measure instruction retention across phases

## Preconditions

- [ ] state.status === 'running'
- [ ] state.target_skill.path is set
- [ ] 'memory' in state.focus_areas OR state.focus_areas is empty

## Detection Patterns

### Pattern 1: Missing Constraint References

```regex
# Phases that don't reference original requirements
# Look for absence of: requirements, constraints, original, initial, user_request
```

### Pattern 2: Goal Drift

```regex
# Later phases focus on the immediate task without global context
/\[TASK\][^[]*(?!\[CONSTRAINTS\]|\[REQUIREMENTS\])/
```

### Pattern 3: No Checkpoint Mechanism

```regex
# Absence of state preservation at key points
# Look for lack of: checkpoint, snapshot, preserve, restore
```
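What such a checkpoint mechanism could look like, stripped to its core: an in-memory sketch with names of our choosing (a real action would `Write()` the snapshot under `workDir` instead).

```javascript
// Save deep copies of state at key points so a later phase can roll back.
function makeCheckpointStore() {
  const snapshots = {};
  return {
    save(label, state) {
      // Deep copy so later mutations of `state` don't alter the snapshot.
      snapshots[label] = JSON.parse(JSON.stringify(state));
    },
    restore(label) {
      return snapshots[label];
    }
  };
}

const store = makeCheckpointStore();
const state = { phase: 3, constraints: ['no breaking changes'] };
store.save('before-phase-3', state);
state.constraints = []; // simulated constraint loss in a later phase
console.log(store.restore('before-phase-3').constraints); // ['no breaking changes']
```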
### Pattern 4: Implicit State Passing

```regex
# State passed implicitly through conversation rather than explicitly
/(?<!state\.)context\./
```
## Execution

```javascript
async function execute(state, workDir) {
  const skillPath = state.target_skill.path;
  const startTime = Date.now();
  const issues = [];
  const evidence = [];

  console.log(`Diagnosing long-tail forgetting in ${skillPath}...`);

  // 1. Analyze phase chain for constraint propagation
  const phaseFiles = Glob(`${skillPath}/phases/*.md`)
    .filter(f => !f.includes('orchestrator') && !f.includes('state-schema'))
    .sort();

  // Extract phase order (for sequential) or action dependencies (for autonomous)
  const isAutonomous = state.target_skill.execution_mode === 'autonomous';

  // 2. Check each phase for constraint awareness
  let firstPhaseConstraints = [];

  for (let i = 0; i < phaseFiles.length; i++) {
    const file = phaseFiles[i];
    const content = Read(file);
    const relativePath = file.replace(skillPath + '/', '');
    const phaseNum = i + 1;

    // Extract constraints from first phase
    if (i === 0) {
      const constraintMatch = content.match(/\[CONSTRAINTS?\]([^[]*)/i);
      if (constraintMatch) {
        firstPhaseConstraints = constraintMatch[1]
          .split('\n')
          .filter(l => l.trim().startsWith('-'))
          .map(l => l.trim().replace(/^-\s*/, ''));
      }
    }

    // Check if later phases reference original constraints
    if (i > 0 && firstPhaseConstraints.length > 0) {
      const mentionsConstraints = firstPhaseConstraints.some(c =>
        content.toLowerCase().includes(c.toLowerCase().slice(0, 20))
      );

      if (!mentionsConstraints) {
        issues.push({
          id: `MEM-${issues.length + 1}`,
          type: 'memory_loss',
          severity: 'high',
          location: { file: relativePath, phase: `Phase ${phaseNum}` },
          description: `Phase ${phaseNum} does not reference original constraints`,
          evidence: [`Original constraints: ${firstPhaseConstraints.slice(0, 3).join(', ')}`],
          root_cause: 'Constraint information not propagated to later phases',
          impact: 'May produce output violating original requirements',
          suggested_fix: 'Add explicit constraint injection or reference to state.original_constraints'
        });
        evidence.push({
          file: relativePath,
          pattern: 'missing_constraint_reference',
          context: `Phase ${phaseNum} of ${phaseFiles.length}`,
          severity: 'high'
        });
      }
    }

    // Check for goal drift - task without constraints
    const hasTask = /\[TASK\]/i.test(content);
    const hasConstraints = /\[CONSTRAINTS?\]|\[REQUIREMENTS?\]|\[RULES?\]/i.test(content);

    if (hasTask && !hasConstraints && i > 1) {
      issues.push({
        id: `MEM-${issues.length + 1}`,
        type: 'memory_loss',
        severity: 'medium',
        location: { file: relativePath },
        description: 'Phase has TASK but no CONSTRAINTS/RULES section',
        evidence: ['Task defined without boundary constraints'],
        root_cause: 'Agent may not adhere to global constraints',
        impact: 'Potential goal drift from original intent',
        suggested_fix: 'Add [CONSTRAINTS] section referencing global rules'
      });
    }

    // Check for checkpoint mechanism
    const hasCheckpoint = /checkpoint|snapshot|preserve|savepoint/i.test(content);
    const isKeyPhase = i === Math.floor(phaseFiles.length / 2) || i === phaseFiles.length - 1;

    if (isKeyPhase && !hasCheckpoint && phaseFiles.length > 3) {
      issues.push({
        id: `MEM-${issues.length + 1}`,
        type: 'memory_loss',
        severity: 'low',
        location: { file: relativePath },
        description: 'Key phase without checkpoint mechanism',
        evidence: [`Phase ${phaseNum} is a key milestone but has no state preservation`],
        root_cause: 'Cannot recover from failures or verify constraint adherence',
        impact: 'No rollback capability if constraints violated',
        suggested_fix: 'Add checkpoint before major state changes'
      });
    }
  }

  // 3. Check for explicit state schema with constraints field
  const stateSchemaFile = Glob(`${skillPath}/phases/state-schema.md`)[0];
  if (stateSchemaFile) {
    const schemaContent = Read(stateSchemaFile);
    const hasConstraintsField = /constraints|requirements|original_request/i.test(schemaContent);

    if (!hasConstraintsField) {
      issues.push({
        id: `MEM-${issues.length + 1}`,
        type: 'memory_loss',
        severity: 'medium',
        location: { file: 'phases/state-schema.md' },
        description: 'State schema lacks constraints/requirements field',
        evidence: ['No dedicated field for preserving original requirements'],
        root_cause: 'State structure does not support constraint persistence',
        impact: 'Constraints may be lost during state transitions',
        suggested_fix: 'Add original_requirements field to state schema'
      });
    }
  }

  // 4. Check SKILL.md for constraint enforcement in execution flow
  const skillMd = Read(`${skillPath}/SKILL.md`);
  const hasConstraintVerification = /constraint.*verif|verif.*constraint|quality.*gate/i.test(skillMd);

  if (!hasConstraintVerification && phaseFiles.length > 3) {
    issues.push({
      id: `MEM-${issues.length + 1}`,
      type: 'memory_loss',
      severity: 'medium',
      location: { file: 'SKILL.md' },
      description: 'No constraint verification step in execution flow',
      evidence: ['Execution flow lacks quality gate or constraint check'],
      root_cause: 'No mechanism to verify output matches original intent',
      impact: 'Constraint violations may go undetected',
      suggested_fix: 'Add verification phase comparing output to original requirements'
    });
  }

  // 5. Calculate severity
  const criticalCount = issues.filter(i => i.severity === 'critical').length;
  const highCount = issues.filter(i => i.severity === 'high').length;
  const severity = criticalCount > 0 ? 'critical' :
                   highCount > 2 ? 'high' :
                   highCount > 0 ? 'medium' :
                   issues.length > 0 ? 'low' : 'none';

  // 6. Write diagnosis result
  const diagnosisResult = {
    status: 'completed',
    issues_found: issues.length,
    severity: severity,
    execution_time_ms: Date.now() - startTime,
    details: {
      patterns_checked: [
        'constraint_propagation',
        'goal_drift',
        'checkpoint_mechanism',
        'state_schema_constraints'
      ],
      patterns_matched: evidence.map(e => e.pattern),
      evidence: evidence,
      phase_analysis: {
        total_phases: phaseFiles.length,
        first_phase_constraints: firstPhaseConstraints.length,
        phases_with_constraint_ref: phaseFiles.length - issues.filter(i =>
          i.description.includes('does not reference')).length
      },
      recommendations: [
        highCount > 0 ? 'Implement constraint injection at each phase' : null,
        issues.some(i => i.description.includes('checkpoint'))
          ? 'Add checkpoint/restore mechanism' : null,
        issues.some(i => i.description.includes('State schema'))
          ? 'Add original_requirements to state schema' : null
      ].filter(Boolean)
    }
  };

  Write(`${workDir}/diagnosis/memory-diagnosis.json`,
    JSON.stringify(diagnosisResult, null, 2));

  return {
    stateUpdates: {
      'diagnosis.memory': diagnosisResult,
      issues: [...state.issues, ...issues]
    },
    outputFiles: [`${workDir}/diagnosis/memory-diagnosis.json`],
    summary: `Memory diagnosis: ${issues.length} issues found (severity: ${severity})`
  };
}
```
## State Updates

```javascript
return {
  stateUpdates: {
    'diagnosis.memory': {
      status: 'completed',
      issues_found: <count>,
      severity: '<critical|high|medium|low|none>',
      // ... full diagnosis result
    },
    issues: [...existingIssues, ...newIssues]
  }
};
```

## Error Handling

| Error Type | Recovery |
|------------|----------|
| Phase file read error | Skip file, continue analysis |
| No phases found | Report as structure issue |

## Next Actions

- Success: action-diagnose-dataflow (or next in focus_areas)
- Skipped: If 'memory' not in focus_areas
# Action: Diagnose Token Consumption

Analyze the target skill for token consumption inefficiencies and output optimization opportunities.

## Purpose

Detect patterns that cause excessive token usage:
- Verbose prompts without compression
- Large state objects with unnecessary fields
- Full content passing instead of references
- Unbounded arrays without sliding windows
- Redundant file I/O (write-then-read patterns)

## Detection Patterns

| Pattern ID | Name | Detection Logic | Severity |
|------------|------|-----------------|----------|
| TKN-001 | Verbose Prompts | Prompt files > 4KB or high static/variable ratio | medium |
| TKN-002 | Excessive State Fields | State schema > 15 top-level keys | medium |
| TKN-003 | Full Content Passing | `Read()` result embedded directly in prompt | high |
| TKN-004 | Unbounded Arrays | `.push`/`.concat` without `.slice(-N)` | high |
| TKN-005 | Redundant Write→Read | `Write(file)` followed by `Read(file)` | medium |
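The `.slice(-N)` fix for TKN-004 amounts to capping the array on every append; a minimal sketch (`appendBounded` is an illustrative name):

```javascript
// Keep only the most recent maxLen entries after each append.
function appendBounded(history, entry, maxLen = 20) {
  return [...history, entry].slice(-maxLen);
}

let history = [];
for (let i = 0; i < 100; i++) {
  history = appendBounded(history, { turn: i });
}
console.log(history.length);  // 20
console.log(history[0].turn); // 80
```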
## Execution Steps

```javascript
async function diagnoseTokenConsumption(state, workDir) {
  const issues = [];
  const evidence = [];
  const skillPath = state.target_skill.path;

  // 1. Scan for verbose prompts (TKN-001)
  const mdFiles = Glob(`${skillPath}/**/*.md`);
  for (const file of mdFiles) {
    const content = Read(file);
    if (content.length > 4000) {
      evidence.push({
        file: file,
        pattern: 'TKN-001',
        severity: 'medium',
        context: `File size: ${content.length} chars (threshold: 4000)`
      });
    }
  }

  // 2. Check state schema field count (TKN-002)
  const stateSchema = Glob(`${skillPath}/**/state-schema.md`)[0];
  if (stateSchema) {
    const schemaContent = Read(stateSchema);
    const fieldMatches = schemaContent.match(/^\s*\w+:/gm) || [];
    if (fieldMatches.length > 15) {
      evidence.push({
        file: stateSchema,
        pattern: 'TKN-002',
        severity: 'medium',
        context: `State has ${fieldMatches.length} fields (threshold: 15)`
      });
    }
  }

  // 3. Detect full content passing (TKN-003)
  const fullContentPattern = /Read\([^)]+\)\s*[\+,]|`\$\{.*Read\(/g;
  for (const file of mdFiles) {
    const content = Read(file);
    const matches = content.match(fullContentPattern);
    if (matches) {
      evidence.push({
        file: file,
        pattern: 'TKN-003',
        severity: 'high',
        context: `Full content passing detected: ${matches[0]}`
      });
    }
  }

  // 4. Detect unbounded arrays (TKN-004)
  const unboundedPattern = /\.(push|concat)\([^)]+\)(?!.*\.slice)/g;
  for (const file of mdFiles) {
    const content = Read(file);
    const matches = content.match(unboundedPattern);
    if (matches) {
      evidence.push({
        file: file,
        pattern: 'TKN-004',
        severity: 'high',
        context: `Unbounded array growth: ${matches[0]}`
      });
    }
  }

  // 5. Detect write-then-read patterns (TKN-005)
  const writeReadPattern = /Write\([^)]+\)[\s\S]{0,100}Read\([^)]+\)/g;
  for (const file of mdFiles) {
    const content = Read(file);
    const matches = content.match(writeReadPattern);
    if (matches) {
      evidence.push({
        file: file,
        pattern: 'TKN-005',
        severity: 'medium',
        context: 'Write-then-read pattern detected'
      });
    }
  }

  // Calculate severity
  const highCount = evidence.filter(e => e.severity === 'high').length;
|
||||
const mediumCount = evidence.filter(e => e.severity === 'medium').length;
|
||||
|
||||
let severity = 'none';
|
||||
if (highCount > 0) severity = 'high';
|
||||
else if (mediumCount > 2) severity = 'medium';
|
||||
else if (mediumCount > 0) severity = 'low';
|
||||
|
||||
return {
|
||||
status: 'completed',
|
||||
issues_found: evidence.length,
|
||||
severity: severity,
|
||||
execution_time_ms: Date.now() - startTime,
|
||||
details: {
|
||||
patterns_checked: ['TKN-001', 'TKN-002', 'TKN-003', 'TKN-004', 'TKN-005'],
|
||||
patterns_matched: [...new Set(evidence.map(e => e.pattern))],
|
||||
evidence: evidence,
|
||||
recommendations: generateRecommendations(evidence)
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
function generateRecommendations(evidence) {
|
||||
const recs = [];
|
||||
const patterns = [...new Set(evidence.map(e => e.pattern))];
|
||||
|
||||
if (patterns.includes('TKN-001')) {
|
||||
recs.push('Apply prompt_compression: Extract static instructions to templates, use placeholders');
|
||||
}
|
||||
if (patterns.includes('TKN-002')) {
|
||||
recs.push('Apply state_field_reduction: Remove debug/cache fields, consolidate related fields');
|
||||
}
|
||||
if (patterns.includes('TKN-003')) {
|
||||
recs.push('Apply lazy_loading: Pass file paths instead of content, let agents read if needed');
|
||||
}
|
||||
if (patterns.includes('TKN-004')) {
|
||||
recs.push('Apply sliding_window: Add .slice(-N) to array operations to bound growth');
|
||||
}
|
||||
if (patterns.includes('TKN-005')) {
|
||||
recs.push('Apply output_minimization: Use in-memory data passing, eliminate temporary files');
|
||||
}
|
||||
|
||||
return recs;
|
||||
}
|
||||
```
|
||||
|
||||
## Output
|
||||
|
||||
Write diagnosis result to `${workDir}/diagnosis/token-consumption-diagnosis.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"status": "completed",
|
||||
"issues_found": 3,
|
||||
"severity": "medium",
|
||||
"execution_time_ms": 1500,
|
||||
"details": {
|
||||
"patterns_checked": ["TKN-001", "TKN-002", "TKN-003", "TKN-004", "TKN-005"],
|
||||
"patterns_matched": ["TKN-001", "TKN-003"],
|
||||
"evidence": [
|
||||
{
|
||||
"file": "phases/orchestrator.md",
|
||||
"pattern": "TKN-001",
|
||||
"severity": "medium",
|
||||
"context": "File size: 5200 chars (threshold: 4000)"
|
||||
}
|
||||
],
|
||||
"recommendations": [
|
||||
"Apply prompt_compression: Extract static instructions to templates"
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## State Update
|
||||
|
||||
```javascript
|
||||
updateState({
|
||||
diagnosis: {
|
||||
...state.diagnosis,
|
||||
token_consumption: diagnosisResult
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
## Fix Strategies Mapping
|
||||
|
||||
| Pattern | Strategy | Implementation |
|
||||
|---------|----------|----------------|
|
||||
| TKN-001 | prompt_compression | Extract static text to variables, use template inheritance |
|
||||
| TKN-002 | state_field_reduction | Audit and consolidate fields, remove non-essential data |
|
||||
| TKN-003 | lazy_loading | Pass paths instead of content, agents load when needed |
|
||||
| TKN-004 | sliding_window | Add `.slice(-N)` after push/concat operations |
|
||||
| TKN-005 | output_minimization | Use return values instead of file relay |
|
||||
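The sliding_window strategy for TKN-004 can be sketched in isolation. This is a minimal illustration; `MAX_HISTORY` and `appendWithWindow` are hypothetical names chosen for the example, not part of the skill's schema:

```javascript
// Minimal sketch of the sliding_window fix for TKN-004:
// push, then immediately bound the array so it never grows past MAX_HISTORY.
const MAX_HISTORY = 5; // assumed tuning knob

function appendWithWindow(history, entry) {
  return [...history, entry].slice(-MAX_HISTORY);
}

let history = [];
for (let turn = 1; turn <= 8; turn++) {
  history = appendWithWindow(history, `turn-${turn}`);
}
console.log(history.length); // stays at 5
console.log(history[0]);     // "turn-4" — oldest retained turn
```

Because the bound is applied on every append, context growth is capped regardless of how many iterations the skill runs.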
@@ -0,0 +1,322 @@
# Action: Gemini Analysis

Dynamically invoke the Gemini CLI for deep analysis, selecting the analysis type from the user's request or from diagnosis results.

## Role

- Receive the user-specified analysis request, or infer it from diagnosis results
- Build the appropriate CLI command
- Execute the analysis and parse the results
- Update state for subsequent actions

## Preconditions

- `state.status === 'running'`
- At least one of the following holds:
  - `state.gemini_analysis_requested === true` (user requested)
  - `state.issues.some(i => i.severity === 'critical')` (critical issue found)
  - `state.analysis_type !== null` (analysis type already specified)

## Analysis Types

### 1. root_cause - Root Cause Analysis

Deep analysis of the issue the user described.

```javascript
const analysisPrompt = `
PURPOSE: Identify root cause of skill execution issue: ${state.user_issue_description}
TASK:
• Analyze skill structure at: ${state.target_skill.path}
• Identify anti-patterns in phase files
• Trace data flow through state management
• Check agent coordination patterns
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON with structure:
{
  "root_causes": [
    { "id": "RC-001", "description": "...", "severity": "high", "evidence": ["file:line"] }
  ],
  "patterns_found": [
    { "pattern": "...", "type": "anti-pattern|best-practice", "locations": [] }
  ],
  "recommendations": [
    { "priority": 1, "action": "...", "rationale": "..." }
  ]
}
RULES: Focus on execution flow, state management, agent coordination
`;
```

### 2. architecture - Architecture Review

Evaluate the skill's overall architectural design.

```javascript
const analysisPrompt = `
PURPOSE: Review skill architecture for: ${state.target_skill.name}
TASK:
• Evaluate phase decomposition and responsibility separation
• Check state schema design and data flow
• Assess agent coordination and error handling
• Review scalability and maintainability
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: Markdown report with sections:
- Executive Summary
- Phase Architecture Assessment
- State Management Evaluation
- Agent Coordination Analysis
- Improvement Recommendations (prioritized)
RULES: Focus on modularity, extensibility, maintainability
`;
```

### 3. prompt_optimization - Prompt Optimization

Analyze and optimize the prompts in each phase.

```javascript
const analysisPrompt = `
PURPOSE: Optimize prompts in skill phases for better output quality
TASK:
• Analyze existing prompts for clarity and specificity
• Identify ambiguous instructions
• Check output format specifications
• Evaluate constraint communication
MODE: analysis
CONTEXT: @phases/**/*.md
EXPECTED: JSON with structure:
{
  "prompt_issues": [
    { "file": "...", "issue": "...", "severity": "...", "suggestion": "..." }
  ],
  "optimized_prompts": [
    { "file": "...", "original": "...", "optimized": "...", "rationale": "..." }
  ]
}
RULES: Preserve intent, improve clarity, add structured output requirements
`;
```

### 4. performance - Performance Analysis

Analyze token consumption and execution efficiency.

```javascript
const analysisPrompt = `
PURPOSE: Analyze performance bottlenecks in skill execution
TASK:
• Estimate token consumption per phase
• Identify redundant data passing
• Check for unnecessary full-content transfers
• Evaluate caching opportunities
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON with structure:
{
  "token_estimates": [
    { "phase": "...", "estimated_tokens": 1000, "breakdown": {} }
  ],
  "bottlenecks": [
    { "type": "...", "location": "...", "impact": "high|medium|low", "fix": "..." }
  ],
  "optimization_suggestions": []
}
RULES: Focus on token efficiency, reduce redundancy
`;
```

### 5. custom - Custom Analysis

A user-specified custom analysis request.

```javascript
const analysisPrompt = `
PURPOSE: ${state.custom_analysis_purpose}
TASK: ${state.custom_analysis_tasks}
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: ${state.custom_analysis_expected}
RULES: ${state.custom_analysis_rules || 'Follow best practices'}
`;
```

## Execution

```javascript
async function executeGeminiAnalysis(state, workDir) {
  // 1. Determine the analysis type
  const analysisType = state.analysis_type || determineAnalysisType(state);

  // 2. Build the prompt
  const prompt = buildAnalysisPrompt(analysisType, state);

  // 3. Build the CLI command
  const cliCommand = `ccw cli -p "${escapeForShell(prompt)}" --tool gemini --mode analysis --cd "${state.target_skill.path}"`;

  console.log(`Executing Gemini analysis: ${analysisType}`);
  console.log(`Command: ${cliCommand}`);

  // 4. Run the CLI (in the background)
  const result = Bash({
    command: cliCommand,
    run_in_background: true,
    timeout: 300000 // 5 minutes
  });

  // 5. Await results
  // Note: per the CLAUDE.md guidance, stop polling once the CLI is running in
  // the background; the result is written into state when the CLI completes.

  return {
    stateUpdates: {
      gemini_analysis: {
        type: analysisType,
        status: 'running',
        started_at: new Date().toISOString(),
        task_id: result.task_id
      }
    },
    outputFiles: [],
    summary: `Gemini ${analysisType} analysis started in background`
  };
}

function determineAnalysisType(state) {
  // Infer the analysis type from state
  if (state.user_issue_description && state.user_issue_description.length > 100) {
    return 'root_cause';
  }
  if (state.issues.some(i => i.severity === 'critical')) {
    return 'root_cause';
  }
  if (state.focus_areas.includes('architecture')) {
    return 'architecture';
  }
  if (state.focus_areas.includes('prompt')) {
    return 'prompt_optimization';
  }
  if (state.focus_areas.includes('performance')) {
    return 'performance';
  }
  return 'root_cause'; // default
}

function buildAnalysisPrompt(type, state) {
  const templates = {
    root_cause: () => `
PURPOSE: Identify root cause of skill execution issue: ${state.user_issue_description}
TASK: • Analyze skill structure • Identify anti-patterns • Trace data flow issues • Check agent coordination
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON { root_causes: [], patterns_found: [], recommendations: [] }
RULES: Focus on execution flow, be specific about file:line locations
`,
    architecture: () => `
PURPOSE: Review skill architecture for ${state.target_skill.name}
TASK: • Evaluate phase decomposition • Check state design • Assess agent coordination • Review extensibility
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: Markdown architecture assessment report
RULES: Focus on modularity and maintainability
`,
    prompt_optimization: () => `
PURPOSE: Optimize prompts in skill for better output quality
TASK: • Analyze prompt clarity • Check output specifications • Evaluate constraint handling
MODE: analysis
CONTEXT: @phases/**/*.md
EXPECTED: JSON { prompt_issues: [], optimized_prompts: [] }
RULES: Preserve intent, improve clarity
`,
    performance: () => `
PURPOSE: Analyze performance bottlenecks in skill
TASK: • Estimate token consumption • Identify redundancy • Check data transfer efficiency
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON { token_estimates: [], bottlenecks: [], optimization_suggestions: [] }
RULES: Focus on token efficiency
`,
    custom: () => `
PURPOSE: ${state.custom_analysis_purpose}
TASK: ${state.custom_analysis_tasks}
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: ${state.custom_analysis_expected}
RULES: ${state.custom_analysis_rules || 'Best practices'}
`
  };

  return templates[type]();
}

function escapeForShell(str) {
  // Escape shell-special characters
  return str.replace(/"/g, '\\"').replace(/\$/g, '\\$').replace(/`/g, '\\`');
}
```
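The escaping above neutralizes the characters that stay special inside a double-quoted shell argument. A quick sanity check of the same function (note the limitation: it does not escape backslashes or newlines, so prompts containing those may still need care):

```javascript
// Same escaping as escapeForShell above: neutralize ", $ and ` for use inside
// a double-quoted shell argument. Backslashes and newlines are NOT handled.
function escapeForShell(str) {
  return str.replace(/"/g, '\\"').replace(/\$/g, '\\$').replace(/`/g, '\\`');
}

const prompt = 'PURPOSE: fix "bug" in ${state} via `cmd`';
console.log(escapeForShell(prompt));
// PURPOSE: fix \"bug\" in \${state} via \`cmd\`
```

Single-quoting the argument (and escaping only embedded single quotes) would be a more robust alternative, but the sketch mirrors the double-quoted form the command builder uses.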

## Output

### State Updates

```javascript
{
  gemini_analysis: {
    type: 'root_cause' | 'architecture' | 'prompt_optimization' | 'performance' | 'custom',
    status: 'running' | 'completed' | 'failed',
    started_at: '2024-01-01T00:00:00Z',
    completed_at: '2024-01-01T00:05:00Z',
    task_id: 'xxx',
    result: { /* analysis result */ },
    error: null
  },
  // Analysis results are merged into issues
  issues: [
    ...state.issues,
    ...newIssuesFromAnalysis
  ]
}
```

### Output Files

- `${workDir}/diagnosis/gemini-analysis-${type}.json` - raw analysis result
- `${workDir}/diagnosis/gemini-analysis-${type}.md` - formatted report

## Post-Execution

After the analysis completes:

1. Parse the CLI output into structured data
2. Extract newly found issues and merge them into state.issues
3. Update recommendations in state
4. Trigger the next action (usually action-generate-report or action-propose-fixes)
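Steps 1-3 of the post-execution merge can be sketched as below. The result shape follows the EXPECTED JSON of the root_cause template; the `GEM-` id prefix and the `gemini_finding` type are hypothetical names for illustration, not defined by the skill:

```javascript
// Sketch of post-execution steps 1-3: parse the CLI result and merge the
// newly found issues into state.issues. Shapes follow the EXPECTED JSON above;
// id prefix and issue type are illustrative assumptions.
function mergeAnalysisResult(state, rawJson) {
  const result = JSON.parse(rawJson);
  const newIssues = (result.root_causes || []).map((rc, i) => ({
    id: `GEM-${i + 1}`,
    type: 'gemini_finding',
    severity: rc.severity,
    description: rc.description,
    evidence: rc.evidence || []
  }));
  return {
    ...state,
    issues: [...state.issues, ...newIssues],
    recommendations: result.recommendations || []
  };
}

const state = { issues: [] };
const raw = JSON.stringify({
  root_causes: [{ description: 'unbounded history', severity: 'high', evidence: ['orchestrator.md:12'] }],
  recommendations: [{ priority: 1, action: 'add sliding window' }]
});
const next = mergeAnalysisResult(state, raw);
console.log(next.issues.length); // 1
```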

## Error Handling

| Error | Recovery |
|-------|----------|
| CLI timeout | Retry once; if it still fails, skip the Gemini analysis |
| Parse failure | Save the raw output for manual handling |
| No result | Mark as skipped, continue the flow |

## User Interaction

If `state.analysis_type === null` and the type cannot be inferred, ask the user:

```javascript
AskUserQuestion({
  questions: [{
    question: 'Select a Gemini analysis type',
    header: 'Analysis Type',
    options: [
      { label: 'Root cause analysis', description: 'Deep analysis of the issue the user described' },
      { label: 'Architecture review', description: 'Evaluate the overall architectural design' },
      { label: 'Prompt optimization', description: 'Analyze and optimize phase prompts' },
      { label: 'Performance analysis', description: 'Analyze token consumption and execution efficiency' }
    ],
    multiSelect: false
  }]
});
```
@@ -0,0 +1,228 @@
# Action: Generate Consolidated Report

Generate a comprehensive tuning report that merges all diagnosis results with prioritized recommendations.

## Purpose

- Merge all diagnosis results into a unified report
- Prioritize issues by severity and impact
- Generate actionable recommendations
- Create a human-readable markdown report

## Preconditions

- [ ] state.status === 'running'
- [ ] All diagnoses in focus_areas are completed
- [ ] state.issues.length > 0, OR generate a summary report

## Execution

```javascript
async function execute(state, workDir) {
  console.log('Generating consolidated tuning report...');

  const targetSkill = state.target_skill;
  const issues = state.issues;

  // 1. Group issues by type
  const issuesByType = {
    context_explosion: issues.filter(i => i.type === 'context_explosion'),
    memory_loss: issues.filter(i => i.type === 'memory_loss'),
    dataflow_break: issues.filter(i => i.type === 'dataflow_break'),
    agent_failure: issues.filter(i => i.type === 'agent_failure')
  };

  // 2. Group issues by severity
  const issuesBySeverity = {
    critical: issues.filter(i => i.severity === 'critical'),
    high: issues.filter(i => i.severity === 'high'),
    medium: issues.filter(i => i.severity === 'medium'),
    low: issues.filter(i => i.severity === 'low')
  };

  // 3. Calculate the overall health score
  const weights = { critical: 25, high: 15, medium: 5, low: 1 };
  const deductions = Object.entries(issuesBySeverity)
    .reduce((sum, [sev, arr]) => sum + arr.length * weights[sev], 0);
  const healthScore = Math.max(0, 100 - deductions);

  // 4. Generate report content
  const report = `# Skill Tuning Report

**Target Skill**: ${targetSkill.name}
**Path**: ${targetSkill.path}
**Execution Mode**: ${targetSkill.execution_mode}
**Generated**: ${new Date().toISOString()}

---

## Executive Summary

| Metric | Value |
|--------|-------|
| Health Score | ${healthScore}/100 |
| Total Issues | ${issues.length} |
| Critical | ${issuesBySeverity.critical.length} |
| High | ${issuesBySeverity.high.length} |
| Medium | ${issuesBySeverity.medium.length} |
| Low | ${issuesBySeverity.low.length} |

### User Reported Issue
> ${state.user_issue_description}

### Overall Assessment
${healthScore >= 80 ? '✅ Skill is in good health with minor issues.' :
  healthScore >= 60 ? '⚠️ Skill has significant issues requiring attention.' :
  healthScore >= 40 ? '🔶 Skill has serious issues affecting reliability.' :
  '❌ Skill has critical issues requiring immediate fixes.'}

---

## Diagnosis Results

### Context Explosion Analysis
${state.diagnosis.context ?
  `- **Status**: ${state.diagnosis.context.status}
- **Severity**: ${state.diagnosis.context.severity}
- **Issues Found**: ${state.diagnosis.context.issues_found}
- **Key Findings**: ${state.diagnosis.context.details.recommendations.join('; ') || 'None'}` :
  '_Not analyzed_'}

### Long-tail Memory Analysis
${state.diagnosis.memory ?
  `- **Status**: ${state.diagnosis.memory.status}
- **Severity**: ${state.diagnosis.memory.severity}
- **Issues Found**: ${state.diagnosis.memory.issues_found}
- **Key Findings**: ${state.diagnosis.memory.details.recommendations.join('; ') || 'None'}` :
  '_Not analyzed_'}

### Data Flow Analysis
${state.diagnosis.dataflow ?
  `- **Status**: ${state.diagnosis.dataflow.status}
- **Severity**: ${state.diagnosis.dataflow.severity}
- **Issues Found**: ${state.diagnosis.dataflow.issues_found}
- **Key Findings**: ${state.diagnosis.dataflow.details.recommendations.join('; ') || 'None'}` :
  '_Not analyzed_'}

### Agent Coordination Analysis
${state.diagnosis.agent ?
  `- **Status**: ${state.diagnosis.agent.status}
- **Severity**: ${state.diagnosis.agent.severity}
- **Issues Found**: ${state.diagnosis.agent.issues_found}
- **Key Findings**: ${state.diagnosis.agent.details.recommendations.join('; ') || 'None'}` :
  '_Not analyzed_'}

---

## Critical & High Priority Issues

${issuesBySeverity.critical.length + issuesBySeverity.high.length === 0 ?
  '_No critical or high priority issues found._' :
  [...issuesBySeverity.critical, ...issuesBySeverity.high].map((issue, i) => `
### ${i + 1}. [${issue.severity.toUpperCase()}] ${issue.description}

- **ID**: ${issue.id}
- **Type**: ${issue.type}
- **Location**: ${typeof issue.location === 'object' ? issue.location.file : issue.location}
- **Root Cause**: ${issue.root_cause}
- **Impact**: ${issue.impact}
- **Suggested Fix**: ${issue.suggested_fix}

**Evidence**:
${issue.evidence.map(e => `- \`${e}\``).join('\n')}
`).join('\n')}

---

## Medium & Low Priority Issues

${issuesBySeverity.medium.length + issuesBySeverity.low.length === 0 ?
  '_No medium or low priority issues found._' :
  [...issuesBySeverity.medium, ...issuesBySeverity.low].map((issue, i) => `
### ${i + 1}. [${issue.severity.toUpperCase()}] ${issue.description}

- **ID**: ${issue.id}
- **Type**: ${issue.type}
- **Suggested Fix**: ${issue.suggested_fix}
`).join('\n')}

---

## Recommended Fix Order

Based on severity and dependencies, apply fixes in this order:

${[...issuesBySeverity.critical, ...issuesBySeverity.high, ...issuesBySeverity.medium]
  .slice(0, 10)
  .map((issue, i) => `${i + 1}. **${issue.id}**: ${issue.suggested_fix}`)
  .join('\n')}

---

## Quality Gates

| Gate | Threshold | Current | Status |
|------|-----------|---------|--------|
| Critical Issues | 0 | ${issuesBySeverity.critical.length} | ${issuesBySeverity.critical.length === 0 ? '✅ PASS' : '❌ FAIL'} |
| High Issues | ≤ 2 | ${issuesBySeverity.high.length} | ${issuesBySeverity.high.length <= 2 ? '✅ PASS' : '❌ FAIL'} |
| Health Score | ≥ 60 | ${healthScore} | ${healthScore >= 60 ? '✅ PASS' : '❌ FAIL'} |

**Overall Quality Gate**: ${
  issuesBySeverity.critical.length === 0 &&
  issuesBySeverity.high.length <= 2 &&
  healthScore >= 60 ? '✅ PASS' : '❌ FAIL'}

---

*Report generated by skill-tuning*
`;

  // 5. Write the report
  Write(`${workDir}/tuning-report.md`, report);

  // 6. Calculate the quality gate
  const qualityGate = issuesBySeverity.critical.length === 0 &&
    issuesBySeverity.high.length <= 2 &&
    healthScore >= 60 ? 'pass' :
    healthScore >= 40 ? 'review' : 'fail';

  return {
    stateUpdates: {
      quality_score: healthScore,
      quality_gate: qualityGate,
      issues_by_severity: {
        critical: issuesBySeverity.critical.length,
        high: issuesBySeverity.high.length,
        medium: issuesBySeverity.medium.length,
        low: issuesBySeverity.low.length
      }
    },
    outputFiles: [`${workDir}/tuning-report.md`],
    summary: `Report generated: ${issues.length} issues, health score ${healthScore}/100, gate: ${qualityGate}`
  };
}
```
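The health-score and gate logic embedded in the report generator can be exercised on its own. The weights and thresholds below are the ones used in the code; treat them as tunable defaults:

```javascript
// Health score and quality gate as computed in the report generator:
// weighted deductions per severity, floored at 0.
const WEIGHTS = { critical: 25, high: 15, medium: 5, low: 1 };

function healthScore(counts) {
  const deductions = Object.entries(counts)
    .reduce((sum, [sev, n]) => sum + n * (WEIGHTS[sev] || 0), 0);
  return Math.max(0, 100 - deductions);
}

function qualityGate(counts) {
  const score = healthScore(counts);
  if (counts.critical === 0 && counts.high <= 2 && score >= 60) return 'pass';
  return score >= 40 ? 'review' : 'fail';
}

console.log(healthScore({ critical: 0, high: 1, medium: 2, low: 3 })); // 100 - 15 - 10 - 3 = 72
console.log(qualityGate({ critical: 0, high: 1, medium: 2, low: 3 })); // "pass"
console.log(qualityGate({ critical: 2, high: 0, medium: 0, low: 0 })); // score 50 → "review"
```

Note that a single critical issue blocks 'pass' even when the score clears 60, because the gate checks the critical count independently of the score.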

## State Updates

```javascript
return {
  stateUpdates: {
    quality_score: <0-100>,
    quality_gate: '<pass|review|fail>',
    issues_by_severity: { critical: N, high: N, medium: N, low: N }
  }
};
```

## Error Handling

| Error Type | Recovery |
|------------|----------|
| Write error | Retry to an alternative path |
| Empty issues | Generate a summary with no issues |

## Next Actions

- If issues.length > 0: action-propose-fixes
- If issues.length === 0: action-complete

149  .claude/skills/skill-tuning/phases/actions/action-init.md  Normal file
@@ -0,0 +1,149 @@
# Action: Initialize Tuning Session

Initialize the skill-tuning session by collecting target skill information, creating work directories, and setting up initial state.

## Purpose

- Identify the target skill to tune
- Collect the user's problem description
- Create the work directory structure
- Back up the original skill files
- Initialize state for the orchestrator

## Preconditions

- [ ] state.status === 'pending'

## Execution

```javascript
async function execute(state, workDir) {
  // 1. Ask the user for the target skill
  const skillInput = await AskUserQuestion({
    questions: [{
      question: "Which skill do you want to tune?",
      header: "Target Skill",
      multiSelect: false,
      options: [
        { label: "Specify path", description: "Enter skill directory path" }
      ]
    }]
  });

  const skillPath = skillInput["Target Skill"];

  // 2. Validate that the skill exists and read its structure
  const skillMdPath = `${skillPath}/SKILL.md`;
  if (!Glob(`${skillPath}/SKILL.md`).length) {
    throw new Error(`Invalid skill path: ${skillPath} - SKILL.md not found`);
  }

  // 3. Read skill metadata
  const skillMd = Read(skillMdPath);
  const frontMatterMatch = skillMd.match(/^---\n([\s\S]*?)\n---/);
  const skillName = frontMatterMatch
    ? frontMatterMatch[1].match(/name:\s*(.+)/)?.[1]?.trim()
    : skillPath.split('/').pop();

  // 4. Detect execution mode
  const hasOrchestrator = Glob(`${skillPath}/phases/orchestrator.md`).length > 0;
  const executionMode = hasOrchestrator ? 'autonomous' : 'sequential';

  // 5. Scan skill structure
  const phases = Glob(`${skillPath}/phases/**/*.md`).map(f => f.replace(skillPath + '/', ''));
  const specs = Glob(`${skillPath}/specs/**/*.md`).map(f => f.replace(skillPath + '/', ''));

  // 6. Ask for a problem description
  const issueInput = await AskUserQuestion({
    questions: [{
      question: "Describe the issue or what you want to optimize:",
      header: "Issue",
      multiSelect: false,
      options: [
        { label: "Context grows too large", description: "Token explosion over multiple turns" },
        { label: "Instructions forgotten", description: "Early constraints lost in long execution" },
        { label: "Data inconsistency", description: "State format changes between phases" },
        { label: "Agent failures", description: "Sub-agent calls fail or return unexpected results" }
      ]
    }]
  });

  // 7. Ask for focus areas
  const focusInput = await AskUserQuestion({
    questions: [{
      question: "Which areas should be diagnosed? (Select all that apply)",
      header: "Focus",
      multiSelect: true,
      options: [
        { label: "context", description: "Context explosion analysis" },
        { label: "memory", description: "Long-tail forgetting analysis" },
        { label: "dataflow", description: "Data flow analysis" },
        { label: "agent", description: "Agent coordination analysis" }
      ]
    }]
  });

  const focusAreas = focusInput["Focus"] || ['context', 'memory', 'dataflow', 'agent'];

  // 8. Create a backup
  const backupDir = `${workDir}/backups/${skillName}-backup`;
  Bash(`mkdir -p "${backupDir}"`);
  Bash(`cp -r "${skillPath}"/* "${backupDir}/"`);

  // 9. Return state updates
  return {
    stateUpdates: {
      status: 'running',
      started_at: new Date().toISOString(),
      target_skill: {
        name: skillName,
        path: skillPath,
        execution_mode: executionMode,
        phases: phases,
        specs: specs
      },
      user_issue_description: issueInput["Issue"],
      focus_areas: Array.isArray(focusAreas) ? focusAreas : [focusAreas],
      work_dir: workDir,
      backup_dir: backupDir
    },
    outputFiles: [],
    summary: `Initialized tuning for "${skillName}" (${executionMode} mode), focus: ${focusAreas.join(', ')}`
  };
}
```
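The frontmatter parse in step 3 can be checked in isolation. This sketch uses the same two regexes as above; the `?? fallback` at the end is a small addition for the case where a frontmatter block exists but has no `name:` line:

```javascript
// Extract the skill name from SKILL.md frontmatter, falling back to the
// directory name — same regexes as steps 2-3 above, plus a fallback when
// the frontmatter has no name: line (an addition for robustness).
function extractSkillName(skillMd, skillPath) {
  const fm = skillMd.match(/^---\n([\s\S]*?)\n---/);
  return fm
    ? (fm[1].match(/name:\s*(.+)/)?.[1]?.trim() ?? skillPath.split('/').pop())
    : skillPath.split('/').pop();
}

const md = '---\nname: demo-skill\ndescription: example\n---\n# Demo';
console.log(extractSkillName(md, '/skills/demo'));                 // "demo-skill"
console.log(extractSkillName('# No frontmatter', '/skills/demo')); // "demo"
```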

## State Updates

```javascript
return {
  stateUpdates: {
    status: 'running',
    started_at: '<timestamp>',
    target_skill: {
      name: '<skill-name>',
      path: '<skill-path>',
      execution_mode: '<sequential|autonomous>',
      phases: ['...'],
      specs: ['...']
    },
    user_issue_description: '<user description>',
    focus_areas: ['context', 'memory', ...],
    work_dir: '<work-dir>',
    backup_dir: '<backup-dir>'
  }
};
```

## Error Handling

| Error Type | Recovery |
|------------|----------|
| Skill path not found | Ask the user to re-enter a valid path |
| SKILL.md missing | Suggest a path correction |
| Backup creation failed | Retry with an alternative location |

## Next Actions

- Success: continue to the first diagnosis action based on focus_areas
- Failure: action-abort

@@ -0,0 +1,317 @@
# Action: Propose Fixes

Generate fix proposals for identified issues, with implementation strategies.

## Purpose

- Create fix strategies for each issue
- Generate implementation plans
- Estimate risk levels
- Allow the user to select which fixes to apply

## Preconditions

- [ ] state.status === 'running'
- [ ] state.issues.length > 0
- [ ] action-generate-report completed

## Fix Strategy Catalog

### Context Explosion Fixes

| Strategy | Description | Risk |
|----------|-------------|------|
| `context_summarization` | Add a summarizer agent between phases | low |
| `sliding_window` | Keep only the last N turns in context | low |
| `structured_state` | Replace text context with JSON state | medium |
| `path_reference` | Pass file paths instead of content | low |

### Memory Loss Fixes

| Strategy | Description | Risk |
|----------|-------------|------|
| `constraint_injection` | Add constraints to each phase prompt | low |
| `checkpoint_restore` | Save state at milestones | low |
| `goal_embedding` | Track goal similarity throughout | medium |
| `state_constraints_field` | Add a constraints field to the state schema | low |

### Data Flow Fixes

| Strategy | Description | Risk |
|----------|-------------|------|
| `state_centralization` | Single state.json for all data | medium |
| `schema_enforcement` | Add Zod validation | low |
| `field_normalization` | Normalize field names | low |
| `transactional_updates` | Atomic state updates | medium |

### Agent Coordination Fixes

| Strategy | Description | Risk |
|----------|-------------|------|
| `error_wrapping` | Add try-catch to all Task calls | low |
| `result_validation` | Validate agent returns | low |
| `orchestrator_refactor` | Centralize agent coordination | high |
| `flatten_nesting` | Remove nested agent calls | medium |

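One way to consume the catalog is to prefer the lowest-risk applicable strategy for a given issue type. A sketch with a trimmed, hypothetical catalog (the full tables above are the real source; the risk ordering low < medium < high mirrors the Risk column):

```javascript
// Pick the lowest-risk strategy for an issue type from a trimmed catalog.
// CATALOG entries here are abbreviated examples from the tables above.
const RISK_ORDER = { low: 0, medium: 1, high: 2 };

const CATALOG = {
  context_explosion: [
    { strategy: 'structured_state', risk: 'medium' },
    { strategy: 'sliding_window', risk: 'low' },
    { strategy: 'path_reference', risk: 'low' }
  ],
  agent_failure: [
    { strategy: 'orchestrator_refactor', risk: 'high' },
    { strategy: 'error_wrapping', risk: 'low' }
  ]
};

function lowestRiskStrategy(issueType) {
  const options = CATALOG[issueType] || [];
  return options
    .slice() // keep the catalog itself unsorted
    .sort((a, b) => RISK_ORDER[a.risk] - RISK_ORDER[b.risk])[0]?.strategy ?? null;
}

console.log(lowestRiskStrategy('context_explosion')); // "sliding_window"
console.log(lowestRiskStrategy('agent_failure'));     // "error_wrapping"
```

Ties on risk fall back to catalog order, so listing strategies in preference order within each risk level keeps the selection deterministic.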
## Execution
|
||||
|
||||
```javascript
|
||||
async function execute(state, workDir) {
|
||||
console.log('Generating fix proposals...');
|
||||
|
||||
const issues = state.issues;
|
||||
const fixes = [];
|
||||
|
||||
// Group issues by type for batch fixes
|
||||
const issuesByType = {
|
||||
context_explosion: issues.filter(i => i.type === 'context_explosion'),
|
||||
memory_loss: issues.filter(i => i.type === 'memory_loss'),
|
||||
dataflow_break: issues.filter(i => i.type === 'dataflow_break'),
|
||||
agent_failure: issues.filter(i => i.type === 'agent_failure')
|
||||
};
|
||||
|
||||
// Generate fixes for context explosion
|
||||
if (issuesByType.context_explosion.length > 0) {
|
||||
const ctxIssues = issuesByType.context_explosion;
|
||||
|
||||
if (ctxIssues.some(i => i.description.includes('history accumulation'))) {
|
||||
fixes.push({
|
||||
id: `FIX-${fixes.length + 1}`,
|
||||
issue_ids: ctxIssues.filter(i => i.description.includes('history')).map(i => i.id),
|
||||
strategy: 'sliding_window',
|
||||
description: 'Implement sliding window for conversation history',
|
||||
rationale: 'Prevents unbounded context growth by keeping only recent turns',
|
||||
changes: [{
|
||||
file: 'phases/orchestrator.md',
|
||||
action: 'modify',
|
||||
diff: `+ const MAX_HISTORY = 5;
|
||||
+ state.history = state.history.slice(-MAX_HISTORY);`
|
||||
}],
|
||||
risk: 'low',
|
||||
estimated_impact: 'Reduces token usage by ~50%',
|
||||
verification_steps: ['Run skill with 10+ iterations', 'Verify context size stable']
|
||||
});
|
||||
}
|
||||
|
||||
if (ctxIssues.some(i => i.description.includes('full content'))) {
|
||||
fixes.push({
|
||||
id: `FIX-${fixes.length + 1}`,
|
||||
issue_ids: ctxIssues.filter(i => i.description.includes('content')).map(i => i.id),
|
||||
strategy: 'path_reference',
|
||||
description: 'Pass file paths instead of full content',
|
||||
rationale: 'Agents can read files when needed, reducing prompt size',
|
||||
changes: [{
|
||||
file: 'phases/*.md',
|
||||
action: 'modify',
|
||||
diff: `- prompt: \${content}
|
||||
+ prompt: Read file at: \${filePath}`
|
||||
}],
|
||||
risk: 'low',
|
||||
estimated_impact: 'Significant token reduction',
|
||||
verification_steps: ['Verify agents can still access needed content']
|
||||
});
|
||||
}
|
||||
}

  // Generate fixes for memory loss
  if (issuesByType.memory_loss.length > 0) {
    const memIssues = issuesByType.memory_loss;

    if (memIssues.some(i => i.description.includes('constraint'))) {
      fixes.push({
        id: `FIX-${fixes.length + 1}`,
        issue_ids: memIssues.filter(i => i.description.includes('constraint')).map(i => i.id),
        strategy: 'constraint_injection',
        description: 'Add constraint injection to all phases',
        rationale: 'Ensures original requirements are visible in every phase',
        changes: [{
          file: 'phases/*.md',
          action: 'modify',
          diff: `+ [CONSTRAINTS]
+ Original requirements from state.original_requirements:
+ \${JSON.stringify(state.original_requirements)}`
        }],
        risk: 'low',
        estimated_impact: 'Improves constraint adherence',
        verification_steps: ['Run skill with specific constraints', 'Verify output matches']
      });
    }

    if (memIssues.some(i => i.description.includes('State schema'))) {
      fixes.push({
        id: `FIX-${fixes.length + 1}`,
        issue_ids: memIssues.filter(i => i.description.includes('schema')).map(i => i.id),
        strategy: 'state_constraints_field',
        description: 'Add original_requirements field to state schema',
        rationale: 'Preserves original intent throughout execution',
        changes: [{
          file: 'phases/state-schema.md',
          action: 'modify',
          diff: `+ original_requirements: string[]; // User's original constraints
+ goal_summary: string; // One-line goal statement`
        }],
        risk: 'low',
        estimated_impact: 'Enables constraint tracking',
        verification_steps: ['Verify state includes requirements after init']
      });
    }
  }

  // Generate fixes for data flow
  if (issuesByType.dataflow_break.length > 0) {
    const dfIssues = issuesByType.dataflow_break;

    if (dfIssues.some(i => i.description.includes('multiple locations'))) {
      fixes.push({
        id: `FIX-${fixes.length + 1}`,
        issue_ids: dfIssues.filter(i => i.description.includes('location')).map(i => i.id),
        strategy: 'state_centralization',
        description: 'Centralize all state to single state.json',
        rationale: 'Single source of truth prevents inconsistencies',
        changes: [{
          file: 'phases/*.md',
          action: 'modify',
          diff: `- Write(\`\${workDir}/config.json\`, ...)
+ updateState({ config: ... }) // Use state manager`
        }],
        risk: 'medium',
        estimated_impact: 'Eliminates state fragmentation',
        verification_steps: ['Verify all reads come from state.json', 'Test state persistence']
      });
    }

    if (dfIssues.some(i => i.description.includes('validation'))) {
      fixes.push({
        id: `FIX-${fixes.length + 1}`,
        issue_ids: dfIssues.filter(i => i.description.includes('validation')).map(i => i.id),
        strategy: 'schema_enforcement',
        description: 'Add Zod schema validation',
        rationale: 'Runtime validation catches schema violations',
        changes: [{
          file: 'phases/state-schema.md',
          action: 'modify',
          diff: `+ import { z } from 'zod';
+ const StateSchema = z.object({...});
+ function validateState(s) { return StateSchema.parse(s); }`
        }],
        risk: 'low',
        estimated_impact: 'Catches invalid state early',
        verification_steps: ['Test with invalid state input', 'Verify error thrown']
      });
    }
  }

  // Generate fixes for agent coordination
  if (issuesByType.agent_failure.length > 0) {
    const agentIssues = issuesByType.agent_failure;

    if (agentIssues.some(i => i.description.includes('error handling'))) {
      fixes.push({
        id: `FIX-${fixes.length + 1}`,
        issue_ids: agentIssues.filter(i => i.description.includes('error')).map(i => i.id),
        strategy: 'error_wrapping',
        description: 'Wrap all Task calls in try-catch',
        rationale: 'Prevents cascading failures from agent errors',
        changes: [{
          file: 'phases/*.md',
          action: 'modify',
          diff: `+ try {
  const result = await Task({...});
+ if (!result) throw new Error('Empty result');
+ } catch (e) {
+ updateState({ errors: [...errors, e.message], error_count: error_count + 1 });
+ }`
        }],
        risk: 'low',
        estimated_impact: 'Improves error resilience',
        verification_steps: ['Simulate agent failure', 'Verify graceful handling']
      });
    }

    if (agentIssues.some(i => i.description.includes('nested'))) {
      fixes.push({
        id: `FIX-${fixes.length + 1}`,
        issue_ids: agentIssues.filter(i => i.description.includes('nested')).map(i => i.id),
        strategy: 'flatten_nesting',
        description: 'Flatten nested agent calls',
        rationale: 'Reduces complexity and context explosion',
        changes: [{
          file: 'phases/orchestrator.md',
          action: 'modify',
          diff: `// Instead of agent calling agent:
// Agent A returns {needs_agent_b: true}
// Orchestrator sees this and calls Agent B next`
        }],
        risk: 'medium',
        estimated_impact: 'Reduces nesting depth',
        verification_steps: ['Verify no nested Task calls', 'Test agent chaining via orchestrator']
      });
    }
  }

  // Write fix proposals
  Write(`${workDir}/fixes/fix-proposals.json`, JSON.stringify(fixes, null, 2));

  // Ask user to select fixes to apply
  const fixOptions = fixes.slice(0, 4).map(f => ({
    label: f.id,
    description: `[${f.risk.toUpperCase()} risk] ${f.description}`
  }));

  if (fixOptions.length > 0) {
    const selection = await AskUserQuestion({
      questions: [{
        question: 'Which fixes would you like to apply?',
        header: 'Fixes',
        multiSelect: true,
        options: fixOptions
      }]
    });

    const selectedFixIds = Array.isArray(selection['Fixes'])
      ? selection['Fixes']
      : [selection['Fixes']];

    return {
      stateUpdates: {
        proposed_fixes: fixes,
        pending_fixes: selectedFixIds.filter(id => id && fixes.some(f => f.id === id))
      },
      outputFiles: [`${workDir}/fixes/fix-proposals.json`],
      summary: `Generated ${fixes.length} fix proposals, ${selectedFixIds.length} selected for application`
    };
  }

  return {
    stateUpdates: {
      proposed_fixes: fixes,
      pending_fixes: []
    },
    outputFiles: [`${workDir}/fixes/fix-proposals.json`],
    summary: `Generated ${fixes.length} fix proposals (none selected)`
  };
}
```

## State Updates

```javascript
return {
  stateUpdates: {
    proposed_fixes: [...fixes],
    pending_fixes: [...selectedFixIds]
  }
};
```

## Error Handling

| Error Type | Recovery |
|------------|----------|
| No issues to fix | Skip to action-complete |
| User cancels selection | Set pending_fixes to empty |

## Next Actions

- If pending_fixes.length > 0: action-apply-fix
- If pending_fixes.length === 0: action-complete
222
.claude/skills/skill-tuning/phases/actions/action-verify.md
Normal file
@@ -0,0 +1,222 @@

# Action: Verify Applied Fixes

Verify that applied fixes resolved the targeted issues.

## Purpose

- Re-run relevant diagnostics
- Compare before/after issue counts
- Update verification status
- Determine if more iterations needed

## Preconditions

- [ ] state.status === 'running'
- [ ] state.applied_fixes.length > 0
- [ ] Some applied_fixes have verification_result === 'pending'
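
The checklist above can be expressed as a single guard — a sketch; the orchestrator's actual dispatch lives in its `selectNextAction` logic:

```javascript
// Guard mirroring the preconditions checklist: verify only while
// running, with at least one applied fix awaiting verification.
function canVerify(state) {
  return state.status === 'running'
    && state.applied_fixes.length > 0
    && state.applied_fixes.some(f => f.verification_result === 'pending');
}
```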

## Execution

```javascript
async function execute(state, workDir) {
  console.log('Verifying applied fixes...');

  const appliedFixes = state.applied_fixes.filter(f => f.verification_result === 'pending');

  if (appliedFixes.length === 0) {
    return {
      stateUpdates: {},
      outputFiles: [],
      summary: 'No fixes pending verification'
    };
  }

  const verificationResults = [];

  for (const fix of appliedFixes) {
    const proposedFix = state.proposed_fixes.find(f => f.id === fix.fix_id);

    if (!proposedFix) {
      verificationResults.push({
        fix_id: fix.fix_id,
        result: 'fail',
        reason: 'Fix definition not found'
      });
      continue;
    }

    // Determine which diagnosis to re-run based on fix strategy
    const strategyToDiagnosis = {
      'context_summarization': 'context',
      'sliding_window': 'context',
      'structured_state': 'context',
      'path_reference': 'context',
      'constraint_injection': 'memory',
      'checkpoint_restore': 'memory',
      'goal_embedding': 'memory',
      'state_constraints_field': 'memory',
      'state_centralization': 'dataflow',
      'schema_enforcement': 'dataflow',
      'field_normalization': 'dataflow',
      'transactional_updates': 'dataflow',
      'error_wrapping': 'agent',
      'result_validation': 'agent',
      'orchestrator_refactor': 'agent',
      'flatten_nesting': 'agent'
    };

    const diagnosisType = strategyToDiagnosis[proposedFix.strategy];

    // For now, do a lightweight verification
    // Full implementation would re-run the specific diagnosis

    // Check if the fix was actually applied (look for markers)
    const targetPath = state.target_skill.path;
    const fixMarker = `Applied fix ${fix.fix_id}`;

    let fixFound = false;
    const allFiles = Glob(`${targetPath}/**/*.md`);

    for (const file of allFiles) {
      const content = Read(file);
      if (content.includes(fixMarker)) {
        fixFound = true;
        break;
      }
    }

    if (fixFound) {
      // Verify by checking if original issues still exist
      const relatedIssues = proposedFix.issue_ids;
      const originalIssueCount = relatedIssues.length;

      // Simplified verification: assume fix worked if marker present
      // Real implementation would re-run diagnosis patterns

      verificationResults.push({
        fix_id: fix.fix_id,
        result: 'pass',
        reason: `Fix applied successfully, addressing ${originalIssueCount} issues`,
        issues_resolved: relatedIssues
      });
    } else {
      verificationResults.push({
        fix_id: fix.fix_id,
        result: 'fail',
        reason: 'Fix marker not found in target files'
      });
    }
  }

  // Update applied fixes with verification results
  const updatedAppliedFixes = state.applied_fixes.map(fix => {
    const result = verificationResults.find(v => v.fix_id === fix.fix_id);
    if (result) {
      return {
        ...fix,
        verification_result: result.result
      };
    }
    return fix;
  });

  // Calculate new quality score
  const passedFixes = verificationResults.filter(v => v.result === 'pass').length;
  const totalFixes = verificationResults.length;
  const verificationRate = totalFixes > 0 ? (passedFixes / totalFixes) * 100 : 100;

  // Recalculate issues (remove resolved ones)
  const resolvedIssueIds = verificationResults
    .filter(v => v.result === 'pass')
    .flatMap(v => v.issues_resolved || []);

  const remainingIssues = state.issues.filter(i => !resolvedIssueIds.includes(i.id));

  // Recalculate quality score
  const weights = { critical: 25, high: 15, medium: 5, low: 1 };
  const deductions = remainingIssues.reduce((sum, issue) =>
    sum + (weights[issue.severity] || 0), 0);
  const newHealthScore = Math.max(0, 100 - deductions);

  // Determine new quality gate
  const remainingCritical = remainingIssues.filter(i => i.severity === 'critical').length;
  const remainingHigh = remainingIssues.filter(i => i.severity === 'high').length;
  const newQualityGate = remainingCritical === 0 && remainingHigh <= 2 && newHealthScore >= 60
    ? 'pass'
    : newHealthScore >= 40 ? 'review' : 'fail';

  // Increment iteration count
  const newIterationCount = state.iteration_count + 1;

  // Ask user if they want to continue
  let continueIteration = false;
  if (newQualityGate !== 'pass' && newIterationCount < state.max_iterations) {
    const continueResponse = await AskUserQuestion({
      questions: [{
        question: `Verification complete. Quality gate: ${newQualityGate}. Continue with another iteration?`,
        header: 'Continue',
        multiSelect: false,
        options: [
          { label: 'Yes', description: `Run iteration ${newIterationCount + 1}` },
          { label: 'No', description: 'Finish with current state' }
        ]
      }]
    });
    continueIteration = continueResponse['Continue'] === 'Yes';
  }

  // If continuing, reset diagnosis for re-evaluation
  const diagnosisReset = continueIteration ? {
    'diagnosis.context': null,
    'diagnosis.memory': null,
    'diagnosis.dataflow': null,
    'diagnosis.agent': null
  } : {};

  return {
    stateUpdates: {
      applied_fixes: updatedAppliedFixes,
      issues: remainingIssues,
      quality_score: newHealthScore,
      quality_gate: newQualityGate,
      iteration_count: newIterationCount,
      ...diagnosisReset,
      issues_by_severity: {
        critical: remainingIssues.filter(i => i.severity === 'critical').length,
        high: remainingIssues.filter(i => i.severity === 'high').length,
        medium: remainingIssues.filter(i => i.severity === 'medium').length,
        low: remainingIssues.filter(i => i.severity === 'low').length
      }
    },
    outputFiles: [],
    summary: `Verified ${totalFixes} fixes: ${passedFixes} passed. Score: ${newHealthScore}, Gate: ${newQualityGate}, Iteration: ${newIterationCount}`
  };
}
```

## State Updates

```javascript
return {
  stateUpdates: {
    applied_fixes: [...updatedWithVerificationResults],
    issues: [...remainingIssues],
    quality_score: newScore,
    quality_gate: newGate,
    iteration_count: iteration + 1
  }
};
```

## Error Handling

| Error Type | Recovery |
|------------|----------|
| Re-diagnosis fails | Mark as 'inconclusive' |
| File access error | Skip file verification |

## Next Actions

- If quality_gate === 'pass': action-complete
- If user chose to continue: restart diagnosis cycle
- If max_iterations reached: action-complete
377
.claude/skills/skill-tuning/phases/orchestrator.md
Normal file
@@ -0,0 +1,377 @@

# Orchestrator

Autonomous orchestrator for skill-tuning workflow. Reads current state and selects the next action based on diagnosis progress and quality gates.

## Role

Drive the tuning workflow by:
1. Reading current session state
2. Selecting the appropriate next action
3. Executing the action via sub-agent
4. Updating state with results
5. Repeating until termination conditions met

## State Management

### Read State

```javascript
const state = JSON.parse(Read(`${workDir}/state.json`));
```

### Update State

```javascript
function updateState(updates) {
  const state = JSON.parse(Read(`${workDir}/state.json`));
  const newState = {
    ...state,
    ...updates,
    updated_at: new Date().toISOString()
  };
  Write(`${workDir}/state.json`, JSON.stringify(newState, null, 2));
  return newState;
}
```

## Decision Logic

```javascript
function selectNextAction(state) {
  // === Termination Checks ===

  // User exit
  if (state.status === 'user_exit') return null;

  // Completed
  if (state.status === 'completed') return null;

  // Error limit exceeded
  if (state.error_count >= state.max_errors) {
    return 'action-abort';
  }

  // Max iterations exceeded
  if (state.iteration_count >= state.max_iterations) {
    return 'action-complete';
  }

  // === Action Selection ===

  // 1. Not initialized yet
  if (state.status === 'pending') {
    return 'action-init';
  }

  // 1.5. Requirement analysis (after init, before diagnosis)
  if (state.status === 'running' &&
      state.completed_actions.includes('action-init') &&
      !state.completed_actions.includes('action-analyze-requirements')) {
    return 'action-analyze-requirements';
  }

  // 1.6. If requirement analysis found ambiguities, pause and wait for the user
  if (state.requirement_analysis?.status === 'needs_clarification') {
    return null; // Resume after the user clarifies
  }

  // 1.7. If requirement coverage is insufficient, prioritize deep Gemini analysis
  if (state.requirement_analysis?.coverage?.status === 'unsatisfied' &&
      !state.completed_actions.includes('action-gemini-analysis')) {
    return 'action-gemini-analysis';
  }

  // 2. Check if Gemini analysis is requested or needed
  if (shouldTriggerGeminiAnalysis(state)) {
    return 'action-gemini-analysis';
  }

  // 3. Check if Gemini analysis is running
  if (state.gemini_analysis?.status === 'running') {
    // Wait for Gemini analysis to complete
    return null; // Orchestrator will be re-triggered when CLI completes
  }

  // 4. Run diagnosis in order (only if not completed)
  const diagnosisOrder = ['context', 'memory', 'dataflow', 'agent', 'docs', 'token_consumption'];

  for (const diagType of diagnosisOrder) {
    if (state.diagnosis[diagType] === null) {
      // Check if user wants to skip this diagnosis
      if (!state.focus_areas.length || state.focus_areas.includes(diagType)) {
        return `action-diagnose-${diagType}`;
      }
      // For docs diagnosis, also check 'all' focus_area
      if (diagType === 'docs' && state.focus_areas.includes('all')) {
        return 'action-diagnose-docs';
      }
    }
  }

  // 5. All diagnosis complete, generate report if not done
  const allDiagnosisComplete = diagnosisOrder.every(
    d => state.diagnosis[d] !== null || !state.focus_areas.includes(d)
  );

  if (allDiagnosisComplete && !state.completed_actions.includes('action-generate-report')) {
    return 'action-generate-report';
  }

  // 6. Report generated, propose fixes if not done
  if (state.completed_actions.includes('action-generate-report') &&
      state.proposed_fixes.length === 0 &&
      state.issues.length > 0) {
    return 'action-propose-fixes';
  }

  // 7. Fixes proposed, check if user wants to apply
  if (state.proposed_fixes.length > 0 && state.pending_fixes.length > 0) {
    return 'action-apply-fix';
  }

  // 8. Fixes applied, verify
  if (state.applied_fixes.length > 0 &&
      state.applied_fixes.some(f => f.verification_result === 'pending')) {
    return 'action-verify';
  }

  // 9. Quality gate check
  if (state.quality_gate === 'pass') {
    return 'action-complete';
  }

  // 10. More iterations needed
  if (state.iteration_count < state.max_iterations &&
      state.quality_gate !== 'pass' &&
      state.issues.some(i => i.severity === 'critical' || i.severity === 'high')) {
    // Reset diagnosis for re-evaluation
    return 'action-diagnose-context'; // Start new iteration
  }

  // 11. Default: complete
  return 'action-complete';
}

/**
 * Decide whether Gemini CLI analysis needs to be triggered
 */
function shouldTriggerGeminiAnalysis(state) {
  // Gemini analysis already completed; do not trigger again
  if (state.gemini_analysis?.status === 'completed') {
    return false;
  }

  // Explicit user request
  if (state.gemini_analysis_requested === true) {
    return true;
  }

  // Critical issues found and no deep analysis performed yet
  if (state.issues.some(i => i.severity === 'critical') &&
      !state.completed_actions.includes('action-gemini-analysis')) {
    return true;
  }

  // User specified focus_areas that require Gemini analysis
  const geminiAreas = ['architecture', 'prompt', 'performance', 'custom'];
  if (state.focus_areas.some(area => geminiAreas.includes(area))) {
    return true;
  }

  // Standard diagnosis complete but issues remain unresolved; deep analysis needed
  const diagnosisComplete = ['context', 'memory', 'dataflow', 'agent', 'docs'].every(
    d => state.diagnosis[d] !== null
  );
  if (diagnosisComplete &&
      state.issues.length > 0 &&
      state.iteration_count > 0 &&
      !state.completed_actions.includes('action-gemini-analysis')) {
    // If issues persist into the second iteration, trigger Gemini analysis
    return true;
  }

  return false;
}
```

## Execution Loop

```javascript
async function runOrchestrator(workDir) {
  console.log('=== Skill Tuning Orchestrator Started ===');

  let iteration = 0;
  const MAX_LOOP_ITERATIONS = 50; // Safety limit

  while (iteration < MAX_LOOP_ITERATIONS) {
    iteration++;

    // 1. Read current state
    const state = JSON.parse(Read(`${workDir}/state.json`));
    console.log(`[Loop ${iteration}] Status: ${state.status}, Action: ${state.current_action}`);

    // 2. Select next action
    const actionId = selectNextAction(state);

    if (!actionId) {
      console.log('No action selected, terminating orchestrator.');
      break;
    }

    console.log(`[Loop ${iteration}] Executing: ${actionId}`);

    // 3. Update state: current action
    // FIX CTX-001: sliding window for action_history (keep last 10)
    updateState({
      current_action: actionId,
      action_history: [...state.action_history, {
        action: actionId,
        started_at: new Date().toISOString(),
        completed_at: null,
        result: null,
        output_files: []
      }].slice(-10) // Sliding window: prevent unbounded growth
    });

    // 4. Execute action
    try {
      const actionPrompt = Read(`phases/actions/${actionId}.md`);
      // FIX CTX-003: Pass state path + key fields only instead of full state
      const stateKeyInfo = {
        status: state.status,
        iteration_count: state.iteration_count,
        issues_by_severity: state.issues_by_severity,
        quality_gate: state.quality_gate,
        current_action: state.current_action,
        completed_actions: state.completed_actions,
        user_issue_description: state.user_issue_description,
        target_skill: { name: state.target_skill.name, path: state.target_skill.path }
      };
      const stateKeyJson = JSON.stringify(stateKeyInfo, null, 2);

      const result = await Task({
        subagent_type: 'universal-executor',
        run_in_background: false,
        prompt: `
[CONTEXT]
You are executing action "${actionId}" for skill-tuning workflow.
Work directory: ${workDir}

[STATE KEY INFO]
${stateKeyJson}

[FULL STATE PATH]
${workDir}/state.json
(Read full state from this file if you need additional fields)

[ACTION INSTRUCTIONS]
${actionPrompt}

[OUTPUT REQUIREMENT]
After completing the action:
1. Write any output files to the work directory
2. Return a JSON object with:
- stateUpdates: object with state fields to update
- outputFiles: array of files created
- summary: brief description of what was done
`
      });

      // 5. Parse result and update state
      let actionResult;
      try {
        actionResult = JSON.parse(result);
      } catch (e) {
        actionResult = {
          stateUpdates: {},
          outputFiles: [],
          summary: result
        };
      }

      // 6. Update state: action complete
      // Re-read state so the history entry appended in step 3 is included
      // (the `state` read at the top of the loop predates that update)
      const currentState = JSON.parse(Read(`${workDir}/state.json`));
      const updatedHistory = [...currentState.action_history];
      updatedHistory[updatedHistory.length - 1] = {
        ...updatedHistory[updatedHistory.length - 1],
        completed_at: new Date().toISOString(),
        result: 'success',
        output_files: actionResult.outputFiles || []
      };

      updateState({
        current_action: null,
        completed_actions: [...state.completed_actions, actionId],
        action_history: updatedHistory,
        ...actionResult.stateUpdates
      });

      console.log(`[Loop ${iteration}] Completed: ${actionId}`);

    } catch (error) {
      console.log(`[Loop ${iteration}] Error in ${actionId}: ${error.message}`);

      // Error handling
      // FIX CTX-002: sliding window for errors (keep last 5)
      updateState({
        current_action: null,
        errors: [...state.errors, {
          action: actionId,
          message: error.message,
          timestamp: new Date().toISOString(),
          recoverable: true
        }].slice(-5), // Sliding window: prevent unbounded growth
        error_count: state.error_count + 1
      });
    }
  }

  console.log('=== Skill Tuning Orchestrator Finished ===');
}
```

## Action Catalog

| Action | Purpose | Preconditions | Effects |
|--------|---------|---------------|---------|
| [action-init](actions/action-init.md) | Initialize tuning session | status === 'pending' | Creates work dirs, backup, sets status='running' |
| [action-analyze-requirements](actions/action-analyze-requirements.md) | Analyze user requirements | init completed | Sets requirement_analysis, optimizes focus_areas |
| [action-diagnose-context](actions/action-diagnose-context.md) | Analyze context explosion | status === 'running' | Sets diagnosis.context |
| [action-diagnose-memory](actions/action-diagnose-memory.md) | Analyze long-tail forgetting | status === 'running' | Sets diagnosis.memory |
| [action-diagnose-dataflow](actions/action-diagnose-dataflow.md) | Analyze data flow issues | status === 'running' | Sets diagnosis.dataflow |
| [action-diagnose-agent](actions/action-diagnose-agent.md) | Analyze agent coordination | status === 'running' | Sets diagnosis.agent |
| [action-diagnose-docs](actions/action-diagnose-docs.md) | Analyze documentation structure | status === 'running', focus includes 'docs' | Sets diagnosis.docs |
| [action-gemini-analysis](actions/action-gemini-analysis.md) | Deep analysis via Gemini CLI | User request OR critical issues | Sets gemini_analysis, adds issues |
| [action-generate-report](actions/action-generate-report.md) | Generate consolidated report | All diagnoses complete | Creates tuning-report.md |
| [action-propose-fixes](actions/action-propose-fixes.md) | Generate fix proposals | Report generated, issues > 0 | Sets proposed_fixes |
| [action-apply-fix](actions/action-apply-fix.md) | Apply selected fix | pending_fixes > 0 | Updates applied_fixes |
| [action-verify](actions/action-verify.md) | Verify applied fixes | applied_fixes with pending verification | Updates verification_result |
| [action-complete](actions/action-complete.md) | Finalize session | quality_gate='pass' OR max_iterations | Sets status='completed' |
| [action-abort](actions/action-abort.md) | Abort on errors | error_count >= max_errors | Sets status='failed' |

## Termination Conditions

- `status === 'completed'`: Normal completion
- `status === 'user_exit'`: User requested exit
- `status === 'failed'`: Unrecoverable error
- `requirement_analysis.status === 'needs_clarification'`: Waiting for user clarification (a pause, not a termination)
- `error_count >= max_errors`: Too many errors (default: 3)
- `iteration_count >= max_iterations`: Max iterations reached (default: 5)
- `quality_gate === 'pass'`: All quality criteria met
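
The conditions above can be collapsed into a single predicate — a sketch, excluding the clarification case since it pauses rather than terminates:

```javascript
// Termination predicate over the conditions listed above.
// needs_clarification is deliberately excluded: it is a pause.
function isTerminated(state) {
  return ['completed', 'user_exit', 'failed'].includes(state.status)
    || state.error_count >= state.max_errors        // default max_errors: 3
    || state.iteration_count >= state.max_iterations // default max_iterations: 5
    || state.quality_gate === 'pass';
}
```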

## Error Recovery

| Error Type | Recovery Strategy |
|------------|-------------------|
| Action execution failed | Retry up to 3 times, then skip |
| State parse error | Restore from backup |
| File write error | Retry with alternative path |
| User abort | Save state and exit gracefully |
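
The "retry up to 3 times, then skip" row can be sketched as a small helper; `run` stands in for an action execution and is an assumption of this sketch:

```javascript
// Retry an action up to `attempts` times; report failure instead of
// throwing so the orchestrator can skip and move on.
async function retryAction(run, attempts = 3) {
  for (let i = 1; i <= attempts; i++) {
    try {
      return { ok: true, result: await run() };
    } catch (e) {
      if (i === attempts) return { ok: false, error: e.message };
    }
  }
}
```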

## User Interaction Points

The orchestrator pauses for user input at these points:

1. **action-init**: Confirm target skill and describe issue
2. **action-propose-fixes**: Select which fixes to apply
3. **action-verify**: Review verification results, decide to continue or stop
4. **action-complete**: Review final summary
378
.claude/skills/skill-tuning/phases/state-schema.md
Normal file
@@ -0,0 +1,378 @@

# State Schema

Defines the state structure for skill-tuning orchestrator.

## State Structure

```typescript
interface TuningState {
  // === Core Status ===
  // 'user_exit' included: the orchestrator's termination checks test for it
  status: 'pending' | 'running' | 'completed' | 'failed' | 'user_exit';
  started_at: string; // ISO timestamp
  updated_at: string; // ISO timestamp

  // === Target Skill Info ===
  target_skill: {
    name: string; // e.g., "software-manual"
    path: string; // e.g., ".claude/skills/software-manual"
    execution_mode: 'sequential' | 'autonomous';
    phases: string[]; // List of phase files
    specs: string[]; // List of spec files
  };

  // === User Input ===
  user_issue_description: string; // User's problem description
  focus_areas: string[]; // User-specified focus (optional)

  // === Diagnosis Results ===
  diagnosis: {
    context: DiagnosisResult | null;
    memory: DiagnosisResult | null;
    dataflow: DiagnosisResult | null;
    agent: DiagnosisResult | null;
    docs: DocsDiagnosisResult | null; // Documentation structure diagnosis
    token_consumption: DiagnosisResult | null; // Token consumption diagnosis
  };
|
||||
|
||||
// === Issues Found ===
|
||||
issues: Issue[];
|
||||
issues_by_severity: {
|
||||
critical: number;
|
||||
high: number;
|
||||
medium: number;
|
||||
low: number;
|
||||
};
|
||||
|
||||
// === Fix Management ===
|
||||
proposed_fixes: Fix[];
|
||||
applied_fixes: AppliedFix[];
|
||||
pending_fixes: string[]; // Fix IDs pending application
|
||||
|
||||
// === Iteration Control ===
|
||||
iteration_count: number;
|
||||
max_iterations: number; // Default: 5
|
||||
|
||||
// === Quality Metrics ===
|
||||
quality_score: number; // 0-100
|
||||
quality_gate: 'pass' | 'review' | 'fail';
|
||||
|
||||
// === Orchestrator State ===
|
||||
completed_actions: string[];
|
||||
current_action: string | null;
|
||||
action_history: ActionHistoryEntry[];
|
||||
|
||||
// === Error Handling ===
|
||||
errors: ErrorEntry[];
|
||||
error_count: number;
|
||||
max_errors: number; // Default: 3
|
||||
|
||||
// === Output Paths ===
|
||||
work_dir: string;
|
||||
backup_dir: string;
|
||||
|
||||
// === Final Report (consolidated output) ===
|
||||
final_report: string | null; // Markdown summary generated on completion
|
||||
|
||||
// === Requirement Analysis (新增) ===
|
||||
requirement_analysis: RequirementAnalysis | null;
|
||||
}
|
||||
|
||||
interface RequirementAnalysis {
|
||||
status: 'pending' | 'completed' | 'needs_clarification';
|
||||
analyzed_at: string;
|
||||
|
||||
// Phase 1: 维度拆解
|
||||
dimensions: Dimension[];
|
||||
|
||||
// Phase 2: Spec 匹配
|
||||
spec_matches: SpecMatch[];
|
||||
|
||||
// Phase 3: 覆盖度
|
||||
coverage: {
|
||||
total_dimensions: number;
|
||||
with_detection: number; // 有 taxonomy pattern
|
||||
with_fix_strategy: number; // 有 tuning strategy (满足判断标准)
|
||||
coverage_rate: number; // 0-100%
|
||||
status: 'satisfied' | 'partial' | 'unsatisfied';
|
||||
};
|
||||
|
||||
// Phase 4: 歧义
|
||||
ambiguities: Ambiguity[];
|
||||
}
|
||||
|
||||
interface Dimension {
|
||||
id: string; // e.g., "DIM-001"
|
||||
description: string; // 关注点描述
|
||||
keywords: string[]; // 关键词
|
||||
inferred_category: string; // 推断的问题类别
|
||||
confidence: number; // 置信度 0-1
|
||||
}
|
||||
|
||||
interface SpecMatch {
|
||||
dimension_id: string;
|
||||
taxonomy_match: {
|
||||
category: string; // e.g., "context_explosion"
|
||||
pattern_ids: string[]; // e.g., ["CTX-001", "CTX-003"]
|
||||
severity_hint: string;
|
||||
} | null;
|
||||
strategy_match: {
|
||||
strategies: string[]; // e.g., ["sliding_window", "path_reference"]
|
||||
risk_levels: string[];
|
||||
} | null;
|
||||
has_fix: boolean; // 满足性判断核心
|
||||
}
|
||||
|
||||
interface Ambiguity {
|
||||
dimension_id: string;
|
||||
type: 'multi_category' | 'vague_description' | 'conflicting_keywords';
|
||||
description: string;
|
||||
possible_interpretations: string[];
|
||||
needs_clarification: boolean;
|
||||
}
|
||||
|
||||
interface DiagnosisResult {
|
||||
status: 'completed' | 'skipped' | 'failed';
|
||||
issues_found: number;
|
||||
severity: 'critical' | 'high' | 'medium' | 'low' | 'none';
|
||||
execution_time_ms: number;
|
||||
details: {
|
||||
patterns_checked: string[];
|
||||
patterns_matched: string[];
|
||||
evidence: Evidence[];
|
||||
recommendations: string[];
|
||||
};
|
||||
}
|
||||
|
||||
interface DocsDiagnosisResult extends DiagnosisResult {
|
||||
redundancies: Redundancy[];
|
||||
conflicts: Conflict[];
|
||||
}
|
||||
|
||||
interface Redundancy {
|
||||
id: string; // e.g., "DOC-RED-001"
|
||||
type: 'state_schema' | 'strategy_mapping' | 'type_definition' | 'other';
|
||||
files: string[]; // 涉及的文件列表
|
||||
description: string; // 冗余描述
|
||||
severity: 'high' | 'medium' | 'low';
|
||||
merge_suggestion: string; // 合并建议
|
||||
}
|
||||
|
||||
interface Conflict {
|
||||
id: string; // e.g., "DOC-CON-001"
|
||||
type: 'priority' | 'mapping' | 'definition';
|
||||
files: string[]; // 涉及的文件列表
|
||||
key: string; // 冲突的键/概念
|
||||
definitions: {
|
||||
file: string;
|
||||
value: string;
|
||||
location?: string;
|
||||
}[];
|
||||
resolution_suggestion: string; // 解决建议
|
||||
}
|
||||
|
||||
interface Evidence {
|
||||
file: string;
|
||||
line?: number;
|
||||
pattern: string;
|
||||
context: string;
|
||||
severity: string;
|
||||
}
|
||||
|
||||
interface Issue {
|
||||
id: string; // e.g., "ISS-001"
|
||||
type: 'context_explosion' | 'memory_loss' | 'dataflow_break' | 'agent_failure' | 'token_consumption';
|
||||
severity: 'critical' | 'high' | 'medium' | 'low';
|
||||
priority: number; // 1 = highest
|
||||
location: {
|
||||
file: string;
|
||||
line_start?: number;
|
||||
line_end?: number;
|
||||
phase?: string;
|
||||
};
|
||||
description: string;
|
||||
evidence: string[];
|
||||
root_cause: string;
|
||||
impact: string;
|
||||
suggested_fix: string;
|
||||
related_issues: string[]; // Issue IDs
|
||||
}
|
||||
|
||||
interface Fix {
|
||||
id: string; // e.g., "FIX-001"
|
||||
issue_ids: string[]; // Issues this fix addresses
|
||||
strategy: FixStrategy;
|
||||
description: string;
|
||||
rationale: string;
|
||||
changes: FileChange[];
|
||||
risk: 'low' | 'medium' | 'high';
|
||||
estimated_impact: string;
|
||||
verification_steps: string[];
|
||||
}
|
||||
|
||||
type FixStrategy =
|
||||
| 'context_summarization' // Add context compression
|
||||
| 'sliding_window' // Implement sliding context window
|
||||
| 'structured_state' // Convert to structured state passing
|
||||
| 'constraint_injection' // Add constraint propagation
|
||||
| 'checkpoint_restore' // Add checkpointing mechanism
|
||||
| 'schema_enforcement' // Add data contract validation
|
||||
| 'orchestrator_refactor' // Refactor agent coordination
|
||||
| 'state_centralization' // Centralize state management
|
||||
| 'prompt_compression' // Extract static text, use templates
|
||||
| 'lazy_loading' // Pass paths instead of content
|
||||
| 'output_minimization' // Return minimal structured JSON
|
||||
| 'state_field_reduction' // Audit and consolidate state fields
|
||||
| 'custom'; // Custom fix
|
||||
|
||||
interface FileChange {
|
||||
file: string;
|
||||
action: 'create' | 'modify' | 'delete';
|
||||
old_content?: string;
|
||||
new_content?: string;
|
||||
diff?: string;
|
||||
}
|
||||
|
||||
interface AppliedFix {
|
||||
fix_id: string;
|
||||
applied_at: string;
|
||||
success: boolean;
|
||||
backup_path: string;
|
||||
verification_result: 'pass' | 'fail' | 'pending';
|
||||
rollback_available: boolean;
|
||||
}
|
||||
|
||||
interface ActionHistoryEntry {
|
||||
action: string;
|
||||
started_at: string;
|
||||
completed_at: string;
|
||||
result: 'success' | 'failure' | 'skipped';
|
||||
output_files: string[];
|
||||
}
|
||||
|
||||
interface ErrorEntry {
|
||||
action: string;
|
||||
message: string;
|
||||
timestamp: string;
|
||||
recoverable: boolean;
|
||||
}
|
||||
```
|
||||
|
||||
## Initial State Template

```json
{
  "status": "pending",
  "started_at": null,
  "updated_at": null,
  "target_skill": {
    "name": null,
    "path": null,
    "execution_mode": null,
    "phases": [],
    "specs": []
  },
  "user_issue_description": "",
  "focus_areas": [],
  "diagnosis": {
    "context": null,
    "memory": null,
    "dataflow": null,
    "agent": null,
    "docs": null,
    "token_consumption": null
  },
  "issues": [],
  "issues_by_severity": {
    "critical": 0,
    "high": 0,
    "medium": 0,
    "low": 0
  },
  "proposed_fixes": [],
  "applied_fixes": [],
  "pending_fixes": [],
  "iteration_count": 0,
  "max_iterations": 5,
  "quality_score": 0,
  "quality_gate": "fail",
  "completed_actions": [],
  "current_action": null,
  "action_history": [],
  "errors": [],
  "error_count": 0,
  "max_errors": 3,
  "work_dir": null,
  "backup_dir": null,
  "final_report": null,
  "requirement_analysis": null
}
```

## State Transition Diagram

```
                  ┌─────────────┐
                  │   pending   │
                  └──────┬──────┘
                         │ action-init
                         ↓
                  ┌─────────────┐
       ┌──────────│   running   │──────────┐
       │          └──────┬──────┘          │
       │                 │                 │
 diagnosis  ┌────────────┼────────────┐    │ error_count >= 3
 actions    │            │            │    │
       │    ↓            ↓            ↓    │
       │  context      memory     dataflow │
       │    │            │            │    │
       │    └────────────┼────────────┘    │
       │                 ↓                 │
       │          action-verify            │
       │                 │                 │
       │     ┌───────────┼───────────┐     │
       │     ↓           ↓           ↓     │
       │  quality     iterate      apply   │
       │  gate=pass   (< max)       fix    │
       │     │           └───────────┘     │
       │     ↓                             ↓
       │ ┌─────────────┐           ┌─────────────┐
       └→│  completed  │           │   failed    │
         └─────────────┘           └─────────────┘
```

## State Update Rules

### Atomicity
All state updates must be atomic: read the current state, apply the changes, then write the entire state back.

### Immutability
Never mutate state in place. Always create a new state object with the changes applied.

### Validation
Before writing state, validate it against the schema to prevent corruption.

### Timestamps
Always update `updated_at` on every state change.

```javascript
function updateState(workDir, updates) {
  const currentState = JSON.parse(Read(`${workDir}/state.json`));

  const newState = {
    ...currentState,
    ...updates,
    updated_at: new Date().toISOString()
  };

  // Validate before write
  if (!validateState(newState)) {
    throw new Error('Invalid state update');
  }

  Write(`${workDir}/state.json`, JSON.stringify(newState, null, 2));
  return newState;
}
```
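`validateState` is called above but not defined in this file. A minimal sketch of what it could check, with the field list deliberately abridged (a real implementation would cover the full `TuningState` schema):

```javascript
// Illustrative sketch: shallow structural check before writing state.json.
// Only a handful of representative fields are validated here.
function validateState(state) {
  const validStatus = ['pending', 'running', 'completed', 'failed'];
  return (
    state !== null &&
    typeof state === 'object' &&
    validStatus.includes(state.status) &&
    Array.isArray(state.issues) &&
    typeof state.error_count === 'number' &&
    typeof state.max_errors === 'number'
  );
}
```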
284 .claude/skills/skill-tuning/specs/category-mappings.json (Normal file)
@@ -0,0 +1,284 @@
{
  "version": "1.0.0",
  "description": "Centralized category mappings for skill-tuning analysis and fix proposal",
  "categories": {
    "authoring_principles_violation": {
      "pattern_ids": ["APV-001", "APV-002", "APV-003", "APV-004", "APV-005", "APV-006"],
      "severity_hint": "critical",
      "strategies": ["eliminate_intermediate_files", "minimize_state", "context_passing"],
      "risk_levels": ["low", "low", "low"],
      "detection_focus": "Intermediate files, state bloat, file relay patterns",
      "priority_order": [1, 2, 3]
    },
    "context_explosion": {
      "pattern_ids": ["CTX-001", "CTX-002", "CTX-003", "CTX-004", "CTX-005"],
      "severity_hint": "high",
      "strategies": ["sliding_window", "path_reference", "context_summarization", "structured_state"],
      "risk_levels": ["low", "low", "low", "medium"],
      "detection_focus": "Token accumulation, content passing patterns",
      "priority_order": [1, 2, 3, 4]
    },
    "memory_loss": {
      "pattern_ids": ["MEM-001", "MEM-002", "MEM-003", "MEM-004", "MEM-005"],
      "severity_hint": "high",
      "strategies": ["constraint_injection", "state_constraints_field", "checkpoint_restore", "goal_embedding"],
      "risk_levels": ["low", "low", "low", "medium"],
      "detection_focus": "Constraint propagation, checkpoint mechanisms",
      "priority_order": [1, 2, 3, 4]
    },
    "dataflow_break": {
      "pattern_ids": ["DF-001", "DF-002", "DF-003", "DF-004", "DF-005"],
      "severity_hint": "critical",
      "strategies": ["state_centralization", "schema_enforcement", "field_normalization"],
      "risk_levels": ["medium", "low", "low"],
      "detection_focus": "State storage, schema validation",
      "priority_order": [1, 2, 3]
    },
    "agent_failure": {
      "pattern_ids": ["AGT-001", "AGT-002", "AGT-003", "AGT-004", "AGT-005", "AGT-006"],
      "severity_hint": "high",
      "strategies": ["error_wrapping", "result_validation", "flatten_nesting"],
      "risk_levels": ["low", "low", "medium"],
      "detection_focus": "Error handling, result validation",
      "priority_order": [1, 2, 3]
    },
    "prompt_quality": {
      "pattern_ids": [],
      "severity_hint": "medium",
      "strategies": ["structured_prompt", "output_schema", "grounding_context", "format_enforcement"],
      "risk_levels": ["low", "low", "medium", "low"],
      "detection_focus": null,
      "needs_gemini_analysis": true,
      "priority_order": [1, 2, 3, 4]
    },
    "architecture": {
      "pattern_ids": [],
      "severity_hint": "medium",
      "strategies": ["phase_decomposition", "interface_contracts", "plugin_architecture", "state_machine"],
      "risk_levels": ["medium", "medium", "high", "medium"],
      "detection_focus": null,
      "needs_gemini_analysis": true,
      "priority_order": [1, 2, 3, 4]
    },
    "performance": {
      "pattern_ids": ["CTX-001", "CTX-003"],
      "severity_hint": "medium",
      "strategies": ["token_budgeting", "parallel_execution", "result_caching", "lazy_loading"],
      "risk_levels": ["low", "low", "low", "low"],
      "detection_focus": "Reuses context explosion detection",
      "priority_order": [1, 2, 3, 4]
    },
    "error_handling": {
      "pattern_ids": ["AGT-001", "AGT-002"],
      "severity_hint": "medium",
      "strategies": ["graceful_degradation", "error_propagation", "structured_logging", "error_context"],
      "risk_levels": ["low", "low", "low", "low"],
      "detection_focus": "Reuses agent failure detection",
      "priority_order": [1, 2, 3, 4]
    },
    "output_quality": {
      "pattern_ids": [],
      "severity_hint": "medium",
      "strategies": ["quality_gates", "output_validation", "template_enforcement", "completeness_check"],
      "risk_levels": ["low", "low", "low", "low"],
      "detection_focus": null,
      "needs_gemini_analysis": true,
      "priority_order": [1, 2, 3, 4]
    },
    "user_experience": {
      "pattern_ids": [],
      "severity_hint": "low",
      "strategies": ["progress_tracking", "status_communication", "interactive_checkpoints", "guided_workflow"],
      "risk_levels": ["low", "low", "low", "low"],
      "detection_focus": null,
      "needs_gemini_analysis": true,
      "priority_order": [1, 2, 3, 4]
    },
    "token_consumption": {
      "pattern_ids": ["TKN-001", "TKN-002", "TKN-003", "TKN-004", "TKN-005"],
      "severity_hint": "medium",
      "strategies": ["prompt_compression", "lazy_loading", "output_minimization", "state_field_reduction", "sliding_window"],
      "risk_levels": ["low", "low", "low", "low", "low"],
      "detection_focus": "Verbose prompts, excessive state fields, full content passing, unbounded arrays, redundant I/O",
      "priority_order": [1, 2, 3, 4, 5]
    }
  },
  "keywords": {
    "chinese": {
      "token": "context_explosion",
      "上下文": "context_explosion",
      "爆炸": "context_explosion",
      "太长": "context_explosion",
      "超限": "context_explosion",
      "膨胀": "context_explosion",
      "遗忘": "memory_loss",
      "忘记": "memory_loss",
      "指令丢失": "memory_loss",
      "约束消失": "memory_loss",
      "目标漂移": "memory_loss",
      "状态": "dataflow_break",
      "数据": "dataflow_break",
      "格式": "dataflow_break",
      "不一致": "dataflow_break",
      "丢失": "dataflow_break",
      "损坏": "dataflow_break",
      "agent": "agent_failure",
      "子任务": "agent_failure",
      "失败": "agent_failure",
      "嵌套": "agent_failure",
      "调用": "agent_failure",
      "协调": "agent_failure",
      "慢": "performance",
      "性能": "performance",
      "效率": "performance",
      "延迟": "performance",
      "提示词": "prompt_quality",
      "输出不稳定": "prompt_quality",
      "幻觉": "prompt_quality",
      "架构": "architecture",
      "结构": "architecture",
      "模块": "architecture",
      "耦合": "architecture",
      "扩展": "architecture",
      "错误": "error_handling",
      "异常": "error_handling",
      "恢复": "error_handling",
      "降级": "error_handling",
      "崩溃": "error_handling",
      "输出": "output_quality",
      "质量": "output_quality",
      "验证": "output_quality",
      "不完整": "output_quality",
      "交互": "user_experience",
      "体验": "user_experience",
      "进度": "user_experience",
      "反馈": "user_experience",
      "不清晰": "user_experience",
      "中间文件": "authoring_principles_violation",
      "临时文件": "authoring_principles_violation",
      "文件中转": "authoring_principles_violation",
      "state膨胀": "authoring_principles_violation",
      "token消耗": "token_consumption",
      "token优化": "token_consumption",
      "产出简化": "token_consumption",
      "冗长": "token_consumption",
      "精简": "token_consumption"
    },
    "english": {
      "token": "context_explosion",
      "context": "context_explosion",
      "explosion": "context_explosion",
      "overflow": "context_explosion",
      "bloat": "context_explosion",
      "forget": "memory_loss",
      "lost": "memory_loss",
      "drift": "memory_loss",
      "constraint": "memory_loss",
      "goal": "memory_loss",
      "state": "dataflow_break",
      "data": "dataflow_break",
      "format": "dataflow_break",
      "inconsistent": "dataflow_break",
      "corrupt": "dataflow_break",
      "agent": "agent_failure",
      "subtask": "agent_failure",
      "fail": "agent_failure",
      "nested": "agent_failure",
      "call": "agent_failure",
      "coordinate": "agent_failure",
      "slow": "performance",
      "performance": "performance",
      "efficiency": "performance",
      "latency": "performance",
      "prompt": "prompt_quality",
      "unstable": "prompt_quality",
      "hallucination": "prompt_quality",
      "architecture": "architecture",
      "structure": "architecture",
      "module": "architecture",
      "coupling": "architecture",
      "error": "error_handling",
      "exception": "error_handling",
      "recovery": "error_handling",
      "crash": "error_handling",
      "output": "output_quality",
      "quality": "output_quality",
      "validation": "output_quality",
      "incomplete": "output_quality",
      "interaction": "user_experience",
      "ux": "user_experience",
      "progress": "user_experience",
      "feedback": "user_experience",
      "intermediate": "authoring_principles_violation",
      "temp": "authoring_principles_violation",
      "relay": "authoring_principles_violation",
      "verbose": "token_consumption",
      "minimize": "token_consumption",
      "compress": "token_consumption",
      "simplify": "token_consumption",
      "reduction": "token_consumption"
    }
  },
  "category_labels": {
    "context_explosion": "Context Explosion",
    "memory_loss": "Long-tail Forgetting",
    "dataflow_break": "Data Flow Disruption",
    "agent_failure": "Agent Coordination Failure",
    "prompt_quality": "Prompt Quality",
    "architecture": "Architecture",
    "performance": "Performance",
    "error_handling": "Error Handling",
    "output_quality": "Output Quality",
    "user_experience": "User Experience",
    "authoring_principles_violation": "Authoring Principles Violation",
    "token_consumption": "Token Consumption",
    "custom": "Custom"
  },
  "category_labels_chinese": {
    "context_explosion": "Context Explosion",
    "memory_loss": "Long-tail Forgetting",
    "dataflow_break": "Data Flow Disruption",
    "agent_failure": "Agent Coordination Failure",
    "prompt_quality": "Prompt Quality",
    "architecture": "Architecture Issues",
    "performance": "Performance Issues",
    "error_handling": "Error Handling",
    "output_quality": "Output Quality",
    "user_experience": "User Experience",
    "authoring_principles_violation": "Authoring Principles Violation",
    "token_consumption": "Token Consumption Optimization",
    "custom": "Other Issues"
  },
  "category_descriptions": {
    "context_explosion": "Token accumulation causing prompt size to grow unbounded",
    "memory_loss": "Early instructions or constraints lost in later phases",
    "dataflow_break": "State data inconsistency between phases",
    "agent_failure": "Sub-agent call failures or abnormal results",
    "prompt_quality": "Vague prompts causing unstable outputs",
    "architecture": "Improper phase division or module structure",
    "performance": "Slow execution or high token consumption",
    "error_handling": "Incomplete error recovery mechanisms",
    "output_quality": "Output validation or completeness issues",
    "user_experience": "Interaction or feedback clarity issues",
    "authoring_principles_violation": "Violation of skill authoring principles",
    "token_consumption": "Excessive token usage from verbose prompts, large state objects, or redundant I/O patterns",
    "custom": "Requires custom analysis"
  },
  "fix_priority_order": {
    "P0": ["dataflow_break", "authoring_principles_violation"],
    "P1": ["agent_failure"],
    "P2": ["context_explosion", "token_consumption"],
    "P3": ["memory_loss"]
  },
  "cross_category_dependencies": {
    "context_explosion": ["memory_loss"],
    "dataflow_break": ["agent_failure"],
    "agent_failure": ["context_explosion"]
  },
  "fallback": {
    "strategies": ["custom"],
    "risk_levels": ["medium"],
    "has_fix": true,
    "needs_gemini_analysis": true
  }
}
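To illustrate how a consumer of this file might resolve a category to its strategies in priority order, a sketch follows; the inline `mappings` object is a trimmed excerpt of the JSON above, and the `strategiesFor` helper is hypothetical:

```javascript
// Illustrative sketch: resolve a category's strategies, ordered by priority_order,
// falling back to the "fallback" entry for unknown categories.
// `mappings` is a trimmed excerpt of category-mappings.json, inlined for the example.
const mappings = {
  categories: {
    context_explosion: {
      strategies: ['sliding_window', 'path_reference', 'context_summarization', 'structured_state'],
      risk_levels: ['low', 'low', 'low', 'medium'],
      priority_order: [1, 2, 3, 4]
    }
  },
  fallback: { strategies: ['custom'], risk_levels: ['medium'] }
};

function strategiesFor(category) {
  const entry = mappings.categories[category];
  if (!entry) return mappings.fallback.strategies;
  return entry.priority_order
    .map((priority, i) => ({ priority, strategy: entry.strategies[i] }))
    .sort((a, b) => a.priority - b.priority)
    .map(x => x.strategy);
}
```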
212 .claude/skills/skill-tuning/specs/dimension-mapping.md (Normal file)
@@ -0,0 +1,212 @@
# Dimension to Spec Mapping

Rules for mapping dimension keywords to specs, used for automatic matching in the action-analyze-requirements phase.

## When to Use

| Phase | Usage |
|-------|-------|
| action-analyze-requirements | Automatic dimension → category → spec matching |
| action-propose-fixes | Reference for strategy selection |

---

## Keyword → Category Mapping

Maps the dimensions in the user's description to problem categories based on keywords.

### Chinese/English Keyword Table

| Keywords (Chinese) | Keywords (English) | Primary Category | Secondary |
|--------------------|--------------------|------------------|-----------|
| token, 上下文, 爆炸, 太长, 超限, 膨胀 | token, context, explosion, overflow, bloat | context_explosion | - |
| 遗忘, 忘记, 指令丢失, 约束消失, 目标漂移 | forget, lost, drift, constraint, goal | memory_loss | - |
| 状态, 数据, 格式, 不一致, 丢失, 损坏 | state, data, format, inconsistent, corrupt | dataflow_break | - |
| agent, 子任务, 失败, 嵌套, 调用, 协调 | agent, subtask, fail, nested, call, coordinate | agent_failure | - |
| 慢, 性能, 效率, token 消耗, 延迟 | slow, performance, efficiency, latency | performance | context_explosion |
| 提示词, prompt, 输出不稳定, 幻觉 | prompt, unstable, hallucination | prompt_quality | - |
| 架构, 结构, 模块, 耦合, 扩展 | architecture, structure, module, coupling | architecture | - |
| 错误, 异常, 恢复, 降级, 崩溃 | error, exception, recovery, crash | error_handling | agent_failure |
| 输出, 质量, 格式, 验证, 不完整 | output, quality, validation, incomplete | output_quality | - |
| 交互, 体验, 进度, 反馈, 不清晰 | interaction, ux, progress, feedback | user_experience | - |
| 重复, 冗余, 多处定义, 相同内容 | duplicate, redundant, multiple definitions | doc_redundancy | - |
| 冲突, 不一致, 定义不同, 矛盾 | conflict, inconsistent, mismatch, contradiction | doc_conflict | - |

### Matching Algorithm

```javascript
function matchCategory(keywords) {
  const categoryScores = {};

  for (const keyword of keywords) {
    const normalizedKeyword = keyword.toLowerCase();

    for (const [category, categoryKeywords] of Object.entries(KEYWORD_MAP)) {
      if (categoryKeywords.some(k => normalizedKeyword.includes(k) || k.includes(normalizedKeyword))) {
        categoryScores[category] = (categoryScores[category] || 0) + 1;
      }
    }
  }

  // Return the highest-scoring category
  const sorted = Object.entries(categoryScores).sort((a, b) => b[1] - a[1]);

  if (sorted.length === 0) return null;

  // If the top two scores are tied, return multiple categories (needs clarification)
  if (sorted.length > 1 && sorted[0][1] === sorted[1][1]) {
    return {
      primary: sorted[0][0],
      secondary: sorted[1][0],
      ambiguous: true
    };
  }

  return {
    primary: sorted[0][0],
    secondary: sorted[1]?.[0] || null,
    ambiguous: false
  };
}
```
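The algorithm assumes a `KEYWORD_MAP` constant built from the keyword table; a self-contained run with a trimmed two-category map (the keyword lists here are a small subset chosen for illustration):

```javascript
// Self-contained run of the matching algorithm with a trimmed KEYWORD_MAP.
const KEYWORD_MAP = {
  context_explosion: ['token', 'context', 'overflow'],
  memory_loss: ['forget', 'lost', 'drift']
};

function matchCategory(keywords) {
  const categoryScores = {};
  for (const keyword of keywords) {
    const normalized = keyword.toLowerCase();
    for (const [category, categoryKeywords] of Object.entries(KEYWORD_MAP)) {
      if (categoryKeywords.some(k => normalized.includes(k) || k.includes(normalized))) {
        categoryScores[category] = (categoryScores[category] || 0) + 1;
      }
    }
  }
  const sorted = Object.entries(categoryScores).sort((a, b) => b[1] - a[1]);
  if (sorted.length === 0) return null;
  if (sorted.length > 1 && sorted[0][1] === sorted[1][1]) {
    return { primary: sorted[0][0], secondary: sorted[1][0], ambiguous: true };
  }
  return { primary: sorted[0][0], secondary: sorted[1]?.[0] || null, ambiguous: false };
}
```

Two matching keywords for one category produce an unambiguous result; one keyword from each category produces a tie, which is what triggers the needs-clarification path.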
---

## Category → Taxonomy Pattern Mapping

Maps problem categories to the detection patterns in problem-taxonomy.md.

| Category | Pattern IDs | Detection Focus |
|----------|-------------|-----------------|
| context_explosion | CTX-001, CTX-002, CTX-003, CTX-004, CTX-005 | Token accumulation, content passing patterns |
| memory_loss | MEM-001, MEM-002, MEM-003, MEM-004, MEM-005 | Constraint propagation, checkpoint mechanisms |
| dataflow_break | DF-001, DF-002, DF-003, DF-004, DF-005 | State storage, schema validation |
| agent_failure | AGT-001, AGT-002, AGT-003, AGT-004, AGT-005, AGT-006 | Error handling, result validation |
| prompt_quality | - | (no built-in detection; needs Gemini analysis) |
| architecture | - | (no built-in detection; needs Gemini analysis) |
| performance | CTX-001, CTX-003 | (reuses context detection) |
| error_handling | AGT-001, AGT-002 | (reuses agent detection) |
| output_quality | - | (no built-in detection; needs Gemini analysis) |
| user_experience | - | (no built-in detection; needs Gemini analysis) |
| doc_redundancy | DOC-RED-001, DOC-RED-002, DOC-RED-003 | Duplicate definition detection |
| doc_conflict | DOC-CON-001, DOC-CON-002 | Conflicting definition detection |

---

## Category → Strategy Mapping

Maps problem categories to the fix strategies in tuning-strategies.md.

### Core Categories (complete strategies available)

| Category | Available Strategies | Risk Level |
|----------|---------------------|------------|
| context_explosion | sliding_window, path_reference, context_summarization, structured_state | Low-Medium |
| memory_loss | constraint_injection, state_constraints_field, checkpoint_restore, goal_embedding | Low-Medium |
| dataflow_break | state_centralization, schema_enforcement, field_normalization | Low-Medium |
| agent_failure | error_wrapping, result_validation, flatten_nesting | Low-Medium |
| doc_redundancy | consolidate_to_ssot, centralize_mapping_config | Low-Medium |
| doc_conflict | reconcile_conflicting_definitions | Low |

### Extended Categories (strategies generated via Gemini)

| Category | Available Strategies | Risk Level |
|----------|---------------------|------------|
| prompt_quality | structured_prompt, output_schema, grounding_context, format_enforcement | Low |
| architecture | phase_decomposition, interface_contracts, plugin_architecture, state_machine | Medium-High |
| performance | token_budgeting, parallel_execution, result_caching, lazy_loading | Low-Medium |
| error_handling | graceful_degradation, error_propagation, structured_logging, error_context | Low |
| output_quality | quality_gates, output_validation, template_enforcement, completeness_check | Low |
| user_experience | progress_tracking, status_communication, interactive_checkpoints, guided_workflow | Low |

---

## Coverage Rules

### Satisfaction Criteria

Criteria for deciding whether a requirement is "satisfied":

```javascript
function evaluateSatisfaction(specMatch) {
  // Core criterion: a usable fix strategy exists
  const hasFix = specMatch.strategy_match !== null &&
                 specMatch.strategy_match.strategies.length > 0;

  // Auxiliary criterion: a detection method exists
  const hasDetection = specMatch.taxonomy_match !== null;

  return {
    satisfied: hasFix,
    detection_available: hasDetection,
    needs_gemini: !hasDetection // Gemini analysis is needed when there is no built-in detection
  };
}
```

### Coverage Status Thresholds

| Status | Condition |
|--------|-----------|
| satisfied | coverage_rate >= 80% |
| partial | 50% <= coverage_rate < 80% |
| unsatisfied | coverage_rate < 50% |
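The thresholds above reduce to a small mapping function; the name `coverageStatus` is illustrative:

```javascript
// Illustrative sketch: map a coverage rate (0-100) to the status values above.
function coverageStatus(coverageRate) {
  if (coverageRate >= 80) return 'satisfied';
  if (coverageRate >= 50) return 'partial';
  return 'unsatisfied';
}
```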
---

## Fallback Rules

Handling when a dimension cannot be matched to a specific category:

```javascript
function handleUnmatchedDimension(dimension) {
  return {
    dimension_id: dimension.id,
    taxonomy_match: null,
    strategy_match: {
      strategies: ['custom'], // Fall back to the custom strategy
      risk_levels: ['medium']
    },
    has_fix: true, // The custom strategy counts as "satisfiable"
    needs_gemini_analysis: true,
    fallback_reason: 'no_keyword_match'
  };
}
```

---

## Usage Example

```javascript
// Input: user description "The skill runs too slowly, and it sometimes forgets the original instructions"

// Step 1: Gemini decomposes the description into dimensions
const dimensions = [
  { id: 'DIM-001', description: 'Runs too slowly', keywords: ['慢', '执行'], confidence: 0.9 },
  { id: 'DIM-002', description: 'Forgets the original instructions', keywords: ['忘记', '指令'], confidence: 0.85 }
];

// Step 2: Match categories
// DIM-001 → performance (慢)
// DIM-002 → memory_loss (忘记, 指令)

// Step 3: Match specs
const specMatches = [
  {
    dimension_id: 'DIM-001',
    taxonomy_match: { category: 'performance', pattern_ids: ['CTX-001', 'CTX-003'], severity_hint: 'medium' },
    strategy_match: { strategies: ['token_budgeting', 'parallel_execution'], risk_levels: ['low', 'low'] },
    has_fix: true
  },
  {
    dimension_id: 'DIM-002',
    taxonomy_match: { category: 'memory_loss', pattern_ids: ['MEM-001', 'MEM-002'], severity_hint: 'high' },
    strategy_match: { strategies: ['constraint_injection', 'checkpoint_restore'], risk_levels: ['low', 'low'] },
    has_fix: true
  }
];

// Step 4: Evaluate coverage
// 2/2 = 100% → satisfied
```
318 .claude/skills/skill-tuning/specs/problem-taxonomy.md (Normal file)
@@ -0,0 +1,318 @@
# Problem Taxonomy

Classification of skill execution issues with detection patterns and severity criteria.

## When to Use

| Phase | Usage | Section |
|-------|-------|---------|
| All Diagnosis Actions | Issue classification | All sections |
| action-propose-fixes | Strategy selection | Fix Mapping |
| action-generate-report | Severity assessment | Severity Criteria |

---

## Problem Categories
### 0. Authoring Principles Violation (P0)
|
||||
|
||||
**Definition**: 违反 skill 撰写首要准则(简洁高效、去除存储、上下文流转)。
|
||||
|
||||
**Root Causes**:
|
||||
- 不必要的中间文件存储
|
||||
- State schema 过度膨胀
|
||||
- 文件中转代替上下文传递
|
||||
- 重复数据存储
|
||||
|
||||
**Detection Patterns**:
|
||||
|
||||
| Pattern ID | Regex/Check | Description |
|
||||
|------------|-------------|-------------|
|
||||
| APV-001 | `/Write\([^)]*temp-|intermediate-/` | 中间文件写入 |
|
||||
| APV-002 | `/Write\([^)]+\)[\s\S]{0,50}Read\([^)]+\)/` | 写后立即读(文件中转) |
|
||||
| APV-003 | State schema > 15 fields | State 字段过多 |
|
||||
| APV-004 | `/_history\s*[.=].*push|concat/` | 无限增长数组 |
|
||||
| APV-005 | `/debug_|_cache|_temp/` in state | 调试/缓存字段残留 |
|
||||
| APV-006 | Same data in multiple state fields | 重复存储 |
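Checks like these can be run mechanically over a skill's source files. A minimal sketch, assuming a plain-text scan; the `APV_PATTERNS` array and `detectAPV` helper are illustrative names, not part of this spec, and only two of the six checks are shown:

```javascript
// Minimal pattern-based detector (illustrative; only APV-001 and APV-004 shown).
const APV_PATTERNS = [
  { id: 'APV-001', regex: /Write\([^)]*(temp-|intermediate-)/, description: 'Intermediate file write' },
  { id: 'APV-004', regex: /_history\s*[.=].*(push|concat)/, description: 'Unbounded array growth' }
];

function detectAPV(source) {
  // Return the IDs of every pattern that matches the skill source text.
  return APV_PATTERNS.filter(p => p.regex.test(source)).map(p => p.id);
}

const sample = "Write(`${workDir}/temp-step1.json`, data); state.run_history = state.run_history.concat(entry);";
// detectAPV(sample) → ['APV-001', 'APV-004']
```

Structural checks such as APV-003 (field counting) need schema parsing rather than a regex, which is why the table mixes regexes with plain-language checks.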
**Impact Levels**:
- **Critical**: More than 5 intermediate files; severe violation of the principles
- **High**: More than 20 state fields, or file relay present
- **Medium**: Leftover debug fields or minor redundancy
- **Low**: Minor naming inconsistencies

---
### 1. Context Explosion (P2)

**Definition**: Excessive token accumulation causing prompt size to grow unbounded.

**Root Causes**:
- Unbounded conversation history
- Full content passing instead of references
- Missing summarization mechanisms
- Agent returning full output instead of path+summary

**Detection Patterns**:

| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| CTX-001 | `/history\s*[.=].*push\|concat/` | History array growth |
| CTX-002 | `/JSON\.stringify\s*\(\s*state\s*\)/` | Full state serialization |
| CTX-003 | `/Read\([^)]+\)\s*[\+,]/` | Multiple file content concatenation |
| CTX-004 | `/return\s*\{[^}]*content:/` | Agent returning full content |
| CTX-005 | File length > 5000 chars without summarize | Long prompt without compression |

**Impact Levels**:
- **Critical**: Context exceeds model limit (128K tokens)
- **High**: Context > 50K tokens per iteration
- **Medium**: Context grows 10%+ per iteration
- **Low**: Potential for growth but currently manageable

---

### 2. Long-tail Forgetting (P3)

**Definition**: Loss of early instructions, constraints, or goals in long execution chains.

**Root Causes**:
- No explicit constraint propagation
- Reliance on implicit context
- Missing checkpoint/restore mechanisms
- State schema without requirements field

**Detection Patterns**:

| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| MEM-001 | Later phases missing constraint reference | Constraint not carried forward |
| MEM-002 | `/\[TASK\][^[]*(?!\[CONSTRAINTS\])/` | Task without constraints section |
| MEM-003 | Key phases without checkpoint | Missing state preservation |
| MEM-004 | State schema lacks `original_requirements` | No constraint persistence |
| MEM-005 | No verification phase | Output not checked against intent |

**Impact Levels**:
- **Critical**: Original goal completely lost
- **High**: Key constraints ignored in output
- **Medium**: Some requirements missing
- **Low**: Minor goal drift

---
### 3. Data Flow Disruption (P0)

**Definition**: Inconsistent state management causing data loss or corruption.

**Root Causes**:
- Multiple state storage locations
- Inconsistent field naming
- Missing schema validation
- Format transformation without normalization

**Detection Patterns**:

| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| DF-001 | Multiple state file writes | Scattered state storage |
| DF-002 | Same concept, different names | Field naming inconsistency |
| DF-003 | JSON.parse without validation | Missing schema validation |
| DF-004 | Files written but never read | Orphaned outputs |
| DF-005 | Autonomous skill without state-schema | Undefined state structure |

**Impact Levels**:
- **Critical**: Data loss or corruption
- **High**: State inconsistency between phases
- **Medium**: Potential for inconsistency
- **Low**: Minor naming inconsistencies

---

### 4. Agent Coordination Failure (P1)

**Definition**: Fragile agent call patterns causing cascading failures.

**Root Causes**:
- Missing error handling in Task calls
- No result validation
- Inconsistent agent configurations
- Deeply nested agent calls

**Detection Patterns**:

| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| AGT-001 | Task without try-catch | Missing error handling |
| AGT-002 | Result used without validation | No return value check |
| AGT-003 | > 3 different agent types | Agent type proliferation |
| AGT-004 | Nested Task in prompt | Agent calling agent |
| AGT-005 | Task used but not in allowed-tools | Tool declaration mismatch |
| AGT-006 | Multiple return formats | Inconsistent agent output |

**Impact Levels**:
- **Critical**: Workflow crash on agent failure
- **High**: Unpredictable agent behavior
- **Medium**: Occasional coordination issues
- **Low**: Minor inconsistencies

---
### 5. Documentation Redundancy (P5)

**Definition**: The same definition (e.g. a State Schema, mapping table, or type definition) appears in multiple files, making maintenance difficult and risking inconsistency.

**Root Causes**:
- No single source of truth (SSOT)
- Copy-paste instead of references
- Hardcoded configuration instead of centralized management

**Detection Patterns**:

| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| DOC-RED-001 | Cross-file semantic comparison | Duplicate definitions of core concepts such as the State Schema |
| DOC-RED-002 | Code block vs. spec table comparison | Values hardcoded in action files duplicating spec docs |
| DOC-RED-003 | `/interface\s+(\w+)/` same-name scan | interface/type defined in multiple places |

**Impact Levels**:
- **High**: Core definitions (State Schema, mapping tables) duplicated
- **Medium**: Type definitions duplicated
- **Low**: Example code duplicated

---
### 6. Token Consumption (P6)

**Definition**: Excessive token usage from verbose prompts, large state objects, or inefficient I/O patterns.

**Root Causes**:
- Long static prompts without compression
- State schema with too many fields
- Full content embedding instead of path references
- Arrays growing unbounded without sliding windows
- Write-then-read file relay patterns

**Detection Patterns**:

| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| TKN-001 | File size > 4KB | Verbose prompt files |
| TKN-002 | State fields > 15 | Excessive state schema |
| TKN-003 | `/Read\([^)]+\)\s*[\+,]/` | Full content passing |
| TKN-004 | `/.push\|concat(?!.*\.slice)/` | Unbounded array growth |
| TKN-005 | `/Write\([^)]+\)[\s\S]{0,100}Read\([^)]+\)/` | Write-then-read pattern |

**Impact Levels**:
- **High**: Multiple TKN-003/TKN-004 issues causing significant token waste
- **Medium**: Several verbose files or state bloat
- **Low**: Minor optimization opportunities

---
### 7. Documentation Conflict (P7)

**Definition**: The same concept is defined inconsistently across files, leading to unpredictable behavior and misleading documentation.

**Root Causes**:
- Definitions updated in one place but not synchronized elsewhere
- Implementation drifting from documentation
- No consistency checks

**Detection Patterns**:

| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| DOC-CON-001 | Key-value consistency check | The same key (e.g. a priority) has different values in different files |
| DOC-CON-002 | Implementation vs. docs comparison | Hardcoded configuration disagrees with its documented counterpart |

**Impact Levels**:
- **Critical**: Conflicting priority/category definitions
- **High**: Inconsistent strategy mappings
- **Medium**: Examples that do not match actual behavior

---
## Severity Criteria

### Global Severity Matrix

| Severity | Definition | Action Required |
|----------|------------|-----------------|
| **Critical** | Blocks execution or causes data loss | Immediate fix required |
| **High** | Significantly impacts reliability | Should fix before deployment |
| **Medium** | Affects quality or maintainability | Fix in next iteration |
| **Low** | Minor improvement opportunity | Optional fix |

### Severity Calculation

```javascript
function calculateIssueSeverity(issue) {
  const weights = {
    impact_on_execution: 40,  // Does it block workflow?
    data_integrity_risk: 30,  // Can it cause data loss?
    frequency: 20,            // How often does it occur?
    complexity_to_fix: 10     // How hard to fix?
  };

  let score = 0;

  // Impact on execution
  if (issue.blocks_execution) score += weights.impact_on_execution;
  else if (issue.degrades_execution) score += weights.impact_on_execution * 0.5;

  // Data integrity
  if (issue.causes_data_loss) score += weights.data_integrity_risk;
  else if (issue.causes_inconsistency) score += weights.data_integrity_risk * 0.5;

  // Frequency
  if (issue.occurs_every_run) score += weights.frequency;
  else if (issue.occurs_sometimes) score += weights.frequency * 0.5;

  // Complexity (inverse - easier to fix = higher priority)
  if (issue.fix_complexity === 'low') score += weights.complexity_to_fix;
  else if (issue.fix_complexity === 'medium') score += weights.complexity_to_fix * 0.5;

  // Map score to severity
  if (score >= 70) return 'critical';
  if (score >= 50) return 'high';
  if (score >= 30) return 'medium';
  return 'low';
}
```

---
## Fix Mapping

| Problem Type | Recommended Strategies | Priority Order |
|--------------|----------------------|----------------|
| **Authoring Principles Violation** | eliminate_intermediate_files, minimize_state, context_passing | 1, 2, 3 |
| Context Explosion | sliding_window, path_reference, context_summarization | 1, 2, 3 |
| Long-tail Forgetting | constraint_injection, state_constraints_field, checkpoint | 1, 2, 3 |
| Data Flow Disruption | state_centralization, schema_enforcement, field_normalization | 1, 2, 3 |
| Agent Coordination | error_wrapping, result_validation, flatten_nesting | 1, 2, 3 |
| **Token Consumption** | prompt_compression, lazy_loading, output_minimization, state_field_reduction | 1, 2, 3, 4 |
| **Documentation Redundancy** | consolidate_to_ssot, centralize_mapping_config | 1, 2 |
| **Documentation Conflict** | reconcile_conflicting_definitions | 1 |
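The mapping above is a priority-ordered lookup. A minimal sketch of using it in `action-propose-fixes`; the `FIX_MAPPING` key names and the `strategiesFor` helper are illustrative, not identifiers defined by this spec:

```javascript
// Fix Mapping table expressed as a lookup; strategies are in priority order.
const FIX_MAPPING = {
  authoring_principles_violation: ['eliminate_intermediate_files', 'minimize_state', 'context_passing'],
  context_explosion: ['sliding_window', 'path_reference', 'context_summarization'],
  long_tail_forgetting: ['constraint_injection', 'state_constraints_field', 'checkpoint'],
  data_flow_disruption: ['state_centralization', 'schema_enforcement', 'field_normalization'],
  agent_coordination: ['error_wrapping', 'result_validation', 'flatten_nesting'],
  token_consumption: ['prompt_compression', 'lazy_loading', 'output_minimization', 'state_field_reduction'],
  documentation_redundancy: ['consolidate_to_ssot', 'centralize_mapping_config'],
  documentation_conflict: ['reconcile_conflicting_definitions']
};

function strategiesFor(problemType) {
  // Unknown problem types yield no strategies rather than throwing.
  return FIX_MAPPING[problemType] || [];
}

// strategiesFor('context_explosion')[0] → 'sliding_window'
```

Keeping the table and any such lookup in one place avoids the Documentation Redundancy (P5) issue this spec itself describes.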
---

## Cross-Category Dependencies

Some issues may trigger others:

```
Context Explosion ──→ Long-tail Forgetting
(Large context causes important info to be pushed out)

Data Flow Disruption ──→ Agent Coordination Failure
(Inconsistent data causes agents to fail)

Agent Coordination Failure ──→ Context Explosion
(Failed retries add to context)
```

When fixing, address in this order:
1. **P0 Data Flow** - Foundation for other fixes
2. **P1 Agent Coordination** - Stability
3. **P2 Context Explosion** - Efficiency
4. **P3 Long-tail Forgetting** - Quality
263  .claude/skills/skill-tuning/specs/quality-gates.md  Normal file
@@ -0,0 +1,263 @@
# Quality Gates

Quality thresholds and verification criteria for skill tuning.

## When to Use

| Phase | Usage | Section |
|-------|-------|---------|
| action-generate-report | Calculate quality score | Scoring |
| action-verify | Check quality gates | Gate Definitions |
| action-complete | Final assessment | Pass Criteria |

---

## Quality Dimensions

### 1. Issue Severity Distribution (40%)

Measures the severity profile of identified issues.

| Metric | Weight | Calculation |
|--------|--------|-------------|
| Critical Issues | -25 each | High penalty |
| High Issues | -15 each | Significant penalty |
| Medium Issues | -5 each | Moderate penalty |
| Low Issues | -1 each | Minor penalty |

**Score Calculation**:
```javascript
function calculateSeverityScore(issues) {
  const weights = { critical: 25, high: 15, medium: 5, low: 1 };
  const deductions = issues.reduce((sum, issue) =>
    sum + (weights[issue.severity] || 0), 0);
  return Math.max(0, 100 - deductions);
}
```
### 2. Fix Effectiveness (30%)

Measures success rate of applied fixes.

| Metric | Weight | Threshold |
|--------|--------|-----------|
| Fixes Verified Pass | +30 | > 80% pass rate |
| Fixes Verified Fail | -20 | < 50% triggers review |
| Issues Resolved | +10 | Per resolved issue |

**Score Calculation**:
```javascript
function calculateFixScore(appliedFixes) {
  const total = appliedFixes.length;
  if (total === 0) return 100; // No fixes needed = good

  const passed = appliedFixes.filter(f => f.verification_result === 'pass').length;
  return Math.round((passed / total) * 100);
}
```

### 3. Coverage Completeness (20%)

Measures diagnosis coverage across all areas.

| Metric | Weight | Threshold |
|--------|--------|-----------|
| All 4 diagnosis complete | +20 | Full coverage |
| 3 diagnosis complete | +15 | Good coverage |
| 2 diagnosis complete | +10 | Partial coverage |
| < 2 diagnosis complete | +0 | Insufficient |

### 4. Iteration Efficiency (10%)

Measures how quickly issues are resolved.

| Metric | Weight | Threshold |
|--------|--------|-----------|
| Resolved in 1 iteration | +10 | Excellent |
| Resolved in 2 iterations | +7 | Good |
| Resolved in 3 iterations | +4 | Acceptable |
| > 3 iterations | +0 | Needs improvement |

---
## Gate Definitions

### Gate: PASS

**Threshold**: Quality Score >= 80 AND Critical Issues = 0 AND High Issues <= 2

**Meaning**: Skill is production-ready with minor issues.

**Actions**:
- Complete tuning session
- Generate summary report
- No further fixes required

### Gate: REVIEW

**Threshold**: Quality Score 60-79 OR High Issues 3-5

**Meaning**: Skill has issues requiring attention.

**Actions**:
- Review remaining issues
- Apply additional fixes if possible
- May require manual intervention

### Gate: FAIL

**Threshold**: Quality Score < 60 OR Critical Issues > 0 OR High Issues > 5

**Meaning**: Skill has serious issues blocking deployment.

**Actions**:
- Must fix critical issues
- Re-run diagnosis after fixes
- Consider architectural review

---
## Quality Score Calculation

```javascript
function calculateQualityScore(state) {
  // Dimension 1: Severity (40%)
  const severityScore = calculateSeverityScore(state.issues);

  // Dimension 2: Fix Effectiveness (30%)
  const fixScore = calculateFixScore(state.applied_fixes);

  // Dimension 3: Coverage (20%)
  const diagnosisCount = Object.values(state.diagnosis)
    .filter(d => d !== null).length;
  const coverageScore = [0, 0, 10, 15, 20][diagnosisCount] || 0;

  // Dimension 4: Efficiency (10%)
  const efficiencyScore = state.iteration_count <= 1 ? 10 :
                          state.iteration_count <= 2 ? 7 :
                          state.iteration_count <= 3 ? 4 : 0;

  // Weighted total
  const total = (severityScore * 0.4) +
                (fixScore * 0.3) +
                (coverageScore * 1.0) +  // Already scaled to 20
                (efficiencyScore * 1.0); // Already scaled to 10

  return Math.round(total);
}

function determineQualityGate(state) {
  const score = calculateQualityScore(state);
  const criticalCount = state.issues.filter(i => i.severity === 'critical').length;
  const highCount = state.issues.filter(i => i.severity === 'high').length;

  if (criticalCount > 0) return 'fail';
  if (highCount > 5) return 'fail';
  if (score < 60) return 'fail';

  if (highCount > 2) return 'review';
  if (score < 80) return 'review';

  return 'pass';
}
```

---
## Verification Criteria

### For Each Issue Type

#### Context Explosion Issues
- [ ] Token count does not grow unbounded
- [ ] History limited to reasonable size
- [ ] No full content in prompts (paths used instead)
- [ ] Agent returns are compact

#### Long-tail Forgetting Issues
- [ ] Constraints visible in all phase prompts
- [ ] State schema includes requirements field
- [ ] Checkpoints exist at key milestones
- [ ] Output matches original constraints

#### Data Flow Issues
- [ ] Single state.json after execution
- [ ] No orphan state files
- [ ] Schema validation active
- [ ] Consistent field naming

#### Agent Coordination Issues
- [ ] All Task calls have error handling
- [ ] Agent results validated before use
- [ ] No nested agent calls
- [ ] Tool declarations match usage

---
## Iteration Control

### Max Iterations

Default: 5 iterations

**Rationale**:
- Each iteration may introduce new issues
- Diminishing returns after 3-4 iterations
- Prevents infinite loops

### Iteration Exit Criteria

```javascript
function shouldContinueIteration(state) {
  // Exit if quality gate passed
  if (state.quality_gate === 'pass') return false;

  // Exit if max iterations reached
  if (state.iteration_count >= state.max_iterations) return false;

  // Exit if no improvement in last 2 iterations
  if (state.iteration_count >= 2) {
    const recentHistory = state.action_history.slice(-10);
    const issuesResolvedRecently = recentHistory.filter(a =>
      a.action === 'action-verify' && a.result === 'success'
    ).length;

    if (issuesResolvedRecently === 0) {
      console.log('No progress in recent iterations, stopping.');
      return false;
    }
  }

  // Continue if critical/high issues remain
  const hasUrgentIssues = state.issues.some(i =>
    i.severity === 'critical' || i.severity === 'high'
  );

  return hasUrgentIssues;
}
```

---

## Reporting Format

### Quality Summary Table

| Dimension | Score | Weight | Weighted |
|-----------|-------|--------|----------|
| Severity Distribution | {score}/100 | 40% | {weighted} |
| Fix Effectiveness | {score}/100 | 30% | {weighted} |
| Coverage Completeness | {score}/20 | 20% | {score} |
| Iteration Efficiency | {score}/10 | 10% | {score} |
| **Total** | | | **{total}/100** |

### Gate Status

```
Quality Gate: {PASS|REVIEW|FAIL}

Criteria:
- Quality Score: {score} (threshold: 60)
- Critical Issues: {count} (threshold: 0)
- High Issues: {count} (threshold: 5)
```
189  .claude/skills/skill-tuning/specs/skill-authoring-principles.md  Normal file
@@ -0,0 +1,189 @@
# Skill Authoring Principles

The primary principles for writing skills. All diagnosis and optimization follows them.

---

## Core Principles

```
Concise and efficient → remove irrelevant storage → remove intermediate storage → pass data through context
```

---

## 1. Concise and Efficient

**Principle**: Minimal implementation; do only what is necessary.

| DO | DON'T |
|----|-------|
| Single-responsibility phases | Bloated multi-purpose phases |
| Direct data paths | Roundabout processing flows |
| Only necessary fields | Redundant schema definitions |
| Precise prompts | Over-detailed instructions |

**Detection patterns**:
- Phase file > 200 lines → split it
- State schema > 20 fields → trim it
- Same data defined in multiple places → deduplicate

---
## 2. Remove Irrelevant Storage

**Principle**: Do not store data you do not need.

| DO | DON'T |
|----|-------|
| Store only final results | Store debug information |
| Store path references | Store full content copies |
| Store necessary indexes | Store complete history |

**Detection patterns**:
```javascript
// BAD: store full content
state.full_analysis_result = longAnalysisOutput;

// GOOD: store path + summary
state.analysis = {
  path: `${workDir}/analysis.json`,
  summary: extractSummary(output),
  key_findings: extractFindings(output)
};
```

**Anti-pattern checklist**:
- `state.debug_*` → delete
- `state.*_history` (unbounded growth) → cap or delete
- `state.*_cache` (session-local) → use an in-memory variable instead
- Duplicate fields → merge

---
## 3. Remove Intermediate Storage

**Principle**: Avoid temporary files and intermediate state files.

| DO | DON'T |
|----|-------|
| Pass results directly | Write a file, then read it back |
| Function return values | Intermediate JSON files |
| Pipelined processing | Per-stage storage |

**Detection patterns**:
```javascript
// BAD: intermediate files
Write(`${workDir}/temp-step1.json`, step1Result);
const step1 = Read(`${workDir}/temp-step1.json`);
const step2Result = process(step1);
Write(`${workDir}/temp-step2.json`, step2Result);

// GOOD: direct flow
const step1Result = await executeStep1();
const step2Result = process(step1Result);
const finalResult = finalize(step2Result);
Write(`${workDir}/final-output.json`, finalResult); // store only the final result
```

**Allowed storage**:
- Final output (the result the user needs)
- Checkpoints (optional, for resuming long workflows)
- Backups (original files before modification)

**Forbidden storage**:
- `temp-*.json`
- `intermediate-*.json`
- `step[N]-output.json`
- `*-draft.md`

---
## 4. Context Flow

**Principle**: Pass data through context, not through files.

| DO | DON'T |
|----|-------|
| Pass via function parameters | Read/write global state |
| Chain return values | Relay through files |
| Embed data in the prompt | Point at external files |

**Pattern**:
```javascript
// Context-flow pattern
async function executePhase(context) {
  const { previousResult, constraints, config } = context;

  const result = await Task({
    prompt: `
[CONTEXT]
Previous: ${JSON.stringify(previousResult)}
Constraints: ${constraints.join(', ')}

[TASK]
Process and return result directly.
`
  });

  return {
    ...context,
    currentResult: result,
    completed: ['phase-name']
  };
}

// Chained execution
let ctx = initialContext;
ctx = await executePhase1(ctx);
ctx = await executePhase2(ctx);
ctx = await executePhase3(ctx);
// ctx carries the full context; no intermediate files
```

**State minimization**:
```typescript
// Store only the necessary state
interface MinimalState {
  status: 'pending' | 'running' | 'completed';
  target: { name: string; path: string };
  result_path: string; // path to the final result
  error?: string;
}
```

---
## Application Scenarios

### Checks During Diagnosis

| Check | Flag on violation |
|-------|-------------------|
| Phase writes temp files | `unnecessary_storage` |
| State contains unbounded `*_history` arrays | `unbounded_state` |
| File read immediately after write | `redundant_io` |
| Full content passed across phases | `context_bloat` |

### Optimization Strategies

| Problem | Strategy |
|---------|----------|
| Too many intermediate files | `eliminate_intermediate_files` |
| State bloat | `minimize_state_schema` |
| Duplicate storage | `deduplicate_storage` |
| File relay | `context_passing` |

---

## Compliance Checklist

```
□ No temp/intermediate file writes
□ State schema < 15 fields
□ No duplicate data storage
□ Phases pass data via context/return values
□ Only final result files are stored
□ No unbounded arrays
□ No leftover debug fields
```
1537  .claude/skills/skill-tuning/specs/tuning-strategies.md  Normal file
File diff suppressed because it is too large
153  .claude/skills/skill-tuning/templates/diagnosis-report.md  Normal file
@@ -0,0 +1,153 @@
# Diagnosis Report Template

Template for individual diagnosis action reports.

## Template

```markdown
# {{diagnosis_type}} Diagnosis Report

**Target Skill**: {{skill_name}}
**Diagnosis Type**: {{diagnosis_type}}
**Executed At**: {{timestamp}}
**Duration**: {{duration_ms}}ms

---

## Summary

| Metric | Value |
|--------|-------|
| Issues Found | {{issues_found}} |
| Severity | {{severity}} |
| Patterns Checked | {{patterns_checked_count}} |
| Patterns Matched | {{patterns_matched_count}} |

---

## Patterns Analyzed

{{#each patterns_checked}}
### {{pattern_name}}

- **Status**: {{status}}
- **Matches**: {{match_count}}
- **Files Affected**: {{affected_files}}

{{/each}}

---

## Issues Identified

{{#if issues.length}}
{{#each issues}}
### {{id}}: {{description}}

| Field | Value |
|-------|-------|
| Type | {{type}} |
| Severity | {{severity}} |
| Location | {{location}} |
| Root Cause | {{root_cause}} |
| Impact | {{impact}} |

**Evidence**:
{{#each evidence}}
- `{{this}}`
{{/each}}

**Suggested Fix**: {{suggested_fix}}

---
{{/each}}
{{else}}
_No issues found in this diagnosis area._
{{/if}}

---

## Recommendations

{{#if recommendations.length}}
{{#each recommendations}}
{{@index}}. {{this}}
{{/each}}
{{else}}
No specific recommendations - area appears healthy.
{{/if}}

---

## Raw Data

Full diagnosis data available at:
`{{output_file}}`
```
## Variable Reference

| Variable | Type | Source |
|----------|------|--------|
| `diagnosis_type` | string | 'context' \| 'memory' \| 'dataflow' \| 'agent' |
| `skill_name` | string | state.target_skill.name |
| `timestamp` | string | ISO timestamp |
| `duration_ms` | number | Execution time |
| `issues_found` | number | issues.length |
| `severity` | string | Calculated severity |
| `patterns_checked` | array | Patterns analyzed |
| `patterns_matched` | array | Patterns with matches |
| `issues` | array | Issue objects |
| `recommendations` | array | String recommendations |
| `output_file` | string | Path to JSON file |

## Usage

```javascript
function renderDiagnosisReport(diagnosis, diagnosisType, skillName, outputFile) {
  return `# ${diagnosisType} Diagnosis Report

**Target Skill**: ${skillName}
**Diagnosis Type**: ${diagnosisType}
**Executed At**: ${new Date().toISOString()}
**Duration**: ${diagnosis.execution_time_ms}ms

---

## Summary

| Metric | Value |
|--------|-------|
| Issues Found | ${diagnosis.issues_found} |
| Severity | ${diagnosis.severity} |
| Patterns Checked | ${diagnosis.details.patterns_checked.length} |
| Patterns Matched | ${diagnosis.details.patterns_matched.length} |

---

## Issues Identified

${diagnosis.details.evidence.map((e, i) => `
### Issue ${i + 1}

- **File**: ${e.file}
- **Pattern**: ${e.pattern}
- **Severity**: ${e.severity}
- **Context**: \`${e.context}\`
`).join('\n')}

---

## Recommendations

${diagnosis.details.recommendations.map((r, i) => `${i + 1}. ${r}`).join('\n')}

---

## Raw Data

Full diagnosis data available at:
\`${outputFile}\`
`;
}
```
204  .claude/skills/skill-tuning/templates/fix-proposal.md  Normal file
@@ -0,0 +1,204 @@
# Fix Proposal Template

Template for fix proposal documentation.

## Template

```markdown
# Fix Proposal: {{fix_id}}

**Strategy**: {{strategy}}
**Risk Level**: {{risk}}
**Issues Addressed**: {{issue_ids}}

---

## Description

{{description}}

## Rationale

{{rationale}}

---

## Affected Files

{{#each changes}}
### {{file}}

**Action**: {{action}}

```diff
{{diff}}
```

{{/each}}

---

## Implementation Steps

{{#each implementation_steps}}
{{@index}}. {{this}}
{{/each}}

---

## Risk Assessment

| Factor | Assessment |
|--------|------------|
| Complexity | {{complexity}} |
| Reversibility | {{#if reversible}}Yes{{else}}No{{/if}} |
| Breaking Changes | {{breaking_changes}} |
| Test Coverage | {{test_coverage}} |

**Overall Risk**: {{risk}}

---

## Verification Steps

{{#each verification_steps}}
- [ ] {{this}}
{{/each}}

---

## Rollback Plan

{{#if rollback_available}}
To rollback this fix:

```bash
{{rollback_command}}
```
{{else}}
_Rollback not available for this fix type._
{{/if}}

---

## Estimated Impact

{{estimated_impact}}
```
|
||||
|
||||
## Variable Reference
|
||||
|
||||
| Variable | Type | Source |
|
||||
|----------|------|--------|
|
||||
| `fix_id` | string | Generated ID (FIX-001) |
|
||||
| `strategy` | string | Fix strategy name |
|
||||
| `risk` | string | 'low' \| 'medium' \| 'high' |
|
||||
| `issue_ids` | array | Related issue IDs |
|
||||
| `description` | string | Human-readable description |
|
||||
| `rationale` | string | Why this fix works |
|
||||
| `changes` | array | File change objects |
|
||||
| `implementation_steps` | array | Step-by-step guide |
|
||||
| `verification_steps` | array | How to verify fix worked |
|
||||
| `estimated_impact` | string | Expected improvement |
|
||||

## Usage

```javascript
function renderFixProposal(fix) {
  return `# Fix Proposal: ${fix.id}

**Strategy**: ${fix.strategy}
**Risk Level**: ${fix.risk}
**Issues Addressed**: ${fix.issue_ids.join(', ')}

---

## Description

${fix.description}

## Rationale

${fix.rationale}

---

## Affected Files

${fix.changes.map(change => `
### ${change.file}

**Action**: ${change.action}

\`\`\`diff
${change.diff || change.new_content?.slice(0, 200) || 'N/A'}
\`\`\`
`).join('\n')}

---

## Verification Steps

${fix.verification_steps.map(step => `- [ ] ${step}`).join('\n')}

---

## Estimated Impact

${fix.estimated_impact}
`;
}
```

## Fix Strategy Templates

### sliding_window

```markdown
## Description
Implement sliding window for conversation history to prevent unbounded growth.

## Changes
- Add MAX_HISTORY constant
- Modify history update logic to slice array
- Update state schema documentation

## Verification
- [ ] Run skill for 10+ iterations
- [ ] Verify history.length <= MAX_HISTORY
- [ ] Check no data loss for recent items
```

### constraint_injection

```markdown
## Description
Add explicit constraint section to each phase prompt.

## Changes
- Add [CONSTRAINTS] section template
- Reference state.original_requirements
- Add reminder before output section

## Verification
- [ ] Check constraints visible in all phases
- [ ] Test with specific constraint
- [ ] Verify output respects constraint
```

### error_wrapping

```markdown
## Description
Wrap all Task calls in try-catch with retry logic.

## Changes
- Create safeTask wrapper function
- Replace direct Task calls
- Add error logging to state

## Verification
- [ ] Simulate agent failure
- [ ] Verify graceful error handling
- [ ] Check retry logic
```

@@ -4,7 +4,7 @@

- All replies use Simplified Chinese
- Technical terms stay in English; a Chinese explanation may be added at first occurrence
- Code content (variable names, comments) stays in English
- Code variable names stay in English; comments use Chinese

## Format Conventions