Compare commits

...

36 Commits

Author SHA1 Message Date
catlog22
623afc1d35 6.3.31 2026-01-15 22:30:57 +08:00
catlog22
085652560a refactor: remove the ccw cli internal timeout parameter; timeout is now controlled by the external bash caller
- Remove the --timeout command-line option and internal timeout-handling logic
- Process lifecycle now follows the parent (bash) process state
- Simplify code; timeout control is delegated to the external caller
2026-01-15 22:30:22 +08:00
catlog22
af4ddb1280 feat: add queue and issue deletion, with support for archiving issues 2026-01-15 19:58:54 +08:00
catlog22
7db659f0e1 feat: enhance issue search and polish the multi-queue card UI
Search enhancements:
- Add debouncing to fix page freezes caused by rapid typing
- Extend search scope to solution description and approach fields
- Highlight matched keywords in search results
- Add search dropdown suggestions with keyboard navigation

Multi-queue UI:
- Use CSS Grid for the card layout in the expanded queue view
- Add a queue deactivation feature and its API endpoint
- Improve status color distribution and stat-card styling
- Add Chinese i18n for the activate/deactivate buttons

Fixes:
- Fix a route conflict that caused a 404 on deactivate
- Fix drag-and-drop sorting breaking after async loading
2026-01-15 19:44:44 +08:00
catlog22
ba526ea09e fix: Dashboard overview page failing to display project information
Add an extractStringArray helper to handle mixed array types (string arrays and object arrays)
so that loadProjectOverview can correctly process the data structures in project-tech.json.

Fixed fields include:
- languages: object array [{name, file_count, primary}] → string array
- frameworks: stays compatible with string arrays
- key_components: object array [{name, description, path}] → string array
- layers/patterns: stays compatible with mixed types

Closes #79
2026-01-15 18:58:42 +08:00
catlog22
c308e429f8 feat: add an incremental update command to support single-file index updates 2026-01-15 18:14:51 +08:00
catlog22
c24ed016cb feat: update execution command docs to add the queue ID requirement and user prompting 2026-01-15 16:22:48 +08:00
catlog22
0c9a6d4154 chore: bump version to 6.3.29
Release 6.3.29 with:
- Multi-CLI task and discussion tabs i18n support
- Collapsible sections for discussion and summary tabs
- Post-Completion Expansion for execution commands
- Enhanced multi-CLI session handling
- Code structure refactoring
2026-01-15 15:38:15 +08:00
catlog22
7b5c3cacaa feat: add i18n support for multi-CLI task and discussion tabs 2026-01-15 15:35:09 +08:00
catlog22
e6e7876b38 feat: Add collapsible sections and enhance layout for discussion and summary tabs 2026-01-15 15:30:11 +08:00
catlog22
0eda520fd7 feat: Enhance multi-CLI session handling and UI updates
- Added loading of plan.json in scanMultiCliDir to improve task extraction.
- Implemented normalization of tasks from plan.json format to support new UI.
- Updated CSS for multi-CLI plan summary and task item badges for better visibility.
- Refactored hook-manager to use Node.js for cross-platform compatibility in command execution.
- Improved i18n support for new CLI tool configuration in the hook wizard.
- Enhanced lite-tasks view to utilize normalized tasks and provide better fallback mechanisms.
- Updated memory-update-queue to return string messages for better integration with hooks.
2026-01-15 15:20:20 +08:00
catlog22
e22b525e9c feat: add Post-Completion Expansion to execution commands
After an execution command completes, ask the user whether to expand the work into issues (test/enhance/refactor/doc); selected items invoke /issue:new
2026-01-15 13:00:50 +08:00
catlog22
86536aaa10 Refactor code structure for improved readability and maintainability 2026-01-15 11:51:19 +08:00
catlog22
3ef766708f chore: bump version to 6.3.28
Fixes #74 - Include ccw/scripts/ in npm package files
2026-01-15 11:20:34 +08:00
catlog22
95a7f05aa9 Add unified command indices for CCW and CCW-Help with detailed capabilities, flows, and intent rules
- Introduced command.json for CCW-Help with 88 commands and 16 agents, covering essential workflows and memory management.
- Created command.json for CCW with comprehensive capabilities for exploration, planning, execution, bug fixing, testing, reviewing, and documentation.
- Defined complex flows for rapid iteration, full exploration, coupled planning, bug fixing, issue lifecycle management, and more.
- Implemented intent rules for bug fixing, issue batch processing, exploration, UI design, TDD, review, and documentation.
- Established CLI tools and injection rules to enhance command execution based on context and complexity.
2026-01-15 11:19:30 +08:00
catlog22
f692834153 fix: the Status navigation item now correctly shows the CLI status page instead of the CLAUDE.md manager
The cli-manager view route incorrectly called renderClaudeManager(); fixed to call the correct renderCliManager() function.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 10:38:19 +08:00
catlog22
a228bb946b fix: the Issue Manager completed filter can now display archived issues
- Add a loadIssueHistory() function to load archived issues from /api/issues/history
- Change filterIssuesByStatus() to load history data when the completed filter is selected
- Change renderIssueView() to merge current completed issues with archived issues
- Change renderIssueCard() to show an "Archived" badge distinguishing archived issues
- Change openIssueDetail() to support loading archived issue details from cache
- Add .issue-card.archived and .issue-archived-badge CSS styles

Fixes: https://github.com/catlog22/Claude-Code-Workflow/issues/76

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 10:28:52 +08:00
catlog22
4d57f47717 feat: update the search tool priority guide and unify its formatting for readability 2026-01-14 22:00:01 +08:00
catlog22
c8cac5b201 feat: add a search tool priority guide; optimize CLI tool invocation and execution strategy 2026-01-14 21:46:36 +08:00
catlog22
f9c1216eec feat: add token-consumption diagnostics; optimize output and state management 2026-01-14 21:40:00 +08:00
catlog22
266f6f11ec feat: Enhance documentation diagnosis and category mapping
- Introduced action to diagnose documentation structure, identifying redundancies and conflicts.
- Added centralized category mappings in JSON format for improved detection and strategy application.
- Updated existing functions to utilize new mappings for taxonomy and strategy matching.
- Implemented new detection patterns for documentation redundancy and conflict.
- Expanded state schema to include documentation diagnosis results.
- Enhanced severity criteria and strategy selection guide to accommodate new documentation issues.
2026-01-14 21:07:52 +08:00
catlog22
1f5ce9c03a Enhance CCW Orchestrator with Requirement Analysis Features
- Updated SKILL.md to reflect new requirement analysis capabilities, including input analysis and clarity scoring.
- Expanded issue workflow in issue.md to include discovery and creation phases, along with detailed command references.
- Introduced requirement analysis specification in requirement-analysis.md, outlining clarity scoring, dimension extraction, and validation processes.
- Added output templates specification in output-templates.md for consistent user experience across classification, planning, clarification, execution, and summary outputs.
2026-01-14 20:15:42 +08:00
catlog22
959d60b31f Enhance CLI Stream Viewer and Navigation Lifecycle Management
- Added lifecycle management for CLI Stream Viewer with destroy function to clean up event listeners and timers.
- Improved navigation state management by registering destroy functions for views and ensuring cleanup on transitions.
- Updated Claude Manager to include lifecycle functions for better resource management.
- Enhanced CLI History View with state reset functionality and improved dropdown handling for batch delete.
- Introduced round solutions rendering in Lite Tasks View, including collapsible sections for implementation plans, dependencies, and technical concerns.
2026-01-14 19:57:05 +08:00
catlog22
49845fe1ae feat: extend multi-CLI detail page styles; update task cards and decision status display 2026-01-14 18:47:23 +08:00
catlog22
aeb111420e feat: add multi-CLI plan support; update data aggregation and navigation components to handle the new task type 2026-01-14 17:06:36 +08:00
catlog22
6ff3e5f8fe test: add unit tests for hook quoting fix (Issue #73)
Add comprehensive test suite for convertToClaudeCodeFormat function:
- Verify bash -c commands use single quotes
- Verify jq patterns are preserved without excessive escaping
- Verify single quotes in scripts are properly escaped
- Test all real-world hook templates (danger-*, ccw-notify, log-tool)
- Test edge cases (non-bash commands, already formatted data)

All 13 tests passing.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 15:22:52 +08:00
catlog22
d941166d84 fix: use single quotes for bash -c script to avoid jq escaping issues
Problem:
When generating hook configurations, the convertToClaudeCodeFormat function
was using double quotes to wrap bash -c script arguments. This caused
complex escaping issues with jq commands inside, leading to parse errors
like "jq: error: syntax error, unexpected end of file".

Solution:
For bash -c commands, now use single quotes to wrap the script argument.
Single quotes prevent shell expansion, so internal double quotes (like
those used in jq patterns) work naturally without excessive escaping.

If the script contains single quotes, they are properly escaped using
the '\'' pattern (close quote, escaped quote, reopen quote).

Fixes: https://github.com/catlog22/Claude-Code-Workflow/issues/73

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 15:07:04 +08:00
catlog22
ac9ba5c7e4 feat: update the CLI analysis call section to stress waiting for results and valuing each call; remove background execution as the default 2026-01-14 14:00:14 +08:00
catlog22
9e55f51501 feat: add requirement analysis with dimension decomposition, coverage assessment, and ambiguity detection 2026-01-14 13:42:57 +08:00
catlog22
43b8cfc7b0 feat: add CLI-assisted intent classification and action planning; improve complex input handling and execution strategy optimization 2026-01-14 13:23:22 +08:00
catlog22
633d918da1 Add quality gates and tuning strategies documentation
- Introduced quality gates specification for skill tuning, detailing quality dimensions, scoring, and gate definitions.
- Added comprehensive tuning strategies for various issue categories, including context explosion, long-tail forgetting, data flow, and agent coordination.
- Created templates for diagnosis reports and fix proposals to standardize documentation and reporting processes.
2026-01-14 12:59:13 +08:00
catlog22
6b4b9b0775 feat: enhance multi-CLI planning with new schema for solutions and implementation plans; improve file handling with async methods 2026-01-14 12:15:42 +08:00
catlog22
360d29d7be Enhance server routing to include dialog API endpoints
- Updated system routes in the server to handle dialog-related API requests.
- Added support for new dialog routes under the '/api/dialog/' path.
2026-01-14 10:51:23 +08:00
catlog22
4fe7f6cde6 feat: enhance CLI discussion agent and multi-CLI planning with JSON string support; improve error handling and internationalization 2026-01-13 23:51:46 +08:00
catlog22
6922ca27de Add Multi-CLI Plan feature and corresponding JSON schema
- Introduced a new navigation item for "Multi-CLI Plan" in the dashboard template.
- Created a new JSON schema for "Multi-CLI Discussion Artifact" to facilitate structured discussions and decision-making processes.
2026-01-13 23:46:15 +08:00
catlog22
c3da637849 feat(workflow): add multi-CLI collaborative planning command
- Introduced a new command `/workflow:multi-cli-plan` for collaborative planning using ACE semantic search and iterative analysis with Claude and Codex.
- Implemented a structured execution flow with phases for context gathering, multi-tool analysis, user decision points, and final plan generation.
- Added detailed documentation outlining the command's usage, execution phases, and key features.
- Included error handling and configuration options for enhanced user experience.
2026-01-13 23:23:09 +08:00
117 changed files with 21769 additions and 6751 deletions

View File

@@ -29,7 +29,13 @@ Available CLI endpoints are dynamically defined by the config file:
```
Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
```
- **After CLI call**: Stop immediately - let CLI execute in background, do NOT poll with TaskOutput
### CLI Analysis Calls
- **Wait for results**: MUST wait for CLI analysis to complete before taking any write action. Do NOT proceed with fixes while analysis is running
- **Value every call**: Each CLI invocation is valuable and costly. NEVER waste analysis results:
- Aggregate multiple analysis results before proposing solutions
## Code Diagnostics

View File

@@ -855,6 +855,7 @@ Use `analysis_results.complexity` or task count to determine structure:
### 3.3 Guidelines Checklist
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
- Use provided context package: Extract all information from structured context

View File

@@ -0,0 +1,391 @@
---
name: cli-discuss-agent
description: |
Multi-CLI collaborative discussion agent with cross-verification and solution synthesis.
Orchestrates 5-phase workflow: Context Prep → CLI Execution → Cross-Verify → Synthesize → Output
color: magenta
allowed-tools: mcp__ace-tool__search_context(*), Bash(*), Read(*), Write(*), Glob(*), Grep(*)
---
You are a specialized CLI discussion agent that orchestrates multiple CLI tools to analyze tasks, cross-verify findings, and synthesize structured solutions.
## Core Capabilities
1. **Multi-CLI Orchestration** - Invoke Gemini, Codex, Qwen for diverse perspectives
2. **Cross-Verification** - Compare findings, identify agreements/disagreements
3. **Solution Synthesis** - Merge approaches, score and rank by consensus
4. **Context Enrichment** - ACE semantic search for supplementary context
**Discussion Modes**:
- `initial` → First round, establish baseline analysis (parallel execution)
- `iterative` → Build on previous rounds with user feedback (parallel + resume)
- `verification` → Cross-verify specific approaches (serial execution)
---
## 5-Phase Execution Workflow
```
Phase 1: Context Preparation
↓ Parse input, enrich with ACE if needed, create round folder
Phase 2: Multi-CLI Execution
↓ Build prompts, execute CLIs with fallback chain, parse outputs
Phase 3: Cross-Verification
↓ Compare findings, identify agreements/disagreements, resolve conflicts
Phase 4: Solution Synthesis
↓ Extract approaches, merge similar, score and rank top 3
Phase 5: Output Generation
↓ Calculate convergence, generate questions, write synthesis.json
```
---
## Input Schema
**From orchestrator** (may be JSON strings):
- `task_description` - User's task or requirement
- `round_number` - Current discussion round (1, 2, 3...)
- `session` - `{ id, folder }` for output paths
- `ace_context` - `{ relevant_files[], detected_patterns[], architecture_insights }`
- `previous_rounds` - Array of prior SynthesisResult (optional)
- `user_feedback` - User's feedback from last round (optional)
- `cli_config` - `{ tools[], timeout, fallback_chain[], mode }` (optional)
- `tools`: Default `['gemini', 'codex']` or `['gemini', 'codex', 'claude']`
- `fallback_chain`: Default `['gemini', 'codex', 'claude']`
- `mode`: `'parallel'` (default) or `'serial'`
---
## Output Schema
**Output Path**: `{session.folder}/rounds/{round_number}/synthesis.json`
```json
{
"round": 1,
"solutions": [
{
"name": "Solution Name",
"source_cli": ["gemini", "codex"],
"feasibility": 0.85,
"effort": "low|medium|high",
"risk": "low|medium|high",
"summary": "Brief analysis summary",
"implementation_plan": {
"approach": "High-level technical approach",
"tasks": [
{
"id": "T1",
"name": "Task name",
"depends_on": [],
"files": [{"file": "path", "line": 10, "action": "modify|create|delete"}],
"key_point": "Critical consideration for this task"
},
{
"id": "T2",
"name": "Second task",
"depends_on": ["T1"],
"files": [{"file": "path2", "line": 1, "action": "create"}],
"key_point": null
}
],
"execution_flow": "T1 → T2 → T3 (T2,T3 can parallel after T1)",
"milestones": ["Interface defined", "Core logic complete", "Tests passing"]
},
"dependencies": {
"internal": ["@/lib/module"],
"external": ["npm:package@version"]
},
"technical_concerns": ["Potential blocker 1", "Risk area 2"]
}
],
"convergence": {
"score": 0.75,
"new_insights": true,
"recommendation": "converged|continue|user_input_needed"
},
"cross_verification": {
"agreements": ["point 1"],
"disagreements": ["point 2"],
"resolution": "how resolved"
},
"clarification_questions": ["question 1?"]
}
```
**Schema Fields**:
| Field | Purpose |
|-------|---------|
| `feasibility` | Quantitative viability score (0-1) |
| `summary` | Narrative analysis summary |
| `implementation_plan.approach` | High-level technical strategy |
| `implementation_plan.tasks[]` | Discrete implementation tasks |
| `implementation_plan.tasks[].depends_on` | Task dependencies (IDs) |
| `implementation_plan.tasks[].key_point` | Critical consideration for task |
| `implementation_plan.execution_flow` | Visual task sequence |
| `implementation_plan.milestones` | Key checkpoints |
| `technical_concerns` | Specific risks/blockers |
**Note**: Solutions ranked by internal scoring (array order = priority). `pros/cons` merged into `summary` and `technical_concerns`.
---
## Phase 1: Context Preparation
**Parse input** (handle JSON strings from orchestrator):
```javascript
const ace_context = typeof input.ace_context === 'string'
? JSON.parse(input.ace_context) : input.ace_context || {}
const previous_rounds = typeof input.previous_rounds === 'string'
? JSON.parse(input.previous_rounds) : input.previous_rounds || []
```
**ACE Supplementary Search** (when needed):
```javascript
// Trigger conditions:
// - Round > 1 AND relevant_files < 5
// - Previous solutions reference unlisted files
if (shouldSupplement) {
mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: `Implementation patterns for ${task_keywords}`
})
}
```
**Create round folder**:
```bash
mkdir -p {session.folder}/rounds/{round_number}
```
---
## Phase 2: Multi-CLI Execution
### Available CLI Tools
Three CLI tools:
- **gemini** - Google Gemini (deep code analysis perspective)
- **codex** - OpenAI Codex (implementation verification perspective)
- **claude** - Anthropic Claude (architectural analysis perspective)
### Execution Modes
**Parallel Mode** (default, faster):
```
┌─ gemini ─┐
│ ├─→ merge results → cross-verify
└─ codex ──┘
```
- Execute multiple CLIs simultaneously
- Merge outputs after all complete
- Use when: time-sensitive, independent analysis needed
**Serial Mode** (for cross-verification):
```
gemini → (output) → codex → (verify) → claude
```
- Each CLI receives prior CLI's output
- Explicit verification chain
- Use when: deep verification required, controversial solutions
**Mode Selection**:
```javascript
const execution_mode = cli_config.mode || 'parallel'
// parallel: Promise.all([cli1, cli2, cli3])
// serial: await cli1 → await cli2(cli1.output) → await cli3(cli2.output)
```
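A minimal sketch of the two modes, assuming a hypothetical `execCLI(tool, prompt)` helper that runs `ccw cli` and resolves to the parsed output:
```javascript
// Sketch only. execCLI(tool, prompt) is an assumed helper that invokes
// `ccw cli -p ... --tool <tool>` and resolves to the parsed CLI output.
async function runClis(tools, basePrompt, mode) {
  if (mode === 'parallel') {
    // Independent analyses, merged after all complete
    return Promise.all(tools.map(tool => execCLI(tool, basePrompt)));
  }
  // Serial: each CLI sees the prior CLI's output for verification
  const outputs = [];
  let prompt = basePrompt;
  for (const tool of tools) {
    const output = await execCLI(tool, prompt);
    outputs.push(output);
    prompt = `${basePrompt}\n\nPRIOR ANALYSIS (verify):\n${JSON.stringify(output)}`;
  }
  return outputs;
}
```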
### CLI Prompt Template
```bash
ccw cli -p "
PURPOSE: Analyze task from {perspective} perspective, verify technical feasibility
TASK:
• Analyze: \"{task_description}\"
• Examine codebase patterns and architecture
• Identify implementation approaches with trade-offs
• Provide file:line references for integration points
MODE: analysis
CONTEXT: @**/* | Memory: {ace_context_summary}
{previous_rounds_section}
{cross_verify_section}
EXPECTED: JSON with feasibility_score, findings, implementation_approaches, technical_concerns, code_locations
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) |
- Specific file:line references
- Quantify effort estimates
- Concrete pros/cons
" --tool {tool} --mode analysis {resume_flag}
```
### Resume Mechanism
**Session Resume** - Continue from previous CLI session:
```bash
# Resume last session
ccw cli -p "Continue analysis..." --tool gemini --resume
# Resume specific session
ccw cli -p "Verify findings..." --tool codex --resume <session-id>
# Merge multiple sessions
ccw cli -p "Synthesize all..." --tool claude --resume <id1>,<id2>
```
**When to Resume**:
- Round > 1: Resume previous round's CLI session for context
- Cross-verification: Resume primary CLI session for secondary to verify
- User feedback: Resume with new constraints from user input
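A sketch of how these triggers might map to the resume flag; `sessionIdsByTool` and `verifyTargetId` are assumed bookkeeping maintained by the agent:
```javascript
// Sketch: derive the --resume flag from discussion state (assumed bookkeeping).
function resumeFlagFor(tool, roundNumber, sessionIdsByTool, verifyTargetId) {
  if (verifyTargetId) return `--resume ${verifyTargetId}`; // verify another CLI's session
  if (roundNumber > 1 && sessionIdsByTool[tool]) {
    return `--resume ${sessionIdsByTool[tool]}`;           // continue own prior round
  }
  return '';                                               // round 1: fresh session
}
```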
**Context Assembly** (automatic):
```
=== PREVIOUS CONVERSATION ===
USER PROMPT: [Previous CLI prompt]
ASSISTANT RESPONSE: [Previous CLI output]
=== CONTINUATION ===
[New prompt with updated context]
```
### Fallback Chain
Execute primary tool → On failure, try next in chain:
```
gemini → codex → claude → degraded-analysis
```
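A minimal sketch, again assuming an `execCLI` helper that rejects on CLI failure:
```javascript
// Sketch: walk the fallback chain; if all tools fail, return a degraded marker.
async function execWithFallback(chain, prompt) {
  for (const tool of chain) {
    try {
      return { tool, output: await execCLI(tool, prompt) };
    } catch (err) {
      console.warn(`${tool} failed (${err.message}), trying next in chain`);
    }
  }
  return { tool: 'degraded-analysis', output: null }; // caller degrades gracefully
}
```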
### Cross-Verification Mode
Second+ CLI receives prior analysis for verification:
```json
{
"cross_verification": {
"agrees_with": ["verified point 1"],
"disagrees_with": ["challenged point 1"],
"additions": ["new insight 1"]
}
}
```
---
## Phase 3: Cross-Verification
**Compare CLI outputs**:
1. Group similar findings across CLIs
2. Identify multi-CLI agreements (2+ CLIs agree)
3. Identify disagreements (conflicting conclusions)
4. Generate resolution based on evidence weight
**Output**:
```json
{
"agreements": ["Approach X proposed by gemini, codex"],
"disagreements": ["Effort estimate differs: gemini=low, codex=high"],
"resolution": "Resolved using code evidence from gemini"
}
```
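One possible shape of the grouping step; `normalize` is an assumed canonicalizer (lowercase, strip punctuation) used as the merge key, and single-source findings are treated only as candidates for disagreement pending conflict resolution:
```javascript
// Sketch: group findings across CLI outputs by a normalized key.
function crossVerify(outputsByCli) {
  const byKey = new Map();
  for (const [cli, output] of Object.entries(outputsByCli)) {
    for (const finding of output.findings || []) {
      const key = normalize(finding); // assumed canonicalizer
      const entry = byKey.get(key) || { finding, clis: [] };
      entry.clis.push(cli);
      byKey.set(key, entry);
    }
  }
  const entries = [...byKey.values()];
  return {
    agreements: entries.filter(e => e.clis.length >= 2).map(e => e.finding),
    // single-source findings: disagreement candidates for resolution
    candidates: entries.filter(e => e.clis.length === 1).map(e => e.finding)
  };
}
```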
---
## Phase 4: Solution Synthesis
**Extract and merge approaches**:
1. Collect implementation_approaches from all CLIs
2. Normalize names, merge similar approaches
3. Combine pros/cons/affected_files from multiple sources
4. Track source_cli attribution
**Internal scoring** (used for ranking, not exported):
```
score = (source_cli.length × 20) // Multi-CLI consensus
+ effort_score[effort] // low=30, medium=20, high=10
+ risk_score[risk] // low=30, medium=20, high=5
+ (pros.length - cons.length) × 5 // Balance
+ min(affected_files.length × 3, 15) // Specificity
```
**Output**: Top 3 solutions, ranked in array order (highest score first)
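Transcribed as a sketch, with the weights exactly as documented (`solutions` assumed to be the merged approach list):
```javascript
// Sketch: score solutions and keep the top 3 (array order = priority).
const EFFORT = { low: 30, medium: 20, high: 10 };
const RISK = { low: 30, medium: 20, high: 5 };

function scoreSolution(s) {
  return s.source_cli.length * 20                 // multi-CLI consensus
    + EFFORT[s.effort] + RISK[s.risk]             // effort / risk weights
    + (s.pros.length - s.cons.length) * 5         // balance
    + Math.min(s.affected_files.length * 3, 15);  // specificity, capped
}

const top3 = solutions
  .map(s => ({ s, score: scoreSolution(s) }))
  .sort((a, b) => b.score - a.score)
  .slice(0, 3)
  .map(x => x.s);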
---
## Phase 5: Output Generation
### Convergence Calculation
```
score = agreement_ratio × 0.5 // agreements / (agreements + disagreements)
+ avg_feasibility × 0.3 // average of CLI feasibility_scores
+ stability_bonus × 0.2 // +0.2 if no new insights vs previous rounds
recommendation:
- score >= 0.8 → "converged"
- disagreements > 3 → "user_input_needed"
- else → "continue"
```
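A direct sketch of the formula, assuming Phase 3 results plus a flag comparing against previous rounds:
```javascript
// Sketch: convergence score and recommendation per the formula above.
function convergence({ agreements, disagreements, feasibilities, hasNewInsights }) {
  const ratio = agreements.length / Math.max(agreements.length + disagreements.length, 1);
  const avgFeasibility = feasibilities.reduce((a, b) => a + b, 0) / Math.max(feasibilities.length, 1);
  const stabilityBonus = hasNewInsights ? 0 : 1; // 1 → +0.2 when no new insights
  const score = ratio * 0.5 + avgFeasibility * 0.3 + stabilityBonus * 0.2;
  const recommendation =
    score >= 0.8 ? 'converged' :
    disagreements.length > 3 ? 'user_input_needed' :
    'continue';
  return { score, new_insights: hasNewInsights, recommendation };
}
```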
### Clarification Questions
Generate from:
1. Unresolved disagreements (max 2)
2. Technical concerns raised (max 2)
3. Trade-off decisions needed
**Max 4 questions total**
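A sketch of the assembly, assuming string lists collected in Phases 3–4 (the question phrasing is illustrative):
```javascript
// Sketch: at most 2 + 2 + remaining trade-offs, hard-capped at 4 questions.
function clarificationQuestions({ disagreements, concerns, tradeoffs }) {
  return [
    ...disagreements.slice(0, 2).map(d => `How should we resolve: ${d}?`),
    ...concerns.slice(0, 2).map(c => `Is this risk acceptable: ${c}?`),
    ...tradeoffs.map(t => `Which trade-off do you prefer: ${t}?`)
  ].slice(0, 4);
}
```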
### Write Output
```javascript
Write({
file_path: `${session.folder}/rounds/${round_number}/synthesis.json`,
content: JSON.stringify(artifact, null, 2)
})
```
---
## Error Handling
**CLI Failure**: Try fallback chain → Degraded analysis if all fail
**Parse Failure**: Extract bullet points from raw output as fallback
**Timeout**: Return partial results with timeout flag
---
## Quality Standards
| Criteria | Good | Bad |
|----------|------|-----|
| File references | `src/auth/login.ts:45` | "update relevant files" |
| Effort estimate | `low` / `medium` / `high` | "some time required" |
| Pros/Cons | Concrete, specific | Generic, vague |
| Solution source | Multi-CLI consensus | Single CLI only |
| Convergence | Score with reasoning | Binary yes/no |
---
## Key Reminders
**ALWAYS**:
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Execute multiple CLIs for cross-verification
3. Parse CLI outputs with fallback extraction
4. Include file:line references in affected_files
5. Calculate convergence score accurately
6. Write synthesis.json to round folder
7. Use `run_in_background: false` for CLI calls
8. Limit solutions to top 3
9. Limit clarification questions to 4
**NEVER**:
1. Execute implementation code (analysis only)
2. Return without writing synthesis.json
3. Skip cross-verification phase
4. Generate more than 4 clarification questions
5. Ignore previous round context
6. Assume solution without multi-CLI validation

View File

@@ -65,6 +65,8 @@ Score = 0
## Phase 2: Context Discovery
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
**1. Project Structure**:
```bash
ccw tool exec get_modules_by_depth '{}'

View File

@@ -165,7 +165,8 @@ Brief summary:
## Key Reminders
**ALWAYS**:
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Read schema file FIRST before generating any output (if schema specified)
3. Copy field names EXACTLY from schema (case-sensitive)
4. Verify root structure matches schema (array vs object)

View File

@@ -428,6 +428,7 @@ function validateTask(task) {
## Key Reminders
**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Read schema first** to determine output structure
- Generate task IDs (T1/T2 for plan, FIX1/FIX2 for fix-plan)
- Include depends_on (even if empty [])

View File

@@ -436,6 +436,7 @@ See: `.process/iteration-{iteration}-cli-output.txt`
## Key Reminders
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Validate context package**: Ensure all required fields present before CLI execution
- **Handle CLI errors gracefully**: Use fallback chain (Gemini → Qwen → degraded mode)
- **Parse CLI output structurally**: Extract specific sections (RCA, fix suggestions, verification suggestions)

View File

@@ -389,6 +389,7 @@ Before completing any task, verify:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Verify module/package existence with rg/grep/search before referencing
- Write working code incrementally
- Test your implementation thoroughly

View File

@@ -27,6 +27,8 @@ You are a conceptual planning specialist focused on **dedicated single-role** st
## Core Responsibilities
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
1. **Dedicated Role Execution**: Execute exactly one assigned planning role perspective - no multi-role assignments
2. **Brainstorming Integration**: Integrate with auto brainstorm workflow for role-specific conceptual analysis
3. **Template-Driven Analysis**: Use planning role templates loaded via `$(cat template)`

View File

@@ -565,6 +565,7 @@ Output: .workflow/session/{session}/.process/context-package.json
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Initialize CodexLens in Phase 0
- Execute get_modules_by_depth.sh
- Load CLAUDE.md/README.md (unless in memory)

View File

@@ -10,6 +10,8 @@ You are an intelligent debugging specialist that autonomously diagnoses bugs thr
## Tool Selection Hierarchy
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
1. **Gemini (Primary)** - Log analysis, hypothesis validation, root cause reasoning
2. **Qwen (Fallback)** - Same capabilities as Gemini, use when unavailable
3. **Codex (Alternative)** - Fix implementation, code modification

View File

@@ -311,6 +311,7 @@ Before completing the task, you must verify the following:
## Key Reminders
**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Detect Mode**: Check `meta.cli_execute` to determine execution mode (Agent or CLI).
- **Follow `flow_control`**: Execute the `pre_analysis` steps exactly as defined in the task JSON.
- **Execute Commands Directly**: All commands are tool-specific and ready to run.

View File

@@ -308,7 +308,8 @@ Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cl
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS**:
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Read schema first: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
3. Use ACE semantic search as PRIMARY exploration tool
4. Fetch issue details via `ccw issue status <id> --json`
5. Quantify acceptance.criteria with testable conditions

View File

@@ -275,7 +275,8 @@ Return brief summaries; full conflict details in separate files:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS**:
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Build dependency graph before ordering
3. Detect file overlaps between solutions
4. Apply resolution rules consistently
5. Calculate semantic priority for all solutions

View File

@@ -75,6 +75,8 @@ Examples:
## Execution Rules
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
1. **Task Tracking**: Create TodoWrite entry for each depth before execution
2. **Parallelism**: Max 4 jobs per depth, sequential across depths
3. **Strategy Assignment**: Assign strategy based on depth:

View File

@@ -28,6 +28,8 @@ You are a test context discovery specialist focused on gathering test coverage i
## Tool Arsenal
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
### 1. Session & Implementation Context
**Tools**:
- `Read()` - Load session metadata and implementation summaries

View File

@@ -332,6 +332,7 @@ When generating test results for orchestrator (saved to `.process/test-results.j
## Important Reminders
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Execute tests first** - Understand what's failing before fixing
- **Diagnose thoroughly** - Find root cause, not just symptoms
- **Fix minimally** - Change only what's needed to pass tests

View File

@@ -284,6 +284,8 @@ You execute 6 distinct task types organized into 3 patterns. Each task includes
### ALWAYS
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
**W3C Format Compliance**: ✅ Include $schema in all token files | ✅ Use $type metadata for all tokens | ✅ Use $value wrapper for color (light/dark), duration, easing | ✅ Validate token structure against W3C spec
**Pattern Recognition**: ✅ Identify pattern from [TASK_TYPE_IDENTIFIER] first | ✅ Apply pattern-specific execution rules | ✅ Follow autonomy level

View File

@@ -124,6 +124,7 @@ Before completing any task, verify:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Verify resource/dependency existence before referencing
- Execute tasks systematically and incrementally
- Test and validate work thoroughly

View File

@@ -1,7 +1,7 @@
---
name: execute
description: Execute queue with DAG-based parallel orchestration (one commit per solution)
argument-hint: "[--worktree [<existing-path>]] [--queue <queue-id>]"
argument-hint: "--queue <queue-id> [--worktree [<existing-path>]]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---
@@ -17,21 +17,64 @@ Minimal orchestrator that dispatches **solution IDs** to executors. Each executo
- `done <id>` → update solution completion status
- No race conditions: status changes only via `done`
- **Executor handles all tasks within a solution sequentially**
- **Worktree isolation**: Each executor can work in its own git worktree
- **Single worktree for entire queue**: One worktree isolates ALL queue execution from main workspace
## Queue ID Requirement (MANDATORY)
**Queue ID is REQUIRED.** You MUST specify which queue to execute via `--queue <queue-id>`.
### If Queue ID Not Provided
When `--queue` parameter is missing, you MUST:
1. **List available queues** by running:
```javascript
const result = Bash('ccw issue queue list --brief --json');
const index = JSON.parse(result);
```
2. **Display available queues** to user:
```
Available Queues:
ID Status Progress Issues
-----------------------------------------------------------
→ QUE-20251215-001 active 3/10 ISS-001, ISS-002
QUE-20251210-002 active 0/5 ISS-003
QUE-20251205-003 completed 8/8 ISS-004
```
3. **Stop and ask user** to specify which queue to execute:
```javascript
AskUserQuestion({
questions: [{
question: "Which queue would you like to execute?",
header: "Queue",
multiSelect: false,
options: index.queues
.filter(q => q.status === 'active')
.map(q => ({
label: q.id,
description: `${q.status}, ${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
}))
}]
})
```
4. **After user selection**, continue execution with the selected queue ID.
**DO NOT auto-select queues.** Explicit user confirmation is required to prevent accidentally executing the wrong queue.
## Usage
```bash
/issue:execute --queue QUE-xxx # Execute specific queue (REQUIRED)
/issue:execute --queue QUE-xxx --worktree # Execute in isolated worktree
/issue:execute --queue QUE-xxx --worktree /path/to/existing/worktree # Resume
```
**Parallelism**: Determined automatically by task dependency DAG (no manual control)
**Executor & Dry-run**: Selected via interactive prompt (AskUserQuestion)
**Worktree**: Creates ONE worktree for the entire queue execution (not per-solution)
**⭐ Recommended Executor**: **Codex** - Best for long-running autonomous work (2hr timeout), supports background execution and full write access
@@ -44,37 +87,101 @@ Minimal orchestrator that dispatches **solution IDs** to executors. Each executo
## Execution Flow
```
Phase 0: Validate Queue ID (REQUIRED)
├─ If --queue provided → use specified queue
├─ If --queue missing → list queues, prompt user to select
└─ Store QUEUE_ID for all subsequent commands
Phase 0.5 (if --worktree): Setup Queue Worktree
├─ Create ONE worktree for entire queue: .ccw/worktrees/queue-<timestamp>
├─ All subsequent execution happens in this worktree
└─ Main workspace remains clean and untouched
Phase 1: Get DAG & User Selection
├─ ccw issue queue dag --queue ${QUEUE_ID} → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
└─ AskUserQuestion → executor type (codex|gemini|agent), dry-run mode, worktree mode
Phase 2: Dispatch Parallel Batch (DAG-driven)
├─ Parallelism determined by DAG (no manual limit)
├─ All executors work in the SAME worktree (or main if no worktree)
├─ For each solution ID in batch (parallel - all at once):
│ ├─ Executor calls: ccw issue detail <id> (READ-ONLY)
│ ├─ Executor gets FULL SOLUTION with all tasks
│ ├─ Executor implements all tasks sequentially (T1 → T2 → T3)
│ ├─ Executor tests + verifies each task
│ ├─ Executor commits ONCE per solution (with formatted summary)
│ └─ Executor calls: ccw issue done <id>
└─ Wait for batch completion
Phase 3: Next Batch (repeat Phase 2)
└─ ccw issue queue dag → check for newly-ready solutions
Phase 4 (if --worktree): Worktree Completion
├─ All batches complete → prompt for merge strategy
└─ Options: Create PR / Merge to main / Keep branch
```
## Implementation
### Phase 0: Validate Queue ID
```javascript
// Check if --queue was provided
let QUEUE_ID = args.queue;
if (!QUEUE_ID) {
// List available queues
const listResult = Bash('ccw issue queue list --brief --json').trim();
const index = JSON.parse(listResult);
if (index.queues.length === 0) {
console.log('No queues found. Use /issue:queue to create one first.');
return;
}
// Filter active queues only
const activeQueues = index.queues.filter(q => q.status === 'active');
if (activeQueues.length === 0) {
console.log('No active queues found.');
console.log('Available queues:', index.queues.map(q => `${q.id} (${q.status})`).join(', '));
return;
}
// Display and prompt user
console.log('\nAvailable Queues:');
console.log('ID'.padEnd(22) + 'Status'.padEnd(12) + 'Progress'.padEnd(12) + 'Issues');
console.log('-'.repeat(70));
for (const q of index.queues) {
const marker = q.id === index.active_queue_id ? '→ ' : ' ';
console.log(marker + q.id.padEnd(20) + q.status.padEnd(12) +
`${q.completed_solutions || 0}/${q.total_solutions || 0}`.padEnd(12) +
q.issue_ids.join(', '));
}
const answer = AskUserQuestion({
questions: [{
question: "Which queue would you like to execute?",
header: "Queue",
multiSelect: false,
options: activeQueues.map(q => ({
label: q.id,
description: `${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
}))
}]
});
QUEUE_ID = answer['Queue'];
}
console.log(`\n## Executing Queue: ${QUEUE_ID}\n`);
```
### Phase 1: Get DAG & User Selection
```javascript
// Get dependency graph and parallel batches (QUEUE_ID required)
const dagJson = Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim();
const dag = JSON.parse(dagJson);
if (dag.error || dag.ready_count === 0) {
@@ -115,12 +222,12 @@ const answer = AskUserQuestion({
]
},
{
question: 'Use git worktree for queue isolation?',
header: 'Worktree',
multiSelect: false,
options: [
{ label: 'Yes (Recommended)', description: 'Create ONE worktree for entire queue - main stays clean' },
{ label: 'No', description: 'Work directly in current directory' }
]
}
]
@@ -140,7 +247,7 @@ if (isDryRun) {
}
```
### Phase 0 & 2: Setup Queue Worktree & Dispatch
```javascript
// Parallelism determined by DAG - no manual limit
@@ -158,24 +265,40 @@ TodoWrite({
console.log(`\n### Executing Solutions (DAG batch 1): ${batch.join(', ')}`);
// Parse existing worktree path from args if provided
// Example: --worktree /path/to/existing/worktree
const existingWorktree = args.worktree && typeof args.worktree === 'string' ? args.worktree : null;
// Setup ONE worktree for entire queue (not per-solution)
let worktreePath = null;
let worktreeBranch = null;
if (useWorktree) {
const repoRoot = Bash('git rev-parse --show-toplevel').trim();
const worktreeBase = `${repoRoot}/.ccw/worktrees`;
Bash(`mkdir -p "${worktreeBase}"`);
Bash('git worktree prune'); // Cleanup stale worktrees
if (existingWorktree) {
// Resume mode: Use existing worktree
worktreePath = existingWorktree;
worktreeBranch = Bash(`git -C "${worktreePath}" branch --show-current`).trim();
console.log(`Resuming in existing worktree: ${worktreePath} (branch: ${worktreeBranch})`);
} else {
// Create mode: ONE worktree for the entire queue
const timestamp = new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14);
worktreeBranch = `queue-exec-${dag.queue_id || timestamp}`;
worktreePath = `${worktreeBase}/${worktreeBranch}`;
Bash(`git worktree add "${worktreePath}" -b "${worktreeBranch}"`);
console.log(`Created queue worktree: ${worktreePath}`);
}
}
// Launch ALL solutions in batch in parallel (DAG guarantees no conflicts)
// All executors work in the SAME worktree (or main if no worktree)
const executions = batch.map(solutionId => {
updateTodo(solutionId, 'in_progress');
return dispatchExecutor(solutionId, executor, worktreePath);
});
await Promise.all(executions);
@@ -185,126 +308,20 @@ batch.forEach(id => updateTodo(id, 'completed'));
### Executor Dispatch
```javascript
// worktreePath: path to shared worktree (null if not using worktree)
function dispatchExecutor(solutionId, executorType, worktreePath = null) {
// If worktree is provided, executor works in that directory
// No per-solution worktree creation - ONE worktree for entire queue
const cdCommand = worktreePath ? `cd "${worktreePath}"` : '';
const prompt = `
## Execute Solution ${solutionId}
${worktreeSetup}
${worktreePath ? `
### Step 0: Enter Queue Worktree
\`\`\`bash
cd "${worktreePath}"
\`\`\`
` : ''}
### Step 1: Get Solution (read-only)
\`\`\`bash
ccw issue detail ${solutionId}
@@ -352,16 +369,21 @@ If any task failed:
\`\`\`bash
ccw issue done ${solutionId} --fail --reason '{"task_id": "TX", "error_type": "test_failure", "message": "..."}'
\`\`\`
**Note**: Do NOT cleanup worktree after this solution. Worktree is shared by all solutions in the queue.
`;
// For CLI tools, pass --cd to set working directory
const cdOption = worktreePath ? ` --cd "${worktreePath}"` : '';
if (executorType === 'codex') {
return Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${solutionId}${cdOption}`,
{ timeout: 7200000, run_in_background: true } // 2hr for full solution
);
} else if (executorType === 'gemini') {
return Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${solutionId}${cdOption}`,
{ timeout: 3600000, run_in_background: true }
);
} else {
@@ -369,7 +391,7 @@ ${worktreeCleanup}`;
subagent_type: 'code-developer',
run_in_background: false,
description: `Execute solution ${solutionId}`,
prompt: worktreePath ? `Working directory: ${worktreePath}\n\n${prompt}` : prompt
});
}
}
@@ -378,8 +400,8 @@ ${worktreeCleanup}`;
### Phase 3: Check Next Batch
```javascript
// Refresh DAG after batch completes (use same QUEUE_ID)
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim());
console.log(`
## Batch Complete
@@ -389,46 +411,117 @@ console.log(`
`);
if (refreshedDag.ready_count > 0) {
console.log(`Run \`/issue:execute --queue ${QUEUE_ID}\` again for next batch.`);
// Note: If resuming, pass existing worktree path:
// /issue:execute --queue ${QUEUE_ID} --worktree <worktreePath>
}
```
### Phase 4: Worktree Completion (after ALL batches)
```javascript
// Only run when ALL solutions completed AND using worktree
if (useWorktree && refreshedDag.ready_count === 0 && refreshedDag.completed_count === refreshedDag.total) {
console.log('\n## All Solutions Completed - Worktree Cleanup');
const answer = AskUserQuestion({
questions: [{
question: `Queue complete. What to do with worktree branch "${worktreeBranch}"?`,
header: 'Merge',
multiSelect: false,
options: [
{ label: 'Create PR (Recommended)', description: 'Push branch and create pull request' },
{ label: 'Merge to main', description: 'Merge all commits and cleanup worktree' },
{ label: 'Keep branch', description: 'Cleanup worktree, keep branch for manual handling' }
]
}]
});
const repoRoot = Bash('git rev-parse --show-toplevel').trim();
if (answer['Merge'].includes('Create PR')) {
Bash(`git -C "${worktreePath}" push -u origin "${worktreeBranch}"`);
Bash(`gh pr create --title "Queue ${dag.queue_id}" --body "Issue queue execution - all solutions completed" --head "${worktreeBranch}"`);
Bash(`git worktree remove "${worktreePath}"`);
console.log(`PR created for branch: ${worktreeBranch}`);
} else if (answer['Merge'].includes('Merge to main')) {
// Check main is clean
const mainDirty = Bash('git status --porcelain').trim();
if (mainDirty) {
console.log('Warning: Main has uncommitted changes. Falling back to PR.');
Bash(`git -C "${worktreePath}" push -u origin "${worktreeBranch}"`);
Bash(`gh pr create --title "Queue ${dag.queue_id}" --body "Issue queue execution (main had uncommitted changes)" --head "${worktreeBranch}"`);
} else {
Bash(`git merge --no-ff "${worktreeBranch}" -m "Merge queue ${dag.queue_id}"`);
Bash(`git branch -d "${worktreeBranch}"`);
}
Bash(`git worktree remove "${worktreePath}"`);
} else {
Bash(`git worktree remove "${worktreePath}"`);
console.log(`Branch ${worktreeBranch} kept for manual handling`);
}
}
```
## Parallel Execution Model
```
┌───────────────────────────────────────────────────────────────┐
│ Orchestrator                                                  │
├───────────────────────────────────────────────────────────────┤
│ 0. Validate QUEUE_ID (required, or prompt user to select)     │
│ 0.5 (if --worktree) Create ONE worktree for entire queue      │
│     → .ccw/worktrees/queue-exec-<queue-id>                    │
│ 1. ccw issue queue dag --queue ${QUEUE_ID}                    │
│    → { parallel_batches: [["S-1","S-2"], ["S-3"]] }           │
│ 2. Dispatch batch 1 (parallel, SAME worktree):                │
│    ┌────────────────────────────────────────────┐             │
│    │ Shared Queue Worktree (or main)            │             │
│    │  ┌──────────────────┐ ┌──────────────────┐ │             │
│    │  │ Executor 1       │ │ Executor 2       │ │             │
│    │  │ detail S-1       │ │ detail S-2       │ │             │
│    │  │ [T1→T2→T3]       │ │ [T1→T2]          │ │             │
│    │  │ commit S-1       │ │ commit S-2       │ │             │
│    │  │ done S-1         │ │ done S-2         │ │             │
│    │  └──────────────────┘ └──────────────────┘ │             │
│    └────────────────────────────────────────────┘             │
│                                                               │
│ 3. ccw issue queue dag (refresh)                              │
│    → S-3 now ready → dispatch batch 2 (same worktree)         │
│                                                               │
│ 4. (if --worktree) ALL batches complete → cleanup worktree    │
│    → Prompt: Create PR / Merge to main / Keep branch          │
└───────────────────────────────────────────────────────────────┘
```
**Why this works for parallel:**
- **ONE worktree for entire queue** → all solutions share same isolated workspace
- `detail <id>` is READ-ONLY → no race conditions
- Each executor handles **all tasks within a solution** sequentially
- **One commit per solution** with formatted summary (not per-task)
- `done <id>` updates only its own solution status
- `queue dag` recalculates ready solutions after each batch
- Solutions in same batch have NO file conflicts (DAG guarantees)
- **Main workspace stays clean** until merge/PR decision
## CLI Endpoint Contract
### `ccw issue queue list --brief --json`
Returns queue index for selection (used when --queue not provided):
```json
{
"active_queue_id": "QUE-20251215-001",
"queues": [
{ "id": "QUE-20251215-001", "status": "active", "issue_ids": ["ISS-001"], "total_solutions": 5, "completed_solutions": 2 }
]
}
```
### `ccw issue queue dag --queue <queue-id>`
Returns dependency graph with parallel batches (solution-level, **--queue required**):
```json
{
"queue_id": "QUE-...",

View File

@@ -311,6 +311,12 @@ Output:
└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
```
## Post-Completion Expansion
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); each selected item invokes `/issue:new "{summary} - {dimension}"`
---
## Error Handling
| Situation | Action |

View File

@@ -275,6 +275,10 @@ AskUserQuestion({
- **"Enter Review"**: Execute `/workflow:review`
- **"Complete Session"**: Execute `/workflow:session:complete`
### Post-Completion Expansion
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); each selected item invokes `/issue:new "{summary} - {dimension}"`
## Execution Strategy (IMPL_PLAN-Driven)
### Strategy Priority

View File

@@ -664,6 +664,10 @@ Collected after each execution call completes:
Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.
## Post-Completion Expansion
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); each selected item invokes `/issue:new "{summary} - {dimension}"`
**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.
**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:

View File

@@ -0,0 +1,433 @@
---
name: workflow:lite-lite-lite
description: Ultra-lightweight multi-tool analysis and direct execution. No artifacts, auto tool selection based on task analysis, user-driven iteration via AskUser.
argument-hint: "<task description>"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), mcp__ace-tool__search_context(*)
---
# Ultra-Lite Multi-Tool Workflow
## Quick Start
```bash
/workflow:lite-lite-lite "Fix the login bug"
/workflow:lite-lite-lite "Refactor payment module for multi-gateway support"
```
**Core Philosophy**: Minimal friction, maximum velocity. No files, no artifacts - just analyze and execute.
## Overview
**Zero-artifact workflow**: Clarify → Select Tools → Multi-Mode Analysis → Decision → Direct Execution
**vs multi-cli-plan**: No IMPL_PLAN.md, plan.json, synthesis.json - all state in memory.
## Execution Flow
```
Phase 1: Clarify Requirements → AskUser for missing details
Phase 2: Select Tools (CLI → Mode → Agent) → 3-step selection
Phase 3: Multi-Mode Analysis → Execute with --resume chaining
Phase 4: User Decision → Execute / Refine / Change / Cancel
Phase 5: Direct Execution → No plan files, immediate implementation
```
## Phase 1: Clarify Requirements
```javascript
const taskDescription = $ARGUMENTS
if (taskDescription.length < 20 || isAmbiguous(taskDescription)) {
AskUserQuestion({
questions: [{
question: "Please provide more details: target files/modules, expected behavior, constraints?",
header: "Details",
options: [
{ label: "I'll provide more", description: "Add more context" },
{ label: "Continue analysis", description: "Let tools explore autonomously" }
],
multiSelect: false
}]
})
}
// Optional: Quick ACE Context for complex tasks
mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: `${taskDescription} implementation patterns`
})
```
## Phase 2: Select Tools
### Tool Definitions
**CLI Tools** (from cli-tools.json):
```javascript
const cliConfig = JSON.parse(Read("~/.claude/cli-tools.json"))
const cliTools = Object.entries(cliConfig.tools)
.filter(([_, config]) => config.enabled)
.map(([name, config]) => ({
name, type: 'cli',
tags: config.tags || [],
model: config.primaryModel,
toolType: config.type // builtin, cli-wrapper, api-endpoint
}))
```
**Sub Agents**:
| Agent | Strengths | canExecute |
|-------|-----------|------------|
| **code-developer** | Code implementation, test writing | ✅ |
| **Explore** | Fast code exploration, pattern discovery | ❌ |
| **cli-explore-agent** | Dual-source analysis (Bash+CLI) | ❌ |
| **cli-discuss-agent** | Multi-CLI collaboration, cross-verification | ❌ |
| **debug-explore-agent** | Hypothesis-driven debugging | ❌ |
| **context-search-agent** | Multi-layer file discovery, dependency analysis | ❌ |
| **test-fix-agent** | Test execution, failure diagnosis, code fixing | ✅ |
| **universal-executor** | General execution, multi-domain adaptation | ✅ |
**Analysis Modes**:
| Mode | Pattern | Use Case | minCLIs |
|------|---------|----------|---------|
| **Parallel** | `A \|\| B \|\| C → Aggregate` | Fast multi-perspective | 1+ |
| **Sequential** | `A → B(resume) → C(resume)` | Incremental deepening | 2+ |
| **Collaborative** | `A → B → A → B → Synthesize` | Multi-round refinement | 2+ |
| **Debate** | `A(propose) → B(challenge) → A(defend)` | Adversarial validation | 2 |
| **Challenge** | `A(analyze) → B(challenge)` | Find flaws and risks | 2 |
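For instance, Sequential mode could chain sessions as in this sketch, reusing the `buildPrompt`/`execCLI` helpers and `PROMPTS` presets defined in Phase 3 below (the `sessionId` field on the result is an assumption about what `execCLI` returns):
```javascript
// Sketch: Sequential mode — each CLI resumes the previous CLI's session.
async function runSequential(clis, taskDescription) {
  let lastSession = null;
  const results = [];
  for (const cli of clis) {
    const preset = lastSession ? PROMPTS.extend : PROMPTS.initial;
    const prompt = buildPrompt({ ...preset, taskDescription });
    const result = await execCLI(cli, prompt, { resume: lastSession });
    results.push(result);
    lastSession = result.sessionId; // assumed field; carries context forward
  }
  return results;
}
```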
### Three-Step Selection Flow
```javascript
// Step 1: Select CLIs (multiSelect)
AskUserQuestion({
questions: [{
question: "Select CLI tools for analysis (1-3 for collaboration modes)",
header: "CLI Tools",
options: cliTools.map(cli => ({
label: cli.name,
description: cli.tags.length > 0 ? cli.tags.join(', ') : cli.model || 'general'
})),
multiSelect: true
}]
})
// Step 2: Select Mode (filtered by CLI count)
const availableModes = analysisModes.filter(m => selectedCLIs.length >= m.minCLIs)
AskUserQuestion({
questions: [{
question: "Select analysis mode",
header: "Mode",
options: availableModes.map(m => ({
label: m.label,
description: `${m.description} [${m.pattern}]`
})),
multiSelect: false
}]
})
// Step 3: Select Agent for execution
AskUserQuestion({
questions: [{
question: "Select Sub Agent for execution",
header: "Agent",
options: agents.map(a => ({ label: a.name, description: a.strength })),
multiSelect: false
}]
})
// Confirm selection
AskUserQuestion({
questions: [{
question: "Confirm selection?",
header: "Confirm",
options: [
{ label: "Confirm and continue", description: `${selectedMode.label} with ${selectedCLIs.length} CLIs` },
{ label: "Re-select CLIs", description: "Choose different CLI tools" },
{ label: "Re-select Mode", description: "Choose different analysis mode" },
{ label: "Re-select Agent", description: "Choose different Sub Agent" }
],
multiSelect: false
}]
})
```
## Phase 3: Multi-Mode Analysis
### Universal CLI Prompt Template
```javascript
// Unified prompt builder - used by all modes
function buildPrompt({ purpose, tasks, expected, rules, taskDescription }) {
return `
PURPOSE: ${purpose}: ${taskDescription}
TASK: ${tasks.join(' ')}
MODE: analysis
CONTEXT: @**/*
EXPECTED: ${expected}
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | ${rules}
`
}
// Execute CLI with prompt
function execCLI(cli, prompt, options = {}) {
const { resume, background = false } = options
const resumeFlag = resume ? `--resume ${resume}` : ''
return Bash({
command: `ccw cli -p "${prompt}" --tool ${cli.name} --mode analysis ${resumeFlag}`,
run_in_background: background
})
}
```
### Prompt Presets by Role
| Role | PURPOSE | TASKS | EXPECTED | RULES |
|------|---------|-------|----------|-------|
| **initial** | Initial analysis | Identify files, Analyze approach, List changes | Root cause, files, changes, risks | Focus on actionable insights |
| **extend** | Build on previous | Review previous, Extend, Add insights | Extended analysis building on findings | Build incrementally, avoid repetition |
| **synthesize** | Refine and synthesize | Review, Identify gaps, Synthesize | Refined synthesis with new perspectives | Add value not repetition |
| **propose** | Propose comprehensive analysis | Analyze thoroughly, Propose solution, State assumptions | Well-reasoned proposal with trade-offs | Be clear about assumptions |
| **challenge** | Challenge and stress-test | Identify weaknesses, Question assumptions, Suggest alternatives | Critique with counter-arguments | Be adversarial but constructive |
| **defend** | Respond to challenges | Address challenges, Defend valid aspects, Propose refined solution | Refined proposal incorporating feedback | Be open to criticism, synthesize |
| **criticize** | Find flaws ruthlessly | Find logical flaws, Identify edge cases, Rate criticisms | Critique with severity: [CRITICAL]/[HIGH]/[MEDIUM]/[LOW] | Be ruthlessly critical |
```javascript
const PROMPTS = {
initial: { purpose: 'Initial analysis', tasks: ['Identify affected files', 'Analyze implementation approach', 'List specific changes'], expected: 'Root cause, files to modify, key changes, risks', rules: 'Focus on actionable insights' },
extend: { purpose: 'Build on previous analysis', tasks: ['Review previous findings', 'Extend analysis', 'Add new insights'], expected: 'Extended analysis building on previous', rules: 'Build incrementally, avoid repetition' },
synthesize: { purpose: 'Refine and synthesize', tasks: ['Review previous', 'Identify gaps', 'Add insights', 'Synthesize findings'], expected: 'Refined synthesis with new perspectives', rules: 'Build collaboratively, add value' },
propose: { purpose: 'Propose comprehensive analysis', tasks: ['Analyze thoroughly', 'Propose solution', 'State assumptions clearly'], expected: 'Well-reasoned proposal with trade-offs', rules: 'Be clear about assumptions' },
challenge: { purpose: 'Challenge and stress-test', tasks: ['Identify weaknesses', 'Question assumptions', 'Suggest alternatives', 'Highlight overlooked risks'], expected: 'Constructive critique with counter-arguments', rules: 'Be adversarial but constructive' },
defend: { purpose: 'Respond to challenges', tasks: ['Address each challenge', 'Defend valid aspects', 'Acknowledge valid criticisms', 'Propose refined solution'], expected: 'Refined proposal incorporating alternatives', rules: 'Be open to criticism, synthesize best ideas' },
criticize: { purpose: 'Stress-test and find weaknesses', tasks: ['Find logical flaws', 'Identify missed edge cases', 'Propose alternatives', 'Rate criticisms (High/Medium/Low)'], expected: 'Detailed critique with severity ratings', rules: 'Be ruthlessly critical, find every flaw' }
}
```
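For reference, `buildPrompt({ ...PROMPTS.initial, taskDescription: "Fix the login bug" })` renders to (surrounding whitespace aside):
```
PURPOSE: Initial analysis: Fix the login bug
TASK: Identify affected files Analyze implementation approach List specific changes
MODE: analysis
CONTEXT: @**/*
EXPECTED: Root cause, files to modify, key changes, risks
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Focus on actionable insights
```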
### Mode Implementations
```javascript
// Parallel: All CLIs run simultaneously
async function executeParallel(clis, task) {
return await Promise.all(clis.map(cli =>
execCLI(cli, buildPrompt({ ...PROMPTS.initial, taskDescription: task }), { background: true })
))
}
// Sequential: Each CLI builds on previous via --resume
async function executeSequential(clis, task) {
const results = []
let prevId = null
for (const cli of clis) {
const preset = prevId ? PROMPTS.extend : PROMPTS.initial
const result = await execCLI(cli, buildPrompt({ ...preset, taskDescription: task }), { resume: prevId })
results.push(result)
prevId = extractSessionId(result)
}
return results
}
// Collaborative: Multi-round synthesis
async function executeCollaborative(clis, task, rounds = 2) {
const results = []
let prevId = null
for (let r = 0; r < rounds; r++) {
for (const cli of clis) {
const preset = !prevId ? PROMPTS.initial : PROMPTS.synthesize
const result = await execCLI(cli, buildPrompt({ ...preset, taskDescription: task }), { resume: prevId })
results.push({ cli: cli.name, round: r, result })
prevId = extractSessionId(result)
}
}
return results
}
// Debate: Propose → Challenge → Defend
async function executeDebate(clis, task) {
const [cliA, cliB] = clis
const results = []
const propose = await execCLI(cliA, buildPrompt({ ...PROMPTS.propose, taskDescription: task }))
results.push({ phase: 'propose', cli: cliA.name, result: propose })
const challenge = await execCLI(cliB, buildPrompt({ ...PROMPTS.challenge, taskDescription: task }), { resume: extractSessionId(propose) })
results.push({ phase: 'challenge', cli: cliB.name, result: challenge })
const defend = await execCLI(cliA, buildPrompt({ ...PROMPTS.defend, taskDescription: task }), { resume: extractSessionId(challenge) })
results.push({ phase: 'defend', cli: cliA.name, result: defend })
return results
}
// Challenge: Analyze → Criticize
async function executeChallenge(clis, task) {
const [cliA, cliB] = clis
const results = []
const analyze = await execCLI(cliA, buildPrompt({ ...PROMPTS.initial, taskDescription: task }))
results.push({ phase: 'analyze', cli: cliA.name, result: analyze })
const criticize = await execCLI(cliB, buildPrompt({ ...PROMPTS.criticize, taskDescription: task }), { resume: extractSessionId(analyze) })
results.push({ phase: 'challenge', cli: cliB.name, result: criticize })
return results
}
```
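All of these rely on `extractSessionId`, which is never defined here; how it works depends on what `ccw cli` actually prints. A sketch assuming the CLI echoes a `session: <id>` line (that marker is an assumption, not a documented contract):
```javascript
// Hypothetical: recover a session id from CLI output for --resume chaining.
function extractSessionId(result) {
  const match = String(result).match(/session[:=]\s*([A-Za-z0-9_-]+)/i)
  return match ? match[1] : null
}
```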
### Mode Router & Result Aggregation
```javascript
async function executeAnalysis(mode, clis, taskDescription) {
switch (mode.name) {
case 'parallel': return await executeParallel(clis, taskDescription)
case 'sequential': return await executeSequential(clis, taskDescription)
case 'collaborative': return await executeCollaborative(clis, taskDescription)
case 'debate': return await executeDebate(clis, taskDescription)
case 'challenge': return await executeChallenge(clis, taskDescription)
}
}
function aggregateResults(mode, results) {
const base = { mode: mode.name, pattern: mode.pattern, tools_used: results.map(r => r.cli || 'unknown') }
switch (mode.name) {
case 'parallel':
return { ...base, findings: results.map(parseOutput), consensus: findCommonPoints(results), divergences: findDifferences(results) }
case 'sequential':
return { ...base, evolution: results.map((r, i) => ({ step: i + 1, analysis: parseOutput(r) })), finalAnalysis: parseOutput(results.at(-1)) }
case 'collaborative':
return { ...base, rounds: groupByRound(results), synthesis: extractSynthesis(results.at(-1)) }
case 'debate':
return { ...base, proposal: parseOutput(results.find(r => r.phase === 'propose')?.result),
challenges: parseOutput(results.find(r => r.phase === 'challenge')?.result),
resolution: parseOutput(results.find(r => r.phase === 'defend')?.result), confidence: calculateDebateConfidence(results) }
case 'challenge':
return { ...base, originalAnalysis: parseOutput(results.find(r => r.phase === 'analyze')?.result),
critiques: parseCritiques(results.find(r => r.phase === 'challenge')?.result), riskScore: calculateRiskScore(results) }
}
}
```
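`parseOutput`, `findCommonPoints`, and `findDifferences` are likewise placeholders. A rough sketch assuming free-form markdown output (a real implementation would parse the structured sections required by analysis-protocol.md):
```javascript
// Hypothetical aggregation helpers, assuming plain-text CLI output.
function parseOutput(result) {
  const text = String(result?.result ?? result ?? '')
  return { summary: text.split('\n')[0] || '', points: text.split('\n').filter(l => l.startsWith('- ')) }
}
function findCommonPoints(results) {
  const sets = results.map(r => new Set(parseOutput(r).points))
  return [...(sets[0] || [])].filter(p => sets.every(s => s.has(p)))
}
// findDifferences is the complement: points present in some outputs but not all.
```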
## Phase 4: User Decision
```javascript
function presentSummary(analysis) {
console.log(`## Analysis Result\n**Mode**: ${analysis.mode} (${analysis.pattern})\n**Tools**: ${analysis.tools_used.join(' → ')}`)
switch (analysis.mode) {
case 'parallel':
console.log(`### Consensus\n${analysis.consensus.map(c => `- ${c}`).join('\n')}\n### Divergences\n${analysis.divergences.map(d => `- ${d}`).join('\n')}`)
break
case 'sequential':
console.log(`### Evolution\n${analysis.evolution.map(e => `**Step ${e.step}**: ${e.analysis.summary}`).join('\n')}\n### Final\n${analysis.finalAnalysis.summary}`)
break
case 'collaborative':
console.log(`### Rounds\n${Object.entries(analysis.rounds).map(([r, a]) => `**Round ${r}**: ${a.map(x => x.cli).join(' + ')}`).join('\n')}\n### Synthesis\n${analysis.synthesis}`)
break
case 'debate':
console.log(`### Debate\n**Proposal**: ${analysis.proposal.summary}\n**Challenges**: ${analysis.challenges.points?.length || 0} points\n**Resolution**: ${analysis.resolution.summary}\n**Confidence**: ${analysis.confidence}%`)
break
case 'challenge':
console.log(`### Challenge\n**Original**: ${analysis.originalAnalysis.summary}\n**Critiques**: ${analysis.critiques.length} issues\n${analysis.critiques.map(c => `- [${c.severity}] ${c.description}`).join('\n')}\n**Risk Score**: ${analysis.riskScore}/100`)
break
}
}
AskUserQuestion({
questions: [{
question: "How to proceed?",
header: "Next Step",
options: [
{ label: "Execute directly", description: "Implement immediately" },
{ label: "Refine analysis", description: "Add constraints, re-analyze" },
{ label: "Change tools", description: "Different tool combination" },
{ label: "Cancel", description: "End workflow" }
],
multiSelect: false
}]
})
// Routing: Execute → Phase 5 | Refine → Phase 3 | Change → Phase 2 | Cancel → End
```
## Phase 5: Direct Execution
```javascript
// No IMPL_PLAN.md, no plan.json - direct implementation
const executionAgents = agents.filter(a => a.canExecute)
// Fall back to the first selected CLI when the chosen agent cannot execute directly
const executionTool = selectedAgent.canExecute ? selectedAgent : selectedCLIs[0]
if (executionTool.type === 'agent') {
Task({
subagent_type: executionTool.name,
run_in_background: false,
description: `Execute: ${taskDescription.slice(0, 30)}`,
prompt: `## Task\n${taskDescription}\n\n## Analysis Results\n${JSON.stringify(aggregatedAnalysis, null, 2)}\n\n## Instructions\n1. Apply changes to identified files\n2. Follow recommended approach\n3. Handle identified risks\n4. Verify changes work correctly`
})
} else {
Bash({
command: `ccw cli -p "
PURPOSE: Implement solution: ${taskDescription}
TASK: ${extractedTasks.join(' • ')}
MODE: write
CONTEXT: @${affectedFiles.join(' @')}
EXPECTED: Working implementation with all changes applied
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md)
" --tool ${executionTool.name} --mode write`,
run_in_background: false
})
}
```
## TodoWrite Structure
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Clarify requirements", status: "in_progress", activeForm: "Clarifying requirements" },
{ content: "Phase 2: Select tools", status: "pending", activeForm: "Selecting tools" },
{ content: "Phase 3: Multi-mode analysis", status: "pending", activeForm: "Running analysis" },
{ content: "Phase 4: User decision", status: "pending", activeForm: "Awaiting decision" },
{ content: "Phase 5: Direct execution", status: "pending", activeForm: "Executing" }
]})
```
## Iteration Patterns
| Pattern | Flow |
|---------|------|
| **Direct** | Phase 1 → 2 → 3 → 4(execute) → 5 |
| **Refinement** | Phase 3 → 4(refine) → 3 → 4 → 5 |
| **Tool Adjust** | Phase 2(adjust) → 3 → 4 → 5 |
## Error Handling
| Error | Resolution |
|-------|------------|
| CLI timeout | Retry with secondary model |
| No enabled tools | Ask user to enable tools in cli-tools.json |
| Ambiguous task | Force clarification via AskUserQuestion |
| Task still unclear after clarification | Default to first CLI + code-developer |
| Execution fails | Present error, ask user for direction |
## Comparison with multi-cli-plan
| Aspect | lite-lite-lite | multi-cli-plan |
|--------|----------------|----------------|
| **Artifacts** | None | IMPL_PLAN.md, plan.json, synthesis.json |
| **Session** | Stateless (--resume chaining) | Persistent session folder |
| **Tool Selection** | 3-step (CLI → Mode → Agent) | Config-driven fixed tools |
| **Analysis Modes** | 5 modes with --resume | Fixed synthesis rounds |
| **Best For** | Quick analysis, adversarial validation | Complex multi-step implementations |
## Post-Completion Expansion
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); each selected dimension invokes `/issue:new "{summary} - {dimension}"`
## Related Commands
```bash
/workflow:multi-cli-plan "complex task" # Full planning workflow
/workflow:lite-plan "task" # Single CLI planning
/workflow:lite-execute --in-memory # Direct execution
```


@@ -0,0 +1,510 @@
---
name: workflow:multi-cli-plan
description: Multi-CLI collaborative planning workflow with ACE context gathering and iterative cross-verification. Uses cli-discuss-agent for Gemini+Codex+Claude analysis to converge on optimal execution plan.
argument-hint: "<task description> [--max-rounds=3] [--tools=gemini,codex] [--mode=parallel|serial]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), Write(*), mcp__ace-tool__search_context(*)
---
# Multi-CLI Collaborative Planning Command
## Quick Start
```bash
# Basic usage
/workflow:multi-cli-plan "Implement user authentication"
# With options
/workflow:multi-cli-plan "Add dark mode support" --max-rounds=3
/workflow:multi-cli-plan "Refactor payment module" --tools=gemini,codex,claude
/workflow:multi-cli-plan "Fix memory leak" --mode=serial
# Resume session
/workflow:lite-execute --session=MCP-xxx
```
**Context Source**: ACE semantic search + Multi-CLI analysis
**Output Directory**: `.workflow/.multi-cli-plan/{session-id}/`
**Default Max Rounds**: 3 (may converge earlier)
**CLI Tools**: @cli-discuss-agent (analysis), @cli-lite-planning-agent (plan generation)
## What & Why
### Core Concept
Multi-CLI collaborative planning with **three-phase architecture**: ACE context gathering → Iterative multi-CLI discussion → Plan generation. The orchestrator delegates analysis to agents and handles only user decisions and session management.
**Process**:
- **Phase 1**: ACE semantic search gathers codebase context
- **Phase 2**: cli-discuss-agent orchestrates Gemini/Codex/Claude for cross-verified analysis
- **Phase 3-5**: User decision → Plan generation → Execution handoff
**vs Single-CLI Planning**:
- **Single**: One model perspective, potential blind spots
- **Multi-CLI**: Cross-verification catches inconsistencies, builds consensus on solutions
### Value Proposition
1. **Multi-Perspective Analysis**: Gemini + Codex + Claude analyze from different angles
2. **Cross-Verification**: Identify agreements/disagreements, build confidence
3. **User-Driven Decisions**: Every round ends with user decision point
4. **Iterative Convergence**: Progressive refinement until consensus reached
### Orchestrator Boundary (CRITICAL)
- **ONLY command** for multi-CLI collaborative planning
- Manages: Session state, user decisions, agent delegation, phase transitions
- Delegates: CLI execution to @cli-discuss-agent, plan generation to @cli-lite-planning-agent
### Execution Flow
```
Phase 1: Context Gathering
└─ ACE semantic search, extract keywords, build context package
Phase 2: Multi-CLI Discussion (Iterative, via @cli-discuss-agent)
├─ Round N: Agent executes Gemini + Codex + Claude
├─ Cross-verify findings, synthesize solutions
├─ Write synthesis.json to rounds/{N}/
└─ Loop until convergence or max rounds
Phase 3: Present Options
└─ Display solutions with trade-offs from agent output
Phase 4: User Decision
├─ Approve solution → Phase 5
├─ Need clarification → Return to Phase 2
└─ Change direction → Reset with feedback
Phase 5: Plan Generation (via @cli-lite-planning-agent)
├─ Generate IMPL_PLAN.md + plan.json
└─ Hand off to /workflow:lite-execute
```
### Agent Roles
| Agent | Responsibility |
|-------|---------------|
| **Orchestrator** | Session management, ACE context, user decisions, phase transitions |
| **@cli-discuss-agent** | Multi-CLI execution (Gemini/Codex/Claude), cross-verification, solution synthesis, synthesis.json output |
| **@cli-lite-planning-agent** | Task decomposition, IMPL_PLAN.md + plan.json generation |
## Core Responsibilities
### Phase 1: Context Gathering
**Session Initialization**:
```javascript
const sessionId = `MCP-${taskSlug}-${date}`
const sessionFolder = `.workflow/.multi-cli-plan/${sessionId}`
Bash(`mkdir -p ${sessionFolder}/rounds`)
```
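`taskSlug` and `date` are assumed inputs; one plausible derivation, matching the `MCP-payment-refactor-2026-01-14` example later in this document:
```javascript
// Hypothetical slug/date derivation for the MCP-{slug}-{date} naming convention.
const taskSlug = taskDescription.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/(^-|-$)/g, '').slice(0, 40)
const date = new Date().toISOString().slice(0, 10) // YYYY-MM-DD
```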
**ACE Context Queries**:
```javascript
const aceQueries = [
`Project architecture related to ${keywords}`,
`Existing implementations of ${keywords[0]}`,
`Code patterns for ${keywords} features`,
`Integration points for ${keywords[0]}`
]
// Execute via mcp__ace-tool__search_context
```
**Context Package** (passed to agent):
- `relevant_files[]` - Files identified by ACE
- `detected_patterns[]` - Code patterns found
- `architecture_insights` - Structure understanding
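A sketch of how the orchestrator might run the queries and fold results into this package (pseudocode in the document's own style; the return shape of `search_context` is an assumption):
```javascript
// Hypothetical aggregation; assumes each search returns { files, patterns, insights }.
const aceContext = { relevant_files: [], detected_patterns: [], architecture_insights: [] }
for (const query of aceQueries) {
  const res = mcp__ace-tool__search_context({ project_root_path: process.cwd(), query })
  aceContext.relevant_files.push(...(res.files || []))
  aceContext.detected_patterns.push(...(res.patterns || []))
  if (res.insights) aceContext.architecture_insights.push(res.insights)
}
```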
### Phase 2: Agent Delegation
**Core Principle**: Orchestrator only delegates and reads output - NO direct CLI execution.
**Agent Invocation**:
```javascript
Task({
subagent_type: "cli-discuss-agent",
run_in_background: false,
description: `Discussion round ${currentRound}`,
prompt: `
## Input Context
- task_description: ${taskDescription}
- round_number: ${currentRound}
- session: { id: "${sessionId}", folder: "${sessionFolder}" }
- ace_context: ${JSON.stringify(contextPackage)}
- previous_rounds: ${JSON.stringify(analysisResults)}
- user_feedback: ${userFeedback || 'None'}
- cli_config: { tools: ["gemini", "codex"], mode: "parallel", fallback_chain: ["gemini", "codex", "claude"] }
## Execution Process
1. Parse input context (handle JSON strings)
2. Check if ACE supplementary search needed
3. Build CLI prompts with context
4. Execute CLIs (parallel or serial per cli_config.mode)
5. Parse CLI outputs, handle failures with fallback
6. Perform cross-verification between CLI results
7. Synthesize solutions, calculate scores
8. Calculate convergence, generate clarification questions
9. Write synthesis.json
## Output
Write: ${sessionFolder}/rounds/${currentRound}/synthesis.json
## Completion Checklist
- [ ] All configured CLI tools executed (or fallback triggered)
- [ ] Cross-verification completed with agreements/disagreements
- [ ] 2-3 solutions generated with file:line references
- [ ] Convergence score calculated (0.0-1.0)
- [ ] synthesis.json written with all Primary Fields
`
})
```
**Read Agent Output**:
```javascript
const synthesis = JSON.parse(Read(`${sessionFolder}/rounds/${round}/synthesis.json`))
// Access top-level fields: solutions, convergence, cross_verification, clarification_questions
```
**Convergence Decision**:
```javascript
if (synthesis.convergence.recommendation === 'converged') {
// Proceed to Phase 3
} else if (synthesis.convergence.recommendation === 'user_input_needed') {
// Collect user feedback, return to Phase 2
} else {
// Continue to next round if new_insights && round < maxRounds
}
```
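Taken together, the Phase 2 loop the orchestrator runs is roughly as follows (a sketch; `runDiscussionRound` stands in for the Task invocation above, `collectUserFeedback` for an AskUserQuestion round):
```javascript
// Hypothetical orchestration loop for Phase 2.
let round = 1
while (round <= maxRounds) {
  runDiscussionRound(round) // Task({ subagent_type: "cli-discuss-agent", ... })
  const synthesis = JSON.parse(Read(`${sessionFolder}/rounds/${round}/synthesis.json`))
  const { recommendation, new_insights } = synthesis.convergence
  if (recommendation === 'converged') break              // → Phase 3
  if (recommendation === 'user_input_needed') collectUserFeedback()
  else if (!new_insights) break                          // no progress → present best options
  round++
}
```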
### Phase 3: Present Options
**Display from Agent Output** (no processing):
```javascript
console.log(`
## Solution Options
${synthesis.solutions.map((s, i) => `
**Option ${i+1}: ${s.name}**
Source: ${s.source_cli.join(' + ')}
Effort: ${s.effort} | Risk: ${s.risk}
Pros: ${s.pros.join(', ')}
Cons: ${s.cons.join(', ')}
Files: ${s.affected_files.slice(0,3).map(f => `${f.file}:${f.line}`).join(', ')}
`).join('\n')}
## Cross-Verification
Agreements: ${synthesis.cross_verification.agreements.length}
Disagreements: ${synthesis.cross_verification.disagreements.length}
`)
```
### Phase 4: User Decision
**Decision Options**:
```javascript
AskUserQuestion({
questions: [{
question: "Which solution approach?",
header: "Solution",
options: solutions.map((s, i) => ({
label: `Option ${i+1}: ${s.name}`,
description: `${s.effort} effort, ${s.risk} risk`
})).concat([
{ label: "Need More Analysis", description: "Return to Phase 2" }
])
}]
})
```
**Routing**:
- Approve → Phase 5
- Need More Analysis → Phase 2 with feedback
- Add constraints → Collect details, then Phase 5
### Phase 5: Plan Generation
**Step 1: Build Context-Package** (Orchestrator responsibility):
```javascript
// Extract key information from user decision and synthesis
const contextPackage = {
// Core solution details
solution: {
name: selectedSolution.name,
source_cli: selectedSolution.source_cli,
feasibility: selectedSolution.feasibility,
effort: selectedSolution.effort,
risk: selectedSolution.risk,
summary: selectedSolution.summary
},
// Implementation plan (tasks, flow, milestones)
implementation_plan: selectedSolution.implementation_plan,
// Dependencies
dependencies: selectedSolution.dependencies || { internal: [], external: [] },
// Technical concerns
technical_concerns: selectedSolution.technical_concerns || [],
// Consensus from cross-verification
consensus: {
agreements: synthesis.cross_verification.agreements,
resolved_conflicts: synthesis.cross_verification.resolution
},
// User constraints (from Phase 4 feedback)
constraints: userConstraints || [],
// Task context
task_description: taskDescription,
session_id: sessionId
}
// Write context-package for traceability
Write(`${sessionFolder}/context-package.json`, JSON.stringify(contextPackage, null, 2))
```
**Context-Package Schema**:
| Field | Type | Description |
|-------|------|-------------|
| `solution` | object | User-selected solution from synthesis |
| `solution.name` | string | Solution identifier |
| `solution.feasibility` | number | Viability score (0-1) |
| `solution.summary` | string | Brief analysis summary |
| `implementation_plan` | object | Task breakdown with flow and dependencies |
| `implementation_plan.approach` | string | High-level technical strategy |
| `implementation_plan.tasks[]` | array | Discrete tasks with id, name, depends_on, files |
| `implementation_plan.execution_flow` | string | Task sequence (e.g., "T1 → T2 → T3") |
| `implementation_plan.milestones` | string[] | Key checkpoints |
| `dependencies` | object | Module and package dependencies |
| `technical_concerns` | string[] | Risks and blockers |
| `consensus` | object | Cross-verified agreements from multi-CLI |
| `constraints` | string[] | User-specified constraints from Phase 4 |
```json
{
"solution": {
"name": "Strategy Pattern Refactoring",
"source_cli": ["gemini", "codex"],
"feasibility": 0.88,
"effort": "medium",
"risk": "low",
"summary": "Extract payment gateway interface, implement strategy pattern for multi-gateway support"
},
"implementation_plan": {
"approach": "Define interface → Create concrete strategies → Implement factory → Migrate existing code",
"tasks": [
{"id": "T1", "name": "Define PaymentGateway interface", "depends_on": [], "files": [{"file": "src/types/payment.ts", "line": 1, "action": "create"}], "key_point": "Include all existing Stripe methods"},
{"id": "T2", "name": "Implement StripeGateway", "depends_on": ["T1"], "files": [{"file": "src/payment/stripe.ts", "line": 1, "action": "create"}], "key_point": "Wrap existing logic"},
{"id": "T3", "name": "Create GatewayFactory", "depends_on": ["T1"], "files": [{"file": "src/payment/factory.ts", "line": 1, "action": "create"}], "key_point": null},
{"id": "T4", "name": "Migrate processor to use factory", "depends_on": ["T2", "T3"], "files": [{"file": "src/payment/processor.ts", "line": 45, "action": "modify"}], "key_point": "Backward compatible"}
],
"execution_flow": "T1 → (T2 | T3) → T4",
"milestones": ["Interface defined", "Gateway implementations complete", "Migration done"]
},
"dependencies": {
"internal": ["@/lib/payment-gateway", "@/types/payment"],
"external": ["stripe@^14.0.0"]
},
"technical_concerns": ["Existing tests must pass", "No breaking API changes"],
"consensus": {
"agreements": ["Use strategy pattern", "Keep existing API"],
"resolved_conflicts": "Factory over DI for simpler integration"
},
"constraints": ["backward compatible", "no breaking changes to PaymentResult type"],
"task_description": "Refactor payment processing for multi-gateway support",
"session_id": "MCP-payment-refactor-2026-01-14"
}
```
**Step 2: Invoke Planning Agent**:
```javascript
Task({
subagent_type: "cli-lite-planning-agent",
run_in_background: false,
description: "Generate implementation plan",
prompt: `
## Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json
## Context-Package (from orchestrator)
${JSON.stringify(contextPackage, null, 2)}
## Execution Process
1. Read plan-json-schema.json for output structure
2. Read project-tech.json and project-guidelines.json
3. Parse context-package fields:
- solution: name, feasibility, summary
- implementation_plan: tasks[], execution_flow, milestones
- dependencies: internal[], external[]
- technical_concerns: risks/blockers
- consensus: agreements, resolved_conflicts
- constraints: user requirements
4. Use implementation_plan.tasks[] as task foundation
5. Preserve task dependencies (depends_on) and execution_flow
6. Expand tasks with detailed acceptance criteria
7. Generate IMPL_PLAN.md documenting milestones and key_points
8. Generate plan.json following schema exactly
## Output
- ${sessionFolder}/IMPL_PLAN.md
- ${sessionFolder}/plan.json
## Completion Checklist
- [ ] IMPL_PLAN.md documents approach, milestones, technical_concerns
- [ ] plan.json preserves task dependencies from implementation_plan
- [ ] Task execution order follows execution_flow
- [ ] Key_points reflected in task descriptions
- [ ] User constraints applied to implementation
- [ ] Acceptance criteria are testable
`
})
```
**Hand off to Execution**:
```javascript
if (userConfirms) {
SlashCommand("/workflow:lite-execute --in-memory")
}
```
## Output File Structure
```
.workflow/.multi-cli-plan/{MCP-task-slug-YYYY-MM-DD}/
├── session-state.json # Session tracking (orchestrator)
├── rounds/
│ ├── 1/synthesis.json # Round 1 analysis (cli-discuss-agent)
│ ├── 2/synthesis.json # Round 2 analysis (cli-discuss-agent)
│ └── .../
├── context-package.json # Extracted context for planning (orchestrator)
├── IMPL_PLAN.md # Documentation (cli-lite-planning-agent)
└── plan.json # Structured plan (cli-lite-planning-agent)
```
**File Producers**:
| File | Producer | Content |
|------|----------|---------|
| `session-state.json` | Orchestrator | Session metadata, rounds, decisions |
| `rounds/*/synthesis.json` | cli-discuss-agent | Solutions, convergence, cross-verification |
| `context-package.json` | Orchestrator | Extracted solution, dependencies, consensus for planning |
| `IMPL_PLAN.md` | cli-lite-planning-agent | Human-readable plan |
| `plan.json` | cli-lite-planning-agent | Structured tasks for execution |
## synthesis.json Schema
```json
{
"round": 1,
"solutions": [{
"name": "Solution Name",
"source_cli": ["gemini", "codex"],
"feasibility": 0.85,
"effort": "low|medium|high",
"risk": "low|medium|high",
"summary": "Brief analysis summary",
"implementation_plan": {
"approach": "High-level technical approach",
"tasks": [
{"id": "T1", "name": "Task", "depends_on": [], "files": [], "key_point": "..."}
],
"execution_flow": "T1 → T2 → T3",
"milestones": ["Checkpoint 1", "Checkpoint 2"]
},
"dependencies": {"internal": [], "external": []},
"technical_concerns": ["Risk 1", "Blocker 2"]
}],
"convergence": {
"score": 0.85,
"new_insights": false,
"recommendation": "converged|continue|user_input_needed"
},
"cross_verification": {
"agreements": [],
"disagreements": [],
"resolution": "..."
},
"clarification_questions": []
}
```
**Key Planning Fields**:
| Field | Purpose |
|-------|---------|
| `feasibility` | Viability score (0-1) |
| `implementation_plan.tasks[]` | Discrete tasks with dependencies |
| `implementation_plan.execution_flow` | Task sequence visualization |
| `implementation_plan.milestones` | Key checkpoints |
| `technical_concerns` | Risks and blockers |
**Note**: Solutions ranked by internal scoring (array order = priority)
## TodoWrite Structure
**Initialization**:
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Context Gathering", status: "in_progress", activeForm: "Gathering context" },
{ content: "Phase 2: Multi-CLI Discussion", status: "pending", activeForm: "Running discussion" },
{ content: "Phase 3: Present Options", status: "pending", activeForm: "Presenting options" },
{ content: "Phase 4: User Decision", status: "pending", activeForm: "Awaiting decision" },
{ content: "Phase 5: Plan Generation", status: "pending", activeForm: "Generating plan" }
]})
```
**During Discussion Rounds**:
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Context Gathering", status: "completed", activeForm: "Gathering context" },
{ content: "Phase 2: Multi-CLI Discussion", status: "in_progress", activeForm: "Running discussion" },
{ content: " → Round 1: Initial analysis", status: "completed", activeForm: "Analyzing" },
{ content: " → Round 2: Deep verification", status: "in_progress", activeForm: "Verifying" },
{ content: "Phase 3: Present Options", status: "pending", activeForm: "Presenting options" },
// ...
]})
```
## Error Handling
| Error | Resolution |
|-------|------------|
| ACE search fails | Fall back to Glob/Grep for file discovery |
| Agent fails | Retry once, then present partial results |
| CLI timeout (in agent) | Agent uses fallback: gemini → codex → claude |
| No convergence | Present best options, flag uncertainty |
| synthesis.json parse error | Request agent retry |
| User cancels | Save session for later resumption |
## Configuration
| Flag | Default | Description |
|------|---------|-------------|
| `--max-rounds` | 3 | Maximum discussion rounds |
| `--tools` | gemini,codex | CLI tools for analysis |
| `--mode` | parallel | Execution mode: parallel or serial |
| `--auto-execute` | false | Auto-execute after approval |
## Best Practices
1. **Be Specific**: Detailed task descriptions improve ACE context quality
2. **Provide Feedback**: Use clarification rounds to refine requirements
3. **Trust Cross-Verification**: Multi-CLI consensus indicates high confidence
4. **Review Trade-offs**: Consider pros/cons before selecting solution
5. **Check synthesis.json**: Review agent output for detailed analysis
6. **Iterate When Needed**: Don't hesitate to request more analysis
## Related Commands
```bash
# Resume saved session
/workflow:lite-execute --session=MCP-xxx
# Simpler single-round planning
/workflow:lite-plan "task description"
# Issue-driven discovery
/issue:discover-by-prompt "find issues"
# View session files
cat .workflow/.multi-cli-plan/{session-id}/IMPL_PLAN.md
cat .workflow/.multi-cli-plan/{session-id}/rounds/1/synthesis.json
```


@@ -585,6 +585,10 @@ TodoWrite({
- Mark completed immediately after each group finishes
- Update parent phase status when all child items complete
## Post-Completion Expansion
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); each selected dimension invokes `/issue:new "{summary} - {dimension}"`
## Best Practices
1. **Trust AI Planning**: Planning agent's grouping and execution strategy are based on dependency analysis


@@ -491,6 +491,10 @@ The orchestrator automatically creates git commits at key checkpoints to enable
**Note**: Final session completion creates additional commit with full summary.
## Post-Completion Expansion
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); each selected dimension invokes `/issue:new "{summary} - {dimension}"`
## Best Practices
1. **Default Settings Work**: 10 iterations sufficient for most cases


@@ -1,139 +1,86 @@
---
name: ccw-help
description: CCW command help system. Search, browse, recommend commands. Triggers "ccw-help", "ccw-issue".
allowed-tools: Read, Grep, Glob, AskUserQuestion
version: 7.0.0
---
# CCW-Help Skill
CCW command help system providing command search, recommendations, and documentation viewing.
## Trigger Conditions
- Keywords: "ccw-help", "ccw-issue", "帮助", "命令", "怎么用"
- Scenarios: asking about command usage, searching for commands, requesting next-step recommendations
## Operation Modes
### Mode 1: Command Search
**Triggers**: "搜索命令", "find command", "search"
**Process**:
1. Query the `command.json` commands array
2. Filter by name, description, and category
3. Present the top 3-5 relevant commands
### Mode 2: Smart Recommendations
**Triggers**: "下一步", "what's next", "推荐"
**Process**:
1. Query the command's `flow.next_steps` in `command.json`
2. Explain WHY each recommendation fits
### Mode 3: Documentation
**Triggers**: "怎么用", "how to use", "详情"
**Process**:
1. Locate the command in `command.json`
2. Read the source file via its `source` path
3. Provide context-specific examples
### Mode 4: Beginner Onboarding
**Triggers**: "新手", "getting started", "常用命令"
**Process**:
1. Query the `essential_commands` array
2. Guide the user to the appropriate workflow entry point
### Mode 5: Issue Reporting
**Triggers**: "ccw-issue", "报告 bug"
**Process**:
1. Use AskUserQuestion to gather context
2. Generate structured issue template
3. Provide actionable next steps
## Data Source
Single source of truth: **[command.json](command.json)**
| Field | Purpose |
|-------|---------|
| `commands[]` | Flat command list with metadata |
| `commands[].flow` | Relationships (next_steps, prerequisites) |
| `commands[].essential` | Essential flag for onboarding |
| `agents[]` | Agent directory |
| `essential_commands[]` | Core commands list |
### Source Path Format
The `source` field is a relative path, resolved from the `skills/ccw-help/` directory:
```json
{
"name": "workflow:lite-plan",
"name": "lite-plan",
"source": "../../../commands/workflow/lite-plan.md"
}
```
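A minimal sketch of how Mode 1 might query this structure (`Read` as used elsewhere in this document; the filter logic is illustrative, and the default mirrors `max_results` in the configuration below):
```javascript
// Hypothetical Mode 1 lookup over command.json (run from skills/ccw-help/).
const index = JSON.parse(Read('command.json'))
function searchCommands(keyword, maxResults = 5) {
  const k = keyword.toLowerCase()
  return index.commands
    .filter(c => [c.name, c.description, c.category].join(' ').toLowerCase().includes(k))
    .slice(0, maxResults)
}
```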
## Configuration
| Parameter | Default | Description |
|------|--------|------|
| max_results | 5 | Maximum number of search results returned |
| show_source | true | Whether to display source file paths |
## CLI Integration
| Scenario | CLI Hint | Purpose |
|------|----------|------|
| Complex queries | `gemini --mode analysis` | Multi-file analysis and comparison |
| Documentation generation | - | Read source files directly |
## Slash Commands
```bash
@@ -145,33 +92,25 @@ CCW-Help 使用 JSON 索引实现快速查询(无 reference 文件夹,直接
## Maintenance
### Update Index
```bash
cd D:/Claude_dms3/.claude/skills/ccw-help
python scripts/analyze_commands.py
```
What the script does: scan the `commands/` and `agents/` directories and generate a unified `command.json`.
## Statistics
- **Commands**: 88+
- **Agents**: 16
- **Essential**: 10 core commands
## Core Principle
**Intelligent synthesis, not template copying**
- Understand the user's specific situation
- Integrate information from multiple sources
- Tailor examples and explanations


@@ -0,0 +1,511 @@
{
"_metadata": {
"version": "2.0.0",
"total_commands": 88,
"total_agents": 16,
"description": "Unified CCW-Help command index"
},
"essential_commands": [
"/workflow:lite-plan",
"/workflow:lite-fix",
"/workflow:plan",
"/workflow:execute",
"/workflow:session:start",
"/workflow:review-session-cycle",
"/memory:docs",
"/workflow:brainstorm:artifacts",
"/workflow:action-plan-verify",
"/version"
],
"commands": [
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning with in-memory plan, dispatches to lite-execute",
"arguments": "[-e|--explore] \"task\"|file.md",
"category": "workflow",
"difficulty": "Intermediate",
"essential": true,
"flow": {
"next_steps": ["/workflow:lite-execute"],
"alternatives": ["/workflow:plan"]
},
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute based on in-memory plan or prompt",
"arguments": "[--in-memory] \"task\"|file-path",
"category": "workflow",
"difficulty": "Intermediate",
"flow": {
"prerequisites": ["/workflow:lite-plan", "/workflow:lite-fix"]
},
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix with optional hotfix mode",
"arguments": "[--hotfix] \"bug description\"",
"category": "workflow",
"difficulty": "Intermediate",
"essential": true,
"flow": {
"next_steps": ["/workflow:lite-execute"],
"alternatives": ["/workflow:lite-plan"]
},
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning with task JSON generation",
"arguments": "\"description\"|file.md",
"category": "workflow",
"difficulty": "Intermediate",
"essential": true,
"flow": {
"next_steps": ["/workflow:action-plan-verify", "/workflow:execute"],
"alternatives": ["/workflow:tdd-plan"]
},
"source": "../../../commands/workflow/plan.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution with DAG parallel processing",
"arguments": "[--resume-session=\"session-id\"]",
"category": "workflow",
"difficulty": "Intermediate",
"essential": true,
"flow": {
"prerequisites": ["/workflow:plan", "/workflow:tdd-plan"],
"next_steps": ["/workflow:review"]
},
"source": "../../../commands/workflow/execute.md"
},
{
"name": "action-plan-verify",
"command": "/workflow:action-plan-verify",
"description": "Cross-artifact consistency analysis",
"arguments": "[--session session-id]",
"category": "workflow",
"difficulty": "Intermediate",
"essential": true,
"flow": {
"prerequisites": ["/workflow:plan"],
"next_steps": ["/workflow:execute"]
},
"source": "../../../commands/workflow/action-plan-verify.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state",
"arguments": "[--regenerate]",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with stale artifact discovery",
"arguments": "[--dry-run] [\"focus\"]",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug",
"command": "/workflow:debug",
"description": "Hypothesis-driven debugging with NDJSON logging",
"arguments": "\"bug description\"",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning",
"arguments": "[--session id] [task-id] \"requirements\"",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "session:start",
"command": "/workflow:session:start",
"description": "Start or discover workflow sessions",
"arguments": "[--type <workflow|review|tdd>] [--auto|--new]",
"category": "workflow",
"subcategory": "session",
"difficulty": "Intermediate",
"essential": true,
"flow": {
"next_steps": ["/workflow:plan", "/workflow:execute"]
},
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "session:list",
"command": "/workflow:session:list",
"description": "List all workflow sessions",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "session:resume",
"command": "/workflow:session:resume",
"description": "Resume paused workflow session",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/resume.md"
},
{
"name": "session:complete",
"command": "/workflow:session:complete",
"description": "Mark session complete and archive",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/complete.md"
},
{
"name": "brainstorm:auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming with multi-role analysis",
"arguments": "\"topic\" [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "brainstorm:artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification with guidance specification",
"arguments": "\"topic\" [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"difficulty": "Intermediate",
"essential": true,
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "brainstorm:synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Refine role analyses through Q&A",
"arguments": "[--session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD planning with Red-Green-Refactor cycles",
"arguments": "\"feature\"|file.md",
"category": "workflow",
"difficulty": "Advanced",
"flow": {
"next_steps": ["/workflow:execute", "/workflow:tdd-verify"],
"alternatives": ["/workflow:plan"]
},
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"description": "Verify TDD compliance with coverage analysis",
"arguments": "[session-id]",
"category": "workflow",
"difficulty": "Advanced",
"flow": {
"prerequisites": ["/workflow:execute"]
},
"source": "../../../commands/workflow/tdd-verify.md"
},
{
"name": "review",
"command": "/workflow:review",
"description": "Post-implementation review (security/architecture/quality)",
"arguments": "[--type=<type>] [session-id]",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Multi-dimensional code review across 7 dimensions",
"arguments": "[session-id] [--dimensions=...]",
"category": "workflow",
"difficulty": "Intermediate",
"essential": true,
"flow": {
"prerequisites": ["/workflow:execute"],
"next_steps": ["/workflow:review-fix"]
},
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "review-module-cycle",
"command": "/workflow:review-module-cycle",
"description": "Module-based multi-dimensional review",
"arguments": "<path-pattern> [--dimensions=...]",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review-fix",
"command": "/workflow:review-fix",
"description": "Automated fixing of review findings",
"arguments": "<export-file|review-dir>",
"category": "workflow",
"difficulty": "Intermediate",
"flow": {
"prerequisites": ["/workflow:review-session-cycle", "/workflow:review-module-cycle"]
},
"source": "../../../commands/workflow/review-fix.md"
},
{
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Generate test session from implementation",
"arguments": "source-session-id",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-gen.md"
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix session with strategy",
"arguments": "session-id|\"description\"|file",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-fix-gen.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix with iterative cycles",
"arguments": "[--resume-session=id] [--max-iterations=N]",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "issue:new",
"command": "/issue:new",
"description": "Create issue from GitHub URL or text",
"arguments": "<url|text> [--priority 1-5]",
"category": "issue",
"difficulty": "Intermediate",
"source": "../../../commands/issue/new.md"
},
{
"name": "issue:discover",
"command": "/issue:discover",
"description": "Discover issues from multiple perspectives",
"arguments": "<path> [--perspectives=...]",
"category": "issue",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover.md"
},
{
"name": "issue:plan",
"command": "/issue:plan",
"description": "Batch plan issue resolution",
"arguments": "--all-pending|<ids>",
"category": "issue",
"difficulty": "Intermediate",
"flow": {
"next_steps": ["/issue:queue"]
},
"source": "../../../commands/issue/plan.md"
},
{
"name": "issue:queue",
"command": "/issue:queue",
"description": "Form execution queue from solutions",
"arguments": "[--rebuild]",
"category": "issue",
"difficulty": "Intermediate",
"flow": {
"prerequisites": ["/issue:plan"],
"next_steps": ["/issue:execute"]
},
"source": "../../../commands/issue/queue.md"
},
{
"name": "issue:execute",
"command": "/issue:execute",
"description": "Execute queue with DAG parallel",
"arguments": "[--worktree]",
"category": "issue",
"difficulty": "Intermediate",
"flow": {
"prerequisites": ["/issue:queue"]
},
"source": "../../../commands/issue/execute.md"
},
{
"name": "docs",
"command": "/memory:docs",
"description": "Plan documentation workflow",
"arguments": "[path] [--tool <tool>]",
"category": "memory",
"difficulty": "Intermediate",
"essential": true,
"flow": {
"next_steps": ["/workflow:execute"]
},
"source": "../../../commands/memory/docs.md"
},
{
"name": "update-related",
"command": "/memory:update-related",
"description": "Update docs for git-changed modules",
"arguments": "[--tool <tool>]",
"category": "memory",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-related.md"
},
{
"name": "update-full",
"command": "/memory:update-full",
"description": "Update all CLAUDE.md files",
"arguments": "[--tool <tool>]",
"category": "memory",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-full.md"
},
{
"name": "skill-memory",
"command": "/memory:skill-memory",
"description": "Generate SKILL.md with loading index",
"arguments": "[path] [--regenerate]",
"category": "memory",
"difficulty": "Intermediate",
"source": "../../../commands/memory/skill-memory.md"
},
{
"name": "load-skill-memory",
"command": "/memory:load-skill-memory",
"description": "Activate SKILL package for task",
"arguments": "[skill_name] \"task intent\"",
"category": "memory",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load-skill-memory.md"
},
{
"name": "load",
"command": "/memory:load",
"description": "Load project context via CLI",
"arguments": "[--tool <tool>] \"context\"",
"category": "memory",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load.md"
},
{
"name": "compact",
"command": "/memory:compact",
"description": "Compact session memory for recovery",
"arguments": "[description]",
"category": "memory",
"difficulty": "Intermediate",
"source": "../../../commands/memory/compact.md"
},
{
"name": "task:create",
"command": "/task:create",
"description": "Generate task JSON from description",
"arguments": "\"task title\"",
"category": "task",
"difficulty": "Intermediate",
"source": "../../../commands/task/create.md"
},
{
"name": "task:execute",
"command": "/task:execute",
"description": "Execute task JSON with agent",
"arguments": "task-id",
"category": "task",
"difficulty": "Intermediate",
"source": "../../../commands/task/execute.md"
},
{
"name": "task:breakdown",
"command": "/task:breakdown",
"description": "Decompose task into subtasks",
"arguments": "task-id",
"category": "task",
"difficulty": "Intermediate",
"source": "../../../commands/task/breakdown.md"
},
{
"name": "task:replan",
"command": "/task:replan",
"description": "Update task with new requirements",
"arguments": "task-id [\"text\"|file]",
"category": "task",
"difficulty": "Intermediate",
"source": "../../../commands/task/replan.md"
},
{
"name": "version",
"command": "/version",
"description": "Display version and check updates",
"arguments": "",
"category": "general",
"difficulty": "Beginner",
"essential": true,
"source": "../../../commands/version.md"
},
{
"name": "enhance-prompt",
"command": "/enhance-prompt",
"description": "Transform prompts with session memory",
"arguments": "user input",
"category": "general",
"difficulty": "Intermediate",
"source": "../../../commands/enhance-prompt.md"
}
],
"agents": [
{ "name": "action-planning-agent", "description": "Task planning and generation", "source": "../../../agents/action-planning-agent.md" },
{ "name": "cli-execution-agent", "description": "CLI tool execution", "source": "../../../agents/cli-execution-agent.md" },
{ "name": "cli-explore-agent", "description": "Codebase exploration", "source": "../../../agents/cli-explore-agent.md" },
{ "name": "cli-lite-planning-agent", "description": "Lightweight planning", "source": "../../../agents/cli-lite-planning-agent.md" },
{ "name": "cli-planning-agent", "description": "CLI-based planning", "source": "../../../agents/cli-planning-agent.md" },
{ "name": "code-developer", "description": "Code implementation", "source": "../../../agents/code-developer.md" },
{ "name": "conceptual-planning-agent", "description": "Conceptual analysis", "source": "../../../agents/conceptual-planning-agent.md" },
{ "name": "context-search-agent", "description": "Context discovery", "source": "../../../agents/context-search-agent.md" },
{ "name": "doc-generator", "description": "Documentation generation", "source": "../../../agents/doc-generator.md" },
{ "name": "issue-plan-agent", "description": "Issue planning", "source": "../../../agents/issue-plan-agent.md" },
{ "name": "issue-queue-agent", "description": "Issue queue formation", "source": "../../../agents/issue-queue-agent.md" },
{ "name": "memory-bridge", "description": "Documentation coordination", "source": "../../../agents/memory-bridge.md" },
{ "name": "test-context-search-agent", "description": "Test context collection", "source": "../../../agents/test-context-search-agent.md" },
{ "name": "test-fix-agent", "description": "Test execution and fixing", "source": "../../../agents/test-fix-agent.md" },
{ "name": "ui-design-agent", "description": "UI design and prototyping", "source": "../../../agents/ui-design-agent.md" },
{ "name": "universal-executor", "description": "Universal task execution", "source": "../../../agents/universal-executor.md" }
],
"categories": ["workflow", "issue", "memory", "task", "general", "cli"]
}


@@ -1,82 +0,0 @@
[
{
"name": "action-planning-agent",
"description": "|",
"source": "../../../agents/action-planning-agent.md"
},
{
"name": "cli-execution-agent",
"description": "|",
"source": "../../../agents/cli-execution-agent.md"
},
{
"name": "cli-explore-agent",
"description": "|",
"source": "../../../agents/cli-explore-agent.md"
},
{
"name": "cli-lite-planning-agent",
"description": "|",
"source": "../../../agents/cli-lite-planning-agent.md"
},
{
"name": "cli-planning-agent",
"description": "|",
"source": "../../../agents/cli-planning-agent.md"
},
{
"name": "code-developer",
"description": "|",
"source": "../../../agents/code-developer.md"
},
{
"name": "conceptual-planning-agent",
"description": "|",
"source": "../../../agents/conceptual-planning-agent.md"
},
{
"name": "context-search-agent",
"description": "|",
"source": "../../../agents/context-search-agent.md"
},
{
"name": "doc-generator",
"description": "|",
"source": "../../../agents/doc-generator.md"
},
{
"name": "issue-plan-agent",
"description": "|",
"source": "../../../agents/issue-plan-agent.md"
},
{
"name": "issue-queue-agent",
"description": "|",
"source": "../../../agents/issue-queue-agent.md"
},
{
"name": "memory-bridge",
"description": "Execute complex project documentation updates using script coordination",
"source": "../../../agents/memory-bridge.md"
},
{
"name": "test-context-search-agent",
"description": "|",
"source": "../../../agents/test-context-search-agent.md"
},
{
"name": "test-fix-agent",
"description": "|",
"source": "../../../agents/test-fix-agent.md"
},
{
"name": "ui-design-agent",
"description": "|",
"source": "../../../agents/ui-design-agent.md"
},
{
"name": "universal-executor",
"description": "|",
"source": "../../../agents/universal-executor.md"
}
]


@@ -1,882 +0,0 @@
[
{
"name": "cli-init",
"command": "/cli:cli-init",
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
"category": "cli",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
},
{
"name": "enhance-prompt",
"command": "/enhance-prompt",
"description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
"arguments": "user input to enhance",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/enhance-prompt.md"
},
{
"name": "issue:discover",
"command": "/issue:discover",
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
"arguments": "<path-pattern> [--perspectives=bug,ux,...] [--external]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover.md"
},
{
"name": "execute",
"command": "/issue:execute",
"description": "Execute queue with codex using DAG-based parallel orchestration (solution-level)",
"arguments": "[--worktree] [--queue <queue-id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/issue/execute.md"
},
{
"name": "new",
"command": "/issue:new",
"description": "Create structured issue from GitHub URL or text description",
"arguments": "<github-url | text-description> [--priority 1-5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/new.md"
},
{
"name": "plan",
"command": "/issue:plan",
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
"arguments": "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3] ",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/plan.md"
},
{
"name": "queue",
"command": "/issue:queue",
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
"arguments": "[--rebuild] [--issue <id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/queue.md"
},
{
"name": "code-map-memory",
"command": "/memory:code-map-memory",
"description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)",
"arguments": "\\\"feature-keyword\\\" [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/code-map-memory.md"
},
{
"name": "compact",
"command": "/memory:compact",
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
"arguments": "[optional: session description]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/compact.md"
},
{
"name": "docs-full-cli",
"command": "/memory:docs-full-cli",
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[path] [--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-full-cli.md"
},
{
"name": "docs-related-cli",
"command": "/memory:docs-related-cli",
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
"arguments": "[--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-related-cli.md"
},
{
"name": "docs",
"command": "/memory:docs",
"description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs.md"
},
{
"name": "load-skill-memory",
"command": "/memory:load-skill-memory",
"description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords",
"arguments": "[skill_name] \\\"task intent description\\",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load-skill-memory.md"
},
{
"name": "load",
"command": "/memory:load",
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
"arguments": "[--tool gemini|qwen] \\\"task context description\\",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load.md"
},
{
"name": "skill-memory",
"command": "/memory:skill-memory",
"description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/skill-memory.md"
},
{
"name": "style-skill-memory",
"command": "/memory:style-skill-memory",
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
"arguments": "[package-name] [--regenerate]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/style-skill-memory.md"
},
{
"name": "swagger-docs",
"command": "/memory:swagger-docs",
"description": "Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/swagger-docs.md"
},
{
"name": "tech-research-rules",
"command": "/memory:tech-research-rules",
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/tech-research-rules.md"
},
{
"name": "update-full",
"command": "/memory:update-full",
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-full.md"
},
{
"name": "update-related",
"command": "/memory:update-related",
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
"arguments": "[--tool gemini|qwen|codex]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-related.md"
},
{
"name": "workflow-skill-memory",
"command": "/memory:workflow-skill-memory",
"description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)",
"arguments": "session <session-id> | all",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/workflow-skill-memory.md"
},
{
"name": "breakdown",
"command": "/task:breakdown",
"description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/task/breakdown.md"
},
{
"name": "create",
"command": "/task:create",
"description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
"arguments": "\\\"task title\\",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/create.md"
},
{
"name": "execute",
"command": "/task:execute",
"description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/execute.md"
},
{
"name": "replan",
"command": "/task:replan",
"description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json",
"arguments": "task-id [\\\"text\\\"|file.md] | --batch [verification-report.md]",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/task/replan.md"
},
{
"name": "version",
"command": "/version",
"description": "Display Claude Code version information and check for updates",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/version.md"
},
{
"name": "action-plan-verify",
"command": "/workflow:action-plan-verify",
"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/action-plan-verify.md"
},
{
"name": "api-designer",
"command": "/workflow:brainstorm:api-designer",
"description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/api-designer.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
"arguments": "topic or challenge description\" [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "data-architect",
"command": "/workflow:brainstorm:data-architect",
"description": "Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/data-architect.md"
},
{
"name": "product-manager",
"command": "/workflow:brainstorm:product-manager",
"description": "Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-manager.md"
},
{
"name": "product-owner",
"command": "/workflow:brainstorm:product-owner",
"description": "Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-owner.md"
},
{
"name": "scrum-master",
"command": "/workflow:brainstorm:scrum-master",
"description": "Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/scrum-master.md"
},
{
"name": "subject-matter-expert",
"command": "/workflow:brainstorm:subject-matter-expert",
"description": "Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/subject-matter-expert.md"
},
{
"name": "synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
},
{
"name": "system-architect",
"command": "/workflow:brainstorm:system-architect",
"description": "Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/system-architect.md"
},
{
"name": "ui-designer",
"command": "/workflow:brainstorm:ui-designer",
"description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ui-designer.md"
},
{
"name": "ux-expert",
"command": "/workflow:brainstorm:ux-expert",
"description": "Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ux-expert.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
"arguments": "[--dry-run] [\\\"focus area\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug",
"command": "/workflow:debug",
"description": "Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved",
"arguments": "\\\"bug description or error message\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
"arguments": "[--regenerate]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
"arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[--hotfix] \\\"bug description or issue reference\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
"arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "review-fix",
"command": "/workflow:review-fix",
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-fix.md"
},
{
"name": "review-module-cycle",
"command": "/workflow:review-module-cycle",
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "review",
"command": "/workflow:review",
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review.md"
},
{
"name": "complete",
"command": "/workflow:session:complete",
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/complete.md"
},
{
"name": "list",
"command": "/workflow:session:list",
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "resume",
"command": "/workflow:session:resume",
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/resume.md"
},
{
"name": "solidify",
"command": "/workflow:session:solidify",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
"arguments": "[--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/solidify.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis",
"arguments": "[optional: WFS-session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-verify.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-fix-gen.md"
},
{
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-gen.md"
},
{
"name": "conflict-resolution",
"command": "/workflow:tools:conflict-resolution",
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
"arguments": "--session WFS-session-id --context path/to/context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/conflict-resolution.md"
},
{
"name": "gather",
"command": "/workflow:tools:gather",
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
"arguments": "--session WFS-session-id \\\"task description\\",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/context-gather.md"
},
{
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-agent.md"
},
{
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
},
{
"name": "tdd-coverage-analysis",
"command": "/workflow:tools:tdd-coverage-analysis",
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
},
{
"name": "test-concept-enhanced",
"command": "/workflow:tools:test-concept-enhanced",
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
},
{
"name": "test-context-gather",
"command": "/workflow:tools:test-context-gather",
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-context-gather.md"
},
{
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-task-generate.md"
},
{
"name": "animation-extract",
"command": "/workflow:ui-design:animation-extract",
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/animation-extract.md"
},
{
"name": "workflow:ui-design:codify-style",
"command": "/workflow:ui-design:codify-style",
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/codify-style.md"
},
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/design-sync.md"
},
{
"name": "explore-auto",
"command": "/workflow:ui-design:explore-auto",
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/explore-auto.md"
},
{
"name": "generate",
"command": "/workflow:ui-design:generate",
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
"arguments": "[--design-id <id>] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/generate.md"
},
{
"name": "imitate-auto",
"command": "/workflow:ui-design:imitate-auto",
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
"arguments": "[--input \"<value>\"] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
},
{
"name": "workflow:ui-design:import-from-code",
"command": "/workflow:ui-design:import-from-code",
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/import-from-code.md"
},
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/layout-extract.md"
},
{
"name": "workflow:ui-design:reference-page-generator",
"command": "/workflow:ui-design:reference-page-generator",
"description": "Generate multi-component reference pages and documentation from design run extraction",
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
},
{
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/style-extract.md"
}
]
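
This flat array is straightforward to query as-is. A small sketch, assuming Node.js with TypeScript and a hypothetical on-disk path; the entry shape mirrors the records above exactly:

// Sketch: filter the flat command list, e.g. all workflow commands tagged
// for the planning scenario. The file path is illustrative.
import { readFileSync } from "node:fs";

interface CommandEntry {
  name: string;
  command: string;
  description: string;
  arguments: string;
  category: string;
  subcategory: string | null;
  usage_scenario: string;
  difficulty: string;
  source: string;
}

const commands: CommandEntry[] = JSON.parse(
  readFileSync(".claude/ccw-help/commands.json", "utf8"),
);

const planning = commands.filter(
  (c) => c.category === "workflow" && c.usage_scenario === "planning",
);
for (const c of planning) {
  console.log(`${c.command} ${c.arguments}`.trim());
}

The deleted file that follows stores the same command entries re-grouped by category and subcategory, with a "_root" bucket for commands that have no subcategory, so lookups become direct key accesses instead of a filter over the whole array.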

View File

@@ -1,914 +0,0 @@
{
"cli": {
"_root": [
{
"name": "cli-init",
"command": "/cli:cli-init",
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
"category": "cli",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
}
]
},
"general": {
"_root": [
{
"name": "enhance-prompt",
"command": "/enhance-prompt",
"description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
"arguments": "user input to enhance",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/enhance-prompt.md"
},
{
"name": "version",
"command": "/version",
"description": "Display Claude Code version information and check for updates",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/version.md"
}
]
},
"issue": {
"_root": [
{
"name": "issue:discover",
"command": "/issue:discover",
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
"arguments": "<path-pattern> [--perspectives=bug,ux,...] [--external]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover.md"
},
{
"name": "execute",
"command": "/issue:execute",
"description": "Execute queue with codex using DAG-based parallel orchestration (solution-level)",
"arguments": "[--worktree] [--queue <queue-id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/issue/execute.md"
},
{
"name": "new",
"command": "/issue:new",
"description": "Create structured issue from GitHub URL or text description",
"arguments": "<github-url | text-description> [--priority 1-5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/new.md"
},
{
"name": "plan",
"command": "/issue:plan",
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
"arguments": "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3] ",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/plan.md"
},
{
"name": "queue",
"command": "/issue:queue",
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
"arguments": "[--rebuild] [--issue <id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/queue.md"
}
]
},
"memory": {
"_root": [
{
"name": "code-map-memory",
"command": "/memory:code-map-memory",
"description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)",
"arguments": "\\\"feature-keyword\\\" [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/code-map-memory.md"
},
{
"name": "compact",
"command": "/memory:compact",
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
"arguments": "[optional: session description]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/compact.md"
},
{
"name": "docs-full-cli",
"command": "/memory:docs-full-cli",
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[path] [--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-full-cli.md"
},
{
"name": "docs-related-cli",
"command": "/memory:docs-related-cli",
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
"arguments": "[--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-related-cli.md"
},
{
"name": "docs",
"command": "/memory:docs",
"description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs.md"
},
{
"name": "load-skill-memory",
"command": "/memory:load-skill-memory",
"description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords",
"arguments": "[skill_name] \\\"task intent description\\",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load-skill-memory.md"
},
{
"name": "load",
"command": "/memory:load",
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
"arguments": "[--tool gemini|qwen] \\\"task context description\\",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load.md"
},
{
"name": "skill-memory",
"command": "/memory:skill-memory",
"description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/skill-memory.md"
},
{
"name": "style-skill-memory",
"command": "/memory:style-skill-memory",
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
"arguments": "[package-name] [--regenerate]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/style-skill-memory.md"
},
{
"name": "swagger-docs",
"command": "/memory:swagger-docs",
"description": "Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/swagger-docs.md"
},
{
"name": "tech-research-rules",
"command": "/memory:tech-research-rules",
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/tech-research-rules.md"
},
{
"name": "update-full",
"command": "/memory:update-full",
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-full.md"
},
{
"name": "update-related",
"command": "/memory:update-related",
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
"arguments": "[--tool gemini|qwen|codex]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-related.md"
},
{
"name": "workflow-skill-memory",
"command": "/memory:workflow-skill-memory",
"description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)",
"arguments": "session <session-id> | all",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/workflow-skill-memory.md"
}
]
},
"task": {
"_root": [
{
"name": "breakdown",
"command": "/task:breakdown",
"description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/task/breakdown.md"
},
{
"name": "create",
"command": "/task:create",
"description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
"arguments": "\\\"task title\\",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/create.md"
},
{
"name": "execute",
"command": "/task:execute",
"description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/execute.md"
},
{
"name": "replan",
"command": "/task:replan",
"description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json",
"arguments": "task-id [\\\"text\\\"|file.md] | --batch [verification-report.md]",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/task/replan.md"
}
]
},
"workflow": {
"_root": [
{
"name": "action-plan-verify",
"command": "/workflow:action-plan-verify",
"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/action-plan-verify.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
"arguments": "[--dry-run] [\\\"focus area\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug",
"command": "/workflow:debug",
"description": "Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved",
"arguments": "\\\"bug description or error message\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
"arguments": "[--regenerate]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
"arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[--hotfix] \\\"bug description or issue reference\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
"arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "review-fix",
"command": "/workflow:review-fix",
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-fix.md"
},
{
"name": "review-module-cycle",
"command": "/workflow:review-module-cycle",
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "review",
"command": "/workflow:review",
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis",
"arguments": "[optional: WFS-session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-verify.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-fix-gen.md"
},
{
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-gen.md"
}
],
"brainstorm": [
{
"name": "api-designer",
"command": "/workflow:brainstorm:api-designer",
"description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/api-designer.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
"arguments": "topic or challenge description\" [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "data-architect",
"command": "/workflow:brainstorm:data-architect",
"description": "Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/data-architect.md"
},
{
"name": "product-manager",
"command": "/workflow:brainstorm:product-manager",
"description": "Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-manager.md"
},
{
"name": "product-owner",
"command": "/workflow:brainstorm:product-owner",
"description": "Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-owner.md"
},
{
"name": "scrum-master",
"command": "/workflow:brainstorm:scrum-master",
"description": "Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/scrum-master.md"
},
{
"name": "subject-matter-expert",
"command": "/workflow:brainstorm:subject-matter-expert",
"description": "Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/subject-matter-expert.md"
},
{
"name": "synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
},
{
"name": "system-architect",
"command": "/workflow:brainstorm:system-architect",
"description": "Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/system-architect.md"
},
{
"name": "ui-designer",
"command": "/workflow:brainstorm:ui-designer",
"description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ui-designer.md"
},
{
"name": "ux-expert",
"command": "/workflow:brainstorm:ux-expert",
"description": "Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ux-expert.md"
}
],
"session": [
{
"name": "complete",
"command": "/workflow:session:complete",
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/complete.md"
},
{
"name": "list",
"command": "/workflow:session:list",
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "resume",
"command": "/workflow:session:resume",
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/resume.md"
},
{
"name": "solidify",
"command": "/workflow:session:solidify",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
"arguments": "[--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/solidify.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
}
],
"tools": [
{
"name": "conflict-resolution",
"command": "/workflow:tools:conflict-resolution",
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
"arguments": "--session WFS-session-id --context path/to/context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/conflict-resolution.md"
},
{
"name": "gather",
"command": "/workflow:tools:gather",
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
"arguments": "--session WFS-session-id \\\"task description\\",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/context-gather.md"
},
{
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-agent.md"
},
{
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
},
{
"name": "tdd-coverage-analysis",
"command": "/workflow:tools:tdd-coverage-analysis",
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
},
{
"name": "test-concept-enhanced",
"command": "/workflow:tools:test-concept-enhanced",
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
},
{
"name": "test-context-gather",
"command": "/workflow:tools:test-context-gather",
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-context-gather.md"
},
{
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-task-generate.md"
}
],
"ui-design": [
{
"name": "animation-extract",
"command": "/workflow:ui-design:animation-extract",
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/animation-extract.md"
},
{
"name": "workflow:ui-design:codify-style",
"command": "/workflow:ui-design:codify-style",
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/codify-style.md"
},
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/design-sync.md"
},
{
"name": "explore-auto",
"command": "/workflow:ui-design:explore-auto",
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/explore-auto.md"
},
{
"name": "generate",
"command": "/workflow:ui-design:generate",
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
"arguments": "[--design-id <id>] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/generate.md"
},
{
"name": "imitate-auto",
"command": "/workflow:ui-design:imitate-auto",
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
"arguments": "[--input \"<value>\"] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
},
{
"name": "workflow:ui-design:import-from-code",
"command": "/workflow:ui-design:import-from-code",
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/import-from-code.md"
},
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/layout-extract.md"
},
{
"name": "workflow:ui-design:reference-page-generator",
"command": "/workflow:ui-design:reference-page-generator",
"description": "Generate multi-component reference pages and documentation from design run extraction",
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
},
{
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/style-extract.md"
}
]
}
}
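
The file above is a static command index: each top-level category maps subcategory groups to command entries carrying the command string, argument signature, usage scenario, and difficulty. As a rough sketch of how a consumer might use it — file name and all helper code here are assumptions, not part of the repo — flattening the nesting makes name-based lookup trivial:

```javascript
// Minimal sketch, assuming the index above is saved as command-index.json
// and every top-level value is a { subcategory: [entries] } object.
const fs = require('fs')

const index = JSON.parse(fs.readFileSync('command-index.json', 'utf8'))
const entries = Object.values(index)            // category objects
  .flatMap(category => Object.values(category)) // entry arrays per subcategory
  .flat()                                       // individual command entries

const byCommand = new Map(entries.map(e => [e.command, e]))
console.log(byCommand.get('/workflow:session:start')?.arguments)
```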


@@ -1,896 +0,0 @@
{
"general": [
{
"name": "cli-init",
"command": "/cli:cli-init",
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
"category": "cli",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
},
{
"name": "enhance-prompt",
"command": "/enhance-prompt",
"description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
"arguments": "user input to enhance",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/enhance-prompt.md"
},
{
"name": "issue:discover",
"command": "/issue:discover",
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
"arguments": "<path-pattern> [--perspectives=bug,ux,...] [--external]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover.md"
},
{
"name": "new",
"command": "/issue:new",
"description": "Create structured issue from GitHub URL or text description",
"arguments": "<github-url | text-description> [--priority 1-5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/new.md"
},
{
"name": "queue",
"command": "/issue:queue",
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
"arguments": "[--rebuild] [--issue <id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/queue.md"
},
{
"name": "compact",
"command": "/memory:compact",
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
"arguments": "[optional: session description]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/compact.md"
},
{
"name": "load",
"command": "/memory:load",
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
"arguments": "[--tool gemini|qwen] \\\"task context description\\",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load.md"
},
{
"name": "tech-research-rules",
"command": "/memory:tech-research-rules",
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/tech-research-rules.md"
},
{
"name": "update-full",
"command": "/memory:update-full",
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-full.md"
},
{
"name": "update-related",
"command": "/memory:update-related",
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
"arguments": "[--tool gemini|qwen|codex]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-related.md"
},
{
"name": "version",
"command": "/version",
"description": "Display Claude Code version information and check for updates",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/version.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
"arguments": "topic or challenge description\" [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "data-architect",
"command": "/workflow:brainstorm:data-architect",
"description": "Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/data-architect.md"
},
{
"name": "product-manager",
"command": "/workflow:brainstorm:product-manager",
"description": "Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-manager.md"
},
{
"name": "product-owner",
"command": "/workflow:brainstorm:product-owner",
"description": "Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-owner.md"
},
{
"name": "scrum-master",
"command": "/workflow:brainstorm:scrum-master",
"description": "Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/scrum-master.md"
},
{
"name": "subject-matter-expert",
"command": "/workflow:brainstorm:subject-matter-expert",
"description": "Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/subject-matter-expert.md"
},
{
"name": "synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
},
{
"name": "system-architect",
"command": "/workflow:brainstorm:system-architect",
"description": "Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/system-architect.md"
},
{
"name": "ux-expert",
"command": "/workflow:brainstorm:ux-expert",
"description": "Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ux-expert.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
"arguments": "[--dry-run] [\\\"focus area\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug",
"command": "/workflow:debug",
"description": "Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved",
"arguments": "\\\"bug description or error message\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
"arguments": "[--regenerate]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[--hotfix] \\\"bug description or issue reference\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "list",
"command": "/workflow:session:list",
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "solidify",
"command": "/workflow:session:solidify",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
"arguments": "[--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/solidify.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "conflict-resolution",
"command": "/workflow:tools:conflict-resolution",
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
"arguments": "--session WFS-session-id --context path/to/context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/conflict-resolution.md"
},
{
"name": "gather",
"command": "/workflow:tools:gather",
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
"arguments": "--session WFS-session-id \\\"task description\\",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/context-gather.md"
},
{
"name": "animation-extract",
"command": "/workflow:ui-design:animation-extract",
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/animation-extract.md"
},
{
"name": "explore-auto",
"command": "/workflow:ui-design:explore-auto",
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/explore-auto.md"
},
{
"name": "imitate-auto",
"command": "/workflow:ui-design:imitate-auto",
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
"arguments": "[--input \"<value>\"] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
},
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/layout-extract.md"
},
{
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/style-extract.md"
}
],
"implementation": [
{
"name": "execute",
"command": "/issue:execute",
"description": "Execute queue with codex using DAG-based parallel orchestration (solution-level)",
"arguments": "[--worktree] [--queue <queue-id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/issue/execute.md"
},
{
"name": "create",
"command": "/task:create",
"description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
"arguments": "\\\"task title\\",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/create.md"
},
{
"name": "execute",
"command": "/task:execute",
"description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/execute.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
"arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-agent.md"
},
{
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
},
{
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-task-generate.md"
},
{
"name": "generate",
"command": "/workflow:ui-design:generate",
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
"arguments": "[--design-id <id>] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/generate.md"
}
],
"planning": [
{
"name": "plan",
"command": "/issue:plan",
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
"arguments": "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3] ",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/plan.md"
},
{
"name": "breakdown",
"command": "/task:breakdown",
"description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/task/breakdown.md"
},
{
"name": "replan",
"command": "/task:replan",
"description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json",
"arguments": "task-id [\\\"text\\\"|file.md] | --batch [verification-report.md]",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/task/replan.md"
},
{
"name": "action-plan-verify",
"command": "/workflow:action-plan-verify",
"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/action-plan-verify.md"
},
{
"name": "api-designer",
"command": "/workflow:brainstorm:api-designer",
"description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/api-designer.md"
},
{
"name": "ui-designer",
"command": "/workflow:brainstorm:ui-designer",
"description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ui-designer.md"
},
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
"arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "workflow:ui-design:codify-style",
"command": "/workflow:ui-design:codify-style",
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/codify-style.md"
},
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/design-sync.md"
},
{
"name": "workflow:ui-design:import-from-code",
"command": "/workflow:ui-design:import-from-code",
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/import-from-code.md"
},
{
"name": "workflow:ui-design:reference-page-generator",
"command": "/workflow:ui-design:reference-page-generator",
"description": "Generate multi-component reference pages and documentation from design run extraction",
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
}
],
"documentation": [
{
"name": "code-map-memory",
"command": "/memory:code-map-memory",
"description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)",
"arguments": "\\\"feature-keyword\\\" [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/code-map-memory.md"
},
{
"name": "docs-full-cli",
"command": "/memory:docs-full-cli",
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[path] [--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-full-cli.md"
},
{
"name": "docs-related-cli",
"command": "/memory:docs-related-cli",
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
"arguments": "[--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-related-cli.md"
},
{
"name": "docs",
"command": "/memory:docs",
"description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs.md"
},
{
"name": "load-skill-memory",
"command": "/memory:load-skill-memory",
"description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords",
"arguments": "[skill_name] \\\"task intent description\\",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load-skill-memory.md"
},
{
"name": "skill-memory",
"command": "/memory:skill-memory",
"description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/skill-memory.md"
},
{
"name": "style-skill-memory",
"command": "/memory:style-skill-memory",
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
"arguments": "[package-name] [--regenerate]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/style-skill-memory.md"
},
{
"name": "swagger-docs",
"command": "/memory:swagger-docs",
"description": "Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/swagger-docs.md"
},
{
"name": "workflow-skill-memory",
"command": "/memory:workflow-skill-memory",
"description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)",
"arguments": "session <session-id> | all",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/workflow-skill-memory.md"
}
],
"analysis": [
{
"name": "review-fix",
"command": "/workflow:review-fix",
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-fix.md"
},
{
"name": "review-module-cycle",
"command": "/workflow:review-module-cycle",
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review",
"command": "/workflow:review",
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review.md"
}
],
"session-management": [
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "complete",
"command": "/workflow:session:complete",
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/complete.md"
},
{
"name": "resume",
"command": "/workflow:session:resume",
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/resume.md"
}
],
"testing": [
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis",
"arguments": "[optional: WFS-session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-verify.md"
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-fix-gen.md"
},
{
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-gen.md"
},
{
"name": "tdd-coverage-analysis",
"command": "/workflow:tools:tdd-coverage-analysis",
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
},
{
"name": "test-concept-enhanced",
"command": "/workflow:tools:test-concept-enhanced",
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
},
{
"name": "test-context-gather",
"command": "/workflow:tools:test-context-gather",
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-context-gather.md"
}
]
}
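
This second index regroups the same entries by usage_scenario, so a scenario-first lookup needs no flattening at all. A one-liner sketch (file name assumed):

```javascript
// Minimal sketch, assuming the scenario-grouped index above is saved as
// scenario-index.json; prints the commands recommended for testing work.
const byScenario = require('./scenario-index.json')
for (const e of byScenario.testing ?? []) {
  console.log(`${e.command}  (${e.difficulty})`)
}
```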


@@ -1,160 +0,0 @@
{
"workflow:plan": {
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather",
"workflow:tools:conflict-resolution",
"workflow:tools:task-generate-agent"
],
"next_steps": [
"workflow:action-plan-verify",
"workflow:status",
"workflow:execute"
],
"alternatives": [
"workflow:tdd-plan"
],
"prerequisites": []
},
"workflow:tdd-plan": {
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather",
"workflow:tools:task-generate-tdd"
],
"next_steps": [
"workflow:tdd-verify",
"workflow:status",
"workflow:execute"
],
"alternatives": [
"workflow:plan"
],
"prerequisites": []
},
"workflow:execute": {
"prerequisites": [
"workflow:plan",
"workflow:tdd-plan"
],
"related": [
"workflow:status",
"workflow:resume"
],
"next_steps": [
"workflow:review",
"workflow:tdd-verify"
]
},
"workflow:action-plan-verify": {
"prerequisites": [
"workflow:plan"
],
"next_steps": [
"workflow:execute"
],
"related": [
"workflow:status"
]
},
"workflow:tdd-verify": {
"prerequisites": [
"workflow:execute"
],
"related": [
"workflow:tools:tdd-coverage-analysis"
]
},
"workflow:session:start": {
"next_steps": [
"workflow:plan",
"workflow:execute"
],
"related": [
"workflow:session:list",
"workflow:session:resume"
]
},
"workflow:session:resume": {
"alternatives": [
"workflow:resume"
],
"related": [
"workflow:session:list",
"workflow:status"
]
},
"workflow:lite-plan": {
"calls_internally": [
"workflow:lite-execute"
],
"next_steps": [
"workflow:lite-execute",
"workflow:status"
],
"alternatives": [
"workflow:plan"
],
"prerequisites": []
},
"workflow:lite-fix": {
"next_steps": [
"workflow:lite-execute",
"workflow:status"
],
"alternatives": [
"workflow:lite-plan"
],
"related": [
"workflow:test-cycle-execute"
]
},
"workflow:lite-execute": {
"prerequisites": [
"workflow:lite-plan",
"workflow:lite-fix"
],
"related": [
"workflow:execute",
"workflow:status"
]
},
"workflow:review-session-cycle": {
"prerequisites": [
"workflow:execute"
],
"next_steps": [
"workflow:review-fix"
],
"related": [
"workflow:review-module-cycle"
]
},
"workflow:review-fix": {
"prerequisites": [
"workflow:review-module-cycle",
"workflow:review-session-cycle"
],
"related": [
"workflow:test-cycle-execute"
]
},
"memory:docs": {
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather"
],
"next_steps": [
"workflow:execute"
]
},
"memory:skill-memory": {
"next_steps": [
"workflow:plan",
"cli:analyze"
],
"related": [
"memory:load-skill-memory"
]
}
}
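
Unlike the flat indices, this file is a small relationship graph: per command, what it calls internally, what should run next, what it substitutes for, and what must run first. A sketch of the obvious queries (file name and helper names are assumptions):

```javascript
// Minimal sketch, assuming the graph above is saved as command-relations.json.
const graph = require('./command-relations.json')

const nextSteps = cmd => graph[cmd]?.next_steps ?? []
const prerequisites = cmd => graph[cmd]?.prerequisites ?? []

console.log(nextSteps('workflow:plan'))
// ['workflow:action-plan-verify', 'workflow:status', 'workflow:execute']
console.log(prerequisites('workflow:execute'))
// ['workflow:plan', 'workflow:tdd-plan']
```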


@@ -1,112 +0,0 @@
[
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[--hotfix] \\\"bug description or issue reference\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "docs",
"command": "/memory:docs",
"description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "action-plan-verify",
"command": "/workflow:action-plan-verify",
"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/action-plan-verify.md"
},
{
"name": "version",
"command": "/version",
"description": "Display Claude Code version information and check for updates",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/version.md"
}
]


@@ -1,462 +1,352 @@
---
name: ccw
description: Stateless workflow orchestrator that automatically selects and executes the optimal workflow combination based on task intent. Supports rapid (lite-plan+execute), full (brainstorm+plan+execute), coupled (plan+execute), bugfix (lite-fix), and issue (multi-point fixes) workflows. Triggers on "ccw", "workflow", "自动工作流", "智能调度".
allowed-tools: Task(*), SlashCommand(*), AskUserQuestion(*), Read(*), Bash(*), Grep(*)
description: Stateless workflow orchestrator. Auto-selects the optimal workflow based on task intent. Triggers on "ccw", "workflow".
allowed-tools: Task(*), SlashCommand(*), AskUserQuestion(*), Read(*), Bash(*), Grep(*), TodoWrite(*)
---
# CCW - Claude Code Workflow Orchestrator
Stateless workflow orchestrator: automatically selects the optimal workflow based on task intent.
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ CCW Orchestrator (CLI-Enhanced + Requirement Analysis)          │
├─────────────────────────────────────────────────────────────────┤
│ Phase 1    │ Input Analysis (rule-based, fast path)             │
│ Phase 1.5  │ CLI Classification (semantic, smart path)          │
│ Phase 1.75 │ Requirement Clarification (clarity < 2)            │
│ Phase 2    │ Chain Selection (intent → workflow)                │
│ Phase 2.5  │ CLI Action Planning (high complexity)              │
│ Phase 3    │ User Confirmation (optional)                       │
│ Phase 4    │ TODO Tracking Setup                                │
│ Phase 5    │ Execution Loop                                     │
└─────────────────────────────────────────────────────────────────┘
```
## Workflow Combinations
### 1. Rapid ⚡
**Pattern**: multi-model collaborative analysis + direct execution
**Commands**: `/workflow:lite-plan` → `/workflow:lite-execute`
**When to use**:
- You know exactly what to build and how
- Single feature or small change
- Rapid prototype validation
### 2. Full 📋
**Pattern**: analysis + brainstorming + planning + execution
**Commands**: `/workflow:brainstorm:auto-parallel` → `/workflow:plan` → `/workflow:execute`
**When to use**:
- Product direction or technical approach is uncertain
- Multi-role perspective analysis is needed
- Complex new feature development
### 3. Coupled 🔗
**Pattern**: full planning + verification + execution
**Commands**: `/workflow:plan` → `/workflow:action-plan-verify` → `/workflow:execute`
**When to use**:
- Cross-module dependencies
- Architecture-level changes
- Team collaboration projects
### 4. Bugfix 🐛
**Pattern**: intelligent diagnosis + fix
**Commands**: `/workflow:lite-fix` or `/workflow:lite-fix --hotfix`
**When to use**:
- Any bug with clear symptoms
- Emergency production incident fixes
- Unclear root cause that needs diagnosis
### 5. Issue (long-running multi-point fixes) 📌
**Pattern**: issue planning + queue + batch execution
**Commands**: `/issue:plan` → `/issue:queue` → `/issue:execute`
**When to use**:
- Multiple related problems to process in batch
- Fix work spanning a long time frame
- Prioritization and conflict resolution needed
### 6. UI-First 🎨
**Pattern**: UI design + planning + execution
**Commands**: `/workflow:ui-design:*` → `/workflow:plan` → `/workflow:execute`
**When to use**:
- Frontend feature development
- Visual references needed
- Design system integration
## Intent Classification
```javascript
function classifyIntent(input) {
const text = input.toLowerCase()
// Priority 1: Bug keywords
if (/\b(fix|bug|error|issue|crash|broken|fail|wrong|incorrect)\b/.test(text)) {
if (/\b(hotfix|urgent|production|critical|emergency)\b/.test(text)) {
return { type: 'bugfix', mode: 'hotfix', workflow: 'lite-fix --hotfix' }
}
return { type: 'bugfix', mode: 'standard', workflow: 'lite-fix' }
}
// Priority 2: Issue batch keywords
if (/\b(issues?|batch|queue|多个|批量)\b/.test(text) && /\b(fix|resolve|处理)\b/.test(text)) {
return { type: 'issue', workflow: 'issue:plan → issue:queue → issue:execute' }
}
// Priority 3: Uncertainty keywords → Full workflow
if (/\b(不确定|不知道|explore|研究|分析一下|怎么做|what if|should i|探索)\b/.test(text)) {
return { type: 'exploration', workflow: 'brainstorm → plan → execute' }
}
// Priority 4: UI/Design keywords
if (/\b(ui|界面|design|设计|component|组件|style|样式|layout|布局)\b/.test(text)) {
return { type: 'ui', workflow: 'ui-design → plan → execute' }
}
// Priority 5: Complexity assessment for remaining
const complexity = assessComplexity(text)
if (complexity === 'high') {
return { type: 'feature', complexity: 'high', workflow: 'plan → verify → execute' }
}
if (complexity === 'medium') {
return { type: 'feature', complexity: 'medium', workflow: 'lite-plan → lite-execute' }
}
return { type: 'feature', complexity: 'low', workflow: 'lite-plan → lite-execute' }
}
```
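For example (illustrative input and output only):
```javascript
// 'urgent' and 'production' hit the hotfix patterns; 'broken' and 'fix' hit the bug keywords
classifyIntent('urgent: production checkout is broken, fix asap')
// → { type: 'bugfix', mode: 'hotfix', workflow: 'lite-fix --hotfix' }
```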
### Priority Order
| Priority | Intent | Patterns | Flow |
|----------|--------|----------|------|
| 1 | bugfix/hotfix | `urgent,production,critical` + bug | `bugfix.hotfix` |
| 1 | bugfix | `fix,bug,error,crash,fail` | `bugfix.standard` |
| 2 | issue batch | `issues,batch` + `fix,resolve` | `issue` |
| 3 | exploration | `不确定,explore,研究,what if` | `full` |
| 3 | multi-perspective | `多视角,权衡,比较方案,cross-verify` | `multi-cli-plan` |
| 4 | quick-task | `快速,简单,small,quick` + feature | `lite-lite-lite` |
| 5 | ui design | `ui,design,component,style` | `ui` |
| 6 | tdd | `tdd,test-driven,先写测试` | `tdd` |
| 7 | review | `review,审查,code review` | `review-fix` |
| 8 | documentation | `文档,docs,readme` | `docs` |
| 99 | feature | complexity-based | `rapid`/`coupled` |
### Complexity Assessment
```javascript
function assessComplexity(text) {
let score = 0
if (/refactor|重构|migrate|迁移|architect|架构|system|系统/.test(text)) score += 2  // architecture keywords
if (/multiple|多个|across|跨|all|所有|entire|整个/.test(text)) score += 2           // multi-module keywords
if (/integrate|集成|api|database|数据库/.test(text)) score += 1                     // integration keywords
if (/security|安全|performance|性能|scale|扩展/.test(text)) score += 1              // security/performance keywords
return score >= 4 ? 'high' : score >= 2 ? 'medium' : 'low'
}
```
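For example, under these rules (illustrative only):
```javascript
assessComplexity('migrate the entire auth system to OAuth2') // 'migrate' +2, 'entire' +2 → 'high'
assessComplexity('add an avatar upload endpoint')            // no keyword hits → 'low'
```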
| Complexity | Flow |
|------------|------|
| high | `coupled` (plan → verify → execute) |
| medium/low | `rapid` (lite-plan → lite-execute) |
### Dimension Extraction (WHAT/WHERE/WHY/HOW)
Extract four dimensions from the user input, used for requirement clarification and workflow selection:
| Dimension | Extracted Content | Example Patterns |
|------|----------|----------|
| **WHAT** | action + target | `创建/修复/重构/优化/分析` + target object |
| **WHERE** | scope + paths | `file/module/system` + file paths |
| **WHY** | goal + motivation | `为了.../因为.../目的是...` |
| **HOW** | constraints + preferences | `必须.../不要.../应该...` |
**Clarity Score** (0-3):
- +0.5: explicit action
- +0.5: specific target
- +0.5: file paths given
- +0.5: scope is not unknown
- +0.5: explicit goal
- +0.5: constraints stated
- -0.5: contains uncertainty words (`不知道/maybe/怎么`)
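A minimal sketch of this scoring, assuming the dimension object carries `action`, `target`, `paths`, `scope`, `goal`, `constraints`, and the raw input (field names are assumptions, not the actual implementation):
```javascript
function clarityScore(dims) {
  let score = 0
  if (dims.action) score += 0.5                            // explicit action
  if (dims.target) score += 0.5                            // specific target
  if (dims.paths?.length) score += 0.5                     // file paths given
  if (dims.scope !== 'unknown') score += 0.5               // known scope
  if (dims.goal) score += 0.5                              // explicit goal
  if (dims.constraints?.length) score += 0.5               // constraints stated
  if (/不知道|maybe|怎么/.test(dims.rawInput)) score -= 0.5 // uncertainty words
  return Math.max(0, Math.min(3, score))
}
```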
### Requirement Clarification
Triggered when `clarity_score < 2`:
```javascript
if (dimensions.clarity_score < 2) {
  const questions = generateClarificationQuestions(dimensions)
  // Generated questions: What is the goal? What is the scope? What are the constraints?
  AskUserQuestion({ questions })
}
```
**Clarification question types**:
- Unclear target → "What do you want to operate on?"
- Unclear scope → "What is the scope of the operation?"
- Unclear purpose → "What is the main goal of this operation?"
- Complex operation → "Are there special requirements or constraints?"
## TODO Tracking Protocol
### CRITICAL: Append-Only Rule
Todos created by CCW **must be appended to the existing list**; they must never overwrite the user's other todos.
### Implementation
```javascript
// 1. Use a CCW prefix to isolate workflow todos
const prefix = `CCW:${flowName}`
// 2. Use the prefix format when creating new todos
TodoWrite({
  todos: [
    ...existingNonCCWTodos, // keep the user's own todos
    { content: `${prefix}: [1/N] /command:step1`, status: "in_progress", activeForm: "..." },
    { content: `${prefix}: [2/N] /command:step2`, status: "pending", activeForm: "..." }
  ]
})
// 3. When updating status, only touch todos matching the prefix
```
### Todo Format
```
CCW:{flow}: [{N}/{Total}] /command:name
```
### Visual Example
```
✓ CCW:rapid: [1/2] /workflow:lite-plan
→ CCW:rapid: [2/2] /workflow:lite-execute
(user's own todos, left untouched)
```
### Status Management
- Workflow start: create todos for all steps, first step `in_progress`
- Step completion: current step `completed`, next step `in_progress`
- Workflow end: mark all CCW todos `completed`
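A sketch of the step transition under the append-only rule (assuming the full todo list is available in memory; `TodoWrite` is the real tool call, the helper itself is illustrative):
```javascript
function advanceWorkflow(allTodos, flowName) {
  const prefix = `CCW:${flowName}`
  const todos = allTodos.map(t => ({ ...t }))  // user todos pass through untouched
  const cur = todos.find(t => t.content.startsWith(prefix) && t.status === 'in_progress')
  if (cur) cur.status = 'completed'
  const next = todos.find(t => t.content.startsWith(prefix) && t.status === 'pending')
  if (next) next.status = 'in_progress'
  TodoWrite({ todos })
}
```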
## Execution Flow
```javascript
// 1. Check explicit command
if (input.startsWith('/workflow:') || input.startsWith('/issue:')) {
  // User explicitly requested a workflow, pass through
  SlashCommand(input)
  return
}
// 2. Classify intent
const intent = classifyIntent(input) // See command.json intent_rules
// 3. Select flow
const flow = selectFlow(intent) // See command.json flows
// 4. Create todos with CCW prefix
createWorkflowTodos(flow)
// 5. Dispatch first command
SlashCommand(flow.steps[0].command, args: input)
```
### User Confirmation (Optional)
For high-complexity or ambiguous intents, confirm with the user before dispatching:
```javascript
if (intent.complexity === 'high' || intent.type === 'exploration') {
  const confirmation = AskUserQuestion({
    questions: [{
      question: `Recommended: ${intent.workflow}. Proceed?`,
      header: "Workflow",
      multiSelect: false,
      options: [
        { label: `${intent.workflow} (Recommended)`, description: "Use recommended workflow" },
        { label: "Rapid (lite-plan)", description: "Quick iteration" },
        { label: "Full (brainstorm+plan)", description: "Complete exploration" },
        { label: "Manual", description: "I'll specify the commands" }
      ]
    }]
  })
  // Adjust workflow based on user selection
  intent.workflow = mapSelectionToWorkflow(confirmation)
}
```
## CLI Tool Integration
CCW automatically injects CLI calls under specific conditions:
| Condition | CLI Inject |
|-----------|------------|
| Large code context (≥50k chars) | `gemini --mode analysis` |
| High-complexity task | `gemini --mode analysis` |
| Bug diagnosis | `gemini --mode analysis` |
| Multi-task execution (≥3 tasks) | `codex --mode write` |
### CLI Enhancement Phases
**Phase 1.5: CLI-Assisted Classification**
When rule matching is ambiguous, CLI-assisted classification is used:
| Trigger | Description |
|----------|------|
| matchCount < 2 | Multiple intent patterns matched |
| complexity = high | High-complexity task |
| input > 100 chars | Long input needing semantic understanding |
**Phase 2.5: CLI-Assisted Action Planning**
Workflow optimization for high-complexity tasks:
| Trigger | Description |
|----------|------|
| complexity = high | High-complexity task |
| steps >= 3 | Multi-step workflow |
| input > 200 chars | Complex requirement description |
The CLI may return a recommendation: `use_default` | `modify` (adjust steps) | `upgrade` (upgrade the workflow)
### Implicit Injection Rules
CCW automatically injects CLI calls under the following conditions (no explicit user request needed):
```javascript
const implicitRules = {
  // Context gathering: large code volumes go to a CLI to save main-session tokens
  context_gathering: {
    trigger: 'file_read >= 50k chars OR module_count >= 5',
    inject: 'gemini --mode analysis'
  },
  // Pre-planning analysis: analyze complex tasks with a CLI first
  pre_planning_analysis: {
    trigger: 'complexity === "high" OR intent === "exploration"',
    inject: 'gemini --mode analysis'
  },
  // Debug diagnosis: leverage Gemini's execution-flow tracing
  debug_diagnosis: {
    trigger: 'intent === "bugfix" AND root_cause_unclear',
    inject: 'gemini --mode analysis'
  },
  // Code review: use a CLI to reduce token usage
  code_review: {
    trigger: 'step === "review"',
    inject: 'gemini --mode analysis'
  },
  // Multi-task execution: let Codex complete autonomously
  implementation: {
    trigger: 'step === "execute" AND task_count >= 3',
    inject: 'codex --mode write'
  }
}
```
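One way these rules could be evaluated, sketched with per-rule predicate functions (an assumption; the actual triggers are free-form strings):
```javascript
const predicates = {
  context_gathering:     ctx => ctx.fileReadChars >= 50000 || ctx.moduleCount >= 5,
  pre_planning_analysis: ctx => ctx.complexity === 'high' || ctx.intent === 'exploration',
  debug_diagnosis:       ctx => ctx.intent === 'bugfix' && ctx.rootCauseUnclear,
  code_review:           ctx => ctx.step === 'review',
  implementation:        ctx => ctx.step === 'execute' && ctx.taskCount >= 3
}
// Collect the CLI calls to inject for the current context
function injectionsFor(ctx) {
  return Object.keys(predicates)
    .filter(name => predicates[name](ctx))
    .map(name => implicitRules[name].inject)
}
```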
### Semantic Tool Assignment
```javascript
// Users can state a tool preference in natural language
const toolHints = {
gemini: /用\s*gemini|gemini\s*分析|让\s*gemini|深度分析|架构理解/i,
qwen: /用\s*qwen|qwen\s*评估|让\s*qwen|第二视角/i,
codex: /用\s*codex|codex\s*实现|让\s*codex|自主完成|批量修改/i
}
function detectToolPreference(input) {
for (const [tool, pattern] of Object.entries(toolHints)) {
if (pattern.test(input)) return tool
}
return null // Auto-select based on task type
}
```
### Standalone CLI Workflows
Invoke a CLI directly for specific tasks:
| Workflow | Command | Purpose |
|----------|------|------|
| CLI Analysis | `ccw cli --tool gemini` | Fast comprehension of large codebases, architecture assessment |
| CLI Implement | `ccw cli --tool codex` | Autonomous implementation of well-specified requirements |
| CLI Debug | `ccw cli --tool gemini` | Root-cause analysis of complex bugs, execution-flow tracing |
## Index Files (Dynamic Coordination)
CCW uses index files for intelligent command coordination:
| Index | Purpose |
|-------|---------|
| [index/command-capabilities.json](index/command-capabilities.json) | Command capability categories (explore, plan, execute, test, review...) |
| [index/workflow-chains.json](index/workflow-chains.json) | Predefined workflow chains (rapid, full, coupled, bugfix, issue, tdd, ui...) |
### Capability Categories
```
capabilities:
├── explore     - code exploration, context gathering
├── brainstorm  - multi-role analysis, option exploration
├── plan        - task planning, decomposition
├── verify      - plan verification, quality checks
├── execute     - task execution, code implementation
├── bugfix      - bug diagnosis, fixing
├── test        - test generation, execution
├── review      - code review, quality analysis
├── issue       - batch issue management
├── ui-design   - UI design, prototyping
├── memory      - documentation, knowledge management
├── session     - session management
└── debug       - debugging, troubleshooting
```
## TODO Tracking Integration
CCW automatically tracks workflow progress with TodoWrite:
```javascript
// A TODO list is created automatically when the workflow starts
TodoWrite({
  todos: [
    { content: "CCW: Rapid Iteration (2 steps)", status: "in_progress", activeForm: "Running workflow" },
    { content: "[1/2] /workflow:lite-plan", status: "in_progress", activeForm: "Executing lite-plan" },
    { content: "[2/2] /workflow:lite-execute", status: "pending", activeForm: "Executing lite-execute" }
  ]
})
// Status is updated automatically after each step completes
// Supports pause, continue, and skip operations
```
**Progress visualization**:
```
✓ CCW: Rapid Iteration (2 steps)
✓ [1/2] /workflow:lite-plan
→ [2/2] /workflow:lite-execute
```
## Continuation Commands
User control commands during workflow execution:
| Command | Action |
|---------|--------|
| `continue` | Continue to the next step |
| `skip` | Skip the current step |
| `abort` | Abort the workflow |
| `/workflow:*` | Switch to the specified command |
| Natural language | Re-analyze intent |
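A sketch of the control loop implied by this table (helper names are hypothetical):
```javascript
function handleContinuation(input, state) {
  if (input === 'continue') return runNextStep(state)            // advance to the next step
  if (input === 'skip')     return skipCurrentStep(state)        // mark current step skipped, advance
  if (input === 'abort')    return abortWorkflow(state)          // close out all CCW todos
  if (input.startsWith('/workflow:')) return SlashCommand(input) // switch to an explicit command
  return classifyIntent(input)                                   // natural language: re-analyze intent
}
```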
## Reference Documents
| Document | Purpose |
|----------|---------|
| [phases/orchestrator.md](phases/orchestrator.md) | Orchestrator decision logic + TODO tracking |
| [phases/actions/rapid.md](phases/actions/rapid.md) | Rapid iteration combination |
| [phases/actions/full.md](phases/actions/full.md) | Full-process combination |
| [phases/actions/coupled.md](phases/actions/coupled.md) | Coupled planning combination |
| [phases/actions/bugfix.md](phases/actions/bugfix.md) | Bugfix combination |
| [phases/actions/issue.md](phases/actions/issue.md) | Issue workflow combination |
| [specs/intent-classification.md](specs/intent-classification.md) | Intent classification spec |
| [WORKFLOW_DECISION_GUIDE.md](/WORKFLOW_DECISION_GUIDE.md) | Workflow decision guide |
## Workflow Flow Details
### Issue Workflow (Two-Phase Lifecycle)
The Issue workflow is a two-phase lifecycle: accumulate problems during project iteration, then resolve them in a focused batch.
**Phase 1: Accumulation**
- Triggers: post-task review, code review findings, test failures
- Activities: requirement expansion, bug analysis, test coverage, security review
- Commands: `/issue:discover`, `/issue:discover-by-prompt`, `/issue:new`
**Phase 2: Batch Resolution**
- Trigger: focused processing once enough issues have accumulated
- Flow: plan → queue → execute
- Commands: `/issue:plan --all-pending` → `/issue:queue` → `/issue:execute`
```
task complete → discover → accumulate issues → ... → plan all → queue → parallel execute
      ↑                                                                      ↓
      └─────────────────────────── iteration loop ───────────────────────────┘
```
### lite-lite-lite vs multi-cli-plan
| Dimension | lite-lite-lite | multi-cli-plan |
|------|---------------|----------------|
| **Artifacts** | No files | IMPL_PLAN.md + plan.json + synthesis.json |
| **State** | Stateless | Persistent session |
| **CLI selection** | Auto-selected from task analysis | Config-driven |
| **Iteration** | Via AskUser | Multi-round convergence |
| **Execution** | Direct | Via lite-execute |
| **Best for** | Quick fixes, simple features | Complex multi-step implementations |
**Selection guide**:
- Task is clear and the change is small → `lite-lite-lite`
- Needs multi-perspective analysis or complex architecture → `multi-cli-plan`
### multi-cli-plan vs lite-plan
| Dimension | multi-cli-plan | lite-plan |
|------|---------------|-----------|
| **Context** | ACE semantic search | Manual file patterns |
| **Analysis** | Multi-CLI cross-verification | Single-pass planning |
| **Iteration** | Multiple rounds until convergence | Single round |
| **Confidence** | High (consensus-driven) | Medium (single perspective) |
| **Best for** | Complex tasks needing multiple perspectives | Straightforward implementations |
**Selection guide**:
- Clear requirements and a clear path → `lite-plan`
- Needs trade-offs or comparison of approaches → `multi-cli-plan`
## Artifact Flow Protocol
Automatic hand-off between workflow artifacts, with intent extraction and completion evaluation across the different output formats.
### Output Formats
| Command | Output Location | Format | Key Fields |
|------|----------|------|----------|
| `/workflow:lite-plan` | memory://plan | structured_plan | tasks, files, dependencies |
| `/workflow:plan` | .workflow/{session}/IMPL_PLAN.md | markdown_plan | phases, tasks, risks |
| `/workflow:execute` | execution_log.json | execution_report | completed_tasks, errors |
| `/workflow:test-cycle-execute` | test_results.json | test_report | pass_rate, failures, coverage |
| `/workflow:review-session-cycle` | review_report.md | review_report | findings, severity_counts |
### Intent Extraction
When handing off to the next step, key information is extracted automatically:
```
plan → execute:
  extract: tasks (incomplete), priority_order, files_to_modify, context_summary
execute → test:
  extract: modified_files, test_scope (inferred), pending_verification
test → fix:
  condition: pass_rate < 0.95
  extract: failures, error_messages, affected_files, suggested_fixes
review → fix:
  condition: critical > 0 OR high > 3
  extract: findings (critical/high), fix_priority, affected_files
```
### Completion Evaluation
**Test completion routing**:
```
pass_rate >= 0.95 AND coverage >= 0.80 → complete
pass_rate >= 0.95 AND coverage < 0.80 → add_more_tests
pass_rate >= 0.80 → fix_failures_then_continue
pass_rate < 0.80 → major_fix_required
```
**Review completion routing**:
```
critical == 0 AND high <= 3 → complete_or_optional_fix
critical > 0 → mandatory_fix
high > 3 → recommended_fix
```
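Both routing tables reduce to simple threshold checks; a sketch (field names follow the report formats above):
```javascript
function routeTest({ pass_rate, coverage }) {
  if (pass_rate >= 0.95 && coverage >= 0.80) return 'complete'
  if (pass_rate >= 0.95) return 'add_more_tests'
  if (pass_rate >= 0.80) return 'fix_failures_then_continue'
  return 'major_fix_required'
}
function routeReview({ critical, high }) {
  if (critical > 0) return 'mandatory_fix'
  if (high > 3) return 'recommended_fix'
  return 'complete_or_optional_fix'
}
```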
### Hand-off Decision Patterns
**plan_execute_test**:
```
plan → execute → test
        ↓ (if test fail)
extract_failures → fix → test (max 3 iterations)
        ↓ (if still fail)
manual_intervention
```
**iterative_improvement**:
```
execute → test → fix → test → ...
loop until: pass_rate >= 0.95 OR iterations >= 3
```
### Usage Example
```javascript
// After execution completes, decide the next step from the artifact
const result = await execute(plan)
// Extract intent and hand off to testing
const testContext = extractIntent('execute_to_test', result)
// testContext = { modified_files, test_scope, pending_verification }
// After testing, route based on completion criteria
const testResult = await test(testContext)
const nextStep = evaluateCompletion('test', testResult)
// nextStep = 'fix_failures_then_continue' if pass_rate = 0.85
```
## Examples
### Example 1: Bug Fix
```
User: User login fails with a 401 error
CCW: Intent=bugfix, Workflow=lite-fix
→ /workflow:lite-fix "User login fails with a 401 error"
```
### Example 2: New Feature (Simple)
```
User: Add user avatar upload
CCW: Intent=feature, Complexity=low, Workflow=lite-plan→lite-execute
→ /workflow:lite-plan "Add user avatar upload"
```
### Example 3: Complex Refactoring
```
User: Refactor the entire auth module and migrate to OAuth2
CCW: Intent=feature, Complexity=high, Workflow=plan→verify→execute
→ /workflow:plan "Refactor the entire auth module and migrate to OAuth2"
```
### Example 4: Exploration
```
User: I want to optimize system performance but don't know where to start
CCW: Intent=exploration, Workflow=brainstorm→plan→execute
→ /workflow:brainstorm:auto-parallel "Explore system performance optimization directions"
```
### Example 5: Multi-Model Collaboration
```
User: Use gemini to analyze the current architecture, then have codex implement the optimizations
CCW: Detects tool preferences, executes in sequence
→ Gemini CLI (analysis) → Codex CLI (implementation)
```
## Reference
- [command.json](command.json) - Command metadata, flow definitions, intent rules, artifact flow

View File

@@ -0,0 +1,547 @@
{
"_metadata": {
"version": "2.0.0",
"description": "Unified CCW command index with capabilities, flows, and intent rules"
},
"capabilities": {
"explore": {
"description": "Codebase exploration and context gathering",
"commands": ["/workflow:init", "/workflow:tools:gather", "/memory:load"],
"agents": ["cli-explore-agent", "context-search-agent"]
},
"brainstorm": {
"description": "Multi-perspective analysis and ideation",
"commands": ["/workflow:brainstorm:auto-parallel", "/workflow:brainstorm:artifacts", "/workflow:brainstorm:synthesis"],
"roles": ["product-manager", "system-architect", "ux-expert", "data-architect", "api-designer"]
},
"plan": {
"description": "Task planning and decomposition",
"commands": ["/workflow:lite-plan", "/workflow:plan", "/workflow:tdd-plan", "/task:create", "/task:breakdown"],
"agents": ["cli-lite-planning-agent", "action-planning-agent"]
},
"verify": {
"description": "Plan and quality verification",
"commands": ["/workflow:action-plan-verify", "/workflow:tdd-verify"]
},
"execute": {
"description": "Task execution and implementation",
"commands": ["/workflow:lite-execute", "/workflow:execute", "/task:execute"],
"agents": ["code-developer", "cli-execution-agent", "universal-executor"]
},
"bugfix": {
"description": "Bug diagnosis and fixing",
"commands": ["/workflow:lite-fix"],
"agents": ["code-developer"]
},
"test": {
"description": "Test generation and execution",
"commands": ["/workflow:test-gen", "/workflow:test-fix-gen", "/workflow:test-cycle-execute"],
"agents": ["test-fix-agent"]
},
"review": {
"description": "Code review and quality analysis",
"commands": ["/workflow:review-session-cycle", "/workflow:review-module-cycle", "/workflow:review", "/workflow:review-fix"]
},
"issue": {
"description": "Issue lifecycle management - discover, accumulate, batch resolve",
"commands": ["/issue:new", "/issue:discover", "/issue:discover-by-prompt", "/issue:plan", "/issue:queue", "/issue:execute", "/issue:manage"],
"agents": ["issue-plan-agent", "issue-queue-agent", "cli-explore-agent"],
"lifecycle": {
"accumulation": {
"description": "任务完成后进行需求扩展、bug分析、测试发现",
"triggers": ["post-task review", "code review findings", "test failures"],
"commands": ["/issue:discover", "/issue:discover-by-prompt", "/issue:new"]
},
"batch_resolution": {
"description": "积累的issue集中规划和并行执行",
"flow": ["plan", "queue", "execute"],
"commands": ["/issue:plan --all-pending", "/issue:queue", "/issue:execute"]
}
}
},
"ui-design": {
"description": "UI design and prototyping",
"commands": ["/workflow:ui-design:explore-auto", "/workflow:ui-design:imitate-auto", "/workflow:ui-design:design-sync"],
"agents": ["ui-design-agent"]
},
"memory": {
"description": "Documentation and knowledge management",
"commands": ["/memory:docs", "/memory:update-related", "/memory:update-full", "/memory:skill-memory"],
"agents": ["doc-generator", "memory-bridge"]
}
},
"flows": {
"rapid": {
"name": "Rapid Iteration",
"description": "多模型协作分析 + 直接执行",
"complexity": ["low", "medium"],
"steps": [
{ "command": "/workflow:lite-plan", "optional": false, "auto_continue": true },
{ "command": "/workflow:lite-execute", "optional": false }
],
"cli_hints": {
"explore_phase": { "tool": "gemini", "mode": "analysis", "trigger": "needs_exploration" },
"execution": { "tool": "codex", "mode": "write", "trigger": "complexity >= medium" }
},
"estimated_time": "15-45 min"
},
"full": {
"name": "Full Exploration",
"description": "头脑风暴 + 规划 + 执行",
"complexity": ["medium", "high"],
"steps": [
{ "command": "/workflow:brainstorm:auto-parallel", "optional": false, "confirm_before": true },
{ "command": "/workflow:plan", "optional": false },
{ "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
{ "command": "/workflow:execute", "optional": false }
],
"cli_hints": {
"role_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
"execution": { "tool": "codex", "mode": "write", "trigger": "task_count >= 3" }
},
"estimated_time": "1-3 hours"
},
"coupled": {
"name": "Coupled Planning",
"description": "完整规划 + 验证 + 执行",
"complexity": ["high"],
"steps": [
{ "command": "/workflow:plan", "optional": false },
{ "command": "/workflow:action-plan-verify", "optional": false, "auto_continue": true },
{ "command": "/workflow:execute", "optional": false },
{ "command": "/workflow:review", "optional": true }
],
"cli_hints": {
"pre_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"execution": { "tool": "codex", "mode": "write", "trigger": "always" }
},
"estimated_time": "2-4 hours"
},
"bugfix": {
"name": "Bug Fix",
"description": "智能诊断 + 修复",
"complexity": ["low", "medium"],
"variants": {
"standard": [{ "command": "/workflow:lite-fix", "optional": false }],
"hotfix": [{ "command": "/workflow:lite-fix --hotfix", "optional": false }]
},
"cli_hints": {
"diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"fix": { "tool": "codex", "mode": "write", "trigger": "severity >= medium" }
},
"estimated_time": "10-30 min"
},
"issue": {
"name": "Issue Lifecycle",
"description": "发现积累 → 批量规划 → 队列优化 → 并行执行",
"complexity": ["medium", "high"],
"phases": {
"accumulation": {
"description": "项目迭代中持续发现和积累issue",
"commands": ["/issue:discover", "/issue:new"],
"trigger": "post-task, code-review, test-failure"
},
"resolution": {
"description": "集中规划和执行积累的issue",
"steps": [
{ "command": "/issue:plan --all-pending", "optional": false },
{ "command": "/issue:queue", "optional": false },
{ "command": "/issue:execute", "optional": false }
]
}
},
"cli_hints": {
"discovery": { "tool": "gemini", "mode": "analysis", "trigger": "perspective_analysis", "parallel": true },
"solution_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
"batch_execution": { "tool": "codex", "mode": "write", "trigger": "always" }
},
"estimated_time": "1-4 hours"
},
"lite-lite-lite": {
"name": "Ultra-Lite Multi-CLI",
"description": "零文件 + 自动CLI选择 + 语义描述 + 直接执行",
"complexity": ["low", "medium"],
"steps": [
{ "phase": "clarify", "description": "需求澄清 (AskUser if needed)" },
{ "phase": "auto-select", "description": "任务分析 → 自动选择CLI组合" },
{ "phase": "multi-cli", "description": "并行多CLI分析" },
{ "phase": "decision", "description": "展示结果 → AskUser决策" },
{ "phase": "execute", "description": "直接执行 (无中间文件)" }
],
"vs_multi_cli_plan": {
"artifacts": "None vs IMPL_PLAN.md + plan.json + synthesis.json",
"session": "Stateless vs Persistent",
"cli_selection": "Auto-select based on task analysis vs Config-driven",
"iteration": "Via AskUser vs Via rounds/synthesis",
"execution": "Direct vs Via lite-execute",
"best_for": "Quick fixes, simple features vs Complex multi-step implementations"
},
"cli_hints": {
"analysis": { "tool": "auto", "mode": "analysis", "parallel": true },
"execution": { "tool": "auto", "mode": "write" }
},
"estimated_time": "10-30 min"
},
"multi-cli-plan": {
"name": "Multi-CLI Collaborative Planning",
"description": "ACE上下文 + 多CLI协作分析 + 迭代收敛 + 计划生成",
"complexity": ["medium", "high"],
"steps": [
{ "command": "/workflow:multi-cli-plan", "optional": false, "phases": [
"context_gathering: ACE语义搜索",
"multi_cli_discussion: cli-discuss-agent多轮分析",
"present_options: 展示解决方案",
"user_decision: 用户选择",
"plan_generation: cli-lite-planning-agent生成计划"
]},
{ "command": "/workflow:lite-execute", "optional": false }
],
"vs_lite_plan": {
"context": "ACE semantic search vs Manual file patterns",
"analysis": "Multi-CLI cross-verification vs Single-pass planning",
"iteration": "Multiple rounds until convergence vs Single round",
"confidence": "High (consensus-based) vs Medium (single perspective)",
"best_for": "Complex tasks needing multiple perspectives vs Straightforward implementations"
},
"agents": ["cli-discuss-agent", "cli-lite-planning-agent"],
"cli_hints": {
"discussion": { "tools": ["gemini", "codex", "claude"], "mode": "analysis", "parallel": true },
"planning": { "tool": "gemini", "mode": "analysis" }
},
"output": ".workflow/.multi-cli-plan/{session-id}/",
"estimated_time": "30-90 min"
},
"tdd": {
"name": "Test-Driven Development",
"description": "TDD规划 + 执行 + 验证",
"complexity": ["medium", "high"],
"steps": [
{ "command": "/workflow:tdd-plan", "optional": false },
{ "command": "/workflow:execute", "optional": false },
{ "command": "/workflow:tdd-verify", "optional": false }
],
"cli_hints": {
"test_strategy": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"red_green_refactor": { "tool": "codex", "mode": "write", "trigger": "always" }
},
"estimated_time": "1-3 hours"
},
"ui": {
"name": "UI-First Development",
"description": "UI设计 + 规划 + 执行",
"complexity": ["medium", "high"],
"variants": {
"explore": [
{ "command": "/workflow:ui-design:explore-auto", "optional": false },
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
{ "command": "/workflow:plan", "optional": false },
{ "command": "/workflow:execute", "optional": false }
],
"imitate": [
{ "command": "/workflow:ui-design:imitate-auto", "optional": false },
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
{ "command": "/workflow:plan", "optional": false },
{ "command": "/workflow:execute", "optional": false }
]
},
"estimated_time": "2-4 hours"
},
"review-fix": {
"name": "Review and Fix",
"description": "多维审查 + 自动修复",
"complexity": ["medium"],
"steps": [
{ "command": "/workflow:review-session-cycle", "optional": false },
{ "command": "/workflow:review-fix", "optional": true }
],
"cli_hints": {
"multi_dimension_review": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
"auto_fix": { "tool": "codex", "mode": "write", "trigger": "findings_count >= 3" }
},
"estimated_time": "30-90 min"
},
"docs": {
"name": "Documentation",
"description": "批量文档生成",
"complexity": ["low", "medium"],
"variants": {
"incremental": [{ "command": "/memory:update-related", "optional": false }],
"full": [
{ "command": "/memory:docs", "optional": false },
{ "command": "/workflow:execute", "optional": false }
]
},
"estimated_time": "15-60 min"
}
},
"intent_rules": {
"bugfix": {
"priority": 1,
"variants": {
"hotfix": {
"patterns": ["hotfix", "urgent", "production", "critical", "emergency", "紧急", "生产环境", "线上"],
"flow": "bugfix.hotfix"
},
"standard": {
"patterns": ["fix", "bug", "error", "issue", "crash", "broken", "fail", "wrong", "修复", "错误", "崩溃"],
"flow": "bugfix.standard"
}
}
},
"issue_batch": {
"priority": 2,
"patterns": {
"batch": ["issues", "batch", "queue", "多个", "批量"],
"action": ["fix", "resolve", "处理", "解决"]
},
"require_both": true,
"flow": "issue"
},
"exploration": {
"priority": 3,
"patterns": ["不确定", "不知道", "explore", "研究", "分析一下", "怎么做", "what if", "探索"],
"flow": "full"
},
"ui_design": {
"priority": 4,
"patterns": ["ui", "界面", "design", "设计", "component", "组件", "style", "样式", "layout", "布局"],
"variants": {
"imitate": { "triggers": ["参考", "模仿", "像", "类似"], "flow": "ui.imitate" },
"explore": { "triggers": [], "flow": "ui.explore" }
}
},
"tdd": {
"priority": 5,
"patterns": ["tdd", "test-driven", "测试驱动", "先写测试", "test first"],
"flow": "tdd"
},
"review": {
"priority": 6,
"patterns": ["review", "审查", "检查代码", "code review", "质量检查"],
"flow": "review-fix"
},
"documentation": {
"priority": 7,
"patterns": ["文档", "documentation", "docs", "readme"],
"variants": {
"incremental": { "triggers": ["更新", "增量"], "flow": "docs.incremental" },
"full": { "triggers": ["全部", "完整"], "flow": "docs.full" }
}
},
"feature": {
"priority": 99,
"complexity_map": {
"high": "coupled",
"medium": "rapid",
"low": "rapid"
}
}
},
"complexity_indicators": {
"high": {
"threshold": 4,
"patterns": {
"architecture": { "keywords": ["refactor", "重构", "migrate", "迁移", "architect", "架构", "system", "系统"], "weight": 2 },
"multi_module": { "keywords": ["multiple", "多个", "across", "跨", "all", "所有", "entire", "整个"], "weight": 2 },
"integration": { "keywords": ["integrate", "集成", "api", "database", "数据库"], "weight": 1 },
"quality": { "keywords": ["security", "安全", "performance", "性能", "scale", "扩展"], "weight": 1 }
}
},
"medium": { "threshold": 2 },
"low": { "threshold": 0 }
},
"cli_tools": {
"gemini": {
"strengths": ["超长上下文", "深度分析", "架构理解", "执行流追踪"],
"triggers": ["分析", "理解", "设计", "架构", "诊断"],
"mode": "analysis"
},
"qwen": {
"strengths": ["代码模式识别", "多维度分析"],
"triggers": ["评估", "对比", "验证"],
"mode": "analysis"
},
"codex": {
"strengths": ["精确代码生成", "自主执行"],
"triggers": ["实现", "重构", "修复", "生成"],
"mode": "write"
}
},
"cli_injection_rules": {
"context_gathering": { "trigger": "file_read >= 50k OR module_count >= 5", "inject": "gemini --mode analysis" },
"pre_planning_analysis": { "trigger": "complexity === high", "inject": "gemini --mode analysis" },
"debug_diagnosis": { "trigger": "intent === bugfix AND root_cause_unclear", "inject": "gemini --mode analysis" },
"code_review": { "trigger": "step === review", "inject": "gemini --mode analysis" },
"implementation": { "trigger": "step === execute AND task_count >= 3", "inject": "codex --mode write" }
},
"artifact_flow": {
"_description": "定义工作流产出的格式、意图提取和流转规则",
"outputs": {
"/workflow:lite-plan": {
"artifact": "memory://plan",
"format": "structured_plan",
"fields": ["tasks", "files", "dependencies", "approach"]
},
"/workflow:plan": {
"artifact": ".workflow/{session}/IMPL_PLAN.md",
"format": "markdown_plan",
"fields": ["phases", "tasks", "dependencies", "risks", "test_strategy"]
},
"/workflow:multi-cli-plan": {
"artifact": ".workflow/.multi-cli-plan/{session}/",
"format": "multi_file",
"files": ["IMPL_PLAN.md", "plan.json", "synthesis.json"],
"fields": ["consensus", "divergences", "recommended_approach", "tasks"]
},
"/workflow:lite-execute": {
"artifact": "git_changes",
"format": "code_diff",
"fields": ["modified_files", "added_files", "deleted_files", "build_status"]
},
"/workflow:execute": {
"artifact": ".workflow/{session}/execution_log.json",
"format": "execution_report",
"fields": ["completed_tasks", "pending_tasks", "errors", "warnings"]
},
"/workflow:test-cycle-execute": {
"artifact": ".workflow/{session}/test_results.json",
"format": "test_report",
"fields": ["pass_rate", "failures", "coverage", "duration"]
},
"/workflow:review-session-cycle": {
"artifact": ".workflow/{session}/review_report.md",
"format": "review_report",
"fields": ["findings", "severity_counts", "recommendations"]
},
"/workflow:lite-fix": {
"artifact": "git_changes",
"format": "fix_report",
"fields": ["root_cause", "fix_applied", "files_modified", "verification_status"]
}
},
"intent_extraction": {
"plan_to_execute": {
"from": ["lite-plan", "plan", "multi-cli-plan"],
"to": ["lite-execute", "execute"],
"extract": {
"tasks": "$.tasks[] | filter(status != 'completed')",
"priority_order": "$.tasks | sort_by(priority)",
"files_to_modify": "$.tasks[].files | flatten | unique",
"dependencies": "$.dependencies",
"context_summary": "$.approach OR $.recommended_approach"
}
},
"execute_to_test": {
"from": ["lite-execute", "execute"],
"to": ["test-cycle-execute", "test-fix-gen"],
"extract": {
"modified_files": "$.modified_files",
"test_scope": "infer_from($.modified_files)",
"build_status": "$.build_status",
"pending_verification": "$.completed_tasks | needs_test"
}
},
"test_to_fix": {
"from": ["test-cycle-execute"],
"to": ["lite-fix", "review-fix"],
"condition": "$.pass_rate < 0.95",
"extract": {
"failures": "$.failures",
"error_messages": "$.failures[].message",
"affected_files": "$.failures[].file",
"suggested_fixes": "$.failures[].suggested_fix"
}
},
"review_to_fix": {
"from": ["review-session-cycle", "review-module-cycle"],
"to": ["review-fix"],
"condition": "$.severity_counts.critical > 0 OR $.severity_counts.high > 3",
"extract": {
"findings": "$.findings | filter(severity in ['critical', 'high'])",
"fix_priority": "$.findings | group_by(category) | sort_by(severity)",
"affected_files": "$.findings[].file | unique"
}
}
},
"completion_criteria": {
"plan": {
"required": ["has_tasks", "has_files"],
"optional": ["has_tests", "no_blocking_risks"],
"threshold": 0.8,
"routing": {
"complete": "proceed_to_execute",
"incomplete": "clarify_requirements"
}
},
"execute": {
"required": ["all_tasks_attempted", "no_critical_errors"],
"optional": ["build_passes", "lint_passes"],
"threshold": 1.0,
"routing": {
"complete": "proceed_to_test_or_review",
"partial": "continue_execution",
"failed": "diagnose_and_retry"
}
},
"test": {
"metrics": {
"pass_rate": { "target": 0.95, "minimum": 0.80 },
"coverage": { "target": 0.80, "minimum": 0.60 }
},
"routing": {
"pass_rate >= 0.95 AND coverage >= 0.80": "complete",
"pass_rate >= 0.95 AND coverage < 0.80": "add_more_tests",
"pass_rate >= 0.80": "fix_failures_then_continue",
"pass_rate < 0.80": "major_fix_required"
}
},
"review": {
"metrics": {
"critical_findings": { "target": 0, "maximum": 0 },
"high_findings": { "target": 0, "maximum": 3 }
},
"routing": {
"critical == 0 AND high <= 3": "complete_or_optional_fix",
"critical > 0": "mandatory_fix",
"high > 3": "recommended_fix"
}
}
},
"flow_decisions": {
"_description": "根据产出完成度决定下一步",
"patterns": {
"plan_execute_test": {
"sequence": ["plan", "execute", "test"],
"on_test_fail": {
"action": "extract_failures_and_fix",
"max_iterations": 3,
"fallback": "manual_intervention"
}
},
"plan_execute_review": {
"sequence": ["plan", "execute", "review"],
"on_review_issues": {
"action": "prioritize_and_fix",
"auto_fix_threshold": "severity < high"
}
},
"iterative_improvement": {
"sequence": ["execute", "test", "fix"],
"loop_until": "pass_rate >= 0.95 OR iterations >= 3",
"on_loop_exit": "report_status"
}
}
}
}
}

View File

@@ -1,127 +0,0 @@
{
"_metadata": {
"version": "1.0.0",
"generated": "2026-01-03",
"description": "CCW command capability index for intelligent workflow coordination"
},
"capabilities": {
"explore": {
"description": "Codebase exploration and context gathering",
"commands": [
{ "command": "/workflow:init", "weight": 1.0, "tags": ["project-setup", "context"] },
{ "command": "/workflow:tools:gather", "weight": 0.9, "tags": ["context", "analysis"] },
{ "command": "/memory:load", "weight": 0.8, "tags": ["context", "memory"] }
],
"agents": ["cli-explore-agent", "context-search-agent"]
},
"brainstorm": {
"description": "Multi-perspective analysis and ideation",
"commands": [
{ "command": "/workflow:brainstorm:auto-parallel", "weight": 1.0, "tags": ["exploration", "multi-role"] },
{ "command": "/workflow:brainstorm:artifacts", "weight": 0.9, "tags": ["clarification", "guidance"] },
{ "command": "/workflow:brainstorm:synthesis", "weight": 0.8, "tags": ["consolidation", "refinement"] }
],
"roles": ["product-manager", "system-architect", "ux-expert", "data-architect", "api-designer"]
},
"plan": {
"description": "Task planning and decomposition",
"commands": [
{ "command": "/workflow:lite-plan", "weight": 1.0, "complexity": "low-medium", "tags": ["fast", "interactive"] },
{ "command": "/workflow:plan", "weight": 0.9, "complexity": "medium-high", "tags": ["comprehensive", "persistent"] },
{ "command": "/workflow:tdd-plan", "weight": 0.7, "complexity": "medium-high", "tags": ["test-first", "quality"] },
{ "command": "/task:create", "weight": 0.6, "tags": ["single-task", "manual"] },
{ "command": "/task:breakdown", "weight": 0.5, "tags": ["decomposition", "subtasks"] }
],
"agents": ["cli-lite-planning-agent", "action-planning-agent"]
},
"verify": {
"description": "Plan and quality verification",
"commands": [
{ "command": "/workflow:action-plan-verify", "weight": 1.0, "tags": ["plan-quality", "consistency"] },
{ "command": "/workflow:tdd-verify", "weight": 0.8, "tags": ["tdd-compliance", "coverage"] }
]
},
"execute": {
"description": "Task execution and implementation",
"commands": [
{ "command": "/workflow:lite-execute", "weight": 1.0, "complexity": "low-medium", "tags": ["fast", "agent-or-cli"] },
{ "command": "/workflow:execute", "weight": 0.9, "complexity": "medium-high", "tags": ["dag-parallel", "comprehensive"] },
{ "command": "/task:execute", "weight": 0.7, "tags": ["single-task"] }
],
"agents": ["code-developer", "cli-execution-agent", "universal-executor"]
},
"bugfix": {
"description": "Bug diagnosis and fixing",
"commands": [
{ "command": "/workflow:lite-fix", "weight": 1.0, "tags": ["diagnosis", "fix", "standard"] },
{ "command": "/workflow:lite-fix --hotfix", "weight": 0.9, "tags": ["emergency", "production", "fast"] }
],
"agents": ["code-developer"]
},
"test": {
"description": "Test generation and execution",
"commands": [
{ "command": "/workflow:test-gen", "weight": 1.0, "tags": ["post-implementation", "coverage"] },
{ "command": "/workflow:test-fix-gen", "weight": 0.9, "tags": ["from-description", "flexible"] },
{ "command": "/workflow:test-cycle-execute", "weight": 0.8, "tags": ["iterative", "fix-cycle"] }
],
"agents": ["test-fix-agent"]
},
"review": {
"description": "Code review and quality analysis",
"commands": [
{ "command": "/workflow:review-session-cycle", "weight": 1.0, "tags": ["session-based", "comprehensive"] },
{ "command": "/workflow:review-module-cycle", "weight": 0.9, "tags": ["module-based", "targeted"] },
{ "command": "/workflow:review", "weight": 0.8, "tags": ["single-pass", "type-specific"] },
{ "command": "/workflow:review-fix", "weight": 0.7, "tags": ["auto-fix", "findings"] }
]
},
"issue": {
"description": "Batch issue management",
"commands": [
{ "command": "/issue:new", "weight": 1.0, "tags": ["create", "import"] },
{ "command": "/issue:discover", "weight": 0.9, "tags": ["find", "analyze"] },
{ "command": "/issue:plan", "weight": 0.8, "tags": ["solutions", "planning"] },
{ "command": "/issue:queue", "weight": 0.7, "tags": ["prioritize", "order"] },
{ "command": "/issue:execute", "weight": 0.6, "tags": ["batch-execute", "dag"] }
],
"agents": ["issue-plan-agent", "issue-queue-agent"]
},
"ui-design": {
"description": "UI design and prototyping",
"commands": [
{ "command": "/workflow:ui-design:explore-auto", "weight": 1.0, "tags": ["from-scratch", "variants"] },
{ "command": "/workflow:ui-design:imitate-auto", "weight": 0.9, "tags": ["reference-based", "copy"] },
{ "command": "/workflow:ui-design:design-sync", "weight": 0.7, "tags": ["sync", "finalize"] },
{ "command": "/workflow:ui-design:generate", "weight": 0.6, "tags": ["assemble", "prototype"] }
],
"agents": ["ui-design-agent"]
},
"memory": {
"description": "Documentation and knowledge management",
"commands": [
{ "command": "/memory:docs", "weight": 1.0, "tags": ["generate", "planning"] },
{ "command": "/memory:update-related", "weight": 0.9, "tags": ["incremental", "git-based"] },
{ "command": "/memory:update-full", "weight": 0.8, "tags": ["comprehensive", "all-modules"] },
{ "command": "/memory:skill-memory", "weight": 0.7, "tags": ["package", "reusable"] }
],
"agents": ["doc-generator", "memory-bridge"]
},
"session": {
"description": "Workflow session management",
"commands": [
{ "command": "/workflow:session:start", "weight": 1.0, "tags": ["init", "discover"] },
{ "command": "/workflow:session:list", "weight": 0.9, "tags": ["view", "status"] },
{ "command": "/workflow:session:resume", "weight": 0.8, "tags": ["continue", "restore"] },
{ "command": "/workflow:session:complete", "weight": 0.7, "tags": ["finish", "archive"] }
]
},
"debug": {
"description": "Debugging and problem solving",
"commands": [
{ "command": "/workflow:debug", "weight": 1.0, "tags": ["hypothesis", "iterative"] },
{ "command": "/workflow:clean", "weight": 0.6, "tags": ["cleanup", "artifacts"] }
]
}
}
}

View File

@@ -1,136 +0,0 @@
{
"_metadata": {
"version": "1.0.0",
"description": "Externalized intent classification rules for CCW orchestrator"
},
"intent_patterns": {
"bugfix": {
"priority": 1,
"description": "Bug修复意图",
"variants": {
"hotfix": {
"patterns": ["hotfix", "urgent", "production", "critical", "emergency", "紧急", "生产环境", "线上"],
"workflow": "lite-fix --hotfix"
},
"standard": {
"patterns": ["fix", "bug", "error", "issue", "crash", "broken", "fail", "wrong", "incorrect", "修复", "错误", "崩溃", "失败"],
"workflow": "lite-fix"
}
}
},
"issue_batch": {
"priority": 2,
"description": "批量Issue处理意图",
"patterns": {
"batch_keywords": ["issues", "issue", "batch", "queue", "多个", "批量", "一批"],
"action_keywords": ["fix", "resolve", "处理", "解决", "修复"]
},
"require_both": true,
"workflow": "issue:plan → issue:queue → issue:execute"
},
"exploration": {
"priority": 3,
"description": "探索/不确定意图",
"patterns": ["不确定", "不知道", "explore", "研究", "分析一下", "怎么做", "what if", "should i", "探索", "可能", "或许", "建议"],
"workflow": "brainstorm → plan → execute"
},
"ui_design": {
"priority": 4,
"description": "UI/设计意图",
"patterns": ["ui", "界面", "design", "设计", "component", "组件", "style", "样式", "layout", "布局", "前端", "frontend", "页面"],
"variants": {
"imitate": {
"triggers": ["参考", "模仿", "像", "类似", "reference", "like"],
"workflow": "ui-design:imitate-auto → plan → execute"
},
"explore": {
"triggers": [],
"workflow": "ui-design:explore-auto → plan → execute"
}
}
},
"tdd": {
"priority": 5,
"description": "测试驱动开发意图",
"patterns": ["tdd", "test-driven", "测试驱动", "先写测试", "red-green", "test first"],
"workflow": "tdd-plan → execute → tdd-verify"
},
"review": {
"priority": 6,
"description": "代码审查意图",
"patterns": ["review", "审查", "检查代码", "code review", "质量检查", "安全审查"],
"workflow": "review-session-cycle → review-fix"
},
"documentation": {
"priority": 7,
"description": "文档生成意图",
"patterns": ["文档", "documentation", "docs", "readme", "注释", "api doc", "说明"],
"variants": {
"incremental": {
"triggers": ["更新", "增量", "相关"],
"workflow": "memory:update-related"
},
"full": {
"triggers": ["全部", "完整", "所有"],
"workflow": "memory:docs → execute"
}
}
}
},
"complexity_indicators": {
"high": {
"score_threshold": 4,
"patterns": {
"architecture": {
"keywords": ["refactor", "重构", "migrate", "迁移", "architect", "架构", "system", "系统"],
"weight": 2
},
"multi_module": {
"keywords": ["multiple", "多个", "across", "跨", "all", "所有", "entire", "整个"],
"weight": 2
},
"integration": {
"keywords": ["integrate", "集成", "connect", "连接", "api", "database", "数据库"],
"weight": 1
},
"quality": {
"keywords": ["security", "安全", "performance", "性能", "scale", "扩展", "优化"],
"weight": 1
}
},
"workflow": "plan → verify → execute"
},
"medium": {
"score_threshold": 2,
"workflow": "lite-plan → lite-execute"
},
"low": {
"score_threshold": 0,
"workflow": "lite-plan → lite-execute"
}
},
"cli_tool_triggers": {
"gemini": {
"explicit": ["用 gemini", "gemini 分析", "让 gemini", "用gemini"],
"semantic": ["深度分析", "架构理解", "执行流追踪", "根因分析"]
},
"qwen": {
"explicit": ["用 qwen", "qwen 评估", "让 qwen", "用qwen"],
"semantic": ["第二视角", "对比验证", "模式识别"]
},
"codex": {
"explicit": ["用 codex", "codex 实现", "让 codex", "用codex"],
"semantic": ["自主完成", "批量修改", "自动实现"]
}
},
"fallback_rules": {
"no_match": {
"default_workflow": "lite-plan → lite-execute",
"use_complexity_assessment": true
},
"ambiguous": {
"action": "ask_user",
"message": "检测到多个可能意图,请确认工作流选择"
}
}
}

View File

@@ -1,451 +0,0 @@
{
"_metadata": {
"version": "1.1.0",
"description": "Predefined workflow chains with CLI tool integration for CCW orchestration"
},
"cli_tools": {
"_doc": "CLI工具是CCW的核心能力在合适时机自动调用以获得1)较少token获取大量上下文 2)引入不同模型视角 3)增强debug和规划能力",
"gemini": {
"strengths": ["超长上下文", "深度分析", "架构理解", "执行流追踪"],
"triggers": ["分析", "理解", "设计", "架构", "评估", "诊断"],
"mode": "analysis",
"token_efficiency": "high",
"use_when": [
"需要理解大型代码库结构",
"执行流追踪和数据流分析",
"架构设计和技术方案评估",
"复杂问题诊断root cause analysis"
]
},
"qwen": {
"strengths": ["超长上下文", "代码模式识别", "多维度分析"],
"triggers": ["评估", "对比", "验证"],
"mode": "analysis",
"token_efficiency": "high",
"use_when": [
"Gemini 不可用时作为备选",
"需要第二视角验证分析结果",
"代码模式识别和重复检测"
]
},
"codex": {
"strengths": ["精确代码生成", "自主执行", "数学推理"],
"triggers": ["实现", "重构", "修复", "生成", "测试"],
"mode": "write",
"token_efficiency": "medium",
"use_when": [
"需要自主完成多步骤代码修改",
"复杂重构和迁移任务",
"测试生成和修复循环"
]
}
},
"cli_injection_rules": {
"_doc": "隐式规则在特定条件下自动注入CLI调用",
"context_gathering": {
"trigger": "file_read >= 50k chars OR module_count >= 5",
"inject": "gemini --mode analysis",
"reason": "大量代码上下文使用CLI可节省主会话token"
},
"pre_planning_analysis": {
"trigger": "complexity === 'high' OR intent === 'exploration'",
"inject": "gemini --mode analysis",
"reason": "复杂任务先用CLI分析获取多模型视角"
},
"debug_diagnosis": {
"trigger": "intent === 'bugfix' AND root_cause_unclear",
"inject": "gemini --mode analysis",
"reason": "深度诊断利用Gemini的执行流追踪能力"
},
"code_review": {
"trigger": "step === 'review'",
"inject": "gemini --mode analysis",
"reason": "代码审查用CLI减少token占用"
},
"implementation": {
"trigger": "step === 'execute' AND task_count >= 3",
"inject": "codex --mode write",
"reason": "多任务执行用Codex自主完成"
}
},
"chains": {
"rapid": {
"name": "Rapid Iteration",
"description": "多模型协作分析 + 直接执行",
"complexity": ["low", "medium"],
"steps": [
{
"command": "/workflow:lite-plan",
"optional": false,
"auto_continue": true,
"cli_hint": {
"explore_phase": { "tool": "gemini", "mode": "analysis", "trigger": "needs_exploration" },
"planning_phase": { "tool": "gemini", "mode": "analysis", "trigger": "complexity >= medium" }
}
},
{
"command": "/workflow:lite-execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"execution": { "tool": "codex", "mode": "write", "trigger": "user_selects_codex OR complexity >= medium" },
"review": { "tool": "gemini", "mode": "analysis", "trigger": "user_selects_review" }
}
}
],
"total_steps": 2,
"estimated_time": "15-45 min"
},
"full": {
"name": "Full Exploration",
"description": "多模型深度分析 + 头脑风暴 + 规划 + 执行",
"complexity": ["medium", "high"],
"steps": [
{
"command": "/workflow:brainstorm:auto-parallel",
"optional": false,
"confirm_before": true,
"cli_hint": {
"role_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true }
}
},
{
"command": "/workflow:plan",
"optional": false,
"auto_continue": false,
"cli_hint": {
"context_gather": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"task_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
}
},
{ "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
{
"command": "/workflow:execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"execution": { "tool": "codex", "mode": "write", "trigger": "task_count >= 3" }
}
}
],
"total_steps": 4,
"estimated_time": "1-3 hours"
},
"coupled": {
"name": "Coupled Planning",
"description": "CLI深度分析 + 完整规划 + 验证 + 执行",
"complexity": ["high"],
"steps": [
{
"command": "/workflow:plan",
"optional": false,
"auto_continue": false,
"cli_hint": {
"pre_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "purpose": "架构理解和依赖分析" },
"conflict_detection": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
}
},
{ "command": "/workflow:action-plan-verify", "optional": false, "auto_continue": true },
{
"command": "/workflow:execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"execution": { "tool": "codex", "mode": "write", "trigger": "always", "purpose": "自主多任务执行" }
}
},
{
"command": "/workflow:review",
"optional": true,
"auto_continue": false,
"cli_hint": {
"review": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
}
}
],
"total_steps": 4,
"estimated_time": "2-4 hours"
},
"bugfix": {
"name": "Bug Fix",
"description": "CLI诊断 + 智能修复",
"complexity": ["low", "medium"],
"variants": {
"standard": {
"steps": [
{
"command": "/workflow:lite-fix",
"optional": false,
"auto_continue": true,
"cli_hint": {
"diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "purpose": "根因分析和执行流追踪" },
"fix": { "tool": "codex", "mode": "write", "trigger": "severity >= medium" }
}
}
]
},
"hotfix": {
"steps": [
{
"command": "/workflow:lite-fix --hotfix",
"optional": false,
"auto_continue": true,
"cli_hint": {
"quick_diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "timeout": "60s" }
}
}
]
}
},
"total_steps": 1,
"estimated_time": "10-30 min"
},
"issue": {
"name": "Issue Batch",
"description": "CLI批量分析 + 队列优化 + 并行执行",
"complexity": ["medium", "high"],
"steps": [
{
"command": "/issue:plan",
"optional": false,
"auto_continue": false,
"cli_hint": {
"solution_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true }
}
},
{
"command": "/issue:queue",
"optional": false,
"auto_continue": false,
"cli_hint": {
"conflict_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "issue_count >= 3" }
}
},
{
"command": "/issue:execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"batch_execution": { "tool": "codex", "mode": "write", "trigger": "always", "purpose": "DAG并行执行" }
}
}
],
"total_steps": 3,
"estimated_time": "1-4 hours"
},
"tdd": {
"name": "Test-Driven Development",
"description": "TDD规划 + 执行 + CLI验证",
"complexity": ["medium", "high"],
"steps": [
{
"command": "/workflow:tdd-plan",
"optional": false,
"auto_continue": false,
"cli_hint": {
"test_strategy": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
}
},
{
"command": "/workflow:execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"red_green_refactor": { "tool": "codex", "mode": "write", "trigger": "always" }
}
},
{
"command": "/workflow:tdd-verify",
"optional": false,
"auto_continue": false,
"cli_hint": {
"coverage_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
}
}
],
"total_steps": 3,
"estimated_time": "1-3 hours"
},
"ui": {
"name": "UI-First Development",
"description": "UI设计 + 规划 + 执行",
"complexity": ["medium", "high"],
"variants": {
"explore": {
"steps": [
{ "command": "/workflow:ui-design:explore-auto", "optional": false, "auto_continue": false },
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
{ "command": "/workflow:plan", "optional": false, "auto_continue": false },
{ "command": "/workflow:execute", "optional": false, "auto_continue": false }
]
},
"imitate": {
"steps": [
{ "command": "/workflow:ui-design:imitate-auto", "optional": false, "auto_continue": false },
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
{ "command": "/workflow:plan", "optional": false, "auto_continue": false },
{ "command": "/workflow:execute", "optional": false, "auto_continue": false }
]
}
},
"total_steps": 4,
"estimated_time": "2-4 hours"
},
"review-fix": {
"name": "Review and Fix",
"description": "CLI多维审查 + 自动修复",
"complexity": ["medium"],
"steps": [
{
"command": "/workflow:review-session-cycle",
"optional": false,
"auto_continue": false,
"cli_hint": {
"multi_dimension_review": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true }
}
},
{
"command": "/workflow:review-fix",
"optional": true,
"auto_continue": false,
"cli_hint": {
"auto_fix": { "tool": "codex", "mode": "write", "trigger": "findings_count >= 3" }
}
}
],
"total_steps": 2,
"estimated_time": "30-90 min"
},
"docs": {
"name": "Documentation",
"description": "CLI批量文档生成",
"complexity": ["low", "medium"],
"variants": {
"incremental": {
"steps": [
{
"command": "/memory:update-related",
"optional": false,
"auto_continue": false,
"cli_hint": {
"doc_generation": { "tool": "gemini", "mode": "write", "trigger": "module_count >= 5" }
}
}
]
},
"full": {
"steps": [
{ "command": "/memory:docs", "optional": false, "auto_continue": false },
{
"command": "/workflow:execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"batch_doc": { "tool": "gemini", "mode": "write", "trigger": "always" }
}
}
]
}
},
"total_steps": 2,
"estimated_time": "15-60 min"
},
"cli-analysis": {
"name": "CLI Direct Analysis",
"description": "直接CLI分析获取多模型视角节省主会话token",
"complexity": ["low", "medium", "high"],
"standalone": true,
"steps": [
{
"command": "ccw cli",
"tool": "gemini",
"mode": "analysis",
"optional": false,
"auto_continue": false
}
],
"use_cases": [
"大型代码库快速理解",
"执行流追踪和数据流分析",
"架构评估和技术方案对比",
"性能瓶颈诊断"
],
"total_steps": 1,
"estimated_time": "5-15 min"
},
"cli-implement": {
"name": "CLI Direct Implementation",
"description": "直接Codex实现自主完成多步骤任务",
"complexity": ["medium", "high"],
"standalone": true,
"steps": [
{
"command": "ccw cli",
"tool": "codex",
"mode": "write",
"optional": false,
"auto_continue": false
}
],
"use_cases": [
"明确需求的功能实现",
"代码重构和迁移",
"测试生成",
"批量代码修改"
],
"total_steps": 1,
"estimated_time": "15-60 min"
},
"cli-debug": {
"name": "CLI Debug Session",
"description": "CLI调试会话利用Gemini深度诊断能力",
"complexity": ["medium", "high"],
"standalone": true,
"steps": [
{
"command": "ccw cli",
"tool": "gemini",
"mode": "analysis",
"purpose": "hypothesis-driven debugging",
"optional": false,
"auto_continue": false
}
],
"use_cases": [
"复杂bug根因分析",
"执行流异常追踪",
"状态机错误诊断",
"并发问题排查"
],
"total_steps": 1,
"estimated_time": "10-30 min"
}
},
"chain_selection_rules": {
"intent_mapping": {
"bugfix": ["bugfix"],
"feature_simple": ["rapid"],
"feature_unclear": ["full"],
"feature_complex": ["coupled"],
"issue_batch": ["issue"],
"test_driven": ["tdd"],
"ui_design": ["ui"],
"code_review": ["review-fix"],
"documentation": ["docs"],
"analysis_only": ["cli-analysis"],
"implement_only": ["cli-implement"],
"debug": ["cli-debug", "bugfix"]
},
"complexity_fallback": {
"low": "rapid",
"medium": "coupled",
"high": "full"
},
"cli_preference_rules": {
"_doc": "用户语义触发CLI工具选择",
"gemini_triggers": ["用 gemini", "gemini 分析", "让 gemini", "深度分析", "架构理解"],
"qwen_triggers": ["用 qwen", "qwen 评估", "让 qwen", "第二视角"],
"codex_triggers": ["用 codex", "codex 实现", "让 codex", "自主完成", "批量修改"]
}
}
}

View File

@@ -1,218 +0,0 @@
# Action: Bugfix Workflow
Bugfix workflow: intelligent diagnosis + impact assessment + fix.
## Pattern
```
lite-fix [--hotfix]
```
## Trigger Conditions
- Keywords: "fix", "bug", "error", "crash", "broken", "fail", "修复", "报错"
- Problem symptoms described
- Error messages present
## Execution Flow
### Standard Mode
```mermaid
sequenceDiagram
participant U as User
participant O as CCW Orchestrator
participant LF as lite-fix
participant CLI as CLI Tools
U->>O: Bug description
O->>O: Classify: bugfix (standard)
O->>LF: /workflow:lite-fix "bug"
Note over LF: Phase 1: Diagnosis
LF->>CLI: Root cause analysis (Gemini)
CLI-->>LF: diagnosis.json
Note over LF: Phase 2: Impact Assessment
LF->>LF: Risk scoring (0-10)
LF->>LF: Severity classification
LF-->>U: Impact report
Note over LF: Phase 3: Fix Strategy
LF->>LF: Generate fix options
LF-->>U: Present strategies
U->>LF: Select strategy
Note over LF: Phase 4: Verification Plan
LF->>LF: Generate test plan
LF-->>U: Verification approach
Note over LF: Phase 5: Confirmation
LF->>U: Execution method?
U->>LF: Confirm
Note over LF: Phase 6: Execute
LF->>CLI: Execute fix (Agent/Codex)
CLI-->>LF: Results
LF-->>U: Fix complete
```
### Hotfix Mode
```mermaid
sequenceDiagram
participant U as User
participant O as CCW Orchestrator
participant LF as lite-fix
participant CLI as CLI Tools
U->>O: Urgent bug + "hotfix"
O->>O: Classify: bugfix (hotfix)
O->>LF: /workflow:lite-fix --hotfix "bug"
Note over LF: Minimal Diagnosis
LF->>CLI: Quick root cause
CLI-->>LF: Known issue?
Note over LF: Surgical Fix
LF->>LF: Single optimal fix
LF-->>U: Quick confirmation
U->>LF: Proceed
Note over LF: Smoke Test
LF->>CLI: Minimal verification
CLI-->>LF: Pass/Fail
Note over LF: Follow-up Generation
LF->>LF: Generate follow-up tasks
LF-->>U: Fix deployed + follow-ups created
```
## When to Use
### Standard Mode (/workflow:lite-fix)
**Use for**:
- Bugs with known symptoms
- Localized fixes (1-5 files)
- Non-urgent problems
- Cases that need full diagnosis
### Hotfix Mode (/workflow:lite-fix --hotfix)
**Use for**:
- Production incidents
- Urgent fixes
- Clear single-point failures
- Time-sensitive situations
**Don't use** (for either mode):
- Architectural changes needed → `/workflow:plan --mode bugfix`
- Multiple related problems → `/issue:plan`
## Severity Classification
| Score | Severity | Response | Verification |
|-------|----------|----------|--------------|
| 8-10 | Critical | Immediate | Smoke test only |
| 6-7.9 | High | Fast track | Integration tests |
| 4-5.9 | Medium | Normal | Full test suite |
| 0-3.9 | Low | Scheduled | Comprehensive |
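As a rough sketch of how these thresholds could be applied in code (the helper name and return shape are assumptions for illustration, not the shipped implementation):
```javascript
// Hypothetical helper mirroring the severity table above.
function classifySeverity(score) {
  if (score >= 8) return { severity: 'Critical', response: 'Immediate', verification: 'smoke' }
  if (score >= 6) return { severity: 'High', response: 'Fast track', verification: 'integration' }
  if (score >= 4) return { severity: 'Medium', response: 'Normal', verification: 'full' }
  return { severity: 'Low', response: 'Scheduled', verification: 'comprehensive' }
}

classifySeverity(6.5) // → High / integration tests, as in the avatar-upload example below
```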
## Configuration
```javascript
const bugfixConfig = {
standard: {
diagnosis: {
tool: 'gemini',
depth: 'comprehensive',
timeout: 300000 // 5 min
},
impact: {
riskThreshold: 6.0, // High risk threshold
autoEscalate: true
},
verification: {
levels: ['smoke', 'integration', 'full'],
autoSelect: true // Based on severity
}
},
hotfix: {
diagnosis: {
tool: 'gemini',
depth: 'minimal',
timeout: 60000 // 1 min
},
fix: {
strategy: 'single', // Single optimal fix
surgical: true
},
followup: {
generate: true,
types: ['comprehensive-fix', 'post-mortem']
}
}
}
```
## Example Invocations
```bash
# Standard bug fix
ccw "用户头像上传失败,返回 413 错误"
→ lite-fix
→ Diagnosis: File size limit in nginx
→ Impact: 6.5 (High)
→ Fix: Update nginx config + add client validation
→ Verify: Integration test
# Production hotfix
ccw "紧急:支付网关返回 5xx 错误,影响所有用户"
→ lite-fix --hotfix
→ Quick diagnosis: API key expired
→ Surgical fix: Rotate key
→ Smoke test: Payment flow
→ Follow-ups: Key rotation automation, monitoring alert
# Unknown root cause
ccw "购物车随机丢失商品,原因不明"
→ lite-fix
→ Deep diagnosis (auto)
→ Root cause: Race condition in concurrent updates
→ Fix: Add optimistic locking
→ Verify: Concurrent test suite
```
## Output Artifacts
```
.workflow/.lite-fix/{bug-slug}-{timestamp}/
├── diagnosis.json # Root cause analysis
├── impact.json # Risk assessment
├── fix-plan.json # Fix strategy
├── task.json # Enhanced task for execution
└── followup.json # Follow-up tasks (hotfix only)
```
## Follow-up Tasks (Hotfix Mode)
```json
{
"followups": [
{
"id": "FOLLOWUP-001",
"type": "comprehensive-fix",
"title": "Complete fix for payment gateway issue",
"due": "3 days",
"description": "Implement full solution with proper error handling"
},
{
"id": "FOLLOWUP-002",
"type": "post-mortem",
"title": "Post-mortem analysis",
"due": "1 week",
"description": "Document incident and prevention measures"
}
]
}
```

View File

@@ -1,194 +0,0 @@
# Action: Coupled Workflow
Complex coupled workflow: full planning + verification + execution.
## Pattern
```
plan → action-plan-verify → execute
```
## Trigger Conditions
- Complexity: High
- Keywords: "refactor", "重构", "migrate", "迁移", "architect", "架构"
- Cross-module changes
- System-level modifications
## Execution Flow
```mermaid
sequenceDiagram
participant U as User
participant O as CCW Orchestrator
participant PL as plan
participant VF as verify
participant EX as execute
participant RV as review
U->>O: Complex task
O->>O: Classify: coupled (high complexity)
Note over PL: Phase 1: Comprehensive Planning
O->>PL: /workflow:plan
PL->>PL: Multi-phase planning
PL->>PL: Generate IMPL_PLAN.md
PL->>PL: Generate task JSONs
PL-->>U: Present plan
Note over VF: Phase 2: Verification
U->>VF: /workflow:action-plan-verify
VF->>VF: Cross-artifact consistency
VF->>VF: Dependency validation
VF->>VF: Quality gate checks
VF-->>U: Verification report
alt Verification failed
U->>PL: Replan with issues
else Verification passed
Note over EX: Phase 3: Execution
U->>EX: /workflow:execute
EX->>EX: DAG-based parallel execution
EX-->>U: Execution complete
end
Note over RV: Phase 4: Review
U->>RV: /workflow:review
RV-->>U: Review findings
```
## When to Use
**Ideal scenarios**:
- Large-scale refactoring
- Architecture migrations
- Cross-module feature development
- Tech-stack upgrades
- Team collaboration projects
**Avoid when**:
- Simple localized changes
- Tight deadlines
- Small standalone features
## Verification Checks
| Check | Description | Severity |
|-------|-------------|----------|
| Dependency Cycles | Detect circular dependencies | Critical |
| Missing Tasks | Plan does not match reality | High |
| File Conflicts | Multiple tasks modify the same file | Medium |
| Coverage Gaps | Requirements not covered | Medium |
## Configuration
```javascript
const coupledConfig = {
plan: {
phases: 5, // Full 5-phase planning
taskGeneration: 'action-planning-agent',
outputFormat: {
implPlan: '.workflow/plans/IMPL_PLAN.md',
taskJsons: '.workflow/tasks/IMPL-*.json'
}
},
verify: {
required: true, // Always verify before execute
autoReplan: false, // Manual replan on failure
qualityGates: ['no-cycles', 'no-conflicts', 'complete-coverage']
},
execute: {
dagParallel: true,
checkpointInterval: 3, // Checkpoint every 3 tasks
rollbackOnFailure: true
},
review: {
types: ['architecture', 'security'],
required: true
}
}
```
## Task JSON Structure
```json
{
"id": "IMPL-001",
"title": "重构认证模块核心逻辑",
"scope": "src/auth/**",
"action": "refactor",
"depends_on": [],
"modification_points": [
{
"file": "src/auth/service.ts",
"target": "AuthService",
"change": "Extract OAuth2 logic"
}
],
"acceptance": [
"所有现有测试通过",
"OAuth2 流程可用"
]
}
```
## Example Invocations
```bash
# Architecture refactoring
ccw "重构整个认证模块,从 session 迁移到 JWT"
→ plan (5 phases)
→ verify
→ execute
# System migration
ccw "将数据库从 MySQL 迁移到 PostgreSQL"
→ plan (migration strategy)
→ verify (data integrity checks)
→ execute (staged migration)
# Cross-module feature
ccw "实现跨服务的分布式事务支持"
→ plan (architectural design)
→ verify (consistency checks)
→ execute (incremental rollout)
```
## Output Artifacts
```
.workflow/
├── plans/
│ └── IMPL_PLAN.md # Comprehensive plan
├── tasks/
│ ├── IMPL-001.json
│ ├── IMPL-002.json
│ └── ...
├── verify/
│ └── verification-report.md # Verification results
└── reviews/
└── {review-type}.md # Review findings
```
## Replan Flow
When verification fails:
```javascript
if (verificationResult.status === 'failed') {
console.log(`
## Verification Failed
**Issues found**:
${verificationResult.issues.map(i => `- ${i.severity}: ${i.message}`).join('\n')}
**Options**:
1. /workflow:replan - Address issues and regenerate plan
2. /workflow:plan --force - Proceed despite issues (not recommended)
3. Review issues manually and fix plan files
`)
}
```

View File

@@ -1,93 +0,0 @@
# Documentation Workflow Action
## Pattern
```
memory:docs → execute (full)
memory:update-related (incremental)
```
## Trigger Conditions
- Keywords: "文档", "documentation", "docs", "readme", "注释"
- Variant triggers:
- `incremental`: "更新", "增量", "相关"
- `full`: "全部", "完整", "所有"
## Variants
### Full Documentation
```mermaid
graph TD
A[User Input] --> B[memory:docs]
B --> C[Project structure analysis]
C --> D[Module grouping ≤10/task]
D --> E[execute: parallel generation]
E --> F[README.md]
E --> G[ARCHITECTURE.md]
E --> H[API Docs]
E --> I[Module CLAUDE.md]
```
### Incremental Update
```mermaid
graph TD
A[Git Changes] --> B[memory:update-related]
B --> C[Detect changed modules]
C --> D[Locate related docs]
D --> E[Incremental update]
```
## Configuration
| Parameter | Default | Description |
|------|--------|------|
| batch_size | 4 | Modules handled per agent |
| format | markdown | Output format |
| include_api | true | Generate API docs |
| include_diagrams | true | Generate Mermaid diagrams |
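For parity with the config blocks in the other workflow actions, the same settings can be sketched as an object; `docsConfig` and its field names are illustrative assumptions based on the table above:
```javascript
// Hypothetical config object mirroring the table above.
const docsConfig = {
  batch_size: 4,          // modules handled per agent
  format: 'markdown',     // output format
  include_api: true,      // generate API docs
  include_diagrams: true  // generate Mermaid diagrams
}
```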
## CLI Integration
| Phase | CLI Hint | Purpose |
|------|----------|------|
| memory:docs | `gemini --mode analysis` | Project structure analysis |
| execute | `gemini --mode write` | Documentation generation |
| update-related | `gemini --mode write` | Incremental updates |
## Slash Commands
```bash
/memory:docs              # Plan full documentation generation
/memory:docs-full-cli     # Run full doc generation via CLI
/memory:docs-related-cli  # Run incremental doc generation via CLI
/memory:update-related    # Update docs related to recent changes
/memory:update-full       # Update all CLAUDE.md files
```
## Output Structure
```
project/
├── README.md            # Project overview
├── ARCHITECTURE.md      # Architecture docs
├── docs/
│   └── api/             # API docs
└── src/
    └── module/
        └── CLAUDE.md    # Module docs
```
## When to Use
- Initial documentation for a new project
- Doc refresh before a major release
- Syncing docs after code changes
- API documentation generation
## Risk Assessment
| Risk | Mitigation |
|------|----------|
| Docs drift out of sync with code | git hook integration |
| Generated content too verbose | batch_size control |
| Important modules missed | Full-scan verification |

View File

@@ -1,154 +0,0 @@
# Action: Full Workflow
Full exploration workflow: analysis + brainstorming + planning + execution.
## Pattern
```
brainstorm:auto-parallel → plan → [verify] → execute
```
## Trigger Conditions
- Intent: Exploration (uncertainty detected)
- Keywords: "不确定", "不知道", "explore", "怎么做", "what if"
- No clear implementation path
## Execution Flow
```mermaid
sequenceDiagram
participant U as User
participant O as CCW Orchestrator
participant BS as brainstorm
participant PL as plan
participant VF as verify
participant EX as execute
U->>O: Unclear task
O->>O: Classify: full
Note over BS: Phase 1: Brainstorm
O->>BS: /workflow:brainstorm:auto-parallel
BS->>BS: Multi-role parallel analysis
BS->>BS: Synthesis & recommendations
BS-->>U: Present options
U->>BS: Select direction
Note over PL: Phase 2: Plan
BS->>PL: /workflow:plan
PL->>PL: Generate IMPL_PLAN.md
PL->>PL: Generate task JSONs
PL-->>U: Review plan
Note over VF: Phase 3: Verify (optional)
U->>VF: /workflow:action-plan-verify
VF->>VF: Cross-artifact consistency
VF-->>U: Verification report
Note over EX: Phase 4: Execute
U->>EX: /workflow:execute
EX->>EX: DAG-based parallel execution
EX-->>U: Execution complete
```
## When to Use
**Ideal scenarios**:
- Product direction exploration
- Technology selection evaluation
- Architecture design decisions
- Complex feature planning
- Need for multi-role perspectives
**Avoid when**:
- Task is clear and simple
- Tight deadlines
- A proven solution already exists
## Brainstorm Roles
| Role | Focus | Typical Questions |
|------|-------|-------------------|
| Product Manager | User value, market positioning | "What are the users' pain points?" |
| System Architect | Technical approach, architecture design | "How do we ensure scalability?" |
| UX Expert | User experience, interaction design | "Is the user flow smooth?" |
| Security Expert | Security risks, compliance requirements | "What are the security risks?" |
| Data Architect | Data models, storage solutions | "How is the data organized?" |
## Configuration
```javascript
const fullConfig = {
brainstorm: {
defaultRoles: ['product-manager', 'system-architect', 'ux-expert'],
maxRoles: 5,
synthesis: true // Always generate synthesis
},
plan: {
verifyBeforeExecute: true, // Recommend verification
taskFormat: 'json' // Generate task JSONs
},
execute: {
dagParallel: true, // DAG-based parallel execution
testGeneration: 'optional' // Suggest test-gen after
}
}
```
## Continuation Points
After each phase, CCW can continue to the next:
```javascript
// After brainstorm completes
console.log(`
## Brainstorm Complete
**Next steps**:
1. /workflow:plan "Plan implementation based on the brainstorm results"
2. Or refine: /workflow:brainstorm:synthesis
`)
// After plan completes
console.log(`
## Plan Complete
**Next steps**:
1. /workflow:action-plan-verify (recommended)
2. /workflow:execute (execute directly)
`)
```
## Example Invocations
```bash
# Product exploration
ccw "我想做一个团队协作工具,但不确定具体方向"
→ brainstorm:auto-parallel (5 roles)
→ plan
→ execute
# Technical exploration
ccw "如何设计一个高可用的消息队列系统?"
→ brainstorm:auto-parallel (system-architect, data-architect)
→ plan
→ verify
→ execute
```
## Output Artifacts
```
.workflow/
├── brainstorm/
│ ├── {session}/
│ │ ├── role-{role}.md
│ │ └── synthesis.md
├── plans/
│ └── IMPL_PLAN.md
└── tasks/
└── IMPL-*.json
```

View File

@@ -1,201 +0,0 @@
# Action: Issue Workflow
Issue batch-processing workflow: planning + queueing + batch execution.
## Pattern
```
issue:plan → issue:queue → issue:execute
```
## Trigger Conditions
- Keywords: "issues", "batch", "queue", "多个", "批量"
- Multiple related problems
- Long-running fix campaigns
- Priority-based processing needed
## Execution Flow
```mermaid
sequenceDiagram
participant U as User
participant O as CCW Orchestrator
participant IP as issue:plan
participant IQ as issue:queue
participant IE as issue:execute
U->>O: Multiple issues / batch fix
O->>O: Classify: issue
Note over IP: Phase 1: Issue Planning
O->>IP: /issue:plan
IP->>IP: Load unplanned issues
IP->>IP: Generate solutions per issue
IP->>U: Review solutions
U->>IP: Bind selected solutions
Note over IQ: Phase 2: Queue Formation
IP->>IQ: /issue:queue
IQ->>IQ: Conflict analysis
IQ->>IQ: Priority calculation
IQ->>IQ: DAG construction
IQ->>U: High-severity conflicts?
U->>IQ: Resolve conflicts
IQ->>IQ: Generate execution queue
Note over IE: Phase 3: Execution
IQ->>IE: /issue:execute
IE->>IE: DAG-based parallel execution
IE->>IE: Per-solution progress tracking
IE-->>U: Batch execution complete
```
## When to Use
**Ideal scenarios**:
- Multiple related bugs needing batch fixes
- Batch processing of GitHub Issues
- Tech-debt cleanup
- Batch patching of security vulnerabilities
- Code quality improvement campaigns
**Avoid when**:
- A single problem → `/workflow:lite-fix`
- Independent, unrelated tasks → handle separately
- Urgent production incidents → `/workflow:lite-fix --hotfix`
## Issue Lifecycle
```
draft → planned → queued → executing → completed
↓ ↓
skipped on-hold
```
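A minimal sketch of the lifecycle as a transition table (the constant and helper are assumptions for illustration, not part of the issue system's API):
```javascript
// Hypothetical transition table for the lifecycle above.
const ISSUE_TRANSITIONS = {
  draft:     ['planned'],
  planned:   ['queued', 'skipped'],
  queued:    ['executing'],
  executing: ['completed', 'on-hold'],
  'on-hold': ['executing'] // assumed resume path
}

function canTransition(from, to) {
  return (ISSUE_TRANSITIONS[from] || []).includes(to)
}

canTransition('planned', 'queued')  // true
canTransition('draft', 'completed') // false
```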
## Conflict Types
| Type | Description | Resolution |
|------|-------------|------------|
| File | Multiple solutions modify the same file | Sequential execution |
| API | API signature changes ripple outward | Dependency ordering |
| Data | Conflicting data-structure changes | User decision |
| Dependency | Package dependency conflicts | Version negotiation |
| Architecture | Conflicting architectural directions | User decision (high severity) |
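The resolution column can likewise be sketched as a dispatch map (the names below are assumptions for illustration only):
```javascript
// Illustrative mapping from conflict type to resolution strategy.
const CONFLICT_RESOLUTION = {
  file: 'sequential_execution',
  api: 'dependency_ordering',
  data: 'ask_user',
  dependency: 'version_negotiation',
  architecture: 'ask_user' // high severity: always escalate to the user
}

function resolveConflict(conflict) {
  return { ...conflict, strategy: CONFLICT_RESOLUTION[conflict.type] || 'ask_user' }
}
```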
## Configuration
```javascript
const issueConfig = {
plan: {
solutionsPerIssue: 3, // Generate up to 3 solutions
autoSelect: false, // User must bind solution
planningAgent: 'issue-plan-agent'
},
queue: {
conflictAnalysis: true,
priorityCalculation: true,
clarifyThreshold: 'high', // Ask user for high-severity conflicts
queueAgent: 'issue-queue-agent'
},
execute: {
dagParallel: true,
executionLevel: 'solution', // Execute by solution, not task
executor: 'codex',
resumable: true
}
}
```
## Example Invocations
```bash
# From GitHub Issues
ccw "批量处理所有 label:bug 的 GitHub Issues"
→ issue:new (import from GitHub)
→ issue:plan (generate solutions)
→ issue:queue (form execution queue)
→ issue:execute (batch execute)
# Tech debt cleanup
ccw "处理所有 TODO 注释和已知技术债务"
→ issue:discover (find issues)
→ issue:plan (plan solutions)
→ issue:queue (prioritize)
→ issue:execute (execute)
# Security vulnerabilities
ccw "修复所有 npm audit 报告的安全漏洞"
→ issue:new (from audit report)
→ issue:plan (upgrade strategies)
→ issue:queue (conflict resolution)
→ issue:execute (staged upgrades)
```
## Queue Structure
```json
{
"queue_id": "QUE-20251227-143000",
"status": "active",
"execution_groups": [
{
"id": "P1",
"type": "parallel",
"solutions": ["SOL-ISS-001-1", "SOL-ISS-002-1"],
"description": "Independent fixes, no file overlap"
},
{
"id": "S1",
"type": "sequential",
"solutions": ["SOL-ISS-003-1"],
"depends_on": ["P1"],
"description": "Depends on P1 completion"
}
]
}
```
## Output Artifacts
```
.workflow/issues/
├── issues.jsonl # All issues (one per line)
├── solutions/
│ ├── ISS-001.jsonl # Solutions for ISS-001
│ └── ISS-002.jsonl
├── queues/
│ ├── index.json # Queue index
│ └── QUE-xxx.json # Queue details
└── execution/
└── {queue-id}/
├── progress.json
└── results/
```
## Progress Tracking
```javascript
// Real-time progress during execution
const progress = {
queue_id: "QUE-xxx",
total_solutions: 5,
completed: 2,
in_progress: 1,
pending: 2,
current_group: "P1",
eta: "15 minutes"
}
```
## Resume Capability
```bash
# If execution interrupted
ccw "继续执行 issue 队列"
→ Detects active queue: QUE-xxx
→ Resumes from last checkpoint
→ /issue:execute --resume
```

View File

@@ -1,104 +0,0 @@
# Action: Rapid Workflow
Rapid iteration workflow combo: multi-model collaborative analysis + direct execution.
## Pattern
```
lite-plan → lite-execute
```
## Trigger Conditions
- Complexity: Low to Medium
- Intent: Feature development
- Context: Clear requirements, known implementation path
- No uncertainty keywords
## Execution Flow
```mermaid
sequenceDiagram
participant U as User
participant O as CCW Orchestrator
participant LP as lite-plan
participant LE as lite-execute
participant CLI as CLI Tools
U->>O: Task description
O->>O: Classify: rapid
O->>LP: /workflow:lite-plan "task"
LP->>LP: Complexity assessment
LP->>CLI: Parallel explorations (if needed)
CLI-->>LP: Exploration results
LP->>LP: Generate plan.json
LP->>U: Display plan, ask confirmation
U->>LP: Confirm + select execution method
LP->>LE: /workflow:lite-execute --in-memory
LE->>CLI: Execute tasks (Agent/Codex)
CLI-->>LE: Results
LE->>LE: Optional code review
LE-->>U: Execution complete
```
## When to Use
**Ideal scenarios**:
- Adding a single feature (e.g., user avatar upload)
- Modifying existing functionality (e.g., updating form validation)
- Small refactors (e.g., extracting a shared helper)
- Adding test cases
- Documentation updates
**Avoid when**:
- The implementation approach is uncertain
- Changes span multiple modules
- Architectural decisions are needed
- Complex dependencies are involved
## Configuration
```javascript
const rapidConfig = {
explorationThreshold: {
// Force exploration if task mentions specific files
forceExplore: /\b(file|文件|module|模块|class|类)\s*[:]?\s*\w+/i,
// Skip exploration for simple tasks
skipExplore: /\b(add|添加|create|创建)\s+(comment|注释|log|日志)/i
},
defaultExecution: 'Agent', // Agent for low complexity
codeReview: {
default: 'Skip', // Skip review for simple tasks
threshold: 'medium' // Enable for medium+ complexity
}
}
```
## Example Invocations
```bash
# Simple feature
ccw "添加用户退出登录按钮"
→ lite-plan → lite-execute (Agent)
# With exploration
ccw "优化 AuthService 的 token 刷新逻辑"
→ lite-plan -e → lite-execute (Agent, Gemini review)
# Medium complexity
ccw "实现用户偏好设置的本地存储"
→ lite-plan -e → lite-execute (Codex)
```
## Output Artifacts
```
.workflow/.lite-plan/{task-slug}-{date}/
├── exploration-*.json # If exploration was triggered
├── explorations-manifest.json
└── plan.json # Implementation plan
```

View File

@@ -1,84 +0,0 @@
# Review-Fix Workflow Action
## Pattern
```
review-session-cycle → review-fix
```
## Trigger Conditions
- Keywords: "review", "审查", "检查代码", "code review", "质量检查"
- Scenarios: PR review, code-quality improvement, security audits
## Execution Flow
```mermaid
graph TD
A[User Input] --> B[review-session-cycle]
B --> C{7-dimension analysis}
C --> D[Security]
C --> E[Performance]
C --> F[Maintainability]
C --> G[Architecture]
C --> H[Code Style]
C --> I[Test Coverage]
C --> J[Documentation]
D & E & F & G & H & I & J --> K[Findings Aggregation]
K --> L{Quality Gate}
L -->|Pass| M[Report Only]
L -->|Fail| N[review-fix]
N --> O[Auto Fix]
O --> P[Re-verify]
```
## Configuration
| Parameter | Default | Description |
|------|--------|------|
| dimensions | all | Review dimensions (security, performance, etc.) |
| quality_gate | 80 | Quality gate score threshold |
| auto_fix | true | Auto-fix discovered issues |
| severity_threshold | medium | Minimum severity to report |
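Sketched as a config object in the style of the other workflow actions; `reviewConfig` and its field names are inferred from the table, not a documented API:
```javascript
// Hypothetical config object mirroring the table above.
const reviewConfig = {
  dimensions: 'all',           // or a subset: ['security', 'performance', ...]
  quality_gate: 80,            // minimum score to pass the gate
  auto_fix: true,              // trigger review-fix when the gate fails
  severity_threshold: 'medium' // ignore findings below this level
}
```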
## CLI Integration
| Phase | CLI Hint | Purpose |
|------|----------|------|
| review-session-cycle | `gemini --mode analysis` | Multi-dimension deep analysis |
| review-fix | `codex --mode write` | Auto-fix issues |
## Slash Commands
```bash
/workflow:review-session-cycle    # Session-level code review
/workflow:review-module-cycle     # Module-level code review
/workflow:review-fix              # Auto-fix review findings
/workflow:review --type security  # Targeted security review
```
## Review Dimensions
| Dimension | Checkpoints |
|------|--------|
| Security | Injection, XSS, sensitive data exposure |
| Performance | N+1 queries, memory leaks, algorithmic complexity |
| Maintainability | Code duplication, complexity, naming |
| Architecture | Dependency direction, layering violations, coupling |
| Code Style | Formatting, conventions, consistency |
| Test Coverage | Coverage rate, edge cases |
| Documentation | Comments, API docs, README |
## When to Use
- Pre-merge PR review
- Quality verification after refactoring
- Security compliance audits
- Tech-debt assessment
## Risk Assessment
| Risk | Mitigation |
|------|----------|
| Too many false positives | severity_threshold filtering |
| Fixes introduce new issues | re-verify loop |
| Review not comprehensive | 7-dimension coverage |

View File

@@ -1,66 +0,0 @@
# TDD Workflow Action
## Pattern
```
tdd-plan → execute → tdd-verify
```
## Trigger Conditions
- Keywords: "tdd", "test-driven", "测试驱动", "先写测试", "red-green"
- Scenarios: strong code-quality guarantees needed, critical business logic, high regression risk
## Execution Flow
```mermaid
graph TD
A[User Input] --> B[tdd-plan]
B --> C{Generate test task chain}
C --> D[Red Phase: write failing tests]
D --> E[execute: implement code]
E --> F[Green Phase: tests pass]
F --> G{Refactor needed?}
G -->|Yes| H[Refactor Phase]
H --> F
G -->|No| I[tdd-verify]
I --> J[Quality report]
```
## Configuration
| Parameter | Default | Description |
|------|--------|------|
| coverage_target | 80% | Target coverage |
| cycle_limit | 10 | Max Red-Green-Refactor cycles |
| strict_mode | false | Strict mode (tests must go red before green) |
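The same settings as a config-object sketch; `tddConfig` and its field names are illustrative assumptions based on the table:
```javascript
// Hypothetical config object mirroring the table above.
const tddConfig = {
  coverage_target: 0.8, // 80% target coverage
  cycle_limit: 10,      // max Red-Green-Refactor cycles
  strict_mode: false    // when true, tests must fail (red) before they pass (green)
}
```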
## CLI Integration
| Phase | CLI Hint | Purpose |
|------|----------|------|
| tdd-plan | `gemini --mode analysis` | Analyze the test strategy |
| execute | `codex --mode write` | Implement code |
| tdd-verify | `gemini --mode analysis` | Verify TDD compliance |
## Slash Commands
```bash
/workflow:tdd-plan    # Generate the TDD task chain
/workflow:execute     # Run the Red-Green-Refactor loop
/workflow:tdd-verify  # Verify TDD compliance + coverage
```
## When to Use
- Core business-logic development
- Modules requiring high test coverage
- Refactoring existing code without breaking behavior
- Teams that mandate TDD practice
## Risk Assessment
| Risk | Mitigation |
|------|----------|
| Poor test granularity | Assess test boundaries during tdd-plan |
| Over-testing | Focus on behavior, not implementation |
| Too many cycles | cycle_limit cap |

View File

@@ -1,79 +0,0 @@
# UI Design Workflow Action
## Pattern
```
ui-design:[explore|imitate]-auto → design-sync → plan → execute
```
## Trigger Conditions
- Keywords: "ui", "界面", "design", "组件", "样式", "布局", "前端"
- Variant triggers:
- `imitate`: "参考", "模仿", "像", "类似"
- `explore`: default when no specific reference is given
## Variants
### Explore (exploratory design)
```mermaid
graph TD
A[User Input] --> B[ui-design:explore-auto]
B --> C[Design system analysis]
C --> D[Component structure planning]
D --> E[design-sync]
E --> F[plan]
F --> G[execute]
```
### Imitate (reference-based design)
```mermaid
graph TD
A[User Input + Reference] --> B[ui-design:imitate-auto]
B --> C[Reference analysis]
C --> D[Style extraction]
D --> E[design-sync]
E --> F[plan]
F --> G[execute]
```
## Configuration
| Parameter | Default | Description |
|------|--------|------|
| design_system | auto | Design system (auto/tailwind/mui/custom) |
| responsive | true | Responsive design |
| accessibility | true | Accessibility support |
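Again as a config-object sketch; `uiConfig` and its field names are assumptions inferred from the table:
```javascript
// Hypothetical config object mirroring the table above.
const uiConfig = {
  design_system: 'auto', // auto | tailwind | mui | custom
  responsive: true,      // generate responsive layouts
  accessibility: true    // include a11y support
}
```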
## CLI Integration
| Phase | CLI Hint | Purpose |
|------|----------|------|
| explore/imitate | `gemini --mode analysis` | Design analysis, style extraction |
| design-sync | - | Sync design decisions with the codebase |
| plan | - | Built-in planning |
| execute | `codex --mode write` | Component implementation |
## Slash Commands
```bash
/workflow:ui-design:explore-auto  # Exploratory UI design
/workflow:ui-design:imitate-auto  # Reference-based UI design
/workflow:ui-design:design-sync   # Sync design with code (key step)
/workflow:ui-design:style-extract # Extract existing styles
/workflow:ui-design:codify-style  # Codify extracted styles
```
## When to Use
- New page/component development
- UI refactoring or modernization
- Establishing a design system
- Borrowing designs from other products
## Risk Assessment
| Risk | Mitigation |
|------|----------|
| Inconsistent design | style-extract ensures reuse |
| Responsive issues | Multi-breakpoint verification |
| Missing accessibility | Integrated a11y checks |

View File

@@ -1,435 +0,0 @@
# CCW Orchestrator
Stateless orchestrator: analyze input → select a workflow chain → execute with TODO tracking.
## Architecture
```
┌──────────────────────────────────────────────────────────────────┐
│ CCW Orchestrator │
├──────────────────────────────────────────────────────────────────┤
│ │
│ Phase 1: Input Analysis │
│ ├─ Parse input (natural language / explicit command) │
│ ├─ Classify intent (bugfix / feature / issue / ui / docs) │
│ └─ Assess complexity (low / medium / high) │
│ │
│ Phase 2: Chain Selection │
│ ├─ Load index/workflow-chains.json │
│ ├─ Match intent → chain(s) │
│ ├─ Filter by complexity │
│ └─ Select optimal chain │
│ │
│ Phase 3: User Confirmation (optional) │
│ ├─ Display selected chain and steps │
│ └─ Allow modification or manual selection │
│ │
│ Phase 4: TODO Tracking Setup │
│ ├─ Create TodoWrite with chain steps │
│ └─ Mark first step as in_progress │
│ │
│ Phase 5: Execution Loop │
│ ├─ Execute current step (SlashCommand) │
│ ├─ Update TODO status (completed) │
│ ├─ Check auto_continue flag │
│ └─ Proceed to next step or wait for user │
│ │
└──────────────────────────────────────────────────────────────────┘
```
## Implementation
### Phase 1: Input Analysis
```javascript
// Load external configuration (externalized for flexibility)
const intentRules = JSON.parse(Read('.claude/skills/ccw/index/intent-rules.json'))
const capabilities = JSON.parse(Read('.claude/skills/ccw/index/command-capabilities.json'))
function analyzeInput(userInput) {
const input = userInput.trim()
// Check for explicit command passthrough
if (input.match(/^\/(?:workflow|issue|memory|task):/)) {
return { type: 'explicit', command: input, passthrough: true }
}
// Classify intent using external rules
const intent = classifyIntent(input, intentRules.intent_patterns)
// Assess complexity using external indicators
const complexity = assessComplexity(input, intentRules.complexity_indicators)
// Detect tool preferences using external triggers
const toolPreference = detectToolPreference(input, intentRules.cli_tool_triggers)
return {
type: 'natural',
text: input,
intent,
complexity,
toolPreference,
passthrough: false
}
}
function classifyIntent(text, patterns) {
// Sort by priority
const sorted = Object.entries(patterns)
.sort((a, b) => a[1].priority - b[1].priority)
for (const [intentType, config] of sorted) {
// Handle variants (bugfix, ui, docs)
if (config.variants) {
for (const [variant, variantConfig] of Object.entries(config.variants)) {
const variantPatterns = variantConfig.patterns || variantConfig.triggers || []
if (matchesAnyPattern(text, variantPatterns)) {
// For bugfix, check if standard patterns also match
if (intentType === 'bugfix') {
const standardMatch = matchesAnyPattern(text, config.variants.standard?.patterns || [])
if (standardMatch) {
return { type: intentType, variant, workflow: variantConfig.workflow }
}
} else {
return { type: intentType, variant, workflow: variantConfig.workflow }
}
}
}
// Check default variant
if (config.variants.standard) {
if (matchesAnyPattern(text, config.variants.standard.patterns)) {
return { type: intentType, variant: 'standard', workflow: config.variants.standard.workflow }
}
}
}
// Handle simple patterns (exploration, tdd, review)
if (config.patterns && !config.require_both) {
if (matchesAnyPattern(text, config.patterns)) {
return { type: intentType, workflow: config.workflow }
}
}
// Handle dual-pattern matching (issue_batch)
if (config.require_both && config.patterns) {
const matchBatch = matchesAnyPattern(text, config.patterns.batch_keywords)
const matchAction = matchesAnyPattern(text, config.patterns.action_keywords)
if (matchBatch && matchAction) {
return { type: intentType, workflow: config.workflow }
}
}
}
// Default to feature
return { type: 'feature' }
}
function matchesAnyPattern(text, patterns) {
if (!Array.isArray(patterns)) return false
const lowerText = text.toLowerCase()
return patterns.some(p => lowerText.includes(p.toLowerCase()))
}
function assessComplexity(text, indicators) {
let score = 0
for (const [level, config] of Object.entries(indicators)) {
if (config.patterns) {
for (const [category, patternConfig] of Object.entries(config.patterns)) {
if (matchesAnyPattern(text, patternConfig.keywords)) {
score += patternConfig.weight || 1
}
}
}
}
if (score >= indicators.high.score_threshold) return 'high'
if (score >= indicators.medium.score_threshold) return 'medium'
return 'low'
}
function detectToolPreference(text, triggers) {
for (const [tool, config] of Object.entries(triggers)) {
// Check explicit triggers
if (matchesAnyPattern(text, config.explicit)) return tool
// Check semantic triggers
if (matchesAnyPattern(text, config.semantic)) return tool
}
return null
}
```
### Phase 2: Chain Selection
```javascript
// Load workflow chains index
const chains = JSON.parse(Read('.claude/skills/ccw/index/workflow-chains.json'))
function selectChain(analysis) {
const { intent, complexity } = analysis
// Map intent type (from intent-rules.json) to chain ID (from workflow-chains.json)
const chainMapping = {
'bugfix': 'bugfix',
'issue_batch': 'issue', // intent-rules.json key → chains.json chain ID
'exploration': 'full',
'ui_design': 'ui', // intent-rules.json key → chains.json chain ID
'tdd': 'tdd',
'review': 'review-fix',
'documentation': 'docs', // intent-rules.json key → chains.json chain ID
'feature': null // Use complexity fallback
}
let chainId = chainMapping[intent.type]
// Fallback to complexity-based selection
if (!chainId) {
chainId = chains.chain_selection_rules.complexity_fallback[complexity]
}
const chain = chains.chains[chainId]
// Handle variants
let steps = chain.steps
if (chain.variants) {
const variant = intent.variant || Object.keys(chain.variants)[0]
steps = chain.variants[variant].steps
}
return {
id: chainId,
name: chain.name,
description: chain.description,
steps,
complexity: chain.complexity,
estimated_time: chain.estimated_time
}
}
```
### Phase 3: User Confirmation
```javascript
function confirmChain(selectedChain, analysis) {
// Skip confirmation for simple chains
if (selectedChain.steps.length <= 2 && analysis.complexity === 'low') {
return selectedChain
}
console.log(`
## CCW Workflow Selection
**Task**: ${analysis.text.substring(0, 80)}...
**Intent**: ${analysis.intent.type}${analysis.intent.variant ? ` (${analysis.intent.variant})` : ''}
**Complexity**: ${analysis.complexity}
**Selected Chain**: ${selectedChain.name}
**Description**: ${selectedChain.description}
**Estimated Time**: ${selectedChain.estimated_time}
**Steps**:
${selectedChain.steps.map((s, i) => `${i + 1}. ${s.command}${s.optional ? ' (optional)' : ''}`).join('\n')}
`)
const response = AskUserQuestion({
questions: [{
question: `Proceed with ${selectedChain.name}?`,
header: "Confirm",
multiSelect: false,
options: [
{ label: "Proceed", description: `Execute ${selectedChain.steps.length} steps` },
{ label: "Rapid", description: "Use lite-plan → lite-execute" },
{ label: "Full", description: "Use brainstorm → plan → execute" },
{ label: "Manual", description: "Specify commands manually" }
]
}]
})
// Handle alternative selection
if (response.Confirm === 'Rapid') {
return selectChain({ intent: { type: 'feature' }, complexity: 'low' })
}
if (response.Confirm === 'Full') {
// Normalize via selectChain so the result has the same shape (id, steps, ...)
return selectChain({ intent: { type: 'exploration' }, complexity: 'high' })
}
if (response.Confirm === 'Manual') {
return null // User will specify
}
return selectedChain
}
```
### Phase 4: TODO Tracking Setup
```javascript
function setupTodoTracking(chain, analysis) {
const todos = chain.steps.map((step, index) => ({
content: `[${index + 1}/${chain.steps.length}] ${step.command}`,
status: index === 0 ? 'in_progress' : 'pending',
activeForm: `Executing ${step.command}`
}))
// Add header todo
todos.unshift({
content: `CCW: ${chain.name} (${chain.steps.length} steps)`,
status: 'in_progress',
activeForm: `Running ${chain.name} workflow`
})
TodoWrite({ todos })
return {
chain,
currentStep: 0,
todos
}
}
```
### Phase 5: Execution Loop
```javascript
async function executeChain(execution, analysis) {
const { chain, todos } = execution
let currentStep = 0
while (currentStep < chain.steps.length) {
const step = chain.steps[currentStep]
// Update TODO: mark current as in_progress
const updatedTodos = todos.map((t, i) => ({
...t,
status: i === 0
? 'in_progress'
: i === currentStep + 1
? 'in_progress'
: i <= currentStep
? 'completed'
: 'pending'
}))
TodoWrite({ todos: updatedTodos })
console.log(`\n### Step ${currentStep + 1}/${chain.steps.length}: ${step.command}\n`)
// Check for confirmation requirement
if (step.confirm_before) {
const proceed = AskUserQuestion({
questions: [{
question: `Ready to execute ${step.command}?`,
header: "Step",
multiSelect: false,
options: [
{ label: "Execute", description: "Run this step" },
{ label: "Skip", description: "Skip to next step" },
{ label: "Abort", description: "Stop workflow" }
]
}]
})
if (proceed.Step === 'Skip') {
currentStep++
continue
}
if (proceed.Step === 'Abort') {
break
}
}
// Execute the command
const args = analysis.text
SlashCommand(step.command, { args })
// Mark step as completed
updatedTodos[currentStep + 1].status = 'completed'
TodoWrite({ todos: updatedTodos })
currentStep++
// Check auto_continue
if (!step.auto_continue && currentStep < chain.steps.length) {
console.log(`
Step completed. Next: ${chain.steps[currentStep].command}
Type "continue" to proceed or specify different action.
`)
// Wait for user input before continuing
break
}
}
// Final status
if (currentStep >= chain.steps.length) {
const finalTodos = todos.map(t => ({ ...t, status: 'completed' }))
TodoWrite({ todos: finalTodos })
console.log(`\n${chain.name} workflow completed (${chain.steps.length} steps)`)
}
return { completed: currentStep, total: chain.steps.length }
}
```
## Main Orchestration Entry
```javascript
async function ccwOrchestrate(userInput) {
console.log('## CCW Orchestrator\n')
// Phase 1: Analyze input
const analysis = analyzeInput(userInput)
// Handle explicit command passthrough
if (analysis.passthrough) {
console.log(`Direct command: ${analysis.command}`)
return SlashCommand(analysis.command)
}
// Phase 2: Select chain
const selectedChain = selectChain(analysis)
// Phase 3: Confirm (for complex workflows)
const confirmedChain = confirmChain(selectedChain, analysis)
if (!confirmedChain) {
console.log('Manual mode selected. Specify commands directly.')
return
}
// Phase 4: Setup TODO tracking
const execution = setupTodoTracking(confirmedChain, analysis)
// Phase 5: Execute
const result = await executeChain(execution, analysis)
return result
}
```
## Decision Matrix
| Intent | Complexity | Chain | Steps |
|--------|------------|-------|-------|
| bugfix (standard) | * | bugfix | lite-fix |
| bugfix (hotfix) | * | bugfix | lite-fix --hotfix |
| issue | * | issue | plan → queue → execute |
| exploration | * | full | brainstorm → plan → execute |
| ui (explore) | * | ui | ui-design:explore → sync → plan → execute |
| ui (imitate) | * | ui | ui-design:imitate → sync → plan → execute |
| tdd | * | tdd | tdd-plan → execute → tdd-verify |
| review | * | review-fix | review-session-cycle → review-fix |
| docs | low | docs | update-related |
| docs | medium+ | docs | docs → execute |
| feature | low | rapid | lite-plan → lite-execute |
| feature | medium | coupled | plan → verify → execute |
| feature | high | full | brainstorm → plan → execute |
## Continuation Commands
After each step pause:
| User Input | Action |
|------------|--------|
| `continue` | Execute next step |
| `skip` | Skip current step |
| `abort` | Stop workflow |
| `/workflow:*` | Execute specific command |
| Natural language | Re-analyze and potentially switch chains |
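A sketch of how these continuation inputs could be routed; the handler itself is illustrative, while `executeChain`, `SlashCommand`, and `ccwOrchestrate` are the functions defined above:
```javascript
// Illustrative router for the continuation table above.
function handleContinuation(input, execution, analysis) {
  const text = input.trim().toLowerCase()
  if (text === 'continue') return executeChain(execution, analysis)
  if (text === 'skip') {
    execution.currentStep++ // assumed resume point tracked on the execution object
    return executeChain(execution, analysis)
  }
  if (text === 'abort') return { aborted: true }
  if (input.startsWith('/')) return SlashCommand(input)
  // Natural language: re-analyze and potentially switch chains
  return ccwOrchestrate(input)
}
```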

View File

@@ -1,336 +0,0 @@
# Intent Classification Specification
CCW intent classification specification: defines how to recognize task intent from user input and select the optimal workflow.
## Classification Hierarchy
```
Intent Classification
├── Priority 1: Explicit Commands
│ └── /workflow:*, /issue:*, /memory:*, /task:*
├── Priority 2: Bug Keywords
│ ├── Hotfix: urgent + bug keywords
│ └── Standard: bug keywords only
├── Priority 3: Issue Batch
│ └── Multiple + fix keywords
├── Priority 4: Exploration
│ └── Uncertainty keywords
├── Priority 5: UI/Design
│ └── Visual/component keywords
└── Priority 6: Complexity Fallback
├── High → Coupled
├── Medium → Rapid
└── Low → Rapid
```
## Keyword Patterns
### Bug Detection
```javascript
const BUG_PATTERNS = {
core: /\b(fix|bug|error|issue|crash|broken|fail|wrong|incorrect|修复|报错|错误|问题|异常|崩溃|失败)\b/i,
urgency: /\b(hotfix|urgent|production|critical|emergency|asap|immediately|紧急|生产|线上|马上|立即)\b/i,
symptoms: /\b(not working|doesn't work|can't|cannot|won't|stopped|stopped working|无法|不能|不工作)\b/i,
errors: /\b(\d{3}\s*error|exception|stack\s*trace|undefined|null\s*pointer|timeout)\b/i
}
function detectBug(text) {
const isBug = BUG_PATTERNS.core.test(text) || BUG_PATTERNS.symptoms.test(text)
const isUrgent = BUG_PATTERNS.urgency.test(text)
const hasError = BUG_PATTERNS.errors.test(text)
if (!isBug && !hasError) return null
return {
type: 'bugfix',
mode: isUrgent ? 'hotfix' : 'standard',
confidence: (isBug && hasError) ? 'high' : 'medium'
}
}
```
### Issue Batch Detection
```javascript
const ISSUE_PATTERNS = {
batch: /\b(issues?|batch|queue|multiple|several|all|多个|批量|一系列|所有|这些)\b/i,
action: /\b(fix|resolve|handle|process|处理|解决|修复)\b/i,
source: /\b(github|jira|linear|backlog|todo|待办)\b/i
}
function detectIssueBatch(text) {
const hasBatch = ISSUE_PATTERNS.batch.test(text)
const hasAction = ISSUE_PATTERNS.action.test(text)
const hasSource = ISSUE_PATTERNS.source.test(text)
if (hasBatch && hasAction) {
return {
type: 'issue',
confidence: hasSource ? 'high' : 'medium'
}
}
return null
}
```
### Exploration Detection
```javascript
const EXPLORATION_PATTERNS = {
uncertainty: /\b(不确定|不知道|not sure|unsure|how to|怎么|如何|what if|should i|could i|是否应该)\b/i,
exploration: /\b(explore|research|investigate|分析|研究|调研|评估|探索|了解)\b/i,
options: /\b(options|alternatives|approaches|方案|选择|方向|可能性)\b/i,
questions: /\b(what|which|how|why|什么|哪个|怎样|为什么)\b.*\?/i
}
function detectExploration(text) {
const hasUncertainty = EXPLORATION_PATTERNS.uncertainty.test(text)
const hasExploration = EXPLORATION_PATTERNS.exploration.test(text)
const hasOptions = EXPLORATION_PATTERNS.options.test(text)
const hasQuestion = EXPLORATION_PATTERNS.questions.test(text)
const score = [hasUncertainty, hasExploration, hasOptions, hasQuestion].filter(Boolean).length
if (score >= 2 || hasUncertainty) {
return {
type: 'exploration',
confidence: score >= 3 ? 'high' : 'medium'
}
}
return null
}
```
### UI/Design Detection
```javascript
const UI_PATTERNS = {
components: /\b(ui|界面|component|组件|button|按钮|form|表单|modal|弹窗|dialog|对话框)\b/i,
design: /\b(design|设计|style|样式|layout|布局|theme|主题|color|颜色)\b/i,
visual: /\b(visual|视觉|animation|动画|responsive|响应式|mobile|移动端)\b/i,
frontend: /\b(frontend|前端|react|vue|angular|css|html|page|页面)\b/i
}
function detectUI(text) {
const hasComponents = UI_PATTERNS.components.test(text)
const hasDesign = UI_PATTERNS.design.test(text)
const hasVisual = UI_PATTERNS.visual.test(text)
const hasFrontend = UI_PATTERNS.frontend.test(text)
const score = [hasComponents, hasDesign, hasVisual, hasFrontend].filter(Boolean).length
if (score >= 2) {
return {
type: 'ui',
hasReference: /参考|reference|based on|像|like|模仿|imitate/.test(text),
confidence: score >= 3 ? 'high' : 'medium'
}
}
return null
}
```
## Complexity Assessment
### Indicators
```javascript
const COMPLEXITY_INDICATORS = {
high: {
patterns: [
/\b(refactor|重构|restructure|重新组织)\b/i,
/\b(migrate|迁移|upgrade|升级|convert|转换)\b/i,
/\b(architect|架构|system|系统|infrastructure|基础设施)\b/i,
/\b(entire|整个|complete|完整|all\s+modules?|所有模块)\b/i,
/\b(security|安全|scale|扩展|performance\s+critical|性能关键)\b/i,
/\b(distributed|分布式|microservice|微服务|cluster|集群)\b/i
],
weight: 2
},
medium: {
patterns: [
/\b(integrate|集成|connect|连接|link|链接)\b/i,
/\b(api|database|数据库|service|服务|endpoint|接口)\b/i,
/\b(test|测试|validate|验证|coverage|覆盖)\b/i,
/\b(multiple\s+files?|多个文件|several\s+components?|几个组件)\b/i,
/\b(authentication|认证|authorization|授权)\b/i
],
weight: 1
},
low: {
patterns: [
/\b(add|添加|create|创建|simple|简单)\b/i,
/\b(update|更新|modify|修改|change|改变)\b/i,
/\b(single|单个|one|一个|small|小)\b/i,
/\b(comment|注释|log|日志|print|打印)\b/i
],
weight: -1
}
}
function assessComplexity(text) {
let score = 0
for (const [level, config] of Object.entries(COMPLEXITY_INDICATORS)) {
for (const pattern of config.patterns) {
if (pattern.test(text)) {
score += config.weight
}
}
}
// File count indicator
const fileMatches = text.match(/\b\d+\s*(files?|文件)/i)
if (fileMatches) {
const count = parseInt(fileMatches[0])
if (count > 10) score += 2
else if (count > 5) score += 1
}
// Module count indicator
const moduleMatches = text.match(/\b\d+\s*(modules?|模块)/i)
if (moduleMatches) {
const count = parseInt(moduleMatches[0])
if (count > 3) score += 2
else if (count > 1) score += 1
}
if (score >= 4) return 'high'
if (score >= 2) return 'medium'
return 'low'
}
```
## Workflow Selection Matrix
| Intent | Complexity | Workflow | Commands |
|--------|------------|----------|----------|
| bugfix (hotfix) | * | bugfix | `lite-fix --hotfix` |
| bugfix (standard) | * | bugfix | `lite-fix` |
| issue | * | issue | `issue:plan → queue → execute` |
| exploration | * | full | `brainstorm → plan → execute` |
| ui (reference) | * | ui | `ui-design:imitate-auto → plan` |
| ui (explore) | * | ui | `ui-design:explore-auto → plan` |
| feature | high | coupled | `plan → verify → execute` |
| feature | medium | rapid | `lite-plan → lite-execute` |
| feature | low | rapid | `lite-plan → lite-execute` |
## Confidence Levels
| Level | Description | Action |
|-------|-------------|--------|
| **high** | Multiple strong indicators match | Direct dispatch |
| **medium** | Some indicators match | Confirm with user |
| **low** | Fallback classification | Always confirm |
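In code, the table reduces to a small dispatch; `runWorkflow`, `confirmThenRun`, and `alwaysConfirmThenRun` are hypothetical helpers used only to illustrate the policy:
```javascript
// Illustrative dispatch over confidence levels.
function dispatchByConfidence(classification) {
  switch (classification.confidence) {
    case 'high':   return runWorkflow(classification)          // direct dispatch
    case 'medium': return confirmThenRun(classification)       // confirm with user
    default:       return alwaysConfirmThenRun(classification) // fallback: always confirm
  }
}
```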
## Tool Preference Detection
```javascript
const TOOL_PREFERENCES = {
gemini: {
pattern: /用\s*gemini|gemini\s*(分析|理解|设计)|让\s*gemini/i,
capability: 'analysis'
},
qwen: {
pattern: /用\s*qwen|qwen\s*(分析|评估)|让\s*qwen/i,
capability: 'analysis'
},
codex: {
pattern: /用\s*codex|codex\s*(实现|重构|修复)|让\s*codex/i,
capability: 'implementation'
}
}
function detectToolPreference(text) {
for (const [tool, config] of Object.entries(TOOL_PREFERENCES)) {
if (config.pattern.test(text)) {
return { tool, capability: config.capability }
}
}
return null
}
```
## Multi-Tool Collaboration Detection
```javascript
const COLLABORATION_PATTERNS = {
sequential: /先.*(分析|理解).*然后.*(实现|重构)|分析.*后.*实现/i,
parallel: /(同时|并行).*(分析|实现)|一边.*一边/i,
hybrid: /(分析|设计).*和.*(实现|测试).*分开/i
}
function detectCollaboration(text) {
if (COLLABORATION_PATTERNS.sequential.test(text)) {
return { mode: 'sequential', description: 'Analysis first, then implementation' }
}
if (COLLABORATION_PATTERNS.parallel.test(text)) {
return { mode: 'parallel', description: 'Concurrent analysis and implementation' }
}
if (COLLABORATION_PATTERNS.hybrid.test(text)) {
return { mode: 'hybrid', description: 'Mixed parallel and sequential' }
}
return null
}
```
## Classification Pipeline
```javascript
function classify(userInput) {
const text = userInput.trim()
// Step 1: Check explicit commands
if (/^\/(?:workflow|issue|memory|task):/.test(text)) {
return { type: 'explicit', command: text }
}
// Step 2: Priority-based classification
const bugResult = detectBug(text)
if (bugResult) return bugResult
const issueResult = detectIssueBatch(text)
if (issueResult) return issueResult
const explorationResult = detectExploration(text)
if (explorationResult) return explorationResult
const uiResult = detectUI(text)
if (uiResult) return uiResult
// Step 3: Complexity-based fallback
const complexity = assessComplexity(text)
return {
type: 'feature',
complexity,
workflow: complexity === 'high' ? 'coupled' : 'rapid',
confidence: 'low'
}
}
```
## Examples
### Input → Classification
| Input | Classification | Workflow |
|-------|----------------|----------|
| "用户登录失败401错误" | bugfix/standard | lite-fix |
| "紧急:支付网关挂了" | bugfix/hotfix | lite-fix --hotfix |
| "批量处理这些 GitHub issues" | issue | issue:plan → queue |
| "不确定要怎么设计缓存系统" | exploration | brainstorm → plan |
| "添加一个深色模式切换按钮" | ui | ui-design → plan |
| "重构整个认证模块" | feature/high | plan → verify |
| "添加用户头像功能" | feature/low | lite-plan |

View File

@@ -0,0 +1,303 @@
---
name: skill-tuning
description: Universal skill diagnosis and optimization tool. Detect and fix skill execution issues including context explosion, long-tail forgetting, data flow disruption, and agent coordination failures. Supports Gemini CLI for deep analysis. Triggers on "skill tuning", "tune skill", "skill diagnosis", "optimize skill", "skill debug".
allowed-tools: Task, AskUserQuestion, Read, Write, Bash, Glob, Grep, mcp__ace-tool__search_context
---
# Skill Tuning
Universal skill diagnosis and optimization tool that identifies and resolves skill execution problems through iterative multi-agent analysis.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────────────────┐
│ Skill Tuning Architecture (Autonomous Mode + Gemini CLI) │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ⚠️ Phase 0: Specification Study → read specs + understand the target skill's │
│             structure (mandatory prerequisite) │
│ ↓ │
│ ┌───────────────────────────────────────────────────────────────────────┐ │
│ │ Orchestrator (state-driven decisions) │ │
│ │ Read diagnosis state → choose next action → execute → update state → loop until done │ │
│ └───────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ┌────────────┬───────────┼───────────┬────────────┬────────────┐ │
│ ↓ ↓ ↓ ↓ ↓ ↓ │
│ ┌──────┐ ┌────────────┐ ┌─────────┐ ┌────────┐ ┌────────┐ ┌─────────┐ │
│ │ Init │→ │  Analyze   │→ │Diagnose │ │Diagnose│ │Diagnose│ │ Gemini  │ │
│ │      │  │Requirements│  │ Context │ │ Memory │ │DataFlow│ │Analysis │ │
│ └──────┘ └────────────┘ └─────────┘ └────────┘ └────────┘ └─────────┘ │
│ │ │ │ │ │ │
│ │ └───────────┴───────────┴────────────┘ │
│ ↓ │
│ ┌───────────────────────────────────────────────────────────────────────┐ │
│ │ Requirement Analysis (NEW) │ │
│ │ • Phase 1: Dimension decomposition (Gemini CLI) - one description → many focus dimensions │ │
│ │ • Phase 2: Spec matching - each dimension → taxonomy + strategy │ │
│ │ • Phase 3: Coverage assessment - "a fix strategy exists" = requirement satisfied │ │
│ │ • Phase 4: Ambiguity detection - flag ambiguous descriptions, ask for clarification │ │
│ └───────────────────────────────────────────────────────────────────────┘ │
│ ↓ │
│ ┌──────────────────┐ │
│ │ Apply Fixes + │ │
│ │ Verify Results │ │
│ └──────────────────┘ │
│ │
│ ┌───────────────────────────────────────────────────────────────────────┐ │
│ │ Gemini CLI Integration │ │
│ │ Invoke the Gemini CLI on demand for deep analysis, driven by user needs: │ │
│ │ • Requirement decomposition │ │
│ │ • Complex problem analysis (prompt engineering, architecture review) │ │
│ │ • Code pattern recognition (pattern matching, anti-pattern detection) │ │
│ │ • Fix strategy generation (fix generation, refactoring suggestions) │ │
│ └───────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
```
## Problem Domain
Based on comprehensive analysis, skill-tuning addresses **core skill issues** and **general optimization areas**:
### Core Skill Issues (auto-detected)
| Priority | Problem | Root Cause | Solution Strategy |
|----------|---------|------------|-------------------|
| **P0** | Authoring Principles Violation | Intermediate file storage, state bloat, file-based relaying | eliminate_intermediate_files, minimize_state, context_passing |
| **P1** | Data Flow Disruption | Scattered state, inconsistent formats | state_centralization, schema_enforcement |
| **P2** | Agent Coordination | Fragile call chains, merge complexity | error_wrapping, result_validation |
| **P3** | Context Explosion | Token accumulation, multi-turn bloat | sliding_window, context_summarization |
| **P4** | Long-tail Forgetting | Early constraint loss | constraint_injection, checkpoint_restore |
| **P5** | Token Consumption | Verbose prompts, excessive state, redundant I/O | prompt_compression, lazy_loading, output_minimization |
### General Optimization Areas (on-demand analysis via Gemini CLI)
| Category | Issues | Gemini Analysis Scope |
|----------|--------|----------------------|
| **Prompt Engineering** | Vague instructions, inconsistent output formats, hallucination risk | Prompt optimization, structured-output design |
| **Architecture** | Poor phase decomposition, tangled dependencies, weak extensibility | Architecture review, modularization advice |
| **Performance** | Slow execution, high token consumption, redundant computation | Performance analysis, caching strategies |
| **Error Handling** | Poor error recovery, no fallback strategy, insufficient logging | Fault-tolerance design, observability improvements |
| **Output Quality** | Unstable output, format drift, quality fluctuations | Quality gates, validation mechanisms |
| **User Experience** | Clunky interaction, unclear feedback, invisible progress | UX optimization, progress tracking |
## Key Design Principles
1. **Problem-First Diagnosis**: Systematic identification before any fix attempt
2. **Data-Driven Analysis**: Record execution traces, token counts, state snapshots
3. **Iterative Refinement**: Multiple tuning rounds until quality gates pass
4. **Non-Destructive**: All changes are reversible with backup checkpoints
5. **Agent Coordination**: Use specialized sub-agents for each diagnosis type
6. **Gemini CLI On-Demand**: Deep analysis via CLI for complex/custom issues
---
## Gemini CLI Integration
Invoke the Gemini CLI on demand for deep analysis, driven by user needs.
### Trigger Conditions
| Condition | Action | CLI Mode |
|-----------|--------|----------|
| User describes a complex problem | Invoke Gemini to analyze the root cause | `analysis` |
| Auto-diagnosis finds a critical issue | Request deep analysis for confirmation | `analysis` |
| User requests an architecture review | Run the architecture analysis | `analysis` |
| Fix code needs to be generated | Generate a fix proposal | `write` |
| Standard strategies don't apply | Request a customized strategy | `analysis` |
### CLI Command Template
```bash
ccw cli -p "
PURPOSE: ${purpose}
TASK: ${task_steps}
MODE: ${mode}
CONTEXT: @${skill_path}/**/*
EXPECTED: ${expected_output}
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/${mode}-protocol.md) | ${constraints}
" --tool gemini --mode ${mode} --cd ${skill_path}
```
### Analysis Types
#### 1. Problem Root Cause Analysis
```bash
ccw cli -p "
PURPOSE: Identify root cause of skill execution issue: ${user_issue_description}
TASK: • Analyze skill structure and phase flow • Identify anti-patterns • Trace data flow issues
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON with { root_causes: [], patterns_found: [], recommendations: [] }
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Focus on execution flow
" --tool gemini --mode analysis
```
#### 2. Architecture Review
```bash
ccw cli -p "
PURPOSE: Review skill architecture for scalability and maintainability
TASK: • Evaluate phase decomposition • Check state management patterns • Assess agent coordination
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: Architecture assessment with improvement recommendations
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Focus on modularity
" --tool gemini --mode analysis
```
#### 3. Fix Strategy Generation
```bash
ccw cli -p "
PURPOSE: Generate fix strategy for issue: ${issue_id} - ${issue_description}
TASK: • Analyze issue context • Design fix approach • Generate implementation plan
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON with { strategy: string, changes: [], verification_steps: [] }
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Minimal invasive changes
" --tool gemini --mode analysis
```
---
## Mandatory Prerequisites
> **CRITICAL**: Read these documents before executing any action.
### Core Specs (Required)
| Document | Purpose | Priority |
|----------|---------|----------|
| [specs/skill-authoring-principles.md](specs/skill-authoring-principles.md) | **Primary principles: keep it concise and efficient, eliminate storage, pass context forward** | **P0** |
| [specs/problem-taxonomy.md](specs/problem-taxonomy.md) | Problem classification and detection patterns | **P0** |
| [specs/tuning-strategies.md](specs/tuning-strategies.md) | Fix strategies for each problem type | **P0** |
| [specs/dimension-mapping.md](specs/dimension-mapping.md) | Dimension to Spec mapping rules | **P0** |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality thresholds and verification criteria | P1 |
### Templates (Reference)
| Document | Purpose |
|----------|---------|
| [templates/diagnosis-report.md](templates/diagnosis-report.md) | Diagnosis report structure |
| [templates/fix-proposal.md](templates/fix-proposal.md) | Fix proposal format |
---
## Execution Flow
```
┌─────────────────────────────────────────────────────────────────────────────┐
│ Phase 0: Specification Study (mandatory prerequisite - do not skip) │
│ → Read: specs/problem-taxonomy.md (problem taxonomy) │
│ → Read: specs/tuning-strategies.md (tuning strategies) │
│ → Read: specs/dimension-mapping.md (dimension mapping rules) │
│ → Read: Target skill's SKILL.md and phases/*.md │
│ → Output: internalized specs and an understanding of the target skill's structure │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-init: Initialize Tuning Session │
│ → Create work directory: .workflow/.scratchpad/skill-tuning-{timestamp} │
│ → Initialize state.json with target skill info │
│ → Create backup of target skill files │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-analyze-requirements: Requirement Analysis │
│ → Phase 1: Dimension decomposition (Gemini CLI) - one description → many focus dimensions │
│ → Phase 2: Spec matching - each dimension → taxonomy + strategy │
│ → Phase 3: Coverage assessment - "a fix strategy exists" = requirement satisfied │
│ → Phase 4: Ambiguity detection - flag ambiguous descriptions, ask for clarification │
│ → Output: state.json (requirement_analysis field) │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-diagnose-*: Diagnosis Actions (context/memory/dataflow/agent/docs/ │
│ token_consumption) │
│ → Execute pattern-based detection for each category │
│ → Output: state.json (diagnosis.{category} field) │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-generate-report: Consolidated Report │
│ → Generate markdown summary from state.diagnosis │
│ → Prioritize issues by severity │
│ → Output: state.json (final_report field) │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-propose-fixes: Fix Proposal Generation │
│ → Generate fix strategies for each issue │
│ → Create implementation plan │
│ → Output: state.json (proposed_fixes field) │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-apply-fix: Apply Selected Fix │
│ → User selects fix to apply │
│ → Execute fix with backup │
│ → Update state with fix result │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-verify: Verification │
│ → Re-run affected diagnosis │
│ → Check quality gates │
│ → Update iteration count │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-complete: Finalization │
│ → Set status='completed' │
│ → Final report already in state.json (final_report field) │
│ → Output: state.json (final) │
└─────────────────────────────────────────────────────────────────────────────┘
```
## Directory Setup
```javascript
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const workDir = `.workflow/.scratchpad/skill-tuning-${timestamp}`;
// Simplified: Only backups dir needed, diagnosis results go into state.json
Bash(`mkdir -p "${workDir}/backups"`);
```
## Output Structure
```
.workflow/.scratchpad/skill-tuning-{timestamp}/
├── state.json # Single source of truth (all results consolidated)
│ ├── diagnosis.* # All diagnosis results embedded
│ ├── issues[] # Found issues
│ ├── proposed_fixes[] # Fix proposals
│ └── final_report # Markdown summary (on completion)
└── backups/
└── {skill-name}-backup/ # Original skill files backup
```
> **Token Optimization**: All outputs consolidated into state.json. No separate diagnosis files or report files.
## State Schema
See [phases/state-schema.md](phases/state-schema.md) for the detailed state structure.
Core state fields:
- `status`: workflow status (pending/running/completed/failed)
- `target_skill`: target skill info
- `diagnosis`: per-dimension diagnosis results
- `issues`: list of discovered issues
- `proposed_fixes`: proposed fix plans
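A sketch of what state.json might look like with these fields populated; the nested shapes are assumptions inferred from the actions above, not the authoritative schema:
```javascript
// Hypothetical state.json shape (see phases/state-schema.md for the real schema).
const exampleState = {
  status: 'running', // pending | running | completed | failed
  target_skill: { name: 'my-skill', path: '.claude/skills/my-skill' }, // assumed shape
  diagnosis: { context: null, memory: null, dataflow: null, agent: null },
  issues: [],         // filled by action-diagnose-*
  proposed_fixes: []  // filled by action-propose-fixes
}
```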
## Reference Documents
| Document | Purpose |
|----------|---------|
| [phases/orchestrator.md](phases/orchestrator.md) | Orchestrator decision logic |
| [phases/state-schema.md](phases/state-schema.md) | State structure definition |
| [phases/actions/action-init.md](phases/actions/action-init.md) | Initialize tuning session |
| [phases/actions/action-analyze-requirements.md](phases/actions/action-analyze-requirements.md) | Requirement analysis (NEW) |
| [phases/actions/action-diagnose-context.md](phases/actions/action-diagnose-context.md) | Context explosion diagnosis |
| [phases/actions/action-diagnose-memory.md](phases/actions/action-diagnose-memory.md) | Long-tail forgetting diagnosis |
| [phases/actions/action-diagnose-dataflow.md](phases/actions/action-diagnose-dataflow.md) | Data flow diagnosis |
| [phases/actions/action-diagnose-agent.md](phases/actions/action-diagnose-agent.md) | Agent coordination diagnosis |
| [phases/actions/action-diagnose-docs.md](phases/actions/action-diagnose-docs.md) | Documentation structure diagnosis |
| [phases/actions/action-diagnose-token-consumption.md](phases/actions/action-diagnose-token-consumption.md) | Token consumption diagnosis |
| [phases/actions/action-generate-report.md](phases/actions/action-generate-report.md) | Report generation |
| [phases/actions/action-propose-fixes.md](phases/actions/action-propose-fixes.md) | Fix proposal |
| [phases/actions/action-apply-fix.md](phases/actions/action-apply-fix.md) | Fix application |
| [phases/actions/action-verify.md](phases/actions/action-verify.md) | Verification |
| [phases/actions/action-complete.md](phases/actions/action-complete.md) | Finalization |
| [specs/problem-taxonomy.md](specs/problem-taxonomy.md) | Problem classification |
| [specs/tuning-strategies.md](specs/tuning-strategies.md) | Fix strategies |
| [specs/dimension-mapping.md](specs/dimension-mapping.md) | Dimension to Spec mapping (NEW) |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality criteria |

View File

@@ -0,0 +1,164 @@
# Action: Abort
Abort the tuning session due to unrecoverable errors.
## Purpose
- Safely terminate on critical failures
- Preserve diagnostic information for debugging
- Ensure backup remains available
- Notify user of failure reason
## Preconditions
- [ ] state.error_count >= state.max_errors
- [ ] OR critical failure detected
## Execution
```javascript
async function execute(state, workDir) {
console.log('Aborting skill tuning session...');
const errors = state.errors;
const targetSkill = state.target_skill;
// Generate abort report
const abortReport = `# Skill Tuning Aborted
**Target Skill**: ${targetSkill?.name || 'Unknown'}
**Aborted At**: ${new Date().toISOString()}
**Reason**: Too many errors or critical failure
---
## Error Log
${errors.length === 0 ? '_No errors recorded_' :
errors.map((err, i) => `
### Error ${i + 1}
- **Action**: ${err.action}
- **Message**: ${err.message}
- **Time**: ${err.timestamp}
- **Recoverable**: ${err.recoverable ? 'Yes' : 'No'}
`).join('\n')}
---
## Session State at Abort
- **Status**: ${state.status}
- **Iteration Count**: ${state.iteration_count}
- **Completed Actions**: ${state.completed_actions.length}
- **Issues Found**: ${state.issues.length}
- **Fixes Applied**: ${state.applied_fixes.length}
---
## Recovery Options
### Option 1: Restore Original Skill
If any changes were made, restore from backup:
\`\`\`bash
cp -r "${state.backup_dir}/${targetSkill?.name || 'backup'}-backup"/* "${targetSkill?.path || 'target'}/"
\`\`\`
### Option 2: Resume from Last State
The session state is preserved at:
\`${workDir}/state.json\`
To resume:
1. Fix the underlying issue
2. Reset error_count in state.json
3. Re-run skill-tuning with --resume flag
### Option 3: Manual Investigation
Review the following files:
- Diagnosis results: \`${workDir}/diagnosis/*.json\`
- Error log: \`${workDir}/errors.json\`
- State snapshot: \`${workDir}/state.json\`
---
## Diagnostic Information
### Last Successful Action
${state.completed_actions.length > 0 ? state.completed_actions[state.completed_actions.length - 1] : 'None'}
### Current Action When Failed
${state.current_action || 'Unknown'}
### Partial Diagnosis Results
- Context: ${state.diagnosis.context ? 'Completed' : 'Not completed'}
- Memory: ${state.diagnosis.memory ? 'Completed' : 'Not completed'}
- Data Flow: ${state.diagnosis.dataflow ? 'Completed' : 'Not completed'}
- Agent: ${state.diagnosis.agent ? 'Completed' : 'Not completed'}
---
*Skill tuning aborted - please review errors and retry*
`;
// Write abort report
Write(`${workDir}/abort-report.md`, abortReport);
// Save error log
Write(`${workDir}/errors.json`, JSON.stringify(errors, null, 2));
  // Notify user
  try {
    const response = await AskUserQuestion({
      questions: [{
        question: `Skill tuning aborted due to ${errors.length} errors. Would you like to restore the original skill?`,
        header: 'Restore',
        multiSelect: false,
        options: [
          { label: 'Yes, restore', description: 'Restore original skill from backup' },
          { label: 'No, keep changes', description: 'Keep any partial changes made' }
        ]
      }]
    });
    if (response['Restore'] === 'Yes, restore') {
      // Restore from backup
      if (state.backup_dir && targetSkill?.path) {
        Bash(`cp -r "${state.backup_dir}/${targetSkill.name}-backup"/* "${targetSkill.path}/"`);
        console.log('Original skill restored from backup.');
      }
    }
  } catch (e) {
    // User cancelled, don't restore
  }
return {
stateUpdates: {
status: 'failed',
completed_at: new Date().toISOString()
},
outputFiles: [`${workDir}/abort-report.md`, `${workDir}/errors.json`],
summary: `Tuning aborted: ${errors.length} errors. Check abort-report.md for details.`
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
status: 'failed',
completed_at: '<timestamp>'
}
};
```
## Output
- **File**: `abort-report.md`
- **Location**: `${workDir}/abort-report.md`
## Error Handling
This action should not fail - it's the final error handler.
## Next Actions
- None (terminal state)

View File

@@ -0,0 +1,406 @@
# Action: Analyze Requirements
Decompose the user's problem description into multiple analysis dimensions, match each dimension to Specs, evaluate coverage, and detect ambiguities.
## Purpose
- Decompose a single user description into independent dimensions of concern
- Match each dimension to problem-taxonomy (detection) and tuning-strategies (fix)
- Judge whether the requirement is satisfied using "a fix strategy exists" as the criterion
- Detect ambiguities and request user clarification when necessary
## Preconditions
- [ ] `state.status === 'running'`
- [ ] `state.target_skill !== null`
- [ ] `state.completed_actions.includes('action-init')`
- [ ] `!state.completed_actions.includes('action-analyze-requirements')`
## Execution
### Phase 1: Dimension Decomposition (Gemini CLI)
Call Gemini to semantically analyze the user description and decompose it into independent dimensions:
```javascript
async function analyzeDimensions(state, workDir) {
  const prompt = `
PURPOSE: Analyze the user's problem description and decompose it into independent dimensions of concern
TASK:
• Identify the distinct concerns in the user description (each concern should be independent and analyzable on its own)
• Extract keywords for each concern (Chinese or English)
• Infer the likely problem category:
  - context_explosion: context/token related
  - memory_loss: forgetting/lost constraints
  - dataflow_break: state/data flow related
  - agent_failure: agent/subtask related
  - prompt_quality: prompt/output quality related
  - architecture: architecture/structure related
  - performance: performance/efficiency related
  - error_handling: error/exception handling related
  - output_quality: output quality/validation related
  - user_experience: interaction/UX related
• Estimate inference confidence (0-1)
INPUT:
User description: ${state.user_issue_description}
Target skill: ${state.target_skill.name}
Skill structure: ${JSON.stringify(state.target_skill.phases)}
MODE: analysis
CONTEXT: @specs/problem-taxonomy.md @specs/dimension-mapping.md
EXPECTED: JSON (do not include markdown code-fence markers)
{
  "dimensions": [
    {
      "id": "DIM-001",
      "description": "short description of the concern",
      "keywords": ["keyword1", "keyword2"],
      "inferred_category": "problem category",
      "confidence": 0.85,
      "reasoning": "rationale for the inference"
    }
  ],
  "analysis_notes": "overall analysis notes"
}
RULES:
- Dimensions must be independent and non-overlapping
- Inferences below 0.5 confidence should be marked as needing clarification
- If the user description is very vague, extract at least one "general" dimension
`;
const cliCommand = `ccw cli -p "${escapeForShell(prompt)}" --tool gemini --mode analysis --cd "${state.target_skill.path}"`;
  console.log('Phase 1: running Gemini dimension decomposition...');
const result = Bash({
command: cliCommand,
run_in_background: true,
timeout: 300000
});
return result;
}
```
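`escapeForShell` is referenced above but not defined in this file. A minimal sketch, assuming the prompt is interpolated into a double-quoted shell argument:
```javascript
// Hypothetical helper: escape the characters that are special inside
// double quotes in POSIX shells (backslash, double quote, $, backtick).
function escapeForShell(text) {
  return text.replace(/([\\"`$])/g, '\\$1');
}
```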
### Phase 2: Spec Matching
Match detection patterns and fix strategies to each dimension based on the `specs/category-mappings.json` configuration:
```javascript
// Load the centralized mapping configuration
const mappings = JSON.parse(Read('specs/category-mappings.json'));
function matchSpecs(dimensions) {
  return dimensions.map(dim => {
    // Match a taxonomy pattern
    const taxonomyMatch = findTaxonomyMatch(dim.inferred_category);
    // Match a fix strategy
    const strategyMatch = findStrategyMatch(dim.inferred_category);
    // Determine satisfaction (core criterion: a fix strategy exists)
const hasFix = strategyMatch !== null && strategyMatch.strategies.length > 0;
return {
dimension_id: dim.id,
taxonomy_match: taxonomyMatch,
strategy_match: strategyMatch,
has_fix: hasFix,
needs_gemini_analysis: taxonomyMatch === null || mappings.categories[dim.inferred_category]?.needs_gemini_analysis
};
});
}
function findTaxonomyMatch(category) {
const config = mappings.categories[category];
if (!config || config.pattern_ids.length === 0) return null;
return {
category: category,
pattern_ids: config.pattern_ids,
severity_hint: config.severity_hint
};
}
function findStrategyMatch(category) {
const config = mappings.categories[category];
if (!config) {
// Fallback to custom from config
return mappings.fallback;
}
return {
strategies: config.strategies,
risk_levels: config.risk_levels
};
}
```
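The contents of `specs/category-mappings.json` are not shown here; a plausible excerpt, inferred purely from the fields the code above reads (an assumption, not the actual file):
```javascript
// Hypothetical excerpt of specs/category-mappings.json, shown as a JS
// literal; only the fields accessed by matchSpecs are sketched.
const exampleMappings = {
  categories: {
    context_explosion: {
      pattern_ids: ['CTX-001', 'CTX-002'],
      severity_hint: 'high',
      strategies: ['sliding_window', 'summarization'],
      risk_levels: ['low', 'medium'],
      needs_gemini_analysis: false
    }
  },
  fallback: { strategies: ['custom'], risk_levels: ['high'] }
};
```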
### Phase 3: Coverage Evaluation
Evaluate Spec coverage across all dimensions:
```javascript
function evaluateCoverage(specMatches) {
const total = specMatches.length;
const withDetection = specMatches.filter(m => m.taxonomy_match !== null).length;
const withFix = specMatches.filter(m => m.has_fix).length;
const rate = total > 0 ? Math.round((withFix / total) * 100) : 0;
let status;
if (rate >= 80) {
status = 'satisfied';
} else if (rate >= 50) {
status = 'partial';
} else {
status = 'unsatisfied';
}
return {
total_dimensions: total,
with_detection: withDetection,
with_fix_strategy: withFix,
coverage_rate: rate,
status: status
};
}
```
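A worked example of the thresholds: two of three dimensions having a fix strategy lands in the `partial` band.
```javascript
// 2 of 3 dimensions have a fix strategy → round(2/3 × 100) = 67 → 'partial'.
const coverage = evaluateCoverage([
  { dimension_id: 'DIM-001', taxonomy_match: { category: 'context_explosion' }, has_fix: true },
  { dimension_id: 'DIM-002', taxonomy_match: { category: 'memory_loss' }, has_fix: true },
  { dimension_id: 'DIM-003', taxonomy_match: null, has_fix: false }
]);
// → { total_dimensions: 3, with_detection: 2, with_fix_strategy: 2,
//     coverage_rate: 67, status: 'partial' }
```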
### Phase 4: Ambiguity Detection
Identify ambiguities that need user clarification:
```javascript
function detectAmbiguities(dimensions, specMatches) {
const ambiguities = [];
for (const dim of dimensions) {
const match = specMatches.find(m => m.dimension_id === dim.id);
    // Check 1: low confidence (< 0.5)
    if (dim.confidence < 0.5) {
      ambiguities.push({
        dimension_id: dim.id,
        type: 'vague_description',
        description: `Dimension "${dim.description}" is vaguely described; inference confidence is low (${dim.confidence})`,
        possible_interpretations: suggestInterpretations(dim),
        needs_clarification: true
      });
    }
    // Check 2: no matching category
    if (!match || (!match.taxonomy_match && !match.strategy_match)) {
      ambiguities.push({
        dimension_id: dim.id,
        type: 'no_category_match',
        description: `Dimension "${dim.description}" cannot be matched to any known problem category`,
        possible_interpretations: ['custom'],
        needs_clarification: true
      });
    }
    // Check 3: conflicting keywords (may belong to multiple categories)
    if (dim.keywords.length > 3 && hasConflictingKeywords(dim.keywords)) {
      ambiguities.push({
        dimension_id: dim.id,
        type: 'conflicting_keywords',
        description: `The keywords of dimension "${dim.description}" may point to several different problems`,
        possible_interpretations: inferMultipleCategories(dim.keywords),
        needs_clarification: true
      });
    }
}
return ambiguities;
}
function suggestInterpretations(dim) {
  // Recommend likely interpretations based on the mappings configuration
  const categories = Object.keys(mappings.categories).filter(
    cat => cat !== 'authoring_principles_violation' // exclude internal detection category
  );
  return categories.slice(0, 4); // return the 4 most common as options
}
function hasConflictingKeywords(keywords) {
  // Check whether the keywords point in different directions
const categoryHints = keywords.map(k => getKeywordCategoryHint(k));
const uniqueCategories = [...new Set(categoryHints.filter(c => c))];
return uniqueCategories.length > 1;
}
function getKeywordCategoryHint(keyword) {
  // Build a lookup table from mappings.keywords (merging Chinese and English keywords)
const keywordMap = {
...mappings.keywords.chinese,
...mappings.keywords.english
};
return keywordMap[keyword.toLowerCase()];
}
```
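`inferMultipleCategories` is referenced in Check 3 but not defined here; a minimal sketch, assuming it reuses the keyword lookup above:
```javascript
// Hypothetical sketch: return the distinct categories that the
// keywords hint at, via getKeywordCategoryHint defined above.
function inferMultipleCategories(keywords) {
  const hints = keywords.map(k => getKeywordCategoryHint(k)).filter(Boolean);
  return [...new Set(hints)];
}
```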
## User Interaction
If ambiguities requiring clarification are detected, pause and ask the user:
```javascript
async function handleAmbiguities(ambiguities, dimensions) {
const needsClarification = ambiguities.filter(a => a.needs_clarification);
if (needsClarification.length === 0) {
    return null; // no clarification needed
}
const questions = needsClarification.slice(0, 4).map(a => {
const dim = dimensions.find(d => d.id === a.dimension_id);
return {
      question: `Regarding "${dim.description}", what exactly do you mean?`,
header: a.dimension_id,
options: a.possible_interpretations.map(interp => ({
label: getCategoryLabel(interp),
description: getCategoryDescription(interp)
})),
multiSelect: false
};
});
return await AskUserQuestion({ questions });
}
function getCategoryLabel(category) {
  // Load labels from the mappings configuration
return mappings.category_labels_chinese[category] || category;
}
function getCategoryDescription(category) {
  // Load descriptions from the mappings configuration
return mappings.category_descriptions[category] || 'Requires further analysis';
}
```
## Output
### State Updates
```javascript
return {
stateUpdates: {
requirement_analysis: {
status: ambiguities.some(a => a.needs_clarification) ? 'needs_clarification' : 'completed',
analyzed_at: new Date().toISOString(),
dimensions: dimensions,
spec_matches: specMatches,
coverage: coverageResult,
ambiguities: ambiguities
},
    // Automatically refine focus_areas based on the analysis results
focus_areas: deriveOptimalFocusAreas(specMatches)
},
outputFiles: [
`${workDir}/requirement-analysis.json`,
`${workDir}/requirement-analysis.md`
],
summary: generateSummary(dimensions, coverageResult, ambiguities)
};
function deriveOptimalFocusAreas(specMatches) {
const coreCategories = ['context', 'memory', 'dataflow', 'agent'];
const matched = specMatches
.filter(m => m.taxonomy_match !== null)
.map(m => {
      // Map to a diagnosis focus_area
const category = m.taxonomy_match.category;
if (category === 'context_explosion' || category === 'performance') return 'context';
if (category === 'memory_loss') return 'memory';
if (category === 'dataflow_break') return 'dataflow';
if (category === 'agent_failure' || category === 'error_handling') return 'agent';
return null;
})
.filter(f => f && coreCategories.includes(f));
  // Deduplicate
return [...new Set(matched)];
}
function generateSummary(dimensions, coverage, ambiguities) {
const dimCount = dimensions.length;
const coverageStatus = coverage.status;
const ambiguityCount = ambiguities.filter(a => a.needs_clarification).length;
  let summary = `Analysis complete: ${dimCount} dimensions`;
  summary += `, coverage ${coverage.coverage_rate}% (${coverageStatus})`;
  if (ambiguityCount > 0) {
    summary += `, ${ambiguityCount} ambiguities awaiting clarification`;
}
return summary;
}
```
### Output Files
#### requirement-analysis.json
```json
{
"timestamp": "2024-01-01T00:00:00Z",
"target_skill": "skill-name",
"user_description": "原始用户描述",
"dimensions": [...],
"spec_matches": [...],
"coverage": {...},
"ambiguities": [...],
"derived_focus_areas": [...]
}
```
#### requirement-analysis.md
```markdown
# Requirement Analysis Report
## User Description
> ${user_issue_description}
## Dimension Decomposition
| ID | Description | Category | Confidence |
|----|-------------|----------|------------|
| DIM-001 | ... | ... | 0.85 |
## Spec Matching
| Dimension | Detection Patterns | Fix Strategy | Satisfied |
|-----------|--------------------|--------------|-----------|
| DIM-001 | CTX-001,002 | sliding_window | ✓ |
## Coverage Evaluation
- Total dimensions: N
- With detection patterns: M
- With fix strategies: K (satisfaction criterion)
- Coverage rate: X%
- Status: satisfied/partial/unsatisfied
## Ambiguities
(if any)
```
## Error Handling
| Error | Recovery |
|-------|----------|
| Gemini CLI timeout | Retry once; if it still fails, fall back to simplified analysis |
| JSON parse failure | Attempt to repair the JSON or fall back to default dimensions |
| No category matched | Classify everything as custom and trigger deep Gemini analysis |
## Next Actions
- If `requirement_analysis.status === 'completed'`: proceed to `action-diagnose-*`
- If `requirement_analysis.status === 'needs_clarification'`: wait for user clarification, then re-run
- If `coverage.status === 'unsatisfied'`: automatically trigger `action-gemini-analysis` for deep analysis


@@ -0,0 +1,206 @@
# Action: Apply Fix
Apply a selected fix to the target skill with backup and rollback capability.
## Purpose
- Apply fix changes to target skill files
- Create backup before modifications
- Track applied fixes for verification
- Support rollback if needed
## Preconditions
- [ ] state.status === 'running'
- [ ] state.pending_fixes.length > 0
- [ ] state.proposed_fixes contains the fix to apply
## Execution
```javascript
async function execute(state, workDir) {
const pendingFixes = state.pending_fixes;
const proposedFixes = state.proposed_fixes;
const targetPath = state.target_skill.path;
const backupDir = state.backup_dir;
if (pendingFixes.length === 0) {
return {
stateUpdates: {},
outputFiles: [],
summary: 'No pending fixes to apply'
};
}
// Get next fix to apply
const fixId = pendingFixes[0];
const fix = proposedFixes.find(f => f.id === fixId);
if (!fix) {
return {
stateUpdates: {
pending_fixes: pendingFixes.slice(1),
errors: [...state.errors, {
action: 'action-apply-fix',
message: `Fix ${fixId} not found in proposals`,
timestamp: new Date().toISOString(),
recoverable: true
}]
},
outputFiles: [],
summary: `Fix ${fixId} not found, skipping`
};
}
console.log(`Applying fix ${fix.id}: ${fix.description}`);
// Create fix-specific backup
const fixBackupDir = `${backupDir}/before-${fix.id}`;
Bash(`mkdir -p "${fixBackupDir}"`);
const appliedChanges = [];
let success = true;
for (const change of fix.changes) {
try {
// Resolve file path (handle wildcards)
let targetFiles = [];
if (change.file.includes('*')) {
targetFiles = Glob(`${targetPath}/${change.file}`);
} else {
targetFiles = [`${targetPath}/${change.file}`];
}
for (const targetFile of targetFiles) {
// Backup original
const relativePath = targetFile.replace(targetPath + '/', '');
const backupPath = `${fixBackupDir}/${relativePath}`;
if (Glob(targetFile).length > 0) {
const originalContent = Read(targetFile);
Bash(`mkdir -p "$(dirname "${backupPath}")"`);
Write(backupPath, originalContent);
}
// Apply change based on action type
if (change.action === 'modify' && change.diff) {
// For now, append the diff as a comment/note
// Real implementation would parse and apply the diff
const existingContent = Read(targetFile);
// Simple diff application: look for context and apply
// This is a simplified version - real implementation would be more sophisticated
const newContent = existingContent + `\n\n<!-- Applied fix ${fix.id}: ${fix.description} -->\n`;
Write(targetFile, newContent);
appliedChanges.push({
file: relativePath,
action: 'modified',
backup: backupPath
});
} else if (change.action === 'create') {
Write(targetFile, change.new_content || '');
appliedChanges.push({
file: relativePath,
action: 'created',
backup: null
});
}
}
} catch (error) {
console.log(`Error applying change to ${change.file}: ${error.message}`);
success = false;
}
}
// Record applied fix
const appliedFix = {
fix_id: fix.id,
applied_at: new Date().toISOString(),
success: success,
backup_path: fixBackupDir,
verification_result: 'pending',
rollback_available: true,
changes_made: appliedChanges
};
// Update applied fixes log
const appliedFixesPath = `${workDir}/fixes/applied-fixes.json`;
let existingApplied = [];
try {
existingApplied = JSON.parse(Read(appliedFixesPath));
} catch (e) {
existingApplied = [];
}
existingApplied.push(appliedFix);
Write(appliedFixesPath, JSON.stringify(existingApplied, null, 2));
return {
stateUpdates: {
applied_fixes: [...state.applied_fixes, appliedFix],
pending_fixes: pendingFixes.slice(1) // Remove applied fix from pending
},
outputFiles: [appliedFixesPath],
summary: `Applied fix ${fix.id}: ${success ? 'success' : 'partial'}, ${appliedChanges.length} files modified`
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
applied_fixes: [...existingApplied, newAppliedFix],
pending_fixes: remainingPendingFixes
}
};
```
## Rollback Function
```javascript
async function rollbackFix(fixId, state, workDir) {
const appliedFix = state.applied_fixes.find(f => f.fix_id === fixId);
if (!appliedFix || !appliedFix.rollback_available) {
throw new Error(`Cannot rollback fix ${fixId}`);
}
const backupDir = appliedFix.backup_path;
const targetPath = state.target_skill.path;
// Restore from backup
const backupFiles = Glob(`${backupDir}/**/*`);
for (const backupFile of backupFiles) {
const relativePath = backupFile.replace(backupDir + '/', '');
const targetFile = `${targetPath}/${relativePath}`;
const content = Read(backupFile);
Write(targetFile, content);
}
return {
stateUpdates: {
applied_fixes: state.applied_fixes.map(f =>
f.fix_id === fixId
? { ...f, rollback_available: false, verification_result: 'rolled_back' }
: f
)
}
};
}
```
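A usage sketch, assuming the orchestrator holds the current `state` and `workDir` (the fix id is hypothetical):
```javascript
// Hypothetical invocation: roll back one applied fix, then merge the
// returned state updates into the in-memory session state.
const { stateUpdates } = await rollbackFix('FIX-003', state, workDir);
Object.assign(state, stateUpdates);
```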
## Error Handling
| Error Type | Recovery |
|------------|----------|
| File not found | Skip file, log warning |
| Write permission error | Retry with sudo or report |
| Backup creation failed | Abort fix, don't modify |
## Next Actions
- If pending_fixes.length > 0: action-apply-fix (continue)
- If all fixes applied: action-verify


@@ -0,0 +1,195 @@
# Action: Complete
Finalize the tuning session with summary report and cleanup.
## Purpose
- Generate final summary report
- Record tuning statistics
- Clean up temporary files (optional)
- Provide recommendations for future maintenance
## Preconditions
- [ ] state.status === 'running'
- [ ] quality_gate === 'pass' OR max_iterations reached
## Execution
```javascript
async function execute(state, workDir) {
console.log('Finalizing skill tuning session...');
const targetSkill = state.target_skill;
const startTime = new Date(state.started_at);
const endTime = new Date();
const duration = Math.round((endTime - startTime) / 1000);
// Generate final summary
const summary = `# Skill Tuning Summary
**Target Skill**: ${targetSkill.name}
**Path**: ${targetSkill.path}
**Session Duration**: ${duration} seconds
**Completed**: ${endTime.toISOString()}
---
## Final Status
| Metric | Value |
|--------|-------|
| Final Health Score | ${state.quality_score}/100 |
| Quality Gate | ${state.quality_gate.toUpperCase()} |
| Total Iterations | ${state.iteration_count} |
| Issues Found | ${state.issues.length + state.applied_fixes.flatMap(f => f.issues_resolved || []).length} |
| Issues Resolved | ${state.applied_fixes.flatMap(f => f.issues_resolved || []).length} |
| Fixes Applied | ${state.applied_fixes.length} |
| Fixes Verified | ${state.applied_fixes.filter(f => f.verification_result === 'pass').length} |
---
## Diagnosis Summary
| Area | Issues Found | Severity |
|------|--------------|----------|
| Context Explosion | ${state.diagnosis.context?.issues_found || 'N/A'} | ${state.diagnosis.context?.severity || 'N/A'} |
| Long-tail Forgetting | ${state.diagnosis.memory?.issues_found || 'N/A'} | ${state.diagnosis.memory?.severity || 'N/A'} |
| Data Flow | ${state.diagnosis.dataflow?.issues_found || 'N/A'} | ${state.diagnosis.dataflow?.severity || 'N/A'} |
| Agent Coordination | ${state.diagnosis.agent?.issues_found || 'N/A'} | ${state.diagnosis.agent?.severity || 'N/A'} |
---
## Applied Fixes
${state.applied_fixes.length === 0 ? '_No fixes applied_' :
state.applied_fixes.map((fix, i) => `
### ${i + 1}. ${fix.fix_id}
- **Applied At**: ${fix.applied_at}
- **Success**: ${fix.success ? 'Yes' : 'No'}
- **Verification**: ${fix.verification_result}
- **Rollback Available**: ${fix.rollback_available ? 'Yes' : 'No'}
`).join('\n')}
---
## Remaining Issues
${state.issues.length === 0 ? '✅ All issues resolved!' :
`${state.issues.length} issues remain:\n\n` +
state.issues.map(issue =>
`- **[${issue.severity.toUpperCase()}]** ${issue.description} (${issue.id})`
).join('\n')}
---
## Recommendations
${generateRecommendations(state)}
---
## Backup Information
Original skill files backed up to:
\`${state.backup_dir}\`
To restore original skill:
\`\`\`bash
cp -r "${state.backup_dir}/${targetSkill.name}-backup"/* "${targetSkill.path}/"
\`\`\`
---
## Session Files
| File | Description |
|------|-------------|
| ${workDir}/tuning-report.md | Full diagnostic report |
| ${workDir}/diagnosis/*.json | Individual diagnosis results |
| ${workDir}/fixes/fix-proposals.json | Proposed fixes |
| ${workDir}/fixes/applied-fixes.json | Applied fix history |
| ${workDir}/tuning-summary.md | This summary |
---
*Skill tuning completed by skill-tuning*
`;
Write(`${workDir}/tuning-summary.md`, summary);
// Update final state
return {
stateUpdates: {
status: 'completed',
completed_at: endTime.toISOString()
},
outputFiles: [`${workDir}/tuning-summary.md`],
summary: `Tuning complete: ${state.quality_gate} with ${state.quality_score}/100 health score`
};
}
function generateRecommendations(state) {
const recommendations = [];
// Based on remaining issues
if (state.issues.some(i => i.type === 'context_explosion')) {
recommendations.push('- **Context Management**: Consider implementing a context summarization agent to prevent token growth');
}
if (state.issues.some(i => i.type === 'memory_loss')) {
recommendations.push('- **Constraint Tracking**: Add explicit constraint injection to each phase prompt');
}
if (state.issues.some(i => i.type === 'dataflow_break')) {
recommendations.push('- **State Centralization**: Migrate to single state.json with schema validation');
}
if (state.issues.some(i => i.type === 'agent_failure')) {
recommendations.push('- **Error Handling**: Wrap all Task calls in try-catch blocks');
}
// General recommendations
if (state.iteration_count >= state.max_iterations) {
recommendations.push('- **Deep Refactoring**: Consider architectural review if issues persist after multiple iterations');
}
if (state.quality_score < 80) {
recommendations.push('- **Regular Tuning**: Schedule periodic skill-tuning runs to catch issues early');
}
if (recommendations.length === 0) {
recommendations.push('- Skill is in good health! Monitor for regressions during future development.');
}
return recommendations.join('\n');
}
```
## State Updates
```javascript
return {
stateUpdates: {
status: 'completed',
completed_at: '<timestamp>'
}
};
```
## Output
- **File**: `tuning-summary.md`
- **Location**: `${workDir}/tuning-summary.md`
- **Format**: Markdown
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Summary write failed | Write to alternative location |
## Next Actions
- None (terminal state)


@@ -0,0 +1,317 @@
# Action: Diagnose Agent Coordination
Analyze target skill for agent coordination failures - call chain fragility and result passing issues.
## Purpose
- Detect fragile agent call patterns
- Identify result passing issues
- Find missing error handling in agent calls
- Analyze agent return format consistency
## Preconditions
- [ ] state.status === 'running'
- [ ] state.target_skill.path is set
- [ ] 'agent' in state.focus_areas OR state.focus_areas is empty
## Detection Patterns
### Pattern 1: Unhandled Agent Failures
```regex
# Task calls without try-catch or error handling
/Task\s*\(\s*\{[^}]*\}\s*\)(?![^;]*catch)/
```
### Pattern 2: Missing Return Validation
```regex
# Agent result used directly without validation
/const\s+\w+\s*=\s*(?:await\s+)?Task\([^)]+\);\s*(?!.*(?:if|try|JSON\.parse))/
```
### Pattern 3: Inconsistent Agent Configuration
```regex
# Different agent configurations in same skill
/subagent_type:\s*['"](\w+)['"]/g
```
### Pattern 4: Deeply Nested Agent Calls
```regex
# Agent calling another agent (nested)
/Task\s*\([^)]*prompt:[^)]*Task\s*\(/
```
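As an illustration of what the `hasErrorHandling` window check in the Execution section below flags and accepts (hypothetical snippets, not from any real skill):
```javascript
// Flagged: Task result used with no try/catch anywhere nearby.
const result = await Task({ subagent_type: 'analyzer', prompt: '...' });

// Accepted: failure is caught and the workflow degrades gracefully.
let safeResult;
try {
  safeResult = await Task({ subagent_type: 'analyzer', prompt: '...' });
} catch (err) {
  safeResult = { status: 'failed', summary: err.message };
}
```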
## Execution
```javascript
async function execute(state, workDir) {
const skillPath = state.target_skill.path;
const startTime = Date.now();
const issues = [];
const evidence = [];
console.log(`Diagnosing agent coordination in ${skillPath}...`);
// 1. Find all Task/agent calls
const allFiles = Glob(`${skillPath}/**/*.md`);
const agentCalls = [];
const agentTypes = new Set();
for (const file of allFiles) {
const content = Read(file);
const relativePath = file.replace(skillPath + '/', '');
// Find Task calls
const taskMatches = content.matchAll(/Task\s*\(\s*\{([^}]+)\}/g);
for (const match of taskMatches) {
const config = match[1];
// Extract agent type
const typeMatch = config.match(/subagent_type:\s*['"]([^'"]+)['"]/);
const agentType = typeMatch ? typeMatch[1] : 'unknown';
agentTypes.add(agentType);
// Check for error handling context
const hasErrorHandling = /try\s*\{.*Task|\.catch\(|await\s+Task.*\.then/s.test(
content.slice(Math.max(0, match.index - 100), match.index + match[0].length + 100)
);
// Check for result validation
const hasResultValidation = /JSON\.parse|if\s*\(\s*result|result\s*\?\./s.test(
content.slice(match.index, match.index + match[0].length + 200)
);
// Check for background execution
const runsInBackground = /run_in_background:\s*true/.test(config);
agentCalls.push({
file: relativePath,
agentType,
hasErrorHandling,
hasResultValidation,
runsInBackground,
config: config.slice(0, 200)
});
}
}
// 2. Analyze agent call patterns
const totalCalls = agentCalls.length;
const callsWithoutErrorHandling = agentCalls.filter(c => !c.hasErrorHandling);
const callsWithoutValidation = agentCalls.filter(c => !c.hasResultValidation);
// Issue: Missing error handling
if (callsWithoutErrorHandling.length > 0) {
issues.push({
id: `AGT-${issues.length + 1}`,
type: 'agent_failure',
severity: callsWithoutErrorHandling.length > 2 ? 'high' : 'medium',
location: { file: 'multiple' },
description: `${callsWithoutErrorHandling.length}/${totalCalls} agent calls lack error handling`,
evidence: callsWithoutErrorHandling.slice(0, 3).map(c =>
`${c.file}: ${c.agentType}`
),
root_cause: 'Agent failures not caught, may crash workflow',
impact: 'Unhandled agent errors cause cascading failures',
suggested_fix: 'Wrap Task calls in try-catch with graceful fallback'
});
evidence.push({
file: 'multiple',
pattern: 'missing_error_handling',
context: `${callsWithoutErrorHandling.length} calls affected`,
severity: 'high'
});
}
// Issue: Missing result validation
if (callsWithoutValidation.length > 0) {
issues.push({
id: `AGT-${issues.length + 1}`,
type: 'agent_failure',
severity: 'medium',
location: { file: 'multiple' },
description: `${callsWithoutValidation.length}/${totalCalls} agent calls lack result validation`,
evidence: callsWithoutValidation.slice(0, 3).map(c =>
`${c.file}: ${c.agentType} result not validated`
),
root_cause: 'Agent results used directly without type checking',
impact: 'Invalid agent output may corrupt state',
suggested_fix: 'Add JSON.parse with try-catch and schema validation'
});
}
// 3. Check for inconsistent agent types usage
if (agentTypes.size > 3 && state.target_skill.execution_mode === 'autonomous') {
issues.push({
id: `AGT-${issues.length + 1}`,
type: 'agent_failure',
severity: 'low',
location: { file: 'multiple' },
description: `Using ${agentTypes.size} different agent types`,
evidence: [...agentTypes].slice(0, 5),
root_cause: 'Multiple agent types increase coordination complexity',
impact: 'Different agent behaviors may cause inconsistency',
suggested_fix: 'Standardize on fewer agent types with clear roles'
});
}
// 4. Check for nested agent calls
for (const file of allFiles) {
const content = Read(file);
const relativePath = file.replace(skillPath + '/', '');
// Detect nested Task calls
const hasNestedTask = /Task\s*\([^)]*prompt:[^)]*Task\s*\(/s.test(content);
if (hasNestedTask) {
issues.push({
id: `AGT-${issues.length + 1}`,
type: 'agent_failure',
severity: 'high',
location: { file: relativePath },
description: 'Nested agent calls detected',
evidence: ['Agent prompt contains another Task call'],
root_cause: 'Agent calls another agent, creating deep nesting',
impact: 'Context explosion, hard to debug, unpredictable behavior',
suggested_fix: 'Flatten agent calls, use orchestrator to coordinate'
});
}
}
// 5. Check SKILL.md for agent configuration consistency
const skillMd = Read(`${skillPath}/SKILL.md`);
// Check if allowed-tools includes Task
const allowedTools = skillMd.match(/allowed-tools:\s*([^\n]+)/i);
if (allowedTools && !allowedTools[1].includes('Task') && totalCalls > 0) {
issues.push({
id: `AGT-${issues.length + 1}`,
type: 'agent_failure',
severity: 'medium',
location: { file: 'SKILL.md' },
description: 'Task tool used but not declared in allowed-tools',
evidence: [`${totalCalls} Task calls found, but Task not in allowed-tools`],
root_cause: 'Tool declaration mismatch',
impact: 'May cause runtime permission issues',
suggested_fix: 'Add Task to allowed-tools in SKILL.md front matter'
});
}
// 6. Check for agent result format consistency
const returnFormats = new Set();
for (const file of allFiles) {
const content = Read(file);
// Look for return format definitions
const returnMatch = content.match(/\[RETURN\][^[]*|return\s*\{[^}]+\}/gi);
if (returnMatch) {
returnMatch.forEach(r => {
const format = r.includes('JSON') ? 'json' :
r.includes('summary') ? 'summary' :
r.includes('file') ? 'file_path' : 'other';
returnFormats.add(format);
});
}
}
if (returnFormats.size > 2) {
issues.push({
id: `AGT-${issues.length + 1}`,
type: 'agent_failure',
severity: 'medium',
location: { file: 'multiple' },
description: 'Inconsistent agent return formats',
evidence: [...returnFormats],
root_cause: 'Different agents return data in different formats',
impact: 'Orchestrator must handle multiple format types',
suggested_fix: 'Standardize return format: {status, output_file, summary}'
});
}
// 7. Calculate severity
const criticalCount = issues.filter(i => i.severity === 'critical').length;
const highCount = issues.filter(i => i.severity === 'high').length;
const severity = criticalCount > 0 ? 'critical' :
highCount > 1 ? 'high' :
highCount > 0 ? 'medium' :
issues.length > 0 ? 'low' : 'none';
// 8. Write diagnosis result
const diagnosisResult = {
status: 'completed',
issues_found: issues.length,
severity: severity,
execution_time_ms: Date.now() - startTime,
details: {
patterns_checked: [
'error_handling',
'result_validation',
'agent_type_consistency',
'nested_calls',
'return_format_consistency'
],
patterns_matched: evidence.map(e => e.pattern),
evidence: evidence,
agent_analysis: {
total_agent_calls: totalCalls,
unique_agent_types: agentTypes.size,
calls_without_error_handling: callsWithoutErrorHandling.length,
calls_without_validation: callsWithoutValidation.length,
agent_types_used: [...agentTypes]
},
recommendations: [
callsWithoutErrorHandling.length > 0
? 'Add try-catch to all Task calls' : null,
callsWithoutValidation.length > 0
? 'Add result validation with JSON.parse and schema check' : null,
agentTypes.size > 3
? 'Consolidate agent types for consistency' : null
].filter(Boolean)
}
};
Write(`${workDir}/diagnosis/agent-diagnosis.json`,
JSON.stringify(diagnosisResult, null, 2));
return {
stateUpdates: {
'diagnosis.agent': diagnosisResult,
issues: [...state.issues, ...issues]
},
outputFiles: [`${workDir}/diagnosis/agent-diagnosis.json`],
summary: `Agent diagnosis: ${issues.length} issues found (severity: ${severity})`
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
'diagnosis.agent': {
status: 'completed',
issues_found: <count>,
severity: '<critical|high|medium|low|none>',
// ... full diagnosis result
},
issues: [...existingIssues, ...newIssues]
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Regex match error | Use simpler patterns |
| File access error | Skip and continue |
## Next Actions
- Success: action-generate-report
- Skipped: If 'agent' not in focus_areas


@@ -0,0 +1,243 @@
# Action: Diagnose Context Explosion
Analyze target skill for context explosion issues - token accumulation and multi-turn dialogue bloat.
## Purpose
- Detect patterns that cause context growth
- Identify multi-turn accumulation points
- Find missing context compression mechanisms
- Measure potential token waste
## Preconditions
- [ ] state.status === 'running'
- [ ] state.target_skill.path is set
- [ ] 'context' in state.focus_areas OR state.focus_areas is empty
## Detection Patterns
### Pattern 1: Unbounded History Accumulation
```regex
# Patterns that suggest history accumulation
/\bhistory\b.*\.push\b/
/\bmessages\b.*\.concat\b/
/\bconversation\b.*\+=\b/
/\bappend.*context\b/i
```
### Pattern 2: Full Content Passing
```regex
# Patterns that pass full content instead of references
/Read\([^)]+\).*\+.*Read\(/
/JSON\.stringify\(.*state\)/ # Full state serialization
/\$\{.*content\}/ # Template literal with full content
```
### Pattern 3: Missing Summarization
```regex
# Absence of compression/summarization
# Check for lack of: summarize, compress, truncate, slice
```
### Pattern 4: Agent Return Bloat
```regex
# Agent returning full content instead of path + summary
/return\s*\{[^}]*content:/
/return.*JSON\.stringify/
```
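An illustrative contrast for Pattern 4 (hypothetical agent return values): returning full content bloats the orchestrator's context, while a path plus summary keeps it bounded.
```javascript
// Flagged: full file content flows back to the orchestrator.
return { content: Read(`${workDir}/analysis.md`) };

// Preferred: a reference plus a short summary.
return { output_file: `${workDir}/analysis.md`, summary: '3 issues found' };
```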
## Execution
```javascript
async function execute(state, workDir) {
const skillPath = state.target_skill.path;
const startTime = Date.now();
const issues = [];
const evidence = [];
console.log(`Diagnosing context explosion in ${skillPath}...`);
// 1. Scan all phase files
const phaseFiles = Glob(`${skillPath}/phases/**/*.md`);
for (const file of phaseFiles) {
const content = Read(file);
const relativePath = file.replace(skillPath + '/', '');
// Check Pattern 1: History accumulation
const historyPatterns = [
/history\s*[.=].*push|concat|append/gi,
/messages\s*=\s*\[.*\.\.\..*messages/gi,
/conversation.*\+=/gi
];
for (const pattern of historyPatterns) {
const matches = content.match(pattern);
if (matches) {
issues.push({
id: `CTX-${issues.length + 1}`,
type: 'context_explosion',
severity: 'high',
location: { file: relativePath },
description: 'Unbounded history accumulation detected',
evidence: matches.slice(0, 3),
root_cause: 'History/messages array grows without bounds',
impact: 'Token count increases linearly with iterations',
suggested_fix: 'Implement sliding window or summarization'
});
evidence.push({
file: relativePath,
pattern: 'history_accumulation',
context: matches[0],
severity: 'high'
});
}
}
// Check Pattern 2: Full content passing
const contentPatterns = [
/Read\s*\([^)]+\)\s*[\+,]/g,
/JSON\.stringify\s*\(\s*state\s*\)/g,
/\$\{[^}]*content[^}]*\}/g
];
for (const pattern of contentPatterns) {
const matches = content.match(pattern);
if (matches) {
issues.push({
id: `CTX-${issues.length + 1}`,
type: 'context_explosion',
severity: 'medium',
location: { file: relativePath },
description: 'Full content passed instead of reference',
evidence: matches.slice(0, 3),
root_cause: 'Entire file/state content included in prompts',
impact: 'Unnecessary token consumption',
suggested_fix: 'Pass file paths and summaries instead of full content'
});
evidence.push({
file: relativePath,
pattern: 'full_content_passing',
context: matches[0],
severity: 'medium'
});
}
}
// Check Pattern 3: Missing summarization
const hasSummarization = /summariz|compress|truncat|slice.*context/i.test(content);
const hasLongPrompts = content.length > 5000;
if (hasLongPrompts && !hasSummarization) {
issues.push({
id: `CTX-${issues.length + 1}`,
type: 'context_explosion',
severity: 'medium',
location: { file: relativePath },
description: 'Long phase file without summarization mechanism',
evidence: [`File length: ${content.length} chars`],
root_cause: 'No context compression for large content',
impact: 'Potential token overflow in long sessions',
suggested_fix: 'Add context summarization before passing to agents'
});
}
// Check Pattern 4: Agent return bloat
const returnPatterns = /return\s*\{[^}]*(?:content|full_output|complete_result):/g;
const returnMatches = content.match(returnPatterns);
if (returnMatches) {
issues.push({
id: `CTX-${issues.length + 1}`,
type: 'context_explosion',
severity: 'high',
location: { file: relativePath },
description: 'Agent returns full content instead of path+summary',
evidence: returnMatches.slice(0, 3),
root_cause: 'Agent output includes complete content',
impact: 'Context bloat when orchestrator receives full output',
suggested_fix: 'Return {output_file, summary} instead of {content}'
});
}
}
// 2. Calculate severity
const criticalCount = issues.filter(i => i.severity === 'critical').length;
const highCount = issues.filter(i => i.severity === 'high').length;
const severity = criticalCount > 0 ? 'critical' :
highCount > 2 ? 'high' :
highCount > 0 ? 'medium' :
issues.length > 0 ? 'low' : 'none';
// 3. Write diagnosis result
const diagnosisResult = {
status: 'completed',
issues_found: issues.length,
severity: severity,
execution_time_ms: Date.now() - startTime,
details: {
patterns_checked: [
'history_accumulation',
'full_content_passing',
'missing_summarization',
'agent_return_bloat'
],
patterns_matched: evidence.map(e => e.pattern),
evidence: evidence,
recommendations: [
issues.length > 0 ? 'Implement context summarization agent' : null,
highCount > 0 ? 'Add sliding window for conversation history' : null,
evidence.some(e => e.pattern === 'full_content_passing')
? 'Refactor to pass file paths instead of content' : null
].filter(Boolean)
}
};
Write(`${workDir}/diagnosis/context-diagnosis.json`,
JSON.stringify(diagnosisResult, null, 2));
return {
stateUpdates: {
'diagnosis.context': diagnosisResult,
issues: [...state.issues, ...issues],
'issues_by_severity.critical': state.issues_by_severity.critical + criticalCount,
'issues_by_severity.high': state.issues_by_severity.high + highCount
},
outputFiles: [`${workDir}/diagnosis/context-diagnosis.json`],
summary: `Context diagnosis: ${issues.length} issues found (severity: ${severity})`
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
'diagnosis.context': {
status: 'completed',
issues_found: <count>,
severity: '<critical|high|medium|low|none>',
// ... full diagnosis result
},
issues: [...existingIssues, ...newIssues]
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| File read error | Skip file, log warning |
| Pattern matching error | Use fallback patterns |
| Write error | Retry to alternative path |
## Next Actions
- Success: action-diagnose-memory (or next in focus_areas)
- Skipped: If 'context' not in focus_areas


@@ -0,0 +1,318 @@
# Action: Diagnose Data Flow Issues
Analyze target skill for data flow disruption - state inconsistencies and format variations.
## Purpose
- Detect inconsistent data formats between phases
- Identify scattered state storage
- Find missing data contracts
- Measure state transition integrity
## Preconditions
- [ ] state.status === 'running'
- [ ] state.target_skill.path is set
- [ ] 'dataflow' in state.focus_areas OR state.focus_areas is empty
## Detection Patterns
### Pattern 1: Multiple Storage Locations
```regex
# Data written to multiple paths without centralization
/Write\s*\(\s*[`'"][^`'"]+[`'"]/g
```
### Pattern 2: Inconsistent Field Names
```regex
# Same concept with different names: title/name, id/identifier
```
### Pattern 3: Missing Schema Validation
```regex
# Absence of validation before state write
# Look for lack of: validate, schema, check, verify
```
### Pattern 4: Format Transformation Without Normalization
```regex
# Direct JSON.parse without error handling or normalization
/JSON\.parse\([^)]+\)(?!\s*\|\|)/
```
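A sketch of the kind of guarded parse Pattern 4 looks for, with the field-name normalization the fixes below suggest (an assumed convention, not prescribed by this skill):
```javascript
// Guarded parse: fall back to a known default on bad input, and
// normalize a known field alias before the value enters shared state.
function safeParseState(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch (e) {
    parsed = {};
  }
  if (parsed.state !== undefined && parsed.status === undefined) {
    parsed.status = parsed.state; // standardize on 'status'
  }
  return parsed;
}
```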
## Execution
```javascript
async function execute(state, workDir) {
const skillPath = state.target_skill.path;
const startTime = Date.now();
const issues = [];
const evidence = [];
console.log(`Diagnosing data flow in ${skillPath}...`);
// 1. Collect all Write operations to map data storage
const allFiles = Glob(`${skillPath}/**/*.md`);
const writeLocations = [];
const readLocations = [];
for (const file of allFiles) {
const content = Read(file);
const relativePath = file.replace(skillPath + '/', '');
// Find Write operations
const writeMatches = content.matchAll(/Write\s*\(\s*[`'"]([^`'"]+)[`'"]/g);
for (const match of writeMatches) {
writeLocations.push({
file: relativePath,
target: match[1],
isStateFile: match[1].includes('state.json') || match[1].includes('config.json')
});
}
// Find Read operations
const readMatches = content.matchAll(/Read\s*\(\s*[`'"]([^`'"]+)[`'"]/g);
for (const match of readMatches) {
readLocations.push({
file: relativePath,
source: match[1]
});
}
}
// 2. Check for scattered state storage
const stateTargets = writeLocations
.filter(w => w.isStateFile)
.map(w => w.target);
const uniqueStateFiles = [...new Set(stateTargets)];
if (uniqueStateFiles.length > 2) {
issues.push({
id: `DF-${issues.length + 1}`,
type: 'dataflow_break',
severity: 'high',
location: { file: 'multiple' },
description: `State stored in ${uniqueStateFiles.length} different locations`,
evidence: uniqueStateFiles.slice(0, 5),
root_cause: 'No centralized state management',
impact: 'State inconsistency between phases',
suggested_fix: 'Centralize state to single state.json with state manager'
});
evidence.push({
file: 'multiple',
pattern: 'scattered_state',
context: uniqueStateFiles.join(', '),
severity: 'high'
});
}
// 3. Check for inconsistent field naming
const fieldNamePatterns = {
'name_vs_title': [/\.name\b/, /\.title\b/],
'id_vs_identifier': [/\.id\b/, /\.identifier\b/],
'status_vs_state': [/\.status\b/, /\.state\b/],
'error_vs_errors': [/\.error\b/, /\.errors\b/]
};
const fieldUsage = {};
for (const file of allFiles) {
const content = Read(file);
const relativePath = file.replace(skillPath + '/', '');
for (const [patternName, patterns] of Object.entries(fieldNamePatterns)) {
for (const pattern of patterns) {
if (pattern.test(content)) {
if (!fieldUsage[patternName]) fieldUsage[patternName] = [];
fieldUsage[patternName].push({
file: relativePath,
pattern: pattern.toString()
});
}
}
}
}
for (const [patternName, usages] of Object.entries(fieldUsage)) {
const uniquePatterns = [...new Set(usages.map(u => u.pattern))];
if (uniquePatterns.length > 1) {
issues.push({
id: `DF-${issues.length + 1}`,
type: 'dataflow_break',
severity: 'medium',
location: { file: 'multiple' },
description: `Inconsistent field naming: ${patternName.replace('_vs_', ' vs ')}`,
evidence: usages.slice(0, 3).map(u => `${u.file}: ${u.pattern}`),
root_cause: 'Same concept referred to with different field names',
impact: 'Data may be lost during field access',
suggested_fix: `Standardize to single field name, add normalization function`
});
}
}
// 4. Check for missing schema validation
for (const file of allFiles) {
const content = Read(file);
const relativePath = file.replace(skillPath + '/', '');
// Find JSON.parse without validation
const unsafeParses = content.match(/JSON\.parse\s*\([^)]+\)(?!\s*\?\?|\s*\|\|)/g);
const hasValidation = /validat|schema|type.*check/i.test(content);
if (unsafeParses && unsafeParses.length > 0 && !hasValidation) {
issues.push({
id: `DF-${issues.length + 1}`,
type: 'dataflow_break',
severity: 'medium',
location: { file: relativePath },
description: 'JSON parsing without validation',
evidence: unsafeParses.slice(0, 2),
root_cause: 'No schema validation after parsing',
impact: 'Invalid data may propagate through phases',
suggested_fix: 'Add schema validation after JSON.parse'
});
}
}
// 5. Check state schema if exists
const stateSchemaFile = Glob(`${skillPath}/phases/state-schema.md`)[0];
if (stateSchemaFile) {
const schemaContent = Read(stateSchemaFile);
// Check for type definitions
const hasTypeScript = /interface\s+\w+|type\s+\w+\s*=/i.test(schemaContent);
const hasValidationFunction = /function\s+validate|validateState/i.test(schemaContent);
if (hasTypeScript && !hasValidationFunction) {
issues.push({
id: `DF-${issues.length + 1}`,
type: 'dataflow_break',
severity: 'low',
location: { file: 'phases/state-schema.md' },
description: 'Type definitions without runtime validation',
evidence: ['TypeScript interfaces defined but no validation function'],
root_cause: 'Types are compile-time only, not enforced at runtime',
impact: 'Schema violations may occur at runtime',
suggested_fix: 'Add validateState() function using Zod or manual checks'
});
}
} else if (state.target_skill.execution_mode === 'autonomous') {
issues.push({
id: `DF-${issues.length + 1}`,
type: 'dataflow_break',
severity: 'high',
location: { file: 'phases/' },
description: 'Autonomous skill missing state-schema.md',
evidence: ['No state schema definition found'],
root_cause: 'State structure undefined for orchestrator',
impact: 'Inconsistent state handling across actions',
suggested_fix: 'Create phases/state-schema.md with explicit type definitions'
});
}
// 6. Check read-write alignment
const writtenFiles = new Set(writeLocations.map(w => w.target));
const readFiles = new Set(readLocations.map(r => r.source));
const writtenButNotRead = [...writtenFiles].filter(f =>
!readFiles.has(f) && !f.includes('output') && !f.includes('report')
);
if (writtenButNotRead.length > 0) {
issues.push({
id: `DF-${issues.length + 1}`,
type: 'dataflow_break',
severity: 'low',
location: { file: 'multiple' },
description: 'Files written but never read',
evidence: writtenButNotRead.slice(0, 3),
root_cause: 'Orphaned output files',
impact: 'Wasted storage and potential confusion',
suggested_fix: 'Remove unused writes or add reads where needed'
});
}
// 7. Calculate severity
const criticalCount = issues.filter(i => i.severity === 'critical').length;
const highCount = issues.filter(i => i.severity === 'high').length;
const severity = criticalCount > 0 ? 'critical' :
highCount > 1 ? 'high' :
highCount > 0 ? 'medium' :
issues.length > 0 ? 'low' : 'none';
// 8. Write diagnosis result
const diagnosisResult = {
status: 'completed',
issues_found: issues.length,
severity: severity,
execution_time_ms: Date.now() - startTime,
details: {
patterns_checked: [
'scattered_state',
'inconsistent_naming',
'missing_validation',
'read_write_alignment'
],
patterns_matched: evidence.map(e => e.pattern),
evidence: evidence,
data_flow_map: {
write_locations: writeLocations.length,
read_locations: readLocations.length,
unique_state_files: uniqueStateFiles.length
},
recommendations: [
uniqueStateFiles.length > 2 ? 'Implement centralized state manager' : null,
issues.some(i => i.description.includes('naming'))
? 'Create normalization layer for field names' : null,
issues.some(i => i.description.includes('validation'))
? 'Add Zod or JSON Schema validation' : null
].filter(Boolean)
}
};
Write(`${workDir}/diagnosis/dataflow-diagnosis.json`,
JSON.stringify(diagnosisResult, null, 2));
return {
stateUpdates: {
'diagnosis.dataflow': diagnosisResult,
issues: [...state.issues, ...issues]
},
outputFiles: [`${workDir}/diagnosis/dataflow-diagnosis.json`],
summary: `Data flow diagnosis: ${issues.length} issues found (severity: ${severity})`
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
'diagnosis.dataflow': {
status: 'completed',
issues_found: <count>,
severity: '<critical|high|medium|low|none>',
// ... full diagnosis result
},
issues: [...existingIssues, ...newIssues]
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Glob pattern error | Use fallback patterns |
| File read error | Skip and continue |
## Next Actions
- Success: action-diagnose-agent (or next in focus_areas)
- Skipped: If 'dataflow' not in focus_areas


@@ -0,0 +1,299 @@
# Action: Diagnose Documentation Structure
Detect documentation redundancy and conflict issues in the target skill.
## Purpose
- Detect duplicate definitions (state schema, mapping tables, type definitions, etc.)
- Detect conflicting definitions (inconsistent priority definitions, implementation-vs-documentation drift, etc.)
- Generate suggestions for consolidation and conflict resolution
## Preconditions
- [ ] `state.status === 'running'`
- [ ] `state.target_skill !== null`
- [ ] `!state.diagnosis.docs`
- [ ] The user-specified focus_areas include 'docs' or 'all', or a full diagnosis is required
## Detection Patterns
### DOC-RED-001: Duplicate Core Definitions
Detect state schema, core interfaces, etc. defined in multiple places:
```javascript
async function detectDefinitionDuplicates(skillPath) {
const patterns = [
{ name: 'state_schema', regex: /interface\s+(TuningState|State)\s*\{/g },
{ name: 'fix_strategy', regex: /type\s+FixStrategy\s*=/g },
{ name: 'issue_type', regex: /type:\s*['"]?(context_explosion|memory_loss|dataflow_break)/g }
];
const files = Glob('**/*.md', { cwd: skillPath });
const duplicates = [];
for (const pattern of patterns) {
const matches = [];
for (const file of files) {
const content = Read(`${skillPath}/${file}`);
      pattern.regex.lastIndex = 0; // /g/ regexes keep state across .test() calls
      if (pattern.regex.test(content)) {
matches.push({ file, pattern: pattern.name });
}
}
if (matches.length > 1) {
duplicates.push({
type: pattern.name,
files: matches.map(m => m.file),
severity: 'high'
});
}
}
return duplicates;
}
```
### DOC-RED-002: Duplicated Hardcoded Configuration
Detect hardcoded values in action files that duplicate the spec documents:
```javascript
async function detectHardcodedDuplicates(skillPath) {
const actionFiles = Glob('phases/actions/*.md', { cwd: skillPath });
const specFiles = Glob('specs/*.md', { cwd: skillPath });
const duplicates = [];
for (const actionFile of actionFiles) {
const content = Read(`${skillPath}/${actionFile}`);
    // Detect hardcoded mapping objects
const hardcodedPatterns = [
/const\s+\w*[Mm]apping\s*=\s*\{/g,
/patternMapping\s*=\s*\{/g,
/strategyMapping\s*=\s*\{/g
];
    for (const pattern of hardcodedPatterns) {
      pattern.lastIndex = 0; // /g/ regexes keep state across .test() calls
      if (pattern.test(content)) {
        duplicates.push({
          type: 'hardcoded_mapping',
          file: actionFile,
          description: 'Hardcoded mapping likely duplicates a definition in specs/',
          severity: 'high'
});
}
}
}
return duplicates;
}
```
### DOC-CON-001: Conflicting Priority Definitions
Detect inconsistent definitions of priorities such as P0-P3 across files:
```javascript
async function detectPriorityConflicts(skillPath) {
const files = Glob('**/*.md', { cwd: skillPath });
const priorityDefs = {};
const priorityPattern = /\*\*P(\d+)\*\*[:\s]+([^\|]+)/g;
for (const file of files) {
const content = Read(`${skillPath}/${file}`);
let match;
while ((match = priorityPattern.exec(content)) !== null) {
const priority = `P${match[1]}`;
const definition = match[2].trim();
if (!priorityDefs[priority]) {
priorityDefs[priority] = [];
}
priorityDefs[priority].push({ file, definition });
}
}
const conflicts = [];
for (const [priority, defs] of Object.entries(priorityDefs)) {
const uniqueDefs = [...new Set(defs.map(d => d.definition))];
if (uniqueDefs.length > 1) {
conflicts.push({
key: priority,
definitions: defs,
severity: 'critical'
});
}
}
return conflicts;
}
```
### DOC-CON-002: Implementation-Documentation Drift
Detect inconsistencies between hardcoded values and documentation tables:
```javascript
async function detectImplementationDrift(skillPath) {
  // Compare category-mappings.json against the tables in specs/*.md
  const mappingsFile = `${skillPath}/specs/category-mappings.json`;
  if (!fileExists(mappingsFile)) {
    return []; // no centralized config, skip
}
const mappings = JSON.parse(Read(mappingsFile));
const conflicts = [];
  // Compare against dimension-mapping.md
const dimMapping = Read(`${skillPath}/specs/dimension-mapping.md`);
for (const [category, config] of Object.entries(mappings.categories)) {
    // Check whether each strategy is mentioned in the doc
for (const strategy of config.strategies || []) {
if (!dimMapping.includes(strategy)) {
conflicts.push({
type: 'mapping',
key: `${category}.strategies`,
          issue: `Strategy ${strategy} is defined in the JSON but not mentioned in the doc`
});
}
}
}
return conflicts;
}
```
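`fileExists` is referenced above but not defined in this file; one minimal sketch in terms of the Glob tool this skill already uses:
```javascript
// Hypothetical helper: a path exists if globbing it returns a match,
// mirroring the Glob(targetFile).length > 0 idiom used elsewhere.
function fileExists(path) {
  return Glob(path).length > 0;
}
```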
## Execution
```javascript
async function executeDiagnosis(state, workDir) {
  console.log('=== Diagnosing Documentation Structure ===');
  const startTime = Date.now(); // needed below for execution_time_ms
  const skillPath = state.target_skill.path;
  const issues = [];
  // 1. Detect redundancy
const definitionDups = await detectDefinitionDuplicates(skillPath);
const hardcodedDups = await detectHardcodedDuplicates(skillPath);
for (const dup of [...definitionDups, ...hardcodedDups]) {
issues.push({
id: `DOC-RED-${issues.length + 1}`,
type: 'doc_redundancy',
severity: dup.severity,
location: { files: dup.files || [dup.file] },
      description: dup.description || `${dup.type} defined in multiple places`,
      evidence: dup.files || [dup.file],
      root_cause: 'No single source of truth',
      impact: 'Hard to maintain; prone to inconsistency',
      suggested_fix: 'consolidate_to_ssot'
});
}
  // 2. Detect conflicts
const priorityConflicts = await detectPriorityConflicts(skillPath);
const driftConflicts = await detectImplementationDrift(skillPath);
for (const conflict of priorityConflicts) {
issues.push({
id: `DOC-CON-${issues.length + 1}`,
type: 'doc_conflict',
severity: 'critical',
location: { files: conflict.definitions.map(d => d.file) },
      description: `${conflict.key} is defined inconsistently across files`,
      evidence: conflict.definitions.map(d => `${d.file}: ${d.definition}`),
      root_cause: 'Definitions updated without synchronization',
      impact: 'Unpredictable behavior',
suggested_fix: 'reconcile_conflicting_definitions'
});
}
  // 3. Generate the report
const severity = issues.some(i => i.severity === 'critical') ? 'critical' :
issues.some(i => i.severity === 'high') ? 'high' :
issues.length > 0 ? 'medium' : 'none';
const result = {
status: 'completed',
issues_found: issues.length,
severity: severity,
execution_time_ms: Date.now() - startTime,
details: {
patterns_checked: ['DOC-RED-001', 'DOC-RED-002', 'DOC-CON-001', 'DOC-CON-002'],
patterns_matched: issues.map(i => i.id.split('-').slice(0, 2).join('-')),
evidence: issues.flatMap(i => i.evidence),
recommendations: generateRecommendations(issues)
},
redundancies: issues.filter(i => i.type === 'doc_redundancy'),
conflicts: issues.filter(i => i.type === 'doc_conflict')
};
  // Write the diagnosis result
Write(`${workDir}/diagnosis/docs-diagnosis.json`, JSON.stringify(result, null, 2));
return {
stateUpdates: {
'diagnosis.docs': result,
issues: [...state.issues, ...issues]
},
outputFiles: [`${workDir}/diagnosis/docs-diagnosis.json`],
    summary: `Documentation diagnosis complete: ${issues.length} issues found (${severity})`
};
}
function generateRecommendations(issues) {
const recommendations = [];
  if (issues.some(i => i.type === 'doc_redundancy')) {
    recommendations.push('Merge duplicate definitions using the consolidate_to_ssot strategy');
    recommendations.push('Consider creating specs/category-mappings.json to centralize configuration');
  }
  if (issues.some(i => i.type === 'doc_conflict')) {
    recommendations.push('Resolve conflicts using the reconcile_conflicting_definitions strategy');
    recommendations.push('Establish a documentation synchronization check');
  }
return recommendations;
}
```
## Output
### State Updates
```javascript
{
stateUpdates: {
'diagnosis.docs': {
status: 'completed',
issues_found: N,
severity: 'critical|high|medium|low|none',
redundancies: [...],
conflicts: [...]
},
issues: [...existingIssues, ...newIssues]
}
}
```
### Output Files
- `${workDir}/diagnosis/docs-diagnosis.json` - Full diagnosis result
## Error Handling
| Error | Recovery |
|-------|----------|
| File read failure | Log a warning and continue with the remaining files |
| Regex match timeout | Skip the pattern and record it as skipped |
| JSON parse failure | Skip the config comparison; run pattern detection only |
## Next Actions
- If critical issues are found → proceed to action-propose-fixes first
- If no issues → continue to the next diagnosis or action-generate-report


@@ -0,0 +1,269 @@
# Action: Diagnose Long-tail Forgetting
Analyze target skill for long-tail effect and constraint forgetting issues.
## Purpose
- Detect loss of early instructions in long execution chains
- Identify missing constraint propagation mechanisms
- Find weak goal alignment between phases
- Measure instruction retention across phases
## Preconditions
- [ ] state.status === 'running'
- [ ] state.target_skill.path is set
- [ ] 'memory' in state.focus_areas OR state.focus_areas is empty
## Detection Patterns
### Pattern 1: Missing Constraint References
```regex
# Phases that don't reference original requirements
# Look for absence of: requirements, constraints, original, initial, user_request
```
### Pattern 2: Goal Drift
```regex
# Later phases focus on immediate task without global context
/\[TASK\][^[]*(?!\[CONSTRAINTS\]|\[REQUIREMENTS\])/
```
### Pattern 3: No Checkpoint Mechanism
```regex
# Absence of state preservation at key points
# Look for lack of: checkpoint, snapshot, preserve, restore
```
### Pattern 4: Implicit State Passing
```regex
# State passed implicitly through conversation rather than explicitly
/(?<!state\.)context\./
```
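A sketch of the explicit constraint injection that the suggested fixes below refer to (hypothetical prompt assembly, assuming constraints are kept in `state.original_constraints`):
```javascript
// Hypothetical: re-inject original constraints into every phase prompt
// so late phases cannot silently drop them.
function buildPhasePrompt(taskText, state) {
  const constraints = (state.original_constraints || [])
    .map(c => `- ${c}`)
    .join('\n');
  return `[TASK]\n${taskText}\n\n[CONSTRAINTS]\n${constraints}`;
}
```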
## Execution
```javascript
async function execute(state, workDir) {
const skillPath = state.target_skill.path;
const startTime = Date.now();
const issues = [];
const evidence = [];
console.log(`Diagnosing long-tail forgetting in ${skillPath}...`);
// 1. Analyze phase chain for constraint propagation
const phaseFiles = Glob(`${skillPath}/phases/*.md`)
.filter(f => !f.includes('orchestrator') && !f.includes('state-schema'))
.sort();
// Extract phase order (for sequential) or action dependencies (for autonomous)
const isAutonomous = state.target_skill.execution_mode === 'autonomous';
// 2. Check each phase for constraint awareness
let firstPhaseConstraints = [];
for (let i = 0; i < phaseFiles.length; i++) {
const file = phaseFiles[i];
const content = Read(file);
const relativePath = file.replace(skillPath + '/', '');
const phaseNum = i + 1;
// Extract constraints from first phase
if (i === 0) {
const constraintMatch = content.match(/\[CONSTRAINTS?\]([^[]*)/i);
if (constraintMatch) {
firstPhaseConstraints = constraintMatch[1]
.split('\n')
.filter(l => l.trim().startsWith('-'))
.map(l => l.trim().replace(/^-\s*/, ''));
}
}
// Check if later phases reference original constraints
if (i > 0 && firstPhaseConstraints.length > 0) {
const mentionsConstraints = firstPhaseConstraints.some(c =>
content.toLowerCase().includes(c.toLowerCase().slice(0, 20))
);
if (!mentionsConstraints) {
issues.push({
id: `MEM-${issues.length + 1}`,
type: 'memory_loss',
severity: 'high',
location: { file: relativePath, phase: `Phase ${phaseNum}` },
description: `Phase ${phaseNum} does not reference original constraints`,
evidence: [`Original constraints: ${firstPhaseConstraints.slice(0, 3).join(', ')}`],
root_cause: 'Constraint information not propagated to later phases',
impact: 'May produce output violating original requirements',
suggested_fix: 'Add explicit constraint injection or reference to state.original_constraints'
});
evidence.push({
file: relativePath,
pattern: 'missing_constraint_reference',
context: `Phase ${phaseNum} of ${phaseFiles.length}`,
severity: 'high'
});
}
}
// Check for goal drift - task without constraints
const hasTask = /\[TASK\]/i.test(content);
const hasConstraints = /\[CONSTRAINTS?\]|\[REQUIREMENTS?\]|\[RULES?\]/i.test(content);
if (hasTask && !hasConstraints && i > 1) {
issues.push({
id: `MEM-${issues.length + 1}`,
type: 'memory_loss',
severity: 'medium',
location: { file: relativePath },
description: 'Phase has TASK but no CONSTRAINTS/RULES section',
evidence: ['Task defined without boundary constraints'],
root_cause: 'Agent may not adhere to global constraints',
impact: 'Potential goal drift from original intent',
suggested_fix: 'Add [CONSTRAINTS] section referencing global rules'
});
}
// Check for checkpoint mechanism
const hasCheckpoint = /checkpoint|snapshot|preserve|savepoint/i.test(content);
const isKeyPhase = i === Math.floor(phaseFiles.length / 2) || i === phaseFiles.length - 1;
if (isKeyPhase && !hasCheckpoint && phaseFiles.length > 3) {
issues.push({
id: `MEM-${issues.length + 1}`,
type: 'memory_loss',
severity: 'low',
location: { file: relativePath },
description: 'Key phase without checkpoint mechanism',
evidence: [`Phase ${phaseNum} is a key milestone but has no state preservation`],
root_cause: 'Cannot recover from failures or verify constraint adherence',
impact: 'No rollback capability if constraints violated',
suggested_fix: 'Add checkpoint before major state changes'
});
}
}
// 3. Check for explicit state schema with constraints field
const stateSchemaFile = Glob(`${skillPath}/phases/state-schema.md`)[0];
if (stateSchemaFile) {
const schemaContent = Read(stateSchemaFile);
const hasConstraintsField = /constraints|requirements|original_request/i.test(schemaContent);
if (!hasConstraintsField) {
issues.push({
id: `MEM-${issues.length + 1}`,
type: 'memory_loss',
severity: 'medium',
location: { file: 'phases/state-schema.md' },
description: 'State schema lacks constraints/requirements field',
evidence: ['No dedicated field for preserving original requirements'],
root_cause: 'State structure does not support constraint persistence',
impact: 'Constraints may be lost during state transitions',
suggested_fix: 'Add original_requirements field to state schema'
});
}
}
// 4. Check SKILL.md for constraint enforcement in execution flow
const skillMd = Read(`${skillPath}/SKILL.md`);
const hasConstraintVerification = /constraint.*verif|verif.*constraint|quality.*gate/i.test(skillMd);
if (!hasConstraintVerification && phaseFiles.length > 3) {
issues.push({
id: `MEM-${issues.length + 1}`,
type: 'memory_loss',
severity: 'medium',
location: { file: 'SKILL.md' },
description: 'No constraint verification step in execution flow',
evidence: ['Execution flow lacks quality gate or constraint check'],
root_cause: 'No mechanism to verify output matches original intent',
impact: 'Constraint violations may go undetected',
suggested_fix: 'Add verification phase comparing output to original requirements'
});
}
// 5. Calculate severity
const criticalCount = issues.filter(i => i.severity === 'critical').length;
const highCount = issues.filter(i => i.severity === 'high').length;
const severity = criticalCount > 0 ? 'critical' :
highCount > 2 ? 'high' :
highCount > 0 ? 'medium' :
issues.length > 0 ? 'low' : 'none';
// 6. Write diagnosis result
const diagnosisResult = {
status: 'completed',
issues_found: issues.length,
severity: severity,
execution_time_ms: Date.now() - startTime,
details: {
patterns_checked: [
'constraint_propagation',
'goal_drift',
'checkpoint_mechanism',
'state_schema_constraints'
],
patterns_matched: evidence.map(e => e.pattern),
evidence: evidence,
phase_analysis: {
total_phases: phaseFiles.length,
first_phase_constraints: firstPhaseConstraints.length,
phases_with_constraint_ref: phaseFiles.length - issues.filter(i =>
i.description.includes('does not reference')).length
},
recommendations: [
highCount > 0 ? 'Implement constraint injection at each phase' : null,
issues.some(i => i.description.includes('checkpoint'))
? 'Add checkpoint/restore mechanism' : null,
issues.some(i => i.description.includes('State schema'))
? 'Add original_requirements to state schema' : null
].filter(Boolean)
}
};
Write(`${workDir}/diagnosis/memory-diagnosis.json`,
JSON.stringify(diagnosisResult, null, 2));
return {
stateUpdates: {
'diagnosis.memory': diagnosisResult,
issues: [...state.issues, ...issues]
},
outputFiles: [`${workDir}/diagnosis/memory-diagnosis.json`],
summary: `Memory diagnosis: ${issues.length} issues found (severity: ${severity})`
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
'diagnosis.memory': {
status: 'completed',
issues_found: <count>,
severity: '<critical|high|medium|low|none>',
// ... full diagnosis result
},
issues: [...existingIssues, ...newIssues]
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Phase file read error | Skip file, continue analysis |
| No phases found | Report as structure issue |
## Next Actions
- Success: action-diagnose-dataflow (or next in focus_areas)
- Skipped: If 'memory' not in focus_areas

View File

@@ -0,0 +1,200 @@
# Action: Diagnose Token Consumption
Analyze target skill for token consumption inefficiencies and output optimization opportunities.
## Purpose
Detect patterns that cause excessive token usage:
- Verbose prompts without compression
- Large state objects with unnecessary fields
- Full content passing instead of references
- Unbounded arrays without sliding windows
- Redundant file I/O (write-then-read patterns)
## Detection Patterns
| Pattern ID | Name | Detection Logic | Severity |
|------------|------|-----------------|----------|
| TKN-001 | Verbose Prompts | Prompt files > 4KB or high static/variable ratio | medium |
| TKN-002 | Excessive State Fields | State schema > 15 top-level keys | medium |
| TKN-003 | Full Content Passing | `Read()` result embedded directly in prompt | high |
| TKN-004 | Unbounded Arrays | `.push`/`concat` without `.slice(-N)` | high |
| TKN-005 | Redundant Write→Read | `Write(file)` followed by `Read(file)` | medium |
## Execution Steps
```javascript
async function diagnoseTokenConsumption(state, workDir) {
  const startTime = Date.now(); // needed for execution_time_ms in the result below
  const issues = [];
  const evidence = [];
  const skillPath = state.target_skill.path;
// 1. Scan for verbose prompts (TKN-001)
const mdFiles = Glob(`${skillPath}/**/*.md`);
for (const file of mdFiles) {
const content = Read(file);
if (content.length > 4000) {
evidence.push({
file: file,
pattern: 'TKN-001',
severity: 'medium',
context: `File size: ${content.length} chars (threshold: 4000)`
});
}
}
// 2. Check state schema field count (TKN-002)
const stateSchema = Glob(`${skillPath}/**/state-schema.md`)[0];
if (stateSchema) {
const schemaContent = Read(stateSchema);
const fieldMatches = schemaContent.match(/^\s*\w+:/gm) || [];
if (fieldMatches.length > 15) {
evidence.push({
file: stateSchema,
pattern: 'TKN-002',
severity: 'medium',
context: `State has ${fieldMatches.length} fields (threshold: 15)`
});
}
}
// 3. Detect full content passing (TKN-003)
const fullContentPattern = /Read\([^)]+\)\s*[\+,]|`\$\{.*Read\(/g;
for (const file of mdFiles) {
const content = Read(file);
const matches = content.match(fullContentPattern);
if (matches) {
evidence.push({
file: file,
pattern: 'TKN-003',
severity: 'high',
context: `Full content passing detected: ${matches[0]}`
});
}
}
// 4. Detect unbounded arrays (TKN-004)
const unboundedPattern = /\.(push|concat)\([^)]+\)(?!.*\.slice)/g;
for (const file of mdFiles) {
const content = Read(file);
const matches = content.match(unboundedPattern);
if (matches) {
evidence.push({
file: file,
pattern: 'TKN-004',
severity: 'high',
context: `Unbounded array growth: ${matches[0]}`
});
}
}
// 5. Detect write-then-read patterns (TKN-005)
const writeReadPattern = /Write\([^)]+\)[\s\S]{0,100}Read\([^)]+\)/g;
for (const file of mdFiles) {
const content = Read(file);
const matches = content.match(writeReadPattern);
if (matches) {
evidence.push({
file: file,
pattern: 'TKN-005',
severity: 'medium',
context: `Write-then-read pattern detected`
});
}
}
// Calculate severity
const highCount = evidence.filter(e => e.severity === 'high').length;
const mediumCount = evidence.filter(e => e.severity === 'medium').length;
let severity = 'none';
if (highCount > 0) severity = 'high';
else if (mediumCount > 2) severity = 'medium';
else if (mediumCount > 0) severity = 'low';
return {
status: 'completed',
issues_found: evidence.length,
severity: severity,
execution_time_ms: Date.now() - startTime,
details: {
patterns_checked: ['TKN-001', 'TKN-002', 'TKN-003', 'TKN-004', 'TKN-005'],
patterns_matched: [...new Set(evidence.map(e => e.pattern))],
evidence: evidence,
recommendations: generateRecommendations(evidence)
}
};
}
function generateRecommendations(evidence) {
const recs = [];
const patterns = [...new Set(evidence.map(e => e.pattern))];
if (patterns.includes('TKN-001')) {
recs.push('Apply prompt_compression: Extract static instructions to templates, use placeholders');
}
if (patterns.includes('TKN-002')) {
recs.push('Apply state_field_reduction: Remove debug/cache fields, consolidate related fields');
}
if (patterns.includes('TKN-003')) {
recs.push('Apply lazy_loading: Pass file paths instead of content, let agents read if needed');
}
if (patterns.includes('TKN-004')) {
recs.push('Apply sliding_window: Add .slice(-N) to array operations to bound growth');
}
if (patterns.includes('TKN-005')) {
recs.push('Apply output_minimization: Use in-memory data passing, eliminate temporary files');
}
return recs;
}
```
## Output
Write diagnosis result to `${workDir}/diagnosis/token-consumption-diagnosis.json`:
```json
{
"status": "completed",
"issues_found": 3,
"severity": "medium",
"execution_time_ms": 1500,
"details": {
"patterns_checked": ["TKN-001", "TKN-002", "TKN-003", "TKN-004", "TKN-005"],
"patterns_matched": ["TKN-001", "TKN-003"],
"evidence": [
{
"file": "phases/orchestrator.md",
"pattern": "TKN-001",
"severity": "medium",
"context": "File size: 5200 chars (threshold: 4000)"
}
],
"recommendations": [
"Apply prompt_compression: Extract static instructions to templates"
]
}
}
```
## State Update
```javascript
updateState({
diagnosis: {
...state.diagnosis,
token_consumption: diagnosisResult
}
});
```
## Fix Strategies Mapping
| Pattern | Strategy | Implementation |
|---------|----------|----------------|
| TKN-001 | prompt_compression | Extract static text to variables, use template inheritance |
| TKN-002 | state_field_reduction | Audit and consolidate fields, remove non-essential data |
| TKN-003 | lazy_loading | Pass paths instead of content, agents load when needed |
| TKN-004 | sliding_window | Add `.slice(-N)` after push/concat operations |
| TKN-005 | output_minimization | Use return values instead of file relay |
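As a concrete illustration, the TKN-004 fix is a one-line bound applied wherever an array grows; a minimal sketch, with MAX_TURNS as an assumed constant:
```javascript
const MAX_TURNS = 5; // illustrative window size
state.history = [...state.history, newTurn].slice(-MAX_TURNS); // keep only the last N entries
```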

View File

@@ -0,0 +1,322 @@
# Action: Gemini Analysis
Dynamically invoke the Gemini CLI for deep analysis, selecting the analysis type from the user's request or from diagnosis results.
## Role
- Receive the user-specified analysis request, or infer one from diagnosis results
- Build the appropriate CLI command
- Execute the analysis and parse the result
- Update state for downstream actions
## Preconditions
- `state.status === 'running'`
- Any one of the following holds:
  - `state.gemini_analysis_requested === true` (user requested)
  - `state.issues.some(i => i.severity === 'critical')` (critical issue found)
  - `state.analysis_type !== null` (analysis type already specified)
## Analysis Types
### 1. root_cause - Root Cause Analysis
Deep analysis of the problem described by the user.
```javascript
const analysisPrompt = `
PURPOSE: Identify root cause of skill execution issue: ${state.user_issue_description}
TASK:
• Analyze skill structure at: ${state.target_skill.path}
• Identify anti-patterns in phase files
• Trace data flow through state management
• Check agent coordination patterns
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON with structure:
{
"root_causes": [
{ "id": "RC-001", "description": "...", "severity": "high", "evidence": ["file:line"] }
],
"patterns_found": [
{ "pattern": "...", "type": "anti-pattern|best-practice", "locations": [] }
],
"recommendations": [
{ "priority": 1, "action": "...", "rationale": "..." }
]
}
RULES: Focus on execution flow, state management, agent coordination
`;
```
### 2. architecture - Architecture Review
Evaluate the skill's overall architecture.
```javascript
const analysisPrompt = `
PURPOSE: Review skill architecture for: ${state.target_skill.name}
TASK:
• Evaluate phase decomposition and responsibility separation
• Check state schema design and data flow
• Assess agent coordination and error handling
• Review scalability and maintainability
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: Markdown report with sections:
- Executive Summary
- Phase Architecture Assessment
- State Management Evaluation
- Agent Coordination Analysis
- Improvement Recommendations (prioritized)
RULES: Focus on modularity, extensibility, maintainability
`;
```
### 3. prompt_optimization - Prompt Optimization
Analyze and optimize the prompts in phase files.
```javascript
const analysisPrompt = `
PURPOSE: Optimize prompts in skill phases for better output quality
TASK:
• Analyze existing prompts for clarity and specificity
• Identify ambiguous instructions
• Check output format specifications
• Evaluate constraint communication
MODE: analysis
CONTEXT: @phases/**/*.md
EXPECTED: JSON with structure:
{
"prompt_issues": [
{ "file": "...", "issue": "...", "severity": "...", "suggestion": "..." }
],
"optimized_prompts": [
{ "file": "...", "original": "...", "optimized": "...", "rationale": "..." }
]
}
RULES: Preserve intent, improve clarity, add structured output requirements
`;
```
### 4. performance - Performance Analysis
Analyze token consumption and execution efficiency.
```javascript
const analysisPrompt = `
PURPOSE: Analyze performance bottlenecks in skill execution
TASK:
• Estimate token consumption per phase
• Identify redundant data passing
• Check for unnecessary full-content transfers
• Evaluate caching opportunities
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON with structure:
{
"token_estimates": [
{ "phase": "...", "estimated_tokens": 1000, "breakdown": {} }
],
"bottlenecks": [
{ "type": "...", "location": "...", "impact": "high|medium|low", "fix": "..." }
],
"optimization_suggestions": []
}
RULES: Focus on token efficiency, reduce redundancy
`;
```
### 5. custom - Custom Analysis
Run a custom analysis specified by the user.
```javascript
const analysisPrompt = `
PURPOSE: ${state.custom_analysis_purpose}
TASK: ${state.custom_analysis_tasks}
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: ${state.custom_analysis_expected}
RULES: ${state.custom_analysis_rules || 'Follow best practices'}
`;
```
## Execution
```javascript
async function executeGeminiAnalysis(state, workDir) {
  // 1. Determine the analysis type
const analysisType = state.analysis_type || determineAnalysisType(state);
  // 2. Build the analysis prompt
const prompt = buildAnalysisPrompt(analysisType, state);
  // 3. Build the CLI command
const cliCommand = `ccw cli -p "${escapeForShell(prompt)}" --tool gemini --mode analysis --cd "${state.target_skill.path}"`;
console.log(`Executing Gemini analysis: ${analysisType}`);
console.log(`Command: ${cliCommand}`);
  // 4. Execute the CLI (run in background)
const result = Bash({
command: cliCommand,
run_in_background: true,
timeout: 300000 // 5 minutes
});
  // 5. Await the result
  // Note: per the CLAUDE.md guidance, stop polling once the CLI runs in the background;
  // the result is written back into state after the CLI completes
return {
stateUpdates: {
gemini_analysis: {
type: analysisType,
status: 'running',
started_at: new Date().toISOString(),
task_id: result.task_id
}
},
outputFiles: [],
summary: `Gemini ${analysisType} analysis started in background`
};
}
function determineAnalysisType(state) {
  // Infer the analysis type from state
if (state.user_issue_description && state.user_issue_description.length > 100) {
return 'root_cause';
}
if (state.issues.some(i => i.severity === 'critical')) {
return 'root_cause';
}
if (state.focus_areas.includes('architecture')) {
return 'architecture';
}
if (state.focus_areas.includes('prompt')) {
return 'prompt_optimization';
}
if (state.focus_areas.includes('performance')) {
return 'performance';
}
  return 'root_cause'; // default
}
function buildAnalysisPrompt(type, state) {
const templates = {
root_cause: () => `
PURPOSE: Identify root cause of skill execution issue: ${state.user_issue_description}
TASK: • Analyze skill structure • Identify anti-patterns • Trace data flow issues • Check agent coordination
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON { root_causes: [], patterns_found: [], recommendations: [] }
RULES: Focus on execution flow, be specific about file:line locations
`,
architecture: () => `
PURPOSE: Review skill architecture for ${state.target_skill.name}
TASK: • Evaluate phase decomposition • Check state design • Assess agent coordination • Review extensibility
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: Markdown architecture assessment report
RULES: Focus on modularity and maintainability
`,
prompt_optimization: () => `
PURPOSE: Optimize prompts in skill for better output quality
TASK: • Analyze prompt clarity • Check output specifications • Evaluate constraint handling
MODE: analysis
CONTEXT: @phases/**/*.md
EXPECTED: JSON { prompt_issues: [], optimized_prompts: [] }
RULES: Preserve intent, improve clarity
`,
performance: () => `
PURPOSE: Analyze performance bottlenecks in skill
TASK: • Estimate token consumption • Identify redundancy • Check data transfer efficiency
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON { token_estimates: [], bottlenecks: [], optimization_suggestions: [] }
RULES: Focus on token efficiency
`,
custom: () => `
PURPOSE: ${state.custom_analysis_purpose}
TASK: ${state.custom_analysis_tasks}
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: ${state.custom_analysis_expected}
RULES: ${state.custom_analysis_rules || 'Best practices'}
`
};
return templates[type]();
}
function escapeForShell(str) {
  // Escape shell special characters for use inside a double-quoted bash string;
  // backslashes must be escaped first so later escapes are not double-processed
  return str.replace(/\\/g, '\\\\')
    .replace(/"/g, '\\"')
    .replace(/\$/g, '\\$')
    .replace(/`/g, '\\`');
}
```
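For instance (illustrative input), quotes, dollar signs, and backticks survive double-quoting after escaping:
```javascript
escapeForShell('Check "state.json" for `id` and $PATH');
// -> Check \"state.json\" for \`id\` and \$PATH
```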
## Output
### State Updates
```javascript
{
gemini_analysis: {
type: 'root_cause' | 'architecture' | 'prompt_optimization' | 'performance' | 'custom',
status: 'running' | 'completed' | 'failed',
started_at: '2024-01-01T00:00:00Z',
completed_at: '2024-01-01T00:05:00Z',
task_id: 'xxx',
    result: { /* analysis result */ },
error: null
},
  // analysis findings are merged into issues
issues: [
...state.issues,
...newIssuesFromAnalysis
]
}
```
### Output Files
- `${workDir}/diagnosis/gemini-analysis-${type}.json` - Raw analysis result
- `${workDir}/diagnosis/gemini-analysis-${type}.md` - Formatted report
## Post-Execution
After the analysis completes (a sketch follows the list):
1. Parse the CLI output into structured data
2. Extract newly found issues and merge them into state.issues
3. Update recommendations in state
4. Trigger the next action (usually action-generate-report or action-propose-fixes)
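A minimal sketch of steps 1-3, assuming the root_cause JSON shape shown above (the `GEM-` id prefix, `gemini_finding` type tag, and severity fallback are illustrative):
```javascript
const raw = Read(`${workDir}/diagnosis/gemini-analysis-root_cause.json`);
const parsed = JSON.parse(raw);
// Convert each root cause into the Issue shape used by the rest of the workflow
const newIssues = (parsed.root_causes || []).map((rc, i) => ({
  id: `GEM-${i + 1}`,                 // illustrative id scheme
  type: 'gemini_finding',             // illustrative type tag
  severity: rc.severity || 'medium',  // fall back when the CLI omits severity
  description: rc.description,
  evidence: rc.evidence || []
}));
updateState({
  gemini_analysis: { ...state.gemini_analysis, status: 'completed', completed_at: new Date().toISOString() },
  issues: [...state.issues, ...newIssues]
});
```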
## Error Handling
| Error | Recovery |
|-------|----------|
| CLI timeout | Retry once; if it still fails, skip the Gemini analysis |
| Parse failure | Save the raw output for manual handling |
| No result | Mark as skipped and continue the flow |
## User Interaction
If `state.analysis_type === null` and the type cannot be inferred automatically, ask the user:
```javascript
AskUserQuestion({
questions: [{
    question: 'Select a Gemini analysis type',
    header: 'Analysis Type',
    options: [
      { label: 'Root cause analysis', description: 'Deep analysis of the user-described issue' },
      { label: 'Architecture review', description: 'Evaluate the overall architecture' },
      { label: 'Prompt optimization', description: 'Analyze and optimize phase prompts' },
      { label: 'Performance analysis', description: 'Analyze token consumption and execution efficiency' }
    ],
multiSelect: false
}]
});
```

View File

@@ -0,0 +1,228 @@
# Action: Generate Consolidated Report
Generate a comprehensive tuning report merging all diagnosis results with prioritized recommendations.
## Purpose
- Merge all diagnosis results into unified report
- Prioritize issues by severity and impact
- Generate actionable recommendations
- Create human-readable markdown report
## Preconditions
- [ ] state.status === 'running'
- [ ] All diagnoses in focus_areas are completed
- [ ] state.issues.length > 0 (otherwise a summary-only report is generated)
## Execution
```javascript
async function execute(state, workDir) {
console.log('Generating consolidated tuning report...');
const targetSkill = state.target_skill;
const issues = state.issues;
// 1. Group issues by type
const issuesByType = {
context_explosion: issues.filter(i => i.type === 'context_explosion'),
memory_loss: issues.filter(i => i.type === 'memory_loss'),
dataflow_break: issues.filter(i => i.type === 'dataflow_break'),
agent_failure: issues.filter(i => i.type === 'agent_failure')
};
// 2. Group issues by severity
const issuesBySeverity = {
critical: issues.filter(i => i.severity === 'critical'),
high: issues.filter(i => i.severity === 'high'),
medium: issues.filter(i => i.severity === 'medium'),
low: issues.filter(i => i.severity === 'low')
};
// 3. Calculate overall health score
const weights = { critical: 25, high: 15, medium: 5, low: 1 };
const deductions = Object.entries(issuesBySeverity)
.reduce((sum, [sev, arr]) => sum + arr.length * weights[sev], 0);
const healthScore = Math.max(0, 100 - deductions);
// 4. Generate report content
const report = `# Skill Tuning Report
**Target Skill**: ${targetSkill.name}
**Path**: ${targetSkill.path}
**Execution Mode**: ${targetSkill.execution_mode}
**Generated**: ${new Date().toISOString()}
---
## Executive Summary
| Metric | Value |
|--------|-------|
| Health Score | ${healthScore}/100 |
| Total Issues | ${issues.length} |
| Critical | ${issuesBySeverity.critical.length} |
| High | ${issuesBySeverity.high.length} |
| Medium | ${issuesBySeverity.medium.length} |
| Low | ${issuesBySeverity.low.length} |
### User Reported Issue
> ${state.user_issue_description}
### Overall Assessment
${healthScore >= 80 ? '✅ Skill is in good health with minor issues.' :
healthScore >= 60 ? '⚠️ Skill has significant issues requiring attention.' :
healthScore >= 40 ? '🔶 Skill has serious issues affecting reliability.' :
'❌ Skill has critical issues requiring immediate fixes.'}
---
## Diagnosis Results
### Context Explosion Analysis
${state.diagnosis.context ?
`- **Status**: ${state.diagnosis.context.status}
- **Severity**: ${state.diagnosis.context.severity}
- **Issues Found**: ${state.diagnosis.context.issues_found}
- **Key Findings**: ${state.diagnosis.context.details.recommendations.join('; ') || 'None'}` :
'_Not analyzed_'}
### Long-tail Memory Analysis
${state.diagnosis.memory ?
`- **Status**: ${state.diagnosis.memory.status}
- **Severity**: ${state.diagnosis.memory.severity}
- **Issues Found**: ${state.diagnosis.memory.issues_found}
- **Key Findings**: ${state.diagnosis.memory.details.recommendations.join('; ') || 'None'}` :
'_Not analyzed_'}
### Data Flow Analysis
${state.diagnosis.dataflow ?
`- **Status**: ${state.diagnosis.dataflow.status}
- **Severity**: ${state.diagnosis.dataflow.severity}
- **Issues Found**: ${state.diagnosis.dataflow.issues_found}
- **Key Findings**: ${state.diagnosis.dataflow.details.recommendations.join('; ') || 'None'}` :
'_Not analyzed_'}
### Agent Coordination Analysis
${state.diagnosis.agent ?
`- **Status**: ${state.diagnosis.agent.status}
- **Severity**: ${state.diagnosis.agent.severity}
- **Issues Found**: ${state.diagnosis.agent.issues_found}
- **Key Findings**: ${state.diagnosis.agent.details.recommendations.join('; ') || 'None'}` :
'_Not analyzed_'}
---
## Critical & High Priority Issues
${issuesBySeverity.critical.length + issuesBySeverity.high.length === 0 ?
'_No critical or high priority issues found._' :
[...issuesBySeverity.critical, ...issuesBySeverity.high].map((issue, i) => `
### ${i + 1}. [${issue.severity.toUpperCase()}] ${issue.description}
- **ID**: ${issue.id}
- **Type**: ${issue.type}
- **Location**: ${typeof issue.location === 'object' ? issue.location.file : issue.location}
- **Root Cause**: ${issue.root_cause}
- **Impact**: ${issue.impact}
- **Suggested Fix**: ${issue.suggested_fix}
**Evidence**:
${issue.evidence.map(e => `- \`${e}\``).join('\n')}
`).join('\n')}
---
## Medium & Low Priority Issues
${issuesBySeverity.medium.length + issuesBySeverity.low.length === 0 ?
'_No medium or low priority issues found._' :
[...issuesBySeverity.medium, ...issuesBySeverity.low].map((issue, i) => `
### ${i + 1}. [${issue.severity.toUpperCase()}] ${issue.description}
- **ID**: ${issue.id}
- **Type**: ${issue.type}
- **Suggested Fix**: ${issue.suggested_fix}
`).join('\n')}
---
## Recommended Fix Order
Based on severity and dependencies, apply fixes in this order:
${[...issuesBySeverity.critical, ...issuesBySeverity.high, ...issuesBySeverity.medium]
.slice(0, 10)
.map((issue, i) => `${i + 1}. **${issue.id}**: ${issue.suggested_fix}`)
.join('\n')}
---
## Quality Gates
| Gate | Threshold | Current | Status |
|------|-----------|---------|--------|
| Critical Issues | 0 | ${issuesBySeverity.critical.length} | ${issuesBySeverity.critical.length === 0 ? '✅ PASS' : '❌ FAIL'} |
| High Issues | ≤ 2 | ${issuesBySeverity.high.length} | ${issuesBySeverity.high.length <= 2 ? '✅ PASS' : '❌ FAIL'} |
| Health Score | ≥ 60 | ${healthScore} | ${healthScore >= 60 ? '✅ PASS' : '❌ FAIL'} |
**Overall Quality Gate**: ${
issuesBySeverity.critical.length === 0 &&
issuesBySeverity.high.length <= 2 &&
healthScore >= 60 ? '✅ PASS' : '❌ FAIL'}
---
*Report generated by skill-tuning*
`;
// 5. Write report
Write(`${workDir}/tuning-report.md`, report);
// 6. Calculate quality gate
const qualityGate = issuesBySeverity.critical.length === 0 &&
issuesBySeverity.high.length <= 2 &&
healthScore >= 60 ? 'pass' :
healthScore >= 40 ? 'review' : 'fail';
return {
stateUpdates: {
quality_score: healthScore,
quality_gate: qualityGate,
issues_by_severity: {
critical: issuesBySeverity.critical.length,
high: issuesBySeverity.high.length,
medium: issuesBySeverity.medium.length,
low: issuesBySeverity.low.length
}
},
outputFiles: [`${workDir}/tuning-report.md`],
summary: `Report generated: ${issues.length} issues, health score ${healthScore}/100, gate: ${qualityGate}`
};
}
```
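For example, 1 critical, 2 high, and 3 medium issues deduct 25 + 2×15 + 3×5 = 70 points, leaving a health score of 30 and a quality gate of fail (below the 40 review threshold).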
## State Updates
```javascript
return {
stateUpdates: {
quality_score: <0-100>,
quality_gate: '<pass|review|fail>',
issues_by_severity: { critical: N, high: N, medium: N, low: N }
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Write error | Retry to alternative path |
| Empty issues | Generate summary with no issues |
## Next Actions
- If issues.length > 0: action-propose-fixes
- If issues.length === 0: action-complete

View File

@@ -0,0 +1,149 @@
# Action: Initialize Tuning Session
Initialize the skill-tuning session by collecting target skill information, creating work directories, and setting up initial state.
## Purpose
- Identify target skill to tune
- Collect user's problem description
- Create work directory structure
- Backup original skill files
- Initialize state for orchestrator
## Preconditions
- [ ] state.status === 'pending'
## Execution
```javascript
async function execute(state, workDir) {
// 1. Ask user for target skill
const skillInput = await AskUserQuestion({
questions: [{
question: "Which skill do you want to tune?",
header: "Target Skill",
multiSelect: false,
options: [
{ label: "Specify path", description: "Enter skill directory path" }
]
}]
});
const skillPath = skillInput["Target Skill"];
// 2. Validate skill exists and read structure
const skillMdPath = `${skillPath}/SKILL.md`;
if (!Glob(`${skillPath}/SKILL.md`).length) {
throw new Error(`Invalid skill path: ${skillPath} - SKILL.md not found`);
}
// 3. Read skill metadata
const skillMd = Read(skillMdPath);
const frontMatterMatch = skillMd.match(/^---\n([\s\S]*?)\n---/);
const skillName = frontMatterMatch
? frontMatterMatch[1].match(/name:\s*(.+)/)?.[1]?.trim()
: skillPath.split('/').pop();
// 4. Detect execution mode
const hasOrchestrator = Glob(`${skillPath}/phases/orchestrator.md`).length > 0;
const executionMode = hasOrchestrator ? 'autonomous' : 'sequential';
// 5. Scan skill structure
const phases = Glob(`${skillPath}/phases/**/*.md`).map(f => f.replace(skillPath + '/', ''));
const specs = Glob(`${skillPath}/specs/**/*.md`).map(f => f.replace(skillPath + '/', ''));
// 6. Ask for problem description
const issueInput = await AskUserQuestion({
questions: [{
question: "Describe the issue or what you want to optimize:",
header: "Issue",
multiSelect: false,
options: [
{ label: "Context grows too large", description: "Token explosion over multiple turns" },
{ label: "Instructions forgotten", description: "Early constraints lost in long execution" },
{ label: "Data inconsistency", description: "State format changes between phases" },
{ label: "Agent failures", description: "Sub-agent calls fail or return unexpected results" }
]
}]
});
// 7. Ask for focus areas
const focusInput = await AskUserQuestion({
questions: [{
question: "Which areas should be diagnosed? (Select all that apply)",
header: "Focus",
multiSelect: true,
options: [
{ label: "context", description: "Context explosion analysis" },
{ label: "memory", description: "Long-tail forgetting analysis" },
{ label: "dataflow", description: "Data flow analysis" },
{ label: "agent", description: "Agent coordination analysis" }
]
}]
});
const focusAreas = focusInput["Focus"] || ['context', 'memory', 'dataflow', 'agent'];
// 8. Create backup
const backupDir = `${workDir}/backups/${skillName}-backup`;
Bash(`mkdir -p "${backupDir}"`);
Bash(`cp -r "${skillPath}"/* "${backupDir}/"`);
// 9. Return state updates
return {
stateUpdates: {
status: 'running',
started_at: new Date().toISOString(),
target_skill: {
name: skillName,
path: skillPath,
execution_mode: executionMode,
phases: phases,
specs: specs
},
user_issue_description: issueInput["Issue"],
focus_areas: Array.isArray(focusAreas) ? focusAreas : [focusAreas],
work_dir: workDir,
backup_dir: backupDir
},
outputFiles: [],
summary: `Initialized tuning for "${skillName}" (${executionMode} mode), focus: ${focusAreas.join(', ')}`
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
status: 'running',
started_at: '<timestamp>',
target_skill: {
name: '<skill-name>',
path: '<skill-path>',
execution_mode: '<sequential|autonomous>',
phases: ['...'],
specs: ['...']
},
user_issue_description: '<user description>',
focus_areas: ['context', 'memory', ...],
work_dir: '<work-dir>',
backup_dir: '<backup-dir>'
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Skill path not found | Ask user to re-enter valid path |
| SKILL.md missing | Suggest path correction |
| Backup creation failed | Retry with alternative location |
## Next Actions
- Success: Continue to first diagnosis action based on focus_areas
- Failure: action-abort

View File

@@ -0,0 +1,317 @@
# Action: Propose Fixes
Generate fix proposals for identified issues with implementation strategies.
## Purpose
- Create fix strategies for each issue
- Generate implementation plans
- Estimate risk levels
- Allow user to select fixes to apply
## Preconditions
- [ ] state.status === 'running'
- [ ] state.issues.length > 0
- [ ] action-generate-report completed
## Fix Strategy Catalog
### Context Explosion Fixes
| Strategy | Description | Risk |
|----------|-------------|------|
| `context_summarization` | Add summarizer agent between phases | low |
| `sliding_window` | Keep only last N turns in context | low |
| `structured_state` | Replace text context with JSON state | medium |
| `path_reference` | Pass file paths instead of content | low |
### Memory Loss Fixes
| Strategy | Description | Risk |
|----------|-------------|------|
| `constraint_injection` | Add constraints to each phase prompt | low |
| `checkpoint_restore` | Save state at milestones | low |
| `goal_embedding` | Track goal similarity throughout | medium |
| `state_constraints_field` | Add constraints field to state schema | low |
### Data Flow Fixes
| Strategy | Description | Risk |
|----------|-------------|------|
| `state_centralization` | Single state.json for all data | medium |
| `schema_enforcement` | Add Zod validation | low |
| `field_normalization` | Normalize field names | low |
| `transactional_updates` | Atomic state updates | medium |
### Agent Coordination Fixes
| Strategy | Description | Risk |
|----------|-------------|------|
| `error_wrapping` | Add try-catch to all Task calls | low |
| `result_validation` | Validate agent returns | low |
| `orchestrator_refactor` | Centralize agent coordination | high |
| `flatten_nesting` | Remove nested agent calls | medium |
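Each selected strategy becomes a fix proposal record; the field values below are placeholders, but the shape matches what the execution code produces:
```javascript
{
  id: 'FIX-1',
  issue_ids: ['CTX-001'],       // issues this fix addresses
  strategy: 'sliding_window',   // one of the catalog strategies above
  description: '...',
  rationale: '...',
  changes: [{ file: 'phases/orchestrator.md', action: 'modify', diff: '...' }],
  risk: 'low',
  estimated_impact: '...',
  verification_steps: ['...']
}
```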
## Execution
```javascript
async function execute(state, workDir) {
console.log('Generating fix proposals...');
const issues = state.issues;
const fixes = [];
// Group issues by type for batch fixes
const issuesByType = {
context_explosion: issues.filter(i => i.type === 'context_explosion'),
memory_loss: issues.filter(i => i.type === 'memory_loss'),
dataflow_break: issues.filter(i => i.type === 'dataflow_break'),
agent_failure: issues.filter(i => i.type === 'agent_failure')
};
// Generate fixes for context explosion
if (issuesByType.context_explosion.length > 0) {
const ctxIssues = issuesByType.context_explosion;
if (ctxIssues.some(i => i.description.includes('history accumulation'))) {
fixes.push({
id: `FIX-${fixes.length + 1}`,
issue_ids: ctxIssues.filter(i => i.description.includes('history')).map(i => i.id),
strategy: 'sliding_window',
description: 'Implement sliding window for conversation history',
rationale: 'Prevents unbounded context growth by keeping only recent turns',
changes: [{
file: 'phases/orchestrator.md',
action: 'modify',
diff: `+ const MAX_HISTORY = 5;
+ state.history = state.history.slice(-MAX_HISTORY);`
}],
risk: 'low',
estimated_impact: 'Reduces token usage by ~50%',
verification_steps: ['Run skill with 10+ iterations', 'Verify context size stable']
});
}
if (ctxIssues.some(i => i.description.includes('full content'))) {
fixes.push({
id: `FIX-${fixes.length + 1}`,
issue_ids: ctxIssues.filter(i => i.description.includes('content')).map(i => i.id),
strategy: 'path_reference',
description: 'Pass file paths instead of full content',
rationale: 'Agents can read files when needed, reducing prompt size',
changes: [{
file: 'phases/*.md',
action: 'modify',
diff: `- prompt: \${content}
+ prompt: Read file at: \${filePath}`
}],
risk: 'low',
estimated_impact: 'Significant token reduction',
verification_steps: ['Verify agents can still access needed content']
});
}
}
// Generate fixes for memory loss
if (issuesByType.memory_loss.length > 0) {
const memIssues = issuesByType.memory_loss;
if (memIssues.some(i => i.description.includes('constraint'))) {
fixes.push({
id: `FIX-${fixes.length + 1}`,
issue_ids: memIssues.filter(i => i.description.includes('constraint')).map(i => i.id),
strategy: 'constraint_injection',
description: 'Add constraint injection to all phases',
rationale: 'Ensures original requirements are visible in every phase',
changes: [{
file: 'phases/*.md',
action: 'modify',
diff: `+ [CONSTRAINTS]
+ Original requirements from state.original_requirements:
+ \${JSON.stringify(state.original_requirements)}`
}],
risk: 'low',
estimated_impact: 'Improves constraint adherence',
verification_steps: ['Run skill with specific constraints', 'Verify output matches']
});
}
if (memIssues.some(i => i.description.includes('State schema'))) {
fixes.push({
id: `FIX-${fixes.length + 1}`,
issue_ids: memIssues.filter(i => i.description.includes('schema')).map(i => i.id),
strategy: 'state_constraints_field',
description: 'Add original_requirements field to state schema',
rationale: 'Preserves original intent throughout execution',
changes: [{
file: 'phases/state-schema.md',
action: 'modify',
diff: `+ original_requirements: string[]; // User's original constraints
+ goal_summary: string; // One-line goal statement`
}],
risk: 'low',
estimated_impact: 'Enables constraint tracking',
verification_steps: ['Verify state includes requirements after init']
});
}
}
// Generate fixes for data flow
if (issuesByType.dataflow_break.length > 0) {
const dfIssues = issuesByType.dataflow_break;
if (dfIssues.some(i => i.description.includes('multiple locations'))) {
fixes.push({
id: `FIX-${fixes.length + 1}`,
issue_ids: dfIssues.filter(i => i.description.includes('location')).map(i => i.id),
strategy: 'state_centralization',
description: 'Centralize all state to single state.json',
rationale: 'Single source of truth prevents inconsistencies',
changes: [{
file: 'phases/*.md',
action: 'modify',
diff: `- Write(\`\${workDir}/config.json\`, ...)
+ updateState({ config: ... }) // Use state manager`
}],
risk: 'medium',
estimated_impact: 'Eliminates state fragmentation',
verification_steps: ['Verify all reads come from state.json', 'Test state persistence']
});
}
if (dfIssues.some(i => i.description.includes('validation'))) {
fixes.push({
id: `FIX-${fixes.length + 1}`,
issue_ids: dfIssues.filter(i => i.description.includes('validation')).map(i => i.id),
strategy: 'schema_enforcement',
description: 'Add Zod schema validation',
rationale: 'Runtime validation catches schema violations',
changes: [{
file: 'phases/state-schema.md',
action: 'modify',
diff: `+ import { z } from 'zod';
+ const StateSchema = z.object({...});
+ function validateState(s) { return StateSchema.parse(s); }`
}],
risk: 'low',
estimated_impact: 'Catches invalid state early',
verification_steps: ['Test with invalid state input', 'Verify error thrown']
});
}
}
// Generate fixes for agent coordination
if (issuesByType.agent_failure.length > 0) {
const agentIssues = issuesByType.agent_failure;
if (agentIssues.some(i => i.description.includes('error handling'))) {
fixes.push({
id: `FIX-${fixes.length + 1}`,
issue_ids: agentIssues.filter(i => i.description.includes('error')).map(i => i.id),
strategy: 'error_wrapping',
description: 'Wrap all Task calls in try-catch',
rationale: 'Prevents cascading failures from agent errors',
changes: [{
file: 'phases/*.md',
action: 'modify',
diff: `+ try {
const result = await Task({...});
+ if (!result) throw new Error('Empty result');
+ } catch (e) {
+ updateState({ errors: [...errors, e.message], error_count: error_count + 1 });
+ }`
}],
risk: 'low',
estimated_impact: 'Improves error resilience',
verification_steps: ['Simulate agent failure', 'Verify graceful handling']
});
}
if (agentIssues.some(i => i.description.includes('nested'))) {
fixes.push({
id: `FIX-${fixes.length + 1}`,
issue_ids: agentIssues.filter(i => i.description.includes('nested')).map(i => i.id),
strategy: 'flatten_nesting',
description: 'Flatten nested agent calls',
rationale: 'Reduces complexity and context explosion',
changes: [{
file: 'phases/orchestrator.md',
action: 'modify',
diff: `// Instead of agent calling agent:
// Agent A returns {needs_agent_b: true}
// Orchestrator sees this and calls Agent B next`
}],
risk: 'medium',
estimated_impact: 'Reduces nesting depth',
verification_steps: ['Verify no nested Task calls', 'Test agent chaining via orchestrator']
});
}
}
// Write fix proposals
Write(`${workDir}/fixes/fix-proposals.json`, JSON.stringify(fixes, null, 2));
// Ask user to select fixes to apply
const fixOptions = fixes.slice(0, 4).map(f => ({
label: f.id,
description: `[${f.risk.toUpperCase()} risk] ${f.description}`
}));
if (fixOptions.length > 0) {
const selection = await AskUserQuestion({
questions: [{
question: 'Which fixes would you like to apply?',
header: 'Fixes',
multiSelect: true,
options: fixOptions
}]
});
const selectedFixIds = Array.isArray(selection['Fixes'])
? selection['Fixes']
: [selection['Fixes']];
return {
stateUpdates: {
proposed_fixes: fixes,
pending_fixes: selectedFixIds.filter(id => id && fixes.some(f => f.id === id))
},
outputFiles: [`${workDir}/fixes/fix-proposals.json`],
summary: `Generated ${fixes.length} fix proposals, ${selectedFixIds.length} selected for application`
};
}
return {
stateUpdates: {
proposed_fixes: fixes,
pending_fixes: []
},
outputFiles: [`${workDir}/fixes/fix-proposals.json`],
summary: `Generated ${fixes.length} fix proposals (none selected)`
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
proposed_fixes: [...fixes],
pending_fixes: [...selectedFixIds]
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| No issues to fix | Skip to action-complete |
| User cancels selection | Set pending_fixes to empty |
## Next Actions
- If pending_fixes.length > 0: action-apply-fix
- If pending_fixes.length === 0: action-complete

View File

@@ -0,0 +1,222 @@
# Action: Verify Applied Fixes
Verify that applied fixes resolved the targeted issues.
## Purpose
- Re-run relevant diagnostics
- Compare before/after issue counts
- Update verification status
- Determine if more iterations needed
## Preconditions
- [ ] state.status === 'running'
- [ ] state.applied_fixes.length > 0
- [ ] Some applied_fixes have verification_result === 'pending'
## Execution
```javascript
async function execute(state, workDir) {
console.log('Verifying applied fixes...');
const appliedFixes = state.applied_fixes.filter(f => f.verification_result === 'pending');
if (appliedFixes.length === 0) {
return {
stateUpdates: {},
outputFiles: [],
summary: 'No fixes pending verification'
};
}
const verificationResults = [];
for (const fix of appliedFixes) {
const proposedFix = state.proposed_fixes.find(f => f.id === fix.fix_id);
if (!proposedFix) {
verificationResults.push({
fix_id: fix.fix_id,
result: 'fail',
reason: 'Fix definition not found'
});
continue;
}
// Determine which diagnosis to re-run based on fix strategy
const strategyToDiagnosis = {
'context_summarization': 'context',
'sliding_window': 'context',
'structured_state': 'context',
'path_reference': 'context',
'constraint_injection': 'memory',
'checkpoint_restore': 'memory',
'goal_embedding': 'memory',
'state_constraints_field': 'memory',
'state_centralization': 'dataflow',
'schema_enforcement': 'dataflow',
'field_normalization': 'dataflow',
'transactional_updates': 'dataflow',
'error_wrapping': 'agent',
'result_validation': 'agent',
'orchestrator_refactor': 'agent',
'flatten_nesting': 'agent'
};
const diagnosisType = strategyToDiagnosis[proposedFix.strategy];
// For now, do a lightweight verification
// Full implementation would re-run the specific diagnosis
// Check if the fix was actually applied (look for markers)
const targetPath = state.target_skill.path;
const fixMarker = `Applied fix ${fix.fix_id}`;
let fixFound = false;
const allFiles = Glob(`${targetPath}/**/*.md`);
for (const file of allFiles) {
const content = Read(file);
if (content.includes(fixMarker)) {
fixFound = true;
break;
}
}
if (fixFound) {
// Verify by checking if original issues still exist
const relatedIssues = proposedFix.issue_ids;
const originalIssueCount = relatedIssues.length;
// Simplified verification: assume fix worked if marker present
// Real implementation would re-run diagnosis patterns
verificationResults.push({
fix_id: fix.fix_id,
result: 'pass',
reason: `Fix applied successfully, addressing ${originalIssueCount} issues`,
issues_resolved: relatedIssues
});
} else {
verificationResults.push({
fix_id: fix.fix_id,
result: 'fail',
reason: 'Fix marker not found in target files'
});
}
}
// Update applied fixes with verification results
const updatedAppliedFixes = state.applied_fixes.map(fix => {
const result = verificationResults.find(v => v.fix_id === fix.fix_id);
if (result) {
return {
...fix,
verification_result: result.result
};
}
return fix;
});
// Calculate new quality score
const passedFixes = verificationResults.filter(v => v.result === 'pass').length;
const totalFixes = verificationResults.length;
const verificationRate = totalFixes > 0 ? (passedFixes / totalFixes) * 100 : 100;
// Recalculate issues (remove resolved ones)
const resolvedIssueIds = verificationResults
.filter(v => v.result === 'pass')
.flatMap(v => v.issues_resolved || []);
const remainingIssues = state.issues.filter(i => !resolvedIssueIds.includes(i.id));
// Recalculate quality score
const weights = { critical: 25, high: 15, medium: 5, low: 1 };
const deductions = remainingIssues.reduce((sum, issue) =>
sum + (weights[issue.severity] || 0), 0);
const newHealthScore = Math.max(0, 100 - deductions);
// Determine new quality gate
const remainingCritical = remainingIssues.filter(i => i.severity === 'critical').length;
const remainingHigh = remainingIssues.filter(i => i.severity === 'high').length;
const newQualityGate = remainingCritical === 0 && remainingHigh <= 2 && newHealthScore >= 60
? 'pass'
: newHealthScore >= 40 ? 'review' : 'fail';
// Increment iteration count
const newIterationCount = state.iteration_count + 1;
// Ask user if they want to continue
let continueIteration = false;
if (newQualityGate !== 'pass' && newIterationCount < state.max_iterations) {
const continueResponse = await AskUserQuestion({
questions: [{
question: `Verification complete. Quality gate: ${newQualityGate}. Continue with another iteration?`,
header: 'Continue',
multiSelect: false,
options: [
{ label: 'Yes', description: `Run iteration ${newIterationCount + 1}` },
{ label: 'No', description: 'Finish with current state' }
]
}]
});
continueIteration = continueResponse['Continue'] === 'Yes';
}
// If continuing, reset diagnosis for re-evaluation
const diagnosisReset = continueIteration ? {
'diagnosis.context': null,
'diagnosis.memory': null,
'diagnosis.dataflow': null,
'diagnosis.agent': null
} : {};
return {
stateUpdates: {
applied_fixes: updatedAppliedFixes,
issues: remainingIssues,
quality_score: newHealthScore,
quality_gate: newQualityGate,
iteration_count: newIterationCount,
...diagnosisReset,
issues_by_severity: {
critical: remainingIssues.filter(i => i.severity === 'critical').length,
high: remainingIssues.filter(i => i.severity === 'high').length,
medium: remainingIssues.filter(i => i.severity === 'medium').length,
low: remainingIssues.filter(i => i.severity === 'low').length
}
},
outputFiles: [],
summary: `Verified ${totalFixes} fixes: ${passedFixes} passed. Score: ${newHealthScore}, Gate: ${newQualityGate}, Iteration: ${newIterationCount}`
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
applied_fixes: [...updatedWithVerificationResults],
issues: [...remainingIssues],
quality_score: newScore,
quality_gate: newGate,
iteration_count: iteration + 1
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Re-diagnosis fails | Mark as 'inconclusive' |
| File access error | Skip file verification |
## Next Actions
- If quality_gate === 'pass': action-complete
- If user chose to continue: restart diagnosis cycle
- If max_iterations reached: action-complete

View File

@@ -0,0 +1,377 @@
# Orchestrator
Autonomous orchestrator for skill-tuning workflow. Reads current state and selects the next action based on diagnosis progress and quality gates.
## Role
Drive the tuning workflow by:
1. Reading current session state
2. Selecting the appropriate next action
3. Executing the action via sub-agent
4. Updating state with results
5. Repeating until termination conditions met
## State Management
### Read State
```javascript
const state = JSON.parse(Read(`${workDir}/state.json`));
```
### Update State
```javascript
function updateState(updates) {
const state = JSON.parse(Read(`${workDir}/state.json`));
const newState = {
...state,
...updates,
updated_at: new Date().toISOString()
};
Write(`${workDir}/state.json`, JSON.stringify(newState, null, 2));
return newState;
}
```
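Each call is a shallow merge that also stamps `updated_at`; for example:
```javascript
// Mark the session as running once initialization completes
updateState({ status: 'running', started_at: new Date().toISOString() });
```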
## Decision Logic
```javascript
function selectNextAction(state) {
// === Termination Checks ===
// User exit
if (state.status === 'user_exit') return null;
// Completed
if (state.status === 'completed') return null;
// Error limit exceeded
if (state.error_count >= state.max_errors) {
return 'action-abort';
}
// Max iterations exceeded
if (state.iteration_count >= state.max_iterations) {
return 'action-complete';
}
// === Action Selection ===
// 1. Not initialized yet
if (state.status === 'pending') {
return 'action-init';
}
  // 1.5. Requirement analysis (after init, before diagnosis)
if (state.status === 'running' &&
state.completed_actions.includes('action-init') &&
!state.completed_actions.includes('action-analyze-requirements')) {
return 'action-analyze-requirements';
}
  // 1.6. If requirement analysis found ambiguities, pause and wait for the user
if (state.requirement_analysis?.status === 'needs_clarification') {
    return null; // resume after the user clarifies
}
  // 1.7. If requirement-analysis coverage is insufficient, run Gemini deep analysis first
if (state.requirement_analysis?.coverage?.status === 'unsatisfied' &&
!state.completed_actions.includes('action-gemini-analysis')) {
return 'action-gemini-analysis';
}
// 2. Check if Gemini analysis is requested or needed
if (shouldTriggerGeminiAnalysis(state)) {
return 'action-gemini-analysis';
}
// 3. Check if Gemini analysis is running
if (state.gemini_analysis?.status === 'running') {
// Wait for Gemini analysis to complete
return null; // Orchestrator will be re-triggered when CLI completes
}
// 4. Run diagnosis in order (only if not completed)
const diagnosisOrder = ['context', 'memory', 'dataflow', 'agent', 'docs', 'token_consumption'];
for (const diagType of diagnosisOrder) {
if (state.diagnosis[diagType] === null) {
// Check if user wants to skip this diagnosis
if (!state.focus_areas.length || state.focus_areas.includes(diagType)) {
return `action-diagnose-${diagType}`;
}
// For docs diagnosis, also check 'all' focus_area
if (diagType === 'docs' && state.focus_areas.includes('all')) {
return 'action-diagnose-docs';
}
}
}
// 5. All diagnosis complete, generate report if not done
const allDiagnosisComplete = diagnosisOrder.every(
d => state.diagnosis[d] !== null || !state.focus_areas.includes(d)
);
if (allDiagnosisComplete && !state.completed_actions.includes('action-generate-report')) {
return 'action-generate-report';
}
// 6. Report generated, propose fixes if not done
if (state.completed_actions.includes('action-generate-report') &&
state.proposed_fixes.length === 0 &&
state.issues.length > 0) {
return 'action-propose-fixes';
}
// 7. Fixes proposed, check if user wants to apply
if (state.proposed_fixes.length > 0 && state.pending_fixes.length > 0) {
return 'action-apply-fix';
}
// 8. Fixes applied, verify
if (state.applied_fixes.length > 0 &&
state.applied_fixes.some(f => f.verification_result === 'pending')) {
return 'action-verify';
}
// 9. Quality gate check
if (state.quality_gate === 'pass') {
return 'action-complete';
}
// 10. More iterations needed
if (state.iteration_count < state.max_iterations &&
state.quality_gate !== 'pass' &&
state.issues.some(i => i.severity === 'critical' || i.severity === 'high')) {
// Reset diagnosis for re-evaluation
return 'action-diagnose-context'; // Start new iteration
}
// 11. Default: complete
return 'action-complete';
}
/**
 * Decide whether a Gemini CLI analysis should be triggered
*/
function shouldTriggerGeminiAnalysis(state) {
  // Gemini analysis already completed; do not trigger again
if (state.gemini_analysis?.status === 'completed') {
return false;
}
  // Explicit user request
if (state.gemini_analysis_requested === true) {
return true;
}
  // Critical issues found and no deep analysis has run yet
if (state.issues.some(i => i.severity === 'critical') &&
!state.completed_actions.includes('action-gemini-analysis')) {
return true;
}
  // The user specified focus_areas that require Gemini analysis
const geminiAreas = ['architecture', 'prompt', 'performance', 'custom'];
if (state.focus_areas.some(area => geminiAreas.includes(area))) {
return true;
}
  // Standard diagnosis finished but issues remain unresolved; deep analysis is needed
const diagnosisComplete = ['context', 'memory', 'dataflow', 'agent', 'docs'].every(
d => state.diagnosis[d] !== null
);
if (diagnosisComplete &&
state.issues.length > 0 &&
state.iteration_count > 0 &&
!state.completed_actions.includes('action-gemini-analysis')) {
    // If issues persist into a second iteration, trigger Gemini analysis
return true;
}
return false;
}
```
## Execution Loop
```javascript
async function runOrchestrator(workDir) {
console.log('=== Skill Tuning Orchestrator Started ===');
let iteration = 0;
const MAX_LOOP_ITERATIONS = 50; // Safety limit
while (iteration < MAX_LOOP_ITERATIONS) {
iteration++;
// 1. Read current state
const state = JSON.parse(Read(`${workDir}/state.json`));
console.log(`[Loop ${iteration}] Status: ${state.status}, Action: ${state.current_action}`);
// 2. Select next action
const actionId = selectNextAction(state);
if (!actionId) {
console.log('No action selected, terminating orchestrator.');
break;
}
console.log(`[Loop ${iteration}] Executing: ${actionId}`);
// 3. Update state: current action
// FIX CTX-001: sliding window for action_history (keep last 10)
    const newHistory = [...state.action_history, {
      action: actionId,
      started_at: new Date().toISOString(),
      completed_at: null,
      result: null,
      output_files: []
    }].slice(-10); // Sliding window: prevent unbounded growth
    updateState({
      current_action: actionId,
      action_history: newHistory
    });
// 4. Execute action
try {
const actionPrompt = Read(`phases/actions/${actionId}.md`);
// FIX CTX-003: Pass state path + key fields only instead of full state
const stateKeyInfo = {
status: state.status,
iteration_count: state.iteration_count,
issues_by_severity: state.issues_by_severity,
quality_gate: state.quality_gate,
current_action: state.current_action,
completed_actions: state.completed_actions,
user_issue_description: state.user_issue_description,
target_skill: { name: state.target_skill.name, path: state.target_skill.path }
};
const stateKeyJson = JSON.stringify(stateKeyInfo, null, 2);
const result = await Task({
subagent_type: 'universal-executor',
run_in_background: false,
prompt: `
[CONTEXT]
You are executing action "${actionId}" for skill-tuning workflow.
Work directory: ${workDir}
[STATE KEY INFO]
${stateKeyJson}
[FULL STATE PATH]
${workDir}/state.json
(Read full state from this file if you need additional fields)
[ACTION INSTRUCTIONS]
${actionPrompt}
[OUTPUT REQUIREMENT]
After completing the action:
1. Write any output files to the work directory
2. Return a JSON object with:
- stateUpdates: object with state fields to update
- outputFiles: array of files created
- summary: brief description of what was done
`
});
// 5. Parse result and update state
let actionResult;
try {
actionResult = JSON.parse(result);
} catch (e) {
actionResult = {
stateUpdates: {},
outputFiles: [],
summary: result
};
}
      // 6. Update state: action complete (the entry appended in step 3 is the last one
      // in newHistory; the pre-loop `state` snapshot does not contain it)
      const updatedHistory = [...newHistory];
updatedHistory[updatedHistory.length - 1] = {
...updatedHistory[updatedHistory.length - 1],
completed_at: new Date().toISOString(),
result: 'success',
output_files: actionResult.outputFiles || []
};
updateState({
current_action: null,
completed_actions: [...state.completed_actions, actionId],
action_history: updatedHistory,
...actionResult.stateUpdates
});
console.log(`[Loop ${iteration}] Completed: ${actionId}`);
} catch (error) {
console.log(`[Loop ${iteration}] Error in ${actionId}: ${error.message}`);
// Error handling
// FIX CTX-002: sliding window for errors (keep last 5)
updateState({
current_action: null,
errors: [...state.errors, {
action: actionId,
message: error.message,
timestamp: new Date().toISOString(),
recoverable: true
}].slice(-5), // Sliding window: prevent unbounded growth
error_count: state.error_count + 1
});
}
}
console.log('=== Skill Tuning Orchestrator Finished ===');
}
```
## Action Catalog
| Action | Purpose | Preconditions | Effects |
|--------|---------|---------------|---------|
| [action-init](actions/action-init.md) | Initialize tuning session | status === 'pending' | Creates work dirs, backup, sets status='running' |
| [action-analyze-requirements](actions/action-analyze-requirements.md) | Analyze user requirements | init completed | Sets requirement_analysis, optimizes focus_areas |
| [action-diagnose-context](actions/action-diagnose-context.md) | Analyze context explosion | status === 'running' | Sets diagnosis.context |
| [action-diagnose-memory](actions/action-diagnose-memory.md) | Analyze long-tail forgetting | status === 'running' | Sets diagnosis.memory |
| [action-diagnose-dataflow](actions/action-diagnose-dataflow.md) | Analyze data flow issues | status === 'running' | Sets diagnosis.dataflow |
| [action-diagnose-agent](actions/action-diagnose-agent.md) | Analyze agent coordination | status === 'running' | Sets diagnosis.agent |
| [action-diagnose-docs](actions/action-diagnose-docs.md) | Analyze documentation structure | status === 'running', focus includes 'docs' | Sets diagnosis.docs |
| [action-gemini-analysis](actions/action-gemini-analysis.md) | Deep analysis via Gemini CLI | User request OR critical issues | Sets gemini_analysis, adds issues |
| [action-generate-report](actions/action-generate-report.md) | Generate consolidated report | All diagnoses complete | Creates tuning-report.md |
| [action-propose-fixes](actions/action-propose-fixes.md) | Generate fix proposals | Report generated, issues > 0 | Sets proposed_fixes |
| [action-apply-fix](actions/action-apply-fix.md) | Apply selected fix | pending_fixes > 0 | Updates applied_fixes |
| [action-verify](actions/action-verify.md) | Verify applied fixes | applied_fixes with pending verification | Updates verification_result |
| [action-complete](actions/action-complete.md) | Finalize session | quality_gate='pass' OR max_iterations | Sets status='completed' |
| [action-abort](actions/action-abort.md) | Abort on errors | error_count >= max_errors | Sets status='failed' |
## Termination Conditions
- `status === 'completed'`: Normal completion
- `status === 'user_exit'`: User requested exit
- `status === 'failed'`: Unrecoverable error
- `requirement_analysis.status === 'needs_clarification'`: Waiting for user clarification (pauses rather than terminates)
- `error_count >= max_errors`: Too many errors (default: 3)
- `iteration_count >= max_iterations`: Max iterations reached (default: 5)
- `quality_gate === 'pass'`: All quality criteria met
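Taken together these conditions form the loop guard; a minimal sketch, assuming the state fields defined in the state schema (`shouldStop` is an illustrative name, and the clarification case is a pause awaiting user input rather than a hard stop):
```javascript
// Sketch: evaluate the termination conditions listed above.
function shouldStop(state) {
  if (['completed', 'user_exit', 'failed'].includes(state.status)) return true;
  // Pause, not termination: the loop yields until the user clarifies.
  if (state.requirement_analysis?.status === 'needs_clarification') return true;
  if (state.error_count >= state.max_errors) return true;
  if (state.iteration_count >= state.max_iterations) return true;
  if (state.quality_gate === 'pass') return true;
  return false;
}
```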
## Error Recovery
| Error Type | Recovery Strategy |
|------------|-------------------|
| Action execution failed | Retry up to 3 times, then skip |
| State parse error | Restore from backup |
| File write error | Retry with alternative path |
| User abort | Save state and exit gracefully |
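For the first row, a sketch of retry-then-skip around the orchestrator's `executeAction` dispatcher (the wrapper name and fallback shape are illustrative; the fallback mirrors the `actionResult` object used in the main loop):
```javascript
// Sketch: retry a failed action up to maxRetries, then skip it.
async function executeWithRetry(actionId, state, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await executeAction(actionId, state);
    } catch (error) {
      console.log(`[Retry ${attempt}/${maxRetries}] ${actionId}: ${error.message}`);
    }
  }
  // Skip: return an empty result so the loop can move on.
  return { stateUpdates: {}, outputFiles: [], summary: `skipped after ${maxRetries} failures` };
}
```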
## User Interaction Points
The orchestrator pauses for user input at these points:
1. **action-init**: Confirm target skill and describe issue
2. **action-propose-fixes**: Select which fixes to apply
3. **action-verify**: Review verification results, decide to continue or stop
4. **action-complete**: Review final summary

View File

@@ -0,0 +1,378 @@
# State Schema
Defines the state structure for skill-tuning orchestrator.
## State Structure
```typescript
interface TuningState {
// === Core Status ===
  status: 'pending' | 'running' | 'completed' | 'failed' | 'user_exit';
started_at: string; // ISO timestamp
updated_at: string; // ISO timestamp
// === Target Skill Info ===
target_skill: {
name: string; // e.g., "software-manual"
path: string; // e.g., ".claude/skills/software-manual"
execution_mode: 'sequential' | 'autonomous';
phases: string[]; // List of phase files
specs: string[]; // List of spec files
};
// === User Input ===
user_issue_description: string; // User's problem description
focus_areas: string[]; // User-specified focus (optional)
// === Diagnosis Results ===
diagnosis: {
context: DiagnosisResult | null;
memory: DiagnosisResult | null;
dataflow: DiagnosisResult | null;
agent: DiagnosisResult | null;
    docs: DocsDiagnosisResult | null; // documentation structure diagnosis
    token_consumption: DiagnosisResult | null; // token consumption diagnosis
};
// === Issues Found ===
issues: Issue[];
issues_by_severity: {
critical: number;
high: number;
medium: number;
low: number;
};
// === Fix Management ===
proposed_fixes: Fix[];
applied_fixes: AppliedFix[];
pending_fixes: string[]; // Fix IDs pending application
// === Iteration Control ===
iteration_count: number;
max_iterations: number; // Default: 5
// === Quality Metrics ===
quality_score: number; // 0-100
quality_gate: 'pass' | 'review' | 'fail';
// === Orchestrator State ===
completed_actions: string[];
current_action: string | null;
action_history: ActionHistoryEntry[];
// === Error Handling ===
errors: ErrorEntry[];
error_count: number;
max_errors: number; // Default: 3
// === Output Paths ===
work_dir: string;
backup_dir: string;
// === Final Report (consolidated output) ===
final_report: string | null; // Markdown summary generated on completion
  // === Requirement Analysis (new) ===
requirement_analysis: RequirementAnalysis | null;
}
interface RequirementAnalysis {
status: 'pending' | 'completed' | 'needs_clarification';
analyzed_at: string;
  // Phase 1: Dimension decomposition
  dimensions: Dimension[];
  // Phase 2: Spec matching
  spec_matches: SpecMatch[];
  // Phase 3: Coverage
  coverage: {
    total_dimensions: number;
    with_detection: number; // has a taxonomy pattern
    with_fix_strategy: number; // has a tuning strategy (the satisfaction criterion)
    coverage_rate: number; // 0-100%
    status: 'satisfied' | 'partial' | 'unsatisfied';
  };
  // Phase 4: Ambiguities
ambiguities: Ambiguity[];
}
interface Dimension {
  id: string; // e.g., "DIM-001"
  description: string; // description of the concern
  keywords: string[]; // keywords
  inferred_category: string; // inferred issue category
  confidence: number; // confidence 0-1
}
interface SpecMatch {
dimension_id: string;
taxonomy_match: {
category: string; // e.g., "context_explosion"
pattern_ids: string[]; // e.g., ["CTX-001", "CTX-003"]
severity_hint: string;
} | null;
strategy_match: {
strategies: string[]; // e.g., ["sliding_window", "path_reference"]
risk_levels: string[];
} | null;
  has_fix: boolean; // core of the satisfaction check
}
interface Ambiguity {
dimension_id: string;
type: 'multi_category' | 'vague_description' | 'conflicting_keywords';
description: string;
possible_interpretations: string[];
needs_clarification: boolean;
}
interface DiagnosisResult {
status: 'completed' | 'skipped' | 'failed';
issues_found: number;
severity: 'critical' | 'high' | 'medium' | 'low' | 'none';
execution_time_ms: number;
details: {
patterns_checked: string[];
patterns_matched: string[];
evidence: Evidence[];
recommendations: string[];
};
}
interface DocsDiagnosisResult extends DiagnosisResult {
redundancies: Redundancy[];
conflicts: Conflict[];
}
interface Redundancy {
id: string; // e.g., "DOC-RED-001"
type: 'state_schema' | 'strategy_mapping' | 'type_definition' | 'other';
  files: string[]; // files involved
  description: string; // description of the redundancy
  severity: 'high' | 'medium' | 'low';
  merge_suggestion: string; // suggested consolidation
}
interface Conflict {
id: string; // e.g., "DOC-CON-001"
type: 'priority' | 'mapping' | 'definition';
  files: string[]; // files involved
  key: string; // conflicting key/concept
definitions: {
file: string;
value: string;
location?: string;
}[];
  resolution_suggestion: string; // suggested resolution
}
interface Evidence {
file: string;
line?: number;
pattern: string;
context: string;
severity: string;
}
interface Issue {
id: string; // e.g., "ISS-001"
type: 'context_explosion' | 'memory_loss' | 'dataflow_break' | 'agent_failure' | 'token_consumption';
severity: 'critical' | 'high' | 'medium' | 'low';
priority: number; // 1 = highest
location: {
file: string;
line_start?: number;
line_end?: number;
phase?: string;
};
description: string;
evidence: string[];
root_cause: string;
impact: string;
suggested_fix: string;
related_issues: string[]; // Issue IDs
}
interface Fix {
id: string; // e.g., "FIX-001"
issue_ids: string[]; // Issues this fix addresses
strategy: FixStrategy;
description: string;
rationale: string;
changes: FileChange[];
risk: 'low' | 'medium' | 'high';
estimated_impact: string;
verification_steps: string[];
}
type FixStrategy =
| 'context_summarization' // Add context compression
| 'sliding_window' // Implement sliding context window
| 'structured_state' // Convert to structured state passing
| 'constraint_injection' // Add constraint propagation
| 'checkpoint_restore' // Add checkpointing mechanism
| 'schema_enforcement' // Add data contract validation
| 'orchestrator_refactor' // Refactor agent coordination
| 'state_centralization' // Centralize state management
| 'prompt_compression' // Extract static text, use templates
| 'lazy_loading' // Pass paths instead of content
| 'output_minimization' // Return minimal structured JSON
| 'state_field_reduction' // Audit and consolidate state fields
| 'custom'; // Custom fix
interface FileChange {
file: string;
action: 'create' | 'modify' | 'delete';
old_content?: string;
new_content?: string;
diff?: string;
}
interface AppliedFix {
fix_id: string;
applied_at: string;
success: boolean;
backup_path: string;
verification_result: 'pass' | 'fail' | 'pending';
rollback_available: boolean;
}
interface ActionHistoryEntry {
action: string;
started_at: string;
completed_at: string;
result: 'success' | 'failure' | 'skipped';
output_files: string[];
}
interface ErrorEntry {
action: string;
message: string;
timestamp: string;
recoverable: boolean;
}
```
## Initial State Template
```json
{
"status": "pending",
"started_at": null,
"updated_at": null,
"target_skill": {
"name": null,
"path": null,
"execution_mode": null,
"phases": [],
"specs": []
},
"user_issue_description": "",
"focus_areas": [],
"diagnosis": {
"context": null,
"memory": null,
"dataflow": null,
"agent": null,
"docs": null,
"token_consumption": null
},
"issues": [],
"issues_by_severity": {
"critical": 0,
"high": 0,
"medium": 0,
"low": 0
},
"proposed_fixes": [],
"applied_fixes": [],
"pending_fixes": [],
"iteration_count": 0,
"max_iterations": 5,
"quality_score": 0,
"quality_gate": "fail",
"completed_actions": [],
"current_action": null,
"action_history": [],
"errors": [],
"error_count": 0,
"max_errors": 3,
"work_dir": null,
"backup_dir": null,
"final_report": null,
"requirement_analysis": null
}
```
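A minimal sketch of how action-init might instantiate this template, assuming the tool-style `Write` API used elsewhere in this document (the helper name is illustrative):
```javascript
// Sketch: seed state.json from the initial template.
function initState(workDir, template) {
  const now = new Date().toISOString();
  const state = {
    ...template,
    status: 'running', // per the Action Catalog, action-init sets status='running'
    started_at: now,
    updated_at: now,
    work_dir: workDir,
    backup_dir: `${workDir}/backup`
  };
  Write(`${workDir}/state.json`, JSON.stringify(state, null, 2));
  return state;
}
```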
## State Transition Diagram
```
            ┌─────────────┐
            │   pending   │
            └──────┬──────┘
                   │ action-init
                   ↓
            ┌─────────────┐
 ┌──────────│   running   │──────────┐
 │          └──────┬──────┘          │
 │                 │                 │
 │ diagnosis ┌─────┼─────────┐       │ error_count >= 3
 │ actions   │     │         │       │
 │           ↓     ↓         ↓       │
 │       context memory  dataflow    │
 │           │     │         │       │
 │           └─────┼─────────┘       │
 │                 ↓                 │
 │           action-verify           │
 │                 │                 │
 │     ┌───────────┼───────────┐     │
 │     ↓           ↓           ↓     │
 │  quality     iterate      apply   │
 │  gate=pass   (< max)      fix     │
 │     │           │           │     │
 │     │           └───────────┘     │
 │     ↓                             ↓
 │  ┌─────────────┐          ┌─────────────┐
 └─→│  completed  │          │   failed    │
    └─────────────┘          └─────────────┘
```
## State Update Rules
### Atomicity
All state updates must be atomic - read current state, apply changes, write entire state.
### Immutability
Never mutate state in place. Always create new state object with changes.
### Validation
Before writing state, validate against schema to prevent corruption.
### Timestamps
Always update `updated_at` on every state change.
```javascript
function updateState(workDir, updates) {
const currentState = JSON.parse(Read(`${workDir}/state.json`));
const newState = {
...currentState,
...updates,
updated_at: new Date().toISOString()
};
// Validate before write
if (!validateState(newState)) {
throw new Error('Invalid state update');
}
Write(`${workDir}/state.json`, JSON.stringify(newState, null, 2));
return newState;
}
```
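`validateState` is referenced above but not specified; a minimal sketch under the schema in this file (a real implementation might delegate to a JSON Schema validator instead):
```javascript
// Sketch: structural sanity checks before writing state.json.
function validateState(state) {
  const validStatus = ['pending', 'running', 'completed', 'failed', 'user_exit'];
  if (!validStatus.includes(state.status)) return false;
  if (!Array.isArray(state.issues) || !Array.isArray(state.errors)) return false;
  if (typeof state.error_count !== 'number' || state.error_count < 0) return false;
  if (typeof state.iteration_count !== 'number') return false;
  return true;
}
```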

View File

@@ -0,0 +1,284 @@
{
"version": "1.0.0",
"description": "Centralized category mappings for skill-tuning analysis and fix proposal",
"categories": {
"authoring_principles_violation": {
"pattern_ids": ["APV-001", "APV-002", "APV-003", "APV-004", "APV-005", "APV-006"],
"severity_hint": "critical",
"strategies": ["eliminate_intermediate_files", "minimize_state", "context_passing"],
"risk_levels": ["low", "low", "low"],
"detection_focus": "Intermediate files, state bloat, file relay patterns",
"priority_order": [1, 2, 3]
},
"context_explosion": {
"pattern_ids": ["CTX-001", "CTX-002", "CTX-003", "CTX-004", "CTX-005"],
"severity_hint": "high",
"strategies": ["sliding_window", "path_reference", "context_summarization", "structured_state"],
"risk_levels": ["low", "low", "low", "medium"],
"detection_focus": "Token accumulation, content passing patterns",
"priority_order": [1, 2, 3, 4]
},
"memory_loss": {
"pattern_ids": ["MEM-001", "MEM-002", "MEM-003", "MEM-004", "MEM-005"],
"severity_hint": "high",
"strategies": ["constraint_injection", "state_constraints_field", "checkpoint_restore", "goal_embedding"],
"risk_levels": ["low", "low", "low", "medium"],
"detection_focus": "Constraint propagation, checkpoint mechanisms",
"priority_order": [1, 2, 3, 4]
},
"dataflow_break": {
"pattern_ids": ["DF-001", "DF-002", "DF-003", "DF-004", "DF-005"],
"severity_hint": "critical",
"strategies": ["state_centralization", "schema_enforcement", "field_normalization"],
"risk_levels": ["medium", "low", "low"],
"detection_focus": "State storage, schema validation",
"priority_order": [1, 2, 3]
},
"agent_failure": {
"pattern_ids": ["AGT-001", "AGT-002", "AGT-003", "AGT-004", "AGT-005", "AGT-006"],
"severity_hint": "high",
"strategies": ["error_wrapping", "result_validation", "flatten_nesting"],
"risk_levels": ["low", "low", "medium"],
"detection_focus": "Error handling, result validation",
"priority_order": [1, 2, 3]
},
"prompt_quality": {
"pattern_ids": [],
"severity_hint": "medium",
"strategies": ["structured_prompt", "output_schema", "grounding_context", "format_enforcement"],
"risk_levels": ["low", "low", "medium", "low"],
"detection_focus": null,
"needs_gemini_analysis": true,
"priority_order": [1, 2, 3, 4]
},
"architecture": {
"pattern_ids": [],
"severity_hint": "medium",
"strategies": ["phase_decomposition", "interface_contracts", "plugin_architecture", "state_machine"],
"risk_levels": ["medium", "medium", "high", "medium"],
"detection_focus": null,
"needs_gemini_analysis": true,
"priority_order": [1, 2, 3, 4]
},
"performance": {
"pattern_ids": ["CTX-001", "CTX-003"],
"severity_hint": "medium",
"strategies": ["token_budgeting", "parallel_execution", "result_caching", "lazy_loading"],
"risk_levels": ["low", "low", "low", "low"],
"detection_focus": "Reuses context explosion detection",
"priority_order": [1, 2, 3, 4]
},
"error_handling": {
"pattern_ids": ["AGT-001", "AGT-002"],
"severity_hint": "medium",
"strategies": ["graceful_degradation", "error_propagation", "structured_logging", "error_context"],
"risk_levels": ["low", "low", "low", "low"],
"detection_focus": "Reuses agent failure detection",
"priority_order": [1, 2, 3, 4]
},
"output_quality": {
"pattern_ids": [],
"severity_hint": "medium",
"strategies": ["quality_gates", "output_validation", "template_enforcement", "completeness_check"],
"risk_levels": ["low", "low", "low", "low"],
"detection_focus": null,
"needs_gemini_analysis": true,
"priority_order": [1, 2, 3, 4]
},
"user_experience": {
"pattern_ids": [],
"severity_hint": "low",
"strategies": ["progress_tracking", "status_communication", "interactive_checkpoints", "guided_workflow"],
"risk_levels": ["low", "low", "low", "low"],
"detection_focus": null,
"needs_gemini_analysis": true,
"priority_order": [1, 2, 3, 4]
},
"token_consumption": {
"pattern_ids": ["TKN-001", "TKN-002", "TKN-003", "TKN-004", "TKN-005"],
"severity_hint": "medium",
"strategies": ["prompt_compression", "lazy_loading", "output_minimization", "state_field_reduction", "sliding_window"],
"risk_levels": ["low", "low", "low", "low", "low"],
"detection_focus": "Verbose prompts, excessive state fields, full content passing, unbounded arrays, redundant I/O",
"priority_order": [1, 2, 3, 4, 5]
}
},
"keywords": {
"chinese": {
"token": "context_explosion",
"上下文": "context_explosion",
"爆炸": "context_explosion",
"太长": "context_explosion",
"超限": "context_explosion",
"膨胀": "context_explosion",
"遗忘": "memory_loss",
"忘记": "memory_loss",
"指令丢失": "memory_loss",
"约束消失": "memory_loss",
"目标漂移": "memory_loss",
"状态": "dataflow_break",
"数据": "dataflow_break",
"格式": "dataflow_break",
"不一致": "dataflow_break",
"丢失": "dataflow_break",
"损坏": "dataflow_break",
"agent": "agent_failure",
"子任务": "agent_failure",
"失败": "agent_failure",
"嵌套": "agent_failure",
"调用": "agent_failure",
"协调": "agent_failure",
"慢": "performance",
"性能": "performance",
"效率": "performance",
"延迟": "performance",
"提示词": "prompt_quality",
"输出不稳定": "prompt_quality",
"幻觉": "prompt_quality",
"架构": "architecture",
"结构": "architecture",
"模块": "architecture",
"耦合": "architecture",
"扩展": "architecture",
"错误": "error_handling",
"异常": "error_handling",
"恢复": "error_handling",
"降级": "error_handling",
"崩溃": "error_handling",
"输出": "output_quality",
"质量": "output_quality",
"验证": "output_quality",
"不完整": "output_quality",
"交互": "user_experience",
"体验": "user_experience",
"进度": "user_experience",
"反馈": "user_experience",
"不清晰": "user_experience",
"中间文件": "authoring_principles_violation",
"临时文件": "authoring_principles_violation",
"文件中转": "authoring_principles_violation",
"state膨胀": "authoring_principles_violation",
"token消耗": "token_consumption",
"token优化": "token_consumption",
"产出简化": "token_consumption",
"冗长": "token_consumption",
"精简": "token_consumption"
},
"english": {
"token": "context_explosion",
"context": "context_explosion",
"explosion": "context_explosion",
"overflow": "context_explosion",
"bloat": "context_explosion",
"forget": "memory_loss",
"lost": "memory_loss",
"drift": "memory_loss",
"constraint": "memory_loss",
"goal": "memory_loss",
"state": "dataflow_break",
"data": "dataflow_break",
"format": "dataflow_break",
"inconsistent": "dataflow_break",
"corrupt": "dataflow_break",
"agent": "agent_failure",
"subtask": "agent_failure",
"fail": "agent_failure",
"nested": "agent_failure",
"call": "agent_failure",
"coordinate": "agent_failure",
"slow": "performance",
"performance": "performance",
"efficiency": "performance",
"latency": "performance",
"prompt": "prompt_quality",
"unstable": "prompt_quality",
"hallucination": "prompt_quality",
"architecture": "architecture",
"structure": "architecture",
"module": "architecture",
"coupling": "architecture",
"error": "error_handling",
"exception": "error_handling",
"recovery": "error_handling",
"crash": "error_handling",
"output": "output_quality",
"quality": "output_quality",
"validation": "output_quality",
"incomplete": "output_quality",
"interaction": "user_experience",
"ux": "user_experience",
"progress": "user_experience",
"feedback": "user_experience",
"intermediate": "authoring_principles_violation",
"temp": "authoring_principles_violation",
"relay": "authoring_principles_violation",
"verbose": "token_consumption",
"minimize": "token_consumption",
"compress": "token_consumption",
"simplify": "token_consumption",
"reduction": "token_consumption"
}
},
"category_labels": {
"context_explosion": "Context Explosion",
"memory_loss": "Long-tail Forgetting",
"dataflow_break": "Data Flow Disruption",
"agent_failure": "Agent Coordination Failure",
"prompt_quality": "Prompt Quality",
"architecture": "Architecture",
"performance": "Performance",
"error_handling": "Error Handling",
"output_quality": "Output Quality",
"user_experience": "User Experience",
"authoring_principles_violation": "Authoring Principles Violation",
"token_consumption": "Token Consumption",
"custom": "Custom"
},
"category_labels_chinese": {
"context_explosion": "Context Explosion",
"memory_loss": "Long-tail Forgetting",
"dataflow_break": "Data Flow Disruption",
"agent_failure": "Agent Coordination Failure",
"prompt_quality": "Prompt Quality",
"architecture": "Architecture Issues",
"performance": "Performance Issues",
"error_handling": "Error Handling",
"output_quality": "Output Quality",
"user_experience": "User Experience",
"authoring_principles_violation": "Authoring Principles Violation",
"token_consumption": "Token Consumption Optimization",
"custom": "Other Issues"
},
"category_descriptions": {
"context_explosion": "Token accumulation causing prompt size to grow unbounded",
"memory_loss": "Early instructions or constraints lost in later phases",
"dataflow_break": "State data inconsistency between phases",
"agent_failure": "Sub-agent call failures or abnormal results",
"prompt_quality": "Vague prompts causing unstable outputs",
"architecture": "Improper phase division or module structure",
"performance": "Slow execution or high token consumption",
"error_handling": "Incomplete error recovery mechanisms",
"output_quality": "Output validation or completeness issues",
"user_experience": "Interaction or feedback clarity issues",
"authoring_principles_violation": "Violation of skill authoring principles",
"token_consumption": "Excessive token usage from verbose prompts, large state objects, or redundant I/O patterns",
"custom": "Requires custom analysis"
},
"fix_priority_order": {
"P0": ["dataflow_break", "authoring_principles_violation"],
"P1": ["agent_failure"],
"P2": ["context_explosion", "token_consumption"],
"P3": ["memory_loss"]
},
"cross_category_dependencies": {
"context_explosion": ["memory_loss"],
"dataflow_break": ["agent_failure"],
"agent_failure": ["context_explosion"]
},
"fallback": {
"strategies": ["custom"],
"risk_levels": ["medium"],
"has_fix": true,
"needs_gemini_analysis": true
}
}

View File

@@ -0,0 +1,212 @@
# Dimension to Spec Mapping
Mapping rules from dimension keywords to specs, used for automatic matching in the action-analyze-requirements phase.
## When to Use
| Phase | Usage |
|-------|-------|
| action-analyze-requirements | Automatic dimension → category → spec matching |
| action-propose-fixes | Reference for strategy selection |
---
## Keyword → Category Mapping
Maps dimensions described by the user to issue categories based on keywords.
### Chinese/English Keyword Table
| Keywords (Chinese) | Keywords (English) | Primary Category | Secondary |
|----------------|-----------------|------------------|-----------|
| token, 上下文, 爆炸, 太长, 超限, 膨胀 | token, context, explosion, overflow, bloat | context_explosion | - |
| 遗忘, 忘记, 指令丢失, 约束消失, 目标漂移 | forget, lost, drift, constraint, goal | memory_loss | - |
| 状态, 数据, 格式, 不一致, 丢失, 损坏 | state, data, format, inconsistent, corrupt | dataflow_break | - |
| agent, 子任务, 失败, 嵌套, 调用, 协调 | agent, subtask, fail, nested, call, coordinate | agent_failure | - |
| 慢, 性能, 效率, token 消耗, 延迟 | slow, performance, efficiency, latency | performance | context_explosion |
| 提示词, prompt, 输出不稳定, 幻觉 | prompt, unstable, hallucination | prompt_quality | - |
| 架构, 结构, 模块, 耦合, 扩展 | architecture, structure, module, coupling | architecture | - |
| 错误, 异常, 恢复, 降级, 崩溃 | error, exception, recovery, crash | error_handling | agent_failure |
| 输出, 质量, 格式, 验证, 不完整 | output, quality, validation, incomplete | output_quality | - |
| 交互, 体验, 进度, 反馈, 不清晰 | interaction, ux, progress, feedback | user_experience | - |
| 重复, 冗余, 多处定义, 相同内容 | duplicate, redundant, multiple definitions | doc_redundancy | - |
| 冲突, 不一致, 定义不同, 矛盾 | conflict, inconsistent, mismatch, contradiction | doc_conflict | - |
### Matching Algorithm
```javascript
// KEYWORD_MAP: category → keyword list, built from the keyword table above.
function matchCategory(keywords) {
  const categoryScores = {};
for (const keyword of keywords) {
const normalizedKeyword = keyword.toLowerCase();
for (const [category, categoryKeywords] of Object.entries(KEYWORD_MAP)) {
if (categoryKeywords.some(k => normalizedKeyword.includes(k) || k.includes(normalizedKeyword))) {
categoryScores[category] = (categoryScores[category] || 0) + 1;
}
}
}
  // Return the highest-scoring category
  const sorted = Object.entries(categoryScores).sort((a, b) => b[1] - a[1]);
  if (sorted.length === 0) return null;
  // If the top two scores tie, return both categories (needs clarification)
if (sorted.length > 1 && sorted[0][1] === sorted[1][1]) {
return {
primary: sorted[0][0],
secondary: sorted[1][0],
ambiguous: true
};
}
return {
primary: sorted[0][0],
secondary: sorted[1]?.[0] || null,
ambiguous: false
};
}
```
---
## Category → Taxonomy Pattern Mapping
Maps issue categories to detection patterns in problem-taxonomy.md.
| Category | Pattern IDs | Detection Focus |
|----------|-------------|-----------------|
| context_explosion | CTX-001, CTX-002, CTX-003, CTX-004, CTX-005 | Token accumulation, content passing patterns |
| memory_loss | MEM-001, MEM-002, MEM-003, MEM-004, MEM-005 | Constraint propagation, checkpoint mechanisms |
| dataflow_break | DF-001, DF-002, DF-003, DF-004, DF-005 | State storage, schema validation |
| agent_failure | AGT-001, AGT-002, AGT-003, AGT-004, AGT-005, AGT-006 | Error handling, result validation |
| prompt_quality | - | (no built-in detection; requires Gemini analysis) |
| architecture | - | (no built-in detection; requires Gemini analysis) |
| performance | CTX-001, CTX-003 | (reuses context detection) |
| error_handling | AGT-001, AGT-002 | (reuses agent detection) |
| output_quality | - | (no built-in detection; requires Gemini analysis) |
| user_experience | - | (no built-in detection; requires Gemini analysis) |
| doc_redundancy | DOC-RED-001, DOC-RED-002, DOC-RED-003 | Duplicate definition detection |
| doc_conflict | DOC-CON-001, DOC-CON-002 | Conflicting definition detection |
---
## Category → Strategy Mapping
Maps issue categories to fix strategies in tuning-strategies.md.
### Core Categories (full strategies available)
| Category | Available Strategies | Risk Level |
|----------|---------------------|------------|
| context_explosion | sliding_window, path_reference, context_summarization, structured_state | Low-Medium |
| memory_loss | constraint_injection, state_constraints_field, checkpoint_restore, goal_embedding | Low-Medium |
| dataflow_break | state_centralization, schema_enforcement, field_normalization | Low-Medium |
| agent_failure | error_wrapping, result_validation, flatten_nesting | Low-Medium |
| doc_redundancy | consolidate_to_ssot, centralize_mapping_config | Low-Medium |
| doc_conflict | reconcile_conflicting_definitions | Low |
### Extended Categories (strategies generated via Gemini analysis)
| Category | Available Strategies | Risk Level |
|----------|---------------------|------------|
| prompt_quality | structured_prompt, output_schema, grounding_context, format_enforcement | Low |
| architecture | phase_decomposition, interface_contracts, plugin_architecture, state_machine | Medium-High |
| performance | token_budgeting, parallel_execution, result_caching, lazy_loading | Low-Medium |
| error_handling | graceful_degradation, error_propagation, structured_logging, error_context | Low |
| output_quality | quality_gates, output_validation, template_enforcement, completeness_check | Low |
| user_experience | progress_tracking, status_communication, interactive_checkpoints, guided_workflow | Low |
---
## Coverage Rules
### Satisfaction Criteria
Criteria for judging whether the requirements are satisfied:
```javascript
function evaluateSatisfaction(specMatch) {
  // Core criterion: a usable fix strategy exists
  const hasFix = specMatch.strategy_match !== null &&
                 specMatch.strategy_match.strategies.length > 0;
  // Secondary criterion: a detection method exists
  const hasDetection = specMatch.taxonomy_match !== null;
  return {
    satisfied: hasFix,
    detection_available: hasDetection,
    needs_gemini: !hasDetection // Gemini analysis needed when no built-in detection
};
}
```
### Coverage Status Thresholds
| Status | Condition |
|--------|-----------|
| satisfied | coverage_rate >= 80% |
| partial | 50% <= coverage_rate < 80% |
| unsatisfied | coverage_rate < 50% |
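These thresholds mirror the `coverage` block of `RequirementAnalysis`; a sketch of deriving that block from spec matches (the helper name is illustrative):
```javascript
// Sketch: compute coverage from SpecMatch records.
function computeCoverage(specMatches) {
  const total = specMatches.length;
  const withDetection = specMatches.filter(m => m.taxonomy_match !== null).length;
  const withFix = specMatches.filter(m => m.has_fix).length;
  const rate = total === 0 ? 0 : Math.round((withFix / total) * 100);
  return {
    total_dimensions: total,
    with_detection: withDetection,
    with_fix_strategy: withFix,
    coverage_rate: rate,
    status: rate >= 80 ? 'satisfied' : rate >= 50 ? 'partial' : 'unsatisfied'
  };
}
```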
---
## Fallback Rules
Handling when a dimension cannot be matched to a specific category:
```javascript
function handleUnmatchedDimension(dimension) {
return {
dimension_id: dimension.id,
taxonomy_match: null,
strategy_match: {
strategies: ['custom'], // Fallback to custom strategy
risk_levels: ['medium']
},
    has_fix: true, // the custom strategy counts as "satisfiable"
needs_gemini_analysis: true,
fallback_reason: 'no_keyword_match'
};
}
```
---
## Usage Example
```javascript
// Input: user description "skill 执行太慢,而且有时候会忘记最初的指令"
// ("the skill runs too slowly, and sometimes forgets the original instructions")
// Step 1: Gemini decomposes the description into dimensions
const dimensions = [
  { id: 'DIM-001', description: '执行太慢', keywords: ['慢', '执行'], confidence: 0.9 },
  { id: 'DIM-002', description: '忘记最初指令', keywords: ['忘记', '指令'], confidence: 0.85 }
];
// Step 2: Match categories
// DIM-001 → performance (慢 = slow)
// DIM-002 → memory_loss (忘记 = forget, 指令 = instruction)
// Step 3: Match specs
const specMatches = [
{
dimension_id: 'DIM-001',
taxonomy_match: { category: 'performance', pattern_ids: ['CTX-001', 'CTX-003'], severity_hint: 'medium' },
strategy_match: { strategies: ['token_budgeting', 'parallel_execution'], risk_levels: ['low', 'low'] },
has_fix: true
},
{
dimension_id: 'DIM-002',
taxonomy_match: { category: 'memory_loss', pattern_ids: ['MEM-001', 'MEM-002'], severity_hint: 'high' },
strategy_match: { strategies: ['constraint_injection', 'checkpoint_restore'], risk_levels: ['low', 'low'] },
has_fix: true
}
];
// Step 4: Evaluate coverage
// 2/2 = 100% → satisfied
```

View File

@@ -0,0 +1,318 @@
# Problem Taxonomy
Classification of skill execution issues with detection patterns and severity criteria.
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| All Diagnosis Actions | Issue classification | All sections |
| action-propose-fixes | Strategy selection | Fix Mapping |
| action-generate-report | Severity assessment | Severity Criteria |
---
## Problem Categories
### 0. Authoring Principles Violation (P0)
**Definition**: Violation of the primary skill authoring principles (concise and efficient, no unnecessary storage, context flow).
**Root Causes**:
- Unnecessary intermediate file storage
- Excessive state schema bloat
- File relay instead of context passing
- Duplicate data storage
**Detection Patterns**:
| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| APV-001 | `/Write\([^)]*(temp-\|intermediate-)/` | Intermediate file write |
| APV-002 | `/Write\([^)]+\)[\s\S]{0,50}Read\([^)]+\)/` | Read immediately after write (file relay) |
| APV-003 | State schema > 15 fields | Too many state fields |
| APV-004 | `/_history\s*[.=].*(push\|concat)/` | Unbounded array growth |
| APV-005 | `/debug_\|_cache\|_temp/` in state | Leftover debug/cache fields |
| APV-006 | Same data in multiple state fields | Duplicate storage |
**Impact Levels**:
- **Critical**: > 5 intermediate files; serious violation of the principles
- **High**: > 20 state fields, or file relay present
- **Medium**: Debug fields present or minor redundancy
- **Low**: Minor naming inconsistencies
---
### 1. Context Explosion (P2)
**Definition**: Excessive token accumulation causing prompt size to grow unbounded.
**Root Causes**:
- Unbounded conversation history
- Full content passing instead of references
- Missing summarization mechanisms
- Agent returning full output instead of path+summary
**Detection Patterns**:
| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| CTX-001 | `/history\s*[.=].*(push\|concat)/` | History array growth |
| CTX-002 | `/JSON\.stringify\s*\(\s*state\s*\)/` | Full state serialization |
| CTX-003 | `/Read\([^)]+\)\s*[\+,]/` | Multiple file content concatenation |
| CTX-004 | `/return\s*\{[^}]*content:/` | Agent returning full content |
| CTX-005 | File length > 5000 chars without summarize | Long prompt without compression |
**Impact Levels**:
- **Critical**: Context exceeds model limit (128K tokens)
- **High**: Context > 50K tokens per iteration
- **Medium**: Context grows 10%+ per iteration
- **Low**: Potential for growth but currently manageable
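As an illustration, a scanner might apply these patterns line by line and emit `Evidence` entries as defined in the state schema (pattern list abbreviated; the helper is a sketch, not part of the spec):
```javascript
// Sketch: scan one file's content against a subset of CTX patterns.
const CTX_PATTERNS = [
  { id: 'CTX-001', regex: /history\s*[.=].*(push|concat)/ },
  { id: 'CTX-002', regex: /JSON\.stringify\s*\(\s*state\s*\)/ }
];

function scanForContextExplosion(filePath, content) {
  const evidence = [];
  content.split('\n').forEach((text, idx) => {
    for (const { id, regex } of CTX_PATTERNS) {
      if (regex.test(text)) {
        evidence.push({
          file: filePath,
          line: idx + 1,
          pattern: id,
          context: text.trim(),
          severity: 'high'
        });
      }
    }
  });
  return evidence;
}
```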
---
### 2. Long-tail Forgetting (P3)
**Definition**: Loss of early instructions, constraints, or goals in long execution chains.
**Root Causes**:
- No explicit constraint propagation
- Reliance on implicit context
- Missing checkpoint/restore mechanisms
- State schema without requirements field
**Detection Patterns**:
| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| MEM-001 | Later phases missing constraint reference | Constraint not carried forward |
| MEM-002 | `/\[TASK\](?![\s\S]*\[CONSTRAINTS\])/` | Task without constraints section |
| MEM-003 | Key phases without checkpoint | Missing state preservation |
| MEM-004 | State schema lacks `original_requirements` | No constraint persistence |
| MEM-005 | No verification phase | Output not checked against intent |
**Impact Levels**:
- **Critical**: Original goal completely lost
- **High**: Key constraints ignored in output
- **Medium**: Some requirements missing
- **Low**: Minor goal drift
---
### 3. Data Flow Disruption (P0)
**Definition**: Inconsistent state management causing data loss or corruption.
**Root Causes**:
- Multiple state storage locations
- Inconsistent field naming
- Missing schema validation
- Format transformation without normalization
**Detection Patterns**:
| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| DF-001 | Multiple state file writes | Scattered state storage |
| DF-002 | Same concept, different names | Field naming inconsistency |
| DF-003 | JSON.parse without validation | Missing schema validation |
| DF-004 | Files written but never read | Orphaned outputs |
| DF-005 | Autonomous skill without state-schema | Undefined state structure |
**Impact Levels**:
- **Critical**: Data loss or corruption
- **High**: State inconsistency between phases
- **Medium**: Potential for inconsistency
- **Low**: Minor naming inconsistencies
---
### 4. Agent Coordination Failure (P1)
**Definition**: Fragile agent call patterns causing cascading failures.
**Root Causes**:
- Missing error handling in Task calls
- No result validation
- Inconsistent agent configurations
- Deeply nested agent calls
**Detection Patterns**:
| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| AGT-001 | Task without try-catch | Missing error handling |
| AGT-002 | Result used without validation | No return value check |
| AGT-003 | > 3 different agent types | Agent type proliferation |
| AGT-004 | Nested Task in prompt | Agent calling agent |
| AGT-005 | Task used but not in allowed-tools | Tool declaration mismatch |
| AGT-006 | Multiple return formats | Inconsistent agent output |
**Impact Levels**:
- **Critical**: Workflow crash on agent failure
- **High**: Unpredictable agent behavior
- **Medium**: Occasional coordination issues
- **Low**: Minor inconsistencies
---
### 5. Documentation Redundancy (P5)
**Definition**: The same definition (e.g., state schema, mapping tables, type definitions) repeated across multiple files, making maintenance harder and risking inconsistency.
**Root Causes**:
- No single source of truth (SSOT)
- Copy-paste instead of referencing
- Hard-coded configuration instead of centralized management
**Detection Patterns**:
| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| DOC-RED-001 | Cross-file semantic comparison | Duplicate definitions of core concepts such as the state schema |
| DOC-RED-002 | Code block vs. spec table comparison | Hard-coded values in action files duplicating spec docs |
| DOC-RED-003 | Same-name scan via `/interface\s+(\w+)/` | interface/type defined in multiple places |
**Impact Levels**:
- **High**: Core definitions (state schema, mapping tables) duplicated
- **Medium**: Type definitions duplicated
- **Low**: Example code duplicated
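A sketch of the DOC-RED-003 scan, assuming `{path, content}` records have already been read from the skill's files (output shaped after the `Redundancy` interface in the state schema):
```javascript
// Sketch: find interfaces declared under the same name in multiple files.
function findDuplicateInterfaces(files) {
  const seen = {}; // interface name -> set of declaring files
  for (const { path, content } of files) {
    for (const m of content.matchAll(/interface\s+(\w+)/g)) {
      (seen[m[1]] = seen[m[1]] || new Set()).add(path);
    }
  }
  return Object.entries(seen)
    .filter(([, paths]) => paths.size > 1)
    .map(([name, paths]) => ({
      id: 'DOC-RED-003',
      type: 'type_definition',
      files: [...paths],
      description: `interface ${name} is defined in ${paths.size} files`,
      severity: 'medium',
      merge_suggestion: 'Keep one definition and reference it elsewhere.'
    }));
}
```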
---
### 6. Token Consumption (P6)
**Definition**: Excessive token usage from verbose prompts, large state objects, or inefficient I/O patterns.
**Root Causes**:
- Long static prompts without compression
- State schema with too many fields
- Full content embedding instead of path references
- Arrays growing unbounded without sliding windows
- Write-then-read file relay patterns
**Detection Patterns**:
| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| TKN-001 | File size > 4KB | Verbose prompt files |
| TKN-002 | State fields > 15 | Excessive state schema |
| TKN-003 | `/Read\([^)]+\)\s*[\+,]/` | Full content passing |
| TKN-004 | `/\.(push\|concat)(?!.*\.slice)/` | Unbounded array growth |
| TKN-005 | `/Write\([^)]+\)[\s\S]{0,100}Read\([^)]+\)/` | Write-then-read pattern |
**Impact Levels**:
- **High**: Multiple TKN-003/TKN-004 issues causing significant token waste
- **Medium**: Several verbose files or state bloat
- **Low**: Minor optimization opportunities
---
### 7. Documentation Conflict (P7)
**Definition**: The same concept defined inconsistently across files, leading to unpredictable behavior and misleading documentation.
**Root Causes**:
- Definitions updated without syncing other locations
- Implementation drifting away from documentation
- No consistency checks
**Detection Patterns**:
| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| DOC-CON-001 | Key-value consistency check | The same key (e.g., a priority) has different values in different files |
| DOC-CON-002 | Implementation vs. docs comparison | Hard-coded configuration inconsistent with its documented counterpart |
**Impact Levels**:
- **Critical**: Conflicting priority/category definitions
- **High**: Inconsistent strategy mappings
- **Medium**: Examples not matching actual behavior
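A sketch of the DOC-CON-001 check, assuming `{file, value}` records extracted from the docs (output shaped after the `Conflict` interface in the state schema):
```javascript
// Sketch: flag a shared key whose definitions disagree across files.
function findConflict(key, definitions) {
  const distinct = new Set(definitions.map(d => d.value));
  if (distinct.size <= 1) return null; // consistent everywhere
  return {
    id: 'DOC-CON-001',
    type: 'priority',
    files: [...new Set(definitions.map(d => d.file))],
    key,
    definitions,
    resolution_suggestion: 'Pick one source of truth and reference it from the other files.'
  };
}
```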
---
## Severity Criteria
### Global Severity Matrix
| Severity | Definition | Action Required |
|----------|------------|-----------------|
| **Critical** | Blocks execution or causes data loss | Immediate fix required |
| **High** | Significantly impacts reliability | Should fix before deployment |
| **Medium** | Affects quality or maintainability | Fix in next iteration |
| **Low** | Minor improvement opportunity | Optional fix |
### Severity Calculation
```javascript
function calculateIssueSeverity(issue) {
const weights = {
impact_on_execution: 40, // Does it block workflow?
data_integrity_risk: 30, // Can it cause data loss?
frequency: 20, // How often does it occur?
complexity_to_fix: 10 // How hard to fix?
};
let score = 0;
// Impact on execution
if (issue.blocks_execution) score += weights.impact_on_execution;
else if (issue.degrades_execution) score += weights.impact_on_execution * 0.5;
// Data integrity
if (issue.causes_data_loss) score += weights.data_integrity_risk;
else if (issue.causes_inconsistency) score += weights.data_integrity_risk * 0.5;
// Frequency
if (issue.occurs_every_run) score += weights.frequency;
else if (issue.occurs_sometimes) score += weights.frequency * 0.5;
// Complexity (inverse - easier to fix = higher priority)
if (issue.fix_complexity === 'low') score += weights.complexity_to_fix;
else if (issue.fix_complexity === 'medium') score += weights.complexity_to_fix * 0.5;
// Map score to severity
if (score >= 70) return 'critical';
if (score >= 50) return 'high';
if (score >= 30) return 'medium';
return 'low';
}
```
---
## Fix Mapping
| Problem Type | Recommended Strategies | Priority Order |
|--------------|----------------------|----------------|
| **Authoring Principles Violation** | eliminate_intermediate_files, minimize_state, context_passing | 1, 2, 3 |
| Context Explosion | sliding_window, path_reference, context_summarization | 1, 2, 3 |
| Long-tail Forgetting | constraint_injection, state_constraints_field, checkpoint | 1, 2, 3 |
| Data Flow Disruption | state_centralization, schema_enforcement, field_normalization | 1, 2, 3 |
| Agent Coordination | error_wrapping, result_validation, flatten_nesting | 1, 2, 3 |
| **Token Consumption** | prompt_compression, lazy_loading, output_minimization, state_field_reduction | 1, 2, 3, 4 |
| **Documentation Redundancy** | consolidate_to_ssot, centralize_mapping_config | 1, 2 |
| **Documentation Conflict** | reconcile_conflicting_definitions | 1 |
---
## Cross-Category Dependencies
Some issues may trigger others:
```
Context Explosion ──→ Long-tail Forgetting
(Large context causes important info to be pushed out)
Data Flow Disruption ──→ Agent Coordination Failure
(Inconsistent data causes agents to fail)
Agent Coordination Failure ──→ Context Explosion
(Failed retries add to context)
```
When fixing, address in this order:
1. **P0 Data Flow** - Foundation for other fixes
2. **P1 Agent Coordination** - Stability
3. **P2 Context Explosion** - Efficiency
4. **P3 Long-tail Forgetting** - Quality

View File

@@ -0,0 +1,263 @@
# Quality Gates
Quality thresholds and verification criteria for skill tuning.
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| action-generate-report | Calculate quality score | Scoring |
| action-verify | Check quality gates | Gate Definitions |
| action-complete | Final assessment | Pass Criteria |
---
## Quality Dimensions
### 1. Issue Severity Distribution (40%)
Measures the severity profile of identified issues.
| Metric | Weight | Calculation |
|--------|--------|-------------|
| Critical Issues | -25 each | High penalty |
| High Issues | -15 each | Significant penalty |
| Medium Issues | -5 each | Moderate penalty |
| Low Issues | -1 each | Minor penalty |
**Score Calculation**:
```javascript
function calculateSeverityScore(issues) {
const weights = { critical: 25, high: 15, medium: 5, low: 1 };
const deductions = issues.reduce((sum, issue) =>
sum + (weights[issue.severity] || 0), 0);
return Math.max(0, 100 - deductions);
}
```
### 2. Fix Effectiveness (30%)
Measures success rate of applied fixes.
| Metric | Weight | Threshold |
|--------|--------|-----------|
| Fixes Verified Pass | +30 | > 80% pass rate |
| Fixes Verified Fail | -20 | < 50% triggers review |
| Issues Resolved | +10 | Per resolved issue |
**Score Calculation**:
```javascript
function calculateFixScore(appliedFixes) {
const total = appliedFixes.length;
if (total === 0) return 100; // No fixes needed = good
const passed = appliedFixes.filter(f => f.verification_result === 'pass').length;
return Math.round((passed / total) * 100);
}
```
### 3. Coverage Completeness (20%)
Measures diagnosis coverage across all areas.
| Metric | Weight | Threshold |
|--------|--------|-----------|
| All 4 diagnoses complete | +20 | Full coverage |
| 3 diagnoses complete | +15 | Good coverage |
| 2 diagnoses complete | +10 | Partial coverage |
| < 2 diagnoses complete | +0 | Insufficient |
### 4. Iteration Efficiency (10%)
Measures how quickly issues are resolved.
| Metric | Weight | Threshold |
|--------|--------|-----------|
| Resolved in 1 iteration | +10 | Excellent |
| Resolved in 2 iterations | +7 | Good |
| Resolved in 3 iterations | +4 | Acceptable |
| > 3 iterations | +0 | Needs improvement |
---
## Gate Definitions
### Gate: PASS
**Threshold**: Quality Score >= 80 AND Critical Issues = 0 AND High Issues <= 2
**Meaning**: Skill is production-ready with minor issues.
**Actions**:
- Complete tuning session
- Generate summary report
- No further fixes required
### Gate: REVIEW
**Threshold**: Quality Score 60-79 OR High Issues 3-5
**Meaning**: Skill has issues requiring attention.
**Actions**:
- Review remaining issues
- Apply additional fixes if possible
- May require manual intervention
### Gate: FAIL
**Threshold**: Quality Score < 60 OR Critical Issues > 0 OR High Issues > 5
**Meaning**: Skill has serious issues blocking deployment.
**Actions**:
- Must fix critical issues
- Re-run diagnosis after fixes
- Consider architectural review
---
## Quality Score Calculation
```javascript
function calculateQualityScore(state) {
// Dimension 1: Severity (40%)
const severityScore = calculateSeverityScore(state.issues);
// Dimension 2: Fix Effectiveness (30%)
const fixScore = calculateFixScore(state.applied_fixes);
  // Dimension 3: Coverage (20%)
  const diagnosisCount = Object.values(state.diagnosis)
    .filter(d => d !== null).length;
  // Cap at the 4 core diagnoses so the optional extras (docs,
  // token_consumption) cannot push the index past the lookup table.
  const coverageScore = [0, 0, 10, 15, 20][Math.min(diagnosisCount, 4)];
// Dimension 4: Efficiency (10%)
const efficiencyScore = state.iteration_count <= 1 ? 10 :
state.iteration_count <= 2 ? 7 :
state.iteration_count <= 3 ? 4 : 0;
// Weighted total
const total = (severityScore * 0.4) +
(fixScore * 0.3) +
(coverageScore * 1.0) + // Already scaled to 20
(efficiencyScore * 1.0); // Already scaled to 10
return Math.round(total);
}
function determineQualityGate(state) {
const score = calculateQualityScore(state);
const criticalCount = state.issues.filter(i => i.severity === 'critical').length;
const highCount = state.issues.filter(i => i.severity === 'high').length;
if (criticalCount > 0) return 'fail';
if (highCount > 5) return 'fail';
if (score < 60) return 'fail';
if (highCount > 2) return 'review';
if (score < 80) return 'review';
return 'pass';
}
```
---
## Verification Criteria
### For Each Issue Type
#### Context Explosion Issues
- [ ] Token count does not grow unbounded
- [ ] History limited to reasonable size
- [ ] No full content in prompts (paths used instead)
- [ ] Agent returns are compact
#### Long-tail Forgetting Issues
- [ ] Constraints visible in all phase prompts
- [ ] State schema includes requirements field
- [ ] Checkpoints exist at key milestones
- [ ] Output matches original constraints
#### Data Flow Issues
- [ ] Single state.json after execution
- [ ] No orphan state files
- [ ] Schema validation active
- [ ] Consistent field naming
#### Agent Coordination Issues
- [ ] All Task calls have error handling
- [ ] Agent results validated before use
- [ ] No nested agent calls
- [ ] Tool declarations match usage
---
## Iteration Control
### Max Iterations
Default: 5 iterations
**Rationale**:
- Each iteration may introduce new issues
- Diminishing returns after 3-4 iterations
- Prevents infinite loops
### Iteration Exit Criteria
```javascript
function shouldContinueIteration(state) {
// Exit if quality gate passed
if (state.quality_gate === 'pass') return false;
// Exit if max iterations reached
if (state.iteration_count >= state.max_iterations) return false;
// Exit if no improvement in last 2 iterations
if (state.iteration_count >= 2) {
const recentHistory = state.action_history.slice(-10);
const issuesResolvedRecently = recentHistory.filter(a =>
a.action === 'action-verify' && a.result === 'success'
).length;
if (issuesResolvedRecently === 0) {
console.log('No progress in recent iterations, stopping.');
return false;
}
}
// Continue if critical/high issues remain
const hasUrgentIssues = state.issues.some(i =>
i.severity === 'critical' || i.severity === 'high'
);
return hasUrgentIssues;
}
```
---
## Reporting Format
### Quality Summary Table
| Dimension | Score | Weight | Weighted |
|-----------|-------|--------|----------|
| Severity Distribution | {score}/100 | 40% | {weighted} |
| Fix Effectiveness | {score}/100 | 30% | {weighted} |
| Coverage Completeness | {score}/20 | 20% | {score} |
| Iteration Efficiency | {score}/10 | 10% | {score} |
| **Total** | | | **{total}/100** |
### Gate Status
```
Quality Gate: {PASS|REVIEW|FAIL}
Criteria:
- Quality Score: {score} (threshold: 60)
- Critical Issues: {count} (threshold: 0)
- High Issues: {count} (threshold: 5)
```

View File

@@ -0,0 +1,189 @@
# Skill Authoring Principles
The primary principles of skill authoring. All diagnosis and tuning takes them as its guide.
---
## Core Principles
```
Concise and efficient → remove irrelevant storage → remove intermediate storage → context flow
```
---
## 1. Concise and Efficient
**Principle**: minimal implementation; do only what is necessary.
| DO | DON'T |
|----|-------|
| Single-responsibility phases | Bloated multi-purpose phases |
| Direct data paths | Roundabout processing flows |
| Only necessary fields | Redundant schema definitions |
| Precise prompts | Over-detailed instructions |
**Detection Patterns**
- Phase file > 200 lines → split it
- State schema > 20 fields → trim it
- Same data defined in multiple places → deduplicate
---
## 2. Remove Irrelevant Storage
**Principle**: do not store data you do not need.
| DO | DON'T |
|----|-------|
| Store only final results | Store debug information |
| Store path references | Store full content copies |
| Store necessary indexes | Store full history |
**Detection Patterns**
```javascript
// BAD: store the full content
state.full_analysis_result = longAnalysisOutput;
// GOOD: store path + summary
state.analysis = {
  path: `${workDir}/analysis.json`,
  summary: extractSummary(output),
  key_findings: extractFindings(output)
};
```
**Anti-pattern Checklist**
- `state.debug_*` → remove
- `state.*_history` (unbounded growth) → cap or remove
- `state.*_cache` (session-scoped) → use an in-memory variable instead
- Duplicate fields → merge
---
## 3. Remove Intermediate Storage
**Principle**: avoid temporary files and intermediate state files.
| DO | DON'T |
|----|-------|
| Pass results directly | Write a file, then read it back |
| Function return values | Intermediate JSON files |
| Pipeline processing | Per-stage storage |
**Detection Patterns**
```javascript
// BAD: intermediate files
Write(`${workDir}/temp-step1.json`, step1Result);
const step1 = Read(`${workDir}/temp-step1.json`);
const step2Result = process(step1);
Write(`${workDir}/temp-step2.json`, step2Result);
// GOOD: direct flow
const step1Result = await executeStep1();
const step2Result = process(step1Result);
const finalResult = finalize(step2Result);
Write(`${workDir}/final-output.json`, finalResult); // only persist the final result
```
**Allowed Storage**
- Final output (the result the user needs)
- Checkpoints (optional, for recovering long flows)
- Backups (original files before modification)
**Forbidden Storage**
- `temp-*.json`
- `intermediate-*.json`
- `step[N]-output.json`
- `*-draft.md`
---
## 4. Context Flow
**Principle**: pass data through context, not through files.
| DO | DON'T |
|----|-------|
| Pass via function parameters | Read/write global state |
| Chain return values | File relay |
| Embed data in the prompt | Point to external files |
**Pattern**
```javascript
// Context flow pattern
async function executePhase(context) {
const { previousResult, constraints, config } = context;
const result = await Task({
prompt: `
[CONTEXT]
Previous: ${JSON.stringify(previousResult)}
Constraints: ${constraints.join(', ')}
[TASK]
Process and return result directly.
`
});
return {
...context,
currentResult: result,
completed: ['phase-name']
};
}
// Chained execution
let ctx = initialContext;
ctx = await executePhase1(ctx);
ctx = await executePhase2(ctx);
ctx = await executePhase3(ctx);
// ctx carries the full context; no intermediate files
```
**State Minimization**
```typescript
// Store only essential state
interface MinimalState {
  status: 'pending' | 'running' | 'completed';
  target: { name: string; path: string };
  result_path: string; // path to the final result
error?: string;
}
```
---
## Application Scenarios
### Checks During Diagnosis
| Check | Flag on Violation |
|--------|-----------|
| Temp file written inside a phase | `unnecessary_storage` |
| State contains an unbounded *_history array | `unbounded_state` |
| File read immediately after being written | `redundant_io` |
| Full content passed across multiple phases | `context_bloat` |
### Tuning Strategies
| Problem | Strategy |
|------|------|
| Too many intermediate files | `eliminate_intermediate_files` |
| State bloat | `minimize_state_schema` |
| Duplicate storage | `deduplicate_storage` |
| File relay | `context_passing` |
---
## Compliance Checklist
```
□ No temp/intermediate file writes
□ State schema < 15 fields
□ No duplicate data storage
□ Data passed between phases via context/return values
□ Only final result files persisted
□ No unbounded arrays
□ No leftover debug fields
```
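A rough sketch of automating the first two items with Node.js (18.17+ for recursive `readdirSync`); `skillDir` and `stateSchemaFields` are assumed inputs, and the remaining items still need manual or pattern-based review:
```javascript
const fs = require('fs');

// Sketch: automated checks for intermediate files and state-schema size.
function checkCompliance(skillDir, stateSchemaFields) {
  const findings = [];
  for (const entry of fs.readdirSync(skillDir, { recursive: true })) {
    if (/temp-|intermediate-|-draft\.md$/.test(String(entry))) {
      findings.push(`forbidden intermediate file: ${entry}`);
    }
  }
  if (stateSchemaFields.length >= 15) {
    findings.push(`state schema has ${stateSchemaFields.length} fields (limit: < 15)`);
  }
  return findings; // empty array means these automated items pass
}
```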

File diff suppressed because it is too large

View File

@@ -0,0 +1,153 @@
# Diagnosis Report Template
Template for individual diagnosis action reports.
## Template
```markdown
# {{diagnosis_type}} Diagnosis Report
**Target Skill**: {{skill_name}}
**Diagnosis Type**: {{diagnosis_type}}
**Executed At**: {{timestamp}}
**Duration**: {{duration_ms}}ms
---
## Summary
| Metric | Value |
|--------|-------|
| Issues Found | {{issues_found}} |
| Severity | {{severity}} |
| Patterns Checked | {{patterns_checked_count}} |
| Patterns Matched | {{patterns_matched_count}} |
---
## Patterns Analyzed
{{#each patterns_checked}}
### {{pattern_name}}
- **Status**: {{status}}
- **Matches**: {{match_count}}
- **Files Affected**: {{affected_files}}
{{/each}}
---
## Issues Identified
{{#if issues.length}}
{{#each issues}}
### {{id}}: {{description}}
| Field | Value |
|-------|-------|
| Type | {{type}} |
| Severity | {{severity}} |
| Location | {{location}} |
| Root Cause | {{root_cause}} |
| Impact | {{impact}} |
**Evidence**:
{{#each evidence}}
- `{{this}}`
{{/each}}
**Suggested Fix**: {{suggested_fix}}
---
{{/each}}
{{else}}
_No issues found in this diagnosis area._
{{/if}}
---
## Recommendations
{{#if recommendations.length}}
{{#each recommendations}}
{{@index}}. {{this}}
{{/each}}
{{else}}
No specific recommendations - area appears healthy.
{{/if}}
---
## Raw Data
Full diagnosis data available at:
`{{output_file}}`
```
## Variable Reference
| Variable | Type | Source |
|----------|------|--------|
| `diagnosis_type` | string | 'context' \| 'memory' \| 'dataflow' \| 'agent' |
| `skill_name` | string | state.target_skill.name |
| `timestamp` | string | ISO timestamp |
| `duration_ms` | number | Execution time |
| `issues_found` | number | issues.length |
| `severity` | string | Calculated severity |
| `patterns_checked` | array | Patterns analyzed |
| `patterns_matched` | array | Patterns with matches |
| `issues` | array | Issue objects |
| `recommendations` | array | String recommendations |
| `output_file` | string | Path to JSON file |
## Usage
```javascript
function renderDiagnosisReport(diagnosis, diagnosisType, skillName, outputFile) {
return `# ${diagnosisType} Diagnosis Report
**Target Skill**: ${skillName}
**Diagnosis Type**: ${diagnosisType}
**Executed At**: ${new Date().toISOString()}
**Duration**: ${diagnosis.execution_time_ms}ms
---
## Summary
| Metric | Value |
|--------|-------|
| Issues Found | ${diagnosis.issues_found} |
| Severity | ${diagnosis.severity} |
| Patterns Checked | ${diagnosis.details.patterns_checked.length} |
| Patterns Matched | ${diagnosis.details.patterns_matched.length} |
---
## Issues Identified
${diagnosis.details.evidence.map((e, i) => `
### Issue ${i + 1}
- **File**: ${e.file}
- **Pattern**: ${e.pattern}
- **Severity**: ${e.severity}
- **Context**: \`${e.context}\`
`).join('\n')}
---
## Recommendations
${diagnosis.details.recommendations.map((r, i) => `${i + 1}. ${r}`).join('\n')}
---
## Raw Data
Full diagnosis data available at:
\`${outputFile}\`
`;
}
```

View File

@@ -0,0 +1,204 @@
# Fix Proposal Template
Template for fix proposal documentation.
## Template
```markdown
# Fix Proposal: {{fix_id}}
**Strategy**: {{strategy}}
**Risk Level**: {{risk}}
**Issues Addressed**: {{issue_ids}}
---
## Description
{{description}}
## Rationale
{{rationale}}
---
## Affected Files
{{#each changes}}
### {{file}}
**Action**: {{action}}
```diff
{{diff}}
```
{{/each}}
---
## Implementation Steps
{{#each implementation_steps}}
{{@index}}. {{this}}
{{/each}}
---
## Risk Assessment
| Factor | Assessment |
|--------|------------|
| Complexity | {{complexity}} |
| Reversibility | {{#if reversible}}Yes{{else}}No{{/if}} |
| Breaking Changes | {{breaking_changes}} |
| Test Coverage | {{test_coverage}} |
**Overall Risk**: {{risk}}
---
## Verification Steps
{{#each verification_steps}}
- [ ] {{this}}
{{/each}}
---
## Rollback Plan
{{#if rollback_available}}
To rollback this fix:
```bash
{{rollback_command}}
```
{{else}}
_Rollback not available for this fix type._
{{/if}}
---
## Estimated Impact
{{estimated_impact}}
```
## Variable Reference
| Variable | Type | Source |
|----------|------|--------|
| `fix_id` | string | Generated ID (FIX-001) |
| `strategy` | string | Fix strategy name |
| `risk` | string | 'low' \| 'medium' \| 'high' |
| `issue_ids` | array | Related issue IDs |
| `description` | string | Human-readable description |
| `rationale` | string | Why this fix works |
| `changes` | array | File change objects |
| `implementation_steps` | array | Step-by-step guide |
| `verification_steps` | array | How to verify fix worked |
| `estimated_impact` | string | Expected improvement |
## Usage
```javascript
function renderFixProposal(fix) {
return `# Fix Proposal: ${fix.id}
**Strategy**: ${fix.strategy}
**Risk Level**: ${fix.risk}
**Issues Addressed**: ${fix.issue_ids.join(', ')}
---
## Description
${fix.description}
## Rationale
${fix.rationale}
---
## Affected Files
${fix.changes.map(change => `
### ${change.file}
**Action**: ${change.action}
\`\`\`diff
${change.diff || change.new_content?.slice(0, 200) || 'N/A'}
\`\`\`
`).join('\n')}
---
## Verification Steps
${fix.verification_steps.map(step => `- [ ] ${step}`).join('\n')}
---
## Estimated Impact
${fix.estimated_impact}
`;
}
```
## Fix Strategy Templates
### sliding_window
```markdown
## Description
Implement sliding window for conversation history to prevent unbounded growth.
## Changes
- Add MAX_HISTORY constant
- Modify history update logic to slice array
- Update state schema documentation
## Verification
- [ ] Run skill for 10+ iterations
- [ ] Verify history.length <= MAX_HISTORY
- [ ] Check no data loss for recent items
```
### constraint_injection
```markdown
## Description
Add explicit constraint section to each phase prompt.
## Changes
- Add [CONSTRAINTS] section template
- Reference state.original_requirements
- Add reminder before output section
## Verification
- [ ] Check constraints visible in all phases
- [ ] Test with specific constraint
- [ ] Verify output respects constraint
```
### error_wrapping
```markdown
## Description
Wrap all Task calls in try-catch with retry logic.
## Changes
- Create safeTask wrapper function
- Replace direct Task calls
- Add error logging to state
## Verification
- [ ] Simulate agent failure
- [ ] Verify graceful error handling
- [ ] Check retry logic
```

View File

@@ -0,0 +1,421 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Multi-CLI Discussion Artifact Schema",
"description": "Visualization-friendly output for multi-CLI collaborative discussion agent",
"type": "object",
"required": ["metadata", "discussionTopic", "relatedFiles", "planning", "decision", "decisionRecords"],
"properties": {
"metadata": {
"type": "object",
"required": ["artifactId", "roundId", "timestamp", "contributingAgents"],
"properties": {
"artifactId": {
"type": "string",
"description": "Unique ID for this artifact (e.g., 'MCP-auth-refactor-2026-01-13-round-1')"
},
"roundId": {
"type": "integer",
"minimum": 1,
"description": "Discussion round number"
},
"timestamp": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp"
},
"contributingAgents": {
"type": "array",
"items": {
"$ref": "#/definitions/AgentIdentifier"
},
"description": "Agents that contributed to this artifact"
},
"durationSeconds": {
"type": "integer",
"description": "Total duration in seconds"
},
"exportFormats": {
"type": "array",
"items": {
"type": "string",
"enum": ["markdown", "html"]
},
"description": "Supported export formats"
}
}
},
"discussionTopic": {
"type": "object",
"required": ["title", "description", "status"],
"properties": {
"title": {
"$ref": "#/definitions/I18nLabel"
},
"description": {
"$ref": "#/definitions/I18nLabel"
},
"scope": {
"type": "object",
"properties": {
"included": {
"type": "array",
"items": { "$ref": "#/definitions/I18nLabel" },
"description": "What's in scope"
},
"excluded": {
"type": "array",
"items": { "$ref": "#/definitions/I18nLabel" },
"description": "What's explicitly out of scope"
}
}
},
"keyQuestions": {
"type": "array",
"items": { "$ref": "#/definitions/I18nLabel" },
"description": "Questions being explored"
},
"status": {
"type": "string",
"enum": ["exploring", "analyzing", "debating", "decided", "blocked"],
"description": "Discussion status"
},
"tags": {
"type": "array",
"items": { "type": "string" },
"description": "Tags for filtering (e.g., ['auth', 'security', 'api'])"
}
}
},
"relatedFiles": {
"type": "object",
"properties": {
"fileTree": {
"type": "array",
"items": { "$ref": "#/definitions/FileNode" },
"description": "File tree structure"
},
"dependencyGraph": {
"type": "array",
"items": { "$ref": "#/definitions/DependencyEdge" },
"description": "Dependency relationships"
},
"impactSummary": {
"type": "array",
"items": { "$ref": "#/definitions/FileImpact" },
"description": "File impact summary"
}
}
},
"planning": {
"type": "object",
"properties": {
"functional": {
"type": "array",
"items": { "$ref": "#/definitions/Requirement" },
"description": "Functional requirements"
},
"nonFunctional": {
"type": "array",
"items": { "$ref": "#/definitions/Requirement" },
"description": "Non-functional requirements"
},
"acceptanceCriteria": {
"type": "array",
"items": { "$ref": "#/definitions/AcceptanceCriterion" },
"description": "Acceptance criteria"
}
}
},
"decision": {
"type": "object",
"required": ["status", "confidenceScore"],
"properties": {
"status": {
"type": "string",
"enum": ["pending", "decided", "conflict"],
"description": "Decision status"
},
"summary": {
"$ref": "#/definitions/I18nLabel"
},
"selectedSolution": {
"$ref": "#/definitions/Solution"
},
"rejectedAlternatives": {
"type": "array",
"items": { "$ref": "#/definitions/RejectedSolution" }
},
"confidenceScore": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Confidence score (0.0 to 1.0)"
}
}
},
"decisionRecords": {
"type": "object",
"properties": {
"timeline": {
"type": "array",
"items": { "$ref": "#/definitions/DecisionEvent" },
"description": "Timeline of decision events"
}
}
},
"_internal": {
"type": "object",
"description": "Internal analysis data (for debugging)",
"properties": {
"cli_analyses": {
"type": "array",
"items": { "$ref": "#/definitions/CLIAnalysis" }
},
"cross_verification": {
"$ref": "#/definitions/CrossVerification"
},
"convergence": {
"$ref": "#/definitions/ConvergenceMetrics"
}
}
}
},
"definitions": {
"I18nLabel": {
"type": "object",
"required": ["en", "zh"],
"properties": {
"en": { "type": "string" },
"zh": { "type": "string" }
},
"description": "Multi-language label for UI display"
},
"AgentIdentifier": {
"type": "object",
"required": ["name", "id"],
"properties": {
"name": {
"type": "string",
"enum": ["Gemini", "Codex", "Qwen", "Human", "System"]
},
"id": { "type": "string" }
}
},
"FileNode": {
"type": "object",
"required": ["path", "type"],
"properties": {
"path": { "type": "string" },
"type": {
"type": "string",
"enum": ["file", "directory"]
},
"modificationStatus": {
"type": "string",
"enum": ["added", "modified", "deleted", "unchanged"]
},
"impactScore": {
"type": "string",
"enum": ["critical", "high", "medium", "low"]
},
"children": {
"type": "array",
"items": { "$ref": "#/definitions/FileNode" }
},
"codeSnippet": { "$ref": "#/definitions/CodeSnippet" }
}
},
"DependencyEdge": {
"type": "object",
"required": ["source", "target", "relationship"],
"properties": {
"source": { "type": "string" },
"target": { "type": "string" },
"relationship": { "type": "string" }
}
},
"FileImpact": {
"type": "object",
"required": ["filePath", "score", "reasoning"],
"properties": {
"filePath": { "type": "string" },
"line": { "type": "integer" },
"score": {
"type": "string",
"enum": ["critical", "high", "medium", "low"]
},
"reasoning": { "$ref": "#/definitions/I18nLabel" }
}
},
"CodeSnippet": {
"type": "object",
"required": ["startLine", "endLine", "code"],
"properties": {
"startLine": { "type": "integer" },
"endLine": { "type": "integer" },
"code": { "type": "string" },
"language": { "type": "string" },
"comment": { "$ref": "#/definitions/I18nLabel" }
}
},
"Requirement": {
"type": "object",
"required": ["id", "description", "priority"],
"properties": {
"id": { "type": "string" },
"description": { "$ref": "#/definitions/I18nLabel" },
"priority": {
"type": "string",
"enum": ["critical", "high", "medium", "low"]
},
"source": { "type": "string" }
}
},
"AcceptanceCriterion": {
"type": "object",
"required": ["id", "description", "isMet"],
"properties": {
"id": { "type": "string" },
"description": { "$ref": "#/definitions/I18nLabel" },
"isMet": { "type": "boolean" }
}
},
"Solution": {
"type": "object",
"required": ["id", "title", "description"],
"properties": {
"id": { "type": "string" },
"title": { "$ref": "#/definitions/I18nLabel" },
"description": { "$ref": "#/definitions/I18nLabel" },
"pros": {
"type": "array",
"items": { "$ref": "#/definitions/I18nLabel" }
},
"cons": {
"type": "array",
"items": { "$ref": "#/definitions/I18nLabel" }
},
"estimatedEffort": { "$ref": "#/definitions/I18nLabel" },
"risk": {
"type": "string",
"enum": ["critical", "high", "medium", "low"]
},
"affectedFiles": {
"type": "array",
"items": { "$ref": "#/definitions/FileImpact" }
},
"sourceCLIs": {
"type": "array",
"items": { "type": "string" }
}
}
},
"RejectedSolution": {
"allOf": [
{ "$ref": "#/definitions/Solution" },
{
"type": "object",
"required": ["rejectionReason"],
"properties": {
"rejectionReason": { "$ref": "#/definitions/I18nLabel" }
}
}
]
},
"DecisionEvent": {
"type": "object",
"required": ["eventId", "timestamp", "type", "contributor", "summary"],
"properties": {
"eventId": { "type": "string" },
"timestamp": {
"type": "string",
"format": "date-time"
},
"type": {
"type": "string",
"enum": ["proposal", "argument", "agreement", "disagreement", "decision", "reversal"]
},
"contributor": { "$ref": "#/definitions/AgentIdentifier" },
"summary": { "$ref": "#/definitions/I18nLabel" },
"evidence": {
"type": "array",
"items": { "$ref": "#/definitions/Evidence" }
},
"reversibility": {
"type": "string",
"enum": ["easily_reversible", "requires_refactoring", "irreversible"]
}
}
},
"Evidence": {
"type": "object",
"required": ["type", "content", "description"],
"properties": {
"type": {
"type": "string",
"enum": ["link", "code_snippet", "log_output", "benchmark", "reference"]
},
"content": {},
"description": { "$ref": "#/definitions/I18nLabel" }
}
},
"CLIAnalysis": {
"type": "object",
"required": ["tool", "perspective", "feasibility_score"],
"properties": {
"tool": {
"type": "string",
"enum": ["gemini", "codex", "qwen"]
},
"perspective": { "type": "string" },
"feasibility_score": {
"type": "number",
"minimum": 0,
"maximum": 1
},
"findings": {
"type": "array",
"items": { "type": "string" }
},
"implementation_approaches": { "type": "array" },
"technical_concerns": {
"type": "array",
"items": { "type": "string" }
},
"code_locations": {
"type": "array",
"items": { "$ref": "#/definitions/FileImpact" }
}
}
},
"CrossVerification": {
"type": "object",
"properties": {
"agreements": {
"type": "array",
"items": { "type": "string" }
},
"disagreements": {
"type": "array",
"items": { "type": "string" }
},
"resolution": { "type": "string" }
}
},
"ConvergenceMetrics": {
"type": "object",
"properties": {
"score": {
"type": "number",
"minimum": 0,
"maximum": 1
},
"new_insights": { "type": "boolean" },
"recommendation": {
"type": "string",
"enum": ["continue", "converged", "user_input_needed"]
}
}
}
}
}
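To ground the definitions above, here is a minimal sketch of an instance that should validate against this schema. Every ID, label, timestamp, and score below is invented for illustration; only the field names and enum values come from the schema itself.

```typescript
// Hypothetical example only - a minimal document satisfying the schema above
// (decision requires status + confidenceScore; I18nLabel requires en + zh).
const exampleDiscussion = {
  decision: {
    status: "decided",
    confidenceScore: 0.82,
    summary: { en: "Adopt token-based auth", zh: "采用基于令牌的认证" },
    selectedSolution: {
      id: "SOL-001",
      title: { en: "JWT middleware", zh: "JWT 中间件" },
      description: { en: "Validate tokens in middleware", zh: "在中间件中校验令牌" },
      risk: "low",
      sourceCLIs: ["gemini", "codex"]
    }
  },
  decisionRecords: {
    timeline: [{
      eventId: "EVT-001",
      timestamp: "2026-01-15T10:00:00Z",
      type: "decision",
      contributor: { name: "System", id: "synthesizer" },
      summary: { en: "Converged after round 2", zh: "第二轮后收敛" }
    }]
  }
};
```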

View File

@@ -18,33 +18,6 @@
All tool availability, model selection, and routing are defined in this configuration file.
### Configuration Schema
```json
{
"version": "3.0.0",
"models": {
"<tool-id>": ["<model-1>", "<model-2>", ...]
},
"tools": {
"<tool-id>": {
"enabled": true|false,
"primaryModel": "<model-id>",
"secondaryModel": "<model-id>",
"tags": ["<tag-1>", "<tag-2>", ...]
}
},
"customEndpoints": [
{
"id": "<endpoint-id>",
"name": "<display-name>",
"enabled": true|false,
"tags": ["<tag-1>", "<tag-2>", ...]
}
]
}
```
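For illustration, a filled-in configuration following this shape; the tool ID, model names, and endpoint are placeholders, not shipped defaults.

```typescript
// Hypothetical example only - mirrors the template above with invented values.
const exampleConfig = {
  version: "3.0.0",
  models: {
    gemini: ["gemini-pro", "gemini-flash"]
  },
  tools: {
    gemini: {
      enabled: true,
      primaryModel: "gemini-pro",
      secondaryModel: "gemini-flash",
      tags: ["analysis", "search"]
    }
  },
  customEndpoints: [
    { id: "local-llm", name: "Local LLM", enabled: false, tags: ["offline"] }
  ]
};
```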
### Configuration Fields
| Field | Description |
@@ -492,20 +465,6 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
---
## Execution Configuration
### Dynamic Timeout Allocation
**Minimum timeout: 5 minutes (300000ms)** - Never set below this threshold.
**Timeout Ranges**:
- **Simple** (analysis, search): 5-10min (300000-600000ms)
- **Medium** (refactoring, documentation): 10-20min (600000-1200000ms)
- **Complex** (implementation, migration): 20-60min (1200000-3600000ms)
- **Heavy** (large codebase, multi-file): 60-120min (3600000-7200000ms)
**Auto-detection**: Analyze the PURPOSE and TASK fields to determine the timeout.
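Since timeout control now lives with the external caller rather than inside ccw, in practice this looks like wrapping the invocation, for example with coreutils `timeout`. A sketch, assuming the doc's `shell_command` convention; the exact ccw invocation and the 600s (10min, the "Simple" upper bound) are illustrative:

```typescript
// Sketch only: the caller owns the timeout; ccw itself has no --timeout flag.
const result = shell_command({
  command: "timeout 600 ccw cli exec --tool gemini 'analyze src/ structure'"
});
```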
### Permission Framework
**Single-Use Authorization**: Each execution requires explicit user instruction. Previous authorization does NOT carry over.

View File

@@ -1,6 +1,6 @@
---
description: Execute all solutions from issue queue with git commit after each solution
argument-hint: "[--worktree [<existing-path>]] [--queue <queue-id>]"
argument-hint: "--queue <queue-id> [--worktree [<existing-path>]]"
---
# Issue Execute (Codex Version)
@@ -9,6 +9,49 @@ argument-hint: "[--worktree [<existing-path>]] [--queue <queue-id>]"
**Serial Execution**: Execute solutions ONE BY ONE from the issue queue via `ccw issue next`. For each solution, complete all tasks sequentially (implement → test → verify), then commit once per solution with a formatted summary. Continue autonomously until the queue is empty.
## Queue ID Requirement (MANDATORY)
**Queue ID is REQUIRED.** You MUST specify which queue to execute via `--queue <queue-id>`.
### If Queue ID Not Provided
When `--queue` parameter is missing, you MUST:
1. **List available queues** by running:
```javascript
const result = shell_command({ command: "ccw issue queue list --brief --json" })
```
2. **Parse and display queues** to user:
```
Available Queues:
ID Status Progress Issues
-----------------------------------------------------------
→ QUE-20251215-001 active 3/10 ISS-001, ISS-002
QUE-20251210-002 active 0/5 ISS-003
QUE-20251205-003 completed 8/8 ISS-004
```
3. **Stop and ask user** to specify which queue to execute:
```javascript
AskUserQuestion({
questions: [{
question: "Which queue would you like to execute?",
header: "Queue",
multiSelect: false,
options: [
// Generate from parsed queue list - only show active/pending queues
{ label: "QUE-20251215-001", description: "active, 3/10 completed, Issues: ISS-001, ISS-002" },
{ label: "QUE-20251210-002", description: "active, 0/5 completed, Issues: ISS-003" }
]
}]
})
```
4. **After user selection**, continue execution with the selected queue ID.
**DO NOT auto-select queues.** Explicit user confirmation is required to prevent accidental execution of the wrong queue.
## Worktree Mode (Recommended for Parallel Execution)
When `--worktree` is specified, create or use a git worktree to isolate work.
@@ -77,7 +120,8 @@ cd "${WORKTREE_PATH}"
**Worktree Execution Pattern**:
```
1. [WORKTREE] ccw issue next → auto-redirects to main repo's .workflow/
0. [MAIN REPO] Validate queue ID (--queue required, or prompt user to select)
1. [WORKTREE] ccw issue next --queue <queue-id> → auto-redirects to main repo's .workflow/
2. [WORKTREE] Implement all tasks, run tests, git commit
3. [WORKTREE] ccw issue done <item_id> → auto-redirects to main repo
4. Repeat from step 1
@@ -177,10 +221,12 @@ echo "Branch '${WORKTREE_NAME}' kept. Merge manually when ready."
## Execution Flow
```
INIT: Fetch first solution via ccw issue next
STEP 0: Validate queue ID (--queue required, or prompt user to select)
INIT: Fetch first solution via ccw issue next --queue <queue-id>
WHILE solution exists:
1. Receive solution JSON from ccw issue next
1. Receive solution JSON from ccw issue next --queue <queue-id>
2. Execute all tasks in solution.tasks sequentially:
FOR each task:
- IMPLEMENT: Follow task.implementation steps
@@ -188,7 +234,7 @@ WHILE solution exists:
- VERIFY: Check task.acceptance criteria
3. COMMIT: Stage all files, commit once with formatted summary
4. Report completion via ccw issue done <item_id>
5. Fetch next solution via ccw issue next
5. Fetch next solution via ccw issue next --queue <queue-id>
WHEN queue empty:
Output final summary
@@ -196,11 +242,14 @@ WHEN queue empty:
## Step 1: Fetch First Solution
**Prerequisite**: Queue ID must be determined (either from `--queue` argument or user selection in Step 0).
Run this command to get your first solution:
```javascript
// ccw auto-detects worktree and uses main repo's .workflow/
const result = shell_command({ command: "ccw issue next" })
// QUEUE_ID is required - obtained from --queue argument or user selection
const result = shell_command({ command: `ccw issue next --queue ${QUEUE_ID}` })
```
This returns JSON with the full solution definition:
@@ -494,11 +543,12 @@ shell_command({
## Step 5: Continue to Next Solution
Fetch next solution:
Fetch the next solution (using the same QUEUE_ID from Step 0/1):
```javascript
// ccw auto-detects worktree
const result = shell_command({ command: "ccw issue next" })
// Continue using the same QUEUE_ID throughout execution
const result = shell_command({ command: `ccw issue next --queue ${QUEUE_ID}` })
```
**Output progress:**
@@ -567,18 +617,28 @@ When `ccw issue next` returns `{ "status": "empty" }`:
| Command | Purpose |
|---------|---------|
| `ccw issue next` | Fetch next solution from queue (auto-selects from active queues) |
| `ccw issue next --queue QUE-xxx` | Fetch from specific queue |
| `ccw issue queue list --brief --json` | List all queues (for queue selection) |
| `ccw issue next --queue QUE-xxx` | Fetch next solution from specified queue (**--queue required**) |
| `ccw issue done <id>` | Mark solution complete with result (auto-detects queue) |
| `ccw issue done <id> --fail --reason "..."` | Mark solution failed with structured reason |
| `ccw issue retry --queue QUE-xxx` | Reset failed items in specific queue |
## Start Execution
Begin by running:
**Step 0: Validate Queue ID**
If `--queue` was NOT provided in the command arguments:
1. Run `ccw issue queue list --brief --json`
2. Display available queues to user
3. Ask user to select a queue via `AskUserQuestion`
4. Store selected queue ID for all subsequent commands
**Step 1: Fetch First Solution**
Once queue ID is confirmed, begin by running:
```bash
ccw issue next
ccw issue next --queue <queue-id>
```
Then follow the solution lifecycle for each solution until the queue is empty.
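Taken together, the lifecycle reduces to a simple loop. A sketch only, not the actual runner: `shell_command`, the `stdout` result shape, and the `item_id` field on the solution JSON are assumptions inferred from the examples above.

```typescript
// Sketch of the per-solution loop described above.
let res = shell_command({ command: `ccw issue next --queue ${QUEUE_ID}` });
while (JSON.parse(res.stdout).status !== "empty") {
  const solution = JSON.parse(res.stdout);
  // ... implement, test, and verify each task, then commit once ...
  // item_id field assumed from `ccw issue done <item_id>` above
  shell_command({ command: `ccw issue done ${solution.item_id}` });
  res = shell_command({ command: `ccw issue next --queue ${QUEUE_ID}` });
}
```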

View File

@@ -5,6 +5,21 @@ All notable changes to Claude Code Workflow (CCW) will be documented in this fil
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [6.3.29] - 2026-01-15
### ✨ New Features | 新功能
#### Multi-CLI Task & Discussion Enhancements | 多CLI任务与讨论增强
- **Added**: Internationalization support for multi-CLI tasks and discussion tabs | 多CLI任务和讨论标签的国际化支持
- **Added**: Collapsible sections for discussion and summary tabs with enhanced layout | 讨论和摘要标签的可折叠区域及增强布局
- **Added**: Post-Completion Expansion feature for execution commands | 执行命令的完成后扩展功能
#### Session & UI Improvements | 会话与UI改进
- **Enhanced**: Multi-CLI session handling with improved UI updates | 多CLI会话处理及UI更新优化
- **Refactored**: Code structure for improved readability and maintainability | 代码结构重构以提升可读性和可维护性
---
## [6.3.19] - 2026-01-12
### 🚀 Major New Features | 主要新功能

View File

@@ -281,6 +281,9 @@ CCW provides comprehensive documentation to help you get started quickly and mas
- [**Dashboard Guide**](DASHBOARD_GUIDE.md) - Dashboard user guide and interface overview
- [**Dashboard Operations**](DASHBOARD_OPERATIONS_EN.md) - Detailed operation instructions
### 🔄 **Workflow Guides**
- [**Issue Loop Workflow**](docs/workflows/ISSUE_LOOP_WORKFLOW.md) - Batch issue processing with two-phase lifecycle (accumulate → resolve)
### 🏗️ **Architecture & Design**
- [**Architecture Overview**](ARCHITECTURE.md) - System design and core components
- [**Project Introduction**](PROJECT_INTRODUCTION.md) - Detailed project overview

View File

@@ -177,7 +177,7 @@ export function run(argv: string[]): void {
.option('--model <model>', 'Model override')
.option('--cd <path>', 'Working directory')
.option('--includeDirs <dirs>', 'Additional directories (--include-directories for gemini/qwen, --add-dir for codex/claude)')
.option('--timeout <ms>', 'Timeout in milliseconds (0=disabled, controlled by external caller)', '0')
// --timeout removed - controlled by external caller (bash timeout)
.option('--stream', 'Enable streaming output (default: non-streaming with caching)')
.option('--limit <n>', 'History limit')
.option('--status <status>', 'Filter by status')

View File

@@ -116,7 +116,7 @@ interface CliExecOptions {
model?: string;
cd?: string;
includeDirs?: string;
timeout?: string;
// timeout removed - controlled by external caller (bash timeout)
stream?: boolean; // Enable streaming (default: false, caches output)
resume?: string | boolean; // true = last, string = execution ID, comma-separated for merge
id?: string; // Custom execution ID (e.g., IMPL-001-step1)
@@ -535,7 +535,7 @@ async function statusAction(debug?: boolean): Promise<void> {
* @param {Object} options - CLI options
*/
async function execAction(positionalPrompt: string | undefined, options: CliExecOptions): Promise<void> {
const { prompt: optionPrompt, file, tool = 'gemini', mode = 'analysis', model, cd, includeDirs, timeout, stream, resume, id, noNative, cache, injectMode, debug } = options;
const { prompt: optionPrompt, file, tool = 'gemini', mode = 'analysis', model, cd, includeDirs, stream, resume, id, noNative, cache, injectMode, debug } = options;
// Enable debug mode if --debug flag is set
if (debug) {
@@ -842,7 +842,7 @@ async function execAction(positionalPrompt: string | undefined, options: CliExec
model,
cd,
includeDirs,
timeout: timeout ? parseInt(timeout, 10) : 0, // 0 = no internal timeout, controlled by external caller
// timeout removed - controlled by external caller (bash timeout)
resume,
id, // custom execution ID
noNative,
@@ -1221,7 +1221,7 @@ export async function cliCommand(
console.log(chalk.gray(' --model <model> Model override'));
console.log(chalk.gray(' --cd <path> Working directory'));
console.log(chalk.gray(' --includeDirs <dirs> Additional directories'));
console.log(chalk.gray(' --timeout <ms> Timeout (default: 0=disabled)'));
// --timeout removed - controlled by external caller (bash timeout)
console.log(chalk.gray(' --resume [id] Resume previous session'));
console.log(chalk.gray(' --cache <items> Cache: comma-separated @patterns and text'));
console.log(chalk.gray(' --inject-mode <m> Inject mode: none, full, progressive'));

View File

@@ -77,6 +77,7 @@ interface DashboardData {
liteTasks: {
litePlan: unknown[];
liteFix: unknown[];
multiCliPlan: unknown[];
};
reviewData: ReviewData | null;
projectOverview: ProjectOverview | null;
@@ -88,6 +89,7 @@ interface DashboardData {
reviewFindings: number;
litePlanCount: number;
liteFixCount: number;
multiCliPlanCount: number;
};
}
@@ -211,7 +213,8 @@ export async function aggregateData(sessions: ScanSessionsResult, workflowDir: s
archivedSessions: [],
liteTasks: {
litePlan: [],
liteFix: []
liteFix: [],
multiCliPlan: []
},
reviewData: null,
projectOverview: null,
@@ -222,7 +225,8 @@ export async function aggregateData(sessions: ScanSessionsResult, workflowDir: s
completedTasks: 0,
reviewFindings: 0,
litePlanCount: 0,
liteFixCount: 0
liteFixCount: 0,
multiCliPlanCount: 0
}
};
@@ -257,6 +261,7 @@ export async function aggregateData(sessions: ScanSessionsResult, workflowDir: s
data.liteTasks = liteTasks;
data.statistics.litePlanCount = liteTasks.litePlan.length;
data.statistics.liteFixCount = liteTasks.liteFix.length;
data.statistics.multiCliPlanCount = liteTasks.multiCliPlan.length;
} catch (err) {
console.error('Error scanning lite tasks:', (err as Error).message);
}
@@ -584,6 +589,18 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
const statistics = (projectData.statistics || developmentStatus?.statistics) as Record<string, unknown> | undefined;
const metadata = projectData._metadata as Record<string, unknown> | undefined;
// Helper to extract string array from mixed array (handles both string[] and {name: string}[])
const extractStringArray = (arr: unknown[] | undefined): string[] => {
if (!arr) return [];
return arr.map(item => {
if (typeof item === 'string') return item;
if (typeof item === 'object' && item !== null && 'name' in item) {
return String((item as { name: unknown }).name);
}
return String(item);
});
};
// Load guidelines from separate file if exists
let guidelines: ProjectGuidelines | null = null;
if (existsSync(guidelinesFile)) {
@@ -628,17 +645,17 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
description: (overview?.description as string) || '',
initializedAt: (projectData.initialized_at as string) || null,
technologyStack: {
languages: (technologyStack?.languages as string[]) || [],
frameworks: (technologyStack?.frameworks as string[]) || [],
build_tools: (technologyStack?.build_tools as string[]) || [],
test_frameworks: (technologyStack?.test_frameworks as string[]) || []
languages: extractStringArray(technologyStack?.languages),
frameworks: extractStringArray(technologyStack?.frameworks),
build_tools: extractStringArray(technologyStack?.build_tools),
test_frameworks: extractStringArray(technologyStack?.test_frameworks)
},
architecture: {
style: (architecture?.style as string) || 'Unknown',
layers: (architecture?.layers as string[]) || [],
patterns: (architecture?.patterns as string[]) || []
layers: extractStringArray(architecture?.layers as unknown[] | undefined),
patterns: extractStringArray(architecture?.patterns as unknown[] | undefined)
},
keyComponents: (overview?.key_components as string[]) || [],
keyComponents: extractStringArray(overview?.key_components as unknown[] | undefined),
features: (projectData.features as unknown[]) || [],
developmentIndex: {
feature: (developmentIndex?.feature as unknown[]) || [],

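To make the normalization concrete, a minimal sketch of what the `extractStringArray` helper above produces for the mixed `languages` shape; the sample data is invented:

```typescript
// Mirrors the helper above: strings pass through, objects yield their `name`.
const languages = [
  "Markdown",
  { name: "TypeScript", file_count: 120, primary: true }
];
// extractStringArray(languages) => ["Markdown", "TypeScript"]
```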
View File

@@ -60,9 +60,47 @@ interface LiteSession {
progress: Progress;
}
// Multi-CLI specific session state from session-state.json
interface MultiCliSessionState {
session_id: string;
task_description: string;
status: string;
current_phase: number;
phases: Record<string, { status: string; rounds_completed?: number }>;
ace_context?: { relevant_files: string[]; detected_patterns: string[] };
user_decisions?: Array<{ round: number; decision: string; selected: string }>;
updated_at?: string;
}
// Discussion topic structure for frontend rendering
interface DiscussionTopic {
title: string;
description: string;
scope: { included: string[]; excluded: string[] };
keyQuestions: string[];
status: string;
tags: string[];
}
// Extended session interface for multi-cli-plan
interface MultiCliSession extends LiteSession {
roundCount: number;
topicTitle: string;
status: string;
metadata: {
roundId: number;
timestamp: string;
currentPhase: number;
};
discussionTopic: DiscussionTopic;
rounds: RoundSynthesis[];
latestSynthesis: RoundSynthesis | null;
}
interface LiteTasks {
litePlan: LiteSession[];
liteFix: LiteSession[];
multiCliPlan: LiteSession[];
}
interface LiteTaskDetail {
@@ -84,13 +122,15 @@ interface LiteTaskDetail {
export async function scanLiteTasks(workflowDir: string): Promise<LiteTasks> {
const litePlanDir = join(workflowDir, '.lite-plan');
const liteFixDir = join(workflowDir, '.lite-fix');
const multiCliDir = join(workflowDir, '.multi-cli-plan');
const [litePlan, liteFix] = await Promise.all([
const [litePlan, liteFix, multiCliPlan] = await Promise.all([
scanLiteDir(litePlanDir, 'lite-plan'),
scanLiteDir(liteFixDir, 'lite-fix'),
scanMultiCliDir(multiCliDir),
]);
return { litePlan, liteFix };
return { litePlan, liteFix, multiCliPlan };
}
/**
@@ -142,6 +182,427 @@ async function scanLiteDir(dir: string, type: string): Promise<LiteSession[]> {
}
}
/**
* Load session-state.json from multi-cli session directory
* @param sessionPath - Session directory path
* @returns Session state or null if not found
*/
async function loadSessionState(sessionPath: string): Promise<MultiCliSessionState | null> {
const statePath = join(sessionPath, 'session-state.json');
try {
const content = await readFile(statePath, 'utf8');
return JSON.parse(content);
} catch {
return null;
}
}
/**
* Build discussion topic structure from session state and synthesis
* @param state - Session state from session-state.json
* @param synthesis - Latest round synthesis
* @returns Discussion topic for frontend rendering
*/
function buildDiscussionTopic(
state: MultiCliSessionState | null,
synthesis: RoundSynthesis | null
): DiscussionTopic {
const keyQuestions = synthesis?.clarification_questions || [];
const solutions = synthesis?.solutions || [];
return {
title: state?.task_description || 'Discussion Topic',
description: solutions[0]?.summary || '',
scope: {
included: state?.ace_context?.relevant_files || [],
excluded: [],
},
keyQuestions,
status: state?.status || 'analyzing',
tags: solutions.map((s) => s.name).slice(0, 3),
};
}
/**
* Scan multi-cli-plan directory for sessions
* @param dir - Directory path to .multi-cli-plan
* @returns Array of multi-cli sessions with extended metadata
*/
async function scanMultiCliDir(dir: string): Promise<MultiCliSession[]> {
try {
const entries = await readdir(dir, { withFileTypes: true });
const sessions = (await Promise.all(
entries
.filter((entry) => entry.isDirectory())
.map(async (entry) => {
const sessionPath = join(dir, entry.name);
const [createdAt, syntheses, sessionState, planJson] = await Promise.all([
getCreatedTime(sessionPath),
loadRoundSyntheses(sessionPath),
loadSessionState(sessionPath),
loadPlanJson(sessionPath),
]);
// Extract data from syntheses
const roundCount = syntheses.length;
const latestSynthesis = syntheses.length > 0 ? syntheses[syntheses.length - 1] : null;
// Calculate progress based on round count and convergence
const progress = calculateMultiCliProgress(syntheses);
// Build discussion topic for frontend
const discussionTopic = buildDiscussionTopic(sessionState, latestSynthesis);
// Determine status from session state or synthesis convergence
const status = sessionState?.status ||
(latestSynthesis?.convergence?.recommendation === 'converged' ? 'converged' : 'analyzing');
// Use plan.json if available, otherwise extract from synthesis
const plan = planJson || latestSynthesis;
// Use tasks from plan.json if available, otherwise extract from synthesis
const tasks = (planJson as any)?.tasks?.length > 0
? normalizePlanJsonTasks((planJson as any).tasks)
: extractTasksFromSyntheses(syntheses);
const session: MultiCliSession = {
id: entry.name,
type: 'multi-cli-plan',
path: sessionPath,
createdAt,
plan,
tasks,
progress,
// Extended multi-cli specific fields
roundCount,
topicTitle: sessionState?.task_description || 'Discussion Topic',
status,
metadata: {
roundId: roundCount,
timestamp: sessionState?.updated_at || createdAt,
currentPhase: sessionState?.current_phase || 1,
},
discussionTopic,
rounds: syntheses,
latestSynthesis,
};
return session;
}),
))
.filter((session): session is MultiCliSession => session !== null)
.sort((a, b) => new Date(b.createdAt).getTime() - new Date(a.createdAt).getTime());
return sessions;
} catch (err: any) {
if (err?.code === 'ENOENT') return [];
console.error(`Error scanning ${dir}:`, err?.message || String(err));
return [];
}
}
// NEW Schema types for multi-cli synthesis
interface SolutionFileAction {
file: string;
line: number;
action: 'modify' | 'create' | 'delete';
}
interface SolutionTask {
id: string;
name: string;
depends_on: string[];
files: SolutionFileAction[];
key_point: string | null;
}
interface SolutionImplementationPlan {
approach: string;
tasks: SolutionTask[];
execution_flow: string;
milestones: string[];
}
interface SolutionDependencies {
internal: string[];
external: string[];
}
interface Solution {
name: string;
source_cli: string[];
feasibility: number; // 0-1
effort: 'low' | 'medium' | 'high';
risk: 'low' | 'medium' | 'high';
summary: string;
implementation_plan: SolutionImplementationPlan;
dependencies: SolutionDependencies;
technical_concerns: string[];
}
interface SynthesisConvergence {
score: number;
new_insights: boolean;
recommendation: 'converged' | 'continue' | 'user_input_needed';
}
interface SynthesisCrossVerification {
agreements: string[];
disagreements: string[];
resolution: string;
}
interface RoundSynthesis {
round: number;
// NEW schema fields
solutions?: Solution[];
convergence?: SynthesisConvergence;
cross_verification?: SynthesisCrossVerification;
clarification_questions?: string[];
// OLD schema fields (backward compatibility)
converged?: boolean;
tasks?: unknown[];
synthesis?: unknown;
[key: string]: unknown;
}
/**
* Load all synthesis.json files from rounds subdirectories
* @param sessionPath - Session directory path
* @returns Array of synthesis objects sorted by round number
*/
async function loadRoundSyntheses(sessionPath: string): Promise<RoundSynthesis[]> {
const roundsDir = join(sessionPath, 'rounds');
const syntheses: RoundSynthesis[] = [];
try {
const roundEntries = await readdir(roundsDir, { withFileTypes: true });
const roundDirs = roundEntries
.filter((entry) => entry.isDirectory() && /^\d+$/.test(entry.name))
.map((entry) => ({
name: entry.name,
num: parseInt(entry.name, 10),
}))
.sort((a, b) => a.num - b.num);
for (const roundDir of roundDirs) {
const synthesisPath = join(roundsDir, roundDir.name, 'synthesis.json');
try {
const content = await readFile(synthesisPath, 'utf8');
const synthesis = JSON.parse(content) as RoundSynthesis;
synthesis.round = roundDir.num;
syntheses.push(synthesis);
} catch (e) {
console.warn('Failed to parse synthesis file:', synthesisPath, (e as Error).message);
}
}
} catch (e) {
// Ignore ENOENT errors (directory doesn't exist), warn on others
if ((e as NodeJS.ErrnoException).code !== 'ENOENT') {
console.warn('Failed to read rounds directory:', roundsDir, (e as Error).message);
}
}
return syntheses;
}
// Extended Progress interface for multi-cli sessions
interface MultiCliProgress extends Progress {
convergenceScore?: number;
recommendation?: 'converged' | 'continue' | 'user_input_needed';
solutionsCount?: number;
avgFeasibility?: number;
}
/**
* Calculate progress for multi-cli-plan sessions
* Uses new convergence.score and convergence.recommendation when available
* Falls back to old converged boolean for backward compatibility
* @param syntheses - Array of round syntheses
* @returns Progress info with convergence metrics
*/
function calculateMultiCliProgress(syntheses: RoundSynthesis[]): MultiCliProgress {
if (syntheses.length === 0) {
return { total: 0, completed: 0, percentage: 0 };
}
const latestSynthesis = syntheses[syntheses.length - 1];
// NEW schema: Use convergence object
if (latestSynthesis.convergence) {
const { score, recommendation } = latestSynthesis.convergence;
const isConverged = recommendation === 'converged';
// Calculate solutions metrics
const solutions = latestSynthesis.solutions || [];
const solutionsCount = solutions.length;
const avgFeasibility = solutionsCount > 0
? solutions.reduce((sum, s) => sum + (s.feasibility || 0), 0) / solutionsCount
: 0;
// Total is based on rounds, percentage derived from convergence score
const total = syntheses.length;
const completed = isConverged ? total : Math.max(0, total - 1);
const percentage = isConverged ? 100 : Math.round(score * 100);
return {
total,
completed,
percentage,
convergenceScore: score,
recommendation,
solutionsCount,
avgFeasibility: Math.round(avgFeasibility * 100) / 100
};
}
// OLD schema: Fallback to converged boolean
const isConverged = latestSynthesis.converged === true;
const total = syntheses.length;
const completed = isConverged ? total : Math.max(0, total - 1);
const percentage = isConverged ? 100 : Math.round((completed / Math.max(total, 1)) * 100);
return { total, completed, percentage };
}
/**
* Extract tasks from synthesis objects
* NEW schema: Extract from solutions[].implementation_plan.tasks
* OLD schema: Extract from tasks[] array directly
* @param syntheses - Array of round syntheses
* @returns Normalized tasks from latest synthesis
*/
function extractTasksFromSyntheses(syntheses: RoundSynthesis[]): NormalizedTask[] {
if (syntheses.length === 0) return [];
const latestSynthesis = syntheses[syntheses.length - 1];
// NEW schema: Extract tasks from solutions
if (latestSynthesis.solutions && Array.isArray(latestSynthesis.solutions)) {
const allTasks: NormalizedTask[] = [];
for (const solution of latestSynthesis.solutions) {
const implPlan = solution.implementation_plan;
if (!implPlan?.tasks || !Array.isArray(implPlan.tasks)) continue;
for (const task of implPlan.tasks) {
const normalizedTask = normalizeSolutionTask(task, solution);
if (normalizedTask) {
allTasks.push(normalizedTask);
}
}
}
// Sort by task ID
return allTasks.sort((a, b) => {
const aNum = parseInt(a.id?.replace(/\D/g, '') || '0');
const bNum = parseInt(b.id?.replace(/\D/g, '') || '0');
return aNum - bNum;
});
}
// OLD schema: Extract from tasks array directly
const tasks = latestSynthesis.tasks;
if (!Array.isArray(tasks)) return [];
return tasks
.map((task) => normalizeTask(task))
.filter((task): task is NormalizedTask => task !== null);
}
/**
* Normalize a solution task from NEW schema to NormalizedTask
* @param task - SolutionTask from new schema
* @param solution - Parent solution for context
* @returns Normalized task
*/
function normalizeSolutionTask(task: SolutionTask, solution: Solution): NormalizedTask | null {
if (!task || !task.id) return null;
return {
id: task.id,
title: task.name || 'Untitled Task',
status: (task as unknown as { status?: string }).status || 'pending',
meta: {
type: 'implementation',
agent: null,
scope: solution.name || null,
module: null
},
context: {
requirements: task.key_point ? [task.key_point] : [],
focus_paths: task.files?.map(f => f.file) || [],
acceptance: [],
depends_on: task.depends_on || []
},
flow_control: {
implementation_approach: task.files?.map((f, i) => ({
step: `Step ${i + 1}`,
action: `${f.action} ${f.file}${f.line ? ` at line ${f.line}` : ''}`
})) || []
},
_raw: {
task,
solution: {
name: solution.name,
source_cli: solution.source_cli,
feasibility: solution.feasibility,
effort: solution.effort,
risk: solution.risk
}
}
};
}
/**
* Normalize tasks from plan.json format to NormalizedTask[]
* plan.json tasks have: id, name, description, depends_on, status, files, key_point, acceptance_criteria
* @param tasks - Tasks array from plan.json
* @returns Normalized tasks
*/
function normalizePlanJsonTasks(tasks: unknown[]): NormalizedTask[] {
if (!Array.isArray(tasks)) return [];
return tasks.map((task: any): NormalizedTask | null => {
if (!task || !task.id) return null;
return {
id: task.id,
title: task.name || task.title || 'Untitled Task',
status: task.status || 'pending',
meta: {
type: 'implementation',
agent: null,
scope: task.scope || null,
module: null
},
context: {
requirements: task.description ? [task.description] : (task.key_point ? [task.key_point] : []),
focus_paths: task.files?.map((f: any) => typeof f === 'string' ? f : f.file) || [],
acceptance: task.acceptance_criteria || [],
depends_on: task.depends_on || []
},
flow_control: {
implementation_approach: task.files?.map((f: any, i: number) => {
const filePath = typeof f === 'string' ? f : f.file;
const action = typeof f === 'string' ? 'modify' : f.action;
const line = typeof f === 'string' ? null : f.line;
return {
step: `Step ${i + 1}`,
action: `${action} ${filePath}${line ? ` at line ${line}` : ''}`
};
}) || []
},
_raw: {
task,
estimated_complexity: task.estimated_complexity
}
};
}).filter((task): task is NormalizedTask => task !== null);
}
/**
* Load plan.json or fix-plan.json from session directory
* @param sessionPath - Session directory path
@@ -368,14 +829,19 @@ function calculateProgress(tasks: NormalizedTask[]): Progress {
/**
* Get detailed lite task info
* @param workflowDir - Workflow directory
* @param type - 'lite-plan' or 'lite-fix'
* @param type - 'lite-plan', 'lite-fix', or 'multi-cli-plan'
* @param sessionId - Session ID
* @returns Detailed task info
*/
export async function getLiteTaskDetail(workflowDir: string, type: string, sessionId: string): Promise<LiteTaskDetail | null> {
const dir = type === 'lite-plan'
? join(workflowDir, '.lite-plan', sessionId)
: join(workflowDir, '.lite-fix', sessionId);
let dir: string;
if (type === 'lite-plan') {
dir = join(workflowDir, '.lite-plan', sessionId);
} else if (type === 'multi-cli-plan') {
dir = join(workflowDir, '.multi-cli-plan', sessionId);
} else {
dir = join(workflowDir, '.lite-fix', sessionId);
}
try {
const stats = await stat(dir);
@@ -384,6 +850,29 @@ export async function getLiteTaskDetail(workflowDir: string, type: string, sessi
return null;
}
// For multi-cli-plan, use synthesis-based loading
if (type === 'multi-cli-plan') {
const [syntheses, explorations, clarifications] = await Promise.all([
loadRoundSyntheses(dir),
loadExplorations(dir),
loadClarifications(dir),
]);
const latestSynthesis = syntheses.length > 0 ? syntheses[syntheses.length - 1] : null;
const detail: LiteTaskDetail = {
id: sessionId,
type,
path: dir,
plan: latestSynthesis,
tasks: extractTasksFromSyntheses(syntheses),
explorations,
clarifications,
};
return detail;
}
const [plan, tasks, explorations, clarifications, diagnoses] = await Promise.all([
loadPlanJson(dir),
loadTaskJsons(dir),

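A worked example of the convergence-based progress calculation in `calculateMultiCliProgress` above, with invented numbers:

```typescript
// Hypothetical input (new schema path: latest round has a convergence object).
const syntheses = [
  { round: 1, convergence: { score: 0.4, new_insights: true, recommendation: "continue" } },
  { round: 2, convergence: { score: 0.85, new_insights: false, recommendation: "continue" } }
];
// Not converged, so: total = 2, completed = total - 1 = 1,
// percentage = Math.round(0.85 * 100) = 85.
// Had recommendation been "converged": completed = 2, percentage = 100.
```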
View File

@@ -23,7 +23,7 @@
* - POST /api/queue/reorder - Reorder queue items
*/
import { readFileSync, existsSync, writeFileSync, mkdirSync, unlinkSync } from 'fs';
import { join } from 'path';
import { join, resolve, normalize } from 'path';
import type { RouteContext } from './types.js';
// ========== JSONL Helper Functions ==========
@@ -67,6 +67,12 @@ function readIssueHistoryJsonl(issuesDir: string): any[] {
}
}
function writeIssueHistoryJsonl(issuesDir: string, issues: any[]) {
if (!existsSync(issuesDir)) mkdirSync(issuesDir, { recursive: true });
const historyPath = join(issuesDir, 'issue-history.jsonl');
writeFileSync(historyPath, issues.map(i => JSON.stringify(i)).join('\n'));
}
function writeSolutionsJsonl(issuesDir: string, issueId: string, solutions: any[]) {
const solutionsDir = join(issuesDir, 'solutions');
if (!existsSync(solutionsDir)) mkdirSync(solutionsDir, { recursive: true });
@@ -156,7 +162,30 @@ function writeQueue(issuesDir: string, queue: any) {
function getIssueDetail(issuesDir: string, issueId: string) {
const issues = readIssuesJsonl(issuesDir);
const issue = issues.find(i => i.id === issueId);
let issue = issues.find(i => i.id === issueId);
// Fallback: Reconstruct issue from solution file if issue not in issues.jsonl
if (!issue) {
const solutionPath = join(issuesDir, 'solutions', `${issueId}.jsonl`);
if (existsSync(solutionPath)) {
const solutions = readSolutionsJsonl(issuesDir, issueId);
if (solutions.length > 0) {
const boundSolution = solutions.find(s => s.is_bound) || solutions[0];
issue = {
id: issueId,
title: boundSolution?.description || issueId,
status: 'completed',
priority: 3,
context: boundSolution?.approach || '',
bound_solution_id: boundSolution?.id || null,
created_at: boundSolution?.created_at || new Date().toISOString(),
updated_at: new Date().toISOString(),
_reconstructed: true
};
}
}
}
if (!issue) return null;
const solutions = readSolutionsJsonl(issuesDir, issueId);
@@ -254,11 +283,46 @@ function bindSolutionToIssue(issuesDir: string, issueId: string, solutionId: str
return { success: true, bound: solutionId };
}
// ========== Path Validation ==========
/**
* Validate that the provided path is safe (no path traversal)
* Returns the resolved, normalized path or null if invalid
*/
function validateProjectPath(requestedPath: string, basePath: string): string | null {
if (!requestedPath) return basePath;
// Resolve to absolute path and normalize
const resolvedPath = resolve(normalize(requestedPath));
const resolvedBase = resolve(normalize(basePath));
// For local development tool, we allow any absolute path
// but prevent obvious traversal attempts
if (requestedPath.includes('..') && !resolvedPath.startsWith(resolvedBase)) {
// Check if it's trying to escape with ..
const normalizedRequested = normalize(requestedPath);
if (normalizedRequested.startsWith('..')) {
return null;
}
}
return resolvedPath;
}
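// Illustration (hypothetical inputs): validateProjectPath("../../etc", "/home/user/proj")
// -> "../../etc" contains ".." and normalizes to a path starting with "..", so null (rejected);
// validateProjectPath("/home/user/proj/sub", "/home/user/proj") -> resolved absolute path.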
// ========== Route Handler ==========
export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
const { pathname, url, req, res, initialPath, handlePostRequest } = ctx;
const projectPath = url.searchParams.get('path') || initialPath;
const rawProjectPath = url.searchParams.get('path') || initialPath;
// Validate project path to prevent path traversal
const projectPath = validateProjectPath(rawProjectPath, initialPath);
if (!projectPath) {
res.writeHead(400, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Invalid project path' }));
return true;
}
const issuesDir = join(projectPath, '.workflow', 'issues');
// ===== Queue Routes (top-level /api/queue) =====
@@ -295,7 +359,8 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
// GET /api/queue/:id - Get specific queue by ID
const queueDetailMatch = pathname.match(/^\/api\/queue\/([^/]+)$/);
if (queueDetailMatch && req.method === 'GET' && queueDetailMatch[1] !== 'history' && queueDetailMatch[1] !== 'reorder') {
const reservedQueuePaths = ['history', 'reorder', 'switch', 'deactivate', 'merge'];
if (queueDetailMatch && req.method === 'GET' && !reservedQueuePaths.includes(queueDetailMatch[1])) {
const queueId = queueDetailMatch[1];
const queuesDir = join(issuesDir, 'queues');
const queueFilePath = join(queuesDir, `${queueId}.json`);
@@ -347,6 +412,29 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
return true;
}
// POST /api/queue/deactivate - Deactivate current queue (set active to null)
if (pathname === '/api/queue/deactivate' && req.method === 'POST') {
handlePostRequest(req, res, async (body: any) => {
const queuesDir = join(issuesDir, 'queues');
const indexPath = join(queuesDir, 'index.json');
try {
const index = existsSync(indexPath)
? JSON.parse(readFileSync(indexPath, 'utf8'))
: { active_queue_id: null, queues: [] };
const previousActiveId = index.active_queue_id;
index.active_queue_id = null;
writeFileSync(indexPath, JSON.stringify(index, null, 2));
return { success: true, previous_active_id: previousActiveId };
} catch (err) {
return { error: 'Failed to deactivate queue' };
}
});
return true;
}
// POST /api/queue/reorder - Reorder queue items (supports both solutions and tasks)
if (pathname === '/api/queue/reorder' && req.method === 'POST') {
handlePostRequest(req, res, async (body: any) => {
@@ -399,6 +487,237 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
return true;
}
// DELETE /api/queue/:queueId/item/:itemId - Delete item from queue
const queueItemDeleteMatch = pathname.match(/^\/api\/queue\/([^/]+)\/item\/([^/]+)$/);
if (queueItemDeleteMatch && req.method === 'DELETE') {
const queueId = queueItemDeleteMatch[1];
const itemId = decodeURIComponent(queueItemDeleteMatch[2]);
const queuesDir = join(issuesDir, 'queues');
const queueFilePath = join(queuesDir, `${queueId}.json`);
if (!existsSync(queueFilePath)) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: `Queue ${queueId} not found` }));
return true;
}
try {
const queue = JSON.parse(readFileSync(queueFilePath, 'utf8'));
const items = queue.solutions || queue.tasks || [];
const filteredItems = items.filter((item: any) => item.item_id !== itemId);
if (filteredItems.length === items.length) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: `Item ${itemId} not found in queue` }));
return true;
}
// Update queue items
if (queue.solutions) {
queue.solutions = filteredItems;
} else {
queue.tasks = filteredItems;
}
// Recalculate metadata
const completedCount = filteredItems.filter((i: any) => i.status === 'completed').length;
queue._metadata = {
...queue._metadata,
updated_at: new Date().toISOString(),
...(queue.solutions
? { total_solutions: filteredItems.length, completed_solutions: completedCount }
: { total_tasks: filteredItems.length, completed_tasks: completedCount })
};
writeFileSync(queueFilePath, JSON.stringify(queue, null, 2));
// Update index counts
const indexPath = join(queuesDir, 'index.json');
if (existsSync(indexPath)) {
try {
const index = JSON.parse(readFileSync(indexPath, 'utf8'));
const queueEntry = index.queues?.find((q: any) => q.id === queueId);
if (queueEntry) {
if (queue.solutions) {
queueEntry.total_solutions = filteredItems.length;
queueEntry.completed_solutions = completedCount;
} else {
queueEntry.total_tasks = filteredItems.length;
queueEntry.completed_tasks = completedCount;
}
writeFileSync(indexPath, JSON.stringify(index, null, 2));
}
} catch (err) {
console.error('Failed to update queue index:', err);
}
}
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ success: true, queueId, deletedItemId: itemId }));
} catch (err) {
res.writeHead(500, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Failed to delete item' }));
}
return true;
}
// DELETE /api/queue/:queueId - Delete entire queue
const queueDeleteMatch = pathname.match(/^\/api\/queue\/([^/]+)$/);
if (queueDeleteMatch && req.method === 'DELETE') {
const queueId = queueDeleteMatch[1];
const queuesDir = join(issuesDir, 'queues');
const queueFilePath = join(queuesDir, `${queueId}.json`);
const indexPath = join(queuesDir, 'index.json');
if (!existsSync(queueFilePath)) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: `Queue ${queueId} not found` }));
return true;
}
try {
// Delete queue file
unlinkSync(queueFilePath);
// Update index
if (existsSync(indexPath)) {
const index = JSON.parse(readFileSync(indexPath, 'utf8'));
// Remove from queues array
index.queues = (index.queues || []).filter((q: any) => q.id !== queueId);
// Clear active if this was the active queue
if (index.active_queue_id === queueId) {
index.active_queue_id = null;
}
writeFileSync(indexPath, JSON.stringify(index, null, 2));
}
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ success: true, deletedQueueId: queueId }));
} catch (err) {
res.writeHead(500, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Failed to delete queue' }));
}
return true;
}
// POST /api/queue/merge - Merge source queue into target queue
if (pathname === '/api/queue/merge' && req.method === 'POST') {
handlePostRequest(req, res, async (body: any) => {
const { sourceQueueId, targetQueueId } = body;
if (!sourceQueueId || !targetQueueId) {
return { error: 'sourceQueueId and targetQueueId required' };
}
if (sourceQueueId === targetQueueId) {
return { error: 'Cannot merge queue into itself' };
}
const queuesDir = join(issuesDir, 'queues');
const sourcePath = join(queuesDir, `${sourceQueueId}.json`);
const targetPath = join(queuesDir, `${targetQueueId}.json`);
if (!existsSync(sourcePath)) return { error: `Source queue ${sourceQueueId} not found` };
if (!existsSync(targetPath)) return { error: `Target queue ${targetQueueId} not found` };
try {
const sourceQueue = JSON.parse(readFileSync(sourcePath, 'utf8'));
const targetQueue = JSON.parse(readFileSync(targetPath, 'utf8'));
const sourceItems = sourceQueue.solutions || sourceQueue.tasks || [];
const targetItems = targetQueue.solutions || targetQueue.tasks || [];
const isSolutionBased = !!targetQueue.solutions;
// Re-index source items to avoid ID conflicts
const maxOrder = targetItems.reduce((max: number, i: any) => Math.max(max, i.execution_order || 0), 0);
const reindexedSourceItems = sourceItems.map((item: any, idx: number) => ({
...item,
item_id: `${item.item_id}-merged`,
execution_order: maxOrder + idx + 1,
execution_group: item.execution_group ? `M-${item.execution_group}` : 'M-ungrouped'
}));
// Merge items
const mergedItems = [...targetItems, ...reindexedSourceItems];
if (isSolutionBased) {
targetQueue.solutions = mergedItems;
} else {
targetQueue.tasks = mergedItems;
}
// Merge issue_ids
const mergedIssueIds = [...new Set([
...(targetQueue.issue_ids || []),
...(sourceQueue.issue_ids || [])
])];
targetQueue.issue_ids = mergedIssueIds;
// Update metadata
const completedCount = mergedItems.filter((i: any) => i.status === 'completed').length;
targetQueue._metadata = {
...targetQueue._metadata,
updated_at: new Date().toISOString(),
...(isSolutionBased
? { total_solutions: mergedItems.length, completed_solutions: completedCount }
: { total_tasks: mergedItems.length, completed_tasks: completedCount })
};
// Write merged queue
writeFileSync(targetPath, JSON.stringify(targetQueue, null, 2));
// Update source queue status
sourceQueue.status = 'merged';
sourceQueue._metadata = {
...sourceQueue._metadata,
merged_into: targetQueueId,
merged_at: new Date().toISOString()
};
writeFileSync(sourcePath, JSON.stringify(sourceQueue, null, 2));
// Update index
const indexPath = join(queuesDir, 'index.json');
if (existsSync(indexPath)) {
try {
const index = JSON.parse(readFileSync(indexPath, 'utf8'));
const sourceEntry = index.queues?.find((q: any) => q.id === sourceQueueId);
const targetEntry = index.queues?.find((q: any) => q.id === targetQueueId);
if (sourceEntry) {
sourceEntry.status = 'merged';
}
if (targetEntry) {
if (isSolutionBased) {
targetEntry.total_solutions = mergedItems.length;
targetEntry.completed_solutions = completedCount;
} else {
targetEntry.total_tasks = mergedItems.length;
targetEntry.completed_tasks = completedCount;
}
targetEntry.issue_ids = mergedIssueIds;
}
writeFileSync(indexPath, JSON.stringify(index, null, 2));
} catch {
// Ignore index update errors
}
}
return {
success: true,
sourceQueueId,
targetQueueId,
mergedItemCount: sourceItems.length,
totalItems: mergedItems.length
};
} catch (err) {
return { error: 'Failed to merge queues' };
}
});
return true;
}
// Legacy: GET /api/issues/queue (backward compat)
if (pathname === '/api/issues/queue' && req.method === 'GET') {
const queue = groupQueueByExecutionGroup(readQueue(issuesDir));
@@ -546,6 +865,39 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
return true;
}
// POST /api/issues/:id/archive - Archive issue (move to history)
const archiveMatch = pathname.match(/^\/api\/issues\/([^/]+)\/archive$/);
if (archiveMatch && req.method === 'POST') {
const issueId = decodeURIComponent(archiveMatch[1]);
const issues = readIssuesJsonl(issuesDir);
const issueIndex = issues.findIndex(i => i.id === issueId);
if (issueIndex === -1) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Issue not found' }));
return true;
}
// Get the issue and add archive metadata
const issue = issues[issueIndex];
issue.archived_at = new Date().toISOString();
issue.status = 'completed';
// Move to history
const history = readIssueHistoryJsonl(issuesDir);
history.push(issue);
writeIssueHistoryJsonl(issuesDir, history);
// Remove from active issues
issues.splice(issueIndex, 1);
writeIssuesJsonl(issuesDir, issues);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ success: true, issueId, archivedAt: issue.archived_at }));
return true;
}
// POST /api/issues/:id/solutions - Add solution
const addSolMatch = pathname.match(/^\/api\/issues\/([^/]+)\/solutions$/);
if (addSolMatch && req.method === 'POST') {

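For reference, a hedged sketch of exercising the new `POST /api/queue/merge` endpoint added in this file from a client; the host, port, and queue IDs are placeholders:

```typescript
// Hypothetical client call; body fields match the handler above.
const res = await fetch("http://localhost:3000/api/queue/merge", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    sourceQueueId: "QUE-20251210-002",
    targetQueueId: "QUE-20251215-001"
  })
});
// On success the handler returns:
// { success: true, sourceQueueId, targetQueueId, mergedItemCount, totalItems }
console.log(await res.json());
```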
View File

@@ -2,14 +2,29 @@
* Session Routes Module
* Handles all Session/Task-related API endpoints
*/
import { readFileSync, writeFileSync, existsSync, readdirSync } from 'fs';
import { readFileSync, writeFileSync, existsSync } from 'fs';
import { readFile, readdir, access } from 'fs/promises';
import { join } from 'path';
import type { RouteContext } from './types.js';
/**
* Get session detail data (context, summaries, impl-plan, review)
* Check if a file or directory exists (async version)
* @param filePath - Path to check
* @returns Promise<boolean>
*/
async function fileExists(filePath: string): Promise<boolean> {
try {
await access(filePath);
return true;
} catch {
return false;
}
}
/**
* Get session detail data (context, summaries, impl-plan, review, multi-cli)
* @param {string} sessionPath - Path to session directory
* @param {string} dataType - Type of data to load ('all', 'context', 'tasks', 'summary', 'plan', 'explorations', 'conflict', 'impl-plan', 'review')
* @param {string} dataType - Type of data to load ('all', 'context', 'tasks', 'summary', 'plan', 'explorations', 'conflict', 'impl-plan', 'review', 'multi-cli', 'discussions')
* @returns {Promise<Object>}
*/
async function getSessionDetailData(sessionPath: string, dataType: string): Promise<Record<string, unknown>> {
@@ -23,14 +38,15 @@ async function getSessionDetailData(sessionPath: string, dataType: string): Prom
if (dataType === 'context' || dataType === 'all') {
// Try .process/context-package.json first (common location)
let contextFile = join(normalizedPath, '.process', 'context-package.json');
if (!existsSync(contextFile)) {
if (!(await fileExists(contextFile))) {
// Fallback to session root
contextFile = join(normalizedPath, 'context-package.json');
}
if (existsSync(contextFile)) {
if (await fileExists(contextFile)) {
try {
result.context = JSON.parse(readFileSync(contextFile, 'utf8'));
result.context = JSON.parse(await readFile(contextFile, 'utf8'));
} catch (e) {
console.warn('Failed to parse context file:', contextFile, (e as Error).message);
result.context = null;
}
}
@@ -40,18 +56,18 @@ async function getSessionDetailData(sessionPath: string, dataType: string): Prom
if (dataType === 'tasks' || dataType === 'all') {
const taskDir = join(normalizedPath, '.task');
result.tasks = [];
if (existsSync(taskDir)) {
const files = readdirSync(taskDir).filter(f => f.endsWith('.json') && f.startsWith('IMPL-'));
if (await fileExists(taskDir)) {
const files = (await readdir(taskDir)).filter(f => f.endsWith('.json') && f.startsWith('IMPL-'));
for (const file of files) {
try {
const content = JSON.parse(readFileSync(join(taskDir, file), 'utf8'));
const content = JSON.parse(await readFile(join(taskDir, file), 'utf8'));
result.tasks.push({
filename: file,
task_id: file.replace('.json', ''),
...content
});
} catch (e) {
// Skip unreadable files
console.warn('Failed to parse task file:', join(taskDir, file), (e as Error).message);
}
}
// Sort by task ID
@@ -63,14 +79,14 @@ async function getSessionDetailData(sessionPath: string, dataType: string): Prom
if (dataType === 'summary' || dataType === 'all') {
const summariesDir = join(normalizedPath, '.summaries');
result.summaries = [];
if (existsSync(summariesDir)) {
const files = readdirSync(summariesDir).filter(f => f.endsWith('.md'));
if (await fileExists(summariesDir)) {
const files = (await readdir(summariesDir)).filter(f => f.endsWith('.md'));
for (const file of files) {
try {
const content = readFileSync(join(summariesDir, file), 'utf8');
const content = await readFile(join(summariesDir, file), 'utf8');
result.summaries.push({ name: file.replace('.md', ''), content });
} catch (e) {
// Skip unreadable files
console.warn('Failed to read summary file:', join(summariesDir, file), (e as Error).message);
}
}
}
@@ -79,10 +95,11 @@ async function getSessionDetailData(sessionPath: string, dataType: string): Prom
// Load plan.json (for lite tasks)
if (dataType === 'plan' || dataType === 'all') {
const planFile = join(normalizedPath, 'plan.json');
if (existsSync(planFile)) {
if (await fileExists(planFile)) {
try {
result.plan = JSON.parse(readFileSync(planFile, 'utf8'));
result.plan = JSON.parse(await readFile(planFile, 'utf8'));
} catch (e) {
console.warn('Failed to parse plan file:', planFile, (e as Error).message);
result.plan = null;
}
}
@@ -100,52 +117,54 @@ async function getSessionDetailData(sessionPath: string, dataType: string): Prom
];
for (const searchDir of searchDirs) {
if (!existsSync(searchDir)) continue;
if (!(await fileExists(searchDir))) continue;
// Look for explorations-manifest.json
const manifestFile = join(searchDir, 'explorations-manifest.json');
if (existsSync(manifestFile)) {
if (await fileExists(manifestFile)) {
try {
result.explorations.manifest = JSON.parse(readFileSync(manifestFile, 'utf8'));
result.explorations.manifest = JSON.parse(await readFile(manifestFile, 'utf8'));
// Load each exploration file based on manifest
const explorations = result.explorations.manifest.explorations || [];
for (const exp of explorations) {
const expFile = join(searchDir, exp.file);
if (existsSync(expFile)) {
if (await fileExists(expFile)) {
try {
result.explorations.data[exp.angle] = JSON.parse(readFileSync(expFile, 'utf8'));
result.explorations.data[exp.angle] = JSON.parse(await readFile(expFile, 'utf8'));
} catch (e) {
// Skip unreadable exploration files
console.warn('Failed to parse exploration file:', expFile, (e as Error).message);
}
}
}
break; // Found manifest, stop searching
} catch (e) {
console.warn('Failed to parse explorations manifest:', manifestFile, (e as Error).message);
result.explorations.manifest = null;
}
}
// Look for diagnoses-manifest.json
const diagManifestFile = join(searchDir, 'diagnoses-manifest.json');
if (existsSync(diagManifestFile)) {
if (await fileExists(diagManifestFile)) {
try {
result.diagnoses.manifest = JSON.parse(readFileSync(diagManifestFile, 'utf8'));
result.diagnoses.manifest = JSON.parse(await readFile(diagManifestFile, 'utf8'));
// Load each diagnosis file based on manifest
const diagnoses = result.diagnoses.manifest.diagnoses || [];
for (const diag of diagnoses) {
const diagFile = join(searchDir, diag.file);
if (existsSync(diagFile)) {
if (await fileExists(diagFile)) {
try {
result.diagnoses.data[diag.angle] = JSON.parse(readFileSync(diagFile, 'utf8'));
result.diagnoses.data[diag.angle] = JSON.parse(await readFile(diagFile, 'utf8'));
} catch (e) {
// Skip unreadable diagnosis files
console.warn('Failed to parse diagnosis file:', diagFile, (e as Error).message);
}
}
}
break; // Found manifest, stop searching
} catch (e) {
console.warn('Failed to parse diagnoses manifest:', diagManifestFile, (e as Error).message);
result.diagnoses.manifest = null;
}
}
@@ -153,7 +172,7 @@ async function getSessionDetailData(sessionPath: string, dataType: string): Prom
// Fallback: scan for exploration-*.json and diagnosis-*.json files directly
if (!result.explorations.manifest) {
try {
const expFiles = readdirSync(searchDir).filter(f => f.startsWith('exploration-') && f.endsWith('.json') && f !== 'explorations-manifest.json');
const expFiles = (await readdir(searchDir)).filter(f => f.startsWith('exploration-') && f.endsWith('.json') && f !== 'explorations-manifest.json');
if (expFiles.length > 0) {
// Create synthetic manifest
result.explorations.manifest = {
@@ -169,21 +188,21 @@ async function getSessionDetailData(sessionPath: string, dataType: string): Prom
for (const file of expFiles) {
const angle = file.replace('exploration-', '').replace('.json', '');
try {
result.explorations.data[angle] = JSON.parse(readFileSync(join(searchDir, file), 'utf8'));
result.explorations.data[angle] = JSON.parse(await readFile(join(searchDir, file), 'utf8'));
} catch (e) {
// Skip unreadable files
console.warn('Failed to parse exploration file:', join(searchDir, file), (e as Error).message);
}
}
}
} catch (e) {
// Directory read failed
console.warn('Failed to read explorations directory:', searchDir, (e as Error).message);
}
}
// Fallback: scan for diagnosis-*.json files directly
if (!result.diagnoses.manifest) {
try {
const diagFiles = readdirSync(searchDir).filter(f => f.startsWith('diagnosis-') && f.endsWith('.json') && f !== 'diagnoses-manifest.json');
const diagFiles = (await readdir(searchDir)).filter(f => f.startsWith('diagnosis-') && f.endsWith('.json') && f !== 'diagnoses-manifest.json');
if (diagFiles.length > 0) {
// Create synthetic manifest
result.diagnoses.manifest = {
@@ -199,14 +218,14 @@ async function getSessionDetailData(sessionPath: string, dataType: string): Prom
for (const file of diagFiles) {
const angle = file.replace('diagnosis-', '').replace('.json', '');
try {
result.diagnoses.data[angle] = JSON.parse(readFileSync(join(searchDir, file), 'utf8'));
result.diagnoses.data[angle] = JSON.parse(await readFile(join(searchDir, file), 'utf8'));
} catch (e) {
// Skip unreadable files
console.warn('Failed to parse diagnosis file:', join(searchDir, file), (e as Error).message);
}
}
}
} catch (e) {
// Directory read failed
console.warn('Failed to read diagnoses directory:', searchDir, (e as Error).message);
}
}
@@ -228,12 +247,12 @@ async function getSessionDetailData(sessionPath: string, dataType: string): Prom
];
for (const conflictFile of conflictFiles) {
-if (existsSync(conflictFile)) {
+if (await fileExists(conflictFile)) {
try {
-result.conflictResolution = JSON.parse(readFileSync(conflictFile, 'utf8'));
+result.conflictResolution = JSON.parse(await readFile(conflictFile, 'utf8'));
break; // Found file, stop searching
} catch (e) {
// Skip unreadable file
console.warn('Failed to parse conflict resolution file:', conflictFile, (e as Error).message);
}
}
}
@@ -242,15 +261,149 @@ async function getSessionDetailData(sessionPath: string, dataType: string): Prom
// Load IMPL_PLAN.md
if (dataType === 'impl-plan' || dataType === 'all') {
const implPlanFile = join(normalizedPath, 'IMPL_PLAN.md');
-if (existsSync(implPlanFile)) {
+if (await fileExists(implPlanFile)) {
try {
-result.implPlan = readFileSync(implPlanFile, 'utf8');
+result.implPlan = await readFile(implPlanFile, 'utf8');
} catch (e) {
console.warn('Failed to read IMPL_PLAN.md:', implPlanFile, (e as Error).message);
result.implPlan = null;
}
}
}
// Load multi-cli discussion rounds (rounds/*/synthesis.json)
// Supports both NEW and OLD schema formats
if (dataType === 'multi-cli' || dataType === 'discussions' || dataType === 'all') {
result.multiCli = {
sessionId: normalizedPath.split('/').pop() || '',
type: 'multi-cli-plan',
rounds: [] as Array<{
roundNumber: number;
synthesis: Record<string, unknown> | null;
// NEW schema extracted fields
solutions?: Array<{
name: string;
source_cli: string[];
feasibility: number;
effort: string;
risk: string;
summary: string;
tasksCount: number;
dependencies: { internal: string[]; external: string[] };
technical_concerns: string[];
}>;
convergence?: {
score: number;
new_insights: boolean;
recommendation: string;
};
cross_verification?: {
agreements: string[];
disagreements: string[];
resolution: string;
};
clarification_questions?: string[];
}>,
// Aggregated data from latest synthesis
latestSolutions: [] as Array<Record<string, unknown>>,
latestConvergence: null as Record<string, unknown> | null,
latestCrossVerification: null as Record<string, unknown> | null,
clarificationQuestions: [] as string[]
};
const roundsDir = join(normalizedPath, 'rounds');
if (await fileExists(roundsDir)) {
try {
const roundDirs = (await readdir(roundsDir))
.filter(d => /^\d+$/.test(d)) // Only numeric directories
.sort((a, b) => parseInt(a) - parseInt(b));
for (const roundDir of roundDirs) {
const synthesisFile = join(roundsDir, roundDir, 'synthesis.json');
let synthesis: Record<string, unknown> | null = null;
if (await fileExists(synthesisFile)) {
try {
synthesis = JSON.parse(await readFile(synthesisFile, 'utf8'));
} catch (e) {
console.warn('Failed to parse synthesis file:', synthesisFile, (e as Error).message);
}
}
// Build round data with NEW schema fields extracted
const roundData: any = {
roundNumber: parseInt(roundDir),
synthesis
};
// Extract NEW schema fields if present
if (synthesis) {
// Extract solutions with summary info
if (Array.isArray(synthesis.solutions)) {
roundData.solutions = (synthesis.solutions as Array<Record<string, any>>).map(s => ({
name: s.name || '',
source_cli: s.source_cli || [],
feasibility: s.feasibility ?? 0,
effort: s.effort || 'unknown',
risk: s.risk || 'unknown',
summary: s.summary || '',
tasksCount: s.implementation_plan?.tasks?.length || 0,
dependencies: s.dependencies || { internal: [], external: [] },
technical_concerns: s.technical_concerns || []
}));
}
// Extract convergence
if (synthesis.convergence && typeof synthesis.convergence === 'object') {
const conv = synthesis.convergence as Record<string, unknown>;
roundData.convergence = {
score: conv.score ?? 0,
new_insights: conv.new_insights ?? false,
recommendation: conv.recommendation || 'unknown'
};
}
// Extract cross_verification
if (synthesis.cross_verification && typeof synthesis.cross_verification === 'object') {
const cv = synthesis.cross_verification as Record<string, unknown>;
roundData.cross_verification = {
agreements: Array.isArray(cv.agreements) ? cv.agreements : [],
disagreements: Array.isArray(cv.disagreements) ? cv.disagreements : [],
resolution: (cv.resolution as string) || ''
};
}
// Extract clarification_questions
if (Array.isArray(synthesis.clarification_questions)) {
roundData.clarification_questions = synthesis.clarification_questions;
}
}
result.multiCli.rounds.push(roundData);
}
// Populate aggregated data from latest round
if (result.multiCli.rounds.length > 0) {
const latestRound = result.multiCli.rounds[result.multiCli.rounds.length - 1];
if (latestRound.solutions) {
result.multiCli.latestSolutions = latestRound.solutions;
}
if (latestRound.convergence) {
result.multiCli.latestConvergence = latestRound.convergence;
}
if (latestRound.cross_verification) {
result.multiCli.latestCrossVerification = latestRound.cross_verification;
}
if (latestRound.clarification_questions) {
result.multiCli.clarificationQuestions = latestRound.clarification_questions;
}
}
} catch (e) {
console.warn('Failed to read rounds directory:', roundsDir, (e as Error).message);
}
}
}
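// Illustrative sketch (not part of the diff): the NEW-schema synthesis.json
// shape this loader consumes. Field names mirror the extraction code above;
// the values are invented:
// {
//   "solutions": [{ "name": "Solution A", "source_cli": ["gemini", "codex"],
//     "feasibility": 0.8, "effort": "medium", "risk": "low", "summary": "...",
//     "implementation_plan": { "tasks": [] },
//     "dependencies": { "internal": [], "external": [] },
//     "technical_concerns": [] }],
//   "convergence": { "score": 0.9, "new_insights": false, "recommendation": "finalize" },
//   "cross_verification": { "agreements": [], "disagreements": [], "resolution": "..." },
//   "clarification_questions": []
// }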
// Load review data from .review/
if (dataType === 'review' || dataType === 'all') {
const reviewDir = join(normalizedPath, '.review');
@@ -261,12 +414,12 @@ async function getSessionDetailData(sessionPath: string, dataType: string): Prom
totalFindings: 0
};
-if (existsSync(reviewDir)) {
+if (await fileExists(reviewDir)) {
// Load review-state.json
const stateFile = join(reviewDir, 'review-state.json');
-if (existsSync(stateFile)) {
+if (await fileExists(stateFile)) {
try {
-const state = JSON.parse(readFileSync(stateFile, 'utf8'));
+const state = JSON.parse(await readFile(stateFile, 'utf8'));
result.review.state = state;
result.review.severityDistribution = state.severity_distribution || {};
result.review.totalFindings = state.total_findings || 0;
@@ -275,18 +428,18 @@ async function getSessionDetailData(sessionPath: string, dataType: string): Prom
result.review.crossCuttingConcerns = state.cross_cutting_concerns || [];
result.review.criticalFiles = state.critical_files || [];
} catch (e) {
// Skip unreadable state
console.warn('Failed to parse review state file:', stateFile, (e as Error).message);
}
}
// Load dimension findings
const dimensionsDir = join(reviewDir, 'dimensions');
-if (existsSync(dimensionsDir)) {
-const files = readdirSync(dimensionsDir).filter(f => f.endsWith('.json'));
+if (await fileExists(dimensionsDir)) {
+const files = (await readdir(dimensionsDir)).filter(f => f.endsWith('.json'));
for (const file of files) {
try {
const dimName = file.replace('.json', '');
-const data = JSON.parse(readFileSync(join(dimensionsDir, file), 'utf8'));
+const data = JSON.parse(await readFile(join(dimensionsDir, file), 'utf8'));
// Handle array structure: [ { findings: [...] } ]
let findings = [];
@@ -308,7 +461,7 @@ async function getSessionDetailData(sessionPath: string, dataType: string): Prom
count: findings.length
});
} catch (e) {
// Skip unreadable files
console.warn('Failed to parse review dimension file:', join(dimensionsDir, file), (e as Error).message);
}
}
}

@@ -416,5 +416,107 @@ export async function handleSystemRoutes(ctx: SystemRouteContext): Promise<boole
return true;
}
// API: File dialog - list directory contents for file browser
if (pathname === '/api/dialog/browse' && req.method === 'POST') {
handlePostRequest(req, res, async (body) => {
const { path: browsePath, showHidden } = body as {
path?: string;
showHidden?: boolean;
};
const os = await import('os');
const path = await import('path');
const fs = await import('fs');
// Default to home directory
let targetPath = browsePath || os.homedir();
// Expand ~ to home directory
if (targetPath.startsWith('~')) {
targetPath = path.join(os.homedir(), targetPath.slice(1));
}
// Resolve to absolute path
if (!path.isAbsolute(targetPath)) {
targetPath = path.resolve(targetPath);
}
try {
const stat = await fs.promises.stat(targetPath);
if (!stat.isDirectory()) {
return { error: 'Path is not a directory', status: 400 };
}
const entries = await fs.promises.readdir(targetPath, { withFileTypes: true });
const items = entries
.filter(entry => showHidden || !entry.name.startsWith('.'))
.map(entry => ({
name: entry.name,
path: path.join(targetPath, entry.name),
isDirectory: entry.isDirectory(),
isFile: entry.isFile()
}))
.sort((a, b) => {
// Directories first, then files
if (a.isDirectory && !b.isDirectory) return -1;
if (!a.isDirectory && b.isDirectory) return 1;
return a.name.localeCompare(b.name);
});
return {
currentPath: targetPath,
parentPath: path.dirname(targetPath),
items,
homePath: os.homedir()
};
} catch (err) {
return { error: 'Cannot access directory: ' + (err as Error).message, status: 400 };
}
});
return true;
}
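// Usage sketch (illustrative, not part of the diff): a client could call
//   fetch('/api/dialog/browse', { method: 'POST',
//     body: JSON.stringify({ path: '~', showHidden: false }) })
// and on success receive { currentPath, parentPath, items, homePath },
// with items sorted directories-first as implemented above.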
// API: File dialog - select file (validate path exists)
if (pathname === '/api/dialog/open-file' && req.method === 'POST') {
handlePostRequest(req, res, async (body) => {
const { path: filePath } = body as { path?: string };
if (!filePath) {
return { error: 'Path is required', status: 400 };
}
const os = await import('os');
const path = await import('path');
const fs = await import('fs');
let targetPath = filePath;
// Expand ~ to home directory
if (targetPath.startsWith('~')) {
targetPath = path.join(os.homedir(), targetPath.slice(1));
}
// Resolve to absolute path
if (!path.isAbsolute(targetPath)) {
targetPath = path.resolve(targetPath);
}
try {
await fs.promises.access(targetPath, fs.constants.R_OK);
const stat = await fs.promises.stat(targetPath);
return {
success: true,
path: targetPath,
isFile: stat.isFile(),
isDirectory: stat.isDirectory()
};
} catch {
return { error: 'File not accessible', status: 404 };
}
});
return true;
}
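// Usage sketch (illustrative): POST { path: '~/.gemini-env' } to
// /api/dialog/open-file. The handler expands ~, checks read access, and
// returns { success, path, isFile, isDirectory }, or a 404-style error
// when the path is not accessible.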
return false;
}

@@ -597,12 +597,12 @@ export async function startServer(options: ServerOptions = {}): Promise<http.Ser
if (await handleFilesRoutes(routeContext)) return;
}
-// System routes (data, health, version, paths, shutdown, notify, storage)
+// System routes (data, health, version, paths, shutdown, notify, storage, dialog)
if (pathname === '/api/data' || pathname === '/api/health' ||
pathname === '/api/version-check' || pathname === '/api/shutdown' ||
pathname === '/api/recent-paths' || pathname === '/api/switch-path' ||
pathname === '/api/remove-recent-path' || pathname === '/api/system/notify' ||
-pathname.startsWith('/api/storage/')) {
+pathname.startsWith('/api/storage/') || pathname.startsWith('/api/dialog/')) {
if (await handleSystemRoutes(routeContext)) return;
}

@@ -119,6 +119,14 @@ body {
color: hsl(var(--orange));
}
.nav-item[data-lite="multi-cli-plan"].active {
background-color: hsl(var(--purple-light, 280 60% 95%));
}
.nav-item[data-lite="multi-cli-plan"].active .nav-icon {
color: hsl(var(--purple, 280 60% 50%));
}
.sidebar.collapsed .toggle-icon {
transform: rotate(180deg);
}

@@ -102,6 +102,87 @@
color: hsl(220 80% 40%);
}
/* Session Status Badge (used in detail page header) */
.session-status-badge {
font-size: 0.7rem;
font-weight: 500;
padding: 0.25rem 0.625rem;
border-radius: 0.25rem;
text-transform: lowercase;
}
.session-status-badge.plan_generated,
.session-status-badge.converged,
.session-status-badge.completed,
.session-status-badge.decided {
background: hsl(var(--success-light, 142 70% 95%));
color: hsl(var(--success, 142 70% 45%));
}
.session-status-badge.analyzing,
.session-status-badge.debating {
background: hsl(var(--warning-light, 45 90% 95%));
color: hsl(var(--warning, 45 90% 40%));
}
.session-status-badge.initialized,
.session-status-badge.exploring {
background: hsl(var(--info-light, 220 80% 95%));
color: hsl(var(--info, 220 80% 55%));
}
.session-status-badge.blocked,
.session-status-badge.conflict {
background: hsl(var(--destructive) / 0.1);
color: hsl(var(--destructive));
}
.session-status-badge.pending {
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
}
/* Status Badge Colors (used in card list meta) */
.session-meta-item.status-badge.success {
background: hsl(var(--success-light, 142 70% 95%));
color: hsl(var(--success, 142 70% 45%));
padding: 0.25rem 0.5rem;
border-radius: 0.25rem;
font-weight: 500;
}
.session-meta-item.status-badge.warning {
background: hsl(var(--warning-light, 45 90% 95%));
color: hsl(var(--warning, 45 90% 40%));
padding: 0.25rem 0.5rem;
border-radius: 0.25rem;
font-weight: 500;
}
.session-meta-item.status-badge.info {
background: hsl(var(--info-light, 220 80% 95%));
color: hsl(var(--info, 220 80% 55%));
padding: 0.25rem 0.5rem;
border-radius: 0.25rem;
font-weight: 500;
}
.session-meta-item.status-badge.error {
background: hsl(var(--destructive) / 0.1);
color: hsl(var(--destructive));
padding: 0.25rem 0.5rem;
border-radius: 0.25rem;
font-weight: 500;
}
.session-meta-item.status-badge.default {
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
padding: 0.25rem 0.5rem;
border-radius: 0.25rem;
font-weight: 500;
}
.session-body {
display: flex;
flex-direction: column;

@@ -302,9 +302,14 @@
.collapsible-content {
padding: 1rem;
display: block;
}
.collapsible-content.collapsed {
display: none;
}
/* Legacy .open class support */
.collapsible-content.open {
display: block;
}

File diff suppressed because it is too large.

@@ -661,3 +661,160 @@
color: hsl(var(--success));
}
/* ========================================
* File Browser Modal
* ======================================== */
.file-browser-modal {
width: 600px;
max-width: 90vw;
max-height: 80vh;
display: flex;
flex-direction: column;
}
.file-browser-toolbar {
display: flex;
align-items: center;
gap: 0.5rem;
padding: 0.5rem;
background: hsl(var(--muted) / 0.3);
border-radius: 0.375rem;
margin-bottom: 0.75rem;
}
.file-browser-toolbar .btn-sm {
flex-shrink: 0;
padding: 0.375rem;
}
.file-browser-path {
flex: 1;
padding: 0.375rem 0.5rem;
font-family: monospace;
font-size: 0.75rem;
background: hsl(var(--background));
border: 1px solid hsl(var(--border));
border-radius: 0.25rem;
color: hsl(var(--foreground));
}
.file-browser-path:focus {
outline: none;
border-color: hsl(var(--primary));
box-shadow: 0 0 0 2px hsl(var(--primary) / 0.2);
}
.file-browser-drives {
display: flex;
gap: 0.25rem;
}
.btn-xs {
padding: 0.25rem 0.5rem;
font-size: 0.6875rem;
border-radius: 0.25rem;
}
.drive-btn {
font-family: monospace;
font-weight: 600;
}
.file-browser-hidden-toggle {
display: flex;
align-items: center;
gap: 0.375rem;
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
cursor: pointer;
white-space: nowrap;
}
.file-browser-hidden-toggle input {
cursor: pointer;
}
.file-browser-list {
flex: 1;
min-height: 300px;
max-height: 400px;
overflow-y: auto;
border: 1px solid hsl(var(--border));
border-radius: 0.375rem;
background: hsl(var(--background));
}
.file-browser-loading,
.file-browser-empty {
display: flex;
align-items: center;
justify-content: center;
height: 100%;
min-height: 200px;
color: hsl(var(--muted-foreground));
font-size: 0.875rem;
}
.file-browser-error {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
height: 100%;
min-height: 200px;
font-size: 0.875rem;
text-align: center;
padding: 1rem;
gap: 0.5rem;
}
.file-browser-error p {
margin: 0;
color: hsl(var(--destructive));
}
.file-browser-hint {
color: hsl(var(--muted-foreground));
font-size: 0.8rem;
}
.file-browser-item {
display: flex;
align-items: center;
gap: 0.5rem;
padding: 0.5rem 0.75rem;
cursor: pointer;
border-bottom: 1px solid hsl(var(--border) / 0.5);
transition: background-color 0.15s;
}
.file-browser-item:last-child {
border-bottom: none;
}
.file-browser-item:hover {
background: hsl(var(--muted) / 0.5);
}
.file-browser-item.selected {
background: hsl(var(--primary) / 0.15);
border-color: hsl(var(--primary) / 0.3);
}
.file-browser-item.is-directory {
color: hsl(var(--primary));
}
.file-browser-item.is-file {
color: hsl(var(--foreground));
}
.file-browser-item-name {
flex: 1;
font-size: 0.8125rem;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}

@@ -128,6 +128,29 @@
box-shadow: 0 4px 12px hsl(var(--foreground) / 0.08);
}
/* Archived Issue Card */
.issue-card.archived {
opacity: 0.85;
background: hsl(var(--muted) / 0.3);
}
.issue-card.archived:hover {
opacity: 1;
}
.issue-archived-badge {
display: inline-flex;
align-items: center;
padding: 0.125rem 0.375rem;
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
font-size: 0.625rem;
font-weight: 500;
border-radius: 0.25rem;
text-transform: uppercase;
letter-spacing: 0.025em;
}
.issue-card-header {
display: flex;
align-items: flex-start;
@@ -406,14 +429,16 @@
border: 1px solid hsl(var(--border));
border-radius: 0.75rem;
overflow: hidden;
margin-bottom: 1rem;
box-shadow: 0 1px 3px hsl(var(--foreground) / 0.04);
}
.queue-group-header {
display: flex;
align-items: center;
justify-content: space-between;
-padding: 0.75rem 1rem;
-background: hsl(var(--muted) / 0.5);
+padding: 0.875rem 1.25rem;
+background: hsl(var(--muted) / 0.3);
border-bottom: 1px solid hsl(var(--border));
}
@@ -1233,6 +1258,68 @@
color: hsl(var(--destructive));
}
/* Search Highlight */
.search-highlight {
background: hsl(45 93% 47% / 0.3);
color: inherit;
padding: 0 2px;
border-radius: 2px;
font-weight: 500;
}
/* Search Suggestions Dropdown */
.search-suggestions {
position: absolute;
top: 100%;
left: 0;
right: 0;
margin-top: 0.25rem;
background: hsl(var(--card));
border: 1px solid hsl(var(--border));
border-radius: 0.5rem;
box-shadow: 0 4px 12px hsl(var(--foreground) / 0.1);
max-height: 300px;
overflow-y: auto;
z-index: 50;
display: none;
}
.search-suggestions.show {
display: block;
}
.search-suggestion-item {
padding: 0.625rem 0.875rem;
cursor: pointer;
border-bottom: 1px solid hsl(var(--border) / 0.5);
transition: background 0.15s ease;
}
.search-suggestion-item:hover,
.search-suggestion-item.selected {
background: hsl(var(--muted));
}
.search-suggestion-item:last-child {
border-bottom: none;
}
.suggestion-id {
font-family: var(--font-mono);
font-size: 0.7rem;
color: hsl(var(--muted-foreground));
margin-bottom: 0.125rem;
}
.suggestion-title {
font-size: 0.8125rem;
color: hsl(var(--foreground));
line-height: 1.3;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
/* ==========================================
CREATE BUTTON
========================================== */
@@ -1757,61 +1844,147 @@
}
.queue-items {
-padding: 0.75rem;
+padding: 1rem;
display: flex;
flex-direction: column;
-gap: 0.5rem;
+gap: 0.75rem;
}
/* Parallel items use CSS Grid for uniform sizing */
.queue-items.parallel {
-flex-direction: row;
-flex-wrap: wrap;
+display: grid;
+grid-template-columns: repeat(auto-fill, minmax(130px, 1fr));
gap: 0.75rem;
}
.queue-items.parallel .queue-item {
-flex: 1;
-min-width: 200px;
display: grid;
grid-template-areas:
"id id delete"
"issue issue issue"
"solution solution solution";
grid-template-columns: 1fr 1fr auto;
grid-template-rows: auto auto 1fr;
align-items: start;
padding: 0.75rem;
min-height: 90px;
gap: 0.25rem;
}
/* Card content layout */
.queue-items.parallel .queue-item .queue-item-id {
grid-area: id;
font-size: 0.875rem;
font-weight: 700;
color: hsl(var(--foreground));
}
.queue-items.parallel .queue-item .queue-item-issue {
grid-area: issue;
font-size: 0.6875rem;
color: hsl(var(--muted-foreground));
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
line-height: 1.3;
}
.queue-items.parallel .queue-item .queue-item-solution {
grid-area: solution;
display: flex;
align-items: center;
gap: 0.25rem;
font-size: 0.75rem;
font-weight: 500;
color: hsl(var(--foreground));
align-self: end;
}
/* Hide extra elements in parallel view */
.queue-items.parallel .queue-item .queue-item-files,
.queue-items.parallel .queue-item .queue-item-priority,
.queue-items.parallel .queue-item .queue-item-deps,
.queue-items.parallel .queue-item .queue-item-task {
display: none;
}
/* Delete button positioned in corner */
.queue-items.parallel .queue-item .queue-item-delete {
grid-area: delete;
justify-self: end;
padding: 0.125rem;
opacity: 0;
}
.queue-group-type {
-display: flex;
+display: inline-flex;
align-items: center;
gap: 0.375rem;
font-size: 0.875rem;
font-weight: 600;
padding: 0.25rem 0.625rem;
border-radius: 0.375rem;
}
.queue-group-type.parallel {
-color: hsl(142 71% 45%);
+color: hsl(142 71% 40%);
background: hsl(142 71% 45% / 0.1);
}
.queue-group-type.sequential {
-color: hsl(262 83% 58%);
+color: hsl(262 83% 50%);
background: hsl(262 83% 58% / 0.1);
}
-/* Queue Item Status Colors */
+/* Queue Item Status Colors - Enhanced visual distinction */
/* Pending - Default subtle state */
.queue-item.pending,
.queue-item:not(.ready):not(.executing):not(.completed):not(.failed):not(.blocked) {
border-color: hsl(var(--border));
background: hsl(var(--card));
}
/* Ready - Blue tint, ready to execute */
.queue-item.ready {
border-color: hsl(199 89% 48%);
background: hsl(199 89% 48% / 0.06);
border-left: 3px solid hsl(199 89% 48%);
}
/* Executing - Amber with pulse animation */
.queue-item.executing {
-border-color: hsl(45 93% 47%);
-background: hsl(45 93% 47% / 0.05);
+border-color: hsl(38 92% 50%);
+background: hsl(38 92% 50% / 0.08);
border-left: 3px solid hsl(38 92% 50%);
animation: executing-pulse 2s ease-in-out infinite;
}
@keyframes executing-pulse {
0%, 100% { box-shadow: 0 0 0 0 hsl(38 92% 50% / 0.3); }
50% { box-shadow: 0 0 8px 2px hsl(38 92% 50% / 0.2); }
}
/* Completed - Green success state */
.queue-item.completed {
-border-color: hsl(var(--success));
-background: hsl(var(--success) / 0.05);
+border-color: hsl(142 71% 45%);
+background: hsl(142 71% 45% / 0.06);
border-left: 3px solid hsl(142 71% 45%);
}
/* Failed - Red error state */
.queue-item.failed {
-border-color: hsl(var(--destructive));
-background: hsl(var(--destructive) / 0.05);
+border-color: hsl(0 84% 60%);
+background: hsl(0 84% 60% / 0.06);
border-left: 3px solid hsl(0 84% 60%);
}
/* Blocked - Purple/violet blocked state */
.queue-item.blocked {
border-color: hsl(262 83% 58%);
-opacity: 0.7;
background: hsl(262 83% 58% / 0.05);
border-left: 3px solid hsl(262 83% 58%);
+opacity: 0.8;
}
/* Priority indicator */
@@ -2213,61 +2386,89 @@
flex-direction: column;
align-items: center;
justify-content: center;
-padding: 0.75rem 1rem;
-background: hsl(var(--muted) / 0.3);
+padding: 1rem 1.25rem;
+background: hsl(var(--card));
border: 1px solid hsl(var(--border));
-border-radius: 0.5rem;
+border-radius: 0.75rem;
text-align: center;
transition: all 0.2s ease;
}
.queue-stat-card:hover {
transform: translateY(-1px);
box-shadow: 0 2px 8px hsl(var(--foreground) / 0.06);
}
.queue-stat-card .queue-stat-value {
-font-size: 1.5rem;
+font-size: 1.75rem;
font-weight: 700;
color: hsl(var(--foreground));
line-height: 1.2;
}
.queue-stat-card .queue-stat-label {
-font-size: 0.75rem;
+font-size: 0.6875rem;
color: hsl(var(--muted-foreground));
text-transform: uppercase;
-letter-spacing: 0.025em;
-margin-top: 0.25rem;
+letter-spacing: 0.05em;
+margin-top: 0.375rem;
font-weight: 500;
}
/* Pending - Slate/Gray with subtle blue tint */
.queue-stat-card.pending {
-border-color: hsl(var(--muted-foreground) / 0.3);
+border-color: hsl(215 20% 65% / 0.4);
background: linear-gradient(135deg, hsl(215 20% 95%) 0%, hsl(var(--card)) 100%);
}
.queue-stat-card.pending .queue-stat-value {
-color: hsl(var(--muted-foreground));
+color: hsl(215 20% 45%);
}
.queue-stat-card.pending .queue-stat-label {
color: hsl(215 20% 55%);
}
/* Executing - Amber/Orange - attention-grabbing */
.queue-stat-card.executing {
-border-color: hsl(45 93% 47% / 0.5);
-background: hsl(45 93% 47% / 0.05);
+border-color: hsl(38 92% 50% / 0.5);
+background: linear-gradient(135deg, hsl(38 92% 95%) 0%, hsl(45 93% 97%) 100%);
}
.queue-stat-card.executing .queue-stat-value {
-color: hsl(45 93% 47%);
+color: hsl(38 92% 40%);
}
.queue-stat-card.executing .queue-stat-label {
color: hsl(38 70% 45%);
}
/* Completed - Green - success indicator */
.queue-stat-card.completed {
-border-color: hsl(var(--success) / 0.5);
-background: hsl(var(--success) / 0.05);
+border-color: hsl(142 71% 45% / 0.5);
+background: linear-gradient(135deg, hsl(142 71% 95%) 0%, hsl(142 50% 97%) 100%);
}
.queue-stat-card.completed .queue-stat-value {
-color: hsl(var(--success));
+color: hsl(142 71% 35%);
}
.queue-stat-card.completed .queue-stat-label {
color: hsl(142 50% 40%);
}
/* Failed - Red - error indicator */
.queue-stat-card.failed {
-border-color: hsl(var(--destructive) / 0.5);
-background: hsl(var(--destructive) / 0.05);
+border-color: hsl(0 84% 60% / 0.5);
+background: linear-gradient(135deg, hsl(0 84% 95%) 0%, hsl(0 70% 97%) 100%);
}
.queue-stat-card.failed .queue-stat-value {
-color: hsl(var(--destructive));
+color: hsl(0 84% 45%);
}
.queue-stat-card.failed .queue-stat-label {
color: hsl(0 60% 50%);
}
/* ==========================================
@@ -2851,3 +3052,251 @@
gap: 0.25rem;
}
}
/* ==========================================
MULTI-QUEUE CARDS VIEW
========================================== */
/* Queue Cards Header */
.queue-cards-header {
display: flex;
align-items: center;
justify-content: space-between;
flex-wrap: wrap;
gap: 1rem;
}
/* Queue Cards Grid */
.queue-cards-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(280px, 1fr));
gap: 1rem;
margin-bottom: 1.5rem;
}
/* Individual Queue Card */
.queue-card {
position: relative;
background: hsl(var(--card));
border: 1px solid hsl(var(--border));
border-radius: 0.75rem;
padding: 1rem;
cursor: pointer;
transition: all 0.2s ease;
}
.queue-card:hover {
border-color: hsl(var(--primary) / 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px hsl(var(--foreground) / 0.08);
}
.queue-card.active {
border-color: hsl(var(--primary));
background: hsl(var(--primary) / 0.05);
}
.queue-card.merged {
opacity: 0.6;
border-style: dashed;
}
.queue-card.merged:hover {
opacity: 0.8;
}
/* Queue Card Header */
.queue-card-header {
display: flex;
align-items: center;
justify-content: space-between;
margin-bottom: 0.75rem;
}
.queue-card-id {
font-size: 0.875rem;
font-weight: 600;
color: hsl(var(--foreground));
}
.queue-card-badges {
display: flex;
align-items: center;
gap: 0.5rem;
}
/* Queue Card Stats - Progress Bar */
.queue-card-stats {
margin-bottom: 0.75rem;
}
.queue-card-stats .progress-bar {
height: 6px;
background: hsl(var(--muted));
border-radius: 3px;
overflow: hidden;
margin-bottom: 0.5rem;
}
.queue-card-stats .progress-fill {
height: 100%;
background: hsl(var(--primary));
border-radius: 3px;
transition: width 0.3s ease;
}
.queue-card-stats .progress-fill.completed {
background: hsl(var(--success, 142 76% 36%));
}
.queue-card-progress {
display: flex;
justify-content: space-between;
font-size: 0.75rem;
color: hsl(var(--foreground));
}
/* Queue Card Meta */
.queue-card-meta {
display: flex;
gap: 1rem;
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
margin-bottom: 0.75rem;
}
/* Queue Card Actions */
.queue-card-actions {
display: flex;
gap: 0.5rem;
padding-top: 0.75rem;
border-top: 1px solid hsl(var(--border));
}
/* Queue Detail Header */
.queue-detail-header {
display: flex;
align-items: center;
gap: 1rem;
flex-wrap: wrap;
}
.queue-detail-title {
flex: 1;
display: flex;
align-items: center;
gap: 1rem;
}
.queue-detail-actions {
display: flex;
gap: 0.5rem;
}
/* Queue Item Delete Button */
.queue-item-delete {
margin-left: auto;
padding: 0.25rem;
opacity: 0;
transition: opacity 0.15s ease;
color: hsl(var(--muted-foreground));
border-radius: 0.25rem;
}
.queue-item:hover .queue-item-delete {
opacity: 1;
}
.queue-item-delete:hover {
color: hsl(var(--destructive, 0 84% 60%));
background: hsl(var(--destructive, 0 84% 60%) / 0.1);
}
/* Queue Error State */
.queue-error {
padding: 2rem;
text-align: center;
}
/* Responsive adjustments for queue cards */
@media (max-width: 640px) {
.queue-cards-grid {
grid-template-columns: 1fr;
}
.queue-cards-header {
flex-direction: column;
align-items: flex-start;
}
.queue-detail-header {
flex-direction: column;
align-items: flex-start;
}
.queue-detail-title {
flex-direction: column;
align-items: flex-start;
gap: 0.5rem;
}
}
/* ==========================================
WARNING BUTTON STYLE
========================================== */
.btn-warning,
.btn-secondary.btn-warning {
color: hsl(38 92% 40%);
border-color: hsl(38 92% 50% / 0.5);
background: hsl(38 92% 50% / 0.08);
}
.btn-warning:hover,
.btn-secondary.btn-warning:hover {
background: hsl(38 92% 50% / 0.15);
border-color: hsl(38 92% 50%);
}
.btn-danger,
.btn-secondary.btn-danger,
.btn-sm.btn-danger {
color: hsl(var(--destructive));
border-color: hsl(var(--destructive) / 0.5);
background: hsl(var(--destructive) / 0.08);
}
.btn-danger:hover,
.btn-secondary.btn-danger:hover,
.btn-sm.btn-danger:hover {
background: hsl(var(--destructive) / 0.15);
border-color: hsl(var(--destructive));
}
/* Issue Detail Actions */
.issue-detail-actions {
margin-top: 1rem;
padding-top: 1rem;
border-top: 1px solid hsl(var(--border));
}
.issue-detail-actions .flex {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
}
/* Active queue badge enhancement */
.queue-active-badge {
display: inline-flex;
align-items: center;
padding: 0.125rem 0.5rem;
font-size: 0.6875rem;
font-weight: 600;
color: hsl(142 71% 35%);
background: hsl(142 71% 45% / 0.15);
border: 1px solid hsl(142 71% 45% / 0.3);
border-radius: 9999px;
text-transform: uppercase;
letter-spacing: 0.025em;
}

@@ -3,6 +3,12 @@
* Real-time streaming output viewer for CLI executions
*/
// ===== Lifecycle Management =====
let cliStreamViewerDestroy = null;
let streamKeyboardHandler = null;
let streamScrollHandler = null; // Track scroll listener
let streamStatusTimers = []; // Track status update timers
// ===== State Management =====
let cliStreamExecutions = {}; // { executionId: { tool, mode, output, status, startTime, endTime } }
let activeStreamTab = null;
@@ -91,7 +97,7 @@ async function syncActiveExecutions() {
// ===== Initialization =====
function initCliStreamViewer() {
// Initialize keyboard shortcuts
-document.addEventListener('keydown', function(e) {
+streamKeyboardHandler = function(e) {
if (e.key === 'Escape' && isCliStreamViewerOpen) {
if (searchFilter) {
clearSearch();
@@ -108,12 +114,14 @@ function initCliStreamViewer() {
searchInput.select();
}
}
-});
+};
document.addEventListener('keydown', streamKeyboardHandler);
// Initialize scroll detection for auto-scroll
const content = document.getElementById('cliStreamContent');
if (content) {
-content.addEventListener('scroll', handleStreamContentScroll);
+streamScrollHandler = handleStreamContentScroll;
+content.addEventListener('scroll', streamScrollHandler);
}
// Sync active executions from server (recover state for mid-execution joins)
@@ -592,11 +600,12 @@ function renderStreamStatus(executionId) {
// Update duration periodically for running executions
if (exec.status === 'running') {
-setTimeout(() => {
+const timerId = setTimeout(() => {
if (activeStreamTab === executionId && cliStreamExecutions[executionId]?.status === 'running') {
renderStreamStatus(executionId);
}
}, 1000);
streamStatusTimers.push(timerId);
}
}
@@ -760,6 +769,31 @@ if (document.readyState === 'loading') {
initCliStreamViewer();
}
// ===== Lifecycle Functions =====
function destroyCliStreamViewer() {
// Remove keyboard event listener if exists
if (streamKeyboardHandler) {
document.removeEventListener('keydown', streamKeyboardHandler);
streamKeyboardHandler = null;
}
// Remove scroll event listener if exists
if (streamScrollHandler) {
const content = document.getElementById('cliStreamContent');
if (content) {
content.removeEventListener('scroll', streamScrollHandler);
}
streamScrollHandler = null;
}
// Clear all pending status update timers
streamStatusTimers.forEach(timerId => clearTimeout(timerId));
streamStatusTimers = [];
}
// Export lifecycle functions
window.destroyCliStreamViewer = destroyCliStreamViewer;
// ===== Global Exposure =====
window.toggleCliStreamViewer = toggleCliStreamViewer;
window.handleCliStreamStarted = handleCliStreamStarted;
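// Usage sketch (illustrative): navigation.js (see its diff below) can register
// this teardown as the active view's destroy hook, e.g.
//   currentViewDestroy = window.destroyCliStreamViewer;
// so that cleanupPreviousView() detaches the keyboard/scroll listeners and
// clears pending status timers before the next view renders.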

@@ -52,12 +52,13 @@ const HOOK_TEMPLATES = {
'memory-update-queue': {
event: 'Stop',
matcher: '',
-command: 'bash',
-args: ['-c', 'ccw tool exec memory_queue "{\\"action\\":\\"add\\",\\"path\\":\\"$CLAUDE_PROJECT_DIR\\"}"'],
+command: 'node',
+args: ['-e', "require('child_process').spawnSync(process.platform==='win32'?'cmd':'ccw',process.platform==='win32'?['/c','ccw','tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'gemini'})]:['tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'gemini'})],{stdio:'inherit'})"],
description: 'Queue CLAUDE.md update when session ends (batched by threshold/timeout)',
category: 'memory',
configurable: true,
config: {
tool: { type: 'select', default: 'gemini', options: ['gemini', 'qwen', 'codex', 'opencode'], label: 'CLI Tool' },
threshold: { type: 'number', default: 5, min: 1, max: 20, label: 'Threshold (paths)', step: 1 },
timeout: { type: 'number', default: 300, min: 60, max: 1800, label: 'Timeout (seconds)', step: 60 }
}
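For readability, the minified node -e payload in the memory-update-queue template above expands to roughly the following sketch (variable names invented; the Windows branch goes through cmd /c, matching the platform check in the one-liner):

const { spawnSync } = require('child_process');
const payload = JSON.stringify({
  action: 'add',
  path: process.env.CLAUDE_PROJECT_DIR, // project whose CLAUDE.md update is queued
  tool: 'gemini'                        // CLI tool configured for the hook
});
const ccwArgs = ['tool', 'exec', 'memory_queue', payload];
if (process.platform === 'win32') {
  // Windows: invoke through cmd so the ccw shim resolves
  spawnSync('cmd', ['/c', 'ccw', ...ccwArgs], { stdio: 'inherit' });
} else {
  spawnSync('ccw', ccwArgs, { stdio: 'inherit' });
}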
@@ -66,8 +67,8 @@ const HOOK_TEMPLATES = {
'skill-context-keyword': {
event: 'UserPromptSubmit',
matcher: '',
-command: 'bash',
-args: ['-c', 'ccw tool exec skill_context_loader --stdin'],
+command: 'node',
+args: ['-e', "const p=JSON.parse(process.env.HOOK_INPUT||'{}');require('child_process').spawnSync('ccw',['tool','exec','skill_context_loader',JSON.stringify({prompt:p.user_prompt||''})],{stdio:'inherit'})"],
description: 'Load SKILL context based on keyword matching in user prompt',
category: 'skill',
configurable: true,
@@ -79,8 +80,8 @@ const HOOK_TEMPLATES = {
'skill-context-auto': {
event: 'UserPromptSubmit',
matcher: '',
-command: 'bash',
-args: ['-c', 'ccw tool exec skill_context_loader --stdin --mode auto'],
+command: 'node',
+args: ['-e', "const p=JSON.parse(process.env.HOOK_INPUT||'{}');require('child_process').spawnSync('ccw',['tool','exec','skill_context_loader',JSON.stringify({mode:'auto',prompt:p.user_prompt||''})],{stdio:'inherit'})"],
description: 'Auto-detect and load SKILL based on skill name in prompt',
category: 'skill',
configurable: false
@@ -195,6 +196,7 @@ const WIZARD_TEMPLATES = {
}
],
configFields: [
{ key: 'tool', type: 'select', label: 'CLI Tool', default: 'gemini', options: ['gemini', 'qwen', 'codex', 'opencode'], description: 'CLI tool for CLAUDE.md generation' },
{ key: 'threshold', type: 'number', label: 'Threshold (paths)', default: 5, min: 1, max: 20, step: 1, description: 'Number of paths to trigger batch update' },
{ key: 'timeout', type: 'number', label: 'Timeout (seconds)', default: 300, min: 60, max: 1800, step: 60, description: 'Auto-flush queue after this time' }
]
@@ -359,48 +361,73 @@ async function loadAvailableSkills() {
* Convert internal hook format to Claude Code format
* Internal: { command, args, matcher, timeout }
* Claude Code: { matcher, hooks: [{ type: "command", command: "...", timeout }] }
*
* IMPORTANT: For bash -c commands, use single quotes to wrap the script argument
* to avoid complex escaping issues with jq commands inside.
* See: https://github.com/catlog22/Claude-Code-Workflow/issues/73
*/
function convertToClaudeCodeFormat(hookData) {
// If already in correct format, return as-is
if (hookData.hooks && Array.isArray(hookData.hooks)) {
return hookData;
}
// Build command string from command + args
let commandStr = hookData.command || '';
if (hookData.args && Array.isArray(hookData.args)) {
-// Join args, properly quoting if needed
-const quotedArgs = hookData.args.map(arg => {
-if (arg.includes(' ') && !arg.startsWith('"') && !arg.startsWith("'")) {
-return `"${arg.replace(/"/g, '\\"')}"`;
// Special handling for bash -c commands: use single quotes for the script
// This avoids complex escaping issues with jq and other shell commands
if (commandStr === 'bash' && hookData.args.length >= 2 && hookData.args[0] === '-c') {
// Use single quotes for bash -c script argument
// Single quotes prevent shell expansion, so internal double quotes work naturally
const script = hookData.args[1];
// Escape single quotes within the script: ' -> '\''
const escapedScript = script.replace(/'/g, "'\\''");
commandStr = `bash -c '${escapedScript}'`;
// Handle any additional args after the script
if (hookData.args.length > 2) {
const additionalArgs = hookData.args.slice(2).map(arg => {
if (arg.includes(' ') && !arg.startsWith('"') && !arg.startsWith("'")) {
return `"${arg.replace(/"/g, '\\"')}"`;
}
return arg;
});
commandStr += ' ' + additionalArgs.join(' ');
}
-return arg;
-});
-commandStr = `${commandStr} ${quotedArgs.join(' ')}`.trim();
} else {
// Default handling for other commands
const quotedArgs = hookData.args.map(arg => {
if (arg.includes(' ') && !arg.startsWith('"') && !arg.startsWith("'")) {
return `"${arg.replace(/"/g, '\\"')}"`;
}
return arg;
});
commandStr = `${commandStr} ${quotedArgs.join(' ')}`.trim();
}
}
const converted = {
hooks: [{
type: 'command',
command: commandStr
}]
};
// Add matcher if present (not needed for UserPromptSubmit, Stop, etc.)
if (hookData.matcher) {
converted.matcher = hookData.matcher;
}
// Add timeout if present (in seconds for Claude Code)
if (hookData.timeout) {
converted.hooks[0].timeout = Math.ceil(hookData.timeout / 1000);
}
// Preserve replaceIndex for updates
if (hookData.replaceIndex !== undefined) {
converted.replaceIndex = hookData.replaceIndex;
}
return converted;
}
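A quick input/output sketch of the bash -c branch above (hook values are illustrative):

// Internal format:
const hookData = { command: 'bash', args: ['-c', 'echo "$CLAUDE_PROJECT_DIR" | jq -R .'] };
// Converted Claude Code format: the script is wrapped in single quotes, and any
// embedded single quote would be rewritten as '\'' by the escaping step:
// { hooks: [{ type: 'command',
//     command: "bash -c 'echo \"$CLAUDE_PROJECT_DIR\" | jq -R .'" }] }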
@@ -723,6 +750,7 @@ function renderWizardModalContent() {
// Helper to get translated field labels
const getFieldLabel = (fieldKey) => {
const labels = {
'tool': t('hook.wizard.cliTool') || 'CLI Tool',
'threshold': t('hook.wizard.thresholdPaths') || 'Threshold (paths)',
'timeout': t('hook.wizard.timeoutSeconds') || 'Timeout (seconds)'
};
@@ -731,6 +759,7 @@ function renderWizardModalContent() {
const getFieldDesc = (fieldKey) => {
const descs = {
'tool': t('hook.wizard.cliToolDesc') || 'CLI tool for CLAUDE.md generation',
'threshold': t('hook.wizard.thresholdPathsDesc') || 'Number of paths to trigger batch update',
'timeout': t('hook.wizard.timeoutSecondsDesc') || 'Auto-flush queue after this time'
};
@@ -1096,20 +1125,19 @@ function generateWizardCommand() {
keywords: c.keywords.split(',').map(k => k.trim()).filter(k => k)
}));
-const params = JSON.stringify({ configs: configJson, prompt: '$CLAUDE_PROMPT' });
-return `ccw tool exec skill_context_loader '${params}'`;
+// Use node + spawnSync for cross-platform JSON handling
+const paramsObj = { configs: configJson, prompt: '${p.user_prompt}' };
+return `node -e "const p=JSON.parse(process.env.HOOK_INPUT||'{}');require('child_process').spawnSync('ccw',['tool','exec','skill_context_loader',JSON.stringify(${JSON.stringify(paramsObj).replace('${p.user_prompt}', "'+p.user_prompt+'")})],{stdio:'inherit'})"`;
} else {
-// auto mode
-const params = JSON.stringify({ mode: 'auto', prompt: '$CLAUDE_PROMPT' });
-return `ccw tool exec skill_context_loader '${params}'`;
+// auto mode - use node + spawnSync
+return `node -e "const p=JSON.parse(process.env.HOOK_INPUT||'{}');require('child_process').spawnSync('ccw',['tool','exec','skill_context_loader',JSON.stringify({mode:'auto',prompt:p.user_prompt||''})],{stdio:'inherit'})"`;
}
}
// Handle memory-update wizard (default)
// Now uses memory_queue for batched updates with configurable threshold/timeout
// The command adds to queue, configuration is applied separately via submitHookWizard
-const params = `"{\\"action\\":\\"add\\",\\"path\\":\\"$CLAUDE_PROJECT_DIR\\"}"`;
-return `ccw tool exec memory_queue ${params}`;
+// Use node + spawnSync for cross-platform JSON handling
+const selectedTool = wizardConfig.tool || 'gemini';
+return `node -e "require('child_process').spawnSync(process.platform==='win32'?'cmd':'ccw',process.platform==='win32'?['/c','ccw','tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'${selectedTool}'})]:['tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'${selectedTool}'})],{stdio:'inherit'})"`;
}
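Likewise, the skill-context one-liner returned above unpacks to roughly this sketch (per the template, the hook's JSON payload is assumed to arrive in the HOOK_INPUT environment variable):

const { spawnSync } = require('child_process');
const p = JSON.parse(process.env.HOOK_INPUT || '{}'); // hook payload carrying user_prompt
spawnSync('ccw',
  ['tool', 'exec', 'skill_context_loader',
   JSON.stringify({ mode: 'auto', prompt: p.user_prompt || '' })],
  { stdio: 'inherit' });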
async function submitHookWizard() {
@@ -1192,13 +1220,18 @@ async function submitHookWizard() {
const baseTemplate = HOOK_TEMPLATES[selectedOption.templateId];
if (!baseTemplate) return;
const command = generateWizardCommand();
-const hookData = {
-command: 'bash',
-args: ['-c', command]
+// Build hook data with configured values
+let hookData = {
+command: baseTemplate.command,
+args: [...baseTemplate.args]
};
// For memory-update wizard, use configured tool in args (cross-platform)
if (wizard.id === 'memory-update') {
const selectedTool = wizardConfig.tool || 'gemini';
hookData.args = ['-e', `require('child_process').spawnSync(process.platform==='win32'?'cmd':'ccw',process.platform==='win32'?['/c','ccw','tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'${selectedTool}'})]:['tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'${selectedTool}'})],{stdio:'inherit'})`];
}
if (baseTemplate.matcher) {
hookData.matcher = baseTemplate.matcher;
}
@@ -1207,6 +1240,7 @@ async function submitHookWizard() {
// For memory-update wizard, also configure queue settings
if (wizard.id === 'memory-update') {
const selectedTool = wizardConfig.tool || 'gemini';
const threshold = wizardConfig.threshold || 5;
const timeout = wizardConfig.timeout || 300;
try {
@@ -1217,7 +1251,7 @@ async function submitHookWizard() {
body: JSON.stringify({ tool: 'memory_queue', params: configParams })
});
if (response.ok) {
-showRefreshToast(`Queue configured: threshold=${threshold}, timeout=${timeout}s`, 'success');
+showRefreshToast(`Queue configured: tool=${selectedTool}, threshold=${threshold}, timeout=${timeout}s`, 'success');
}
} catch (e) {
console.warn('Failed to configure memory queue:', e);

@@ -1,6 +1,9 @@
// Navigation and Routing
// Manages navigation events, active state, content title updates, search, and path selector
// View lifecycle management
var currentViewDestroy = null;
// Path Selector
function initPathSelector() {
const btn = document.getElementById('pathButton');
@@ -54,6 +57,11 @@ function initPathSelector() {
// Cleanup function for view transitions
function cleanupPreviousView() {
// Call current view's destroy function if exists
if (currentViewDestroy) {
currentViewDestroy();
currentViewDestroy = null;
}
// Cleanup graph explorer
if (currentView === 'graph-explorer' && typeof window.cleanupGraphExplorer === 'function') {
window.cleanupGraphExplorer();
@@ -121,6 +129,10 @@ function initNavigation() {
renderCliManager();
} else if (currentView === 'cli-history') {
renderCliHistoryView();
// Register destroy function for cli-history view
if (typeof window.destroyCliHistoryView === 'function') {
currentViewDestroy = window.destroyCliHistoryView;
}
} else if (currentView === 'hook-manager') {
renderHookManager();
} else if (currentView === 'memory') {
@@ -133,6 +145,10 @@ function initNavigation() {
renderRulesManager();
} else if (currentView === 'claude-manager') {
renderClaudeManager();
// Initialize claude-manager view enhancements
if (typeof window.initClaudeManager === 'function') {
window.initClaudeManager();
}
} else if (currentView === 'graph-explorer') {
renderGraphExplorer();
} else if (currentView === 'help') {
@@ -216,8 +232,10 @@ function updateContentTitle() {
} else if (currentView === 'issue-discovery') {
titleEl.textContent = t('title.issueDiscovery');
} else if (currentView === 'liteTasks') {
-const names = { 'lite-plan': t('title.litePlanSessions'), 'lite-fix': t('title.liteFixSessions') };
+const names = { 'lite-plan': t('title.litePlanSessions'), 'lite-fix': t('title.liteFixSessions'), 'multi-cli-plan': t('title.multiCliPlanSessions') || 'Multi-CLI Plan Sessions' };
titleEl.textContent = names[currentLiteType] || t('title.liteTasks');
} else if (currentView === 'multiCliDetail') {
titleEl.textContent = t('title.multiCliDetail') || 'Multi-CLI Discussion Detail';
} else if (currentView === 'sessionDetail') {
titleEl.textContent = t('title.sessionDetail');
} else if (currentView === 'liteTaskDetail') {
@@ -319,12 +337,14 @@ function updateSidebarCounts(data) {
if (archivedCount) archivedCount.textContent = data.archivedSessions?.length || 0;
if (allCount) allCount.textContent = (data.activeSessions?.length || 0) + (data.archivedSessions?.length || 0);
-// Update lite task counts
-const litePlanCount = document.querySelector('.nav-item[data-lite="lite-plan"] .nav-count');
-const liteFixCount = document.querySelector('.nav-item[data-lite="lite-fix"] .nav-count');
+// Update lite task counts (using ID selectors to match dashboard.html structure)
+const litePlanCount = document.getElementById('badgeLitePlan');
+const liteFixCount = document.getElementById('badgeLiteFix');
+const multiCliPlanCount = document.getElementById('badgeMultiCliPlan');
if (litePlanCount) litePlanCount.textContent = data.liteTasks?.litePlan?.length || 0;
if (liteFixCount) liteFixCount.textContent = data.liteTasks?.liteFix?.length || 0;
if (multiCliPlanCount) multiCliPlanCount.textContent = data.liteTasks?.multiCliPlan?.length || 0;
}
// ========== Navigation Badge Aggregation ==========

@@ -83,7 +83,8 @@ const i18n = {
'nav.liteTasks': 'Lite Tasks',
'nav.litePlan': 'Lite Plan',
'nav.liteFix': 'Lite Fix',
'nav.multiCliPlan': 'Multi-CLI Plan',
// Sidebar - MCP section
'nav.mcpServers': 'MCP Servers',
'nav.manage': 'Manage',
@@ -119,9 +120,11 @@ const i18n = {
'title.cliHistory': 'CLI Execution History',
'title.litePlanSessions': 'Lite Plan Sessions',
'title.liteFixSessions': 'Lite Fix Sessions',
'title.multiCliPlanSessions': 'Multi-CLI Plan Sessions',
'title.liteTasks': 'Lite Tasks',
'title.sessionDetail': 'Session Detail',
'title.liteTaskDetail': 'Lite Task Detail',
'title.multiCliDetail': 'Multi-CLI Discussion Detail',
'title.hookManager': 'Hook Manager',
'title.memoryModule': 'Memory Module',
'title.promptHistory': 'Prompt History',
@@ -268,6 +271,15 @@ const i18n = {
'cli.envFilePlaceholder': 'Path to .env file (e.g., ~/.gemini-env or C:/Users/xxx/.env)',
'cli.envFileHint': 'Load environment variables (e.g., API keys) before CLI execution. Supports ~ for home directory.',
'cli.envFileBrowse': 'Browse',
'cli.envFilePathHint': 'Please verify or complete the file path (e.g., ~/.gemini-env)',
'cli.fileBrowser': 'File Browser',
'cli.fileBrowserSelect': 'Select',
'cli.fileBrowserCancel': 'Cancel',
'cli.fileBrowserUp': 'Parent Directory',
'cli.fileBrowserHome': 'Home',
'cli.fileBrowserShowHidden': 'Show hidden files',
'cli.fileBrowserApiError': 'Server restart required to enable file browser',
'cli.fileBrowserManualHint': 'Type the full path above and click Select (e.g., C:\\Users\\name\\.gemini)',
// CodexLens Configuration
'codexlens.config': 'CodexLens Configuration',
@@ -1095,6 +1107,8 @@ const i18n = {
'hook.wizard.memoryUpdateDesc': 'Queue-based CLAUDE.md updates with configurable threshold and timeout',
'hook.wizard.queueBasedUpdate': 'Queue-Based Update',
'hook.wizard.queueBasedUpdateDesc': 'Batch updates when threshold reached or timeout expires',
'hook.wizard.cliTool': 'CLI Tool',
'hook.wizard.cliToolDesc': 'CLI tool for CLAUDE.md generation',
'hook.wizard.thresholdPaths': 'Threshold (paths)',
'hook.wizard.thresholdPathsDesc': 'Number of paths to trigger batch update',
'hook.wizard.timeoutSeconds': 'Timeout (seconds)',
@@ -1195,7 +1209,130 @@ const i18n = {
'lite.diagnosisDetails': 'Diagnosis Details',
'lite.totalDiagnoses': 'Total Diagnoses:',
'lite.angles': 'Angles:',
'lite.multiCli': 'Multi-CLI',
// Multi-CLI Plan
'multiCli.rounds': 'rounds',
'multiCli.backToList': 'Back to Multi-CLI Plan',
'multiCli.roundCount': 'Rounds',
'multiCli.topic': 'Topic',
'multiCli.tab.topic': 'Discussion Topic',
'multiCli.tab.files': 'Related Files',
'multiCli.tab.planning': 'Planning',
'multiCli.tab.decision': 'Decision',
'multiCli.tab.timeline': 'Timeline',
'multiCli.tab.rounds': 'Rounds',
'multiCli.tab.discussion': 'Discussion',
'multiCli.tab.association': 'Association',
'multiCli.scope': 'Scope',
'multiCli.scope.included': 'Included',
'multiCli.scope.excluded': 'Excluded',
'multiCli.keyQuestions': 'Key Questions',
'multiCli.fileTree': 'File Tree',
'multiCli.impactSummary': 'Impact Summary',
'multiCli.dependencies': 'Dependencies',
'multiCli.functional': 'Functional Requirements',
'multiCli.nonFunctional': 'Non-Functional Requirements',
'multiCli.acceptanceCriteria': 'Acceptance Criteria',
'multiCli.source': 'Source',
'multiCli.confidence': 'Confidence',
'multiCli.selectedSolution': 'Selected Solution',
'multiCli.rejectedAlternatives': 'Rejected Alternatives',
'multiCli.rejectionReason': 'Reason',
'multiCli.pros': 'Pros',
'multiCli.cons': 'Cons',
'multiCli.effort': 'Effort',
'multiCli.sources': 'Sources',
'multiCli.currentRound': 'Current',
'multiCli.singleRoundInfo': 'This is a single-round discussion. View other tabs for details.',
'multiCli.noRoundData': 'No data for this round.',
'multiCli.roundId': 'Round',
'multiCli.timestamp': 'Time',
'multiCli.duration': 'Duration',
'multiCli.contributors': 'Contributors',
'multiCli.convergence': 'Convergence',
'multiCli.newInsights': 'New Insights',
'multiCli.crossVerification': 'Cross-Verification',
'multiCli.agreements': 'Agreements',
'multiCli.disagreements': 'Disagreements',
'multiCli.resolution': 'Resolution',
'multiCli.empty.topic': 'No Discussion Topic',
'multiCli.empty.topicText': 'No discussion topic data available for this session.',
'multiCli.empty.files': 'No Related Files',
'multiCli.empty.filesText': 'No file analysis data available for this session.',
'multiCli.empty.planning': 'No Planning Data',
'multiCli.empty.planningText': 'No planning requirements available for this session.',
'multiCli.empty.decision': 'No Decision Yet',
'multiCli.empty.decisionText': 'No decision has been made for this discussion yet.',
'multiCli.empty.timeline': 'No Timeline Events',
'multiCli.empty.timelineText': 'No decision timeline available for this session.',
'multiCli.empty.association': 'No Association Data',
'multiCli.empty.associationText': 'No context package or related files available for this session.',
'multiCli.round': 'Round',
'multiCli.solutionSummary': 'Solution Summary',
'multiCli.feasibility': 'Feasibility',
'multiCli.effort': 'Effort',
'multiCli.risk': 'Risk',
'multiCli.consensus': 'Consensus',
'multiCli.resolvedConflicts': 'Resolved Conflicts',
// Toolbar
'multiCli.toolbar.title': 'Task Navigator',
'multiCli.toolbar.tasks': 'Tasks',
'multiCli.toolbar.refresh': 'Refresh',
'multiCli.toolbar.exportJson': 'Export JSON',
'multiCli.toolbar.viewRaw': 'View Raw Data',
'multiCli.toolbar.noTasks': 'No tasks available',
'multiCli.toolbar.scrollToTask': 'Click to scroll to task',
// Context Tab
'multiCli.context.taskDescription': 'Task Description',
'multiCli.context.constraints': 'Constraints',
'multiCli.context.focusPaths': 'Focus Paths',
'multiCli.context.relevantFiles': 'Relevant Files',
'multiCli.context.dependencies': 'Dependencies',
'multiCli.context.conflictRisks': 'Conflict Risks',
'multiCli.context.sessionId': 'Session ID',
'multiCli.context.rawJson': 'Raw JSON',
// Summary Tab
'multiCli.summary.title': 'Summary',
'multiCli.summary.convergence': 'Convergence',
'multiCli.summary.solutions': 'Solutions',
'multiCli.summary.solution': 'Solution',
// Task Overview
'multiCli.task.description': 'Description',
'multiCli.task.keyPoint': 'Key Point',
'multiCli.task.scope': 'Scope',
'multiCli.task.dependencies': 'Dependencies',
'multiCli.task.targetFiles': 'Target Files',
'multiCli.task.acceptanceCriteria': 'Acceptance Criteria',
'multiCli.task.reference': 'Reference',
'multiCli.task.pattern': 'PATTERN',
'multiCli.task.files': 'FILES',
'multiCli.task.examples': 'EXAMPLES',
'multiCli.task.noOverviewData': 'No overview data available',
// Task Implementation
'multiCli.task.implementationSteps': 'Implementation Steps',
'multiCli.task.modificationPoints': 'Modification Points',
'multiCli.task.verification': 'Verification',
'multiCli.task.noImplementationData': 'No implementation details available',
'multiCli.task.noFilesSpecified': 'No files specified',
// Discussion Tab
'multiCli.discussion.title': 'Discussion',
'multiCli.discussion.discussionTopic': 'Discussion Topic',
'multiCli.solutions': 'Solutions',
'multiCli.decision': 'Decision',
// Plan
'multiCli.plan.objective': 'Objective',
'multiCli.plan.solution': 'Solution',
'multiCli.plan.approach': 'Approach',
'multiCli.plan.risk': 'risk',
// Modals
'modal.contentPreview': 'Content Preview',
'modal.raw': 'Raw',
@@ -2132,6 +2269,25 @@ const i18n = {
'issues.queueCommandInfo': 'After running the command, click "Refresh" to see the updated queue.',
'issues.alternative': 'Alternative',
'issues.refreshAfter': 'Refresh Queue',
'issues.activate': 'Activate',
'issues.deactivate': 'Deactivate',
'issues.queueActivated': 'Queue activated',
'issues.queueDeactivated': 'Queue deactivated',
'issues.deleteQueue': 'Delete queue',
'issues.confirmDeleteQueue': 'Are you sure you want to delete this queue? This action cannot be undone.',
'issues.queueDeleted': 'Queue deleted successfully',
'issues.actions': 'Actions',
'issues.archive': 'Archive',
'issues.delete': 'Delete',
'issues.confirmDeleteIssue': 'Are you sure you want to delete this issue? This action cannot be undone.',
'issues.confirmArchiveIssue': 'Archive this issue? It will be moved to history.',
'issues.issueDeleted': 'Issue deleted successfully',
'issues.issueArchived': 'Issue archived successfully',
'issues.executionQueues': 'Execution Queues',
'issues.queues': 'queues',
'issues.noQueues': 'No queues found',
'issues.queueEmptyHint': 'Generate execution queue from bound solutions',
'issues.refresh': 'Refresh',
// issue.* keys (legacy)
'issue.viewIssues': 'Issues',
'issue.viewQueue': 'Queue',
@@ -2257,7 +2413,8 @@ const i18n = {
'nav.liteTasks': '轻量任务',
'nav.litePlan': '轻量规划',
'nav.liteFix': '轻量修复',
'nav.multiCliPlan': '多CLI规划',
// Sidebar - MCP section
'nav.mcpServers': 'MCP 服务器',
'nav.manage': '管理',
@@ -2293,9 +2450,11 @@ const i18n = {
'title.cliHistory': 'CLI 执行历史',
'title.litePlanSessions': '轻量规划会话',
'title.liteFixSessions': '轻量修复会话',
'title.multiCliPlanSessions': '多CLI规划会话',
'title.liteTasks': '轻量任务',
'title.sessionDetail': '会话详情',
'title.liteTaskDetail': '轻量任务详情',
'title.multiCliDetail': '多CLI讨论详情',
'title.hookManager': '钩子管理',
'title.memoryModule': '记忆模块',
'title.promptHistory': '提示历史',
@@ -2442,6 +2601,15 @@ const i18n = {
'cli.envFilePlaceholder': '.env 文件路径(如 ~/.gemini-env 或 C:/Users/xxx/.env)',
'cli.envFileHint': '在 CLI 执行前加载环境变量(如 API 密钥)。支持 ~ 表示用户目录。',
'cli.envFileBrowse': '浏览',
'cli.envFilePathHint': '请确认或补全文件路径(如 ~/.gemini-env)',
'cli.fileBrowser': '文件浏览器',
'cli.fileBrowserSelect': '选择',
'cli.fileBrowserCancel': '取消',
'cli.fileBrowserUp': '上级目录',
'cli.fileBrowserHome': '主目录',
'cli.fileBrowserShowHidden': '显示隐藏文件',
'cli.fileBrowserApiError': '需要重启服务器以启用文件浏览器',
'cli.fileBrowserManualHint': '请在上方输入完整路径后点击选择(如 C:\\Users\\用户名\\.gemini)',
// CodexLens 配置
'codexlens.config': 'CodexLens 配置',
@@ -3248,6 +3416,8 @@ const i18n = {
'hook.wizard.memoryUpdateDesc': '基于队列的 CLAUDE.md 更新,支持阈值和超时配置',
'hook.wizard.queueBasedUpdate': '队列批量更新',
'hook.wizard.queueBasedUpdateDesc': '达到路径数量阈值或超时时批量更新',
'hook.wizard.cliTool': 'CLI 工具',
'hook.wizard.cliToolDesc': '用于生成 CLAUDE.md 的 CLI 工具',
'hook.wizard.thresholdPaths': '阈值(路径数)',
'hook.wizard.thresholdPathsDesc': '触发批量更新的路径数量',
'hook.wizard.timeoutSeconds': '超时(秒)',
@@ -3348,7 +3518,130 @@ const i18n = {
'lite.diagnosisDetails': '诊断详情',
'lite.totalDiagnoses': '总诊断数:',
'lite.angles': '分析角度:',
'lite.multiCli': '多CLI',
// Multi-CLI Plan
'multiCli.rounds': '轮',
'multiCli.backToList': '返回多CLI计划',
'multiCli.roundCount': '轮数',
'multiCli.topic': '主题',
'multiCli.tab.topic': '讨论主题',
'multiCli.tab.files': '相关文件',
'multiCli.tab.planning': '规划',
'multiCli.tab.decision': '决策',
'multiCli.tab.timeline': '时间线',
'multiCli.tab.rounds': '轮次',
'multiCli.tab.discussion': '讨论',
'multiCli.tab.association': '关联',
'multiCli.scope': '范围',
'multiCli.scope.included': '包含',
'multiCli.scope.excluded': '排除',
'multiCli.keyQuestions': '关键问题',
'multiCli.fileTree': '文件树',
'multiCli.impactSummary': '影响摘要',
'multiCli.dependencies': '依赖关系',
'multiCli.functional': '功能需求',
'multiCli.nonFunctional': '非功能需求',
'multiCli.acceptanceCriteria': '验收标准',
'multiCli.source': '来源',
'multiCli.confidence': '置信度',
'multiCli.selectedSolution': '选定方案',
'multiCli.rejectedAlternatives': '被拒绝的备选方案',
'multiCli.rejectionReason': '原因',
'multiCli.pros': '优点',
'multiCli.cons': '缺点',
'multiCli.effort': '工作量',
'multiCli.sources': '来源',
'multiCli.currentRound': '当前',
'multiCli.singleRoundInfo': '这是单轮讨论。查看其他标签页获取详情。',
'multiCli.noRoundData': '此轮无数据。',
'multiCli.roundId': '轮次',
'multiCli.timestamp': '时间',
'multiCli.duration': '持续时间',
'multiCli.contributors': '贡献者',
'multiCli.convergence': '收敛度',
'multiCli.newInsights': '新发现',
'multiCli.crossVerification': '交叉验证',
'multiCli.agreements': '一致意见',
'multiCli.disagreements': '分歧',
'multiCli.resolution': '决议',
'multiCli.empty.topic': '无讨论主题',
'multiCli.empty.topicText': '此会话无可用的讨论主题数据。',
'multiCli.empty.files': '无相关文件',
'multiCli.empty.filesText': '此会话无可用的文件分析数据。',
'multiCli.empty.planning': '无规划数据',
'multiCli.empty.planningText': '此会话无可用的规划需求。',
'multiCli.empty.decision': '暂无决策',
'multiCli.empty.decisionText': '此讨论尚未做出决策。',
'multiCli.empty.timeline': '无时间线事件',
'multiCli.empty.timelineText': '此会话无可用的决策时间线。',
'multiCli.empty.association': '无关联数据',
'multiCli.empty.associationText': '此会话无可用的上下文包或相关文件。',
'multiCli.round': '轮次',
'multiCli.solutionSummary': '方案摘要',
'multiCli.feasibility': '可行性',
'multiCli.effort': '工作量',
'multiCli.risk': '风险',
'multiCli.consensus': '共识',
'multiCli.resolvedConflicts': '已解决冲突',
// Toolbar
'multiCli.toolbar.title': '任务导航',
'multiCli.toolbar.tasks': '任务列表',
'multiCli.toolbar.refresh': '刷新',
'multiCli.toolbar.exportJson': '导出JSON',
'multiCli.toolbar.viewRaw': '查看原始数据',
'multiCli.toolbar.noTasks': '暂无任务',
'multiCli.toolbar.scrollToTask': '点击定位到任务',
// Context Tab
'multiCli.context.taskDescription': '任务描述',
'multiCli.context.constraints': '约束条件',
'multiCli.context.focusPaths': '焦点路径',
'multiCli.context.relevantFiles': '相关文件',
'multiCli.context.dependencies': '依赖项',
'multiCli.context.conflictRisks': '冲突风险',
'multiCli.context.sessionId': '会话ID',
'multiCli.context.rawJson': '原始JSON',
// Summary Tab
'multiCli.summary.title': '摘要',
'multiCli.summary.convergence': '收敛状态',
'multiCli.summary.solutions': '解决方案',
'multiCli.summary.solution': '方案',
// Task Overview
'multiCli.task.description': '描述',
'multiCli.task.keyPoint': '关键点',
'multiCli.task.scope': '范围',
'multiCli.task.dependencies': '依赖项',
'multiCli.task.targetFiles': '目标文件',
'multiCli.task.acceptanceCriteria': '验收标准',
'multiCli.task.reference': '参考资料',
'multiCli.task.pattern': '模式',
'multiCli.task.files': '文件',
'multiCli.task.examples': '示例',
'multiCli.task.noOverviewData': '无概览数据',
// Task Implementation
'multiCli.task.implementationSteps': '实现步骤',
'multiCli.task.modificationPoints': '修改点',
'multiCli.task.verification': '验证',
'multiCli.task.noImplementationData': '无实现详情',
'multiCli.task.noFilesSpecified': '未指定文件',
// Discussion Tab
'multiCli.discussion.title': '讨论',
'multiCli.discussion.discussionTopic': '讨论主题',
'multiCli.solutions': '解决方案',
'multiCli.decision': '决策',
// Plan
'multiCli.plan.objective': '目标',
'multiCli.plan.solution': '解决方案',
'multiCli.plan.approach': '实现方式',
'multiCli.plan.risk': '风险',
// Modals
'modal.contentPreview': '内容预览',
'modal.raw': '原始',
@@ -4318,6 +4611,25 @@ const i18n = {
'issues.queueCommandInfo': '运行命令后,点击"刷新"查看更新后的队列。',
'issues.alternative': '或者',
'issues.refreshAfter': '刷新队列',
'issues.activate': '激活',
'issues.deactivate': '取消激活',
'issues.queueActivated': '队列已激活',
'issues.queueDeactivated': '队列已取消激活',
'issues.deleteQueue': '删除队列',
'issues.confirmDeleteQueue': '确定要删除此队列吗?此操作无法撤销。',
'issues.queueDeleted': '队列删除成功',
'issues.actions': '操作',
'issues.archive': '归档',
'issues.delete': '删除',
'issues.confirmDeleteIssue': '确定要删除此议题吗?此操作无法撤销。',
'issues.confirmArchiveIssue': '归档此议题?它将被移动到历史记录中。',
'issues.issueDeleted': '议题删除成功',
'issues.issueArchived': '议题归档成功',
'issues.executionQueues': '执行队列',
'issues.queues': '个队列',
'issues.noQueues': '暂无队列',
'issues.queueEmptyHint': '从绑定的解决方案生成执行队列',
'issues.refresh': '刷新',
// issue.* keys (legacy)
'issue.viewIssues': '议题',
'issue.viewQueue': '队列',

Some files were not shown because too many files have changed in this diff.