Compare commits


39 Commits

Author SHA1 Message Date
catlog22
204cb20617 chore(release): version 6.3.49
- Update package.json version to 6.3.49
- Add changelog entry for v6.3.49 with all commits since v6.3.48
- New features: CLI tools enhancements, skills system improvements, security fixes
- Documentation updates and UI improvements
2026-01-28 23:34:31 +08:00
catlog22
63f0daebbb feat: Update initialization process to prioritize in-memory configuration for CLI tool selection 2026-01-28 23:20:50 +08:00
catlog22
6ac041c1d8 feat: Enhance Codex CLI settings with toggle and refresh actions 2026-01-28 23:05:31 +08:00
catlog22
279adfd391 feat: Implement Codex CLI enhancement settings with API integration and UI toggle 2026-01-28 23:01:18 +08:00
catlog22
0a07138c27 feat: Add ccw-cli-tools skill specification with unified execution framework and configuration-driven tool selection 2026-01-28 22:55:36 +08:00
catlog22
a5d9e8ca87 feat: Enhance lite-skill-generator with single file output and improved validation 2026-01-28 22:23:19 +08:00
catlog22
502c8a09a1 fix(security): Apply 3 critical security fixes
- sec-001: Add validateAllowedPath to /api/file endpoint (path traversal)
- sec-002: Enable CSRF by default with CCW_DISABLE_CSRF opt-out
- sec-003: Add validateAllowedPath to /api/dialog/browse and /api/dialog/open-file (path traversal)

Ref: fix-1738072800000
2026-01-28 22:04:18 +08:00
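A minimal sketch of what a path-traversal guard like this could look like; only the name `validateAllowedPath` comes from the commit message, while the signature and allow-list mechanics are assumptions:

```typescript
import * as path from "node:path";

// Hypothetical guard: resolve the requested path and require it to stay
// inside one of the configured allowed roots. The allow-list shape is an
// assumption; only the function name is from the commit above.
export function validateAllowedPath(requested: string, allowedRoots: string[]): string {
  const resolved = path.resolve(requested);
  const allowed = allowedRoots.some((root) => {
    const base = path.resolve(root);
    return resolved === base || resolved.startsWith(base + path.sep);
  });
  if (!allowed) {
    throw new Error(`Path traversal blocked: ${resolved}`);
  }
  return resolved;
}
```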
catlog22
ed0255b8a2 Add skill tuning diagnosis report for skill-generator
- Introduced a new JSON file `skill-tuning-diagnosis.json` containing a comprehensive diagnosis of the skill-generator.
- Documented critical issues related to context management and data flow, including:
  - Full state serialization leading to unbounded context growth.
  - Scattered state writing without a unified schema.
  - Lack of input state schema validation in autonomous orchestrators.
- Provided detailed descriptions, impacts, root causes, and fix strategies for each identified issue.
- Summarized recommendations with priority levels for urgent fixes.
2026-01-28 22:00:20 +08:00
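Judging from the bullets above, the diagnosis file plausibly follows a shape like the following; every field name below is illustrative rather than taken from the actual JSON:

```typescript
// Illustrative shape only - the real skill-tuning-diagnosis.json may differ.
interface DiagnosedIssue {
  title: string;        // e.g. "Full state serialization causes unbounded context growth"
  description: string;  // detailed description of the issue
  impact: string;       // what breaks or degrades
  rootCause: string;    // why it happens
  fixStrategy: string;  // proposed remediation
}

interface SkillTuningDiagnosis {
  target: string;       // "skill-generator"
  issues: DiagnosedIssue[];
  recommendations: Array<{
    summary: string;
    priority: "urgent" | "high" | "normal";
  }>;
}
```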
catlog22
6e94fc0740 chore: remove CLI endpoints section from Codex Code Guidelines 2026-01-28 21:33:43 +08:00
catlog22
b361a8c041 Add CLI endpoints documentation and unified script template for Bash and Python
- Updated AGENTS.md to include CLI tools usage and configuration details.
- Introduced a new script template for both Bash and Python, outlining usage context, calling conventions, and implementation guidelines.
- Provided examples for common patterns in both Bash and Python scripts.
- Established a directory convention for script organization and naming.
2026-01-28 21:29:21 +08:00
catlog22
24dad8cefd Refactor orchestrator logic and enhance problem taxonomy
- Updated orchestrator decision logic to improve state management and action selection.
- Introduced structured termination checks and action selection criteria.
- Enhanced state update mechanism with sliding window for action history and error tracking.
- Revised problem taxonomy for skill execution issues, consolidating categories and refining detection patterns.
- Improved severity calculation method for issue prioritization.
- Streamlined fix mapping strategies for better clarity and usability.
2026-01-28 21:08:49 +08:00
catlog22
071c98d89c feat: add brainstorm-to-cycle adapter for converting brainstorm output to parallel-dev-cycle input 2026-01-28 20:51:35 +08:00
catlog22
994718dee2 chore: remove unnecessary blank line in auto-parallel command documentation 2026-01-28 20:36:41 +08:00
catlog22
3998d24e32 Enhance skill generator documentation and templates
- Updated Phase 1 and Phase 2 documentation to include next phase links and data flow details.
- Expanded Phase 5 documentation to include comprehensive validation and README generation steps, along with validation report structure.
- Added purpose and usage context sections to various action and script templates (e.g., autonomous-action, llm-action, script-bash).
- Improved commands management by simplifying the command scanning logic and enabling/disabling commands through renaming files.
- Enhanced dashboard command manager to format group names and display nested groups with appropriate icons and colors.
- Updated LiteLLM executor to allow model overrides during execution.
- Added action reference guide and template reference sections to the skill-tuning SKILL.md for better navigation and understanding.
2026-01-28 20:34:03 +08:00
catlog22
29274ee943 feat(codex): add brainstorm-with-file prompt for interactive brainstorming workflow
- Add multi-perspective brainstorming workflow
- Support creative, pragmatic, and systematic analysis
- Include diverge-converge cycles with user interaction
- Add deep dive, devil's advocate, and idea merging
- Document thought evolution in brainstorm.md
2026-01-28 20:33:13 +08:00
catlog22
46d5739935 fix(changelog): update workflow references from review-fix to review-cycle-fix for consistency 2026-01-28 20:30:03 +08:00
catlog22
152cab2b7e feat: update review commands to use review-cycle-fix for automated fixing 2026-01-28 20:24:59 +08:00
catlog22
0cc5101c0e feat: Add phases for document consolidation, assembly, and compliance refinement
- Introduced Phase 2.5: Consolidation Agent to summarize analysis outputs and generate design overviews.
- Added Phase 4: Document Assembly to create index-style documents linking chapter files.
- Implemented Phase 5: Compliance Review & Iterative Refinement for CPCC compliance checks and updates.
- Established CPCC Compliance Requirements document outlining mandatory sections and validation functions.
- Created a base template for analysis agents to ensure consistency and efficiency in execution.
2026-01-28 19:57:24 +08:00
catlog22
4c78f53bcc feat: add commands management feature with API endpoints and UI integration
- Implemented commands routes for listing, enabling, and disabling commands.
- Created commands manager view with accordion groups for better organization.
- Added loading states and confirmation dialogs for enabling/disabling commands.
- Enhanced error handling and user feedback for command operations.
- Introduced CSS styles for commands manager UI components.
- Updated navigation to include commands manager link.
- Refactored existing code for better maintainability and clarity.
2026-01-28 08:26:37 +08:00
catlog22
cc5a5716cf fix(skills): improve robustness of enable/disable operations
- Add rollback in moveDirectory when rmSync fails after cpSync
- Add transaction rollback in disable/enableSkill when config save fails
- Surface config corruption by throwing on JSON parse errors
- Add robust JSON error parsing with fallback in frontend
- Add loading state and double-click prevention for toggle button
2026-01-28 08:25:59 +08:00
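A sketch of the copy-then-remove rollback described in the first bullet, assuming Node's fs API; everything beyond the cpSync/rmSync pairing is an assumption:

```typescript
import { cpSync, rmSync, existsSync } from "node:fs";

// Assumed shape of the hardened moveDirectory: copy first, then delete the
// source; if deletion fails, remove the freshly created copy so the move
// either fully succeeds or leaves the source untouched.
function moveDirectory(src: string, dest: string): void {
  cpSync(src, dest, { recursive: true });
  try {
    rmSync(src, { recursive: true, force: true });
  } catch (err) {
    // Rollback: drop the destination copy before rethrowing.
    if (existsSync(dest)) rmSync(dest, { recursive: true, force: true });
    throw err;
  }
}
```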
catlog22
af05874510 feat(skills): enhance moveDirectory function with rollback on failure and update config handling 2026-01-28 00:50:24 +08:00
catlog22
7a40f16235 feat(skills): implement enable/disable functionality for skills
- Added new API endpoints to enable and disable skills.
- Introduced logic to manage disabled skills, including loading and saving configurations.
- Enhanced skills routes to return lists of disabled skills.
- Updated frontend to display disabled skills and allow toggling their status.
- Added internationalization support for new skill status messages.
- Created JSON schemas for plan verification agent and findings.
- Defined new types for skill management in TypeScript.
2026-01-28 00:49:39 +08:00
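A hedged sketch of what the enable/disable endpoints could look like; the route paths, Express usage, and in-memory store are assumptions, not the actual implementation:

```typescript
import express from "express";

const app = express();

// Assumed persistence: the real code loads/saves a disabled-skills config
// and moves skill directories, rolling back if the config save fails
// (see the robustness commit above).
const disabledSkills = new Set<string>();

app.post("/api/skills/:name/disable", (req, res) => {
  disabledSkills.add(req.params.name);
  res.json({ disabled: [...disabledSkills] });
});

app.post("/api/skills/:name/enable", (req, res) => {
  disabledSkills.delete(req.params.name);
  res.json({ disabled: [...disabledSkills] });
});

// Listing returns the disabled set so the frontend can render toggles.
app.get("/api/skills/disabled", (_req, res) => {
  res.json({ disabled: [...disabledSkills] });
});
```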
catlog22
8d178feaac feat: enhance plan verification and context gathering with support for auto-execution and interactive user choices 2026-01-28 00:12:15 +08:00
catlog22
b3c47294e7 Enhance workflow commands and context management
- Updated `plan.md` to include new fields in context-package.json: prioritized_context, user_intent, priority_tiers, dependency_order, and sorting_rationale.
- Added validation for the existence of the prioritized_context field in context-package.json.
- Modified user decision flow in task generation to present action choices after planning completion.
- Improved context-gathering process in `context-gather.md` to integrate user intent and prioritize context based on user goals.
- Revised conflict-resolution documentation to require planning notes records after conflict analysis.
- Streamlined task generation in `task-generate-agent.md` to utilize pre-sorted context without redundant sorting.
- Removed unused settings persistence functions and corresponding tests from `claude-cli-tools.ts` and `settings-persistence.test.ts`.
2026-01-28 00:02:45 +08:00
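An illustrative TypeScript view of the new context-package.json fields named in the first bullet; the value types are assumptions:

```typescript
// Field names are from the commit; value types are illustrative assumptions.
interface ContextPackage {
  prioritized_context: Array<{ path: string; tier: number; reason: string }>;
  user_intent: string;                       // goal driving the prioritization
  priority_tiers: Record<string, string[]>;  // tier label -> file paths
  dependency_order: string[];                // files sorted by dependency
  sorting_rationale: string;                 // why this ordering was chosen
}
```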
catlog22
9989cfcf21 feat: update task generation and execution limits, optimize multi-module task management 2026-01-27 23:34:31 +08:00
catlog22
1b6ace0447 feat: add planning notes feature to support task generation and constraint management 2026-01-27 23:16:01 +08:00
catlog22
a3b303d8e3 Enhance CLI Lite Planning Agent with Mandatory Quality Check
- Added Phase 5: Plan Quality Check to cli-lite-planning-agent.md, detailing mandatory quality validation after plan generation.
- Introduced quality dimensions: completeness, granularity, dependencies, acceptance criteria, implementation steps, and constraint compliance.
- Specified CLI command format for quality check execution and expected output structure.
- Implemented result parsing and auto-fix strategies for minor issues.
- Updated integration flow to ensure quality check is executed before returning the plan to the orchestrator.

Refactor lite-plan.md to reflect internal quality check execution for medium/high complexity plans.

Create new brainstorm-with-file.md for interactive brainstorming workflow, detailing session setup, execution process, and implementation steps.
2026-01-27 23:02:05 +08:00
catlog22
0c1c87f704 fix: correct the file extension of the community group QR code image in README_CN.md
Change the reference from .jpg to .png to match the actual file name
2026-01-26 09:47:42 +08:00
catlog22
985085c624 Refactor CLI Config Manager and Add Provider Model Routes
- Removed deprecated constants and functions from cli-config-manager.ts.
- Introduced new provider model presets in litellm-provider-models.ts for better organization and management of model information.
- Created provider-routes.ts to handle API endpoints for retrieving provider information and models.
- Added integration tests for provider routes to ensure correct functionality and response structure.
- Implemented unit tests for settings persistence functions, covering various scenarios and edge cases.
- Enhanced error handling and validation in the new routes and settings functions.
2026-01-25 17:27:58 +08:00
catlog22
7c16cc6427 feat: add convert-to-plan command to convert planning documents into issue solutions 2026-01-25 12:32:42 +08:00
catlog22
6875108dda Add interactive analysis workflow with documented discussions and CLI exploration
- Introduced `analyze-with-file` command for collaborative analysis.
- Implemented session management for new and continued analysis sessions.
- Developed structured phases for topic understanding, exploration, discussion, and conclusion synthesis.
- Created detailed documentation for the workflow, including examples and implementation details.
- Added Codex prompt for deep analysis and exploration of codebase and concepts.
2026-01-25 11:08:13 +08:00
catlog22
9cff6f5f43 refactor(issue): remove 'merged' queue status, use 'archived' instead
- Remove 'merged' from VALID_QUEUE_STATUSES constant
- Update mergeQueues() to set status to 'archived' instead of 'merged'
- Preserve merged_into/merged_at metadata for traceability
- Update frontend to use 'archived' for button visibility checks
- Fix queue --json returning fake queue ID when no active queue exists
2026-01-25 10:56:46 +08:00
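A minimal sketch of the status change, assuming plausible queue fields; only VALID_QUEUE_STATUSES, mergeQueues(), and the merged_into/merged_at metadata come from the commit:

```typescript
// 'merged' is no longer a valid status; merged queues become 'archived'.
const VALID_QUEUE_STATUSES = ["active", "completed", "archived"] as const;
type QueueStatus = (typeof VALID_QUEUE_STATUSES)[number];

interface Queue {
  id: string;
  status: QueueStatus;
  merged_into?: string; // traceability metadata preserved on merge
  merged_at?: string;
}

function mergeQueues(source: Queue, target: Queue): void {
  source.status = "archived"; // previously set to 'merged'
  source.merged_into = target.id;
  source.merged_at = new Date().toISOString();
}
```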
catlog22
fe2536d4cd feat: add "merged" queue status and related validation, add status reference documentation 2026-01-25 10:11:52 +08:00
catlog22
16f27c080a docs: add Level 5 intelligent orchestration workflow guide to English version
- Added Level 5 (CCW Coordinator) chapter with complete Minimum Execution Units definition
- Added 3-Phase Workflow explanation (Analyze Requirements, Discover Commands, Execute)
- Integrated command port system for dynamic pipeline composition
- Added JavaScript task analysis and complexity assessment functions
- Included state file structure and complete Mermaid flowchart (119 lines)
- Updated Quick Selection Table to reference ccw-coordinator for complex workflows
- Updated Decision Flowchart to include Level 5 at top level
- Updated Summary Level Overview table to include Level 5 row
- Updated Core Principles with Level 5 selection criteria and usage guidelines
- Synchronized English version with WORKFLOW_GUIDE_CN.md Level 5 content
2026-01-24 21:41:31 +08:00
catlog22
874b70726d chore: archive unused test scripts and temporary documents
- Moved 6 empty test files (*.test.ts) to archive/
- Moved 5 Python test scripts (*.py) to archive/
- Moved 5 outdated/temporary documents to archive/
- Cleaned up root directory for better organization
2026-01-24 21:26:03 +08:00
catlog22
862365ffaf chore: remove the Codex Subagent usage guidelines document 2026-01-24 21:17:59 +08:00
catlog22
b8c807b2f9 feat: add parent/child directory lookup for ccw cli output
- Implement findProjectWithExecution() to search upward through parent directories
- Add automatic project path discovery in outputAction
- Support explicit --project parameter for manual path specification
- Improve error messages with search scope indication
- Display project path in formatted output
- Enable cross-directory execution without working directory dependency
2026-01-24 21:17:21 +08:00
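A sketch of the upward search described above; the marker used to recognize a project directory is an assumption based on paths elsewhere in this changeset:

```typescript
import * as path from "node:path";
import { existsSync } from "node:fs";

// Walk from the starting directory up to the filesystem root and return the
// first directory containing execution state. Using ".workflow" as the
// marker is an assumption, not confirmed by the commit.
function findProjectWithExecution(startDir: string): string | null {
  let dir = path.resolve(startDir);
  for (;;) {
    if (existsSync(path.join(dir, ".workflow"))) return dir;
    const parent = path.dirname(dir);
    if (parent === dir) return null; // reached the root without a match
    dir = parent;
  }
}
```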
catlog22
7ea6362c50 docs: add Level 5 workflow guide with CCW Coordinator and decision flowchart
- Add "Level 5: 智能编排 (CCW Coordinator)" chapter to WORKFLOW_GUIDE_CN.md
- Integrate 6.2.8 version full lifecycle command decision flowchart (Mermaid)
- Document minimum execution units (最小执行单元) for planning, testing, and review
- Explain 3-phase workflow: requirements analysis, command discovery & recommendation, sequential execution
- Include command port system for dynamic command chain assembly
- Add state file structure (.workflow/.ccw-coordinator/{session_id}/state.json)
- Document 3 typical scenarios: simple feature, bug fix, complex feature development
- Update overview diagram to show Level 5 with automation progression
- Update workflow selection guide, decision flowchart, and summary table
- Add Level 5 relationship documentation to other workflow levels
2026-01-24 21:15:44 +08:00
catlog22
b435391f17 docs: add /ccw and /ccw-coordinator as recommended commands
- Add prominent section highlighting /ccw and /ccw-coordinator as main features
- /ccw: Auto workflow orchestrator for general tasks
- /ccw-coordinator: Smart orchestrator with intelligent recommendations
- Include comparison table, quick examples, and key differences
- Update both English and Chinese READMEs
2026-01-24 15:12:33 +08:00
153 changed files with 16677 additions and 15956 deletions

View File

@@ -9,12 +9,7 @@
**Strictly follow the cli-tools.json configuration**
Available CLI endpoints are dynamically defined by the config file:
- Built-in tools and their enable/disable status
- Custom API endpoints registered via the Dashboard
- Managed through the CCW Dashboard Status page
Available CLI endpoints are dynamically defined by the config file
## Tool Execution
- **Context Requirements**: @~/.claude/workflows/context-tools.md
@@ -25,9 +20,12 @@ Available CLI endpoints are dynamically defined by the config file:
- **TaskOutput usage**: Only use `TaskOutput({ task_id: "xxx", block: false })` + sleep loop to poll completion status. NEVER read intermediate output during agent/CLI execution - wait for final result only
### CLI Tool Calls (ccw cli)
- **Default: `run_in_background: true`** - Unless otherwise specified, always use background execution for CLI calls:
- **Default: Use Bash `run_in_background: true`** - Unless otherwise specified, always execute CLI calls in background using Bash tool's background mode:
```
Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
Bash({
command: "ccw cli -p '...' --tool gemini",
run_in_background: true // Bash tool parameter, not ccw cli parameter
})
```
- **After CLI call**: Stop output immediately - let CLI execute in background. **DO NOT use TaskOutput polling** - wait for hook callback to receive results

View File

@@ -1,4 +0,0 @@
{
"interval": "manual",
"tool": "gemini"
}

View File

@@ -55,6 +55,17 @@ color: yellow
**Step-by-step execution**:
```
0. Load planning notes → Extract phase-level constraints (NEW)
Commands: Read('.workflow/active/{session-id}/planning-notes.md')
Output: Consolidated constraints from all workflow phases
Structure:
- User Intent: Original GOAL, KEY_CONSTRAINTS
- Context Findings: Critical files, architecture notes, constraints
- Conflict Decisions: Resolved conflicts, modified artifacts
- Consolidated Constraints: Numbered list of ALL constraints (Phase 1-3)
USAGE: This is the PRIMARY source of constraints. All task generation MUST respect these constraints.
1. Load session metadata → Extract user input
- User description: Original task/feature requirements
- Project scope: User-specified boundaries and goals
@@ -299,25 +310,22 @@ function computeCliStrategy(task, allTasks) {
**execution_config Alignment Rules** (MANDATORY):
```
userConfig.executionMethod → meta.execution_config + implementation_approach
userConfig.executionMethod → meta.execution_config
"agent" →
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
implementation_approach steps: NO command field (agent direct execution)
"hybrid" →
meta.execution_config = { method: "hybrid", cli_tool: userConfig.preferredCliTool }
implementation_approach steps: command field ONLY on complex steps
Execution: Agent executes pre_analysis, then directly implements implementation_approach
"cli" →
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool }
implementation_approach steps: command field on ALL steps
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Execution: Agent executes pre_analysis, then hands off context + implementation_approach to CLI
"hybrid" →
meta.execution_config = { method: "hybrid", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Execution: Agent decides which tasks to handoff to CLI based on complexity
```
**Consistency Check**: `meta.execution_config.method` MUST match presence of `command` fields:
- `method: "agent"` → 0 steps have command field
- `method: "hybrid"` → some steps have command field
- `method: "cli"` → all steps have command field
**Note**: implementation_approach steps NO LONGER contain `command` fields. CLI execution is controlled by task-level `meta.execution_config` only.
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
@@ -638,32 +646,6 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
"output": "implementation"
},
// === CLI MODE: Command Execution (optional command field) ===
{
"step": 3,
"title": "Execute implementation using CLI tool",
"description": "Use Codex/Gemini for complex autonomous execution",
"command": "ccw cli -p '[prompt]' --tool codex --mode write --cd [path]",
"modification_points": ["[Same as default mode]"],
"logic_flow": ["[Same as default mode]"],
"depends_on": [1, 2],
"output": "cli_implementation",
"cli_output_id": "step3_cli_id" // Store execution ID for resume
},
// === CLI MODE with Resume: Continue from previous CLI execution ===
{
"step": 4,
"title": "Continue implementation with context",
"description": "Resume from previous step with accumulated context",
"command": "ccw cli -p '[continuation prompt]' --resume ${step3_cli_id} --tool codex --mode write",
"resume_from": "step3_cli_id", // Reference previous step's CLI ID
"modification_points": ["[Continue from step 3]"],
"logic_flow": ["[Build on previous output]"],
"depends_on": [3],
"output": "continued_implementation",
"cli_output_id": "step4_cli_id"
}
]
```
@@ -785,13 +767,13 @@ Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
Use `analysis_results.complexity` or task count to determine structure:
**Single Module Mode**:
- **Simple Tasks** (≤5 tasks): Flat structure
- **Medium Tasks** (6-12 tasks): Flat structure
- **Complex Tasks** (>12 tasks): Re-scope required (maximum 12 tasks hard limit)
- **Simple Tasks** (≤4 tasks): Flat structure
- **Medium Tasks** (5-8 tasks): Flat structure
- **Complex Tasks** (>8 tasks): Re-scope required (maximum 8 tasks hard limit)
**Multi-Module Mode** (N+1 parallel planning):
- **Per-module limit**: ≤9 tasks per module
- **Total limit**: Sum of all module tasks ≤27 (3 modules × 9 tasks)
- **Per-module limit**: ≤6 tasks per module
- **Total limit**: No total limit (each module independently capped at 6 tasks)
- **Task ID format**: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- **Structure**: Hierarchical by module in IMPL_PLAN.md and TODO_LIST.md
@@ -855,6 +837,7 @@ Use `analysis_results.complexity` or task count to determine structure:
### 3.3 Guidelines Checklist
**ALWAYS:**
- **Load planning-notes.md FIRST**: Read planning-notes.md before context-package.json. Use its Consolidated Constraints as primary constraint source for all task generation
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
@@ -865,7 +848,7 @@ Use `analysis_results.complexity` or task count to determine structure:
- **Compute CLI execution strategy**: Based on `depends_on`, set `cli_execution.strategy` (new/resume/fork/merge_fork)
- Map artifacts: Use artifacts_inventory to populate task.context.artifacts array
- Add MCP integration: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- Validate task count: Maximum 12 tasks hard limit, request re-scope if exceeded
- Validate task count: Maximum 8 tasks (single module) or 6 tasks per module (multi-module), request re-scope if exceeded
- Use session paths: Construct all paths using provided session_id
- Link documents properly: Use correct linking format (📋 for JSON, ✅ for summaries)
- Run validation checklist: Verify all quantification requirements before finalizing task JSONs
@@ -879,7 +862,7 @@ Use `analysis_results.complexity` or task count to determine structure:
- Load files directly (use provided context package instead)
- Assume default locations (always use session_id in paths)
- Create circular dependencies in task.depends_on
- Exceed 12 tasks without re-scoping
- Exceed 8 tasks (single module) or 6 tasks per module (multi-module) without re-scoping
- Skip artifact integration when artifacts_inventory is provided
- Ignore MCP capabilities when available
- Use fixed pre-analysis steps without task-specific adaptation

View File

@@ -13,6 +13,8 @@ color: cyan
You are a generic planning agent that generates structured plan JSON for lite workflows. Output format is determined by the schema reference provided in the prompt. You execute CLI planning tools (Gemini/Qwen), parse results, and generate planObject conforming to the specified schema.
**CRITICAL**: After generating plan.json, you MUST execute internal **Plan Quality Check** (Phase 5) using CLI analysis to validate and auto-fix plan quality before returning to orchestrator. Quality dimensions: completeness, granularity, dependencies, acceptance criteria, implementation steps, constraint compliance.
## Input Context
@@ -72,7 +74,22 @@ Phase 4: planObject Generation
├─ Build planObject conforming to schema
├─ Assign CLI execution IDs and strategies
├─ Generate flow_control from depends_on
└─ Return to orchestrator
└─ Write initial plan.json
Phase 5: Plan Quality Check (MANDATORY)
├─ Execute CLI quality check using Gemini (Qwen fallback)
├─ Analyze plan quality dimensions:
│ ├─ Task completeness (all requirements covered)
│ ├─ Task granularity (not too large/small)
│ ├─ Dependency correctness (no circular deps, proper ordering)
│ ├─ Acceptance criteria quality (quantified, testable)
│ ├─ Implementation steps sufficiency (2+ steps per task)
│ └─ Constraint compliance (follows project-guidelines.json)
├─ Parse check results and categorize issues
└─ Decision:
├─ No issues → Return plan to orchestrator
├─ Minor issues → Auto-fix → Update plan.json → Return
└─ Critical issues → Report → Suggest regeneration
```
## CLI Command Template
@@ -734,3 +751,78 @@ function validateTask(task) {
- Skip task validation
- **Skip CLI execution ID assignment**
- **Ignore schema structure**
- **Skip Phase 5 Plan Quality Check**
---
## Phase 5: Plan Quality Check (MANDATORY)
### Overview
After generating plan.json, **MUST** execute CLI quality check before returning to orchestrator. This is a mandatory step for ALL plans regardless of complexity.
### Quality Dimensions
| Dimension | Check Criteria | Critical? |
|-----------|---------------|-----------|
| **Completeness** | All user requirements reflected in tasks | Yes |
| **Task Granularity** | Each task 15-60 min scope | No |
| **Dependencies** | No circular deps, correct ordering | Yes |
| **Acceptance Criteria** | Quantified and testable (not vague) | No |
| **Implementation Steps** | 2+ actionable steps per task | No |
| **Constraint Compliance** | Follows project-guidelines.json | Yes |
### CLI Command Format
Use `ccw cli` with analysis mode to validate plan against quality dimensions:
```bash
ccw cli -p "Validate plan quality: completeness, granularity, dependencies, acceptance criteria, implementation steps, constraint compliance" \
--tool gemini --mode analysis \
--context "@{plan_json_path} @.workflow/project-guidelines.json"
```
**Expected Output Structure**:
- Quality Check Report (6 dimensions with pass/fail status)
- Summary (critical/minor issue counts)
- Recommendation: `PASS` | `AUTO_FIX` | `REGENERATE`
- Fixes (JSON patches if AUTO_FIX)
### Result Parsing
Parse CLI output sections using regex to extract (a sketch follows this list):
- **6 Dimension Results**: Each with `passed` boolean and issue lists (missing requirements, oversized/undersized tasks, vague criteria, etc.)
- **Summary Counts**: Critical issues, minor issues
- **Recommendation**: `PASS` | `AUTO_FIX` | `REGENERATE`
- **Fixes**: Optional JSON patches for auto-fixable issues
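A sketch of that extraction, assuming the report prints a `Recommendation:` line and plain-text issue counts; the exact output layout is not specified here:

```typescript
type Recommendation = "PASS" | "AUTO_FIX" | "REGENERATE";

// Assumed plain-text report format; regexes would need adjusting to the
// actual CLI output.
function parseQualityCheck(output: string): {
  recommendation: Recommendation;
  criticalIssues: number;
  minorIssues: number;
} {
  const rec = output.match(/Recommendation:\s*(PASS|AUTO_FIX|REGENERATE)/);
  const critical = output.match(/critical issues?:\s*(\d+)/i);
  const minor = output.match(/minor issues?:\s*(\d+)/i);
  return {
    // Fail safe: treat an unparseable report as needing regeneration.
    recommendation: (rec?.[1] as Recommendation) ?? "REGENERATE",
    criticalIssues: critical ? Number(critical[1]) : 0,
    minorIssues: minor ? Number(minor[1]) : 0,
  };
}
```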
### Auto-Fix Strategy
Apply automatic fixes for minor issues:
| Issue Type | Auto-Fix Action | Example |
|-----------|----------------|---------|
| **Vague Acceptance** | Replace with quantified criteria | "works correctly" → "All unit tests pass with 100% success rate" |
| **Insufficient Steps** | Expand to 4-step template | Add: Analyze → Implement → Error handling → Verify |
| **CLI-Provided Patches** | Apply JSON patches from CLI output | Update task fields per patch specification |
After fixes, update `_metadata.quality_check` with fix log.
### Execution Flow
After Phase 4 planObject generation:
1. **Write Initial Plan** → `${sessionFolder}/plan.json`
2. **Execute CLI Check** → Gemini (Qwen fallback)
3. **Parse Results** → Extract recommendation and issues
4. **Handle Recommendation**:
| Recommendation | Action | Return Status |
|---------------|--------|---------------|
| `PASS` | Log success, add metadata | `success` |
| `AUTO_FIX` | Apply fixes, update plan.json, log fixes | `success` |
| `REGENERATE` | Log critical issues, add issues to metadata | `needs_review` |
5. **Return** → Plan with `_metadata.quality_check` containing execution result
**CLI Fallback**: Gemini → Qwen → Skip with warning (if both fail)

View File

@@ -186,34 +186,150 @@ output → Variable name to store this step's result
**Execution Flow**:
```
FOR each step in implementation_approach[] (ordered by step number):
1. Check depends_on: Wait for all listed step numbers to complete
2. Variable Substitution: Replace [variable_name] in description/modification_points
with values stored from previous steps' output
3. Execute step (choose one):
// Read task-level execution config (Single Source of Truth)
const executionMethod = task.meta?.execution_config?.method || 'agent';
const cliTool = task.meta?.execution_config?.cli_tool || getDefaultCliTool(); // See ~/.claude/cli-tools.json
IF step.command exists:
→ Execute the CLI command via Bash tool
→ Capture output
// Phase 1: Execute pre_analysis (always by Agent)
const preAnalysisResults = {};
for (const step of task.flow_control.pre_analysis || []) {
const result = executePreAnalysisStep(step);
preAnalysisResults[step.output_to] = result;
}
ELSE (no command - Agent direct implementation):
→ Read modification_points[] as list of files to create/modify
→ Read logic_flow[] as implementation sequence
→ For each file in modification_points:
• If "Create new file: path" → Use Write tool to create
• If "Modify file: path" → Use Edit tool to modify
• If "Add to file: path" → Use Edit tool to append
→ Follow logic_flow sequence for implementation logic
→ Use [focus_paths] from context as working directory scope
// Phase 2: Determine execution mode
const hasLegacyCommands = task.flow_control.implementation_approach
.some(step => step.command);
4. Store result in [step.output] variable for later steps
5. Mark step complete, proceed to next
IF hasLegacyCommands:
// Backward compatibility: Old mode with step.command fields
FOR each step in implementation_approach[]:
IF step.command exists:
→ Execute via Bash: Bash({ command: step.command, timeout: 3600000 })
ELSE:
→ Agent direct implementation
ELSE IF executionMethod === 'cli':
// New mode: CLI Handoff
→ const cliPrompt = buildCliHandoffPrompt(preAnalysisResults, task)
→ const cliCommand = buildCliCommand(task, cliTool, cliPrompt)
→ Bash({ command: cliCommand, run_in_background: false, timeout: 3600000 })
ELSE IF executionMethod === 'hybrid':
// Hybrid mode: Agent decides based on task complexity
→ IF task is complex (multiple files, complex logic):
Use CLI Handoff (same as cli mode)
ELSE:
Use Agent direct implementation
ELSE (executionMethod === 'agent'):
// Default: Agent direct implementation
FOR each step in implementation_approach[]:
1. Variable Substitution: Replace [variable_name] with preAnalysisResults
2. Read modification_points[] as files to create/modify
3. Read logic_flow[] as implementation sequence
4. For each file in modification_points:
• If "Create new file: path" → Use Write tool
• If "Modify file: path" → Use Edit tool
• If "Add to file: path" → Use Edit tool (append)
5. Follow logic_flow sequence
6. Use [focus_paths] from context as working directory scope
7. Store result in [step.output] variable
```
**CLI Command Execution (CLI Execute Mode)**:
When step contains `command` field with Codex CLI, execute via CCW CLI. For Codex resume:
- First task (`depends_on: []`): `ccw cli -p "..." --tool codex --mode write --cd [path]`
- Subsequent tasks (has `depends_on`): Use CCW CLI with resume context to maintain session
**CLI Handoff Functions**:
```javascript
// Get default CLI tool from cli-tools.json
function getDefaultCliTool() {
// Read ~/.claude/cli-tools.json and return first enabled tool
// Fallback order: gemini → qwen → codex (first enabled in config)
return firstEnabledTool || 'gemini'; // System default fallback
}
// Build CLI prompt from pre-analysis results and task
function buildCliHandoffPrompt(preAnalysisResults, task) {
const contextSection = Object.entries(preAnalysisResults)
.map(([key, value]) => `### ${key}\n${value}`)
.join('\n\n');
const approachSection = task.flow_control.implementation_approach
.map((step, i) => `
### Step ${step.step}: ${step.title}
${step.description}
**Modification Points**:
${step.modification_points?.map(m => `- ${m}`).join('\n') || 'N/A'}
**Logic Flow**:
${step.logic_flow?.map((l, j) => `${j + 1}. ${l}`).join('\n') || 'Follow modification points'}
`).join('\n');
return `
PURPOSE: ${task.title}
Complete implementation based on pre-analyzed context.
## PRE-ANALYSIS CONTEXT
${contextSection}
## REQUIREMENTS
${task.context.requirements?.map(r => `- ${r}`).join('\n') || task.context.requirements}
## IMPLEMENTATION APPROACH
${approachSection}
## ACCEPTANCE CRITERIA
${task.context.acceptance?.map(a => `- ${a}`).join('\n') || task.context.acceptance}
## TARGET FILES
${task.flow_control.target_files?.map(f => `- ${f}`).join('\n') || 'See modification points above'}
MODE: write
CONSTRAINTS: Follow existing patterns | No breaking changes
`.trim();
}
// Build CLI command with resume strategy
function buildCliCommand(task, cliTool, cliPrompt) {
const cli = task.cli_execution || {};
const escapedPrompt = cliPrompt.replace(/"/g, '\\"');
const baseCmd = `ccw cli -p "${escapedPrompt}"`;
switch (cli.strategy) {
case 'new':
return `${baseCmd} --tool ${cliTool} --mode write --id ${task.cli_execution_id}`;
case 'resume':
return `${baseCmd} --resume ${cli.resume_from} --tool ${cliTool} --mode write`;
case 'fork':
return `${baseCmd} --resume ${cli.resume_from} --id ${task.cli_execution_id} --tool ${cliTool} --mode write`;
case 'merge_fork':
return `${baseCmd} --resume ${cli.merge_from.join(',')} --id ${task.cli_execution_id} --tool ${cliTool} --mode write`;
default:
// Fallback: no resume, no id
return `${baseCmd} --tool ${cliTool} --mode write`;
}
}
```
**Execution Config Reference** (from task.meta.execution_config):
| Field | Values | Description |
|-------|--------|-------------|
| `method` | `agent` / `cli` / `hybrid` | Execution mode (default: agent) |
| `cli_tool` | See `~/.claude/cli-tools.json` | CLI tool preference (first enabled tool as default) |
| `enable_resume` | `true` / `false` | Enable CLI session resume |
**CLI Execution Reference** (from task.cli_execution):
| Field | Values | Description |
|-------|--------|-------------|
| `strategy` | `new` / `resume` / `fork` / `merge_fork` | Resume strategy |
| `resume_from` | `{session}-{task_id}` | Parent task CLI ID (resume/fork) |
| `merge_from` | `[{id1}, {id2}]` | Parent task CLI IDs (merge_fork) |
**Resume Strategy Examples**:
- **New task** (no dependencies): `--id WFS-001-IMPL-001`
- **Resume** (single dependency, single child): `--resume WFS-001-IMPL-001`
- **Fork** (single dependency, multiple children): `--resume WFS-001-IMPL-001 --id WFS-001-IMPL-002`
- **Merge** (multiple dependencies): `--resume WFS-001-IMPL-001,WFS-001-IMPL-002 --id WFS-001-IMPL-003`
**Test-Driven Development**:
- Write tests first (red → green → refactor)
@@ -389,7 +505,8 @@ Before completing any task, verify:
- Use `run_in_background=false` for all Bash/CLI calls - agent cannot receive task hook callbacks
- Set timeout ≥60 minutes for CLI commands (hooks don't propagate to subagents):
```javascript
Bash(command="ccw cli -p '...' --tool codex --mode write", timeout=3600000) // 60 min
Bash(command="ccw cli -p '...' --tool <cli-tool> --mode write", timeout=3600000) // 60 min
// <cli-tool>: First enabled tool from ~/.claude/cli-tools.json (e.g., gemini, qwen, codex)
```
**ALWAYS:**

View File

@@ -47,14 +47,30 @@ Interactive orchestration tool: analyze task → discover commands → recommend
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Code Review (Session)** | review-session-cycle → review-fix | Complete review cycle and apply fixes | Fixed code |
| **Code Review (Module)** | review-module-cycle → review-fix | Module review cycle and apply fixes | Fixed code |
| **Code Review (Session)** | review-session-cycle → review-cycle-fix | Complete review cycle and apply fixes | Fixed code |
| **Code Review (Module)** | review-module-cycle → review-cycle-fix | Module review cycle and apply fixes | Fixed code |
**Issue Units**:
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Issue Workflow** | discover → plan → queue → execute | Complete issue lifecycle | Completed issues |
| **Rapid-to-Issue** | lite-plan → convert-to-plan → queue → execute | Bridge lite workflow to issue workflow | Completed issues |
| **Brainstorm-to-Issue** | from-brainstorm → queue → execute | Bridge brainstorm session to issue workflow | Completed issues |
**With-File Units**:
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Brainstorm With File** | brainstorm-with-file | Multi-perspective ideation with documentation | brainstorm.md |
| **Debug With File** | debug-with-file | Hypothesis-driven debugging with documentation | understanding.md |
| **Analyze With File** | analyze-with-file | Collaborative analysis with documentation | discussion.md |
### Command-to-Unit Mapping
| Command | Can Precede | Atomic Units |
|---------|-----------|--------------|
| lite-plan | lite-execute | Quick Implementation |
| lite-plan | lite-execute, convert-to-plan | Quick Implementation, Rapid-to-Issue |
| multi-cli-plan | lite-execute | Multi-CLI Planning |
| lite-fix | lite-execute | Bug Fix |
| plan | plan-verify, execute | Full Planning + Execution, Verified Planning + Execution |
@@ -62,9 +78,17 @@ Interactive orchestration tool: analyze task → discover commands → recommend
| replan | execute | Replanning + Execution |
| test-gen | execute | Test Generation + Execution |
| tdd-plan | execute | TDD Planning + Execution |
| review-session-cycle | review-fix | Code Review (Session) |
| review-module-cycle | review-fix | Code Review (Module) |
| review-session-cycle | review-cycle-fix | Code Review (Session) |
| review-module-cycle | review-cycle-fix | Code Review (Module) |
| test-fix-gen | test-cycle-execute | Test Validation |
| issue:discover | issue:plan | Issue Workflow |
| issue:plan | issue:queue | Issue Workflow |
| convert-to-plan | issue:queue | Rapid-to-Issue |
| issue:queue | issue:execute | Issue Workflow, Rapid-to-Issue, Brainstorm-to-Issue |
| issue:from-brainstorm | issue:queue | Brainstorm-to-Issue |
| brainstorm-with-file | issue:from-brainstorm (optional) | Brainstorm With File, Brainstorm-to-Issue |
| debug-with-file | (standalone) | Debug With File |
| analyze-with-file | (standalone) | Analyze With File |
### Atomic Group Rules
@@ -105,6 +129,14 @@ function detectTaskType(text) {
if (/测试失败|test fail|fix test|failing test/.test(text)) return 'test-fix';
if (/generate test|写测试|add test|补充测试/.test(text)) return 'test-gen';
if (/review|审查|code review/.test(text)) return 'review';
// Issue workflow patterns
if (/issues?.*batch|batch.*issues?|批量.*issue|issue.*批量/.test(text)) return 'issue-batch';
if (/issue workflow|structured workflow|queue|multi-stage|转.*issue|issue.*流程/.test(text)) return 'issue-transition';
// With-File workflow patterns
if (/brainstorm|ideation|头脑风暴|创意|发散思维|creative thinking/.test(text)) return 'brainstorm-file';
if (/brainstorm.*issue|头脑风暴.*issue|idea.*issue|想法.*issue|从.*头脑风暴|convert.*brainstorm/.test(text)) return 'brainstorm-to-issue';
if (/debug.*document|hypothesis.*debug|深度调试|假设.*验证|systematic debug/.test(text)) return 'debug-file';
if (/analyze.*document|collaborative analysis|协作分析|深度.*理解/.test(text)) return 'analyze-file';
if (/不确定|explore|研究|what if|brainstorm|权衡/.test(text)) return 'brainstorm';
if (/多视角|比较方案|cross-verify|multi-cli/.test(text)) return 'multi-cli';
return 'feature'; // Default
@@ -252,8 +284,8 @@ const commandPorts = {
output: ['review-findings'],
tags: ['review']
},
'review-fix': {
name: 'review-fix',
'review-cycle-fix': {
name: 'review-cycle-fix',
input: ['review-findings', 'review-verified'], // Accept output from review-session-cycle or review-module-cycle
output: ['fixed-code'],
tags: ['review'],
@@ -277,14 +309,81 @@ const commandPorts = {
input: ['code', 'session'], // accepts code or session
output: ['review-verified'], // output port: review passed
tags: ['review'],
atomic_group: 'code-review' // atomic unit: bound to review-fix
atomic_group: 'code-review' // atomic unit: bound to review-cycle-fix
},
'review-module-cycle': {
name: 'review-module-cycle',
input: ['module-pattern'], // input port: module pattern
output: ['review-verified'], // output port: review passed
tags: ['review'],
atomic_group: 'code-review' // atomic unit: bound to review-fix
atomic_group: 'code-review' // atomic unit: bound to review-cycle-fix
},
// Issue workflow commands
'issue:discover': {
name: 'issue:discover',
input: ['codebase'], // input port: codebase
output: ['pending-issues'], // output port: pending issues
tags: ['issue'],
atomic_group: 'issue-workflow' // atomic unit: discover → plan → queue → execute
},
'issue:plan': {
name: 'issue:plan',
input: ['pending-issues'], // input port: pending issues
output: ['issue-plans'], // output port: issue plans
tags: ['issue'],
atomic_group: 'issue-workflow'
},
'issue:queue': {
name: 'issue:queue',
input: ['issue-plans', 'converted-plan'], // accepts output from issue:plan or convert-to-plan
output: ['execution-queue'], // output port: execution queue
tags: ['issue'],
atomic_groups: ['issue-workflow', 'rapid-to-issue']
},
'issue:execute': {
name: 'issue:execute',
input: ['execution-queue'], // input port: execution queue
output: ['completed-issues'], // output port: completed issues
tags: ['issue'],
atomic_groups: ['issue-workflow', 'rapid-to-issue']
},
'issue:convert-to-plan': {
name: 'issue:convert-to-plan',
input: ['plan'], // input port: lite-plan output
output: ['converted-plan'], // output port: converted issue plan
tags: ['issue', 'planning'],
atomic_group: 'rapid-to-issue' // atomic unit: lite-plan → convert-to-plan → queue → execute
},
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm-with-file': {
name: 'brainstorm-with-file',
input: ['exploration-topic'], // input port: exploration topic
output: ['brainstorm-document'], // output port: brainstorm.md + synthesized conclusions
tags: ['brainstorm', 'with-file'],
note: 'Self-contained workflow with multi-round diverge-converge cycles'
},
'issue:from-brainstorm': {
name: 'issue:from-brainstorm',
input: ['brainstorm-document'], // input port: brainstorm artifact (synthesis.json)
output: ['converted-plan'], // output port: issue + solution
tags: ['issue', 'brainstorm'],
atomic_group: 'brainstorm-to-issue' // atomic unit: from-brainstorm → queue → execute
},
'debug-with-file': {
name: 'debug-with-file',
input: ['bug-report'], // input port: bug report
output: ['understanding-document'], // output port: understanding.md + fix
tags: ['bugfix', 'with-file'],
note: 'Self-contained workflow with hypothesis-driven iteration'
},
'analyze-with-file': {
name: 'analyze-with-file',
input: ['analysis-topic'], // input port: analysis topic
output: ['discussion-document'], // output port: discussion.md + conclusions
tags: ['analysis', 'with-file'],
note: 'Self-contained workflow with multi-round discussion'
}
};
```
@@ -306,14 +405,22 @@ async function recommendCommandChain(analysis) {
// Port flows for each task type
function determinePortFlow(taskType, constraints) {
const flows = {
'bugfix': { inputPort: 'bug-report', outputPort: constraints?.includes('skip-tests') ? 'fixed-code' : 'test-passed' },
'tdd': { inputPort: 'requirement', outputPort: 'tdd-verified' },
'test-fix': { inputPort: 'failing-tests', outputPort: 'test-passed' },
'test-gen': { inputPort: 'code', outputPort: 'test-passed' },
'review': { inputPort: 'code', outputPort: 'review-verified' },
'brainstorm': { inputPort: 'exploration-topic', outputPort: 'test-passed' },
'multi-cli': { inputPort: 'requirement', outputPort: 'test-passed' },
'feature': { inputPort: 'requirement', outputPort: constraints?.includes('skip-tests') ? 'code' : 'test-passed' }
'bugfix': { inputPort: 'bug-report', outputPort: constraints?.includes('skip-tests') ? 'fixed-code' : 'test-passed' },
'tdd': { inputPort: 'requirement', outputPort: 'tdd-verified' },
'test-fix': { inputPort: 'failing-tests', outputPort: 'test-passed' },
'test-gen': { inputPort: 'code', outputPort: 'test-passed' },
'review': { inputPort: 'code', outputPort: 'review-verified' },
'brainstorm': { inputPort: 'exploration-topic', outputPort: 'test-passed' },
'multi-cli': { inputPort: 'requirement', outputPort: 'test-passed' },
// Issue workflow types
'issue-batch': { inputPort: 'codebase', outputPort: 'completed-issues' },
'issue-transition': { inputPort: 'requirement', outputPort: 'completed-issues' },
// With-File workflow types
'brainstorm-file': { inputPort: 'exploration-topic', outputPort: 'brainstorm-document' },
'brainstorm-to-issue': { inputPort: 'brainstorm-document', outputPort: 'completed-issues' },
'debug-file': { inputPort: 'bug-report', outputPort: 'understanding-document' },
'analyze-file': { inputPort: 'analysis-topic', outputPort: 'discussion-document' },
'feature': { inputPort: 'requirement', outputPort: constraints?.includes('skip-tests') ? 'code' : 'test-passed' }
};
return flows[taskType] || flows['feature'];
}
@@ -539,7 +646,7 @@ function formatCommand(cmd, previousResults, analysis) {
if (latest?.session_id) prompt += ` --session="${latest.session_id}"`;
// Review fix - takes session from review
} else if (name === 'review-fix') {
} else if (name === 'review-cycle-fix') {
const review = previousResults.find(r => r.command.includes('review'));
const latest = review || previousResults.filter(r => r.session_id).pop();
if (latest?.session_id) prompt += ` --session="${latest.session_id}"`;
@@ -553,6 +660,45 @@ function formatCommand(cmd, previousResults, analysis) {
} else if (name.includes('test') || name.includes('review') || name.includes('verify')) {
const latest = previousResults.filter(r => r.session_id).pop();
if (latest?.session_id) prompt += ` --session="${latest.session_id}"`;
// Issue workflow commands
} else if (name === 'issue:discover') {
// No parameters needed - discovers from codebase
prompt = `/issue:discover -y`;
} else if (name === 'issue:plan') {
prompt = `/issue:plan -y --all-pending`;
} else if (name === 'issue:queue') {
prompt = `/issue:queue -y`;
} else if (name === 'issue:execute') {
prompt = `/issue:execute -y --queue auto`;
} else if (name === 'issue:convert-to-plan' || name === 'convert-to-plan') {
// Convert latest lite-plan to issue plan
prompt = `/issue:convert-to-plan -y --latest-lite-plan`;
// With-File workflows (self-contained)
} else if (name === 'brainstorm-with-file') {
prompt = `/workflow:brainstorm-with-file -y "${analysis.goal}"`;
} else if (name === 'debug-with-file') {
prompt = `/workflow:debug-with-file -y "${analysis.goal}"`;
} else if (name === 'analyze-with-file') {
prompt = `/workflow:analyze-with-file -y "${analysis.goal}"`;
// Brainstorm-to-issue bridge
} else if (name === 'issue:from-brainstorm' || name === 'from-brainstorm') {
// Extract session ID from analysis.goal or latest brainstorm
const sessionMatch = analysis.goal.match(/BS-[\w-]+/);
if (sessionMatch) {
prompt = `/issue:from-brainstorm -y SESSION="${sessionMatch[0]}" --auto`;
} else {
// Find latest brainstorm session
prompt = `/issue:from-brainstorm -y --auto`;
}
}
return prompt;
@@ -904,18 +1050,20 @@ break; // ⚠️ STOP HERE - DO NOT use TaskOutput polling
## Available Commands
All from `~/.claude/commands/workflow/`:
All from `~/.claude/commands/workflow/` and `~/.claude/commands/issue/`:
**Planning**: lite-plan, plan, multi-cli-plan, plan-verify, tdd-plan
**Execution**: lite-execute, execute, develop-with-file
**Testing**: test-cycle-execute, test-gen, test-fix-gen, tdd-verify
**Review**: review, review-session-cycle, review-module-cycle, review-fix
**Review**: review, review-session-cycle, review-module-cycle, review-cycle-fix
**Bug Fixes**: lite-fix, debug, debug-with-file
**Brainstorming**: brainstorm:auto-parallel, brainstorm:artifacts, brainstorm:synthesis
**Design**: ui-design:*, animation-extract, layout-extract, style-extract, codify-style
**Session Management**: session:start, session:resume, session:complete, session:solidify, session:list
**Tools**: context-gather, test-context-gather, task-generate, conflict-resolution, action-plan-verify
**Utility**: clean, init, replan
**Issue Workflow**: issue:discover, issue:plan, issue:queue, issue:execute, issue:convert-to-plan, issue:from-brainstorm
**With-File Workflows**: brainstorm-with-file, debug-with-file, analyze-with-file
### Testing Commands Distinction
@@ -941,8 +1089,14 @@ All from `~/.claude/commands/workflow/`:
| **tdd** | requirement → tdd-plan → TDD tasks → execute → code → tdd-verify | TDD Planning + Execution |
| **test-fix** | failing tests →【test-fix-gen → test-cycle-execute】→ tests pass | Test Validation |
| **test-gen** | code/session →【test-gen → execute】→ tests pass | Test Generation + Execution |
| **review** | code →【review-* → review-fix】→ fixed code →【test-fix-gen → test-cycle-execute】→ tests pass | Code Review + Testing |
| **review** | code →【review-* → review-cycle-fix】→ fixed code →【test-fix-gen → test-cycle-execute】→ tests pass | Code Review + Testing |
| **brainstorm** | exploration topic → brainstorm → analysis →【plan → plan-verify】→ execute → test | Exploration + Planning + Execution |
| **multi-cli** | requirement → multi-cli-plan → comparative analysis → lite-execute → test | Multi-Perspective + Testing |
| **issue-batch** | codebase →【discover → plan → queue → execute】→ completed issues | Issue Workflow |
| **issue-transition** | requirement →【lite-plan → convert-to-plan → queue → execute】→ completed issues | Rapid-to-Issue |
| **brainstorm-file** | topic → brainstorm-with-file → brainstorm.md (self-contained) | Brainstorm With File |
| **brainstorm-to-issue** | brainstorm.md →【from-brainstorm → queue → execute】→ completed issues | Brainstorm to Issue |
| **debug-file** | bug report → debug-with-file → understanding.md (self-contained) | Debug With File |
| **analyze-file** | analysis topic → analyze-with-file → discussion.md (self-contained) | Analyze With File |
Use `CommandRegistry.getAllCommandsSummary()` to discover all commands dynamically.

View File

@@ -0,0 +1,832 @@
---
name: ccw-debug
description: Aggregated debug command - combines debugging diagnostics and test verification in a synergistic workflow supporting cli-quick / debug-first / test-first / bidirectional-verification modes
argument-hint: "[--mode cli|debug|test|bidirectional] [--yes|-y] [--hotfix] \"bug description or error message\""
allowed-tools: SlashCommand(*), TodoWrite(*), AskUserQuestion(*), Read(*), Bash(*)
---
# CCW-Debug Aggregated Command
## Core Concept
**Aggregated Debug Command** - Combines debugging diagnostics and test verification in a synergistic workflow. Not a simple concatenation of two commands, but intelligent orchestration based on mode selection.
### Four Execution Modes
| Mode | Workflow | Use Case | Characteristics |
|------|----------|----------|-----------------|
| **CLI Quick** (cli) | Direct CLI Analysis → Fix Suggestions | Simple issues, quick diagnosis | Fastest, minimal workflow, recommendation-only |
| **Debug First** (debug) | Debug → Analyze Hypotheses → Apply Fix → Test Verification | Root cause unclear, requires exploration | Starts with exploration, Gemini-assisted |
| **Test First** (test) | Generate Tests → Execute → Analyze Failures → CLI Fixes | Code implemented, needs test validation | Driven by test coverage, auto-iterates |
| **Bidirectional Verification** (bidirectional) | Parallel: Debug + Test → Merge Findings → Unified Fix | Complex systems, ambiguous symptoms | Parallel execution, converged insights |
---
## Quick Start
### Basic Usage
```bash
# CLI quick mode: fastest, recommendation-only (new!)
/ccw-debug --mode cli "Login failed: token validation error"
# Default mode: debug-first (recommended for most scenarios)
/ccw-debug "Login failed: token validation error"
# Test-first mode
/ccw-debug --mode test "User permission check failure"
# Bidirectional verification mode (complex issues)
/ccw-debug --mode bidirectional "Payment flow multiple failures"
# Auto mode (skip all confirmations)
/ccw-debug --yes "Quick fix: database connection timeout"
# Production hotfix (minimal diagnostics)
/ccw-debug --hotfix --yes "Production: API returns 500"
```
### Mode Selection Guide
**Choose "CLI Quick"** when:
- Need immediate diagnosis, not execution
- Want quick recommendations without workflows
- Simple issues with clear symptoms
- Just need fix suggestions, no auto-application
- Time is critical, prefer fast output
- Want to review CLI analysis before action
**Choose "Debug First"** when:
- Root cause is unclear
- Error messages are incomplete or vague
- Need to understand code execution flow
- Issues involve multi-module interactions
**Choose "Test First"** when:
- Code is fully implemented
- Need test coverage verification
- Have clear failure cases
- Want automated iterative fixes
**Choose "Bidirectional Verification"** when:
- System is complex (multiple subsystems)
- Problem symptoms are ambiguous (multiple possible root causes)
- Need multi-angle validation
- Time allows parallel analysis
---
## Execution Flow
### Overall Process
```
Phase 1: Intent Analysis & Mode Selection
├─ Parse --mode flag or recommend mode
├─ Check --hotfix and --yes flags
└─ Determine workflow path
Phase 2: Initialization
├─ CLI Quick: Lightweight init (no session directory needed)
├─ Others: Create unified session directory (.workflow/.ccw-debug/)
├─ Setup TodoWrite tracking
└─ Prepare session context
Phase 3: Execute Corresponding Workflow
├─ CLI Quick: ccw cli → Diagnosis Report → Optional: Escalate to debug/test/apply fix
├─ Debug First: /workflow:debug-with-file → Fix → /workflow:test-fix-gen → /workflow:test-cycle-execute
├─ Test First: /workflow:test-fix-gen → /workflow:test-cycle-execute → CLI analyze failures
└─ Bidirectional: [/workflow:debug-with-file] ∥ [/workflow:test-fix-gen → test-cycle-execute]
Phase 4: Merge Findings (Bidirectional Mode) / Escalation Decision (CLI Mode)
├─ CLI Quick: Present results → Ask user: Apply fix? Escalate? Done?
├─ Bidirectional: Converge findings from both workflows
├─ Identify consistent and conflicting root cause analyses
└─ Generate unified fix plan
Phase 5: Completion & Follow-up
├─ Generate summary report
├─ Provide next step recommendations
└─ Optional: Expand to issues (testing/enhancement/refactoring/documentation)
```
---
## Workflow Details
### Mode 0: CLI Quick (Minimal Debug Method)
**Best For**: Fast recommendations without full workflow overhead
**Workflow**:
```
User Input → Quick Context Gather → ccw cli (Gemini/Qwen/Codex)
        ↓
  Analysis Report
        ↓
  Fix Recommendations
        ↓
  Optional: User Decision
   ┌──────────────┼──────────────┐
   ↓              ↓              ↓
Apply Fix   Escalate Mode      Done
            (debug/test)
```
**Execution Steps**:
1. **Lightweight Context Gather** (Phase 2)
```javascript
// No session directory needed for CLI mode
const tempContext = {
bug_description: bug_description,
timestamp: getUtc8ISOString(),
mode: "cli"
}
// Quick context discovery (30s max)
// - Read error file if path provided
// - Extract error patterns from description
// - Identify likely affected files (basic grep)
```
2. **Execute CLI Analysis** (Phase 3)
```bash
# Use ccw cli with bug diagnosis template
ccw cli -p "
PURPOSE: Quick bug diagnosis for immediate recommendations
TASK:
• Analyze bug symptoms: ${bug_description}
• Identify likely root cause
• Provide actionable fix recommendations (code snippets if possible)
• Assess fix confidence level
MODE: analysis
CONTEXT: ${contextFiles.length > 0 ? '@' + contextFiles.join(' @') : 'Bug description only'}
EXPECTED:
- Root cause hypothesis (1-2 sentences)
- Fix strategy (immediate/comprehensive/refactor)
- Code snippets or file modification suggestions
- Confidence level: High/Medium/Low
- Risk assessment
CONSTRAINTS: Quick analysis, 2-5 minutes max
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```
3. **Present Results** (Phase 4)
```
## CLI Quick Analysis Complete
**Issue**: [bug_description]
**Analysis Time**: [duration]
**Confidence**: [High/Medium/Low]
### Root Cause
[1-2 sentence hypothesis]
### Fix Strategy
[immediate_patch | comprehensive_fix | refactor]
### Recommended Changes
**File**: src/module/file.ts
```typescript
// Change line 45-50
- old code
+ new code
```
**Rationale**: [why this fix]
**Risk**: [Low/Medium/High] - [risk description]
### Confidence Assessment
- Analysis confidence: [percentage]
- Recommendation: [apply immediately | review first | escalate to full debug]
```
4. **User Decision** (Phase 5)
```javascript
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes && confidence === 'High') {
// Auto-apply fix
console.log('[--yes + High confidence] Auto-applying fix...')
applyFixFromCLIRecommendation(cliOutput)
} else {
// Ask user
const decision = AskUserQuestion({
questions: [{
question: `CLI analysis complete (${confidence} confidence). What next?`,
header: "Decision",
multiSelect: false,
options: [
{ label: "Apply Fix", description: "Apply recommended changes immediately" },
{ label: "Escalate to Debug", description: "Switch to debug-first for deeper analysis" },
{ label: "Escalate to Test", description: "Switch to test-first for validation" },
{ label: "Review Only", description: "Just review, no action" }
]
}]
})
if (decision === "Apply Fix") {
applyFixFromCLIRecommendation(cliOutput)
} else if (decision === "Escalate to Debug") {
// Re-invoke ccw-debug with --mode debug
SlashCommand(command=`/ccw-debug --mode debug "${bug_description}"`)
} else if (decision === "Escalate to Test") {
// Re-invoke ccw-debug with --mode test
SlashCommand(command=`/ccw-debug --mode test "${bug_description}"`)
}
}
```
**Key Characteristics**:
- **Speed**: 2-5 minutes total (fastest mode)
- **Session**: No persistent session directory (lightweight)
- **Output**: Recommendation report only
- **Execution**: Optional, user-controlled
- **Escalation**: Can upgrade to full debug/test workflows
**Limitations**:
- No hypothesis iteration (single-shot analysis)
- No automatic test generation
- No instrumentation/logging
- Best for clear symptoms with localized fixes
---
### Mode 1: Debug First
**Best For**: Issues requiring root cause exploration
**Workflow**:
```
User Input → Session Init → /workflow:debug-with-file
                   ↓
  Generate understanding.md + hypotheses
                   ↓
  User reproduces issue, analyze logs
                   ↓
     Gemini validates hypotheses
                   ↓
          Apply fix code
                   ↓
     /workflow:test-fix-gen
                   ↓
  /workflow:test-cycle-execute
                   ↓
    Generate unified report
```
**Execution Steps**:
1. **Session Initialization** (Phase 2)
```javascript
const sessionId = `CCWD-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.ccw-debug/${sessionId}`
bash(`mkdir -p ${sessionFolder}`)
// Record mode selection
const modeConfig = {
  mode: "debug",
  original_input: bug_description,
  timestamp: getUtc8ISOString(),
  flags: { hotfix, autoYes }
}
Write(`${sessionFolder}/mode-config.json`, JSON.stringify(modeConfig, null, 2))
```
2. **Start Debug** (Phase 3)
```javascript
SlashCommand(command=`/workflow:debug-with-file "${bug_description}"`)
// Update TodoWrite
TodoWrite({
  todos: [
    { content: "Phase 1: Debug & Analysis", status: "completed" },
    { content: "Phase 2: Apply Fix from Debug Findings", status: "in_progress" },
    { content: "Phase 3: Generate & Execute Tests", status: "pending" },
    { content: "Phase 4: Generate Report", status: "pending" }
  ]
})
```
3. **Apply Fix** (Handled by debug command)
4. **Test Generation & Execution**
```javascript
// Auto-continue after debug command completes
SlashCommand(command=`/workflow:test-fix-gen "Test validation for: ${bug_description}"`)
SlashCommand(command="/workflow:test-cycle-execute")
```
5. **Generate Report** (Phase 5)
```
## Debug-First Workflow Completed
**Issue**: [bug_description]
**Mode**: Debug First
**Session**: [sessionId]
### Debug Phase Results
- Root Cause: [extracted from understanding.md]
- Hypothesis Confirmation: [from hypotheses.json]
- Fixes Applied: [list of modified files]
### Test Phase Results
- Tests Created: [test files generated by IMPL-001]
- Pass Rate: [final test pass rate]
- Iteration Count: [fix iterations]
### Key Findings
- [learning points from debugging]
- [coverage insights from testing]
```
---
### Mode 2: Test First
**Best For**: Implemented code needing test validation
**Workflow**:
```
User Input → Session Init → /workflow:test-fix-gen
                    ↓
  Generate test tasks (IMPL-001, IMPL-002)
                    ↓
      /workflow:test-cycle-execute
                    ↓
Auto-iterate: Test → Analyze Failures → CLI Fix
                    ↓
        Until pass rate ≥ 95%
                    ↓
           Generate report
```
**Execution Steps**:
1. **Session Initialization** (Phase 2)
```javascript
const modeConfig = {
  mode: "test",
  original_input: bug_description,
  timestamp: getUtc8ISOString(),
  flags: { hotfix, autoYes }
}
```
2. **Generate Tests** (Phase 3)
```javascript
SlashCommand(command=`/workflow:test-fix-gen "${bug_description}"`)
// Update TodoWrite
TodoWrite({
  todos: [
    { content: "Phase 1: Generate Tests", status: "completed" },
    { content: "Phase 2: Execute & Fix Tests", status: "in_progress" },
    { content: "Phase 3: Final Validation", status: "pending" },
    { content: "Phase 4: Generate Report", status: "pending" }
  ]
})
```
3. **Execute & Iterate** (Phase 3 cont.)
```javascript
SlashCommand(command="/workflow:test-cycle-execute")
// test-cycle-execute handles:
// - Execute tests
// - Analyze failures
// - Generate fix tasks via CLI
// - Iterate fixes until pass
```
4. **Generate Report** (Phase 5)
---
### Mode 3: Bidirectional Verification
**Best For**: Complex systems, multi-dimensional analysis
**Workflow**:
```
User Input → Session Init → Parallel execution:
        ┌──────────────────────────────┐
        │                              │
        ↓                              ↓
/workflow:debug-with-file    /workflow:test-fix-gen
        │                              │
Generate hypotheses            Generate test tasks
& understanding                        │
        │                              ↓
        ↓                  /workflow:test-cycle-execute
Apply debug fixes                      │
        │                              │
        └──────────────┬───────────────┘
                       ↓
            Phase 4: Merge Findings
            ├─ Converge root cause analyses
            ├─ Identify consistency (mutual validation)
            ├─ Identify conflicts (need coordination)
            └─ Generate unified report
```
**Execution Steps**:
1. **Parallel Execution** (Phase 3)
```javascript
// Start debug
const debugTask = SlashCommand(
  command=`/workflow:debug-with-file "${bug_description}"`,
  run_in_background=false
)
// Start test generation (synchronous execution, SlashCommand blocks)
const testTask = SlashCommand(
  command=`/workflow:test-fix-gen "${bug_description}"`,
  run_in_background=false
)
// Execute test cycle
const testCycleTask = SlashCommand(
  command="/workflow:test-cycle-execute",
  run_in_background=false
)
```
2. **Merge Findings** (Phase 4)
```javascript
// Read debug results
const understandingMd = Read(`${debugSessionFolder}/understanding.md`)
const hypothesesJson = JSON.parse(Read(`${debugSessionFolder}/hypotheses.json`))
const debugRootCause = hypothesesJson.confirmed_hypothesis
// Read test results
const testResultsJson = JSON.parse(Read(`${testSessionFolder}/.process/test-results.json`))
const fixPlanJson = JSON.parse(Read(`${testSessionFolder}/.task/IMPL-002.json`))
const testFailures = testResultsJson.failures
// Merge analysis
const convergence = {
  debug_root_cause: debugRootCause,
  test_failure_pattern: testFailures,
  consistency: analyzeConsistency(debugRootCause, testFailures),
  conflicts: identifyConflicts(debugRootCause, testFailures),
  unified_root_cause: mergeRootCauses(debugRootCause, testFailures),
  recommended_fix: selectBestFix(debugRootCause, testFailures)
}
```
3. **Generate Report** (Phase 5)
```
## Bidirectional Verification Workflow Completed
**Issue**: [bug_description]
**Mode**: Bidirectional Verification
### Debug Findings
- Root Cause (hypothesis): [from understanding.md]
- Confidence: [from hypotheses.json]
- Key code paths: [file:line]
### Test Findings
- Failure pattern: [list of failing tests]
- Error type: [error type]
- Impact scope: [affected modules]
### Merged Analysis
- ✓ Consistent: Both workflows identified same root cause
- ⚠ Conflicts: [list any conflicts]
- → Unified Root Cause: [final confirmed root cause]
### Recommended Fix
- Strategy: [selected fix strategy]
- Rationale: [why this strategy]
- Risks: [known risks]
```
---
## Command Line Interface
### Complete Syntax
```bash
/ccw-debug [OPTIONS] <BUG_DESCRIPTION>
Options:
--mode <cli|debug|test|bidirectional> Execution mode (default: debug)
--yes, -y Auto mode (skip all confirmations)
--hotfix, -h Production hotfix mode (only for debug mode)
--no-tests Skip test generation in debug-first mode
--skip-report Don't generate final report
--resume <session-id> Resume interrupted session
Arguments:
<BUG_DESCRIPTION> Issue description, error message, or .md file path
```
### Examples
```bash
# CLI quick mode: fastest, recommendation-only (NEW!)
/ccw-debug --mode cli "User login timeout"
/ccw-debug --mode cli --yes "Quick fix: API 500 error" # Auto-apply if high confidence
# Debug first (default)
/ccw-debug "User login timeout"
# Test first
/ccw-debug --mode test "Payment validation failure"
# Bidirectional verification
/ccw-debug --mode bidirectional "Multi-module data consistency issue"
# Hotfix auto mode
/ccw-debug --hotfix --yes "API 500 error"
# Debug first, skip tests
/ccw-debug --no-tests "Understand code flow"
# Resume interrupted session
/ccw-debug --resume CCWD-login-timeout-2025-01-27
```
---
## Session Structure
### File Organization
```
.workflow/.ccw-debug/CCWD-{slug}-{date}/
├── mode-config.json # Mode configuration and flags
├── session-manifest.json # Session index and status
├── final-report.md # Final report
├── debug/ # Debug workflow (if mode includes debug)
│ ├── debug-session-id.txt
│ ├── understanding.md
│ ├── hypotheses.json
│ └── debug.log
├── test/ # Test workflow (if mode includes test)
│ ├── test-session-id.txt
│ ├── IMPL_PLAN.md
│ ├── test-results.json
│ └── iteration-state.json
└── fusion/ # Fusion analysis (bidirectional mode)
├── convergence-analysis.json
├── consistency-report.md
└── unified-root-cause.json
```
### Session State Management
```json
{
"session_id": "CCWD-login-timeout-2025-01-27",
"mode": "debug|test|bidirectional",
"status": "running|completed|failed|paused",
"phases": {
"phase_1": { "status": "completed", "timestamp": "..." },
"phase_2": { "status": "in_progress", "timestamp": "..." },
"phase_3": { "status": "pending" },
"phase_4": { "status": "pending" },
"phase_5": { "status": "pending" }
},
"sub_sessions": {
"debug_session": "DBG-...",
"test_session": "WFS-test-..."
},
"artifacts": {
"debug_understanding": "...",
"test_results": "...",
"fusion_analysis": "..."
}
}
```
---
## Mode Selection Logic
### Auto Mode Recommendation
When user doesn't specify `--mode`, recommend based on input analysis:
```javascript
function recommendMode(bugDescription) {
  const indicators = {
    cli_signals: [
      /quick|fast|simple|immediate/,
      /recommendation|suggest|advice/,
      /just need|only want|quick look/,
      /straightforward|obvious|clear/
    ],
    debug_signals: [
      /unclear|don't know|maybe|uncertain|why/,
      /error|crash|fail|exception|stack trace/,
      /execution flow|code path|how does/
    ],
    test_signals: [
      /test|coverage|verify|pass|fail/,
      /implementation|implemented|complete/,
      /case|scenario|should/
    ],
    complex_signals: [
      /multiple|all|system|integration/,
      /module|subsystem|network|distributed/,
      /concurrent|async|race/
    ]
  }
  let score = { cli: 0, debug: 0, test: 0, bidirectional: 0 }
  // CLI signals (lightweight preference)
  for (const pattern of indicators.cli_signals) {
    if (pattern.test(bugDescription)) score.cli += 3
  }
  // Debug signals
  for (const pattern of indicators.debug_signals) {
    if (pattern.test(bugDescription)) score.debug += 2
  }
  // Test signals
  for (const pattern of indicators.test_signals) {
    if (pattern.test(bugDescription)) score.test += 2
  }
  // Complex signals (prefer bidirectional for complex issues)
  for (const pattern of indicators.complex_signals) {
    if (pattern.test(bugDescription)) {
      score.bidirectional += 3
      score.cli -= 2 // Complex issues not suitable for CLI quick
    }
  }
  // If description is short and has a clear error, prefer CLI
  if (bugDescription.length < 100 && /error|fail|crash/.test(bugDescription)) {
    score.cli += 2
  }
  // Return highest scoring mode
  return Object.entries(score).sort((a, b) => b[1] - a[1])[0][0]
}
```
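For example, a short description like the one below trips two CLI signals plus the short-description bonus, so CLI Quick wins (illustrative trace of the scoring above):
```javascript
// "quick look: login API returns 500 error"
// cli_signals: /quick|fast|simple|immediate/ and /just need|only want|quick look/ → cli +6
// debug_signals: /error|crash|fail|exception|stack trace/ → debug +2
// short description (< 100 chars) containing "error" → cli +2
// score: { cli: 8, debug: 2, test: 0, bidirectional: 0 }
recommendMode("quick look: login API returns 500 error") // → "cli"
```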
---
## Best Practices
### When to Use Each Mode
| Issue Characteristic | Recommended Mode | Rationale |
|----------------------|-----------------|-----------|
| Simple error, clear symptoms | CLI Quick | Fastest recommendation |
| Incomplete error info, requires exploration | Debug First | Deep diagnostic capability |
| Code complete, needs test coverage | Test First | Automated iterative fixes |
| Cross-module issue, ambiguous symptoms | Bidirectional | Multi-angle insights |
| Production failure, needs immediate guidance | CLI Quick + --yes | Fastest guidance, optional escalation |
| Production failure, needs safe fix | Debug First + --hotfix | Minimal diagnosis time |
| Want to understand why it failed | Debug First | Records understanding evolution |
| Want to ensure all scenarios pass | Test First | Complete coverage-driven |
### Performance Tips
- **CLI Quick**: 2-5 minutes, no file I/O, recommendation-only
- **Debug First**: Usually requires manually reproducing the issue (after logging is added), then 15-30 min
- **Test First**: Fully automated, 20-45 min depending on test suite size
- **Bidirectional**: Most comprehensive but slowest (parallel workflows), 30-60 min
### Workflow Continuity
- **CLI Quick**: Can escalate to debug/test/apply fix based on user decision
- **Debug First**: Auto-launches test generation and execution after completion
- **Test First**: High failure rates prompt a suggestion to switch to debug mode for root-cause analysis
- **Bidirectional**: Always executes complete flow
---
## Follow-up Expansion
After completion, offer to expand to issues:
```
## Done! What's next?
- [ ] Create Test issue (improve test coverage)
- [ ] Create Enhancement issue (optimize code quality)
- [ ] Create Refactor issue (improve architecture)
- [ ] Create Documentation issue (record learnings)
- [ ] Don't create any issue, end workflow
```
Selected items call: `/issue:new "{issue summary} - {dimension}"`
---
## Error Handling
| Error | CLI Quick | Debug First | Test First | Bidirectional |
|-------|-----------|-------------|-----------|---------------|
| Session creation failed | N/A (no session) | Retry → abort | Retry → abort | Retry → abort |
| CLI analysis failed | Retry with fallback tool → manual | N/A | N/A | N/A |
| Diagnosis/test failed | N/A | Continue with partial results | Direct failure | Use alternate workflow results |
| Low confidence result | Ask escalate or review | N/A | N/A | N/A |
| Merge conflicts | N/A | N/A | N/A | Select highest confidence plan |
| Fix application failed | Report error, no auto-retry | Request manual fix | Mark failed, request intervention | Try alternative plan |
---
## Relationship with ccw Command
| Feature | ccw | ccw-debug |
|---------|-----|----------|
| **Design** | General workflow orchestration | Debug + test aggregation |
| **Intent Detection** | ✅ Detects task type | ✅ Detects issue type |
| **Automation** | ✅ Auto-selects workflow | ✅ Auto-selects mode |
| **Quick Mode** | ❌ None | ✅ CLI Quick (2-5 min) |
| **Parallel Execution** | ❌ Sequential | ✅ Bidirectional mode parallel |
| **Fusion Analysis** | ❌ None | ✅ Bidirectional mode fusion |
| **Workflow Scope** | Broad (feature/bugfix/tdd/ui etc.) | Deep focus (debug + test) |
| **CLI Integration** | Yes | Yes (4 levels: quick/deep/iterative/fusion) |
---
## Usage Recommendations
1. **First Time**: Use default mode (debug-first), observe workflow
2. **Quick Decision**: Use CLI Quick (--mode cli) for immediate recommendations
3. **Quick Fix**: Use `--hotfix --yes` for minimal diagnostics (debug mode)
4. **Learning**: Use debug-first, read `understanding.md`
5. **Complete Validation**: Use bidirectional for multi-dimensional insights
6. **Auto Repair**: Use test-first for automatic iteration
7. **Escalation**: Start with CLI Quick, escalate to other modes as needed
---
## Reference
### Related Commands
- `ccw cli` - Direct CLI analysis (used by CLI Quick mode)
- `/workflow:debug-with-file` - Deep debug diagnostics
- `/workflow:test-fix-gen` - Test generation
- `/workflow:test-cycle-execute` - Test execution
- `/workflow:lite-fix` - Lightweight fix
- `/ccw` - General workflow orchestration
### Configuration Files
- `~/.claude/cli-tools.json` - CLI tool configuration (Gemini/Qwen/Codex)
- `.workflow/project-tech.json` - Project technology stack
- `.workflow/project-guidelines.json` - Project conventions
### CLI Tool Fallback Chain (for CLI Quick mode)
When CLI analysis fails, fallback order:
1. **Gemini** (primary): `gemini-2.5-pro`
2. **Qwen** (fallback): `coder-model`
3. **Codex** (fallback): `gpt-5.2`
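A sketch of what that fallback loop might look like, reusing the document's `Bash` pseudo-tool; the retry behavior is an assumption, and only the `--tool` values shown elsewhere in this document are used:
```javascript
// Hypothetical sketch - tries each tool in the documented fallback order
function runCliWithFallback(prompt) {
  const tools = ['gemini', 'qwen', 'codex'] // documented fallback order
  for (const tool of tools) {
    try {
      return Bash(`ccw cli -p '${prompt.replace(/'/g, "'\\''")}' --tool ${tool} --mode analysis`)
    } catch (e) {
      console.log(`${tool} failed (${e.message}), trying next tool...`)
    }
  }
  throw new Error('All CLI tools failed - falling back to manual analysis')
}
```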
---
## Summary: Mode Selection Decision Tree
```
User calls: /ccw-debug <bug_description>
┌─ Explicit --mode specified?
│ ├─ YES → Use specified mode
│ │ ├─ cli → 2-5 min analysis, optionally escalate
│ │ ├─ debug → Full debug-with-file workflow
│ │ ├─ test → Test-first workflow
│ │ └─ bidirectional → Parallel debug + test
│ │
│ └─ NO → Auto-recommend based on bug description
│ ├─ Keywords: "quick", "fast", "simple" → CLI Quick
│ ├─ Keywords: "error", "crash", "exception" → Debug First (or CLI if simple)
│ ├─ Keywords: "test", "verify", "coverage" → Test First
│ └─ Keywords: "multiple", "system", "distributed" → Bidirectional
├─ Check --yes flag
│ ├─ YES → Auto-confirm all decisions
│ │ ├─ CLI mode: Auto-apply if confidence High
│ │ └─ Others: Auto-select default options
│ │
│ └─ NO → Interactive mode, ask user for confirmations
├─ Check --hotfix flag (debug mode only)
│ ├─ YES → Minimal diagnostics, fast fix
│ └─ NO → Full analysis
└─ Execute selected mode workflow
└─ Return results or escalation options
```

View File

@@ -24,7 +24,7 @@ Main process orchestrator: intent analysis → workflow selection → command ch
|-----------|---------|---------|
| **Planning + Execution** | plan-cmd → execute-cmd | lite-plan → lite-execute |
| **Testing** | test-gen-cmd → test-exec-cmd | test-fix-gen → test-cycle-execute |
| **Review** | review-cmd → fix-cmd | review-session-cycle → review-fix |
| **Review** | review-cmd → fix-cmd | review-session-cycle → review-cycle-fix |
**Atomic Rules**:
1. CCW automatically groups commands into minimum units - never splits them
@@ -67,10 +67,16 @@ function analyzeIntent(input) {
function detectTaskType(text) {
const patterns = {
'bugfix-hotfix': /urgent|production|critical/ && /fix|bug/,
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm': /brainstorm|ideation|头脑风暴|创意|发散思维|creative thinking|multi-perspective.*think|compare perspectives|探索.*可能/,
'brainstorm-to-issue': /brainstorm.*issue|头脑风暴.*issue|idea.*issue|想法.*issue|从.*头脑风暴|convert.*brainstorm/,
'debug-file': /debug.*document|hypothesis.*debug|troubleshoot.*track|investigate.*log|调试.*记录|假设.*验证|systematic debug|深度调试/,
'analyze-file': /analyze.*document|explore.*concept|understand.*architecture|investigate.*discuss|collaborative analysis|分析.*讨论|深度.*理解|协作.*分析/,
// Standard workflows
'bugfix': /fix|bug|error|crash|fail|debug/,
'issue-batch': /issues?|batch/ && /fix|resolve/,
'issue-transition': /issue workflow|structured workflow|queue|multi-stage/,
'exploration': /uncertain|explore|research|what if/,
'multi-perspective': /multi-perspective|compare|cross-verify/,
'quick-task': /quick|simple|small/ && /feature|function/,
'ui-design': /ui|design|component|style/,
'tdd': /tdd|test-driven|test first/,
@@ -111,14 +117,21 @@ async function clarifyRequirements(analysis) {
function selectWorkflow(analysis) {
const levelMap = {
'bugfix-hotfix': { level: 2, flow: 'bugfix.hotfix' },
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm': { level: 4, flow: 'brainstorm-with-file' }, // Multi-perspective ideation
'brainstorm-to-issue': { level: 4, flow: 'brainstorm-to-issue' }, // Brainstorm → Issue workflow
'debug-file': { level: 3, flow: 'debug-with-file' }, // Hypothesis-driven debugging
'analyze-file': { level: 3, flow: 'analyze-with-file' }, // Collaborative analysis
// Standard workflows
'bugfix': { level: 2, flow: 'bugfix.standard' },
'issue-batch': { level: 'Issue', flow: 'issue' },
'issue-transition': { level: 2.5, flow: 'rapid-to-issue' }, // Bridge workflow
'exploration': { level: 4, flow: 'full' },
'quick-task': { level: 1, flow: 'lite-lite-lite' },
'ui-design': { level: analysis.complexity === 'high' ? 4 : 3, flow: 'ui' },
'tdd': { level: 3, flow: 'tdd' },
'test-fix': { level: 3, flow: 'test-fix-gen' },
'review': { level: 3, flow: 'review-fix' },
'review': { level: 3, flow: 'review-cycle-fix' },
'documentation': { level: 2, flow: 'docs' },
'feature': { level: analysis.complexity === 'high' ? 3 : 2, flow: analysis.complexity === 'high' ? 'coupled' : 'rapid' }
};
@@ -147,6 +160,16 @@ function buildCommandChain(workflow, analysis) {
])
],
// Level 2 Bridge - Lightweight to Issue Workflow
'rapid-to-issue': [
// Unit: Quick Implementation【lite-plan → convert-to-plan】
{ cmd: '/workflow:lite-plan', args: `"${analysis.goal}"`, unit: 'quick-impl-to-issue' },
{ cmd: '/issue:convert-to-plan', args: '--latest-lite-plan -y', unit: 'quick-impl-to-issue' },
// Auto-continue to issue workflow
{ cmd: '/issue:queue', args: '' },
{ cmd: '/issue:execute', args: '--queue auto' }
],
'bugfix.standard': [
// Unit: Bug Fix【lite-fix → lite-execute】
{ cmd: '/workflow:lite-fix', args: `"${analysis.goal}"`, unit: 'bug-fix' },
@@ -179,6 +202,30 @@ function buildCommandChain(workflow, analysis) {
{ cmd: '/workflow:lite-execute', args: '--in-memory', unit: 'quick-impl' }
],
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm-with-file': [
{ cmd: '/workflow:brainstorm-with-file', args: `"${analysis.goal}"` }
// Note: Has built-in post-completion options (create plan, create issue, deep analysis)
],
// Brainstorm-to-Issue workflow (bridge from brainstorm to issue execution)
'brainstorm-to-issue': [
// Note: Assumes brainstorm session already exists, or run brainstorm first
{ cmd: '/issue:from-brainstorm', args: `SESSION="${extractBrainstormSession(analysis)}" --auto` },
{ cmd: '/issue:queue', args: '' },
{ cmd: '/issue:execute', args: '--queue auto' }
],
'debug-with-file': [
{ cmd: '/workflow:debug-with-file', args: `"${analysis.goal}"` }
// Note: Self-contained with hypothesis-driven iteration and Gemini validation
],
'analyze-with-file': [
{ cmd: '/workflow:analyze-with-file', args: `"${analysis.goal}"` }
// Note: Self-contained with multi-round discussion and CLI exploration
],
// Level 3 - Standard
'coupled': [
// Unit: Verified Planning【plan → plan-verify】
@@ -186,9 +233,9 @@ function buildCommandChain(workflow, analysis) {
{ cmd: '/workflow:plan-verify', args: '', unit: 'verified-planning' },
// Execution
{ cmd: '/workflow:execute', args: '' },
// Unit: Code Review【review-session-cycle → review-fix】
// Unit: Code Review【review-session-cycle → review-cycle-fix】
{ cmd: '/workflow:review-session-cycle', args: '', unit: 'code-review' },
{ cmd: '/workflow:review-fix', args: '', unit: 'code-review' },
{ cmd: '/workflow:review-cycle-fix', args: '', unit: 'code-review' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
...(analysis.constraints?.includes('skip-tests') ? [] : [
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
@@ -210,10 +257,10 @@ function buildCommandChain(workflow, analysis) {
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
],
'review-fix': [
// Unit: Code Review【review-session-cycle → review-fix】
'review-cycle-fix': [
// Unit: Code Review【review-session-cycle → review-cycle-fix】
{ cmd: '/workflow:review-session-cycle', args: '', unit: 'code-review' },
{ cmd: '/workflow:review-fix', args: '', unit: 'code-review' },
{ cmd: '/workflow:review-cycle-fix', args: '', unit: 'code-review' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
@@ -409,7 +456,12 @@ Phase 5: Execute Command Chain
|-------|------|-------|-----------------------|
| "Add API endpoint" | feature (low) | 2 |【lite-plan → lite-execute】→【test-fix-gen → test-cycle-execute】|
| "Fix login timeout" | bugfix | 2 |【lite-fix → lite-execute】→【test-fix-gen → test-cycle-execute】|
| "OAuth2 system" | feature (high) | 3 |【plan → plan-verify】→ execute →【review-session-cycle → review-fix】→【test-fix-gen → test-cycle-execute|
| "Use issue workflow" | issue-transition | 2.5 |【lite-plan → convert-to-plan】→ queue → execute |
| "头脑风暴: 通知系统重构" | brainstorm | 4 | brainstorm-with-file → (built-in post-completion) |
| "从头脑风暴创建 issue" | brainstorm-to-issue | 4 | from-brainstorm → queue → execute |
| "深度调试 WebSocket 连接断开" | debug-file | 3 | debug-with-file → (hypothesis iteration) |
| "协作分析: 认证架构优化" | analyze-file | 3 | analyze-with-file → (multi-round discussion) |
| "OAuth2 system" | feature (high) | 3 |【plan → plan-verify】→ execute →【review-session-cycle → review-cycle-fix】→【test-fix-gen → test-cycle-execute】|
| "Implement with TDD" | tdd | 3 |【tdd-plan → execute】→ tdd-verify |
| "Uncertain: real-time arch" | exploration | 4 | brainstorm:auto-parallel →【plan → plan-verify】→ execute →【test-fix-gen → test-cycle-execute】|
@@ -452,6 +504,29 @@ todos = [
---
## With-File Workflows
**With-File workflows** provide documented exploration with multi-CLI collaboration. They are self-contained and generate comprehensive session artifacts.
| Workflow | Purpose | Key Features | Output Folder |
|----------|---------|--------------|---------------|
| **brainstorm-with-file** | Multi-perspective ideation | Gemini/Codex/Claude perspectives, diverge-converge cycles | `.workflow/.brainstorm/` |
| **debug-with-file** | Hypothesis-driven debugging | Gemini validation, understanding evolution, NDJSON logging | `.workflow/.debug/` |
| **analyze-with-file** | Collaborative analysis | Multi-round Q&A, CLI exploration, documented discussions | `.workflow/.analysis/` |
**Detection Keywords**:
- **brainstorm**: 头脑风暴, 创意, 发散思维, multi-perspective, compare perspectives
- **debug-file**: 深度调试, 假设验证, systematic debug, hypothesis debug
- **analyze-file**: 协作分析, 深度理解, collaborative analysis, explore concept
**Characteristics**:
1. **Self-Contained**: Each workflow handles its own iteration loop
2. **Documented Process**: Creates evolving documents (brainstorm.md, understanding.md, discussion.md)
3. **Multi-CLI**: Uses Gemini/Codex/Claude for different perspectives
4. **Built-in Post-Completion**: Offers follow-up options (create plan, issue, etc.)
---
## Type Comparison: ccw vs ccw-coordinator
| Aspect | ccw | ccw-coordinator |
@@ -483,4 +558,10 @@ ccw "Implement user registration with TDD"
# Exploratory task
ccw "Uncertain about architecture for real-time notifications"
# With-File workflows (documented exploration with multi-CLI collaboration)
ccw "头脑风暴: 用户通知系统重新设计" # → brainstorm-with-file
ccw "从头脑风暴 BS-通知系统-2025-01-28 创建 issue" # → brainstorm-to-issue (bridge)
ccw "深度调试: 系统随机崩溃问题" # → debug-with-file
ccw "协作分析: 理解现有认证架构的设计决策" # → analyze-with-file
```

View File

@@ -3,6 +3,7 @@ name: cli-init
description: Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection
argument-hint: "[--tool gemini|qwen|all] [--output path] [--preview]"
allowed-tools: Bash(*), Read(*), Write(*), Glob(*)
group: cli
---
# CLI Initialization Command (/cli:cli-init)

View File

@@ -1,93 +0,0 @@
---
name: enhance-prompt
description: Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection
argument-hint: "user input to enhance"
---
## Overview
Systematically enhances user prompts by leveraging session memory context and intent analysis, translating ambiguous requests into actionable specifications.
## Core Protocol
**Enhancement Pipeline:**
`Intent Translation` → `Context Integration` → `Structured Output`
**Context Sources:**
- Session memory (conversation history, previous analysis)
- Implicit technical requirements
- User intent patterns
## Enhancement Rules
### Intent Translation
| User Says | Translate To | Focus |
|-----------|--------------|-------|
| "fix" | Debug and resolve | Root cause → preserve behavior |
| "improve" | Enhance/optimize | Performance/readability |
| "add" | Implement feature | Integration + edge cases |
| "refactor" | Restructure quality | Maintain behavior |
| "update" | Modernize | Version compatibility |
### Context Integration Strategy
**Session Memory:**
- Reference recent conversation context
- Reuse previously identified patterns
- Build on established understanding
- Infer technical requirements from discussion
**Example:**
```bash
# User: "add login"
# Session Memory: Previous auth discussion, JWT mentioned
# Inferred: JWT-based auth, integrate with existing session management
# Action: Implement JWT authentication with session persistence
```
## Output Structure
```bash
INTENT: [Clear technical goal]
CONTEXT: [Session memory + codebase patterns]
ACTION: [Specific implementation steps]
ATTENTION: [Critical constraints]
```
### Output Examples
**Example 1:**
```bash
# Input: "fix login button"
INTENT: Debug non-functional login button
CONTEXT: From session - OAuth flow discussed, known state issue
ACTION: Check event binding → verify state updates → test auth flow
ATTENTION: Preserve existing OAuth integration
```
**Example 2:**
```bash
# Input: "refactor payment code"
INTENT: Restructure payment module for maintainability
CONTEXT: Session memory - PCI compliance requirements, Stripe integration patterns
ACTION: Extract reusable validators → isolate payment gateway logic → maintain adapter pattern
ATTENTION: Zero behavior change, maintain PCI compliance, full test coverage
```
## Enhancement Triggers
- Ambiguous language: "fix", "improve", "clean up"
- Vague requests requiring clarification
- Complex technical requirements
- Architecture changes
- Critical systems: auth, payment, security
- Multi-step refactoring
## Key Principles
1. **Session Memory First**: Leverage conversation context and established understanding
2. **Context Reuse**: Build on previous discussions and decisions
3. **Clear Output**: Structured, actionable specifications
4. **Intent Clarification**: Transform vague requests into specific technical goals
5. **Avoid Duplication**: Reference existing context, don't repeat

View File

@@ -0,0 +1,718 @@
---
name: convert-to-plan
description: Convert planning artifacts (lite-plan, workflow session, markdown) to issue solutions
argument-hint: "[-y|--yes] [--issue <id>] [--supplement] <SOURCE>"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Skip confirmation, auto-create issue and bind solution.
# Issue Convert-to-Plan Command (/issue:convert-to-plan)
## Overview
Converts various planning artifact formats into issue workflow solutions with intelligent detection and automatic binding.
**Supported Sources** (auto-detected):
- **lite-plan**: `.workflow/.lite-plan/{slug}/plan.json`
- **workflow-session**: `WFS-xxx` ID or `.workflow/active/{session}/` folder
- **markdown**: Any `.md` file with implementation/task content
- **json**: Direct JSON files matching plan-json-schema
## Quick Reference
```bash
# Convert lite-plan to new issue (auto-creates issue)
/issue:convert-to-plan ".workflow/.lite-plan/implement-auth-2026-01-25"
# Convert workflow session to existing issue
/issue:convert-to-plan WFS-auth-impl --issue GH-123
# Supplement existing solution with additional tasks
/issue:convert-to-plan "./docs/additional-tasks.md" --issue ISS-001 --supplement
# Auto mode - skip confirmations
/issue:convert-to-plan ".workflow/.lite-plan/my-plan" -y
```
## Command Options
| Option | Description | Default |
|--------|-------------|---------|
| `<SOURCE>` | Planning artifact path or WFS-xxx ID | Required |
| `--issue <id>` | Bind to existing issue instead of creating new | Auto-create |
| `--supplement` | Add tasks to existing solution (requires --issue) | false |
| `-y, --yes` | Skip all confirmations | false |
## Core Data Access Principle
**⚠️ Important**: Use CLI commands for all issue/solution operations.
| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| Get issue | `ccw issue status <id> --json` | Read issues.jsonl directly |
| Create issue | `ccw issue init <id> --title "..."` | Write to issues.jsonl |
| Bind solution | `ccw issue bind <id> <sol-id>` | Edit issues.jsonl |
| List solutions | `ccw issue solutions --issue <id> --brief` | Read solutions/*.jsonl |
## Solution Schema Reference
Target format for all extracted data (from solution-schema.json):
```typescript
interface Solution {
id: string; // SOL-{issue-id}-{4-char-uid}
description?: string; // High-level summary
approach?: string; // Technical strategy
tasks: Task[]; // Required: at least 1 task
exploration_context?: object; // Optional: source context
analysis?: { risk, impact, complexity };
score?: number; // 0.0-1.0
is_bound: boolean;
created_at: string;
bound_at?: string;
}
interface Task {
id: string; // T1, T2, T3... (pattern: ^T[0-9]+$)
title: string; // Required: action verb + target
scope: string; // Required: module path or feature area
action: Action; // Required: Create|Update|Implement|...
description?: string;
modification_points?: Array<{file, target, change}>;
implementation: string[]; // Required: step-by-step guide
test?: { unit?, integration?, commands?, coverage_target? };
acceptance: { criteria: string[], verification: string[] }; // Required
commit?: { type, scope, message_template, breaking? };
depends_on?: string[];
priority?: number; // 1-5 (default: 3)
}
type Action = 'Create' | 'Update' | 'Implement' | 'Refactor' | 'Add' | 'Delete' | 'Configure' | 'Test' | 'Fix';
```
## Implementation
### Phase 1: Parse Arguments & Detect Source Type
```javascript
const input = userInput.trim();
const flags = parseFlags(userInput); // --issue, --supplement, -y/--yes
// Extract source path (first non-flag argument)
const source = extractSourceArg(input);
// Detect source type
function detectSourceType(source) {
  // Check for WFS-xxx pattern (workflow session ID)
  if (source.match(/^WFS-[\w-]+$/)) {
    return { type: 'workflow-session-id', path: `.workflow/active/${source}` };
  }
  // Check if directory
  const isDir = Bash(`test -d "${source}" && echo "dir" || echo "file"`).trim() === 'dir';
  if (isDir) {
    // Check for lite-plan indicator
    const hasPlanJson = Bash(`test -f "${source}/plan.json" && echo "yes" || echo "no"`).trim() === 'yes';
    if (hasPlanJson) {
      return { type: 'lite-plan', path: source };
    }
    // Check for workflow session indicator
    const hasSession = Bash(`test -f "${source}/workflow-session.json" && echo "yes" || echo "no"`).trim() === 'yes';
    if (hasSession) {
      return { type: 'workflow-session', path: source };
    }
  }
  // Check file extensions
  if (source.endsWith('.json')) {
    return { type: 'json-file', path: source };
  }
  if (source.endsWith('.md')) {
    return { type: 'markdown-file', path: source };
  }
  // Check if path exists at all
  const exists = Bash(`test -e "${source}" && echo "yes" || echo "no"`).trim() === 'yes';
  if (!exists) {
    throw new Error(`E001: Source not found: ${source}`);
  }
  return { type: 'unknown', path: source };
}
const sourceInfo = detectSourceType(source);
if (sourceInfo.type === 'unknown') {
  throw new Error(`E002: Unable to detect source format for: ${source}`);
}
console.log(`Detected source type: ${sourceInfo.type}`);
```
### Phase 2: Extract Data Using Format-Specific Extractor
```javascript
let extracted = { title: '', approach: '', tasks: [], metadata: {} };
switch (sourceInfo.type) {
  case 'lite-plan':
    extracted = extractFromLitePlan(sourceInfo.path);
    break;
  case 'workflow-session':
  case 'workflow-session-id':
    extracted = extractFromWorkflowSession(sourceInfo.path);
    break;
  case 'markdown-file':
    extracted = await extractFromMarkdownAI(sourceInfo.path);
    break;
  case 'json-file':
    extracted = extractFromJsonFile(sourceInfo.path);
    break;
}
// Validate extraction
if (!extracted.tasks || extracted.tasks.length === 0) {
  throw new Error('E006: No tasks extracted from source');
}
// Ensure task IDs are normalized to T1, T2, T3...
extracted.tasks = normalizeTaskIds(extracted.tasks);
console.log(`Extracted: ${extracted.tasks.length} tasks`);
```
#### Extractor: Lite-Plan
```javascript
function extractFromLitePlan(folderPath) {
  const planJson = Read(`${folderPath}/plan.json`);
  const plan = JSON.parse(planJson);
  return {
    title: plan.summary?.split('.')[0]?.trim() || 'Untitled Plan',
    description: plan.summary,
    approach: plan.approach,
    tasks: plan.tasks.map(t => ({
      id: t.id,
      title: t.title,
      scope: t.scope || '',
      action: t.action || 'Implement',
      description: t.description || t.title,
      modification_points: t.modification_points || [],
      implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
      test: t.verification ? {
        unit: t.verification.unit_tests,
        integration: t.verification.integration_tests,
        commands: t.verification.manual_checks
      } : {},
      acceptance: {
        criteria: Array.isArray(t.acceptance) ? t.acceptance : [t.acceptance || ''],
        verification: t.verification?.manual_checks || []
      },
      depends_on: t.depends_on || [],
      priority: 3
    })),
    metadata: {
      source_type: 'lite-plan',
      source_path: folderPath,
      complexity: plan.complexity,
      estimated_time: plan.estimated_time,
      exploration_angles: plan._metadata?.exploration_angles || [],
      original_timestamp: plan._metadata?.timestamp
    }
  };
}
```
#### Extractor: Workflow Session
```javascript
function extractFromWorkflowSession(sessionPath) {
  // Load session metadata
  const sessionJson = Read(`${sessionPath}/workflow-session.json`);
  const session = JSON.parse(sessionJson);
  // Load IMPL_PLAN.md for approach (if exists)
  let approach = '';
  const implPlanPath = `${sessionPath}/IMPL_PLAN.md`;
  const hasImplPlan = Bash(`test -f "${implPlanPath}" && echo "yes" || echo "no"`).trim() === 'yes';
  if (hasImplPlan) {
    const implPlan = Read(implPlanPath);
    // Extract overview/approach section
    const overviewMatch = implPlan.match(/##\s*(?:Overview|Approach|Strategy)\s*\n([\s\S]*?)(?=\n##|$)/i);
    approach = overviewMatch?.[1]?.trim() || implPlan.split('\n').slice(0, 10).join('\n');
  }
  // Load all task JSONs from .task folder
  const taskFiles = Glob({ pattern: `${sessionPath}/.task/IMPL-*.json` });
  const tasks = taskFiles.map(f => {
    const taskJson = Read(f);
    const task = JSON.parse(taskJson);
    return {
      id: task.id?.replace(/^IMPL-0*/, 'T') || 'T1', // IMPL-001 → T1
      title: task.title,
      scope: task.scope || inferScopeFromTask(task),
      action: capitalizeAction(task.type) || 'Implement',
      description: task.description,
      modification_points: task.implementation?.modification_points || [],
      implementation: task.implementation?.steps || [],
      test: task.implementation?.test || {},
      acceptance: {
        criteria: task.acceptance_criteria || [],
        verification: task.verification_steps || []
      },
      commit: task.commit,
      depends_on: (task.depends_on || []).map(d => d.replace(/^IMPL-0*/, 'T')),
      priority: task.priority || 3
    };
  });
  return {
    title: session.name || session.description?.split('.')[0] || 'Workflow Session',
    description: session.description || session.name,
    approach: approach || session.description,
    tasks: tasks,
    metadata: {
      source_type: 'workflow-session',
      source_path: sessionPath,
      session_id: session.id,
      created_at: session.created_at
    }
  };
}
function inferScopeFromTask(task) {
  if (task.implementation?.modification_points?.length) {
    const files = task.implementation.modification_points.map(m => m.file);
    // Find common directory prefix
    const dirs = files.map(f => f.split('/').slice(0, -1).join('/'));
    return [...new Set(dirs)][0] || '';
  }
  return '';
}
function capitalizeAction(type) {
  if (!type) return 'Implement';
  const map = { feature: 'Implement', bugfix: 'Fix', refactor: 'Refactor', test: 'Test', docs: 'Update' };
  return map[type.toLowerCase()] || type.charAt(0).toUpperCase() + type.slice(1);
}
```
#### Extractor: Markdown (AI-Assisted via Gemini)
```javascript
async function extractFromMarkdownAI(filePath) {
  const fileContent = Read(filePath);
  // Use Gemini CLI for intelligent extraction
  const cliPrompt = `PURPOSE: Extract implementation plan from markdown document for issue solution conversion. Must output ONLY valid JSON.
TASK: • Analyze document structure • Identify title/summary • Extract approach/strategy section • Parse tasks from any format (lists, tables, sections, code blocks) • Normalize each task to solution schema
MODE: analysis
CONTEXT: Document content provided below
EXPECTED: Valid JSON object with format:
{
  "title": "extracted title",
  "approach": "extracted approach/strategy",
  "tasks": [
    {
      "id": "T1",
      "title": "task title",
      "scope": "module or feature area",
      "action": "Implement|Update|Create|Fix|Refactor|Add|Delete|Configure|Test",
      "description": "what to do",
      "implementation": ["step 1", "step 2"],
      "acceptance": ["criteria 1", "criteria 2"]
    }
  ]
}
CONSTRAINTS: Output ONLY valid JSON - no markdown, no explanation | Action must be one of: Create, Update, Implement, Refactor, Add, Delete, Configure, Test, Fix | Tasks must have id, title, scope, action, implementation (array), acceptance (array)
DOCUMENT CONTENT:
${fileContent}`;
  // Execute Gemini CLI
  const result = Bash(`ccw cli -p '${cliPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis`, { timeout: 120000 });
  // Parse JSON from result (may be wrapped in markdown code block)
  let jsonText = result.trim();
  const jsonMatch = jsonText.match(/```(?:json)?\s*([\s\S]*?)```/);
  if (jsonMatch) {
    jsonText = jsonMatch[1].trim();
  }
  try {
    const extracted = JSON.parse(jsonText);
    // Normalize tasks
    const tasks = (extracted.tasks || []).map((t, i) => ({
      id: t.id || `T${i + 1}`,
      title: t.title || 'Untitled task',
      scope: t.scope || '',
      action: validateAction(t.action) || 'Implement',
      description: t.description || t.title,
      modification_points: t.modification_points || [],
      implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
      test: t.test || {},
      acceptance: {
        criteria: Array.isArray(t.acceptance) ? t.acceptance : [t.acceptance || ''],
        verification: t.verification || []
      },
      depends_on: t.depends_on || [],
      priority: t.priority || 3
    }));
    return {
      title: extracted.title || 'Extracted Plan',
      description: extracted.summary || extracted.title,
      approach: extracted.approach || '',
      tasks: tasks,
      metadata: {
        source_type: 'markdown',
        source_path: filePath,
        extraction_method: 'gemini-ai'
      }
    };
  } catch (e) {
    // Provide more context for debugging
    throw new Error(`E005: Failed to extract tasks from markdown. Gemini response was not valid JSON. Error: ${e.message}. Response preview: ${jsonText.substring(0, 200)}...`);
  }
}
function validateAction(action) {
  const validActions = ['Create', 'Update', 'Implement', 'Refactor', 'Add', 'Delete', 'Configure', 'Test', 'Fix'];
  if (!action) return null;
  const normalized = action.charAt(0).toUpperCase() + action.slice(1).toLowerCase();
  return validActions.includes(normalized) ? normalized : null;
}
```
#### Extractor: JSON File
```javascript
function extractFromJsonFile(filePath) {
  const content = Read(filePath);
  const plan = JSON.parse(content);
  // Detect if it's already solution format or plan format
  if (plan.tasks && Array.isArray(plan.tasks)) {
    // Map tasks to normalized format
    const tasks = plan.tasks.map((t, i) => ({
      id: t.id || `T${i + 1}`,
      title: t.title,
      scope: t.scope || '',
      action: t.action || 'Implement',
      description: t.description || t.title,
      modification_points: t.modification_points || [],
      implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
      test: t.test || t.verification || {},
      acceptance: normalizeAcceptance(t.acceptance),
      depends_on: t.depends_on || [],
      priority: t.priority || 3
    }));
    return {
      title: plan.summary?.split('.')[0] || plan.title || 'JSON Plan',
      description: plan.summary || plan.description,
      approach: plan.approach,
      tasks: tasks,
      metadata: {
        source_type: 'json',
        source_path: filePath,
        complexity: plan.complexity,
        original_metadata: plan._metadata
      }
    };
  }
  throw new Error('E002: JSON file does not contain valid plan structure (missing tasks array)');
}
function normalizeAcceptance(acceptance) {
  if (!acceptance) return { criteria: [], verification: [] };
  if (typeof acceptance === 'object' && acceptance.criteria) return acceptance;
  if (Array.isArray(acceptance)) return { criteria: acceptance, verification: [] };
  return { criteria: [String(acceptance)], verification: [] };
}
```
### Phase 3: Normalize Task IDs
```javascript
function normalizeTaskIds(tasks) {
  // Build a map from original IDs to sequential T1, T2, T3...
  const idMap = new Map(tasks.map((t, i) => [t.id, `T${i + 1}`]));
  return tasks.map((t, i) => ({
    ...t,
    id: `T${i + 1}`,
    // Normalize depends_on via the same mapping so references stay
    // consistent even when source IDs are non-sequential
    depends_on: (t.depends_on || []).map(d => {
      if (idMap.has(d)) return idMap.get(d);
      // Fallback for loose formats: IMPL-001, T1, 1, etc.
      const num = d.match(/\d+/)?.[0];
      return num ? `T${parseInt(num)}` : d;
    })
  }));
}
```
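For instance, non-sequential source IDs renumber while dependency references stay consistent (illustrative):
```javascript
normalizeTaskIds([
  { id: 'IMPL-003', title: 'Add schema', depends_on: [] },
  { id: 'IMPL-005', title: 'Wire endpoint', depends_on: ['IMPL-003'] }
])
// → [ { id: 'T1', ... }, { id: 'T2', ..., depends_on: ['T1'] } ]
```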
### Phase 4: Resolve Issue (Create or Find)
```javascript
let issueId = flags.issue;
let existingSolution = null;
if (issueId) {
  // Validate issue exists
  let issueCheck;
  try {
    issueCheck = Bash(`ccw issue status ${issueId} --json 2>/dev/null`).trim();
    if (!issueCheck || issueCheck === '') {
      throw new Error('empty response');
    }
  } catch (e) {
    throw new Error(`E003: Issue not found: ${issueId}`);
  }
  const issue = JSON.parse(issueCheck);
  // Check if issue already has bound solution
  if (issue.bound_solution_id && !flags.supplement) {
    throw new Error(`E004: Issue ${issueId} already has bound solution (${issue.bound_solution_id}). Use --supplement to add tasks.`);
  }
  // Load existing solution for supplement mode
  if (flags.supplement && issue.bound_solution_id) {
    try {
      const solResult = Bash(`ccw issue solution ${issue.bound_solution_id} --json`).trim();
      existingSolution = JSON.parse(solResult);
      console.log(`Loaded existing solution with ${existingSolution.tasks.length} tasks`);
    } catch (e) {
      throw new Error(`Failed to load existing solution: ${e.message}`);
    }
  }
} else {
  // Create new issue via ccw issue create (auto-generates correct ID)
  // Smart extraction: title from content, priority from complexity
  const title = extracted.title || 'Converted Plan';
  const context = extracted.description || extracted.approach || title;
  // Auto-determine priority based on complexity
  const complexityMap = { high: 2, medium: 3, low: 4 };
  const priority = complexityMap[extracted.metadata.complexity?.toLowerCase()] || 3;
  try {
    // Use heredoc to avoid shell escaping issues
    const createResult = Bash(`ccw issue create << 'EOF'
{
  "title": ${JSON.stringify(title)},
  "context": ${JSON.stringify(context)},
  "priority": ${priority},
  "source": "converted"
}
EOF`).trim();
    // Parse result to get created issue ID
    const created = JSON.parse(createResult);
    issueId = created.id;
    console.log(`Created issue: ${issueId} (priority: ${priority})`);
  } catch (e) {
    throw new Error(`Failed to create issue: ${e.message}`);
  }
}
```
### Phase 5: Generate Solution
```javascript
// Generate solution ID
function generateSolutionId(issueId) {
  const chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
  let uid = '';
  for (let i = 0; i < 4; i++) {
    uid += chars[Math.floor(Math.random() * chars.length)];
  }
  return `SOL-${issueId}-${uid}`;
}
let solution;
const solutionId = generateSolutionId(issueId);
if (flags.supplement && existingSolution) {
  // Supplement mode: merge with existing solution
  const maxTaskId = Math.max(...existingSolution.tasks.map(t => parseInt(t.id.slice(1))));
  const newTasks = extracted.tasks.map((t, i) => ({
    ...t,
    id: `T${maxTaskId + i + 1}`
  }));
  solution = {
    ...existingSolution,
    tasks: [...existingSolution.tasks, ...newTasks],
    approach: existingSolution.approach + '\n\n[Supplementary] ' + (extracted.approach || ''),
    updated_at: new Date().toISOString()
  };
  console.log(`Supplementing: ${existingSolution.tasks.length} existing + ${newTasks.length} new = ${solution.tasks.length} total tasks`);
} else {
  // New solution
  solution = {
    id: solutionId,
    description: extracted.description || extracted.title,
    approach: extracted.approach,
    tasks: extracted.tasks,
    exploration_context: extracted.metadata.exploration_angles ? {
      exploration_angles: extracted.metadata.exploration_angles
    } : undefined,
    analysis: {
      risk: 'medium',
      impact: 'medium',
      complexity: extracted.metadata.complexity?.toLowerCase() || 'medium'
    },
    is_bound: false,
    created_at: new Date().toISOString(),
    _conversion_metadata: {
      source_type: extracted.metadata.source_type,
      source_path: extracted.metadata.source_path,
      converted_at: new Date().toISOString()
    }
  };
}
```
### Phase 6: Confirm & Persist
```javascript
// Display preview
console.log(`
## Conversion Summary
**Issue**: ${issueId}
**Solution**: ${flags.supplement ? existingSolution.id : solutionId}
**Tasks**: ${solution.tasks.length}
**Mode**: ${flags.supplement ? 'Supplement' : 'New'}
### Tasks:
${solution.tasks.map(t => `- ${t.id}: ${t.title} [${t.action}]`).join('\n')}
`);
// Confirm if not auto mode
if (!flags.yes && !flags.y) {
  const confirm = AskUserQuestion({
    questions: [{
      question: `Create solution for issue ${issueId} with ${solution.tasks.length} tasks?`,
      header: 'Confirm',
      multiSelect: false,
      options: [
        { label: 'Yes, create solution', description: 'Create and bind solution' },
        { label: 'Cancel', description: 'Abort without changes' }
      ]
    }]
  });
  if (!confirm.answers?.['Confirm']?.includes('Yes')) {
    console.log('Cancelled.');
    return;
  }
}
// Persist solution (following issue-plan-agent pattern)
Bash(`mkdir -p .workflow/issues/solutions`);
const solutionFile = `.workflow/issues/solutions/${issueId}.jsonl`;
if (flags.supplement) {
  // Supplement mode: update existing solution line atomically
  try {
    const existingContent = Read(solutionFile);
    const lines = existingContent.trim().split('\n').filter(l => l);
    const updatedLines = lines.map(line => {
      const sol = JSON.parse(line);
      if (sol.id === existingSolution.id) {
        return JSON.stringify(solution);
      }
      return line;
    });
    // Atomic write: write entire content at once
    Write({ file_path: solutionFile, content: updatedLines.join('\n') + '\n' });
    console.log(`✓ Updated solution: ${existingSolution.id}`);
  } catch (e) {
    throw new Error(`Failed to update solution: ${e.message}`);
  }
  // Note: No need to rebind - solution is already bound to issue
} else {
  // New solution: append to JSONL file (following issue-plan-agent pattern)
  let previousContent = '';
  try {
    const solutionLine = JSON.stringify(solution);
    // Read existing content, append new line, write atomically
    previousContent = Bash(`test -f "${solutionFile}" && cat "${solutionFile}" || echo ""`).trim();
    const newContent = previousContent ? previousContent + '\n' + solutionLine + '\n' : solutionLine + '\n';
    Write({ file_path: solutionFile, content: newContent });
    console.log(`✓ Created solution: ${solutionId}`);
  } catch (e) {
    throw new Error(`Failed to write solution: ${e.message}`);
  }
  // Bind solution to issue
  try {
    Bash(`ccw issue bind ${issueId} ${solutionId}`);
    console.log(`✓ Bound solution to issue`);
  } catch (e) {
    // Cleanup: roll back to the pre-append content (deleting the whole
    // file would also drop any previously existing solutions)
    try {
      if (previousContent) {
        Write({ file_path: solutionFile, content: previousContent + '\n' });
      } else {
        Bash(`rm -f "${solutionFile}"`);
      }
    } catch (cleanupError) {
      // Ignore cleanup errors
    }
    throw new Error(`Failed to bind solution: ${e.message}`);
  }
  // Update issue status to planned
  try {
    Bash(`ccw issue update ${issueId} --status planned`);
  } catch (e) {
    throw new Error(`Failed to update issue status: ${e.message}`);
  }
}
```
### Phase 7: Summary
```javascript
console.log(`
## Done
**Issue**: ${issueId}
**Solution**: ${flags.supplement ? existingSolution.id : solutionId}
**Tasks**: ${solution.tasks.length}
**Status**: planned
### Next Steps:
- \`/issue:queue\` → Form execution queue
- \`ccw issue status ${issueId}\` → View issue details
- \`ccw issue solution ${flags.supplement ? existingSolution.id : solutionId}\` → View solution
`);
```
## Error Handling
| Error | Code | Resolution |
|-------|------|------------|
| Source not found | E001 | Check path exists |
| Invalid source format | E002 | Verify file contains valid plan structure |
| Issue not found | E003 | Check issue ID or omit --issue to create new |
| Solution already bound | E004 | Use --supplement to add tasks |
| AI extraction failed | E005 | Check markdown structure, try simpler format |
| No tasks extracted | E006 | Source must contain at least 1 task |
## Related Commands
- `/issue:plan` - Generate solutions from issue exploration
- `/issue:queue` - Form execution queue from bound solutions
- `/issue:execute` - Execute queue with DAG parallelism
- `ccw issue status <id>` - View issue details
- `ccw issue solution <id>` - View solution details

View File

@@ -0,0 +1,382 @@
---
name: from-brainstorm
description: Convert brainstorm session ideas into issue with executable solution for parallel-dev-cycle
argument-hint: "SESSION=\"<session-id>\" [--idea=<index>] [--auto] [-y|--yes]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-select highest-scored idea, skip confirmations, create issue directly.
# Issue From-Brainstorm Command (/issue:from-brainstorm)
## Overview
Bridge command that converts **brainstorm-with-file** session output into executable **issue + solution** for parallel-dev-cycle consumption.
**Core workflow**: Load Session → Select Idea → Convert to Issue → Generate Solution → Bind & Ready
**Input sources**:
- **synthesis.json** - Main brainstorm results with top_ideas
- **perspectives.json** - Multi-CLI perspectives (creative/pragmatic/systematic)
- **.brainstorming/** - Synthesis artifacts (clarifications, enhancements from role analyses)
**Output**:
- **Issue** (ISS-YYYYMMDD-NNN) - Full context with clarifications
- **Solution** (SOL-{issue-id}-{uid}) - Structured tasks for parallel-dev-cycle
## Quick Reference
```bash
# Interactive mode - select idea, confirm before creation
/issue:from-brainstorm SESSION="BS-rate-limiting-2025-01-28"
# Pre-select idea by index
/issue:from-brainstorm SESSION="BS-auth-system-2025-01-28" --idea=0
# Auto mode - select highest scored, no confirmations
/issue:from-brainstorm SESSION="BS-caching-2025-01-28" --auto -y
```
## Arguments
| Argument | Required | Type | Default | Description |
|----------|----------|------|---------|-------------|
| SESSION | Yes | String | - | Session ID or path to `.workflow/.brainstorm/BS-xxx` |
| --idea | No | Integer | - | Pre-select idea by index (0-based) |
| --auto | No | Flag | false | Auto-select highest-scored idea |
| -y, --yes | No | Flag | false | Skip all confirmations |
## Data Structures
### Issue Schema (Output)
```typescript
interface Issue {
id: string; // ISS-YYYYMMDD-NNN
title: string; // From idea.title
status: 'planned'; // Auto-set after solution binding
priority: number; // 1-5 (derived from idea.score)
context: string; // Full description with clarifications
source: 'brainstorm';
labels: string[]; // ['brainstorm', perspective, feasibility]
// Structured fields
expected_behavior: string; // From key_strengths
actual_behavior: string; // From main_challenges
affected_components: string[]; // Extracted from description
_brainstorm_metadata: {
session_id: string;
idea_score: number;
novelty: number;
feasibility: string;
clarifications_count: number;
};
}
```
### Solution Schema (Output)
```typescript
interface Solution {
id: string; // SOL-{issue-id}-{4-char-uid}
description: string; // idea.title
approach: string; // idea.description
tasks: Task[]; // Generated from idea.next_steps
analysis: {
risk: 'low' | 'medium' | 'high';
impact: 'low' | 'medium' | 'high';
complexity: 'low' | 'medium' | 'high';
};
is_bound: boolean; // true
created_at: string;
bound_at: string;
}
interface Task {
id: string; // T1, T2, T3...
title: string; // Actionable task name
scope: string; // design|implementation|testing|documentation
action: string; // Implement|Design|Research|Test|Document
description: string;
implementation: string[]; // Step-by-step guide
acceptance: {
criteria: string[]; // What defines success
verification: string[]; // How to verify
};
priority: number; // 1-5
depends_on: string[]; // Task dependencies
}
```
## Execution Flow
```
Phase 1: Session Loading
├─ Validate session path
├─ Load synthesis.json (required)
├─ Load perspectives.json (optional - multi-CLI insights)
├─ Load .brainstorming/** (optional - synthesis artifacts)
└─ Validate top_ideas array exists
Phase 2: Idea Selection
├─ Auto mode: Select highest scored idea
├─ Pre-selected: Use --idea=N index
└─ Interactive: Display table, ask user to select
Phase 3: Enrich Issue Context
├─ Base: idea.description + key_strengths + main_challenges
├─ Add: Relevant clarifications (Requirements/Architecture/Feasibility)
├─ Add: Multi-perspective insights (creative/pragmatic/systematic)
└─ Add: Session metadata (session_id, completion date, clarification count)
Phase 4: Create Issue
├─ Generate issue data with enriched context
├─ Calculate priority from idea.score (0-10 → 1-5)
├─ Create via: ccw issue create (heredoc for JSON)
└─ Returns: ISS-YYYYMMDD-NNN
Phase 5: Generate Solution Tasks
├─ T1: Research & Validate (if main_challenges exist)
├─ T2: Design & Specification (if key_strengths exist)
├─ T3+: Implementation tasks (from idea.next_steps)
└─ Each task includes: implementation steps + acceptance criteria
Phase 6: Bind Solution
├─ Write solution to .workflow/issues/solutions/{issue-id}.jsonl
├─ Bind via: ccw issue bind {issue-id} {solution-id}
├─ Update issue status to 'planned'
└─ Returns: SOL-{issue-id}-{uid}
Phase 7: Next Steps
└─ Offer: Form queue | Convert another idea | View details | Done
```
## Context Enrichment Logic
### Base Context (Always Included)
- **Description**: `idea.description`
- **Why This Idea**: `idea.key_strengths[]`
- **Challenges to Address**: `idea.main_challenges[]`
- **Implementation Steps**: `idea.next_steps[]`
### Enhanced Context (If Available)
**From Synthesis Artifacts** (`.brainstorming/*/analysis*.md`):
- Extract clarifications matching categories: Requirements, Architecture, Feasibility
- Format: `**{Category}** ({role}): {question} → {answer}`
- Limit: Top 3 most relevant
**From Perspectives** (`perspectives.json`):
- **Creative**: First insight from `perspectives.creative.insights[0]`
- **Pragmatic**: First blocker from `perspectives.pragmatic.blockers[0]`
- **Systematic**: First pattern from `perspectives.systematic.patterns[0]`
**Session Metadata**:
- Session ID, Topic, Completion Date
- Clarifications count (if synthesis artifacts loaded)
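A condensed sketch of how these pieces might assemble into the issue context. Field names follow the schemas above; the helper name and `session.completed_at` are assumptions:
```javascript
// Hypothetical sketch - builds the enriched context string for the issue
function buildIssueContext(idea, clarifications, perspectives, session) {
  const parts = [
    idea.description,
    '## Why This Idea',
    ...idea.key_strengths.map(s => `- ${s}`),
    '## Challenges to Address',
    ...idea.main_challenges.map(c => `- ${c}`)
  ]
  // Top 3 relevant clarifications, formatted per the convention above
  for (const c of clarifications.slice(0, 3)) {
    parts.push(`**${c.category}** (${c.role}): ${c.question} → ${c.answer}`)
  }
  // One insight per perspective, when available
  if (perspectives?.creative?.insights?.[0]) parts.push(`Creative: ${perspectives.creative.insights[0]}`)
  if (perspectives?.pragmatic?.blockers?.[0]) parts.push(`Pragmatic blocker: ${perspectives.pragmatic.blockers[0]}`)
  if (perspectives?.systematic?.patterns?.[0]) parts.push(`Systematic pattern: ${perspectives.systematic.patterns[0]}`)
  parts.push(`Session: ${session.session_id} (${session.completed_at})`)
  return parts.join('\n')
}
```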
## Task Generation Strategy
### Task 1: Research & Validation
**Trigger**: `idea.main_challenges.length > 0`
- **Title**: "Research & Validate Approach"
- **Scope**: design
- **Action**: Research
- **Implementation**: Investigate blockers, review similar implementations, validate with team
- **Acceptance**: Blockers documented, feasibility assessed, approach validated
### Task 2: Design & Specification
**Trigger**: `idea.key_strengths.length > 0`
- **Title**: "Design & Create Specification"
- **Scope**: design
- **Action**: Design
- **Implementation**: Create design doc, define success criteria, plan phases
- **Acceptance**: Design complete, metrics defined, plan outlined
### Task 3+: Implementation Tasks
**Trigger**: `idea.next_steps[]`
- **Title**: From `next_steps[i]` (max 60 chars)
- **Scope**: Inferred from keywords (test→testing, api→backend, ui→frontend)
- **Action**: Detected from verbs (implement, create, update, fix, test, document); see the inference sketch after this section
- **Implementation**: Execute step + follow design + write tests
- **Acceptance**: Step implemented + tests passing + code reviewed
### Fallback Task
**Trigger**: No tasks generated from above
- **Title**: `idea.title`
- **Scope**: implementation
- **Action**: Implement
- **Generic implementation + acceptance criteria**
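Both inferences can be sketched in a few lines of JavaScript. The keyword and verb lists come from the rules above; note the keyword table maps api/ui to backend/frontend while the `Task` interface lists a narrower scope enum, so treat the labels as illustrative:
```javascript
// Hedged sketch of scope/action inference for Task 3+ steps.
function inferScope(step) {
  const s = step.toLowerCase();
  if (s.includes('test')) return 'testing';
  if (s.includes('api')) return 'backend';   // per the keyword table above
  if (s.includes('ui')) return 'frontend';
  return 'implementation';
}

function inferAction(step) {
  const verbs = {
    implement: 'Implement', create: 'Implement', update: 'Implement',
    fix: 'Implement', design: 'Design', research: 'Research',
    test: 'Test', document: 'Document',
  };
  const first = step.trim().split(/\s+/)[0].toLowerCase();
  return verbs[first] || 'Implement'; // default action
}
```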
## Priority Calculation
### Issue Priority (1-5)
```
idea.score: 0-10
priority = max(1, min(5, ceil((11 - score) / 2)))
Examples:
score 9-10 → priority 1 (critical)
score 7-8 → priority 2 (high)
score 5-6 → priority 3 (medium)
score 3-4 → priority 4 (low)
score 0-2 → priority 5 (lowest)
```
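The same mapping as runnable JavaScript, for quick verification against the examples above:
```javascript
// Map a 0-10 idea score to a 1-5 issue priority (lower number = more urgent).
function issuePriority(score) {
  return Math.max(1, Math.min(5, Math.ceil((11 - score) / 2)));
}
// issuePriority(9.5) → 1, issuePriority(7) → 2, issuePriority(5) → 3, issuePriority(0) → 5
```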
### Task Priority (1-5)
- Research task: 1 (highest)
- Design task: 2
- Implementation tasks: 3 by default, decrement for later tasks
- Testing/documentation: 4-5
### Complexity Analysis
```
risk: main_challenges.length > 2 ? 'high' : 'medium'
impact: score >= 8 ? 'high' : score >= 6 ? 'medium' : 'low'
complexity: (main_challenges.length > 3 OR tasks.length > 5) ? 'high'
            : (tasks.length > 3) ? 'medium' : 'low'
```
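A minimal sketch of the same derivation, assuming the `idea` and `tasks` shapes defined earlier:
```javascript
// Hedged sketch: derive the solution analysis block from idea + generated tasks.
function analyzeSolution(idea, tasks) {
  const challenges = idea.main_challenges.length;
  return {
    risk: challenges > 2 ? 'high' : 'medium',
    impact: idea.score >= 8 ? 'high' : idea.score >= 6 ? 'medium' : 'low',
    complexity: (challenges > 3 || tasks.length > 5) ? 'high'
              : tasks.length > 3 ? 'medium' : 'low',
  };
}
```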
## CLI Integration
### Issue Creation
```bash
# Uses heredoc to avoid shell escaping
ccw issue create << 'EOF'
{
"title": "...",
"context": "...",
"priority": 3,
"source": "brainstorm",
"labels": ["brainstorm", "creative", "feasibility-high"],
...
}
EOF
```
### Solution Binding
```bash
# Append solution to JSONL file
echo '{"id":"SOL-xxx","tasks":[...]}' >> .workflow/issues/solutions/{issue-id}.jsonl
# Bind to issue
ccw issue bind {issue-id} {solution-id}
# Update status
ccw issue update {issue-id} --status planned
```
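For illustration, a hedged sketch of deriving the `SOL-{issue-id}-{4-char-uid}` identifier (the uid generator is an assumption; any unique 4-character suffix would do):
```javascript
// Build a solution ID in the SOL-{issue-id}-{4-char-uid} format.
function solutionId(issueId) {
  const alphabet = '0123456789abcdefghijklmnopqrstuvwxyz';
  const uid = Array.from({ length: 4 }, () =>
    alphabet[Math.floor(Math.random() * alphabet.length)]).join('');
  return `SOL-${issueId}-${uid}`;
}
// solutionId('ISS-20250128-001') → e.g. 'SOL-ISS-20250128-001-ab3d'
```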
## Error Handling
| Error | Message | Resolution |
|-------|---------|------------|
| Session not found | synthesis.json missing | Check session ID, list available sessions |
| No ideas | top_ideas array empty | Complete brainstorm workflow first |
| Invalid idea index | Index out of range | Check valid range 0 to N-1 |
| Issue creation failed | ccw issue create error | Verify CLI endpoint working |
| Solution binding failed | Bind error | Check issue exists, retry |
## Examples
### Interactive Mode
```bash
/issue:from-brainstorm SESSION="BS-rate-limiting-2025-01-28"
# Output:
# | # | Title | Score | Feasibility |
# |---|-------|-------|-------------|
# | 0 | Token Bucket Algorithm | 8.5 | High |
# | 1 | Sliding Window Counter | 7.2 | Medium |
# | 2 | Fixed Window | 6.1 | High |
# User selects: #0
# Result:
# ✓ Created issue: ISS-20250128-001
# ✓ Created solution: SOL-ISS-20250128-001-ab3d
# ✓ Bound solution to issue
# → Next: /issue:queue
```
### Auto Mode
```bash
/issue:from-brainstorm SESSION="BS-caching-2025-01-28" --auto
# Result:
# Auto-selected: Redis Cache Layer (Score: 9.2/10)
# ✓ Created issue: ISS-20250128-002
# ✓ Solution with 4 tasks
# → Status: planned
```
## Integration Flow
```
brainstorm-with-file
├─ synthesis.json
├─ perspectives.json
└─ .brainstorming/** (optional)
        ↓
/issue:from-brainstorm ◄─── This command
├─ ISS-YYYYMMDD-NNN (enriched issue)
└─ SOL-{issue-id}-{uid} (structured solution)
        ↓
/issue:queue
        ↓
/parallel-dev-cycle
        ↓
RA → EP → CD → VAS
```
## Session Files Reference
### Input Files
```
.workflow/.brainstorm/BS-{slug}-{date}/
├── synthesis.json # REQUIRED - Top ideas with scores
├── perspectives.json # OPTIONAL - Multi-CLI insights
├── brainstorm.md # Reference only
└── .brainstorming/ # OPTIONAL - Synthesis artifacts
├── system-architect/
│ └── analysis.md # Contains clarifications + enhancements
├── api-designer/
│ └── analysis.md
└── ...
```
### Output Files
```
.workflow/issues/
├── solutions/
│ └── ISS-YYYYMMDD-001.jsonl # Created solution (JSONL)
└── (managed by ccw issue CLI)
```
## Related Commands
- `/workflow:brainstorm-with-file` - Generate brainstorm sessions
- `/workflow:brainstorm:synthesis` - Add clarifications to brainstorm
- `/issue:new` - Create issues from GitHub or text
- `/issue:plan` - Generate solutions via exploration
- `/issue:queue` - Form execution queue
- `/issue:execute` - Execute with parallel-dev-cycle
- `ccw issue status <id>` - View issue
- `ccw issue solution <id>` - View solution


@@ -1,687 +0,0 @@
---
name: code-map-memory
description: "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)"
argument-hint: "\"feature-keyword\" [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Code Flow Mapping Generator
## Overview
**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates code flow analysis to specialized cli-explore-agent. Orchestrator transforms agent's JSON analysis into Mermaid documentation.
**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase.
**Execution Paths**:
- **Full Path**: All 3 phases (no existing codemap OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 3 (existing codemap found AND no `--regenerate` flag)
- **Phase 3 Always Executes**: SKILL index is always generated or updated
**Agent Responsibility** (cli-explore-agent):
- Deep code flow analysis using dual-source strategy (Bash + Gemini CLI)
- Returns structured JSON with architecture, functions, data flow, conditionals, patterns
- NO file writing - analysis only
**Orchestrator Responsibility**:
- Provides feature keyword and analysis scope to agent
- Transforms agent's JSON into Mermaid-enriched markdown documentation
- Writes all files (5 docs + metadata.json + SKILL.md)
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **Feature-Specific SKILL**: Each feature creates independent `.claude/skills/codemap-{feature}/` package
3. **Specialized Agent**: Phase 2a uses cli-explore-agent for professional code analysis (Deep Scan mode)
4. **Orchestrator Documentation**: Phase 2b transforms agent JSON into Mermaid markdown files
5. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
6. **No User Prompts**: Never ask user questions or wait for input between phases
7. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
8. **Multi-Level Detail**: Generate 4 levels: architecture → function → data → conditional
---
## 3-Phase Execution
### Phase 1: Parse Feature Keyword & Check Existing
**Goal**: Normalize feature keyword, check existing codemap, prepare for analysis
**Step 1: Parse Feature Keyword**
```bash
# Get feature keyword from argument
FEATURE_KEYWORD="$1"
# Normalize: lowercase, spaces to hyphens
normalized_feature=$(echo "$FEATURE_KEYWORD" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | tr '_' '-')
# Example: "User Authentication" → "user-authentication"
# Example: "支付处理" → "支付处理" (keep non-ASCII)
```
**Step 2: Set Tool Preference**
```bash
# Default to gemini unless --tool specified
TOOL="${tool_flag:-gemini}"
```
**Step 3: Check Existing Codemap**
```bash
# Define codemap directory
CODEMAP_DIR=".claude/skills/codemap-${normalized_feature}"
# Check if codemap exists
bash(test -d "$CODEMAP_DIR" && echo "exists" || echo "not_exists")
# Count existing files
bash(find "$CODEMAP_DIR" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Step 4: Skip Decision**
```javascript
if (existing_files > 0 && !regenerate_flag) {
SKIP_GENERATION = true
message = "Codemap already exists, skipping Phase 2. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
bash(rm -rf "$CODEMAP_DIR")
SKIP_GENERATION = false
message = "Regenerating codemap from scratch."
} else {
SKIP_GENERATION = false
message = "No existing codemap found, generating new code flow analysis."
}
```
**Output Variables**:
- `FEATURE_KEYWORD`: Original feature keyword
- `normalized_feature`: Normalized feature name for directory
- `CODEMAP_DIR`: `.claude/skills/codemap-{feature}`
- `TOOL`: CLI tool to use (gemini or qwen)
- `SKIP_GENERATION`: Boolean - whether to skip Phase 2
**TodoWrite**:
- If skipping: Mark phase 1 completed, phase 2 completed, phase 3 in_progress
- If not skipping: Mark phase 1 completed, phase 2 in_progress
---
### Phase 2: Code Flow Analysis & Documentation Generation
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Use cli-explore-agent for professional code analysis, then orchestrator generates Mermaid documentation
**Architecture**: Phase 2a (Agent Analysis) → Phase 2b (Orchestrator Documentation)
---
#### Phase 2a: cli-explore-agent Analysis
**Purpose**: Leverage specialized cli-explore-agent for deep code flow analysis
**Agent Task Specification**:
```
Task(
subagent_type: "cli-explore-agent",
description: "Analyze code flow: {FEATURE_KEYWORD}",
prompt: "
Perform Deep Scan analysis for feature: {FEATURE_KEYWORD}
**Analysis Mode**: deep-scan (Dual-source: Bash structural scan + Gemini semantic analysis)
**Analysis Objectives**:
1. **Module Architecture**: Identify high-level module organization, interactions, and entry points
2. **Function Call Chains**: Trace execution paths, call sequences, and parameter flows
3. **Data Transformations**: Map data structure changes and transformation stages
4. **Conditional Paths**: Document decision trees, branches, and error handling strategies
5. **Design Patterns**: Discover architectural patterns and extract design intent
**Scope**:
- Feature: {FEATURE_KEYWORD}
- CLI Tool: {TOOL} (gemini-2.5-pro or qwen coder-model)
- File Discovery: MCP Code Index (preferred) + rg fallback
- Target: 5-15 most relevant files
**MANDATORY FIRST STEP**:
Read: ~/.claude/workflows/cli-templates/schemas/codemap-json-schema.json
**Output**: Return JSON following schema exactly. NO FILE WRITING - return JSON analysis only.
**Critical Requirements**:
- Use Deep Scan mode: Bash (Phase 1 - precise locations) + Gemini CLI (Phase 2 - semantic understanding) + Synthesis (Phase 3 - merge with attribution)
- Focus exclusively on {FEATURE_KEYWORD} feature flow
- Include file:line references for ALL findings
- Extract design intent from code structure and comments
- NO FILE WRITING - return JSON analysis only
- Handle tool failures gracefully (Gemini → Qwen fallback, MCP → rg fallback)
"
)
```
**Agent Output**: JSON analysis result with architecture, functions, data flow, conditionals, and patterns
---
#### Phase 2b: Orchestrator Documentation Generation
**Purpose**: Transform cli-explore-agent JSON into Mermaid-enriched documentation
**Input**: Agent's JSON analysis result
**Process**:
1. **Parse Agent Analysis**:
```javascript
const analysis = JSON.parse(agentResult)
const { feature, files_analyzed, architecture, function_calls, data_flow, conditional_logic, design_patterns } = analysis
```
2. **Generate Mermaid Diagrams from Structured Data**:
**a) architecture-flow.md** (~3K tokens):
```javascript
// Convert architecture.modules + architecture.interactions → Mermaid graph TD
const architectureMermaid = `
graph TD
${architecture.modules.map(m => ` ${m.name}[${m.name}]`).join('\n')}
${architecture.interactions.map(i => ` ${i.from} -->|${i.type}| ${i.to}`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/architecture-flow.md`,
content: `---
feature: ${feature}
level: architecture
detail: high-level module interactions
---
# Architecture Flow: ${feature}
## Overview
${architecture.overview}
## Module Architecture
${architecture.modules.map(m => `### ${m.name}\n- **File**: ${m.file}\n- **Role**: ${m.responsibility}\n- **Dependencies**: ${m.dependencies.join(', ')}`).join('\n\n')}
## Flow Diagram
\`\`\`mermaid
${architectureMermaid}
\`\`\`
## Key Interactions
${architecture.interactions.map(i => `- **${i.from} → ${i.to}**: ${i.description}`).join('\n')}
## Entry Points
${architecture.entry_points.map(e => `- **${e.function}** (${e.file}): ${e.description}`).join('\n')}
`
})
```
**b) function-calls.md** (~5K tokens):
```javascript
// Convert function_calls.sequences → Mermaid sequenceDiagram
const sequenceMermaid = `
sequenceDiagram
${function_calls.sequences.map(s => ` ${s.from}->>${s.to}: ${s.method}`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/function-calls.md`,
content: `---
feature: ${feature}
level: function
detail: function-level call sequences
---
# Function Call Chains: ${feature}
## Call Sequence Diagram
\`\`\`mermaid
${sequenceMermaid}
\`\`\`
## Detailed Call Chains
${function_calls.call_chains.map(chain => `
### Chain ${chain.chain_id}: ${chain.description}
${chain.sequence.map(fn => `- **${fn.function}** (${fn.file})\n - Calls: ${fn.calls.join(', ')}`).join('\n')}
`).join('\n')}
## Parameters & Returns
${function_calls.sequences.map(s => `- **${s.method}** → Returns: ${s.returns || 'void'}`).join('\n')}
`
})
```
**c) data-flow.md** (~4K tokens):
```javascript
// Convert data_flow.transformations → Mermaid flowchart LR
const dataFlowMermaid = `
flowchart LR
${data_flow.transformations.map((t, i) => ` Stage${i}[${t.from}] -->|${t.transformer}| Stage${i+1}[${t.to}]`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/data-flow.md`,
content: `---
feature: ${feature}
level: data
detail: data structure transformations
---
# Data Flow: ${feature}
## Data Transformation Diagram
\`\`\`mermaid
${dataFlowMermaid}
\`\`\`
## Data Structures
${data_flow.structures.map(s => `### ${s.name} (${s.stage})\n\`\`\`json\n${JSON.stringify(s.shape, null, 2)}\n\`\`\``).join('\n\n')}
## Transformations
${data_flow.transformations.map(t => `- **${t.from} → ${t.to}** via \`${t.transformer}\` (${t.file})`).join('\n')}
`
})
```
**d) conditional-paths.md** (~4K tokens):
```javascript
// Convert conditional_logic.branches → Mermaid flowchart TD
const conditionalMermaid = `
flowchart TD
Start[Entry Point]
${conditional_logic.branches.map((b, i) => `
Start --> Check${i}{${b.condition}}
Check${i} -->|Yes| Path${i}A[${b.true_path}]
Check${i} -->|No| Path${i}B[${b.false_path}]
`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/conditional-paths.md`,
content: `---
feature: ${feature}
level: conditional
detail: decision trees and error paths
---
# Conditional Paths: ${feature}
## Decision Tree
\`\`\`mermaid
${conditionalMermaid}
\`\`\`
## Branch Conditions
${conditional_logic.branches.map(b => `- **${b.condition}** (${b.file})\n - True: ${b.true_path}\n - False: ${b.false_path}`).join('\n')}
## Error Handling
${conditional_logic.error_handling.map(e => `- **${e.error_type}**: Handler \`${e.handler}\` (${e.file}) - Recovery: ${e.recovery}`).join('\n')}
`
})
```
**e) complete-flow.md** (~8K tokens):
```javascript
// Integrate all Mermaid diagrams
Write({
file_path: `${CODEMAP_DIR}/complete-flow.md`,
content: `---
feature: ${feature}
level: complete
detail: integrated multi-level view
---
# Complete Flow: ${feature}
## Integrated Flow Diagram
\`\`\`mermaid
graph TB
subgraph Architecture
${architecture.modules.map(m => ` ${m.name}[${m.name}]`).join('\n')}
end
subgraph "Function Calls"
${function_calls.call_chains[0]?.sequence.map(fn => ` ${fn.function}`).join('\n') || ''}
end
subgraph "Data Flow"
${data_flow.structures.map(s => ` ${s.name}[${s.name}]`).join('\n')}
end
\`\`\`
## Complete Trace
[Comprehensive end-to-end documentation combining all analysis layers]
## Design Patterns Identified
${design_patterns.map(p => `- **${p.pattern}** in ${p.location}: ${p.description}`).join('\n')}
## Recommendations
${analysis.recommendations.map(r => `- ${r}`).join('\n')}
## Cross-References
- [Architecture Flow](./architecture-flow.md) - High-level module structure
- [Function Calls](./function-calls.md) - Detailed call chains
- [Data Flow](./data-flow.md) - Data transformation stages
- [Conditional Paths](./conditional-paths.md) - Decision trees and error handling
`
})
```
3. **Write metadata.json**:
```javascript
Write({
file_path: `${CODEMAP_DIR}/metadata.json`,
content: JSON.stringify({
feature: feature,
normalized_name: normalized_feature,
generated_at: new Date().toISOString(),
tool_used: analysis.analysis_metadata.tool_used,
files_analyzed: files_analyzed.map(f => f.file),
analysis_summary: {
total_files: files_analyzed.length,
modules_traced: architecture.modules.length,
functions_traced: function_calls.call_chains.reduce((sum, c) => sum + c.sequence.length, 0),
patterns_discovered: design_patterns.length
}
}, null, 2)
})
```
4. **Report Phase 2 Completion**:
```
Phase 2 Complete: Code flow analysis and documentation generated
- Agent Analysis: cli-explore-agent with {TOOL}
- Files Analyzed: {count}
- Documentation Generated: 5 markdown files + metadata.json
- Location: {CODEMAP_DIR}
```
**Completion Criteria**:
- cli-explore-agent task completed successfully with JSON result
- 5 documentation files written with valid Mermaid diagrams
- metadata.json written with analysis summary
- All files properly formatted and cross-referenced
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
---
### Phase 3: Generate SKILL.md Index
**Note**: This phase **ALWAYS executes** - generates or updates the SKILL index.
**Goal**: Read generated flow documentation and create SKILL.md index with progressive loading
**Steps**:
1. **Verify Generated Files**:
```bash
bash(find "{CODEMAP_DIR}" -name "*.md" -type f | sort)
```
2. **Read metadata.json**:
```javascript
Read({CODEMAP_DIR}/metadata.json)
// Extract: feature, normalized_name, files_analyzed, analysis_summary
```
3. **Read File Headers** (optional, first 30 lines):
```javascript
Read({CODEMAP_DIR}/architecture-flow.md, limit: 30)
Read({CODEMAP_DIR}/function-calls.md, limit: 30)
// Extract overview and diagram counts
```
4. **Generate SKILL.md Index**:
Template structure:
```yaml
---
name: codemap-{normalized_feature}
description: Code flow mapping for {FEATURE_KEYWORD} feature (located at {project_path}). Load this SKILL when analyzing, tracing, or understanding {FEATURE_KEYWORD} execution flow, especially when no relevant context exists in memory.
version: 1.0.0
generated_at: {ISO_TIMESTAMP}
---
# Code Flow Map: {FEATURE_KEYWORD}
## Feature: `{FEATURE_KEYWORD}`
**Analysis Date**: {DATE}
**Tool Used**: {TOOL}
**Files Analyzed**: {COUNT}
## Progressive Loading
### Level 0: Quick Overview (~2K tokens)
- [Architecture Flow](./architecture-flow.md) - High-level module interactions
### Level 1: Core Flows (~10K tokens)
- [Architecture Flow](./architecture-flow.md) - Module architecture
- [Function Calls](./function-calls.md) - Function call chains
### Level 2: Complete Analysis (~20K tokens)
- [Architecture Flow](./architecture-flow.md)
- [Function Calls](./function-calls.md)
- [Data Flow](./data-flow.md) - Data transformations
### Level 3: Deep Dive (~30K tokens)
- [Architecture Flow](./architecture-flow.md)
- [Function Calls](./function-calls.md)
- [Data Flow](./data-flow.md)
- [Conditional Paths](./conditional-paths.md) - Branches and error handling
- [Complete Flow](./complete-flow.md) - Integrated comprehensive view
## Usage
Load this SKILL package when:
- Analyzing {FEATURE_KEYWORD} implementation
- Tracing execution flow for debugging
- Understanding code dependencies
- Planning refactoring or enhancements
## Analysis Summary
- **Modules Traced**: {modules_traced}
- **Functions Traced**: {functions_traced}
- **Files Analyzed**: {total_files}
## Mermaid Diagrams Included
- Architecture flow diagram (graph TD)
- Function call sequence diagram (sequenceDiagram)
- Data transformation flowchart (flowchart LR)
- Conditional decision tree (flowchart TD)
- Complete integrated diagram (graph TB)
```
5. **Write SKILL.md**:
```javascript
Write({
file_path: `{CODEMAP_DIR}/SKILL.md`,
content: generatedIndexMarkdown
})
```
**Completion Criteria**:
- SKILL.md index written
- All documentation files verified
- Progressive loading levels (0-3) properly structured
- Mermaid diagram references included
**TodoWrite**: Mark phase 3 completed
**Final Report**:
```
Code Flow Mapping Complete
Feature: {FEATURE_KEYWORD}
Location: .claude/skills/codemap-{normalized_feature}/
Files Generated:
- SKILL.md (index)
- architecture-flow.md (with Mermaid diagram)
- function-calls.md (with Mermaid sequence diagram)
- data-flow.md (with Mermaid flowchart)
- conditional-paths.md (with Mermaid decision tree)
- complete-flow.md (with integrated Mermaid diagram)
- metadata.json
Analysis:
- Files analyzed: {count}
- Modules traced: {count}
- Functions traced: {count}
Usage: Skill(command: "codemap-{normalized_feature}")
```
---
## Implementation Details
### TodoWrite Patterns
**Initialization** (Before Phase 1):
```javascript
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "in_progress", "activeForm": "Parsing feature keyword"},
{"content": "Agent analyzes code flow and generates files", "status": "pending", "activeForm": "Analyzing code flow"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
]})
```
**Full Path** (SKIP_GENERATION = false):
```javascript
// After Phase 1
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "in_progress", ...},
{"content": "Generate SKILL.md index", "status": "pending", ...}
]})
// After Phase 2
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
// After Phase 3
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "completed", ...}
]})
```
**Skip Path** (SKIP_GENERATION = true):
```javascript
// After Phase 1 (skip Phase 2)
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...}, // Skipped
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
```
### Execution Flow
**Full Path**:
```
User → TodoWrite Init → Phase 1 (parse) → Phase 2 (agent analyzes) → Phase 3 (write index) → Report
```
**Skip Path**:
```
User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
```
### Error Handling
**Phase 1 Errors**:
- Empty feature keyword: Report error, ask user to provide feature description
- Invalid characters: Normalize and continue
**Phase 2 Errors (Agent)**:
- Agent task fails: Retry once, report if fails again
- No files discovered: Warn user, ask for more specific feature keyword
- CLI failures: Agent handles internally with retries
- Invalid Mermaid syntax: Agent validates before writing
**Phase 3 Errors**:
- Write failures: Report which files failed
- Missing files: Note in SKILL.md, suggest regeneration
---
## Parameters
```bash
/memory:code-map-memory "feature-keyword" [--regenerate] [--tool <gemini|qwen>]
```
**Arguments**:
- **"feature-keyword"**: Feature or flow to analyze (required)
- Examples: `"user authentication"`, `"payment processing"`, `"数据导入流程"`
- Can be English, Chinese, or mixed
- Spaces and underscores normalized to hyphens
- **--regenerate**: Force regenerate existing codemap (deletes and recreates)
- **--tool**: CLI tool for analysis (default: gemini)
- `gemini`: Comprehensive flow analysis with gemini-2.5-pro
- `qwen`: Alternative with coder-model
---
## Examples
**Generated File Structure** (for all examples):
```
.claude/skills/codemap-{feature}/
├── SKILL.md # Index (Phase 3)
├── architecture-flow.md # Agent (Phase 2) - High-level flow
├── function-calls.md # Agent (Phase 2) - Function chains
├── data-flow.md # Agent (Phase 2) - Data transformations
├── conditional-paths.md # Agent (Phase 2) - Branches & errors
├── complete-flow.md # Agent (Phase 2) - Integrated view
└── metadata.json # Agent (Phase 2)
```
### Example 1: User Authentication Flow
```bash
/memory:code-map-memory "user authentication"
```
**Workflow**:
1. Phase 1: Normalizes to "user-authentication", checks existing codemap
2. Phase 2: Agent discovers auth-related files, executes CLI analysis, generates 5 flow docs with Mermaid
3. Phase 3: Generates SKILL.md index with progressive loading
**Output**: `.claude/skills/codemap-user-authentication/` with 6 files + metadata
### Example 2: Regenerate with Qwen
```bash
/memory:code-map-memory "payment processing" --regenerate --tool qwen
```
**Workflow**:
1. Phase 1: Deletes existing codemap due to --regenerate
2. Phase 2: Agent uses qwen with coder-model for fresh analysis
3. Phase 3: Generates updated SKILL.md
---
## Architecture
```
code-map-memory (orchestrator)
├─ Phase 1: Parse & Check (bash commands, skip decision)
├─ Phase 2: Code Analysis & Documentation (skippable)
│ ├─ Phase 2a: cli-explore-agent Analysis
│ │ └─ Deep Scan: Bash structural + Gemini semantic → JSON
│ └─ Phase 2b: Orchestrator Documentation
│ └─ Transform JSON → 5 Mermaid markdown files + metadata.json
└─ Phase 3: Write SKILL.md (index generation, always runs)
Output: .claude/skills/codemap-{feature}/
```


@@ -1,615 +0,0 @@
---
name: docs
description: Plan documentation workflow with dynamic grouping (≤10 docs/task); generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs
argument-hint: "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]"
---
# Documentation Workflow (/memory:docs)
## Overview
Lightweight planner that analyzes project structure, decomposes documentation work into tasks, and generates execution plans. Does NOT generate documentation content itself - delegates to doc-generator agent.
**Execution Strategy**:
- **Dynamic Task Grouping**: Level 1 tasks grouped by top-level directories with document count limit
- **Primary constraint**: Each task generates ≤10 documents (API.md + README.md count)
- **Optimization goal**: Prefer grouping 2 top-level directories per task for context sharing
- **Conflict resolution**: If 2 dirs exceed 10 docs, reduce to 1 dir/task; if 1 dir exceeds 10 docs, split by subdirectories
- **Context benefit**: Same-task directories analyzed together via single Gemini call
- **Parallel Execution**: Multiple Level 1 tasks execute concurrently for faster completion
- **Pre-computed Analysis**: Phase 2 performs unified analysis once, stored in `.process/` for reuse
- **Efficient Data Loading**: All existing docs loaded once in Phase 2, shared across tasks
**Path Mirroring**: Documentation structure mirrors source code under `.workflow/docs/{project_name}/`
- Example: `my_app/src/core/` → `.workflow/docs/my_app/src/core/API.md`
**Two Execution Modes**:
- **Default (Agent Mode)**: CLI analyzes in `pre_analysis` (MODE=analysis), agent writes docs
- **--cli-execute (CLI Mode)**: CLI generates docs in `implementation_approach` (MODE=write), agent executes CLI commands
## Path Mirroring Strategy
**Principle**: Documentation structure **mirrors** source code structure under project-specific directory.
| Source Path | Project Name | Documentation Path |
|------------|--------------|-------------------|
| `my_app/src/core/` | `my_app` | `.workflow/docs/my_app/src/core/API.md` |
| `my_app/src/modules/auth/` | `my_app` | `.workflow/docs/my_app/src/modules/auth/API.md` |
| `another_project/lib/utils/` | `another_project` | `.workflow/docs/another_project/lib/utils/API.md` |
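A minimal sketch of the mirroring rule (the helper name is illustrative; the first segment of the source path is assumed to be the project name, as in the table):
```javascript
// Hedged sketch: map a source directory to its mirrored documentation path.
function docPath(sourceDir, docName = 'API.md') {
  const normalized = sourceDir.endsWith('/') ? sourceDir : `${sourceDir}/`;
  return `.workflow/docs/${normalized}${docName}`;
}
// docPath('my_app/src/core') → '.workflow/docs/my_app/src/core/API.md'
```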
## Parameters
```bash
/memory:docs [path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]
```
- **path**: Source directory to analyze (default: current directory)
- Specifies the source code directory to be documented
- Documentation is generated in a separate `.workflow/docs/{project_name}/` directory at the workspace root, **not** within the source `path` itself
- The source path's structure is mirrored within the project-specific documentation folder
- Example: analyzing `src/modules` produces documentation at `.workflow/docs/{project_name}/src/modules/`
- **--mode**: Documentation generation mode (default: full)
- `full`: Complete documentation (modules + README + ARCHITECTURE + EXAMPLES + HTTP API)
- `partial`: Module documentation only (API.md + README.md)
- **--tool**: CLI tool selection (default: gemini)
- `gemini`: Comprehensive documentation, pattern recognition
- `qwen`: Architecture analysis, system design focus
- `codex`: Implementation validation, code quality
- **--cli-execute**: Enable CLI-based documentation generation (optional)
## Planning Workflow
### Phase 1: Initialize Session
```bash
# Get target path, project name, and root
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
```
```javascript
// Create docs session (type: docs)
SlashCommand(command="/workflow:session:start --type docs --new \"{project_name}-docs-{timestamp}\"")
// Parse output to get sessionId
```
```bash
# Update workflow-session.json with docs-specific fields
bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' .workflow/active/{sessionId}/workflow-session.json > tmp.json && mv tmp.json .workflow/active/{sessionId}/workflow-session.json)
```
### Phase 2: Analyze Structure
**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack.
**Commands** (collect data with simple bash):
```bash
# 1. Run folder analysis
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}')
# 2. Get top-level directories (first 2 path levels)
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}' | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
# 3. Find existing docs (if directory exists)
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null; fi)
# 4. Read existing docs content (if files exist)
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
```
**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:
```json
{
"metadata": {
"generated_at": "2025-11-03T16:57:30.469669",
"project_name": "project_name",
"project_root": "/path/to/project"
},
"folder_analysis": [
{"path": "./src/core", "type": "code", "code_count": 5, "dirs_count": 2}
],
"top_level_dirs": ["src/modules", "lib/core"],
"existing_docs": {
"file_list": [".workflow/docs/project/src/core/API.md"],
"content": "... existing docs content ..."
},
"unified_analysis": [],
"statistics": {
"total": 15,
"code": 8,
"navigation": 7,
"top_level": 3
}
}
```
**Then** use **Edit tool** to update `workflow-session.json` adding analysis field.
**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).
**Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.
### Phase 3: Detect Update Mode
**Commands**:
```bash
# Count existing docs from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
```
**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
- Add `"update_mode": "update"` if count > 0, else `"create"`
- Add `"existing_docs": <count>`
### Phase 4: Decompose Tasks
**Task Hierarchy** (Dynamic based on document count):
```
Small Projects (total ≤10 docs):
Level 1: IMPL-001 (all directories in single task, shared context)
Level 2: IMPL-002 (README, full mode only)
Level 3: IMPL-003 (ARCHITECTURE+EXAMPLES), IMPL-004 (HTTP API, optional)
Medium Projects (Example: 7 top-level dirs, 18 total docs):
Step 1: Count docs per top-level dir
├─ dir1: 3 docs, dir2: 4 docs → Group 1 (7 docs)
├─ dir3: 5 docs, dir4: 3 docs → Group 2 (8 docs)
├─ dir5: 2 docs → Group 3 (2 docs, can add more)
Step 2: Create tasks with ≤10 docs constraint
Level 1: IMPL-001 to IMPL-003 (parallel groups)
├─ IMPL-001: Group 1 (dir1 + dir2, 7 docs, shared context)
├─ IMPL-002: Group 2 (dir3 + dir4, 8 docs, shared context)
└─ IMPL-003: Group 3 (remaining dirs, ≤10 docs)
Level 2: IMPL-004 (README, depends on Level 1, full mode only)
Level 3: IMPL-005 (ARCHITECTURE+EXAMPLES), IMPL-006 (HTTP API, optional)
Large Projects (single dir >10 docs):
Step 1: Detect oversized directory
└─ src/modules/: 15 subdirs → 30 docs (exceeds limit)
Step 2: Split by subdirectories
Level 1: IMPL-001 to IMPL-003 (split oversized dir)
├─ IMPL-001: src/modules/ subdirs 1-5 (10 docs)
├─ IMPL-002: src/modules/ subdirs 6-10 (10 docs)
└─ IMPL-003: src/modules/ subdirs 11-15 (10 docs)
```
**Grouping Algorithm** (see the sketch after this list):
1. Count total docs for each top-level directory
2. Try grouping 2 directories (optimization for context sharing)
3. If group exceeds 10 docs, split to 1 dir/task
4. If single dir exceeds 10 docs, split by subdirectories
5. Create parallel Level 1 tasks with ≤10 docs each
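A hedged sketch of the algorithm in JavaScript, assuming the `folder_analysis` and `top_level_dirs` shapes from Phase 2 (greedy adjacent pairing; sorting or bin-packing would be a refinement, and oversized directories are flagged for subdirectory splitting rather than split here):
```javascript
// Hedged sketch of Phase 4 grouping under the ≤10-docs-per-task constraint.
// Doc counts: code folder = API.md + README.md (2); navigation folder = README.md (1).
function planGroups(folderAnalysis, topLevelDirs, limit = 10) {
  const docCount = dir => folderAnalysis
    .filter(f => f.path.replace(/^\.\//, '').startsWith(dir))
    .reduce((n, f) => n + (f.type === 'code' ? 2 : 1), 0);

  const dirs = topLevelDirs.map(d => ({ dir: d, docs: docCount(d) }));
  const groups = [];
  for (let i = 0; i < dirs.length; ) {
    const a = dirs[i], b = dirs[i + 1];
    if (a.docs > limit) {
      // Single dir over budget: caller splits it by subdirectories.
      groups.push({ directories: [a.dir], doc_count: a.docs, split_by_subdirs: true });
      i += 1;
    } else if (b && a.docs + b.docs <= limit) {
      // Prefer pairing two dirs per task for context sharing.
      groups.push({ directories: [a.dir, b.dir], doc_count: a.docs + b.docs });
      i += 2;
    } else {
      groups.push({ directories: [a.dir], doc_count: a.docs });
      i += 1;
    }
  }
  return groups;
}
```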
**Commands**:
```bash
# 1. Get top-level directories from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
# 2. Get mode from workflow-session.json
bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
# 3. Check for HTTP API
bash(grep -rqE 'router\.|@(Get|Post)' src/ 2>/dev/null && echo "API_FOUND" || echo "NO_API")
```
**Data Processing**:
1. Count documents for each top-level directory (from folder_analysis):
- Code folders: 2 docs each (API.md + README.md)
- Navigation folders: 1 doc each (README.md only)
2. Apply grouping algorithm with ≤10 docs constraint:
- Try grouping 2 directories, calculate total docs
- If total ≤10 docs: create group
- If total >10 docs: split to 1 dir/group or subdivide
- If single dir >10 docs: split by subdirectories
3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
```json
"groups": {
"count": 3,
"assignments": [
{"group_id": "001", "directories": ["src/modules", "src/utils"], "doc_count": 5},
{"group_id": "002", "directories": ["lib/core"], "doc_count": 6},
{"group_id": "003", "directories": ["lib/helpers"], "doc_count": 3}
]
}
```
**Task ID Calculation**:
```bash
group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
readme_id=$((group_count + 1)) # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))
```
### Phase 5: Generate Task JSONs
**CLI Strategy**:
| Mode | cli_execute | Placement | CLI MODE | Approval Flag | Agent Role |
|------|-------------|-----------|----------|---------------|------------|
| **Agent** | false | pre_analysis | analysis | (none) | Generate docs in implementation_approach |
| **CLI** | true | implementation_approach | write | --mode write | Execute CLI commands, validate output |
**Command Patterns**:
- Gemini/Qwen: `ccw cli -p "..." --tool gemini --mode analysis --cd dir`
- CLI Mode: `ccw cli -p "..." --tool gemini --mode write --cd dir`
- Codex: `ccw cli -p "..." --tool codex --mode write --cd dir`
**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
2. Read group assignments from doc-planning-data.json
3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)
## Task Templates
### Level 1: Module Trees Group Task (Unified)
**Execution Model**: Each task processes assigned directory group (max 2 directories) using pre-analyzed data from Phase 2.
```json
{
"id": "IMPL-${group_number}",
"title": "Document Module Trees Group ${group_number}",
"status": "pending",
"meta": {
"type": "docs-tree-group",
"agent": "@doc-generator",
"tool": "gemini",
"cli_execute": false,
"group_number": "${group_number}",
"total_groups": "${total_groups}"
},
"context": {
"requirements": [
"Process directories from group ${group_number} in doc-planning-data.json",
"Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
"Code folders: API.md + README.md; Navigation folders: README.md only",
"Use pre-analyzed data from Phase 2 (no redundant analysis)"
],
"focus_paths": ["${group_dirs_from_json}"],
"precomputed_data": {
"phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
}
},
"flow_control": {
"pre_analysis": [
{
"step": "load_precomputed_data",
"action": "Load Phase 2 analysis and extract group directories",
"commands": [
"bash(cat ${session_dir}/.process/doc-planning-data.json)",
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
],
"output_to": "phase2_context",
"note": "Single JSON file contains all Phase 2 analysis results"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate documentation for assigned directory group",
"description": "Process directories in Group ${group_number} using pre-analyzed data",
"modification_points": [
"Read group directories from [phase2_context].groups.assignments[${group_number}].directories",
"For each directory: parse folder types from folder_analysis, parse structure from unified_analysis",
"Map source_path to .workflow/docs/${project_name}/{path}",
"Generate API.md for code folders, README.md for all folders",
"Preserve user modifications from [phase2_context].existing_docs.content"
],
"logic_flow": [
"phase2 = parse([phase2_context])",
"dirs = phase2.groups.assignments[${group_number}].directories",
"for dir in dirs:",
" folder_info = find(dir, phase2.folder_analysis)",
" outline = find(dir, phase2.unified_analysis)",
" if folder_info.type == 'code': generate API.md + README.md",
" elif folder_info.type == 'navigation': generate README.md only",
" write to .workflow/docs/${project_name}/{dir}/"
],
"depends_on": [],
"output": "group_module_docs"
}
],
"target_files": [
".workflow/docs/${project_name}/*/API.md",
".workflow/docs/${project_name}/*/README.md"
]
}
}
```
**CLI Execute Mode Note**: When `cli_execute=true`, add Step 2 in `implementation_approach`:
```json
{
"step": 2,
"title": "Batch generate documentation via CLI",
"command": "ccw cli -p 'PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure' --tool gemini --mode write --cd ${dirs_from_group}",
"depends_on": [1],
"output": "generated_docs"
}
```
### Level 2: Project README Task
**Task ID**: `IMPL-${readme_id}` (where `readme_id = group_count + 1`)
**Dependencies**: Depends on all Level 1 tasks completing.
```json
{
"id": "IMPL-${readme_id}",
"title": "Generate Project README",
"status": "pending",
"depends_on": ["IMPL-001", "...", "IMPL-${group_count}"],
"meta": {"type": "docs", "agent": "@doc-generator", "tool": "gemini", "cli_execute": false},
"flow_control": {
"pre_analysis": [
{
"step": "load_existing_readme",
"command": "bash(cat .workflow/docs/${project_name}/README.md 2>/dev/null || echo 'No existing README')",
"output_to": "existing_readme"
},
{
"step": "load_module_docs",
"command": "bash(find .workflow/docs/${project_name} -type f -name '*.md' ! -path '.workflow/docs/${project_name}/README.md' ! -path '.workflow/docs/${project_name}/ARCHITECTURE.md' ! -path '.workflow/docs/${project_name}/EXAMPLES.md' ! -path '.workflow/docs/${project_name}/api/*' | xargs cat)",
"output_to": "all_module_docs"
},
{
"step": "analyze_project",
"command": "bash(ccw cli -p \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\" --tool gemini --mode analysis)",
"output_to": "project_outline"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate project README",
"description": "Generate project README with navigation links while preserving user modifications",
"modification_points": [
"Parse [project_outline] and [all_module_docs]",
"Generate README structure with navigation links",
"Preserve [existing_readme] user modifications"
],
"logic_flow": ["Parse data", "Generate README with navigation", "Preserve modifications"],
"depends_on": [],
"output": "project_readme"
}
],
"target_files": [".workflow/docs/${project_name}/README.md"]
}
}
```
### Level 3: Architecture & Examples Documentation Task
**Task ID**: `IMPL-${arch_id}` (where `arch_id = group_count + 2`)
**Dependencies**: Depends on Level 2 (Project README).
```json
{
"id": "IMPL-${arch_id}",
"title": "Generate Architecture & Examples Documentation",
"status": "pending",
"depends_on": ["IMPL-${readme_id}"],
"meta": {"type": "docs", "agent": "@doc-generator", "tool": "gemini", "cli_execute": false},
"flow_control": {
"pre_analysis": [
{"step": "load_existing_docs", "command": "bash(cat .workflow/docs/${project_name}/{ARCHITECTURE,EXAMPLES}.md 2>/dev/null || echo 'No existing docs')", "output_to": "existing_arch_examples"},
{"step": "load_all_docs", "command": "bash(cat .workflow/docs/${project_name}/README.md && find .workflow/docs/${project_name} -type f -name '*.md' ! -path '*/README.md' ! -path '*/ARCHITECTURE.md' ! -path '*/EXAMPLES.md' ! -path '*/api/*' | xargs cat)", "output_to": "all_docs"},
{"step": "analyze_architecture", "command": "bash(ccw cli -p \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\" --tool gemini --mode analysis)", "output_to": "arch_examples_outline"}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate architecture and examples documentation",
"modification_points": [
"Parse [arch_examples_outline] and [all_docs]",
"Generate ARCHITECTURE.md (system design, patterns)",
"Generate EXAMPLES.md (code snippets, usage)",
"Preserve [existing_arch_examples] modifications"
],
"depends_on": [],
"output": "arch_examples_docs"
}
],
"target_files": [".workflow/docs/${project_name}/ARCHITECTURE.md", ".workflow/docs/${project_name}/EXAMPLES.md"]
}
}
```
### Level 4: HTTP API Documentation Task (Optional)
**Task ID**: `IMPL-${api_id}` (where `api_id = group_count + 3`)
**Dependencies**: Depends on Level 3.
```json
{
"id": "IMPL-${api_id}",
"title": "Generate HTTP API Documentation",
"status": "pending",
"depends_on": ["IMPL-${arch_id}"],
"meta": {"type": "docs", "agent": "@doc-generator", "tool": "gemini", "cli_execute": false},
"flow_control": {
"pre_analysis": [
{"step": "discover_api", "command": "bash(rg 'router\\.| @(Get|Post)' -g '*.{ts,js}')", "output_to": "endpoint_discovery"},
{"step": "load_existing_api", "command": "bash(cat .workflow/docs/${project_name}/api/README.md 2>/dev/null || echo 'No existing API docs')", "output_to": "existing_api_docs"},
{"step": "analyze_api", "command": "bash(ccw cli -p \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\" --tool gemini --mode analysis)", "output_to": "api_outline"}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate HTTP API documentation",
"modification_points": [
"Parse [api_outline] and [endpoint_discovery]",
"Document endpoints, request/response formats",
"Preserve [existing_api_docs] modifications"
],
"depends_on": [],
"output": "api_docs"
}
],
"target_files": [".workflow/docs/${project_name}/api/README.md"]
}
}
```
## Session Structure
**Unified Structure** (single JSON replaces multiple text files):
```
.workflow/active/
└── WFS-docs-{timestamp}/
├── workflow-session.json # Session metadata
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
│ └── doc-planning-data.json # All Phase 2 analysis data (replaces 7+ files)
└── .task/
├── IMPL-001.json # Small: all modules | Large: group 1
├── IMPL-00N.json # (Large only: groups 2-N)
├── IMPL-{N+1}.json # README (full mode)
├── IMPL-{N+2}.json # ARCHITECTURE+EXAMPLES (full mode)
└── IMPL-{N+3}.json # HTTP API (optional)
```
**doc-planning-data.json Structure**:
```json
{
"metadata": {
"generated_at": "2025-11-03T16:41:06+08:00",
"project_name": "Claude_dms3",
"project_root": "/d/Claude_dms3"
},
"folder_analysis": [
{"path": "./src/core", "type": "code", "code_count": 5, "dirs_count": 2},
{"path": "./src/utils", "type": "navigation", "code_count": 0, "dirs_count": 4}
],
"top_level_dirs": ["src/modules", "src/utils", "lib/core"],
"existing_docs": {
"file_list": [".workflow/docs/project/src/core/API.md"],
"content": "... concatenated existing docs ..."
},
"unified_analysis": [
{"module_path": "./src/core", "outline_summary": "Core functionality"}
],
"groups": {
"count": 4,
"assignments": [
{"group_id": "001", "directories": ["src/modules", "src/utils"], "doc_count": 6},
{"group_id": "002", "directories": ["lib/core", "lib/helpers"], "doc_count": 7}
]
},
"statistics": {
"total": 15,
"code": 8,
"navigation": 7,
"top_level": 3
}
}
```
**Workflow Session Structure** (workflow-session.json):
```json
{
"session_id": "WFS-docs-{timestamp}",
"project": "{project_name} documentation",
"status": "planning",
"timestamp": "2024-01-20T14:30:22+08:00",
"path": ".",
"target_path": "/path/to/project",
"project_root": "/path/to/project",
"project_name": "{project_name}",
"mode": "full",
"tool": "gemini",
"cli_execute": false,
"update_mode": "update",
"existing_docs": 5,
"analysis": {
"total": "15",
"code": "8",
"navigation": "7",
"top_level": "3"
}
}
```
## Generated Documentation
**Structure mirrors project source directories under project-specific folder**:
```
.workflow/docs/
└── {project_name}/ # Project-specific root
├── src/ # Mirrors src/ directory
│ ├── modules/
│ │ ├── README.md # Navigation
│ │ ├── auth/
│ │ │ ├── API.md # API signatures
│ │ │ ├── README.md # Module docs
│ │ │ └── middleware/
│ │ │ ├── API.md
│ │ │ └── README.md
│ │ └── api/
│ │ ├── API.md
│ │ └── README.md
│ └── utils/
│ └── README.md
├── lib/ # Mirrors lib/ directory
│ └── core/
│ ├── API.md
│ └── README.md
├── README.md # Project root
├── ARCHITECTURE.md # System design
├── EXAMPLES.md # Usage examples
└── api/ # Optional
└── README.md # HTTP API reference
```
## Execution Commands
```bash
# Execute entire workflow (auto-discovers active session)
/workflow:execute
# Or specify session
/workflow:execute --resume-session="WFS-docs-yyyymmdd-hhmmss"
# Individual task execution
/task:execute IMPL-001
```
## Template Reference
**Available Templates** (`~/.claude/workflows/cli-templates/prompts/documentation/`):
- `api.txt`: Code API (Part A) + HTTP API (Part B)
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders with subdirectories
- `project-readme.txt`: Project overview, getting started, navigation
- `project-architecture.txt`: System structure, module map, design patterns
- `project-examples.txt`: End-to-end usage examples
## Execution Mode Summary
| Mode | CLI Placement | CLI MODE | Approval Flag | Agent Role |
|------|---------------|----------|---------------|------------|
| **Agent (default)** | pre_analysis | analysis | (none) | Generates documentation content |
| **CLI (--cli-execute)** | implementation_approach | write | --mode write | Executes CLI commands, validates output |
**Execution Flow**:
- **Phase 2**: Unified analysis once, results in `.process/`
- **Phase 4**: Dynamic grouping (max 2 dirs per group)
- **Level 1**: Parallel processing for module tree groups
- **Level 2+**: Sequential execution for project-level docs
## Related Commands
- `/workflow:execute` - Execute documentation tasks
- `/workflow:status` - View task progress
- `/workflow:session:complete` - Mark session complete


@@ -1,182 +0,0 @@
---
name: load-skill-memory
description: Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords
argument-hint: "[skill_name] \"task intent description\""
allowed-tools: Bash(*), Read(*), Skill(*)
---
# Memory Load SKILL Command (/memory:load-skill-memory)
## 1. Overview
The `memory:load-skill-memory` command **activates SKILL package** (auto-detect from task or manual specification) and intelligently loads documentation based on user's task intent. The system automatically determines which documentation files to read based on the intent description.
**Core Philosophy**:
- **Flexible Activation**: Auto-detect skill from task description/paths, or user explicitly specifies
- **Intent-Driven Loading**: System analyzes task intent to determine documentation scope
- **Intelligent Selection**: Automatically chooses appropriate documentation level and modules
- **Direct Context Loading**: Loads selected documentation into conversation memory
**When to Use**:
- Manually activate a known SKILL package for a specific task
- Load SKILL context when system hasn't auto-triggered it
- Force reload SKILL documentation with specific intent focus
**Note**: Normal SKILL activation happens automatically via description triggers or path mentions (system extracts skill name from file paths for intelligent triggering). Use this command only when manual activation is needed.
## 2. Parameters
- `[skill_name]` (Optional): Name of SKILL package to activate
- If omitted: System auto-detects from task description or file paths
- If specified: Direct activation of named SKILL package
- Example: `my_project`, `api_service`
- Must match directory name under `.claude/skills/`
- `"task intent description"` (Required): Description of what you want to do
- Used for both: auto-detection (if skill_name omitted) and documentation scope selection
- **Analysis tasks**: "分析builder pattern实现" (analyze the builder pattern implementation), "理解参数系统架构" (understand the parameter system architecture)
- **Modification tasks**: "修改workflow逻辑" (modify the workflow logic), "增强thermal template功能" (enhance the thermal template functionality)
- **Learning tasks**: "学习接口设计模式" (learn interface design patterns), "了解测试框架使用" (understand test framework usage)
- **With paths**: "修改D:\projects\my_project\src\auth.py的认证逻辑" (modify the authentication logic in D:\projects\my_project\src\auth.py; auto-extracts `my_project`)
## 3. Execution Flow
### Step 1: Determine SKILL Name (if not provided)
**Auto-Detection Strategy** (when skill_name parameter is omitted):
1. **Path Extraction**: Scan task description for file paths
- Extract potential project names from path segments
- Example: `"修改D:\projects\my_project\src\auth.py"` → extracts `my_project`
2. **Keyword Matching**: Match task keywords against SKILL descriptions
- Search for project-specific terms, domain keywords
3. **Validation**: Check if extracted name matches `.claude/skills/{skill_name}/`
**Result**: Either uses provided skill_name or auto-detected name for activation
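A minimal sketch of the detection, assuming skill names are directory names under `.claude/skills/` (the helper and regex are illustrative):
```javascript
// Hedged sketch: extract a skill name from a task description (Step 1).
function detectSkillName(taskDescription, knownSkills) {
  // Match a Windows (D:\...) or POSIX (/...) path embedded in the description.
  const pathMatch = taskDescription.match(/[A-Za-z]:\\[^\s"']+|\/[^\s"']+/);
  if (pathMatch) {
    const segments = pathMatch[0].split(/[\\/]+/).filter(Boolean);
    const hit = segments.find(seg => knownSkills.includes(seg));
    if (hit) return hit; // e.g. ...\my_project\src\auth.py → "my_project"
  }
  // Fall back to keyword matching against known skill names.
  return knownSkills.find(name => taskDescription.includes(name)) || null;
}
```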
### Step 2: Activate SKILL and Analyze Intent
**Activate SKILL Package**:
```javascript
Skill(command: "${skill_name}") // Uses provided or auto-detected name
```
**What Happens After Activation**:
1. If SKILL exists in memory: System reads `.claude/skills/${skill_name}/SKILL.md`
2. If SKILL not found in memory: Error - SKILL package doesn't exist
3. SKILL description triggers are loaded into memory
4. Progressive loading mechanism becomes available
5. Documentation structure is now accessible
**Intent Analysis**:
Based on task intent description, system determines:
- **Action type**: analyzing, modifying, learning
- **Scope**: specific module, architecture overview, complete system
- **Depth**: quick reference, detailed API, full documentation
### Step 3: Intelligent Documentation Loading
**Loading Strategy**:
The system automatically selects documentation based on intent keywords:
1. **Quick Understanding** ("了解", "快速理解", "什么是"; i.e., "understand", "quickly grasp", "what is"):
- Load: Level 0 (README.md only, ~2K tokens)
- Use case: Quick overview of capabilities
2. **Specific Module Analysis** ("分析XXX模块", "理解XXX实现"; i.e., "analyze module XXX", "understand XXX's implementation"):
- Load: Module-specific README.md + API.md (~5K tokens)
- Use case: Deep dive into specific component
3. **Architecture Review** ("架构", "设计模式", "整体结构"; i.e., "architecture", "design patterns", "overall structure"):
- Load: README.md + ARCHITECTURE.md (~10K tokens)
- Use case: System design understanding
4. **Implementation/Modification** ("修改", "增强", "实现"; i.e., "modify", "enhance", "implement"):
- Load: Relevant module docs + EXAMPLES.md (~15K tokens)
- Use case: Code modification with examples
5. **Comprehensive Learning** ("学习", "完整了解", "深入理解"; i.e., "learn", "fully understand", "deeply understand"):
- Load: Level 3 (All documentation, ~40K tokens)
- Use case: Complete system mastery
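A hedged sketch of the selection logic (trigger lists mirror the keywords above; the returned labels and token budgets are illustrative):
```javascript
// Hedged sketch: map a task-intent string to a documentation loading level.
function selectLoadingLevel(intent) {
  const has = (...kws) => kws.some(k => intent.includes(k));
  if (has('学习', '完整了解', '深入理解')) return 'comprehensive'; // all docs (~40K)
  if (has('修改', '增强', '实现')) return 'implementation';       // module + EXAMPLES (~15K)
  if (has('架构', '设计模式', '整体结构')) return 'architecture';  // README + ARCHITECTURE (~10K)
  if (has('分析', '模块')) return 'module';                       // module README + API (~5K)
  return 'quick';                                                // README.md only (~2K)
}
```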
**Documentation Loaded into Memory**:
After loading, the selected documentation content is available in conversation memory for subsequent operations.
## 4. Usage Examples
### Example 1: Manual Specification
**User Command**:
```bash
/memory:load-skill-memory my_project "修改认证模块增加OAuth支持"
# Intent: "Modify the auth module to add OAuth support"
```
**Execution**:
```javascript
// Step 1: Use provided skill_name
skill_name = "my_project" // Directly from parameter
// Step 2: Activate SKILL
Skill(command: "my_project")
// Step 3: Intent Analysis
Keywords: ["修改", "认证模块", "增加", "OAuth"]
Action: modifying (implementation)
Scope: auth module + examples
// Load documentation based on intent
Read(.workflow/docs/my_project/auth/README.md)
Read(.workflow/docs/my_project/auth/API.md)
Read(.workflow/docs/my_project/EXAMPLES.md)
```
### Example 2: Auto-Detection from Path
**User Command**:
```bash
/memory:load-skill-memory "修改D:\projects\my_project\src\services\api.py的接口逻辑"
```
**Execution**:
```javascript
// Step 1: Auto-detect skill_name from path
Path detected: "D:\projects\my_project\src\services\api.py"
Extracted: "my_project"
Validated: .claude/skills/my_project/ exists
skill_name = "my_project"
// Step 2: Activate SKILL
Skill(command: "my_project")
// Step 3: Intent Analysis
Keywords: ["修改", "services", "接口逻辑"]
Action: modifying (implementation)
Scope: services module + examples
// Load documentation based on intent
Read(.workflow/docs/my_project/services/README.md)
Read(.workflow/docs/my_project/services/API.md)
Read(.workflow/docs/my_project/EXAMPLES.md)
```
## 5. Intent Keyword Mapping
**Quick Reference**:
- **Triggers**: "了解", "快速", "什么是", "简介"
- **Loads**: README.md only (~2K)
**Module-Specific**:
- **Triggers**: "XXX模块", "XXX组件", "分析XXX"
- **Loads**: Module README + API (~5K)
**Architecture**:
- **Triggers**: "架构", "设计", "整体结构", "系统设计"
- **Loads**: README + ARCHITECTURE (~10K)
**Implementation**:
- **Triggers**: "修改", "增强", "实现", "开发", "集成"
- **Loads**: Relevant module + EXAMPLES (~15K)
**Comprehensive**:
- **Triggers**: "完整", "深入", "全面", "学习整个"
- **Loads**: All documentation (~40K)


@@ -1,525 +0,0 @@
---
name: skill-memory
description: "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)"
argument-hint: "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*)
---
# Memory SKILL Package Generator
## Orchestrator Role
**Pure Orchestrator**: Execute documentation generation workflow, then generate SKILL.md index. Does NOT create task JSON files.
**Auto-Continue Workflow**: This command runs **fully autonomously** once triggered. Each phase completes and automatically triggers the next phase without user interaction.
**Execution Paths**:
- **Full Path**: All 4 phases (no existing docs OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 4 (existing docs found AND no `--regenerate` flag)
- **Phase 4 Always Executes**: SKILL.md index is never skipped, always generated or updated
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **No Task JSON**: This command does not create task JSON files - delegates to /memory:docs
3. **Parse Every Output**: Extract required data from each command output (session_id, task_count, file paths)
4. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
5. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
6. **Direct Generation**: Phase 4 directly generates SKILL.md using Write tool
7. **No Manual Steps**: User should never be prompted for decisions between phases
---
## 4-Phase Execution
### Phase 1: Prepare Arguments
**Goal**: Parse command arguments and check existing documentation
**Step 1: Get Target Path and Project Name**
```bash
# Get current directory (or use provided path)
bash(pwd)
# Get project name from directory
bash(basename "$(pwd)")
# Get project root
bash(git rev-parse --show-toplevel 2>/dev/null || pwd)
```
**Output**:
- `target_path`: `/d/my_project`
- `project_name`: `my_project`
- `project_root`: `/d/my_project`
**Step 2: Set Default Parameters**
```bash
# Default values (use these unless user specifies otherwise):
# - tool: "gemini"
# - mode: "full"
# - regenerate: false (no --regenerate flag)
# - cli_execute: false (no --cli-execute flag)
```
**Step 3: Check Existing Documentation**
```bash
# Check if docs directory exists
bash(test -d .workflow/docs/my_project && echo "exists" || echo "not_exists")
# Count existing documentation files
bash(find .workflow/docs/my_project -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Output**:
- `docs_exists`: `exists` or `not_exists`
- `existing_docs`: `5` (or `0` if no docs)
**Step 4: Determine Execution Path**
**Decision Logic**:
```javascript
if (existing_docs > 0 && !regenerate_flag) {
// Documentation exists and no regenerate flag
SKIP_DOCS_GENERATION = true
message = "Documentation already exists, skipping Phase 2 and Phase 3. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
// Force regeneration: delete existing docs
bash(rm -rf .workflow/docs/my_project 2>/dev/null || true)
SKIP_DOCS_GENERATION = false
message = "Regenerating documentation from scratch."
} else {
// No existing docs
SKIP_DOCS_GENERATION = false
message = "No existing documentation found, generating new documentation."
}
```
**Summary Variables**:
- `PROJECT_NAME`: `my_project`
- `TARGET_PATH`: `/d/my_project`
- `DOCS_PATH`: `.workflow/docs/my_project`
- `TOOL`: `gemini` (default) or user-specified
- `MODE`: `full` (default) or user-specified
- `CLI_EXECUTE`: `false` (default) or `true` if --cli-execute flag
- `REGENERATE`: `false` (default) or `true` if --regenerate flag
- `EXISTING_DOCS`: Count of existing documentation files
- `SKIP_DOCS_GENERATION`: `true` if skipping Phase 2/3, `false` otherwise
**Completion & TodoWrite**:
- If `SKIP_DOCS_GENERATION = true`: Mark phase 1 completed, phase 2&3 completed (skipped), phase 4 in_progress
- If `SKIP_DOCS_GENERATION = false`: Mark phase 1 completed, phase 2 in_progress
**Next Action**:
- If skipping: Display skip message → Jump to Phase 4 (SKILL.md generation)
- If not skipping: Display preparation results → Continue to Phase 2 (documentation planning)
---
### Phase 2: Call /memory:docs
**Skip Condition**: This phase is **skipped if SKIP_DOCS_GENERATION = true** (documentation already exists without --regenerate flag)
**Goal**: Trigger documentation generation workflow
**Command**:
```bash
SlashCommand(command="/memory:docs [targetPath] --tool [tool] --mode [mode] [--cli-execute]")
```
**Example**:
```bash
/memory:docs /d/my_app --tool gemini --mode full
/memory:docs /d/my_app --tool gemini --mode full --cli-execute
```
**Note**: The `--regenerate` flag is handled in Phase 1 by deleting existing documentation. This command always calls `/memory:docs` without the regenerate flag, relying on docs.md's built-in update detection.
**Parse Output**:
- Extract session ID: `WFS-docs-[timestamp]` (store as `docsSessionId`)
- Extract task count (store as `taskCount`)
**Completion Criteria**:
- `/memory:docs` command executed successfully
- Session ID extracted and stored
- Task count retrieved
- Task files created in `.workflow/[docsSessionId]/.task/`
- workflow-session.json exists
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
**Next Action**: Display docs planning results (session ID, task count) → Auto-continue to Phase 3
---
### Phase 3: Execute Documentation Generation
**Skip Condition**: This phase is **skipped if SKIP_DOCS_GENERATION = true** (documentation already exists without --regenerate flag)
**Goal**: Execute documentation generation tasks
**Command**:
```bash
SlashCommand(command="/workflow:execute")
```
**Note**: `/workflow:execute` automatically discovers active session from Phase 2
**Completion Criteria**:
- `/workflow:execute` command executed successfully
- Documentation files generated in `.workflow/docs/[projectName]/`
- All tasks marked as completed in session
- At minimum: module documentation files exist (API.md and/or README.md)
- For full mode: Project README, ARCHITECTURE, EXAMPLES files generated
**TodoWrite**: Mark phase 3 completed, phase 4 in_progress
**Next Action**: Display execution results (file count, module count) → Auto-continue to Phase 4
---
### Phase 4: Generate SKILL.md Index
**Note**: This phase is **NEVER skipped** - it always executes to generate or update the SKILL index.
**Step 1: Read Key Files** (Use Read tool)
- `.workflow/docs/{project_name}/README.md` (required)
- `.workflow/docs/{project_name}/ARCHITECTURE.md` (optional)
**Step 2: Discover Structure**
```bash
bash(find .workflow/docs/{project_name} -name "*.md" | sed 's|.workflow/docs/{project_name}/||' | awk -F'/' '{if(NF>=2) print $1"/"$2}' | sort -u)
```
**Step 3: Generate Intelligent Description**
Extract from README + structure: Function (capabilities), Modules (names), Keywords (API/CLI/auth/etc.)
**Format**: `{Project} {core capabilities} (located at {project_path}). Load this SKILL when analyzing, modifying, or learning about {domain_description} or files under this path, especially when no relevant context exists in memory.`
**Key Elements**:
- **Path Reference**: Use `TARGET_PATH` from Phase 1 for precise location identification
- **Domain Description**: Extract human-readable domain/feature area from README (e.g., "workflow management", "thermal modeling")
- **Trigger Optimization**: Include project path, emphasize "especially when no relevant context exists in memory"
- **Action Coverage**: analyzing (分析), modifying (修改), learning (了解)
**Example**: "Workflow orchestration system with CLI tools and documentation generation (located at /d/Claude_dms3). Load this SKILL when analyzing, modifying, or learning about workflow management or files under this path, especially when no relevant context exists in memory."
**Step 4: Write SKILL.md** (Use Write tool)
```bash
bash(mkdir -p .claude/skills/{project_name})
```
`.claude/skills/{project_name}/SKILL.md`:
```markdown
---
name: {project_name}
description: {intelligent description from Step 3}
version: 1.0.0
---
# {Project Name} SKILL Package
## Documentation: `../../../.workflow/docs/{project_name}/`
## Progressive Loading
### Level 0: Quick Start (~2K)
- [README](../../../.workflow/docs/{project_name}/README.md)
### Level 1: Core Modules (~8K)
{Module READMEs}
### Level 2: Complete (~25K)
All modules + [Architecture](../../../.workflow/docs/{project_name}/ARCHITECTURE.md)
### Level 3: Deep Dive (~40K)
Everything + [Examples](../../../.workflow/docs/{project_name}/EXAMPLES.md)
```
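A minimal sketch of filling the `{Module READMEs}` placeholder from the Step 2 scan output (the helper name is an assumption):
```javascript
// Sketch: turn "auth/README.md"-style lines from the structure scan into
// Level 1 links with the same relative-path convention as the template above
function moduleReadmeLinks(projectName, scanLines) {
  return scanLines
    .filter(p => p.endsWith("/README.md"))
    .map(p => `- [${p.split("/")[0]}](../../../.workflow/docs/${projectName}/${p})`)
    .join("\n");
}
```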
**Completion Criteria**:
- SKILL.md file created at `.claude/skills/{project_name}/SKILL.md`
- Intelligent description generated from documentation
- Progressive loading levels (0-3) properly structured
- Module index includes all documented modules
- All file references use relative paths
**TodoWrite**: Mark phase 4 completed
**Final Action**: Report completion summary to user
**Return to User**:
```
SKILL Package Generation Complete
Project: {project_name}
Documentation: .workflow/docs/{project_name}/ ({doc_count} files)
SKILL Index: .claude/skills/{project_name}/SKILL.md
Generated:
- {task_count} documentation tasks completed
- SKILL.md with progressive loading (4 levels)
- Module index with {module_count} modules
Usage:
- Load Level 0: Quick project overview (~2K tokens)
- Load Level 1: Core modules (~8K tokens)
- Load Level 2: Complete docs (~25K tokens)
- Load Level 3: Everything (~40K tokens)
```
---
## Implementation Details
### Critical Rules
1. **No User Prompts Between Phases**: Never ask user questions or wait for input between phases
2. **Immediate Phase Transition**: After TodoWrite update, immediately execute next phase command
3. **Status-Driven Execution**: Check TodoList status after each phase:
- If next task is "pending" → Mark it "in_progress" and execute
- If all tasks are "completed" → Report final summary
4. **Phase Completion Pattern**:
```
Phase N completes → Update TodoWrite (N=completed, N+1=in_progress) → Execute Phase N+1
```
### TodoWrite Patterns
#### Initialization (Before Phase 1)
**FIRST ACTION**: Create TodoList with all 4 phases
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "in_progress", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "pending", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "pending", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL.md"}
]})
```
**SECOND ACTION**: Execute Phase 1 immediately
#### Full Path (SKIP_DOCS_GENERATION = false)
**After Phase 1**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "in_progress", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "pending", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL.md"}
]})
// Auto-continue to Phase 2
```
**After Phase 2**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "in_progress", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL.md"}
]})
// Auto-continue to Phase 3
```
**After Phase 3**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "in_progress", "activeForm": "Generating SKILL.md"}
]})
// Auto-continue to Phase 4
```
**After Phase 4**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "completed", "activeForm": "Generating SKILL.md"}
]})
// Report completion summary to user
```
#### Skip Path (SKIP_DOCS_GENERATION = true)
**After Phase 1** (detects existing docs, skips Phase 2 & 3):
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "in_progress", "activeForm": "Generating SKILL.md"}
]})
// Display skip message: "Documentation already exists, skipping Phase 2 and Phase 3. Use --regenerate to force regeneration."
// Jump directly to Phase 4
```
**After Phase 4**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "completed", "activeForm": "Generating SKILL.md"}
]})
// Report completion summary to user
```
### Execution Flow Diagrams
#### Full Path Flow
```
User triggers command
[TodoWrite] Initialize 4 phases (Phase 1 = in_progress)
[Execute] Phase 1: Parse arguments
[TodoWrite] Phase 1 = completed, Phase 2 = in_progress
[Execute] Phase 2: Call /memory:docs
[TodoWrite] Phase 2 = completed, Phase 3 = in_progress
[Execute] Phase 3: Call /workflow:execute
[TodoWrite] Phase 3 = completed, Phase 4 = in_progress
[Execute] Phase 4: Generate SKILL.md
[TodoWrite] Phase 4 = completed
[Report] Display completion summary
```
#### Skip Path Flow
```
User triggers command
[TodoWrite] Initialize 4 phases (Phase 1 = in_progress)
[Execute] Phase 1: Parse arguments, detect existing docs
[TodoWrite] Phase 1 = completed, Phase 2&3 = completed (skipped), Phase 4 = in_progress
[Display] Skip message: "Documentation already exists, skipping Phase 2 and Phase 3"
[Execute] Phase 4: Generate SKILL.md (always runs)
[TodoWrite] Phase 4 = completed
[Report] Display completion summary
```
### Error Handling
- If any phase fails, leave it marked "in_progress" (never mark it "completed")
- Report error details to the user
- Do NOT auto-continue to the next phase on failure
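For illustration, a Phase 2 failure would leave the TodoList in this state — the failing phase stays in_progress and execution stops:
```javascript
// Phase 2 failed: it remains in_progress, later phases stay pending
TodoWrite({todos: [
  {"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
  {"content": "Call /memory:docs to plan documentation", "status": "in_progress", "activeForm": "Calling /memory:docs"},
  {"content": "Execute documentation generation", "status": "pending", "activeForm": "Executing documentation"},
  {"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL.md"}
]})
// Report error details to user, then stop (no auto-continue)
```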
---
## Parameters
```bash
/memory:skill-memory [path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]
```
- **path**: Target directory (default: current directory)
- **--tool**: CLI tool for documentation (default: gemini)
- `gemini`: Comprehensive documentation
- `qwen`: Architecture analysis
- `codex`: Implementation validation
- **--regenerate**: Force regenerate all documentation
- When enabled: Deletes existing `.workflow/docs/{project_name}/` before regeneration
- Ensures fresh documentation from source code
- **--mode**: Documentation mode (default: full)
- `full`: Complete docs (modules + README + ARCHITECTURE + EXAMPLES)
- `partial`: Module docs only
- **--cli-execute**: Enable CLI-based documentation generation (optional)
- When enabled: CLI generates docs directly in implementation_approach
- When disabled (default): Agent generates documentation content
---
## Examples
### Example 1: Generate SKILL Package (Default)
```bash
/memory:skill-memory
```
**Workflow**:
1. Phase 1: Detects current directory, checks existing docs
2. Phase 2: Calls `/memory:docs . --tool gemini --mode full` (Agent Mode)
3. Phase 3: Executes documentation generation via `/workflow:execute`
4. Phase 4: Generates SKILL.md at `.claude/skills/{project_name}/SKILL.md`
### Example 2: Regenerate with Qwen
```bash
/memory:skill-memory /d/my_app --tool qwen --regenerate
```
**Workflow**:
1. Phase 1: Parses target path, detects regenerate flag, deletes existing docs
2. Phase 2: Calls `/memory:docs /d/my_app --tool qwen --mode full`
3. Phase 3: Executes documentation regeneration
4. Phase 4: Generates updated SKILL.md
### Example 3: Partial Mode (Modules Only)
```bash
/memory:skill-memory --mode partial
```
**Workflow**:
1. Phase 1: Detects partial mode
2. Phase 2: Calls `/memory:docs . --tool gemini --mode partial` (Agent Mode)
3. Phase 3: Executes module documentation only
4. Phase 4: Generates SKILL.md with module-only index
### Example 4: CLI Execute Mode
```bash
/memory:skill-memory --cli-execute
```
**Workflow**:
1. Phase 1: Detects CLI execute mode
2. Phase 2: Calls `/memory:docs . --tool gemini --mode full --cli-execute` (CLI Mode)
3. Phase 3: Executes CLI-based documentation generation
4. Phase 4: Generates SKILL.md at `.claude/skills/{project_name}/SKILL.md`
### Example 5: Skip Path (Existing Docs)
```bash
/memory:skill-memory
```
**Scenario**: Documentation already exists in `.workflow/docs/{project_name}/`
**Workflow**:
1. Phase 1: Detects existing docs (5 files), sets SKIP_DOCS_GENERATION = true
2. Display: "Documentation already exists, skipping Phase 2 and Phase 3. Use --regenerate to force regeneration."
3. Phase 4: Generates or updates SKILL.md index only (~5-10x faster)
---
## Architecture
```
skill-memory (orchestrator)
├─ Phase 1: Prepare (bash commands, skip decision)
├─ Phase 2: /memory:docs (task planning, skippable)
├─ Phase 3: /workflow:execute (task execution, skippable)
└─ Phase 4: Write SKILL.md (direct file generation, always runs)
No task JSON created by this command
All documentation tasks managed by /memory:docs
Smart skip logic: 5-10x faster when docs exist
```

View File

@@ -1,773 +0,0 @@
---
name: swagger-docs
description: Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests
argument-hint: "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]"
---
# Swagger API Documentation Workflow (/memory:swagger-docs)
## Overview
Professional Swagger/OpenAPI documentation generator that strictly follows RESTful API design standards to produce enterprise-grade API documentation.
**Core Features**:
- **RESTful Standards**: Strict adherence to REST architecture and HTTP semantics
- **Global Security**: Unified Authorization Token validation mechanism
- **Complete API Docs**: Descriptions, methods, URLs, parameters for each endpoint
- **Organized Structure**: Clear directory hierarchy by business domain
- **Detailed Fields**: Type, required, example, description for each field
- **Error Code Standards**: Unified error response format and code definitions
- **Validation Tests**: Boundary conditions and exception handling tests
**Output Structure** (--lang zh):
```
.workflow/docs/{project_name}/api/
├── swagger.yaml # Main OpenAPI spec file
├── 概述/
│ ├── README.md # API overview
│ ├── 认证说明.md # Authentication guide
│ ├── 错误码规范.md # Error code definitions
│ └── 版本历史.md # Version history
├── 用户模块/ # Grouped by business domain
│ ├── 用户认证.md
│ ├── 用户管理.md
│ └── 权限控制.md
├── 业务模块/
│ └── ...
└── 测试报告/
├── 接口测试.md # API test results
└── 边界测试.md # Boundary condition tests
```
**Output Structure** (--lang en):
```
.workflow/docs/{project_name}/api/
├── swagger.yaml # Main OpenAPI spec file
├── overview/
│ ├── README.md # API overview
│ ├── authentication.md # Authentication guide
│ ├── error-codes.md # Error code definitions
│ └── changelog.md # Version history
├── users/ # Grouped by business domain
│ ├── authentication.md
│ ├── management.md
│ └── permissions.md
├── orders/
│ └── ...
└── test-reports/
├── api-tests.md # API test results
└── boundary-tests.md # Boundary condition tests
```
## Parameters
```bash
/memory:swagger-docs [path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]
```
- **path**: API source code directory (default: current directory)
- **--tool**: CLI tool selection (default: gemini)
- `gemini`: Comprehensive analysis, pattern recognition
- `qwen`: Architecture analysis, system design
- `codex`: Implementation validation, code quality
- **--format**: OpenAPI spec format (default: yaml)
- `yaml`: YAML format (recommended, better readability)
- `json`: JSON format
- **--version**: OpenAPI version (default: v3.0)
- `v3.0`: OpenAPI 3.0.x
- `v3.1`: OpenAPI 3.1.0 (supports JSON Schema 2020-12)
- **--lang**: Documentation language (default: zh)
- `zh`: Chinese documentation with Chinese directory names
- `en`: English documentation with English directory names
## Planning Workflow
### Phase 1: Initialize Session
```bash
# Get project info
bash(pwd && basename "$(pwd)" && (git rev-parse --show-toplevel 2>/dev/null || pwd) && date +%Y%m%d-%H%M%S)
```
```javascript
// Create swagger-docs session
SlashCommand(command="/workflow:session:start --type swagger-docs --new \"{project_name}-swagger-{timestamp}\"")
// Parse output to get sessionId
```
```bash
# Update workflow-session.json
bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","format":"yaml","openapi_version":"3.0.3","lang":"{lang}","tool":"gemini"}' .workflow/active/{sessionId}/workflow-session.json > tmp.json && mv tmp.json .workflow/active/{sessionId}/workflow-session.json)
```
### Phase 2: Scan API Endpoints
**Discovery Patterns**: Auto-detect framework signatures and API definition styles.
**Supported Frameworks**:
| Framework | Detection Pattern | Example |
|-----------|-------------------|---------|
| Express.js | `router.get/post/put/delete` | `router.get('/users/:id')` |
| Fastify | `fastify.route`, `@Route` | `fastify.get('/api/users')` |
| NestJS | `@Controller`, `@Get/@Post` | `@Get('users/:id')` |
| Koa | `router.get`, `ctx.body` | `router.get('/users')` |
| Hono | `app.get/post`, `c.json` | `app.get('/users/:id')` |
| FastAPI | `@app.get`, `@router.post` | `@app.get("/users/{id}")` |
| Flask | `@app.route`, `@bp.route` | `@app.route('/users')` |
| Spring | `@GetMapping`, `@PostMapping` | `@GetMapping("/users/{id}")` |
| Go Gin | `r.GET`, `r.POST` | `r.GET("/users/:id")` |
| Go Chi | `r.Get`, `r.Post` | `r.Get("/users/{id}")` |
**Commands**:
```bash
# 1. Detect API framework type
bash(
if rg -q "@Controller|@Get|@Post|@Put|@Delete" --type ts 2>/dev/null; then echo "NESTJS";
elif rg -q "router\.(get|post|put|delete|patch)" --type ts --type js 2>/dev/null; then echo "EXPRESS";
elif rg -q "fastify\.(get|post|route)" --type ts --type js 2>/dev/null; then echo "FASTIFY";
elif rg -q "@app\.(get|post|put|delete)" --type py 2>/dev/null; then echo "FASTAPI";
elif rg -q "@GetMapping|@PostMapping|@RequestMapping" --type java 2>/dev/null; then echo "SPRING";
elif rg -q 'r\.(GET|POST|PUT|DELETE)' --type go 2>/dev/null; then echo "GO_GIN";
else echo "UNKNOWN"; fi
)
# 2. Scan all API endpoint definitions
bash(rg -n "(router|app|fastify)\.(get|post|put|delete|patch)|@(Get|Post|Put|Delete|Patch|Controller|RequestMapping)" --type ts --type js --type py --type java --type go -g '!*.test.*' -g '!*.spec.*' -g '!node_modules/**' 2>/dev/null | head -200)
# 3. Extract route paths
bash(rg -o "['\"](/api)?/[a-zA-Z0-9/:_-]+['\"]" --type ts --type js --type py -g '!*.test.*' 2>/dev/null | sort -u | head -100)
# 4. Detect existing OpenAPI/Swagger files
bash(find . -type f \( -name "swagger.yaml" -o -name "swagger.json" -o -name "openapi.yaml" -o -name "openapi.json" \) ! -path "*/node_modules/*" 2>/dev/null)
# 5. Extract DTO/Schema definitions
bash(rg -n "export (interface|type|class).*Dto|@ApiProperty|class.*Schema" --type ts -g '!*.test.*' 2>/dev/null | head -100)
```
**Data Processing**: Parse outputs, use **Write tool** to create `${session_dir}/.process/swagger-planning-data.json`:
```json
{
"metadata": {
"generated_at": "2025-01-01T12:00:00+08:00",
"project_name": "project_name",
"project_root": "/path/to/project",
"openapi_version": "3.0.3",
"format": "yaml",
"lang": "zh"
},
"framework": {
"type": "NESTJS",
"detected_patterns": ["@Controller", "@Get", "@Post"],
"base_path": "/api/v1"
},
"endpoints": [
{
"file": "src/modules/users/users.controller.ts",
"line": 25,
"method": "GET",
"path": "/api/v1/users/:id",
"handler": "getUser",
"controller": "UsersController"
}
],
"existing_specs": {
"found": false,
"files": []
},
"dto_schemas": [
{
"name": "CreateUserDto",
"file": "src/modules/users/dto/create-user.dto.ts",
"properties": ["email", "password", "name"]
}
],
"statistics": {
"total_endpoints": 45,
"by_method": {"GET": 20, "POST": 15, "PUT": 5, "DELETE": 5},
"by_module": {"users": 12, "auth": 8, "orders": 15, "products": 10}
}
}
```
### Phase 3: Analyze API Structure
**Commands**:
```bash
# 1. Analyze controller/route file structure
bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.endpoints[].file' | sort -u | head -20)
# 2. Extract request/response types
bash(for f in $(jq -r '.dto_schemas[].file' ${session_dir}/.process/swagger-planning-data.json | head -20); do echo "=== $f ===" && cat "$f" 2>/dev/null; done)
# 3. Analyze authentication middleware
bash(rg -n "auth|guard|middleware|jwt|bearer|token" -i --type ts --type js -g '!*.test.*' -g '!node_modules/**' 2>/dev/null | head -50)
# 4. Detect error handling patterns
bash(rg -n "HttpException|BadRequest|Unauthorized|Forbidden|NotFound|throw new" --type ts --type js -g '!*.test.*' 2>/dev/null | head -50)
```
**Deep Analysis via Gemini CLI**:
```bash
ccw cli -p "
PURPOSE: Analyze API structure and generate OpenAPI specification outline for comprehensive documentation
TASK:
• Parse all API endpoints and identify business module boundaries
• Extract request parameters, request bodies, and response formats
• Identify authentication mechanisms and security requirements
• Discover error handling patterns and error codes
• Map endpoints to logical module groups
MODE: analysis
CONTEXT: @src/**/*.controller.ts @src/**/*.routes.ts @src/**/*.dto.ts @src/**/middleware/**/*
EXPECTED: JSON format API structure analysis report with modules, endpoints, security schemes, and error codes
CONSTRAINTS: Strict RESTful standards | Identify all public endpoints | Document output language: {lang}
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {project_root}
```
**Update swagger-planning-data.json** with analysis results:
```json
{
"api_structure": {
"modules": [
{
"name": "Users",
"name_zh": "用户模块",
"base_path": "/api/v1/users",
"endpoints": [
{
"path": "/api/v1/users",
"method": "GET",
"operation_id": "listUsers",
"summary": "List all users",
"summary_zh": "获取用户列表",
"description": "Paginated list of system users with filtering by status and role",
"description_zh": "分页获取系统用户列表,支持按状态、角色筛选",
"tags": ["User Management"],
"tags_zh": ["用户管理"],
"security": ["bearerAuth"],
"parameters": {
"query": ["page", "limit", "status", "role"]
},
"responses": {
"200": "UserListResponse",
"401": "UnauthorizedError",
"403": "ForbiddenError"
}
}
]
}
],
"security_schemes": {
"bearerAuth": {
"type": "http",
"scheme": "bearer",
"bearerFormat": "JWT",
"description": "JWT Token authentication. Add Authorization: Bearer <token> to request header"
}
},
"error_codes": [
{"code": "AUTH_001", "status": 401, "message": "Invalid or expired token", "message_zh": "Token 无效或已过期"},
{"code": "AUTH_002", "status": 401, "message": "Authentication required", "message_zh": "未提供认证信息"},
{"code": "AUTH_003", "status": 403, "message": "Insufficient permissions", "message_zh": "权限不足"}
]
}
}
```
### Phase 4: Task Decomposition
**Task Hierarchy**:
```
Level 1: Infrastructure Tasks (Parallel)
├─ IMPL-001: Generate main OpenAPI spec file (swagger.yaml)
├─ IMPL-002: Generate global security config and auth documentation
└─ IMPL-003: Generate unified error code specification
Level 2: Module Documentation Tasks (Parallel, by business module)
├─ IMPL-004: Users module API documentation
├─ IMPL-005: Auth module API documentation
├─ IMPL-006: Business module N API documentation
└─ ...
Level 3: Aggregation Tasks (Depends on Level 1-2)
├─ IMPL-N+1: Generate API overview and navigation
└─ IMPL-N+2: Generate version history and changelog
Level 4: Validation Tasks (Depends on Level 1-3)
├─ IMPL-N+3: API endpoint validation tests
└─ IMPL-N+4: Boundary condition tests
```
**Grouping Strategy**:
1. Group by business module (users, orders, products, etc.)
2. Maximum 10 endpoints per task
3. Large modules (>10 endpoints) split by submodules
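A minimal sketch of this grouping rule (the task numbering scheme and helper name are assumptions):
```javascript
// Split each module's endpoints into Level 2 tasks of at most 10 endpoints
function groupModuleTasks(modules, maxPerTask = 10) {
  const assignments = [];
  let taskNum = 4; // IMPL-001..003 are reserved for Level 1 infrastructure
  for (const mod of modules) {
    for (let i = 0; i < mod.endpoints.length; i += maxPerTask) {
      assignments.push({
        task_id: `IMPL-${String(taskNum++).padStart(3, "0")}`,
        level: 2,
        type: "module-doc",
        module: mod.name,
        endpoint_count: Math.min(maxPerTask, mod.endpoints.length - i)
      });
    }
  }
  return assignments;
}
```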
**Commands**:
```bash
# 1. Count endpoints by module
bash(cat ${session_dir}/.process/swagger-planning-data.json | jq '.statistics.by_module')
# 2. Calculate task groupings
bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_structure.modules[] | "\(.name):\(.endpoints | length)"')
```
**Data Processing**: Use **Edit tool** to update `swagger-planning-data.json` with task groups:
```json
{
"task_groups": {
"level1_count": 3,
"level2_count": 5,
"total_count": 12,
"assignments": [
{"task_id": "IMPL-001", "level": 1, "type": "openapi-spec", "title": "Generate OpenAPI main spec file"},
{"task_id": "IMPL-002", "level": 1, "type": "security", "title": "Generate global security config"},
{"task_id": "IMPL-003", "level": 1, "type": "error-codes", "title": "Generate error code specification"},
{"task_id": "IMPL-004", "level": 2, "type": "module-doc", "module": "users", "endpoint_count": 12},
{"task_id": "IMPL-005", "level": 2, "type": "module-doc", "module": "auth", "endpoint_count": 8}
]
}
}
```
### Phase 5: Generate Task JSONs
**Generation Process**:
1. Read configuration values from workflow-session.json
2. Read task groups from swagger-planning-data.json
3. Generate Level 1 tasks (infrastructure)
4. Generate Level 2 tasks (by module)
5. Generate Level 3-4 tasks (aggregation and validation)
## Task Templates
### Level 1-1: OpenAPI Main Spec File
```json
{
"id": "IMPL-001",
"title": "Generate OpenAPI main specification file",
"status": "pending",
"meta": {
"type": "swagger-openapi-spec",
"agent": "@doc-generator",
"tool": "gemini",
"priority": "critical"
},
"context": {
"requirements": [
"Generate OpenAPI 3.0.3 compliant swagger.yaml",
"Include complete info, servers, tags, paths, components definitions",
"Follow RESTful design standards, use {lang} for descriptions"
],
"precomputed_data": {
"planning_data": "${session_dir}/.process/swagger-planning-data.json"
}
},
"flow_control": {
"pre_analysis": [
{
"step": "load_analysis_data",
"action": "Load API analysis data",
"commands": [
"bash(cat ${session_dir}/.process/swagger-planning-data.json)"
],
"output_to": "api_analysis"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate OpenAPI spec file",
"description": "Create complete swagger.yaml specification file",
"cli_prompt": "PURPOSE: Generate OpenAPI 3.0.3 specification file from analyzed API structure\nTASK:\n• Define openapi version: 3.0.3\n• Define info: title, description, version, contact, license\n• Define servers: development, staging, production environments\n• Define tags: organized by business modules\n• Define paths: all API endpoints with complete specifications\n• Define components: schemas, securitySchemes, parameters, responses\nMODE: write\nCONTEXT: @[api_analysis]\nEXPECTED: Complete swagger.yaml file following OpenAPI 3.0.3 specification\nCONSTRAINTS: Use {lang} for all descriptions | Strict RESTful standards\n--rule documentation-swagger-api",
"output": "swagger.yaml"
}
],
"target_files": [
".workflow/docs/${project_name}/api/swagger.yaml"
]
}
}
```
### Level 1-2: Global Security Configuration
```json
{
"id": "IMPL-002",
"title": "Generate global security configuration and authentication guide",
"status": "pending",
"meta": {
"type": "swagger-security",
"agent": "@doc-generator",
"tool": "gemini"
},
"context": {
"requirements": [
"Document Authorization header format in detail",
"Describe token acquisition, refresh, and expiration mechanisms",
"List permission requirements for each endpoint"
]
},
"flow_control": {
"pre_analysis": [
{
"step": "analyze_auth",
"command": "bash(rg -n 'auth|guard|jwt|bearer' -i --type ts -g '!*.test.*' 2>/dev/null | head -50)",
"output_to": "auth_patterns"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate authentication documentation",
"cli_prompt": "PURPOSE: Generate comprehensive authentication documentation for API security\nTASK:\n• Document authentication mechanism: JWT Bearer Token\n• Explain header format: Authorization: Bearer <token>\n• Describe token lifecycle: acquisition, refresh, expiration handling\n• Define permission levels: public, user, admin, super_admin\n• Document authentication failure responses: 401/403 error handling\nMODE: write\nCONTEXT: @[auth_patterns] @src/**/auth/**/* @src/**/guard/**/*\nEXPECTED: Complete authentication guide in {lang}\nCONSTRAINTS: Include code examples | Clear step-by-step instructions\n--rule development-feature",
"output": "{auth_doc_name}"
}
],
"target_files": [
".workflow/docs/${project_name}/api/{overview_dir}/{auth_doc_name}"
]
}
}
```
### Level 1-3: Unified Error Code Specification
```json
{
"id": "IMPL-003",
"title": "Generate unified error code specification",
"status": "pending",
"meta": {
"type": "swagger-error-codes",
"agent": "@doc-generator",
"tool": "gemini"
},
"context": {
"requirements": [
"Define unified error response format",
"Create categorized error code system (auth, business, system)",
"Provide detailed description and examples for each error code"
]
},
"flow_control": {
"implementation_approach": [
{
"step": 1,
"title": "Generate error code specification document",
"cli_prompt": "PURPOSE: Generate comprehensive error code specification for consistent API error handling\nTASK:\n• Define error response format: {code, message, details, timestamp}\n• Document authentication errors (AUTH_xxx): 401/403 series\n• Document parameter errors (PARAM_xxx): 400 series\n• Document business errors (BIZ_xxx): business logic errors\n• Document system errors (SYS_xxx): 500 series\n• For each error code: HTTP status, error message, possible causes, resolution suggestions\nMODE: write\nCONTEXT: @src/**/*.exception.ts @src/**/*.filter.ts\nEXPECTED: Complete error code specification in {lang} with tables and examples\nCONSTRAINTS: Include response examples | Clear categorization\n--rule development-feature",
"output": "{error_doc_name}"
}
],
"target_files": [
".workflow/docs/${project_name}/api/{overview_dir}/{error_doc_name}"
]
}
}
```
### Level 2: Module API Documentation (Template)
```json
{
"id": "IMPL-${module_task_id}",
"title": "Generate ${module_name} API documentation",
"status": "pending",
"depends_on": ["IMPL-001", "IMPL-002", "IMPL-003"],
"meta": {
"type": "swagger-module-doc",
"agent": "@doc-generator",
"tool": "gemini",
"module": "${module_name}",
"endpoint_count": "${endpoint_count}"
},
"context": {
"requirements": [
"Complete documentation for all endpoints in this module",
"Each endpoint: description, method, URL, parameters, responses",
"Include success and failure response examples",
"Mark API version and last update time"
],
"focus_paths": ["${module_source_paths}"]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_module_endpoints",
"action": "Load module endpoint information",
"commands": [
"bash(cat ${session_dir}/.process/swagger-planning-data.json | jq '.api_structure.modules[] | select(.name == \"${module_name}\")')"
],
"output_to": "module_endpoints"
},
{
"step": "read_source_files",
"action": "Read module source files",
"commands": [
"bash(cat ${module_source_files})"
],
"output_to": "source_code"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate module API documentation",
"description": "Generate complete API documentation for ${module_name}",
"cli_prompt": "PURPOSE: Generate complete RESTful API documentation for ${module_name} module\nTASK:\n• Create module overview: purpose, use cases, prerequisites\n• Generate endpoint index: grouped by functionality\n• For each endpoint document:\n - Functional description: purpose and business context\n - Request method: GET/POST/PUT/DELETE\n - URL path: complete API path\n - Request headers: Authorization and other required headers\n - Path parameters: {id} and other path variables\n - Query parameters: pagination, filters, etc.\n - Request body: JSON Schema format\n - Response body: success and error responses\n - Field description table: type, required, example, description\n• Add usage examples: cURL, JavaScript, Python\n• Add version info: v1.0.0, last updated date\nMODE: write\nCONTEXT: @[module_endpoints] @[source_code]\nEXPECTED: Complete module API documentation in {lang} with all endpoints fully documented\nCONSTRAINTS: RESTful standards | Include all response codes\n--rule documentation-swagger-api",
"output": "${module_doc_name}"
}
],
"target_files": [
".workflow/docs/${project_name}/api/${module_dir}/${module_doc_name}"
]
}
}
```
### Level 3: API Overview and Navigation
```json
{
"id": "IMPL-${overview_task_id}",
"title": "Generate API overview and navigation",
"status": "pending",
"depends_on": ["IMPL-001", "...", "IMPL-${last_module_task_id}"],
"meta": {
"type": "swagger-overview",
"agent": "@doc-generator",
"tool": "gemini"
},
"flow_control": {
"pre_analysis": [
{
"step": "load_all_docs",
"command": "bash(find .workflow/docs/${project_name}/api -type f -name '*.md' ! -path '*/{overview_dir}/*' | xargs cat)",
"output_to": "all_module_docs"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate API overview",
"cli_prompt": "PURPOSE: Generate API overview document with navigation and quick start guide\nTASK:\n• Create introduction: system features, tech stack, version\n• Write quick start guide: authentication, first request example\n• Build module navigation: categorized links to all modules\n• Document environment configuration: development, staging, production\n• List SDKs and tools: client libraries, Postman collection\nMODE: write\nCONTEXT: @[all_module_docs] @.workflow/docs/${project_name}/api/swagger.yaml\nEXPECTED: Complete API overview in {lang} with navigation links\nCONSTRAINTS: Clear structure | Quick start focus\n--rule development-feature",
"output": "README.md"
}
],
"target_files": [
".workflow/docs/${project_name}/api/{overview_dir}/README.md"
]
}
}
```
### Level 4: Validation Tasks
```json
{
"id": "IMPL-${test_task_id}",
"title": "API endpoint validation tests",
"status": "pending",
"depends_on": ["IMPL-${overview_task_id}"],
"meta": {
"type": "swagger-validation",
"agent": "@test-fix-agent",
"tool": "codex"
},
"context": {
"requirements": [
"Validate accessibility of all endpoints",
"Test various boundary conditions",
"Verify exception handling"
]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_swagger_spec",
"command": "bash(cat .workflow/docs/${project_name}/api/swagger.yaml)",
"output_to": "swagger_spec"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate test report",
"cli_prompt": "PURPOSE: Generate comprehensive API test validation report\nTASK:\n• Document test environment configuration\n• Calculate endpoint coverage statistics\n• Report test results: pass/fail counts\n• Document boundary tests: parameter limits, null values, special characters\n• Document exception tests: auth failures, permission denied, resource not found\n• List issues found with recommendations\nMODE: write\nCONTEXT: @[swagger_spec]\nEXPECTED: Complete test report in {lang} with detailed results\nCONSTRAINTS: Include test cases | Clear pass/fail status\n--rule development-tests",
"output": "{test_doc_name}"
}
],
"target_files": [
".workflow/docs/${project_name}/api/{test_dir}/{test_doc_name}"
]
}
}
```
## Language-Specific Directory Mapping
| Component | --lang zh | --lang en |
|-----------|-----------|-----------|
| Overview dir | 概述 | overview |
| Auth doc | 认证说明.md | authentication.md |
| Error doc | 错误码规范.md | error-codes.md |
| Changelog | 版本历史.md | changelog.md |
| Users module | 用户模块 | users |
| Orders module | 订单模块 | orders |
| Products module | 商品模块 | products |
| Test dir | 测试报告 | test-reports |
| API test doc | 接口测试.md | api-tests.md |
| Boundary test doc | 边界测试.md | boundary-tests.md |
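A sketch of this table as a lookup for the `{overview_dir}` / `{auth_doc_name}` / `{test_dir}` placeholders used in the task templates above:
```javascript
// Placeholder substitutions per --lang (values taken from the table above)
const LANG_NAMES = {
  zh: { overview_dir: "概述", auth_doc_name: "认证说明.md", error_doc_name: "错误码规范.md",
        changelog: "版本历史.md", test_dir: "测试报告", test_doc_name: "接口测试.md" },
  en: { overview_dir: "overview", auth_doc_name: "authentication.md", error_doc_name: "error-codes.md",
        changelog: "changelog.md", test_dir: "test-reports", test_doc_name: "api-tests.md" }
};
const substitute = (lang, key) => (LANG_NAMES[lang] ?? LANG_NAMES.zh)[key];
```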
## API Documentation Template
### Single Endpoint Format
Each endpoint must include:
```markdown
### Get User Details
**Description**: Retrieve detailed user information by ID, including profile and permissions.
**Endpoint Info**:
| Property | Value |
|----------|-------|
| Method | GET |
| URL | `/api/v1/users/{id}` |
| Version | v1.0.0 |
| Updated | 2025-01-01 |
| Auth | Bearer Token |
| Permission | user / admin |
**Request Headers**:
| Field | Type | Required | Example | Description |
|-------|------|----------|---------|-------------|
| Authorization | string | Yes | `Bearer eyJhbGc...` | JWT Token |
| Content-Type | string | No | `application/json` | Request content type |
**Path Parameters**:
| Field | Type | Required | Example | Description |
|-------|------|----------|---------|-------------|
| id | string | Yes | `usr_123456` | Unique user identifier |
**Query Parameters**:
| Field | Type | Required | Default | Example | Description |
|-------|------|----------|---------|---------|-------------|
| include | string | No | - | `roles,permissions` | Related data to include |
**Success Response** (200 OK):
```json
{
"code": 0,
"message": "success",
"data": {
"id": "usr_123456",
"email": "user@example.com",
"name": "John Doe",
"status": "active",
"roles": ["user"],
"created_at": "2025-01-01T00:00:00Z",
"updated_at": "2025-01-01T00:00:00Z"
},
"timestamp": "2025-01-01T12:00:00Z"
}
```
**Response Fields**:
| Field | Type | Description |
|-------|------|-------------|
| code | integer | Business status code, 0 = success |
| message | string | Response message |
| data.id | string | Unique user identifier |
| data.email | string | User email address |
| data.name | string | User display name |
| data.status | string | User status: active/inactive/suspended |
| data.roles | array | User role list |
| data.created_at | string | Creation timestamp (ISO 8601) |
| data.updated_at | string | Last update timestamp (ISO 8601) |
**Error Responses**:
| Status | Code | Message | Possible Cause |
|--------|------|---------|----------------|
| 401 | AUTH_001 | Invalid or expired token | Token format error or expired |
| 403 | AUTH_003 | Insufficient permissions | No access to this user info |
| 404 | USER_001 | User not found | User ID doesn't exist or deleted |
**Examples**:
```bash
# cURL
curl -X GET "https://api.example.com/api/v1/users/usr_123456" \
-H "Authorization: Bearer eyJhbGc..." \
-H "Content-Type: application/json"
```
```javascript
// JavaScript (fetch)
const response = await fetch('https://api.example.com/api/v1/users/usr_123456', {
method: 'GET',
headers: {
'Authorization': 'Bearer eyJhbGc...',
'Content-Type': 'application/json'
}
});
const data = await response.json();
```
```
## Session Structure
```
.workflow/active/
└── WFS-swagger-{timestamp}/
├── workflow-session.json
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
│ └── swagger-planning-data.json
└── .task/
├── IMPL-001.json # OpenAPI spec
├── IMPL-002.json # Security config
├── IMPL-003.json # Error codes
├── IMPL-004.json # Module 1 API
├── ...
├── IMPL-N+1.json # API overview
└── IMPL-N+2.json # Validation tests
```
## Execution Commands
```bash
# Execute entire workflow
/workflow:execute
# Specify session
/workflow:execute --resume-session="WFS-swagger-yyyymmdd-hhmmss"
# Single task execution
/task:execute IMPL-001
```
## Related Commands
- `/workflow:execute` - Execute documentation tasks
- `/workflow:status` - View task progress
- `/workflow:session:complete` - Mark session complete
- `/memory:docs` - General documentation workflow

View File

@@ -1,310 +0,0 @@
---
name: tech-research-rules
description: "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)"
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Tech Stack Rules Generator
## Overview
**Purpose**: Generate multi-layered, path-conditional rules that Claude Code automatically loads based on file context.
**Output Structure**:
```
.claude/rules/tech/{tech-stack}/
├── core.md # paths: **/*.{ext} - Core principles
├── patterns.md # paths: src/**/*.{ext} - Implementation patterns
├── testing.md # paths: **/*.{test,spec}.{ext} - Testing rules
├── config.md # paths: *.config.* - Configuration rules
├── api.md # paths: **/api/**/* - API rules (backend only)
├── components.md # paths: **/components/**/* - Component rules (frontend only)
└── metadata.json # Generation metadata
```
**Templates Location**: `~/.claude/workflows/cli-templates/prompts/rules/`
---
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization
2. **Path-Conditional Output**: Every rule file includes `paths` frontmatter
3. **Template-Driven**: Agent reads templates before generating content
4. **Agent Produces Files**: Agent writes all rule files directly
5. **No Manual Loading**: Rules auto-activate when Claude works with matching files
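For illustration, assuming a TypeScript stack, the frontmatter of a generated `core.md` would carry its path condition (the glob is quoted here for YAML validity):
```yaml
---
paths: "**/*.{ts,tsx}"
---
```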
---
## 3-Phase Execution
### Phase 1: Prepare Context & Detect Tech Stack
**Goal**: Detect input mode, extract tech stack info, determine file extensions
**Input Mode Detection**:
```bash
input="$1"
if [[ "$input" == WFS-* ]]; then
MODE="session"
SESSION_ID="$input"
# Read workflow-session.json to extract tech stack
else
MODE="direct"
TECH_STACK_NAME="$input"
fi
```
**Tech Stack Analysis**:
```javascript
// Decompose composite tech stacks
// "typescript-react-nextjs" → ["typescript", "react", "nextjs"]
const TECH_EXTENSIONS = {
"typescript": "{ts,tsx}",
"javascript": "{js,jsx}",
"python": "py",
"rust": "rs",
"go": "go",
"java": "java",
"csharp": "cs",
"ruby": "rb",
"php": "php"
};
const FRAMEWORK_TYPE = {
"react": "frontend",
"vue": "frontend",
"angular": "frontend",
"nextjs": "fullstack",
"nuxt": "fullstack",
"fastapi": "backend",
"express": "backend",
"django": "backend",
"rails": "backend"
};
```
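A minimal sketch of the decomposition step using the tables above (the helper name is an assumption):
```javascript
// "typescript-react-nextjs" → components, primary language, extensions, type
function analyzeTechStack(name) {
  const components = name.toLowerCase().split("-");
  const primaryLang = components.find(c => TECH_EXTENSIONS[c]) || components[0];
  const frameworks = components.filter(c => FRAMEWORK_TYPE[c]);
  return {
    COMPONENTS: components,
    PRIMARY_LANG: primaryLang,
    FILE_EXT: TECH_EXTENSIONS[primaryLang] || "*",
    // fullstack wins over frontend/backend; default to library w/o framework
    FRAMEWORK_TYPE: frameworks.some(f => FRAMEWORK_TYPE[f] === "fullstack")
      ? "fullstack"
      : (FRAMEWORK_TYPE[frameworks[0]] || "library")
  };
}
```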
**Check Existing Rules**:
```bash
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
rules_dir=".claude/rules/tech/${normalized_name}"
existing_count=$(find "${rules_dir}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Skip Decision**:
- If `existing_count > 0` AND no `--regenerate` → `SKIP_GENERATION = true`
- If `--regenerate` → Delete existing and regenerate
**Output Variables**:
- `TECH_STACK_NAME`: Normalized name
- `PRIMARY_LANG`: Primary language
- `FILE_EXT`: File extension pattern
- `FRAMEWORK_TYPE`: frontend | backend | fullstack | library
- `COMPONENTS`: Array of tech components
- `SKIP_GENERATION`: Boolean
**TodoWrite**: Mark phase 1 completed
---
### Phase 2: Agent Produces Path-Conditional Rules
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Delegate to agent for Exa research and rule file generation
**Template Files**:
```
~/.claude/workflows/cli-templates/prompts/rules/
├── tech-rules-agent-prompt.txt # Agent instructions
├── rule-core.txt # Core principles template
├── rule-patterns.txt # Implementation patterns template
├── rule-testing.txt # Testing rules template
├── rule-config.txt # Configuration rules template
├── rule-api.txt # API rules template (backend)
└── rule-components.txt # Component rules template (frontend)
```
**Agent Task**:
```javascript
Task({
subagent_type: "general-purpose",
description: `Generate tech stack rules: ${TECH_STACK_NAME}`,
prompt: `
You are generating path-conditional rules for Claude Code.
## Context
- Tech Stack: ${TECH_STACK_NAME}
- Primary Language: ${PRIMARY_LANG}
- File Extensions: ${FILE_EXT}
- Framework Type: ${FRAMEWORK_TYPE}
- Components: ${JSON.stringify(COMPONENTS)}
- Output Directory: .claude/rules/tech/${TECH_STACK_NAME}/
## Instructions
Read the agent prompt template for detailed instructions.
Use --rule rules-tech-rules-agent-prompt to load the template automatically.
## Execution Steps
1. Execute Exa research queries (see agent prompt)
2. Read each rule template
3. Generate rule files following template structure
4. Write files to output directory
5. Write metadata.json
6. Report completion
## Variable Substitutions
Replace in templates:
- {TECH_STACK_NAME} → ${TECH_STACK_NAME}
- {PRIMARY_LANG} → ${PRIMARY_LANG}
- {FILE_EXT} → ${FILE_EXT}
- {FRAMEWORK_TYPE} → ${FRAMEWORK_TYPE}
`
})
```
**Completion Criteria**:
- 4-6 rule files written with proper `paths` frontmatter
- metadata.json written
- Agent reports files created
**TodoWrite**: Mark phase 2 completed
---
### Phase 3: Verify & Report
**Goal**: Verify generated files and provide usage summary
**Steps**:
1. **Verify Files**:
```bash
find ".claude/rules/tech/${TECH_STACK_NAME}" -name "*.md" -type f
```
2. **Validate Frontmatter**:
```bash
head -5 ".claude/rules/tech/${TECH_STACK_NAME}/core.md"
```
3. **Read Metadata**:
```javascript
Read(`.claude/rules/tech/${TECH_STACK_NAME}/metadata.json`)
```
4. **Generate Summary Report**:
```
Tech Stack Rules Generated
Tech Stack: {TECH_STACK_NAME}
Location: .claude/rules/tech/{TECH_STACK_NAME}/
Files Created:
├── core.md → paths: **/*.{ext}
├── patterns.md → paths: src/**/*.{ext}
├── testing.md → paths: **/*.{test,spec}.{ext}
├── config.md → paths: *.config.*
├── api.md → paths: **/api/**/* (if backend)
└── components.md → paths: **/components/**/* (if frontend)
Auto-Loading:
- Rules apply automatically when editing matching files
- No manual loading required
Example Activation:
- Edit src/components/Button.tsx → core.md + patterns.md + components.md
- Edit tests/api.test.ts → core.md + testing.md
- Edit package.json → config.md
```
**TodoWrite**: Mark phase 3 completed
---
## Path Pattern Reference
| Pattern | Matches |
|---------|---------|
| `**/*.ts` | All .ts files |
| `src/**/*` | All files under src/ |
| `*.config.*` | Config files in root |
| `**/*.{ts,tsx}` | .ts and .tsx files |
**Common Stack Patterns**:
| Tech Stack | Core Pattern | Test Pattern |
|------------|--------------|--------------|
| TypeScript | `**/*.{ts,tsx}` | `**/*.{test,spec}.{ts,tsx}` |
| Python | `**/*.py` | `**/test_*.py, **/*_test.py` |
| Rust | `**/*.rs` | `**/tests/**/*.rs` |
| Go | `**/*.go` | `**/*_test.go` |
---
## Parameters
```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate]
```
**Arguments**:
- **session-id**: `WFS-*` format - Extract from workflow session
- **tech-stack-name**: Direct input - `"typescript"`, `"typescript-react"`
- **--regenerate**: Force regenerate existing rules
---
## Examples
### Single Language
```bash
/memory:tech-research "typescript"
```
**Output**: `.claude/rules/tech/typescript/` with 4 rule files
### Frontend Stack
```bash
/memory:tech-research "typescript-react"
```
**Output**: `.claude/rules/tech/typescript-react/` with 5 rule files (includes components.md)
### Backend Stack
```bash
/memory:tech-research "python-fastapi"
```
**Output**: `.claude/rules/tech/python-fastapi/` with 5 rule files (includes api.md)
### From Session
```bash
/memory:tech-research WFS-user-auth-20251104
```
**Workflow**: Extract tech stack from session → Generate rules
---
## Comparison: Rules vs SKILL
| Aspect | SKILL Memory | Rules |
|--------|--------------|-------|
| Loading | Manual: `Skill("tech")` | Automatic by path |
| Scope | All files when loaded | Only matching files |
| Granularity | Monolithic packages | Per-file-type |
| Context | Full package | Only relevant rules |
**When to Use**:
- **Rules**: Tech stack conventions per file type
- **SKILL**: Reference docs, APIs, examples for manual lookup

View File

@@ -0,0 +1,332 @@
---
name: tips
description: Quick note-taking command to capture ideas, snippets, reminders, and insights for later reference
argument-hint: "<note content> [--tag <tag1,tag2>] [--context <context>]"
allowed-tools: mcp__ccw-tools__core_memory(*), Read(*)
examples:
- /memory:tips "Remember to use Redis for rate limiting"
- /memory:tips "Auth pattern: JWT with refresh tokens" --tag architecture,auth
- /memory:tips "Bug: memory leak in WebSocket handler after 24h" --context websocket-service
- /memory:tips "Performance: lazy loading reduced bundle by 40%" --tag performance
---
# Memory Tips Command (/memory:tips)
## 1. Overview
The `memory:tips` command provides **quick note-taking** for capturing:
- Quick ideas and insights
- Code snippets and patterns
- Reminders and follow-ups
- Bug notes and debugging hints
- Performance observations
- Architecture decisions
- Library/tool recommendations
**Core Philosophy**:
- **Speed First**: Minimal friction for capturing thoughts
- **Searchable**: Tagged for easy retrieval
- **Context-Aware**: Optional context linking
- **Lightweight**: No complex session analysis
## 2. Parameters
- `<note content>` (Required): The tip/note content to save
- `--tag <tags>` (Optional): Comma-separated tags for categorization
- `--context <context>` (Optional): Related context (file, module, feature)
**Examples**:
```bash
/memory:tips "Use Zod for runtime validation - better DX than class-validator"
/memory:tips "Redis connection pool: max 10, min 2" --tag config,redis
/memory:tips "Fix needed: race condition in payment processor" --tag bug,payment --context src/payments
```
## 3. Structured Output Format
```markdown
## Tip ID
TIP-YYYYMMDD-HHMMSS
## Timestamp
YYYY-MM-DD HH:MM:SS
## Project Root
[Absolute path to project root, e.g., D:\Claude_dms3]
## Content
[The tip/note content exactly as provided]
## Tags
[Comma-separated tags, or (none)]
## Context
[Optional context linking - file, module, or feature reference]
## Session Link
[WFS-ID if workflow session active, otherwise (none)]
## Auto-Detected Context
[Files/topics from current conversation if relevant]
```
## 4. Field Definitions
| Field | Purpose | Example |
|-------|---------|---------|
| **Tip ID** | Unique identifier with timestamp | TIP-20260128-143052 |
| **Timestamp** | When tip was created | 2026-01-28 14:30:52 |
| **Project Root** | Current project path | D:\Claude_dms3 |
| **Content** | The actual tip/note | "Use Redis for rate limiting" |
| **Tags** | Categorization labels | architecture, auth, performance |
| **Context** | Related code/feature | src/auth/**, payment-module |
| **Session Link** | Link to workflow session | WFS-auth-20260128 |
| **Auto-Detected Context** | Files from conversation | src/api/handler.ts |
## 5. Execution Flow
### Step 1: Parse Arguments
```javascript
const parseTipsCommand = (input) => {
  // Extract note content: a quoted string, or everything before the first flag
  const contentMatch = input.match(/^"([^"]+)"|^(.+?)(?=\s+--|$)/);
  const content = contentMatch ? (contentMatch[1] || contentMatch[2]).trim() : '';
  // Extract comma-separated tags (\S+ allows hyphens inside tag names)
  const tagsMatch = input.match(/--tag\s+(\S+)/);
  const tags = tagsMatch ? tagsMatch[1].split(',').map(t => t.trim()) : [];
  // Extract context reference (file, module, or feature)
  const contextMatch = input.match(/--context\s+(\S+)/);
  const context = contextMatch ? contextMatch[1] : '';
  return { content, tags, context };
};
```
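For example, with the regexes above:
```javascript
parseTipsCommand('"Redis connection pool: max 10, min 2" --tag config,redis');
// → { content: "Redis connection pool: max 10, min 2",
//     tags: ["config", "redis"], context: "" }
```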
### Step 2: Gather Context
```javascript
const gatherTipContext = async () => {
// Get project root
const projectRoot = process.cwd(); // or detect from environment
// Get current session if active
const manifest = await mcp__ccw-tools__session_manager({
operation: "list",
location: "active"
});
const sessionId = manifest.sessions?.[0]?.id || null;
// Auto-detect files from recent conversation
const recentFiles = extractRecentFilesFromConversation(); // placeholder helper: scan last ~5 messages for file references
return {
projectRoot,
sessionId,
autoDetectedContext: recentFiles
};
};
```
### Step 3: Generate Structured Text
```javascript
const generateTipText = (parsed, context) => {
const timestamp = new Date().toISOString().replace('T', ' ').slice(0, 19);
const tipId = `TIP-${new Date().toISOString().slice(0,10).replace(/-/g, '')}-${new Date().toTimeString().slice(0,8).replace(/:/g, '')}`;
return `## Tip ID
${tipId}
## Timestamp
${timestamp}
## Project Root
${context.projectRoot}
## Content
${parsed.content}
## Tags
${parsed.tags.length > 0 ? parsed.tags.join(', ') : '(none)'}
## Context
${parsed.context || '(none)'}
## Session Link
${context.sessionId || '(none)'}
## Auto-Detected Context
${context.autoDetectedContext.length > 0
? context.autoDetectedContext.map(f => `- ${f}`).join('\n')
: '(none)'}`;
};
```
### Step 4: Save to Core Memory
```javascript
mcp__ccw-tools__core_memory({
operation: "import",
text: structuredText
})
```
**Response Format**:
```json
{
"operation": "import",
"id": "CMEM-YYYYMMDD-HHMMSS",
"message": "Created memory: CMEM-YYYYMMDD-HHMMSS"
}
```
### Step 5: Confirm to User
```
✓ Tip saved successfully
ID: CMEM-YYYYMMDD-HHMMSS
Tags: architecture, auth
Context: src/auth/**
To retrieve: /memory:search "auth patterns"
Or via MCP: core_memory(operation="search", query="auth")
```
## 6. Tag Categories (Suggested)
**Technical**:
- `architecture` - Design decisions and patterns
- `performance` - Optimization insights
- `security` - Security considerations
- `bug` - Bug notes and fixes
- `config` - Configuration settings
- `api` - API design patterns
**Development**:
- `testing` - Test strategies and patterns
- `debugging` - Debugging techniques
- `refactoring` - Refactoring notes
- `documentation` - Doc improvements
**Domain Specific**:
- `auth` - Authentication/authorization
- `database` - Database patterns
- `frontend` - UI/UX patterns
- `backend` - Backend logic
- `devops` - Infrastructure and deployment
**Organizational**:
- `reminder` - Follow-up items
- `research` - Research findings
- `idea` - Feature ideas
- `review` - Code review notes
## 7. Search Integration
Tips can be retrieved using:
```bash
# Via command (if /memory:search exists)
/memory:search "rate limiting"
# Via MCP tool
mcp__ccw-tools__core_memory({
operation: "search",
query: "rate limiting",
source_type: "core_memory",
top_k: 10
})
# Via CLI
ccw core-memory search --query "rate limiting" --top-k 10
```
## 8. Quality Checklist
Before saving:
- [ ] Content is clear and actionable
- [ ] Tags are relevant and consistent
- [ ] Context provides enough reference
- [ ] Auto-detected context is accurate
- [ ] Project root is absolute path
- [ ] Timestamp is properly formatted
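A minimal sketch of how this checklist could be enforced before the import call; the length and tag-format thresholds are illustrative assumptions, not fixed requirements:
```javascript
// Pre-save validation sketch (thresholds are assumptions)
const validateTip = (parsed, context) => {
  const errors = [];
  if (!parsed.content || parsed.content.length < 10) {
    errors.push('Content too short to be clear and actionable');
  }
  if (parsed.tags.some(t => !/^[a-z][a-z0-9-]*$/.test(t))) {
    errors.push('Tags should be lowercase and consistent');
  }
  if (!context.projectRoot || !/^(\/|[A-Za-z]:)/.test(context.projectRoot)) {
    errors.push('Project root must be an absolute path');
  }
  return { valid: errors.length === 0, errors };
};
```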
## 9. Best Practices
### Good Tips Examples
**Specific and Actionable**:
```
"Use connection pooling for Redis: { max: 10, min: 2, acquireTimeoutMillis: 30000 }"
--tag config,redis
```
**With Context**:
```
"Auth middleware must validate both access and refresh tokens"
--tag security,auth --context src/middleware/auth.ts
```
**Problem + Solution**:
```
"Memory leak fixed by unsubscribing event listeners in componentWillUnmount"
--tag bug,react --context src/components/Chat.tsx
```
### Poor Tips Examples
**Too Vague**:
```
"Fix the bug" --tag bug
```
**Too Long** (use /memory:compact instead):
```
"Here's the complete implementation plan for the entire auth system... [3 paragraphs]"
```
**No Context**:
```
"Remember to update this later"
```
## 10. Use Cases
### During Development
```bash
/memory:tips "JWT secret must be 256-bit minimum" --tag security,auth
/memory:tips "Use debounce (300ms) for search input" --tag performance,ux
```
### After Bug Fixes
```bash
/memory:tips "Race condition in payment: lock with Redis SETNX" --tag bug,payment
```
### Code Review Insights
```bash
/memory:tips "Prefer early returns over nested ifs" --tag style,readability
```
### Architecture Decisions
```bash
/memory:tips "Chose PostgreSQL over MongoDB for ACID compliance" --tag architecture,database
```
### Library Recommendations
```bash
/memory:tips "Zod > Yup for TypeScript validation - better type inference" --tag library,typescript
```
## 11. Notes
- **Frequency**: Use liberally - capture all valuable insights
- **Retrieval**: Search by tags, content, or context
- **Lifecycle**: Tips persist across sessions
- **Organization**: Tags enable filtering and categorization
- **Integration**: Can reference tips in later workflows
- **Lightweight**: No complex session analysis required

View File

@@ -1,517 +0,0 @@
---
name: workflow-skill-memory
description: Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)
argument-hint: "session <session-id> | all"
allowed-tools: Task(*), TodoWrite(*), Bash(*), Read(*), Write(*)
---
# Workflow SKILL Memory Generator
## Overview
Generate SKILL package from archived workflow sessions using agent-driven analysis. Supports single-session incremental updates or parallel processing of all sessions.
**Scope**: Only processes WFS-* workflow sessions. Other session types (e.g., doc sessions) are automatically ignored.
## Usage
```bash
/memory:workflow-skill-memory session WFS-<session-id> # Process single WFS session
/memory:workflow-skill-memory all # Process all WFS sessions in parallel
```
## Execution Modes
### Mode 1: Single Session (`session <session-id>`)
**Purpose**: Incremental update - process one archived session and merge into existing SKILL package
**Workflow**:
1. **Validate session**: Check if session exists in `.workflow/.archives/{session-id}/`
2. **Invoke agent**: Call `universal-executor` to analyze session and update SKILL documents
3. **Agent tasks**:
- Read session data from `.workflow/.archives/{session-id}/`
- Extract lessons, conflicts, and outcomes
- Use Gemini for intelligent aggregation (optional)
- Update or create SKILL documents using templates
- Regenerate SKILL.md index
**Command Example**:
```bash
/memory:workflow-skill-memory session WFS-user-auth
```
**Expected Output**:
```
Session WFS-user-auth processed
Updated:
- sessions-timeline.md (1 session added)
- lessons-learned.md (3 lessons merged)
- conflict-patterns.md (1 conflict added)
- SKILL.md (index regenerated)
```
---
### Mode 2: All Sessions (`all`)
**Purpose**: Full regeneration - process all archived sessions in parallel for complete SKILL package
**Workflow**:
1. **List sessions**: Read manifest.json to get all archived session IDs
2. **Parallel invocation**: Launch multiple `universal-executor` agents in parallel (one per session)
3. **Agent coordination**:
- Each agent processes one session independently
- Agents use Gemini for analysis
- Agents collect data into JSON (no direct file writes)
- Final aggregator agent merges results and generates SKILL documents
**Command Example**:
```bash
/memory:workflow-skill-memory all
```
**Expected Output**:
```
All sessions processed in parallel
Sessions: 8 total
Updated:
- sessions-timeline.md (8 sessions)
- lessons-learned.md (24 lessons aggregated)
- conflict-patterns.md (12 conflicts documented)
- SKILL.md (index regenerated)
```
---
## Implementation Flow
### Phase 1: Validation and Setup
**Step 1.1: Parse Command Arguments**
Extract mode and session ID:
```javascript
if (args === "all") {
mode = "all"
} else if (args.startsWith("session ")) {
mode = "session"
session_id = args.replace("session ", "").trim()
} else {
ERROR = "Invalid arguments. Usage: session <session-id> | all"
EXIT
}
```
**Step 1.2: Validate Archive Directory**
```bash
bash(test -d .workflow/.archives && echo "exists" || echo "missing")
```
If missing, report error and exit.
**Step 1.3: Mode-Specific Validation**
**Single Session Mode**:
```bash
# Validate session ID format (must start with WFS-)
if [[ ! "$session_id" =~ ^WFS- ]]; then
ERROR = "Invalid session ID format. Only WFS-* sessions are supported"
EXIT
fi
# Check if session exists
bash(test -d .workflow/.archives/{session_id} && echo "exists" || echo "missing")
```
If missing, report error: "Session {session_id} not found in archives"
**All Sessions Mode**:
```bash
# Read manifest and filter only WFS- sessions
bash(cat .workflow/.archives/manifest.json | jq -r '.archives[].session_id | select(startswith("WFS-"))')
```
Store filtered session IDs in array. Ignore doc sessions and other non-WFS sessions.
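Equivalently, a JavaScript sketch of the same filtering (assumes manifest.json has the `archives[].session_id` shape used above):
```javascript
const fs = require('fs');

// Keep only WFS-* workflow sessions; doc sessions and others are dropped
const manifest = JSON.parse(fs.readFileSync('.workflow/.archives/manifest.json', 'utf8'));
const wfsSessions = manifest.archives
  .map(entry => entry.session_id)
  .filter(id => id.startsWith('WFS-'));
```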
**Step 1.4: TodoWrite Initialization**
**Single Session Mode**:
```javascript
TodoWrite({todos: [
  {"content": "Validate session existence", "status": "completed", "activeForm": "Validating session"},
  {"content": "Invoke agent to process session", "status": "in_progress", "activeForm": "Invoking agent"},
  {"content": "Verify SKILL package updated", "status": "pending", "activeForm": "Verifying update"}
]})
```
**All Sessions Mode**:
```javascript
TodoWrite({todos: [
  {"content": "Read manifest and list sessions", "status": "completed", "activeForm": "Reading manifest"},
  {"content": "Invoke agents in parallel", "status": "in_progress", "activeForm": "Invoking agents"},
  {"content": "Verify SKILL package regenerated", "status": "pending", "activeForm": "Verifying regeneration"}
]})
```
---
### Phase 2: Agent Invocation
#### Single Session Mode - Agent Task
Invoke `universal-executor` with session-specific task:
**Agent Prompt Structure**:
```
Task: Process Workflow Session for SKILL Package
Context:
- Session ID: {session_id}
- Session Path: .workflow/.archives/{session_id}/
- Mode: Incremental update
Objectives:
1. Read session data:
- workflow-session.json (metadata)
- IMPL_PLAN.md (implementation summary)
- TODO_LIST.md (if exists)
- manifest.json entry for lessons
2. Extract key information:
- Description, tags, metrics
- Lessons (successes, challenges, watch_patterns)
- Context package path (reference only)
- Key outcomes from IMPL_PLAN
3. Use Gemini for aggregation (optional):
Command pattern:
ccw cli -p "
PURPOSE: Extract lessons and conflicts from workflow session
TASK:
• Analyze IMPL_PLAN and lessons from manifest
• Identify success patterns and challenges
• Extract conflict patterns with resolutions
• Categorize by functional domain
MODE: analysis
CONTEXT: @IMPL_PLAN.md @workflow-session.json
EXPECTED: Structured lessons and conflicts in JSON format
RULES: Template reference from skill-aggregation.txt
" --tool gemini --mode analysis --cd .workflow/.archives/{session_id}
3.5. **Generate SKILL.md Description** (CRITICAL for auto-loading):
Read skill-index.txt template Section: "Description Field Generation"
Execute command to get project root:
```bash
git rev-parse --show-toplevel # Example output: /d/Claude_dms3
```
Apply description format:
```
Progressive workflow development history (located at {project_root}).
Load this SKILL when continuing development, analyzing past implementations,
or learning from workflow history, especially when no relevant context exists in memory.
```
**Validation**:
- [ ] Path uses forward slashes (not backslashes)
- [ ] All three use cases present
- [ ] Trigger optimization phrase included
- [ ] Path is absolute (starts with / or drive letter)
4. Read templates for formatting guidance:
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-sessions-timeline.txt
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-lessons-learned.txt
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-conflict-patterns.txt
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-index.txt
**CRITICAL**: From skill-index.txt, read these sections:
- "Description Field Generation" - Rules for generating description
- "Variable Substitution Guide" - All required variables
- "Generation Instructions" - Step-by-step generation process
- "Validation Checklist" - Final validation steps
5. Update SKILL documents:
- sessions-timeline.md: Append new session, update domain grouping
- lessons-learned.md: Merge lessons into categories, update frequencies
- conflict-patterns.md: Add conflicts, update recurring pattern frequencies
- SKILL.md: Regenerate index with updated counts
**For SKILL.md generation**:
- Follow "Generation Instructions" from skill-index.txt (Steps 1-7)
- Use git command for project_root: `git rev-parse --show-toplevel`
- Apply "Description Field Generation" rules
- Validate using "Validation Checklist"
- Increment version (patch level)
6. Return result JSON:
   {
     "status": "success",
     "session_id": "{session_id}",
     "updates": {
       "sessions_added": 1,
       "lessons_merged": count,
       "conflicts_added": count
     }
   }
```
---
#### All Sessions Mode - Parallel Agent Tasks
**Step 2.1: Launch parallel session analyzers**
Invoke multiple agents in parallel (one message with multiple Task calls):
**Per-Session Agent Prompt**:
```
Task: Extract Session Data for SKILL Package
Context:
- Session ID: {session_id}
- Mode: Parallel analysis (no direct file writes)
Objectives:
1. Read session data (same as single mode)
2. Extract key information (same as single mode)
3. Use Gemini for analysis (same as single mode)
4. Return structured data JSON:
   {
     "status": "success",
     "session_id": "{session_id}",
     "data": {
       "metadata": {
         "description": "...",
         "archived_at": "...",
         "tags": [...],
         "metrics": {...}
       },
       "lessons": {
         "successes": [...],
         "challenges": [...],
         "watch_patterns": [...]
       },
       "conflicts": [
         {
           "type": "architecture|dependencies|testing|performance",
           "pattern": "...",
           "resolution": "...",
           "code_impact": [...]
         }
       ],
       "impl_summary": "First 200 chars of IMPL_PLAN",
       "context_package_path": "..."
     }
   }
```
**Step 2.2: Aggregate results**
After all session agents complete, invoke aggregator agent:
**Aggregator Agent Prompt**:
```
Task: Aggregate Session Results and Generate SKILL Package
Context:
- Mode: Full regeneration
- Input: JSON results from {session_count} session agents
Objectives:
1. Aggregate all session data:
- Collect metadata from all sessions
- Merge lessons by category
- Group conflicts by type
- Sort sessions by date
2. Use Gemini for final aggregation:
ccw cli -p "
PURPOSE: Aggregate lessons and conflicts from all workflow sessions
TASK:
• Group successes by functional domain
• Categorize challenges by severity (HIGH/MEDIUM/LOW)
• Identify recurring conflict patterns
• Calculate frequencies and prioritize
MODE: analysis
CONTEXT: [Provide aggregated JSON data]
EXPECTED: Final aggregated structure for SKILL documents
RULES: Template reference from skill-aggregation.txt
" --tool gemini --mode analysis
3. Read templates for formatting (same 4 templates as single mode)
4. Generate all SKILL documents:
- sessions-timeline.md (all sessions, sorted by date)
- lessons-learned.md (aggregated lessons with frequencies)
- conflict-patterns.md (recurring patterns with resolutions)
- SKILL.md (index with progressive loading)
5. Write files to .claude/skills/workflow-progress/
6. Return result JSON:
   {
     "status": "success",
     "sessions_processed": count,
     "files_generated": ["SKILL.md", "sessions-timeline.md", ...],
     "summary": {
       "total_sessions": count,
       "functional_domains": [...],
       "date_range": "...",
       "lessons_count": count,
       "conflicts_count": count
     }
   }
```
---
### Phase 3: Verification
**Step 3.1: Check SKILL Package Files**
```bash
bash(ls -lh .claude/skills/workflow-progress/)
```
Verify all 4 files exist:
- SKILL.md
- sessions-timeline.md
- lessons-learned.md
- conflict-patterns.md
**Step 3.2: TodoWrite Completion**
Mark all tasks as completed.
**Step 3.3: Display Summary**
**Single Session Mode**:
```
Session {session_id} processed successfully
Updated:
- sessions-timeline.md
- lessons-learned.md
- conflict-patterns.md
- SKILL.md
SKILL Location: .claude/skills/workflow-progress/SKILL.md
```
**All Sessions Mode**:
```
All sessions processed in parallel
Sessions: {count} total
Functional Domains: {domain_list}
Date Range: {earliest} - {latest}
Generated:
- sessions-timeline.md ({count} sessions)
- lessons-learned.md ({lessons_count} lessons)
- conflict-patterns.md ({conflicts_count} conflicts)
- SKILL.md (4-level progressive loading)
SKILL Location: .claude/skills/workflow-progress/SKILL.md
Usage:
- Level 0: Quick refresh (~2K tokens)
- Level 1: Recent history (~8K tokens)
- Level 2: Complete analysis (~25K tokens)
- Level 3: Deep dive (~40K tokens)
```
---
## Agent Guidelines
### Agent Capabilities
**universal-executor agents can**:
- Read files from `.workflow/.archives/`
- Execute bash commands
- Call Gemini CLI for intelligent analysis
- Read template files for formatting guidance
- Write SKILL package files (single mode) or return JSON (parallel mode)
- Return structured results
### Gemini Usage Pattern
**When to use Gemini**:
- Aggregating lessons from multiple sources
- Identifying recurring patterns
- Classifying conflicts by type and severity
- Extracting structured data from IMPL_PLAN
**Fallback Strategy**: If Gemini fails or times out, use direct file parsing with structured extraction logic.
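A minimal sketch of that fallback; `runGemini` is a hypothetical wrapper that rejects on error or timeout, and the checkbox regex is an illustrative stand-in for the real extraction logic:
```javascript
const fs = require('fs');

async function extractLessons(sessionPath) {
  try {
    // Preferred path: intelligent analysis via Gemini
    return await runGemini(sessionPath, { timeoutMs: 60000 }); // hypothetical wrapper
  } catch (err) {
    // Degraded path: direct structured extraction from IMPL_PLAN.md
    const plan = fs.readFileSync(`${sessionPath}/IMPL_PLAN.md`, 'utf8');
    const successes = [...plan.matchAll(/^- \[x\] (.+)$/gm)].map(m => m[1]);
    return { successes, challenges: [], watch_patterns: [], degraded: true };
  }
}
```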
---
## Template System
### Template Files
All templates located in: `~/.claude/workflows/cli-templates/prompts/workflow/`
1. **skill-sessions-timeline.txt**: Format for sessions-timeline.md
2. **skill-lessons-learned.txt**: Format for lessons-learned.md
3. **skill-conflict-patterns.txt**: Format for conflict-patterns.md
4. **skill-index.txt**: Format for SKILL.md index
5. **skill-aggregation.txt**: Rules for Gemini aggregation (existing)
### Template Usage in Agent
**Agents read templates to understand**:
- File structure and markdown format
- Data sources (which files to read)
- Update strategy (incremental vs full)
- Formatting rules and conventions
- Aggregation logic (for Gemini)
**Templates are NOT shown in this command documentation** - agents read them directly as needed.
---
## Error Handling
### Validation Errors
- **No archives directory**: "Error: No workflow archives found at .workflow/.archives/"
- **Invalid session ID format**: "Error: Invalid session ID format. Only WFS-* sessions are supported"
- **Session not found**: "Error: Session {session_id} not found in archives"
- **No WFS sessions in manifest**: "Error: No WFS-* workflow sessions found in manifest.json"
### Agent Errors
- If agent fails, report error message from agent result
- If Gemini times out, agents use fallback direct parsing
- If template read fails, agents use inline format
### Recovery
- Single session mode: Can be retried without affecting other sessions
- All sessions mode: If one agent fails, others continue; retry failed sessions individually
## Integration
### Called by `/workflow:session:complete`
Automatically invoked after session archival:
```bash
SlashCommand(command="/memory:workflow-skill-memory session {session_id}")
```
### Manual Invocation
Users can manually process sessions:
```bash
/memory:workflow-skill-memory session WFS-custom-feature # Single session
/memory:workflow-skill-memory all # Full regeneration
```

View File

@@ -1,208 +0,0 @@
---
name: breakdown
description: Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order
argument-hint: "[-y|--yes] task-id"
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm breakdown, use recommended subtask structure.
# Task Breakdown Command (/task:breakdown)
## Overview
Breaks down complex tasks into executable subtasks with context inheritance and agent assignment.
## Core Principles
**File Cohesion:** Related files must stay in same task
**10-Task Limit:** Total tasks cannot exceed 10 (triggers re-scoping)
## Core Features
**CRITICAL**: Manual breakdown with safety controls to prevent file conflicts and task limit violations.
### Breakdown Process
1. **Session Check**: Verify active session contains parent task
2. **Task Validation**: Ensure parent is `pending` status
3. **10-Task Limit Check**: Verify breakdown won't exceed total limit
4. **Manual Decomposition**: User defines subtasks with validation
5. **File Conflict Detection**: Warn if same files appear in multiple subtasks
6. **Similar Function Warning**: Alert if subtasks have overlapping functionality
7. **Context Distribution**: Inherit parent requirements and scope
8. **Agent Assignment**: Auto-assign agents based on subtask type
9. **TODO_LIST Update**: Regenerate TODO_LIST.md with new structure
### Breakdown Rules
- Only `pending` tasks can be broken down
- **Manual breakdown only**: Automated breakdown disabled to prevent violations
- Parent becomes `container` status (not executable)
- Subtasks use format: IMPL-N.M (max 2 levels)
- Context flows from parent to subtasks
- All relationships tracked in JSON
- **10-task limit enforced**: Breakdown rejected if total would exceed 10 tasks
- **File cohesion preserved**: Same files cannot be split across subtasks
## Usage
### Basic Breakdown
```bash
/task:breakdown impl-1
```
Interactive process:
```
Task: Build authentication module
Current total tasks: 6/10
MANUAL BREAKDOWN REQUIRED
Define subtasks manually (remaining capacity: 4 tasks):
1. Enter subtask title: User authentication core
Focus files: models/User.js, routes/auth.js, middleware/auth.js
2. Enter subtask title: OAuth integration
Focus files: services/OAuthService.js, routes/oauth.js
FILE CONFLICT DETECTED:
- routes/auth.js appears in multiple subtasks
- Recommendation: Merge related authentication routes
SIMILAR FUNCTIONALITY WARNING:
- "User authentication" and "OAuth integration" both handle auth
- Consider combining into single task
# Use AskUserQuestion for confirmation
AskUserQuestion({
  questions: [{
    question: "File conflicts and/or similar functionality detected. How do you want to proceed?",
    header: "Confirm",
    options: [
      { label: "Proceed with breakdown", description: "Accept the risks and create the subtasks as defined." },
      { label: "Restart breakdown", description: "Discard current subtasks and start over." },
      { label: "Cancel breakdown", description: "Abort the operation and leave the parent task as is." }
    ],
    multiSelect: false
  }]
})
User selected: "Proceed with breakdown"
Task IMPL-1 broken down:
IMPL-1: Build authentication module (container)
├── IMPL-1.1: User authentication core -> @code-developer
└── IMPL-1.2: OAuth integration -> @code-developer
Files updated: .task/IMPL-1.json + 2 subtask files + TODO_LIST.md
```
## Decomposition Logic
### Agent Assignment
- **Design/Planning** → `@planning-agent`
- **Implementation** → `@code-developer`
- **Testing** → `@code-developer` (type: "test-gen")
- **Test Validation** → `@test-fix-agent` (type: "test-fix")
- **Review** → `@universal-executor` (optional)
### Context Inheritance
- Subtasks inherit parent requirements
- Scope refined for specific subtask
- Implementation details distributed appropriately
## Safety Controls
### File Conflict Detection
**Validates file cohesion across subtasks:**
- Scans `focus_paths` in all subtasks
- Warns if same file appears in multiple subtasks
- Suggests merging subtasks with overlapping files
- Blocks breakdown if critical conflicts detected
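A sketch of the scan, assuming each subtask JSON carries the `focus_paths` array described above:
```javascript
// Flag any file listed in more than one subtask's focus_paths
const detectFileConflicts = (subtasks) => {
  const owners = new Map(); // file -> subtask IDs referencing it
  for (const task of subtasks) {
    for (const file of task.focus_paths || []) {
      owners.set(file, [...(owners.get(file) || []), task.id]);
    }
  }
  return [...owners.entries()]
    .filter(([, ids]) => ids.length > 1)
    .map(([file, ids]) => ({ file, tasks: ids }));
};
```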
### Similar Functionality Detection
**Prevents functional overlap:**
- Analyzes subtask titles for similar keywords
- Warns about potential functional redundancy
- Suggests consolidation of related functionality
- Examples: "user auth" + "login system" → merge recommendation
### 10-Task Limit Enforcement
**Hard limit compliance:**
- Counts current total tasks in session
- Calculates breakdown impact on total
- Rejects breakdown if would exceed 10 tasks
- Suggests re-scoping if limit reached
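The check itself is simple arithmetic; a sketch:
```javascript
// Reject a breakdown that would push the session past the hard limit
const checkTaskLimit = (currentTotal, proposedSubtasks, limit = 10) => {
  return currentTotal + proposedSubtasks <= limit
    ? { ok: true }
    : {
        ok: false,
        message: `Breakdown would exceed ${limit}-task limit (current: ${currentTotal}, proposed: ${proposedSubtasks})`
      };
};
```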
### Manual Control Requirements
**User-driven breakdown only:**
- No automatic subtask generation
- User must define each subtask title and scope
- Real-time validation during input
- Confirmation required before execution
## Implementation Details
- Complete task JSON schema
- Implementation field structure
- Context inheritance rules
- Agent assignment logic
## Validation
### Pre-breakdown Checks
1. Active session exists
2. Task found in session
3. Task status is `pending`
4. Not already broken down
5. **10-task limit compliance**: Total tasks + new subtasks ≤ 10
6. **Manual mode enabled**: No automatic breakdown allowed
### Post-breakdown Actions
1. Update parent to `container` status
2. Create subtask JSON files
3. Update parent subtasks list
4. Update session stats
5. **Regenerate TODO_LIST.md** with new hierarchy
6. Validate file paths in focus_paths
7. Update session task count
## Examples
### Basic Breakdown
```bash
/task:breakdown impl-1
impl-1: Build authentication (container)
├── impl-1.1: Design schema -> @planning-agent
├── impl-1.2: Implement logic + tests -> @code-developer
└── impl-1.3: Execute & fix tests -> @test-fix-agent
```
## Error Handling
```bash
# Task not found
Task IMPL-5 not found
# Already broken down
Task IMPL-1 already has subtasks
# Wrong status
Cannot breakdown completed task IMPL-2
# 10-task limit exceeded
Breakdown would exceed 10-task limit (current: 8, proposed: 4)
Suggestion: Re-scope project into smaller iterations
# File conflicts detected
File conflict: routes/auth.js appears in IMPL-1.1 and IMPL-1.2
Recommendation: Merge subtasks or redistribute files
# Similar functionality warning
Similar functions detected: "user login" and "authentication"
Consider consolidating related functionality
# Manual breakdown required
Automatic breakdown disabled. Use manual breakdown process.
```
**System ensures**: Manual breakdown control with file cohesion enforcement, similar functionality detection, and 10-task limit compliance

View File

@@ -1,152 +0,0 @@
---
name: create
description: Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis
argument-hint: "\"task title\""
---
# Task Create Command (/task:create)
## Overview
Creates new implementation tasks with automatic context awareness and ID generation.
## Core Principles
**Task System:** @~/.claude/workflows/task-core.md
## Core Features
### Automatic Behaviors
- **ID Generation**: Auto-generates IMPL-N format (max 2 levels)
- **Context Inheritance**: Inherits from active workflow session
- **JSON Creation**: Creates task JSON in active session
- **Status Setting**: Initial status = "pending"
- **Agent Assignment**: Suggests agent based on task type
- **Session Integration**: Updates workflow session stats
### Context Awareness
- Validates active workflow session exists
- Avoids duplicate task IDs
- Inherits session requirements and scope
- Suggests task relationships
## Usage
### Basic Creation
```bash
/task:create "Build authentication module"
```
Output:
```
Task created: IMPL-1
Title: Build authentication module
Type: feature
Agent: code-developer
Status: pending
```
### Task Types
- `feature` - New functionality (default)
- `bugfix` - Bug fixes
- `refactor` - Code improvements
- `test` - Test implementation
- `docs` - Documentation
## Task Creation Process
1. **Session Validation**: Check active workflow session
2. **ID Generation**: Auto-increment IMPL-N
3. **Context Inheritance**: Load workflow context
4. **Implementation Setup**: Initialize implementation field
5. **Agent Assignment**: Select appropriate agent
6. **File Creation**: Save JSON to .task/ directory
7. **Session Update**: Update workflow stats
**Task Schema**: See @~/.claude/workflows/task-core.md for complete JSON structure
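A sketch of the auto-increment step, assuming main-task files follow the `.task/IMPL-N.json` naming used throughout this document:
```javascript
const fs = require('fs');

// Find the highest existing IMPL-N and return the next ID
const nextTaskId = (taskDir = '.task') => {
  const ids = fs.readdirSync(taskDir)
    .map(name => name.match(/^IMPL-(\d+)\.json$/))
    .filter(Boolean)
    .map(match => parseInt(match[1], 10));
  return `IMPL-${ids.length ? Math.max(...ids) + 1 : 1}`;
};
```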
## Implementation Field Setup
### Auto-Population Strategy
- **Detailed info**: Extract from task description and scope
- **Missing info**: Mark `pre_analysis` as multi-step array format for later pre-analysis
- **Basic structure**: Initialize with standard template
### Analysis Triggers
When implementation details incomplete:
```bash
Task requires analysis for implementation details
Suggest running: gemini analysis for file locations and dependencies
```
## File Management
### JSON Task File
- **Location**: `.task/IMPL-[N].json` in active session
- **Content**: Complete task with implementation field
- **Updates**: Session stats only
### Simple Process
1. Validate session and inputs
2. Generate task JSON
3. Update session stats
4. Notify completion
## Context Inheritance
Tasks inherit from:
1. **Active Session** - Requirements and scope from workflow-session.json
2. **Planning Document** - Context from IMPL_PLAN.md
3. **Parent Task** - For subtasks (IMPL-N.M format)
## Agent Assignment
Based on task type and title keywords:
- **Build/Implement** → @code-developer
- **Design/Plan** → @planning-agent
- **Test Generation** → @code-developer (type: "test-gen")
- **Test Execution/Fix** → @test-fix-agent (type: "test-fix")
- **Review/Audit** → @universal-executor (optional, only when explicitly requested)
## Validation Rules
1. **Session Check** - Active workflow session required
2. **Duplicate Check** - Avoid similar task titles
3. **ID Uniqueness** - Auto-increment task IDs
4. **Schema Validation** - Ensure proper JSON structure
## Error Handling
```bash
# No workflow session
No active workflow found
Use: /workflow init "project name"
# Duplicate task
Similar task exists: IMPL-3
Continue anyway? (y/n)
# Max depth exceeded
Cannot create IMPL-1.2.1 (max 2 levels)
Use: IMPL-2 for new main task
```
## Examples
### Feature Task
```bash
/task:create "Implement user authentication"
Created IMPL-1: Implement user authentication
Type: feature
Agent: code-developer
Status: pending
```
### Bug Fix
```bash
/task:create "Fix login validation bug" --type=bugfix
Created IMPL-2: Fix login validation bug
Type: bugfix
Agent: code-developer
Status: pending
```

View File

@@ -1,270 +0,0 @@
---
name: execute
description: Execute task JSON using appropriate agent (@code-developer/@planning-agent/@test-fix-agent) with pre-analysis context loading and status tracking
argument-hint: "task-id"
---
## Command Overview: /task:execute
**Purpose**: Executes tasks using intelligent agent selection, context preparation, and progress tracking.
## Execution Modes
- **auto (Default)**
- Fully autonomous execution with automatic agent selection.
- Provides progress updates at each checkpoint.
- Automatically completes the task when done.
- **guided**
- Executes step-by-step, requiring user confirmation at each checkpoint.
- Allows for dynamic adjustments and manual review during the process.
- **review**
- Optional manual review using `@universal-executor`.
- Used only when explicitly requested by user.
## Agent Selection Logic
The system determines the appropriate agent for a task using the following logic.
```pseudo
FUNCTION select_agent(task, agent_override):
  // A manual override always takes precedence.
  // Corresponds to the --agent=<agent-type> flag.
  IF agent_override IS NOT NULL:
    RETURN agent_override
  // If no override, select based on keywords in the task title.
  ELSE:
    CASE task.title:
      WHEN CONTAINS "Build API", "Implement":
        RETURN "@code-developer"
      WHEN CONTAINS "Design schema", "Plan":
        RETURN "@planning-agent"
      WHEN CONTAINS "Write tests", "Generate tests":
        RETURN "@code-developer" // type: test-gen
      WHEN CONTAINS "Execute tests", "Fix tests", "Validate":
        RETURN "@test-fix-agent" // type: test-fix
      WHEN CONTAINS "Review code":
        RETURN "@universal-executor" // Optional manual review
      DEFAULT:
        RETURN "@code-developer" // Default agent
    END CASE
END FUNCTION
```
## Core Execution Protocol
`Pre-Execution` -> `Execution` -> `Post-Execution`
### Pre-Execution Protocol
`Validate Task & Dependencies` **->** `Prepare Execution Context` **->** `Coordinate with TodoWrite`
- **Validation**: Checks for the task's JSON file in `.task/` and resolves its dependencies.
- **Context Preparation**: Loads task and workflow context, preparing it for the selected agent.
- **Session Context Injection**: Provides workflow directory paths to agents for TODO_LIST.md and summary management.
- **TodoWrite Coordination**: Generates execution Todos and checkpoints, syncing with `TODO_LIST.md`.
### Post-Execution Protocol
`Update Task Status` **->** `Generate Summary` **->** `Save Artifacts` **->** `Sync All Progress` **->** `Validate File Integrity`
- Updates status in the task's JSON file and `TODO_LIST.md`.
- Creates a summary in `.summaries/`.
- Stores outputs and syncs progress across the entire workflow session.
### Task & Subtask Execution Logic
This logic defines how single, multiple, or parent tasks are handled.
```pseudo
FUNCTION execute_task_command(task_id, mode, parallel_flag):
  // Handle parent tasks by executing their subtasks.
  IF is_parent_task(task_id):
    subtasks = get_subtasks(task_id)
    EXECUTE_SUBTASK_BATCH(subtasks, mode)
  // Handle wildcard execution (e.g., IMPL-001.*)
  ELSE IF task_id CONTAINS "*":
    subtasks = find_matching_tasks(task_id)
    IF parallel_flag IS true:
      EXECUTE_IN_PARALLEL(subtasks)
    ELSE:
      FOR each subtask in subtasks:
        EXECUTE_SINGLE_TASK(subtask, mode)
  // Default case for a single task ID.
  ELSE:
    EXECUTE_SINGLE_TASK(task_id, mode)
END FUNCTION
```
### Error Handling & Recovery Logic
```pseudo
FUNCTION pre_execution_check(task):
  // Ensure dependencies are met before starting.
  IF task.dependencies ARE NOT MET:
    LOG_ERROR("Cannot execute " + task.id)
    LOG_INFO("Blocked by: " + unmet_dependencies)
    HALT_EXECUTION()
END FUNCTION

FUNCTION on_execution_failure(checkpoint):
  // Provide user with recovery options upon failure.
  LOG_WARNING("Execution failed at checkpoint " + checkpoint)
  PRESENT_OPTIONS([
    "Retry from checkpoint",
    "Retry from beginning",
    "Switch to guided mode",
    "Abort execution"
  ])
  AWAIT user_input
  // System performs the selected action.
END FUNCTION
```
### Simplified Context Structure (JSON)
This is the simplified data structure loaded to provide context for task execution.
```json
{
  "task": {
    "id": "IMPL-1",
    "title": "Build authentication module",
    "type": "feature",
    "status": "active",
    "agent": "code-developer",
    "context": {
      "requirements": ["JWT authentication", "OAuth2 support"],
      "scope": ["src/auth/*", "tests/auth/*"],
      "acceptance": ["Module handles JWT tokens", "OAuth2 flow implemented"],
      "inherited_from": "WFS-user-auth"
    },
    "relations": {
      "parent": null,
      "subtasks": ["IMPL-1.1", "IMPL-1.2"],
      "dependencies": ["IMPL-0"]
    },
    "implementation": {
      "files": [
        {
          "path": "src/auth/login.ts",
          "location": {
            "function": "authenticateUser",
            "lines": "25-65",
            "description": "Main authentication logic"
          },
          "original_code": "// Code snippet extracted via gemini analysis",
          "modifications": {
            "current_state": "Basic password authentication only",
            "proposed_changes": [
              "Add JWT token generation",
              "Implement OAuth2 callback handling",
              "Add multi-factor authentication support"
            ],
            "logic_flow": [
              "validateCredentials() ───► checkUserExists()",
              "◊─── if password ───► generateJWT() ───► return token",
              "◊─── if OAuth ───► validateOAuthCode() ───► exchangeForToken()",
              "◊─── if MFA ───► sendMFACode() ───► awaitVerification()"
            ],
            "reason": "Support modern authentication standards and security requirements",
            "expected_outcome": "Comprehensive authentication system supporting multiple methods"
          }
        }
      ],
      "context_notes": {
        "dependencies": ["jsonwebtoken", "passport", "speakeasy"],
        "affected_modules": ["user-session", "auth-middleware", "api-routes"],
        "risks": [
          "Breaking changes to existing login endpoints",
          "Token storage and rotation complexity",
          "OAuth provider configuration dependencies"
        ],
        "performance_considerations": "JWT validation adds ~10ms per request, OAuth callbacks may timeout",
        "error_handling": "Ensure sensitive authentication errors don't leak user enumeration data"
      },
      "pre_analysis": [
        {
          "action": "analyze patterns",
          "template": "~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt",
          "method": "gemini"
        }
      ]
    }
  },
  "workflow": {
    "session": "WFS-user-auth",
    "phase": "IMPLEMENT",
    "session_context": {
      "workflow_directory": ".workflow/active/WFS-user-auth/",
      "todo_list_location": ".workflow/active/WFS-user-auth/TODO_LIST.md",
      "summaries_directory": ".workflow/active/WFS-user-auth/.summaries/",
      "task_json_location": ".workflow/active/WFS-user-auth/.task/"
    }
  },
  "execution": {
    "agent": "code-developer",
    "mode": "auto",
    "attempts": 0
  }
}
```
### Agent-Specific Context
Different agents receive context tailored to their function, including implementation details:
**`@code-developer`**:
- Complete implementation.files array with file paths and locations
- original_code snippets and proposed_changes for precise modifications
- logic_flow diagrams for understanding data flow
- Dependencies and affected modules for integration planning
- Performance and error handling considerations
**`@planning-agent`**:
- High-level requirements, constraints, success criteria
- Implementation risks and mitigation strategies
- Architecture implications from implementation.context_notes
**`@test-fix-agent`**:
- Test files to execute from task.context.focus_paths
- Source files to fix from implementation.files[].path
- Expected behaviors from implementation.modifications.logic_flow
- Error conditions to validate from implementation.context_notes.error_handling
- Performance requirements from implementation.context_notes.performance_considerations
**`@universal-executor`**:
- Used for optional manual reviews when explicitly requested
- Code quality standards and implementation patterns
- Security considerations from implementation.context_notes.risks
- Dependency validation from implementation.context_notes.dependencies
- Architecture compliance checks
### Simplified File Output
- **Task JSON File (`.task/<task-id>.json`)**: Updated with status and last attempt time only.
- **Session File (`workflow-session.json`)**: Updated task stats (completed count).
- **Summary File**: Generated in `.summaries/` upon completion (optional).
### Simplified Summary Template
Optional summary file generated at `.summaries/IMPL-[task-id]-summary.md`.
```markdown
# Task Summary: IMPL-1 Build Authentication Module
## What Was Done
- Created src/auth/login.ts with JWT validation
- Added tests in tests/auth.test.ts
## Execution Results
- **Agent**: code-developer
- **Status**: completed
## Files Modified
- `src/auth/login.ts` (created)
- `tests/auth.test.ts` (created)
```

View File

@@ -1,441 +0,0 @@
---
name: replan
description: Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json
argument-hint: "[-y|--yes] task-id [\"text\"|file.md] | --batch [verification-report.md]"
allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm updates, use recommended changes.
# Task Replan Command (/task:replan)
> **⚠️ DEPRECATION NOTICE**: This command is maintained for backward compatibility. For new workflows, use `/workflow:replan` which provides:
> - Session-level replanning with comprehensive artifact updates
> - Interactive boundary clarification
> - Updates to IMPL_PLAN.md, TODO_LIST.md, and session metadata
> - Better integration with workflow sessions
>
> **Migration**: Replace `/task:replan IMPL-1 "changes"` with `/workflow:replan IMPL-1 "changes"`
## Overview
Replans individual tasks or batch processes multiple tasks with change tracking and backup management.
**Modes**:
- **Single Task Mode**: Replan one task with specific changes
- **Batch Mode**: Process multiple tasks from action-plan verification report
## Key Features
- **Single/Batch Operations**: Single task or multiple tasks from verification report
- **Multiple Input Sources**: Text, files, or verification report
- **Backup Management**: Automatic backup of previous versions
- **Change Documentation**: Track all modifications
- **Progress Tracking**: TodoWrite integration for batch operations
**CRITICAL**: Validates active session before replanning
## Operation Modes
### Single Task Mode
#### Direct Text (Default)
```bash
/task:replan IMPL-1 "Add OAuth2 authentication support"
```
#### File-based Input
```bash
/task:replan IMPL-1 updated-specs.md
```
Supports: .md, .txt, .json, .yaml
#### Interactive Mode
```bash
/task:replan IMPL-1 --interactive
```
Guided step-by-step modification process with validation
### Batch Mode
#### From Verification Report
```bash
/task:replan --batch ACTION_PLAN_VERIFICATION.md
```
**Workflow**:
1. Parse verification report to extract replan recommendations
2. Create TodoWrite task list for all modifications
3. Process each task sequentially with confirmation
4. Track progress and generate summary report
**Auto-detection**: If input file contains "Action Plan Verification Report" header, automatically enters batch mode
## Replanning Process
### Single Task Process
1. **Load & Validate**: Read task JSON and validate session
2. **Parse Input**: Process changes from input source
3. **Create Backup**: Save previous version to backup folder
4. **Update Task**: Modify JSON structure and relationships
5. **Save Changes**: Write updated task and increment version
6. **Update Session**: Reflect changes in workflow stats
### Batch Process
1. **Parse Verification Report**: Extract all replan recommendations
2. **Initialize TodoWrite**: Create task list for tracking
3. **For Each Task**:
- Mark todo as in_progress
- Load and validate task JSON
- Create backup
- Apply recommended changes
- Save updated task
- Mark todo as completed
4. **Generate Summary**: Report all changes and backup locations
## Backup Management
### Backup Tracking
Tasks maintain backup history:
```json
{
  "id": "IMPL-1",
  "version": "1.2",
  "replan_history": [
    {
      "version": "1.2",
      "reason": "Add OAuth2 support",
      "input_source": "direct_text",
      "backup_location": ".task/backup/IMPL-1-v1.1.json",
      "timestamp": "2025-10-17T10:30:00Z"
    }
  ]
}
```
**Complete schema**: See @~/.claude/workflows/task-core.md
### File Structure
```
.task/
├── IMPL-1.json # Current version
├── backup/
│ ├── IMPL-1-v1.0.json # Original version
│ ├── IMPL-1-v1.1.json # Previous backup
│ └── IMPL-1-v1.2.json # Latest backup
└── [new subtasks as needed]
```
**Backup Naming**: `{task-id}-v{version}.json`
## Implementation Updates
### Change Detection
Tracks modifications to:
- Files in implementation.files array
- Dependencies and affected modules
- Risk assessments and performance notes
- Logic flows and code locations
### Analysis Triggers
May require gemini re-analysis when:
- New files need code extraction
- Function locations change
- Dependencies require re-evaluation
## Document Updates
### Planning Document
May update IMPL_PLAN.md sections when task structure changes significantly
### TODO List Sync
If TODO_LIST.md exists, synchronizes:
- New subtasks (with [ ] checkbox)
- Modified tasks (marked as updated)
- Removed subtasks (deleted from list)
## Change Documentation
### Change Summary
Generates brief change log with:
- Version increment (1.1 → 1.2)
- Input source and reason
- Key modifications made
- Files updated/created
- Backup location
## Session Updates
Updates workflow-session.json with:
- Modified task tracking
- Task count changes (if subtasks added/removed)
- Last modification timestamps
## Rollback Support
```bash
/task:replan IMPL-1 --rollback v1.1
Rollback to version 1.1:
- Restore task from backup/.../IMPL-1-v1.1.json
- Remove new subtasks if any
- Update session stats
# Use AskUserQuestion for confirmation
AskUserQuestion({
  questions: [{
    question: "Are you sure you want to roll back this task to a previous version?",
    header: "Confirm",
    options: [
      { label: "Yes, rollback", description: "Restore the task from the selected backup." },
      { label: "No, cancel", description: "Keep the current version of the task." }
    ],
    multiSelect: false
  }]
})
User selected: "Yes, rollback"
Task rolled back to version 1.1
```
## Batch Processing with TodoWrite
### Progress Tracking
When processing multiple tasks, automatically creates TodoWrite task list:
```markdown
**Batch Replan Progress**:
- [x] IMPL-002: Add FR-12 draft saving acceptance criteria
- [x] IMPL-003: Add FR-14 history tracking acceptance criteria
- [ ] IMPL-004: Add FR-09 response surface explicit coverage
- [ ] IMPL-008: Add NFR performance validation steps
```
### Batch Report
After completion, generates summary:
```markdown
## Batch Replan Summary
**Total Tasks**: 4
**Successful**: 3
**Failed**: 1
**Skipped**: 0
### Changes Made
- IMPL-002 v1.0 → v1.1: Added FR-12 acceptance criteria
- IMPL-003 v1.0 → v1.1: Added FR-14 acceptance criteria
- IMPL-004 v1.0 → v1.1: Added FR-09 explicit coverage
### Backups Created
- .task/backup/IMPL-002-v1.0.json
- .task/backup/IMPL-003-v1.0.json
- .task/backup/IMPL-004-v1.0.json
### Errors
- IMPL-008: File not found (task may have been renamed)
```
## Examples
### Single Task - Text Input
```bash
/task:replan IMPL-1 "Add OAuth2 authentication support"
Processing changes...
Proposed updates:
+ Add OAuth2 integration
+ Update authentication flow
# Use AskUserQuestion for confirmation
AskUserQuestion({
  questions: [{
    question: "Do you want to apply these changes to the task?",
    header: "Apply",
    options: [
      { label: "Yes, apply", description: "Create new version with these changes." },
      { label: "No, cancel", description: "Discard changes and keep current version." }
    ],
    multiSelect: false
  }]
})
User selected: "Yes, apply"
Version 1.2 created
Context updated
Backup saved to .task/backup/IMPL-1-v1.1.json
```
### Single Task - File Input
```bash
/task:replan IMPL-2 requirements.md
Loading requirements.md...
Applying specification changes...
Task updated with new requirements
Version 1.1 created
Backup saved to .task/backup/IMPL-2-v1.0.json
```
### Batch Mode - From Verification Report
```bash
/task:replan --batch .workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md
Parsing verification report...
Found 4 tasks requiring replanning:
- IMPL-002: Add FR-12 draft saving acceptance criteria
- IMPL-003: Add FR-14 history tracking acceptance criteria
- IMPL-004: Add FR-09 response surface explicit coverage
- IMPL-008: Add NFR performance validation steps
Creating task tracking list...
Processing IMPL-002...
Backup created: .task/backup/IMPL-002-v1.0.json
Updated to v1.1
Processing IMPL-003...
Backup created: .task/backup/IMPL-003-v1.0.json
Updated to v1.1
Processing IMPL-004...
Backup created: .task/backup/IMPL-004-v1.0.json
Updated to v1.1
Processing IMPL-008...
Backup created: .task/backup/IMPL-008-v1.0.json
Updated to v1.1
Batch replan completed: 4/4 successful
Summary report saved
```
### Batch Mode - Auto-detection
```bash
# If file contains "Action Plan Verification Report", auto-enters batch mode
/task:replan ACTION_PLAN_VERIFICATION.md
Detected verification report format
Entering batch mode...
[same as above]
```
## Error Handling
### Single Task Errors
```bash
# Task not found
Task IMPL-5 not found
Check task ID with /workflow:status
# Task completed
Task IMPL-1 is completed (cannot replan)
Create new task for additional work
# File not found
File requirements.md not found
Check file path
# No input provided
Please specify changes needed
Provide text, file, or verification report
```
### Batch Mode Errors
```bash
# Invalid verification report
File does not contain valid verification report format
Check report structure or use single task mode
# Partial failures
Batch completed with errors: 3/4 successful
Review error details in summary report
# No replan recommendations found
Verification report contains no replan recommendations
Check report content or use /workflow:plan-verify first
```
## Batch Mode Integration
### Input Format Expectations
Batch mode parses verification reports looking for:
1. **Required Actions Section**: Commands like `/task:replan IMPL-X "changes"`
2. **Findings Table**: Task IDs with recommendations
3. **Next Actions Section**: Specific replan commands
**Example Patterns**:
```markdown
#### 1. HIGH Priority - Address FR Coverage Gaps
/task:replan IMPL-004 "
Add explicit acceptance criteria:
- FR-09: Response surface 3D visualization
"
#### 2. MEDIUM Priority - Enhance NFR Coverage
/task:replan IMPL-008 "
Add performance testing:
- NFR-01: Load test API endpoints
"
```
### Extraction Logic
1. Scan for `/task:replan` commands in report
2. Extract task ID and change description
3. Group by priority (HIGH, MEDIUM, LOW)
4. Process in priority order with TodoWrite tracking
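A sketch of steps 1–2, assuming the quoted command form shown in the example patterns above (the character class also matches across line breaks, covering multi-line change descriptions):
```javascript
// Pull task IDs and change descriptions out of a verification report
const extractReplanCommands = (reportText) => {
  const pattern = /\/task:replan\s+(IMPL-[\d.]+)\s+"([^"]+)"/g;
  return [...reportText.matchAll(pattern)].map(match => ({
    taskId: match[1],
    changeDescription: match[2].trim()
  }));
};
```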
### Confirmation Behavior
- **Default**: Confirm each task before applying
- **With `--auto-confirm`**: Apply all changes without prompting
```bash
/task:replan --batch report.md --auto-confirm
```
## Implementation Details
### Backup Management
```typescript
// Backup file naming convention
const backupPath = `.task/backup/${taskId}-v${previousVersion}.json`;
// Backup metadata in task JSON
{
  "replan_history": [
    {
      "version": "1.2",
      "timestamp": "2025-10-17T10:30:00Z",
      "reason": "Add FR-09 explicit coverage",
      "input_source": "batch_verification_report",
      "backup_location": ".task/backup/IMPL-004-v1.1.json"
    }
  ]
}
```
### TodoWrite Integration
```typescript
// Initialize tracking for batch mode
TodoWrite({
  todos: taskList.map(task => ({
    content: `${task.id}: ${task.changeDescription}`,
    status: "pending",
    activeForm: `Replanning ${task.id}`
  }))
});
// Update progress during processing
TodoWrite({
  todos: updateTaskStatus(taskId, "in_progress")
});
// Mark completed
TodoWrite({
  todos: updateTaskStatus(taskId, "completed")
});
```

View File

@@ -1,254 +0,0 @@
---
name: version
description: Display Claude Code version information and check for updates
allowed-tools: Bash(*)
---
# Version Command (/version)
## Purpose
Display local and global installation versions, check for the latest updates from GitHub, and provide upgrade recommendations.
## Execution Flow
1. **Local Version Check**: Read version information from `./.claude/version.json` if it exists.
2. **Global Version Check**: Read version information from `~/.claude/version.json` if it exists.
3. **Fetch Remote Versions**: Use GitHub API to get the latest stable release tag and the latest commit hash from the main branch.
4. **Compare & Suggest**: Compare installed versions with the latest remote versions and provide upgrade suggestions if applicable.
## Step 1: Check Local Version
### Check if local version.json exists
```bash
bash(test -f ./.claude/version.json && echo "found" || echo "not_found")
```
### Read local version (if exists)
```bash
bash(cat ./.claude/version.json)
```
### Extract version with jq (preferred)
```bash
bash(cat ./.claude/version.json | grep -o '"version": *"[^"]*"' | cut -d'"' -f4)
```
### Extract installation date
```bash
bash(cat ./.claude/version.json | grep -o '"installation_date_utc": *"[^"]*"' | cut -d'"' -f4)
```
**Output Format**:
```
Local Version: 3.2.1
Installed: 2025-10-03T12:00:00Z
```
## Step 2: Check Global Version
### Check if global version.json exists
```bash
bash(test -f ~/.claude/version.json && echo "found" || echo "not_found")
```
### Read global version
```bash
bash(cat ~/.claude/version.json)
```
### Extract version
```bash
bash(cat ~/.claude/version.json | grep -o '"version": *"[^"]*"' | cut -d'"' -f4)
```
### Extract installation date
```bash
bash(cat ~/.claude/version.json | grep -o '"installation_date_utc": *"[^"]*"' | cut -d'"' -f4)
```
**Output Format**:
```
Global Version: 3.2.1
Installed: 2025-10-03T12:00:00Z
```
## Step 3: Fetch Latest Stable Release
### Call GitHub API for latest release (with timeout)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null, timeout: 30000)
```
### Extract tag name (version)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"tag_name": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
```
### Extract release name
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"name": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
```
### Extract published date
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"published_at": *"[^"]*"' | cut -d'"' -f4, timeout: 30000)
```
**Output Format**:
```
Latest Stable: v3.2.2
Release: v3.2.2: Independent Test-Gen Workflow with Cross-Session Context
Published: 2025-10-03T04:10:08Z
```
## Step 4: Fetch Latest Main Branch
### Call GitHub API for latest commit on main (with timeout)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null, timeout: 30000)
```
### Extract commit SHA (short)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep -o '"sha": *"[^"]*"' | head -1 | cut -d'"' -f4 | cut -c1-7, timeout: 30000)
```
### Extract commit message (first line only)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep '"message":' | cut -d'"' -f4 | cut -d'\' -f1, timeout: 30000)
```
### Extract commit date
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep -o '"date": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
```
**Output Format**:
```
Latest Dev: a03415b
Message: feat: Add version tracking and upgrade check system
Date: 2025-10-03T04:46:44Z
```
## Step 5: Compare Versions and Suggest Upgrade
### Normalize versions (remove 'v' prefix)
```bash
bash(echo "v3.2.1" | sed 's/^v//')
```
### Compare two versions
```bash
bash(printf "%s\n%s" "3.2.1" "3.2.2" | sort -V | tail -n 1)
```
### Check if versions are equal
```bash
# If equal: Up to date
# If remote newer: Upgrade available
# If local newer: Development version
```
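For illustration, the same three-way decision in JavaScript (assumes plain numeric versions; pre-release suffixes such as `-dev` are ignored by `parseInt`):
```javascript
// -1: remote newer (upgrade available), 0: up to date, 1: local newer (development version)
const compareVersions = (a, b) => {
  const parse = v => v.replace(/^v/, '').split('.').map(p => parseInt(p, 10) || 0);
  const [pa, pb] = [parse(a), parse(b)];
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    if ((pa[i] || 0) !== (pb[i] || 0)) return (pa[i] || 0) < (pb[i] || 0) ? -1 : 1;
  }
  return 0;
};

compareVersions('3.2.1', 'v3.2.2'); // -1 → upgrade available
```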
**Output Scenarios**:
**Scenario 1: Up to date**
```
You are on the latest stable version (3.2.1)
```
**Scenario 2: Upgrade available**
```
A newer stable version is available: v3.2.2
Your version: 3.2.1
To upgrade:
PowerShell: iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1)
Bash: bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.sh)
```
**Scenario 3: Development version**
```
You are running a development version (3.4.0-dev)
This is newer than the latest stable release (v3.3.0)
```
## Simple Bash Commands
### Basic Operations
```bash
# Check local version file
bash(test -f ./.claude/version.json && cat ./.claude/version.json)
# Check global version file
bash(test -f ~/.claude/version.json && cat ~/.claude/version.json)
# Extract version from JSON
bash(cat version.json | grep -o '"version": *"[^"]*"' | cut -d'"' -f4)
# Extract date from JSON
bash(cat version.json | grep -o '"installation_date_utc": *"[^"]*"' | cut -d'"' -f4)
# Fetch latest release (with timeout)
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null, timeout: 30000)
# Extract tag name
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"tag_name": *"[^"]*"' | cut -d'"' -f4, timeout: 30000)
# Extract release name
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"name": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
# Fetch latest commit (with timeout)
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null, timeout: 30000)
# Extract commit SHA
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep -o '"sha": *"[^"]*"' | head -1 | cut -d'"' -f4 | cut -c1-7, timeout: 30000)
# Extract commit message (first line)
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep '"message":' | cut -d'"' -f4 | cut -d'\' -f1, timeout: 30000)
# Compare versions
bash(printf "%s\n%s" "3.2.1" "3.2.2" | sort -V | tail -n 1)
# Remove 'v' prefix
bash(echo "v3.2.1" | sed 's/^v//')
```
## Error Handling
### No installation found
```
WARNING: Claude Code Workflow not installed
Install using:
PowerShell: iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1)
```
### Network error
```
ERROR: Could not fetch latest version from GitHub
Check your network connection
```
### Invalid version.json
```
ERROR: version.json is invalid or corrupted
```
## Design Notes
- Uses simple, direct bash commands instead of complex functions
- Each step is independent and can be executed separately
- Fallback to grep/sed for JSON parsing (no jq dependency required)
- Network calls use curl with error suppression and 30-second timeout
- Version comparison uses `sort -V` for accurate semantic versioning
- Use `/commits/main` API instead of `/branches/main` for more reliable commit info
- Extract first line of commit message using `cut -d'\' -f1` to handle JSON escape sequences
## API Endpoints
### GitHub API Used
- **Latest Release**: `https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest`
- Fields: `tag_name`, `name`, `published_at`
- **Latest Commit**: `https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main`
- Fields: `sha`, `commit.message`, `commit.author.date`
### Timeout Configuration
All network calls should use `timeout: 30000` (30 seconds) to handle slow connections.

View File

@@ -1,485 +0,0 @@
---
name: plan-verify
description: Perform READ-ONLY verification analysis between IMPL_PLAN.md, task JSONs, and brainstorming artifacts. Generates structured report with quality gate recommendation. Does NOT modify any files.
argument-hint: "[optional: --session session-id]"
allowed-tools: Read(*), Write(*), Glob(*), Bash(*)
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Goal
Generate a comprehensive verification report that identifies inconsistencies, duplications, ambiguities, and underspecified items between action planning artifacts (`IMPL_PLAN.md`, `task.json`) and brainstorming artifacts (`role analysis documents`). This command MUST run only after `/workflow:plan` has successfully produced complete `IMPL_PLAN.md` and task JSON files.
**Output**: A structured Markdown report saved to `.workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md` containing:
- Executive summary with quality gate recommendation
- Detailed findings by severity (CRITICAL/HIGH/MEDIUM/LOW)
- Requirements coverage analysis
- Dependency integrity check
- Synthesis alignment validation
- Actionable remediation recommendations
## Operating Constraints
**STRICTLY READ-ONLY FOR SOURCE ARTIFACTS**:
- **MUST NOT** modify `IMPL_PLAN.md`, any `task.json` files, or brainstorming artifacts
- **MUST NOT** create or delete task files
- **MUST ONLY** write the verification report to `.process/ACTION_PLAN_VERIFICATION.md`
**Synthesis Authority**: The `role analysis documents` are **authoritative** for requirements and design decisions. Any conflicts between IMPL_PLAN/tasks and synthesis are automatically CRITICAL and require adjustment of the plan/tasks—not reinterpretation of requirements.
**Quality Gate Authority**: The verification report provides a binding recommendation (BLOCK_EXECUTION / PROCEED_WITH_FIXES / PROCEED_WITH_CAUTION / PROCEED) based on objective severity criteria. User MUST review critical/high issues before proceeding with implementation.
## Execution Steps
### 1. Initialize Analysis Context
```bash
# Detect active workflow session
IF --session parameter provided:
  session_id = provided session
ELSE:
  # Auto-detect active session
  active_sessions = bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
  IF active_sessions is empty:
    ERROR: "No active workflow session found. Use --session <session-id>"
    EXIT
  ELSE IF active_sessions has multiple entries:
    # Use most recently modified session
    session_id = bash(ls -td .workflow/active/WFS-*/ 2>/dev/null | head -1 | xargs basename)
  ELSE:
    session_id = basename(active_sessions[0])

# Derive absolute paths
session_dir = .workflow/active/WFS-{session}
brainstorm_dir = session_dir/.brainstorming
task_dir = session_dir/.task
process_dir = session_dir/.process
session_file = session_dir/workflow-session.json

# Create .process directory if not exists (report output location)
IF NOT EXISTS(process_dir):
  bash(mkdir -p "{process_dir}")

# Validate required artifacts
# Note: "role analysis documents" refers to [role]/analysis.md files (e.g., product-manager/analysis.md)
SYNTHESIS_DIR = brainstorm_dir  # Contains role analysis files: */analysis.md
IMPL_PLAN = session_dir/IMPL_PLAN.md
TASK_FILES = Glob(task_dir/*.json)

# Abort if missing - in order of dependency
SESSION_FILE_EXISTS = EXISTS(session_file)
IF NOT SESSION_FILE_EXISTS:
  WARNING: "workflow-session.json not found. User intent alignment verification will be skipped."
  # Continue execution - this is optional context, not blocking

SYNTHESIS_FILES = Glob(brainstorm_dir/*/analysis.md)
IF SYNTHESIS_FILES.count == 0:
  ERROR: "No role analysis documents found in .brainstorming/*/analysis.md. Run /workflow:brainstorm:synthesis first"
  EXIT

IF NOT EXISTS(IMPL_PLAN):
  ERROR: "IMPL_PLAN.md not found. Run /workflow:plan first"
  EXIT

IF TASK_FILES.count == 0:
  ERROR: "No task JSON files found. Run /workflow:plan first"
  EXIT
```
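The auto-detection branch above maps to a few lines of plain bash. A minimal sketch (`--session` handling omitted):
```bash
# Most recently modified active session wins; fail with guidance otherwise
session_id=$(ls -td .workflow/active/WFS-*/ 2>/dev/null | head -1 | xargs -r basename)
if [ -z "$session_id" ]; then
  echo "No active workflow session found. Use --session <session-id>" >&2
  exit 1
fi
session_dir=".workflow/active/${session_id}"
mkdir -p "${session_dir}/.process"  # report output location
```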
### 2. Load Artifacts (Progressive Disclosure)
Load only minimal necessary context from each artifact:
**From workflow-session.json** (OPTIONAL - Primary Reference for User Intent):
- **ONLY IF EXISTS**: Load user intent context
- Original user prompt/intent (project or description field)
- User's stated goals and objectives
- User's scope definition
- **IF MISSING**: Set user_intent_analysis = "SKIPPED: workflow-session.json not found"
**From role analysis documents** (AUTHORITATIVE SOURCE):
- Functional Requirements (IDs, descriptions, acceptance criteria)
- Non-Functional Requirements (IDs, targets)
- Business Requirements (IDs, success metrics)
- Key Architecture Decisions
- Risk factors and mitigation strategies
- Implementation Roadmap (high-level phases)
**From IMPL_PLAN.md**:
- Summary and objectives
- Context Analysis
- Implementation Strategy
- Task Breakdown Summary
- Success Criteria
- Brainstorming Artifacts References (if present)
**From task.json files**:
- Task IDs
- Titles and descriptions
- Status
- Dependencies (depends_on, blocks)
- Context (requirements, focus_paths, acceptance, artifacts)
- Flow control (pre_analysis, implementation_approach)
- Meta (complexity, priority)
### 3. Build Semantic Models
Create internal representations (do not include raw artifacts in output):
**Requirements inventory**:
- Each functional/non-functional/business requirement with stable ID
- Requirement text, acceptance criteria, priority
**Architecture decisions inventory**:
- ADRs from synthesis
- Technology choices
- Data model references
**Task coverage mapping**:
- Map each task to one or more requirements (by ID reference or keyword inference)
- Map each requirement to covering tasks
**Dependency graph**:
- Task-to-task dependencies (depends_on, blocks)
- Requirement-level dependencies (from synthesis)
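A naive first pass over the coverage mapping can be done with plain text search. A minimal sketch, assuming requirement IDs like `FR-01` and a hypothetical session name:
```bash
# Flag requirements with zero task coverage (keyword inference not shown)
task_dir=".workflow/active/WFS-demo/.task"
for req in FR-01 FR-02 FR-03; do
  hits=$(grep -l "$req" "$task_dir"/*.json 2>/dev/null)
  if [ -n "$hits" ]; then
    echo "$req covered by: $(echo "$hits" | xargs -n1 basename | tr '\n' ' ')"
  else
    echo "$req: ZERO COVERAGE"
  fi
done
```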
### 4. Detection Passes (Token-Efficient Analysis)
**Token Budget Strategy**:
- **Total Limit**: 50 findings maximum (aggregate remainder in overflow summary)
- **Priority Allocation**: CRITICAL (unlimited) → HIGH (15) → MEDIUM (20) → LOW (15)
- **Early Exit**: If CRITICAL findings > 0 in User Intent/Requirements Coverage, skip LOW/MEDIUM priority checks
**Execution Order** (Process in sequence; skip if token budget exhausted):
1. **Tier 1 (CRITICAL Path)**: A, B, C - User intent, coverage, consistency (process fully)
2. **Tier 2 (HIGH Priority)**: D, E - Dependencies, synthesis alignment (limit 15 findings total)
3. **Tier 3 (MEDIUM Priority)**: F - Specification quality (limit 20 findings)
4. **Tier 4 (LOW Priority)**: G, H - Duplication, feasibility (limit 15 findings total)
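A sketch of how the per-tier caps could be enforced while collecting findings (CRITICAL stays uncapped; the `overflow` counter feeds the overflow summary):
```bash
declare -A limit=( [HIGH]=15 [MEDIUM]=20 [LOW]=15 )
declare -A used=( [HIGH]=0 [MEDIUM]=0 [LOW]=0 )
overflow=0
record_finding() {  # usage: record_finding SEVERITY "summary"
  local sev=$1; shift
  if [ "$sev" = "CRITICAL" ]; then
    echo "[CRITICAL] $*"
  elif [ "${used[$sev]}" -lt "${limit[$sev]}" ]; then
    used[$sev]=$(( ${used[$sev]} + 1 ))
    echo "[$sev] $*"
  else
    overflow=$(( overflow + 1 ))  # aggregated into the overflow summary
  fi
}
```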
---
#### A. User Intent Alignment (CRITICAL - Tier 1)
- **Goal Alignment**: IMPL_PLAN objectives match user's original intent
- **Scope Drift**: Plan covers user's stated scope without unauthorized expansion
- **Success Criteria Match**: Plan's success criteria reflect user's expectations
- **Intent Conflicts**: Tasks contradicting user's original objectives
#### B. Requirements Coverage Analysis
- **Orphaned Requirements**: Requirements in synthesis with zero associated tasks
- **Unmapped Tasks**: Tasks with no clear requirement linkage
- **NFR Coverage Gaps**: Non-functional requirements (performance, security, scalability) not reflected in tasks
#### C. Consistency Validation
- **Requirement Conflicts**: Tasks contradicting synthesis requirements
- **Architecture Drift**: IMPL_PLAN architecture not matching synthesis ADRs
- **Terminology Drift**: Same concept named differently across IMPL_PLAN and tasks
- **Data Model Inconsistency**: Tasks referencing entities/fields not in synthesis data model
#### D. Dependency Integrity
- **Circular Dependencies**: Task A depends on B, B depends on C, C depends on A
- **Missing Dependencies**: Task requires outputs from another task but no explicit dependency
- **Broken Dependencies**: Task depends on non-existent task ID
- **Logical Ordering Issues**: Implementation tasks before foundational setup without dependency note
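Broken references are mechanical to detect. A minimal sketch, assuming `jq` is available and each task file exposes `id` and `depends_on[]` (session name hypothetical):
```bash
task_dir=".workflow/active/WFS-demo/.task"
all_ids=$(jq -r '.id' "$task_dir"/*.json)
for f in "$task_dir"/*.json; do
  for dep in $(jq -r '.depends_on[]?' "$f"); do
    echo "$all_ids" | grep -qx "$dep" \
      || echo "BROKEN: $(jq -r '.id' "$f") depends on non-existent $dep"
  done
done
```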
#### E. Synthesis Alignment
- **Priority Conflicts**: High-priority synthesis requirements mapped to low-priority tasks
- **Success Criteria Mismatch**: IMPL_PLAN success criteria not covering synthesis acceptance criteria
- **Risk Mitigation Gaps**: Critical risks in synthesis without corresponding mitigation tasks
#### F. Task Specification Quality
- **Ambiguous Focus Paths**: Tasks with vague or missing focus_paths
- **Underspecified Acceptance**: Tasks without clear acceptance criteria
- **Missing Artifacts References**: Tasks not referencing relevant brainstorming artifacts in context.artifacts
- **Weak Flow Control**: Tasks without clear implementation_approach or pre_analysis steps
- **Missing Target Files**: Tasks without flow_control.target_files specification
#### G. Duplication Detection
- **Overlapping Task Scope**: Multiple tasks with nearly identical descriptions
- **Redundant Requirements Coverage**: Same requirement covered by multiple tasks without clear partitioning
#### H. Feasibility Assessment
- **Complexity Misalignment**: Task marked "simple" but requires multiple file modifications
- **Resource Conflicts**: Parallel tasks requiring same resources/files
- **Skill Gap Risks**: Tasks requiring skills not in team capability assessment (from synthesis)
### 5. Severity Assignment
Use this heuristic to prioritize findings:
- **CRITICAL**:
- Violates user's original intent (goal misalignment, scope drift)
- Violates synthesis authority (requirement conflict)
- Core requirement with zero coverage
- Circular dependencies
- Broken dependencies
- **HIGH**:
- NFR coverage gaps
- Priority conflicts
- Missing risk mitigation tasks
- Ambiguous acceptance criteria
- **MEDIUM**:
- Terminology drift
- Missing artifacts references
- Weak flow control
- Logical ordering issues
- **LOW**:
- Style/wording improvements
- Minor redundancy not affecting execution
### 6. Produce Compact Analysis Report
**Report Generation**: Generate report content and save to file.
Output a Markdown report with the following structure:
```markdown
## Action Plan Verification Report
**Session**: WFS-{session-id}
**Generated**: {timestamp}
**Artifacts Analyzed**: role analysis documents, IMPL_PLAN.md, {N} task files
---
### Executive Summary
- **Overall Risk Level**: CRITICAL | HIGH | MEDIUM | LOW
- **Recommendation**: (See decision matrix below)
- BLOCK_EXECUTION: Critical issues exist (must fix before proceeding)
- PROCEED_WITH_FIXES: High issues exist, no critical (fix recommended before execution)
- PROCEED_WITH_CAUTION: Medium issues only (proceed with awareness)
- PROCEED: Low issues only or no issues (safe to execute)
- **Critical Issues**: {count}
- **High Issues**: {count}
- **Medium Issues**: {count}
- **Low Issues**: {count}
---
### Findings Summary
| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| C1 | Coverage | CRITICAL | synthesis:FR-03 | Requirement "User auth" has zero task coverage | Add authentication implementation task |
| H1 | Consistency | HIGH | IMPL-1.2 vs synthesis:ADR-02 | Task uses REST while synthesis specifies GraphQL | Align task with ADR-02 decision |
| M1 | Specification | MEDIUM | IMPL-2.1 | Missing context.artifacts reference | Add @synthesis reference |
| L1 | Duplication | LOW | IMPL-3.1, IMPL-3.2 | Similar scope | Consider merging |
(Add one row per finding; generate stable IDs prefixed by severity initial.)
---
### Requirements Coverage Analysis
| Requirement ID | Requirement Summary | Has Task? | Task IDs | Priority Match | Notes |
|----------------|---------------------|-----------|----------|----------------|-------|
| FR-01 | User authentication | Yes | IMPL-1.1, IMPL-1.2 | Match | Complete |
| FR-02 | Data export | Yes | IMPL-2.3 | Mismatch | High req → Med priority task |
| FR-03 | Profile management | No | - | - | **CRITICAL: Zero coverage** |
| NFR-01 | Response time <200ms | No | - | - | **HIGH: No performance tasks** |
**Coverage Metrics**:
- Functional Requirements: 85% (17/20 covered)
- Non-Functional Requirements: 40% (2/5 covered)
- Business Requirements: 100% (5/5 covered)
---
### Unmapped Tasks
| Task ID | Title | Issue | Recommendation |
|---------|-------|-------|----------------|
| IMPL-4.5 | Refactor utils | No requirement linkage | Link to technical debt or remove |
---
### Dependency Graph Issues
**Circular Dependencies**: None detected
**Broken Dependencies**:
- IMPL-2.3 depends on "IMPL-2.4" (non-existent)
**Logical Ordering Issues**:
- IMPL-5.1 (integration test) has no dependency on IMPL-1.* (implementation tasks)
---
### Synthesis Alignment Issues
| Issue Type | Synthesis Reference | IMPL_PLAN/Task | Impact | Recommendation |
|------------|---------------------|----------------|--------|----------------|
| Architecture Conflict | synthesis:ADR-01 (JWT auth) | IMPL_PLAN uses session cookies | HIGH | Update IMPL_PLAN to use JWT |
| Priority Mismatch | synthesis:FR-02 (High) | IMPL-2.3 (Medium) | MEDIUM | Elevate task priority |
| Missing Risk Mitigation | synthesis:Risk-03 (API rate limits) | No mitigation tasks | HIGH | Add rate limiting implementation task |
---
### Task Specification Quality Issues
**Missing Artifacts References**: 12 tasks lack context.artifacts
**Weak Flow Control**: 5 tasks lack implementation_approach
**Missing Target Files**: 8 tasks lack flow_control.target_files
**Sample Issues**:
- IMPL-1.2: No context.artifacts reference to synthesis
- IMPL-3.1: Missing flow_control.target_files specification
- IMPL-4.2: Vague focus_paths ["src/"] - needs refinement
---
### Feasibility Concerns
| Concern | Tasks Affected | Issue | Recommendation |
|---------|----------------|-------|----------------|
| Skill Gap | IMPL-6.1, IMPL-6.2 | Requires Kubernetes expertise not in team | Add training task or external consultant |
| Resource Conflict | IMPL-3.1, IMPL-3.2 | Both modify src/auth/service.ts in parallel | Add dependency or serialize |
---
### Metrics
- **Total Requirements**: 30 (20 functional, 5 non-functional, 5 business)
- **Total Tasks**: 25
- **Overall Coverage**: 77% (23/30 requirements with ≥1 task)
- **Critical Issues**: 2
- **High Issues**: 5
- **Medium Issues**: 8
- **Low Issues**: 3
---
### Next Actions
#### Action Recommendations
**Recommendation Decision Matrix**:
| Condition | Recommendation | Action |
|-----------|----------------|--------|
| Critical > 0 | BLOCK_EXECUTION | Must resolve all critical issues before proceeding |
| Critical = 0, High > 0 | PROCEED_WITH_FIXES | Fix high-priority issues before execution |
| Critical = 0, High = 0, Medium > 0 | PROCEED_WITH_CAUTION | Proceed with awareness of medium issues |
| Only Low or None | PROCEED | Safe to execute workflow |
**If CRITICAL Issues Exist** (BLOCK_EXECUTION):
- Resolve all critical issues before proceeding
- Use TodoWrite to track required fixes
- Fix broken dependencies and circular references first
**If HIGH Issues Exist** (PROCEED_WITH_FIXES):
- Fix high-priority issues before execution
- Use TodoWrite to systematically track and complete improvements
**If Only MEDIUM/LOW Issues** (PROCEED_WITH_CAUTION / PROCEED):
- Can proceed with execution
- Address issues during or after implementation
#### TodoWrite-Based Remediation Workflow
**Report Location**: `.workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md`
**Recommended Workflow**:
1. **Create TodoWrite Task List**: Extract all findings from report
2. **Process by Priority**: CRITICAL → HIGH → MEDIUM → LOW
3. **Complete Each Fix**: Mark tasks as in_progress/completed as you work
4. **Validate Changes**: Verify each modification against requirements
**TodoWrite Task Structure Example**:
```markdown
Priority Order:
1. Fix coverage gaps (CRITICAL)
2. Resolve consistency conflicts (CRITICAL/HIGH)
3. Add missing specifications (MEDIUM)
4. Improve task quality (LOW)
```
**Notes**:
- TodoWrite provides real-time progress tracking
- Each finding becomes a trackable todo item
- User can monitor progress throughout remediation
- Architecture drift in IMPL_PLAN requires manual editing
```
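The recommendation is a pure function of the severity counts; a minimal sketch of the decision matrix:
```bash
recommend() {  # usage: recommend CRITICAL HIGH MEDIUM LOW
  local critical=$1 high=$2 medium=$3
  if   [ "$critical" -gt 0 ]; then echo "BLOCK_EXECUTION"
  elif [ "$high" -gt 0 ];     then echo "PROCEED_WITH_FIXES"
  elif [ "$medium" -gt 0 ];   then echo "PROCEED_WITH_CAUTION"
  else                             echo "PROCEED"
  fi
}
recommend 2 5 8 3  # -> BLOCK_EXECUTION
```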
### 7. Save Report and Execute TodoWrite-Based Remediation
**Step 7.1: Save Analysis Report**:
```bash
report_path = ".workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
Write(report_path, full_report_content)
```
**Step 7.2: Display Report Summary to User**:
- Show executive summary with counts
- Display recommendation (BLOCK/PROCEED_WITH_FIXES/PROCEED_WITH_CAUTION/PROCEED)
- List critical and high issues if any
**Step 7.3: After Report Generation**:
1. **Extract Findings**: Parse all issues by severity
2. **Create TodoWrite Task List**: Convert findings to actionable todos
3. **Execute Fixes**: Process each todo systematically
4. **Update Task Files**: Apply modifications directly to task JSON files
5. **Update IMPL_PLAN**: Apply strategic changes if needed
At end of report, provide remediation guidance:
```markdown
### 🔧 Remediation Workflow
**Recommended Approach**:
1. **Initialize TodoWrite**: Create comprehensive task list from all findings
2. **Process by Severity**: Start with CRITICAL, then HIGH, MEDIUM, LOW
3. **Apply Fixes Directly**: Modify task.json files and IMPL_PLAN.md as needed
4. **Track Progress**: Mark todos as completed after each fix
**TodoWrite Execution Pattern**:
```bash
# Step 1: Create task list from verification report
TodoWrite([
{ content: "Fix FR-03 coverage gap - add authentication task", status: "pending", activeForm: "Fixing FR-03 coverage gap" },
{ content: "Fix IMPL-1.2 consistency - align with ADR-02", status: "pending", activeForm: "Fixing IMPL-1.2 consistency" },
{ content: "Add context.artifacts to IMPL-1.2", status: "pending", activeForm: "Adding context.artifacts to IMPL-1.2" },
# ... additional todos for each finding
])
# Step 2: Process each todo systematically
# Mark as in_progress when starting
# Apply fix using Read/Edit tools
# Mark as completed when done
# Move to next priority item
```
**File Modification Workflow**:
```bash
# For task JSON modifications:
1. Read(.workflow/active/WFS-{session}/.task/IMPL-X.Y.json)
2. Edit() to apply fixes
3. Mark todo as completed
# For IMPL_PLAN modifications:
1. Read(.workflow/active/WFS-{session}/IMPL_PLAN.md)
2. Edit() to apply strategic changes
3. Mark todo as completed
```
**Note**: All fixes execute immediately after user confirmation without additional commands.

View File

@@ -0,0 +1,804 @@
---
name: analyze-with-file
description: Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding
argument-hint: "[-y|--yes] [-c|--continue] \"topic or question\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm exploration decisions, use recommended analysis angles.
# Workflow Analyze-With-File Command (/workflow:analyze-with-file)
## Overview
Interactive collaborative analysis workflow with **documented discussion process**. Records understanding evolution, facilitates multi-round Q&A, and uses CLI tools (Gemini/Codex) for deep exploration.
**Core workflow**: Topic → Explore → Discuss → Document → Refine → Conclude
**Key features**:
- **discussion.md**: Timeline of discussions and understanding evolution
- **Multi-round Q&A**: Iterative clarification with user
- **CLI-assisted exploration**: Gemini/Codex for codebase and concept analysis
- **Consolidated insights**: Synthesizes discussions into actionable conclusions
- **Flexible continuation**: Resume analysis sessions to build on previous work
## Usage
```bash
/workflow:analyze-with-file [FLAGS] <TOPIC_OR_QUESTION>
# Flags
-y, --yes Skip confirmations, use recommended settings
-c, --continue Continue existing session (auto-detected if exists)
# Arguments
<topic-or-question> Analysis topic, question, or concept to explore (required)
# Examples
/workflow:analyze-with-file "如何优化这个项目的认证架构"
/workflow:analyze-with-file --continue "认证架构" # Continue existing session
/workflow:analyze-with-file -y "性能瓶颈分析" # Auto mode
```
## Execution Process
```
Session Detection:
├─ Check if analysis session exists for topic
├─ EXISTS + discussion.md exists → Continue mode
└─ NOT_FOUND → New session mode
Phase 1: Topic Understanding
├─ Parse topic/question
├─ Identify analysis dimensions (architecture, implementation, concept, etc.)
├─ Initial scoping with user (AskUserQuestion)
└─ Document initial understanding in discussion.md
Phase 2: CLI Exploration (Parallel)
├─ Launch cli-explore-agent for codebase context
├─ Use Gemini/Codex for deep analysis
└─ Aggregate findings into exploration summary
Phase 3: Interactive Discussion (Multi-Round)
├─ Present exploration findings
├─ Facilitate Q&A with user (AskUserQuestion)
├─ Capture user insights and requirements
├─ Update discussion.md with each round
└─ Repeat until user is satisfied or clarity achieved
Phase 4: Synthesis & Conclusion
├─ Consolidate all insights
├─ Update discussion.md with conclusions
├─ Generate actionable recommendations
└─ Optional: Create follow-up tasks or issues
Output:
├─ .workflow/.analysis/{slug}-{date}/discussion.md (evolving document)
├─ .workflow/.analysis/{slug}-{date}/explorations.json (CLI findings)
└─ .workflow/.analysis/{slug}-{date}/conclusions.json (final synthesis)
```
## Implementation
### Session Setup & Mode Detection
```javascript
const fs = require('fs')  // needed for the existsSync checks below
// Shifts the clock by +8h so toISOString() renders UTC+8 wall time
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const topicSlug = topic_or_question.toLowerCase().replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-').substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `ANL-${topicSlug}-${dateStr}`
const sessionFolder = `.workflow/.analysis/${sessionId}`
const discussionPath = `${sessionFolder}/discussion.md`
const explorationsPath = `${sessionFolder}/explorations.json`
const conclusionsPath = `${sessionFolder}/conclusions.json`
// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const hasDiscussion = sessionExists && fs.existsSync(discussionPath)
const forcesContinue = $ARGUMENTS.includes('--continue') || $ARGUMENTS.includes('-c')
const mode = (hasDiscussion || forcesContinue) ? 'continue' : 'new'
if (!sessionExists) {
bash(`mkdir -p ${sessionFolder}`)
}
```
---
### Phase 1: Topic Understanding
**Step 1.1: Parse Topic & Identify Dimensions**
```javascript
// Analyze topic to determine analysis dimensions
const ANALYSIS_DIMENSIONS = {
architecture: ['架构', 'architecture', 'design', 'structure', '设计'],
implementation: ['实现', 'implement', 'code', 'coding', '代码'],
performance: ['性能', 'performance', 'optimize', 'bottleneck', '优化'],
security: ['安全', 'security', 'auth', 'permission', '权限'],
concept: ['概念', 'concept', 'theory', 'principle', '原理'],
comparison: ['比较', 'compare', 'vs', 'difference', '区别'],
decision: ['决策', 'decision', 'choice', 'tradeoff', '选择']
}
function identifyDimensions(topic) {
const text = topic.toLowerCase()
const matched = []
for (const [dimension, keywords] of Object.entries(ANALYSIS_DIMENSIONS)) {
if (keywords.some(k => text.includes(k))) {
matched.push(dimension)
}
}
return matched.length > 0 ? matched : ['general']
}
const dimensions = identifyDimensions(topic_or_question)
```
**Step 1.2: Initial Scoping (New Session Only)**
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (mode === 'new' && !autoYes) {
// Ask user to scope the analysis
AskUserQuestion({
questions: [
{
        question: `Analysis scope: "${topic_or_question}"\n\nWhich aspects do you want to focus on?`,
        header: "Focus",
        multiSelect: true,
        options: [
          { label: "Code implementation", description: "Analyze the existing code implementation" },
          { label: "Architecture design", description: "Architecture-level analysis" },
          { label: "Best practices", description: "Compare against industry best practices" },
          { label: "Problem diagnosis", description: "Identify potential issues" }
        ]
      },
      {
        question: "Analysis depth?",
        header: "Depth",
        multiSelect: false,
        options: [
          { label: "Quick Overview", description: "Quick overview (10-15 minutes)" },
          { label: "Standard Analysis", description: "Standard analysis (30-60 minutes)" },
          { label: "Deep Dive", description: "Deep analysis (1-2 hours)" }
        ]
}
]
})
}
```
**Step 1.3: Create/Update discussion.md**
For new session:
```markdown
# Analysis Discussion
**Session ID**: ${sessionId}
**Topic**: ${topic_or_question}
**Started**: ${getUtc8ISOString()}
**Dimensions**: ${dimensions.join(', ')}
---
## User Context
**Focus Areas**: ${userFocusAreas.join(', ')}
**Analysis Depth**: ${analysisDepth}
---
## Discussion Timeline
### Round 1 - Initial Understanding (${timestamp})
#### Topic Analysis
Based on the topic "${topic_or_question}":
- **Primary dimensions**: ${dimensions.join(', ')}
- **Initial scope**: ${initialScope}
- **Key questions to explore**:
- ${question1}
- ${question2}
- ${question3}
#### Next Steps
- Launch CLI exploration for codebase context
- Gather external insights via Gemini
- Prepare discussion points for user
---
## Current Understanding
${initialUnderstanding}
```
For continue session, append:
```markdown
### Round ${n} - Continuation (${timestamp})
#### Previous Context
Resuming analysis based on prior discussion.
#### New Focus
${newFocusFromUser}
```
---
### Phase 2: CLI Exploration
**Step 2.1: Launch Parallel Explorations**
```javascript
const explorationPromises = []
// CLI Explore Agent for codebase
if (dimensions.includes('implementation') || dimensions.includes('architecture')) {
explorationPromises.push(
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Explore codebase: ${topicSlug}`,
prompt=`
## Analysis Context
Topic: ${topic_or_question}
Dimensions: ${dimensions.join(', ')}
Session: ${sessionFolder}
## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Execute relevant searches based on topic keywords
3. Read: .workflow/project-tech.json (if exists)
## Exploration Focus
${dimensions.map(d => `- ${d}: Identify relevant code patterns and structures`).join('\n')}
## Output
Write findings to: ${sessionFolder}/exploration-codebase.json
Schema:
{
"relevant_files": [{path, relevance, rationale}],
"patterns": [],
"key_findings": [],
"questions_for_user": [],
"_metadata": { "exploration_type": "codebase", "timestamp": "..." }
}
`
)
)
}
// Gemini CLI for deep analysis
explorationPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Analyze topic '${topic_or_question}' from ${dimensions.join(', ')} perspectives
Success criteria: Actionable insights with clear reasoning
TASK:
• Identify key considerations for this topic
• Analyze common patterns and anti-patterns
• Highlight potential issues or opportunities
• Generate discussion points for user clarification
MODE: analysis
CONTEXT: @**/* | Topic: ${topic_or_question}
EXPECTED:
- Structured analysis with clear sections
- Specific insights tied to evidence
- Questions to deepen understanding
- Recommendations with rationale
CONSTRAINTS: Focus on ${dimensions.join(', ')}
" --tool gemini --mode analysis`,
run_in_background: true
})
)
```
**Step 2.2: Aggregate Findings**
```javascript
// After explorations complete, aggregate into explorations.json
const explorations = {
session_id: sessionId,
timestamp: getUtc8ISOString(),
topic: topic_or_question,
dimensions: dimensions,
sources: [
{ type: "codebase", file: "exploration-codebase.json" },
{ type: "gemini", summary: geminiOutput }
],
key_findings: [...],
discussion_points: [...],
open_questions: [...]
}
Write(explorationsPath, JSON.stringify(explorations, null, 2))
```
**Step 2.3: Update discussion.md**
```markdown
#### Exploration Results (${timestamp})
**Sources Analyzed**:
${sources.map(s => `- ${s.type}: ${s.summary}`).join('\n')}
**Key Findings**:
${keyFindings.map((f, i) => `${i+1}. ${f}`).join('\n')}
**Points for Discussion**:
${discussionPoints.map((p, i) => `${i+1}. ${p}`).join('\n')}
**Open Questions**:
${openQuestions.map((q, i) => `- ${q}`).join('\n')}
```
---
### Phase 3: Interactive Discussion (Multi-Round)
**Step 3.1: Present Findings & Gather Feedback**
```javascript
// Maximum discussion rounds
const MAX_ROUNDS = 5
let roundNumber = 1
let discussionComplete = false
while (!discussionComplete && roundNumber <= MAX_ROUNDS) {
// Display current findings
console.log(`
## Discussion Round ${roundNumber}
${currentFindings}
### Key Points for Your Input
${discussionPoints.map((p, i) => `${i+1}. ${p}`).join('\n')}
`)
// Gather user input
const userResponse = AskUserQuestion({
questions: [
{
question: "对以上分析有什么看法或补充?",
header: "Feedback",
multiSelect: false,
options: [
{ label: "同意,继续深入", description: "分析方向正确,继续探索" },
{ label: "需要调整方向", description: "我有不同的理解或重点" },
{ label: "分析完成", description: "已获得足够信息" },
{ label: "有具体问题", description: "我想问一些具体问题" }
]
}
]
})
// Process user response
  switch (userResponse.feedback) {
    case "Agree - go deeper":
      // Deepen analysis in current direction
      await deepenAnalysis()
      break
    case "Adjust direction":
      // Get user's adjusted focus
      const adjustment = AskUserQuestion({
        questions: [{
          question: "Describe the direction or focus you would like to adjust:",
          header: "Direction",
          multiSelect: false,
          options: [
            { label: "More code detail", description: "Dig deeper into the code implementation" },
            { label: "More architecture view", description: "Focus on the overall design" },
            { label: "More practice comparison", description: "Compare against best practices" }
          ]
        }]
      })
      await adjustAnalysisDirection(adjustment)
      break
    case "Analysis complete":
      discussionComplete = true
      break
    case "Specific questions":
      // Let user ask specific questions, then answer
      await handleUserQuestions()
      break
  }
// Update discussion.md with this round
updateDiscussionDocument(roundNumber, userResponse, findings)
roundNumber++
}
```
**Step 3.2: Document Each Round**
Append to discussion.md:
```markdown
### Round ${n} - Discussion (${timestamp})
#### User Input
${userInputSummary}
${userResponse === 'adjustment' ? `
**Direction Adjustment**: ${adjustmentDetails}
` : ''}
${userResponse === 'questions' ? `
**User Questions**:
${userQuestions.map((q, i) => `${i+1}. ${q}`).join('\n')}
**Answers**:
${answers.map((a, i) => `${i+1}. ${a}`).join('\n')}
` : ''}
#### Updated Understanding
Based on user feedback:
- ${insight1}
- ${insight2}
#### Corrected Assumptions
${corrections.length > 0 ? corrections.map(c => `
- ~~${c.wrong}~~ → ${c.corrected}
- Reason: ${c.reason}
`).join('\n') : 'None'}
#### New Insights
${newInsights.map(i => `- ${i}`).join('\n')}
```
---
### Phase 4: Synthesis & Conclusion
**Step 4.1: Consolidate Insights**
```javascript
const conclusions = {
session_id: sessionId,
topic: topic_or_question,
completed: getUtc8ISOString(),
total_rounds: roundNumber,
summary: "...",
key_conclusions: [
{ point: "...", evidence: "...", confidence: "high|medium|low" }
],
recommendations: [
{ action: "...", rationale: "...", priority: "high|medium|low" }
],
open_questions: [...],
follow_up_suggestions: [
{ type: "issue", summary: "..." },
{ type: "task", summary: "..." }
]
}
Write(conclusionsPath, JSON.stringify(conclusions, null, 2))
```
**Step 4.2: Final discussion.md Update**
```markdown
---
## Conclusions (${timestamp})
### Summary
${summaryParagraph}
### Key Conclusions
${conclusions.key_conclusions.map((c, i) => `
${i+1}. **${c.point}** (Confidence: ${c.confidence})
- Evidence: ${c.evidence}
`).join('\n')}
### Recommendations
${conclusions.recommendations.map((r, i) => `
${i+1}. **${r.action}** (Priority: ${r.priority})
- Rationale: ${r.rationale}
`).join('\n')}
### Remaining Questions
${conclusions.open_questions.map(q => `- ${q}`).join('\n')}
---
## Current Understanding (Final)
### What We Established
${establishedPoints.map(p => `- ${p}`).join('\n')}
### What Was Clarified/Corrected
${corrections.map(c => `- ~~${c.original}~~ → ${c.corrected}`).join('\n')}
### Key Insights
${keyInsights.map(i => `- ${i}`).join('\n')}
---
## Session Statistics
- **Total Rounds**: ${totalRounds}
- **Duration**: ${duration}
- **Sources Used**: ${sources.join(', ')}
- **Artifacts Generated**: discussion.md, explorations.json, conclusions.json
```
**Step 4.3: Post-Completion Options**
```javascript
AskUserQuestion({
questions: [{
question: "分析完成。是否需要后续操作?",
header: "Next Steps",
multiSelect: true,
options: [
{ label: "创建Issue", description: "将结论转为可执行的Issue" },
{ label: "生成任务", description: "创建实施任务" },
{ label: "导出报告", description: "生成独立的分析报告" },
{ label: "完成", description: "不需要后续操作" }
]
}]
})
// Handle selections
if (selection.includes("创建Issue")) {
SlashCommand("/issue:new", `${topic_or_question} - 分析结论实施`)
}
if (selection.includes("生成任务")) {
SlashCommand("/workflow:lite-plan", `实施分析结论: ${summary}`)
}
if (selection.includes("导出报告")) {
exportAnalysisReport(sessionFolder)
}
```
---
## Session Folder Structure
```
.workflow/.analysis/ANL-{slug}-{date}/
├── discussion.md # Evolution of understanding & discussions
├── explorations.json # CLI exploration findings
├── conclusions.json # Final synthesis
└── exploration-*.json # Individual exploration results (optional)
```
## Discussion Document Template
```markdown
# Analysis Discussion
**Session ID**: ANL-xxx-2025-01-25
**Topic**: [topic or question]
**Started**: 2025-01-25T10:00:00+08:00
**Dimensions**: [architecture, implementation, ...]
---
## User Context
**Focus Areas**: [user-selected focus]
**Analysis Depth**: [quick|standard|deep]
---
## Discussion Timeline
### Round 1 - Initial Understanding (2025-01-25 10:00)
#### Topic Analysis
...
#### Exploration Results
...
### Round 2 - Discussion (2025-01-25 10:15)
#### User Input
...
#### Updated Understanding
...
#### Corrected Assumptions
- ~~[wrong]~~ → [corrected]
### Round 3 - Deep Dive (2025-01-25 10:30)
...
---
## Conclusions (2025-01-25 11:00)
### Summary
...
### Key Conclusions
...
### Recommendations
...
---
## Current Understanding (Final)
### What We Established
- [confirmed points]
### What Was Clarified/Corrected
- ~~[original assumption]~~ → [corrected understanding]
### Key Insights
- [insights gained]
---
## Session Statistics
- **Total Rounds**: 3
- **Duration**: 1 hour
- **Sources Used**: codebase exploration, Gemini analysis
- **Artifacts Generated**: discussion.md, explorations.json, conclusions.json
```
## Iteration Flow
```
First Call (/workflow:analyze-with-file "topic"):
├─ No session exists → New mode
├─ Identify analysis dimensions
├─ Scope with user (unless --yes)
├─ Create discussion.md with initial understanding
├─ Launch CLI explorations
└─ Enter discussion loop
Continue Call (/workflow:analyze-with-file --continue "topic"):
├─ Session exists → Continue mode
├─ Load discussion.md
├─ Resume from last round
└─ Continue discussion loop
Discussion Loop:
├─ Present current findings
├─ Gather user feedback (AskUserQuestion)
├─ Process response:
│ ├─ Agree → Deepen analysis
│ ├─ Adjust → Change direction
│ ├─ Question → Answer then continue
│ └─ Complete → Exit loop
├─ Update discussion.md
└─ Repeat until complete or max rounds
Completion:
├─ Generate conclusions.json
├─ Update discussion.md with final synthesis
└─ Offer follow-up options (issue, task, report)
```
## CLI Integration Points
### 1. Codebase Exploration (cli-explore-agent)
**Purpose**: Gather relevant code context
**When**: Topic involves implementation or architecture analysis
### 2. Gemini Deep Analysis
**Purpose**: Conceptual analysis, pattern identification, best practices
**Prompt Pattern**:
```
PURPOSE: Analyze topic + identify insights
TASK: Explore dimensions + generate discussion points
CONTEXT: Codebase + topic
EXPECTED: Structured analysis + questions
```
### 3. Follow-up CLI Calls
**Purpose**: Deepen specific areas based on user feedback
**Dynamic invocation** based on discussion direction
## Consolidation Rules
When updating "Current Understanding":
1. **Promote confirmed insights**: Move validated findings to "What We Established"
2. **Track corrections**: Keep important wrong→right transformations
3. **Focus on current state**: What do we know NOW
4. **Avoid timeline repetition**: Don't copy discussion details
5. **Preserve key learnings**: Keep insights valuable for future reference
**Bad (cluttered)**:
```markdown
## Current Understanding
In round 1 we discussed X, then in round 2 user said Y, and we explored Z...
```
**Good (consolidated)**:
```markdown
## Current Understanding
### What We Established
- The authentication flow uses JWT with refresh tokens
- Rate limiting is implemented at API gateway level
### What Was Clarified
- ~~Assumed Redis for sessions~~ → Actually uses database-backed sessions
### Key Insights
- Current architecture supports horizontal scaling
- Security audit recommended before production
```
## Error Handling
| Situation | Action |
|-----------|--------|
| CLI exploration fails | Continue with available context, note limitation |
| User timeout in discussion | Save state, show resume command |
| Max rounds reached | Force synthesis, offer continuation option |
| No relevant findings | Broaden search, ask user for clarification |
| Session folder conflict | Append timestamp suffix |
| Gemini unavailable | Fallback to Codex or manual analysis |
## Usage Recommendations
Use `/workflow:analyze-with-file` when:
- Exploring a complex topic collaboratively
- Need documented discussion trail
- Decision-making requires multiple perspectives
- Want to iterate on understanding with user input
- Building shared understanding before implementation
Use `/workflow:debug-with-file` when:
- Diagnosing specific bugs
- Need hypothesis-driven investigation
- Focus on evidence and verification
Use `/workflow:lite-plan` when:
- Ready to implement (past analysis phase)
- Need structured task breakdown
- Focus on execution planning

File diff suppressed because it is too large

View File

@@ -1,587 +0,0 @@
---
name: api-designer
description: Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🔌 **API Designer Analysis Generator**
### Purpose
**Specialized command for generating api-designer/analysis.md** that addresses guidance-specification.md discussion points from backend API design perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **API Design Focus**: RESTful/GraphQL API design, endpoint structure, and contract definition
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **API Architecture**: RESTful/GraphQL design patterns and best practices
- **Endpoint Design**: Resource modeling, URL structure, and HTTP method selection
- **Data Contracts**: Request/response schemas, validation rules, and data transformation
- **API Documentation**: OpenAPI/Swagger specifications and developer experience
### Role Boundaries & Responsibilities
#### **What This Role OWNS (API Contract Within Architectural Framework)**
- **API Contract Definition**: Specific endpoint paths, HTTP methods, and status codes
- **Resource Modeling**: Mapping domain entities to RESTful resources or GraphQL types
- **Request/Response Schemas**: Detailed data contracts, validation rules, and error formats
- **API Versioning Strategy**: Version management, deprecation policies, and migration paths
- **Developer Experience**: API documentation (OpenAPI/Swagger), code examples, and SDKs
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **System Architecture Decisions**: Microservices vs monolithic, overall communication patterns → Defers to **System Architect**
- **Canonical Data Model**: Underlying data schemas and entity relationships → Defers to **Data Architect**
- **UI/Frontend Integration**: How clients consume the API → Defers to **UI Designer**
#### **Handoff Points**
- **FROM System Architect**: Receives architectural constraints (REST vs GraphQL, sync vs async) that define the design space
- **FROM Data Architect**: Receives canonical data model and translates it into public API data contracts (as projection/view)
- **TO Frontend Teams**: Provides complete API specifications, documentation, and integration guides
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Check existing analysis
CHECK: brainstorm_dir/api-designer/analysis.md
IF EXISTS:
SHOW existing analysis summary
ASK: "Analysis exists. Do you want to:"
OPTIONS:
1. "Update with new insights" → Update existing
2. "Replace completely" → Generate new
3. "Cancel" → Exit without changes
ELSE:
CREATE new analysis
```
### Phase 3: Agent Task Generation
**Framework-Based Analysis** (when guidance-specification.md exists):
```bash
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Generate API designer analysis addressing topic framework
## Framework Integration Required
**MANDATORY**: Load and address guidance-specification.md discussion points
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
**Output Location**: {session.brainstorm_dir}/api-designer/analysis.md
## Analysis Requirements
1. **Load Topic Framework**: Read guidance-specification.md completely
2. **Address Each Discussion Point**: Respond to all 5 framework sections from API design perspective
3. **Include Framework Reference**: Start analysis.md with @../guidance-specification.md
4. **API Design Focus**: Emphasize endpoint structure, data contracts, versioning strategies
5. **Structured Response**: Use framework structure for analysis organization
## Framework Sections to Address
- Core Requirements (from API design perspective)
- Technical Considerations (detailed API architecture analysis)
- User Experience Factors (developer experience and API usability)
- Implementation Challenges (API design risks and solutions)
- Success Metrics (API performance metrics and adoption tracking)
## Output Structure Required
```markdown
# API Designer Analysis: [Topic]
**Framework Reference**: @../guidance-specification.md
**Role Focus**: Backend API Design and Contract Definition
## Core Requirements Analysis
[Address framework requirements from API design perspective]
## Technical Considerations
[Detailed API architecture and endpoint design analysis]
## Developer Experience Factors
[API usability, documentation, and integration ease]
## Implementation Challenges
[API design risks and mitigation strategies]
## Success Metrics
[API performance metrics, adoption rates, and developer satisfaction]
## API Design-Specific Recommendations
[Detailed API design recommendations and best practices]
```",
description="Generate API designer framework-based analysis")
```
### Phase 4: Update Mechanism
**Analysis Update Process**:
```bash
# For existing analysis updates
IF update_mode = "incremental":
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Update existing API designer analysis
## Current Analysis Context
**Existing Analysis**: @{session.brainstorm_dir}/api-designer/analysis.md
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
## Update Requirements
1. **Preserve Structure**: Maintain existing analysis structure
2. **Add New Insights**: Integrate new API design insights and recommendations
3. **Framework Alignment**: Ensure continued alignment with topic framework
4. **API Updates**: Add new endpoint patterns, versioning strategies, documentation improvements
5. **Maintain References**: Keep @../guidance-specification.md reference
## Update Instructions
- Read existing analysis completely
- Identify areas for enhancement or new insights
- Add API design depth while preserving original structure
- Update recommendations with new API design patterns and approaches
- Maintain framework discussion point addressing",
description="Update API designer analysis incrementally")
```
## Document Structure
### Output Files
```
.workflow/active/WFS-[topic]/.brainstorming/
├── guidance-specification.md # Input: Framework (if exists)
└── api-designer/
└── analysis.md # ★ OUTPUT: Framework-based analysis
```
### Analysis Structure
**Required Elements**:
- **Framework Reference**: @../guidance-specification.md (if framework exists)
- **Role Focus**: Backend API Design and Contract Definition perspective
- **5 Framework Sections**: Address each framework discussion point
- **API Design Recommendations**: Endpoint-specific insights and solutions
## ⚡ **Two-Step Execution Flow**
### ⚠️ Session Management - FIRST STEP
Session detection and selection:
```bash
# Check for active sessions
active_sessions=$(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
if [ multiple_sessions ]; then
prompt_user_to_select_session()
else
use_existing_or_create_new()
fi
```
### Step 1: Context Gathering Phase
**API Designer Perspective Questioning**
Before agent assignment, gather comprehensive API design context:
#### 📋 Role-Specific Questions
1. **API Type & Architecture**
- RESTful, GraphQL, or hybrid API approach?
- Synchronous vs asynchronous communication patterns?
- Real-time requirements (WebSocket, Server-Sent Events)?
2. **Resource Modeling & Endpoints**
- What are the core domain resources/entities?
- Expected CRUD operations for each resource?
- Complex query requirements (filtering, sorting, pagination)?
3. **Data Contracts & Validation**
- Request/response data format requirements (JSON, XML, Protocol Buffers)?
- Input validation and sanitization requirements?
- Data transformation and mapping needs?
4. **API Management & Governance**
- API versioning strategy requirements?
- Authentication and authorization mechanisms?
- Rate limiting and throttling requirements?
- API documentation and developer portal needs?
5. **Integration & Compatibility**
- Client platforms consuming the API (web, mobile, third-party)?
- Backward compatibility requirements?
- External API integrations needed?
#### Context Validation
- **Minimum Response**: Each answer must be ≥50 characters
- **Re-prompting**: Insufficient detail triggers follow-up questions
- **Context Storage**: Save responses to `.brainstorming/api-designer-context.md`
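A minimal sketch of the re-prompting rule (variable names hypothetical; file path as documented above):
```bash
while [ "${#answer}" -lt 50 ]; do
  echo "Answer too short (${#answer}/50 chars) - please provide more detail."
  read -r answer
done
printf '%s\n' "$answer" >> .brainstorming/api-designer-context.md
```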
### Step 2: Agent Assignment with Flow Control
**Dedicated Agent Execution**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute dedicated api-designer conceptual analysis for: {topic}
ASSIGNED_ROLE: api-designer
OUTPUT_LOCATION: .brainstorming/api-designer/
USER_CONTEXT: {validated_responses_from_context_gathering}
Flow Control Steps:
[
{
\"step\": \"load_role_template\",
\"action\": \"Load api-designer planning template\",
\"command\": \"bash($(cat ~/.claude/workflows/cli-templates/planning-roles/api-designer.md))\",
\"output_to\": \"role_template\"
}
]
Conceptual Analysis Requirements:
- Apply api-designer perspective to topic analysis
- Focus on endpoint design, data contracts, and API governance
- Use loaded role template framework for analysis structure
- Generate role-specific deliverables in designated output location
- Address all user context from questioning phase
Deliverables:
- analysis.md: Main API design analysis
- api-specification.md: Detailed endpoint specifications
- data-contracts.md: Request/response schemas and validation rules
- api-documentation.md: API documentation strategy and templates
Embody api-designer role expertise for comprehensive conceptual planning."
```
### Progress Tracking
TodoWrite tracking for two-step process:
```json
[
{"content": "Gather API designer context through role-specific questioning", "status": "in_progress", "activeForm": "Gathering context"},
{"content": "Validate context responses and save to api-designer-context.md", "status": "pending", "activeForm": "Validating context"},
{"content": "Load api-designer planning template via flow control", "status": "pending", "activeForm": "Loading template"},
{"content": "Execute dedicated conceptual-planning-agent for api-designer role", "status": "pending", "activeForm": "Executing agent"}
]
```
## 📊 **Output Specification**
### Output Location
```
.workflow/active/WFS-{topic-slug}/.brainstorming/api-designer/
├── analysis.md # Primary API design analysis
├── api-specification.md # Detailed endpoint specifications (OpenAPI/Swagger)
├── data-contracts.md # Request/response schemas and validation rules
├── versioning-strategy.md # API versioning and backward compatibility plan
└── developer-guide.md # API usage documentation and integration examples
```
### Document Templates
#### analysis.md Structure
```markdown
# API Design Analysis: {Topic}
*Generated: {timestamp}*
## Executive Summary
[Key API design findings and recommendations overview]
## API Architecture Overview
### API Type Selection (REST/GraphQL/Hybrid)
### Communication Patterns
### Authentication & Authorization Strategy
## Resource Modeling
### Core Domain Resources
### Resource Relationships
### URL Structure and Naming Conventions
## Endpoint Design
### Resource Endpoints
- GET /api/v1/resources
- POST /api/v1/resources
- GET /api/v1/resources/{id}
- PUT /api/v1/resources/{id}
- DELETE /api/v1/resources/{id}
### Query Parameters
- Filtering: ?filter[field]=value
- Sorting: ?sort=field,-field2
- Pagination: ?page=1&limit=20
### HTTP Methods and Status Codes
- Success responses (2xx)
- Client errors (4xx)
- Server errors (5xx)
## Data Contracts
### Request Schemas
[JSON Schema or OpenAPI definitions]
### Response Schemas
[JSON Schema or OpenAPI definitions]
### Validation Rules
- Required fields
- Data types and formats
- Business logic constraints
## API Versioning Strategy
### Versioning Approach (URL/Header/Accept)
### Version Lifecycle Management
### Deprecation Policy
### Migration Paths
## Security & Governance
### Authentication Mechanisms
- OAuth 2.0 / JWT / API Keys
### Authorization Patterns
- RBAC / ABAC / Resource-based
### Rate Limiting & Throttling
### CORS and Security Headers
## Error Handling
### Standard Error Response Format
```json
{
"error": {
"code": "ERROR_CODE",
"message": "Human-readable error message",
"details": [],
"trace_id": "uuid"
}
}
```
### Error Code Taxonomy
### Validation Error Responses
## API Documentation
### OpenAPI/Swagger Specification
### Developer Portal Requirements
### Code Examples and SDKs
### Changelog and Migration Guides
## Performance Optimization
### Response Caching Strategies
### Compression (gzip, brotli)
### Field Selection (sparse fieldsets)
### Bulk Operations and Batch Endpoints
## Monitoring & Observability
### API Metrics
- Request count, latency, error rates
- Endpoint usage analytics
### Logging Strategy
### Distributed Tracing
## Developer Experience
### API Usability Assessment
### Integration Complexity
### SDK and Client Library Needs
### Sandbox and Testing Environments
```
#### api-specification.md Structure
```markdown
# API Specification: {Topic}
*OpenAPI 3.0 Specification*
## API Information
- **Title**: {API Name}
- **Version**: 1.0.0
- **Base URL**: https://api.example.com/v1
- **Contact**: api-team@example.com
## Endpoints
### Users API
#### GET /users
**Description**: Retrieve a list of users
**Parameters**:
- `page` (query, integer): Page number (default: 1)
- `limit` (query, integer): Items per page (default: 20, max: 100)
- `sort` (query, string): Sort field (e.g., "created_at", "-updated_at")
- `filter[status]` (query, string): Filter by user status
**Response 200**:
```json
{
"data": [
{
"id": "uuid",
"username": "string",
"email": "string",
"created_at": "2025-10-15T00:00:00Z"
}
],
"meta": {
"page": 1,
"limit": 20,
"total": 100
},
"links": {
"self": "/users?page=1",
"next": "/users?page=2",
"prev": null
}
}
```
#### POST /users
**Description**: Create a new user
**Request Body**:
```json
{
"username": "string (required, 3-50 chars)",
"email": "string (required, valid email)",
"password": "string (required, min 8 chars)",
"profile": {
"first_name": "string (optional)",
"last_name": "string (optional)"
}
}
```
**Response 201**:
```json
{
"data": {
"id": "uuid",
"username": "string",
"email": "string",
"created_at": "2025-10-15T00:00:00Z"
}
}
```
**Response 400** (Validation Error):
```json
{
"error": {
"code": "VALIDATION_ERROR",
"message": "Request validation failed",
"details": [
{
"field": "email",
"message": "Invalid email format"
}
]
}
}
```
[Continue for all endpoints...]
## Authentication
### OAuth 2.0 Flow
1. Client requests authorization
2. User grants permission
3. Client receives access token
4. Client uses token in requests
**Header Format**:
```
Authorization: Bearer {access_token}
```
## Rate Limiting
**Headers**:
- `X-RateLimit-Limit`: 1000
- `X-RateLimit-Remaining`: 999
- `X-RateLimit-Reset`: 1634270400
**Response 429** (Too Many Requests):
```json
{
"error": {
"code": "RATE_LIMIT_EXCEEDED",
"message": "API rate limit exceeded",
"retry_after": 3600
}
}
```
```
## 🔄 **Session Integration**
### Status Synchronization
Upon completion, update `workflow-session.json`:
```json
{
"phases": {
"BRAINSTORM": {
"api_designer": {
"status": "completed",
"completed_at": "timestamp",
"output_directory": ".workflow/active/WFS-{topic}/.brainstorming/api-designer/",
"key_insights": ["endpoint_design", "versioning_strategy", "data_contracts"]
}
}
}
}
```
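A minimal sketch of that status update, assuming `jq` is available and the structure shown above:
```bash
jq '.phases.BRAINSTORM.api_designer.status = "completed"
  | .phases.BRAINSTORM.api_designer.completed_at = (now | todate)' \
  workflow-session.json > tmp.json && mv tmp.json workflow-session.json
```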
### Cross-Role Collaboration
API designer perspective provides:
- **API Contract Specifications** → Frontend Developer
- **Data Schema Requirements** → Data Architect
- **Security Requirements** → Security Expert
- **Integration Endpoints** → System Architect
- **Performance Constraints** → DevOps Engineer
## ✅ **Quality Assurance**
### Required Analysis Elements
- [ ] Complete endpoint inventory with HTTP methods and paths
- [ ] Detailed request/response schemas with validation rules
- [ ] Clear versioning strategy and backward compatibility plan
- [ ] Comprehensive error handling and status code usage
- [ ] API documentation strategy (OpenAPI/Swagger)
### API Design Principles
- [ ] **Consistency**: Uniform naming conventions and patterns across all endpoints
- [ ] **Simplicity**: Intuitive resource modeling and URL structures
- [ ] **Flexibility**: Support for filtering, sorting, pagination, and field selection
- [ ] **Security**: Proper authentication, authorization, and input validation
- [ ] **Performance**: Caching strategies, compression, and efficient data structures
### Developer Experience Validation
- [ ] API is self-documenting with clear endpoint descriptions
- [ ] Error messages are actionable and helpful for debugging
- [ ] Response formats are consistent and predictable
- [ ] Code examples and integration guides are provided
- [ ] Sandbox environment available for testing
### Technical Completeness
- [ ] **Resource Modeling**: All domain entities mapped to API resources
- [ ] **CRUD Coverage**: Complete create, read, update, delete operations
- [ ] **Query Capabilities**: Advanced filtering, sorting, and search functionality
- [ ] **Versioning**: Clear version management and migration paths
- [ ] **Monitoring**: API metrics, logging, and tracing strategies defined
### Integration Readiness
- [ ] **Client Compatibility**: API works with all target client platforms
- [ ] **External Integration**: Third-party API dependencies identified
- [ ] **Backward Compatibility**: Changes don't break existing clients
- [ ] **Migration Path**: Clear upgrade paths for API consumers
- [ ] **SDK Support**: Client libraries and code generation considered

View File

@@ -132,55 +132,30 @@ SlashCommand(command="/workflow:brainstorm:artifacts \"{topic}\" --count {N}")
### Phase 2: Parallel Role Analysis Execution
**For Each Selected Role**:
**For Each Selected Role** (unified role-analysis command):
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute {role-name} analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: {role-name}
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/{role}/
TOPIC: {user-provided-topic}
## Flow Control Steps
1. load_topic_framework → .workflow/active/WFS-{session}/.brainstorming/guidance-specification.md
2. load_role_template → ~/.claude/workflows/cli-templates/planning-roles/{role}.md
3. load_session_metadata → .workflow/active/WFS-{session}/workflow-session.json
4. load_style_skill (ui-designer only, if style_skill_package) → .claude/skills/style-{style_skill_package}/
## Analysis Requirements
**Primary Reference**: Original user prompt from workflow-session.json is authoritative
**Framework Source**: Address all discussion points in guidance-specification.md from {role-name} perspective
**Role Focus**: {role-name} domain expertise aligned with user intent
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md** (optionally with analysis-{slug}.md sub-documents)
2. **Framework Reference**: @../guidance-specification.md
3. **User Intent Alignment**: Validate against session_context
## Completion Criteria
- Address each discussion point from guidance-specification.md with {role-name} expertise
- Provide actionable recommendations from {role-name} perspective within analysis files
- All output files MUST start with `analysis` prefix (no recommendations.md or other naming)
- Reference framework document using @ notation for integration
- Update workflow-session.json with completion status
"
SlashCommand(command="/workflow:brainstorm:role-analysis {role-name} --session {session-id} --skip-questions")
```
**Parallel Execute**:
- Launch N agents simultaneously (one message with multiple Task calls)
- Each agent task **attached** to orchestrator's TodoWrite
- All agents execute concurrently, each attaching their own analysis sub-tasks
- Each agent operates independently reading same guidance-specification.md
**What It Does**:
- Unified command execution for each role
- Loads topic framework from guidance-specification.md
- Applies role-specific template and context
- Generates analysis.md addressing framework discussion points
- Supports optional interactive context gathering (via --include-questions flag)
**Parallel Execution**:
- Launch N SlashCommand calls simultaneously (one message with multiple SlashCommand invokes)
- Each role command **attached** to orchestrator's TodoWrite
- All roles execute concurrently, each reading same guidance-specification.md
- Each role operates independently
- For ui-designer only: append `--style-skill {style_skill_package}` if provided
**Input**:
- `selected_roles[]` from Phase 1
- `session_id` from Phase 1
- guidance-specification.md path
- `guidance-specification.md` (framework reference)
- `style_skill_package` (for ui-designer only)
**Validation**:
- Each role creates `.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md`
@@ -189,6 +164,7 @@ TOPIC: {user-provided-topic}
- **FORBIDDEN**: `recommendations.md` or any non-`analysis` prefixed files
- All N role analyses completed
**TodoWrite Update (Phase 2 agents executed - tasks attached in parallel)**:
```json
[
@@ -455,4 +431,3 @@ CONTEXT_VARS:
```
**Template Source**: `~/.claude/workflows/cli-templates/planning-roles/`
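Illustrative parallel launch for three roles, mirroring the usage conventions above. This is a sketch: the role names are examples, and `{session-id}` / `{style_skill_package}` are placeholders, not real values.
```bash
# One message containing N SlashCommand invocations; all roles run concurrently
SlashCommand(command="/workflow:brainstorm:role-analysis system-architect --session {session-id} --skip-questions")
SlashCommand(command="/workflow:brainstorm:role-analysis product-manager --session {session-id} --skip-questions")
SlashCommand(command="/workflow:brainstorm:role-analysis ui-designer --session {session-id} --skip-questions --style-skill {style_skill_package}")
```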

View File

@@ -1,220 +0,0 @@
---
name: data-architect
description: Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 📊 **Data Architect Analysis Generator**
### Purpose
**Specialized command for generating data-architect/analysis.md** that addresses guidance-specification.md discussion points from data architecture perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Data Architecture Focus**: Data models, pipelines, governance, and analytics perspective
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Data Model Design**: Efficient and scalable data models and schemas
- **Data Flow Design**: Data collection, processing, and storage workflows
- **Data Quality Management**: Data accuracy, completeness, and consistency
- **Analytics and Insights**: Data analysis and business intelligence solutions
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Canonical Data Model - Source of Truth)**
- **Canonical Data Model**: The authoritative, system-wide data schema representing domain entities and relationships
- **Entity-Relationship Design**: Defining entities, attributes, relationships, and constraints
- **Data Normalization & Optimization**: Ensuring data integrity, reducing redundancy, and optimizing storage
- **Database Schema Design**: Physical database structures, indexes, partitioning strategies
- **Data Pipeline Architecture**: ETL/ELT processes, data warehousing, and analytics pipelines
- **Data Governance**: Data quality standards, retention policies, and compliance requirements
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **API Data Contracts**: Public-facing request/response schemas exposed by APIs → Defers to **API Designer**
- **System Integration Patterns**: How services communicate at the macro level → Defers to **System Architect**
- **UI Data Presentation**: How data is displayed to users → Defers to **UI Designer**
#### **Handoff Points**
- **TO API Designer**: Provides canonical data model that API Designer translates into public API data contracts (as projection/view)
- **TO System Architect**: Provides data flow requirements and storage constraints to inform system design
- **FROM System Architect**: Receives system-level integration requirements and scalability constraints
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute data-architect analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: data-architect
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/data-architect/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load data-architect planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/data-architect.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from data architecture perspective
**Role Focus**: Data models, pipelines, governance, analytics platforms
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive data architecture analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with data architecture expertise
- Provide data model designs, pipeline architectures, and governance strategies
- Include scalability, performance, and quality considerations
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute data-architect analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing data-architect framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured data-architect analysis"
},
{
content: "Update workflow-session.json with data-architect completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/data-architect/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Data Architect Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Data Architecture perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with data architecture expertise]
### Core Requirements (from framework)
[Data architecture perspective on requirements]
### Technical Considerations (from framework)
[Data model, pipeline, and storage considerations]
### User Experience Factors (from framework)
[Data access patterns and analytics user experience]
### Implementation Challenges (from framework)
[Data migration, quality, and governance challenges]
### Success Metrics (from framework)
[Data quality metrics and analytics success criteria]
## Data Architecture Specific Recommendations
[Role-specific data architecture recommendations and solutions]
---
*Generated by data-architect analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"data_architect": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/data-architect/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Data architecture insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,200 +0,0 @@
---
name: product-manager
description: Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **Product Manager Analysis Generator**
### Purpose
**Specialized command for generating product-manager/analysis.md** that addresses guidance-specification.md discussion points from product strategy perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Product Strategy Focus**: User needs, business value, and market positioning
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **User Needs Analysis**: Target users, problems, and value propositions
- **Business Impact Assessment**: ROI, metrics, and commercial outcomes
- **Market Positioning**: Competitive analysis and differentiation
- **Product Strategy**: Roadmaps, priorities, and go-to-market approaches
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute product-manager analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: product-manager
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/product-manager/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load product-manager planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/product-manager.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from product strategy perspective
**Role Focus**: User value, business impact, market positioning, product strategy
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive product strategy analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with product management expertise
- Provide actionable business strategies and user value propositions
- Include market analysis and competitive positioning insights
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute product-manager analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing product-manager framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured product-manager analysis"
},
{
content: "Update workflow-session.json with product-manager completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/product-manager/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Product Manager Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Product Strategy perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with product management expertise]
### Core Requirements (from framework)
[Product strategy perspective on user needs and requirements]
### Technical Considerations (from framework)
[Business and technical feasibility considerations]
### User Experience Factors (from framework)
[User value proposition and market positioning analysis]
### Implementation Challenges (from framework)
[Business execution and go-to-market considerations]
### Success Metrics (from framework)
[Product success metrics and business KPIs]
## Product Strategy Specific Recommendations
[Role-specific product management strategies and business solutions]
---
*Generated by product-manager analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"product_manager": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/product-manager/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Product strategy insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,200 +0,0 @@
---
name: product-owner
description: Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **Product Owner Analysis Generator**
### Purpose
**Specialized command for generating product-owner/analysis.md** that addresses guidance-specification.md discussion points from product backlog and feature prioritization perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Product Backlog Focus**: Feature prioritization, user stories, and acceptance criteria
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Backlog Management**: User story creation, refinement, and prioritization
- **Stakeholder Alignment**: Requirements gathering, value definition, and expectation management
- **Feature Prioritization**: ROI analysis, MoSCoW method, and value-driven delivery
- **Acceptance Criteria**: Definition of Done, acceptance testing, and quality standards
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute product-owner analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: product-owner
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/product-owner/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load product-owner planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/product-owner.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from product backlog and feature prioritization perspective
**Role Focus**: Backlog management, stakeholder alignment, feature prioritization, acceptance criteria
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive product ownership analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with product ownership expertise
- Provide actionable user stories and acceptance criteria definitions
- Include feature prioritization and stakeholder alignment strategies
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute product-owner analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing product-owner framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured product-owner analysis"
},
{
content: "Update workflow-session.json with product-owner completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/product-owner/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Product Owner Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Product Backlog & Feature Prioritization perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with product ownership expertise]
### Core Requirements (from framework)
[User story formulation and backlog refinement perspective]
### Technical Considerations (from framework)
[Technical feasibility and implementation sequencing considerations]
### User Experience Factors (from framework)
[User value definition and acceptance criteria analysis]
### Implementation Challenges (from framework)
[Sprint planning, dependency management, and delivery strategies]
### Success Metrics (from framework)
[Feature adoption, value delivery metrics, and stakeholder satisfaction indicators]
## Product Owner Specific Recommendations
[Role-specific backlog management and feature prioritization strategies]
---
*Generated by product-owner analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"product_owner": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/product-owner/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Product ownership insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -0,0 +1,705 @@
---
name: role-analysis
description: Unified role-specific analysis generation with interactive context gathering and incremental updates
argument-hint: "[role-name] [--session session-id] [--update] [--include-questions] [--skip-questions]"
allowed-tools: Task(conceptual-planning-agent), AskUserQuestion(*), TodoWrite(*), Read(*), Write(*), Edit(*), Glob(*)
---
## 🎯 **Unified Role Analysis Generator**
### Purpose
**Unified command for generating and updating role-specific analysis** with interactive context gathering, framework alignment, and incremental update support. Replaces the 9 individual role commands with a single parameterized workflow.
### Core Function
- **Multi-Role Support**: Single command supports all 9 brainstorming roles
- **Interactive Context**: Dynamic question generation based on role and framework
- **Incremental Updates**: Merge new insights into existing analyses
- **Framework Alignment**: Address guidance-specification.md discussion points
- **Agent Delegation**: Use conceptual-planning-agent with role-specific templates
### Supported Roles
| Role ID | Title | Focus Area | Context Questions |
|---------|-------|------------|-------------------|
| `ux-expert` | UX专家 | User research, information architecture, user journey | 4 |
| `ui-designer` | UI设计师 | Visual design, high-fidelity mockups, design systems | 4 |
| `system-architect` | 系统架构师 | Technical architecture, scalability, integration patterns | 5 |
| `product-manager` | 产品经理 | Product strategy, roadmap, prioritization | 4 |
| `product-owner` | 产品负责人 | Backlog management, user stories, acceptance criteria | 4 |
| `scrum-master` | 敏捷教练 | Process facilitation, impediment removal, team dynamics | 3 |
| `subject-matter-expert` | 领域专家 | Domain knowledge, business rules, compliance | 4 |
| `data-architect` | 数据架构师 | Data models, storage strategies, data flow | 5 |
| `api-designer` | API设计师 | API contracts, versioning, integration patterns | 4 |
---
## 📋 **Usage**
```bash
# Generate new analysis with interactive context
/workflow:brainstorm:role-analysis ux-expert
# Generate with existing framework + context questions
/workflow:brainstorm:role-analysis system-architect --session WFS-xxx --include-questions
# Update existing analysis (incremental merge)
/workflow:brainstorm:role-analysis ui-designer --session WFS-xxx --update
# Quick generation (skip interactive context)
/workflow:brainstorm:role-analysis product-manager --session WFS-xxx --skip-questions
```
---
## ⚙️ **Execution Protocol**
### Phase 1: Detection & Validation
**Step 1.1: Role Validation**
```bash
VALIDATE role_name IN [
ux-expert, ui-designer, system-architect, product-manager,
product-owner, scrum-master, subject-matter-expert,
data-architect, api-designer
]
IF invalid:
ERROR: "Unknown role: {role_name}. Use one of: ux-expert, ui-designer, ..."
EXIT
```
**Step 1.2: Session Detection**
```bash
IF --session PROVIDED:
session_id = --session
brainstorm_dir = .workflow/active/{session_id}/.brainstorming/
ELSE:
FIND .workflow/active/WFS-*/
IF multiple:
PROMPT user to select
ELSE IF single:
USE existing
ELSE:
ERROR: "No active session. Run /workflow:brainstorm:artifacts first"
EXIT
VALIDATE brainstorm_dir EXISTS
```
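A minimal Node.js sketch of this auto-detection, assuming the `.workflow/active/` layout above and a synchronous CLI context; the interactive "select a session" prompt is reduced to an error here.
```javascript
const fs = require('fs');
const path = require('path');

// Sketch: resolve the brainstorming directory per Step 1.2.
// Assumes cwd is the project root.
function resolveSession(sessionId) {
  const activeDir = '.workflow/active';
  if (sessionId) {
    return path.join(activeDir, sessionId, '.brainstorming');
  }
  const sessions = fs.existsSync(activeDir)
    ? fs.readdirSync(activeDir).filter(name => name.startsWith('WFS-'))
    : [];
  if (sessions.length === 0) {
    throw new Error('No active session. Run /workflow:brainstorm:artifacts first');
  }
  if (sessions.length > 1) {
    throw new Error(`Multiple sessions found (${sessions.join(', ')}); pass --session explicitly`);
  }
  return path.join(activeDir, sessions[0], '.brainstorming');
}
```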
**Step 1.3: Framework Detection**
```bash
framework_file = {brainstorm_dir}/guidance-specification.md
IF framework_file EXISTS:
framework_mode = true
LOAD framework_content
ELSE:
WARN: "No framework found - will create standalone analysis"
framework_mode = false
```
**Step 1.4: Update Mode Detection**
```bash
existing_analysis = {brainstorm_dir}/{role_name}/analysis*.md
IF --update FLAG OR existing_analysis EXISTS:
update_mode = true
IF --update NOT PROVIDED:
ASK: "Analysis exists. Update or regenerate?"
OPTIONS: ["Incremental update", "Full regenerate", "Cancel"]
ELSE:
update_mode = false
```
### Phase 2: Interactive Context Gathering
**Trigger Conditions**:
- Default: Always ask unless `--skip-questions` provided
- `--include-questions`: Force context gathering even if analysis exists
- `--skip-questions`: Skip all interactive questions
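These conditions reduce to a small predicate; a sketch follows. The behavior when both flags are passed is an assumption, since the spec leaves that case undefined.
```javascript
// Sketch: Phase 2 trigger resolution. --skip-questions wins over
// --include-questions here (assumption: the spec doesn't define the conflict).
function shouldGatherContext({ skipQuestions = false, includeQuestions = false }) {
  if (skipQuestions) return false;   // explicit opt-out
  if (includeQuestions) return true; // force gathering even if analysis exists
  return true;                       // default: always ask
}
```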
**Step 2.1: Load Role Configuration**
```javascript
const roleConfig = {
'ux-expert': {
title: 'UX专家',
focus_area: 'User research, information architecture, user journey',
question_categories: ['User Intent', 'Requirements', 'UX'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/ux-expert.md'
},
'ui-designer': {
title: 'UI设计师',
focus_area: 'Visual design, high-fidelity mockups, design systems',
question_categories: ['Requirements', 'UX', 'Feasibility'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/ui-designer.md'
},
'system-architect': {
title: '系统架构师',
focus_area: 'Technical architecture, scalability, integration patterns',
question_categories: ['Scale & Performance', 'Technical Constraints', 'Architecture Complexity', 'Non-Functional Requirements'],
question_count: 5,
template: '~/.claude/workflows/cli-templates/planning-roles/system-architect.md'
},
'product-manager': {
title: '产品经理',
focus_area: 'Product strategy, roadmap, prioritization',
question_categories: ['User Intent', 'Requirements', 'Process'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/product-manager.md'
},
'product-owner': {
title: '产品负责人',
focus_area: 'Backlog management, user stories, acceptance criteria',
question_categories: ['Requirements', 'Decisions', 'Process'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/product-owner.md'
},
'scrum-master': {
title: '敏捷教练',
focus_area: 'Process facilitation, impediment removal, team dynamics',
question_categories: ['Process', 'Risk', 'Decisions'],
question_count: 3,
template: '~/.claude/workflows/cli-templates/planning-roles/scrum-master.md'
},
'subject-matter-expert': {
title: '领域专家',
focus_area: 'Domain knowledge, business rules, compliance',
question_categories: ['Requirements', 'Feasibility', 'Terminology'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/subject-matter-expert.md'
},
'data-architect': {
title: '数据架构师',
focus_area: 'Data models, storage strategies, data flow',
question_categories: ['Architecture', 'Scale & Performance', 'Technical Constraints', 'Feasibility'],
question_count: 5,
template: '~/.claude/workflows/cli-templates/planning-roles/data-architect.md'
},
'api-designer': {
title: 'API设计师',
focus_area: 'API contracts, versioning, integration patterns',
question_categories: ['Architecture', 'Requirements', 'Feasibility', 'Decisions'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/api-designer.md'
}
};
config = roleConfig[role_name];
```
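Since the valid-role list in Step 1.1 duplicates these keys, a lookup helper can derive one from the other; a minimal sketch (`getRoleConfig` is illustrative, not part of the spec):
```javascript
// Sketch: derive Step 1.1's valid-role list from roleConfig instead of hard-coding it.
const VALID_ROLES = Object.keys(roleConfig);

function getRoleConfig(roleName) {
  if (!VALID_ROLES.includes(roleName)) {
    throw new Error(`Unknown role: ${roleName}. Valid roles: ${VALID_ROLES.join(', ')}`);
  }
  return roleConfig[roleName];
}
```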
**Step 2.2: Generate Role-Specific Questions**
**Question Category Taxonomy** (9 core categories from synthesis.md, plus 4 architecture-focused extensions):
| Category | Focus | Example Question Pattern |
|----------|-------|--------------------------|
| User Intent | User goals | "该分析的核心目标是什么?" |
| Requirements | Requirement refinement | "需求的优先级如何排序?" |
| Architecture | Architecture decisions | "技术栈的选择考量?" |
| UX | User experience | "交互复杂度的取舍?" |
| Feasibility | Feasibility | "资源约束下的实现范围?" |
| Risk | Risk management | "风险容忍度是多少?" |
| Process | Process cadence | "开发迭代的节奏?" |
| Decisions | Decision confirmation | "冲突的解决方案?" |
| Terminology | Terminology alignment | "统一使用哪个术语?" |
| Scale & Performance | Performance & scaling | "预期的负载和性能要求?" |
| Technical Constraints | Technical constraints | "现有技术栈的限制?" |
| Architecture Complexity | Complexity trade-offs | "架构的复杂度权衡?" |
| Non-Functional Requirements | Availability & maintainability | "可用性和可维护性要求?" |
**Question Generation Algorithm**:
```javascript
async function generateQuestions(role_name, framework_content) {
const config = roleConfig[role_name];
const questions = [];
// Parse framework for keywords
const keywords = extractKeywords(framework_content);
// Generate category-specific questions
for (const category of config.question_categories) {
const question = generateCategoryQuestion(category, keywords, role_name);
questions.push(question);
}
return questions.slice(0, config.question_count);
}
```
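The helpers referenced above (`extractKeywords`, `generateCategoryQuestion`) are not defined in this spec; one plausible shape is sketched below, with the pattern strings taken from the taxonomy table. The patterns stay in Chinese because the quality rules below require questions to be asked in Chinese.
```javascript
// Sketch of the undefined helpers; the pattern table mirrors the taxonomy above.
const CATEGORY_PATTERNS = {
  'User Intent': '该分析的核心目标是什么?',
  'Requirements': '需求的优先级如何排序?',
  'Architecture': '技术栈的选择考量?',
  'UX': '交互复杂度的取舍?'
  // ...remaining categories follow the taxonomy table
};

// Naive keyword extraction: level-2/3 headings from the framework markdown.
function extractKeywords(markdown) {
  return [...markdown.matchAll(/^#{2,3}\s+(.+)$/gm)].map(m => m[1].trim());
}

// Build one question object in the shape Step 2.3 consumes.
// roleName is reserved for role-specific phrasing.
function generateCategoryQuestion(category, keywords, roleName) {
  return {
    category,
    question: CATEGORY_PATTERNS[category] ?? `关于${category}的关键考量是什么?`,
    options: keywords.slice(0, 3).map(k => ({
      label: k,
      description: `围绕 ${k} 的业务取舍` // business trade-off framing per the quality rules
    }))
  };
}
```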
**Step 2.3: Multi-Round Question Execution**
```javascript
const BATCH_SIZE = 4;
const user_context = {};
for (let i = 0; i < questions.length; i += BATCH_SIZE) {
const batch = questions.slice(i, i + BATCH_SIZE);
const currentRound = Math.floor(i / BATCH_SIZE) + 1;
const totalRounds = Math.ceil(questions.length / BATCH_SIZE);
console.log(`\n[Round ${currentRound}/${totalRounds}] ${config.title} 上下文询问\n`);
const responses = AskUserQuestion({
questions: batch.map(q => ({
question: q.question,
header: q.category.substring(0, 12),
multiSelect: false,
options: q.options.map(opt => ({
label: opt.label,
description: opt.description
}))
}))
});
// Store responses before next round
for (const answer of responses) {
user_context[answer.question] = {
answer: answer.selected,
category: answer.category,
timestamp: new Date().toISOString()
};
}
}
// Save context to file
Write(
`${brainstorm_dir}/${role_name}/${role_name}-context.md`,
formatUserContext(user_context)
);
```
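`formatUserContext` is referenced above but never defined; a minimal serialization sketch, assuming the `{role-name}-context.md` layout implied by the directory structure below:
```javascript
// Sketch: serialize gathered answers into {role-name}-context.md markdown.
function formatUserContext(userContext) {
  const lines = ['# Interactive Context', ''];
  for (const [question, a] of Object.entries(userContext)) {
    lines.push(`## ${a.category}`);
    lines.push(`- **Q**: ${question}`);
    lines.push(`- **A**: ${a.answer}`);
    lines.push(`- **At**: ${a.timestamp}`);
    lines.push('');
  }
  return lines.join('\n');
}
```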
**Question Quality Rules** (from artifacts.md):
**MUST Include**:
- ✅ All questions in Chinese (用中文提问)
- ✅ Business scenario stated as the premise of each question (业务场景作为问题前提)
- ✅ Business impact of each technical option explained (技术选项的业务影响说明)
- ✅ Quantified metrics and constraint conditions (量化指标和约束条件)
**MUST Avoid**:
- ❌ Pure technology selection with no business context (纯技术选型无业务上下文)
- ❌ Overly abstract, generic questions (过度抽象的通用问题)
- ❌ Repeated questions detached from the framework (脱离框架的重复询问)
### Phase 3: Agent Execution
**Step 3.1: Load Session Metadata**
```bash
session_metadata = Read(.workflow/active/{session_id}/workflow-session.json)
original_topic = session_metadata.topic
selected_roles = session_metadata.selected_roles
```
**Step 3.2: Prepare Agent Context**
```javascript
const agentContext = {
role_name: role_name,
role_config: roleConfig[role_name],
output_location: `${brainstorm_dir}/${role_name}/`,
framework_mode: framework_mode,
framework_path: framework_mode ? `${brainstorm_dir}/guidance-specification.md` : null,
update_mode: update_mode,
user_context: user_context,
original_topic: original_topic,
session_id: session_id
};
```
**Step 3.3: Execute Conceptual Planning Agent**
**Framework-Based Analysis** (when guidance-specification.md exists):
```javascript
Task(
subagent_type="conceptual-planning-agent",
run_in_background=false,
description=`Generate ${role_name} analysis`,
prompt=`
[FLOW_CONTROL]
Execute ${role_name} analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: ${role_name}
OUTPUT_LOCATION: ${agentContext.output_location}
ANALYSIS_MODE: ${framework_mode ? "framework_based" : "standalone"}
UPDATE_MODE: ${update_mode}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(${agentContext.framework_path})
- Output: topic_framework_content
2. **load_role_template**
- Action: Load ${role_name} planning template
- Command: Read(${roleConfig[role_name].template})
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and user intent
- Command: Read(.workflow/active/${session_id}/workflow-session.json)
- Output: session_context
4. **load_user_context** (if exists)
- Action: Load interactive context responses
- Command: Read(${brainstorm_dir}/${role_name}/${role_name}-context.md)
- Output: user_context_answers
5. **${update_mode ? 'load_existing_analysis' : 'skip'}**
${update_mode ? `
- Action: Load existing analysis for incremental update
- Command: Read(${brainstorm_dir}/${role_name}/analysis.md)
- Output: existing_analysis_content
` : ''}
## Analysis Requirements
**Primary Reference**: Original user prompt from workflow-session.json is authoritative
**Framework Source**: Address all discussion points in guidance-specification.md from ${role_name} perspective
**User Context Integration**: Incorporate interactive Q&A responses into analysis
**Role Focus**: ${roleConfig[role_name].focus_area}
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md** (main document, optionally with analysis-{slug}.md sub-documents)
2. **Framework Reference**: @../guidance-specification.md (if framework_mode)
3. **User Context Reference**: @./${role_name}-context.md (if user context exists)
4. **User Intent Alignment**: Validate against session_context
## Update Requirements (if UPDATE_MODE)
- **Preserve Structure**: Maintain existing analysis structure
- **Add "Clarifications" Section**: Document new user context with timestamp
- **Merge Insights**: Integrate new perspectives without removing existing content
- **Resolve Conflicts**: If new context contradicts existing analysis, document both and recommend resolution
## Completion Criteria
- Address each discussion point from guidance-specification.md with ${role_name} expertise
- Provide actionable recommendations from ${role_name} perspective within analysis files
- All output files MUST start with "analysis" prefix (no recommendations.md or other naming)
- Reference framework document using @ notation for integration
- Update workflow-session.json with completion status
`
);
```
### Phase 4: Validation & Finalization
**Step 4.1: Validate Output**
```bash
VERIFY EXISTS: ${brainstorm_dir}/${role_name}/analysis.md
VERIFY CONTAINS: "@../guidance-specification.md" (if framework_mode)
IF user_context EXISTS:
VERIFY CONTAINS: "@./${role_name}-context.md" OR "## Clarifications" section
```
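A hedged Node.js equivalent of these checks; paths follow the directory layout below, and throwing on the first failure is an assumption.
```javascript
const fs = require('fs');
const path = require('path');

// Sketch: Step 4.1 output validation; throws on the first failed check.
function validateOutput(brainstormDir, roleName, { frameworkMode, hasUserContext }) {
  const analysisPath = path.join(brainstormDir, roleName, 'analysis.md');
  if (!fs.existsSync(analysisPath)) {
    throw new Error(`Missing required output: ${analysisPath}`);
  }
  const body = fs.readFileSync(analysisPath, 'utf8');
  if (frameworkMode && !body.includes('@../guidance-specification.md')) {
    throw new Error('analysis.md is missing the framework reference');
  }
  if (hasUserContext &&
      !body.includes(`@./${roleName}-context.md`) &&
      !body.includes('## Clarifications')) {
    throw new Error('analysis.md is missing the user-context reference or Clarifications section');
  }
}
```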
**Step 4.2: Update Session Metadata**
```json
{
"phases": {
"BRAINSTORM": {
"${role_name}": {
"status": "${update_mode ? 'updated' : 'completed'}",
"completed_at": "timestamp",
"framework_addressed": true,
"context_gathered": user_context ? true : false,
"output_location": "${brainstorm_dir}/${role_name}/analysis.md",
"update_history": [
{
"timestamp": "ISO8601",
"mode": "${update_mode ? 'incremental' : 'initial'}",
"context_questions": question_count
}
]
}
}
}
}
```
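A sketch of the corresponding read-modify-write, with field names taken from the JSON above; atomic writes and file locking are out of scope here.
```javascript
const fs = require('fs');
const path = require('path');

// Sketch: Step 4.2 merge of role completion status into workflow-session.json.
function updateSessionMetadata(sessionId, roleName, entry) {
  const file = path.join('.workflow/active', sessionId, 'workflow-session.json');
  const session = JSON.parse(fs.readFileSync(file, 'utf8'));
  session.phases = session.phases || {};
  session.phases.BRAINSTORM = session.phases.BRAINSTORM || {};
  const prev = session.phases.BRAINSTORM[roleName] || {};
  session.phases.BRAINSTORM[roleName] = {
    ...prev,
    ...entry,
    // Append rather than overwrite the update history.
    update_history: [...(prev.update_history || []), ...(entry.update_history || [])]
  };
  fs.writeFileSync(file, JSON.stringify(session, null, 2));
}
```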
**Step 4.3: Completion Report**
```markdown
✅ ${roleConfig[role_name].title} Analysis Complete
**Output**: ${brainstorm_dir}/${role_name}/analysis.md
**Mode**: ${update_mode ? 'Incremental Update' : 'New Generation'}
**Framework**: ${framework_mode ? '✓ Aligned' : '✗ Standalone'}
**Context Questions**: ${question_count} answered
${update_mode ? '
**Changes**:
- Added "Clarifications" section with new user context
- Merged new insights into existing sections
- Resolved conflicts with framework alignment
' : ''}
**Next Steps**:
${selected_roles.length > 1 ? `
- Continue with other roles: ${selected_roles.filter(r => r !== role_name).join(', ')}
- Run synthesis: /workflow:brainstorm:synthesis --session ${session_id}
` : `
- Clarify insights: /workflow:brainstorm:synthesis --session ${session_id}
- Generate plan: /workflow:plan --session ${session_id}
`}
```
---
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Phase 1: Detect session and validate role configuration",
status: "in_progress",
activeForm: "Detecting session and role"
},
{
content: "Phase 2: Interactive context gathering with AskUserQuestion",
status: "pending",
activeForm: "Gathering user context"
},
{
content: "Phase 3: Execute conceptual-planning-agent for role analysis",
status: "pending",
activeForm: "Executing agent analysis"
},
{
content: "Phase 4: Validate output and update session metadata",
status: "pending",
activeForm: "Finalizing and validating"
}
]
});
```
---
## 📊 **Output Structure**
### Directory Layout
```
.workflow/active/WFS-{session}/.brainstorming/
├── guidance-specification.md # Framework (if exists)
└── {role-name}/
├── {role-name}-context.md # Interactive Q&A responses
├── analysis.md # Main analysis (REQUIRED)
└── analysis-{slug}.md # Section documents (optional, max 5)
```
### Analysis Document Structure (New Generation)
```markdown
# ${roleConfig[role_name].title} Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: ${roleConfig[role_name].focus_area}
**User Context**: @./${role_name}-context.md
## User Context Summary
**Context Gathered**: ${question_count} questions answered
**Categories**: ${question_categories.join(', ')}
${user_context ? formatContextSummary(user_context) : ''}
## Discussion Points Analysis
[Address each point from guidance-specification.md with ${role_name} expertise]
### Core Requirements (from framework)
[Role-specific perspective on requirements]
### Technical Considerations (from framework)
[Role-specific technical analysis]
### User Experience Factors (from framework)
[Role-specific UX considerations]
### Implementation Challenges (from framework)
[Role-specific challenges and solutions]
### Success Metrics (from framework)
[Role-specific metrics and KPIs]
## ${roleConfig[role_name].title} Specific Recommendations
[Role-specific actionable strategies]
---
*Generated by ${role_name} analysis addressing structured framework*
*Context gathered: ${new Date().toISOString()}*
```
### Analysis Document Structure (Incremental Update)
```markdown
# ${roleConfig[role_name].title} Analysis: [Topic]
## Framework Reference
[Existing content preserved]
## Clarifications
### Session ${new Date().toISOString().split('T')[0]}
${Object.entries(user_context).map(([q, a]) => `
- **Q**: ${q} (Category: ${a.category})
**A**: ${a.answer}
`).join('\n')}
## User Context Summary
[Updated with new context]
## Discussion Points Analysis
[Existing content enhanced with new insights]
[Rest of sections updated based on clarifications]
```
---
## 🔄 **Integration with Other Commands**
### Called By
- `/workflow:brainstorm:auto-parallel` (Phase 2 - parallel role execution)
- Manual invocation for single-role analysis
### Calls To
- `conceptual-planning-agent` (agent execution)
- `AskUserQuestion` (interactive context gathering)
### Coordinates With
- `/workflow:brainstorm:artifacts` (creates framework for role analysis)
- `/workflow:brainstorm:synthesis` (reads role analyses for integration)
---
## ✅ **Quality Assurance**
### Required Analysis Elements
- [ ] Framework discussion points addressed (if framework_mode)
- [ ] User context integrated (if context gathered)
- [ ] Role template guidelines applied
- [ ] Output files follow naming convention (analysis*.md only)
- [ ] Framework reference using @ notation
- [ ] Session metadata updated
### Context Quality
- [ ] Questions in Chinese with business context
- [ ] Options include technical trade-offs
- [ ] Categories aligned with role focus
- [ ] No generic questions unrelated to framework
### Update Quality (if update_mode)
- [ ] "Clarifications" section added with timestamp
- [ ] New insights merged without content loss
- [ ] Conflicts documented and resolved
- [ ] Framework alignment maintained
---
## 🎛️ **Command Parameters**
### Required Parameters
- `[role-name]`: Role identifier (ux-expert, ui-designer, system-architect, etc.)
### Optional Parameters
- `--session [session-id]`: Specify brainstorming session (auto-detect if omitted)
- `--update`: Force incremental update mode (auto-detect if analysis exists)
- `--include-questions`: Force context gathering even if analysis exists
- `--skip-questions`: Skip all interactive context gathering
- `--style-skill [package]`: For ui-designer only, load style SKILL package
### Parameter Combinations
| Scenario | Command | Behavior |
|----------|---------|----------|
| New analysis | `role-analysis ux-expert` | Generate + ask context questions |
| Quick generation | `role-analysis ux-expert --skip-questions` | Generate without context |
| Update existing | `role-analysis ux-expert --update` | Ask clarifications + merge |
| Force questions | `role-analysis ux-expert --include-questions` | Ask even if exists |
| Specific session | `role-analysis ux-expert --session WFS-xxx` | Target specific session |
---
## 🚫 **Error Handling**
### Invalid Role Name
```
ERROR: Unknown role: "ui-expert"
Valid roles: ux-expert, ui-designer, system-architect, product-manager,
product-owner, scrum-master, subject-matter-expert,
data-architect, api-designer
```
### No Active Session
```
ERROR: No active brainstorming session found
Run: /workflow:brainstorm:artifacts "[topic]" to create session
```
### Missing Framework (with warning)
```
WARN: No guidance-specification.md found
Generating standalone analysis without framework alignment
Recommend: Run /workflow:brainstorm:artifacts first for better results
```
### Agent Execution Failure
```
ERROR: Conceptual planning agent failed
Check: ${brainstorm_dir}/${role_name}/error.log
Action: Retry with --skip-questions or check framework validity
```
---
## 🔧 **Advanced Usage**
### Batch Role Generation (via auto-parallel)
```bash
# This command handles multiple roles in parallel
/workflow:brainstorm:auto-parallel "topic" --count 3
# → Internally calls role-analysis for each selected role
```
### Manual Multi-Role Workflow
```bash
# 1. Create framework
/workflow:brainstorm:artifacts "Build real-time collaboration platform" --count 3
# 2. Generate each role with context
/workflow:brainstorm:role-analysis system-architect --include-questions
/workflow:brainstorm:role-analysis ui-designer --include-questions
/workflow:brainstorm:role-analysis product-manager --include-questions
# 3. Synthesize insights
/workflow:brainstorm:synthesis --session WFS-xxx
```
### Iterative Refinement
```bash
# Initial generation
/workflow:brainstorm:role-analysis ux-expert
# User reviews and wants more depth
/workflow:brainstorm:role-analysis ux-expert --update --include-questions
# → Asks clarification questions, merges new insights
```
---
## 📚 **Reference Information**
### Role Template Locations
- Templates: `~/.claude/workflows/cli-templates/planning-roles/`
- Format: `{role-name}.md` (e.g., `ux-expert.md`, `system-architect.md`)
### Related Commands
- `/workflow:brainstorm:artifacts` - Create framework and select roles
- `/workflow:brainstorm:auto-parallel` - Parallel multi-role execution
- `/workflow:brainstorm:synthesis` - Integrate role analyses
- `/workflow:plan` - Generate implementation plan from synthesis
### Context Package
- Location: `.workflow/active/WFS-{session}/.process/context-package.json`
- Used by: `context-search-agent` (Phase 0 of artifacts)
- Contains: Project context, tech stack, conflict risks

View File

@@ -1,200 +0,0 @@
---
name: scrum-master
description: Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **Scrum Master Analysis Generator**
### Purpose
**Specialized command for generating scrum-master/analysis.md** that addresses guidance-specification.md discussion points from agile process and team collaboration perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Agile Process Focus**: Sprint planning, team dynamics, and delivery optimization
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Sprint Planning**: Task breakdown, estimation, and iteration planning
- **Team Collaboration**: Communication patterns, impediment removal, and facilitation
- **Process Optimization**: Agile ceremonies, retrospectives, and continuous improvement
- **Delivery Management**: Velocity tracking, burndown analysis, and release planning
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute scrum-master analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: scrum-master
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/scrum-master/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load scrum-master planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/scrum-master.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from agile process and team collaboration perspective
**Role Focus**: Sprint planning, team dynamics, process optimization, delivery management
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive agile process analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with scrum mastery expertise
- Provide actionable sprint planning and team facilitation strategies
- Include process optimization and impediment removal insights
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute scrum-master analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing scrum-master framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured scrum-master analysis"
},
{
content: "Update workflow-session.json with scrum-master completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/scrum-master/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Scrum Master Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Agile Process & Team Collaboration perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with scrum mastery expertise]
### Core Requirements (from framework)
[Sprint planning and iteration breakdown perspective]
### Technical Considerations (from framework)
[Technical debt management and process considerations]
### User Experience Factors (from framework)
[User story refinement and acceptance criteria analysis]
### Implementation Challenges (from framework)
[Impediment identification and removal strategies]
### Success Metrics (from framework)
[Velocity tracking, burndown metrics, and team performance indicators]
## Scrum Master Specific Recommendations
[Role-specific agile process optimization and team facilitation strategies]
---
*Generated by scrum-master analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"scrum_master": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/scrum-master/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Agile process insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,200 +0,0 @@
---
name: subject-matter-expert
description: Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **Subject Matter Expert Analysis Generator**
### Purpose
**Specialized command for generating subject-matter-expert/analysis.md** that addresses guidance-specification.md discussion points from domain knowledge and technical expertise perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Domain Expertise Focus**: Deep technical knowledge, industry standards, and best practices
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Domain Knowledge**: Industry-specific expertise, regulatory requirements, and compliance
- **Technical Standards**: Best practices, design patterns, and architectural guidelines
- **Risk Assessment**: Technical debt, scalability concerns, and maintenance implications
- **Knowledge Transfer**: Documentation strategies, training requirements, and expertise sharing
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute subject-matter-expert analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: subject-matter-expert
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load subject-matter-expert planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/subject-matter-expert.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from domain expertise and technical standards perspective
**Role Focus**: Domain knowledge, technical standards, risk assessment, knowledge transfer
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive domain expertise analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with subject matter expertise
- Provide actionable technical standards and best practices recommendations
- Include risk assessment and compliance considerations
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute subject-matter-expert analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing subject-matter-expert framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured subject-matter-expert analysis"
},
{
content: "Update workflow-session.json with subject-matter-expert completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Subject Matter Expert Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Domain Expertise & Technical Standards perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with subject matter expertise]
### Core Requirements (from framework)
[Domain-specific requirements and industry standards perspective]
### Technical Considerations (from framework)
[Deep technical analysis, architectural patterns, and best practices]
### User Experience Factors (from framework)
[Domain-specific usability standards and industry conventions]
### Implementation Challenges (from framework)
[Technical risks, scalability concerns, and maintenance implications]
### Success Metrics (from framework)
[Domain-specific KPIs, compliance metrics, and quality standards]
## Subject Matter Expert Specific Recommendations
[Role-specific technical expertise and industry best practices]
---
*Generated by subject-matter-expert analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"subject_matter_expert": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Domain expertise insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,389 +0,0 @@
---
name: system-architect
description: Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🏗️ **System Architect Analysis Generator**
### Purpose
**Specialized command for generating system-architect/analysis.md** that addresses guidance-specification.md discussion points from system architecture perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Architecture Focus**: Technical architecture, scalability, and system design perspective
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Technical Architecture**: Scalable and maintainable system design
- **Technology Selection**: Stack evaluation and architectural decisions
- **Performance & Scalability**: Capacity planning and optimization strategies
- **Integration Patterns**: System communication and data flow design
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Macro-Architecture)**
- **System-Level Architecture**: Service boundaries, deployment topology, and system composition
- **Cross-Service Communication Patterns**: Choosing between microservices/monolithic, event-driven/request-response, sync/async patterns
- **Technology Stack Decisions**: Language, framework, database, and infrastructure choices
- **Non-Functional Requirements**: Scalability, performance, availability, disaster recovery, and monitoring strategies
- **Integration Planning**: How systems and services connect at the macro level (not specific API contracts)
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **API Contract Details**: Specific endpoint definitions, URL structures, HTTP methods → Defers to **API Designer**
- **Data Schemas**: Detailed data model design and entity relationships → Defers to **Data Architect**
- **UI/UX Design**: Interface design and user experience → Defers to **UX Expert** and **UI Designer**
#### **Handoff Points**
- **TO API Designer**: Provides architectural constraints (REST vs GraphQL, sync vs async) that define the API design space
- **TO Data Architect**: Provides system-level data flow requirements and integration patterns
- **FROM Data Architect**: Receives canonical data model to inform system integration design
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
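For concreteness, a runnable sketch of the same detection logic, assuming Node.js; `topicProvided` stands in for the optional command argument:
```javascript
// Sketch of Phase 1 detection. Throws where the pseudocode above reports
// an error; directory layout follows the conventions in this document.
const fs = require('fs')
const path = require('path')

function detectFramework(topicProvided) {
  const active = '.workflow/active'
  const sessions = fs.existsSync(active)
    ? fs.readdirSync(active).filter(d => d.startsWith('WFS-'))
    : []
  if (sessions.length === 0 && !topicProvided) {
    throw new Error('No framework found and no topic provided')
  }
  if (sessions.length === 0) return { frameworkMode: false, sessionId: null }

  const sessionId = sessions[0] // assumes a single active session
  const brainstormDir = path.join(active, sessionId, '.brainstorming')
  const frameworkMode = fs.existsSync(path.join(brainstormDir, 'guidance-specification.md'))
  if (!frameworkMode && !topicProvided) {
    throw new Error('No framework found and no topic provided')
  }
  return { frameworkMode, sessionId, brainstormDir }
}
```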
### Phase 2: Analysis Mode Detection
```bash
# Check existing analysis
CHECK: brainstorm_dir/system-architect/analysis.md
IF EXISTS:
SHOW existing analysis summary
ASK: "Analysis exists. Do you want to:"
OPTIONS:
1. "Update with new insights" → Update existing
2. "Replace completely" → Generate new
3. "Cancel" → Exit without changes
ELSE:
CREATE new analysis
```
### Phase 3: Agent Task Generation
**Framework-Based Analysis** (when guidance-specification.md exists):
```bash
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Generate system architect analysis addressing topic framework
## Framework Integration Required
**MANDATORY**: Load and address guidance-specification.md discussion points
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
**Output Location**: {session.brainstorm_dir}/system-architect/analysis.md
## Analysis Requirements
1. **Load Topic Framework**: Read guidance-specification.md completely
2. **Address Each Discussion Point**: Respond to all 5 framework sections from system architecture perspective
3. **Include Framework Reference**: Start analysis.md with @../guidance-specification.md
4. **Technical Focus**: Emphasize scalability, architecture patterns, technology decisions
5. **Structured Response**: Use framework structure for analysis organization
## Framework Sections to Address
- Core Requirements (from architecture perspective)
- Technical Considerations (detailed architectural analysis)
- User Experience Factors (technical UX considerations)
- Implementation Challenges (architecture risks and solutions)
- Success Metrics (technical metrics and monitoring)
## Output Structure Required
```markdown
# System Architect Analysis: [Topic]
**Framework Reference**: @../guidance-specification.md
**Role Focus**: System Architecture and Technical Design
## Core Requirements Analysis
[Address framework requirements from architecture perspective]
## Technical Considerations
[Detailed architectural analysis]
## User Experience Factors
[Technical aspects of UX implementation]
## Implementation Challenges
[Architecture risks and mitigation strategies]
## Success Metrics
[Technical metrics and system monitoring]
## Architecture-Specific Recommendations
[Detailed technical recommendations]
```",
description="Generate system architect framework-based analysis")
```
### Phase 4: Update Mechanism
**Analysis Update Process**:
```bash
# For existing analysis updates
IF update_mode = "incremental":
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Update existing system architect analysis
## Current Analysis Context
**Existing Analysis**: @{session.brainstorm_dir}/system-architect/analysis.md
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
## Update Requirements
1. **Preserve Structure**: Maintain existing analysis structure
2. **Add New Insights**: Integrate new technical insights and recommendations
3. **Framework Alignment**: Ensure continued alignment with topic framework
4. **Technical Updates**: Add new architecture patterns, technology considerations
5. **Maintain References**: Keep @../guidance-specification.md reference
## Update Instructions
- Read existing analysis completely
- Identify areas for enhancement or new insights
- Add technical depth while preserving original structure
- Update recommendations with new architectural approaches
- Maintain framework discussion point addressing",
description="Update system architect analysis incrementally")
```
## Document Structure
### Output Files
```
.workflow/active/WFS-[topic]/.brainstorming/
├── guidance-specification.md # Input: Framework (if exists)
└── system-architect/
└── analysis.md # ★ OUTPUT: Framework-based analysis
```
### Analysis Structure
**Required Elements**:
- **Framework Reference**: @../guidance-specification.md (if framework exists)
- **Role Focus**: System Architecture and Technical Design perspective
- **5 Framework Sections**: Address each framework discussion point
- **Technical Recommendations**: Architecture-specific insights and solutions
- How should we design APIs and manage versioning?
**4. Performance and Scalability**
- Where are the current system performance bottlenecks?
- How should we handle traffic growth and scaling demands?
- What database scaling and optimization strategies are needed?
## ⚡ **Two-Step Execution Flow**
### ⚠️ Session Management - FIRST STEP
Session detection and selection:
```bash
# Check for existing sessions
existing_sessions=$(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
session_count=$(printf '%s\n' "$existing_sessions" | grep -c 'WFS-')
if [ "$session_count" -gt 1 ]; then
  prompt_user_to_select_session   # placeholder helper
else
  use_existing_or_create_new      # placeholder helper
fi
```
### Step 1: Context Gathering Phase
**System Architect Perspective Questioning**
Before agent assignment, gather comprehensive system architecture context:
#### 📋 Role-Specific Questions
1. **Scale & Performance Requirements**
- Expected user load and traffic patterns?
- Performance requirements (latency, throughput)?
- Data volume and growth projections?
2. **Technical Constraints & Environment**
- Existing technology stack and constraints?
- Integration requirements with external systems?
- Infrastructure and deployment environment?
3. **Architecture Complexity & Patterns**
- Microservices vs monolithic considerations?
- Data consistency and transaction requirements?
- Event-driven vs request-response patterns?
4. **Non-Functional Requirements**
- High availability and disaster recovery needs?
- Security and compliance requirements?
- Monitoring and observability expectations?
#### Context Validation
- **Minimum Response**: Each answer must be ≥50 characters
- **Re-prompting**: Insufficient detail triggers follow-up questions
- **Context Storage**: Save responses to `.brainstorming/system-architect-context.md`
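A minimal sketch of that validation rule, assuming answers arrive as a map of question id to response; the follow-up wording is illustrative:
```javascript
// Enforces the ">=50 characters" minimum from the Context Validation rules
// and produces re-prompts for answers that are too thin.
function validateContextResponses(responses, minLength = 50) {
  const followUps = []
  for (const [questionId, answer] of Object.entries(responses)) {
    if ((answer || '').trim().length < minLength) {
      followUps.push({
        questionId,
        prompt: `Your answer to "${questionId}" is too brief - please add specifics (load numbers, constraints, examples).`
      })
    }
  }
  return { valid: followUps.length === 0, followUps }
}
```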
### Step 2: Agent Assignment with Flow Control
**Dedicated Agent Execution**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute dedicated system-architect conceptual analysis for: {topic}
ASSIGNED_ROLE: system-architect
OUTPUT_LOCATION: .brainstorming/system-architect/
USER_CONTEXT: {validated_responses_from_context_gathering}
Flow Control Steps:
[
{
\"step\": \"load_role_template\",
\"action\": \"Load system-architect planning template\",
\"command\": \"bash($(cat ~/.claude/workflows/cli-templates/planning-roles/system-architect.md))\",
\"output_to\": \"role_template\"
}
]
Conceptual Analysis Requirements:
- Apply system-architect perspective to topic analysis
- Focus on architectural patterns, scalability, and integration points
- Use loaded role template framework for analysis structure
- Generate role-specific deliverables in designated output location
- Address all user context from questioning phase
Deliverables:
- analysis.md: Main system architecture analysis
- recommendations.md: Architecture recommendations
- deliverables/: Architecture-specific outputs as defined in role template
Embody system-architect role expertise for comprehensive conceptual planning."
```
### Progress Tracking
TodoWrite tracking for two-step process:
```json
[
{"content": "Gather system architect context through role-specific questioning", "status": "in_progress", "activeForm": "Gathering context"},
{"content": "Validate context responses and save to system-architect-context.md", "status": "pending", "activeForm": "Validating context"},
{"content": "Load system-architect planning template via flow control", "status": "pending", "activeForm": "Loading template"},
{"content": "Execute dedicated conceptual-planning-agent for system-architect role", "status": "pending", "activeForm": "Executing agent"}
]
```
## 📊 **Output Specification**
### Output Location
```
.workflow/active/WFS-{topic-slug}/.brainstorming/system-architect/
├── analysis.md # Primary architecture analysis
├── architecture-design.md # Detailed system design and diagrams
├── technology-stack.md # Technology stack recommendations and justifications
└── integration-plan.md # System integration and API strategies
```
### Document Templates
#### analysis.md Structure
```markdown
# System Architecture Analysis: {Topic}
*Generated: {timestamp}*
## Executive Summary
[Key architectural findings and recommendations overview]
## Current State Assessment
### Existing Architecture Overview
### Technical Stack Analysis
### Performance Bottlenecks
### Technical Debt Assessment
## Requirements Analysis
### Functional Requirements
### Non-Functional Requirements
- Performance: [Response time, throughput requirements]
- Scalability: [User growth, data volume expectations]
- Availability: [Uptime requirements]
- Security: [Security requirements]
## Proposed Architecture
### High-Level Architecture Design
### Component Breakdown
### Data Flow Diagrams
### Technology Stack Recommendations
## Implementation Strategy
### Migration Planning
### Risk Mitigation
### Performance Optimization
### Security Considerations
## Scalability and Maintenance
### Horizontal Scaling Strategy
### Monitoring and Observability
### Deployment Strategy
### Long-term Maintenance Plan
```
## 🔄 **Session Integration**
### Status Synchronization
Upon completion, update `workflow-session.json`:
```json
{
"phases": {
"BRAINSTORM": {
"system_architect": {
"status": "completed",
"completed_at": "timestamp",
"output_directory": ".workflow/active/WFS-{topic}/.brainstorming/system-architect/",
"key_insights": ["scalability_bottleneck", "architecture_pattern", "technology_recommendation"]
}
}
}
}
```
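A sketch of the synchronization step, assuming Node.js; it merges the `system_architect` block into `workflow-session.json` without clobbering other phases (helper name and arguments are illustrative):
```javascript
// Key names mirror the JSON example above; everything else in the
// session file is preserved.
const fs = require('fs')

function markArchitectComplete(sessionFile, outputDir, insights) {
  const session = JSON.parse(fs.readFileSync(sessionFile, 'utf-8'))
  session.phases = session.phases || {}
  session.phases.BRAINSTORM = session.phases.BRAINSTORM || {}
  session.phases.BRAINSTORM.system_architect = {
    status: 'completed',
    completed_at: new Date().toISOString(),
    output_directory: outputDir,
    key_insights: insights
  }
  fs.writeFileSync(sessionFile, JSON.stringify(session, null, 2))
}
```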
### Cross-Role Collaboration
System architect perspective provides:
- **Technical Constraints and Possibilities** → Product Manager
- **Architecture Requirements and Limitations** → UI Designer
- **Data Architecture Requirements** → Data Architect
- **Security Architecture Framework** → Security Expert
- **Technical Implementation Framework** → Feature Planner
## ✅ **Quality Assurance**
### Required Analysis Elements
- [ ] Clear architecture diagrams and component designs
- [ ] Detailed technology stack evaluation and recommendations
- [ ] Scalability and performance analysis with metrics
- [ ] System integration and API design specifications
- [ ] Comprehensive risk assessment and mitigation strategies
### Architecture Design Principles
- [ ] **Scalability**: System can handle growth in users and data
- [ ] **Maintainability**: Clear code structure, easy to modify and extend
- [ ] **Reliability**: Built-in fault tolerance and recovery mechanisms
- [ ] **Security**: Integrated security controls and protection measures
- [ ] **Performance**: Meets response time and throughput requirements
### Technical Decision Validation
- [ ] Technology choices have thorough justification and comparison analysis
- [ ] Architectural patterns align with business requirements and constraints
- [ ] Integration solutions consider compatibility and maintenance costs
- [ ] Deployment strategies are feasible with acceptable risk levels
- [ ] Monitoring and operations strategies are comprehensive and actionable
### Implementation Readiness
- [ ] **Technical Feasibility**: All proposed solutions are technically achievable
- [ ] **Resource Planning**: Resource requirements clearly defined and realistic
- [ ] **Risk Management**: Technical risks identified with mitigation plans
- [ ] **Performance Validation**: Architecture can meet performance requirements
- [ ] **Evolution Strategy**: Design allows for future growth and changes

View File

@@ -1,221 +0,0 @@
---
name: ui-designer
description: Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎨 **UI Designer Analysis Generator**
### Purpose
**Specialized command for generating ui-designer/analysis.md** that addresses guidance-specification.md discussion points from UI/UX design perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **UI/UX Focus**: User experience, interface design, and accessibility perspective
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Visual Design**: Color palettes, typography, spacing, and visual hierarchy implementation
- **High-Fidelity Mockups**: Polished, pixel-perfect interface designs
- **Design System Implementation**: Component libraries, design tokens, and style guides
- **Micro-Interactions & Animations**: Transition effects, loading states, and interactive feedback
- **Responsive Design**: Layout adaptations for different screen sizes and devices
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Concrete Visual Interface Implementation)**
- **Visual Design Language**: Colors, typography, iconography, spacing, and overall aesthetic
- **High-Fidelity Mockups**: Polished designs showing exactly how the interface will look
- **Design System Components**: Building and documenting reusable UI components (buttons, inputs, cards, etc.)
- **Design Tokens**: Defining variables for colors, spacing, typography that can be used in code
- **Micro-Interactions**: Hover states, transitions, animations, and interactive feedback details
- **Responsive Layouts**: Adapting designs for mobile, tablet, and desktop breakpoints
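The design-token bullet above is the most code-adjacent deliverable. A small illustrative token module, assuming tokens ship as a plain JS module shared by design tooling and application code (names and values are examples, not a prescribed palette):
```javascript
// Example design-token set: colors, spacing, and typography variables
// that both the design system and application code can import.
const tokens = {
  color: {
    primary: '#2563eb',
    surface: '#ffffff',
    textMuted: '#6b7280'
  },
  spacing: { xs: '4px', sm: '8px', md: '16px', lg: '24px' },
  typography: {
    body: { family: 'Inter, sans-serif', size: '16px', lineHeight: 1.5 }
  }
}

module.exports = tokens
```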
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **User Research & Personas**: User behavior analysis and needs assessment → Defers to **UX Expert**
- **Information Architecture**: Content structure and navigation strategy → Defers to **UX Expert**
- **Low-Fidelity Wireframes**: Structural layouts without visual design → Defers to **UX Expert**
#### **Handoff Points**
- **FROM UX Expert**: Receives wireframes, user flows, and information architecture as the foundation for visual design
- **TO Frontend Developers**: Provides design specifications, component libraries, and design tokens for implementation
- **WITH API Designer**: Coordinates on data presentation and form validation feedback (visual aspects only)
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
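`extract_framework_points()` is left abstract above. One plausible sketch, assuming guidance-specification.md marks each discussion section with a `## ` heading (the heading convention is an assumption, not part of the command spec):
```javascript
// Hypothetical implementation: collects second-level headings as the
// framework's discussion points.
const fs = require('fs')

function extractFrameworkPoints(frameworkPath) {
  const lines = fs.readFileSync(frameworkPath, 'utf-8').split('\n')
  const points = []
  for (const line of lines) {
    const match = line.match(/^##\s+(.*)/)
    if (match) points.push(match[1].trim())
  }
  return points // e.g. ["Core Requirements", "Technical Considerations", ...]
}
```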
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute ui-designer analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: ui-designer
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/ui-designer/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load ui-designer planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/ui-designer.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from UI/UX perspective
**Role Focus**: User experience design, interface optimization, accessibility compliance
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive UI/UX analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with UI/UX design expertise
- Provide actionable design recommendations and interface solutions
- Include accessibility considerations and WCAG compliance planning
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute ui-designer analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing ui-designer framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured ui-designer analysis"
},
{
content: "Update workflow-session.json with ui-designer completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/ui-designer/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# UI Designer Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: UI/UX Design perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with UI/UX expertise]
### Core Requirements (from framework)
[UI/UX perspective on requirements]
### Technical Considerations (from framework)
[Interface and design system considerations]
### User Experience Factors (from framework)
[Detailed UX analysis and recommendations]
### Implementation Challenges (from framework)
[Design implementation and accessibility considerations]
### Success Metrics (from framework)
[UX metrics and usability success criteria]
## UI/UX Specific Recommendations
[Role-specific design recommendations and solutions]
---
*Generated by ui-designer analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"ui_designer": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/ui-designer/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: UI/UX insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,221 +0,0 @@
---
name: ux-expert
description: Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **UX Expert Analysis Generator**
### Purpose
**Specialized command for generating ux-expert/analysis.md** that addresses guidance-specification.md discussion points from user experience and interface design perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **UX Design Focus**: User interface, interaction patterns, and usability optimization
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **User Research**: User personas, behavioral analysis, and needs assessment
- **Information Architecture**: Content structure, navigation hierarchy, and mental models
- **User Journey Mapping**: User flows, task analysis, and interaction models
- **Usability Strategy**: Accessibility planning, cognitive load reduction, and user testing frameworks
- **Wireframing**: Low-fidelity layouts and structural prototypes (not visual design)
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Abstract User Experience & Research)**
- **User Research & Personas**: Understanding target users, their goals, pain points, and behaviors
- **Information Architecture**: Organizing content and defining navigation structures at a conceptual level
- **User Journey Mapping**: Defining user flows, task sequences, and interaction models
- **Wireframes & Low-Fidelity Prototypes**: Structural layouts showing information hierarchy (boxes and arrows, not colors/fonts)
- **Usability Testing Strategy**: Planning user testing, A/B tests, and validation methods
- **Accessibility Planning**: WCAG compliance strategy and inclusive design principles
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **Visual Design**: Colors, typography, spacing, visual style → Defers to **UI Designer**
- **High-Fidelity Mockups**: Polished, pixel-perfect designs → Defers to **UI Designer**
- **Component Implementation**: Design system components, CSS, animations → Defers to **UI Designer**
#### **Handoff Points**
- **TO UI Designer**: Provides wireframes, user flows, and information architecture that UI Designer will transform into high-fidelity visual designs
- **FROM User Research**: May receive external research data to inform UX decisions
- **TO Product Owner**: Provides user insights and validation results to inform feature prioritization
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute ux-expert analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: ux-expert
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/ux-expert/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load ux-expert planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/ux-expert.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from the user experience and interaction design perspective
**Role Focus**: User research, information architecture, interaction patterns, usability optimization
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive UX design analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with UX design expertise
- Provide actionable interaction design and usability optimization strategies
- Include accessibility considerations and interaction pattern recommendations
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute ux-expert analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing ux-expert framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured ux-expert analysis"
},
{
content: "Update workflow-session.json with ux-expert completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/ux-expert/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# UX Expert Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: User Experience & Interface Design perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with UX design expertise]
### Core Requirements (from framework)
[User interface and interaction design requirements perspective]
### Technical Considerations (from framework)
[Design system implementation and technical feasibility considerations]
### User Experience Factors (from framework)
[Usability optimization, accessibility, and user-centered design analysis]
### Implementation Challenges (from framework)
[Design implementation challenges and progressive enhancement strategies]
### Success Metrics (from framework)
[UX metrics including usability testing, user satisfaction, and design KPIs]
## UX Expert Specific Recommendations
[Role-specific interface design patterns and usability optimization strategies]
---
*Generated by ux-expert analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"ux_expert": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/ux-expert/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: UX design insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -17,6 +17,8 @@ Enhanced evidence-based debugging with **documented exploration process**. Recor
**Core workflow**: Explore → Document → Log → Analyze → Correct Understanding → Fix → Verify
**Scope**: Adds temporary debug logging to observe program state; cleans up all instrumentation after resolution. Does NOT execute code injection, security testing, or modify program behavior.
**Key enhancements over /workflow:debug**:
- **understanding.md**: Timeline of exploration and learning
- **Gemini-assisted correction**: Validates and corrects hypotheses
@@ -44,7 +46,7 @@ Explore Mode:
├─ Locate error source in codebase
├─ Document initial understanding in understanding.md
├─ Generate testable hypotheses with Gemini validation
-├─ Add NDJSON logging instrumentation
+├─ Add NDJSON debug logging statements
└─ Output: Hypothesis list + await user reproduction
Analyze Mode:
@@ -216,9 +218,9 @@ Save Gemini output to `hypotheses.json`:
}
```
-**Step 1.4: Add NDJSON Instrumentation**
+**Step 1.4: Add NDJSON Debug Logging**
-For each hypothesis, add logging (same as original debug command).
+For each hypothesis, add temporary logging statements to observe program state at key execution points. Use NDJSON format for structured log parsing. These are read-only observations that do not modify program behavior.
**Step 1.5: Update understanding.md**
@@ -441,7 +443,7 @@ What we learned from this debugging session:
**Step 3.3: Cleanup**
-Remove debug instrumentation (same as original command).
+Remove all temporary debug logging statements added during investigation. Verify no instrumentation code remains in production code.
---
@@ -647,7 +649,7 @@ Why is config value None during update?
| Feature | /workflow:debug | /workflow:debug-with-file |
|---------|-----------------|---------------------------|
-| NDJSON logging | ✅ | ✅ |
+| NDJSON debug logging | ✅ | ✅ |
| Hypothesis generation | Manual | Gemini-assisted |
| Exploration documentation | ❌ | ✅ understanding.md |
| Understanding evolution | ❌ | ✅ Timeline + corrections |

View File

@@ -1,331 +0,0 @@
---
name: debug
description: Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved
argument-hint: "[-y|--yes] \"bug description or error message\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm all decisions (hypotheses, fixes, iteration), use recommended settings.
# Workflow Debug Command (/workflow:debug)
## Overview
Evidence-based interactive debugging command. Systematically identifies root causes through hypothesis-driven logging and iterative verification.
**Core workflow**: Explore → Add Logging → Reproduce → Analyze Log → Fix → Verify
## Usage
```bash
/workflow:debug <BUG_DESCRIPTION>
# Arguments
<bug-description> Bug description, error message, or stack trace (required)
```
## Execution Process
```
Session Detection:
├─ Check if debug session exists for this bug
├─ EXISTS + debug.log has content → Analyze mode
└─ NOT_FOUND or empty log → Explore mode
Explore Mode:
├─ Locate error source in codebase
├─ Generate testable hypotheses (dynamic count)
├─ Add NDJSON logging instrumentation
└─ Output: Hypothesis list + await user reproduction
Analyze Mode:
├─ Parse debug.log, validate each hypothesis
└─ Decision:
├─ Confirmed → Fix root cause
├─ Inconclusive → Add more logging, iterate
└─ All rejected → Generate new hypotheses
Fix & Cleanup:
├─ Apply fix based on confirmed hypothesis
├─ User verifies
├─ Remove debug instrumentation
└─ If not fixed → Return to Analyze mode
```
## Implementation
### Session Setup & Mode Detection
```javascript
const fs = require('fs')
// Shift by 8h so the ISO string reflects UTC+8 wall-clock time (suffix stays "Z")
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `DBG-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.debug/${sessionId}`
const debugLogPath = `${sessionFolder}/debug.log`
// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const logHasContent = sessionExists && fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0
const mode = logHasContent ? 'analyze' : 'explore'
if (!sessionExists) {
bash(`mkdir -p ${sessionFolder}`)
}
```
---
### Explore Mode
**Step 1.1: Locate Error Source**
```javascript
// Extract keywords from bug description
const keywords = extractErrorKeywords(bug_description)
// e.g., ['Stack Length', '未找到', 'registered 0']
// Search codebase for error locations
for (const keyword of keywords) {
Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
}
// Identify affected files and functions
const affectedLocations = [...] // from search results
```
**Step 1.2: Generate Hypotheses (Dynamic)**
```javascript
// Hypothesis categories based on error pattern
const HYPOTHESIS_PATTERNS = {
"not found|missing|undefined|未找到": "data_mismatch",
"0|empty|zero|registered 0": "logic_error",
"timeout|connection|sync": "integration_issue",
"type|format|parse": "type_mismatch"
}
// Generate hypotheses based on actual issue (NOT fixed count)
function generateHypotheses(bugDescription, affectedLocations) {
const hypotheses = []
// Analyze bug and create targeted hypotheses
// Each hypothesis has:
// - id: H1, H2, ... (dynamic count)
// - description: What might be wrong
// - testable_condition: What to log
// - logging_point: Where to add instrumentation
return hypotheses // Could be 1, 3, 5, or more
}
const hypotheses = generateHypotheses(bug_description, affectedLocations)
```
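A fuller sketch of `generateHypotheses()`, mapping the pattern table above to concrete hypothesis records; the condition wording and the `{file, line}` shape of `affectedLocations` are illustrative assumptions:
```javascript
// Matches the bug description against HYPOTHESIS_PATTERNS and emits one
// hypothesis per (matching category x affected location). Count is dynamic.
function generateHypotheses(bugDescription, affectedLocations) {
  const HYPOTHESIS_PATTERNS = {
    'not found|missing|undefined|未找到': 'data_mismatch',
    '0|empty|zero|registered 0': 'logic_error',
    'timeout|connection|sync': 'integration_issue',
    'type|format|parse': 'type_mismatch'
  }
  const hypotheses = []
  let n = 1
  for (const [pattern, category] of Object.entries(HYPOTHESIS_PATTERNS)) {
    if (!new RegExp(pattern, 'i').test(bugDescription)) continue
    for (const loc of affectedLocations) {
      hypotheses.push({
        id: `H${n++}`,
        description: `Suspected ${category} at ${loc.file}:${loc.line}`,
        testable_condition: `Log the values involved in the ${category} suspicion`,
        logging_point: `${loc.file}:${loc.line}`
      })
    }
  }
  return hypotheses // could be 1, 3, 5, or more
}
```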
**Step 1.3: Add NDJSON Instrumentation**
For each hypothesis, add logging at the relevant location:
**Python template**:
```python
# region debug [H{n}]
try:
import json, time
_dbg = {
"sid": "{sessionId}",
"hid": "H{n}",
"loc": "{file}:{line}",
"msg": "{testable_condition}",
"data": {
# Capture relevant values here
},
"ts": int(time.time() * 1000)
}
with open(r"{debugLogPath}", "a", encoding="utf-8") as _f:
_f.write(json.dumps(_dbg, ensure_ascii=False) + "\n")
except: pass
# endregion
```
**JavaScript/TypeScript template**:
```javascript
// region debug [H{n}]
try {
require('fs').appendFileSync("{debugLogPath}", JSON.stringify({
sid: "{sessionId}",
hid: "H{n}",
loc: "{file}:{line}",
msg: "{testable_condition}",
data: { /* Capture relevant values */ },
ts: Date.now()
}) + "\n");
} catch(_) {}
// endregion
```
**Output to user**:
```
## Hypotheses Generated
Based on error "{bug_description}", generated {n} hypotheses:
{hypotheses.map(h => `
### ${h.id}: ${h.description}
- Logging at: ${h.logging_point}
- Testing: ${h.testable_condition}
`).join('')}
**Debug log**: ${debugLogPath}
**Next**: Run reproduction steps, then come back for analysis.
```
---
### Analyze Mode
```javascript
// Parse NDJSON log
const entries = Read(debugLogPath).split('\n')
.filter(l => l.trim())
.map(l => JSON.parse(l))
// Group by hypothesis
const byHypothesis = groupBy(entries, 'hid')
// Validate each hypothesis
for (const [hid, logs] of Object.entries(byHypothesis)) {
const hypothesis = hypotheses.find(h => h.id === hid)
const latestLog = logs[logs.length - 1]
// Check if evidence confirms or rejects hypothesis
const verdict = evaluateEvidence(hypothesis, latestLog.data)
// Returns: 'confirmed' | 'rejected' | 'inconclusive'
}
```
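The `groupBy` and `evaluateEvidence` helpers are referenced but left abstract above. Minimal sketches, where `evaluateEvidence()` only handles the common "was the expected key found?" case; real verdicts depend on each hypothesis's `testable_condition`:
```javascript
// groupBy: bucket log entries by a field (here, hypothesis id).
function groupBy(items, key) {
  return items.reduce((acc, item) => {
    (acc[item[key]] = acc[item[key]] || []).push(item)
    return acc
  }, {})
}

// evaluateEvidence: example rule only. A data_mismatch hypothesis is
// confirmed when the log shows the lookup target was absent.
function evaluateEvidence(hypothesis, data) {
  if (data === undefined || data === null) return 'inconclusive'
  if ('found' in data) return data.found ? 'rejected' : 'confirmed'
  return 'inconclusive'
}
```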
**Output**:
```
## Evidence Analysis
Analyzed ${entries.length} log entries.
${results.map(r => `
### ${r.id}: ${r.description}
- **Status**: ${r.verdict}
- **Evidence**: ${JSON.stringify(r.evidence)}
- **Reason**: ${r.reason}
`).join('')}
${confirmedHypothesis ? `
## Root Cause Identified
**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}
Ready to fix.
` : `
## Need More Evidence
Add more logging or refine hypotheses.
`}
```
---
### Fix & Cleanup
```javascript
// Apply fix based on confirmed hypothesis
// ... Edit affected files
// After user verifies fix works:
// Remove debug instrumentation (search for region markers)
const instrumentedFiles = Grep({
pattern: "# region debug|// region debug",
output_mode: "files_with_matches"
})
for (const file of instrumentedFiles) {
// Remove content between region markers
removeDebugRegions(file)
}
console.log(`
## Debug Complete
- Root cause: ${confirmedHypothesis.description}
- Fix applied to: ${modifiedFiles.join(', ')}
- Debug instrumentation removed
`)
```
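`removeDebugRegions()` is assumed above. A sketch, assuming Node.js, that deletes everything between the `# region debug`/`// region debug` markers added earlier, including the marker lines themselves:
```javascript
const fs = require('fs')

function removeDebugRegions(file) {
  const lines = fs.readFileSync(file, 'utf-8').split('\n')
  const kept = []
  let inRegion = false
  for (const line of lines) {
    // Opening marker: "# region debug [Hn]" or "// region debug [Hn]"
    if (/(#|\/\/)\s*region debug/.test(line)) { inRegion = true; continue }
    // Closing marker: "# endregion" or "// endregion"
    if (/(#|\/\/)\s*endregion/.test(line) && inRegion) { inRegion = false; continue }
    if (!inRegion) kept.push(line)
  }
  fs.writeFileSync(file, kept.join('\n'))
}
```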
---
## Debug Log Format (NDJSON)
Each line is a JSON object:
```json
{"sid":"DBG-xxx-2025-12-18","hid":"H1","loc":"file.py:func:42","msg":"Check dict keys","data":{"keys":["a","b"],"target":"c","found":false},"ts":1734567890123}
```
| Field | Description |
|-------|-------------|
| `sid` | Session ID |
| `hid` | Hypothesis ID (H1, H2, ...) |
| `loc` | Code location |
| `msg` | What's being tested |
| `data` | Captured values |
| `ts` | Timestamp (ms) |
## Session Folder
```
.workflow/.debug/DBG-{slug}-{date}/
├── debug.log # NDJSON log (main artifact)
└── resolution.md # Summary after fix (optional)
```
## Iteration Flow
```
First Call (/workflow:debug "error"):
├─ No session exists → Explore mode
├─ Extract error keywords, search codebase
├─ Generate hypotheses, add logging
└─ Await user reproduction
After Reproduction (/workflow:debug "error"):
├─ Session exists + debug.log has content → Analyze mode
├─ Parse log, evaluate hypotheses
└─ Decision:
├─ Confirmed → Fix → User verify
│ ├─ Fixed → Cleanup → Done
│ └─ Not fixed → Add logging → Iterate
├─ Inconclusive → Add logging → Iterate
└─ All rejected → New hypotheses → Iterate
Output:
└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
```
## Post-Completion Expansion
After completion, ask the user whether to expand the session into follow-up issues (test/enhance/refactor/doc); for each selected dimension, invoke `/issue:new "{summary} - {dimension}"`.
---
## Error Handling
| Situation | Action |
|-----------|--------|
| Empty debug.log | Verify reproduction triggered the code path |
| All hypotheses rejected | Generate new hypotheses with broader scope |
| Fix doesn't work | Iterate with more granular logging |
| >5 iterations | Escalate to `/workflow:lite-fix` with evidence |

File diff suppressed because it is too large

View File

@@ -477,7 +477,7 @@ Task(subagent_type="{meta.agent}",
- TODO List: {session.todo_list_path}
- Summaries: {session.summaries_dir}
-**Execution**: Read task JSON → Parse flow_control → Execute implementation_approach → Update TODO_LIST.md → Generate summary",
+**Execution**: Read task JSON → Execute pre_analysis → Check execution_config.method → (CLI: handoff to CLI tool | Agent: direct implementation) → Update TODO_LIST.md → Generate summary",
description="Implement: {task.id}")
```
@@ -486,9 +486,11 @@ Task(subagent_type="{meta.agent}",
- `[FLOW_CONTROL]`: Triggers flow_control.pre_analysis execution
**Why Path-Based**: Agent (code-developer.md) autonomously:
-- Reads and parses task JSON (requirements, acceptance, flow_control)
-- Loads tech stack guidelines based on detected language
-- Executes pre_analysis steps and implementation_approach
+- Reads and parses task JSON (requirements, acceptance, flow_control, execution_config)
+- Executes pre_analysis steps (Phase 1: context gathering)
+- Checks execution_config.method (Phase 2: determine mode)
+- CLI mode: Builds handoff prompt and executes via ccw cli with resume strategy
+- Agent mode: Directly implements using modification_points and logic_flow
+- Generates structured summary with integration points
Embedding task content in prompt creates duplication and conflicts with agent's parsing logic.

View File

@@ -72,8 +72,8 @@ Phase 2: Clarification (optional, multi-round)
Phase 3: Planning (NO CODE EXECUTION - planning only)
└─ Decision (based on Phase 1 complexity):
-├─ Low → Load schema: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json → Direct Claude planning (following schema) → plan.json → MUST proceed to Phase 4
-└─ Medium/High → cli-lite-planning-agent → plan.json → MUST proceed to Phase 4
+├─ Low → Load schema: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json → Direct Claude planning (following schema) → plan.json
+└─ Medium/High → cli-lite-planning-agent → plan.json (agent internally executes quality check)
Phase 4: Confirmation & Selection
├─ Display plan summary (tasks, complexity, estimated time)

View File

@@ -150,378 +150,213 @@ Create internal representations (do not include raw artifacts in output):
- Task-to-task dependencies (depends_on, blocks)
- Requirement-level dependencies (from synthesis)
-### 4. Detection Passes (Token-Efficient Analysis)
+### 4. Detection Passes (Agent-Driven Multi-Dimensional Analysis)
-**Token Budget Strategy**:
-- **Total Limit**: 50 findings maximum (aggregate remainder in overflow summary)
-- **Priority Allocation**: CRITICAL (unlimited) → HIGH (15) → MEDIUM (20) → LOW (15)
-- **Early Exit**: If CRITICAL findings > 0 in User Intent/Requirements Coverage, skip LOW/MEDIUM priority checks
+**Execution Strategy**:
+- Single `cli-explore-agent` invocation
+- Agent executes multiple CLI analyses internally (different dimensions: A-H)
+- Token Budget: 50 findings maximum (aggregate remainder in overflow summary)
+- Priority Allocation: CRITICAL (unlimited) → HIGH (15) → MEDIUM (20) → LOW (15)
+- Early Exit: If CRITICAL findings > 0 in User Intent/Requirements Coverage, skip LOW/MEDIUM checks
-**Execution Order** (Process in sequence; skip if token budget exhausted):
+**Execution Order** (Agent orchestrates internally):
-1. **Tier 1 (CRITICAL Path)**: A, B, C - User intent, coverage, consistency (process fully)
-2. **Tier 2 (HIGH Priority)**: D, E - Dependencies, synthesis alignment (limit 15 findings total)
+1. **Tier 1 (CRITICAL Path)**: A, B, C - User intent, coverage, consistency (full analysis)
+2. **Tier 2 (HIGH Priority)**: D, E - Dependencies, synthesis alignment (limit 15 findings)
3. **Tier 3 (MEDIUM Priority)**: F - Specification quality (limit 20 findings)
-4. **Tier 4 (LOW Priority)**: G, H - Duplication, feasibility (limit 15 findings total)
+4. **Tier 4 (LOW Priority)**: G, H - Duplication, feasibility (limit 15 findings)
---
-#### A. User Intent Alignment (CRITICAL - Tier 1)
+#### Phase 4.1: Launch Unified Verification Agent
-- **Goal Alignment**: IMPL_PLAN objectives match user's original intent
-- **Scope Drift**: Plan covers user's stated scope without unauthorized expansion
-- **Success Criteria Match**: Plan's success criteria reflect user's expectations
-- **Intent Conflicts**: Tasks contradicting user's original objectives
+```javascript
+Task(
+subagent_type="cli-explore-agent",
+run_in_background=false,
+description="Multi-dimensional plan verification",
+prompt=`
+## Plan Verification Task
-#### B. Requirements Coverage Analysis
+### MANDATORY FIRST STEPS
+1. Read: ~/.claude/workflows/cli-templates/schemas/plan-verify-agent-schema.json (dimensions & rules)
+2. Read: ~/.claude/workflows/cli-templates/schemas/verify-json-schema.json (output schema)
+3. Read: ${session_file} (user intent)
+4. Read: ${IMPL_PLAN} (implementation plan)
+5. Glob: ${task_dir}/*.json (task files)
+6. Glob: ${SYNTHESIS_DIR}/*/analysis.md (role analyses)
-- **Orphaned Requirements**: Requirements in synthesis with zero associated tasks
-- **Unmapped Tasks**: Tasks with no clear requirement linkage
-- **NFR Coverage Gaps**: Non-functional requirements (performance, security, scalability) not reflected in tasks
+### Execution Flow
-#### C. Consistency Validation
+**Load schema → Execute tiered CLI analysis → Aggregate findings → Write JSON**
-- **Requirement Conflicts**: Tasks contradicting synthesis requirements
-- **Architecture Drift**: IMPL_PLAN architecture not matching synthesis ADRs
-- **Terminology Drift**: Same concept named differently across IMPL_PLAN and tasks
-- **Data Model Inconsistency**: Tasks referencing entities/fields not in synthesis data model
+FOR each tier in [1, 2, 3, 4]:
+- Load tier config from plan-verify-agent-schema.json
+- Execute: ccw cli -p "PURPOSE: Verify dimensions {tier.dimensions}
+TASK: {tier.checks from schema}
+CONTEXT: @${session_dir}/**/*
+EXPECTED: Findings JSON with dimension, severity, location, summary, recommendation
+CONSTRAINTS: Limit {tier.limit} findings
+" --tool gemini --mode analysis --rule {tier.rule}
+- Parse findings, check early exit condition
+- IF tier == 1 AND critical_count > 0: skip tier 3-4
-#### D. Dependency Integrity
+### Output
+Write: ${process_dir}/verification-findings.json (follow verify-json-schema.json)
+Return: Quality gate decision + 2-3 sentence summary
+`
+)
+```
-- **Circular Dependencies**: Task A depends on B, B depends on C, C depends on A
-- **Missing Dependencies**: Task requires outputs from another task but no explicit dependency
-- **Broken Dependencies**: Task depends on non-existent task ID
-- **Logical Ordering Issues**: Implementation tasks before foundational setup without dependency note
---
-#### E. Synthesis Alignment
+#### Phase 4.2: Load and Organize Findings
-- **Priority Conflicts**: High-priority synthesis requirements mapped to low-priority tasks
-- **Success Criteria Mismatch**: IMPL_PLAN success criteria not covering synthesis acceptance criteria
-- **Risk Mitigation Gaps**: Critical risks in synthesis without corresponding mitigation tasks
+```javascript
+// Load findings (single parse for all subsequent use)
+const data = JSON.parse(Read(`${process_dir}/verification-findings.json`))
+const { session_id, timestamp, verification_tiers_completed, findings, summary } = data
+const { critical_count, high_count, medium_count, low_count, total_findings, coverage_percentage, recommendation } = summary
-#### F. Task Specification Quality
+// Group by severity and dimension
+const bySeverity = Object.groupBy(findings, f => f.severity)
+const byDimension = Object.groupBy(findings, f => f.dimension)
-- **Ambiguous Focus Paths**: Tasks with vague or missing focus_paths
-- **Underspecified Acceptance**: Tasks without clear acceptance criteria
-- **Missing Artifacts References**: Tasks not referencing relevant brainstorming artifacts in context.artifacts
-- **Weak Flow Control**: Tasks without clear implementation_approach or pre_analysis steps
-- **Missing Target Files**: Tasks without flow_control.target_files specification
+// Dimension metadata (from schema)
+const DIMS = {
+A: "User Intent Alignment", B: "Requirements Coverage", C: "Consistency Validation",
+D: "Dependency Integrity", E: "Synthesis Alignment", F: "Task Specification Quality",
+G: "Duplication Detection", H: "Feasibility Assessment"
+}
+```
-#### G. Duplication Detection
+### 5. Generate Report
-- **Overlapping Task Scope**: Multiple tasks with nearly identical descriptions
-- **Redundant Requirements Coverage**: Same requirement covered by multiple tasks without clear partitioning
+```javascript
+// Helper: render dimension section
+const renderDimension = (dim) => {
+const items = byDimension[dim] || []
+return items.length > 0
+? items.map(f => `### ${f.id}: ${f.summary}\n- **Severity**: ${f.severity}\n- **Location**: ${f.location.join(', ')}\n- **Recommendation**: ${f.recommendation}`).join('\n\n')
+: `> ✅ No ${DIMS[dim]} issues detected.`
+}
-#### H. Feasibility Assessment
+// Helper: render severity section
+const renderSeverity = (severity, impact) => {
+const items = bySeverity[severity] || []
+return items.length > 0
+? items.map(f => `#### ${f.id}: ${f.summary}\n- **Dimension**: ${f.dimension_name}\n- **Location**: ${f.location.join(', ')}\n- **Impact**: ${impact}\n- **Recommendation**: ${f.recommendation}`).join('\n\n')
+: `> ✅ No ${severity.toLowerCase()}-severity issues detected.`
+}
-- **Complexity Misalignment**: Task marked "simple" but requires multiple file modifications
-- **Resource Conflicts**: Parallel tasks requiring same resources/files
-- **Skill Gap Risks**: Tasks requiring skills not in team capability assessment (from synthesis)
-### 5. Severity Assignment
-Use this heuristic to prioritize findings:
-- **CRITICAL**:
-- Violates user's original intent (goal misalignment, scope drift)
-- Violates synthesis authority (requirement conflict)
-- Core requirement with zero coverage
-- Circular dependencies
-- Broken dependencies
-- **HIGH**:
-- NFR coverage gaps
-- Priority conflicts
-- Missing risk mitigation tasks
-- Ambiguous acceptance criteria
-- **MEDIUM**:
-- Terminology drift
-- Missing artifacts references
-- Weak flow control
-- Logical ordering issues
-- **LOW**:
-- Style/wording improvements
-- Minor redundancy not affecting execution
-### 6. Produce Compact Analysis Report
-**Report Generation**: Generate report content and save to file.
-Output a Markdown report with the following structure:
-```markdown
+// Build Markdown report
+const fullReport = `
# Plan Verification Report
-**Session**: WFS-{session-id}
-**Generated**: {timestamp}
-**Artifacts Analyzed**: role analysis documents, IMPL_PLAN.md, {N} task files
-**User Intent Analysis**: {user_intent_analysis or "SKIPPED: workflow-session.json not found"}
+**Session**: WFS-${session_id} | **Generated**: ${timestamp}
+**Tiers Completed**: ${verification_tiers_completed.join(', ')}
---
## Executive Summary
+### Quality Gate Decision
| Metric | Value | Status |
|--------|-------|--------|
-| Overall Risk Level | CRITICAL \| HIGH \| MEDIUM \| LOW | {status_emoji} |
-| Critical Issues | {count} | 🔴 |
-| High Issues | {count} | 🟠 |
-| Medium Issues | {count} | 🟡 |
-| Low Issues | {count} | 🟢 |
+| Risk Level | ${critical_count > 0 ? 'CRITICAL' : high_count > 0 ? 'HIGH' : medium_count > 0 ? 'MEDIUM' : 'LOW'} | ${critical_count > 0 ? '🔴' : high_count > 0 ? '🟠' : medium_count > 0 ? '🟡' : '🟢'} |
+| Critical/High/Medium/Low | ${critical_count}/${high_count}/${medium_count}/${low_count} | |
+| Coverage | ${coverage_percentage}% | ${coverage_percentage >= 90 ? '🟢' : coverage_percentage >= 75 ? '🟡' : '🔴'} |
-### Recommendation
-**{RECOMMENDATION}**
-**Decision Rationale**:
-{brief explanation based on severity criteria}
-**Quality Gate Criteria**:
-- **BLOCK_EXECUTION**: Critical issues > 0 (must fix before proceeding)
-- **PROCEED_WITH_FIXES**: Critical = 0, High > 0 (fix recommended before execution)
-- **PROCEED_WITH_CAUTION**: Critical = 0, High = 0, Medium > 0 (proceed with awareness)
-- **PROCEED**: Only Low issues or None (safe to execute)
+**Recommendation**: **${recommendation}**
---
## Findings Summary
-| ID | Category | Severity | Location(s) | Summary | Recommendation |
-|----|----------|----------|-------------|---------|----------------|
-| C1 | Coverage | CRITICAL | synthesis:FR-03 | Requirement "User auth" has zero task coverage | Add authentication implementation task |
-| H1 | Consistency | HIGH | IMPL-1.2 vs synthesis:ADR-02 | Task uses REST while synthesis specifies GraphQL | Align task with ADR-02 decision |
-| M1 | Specification | MEDIUM | IMPL-2.1 | Missing context.artifacts reference | Add @synthesis reference |
-| L1 | Duplication | LOW | IMPL-3.1, IMPL-3.2 | Similar scope | Consider merging |
-(Generate stable IDs prefixed by severity initial: C/H/M/L + number)
+| ID | Dimension | Severity | Location | Summary |
+|----|-----------|----------|----------|---------|
+${findings.map(f => `| ${f.id} | ${f.dimension_name} | ${f.severity} | ${f.location.join(', ')} | ${f.summary} |`).join('\n')}
---
-## User Intent Alignment Analysis
+## Analysis by Dimension
-{IF user_intent_analysis != "SKIPPED"}
-### Goal Alignment
-- **User Intent**: {user_original_intent}
-- **IMPL_PLAN Objectives**: {plan_objectives}
-- **Alignment Status**: {ALIGNED/MISALIGNED/PARTIAL}
-- **Findings**: {specific alignment issues}
-### Scope Verification
-- **User Scope**: {user_defined_scope}
-- **Plan Scope**: {plan_actual_scope}
-- **Drift Detection**: {NONE/MINOR/MAJOR}
-- **Findings**: {specific scope issues}
-{ELSE}
-> ⚠️ User intent alignment analysis was skipped because workflow-session.json was not found.
-{END IF}
+${['A','B','C','D','E','F','G','H'].map(d => `### ${d}. ${DIMS[d]}\n\n${renderDimension(d)}`).join('\n\n---\n\n')}
---
-## Requirements Coverage Analysis
+## Findings by Severity
-### Functional Requirements
+### CRITICAL (${critical_count})
+${renderSeverity('CRITICAL', 'Blocks execution')}
-| Requirement ID | Requirement Summary | Has Task? | Task IDs | Priority Match | Notes |
-|----------------|---------------------|-----------|----------|----------------|-------|
-| FR-01 | User authentication | Yes | IMPL-1.1, IMPL-1.2 | Match | Complete |
-| FR-02 | Data export | Yes | IMPL-2.3 | Mismatch | High req → Med priority task |
-| FR-03 | Profile management | No | - | - | **CRITICAL: Zero coverage** |
+### HIGH (${high_count})
+${renderSeverity('HIGH', 'Fix before execution recommended')}
-### Non-Functional Requirements
+### MEDIUM (${medium_count})
+${renderSeverity('MEDIUM', 'Address during/after implementation')}
-| Requirement ID | Requirement Summary | Has Task? | Task IDs | Notes |
-|----------------|---------------------|-----------|----------|-------|
-| NFR-01 | Response time <200ms | No | - | **HIGH: No performance tasks** |
-| NFR-02 | Security compliance | Yes | IMPL-4.1 | Complete |
-### Business Requirements
-| Requirement ID | Requirement Summary | Has Task? | Task IDs | Notes |
-|----------------|---------------------|-----------|----------|-------|
-| BR-01 | Launch by Q2 | Yes | IMPL-1.* through IMPL-5.* | Timeline realistic |
-### Coverage Metrics
-| Requirement Type | Total | Covered | Coverage % |
-|------------------|-------|---------|------------|
-| Functional | {count} | {count} | {percent}% |
-| Non-Functional | {count} | {count} | {percent}% |
-| Business | {count} | {count} | {percent}% |
-| **Overall** | **{total}** | **{covered}** | **{percent}%** |
+### LOW (${low_count})
+${renderSeverity('LOW', 'Optional improvement')}
---
## Dependency Integrity
## Next Steps
### Dependency Graph Analysis
${recommendation === 'BLOCK_EXECUTION' ? '🛑 **BLOCK**: Fix critical issues → Re-verify' :
recommendation === 'PROCEED_WITH_FIXES' ? '⚠️ **FIX RECOMMENDED**: Address high issues → Re-verify or Execute' :
'✅ **READY**: Proceed to /workflow:execute'}
**Circular Dependencies**: {None or List}
Re-verify: \`/workflow:plan-verify --session ${session_id}\`
Execute: \`/workflow:execute --resume-session="${session_id}"\`
`
**Broken Dependencies**:
- IMPL-2.3 depends on "IMPL-2.4" (non-existent)
**Missing Dependencies**:
- IMPL-5.1 (integration test) has no dependency on IMPL-1.* (implementation tasks)
**Logical Ordering Issues**:
{List or "None detected"}
---
## Synthesis Alignment Issues
| Issue Type | Synthesis Reference | IMPL_PLAN/Task | Impact | Recommendation |
|------------|---------------------|----------------|--------|----------------|
| Architecture Conflict | synthesis:ADR-01 (JWT auth) | IMPL_PLAN uses session cookies | HIGH | Update IMPL_PLAN to use JWT |
| Priority Mismatch | synthesis:FR-02 (High) | IMPL-2.3 (Medium) | MEDIUM | Elevate task priority |
| Missing Risk Mitigation | synthesis:Risk-03 (API rate limits) | No mitigation tasks | HIGH | Add rate limiting implementation task |
---
## Task Specification Quality
### Aggregate Statistics
| Quality Dimension | Tasks Affected | Percentage |
|-------------------|----------------|------------|
| Missing Artifacts References | {count} | {percent}% |
| Weak Flow Control | {count} | {percent}% |
| Missing Target Files | {count} | {percent}% |
| Ambiguous Focus Paths | {count} | {percent}% |
### Sample Issues
- **IMPL-1.2**: No context.artifacts reference to synthesis
- **IMPL-3.1**: Missing flow_control.target_files specification
- **IMPL-4.2**: Vague focus_paths ["src/"] - needs refinement
---
## Feasibility Concerns
| Concern | Tasks Affected | Issue | Recommendation |
|---------|----------------|-------|----------------|
| Skill Gap | IMPL-6.1, IMPL-6.2 | Requires Kubernetes expertise not in team | Add training task or external consultant |
| Resource Conflict | IMPL-3.1, IMPL-3.2 | Both modify src/auth/service.ts in parallel | Add dependency or serialize |
---
## Detailed Findings by Severity
### CRITICAL Issues ({count})
{Detailed breakdown of each critical issue with location, impact, and recommendation}
### HIGH Issues ({count})
{Detailed breakdown of each high issue with location, impact, and recommendation}
### MEDIUM Issues ({count})
{Detailed breakdown of each medium issue with location, impact, and recommendation}
### LOW Issues ({count})
{Detailed breakdown of each low issue with location, impact, and recommendation}
---
## Metrics Summary
| Metric | Value |
|--------|-------|
| Total Requirements | {count} ({functional} functional, {nonfunctional} non-functional, {business} business) |
| Total Tasks | {count} |
| Overall Coverage | {percent}% ({covered}/{total} requirements with ≥1 task) |
| Critical Issues | {count} |
| High Issues | {count} |
| Medium Issues | {count} |
| Low Issues | {count} |
| Total Findings | {total_findings} |
---
## Remediation Recommendations
### Priority Order
1. **CRITICAL** - Must fix before proceeding
2. **HIGH** - Fix before execution
3. **MEDIUM** - Fix during or after implementation
4. **LOW** - Optional improvements
### Next Steps
Based on the quality gate recommendation ({RECOMMENDATION}):
{IF BLOCK_EXECUTION}
**🛑 BLOCK EXECUTION**
You must resolve all CRITICAL issues before proceeding with implementation:
1. Review each critical issue in detail
2. Determine remediation approach (modify IMPL_PLAN.md, update task.json, or add new tasks)
3. Apply fixes systematically
4. Re-run verification to confirm resolution
{ELSE IF PROCEED_WITH_FIXES}
**⚠️ PROCEED WITH FIXES RECOMMENDED**
No critical issues detected, but HIGH issues exist. Recommended workflow:
1. Review high-priority issues
2. Apply fixes before execution for optimal results
3. Re-run verification (optional)
{ELSE IF PROCEED_WITH_CAUTION}
**✅ PROCEED WITH CAUTION**
Only MEDIUM issues detected. You may proceed with implementation:
- Address medium issues during or after implementation
- Maintain awareness of identified concerns
{ELSE}
**✅ PROCEED**
No significant issues detected. Safe to execute implementation workflow.
{END IF}
---
**Report End**
// Write report
Write(`${process_dir}/PLAN_VERIFICATION.md`, fullReport)
console.log(`✅ Report: ${process_dir}/PLAN_VERIFICATION.md\n📊 ${recommendation} | C:${critical_count} H:${high_count} M:${medium_count} L:${low_count} | Coverage:${coverage_percentage}%`)
```
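The template above interpolates `recommendation` and the per-severity counts; a minimal sketch of how those values could be derived from the `findings` array (field names assumed from the template, not a fixed API):

```javascript
// Derive severity counts and the quality-gate recommendation (sketch).
// Assumes each finding carries severity: 'CRITICAL' | 'HIGH' | 'MEDIUM' | 'LOW'.
const count = sev => findings.filter(f => f.severity === sev).length
const critical_count = count('CRITICAL')
const high_count     = count('HIGH')
const medium_count   = count('MEDIUM')
const low_count      = count('LOW')

const recommendation =
  critical_count > 0 ? 'BLOCK_EXECUTION' :
  high_count > 0     ? 'PROCEED_WITH_FIXES' :
  medium_count > 0   ? 'PROCEED_WITH_CAUTION' : 'PROCEED'

// Stable IDs prefixed by severity initial: C1, H1, M1, L1, ...
const seq = {}
findings.forEach(f => {
  const p = f.severity[0]
  seq[p] = (seq[p] || 0) + 1
  f.id = `${p}${seq[p]}`
})
```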
### 6. Next Step Selection
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const canExecute = recommendation !== 'BLOCK_EXECUTION'
// Auto mode
if (autoYes) {
if (canExecute) {
SlashCommand("/workflow:execute --yes --resume-session=\"${session_id}\"")
} else {
console.log(`[--yes] BLOCK_EXECUTION - Fix ${critical_count} critical issues first.`)
}
return
}
// Interactive mode - build options based on quality gate
const options = canExecute
? [
{ label: recommendation === 'PROCEED_WITH_FIXES' ? "Execute Anyway" : "Execute (Recommended)",
description: "Proceed to /workflow:execute" },
{ label: "Review Report", description: "Review findings before deciding" },
{ label: "Re-verify", description: "Re-run after manual fixes" }
]
: [
{ label: "Review Report", description: "Review critical issues" },
{ label: "Re-verify", description: "Re-run after fixing issues" }
]
const selection = AskUserQuestion({
questions: [{
question: `Quality gate: ${recommendation}. Next step?`,
header: "Action",
multiSelect: false,
options
}]
})
// Handle selection (answers keyed by question header, as in the Phase 4 example)
const action = selection.answers["Action"]
if (action.startsWith("Execute")) {
SlashCommand(`/workflow:execute --resume-session="${session_id}"`)
} else if (action === "Re-verify") {
SlashCommand(`/workflow:plan-verify --session ${session_id}`)
}
```
### 7. Save and Display Report
**Step 7.1: Save Report**:
```bash
report_path=".workflow/active/WFS-{session}/.process/PLAN_VERIFICATION.md"
Write(report_path, full_report_content)
```
**Step 7.2: Display Summary to User**:
```bash
# Display executive summary in terminal
echo "=== Plan Verification Complete ==="
echo "Report saved to: {report_path}"
echo ""
echo "Quality Gate: {RECOMMENDATION}"
echo "Critical: {count} | High: {count} | Medium: {count} | Low: {count}"
echo ""
echo "Next: Review full report for detailed findings and recommendations"
```
**Step 7.3: Completion**:
- Report is saved to `.process/PLAN_VERIFICATION.md`
- User can review findings and decide on remediation approach
- No automatic modifications are made to source artifacts
- User can manually apply fixes or use separate remediation command (if available)

View File

@@ -3,6 +3,7 @@ name: plan
description: 5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs
argument-hint: "[-y|--yes] \"text description\"|file.md"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
group: workflow
---
## Auto Mode
@@ -115,7 +116,38 @@ CONTEXT: Existing user database schema, REST API endpoints
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress
**After Phase 1**: Return to user showing Phase 1 results, then auto-continue to Phase 2
**After Phase 1**: Initialize planning-notes.md with user intent
```javascript
// Create minimal planning notes document
const planningNotesPath = `.workflow/active/${sessionId}/planning-notes.md`
const userGoal = structuredDescription.goal
const userConstraints = structuredDescription.context || "None specified"
Write(planningNotesPath, `# Planning Notes
**Session**: ${sessionId}
**Created**: ${new Date().toISOString()}
## User Intent (Phase 1)
- **GOAL**: ${userGoal}
- **KEY_CONSTRAINTS**: ${userConstraints}
---
## Context Findings (Phase 2)
(To be filled by context-gather)
## Conflict Decisions (Phase 3)
(To be filled if conflicts detected)
## Consolidated Constraints (Phase 4 Input)
1. ${userConstraints}
`)
```
Return to user showing Phase 1 results, then auto-continue to Phase 2
---
@@ -138,6 +170,7 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st
**Validation**:
- Context package path extracted
- File exists and is valid JSON
- `prioritized_context` field exists
<!-- TodoWrite: When context-gather executed, INSERT 3 context-gather tasks, mark first as in_progress -->
@@ -168,7 +201,37 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st
**Note**: Phase 2 tasks completed and collapsed to summary.
**After Phase 2**: Return to user showing Phase 2 results, then auto-continue to Phase 3/4 (depending on conflict_risk)
**After Phase 2**: Update planning-notes.md with context findings, then auto-continue
```javascript
// Read context-package to extract key findings
const contextPackage = JSON.parse(Read(contextPath))
const conflictRisk = contextPackage.conflict_detection?.risk_level || 'low'
const criticalFiles = (contextPackage.exploration_results?.aggregated_insights?.critical_files || [])
.slice(0, 5).map(f => f.path)
const archPatterns = contextPackage.project_context?.architecture_patterns || []
const constraints = contextPackage.exploration_results?.aggregated_insights?.constraints || []
// Append Phase 2 findings to planning-notes.md
Edit(planningNotesPath, {
old: '## Context Findings (Phase 2)\n(To be filled by context-gather)',
new: `## Context Findings (Phase 2)
- **CRITICAL_FILES**: ${criticalFiles.join(', ') || 'None identified'}
- **ARCHITECTURE**: ${archPatterns.join(', ') || 'Not detected'}
- **CONFLICT_RISK**: ${conflictRisk}
- **CONSTRAINTS**: ${constraints.length > 0 ? constraints.join('; ') : 'None'}`
})
// Append Phase 2 constraints to consolidated list (numbering continues after the single Phase 1 constraint, hence i + 2)
Edit(planningNotesPath, {
old: '## Consolidated Constraints (Phase 4 Input)',
new: `## Consolidated Constraints (Phase 4 Input)
${constraints.map((c, i) => `${i + 2}. [Context] ${c}`).join('\n')}`
})
```
Return to user showing Phase 2 results, then auto-continue to Phase 3/4 (depending on conflict_risk)
---
@@ -229,7 +292,45 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Note**: Phase 3 tasks completed and collapsed to summary.
**After Phase 3**: Return to user showing conflict resolution results (if executed) and selected strategies, then auto-continue to Phase 3.5
**After Phase 3**: Update planning-notes.md with conflict decisions (if executed), then auto-continue
```javascript
// If Phase 3 was executed, update planning-notes.md
if (['medium', 'high'].includes(conflictRisk)) { // string '>=' would misorder 'high' vs 'medium'
const conflictResPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`
if (fs.existsSync(conflictResPath)) {
const conflictRes = JSON.parse(Read(conflictResPath))
const resolved = conflictRes.resolved_conflicts || []
const modifiedArtifacts = conflictRes.modified_artifacts || []
const planningConstraints = conflictRes.planning_constraints || []
// Update Phase 3 section
Edit(planningNotesPath, {
old: '## Conflict Decisions (Phase 3)\n(To be filled if conflicts detected)',
new: `## Conflict Decisions (Phase 3)
- **RESOLVED**: ${resolved.map(r => `${r.type}: ${r.strategy}`).join('; ') || 'None'}
- **MODIFIED_ARTIFACTS**: ${modifiedArtifacts.join(', ') || 'None'}
- **CONSTRAINTS**: ${planningConstraints.join('; ') || 'None'}`
})
// Append Phase 3 constraints to consolidated list
if (planningConstraints.length > 0) {
const currentNotes = Read(planningNotesPath)
const constraintCount = (currentNotes.match(/^\d+\./gm) || []).length
Edit(planningNotesPath, {
old: '## Consolidated Constraints (Phase 4 Input)',
new: `## Consolidated Constraints (Phase 4 Input)
${planningConstraints.map((c, i) => `${constraintCount + i + 1}. [Conflict] ${c}`).join('\n')}`
})
}
}
}
```
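After a medium-risk run, the appended notes section would read along these lines (values illustrative, not from a real session):

```
## Conflict Decisions (Phase 3)
- **RESOLVED**: ModuleOverlap: merge-into-existing
- **MODIFIED_ARTIFACTS**: guidance-specification.md
- **CONSTRAINTS**: Reuse existing auth module; do not introduce a second session store
```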
Return to user showing conflict resolution results (if executed) and selected strategies, then auto-continue to Phase 3.5
**Memory State Check**:
- Evaluate current context window usage and memory state
@@ -282,7 +383,12 @@ SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]"
**CLI Execution Note**: CLI tool usage is now determined semantically by action-planning-agent based on user's task description. If user specifies "use Codex/Gemini/Qwen for X", the agent embeds `command` fields in relevant `implementation_approach` steps.
**Input**: `sessionId` from Phase 1
**Input**:
- `sessionId` from Phase 1
- **planning-notes.md**: Consolidated constraints from all phases (Phase 1-3)
- Path: `.workflow/active/[sessionId]/planning-notes.md`
- Contains: User intent, context findings, conflict decisions, consolidated constraints
- **Purpose**: Provides structured, minimal context summary to action-planning-agent
**Validation**:
- `.workflow/active/[sessionId]/IMPL_PLAN.md` exists
@@ -315,20 +421,55 @@ SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]"
**Note**: Agent task completed. No collapse needed (single task).
**Return to User**:
```
Planning complete for session: [sessionId]
Tasks generated: [count]
Plan: .workflow/active/[sessionId]/IMPL_PLAN.md

Recommended Next Steps:
1. /workflow:plan-verify --session [sessionId]  # Verify plan quality before execution
2. /workflow:status                             # Review task breakdown
3. /workflow:execute                            # Start implementation (after verification)

Quality Gate: Consider running /workflow:plan-verify to catch issues early
```
**Step 4.2: User Decision** - After Phase 4 completes, present user with action choices:
```javascript
console.log(`
✅ Planning complete for session: ${sessionId}
📊 Tasks generated: ${taskCount}
📋 Plan: .workflow/active/${sessionId}/IMPL_PLAN.md
`);
// Ask user for next action
const userChoice = AskUserQuestion({
questions: [{
question: "Planning complete. What would you like to do next?",
header: "Next Action",
multiSelect: false,
options: [
{
label: "Verify Plan Quality (Recommended)",
description: "Run quality verification to catch issues before execution. Checks plan structure, task dependencies, and completeness."
},
{
label: "Start Execution",
description: "Begin implementing tasks immediately. Use this if you've already reviewed the plan or want to start quickly."
},
{
label: "Review Status Only",
description: "View task breakdown and session status without taking further action. You can decide what to do next manually."
}
]
}]
});
// Execute based on user choice
if (userChoice.answers["Next Action"] === "Verify Plan Quality (Recommended)") {
console.log("\n🔍 Starting plan verification...\n");
SlashCommand(command="/workflow:plan-verify --session " + sessionId);
} else if (userChoice.answers["Next Action"] === "Start Execution") {
console.log("\n🚀 Starting task execution...\n");
SlashCommand(command="/workflow:execute --session " + sessionId);
} else if (userChoice.answers["Next Action"] === "Review Status Only") {
console.log("\n📊 Displaying session status...\n");
SlashCommand(command="/workflow:status --session " + sessionId);
}
```
**Return to User**: Based on user's choice, execute the corresponding workflow command.
## TodoWrite Pattern
**Core Concept**: Dynamic task attachment and collapse for real-time visibility into workflow execution.
@@ -404,26 +545,21 @@ User Input (task description)
Phase 1: session:start --auto "structured-description"
↓ Output: sessionId
Session Memory: Previous tasks, context, artifacts
Write: planning-notes.md (User Intent section)
Phase 2: context-gather --session sessionId "structured-description"
↓ Input: sessionId + session memory + structured description
↓ Output: contextPath (context-package.json) + conflict_risk
↓ Input: sessionId + structured description
↓ Output: contextPath (context-package.json with prioritized_context) + conflict_risk
↓ Update: planning-notes.md (Context Findings + Consolidated Constraints)
Phase 3: conflict-resolution [AUTO-TRIGGERED if conflict_risk ≥ medium]
↓ Input: sessionId + contextPath + conflict_risk
CLI-powered conflict detection (JSON output)
AskUserQuestion: Present conflicts + resolution strategies
↓ User selects strategies (or skip)
↓ Apply modifications via Edit tool:
↓ - Update guidance-specification.md
↓ - Update role analyses (*.md)
↓ - Mark context-package.json as "resolved"
↓ Output: Modified brainstorm artifacts (NO report file)
Output: Modified brainstorm artifacts
Update: planning-notes.md (Conflict Decisions + Consolidated Constraints)
↓ Skip if conflict_risk is none/low → proceed directly to Phase 4
Phase 4: task-generate-agent --session sessionId
↓ Input: sessionId + resolved brainstorm artifacts + session memory
↓ Input: sessionId + planning-notes.md + context-package.json + brainstorm artifacts
↓ Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md
Return summary to user

View File

@@ -113,7 +113,40 @@ const taskId = taskIdMatch?.[1]
- List existing tasks
- Read `IMPL_PLAN.md` and `TODO_LIST.md`
**Output**: Session validated, context loaded, mode determined
4. **Parse Execution Intent** (from requirements text):
```javascript
// Dynamic tool detection from cli-tools.json
// Read enabled tools: ["gemini", "qwen", "codex", ...]
const enabledTools = loadEnabledToolsFromConfig(); // See ~/.claude/cli-tools.json
// Build dynamic patterns from enabled tools
function buildExecPatterns(tools) {
const patterns = {
agent: /改为\s*Agent\s*执行|使用\s*Agent\s*执行/i
};
tools.forEach(tool => {
// Pattern: "使用 {tool} 执行" or "改用 {tool}"
patterns[`cli_${tool}`] = new RegExp(
`使用\\s*(${tool})\\s*执行|改用\\s*(${tool})`, 'i'
);
});
return patterns;
}
const execPatterns = buildExecPatterns(enabledTools);
let executionIntent = null
for (const [key, pattern] of Object.entries(execPatterns)) {
if (pattern.test(requirements)) {
executionIntent = key.startsWith('cli_')
? { method: 'cli', cli_tool: key.replace('cli_', '') }
: { method: 'agent', cli_tool: null }
break
}
}
```
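As a quick sanity check of the generated patterns (assuming `codex` is among the enabled tools; values illustrative):

```javascript
// "改用 codex" ("switch to codex") hits the generated cli_codex pattern
execPatterns.cli_codex.test('改用 codex')  // true → { method: 'cli', cli_tool: 'codex' }
execPatterns.agent.test('改为 Agent 执行') // true → { method: 'agent', cli_tool: null }
```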
**Output**: Session validated, context loaded, mode determined, **executionIntent parsed**
---
@@ -356,7 +389,18 @@ const updated_task = {
flow_control: {
...task.flow_control,
implementation_approach: [...updated_steps]
}
},
// Update execution config if intent detected
...(executionIntent && {
meta: {
...task.meta,
execution_config: {
method: executionIntent.method,
cli_tool: executionIntent.cli_tool,
enable_resume: executionIntent.method !== 'agent'
}
}
})
};
Write({
@@ -365,6 +409,8 @@ Write({
});
```
**Note**: Implementation approach steps are NO LONGER modified. CLI execution is controlled by task-level `meta.execution_config` only.
**Step 5.4: Create New Tasks** (if needed)
Generate complete task JSON with all required fields:
@@ -570,3 +616,33 @@ A: 是,需要同步更新依赖任务
Task replanning complete! Updated 2 tasks
```
### Task Replan - Change Execution Method
```bash
/workflow:replan IMPL-001 "改用 Codex 执行"
# Semantic parsing detects executionIntent:
# { method: 'cli', cli_tool: 'codex' }
# Execution (no interactive questions needed)
✓ Backup created
✓ IMPL-001.json updated
- meta.execution_config = { method: 'cli', cli_tool: 'codex', enable_resume: true }
Task execution method updated: Agent → CLI (codex)
```
```bash
/workflow:replan IMPL-002 "改为 Agent 执行"
# Semantic parsing detects executionIntent:
# { method: 'agent', cli_tool: null }
# Execution
✓ Backup created
✓ IMPL-002.json updated
- meta.execution_config = { method: 'agent', cli_tool: null }
Task execution method updated: CLI → Agent
```

View File

@@ -1,26 +1,26 @@
---
name: review-fix
name: review-cycle-fix
description: Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.
argument-hint: "<export-file|review-dir> [--resume] [--max-iterations=N]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*), Edit(*), Write(*)
---
# Workflow Review-Fix Command
# Workflow Review-Cycle-Fix Command
## Quick Start
```bash
# Fix from exported findings file (session-based path)
/workflow:review-fix .workflow/active/WFS-123/.review/fix-export-1706184622000.json
/workflow:review-cycle-fix .workflow/active/WFS-123/.review/fix-export-1706184622000.json
# Fix from review directory (auto-discovers latest export)
/workflow:review-fix .workflow/active/WFS-123/.review/
/workflow:review-cycle-fix .workflow/active/WFS-123/.review/
# Resume interrupted fix session
/workflow:review-fix --resume
/workflow:review-cycle-fix --resume
# Custom max retry attempts per finding
/workflow:review-fix .workflow/active/WFS-123/.review/ --max-iterations=5
/workflow:review-cycle-fix .workflow/active/WFS-123/.review/ --max-iterations=5
```
**Fix Source**: Exported findings from review cycle dashboard

View File

@@ -764,8 +764,8 @@ After completing a module review, use the generated findings JSON for automated
/workflow:review-module-cycle src/auth/**
# Step 2: Run automated fixes using dimension findings
/workflow:review-fix .workflow/active/WFS-{session-id}/.review/
/workflow:review-cycle-fix .workflow/active/WFS-{session-id}/.review/
```
See `/workflow:review-fix` for automated fixing with smart grouping, parallel execution, and test verification.
See `/workflow:review-cycle-fix` for automated fixing with smart grouping, parallel execution, and test verification.

View File

@@ -775,8 +775,8 @@ After completing a review, use the generated findings JSON for automated fixing:
/workflow:review-session-cycle
# Step 2: Run automated fixes using dimension findings
/workflow:review-fix .workflow/active/WFS-{session-id}/.review/
/workflow:review-cycle-fix .workflow/active/WFS-{session-id}/.review/
```
See `/workflow:review-fix` for automated fixing with smart grouping, parallel execution, and test verification.
See `/workflow:review-cycle-fix` for automated fixing with smart grouping, parallel execution, and test verification.

View File

@@ -25,10 +25,10 @@ Analyzes conflicts between implementation plans and existing codebase, **includi
| Responsibility | Description |
|---------------|-------------|
| **Detect Conflicts** | Analyze plan vs existing code inconsistencies |
| **Scenario Uniqueness** | **NEW**: Search and compare new modules with existing modules for functional overlaps |
| **Scenario Uniqueness** | Search and compare new modules with existing modules for functional overlaps |
| **Generate Strategies** | Provide 2-4 resolution options per conflict |
| **Iterative Clarification** | **NEW**: Ask unlimited questions until scenario boundaries are clear and unique |
| **Agent Re-analysis** | **NEW**: Dynamically update strategies based on user clarifications |
| **Iterative Clarification** | Ask unlimited questions until scenario boundaries are clear and unique |
| **Agent Re-analysis** | Dynamically update strategies based on user clarifications |
| **CLI Analysis** | Use Gemini/Qwen (Claude fallback) |
| **User Decision** | Present options ONE BY ONE, never auto-apply |
| **Direct Text Output** | Output questions via text directly, NEVER use bash echo/printf |
@@ -57,7 +57,7 @@ Analyzes conflicts between implementation plans and existing codebase, **includi
- Breaking updates
### 5. Module Scenario Overlap
- **NEW**: Functional overlap between new and existing modules
- Functional overlap between new and existing modules
- Scenario boundary ambiguity
- Duplicate responsibility detection
- Module merge/split decisions
@@ -134,7 +134,7 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
### 1. Load Context
- Read existing files from conflict_detection.existing_files
- Load plan from .workflow/active/{session_id}/.process/context-package.json
- **NEW**: Load exploration_results and use aggregated_insights for enhanced analysis
- Load exploration_results and use aggregated_insights for enhanced analysis
- Extract role analyses and requirements
### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)
@@ -186,28 +186,18 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
- modifications.old_content: 20-100 chars for unique Edit tool matching
- modifications.new_content: preserves markdown formatting
- modification_suggestions: 2-5 actionable suggestions for custom handling
`)
```
**Agent Internal Flow** (Enhanced):
```
1. Load context package
2. Check conflict_risk (exit if none/low)
3. Read existing files + plan artifacts
4. Run CLI analysis (Gemini→Qwen→Claude) with enhanced tasks:
- Standard conflict detection (Architecture/API/Data/Dependency)
- **NEW: Module scenario uniqueness detection**
* Extract new module functionality from plan
* Search all existing modules with similar keywords/functionality
* Compare scenario coverage and responsibilities
* Identify functional overlaps and boundary ambiguities
* Generate ModuleOverlap conflicts with overlap_analysis
5. Parse conflict findings (including ModuleOverlap category)
6. Generate 2-4 strategies per conflict:
- Include modifications for each strategy
- **For ModuleOverlap**: Add clarification_needed questions for boundary definition
7. Return JSON to stdout (NOT file write)
8. Return execution log path
### 5. Planning Notes Record (REQUIRED)
After analysis complete, append a brief execution record to planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Under "## Conflict Decisions (Phase 3)" section
**Format**:
\`\`\`
### [Conflict-Resolution Agent] YYYY-MM-DD
- **Note**: [Auto-filled: briefly summarize conflict types, resolution strategies, key decisions, etc.]
\`\`\`
`)
```
### Phase 3: User Interaction Loop

View File

@@ -35,7 +35,7 @@ Step 1: Context-Package Detection
├─ Valid package exists → Return existing (skip execution)
└─ No valid package → Continue to Step 2
Step 2: Complexity Assessment & Parallel Explore (NEW)
Step 2: Complexity Assessment & Parallel Explore
├─ Analyze task_description → classify Low/Medium/High
├─ Select exploration angles (1-4 based on complexity)
├─ Launch N cli-explore-agents in parallel
@@ -213,19 +213,37 @@ Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationM
**Only execute after Step 2 completes**
```javascript
// Load user intent from planning-notes.md (from Phase 1)
const planningNotesPath = `.workflow/active/${session_id}/planning-notes.md`;
let userIntent = { goal: task_description, key_constraints: "None specified" };
if (file_exists(planningNotesPath)) {
const notesContent = Read(planningNotesPath);
const goalMatch = notesContent.match(/\*\*GOAL\*\*:\s*(.+)/);
const constraintsMatch = notesContent.match(/\*\*KEY_CONSTRAINTS\*\*:\s*(.+)/);
if (goalMatch) userIntent.goal = goalMatch[1].trim();
if (constraintsMatch) userIntent.key_constraints = constraintsMatch[1].trim();
}
Task(
subagent_type="context-search-agent",
run_in_background=false,
description="Gather comprehensive context for plan",
prompt=`
## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution with priority sorting
## Session Information
- **Session ID**: ${session_id}
- **Task Description**: ${task_description}
- **Output Path**: .workflow/${session_id}/.process/context-package.json
## User Intent (from Phase 1 - Planning Notes)
**GOAL**: ${userIntent.goal}
**KEY_CONSTRAINTS**: ${userIntent.key_constraints}
This is the PRIMARY context source - all subsequent analysis must align with user intent.
## Exploration Input (from Step 2)
- **Manifest**: ${sessionFolder}/explorations-manifest.json
- **Exploration Count**: ${explorationManifest.exploration_count}
@@ -245,7 +263,13 @@ Execute complete context-search-agent workflow for implementation planning:
4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state
### Phase 2: Multi-Source Context Discovery
Execute all discovery tracks:
Execute all discovery tracks (WITH USER INTENT INTEGRATION):
- **Track -1**: User Intent & Priority Foundation (EXECUTE FIRST)
- Load user intent (GOAL, KEY_CONSTRAINTS) from session input
- Map user requirements to codebase entities (files, modules, patterns)
- Establish baseline priority scores based on user goal alignment
- Output: user_intent_mapping.json with preliminary priority scores
- **Track 0**: Exploration Synthesis (load ${sessionFolder}/explorations-manifest.json, prioritize critical_files, deduplicate patterns/integration_points)
- **Track 1**: Historical archive analysis (query manifest.json for lessons learned)
- **Track 2**: Reference documentation (CLAUDE.md, architecture docs)
@@ -254,13 +278,45 @@ Execute all discovery tracks:
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project-tech.json`** for architecture and tech stack unless code analysis reveals it's outdated.
3. **Populate `project_context`**: Directly use the `overview` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
4. **Populate `project_guidelines`**: Load conventions, constraints, and learnings from `project-guidelines.json` into a dedicated section.
5. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
6. Perform conflict detection with risk assessment
7. **Inject historical conflicts** from archive analysis into conflict_detection
8. Generate and validate context-package.json
2. **Synthesize 5-source data** (including Track -1): Merge findings from all sources
- Priority order: User Intent > Archive > Docs > Exploration > Code > Web
- **Prioritize the context from `project-tech.json`** for architecture and tech stack unless code analysis reveals it's outdated
3. **Context Priority Sorting**:
a. Combine scores from Track -1 (user intent alignment) + relevance scores + exploration critical_files
b. Classify files into priority tiers:
- **Critical** (score ≥ 0.85): Directly mentioned in user goal OR exploration critical_files
- **High** (0.70-0.84): Key dependencies, patterns required for goal
- **Medium** (0.50-0.69): Supporting files, indirect dependencies
- **Low** (< 0.50): Contextual awareness only
c. Generate dependency_order: Based on dependency graph + user goal sequence
d. Document sorting_rationale: Explain prioritization logic
4. **Populate `project_context`**: Directly use the `overview` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
5. **Populate `project_guidelines`**: Load conventions, constraints, and learnings from `project-guidelines.json` into a dedicated section.
6. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
7. Perform conflict detection with risk assessment
8. **Inject historical conflicts** from archive analysis into conflict_detection
9. **Generate prioritized_context section**:
```json
{
"prioritized_context": {
"user_intent": {
"goal": "...",
"scope": "...",
"key_constraints": ["..."]
},
"priority_tiers": {
"critical": [{ "path": "...", "relevance": 0.95, "rationale": "..." }],
"high": [...],
"medium": [...],
"low": [...]
},
"dependency_order": ["module1", "module2", "module3"],
"sorting_rationale": "Based on user goal alignment (Track -1), exploration critical files, and dependency graph analysis"
}
}
```
10. Generate and validate context-package.json with prioritized_context field
## Output Requirements
Complete context-package.json with:
@@ -272,6 +328,7 @@ Complete context-package.json with:
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy, historical_conflicts[]}
- **exploration_results**: {manifest_path, exploration_count, angles, explorations[], aggregated_insights} (from Track 0)
- **prioritized_context**: {user_intent, priority_tiers{critical, high, medium, low}, dependency_order[], sorting_rationale}
## Quality Validation
Before completion verify:
@@ -282,6 +339,17 @@ Before completion verify:
- [ ] No sensitive data exposed
- [ ] Total files ≤50 (prioritize high-relevance)
## Planning Notes Record (REQUIRED)
After completing context-package.json, append a brief execution record to planning-notes.md:
**File**: .workflow/active/${session_id}/planning-notes.md
**Location**: Under "## Context Findings (Phase 2)" section
**Format**:
\`\`\`
### [Context-Search Agent] YYYY-MM-DD
- **Note**: [Auto-filled: briefly summarize key findings, e.g. exploration angles, critical files, conflict risk]
\`\`\`
Execute autonomously following agent documentation.
Report completion with statistics.
`
@@ -326,116 +394,11 @@ Refer to `context-search-agent.md` Phase 3.7 for complete `context-package.json`
- **brainstorm_artifacts**: Brainstorm documents with full content (if exists)
- **conflict_detection**: Risk assessment with mitigation strategies and historical conflicts
- **exploration_results**: Aggregated exploration insights (from parallel explore phase)
## Historical Archive Analysis
### Track 1: Query Archive Manifest
The context-search-agent MUST perform historical archive analysis as Track 1 in Phase 2:
**Step 1: Check for Archive Manifest**
```bash
# Check if archive manifest exists
if [[ -f .workflow/archives/manifest.json ]]; then
# Manifest available for querying
fi
```
**Step 2: Extract Task Keywords**
```javascript
// From current task description, extract key entities and operations
const keywords = extractKeywords(task_description);
// Examples: ["User", "model", "authentication", "JWT", "reporting"]
```
**Step 3: Search Archive for Relevant Sessions**
```javascript
// Query manifest for sessions with matching tags or descriptions
const relevantArchives = archives.filter(archive => {
return archive.tags.some(tag => keywords.includes(tag)) ||
keywords.some(kw => archive.description.toLowerCase().includes(kw.toLowerCase()));
});
```
**Step 4: Extract Watch Patterns**
```javascript
// For each relevant archive, check watch_patterns for applicability
const historicalConflicts = [];
relevantArchives.forEach(archive => {
archive.lessons.watch_patterns?.forEach(pattern => {
// Check if pattern trigger matches current task
if (isPatternRelevant(pattern.pattern, task_description)) {
historicalConflicts.push({
source_session: archive.session_id,
pattern: pattern.pattern,
action: pattern.action,
files_to_check: pattern.related_files,
archived_at: archive.archived_at
});
}
});
});
```
**Step 5: Inject into Context Package**
```json
{
"conflict_detection": {
"risk_level": "medium",
"risk_factors": ["..."],
"affected_modules": ["..."],
"mitigation_strategy": "...",
"historical_conflicts": [
{
"source_session": "WFS-auth-feature",
"pattern": "When modifying User model",
"action": "Check reporting-service and auditing-service dependencies",
"files_to_check": ["src/models/User.ts", "src/services/reporting.ts"],
"archived_at": "2025-09-16T09:00:00Z"
}
]
}
}
```
### Risk Level Escalation
If `historical_conflicts` array is not empty, minimum risk level should be "medium":
```javascript
if (historicalConflicts.length > 0 && currentRisk === "low") {
conflict_detection.risk_level = "medium";
conflict_detection.risk_factors.push(
`${historicalConflicts.length} historical conflict pattern(s) detected from past sessions`
);
}
```
### Archive Query Algorithm
```markdown
1. IF .workflow/archives/manifest.json does NOT exist → Skip Track 1, continue to Track 2
2. IF manifest exists:
a. Load manifest.json
b. Extract keywords from task_description (nouns, verbs, technical terms)
c. Filter archives where:
- ANY tag matches keywords (case-insensitive) OR
- description contains keywords (case-insensitive substring match)
d. For each relevant archive:
- Read lessons.watch_patterns array
- Check if pattern.pattern keywords overlap with task_description
- If relevant: Add to historical_conflicts array
e. IF historical_conflicts.length > 0:
- Set risk_level = max(current_risk, "medium")
- Add to risk_factors
3. Continue to Track 2 (reference documentation)
```
- **prioritized_context**: Pre-sorted context with user intent and priority tiers (critical/high/medium/low)
## Notes
- **Detection-first**: Always check for existing package before invoking agent
- **Dual project file integration**: Agent reads both `.workflow/project-tech.json` (tech analysis) and `.workflow/project-guidelines.json` (user constraints) as primary sources
- **Guidelines injection**: Project guidelines are included in context-package to ensure task generation respects user-defined constraints
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **User intent integration**: Load user intent from planning-notes.md (Phase 1 output)
- **Output**: Generates `context-package.json` with `prioritized_context` field
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call

View File

@@ -19,6 +19,10 @@ Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.
## Core Philosophy
- **Planning Only**: Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - does NOT implement code
- **Agent-Driven Document Generation**: Delegate plan generation to action-planning-agent
- **NO Redundant Context Sorting**: Context priority sorting is ALREADY completed in context-gather Phase 2/3
- Use `context-package.json.prioritized_context` directly
- DO NOT re-sort files or re-compute priorities
- `priority_tiers` and `dependency_order` are pre-computed and ready-to-use
- **N+1 Parallel Planning**: Auto-detect multi-module projects, enable parallel planning (2+1 or 3+1 mode)
- **Progressive Loading**: Load context incrementally (Core → Selective → On-Demand) due to analysis.md file size
- **Memory-First**: Reuse loaded documents from conversation memory
@@ -161,12 +165,13 @@ const userConfig = {
### Phase 1: Context Preparation & Module Detection (Command Responsibility)
**Command prepares session paths, metadata, and detects module structure.**
**Command prepares session paths, metadata, detects module structure. Context priority sorting is NOT performed here - it's already completed in context-gather Phase 2/3.**
**Session Path Structure**:
```
.workflow/active/WFS-{session-id}/
├── workflow-session.json # Session metadata
├── planning-notes.md # Consolidated planning notes
├── .process/
│ └── context-package.json # Context package with artifact catalog
├── .task/ # Output: Task JSON files
@@ -248,9 +253,21 @@ IMPORTANT: This is PLANNING ONLY - you are generating planning documents, NOT im
CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)
## PLANNING NOTES (PHASE 1-3 CONTEXT)
Load: .workflow/active/{session-id}/planning-notes.md
This document contains:
- User Intent: Original GOAL and KEY_CONSTRAINTS from Phase 1
- Context Findings: Critical files, architecture, and constraints from Phase 2
- Conflict Decisions: Resolved conflicts and planning constraints from Phase 3
- Consolidated Constraints: All constraints from all phases
**USAGE**: Read planning-notes.md FIRST. Use Consolidated Constraints list to guide task sequencing and dependencies.
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Planning Notes: .workflow/active/{session-id}/planning-notes.md
- Context Package: .workflow/active/{session-id}/.process/context-package.json
Output:
@@ -278,7 +295,17 @@ CLI Resume Support (MANDATORY for all CLI commands):
- Read previous task's cliExecutionId from session state
- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
## EXPLORATION CONTEXT (from context-package.exploration_results)
## PRIORITIZED CONTEXT (from context-package.prioritized_context) - ALREADY SORTED
Context sorting is ALREADY COMPLETED in context-gather Phase 2/3. DO NOT re-sort.
Direct usage:
- **user_intent**: Use goal/scope/key_constraints for task alignment
- **priority_tiers.critical**: These files are PRIMARY focus for task generation
- **priority_tiers.high**: These files are SECONDARY focus
- **dependency_order**: Use this for task sequencing - already computed
- **sorting_rationale**: Reference for understanding priority decisions
## EXPLORATION CONTEXT (from context-package.exploration_results) - SUPPLEMENT ONLY
If prioritized_context is incomplete, fall back to exploration_results:
- Load exploration_results from context-package.json
- Use aggregated_insights.critical_files for focus_paths generation
- Apply aggregated_insights.constraints to acceptance criteria
@@ -298,8 +325,10 @@ CLI Resume Support (MANDATORY for all CLI commands):
- 6-field schema (id, title, status, context_package_path, meta, context, flow_control)
- Quantified requirements with explicit counts
- Artifacts integration from context package
- **focus_paths enhanced with exploration critical_files**
- Flow control with pre_analysis steps (include exploration integration_points analysis)
- **focus_paths generated directly from prioritized_context.priority_tiers (critical + high)**
- NO re-sorting or re-prioritization - use pre-computed tiers as-is
- Critical files are PRIMARY focus, High files are SECONDARY
- Flow control with pre_analysis steps (use prioritized_context.dependency_order for task sequencing)
- **CLI Execution IDs and strategies (MANDATORY)**
2. Implementation Plan (IMPL_PLAN.md)
@@ -347,6 +376,19 @@ Hard Constraints:
- IMPL_PLAN.md created with complete structure
- TODO_LIST.md generated matching task JSONs
- Return completion status with document count and task breakdown summary
## PLANNING NOTES RECORD (REQUIRED)
After completing all documents, append a brief execution record to planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Create new section after "## Consolidated Constraints"
**Format**:
\`\`\`
## Task Generation (Phase 4)
### [Action-Planning Agent] YYYY-MM-DD
- **Note**: [Auto-filled: briefly summarize task count, key tasks, dependencies, etc.]
\`\`\`
`
)
```
@@ -376,16 +418,22 @@ IMPORTANT: Generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md by Phase 3 Co
CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)
## PLANNING NOTES (PHASE 1-3 CONTEXT)
Load: .workflow/active/{session-id}/planning-notes.md
This document contains consolidated constraints and user intent to guide module-scoped task generation.
## MODULE SCOPE
- Module: ${module.name} (${module.type})
- Focus Paths: ${module.paths.join(', ')}
- Task ID Prefix: IMPL-${module.prefix}
- Task Limit: ≤9 tasks (hard limit for this module)
- Task Limit: ≤6 tasks (hard limit for this module)
- Other Modules: ${otherModules.join(', ')} (reference only, do NOT generate tasks for them)
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Planning Notes: .workflow/active/{session-id}/planning-notes.md
- Context Package: .workflow/active/{session-id}/.process/context-package.json
Output:
@@ -411,7 +459,16 @@ CLI Resume Support (MANDATORY for all CLI commands):
- Read previous task's cliExecutionId from session state
- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
## EXPLORATION CONTEXT (from context-package.exploration_results)
## PRIORITIZED CONTEXT (from context-package.prioritized_context) - ALREADY SORTED
Context sorting is ALREADY COMPLETED in context-gather Phase 2/3. DO NOT re-sort.
Filter by module scope (${module.paths.join(', ')}):
- **user_intent**: Use for task alignment within module
- **priority_tiers.critical**: Filter for files in ${module.paths.join(', ')} → PRIMARY focus
- **priority_tiers.high**: Filter for files in ${module.paths.join(', ')} → SECONDARY focus
- **dependency_order**: Use module-relevant entries for task sequencing
## EXPLORATION CONTEXT (from context-package.exploration_results) - SUPPLEMENT ONLY
If prioritized_context is incomplete for this module, fall back to exploration_results:
- Load exploration_results from context-package.json
- Filter for ${module.name} module: Use aggregated_insights.critical_files matching ${module.paths.join(', ')}
- Apply module-relevant constraints from aggregated_insights.constraints
@@ -438,8 +495,10 @@ Task JSON Files (.task/IMPL-${module.prefix}*.json):
- Task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
- Quantified requirements with explicit counts
- Artifacts integration from context package (filtered for ${module.name})
- **focus_paths enhanced with exploration critical_files (module-scoped)**
- Flow control with pre_analysis steps (include exploration integration_points analysis)
- **focus_paths generated directly from prioritized_context.priority_tiers filtered by ${module.paths.join(', ')}**
- NO re-sorting - use pre-computed tiers filtered for this module
- Critical files are PRIMARY focus, High files are SECONDARY
- Flow control with pre_analysis steps (use prioritized_context.dependency_order for module task sequencing)
- **CLI Execution IDs and strategies (MANDATORY)**
- Focus ONLY on ${module.name} module scope
@@ -482,6 +541,21 @@ Hard Constraints:
- Cross-module dependencies use CROSS:: placeholder format consistently
- Focus paths scoped to ${module.paths.join(', ')} only
- Return: task count, task IDs, dependency summary (internal + cross-module)
## PLANNING NOTES RECORD (REQUIRED)
After completing module task JSONs, append a brief execution record to planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Create new section after "## Consolidated Constraints" (if not exists)
**Format**:
\`\`\`
## Task Generation (Phase 4)
### [Action-Planning Agent - ${module.name}] YYYY-MM-DD
- **Note**: [Auto-filled: briefly summarize this module's task count, key tasks, etc.]
\`\`\`
**Note**: Multiple module agents will append their records. Phase 3 Integration Coordinator will add final summary.
`
)
);
@@ -562,6 +636,17 @@ Module Count: ${modules.length}
- No CROSS:: placeholders remaining in task JSONs
- IMPL_PLAN.md and TODO_LIST.md generated with multi-module structure
- Return: task count, per-module breakdown, resolved dependency count
## PLANNING NOTES RECORD (REQUIRED)
After completing integration, append final summary to planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Under "## Task Generation (Phase 4)" section (after module agent records)
**Format**:
\`\`\`
### [Integration Coordinator] YYYY-MM-DD
- **Note**: [Auto-filled: briefly summarize total task count, cross-module dependency resolution, etc.]
\`\`\`
`
)
```
@@ -579,5 +664,4 @@ function resolveCrossModuleDependency(placeholder, allTasks) {
? candidates.sort((a, b) => a.id.localeCompare(b.id))[0].id
: placeholder; // Keep for manual resolution
}
```
```

View File

@@ -257,7 +257,7 @@
"essential": true,
"flow": {
"prerequisites": ["/workflow:execute"],
"next_steps": ["/workflow:review-fix"]
"next_steps": ["/workflow:review-cycle-fix"]
},
"source": "../../../commands/workflow/review-session-cycle.md"
},
@@ -271,8 +271,8 @@
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review-fix",
"command": "/workflow:review-fix",
"name": "review-cycle-fix",
"command": "/workflow:review-cycle-fix",
"description": "Automated fixing of review findings",
"arguments": "<export-file|review-dir>",
"category": "workflow",
@@ -280,7 +280,7 @@
"flow": {
"prerequisites": ["/workflow:review-session-cycle", "/workflow:review-module-cycle"]
},
"source": "../../../commands/workflow/review-fix.md"
"source": "../../../commands/workflow/review-cycle-fix.md"
},
{
"name": "test-gen",

View File

@@ -1,303 +0,0 @@
# CCW Loop Skill
A stateless, iterative development loop workflow supporting three phases (Develop, Debug, Validate), each with its own files recording progress.
## Overview
CCW Loop is an autonomous-mode skill that helps developers complete development tasks systematically through a file-driven, stateless loop.
### Core Features
1. **Stateless loop**: Each run reads state from files; nothing depends on in-memory state
2. **File-driven**: All progress is recorded in Markdown files, so it is auditable and reviewable
3. **Gemini-assisted**: Key decision points use CLI tools for deep analysis
4. **Resumable**: The loop can continue after an interruption at any time
5. **Dual mode**: Supports both interactive and automatic loops
### Three Phases
- **Develop**: Task breakdown → code implementation → progress recording
- **Debug**: Hypothesis generation → evidence collection → root cause analysis → fix verification
- **Validate**: Test execution → coverage check → quality assessment
## Installation
Included in `.claude/skills/ccw-loop/`; no extra installation required.
## Usage
### Basic Usage
```bash
# Start a new loop
/ccw-loop "Implement user authentication"
# Resume an existing loop
/ccw-loop --resume LOOP-auth-2026-01-22
# Automatic loop mode
/ccw-loop --auto "Fix the login bug and add tests"
```
### Interactive Flow
```
1. Start: /ccw-loop "task description"
2. Initialize: automatically analyze the task and generate a subtask list
3. Show menu:
   - 📝 Continue development (Develop)
   - 🔍 Start debugging (Debug)
   - ✅ Run validation (Validate)
   - 📊 View details (Status)
   - 🏁 Complete loop (Complete)
   - 🚪 Exit (Exit)
4. Execute the selected action
5. Repeat steps 3-4 until complete
```
### Automatic Loop Flow
```
Develop (all tasks) → Debug (if needed) → Validate → Complete
```
## Directory Structure
```
.workflow/.loop/{session-id}/
├── meta.json            # Session metadata (immutable)
├── state.json           # Current state (updated each run)
├── summary.md           # Completion report (generated on finish)
├── develop/
│   ├── progress.md      # Development progress timeline
│   ├── tasks.json       # Task list
│   └── changes.log      # Code change log (NDJSON)
├── debug/
│   ├── understanding.md # Evolving understanding document
│   ├── hypotheses.json  # Hypothesis history
│   └── debug.log        # Debug log (NDJSON)
└── validate/
    ├── validation.md    # Validation report
    ├── test-results.json # Test results
    └── coverage.json    # Coverage data
```
## Action Reference
| Action | Description | Trigger |
|--------|-------------|---------|
| action-init | Initialize session | First launch |
| action-menu | Show action menu | Every iteration in interactive mode |
| action-develop-with-file | Execute development tasks | Pending tasks exist |
| action-debug-with-file | Hypothesis-driven debugging | Debugging needed |
| action-validate-with-file | Run test validation | Validation needed |
| action-complete | Complete and generate report | All tasks completed |
See [specs/action-catalog.md](specs/action-catalog.md) for details
## CLI Integration
CCW Loop integrates CLI tools at key decision points:
### Task Breakdown (action-init)
```bash
ccw cli -p "PURPOSE: Break down the development task..."
  --tool gemini
  --mode analysis
  --rule planning-breakdown-task-steps
```
### Code Implementation (action-develop)
```bash
ccw cli -p "PURPOSE: Implement the feature code..."
  --tool gemini
  --mode write
  --rule development-implement-feature
```
### Hypothesis Generation (action-debug, exploration)
```bash
ccw cli -p "PURPOSE: Generate debugging hypotheses..."
  --tool gemini
  --mode analysis
  --rule analysis-diagnose-bug-root-cause
```
### Evidence Analysis (action-debug, analysis)
```bash
ccw cli -p "PURPOSE: Analyze debug log evidence..."
  --tool gemini
  --mode analysis
  --rule analysis-diagnose-bug-root-cause
```
### Quality Assessment (action-validate)
```bash
ccw cli -p "PURPOSE: Analyze test results and coverage..."
  --tool gemini
  --mode analysis
  --rule analysis-review-code-quality
```
## State Management
### State Schema
See [phases/state-schema.md](phases/state-schema.md)
### State Transitions
```
pending → running → completed
user_exit
failed
```
### State Recovery
If `state.json` is corrupted, it can be rebuilt from the other files:
- develop/tasks.json → develop.*
- debug/hypotheses.json → debug.*
- validate/test-results.json → validate.*
## Examples
### Example 1: Feature Development
```bash
# 1. Start the loop
/ccw-loop "Add user profile page"
# 2. The system initializes and generates tasks:
# - task-001: Create profile component
# - task-002: Add API endpoints
# - task-003: Implement tests
# 3. Select "Continue development"
# → Execute task-001 (Gemini-assisted implementation)
# → Update progress.md
# 4. Repeat development until all tasks are complete
# 5. Select "Run validation"
# → Run tests
# → Check coverage
# → Generate validation.md
# 6. Select "Complete loop"
# → Generate summary.md
# → Ask whether to expand into an Issue
```
### Example 2: Bug Fix
```bash
# 1. Start the loop
/ccw-loop "Fix login timeout issue"
# 2. Select "Start debugging"
# → Enter bug description: "Login times out after 30s"
# → Gemini generates hypotheses (H1, H2, H3)
# → Add NDJSON logging
# → Prompt to reproduce the bug
# 3. Reproduce the bug (operate the app)
# 4. Select "Start debugging" again
# → Parse debug.log
# → Gemini analyzes the evidence
# → H2 confirmed as root cause
# → Generate fix code
# → Update understanding.md
# 5. Select "Run validation"
# → Tests pass
# 6. Complete
```
## Templates
- [progress-template.md](templates/progress-template.md): Development progress document template
- [understanding-template.md](templates/understanding-template.md): Debugging understanding document template
- [validation-template.md](templates/validation-template.md): Validation report template
## Specifications
- [loop-requirements.md](specs/loop-requirements.md): Loop requirements specification
- [action-catalog.md](specs/action-catalog.md): Action catalog
## Integration
### Dashboard Integration
CCW Loop integrates with the Dashboard Loop Monitor:
- Dashboard creates a Loop → triggers this skill
- state.json → displayed in the Dashboard in real time
- Task lists stay synchronized in both directions
- Control buttons map to actions
### Issue System Integration
On completion, the loop can be expanded into an Issue:
- Dimensions: test, enhance, refactor, doc
- Automatically invokes `/issue:new`
- Context is auto-filled
## Error Handling
| Situation | Handling |
|-----------|----------|
| Session does not exist | Create a new session |
| state.json corrupted | Rebuild from files |
| CLI tool failure | Fall back to manual mode |
| Test failure | Loop back to develop/debug |
| >10 iterations | Warn the user and suggest splitting the task |
## Limitations
1. **Single session**: Only one active session at a time
2. **Iteration limit**: No more than 10 iterations recommended
3. **CLI dependency**: Some features depend on Gemini CLI availability
4. **Test framework**: Requires test scripts defined in package.json
## Troubleshooting
### Q: How do I check the current session state?
A: Select "View details (Status)" from the menu
### Q: How do I resume an interrupted session?
A: Use the `--resume` flag:
```bash
/ccw-loop --resume LOOP-xxx-2026-01-22
```
### Q: What if a CLI tool fails?
A: The skill automatically falls back to manual mode and prompts for manual input
### Q: How do I add a custom action?
A: See the "Action Extensions" section of [specs/action-catalog.md](specs/action-catalog.md)
## Contributing
To add new functionality:
1. Create an action file in `phases/actions/`
2. Update the orchestrator decision logic
3. Add it to action-catalog.md
4. Update action-menu.md
## License
MIT
---
**Version**: 1.0.0
**Last Updated**: 2026-01-22
**Author**: CCW Team

View File

@@ -0,0 +1,650 @@
---
name: lite-skill-generator
description: Lightweight skill generator with style learning - creates simple skills using flow-based execution and style imitation. Use for quick skill scaffolding, simple workflow creation, or style-aware skill generation.
allowed-tools: Read, Write, Bash, Glob, Grep, AskUserQuestion
---
# Lite Skill Generator
Lightweight meta-skill for rapid skill creation with intelligent style learning and flow-based execution.
## Core Concept
**Simplicity First**: Generate simple, focused skills quickly with minimal overhead. Learn from existing skills to maintain consistent style and structure.
**Progressive Disclosure**: Follow Anthropic's three-layer loading principle:
1. **Metadata** - name, description, triggers (always loaded)
2. **SKILL.md** - core instructions (loaded when triggered)
3. **Bundled resources** - scripts, references, assets (loaded on demand)
## Execution Model
**3-Phase Flow**: Style Learning → Requirements Gathering → Generation
```
User Input → Phase 1: Style Analysis → Phase 2: Requirements → Phase 3: Generate → Skill Package
↓ ↓ ↓
Learn from examples Interactive prompts Write files + validate
```
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Lite Skill Generator │
│ │
│ Input: Skill name, purpose, reference skills │
│ ↓ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Phase 1-3: Lightweight Pipeline │ │
│ │ ┌────┐ ┌────┐ ┌────┐ │ │
│ │ │ P1 │→│ P2 │→│ P3 │ │ │
│ │ │Styl│ │Req │ │Gen │ │ │
│ │ └────┘ └────┘ └────┘ │ │
│ └─────────────────────────────────────────────────────────┘ │
│ ↓ │
│ Output: .claude/skills/{skill-name}/ (minimal package) │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## 3-Phase Workflow
### Phase 1: Style Analysis & Learning
Analyze reference skills to extract language patterns, structural conventions, and writing style.
```javascript
// Phase 1 Execution Flow
async function analyzeStyle(referencePaths) {
// Step 1: Load reference skills
const references = [];
for (const path of referencePaths) {
const content = Read(path);
references.push({
path: path,
content: content,
metadata: extractYAMLFrontmatter(content)
});
}
// Step 2: Extract style patterns
const styleProfile = {
// Structural patterns
structure: {
hasFrontmatter: references.every(r => r.metadata !== null),
sectionHeaders: extractCommonSections(references),
codeBlockUsage: detectCodeBlockPatterns(references),
flowDiagramUsage: detectFlowDiagrams(references)
},
// Language patterns
language: {
instructionStyle: detectInstructionStyle(references), // 'imperative' | 'declarative' | 'procedural'
pseudocodeUsage: detectPseudocodePatterns(references),
verbosity: calculateVerbosityLevel(references), // 'concise' | 'detailed' | 'verbose'
terminology: extractCommonTerms(references)
},
// Organization patterns
organization: {
phaseStructure: detectPhasePattern(references), // 'sequential' | 'autonomous' | 'flat'
exampleDensity: calculateExampleRatio(references),
templateUsage: detectTemplateReferences(references)
}
};
// Step 3: Generate style guide
return {
profile: styleProfile,
recommendations: generateStyleRecommendations(styleProfile),
examples: extractStyleExamples(references, styleProfile)
};
}
// Structural pattern detection
function extractCommonSections(references) {
const allSections = references.map(r =>
r.content.match(/^##? (.+)$/gm)?.map(s => s.replace(/^##? /, ''))
).flat();
return findMostCommon(allSections);
}
// Language style detection
function detectInstructionStyle(references) {
const imperativePattern = /^(Use|Execute|Run|Call|Create|Generate)\s/gim;
const declarativePattern = /^(The|This|Each|All)\s.*\s(is|are|will be)\s/gim;
const proceduralPattern = /^(Step \d+|Phase \d+|First|Then|Finally)\s/gim;
const scores = references.map(r => ({
imperative: (r.content.match(imperativePattern) || []).length,
declarative: (r.content.match(declarativePattern) || []).length,
procedural: (r.content.match(proceduralPattern) || []).length
}));
return getMaxStyle(scores);
}
// Pseudocode pattern detection
function detectPseudocodePatterns(references) {
const hasJavaScriptBlocks = references.some(r => r.content.includes('```javascript'));
const hasFunctionDefs = references.some(r => /function\s+\w+\(/m.test(r.content));
const hasFlowComments = references.some(r => /\/\/.*/m.test(r.content));
return {
usePseudocode: hasJavaScriptBlocks && hasFunctionDefs,
flowAnnotations: hasFlowComments,
style: hasFunctionDefs ? 'functional' : 'imperative'
};
}
```
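The flow above leans on two utilities that are left implied; a minimal sketch of plausible implementations (`findMostCommon` and `getMaxStyle` are assumptions, not part of the skill):
```javascript
// Hypothetical helpers assumed by the detection functions above
function findMostCommon(sections) {
  const counts = {};
  for (const s of sections.filter(Boolean)) counts[s] = (counts[s] || 0) + 1;
  // Treat a section as "common" when it appears in at least half the references
  return Object.entries(counts)
    .filter(([, n]) => n >= sections.length / 2)
    .map(([name]) => name);
}

function getMaxStyle(scores) {
  // Sum per-style match counts across references, then pick the dominant style
  const totals = {};
  for (const s of scores) {
    for (const [style, n] of Object.entries(s)) totals[style] = (totals[style] || 0) + n;
  }
  return Object.keys(totals).reduce((a, b) => (totals[a] >= totals[b] ? a : b));
}
```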
**Output**:
```
Style Analysis Complete:
Structure: Flow-based with pseudocode
Language: Procedural, detailed
Organization: Sequential phases
Key Patterns: 3-5 phases, function definitions, ASCII diagrams
Recommendations:
✓ Use phase-based structure (3-4 phases)
✓ Include pseudocode for complex logic
✓ Add ASCII flow diagrams
✓ Maintain concise documentation style
```
---
### Phase 2: Requirements Gathering
Interactive discovery of skill requirements using learned style patterns.
```javascript
async function gatherRequirements(styleProfile) {
// Step 1: Basic information
const basicInfo = await AskUserQuestion({
questions: [
{
question: "What is the skill name? (kebab-case, e.g., 'pdf-generator')",
header: "Name",
options: [
{ label: "pdf-generator", description: "Example: PDF generation skill" },
{ label: "code-analyzer", description: "Example: Code analysis skill" },
{ label: "Custom", description: "Enter custom name" }
]
},
{
question: "What is the primary purpose?",
header: "Purpose",
options: [
{ label: "Generation", description: "Create/generate artifacts" },
{ label: "Analysis", description: "Analyze/inspect code or data" },
{ label: "Transformation", description: "Convert/transform content" },
{ label: "Orchestration", description: "Coordinate multiple operations" }
]
}
]
});
// Step 2: Execution complexity
const complexity = await AskUserQuestion({
questions: [{
question: "How many main steps does this skill need?",
header: "Steps",
options: [
{ label: "2-3 steps", description: "Simple workflow (recommended for lite-skill)" },
{ label: "4-5 steps", description: "Moderate workflow" },
{ label: "6+ steps", description: "Complex workflow (consider full skill-generator)" }
]
}]
});
// Step 3: Tool requirements
const tools = await AskUserQuestion({
questions: [{
question: "Which tools will this skill use? (select multiple)",
header: "Tools",
multiSelect: true,
options: [
{ label: "Read", description: "Read files" },
{ label: "Write", description: "Write files" },
{ label: "Bash", description: "Execute commands" },
{ label: "Task", description: "Launch agents" },
{ label: "AskUserQuestion", description: "Interactive prompts" }
]
}]
});
// Step 4: Output format
const output = await AskUserQuestion({
questions: [{
question: "What does this skill produce?",
header: "Output",
options: [
{ label: "Single file", description: "One main output file" },
{ label: "Multiple files", description: "Several related files" },
{ label: "Directory structure", description: "Complete directory tree" },
{ label: "Modified files", description: "Edits to existing files" }
]
}]
});
// Step 5: Build configuration
return {
name: basicInfo.Name,
purpose: basicInfo.Purpose,
description: generateDescription(basicInfo.Name, basicInfo.Purpose),
steps: parseStepCount(complexity.Steps),
allowedTools: tools.Tools,
outputType: output.Output,
styleProfile: styleProfile,
triggerPhrases: generateTriggerPhrases(basicInfo.Name, basicInfo.Purpose)
};
}
// Generate skill description from name and purpose
function generateDescription(name, purpose) {
const templates = {
Generation: `Generate ${humanize(name)} with intelligent scaffolding`,
Analysis: `Analyze ${humanize(name)} with detailed reporting`,
Transformation: `Transform ${humanize(name)} with format conversion`,
Orchestration: `Orchestrate ${humanize(name)} workflow with multi-step coordination`
};
return templates[purpose] || `${humanize(name)} skill for ${purpose.toLowerCase()} tasks`;
}
// Generate trigger phrases
function generateTriggerPhrases(name, purpose) {
const base = [name, name.replace(/-/g, ' ')];
const purposeVariants = {
Generation: ['generate', 'create', 'build'],
Analysis: ['analyze', 'inspect', 'review'],
Transformation: ['transform', 'convert', 'format'],
Orchestration: ['orchestrate', 'coordinate', 'manage']
};
return [...base, ...purposeVariants[purpose].map(v => `${v} ${humanize(name)}`)];
}
```
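Two helpers referenced above (`humanize`, `parseStepCount`) are not defined in the skill; a plausible sketch:
```javascript
// Hypothetical helpers assumed by the requirements flow above
function humanize(name) {
  // "pdf-generator" → "Pdf Generator"
  return name.split('-').map(w => w.charAt(0).toUpperCase() + w.slice(1)).join(' ');
}

function parseStepCount(label) {
  // "2-3 steps" → 3, "4-5 steps" → 5, "6+ steps" → 6
  const match = label.match(/(\d+)(?:-(\d+))?/);
  return match ? parseInt(match[2] || match[1], 10) : 3;
}
```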
**Display to User**:
```
Requirements Gathered:
Name: pdf-generator
Purpose: Generation
Steps: 3 (Setup → Generate → Validate)
Tools: Read, Write, Bash
Output: Single file (PDF document)
Triggers: "pdf-generator", "generate pdf", "create pdf"
Style Application:
Using flow-based structure (from style analysis)
Including pseudocode blocks
Adding ASCII diagrams for clarity
```
---
### Phase 3: Generate Skill Package
Create minimal skill structure with style-aware content generation.
```javascript
async function generateSkillPackage(requirements) {
const skillDir = `.claude/skills/${requirements.name}`;
const workDir = `.workflow/.scratchpad/lite-skill-gen-${Date.now()}`;
// Step 1: Create directory structure
Bash(`mkdir -p "${skillDir}" "${workDir}"`);
// Step 2: Generate SKILL.md (using learned style)
const skillContent = generateSkillMd(requirements);
Write(`${skillDir}/SKILL.md`, skillContent);
// Step 3: Conditionally add bundled resources
if (requirements.outputType === 'Directory structure') {
Bash(`mkdir -p "${skillDir}/templates"`);
const templateContent = generateTemplate(requirements);
Write(`${skillDir}/templates/base-template.md`, templateContent);
}
if (requirements.allowedTools.includes('Bash')) {
Bash(`mkdir -p "${skillDir}/scripts"`);
const scriptContent = generateScript(requirements);
Write(`${skillDir}/scripts/helper.sh`, scriptContent);
}
// Step 4: Generate README
const readmeContent = generateReadme(requirements);
Write(`${skillDir}/README.md`, readmeContent);
// Step 5: Validate structure
const validation = validateSkillStructure(skillDir, requirements);
Write(`${workDir}/validation-report.json`, JSON.stringify(validation, null, 2));
// Step 6: Return summary
return {
skillPath: skillDir,
filesCreated: [
`${skillDir}/SKILL.md`,
...(validation.hasTemplates ? [`${skillDir}/templates/`] : []),
...(validation.hasScripts ? [`${skillDir}/scripts/`] : []),
`${skillDir}/README.md`
],
validation: validation,
nextSteps: generateNextSteps(requirements)
};
}
// Generate SKILL.md with style awareness
function generateSkillMd(req) {
const { styleProfile } = req;
// YAML frontmatter
const frontmatter = `---
name: ${req.name}
description: ${req.description}
allowed-tools: ${req.allowedTools.join(', ')}
---
`;
// Main content structure (adapts to style)
let content = frontmatter;
content += `\n# ${humanize(req.name)}\n\n`;
content += `${req.description}\n\n`;
// Add architecture diagram if style uses them
if (styleProfile.structure.flowDiagramUsage) {
content += generateArchitectureDiagram(req);
}
// Add execution flow
content += `## Execution Flow\n\n`;
if (styleProfile.language.pseudocodeUsage.usePseudocode) {
content += generatePseudocodeFlow(req);
} else {
content += generateProceduralFlow(req);
}
// Add phase sections
for (let i = 0; i < req.steps; i++) {
content += generatePhaseSection(i + 1, req, styleProfile);
}
// Add examples if style is verbose
if (styleProfile.language.verbosity !== 'concise') {
content += generateExamplesSection(req);
}
return content;
}
// Generate architecture diagram
function generateArchitectureDiagram(req) {
return `## Architecture
\`\`\`
┌─────────────────────────────────────────────────┐
│ ${humanize(req.name)} │
│ │
│ Input → Phase 1 → Phase 2 → Phase 3 → Output │
│ ${getPhaseName(1, req)} │
│ ${getPhaseName(2, req)} │
│ ${getPhaseName(3, req)} │
└─────────────────────────────────────────────────┘
\`\`\`
`;
}
// Generate pseudocode flow
function generatePseudocodeFlow(req) {
return `\`\`\`javascript
async function ${toCamelCase(req.name)}(input) {
// Phase 1: ${getPhaseName(1, req)}
const prepared = await phase1Prepare(input);
// Phase 2: ${getPhaseName(2, req)}
const processed = await phase2Process(prepared);
// Phase 3: ${getPhaseName(3, req)}
const result = await phase3Finalize(processed);
return result;
}
\`\`\`
`;
}
// Generate phase section
function generatePhaseSection(phaseNum, req, styleProfile) {
const phaseName = getPhaseName(phaseNum, req);
let section = `### Phase ${phaseNum}: ${phaseName}\n\n`;
if (styleProfile.language.pseudocodeUsage.usePseudocode) {
section += `\`\`\`javascript\n`;
section += `async function phase${phaseNum}${toCamelCase(phaseName)}(input) {\n`;
section += ` // TODO: Implement ${phaseName.toLowerCase()} logic\n`;
section += ` return output;\n`;
section += `}\n\`\`\`\n\n`;
} else {
section += `**Steps**:\n`;
section += `1. Load input data\n`;
section += `2. Process according to ${phaseName.toLowerCase()} logic\n`;
section += `3. Return result to next phase\n\n`;
}
return section;
}
// Validation
function validateSkillStructure(skillDir, req) {
const requiredFiles = [`${skillDir}/SKILL.md`, `${skillDir}/README.md`];
const exists = requiredFiles.map(f => Bash(`test -f "${f}"`).exitCode === 0);
return {
valid: exists.every(e => e),
hasTemplates: Bash(`test -d "${skillDir}/templates"`).exitCode === 0,
hasScripts: Bash(`test -d "${skillDir}/scripts"`).exitCode === 0,
filesPresent: requiredFiles.filter((f, i) => exists[i]),
styleCompliance: checkStyleCompliance(skillDir, req.styleProfile)
};
}
```
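The generators above assume `toCamelCase` and `getPhaseName`; a minimal sketch, with default phase names that are purely illustrative:
```javascript
// Hypothetical helpers assumed by the generators above
function toCamelCase(str) {
  // "pdf-generator" → "pdfGenerator"
  return str.replace(/-(\w)/g, (_, c) => c.toUpperCase());
}

function getPhaseName(phaseNum, req) {
  // Default phase names per purpose; a real skill would take these from requirements
  const defaults = {
    Generation: ['Setup', 'Generate', 'Validate'],
    Analysis: ['Collect', 'Analyze', 'Report'],
    Transformation: ['Parse', 'Transform', 'Emit'],
    Orchestration: ['Plan', 'Dispatch', 'Aggregate']
  };
  const names = defaults[req.purpose] || defaults.Generation;
  return names[phaseNum - 1] || `Phase ${phaseNum}`;
}
```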
**Output**:
```
Skill Package Generated:
Location: .claude/skills/pdf-generator/
Structure:
✓ SKILL.md (entry point)
✓ README.md (usage guide)
✓ templates/ (directory templates)
✓ scripts/ (helper scripts)
Validation:
✓ All required files present
✓ Style compliance: 95%
✓ Frontmatter valid
✓ Tool references correct
Next Steps:
1. Review SKILL.md and customize phases
2. Test skill: /skill:pdf-generator "test input"
3. Iterate based on usage
```
---
## Complete Execution Flow
```
User: "Create a PDF generator skill"
Phase 1: Style Analysis
|-- Read reference skills (ccw.md, ccw-coordinator.md)
|-- Extract style patterns (flow diagrams, pseudocode, structure)
|-- Generate style profile
+-- Output: Style recommendations
Phase 2: Requirements
|-- Ask: Name, purpose, steps
|-- Ask: Tools, output format
|-- Generate: Description, triggers
+-- Output: Requirements config
Phase 3: Generation
|-- Create: Directory structure
|-- Write: SKILL.md (style-aware)
|-- Write: README.md
|-- Optionally: templates/, scripts/
|-- Validate: Structure and style
+-- Output: Skill package
Return: Skill location + next steps
```
## Phase Execution Protocol
```javascript
// Main entry point
async function liteSkillGenerator(input) {
// Phase 1: Style Learning
const references = [
'.claude/commands/ccw.md',
'.claude/commands/ccw-coordinator.md',
...discoverReferenceSkills(input)
];
const styleProfile = await analyzeStyle(references);
console.log(`Style Analysis: ${styleProfile.organization.phaseStructure}, ${styleProfile.language.verbosity}`);
// Phase 2: Requirements
const requirements = await gatherRequirements(styleProfile);
console.log(`Requirements: ${requirements.name} (${requirements.steps} phases)`);
// Phase 3: Generation
const result = await generateSkillPackage(requirements);
console.log(`✅ Generated: ${result.skillPath}`);
return result;
}
```
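`discoverReferenceSkills` is left undefined above; one plausible reading is a name match against installed skills:
```javascript
// Hypothetical discovery helper: pick up reference skills mentioned in the request
function discoverReferenceSkills(input) {
  // e.g. "Create a skill like software-manual" → ['.claude/skills/software-manual/SKILL.md']
  return Glob('.claude/skills/*/SKILL.md')
    .filter(path => input.includes(path.split('/').slice(-2)[0]));
}
```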
## Output Structure
**Minimal Package** (default):
```
.claude/skills/{skill-name}/
├── SKILL.md # Entry point with frontmatter
└── README.md # Usage guide
```
**With Templates** (if needed):
```
.claude/skills/{skill-name}/
├── SKILL.md
├── README.md
└── templates/
└── base-template.md
```
**With Scripts** (if using Bash):
```
.claude/skills/{skill-name}/
├── SKILL.md
├── README.md
└── scripts/
└── helper.sh
```
## Key Design Principles
1. **Style Learning** - Analyze reference skills to maintain consistency
2. **Minimal Overhead** - Generate only essential files (SKILL.md + README)
3. **Progressive Disclosure** - Follow Anthropic's three-layer loading
4. **Flow-Based** - Use pseudocode and flow diagrams (when style appropriate)
5. **Interactive** - Guided requirements gathering via AskUserQuestion
6. **Fast Generation** - 3 phases instead of 6, focused on simplicity
7. **Style Awareness** - Adapt output based on detected patterns
## Style Pattern Detection
**Structural Patterns**:
- YAML frontmatter usage (100% in references)
- Section headers (H2 for major, H3 for sub-sections)
- Code blocks (JavaScript pseudocode, Bash examples)
- ASCII diagrams (architecture, flow charts)
**Language Patterns**:
- Instruction style: Procedural with function definitions
- Pseudocode: JavaScript-based with flow annotations
- Verbosity: Detailed but focused
- Terminology: Phase, workflow, pipeline, orchestrator
**Organization Patterns**:
- Phase structure: 3-5 sequential phases
- Example density: Moderate (1-2 per major section)
- Template usage: Minimal (only when necessary)
## Usage Examples
**Basic Generation**:
```
User: "Create a markdown formatter skill"
Lite-Skill-Generator:
→ Analyzes ccw.md style
→ Asks: Name? "markdown-formatter"
→ Asks: Purpose? "Transformation"
→ Asks: Steps? "3 steps"
→ Generates: .claude/skills/markdown-formatter/
```
**With Custom References**:
```
User: "Create a skill like software-manual but simpler"
Lite-Skill-Generator:
→ Analyzes software-manual skill
→ Learns: Multi-phase, agent-based, template-heavy
→ Simplifies: 3 phases, direct execution, minimal templates
→ Generates: Simplified version
```
## Comparison: lite-skill-generator vs skill-generator
| Aspect | lite-skill-generator | skill-generator |
|--------|---------------------|-----------------|
| **Phases** | 3 (Style → Req → Gen) | 6 (Spec → Req → Dir → Gen → Specs → Val) |
| **Style Learning** | Yes (analyze references) | No (fixed templates) |
| **Complexity** | Simple skills only | Full-featured skills |
| **Output** | Minimal (SKILL.md + README) | Complete (phases/, specs/, templates/) |
| **Generation Time** | Fast (~2 min) | Thorough (~10 min) |
| **Use Case** | Quick scaffolding | Production-ready skills |
## Workflow Integration
**Standalone**:
```bash
/skill:lite-skill-generator "Create a log analyzer skill"
```
**With References**:
```bash
/skill:lite-skill-generator "Create a skill based on ccw-coordinator.md style"
```
**Batch Generation** (for multiple simple skills):
```bash
/skill:lite-skill-generator "Create 3 skills: json-validator, yaml-parser, toml-converter"
```
---
**Next Steps After Generation**:
1. Review `.claude/skills/{name}/SKILL.md`
2. Customize phase logic for your use case
3. Add examples to README.md
4. Test skill with sample input
5. Iterate based on real usage

View File

@@ -0,0 +1,68 @@
---
name: {{SKILL_NAME}}
description: {{SKILL_DESCRIPTION}}
allowed-tools: {{ALLOWED_TOOLS}}
---
# {{SKILL_TITLE}}
{{SKILL_DESCRIPTION}}
## Architecture
```
┌─────────────────────────────────────────────────┐
│ {{SKILL_TITLE}} │
│ │
│ Input → {{PHASE_1}} → {{PHASE_2}} → Output │
└─────────────────────────────────────────────────┘
```
## Execution Flow
```javascript
async function {{SKILL_FUNCTION}}(input) {
// Phase 1: {{PHASE_1}}
const prepared = await phase1(input);
// Phase 2: {{PHASE_2}}
const result = await phase2(prepared);
return result;
}
```
### Phase 1: {{PHASE_1}}
```javascript
async function phase1(input) {
// TODO: Implement {{PHASE_1_LOWER}} logic
return output;
}
```
### Phase 2: {{PHASE_2}}
```javascript
async function phase2(input) {
// TODO: Implement {{PHASE_2_LOWER}} logic
return output;
}
```
## Usage
```bash
/skill:{{SKILL_NAME}} "input description"
```
## Examples
**Basic Usage**:
```
User: "{{EXAMPLE_INPUT}}"
{{SKILL_NAME}}:
→ Phase 1: {{PHASE_1_ACTION}}
→ Phase 2: {{PHASE_2_ACTION}}
→ Output: {{EXAMPLE_OUTPUT}}
```

View File

@@ -0,0 +1,64 @@
# Style Guide Template
Generated by lite-skill-generator style analysis phase.
## Detected Patterns
### Structural Patterns
| Pattern | Detected | Recommendation |
|---------|----------|----------------|
| YAML Frontmatter | {{HAS_FRONTMATTER}} | {{FRONTMATTER_REC}} |
| ASCII Diagrams | {{HAS_DIAGRAMS}} | {{DIAGRAMS_REC}} |
| Code Blocks | {{HAS_CODE_BLOCKS}} | {{CODE_BLOCKS_REC}} |
| Phase Structure | {{PHASE_STRUCTURE}} | {{PHASE_REC}} |
### Language Patterns
| Pattern | Value | Notes |
|---------|-------|-------|
| Instruction Style | {{INSTRUCTION_STYLE}} | imperative/declarative/procedural |
| Pseudocode Usage | {{PSEUDOCODE_USAGE}} | functional/imperative/none |
| Verbosity Level | {{VERBOSITY}} | concise/detailed/verbose |
| Common Terms | {{TERMINOLOGY}} | domain-specific vocabulary |
### Organization Patterns
| Pattern | Value |
|---------|-------|
| Phase Count | {{PHASE_COUNT}} |
| Example Density | {{EXAMPLE_DENSITY}} |
| Template Usage | {{TEMPLATE_USAGE}} |
## Style Compliance Checklist
- [ ] YAML frontmatter with name, description, allowed-tools
- [ ] Architecture diagram (if pattern detected)
- [ ] Execution flow section with pseudocode
- [ ] Phase sections (sequential numbered)
- [ ] Usage examples section
- [ ] README.md for external documentation
## Reference Skills Analyzed
{{#REFERENCES}}
- `{{REF_PATH}}`: {{REF_NOTES}}
{{/REFERENCES}}
## Generated Configuration
```json
{
"style": {
"structure": "{{STRUCTURE_TYPE}}",
"language": "{{LANGUAGE_TYPE}}",
"organization": "{{ORG_TYPE}}"
},
"recommendations": {
"usePseudocode": {{USE_PSEUDOCODE}},
"includeDiagrams": {{INCLUDE_DIAGRAMS}},
"verbosityLevel": "{{VERBOSITY}}",
"phaseCount": {{PHASE_COUNT}}
}
}
```
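For illustration, a populated configuration for a flow-based reference set might look like this (all values are hypothetical):
```json
{
  "style": {
    "structure": "flow-based",
    "language": "procedural",
    "organization": "sequential-phases"
  },
  "recommendations": {
    "usePseudocode": true,
    "includeDiagrams": true,
    "verbosityLevel": "detailed",
    "phaseCount": 3
  }
}
```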

View File

@@ -12,26 +12,24 @@ Meta-skill for creating new Claude Code skills with configurable execution modes
```
┌─────────────────────────────────────────────────────────────────┐
│ Skill Generator │
│ │
│ Input: User Request (skill name, purpose, mode) │
│ ↓ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Phase 0-5: Sequential Pipeline │ │
│ │ ┌────┐ ┌────┐ ┌────┐ ┌────┐ ┌────┐ ┌────┐ │ │
│ │ │ P0 │→│ P1 │→│ P2 │→│ P3 │→│ P4 │→│ P5 │ │ │
│ │ │Spec│ │Req │ │Dir │ │Gen │ │Spec│ │Val │ │ │
│ │ └────┘ └────┘ └────┘ └─┬──┘ └────┘ └────┘ │ │
│ │ ┌────┴────┐ │ │
│ │ ↓ ↓ │ │
│ │ Sequential Autonomous │ │
│ │ (phases/) (actions/) │ │
│ └─────────────────────────────────────────────────────────┘ │
│ ↓ │
│ Output: .claude/skills/{skill-name}/ (complete package) │
│ │
└─────────────────────────────────────────────────────────────────┘
```
@@ -107,8 +105,7 @@ Phase 01 → Phase 02 → Phase 03 → ... → Phase N
| [templates/autonomous-action.md](templates/autonomous-action.md) | Autonomous Action template |
| [templates/code-analysis-action.md](templates/code-analysis-action.md) | Code analysis Action template |
| [templates/llm-action.md](templates/llm-action.md) | LLM Action template |
| [templates/script-template.md](templates/script-template.md) | Unified script template (Bash + Python) |
### Specification Documents (read as needed)
@@ -134,93 +131,288 @@ Phase 01 → Phase 02 → Phase 03 → ... → Phase N
## Execution Flow
```
Input Parsing:
└─ Convert user request to structured format (skill-name/purpose/mode)
Phase 0: Specification Study (⚠️ MANDATORY - do not skip)
└─ Read specification documents
├─ Load: ../_shared/SKILL-DESIGN-SPEC.md
├─ Load: All templates/*.md files
├─ Understand: Structure rules, naming conventions, quality standards
└─ Output: Internalized requirements (in-memory, no file output)
└─ Validation: ⛔ MUST complete before Phase 1
Phase 1: Requirements Discovery
└─ Gather skill requirements via user interaction
├─ Tool: AskUserQuestion
├─ Prompt: Skill name, purpose, execution mode
├─ Prompt: Phase/Action definition
│ └─ Prompt: Tool dependencies, output format
├─ Process: Generate configuration object
└─ Output: skill-config.json → ${workDir}/
├─ skill_name: string
├─ execution_mode: "sequential" | "autonomous"
├─ phases/actions: array
└─ allowed_tools: array
Phase 2: Structure Generation
└─ Create directory structure and entry file
├─ Input: skill-config.json (from Phase 1)
├─ Tool: Bash
│ └─ Execute: mkdir -p .claude/skills/{skill-name}/{phases,specs,templates,scripts}
├─ Tool: Write
│ └─ Generate: SKILL.md (entry point with architecture diagram)
└─ Output: Complete directory structure
├─ .claude/skills/{skill-name}/SKILL.md
├─ .claude/skills/{skill-name}/phases/
├─ .claude/skills/{skill-name}/specs/
├─ .claude/skills/{skill-name}/templates/
└─ .claude/skills/{skill-name}/scripts/
Phase 3: Phase/Action Generation
└─ Decision (execution_mode check):
├─ execution_mode === "sequential" → Generate Sequential Phases
│ ├─ Tool: Read (template: templates/sequential-phase.md)
│ ├─ Loop: For each phase in config.sequential_config.phases
│ │ ├─ Generate: phases/{phase-id}.md
│ │ └─ Link: Previous phase output → Current phase input
│ ├─ Tool: Write (orchestrator: phases/_orchestrator.md)
│ ├─ Tool: Write (workflow definition: workflow.json)
│ └─ Output: phases/01-{name}.md, phases/02-{name}.md, ...
└─ execution_mode === "autonomous" → Generate Orchestrator + Actions
├─ Tool: Read (template: templates/autonomous-orchestrator.md)
├─ Tool: Write (state schema: phases/state-schema.md)
├─ Tool: Write (orchestrator: phases/orchestrator.md)
├─ Tool: Write (action catalog: specs/action-catalog.md)
├─ Loop: For each action in config.autonomous_config.actions
│ ├─ Tool: Read (template: templates/autonomous-action.md)
│ └─ Generate: phases/actions/{action-id}.md
└─ Output: phases/orchestrator.md, phases/actions/*.md
Phase 4: Specs & Templates
└─ Generate domain specifications and templates
├─ Input: skill-config.json (domain context)
├─ Tool: Write
│ ├─ Generate: specs/{domain}-requirements.md
│ ├─ Generate: specs/quality-standards.md
│ └─ Generate: templates/agent-base.md (if needed)
└─ Output: Domain-specific documentation
├─ specs/{skill-name}-requirements.md
├─ specs/quality-standards.md
└─ templates/agent-base.md
Phase 5: Validation & Documentation
└─ Verify completeness and generate usage guide
├─ Input: All generated files from previous phases
├─ Tool: Glob + Read
│ └─ Check: Required files exist and contain proper structure
├─ Tool: Write
│ ├─ Generate: README.md (usage instructions)
│ └─ Generate: validation-report.json (completeness check)
└─ Output: Final documentation
├─ README.md (how to use this skill)
└─ validation-report.json (quality gate results)
Return:
└─ Summary with skill location and next steps
├─ Skill path: .claude/skills/{skill-name}/
├─ Status: ✅ All phases completed
└─ Suggestion: "Review SKILL.md and customize phase files as needed"
```
**Execution Protocol**:
```javascript
// Phase 0: Read specifications (in-memory)
Read('.claude/skills/_shared/SKILL-DESIGN-SPEC.md');
Read('.claude/skills/skill-generator/templates/*.md'); // All templates
// Phase 1: Gather requirements
const answers = AskUserQuestion({
questions: [
{ question: "Skill name?", header: "Name", options: [...] },
{ question: "Execution mode?", header: "Mode", options: ["Sequential", "Autonomous"] }
]
});
const config = generateConfig(answers);
const workDir = `.workflow/.scratchpad/skill-gen-${timestamp}`;
Write(`${workDir}/skill-config.json`, JSON.stringify(config));
// Phase 2: Create structure
const skillDir = `.claude/skills/${config.skill_name}`;
Bash(`mkdir -p "${skillDir}/phases" "${skillDir}/specs" "${skillDir}/templates"`);
Write(`${skillDir}/SKILL.md`, generateSkillEntry(config));
// Phase 3: Generate phases (mode-dependent)
if (config.execution_mode === 'sequential') {
Write(`${skillDir}/phases/_orchestrator.md`, generateOrchestrator(config));
Write(`${skillDir}/workflow.json`, generateWorkflowDef(config));
config.sequential_config.phases.forEach(phase => {
Write(`${skillDir}/phases/${phase.id}.md`, generatePhase(phase, config));
});
} else {
Write(`${skillDir}/phases/orchestrator.md`, generateAutonomousOrchestrator(config));
Write(`${skillDir}/phases/state-schema.md`, generateStateSchema(config));
config.autonomous_config.actions.forEach(action => {
Write(`${skillDir}/phases/actions/${action.id}.md`, generateAction(action, config));
});
}
// Phase 4: Generate specs
Write(`${skillDir}/specs/${config.skill_name}-requirements.md`, generateRequirements(config));
Write(`${skillDir}/specs/quality-standards.md`, generateQualityStandards(config));
// Phase 5: Validate & Document
const validation = validateStructure(skillDir);
Write(`${skillDir}/validation-report.json`, JSON.stringify(validation));
Write(`${skillDir}/README.md`, generateReadme(config, validation));
```
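`validateStructure` is not shown in the protocol; a minimal sketch consistent with the validators used elsewhere in this document:
```javascript
// Hypothetical validator backing the protocol above
function validateStructure(skillDir) {
  const requiredFiles = [`${skillDir}/SKILL.md`, `${skillDir}/specs/quality-standards.md`];
  const missing = requiredFiles.filter(f => Bash(`test -f "${f}"`).exitCode !== 0);
  return {
    valid: missing.length === 0,
    missing: missing,
    checked_at: new Date().toISOString()
  };
}
```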
---
## Phase Reference Guide
Navigation and entry points for each execution phase:
### Phase 0: Specification Study (Mandatory)
**Document**: 🔗 [SKILL-DESIGN-SPEC.md](../_shared/SKILL-DESIGN-SPEC.md)
**Purpose**: Understand skill design standards before generating
**What to Read**:
- Skill structure conventions
- Naming standards
- Quality criteria
- Output format specifications
---
### Phase 1: Requirements Discovery
**Document**: 🔗 [phases/01-requirements-discovery.md](phases/01-requirements-discovery.md)
| Attribute | Value |
|-----------|-------|
| **Purpose** | Gather configuration from user via interactive prompts |
| **Input** | User responses (Skill name, purpose, execution mode) |
| **Output** | `skill-config.json` (configuration file) |
| **Key Decision** | Execution mode: Sequential vs Autonomous vs Hybrid |
---
### Phase 2: Structure Generation
**Document**: 🔗 [phases/02-structure-generation.md](phases/02-structure-generation.md)
| Attribute | Value |
|-----------|-------|
| **Purpose** | Create directory structure and entry file |
| **Input** | `skill-config.json` (from Phase 1) |
| **Output** | `.claude/skills/{skill-name}/` directory + `SKILL.md` |
---
### Phase 3: Phase/Action Generation
**Document**: 🔗 [phases/03-phase-generation.md](phases/03-phase-generation.md)
| Attribute | Value |
|-----------|-------|
| **Purpose** | Generate execution phases or actions based on mode |
| **Input** | `skill-config.json` + Templates |
| **Output** | Sequential: `phases/01-*.md` ... OR Autonomous: `phases/orchestrator.md` + `actions/*.md` |
**Decision Logic**:
```
IF execution_mode === "sequential":
└─ Generate sequential phases with linear flow
ELSE IF execution_mode === "autonomous":
└─ Generate orchestrator + independent actions
```
---
### Phase 4: Specs & Templates
**Document**: 🔗 [phases/04-specs-templates.md](phases/04-specs-templates.md)
| Attribute | Value |
|-----------|-------|
| **Purpose** | Generate domain specifications and template files |
| **Input** | `skill-config.json` (domain context) |
| **Output** | `specs/{skill-name}-requirements.md`, `specs/quality-standards.md` |
---
### Phase 5: Validation & Documentation
**Document**: 🔗 [phases/05-validation.md](phases/05-validation.md)
| Attribute | Value |
|-----------|-------|
| **Purpose** | Verify completeness and generate usage documentation |
| **Input** | All files from previous phases |
| **Output** | `README.md`, `validation-report.json` |
---
## Template Reference
| Template | Generated For | When Used |
|----------|--------------|-----------|
| [skill-md.md](templates/skill-md.md) | SKILL.md entry file | Phase 2 |
| [sequential-phase.md](templates/sequential-phase.md) | Sequential phase files | Phase 3 |
| [autonomous-orchestrator.md](templates/autonomous-orchestrator.md) | Orchestrator (autonomous) | Phase 3 |
| [autonomous-action.md](templates/autonomous-action.md) | Action files | Phase 3 |
| [code-analysis-action.md](templates/code-analysis-action.md) | Code analysis actions | Phase 3 |
| [llm-action.md](templates/llm-action.md) | LLM-powered actions | Phase 3 |
| [script-template.md](templates/script-template.md) | Bash + Python scripts | Phase 3/4 |
## Output Structure
### Sequential Mode
```
.claude/skills/{skill-name}/
├── SKILL.md # Entry point
├── phases/
│ ├── _orchestrator.md # Declarative orchestrator
│ ├── workflow.json # Workflow definition
│ ├── 01-{step-one}.md # Phase 1
│ ├── 02-{step-two}.md # Phase 2
│ └── 03-{step-three}.md # Phase 3
├── specs/
│ ├── {skill-name}-requirements.md
│ └── quality-standards.md
├── templates/
│ └── agent-base.md
├── scripts/
└── README.md
```
### Autonomous Mode
```
.claude/skills/{skill-name}/
├── SKILL.md # Entry point
├── phases/
│ ├── orchestrator.md # Orchestrator (state-driven)
│ ├── state-schema.md # State schema definition
│ └── actions/
│ ├── action-init.md
│ ├── action-create.md
│ └── action-list.md
├── specs/
│ ├── {skill-name}-requirements.md
│ ├── action-catalog.md
│ └── quality-standards.md
├── templates/
│ ├── orchestrator-base.md
│ └── action-base.md
├── scripts/
└── README.md
```

View File

@@ -1,15 +1,4 @@
# Phase 1: Requirements Discovery
Gather requirements for the new skill and generate its configuration file.
## Objective
- Collect basic skill information (name, description, trigger words)
- Determine the execution mode (Sequential / Autonomous)
- Define phases/actions
- Configure tool dependencies and output format
## Execution Steps
### Step 1: Collect Basic Information
@@ -228,12 +217,12 @@ Bash(`mkdir -p "${workDir}"`);
Write(`${workDir}/skill-config.json`, JSON.stringify(config, null, 2));
```
## Output
- **File**: `skill-config.json`
- **Location**: `.workflow/.scratchpad/skill-gen-{timestamp}/`
- **Format**: JSON
## Next Phase
→ [Phase 2: Structure Generation](02-structure-generation.md)
**Data Flow to Phase 2**:
- skill-config.json with all configuration parameters
- Execution mode decision drives directory structure creation

View File

@@ -8,9 +8,7 @@
- Generate the SKILL.md entry file
- Create mode-specific subdirectories
## Input
- Depends on: `skill-config.json` (Phase 1 output)
## Execution Steps
@@ -23,19 +21,79 @@ const skillDir = `.claude/skills/${config.skill_name}`;
### Step 2: Create Directory Structure
#### Base Directories (all modes)
```javascript
// Base structure
Bash(`mkdir -p "${skillDir}/{phases,specs,templates,scripts}"`);
```
#### Mode-Specific Directories
```
config.execution_mode
├─ "sequential"
│ ↓ Creates:
│ └─ phases/ (already created with the base directories)
│ ├─ _orchestrator.md
│ └─ workflow.json
└─ "autonomous" | "hybrid"
↓ Creates:
└─ phases/actions/
├─ state-schema.md
└─ *.md (action files)
```
```javascript
// Extra directory for Autonomous/Hybrid modes
if (config.execution_mode === 'autonomous' || config.execution_mode === 'hybrid') {
Bash(`mkdir -p "${skillDir}/phases/actions"`);
}
```
#### Context-Strategy-Specific Directories (P0 Enhancement)
```javascript
// ========== P0: Create directories according to the context strategy ==========
const contextStrategy = config.context_strategy || 'file';
if (contextStrategy === 'file') {
// File strategy: create the context persistence directory
Bash(`mkdir -p "${skillDir}/.scratchpad-template/context"`);
// Create the context template file
Write(
`${skillDir}/.scratchpad-template/context/.gitkeep`,
"# Runtime context storage for file-based strategy"
);
}
// Memory strategy requires no directory (in-memory only)
```
**Directory tree view**:
```
Sequential + File Strategy:
.claude/skills/{skill-name}/
├── phases/
│ ├── _orchestrator.md
│ ├── workflow.json
│ ├── 01-*.md
│ └── 02-*.md
├── .scratchpad-template/
│ └── context/ ← File strategy persistent storage
└── specs/
Autonomous + Memory Strategy:
.claude/skills/{skill-name}/
├── phases/
│ ├── orchestrator.md
│ ├── state-schema.md
│ └── actions/
│ └── *.md
└── specs/
```
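A minimal sketch of how the two context strategies could differ at runtime (`makeContextStore` is an assumption, not part of the generator):
```javascript
// Hypothetical context accessors for the two strategies
function makeContextStore(strategy, workDir) {
  if (strategy === 'file') {
    // File strategy: persist context to the scratchpad between phases
    return {
      save: (key, data) => Write(`${workDir}/context/${key}.json`, JSON.stringify(data, null, 2)),
      load: (key) => JSON.parse(Read(`${workDir}/context/${key}.json`))
    };
  }
  // Memory strategy: context lives only in the orchestrator's state object
  const memory = {};
  return {
    save: (key, data) => { memory[key] = data; },
    load: (key) => memory[key]
  };
}
```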
### Step 3: Generate SKILL.md
@@ -192,16 +250,13 @@ function generateReferenceTable(config) {
}
```
## Output
- **Directory**: `.claude/skills/{skill-name}/`
- **Files**:
- `SKILL.md` (entry file)
- `phases/` (execution phases)
- `specs/` (specification documents)
- `templates/` (templates)
- `scripts/` (deterministic Python/Bash scripts)
## Next Phase
→ [Phase 3: Phase Generation](03-phase-generation.md)
**Data Flow to Phase 3**:
- Complete directory structure in .claude/skills/{skill-name}/
- SKILL.md entry file ready for phase/action generation
- skill-config.json for template population

View File

@@ -8,10 +8,7 @@
- Autonomous mode: generate the orchestrator and action files
- Supports both **file context** and **memory context** strategies
## Input
- Depends on: `skill-config.json`, SKILL.md (Phase 1-2 outputs)
- Templates: `templates/sequential-phase.md`, `templates/autonomous-*.md`
## Context Strategy (P0 Enhancement)
@@ -55,66 +52,93 @@ const skillRoot = '.claude/skills/skill-generator';
```javascript
if (config.execution_mode === 'sequential') {
const phases = config.sequential_config.phases;
// ========== P0 enhancement: generate the declarative orchestrator ==========
const workflowOrchestrator = generateSequentialOrchestrator(config, phases);
Write(`${skillDir}/phases/_orchestrator.md`, workflowOrchestrator);
// ========== P0 enhancement: generate the workflow definition ==========
const workflowDef = generateWorkflowDefinition(config, phases);
Write(`${skillDir}/workflow.json`, JSON.stringify(workflowDef, null, 2));
// ========== P0 enhancement: generate Phase 0 (mandatory spec study) ==========
const phase0Content = generatePhase0Spec(config);
Write(`${skillDir}/phases/00-spec-study.md`, phase0Content);
// ========== Generate the user-defined phase files ==========
for (let i = 0; i < phases.length; i++) {
const phase = phases[i];
const prevPhase = i > 0 ? phases[i-1] : null;
const nextPhase = i < phases.length - 1 ? phases[i+1] : null;
const content = generateSequentialPhase({
phaseNumber: i + 1,
phaseId: phase.id,
phaseName: phase.name,
phaseDescription: phase.description || `Execute ${phase.name}`,
input: prevPhase ? prevPhase.output : "phase 0 output", // Phase 0 is the first input source
output: phase.output,
nextPhase: nextPhase ? nextPhase.id : null,
config: config,
contextStrategy: contextStrategy
});
Write(`${skillDir}/phases/${phase.id}.md`, content);
}
}
// ========== P0 enhancement: declarative workflow definition ==========
function generateWorkflowDefinition(config, phases) {
// ========== P0: add the mandatory Phase 0 ==========
const phase0 = {
id: '00-spec-study',
name: 'Specification Study',
order: 0,
input: null,
output: 'spec-study-complete.flag',
description: '⚠️ MANDATORY: Read all specification documents before execution',
parallel: false,
condition: null,
agent: {
type: 'universal-executor',
run_in_background: false
}
};
return {
skill_name: config.skill_name,
version: "1.0.0",
execution_mode: "sequential",
context_strategy: config.context_strategy || "file",
// ========== P0: Phase 0 runs first ==========
phases_to_run: ['00-spec-study', ...phases.map(p => p.id)],
// ========== P0: Phase 0 + user-defined phases ==========
phases: [
phase0,
...phases.map((p, i) => ({
id: p.id,
name: p.name,
order: i + 1,
input: i === 0 ? phase0.output : phases[i-1].output, // The first user phase depends on Phase 0
output: p.output,
parallel: p.parallel || false,
condition: p.condition || null,
// Agent configuration (supports LLM integration)
agent: p.agent || (config.llm_integration?.enabled ? {
type: "llm",
tool: config.llm_integration.default_tool,
mode: config.llm_integration.mode || "analysis",
fallback_chain: config.llm_integration.fallback_chain || [],
run_in_background: false
} : {
type: "universal-executor",
run_in_background: false
})
}))
],
// Termination conditions
termination: {
on_success: "all_phases_completed",
@@ -236,10 +260,30 @@ async function executePhase(phaseId, phaseConfig, workDir) {
## Phase Execution Plan
**Execution flow**:
\`\`\`
START
Phase 0: Specification Study
↓ Output: spec-study-complete.flag
Phase 1: ${phases[0]?.name || 'First Phase'}
↓ Output: ${phases[0]?.output || 'phase-1.json'}
${phases.slice(1).map((p, i) => `
Phase ${i+2}: ${p.name}
↓ Output: ${p.output}`).join('\n')}
COMPLETE
\`\`\`
**Phase list**:
| Order | Phase | Input | Output | Agent |
|-------|-------|-------|--------|-------|
| 0 | 00-spec-study | - | spec-study-complete.flag | universal-executor |
${phases.map((p, i) =>
`| ${i+1} | ${p.id} | ${i === 0 ? 'spec-study-complete.flag' : phases[i-1].output} | ${p.output} | ${p.agent?.type || 'universal-executor'} |`
).join('\n')}
## Error Recovery
@@ -754,6 +798,146 @@ ${actions.sort((a, b) => (b.priority || 0) - (a.priority || 0)).map(a =>
### Step 4: Helper Functions
```javascript
// ========== P0: Phase 0 generator ==========
function generatePhase0Spec(config) {
const skillRoot = '.claude/skills/skill-generator';
const specsToRead = [
'../_shared/SKILL-DESIGN-SPEC.md',
`${skillRoot}/templates/*.md`
];
return `# Phase 0: Specification Study
⚠️ **MANDATORY PREREQUISITE** - this phase must not be skipped
## Objective
Read all specification documents in full and internalize the skill design standards before generating any files.
## Why This Matters
**Without studying the specs (❌)**:
\`\`\`
Skip the specs
├─ ✗ Non-compliant output
├─ ✗ Disorganized structure
└─ ✗ Quality problems
\`\`\`
**After studying the specs (✅)**:
\`\`\`
Full study
├─ ✓ Standardized output
├─ ✓ High-quality code
└─ ✓ Easy maintenance
\`\`\`
## Required Reading
### P0 - Core Design Spec
\`\`\`javascript
// Universal design standards (MUST READ)
const designSpec = Read('.claude/skills/_shared/SKILL-DESIGN-SPEC.md');
// Key content checkpoints:
const checkpoints = {
structure: 'directory structure conventions',
naming: 'naming conventions',
quality: 'quality standards',
output: 'output format requirements'
};
\`\`\`
### P1 - Template Files (read before generating)
\`\`\`javascript
// Load the templates matching the execution mode
const templates = {
all: [
'templates/skill-md.md' // SKILL.md entry file template
],
sequential: [
'templates/sequential-phase.md'
],
autonomous: [
'templates/autonomous-orchestrator.md',
'templates/autonomous-action.md'
]
};
const mode = '${config.execution_mode}';
const requiredTemplates = [...templates.all, ...templates[mode]];
requiredTemplates.forEach(template => {
const content = Read(\`.claude/skills/skill-generator/\${template}\`);
// Understand template structure, variable slots, and generation rules
});
\`\`\`
## Execution
\`\`\`javascript
// ========== Load specifications ==========
const specs = [];
// 1. Design spec (P0)
specs.push({
file: '../_shared/SKILL-DESIGN-SPEC.md',
content: Read('.claude/skills/_shared/SKILL-DESIGN-SPEC.md'),
priority: 'P0'
});
// 2. Template files (P1)
const templateFiles = Glob('.claude/skills/skill-generator/templates/*.md');
templateFiles.forEach(file => {
specs.push({
file: file,
content: Read(file),
priority: 'P1'
});
});
// ========== Internalize specifications ==========
console.log('📖 Reading specifications...');
specs.forEach(spec => {
console.log(\` [\${spec.priority}] \${spec.file}\`);
// Understand the content (in-memory only, no file output)
});
// ========== Write the completion flag ==========
const result = {
status: 'completed',
specs_loaded: specs.length,
timestamp: new Date().toISOString()
};
Write(\`\${workDir}/spec-study-complete.flag\`, JSON.stringify(result, null, 2));
\`\`\`
## Output
- **Flag file**: \`spec-study-complete.flag\` (proof that reading is complete)
- **Side effect**: internalized spec knowledge; later phases follow the standards
## Success Criteria
✅ **Pass criteria**:
- [ ] SKILL-DESIGN-SPEC.md has been read
- [ ] Templates for the chosen execution mode have been read
- [ ] Directory structure conventions understood
- [ ] Naming conventions understood
- [ ] Quality standards understood
## Next Phase
→ [Phase 1: Requirements Discovery](01-requirements-discovery.md)
**Key point**: Only after the spec study is complete can Phase 1 correctly gather requirements and generate a standards-compliant configuration.
`;
}
// ========== Other helpers ==========
function toPascalCase(str) {
return str.split('-').map(s => s.charAt(0).toUpperCase() + s.slice(1)).join('');
}
@@ -782,21 +966,4 @@ function getPreconditionCheck(action) {
}
```
## Output
### Sequential Mode
- `phases/_orchestrator.md` (declarative orchestrator)
- `workflow.json` (workflow definition)
- `phases/01-{step}.md`, `02-{step}.md`, ...
### Autonomous Mode
- `phases/orchestrator.md` (enhanced orchestrator)
- `phases/state-schema.md`
- `specs/action-catalog.md` (declarative action catalog)
- `phases/actions/action-{name}.md` (multiple files)
## Next Phase
→ [Phase 4: Specs & Templates](04-specs-templates.md)

View File

@@ -1,26 +1,107 @@
# Phase 4: Specifications & Templates Generation
Generate domain requirements, quality standards, agent templates, and action catalogs.
## Objective
Generate comprehensive specifications and templates:
- Domain requirements document with validation function
- Quality standards with automated check system
- Agent base template with prompt structure
- Action catalog for autonomous mode (conditional)
## Input
**File Dependencies**:
- `skill-config.json` (from Phase 1)
- `.claude/skills/{skill-name}/` directory (from Phase 2)
- Generated phase/action files (from Phase 3)
**Required Information**:
- Skill name, display name, description
- Execution mode (determines if action-catalog.md is generated)
- Output format and location
- Phase/action definitions
## Output
**Generated Files**:
| File | Purpose | Generation Condition |
|------|---------|---------------------|
| `specs/{skill-name}-requirements.md` | Domain requirements with validation | Always |
| `specs/quality-standards.md` | Quality evaluation criteria | Always |
| `templates/agent-base.md` | Agent prompt template | Always |
| `specs/action-catalog.md` | Action dependency graph and selection priority | Autonomous/Hybrid mode only |
**File Structure**:
**Domain Requirements** (`specs/{skill-name}-requirements.md`):
```markdown
# {display_name} Requirements
- When to Use (phase/action reference table)
- Domain Requirements (functional, output, and quality requirements)
- Validation Function (JavaScript code)
- Error Handling (recovery strategies)
```
**Quality Standards** (`specs/quality-standards.md`):
```markdown
# Quality Standards
- Quality Dimensions (Completeness 25%, Consistency 25%, Accuracy 25%, Usability 25%)
- Quality Gates (Pass ≥80%, Review 60-79%, Fail <60%)
- Issue Classification (Errors, Warnings, Info)
- Automated Checks (runQualityChecks function)
```
**Agent Base** (`templates/agent-base.md`):
```markdown
# Agent Base Template
- Generic prompt structure (ROLE, PROJECT CONTEXT, TASK, CONSTRAINTS, OUTPUT_FORMAT, QUALITY_CHECKLIST)
- Variable reference (workDir, output_path)
- Return format (AgentReturn interface)
- Role definition reference (phase/action specific agents)
```
**Action Catalog** (`specs/action-catalog.md`, Autonomous/Hybrid only):
```markdown
# Action Catalog
- Available Actions (table with Purpose, Preconditions, Effects)
- Action Dependencies (Mermaid diagram)
- State Transitions (state machine table)
- Selection Priority (ordered action list)
```
## Decision Logic
```
Decision (execution_mode check):
├─ mode === 'sequential' → Generate 3 files only
│ └─ Files: requirements.md, quality-standards.md, agent-base.md
├─ mode === 'autonomous' → Generate 4 files
│ ├─ Files: requirements.md, quality-standards.md, agent-base.md
│ └─ Additional: action-catalog.md (with action dependencies)
└─ mode === 'hybrid' → Generate 4 files
├─ Files: requirements.md, quality-standards.md, agent-base.md
└─ Additional: action-catalog.md (with hybrid logic)
```
## Execution Protocol
```javascript
// Phase 4: Generate Specifications & Templates
// Reference: phases/04-specs-templates.md
// Load config and setup
const config = JSON.parse(Read(`${workDir}/skill-config.json`));
const skillDir = `.claude/skills/${config.skill_name}`;
// Ensure specs and templates directories exist (created in Phase 2)
// skillDir structure: phases/, specs/, templates/
// Step 1: Generate domain requirements
const domainRequirements = `# ${config.display_name} Requirements
${config.description}
@@ -29,8 +110,8 @@ ${config.description}
| Phase | Usage | Reference |
|-------|-------|-----------|
${config.execution_mode === 'sequential' ?
config.sequential_config.phases.map((p, i) =>
`| Phase ${i+1} | ${p.name} | ${p.id}.md |`
).join('\n') :
`| Orchestrator | Action selection | orchestrator.md |
@@ -67,7 +148,7 @@ function validate${toPascalCase(config.skill_name)}(output) {
{ name: "格式正确", pass: output.format === "${config.output.format}" },
{ name: "内容完整", pass: output.content?.length > 0 }
];
return {
passed: checks.filter(c => c.pass).length,
total: checks.length,
@@ -86,11 +167,8 @@ function validate${toPascalCase(config.skill_name)}(output) {
`;
Write(`${skillDir}/specs/${config.skill_name}-requirements.md`, domainRequirements);
// Step 2: Generate quality standards
const qualityStandards = `# Quality Standards
Quality evaluation criteria for ${config.display_name}.
@@ -176,7 +254,7 @@ function runQualityChecks(workDir) {
return {
score: results.overall,
gate: results.overall >= 80 ? 'pass' :
results.overall >= 60 ? 'review' : 'fail',
details: results
};
@@ -185,11 +263,8 @@ function runQualityChecks(workDir) {
`;
Write(`${skillDir}/specs/quality-standards.md`, qualityStandards);
// Step 3: Generate agent base template
const agentBase = `# Agent Base Template
Base agent template for ${config.display_name}.
@@ -246,20 +321,17 @@ interface AgentReturn {
## Role Definition Reference
${config.execution_mode === 'sequential' ?
config.sequential_config.phases.map((p, i) =>
`- **Phase ${i+1} Agent**: ${p.name} expert`
).join('\n') :
config.autonomous_config.actions.map(a =>
`- **${a.name} Agent**: ${a.description || a.name + ' executor'}`
).join('\n')}
`;
Write(`${skillDir}/templates/agent-base.md`, agentBase);
// Step 4: Conditional - Generate action catalog for autonomous/hybrid mode
if (config.execution_mode === 'autonomous' || config.execution_mode === 'hybrid') {
const actionCatalog = `# Action Catalog
@@ -269,7 +341,7 @@ Available actions catalog for ${config.display_name}.
| Action | Purpose | Preconditions | Effects |
|--------|---------|---------------|---------|
${config.autonomous_config.actions.map(a =>
`| [${a.id}](../phases/actions/${a.id}.md) | ${a.description || a.name} | ${a.preconditions?.join(', ') || '-'} | ${a.effects?.join(', ') || '-'} |`
).join('\n')}
@@ -289,7 +361,7 @@ ${config.autonomous_config.actions.map((a, i, arr) => {
| From State | Action | To State |
|------------|--------|----------|
| pending | action-init | running |
${config.autonomous_config.actions.slice(1).map(a =>
`| running | ${a.id} | running |`
).join('\n')}
| running | action-complete | completed |
@@ -299,30 +371,28 @@ ${config.autonomous_config.actions.slice(1).map(a =>
When the preconditions of several actions are all satisfied, select by the following priority:
${config.autonomous_config.actions.map((a, i) =>
`${i + 1}. \`${a.id}\` - ${a.name}`
).join('\n')}
`;
Write(`${skillDir}/specs/action-catalog.md`, actionCatalog);
}
// Helper function
function toPascalCase(str) {
return str.split('-').map(s => s.charAt(0).toUpperCase() + s.slice(1)).join('');
}
// Phase output summary
console.log('Phase 4 complete: Generated specs and templates');
```
## Output
- `specs/{skill-name}-requirements.md` - domain requirements
- `specs/quality-standards.md` - quality standards
- `specs/action-catalog.md` - action catalog (Autonomous mode)
- `templates/agent-base.md` - agent template
## Next Phase
→ [Phase 5: Validation](05-validation.md)
**Data Flow to Phase 5**:
- All generated files in `specs/` and `templates/`
- skill-config.json for validation reference
- Complete skill directory structure ready for final validation

View File

@@ -1,27 +1,119 @@
# Phase 5: Validation & Documentation
Verify generated skill completeness and generate user documentation.
## Objective
Comprehensive validation and documentation:
- Verify all required files exist
- Check file content quality and completeness
- Generate validation report with issues and recommendations
- Generate README.md usage documentation
- Output final status and next steps
## Input
**File Dependencies**:
- `skill-config.json` (from Phase 1)
- `.claude/skills/{skill-name}/` directory (from Phase 2)
- All generated phase/action files (from Phase 3)
- All generated specs/templates files (from Phase 4)
**Required Information**:
- Skill name, display name, description
- Execution mode
- Trigger words
- Output configuration
- Complete skill directory structure
## Output
**Generated Files**:
| File | Purpose | Content |
|------|---------|---------|
| `validation-report.json` (workDir) | Validation report with detailed checks | File completeness, content quality, issues, recommendations |
| `README.md` (skillDir) | User documentation | Quick Start, Usage, Output, Directory Structure, Customization |
**Validation Report Structure** (`validation-report.json`):
```json
{
"skill_name": "...",
"execution_mode": "sequential|autonomous",
"generated_at": "ISO timestamp",
"file_checks": {
"total": N,
"existing": N,
"with_content": N,
"with_todos": N,
"details": [...]
},
"content_checks": {
"files_checked": N,
"all_passed": true|false,
"details": [...]
},
"summary": {
"status": "PASS|REVIEW|FAIL",
"issues": [...],
"recommendations": [...]
}
}
```
**README Structure** (`README.md`):
```markdown
# {display_name}
- Quick Start (Triggers, Execution Mode)
- Usage (Examples)
- Output (Format, Location, Filename)
- Directory Structure (Tree view)
- Customization (How to modify)
- Related Documents (Links)
```
**Validation Status Gates**:
| Status | Condition | Meaning |
|--------|-----------|---------|
| PASS | All files exist + All content checks passed | Ready for use |
| REVIEW | All files exist + Some content checks failed | Needs refinement |
| FAIL | Missing files | Incomplete generation |
## Decision Logic
```
Decision (Validation Flow):
├─ File Completeness Check
│ ├─ All files exist → Continue to content checks
│ └─ Missing files → Status = FAIL, collect missing file errors
├─ Content Quality Check
│ ├─ Sequential mode → Check phase files for structure
│ ├─ Autonomous mode → Check orchestrator + action files
│ └─ Common → Check SKILL.md, specs/, templates/
├─ Status Calculation
│ ├─ All files exist + All checks pass → Status = PASS
│ ├─ All files exist + Some checks fail → Status = REVIEW
│ └─ Missing files → Status = FAIL
└─ Generate Report & README
├─ validation-report.json (with issues and recommendations)
└─ README.md (with usage documentation)
```
## Execution Protocol
```javascript
// Phase 5: Validation & Documentation
// Reference: phases/05-validation.md
// Load config and setup
const config = JSON.parse(Read(`${workDir}/skill-config.json`));
const skillDir = `.claude/skills/${config.skill_name}`;
// Step 1: File completeness check
const requiredFiles = {
common: [
'SKILL.md',
@@ -64,14 +156,11 @@ const fileCheckResults = filesToCheck.map(file => {
};
}
});
// Step 2: Content quality check
const contentChecks = [];
// Check SKILL.md structure
const skillMd = Read(`${skillDir}/SKILL.md`);
contentChecks.push({
file: 'SKILL.md',
@@ -83,11 +172,11 @@ contentChecks.push({
]
});
// Check phase files
const phaseFiles = Glob(`${skillDir}/phases/*.md`);
for (const phaseFile of phaseFiles) {
if (phaseFile.includes('/actions/')) continue; // Check separately
const content = Read(phaseFile);
contentChecks.push({
file: phaseFile.replace(skillDir + '/', ''),
@@ -100,7 +189,7 @@ for (const phaseFile of phaseFiles) {
});
}
// Check specs files
const specFiles = Glob(`${skillDir}/specs/*.md`);
for (const specFile of specFiles) {
const content = Read(specFile);
@@ -113,16 +202,13 @@ for (const specFile of specFiles) {
]
});
}
// Step 3: Generate validation report
const report = {
skill_name: config.skill_name,
execution_mode: config.execution_mode,
generated_at: new Date().toISOString(),
file_checks: {
total: fileCheckResults.length,
existing: fileCheckResults.filter(f => f.exists).length,
@@ -130,13 +216,13 @@ const report = {
with_todos: fileCheckResults.filter(f => f.hasTodo).length,
details: fileCheckResults
},
content_checks: {
files_checked: contentChecks.length,
all_passed: contentChecks.every(c => c.checks.every(ch => ch.pass)),
details: contentChecks
},
summary: {
status: calculateOverallStatus(fileCheckResults, contentChecks),
issues: collectIssues(fileCheckResults, contentChecks),
@@ -146,10 +232,11 @@ const report = {
Write(`${workDir}/validation-report.json`, JSON.stringify(report, null, 2));
// Helper functions
function calculateOverallStatus(fileResults, contentResults) {
const allFilesExist = fileResults.every(f => f.exists);
const allContentPassed = contentResults.every(c => c.checks.every(ch => ch.pass));
if (allFilesExist && allContentPassed) return 'PASS';
if (allFilesExist) return 'REVIEW';
return 'FAIL';
@@ -157,44 +244,41 @@ function calculateOverallStatus(fileResults, contentResults) {
function collectIssues(fileResults, contentResults) {
const issues = [];
fileResults.filter(f => !f.exists).forEach(f => {
issues.push({ type: 'ERROR', message: `Missing file: ${f.file}` });
});
fileResults.filter(f => f.hasTodo).forEach(f => {
issues.push({ type: 'WARNING', message: `Contains TODO: ${f.file}` });
});
contentResults.forEach(c => {
c.checks.filter(ch => !ch.pass).forEach(ch => {
issues.push({ type: 'WARNING', message: `${c.file}: missing ${ch.name}` });
});
});
return issues;
}
function generateRecommendations(fileResults, contentResults) {
const recommendations = [];
if (fileResults.some(f => f.hasTodo)) {
recommendations.push('Replace all TODO placeholders with real content');
}
contentResults.forEach(c => {
if (c.checks.some(ch => !ch.pass)) {
recommendations.push(`Improve the structure of ${c.file}`);
}
});
return recommendations;
}
// Step 4: Generate README.md
const readme = `# ${config.display_name}
${config.description}
@@ -210,10 +294,10 @@ ${config.triggers.map(t => `- "${t}"`).join('\n')}
**${config.execution_mode === 'sequential' ? 'Sequential' : 'Autonomous'}**
${config.execution_mode === 'sequential' ?
`Phases run in a fixed order:\n${config.sequential_config.phases.map((p, i) =>
`${i + 1}. ${p.name}`
).join('\n')}` :
`Actions are selected dynamically by the orchestrator:\n${config.autonomous_config.actions.map(a =>
`- ${a.name}: ${a.description || ''}`
).join('\n')}`}
@@ -283,24 +367,21 @@ ${config.execution_mode === 'sequential' ?
`;
Write(`${skillDir}/README.md`, readme);
```
### Step 5: Output Final Result
```javascript
// Step 5: Output final result
const finalResult = {
skill_name: config.skill_name,
skill_path: skillDir,
execution_mode: config.execution_mode,
generated_files: [
'SKILL.md',
'README.md',
...filesToCheck
],
validation: report.summary,
next_steps: [
'1. Review the generated file structure',
'2. Replace TODO placeholders',
@@ -319,16 +400,18 @@ console.log('Next steps:');
finalResult.next_steps.forEach(s => console.log(s));
```
## Workflow Completion
**Final Status**: Skill generation pipeline complete
**Generated Artifacts**:
- Complete skill directory structure in `.claude/skills/{skill-name}/`
- Validation report in `{workDir}/validation-report.json`
- User documentation in `{skillDir}/README.md`
**Next Steps**:
1. Review validation report for any issues or recommendations
2. Replace TODO placeholders with actual implementation
3. Test skill execution with trigger words
4. Customize phase logic based on specific requirements
5. Update triggers and descriptions as needed


@@ -2,6 +2,20 @@
Template for autonomous-mode action files.
## Purpose
Generates Action files for the Autonomous execution mode, defining independently executable action units.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 3 (Phase Generation) | Generated when `config.execution_mode === 'autonomous'` |
| Generation Trigger | One action file per entry in `config.autonomous_config.actions` |
| Output Location | `.claude/skills/{skill-name}/phases/actions/{action-id}.md` |
---
## Template Structure
```markdown
@@ -70,10 +84,42 @@ return {
| `{{error_handling_table}}` | Error-handling table |
| `{{next_actions_hints}}` | Hints for follow-up actions |
## Action Lifecycle
```
State-driven execution flow:

state.status === 'pending'
        ↓
┌────── Init ──────┐   ← runs once: prepare the execution environment
│ create work dir  │
│ initialize context│
│ status → running │
└────────┬─────────┘
         ↓
┌─── CRUD Loop ────┐   ← N iterations: core business logic
│ orchestrator     │     List / Create / Edit / Delete
│ selects action   │     shared pattern: collect input →
│ execute(state)   │     mutate context.items → return updates
│ update state     │
└────────┬─────────┘
         ↓
┌─── Complete ─────┐   ← runs once: persist results
│ serialize output │
│ status → completed│
└──────────────────┘

Shared state structure:
state.status            → 'pending' | 'running' | 'completed'
state.context.items     → business data array
state.completed_actions → IDs of executed actions
```
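A minimal driver for this lifecycle might look like the sketch below; `selectAction` and `loadAction` are hypothetical stand-ins for the generated orchestrator's real selection and loading logic:
```javascript
// Sketch of the state-driven loop implied above. selectAction/loadAction are
// placeholders; the generated orchestrator supplies the real implementations.
async function runLifecycle(state) {
  while (state.status !== 'completed') {
    // Init runs once while status is 'pending'; afterwards the orchestrator picks CRUD actions
    const actionId = state.status === 'pending' ? 'action-init' : selectAction(state);
    const action = loadAction(actionId); // List / Create / Edit / Delete / Complete
    const { stateUpdates } = await action.execute(state); // shared pattern: return state updates
    state = {
      ...state,
      ...stateUpdates,
      completed_actions: [...(state.completed_actions || []), actionId]
    };
  }
  return state; // the Complete action has serialized the output and set the status
}
```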
## Action Type Templates
### 1. Initialization Action (Init)
**Trigger**: `state.status === 'pending'`; runs exactly once
```markdown
# Action: Initialize
@@ -91,101 +137,29 @@ return {
\`\`\`javascript
async function execute(state) {
  // 1. Create the work directory
  Bash(\`mkdir -p "\${workDir}"\`);
  // 2. Return state updates
  return {
    stateUpdates: {
      status: 'running',
      started_at: new Date().toISOString(),
      context: { items: [], metadata: {} }
    }
  };
}
\`\`\`
## Next Actions
- Success: enter the main processing loop (the orchestrator selects the first CRUD action)
- Failure: action-abort
```
### 2. CRUD Actions (List / Create / Edit / Delete)
**Trigger**: `state.status === 'running'`; loops until the user exits
> Create is shown below as the example of the shared pattern. List / Edit / Delete follow the same structure; only the execution logic and the updated state fields differ (see the comparison table after this example).
```markdown
# Action: Create Item
@@ -194,7 +168,7 @@ return {
## Purpose
Collect user input and append a new record to context.items.
## Preconditions
@@ -204,26 +178,24 @@ return {
\`\`\`javascript
async function execute(state) {
// 1. Collect input
const input = await AskUserQuestion({
questions: [{
question: "请输入项目名称:",
header: "名称",
multiSelect: false,
options: [
{ label: "手动输入", description: "输入自定义名称" }
]
options: [{ label: "手动输入", description: "输入自定义名称" }]
}]
});
// 2. Mutate context.items (the core logic is what differs per action type)
const newItem = {
id: Date.now().toString(),
name: input["Name"],
status: 'pending',
created_at: new Date().toISOString()
};
// 3. Return state updates
return {
stateUpdates: {
@@ -231,177 +203,30 @@ async function execute(state) {
...state.context,
items: [...(state.context.items || []), newItem]
},
last_action: 'create'
}
};
}
\`\`\`
## Next Actions
- Continue: the orchestrator selects the next action based on state
- User exit: action-complete
```
**How the other CRUD actions differ:**
| Action | Core Logic | Extra Precondition | Key State Fields |
|--------|-----------|--------------------|------------------|
| List | `items.forEach(→ console.log)` | none | `current_view: 'list'` |
| Create | `items.push(newItem)` | none | `last_created_id` |
| Edit | `items.map(→ replace matching item)` | `selected_item_id !== null` | `updated_at` |
| Delete | `items.filter(→ drop matching item)` | `selected_item_id !== null` | confirmation dialog → execute |
### 3. Completion Action (Complete)
**Trigger**: the user explicitly exits or a termination condition is met; runs exactly once
```markdown
# Action: Complete
@@ -410,7 +235,7 @@ return {
## Purpose
Serialize the final state and end Skill execution.
## Preconditions
@@ -420,41 +245,26 @@ return {
\`\`\`javascript
async function execute(state) {
// 1. Persist final data
Write(\`\${workDir}/final-output.json\`, JSON.stringify(state.context, null, 2));
// 2. Build a summary
const summary = {
total_items: state.context.items?.length || 0,
duration: Date.now() - new Date(state.started_at).getTime(),
actions_executed: state.completed_actions.length
};
console.log(\`Task complete: \${summary.total_items} items, \${summary.actions_executed} actions\`);
return {
stateUpdates: {
status: 'completed',
completed_at: new Date().toISOString(),
summary
}
};
}
\`\`\`
## Next Actions
- None (terminal state)


@@ -2,6 +2,20 @@
Template for the autonomous-mode orchestrator.
## Purpose
Generates the Orchestrator file for the Autonomous execution mode, responsible for state-driven action selection and the execution loop.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 3 (Phase Generation) | Generated when `config.execution_mode === 'autonomous'` |
| Generation Trigger | Creates the orchestrator logic that manages action selection and state updates |
| Output Location | `.claude/skills/{skill-name}/phases/orchestrator.md` |
---
## ⚠️ Important
> **Phase 0 is a mandatory prerequisite**: the Orchestrator must complete the Phase 0 specification study before starting its execution loop.


@@ -2,6 +2,18 @@
Code-analysis action template for integrating code exploration and analysis capabilities into a Skill.
## Purpose
Generates code-analysis actions for a Skill, integrating MCP tooling (ACE) and agents for semantic search and deep analysis.
## Usage Context
| Phase | Usage |
|-------|-------|
| Optional | Used when the Skill needs code exploration and analysis |
| Generation Trigger | The user opts to add the code-analysis action type |
| Agent Types | Explore, cli-explore-agent, universal-executor |
---
## Configuration Structure


@@ -2,6 +2,18 @@
LLM action template for integrating LLM invocation into a Skill.
## Purpose
Generates LLM actions for a Skill, calling Gemini/Qwen/Codex through the unified CCW CLI interface for analysis or generation.
## Usage Context
| Phase | Usage |
|-------|-------|
| Optional | Used when the Skill needs LLM capabilities |
| Generation Trigger | The user opts to add the llm action type |
| Tools | gemini, qwen, codex (fallback chain supported) |
---
## Configuration Structure


@@ -1,277 +0,0 @@
# Bash Script Template
Bash script template for generating deterministic scripts within skills.
## Template Code
```bash
#!/bin/bash
# {{script_description}}
set -euo pipefail
# ============================================================
# Argument parsing
# ============================================================
INPUT_PATH=""
OUTPUT_DIR=""  # specified by the caller; no default
show_help() {
  echo "Usage: $0 --input-path <path> --output-dir <dir>"
  echo ""
  echo "Options:"
  echo "  --input-path   Input file path (required)"
  echo "  --output-dir   Output directory (required, specified by the caller)"
  echo "  --help         Show this help"
}
while [[ "$#" -gt 0 ]]; do
  case $1 in
    --input-path)
      INPUT_PATH="$2"
      shift
      ;;
    --output-dir)
      OUTPUT_DIR="$2"
      shift
      ;;
    --help)
      show_help
      exit 0
      ;;
    *)
      echo "Error: unknown argument $1" >&2
      show_help >&2
      exit 1
      ;;
  esac
  shift
done
# ============================================================
# Argument validation
# ============================================================
if [[ -z "$INPUT_PATH" ]]; then
  echo "Error: --input-path is required" >&2
  exit 1
fi
if [[ -z "$OUTPUT_DIR" ]]; then
  echo "Error: --output-dir is required" >&2
  exit 1
fi
if [[ ! -f "$INPUT_PATH" ]]; then
  echo "Error: input file not found: $INPUT_PATH" >&2
  exit 1
fi
# Check that jq is available (used for JSON output)
if ! command -v jq &> /dev/null; then
  echo "Error: jq is required" >&2
  exit 1
fi
mkdir -p "$OUTPUT_DIR"
# ============================================================
# Core logic
# ============================================================
OUTPUT_FILE="$OUTPUT_DIR/result.txt"
ITEMS_COUNT=0
# TODO: implement processing logic
# Example: process the input file
while IFS= read -r line; do
  echo "$line" >> "$OUTPUT_FILE"
  ITEMS_COUNT=$((ITEMS_COUNT + 1))  # note: ((ITEMS_COUNT++)) would trip set -e when the count is 0
done < "$INPUT_PATH"
# ============================================================
# Emit JSON result (built with jq to avoid escaping issues)
# ============================================================
jq -n \
  --arg output_file "$OUTPUT_FILE" \
  --argjson items_processed "$ITEMS_COUNT" \
  '{output_file: $output_file, items_processed: $items_processed, status: "success"}'
```
## Variables
| Variable | Description |
|----------|-------------|
| `{{script_description}}` | Description of what the script does |
## Usage Conventions
### Script Header
```bash
#!/bin/bash
set -euo pipefail  # strict mode: exit on error, error on unset variables, propagate pipe failures
```
### Argument-Parsing Pattern
```bash
while [[ "$#" -gt 0 ]]; do
case $1 in
--param-name)
PARAM_VAR="$2"
shift
;;
--flag)
FLAG_VAR=true
;;
*)
echo "Unknown: $1" >&2
exit 1
;;
esac
shift
done
```
### Output Format
- Print a single-line JSON object as the last line
- **Strongly prefer `jq`**: it handles escaping and typing automatically
```bash
# Preferred: build with jq (safe and reliable)
jq -n \
  --arg file "$FILE" \
  --argjson count "$COUNT" \
  '{output_file: $file, items_processed: $count}'
# Fallback: manual concatenation for simple cases (mind special-character escaping)
echo "{\"file\": \"$FILE\", \"count\": $COUNT}"
```
**jq argument types**
- `--arg name value`: string values
- `--argjson name value`: number/boolean/null values
### Error Handling
```bash
# Validation errors
if [[ -z "$PARAM" ]]; then
  echo "Error: parameter must not be empty" >&2
  exit 1
fi
# Missing commands
if ! command -v jq &> /dev/null; then
  echo "Error: jq is required" >&2
  exit 1
fi
# Runtime errors
if ! some_command; then
  echo "Error: command failed" >&2
  exit 1
fi
```
## Common Patterns
### File Iteration
```bash
for file in "$INPUT_DIR"/*.json; do
  [[ -f "$file" ]] || continue
  echo "Processing: $file"
  # processing logic...
done
```
### Temporary Files
```bash
TEMP_FILE=$(mktemp)
trap 'rm -f "$TEMP_FILE"' EXIT
echo "data" > "$TEMP_FILE"
```
### Calling Other Tools
```bash
# Ensure a tool exists
require_command() {
  if ! command -v "$1" &> /dev/null; then
    echo "Error: $1 is required" >&2
    exit 1
  fi
}
require_command jq
require_command curl
```
### JSON Processing (with jq)
```bash
# Read a JSON field
VALUE=$(jq -r '.field' "$INPUT_PATH")
# Modify JSON
jq '.field = "new_value"' "$INPUT_PATH" > "$OUTPUT_FILE"
# Merge JSON files
jq -s 'add' file1.json file2.json > merged.json
```
## Generator Function
```javascript
function generateBashScript(scriptConfig) {
  return `#!/bin/bash
# ${scriptConfig.description}
set -euo pipefail
# Parameter definitions
${scriptConfig.inputs.map(i =>
  `${i.name.toUpperCase().replace(/-/g, '_')}="${i.default || ''}"`
).join('\n')}
# Argument parsing
while [[ "$#" -gt 0 ]]; do
  case $1 in
${scriptConfig.inputs.map(i =>
  `    --${i.name})
      ${i.name.toUpperCase().replace(/-/g, '_')}="$2"
      shift
      ;;`
).join('\n')}
    *)
      echo "Unknown argument: $1" >&2
      exit 1
      ;;
  esac
  shift
done
# Argument validation
${scriptConfig.inputs.filter(i => i.required).map(i =>
  `if [[ -z "$${i.name.toUpperCase().replace(/-/g, '_')}" ]]; then
  echo "Error: --${i.name} is required" >&2
  exit 1
fi`
).join('\n\n')}
# TODO: implement processing logic
# Emit result
echo "{${scriptConfig.outputs.map(o =>
  `\\"${o.name}\\": \\"\\$${o.name.toUpperCase().replace(/-/g, '_')}\\"`
).join(', ')}}"
`;
}
```


@@ -1,198 +0,0 @@
# Python Script Template
Python script template for generating deterministic scripts within skills.
## Template Code
```python
#!/usr/bin/env python3
"""
{{script_description}}
"""
import argparse
import json
import sys
from pathlib import Path

def main():
    # 1. Define arguments
    parser = argparse.ArgumentParser(description='{{script_description}}')
    parser.add_argument('--input-path', type=str, required=True,
                        help='Input file path')
    parser.add_argument('--output-dir', type=str, required=True,
                        help='Output directory (specified by the caller)')
    # Add more arguments...
    args = parser.parse_args()
    # 2. Validate input
    input_path = Path(args.input_path)
    if not input_path.exists():
        print(f"Error: input file not found: {input_path}", file=sys.stderr)
        sys.exit(1)
    output_dir = Path(args.output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)
    # 3. Run the core logic
    try:
        result = process(input_path, output_dir)
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        sys.exit(1)
    # 4. Emit the JSON result
    print(json.dumps(result))

def process(input_path: Path, output_dir: Path) -> dict:
    """
    Core processing logic

    Args:
        input_path: input file path
        output_dir: output directory

    Returns:
        dict: dictionary with the output results
    """
    # TODO: implement processing logic
    output_file = output_dir / 'result.json'
    # Example: read and process data
    with open(input_path, 'r', encoding='utf-8') as f:
        data = json.load(f)
    # Process data...
    processed_count = len(data) if isinstance(data, list) else 1
    # Write the output
    with open(output_file, 'w', encoding='utf-8') as f:
        json.dump(data, f, indent=2, ensure_ascii=False)
    return {
        'output_file': str(output_file),
        'items_processed': processed_count,
        'status': 'success'
    }

if __name__ == '__main__':
    main()
```
## Variables
| Variable | Description |
|----------|-------------|
| `{{script_description}}` | Description of what the script does |
## Usage Conventions
### Input Arguments
- Define arguments with `argparse`
- Use kebab-case names: `--input-path`
- Mark required arguments with `required=True`
- Give optional arguments a `default`
### Output Format
- Print a single-line JSON object as the last line
- Include all output file paths and key data
- Send error messages to stderr
### Error Handling
```python
# Validation error - exit immediately
if not valid:
    print("error message", file=sys.stderr)
    sys.exit(1)

# Runtime error - catch and exit
try:
    result = process()
except Exception as e:
    print(f"Error: {e}", file=sys.stderr)
    sys.exit(1)
```
## Common Patterns
### File Processing
```python
def process_files(input_dir: Path, pattern: str = '*.json') -> list:
    results = []
    for file in input_dir.glob(pattern):
        with open(file, 'r') as f:
            data = json.load(f)
        results.append({'file': str(file), 'data': data})
    return results
```
### Data Transformation
```python
from datetime import datetime

def transform_data(data: dict) -> dict:
    return {
        'id': data.get('id'),
        'name': data.get('name', '').strip(),
        'timestamp': datetime.now().isoformat()
    }
```
### Calling External Commands
```python
import subprocess

def run_command(cmd: list) -> str:
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(result.stderr)
    return result.stdout
```
## Generator Function
```javascript
function generatePythonScript(scriptConfig) {
  return `#!/usr/bin/env python3
"""
${scriptConfig.description}
"""
import argparse
import json
import sys
from pathlib import Path

def main():
    parser = argparse.ArgumentParser(description='${scriptConfig.description}')
${scriptConfig.inputs.map(i =>
  `    parser.add_argument('--${i.name}', type=${i.type || 'str'}, ${i.required ? 'required=True' : `default='${i.default}'`},
                        help='${i.description}')`
).join('\n')}
    args = parser.parse_args()
    # TODO: implement processing logic
    result = {
${scriptConfig.outputs.map(o =>
  `        '${o.name}': None  # ${o.description}`
).join(',\n')}
    }
    print(json.dumps(result))

if __name__ == '__main__':
    main()
`;
}
```


@@ -0,0 +1,368 @@
# Script Template
A unified script template covering both the Bash and Python runtimes.
## Usage Context
| Phase | Usage |
|-------|-------|
| Optional | Used when a Phase/Action declares `## Scripts` |
| Execution | Invoked via `ExecuteScript('script-id', params)` |
| Output Location | `.claude/skills/{skill-name}/scripts/{script-id}.{ext}` |
---
## Invocation Interface
All scripts share the same calling convention:
```
Caller
  ↓ ExecuteScript('script-id', { key: value })
Script entry point
  ├─ argument parsing   (--key value)
  ├─ input validation   (required args present, files exist)
  ├─ core processing    (read data → transform → write)
  └─ result output      (last line: single-line JSON → stdout)
       ├─ success: {"status":"success", "output_file":"...", ...}
       └─ failure: error message on stderr, exit 1
```
### Return Format
```typescript
interface ScriptResult {
success: boolean; // exit code === 0
stdout: string; // standard output
stderr: string; // standard error
outputs: object; // JSON parsed from the last stdout line
}
```
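The runner behind `ExecuteScript` is not part of this template; a minimal conforming wrapper could look like the sketch below (Node's `child_process` and direct-path invocation are assumptions — the real runner also resolves the script ID to a file first):
```javascript
// Hypothetical ExecuteScript sketch: run the script, then parse the last
// stdout line as JSON, per the calling convention above.
const { execFileSync } = require('child_process');

function ExecuteScript(scriptPath, params) {
  const args = Object.entries(params).flatMap(([key, value]) => [`--${key}`, String(value)]);
  try {
    const stdout = execFileSync(scriptPath, args, { encoding: 'utf8' });
    const lines = stdout.trim().split('\n');
    return { success: true, stdout, stderr: '', outputs: JSON.parse(lines[lines.length - 1]) };
  } catch (error) {
    return { success: false, stdout: error.stdout || '', stderr: error.stderr || String(error), outputs: null };
  }
}
```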
### Argument Conventions
| Argument | Required | Description |
|----------|----------|-------------|
| `--input-path` | ✓ | Input file path |
| `--output-dir` | ✓ | Output directory (specified by the caller) |
| others | as needed | Script-specific arguments |
---
## Bash Implementation
```bash
#!/bin/bash
# {{script_description}}
set -euo pipefail
# ============================================================
# Argument parsing
# ============================================================
INPUT_PATH=""
OUTPUT_DIR=""
while [[ "$#" -gt 0 ]]; do
  case $1 in
    --input-path) INPUT_PATH="$2"; shift ;;
    --output-dir) OUTPUT_DIR="$2"; shift ;;
    --help)
      echo "Usage: $0 --input-path <path> --output-dir <dir>"
      exit 0
      ;;
    *)
      echo "Error: unknown argument $1" >&2
      exit 1
      ;;
  esac
  shift
done
# ============================================================
# Argument validation
# ============================================================
[[ -z "$INPUT_PATH" ]] && { echo "Error: --input-path is required" >&2; exit 1; }
[[ -z "$OUTPUT_DIR" ]] && { echo "Error: --output-dir is required" >&2; exit 1; }
[[ ! -f "$INPUT_PATH" ]] && { echo "Error: input file not found: $INPUT_PATH" >&2; exit 1; }
command -v jq &> /dev/null || { echo "Error: jq is required" >&2; exit 1; }
mkdir -p "$OUTPUT_DIR"
# ============================================================
# Core logic
# ============================================================
OUTPUT_FILE="$OUTPUT_DIR/result.txt"
ITEMS_COUNT=0
# TODO: implement processing logic
while IFS= read -r line; do
  echo "$line" >> "$OUTPUT_FILE"
  ITEMS_COUNT=$((ITEMS_COUNT + 1))  # avoids the set -e pitfall of ((ITEMS_COUNT++)) at 0
done < "$INPUT_PATH"
# ============================================================
# Emit JSON result (built with jq to avoid escaping issues)
# ============================================================
jq -n \
  --arg output_file "$OUTPUT_FILE" \
  --argjson items_processed "$ITEMS_COUNT" \
  '{output_file: $output_file, items_processed: $items_processed, status: "success"}'
```
### Common Bash Patterns
```bash
# File iteration
for file in "$INPUT_DIR"/*.json; do
  [[ -f "$file" ]] || continue
  # processing logic...
done

# Temporary file (auto-cleanup)
TEMP_FILE=$(mktemp)
trap 'rm -f "$TEMP_FILE"' EXIT

# Tool dependency check
require_command() {
  command -v "$1" &> /dev/null || { echo "Error: $1 is required" >&2; exit 1; }
}
require_command jq

# jq handling
VALUE=$(jq -r '.field' "$INPUT_PATH")            # read a field
jq '.field = "new"' input.json > output.json     # modify a field
jq -s 'add' file1.json file2.json > merged.json  # merge files
```
---
## Python Implementation
```python
#!/usr/bin/env python3
"""
{{script_description}}
"""
import argparse
import json
import sys
from pathlib import Path

def main():
    parser = argparse.ArgumentParser(description='{{script_description}}')
    parser.add_argument('--input-path', type=str, required=True, help='Input file path')
    parser.add_argument('--output-dir', type=str, required=True, help='Output directory')
    args = parser.parse_args()
    # Validate input
    input_path = Path(args.input_path)
    if not input_path.exists():
        print(f"Error: input file not found: {input_path}", file=sys.stderr)
        sys.exit(1)
    output_dir = Path(args.output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)
    # Run processing
    try:
        result = process(input_path, output_dir)
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        sys.exit(1)
    # Emit the JSON result
    print(json.dumps(result))

def process(input_path: Path, output_dir: Path) -> dict:
    """Core processing logic"""
    # TODO: implement processing logic
    output_file = output_dir / 'result.json'
    with open(input_path, 'r', encoding='utf-8') as f:
        data = json.load(f)
    processed_count = len(data) if isinstance(data, list) else 1
    with open(output_file, 'w', encoding='utf-8') as f:
        json.dump(data, f, indent=2, ensure_ascii=False)
    return {
        'output_file': str(output_file),
        'items_processed': processed_count,
        'status': 'success'
    }

if __name__ == '__main__':
    main()
```
### Common Python Patterns
```python
from datetime import datetime
import subprocess

# File iteration
def process_files(input_dir: Path, pattern: str = '*.json') -> list:
    return [
        {'file': str(f), 'data': json.load(f.open())}
        for f in input_dir.glob(pattern)
    ]

# Data transformation
def transform(data: dict) -> dict:
    return {
        'id': data.get('id'),
        'name': data.get('name', '').strip(),
        'timestamp': datetime.now().isoformat()
    }

# External command invocation
def run_command(cmd: list) -> str:
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(result.stderr)
    return result.stdout
```
---
## Runtime Selection Guide
```
Task characteristics
├─ file handling / system commands / pipelines
│   └─ choose Bash (.sh)
├─ JSON processing / complex transforms / data analysis
│   └─ choose Python (.py)
└─ simple reads/writes / format conversion
    └─ either works (Bash is lighter)
```
---
## Generator Functions
```javascript
function generateScript(scriptConfig) {
const runtime = scriptConfig.runtime || 'bash'; // 'bash' | 'python'
const ext = runtime === 'python' ? '.py' : '.sh';
if (runtime === 'python') {
return generatePythonScript(scriptConfig);
}
return generateBashScript(scriptConfig);
}
function generateBashScript(scriptConfig) {
const { description, inputs = [], outputs = [] } = scriptConfig;
const paramDefs = inputs.map(i =>
`${i.name.toUpperCase().replace(/-/g, '_')}="${i.default || ''}"`
).join('\n');
const paramParse = inputs.map(i =>
` --${i.name}) ${i.name.toUpperCase().replace(/-/g, '_')}="$2"; shift ;;`
).join('\n');
const paramValidation = inputs.filter(i => i.required).map(i => {
const VAR = i.name.toUpperCase().replace(/-/g, '_');
return `[[ -z "$${VAR}" ]] && { echo "Error: --${i.name} is required" >&2; exit 1; }`;
}).join('\n');
return `#!/bin/bash
# ${description}
set -euo pipefail
${paramDefs}
while [[ "$#" -gt 0 ]]; do
case $1 in
${paramParse}
*) echo "Unknown argument: $1" >&2; exit 1 ;;
esac
shift
done
${paramValidation}
# TODO: implement processing logic
# Emit result (built with jq)
jq -n ${outputs.map(o =>
`--arg ${o.name} "$${o.name.toUpperCase().replace(/-/g, '_')}"`
).join(' \\\n ')} \
'{${outputs.map(o => `${o.name}: $${o.name}`).join(', ')}}'
`;
}
function generatePythonScript(scriptConfig) {
const { description, inputs = [], outputs = [] } = scriptConfig;
const argDefs = inputs.map(i =>
` parser.add_argument('--${i.name}', type=${i.type || 'str'}, ${
i.required ? 'required=True' : `default='${i.default || ''}'`
}, help='${i.description || i.name}')`
).join('\n');
const resultFields = outputs.map(o =>
` '${o.name}': None # ${o.description || o.name}`
).join(',\n');
return `#!/usr/bin/env python3
"""
${description}
"""
import argparse
import json
import sys
from pathlib import Path
def main():
parser = argparse.ArgumentParser(description='${description}')
${argDefs}
args = parser.parse_args()
# TODO: implement processing logic
result = {
${resultFields}
}
print(json.dumps(result))
if __name__ == '__main__':
main()
`;
}
```
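For example, a hypothetical `scriptConfig` drives the generators like this (all config values are invented for illustration):
```javascript
// Invented example config; the field names follow the generators above.
const scriptConfig = {
  runtime: 'python',
  description: 'Count records in a JSON file',
  inputs: [{ name: 'input-path', required: true, description: 'Input file path' }],
  outputs: [{ name: 'items_processed', description: 'Number of records' }]
};
const source = generateScript(scriptConfig); // dispatches to generatePythonScript
Write('scripts/count-records.py', source);   // the filename (sans extension) becomes the script ID
```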
---
## Directory Convention
```
scripts/
├── process-data.py # id: process-data, runtime: python
├── validate.sh # id: validate, runtime: bash
└── transform.js # id: transform, runtime: node
```
- **The name is the ID**: filename without extension = script ID
- **The extension is the runtime**: `.py` → python, `.sh` → bash, `.js` → node
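Under this convention, resolving a script ID to a runnable file is a pure filename lookup; a sketch (`fileExists` is a stand-in for an actual filesystem check):
```javascript
// Sketch: map a script ID to its file and runtime via the extension table above.
const RUNTIMES = { '.py': 'python', '.sh': 'bash', '.js': 'node' };

function resolveScript(scriptsDir, scriptId) {
  for (const [ext, runtime] of Object.entries(RUNTIMES)) {
    const path = `${scriptsDir}/${scriptId}${ext}`;
    if (fileExists(path)) return { path, runtime }; // fileExists: hypothetical fs check
  }
  throw new Error(`Unknown script id: ${scriptId}`);
}
```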


@@ -2,6 +2,20 @@
Template for sequential-mode Phase files.
## Purpose
Generates Phase files for the Sequential execution mode, defining steps that run in a fixed order.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 3 (Phase Generation) | Generated when `config.execution_mode === 'sequential'` |
| Generation Trigger | One phase file per entry in `config.sequential_config.phases` |
| Output Location | `.claude/skills/{skill-name}/phases/{phase-id}.md` |
---
## ⚠️ Important
> **Phase 0 is a mandatory prerequisite**: complete the Phase 0 specification study before implementing any Phase (1, 2, 3...).


@@ -2,6 +2,20 @@
Template for generating a new Skill's entry file.
## Purpose
Generates the new Skill's entry file (SKILL.md), which serves as the Skill's main document and execution entry point.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 2 (Structure Generation) | 创建 SKILL.md 入口文件 |
| Generation Trigger | `config.execution_mode` determines the architecture diagram style |
| Output Location | `.claude/skills/{skill-name}/SKILL.md` |
---
## ⚠️ 重要YAML Front Matter 规范
> **CRITICAL**: SKILL.md 文件必须以 YAML front matter 开头,即以 `---` 作为文件第一行。


@@ -6,298 +6,162 @@ allowed-tools: Task, AskUserQuestion, Read, Write, Bash, Glob, Grep, mcp__ace-to
# Skill Tuning
Autonomous diagnosis and optimization for skill execution issues.
## Architecture
```
┌─────────────────────────────────────────────────────┐
│ Phase 0: Read Specs (mandatory)                     │
│ → problem-taxonomy.md, tuning-strategies.md         │
└─────────────────────────────────────────────────────┘
                        ↓
┌─────────────────────────────────────────────────────┐
│ Orchestrator (state-driven)                         │
│ Read state → Select action → Execute → Update → ✓   │
└─────────────────────────────────────────────────────┘
                        ↓
┌──────────────────────┐       ┌──────────────────┐
│ Diagnosis Phase      │       │ Gemini CLI       │
│ • Context            │       │ Deep analysis    │
│ • Memory             │       │ (on-demand)      │
│ • DataFlow           │       │                  │
│ • Agent              │       │ Complex issues   │
│ • Docs               │       │ Architecture     │
│ • Token Usage        │       │ Performance      │
└──────────────────────┘       └──────────────────┘
                        ↓
            ┌───────────────────┐
            │ Fix & Verify      │
            │ Apply → Re-test   │
            └───────────────────┘
```
## Core Issues Detected
| Priority | Problem | Root Cause | Fix Strategy |
|----------|---------|-----------|--------------|
| **P0** | Authoring Violation | Intermediate files, state bloat, file relay | eliminate_intermediate, minimize_state |
| **P1** | Data Flow Disruption | Scattered state, inconsistent formats | state_centralization, schema_enforcement |
| **P2** | Agent Coordination | Fragile chains, no error handling | error_wrapping, result_validation |
| **P3** | Context Explosion | Unbounded history, full content passing | sliding_window, path_reference |
| **P4** | Long-tail Forgetting | Early constraint loss | constraint_injection, checkpoint_restore |
| **P5** | Token Consumption | Verbose prompts, state bloat | prompt_compression, lazy_loading |
## Problem Categories (Detailed Specs)
See [specs/problem-taxonomy.md](specs/problem-taxonomy.md) for:
- Detection patterns (regex/checks)
- Severity calculations
- Impact assessments
## Tuning Strategies (Detailed Specs)
See [specs/tuning-strategies.md](specs/tuning-strategies.md) for:
- 10+ strategies per category
- Implementation patterns
- Verification methods
---
## Workflow
| Step | Action | Orchestrator Decision | Output |
|------|--------|----------------------|--------|
| 1 | `action-init` | status='pending' | Backup, session created |
| 2 | `action-analyze-requirements` | After init | Required dimensions + coverage |
| 3 | Diagnosis (6 types) | Focus areas | state.diagnosis.{type} |
| 4 | `action-gemini-analysis` | Critical issues OR user request | Deep findings |
| 5 | `action-generate-report` | All diagnosis complete | state.final_report |
| 6 | `action-propose-fixes` | Issues found | state.proposed_fixes[] |
| 7 | `action-apply-fix` | Pending fixes | Applied + verified |
| 8 | `action-complete` | Quality gates pass | session.status='completed' |
## Action Reference
| Category | Actions | Purpose |
|----------|---------|---------|
| **Setup** | action-init | Initialize backup, session state |
| **Analysis** | action-analyze-requirements | Decompose user request via Gemini CLI |
| **Diagnosis** | action-diagnose-{context,memory,dataflow,agent,docs,token_consumption} | Detect category-specific issues |
| **Deep Analysis** | action-gemini-analysis | Gemini CLI: complex/critical issues |
| **Reporting** | action-generate-report | Consolidate findings → final_report |
| **Fixing** | action-propose-fixes, action-apply-fix | Generate + apply fixes |
| **Verify** | action-verify | Re-run diagnosis, check gates |
| **Exit** | action-complete, action-abort | Finalize or rollback |
Full action details: [phases/actions/](phases/actions/)
## State Management
**Single source of truth**: `.workflow/.scratchpad/skill-tuning-{ts}/state.json`
```json
{
"status": "pending|running|completed|failed",
"target_skill": { "name": "...", "path": "..." },
"diagnosis": {
"context": {...},
"memory": {...},
"dataflow": {...},
"agent": {...},
"docs": {...},
"token_consumption": {...}
},
"issues": [{"id":"...", "severity":"...", "category":"...", "strategy":"..."}],
"proposed_fixes": [...],
"applied_fixes": [...],
"quality_gate": "pass|fail",
"final_report": "..."
}
```
See [phases/state-schema.md](phases/state-schema.md) for complete schema.
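Because state.json is the single source of truth, a cheap shape check before each loop guards against partial or corrupted writes. A minimal sketch (the required-key list is an assumption drawn from the sample above):
```javascript
// Sketch: fail fast if state.json lacks the keys the orchestrator relies on.
function assertStateShape(state) {
  const required = ['status', 'target_skill', 'diagnosis', 'issues'];
  const missing = required.filter(key => !(key in state));
  if (missing.length > 0) {
    throw new Error(`state.json is missing keys: ${missing.join(', ')}`);
  }
}
```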
## Orchestrator Logic
See [phases/orchestrator.md](phases/orchestrator.md) for:
- Decision logic (termination checks → action selection)
- State transitions
- Error recovery
## Key Principles
1. **Problem-First**: Diagnosis before any fix
2. **Data-Driven**: Record traces, token counts, snapshots
3. **Iterative**: Multiple rounds until quality gates pass
4. **Reversible**: All changes with backup checkpoints
5. **Non-Invasive**: Minimal changes, maximum clarity
## Usage Examples
```bash
# Basic skill diagnosis
/skill-tuning "Fix memory leaks in my skill"
# Deep analysis with Gemini
/skill-tuning "Architecture issues in async workflow"
# Focus on specific areas
/skill-tuning "Optimize token consumption and fix agent coordination"
# Custom issue
/skill-tuning "My skill produces inconsistent outputs"
```
## Output
After completion, review:
- `.workflow/.scratchpad/skill-tuning-{ts}/state.json` - Full state with final_report
- `state.final_report` - Markdown summary (in state.json)
- `state.applied_fixes` - List of applied fixes with verification results
## Reference Documents
| Document | Purpose |
|----------|---------|
| [phases/orchestrator.md](phases/orchestrator.md) | Workflow orchestration |
| [phases/state-schema.md](phases/state-schema.md) | State structure definition |
| [specs/problem-taxonomy.md](specs/problem-taxonomy.md) | Problem classification |
| [specs/tuning-strategies.md](specs/tuning-strategies.md) | Fix strategies |
| [specs/dimension-mapping.md](specs/dimension-mapping.md) | Dimension to Spec mapping (NEW) |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality criteria |
| [phases/actions/](phases/actions/) | Individual action implementations |


@@ -1,28 +1,57 @@
# Orchestrator
State-driven orchestrator for the autonomous skill-tuning workflow.
## Role
Read state → Select action → Execute → Update → Repeat until termination.
## Decision Logic
### Termination Checks (priority order)
| Condition | Action |
|-----------|--------|
| `status === 'user_exit'` | null (exit) |
| `status === 'completed'` | null (exit) |
| `error_count >= max_errors` | action-abort |
| `iteration_count >= max_iterations` | action-complete |
| `quality_gate === 'pass'` | action-complete |
### Action Selection
| Priority | Condition | Action |
|----------|-----------|--------|
| 1 | `status === 'pending'` | action-init |
| 2 | Init done, req analysis missing | action-analyze-requirements |
| 3 | Req needs clarification | null (wait) |
| 4 | Req coverage unsatisfied | action-gemini-analysis |
| 5 | Gemini requested/critical issues | action-gemini-analysis |
| 6 | Gemini running | null (wait) |
| 7 | Diagnosis pending (in order) | action-diagnose-{type} |
| 8 | All diagnosis done, no report | action-generate-report |
| 9 | Report done, issues exist | action-propose-fixes |
| 10 | Pending fixes exist | action-apply-fix |
| 11 | Fixes need verification | action-verify |
| 12 | New iteration needed | action-diagnose-context (restart) |
| 13 | Default | action-complete |
**Diagnosis Order**: context → memory → dataflow → agent → docs → token_consumption
**Gemini Triggers**:
- `gemini_analysis_requested === true`
- Critical issues detected
- Focus areas include: architecture, prompt, performance, custom
- Second iteration with unresolved issues
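Condensed into code, the two tables above read roughly as follows (a sketch of the selection order only; detailed preconditions, focus-area filtering, and Gemini triggers live in the action files):
```javascript
// Sketch mirroring the termination and action-selection tables above;
// it adds no rules beyond what the tables state.
function selectNextAction(state) {
  // Termination checks (priority order)
  if (state.status === 'user_exit' || state.status === 'completed') return null;
  if (state.error_count >= state.max_errors) return 'action-abort';
  if (state.iteration_count >= state.max_iterations) return 'action-complete';
  if (state.quality_gate === 'pass') return 'action-complete';

  // Action selection
  if (state.status === 'pending') return 'action-init';
  if (!state.completed_actions.includes('action-analyze-requirements')) return 'action-analyze-requirements';
  if (state.requirement_analysis?.status === 'needs_clarification') return null; // wait for the user
  const order = ['context', 'memory', 'dataflow', 'agent', 'docs', 'token_consumption'];
  const pending = order.find(type => state.diagnosis[type] === null);
  if (pending) return `action-diagnose-${pending}`;
  if (!state.completed_actions.includes('action-generate-report')) return 'action-generate-report';
  if (state.issues.length > 0 && state.proposed_fixes.length === 0) return 'action-propose-fixes';
  if (state.pending_fixes.length > 0) return 'action-apply-fix';
  if (state.applied_fixes.some(f => f.verification_result === 'pending')) return 'action-verify';
  return 'action-complete';
}
```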
## State Management
```javascript
// Read
const state = JSON.parse(Read(`${workDir}/state.json`));

// Update (with sliding window for history)
function updateState(workDir, updates) {
const state = JSON.parse(Read(`${workDir}/state.json`));
const newState = {
...state,
@@ -34,344 +63,127 @@ function updateState(updates) {
}
```
## Execution Loop
```javascript
async function runOrchestrator(workDir) {
  let iteration = 0;
  const MAX_LOOP = 50;
  while (iteration++ < MAX_LOOP) {
    // 1. Read state
    const state = JSON.parse(Read(`${workDir}/state.json`));
    // 2. Select action
    const actionId = selectNextAction(state);
    if (!actionId) break;
    // 3. Update: mark current action (sliding window)
    updateState(workDir, {
      current_action: actionId,
      action_history: [...state.action_history, {
        action: actionId,
        started_at: new Date().toISOString()
      }].slice(-10) // Keep last 10
    });
    // 4. Execute action
    try {
      const actionPrompt = Read(`phases/actions/${actionId}.md`);
      // Pass state path + key fields (not full state)
      const stateKeyInfo = {
        status: state.status,
        iteration_count: state.iteration_count,
        issues_by_severity: state.issues_by_severity,
        quality_gate: state.quality_gate,
        current_action: state.current_action,
        completed_actions: state.completed_actions,
        user_issue_description: state.user_issue_description,
        target_skill: { name: state.target_skill.name, path: state.target_skill.path }
      };
      const result = await Task({
        subagent_type: 'universal-executor',
        run_in_background: false,
        prompt: `
[CONTEXT]
Action: ${actionId}
Work directory: ${workDir}
[STATE KEY INFO]
${JSON.stringify(stateKeyInfo, null, 2)}
[FULL STATE PATH]
${workDir}/state.json
(Read full state from this file if needed)
[ACTION INSTRUCTIONS]
${actionPrompt}
[OUTPUT]
Return JSON: { stateUpdates: {}, outputFiles: [], summary: "..." }
`
      });
      // 5. Parse result
      let actionResult = result;
      try { actionResult = JSON.parse(result); } catch {}
      // 6. Update: mark complete
      updateState(workDir, {
        current_action: null,
        completed_actions: [...state.completed_actions, actionId],
        ...actionResult.stateUpdates
      });
    } catch (error) {
      // Error handling (sliding window for errors)
      updateState(workDir, {
        current_action: null,
        errors: [...state.errors, {
          action: actionId,
          message: error.message,
          timestamp: new Date().toISOString()
        }].slice(-5), // Keep last 5
        error_count: state.error_count + 1
      });
    }
  }
}
```
## Action Preconditions
| Action | Precondition |
|--------|-------------|
| action-init | status='pending' |
| action-analyze-requirements | Init complete, not done |
| action-diagnose-* | status='running', focus area includes type |
| action-gemini-analysis | Requested OR critical issues OR high complexity |
| action-generate-report | All diagnosis complete |
| action-propose-fixes | Report generated, issues > 0 |
| action-apply-fix | pending_fixes > 0 |
| action-verify | applied_fixes with pending verification |
| action-complete | Quality gates pass OR max iterations |
| action-abort | error_count >= max_errors |
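A sketch of how the orchestrator might evaluate these preconditions against the state (the state field names are assumptions derived from the tables above, not normative):
```javascript
// Illustrative precondition checks; field names are assumed from the tables above.
const PRECONDITIONS = {
  'action-init': s => s.status === 'pending',
  'action-propose-fixes': s => s.report_generated && s.issues.length > 0,
  'action-apply-fix': s => s.pending_fixes > 0,
  'action-verify': s => s.applied_fixes.some(f => !f.verified),
  'action-abort': s => s.error_count >= (s.max_errors ?? 3)
};
function isActionAllowed(actionId, state) {
  const check = PRECONDITIONS[actionId];
  return check ? check(state) : false;
}
```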
## User Interaction Points
The orchestrator pauses for user input at these points:
1. **action-init**: Confirm target skill and describe issue
2. **action-propose-fixes**: Select which fixes to apply
3. **action-verify**: Review verification results, decide to continue or stop
4. **action-complete**: Review final summary
## Error Recovery
| Error Type | Strategy |
|------------|----------|
| Action execution failed | Retry up to 3 times, then skip |
| State parse error | Restore from backup |
| File write error | Retry with alternative path |
| User abort | Save state and exit gracefully |
## Termination Conditions
- Normal: `status === 'completed'`, `quality_gate === 'pass'`
- User: `status === 'user_exit'`
- Error: `status === 'failed'`, `error_count >= max_errors` (default: 3)
- Iteration limit: `iteration_count >= max_iterations` (default: 5)
- Clarification wait: `requirement_analysis.status === 'needs_clarification'` (pause, not terminate)
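Expressed as code, the termination check might look like the following sketch (field names follow the conditions above; the helper itself is illustrative):
```javascript
// Illustrative termination check derived from the conditions above.
function shouldTerminate(state) {
  if (['completed', 'user_exit', 'failed'].includes(state.status)) return true;
  if (state.error_count >= (state.max_errors ?? 3)) return true;
  if (state.iteration_count >= (state.max_iterations ?? 5)) return true;
  if (state.quality_gate === 'pass') return true;
  // needs_clarification pauses the loop but does not terminate it
  return false;
}
```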

View File

@@ -2,276 +2,174 @@
Classification of skill execution issues with detection patterns and severity criteria.
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| All Diagnosis Actions | Issue classification | All sections |
| action-propose-fixes | Strategy selection | Fix Mapping |
| action-generate-report | Severity assessment | Severity Criteria |
## Quick Reference
| Category | Priority | Detection | Fix Strategy |
|----------|----------|-----------|--------------|
| Authoring Violation | P0 | Intermediate files, state bloat, file relay | eliminate_intermediate, minimize_state |
| Data Flow Disruption | P1 | Scattered state, inconsistent formats | state_centralization, schema_enforcement |
| Agent Coordination | P2 | Fragile chains, no error handling | error_wrapping, result_validation |
| Context Explosion | P3 | Unbounded history, full content passing | sliding_window, path_reference |
| Long-tail Forgetting | P4 | Early constraint loss | constraint_injection, checkpoint_restore |
| Token Consumption | P5 | Verbose prompts, redundant I/O | prompt_compression, lazy_loading |
| Doc Redundancy | P6 | Repeated definitions | consolidate_to_ssot |
| Doc Conflict | P7 | Inconsistent definitions | reconcile_definitions |
---
## 0. Authoring Principles Violation (P0)
**Definition**: Violates skill authoring principles (simplicity, no intermediate files, context passing).
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| APV-001 | `/Write\([^)]*temp-\|intermediate-/` | Intermediate file writes |
| APV-002 | `/Write\([^)]+\)[\s\S]{0,50}Read\([^)]+\)/` | Write-then-read relay |
| APV-003 | State schema > 15 fields | Excessive state fields |
| APV-004 | `/_history\s*[.=].*push\|concat/` | Unbounded array growth |
| APV-005 | `/debug_\|_cache\|_temp/` in state | Debug/cache field residue |
| APV-006 | Same data in multiple fields | Duplicate storage |
**Impact**: Critical (>5 intermediate files), High (>20 state fields or file relay), Medium (debug fields or minor redundancy), Low (naming issues)
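For illustration only, a diagnosis action might apply these checks roughly as follows (the scanning harness is an assumption, not part of the taxonomy):
```javascript
// Illustrative scanner applying the regex checks above to a skill file's content.
const APV_PATTERNS = [
  { id: 'APV-001', regex: /Write\([^)]*(temp-|intermediate-)/, desc: 'Intermediate file writes' },
  { id: 'APV-002', regex: /Write\([^)]+\)[\s\S]{0,50}Read\([^)]+\)/, desc: 'Write-then-read relay' },
  { id: 'APV-004', regex: /_history\s*[.=].*(push|concat)/, desc: 'Unbounded array growth' }
  // APV-003/005/006 need structural checks rather than a single regex
];
function scanSkillFile(content) {
  return APV_PATTERNS
    .filter(p => p.regex.test(content))
    .map(p => ({ pattern: p.id, description: p.desc }));
}
```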
---
## 1. Context Explosion (P3)
**Definition**: Unbounded token accumulation causing prompt size growth.
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| CTX-001 | `/history\s*[.=].*push\|concat/` | History array growth |
| CTX-002 | `/JSON\.stringify\s*\(\s*state\s*\)/` | Full state serialization |
| CTX-003 | `/Read\([^)]+\)\s*[\+,]/` | Multiple file content concatenation |
| CTX-004 | `/return\s*\{[^}]*content:/` | Agent returning full content |
| CTX-005 | File > 5000 chars without summarization | Long prompts |
**Impact**: Critical (>128K tokens), High (>50K per iteration), Medium (10%+ growth), Low (manageable)
---
## 2. Long-tail Forgetting (P4)
**Definition**: Loss of early instructions/constraints in long chains.
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| MEM-001 | Later phases missing constraint reference | Constraint not forwarded |
| MEM-002 | `/\[TASK\][^[]*(?!\[CONSTRAINTS\])/` | Task without constraints section |
| MEM-003 | Key phases without checkpoint | Missing state preservation |
| MEM-004 | State lacks `original_requirements` | No constraint persistence |
| MEM-005 | No verification phase | Output not checked against intent |
**Impact**: Critical (goal lost), High (constraints ignored), Medium (some missing), Low (minor drift)
---
## 3. Data Flow Disruption (P1)
**Definition**: Inconsistent state management causing data loss/corruption.
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| DF-001 | Multiple state file writes | Scattered state storage |
| DF-002 | Same concept, different names | Field naming inconsistency |
| DF-003 | JSON.parse without validation | Missing schema validation |
| DF-004 | Files written but never read | Orphaned outputs |
| DF-005 | Autonomous skill without state-schema | Undefined state structure |
**Impact**: Critical (data loss), High (state inconsistency), Medium (potential inconsistency), Low (naming)
---
## 4. Agent Coordination Failure (P2)
**Definition**: Fragile agent call patterns causing cascading failures.
**Root Causes**:
- Missing error handling in Task calls
- No result validation
- Inconsistent agent configurations
- Deeply nested agent calls
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| AGT-001 | Task without try-catch | Missing error handling |
| AGT-002 | Result used without validation | No return value check |
| AGT-003 | >3 different agent types | Agent type proliferation |
| AGT-004 | Nested Task in prompt | Agent calling agent |
| AGT-005 | Task used but not in allowed-tools | Tool declaration mismatch |
| AGT-006 | Multiple return formats | Inconsistent agent output |
**Impact**: Critical (crash on failure), High (unpredictable behavior), Medium (occasional issues), Low (minor)
---
## 5. Documentation Redundancy (P6)
**Definition**: Same definition (State Schema, mappings, types) repeated across files.
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| DOC-RED-001 | Cross-file semantic comparison | State Schema duplication |
| DOC-RED-002 | Code block vs spec comparison | Hardcoded config duplication |
| DOC-RED-003 | `/interface\s+(\w+)/` same-name scan | Interface/type duplication |
**Impact**: High (core definitions), Medium (type definitions), Low (example code)
---
## 6. Token Consumption (P5)
**Definition**: Excessive token usage from verbose prompts, large state, inefficient I/O.
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| TKN-001 | File size > 4KB | Verbose prompt files |
| TKN-002 | State fields > 15 | Excessive state schema |
| TKN-003 | `/Read\([^)]+\)\s*[\+,]/` | Full content passing |
| TKN-004 | `/.push\|concat(?!.*\.slice)/` | Unbounded array growth |
| TKN-005 | `/Write\([^)]+\)[\s\S]{0,100}Read\([^)]+\)/` | Write-then-read pattern |
**Impact**: High (multiple TKN-003/004), Medium (verbose files), Low (minor optimization)
---
## 7. Documentation Conflict (P7)
**Definition**: Same concept defined inconsistently across files.
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| DOC-CON-001 | Key-value consistency check | Same key, different values |
| DOC-CON-002 | Implementation vs docs comparison | Hardcoded vs documented mismatch |
**Impact**: Critical (priority/category conflicts), High (strategy mapping inconsistency), Medium (example mismatch)
---
## Severity Criteria
### Global Severity Matrix
| Severity | Definition | Action Required |
|----------|------------|-----------------|
| **Critical** | Blocks execution or causes data loss | Immediate fix required |
| **High** | Significantly impacts reliability | Should fix before deployment |
| **Medium** | Affects quality or maintainability | Fix in next iteration |
| **Low** | Minor improvement opportunity | Optional fix |
## Severity Calculation
```javascript
function calculateSeverity(issue) {
  const weights = { execution: 40, data_integrity: 30, frequency: 20, complexity: 10 };
  let score = 0;
  if (issue.blocks_execution) score += weights.execution;
  if (issue.causes_data_loss) score += weights.data_integrity;
  if (issue.occurs_every_run) score += weights.frequency;
  if (issue.fix_complexity === 'low') score += weights.complexity; // easier fixes get priority
  // Map score to severity
  if (score >= 70) return 'critical';
  if (score >= 50) return 'high';
  if (score >= 30) return 'medium';
@@ -283,36 +181,30 @@ function calculateIssueSeverity(issue) {
## Fix Mapping
| Problem | Strategies (priority order) |
|---------|---------------------------|
| Authoring Violation | eliminate_intermediate_files, minimize_state, context_passing |
| Context Explosion | sliding_window, path_reference, context_summarization |
| Long-tail Forgetting | constraint_injection, state_constraints_field, checkpoint |
| Data Flow Disruption | state_centralization, schema_enforcement, field_normalization |
| Agent Coordination | error_wrapping, result_validation, flatten_nesting |
| Token Consumption | prompt_compression, lazy_loading, output_minimization, state_field_reduction |
| Doc Redundancy | consolidate_to_ssot, centralize_mapping_config |
| Doc Conflict | reconcile_conflicting_definitions |
---
## Cross-Category Dependencies
Some issues may trigger others:
```
Context Explosion → Long-tail Forgetting
(Large context pushes important info out)
Data Flow Disruption → Agent Coordination Failure
(Inconsistent data causes agent failures)
Agent Coordination Failure → Context Explosion
(Failed retries add to context)
```
**Fix Order**: P1 Data Flow → P2 Agent → P3 Context → P4 Memory

File diff suppressed because it is too large

View File

@@ -0,0 +1,47 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Plan Verification Agent Schema",
"description": "Defines dimensions, severity rules, and CLI templates for plan verification agent",
"dimensions": {
"A": { "name": "User Intent Alignment", "tier": 1, "severity": "CRITICAL",
"checks": ["Goal Alignment", "Scope Drift", "Success Criteria Match", "Intent Conflicts"] },
"B": { "name": "Requirements Coverage", "tier": 1, "severity": "CRITICAL",
"checks": ["Orphaned Requirements", "Unmapped Tasks", "NFR Coverage Gaps"] },
"C": { "name": "Consistency Validation", "tier": 1, "severity": "CRITICAL",
"checks": ["Requirement Conflicts", "Architecture Drift", "Terminology Drift", "Data Model Inconsistency"] },
"D": { "name": "Dependency Integrity", "tier": 2, "severity": "HIGH",
"checks": ["Circular Dependencies", "Missing Dependencies", "Broken Dependencies", "Logical Ordering"] },
"E": { "name": "Synthesis Alignment", "tier": 2, "severity": "HIGH",
"checks": ["Priority Conflicts", "Success Criteria Mismatch", "Risk Mitigation Gaps"] },
"F": { "name": "Task Specification Quality", "tier": 3, "severity": "MEDIUM",
"checks": ["Ambiguous Focus Paths", "Underspecified Acceptance", "Missing Artifacts", "Weak Flow Control"] },
"G": { "name": "Duplication Detection", "tier": 4, "severity": "LOW",
"checks": ["Overlapping Task Scope", "Redundant Coverage"] },
"H": { "name": "Feasibility Assessment", "tier": 4, "severity": "LOW",
"checks": ["Complexity Misalignment", "Resource Conflicts", "Skill Gap Risks"] }
},
"tiers": {
"1": { "dimensions": ["A", "B", "C"], "priority": "CRITICAL", "limit": null, "rule": "analysis-review-architecture" },
"2": { "dimensions": ["D", "E"], "priority": "HIGH", "limit": 15, "rule": "analysis-diagnose-bug-root-cause" },
"3": { "dimensions": ["F"], "priority": "MEDIUM", "limit": 20, "rule": "analysis-analyze-code-patterns" },
"4": { "dimensions": ["G", "H"], "priority": "LOW", "limit": 15, "rule": "analysis-analyze-code-patterns" }
},
"severity_rules": {
"CRITICAL": ["User intent violation", "Synthesis authority violation", "Zero coverage", "Circular/broken deps"],
"HIGH": ["NFR gaps", "Priority conflicts", "Missing risk mitigation"],
"MEDIUM": ["Terminology drift", "Missing refs", "Weak flow control"],
"LOW": ["Style improvements", "Minor redundancy"]
},
"quality_gate": {
"BLOCK_EXECUTION": { "condition": "critical > 0", "emoji": "🛑" },
"PROCEED_WITH_FIXES": { "condition": "critical == 0 && high > 0", "emoji": "⚠️" },
"PROCEED_WITH_CAUTION": { "condition": "critical == 0 && high == 0 && medium > 0", "emoji": "✅" },
"PROCEED": { "condition": "only low or none", "emoji": "✅" }
},
"token_budget": { "total_findings": 50, "early_exit": "CRITICAL > 0 in Tier 1 → skip Tier 3-4" }
}
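The `quality_gate` conditions are stored as expression strings; a consumer might evaluate them along these lines (a sketch, assuming the severity counts from the findings schema):
```javascript
// Illustrative evaluation of the quality_gate rules defined in the schema above.
function resolveQualityGate({ critical_count, high_count, medium_count }) {
  if (critical_count > 0) return 'BLOCK_EXECUTION';    // 🛑
  if (high_count > 0) return 'PROCEED_WITH_FIXES';     // ⚠️
  if (medium_count > 0) return 'PROCEED_WITH_CAUTION'; // ✅
  return 'PROCEED';                                    // ✅
}
```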

View File

@@ -0,0 +1,158 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Plan Verification Findings Schema",
"description": "Schema for plan verification findings output from cli-explore-agent",
"type": "object",
"required": [
"session_id",
"timestamp",
"verification_tiers_completed",
"findings",
"summary"
],
"properties": {
"session_id": {
"type": "string",
"description": "Workflow session ID (e.g., WFS-20250127-143000)",
"pattern": "^WFS-[0-9]{8}-[0-9]{6}$"
},
"timestamp": {
"type": "string",
"description": "ISO 8601 timestamp when verification was completed",
"format": "date-time"
},
"verification_tiers_completed": {
"type": "array",
"description": "List of verification tiers completed (e.g., ['Tier 1', 'Tier 2'])",
"items": {
"type": "string",
"enum": ["Tier 1", "Tier 2", "Tier 3", "Tier 4"]
},
"minItems": 1,
"maxItems": 4
},
"findings": {
"type": "array",
"description": "Array of all findings across all dimensions",
"items": {
"type": "object",
"required": [
"id",
"dimension",
"dimension_name",
"severity",
"location",
"summary",
"recommendation"
],
"properties": {
"id": {
"type": "string",
"description": "Unique finding ID prefixed by severity (C1, H1, M1, L1)",
"pattern": "^[CHML][0-9]+$"
},
"dimension": {
"type": "string",
"description": "Verification dimension identifier",
"enum": ["A", "B", "C", "D", "E", "F", "G", "H"]
},
"dimension_name": {
"type": "string",
"description": "Human-readable dimension name",
"enum": [
"User Intent Alignment",
"Requirements Coverage Analysis",
"Consistency Validation",
"Dependency Integrity",
"Synthesis Alignment",
"Task Specification Quality",
"Duplication Detection",
"Feasibility Assessment"
]
},
"severity": {
"type": "string",
"description": "Severity level of the finding",
"enum": ["CRITICAL", "HIGH", "MEDIUM", "LOW"]
},
"location": {
"type": "array",
"description": "Array of locations where issue was found (e.g., 'IMPL_PLAN.md:L45', 'task:IMPL-1.2', 'synthesis:FR-03')",
"items": {
"type": "string"
},
"minItems": 1
},
"summary": {
"type": "string",
"description": "Concise summary of the issue (1-2 sentences)",
"minLength": 10,
"maxLength": 500
},
"recommendation": {
"type": "string",
"description": "Actionable recommendation to resolve the issue",
"minLength": 10,
"maxLength": 500
}
}
}
},
"summary": {
"type": "object",
"description": "Aggregate summary of verification results",
"required": [
"critical_count",
"high_count",
"medium_count",
"low_count",
"total_findings",
"coverage_percentage",
"recommendation"
],
"properties": {
"critical_count": {
"type": "integer",
"description": "Number of critical severity findings",
"minimum": 0
},
"high_count": {
"type": "integer",
"description": "Number of high severity findings",
"minimum": 0
},
"medium_count": {
"type": "integer",
"description": "Number of medium severity findings",
"minimum": 0
},
"low_count": {
"type": "integer",
"description": "Number of low severity findings",
"minimum": 0
},
"total_findings": {
"type": "integer",
"description": "Total number of findings",
"minimum": 0
},
"coverage_percentage": {
"type": "number",
"description": "Percentage of synthesis requirements covered by tasks (0-100)",
"minimum": 0,
"maximum": 100
},
"recommendation": {
"type": "string",
"description": "Quality gate recommendation",
"enum": [
"BLOCK_EXECUTION",
"PROCEED_WITH_FIXES",
"PROCEED_WITH_CAUTION",
"PROCEED"
]
}
}
}
}
}
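For illustration only, a minimal object conforming to this schema might look like the following sketch (all values invented):
```javascript
// Hypothetical findings document satisfying the schema above; values are invented.
const exampleFindings = {
  session_id: 'WFS-20250127-143000',
  timestamp: '2025-01-27T14:45:00Z',
  verification_tiers_completed: ['Tier 1'],
  findings: [{
    id: 'C1',
    dimension: 'B',
    dimension_name: 'Requirements Coverage Analysis',
    severity: 'CRITICAL',
    location: ['IMPL_PLAN.md:L45', 'synthesis:FR-03'],
    summary: 'Requirement FR-03 has no mapped implementation task.',
    recommendation: 'Add a task covering FR-03 or document why it is out of scope.'
  }],
  summary: {
    critical_count: 1,
    high_count: 0,
    medium_count: 0,
    low_count: 0,
    total_findings: 1,
    coverage_percentage: 92.5,
    recommendation: 'BLOCK_EXECUTION'
  }
};
```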

View File

@@ -14,7 +14,7 @@
### Configuration File
**Path**: `~/.claude/cli-tools.json`
All tool availability, model selection, and routing are defined in this configuration file.

View File

@@ -1,5 +1,6 @@
# Codex Code Guidelines
### Code Quality
@@ -21,11 +22,8 @@
- Graceful degradation
- Don't expose sensitive info
## Core Principles
**Incremental Progress**:
- Small, testable changes
- Commit working code frequently

View File

@@ -0,0 +1,607 @@
---
description: Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding
argument-hint: TOPIC="<topic or question to analyze>"
---
# Codex Analyze-With-File Prompt
## Overview
Interactive collaborative analysis workflow with **documented discussion process**. Records understanding evolution, facilitates multi-round Q&A, and uses deep analysis for codebase and concept exploration.
**Core workflow**: Topic → Explore → Discuss → Document → Refine → Conclude
**Key features**:
- **discussion.md**: Timeline of discussions and understanding evolution
- **Multi-round Q&A**: Iterative clarification with user
- **Analysis-assisted exploration**: Deep codebase and concept analysis
- **Consolidated insights**: Synthesizes discussions into actionable conclusions
- **Flexible continuation**: Resume analysis sessions to build on previous work
## Target Topic
**$TOPIC**
## Execution Process
```
Session Detection:
├─ Check if analysis session exists for topic
├─ EXISTS + discussion.md exists → Continue mode
└─ NOT_FOUND → New session mode
Phase 1: Topic Understanding
├─ Parse topic/question
├─ Identify analysis dimensions (architecture, implementation, concept, etc.)
├─ Initial scoping with user
└─ Document initial understanding in discussion.md
Phase 2: Exploration (Parallel)
├─ Search codebase for relevant patterns
├─ Analyze code structure and dependencies
└─ Aggregate findings into exploration summary
Phase 3: Interactive Discussion (Multi-Round)
├─ Present exploration findings
├─ Facilitate Q&A with user
├─ Capture user insights and requirements
├─ Update discussion.md with each round
└─ Repeat until user is satisfied or clarity achieved
Phase 4: Synthesis & Conclusion
├─ Consolidate all insights
├─ Update discussion.md with conclusions
├─ Generate actionable recommendations
└─ Optional: Create follow-up tasks or issues
Output:
├─ .workflow/.analysis/{slug}-{date}/discussion.md (evolving document)
├─ .workflow/.analysis/{slug}-{date}/explorations.json (findings)
└─ .workflow/.analysis/{slug}-{date}/conclusions.json (final synthesis)
```
## Implementation Details
### Session Setup & Mode Detection
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const topicSlug = "$TOPIC".toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `ANL-${topicSlug}-${dateStr}`
const sessionFolder = `.workflow/.analysis/${sessionId}`
const discussionPath = `${sessionFolder}/discussion.md`
const explorationsPath = `${sessionFolder}/explorations.json`
const conclusionsPath = `${sessionFolder}/conclusions.json`
// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const hasDiscussion = sessionExists && fs.existsSync(discussionPath)
const mode = hasDiscussion ? 'continue' : 'new'
if (!sessionExists) {
bash(`mkdir -p ${sessionFolder}`)
}
```
---
### Phase 1: Topic Understanding
#### Step 1.1: Parse Topic & Identify Dimensions
```javascript
// Analyze topic to determine analysis dimensions
const ANALYSIS_DIMENSIONS = {
architecture: ['架构', 'architecture', 'design', 'structure', '设计'],
implementation: ['实现', 'implement', 'code', 'coding', '代码'],
performance: ['性能', 'performance', 'optimize', 'bottleneck', '优化'],
security: ['安全', 'security', 'auth', 'permission', '权限'],
concept: ['概念', 'concept', 'theory', 'principle', '原理'],
comparison: ['比较', 'compare', 'vs', 'difference', '区别'],
decision: ['决策', 'decision', 'choice', 'tradeoff', '选择']
}
function identifyDimensions(topic) {
const text = topic.toLowerCase()
const matched = []
for (const [dimension, keywords] of Object.entries(ANALYSIS_DIMENSIONS)) {
if (keywords.some(k => text.includes(k))) {
matched.push(dimension)
}
}
return matched.length > 0 ? matched : ['general']
}
const dimensions = identifyDimensions("$TOPIC")
```
#### Step 1.2: Initial Scoping (New Session Only)
Ask user to scope the analysis:
- Focus areas: Code Implementation / Architecture Design / Best Practices / Problem Diagnosis
- Analysis depth: Quick Overview / Standard Analysis / Deep Dive
#### Step 1.3: Create/Update discussion.md
For new session:
```markdown
# Analysis Discussion
**Session ID**: ${sessionId}
**Topic**: $TOPIC
**Started**: ${getUtc8ISOString()}
**Dimensions**: ${dimensions.join(', ')}
---
## User Context
**Focus Areas**: ${userFocusAreas.join(', ')}
**Analysis Depth**: ${analysisDepth}
---
## Discussion Timeline
### Round 1 - Initial Understanding (${timestamp})
#### Topic Analysis
Based on topic "$TOPIC":
- **Primary dimensions**: ${dimensions.join(', ')}
- **Initial scope**: ${initialScope}
- **Key questions to explore**:
- ${question1}
- ${question2}
- ${question3}
#### Next Steps
- Search codebase for relevant patterns
- Gather insights via analysis
- Prepare discussion points for user
---
## Current Understanding
${initialUnderstanding}
```
For continue session, append:
```markdown
### Round ${n} - Continuation (${timestamp})
#### Previous Context
Resuming analysis based on prior discussion.
#### New Focus
${newFocusFromUser}
```
---
### Phase 2: Exploration
#### Step 2.1: Codebase Search
```javascript
// Extract keywords from topic
const keywords = extractTopicKeywords("$TOPIC")
// Search codebase for relevant code
const searchResults = []
for (const keyword of keywords) {
const results = Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
searchResults.push({ keyword, results })
}
// Identify affected files and patterns
const relevantLocations = analyzeSearchResults(searchResults)
```
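`extractTopicKeywords` and `analyzeSearchResults` are assumed helpers, not defined by this prompt; a minimal sketch of the keyword extraction, under those assumptions:
```javascript
// Illustrative sketch of the assumed extractTopicKeywords helper
const STOP_WORDS = new Set(['the', 'a', 'an', 'and', 'or', 'of', 'to', 'in', 'for', 'how', 'what', 'why'])
function extractTopicKeywords(topic) {
  return topic
    .toLowerCase()
    .split(/[^a-z0-9\u4e00-\u9fa5]+/)
    .filter(w => w.length > 2 && !STOP_WORDS.has(w))
    .slice(0, 5) // cap the number of Grep passes
}
```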
#### Step 2.2: Pattern Analysis
Analyze the codebase from identified dimensions:
1. Architecture patterns and structure
2. Implementation conventions
3. Dependency relationships
4. Potential issues or improvements
#### Step 2.3: Aggregate Findings
```javascript
// Aggregate into explorations.json
const explorations = {
session_id: sessionId,
timestamp: getUtc8ISOString(),
topic: "$TOPIC",
dimensions: dimensions,
sources: [
{ type: "codebase", summary: codebaseSummary },
{ type: "analysis", summary: analysisSummary }
],
key_findings: [...],
discussion_points: [...],
open_questions: [...]
}
Write(explorationsPath, JSON.stringify(explorations, null, 2))
```
#### Step 2.4: Update discussion.md
```markdown
#### Exploration Results (${timestamp})
**Sources Analyzed**:
${sources.map(s => `- ${s.type}: ${s.summary}`).join('\n')}
**Key Findings**:
${keyFindings.map((f, i) => `${i+1}. ${f}`).join('\n')}
**Points for Discussion**:
${discussionPoints.map((p, i) => `${i+1}. ${p}`).join('\n')}
**Open Questions**:
${openQuestions.map((q, i) => `- ${q}`).join('\n')}
```
---
### Phase 3: Interactive Discussion (Multi-Round)
#### Step 3.1: Present Findings & Gather Feedback
```javascript
// Maximum discussion rounds
const MAX_ROUNDS = 5
let roundNumber = 1
let discussionComplete = false
while (!discussionComplete && roundNumber <= MAX_ROUNDS) {
// Display current findings
console.log(`
## Discussion Round ${roundNumber}
${currentFindings}
### Key Points for Your Input
${discussionPoints.map((p, i) => `${i+1}. ${p}`).join('\n')}
`)
// Gather user input
// Options:
// - Agree, go deeper: Deepen analysis in current direction
// - Adjust direction: Get user's adjusted focus
// - Analysis complete: Exit loop
// - Specific questions: Answer specific questions
// Process user response and update understanding
updateDiscussionDocument(roundNumber, userResponse, findings)
roundNumber++
}
```
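`updateDiscussionDocument` is left abstract above; one plausible shape, appending a round entry to discussion.md (a sketch, not prescribed by this workflow):
```javascript
// Illustrative sketch of the assumed updateDiscussionDocument helper
function updateDiscussionDocument(round, userResponse, findings) {
  const entry = [
    `### Round ${round} - Discussion (${getUtc8ISOString()})`,
    `#### User Input`,
    userResponse.summary || String(userResponse),
    `#### Findings This Round`,
    ...findings.map(f => `- ${f}`)
  ].join('\n')
  // Append to the evolving discussion document
  const existing = Read(discussionPath)
  Write(discussionPath, `${existing}\n\n${entry}`)
}
```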
#### Step 3.2: Document Each Round
Append to discussion.md:
```markdown
### Round ${n} - Discussion (${timestamp})
#### User Input
${userInputSummary}
${userResponse === 'adjustment' ? `
**Direction Adjustment**: ${adjustmentDetails}
` : ''}
${userResponse === 'questions' ? `
**User Questions**:
${userQuestions.map((q, i) => `${i+1}. ${q}`).join('\n')}
**Answers**:
${answers.map((a, i) => `${i+1}. ${a}`).join('\n')}
` : ''}
#### Updated Understanding
Based on user feedback:
- ${insight1}
- ${insight2}
#### Corrected Assumptions
${corrections.length > 0 ? corrections.map(c => `
- ~~${c.wrong}~~ → ${c.corrected}
- Reason: ${c.reason}
`).join('\n') : 'None'}
#### New Insights
${newInsights.map(i => `- ${i}`).join('\n')}
```
---
### Phase 4: Synthesis & Conclusion
#### Step 4.1: Consolidate Insights
```javascript
const conclusions = {
session_id: sessionId,
topic: "$TOPIC",
completed: getUtc8ISOString(),
total_rounds: roundNumber,
summary: "...",
key_conclusions: [
{ point: "...", evidence: "...", confidence: "high|medium|low" }
],
recommendations: [
{ action: "...", rationale: "...", priority: "high|medium|low" }
],
open_questions: [...],
follow_up_suggestions: [
{ type: "issue", summary: "..." },
{ type: "task", summary: "..." }
]
}
Write(conclusionsPath, JSON.stringify(conclusions, null, 2))
```
#### Step 4.2: Final discussion.md Update
```markdown
---
## Conclusions (${timestamp})
### Summary
${summaryParagraph}
### Key Conclusions
${conclusions.key_conclusions.map((c, i) => `
${i+1}. **${c.point}** (Confidence: ${c.confidence})
- Evidence: ${c.evidence}
`).join('\n')}
### Recommendations
${conclusions.recommendations.map((r, i) => `
${i+1}. **${r.action}** (Priority: ${r.priority})
- Rationale: ${r.rationale}
`).join('\n')}
### Remaining Questions
${conclusions.open_questions.map(q => `- ${q}`).join('\n')}
---
## Current Understanding (Final)
### What We Established
${establishedPoints.map(p => `- ${p}`).join('\n')}
### What Was Clarified/Corrected
${corrections.map(c => `- ~~${c.original}~~ → ${c.corrected}`).join('\n')}
### Key Insights
${keyInsights.map(i => `- ${i}`).join('\n')}
---
## Session Statistics
- **Total Rounds**: ${totalRounds}
- **Duration**: ${duration}
- **Sources Used**: ${sources.join(', ')}
- **Artifacts Generated**: discussion.md, explorations.json, conclusions.json
```
#### Step 4.3: Post-Completion Options
Offer follow-up options:
- Create Issue: Convert conclusions to actionable issues
- Generate Task: Create implementation tasks
- Export Report: Generate standalone analysis report
- Complete: No further action needed
---
## Session Folder Structure
```
.workflow/.analysis/ANL-{slug}-{date}/
├── discussion.md # Evolution of understanding & discussions
├── explorations.json # Exploration findings
├── conclusions.json # Final synthesis
└── exploration-*.json # Individual exploration results (optional)
```
## Discussion Document Template
```markdown
# Analysis Discussion
**Session ID**: ANL-xxx-2025-01-25
**Topic**: [topic or question]
**Started**: 2025-01-25T10:00:00+08:00
**Dimensions**: [architecture, implementation, ...]
---
## User Context
**Focus Areas**: [user-selected focus]
**Analysis Depth**: [quick|standard|deep]
---
## Discussion Timeline
### Round 1 - Initial Understanding (2025-01-25 10:00)
#### Topic Analysis
...
#### Exploration Results
...
### Round 2 - Discussion (2025-01-25 10:15)
#### User Input
...
#### Updated Understanding
...
#### Corrected Assumptions
- ~~[wrong]~~ → [corrected]
### Round 3 - Deep Dive (2025-01-25 10:30)
...
---
## Conclusions (2025-01-25 11:00)
### Summary
...
### Key Conclusions
...
### Recommendations
...
---
## Current Understanding (Final)
### What We Established
- [confirmed points]
### What Was Clarified/Corrected
- ~~[original assumption]~~ → [corrected understanding]
### Key Insights
- [insights gained]
---
## Session Statistics
- **Total Rounds**: 3
- **Duration**: 1 hour
- **Sources Used**: codebase exploration, analysis
- **Artifacts Generated**: discussion.md, explorations.json, conclusions.json
```
## Iteration Flow
```
First Call (TOPIC="topic"):
├─ No session exists → New mode
├─ Identify analysis dimensions
├─ Scope with user
├─ Create discussion.md with initial understanding
├─ Launch explorations
└─ Enter discussion loop
Continue Call (TOPIC="topic"):
├─ Session exists → Continue mode
├─ Load discussion.md
├─ Resume from last round
└─ Continue discussion loop
Discussion Loop:
├─ Present current findings
├─ Gather user feedback
├─ Process response:
│ ├─ Agree → Deepen analysis
│ ├─ Adjust → Change direction
│ ├─ Question → Answer then continue
│ └─ Complete → Exit loop
├─ Update discussion.md
└─ Repeat until complete or max rounds
Completion:
├─ Generate conclusions.json
├─ Update discussion.md with final synthesis
└─ Offer follow-up options
```
## Consolidation Rules
When updating "Current Understanding":
1. **Promote confirmed insights**: Move validated findings to "What We Established"
2. **Track corrections**: Keep important wrong→right transformations
3. **Focus on current state**: What do we know NOW
4. **Avoid timeline repetition**: Don't copy discussion details
5. **Preserve key learnings**: Keep insights valuable for future reference
**Bad (cluttered)**:
```markdown
## Current Understanding
In round 1 we discussed X, then in round 2 user said Y, and we explored Z...
```
**Good (consolidated)**:
```markdown
## Current Understanding
### What We Established
- The authentication flow uses JWT with refresh tokens
- Rate limiting is implemented at API gateway level
### What Was Clarified
- ~~Assumed Redis for sessions~~ → Actually uses database-backed sessions
### Key Insights
- Current architecture supports horizontal scaling
- Security audit recommended before production
```
## Error Handling
| Situation | Action |
|-----------|--------|
| Exploration fails | Continue with available context, note limitation |
| User timeout in discussion | Save state, show resume instructions |
| Max rounds reached | Force synthesis, offer continuation option |
| No relevant findings | Broaden search, ask user for clarification |
| Session folder conflict | Append timestamp suffix |
---
**Now execute the analyze-with-file workflow for topic**: $TOPIC

View File

@@ -0,0 +1,455 @@
---
description: Convert brainstorm session output to parallel-dev-cycle input with idea selection and context enrichment
argument-hint: SESSION="<brainstorm-session-id>" [--idea=<index>] [--auto]
---
# Brainstorm to Cycle Adapter
## Overview
Bridge workflow that converts **brainstorm-with-file** output to **parallel-dev-cycle** input. Reads synthesis.json, allows user to select an idea, and formats it as an enriched TASK description.
**Core workflow**: Load Session → Select Idea → Format Task → Launch Cycle
## Inputs
| Argument | Required | Description |
|----------|----------|-------------|
| SESSION | Yes | Brainstorm session ID (e.g., `BS-rate-limiting-2025-01-28`) |
| --idea | No | Pre-select idea by index (0-based, from top_ideas) |
| --auto | No | Auto-select top-scored idea without confirmation |
## Output
Launches `/parallel-dev-cycle` with enriched TASK containing:
- Primary recommendation or selected idea
- Key strengths and challenges
- Suggested implementation steps
- Alternative approaches for reference
## Execution Process
```
Phase 1: Session Loading
├─ Validate session folder exists
├─ Read synthesis.json
├─ Parse top_ideas and recommendations
└─ Validate data structure
Phase 2: Idea Selection
├─ --auto mode → Select highest scored idea
├─ --idea=N → Select specified index
└─ Interactive → Present options, await selection
Phase 3: Task Formatting
├─ Build enriched task description
├─ Include context from brainstorm
└─ Generate parallel-dev-cycle command
Phase 4: Cycle Launch
├─ Confirm with user (unless --auto)
└─ Execute parallel-dev-cycle
```
## Implementation
### Phase 1: Session Loading
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// Parse arguments
const args = "$ARGUMENTS"
const sessionId = "$SESSION"
const ideaIndexMatch = args.match(/--idea=(\d+)/)
const preSelectedIdea = ideaIndexMatch ? parseInt(ideaIndexMatch[1]) : null
const isAutoMode = args.includes('--auto')
// Validate session
const sessionFolder = `.workflow/.brainstorm/${sessionId}`
const synthesisPath = `${sessionFolder}/synthesis.json`
const brainstormPath = `${sessionFolder}/brainstorm.md`
function fileExists(p) {
try { return bash(`test -f "${p}" && echo "yes"`).includes('yes') } catch { return false }
}
if (!fileExists(synthesisPath)) {
console.error(`
## Error: Session Not Found
Session ID: ${sessionId}
Expected path: ${synthesisPath}
**Available sessions**:
`)
bash(`ls -1 .workflow/.brainstorm/ 2>/dev/null | head -10`)
return { status: 'error', message: 'Session not found' }
}
// Load synthesis
const synthesis = JSON.parse(Read(synthesisPath))
// Validate structure
if (!synthesis.top_ideas || synthesis.top_ideas.length === 0) {
console.error(`
## Error: No Ideas Found
The brainstorm session has no top_ideas.
Please complete the brainstorm workflow first.
`)
return { status: 'error', message: 'No ideas in synthesis' }
}
console.log(`
## Brainstorm Session Loaded
**Session**: ${sessionId}
**Topic**: ${synthesis.topic}
**Completed**: ${synthesis.completed}
**Ideas Found**: ${synthesis.top_ideas.length}
`)
```
---
### Phase 2: Idea Selection
```javascript
let selectedIdea = null
let selectionSource = ''
// Auto mode: select highest scored
if (isAutoMode) {
selectedIdea = synthesis.top_ideas.reduce((best, idea) =>
idea.score > best.score ? idea : best
)
selectionSource = 'auto (highest score)'
console.log(`
**Auto-selected**: ${selectedIdea.title} (Score: ${selectedIdea.score}/10)
`)
}
// Pre-selected by index
else if (preSelectedIdea !== null) {
if (preSelectedIdea >= synthesis.top_ideas.length) {
console.error(`
## Error: Invalid Idea Index
Requested: --idea=${preSelectedIdea}
Available: 0 to ${synthesis.top_ideas.length - 1}
`)
return { status: 'error', message: 'Invalid idea index' }
}
selectedIdea = synthesis.top_ideas[preSelectedIdea]
selectionSource = `index ${preSelectedIdea}`
console.log(`
**Pre-selected**: ${selectedIdea.title} (Index: ${preSelectedIdea})
`)
}
// Interactive selection
else {
// Display options
console.log(`
## Select Idea for Development
| # | Title | Score | Feasibility |
|---|-------|-------|-------------|
${synthesis.top_ideas.map((idea, i) =>
`| ${i} | ${idea.title.substring(0, 40)} | ${idea.score}/10 | ${idea.feasibility || 'N/A'} |`
).join('\n')}
**Primary Recommendation**: ${synthesis.recommendations?.primary?.substring(0, 60) || 'N/A'}
`)
// Build options for AskUser
const ideaOptions = synthesis.top_ideas.slice(0, 4).map((idea, i) => ({
label: `#${i}: ${idea.title.substring(0, 30)}`,
description: `Score: ${idea.score}/10 - ${idea.description?.substring(0, 50) || ''}`
}))
// Add primary recommendation option if different
if (synthesis.recommendations?.primary) {
ideaOptions.unshift({
label: "Primary Recommendation",
description: synthesis.recommendations.primary.substring(0, 60)
})
}
const selection = AskUser({
questions: [{
question: "Which idea should be developed?",
header: "Idea",
multiSelect: false,
options: ideaOptions
}]
})
// Parse selection
if (selection.idea === "Primary Recommendation") {
// Use primary recommendation as task
selectedIdea = {
title: "Primary Recommendation",
description: synthesis.recommendations.primary,
key_strengths: synthesis.key_insights || [],
main_challenges: [],
next_steps: synthesis.follow_up?.filter(f => f.type === 'implementation').map(f => f.summary) || []
}
selectionSource = 'primary recommendation'
} else {
const match = selection.idea.match(/^#(\d+):/)
const idx = match ? parseInt(match[1]) : 0
selectedIdea = synthesis.top_ideas[idx]
selectionSource = `user selected #${idx}`
}
}
console.log(`
### Selected Idea
**Title**: ${selectedIdea.title}
**Source**: ${selectionSource}
**Description**: ${selectedIdea.description?.substring(0, 200) || 'N/A'}
`)
```
---
### Phase 3: Task Formatting
```javascript
// Build enriched task description
function formatTask(idea, synthesis) {
const sections = []
// Main objective
sections.push(`# Main Objective\n\n${idea.title}`)
// Description
if (idea.description) {
sections.push(`# Description\n\n${idea.description}`)
}
// Key strengths
if (idea.key_strengths?.length > 0) {
sections.push(`# Key Strengths\n\n${idea.key_strengths.map(s => `- ${s}`).join('\n')}`)
}
// Main challenges (important for RA agent)
if (idea.main_challenges?.length > 0) {
sections.push(`# Main Challenges to Address\n\n${idea.main_challenges.map(c => `- ${c}`).join('\n')}`)
}
// Recommended steps
if (idea.next_steps?.length > 0) {
sections.push(`# Recommended Implementation Steps\n\n${idea.next_steps.map((s, i) => `${i + 1}. ${s}`).join('\n')}`)
}
// Alternative approaches (for RA consideration)
if (synthesis.recommendations?.alternatives?.length > 0) {
sections.push(`# Alternative Approaches (for reference)\n\n${synthesis.recommendations.alternatives.map(a => `- ${a}`).join('\n')}`)
}
// Key insights from brainstorm
if (synthesis.key_insights?.length > 0) {
const relevantInsights = synthesis.key_insights.slice(0, 3)
sections.push(`# Key Insights from Brainstorm\n\n${relevantInsights.map(i => `- ${i}`).join('\n')}`)
}
// Source reference
sections.push(`# Source\n\nBrainstorm Session: ${synthesis.session_id}\nTopic: ${synthesis.topic}`)
return sections.join('\n\n')
}
const enrichedTask = formatTask(selectedIdea, synthesis)
// Display formatted task
console.log(`
## Formatted Task for parallel-dev-cycle
\`\`\`markdown
${enrichedTask}
\`\`\`
`)
// Save task to session folder for reference
Write(`${sessionFolder}/cycle-task.md`, `# Generated Task\n\n**Generated**: ${getUtc8ISOString()}\n**Idea**: ${selectedIdea.title}\n**Selection**: ${selectionSource}\n\n---\n\n${enrichedTask}`)
```
---
### Phase 4: Cycle Launch
```javascript
// Confirm launch (unless auto mode)
let shouldLaunch = isAutoMode
if (!isAutoMode) {
const confirmation = AskUser({
questions: [{
question: "Launch parallel-dev-cycle with this task?",
header: "Launch",
multiSelect: false,
options: [
{ label: "Yes, launch cycle (Recommended)", description: "Start parallel-dev-cycle with enriched task" },
{ label: "No, just save task", description: "Save formatted task for manual use" }
]
}]
})
shouldLaunch = confirmation.launch.includes("Yes")
}
if (shouldLaunch) {
console.log(`
## Launching parallel-dev-cycle
**Task**: ${selectedIdea.title}
**Source Session**: ${sessionId}
`)
// Escape task for command line
const escapedTask = enrichedTask
.replace(/\\/g, '\\\\')
.replace(/"/g, '\\"')
.replace(/\$/g, '\\$')
.replace(/`/g, '\\`')
// Launch parallel-dev-cycle
// Note: In actual execution, this would invoke the skill
console.log(`
### Cycle Command
\`\`\`bash
/parallel-dev-cycle TASK="${escapedTask.substring(0, 100)}..."
\`\`\`
**Full task saved to**: ${sessionFolder}/cycle-task.md
`)
// Return success with cycle trigger
return {
status: 'success',
action: 'launch_cycle',
session_id: sessionId,
idea: selectedIdea.title,
task_file: `${sessionFolder}/cycle-task.md`,
cycle_command: `/parallel-dev-cycle TASK="${enrichedTask}"`
}
} else {
console.log(`
## Task Saved (Not Launched)
**Task file**: ${sessionFolder}/cycle-task.md
To launch manually:
\`\`\`bash
/parallel-dev-cycle TASK="$(cat ${sessionFolder}/cycle-task.md)"
\`\`\`
`)
return {
status: 'success',
action: 'saved_only',
session_id: sessionId,
task_file: `${sessionFolder}/cycle-task.md`
}
}
```
---
## Session Files
After execution:
```
.workflow/.brainstorm/{session-id}/
├── brainstorm.md # Original brainstorm
├── synthesis.json # Synthesis data (input)
├── perspectives.json # Perspectives data
├── ideas/ # Idea deep-dives
└── cycle-task.md # ⭐ Generated task (output)
```
## Task Format
The generated task includes:
| Section | Purpose | Used By |
|---------|---------|---------|
| Main Objective | Clear goal statement | RA: Primary requirement |
| Description | Detailed explanation | RA: Requirement context |
| Key Strengths | Why this approach | RA: Design decisions |
| Main Challenges | Known issues to address | RA: Edge cases, risks |
| Implementation Steps | Suggested approach | EP: Planning guidance |
| Alternatives | Other valid approaches | RA: Fallback options |
| Key Insights | Learnings from brainstorm | RA: Domain context |
## Error Handling
| Situation | Action |
|-----------|--------|
| Session not found | List available sessions, abort |
| synthesis.json missing | Suggest completing brainstorm first |
| No top_ideas | Report error, abort |
| Invalid --idea index | Show valid range, abort |
| Task too long | Truncate with reference to file |
## Examples
### Auto Mode (Quick Launch)
```bash
/brainstorm-to-cycle SESSION="BS-rate-limiting-2025-01-28" --auto
# → Selects highest-scored idea
# → Launches parallel-dev-cycle immediately
```
### Pre-Selected Idea
```bash
/brainstorm-to-cycle SESSION="BS-auth-system-2025-01-28" --idea=2
# → Selects top_ideas[2]
# → Confirms before launch
```
### Interactive Selection
```bash
/brainstorm-to-cycle SESSION="BS-caching-2025-01-28"
# → Displays all ideas with scores
# → User selects from options
# → Confirms and launches
```
## Integration Flow
```
brainstorm-with-file
synthesis.json
brainstorm-to-cycle ◄─── This command
enriched TASK
parallel-dev-cycle
RA → EP → CD → VAS
```
---
**Now execute brainstorm-to-cycle** with session: $SESSION

View File

@@ -0,0 +1,933 @@
---
description: Interactive brainstorming with multi-perspective analysis, idea expansion, and documented thought evolution
argument-hint: TOPIC="<idea or topic to brainstorm>"
---
# Codex Brainstorm-With-File Prompt
## Overview
Interactive brainstorming workflow with **documented thought evolution**. Expands initial ideas through questioning, multi-perspective analysis, and iterative refinement.
**Core workflow**: Seed Idea → Expand → Multi-Perspective Explore → Synthesize → Refine → Crystallize
**Key features**:
- **brainstorm.md**: Complete thought evolution timeline
- **Multi-perspective analysis**: Creative, Pragmatic, Systematic viewpoints
- **Idea expansion**: Progressive questioning and exploration
- **Diverge-Converge cycles**: Generate options then focus on best paths
- **Synthesis**: Merge multiple perspectives into coherent solutions
## Target Topic
**$TOPIC**
## Execution Process
```
Session Detection:
├─ Check if brainstorm session exists for topic
├─ EXISTS + brainstorm.md exists → Continue mode
└─ NOT_FOUND → New session mode
Phase 1: Seed Understanding
├─ Parse initial idea/topic
├─ Identify brainstorm dimensions (technical, UX, business, etc.)
├─ Initial scoping with user
├─ Expand seed into exploration vectors
└─ Document in brainstorm.md
Phase 2: Divergent Exploration (Multi-Perspective)
├─ Creative perspective: Innovative, unconventional ideas
├─ Pragmatic perspective: Implementation-focused approaches
├─ Systematic perspective: Architectural, structured solutions
└─ Aggregate diverse viewpoints
Phase 3: Interactive Refinement (Multi-Round)
├─ Present multi-perspective findings
├─ User selects promising directions
├─ Deep dive on selected paths
├─ Challenge assumptions (devil's advocate)
├─ Update brainstorm.md with evolution
└─ Repeat diverge-converge cycles
Phase 4: Convergence & Crystallization
├─ Synthesize best ideas
├─ Resolve conflicts between perspectives
├─ Formulate actionable conclusions
├─ Generate next steps or implementation plan
└─ Final brainstorm.md update
Output:
├─ .workflow/.brainstorm/{slug}-{date}/brainstorm.md (thought evolution)
├─ .workflow/.brainstorm/{slug}-{date}/perspectives.json (analysis findings)
├─ .workflow/.brainstorm/{slug}-{date}/synthesis.json (final ideas)
└─ .workflow/.brainstorm/{slug}-{date}/ideas/ (individual idea deep-dives)
```
## Implementation Details
### Session Setup & Mode Detection
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const topicSlug = "$TOPIC".toLowerCase().replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-').substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `BS-${topicSlug}-${dateStr}`
const sessionFolder = `.workflow/.brainstorm/${sessionId}`
const brainstormPath = `${sessionFolder}/brainstorm.md`
const perspectivesPath = `${sessionFolder}/perspectives.json`
const synthesisPath = `${sessionFolder}/synthesis.json`
const ideasFolder = `${sessionFolder}/ideas`
// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const hasBrainstorm = sessionExists && fs.existsSync(brainstormPath)
const mode = hasBrainstorm ? 'continue' : 'new'
if (!sessionExists) {
bash(`mkdir -p ${sessionFolder}/ideas`)
}
```
---
### Phase 1: Seed Understanding
#### Step 1.1: Parse Seed & Identify Dimensions
```javascript
// Brainstorm dimensions for multi-perspective analysis
const BRAINSTORM_DIMENSIONS = {
technical: ['技术', 'technical', 'implementation', 'code', '实现', 'architecture'],
ux: ['用户', 'user', 'experience', 'UX', 'UI', '体验', 'interaction'],
business: ['业务', 'business', 'value', 'ROI', '价值', 'market'],
innovation: ['创新', 'innovation', 'novel', 'creative', '新颖'],
feasibility: ['可行', 'feasible', 'practical', 'realistic', '实际'],
scalability: ['扩展', 'scale', 'growth', 'performance', '性能'],
security: ['安全', 'security', 'risk', 'protection', '风险']
}
function identifyDimensions(topic) {
const text = topic.toLowerCase()
const matched = []
for (const [dimension, keywords] of Object.entries(BRAINSTORM_DIMENSIONS)) {
if (keywords.some(k => text.includes(k))) {
matched.push(dimension)
}
}
return matched.length > 0 ? matched : ['technical', 'innovation', 'feasibility']
}
const dimensions = identifyDimensions("$TOPIC")
```
#### Step 1.2: Initial Scoping (New Session Only)
Ask user to scope the brainstorm:
- Focus areas: Technical Solutions / User Experience / Innovation Breakthroughs / Feasibility Assessment
- Brainstorm depth: Quick Divergence / Balanced Exploration / Deep Dive
#### Step 1.3: Expand Seed into Exploration Vectors
Generate exploration vectors from seed idea:
1. Core question: What is the fundamental problem/opportunity?
2. User perspective: Who benefits and how?
3. Technical angle: What enables this technically?
4. Alternative approaches: What other ways could this be solved?
5. Challenges: What could go wrong or block success?
6. Innovation angle: What would make this 10x better?
7. Integration: How does this fit with existing systems/processes?
#### Step 1.4: Create/Update brainstorm.md
For new session:
```markdown
# Brainstorm Session
**Session ID**: ${sessionId}
**Topic**: $TOPIC
**Started**: ${getUtc8ISOString()}
**Dimensions**: ${dimensions.join(', ')}
---
## Initial Context
**Focus Areas**: ${userFocusAreas.join(', ')}
**Depth**: ${brainstormDepth}
**Constraints**: ${constraints.join(', ') || 'None specified'}
---
## Seed Expansion
### Original Idea
> $TOPIC
### Exploration Vectors
${explorationVectors.map((v, i) => `
#### Vector ${i+1}: ${v.title}
**Question**: ${v.question}
**Angle**: ${v.angle}
**Potential**: ${v.potential}
`).join('\n')}
---
## Thought Evolution Timeline
### Round 1 - Seed Understanding (${timestamp})
#### Initial Parsing
- **Core concept**: ${coreConcept}
- **Problem space**: ${problemSpace}
- **Opportunity**: ${opportunity}
#### Key Questions to Explore
${keyQuestions.map((q, i) => `${i+1}. ${q}`).join('\n')}
---
## Current Ideas
*To be populated after exploration phases*
---
## Idea Graveyard
*Discarded ideas with reasons - kept for reference*
```
For continue session, append:
```markdown
### Round ${n} - Continuation (${timestamp})
#### Previous Context
Resuming brainstorm based on prior discussion.
#### New Focus
${newFocusFromUser}
```
---
### Phase 2: Divergent Exploration (Multi-Perspective)
#### Step 2.1: Creative Perspective Analysis
Explore from creative/innovative angle:
- Think beyond obvious solutions - what would be surprising/delightful?
- Cross-domain inspiration (what can we learn from other industries?)
- Challenge assumptions - what if the opposite were true?
- Generate 'moonshot' ideas alongside practical ones
- Consider future trends and emerging technologies
Output:
- 5+ creative ideas with brief descriptions
- Each idea rated: novelty (1-5), potential impact (1-5)
- Key assumptions challenged
- Cross-domain inspirations
- One 'crazy' idea that might just work
#### Step 2.2: Pragmatic Perspective Analysis
Evaluate from implementation reality:
- Technical feasibility of core concept
- Existing patterns/libraries that could help
- Integration with current codebase
- Implementation complexity estimates
- Potential technical blockers
- Incremental implementation approach
Output:
- 3-5 practical implementation approaches
- Each rated: effort (1-5), risk (1-5), reuse potential (1-5)
- Technical dependencies identified
- Quick wins vs long-term solutions
- Recommended starting point
#### Step 2.3: Systematic Perspective Analysis
Analyze from architectural standpoint:
- Decompose the problem into sub-problems
- Identify architectural patterns that apply
- Map dependencies and interactions
- Consider scalability implications
- Evaluate long-term maintainability
- Propose systematic solution structure
Output:
- Problem decomposition diagram (text)
- 2-3 architectural approaches with tradeoffs
- Dependency mapping
- Scalability assessment
- Recommended architecture pattern
- Risk matrix
#### Step 2.4: Aggregate Multi-Perspective Findings
```javascript
const perspectives = {
  session_id: sessionId,
  timestamp: getUtc8ISOString(),
  topic: "$TOPIC",
  creative: {
    ideas: [...],
    insights: [...],
    challenges: [...]
  },
  pragmatic: {
    approaches: [...],
    blockers: [...],
    recommendations: [...]
  },
  systematic: {
    decomposition: [...],
    patterns: [...],
    tradeoffs: [...]
  },
  synthesis: {
    convergent_themes: [],
    conflicting_views: [],
    unique_contributions: []
  }
}
Write(perspectivesPath, JSON.stringify(perspectives, null, 2))
```
#### Step 2.5: Update brainstorm.md with Perspectives
```markdown
### Round 2 - Multi-Perspective Exploration (${timestamp})
#### Creative Perspective
**Top Creative Ideas**:
${creativeIdeas.map((idea, i) => `
${i+1}. **${idea.title}** ⭐ Novelty: ${idea.novelty}/5 | Impact: ${idea.impact}/5
${idea.description}
`).join('\n')}
**Challenged Assumptions**:
${challengedAssumptions.map(a => `- ~~${a.assumption}~~ → Consider: ${a.alternative}`).join('\n')}
**Cross-Domain Inspirations**:
${inspirations.map(i => `- ${i}`).join('\n')}
---
#### Pragmatic Perspective
**Implementation Approaches**:
${pragmaticApproaches.map((a, i) => `
${i+1}. **${a.title}** | Effort: ${a.effort}/5 | Risk: ${a.risk}/5
${a.description}
- Quick win: ${a.quickWin}
- Dependencies: ${a.dependencies.join(', ')}
`).join('\n')}
**Technical Blockers**:
${blockers.map(b => `- ⚠️ ${b}`).join('\n')}
---
#### Systematic Perspective
**Problem Decomposition**:
${decomposition}
**Architectural Options**:
${architecturalOptions.map((opt, i) => `
${i+1}. **${opt.pattern}**
- Pros: ${opt.pros.join(', ')}
- Cons: ${opt.cons.join(', ')}
- Best for: ${opt.bestFor}
`).join('\n')}
---
#### Perspective Synthesis
**Convergent Themes** (all perspectives agree):
${convergentThemes.map(t => `- ✅ ${t}`).join('\n')}
**Conflicting Views** (need resolution):
${conflictingViews.map(v => `
- 🔄 ${v.topic}
- Creative: ${v.creative}
- Pragmatic: ${v.pragmatic}
- Systematic: ${v.systematic}
`).join('\n')}
**Unique Contributions**:
${uniqueContributions.map(c => `- 💡 [${c.source}] ${c.insight}`).join('\n')}
```
---
### Phase 3: Interactive Refinement (Multi-Round)
#### Step 3.1: Present & Select Directions
```javascript
const MAX_ROUNDS = 6
let roundNumber = 3  // After initial exploration
let brainstormComplete = false
while (!brainstormComplete && roundNumber <= MAX_ROUNDS) {
  // Present current state
  console.log(`
## Brainstorm Round ${roundNumber}
### Top Ideas So Far
${topIdeas.map((idea, i) => `
${i+1}. **${idea.title}** (${idea.source})
   ${idea.brief}
   - Novelty: ${'⭐'.repeat(idea.novelty)} | Feasibility: ${'✅'.repeat(idea.feasibility)}
`).join('\n')}
### Open Questions
${openQuestions.map((q, i) => `${i+1}. ${q}`).join('\n')}
`)
  // Gather user direction - options:
  // - Deep dive: explore selected ideas in depth
  // - Diverge: generate more ideas
  // - Challenge: devil's advocate testing
  // - Merge: combine multiple ideas
  // - Converge: start concluding
  // Process based on direction and update brainstorm.md
  roundNumber++
}
```
#### Step 3.2: Deep Dive on Selected Ideas
For each selected idea, create dedicated idea file:
```javascript
async function deepDiveIdea(idea) {
  const ideaPath = `${ideasFolder}/${idea.slug}.md`
  // Deep dive analysis:
  // - Elaborate the core concept in detail
  // - Identify implementation requirements
  // - List potential challenges and mitigations
  // - Suggest proof-of-concept approach
  // - Define success metrics
  // - Map related/dependent features
  // Output:
  // - Detailed concept description
  // - Technical requirements list
  // - Risk/challenge matrix
  // - MVP definition
  // - Success criteria
  // - Recommendation: pursue/pivot/park
  Write(ideaPath, deepDiveContent)
}
```
#### Step 3.3: Devil's Advocate Challenge
For each idea, identify:
- 3 strongest objections
- Challenge core assumptions
- Scenarios where this fails
- Competitive/alternative solutions
- Whether this solves the right problem
- Survivability rating after challenge (1-5)
Output:
- Per-idea challenge report
- Critical weaknesses exposed
- Counter-arguments to objections (if any)
- Ideas that survive the challenge
- Modified/strengthened versions
#### Step 3.4: Merge & Synthesize Ideas
When merging selected ideas:
- Identify complementary elements
- Resolve contradictions
- Create unified concept
- Preserve key strengths from each
- Describe the merged solution
- Assess viability of merged idea
Output:
- Merged concept description
- Elements taken from each source idea
- Contradictions resolved (or noted as tradeoffs)
- New combined strengths
- Implementation considerations
#### Step 3.5: Document Each Round
Append to brainstorm.md:
```markdown
### Round ${n} - ${roundType} (${timestamp})
#### User Direction
- **Selected ideas**: ${selectedIdeas.join(', ')}
- **Action**: ${action}
- **Reasoning**: ${userReasoning || 'Not specified'}
${roundType === 'deep-dive' ? `
#### Deep Dive: ${ideaTitle}
**Elaborated Concept**:
${elaboratedConcept}
**Implementation Requirements**:
${requirements.map(r => `- ${r}`).join('\n')}
**Challenges & Mitigations**:
${challenges.map(c => `- ⚠️ ${c.challenge} → ✅ ${c.mitigation}`).join('\n')}
**MVP Definition**:
${mvpDefinition}
**Recommendation**: ${recommendation}
` : ''}
${roundType === 'challenge' ? `
#### Devil's Advocate Results
**Challenges Raised**:
${challenges.map(c => `
- 🔴 **${c.idea}**: ${c.objection}
- Counter: ${c.counter || 'No strong counter-argument'}
- Survivability: ${c.survivability}/5
`).join('\n')}
**Ideas That Survived**:
${survivedIdeas.map(i => `- ✅ ${i}`).join('\n')}
**Eliminated/Parked**:
${eliminatedIdeas.map(i => `- ❌ ${i.title}: ${i.reason}`).join('\n')}
` : ''}
${roundType === 'merge' ? `
#### Merged Idea: ${mergedIdea.title}
**Source Ideas Combined**:
${sourceIdeas.map(i => `- ${i}`).join('\n')}
**Unified Concept**:
${mergedIdea.description}
**Key Elements Preserved**:
${preservedElements.map(e => `- ✅ ${e}`).join('\n')}
**Tradeoffs Accepted**:
${tradeoffs.map(t => `- ⚖️ ${t}`).join('\n')}
` : ''}
#### Updated Idea Ranking
${updatedRanking.map((idea, i) => `
${i+1}. **${idea.title}** ${idea.status}
- Score: ${idea.score}/10
- Source: ${idea.source}
`).join('\n')}
```
---
### Phase 4: Convergence & Crystallization
#### Step 4.1: Final Synthesis
```javascript
const synthesis = {
  session_id: sessionId,
  topic: "$TOPIC",
  completed: getUtc8ISOString(),
  total_rounds: roundNumber,
  // Top ideas with full details
  top_ideas: ideas.filter(i => i.status === 'active').sort((a, b) => b.score - a.score).slice(0, 5).map(idea => ({
    title: idea.title,
    description: idea.description,
    source_perspective: idea.source,
    score: idea.score,
    novelty: idea.novelty,
    feasibility: idea.feasibility,
    key_strengths: idea.strengths,
    main_challenges: idea.challenges,
    next_steps: idea.nextSteps
  })),
  // Parked ideas for future reference
  parked_ideas: ideas.filter(i => i.status === 'parked').map(idea => ({
    title: idea.title,
    reason_parked: idea.parkReason,
    potential_future_trigger: idea.futureTrigger
  })),
  // Key insights from the process
  key_insights: keyInsights,
  // Recommendations
  recommendations: {
    primary: primaryRecommendation,
    alternatives: alternativeApproaches,
    not_recommended: notRecommended
  },
  // Follow-up suggestions
  follow_up: [
    { type: 'implementation', summary: '...' },
    { type: 'research', summary: '...' },
    { type: 'validation', summary: '...' }
  ]
}
Write(synthesisPath, JSON.stringify(synthesis, null, 2))
```
#### Step 4.2: Final brainstorm.md Update
```markdown
---
## Synthesis & Conclusions (${timestamp})
### Executive Summary
${executiveSummary}
### Top Ideas (Final Ranking)
${topIdeas.map((idea, i) => `
#### ${i+1}. ${idea.title} ⭐ Score: ${idea.score}/10
**Description**: ${idea.description}
**Why This Idea**:
${idea.strengths.map(s => `- ✅ ${s}`).join('\n')}
**Main Challenges**:
${idea.challenges.map(c => `- ⚠️ ${c}`).join('\n')}
**Recommended Next Steps**:
${idea.nextSteps.map((s, j) => `${j+1}. ${s}`).join('\n')}
---
`).join('\n')}
### Primary Recommendation
> ${primaryRecommendation}
**Rationale**: ${primaryRationale}
**Quick Start Path**:
1. ${step1}
2. ${step2}
3. ${step3}
### Alternative Approaches
${alternatives.map((alt, i) => `
${i+1}. **${alt.title}**
- When to consider: ${alt.whenToConsider}
- Tradeoff: ${alt.tradeoff}
`).join('\n')}
### Ideas Parked for Future
${parkedIdeas.map(idea => `
- **${idea.title}** (Parked: ${idea.reason})
- Revisit when: ${idea.futureTrigger}
`).join('\n')}
---
## Key Insights
### Process Discoveries
${processDiscoveries.map(d => `- 💡 ${d}`).join('\n')}
### Assumptions Challenged
${challengedAssumptions.map(a => `- ~~${a.original}~~ → ${a.updated}`).join('\n')}
### Unexpected Connections
${unexpectedConnections.map(c => `- 🔗 ${c}`).join('\n')}
---
## Current Understanding (Final)
### Problem Reframed
${reframedProblem}
### Solution Space Mapped
${solutionSpaceMap}
### Decision Framework
When to choose each approach:
${decisionFramework}
---
## Session Statistics
- **Total Rounds**: ${totalRounds}
- **Ideas Generated**: ${totalIdeas}
- **Ideas Survived**: ${survivedIdeas}
- **Perspectives Used**: Creative, Pragmatic, Systematic
- **Duration**: ${duration}
- **Artifacts**: brainstorm.md, perspectives.json, synthesis.json, ${ideaFiles.length} idea deep-dives
```
#### Step 4.3: Post-Completion Options
Offer follow-up options:
- Create Implementation Plan: Convert best idea to implementation plan
- Create Issue: Turn ideas into trackable issues
- Deep Analysis: Run detailed technical analysis on an idea
- Export Report: Generate shareable report
- Complete: No further action needed
---
## Session Folder Structure
```
.workflow/.brainstorm/BS-{slug}-{date}/
├── brainstorm.md # Complete thought evolution
├── perspectives.json # Multi-perspective analysis findings
├── synthesis.json # Final synthesis
└── ideas/ # Individual idea deep-dives
├── idea-1.md
├── idea-2.md
└── merged-idea-1.md
```
## Brainstorm Document Template
```markdown
# Brainstorm Session
**Session ID**: BS-xxx-2025-01-28
**Topic**: [idea or topic]
**Started**: 2025-01-28T10:00:00+08:00
**Dimensions**: [technical, ux, innovation, ...]
---
## Initial Context
**Focus Areas**: [selected focus areas]
**Depth**: [quick|balanced|deep]
**Constraints**: [if any]
---
## Seed Expansion
### Original Idea
> [the initial idea]
### Exploration Vectors
[generated questions and directions]
---
## Thought Evolution Timeline
### Round 1 - Seed Understanding
...
### Round 2 - Multi-Perspective Exploration
#### Creative Perspective
...
#### Pragmatic Perspective
...
#### Systematic Perspective
...
#### Perspective Synthesis
...
### Round 3 - Deep Dive
...
### Round 4 - Challenge
...
---
## Synthesis & Conclusions
### Executive Summary
...
### Top Ideas (Final Ranking)
...
### Primary Recommendation
...
---
## Key Insights
...
---
## Current Understanding (Final)
...
---
## Session Statistics
...
```
## Multi-Perspective Analysis Strategy
### Perspective Roles
| Perspective | Focus | Best For |
|-------------|-------|----------|
| Creative | Innovation, cross-domain | Generating novel ideas |
| Pragmatic | Implementation, feasibility | Reality-checking ideas |
| Systematic | Architecture, structure | Organizing solutions |
### Analysis Patterns
1. **Parallel Divergence**: All perspectives explore simultaneously from different angles
2. **Sequential Deep-Dive**: One perspective expands, others critique/refine
3. **Debate Mode**: Perspectives argue for/against specific approaches
4. **Synthesis Mode**: Combine insights from all perspectives
### When to Use Each Pattern
- **New topic**: Parallel Divergence → get diverse initial ideas
- **Promising idea**: Sequential Deep-Dive → thorough exploration
- **Controversial approach**: Debate Mode → uncover hidden issues
- **Ready to decide**: Synthesis Mode → create actionable conclusion
## Consolidation Rules
When updating "Current Understanding":
1. **Promote confirmed insights**: Move validated findings to "What We Established"
2. **Track corrections**: Keep important wrong→right transformations
3. **Focus on current state**: What do we know NOW
4. **Avoid timeline repetition**: Don't copy discussion details
5. **Preserve key learnings**: Keep insights valuable for future reference
**Bad (cluttered)**:
```markdown
## Current Understanding
In round 1 we discussed X, then in round 2 user said Y, and we explored Z...
```
**Good (consolidated)**:
```markdown
## Current Understanding
### Problem Reframed
The core challenge is not X but actually Y because...
### Solution Space Mapped
Three viable approaches emerged: A (creative), B (pragmatic), C (hybrid)
### Decision Framework
- Choose A when: innovation is priority
- Choose B when: time-to-market matters
- Choose C when: balanced approach needed
```
## Error Handling
| Situation | Action |
|-----------|--------|
| Analysis timeout | Retry with focused scope, or continue without that perspective |
| No good ideas | Reframe the problem, adjust constraints, try different angles |
| User disengaged | Summarize progress, offer break point with resume option |
| Perspectives conflict | Present as tradeoff, let user decide direction |
| Max rounds reached | Force synthesis, highlight unresolved questions |
| All ideas fail challenge | Return to divergent phase with new constraints |
| Session folder conflict | Append timestamp suffix |
## Iteration Flow
```
First Call (TOPIC="topic"):
├─ No session exists → New mode
├─ Identify brainstorm dimensions
├─ Scope with user
├─ Create brainstorm.md with initial understanding
├─ Expand seed into exploration vectors
├─ Launch multi-perspective exploration
└─ Enter refinement loop
Continue Call (TOPIC="topic"):
├─ Session exists → Continue mode
├─ Load brainstorm.md
├─ Resume from last round
└─ Continue refinement loop
Refinement Loop:
├─ Present current findings and top ideas
├─ Gather user feedback
├─ Process response:
│ ├─ Deep dive → Explore selected ideas in depth
│ ├─ Diverge → Generate more ideas
│ ├─ Challenge → Devil's advocate testing
│ ├─ Merge → Combine multiple ideas
│ └─ Converge → Exit loop for synthesis
├─ Update brainstorm.md
└─ Repeat until complete or max rounds
Completion:
├─ Generate synthesis.json
├─ Update brainstorm.md with final synthesis
└─ Offer follow-up options
```
---
**Now execute the brainstorm-with-file workflow for topic**: $TOPIC

View File

@@ -506,7 +506,7 @@ Each line is a JSON object:
## Iteration Flow
```
First Call (/prompts:debug-with-file BUG="error"):
First Call (BUG="error"):
├─ No session exists → Explore mode
├─ Extract error keywords, search codebase
├─ Document initial understanding in understanding.md
@@ -514,7 +514,7 @@ First Call (/prompts:debug-with-file BUG="error"):
├─ Add logging instrumentation
└─ Await user reproduction
After Reproduction (/prompts:debug-with-file BUG="error"):
After Reproduction (BUG="error"):
├─ Session exists + debug.log has content → Analyze mode
├─ Parse log, evaluate hypotheses
├─ Update understanding.md with:
@@ -578,32 +578,27 @@ Also we checked A and found B, and then we checked C...
Why is config value None during update?
```
## Comparison with /prompts:debug
## Key Features
| Feature | /prompts:debug | /prompts:debug-with-file |
|---------|-----------------|---------------------------|
| NDJSON logging | ✅ | ✅ |
| Hypothesis generation | Manual | Analysis-assisted |
| Exploration documentation | ❌ | ✅ understanding.md |
| Understanding evolution | ❌ | ✅ Timeline + corrections |
| Error correction | ❌ | ✅ Strikethrough + reasoning |
| Consolidated learning | ❌ | ✅ Current understanding section |
| Hypothesis history | ❌ | ✅ hypotheses.json |
| Analysis validation | ❌ | ✅ At key decision points |
| Feature | Description |
|---------|-------------|
| NDJSON logging | Structured debug log with hypothesis tracking |
| Hypothesis generation | Analysis-assisted hypothesis creation |
| Exploration documentation | understanding.md with timeline |
| Understanding evolution | Timeline + corrections tracking |
| Error correction | Strikethrough + reasoning for wrong assumptions |
| Consolidated learning | Current understanding section |
| Hypothesis history | hypotheses.json with verdicts |
| Analysis validation | At key decision points |
## Usage Recommendations
## When to Use
Use `/prompts:debug-with-file` when:
Best suited for:
- Complex bugs requiring multiple investigation rounds
- Learning from debugging process is valuable
- Team needs to understand debugging rationale
- Bug might recur, documentation helps prevention
Use `/prompts:debug` when:
- Simple, quick bugs
- One-off issues
- Documentation overhead not needed
---
**Now execute the debug-with-file workflow for bug**: $BUG

View File

@@ -1,318 +0,0 @@
---
description: Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved
argument-hint: BUG="<bug description or error message>"
---
# Workflow Debug Command
## Overview
Evidence-based interactive debugging command. Systematically identifies root causes through hypothesis-driven logging and iterative verification.
**Core workflow**: Explore → Add Logging → Reproduce → Analyze Log → Fix → Verify
## Bug Description
**Target bug**: $BUG
## Execution Process
```
Session Detection:
├─ Check if debug session exists for this bug
├─ EXISTS + debug.log has content → Analyze mode
└─ NOT_FOUND or empty log → Explore mode
Explore Mode:
├─ Locate error source in codebase
├─ Generate testable hypotheses (dynamic count)
├─ Add NDJSON logging instrumentation
└─ Output: Hypothesis list + await user reproduction
Analyze Mode:
├─ Parse debug.log, validate each hypothesis
└─ Decision:
   ├─ Confirmed → Fix root cause
   ├─ Inconclusive → Add more logging, iterate
   └─ All rejected → Generate new hypotheses
Fix & Cleanup:
├─ Apply fix based on confirmed hypothesis
├─ User verifies
├─ Remove debug instrumentation
└─ If not fixed → Return to Analyze mode
```
## Implementation
### Session Setup & Mode Detection
```javascript
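// UTC+8 wall-clock time via fixed offset; note the result keeps toISOString's "Z" suffix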
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const bugSlug = "$BUG".toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `DBG-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.debug/${sessionId}`
const debugLogPath = `${sessionFolder}/debug.log`
// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const logHasContent = sessionExists && fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0
const mode = logHasContent ? 'analyze' : 'explore'
if (!sessionExists) {
  bash(`mkdir -p ${sessionFolder}`)
}
```
---
### Explore Mode
**Step 1.1: Locate Error Source**
```javascript
// Extract keywords from bug description
const keywords = extractErrorKeywords("$BUG")
// e.g., ['Stack Length', '未找到', 'registered 0']
// Search codebase for error locations
for (const keyword of keywords) {
  Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
}
// Identify affected files and functions
const affectedLocations = [...] // from search results
```
**Step 1.2: Generate Hypotheses (Dynamic)**
```javascript
// Hypothesis categories based on error pattern
const HYPOTHESIS_PATTERNS = {
"not found|missing|undefined|未找到": "data_mismatch",
"0|empty|zero|registered 0": "logic_error",
"timeout|connection|sync": "integration_issue",
"type|format|parse": "type_mismatch"
}
// Generate hypotheses based on actual issue (NOT fixed count)
function generateHypotheses(bugDescription, affectedLocations) {
const hypotheses = []
// Analyze bug and create targeted hypotheses
// Each hypothesis has:
// - id: H1, H2, ... (dynamic count)
// - description: What might be wrong
// - testable_condition: What to log
// - logging_point: Where to add instrumentation
return hypotheses // Could be 1, 3, 5, or more
}
const hypotheses = generateHypotheses("$BUG", affectedLocations)
```
**Step 1.3: Add NDJSON Instrumentation**
For each hypothesis, add logging at the relevant location:
**Python template**:
```python
# region debug [H{n}]
try:
    import json, time
    _dbg = {
        "sid": "{sessionId}",
        "hid": "H{n}",
        "loc": "{file}:{line}",
        "msg": "{testable_condition}",
        "data": {
            # Capture relevant values here
        },
        "ts": int(time.time() * 1000)
    }
    with open(r"{debugLogPath}", "a", encoding="utf-8") as _f:
        _f.write(json.dumps(_dbg, ensure_ascii=False) + "\n")
except: pass
# endregion
```
**JavaScript/TypeScript template**:
```javascript
// region debug [H{n}]
try {
  require('fs').appendFileSync("{debugLogPath}", JSON.stringify({
    sid: "{sessionId}",
    hid: "H{n}",
    loc: "{file}:{line}",
    msg: "{testable_condition}",
    data: { /* Capture relevant values */ },
    ts: Date.now()
  }) + "\n");
} catch (_) {}
// endregion
```
**Output to user**:
```
## Hypotheses Generated
Based on error "$BUG", generated {n} hypotheses:
{hypotheses.map(h => `
### ${h.id}: ${h.description}
- Logging at: ${h.logging_point}
- Testing: ${h.testable_condition}
`).join('')}
**Debug log**: ${debugLogPath}
**Next**: Run reproduction steps, then come back for analysis.
```
---
### Analyze Mode
```javascript
// Parse NDJSON log
const entries = Read(debugLogPath).split('\n')
  .filter(l => l.trim())
  .map(l => JSON.parse(l))
// Group by hypothesis
const byHypothesis = groupBy(entries, 'hid')
// Validate each hypothesis
for (const [hid, logs] of Object.entries(byHypothesis)) {
  const hypothesis = hypotheses.find(h => h.id === hid)
  const latestLog = logs[logs.length - 1]
  // Check if evidence confirms or rejects hypothesis
  const verdict = evaluateEvidence(hypothesis, latestLog.data)
  // Returns: 'confirmed' | 'rejected' | 'inconclusive'
}
```
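`evaluateEvidence` is left abstract here; one possible sketch, assuming each hypothesis carries an `expect(data)` predicate describing the evidence that would confirm it (both names are hypothetical):
```javascript
// Hypothetical: hypothesis.expect(data) -> true when the evidence confirms it
function evaluateEvidence(hypothesis, data) {
  if (data == null) return 'inconclusive'   // instrumented code path never ran
  try {
    return hypothesis.expect(data) ? 'confirmed' : 'rejected'
  } catch {
    return 'inconclusive'                   // evidence didn't match the expected shape
  }
}
```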
**Output**:
```
## Evidence Analysis
Analyzed ${entries.length} log entries.
${results.map(r => `
### ${r.id}: ${r.description}
- **Status**: ${r.verdict}
- **Evidence**: ${JSON.stringify(r.evidence)}
- **Reason**: ${r.reason}
`).join('')}
${confirmedHypothesis ? `
## Root Cause Identified
**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}
Ready to fix.
` : `
## Need More Evidence
Add more logging or refine hypotheses.
`}
```
---
### Fix & Cleanup
```javascript
// Apply fix based on confirmed hypothesis
// ... Edit affected files
// After user verifies fix works:
// Remove debug instrumentation (search for region markers)
const instrumentedFiles = Grep({
  pattern: "# region debug|// region debug",
  output_mode: "files_with_matches"
})
for (const file of instrumentedFiles) {
  // Remove content between region markers
  removeDebugRegions(file)
}
console.log(`
## Debug Complete
- Root cause: ${confirmedHypothesis.description}
- Fix applied to: ${modifiedFiles.join(', ')}
- Debug instrumentation removed
`)
```
---
## Debug Log Format (NDJSON)
Each line is a JSON object:
```json
{"sid":"DBG-xxx-2025-12-18","hid":"H1","loc":"file.py:func:42","msg":"Check dict keys","data":{"keys":["a","b"],"target":"c","found":false},"ts":1734567890123}
```
| Field | Description |
|-------|-------------|
| `sid` | Session ID |
| `hid` | Hypothesis ID (H1, H2, ...) |
| `loc` | Code location |
| `msg` | What's being tested |
| `data` | Captured values |
| `ts` | Timestamp (ms) |
## Session Folder
```
.workflow/.debug/DBG-{slug}-{date}/
├── debug.log # NDJSON log (main artifact)
└── resolution.md # Summary after fix (optional)
```
## Iteration Flow
```
First Call (/prompts:debug BUG="error"):
├─ No session exists → Explore mode
├─ Extract error keywords, search codebase
├─ Generate hypotheses, add logging
└─ Await user reproduction
After Reproduction (/prompts:debug BUG="error"):
├─ Session exists + debug.log has content → Analyze mode
├─ Parse log, evaluate hypotheses
└─ Decision:
   ├─ Confirmed → Fix → User verify
   │   ├─ Fixed → Cleanup → Done
   │   └─ Not fixed → Add logging → Iterate
   ├─ Inconclusive → Add logging → Iterate
   └─ All rejected → New hypotheses → Iterate
Output:
└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
```
## Error Handling
| Situation | Action |
|-----------|--------|
| Empty debug.log | Verify reproduction triggered the code path |
| All hypotheses rejected | Generate new hypotheses with broader scope |
| Fix doesn't work | Iterate with more granular logging |
| >5 iterations | Escalate with collected evidence |
---
**Now execute the debug workflow for bug**: $BUG

View File

@@ -0,0 +1,559 @@
---
name: ccw-cli-tools
description: CLI tools execution specification (gemini/claude/codex/qwen/opencode) with unified prompt template, mode options, and auto-invoke triggers for code analysis and implementation tasks. Supports configurable CLI endpoints for analysis, write, and review modes.
version: 1.0.0
---
# CLI Tools - Unified Execution Framework
**Purpose**: Structured CLI tool usage with configuration-driven tool selection, unified prompt templates, and quality-gated execution.
**Configuration**: `~/.claude/cli-tools.json` (global; read at initialization unless already in conversation memory)
## Initialization (Required First Step)
**Before any tool selection or recommendation**:
1. Check if configuration exists in memory:
- If configuration is already in conversation memory → Use it directly
- If NOT in memory → Read the configuration file:
```bash
Read(file_path="~/.claude/cli-tools.json")
```
2. Parse the JSON to understand:
- Available tools and their `enabled` status
- Each tool's `primaryModel` and `secondaryModel`
- Tags defined for tag-based routing
- Tool types (builtin, cli-wrapper, api-endpoint)
3. Use configuration throughout the selection process
**Why**: Tools, models, and tags may change. Configuration file is the single source of truth.
**Optimization**: Reuse in-memory configuration to avoid redundant file reads.
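A minimal sketch of this memory-first rule, reusing the `Read` tool from step 1 (`configCache` and the helper name are illustrative, not part of the ccw API):
```javascript
// Hypothetical conversation-scoped cache for the parsed configuration
let configCache = null
function loadCliToolsConfig() {
  if (configCache) return configCache           // already in memory -> use directly
  const raw = Read("~/.claude/cli-tools.json")  // not in memory -> read the file
  configCache = JSON.parse(raw)
  return configCache
}
```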
## Process Flow
```
┌─ USER REQUEST
├─ STEP 1: Load Configuration
│ ├─ Check if configuration exists in conversation memory
│ └─ If NOT in memory → Read(file_path="~/.claude/cli-tools.json")
├─ STEP 2: Understand User Intent
│ ├─ Parse task type (analysis, implementation, security, etc.)
│ ├─ Extract required capabilities (tags)
│ └─ Identify scope (files, modules)
├─ STEP 3: Select Tool (based on config)
│ ├─ Explicit --tool specified?
│ │ YES → Validate in config → Use it
│ │ NO → Match tags with enabled tools → Select best match
│ │ → No match → Use first enabled tool (default)
│ └─ Get primaryModel from config
├─ STEP 4: Build Prompt
│ └─ Use 6-field template: PURPOSE, TASK, MODE, CONTEXT, EXPECTED, CONSTRAINTS
├─ STEP 5: Select Rule Template
│ ├─ Determine rule from task type
│ └─ Pass via --rule parameter
├─ STEP 6: Execute CLI Command
│ └─ ccw cli -p "<PROMPT>" --tool <tool> --mode <mode> --rule <rule>
└─ STEP 7: Handle Results
├─ On success → Deliver output to user
└─ On failure → Check secondaryModel or fallback tool
```
## Configuration Reference
### Configuration File Location
**Path**: `~/.claude/cli-tools.json` (Global configuration)
**IMPORTANT**: Check conversation memory first. Only read file if configuration is not in memory.
### Reading Configuration
**Priority**: Check conversation memory first
**Loading Options**:
- **Option 1** (Preferred): Use in-memory configuration if already loaded in conversation
- **Option 2** (Fallback): Read from file when not in memory
```bash
# Read configuration file
cat ~/.claude/cli-tools.json
```
The configuration defines all available tools with their enabled status, models, and tags.
### Configuration Structure
The JSON file contains a `tools` object where each tool has these fields:
| Field | Type | Description | Example |
|-------|------|-------------|---------|
| `enabled` | boolean | Tool availability status | `true` or `false` |
| `primaryModel` | string | Default model for execution | `"gemini-2.5-pro"` |
| `secondaryModel` | string | Fallback model on primary failure | `"gemini-2.5-flash"` |
| `tags` | array | Capability tags for routing | `["分析", "Debug"]` |
| `type` | string | Tool type | `"builtin"`, `"cli-wrapper"`, `"api-endpoint"` |
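Putting these fields together, a hypothetical fragment of `~/.claude/cli-tools.json` (values taken from the examples above; the actual file is the source of truth):
```json
{
  "tools": {
    "gemini": {
      "enabled": true,
      "primaryModel": "gemini-2.5-pro",
      "secondaryModel": "gemini-2.5-flash",
      "tags": ["分析", "Debug"],
      "type": "builtin"
    }
  }
}
```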
### Expected Tools (Reference Only)
Typical tools found in configuration (actual availability determined by reading the file):
| Tool | Typical Type | Common Use |
|------|--------------|------------|
| `gemini` | builtin | Analysis, Debug (分析, Debug tags) |
| `qwen` | builtin | General coding |
| `codex` | builtin | Code review, implementation |
| `claude` | builtin | General tasks |
| `opencode` | builtin | Open-source model fallback |
**Note**: Tool availability, models, and tags may differ. Use in-memory configuration or read `~/.claude/cli-tools.json` if not cached.
## Universal Prompt Template
**Structure**: Every CLI command follows this 6-field template
```bash
ccw cli -p "PURPOSE: [goal] + [why] + [success criteria] + [scope]
TASK: • [step 1: specific action] • [step 2: specific action] • [step 3: specific action]
MODE: [analysis|write|review]
CONTEXT: @[file patterns] | Memory: [session/tech/module context]
EXPECTED: [deliverable format] + [quality criteria] + [structure requirements]
CONSTRAINTS: [domain constraints]" --tool <tool-id> --mode <mode> --rule <template>
```
### Field Specifications
#### PURPOSE (Goal Definition)
**What**: Clear objective + motivation + success criteria + scope boundary
**Components**:
- What: Specific task goal
- Why: Business/technical motivation
- Success: Measurable success criteria
- Scope: Bounded context/files
**Example - Good**:
```
PURPOSE: Identify OWASP Top 10 vulnerabilities in auth module to pass security audit;
success = all critical/high issues documented with remediation;
scope = src/auth/** only
```
**Example - Bad**:
```
PURPOSE: Analyze code
```
#### TASK (Action Steps)
**What**: Specific, actionable steps with clear verbs and targets
**Format**: Bullet list with concrete actions
**Example - Good**:
```
TASK:
• Scan for SQL injection in query builders
• Check XSS in template rendering
• Verify CSRF token validation
```
**Example - Bad**:
```
TASK: Review code and find issues
```
#### MODE (Permission Level)
**Options**:
- **`analysis`** - Read-only, safe for auto-execution
- **`write`** - Create/Modify/Delete files, full operations
- **`review`** - Git-aware code review (codex only)
**Rules**:
- Always specify explicitly
- Default to `analysis` for read-only tasks
- Require explicit `--mode write` for file modifications
- Use `--mode review` with `--tool codex` for git-aware review
#### CONTEXT (File Scope + Memory)
**Format**: `CONTEXT: @[file patterns] | Memory: [memory context]`
**File Patterns**:
- `@**/*` - All files (default)
- `@src/**/*.ts` - Specific pattern
- `@../shared/**/*` - Parent/sibling (requires `--includeDirs`)
**Memory Context** (when building on previous work):
```
Memory: Building on auth refactoring (commit abc123), using JWT for sessions
Memory: Integration with auth module, shared error patterns from @shared/utils/errors.ts
```
#### EXPECTED (Output Specification)
**What**: Output format + quality criteria + structure requirements
**Example - Good**:
```
EXPECTED: Markdown report with:
severity levels (Critical/High/Medium/Low),
file:line references,
remediation code snippets,
priority ranking
```
**Example - Bad**:
```
EXPECTED: Report
```
#### CONSTRAINTS (Domain Boundaries)
**What**: Scope limits, special requirements, focus areas
**Example - Good**:
```
CONSTRAINTS: Focus on authentication | Ignore test files | No breaking changes
```
**Example - Bad**:
```
CONSTRAINTS: (missing or too vague)
```
## CLI Execution Modes
### MODE: analysis
- **Permission**: Read-only
- **Use For**: Code review, architecture analysis, pattern discovery, exploration
- **Safe**: Yes - can auto-execute
- **Default**: When not specified
### MODE: write
- **Permission**: Create/Modify/Delete files
- **Use For**: Feature implementation, bug fixes, documentation, code creation
- **Safe**: No - requires explicit `--mode write`
- **Requirements**: Must be explicitly requested by user
### MODE: review
- **Permission**: Read-only (git-aware review output)
- **Use For**: Code review of uncommitted changes, branch diffs, specific commits
- **Tool Support**: `codex` only (others treat as analysis)
- **Constraint**: Target flags (`--uncommitted`, `--base`, `--commit`) and prompt are mutually exclusive
## Command Structure
### Basic Command
```bash
ccw cli -p "<PROMPT>" --tool <tool-id> --mode <analysis|write|review>
```
### Command Options
| Option | Description | Example |
|--------|-------------|---------|
| `--tool <tool>` | Tool from config | `--tool gemini` |
| `--mode <mode>` | **REQUIRED**: analysis/write/review | `--mode analysis` |
| `--model <model>` | Model override | `--model gemini-2.5-flash` |
| `--cd <path>` | Working directory | `--cd src/auth` |
| `--includeDirs <dirs>` | Additional directories | `--includeDirs ../shared,../types` |
| `--rule <template>` | Auto-load template | `--rule analysis-review-architecture` |
| `--resume [id]` | Resume session | `--resume` or `--resume <id>` |
### Advanced Directory Configuration
#### Working Directory (`--cd`)
When using `--cd`:
- `@**/*` = Files within working directory tree only
- Cannot reference parent/sibling without `--includeDirs`
- Reduces token usage by scoping context
#### Include Directories (`--includeDirs`)
**Two-step requirement for external files**:
1. Add `--includeDirs` parameter
2. Reference in CONTEXT with @ patterns
```bash
# Single directory
ccw cli -p "CONTEXT: @**/* @../shared/**/*" \
  --tool gemini --mode analysis \
  --cd src/auth --includeDirs ../shared

# Multiple directories
ccw cli -p "..." \
  --tool gemini --mode analysis \
  --cd src/auth --includeDirs ../shared,../types,../utils
```
### Session Resume
**When to Use**:
- Multi-round planning (analysis → planning → implementation)
- Multi-model collaboration (tool A → tool B on same topic)
- Topic continuity (building on previous findings)
**Usage**:
```bash
ccw cli -p "Continue analyzing" --tool <tool-id> --mode analysis --resume # Resume last
ccw cli -p "Fix issues found" --tool <tool-id> --mode write --resume <id> # Resume specific
ccw cli -p "Merge findings" --tool <tool-id> --mode analysis --resume <id1>,<id2> # Merge sessions
```
## Available Rule Templates
### Template System
Use `--rule <template-name>` to auto-load protocol + template appended to prompt
### Universal Templates
- `universal-rigorous-style` - Precise tasks (default)
- `universal-creative-style` - Exploratory tasks
### Analysis Templates
- `analysis-trace-code-execution` - Execution tracing
- `analysis-diagnose-bug-root-cause` - Bug diagnosis
- `analysis-analyze-code-patterns` - Code patterns
- `analysis-analyze-technical-document` - Document analysis
- `analysis-review-architecture` - Architecture review
- `analysis-review-code-quality` - Code review
- `analysis-analyze-performance` - Performance analysis
- `analysis-assess-security-risks` - Security assessment
### Planning Templates
- `planning-plan-architecture-design` - Architecture design
- `planning-breakdown-task-steps` - Task breakdown
- `planning-design-component-spec` - Component design
- `planning-plan-migration-strategy` - Migration strategy
### Development Templates
- `development-implement-feature` - Feature implementation
- `development-refactor-codebase` - Code refactoring
- `development-generate-tests` - Test generation
- `development-implement-component-ui` - UI component
- `development-debug-runtime-issues` - Runtime debugging
## Task-Type Specific Examples
### Example 1: Security Analysis (Read-Only)
```bash
ccw cli -p "PURPOSE: Identify OWASP Top 10 vulnerabilities in authentication module to pass security audit; success = all critical/high issues documented with remediation
TASK: • Scan for injection flaws (SQL, command, LDAP) • Check authentication bypass vectors • Evaluate session management • Assess sensitive data exposure
MODE: analysis
CONTEXT: @src/auth/**/* @src/middleware/auth.ts | Memory: Using bcrypt for passwords, JWT for sessions
EXPECTED: Security report with: severity matrix, file:line references, CVE mappings where applicable, remediation code snippets prioritized by risk
CONSTRAINTS: Focus on authentication | Ignore test files
" --tool gemini --mode analysis --rule analysis-assess-security-risks --cd src/auth
```
### Example 2: Feature Implementation (Write Mode)
```bash
ccw cli -p "PURPOSE: Implement rate limiting for API endpoints to prevent abuse; must be configurable per-endpoint; backward compatible with existing clients
TASK: • Create rate limiter middleware with sliding window • Implement per-route configuration • Add Redis backend for distributed state • Include bypass for internal services
MODE: write
CONTEXT: @src/middleware/**/* @src/config/**/* | Memory: Using Express.js, Redis already configured, existing middleware pattern in auth.ts
EXPECTED: Production-ready code with: TypeScript types, unit tests, integration test, configuration example, migration guide
CONSTRAINTS: Follow existing middleware patterns | No breaking changes
" --tool gemini --mode write --rule development-implement-feature
```
### Example 3: Bug Root Cause Analysis
```bash
ccw cli -p "PURPOSE: Fix memory leak in WebSocket connection handler causing server OOM after 24h; root cause must be identified before any fix
TASK: • Trace connection lifecycle from open to close • Identify event listener accumulation • Check cleanup on disconnect • Verify garbage collection eligibility
MODE: analysis
CONTEXT: @src/websocket/**/* @src/services/connection-manager.ts | Memory: Using ws library, ~5000 concurrent connections in production
EXPECTED: Root cause analysis with: memory profile, leak source (file:line), fix recommendation with code, verification steps
CONSTRAINTS: Focus on resource cleanup
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause --cd src
```
### Example 4: Code Review (Codex Review Mode)
```bash
# Option 1: Custom focus (reviews uncommitted by default)
ccw cli -p "Focus on security vulnerabilities and error handling" --tool codex --mode review
# Option 2: Target flag only (no prompt with target flags)
ccw cli --tool codex --mode review --uncommitted
ccw cli --tool codex --mode review --base main
ccw cli --tool codex --mode review --commit abc123
```
## Tool Selection Strategy
### Selection Algorithm
**STEP 0 (REQUIRED)**: Load configuration (memory-first strategy)
```bash
# Check if configuration exists in conversation memory
# If YES → Use in-memory configuration
# If NO → Read(file_path="~/.claude/cli-tools.json")
```
Then proceed with selection:
1. **Parse task intent** → Extract required capabilities
2. **Load configuration** → Parse enabled tools with tags from JSON
3. **Match tags** → Filter tools supporting required capabilities
4. **Select tool** → Choose by priority (explicit > tag-match > default)
5. **Select model** → Use primaryModel, fallback to secondaryModel
### Selection Decision Tree
```
0. LOAD CONFIGURATION (memory-first)
   ├─ In memory? → Use it
   └─ Not in memory? → Read ~/.claude/cli-tools.json
1. Explicit --tool specified?
   YES → Validate tool is enabled in config → Use it
   NO  → Proceed to tag-based selection
         ├─ Extract task tags (security, analysis, implementation, etc.)
         │   ├─ Find tools with matching tags
         │   │   ├─ Multiple matches? Use first enabled
         │   │   └─ Single match? Use it
         │   └─ No tag match? Use default tool
         └─ Default: Use first enabled tool in config
```
### Common Tag Routing
**Note**: Match task type to tags defined in `~/.claude/cli-tools.json`
| Task Type | Common Tags to Match |
|-----------|---------------------|
| Security audit | `分析`, `analysis`, `security` |
| Bug diagnosis | `Debug`, `分析`, `analysis` |
| Implementation | `implementation`, (any enabled tool) |
| Testing | `testing`, (any enabled tool) |
| Refactoring | `refactoring`, (any enabled tool) |
| Documentation | `documentation`, (any enabled tool) |
**Selection Logic**: Find tools where `tags` array contains matching keywords, otherwise use first enabled tool.
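A sketch of that selection logic, assuming the configuration shape described earlier (the helper name is illustrative):
```javascript
// Returns the selected tool name, or undefined if nothing is enabled
function selectTool(config, taskTags, explicitTool = null) {
  const enabled = Object.entries(config.tools).filter(([, t]) => t.enabled)
  if (explicitTool && enabled.some(([name]) => name === explicitTool)) {
    return explicitTool                                  // explicit --tool wins
  }
  const match = enabled.find(([, t]) => t.tags?.some(tag => taskTags.includes(tag)))
  return (match ?? enabled[0])?.[0]                      // tag match, else first enabled
}
```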
### Fallback Chain
When primary tool fails (based on `~/.claude/cli-tools.json` configuration):
1. Check `secondaryModel` for same tool (use `secondaryModel` from config)
2. Try next enabled tool with matching tags (scan config for enabled tools)
3. Fall back to default enabled tool (first enabled tool in config)
**Example Fallback**:
```
Tool1: primaryModel fails
Try Tool1: secondaryModel
↓ (if fails)
Try Tool2: primaryModel (next enabled with matching tags)
↓ (if fails)
Try default: first enabled tool
```
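A minimal sketch of that fallback order (`execCli` is a hypothetical wrapper around `ccw cli`, not a documented API):
```javascript
async function runWithFallback(prompt, toolName, config) {
  const tool = config.tools[toolName]
  // Step 1: same tool, primary then secondary model
  for (const model of [tool.primaryModel, tool.secondaryModel]) {
    try {
      return await execCli(prompt, toolName, model)
    } catch { /* try the next model */ }
  }
  // Steps 2-3: next enabled tool (tag matching omitted for brevity)
  const next = Object.entries(config.tools).find(([name, t]) => t.enabled && name !== toolName)
  if (next) return execCli(prompt, next[0], next[1].primaryModel)
  throw new Error('All configured CLI tools failed')
}
```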
## Permission Framework
**Single-Use Authorization**: Each execution requires explicit user instruction. Previous authorization does NOT carry over.
**Mode Hierarchy**:
- `analysis`: Read-only, safe for auto-execution
- `write`: Create/Modify/Delete files - requires explicit `--mode write`
- `review`: Git-aware code review (codex only) - requires explicit `--mode review`
- **Exception**: the user gives clear instructions such as "modify", "create", or "implement", which count as explicit authorization
## Auto-Invoke Triggers
**Proactive CLI invocation** - Auto-invoke `ccw cli` when encountering these scenarios:
| Trigger | Suggested Rule | When |
|---------|----------------|------|
| **Self-repair fails** | `analysis-diagnose-bug-root-cause` | After 1+ failed fix attempts |
| **Ambiguous requirements** | `planning-breakdown-task-steps` | Task description lacks clarity |
| **Architecture decisions** | `planning-plan-architecture-design` | Complex feature needs design |
| **Pattern uncertainty** | `analysis-analyze-code-patterns` | Unsure of existing conventions |
| **Critical code paths** | `analysis-assess-security-risks` | Security/performance sensitive |
### Execution Principles for Auto-Invoke
- **Default mode**: `--mode analysis` (read-only, safe)
- **No confirmation needed**: Invoke proactively when triggers match
- **Wait for results**: Complete analysis before next action
- **Tool selection**: Use context-appropriate tool or fallback chain
- **Rule flexibility**: Suggested rules are guidelines, adapt as needed
## Best Practices
### Core Principles
- **Configuration-driven** - All tool selection from `cli-tools.json`
- **Tag-based routing** - Match task requirements to tool capabilities
- **Use tools early and often** - Tools are faster and more thorough than manual analysis
- **Unified CLI** - Always use `ccw cli -p` for consistent parameter handling
- **Default to analysis** - Omit `--mode` for read-only, explicitly use `--mode write` for modifications
- **Use `--rule` for templates** - Auto-loads protocol + template appended to prompt
- **Write protection** - Require EXPLICIT `--mode write` for file operations
### Workflow Principles
- **Use unified interface** - Always `ccw cli -p` format
- **Always include template** - Use `--rule <template-name>` to load templates
- **Be specific** - Clear PURPOSE, TASK, EXPECTED fields
- **Include constraints** - File patterns, scope in CONSTRAINTS
- **Leverage memory context** - When building on previous work
- **Discover patterns first** - Use rg/MCP before CLI execution
- **Default to full context** - Use `@**/*` unless specific files needed
### Planning Checklist
- [ ] **Purpose defined** - Clear goal and intent
- [ ] **Mode selected** - `--mode analysis|write|review`
- [ ] **Context gathered** - File references + memory (default `@**/*`)
- [ ] **Directory navigation** - `--cd` and/or `--includeDirs` if needed
- [ ] **Tool selected** - Explicit `--tool` or tag-based auto-selection
- [ ] **Rule template** - `--rule <template-name>` loads template
- [ ] **Constraints** - Domain constraints in CONSTRAINTS field
## Integration with CLAUDE.md Instructions
**From global CLAUDE.md**:
- Always use `run_in_background: false` for Task tool agent calls
- Default: Use Bash `run_in_background: true` for CLI calls
- After CLI call: Stop output immediately, wait for hook callback
- Wait for results: MUST wait for CLI analysis before taking write actions
- Value every call: Never waste analysis results, aggregate before proposing solutions
**From cli-tools-usage.md**:
- Strict cli-tools.json configuration adherence
- Configuration-driven tool selection
- Template system with --rule auto-loading
- Permission framework with single-use authorization
- Auto-invoke triggers for common failure scenarios

View File

@@ -1,29 +0,0 @@
{
"id": "E2E-TASK-1769007254162",
"title": "Test Task E2E-TASK-1769007254162",
"description": "Test task with loop control",
"status": "pending",
"loop_control": {
"enabled": true,
"description": "Test loop",
"max_iterations": 3,
"success_condition": "current_iteration >= 3",
"error_policy": {
"on_failure": "pause",
"max_retries": 3
},
"cli_sequence": [
{
"step_id": "step1",
"tool": "bash",
"command": "echo \"iteration\""
},
{
"step_id": "step2",
"tool": "gemini",
"mode": "analysis",
"prompt_template": "Process output"
}
]
}
}

View File

@@ -5,6 +5,74 @@ All notable changes to Claude Code Workflow (CCW) will be documented in this fil
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [6.3.49] - 2026-01-28
### ✨ New Features | 新功能
#### CLI Tools & Configuration | CLI工具与配置
- **Added**: In-memory configuration prioritization for CLI tool selection initialization | CLI工具选择初始化的内存配置优先级
- **Added**: Codex CLI settings with toggle and refresh actions | Codex CLI设置的切换和刷新操作
- **Added**: Codex CLI enhancement settings with API integration and UI toggle | Codex CLI增强设置包含API集成和UI切换
- **Added**: ccw-cli-tools skill specification with unified execution framework and configuration-driven tool selection | ccw-cli-tools技能规范包含统一执行框架和配置驱动的工具选择
- **Added**: Commands management feature with API endpoints and UI integration | 命令管理功能包含API端点和UI集成
#### Skills & Workflows | 技能与工作流
- **Enhanced**: lite-skill-generator with single file output and improved validation | lite-skill-generator单文件输出和验证增强
- **Added**: brainstorm-to-cycle adapter for converting brainstorm output to parallel-dev-cycle input | brainstorm-to-cycle适配器用于将brainstorm输出转换为parallel-dev-cycle输入
- **Added**: brainstorm-with-file prompt for interactive brainstorming workflow | 交互式brainstorm工作流的brainstorm-with-file提示
- **Added**: Document consolidation, assembly, and compliance refinement phases | 文档整合、汇编和合规性细化阶段
- **Added**: Skill enable/disable functionality with enhanced moveDirectory rollback on failure | 技能启用/禁用功能增强的moveDirectory失败回滚
#### Review & Quality | 审查与质量
- **Updated**: Review commands to use review-cycle-fix for automated fixing | 审查命令更新为使用review-cycle-fix进行自动修复
- **Fixed**: Changelog workflow references from review-fix to review-cycle-fix for consistency | 更改changelog工作流引用从review-fix到review-cycle-fix以保持一致性
#### Documentation & CLI Integration | 文档与CLI集成
- **Added**: CLI endpoints documentation and unified script template for Bash and Python | CLI端点文档和Bash/Python统一脚本模板
- **Enhanced**: Skill generator documentation and templates | 技能生成器文档和模板增强
- **Added**: Skill tuning diagnosis report for skill-generator | skill-generator的技能调优诊断报告
### 🔒 Security | 安全
#### Critical Fixes | 关键修复
- **Fixed**: 3 critical security vulnerabilities | 修复3个关键安全漏洞
### 🛠️ Improvements | 改进
#### Core Logic | 核心逻辑
- **Refactored**: Orchestrator logic with enhanced problem taxonomy | 重构编排器逻辑,增强问题分类
- **Improved**: Skills enable/disable operations robustness | 改进技能启用/禁用操作的健壮性
#### Planning & Context Management | 规划与上下文管理
- **Enhanced**: Workflow commands and context management | 增强工作流命令和上下文管理
- **Enhanced**: CLI Lite Planning Agent with mandatory quality check | 增强CLI Lite规划代理的强制性质量检查
- **Added**: Planning notes feature for task generation and constraint management | 规划笔记功能,用于任务生成和约束管理
#### Issue Management | Issue管理
- **Added**: convert-to-plan command to convert planning documents to issue solutions | convert-to-plan命令将规划文档转换为问题解决方案
- **Enhanced**: Queue status validation with "merged" status | 队列状态验证,增加"merged"状态
- **Refactored**: Issue queue management to use "archived" instead of "merged" | 重构issue队列管理使用"archived"代替"merged"
#### Multi-CLI Analysis | 多CLI分析
- **Added**: Interactive analysis workflow with documented discussions and CLI exploration | 交互式分析工作流包含文档化讨论和CLI探索
- **Added**: Parent/child directory lookup for ccw cli output | ccw cli输出的父/子目录查找
### 📚 Documentation | 文档
- **Added**: Level 5 intelligent orchestration workflow guide to English version | 英文版添加Level 5智能编排工作流指南
- **Added**: Level 5 workflow guide with CCW Coordinator and decision flowchart | Level 5工作流指南包含CCW协调器和决策流程图
- **Added**: /ccw and /ccw-coordinator as recommended commands | 添加/ccw和/ccw-coordinator作为推荐命令
- **Removed**: Codex Subagent usage documentation | 移除Codex Subagent使用规范文档
- **Removed**: CLI endpoints section from Codex Code Guidelines | 从Codex代码指南中移除CLI端点部分
- **Fixed**: README_CN.md WeChat group QR code image file extension | 修复README_CN.md交流群二维码图片扩展名
- **Archived**: Unused test scripts and temporary documents | 归档未使用的测试脚本和临时文档
### 🎨 UI & Integration | UI与集成
- **Refactored**: CLI Config Manager and added Provider Model Routes | 重构CLI配置管理器并添加Provider模型路由
---
## [6.3.29] - 2026-01-15
### ✨ New Features | 新功能
@@ -376,7 +444,7 @@ This release significantly enhances the code review capabilities with a new fix-
- **Enhanced Export Notifications**: Export notifications now include detailed usage instructions and recommended file locations (`6467480`).
#### 🔄 Changed
- **Dashboard Data Integration**: `fix-dashboard.html` now consumes more JSON fields from the `review-fix` workflow for richer data display (`b000359`).
- **Dashboard Data Integration**: `fix-dashboard.html` now consumes more JSON fields from the `review-cycle-fix` workflow for richer data display (`b000359`).
- **Dashboard Generation**: Optimized the generation process for the review cycle dashboard and merged JSON state files for efficiency (`2cf8efe`).
- **Agent Schema Requirements**: Added explicit JSON schema requirements to review cycle agent prompts to ensure structured output (`34a9a23`).
- **Standardized Naming**: Standardized "Execution Flow" phase naming across commands and removed redundant `REVIEW-SUMMARY.md` output (`ef09914`).
@@ -409,7 +477,7 @@ This release introduces the new `lite-fix` workflow for streamlined bug resoluti
- **`docs-related-cli` Command**: New command for CLI-related documentation (`7453987`).
- **Session Artifacts**: `lite-fix` workflow now creates a dedicated session folder with artifacts like `diagnosis.json` and `fix-plan.json` (`0207677`).
- **`review-session-cycle` Command**: A comprehensive command for multi-dimensional code analysis (`93d8e79`).
- **`review-fix` Workflow**: Automated workflow for reviewing fixes with an enhanced exploration schema (`a6561a7`).
- **`review-cycle-fix` Workflow**: Automated workflow for reviewing fixes with an enhanced exploration schema (`a6561a7`).
- **JSON Schemas**: Added new JSON schemas for deep-dive results and dimension analysis to structure agent outputs (`cd206f2`).
#### 🔄 Changed

View File

@@ -275,8 +275,8 @@ Open Dashboard via `ccw view`, manage indexes and execute searches in **CodexLen
</tr>
<tr>
<td><b>/ccw-coordinator</b></td>
<td>Manual orchestrator - recommends command chains, executes via external CLI with state persistence</td>
<td>🔧 Complex multi-step workflows, custom chains, resumable sessions</td>
<td>Smart orchestrator - intelligently recommends command chains, allows manual adjustment, executes via external CLI with state persistence</td>
<td>🔧 Complex multi-step workflows, customizable chains, resumable sessions</td>
</tr>
</table>
</div>
@@ -298,7 +298,7 @@ Open Dashboard via `ccw view`, manage indexes and execute searches in **CodexLen
| Aspect | /ccw | /ccw-coordinator |
|--------|------|------------------|
| **Execution** | Main process (SlashCommand) | External CLI (background tasks) |
| **Selection** | Auto intent-based | Manual chain confirmation |
| **Selection** | Auto intent-based | Smart recommendation + optional adjustment |
| **State** | TodoWrite tracking | Persistent state.json |
| **Use Case** | General tasks, quick dev | Complex chains, resumable |

View File

@@ -275,8 +275,8 @@ codexlens index /path/to/project
</tr>
<tr>
<td><b>/ccw-coordinator</b></td>
<td>手动编排器 - 推荐命令链、通过外部 CLI 执行、持久化状态</td>
<td>🔧 复杂多步骤工作流、自定义链、可恢复会话</td>
<td>智能编排器 - 智能推荐命令链、支持手动调整、通过外部 CLI 执行、持久化状态</td>
<td>🔧 复杂多步骤工作流、自定义链、可恢复会话</td>
</tr>
</table>
</div>
@@ -298,7 +298,7 @@ codexlens index /path/to/project
| 方面 | /ccw | /ccw-coordinator |
|------|------|------------------|
| **执行方式** | 主进程SlashCommand | 外部 CLI后台任务 |
| **选择方式** | 自动基于意图识别 | 手动链确认 |
| **选择方式** | 自动基于意图识别 | 智能推荐 + 可选调整 |
| **状态管理** | TodoWrite 跟踪 | 持久化 state.json |
| **适用场景** | 通用任务、快速开发 | 复杂链条、可恢复 |
@@ -393,7 +393,7 @@ MIT License - 详见 [LICENSE](LICENSE)
欢迎加入 CCW 交流群,与其他开发者一起讨论使用心得、分享经验!
<img src="assets/wechat-group-qr.jpg" width="300" alt="CCW 微信交流群"/>
<img src="assets/wechat-group-qr.png" width="300" alt="CCW 微信交流群"/>
<sub>扫码加入微信交流群(如二维码过期,请提 Issue 获取最新二维码)</sub>

View File

@@ -5,23 +5,27 @@
CCW provides two workflow systems: **Main Workflow** and **Issue Workflow**, working together to cover the complete software development lifecycle.
```
┌─────────────────────────────────────────────────────────────────────────────┐
│ Main Workflow │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Level 1 │ → │ Level 2 │ → │ Level 3 │ → │ Level 4 │
│ │ Rapid │ │ Lightweight │ │ Standard │ │ Brainstorm │ │
│ │ │ │ │ │ │ │ │ │
│ │ lite-lite- │ │ lite-plan │ │ plan │ │ brainstorm │ │
│ │ lite │ │ lite-fix │ │ tdd-plan │ │ :auto- │ │
│ │ │ │ multi-cli- │ │ test-fix- │ │ parallel │ │
│ │ │ │ plan │ │ gen │ │ ↓ │ │
│ │ │ │ │ │ │ │ plan │ │
└─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘
Complexity: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━▶
Low High
└─────────────────────────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────────────────────────────────────────
Main Workflow
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌──────────────┐
│ │ Level 1 │ → │ Level 2 │ → │ Level 3 │ → │ Level 4 │→ │ Level 5 │
│ │ Rapid │ │ Lightweight │ │ Standard │ │ Brainstorm │ │ Intelligent │
│ │ │ │ │ │ │ │ │ │ Orchestration│
│ │ lite-lite- │ │ lite-plan │ │ plan │ │ brainstorm │ │ ccw- │
│ │ lite │ │ lite-fix │ │ tdd-plan │ │ :auto- │ │ coordinator │
│ │ │ │ multi-cli- │ │ test-fix- │ │ parallel │ │ │
│ │ │ │ plan │ │ gen │ │ ↓ │ │ Auto- │
│ │ │ │ │ │ │ │ plan │ │ analyze & │
│ │ │ │ │ │ │ │ │ recommend │
└─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘ └──────────────┘
Manual Degree: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━▶
│ High (manual selection) Low (fully auto) │
│ │
│ Complexity: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━▶ │
│ Low High │
└──────────────────────────────────────────────────────────────────────────────────────────────┘
│ After development
@@ -497,6 +501,374 @@ Phase 3: Synthesis Integration
---
## Level 5: Intelligent Orchestration (CCW Coordinator)
**Most Intelligent - Automated command chain orchestration + Sequential execution + State persistence**
### Core Concept: Minimum Execution Units
**Definition**: A set of commands that must execute together as an atomic group to achieve a meaningful workflow milestone. Splitting these commands breaks the logical flow and creates incomplete states.
**Why This Matters**:
- **Prevents Incomplete States**: Avoid stopping after task generation without execution
- **User Experience**: User gets complete results, not intermediate artifacts requiring manual follow-up
- **Workflow Integrity**: Maintains logical coherence of multi-step operations
### Minimum Execution Units
**Planning + Execution Units**:
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Quick Implementation** | lite-plan → lite-execute | Lightweight plan and immediate execution | Working code |
| **Multi-CLI Planning** | multi-cli-plan → lite-execute | Multi-perspective analysis and execution | Working code |
| **Bug Fix** | lite-fix → lite-execute | Quick bug diagnosis and fix execution | Fixed code |
| **Full Planning + Execution** | plan → execute | Detailed planning and execution | Working code |
| **Verified Planning + Execution** | plan → plan-verify → execute | Planning with verification and execution | Working code |
| **Replanning + Execution** | replan → execute | Update plan and execute changes | Working code |
| **TDD Planning + Execution** | tdd-plan → execute | Test-driven development planning and execution | Working code |
| **Test Generation + Execution** | test-gen → execute | Generate test suite and execute | Generated tests |
**Testing Units**:
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Test Validation** | test-fix-gen → test-cycle-execute | Generate test tasks and execute test-fix cycle | Tests passed |
**Review Units**:
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Code Review (Session)** | review-session-cycle → review-fix | Complete review cycle and apply fixes | Fixed code |
| **Code Review (Module)** | review-module-cycle → review-fix | Module review cycle and apply fixes | Fixed code |
### 3-Phase Workflow
#### Phase 1: Analyze Requirements
Parse task description to extract: goal, scope, constraints, complexity, and task type.
```javascript
function analyzeRequirements(taskDescription) {
  return {
    goal: extractMainGoal(taskDescription),           // e.g., "Implement user registration"
    scope: extractScope(taskDescription),             // e.g., ["auth", "user_management"]
    constraints: extractConstraints(taskDescription), // e.g., ["no breaking changes"]
    complexity: determineComplexity(taskDescription), // 'simple' | 'medium' | 'complex'
    task_type: detectTaskType(taskDescription)        // See task type patterns below
  };
}

// Task Type Detection Patterns
function detectTaskType(text) {
  // Priority order (first match wins)
  if (/fix|bug|error|crash|fail|debug|diagnose/.test(text)) return 'bugfix';
  if (/tdd|test-driven|test first/.test(text)) return 'tdd';
  if (/test fail|fix test|failing test/.test(text)) return 'test-fix';
  if (/generate test|add test/.test(text)) return 'test-gen';
  if (/review/.test(text)) return 'review';
  if (/explore|brainstorm/.test(text)) return 'brainstorm';
  if (/multi-perspective|comparison/.test(text)) return 'multi-cli';
  return 'feature'; // Default
}

// Complexity Assessment
function determineComplexity(text) {
  let score = 0;
  if (/refactor|migrate|architect|system/.test(text)) score += 2;
  if (/multiple|across|all|entire/.test(text)) score += 2;
  if (/integrate|api|database/.test(text)) score += 1;
  if (/security|performance|scale/.test(text)) score += 1;
  return score >= 4 ? 'complex' : score >= 2 ? 'medium' : 'simple';
}
```
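For instance, applying these helpers to concrete task descriptions (note that the regexes are case-sensitive as written, so matching relies on lowercase keywords appearing somewhere in the text):
```javascript
// Example classifications using the functions above
detectTaskType('Fix payment timeout crash');    // 'crash' matches → 'bugfix'
detectTaskType('Implement user avatar upload'); // no pattern matches → 'feature'

determineComplexity('Migrate the entire auth system to OAuth');
// 'system' (+2) + 'entire' (+2) → score 4 → 'complex'
```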
#### Phase 2: Discover Commands & Recommend Chain
Dynamic command chain assembly using port-based matching.
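As a rough illustration of port-based matching, the sketch below greedily walks from the task's input port to its target output port, using definitions shaped like the `commandPorts` example shown later in this document (each command exposes `input` and `output` port lists). A real implementation would also need scoring and backtracking; this is only a sketch under those assumptions:
```javascript
// Greedy port-based chain assembly (illustrative only, no backtracking)
function assembleChain(startPort, targetPort, commandPorts) {
  const chain = [];
  let current = startPort;
  const visited = new Set();
  while (current !== targetPort) {
    // Find an unused command whose input ports accept the current artifact
    const entry = Object.entries(commandPorts).find(
      ([name, ports]) => ports.input.includes(current) && !visited.has(name)
    );
    if (!entry) throw new Error(`No command accepts port: ${current}`);
    const [name, ports] = entry;
    visited.add(name);
    chain.push(name);
    current = ports.output[0]; // Follow the command's primary output port
  }
  return chain;
}

// assembleChain('requirement', 'code', commandPorts)
//   → ['lite-plan', 'lite-execute']
```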
**Display to user**:
```
Recommended Command Chain:

Pipeline (visual):
  Requirement → lite-plan → Plan → lite-execute → Code → test-cycle-execute → Tests Passed

Commands:
  1. /workflow:lite-plan
  2. /workflow:lite-execute
  3. /workflow:test-cycle-execute

Proceed? [Confirm / Show Details / Adjust / Cancel]
```
**User Confirmation**:
```javascript
async function getUserConfirmation(chain) {
  // Present chain with options:
  // - Confirm and execute
  // - Show details
  // - Adjust chain
  // - Cancel
}
```
#### Phase 3: Execute Sequential Command Chain
```javascript
async function executeCommandChain(chain, analysis) {
  const sessionId = `ccw-coord-${Date.now()}`;
  const stateDir = `.workflow/.ccw-coordinator/${sessionId}`;

  // Initialize state
  const state = {
    session_id: sessionId,
    status: 'running',
    created_at: new Date().toISOString(),
    analysis: analysis,
    command_chain: chain.map((cmd, idx) => ({ ...cmd, index: idx, status: 'pending' })),
    execution_results: [],
    prompts_used: []
  };

  // Save initial state
  Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));

  for (let i = 0; i < chain.length; i++) {
    const cmd = chain[i];

    // Assemble prompt
    let prompt = formatCommand(cmd, state.execution_results, analysis);
    prompt += `\n\nTask: ${analysis.goal}`;
    if (state.execution_results.length > 0) {
      prompt += '\n\nPrevious results:\n';
      state.execution_results.forEach(r => {
        if (r.session_id) {
          prompt += `- ${r.command}: ${r.session_id}\n`;
        }
      });
    }

    // Launch CLI in background
    const taskId = Bash(
      `ccw cli -p "${escapePrompt(prompt)}" --tool claude --mode write`,
      { run_in_background: true }
    ).task_id;

    // Save checkpoint
    state.execution_results.push({
      index: i,
      command: cmd.command,
      status: 'in-progress',
      task_id: taskId,
      session_id: null,
      artifacts: [],
      timestamp: new Date().toISOString()
    });

    // Stop here - wait for hook callback
    Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
    break;
  }

  state.status = 'waiting';
  Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
  return state;
}
```
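The loop above launches only the first command and then parks the session in `waiting`; all further progress depends on a hook callback. A minimal sketch of that callback is shown below. It assumes a `Read` primitive symmetrical to `Write` and a `launchCommand` helper that reuses the launch logic from `executeCommandChain`; the actual hook wiring is not shown in this document:
```javascript
// Hypothetical hook handler that resumes the chain after a CLI task finishes
function onCommandComplete(sessionId, taskId, result) {
  const stateDir = `.workflow/.ccw-coordinator/${sessionId}`;
  const state = JSON.parse(Read(`${stateDir}/state.json`));

  // Mark the in-progress command with the callback's outcome
  const current = state.execution_results.find(r => r.task_id === taskId);
  current.status = result.success ? 'completed' : 'failed';
  current.session_id = result.session_id;
  current.artifacts = result.artifacts || [];
  current.completed_at = new Date().toISOString();
  state.command_chain[current.index].status = current.status;

  // Either launch the next command or finish the session
  const nextIndex = current.index + 1;
  if (!result.success) {
    state.status = 'failed';
  } else if (nextIndex < state.command_chain.length) {
    launchCommand(state, nextIndex); // same launch path as executeCommandChain
    state.status = 'waiting';
  } else {
    state.status = 'completed';
  }
  Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
}
```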
### State File Structure
**Location**: `.workflow/.ccw-coordinator/{session_id}/state.json`
```json
{
  "session_id": "ccw-coord-20250124-143025",
  "status": "running|waiting|completed|failed",
  "created_at": "2025-01-24T14:30:25Z",
  "updated_at": "2025-01-24T14:35:45Z",
  "analysis": {
    "goal": "Implement user registration",
    "scope": ["authentication", "user_management"],
    "constraints": ["no breaking changes"],
    "complexity": "medium",
    "task_type": "feature"
  },
  "command_chain": [
    {
      "index": 0,
      "command": "/workflow:plan",
      "name": "plan",
      "status": "completed"
    },
    {
      "index": 1,
      "command": "/workflow:execute",
      "name": "execute",
      "status": "running"
    }
  ],
  "execution_results": [
    {
      "index": 0,
      "command": "/workflow:plan",
      "status": "completed",
      "task_id": "task-001",
      "session_id": "WFS-plan-20250124",
      "artifacts": ["IMPL_PLAN.md"],
      "timestamp": "2025-01-24T14:30:25Z",
      "completed_at": "2025-01-24T14:30:45Z"
    }
  ]
}
```
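Because every session persists its full state in this one file, resumability amounts to re-reading it. For example, a small Node script (illustrative only; it merely assumes the directory layout above) can enumerate sessions and their per-command status:
```javascript
// List coordinator sessions by scanning .workflow/.ccw-coordinator/*/state.json
const fs = require('fs');
const path = require('path');

const root = '.workflow/.ccw-coordinator';
for (const dir of fs.readdirSync(root)) {
  const state = JSON.parse(
    fs.readFileSync(path.join(root, dir, 'state.json'), 'utf8'));
  console.log(`${state.session_id}: ${state.status}`,
    state.command_chain.map(c => `${c.name}=${c.status}`).join(' '));
}
```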
### Complete Lifecycle Decision Flowchart
```mermaid
flowchart TD
    Start([Start New Task]) --> Q0{Is this a bug fix?}
    Q0 -->|Yes| BugFix[🐛 Bug Fix Process]
    Q0 -->|No| Q1{Do you know what to do?}
    BugFix --> BugSeverity{Understand root cause?}
    BugSeverity -->|Clear| LiteFix[/ /workflow:lite-fix<br>Standard fix /]
    BugSeverity -->|Production incident| HotFix[/ /workflow:lite-fix --hotfix<br>Emergency hotfix /]
    BugSeverity -->|Unclear| BugDiag[/ /workflow:lite-fix<br>Auto-diagnose root cause /]
    BugDiag --> LiteFix
    LiteFix --> BugComplete[Bug fixed]
    HotFix --> FollowUp[/ Auto-generate follow-up tasks<br>Complete fix + post-mortem /]
    FollowUp --> BugComplete
    BugComplete --> End([Task Complete])
    Q1 -->|No| Ideation[💡 Exploration Phase<br>Clarify requirements]
    Q1 -->|Yes| Q2{Do you know how to do it?}
    Ideation --> BrainIdea[/ /workflow:brainstorm:auto-parallel<br>Explore product direction /]
    BrainIdea --> Q2
    Q2 -->|No| Design[🏗️ Design Exploration<br>Explore architecture]
    Q2 -->|Yes| Q3{Need planning?}
    Design --> BrainDesign[/ /workflow:brainstorm:auto-parallel<br>Explore technical solutions /]
    BrainDesign --> Q3
    Q3 -->|Quick and simple| LitePlan[⚡ Lightweight Planning<br>/workflow:lite-plan]
    Q3 -->|Complex and complete| FullPlan[📋 Standard Planning<br>/workflow:plan]
    LitePlan --> Q4{Need code exploration?}
    Q4 -->|Yes| LitePlanE[/ /workflow:lite-plan -e<br>Task description /]
    Q4 -->|No| LitePlanNormal[/ /workflow:lite-plan<br>Task description /]
    LitePlanE --> LiteConfirm[Three-dimensional confirmation:<br>1⃣ Task approval<br>2⃣ Execution method<br>3⃣ Code review]
    LitePlanNormal --> LiteConfirm
    LiteConfirm --> Q5{Select execution method}
    Q5 -->|Agent| LiteAgent[/ /workflow:lite-execute<br>Use @code-developer /]
    Q5 -->|CLI tool| LiteCLI[CLI Execution<br>Gemini/Qwen/Codex]
    Q5 -->|Plan only| UserImpl[User manual implementation]
    FullPlan --> PlanVerify{Verify plan quality?}
    PlanVerify -->|Yes| Verify[/ /workflow:action-plan-verify /]
    PlanVerify -->|No| Execute
    Verify --> Q6{Verification passed?}
    Q6 -->|No| FixPlan[Fix plan issues]
    Q6 -->|Yes| Execute
    FixPlan --> Execute
    Execute[🚀 Execution Phase<br>/workflow:execute]
    LiteAgent --> TestDecision
    LiteCLI --> TestDecision
    UserImpl --> TestDecision
    Execute --> TestDecision
    TestDecision{Need tests?}
    TestDecision -->|TDD mode| TDD[/ /workflow:tdd-plan<br>Test-driven development /]
    TestDecision -->|Post-test| TestGen[/ /workflow:test-gen<br>Generate tests /]
    TestDecision -->|Tests exist| TestCycle[/ /workflow:test-cycle-execute<br>Test-fix cycle /]
    TestDecision -->|Not needed| Review
    TDD --> TDDExecute[/ /workflow:execute<br>Red-Green-Refactor /]
    TDDExecute --> TDDVerify[/ /workflow:tdd-verify<br>Verify TDD compliance /]
    TDDVerify --> Review
    TestGen --> TestExecute[/ /workflow:execute<br>Execute test tasks /]
    TestExecute --> TestResult{Tests passed?}
    TestResult -->|No| TestCycle
    TestResult -->|Yes| Review
    TestCycle --> TestPass{Pass rate ≥ 95%?}
    TestPass -->|No, continue fixing| TestCycle
    TestPass -->|Yes| Review
    Review[📝 Review Phase]
    Review --> Q7{Need specialized review?}
    Q7 -->|Security| SecurityReview[/ /workflow:review<br>--type security /]
    Q7 -->|Architecture| ArchReview[/ /workflow:review<br>--type architecture /]
    Q7 -->|Quality| QualityReview[/ /workflow:review<br>--type quality /]
    Q7 -->|General| GeneralReview[/ /workflow:review<br>General review /]
    Q7 -->|Not needed| Complete
    SecurityReview --> Complete
    ArchReview --> Complete
    QualityReview --> Complete
    GeneralReview --> Complete
    Complete[✅ Completion Phase<br>/workflow:session:complete]
    Complete --> End
    style Start fill:#e1f5ff
    style BugFix fill:#ffccbc
    style LiteFix fill:#ffccbc
    style HotFix fill:#ff8a65
    style BugDiag fill:#ffccbc
    style BugComplete fill:#c8e6c9
    style End fill:#c8e6c9
    style BrainIdea fill:#fff9c4
    style BrainDesign fill:#fff9c4
    style LitePlan fill:#b3e5fc
    style FullPlan fill:#b3e5fc
    style Execute fill:#c5e1a5
    style TDD fill:#ffccbc
    style TestGen fill:#ffccbc
    style TestCycle fill:#ffccbc
    style Review fill:#d1c4e9
    style Complete fill:#c8e6c9
```
### Commands
```bash
/ccw-coordinator "task description"
# Auto-analyze → recommend command chain → execute sequentially
```
### Use Cases
- ✅ Complex multi-step workflows
- ✅ Uncertain which commands to use
- ✅ Desire end-to-end automation
- ✅ Need full state tracking and resumability
- ✅ Team collaboration with unified execution flow
- ❌ Simple single-command tasks
- ❌ Already know exact commands needed
### Relationship with Other Levels
| Level | Manual Degree | CCW Coordinator Role |
|-------|---------------|-----------------------|
| Level 1-4 | Manual command selection | Auto-combine these commands |
| Level 5 | Auto command selection | Intelligent orchestrator |
**CCW Coordinator uses Level 1-4 internally**:
- Analyzes task → Auto-selects appropriate Level
- Assembles command chain → Includes Level 1-4 commands
- Executes sequentially → Follows Minimum Execution Units
---
## Issue Workflow
**Main Workflow Supplement - Post-development continuous maintenance**
@@ -596,6 +968,7 @@ Phase 3: Synthesis Integration
| Test-driven development | `tdd-plan → execute → tdd-verify` | 3 |
| Test failure fixes | `test-fix-gen → test-cycle-execute` | 3 |
| New features, architecture design | `brainstorm:auto-parallel → plan → execute` | 4 |
| Complex multi-step workflows, uncertain commands | `ccw-coordinator` | 5 |
| Post-development issue fixes | Issue Workflow | - |
### Decision Flowchart
@@ -607,6 +980,10 @@ Start
│ ├─ Yes → Issue Workflow
│ └─ No ↓
├─ Uncertain which commands to use?
│ ├─ Yes → Level 5 (ccw-coordinator - auto-analyze & recommend)
│ └─ No ↓
├─ Are requirements clear?
│ ├─ Uncertain → Level 4 (brainstorm:auto-parallel)
│ └─ Clear ↓
@@ -714,6 +1091,7 @@ mcp__ace-tool__search_context({
| **2** | Lightweight | `lite-plan`, `lite-fix`, `multi-cli-plan` | Memory/Lightweight files | → `lite-execute` |
| **3** | Standard | `plan`, `tdd-plan`, `test-fix-gen` | Session persistence | → `execute` / `test-cycle-execute` |
| **4** | Brainstorm | `brainstorm:auto-parallel` → `plan` | Multi-role analysis + Session | → `execute` |
| **5** | Intelligent | `ccw-coordinator` | Full state persistence | Auto-analyze & recommend |
| **-** | Issue | `discover` → `plan` → `queue` → `execute` | Issue records | Worktree isolation (optional) |
### Core Principles
@@ -721,11 +1099,16 @@ mcp__ace-tool__search_context({
1. **Main Workflow** solves parallelism through **dependency analysis + Agent parallel execution**, no worktree needed
2. **Issue Workflow** serves as a **supplementary mechanism**, supporting worktree isolation to maintain main branch stability
3. Select appropriate workflow level based on task complexity, **avoid over-engineering**
4. **Level 1-4** require manual command selection; **Level 5** auto-analyzes and recommends optimal command chains
5. Level 2 workflow selection criteria:
- Clear requirements → `lite-plan`
- Bug fix → `lite-fix`
- Need multi-perspective → `multi-cli-plan`
6. Level 3 workflow selection criteria:
- Standard development → `plan`
- Test-driven → `tdd-plan`
- Test fix → `test-fix-gen`
7. Level 5 usage:
- Uncertain which commands to use → `ccw-coordinator`
- Need end-to-end workflow automation → `ccw-coordinator`
- Require complete state tracking and resumability → `ccw-coordinator`


@@ -5,23 +5,26 @@
CCW provides two workflow systems: **Main Workflow** and **Issue Workflow**, which together cover the complete software development lifecycle.
```
┌────────────────────────────────────────────────────────────────────────────────────────────┐
│                                       Main Workflow                                        │
│                                                                                            │
│  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐   ┌─────────────┐   ┌──────────────┐  │
│  │  Level 1    │ → │  Level 2    │ → │  Level 3    │ → │  Level 4    │ → │  Level 5     │  │
│  │  Rapid      │   │ Lightweight │   │  Standard   │   │ Brainstorm  │   │ Intelligent  │  │
│  │             │   │             │   │             │   │             │   │ Orchestration│  │
│  │ lite-lite-  │   │ lite-plan   │   │ plan        │   │ brainstorm  │   │ Auto-analyze │  │
│  │ lite        │   │ lite-fix    │   │ tdd-plan    │   │ :auto-      │   │ requirements │  │
│  │             │   │ multi-cli-  │   │ test-fix-   │   │ parallel    │   │      ↓       │  │
│  │             │   │ plan        │   │ gen         │   │   ↓         │   │ Recommend    │  │
│  │             │   │             │   │             │   │ plan        │   │ command chain│  │
│  │             │   │             │   │             │   │             │   │      ↓       │  │
│  │             │   │             │   │             │   │             │   │ Execute in   │  │
│  │             │   │             │   │             │   │             │   │ sequence     │  │
│  │             │   │             │   │             │   │             │   │ (min units)  │  │
│  └─────────────┘   └─────────────┘   └─────────────┘   └─────────────┘   └──────────────┘  │
│                                                                                            │
│  Manual Degree: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━▶   │
│                 High (manually pick each command)                    Low (fully auto)      │
│                                                                                            │
│  Complexity:    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━▶   │
│                 Low                                                              High      │
└────────────────────────────────────────────────────────────────────────────────────────────┘
│ After development
@@ -496,6 +499,583 @@ Phase 3: Synthesis Integration
---
## Level 5: Intelligent Orchestration (CCW Coordinator)
**Most intelligent - Automated command chain orchestration + Sequential execution + State persistence**
### Characteristics
| Attribute | Value |
|------|-----|
| **Complexity** | High |
| **Artifacts** | Complete orchestration session state |
| **State** | Full state tracking |
| **Execution mode** | 3-phase intelligent orchestration |
| **Best for** | Multi-command collaboration, complex workflow automation |
### Core Concepts
#### Full Lifecycle Command Selection Flowchart
```mermaid
flowchart TD
    Start([Start New Task]) --> Q0{Is this a bug fix?}
    Q0 -->|Yes| BugFix[🐛 Bug Fix Process]
    Q0 -->|No| Q1{Do you know what to do?}
    BugFix --> BugSeverity{Understand root cause?}
    BugSeverity -->|Clear| LiteFix[/ /workflow:lite-fix<br>Standard fix /]
    BugSeverity -->|Production incident| HotFix[/ /workflow:lite-fix --hotfix<br>Emergency hotfix /]
    BugSeverity -->|Unclear| BugDiag[/ /workflow:lite-fix<br>Auto-diagnose root cause /]
    BugDiag --> LiteFix
    LiteFix --> BugComplete[Bug fixed]
    HotFix --> FollowUp[/ Auto-generate follow-up tasks<br>Complete fix + post-mortem /]
    FollowUp --> BugComplete
    BugComplete --> End([Task Complete])
    Q1 -->|No| Ideation[💡 Ideation Phase<br>Explore requirements]
    Q1 -->|Yes| Q2{Do you know how to do it?}
    Ideation --> BrainIdea[/ /workflow:brainstorm:auto-parallel<br>Explore product direction /]
    BrainIdea --> Q2
    Q2 -->|No| Design[🏗️ Design Exploration<br>Explore architecture options]
    Q2 -->|Yes| Q3{Need planning?}
    Design --> BrainDesign[/ /workflow:brainstorm:auto-parallel<br>Explore technical solutions /]
    BrainDesign --> Q3
    Q3 -->|Quick and simple| LitePlan[⚡ Lightweight Planning<br>/workflow:lite-plan]
    Q3 -->|Complex and complete| FullPlan[📋 Standard Planning<br>/workflow:plan]
    LitePlan --> Q4{Need code exploration?}
    Q4 -->|Yes| LitePlanE[/ /workflow:lite-plan -e<br>Task description /]
    Q4 -->|No| LitePlanNormal[/ /workflow:lite-plan<br>Task description /]
    LitePlanE --> LiteConfirm[Three-dimensional confirmation:<br>1⃣ Task approval<br>2⃣ Execution method<br>3⃣ Code review]
    LitePlanNormal --> LiteConfirm
    LiteConfirm --> Q5{Select execution method}
    Q5 -->|Agent| LiteAgent[/ /workflow:lite-execute<br>Use @code-developer /]
    Q5 -->|CLI tool| LiteCLI[CLI Execution<br>Gemini/Qwen/Codex]
    Q5 -->|Plan only| UserImpl[User manual implementation]
    FullPlan --> PlanVerify{Verify plan quality?}
    PlanVerify -->|Yes| Verify[/ /workflow:action-plan-verify /]
    PlanVerify -->|No| Execute
    Verify --> Q6{Verification passed?}
    Q6 -->|No| FixPlan[Fix plan issues]
    Q6 -->|Yes| Execute
    FixPlan --> Execute
    Execute[🚀 Execution Phase<br>/workflow:execute]
    LiteAgent --> TestDecision
    LiteCLI --> TestDecision
    UserImpl --> TestDecision
    Execute --> TestDecision
    TestDecision{Need tests?}
    TestDecision -->|TDD mode| TDD[/ /workflow:tdd-plan<br>Test-driven development /]
    TestDecision -->|Post-test| TestGen[/ /workflow:test-gen<br>Generate tests /]
    TestDecision -->|Tests exist| TestCycle[/ /workflow:test-cycle-execute<br>Test-fix cycle /]
    TestDecision -->|Not needed| Review
    TDD --> TDDExecute[/ /workflow:execute<br>Red-Green-Refactor /]
    TDDExecute --> TDDVerify[/ /workflow:tdd-verify<br>Verify TDD compliance /]
    TDDVerify --> Review
    TestGen --> TestExecute[/ /workflow:execute<br>Execute test tasks /]
    TestExecute --> TestResult{Tests passed?}
    TestResult -->|No| TestCycle
    TestResult -->|Yes| Review
    TestCycle --> TestPass{Pass rate ≥ 95%?}
    TestPass -->|No, continue fixing| TestCycle
    TestPass -->|Yes| Review
    Review[📝 Review Phase]
    Review --> Q7{Need specialized review?}
    Q7 -->|Security| SecurityReview[/ /workflow:review<br>--type security /]
    Q7 -->|Architecture| ArchReview[/ /workflow:review<br>--type architecture /]
    Q7 -->|Quality| QualityReview[/ /workflow:review<br>--type quality /]
    Q7 -->|General| GeneralReview[/ /workflow:review<br>General review /]
    Q7 -->|Not needed| Complete
    SecurityReview --> Complete
    ArchReview --> Complete
    QualityReview --> Complete
    GeneralReview --> Complete
    Complete[✅ Completion Phase<br>/workflow:session:complete]
    Complete --> End
    style Start fill:#e1f5ff
    style BugFix fill:#ffccbc
    style LiteFix fill:#ffccbc
    style HotFix fill:#ff8a65
    style BugDiag fill:#ffccbc
    style BugComplete fill:#c8e6c9
    style End fill:#c8e6c9
    style BrainIdea fill:#fff9c4
    style BrainDesign fill:#fff9c4
    style LitePlan fill:#b3e5fc
    style FullPlan fill:#b3e5fc
    style Execute fill:#c5e1a5
    style TDD fill:#ffccbc
    style TestGen fill:#ffccbc
    style TestCycle fill:#ffccbc
    style Review fill:#d1c4e9
    style Complete fill:#c8e6c9
```
**Flowchart notes**:
- The first decision is "Is this a bug fix?"
- Covers the complete Ideation, Design, Planning, Execution, Testing, and Review phases
- Each phase comes with concrete command recommendations
- Supports both the lightweight and the full planning path
- Includes the test decision: TDD, post-hoc tests, or the test-fix cycle
- Includes multiple code review options
#### Minimum Execution Units
**Definition**: A set of commands that must execute together as an atomic group; splitting them breaks the logical flow.
**Design rationale**:
- **Prevents incomplete states**: avoids generating tasks without executing them
- **User experience**: the user gets complete results, not intermediate artifacts
- **Workflow integrity**: preserves the logical coherence of multi-step operations
**Planning + Execution Units**:
| Unit Name | Commands | Purpose | Output |
|---------|----------|------|------|
| **Quick Implementation** | lite-plan → lite-execute | Lightweight planning and immediate execution | Working code |
| **Multi-CLI Planning** | multi-cli-plan → lite-execute | Multi-perspective analysis and execution | Working code |
| **Bug Fix** | lite-fix → lite-execute | Quick bug diagnosis and fix execution | Fixed code |
| **Full Planning + Execution** | plan → execute | Detailed planning and execution | Working code |
| **Verified Planning + Execution** | plan → plan-verify → execute | Planning with verification and execution | Working code |
| **Replanning + Execution** | replan → execute | Update plan and execute changes | Working code |
| **TDD Planning + Execution** | tdd-plan → execute | Test-driven development planning and execution | Working code |
| **Test Generation + Execution** | test-gen → execute | Generate test suite and execute | Generated tests |
**Testing Units**:
| Unit Name | Commands | Purpose | Output |
|---------|----------|------|------|
| **Test Validation** | test-fix-gen → test-cycle-execute | Generate test tasks and run the test-fix cycle | Tests passed |
**Review Units**:
| Unit Name | Commands | Purpose | Output |
|---------|----------|------|------|
| **Code Review (Session)** | review-session-cycle → review-fix | Complete review cycle and apply fixes | Fixed code |
| **Code Review (Module)** | review-module-cycle → review-fix | Module review cycle and apply fixes | Fixed code |
### 3-Phase Workflow
#### Phase 1: Analyze Requirements
Parse the task description and extract the key information:
```javascript
function analyzeRequirements(taskDescription) {
  return {
    goal: extractMainGoal(taskDescription),           // main goal
    scope: extractScope(taskDescription),             // scope
    constraints: extractConstraints(taskDescription), // constraints
    complexity: determineComplexity(taskDescription), // complexity
    task_type: detectTaskType(taskDescription)        // task type
  };
}
```
**Task type detection patterns**:
| Task Type | Detection Keywords | Example |
|---------|-----------|------|
| `bugfix` | fix, bug, error, crash, fail, debug | "Fix the login timeout issue" |
| `tdd` | tdd, test-driven, write tests first, test first | "Build the payment module with TDD" |
| `test-fix` | test fail, fix test, failing test | "Fix the failing integration tests" |
| `test-gen` | generate test, write tests, add test | "Generate tests for the auth module" |
| `review` | review, code review | "Review the payment module code" |
| `brainstorm` | uncertain, explore, research, what if, trade-offs | "Explore caching approaches" |
| `multi-cli` | multi-perspective, compare solutions, cross-verify, multi-cli | "Compare OAuth solutions" |
| `feature` | (default) | "Implement user registration" |
**Complexity assessment**:
| Weight | Keywords |
|------|--------|
| +2 | refactor, migrate, architect, system |
| +2 | multiple, across, all, entire |
| +1 | integrate, api, database |
| +1 | security, performance, scale |
- **High complexity** (≥4): automatically selects the complex workflow
- **Medium complexity** (2-3): automatically selects the standard workflow
- **Low complexity** (<2): automatically selects the lightweight workflow
#### Phase 2: Discover Commands & Recommend Chain
**Command port system** - dynamic command chain assembly based on input/output ports:
```javascript
// Example command port definitions
const commandPorts = {
  'lite-plan': {
    input: ['requirement'],                        // input port: requirement
    output: ['plan'],                              // output port: plan
    atomic_group: 'quick-implementation'           // minimum execution unit
  },
  'lite-execute': {
    input: ['plan', 'multi-cli-plan', 'lite-fix'], // accepts several input types
    output: ['code'],                              // output port: code
    atomic_groups: [                               // may participate in several units
      'quick-implementation',
      'multi-cli-planning',
      'bug-fix'
    ]
  },
  'plan': {
    input: ['requirement'],
    output: ['detailed-plan'],
    atomic_groups: [
      'full-planning-execution',
      'verified-planning-execution'
    ]
  },
  'execute': {
    input: ['detailed-plan', 'verified-plan', 'replan', 'test-tasks', 'tdd-tasks'],
    output: ['code'],
    atomic_groups: [
      'full-planning-execution',
      'verified-planning-execution',
      'replanning-execution',
      'test-generation-execution',
      'tdd-planning-execution'
    ]
  }
};
```
**Task type to port flow mapping**:
| Task Type | Input Port | Output Port | Example Pipeline |
|---------|---------|---------|---------|
| `bugfix` | bug-report | test-passed | Bug report → lite-fix → fix → test-passed |
| `tdd` | requirement | tdd-verified | Requirement → tdd-plan → execute → tdd-verify |
| `test-fix` | failing-tests | test-passed | Failing tests → test-fix-gen → test-cycle-execute |
| `test-gen` | code/session | test-passed | Code → test-gen → execute → test-passed |
| `review` | code/session | review-verified | Code → review-* → review-fix |
| `feature` | requirement | code/test-passed | Requirement → plan → execute → code |
**Pipeline visualization example**:
```
Requirement → 【lite-plan → lite-execute】→ Code → 【test-fix-gen → test-cycle-execute】→ Tests passed
              └──── Quick Implementation ────┘     └────── Test Validation ──────┘
```
**User confirmation interface**:
```
Recommended Command Chain:

Pipeline (visual):
  Requirement → lite-plan → Plan → lite-execute → Code → test-cycle-execute → Tests passed

Commands:
  1. /workflow:lite-plan
  2. /workflow:lite-execute
  3. /workflow:test-cycle-execute

Proceed? [Confirm / Show Details / Adjust / Cancel]
```
#### Phase 3: Execute Sequential Command Chain
**Serial blocking model** - execute one command at a time, continuing via hook callbacks:
```javascript
async function executeCommandChain(chain, analysis) {
  const sessionId = `ccw-coord-${Date.now()}`;
  const stateDir = `.workflow/.ccw-coordinator/${sessionId}`;

  // Initialize state
  const state = {
    session_id: sessionId,
    status: 'running',
    created_at: new Date().toISOString(),
    analysis: analysis,
    command_chain: chain.map((cmd, idx) => ({ ...cmd, index: idx, status: 'pending' })),
    execution_results: [],
    prompts_used: []
  };

  // Persist the initial state immediately
  Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));

  // Execute the first command
  for (let i = 0; i < chain.length; i++) {
    const cmd = chain[i];

    // Assemble the prompt
    let prompt = formatCommand(cmd, state.execution_results, analysis);
    prompt += `\n\nTask: ${analysis.goal}`;

    // Launch the CLI in the background
    const taskId = Bash(
      `ccw cli -p "${escapePrompt(prompt)}" --tool claude --mode write`,
      { run_in_background: true }
    ).task_id;

    // Save a checkpoint
    state.execution_results.push({
      index: i,
      command: cmd.command,
      status: 'in-progress',
      task_id: taskId,
      session_id: null,
      artifacts: [],
      timestamp: new Date().toISOString()
    });
    state.command_chain[i].status = 'running';
    Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));

    // Stop immediately - wait for the hook callback
    break;
  }

  state.status = 'waiting';
  Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
  return state;
}
```
**Intelligent argument assembly**:
| Command Type | Argument Pattern | Example |
|---------|---------|------|
| Planning commands | task description | `/workflow:lite-plan -y "Implement user authentication"` |
| Execution commands (with plan) | `--resume-session` | `/workflow:execute -y --resume-session="WFS-plan-001"` |
| Execution commands (standalone) | `--in-memory` or task description | `/workflow:lite-execute -y --in-memory` |
| Session-based | `--session` | `/workflow:test-fix-gen -y --session="WFS-impl-001"` |
| Bug fix | issue description | `/workflow:lite-fix -y "Fix the timeout error"` |
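`formatCommand`, referenced by `executeCommandChain`, is not defined in this document; the sketch below shows one plausible implementation of the argument-assembly rules in the table above (the command-name cases are assumptions):
```javascript
// Hypothetical formatCommand implementing the argument patterns above
function formatCommand(cmd, previousResults, analysis) {
  // Most recent command that produced a session ID, if any
  const prev = [...previousResults].reverse().find(r => r.session_id);
  switch (cmd.name) {
    case 'plan':
    case 'lite-plan':
      return `${cmd.command} -y "${analysis.goal}"`;                    // planning: task description
    case 'execute':
      return `${cmd.command} -y --resume-session="${prev.session_id}"`; // resume the planning session
    case 'lite-execute':
      return `${cmd.command} -y --in-memory`;                           // standalone execution
    case 'test-fix-gen':
      return `${cmd.command} -y --session="${prev.session_id}"`;        // session-based
    case 'lite-fix':
      return `${cmd.command} -y "${analysis.goal}"`;                    // bug fix: issue description
    default:
      return `${cmd.command} -y`;
  }
}
```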
### State File Structure
**Location**: `.workflow/.ccw-coordinator/{session_id}/state.json`
```json
{
  "session_id": "ccw-coord-20250124-143025",
  "status": "running|waiting|completed|failed",
  "created_at": "2025-01-24T14:30:25Z",
  "updated_at": "2025-01-24T14:35:45Z",
  "analysis": {
    "goal": "Implement user registration",
    "scope": ["authentication", "user_management"],
    "constraints": ["no breaking changes"],
    "complexity": "medium",
    "task_type": "feature"
  },
  "command_chain": [
    {
      "index": 0,
      "command": "/workflow:plan",
      "name": "plan",
      "description": "Detailed planning",
      "status": "completed"
    },
    {
      "index": 1,
      "command": "/workflow:execute",
      "name": "execute",
      "description": "Execute the implementation",
      "status": "running"
    },
    {
      "index": 2,
      "command": "/workflow:test-cycle-execute",
      "name": "test-cycle-execute",
      "status": "pending"
    }
  ],
  "execution_results": [
    {
      "index": 0,
      "command": "/workflow:plan",
      "status": "completed",
      "task_id": "task-001",
      "session_id": "WFS-plan-20250124",
      "artifacts": ["IMPL_PLAN.md", "exploration-architecture.json"],
      "timestamp": "2025-01-24T14:30:25Z",
      "completed_at": "2025-01-24T14:30:45Z"
    },
    {
      "index": 1,
      "command": "/workflow:execute",
      "status": "in-progress",
      "task_id": "task-002",
      "session_id": null,
      "artifacts": [],
      "timestamp": "2025-01-24T14:32:00Z"
    }
  ],
  "prompts_used": [
    {
      "index": 0,
      "command": "/workflow:plan",
      "prompt": "/workflow:plan -y \"Implement user registration...\"\n\nTask: Implement user registration..."
    },
    {
      "index": 1,
      "command": "/workflow:execute",
      "prompt": "/workflow:execute -y --resume-session=\"WFS-plan-20250124\"\n\nTask: Implement user registration\n\nPrevious results:\n- /workflow:plan: WFS-plan-20250124 (IMPL_PLAN.md)"
    }
  ]
}
```
**State transitions**:
```
running → waiting → [hook callback] → waiting → [hook callback] → completed
   ↓                                                                  ↑
 failed ←─────────────────────────────────────────────────────────────┘
```
**State values**:
- `running`: the orchestrator is actively executing (launching a CLI command)
- `waiting`: paused, waiting for a hook callback to continue
- `completed`: all commands finished successfully
- `failed`: user aborted or an unrecoverable error occurred
### Artifact Structure
```
.workflow/.ccw-coordinator/{session_id}/
└── state.json              # Complete session state
    ├── session_id          # Session ID
    ├── status              # Current status
    ├── analysis            # Requirement analysis result
    ├── command_chain       # Command chain definition
    ├── execution_results   # List of execution results
    └── prompts_used        # Prompts already used
```
### Typical Scenarios
#### Scenario 1: Simple Feature Development
```bash
User: "Implement user avatar upload"

# CCW Coordinator runs automatically:
Phase 1: Analysis
  Goal: Implement user avatar upload
  Complexity: simple
  Task Type: feature
Phase 2: Recommend command chain
  Pipeline: Requirement → 【lite-plan → lite-execute】→ Code → 【test-fix-gen → test-cycle-execute】→ Tests passed
  Commands: lite-plan, lite-execute, test-fix-gen, test-cycle-execute
Phase 3: User confirms and executes
  → lite-plan: generate plan (in memory)
  → lite-execute: implement the code
  → test-fix-gen: generate test tasks
  → test-cycle-execute: test-fix cycle

Artifacts: .workflow/.ccw-coordinator/ccw-coord-20250124-xxx/state.json
```
#### Scenario 2: Bug Fix
```bash
User: "Fix the payment timeout issue"

# CCW Coordinator runs automatically:
Phase 1: Analysis
  Goal: Fix payment timeout
  Task Type: bugfix
Phase 2: Recommend command chain
  Pipeline: Bug report → 【lite-fix → lite-execute】→ Fix → 【test-fix-gen → test-cycle-execute】→ Tests passed
  Commands: lite-fix, lite-execute, test-fix-gen, test-cycle-execute
Phase 3: Execution
  → lite-fix: diagnose the root cause, generate a fix plan
  → lite-execute: apply the fix
  → test-fix-gen: generate regression tests
  → test-cycle-execute: verify the fix

Artifacts:
  .workflow/.ccw-coordinator/ccw-coord-20250124-xxx/state.json
  .workflow/.lite-fix/payment-timeout-20250124-xxx/diagnosis.json
```
#### Scenario 3: Complex Feature Development
```bash
User: "Implement a complete real-time collaborative editing system"

# CCW Coordinator runs automatically:
Phase 1: Analysis
  Goal: Implement real-time collaborative editing
  Complexity: complex
  Task Type: feature
Phase 2: Recommend command chain
  Pipeline: Requirement → 【plan → plan-verify → execute】→ Code → 【review-session-cycle → review-fix】→ Fixes
  Commands: plan, plan-verify, execute, review-session-cycle, review-fix
Phase 3: Execution
  → plan: full planning (persisted)
  → plan-verify: verify plan quality
  → execute: implement the feature
  → review-session-cycle: multi-dimensional review
  → review-fix: apply the review fixes

Artifacts:
  .workflow/.ccw-coordinator/ccw-coord-20250124-xxx/state.json
  .workflow/active/WFS-realtime-collab-xxx/IMPL_PLAN.md
```
### Commands
```bash
/ccw-coordinator "task description"
# Auto-analyze, recommend command chain, user confirmation, sequential execution
```
### Use Cases
- ✅ Complex tasks that need multiple commands working together
- ✅ Unsure which command combination to use
- ✅ Want an automated end-to-end flow
- ✅ Need full state tracking and resumability
- ✅ Team collaboration that needs a unified execution flow
- ❌ A single simple command is enough
- ❌ Already know exactly which commands to use
### Relationship with Other Levels
| Level | Manual Degree | CCW Coordinator Role |
|-------|---------|---------------------|
| Level 1-4 | Manual command selection | Auto-combines these commands |
| Level 5 | Auto command selection | Intelligent orchestrator |
**CCW Coordinator uses Level 1-4 commands internally**:
- Analyzes the task → auto-selects the appropriate Level
- Assembles the command chain → includes Level 1-4 commands
- Executes sequentially → follows the minimum execution units
---
## Issue Workflow
**Main Workflow supplement - continuous post-development maintenance**
@@ -595,6 +1175,8 @@ Phase 3: Synthesis Integration
| Test-driven development | `tdd-plan → execute → tdd-verify` | 3 |
| Fixing failing tests | `test-fix-gen → test-cycle-execute` | 3 |
| New features, architecture design | `brainstorm:auto-parallel → plan → execute` | 4 |
| Uncertain which commands are needed | `ccw-coordinator` (auto-analysis) | 5 |
| Need end-to-end automation | `ccw-coordinator` (auto-recommend + execute) | 5 |
| Post-development issue fixes | Issue Workflow | - |
### Decision Flowchart
@@ -606,6 +1188,12 @@ Phase 3: Synthesis Integration
│ ├─ Yes → Issue Workflow
│ └─ No ↓
├─ Know exactly which commands to use?
│ ├─ Yes → use the corresponding Level 1-4 commands directly
│ └─ No → Level 5 (ccw-coordinator auto-orchestration)
│   │
│   └─ Auto-analyze → recommend chain → user confirms → sequential execution
├─ Are requirements clear?
│ ├─ Uncertain → Level 4 (brainstorm:auto-parallel)
│ └─ Clear ↓
@@ -713,6 +1301,7 @@ mcp__ace-tool__search_context({
| **2** | Lightweight | `lite-plan`, `lite-fix`, `multi-cli-plan` | Memory/Lightweight files | → `lite-execute` |
| **3** | Standard | `plan`, `tdd-plan`, `test-fix-gen` | Session persistence | → `execute` / `test-cycle-execute` |
| **4** | Brainstorm | `brainstorm:auto-parallel` → `plan` | Multi-role analysis + Session | → `execute` |
| **5** | Intelligent | `ccw-coordinator` | Full orchestration state | Auto-analyze → recommend chain → sequential execution |
| **-** | Issue | `discover` → `plan` → `queue` → `execute` | Issue records | Worktree isolation (optional) |
### Core Principles
@@ -720,11 +1309,16 @@ mcp__ace-tool__search_context({
1. The **Main Workflow** solves parallelism through **dependency analysis + parallel agent execution**, no worktree needed
2. The **Issue Workflow** is a **supplementary mechanism** that supports worktree isolation to keep the main branch stable
3. Choose the workflow level that matches the task's complexity and **avoid over-engineering**
4. **Level 1-4** require manually selecting commands; **Level 5** orchestrates the command chain automatically
5. Level 2 workflow selection criteria:
- Clear requirements → `lite-plan`
- Bug fix → `lite-fix`
- Multiple perspectives needed → `multi-cli-plan`
6. Level 3 workflow selection criteria:
- Standard development → `plan`
- Test-driven → `tdd-plan`
- Test fixes → `test-fix-gen`
7. When to use Level 5:
- Uncertain which command combination is needed → `ccw-coordinator`
- Need end-to-end workflow automation → `ccw-coordinator`
- Need full state tracking and resumability → `ccw-coordinator`
