Compare commits

...

67 Commits

Author SHA1 Message Date
catlog22
21d764127f Add command relationships, essential commands, and validation script
- Introduced `command-relationships.json` to define internal calls, next steps, and prerequisites for various workflows.
- Created `essential-commands.json` to document key commands, their descriptions, arguments, and usage scenarios.
- Added `validate-help.py` script to check for the existence of source files referenced in command definitions, ensuring all necessary files are present.
2026-01-29 17:29:37 +08:00
catlog22
860dbdab56 fix: Unify execution IDs between broadcast events and session storage
- Pass generated executionId to cliExecutorTool.execute as id parameter
- Ensures CLI_EXECUTION_STARTED broadcast uses same ID as saved session
- Fixes "Conversation not found" errors when querying by broadcast ID
- Add DEBUG logging for executionId tracking

This resolves the mismatch where:
  - The broadcast event used an ID from Date.now() at broadcast time
  - The saved session used a different ID from Date.now() at completion time
  - Now all paths use the single ID generated at cli.ts:868

Changes:
- cli.ts:868 - executionId generated once
- cli.ts:1001 - pass executionId to execute() as id parameter
- cli-executor-core.ts automatically uses passed id as conversationId
2026-01-29 16:59:00 +08:00
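The single-ID pattern this commit describes can be shown as a minimal sketch; `broadcast`, `sessions`, and the ID format are illustrative assumptions, not the actual CCW API:

```typescript
// Hypothetical sketch: generate the execution ID once and reuse it for both
// the broadcast event and the saved session (names are illustrative).
type ExecutionRecord = { id: string; prompt: string; startedAt: number };

function startCliExecution(
  prompt: string,
  broadcast: (event: string, payload: object) => void,
  sessions: Map<string, ExecutionRecord>,
): string {
  // Single source of truth: one ID generated up front (cf. cli.ts:868)
  const executionId = `exec-${Date.now()}-${Math.random().toString(36).slice(2, 8)}`;

  // Broadcast and storage now share the same ID, so a later lookup by the
  // broadcast ID finds the stored session instead of "Conversation not found".
  broadcast('CLI_EXECUTION_STARTED', { id: executionId, prompt });
  sessions.set(executionId, { id: executionId, prompt, startedAt: Date.now() });
  return executionId;
}
```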
catlog22
113dce55c5 fix: Auto-detect JSON Lines output format for Codex CLI
Problem: Codex CLI uses --json flag to output JSONL events, but executor was using plain text parser. This prevented proper parsing of structured events, breaking session creation.

Root cause: buildCommand() added --json flag for Codex but never communicated this to the output parser. Result: JSONL events treated as raw text → session markers lost.

Solution:
- Extend buildCommand() to return outputFormat
- Auto-detect 'json-lines' when tool is 'codex'
- Use auto-detected format in executeCliTool()
- Properly parse structured events and extract session data

Files modified:
- ccw/src/tools/cli-executor-utils.ts: Add output format auto-detection
- ccw/src/tools/cli-executor-core.ts: Use auto-detected format for parser
- ccw/src/commands/cli.ts: Add debug instrumentation

Verified:
- Codex outputs valid JSONL (confirmed via direct test)
- CLI_EXECUTION_STARTED events broadcast correctly
- Issue was downstream in output parsing, not event transmission
2026-01-29 16:38:30 +08:00
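A minimal sketch of the auto-detection flow, assuming simplified shapes for `buildCommand()` and the parser; the real code in cli-executor-utils.ts differs in detail:

```typescript
// Illustrative only: buildCommand() returns the output format alongside the
// command so the executor can choose the right parser.
type OutputFormat = 'plain-text' | 'json-lines';
interface BuiltCommand { args: string[]; outputFormat: OutputFormat }

function buildCommand(tool: string, prompt: string): BuiltCommand {
  const args = [tool, '-p', prompt];
  if (tool === 'codex') {
    args.push('--json'); // Codex emits JSONL events with --json
    return { args, outputFormat: 'json-lines' };
  }
  return { args, outputFormat: 'plain-text' };
}

function parseOutput(raw: string, format: OutputFormat): unknown[] {
  if (format === 'json-lines') {
    // One JSON event per line; structured events carry the session markers.
    return raw.split('\n').filter(Boolean).map((line) => JSON.parse(line));
  }
  return [raw];
}
```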
catlog22
0b791c03cf fix: Resolve API path resolution for document loading
- Fixed source paths in command.json: changed ../../../ to ../../
  (sources are relative to .claude/skills/ccw-help/, so two levels are needed to reach .claude/)
- Rewrote help-routes.ts /api/help/command-content endpoint:
  - Use resolve() to properly handle ../ sequences in paths
  - Resolve paths against commandJsonDir (where command.json is located)
  - Maintain security checks to prevent path traversal
- Verified all document paths now resolve correctly to .claude/commands/*

This fixes the 404 errors when loading command documentation in Help page.
2026-01-29 16:29:10 +08:00
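A sketch of the resolution logic described above, assuming a simple allowed-root check; the actual endpoint in help-routes.ts carries more validation:

```typescript
import { resolve, sep } from 'node:path';

// Resolve a documented source path against the directory holding command.json,
// then reject results that escape the allowed root (path traversal guard).
function resolveDocPath(commandJsonDir: string, source: string, allowedRoot: string): string {
  const absolute = resolve(commandJsonDir, source); // collapses ../ sequences
  const root = resolve(allowedRoot);
  if (absolute !== root && !absolute.startsWith(root + sep)) {
    throw new Error(`Path escapes allowed root: ${source}`);
  }
  return absolute;
}
```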
catlog22
bbc94fb73a chore: Update ccw-help command index with all 73 commands
- Regenerated by analyze_commands.py
- Now includes all workflow, issue, memory, cli, and general commands
- Updated to version 3.0.0 with 73 commands and 19 agents
- Full index sync with file system definitions
2026-01-29 16:00:29 +08:00
catlog22
f5e435f791 feat: Optimize ccw-help skill with user-prompted update mechanism
- Add auto-update.py script for simple index regeneration
- Update SKILL.md with clear update instructions
- Simplify update mechanism: prompt user on skill execution
- Support both automatic and manual update workflows
- Clean version 2.3.0 metadata in command.json
2026-01-29 15:58:51 +08:00
catlog22
86d5be8288 feat: Enhance CCW help system with new command orchestration and dashboard features 2026-01-29 15:43:07 +08:00
catlog22
9762445876 refactor: Convert skill-generator from Chinese to English and remove emoji icons
- Convert all markdown files from Chinese to English
- Remove all emoji/icon decorations (🔧📋⚙️🏁🔍📚)
- Update all section headers, descriptions, and documentation
- Keep all content logic, structure, code examples unchanged
- Maintain template variables and file paths as-is

Files converted (9 files total):
- SKILL.md: Output structure comments
- templates/skill-md.md: All Chinese descriptions and comments
- specs/reference-docs-spec.md: All section headers and explanations
- phases/01-requirements-discovery.md through 05-validation.md (5 files)
- specs/execution-modes.md, skill-requirements.md, cli-integration.md, scripting-integration.md (4 files)
- templates/sequential-phase.md, autonomous-orchestrator.md, autonomous-action.md, code-analysis-action.md, llm-action.md, script-template.md (6 files)

All 16 files in skill-generator are now fully in English.
2026-01-29 15:42:46 +08:00
catlog22
b791c09476 docs: Add reference-docs-spec and optimize skill-generator for proper document organization
- Create specs/reference-docs-spec.md with comprehensive guidelines for phase-based reference document organization
- Update skill-generator's Mandatory Prerequisites to include new reference-docs-spec
- Refactor skill-md.md template to generate phase-based reference tables with 'When to Use' guidance
- Add generateReferenceTable() function to automatically create structured reference sections
- Replace flat template reference lists with phase-based navigation
- Update skill-generator's own SKILL.md to demonstrate correct reference documentation pattern
- Ensure all generated skills will have clear document usage timing and context
2026-01-29 15:28:21 +08:00
catlog22
26283e7a5a docs: Optimize reference documents with phase-based guidance and usage timing 2026-01-29 15:24:38 +08:00
catlog22
1040459fef docs: Add comprehensive summary of unified-execute-with-file implementation 2026-01-29 15:23:41 +08:00
catlog22
0fe8c18a82 docs: Add comparison guide between Claude and Codex unified-execute versions 2026-01-29 15:22:24 +08:00
catlog22
0086413f95 feat: Add Codex unified-execute-with-file prompt
- Create codex version of unified-execute-with-file command
- Supports universal execution of planning/brainstorm/analysis output
- Coordinates multiple agents with smart dependency management
- Features parallel/sequential execution modes
- Unified event logging as single source of truth (execution-events.md)
- Agent context passing through previous execution history
- Knowledge chain: each agent reads full history of prior executions

Codex-specific adaptations:
- Use $VARIABLE format for argument substitution
- Simplified header configuration (description + argument-hint)
- Plan-format-agnostic parsing (IMPL_PLAN.md, synthesis.json, conclusions.json, debug recommendations)
- Multi-wave execution orchestration
- Dynamic artifact location handling

Execution flow:
1. Parse and validate plan from $PLAN_PATH
2. Extract and normalize tasks with dependencies
3. Create execution session (.workflow/.execution/{sessionId}/)
4. Group tasks into execution waves (topological sort)
5. Execute waves sequentially, tasks within wave execute in parallel
6. Unified event logging: execution-events.md (SINGLE SOURCE OF TRUTH)
7. Each agent reads previous executions for context
8. Final statistics and completion report
2026-01-29 15:21:40 +08:00
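The wave grouping in step 4 amounts to a level-by-level topological sort; a sketch under assumed field names:

```typescript
// Kahn-style grouping: tasks whose dependencies are all satisfied form the
// next wave; waves run sequentially, tasks within a wave run in parallel.
interface PlanTask { id: string; dependsOn: string[] }

function groupIntoWaves(tasks: PlanTask[]): PlanTask[][] {
  const waves: PlanTask[][] = [];
  const done = new Set<string>();
  let remaining = [...tasks];
  while (remaining.length > 0) {
    const wave = remaining.filter((t) => t.dependsOn.every((d) => done.has(d)));
    if (wave.length === 0) throw new Error('Circular dependency in plan');
    wave.forEach((t) => done.add(t.id));
    remaining = remaining.filter((t) => !done.has(t.id));
    waves.push(wave);
  }
  return waves;
}
```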
catlog22
8ff698ae73 refactor: Optimize unified-execute-with-file command documentation
- Consolidate Phase 3 (Progress Tracking) from 205+ to 30 lines by merging redundant explanations of execution-events.md format
- Merge error handling logic from separate handleTaskFailure function into executeTask catch block
- Remove duplicate Execution Document Template section (identical to Step 1.2)
- Consolidate Phase 4 (Completion & Summary) from 90+ to 40 lines
- Overall reduction: 1094 → 807 lines (26% reduction) while preserving all technical information

Key improvements:
- Single source of truth for execution state (execution-events.md)
- Clearer knowledge chain explanation between agents
- More concise yet complete Phase documentation
- Unified event logging format is now prominently featured
2026-01-29 15:19:40 +08:00
catlog22
8cdd6a8b5f Add execution and planning agent prompts, specifications, and quality standards
- Created execution agent prompt for issue execution with detailed deliverables and validation criteria.
- Developed planning agent prompt to analyze issues and generate structured solution plans.
- Introduced issue handling specifications outlining the workflow and issue structure.
- Established quality standards for evaluating completeness, consistency, correctness, and clarity of solutions.
- Defined solution schema specification detailing the required structure and validation rules for solutions.
- Documented subagent roles and responsibilities, emphasizing the dual-agent strategy for improved workflow efficiency.
2026-01-29 15:15:42 +08:00
catlog22
b86a8afd8b feat: Add unified execution engine documentation supporting multi-task coordination and incremental execution 2026-01-29 15:14:56 +08:00
catlog22
53bd5a6d4b feat: Add custom prompts documentation explaining how to create and manage reusable prompts 2026-01-29 11:30:29 +08:00
catlog22
3a7bbe0e42 feat: Optimize Codex prompt commands parameter flexibility
- Enhanced 14 commands with flexible parameter support
- Standardized argument formats across all commands
- Added English parameter descriptions for clarity
- Maintained backward compatibility

Commands optimized:
- analyze-with-file: Added --depth, --max-iterations
- brainstorm-with-file: Added --perspectives, --max-ideas, --focus
- debug-with-file: Added --scope, --focus, --depth
- issue-execute: Unified format, added --skip-tests, --skip-build, --dry-run
- lite-plan-a/b/c: Added depth and execution control flags
- execute: Added --parallel, --filter, --skip-tests
- brainstorm-to-cycle: Unified to --session format, added --launch
- lite-fix: Added --hotfix, --severity, --scope
- clean: Added --focus, --target, --confirm
- lite-execute: Unified --plan format, added execution control
- compact: Added --description, --tags, --force
- issue-new: Complete flexible parameter support

Unchanged (already optimal):
- issue-plan, issue-discover, issue-queue, issue-discover-by-prompt
2026-01-29 11:29:39 +08:00
catlog22
04a84f9893 feat: Simplify issue creation documentation by removing examples and clarifying title 2026-01-29 10:51:42 +08:00
catlog22
11638facf7 feat: Add --to-file option to ccw cli for saving output to files
Adds support for saving CLI execution output directly to files with the following features:
- Support for relative paths: --to-file output.txt
- Support for nested directories: --to-file results/analysis/output.txt (auto-creates directories)
- Support for absolute paths: --to-file /tmp/output.txt or --to-file D:/results/output.txt
- Works in both streaming and non-streaming modes
- Automatically creates parent directories if they don't exist
- Proper error handling with user-friendly messages
- Shows file save location in completion feedback

Implementation details:
- Updated CLI option parser in ccw/src/cli.ts
- Added toFile parameter to CliExecOptions interface
- Implemented file saving logic in execAction() for both streaming and non-streaming modes
- Updated HTTP API endpoint /api/cli/execute to support toFile parameter
- All changes are backward compatible

Testing:
- Tested with relative paths (single and nested directories)
- Tested with absolute paths (Windows and Unix style)
- Tested with streaming mode
- All tests passed successfully
2026-01-29 09:48:30 +08:00
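The directory-creation behavior reduces to a small pattern; a sketch with assumed names — the real logic in execAction() also covers streaming mode and user feedback:

```typescript
import { mkdirSync, writeFileSync } from 'node:fs';
import { dirname, resolve } from 'node:path';

// Save CLI output to the --to-file target; relative and absolute paths both
// work, and missing parent directories are created automatically.
function saveOutputToFile(output: string, toFile: string): string {
  const target = resolve(toFile);
  mkdirSync(dirname(target), { recursive: true });
  writeFileSync(target, output, 'utf8');
  return target; // echoed back in the completion feedback
}
```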
catlog22
4d93ffb06c feat: Add migration handling for Codex old reference format in CLI manager 2026-01-28 23:37:46 +08:00
catlog22
204cb20617 chore(release): version 6.3.49
- Update package.json version to 6.3.49
- Add changelog entry for v6.3.49 with all commits since v6.3.48
- New features: CLI tools enhancements, skills system improvements, security fixes
- Documentation updates and UI improvements
2026-01-28 23:34:31 +08:00
catlog22
63f0daebbb feat: Update initialization process to prioritize in-memory configuration for CLI tool selection 2026-01-28 23:20:50 +08:00
catlog22
6ac041c1d8 feat: Enhance Codex CLI settings with toggle and refresh actions 2026-01-28 23:05:31 +08:00
catlog22
279adfd391 feat: Implement Codex CLI enhancement settings with API integration and UI toggle 2026-01-28 23:01:18 +08:00
catlog22
0a07138c27 feat: Add ccw-cli-tools skill specification with unified execution framework and configuration-driven tool selection 2026-01-28 22:55:36 +08:00
catlog22
a5d9e8ca87 feat: Enhance lite-skill-generator with single file output and improved validation 2026-01-28 22:23:19 +08:00
catlog22
502c8a09a1 fix(security): Apply 3 critical security fixes
- sec-001: Add validateAllowedPath to /api/file endpoint (path traversal)
- sec-002: Enable CSRF by default with CCW_DISABLE_CSRF opt-out
- sec-003: Add validateAllowedPath to /api/dialog/browse and /api/dialog/open-file (path traversal)

Ref: fix-1738072800000
2026-01-28 22:04:18 +08:00
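A sketch of the sec-002 default flip, assuming the accepted opt-out values; the real flag handling may differ:

```typescript
// CSRF protection is on by default; CCW_DISABLE_CSRF is an explicit opt-out.
function isCsrfEnabled(env: NodeJS.ProcessEnv = process.env): boolean {
  const optOut = (env.CCW_DISABLE_CSRF ?? '').toLowerCase();
  return !['1', 'true', 'yes'].includes(optOut); // accepted values are assumed
}
```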
catlog22
ed0255b8a2 Add skill tuning diagnosis report for skill-generator
- Introduced a new JSON file `skill-tuning-diagnosis.json` containing a comprehensive diagnosis of the skill-generator.
- Documented critical issues related to context management and data flow, including:
  - Full state serialization leading to unbounded context growth.
  - Scattered state writing without a unified schema.
  - Lack of input state schema validation in autonomous orchestrators.
- Provided detailed descriptions, impacts, root causes, and fix strategies for each identified issue.
- Summarized recommendations with priority levels for urgent fixes.
2026-01-28 22:00:20 +08:00
catlog22
6e94fc0740 chore: remove CLI endpoints section from Codex Code Guidelines 2026-01-28 21:33:43 +08:00
catlog22
b361a8c041 Add CLI endpoints documentation and unified script template for Bash and Python
- Updated AGENTS.md to include CLI tools usage and configuration details.
- Introduced a new script template for both Bash and Python, outlining usage context, calling conventions, and implementation guidelines.
- Provided examples for common patterns in both Bash and Python scripts.
- Established a directory convention for script organization and naming.
2026-01-28 21:29:21 +08:00
catlog22
24dad8cefd Refactor orchestrator logic and enhance problem taxonomy
- Updated orchestrator decision logic to improve state management and action selection.
- Introduced structured termination checks and action selection criteria.
- Enhanced state update mechanism with sliding window for action history and error tracking.
- Revised problem taxonomy for skill execution issues, consolidating categories and refining detection patterns.
- Improved severity calculation method for issue prioritization.
- Streamlined fix mapping strategies for better clarity and usability.
2026-01-28 21:08:49 +08:00
catlog22
071c98d89c feat: add brainstorm-to-cycle adapter for converting brainstorm output to parallel-dev-cycle input 2026-01-28 20:51:35 +08:00
catlog22
994718dee2 chore: remove unnecessary blank line in auto-parallel command documentation 2026-01-28 20:36:41 +08:00
catlog22
3998d24e32 Enhance skill generator documentation and templates
- Updated Phase 1 and Phase 2 documentation to include next phase links and data flow details.
- Expanded Phase 5 documentation to include comprehensive validation and README generation steps, along with validation report structure.
- Added purpose and usage context sections to various action and script templates (e.g., autonomous-action, llm-action, script-bash).
- Improved commands management by simplifying the command scanning logic and enabling/disabling commands through renaming files.
- Enhanced dashboard command manager to format group names and display nested groups with appropriate icons and colors.
- Updated LiteLLM executor to allow model overrides during execution.
- Added action reference guide and template reference sections to the skill-tuning SKILL.md for better navigation and understanding.
2026-01-28 20:34:03 +08:00
catlog22
29274ee943 feat(codex): add brainstorm-with-file prompt for interactive brainstorming workflow
- Add multi-perspective brainstorming workflow
- Support creative, pragmatic, and systematic analysis
- Include diverge-converge cycles with user interaction
- Add deep dive, devil's advocate, and idea merging
- Document thought evolution in brainstorm.md
2026-01-28 20:33:13 +08:00
catlog22
46d5739935 fix(changelog): update workflow references from review-fix to review-cycle-fix for consistency 2026-01-28 20:30:03 +08:00
catlog22
152cab2b7e feat: update review commands to use review-cycle-fix for automated fixing 2026-01-28 20:24:59 +08:00
catlog22
0cc5101c0e feat: Add phases for document consolidation, assembly, and compliance refinement
- Introduced Phase 2.5: Consolidation Agent to summarize analysis outputs and generate design overviews.
- Added Phase 4: Document Assembly to create index-style documents linking chapter files.
- Implemented Phase 5: Compliance Review & Iterative Refinement for CPCC compliance checks and updates.
- Established CPCC Compliance Requirements document outlining mandatory sections and validation functions.
- Created a base template for analysis agents to ensure consistency and efficiency in execution.
2026-01-28 19:57:24 +08:00
catlog22
4c78f53bcc feat: add commands management feature with API endpoints and UI integration
- Implemented commands routes for listing, enabling, and disabling commands.
- Created commands manager view with accordion groups for better organization.
- Added loading states and confirmation dialogs for enabling/disabling commands.
- Enhanced error handling and user feedback for command operations.
- Introduced CSS styles for commands manager UI components.
- Updated navigation to include commands manager link.
- Refactored existing code for better maintainability and clarity.
2026-01-28 08:26:37 +08:00
catlog22
cc5a5716cf fix(skills): improve robustness of enable/disable operations
- Add rollback in moveDirectory when rmSync fails after cpSync
- Add transaction rollback in disable/enableSkill when config save fails
- Surface config corruption by throwing on JSON parse errors
- Add robust JSON error parsing with fallback in frontend
- Add loading state and double-click prevention for toggle button
2026-01-28 08:25:59 +08:00
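The first bullet's copy-then-remove rollback can be sketched as follows; the real moveDirectory handles more edge cases:

```typescript
import { cpSync, rmSync, existsSync } from 'node:fs';

// Move by copy-then-remove; if the remove fails after a successful copy,
// roll back the copy so the move stays atomic from the caller's view.
function moveDirectory(src: string, dest: string): void {
  cpSync(src, dest, { recursive: true });
  try {
    rmSync(src, { recursive: true, force: true });
  } catch (err) {
    if (existsSync(dest)) rmSync(dest, { recursive: true, force: true });
    throw err;
  }
}
```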
catlog22
af05874510 feat(skills): enhance moveDirectory function with rollback on failure and update config handling 2026-01-28 00:50:24 +08:00
catlog22
7a40f16235 feat(skills): implement enable/disable functionality for skills
- Added new API endpoints to enable and disable skills.
- Introduced logic to manage disabled skills, including loading and saving configurations.
- Enhanced skills routes to return lists of disabled skills.
- Updated frontend to display disabled skills and allow toggling their status.
- Added internationalization support for new skill status messages.
- Created JSON schemas for plan verification agent and findings.
- Defined new types for skill management in TypeScript.
2026-01-28 00:49:39 +08:00
catlog22
8d178feaac feat: Enhance plan verification and context gathering with support for automatic execution and interactive user selection 2026-01-28 00:12:15 +08:00
catlog22
b3c47294e7 Enhance workflow commands and context management
- Updated `plan.md` to include new fields in context-package.json: prioritized_context, user_intent, priority_tiers, dependency_order, and sorting_rationale.
- Added validation for the existence of the prioritized_context field in context-package.json.
- Modified user decision flow in task generation to present action choices after planning completion.
- Improved context-gathering process in `context-gather.md` to integrate user intent and prioritize context based on user goals.
- Revised conflict-resolution documentation to require planning notes records after conflict analysis.
- Streamlined task generation in `task-generate-agent.md` to utilize pre-sorted context without redundant sorting.
- Removed unused settings persistence functions and corresponding tests from `claude-cli-tools.ts` and `settings-persistence.test.ts`.
2026-01-28 00:02:45 +08:00
catlog22
9989cfcf21 feat: Update task generation and execution limits to optimize multi-module task management 2026-01-27 23:34:31 +08:00
catlog22
1b6ace0447 feat: Add planning notes to support task generation and constraint management 2026-01-27 23:16:01 +08:00
catlog22
a3b303d8e3 Enhance CLI Lite Planning Agent with Mandatory Quality Check
- Added Phase 5: Plan Quality Check to cli-lite-planning-agent.md, detailing mandatory quality validation after plan generation.
- Introduced quality dimensions: completeness, granularity, dependencies, acceptance criteria, implementation steps, and constraint compliance.
- Specified CLI command format for quality check execution and expected output structure.
- Implemented result parsing and auto-fix strategies for minor issues.
- Updated integration flow to ensure quality check is executed before returning the plan to the orchestrator.

Refactor lite-plan.md to reflect internal quality check execution for medium/high complexity plans.

Create new brainstorm-with-file.md for interactive brainstorming workflow, detailing session setup, execution process, and implementation steps.
2026-01-27 23:02:05 +08:00
catlog22
0c1c87f704 fix: Correct the community QR code image extension in README_CN.md
Changed the reference from .jpg to .png to match the actual file name
2026-01-26 09:47:42 +08:00
catlog22
985085c624 Refactor CLI Config Manager and Add Provider Model Routes
- Removed deprecated constants and functions from cli-config-manager.ts.
- Introduced new provider model presets in litellm-provider-models.ts for better organization and management of model information.
- Created provider-routes.ts to handle API endpoints for retrieving provider information and models.
- Added integration tests for provider routes to ensure correct functionality and response structure.
- Implemented unit tests for settings persistence functions, covering various scenarios and edge cases.
- Enhanced error handling and validation in the new routes and settings functions.
2026-01-25 17:27:58 +08:00
catlog22
7c16cc6427 feat: Add convert-to-plan command to convert planning documents into issue solutions 2026-01-25 12:32:42 +08:00
catlog22
6875108dda Add interactive analysis workflow with documented discussions and CLI exploration
- Introduced `analyze-with-file` command for collaborative analysis.
- Implemented session management for new and continued analysis sessions.
- Developed structured phases for topic understanding, exploration, discussion, and conclusion synthesis.
- Created detailed documentation for the workflow, including examples and implementation details.
- Added Codex prompt for deep analysis and exploration of codebase and concepts.
2026-01-25 11:08:13 +08:00
catlog22
9cff6f5f43 refactor(issue): remove 'merged' queue status, use 'archived' instead
- Remove 'merged' from VALID_QUEUE_STATUSES constant
- Update mergeQueues() to set status to 'archived' instead of 'merged'
- Preserve merged_into/merged_at metadata for traceability
- Update frontend to use 'archived' for button visibility checks
- Fix queue --json returning fake queue ID when no active queue exists
2026-01-25 10:56:46 +08:00
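The status change can be illustrated with a sketch; the metadata fields follow the commit message, but the queue shape itself is an assumption:

```typescript
interface IssueQueue {
  id: string;
  status: 'active' | 'paused' | 'archived'; // 'merged' removed from valid statuses
  merged_into?: string;
  merged_at?: string;
}

// Merged queues are archived, keeping merge metadata for traceability.
function archiveMergedQueue(source: IssueQueue, targetId: string): IssueQueue {
  return {
    ...source,
    status: 'archived',
    merged_into: targetId,
    merged_at: new Date().toISOString(),
  };
}
```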
catlog22
fe2536d4cd feat: Add 'merged' queue status with related validation and a status reference document 2026-01-25 10:11:52 +08:00
catlog22
16f27c080a docs: add Level 5 intelligent orchestration workflow guide to English version
- Added Level 5 (CCW Coordinator) chapter with complete Minimum Execution Units definition
- Added 3-Phase Workflow explanation (Analyze Requirements, Discover Commands, Execute)
- Integrated command port system for dynamic pipeline composition
- Added JavaScript task analysis and complexity assessment functions
- Included state file structure and complete Mermaid flowchart (119 lines)
- Updated Quick Selection Table to reference ccw-coordinator for complex workflows
- Updated Decision Flowchart to include Level 5 at top level
- Updated Summary Level Overview table to include Level 5 row
- Updated Core Principles with Level 5 selection criteria and usage guidelines
- Synchronized English version with WORKFLOW_GUIDE_CN.md Level 5 content
2026-01-24 21:41:31 +08:00
catlog22
874b70726d chore: archive unused test scripts and temporary documents
- Moved 6 empty test files (*.test.ts) to archive/
- Moved 5 Python test scripts (*.py) to archive/
- Moved 5 outdated/temporary documents to archive/
- Cleaned up root directory for better organization
2026-01-24 21:26:03 +08:00
catlog22
862365ffaf chore: Remove the Codex Subagent usage specification document 2026-01-24 21:17:59 +08:00
catlog22
b8c807b2f9 feat: add parent/child directory lookup for ccw cli output
- Implement findProjectWithExecution() to search upward through parent directories
- Add automatic project path discovery in outputAction
- Support explicit --project parameter for manual path specification
- Improve error messages with search scope indication
- Display project path in formatted output
- Enable cross-directory execution without working directory dependency
2026-01-24 21:17:21 +08:00
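A sketch of the upward search, assuming `.workflow` as the marker that identifies a project with execution output; the real predicate in findProjectWithExecution() may check more:

```typescript
import { existsSync } from 'node:fs';
import { dirname, join } from 'node:path';

// Walk parent directories from `start` until a project root is found.
function findProjectWithExecution(start: string): string | null {
  let dir = start;
  while (true) {
    if (existsSync(join(dir, '.workflow'))) return dir; // assumed marker
    const parent = dirname(dir);
    if (parent === dir) return null; // reached the filesystem root
    dir = parent;
  }
}
```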
catlog22
7ea6362c50 docs: add Level 5 workflow guide with CCW Coordinator and decision flowchart
- Add "Level 5: 智能编排 (CCW Coordinator)" chapter to WORKFLOW_GUIDE_CN.md
- Integrate 6.2.8 version full lifecycle command decision flowchart (Mermaid)
- Document minimum execution units (最小执行单元) for planning, testing, and review
- Explain 3-phase workflow: requirements analysis, command discovery & recommendation, sequential execution
- Include command port system for dynamic command chain assembly
- Add state file structure (.workflow/.ccw-coordinator/{session_id}/state.json)
- Document 3 typical scenarios: simple feature, bug fix, complex feature development
- Update overview diagram to show Level 5 with automation progression
- Update workflow selection guide, decision flowchart, and summary table
- Add Level 5 relationship documentation to other workflow levels
2026-01-24 21:15:44 +08:00
catlog22
b435391f17 docs: add /ccw and /ccw-coordinator as recommended commands
- Add prominent section highlighting /ccw and /ccw-coordinator as main features
- /ccw: Auto workflow orchestrator for general tasks
- /ccw-coordinator: Smart orchestrator with intelligent recommendations
- Include comparison table, quick examples, and key differences
- Update both English and Chinese READMEs
2026-01-24 15:12:33 +08:00
catlog22
88ff109ac4 chore: bump version to 6.3.48 2026-01-24 15:06:36 +08:00
catlog22
261196a804 docs: update CCW CLI commands with recommended commands and usage examples 2026-01-24 15:05:37 +08:00
catlog22
ea6cb8440f chore: bump version to 6.3.47
- Update ccw-coordinator.md with clarified CLI execution format
- Command-first prompt structure: /workflow:<command> -y <parameters>
- Simplified documentation with universal prompt template
- Clarify that -y is a prompt parameter, not a ccw cli parameter
2026-01-24 14:52:09 +08:00
catlog22
bf896342f4 refactor: adjust prompt structure for command execution clarity 2026-01-24 14:51:05 +08:00
catlog22
f2b0a5bbc9 Refactor code structure and remove redundant changes 2026-01-24 14:47:47 +08:00
catlog22
cf5fecd66d fix(codex-lens): resolve installation issues from frontend
- Add missing README.md file required by setuptools
- Fix deprecated license format in pyproject.toml (use SPDX string instead of TOML table)
- Add MIT LICENSE file for proper packaging
- Verified successful local installation and import

Fixes permission denied error during npm-based installation on macOS
2026-01-24 14:43:39 +08:00
catlog22
86d469ccc9 build: exclude test files from TypeScript compilation 2026-01-24 14:35:05 +08:00
320 changed files with 72110 additions and 18679 deletions

View File

@@ -9,12 +9,7 @@
**Strictly follow the cli-tools.json configuration**
Available CLI endpoints are dynamically defined by the config file:
- Built-in tools and their enable/disable status
- Custom API endpoints registered via the Dashboard
- Managed through the CCW Dashboard Status page
Available CLI endpoints are dynamically defined by the config file
## Tool Execution
- **Context Requirements**: @~/.claude/workflows/context-tools.md
@@ -25,9 +20,12 @@ Available CLI endpoints are dynamically defined by the config file:
- **TaskOutput usage**: Only use `TaskOutput({ task_id: "xxx", block: false })` + sleep loop to poll completion status. NEVER read intermediate output during agent/CLI execution - wait for final result only
### CLI Tool Calls (ccw cli)
- **Default: `run_in_background: true`** - Unless otherwise specified, always use background execution for CLI calls:
- **Default: Use Bash `run_in_background: true`** - Unless otherwise specified, always execute CLI calls in background using Bash tool's background mode:
```
Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
Bash({
command: "ccw cli -p '...' --tool gemini",
run_in_background: true // Bash tool parameter, not ccw cli parameter
})
```
- **After CLI call**: Stop output immediately - let CLI execute in background. **DO NOT use TaskOutput polling** - wait for hook callback to receive results

View File

@@ -1,4 +0,0 @@
{
"interval": "manual",
"tool": "gemini"
}

View File

@@ -55,6 +55,17 @@ color: yellow
**Step-by-step execution**:
```
0. Load planning notes → Extract phase-level constraints (NEW)
Commands: Read('.workflow/active/{session-id}/planning-notes.md')
Output: Consolidated constraints from all workflow phases
Structure:
- User Intent: Original GOAL, KEY_CONSTRAINTS
- Context Findings: Critical files, architecture notes, constraints
- Conflict Decisions: Resolved conflicts, modified artifacts
- Consolidated Constraints: Numbered list of ALL constraints (Phase 1-3)
USAGE: This is the PRIMARY source of constraints. All task generation MUST respect these constraints.
1. Load session metadata → Extract user input
- User description: Original task/feature requirements
- Project scope: User-specified boundaries and goals
@@ -299,25 +310,22 @@ function computeCliStrategy(task, allTasks) {
**execution_config Alignment Rules** (MANDATORY):
```
userConfig.executionMethod → meta.execution_config + implementation_approach
userConfig.executionMethod → meta.execution_config
"agent" →
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
implementation_approach steps: NO command field (agent direct execution)
"hybrid" →
meta.execution_config = { method: "hybrid", cli_tool: userConfig.preferredCliTool }
implementation_approach steps: command field ONLY on complex steps
Execution: Agent executes pre_analysis, then directly implements implementation_approach
"cli" →
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool }
implementation_approach steps: command field on ALL steps
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Execution: Agent executes pre_analysis, then hands off context + implementation_approach to CLI
"hybrid" →
meta.execution_config = { method: "hybrid", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Execution: Agent decides which tasks to handoff to CLI based on complexity
```
**Consistency Check**: `meta.execution_config.method` MUST match presence of `command` fields:
- `method: "agent"` → 0 steps have command field
- `method: "hybrid"` → some steps have command field
- `method: "cli"` → all steps have command field
**Note**: implementation_approach steps NO LONGER contain `command` fields. CLI execution is controlled by task-level `meta.execution_config` only.
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
@@ -638,32 +646,6 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
"output": "implementation"
},
// === CLI MODE: Command Execution (optional command field) ===
{
"step": 3,
"title": "Execute implementation using CLI tool",
"description": "Use Codex/Gemini for complex autonomous execution",
"command": "ccw cli -p '[prompt]' --tool codex --mode write --cd [path]",
"modification_points": ["[Same as default mode]"],
"logic_flow": ["[Same as default mode]"],
"depends_on": [1, 2],
"output": "cli_implementation",
"cli_output_id": "step3_cli_id" // Store execution ID for resume
},
// === CLI MODE with Resume: Continue from previous CLI execution ===
{
"step": 4,
"title": "Continue implementation with context",
"description": "Resume from previous step with accumulated context",
"command": "ccw cli -p '[continuation prompt]' --resume ${step3_cli_id} --tool codex --mode write",
"resume_from": "step3_cli_id", // Reference previous step's CLI ID
"modification_points": ["[Continue from step 3]"],
"logic_flow": ["[Build on previous output]"],
"depends_on": [3],
"output": "continued_implementation",
"cli_output_id": "step4_cli_id"
}
]
```
@@ -785,13 +767,13 @@ Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
Use `analysis_results.complexity` or task count to determine structure:
**Single Module Mode**:
- **Simple Tasks** (≤5 tasks): Flat structure
- **Medium Tasks** (6-12 tasks): Flat structure
- **Complex Tasks** (>12 tasks): Re-scope required (maximum 12 tasks hard limit)
- **Simple Tasks** (≤4 tasks): Flat structure
- **Medium Tasks** (5-8 tasks): Flat structure
- **Complex Tasks** (>8 tasks): Re-scope required (maximum 8 tasks hard limit)
**Multi-Module Mode** (N+1 parallel planning):
- **Per-module limit**: ≤9 tasks per module
- **Total limit**: Sum of all module tasks ≤27 (3 modules × 9 tasks)
- **Per-module limit**: ≤6 tasks per module
- **Total limit**: No total limit (each module independently capped at 6 tasks)
- **Task ID format**: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- **Structure**: Hierarchical by module in IMPL_PLAN.md and TODO_LIST.md
@@ -855,6 +837,7 @@ Use `analysis_results.complexity` or task count to determine structure:
### 3.3 Guidelines Checklist
**ALWAYS:**
- **Load planning-notes.md FIRST**: Read planning-notes.md before context-package.json. Use its Consolidated Constraints as primary constraint source for all task generation
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
@@ -865,7 +848,7 @@ Use `analysis_results.complexity` or task count to determine structure:
- **Compute CLI execution strategy**: Based on `depends_on`, set `cli_execution.strategy` (new/resume/fork/merge_fork)
- Map artifacts: Use artifacts_inventory to populate task.context.artifacts array
- Add MCP integration: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- Validate task count: Maximum 12 tasks hard limit, request re-scope if exceeded
- Validate task count: Maximum 8 tasks (single module) or 6 tasks per module (multi-module), request re-scope if exceeded
- Use session paths: Construct all paths using provided session_id
- Link documents properly: Use correct linking format (📋 for JSON, ✅ for summaries)
- Run validation checklist: Verify all quantification requirements before finalizing task JSONs
@@ -879,7 +862,7 @@ Use `analysis_results.complexity` or task count to determine structure:
- Load files directly (use provided context package instead)
- Assume default locations (always use session_id in paths)
- Create circular dependencies in task.depends_on
- Exceed 12 tasks without re-scoping
- Exceed 8 tasks (single module) or 6 tasks per module (multi-module) without re-scoping
- Skip artifact integration when artifacts_inventory is provided
- Ignore MCP capabilities when available
- Use fixed pre-analysis steps without task-specific adaptation

View File

@@ -13,6 +13,8 @@ color: cyan
You are a generic planning agent that generates structured plan JSON for lite workflows. Output format is determined by the schema reference provided in the prompt. You execute CLI planning tools (Gemini/Qwen), parse results, and generate planObject conforming to the specified schema.
**CRITICAL**: After generating plan.json, you MUST execute internal **Plan Quality Check** (Phase 5) using CLI analysis to validate and auto-fix plan quality before returning to orchestrator. Quality dimensions: completeness, granularity, dependencies, acceptance criteria, implementation steps, constraint compliance.
## Input Context
@@ -72,7 +74,22 @@ Phase 4: planObject Generation
├─ Build planObject conforming to schema
├─ Assign CLI execution IDs and strategies
├─ Generate flow_control from depends_on
└─ Return to orchestrator
└─ Write initial plan.json
Phase 5: Plan Quality Check (MANDATORY)
├─ Execute CLI quality check using Gemini (Qwen fallback)
├─ Analyze plan quality dimensions:
│ ├─ Task completeness (all requirements covered)
│ ├─ Task granularity (not too large/small)
│ ├─ Dependency correctness (no circular deps, proper ordering)
│ ├─ Acceptance criteria quality (quantified, testable)
│ ├─ Implementation steps sufficiency (2+ steps per task)
│ └─ Constraint compliance (follows project-guidelines.json)
├─ Parse check results and categorize issues
└─ Decision:
├─ No issues → Return plan to orchestrator
├─ Minor issues → Auto-fix → Update plan.json → Return
└─ Critical issues → Report → Suggest regeneration
```
## CLI Command Template
@@ -734,3 +751,78 @@ function validateTask(task) {
- Skip task validation
- **Skip CLI execution ID assignment**
- **Ignore schema structure**
- **Skip Phase 5 Plan Quality Check**
---
## Phase 5: Plan Quality Check (MANDATORY)
### Overview
After generating plan.json, **MUST** execute CLI quality check before returning to orchestrator. This is a mandatory step for ALL plans regardless of complexity.
### Quality Dimensions
| Dimension | Check Criteria | Critical? |
|-----------|---------------|-----------|
| **Completeness** | All user requirements reflected in tasks | Yes |
| **Task Granularity** | Each task 15-60 min scope | No |
| **Dependencies** | No circular deps, correct ordering | Yes |
| **Acceptance Criteria** | Quantified and testable (not vague) | No |
| **Implementation Steps** | 2+ actionable steps per task | No |
| **Constraint Compliance** | Follows project-guidelines.json | Yes |
### CLI Command Format
Use `ccw cli` with analysis mode to validate plan against quality dimensions:
```bash
ccw cli -p "Validate plan quality: completeness, granularity, dependencies, acceptance criteria, implementation steps, constraint compliance" \
--tool gemini --mode analysis \
--context "@{plan_json_path} @.workflow/project-guidelines.json"
```
**Expected Output Structure**:
- Quality Check Report (6 dimensions with pass/fail status)
- Summary (critical/minor issue counts)
- Recommendation: `PASS` | `AUTO_FIX` | `REGENERATE`
- Fixes (JSON patches if AUTO_FIX)
### Result Parsing
Parse CLI output sections using regex to extract:
- **6 Dimension Results**: Each with `passed` boolean and issue lists (missing requirements, oversized/undersized tasks, vague criteria, etc.)
- **Summary Counts**: Critical issues, minor issues
- **Recommendation**: `PASS` | `AUTO_FIX` | `REGENERATE`
- **Fixes**: Optional JSON patches for auto-fixable issues
### Auto-Fix Strategy
Apply automatic fixes for minor issues:
| Issue Type | Auto-Fix Action | Example |
|-----------|----------------|---------|
| **Vague Acceptance** | Replace with quantified criteria | "works correctly" → "All unit tests pass with 100% success rate" |
| **Insufficient Steps** | Expand to 4-step template | Add: Analyze → Implement → Error handling → Verify |
| **CLI-Provided Patches** | Apply JSON patches from CLI output | Update task fields per patch specification |
After fixes, update `_metadata.quality_check` with fix log.
### Execution Flow
After Phase 4 planObject generation:
1. **Write Initial Plan** → `${sessionFolder}/plan.json`
2. **Execute CLI Check** → Gemini (Qwen fallback)
3. **Parse Results** → Extract recommendation and issues
4. **Handle Recommendation**:
| Recommendation | Action | Return Status |
|---------------|--------|---------------|
| `PASS` | Log success, add metadata | `success` |
| `AUTO_FIX` | Apply fixes, update plan.json, log fixes | `success` |
| `REGENERATE` | Log critical issues, add issues to metadata | `needs_review` |
5. **Return** → Plan with `_metadata.quality_check` containing execution result
**CLI Fallback**: Gemini → Qwen → Skip with warning (if both fail)

View File

@@ -186,34 +186,150 @@ output → Variable name to store this step's result
**Execution Flow**:
```
FOR each step in implementation_approach[] (ordered by step number):
1. Check depends_on: Wait for all listed step numbers to complete
2. Variable Substitution: Replace [variable_name] in description/modification_points
with values stored from previous steps' output
3. Execute step (choose one):
// Read task-level execution config (Single Source of Truth)
const executionMethod = task.meta?.execution_config?.method || 'agent';
const cliTool = task.meta?.execution_config?.cli_tool || getDefaultCliTool(); // See ~/.claude/cli-tools.json
IF step.command exists:
→ Execute the CLI command via Bash tool
→ Capture output
// Phase 1: Execute pre_analysis (always by Agent)
const preAnalysisResults = {};
for (const step of task.flow_control.pre_analysis || []) {
const result = executePreAnalysisStep(step);
preAnalysisResults[step.output_to] = result;
}
ELSE (no command - Agent direct implementation):
→ Read modification_points[] as list of files to create/modify
→ Read logic_flow[] as implementation sequence
→ For each file in modification_points:
• If "Create new file: path" → Use Write tool to create
• If "Modify file: path" → Use Edit tool to modify
• If "Add to file: path" → Use Edit tool to append
→ Follow logic_flow sequence for implementation logic
→ Use [focus_paths] from context as working directory scope
// Phase 2: Determine execution mode
const hasLegacyCommands = task.flow_control.implementation_approach
.some(step => step.command);
4. Store result in [step.output] variable for later steps
5. Mark step complete, proceed to next
IF hasLegacyCommands:
// Backward compatibility: Old mode with step.command fields
FOR each step in implementation_approach[]:
IF step.command exists:
→ Execute via Bash: Bash({ command: step.command, timeout: 3600000 })
ELSE:
→ Agent direct implementation
ELSE IF executionMethod === 'cli':
// New mode: CLI Handoff
→ const cliPrompt = buildCliHandoffPrompt(preAnalysisResults, task)
→ const cliCommand = buildCliCommand(task, cliTool, cliPrompt)
→ Bash({ command: cliCommand, run_in_background: false, timeout: 3600000 })
ELSE IF executionMethod === 'hybrid':
// Hybrid mode: Agent decides based on task complexity
→ IF task is complex (multiple files, complex logic):
Use CLI Handoff (same as cli mode)
ELSE:
Use Agent direct implementation
ELSE (executionMethod === 'agent'):
// Default: Agent direct implementation
FOR each step in implementation_approach[]:
1. Variable Substitution: Replace [variable_name] with preAnalysisResults
2. Read modification_points[] as files to create/modify
3. Read logic_flow[] as implementation sequence
4. For each file in modification_points:
• If "Create new file: path" → Use Write tool
• If "Modify file: path" → Use Edit tool
• If "Add to file: path" → Use Edit tool (append)
5. Follow logic_flow sequence
6. Use [focus_paths] from context as working directory scope
7. Store result in [step.output] variable
```
**CLI Command Execution (CLI Execute Mode)**:
When step contains `command` field with Codex CLI, execute via CCW CLI. For Codex resume:
- First task (`depends_on: []`): `ccw cli -p "..." --tool codex --mode write --cd [path]`
- Subsequent tasks (has `depends_on`): Use CCW CLI with resume context to maintain session
**CLI Handoff Functions**:
```javascript
// Get default CLI tool from cli-tools.json
function getDefaultCliTool() {
// Read ~/.claude/cli-tools.json and return first enabled tool
// Fallback order: gemini → qwen → codex (first enabled in config)
return firstEnabledTool || 'gemini'; // System default fallback
}
// Build CLI prompt from pre-analysis results and task
function buildCliHandoffPrompt(preAnalysisResults, task) {
const contextSection = Object.entries(preAnalysisResults)
.map(([key, value]) => `### ${key}\n${value}`)
.join('\n\n');
const approachSection = task.flow_control.implementation_approach
.map((step, i) => `
### Step ${step.step}: ${step.title}
${step.description}
**Modification Points**:
${step.modification_points?.map(m => `- ${m}`).join('\n') || 'N/A'}
**Logic Flow**:
${step.logic_flow?.map((l, j) => `${j + 1}. ${l}`).join('\n') || 'Follow modification points'}
`).join('\n');
return `
PURPOSE: ${task.title}
Complete implementation based on pre-analyzed context.
## PRE-ANALYSIS CONTEXT
${contextSection}
## REQUIREMENTS
${task.context.requirements?.map(r => `- ${r}`).join('\n') || task.context.requirements}
## IMPLEMENTATION APPROACH
${approachSection}
## ACCEPTANCE CRITERIA
${task.context.acceptance?.map(a => `- ${a}`).join('\n') || task.context.acceptance}
## TARGET FILES
${task.flow_control.target_files?.map(f => `- ${f}`).join('\n') || 'See modification points above'}
MODE: write
CONSTRAINTS: Follow existing patterns | No breaking changes
`.trim();
}
// Build CLI command with resume strategy
function buildCliCommand(task, cliTool, cliPrompt) {
const cli = task.cli_execution || {};
const escapedPrompt = cliPrompt.replace(/"/g, '\\"');
const baseCmd = `ccw cli -p "${escapedPrompt}"`;
switch (cli.strategy) {
case 'new':
return `${baseCmd} --tool ${cliTool} --mode write --id ${task.cli_execution_id}`;
case 'resume':
return `${baseCmd} --resume ${cli.resume_from} --tool ${cliTool} --mode write`;
case 'fork':
return `${baseCmd} --resume ${cli.resume_from} --id ${task.cli_execution_id} --tool ${cliTool} --mode write`;
case 'merge_fork':
return `${baseCmd} --resume ${cli.merge_from.join(',')} --id ${task.cli_execution_id} --tool ${cliTool} --mode write`;
default:
// Fallback: no resume, no id
return `${baseCmd} --tool ${cliTool} --mode write`;
}
}
```
**Execution Config Reference** (from task.meta.execution_config):
| Field | Values | Description |
|-------|--------|-------------|
| `method` | `agent` / `cli` / `hybrid` | Execution mode (default: agent) |
| `cli_tool` | See `~/.claude/cli-tools.json` | CLI tool preference (first enabled tool as default) |
| `enable_resume` | `true` / `false` | Enable CLI session resume |
**CLI Execution Reference** (from task.cli_execution):
| Field | Values | Description |
|-------|--------|-------------|
| `strategy` | `new` / `resume` / `fork` / `merge_fork` | Resume strategy |
| `resume_from` | `{session}-{task_id}` | Parent task CLI ID (resume/fork) |
| `merge_from` | `[{id1}, {id2}]` | Parent task CLI IDs (merge_fork) |
**Resume Strategy Examples**:
- **New task** (no dependencies): `--id WFS-001-IMPL-001`
- **Resume** (single dependency, single child): `--resume WFS-001-IMPL-001`
- **Fork** (single dependency, multiple children): `--resume WFS-001-IMPL-001 --id WFS-001-IMPL-002`
- **Merge** (multiple dependencies): `--resume WFS-001-IMPL-001,WFS-001-IMPL-002 --id WFS-001-IMPL-003`
**Test-Driven Development**:
- Write tests first (red → green → refactor)
@@ -389,7 +505,8 @@ Before completing any task, verify:
- Use `run_in_background=false` for all Bash/CLI calls - agent cannot receive task hook callbacks
- Set timeout ≥60 minutes for CLI commands (hooks don't propagate to subagents):
```javascript
Bash(command="ccw cli -p '...' --tool codex --mode write", timeout=3600000) // 60 min
Bash(command="ccw cli -p '...' --tool <cli-tool> --mode write", timeout=3600000) // 60 min
// <cli-tool>: First enabled tool from ~/.claude/cli-tools.json (e.g., gemini, qwen, codex)
```
**ALWAYS:**

View File

@@ -47,14 +47,30 @@ Interactive orchestration tool: analyze task → discover commands → recommend
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Code Review (Session)** | review-session-cycle → review-fix | Complete review cycle and apply fixes | Fixed code |
| **Code Review (Module)** | review-module-cycle → review-fix | Module review cycle and apply fixes | Fixed code |
| **Code Review (Session)** | review-session-cycle → review-cycle-fix | Complete review cycle and apply fixes | Fixed code |
| **Code Review (Module)** | review-module-cycle → review-cycle-fix | Module review cycle and apply fixes | Fixed code |
**Issue Units**:
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Issue Workflow** | discover → plan → queue → execute | Complete issue lifecycle | Completed issues |
| **Rapid-to-Issue** | lite-plan → convert-to-plan → queue → execute | Bridge lite workflow to issue workflow | Completed issues |
| **Brainstorm-to-Issue** | from-brainstorm → queue → execute | Bridge brainstorm session to issue workflow | Completed issues |
**With-File Units** (documented units):
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Brainstorm With File** | brainstorm-with-file | Multi-perspective ideation with documentation | brainstorm.md |
| **Debug With File** | debug-with-file | Hypothesis-driven debugging with documentation | understanding.md |
| **Analyze With File** | analyze-with-file | Collaborative analysis with documentation | discussion.md |
### Command-to-Unit Mapping
| Command | Can Precede | Atomic Units |
|---------|-----------|--------------|
| lite-plan | lite-execute | Quick Implementation |
| lite-plan | lite-execute, convert-to-plan | Quick Implementation, Rapid-to-Issue |
| multi-cli-plan | lite-execute | Multi-CLI Planning |
| lite-fix | lite-execute | Bug Fix |
| plan | plan-verify, execute | Full Planning + Execution, Verified Planning + Execution |
@@ -62,9 +78,17 @@ Interactive orchestration tool: analyze task → discover commands → recommend
| replan | execute | Replanning + Execution |
| test-gen | execute | Test Generation + Execution |
| tdd-plan | execute | TDD Planning + Execution |
| review-session-cycle | review-fix | Code Review (Session) |
| review-module-cycle | review-fix | Code Review (Module) |
| review-session-cycle | review-cycle-fix | Code Review (Session) |
| review-module-cycle | review-cycle-fix | Code Review (Module) |
| test-fix-gen | test-cycle-execute | Test Validation |
| issue:discover | issue:plan | Issue Workflow |
| issue:plan | issue:queue | Issue Workflow |
| convert-to-plan | issue:queue | Rapid-to-Issue |
| issue:queue | issue:execute | Issue Workflow, Rapid-to-Issue, Brainstorm-to-Issue |
| issue:from-brainstorm | issue:queue | Brainstorm-to-Issue |
| brainstorm-with-file | issue:from-brainstorm (optional) | Brainstorm With File, Brainstorm-to-Issue |
| debug-with-file | (standalone) | Debug With File |
| analyze-with-file | (standalone) | Analyze With File |
### Atomic Group Rules
@@ -105,6 +129,14 @@ function detectTaskType(text) {
if (/测试失败|test fail|fix test|failing test/.test(text)) return 'test-fix';
if (/generate test|写测试|add test|补充测试/.test(text)) return 'test-gen';
if (/review|审查|code review/.test(text)) return 'review';
// Issue workflow patterns
if (/issues?.*batch|batch.*issues?|批量.*issue|issue.*批量/.test(text)) return 'issue-batch';
if (/issue workflow|structured workflow|queue|multi-stage|转.*issue|issue.*流程/.test(text)) return 'issue-transition';
// With-File workflow patterns
if (/brainstorm|ideation|头脑风暴|创意|发散思维|creative thinking/.test(text)) return 'brainstorm-file';
if (/brainstorm.*issue|头脑风暴.*issue|idea.*issue|想法.*issue|从.*头脑风暴|convert.*brainstorm/.test(text)) return 'brainstorm-to-issue';
if (/debug.*document|hypothesis.*debug|深度调试|假设.*验证|systematic debug/.test(text)) return 'debug-file';
if (/analyze.*document|collaborative analysis|协作分析|深度.*理解/.test(text)) return 'analyze-file';
if (/不确定|explore|研究|what if|brainstorm|权衡/.test(text)) return 'brainstorm';
if (/多视角|比较方案|cross-verify|multi-cli/.test(text)) return 'multi-cli';
return 'feature'; // Default
@@ -252,8 +284,8 @@ const commandPorts = {
output: ['review-findings'],
tags: ['review']
},
'review-fix': {
name: 'review-fix',
'review-cycle-fix': {
name: 'review-cycle-fix',
input: ['review-findings', 'review-verified'], // Accept output from review-session-cycle or review-module-cycle
output: ['fixed-code'],
tags: ['review'],
@@ -277,14 +309,81 @@ const commandPorts = {
input: ['code', 'session'], // accepts code or a session
output: ['review-verified'], // output port: review passed
tags: ['review'],
atomic_group: 'code-review' // atomic unit: bound to review-fix
atomic_group: 'code-review' // atomic unit: bound to review-cycle-fix
},
'review-module-cycle': {
name: 'review-module-cycle',
input: ['module-pattern'], // input port: module pattern
output: ['review-verified'], // output port: review passed
tags: ['review'],
atomic_group: 'code-review' // atomic unit: bound to review-fix
atomic_group: 'code-review' // atomic unit: bound to review-cycle-fix
},
// Issue workflow commands
'issue:discover': {
name: 'issue:discover',
input: ['codebase'], // input port: codebase
output: ['pending-issues'], // output port: pending issues
tags: ['issue'],
atomic_group: 'issue-workflow' // atomic unit: discover → plan → queue → execute
},
'issue:plan': {
name: 'issue:plan',
input: ['pending-issues'], // input port: pending issues
output: ['issue-plans'], // output port: issue plans
tags: ['issue'],
atomic_group: 'issue-workflow'
},
'issue:queue': {
name: 'issue:queue',
input: ['issue-plans', 'converted-plan'], // accepts output from issue:plan or convert-to-plan
output: ['execution-queue'], // output port: execution queue
tags: ['issue'],
atomic_groups: ['issue-workflow', 'rapid-to-issue']
},
'issue:execute': {
name: 'issue:execute',
input: ['execution-queue'], // input port: execution queue
output: ['completed-issues'], // output port: completed issues
tags: ['issue'],
atomic_groups: ['issue-workflow', 'rapid-to-issue']
},
'issue:convert-to-plan': {
name: 'issue:convert-to-plan',
input: ['plan'], // input port: lite-plan output
output: ['converted-plan'], // output port: converted issue plan
tags: ['issue', 'planning'],
atomic_group: 'rapid-to-issue' // atomic unit: lite-plan → convert-to-plan → queue → execute
},
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm-with-file': {
name: 'brainstorm-with-file',
input: ['exploration-topic'], // input port: exploration topic
output: ['brainstorm-document'], // output port: brainstorm.md + synthesized conclusions
tags: ['brainstorm', 'with-file'],
note: 'Self-contained workflow with multi-round diverge-converge cycles'
},
'issue:from-brainstorm': {
name: 'issue:from-brainstorm',
input: ['brainstorm-document'], // input port: brainstorm artifact (synthesis.json)
output: ['converted-plan'], // output port: issue + solution
tags: ['issue', 'brainstorm'],
atomic_group: 'brainstorm-to-issue' // atomic unit: from-brainstorm → queue → execute
},
'debug-with-file': {
name: 'debug-with-file',
input: ['bug-report'], // input port: bug report
output: ['understanding-document'], // output port: understanding.md + fix
tags: ['bugfix', 'with-file'],
note: 'Self-contained workflow with hypothesis-driven iteration'
},
'analyze-with-file': {
name: 'analyze-with-file',
input: ['analysis-topic'], // input port: analysis topic
output: ['discussion-document'], // output port: discussion.md + conclusions
tags: ['analysis', 'with-file'],
note: 'Self-contained workflow with multi-round discussion'
}
};
```
@@ -306,14 +405,22 @@ async function recommendCommandChain(analysis) {
// Port flow for each task type
function determinePortFlow(taskType, constraints) {
const flows = {
'bugfix': { inputPort: 'bug-report', outputPort: constraints?.includes('skip-tests') ? 'fixed-code' : 'test-passed' },
'tdd': { inputPort: 'requirement', outputPort: 'tdd-verified' },
'test-fix': { inputPort: 'failing-tests', outputPort: 'test-passed' },
'test-gen': { inputPort: 'code', outputPort: 'test-passed' },
'review': { inputPort: 'code', outputPort: 'review-verified' },
'brainstorm': { inputPort: 'exploration-topic', outputPort: 'test-passed' },
'multi-cli': { inputPort: 'requirement', outputPort: 'test-passed' },
'feature': { inputPort: 'requirement', outputPort: constraints?.includes('skip-tests') ? 'code' : 'test-passed' }
'bugfix': { inputPort: 'bug-report', outputPort: constraints?.includes('skip-tests') ? 'fixed-code' : 'test-passed' },
'tdd': { inputPort: 'requirement', outputPort: 'tdd-verified' },
'test-fix': { inputPort: 'failing-tests', outputPort: 'test-passed' },
'test-gen': { inputPort: 'code', outputPort: 'test-passed' },
'review': { inputPort: 'code', outputPort: 'review-verified' },
'brainstorm': { inputPort: 'exploration-topic', outputPort: 'test-passed' },
'multi-cli': { inputPort: 'requirement', outputPort: 'test-passed' },
// Issue workflow types
'issue-batch': { inputPort: 'codebase', outputPort: 'completed-issues' },
'issue-transition': { inputPort: 'requirement', outputPort: 'completed-issues' },
// With-File workflow types
'brainstorm-file': { inputPort: 'exploration-topic', outputPort: 'brainstorm-document' },
'brainstorm-to-issue': { inputPort: 'brainstorm-document', outputPort: 'completed-issues' },
'debug-file': { inputPort: 'bug-report', outputPort: 'understanding-document' },
'analyze-file': { inputPort: 'analysis-topic', outputPort: 'discussion-document' },
'feature': { inputPort: 'requirement', outputPort: constraints?.includes('skip-tests') ? 'code' : 'test-passed' }
};
return flows[taskType] || flows['feature'];
}
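// Illustrative resolutions (assumption - simply reading the map above):
//   determinePortFlow('bugfix', ['skip-tests']) → { inputPort: 'bug-report', outputPort: 'fixed-code' }
//   determinePortFlow('issue-batch')            → { inputPort: 'codebase', outputPort: 'completed-issues' }
//   determinePortFlow('unknown-type')           → falls back to the 'feature' flow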
@@ -401,17 +508,19 @@ async function executeCommandChain(chain, analysis) {
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
// Assemble prompt: Command first, then context
let promptContent = formatCommand(cmd, state.execution_results, analysis);
// Build full prompt: Command → Task → Previous Results
let prompt = `${promptContent}\n\nTask: ${analysis.goal}`;
if (state.execution_results.length > 0) {
prompt += '\n\nPrevious results:\n';
state.execution_results.forEach(r => {
if (r.session_id) {
prompt += `- ${r.command}: ${r.session_id} (${r.artifacts?.join(', ') || 'completed'})\n`;
}
});
}
// Record prompt used
state.prompts_used.push({
@@ -421,9 +530,12 @@ async function executeCommandChain(chain, analysis) {
});
// Execute CLI command in background and stop
// Format: ccw cli -p "PROMPT" --tool <tool> --mode <mode>
// Note: -y is a command parameter INSIDE the prompt, not a ccw cli parameter
// Example prompt: "/workflow:plan -y \"task description here\""
try {
const taskId = Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool claude --mode write`,
{ run_in_background: true }
).task_id;
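// Note: escapePrompt() is referenced here but not defined in this excerpt.
// A minimal sketch (assumption) for embedding in a double-quoted shell string:
//   const escapePrompt = (p) =>
//     p.replace(/\\/g, '\\\\').replace(/"/g, '\\"').replace(/\$/g, '\\$').replace(/`/g, '\\`');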
@@ -486,69 +598,110 @@ async function executeCommandChain(chain, analysis) {
}
// Smart parameter assembly
// Returns prompt content to be used with: ccw cli -p "RETURNED_VALUE" --tool claude --mode write
function formatCommand(cmd, previousResults, analysis) {
// Format: /workflow:<command> -y <parameters>
let prompt = `/workflow:${cmd.name} -y`;
const name = cmd.name;
// Planning commands - take task description
if (['lite-plan', 'plan', 'tdd-plan', 'multi-cli-plan'].includes(name)) {
prompt += ` "${analysis.goal}"`;
// Lite execution - use --in-memory if plan exists
} else if (name === 'lite-execute') {
const hasPlan = previousResults.some(r => r.command.includes('plan'));
prompt += hasPlan ? ' --in-memory' : ` "${analysis.goal}"`;
// Standard execution - resume from planning session
} else if (name === 'execute') {
const plan = previousResults.find(r => r.command.includes('plan'));
if (plan?.session_id) prompt += ` --resume-session="${plan.session_id}"`;
// Bug fix commands - take bug description
} else if (['lite-fix', 'debug'].includes(name)) {
prompt += ` "${analysis.goal}"`;
// Brainstorm - take topic description
} else if (name === 'brainstorm:auto-parallel' || name === 'auto-parallel') {
prompt += ` "${analysis.goal}"`;
// Test generation from session - needs source session
} else if (name === 'test-gen') {
const impl = previousResults.find(r =>
r.command.includes('execute') || r.command.includes('lite-execute')
);
if (impl?.session_id) prompt += ` "${impl.session_id}"`;
else prompt += ` "${analysis.goal}"`;
// Test fix generation - session or description
} else if (name === 'test-fix-gen') {
const latest = previousResults.filter(r => r.session_id).pop();
if (latest?.session_id) prompt += ` "${latest.session_id}"`;
else prompt += ` "${analysis.goal}"`;
// Review commands - take session or use latest
} else if (name === 'review') {
const latest = previousResults.filter(r => r.session_id).pop();
if (latest?.session_id) prompt += ` --session="${latest.session_id}"`;
// Review fix - takes session from review
} else if (name === 'review-cycle-fix') {
const review = previousResults.find(r => r.command.includes('review'));
const latest = review || previousResults.filter(r => r.session_id).pop();
if (latest?.session_id) prompt += ` --session="${latest.session_id}"`;
// TDD verify - takes execution session
} else if (name === 'tdd-verify') {
const exec = previousResults.find(r => r.command.includes('execute'));
if (exec?.session_id) prompt += ` --session="${exec.session_id}"`;
// Session-based commands (test-cycle, review-session, plan-verify)
} else if (name.includes('test') || name.includes('review') || name.includes('verify')) {
const latest = previousResults.filter(r => r.session_id).pop();
if (latest?.session_id) prompt += ` --session="${latest.session_id}"`;
// Issue workflow commands
} else if (name === 'issue:discover') {
// No parameters needed - discovers from codebase
prompt = `/issue:discover -y`;
} else if (name === 'issue:plan') {
prompt = `/issue:plan -y --all-pending`;
} else if (name === 'issue:queue') {
prompt = `/issue:queue -y`;
} else if (name === 'issue:execute') {
prompt = `/issue:execute -y --queue auto`;
} else if (name === 'issue:convert-to-plan' || name === 'convert-to-plan') {
// Convert latest lite-plan to issue plan
prompt = `/issue:convert-to-plan -y --latest-lite-plan`;
// With-File workflows (self-contained)
} else if (name === 'brainstorm-with-file') {
prompt = `/workflow:brainstorm-with-file -y "${analysis.goal}"`;
} else if (name === 'debug-with-file') {
prompt = `/workflow:debug-with-file -y "${analysis.goal}"`;
} else if (name === 'analyze-with-file') {
prompt = `/workflow:analyze-with-file -y "${analysis.goal}"`;
// Brainstorm-to-issue bridge
} else if (name === 'issue:from-brainstorm' || name === 'from-brainstorm') {
// Extract session ID from analysis.goal or latest brainstorm
const sessionMatch = analysis.goal.match(/BS-[\w-]+/);
if (sessionMatch) {
prompt = `/issue:from-brainstorm -y SESSION="${sessionMatch[0]}" --auto`;
} else {
// Find latest brainstorm session
prompt = `/issue:from-brainstorm -y --auto`;
}
}
return prompt;
}
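// Illustrative outputs (assumption - derived from the branches above):
//   formatCommand({ name: 'plan' }, [], { goal: 'Add OAuth2' })
//     → '/workflow:plan -y "Add OAuth2"'
//   formatCommand({ name: 'execute' }, [{ command: '/workflow:plan', session_id: 'WFS-001' }], { goal: '...' })
//     → '/workflow:execute -y --resume-session="WFS-001"'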
// Hook callback: Called when background CLI completes
@@ -663,12 +816,12 @@ function parseOutput(output) {
{
"index": 0,
"command": "/workflow:plan",
"prompt": "/workflow:plan -y \"Implement user registration...\"\n\nTask: Implement user registration..."
},
{
"index": 1,
"command": "/workflow:execute",
"prompt": "/workflow:execute -y --resume-session=\"WFS-plan-20250124\"\n\nTask: Implement user registration\n\nPrevious results:\n- /workflow:plan: WFS-plan-20250124 (IMPL_PLAN.md)"
}
]
}
@@ -728,226 +881,68 @@ const cmd = registry.getCommand('lite-plan');
// {name, command, description, argumentHint, allowedTools, filePath}
```
## Universal Prompt Template
### Standard Format
```bash
ccw cli -p "PROMPT_CONTENT" --tool <tool> --mode <mode>
```
### Prompt Content Template
```
/workflow:<command> -y <command_parameters>
Task: <task_description>
<optional_previous_results>
```
### Template Variables
| Variable | Description | Examples |
|----------|-------------|----------|
| `<command>` | Workflow command name | `plan`, `lite-execute`, `test-cycle-execute` |
| `-y` | Auto-confirm flag (inside the prompt) | Always include for automation |
| `<command_parameters>` | Command-specific parameters | Task description, session ID, flags |
| `<task_description>` | Brief task description | "Implement user authentication", "Fix memory leak" |
| `<optional_previous_results>` | Context from previous commands | "Previous results:\n- /workflow:plan: WFS-xxx" |
### Command Parameter Patterns
| Command Type | Parameter Pattern | Example |
|--------------|------------------|---------|
| **Planning** | `"task description"` | `/workflow:plan -y "Implement OAuth2"` |
| **Execution (with plan)** | `--resume-session="WFS-xxx"` | `/workflow:execute -y --resume-session="WFS-plan-001"` |
| **Execution (standalone)** | `--in-memory` or `"task"` | `/workflow:lite-execute -y --in-memory` |
| **Session-based** | `--session="WFS-xxx"` | `/workflow:test-fix-gen -y --session="WFS-impl-001"` |
| **Fix/Debug** | `"problem description"` | `/workflow:lite-fix -y "Fix timeout bug"` |
### Complete Examples
**Planning Command**:
```bash
ccw cli -p '/workflow:plan -y "Implement user registration with email validation"
Task: Implement user registration' --tool claude --mode write
```
**Execution with Context**:
```bash
ccw cli -p '/workflow:execute -y --resume-session="WFS-plan-20250124"
Task: Implement user registration
Previous results:
- /workflow:plan: WFS-plan-20250124 (IMPL_PLAN.md)' --tool claude --mode write
```
**Standalone Lite Execution**:
```bash
ccw cli -p '/workflow:lite-fix -y "Fix login timeout in auth module"
Task: Fix login timeout' --tool claude --mode write
```
## Execution Flow
@@ -983,33 +978,92 @@ async function ccwCoordinator(taskDescription) {
## CLI Execution Model
### CLI Invocation Format
**IMPORTANT**: The `ccw cli` command executes prompts through external tools. The format is:
```bash
ccw cli -p "PROMPT_CONTENT" --tool <tool> --mode <mode>
```
**Parameters**:
- `-p "PROMPT_CONTENT"`: The prompt content to execute (required)
- `--tool <tool>`: CLI tool to use (e.g., `claude`, `gemini`, `qwen`)
- `--mode <mode>`: Execution mode (`analysis` or `write`)
**Note**: `-y` is a **command parameter inside the prompt**, NOT a `ccw cli` parameter.
### Prompt Assembly
The prompt content MUST start with the workflow command, followed by task context:
```
/workflow:<command> -y <parameters>
Task: <description>
<optional_context>
```
**Examples**:
```bash
# Planning command
ccw cli -p '/workflow:plan -y "Implement user registration feature"
Task: Implement user registration' --tool claude --mode write
# Execution command (with session reference)
ccw cli -p '/workflow:execute -y --resume-session="WFS-plan-20250124"
Task: Implement user registration
Previous results:
- /workflow:plan: WFS-plan-20250124' --tool claude --mode write
# Lite execution (in-memory from previous plan)
ccw cli -p '/workflow:lite-execute -y --in-memory
Task: Implement user registration' --tool claude --mode write
```
### Serial Blocking
**CRITICAL**: Commands execute one-by-one. After launching CLI in background:
1. Orchestrator stops immediately (`break`)
2. Wait for hook callback - **DO NOT use TaskOutput polling**
3. Hook callback triggers next command
**Prompt Structure**: Command must be first in prompt content
```javascript
// Example: Execute command and stop
const prompt = '/workflow:plan -y "Implement user authentication"\n\nTask: Implement user auth system';
const taskId = Bash(`ccw cli -p "${prompt}" --tool claude --mode write`, { run_in_background: true }).task_id;
state.execution_results.push({ status: 'in-progress', task_id: taskId, ... });
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
break; // ⚠️ STOP HERE - DO NOT use TaskOutput polling
// Hook callback will call handleCliCompletion(sessionId, taskId, output) when done
// → Updates state → Triggers next command via resumeChainExecution()
```
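The hook side is referenced but not shown in this excerpt; a minimal sketch of the callback contract, assuming the `state.json` layout used above, might look like:
```javascript
// Sketch only - assumes the state.json layout and helpers used above
function handleCliCompletion(sessionId, taskId, output) {
  const state = JSON.parse(Read(`${stateDir}/state.json`));
  const result = state.execution_results.find(r => r.task_id === taskId);
  if (result) {
    result.status = 'completed';
    result.session_id = parseOutput(output).session_id; // WFS-xxx, if the CLI reported one
  }
  state.updated_at = new Date().toISOString();
  Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
  resumeChainExecution(state); // launches the next command in the chain
}
```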
## Available Commands
All from `~/.claude/commands/workflow/` and `~/.claude/commands/issue/`:
**Planning**: lite-plan, plan, multi-cli-plan, plan-verify, tdd-plan
**Execution**: lite-execute, execute, develop-with-file
**Testing**: test-cycle-execute, test-gen, test-fix-gen, tdd-verify
**Review**: review, review-session-cycle, review-module-cycle, review-cycle-fix
**Bug Fixes**: lite-fix, debug, debug-with-file
**Brainstorming**: brainstorm:auto-parallel, brainstorm:artifacts, brainstorm:synthesis
**Design**: ui-design:*, animation-extract, layout-extract, style-extract, codify-style
**Session Management**: session:start, session:resume, session:complete, session:solidify, session:list
**Tools**: context-gather, test-context-gather, task-generate, conflict-resolution, action-plan-verify
**Utility**: clean, init, replan
**Issue Workflow**: issue:discover, issue:plan, issue:queue, issue:execute, issue:convert-to-plan, issue:from-brainstorm
**With-File Workflows**: brainstorm-with-file, debug-with-file, analyze-with-file
### Testing Commands Distinction
@@ -1023,20 +1077,26 @@ All from `~/.claude/commands/workflow/`:
- **test-gen → execute**: generates a comprehensive test suite; `execute` runs both generation and testing
- **test-fix-gen → test-cycle-execute**: generates fix tasks for specific failures; `test-cycle-execute` iterates test-and-fix until passing
### Task Type Routing (Pipeline Summary)
**Note**: `【 】` marks Minimum Execution Units; the bracketed commands must execute together.
| Task Type | Pipeline | Minimum Units |
|-----------|----------|---------------|
| **feature** (simple) | Requirement →【lite-plan → lite-execute】→ Code →【test-fix-gen → test-cycle-execute】→ Tests pass | Quick Implementation + Test Validation |
| **feature** (complex) | Requirement →【plan → plan-verify】→ validate → execute → Code → review → fix | Full Planning + Code Review + Testing |
| **bugfix** | Bug report → lite-fix → Fixed code →【test-fix-gen → test-cycle-execute】→ Tests pass | Bug Fix + Test Validation |
| **tdd** | Requirement → tdd-plan → TDD tasks → execute → Code → tdd-verify | TDD Planning + Execution |
| **test-fix** | Failing tests →【test-fix-gen → test-cycle-execute】→ Tests pass | Test Validation |
| **test-gen** | Code/session →【test-gen → execute】→ Tests pass | Test Generation + Execution |
| **review** | Code →【review-* → review-cycle-fix】→ Fixed code →【test-fix-gen → test-cycle-execute】→ Tests pass | Code Review + Testing |
| **brainstorm** | Exploration topic → brainstorm → Analysis →【plan → plan-verify】→ execute → test | Exploration + Planning + Execution |
| **multi-cli** | Requirement → multi-cli-plan → Comparative analysis → lite-execute → test | Multi-Perspective + Testing |
| **issue-batch** | Codebase →【discover → plan → queue → execute】→ Completed issues | Issue Workflow |
| **issue-transition** | Requirement →【lite-plan → convert-to-plan → queue → execute】→ Completed issues | Rapid-to-Issue |
| **brainstorm-file** | Topic → brainstorm-with-file → brainstorm.md (self-contained) | Brainstorm With File |
| **brainstorm-to-issue** | brainstorm.md →【from-brainstorm → queue → execute】→ Completed issues | Brainstorm to Issue |
| **debug-file** | Bug report → debug-with-file → understanding.md (self-contained) | Debug With File |
| **analyze-file** | Analysis topic → analyze-with-file → discussion.md (self-contained) | Analyze With File |
Use `CommandRegistry.getAllCommandsSummary()` to discover all commands dynamically.

@@ -0,0 +1,832 @@
---
name: ccw-debug
description: Aggregated debug command - combines debugging diagnostics and test verification in a synergistic workflow supporting cli-quick / debug-first / test-first / bidirectional-verification modes
argument-hint: "[--mode cli|debug|test|bidirectional] [--yes|-y] [--hotfix] \"bug description or error message\""
allowed-tools: SlashCommand(*), TodoWrite(*), AskUserQuestion(*), Read(*), Bash(*)
---
# CCW-Debug Aggregated Command
## Core Concept
**Aggregated Debug Command** - Combines debugging diagnostics and test verification in a synergistic workflow. Not a simple concatenation of two commands, but intelligent orchestration based on mode selection.
### Four Execution Modes
| Mode | Workflow | Use Case | Characteristics |
|------|----------|----------|-----------------|
| **CLI Quick** (cli) | Direct CLI Analysis → Fix Suggestions | Simple issues, quick diagnosis | Fastest, minimal workflow, recommendation-only |
| **Debug First** (debug) | Debug → Analyze Hypotheses → Apply Fix → Test Verification | Root cause unclear, requires exploration | Starts with exploration, Gemini-assisted |
| **Test First** (test) | Generate Tests → Execute → Analyze Failures → CLI Fixes | Code implemented, needs test validation | Driven by test coverage, auto-iterates |
| **Bidirectional Verification** (bidirectional) | Parallel: Debug + Test → Merge Findings → Unified Fix | Complex systems, ambiguous symptoms | Parallel execution, converged insights |
---
## Quick Start
### Basic Usage
```bash
# CLI quick mode: fastest, recommendation-only (new!)
/ccw-debug --mode cli "Login failed: token validation error"
# Default mode: debug-first (recommended for most scenarios)
/ccw-debug "Login failed: token validation error"
# Test-first mode
/ccw-debug --mode test "User permission check failure"
# Bidirectional verification mode (complex issues)
/ccw-debug --mode bidirectional "Payment flow multiple failures"
# Auto mode (skip all confirmations)
/ccw-debug --yes "Quick fix: database connection timeout"
# Production hotfix (minimal diagnostics)
/ccw-debug --hotfix --yes "Production: API returns 500"
```
### Mode Selection Guide
**Choose "CLI Quick"** when:
- Need immediate diagnosis, not execution
- Want quick recommendations without workflows
- Simple issues with clear symptoms
- Just need fix suggestions, no auto-application
- Time is critical, prefer fast output
- Want to review CLI analysis before action
**Choose "Debug First"** when:
- Root cause is unclear
- Error messages are incomplete or vague
- Need to understand code execution flow
- Issues involve multi-module interactions
**Choose "Test First"** when:
- Code is fully implemented
- Need test coverage verification
- Have clear failure cases
- Want automated iterative fixes
**Choose "Bidirectional Verification"** when:
- System is complex (multiple subsystems)
- Problem symptoms are ambiguous (multiple possible root causes)
- Need multi-angle validation
- Time allows parallel analysis
---
## Execution Flow
### Overall Process
```
Phase 1: Intent Analysis & Mode Selection
├─ Parse --mode flag or recommend mode
├─ Check --hotfix and --yes flags
└─ Determine workflow path
Phase 2: Initialization
├─ CLI Quick: Lightweight init (no session directory needed)
├─ Others: Create unified session directory (.workflow/.ccw-debug/)
├─ Setup TodoWrite tracking
└─ Prepare session context
Phase 3: Execute Corresponding Workflow
├─ CLI Quick: ccw cli → Diagnosis Report → Optional: Escalate to debug/test/apply fix
├─ Debug First: /workflow:debug-with-file → Fix → /workflow:test-fix-gen → /workflow:test-cycle-execute
├─ Test First: /workflow:test-fix-gen → /workflow:test-cycle-execute → CLI analyze failures
└─ Bidirectional: [/workflow:debug-with-file] ∥ [/workflow:test-fix-gen → test-cycle-execute]
Phase 4: Merge Findings (Bidirectional Mode) / Escalation Decision (CLI Mode)
├─ CLI Quick: Present results → Ask user: Apply fix? Escalate? Done?
├─ Bidirectional: Converge findings from both workflows
├─ Identify consistent and conflicting root cause analyses
└─ Generate unified fix plan
Phase 5: Completion & Follow-up
├─ Generate summary report
├─ Provide next step recommendations
└─ Optional: Expand to issues (testing/enhancement/refactoring/documentation)
```
---
## Workflow Details
### Mode 0: CLI Quick (Minimal Debug Method)
**Best For**: Fast recommendations without full workflow overhead
**Workflow**:
```
User Input → Quick Context Gather → ccw cli (Gemini/Qwen/Codex)
        ↓
  Analysis Report
        ↓
  Fix Recommendations
        ↓
  Optional: User Decision
 ┌──────────────┼──────────────┐
 ↓              ↓              ↓
Apply Fix   Escalate Mode    Done
            (debug/test)
```
**Execution Steps**:
1. **Lightweight Context Gather** (Phase 2)
```javascript
// No session directory needed for CLI mode
const tempContext = {
bug_description: bug_description,
timestamp: getUtc8ISOString(),
mode: "cli"
}
// Quick context discovery (30s max)
// - Read error file if path provided
// - Extract error patterns from description
// - Identify likely affected files (basic grep)
```
2. **Execute CLI Analysis** (Phase 3)
```bash
# Use ccw cli with bug diagnosis template
ccw cli -p "
PURPOSE: Quick bug diagnosis for immediate recommendations
TASK:
• Analyze bug symptoms: ${bug_description}
• Identify likely root cause
• Provide actionable fix recommendations (code snippets if possible)
• Assess fix confidence level
MODE: analysis
CONTEXT: ${contextFiles.length > 0 ? '@' + contextFiles.join(' @') : 'Bug description only'}
EXPECTED:
- Root cause hypothesis (1-2 sentences)
- Fix strategy (immediate/comprehensive/refactor)
- Code snippets or file modification suggestions
- Confidence level: High/Medium/Low
- Risk assessment
CONSTRAINTS: Quick analysis, 2-5 minutes max
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```
3. **Present Results** (Phase 4)
````
## CLI Quick Analysis Complete
**Issue**: [bug_description]
**Analysis Time**: [duration]
**Confidence**: [High/Medium/Low]
### Root Cause
[1-2 sentence hypothesis]
### Fix Strategy
[immediate_patch | comprehensive_fix | refactor]
### Recommended Changes
**File**: src/module/file.ts
```typescript
// Change line 45-50
- old code
+ new code
```
**Rationale**: [why this fix]
**Risk**: [Low/Medium/High] - [risk description]
### Confidence Assessment
- Analysis confidence: [percentage]
- Recommendation: [apply immediately | review first | escalate to full debug]
````
4. **User Decision** (Phase 5)
```javascript
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes && confidence === 'High') {
// Auto-apply fix
console.log('[--yes + High confidence] Auto-applying fix...')
applyFixFromCLIRecommendation(cliOutput)
} else {
// Ask user
const decision = AskUserQuestion({
questions: [{
question: `CLI analysis complete (${confidence} confidence). What next?`,
header: "Decision",
multiSelect: false,
options: [
{ label: "Apply Fix", description: "Apply recommended changes immediately" },
{ label: "Escalate to Debug", description: "Switch to debug-first for deeper analysis" },
{ label: "Escalate to Test", description: "Switch to test-first for validation" },
{ label: "Review Only", description: "Just review, no action" }
]
}]
})
if (decision === "Apply Fix") {
applyFixFromCLIRecommendation(cliOutput)
} else if (decision === "Escalate to Debug") {
// Re-invoke ccw-debug with --mode debug
SlashCommand(command=`/ccw-debug --mode debug "${bug_description}"`)
} else if (decision === "Escalate to Test") {
// Re-invoke ccw-debug with --mode test
SlashCommand(command=`/ccw-debug --mode test "${bug_description}"`)
}
}
```
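`applyFixFromCLIRecommendation` is referenced above but not defined in this document; one plausible sketch (an assumption, not the shipped implementation) parses the "**File**:" headers and their code blocks out of the CLI report:
```javascript
// Sketch only - assumes the report format shown in step 3
function applyFixFromCLIRecommendation(cliOutput) {
  const blocks = cliOutput.matchAll(/\*\*File\*\*:\s*(\S+)[\s\S]*?`{3}\w*\n([\s\S]*?)`{3}/g);
  for (const [, filePath, snippet] of blocks) {
    console.log(`Applying recommended change to ${filePath}`);
    console.log(snippet); // actual application would go through Edit/Write with a user-visible diff
  }
}
```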
**Key Characteristics**:
- **Speed**: 2-5 minutes total (fastest mode)
- **Session**: No persistent session directory (lightweight)
- **Output**: Recommendation report only
- **Execution**: Optional, user-controlled
- **Escalation**: Can upgrade to full debug/test workflows
**Limitations**:
- No hypothesis iteration (single-shot analysis)
- No automatic test generation
- No instrumentation/logging
- Best for clear symptoms with localized fixes
---
### Mode 1: Debug First
**Best For**: Issues requiring root cause exploration
**Workflow**:
```
User Input → Session Init → /workflow:debug-with-file
        ↓
Generate understanding.md + hypotheses
        ↓
User reproduces issue, analyze logs
        ↓
Gemini validates hypotheses
        ↓
Apply fix code
        ↓
/workflow:test-fix-gen
        ↓
/workflow:test-cycle-execute
        ↓
Generate unified report
```
**Execution Steps**:
1. **Session Initialization** (Phase 2)
```javascript
const sessionId = `CCWD-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.ccw-debug/${sessionId}`
Bash(`mkdir -p ${sessionFolder}`)
// Record mode selection
const modeConfig = {
mode: "debug",
original_input: bug_description,
timestamp: getUtc8ISOString(),
flags: { hotfix, autoYes }
}
Write(`${sessionFolder}/mode-config.json`, JSON.stringify(modeConfig, null, 2))
```
2. **Start Debug** (Phase 3)
```javascript
SlashCommand(command=`/workflow:debug-with-file "${bug_description}"`)
// Update TodoWrite
TodoWrite({
todos: [
{ content: "Phase 1: Debug & Analysis", status: "completed" },
{ content: "Phase 2: Apply Fix from Debug Findings", status: "in_progress" },
{ content: "Phase 3: Generate & Execute Tests", status: "pending" },
{ content: "Phase 4: Generate Report", status: "pending" }
]
})
```
3. **Apply Fix** (Handled by debug command)
4. **Test Generation & Execution**
```javascript
// Auto-continue after debug command completes
SlashCommand(command=`/workflow:test-fix-gen "Test validation for: ${bug_description}"`)
SlashCommand(command="/workflow:test-cycle-execute")
```
5. **Generate Report** (Phase 5)
```
## Debug-First Workflow Completed
**Issue**: [bug_description]
**Mode**: Debug First
**Session**: [sessionId]
### Debug Phase Results
- Root Cause: [extracted from understanding.md]
- Hypothesis Confirmation: [from hypotheses.json]
- Fixes Applied: [list of modified files]
### Test Phase Results
- Tests Created: [test files generated by IMPL-001]
- Pass Rate: [final test pass rate]
- Iteration Count: [fix iterations]
### Key Findings
- [learning points from debugging]
- [coverage insights from testing]
```
---
### Mode 2: Test First
**Best For**: Implemented code needing test validation
**Workflow**:
```
User Input → Session Init → /workflow:test-fix-gen
        ↓
Generate test tasks (IMPL-001, IMPL-002)
        ↓
/workflow:test-cycle-execute
        ↓
Auto-iterate: Test → Analyze Failures → CLI Fix
        ↓
Until pass rate ≥ 95%
        ↓
Generate report
```
**Execution Steps**:
1. **Session Initialization** (Phase 2)
```javascript
const modeConfig = {
mode: "test",
original_input: bug_description,
timestamp: getUtc8ISOString(),
flags: { hotfix, autoYes }
}
```
2. **Generate Tests** (Phase 3)
```javascript
SlashCommand(command=`/workflow:test-fix-gen "${bug_description}"`)
// Update TodoWrite
TodoWrite({
todos: [
{ content: "Phase 1: Generate Tests", status: "completed" },
{ content: "Phase 2: Execute & Fix Tests", status: "in_progress" },
{ content: "Phase 3: Final Validation", status: "pending" },
{ content: "Phase 4: Generate Report", status: "pending" }
]
})
```
3. **Execute & Iterate** (Phase 3 cont.)
```javascript
SlashCommand(command="/workflow:test-cycle-execute")
// test-cycle-execute handles:
// - Execute tests
// - Analyze failures
// - Generate fix tasks via CLI
// - Iterate fixes until pass
```
4. **Generate Report** (Phase 5)
---
### Mode 3: Bidirectional Verification
**Best For**: Complex systems, multi-dimensional analysis
**Workflow**:
```
User Input → Session Init → Parallel execution:
        ┌──────────────────────────────┐
        │                              │
        ↓                              ↓
/workflow:debug-with-file      /workflow:test-fix-gen
        │                              │
Generate hypotheses            Generate test tasks
& understanding                        │
        │                              ↓
Apply debug fixes              /workflow:test-cycle-execute
        │                              │
        └──────────────┬───────────────┘
                       ↓
Phase 4: Merge Findings
├─ Converge root cause analyses
├─ Identify consistency (mutual validation)
├─ Identify conflicts (need coordination)
└─ Generate unified report
```
**Execution Steps**:
1. **Parallel Execution** (Phase 3)
```javascript
// Start debug
const debugTask = SlashCommand(
command=`/workflow:debug-with-file "${bug_description}"`,
run_in_background=false
)
// Start test generation (synchronous execution, SlashCommand blocks)
const testTask = SlashCommand(
command=`/workflow:test-fix-gen "${bug_description}"`,
run_in_background=false
)
// Execute test cycle
const testCycleTask = SlashCommand(
command="/workflow:test-cycle-execute",
run_in_background=false
)
```
2. **Merge Findings** (Phase 4)
```javascript
// Read debug results
const understandingMd = Read(`${debugSessionFolder}/understanding.md`)
const hypothesesJson = JSON.parse(Read(`${debugSessionFolder}/hypotheses.json`))
// Read test results
const testResultsJson = JSON.parse(Read(`${testSessionFolder}/.process/test-results.json`))
const fixPlanJson = JSON.parse(Read(`${testSessionFolder}/.task/IMPL-002.json`))
// Merge analysis
const debugRootCause = hypothesesJson.confirmed_hypothesis
const testFailures = testResultsJson.failures
const convergence = {
debug_root_cause: debugRootCause,
test_failure_pattern: testFailures,
consistency: analyzeConsistency(debugRootCause, testFailures),
conflicts: identifyConflicts(debugRootCause, testFailures),
unified_root_cause: mergeRootCauses(debugRootCause, testFailures),
recommended_fix: selectBestFix(debugRootCause, testFailures)
}
```
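The merge helpers (`analyzeConsistency`, `identifyConflicts`, `mergeRootCauses`, `selectBestFix`) are assumed rather than shipped utilities; a minimal sketch of the first one could compare the files implicated by each workflow:
```javascript
// Sketch only - the helper names above are assumptions
function analyzeConsistency(debugRootCause, testFailures) {
  const debugFiles = new Set(debugRootCause?.files || []);
  const overlap = (testFailures || []).map(f => f.file).filter(f => f && debugFiles.has(f));
  return { consistent: overlap.length > 0, overlapping_files: overlap };
}
```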
3. **Generate Report** (Phase 5)
```
## Bidirectional Verification Workflow Completed
**Issue**: [bug_description]
**Mode**: Bidirectional Verification
### Debug Findings
- Root Cause (hypothesis): [from understanding.md]
- Confidence: [from hypotheses.json]
- Key code paths: [file:line]
### Test Findings
- Failure pattern: [list of failing tests]
- Error type: [error type]
- Impact scope: [affected modules]
### Merged Analysis
- ✓ Consistent: Both workflows identified same root cause
- ⚠ Conflicts: [list any conflicts]
- → Unified Root Cause: [final confirmed root cause]
### Recommended Fix
- Strategy: [selected fix strategy]
- Rationale: [why this strategy]
- Risks: [known risks]
```
---
## Command Line Interface
### Complete Syntax
```bash
/ccw-debug [OPTIONS] <BUG_DESCRIPTION>
Options:
--mode <cli|debug|test|bidirectional> Execution mode (default: debug)
--yes, -y Auto mode (skip all confirmations)
--hotfix, -h Production hotfix mode (only for debug mode)
--no-tests Skip test generation in debug-first mode
--skip-report Don't generate final report
--resume <session-id> Resume interrupted session
Arguments:
<BUG_DESCRIPTION> Issue description, error message, or .md file path
```
### Examples
```bash
# CLI quick mode: fastest, recommendation-only (NEW!)
/ccw-debug --mode cli "User login timeout"
/ccw-debug --mode cli --yes "Quick fix: API 500 error" # Auto-apply if high confidence
# Debug first (default)
/ccw-debug "User login timeout"
# Test first
/ccw-debug --mode test "Payment validation failure"
# Bidirectional verification
/ccw-debug --mode bidirectional "Multi-module data consistency issue"
# Hotfix auto mode
/ccw-debug --hotfix --yes "API 500 error"
# Debug first, skip tests
/ccw-debug --no-tests "Understand code flow"
# Resume interrupted session
/ccw-debug --resume CCWD-login-timeout-2025-01-27
```
---
## Session Structure
### File Organization
```
.workflow/.ccw-debug/CCWD-{slug}-{date}/
├── mode-config.json # Mode configuration and flags
├── session-manifest.json # Session index and status
├── final-report.md # Final report
├── debug/ # Debug workflow (if mode includes debug)
│ ├── debug-session-id.txt
│ ├── understanding.md
│ ├── hypotheses.json
│ └── debug.log
├── test/ # Test workflow (if mode includes test)
│ ├── test-session-id.txt
│ ├── IMPL_PLAN.md
│ ├── test-results.json
│ └── iteration-state.json
└── fusion/ # Fusion analysis (bidirectional mode)
├── convergence-analysis.json
├── consistency-report.md
└── unified-root-cause.json
```
### Session State Management
```json
{
"session_id": "CCWD-login-timeout-2025-01-27",
"mode": "debug|test|bidirectional",
"status": "running|completed|failed|paused",
"phases": {
"phase_1": { "status": "completed", "timestamp": "..." },
"phase_2": { "status": "in_progress", "timestamp": "..." },
"phase_3": { "status": "pending" },
"phase_4": { "status": "pending" },
"phase_5": { "status": "pending" }
},
"sub_sessions": {
"debug_session": "DBG-...",
"test_session": "WFS-test-..."
},
"artifacts": {
"debug_understanding": "...",
"test_results": "...",
"fusion_analysis": "..."
}
}
```
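A resumed run (`--resume`) would advance this manifest phase by phase; a minimal update sketch, assuming the layout above, is:
```javascript
// Sketch only - assumes the session-manifest.json layout above
const manifestPath = `${sessionFolder}/session-manifest.json`;
const manifest = JSON.parse(Read(manifestPath));
manifest.phases.phase_2 = { status: 'completed', timestamp: getUtc8ISOString() };
manifest.phases.phase_3 = { status: 'in_progress', timestamp: getUtc8ISOString() };
Write(manifestPath, JSON.stringify(manifest, null, 2));
```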
---
## Mode Selection Logic
### Auto Mode Recommendation
When user doesn't specify `--mode`, recommend based on input analysis:
```javascript
function recommendMode(bugDescription) {
const indicators = {
cli_signals: [
/quick|fast|simple|immediate/,
/recommendation|suggest|advice/,
/just need|only want|quick look/,
/straightforward|obvious|clear/
],
debug_signals: [
/unclear|don't know|maybe|uncertain|why/,
/error|crash|fail|exception|stack trace/,
/execution flow|code path|how does/
],
test_signals: [
/test|coverage|verify|pass|fail/,
/implementation|implemented|complete/,
/case|scenario|should/
],
complex_signals: [
/multiple|all|system|integration/,
/module|subsystem|network|distributed/,
/concurrent|async|race/
]
}
let score = { cli: 0, debug: 0, test: 0, bidirectional: 0 }
// CLI signals (lightweight preference)
for (const pattern of indicators.cli_signals) {
if (pattern.test(bugDescription)) score.cli += 3
}
// Debug signals
for (const pattern of indicators.debug_signals) {
if (pattern.test(bugDescription)) score.debug += 2
}
// Test signals
for (const pattern of indicators.test_signals) {
if (pattern.test(bugDescription)) score.test += 2
}
// Complex signals (prefer bidirectional for complex issues)
for (const pattern of indicators.complex_signals) {
if (pattern.test(bugDescription)) {
score.bidirectional += 3
score.cli -= 2 // Complex issues not suitable for CLI quick
}
}
// If description is short and has clear error, prefer CLI
if (bugDescription.length < 100 && /error|fail|crash/.test(bugDescription)) {
score.cli += 2
}
// Return highest scoring mode
return Object.entries(score).sort((a, b) => b[1] - a[1])[0][0]
}
```
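For example, two illustrative inputs (note the patterns are case-sensitive):
```javascript
recommendMode('just need a quick look: login fails with 500 error');
// → 'cli' (quick-look signals plus a short description with a clear error)
recommendMode('multiple modules fail intermittently under concurrent async load');
// → 'bidirectional' (complexity signals outweigh the CLI-quick penalty)
```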
---
## Best Practices
### When to Use Each Mode
| Issue Characteristic | Recommended Mode | Rationale |
|----------------------|-----------------|-----------|
| Simple error, clear symptoms | CLI Quick | Fastest recommendation |
| Incomplete error info, requires exploration | Debug First | Deep diagnostic capability |
| Code complete, needs test coverage | Test First | Automated iterative fixes |
| Cross-module issue, ambiguous symptoms | Bidirectional | Multi-angle insights |
| Production failure, needs immediate guidance | CLI Quick + --yes | Fastest guidance, optional escalation |
| Production failure, needs safe fix | Debug First + --hotfix | Minimal diagnosis time |
| Want to understand why it failed | Debug First | Records understanding evolution |
| Want to ensure all scenarios pass | Test First | Complete coverage-driven |
### Performance Tips
- **CLI Quick**: 2-5 minutes, no file I/O, recommendation-only
- **Debug First**: Usually requires manual issue reproduction (after logging added), then 15-30 min
- **Test First**: Fully automated, 20-45 min depending on test suite size
- **Bidirectional**: Most comprehensive but slowest (parallel workflows), 30-60 min
### Workflow Continuity
- **CLI Quick**: Can escalate to debug/test/apply fix based on user decision
- **Debug First**: Auto-launches test generation and execution after completion
- **Test First**: When failure rates stay high, suggests switching to debug mode for root-cause analysis
- **Bidirectional**: Always executes complete flow
---
## Follow-up Expansion
After completion, offer to expand to issues:
```
## Done! What's next?
- [ ] Create Test issue (improve test coverage)
- [ ] Create Enhancement issue (optimize code quality)
- [ ] Create Refactor issue (improve architecture)
- [ ] Create Documentation issue (record learnings)
- [ ] Don't create any issue, end workflow
```
Selected items call: `/issue:new "{issue summary} - {dimension}"`
---
## Error Handling
| Error | CLI Quick | Debug First | Test First | Bidirectional |
|-------|-----------|-------------|-----------|---------------|
| Session creation failed | N/A (no session) | Retry → abort | Retry → abort | Retry → abort |
| CLI analysis failed | Retry with fallback tool → manual | N/A | N/A | N/A |
| Diagnosis/test failed | N/A | Continue with partial results | Direct failure | Use alternate workflow results |
| Low confidence result | Ask escalate or review | N/A | N/A | N/A |
| Merge conflicts | N/A | N/A | N/A | Select highest confidence plan |
| Fix application failed | Report error, no auto-retry | Request manual fix | Mark failed, request intervention | Try alternative plan |
---
## Relationship with ccw Command
| Feature | ccw | ccw-debug |
|---------|-----|----------|
| **Design** | General workflow orchestration | Debug + test aggregation |
| **Intent Detection** | ✅ Detects task type | ✅ Detects issue type |
| **Automation** | ✅ Auto-selects workflow | ✅ Auto-selects mode |
| **Quick Mode** | ❌ None | ✅ CLI Quick (2-5 min) |
| **Parallel Execution** | ❌ Sequential | ✅ Bidirectional mode parallel |
| **Fusion Analysis** | ❌ None | ✅ Bidirectional mode fusion |
| **Workflow Scope** | Broad (feature/bugfix/tdd/ui etc.) | Deep focus (debug + test) |
| **CLI Integration** | Yes | Yes (4 levels: quick/deep/iterative/fusion) |
---
## Usage Recommendations
1. **First Time**: Use default mode (debug-first), observe workflow
2. **Quick Decision**: Use CLI Quick (--mode cli) for immediate recommendations
3. **Quick Fix**: Use `--hotfix --yes` for minimal diagnostics (debug mode)
4. **Learning**: Use debug-first, read `understanding.md`
5. **Complete Validation**: Use bidirectional for multi-dimensional insights
6. **Auto Repair**: Use test-first for automatic iteration
7. **Escalation**: Start with CLI Quick, escalate to other modes as needed
---
## Reference
### Related Commands
- `ccw cli` - Direct CLI analysis (used by CLI Quick mode)
- `/workflow:debug-with-file` - Deep debug diagnostics
- `/workflow:test-fix-gen` - Test generation
- `/workflow:test-cycle-execute` - Test execution
- `/workflow:lite-fix` - Lightweight fix
- `/ccw` - General workflow orchestration
### Configuration Files
- `~/.claude/cli-tools.json` - CLI tool configuration (Gemini/Qwen/Codex)
- `.workflow/project-tech.json` - Project technology stack
- `.workflow/project-guidelines.json` - Project conventions
### CLI Tool Fallback Chain (for CLI Quick mode)
When CLI analysis fails, fallback order:
1. **Gemini** (primary): `gemini-2.5-pro`
2. **Qwen** (fallback): `coder-model`
3. **Codex** (fallback): `gpt-5.2`
---
## Summary: Mode Selection Decision Tree
```
User calls: /ccw-debug <bug_description>
┌─ Explicit --mode specified?
│ ├─ YES → Use specified mode
│ │ ├─ cli → 2-5 min analysis, optionally escalate
│ │ ├─ debug → Full debug-with-file workflow
│ │ ├─ test → Test-first workflow
│ │ └─ bidirectional → Parallel debug + test
│ │
│ └─ NO → Auto-recommend based on bug description
│ ├─ Keywords: "quick", "fast", "simple" → CLI Quick
│ ├─ Keywords: "error", "crash", "exception" → Debug First (or CLI if simple)
│ ├─ Keywords: "test", "verify", "coverage" → Test First
│ └─ Keywords: "multiple", "system", "distributed" → Bidirectional
├─ Check --yes flag
│ ├─ YES → Auto-confirm all decisions
│ │ ├─ CLI mode: Auto-apply if confidence High
│ │ └─ Others: Auto-select default options
│ │
│ └─ NO → Interactive mode, ask user for confirmations
├─ Check --hotfix flag (debug mode only)
│ ├─ YES → Minimal diagnostics, fast fix
│ └─ NO → Full analysis
└─ Execute selected mode workflow
└─ Return results or escalation options
```

@@ -24,7 +24,7 @@ Main process orchestrator: intent analysis → workflow selection → command ch
|-----------|---------|---------|
| **Planning + Execution** | plan-cmd → execute-cmd | lite-plan → lite-execute |
| **Testing** | test-gen-cmd → test-exec-cmd | test-fix-gen → test-cycle-execute |
| **Review** | review-cmd → fix-cmd | review-session-cycle → review-cycle-fix |
**Atomic Rules**:
1. CCW automatically groups commands into minimum units - never splits them
@@ -67,10 +67,16 @@ function analyzeIntent(input) {
function detectTaskType(text) {
const patterns = {
'bugfix-hotfix': /(?=.*(urgent|production|critical))(?=.*(fix|bug))/, // both groups must match (`regex && regex` only yields the second)
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm': /brainstorm|ideation|头脑风暴|创意|发散思维|creative thinking|multi-perspective.*think|compare perspectives|探索.*可能/,
'brainstorm-to-issue': /brainstorm.*issue|头脑风暴.*issue|idea.*issue|想法.*issue|从.*头脑风暴|convert.*brainstorm/,
'debug-file': /debug.*document|hypothesis.*debug|troubleshoot.*track|investigate.*log|调试.*记录|假设.*验证|systematic debug|深度调试/,
'analyze-file': /analyze.*document|explore.*concept|understand.*architecture|investigate.*discuss|collaborative analysis|分析.*讨论|深度.*理解|协作.*分析/,
// Standard workflows
'bugfix': /fix|bug|error|crash|fail|debug/,
'issue-batch': /(?=.*(issues?|batch))(?=.*(fix|resolve))/,
'issue-transition': /issue workflow|structured workflow|queue|multi-stage/,
'exploration': /uncertain|explore|research|what if/,
'multi-perspective': /multi-perspective|compare|cross-verify/,
'quick-task': /(?=.*(quick|simple|small))(?=.*(feature|function))/,
'ui-design': /ui|design|component|style/,
'tdd': /tdd|test-driven|test first/,
@@ -111,14 +117,21 @@ async function clarifyRequirements(analysis) {
function selectWorkflow(analysis) {
const levelMap = {
'bugfix-hotfix': { level: 2, flow: 'bugfix.hotfix' },
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm': { level: 4, flow: 'brainstorm-with-file' }, // Multi-perspective ideation
'brainstorm-to-issue': { level: 4, flow: 'brainstorm-to-issue' }, // Brainstorm → Issue workflow
'debug-file': { level: 3, flow: 'debug-with-file' }, // Hypothesis-driven debugging
'analyze-file': { level: 3, flow: 'analyze-with-file' }, // Collaborative analysis
// Standard workflows
'bugfix': { level: 2, flow: 'bugfix.standard' },
'issue-batch': { level: 'Issue', flow: 'issue' },
'issue-transition': { level: 2.5, flow: 'rapid-to-issue' }, // Bridge workflow
'exploration': { level: 4, flow: 'full' },
'quick-task': { level: 1, flow: 'lite-lite-lite' },
'ui-design': { level: analysis.complexity === 'high' ? 4 : 3, flow: 'ui' },
'tdd': { level: 3, flow: 'tdd' },
'test-fix': { level: 3, flow: 'test-fix-gen' },
'review': { level: 3, flow: 'review-cycle-fix' },
'documentation': { level: 2, flow: 'docs' },
'feature': { level: analysis.complexity === 'high' ? 3 : 2, flow: analysis.complexity === 'high' ? 'coupled' : 'rapid' }
};
@@ -147,6 +160,16 @@ function buildCommandChain(workflow, analysis) {
])
],
// Level 2 Bridge - Lightweight to Issue Workflow
'rapid-to-issue': [
// Unit: Quick Implementation【lite-plan → convert-to-plan】
{ cmd: '/workflow:lite-plan', args: `"${analysis.goal}"`, unit: 'quick-impl-to-issue' },
{ cmd: '/issue:convert-to-plan', args: '--latest-lite-plan -y', unit: 'quick-impl-to-issue' },
// Auto-continue to issue workflow
{ cmd: '/issue:queue', args: '' },
{ cmd: '/issue:execute', args: '--queue auto' }
],
'bugfix.standard': [
// Unit: Bug Fix【lite-fix → lite-execute】
{ cmd: '/workflow:lite-fix', args: `"${analysis.goal}"`, unit: 'bug-fix' },
@@ -179,6 +202,30 @@ function buildCommandChain(workflow, analysis) {
{ cmd: '/workflow:lite-execute', args: '--in-memory', unit: 'quick-impl' }
],
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm-with-file': [
{ cmd: '/workflow:brainstorm-with-file', args: `"${analysis.goal}"` }
// Note: Has built-in post-completion options (create plan, create issue, deep analysis)
],
// Brainstorm-to-Issue workflow (bridge from brainstorm to issue execution)
'brainstorm-to-issue': [
// Note: Assumes brainstorm session already exists, or run brainstorm first
{ cmd: '/issue:from-brainstorm', args: `SESSION="${extractBrainstormSession(analysis)}" --auto` },
{ cmd: '/issue:queue', args: '' },
{ cmd: '/issue:execute', args: '--queue auto' }
],
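// Note: extractBrainstormSession() above is an assumed helper, not defined in this
// excerpt; a minimal sketch could scan the goal text for a BS-* session id:
//   const extractBrainstormSession = (analysis) =>
//     (analysis.goal.match(/BS-[\w-]+/) || [])[0]; // undefined → from-brainstorm's --auto picks the latest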
'debug-with-file': [
{ cmd: '/workflow:debug-with-file', args: `"${analysis.goal}"` }
// Note: Self-contained with hypothesis-driven iteration and Gemini validation
],
'analyze-with-file': [
{ cmd: '/workflow:analyze-with-file', args: `"${analysis.goal}"` }
// Note: Self-contained with multi-round discussion and CLI exploration
],
// Level 3 - Standard
'coupled': [
// Unit: Verified Planning【plan → plan-verify】
@@ -186,9 +233,9 @@ function buildCommandChain(workflow, analysis) {
{ cmd: '/workflow:plan-verify', args: '', unit: 'verified-planning' },
// Execution
{ cmd: '/workflow:execute', args: '' },
// Unit: Code Review【review-session-cycle → review-cycle-fix】
{ cmd: '/workflow:review-session-cycle', args: '', unit: 'code-review' },
{ cmd: '/workflow:review-cycle-fix', args: '', unit: 'code-review' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
...(analysis.constraints?.includes('skip-tests') ? [] : [
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
@@ -210,10 +257,10 @@ function buildCommandChain(workflow, analysis) {
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
],
'review-cycle-fix': [
// Unit: Code Review【review-session-cycle → review-cycle-fix】
{ cmd: '/workflow:review-session-cycle', args: '', unit: 'code-review' },
{ cmd: '/workflow:review-cycle-fix', args: '', unit: 'code-review' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
@@ -409,7 +456,12 @@ Phase 5: Execute Command Chain
|-------|------|-------|-----------------------|
| "Add API endpoint" | feature (low) | 2 |【lite-plan → lite-execute】→【test-fix-gen → test-cycle-execute】|
| "Fix login timeout" | bugfix | 2 |【lite-fix → lite-execute】→【test-fix-gen → test-cycle-execute】|
| "Use issue workflow" | issue-transition | 2.5 |【lite-plan → convert-to-plan】→ queue → execute |
| "头脑风暴: 通知系统重构" (brainstorm: notification system refactor) | brainstorm | 4 | brainstorm-with-file → (built-in post-completion) |
| "从头脑风暴创建 issue" (create issues from a brainstorm) | brainstorm-to-issue | 4 | from-brainstorm → queue → execute |
| "深度调试 WebSocket 连接断开" (deep-debug WebSocket disconnects) | debug-file | 3 | debug-with-file → (hypothesis iteration) |
| "协作分析: 认证架构优化" (collaborative analysis: auth architecture optimization) | analyze-file | 3 | analyze-with-file → (multi-round discussion) |
| "OAuth2 system" | feature (high) | 3 |【plan → plan-verify】→ execute →【review-session-cycle → review-cycle-fix】→【test-fix-gen → test-cycle-execute】|
| "Implement with TDD" | tdd | 3 |【tdd-plan → execute】→ tdd-verify |
| "Uncertain: real-time arch" | exploration | 4 | brainstorm:auto-parallel →【plan → plan-verify】→ execute →【test-fix-gen → test-cycle-execute】|
@@ -452,6 +504,29 @@ todos = [
---
## With-File Workflows
**With-File workflows** provide documented exploration with multi-CLI collaboration. They are self-contained and generate comprehensive session artifacts.
| Workflow | Purpose | Key Features | Output Folder |
|----------|---------|--------------|---------------|
| **brainstorm-with-file** | Multi-perspective ideation | Gemini/Codex/Claude perspectives, diverge-converge cycles | `.workflow/.brainstorm/` |
| **debug-with-file** | Hypothesis-driven debugging | Gemini validation, understanding evolution, NDJSON logging | `.workflow/.debug/` |
| **analyze-with-file** | Collaborative analysis | Multi-round Q&A, CLI exploration, documented discussions | `.workflow/.analysis/` |
**Detection Keywords**:
- **brainstorm**: 头脑风暴, 创意, 发散思维, multi-perspective, compare perspectives
- **debug-file**: 深度调试, 假设验证, systematic debug, hypothesis debug
- **analyze-file**: 协作分析, 深度理解, collaborative analysis, explore concept
**Characteristics**:
1. **Self-Contained**: Each workflow handles its own iteration loop
2. **Documented Process**: Creates evolving documents (brainstorm.md, understanding.md, discussion.md)
3. **Multi-CLI**: Uses Gemini/Codex/Claude for different perspectives
4. **Built-in Post-Completion**: Offers follow-up options (create plan, issue, etc.)
---
## Type Comparison: ccw vs ccw-coordinator
| Aspect | ccw | ccw-coordinator |
@@ -483,4 +558,10 @@ ccw "Implement user registration with TDD"
# Exploratory task
ccw "Uncertain about architecture for real-time notifications"
# With-File workflows (documented exploration with multi-CLI collaboration)
ccw "头脑风暴: 用户通知系统重新设计" # (brainstorm: redesign the user notification system) → brainstorm-with-file
ccw "从头脑风暴 BS-通知系统-2025-01-28 创建 issue" # (create issues from brainstorm BS-通知系统-2025-01-28) → brainstorm-to-issue (bridge)
ccw "深度调试: 系统随机崩溃问题" # (deep debug: random system crashes) → debug-with-file
ccw "协作分析: 理解现有认证架构的设计决策" # (collaborative analysis: understand the existing auth architecture's design decisions) → analyze-with-file
```

@@ -3,6 +3,7 @@ name: cli-init
description: Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection
argument-hint: "[--tool gemini|qwen|all] [--output path] [--preview]"
allowed-tools: Bash(*), Read(*), Write(*), Glob(*)
group: cli
---
# CLI Initialization Command (/cli:cli-init)

@@ -1,93 +0,0 @@
---
name: enhance-prompt
description: Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection
argument-hint: "user input to enhance"
---
## Overview
Systematically enhances user prompts by leveraging session memory context and intent analysis, translating ambiguous requests into actionable specifications.
## Core Protocol
**Enhancement Pipeline:**
`Intent Translation``Context Integration``Structured Output`
**Context Sources:**
- Session memory (conversation history, previous analysis)
- Implicit technical requirements
- User intent patterns
## Enhancement Rules
### Intent Translation
| User Says | Translate To | Focus |
|-----------|--------------|-------|
| "fix" | Debug and resolve | Root cause → preserve behavior |
| "improve" | Enhance/optimize | Performance/readability |
| "add" | Implement feature | Integration + edge cases |
| "refactor" | Restructure quality | Maintain behavior |
| "update" | Modernize | Version compatibility |
### Context Integration Strategy
**Session Memory:**
- Reference recent conversation context
- Reuse previously identified patterns
- Build on established understanding
- Infer technical requirements from discussion
**Example:**
```bash
# User: "add login"
# Session Memory: Previous auth discussion, JWT mentioned
# Inferred: JWT-based auth, integrate with existing session management
# Action: Implement JWT authentication with session persistence
```
## Output Structure
```bash
INTENT: [Clear technical goal]
CONTEXT: [Session memory + codebase patterns]
ACTION: [Specific implementation steps]
ATTENTION: [Critical constraints]
```
### Output Examples
**Example 1:**
```bash
# Input: "fix login button"
INTENT: Debug non-functional login button
CONTEXT: From session - OAuth flow discussed, known state issue
ACTION: Check event binding → verify state updates → test auth flow
ATTENTION: Preserve existing OAuth integration
```
**Example 2:**
```bash
# Input: "refactor payment code"
INTENT: Restructure payment module for maintainability
CONTEXT: Session memory - PCI compliance requirements, Stripe integration patterns
ACTION: Extract reusable validators → isolate payment gateway logic → maintain adapter pattern
ATTENTION: Zero behavior change, maintain PCI compliance, full test coverage
```
## Enhancement Triggers
- Ambiguous language: "fix", "improve", "clean up"
- Vague requests requiring clarification
- Complex technical requirements
- Architecture changes
- Critical systems: auth, payment, security
- Multi-step refactoring
## Key Principles
1. **Session Memory First**: Leverage conversation context and established understanding
2. **Context Reuse**: Build on previous discussions and decisions
3. **Clear Output**: Structured, actionable specifications
4. **Intent Clarification**: Transform vague requests into specific technical goals
5. **Avoid Duplication**: Reference existing context, don't repeat

View File

@@ -0,0 +1,718 @@
---
name: convert-to-plan
description: Convert planning artifacts (lite-plan, workflow session, markdown) to issue solutions
argument-hint: "[-y|--yes] [--issue <id>] [--supplement] <SOURCE>"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Skip confirmation, auto-create issue and bind solution.
# Issue Convert-to-Plan Command (/issue:convert-to-plan)
## Overview
Converts various planning artifact formats into issue workflow solutions with intelligent detection and automatic binding.
**Supported Sources** (auto-detected):
- **lite-plan**: `.workflow/.lite-plan/{slug}/plan.json`
- **workflow-session**: `WFS-xxx` ID or `.workflow/active/{session}/` folder
- **markdown**: Any `.md` file with implementation/task content
- **json**: Direct JSON files matching plan-json-schema
## Quick Reference
```bash
# Convert lite-plan to new issue (auto-creates issue)
/issue:convert-to-plan ".workflow/.lite-plan/implement-auth-2026-01-25"
# Convert workflow session to existing issue
/issue:convert-to-plan WFS-auth-impl --issue GH-123
# Supplement existing solution with additional tasks
/issue:convert-to-plan "./docs/additional-tasks.md" --issue ISS-001 --supplement
# Auto mode - skip confirmations
/issue:convert-to-plan ".workflow/.lite-plan/my-plan" -y
```
## Command Options
| Option | Description | Default |
|--------|-------------|---------|
| `<SOURCE>` | Planning artifact path or WFS-xxx ID | Required |
| `--issue <id>` | Bind to existing issue instead of creating new | Auto-create |
| `--supplement` | Add tasks to existing solution (requires --issue) | false |
| `-y, --yes` | Skip all confirmations | false |
## Core Data Access Principle
**⚠️ Important**: Use CLI commands for all issue/solution operations.
| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| Get issue | `ccw issue status <id> --json` | Read issues.jsonl directly |
| Create issue | `ccw issue init <id> --title "..."` | Write to issues.jsonl |
| Bind solution | `ccw issue bind <id> <sol-id>` | Edit issues.jsonl |
| List solutions | `ccw issue solutions --issue <id> --brief` | Read solutions/*.jsonl |
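For example (issue and solution IDs are placeholders):
```bash
# Correct: always go through the ccw CLI
ccw issue status ISS-20260129-001 --json
ccw issue bind ISS-20260129-001 SOL-ISS-20260129-001-ab3d
ccw issue solutions --issue ISS-20260129-001 --brief
# Incorrect: reading or editing .workflow/issues/*.jsonl directly
```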
## Solution Schema Reference
Target format for all extracted data (from solution-schema.json):
```typescript
interface Solution {
id: string; // SOL-{issue-id}-{4-char-uid}
description?: string; // High-level summary
approach?: string; // Technical strategy
tasks: Task[]; // Required: at least 1 task
exploration_context?: object; // Optional: source context
analysis?: { risk, impact, complexity };
score?: number; // 0.0-1.0
is_bound: boolean;
created_at: string;
bound_at?: string;
}
interface Task {
id: string; // T1, T2, T3... (pattern: ^T[0-9]+$)
title: string; // Required: action verb + target
scope: string; // Required: module path or feature area
action: Action; // Required: Create|Update|Implement|...
description?: string;
modification_points?: Array<{file, target, change}>;
implementation: string[]; // Required: step-by-step guide
test?: { unit?, integration?, commands?, coverage_target? };
acceptance: { criteria: string[], verification: string[] }; // Required
commit?: { type, scope, message_template, breaking? };
depends_on?: string[];
priority?: number; // 1-5 (default: 3)
}
type Action = 'Create' | 'Update' | 'Implement' | 'Refactor' | 'Add' | 'Delete' | 'Configure' | 'Test' | 'Fix';
```
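A minimal instance under this schema might look like the following (all values hypothetical):
```javascript
// Hypothetical example conforming to solution-schema.json
const exampleSolution = {
  id: 'SOL-ISS-20260129-001-ab3d',
  description: 'Add JWT-based login',
  approach: 'Introduce a token service and wire it into the auth middleware',
  tasks: [{
    id: 'T1',
    title: 'Implement token service',
    scope: 'src/auth',
    action: 'Implement',
    implementation: ['Create token service', 'Wire into middleware'],
    acceptance: { criteria: ['Tokens issued and verified'], verification: ['Unit tests pass'] }
  }],
  is_bound: false,
  created_at: '2026-01-29T12:00:00Z'
};
```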
## Implementation
### Phase 1: Parse Arguments & Detect Source Type
```javascript
const input = userInput.trim();
const flags = parseFlags(userInput); // --issue, --supplement, -y/--yes
// Extract source path (first non-flag argument)
const source = extractSourceArg(input);
// Detect source type
function detectSourceType(source) {
// Check for WFS-xxx pattern (workflow session ID)
if (source.match(/^WFS-[\w-]+$/)) {
return { type: 'workflow-session-id', path: `.workflow/active/${source}` };
}
// Check if directory
const isDir = Bash(`test -d "${source}" && echo "dir" || echo "file"`).trim() === 'dir';
if (isDir) {
// Check for lite-plan indicator
const hasPlanJson = Bash(`test -f "${source}/plan.json" && echo "yes" || echo "no"`).trim() === 'yes';
if (hasPlanJson) {
return { type: 'lite-plan', path: source };
}
// Check for workflow session indicator
const hasSession = Bash(`test -f "${source}/workflow-session.json" && echo "yes" || echo "no"`).trim() === 'yes';
if (hasSession) {
return { type: 'workflow-session', path: source };
}
}
// Check file extensions
if (source.endsWith('.json')) {
return { type: 'json-file', path: source };
}
if (source.endsWith('.md')) {
return { type: 'markdown-file', path: source };
}
// Check if path exists at all
const exists = Bash(`test -e "${source}" && echo "yes" || echo "no"`).trim() === 'yes';
if (!exists) {
throw new Error(`E001: Source not found: ${source}`);
}
return { type: 'unknown', path: source };
}
const sourceInfo = detectSourceType(source);
if (sourceInfo.type === 'unknown') {
throw new Error(`E002: Unable to detect source format for: ${source}`);
}
console.log(`Detected source type: ${sourceInfo.type}`);
```
### Phase 2: Extract Data Using Format-Specific Extractor
```javascript
let extracted = { title: '', approach: '', tasks: [], metadata: {} };
switch (sourceInfo.type) {
case 'lite-plan':
extracted = extractFromLitePlan(sourceInfo.path);
break;
case 'workflow-session':
case 'workflow-session-id':
extracted = extractFromWorkflowSession(sourceInfo.path);
break;
case 'markdown-file':
extracted = await extractFromMarkdownAI(sourceInfo.path);
break;
case 'json-file':
extracted = extractFromJsonFile(sourceInfo.path);
break;
}
// Validate extraction
if (!extracted.tasks || extracted.tasks.length === 0) {
throw new Error('E006: No tasks extracted from source');
}
// Ensure task IDs are normalized to T1, T2, T3...
extracted.tasks = normalizeTaskIds(extracted.tasks);
console.log(`Extracted: ${extracted.tasks.length} tasks`);
```
#### Extractor: Lite-Plan
```javascript
function extractFromLitePlan(folderPath) {
const planJson = Read(`${folderPath}/plan.json`);
const plan = JSON.parse(planJson);
return {
title: plan.summary?.split('.')[0]?.trim() || 'Untitled Plan',
description: plan.summary,
approach: plan.approach,
tasks: plan.tasks.map(t => ({
id: t.id,
title: t.title,
scope: t.scope || '',
action: t.action || 'Implement',
description: t.description || t.title,
modification_points: t.modification_points || [],
implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
test: t.verification ? {
unit: t.verification.unit_tests,
integration: t.verification.integration_tests,
commands: t.verification.manual_checks
} : {},
acceptance: {
criteria: Array.isArray(t.acceptance) ? t.acceptance : [t.acceptance || ''],
verification: t.verification?.manual_checks || []
},
depends_on: t.depends_on || [],
priority: 3
})),
metadata: {
source_type: 'lite-plan',
source_path: folderPath,
complexity: plan.complexity,
estimated_time: plan.estimated_time,
exploration_angles: plan._metadata?.exploration_angles || [],
original_timestamp: plan._metadata?.timestamp
}
};
}
```
#### Extractor: Workflow Session
```javascript
function extractFromWorkflowSession(sessionPath) {
// Load session metadata
const sessionJson = Read(`${sessionPath}/workflow-session.json`);
const session = JSON.parse(sessionJson);
// Load IMPL_PLAN.md for approach (if exists)
let approach = '';
const implPlanPath = `${sessionPath}/IMPL_PLAN.md`;
const hasImplPlan = Bash(`test -f "${implPlanPath}" && echo "yes" || echo "no"`).trim() === 'yes';
if (hasImplPlan) {
const implPlan = Read(implPlanPath);
// Extract overview/approach section
const overviewMatch = implPlan.match(/##\s*(?:Overview|Approach|Strategy)\s*\n([\s\S]*?)(?=\n##|$)/i);
approach = overviewMatch?.[1]?.trim() || implPlan.split('\n').slice(0, 10).join('\n');
}
// Load all task JSONs from .task folder
const taskFiles = Glob({ pattern: `${sessionPath}/.task/IMPL-*.json` });
const tasks = taskFiles.map(f => {
const taskJson = Read(f);
const task = JSON.parse(taskJson);
return {
id: task.id?.replace(/^IMPL-0*/, 'T') || 'T1', // IMPL-001 → T1
title: task.title,
scope: task.scope || inferScopeFromTask(task),
action: capitalizeAction(task.type) || 'Implement',
description: task.description,
modification_points: task.implementation?.modification_points || [],
implementation: task.implementation?.steps || [],
test: task.implementation?.test || {},
acceptance: {
criteria: task.acceptance_criteria || [],
verification: task.verification_steps || []
},
commit: task.commit,
depends_on: (task.depends_on || []).map(d => d.replace(/^IMPL-0*/, 'T')),
priority: task.priority || 3
};
});
return {
title: session.name || session.description?.split('.')[0] || 'Workflow Session',
description: session.description || session.name,
approach: approach || session.description,
tasks: tasks,
metadata: {
source_type: 'workflow-session',
source_path: sessionPath,
session_id: session.id,
created_at: session.created_at
}
};
}
function inferScopeFromTask(task) {
if (task.implementation?.modification_points?.length) {
const files = task.implementation.modification_points.map(m => m.file);
// Find common directory prefix
const dirs = files.map(f => f.split('/').slice(0, -1).join('/'));
return [...new Set(dirs)][0] || '';
}
return '';
}
function capitalizeAction(type) {
if (!type) return 'Implement';
const map = { feature: 'Implement', bugfix: 'Fix', refactor: 'Refactor', test: 'Test', docs: 'Update' };
return map[type.toLowerCase()] || type.charAt(0).toUpperCase() + type.slice(1);
}
```
#### Extractor: Markdown (AI-Assisted via Gemini)
```javascript
async function extractFromMarkdownAI(filePath) {
const fileContent = Read(filePath);
// Use Gemini CLI for intelligent extraction
const cliPrompt = `PURPOSE: Extract implementation plan from markdown document for issue solution conversion. Must output ONLY valid JSON.
TASK: • Analyze document structure • Identify title/summary • Extract approach/strategy section • Parse tasks from any format (lists, tables, sections, code blocks) • Normalize each task to solution schema
MODE: analysis
CONTEXT: Document content provided below
EXPECTED: Valid JSON object with format:
{
"title": "extracted title",
"approach": "extracted approach/strategy",
"tasks": [
{
"id": "T1",
"title": "task title",
"scope": "module or feature area",
"action": "Implement|Update|Create|Fix|Refactor|Add|Delete|Configure|Test",
"description": "what to do",
"implementation": ["step 1", "step 2"],
"acceptance": ["criteria 1", "criteria 2"]
}
]
}
CONSTRAINTS: Output ONLY valid JSON - no markdown, no explanation | Action must be one of: Create, Update, Implement, Refactor, Add, Delete, Configure, Test, Fix | Tasks must have id, title, scope, action, implementation (array), acceptance (array)
DOCUMENT CONTENT:
${fileContent}`;
// Execute Gemini CLI
const result = Bash(`ccw cli -p '${cliPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis`, { timeout: 120000 });
// Parse JSON from result (may be wrapped in markdown code block)
let jsonText = result.trim();
const jsonMatch = jsonText.match(/```(?:json)?\s*([\s\S]*?)```/);
if (jsonMatch) {
jsonText = jsonMatch[1].trim();
}
try {
const extracted = JSON.parse(jsonText);
// Normalize tasks
const tasks = (extracted.tasks || []).map((t, i) => ({
id: t.id || `T${i + 1}`,
title: t.title || 'Untitled task',
scope: t.scope || '',
action: validateAction(t.action) || 'Implement',
description: t.description || t.title,
modification_points: t.modification_points || [],
implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
test: t.test || {},
acceptance: {
criteria: Array.isArray(t.acceptance) ? t.acceptance : [t.acceptance || ''],
verification: t.verification || []
},
depends_on: t.depends_on || [],
priority: t.priority || 3
}));
return {
title: extracted.title || 'Extracted Plan',
description: extracted.summary || extracted.title,
approach: extracted.approach || '',
tasks: tasks,
metadata: {
source_type: 'markdown',
source_path: filePath,
extraction_method: 'gemini-ai'
}
};
} catch (e) {
// Provide more context for debugging
throw new Error(`E005: Failed to extract tasks from markdown. Gemini response was not valid JSON. Error: ${e.message}. Response preview: ${jsonText.substring(0, 200)}...`);
}
}
function validateAction(action) {
const validActions = ['Create', 'Update', 'Implement', 'Refactor', 'Add', 'Delete', 'Configure', 'Test', 'Fix'];
if (!action) return null;
const normalized = action.charAt(0).toUpperCase() + action.slice(1).toLowerCase();
return validActions.includes(normalized) ? normalized : null;
}
```
#### Extractor: JSON File
```javascript
function extractFromJsonFile(filePath) {
const content = Read(filePath);
const plan = JSON.parse(content);
// Detect if it's already solution format or plan format
if (plan.tasks && Array.isArray(plan.tasks)) {
// Map tasks to normalized format
const tasks = plan.tasks.map((t, i) => ({
id: t.id || `T${i + 1}`,
title: t.title,
scope: t.scope || '',
action: t.action || 'Implement',
description: t.description || t.title,
modification_points: t.modification_points || [],
implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
test: t.test || t.verification || {},
acceptance: normalizeAcceptance(t.acceptance),
depends_on: t.depends_on || [],
priority: t.priority || 3
}));
return {
title: plan.summary?.split('.')[0] || plan.title || 'JSON Plan',
description: plan.summary || plan.description,
approach: plan.approach,
tasks: tasks,
metadata: {
source_type: 'json',
source_path: filePath,
complexity: plan.complexity,
original_metadata: plan._metadata
}
};
}
throw new Error('E002: JSON file does not contain valid plan structure (missing tasks array)');
}
function normalizeAcceptance(acceptance) {
if (!acceptance) return { criteria: [], verification: [] };
if (typeof acceptance === 'object' && acceptance.criteria) return acceptance;
if (Array.isArray(acceptance)) return { criteria: acceptance, verification: [] };
return { criteria: [String(acceptance)], verification: [] };
}
```
### Phase 3: Normalize Task IDs
```javascript
function normalizeTaskIds(tasks) {
return tasks.map((t, i) => ({
...t,
id: `T${i + 1}`,
// Also normalize depends_on references
depends_on: (t.depends_on || []).map(d => {
// Handle various ID formats: IMPL-001, T1, 1, etc.
const num = d.match(/\d+/)?.[0];
return num ? `T${parseInt(num)}` : d;
})
}));
}
```
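For instance, with mixed input IDs (hypothetical data):
```javascript
normalizeTaskIds([
  { id: 'IMPL-001', title: 'A', depends_on: [] },
  { id: 'step-2',   title: 'B', depends_on: ['IMPL-001'] }
]);
// → [{ id: 'T1', ... }, { id: 'T2', depends_on: ['T1'], ... }]
```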
### Phase 4: Resolve Issue (Create or Find)
```javascript
let issueId = flags.issue;
let existingSolution = null;
if (issueId) {
// Validate issue exists
let issueCheck;
try {
issueCheck = Bash(`ccw issue status ${issueId} --json 2>/dev/null`).trim();
if (!issueCheck || issueCheck === '') {
throw new Error('empty response');
}
} catch (e) {
throw new Error(`E003: Issue not found: ${issueId}`);
}
const issue = JSON.parse(issueCheck);
// Check if issue already has bound solution
if (issue.bound_solution_id && !flags.supplement) {
throw new Error(`E004: Issue ${issueId} already has bound solution (${issue.bound_solution_id}). Use --supplement to add tasks.`);
}
// Load existing solution for supplement mode
if (flags.supplement && issue.bound_solution_id) {
try {
const solResult = Bash(`ccw issue solution ${issue.bound_solution_id} --json`).trim();
existingSolution = JSON.parse(solResult);
console.log(`Loaded existing solution with ${existingSolution.tasks.length} tasks`);
} catch (e) {
throw new Error(`Failed to load existing solution: ${e.message}`);
}
}
} else {
// Create new issue via ccw issue create (auto-generates correct ID)
// Smart extraction: title from content, priority from complexity
const title = extracted.title || 'Converted Plan';
const context = extracted.description || extracted.approach || title;
// Auto-determine priority based on complexity
const complexityMap = { high: 2, medium: 3, low: 4 };
const priority = complexityMap[extracted.metadata.complexity?.toLowerCase()] || 3;
try {
// Use heredoc to avoid shell escaping issues
const createResult = Bash(`ccw issue create << 'EOF'
{
"title": ${JSON.stringify(title)},
"context": ${JSON.stringify(context)},
"priority": ${priority},
"source": "converted"
}
EOF`).trim();
// Parse result to get created issue ID
const created = JSON.parse(createResult);
issueId = created.id;
console.log(`Created issue: ${issueId} (priority: ${priority})`);
} catch (e) {
throw new Error(`Failed to create issue: ${e.message}`);
}
}
```
### Phase 5: Generate Solution
```javascript
// Generate solution ID
function generateSolutionId(issueId) {
const chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
let uid = '';
for (let i = 0; i < 4; i++) {
uid += chars[Math.floor(Math.random() * chars.length)];
}
return `SOL-${issueId}-${uid}`;
}
let solution;
const solutionId = generateSolutionId(issueId);
if (flags.supplement && existingSolution) {
// Supplement mode: merge with existing solution
const maxTaskId = Math.max(...existingSolution.tasks.map(t => parseInt(t.id.slice(1))));
const newTasks = extracted.tasks.map((t, i) => ({
...t,
id: `T${maxTaskId + i + 1}`
}));
solution = {
...existingSolution,
tasks: [...existingSolution.tasks, ...newTasks],
approach: (existingSolution.approach || '') + '\n\n[Supplementary] ' + (extracted.approach || ''),
updated_at: new Date().toISOString()
};
console.log(`Supplementing: ${existingSolution.tasks.length} existing + ${newTasks.length} new = ${solution.tasks.length} total tasks`);
} else {
// New solution
solution = {
id: solutionId,
description: extracted.description || extracted.title,
approach: extracted.approach,
tasks: extracted.tasks,
exploration_context: extracted.metadata.exploration_angles ? {
exploration_angles: extracted.metadata.exploration_angles
} : undefined,
analysis: {
risk: 'medium',
impact: 'medium',
complexity: extracted.metadata.complexity?.toLowerCase() || 'medium'
},
is_bound: false,
created_at: new Date().toISOString(),
_conversion_metadata: {
source_type: extracted.metadata.source_type,
source_path: extracted.metadata.source_path,
converted_at: new Date().toISOString()
}
};
}
```
### Phase 6: Confirm & Persist
```javascript
// Display preview
console.log(`
## Conversion Summary
**Issue**: ${issueId}
**Solution**: ${flags.supplement ? existingSolution.id : solutionId}
**Tasks**: ${solution.tasks.length}
**Mode**: ${flags.supplement ? 'Supplement' : 'New'}
### Tasks:
${solution.tasks.map(t => `- ${t.id}: ${t.title} [${t.action}]`).join('\n')}
`);
// Confirm if not auto mode
if (!flags.yes && !flags.y) {
const confirm = AskUserQuestion({
questions: [{
question: `Create solution for issue ${issueId} with ${solution.tasks.length} tasks?`,
header: 'Confirm',
multiSelect: false,
options: [
{ label: 'Yes, create solution', description: 'Create and bind solution' },
{ label: 'Cancel', description: 'Abort without changes' }
]
}]
});
if (!confirm.answers?.['Confirm']?.includes('Yes')) {
console.log('Cancelled.');
return;
}
}
// Persist solution (following issue-plan-agent pattern)
Bash(`mkdir -p .workflow/issues/solutions`);
const solutionFile = `.workflow/issues/solutions/${issueId}.jsonl`;
if (flags.supplement) {
// Supplement mode: update existing solution line atomically
try {
const existingContent = Read(solutionFile);
const lines = existingContent.trim().split('\n').filter(l => l);
const updatedLines = lines.map(line => {
const sol = JSON.parse(line);
if (sol.id === existingSolution.id) {
return JSON.stringify(solution);
}
return line;
});
// Atomic write: write entire content at once
Write({ file_path: solutionFile, content: updatedLines.join('\n') + '\n' });
console.log(`✓ Updated solution: ${existingSolution.id}`);
} catch (e) {
throw new Error(`Failed to update solution: ${e.message}`);
}
// Note: No need to rebind - solution is already bound to issue
} else {
// New solution: append to JSONL file (following issue-plan-agent pattern)
try {
const solutionLine = JSON.stringify(solution);
// Read existing content, append new line, write atomically
const existing = Bash(`test -f "${solutionFile}" && cat "${solutionFile}" || echo ""`).trim();
const newContent = existing ? existing + '\n' + solutionLine + '\n' : solutionLine + '\n';
Write({ file_path: solutionFile, content: newContent });
console.log(`✓ Created solution: ${solutionId}`);
} catch (e) {
throw new Error(`Failed to write solution: ${e.message}`);
}
// Bind solution to issue
try {
Bash(`ccw issue bind ${issueId} ${solutionId}`);
console.log(`✓ Bound solution to issue`);
} catch (e) {
// Cleanup: remove solution file on bind failure (removes the whole JSONL;
// safe when the file was just created for this issue)
try {
Bash(`rm -f "${solutionFile}"`);
} catch (cleanupError) {
// Ignore cleanup errors
}
throw new Error(`Failed to bind solution: ${e.message}`);
}
// Update issue status to planned
try {
Bash(`ccw issue update ${issueId} --status planned`);
} catch (e) {
throw new Error(`Failed to update issue status: ${e.message}`);
}
}
```
### Phase 7: Summary
```javascript
console.log(`
## Done
**Issue**: ${issueId}
**Solution**: ${flags.supplement ? existingSolution.id : solutionId}
**Tasks**: ${solution.tasks.length}
**Status**: planned
### Next Steps:
- \`/issue:queue\` → Form execution queue
- \`ccw issue status ${issueId}\` → View issue details
- \`ccw issue solution ${flags.supplement ? existingSolution.id : solutionId}\` → View solution
`);
```
## Error Handling
| Error | Code | Resolution |
|-------|------|------------|
| Source not found | E001 | Check path exists |
| Invalid source format | E002 | Verify file contains valid plan structure |
| Issue not found | E003 | Check issue ID or omit --issue to create new |
| Solution already bound | E004 | Use --supplement to add tasks |
| AI extraction failed | E005 | Check markdown structure, try simpler format |
| No tasks extracted | E006 | Source must contain at least 1 task |
## Related Commands
- `/issue:plan` - Generate solutions from issue exploration
- `/issue:queue` - Form execution queue from bound solutions
- `/issue:execute` - Execute queue with DAG parallelism
- `ccw issue status <id>` - View issue details
- `ccw issue solution <id>` - View solution details

View File

@@ -0,0 +1,382 @@
---
name: from-brainstorm
description: Convert brainstorm session ideas into issue with executable solution for parallel-dev-cycle
argument-hint: "SESSION=\"<session-id>\" [--idea=<index>] [--auto] [-y|--yes]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-select highest-scored idea, skip confirmations, create issue directly.
# Issue From-Brainstorm Command (/issue:from-brainstorm)
## Overview
Bridge command that converts **brainstorm-with-file** session output into executable **issue + solution** for parallel-dev-cycle consumption.
**Core workflow**: Load Session → Select Idea → Convert to Issue → Generate Solution → Bind & Ready
**Input sources**:
- **synthesis.json** - Main brainstorm results with top_ideas
- **perspectives.json** - Multi-CLI perspectives (creative/pragmatic/systematic)
- **.brainstorming/** - Synthesis artifacts (clarifications, enhancements from role analyses)
**Output**:
- **Issue** (ISS-YYYYMMDD-NNN) - Full context with clarifications
- **Solution** (SOL-{issue-id}-{uid}) - Structured tasks for parallel-dev-cycle
## Quick Reference
```bash
# Interactive mode - select idea, confirm before creation
/issue:from-brainstorm SESSION="BS-rate-limiting-2025-01-28"
# Pre-select idea by index
/issue:from-brainstorm SESSION="BS-auth-system-2025-01-28" --idea=0
# Auto mode - select highest scored, no confirmations
/issue:from-brainstorm SESSION="BS-caching-2025-01-28" --auto -y
```
## Arguments
| Argument | Required | Type | Default | Description |
|----------|----------|------|---------|-------------|
| SESSION | Yes | String | - | Session ID or path to `.workflow/.brainstorm/BS-xxx` |
| --idea | No | Integer | - | Pre-select idea by index (0-based) |
| --auto | No | Flag | false | Auto-select highest-scored idea |
| -y, --yes | No | Flag | false | Skip all confirmations |
## Data Structures
### Issue Schema (Output)
```typescript
interface Issue {
id: string; // ISS-YYYYMMDD-NNN
title: string; // From idea.title
status: 'planned'; // Auto-set after solution binding
priority: number; // 1-5 (derived from idea.score)
context: string; // Full description with clarifications
source: 'brainstorm';
labels: string[]; // ['brainstorm', perspective, feasibility]
// Structured fields
expected_behavior: string; // From key_strengths
actual_behavior: string; // From main_challenges
affected_components: string[]; // Extracted from description
_brainstorm_metadata: {
session_id: string;
idea_score: number;
novelty: number;
feasibility: string;
clarifications_count: number;
};
}
```
### Solution Schema (Output)
```typescript
interface Solution {
id: string; // SOL-{issue-id}-{4-char-uid}
description: string; // idea.title
approach: string; // idea.description
tasks: Task[]; // Generated from idea.next_steps
analysis: {
risk: 'low' | 'medium' | 'high';
impact: 'low' | 'medium' | 'high';
complexity: 'low' | 'medium' | 'high';
};
is_bound: boolean; // true
created_at: string;
bound_at: string;
}
interface Task {
id: string; // T1, T2, T3...
title: string; // Actionable task name
scope: string; // design|implementation|testing|documentation
action: string; // Implement|Design|Research|Test|Document
description: string;
implementation: string[]; // Step-by-step guide
acceptance: {
criteria: string[]; // What defines success
verification: string[]; // How to verify
};
priority: number; // 1-5
depends_on: string[]; // Task dependencies
}
```
## Execution Flow
```
Phase 1: Session Loading
├─ Validate session path
├─ Load synthesis.json (required)
├─ Load perspectives.json (optional - multi-CLI insights)
├─ Load .brainstorming/** (optional - synthesis artifacts)
└─ Validate top_ideas array exists
Phase 2: Idea Selection
├─ Auto mode: Select highest scored idea
├─ Pre-selected: Use --idea=N index
└─ Interactive: Display table, ask user to select
Phase 3: Enrich Issue Context
├─ Base: idea.description + key_strengths + main_challenges
├─ Add: Relevant clarifications (Requirements/Architecture/Feasibility)
├─ Add: Multi-perspective insights (creative/pragmatic/systematic)
└─ Add: Session metadata (session_id, completion date, clarification count)
Phase 4: Create Issue
├─ Generate issue data with enriched context
├─ Calculate priority from idea.score (0-10 → 1-5)
├─ Create via: ccw issue create (heredoc for JSON)
└─ Returns: ISS-YYYYMMDD-NNN
Phase 5: Generate Solution Tasks
├─ T1: Research & Validate (if main_challenges exist)
├─ T2: Design & Specification (if key_strengths exist)
├─ T3+: Implementation tasks (from idea.next_steps)
└─ Each task includes: implementation steps + acceptance criteria
Phase 6: Bind Solution
├─ Write solution to .workflow/issues/solutions/{issue-id}.jsonl
├─ Bind via: ccw issue bind {issue-id} {solution-id}
├─ Update issue status to 'planned'
└─ Returns: SOL-{issue-id}-{uid}
Phase 7: Next Steps
└─ Offer: Form queue | Convert another idea | View details | Done
```
## Context Enrichment Logic
### Base Context (Always Included)
- **Description**: `idea.description`
- **Why This Idea**: `idea.key_strengths[]`
- **Challenges to Address**: `idea.main_challenges[]`
- **Implementation Steps**: `idea.next_steps[]`
### Enhanced Context (If Available)
**From Synthesis Artifacts** (`.brainstorming/*/analysis*.md`):
- Extract clarifications matching categories: Requirements, Architecture, Feasibility
- Format: `**{Category}** ({role}): {question} → {answer}`
- Limit: Top 3 most relevant
**From Perspectives** (`perspectives.json`):
- **Creative**: First insight from `perspectives.creative.insights[0]`
- **Pragmatic**: First blocker from `perspectives.pragmatic.blockers[0]`
- **Systematic**: First pattern from `perspectives.systematic.patterns[0]`
**Session Metadata**:
- Session ID, Topic, Completion Date
- Clarifications count (if synthesis artifacts loaded)
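A sketch of how these pieces could be stitched into the issue context (field names follow the schemas above; the assembly itself is illustrative):
```javascript
// Illustrative context assembly from synthesis + perspectives data.
function buildIssueContext(idea, clarifications, perspectives, session) {
  const parts = [
    idea.description,
    `**Why This Idea**\n${idea.key_strengths.map(s => `- ${s}`).join('\n')}`,
    `**Challenges to Address**\n${idea.main_challenges.map(c => `- ${c}`).join('\n')}`
  ];
  for (const c of clarifications.slice(0, 3)) {
    parts.push(`**${c.category}** (${c.role}): ${c.question} → ${c.answer}`);
  }
  if (perspectives?.creative?.insights?.[0]) {
    parts.push(`Creative insight: ${perspectives.creative.insights[0]}`);
  }
  parts.push(`Session: ${session.id}`);
  return parts.join('\n\n');
}
```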
## Task Generation Strategy
### Task 1: Research & Validation
**Trigger**: `idea.main_challenges.length > 0`
- **Title**: "Research & Validate Approach"
- **Scope**: design
- **Action**: Research
- **Implementation**: Investigate blockers, review similar implementations, validate with team
- **Acceptance**: Blockers documented, feasibility assessed, approach validated
### Task 2: Design & Specification
**Trigger**: `idea.key_strengths.length > 0`
- **Title**: "Design & Create Specification"
- **Scope**: design
- **Action**: Design
- **Implementation**: Create design doc, define success criteria, plan phases
- **Acceptance**: Design complete, metrics defined, plan outlined
### Task 3+: Implementation Tasks
**Trigger**: `idea.next_steps[]`
- **Title**: From `next_steps[i]` (max 60 chars)
- **Scope**: Inferred from keywords (test→testing, api→backend, ui→frontend)
- **Action**: Detected from verbs (implement, create, update, fix, test, document)
- **Implementation**: Execute step + follow design + write tests
- **Acceptance**: Step implemented + tests passing + code reviewed
### Fallback Task
**Trigger**: No tasks generated from above
- **Title**: `idea.title`
- **Scope**: implementation
- **Action**: Implement
- **Generic implementation + acceptance criteria**
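Put together, task generation might look like this sketch (scope/action inference simplified; object shapes hypothetical):
```javascript
// Illustrative task generation following the strategy above.
function generateTasks(idea) {
  const tasks = [];
  if (idea.main_challenges.length > 0) {
    tasks.push({ id: `T${tasks.length + 1}`, title: 'Research & Validate Approach',
                 scope: 'design', action: 'Research', priority: 1 });
  }
  if (idea.key_strengths.length > 0) {
    tasks.push({ id: `T${tasks.length + 1}`, title: 'Design & Create Specification',
                 scope: 'design', action: 'Design', priority: 2 });
  }
  for (const step of idea.next_steps) {
    tasks.push({ id: `T${tasks.length + 1}`, title: step.slice(0, 60),
                 scope: 'implementation', action: 'Implement', priority: 3 });
  }
  if (tasks.length === 0) {
    tasks.push({ id: 'T1', title: idea.title, scope: 'implementation',
                 action: 'Implement', priority: 3 });
  }
  return tasks;
}
```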
## Priority Calculation
### Issue Priority (1-5)
```
idea.score: 0-10
priority = max(1, min(5, ceil((11 - score) / 2)))
Examples:
score 9-10 → priority 1 (critical)
score 7-8 → priority 2 (high)
score 5-6 → priority 3 (medium)
score 3-4 → priority 4 (low)
score 0-2 → priority 5 (lowest)
```
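As a one-line sketch in JavaScript:
```javascript
// score 0-10 → priority 1-5 (1 = critical), matching the examples above
const issuePriority = score => Math.max(1, Math.min(5, Math.ceil((11 - score) / 2)));
// issuePriority(9) === 1, issuePriority(7) === 2, issuePriority(5) === 3
```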
### Task Priority (1-5)
- Research task: 1 (highest)
- Design task: 2
- Implementation tasks: 3 by default, decrement for later tasks
- Testing/documentation: 4-5
### Complexity Analysis
```
risk: main_challenges.length > 2 ? 'high' : 'medium'
impact: score >= 8 ? 'high' : score >= 6 ? 'medium' : 'low'
complexity: main_challenges > 3 OR tasks > 5 ? 'high'
          : tasks > 3 ? 'medium' : 'low'
```
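Equivalently, as a sketch:
```javascript
// Illustrative derivation of the solution's analysis block.
function analyzeIdea(idea, tasks) {
  const challenges = idea.main_challenges.length;
  return {
    risk: challenges > 2 ? 'high' : 'medium',
    impact: idea.score >= 8 ? 'high' : idea.score >= 6 ? 'medium' : 'low',
    complexity: challenges > 3 || tasks.length > 5 ? 'high'
              : tasks.length > 3 ? 'medium' : 'low'
  };
}
```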
## CLI Integration
### Issue Creation
```bash
# Uses heredoc to avoid shell escaping
ccw issue create << 'EOF'
{
"title": "...",
"context": "...",
"priority": 3,
"source": "brainstorm",
"labels": ["brainstorm", "creative", "feasibility-high"],
...
}
EOF
```
### Solution Binding
```bash
# Append solution to JSONL file
echo '{"id":"SOL-xxx","tasks":[...]}' >> .workflow/issues/solutions/{issue-id}.jsonl
# Bind to issue
ccw issue bind {issue-id} {solution-id}
# Update status
ccw issue update {issue-id} --status planned
```
## Error Handling
| Error | Message | Resolution |
|-------|---------|------------|
| Session not found | synthesis.json missing | Check session ID, list available sessions |
| No ideas | top_ideas array empty | Complete brainstorm workflow first |
| Invalid idea index | Index out of range | Check valid range 0 to N-1 |
| Issue creation failed | ccw issue create error | Verify CLI endpoint working |
| Solution binding failed | Bind error | Check issue exists, retry |
## Examples
### Interactive Mode
```bash
/issue:from-brainstorm SESSION="BS-rate-limiting-2025-01-28"
# Output:
# | # | Title | Score | Feasibility |
# |---|-------|-------|-------------|
# | 0 | Token Bucket Algorithm | 8.5 | High |
# | 1 | Sliding Window Counter | 7.2 | Medium |
# | 2 | Fixed Window | 6.1 | High |
# User selects: #0
# Result:
# ✓ Created issue: ISS-20250128-001
# ✓ Created solution: SOL-ISS-20250128-001-ab3d
# ✓ Bound solution to issue
# → Next: /issue:queue
```
### Auto Mode
```bash
/issue:from-brainstorm SESSION="BS-caching-2025-01-28" --auto
# Result:
# Auto-selected: Redis Cache Layer (Score: 9.2/10)
# ✓ Created issue: ISS-20250128-002
# ✓ Solution with 4 tasks
# → Status: planned
```
## Integration Flow
```
brainstorm-with-file
├─ synthesis.json
├─ perspectives.json
└─ .brainstorming/** (optional)
/issue:from-brainstorm ◄─── This command
├─ ISS-YYYYMMDD-NNN (enriched issue)
└─ SOL-{issue-id}-{uid} (structured solution)
/issue:queue
/parallel-dev-cycle
RA → EP → CD → VAS
```
## Session Files Reference
### Input Files
```
.workflow/.brainstorm/BS-{slug}-{date}/
├── synthesis.json # REQUIRED - Top ideas with scores
├── perspectives.json # OPTIONAL - Multi-CLI insights
├── brainstorm.md # Reference only
└── .brainstorming/ # OPTIONAL - Synthesis artifacts
├── system-architect/
│ └── analysis.md # Contains clarifications + enhancements
├── api-designer/
│ └── analysis.md
└── ...
```
### Output Files
```
.workflow/issues/
├── solutions/
│ └── ISS-YYYYMMDD-001.jsonl # Created solution (JSONL)
└── (managed by ccw issue CLI)
```
## Related Commands
- `/workflow:brainstorm-with-file` - Generate brainstorm sessions
- `/workflow:brainstorm:synthesis` - Add clarifications to brainstorm
- `/issue:new` - Create issues from GitHub or text
- `/issue:plan` - Generate solutions via exploration
- `/issue:queue` - Form execution queue
- `/issue:execute` - Execute with parallel-dev-cycle
- `ccw issue status <id>` - View issue
- `ccw issue solution <id>` - View solution

View File

@@ -413,5 +413,4 @@ function parseMarkdownBody(body) {
## Related Commands
- `/issue:plan` - Plan solution for issue
- `/issue:plan` - Plan solution for issue

View File

@@ -1,687 +0,0 @@
---
name: code-map-memory
description: "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)"
argument-hint: "\"feature-keyword\" [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Code Flow Mapping Generator
## Overview
**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates code flow analysis to specialized cli-explore-agent. Orchestrator transforms agent's JSON analysis into Mermaid documentation.
**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase.
**Execution Paths**:
- **Full Path**: All 3 phases (no existing codemap OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 3 (existing codemap found AND no `--regenerate` flag)
- **Phase 3 Always Executes**: SKILL index is always generated or updated
**Agent Responsibility** (cli-explore-agent):
- Deep code flow analysis using dual-source strategy (Bash + Gemini CLI)
- Returns structured JSON with architecture, functions, data flow, conditionals, patterns
- NO file writing - analysis only
**Orchestrator Responsibility**:
- Provides feature keyword and analysis scope to agent
- Transforms agent's JSON into Mermaid-enriched markdown documentation
- Writes all files (5 docs + metadata.json + SKILL.md)
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **Feature-Specific SKILL**: Each feature creates independent `.claude/skills/codemap-{feature}/` package
3. **Specialized Agent**: Phase 2a uses cli-explore-agent for professional code analysis (Deep Scan mode)
4. **Orchestrator Documentation**: Phase 2b transforms agent JSON into Mermaid markdown files
5. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
6. **No User Prompts**: Never ask user questions or wait for input between phases
7. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
8. **Multi-Level Detail**: Generate 4 levels: architecture → function → data → conditional
---
## 3-Phase Execution
### Phase 1: Parse Feature Keyword & Check Existing
**Goal**: Normalize feature keyword, check existing codemap, prepare for analysis
**Step 1: Parse Feature Keyword**
```bash
# Get feature keyword from argument
FEATURE_KEYWORD="$1"
# Normalize: lowercase, spaces to hyphens
normalized_feature=$(echo "$FEATURE_KEYWORD" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | tr '_' '-')
# Example: "User Authentication" → "user-authentication"
# Example: "支付处理" → "支付处理" (keep non-ASCII)
```
**Step 2: Set Tool Preference**
```bash
# Default to gemini unless --tool specified
TOOL="${tool_flag:-gemini}"
```
**Step 3: Check Existing Codemap**
```bash
# Define codemap directory
CODEMAP_DIR=".claude/skills/codemap-${normalized_feature}"
# Check if codemap exists
bash(test -d "$CODEMAP_DIR" && echo "exists" || echo "not_exists")
# Count existing files
bash(find "$CODEMAP_DIR" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Step 4: Skip Decision**
```javascript
if (existing_files > 0 && !regenerate_flag) {
SKIP_GENERATION = true
message = "Codemap already exists, skipping Phase 2. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
bash(rm -rf "$CODEMAP_DIR")
SKIP_GENERATION = false
message = "Regenerating codemap from scratch."
} else {
SKIP_GENERATION = false
message = "No existing codemap found, generating new code flow analysis."
}
```
**Output Variables**:
- `FEATURE_KEYWORD`: Original feature keyword
- `normalized_feature`: Normalized feature name for directory
- `CODEMAP_DIR`: `.claude/skills/codemap-{feature}`
- `TOOL`: CLI tool to use (gemini or qwen)
- `SKIP_GENERATION`: Boolean - whether to skip Phase 2
**TodoWrite**:
- If skipping: Mark phase 1 completed, phase 2 completed, phase 3 in_progress
- If not skipping: Mark phase 1 completed, phase 2 in_progress
---
### Phase 2: Code Flow Analysis & Documentation Generation
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Use cli-explore-agent for professional code analysis, then orchestrator generates Mermaid documentation
**Architecture**: Phase 2a (Agent Analysis) → Phase 2b (Orchestrator Documentation)
---
#### Phase 2a: cli-explore-agent Analysis
**Purpose**: Leverage specialized cli-explore-agent for deep code flow analysis
**Agent Task Specification**:
```
Task(
subagent_type: "cli-explore-agent",
description: "Analyze code flow: {FEATURE_KEYWORD}",
prompt: "
Perform Deep Scan analysis for feature: {FEATURE_KEYWORD}
**Analysis Mode**: deep-scan (Dual-source: Bash structural scan + Gemini semantic analysis)
**Analysis Objectives**:
1. **Module Architecture**: Identify high-level module organization, interactions, and entry points
2. **Function Call Chains**: Trace execution paths, call sequences, and parameter flows
3. **Data Transformations**: Map data structure changes and transformation stages
4. **Conditional Paths**: Document decision trees, branches, and error handling strategies
5. **Design Patterns**: Discover architectural patterns and extract design intent
**Scope**:
- Feature: {FEATURE_KEYWORD}
- CLI Tool: {TOOL} (gemini-2.5-pro or qwen coder-model)
- File Discovery: MCP Code Index (preferred) + rg fallback
- Target: 5-15 most relevant files
**MANDATORY FIRST STEP**:
Read: ~/.claude/workflows/cli-templates/schemas/codemap-json-schema.json
**Output**: Return JSON following schema exactly. NO FILE WRITING - return JSON analysis only.
**Critical Requirements**:
- Use Deep Scan mode: Bash (Phase 1 - precise locations) + Gemini CLI (Phase 2 - semantic understanding) + Synthesis (Phase 3 - merge with attribution)
- Focus exclusively on {FEATURE_KEYWORD} feature flow
- Include file:line references for ALL findings
- Extract design intent from code structure and comments
- NO FILE WRITING - return JSON analysis only
- Handle tool failures gracefully (Gemini → Qwen fallback, MCP → rg fallback)
"
)
```
**Agent Output**: JSON analysis result with architecture, functions, data flow, conditionals, and patterns
---
#### Phase 2b: Orchestrator Documentation Generation
**Purpose**: Transform cli-explore-agent JSON into Mermaid-enriched documentation
**Input**: Agent's JSON analysis result
**Process**:
1. **Parse Agent Analysis**:
```javascript
const analysis = JSON.parse(agentResult)
const { feature, files_analyzed, architecture, function_calls, data_flow, conditional_logic, design_patterns } = analysis
```
2. **Generate Mermaid Diagrams from Structured Data**:
**a) architecture-flow.md** (~3K tokens):
```javascript
// Convert architecture.modules + architecture.interactions → Mermaid graph TD
const architectureMermaid = `
graph TD
${architecture.modules.map(m => ` ${m.name}[${m.name}]`).join('\n')}
${architecture.interactions.map(i => ` ${i.from} -->|${i.type}| ${i.to}`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/architecture-flow.md`,
content: `---
feature: ${feature}
level: architecture
detail: high-level module interactions
---
# Architecture Flow: ${feature}
## Overview
${architecture.overview}
## Module Architecture
${architecture.modules.map(m => `### ${m.name}\n- **File**: ${m.file}\n- **Role**: ${m.responsibility}\n- **Dependencies**: ${m.dependencies.join(', ')}`).join('\n\n')}
## Flow Diagram
\`\`\`mermaid
${architectureMermaid}
\`\`\`
## Key Interactions
${architecture.interactions.map(i => `- **${i.from} → ${i.to}**: ${i.description}`).join('\n')}
## Entry Points
${architecture.entry_points.map(e => `- **${e.function}** (${e.file}): ${e.description}`).join('\n')}
`
})
```
**b) function-calls.md** (~5K tokens):
```javascript
// Convert function_calls.sequences → Mermaid sequenceDiagram
const sequenceMermaid = `
sequenceDiagram
${function_calls.sequences.map(s => ` ${s.from}->>${s.to}: ${s.method}`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/function-calls.md`,
content: `---
feature: ${feature}
level: function
detail: function-level call sequences
---
# Function Call Chains: ${feature}
## Call Sequence Diagram
\`\`\`mermaid
${sequenceMermaid}
\`\`\`
## Detailed Call Chains
${function_calls.call_chains.map(chain => `
### Chain ${chain.chain_id}: ${chain.description}
${chain.sequence.map(fn => `- **${fn.function}** (${fn.file})\n - Calls: ${fn.calls.join(', ')}`).join('\n')}
`).join('\n')}
## Parameters & Returns
${function_calls.sequences.map(s => `- **${s.method}** → Returns: ${s.returns || 'void'}`).join('\n')}
`
})
```
**c) data-flow.md** (~4K tokens):
```javascript
// Convert data_flow.transformations → Mermaid flowchart LR
const dataFlowMermaid = `
flowchart LR
${data_flow.transformations.map((t, i) => ` Stage${i}[${t.from}] -->|${t.transformer}| Stage${i+1}[${t.to}]`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/data-flow.md`,
content: `---
feature: ${feature}
level: data
detail: data structure transformations
---
# Data Flow: ${feature}
## Data Transformation Diagram
\`\`\`mermaid
${dataFlowMermaid}
\`\`\`
## Data Structures
${data_flow.structures.map(s => `### ${s.name} (${s.stage})\n\`\`\`json\n${JSON.stringify(s.shape, null, 2)}\n\`\`\``).join('\n\n')}
## Transformations
${data_flow.transformations.map(t => `- **${t.from} → ${t.to}** via \`${t.transformer}\` (${t.file})`).join('\n')}
`
})
```
**d) conditional-paths.md** (~4K tokens):
```javascript
// Convert conditional_logic.branches → Mermaid flowchart TD
const conditionalMermaid = `
flowchart TD
Start[Entry Point]
${conditional_logic.branches.map((b, i) => `
Start --> Check${i}{${b.condition}}
Check${i} -->|Yes| Path${i}A[${b.true_path}]
Check${i} -->|No| Path${i}B[${b.false_path}]
`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/conditional-paths.md`,
content: `---
feature: ${feature}
level: conditional
detail: decision trees and error paths
---
# Conditional Paths: ${feature}
## Decision Tree
\`\`\`mermaid
${conditionalMermaid}
\`\`\`
## Branch Conditions
${conditional_logic.branches.map(b => `- **${b.condition}** (${b.file})\n - True: ${b.true_path}\n - False: ${b.false_path}`).join('\n')}
## Error Handling
${conditional_logic.error_handling.map(e => `- **${e.error_type}**: Handler \`${e.handler}\` (${e.file}) - Recovery: ${e.recovery}`).join('\n')}
`
})
```
**e) complete-flow.md** (~8K tokens):
```javascript
// Integrate all Mermaid diagrams
Write({
file_path: `${CODEMAP_DIR}/complete-flow.md`,
content: `---
feature: ${feature}
level: complete
detail: integrated multi-level view
---
# Complete Flow: ${feature}
## Integrated Flow Diagram
\`\`\`mermaid
graph TB
subgraph Architecture
${architecture.modules.map(m => ` ${m.name}[${m.name}]`).join('\n')}
end
subgraph "Function Calls"
${function_calls.call_chains[0]?.sequence.map(fn => ` ${fn.function}`).join('\n') || ''}
end
subgraph "Data Flow"
${data_flow.structures.map(s => ` ${s.name}[${s.name}]`).join('\n')}
end
\`\`\`
## Complete Trace
[Comprehensive end-to-end documentation combining all analysis layers]
## Design Patterns Identified
${design_patterns.map(p => `- **${p.pattern}** in ${p.location}: ${p.description}`).join('\n')}
## Recommendations
${analysis.recommendations.map(r => `- ${r}`).join('\n')}
## Cross-References
- [Architecture Flow](./architecture-flow.md) - High-level module structure
- [Function Calls](./function-calls.md) - Detailed call chains
- [Data Flow](./data-flow.md) - Data transformation stages
- [Conditional Paths](./conditional-paths.md) - Decision trees and error handling
`
})
```
3. **Write metadata.json**:
```javascript
Write({
file_path: `${CODEMAP_DIR}/metadata.json`,
content: JSON.stringify({
feature: feature,
normalized_name: normalized_feature,
generated_at: new Date().toISOString(),
tool_used: analysis.analysis_metadata.tool_used,
files_analyzed: files_analyzed.map(f => f.file),
analysis_summary: {
total_files: files_analyzed.length,
modules_traced: architecture.modules.length,
functions_traced: function_calls.call_chains.reduce((sum, c) => sum + c.sequence.length, 0),
patterns_discovered: design_patterns.length
}
}, null, 2)
})
```
4. **Report Phase 2 Completion**:
```
Phase 2 Complete: Code flow analysis and documentation generated
- Agent Analysis: cli-explore-agent with {TOOL}
- Files Analyzed: {count}
- Documentation Generated: 5 markdown files + metadata.json
- Location: {CODEMAP_DIR}
```
**Completion Criteria**:
- cli-explore-agent task completed successfully with JSON result
- 5 documentation files written with valid Mermaid diagrams
- metadata.json written with analysis summary
- All files properly formatted and cross-referenced
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
---
### Phase 3: Generate SKILL.md Index
**Note**: This phase **ALWAYS executes** - generates or updates the SKILL index.
**Goal**: Read generated flow documentation and create SKILL.md index with progressive loading
**Steps**:
1. **Verify Generated Files**:
```bash
bash(find "{CODEMAP_DIR}" -name "*.md" -type f | sort)
```
2. **Read metadata.json**:
```javascript
Read({CODEMAP_DIR}/metadata.json)
// Extract: feature, normalized_name, files_analyzed, analysis_summary
```
3. **Read File Headers** (optional, first 30 lines):
```javascript
Read({CODEMAP_DIR}/architecture-flow.md, limit: 30)
Read({CODEMAP_DIR}/function-calls.md, limit: 30)
// Extract overview and diagram counts
```
4. **Generate SKILL.md Index**:
Template structure:
```yaml
---
name: codemap-{normalized_feature}
description: Code flow mapping for {FEATURE_KEYWORD} feature (located at {project_path}). Load this SKILL when analyzing, tracing, or understanding {FEATURE_KEYWORD} execution flow, especially when no relevant context exists in memory.
version: 1.0.0
generated_at: {ISO_TIMESTAMP}
---
# Code Flow Map: {FEATURE_KEYWORD}
## Feature: `{FEATURE_KEYWORD}`
**Analysis Date**: {DATE}
**Tool Used**: {TOOL}
**Files Analyzed**: {COUNT}
## Progressive Loading
### Level 0: Quick Overview (~2K tokens)
- [Architecture Flow](./architecture-flow.md) - High-level module interactions
### Level 1: Core Flows (~10K tokens)
- [Architecture Flow](./architecture-flow.md) - Module architecture
- [Function Calls](./function-calls.md) - Function call chains
### Level 2: Complete Analysis (~20K tokens)
- [Architecture Flow](./architecture-flow.md)
- [Function Calls](./function-calls.md)
- [Data Flow](./data-flow.md) - Data transformations
### Level 3: Deep Dive (~30K tokens)
- [Architecture Flow](./architecture-flow.md)
- [Function Calls](./function-calls.md)
- [Data Flow](./data-flow.md)
- [Conditional Paths](./conditional-paths.md) - Branches and error handling
- [Complete Flow](./complete-flow.md) - Integrated comprehensive view
## Usage
Load this SKILL package when:
- Analyzing {FEATURE_KEYWORD} implementation
- Tracing execution flow for debugging
- Understanding code dependencies
- Planning refactoring or enhancements
## Analysis Summary
- **Modules Traced**: {modules_traced}
- **Functions Traced**: {functions_traced}
- **Files Analyzed**: {total_files}
## Mermaid Diagrams Included
- Architecture flow diagram (graph TD)
- Function call sequence diagram (sequenceDiagram)
- Data transformation flowchart (flowchart LR)
- Conditional decision tree (flowchart TD)
- Complete integrated diagram (graph TB)
```
5. **Write SKILL.md**:
```javascript
Write({
file_path: `{CODEMAP_DIR}/SKILL.md`,
content: generatedIndexMarkdown
})
```
**Completion Criteria**:
- SKILL.md index written
- All documentation files verified
- Progressive loading levels (0-3) properly structured
- Mermaid diagram references included
**TodoWrite**: Mark phase 3 completed
**Final Report**:
```
Code Flow Mapping Complete
Feature: {FEATURE_KEYWORD}
Location: .claude/skills/codemap-{normalized_feature}/
Files Generated:
- SKILL.md (index)
- architecture-flow.md (with Mermaid diagram)
- function-calls.md (with Mermaid sequence diagram)
- data-flow.md (with Mermaid flowchart)
- conditional-paths.md (with Mermaid decision tree)
- complete-flow.md (with integrated Mermaid diagram)
- metadata.json
Analysis:
- Files analyzed: {count}
- Modules traced: {count}
- Functions traced: {count}
Usage: Skill(command: "codemap-{normalized_feature}")
```
---
## Implementation Details
### TodoWrite Patterns
**Initialization** (Before Phase 1):
```javascript
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "in_progress", "activeForm": "Parsing feature keyword"},
{"content": "Agent analyzes code flow and generates files", "status": "pending", "activeForm": "Analyzing code flow"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
]})
```
**Full Path** (SKIP_GENERATION = false):
```javascript
// After Phase 1
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "in_progress", ...},
{"content": "Generate SKILL.md index", "status": "pending", ...}
]})
// After Phase 2
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
// After Phase 3
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "completed", ...}
]})
```
**Skip Path** (SKIP_GENERATION = true):
```javascript
// After Phase 1 (skip Phase 2)
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...}, // Skipped
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
```
### Execution Flow
**Full Path**:
```
User → TodoWrite Init → Phase 1 (parse) → Phase 2 (agent analyzes) → Phase 3 (write index) → Report
```
**Skip Path**:
```
User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
```
### Error Handling
**Phase 1 Errors**:
- Empty feature keyword: Report error, ask user to provide feature description
- Invalid characters: Normalize and continue
**Phase 2 Errors (Agent)**:
- Agent task fails: Retry once, report if fails again
- No files discovered: Warn user, ask for more specific feature keyword
- CLI failures: Agent handles internally with retries
- Invalid Mermaid syntax: Agent validates before writing
**Phase 3 Errors**:
- Write failures: Report which files failed
- Missing files: Note in SKILL.md, suggest regeneration
---
## Parameters
```bash
/memory:code-map-memory "feature-keyword" [--regenerate] [--tool <gemini|qwen>]
```
**Arguments**:
- **"feature-keyword"**: Feature or flow to analyze (required)
- Examples: `"user authentication"`, `"payment processing"`, `"数据导入流程"`
- Can be English, Chinese, or mixed
- Spaces and underscores normalized to hyphens
- **--regenerate**: Force regenerate existing codemap (deletes and recreates)
- **--tool**: CLI tool for analysis (default: gemini)
- `gemini`: Comprehensive flow analysis with gemini-2.5-pro
- `qwen`: Alternative with coder-model
---
## Examples
**Generated File Structure** (for all examples):
```
.claude/skills/codemap-{feature}/
├── SKILL.md # Index (Phase 3)
├── architecture-flow.md # Agent (Phase 2) - High-level flow
├── function-calls.md # Agent (Phase 2) - Function chains
├── data-flow.md # Agent (Phase 2) - Data transformations
├── conditional-paths.md # Agent (Phase 2) - Branches & errors
├── complete-flow.md # Agent (Phase 2) - Integrated view
└── metadata.json # Agent (Phase 2)
```
### Example 1: User Authentication Flow
```bash
/memory:code-map-memory "user authentication"
```
**Workflow**:
1. Phase 1: Normalizes to "user-authentication", checks existing codemap
2. Phase 2: Agent discovers auth-related files, executes CLI analysis, generates 5 flow docs with Mermaid
3. Phase 3: Generates SKILL.md index with progressive loading
**Output**: `.claude/skills/codemap-user-authentication/` with 6 files + metadata
### Example 2: Regenerate with Qwen
```bash
/memory:code-map-memory "payment processing" --regenerate --tool qwen
```
**Workflow**:
1. Phase 1: Deletes existing codemap due to --regenerate
2. Phase 2: Agent uses qwen with coder-model for fresh analysis
3. Phase 3: Generates updated SKILL.md
---
## Architecture
```
code-map-memory (orchestrator)
├─ Phase 1: Parse & Check (bash commands, skip decision)
├─ Phase 2: Code Analysis & Documentation (skippable)
│ ├─ Phase 2a: cli-explore-agent Analysis
│ │ └─ Deep Scan: Bash structural + Gemini semantic → JSON
│ └─ Phase 2b: Orchestrator Documentation
│ └─ Transform JSON → 5 Mermaid markdown files + metadata.json
└─ Phase 3: Write SKILL.md (index generation, always runs)
Output: .claude/skills/codemap-{feature}/
```

View File

@@ -1,615 +0,0 @@
---
name: docs
description: Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs
argument-hint: "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]"
---
# Documentation Workflow (/memory:docs)
## Overview
Lightweight planner that analyzes project structure, decomposes documentation work into tasks, and generates execution plans. Does NOT generate documentation content itself - delegates to doc-generator agent.
**Execution Strategy**:
- **Dynamic Task Grouping**: Level 1 tasks grouped by top-level directories with a document-count limit (see the sketch after this list)
- **Primary constraint**: Each task generates ≤10 documents (API.md + README.md count)
- **Optimization goal**: Prefer grouping 2 top-level directories per task for context sharing
- **Conflict resolution**: If 2 dirs exceed 10 docs, reduce to 1 dir/task; if 1 dir exceeds 10 docs, split by subdirectories
- **Context benefit**: Same-task directories analyzed together via single Gemini call
- **Parallel Execution**: Multiple Level 1 tasks execute concurrently for faster completion
- **Pre-computed Analysis**: Phase 2 performs unified analysis once, stored in `.process/` for reuse
- **Efficient Data Loading**: All existing docs loaded once in Phase 2, shared across tasks
**Path Mirroring**: Documentation structure mirrors source code under `.workflow/docs/{project_name}/`
- Example: `my_app/src/core/` → `.workflow/docs/my_app/src/core/API.md`
**Two Execution Modes**:
- **Default (Agent Mode)**: CLI analyzes in `pre_analysis` (MODE=analysis), agent writes docs
- **--cli-execute (CLI Mode)**: CLI generates docs in `implementation_approach` (MODE=write), agent executes CLI commands
## Path Mirroring Strategy
**Principle**: Documentation structure **mirrors** source code structure under project-specific directory.
| Source Path | Project Name | Documentation Path |
|------------|--------------|-------------------|
| `my_app/src/core/` | `my_app` | `.workflow/docs/my_app/src/core/API.md` |
| `my_app/src/modules/auth/` | `my_app` | `.workflow/docs/my_app/src/modules/auth/API.md` |
| `another_project/lib/utils/` | `another_project` | `.workflow/docs/another_project/lib/utils/API.md` |
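The mirroring itself is a pure path rewrite. A minimal sketch (the helper name is illustrative; the orchestrator derives the real values in Phase 1):
```javascript
// Sketch: map a source directory to its mirrored API.md path.
// Per the table above, the source path's first segment is the project name.
function mirrorDocPath(sourcePath, docsRoot = '.workflow/docs') {
  const relative = sourcePath
    .replace(/\\/g, '/')   // tolerate Windows separators
    .replace(/^\.\//, '')  // strip leading "./"
    .replace(/\/$/, '');   // strip trailing "/"
  return `${docsRoot}/${relative}/API.md`;
}

// mirrorDocPath('my_app/src/core/') => '.workflow/docs/my_app/src/core/API.md'
```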
## Parameters
```bash
/memory:docs [path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]
```
- **path**: Source directory to analyze (default: current directory)
- Specifies the source code directory to be documented
- Documentation is generated in a separate `.workflow/docs/{project_name}/` directory at the workspace root, **not** within the source `path` itself
- The source path's structure is mirrored within the project-specific documentation folder
- Example: analyzing `src/modules` produces documentation at `.workflow/docs/{project_name}/src/modules/`
- **--mode**: Documentation generation mode (default: full)
- `full`: Complete documentation (modules + README + ARCHITECTURE + EXAMPLES + HTTP API)
- `partial`: Module documentation only (API.md + README.md)
- **--tool**: CLI tool selection (default: gemini)
- `gemini`: Comprehensive documentation, pattern recognition
- `qwen`: Architecture analysis, system design focus
- `codex`: Implementation validation, code quality
- **--cli-execute**: Enable CLI-based documentation generation (optional)
## Planning Workflow
### Phase 1: Initialize Session
```bash
# Get target path, project name, and root
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
```
```javascript
// Create docs session (type: docs)
SlashCommand(command="/workflow:session:start --type docs --new \"{project_name}-docs-{timestamp}\"")
// Parse output to get sessionId
```
```bash
# Update workflow-session.json with docs-specific fields
bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' .workflow/active/{sessionId}/workflow-session.json > tmp.json && mv tmp.json .workflow/active/{sessionId}/workflow-session.json)
```
### Phase 2: Analyze Structure
**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack.
**Commands** (collect data with simple bash):
```bash
# 1. Run folder analysis
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}')
# 2. Get top-level directories (first 2 path levels)
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}' | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
# 3. Find existing docs (if directory exists)
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null; fi)
# 4. Read existing docs content (if files exist)
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
```
**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:
```json
{
"metadata": {
"generated_at": "2025-11-03T16:57:30.469669",
"project_name": "project_name",
"project_root": "/path/to/project"
},
"folder_analysis": [
{"path": "./src/core", "type": "code", "code_count": 5, "dirs_count": 2}
],
"top_level_dirs": ["src/modules", "lib/core"],
"existing_docs": {
"file_list": [".workflow/docs/project/src/core/API.md"],
"content": "... existing docs content ..."
},
"unified_analysis": [],
"statistics": {
"total": 15,
"code": 8,
"navigation": 7,
"top_level": 3
}
}
```
**Then** use the **Edit tool** to update `workflow-session.json`, adding the `analysis` field.
**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).
**Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.
### Phase 3: Detect Update Mode
**Commands**:
```bash
# Count existing docs from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
```
**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
- Add `"update_mode": "update"` if count > 0, else `"create"`
- Add `"existing_docs": <count>`
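A minimal sketch of this decision as a Node script (the session directory name is illustrative):
```javascript
// Sketch: derive update_mode from the Phase 2 planning data.
const fs = require('fs');

const sessionDir = '.workflow/active/WFS-docs-20250101-120000'; // hypothetical session
const planning = JSON.parse(
  fs.readFileSync(`${sessionDir}/.process/doc-planning-data.json`, 'utf8')
);
const count = planning.existing_docs.file_list.length;

const sessionFile = `${sessionDir}/workflow-session.json`;
const session = JSON.parse(fs.readFileSync(sessionFile, 'utf8'));
Object.assign(session, {
  update_mode: count > 0 ? 'update' : 'create',
  existing_docs: count,
});
fs.writeFileSync(sessionFile, JSON.stringify(session, null, 2));
```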
### Phase 4: Decompose Tasks
**Task Hierarchy** (Dynamic based on document count):
```
Small Projects (total ≤10 docs):
Level 1: IMPL-001 (all directories in single task, shared context)
Level 2: IMPL-002 (README, full mode only)
Level 3: IMPL-003 (ARCHITECTURE+EXAMPLES), IMPL-004 (HTTP API, optional)
Medium Projects (Example: 7 top-level dirs, 18 total docs):
Step 1: Count docs per top-level dir
├─ dir1: 3 docs, dir2: 4 docs → Group 1 (7 docs)
├─ dir3: 5 docs, dir4: 3 docs → Group 2 (8 docs)
├─ dir5: 2 docs → Group 3 (2 docs, can add more)
Step 2: Create tasks with ≤10 docs constraint
Level 1: IMPL-001 to IMPL-003 (parallel groups)
├─ IMPL-001: Group 1 (dir1 + dir2, 7 docs, shared context)
├─ IMPL-002: Group 2 (dir3 + dir4, 8 docs, shared context)
└─ IMPL-003: Group 3 (remaining dirs, ≤10 docs)
Level 2: IMPL-004 (README, depends on Level 1, full mode only)
Level 3: IMPL-005 (ARCHITECTURE+EXAMPLES), IMPL-006 (HTTP API, optional)
Large Projects (single dir >10 docs):
Step 1: Detect oversized directory
└─ src/modules/: 15 subdirs → 30 docs (exceeds limit)
Step 2: Split by subdirectories
Level 1: IMPL-001 to IMPL-003 (split oversized dir)
├─ IMPL-001: src/modules/ subdirs 1-5 (10 docs)
├─ IMPL-002: src/modules/ subdirs 6-10 (10 docs)
└─ IMPL-003: src/modules/ subdirs 11-15 (10 docs)
```
**Grouping Algorithm**:
1. Count total docs for each top-level directory
2. Try grouping 2 directories (optimization for context sharing)
3. If group exceeds 10 docs, split to 1 dir/task
4. If single dir exceeds 10 docs, split by subdirectories
5. Create parallel Level 1 tasks with ≤10 docs each
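A minimal sketch of this grouping, assuming per-directory doc counts already derived from `folder_analysis` (code folders contribute 2 docs, navigation folders 1) and eliding the subdirectory split for oversized directories:
```javascript
// Sketch: greedy pairing of top-level dirs under a 10-doc budget.
// Input shape (assumed): [{ dir: 'src/modules', docs: 7 }, ...]
function groupDirectories(dirCounts, maxDocs = 10) {
  const groups = [];
  const queue = [...dirCounts];
  while (queue.length > 0) {
    const first = queue.shift();
    if (first.docs > maxDocs) {
      // Single dir over budget: would be split by subdirectories (elided).
      groups.push({ directories: [first.dir], doc_count: first.docs, split: true });
      continue;
    }
    // Optimization goal: pair two directories per task for shared context.
    const mateIdx = queue.findIndex((d) => first.docs + d.docs <= maxDocs);
    if (mateIdx >= 0) {
      const mate = queue.splice(mateIdx, 1)[0];
      groups.push({ directories: [first.dir, mate.dir], doc_count: first.docs + mate.docs });
    } else {
      groups.push({ directories: [first.dir], doc_count: first.docs });
    }
  }
  return groups;
}
```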
**Commands**:
```bash
# 1. Get top-level directories from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
# 2. Get mode from workflow-session.json
bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
# 3. Check for HTTP API
bash(grep -rE "router\.|@Get|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo "NO_API")
```
**Data Processing**:
1. Count documents for each top-level directory (from folder_analysis):
- Code folders: 2 docs each (API.md + README.md)
- Navigation folders: 1 doc each (README.md only)
2. Apply grouping algorithm with ≤10 docs constraint:
- Try grouping 2 directories, calculate total docs
- If total ≤10 docs: create group
- If total >10 docs: split to 1 dir/group or subdivide
- If single dir >10 docs: split by subdirectories
3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
```json
"groups": {
"count": 3,
"assignments": [
{"group_id": "001", "directories": ["src/modules", "src/utils"], "doc_count": 5},
{"group_id": "002", "directories": ["lib/core"], "doc_count": 6},
{"group_id": "003", "directories": ["lib/helpers"], "doc_count": 3}
]
}
```
**Task ID Calculation**:
```bash
group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
readme_id=$((group_count + 1)) # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))
```
### Phase 5: Generate Task JSONs
**CLI Strategy**:
| Mode | cli_execute | Placement | CLI MODE | Approval Flag | Agent Role |
|------|-------------|-----------|----------|---------------|------------|
| **Agent** | false | pre_analysis | analysis | (none) | Generate docs in implementation_approach |
| **CLI** | true | implementation_approach | write | --mode write | Execute CLI commands, validate output |
**Command Patterns**:
- Gemini/Qwen: `ccw cli -p "..." --tool gemini --mode analysis --cd dir`
- CLI Mode: `ccw cli -p "..." --tool gemini --mode write --cd dir`
- Codex: `ccw cli -p "..." --tool codex --mode write --cd dir`
**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
2. Read group assignments from doc-planning-data.json
3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)
## Task Templates
### Level 1: Module Trees Group Task (Unified)
**Execution Model**: Each task processes its assigned directory group (max 2 directories) using the pre-analyzed data from Phase 2.
```json
{
"id": "IMPL-${group_number}",
"title": "Document Module Trees Group ${group_number}",
"status": "pending",
"meta": {
"type": "docs-tree-group",
"agent": "@doc-generator",
"tool": "gemini",
"cli_execute": false,
"group_number": "${group_number}",
"total_groups": "${total_groups}"
},
"context": {
"requirements": [
"Process directories from group ${group_number} in doc-planning-data.json",
"Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
"Code folders: API.md + README.md; Navigation folders: README.md only",
"Use pre-analyzed data from Phase 2 (no redundant analysis)"
],
"focus_paths": ["${group_dirs_from_json}"],
"precomputed_data": {
"phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
}
},
"flow_control": {
"pre_analysis": [
{
"step": "load_precomputed_data",
"action": "Load Phase 2 analysis and extract group directories",
"commands": [
"bash(cat ${session_dir}/.process/doc-planning-data.json)",
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
],
"output_to": "phase2_context",
"note": "Single JSON file contains all Phase 2 analysis results"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate documentation for assigned directory group",
"description": "Process directories in Group ${group_number} using pre-analyzed data",
"modification_points": [
"Read group directories from [phase2_context].groups.assignments[${group_number}].directories",
"For each directory: parse folder types from folder_analysis, parse structure from unified_analysis",
"Map source_path to .workflow/docs/${project_name}/{path}",
"Generate API.md for code folders, README.md for all folders",
"Preserve user modifications from [phase2_context].existing_docs.content"
],
"logic_flow": [
"phase2 = parse([phase2_context])",
"dirs = phase2.groups.assignments[${group_number}].directories",
"for dir in dirs:",
" folder_info = find(dir, phase2.folder_analysis)",
" outline = find(dir, phase2.unified_analysis)",
" if folder_info.type == 'code': generate API.md + README.md",
" elif folder_info.type == 'navigation': generate README.md only",
" write to .workflow/docs/${project_name}/{dir}/"
],
"depends_on": [],
"output": "group_module_docs"
}
],
"target_files": [
".workflow/docs/${project_name}/*/API.md",
".workflow/docs/${project_name}/*/README.md"
]
}
}
```
**CLI Execute Mode Note**: When `cli_execute=true`, add Step 2 in `implementation_approach`:
```json
{
"step": 2,
"title": "Batch generate documentation via CLI",
"command": "ccw cli -p 'PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure' --tool gemini --mode write --cd ${dirs_from_group}",
"depends_on": [1],
"output": "generated_docs"
}
```
### Level 2: Project README Task
**Task ID**: `IMPL-${readme_id}` (where `readme_id = group_count + 1`)
**Dependencies**: Depends on all Level 1 tasks completing.
```json
{
"id": "IMPL-${readme_id}",
"title": "Generate Project README",
"status": "pending",
"depends_on": ["IMPL-001", "...", "IMPL-${group_count}"],
"meta": {"type": "docs", "agent": "@doc-generator", "tool": "gemini", "cli_execute": false},
"flow_control": {
"pre_analysis": [
{
"step": "load_existing_readme",
"command": "bash(cat .workflow/docs/${project_name}/README.md 2>/dev/null || echo 'No existing README')",
"output_to": "existing_readme"
},
{
"step": "load_module_docs",
"command": "bash(find .workflow/docs/${project_name} -type f -name '*.md' ! -path '.workflow/docs/${project_name}/README.md' ! -path '.workflow/docs/${project_name}/ARCHITECTURE.md' ! -path '.workflow/docs/${project_name}/EXAMPLES.md' ! -path '.workflow/docs/${project_name}/api/*' | xargs cat)",
"output_to": "all_module_docs"
},
{
"step": "analyze_project",
"command": "bash(ccw cli -p \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\" --tool gemini --mode analysis)",
"output_to": "project_outline"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate project README",
"description": "Generate project README with navigation links while preserving user modifications",
"modification_points": [
"Parse [project_outline] and [all_module_docs]",
"Generate README structure with navigation links",
"Preserve [existing_readme] user modifications"
],
"logic_flow": ["Parse data", "Generate README with navigation", "Preserve modifications"],
"depends_on": [],
"output": "project_readme"
}
],
"target_files": [".workflow/docs/${project_name}/README.md"]
}
}
```
### Level 3: Architecture & Examples Documentation Task
**Task ID**: `IMPL-${arch_id}` (where `arch_id = group_count + 2`)
**Dependencies**: Depends on Level 2 (Project README).
```json
{
"id": "IMPL-${arch_id}",
"title": "Generate Architecture & Examples Documentation",
"status": "pending",
"depends_on": ["IMPL-${readme_id}"],
"meta": {"type": "docs", "agent": "@doc-generator", "tool": "gemini", "cli_execute": false},
"flow_control": {
"pre_analysis": [
{"step": "load_existing_docs", "command": "bash(cat .workflow/docs/${project_name}/{ARCHITECTURE,EXAMPLES}.md 2>/dev/null || echo 'No existing docs')", "output_to": "existing_arch_examples"},
{"step": "load_all_docs", "command": "bash(cat .workflow/docs/${project_name}/README.md && find .workflow/docs/${project_name} -type f -name '*.md' ! -path '*/README.md' ! -path '*/ARCHITECTURE.md' ! -path '*/EXAMPLES.md' ! -path '*/api/*' | xargs cat)", "output_to": "all_docs"},
{"step": "analyze_architecture", "command": "bash(ccw cli -p \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\" --tool gemini --mode analysis)", "output_to": "arch_examples_outline"}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate architecture and examples documentation",
"modification_points": [
"Parse [arch_examples_outline] and [all_docs]",
"Generate ARCHITECTURE.md (system design, patterns)",
"Generate EXAMPLES.md (code snippets, usage)",
"Preserve [existing_arch_examples] modifications"
],
"depends_on": [],
"output": "arch_examples_docs"
}
],
"target_files": [".workflow/docs/${project_name}/ARCHITECTURE.md", ".workflow/docs/${project_name}/EXAMPLES.md"]
}
}
```
### Level 4: HTTP API Documentation Task (Optional)
**Task ID**: `IMPL-${api_id}` (where `api_id = group_count + 3`)
**Dependencies**: Depends on Level 3.
```json
{
"id": "IMPL-${api_id}",
"title": "Generate HTTP API Documentation",
"status": "pending",
"depends_on": ["IMPL-${arch_id}"],
"meta": {"type": "docs", "agent": "@doc-generator", "tool": "gemini", "cli_execute": false},
"flow_control": {
"pre_analysis": [
{"step": "discover_api", "command": "bash(rg 'router\\.| @(Get|Post)' -g '*.{ts,js}')", "output_to": "endpoint_discovery"},
{"step": "load_existing_api", "command": "bash(cat .workflow/docs/${project_name}/api/README.md 2>/dev/null || echo 'No existing API docs')", "output_to": "existing_api_docs"},
{"step": "analyze_api", "command": "bash(ccw cli -p \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\" --tool gemini --mode analysis)", "output_to": "api_outline"}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate HTTP API documentation",
"modification_points": [
"Parse [api_outline] and [endpoint_discovery]",
"Document endpoints, request/response formats",
"Preserve [existing_api_docs] modifications"
],
"depends_on": [],
"output": "api_docs"
}
],
"target_files": [".workflow/docs/${project_name}/api/README.md"]
}
}
```
## Session Structure
**Unified Structure** (single JSON replaces multiple text files):
```
.workflow/active/
└── WFS-docs-{timestamp}/
├── workflow-session.json # Session metadata
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
│ └── doc-planning-data.json # All Phase 2 analysis data (replaces 7+ files)
└── .task/
├── IMPL-001.json # Small: all modules | Large: group 1
├── IMPL-00N.json # (Large only: groups 2-N)
├── IMPL-{N+1}.json # README (full mode)
├── IMPL-{N+2}.json # ARCHITECTURE+EXAMPLES (full mode)
└── IMPL-{N+3}.json # HTTP API (optional)
```
**doc-planning-data.json Structure**:
```json
{
"metadata": {
"generated_at": "2025-11-03T16:41:06+08:00",
"project_name": "Claude_dms3",
"project_root": "/d/Claude_dms3"
},
"folder_analysis": [
{"path": "./src/core", "type": "code", "code_count": 5, "dirs_count": 2},
{"path": "./src/utils", "type": "navigation", "code_count": 0, "dirs_count": 4}
],
"top_level_dirs": ["src/modules", "src/utils", "lib/core"],
"existing_docs": {
"file_list": [".workflow/docs/project/src/core/API.md"],
"content": "... concatenated existing docs ..."
},
"unified_analysis": [
{"module_path": "./src/core", "outline_summary": "Core functionality"}
],
"groups": {
"count": 4,
"assignments": [
{"group_id": "001", "directories": ["src/modules", "src/utils"], "doc_count": 6},
{"group_id": "002", "directories": ["lib/core", "lib/helpers"], "doc_count": 7}
]
},
"statistics": {
"total": 15,
"code": 8,
"navigation": 7,
"top_level": 3
}
}
```
**Workflow Session Structure** (workflow-session.json):
```json
{
"session_id": "WFS-docs-{timestamp}",
"project": "{project_name} documentation",
"status": "planning",
"timestamp": "2024-01-20T14:30:22+08:00",
"path": ".",
"target_path": "/path/to/project",
"project_root": "/path/to/project",
"project_name": "{project_name}",
"mode": "full",
"tool": "gemini",
"cli_execute": false,
"update_mode": "update",
"existing_docs": 5,
"analysis": {
"total": "15",
"code": "8",
"navigation": "7",
"top_level": "3"
}
}
```
## Generated Documentation
**Structure mirrors project source directories under project-specific folder**:
```
.workflow/docs/
└── {project_name}/ # Project-specific root
├── src/ # Mirrors src/ directory
│ ├── modules/
│ │ ├── README.md # Navigation
│ │ ├── auth/
│ │ │ ├── API.md # API signatures
│ │ │ ├── README.md # Module docs
│ │ │ └── middleware/
│ │ │ ├── API.md
│ │ │ └── README.md
│ │ └── api/
│ │ ├── API.md
│ │ └── README.md
│ └── utils/
│ └── README.md
├── lib/ # Mirrors lib/ directory
│ └── core/
│ ├── API.md
│ └── README.md
├── README.md # Project root
├── ARCHITECTURE.md # System design
├── EXAMPLES.md # Usage examples
└── api/ # Optional
└── README.md # HTTP API reference
```
## Execution Commands
```bash
# Execute entire workflow (auto-discovers active session)
/workflow:execute
# Or specify session
/workflow:execute --resume-session="WFS-docs-yyyymmdd-hhmmss"
# Individual task execution
/task:execute IMPL-001
```
## Template Reference
**Available Templates** (`~/.claude/workflows/cli-templates/prompts/documentation/`):
- `api.txt`: Code API (Part A) + HTTP API (Part B)
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders with subdirectories
- `project-readme.txt`: Project overview, getting started, navigation
- `project-architecture.txt`: System structure, module map, design patterns
- `project-examples.txt`: End-to-end usage examples
## Execution Mode Summary
| Mode | CLI Placement | CLI MODE | Approval Flag | Agent Role |
|------|---------------|----------|---------------|------------|
| **Agent (default)** | pre_analysis | analysis | (none) | Generates documentation content |
| **CLI (--cli-execute)** | implementation_approach | write | --mode write | Executes CLI commands, validates output |
**Execution Flow**:
- **Phase 2**: Unified analysis once, results in `.process/`
- **Phase 4**: Dynamic grouping (max 2 dirs per group)
- **Level 1**: Parallel processing for module tree groups
- **Level 2+**: Sequential execution for project-level docs
## Related Commands
- `/workflow:execute` - Execute documentation tasks
- `/workflow:status` - View task progress
- `/workflow:session:complete` - Mark session complete


@@ -1,182 +0,0 @@
---
name: load-skill-memory
description: Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords
argument-hint: "[skill_name] \"task intent description\""
allowed-tools: Bash(*), Read(*), Skill(*)
---
# Memory Load SKILL Command (/memory:load-skill-memory)
## 1. Overview
The `memory:load-skill-memory` command **activates a SKILL package** (auto-detected from the task or specified manually) and intelligently loads documentation based on the user's task intent. The system automatically determines which documentation files to read from the intent description.
**Core Philosophy**:
- **Flexible Activation**: Auto-detect skill from task description/paths, or user explicitly specifies
- **Intent-Driven Loading**: System analyzes task intent to determine documentation scope
- **Intelligent Selection**: Automatically chooses appropriate documentation level and modules
- **Direct Context Loading**: Loads selected documentation into conversation memory
**When to Use**:
- Manually activate a known SKILL package for a specific task
- Load SKILL context when system hasn't auto-triggered it
- Force reload SKILL documentation with specific intent focus
**Note**: Normal SKILL activation happens automatically via description triggers or path mentions (system extracts skill name from file paths for intelligent triggering). Use this command only when manual activation is needed.
## 2. Parameters
- `[skill_name]` (Optional): Name of SKILL package to activate
- If omitted: System auto-detects from task description or file paths
- If specified: Direct activation of named SKILL package
- Example: `my_project`, `api_service`
- Must match directory name under `.claude/skills/`
- `"task intent description"` (Required): Description of what you want to do
- Used for both: auto-detection (if skill_name omitted) and documentation scope selection
- **Analysis tasks**: "分析builder pattern实现" (analyze the builder pattern implementation), "理解参数系统架构" (understand the parameter system architecture)
- **Modification tasks**: "修改workflow逻辑" (modify the workflow logic), "增强thermal template功能" (enhance the thermal template feature)
- **Learning tasks**: "学习接口设计模式" (learn interface design patterns), "了解测试框架使用" (understand how the test framework is used)
- **With paths**: "修改D:\projects\my_project\src\auth.py的认证逻辑" (modify the auth logic in that file; auto-extracts `my_project`)
## 3. Execution Flow
### Step 1: Determine SKILL Name (if not provided)
**Auto-Detection Strategy** (when skill_name parameter is omitted):
1. **Path Extraction**: Scan task description for file paths
- Extract potential project names from path segments
- Example: `"修改D:\projects\my_project\src\auth.py"` → extracts `my_project`
2. **Keyword Matching**: Match task keywords against SKILL descriptions
- Search for project-specific terms, domain keywords
3. **Validation**: Check if extracted name matches `.claude/skills/{skill_name}/`
**Result**: Either uses provided skill_name or auto-detected name for activation
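A minimal sketch of the path-extraction branch (the regex and the skills directory layout are assumptions; keyword matching is elided):
```javascript
// Sketch: extract a candidate skill name from a file path in the task text.
const fs = require('fs');

function detectSkillName(taskText) {
  // Match a Windows or POSIX path, e.g. D:\projects\my_project\src\auth.py
  const match = taskText.match(/[A-Za-z]:[\\\/][^\s"']+|\/[^\s"']+/);
  if (!match) return null;
  const segments = match[0].split(/[\\\/]/).filter(Boolean);
  // Try each segment as a skill directory name, deepest segment first.
  for (const candidate of segments.reverse()) {
    if (fs.existsSync(`.claude/skills/${candidate}`)) return candidate;
  }
  return null;
}
```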
### Step 2: Activate SKILL and Analyze Intent
**Activate SKILL Package**:
```javascript
Skill(command: "${skill_name}") // Uses provided or auto-detected name
```
**What Happens After Activation**:
1. If SKILL exists in memory: System reads `.claude/skills/${skill_name}/SKILL.md`
2. If SKILL not found in memory: Error - SKILL package doesn't exist
3. SKILL description triggers are loaded into memory
4. Progressive loading mechanism becomes available
5. Documentation structure is now accessible
**Intent Analysis**:
Based on task intent description, system determines:
- **Action type**: analyzing, modifying, learning
- **Scope**: specific module, architecture overview, complete system
- **Depth**: quick reference, detailed API, full documentation
### Step 3: Intelligent Documentation Loading
**Loading Strategy**:
The system automatically selects documentation based on intent keywords:
1. **Quick Understanding** ("了解" / understand, "快速理解" / quick overview, "什么是" / what is):
- Load: Level 0 (README.md only, ~2K tokens)
- Use case: Quick overview of capabilities
2. **Specific Module Analysis** ("分析XXX模块" / analyze module XXX, "理解XXX实现" / understand XXX implementation):
- Load: Module-specific README.md + API.md (~5K tokens)
- Use case: Deep dive into specific component
3. **Architecture Review** ("架构" / architecture, "设计模式" / design patterns, "整体结构" / overall structure):
- Load: README.md + ARCHITECTURE.md (~10K tokens)
- Use case: System design understanding
4. **Implementation/Modification** ("修改" / modify, "增强" / enhance, "实现" / implement):
- Load: Relevant module docs + EXAMPLES.md (~15K tokens)
- Use case: Code modification with examples
5. **Comprehensive Learning** ("学习" / learn, "完整了解" / fully understand, "深入理解" / deep understanding):
- Load: Level 3 (All documentation, ~40K tokens)
- Use case: Complete system mastery
**Documentation Loaded into Memory**:
After loading, the selected documentation content is available in conversation memory for subsequent operations.
## 4. Usage Examples
### Example 1: Manual Specification
**User Command**:
```bash
/memory:load-skill-memory my_project "修改认证模块增加OAuth支持"
```
**Execution**:
```javascript
// Step 1: Use provided skill_name
skill_name = "my_project" // Directly from parameter
// Step 2: Activate SKILL
Skill(command: "my_project")
// Step 3: Intent Analysis
Keywords: ["修改", "认证模块", "增加", "OAuth"]
Action: modifying (implementation)
Scope: auth module + examples
// Load documentation based on intent
Read(.workflow/docs/my_project/auth/README.md)
Read(.workflow/docs/my_project/auth/API.md)
Read(.workflow/docs/my_project/EXAMPLES.md)
```
### Example 2: Auto-Detection from Path
**User Command**:
```bash
/memory:load-skill-memory "修改D:\projects\my_project\src\services\api.py的接口逻辑"
```
**Execution**:
```javascript
// Step 1: Auto-detect skill_name from path
Path detected: "D:\projects\my_project\src\services\api.py"
Extracted: "my_project"
Validated: .claude/skills/my_project/ exists
skill_name = "my_project"
// Step 2: Activate SKILL
Skill(command: "my_project")
// Step 3: Intent Analysis
Keywords: ["修改", "services", "接口逻辑"]
Action: modifying (implementation)
Scope: services module + examples
// Load documentation based on intent
Read(.workflow/docs/my_project/services/README.md)
Read(.workflow/docs/my_project/services/API.md)
Read(.workflow/docs/my_project/EXAMPLES.md)
```
## 5. Intent Keyword Mapping
**Quick Reference**:
- **Triggers**: "了解", "快速", "什么是", "简介" (understand, quick, what is, introduction)
- **Loads**: README.md only (~2K)
**Module-Specific**:
- **Triggers**: "XXX模块", "XXX组件", "分析XXX" (module XXX, component XXX, analyze XXX)
- **Loads**: Module README + API (~5K)
**Architecture**:
- **Triggers**: "架构", "设计", "整体结构", "系统设计" (architecture, design, overall structure, system design)
- **Loads**: README + ARCHITECTURE (~10K)
**Implementation**:
- **Triggers**: "修改", "增强", "实现", "开发", "集成" (modify, enhance, implement, develop, integrate)
- **Loads**: Relevant module + EXAMPLES (~15K)
**Comprehensive**:
- **Triggers**: "完整", "深入", "全面", "学习整个" (complete, in-depth, comprehensive, learn the whole system)
- **Loads**: All documentation (~40K)
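A minimal sketch of this routing (trigger lists abbreviated from the tables above; broader intents are checked first so that, e.g., "深入理解" resolves to comprehensive):
```javascript
// Sketch: map task intent keywords to a documentation loading plan.
const INTENT_RULES = [
  { level: 'comprehensive', triggers: ['完整', '深入', '全面', '学习整个'] }, // all docs (~40K)
  { level: 'implementation', triggers: ['修改', '增强', '实现', '开发', '集成'] }, // module + EXAMPLES (~15K)
  { level: 'architecture', triggers: ['架构', '设计', '整体结构', '系统设计'] }, // README + ARCHITECTURE (~10K)
  { level: 'quick', triggers: ['了解', '快速', '什么是', '简介'] }, // README only (~2K)
];

function selectLoadingLevel(intent) {
  for (const rule of INTENT_RULES) {
    if (rule.triggers.some((t) => intent.includes(t))) return rule.level;
  }
  return 'module'; // default: module README + API (~5K)
}

// selectLoadingLevel('修改认证模块增加OAuth支持') => 'implementation'
```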


@@ -1,525 +0,0 @@
---
name: skill-memory
description: "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)"
argument-hint: "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*)
---
# Memory SKILL Package Generator
## Orchestrator Role
**Pure Orchestrator**: Execute documentation generation workflow, then generate SKILL.md index. Does NOT create task JSON files.
**Auto-Continue Workflow**: This command runs **fully autonomously** once triggered. Each phase completes and automatically triggers the next phase without user interaction.
**Execution Paths**:
- **Full Path**: All 4 phases (no existing docs OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 4 (existing docs found AND no `--regenerate` flag)
- **Phase 4 Always Executes**: SKILL.md index is never skipped, always generated or updated
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **No Task JSON**: This command does not create task JSON files - delegates to /memory:docs
3. **Parse Every Output**: Extract required data from each command output (session_id, task_count, file paths)
4. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
5. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
6. **Direct Generation**: Phase 4 directly generates SKILL.md using Write tool
7. **No Manual Steps**: User should never be prompted for decisions between phases
---
## 4-Phase Execution
### Phase 1: Prepare Arguments
**Goal**: Parse command arguments and check existing documentation
**Step 1: Get Target Path and Project Name**
```bash
# Get current directory (or use provided path)
bash(pwd)
# Get project name from directory
bash(basename "$(pwd)")
# Get project root
bash(git rev-parse --show-toplevel 2>/dev/null || pwd)
```
**Output**:
- `target_path`: `/d/my_project`
- `project_name`: `my_project`
- `project_root`: `/d/my_project`
**Step 2: Set Default Parameters**
```bash
# Default values (use these unless user specifies otherwise):
# - tool: "gemini"
# - mode: "full"
# - regenerate: false (no --regenerate flag)
# - cli_execute: false (no --cli-execute flag)
```
**Step 3: Check Existing Documentation**
```bash
# Check if docs directory exists
bash(test -d .workflow/docs/my_project && echo "exists" || echo "not_exists")
# Count existing documentation files
bash(find .workflow/docs/my_project -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Output**:
- `docs_exists`: `exists` or `not_exists`
- `existing_docs`: `5` (or `0` if no docs)
**Step 4: Determine Execution Path**
**Decision Logic**:
```javascript
if (existing_docs > 0 && !regenerate_flag) {
// Documentation exists and no regenerate flag
SKIP_DOCS_GENERATION = true
message = "Documentation already exists, skipping Phase 2 and Phase 3. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
// Force regeneration: delete existing docs
bash(rm -rf .workflow/docs/my_project 2>/dev/null || true)
SKIP_DOCS_GENERATION = false
message = "Regenerating documentation from scratch."
} else {
// No existing docs
SKIP_DOCS_GENERATION = false
message = "No existing documentation found, generating new documentation."
}
```
**Summary Variables**:
- `PROJECT_NAME`: `my_project`
- `TARGET_PATH`: `/d/my_project`
- `DOCS_PATH`: `.workflow/docs/my_project`
- `TOOL`: `gemini` (default) or user-specified
- `MODE`: `full` (default) or user-specified
- `CLI_EXECUTE`: `false` (default) or `true` if --cli-execute flag
- `REGENERATE`: `false` (default) or `true` if --regenerate flag
- `EXISTING_DOCS`: Count of existing documentation files
- `SKIP_DOCS_GENERATION`: `true` if skipping Phase 2/3, `false` otherwise
**Completion & TodoWrite**:
- If `SKIP_DOCS_GENERATION = true`: Mark phase 1 completed, phase 2&3 completed (skipped), phase 4 in_progress
- If `SKIP_DOCS_GENERATION = false`: Mark phase 1 completed, phase 2 in_progress
**Next Action**:
- If skipping: Display skip message → Jump to Phase 4 (SKILL.md generation)
- If not skipping: Display preparation results → Continue to Phase 2 (documentation planning)
---
### Phase 2: Call /memory:docs
**Skip Condition**: This phase is **skipped if SKIP_DOCS_GENERATION = true** (documentation already exists without --regenerate flag)
**Goal**: Trigger documentation generation workflow
**Command**:
```bash
SlashCommand(command="/memory:docs [targetPath] --tool [tool] --mode [mode] [--cli-execute]")
```
**Example**:
```bash
/memory:docs /d/my_app --tool gemini --mode full
/memory:docs /d/my_app --tool gemini --mode full --cli-execute
```
**Note**: The `--regenerate` flag is handled in Phase 1 by deleting existing documentation. This command always calls `/memory:docs` without the regenerate flag, relying on docs.md's built-in update detection.
**Parse Output**:
- Extract session ID: `WFS-docs-[timestamp]` (store as `docsSessionId`)
- Extract task count (store as `taskCount`)
**Completion Criteria**:
- `/memory:docs` command executed successfully
- Session ID extracted and stored
- Task count retrieved
- Task files created in `.workflow/[docsSessionId]/.task/`
- workflow-session.json exists
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
**Next Action**: Display docs planning results (session ID, task count) → Auto-continue to Phase 3
---
### Phase 3: Execute Documentation Generation
**Skip Condition**: This phase is **skipped if SKIP_DOCS_GENERATION = true** (documentation already exists without --regenerate flag)
**Goal**: Execute documentation generation tasks
**Command**:
```bash
SlashCommand(command="/workflow:execute")
```
**Note**: `/workflow:execute` automatically discovers active session from Phase 2
**Completion Criteria**:
- `/workflow:execute` command executed successfully
- Documentation files generated in `.workflow/docs/[projectName]/`
- All tasks marked as completed in session
- At minimum: module documentation files exist (API.md and/or README.md)
- For full mode: Project README, ARCHITECTURE, EXAMPLES files generated
**TodoWrite**: Mark phase 3 completed, phase 4 in_progress
**Next Action**: Display execution results (file count, module count) → Auto-continue to Phase 4
---
### Phase 4: Generate SKILL.md Index
**Note**: This phase is **NEVER skipped** - it always executes to generate or update the SKILL index.
**Step 1: Read Key Files** (Use Read tool)
- `.workflow/docs/{project_name}/README.md` (required)
- `.workflow/docs/{project_name}/ARCHITECTURE.md` (optional)
**Step 2: Discover Structure**
```bash
bash(find .workflow/docs/{project_name} -name "*.md" | sed 's|.workflow/docs/{project_name}/||' | awk -F'/' '{if(NF>=2) print $1"/"$2}' | sort -u)
```
**Step 3: Generate Intelligent Description**
Extract from README + structure: Function (capabilities), Modules (names), Keywords (API/CLI/auth/etc.)
**Format**: `{Project} {core capabilities} (located at {project_path}). Load this SKILL when analyzing, modifying, or learning about {domain_description} or files under this path, especially when no relevant context exists in memory.`
**Key Elements**:
- **Path Reference**: Use `TARGET_PATH` from Phase 1 for precise location identification
- **Domain Description**: Extract human-readable domain/feature area from README (e.g., "workflow management", "thermal modeling")
- **Trigger Optimization**: Include project path, emphasize "especially when no relevant context exists in memory"
- **Action Coverage**: analyzing (分析), modifying (修改), learning (了解)
**Example**: "Workflow orchestration system with CLI tools and documentation generation (located at /d/Claude_dms3). Load this SKILL when analyzing, modifying, or learning about workflow management or files under this path, especially when no relevant context exists in memory."
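A minimal sketch of assembling this description from the extracted fields (the function and field names are illustrative):
```javascript
// Sketch: build the SKILL description per the format above.
function buildSkillDescription({ project, capabilities, projectPath, domain }) {
  return (
    `${project} ${capabilities} (located at ${projectPath}). ` +
    `Load this SKILL when analyzing, modifying, or learning about ${domain} ` +
    `or files under this path, especially when no relevant context exists in memory.`
  );
}

// buildSkillDescription({
//   project: 'Workflow orchestration system',
//   capabilities: 'with CLI tools and documentation generation',
//   projectPath: '/d/Claude_dms3',
//   domain: 'workflow management',
// }) reproduces the example above.
```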
**Step 4: Write SKILL.md** (Use Write tool)
```bash
bash(mkdir -p .claude/skills/{project_name})
```
`.claude/skills/{project_name}/SKILL.md`:
```yaml
---
name: {project_name}
description: {intelligent description from Step 3}
version: 1.0.0
---
# {Project Name} SKILL Package
## Documentation: `../../../.workflow/docs/{project_name}/`
## Progressive Loading
### Level 0: Quick Start (~2K)
- [README](../../../.workflow/docs/{project_name}/README.md)
### Level 1: Core Modules (~8K)
{Module READMEs}
### Level 2: Complete (~25K)
All modules + [Architecture](../../../.workflow/docs/{project_name}/ARCHITECTURE.md)
### Level 3: Deep Dive (~40K)
Everything + [Examples](../../../.workflow/docs/{project_name}/EXAMPLES.md)
```
**Completion Criteria**:
- SKILL.md file created at `.claude/skills/{project_name}/SKILL.md`
- Intelligent description generated from documentation
- Progressive loading levels (0-3) properly structured
- Module index includes all documented modules
- All file references use relative paths
**TodoWrite**: Mark phase 4 completed
**Final Action**: Report completion summary to user
**Return to User**:
```
SKILL Package Generation Complete
Project: {project_name}
Documentation: .workflow/docs/{project_name}/ ({doc_count} files)
SKILL Index: .claude/skills/{project_name}/SKILL.md
Generated:
- {task_count} documentation tasks completed
- SKILL.md with progressive loading (4 levels)
- Module index with {module_count} modules
Usage:
- Load Level 0: Quick project overview (~2K tokens)
- Load Level 1: Core modules (~8K tokens)
- Load Level 2: Complete docs (~25K tokens)
- Load Level 3: Everything (~40K tokens)
```
---
## Implementation Details
### Critical Rules
1. **No User Prompts Between Phases**: Never ask user questions or wait for input between phases
2. **Immediate Phase Transition**: After TodoWrite update, immediately execute next phase command
3. **Status-Driven Execution**: Check TodoList status after each phase:
- If next task is "pending" → Mark it "in_progress" and execute
- If all tasks are "completed" → Report final summary
4. **Phase Completion Pattern**:
```
Phase N completes → Update TodoWrite (N=completed, N+1=in_progress) → Execute Phase N+1
```
### TodoWrite Patterns
#### Initialization (Before Phase 1)
**FIRST ACTION**: Create TodoList with all 4 phases
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "in_progress", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "pending", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "pending", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL.md"}
]})
```
**SECOND ACTION**: Execute Phase 1 immediately
#### Full Path (SKIP_DOCS_GENERATION = false)
**After Phase 1**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "in_progress", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "pending", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL.md"}
]})
// Auto-continue to Phase 2
```
**After Phase 2**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "in_progress", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL.md"}
]})
// Auto-continue to Phase 3
```
**After Phase 3**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "in_progress", "activeForm": "Generating SKILL.md"}
]})
// Auto-continue to Phase 4
```
**After Phase 4**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "completed", "activeForm": "Generating SKILL.md"}
]})
// Report completion summary to user
```
#### Skip Path (SKIP_DOCS_GENERATION = true)
**After Phase 1** (detects existing docs, skips Phase 2 & 3):
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "in_progress", "activeForm": "Generating SKILL.md"}
]})
// Display skip message: "Documentation already exists, skipping Phase 2 and Phase 3. Use --regenerate to force regeneration."
// Jump directly to Phase 4
```
**After Phase 4**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "completed", "activeForm": "Generating SKILL.md"}
]})
// Report completion summary to user
```
### Execution Flow Diagrams
#### Full Path Flow
```
User triggers command
[TodoWrite] Initialize 4 phases (Phase 1 = in_progress)
[Execute] Phase 1: Parse arguments
[TodoWrite] Phase 1 = completed, Phase 2 = in_progress
[Execute] Phase 2: Call /memory:docs
[TodoWrite] Phase 2 = completed, Phase 3 = in_progress
[Execute] Phase 3: Call /workflow:execute
[TodoWrite] Phase 3 = completed, Phase 4 = in_progress
[Execute] Phase 4: Generate SKILL.md
[TodoWrite] Phase 4 = completed
[Report] Display completion summary
```
#### Skip Path Flow
```
User triggers command
[TodoWrite] Initialize 4 phases (Phase 1 = in_progress)
[Execute] Phase 1: Parse arguments, detect existing docs
[TodoWrite] Phase 1 = completed, Phase 2&3 = completed (skipped), Phase 4 = in_progress
[Display] Skip message: "Documentation already exists, skipping Phase 2 and Phase 3"
[Execute] Phase 4: Generate SKILL.md (always runs)
[TodoWrite] Phase 4 = completed
[Report] Display completion summary
```
### Error Handling
- If any phase fails, mark it as "in_progress" (not completed)
- Report error details to user
- Do NOT auto-continue to next phase on failure
---
## Parameters
```bash
/memory:skill-memory [path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]
```
- **path**: Target directory (default: current directory)
- **--tool**: CLI tool for documentation (default: gemini)
- `gemini`: Comprehensive documentation
- `qwen`: Architecture analysis
- `codex`: Implementation validation
- **--regenerate**: Force regenerate all documentation
- When enabled: Deletes existing `.workflow/docs/{project_name}/` before regeneration
- Ensures fresh documentation from source code
- **--mode**: Documentation mode (default: full)
- `full`: Complete docs (modules + README + ARCHITECTURE + EXAMPLES)
- `partial`: Module docs only
- **--cli-execute**: Enable CLI-based documentation generation (optional)
- When enabled: CLI generates docs directly in implementation_approach
- When disabled (default): Agent generates documentation content
---
## Examples
### Example 1: Generate SKILL Package (Default)
```bash
/memory:skill-memory
```
**Workflow**:
1. Phase 1: Detects current directory, checks existing docs
2. Phase 2: Calls `/memory:docs . --tool gemini --mode full` (Agent Mode)
3. Phase 3: Executes documentation generation via `/workflow:execute`
4. Phase 4: Generates SKILL.md at `.claude/skills/{project_name}/SKILL.md`
### Example 2: Regenerate with Qwen
```bash
/memory:skill-memory /d/my_app --tool qwen --regenerate
```
**Workflow**:
1. Phase 1: Parses target path, detects regenerate flag, deletes existing docs
2. Phase 2: Calls `/memory:docs /d/my_app --tool qwen --mode full`
3. Phase 3: Executes documentation regeneration
4. Phase 4: Generates updated SKILL.md
### Example 3: Partial Mode (Modules Only)
```bash
/memory:skill-memory --mode partial
```
**Workflow**:
1. Phase 1: Detects partial mode
2. Phase 2: Calls `/memory:docs . --tool gemini --mode partial` (Agent Mode)
3. Phase 3: Executes module documentation only
4. Phase 4: Generates SKILL.md with module-only index
### Example 4: CLI Execute Mode
```bash
/memory:skill-memory --cli-execute
```
**Workflow**:
1. Phase 1: Detects CLI execute mode
2. Phase 2: Calls `/memory:docs . --tool gemini --mode full --cli-execute` (CLI Mode)
3. Phase 3: Executes CLI-based documentation generation
4. Phase 4: Generates SKILL.md at `.claude/skills/{project_name}/SKILL.md`
### Example 5: Skip Path (Existing Docs)
```bash
/memory:skill-memory
```
**Scenario**: Documentation already exists in `.workflow/docs/{project_name}/`
**Workflow**:
1. Phase 1: Detects existing docs (5 files), sets SKIP_DOCS_GENERATION = true
2. Display: "Documentation already exists, skipping Phase 2 and Phase 3. Use --regenerate to force regeneration."
3. Phase 4: Generates or updates SKILL.md index only (~5-10x faster)
---
## Architecture
```
skill-memory (orchestrator)
├─ Phase 1: Prepare (bash commands, skip decision)
├─ Phase 2: /memory:docs (task planning, skippable)
├─ Phase 3: /workflow:execute (task execution, skippable)
└─ Phase 4: Write SKILL.md (direct file generation, always runs)
No task JSON created by this command
All documentation tasks managed by /memory:docs
Smart skip logic: 5-10x faster when docs exist
```


@@ -1,773 +0,0 @@
---
name: swagger-docs
description: Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests
argument-hint: "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]"
---
# Swagger API Documentation Workflow (/memory:swagger-docs)
## Overview
Professional Swagger/OpenAPI documentation generator that strictly follows RESTful API design standards to produce enterprise-grade API documentation.
**Core Features**:
- **RESTful Standards**: Strict adherence to REST architecture and HTTP semantics
- **Global Security**: Unified Authorization Token validation mechanism
- **Complete API Docs**: Descriptions, methods, URLs, parameters for each endpoint
- **Organized Structure**: Clear directory hierarchy by business domain
- **Detailed Fields**: Type, required, example, description for each field
- **Error Code Standards**: Unified error response format and code definitions
- **Validation Tests**: Boundary conditions and exception handling tests
**Output Structure** (--lang zh):
```
.workflow/docs/{project_name}/api/
├── swagger.yaml # Main OpenAPI spec file
├── 概述/
│ ├── README.md # API overview
│ ├── 认证说明.md # Authentication guide
│ ├── 错误码规范.md # Error code definitions
│ └── 版本历史.md # Version history
├── 用户模块/ # Grouped by business domain
│ ├── 用户认证.md
│ ├── 用户管理.md
│ └── 权限控制.md
├── 业务模块/
│ └── ...
└── 测试报告/
├── 接口测试.md # API test results
└── 边界测试.md # Boundary condition tests
```
**Output Structure** (--lang en):
```
.workflow/docs/{project_name}/api/
├── swagger.yaml # Main OpenAPI spec file
├── overview/
│ ├── README.md # API overview
│ ├── authentication.md # Authentication guide
│ ├── error-codes.md # Error code definitions
│ └── changelog.md # Version history
├── users/ # Grouped by business domain
│ ├── authentication.md
│ ├── management.md
│ └── permissions.md
├── orders/
│ └── ...
└── test-reports/
├── api-tests.md # API test results
└── boundary-tests.md # Boundary condition tests
```
## Parameters
```bash
/memory:swagger-docs [path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]
```
- **path**: API source code directory (default: current directory)
- **--tool**: CLI tool selection (default: gemini)
- `gemini`: Comprehensive analysis, pattern recognition
- `qwen`: Architecture analysis, system design
- `codex`: Implementation validation, code quality
- **--format**: OpenAPI spec format (default: yaml)
- `yaml`: YAML format (recommended, better readability)
- `json`: JSON format
- **--version**: OpenAPI version (default: v3.0)
- `v3.0`: OpenAPI 3.0.x
- `v3.1`: OpenAPI 3.1.0 (supports JSON Schema 2020-12)
- **--lang**: Documentation language (default: zh)
- `zh`: Chinese documentation with Chinese directory names
- `en`: English documentation with English directory names
## Planning Workflow
### Phase 1: Initialize Session
```bash
# Get project info
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
```
```javascript
// Create swagger-docs session
SlashCommand(command="/workflow:session:start --type swagger-docs --new \"{project_name}-swagger-{timestamp}\"")
// Parse output to get sessionId
```
```bash
# Update workflow-session.json
bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","format":"yaml","openapi_version":"3.0.3","lang":"{lang}","tool":"gemini"}' .workflow/active/{sessionId}/workflow-session.json > tmp.json && mv tmp.json .workflow/active/{sessionId}/workflow-session.json)
```
### Phase 2: Scan API Endpoints
**Discovery Patterns**: Auto-detect framework signatures and API definition styles.
**Supported Frameworks**:
| Framework | Detection Pattern | Example |
|-----------|-------------------|---------|
| Express.js | `router.get/post/put/delete` | `router.get('/users/:id')` |
| Fastify | `fastify.route`, `@Route` | `fastify.get('/api/users')` |
| NestJS | `@Controller`, `@Get/@Post` | `@Get('users/:id')` |
| Koa | `router.get`, `ctx.body` | `router.get('/users')` |
| Hono | `app.get/post`, `c.json` | `app.get('/users/:id')` |
| FastAPI | `@app.get`, `@router.post` | `@app.get("/users/{id}")` |
| Flask | `@app.route`, `@bp.route` | `@app.route('/users')` |
| Spring | `@GetMapping`, `@PostMapping` | `@GetMapping("/users/{id}")` |
| Go Gin | `r.GET`, `r.POST` | `r.GET("/users/:id")` |
| Go Chi | `r.Get`, `r.Post` | `r.Get("/users/{id}")` |
**Commands**:
```bash
# 1. Detect API framework type
bash(
if rg -q "@Controller|@Get|@Post|@Put|@Delete" --type ts 2>/dev/null; then echo "NESTJS";
elif rg -q "router\.(get|post|put|delete|patch)" --type ts --type js 2>/dev/null; then echo "EXPRESS";
elif rg -q "fastify\.(get|post|route)" --type ts --type js 2>/dev/null; then echo "FASTIFY";
elif rg -q "@app\.(get|post|put|delete)" --type py 2>/dev/null; then echo "FASTAPI";
elif rg -q "@GetMapping|@PostMapping|@RequestMapping" --type java 2>/dev/null; then echo "SPRING";
elif rg -q 'r\.(GET|POST|PUT|DELETE)' --type go 2>/dev/null; then echo "GO_GIN";
else echo "UNKNOWN"; fi
)
# 2. Scan all API endpoint definitions
bash(rg -n "(router|app|fastify)\.(get|post|put|delete|patch)|@(Get|Post|Put|Delete|Patch|Controller|RequestMapping)" --type ts --type js --type py --type java --type go -g '!*.test.*' -g '!*.spec.*' -g '!node_modules/**' 2>/dev/null | head -200)
# 3. Extract route paths
bash(rg -o "['\"](/api)?/[a-zA-Z0-9/:_-]+['\"]" --type ts --type js --type py -g '!*.test.*' 2>/dev/null | sort -u | head -100)
# 4. Detect existing OpenAPI/Swagger files
bash(find . -type f \( -name "swagger.yaml" -o -name "swagger.json" -o -name "openapi.yaml" -o -name "openapi.json" \) ! -path "*/node_modules/*" 2>/dev/null)
# 5. Extract DTO/Schema definitions
bash(rg -n "export (interface|type|class).*Dto|@ApiProperty|class.*Schema" --type ts -g '!*.test.*' 2>/dev/null | head -100)
```
**Data Processing**: Parse outputs, use **Write tool** to create `${session_dir}/.process/swagger-planning-data.json`:
```json
{
"metadata": {
"generated_at": "2025-01-01T12:00:00+08:00",
"project_name": "project_name",
"project_root": "/path/to/project",
"openapi_version": "3.0.3",
"format": "yaml",
"lang": "zh"
},
"framework": {
"type": "NESTJS",
"detected_patterns": ["@Controller", "@Get", "@Post"],
"base_path": "/api/v1"
},
"endpoints": [
{
"file": "src/modules/users/users.controller.ts",
"line": 25,
"method": "GET",
"path": "/api/v1/users/:id",
"handler": "getUser",
"controller": "UsersController"
}
],
"existing_specs": {
"found": false,
"files": []
},
"dto_schemas": [
{
"name": "CreateUserDto",
"file": "src/modules/users/dto/create-user.dto.ts",
"properties": ["email", "password", "name"]
}
],
"statistics": {
"total_endpoints": 45,
"by_method": {"GET": 20, "POST": 15, "PUT": 5, "DELETE": 5},
"by_module": {"users": 12, "auth": 8, "orders": 15, "products": 10}
}
}
```
### Phase 3: Analyze API Structure
**Commands**:
```bash
# 1. Analyze controller/route file structure
bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.endpoints[].file' | sort -u | head -20)
# 2. Extract request/response types
bash(for f in $(jq -r '.dto_schemas[].file' ${session_dir}/.process/swagger-planning-data.json | head -20); do echo "=== $f ===" && cat "$f" 2>/dev/null; done)
# 3. Analyze authentication middleware
bash(rg -n "auth|guard|middleware|jwt|bearer|token" -i --type ts --type js -g '!*.test.*' -g '!node_modules/**' 2>/dev/null | head -50)
# 4. Detect error handling patterns
bash(rg -n "HttpException|BadRequest|Unauthorized|Forbidden|NotFound|throw new" --type ts --type js -g '!*.test.*' 2>/dev/null | head -50)
```
**Deep Analysis via Gemini CLI**:
```bash
ccw cli -p "
PURPOSE: Analyze API structure and generate OpenAPI specification outline for comprehensive documentation
TASK:
• Parse all API endpoints and identify business module boundaries
• Extract request parameters, request bodies, and response formats
• Identify authentication mechanisms and security requirements
• Discover error handling patterns and error codes
• Map endpoints to logical module groups
MODE: analysis
CONTEXT: @src/**/*.controller.ts @src/**/*.routes.ts @src/**/*.dto.ts @src/**/middleware/**/*
EXPECTED: JSON format API structure analysis report with modules, endpoints, security schemes, and error codes
CONSTRAINTS: Strict RESTful standards | Identify all public endpoints | Document output language: {lang}
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {project_root}
```
**Update swagger-planning-data.json** with analysis results:
```json
{
"api_structure": {
"modules": [
{
"name": "Users",
"name_zh": "用户模块",
"base_path": "/api/v1/users",
"endpoints": [
{
"path": "/api/v1/users",
"method": "GET",
"operation_id": "listUsers",
"summary": "List all users",
"summary_zh": "获取用户列表",
"description": "Paginated list of system users with filtering by status and role",
"description_zh": "分页获取系统用户列表,支持按状态、角色筛选",
"tags": ["User Management"],
"tags_zh": ["用户管理"],
"security": ["bearerAuth"],
"parameters": {
"query": ["page", "limit", "status", "role"]
},
"responses": {
"200": "UserListResponse",
"401": "UnauthorizedError",
"403": "ForbiddenError"
}
}
]
}
],
"security_schemes": {
"bearerAuth": {
"type": "http",
"scheme": "bearer",
"bearerFormat": "JWT",
"description": "JWT Token authentication. Add Authorization: Bearer <token> to request header"
}
},
"error_codes": [
{"code": "AUTH_001", "status": 401, "message": "Invalid or expired token", "message_zh": "Token 无效或已过期"},
{"code": "AUTH_002", "status": 401, "message": "Authentication required", "message_zh": "未提供认证信息"},
{"code": "AUTH_003", "status": 403, "message": "Insufficient permissions", "message_zh": "权限不足"}
]
}
}
```
### Phase 4: Task Decomposition
**Task Hierarchy**:
```
Level 1: Infrastructure Tasks (Parallel)
├─ IMPL-001: Generate main OpenAPI spec file (swagger.yaml)
├─ IMPL-002: Generate global security config and auth documentation
└─ IMPL-003: Generate unified error code specification
Level 2: Module Documentation Tasks (Parallel, by business module)
├─ IMPL-004: Users module API documentation
├─ IMPL-005: Auth module API documentation
├─ IMPL-006: Business module N API documentation
└─ ...
Level 3: Aggregation Tasks (Depends on Level 1-2)
├─ IMPL-N+1: Generate API overview and navigation
└─ IMPL-N+2: Generate version history and changelog
Level 4: Validation Tasks (Depends on Level 1-3)
├─ IMPL-N+3: API endpoint validation tests
└─ IMPL-N+4: Boundary condition tests
```
**Grouping Strategy**:
1. Group by business module (users, orders, products, etc.)
2. Maximum 10 endpoints per task
3. Large modules (>10 endpoints) split by submodules
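**Illustrative sketch** (field names follow the `swagger-planning-data.json` example above; `MAX_ENDPOINTS`, the starting task number, and the `-partN` naming for split modules are illustrative choices, not part of the workflow API):
```javascript
// Sketch: derive Level 2 task assignments from by_module statistics.
const MAX_ENDPOINTS = 10;

function groupModules(byModule, firstTaskNumber = 4) {
  const assignments = [];
  let n = firstTaskNumber;
  for (const [module, count] of Object.entries(byModule)) {
    const chunks = Math.ceil(count / MAX_ENDPOINTS); // split large modules
    for (let i = 0; i < chunks; i++) {
      assignments.push({
        task_id: `IMPL-${String(n++).padStart(3, '0')}`,
        level: 2,
        type: 'module-doc',
        module: chunks > 1 ? `${module}-part${i + 1}` : module,
        endpoint_count: Math.min(MAX_ENDPOINTS, count - i * MAX_ENDPOINTS)
      });
    }
  }
  return assignments;
}
// groupModules({ users: 12, auth: 8 })
// → IMPL-004 users-part1 (10), IMPL-005 users-part2 (2), IMPL-006 auth (8)
```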
**Commands**:
```bash
# 1. Count endpoints by module
bash(cat ${session_dir}/.process/swagger-planning-data.json | jq '.statistics.by_module')
# 2. Calculate task groupings
bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_structure.modules[] | "\(.name):\(.endpoints | length)"')
```
**Data Processing**: Use **Edit tool** to update `swagger-planning-data.json` with task groups:
```json
{
"task_groups": {
"level1_count": 3,
"level2_count": 5,
"total_count": 12,
"assignments": [
{"task_id": "IMPL-001", "level": 1, "type": "openapi-spec", "title": "Generate OpenAPI main spec file"},
{"task_id": "IMPL-002", "level": 1, "type": "security", "title": "Generate global security config"},
{"task_id": "IMPL-003", "level": 1, "type": "error-codes", "title": "Generate error code specification"},
{"task_id": "IMPL-004", "level": 2, "type": "module-doc", "module": "users", "endpoint_count": 12},
{"task_id": "IMPL-005", "level": 2, "type": "module-doc", "module": "auth", "endpoint_count": 8}
]
}
}
```
### Phase 5: Generate Task JSONs
**Generation Process**:
1. Read configuration values from workflow-session.json
2. Read task groups from swagger-planning-data.json
3. Generate Level 1 tasks (infrastructure)
4. Generate Level 2 tasks (by module)
5. Generate Level 3-4 tasks (aggregation and validation)
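**Illustrative sketch** of the generation loop (Node.js; the per-level `buildTask` template builder is a hypothetical helper, and the file layout follows the Session Structure section below):
```javascript
// Sketch: emit one task JSON per assignment from task_groups.
const fs = require('fs');
const path = require('path');

function generateTaskJsons(sessionDir, buildTask) {
  const planning = JSON.parse(
    fs.readFileSync(path.join(sessionDir, '.process/swagger-planning-data.json'), 'utf8')
  );
  for (const assignment of planning.task_groups.assignments) {
    const task = buildTask(assignment); // fills the level-specific template below
    fs.writeFileSync(
      path.join(sessionDir, '.task', `${assignment.task_id}.json`),
      JSON.stringify(task, null, 2)
    );
  }
}
```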
## Task Templates
### Level 1-1: OpenAPI Main Spec File
```json
{
"id": "IMPL-001",
"title": "Generate OpenAPI main specification file",
"status": "pending",
"meta": {
"type": "swagger-openapi-spec",
"agent": "@doc-generator",
"tool": "gemini",
"priority": "critical"
},
"context": {
"requirements": [
"Generate OpenAPI 3.0.3 compliant swagger.yaml",
"Include complete info, servers, tags, paths, components definitions",
"Follow RESTful design standards, use {lang} for descriptions"
],
"precomputed_data": {
"planning_data": "${session_dir}/.process/swagger-planning-data.json"
}
},
"flow_control": {
"pre_analysis": [
{
"step": "load_analysis_data",
"action": "Load API analysis data",
"commands": [
"bash(cat ${session_dir}/.process/swagger-planning-data.json)"
],
"output_to": "api_analysis"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate OpenAPI spec file",
"description": "Create complete swagger.yaml specification file",
"cli_prompt": "PURPOSE: Generate OpenAPI 3.0.3 specification file from analyzed API structure\nTASK:\n• Define openapi version: 3.0.3\n• Define info: title, description, version, contact, license\n• Define servers: development, staging, production environments\n• Define tags: organized by business modules\n• Define paths: all API endpoints with complete specifications\n• Define components: schemas, securitySchemes, parameters, responses\nMODE: write\nCONTEXT: @[api_analysis]\nEXPECTED: Complete swagger.yaml file following OpenAPI 3.0.3 specification\nCONSTRAINTS: Use {lang} for all descriptions | Strict RESTful standards\n--rule documentation-swagger-api",
"output": "swagger.yaml"
}
],
"target_files": [
".workflow/docs/${project_name}/api/swagger.yaml"
]
}
}
```
### Level 1-2: Global Security Configuration
```json
{
"id": "IMPL-002",
"title": "Generate global security configuration and authentication guide",
"status": "pending",
"meta": {
"type": "swagger-security",
"agent": "@doc-generator",
"tool": "gemini"
},
"context": {
"requirements": [
"Document Authorization header format in detail",
"Describe token acquisition, refresh, and expiration mechanisms",
"List permission requirements for each endpoint"
]
},
"flow_control": {
"pre_analysis": [
{
"step": "analyze_auth",
"command": "bash(rg -n 'auth|guard|jwt|bearer' -i --type ts -g '!*.test.*' 2>/dev/null | head -50)",
"output_to": "auth_patterns"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate authentication documentation",
"cli_prompt": "PURPOSE: Generate comprehensive authentication documentation for API security\nTASK:\n• Document authentication mechanism: JWT Bearer Token\n• Explain header format: Authorization: Bearer <token>\n• Describe token lifecycle: acquisition, refresh, expiration handling\n• Define permission levels: public, user, admin, super_admin\n• Document authentication failure responses: 401/403 error handling\nMODE: write\nCONTEXT: @[auth_patterns] @src/**/auth/**/* @src/**/guard/**/*\nEXPECTED: Complete authentication guide in {lang}\nCONSTRAINTS: Include code examples | Clear step-by-step instructions\n--rule development-feature",
"output": "{auth_doc_name}"
}
],
"target_files": [
".workflow/docs/${project_name}/api/{overview_dir}/{auth_doc_name}"
]
}
}
```
### Level 1-3: Unified Error Code Specification
```json
{
"id": "IMPL-003",
"title": "Generate unified error code specification",
"status": "pending",
"meta": {
"type": "swagger-error-codes",
"agent": "@doc-generator",
"tool": "gemini"
},
"context": {
"requirements": [
"Define unified error response format",
"Create categorized error code system (auth, business, system)",
"Provide detailed description and examples for each error code"
]
},
"flow_control": {
"implementation_approach": [
{
"step": 1,
"title": "Generate error code specification document",
"cli_prompt": "PURPOSE: Generate comprehensive error code specification for consistent API error handling\nTASK:\n• Define error response format: {code, message, details, timestamp}\n• Document authentication errors (AUTH_xxx): 401/403 series\n• Document parameter errors (PARAM_xxx): 400 series\n• Document business errors (BIZ_xxx): business logic errors\n• Document system errors (SYS_xxx): 500 series\n• For each error code: HTTP status, error message, possible causes, resolution suggestions\nMODE: write\nCONTEXT: @src/**/*.exception.ts @src/**/*.filter.ts\nEXPECTED: Complete error code specification in {lang} with tables and examples\nCONSTRAINTS: Include response examples | Clear categorization\n--rule development-feature",
"output": "{error_doc_name}"
}
],
"target_files": [
".workflow/docs/${project_name}/api/{overview_dir}/{error_doc_name}"
]
}
}
```
### Level 2: Module API Documentation (Template)
```json
{
"id": "IMPL-${module_task_id}",
"title": "Generate ${module_name} API documentation",
"status": "pending",
"depends_on": ["IMPL-001", "IMPL-002", "IMPL-003"],
"meta": {
"type": "swagger-module-doc",
"agent": "@doc-generator",
"tool": "gemini",
"module": "${module_name}",
"endpoint_count": "${endpoint_count}"
},
"context": {
"requirements": [
"Complete documentation for all endpoints in this module",
"Each endpoint: description, method, URL, parameters, responses",
"Include success and failure response examples",
"Mark API version and last update time"
],
"focus_paths": ["${module_source_paths}"]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_module_endpoints",
"action": "Load module endpoint information",
"commands": [
"bash(cat ${session_dir}/.process/swagger-planning-data.json | jq '.api_structure.modules[] | select(.name == \"${module_name}\")')"
],
"output_to": "module_endpoints"
},
{
"step": "read_source_files",
"action": "Read module source files",
"commands": [
"bash(cat ${module_source_files})"
],
"output_to": "source_code"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate module API documentation",
"description": "Generate complete API documentation for ${module_name}",
"cli_prompt": "PURPOSE: Generate complete RESTful API documentation for ${module_name} module\nTASK:\n• Create module overview: purpose, use cases, prerequisites\n• Generate endpoint index: grouped by functionality\n• For each endpoint document:\n - Functional description: purpose and business context\n - Request method: GET/POST/PUT/DELETE\n - URL path: complete API path\n - Request headers: Authorization and other required headers\n - Path parameters: {id} and other path variables\n - Query parameters: pagination, filters, etc.\n - Request body: JSON Schema format\n - Response body: success and error responses\n - Field description table: type, required, example, description\n• Add usage examples: cURL, JavaScript, Python\n• Add version info: v1.0.0, last updated date\nMODE: write\nCONTEXT: @[module_endpoints] @[source_code]\nEXPECTED: Complete module API documentation in {lang} with all endpoints fully documented\nCONSTRAINTS: RESTful standards | Include all response codes\n--rule documentation-swagger-api",
"output": "${module_doc_name}"
}
],
"target_files": [
".workflow/docs/${project_name}/api/${module_dir}/${module_doc_name}"
]
}
}
```
### Level 3: API Overview and Navigation
```json
{
"id": "IMPL-${overview_task_id}",
"title": "Generate API overview and navigation",
"status": "pending",
"depends_on": ["IMPL-001", "...", "IMPL-${last_module_task_id}"],
"meta": {
"type": "swagger-overview",
"agent": "@doc-generator",
"tool": "gemini"
},
"flow_control": {
"pre_analysis": [
{
"step": "load_all_docs",
"command": "bash(find .workflow/docs/${project_name}/api -type f -name '*.md' ! -path '*/{overview_dir}/*' | xargs cat)",
"output_to": "all_module_docs"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate API overview",
"cli_prompt": "PURPOSE: Generate API overview document with navigation and quick start guide\nTASK:\n• Create introduction: system features, tech stack, version\n• Write quick start guide: authentication, first request example\n• Build module navigation: categorized links to all modules\n• Document environment configuration: development, staging, production\n• List SDKs and tools: client libraries, Postman collection\nMODE: write\nCONTEXT: @[all_module_docs] @.workflow/docs/${project_name}/api/swagger.yaml\nEXPECTED: Complete API overview in {lang} with navigation links\nCONSTRAINTS: Clear structure | Quick start focus\n--rule development-feature",
"output": "README.md"
}
],
"target_files": [
".workflow/docs/${project_name}/api/{overview_dir}/README.md"
]
}
}
```
### Level 4: Validation Tasks
```json
{
"id": "IMPL-${test_task_id}",
"title": "API endpoint validation tests",
"status": "pending",
"depends_on": ["IMPL-${overview_task_id}"],
"meta": {
"type": "swagger-validation",
"agent": "@test-fix-agent",
"tool": "codex"
},
"context": {
"requirements": [
"Validate accessibility of all endpoints",
"Test various boundary conditions",
"Verify exception handling"
]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_swagger_spec",
"command": "bash(cat .workflow/docs/${project_name}/api/swagger.yaml)",
"output_to": "swagger_spec"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate test report",
"cli_prompt": "PURPOSE: Generate comprehensive API test validation report\nTASK:\n• Document test environment configuration\n• Calculate endpoint coverage statistics\n• Report test results: pass/fail counts\n• Document boundary tests: parameter limits, null values, special characters\n• Document exception tests: auth failures, permission denied, resource not found\n• List issues found with recommendations\nMODE: write\nCONTEXT: @[swagger_spec]\nEXPECTED: Complete test report in {lang} with detailed results\nCONSTRAINTS: Include test cases | Clear pass/fail status\n--rule development-tests",
"output": "{test_doc_name}"
}
],
"target_files": [
".workflow/docs/${project_name}/api/{test_dir}/{test_doc_name}"
]
}
}
```
## Language-Specific Directory Mapping
| Component | --lang zh | --lang en |
|-----------|-----------|-----------|
| Overview dir | 概述 | overview |
| Auth doc | 认证说明.md | authentication.md |
| Error doc | 错误码规范.md | error-codes.md |
| Changelog | 版本历史.md | changelog.md |
| Users module | 用户模块 | users |
| Orders module | 订单模块 | orders |
| Products module | 商品模块 | products |
| Test dir | 测试报告 | test-reports |
| API test doc | 接口测试.md | api-tests.md |
| Boundary test doc | 边界测试.md | boundary-tests.md |
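**Illustrative sketch** of how a generator might encode this table (key names mirror the `{placeholder}` variables used in the task templates; the exact shape is an assumption):
```javascript
// Sketch: language-specific name lookup mirroring the table above
// (`lang` comes from the workflow's --lang option).
const LANG_NAMES = {
  zh: { overview_dir: '概述', auth_doc_name: '认证说明.md', error_doc_name: '错误码规范.md',
        changelog_name: '版本历史.md', test_dir: '测试报告', test_doc_name: '接口测试.md' },
  en: { overview_dir: 'overview', auth_doc_name: 'authentication.md', error_doc_name: 'error-codes.md',
        changelog_name: 'changelog.md', test_dir: 'test-reports', test_doc_name: 'api-tests.md' }
};
const names = LANG_NAMES[lang] || LANG_NAMES.en;
```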
## API Documentation Template
### Single Endpoint Format
Each endpoint must include:
````markdown
### Get User Details
**Description**: Retrieve detailed user information by ID, including profile and permissions.
**Endpoint Info**:
| Property | Value |
|----------|-------|
| Method | GET |
| URL | `/api/v1/users/{id}` |
| Version | v1.0.0 |
| Updated | 2025-01-01 |
| Auth | Bearer Token |
| Permission | user / admin |
**Request Headers**:
| Field | Type | Required | Example | Description |
|-------|------|----------|---------|-------------|
| Authorization | string | Yes | `Bearer eyJhbGc...` | JWT Token |
| Content-Type | string | No | `application/json` | Request content type |
**Path Parameters**:
| Field | Type | Required | Example | Description |
|-------|------|----------|---------|-------------|
| id | string | Yes | `usr_123456` | Unique user identifier |
**Query Parameters**:
| Field | Type | Required | Default | Example | Description |
|-------|------|----------|---------|---------|-------------|
| include | string | No | - | `roles,permissions` | Related data to include |
**Success Response** (200 OK):
```json
{
"code": 0,
"message": "success",
"data": {
"id": "usr_123456",
"email": "user@example.com",
"name": "John Doe",
"status": "active",
"roles": ["user"],
"created_at": "2025-01-01T00:00:00Z",
"updated_at": "2025-01-01T00:00:00Z"
},
"timestamp": "2025-01-01T12:00:00Z"
}
```
**Response Fields**:
| Field | Type | Description |
|-------|------|-------------|
| code | integer | Business status code, 0 = success |
| message | string | Response message |
| data.id | string | Unique user identifier |
| data.email | string | User email address |
| data.name | string | User display name |
| data.status | string | User status: active/inactive/suspended |
| data.roles | array | User role list |
| data.created_at | string | Creation timestamp (ISO 8601) |
| data.updated_at | string | Last update timestamp (ISO 8601) |
**Error Responses**:
| Status | Code | Message | Possible Cause |
|--------|------|---------|----------------|
| 401 | AUTH_001 | Invalid or expired token | Token format error or expired |
| 403 | AUTH_003 | Insufficient permissions | No access to this user info |
| 404 | USER_001 | User not found | User ID doesn't exist or deleted |
**Examples**:
```bash
# cURL
curl -X GET "https://api.example.com/api/v1/users/usr_123456" \
-H "Authorization: Bearer eyJhbGc..." \
-H "Content-Type: application/json"
```
```javascript
// JavaScript (fetch)
const response = await fetch('https://api.example.com/api/v1/users/usr_123456', {
method: 'GET',
headers: {
'Authorization': 'Bearer eyJhbGc...',
'Content-Type': 'application/json'
}
});
const data = await response.json();
```
````
## Session Structure
```
.workflow/active/
└── WFS-swagger-{timestamp}/
├── workflow-session.json
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
│ └── swagger-planning-data.json
└── .task/
├── IMPL-001.json # OpenAPI spec
├── IMPL-002.json # Security config
├── IMPL-003.json # Error codes
├── IMPL-004.json # Module 1 API
├── ...
├── IMPL-N+1.json # API overview
└── IMPL-N+2.json # Validation tests
```
## Execution Commands
```bash
# Execute entire workflow
/workflow:execute
# Specify session
/workflow:execute --resume-session="WFS-swagger-yyyymmdd-hhmmss"
# Single task execution
/task:execute IMPL-001
```
## Related Commands
- `/workflow:execute` - Execute documentation tasks
- `/workflow:status` - View task progress
- `/workflow:session:complete` - Mark session complete
- `/memory:docs` - General documentation workflow


@@ -1,310 +0,0 @@
---
name: tech-research-rules
description: "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)"
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Tech Stack Rules Generator
## Overview
**Purpose**: Generate multi-layered, path-conditional rules that Claude Code automatically loads based on file context.
**Output Structure**:
```
.claude/rules/tech/{tech-stack}/
├── core.md # paths: **/*.{ext} - Core principles
├── patterns.md # paths: src/**/*.{ext} - Implementation patterns
├── testing.md # paths: **/*.{test,spec}.{ext} - Testing rules
├── config.md # paths: *.config.* - Configuration rules
├── api.md # paths: **/api/**/* - API rules (backend only)
├── components.md # paths: **/components/**/* - Component rules (frontend only)
└── metadata.json # Generation metadata
```
**Templates Location**: `~/.claude/workflows/cli-templates/prompts/rules/`
---
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization
2. **Path-Conditional Output**: Every rule file includes `paths` frontmatter
3. **Template-Driven**: Agent reads templates before generating content
4. **Agent Produces Files**: Agent writes all rule files directly
5. **No Manual Loading**: Rules auto-activate when Claude works with matching files
---
## 3-Phase Execution
### Phase 1: Prepare Context & Detect Tech Stack
**Goal**: Detect input mode, extract tech stack info, determine file extensions
**Input Mode Detection**:
```bash
input="$1"
if [[ "$input" == WFS-* ]]; then
MODE="session"
SESSION_ID="$input"
# Read workflow-session.json to extract tech stack
else
MODE="direct"
TECH_STACK_NAME="$input"
fi
```
**Tech Stack Analysis**:
```javascript
// Decompose composite tech stacks
// "typescript-react-nextjs" → ["typescript", "react", "nextjs"]
const TECH_EXTENSIONS = {
"typescript": "{ts,tsx}",
"javascript": "{js,jsx}",
"python": "py",
"rust": "rs",
"go": "go",
"java": "java",
"csharp": "cs",
"ruby": "rb",
"php": "php"
};
const FRAMEWORK_TYPE = {
"react": "frontend",
"vue": "frontend",
"angular": "frontend",
"nextjs": "fullstack",
"nuxt": "fullstack",
"fastapi": "backend",
"express": "backend",
"django": "backend",
"rails": "backend"
};
```
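**Illustrative sketch** tying the two maps together (the helper name and the first-match precedence for framework type are illustrative choices):
```javascript
// Sketch: decompose a composite stack name using the maps above.
function analyzeTechStack(name) {
  const components = name.toLowerCase().split('-');
  const primaryLang = components.find(c => TECH_EXTENSIONS[c]) || components[0];
  // First component with a known framework type wins; default to "library"
  const frameworkType = components.map(c => FRAMEWORK_TYPE[c]).find(Boolean) || 'library';
  return {
    COMPONENTS: components,
    PRIMARY_LANG: primaryLang,
    FILE_EXT: TECH_EXTENSIONS[primaryLang] || '*',
    FRAMEWORK_TYPE: frameworkType
  };
}
// analyzeTechStack("typescript-react-nextjs")
// → { COMPONENTS: ["typescript","react","nextjs"], PRIMARY_LANG: "typescript",
//     FILE_EXT: "{ts,tsx}", FRAMEWORK_TYPE: "frontend" }
```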
**Check Existing Rules**:
```bash
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
rules_dir=".claude/rules/tech/${normalized_name}"
existing_count=$(find "${rules_dir}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Skip Decision**:
- If `existing_count > 0` AND no `--regenerate` → `SKIP_GENERATION = true`
- If `--regenerate` → Delete existing and regenerate
**Output Variables**:
- `TECH_STACK_NAME`: Normalized name
- `PRIMARY_LANG`: Primary language
- `FILE_EXT`: File extension pattern
- `FRAMEWORK_TYPE`: frontend | backend | fullstack | library
- `COMPONENTS`: Array of tech components
- `SKIP_GENERATION`: Boolean
**TodoWrite**: Mark phase 1 completed
---
### Phase 2: Agent Produces Path-Conditional Rules
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Delegate to agent for Exa research and rule file generation
**Template Files**:
```
~/.claude/workflows/cli-templates/prompts/rules/
├── tech-rules-agent-prompt.txt # Agent instructions
├── rule-core.txt # Core principles template
├── rule-patterns.txt # Implementation patterns template
├── rule-testing.txt # Testing rules template
├── rule-config.txt # Configuration rules template
├── rule-api.txt # API rules template (backend)
└── rule-components.txt # Component rules template (frontend)
```
**Agent Task**:
```javascript
Task({
subagent_type: "general-purpose",
description: `Generate tech stack rules: ${TECH_STACK_NAME}`,
prompt: `
You are generating path-conditional rules for Claude Code.
## Context
- Tech Stack: ${TECH_STACK_NAME}
- Primary Language: ${PRIMARY_LANG}
- File Extensions: ${FILE_EXT}
- Framework Type: ${FRAMEWORK_TYPE}
- Components: ${JSON.stringify(COMPONENTS)}
- Output Directory: .claude/rules/tech/${TECH_STACK_NAME}/
## Instructions
Read the agent prompt template for detailed instructions.
Use --rule rules-tech-rules-agent-prompt to load the template automatically.
## Execution Steps
1. Execute Exa research queries (see agent prompt)
2. Read each rule template
3. Generate rule files following template structure
4. Write files to output directory
5. Write metadata.json
6. Report completion
## Variable Substitutions
Replace in templates:
- {TECH_STACK_NAME} → ${TECH_STACK_NAME}
- {PRIMARY_LANG} → ${PRIMARY_LANG}
- {FILE_EXT} → ${FILE_EXT}
- {FRAMEWORK_TYPE} → ${FRAMEWORK_TYPE}
`
})
```
**Completion Criteria**:
- 4-6 rule files written with proper `paths` frontmatter
- metadata.json written
- Agent reports files created
**TodoWrite**: Mark phase 2 completed
---
### Phase 3: Verify & Report
**Goal**: Verify generated files and provide usage summary
**Steps**:
1. **Verify Files**:
```bash
find ".claude/rules/tech/${TECH_STACK_NAME}" -name "*.md" -type f
```
2. **Validate Frontmatter**:
```bash
head -5 ".claude/rules/tech/${TECH_STACK_NAME}/core.md"
```
3. **Read Metadata**:
```javascript
Read(`.claude/rules/tech/${TECH_STACK_NAME}/metadata.json`)
```
4. **Generate Summary Report**:
```
Tech Stack Rules Generated
Tech Stack: {TECH_STACK_NAME}
Location: .claude/rules/tech/{TECH_STACK_NAME}/
Files Created:
├── core.md → paths: **/*.{ext}
├── patterns.md → paths: src/**/*.{ext}
├── testing.md → paths: **/*.{test,spec}.{ext}
├── config.md → paths: *.config.*
├── api.md → paths: **/api/**/* (if backend)
└── components.md → paths: **/components/**/* (if frontend)
Auto-Loading:
- Rules apply automatically when editing matching files
- No manual loading required
Example Activation:
- Edit src/components/Button.tsx → core.md + patterns.md + components.md
- Edit tests/api.test.ts → core.md + testing.md
- Edit package.json → config.md
```
**TodoWrite**: Mark phase 3 completed
---
## Path Pattern Reference
| Pattern | Matches |
|---------|---------|
| `**/*.ts` | All .ts files |
| `src/**/*` | All files under src/ |
| `*.config.*` | Config files in root |
| `**/*.{ts,tsx}` | .ts and .tsx files |
**Stack-Specific Patterns**:
| Tech Stack | Core Pattern | Test Pattern |
|------------|--------------|--------------|
| TypeScript | `**/*.{ts,tsx}` | `**/*.{test,spec}.{ts,tsx}` |
| Python | `**/*.py` | `**/test_*.py, **/*_test.py` |
| Rust | `**/*.rs` | `**/tests/**/*.rs` |
| Go | `**/*.go` | `**/*_test.go` |
---
## Parameters
```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate] [--tool <gemini|qwen>]
```
**Arguments**:
- **session-id**: `WFS-*` format - Extract from workflow session
- **tech-stack-name**: Direct input - `"typescript"`, `"typescript-react"`
- **--regenerate**: Force regenerate existing rules
- **--tool**: CLI tool for research (`gemini` | `qwen`)
---
## Examples
### Single Language
```bash
/memory:tech-research "typescript"
```
**Output**: `.claude/rules/tech/typescript/` with 4 rule files
### Frontend Stack
```bash
/memory:tech-research "typescript-react"
```
**Output**: `.claude/rules/tech/typescript-react/` with 5 rule files (includes components.md)
### Backend Stack
```bash
/memory:tech-research "python-fastapi"
```
**Output**: `.claude/rules/tech/python-fastapi/` with 5 rule files (includes api.md)
### From Session
```bash
/memory:tech-research WFS-user-auth-20251104
```
**Workflow**: Extract tech stack from session → Generate rules
---
## Comparison: Rules vs SKILL
| Aspect | SKILL Memory | Rules |
|--------|--------------|-------|
| Loading | Manual: `Skill("tech")` | Automatic by path |
| Scope | All files when loaded | Only matching files |
| Granularity | Monolithic packages | Per-file-type |
| Context | Full package | Only relevant rules |
**When to Use**:
- **Rules**: Tech stack conventions per file type
- **SKILL**: Reference docs, APIs, examples for manual lookup


@@ -0,0 +1,332 @@
---
name: tips
description: Quick note-taking command to capture ideas, snippets, reminders, and insights for later reference
argument-hint: "<note content> [--tag <tag1,tag2>] [--context <context>]"
allowed-tools: mcp__ccw-tools__core_memory(*), Read(*)
examples:
- /memory:tips "Remember to use Redis for rate limiting"
- /memory:tips "Auth pattern: JWT with refresh tokens" --tag architecture,auth
- /memory:tips "Bug: memory leak in WebSocket handler after 24h" --context websocket-service
- /memory:tips "Performance: lazy loading reduced bundle by 40%" --tag performance
---
# Memory Tips Command (/memory:tips)
## 1. Overview
The `memory:tips` command provides **quick note-taking** for capturing:
- Quick ideas and insights
- Code snippets and patterns
- Reminders and follow-ups
- Bug notes and debugging hints
- Performance observations
- Architecture decisions
- Library/tool recommendations
**Core Philosophy**:
- **Speed First**: Minimal friction for capturing thoughts
- **Searchable**: Tagged for easy retrieval
- **Context-Aware**: Optional context linking
- **Lightweight**: No complex session analysis
## 2. Parameters
- `<note content>` (Required): The tip/note content to save
- `--tag <tags>` (Optional): Comma-separated tags for categorization
- `--context <context>` (Optional): Related context (file, module, feature)
**Examples**:
```bash
/memory:tips "Use Zod for runtime validation - better DX than class-validator"
/memory:tips "Redis connection pool: max 10, min 2" --tag config,redis
/memory:tips "Fix needed: race condition in payment processor" --tag bug,payment --context src/payments
```
## 3. Structured Output Format
```markdown
## Tip ID
TIP-YYYYMMDD-HHMMSS
## Timestamp
YYYY-MM-DD HH:MM:SS
## Project Root
[Absolute path to project root, e.g., D:\Claude_dms3]
## Content
[The tip/note content exactly as provided]
## Tags
[Comma-separated tags, or (none)]
## Context
[Optional context linking - file, module, or feature reference]
## Session Link
[WFS-ID if workflow session active, otherwise (none)]
## Auto-Detected Context
[Files/topics from current conversation if relevant]
```
## 4. Field Definitions
| Field | Purpose | Example |
|-------|---------|---------|
| **Tip ID** | Unique identifier with timestamp | TIP-20260128-143052 |
| **Timestamp** | When tip was created | 2026-01-28 14:30:52 |
| **Project Root** | Current project path | D:\Claude_dms3 |
| **Content** | The actual tip/note | "Use Redis for rate limiting" |
| **Tags** | Categorization labels | architecture, auth, performance |
| **Context** | Related code/feature | src/auth/**, payment-module |
| **Session Link** | Link to workflow session | WFS-auth-20260128 |
| **Auto-Detected Context** | Files from conversation | src/api/handler.ts |
## 5. Execution Flow
### Step 1: Parse Arguments
```javascript
const parseTipsCommand = (input) => {
  // Extract note content: quoted string, or unquoted text up to the first flag
  const contentMatch = input.match(/^"([^"]+)"|^(.+?)(?=\s+--|$)/);
  const content = contentMatch ? (contentMatch[1] || contentMatch[2] || '').trim() : '';
  // Extract tags ([^\s]+ rather than [^\s-]+ so hyphenated tags survive)
  const tagsMatch = input.match(/--tag\s+([^\s]+)/);
  const tags = tagsMatch ? tagsMatch[1].split(',').map(t => t.trim()) : [];
  // Extract context ([^\s]+ so hyphenated paths like src/payment-service survive)
  const contextMatch = input.match(/--context\s+([^\s]+)/);
  const context = contextMatch ? contextMatch[1] : '';
return { content, tags, context };
};
```
### Step 2: Gather Context
```javascript
const gatherTipContext = async () => {
// Get project root
const projectRoot = process.cwd(); // or detect from environment
// Get current session if active
const manifest = await mcp__ccw-tools__session_manager({
operation: "list",
location: "active"
});
const sessionId = manifest.sessions?.[0]?.id || null;
// Auto-detect files from recent conversation
const recentFiles = extractRecentFilesFromConversation(); // Last 5 messages
return {
projectRoot,
sessionId,
autoDetectedContext: recentFiles
};
};
```
### Step 3: Generate Structured Text
```javascript
const generateTipText = (parsed, context) => {
const timestamp = new Date().toISOString().replace('T', ' ').slice(0, 19);
const tipId = `TIP-${new Date().toISOString().slice(0,10).replace(/-/g, '')}-${new Date().toTimeString().slice(0,8).replace(/:/g, '')}`;
return `## Tip ID
${tipId}
## Timestamp
${timestamp}
## Project Root
${context.projectRoot}
## Content
${parsed.content}
## Tags
${parsed.tags.length > 0 ? parsed.tags.join(', ') : '(none)'}
## Context
${parsed.context || '(none)'}
## Session Link
${context.sessionId || '(none)'}
## Auto-Detected Context
${context.autoDetectedContext.length > 0
? context.autoDetectedContext.map(f => `- ${f}`).join('\n')
: '(none)'}`;
};
```
### Step 4: Save to Core Memory
```javascript
mcp__ccw-tools__core_memory({
operation: "import",
text: structuredText
})
```
**Response Format**:
```json
{
"operation": "import",
"id": "CMEM-YYYYMMDD-HHMMSS",
"message": "Created memory: CMEM-YYYYMMDD-HHMMSS"
}
```
### Step 5: Confirm to User
```
✓ Tip saved successfully
ID: CMEM-YYYYMMDD-HHMMSS
Tags: architecture, auth
Context: src/auth/**
To retrieve: /memory:search "auth patterns"
Or via MCP: core_memory(operation="search", query="auth")
```
## 6. Tag Categories (Suggested)
**Technical**:
- `architecture` - Design decisions and patterns
- `performance` - Optimization insights
- `security` - Security considerations
- `bug` - Bug notes and fixes
- `config` - Configuration settings
- `api` - API design patterns
**Development**:
- `testing` - Test strategies and patterns
- `debugging` - Debugging techniques
- `refactoring` - Refactoring notes
- `documentation` - Doc improvements
**Domain Specific**:
- `auth` - Authentication/authorization
- `database` - Database patterns
- `frontend` - UI/UX patterns
- `backend` - Backend logic
- `devops` - Infrastructure and deployment
**Organizational**:
- `reminder` - Follow-up items
- `research` - Research findings
- `idea` - Feature ideas
- `review` - Code review notes
## 7. Search Integration
Tips can be retrieved using:
```bash
# Via command (if /memory:search exists)
/memory:search "rate limiting"
# Via MCP tool
mcp__ccw-tools__core_memory({
operation: "search",
query: "rate limiting",
source_type: "core_memory",
top_k: 10
})
# Via CLI
ccw core-memory search --query "rate limiting" --top-k 10
```
## 8. Quality Checklist
Before saving:
- [ ] Content is clear and actionable
- [ ] Tags are relevant and consistent
- [ ] Context provides enough reference
- [ ] Auto-detected context is accurate
- [ ] Project root is absolute path
- [ ] Timestamp is properly formatted
## 9. Best Practices
### Good Tips Examples
**Specific and Actionable**:
```
"Use connection pooling for Redis: { max: 10, min: 2, acquireTimeoutMillis: 30000 }"
--tag config,redis
```
**With Context**:
```
"Auth middleware must validate both access and refresh tokens"
--tag security,auth --context src/middleware/auth.ts
```
**Problem + Solution**:
```
"Memory leak fixed by unsubscribing event listeners in componentWillUnmount"
--tag bug,react --context src/components/Chat.tsx
```
### Poor Tips Examples
**Too Vague**:
```
"Fix the bug" --tag bug
```
**Too Long** (use /memory:compact instead):
```
"Here's the complete implementation plan for the entire auth system... [3 paragraphs]"
```
**No Context**:
```
"Remember to update this later"
```
## 10. Use Cases
### During Development
```bash
/memory:tips "JWT secret must be 256-bit minimum" --tag security,auth
/memory:tips "Use debounce (300ms) for search input" --tag performance,ux
```
### After Bug Fixes
```bash
/memory:tips "Race condition in payment: lock with Redis SETNX" --tag bug,payment
```
### Code Review Insights
```bash
/memory:tips "Prefer early returns over nested ifs" --tag style,readability
```
### Architecture Decisions
```bash
/memory:tips "Chose PostgreSQL over MongoDB for ACID compliance" --tag architecture,database
```
### Library Recommendations
```bash
/memory:tips "Zod > Yup for TypeScript validation - better type inference" --tag library,typescript
```
## 11. Notes
- **Frequency**: Use liberally - capture all valuable insights
- **Retrieval**: Search by tags, content, or context
- **Lifecycle**: Tips persist across sessions
- **Organization**: Tags enable filtering and categorization
- **Integration**: Can reference tips in later workflows
- **Lightweight**: No complex session analysis required


@@ -1,517 +0,0 @@
---
name: workflow-skill-memory
description: Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)
argument-hint: "session <session-id> | all"
allowed-tools: Task(*), TodoWrite(*), Bash(*), Read(*), Write(*)
---
# Workflow SKILL Memory Generator
## Overview
Generate SKILL package from archived workflow sessions using agent-driven analysis. Supports single-session incremental updates or parallel processing of all sessions.
**Scope**: Only processes WFS-* workflow sessions. Other session types (e.g., doc sessions) are automatically ignored.
## Usage
```bash
/memory:workflow-skill-memory session WFS-<session-id> # Process single WFS session
/memory:workflow-skill-memory all # Process all WFS sessions in parallel
```
## Execution Modes
### Mode 1: Single Session (`session <session-id>`)
**Purpose**: Incremental update - process one archived session and merge into existing SKILL package
**Workflow**:
1. **Validate session**: Check if session exists in `.workflow/.archives/{session-id}/`
2. **Invoke agent**: Call `universal-executor` to analyze session and update SKILL documents
3. **Agent tasks**:
- Read session data from `.workflow/.archives/{session-id}/`
- Extract lessons, conflicts, and outcomes
- Use Gemini for intelligent aggregation (optional)
- Update or create SKILL documents using templates
- Regenerate SKILL.md index
**Command Example**:
```bash
/memory:workflow-skill-memory session WFS-user-auth
```
**Expected Output**:
```
Session WFS-user-auth processed
Updated:
- sessions-timeline.md (1 session added)
- lessons-learned.md (3 lessons merged)
- conflict-patterns.md (1 conflict added)
- SKILL.md (index regenerated)
```
---
### Mode 2: All Sessions (`all`)
**Purpose**: Full regeneration - process all archived sessions in parallel for complete SKILL package
**Workflow**:
1. **List sessions**: Read manifest.json to get all archived session IDs
2. **Parallel invocation**: Launch multiple `universal-executor` agents in parallel (one per session)
3. **Agent coordination**:
- Each agent processes one session independently
- Agents use Gemini for analysis
- Agents collect data into JSON (no direct file writes)
- Final aggregator agent merges results and generates SKILL documents
**Command Example**:
```bash
/memory:workflow-skill-memory all
```
**Expected Output**:
```
All sessions processed in parallel
Sessions: 8 total
Updated:
- sessions-timeline.md (8 sessions)
- lessons-learned.md (24 lessons aggregated)
- conflict-patterns.md (12 conflicts documented)
- SKILL.md (index regenerated)
```
---
## Implementation Flow
### Phase 1: Validation and Setup
**Step 1.1: Parse Command Arguments**
Extract mode and session ID:
```javascript
let mode, session_id;
if (args === "all") {
  mode = "all";
} else if (args.startsWith("session ")) {
  mode = "session";
  session_id = args.replace("session ", "").trim();
} else {
  throw new Error("Invalid arguments. Usage: session <session-id> | all");
}
```
**Step 1.2: Validate Archive Directory**
```bash
bash(test -d .workflow/.archives && echo "exists" || echo "missing")
```
If missing, report error and exit.
**Step 1.3: Mode-Specific Validation**
**Single Session Mode**:
```bash
# Validate session ID format (must start with WFS-)
if [[ ! "$session_id" =~ ^WFS- ]]; then
ERROR = "Invalid session ID format. Only WFS-* sessions are supported"
EXIT
fi
# Check if session exists
bash(test -d .workflow/.archives/{session_id} && echo "exists" || echo "missing")
```
If missing, report error: "Session {session_id} not found in archives"
**All Sessions Mode**:
```bash
# Read manifest and filter only WFS- sessions
bash(cat .workflow/.archives/manifest.json | jq -r '.archives[].session_id | select(startswith("WFS-"))')
```
Store filtered session IDs in array. Ignore doc sessions and other non-WFS sessions.
**Step 1.4: TodoWrite Initialization**
**Single Session Mode**:
```javascript
TodoWrite({todos: [
{"content": "Validate session existence", "status": "completed", "activeForm": "Validating session"},
{"content": "Invoke agent to process session", "status": "in_progress", "activeForm": "Invoking agent"},
{"content": "Verify SKILL package updated", "status": "pending", "activeForm": "Verifying update"}
]})
```
**All Sessions Mode**:
```javascript
TodoWrite({todos: [
{"content": "Read manifest and list sessions", "status": "completed", "activeForm": "Reading manifest"},
{"content": "Invoke agents in parallel", "status": "in_progress", "activeForm": "Invoking agents"},
{"content": "Verify SKILL package regenerated", "status": "pending", "activeForm": "Verifying regeneration"}
]})
```
---
### Phase 2: Agent Invocation
#### Single Session Mode - Agent Task
Invoke `universal-executor` with session-specific task:
**Agent Prompt Structure**:
````
Task: Process Workflow Session for SKILL Package
Context:
- Session ID: {session_id}
- Session Path: .workflow/.archives/{session_id}/
- Mode: Incremental update
Objectives:
1. Read session data:
- workflow-session.json (metadata)
- IMPL_PLAN.md (implementation summary)
- TODO_LIST.md (if exists)
- manifest.json entry for lessons
2. Extract key information:
- Description, tags, metrics
- Lessons (successes, challenges, watch_patterns)
- Context package path (reference only)
- Key outcomes from IMPL_PLAN
3. Use Gemini for aggregation (optional):
Command pattern:
ccw cli -p "
PURPOSE: Extract lessons and conflicts from workflow session
TASK:
• Analyze IMPL_PLAN and lessons from manifest
• Identify success patterns and challenges
• Extract conflict patterns with resolutions
• Categorize by functional domain
MODE: analysis
CONTEXT: @IMPL_PLAN.md @workflow-session.json
EXPECTED: Structured lessons and conflicts in JSON format
RULES: Template reference from skill-aggregation.txt
" --tool gemini --mode analysis --cd .workflow/.archives/{session_id}
3.5. **Generate SKILL.md Description** (CRITICAL for auto-loading):
Read skill-index.txt template Section: "Description Field Generation"
Execute command to get project root:
```bash
git rev-parse --show-toplevel # Example output: /d/Claude_dms3
```
Apply description format:
```
Progressive workflow development history (located at {project_root}).
Load this SKILL when continuing development, analyzing past implementations,
or learning from workflow history, especially when no relevant context exists in memory.
```
**Validation**:
- [ ] Path uses forward slashes (not backslashes)
- [ ] All three use cases present
- [ ] Trigger optimization phrase included
- [ ] Path is absolute (starts with / or drive letter)
4. Read templates for formatting guidance:
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-sessions-timeline.txt
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-lessons-learned.txt
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-conflict-patterns.txt
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-index.txt
**CRITICAL**: From skill-index.txt, read these sections:
- "Description Field Generation" - Rules for generating description
- "Variable Substitution Guide" - All required variables
- "Generation Instructions" - Step-by-step generation process
- "Validation Checklist" - Final validation steps
5. Update SKILL documents:
- sessions-timeline.md: Append new session, update domain grouping
- lessons-learned.md: Merge lessons into categories, update frequencies
- conflict-patterns.md: Add conflicts, update recurring pattern frequencies
- SKILL.md: Regenerate index with updated counts
**For SKILL.md generation**:
- Follow "Generation Instructions" from skill-index.txt (Steps 1-7)
- Use git command for project_root: `git rev-parse --show-toplevel`
- Apply "Description Field Generation" rules
- Validate using "Validation Checklist"
- Increment version (patch level)
6. Return result JSON:
{
"status": "success",
"session_id": "{session_id}",
"updates": {
"sessions_added": 1,
"lessons_merged": count,
"conflicts_added": count
}
}
````
---
#### All Sessions Mode - Parallel Agent Tasks
**Step 2.1: Launch parallel session analyzers**
Invoke multiple agents in parallel (one message with multiple Task calls):
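**Illustrative sketch** (the loop shape and the `buildPerSessionPrompt` helper are assumptions; each `Task(...)` call carries the prompt template below):
```javascript
// Sketch: issue one Task call per WFS session so agents run concurrently.
for (const sessionId of wfsSessionIds) {
  Task({
    subagent_type: "universal-executor",
    description: `Extract session data: ${sessionId}`,
    prompt: buildPerSessionPrompt(sessionId) // fills the template below
  });
}
```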
**Per-Session Agent Prompt**:
```
Task: Extract Session Data for SKILL Package
Context:
- Session ID: {session_id}
- Mode: Parallel analysis (no direct file writes)
Objectives:
1. Read session data (same as single mode)
2. Extract key information (same as single mode)
3. Use Gemini for analysis (same as single mode)
4. Return structured data JSON:
{
"status": "success",
"session_id": "{session_id}",
"data": {
"metadata": {
"description": "...",
"archived_at": "...",
"tags": [...],
"metrics": {...}
},
"lessons": {
"successes": [...],
"challenges": [...],
"watch_patterns": [...]
},
"conflicts": [
{
"type": "architecture|dependencies|testing|performance",
"pattern": "...",
"resolution": "...",
"code_impact": [...]
}
],
"impl_summary": "First 200 chars of IMPL_PLAN",
"context_package_path": "..."
}
}
```
**Step 2.2: Aggregate results**
After all session agents complete, invoke aggregator agent:
**Aggregator Agent Prompt**:
```
Task: Aggregate Session Results and Generate SKILL Package
Context:
- Mode: Full regeneration
- Input: JSON results from {session_count} session agents
Objectives:
1. Aggregate all session data:
- Collect metadata from all sessions
- Merge lessons by category
- Group conflicts by type
- Sort sessions by date
2. Use Gemini for final aggregation:
ccw cli -p "
PURPOSE: Aggregate lessons and conflicts from all workflow sessions
TASK:
• Group successes by functional domain
• Categorize challenges by severity (HIGH/MEDIUM/LOW)
• Identify recurring conflict patterns
• Calculate frequencies and prioritize
MODE: analysis
CONTEXT: [Provide aggregated JSON data]
EXPECTED: Final aggregated structure for SKILL documents
RULES: Template reference from skill-aggregation.txt
" --tool gemini --mode analysis
3. Read templates for formatting (same 4 templates as single mode)
4. Generate all SKILL documents:
- sessions-timeline.md (all sessions, sorted by date)
- lessons-learned.md (aggregated lessons with frequencies)
- conflict-patterns.md (recurring patterns with resolutions)
- SKILL.md (index with progressive loading)
5. Write files to .claude/skills/workflow-progress/
6. Return result JSON:
{
"status": "success",
"sessions_processed": count,
"files_generated": ["SKILL.md", "sessions-timeline.md", ...],
"summary": {
"total_sessions": count,
"functional_domains": [...],
"date_range": "...",
"lessons_count": count,
"conflicts_count": count
}
}
```
---
### Phase 3: Verification
**Step 3.1: Check SKILL Package Files**
```bash
bash(ls -lh .claude/skills/workflow-progress/)
```
Verify all 4 files exist:
- SKILL.md
- sessions-timeline.md
- lessons-learned.md
- conflict-patterns.md
**Step 3.2: TodoWrite Completion**
Mark all tasks as completed.
**Step 3.3: Display Summary**
**Single Session Mode**:
```
Session {session_id} processed successfully
Updated:
- sessions-timeline.md
- lessons-learned.md
- conflict-patterns.md
- SKILL.md
SKILL Location: .claude/skills/workflow-progress/SKILL.md
```
**All Sessions Mode**:
```
All sessions processed in parallel
Sessions: {count} total
Functional Domains: {domain_list}
Date Range: {earliest} - {latest}
Generated:
- sessions-timeline.md ({count} sessions)
- lessons-learned.md ({lessons_count} lessons)
- conflict-patterns.md ({conflicts_count} conflicts)
- SKILL.md (4-level progressive loading)
SKILL Location: .claude/skills/workflow-progress/SKILL.md
Usage:
- Level 0: Quick refresh (~2K tokens)
- Level 1: Recent history (~8K tokens)
- Level 2: Complete analysis (~25K tokens)
- Level 3: Deep dive (~40K tokens)
```
---
## Agent Guidelines
### Agent Capabilities
**universal-executor agents can**:
- Read files from `.workflow/.archives/`
- Execute bash commands
- Call Gemini CLI for intelligent analysis
- Read template files for formatting guidance
- Write SKILL package files (single mode) or return JSON (parallel mode)
- Return structured results
### Gemini Usage Pattern
**When to use Gemini**:
- Aggregating lessons from multiple sources
- Identifying recurring patterns
- Classifying conflicts by type and severity
- Extracting structured data from IMPL_PLAN
**Fallback Strategy**: If Gemini fails or times out, use direct file parsing with structured extraction logic.
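**Illustrative sketch** of that fallback (both helper names are illustrative, not part of the workflow API):
```javascript
// Sketch: Gemini analysis with direct-parsing fallback.
async function extractLessons(sessionDir) {
  try {
    return await runGeminiAnalysis(sessionDir); // wraps the ccw cli call above
  } catch (err) {
    // Gemini failed or timed out: parse IMPL_PLAN.md and manifest lessons directly
    return parseSessionFilesDirectly(sessionDir);
  }
}
```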
---
## Template System
### Template Files
All templates located in: `~/.claude/workflows/cli-templates/prompts/workflow/`
1. **skill-sessions-timeline.txt**: Format for sessions-timeline.md
2. **skill-lessons-learned.txt**: Format for lessons-learned.md
3. **skill-conflict-patterns.txt**: Format for conflict-patterns.md
4. **skill-index.txt**: Format for SKILL.md index
5. **skill-aggregation.txt**: Rules for Gemini aggregation (existing)
### Template Usage in Agent
**Agents read templates to understand**:
- File structure and markdown format
- Data sources (which files to read)
- Update strategy (incremental vs full)
- Formatting rules and conventions
- Aggregation logic (for Gemini)
**Templates are NOT shown in this command documentation** - agents read them directly as needed.
---
## Error Handling
### Validation Errors
- **No archives directory**: "Error: No workflow archives found at .workflow/.archives/"
- **Invalid session ID format**: "Error: Invalid session ID format. Only WFS-* sessions are supported"
- **Session not found**: "Error: Session {session_id} not found in archives"
- **No WFS sessions in manifest**: "Error: No WFS-* workflow sessions found in manifest.json"
### Agent Errors
- If agent fails, report error message from agent result
- If Gemini times out, agents use fallback direct parsing
- If template read fails, agents use inline format
### Recovery
- Single session mode: Can be retried without affecting other sessions
- All sessions mode: If one agent fails, others continue; retry failed sessions individually
## Integration
### Called by `/workflow:session:complete`
Automatically invoked after session archival:
```bash
SlashCommand(command="/memory:workflow-skill-memory session {session_id}")
```
### Manual Invocation
Users can manually process sessions:
```bash
/memory:workflow-skill-memory session WFS-custom-feature # Single session
/memory:workflow-skill-memory all # Full regeneration
```


@@ -1,208 +0,0 @@
---
name: breakdown
description: Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order
argument-hint: "[-y|--yes] task-id"
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm breakdown, use recommended subtask structure.
# Task Breakdown Command (/task:breakdown)
## Overview
Breaks down complex tasks into executable subtasks with context inheritance and agent assignment.
## Core Principles
**File Cohesion:** Related files must stay in same task
**10-Task Limit:** Total tasks cannot exceed 10 (triggers re-scoping)
## Core Features
**CRITICAL**: Manual breakdown with safety controls to prevent file conflicts and task limit violations.
### Breakdown Process
1. **Session Check**: Verify active session contains parent task
2. **Task Validation**: Ensure parent is `pending` status
3. **10-Task Limit Check**: Verify breakdown won't exceed total limit
4. **Manual Decomposition**: User defines subtasks with validation
5. **File Conflict Detection**: Warn if same files appear in multiple subtasks
6. **Similar Function Warning**: Alert if subtasks have overlapping functionality
7. **Context Distribution**: Inherit parent requirements and scope
8. **Agent Assignment**: Auto-assign agents based on subtask type
9. **TODO_LIST Update**: Regenerate TODO_LIST.md with new structure
### Breakdown Rules
- Only `pending` tasks can be broken down
- **Manual breakdown only**: Automated breakdown disabled to prevent violations
- Parent becomes `container` status (not executable)
- Subtasks use format: IMPL-N.M (max 2 levels)
- Context flows from parent to subtasks
- All relationships tracked in JSON
- **10-task limit enforced**: Breakdown rejected if total would exceed 10 tasks
- **File cohesion preserved**: Same files cannot be split across subtasks
## Usage
### Basic Breakdown
```bash
/task:breakdown impl-1
```
Interactive process:
```
Task: Build authentication module
Current total tasks: 6/10
MANUAL BREAKDOWN REQUIRED
Define subtasks manually (remaining capacity: 4 tasks):
1. Enter subtask title: User authentication core
Focus files: models/User.js, routes/auth.js, middleware/auth.js
2. Enter subtask title: OAuth integration
Focus files: services/OAuthService.js, routes/oauth.js
FILE CONFLICT DETECTED:
- routes/auth.js appears in multiple subtasks
- Recommendation: Merge related authentication routes
SIMILAR FUNCTIONALITY WARNING:
- "User authentication" and "OAuth integration" both handle auth
- Consider combining into single task
# Use AskUserQuestion for confirmation
AskUserQuestion({
questions: [{
question: "File conflicts and/or similar functionality detected. How do you want to proceed?",
header: "Confirm",
options: [
{ label: "Proceed with breakdown", description: "Accept the risks and create the subtasks as defined." },
{ label: "Restart breakdown", description: "Discard current subtasks and start over." },
{ label: "Cancel breakdown", description: "Abort the operation and leave the parent task as is." }
],
multiSelect: false
}]
})
User selected: "Proceed with breakdown"
Task IMPL-1 broken down:
IMPL-1: Build authentication module (container)
├── IMPL-1.1: User authentication core -> @code-developer
└── IMPL-1.2: OAuth integration -> @code-developer
Files updated: .task/IMPL-1.json + 2 subtask files + TODO_LIST.md
```
## Decomposition Logic
### Agent Assignment
- **Design/Planning** → `@planning-agent`
- **Implementation** → `@code-developer`
- **Testing** → `@code-developer` (type: "test-gen")
- **Test Validation** → `@test-fix-agent` (type: "test-fix")
- **Review** → `@universal-executor` (optional)
### Context Inheritance
- Subtasks inherit parent requirements
- Scope refined for specific subtask
- Implementation details distributed appropriately
## Safety Controls
### File Conflict Detection
**Validates file cohesion across subtasks:**
- Scans `focus_paths` in all subtasks
- Warns if same file appears in multiple subtasks
- Suggests merging subtasks with overlapping files
- Blocks breakdown if critical conflicts detected
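**Illustrative sketch** of the overlap check (field names follow the task JSON examples; the helper itself is illustrative):
```javascript
// Sketch: flag files that appear in more than one subtask's focus_paths.
function detectFileConflicts(subtasks) {
  const seen = new Map(); // file → [subtask ids]
  for (const t of subtasks) {
    for (const file of t.focus_paths || []) {
      seen.set(file, [...(seen.get(file) || []), t.id]);
    }
  }
  return [...seen].filter(([, ids]) => ids.length > 1)
                  .map(([file, ids]) => ({ file, ids }));
}
// detectFileConflicts([{ id: "IMPL-1.1", focus_paths: ["routes/auth.js"] },
//                      { id: "IMPL-1.2", focus_paths: ["routes/auth.js"] }])
// → [{ file: "routes/auth.js", ids: ["IMPL-1.1", "IMPL-1.2"] }]
```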
### Similar Functionality Detection
**Prevents functional overlap:**
- Analyzes subtask titles for similar keywords
- Warns about potential functional redundancy
- Suggests consolidation of related functionality
- Examples: "user auth" + "login system" → merge recommendation
### 10-Task Limit Enforcement
**Hard limit compliance:**
- Counts current total tasks in session
- Calculates breakdown impact on total
- Rejects breakdown if would exceed 10 tasks
- Suggests re-scoping if limit reached
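**Illustrative sketch** of the limit check (whether the container parent still counts toward the total is an assumption here):
```javascript
// Sketch: reject a breakdown that would push the session past 10 tasks.
function checkTaskLimit(currentTotal, proposedSubtasks, limit = 10) {
  const newTotal = currentTotal + proposedSubtasks; // parent assumed to still count
  if (newTotal > limit) {
    throw new Error(
      `Breakdown would exceed ${limit}-task limit ` +
      `(current: ${currentTotal}, proposed: ${proposedSubtasks})`
    );
  }
  return newTotal;
}
```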
### Manual Control Requirements
**User-driven breakdown only:**
- No automatic subtask generation
- User must define each subtask title and scope
- Real-time validation during input
- Confirmation required before execution
## Implementation Details
See @~/.claude/workflows/task-core.md for:
- Complete task JSON schema
- Implementation field structure
- Context inheritance rules
- Agent assignment logic
## Validation
### Pre-breakdown Checks
1. Active session exists
2. Task found in session
3. Task status is `pending`
4. Not already broken down
5. **10-task limit compliance**: Total tasks + new subtasks ≤ 10
6. **Manual mode enabled**: No automatic breakdown allowed
### Post-breakdown Actions
1. Update parent to `container` status
2. Create subtask JSON files
3. Update parent subtasks list
4. Update session stats
5. **Regenerate TODO_LIST.md** with new hierarchy
6. Validate file paths in focus_paths
7. Update session task count
## Examples
### Basic Breakdown
```bash
/task:breakdown impl-1
impl-1: Build authentication (container)
├── impl-1.1: Design schema -> @planning-agent
├── impl-1.2: Implement logic + tests -> @code-developer
└── impl-1.3: Execute & fix tests -> @test-fix-agent
```
## Error Handling
```bash
# Task not found
Task IMPL-5 not found
# Already broken down
Task IMPL-1 already has subtasks
# Wrong status
Cannot breakdown completed task IMPL-2
# 10-task limit exceeded
Breakdown would exceed 10-task limit (current: 8, proposed: 4)
Suggestion: Re-scope project into smaller iterations
# File conflicts detected
File conflict: routes/auth.js appears in IMPL-1.1 and IMPL-1.2
Recommendation: Merge subtasks or redistribute files
# Similar functionality warning
Similar functions detected: "user login" and "authentication"
Consider consolidating related functionality
# Manual breakdown required
Automatic breakdown disabled. Use manual breakdown process.
```
**System ensures**: Manual breakdown control with file cohesion enforcement, similar functionality detection, and 10-task limit compliance


@@ -1,152 +0,0 @@
---
name: create
description: Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis
argument-hint: "\"task title\""
---
# Task Create Command (/task:create)
## Overview
Creates new implementation tasks with automatic context awareness and ID generation.
## Core Principles
**Task System:** @~/.claude/workflows/task-core.md
## Core Features
### Automatic Behaviors
- **ID Generation**: Auto-generates IMPL-N format (max 2 levels)
- **Context Inheritance**: Inherits from active workflow session
- **JSON Creation**: Creates task JSON in active session
- **Status Setting**: Initial status = "pending"
- **Agent Assignment**: Suggests agent based on task type
- **Session Integration**: Updates workflow session stats
### Context Awareness
- Validates active workflow session exists
- Avoids duplicate task IDs
- Inherits session requirements and scope
- Suggests task relationships
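**Illustrative sketch** of IMPL-N ID generation (Node.js; the `.task/` layout follows the File Management section below, and the helper name is illustrative):
```javascript
// Sketch: next top-level IMPL-N id from existing task files.
const fs = require('fs');

function nextTaskId(taskDir) {
  const nums = fs.readdirSync(taskDir)
    .map(f => f.match(/^IMPL-(\d+)\.json$/)) // top-level tasks only, skips IMPL-N.M
    .filter(Boolean)
    .map(m => parseInt(m[1], 10));
  return `IMPL-${nums.length ? Math.max(...nums) + 1 : 1}`;
}
```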
## Usage
### Basic Creation
```bash
/task:create "Build authentication module"
```
Output:
```
Task created: IMPL-1
Title: Build authentication module
Type: feature
Agent: code-developer
Status: pending
```
### Task Types
- `feature` - New functionality (default)
- `bugfix` - Bug fixes
- `refactor` - Code improvements
- `test` - Test implementation
- `docs` - Documentation
## Task Creation Process
1. **Session Validation**: Check active workflow session
2. **ID Generation**: Auto-increment IMPL-N
3. **Context Inheritance**: Load workflow context
4. **Implementation Setup**: Initialize implementation field
5. **Agent Assignment**: Select appropriate agent
6. **File Creation**: Save JSON to .task/ directory
7. **Session Update**: Update workflow stats
**Task Schema**: See @~/.claude/workflows/task-core.md for complete JSON structure
## Implementation Field Setup
### Auto-Population Strategy
- **Detailed info**: Extract from task description and scope
- **Missing info**: Mark `pre_analysis` as multi-step array format for later pre-analysis
- **Basic structure**: Initialize with standard template
### Analysis Triggers
When implementation details incomplete:
```bash
Task requires analysis for implementation details
Suggest running: gemini analysis for file locations and dependencies
```
## File Management
### JSON Task File
- **Location**: `.task/IMPL-[N].json` in active session
- **Content**: Complete task with implementation field
- **Updates**: Session stats only
### Simple Process
1. Validate session and inputs
2. Generate task JSON
3. Update session stats
4. Notify completion
## Context Inheritance
Tasks inherit from:
1. **Active Session** - Requirements and scope from workflow-session.json
2. **Planning Document** - Context from IMPL_PLAN.md
3. **Parent Task** - For subtasks (IMPL-N.M format)
## Agent Assignment
Based on task type and title keywords:
- **Build/Implement** → @code-developer
- **Design/Plan** → @planning-agent
- **Test Generation** → @code-developer (type: "test-gen")
- **Test Execution/Fix** → @test-fix-agent (type: "test-fix")
- **Review/Audit** → @universal-executor (optional, only when explicitly requested)
## Validation Rules
1. **Session Check** - Active workflow session required
2. **Duplicate Check** - Avoid similar task titles
3. **ID Uniqueness** - Auto-increment task IDs
4. **Schema Validation** - Ensure proper JSON structure
## Error Handling
```bash
# No workflow session
No active workflow found
Use: /workflow init "project name"
# Duplicate task
Similar task exists: IMPL-3
Continue anyway? (y/n)
# Max depth exceeded
Cannot create IMPL-1.2.1 (max 2 levels)
Use: IMPL-2 for new main task
```
## Examples
### Feature Task
```bash
/task:create "Implement user authentication"
Created IMPL-1: Implement user authentication
Type: feature
Agent: code-developer
Status: pending
```
### Bug Fix
```bash
/task:create "Fix login validation bug" --type=bugfix
Created IMPL-2: Fix login validation bug
Type: bugfix
Agent: code-developer
Status: pending
```


@@ -1,270 +0,0 @@
---
name: execute
description: Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking
argument-hint: "task-id"
---
## Command Overview: /task:execute
**Purpose**: Executes tasks using intelligent agent selection, context preparation, and progress tracking.
## Execution Modes
- **auto (Default)**
- Fully autonomous execution with automatic agent selection.
- Provides progress updates at each checkpoint.
- Automatically completes the task when done.
- **guided**
- Executes step-by-step, requiring user confirmation at each checkpoint.
- Allows for dynamic adjustments and manual review during the process.
- **review**
- Optional manual review using `@universal-executor`.
- Used only when explicitly requested by user.
## Agent Selection Logic
The system determines the appropriate agent for a task using the following logic.
```pseudo
FUNCTION select_agent(task, agent_override):
// A manual override always takes precedence.
// Corresponds to the --agent=<agent-type> flag.
IF agent_override IS NOT NULL:
RETURN agent_override
// If no override, select based on keywords in the task title.
ELSE:
CASE task.title:
WHEN CONTAINS "Build API", "Implement":
RETURN "@code-developer"
WHEN CONTAINS "Design schema", "Plan":
RETURN "@planning-agent"
WHEN CONTAINS "Write tests", "Generate tests":
RETURN "@code-developer" // type: test-gen
WHEN CONTAINS "Execute tests", "Fix tests", "Validate":
RETURN "@test-fix-agent" // type: test-fix
WHEN CONTAINS "Review code":
RETURN "@universal-executor" // Optional manual review
DEFAULT:
RETURN "@code-developer" // Default agent
END CASE
END FUNCTION
```
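In practice the override always wins; for example (flag spelling follows the comment in the pseudocode above):
```bash
/task:execute IMPL-1                           # agent inferred from title keywords
/task:execute IMPL-1 --agent=@planning-agent   # manual override takes precedence
```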
## Core Execution Protocol
`Pre-Execution` -> `Execution` -> `Post-Execution`
### Pre-Execution Protocol
`Validate Task & Dependencies` **->** `Prepare Execution Context` **->** `Coordinate with TodoWrite`
- **Validation**: Checks for the task's JSON file in `.task/` and resolves its dependencies.
- **Context Preparation**: Loads task and workflow context, preparing it for the selected agent.
- **Session Context Injection**: Provides workflow directory paths to agents for TODO_LIST.md and summary management.
- **TodoWrite Coordination**: Generates execution Todos and checkpoints, syncing with `TODO_LIST.md`.
### Post-Execution Protocol
`Update Task Status` **->** `Generate Summary` **->** `Save Artifacts` **->** `Sync All Progress` **->** `Validate File Integrity`
- Updates status in the task's JSON file and `TODO_LIST.md`.
- Creates a summary in `.summaries/`.
- Stores outputs and syncs progress across the entire workflow session.
### Task & Subtask Execution Logic
This logic defines how single, multiple, or parent tasks are handled.
```pseudo
FUNCTION execute_task_command(task_id, mode, parallel_flag):
// Handle parent tasks by executing their subtasks.
IF is_parent_task(task_id):
subtasks = get_subtasks(task_id)
EXECUTE_SUBTASK_BATCH(subtasks, mode)
// Handle wildcard execution (e.g., IMPL-001.*)
ELSE IF task_id CONTAINS "*":
subtasks = find_matching_tasks(task_id)
IF parallel_flag IS true:
EXECUTE_IN_PARALLEL(subtasks)
ELSE:
FOR each subtask in subtasks:
EXECUTE_SINGLE_TASK(subtask, mode)
// Default case for a single task ID.
ELSE:
EXECUTE_SINGLE_TASK(task_id, mode)
END FUNCTION
```
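Illustrative invocations for each branch (wildcard and parallel behavior per the pseudocode; exact flag spelling is an assumption):
```bash
/task:execute IMPL-1                # single task
/task:execute IMPL-1.*              # all matching subtasks, sequential
/task:execute IMPL-1.* --parallel   # matching subtasks in parallel
```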
### Error Handling & Recovery Logic
```pseudo
FUNCTION pre_execution_check(task):
// Ensure dependencies are met before starting.
IF task.dependencies ARE NOT MET:
LOG_ERROR("Cannot execute " + task.id)
LOG_INFO("Blocked by: " + unmet_dependencies)
    HALT_EXECUTION()
END FUNCTION
FUNCTION on_execution_failure(checkpoint):
// Provide user with recovery options upon failure.
LOG_WARNING("Execution failed at checkpoint " + checkpoint)
PRESENT_OPTIONS([
"Retry from checkpoint",
"Retry from beginning",
"Switch to guided mode",
"Abort execution"
])
AWAIT user_input
// System performs the selected action.
END FUNCTION
```
### Simplified Context Structure (JSON)
This is the simplified data structure loaded to provide context for task execution.
```json
{
"task": {
"id": "IMPL-1",
"title": "Build authentication module",
"type": "feature",
"status": "active",
"agent": "code-developer",
"context": {
"requirements": ["JWT authentication", "OAuth2 support"],
"scope": ["src/auth/*", "tests/auth/*"],
"acceptance": ["Module handles JWT tokens", "OAuth2 flow implemented"],
"inherited_from": "WFS-user-auth"
},
"relations": {
"parent": null,
"subtasks": ["IMPL-1.1", "IMPL-1.2"],
"dependencies": ["IMPL-0"]
},
"implementation": {
"files": [
{
"path": "src/auth/login.ts",
"location": {
"function": "authenticateUser",
"lines": "25-65",
"description": "Main authentication logic"
},
"original_code": "// Code snippet extracted via gemini analysis",
"modifications": {
"current_state": "Basic password authentication only",
"proposed_changes": [
"Add JWT token generation",
"Implement OAuth2 callback handling",
"Add multi-factor authentication support"
],
"logic_flow": [
"validateCredentials() ───► checkUserExists()",
"◊─── if password ───► generateJWT() ───► return token",
"◊─── if OAuth ───► validateOAuthCode() ───► exchangeForToken()",
"◊─── if MFA ───► sendMFACode() ───► awaitVerification()"
],
"reason": "Support modern authentication standards and security requirements",
"expected_outcome": "Comprehensive authentication system supporting multiple methods"
}
}
],
"context_notes": {
"dependencies": ["jsonwebtoken", "passport", "speakeasy"],
"affected_modules": ["user-session", "auth-middleware", "api-routes"],
"risks": [
"Breaking changes to existing login endpoints",
"Token storage and rotation complexity",
"OAuth provider configuration dependencies"
],
"performance_considerations": "JWT validation adds ~10ms per request, OAuth callbacks may timeout",
"error_handling": "Ensure sensitive authentication errors don't leak user enumeration data"
},
"pre_analysis": [
{
"action": "analyze patterns",
"template": "~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt",
"method": "gemini"
}
]
}
},
"workflow": {
"session": "WFS-user-auth",
"phase": "IMPLEMENT",
"session_context": {
"workflow_directory": ".workflow/active/WFS-user-auth/",
"todo_list_location": ".workflow/active/WFS-user-auth/TODO_LIST.md",
"summaries_directory": ".workflow/active/WFS-user-auth/.summaries/",
"task_json_location": ".workflow/active/WFS-user-auth/.task/"
}
},
"execution": {
"agent": "code-developer",
"mode": "auto",
"attempts": 0
}
}
```
### Agent-Specific Context
Different agents receive context tailored to their function, including implementation details:
**`@code-developer`**:
- Complete implementation.files array with file paths and locations
- original_code snippets and proposed_changes for precise modifications
- logic_flow diagrams for understanding data flow
- Dependencies and affected modules for integration planning
- Performance and error handling considerations
**`@planning-agent`**:
- High-level requirements, constraints, success criteria
- Implementation risks and mitigation strategies
- Architecture implications from implementation.context_notes
**`@test-fix-agent`**:
- Test files to execute from task.context.focus_paths
- Source files to fix from implementation.files[].path
- Expected behaviors from implementation.modifications.logic_flow
- Error conditions to validate from implementation.context_notes.error_handling
- Performance requirements from implementation.context_notes.performance_considerations
**`@universal-executor`**:
- Used for optional manual reviews when explicitly requested
- Code quality standards and implementation patterns
- Security considerations from implementation.context_notes.risks
- Dependency validation from implementation.context_notes.dependencies
- Architecture compliance checks
### Simplified File Output
- **Task JSON File (`.task/<task-id>.json`)**: Updated with status and last attempt time only.
- **Session File (`workflow-session.json`)**: Updated task stats (completed count).
- **Summary File**: Generated in `.summaries/` upon completion (optional).
### Simplified Summary Template
Optional summary file generated at `.summaries/IMPL-[task-id]-summary.md`.
```markdown
# Task Summary: IMPL-1 Build Authentication Module
## What Was Done
- Created src/auth/login.ts with JWT validation
- Added tests in tests/auth.test.ts
## Execution Results
- **Agent**: code-developer
- **Status**: completed
## Files Modified
- `src/auth/login.ts` (created)
- `tests/auth.test.ts` (created)
```


@@ -1,441 +0,0 @@
---
name: replan
description: Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json
argument-hint: "[-y|--yes] task-id [\"text\"|file.md] | --batch [verification-report.md]"
allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm updates, use recommended changes.
# Task Replan Command (/task:replan)
> **⚠️ DEPRECATION NOTICE**: This command is maintained for backward compatibility. For new workflows, use `/workflow:replan` which provides:
> - Session-level replanning with comprehensive artifact updates
> - Interactive boundary clarification
> - Updates to IMPL_PLAN.md, TODO_LIST.md, and session metadata
> - Better integration with workflow sessions
>
> **Migration**: Replace `/task:replan IMPL-1 "changes"` with `/workflow:replan IMPL-1 "changes"`
## Overview
Replans individual tasks or batch processes multiple tasks with change tracking and backup management.
**Modes**:
- **Single Task Mode**: Replan one task with specific changes
- **Batch Mode**: Process multiple tasks from action-plan verification report
## Key Features
- **Single/Batch Operations**: Single task or multiple tasks from verification report
- **Multiple Input Sources**: Text, files, or verification report
- **Backup Management**: Automatic backup of previous versions
- **Change Documentation**: Track all modifications
- **Progress Tracking**: TodoWrite integration for batch operations
**CRITICAL**: Validates active session before replanning
## Operation Modes
### Single Task Mode
#### Direct Text (Default)
```bash
/task:replan IMPL-1 "Add OAuth2 authentication support"
```
#### File-based Input
```bash
/task:replan IMPL-1 updated-specs.md
```
Supports: .md, .txt, .json, .yaml
#### Interactive Mode
```bash
/task:replan IMPL-1 --interactive
```
Guided step-by-step modification process with validation
### Batch Mode
#### From Verification Report
```bash
/task:replan --batch ACTION_PLAN_VERIFICATION.md
```
**Workflow**:
1. Parse verification report to extract replan recommendations
2. Create TodoWrite task list for all modifications
3. Process each task sequentially with confirmation
4. Track progress and generate summary report
**Auto-detection**: If input file contains "Action Plan Verification Report" header, automatically enters batch mode
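A minimal sketch of that detection (file name illustrative):
```bash
head -n 20 report.md | grep -q "Action Plan Verification Report" && echo "batch mode"
```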
## Replanning Process
### Single Task Process
1. **Load & Validate**: Read task JSON and validate session
2. **Parse Input**: Process changes from input source
3. **Create Backup**: Save previous version to backup folder
4. **Update Task**: Modify JSON structure and relationships
5. **Save Changes**: Write updated task and increment version
6. **Update Session**: Reflect changes in workflow stats
### Batch Process
1. **Parse Verification Report**: Extract all replan recommendations
2. **Initialize TodoWrite**: Create task list for tracking
3. **For Each Task**:
- Mark todo as in_progress
- Load and validate task JSON
- Create backup
- Apply recommended changes
- Save updated task
- Mark todo as completed
4. **Generate Summary**: Report all changes and backup locations
## Backup Management
### Backup Tracking
Tasks maintain backup history:
```json
{
"id": "IMPL-1",
"version": "1.2",
"replan_history": [
{
"version": "1.2",
"reason": "Add OAuth2 support",
"input_source": "direct_text",
"backup_location": ".task/backup/IMPL-1-v1.1.json",
"timestamp": "2025-10-17T10:30:00Z"
}
]
}
```
**Complete schema**: See @~/.claude/workflows/task-core.md
### File Structure
```
.task/
├── IMPL-1.json # Current version
├── backup/
│ ├── IMPL-1-v1.0.json # Original version
│ ├── IMPL-1-v1.1.json # Previous backup
│ └── IMPL-1-v1.2.json # Latest backup
└── [new subtasks as needed]
```
**Backup Naming**: `{task-id}-v{version}.json`
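The backup step itself amounts to something like the following sketch (version extraction uses the same grep fallback as the `/version` command; task ID illustrative):
```bash
ver=$(grep -o '"version": *"[^"]*"' .task/IMPL-1.json | cut -d'"' -f4)
mkdir -p .task/backup
cp .task/IMPL-1.json ".task/backup/IMPL-1-v${ver}.json"
```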
## Implementation Updates
### Change Detection
Tracks modifications to:
- Files in implementation.files array
- Dependencies and affected modules
- Risk assessments and performance notes
- Logic flows and code locations
### Analysis Triggers
May require gemini re-analysis when:
- New files need code extraction
- Function locations change
- Dependencies require re-evaluation
## Document Updates
### Planning Document
May update IMPL_PLAN.md sections when task structure changes significantly
### TODO List Sync
If TODO_LIST.md exists, synchronizes:
- New subtasks (with [ ] checkbox)
- Modified tasks (marked as updated)
- Removed subtasks (deleted from list)
## Change Documentation
### Change Summary
Generates brief change log with:
- Version increment (1.1 → 1.2)
- Input source and reason
- Key modifications made
- Files updated/created
- Backup location
## Session Updates
Updates workflow-session.json with:
- Modified task tracking
- Task count changes (if subtasks added/removed)
- Last modification timestamps
## Rollback Support
```bash
/task:replan IMPL-1 --rollback v1.1
Rollback to version 1.1:
- Restore task from backup/.../IMPL-1-v1.1.json
- Remove new subtasks if any
- Update session stats
# Use AskUserQuestion for confirmation
AskUserQuestion({
questions: [{
question: "Are you sure you want to roll back this task to a previous version?",
header: "Confirm",
options: [
{ label: "Yes, rollback", description: "Restore the task from the selected backup." },
{ label: "No, cancel", description: "Keep the current version of the task." }
],
multiSelect: false
}]
})
User selected: "Yes, rollback"
Task rolled back to version 1.1
```
## Batch Processing with TodoWrite
### Progress Tracking
When processing multiple tasks, automatically creates TodoWrite task list:
```markdown
**Batch Replan Progress**:
- [x] IMPL-002: Add FR-12 draft saving acceptance criteria
- [x] IMPL-003: Add FR-14 history tracking acceptance criteria
- [ ] IMPL-004: Add FR-09 response surface explicit coverage
- [ ] IMPL-008: Add NFR performance validation steps
```
### Batch Report
After completion, generates summary:
```markdown
## Batch Replan Summary
**Total Tasks**: 4
**Successful**: 3
**Failed**: 1
**Skipped**: 0
### Changes Made
- IMPL-002 v1.0 → v1.1: Added FR-12 acceptance criteria
- IMPL-003 v1.0 → v1.1: Added FR-14 acceptance criteria
- IMPL-004 v1.0 → v1.1: Added FR-09 explicit coverage
### Backups Created
- .task/backup/IMPL-002-v1.0.json
- .task/backup/IMPL-003-v1.0.json
- .task/backup/IMPL-004-v1.0.json
### Errors
- IMPL-008: File not found (task may have been renamed)
```
## Examples
### Single Task - Text Input
```bash
/task:replan IMPL-1 "Add OAuth2 authentication support"
Processing changes...
Proposed updates:
+ Add OAuth2 integration
+ Update authentication flow
# Use AskUserQuestion for confirmation
AskUserQuestion({
questions: [{
question: "Do you want to apply these changes to the task?",
header: "Apply",
options: [
{ label: "Yes, apply", description: "Create new version with these changes." },
{ label: "No, cancel", description: "Discard changes and keep current version." }
],
multiSelect: false
}]
})
User selected: "Yes, apply"
Version 1.2 created
Context updated
Backup saved to .task/backup/IMPL-1-v1.1.json
```
### Single Task - File Input
```bash
/task:replan IMPL-2 requirements.md
Loading requirements.md...
Applying specification changes...
Task updated with new requirements
Version 1.1 created
Backup saved to .task/backup/IMPL-2-v1.0.json
```
### Batch Mode - From Verification Report
```bash
/task:replan --batch .workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md
Parsing verification report...
Found 4 tasks requiring replanning:
- IMPL-002: Add FR-12 draft saving acceptance criteria
- IMPL-003: Add FR-14 history tracking acceptance criteria
- IMPL-004: Add FR-09 response surface explicit coverage
- IMPL-008: Add NFR performance validation steps
Creating task tracking list...
Processing IMPL-002...
Backup created: .task/backup/IMPL-002-v1.0.json
Updated to v1.1
Processing IMPL-003...
Backup created: .task/backup/IMPL-003-v1.0.json
Updated to v1.1
Processing IMPL-004...
Backup created: .task/backup/IMPL-004-v1.0.json
Updated to v1.1
Processing IMPL-008...
Backup created: .task/backup/IMPL-008-v1.0.json
Updated to v1.1
Batch replan completed: 4/4 successful
Summary report saved
```
### Batch Mode - Auto-detection
```bash
# If file contains "Action Plan Verification Report", auto-enters batch mode
/task:replan ACTION_PLAN_VERIFICATION.md
Detected verification report format
Entering batch mode...
[same as above]
```
## Error Handling
### Single Task Errors
```bash
# Task not found
Task IMPL-5 not found
Check task ID with /workflow:status
# Task completed
Task IMPL-1 is completed (cannot replan)
Create new task for additional work
# File not found
File requirements.md not found
Check file path
# No input provided
Please specify changes needed
Provide text, file, or verification report
```
### Batch Mode Errors
```bash
# Invalid verification report
File does not contain valid verification report format
Check report structure or use single task mode
# Partial failures
Batch completed with errors: 3/4 successful
Review error details in summary report
# No replan recommendations found
Verification report contains no replan recommendations
Check report content or use /workflow:plan-verify first
```
## Batch Mode Integration
### Input Format Expectations
Batch mode parses verification reports looking for:
1. **Required Actions Section**: Commands like `/task:replan IMPL-X "changes"`
2. **Findings Table**: Task IDs with recommendations
3. **Next Actions Section**: Specific replan commands
**Example Patterns**:
```markdown
#### 1. HIGH Priority - Address FR Coverage Gaps
/task:replan IMPL-004 "
Add explicit acceptance criteria:
- FR-09: Response surface 3D visualization
"
#### 2. MEDIUM Priority - Enhance NFR Coverage
/task:replan IMPL-008 "
Add performance testing:
- NFR-01: Load test API endpoints
"
```
### Extraction Logic
1. Scan for `/task:replan` commands in the report (see the grep sketch below)
2. Extract task ID and change description
3. Group by priority (HIGH, MEDIUM, LOW)
4. Process in priority order with TodoWrite tracking
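A rough first pass for step 1 (replan commands may span multiple lines, so this only locates where each one starts):
```bash
grep -n '^/task:replan' ACTION_PLAN_VERIFICATION.md
```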
### Confirmation Behavior
- **Default**: Confirm each task before applying
- **With `--auto-confirm`**: Apply all changes without prompting
```bash
/task:replan --batch report.md --auto-confirm
```
## Implementation Details
### Backup Management
```typescript
// Backup file naming convention
const backupPath = `.task/backup/${taskId}-v${previousVersion}.json`;
// Backup metadata in task JSON
{
"replan_history": [
{
"version": "1.2",
"timestamp": "2025-10-17T10:30:00Z",
"reason": "Add FR-09 explicit coverage",
"input_source": "batch_verification_report",
"backup_location": ".task/backup/IMPL-004-v1.1.json"
}
]
}
```
### TodoWrite Integration
```typescript
// Initialize tracking for batch mode
TodoWrite({
todos: taskList.map(task => ({
content: `${task.id}: ${task.changeDescription}`,
status: "pending",
activeForm: `Replanning ${task.id}`
}))
});
// Update progress during processing
// (updateTaskStatus is assumed to be a local helper that returns the full
//  todo list with the given task's status changed)
TodoWrite({
todos: updateTaskStatus(taskId, "in_progress")
});
// Mark completed
TodoWrite({
todos: updateTaskStatus(taskId, "completed")
});
```


@@ -1,254 +0,0 @@
---
name: version
description: Display Claude Code version information and check for updates
allowed-tools: Bash(*)
---
# Version Command (/version)
## Purpose
Display local and global installation versions, check for the latest updates from GitHub, and provide upgrade recommendations.
## Execution Flow
1. **Local Version Check**: Read version information from `./.claude/version.json` if it exists.
2. **Global Version Check**: Read version information from `~/.claude/version.json` if it exists.
3. **Fetch Remote Versions**: Use GitHub API to get the latest stable release tag and the latest commit hash from the main branch.
4. **Compare & Suggest**: Compare installed versions with the latest remote versions and provide upgrade suggestions if applicable.
## Step 1: Check Local Version
### Check if local version.json exists
```bash
bash(test -f ./.claude/version.json && echo "found" || echo "not_found")
```
### Read local version (if exists)
```bash
bash(cat ./.claude/version.json)
```
### Extract version (grep fallback; no jq required)
```bash
bash(cat ./.claude/version.json | grep -o '"version": *"[^"]*"' | cut -d'"' -f4)
```
### Extract installation date
```bash
bash(cat ./.claude/version.json | grep -o '"installation_date_utc": *"[^"]*"' | cut -d'"' -f4)
```
**Output Format**:
```
Local Version: 3.2.1
Installed: 2025-10-03T12:00:00Z
```
## Step 2: Check Global Version
### Check if global version.json exists
```bash
bash(test -f ~/.claude/version.json && echo "found" || echo "not_found")
```
### Read global version
```bash
bash(cat ~/.claude/version.json)
```
### Extract version
```bash
bash(cat ~/.claude/version.json | grep -o '"version": *"[^"]*"' | cut -d'"' -f4)
```
### Extract installation date
```bash
bash(cat ~/.claude/version.json | grep -o '"installation_date_utc": *"[^"]*"' | cut -d'"' -f4)
```
**Output Format**:
```
Global Version: 3.2.1
Installed: 2025-10-03T12:00:00Z
```
## Step 3: Fetch Latest Stable Release
### Call GitHub API for latest release (with timeout)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null, timeout: 30000)
```
### Extract tag name (version)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"tag_name": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
```
### Extract release name
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"name": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
```
### Extract published date
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"published_at": *"[^"]*"' | cut -d'"' -f4, timeout: 30000)
```
**Output Format**:
```
Latest Stable: v3.2.2
Release: v3.2.2: Independent Test-Gen Workflow with Cross-Session Context
Published: 2025-10-03T04:10:08Z
```
## Step 4: Fetch Latest Main Branch
### Call GitHub API for latest commit on main (with timeout)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null, timeout: 30000)
```
### Extract commit SHA (short)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep -o '"sha": *"[^"]*"' | head -1 | cut -d'"' -f4 | cut -c1-7, timeout: 30000)
```
### Extract commit message (first line only)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep '"message":' | cut -d'"' -f4 | cut -d'\' -f1, timeout: 30000)
```
### Extract commit date
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep -o '"date": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
```
**Output Format**:
```
Latest Dev: a03415b
Message: feat: Add version tracking and upgrade check system
Date: 2025-10-03T04:46:44Z
```
## Step 5: Compare Versions and Suggest Upgrade
### Normalize versions (remove 'v' prefix)
```bash
bash(echo "v3.2.1" | sed 's/^v//')
```
### Compare two versions
```bash
bash(printf "%s\n%s" "3.2.1" "3.2.2" | sort -V | tail -n 1)
```
### Check if versions are equal
```bash
# If equal: Up to date
# If remote newer: Upgrade available
# If local newer: Development version
```
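Putting these together, a runnable sketch of the comparison (versions assumed already normalized, values illustrative):
```bash
LOCAL="3.2.1"
REMOTE="3.2.2"
if [ "$LOCAL" = "$REMOTE" ]; then
  echo "You are on the latest stable version ($LOCAL)"
elif [ "$(printf '%s\n%s' "$LOCAL" "$REMOTE" | sort -V | tail -n 1)" = "$REMOTE" ]; then
  echo "A newer stable version is available: v$REMOTE"
else
  echo "You are running a development version ($LOCAL)"
fi
```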
**Output Scenarios**:
**Scenario 1: Up to date**
```
You are on the latest stable version (3.2.1)
```
**Scenario 2: Upgrade available**
```
A newer stable version is available: v3.2.2
Your version: 3.2.1
To upgrade:
PowerShell: iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1)
Bash: bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.sh)
```
**Scenario 3: Development version**
```
You are running a development version (3.4.0-dev)
This is newer than the latest stable release (v3.3.0)
```
## Simple Bash Commands
### Basic Operations
```bash
# Check local version file
bash(test -f ./.claude/version.json && cat ./.claude/version.json)
# Check global version file
bash(test -f ~/.claude/version.json && cat ~/.claude/version.json)
# Extract version from JSON
bash(cat version.json | grep -o '"version": *"[^"]*"' | cut -d'"' -f4)
# Extract date from JSON
bash(cat version.json | grep -o '"installation_date_utc": *"[^"]*"' | cut -d'"' -f4)
# Fetch latest release (with timeout)
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null, timeout: 30000)
# Extract tag name
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"tag_name": *"[^"]*"' | cut -d'"' -f4, timeout: 30000)
# Extract release name
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"name": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
# Fetch latest commit (with timeout)
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null, timeout: 30000)
# Extract commit SHA
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep -o '"sha": *"[^"]*"' | head -1 | cut -d'"' -f4 | cut -c1-7, timeout: 30000)
# Extract commit message (first line)
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep '"message":' | cut -d'"' -f4 | cut -d'\' -f1, timeout: 30000)
# Compare versions
bash(printf "%s\n%s" "3.2.1" "3.2.2" | sort -V | tail -n 1)
# Remove 'v' prefix
bash(echo "v3.2.1" | sed 's/^v//')
```
## Error Handling
### No installation found
```
WARNING: Claude Code Workflow not installed
Install using:
PowerShell: iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1)
```
### Network error
```
ERROR: Could not fetch latest version from GitHub
Check your network connection
```
### Invalid version.json
```
ERROR: version.json is invalid or corrupted
```
## Design Notes
- Uses simple, direct bash commands instead of complex functions
- Each step is independent and can be executed separately
- Fallback to grep/sed for JSON parsing (no jq dependency required)
- Network calls use curl with error suppression and 30-second timeout
- Version comparison uses `sort -V` for accurate semantic versioning
- Use `/commits/main` API instead of `/branches/main` for more reliable commit info
- Extract first line of commit message using `cut -d'\' -f1` to handle JSON escape sequences
## API Endpoints
### GitHub API Used
- **Latest Release**: `https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest`
- Fields: `tag_name`, `name`, `published_at`
- **Latest Commit**: `https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main`
- Fields: `sha`, `commit.message`, `commit.author.date`
### Timeout Configuration
All network calls should use `timeout: 30000` (30 seconds) to handle slow connections.

.claude/commands/view.md Normal file

@@ -0,0 +1,367 @@
---
name: ccw view
description: Dashboard - Open CCW workflow dashboard for managing tasks and sessions
category: general
---
# CCW View Command
Open the CCW workflow dashboard for visualizing and managing project tasks, sessions, and workflow execution status.
## Description
`ccw view` launches an interactive web dashboard that provides:
- **Workflow Overview**: Visualize current workflow status and command chain execution
- **Session Management**: View and manage active workflow sessions
- **Task Tracking**: Monitor TODO items and task progress
- **Workspace Switching**: Switch between different project workspaces
- **Real-time Updates**: Live updates of command execution and status
## Usage
```bash
# Open dashboard for current workspace
ccw view
# Specify workspace path
ccw view --path /path/to/workspace
# Custom port (default: 3456)
ccw view --port 3000
# Bind to specific host
ccw view --host 0.0.0.0 --port 3456
# Open without launching browser (the dashboard URL is printed instead)
ccw view --no-browser
```
## Options
| Option | Default | Description |
|--------|---------|-------------|
| `--path <path>` | Current directory | Workspace path to display |
| `--port <port>` | 3456 | Server port for dashboard |
| `--host <host>` | 127.0.0.1 | Server host/bind address |
| `--no-browser` | false | Don't launch browser automatically |
| `-h, --help` | - | Show help message |
## Features
### Dashboard Sections
#### 1. **Workflow Overview**
- Current workflow status
- Command chain visualization (with Minimum Execution Units marked)
- Live progress tracking
- Error alerts
#### 2. **Session Management**
- List active sessions by type (workflow, review, tdd)
- Session details (created time, last activity, session ID)
- Quick actions (resume, pause, complete)
- Session logs/history
#### 3. **Task Tracking**
- TODO list with status indicators
- Progress percentage
- Task grouping by workflow stage
- Quick inline task updates
#### 4. **Workspace Switcher**
- Browse available workspaces
- Switch context with one click
- Recent workspaces list
#### 5. **Command History**
- Recent commands executed
- Execution time and status
- Quick re-run options
### Keyboard Shortcuts
| Shortcut | Action |
|----------|--------|
| `R` | Refresh dashboard |
| `Cmd/Ctrl + J` | Jump to session search |
| `Cmd/Ctrl + K` | Open command palette |
| `?` | Show help |
## Multi-Instance Support
The dashboard supports multiple concurrent instances:
```bash
# Terminal 1: Workspace A on port 3456
ccw view --path ~/projects/workspace-a
# Terminal 2: Workspace B on port 3457
ccw view --path ~/projects/workspace-b --port 3457
# Switching workspaces on the same port
ccw view --path ~/projects/workspace-c # Auto-switches existing server
```
When the server is already running and you execute `ccw view` with a different path:
1. Detects running server on the port
2. Sends workspace switch request (sketched below)
3. Updates dashboard to new workspace
4. Opens browser with updated context
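Under the hood the switch is an HTTP request to the running server; a sketch (the endpoint name and payload shape are assumptions, and the CLI performs this automatically):
```bash
curl -s -X POST "http://127.0.0.1:3456/api/switch-path" \
  -H "Content-Type: application/json" \
  -d '{"path": "/home/user/projects/workspace-b"}'
```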
## Server Lifecycle
### Startup
```
ccw view
├─ Check if server running on port
│ ├─ If yes: Send switch-path request
│ └─ If no: Start new server
├─ Launch browser (unless --no-browser)
└─ Display dashboard URL
```
### Running
The dashboard server continues running until:
- User explicitly stops it (Ctrl+C)
- All connections close after timeout
- System shutdown
### Multiple Workspaces
Switching to a different workspace keeps the same server instance:
```
Server State Before: workspace-a on port 3456
ccw view --path ~/projects/workspace-b
Server State After: workspace-b on port 3456 (same instance)
```
## Environment Variables
```bash
# Set default port
export CCW_VIEW_PORT=4000
ccw view # Uses port 4000
# Set default host
export CCW_VIEW_HOST=localhost
ccw view --port 3456 # Binds to localhost:3456
# Disable browser launch by default
export CCW_VIEW_NO_BROWSER=true
ccw view # Won't auto-launch browser
```
## Integration with CCW Workflows
The dashboard is fully integrated with CCW commands:
### Viewing Workflow Progress
```bash
# Start a workflow
ccw "Add user authentication"
# In another terminal, view progress
ccw view # Shows execution progress in real-time
```
### Session Management from Dashboard
- Start new session: Click "New Session" button
- Resume paused session: Sessions list → Resume button
- View session logs: Click session name
- Complete session: Sessions list → Complete button
### Real-time Command Execution
- View active command chain execution
- Watch command transition through Minimum Execution Units
- See error alerts and recovery options
- View command output logs
## Troubleshooting
### Port Already in Use
```bash
# Use different port
ccw view --port 3457
# Or kill existing server
lsof -i :3456 # Find process
kill -9 <pid> # Kill it
ccw view # Start fresh
```
### Dashboard Not Loading
```bash
# Try without browser
ccw view --no-browser
# Check server logs
tail -f ~/.ccw/logs/dashboard.log
# Verify network access
curl http://localhost:3456/api/health
```
### Workspace Path Not Found
```bash
# Use full absolute path
ccw view --path "$(pwd)"
# Or specify explicit path
ccw view --path ~/projects/my-project
```
## Related Commands
- **`/ccw`** - Main workflow orchestrator
- **`/workflow:session:list`** - List workflow sessions
- **`/workflow:session:resume`** - Resume paused session
- **`/memory:compact`** - Compact session memory for dashboard display
## Examples
### Basic Dashboard View
```bash
cd ~/projects/my-app
ccw view
# → Launches http://localhost:3456 in browser
```
### Network-Accessible Dashboard
```bash
# Allow remote access
ccw view --host 0.0.0.0 --port 3000
# → Dashboard accessible at http://machine-ip:3000
```
### Multiple Workspaces on Different Ports
```bash
# Terminal 1: Main project
ccw view --path ~/projects/main --port 3456
# Terminal 2: Side project
ccw view --path ~/projects/side --port 3457
# View both simultaneously
# → http://localhost:3456 (main)
# → http://localhost:3457 (side)
```
### Headless Dashboard
```bash
# Run dashboard without browser
ccw view --port 3000 --no-browser
echo "Dashboard available at http://localhost:3000"
# Share URL with team
# Can be proxied through nginx/port forwarding
```
### Environment-Based Configuration
```bash
# Script for CI/CD
export CCW_VIEW_HOST=0.0.0.0
export CCW_VIEW_PORT=8080
ccw view --path /workspace
# → Dashboard accessible on port 8080 to all interfaces
```
## Dashboard Pages
### Overview Page (`/`)
- Current workflow status
- Active sessions summary
- Recent commands
- System health indicators
### Sessions Page (`/sessions`)
- All sessions (grouped by type)
- Session details and metadata
- Session logs viewer
- Quick actions (resume/complete)
### Tasks Page (`/tasks`)
- Current TODO items
- Progress tracking
- Inline task editing
- Workflow history
### Workspace Page (`/workspace`)
- Current workspace info
- Available workspaces
- Workspace switcher
- Workspace settings
### Settings Page (`/settings`)
- Port configuration
- Theme preferences
- Auto-refresh settings
- Export settings
## Server Health Monitoring
The dashboard includes health monitoring:
```bash
# Check health endpoint
curl http://localhost:3456/api/health
# → { "status": "ok", "uptime": 12345 }
# Monitor metrics
curl http://localhost:3456/api/metrics
# → { "sessions": 3, "tasks": 15, "lastUpdate": "2025-01-29T10:30:00Z" }
```
## Advanced Usage
### Custom Port with Dynamic Discovery
```bash
# Find next available port starting at 3456
# (note: find-available-port is not a standard command; probe with lsof instead)
available_port=3456
while lsof -i :"$available_port" >/dev/null 2>&1; do
  available_port=$((available_port + 1))
done
ccw view --port "$available_port"
# Display in CI/CD
echo "Dashboard: http://localhost:$available_port"
```
### Dashboard Behind Proxy
```bash
# Configure nginx reverse proxy
# Proxy http://proxy.example.com/dashboard → http://localhost:3456
ccw view --host 127.0.0.1 --port 3456
# Access via proxy
# http://proxy.example.com/dashboard
```
### Session Export from Dashboard
- View → Sessions → Export JSON
- Exports session metadata and progress
- Useful for record-keeping and reporting
## See Also
- **CCW Commands**: `/ccw` - Auto workflow orchestration
- **Session Management**: `/workflow:session:start`, `/workflow:session:list`
- **Task Tracking**: `TodoWrite` tool for programmatic task management
- **Workflow Status**: `/workflow:status` for CLI-based status view


@@ -1,485 +0,0 @@
---
name: plan-verify
description: Perform READ-ONLY verification analysis between IMPL_PLAN.md, task JSONs, and brainstorming artifacts. Generates structured report with quality gate recommendation. Does NOT modify any files.
argument-hint: "[optional: --session session-id]"
allowed-tools: Read(*), Write(*), Glob(*), Bash(*)
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Goal
Generate a comprehensive verification report that identifies inconsistencies, duplications, ambiguities, and underspecified items between action planning artifacts (`IMPL_PLAN.md`, `task.json`) and brainstorming artifacts (`role analysis documents`). This command MUST run only after `/workflow:plan` has successfully produced complete `IMPL_PLAN.md` and task JSON files.
**Output**: A structured Markdown report saved to `.workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md` containing:
- Executive summary with quality gate recommendation
- Detailed findings by severity (CRITICAL/HIGH/MEDIUM/LOW)
- Requirements coverage analysis
- Dependency integrity check
- Synthesis alignment validation
- Actionable remediation recommendations
## Operating Constraints
**STRICTLY READ-ONLY FOR SOURCE ARTIFACTS**:
- **MUST NOT** modify `IMPL_PLAN.md`, any `task.json` files, or brainstorming artifacts
- **MUST NOT** create or delete task files
- **MUST ONLY** write the verification report to `.process/ACTION_PLAN_VERIFICATION.md`
**Synthesis Authority**: The `role analysis documents` are **authoritative** for requirements and design decisions. Any conflicts between IMPL_PLAN/tasks and synthesis are automatically CRITICAL and require adjustment of the plan/tasks—not reinterpretation of requirements.
**Quality Gate Authority**: The verification report provides a binding recommendation (BLOCK_EXECUTION / PROCEED_WITH_FIXES / PROCEED_WITH_CAUTION / PROCEED) based on objective severity criteria. User MUST review critical/high issues before proceeding with implementation.
## Execution Steps
### 1. Initialize Analysis Context
```bash
# Detect active workflow session
IF --session parameter provided:
session_id = provided session
ELSE:
# Auto-detect active session
active_sessions = bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
IF active_sessions is empty:
ERROR: "No active workflow session found. Use --session <session-id>"
EXIT
ELSE IF active_sessions has multiple entries:
# Use most recently modified session
session_id = bash(ls -td .workflow/active/WFS-*/ 2>/dev/null | head -1 | xargs basename)
ELSE:
session_id = basename(active_sessions[0])
# Derive absolute paths
session_dir = .workflow/active/WFS-{session}
brainstorm_dir = session_dir/.brainstorming
task_dir = session_dir/.task
process_dir = session_dir/.process
session_file = session_dir/workflow-session.json
# Create .process directory if not exists (report output location)
IF NOT EXISTS(process_dir):
bash(mkdir -p "{process_dir}")
# Validate required artifacts
# Note: "role analysis documents" refers to [role]/analysis.md files (e.g., product-manager/analysis.md)
SYNTHESIS_DIR = brainstorm_dir # Contains role analysis files: */analysis.md
IMPL_PLAN = session_dir/IMPL_PLAN.md
TASK_FILES = Glob(task_dir/*.json)
# Abort if missing - in order of dependency
SESSION_FILE_EXISTS = EXISTS(session_file)
IF NOT SESSION_FILE_EXISTS:
WARNING: "workflow-session.json not found. User intent alignment verification will be skipped."
# Continue execution - this is optional context, not blocking
SYNTHESIS_FILES = Glob(brainstorm_dir/*/analysis.md)
IF SYNTHESIS_FILES.count == 0:
ERROR: "No role analysis documents found in .brainstorming/*/analysis.md. Run /workflow:brainstorm:synthesis first"
EXIT
IF NOT EXISTS(IMPL_PLAN):
ERROR: "IMPL_PLAN.md not found. Run /workflow:plan first"
EXIT
IF TASK_FILES.count == 0:
ERROR: "No task JSON files found. Run /workflow:plan first"
EXIT
```
### 2. Load Artifacts (Progressive Disclosure)
Load only minimal necessary context from each artifact:
**From workflow-session.json** (OPTIONAL - Primary Reference for User Intent):
- **ONLY IF EXISTS**: Load user intent context
- Original user prompt/intent (project or description field)
- User's stated goals and objectives
- User's scope definition
- **IF MISSING**: Set user_intent_analysis = "SKIPPED: workflow-session.json not found"
**From role analysis documents** (AUTHORITATIVE SOURCE):
- Functional Requirements (IDs, descriptions, acceptance criteria)
- Non-Functional Requirements (IDs, targets)
- Business Requirements (IDs, success metrics)
- Key Architecture Decisions
- Risk factors and mitigation strategies
- Implementation Roadmap (high-level phases)
**From IMPL_PLAN.md**:
- Summary and objectives
- Context Analysis
- Implementation Strategy
- Task Breakdown Summary
- Success Criteria
- Brainstorming Artifacts References (if present)
**From task.json files**:
- Task IDs
- Titles and descriptions
- Status
- Dependencies (depends_on, blocks)
- Context (requirements, focus_paths, acceptance, artifacts)
- Flow control (pre_analysis, implementation_approach)
- Meta (complexity, priority)
### 3. Build Semantic Models
Create internal representations (do not include raw artifacts in output):
**Requirements inventory**:
- Each functional/non-functional/business requirement with stable ID
- Requirement text, acceptance criteria, priority
**Architecture decisions inventory**:
- ADRs from synthesis
- Technology choices
- Data model references
**Task coverage mapping**:
- Map each task to one or more requirements (by ID reference or keyword inference)
- Map each requirement to covering tasks
**Dependency graph**:
- Task-to-task dependencies (depends_on, blocks)
- Requirement-level dependencies (from synthesis)
### 4. Detection Passes (Token-Efficient Analysis)
**Token Budget Strategy**:
- **Total Limit**: 50 findings maximum (aggregate remainder in overflow summary)
- **Priority Allocation**: CRITICAL (unlimited) → HIGH (15) → MEDIUM (20) → LOW (15)
- **Early Exit**: If CRITICAL findings > 0 in User Intent/Requirements Coverage, skip LOW/MEDIUM priority checks
**Execution Order** (Process in sequence; skip if token budget exhausted):
1. **Tier 1 (CRITICAL Path)**: A, B, C - User intent, coverage, consistency (process fully)
2. **Tier 2 (HIGH Priority)**: D, E - Dependencies, synthesis alignment (limit 15 findings total)
3. **Tier 3 (MEDIUM Priority)**: F - Specification quality (limit 20 findings)
4. **Tier 4 (LOW Priority)**: G, H - Duplication, feasibility (limit 15 findings total)
---
#### A. User Intent Alignment (CRITICAL - Tier 1)
- **Goal Alignment**: IMPL_PLAN objectives match user's original intent
- **Scope Drift**: Plan covers user's stated scope without unauthorized expansion
- **Success Criteria Match**: Plan's success criteria reflect user's expectations
- **Intent Conflicts**: Tasks contradicting user's original objectives
#### B. Requirements Coverage Analysis
- **Orphaned Requirements**: Requirements in synthesis with zero associated tasks
- **Unmapped Tasks**: Tasks with no clear requirement linkage
- **NFR Coverage Gaps**: Non-functional requirements (performance, security, scalability) not reflected in tasks
#### C. Consistency Validation
- **Requirement Conflicts**: Tasks contradicting synthesis requirements
- **Architecture Drift**: IMPL_PLAN architecture not matching synthesis ADRs
- **Terminology Drift**: Same concept named differently across IMPL_PLAN and tasks
- **Data Model Inconsistency**: Tasks referencing entities/fields not in synthesis data model
#### D. Dependency Integrity
- **Circular Dependencies**: Task A depends on B, B depends on C, C depends on A
- **Missing Dependencies**: Task requires outputs from another task but no explicit dependency
- **Broken Dependencies**: Task depends on non-existent task ID
- **Logical Ordering Issues**: Implementation tasks before foundational setup without dependency note
#### E. Synthesis Alignment
- **Priority Conflicts**: High-priority synthesis requirements mapped to low-priority tasks
- **Success Criteria Mismatch**: IMPL_PLAN success criteria not covering synthesis acceptance criteria
- **Risk Mitigation Gaps**: Critical risks in synthesis without corresponding mitigation tasks
#### F. Task Specification Quality
- **Ambiguous Focus Paths**: Tasks with vague or missing focus_paths
- **Underspecified Acceptance**: Tasks without clear acceptance criteria
- **Missing Artifacts References**: Tasks not referencing relevant brainstorming artifacts in context.artifacts
- **Weak Flow Control**: Tasks without clear implementation_approach or pre_analysis steps
- **Missing Target Files**: Tasks without flow_control.target_files specification
#### G. Duplication Detection
- **Overlapping Task Scope**: Multiple tasks with nearly identical descriptions
- **Redundant Requirements Coverage**: Same requirement covered by multiple tasks without clear partitioning
#### H. Feasibility Assessment
- **Complexity Misalignment**: Task marked "simple" but requires multiple file modifications
- **Resource Conflicts**: Parallel tasks requiring same resources/files
- **Skill Gap Risks**: Tasks requiring skills not in team capability assessment (from synthesis)
### 5. Severity Assignment
Use this heuristic to prioritize findings:
- **CRITICAL**:
- Violates user's original intent (goal misalignment, scope drift)
- Violates synthesis authority (requirement conflict)
- Core requirement with zero coverage
- Circular dependencies
- Broken dependencies
- **HIGH**:
- NFR coverage gaps
- Priority conflicts
- Missing risk mitigation tasks
- Ambiguous acceptance criteria
- **MEDIUM**:
- Terminology drift
- Missing artifacts references
- Weak flow control
- Logical ordering issues
- **LOW**:
- Style/wording improvements
- Minor redundancy not affecting execution
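Mapping these severity counts to the gate recommendation is mechanical; a minimal sketch (counts assumed already tallied):
```bash
critical=2; high=5; medium=8; low=3
if [ "$critical" -gt 0 ]; then echo "BLOCK_EXECUTION"
elif [ "$high" -gt 0 ]; then echo "PROCEED_WITH_FIXES"
elif [ "$medium" -gt 0 ]; then echo "PROCEED_WITH_CAUTION"
else echo "PROCEED"
fi
```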
### 6. Produce Compact Analysis Report
**Report Generation**: Generate report content and save to file.
Output a Markdown report with the following structure:
```markdown
## Action Plan Verification Report
**Session**: WFS-{session-id}
**Generated**: {timestamp}
**Artifacts Analyzed**: role analysis documents, IMPL_PLAN.md, {N} task files
---
### Executive Summary
- **Overall Risk Level**: CRITICAL | HIGH | MEDIUM | LOW
- **Recommendation**: (See decision matrix below)
- BLOCK_EXECUTION: Critical issues exist (must fix before proceeding)
- PROCEED_WITH_FIXES: High issues exist, no critical (fix recommended before execution)
- PROCEED_WITH_CAUTION: Medium issues only (proceed with awareness)
- PROCEED: Low issues only or no issues (safe to execute)
- **Critical Issues**: {count}
- **High Issues**: {count}
- **Medium Issues**: {count}
- **Low Issues**: {count}
---
### Findings Summary
| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| C1 | Coverage | CRITICAL | synthesis:FR-03 | Requirement "User auth" has zero task coverage | Add authentication implementation task |
| H1 | Consistency | HIGH | IMPL-1.2 vs synthesis:ADR-02 | Task uses REST while synthesis specifies GraphQL | Align task with ADR-02 decision |
| M1 | Specification | MEDIUM | IMPL-2.1 | Missing context.artifacts reference | Add @synthesis reference |
| L1 | Duplication | LOW | IMPL-3.1, IMPL-3.2 | Similar scope | Consider merging |
(Add one row per finding; generate stable IDs prefixed by severity initial.)
---
### Requirements Coverage Analysis
| Requirement ID | Requirement Summary | Has Task? | Task IDs | Priority Match | Notes |
|----------------|---------------------|-----------|----------|----------------|-------|
| FR-01 | User authentication | Yes | IMPL-1.1, IMPL-1.2 | Match | Complete |
| FR-02 | Data export | Yes | IMPL-2.3 | Mismatch | High req → Med priority task |
| FR-03 | Profile management | No | - | - | **CRITICAL: Zero coverage** |
| NFR-01 | Response time <200ms | No | - | - | **HIGH: No performance tasks** |
**Coverage Metrics**:
- Functional Requirements: 85% (17/20 covered)
- Non-Functional Requirements: 40% (2/5 covered)
- Business Requirements: 100% (5/5 covered)
---
### Unmapped Tasks
| Task ID | Title | Issue | Recommendation |
|---------|-------|-------|----------------|
| IMPL-4.5 | Refactor utils | No requirement linkage | Link to technical debt or remove |
---
### Dependency Graph Issues
**Circular Dependencies**: None detected
**Broken Dependencies**:
- IMPL-2.3 depends on "IMPL-2.4" (non-existent)
**Logical Ordering Issues**:
- IMPL-5.1 (integration test) has no dependency on IMPL-1.* (implementation tasks)
---
### Synthesis Alignment Issues
| Issue Type | Synthesis Reference | IMPL_PLAN/Task | Impact | Recommendation |
|------------|---------------------|----------------|--------|----------------|
| Architecture Conflict | synthesis:ADR-01 (JWT auth) | IMPL_PLAN uses session cookies | HIGH | Update IMPL_PLAN to use JWT |
| Priority Mismatch | synthesis:FR-02 (High) | IMPL-2.3 (Medium) | MEDIUM | Elevate task priority |
| Missing Risk Mitigation | synthesis:Risk-03 (API rate limits) | No mitigation tasks | HIGH | Add rate limiting implementation task |
---
### Task Specification Quality Issues
**Missing Artifacts References**: 12 tasks lack context.artifacts
**Weak Flow Control**: 5 tasks lack implementation_approach
**Missing Target Files**: 8 tasks lack flow_control.target_files
**Sample Issues**:
- IMPL-1.2: No context.artifacts reference to synthesis
- IMPL-3.1: Missing flow_control.target_files specification
- IMPL-4.2: Vague focus_paths ["src/"] - needs refinement
---
### Feasibility Concerns
| Concern | Tasks Affected | Issue | Recommendation |
|---------|----------------|-------|----------------|
| Skill Gap | IMPL-6.1, IMPL-6.2 | Requires Kubernetes expertise not in team | Add training task or external consultant |
| Resource Conflict | IMPL-3.1, IMPL-3.2 | Both modify src/auth/service.ts in parallel | Add dependency or serialize |
---
### Metrics
- **Total Requirements**: 30 (20 functional, 5 non-functional, 5 business)
- **Total Tasks**: 25
- **Overall Coverage**: 80% (24/30 requirements with ≥1 task)
- **Critical Issues**: 2
- **High Issues**: 5
- **Medium Issues**: 8
- **Low Issues**: 3
---
### Next Actions
#### Action Recommendations
**Recommendation Decision Matrix**:
| Condition | Recommendation | Action |
|-----------|----------------|--------|
| Critical > 0 | BLOCK_EXECUTION | Must resolve all critical issues before proceeding |
| Critical = 0, High > 0 | PROCEED_WITH_FIXES | Fix high-priority issues before execution |
| Critical = 0, High = 0, Medium > 0 | PROCEED_WITH_CAUTION | Proceed with awareness of medium issues |
| Only Low or None | PROCEED | Safe to execute workflow |
**If CRITICAL Issues Exist** (BLOCK_EXECUTION):
- Resolve all critical issues before proceeding
- Use TodoWrite to track required fixes
- Fix broken dependencies and circular references first
**If HIGH Issues Exist** (PROCEED_WITH_FIXES):
- Fix high-priority issues before execution
- Use TodoWrite to systematically track and complete improvements
**If Only MEDIUM/LOW Issues** (PROCEED_WITH_CAUTION / PROCEED):
- Can proceed with execution
- Address issues during or after implementation
#### TodoWrite-Based Remediation Workflow
**Report Location**: `.workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md`
**Recommended Workflow**:
1. **Create TodoWrite Task List**: Extract all findings from report
2. **Process by Priority**: CRITICAL → HIGH → MEDIUM → LOW
3. **Complete Each Fix**: Mark tasks as in_progress/completed as you work
4. **Validate Changes**: Verify each modification against requirements
**TodoWrite Task Structure Example**:
```markdown
Priority Order:
1. Fix coverage gaps (CRITICAL)
2. Resolve consistency conflicts (CRITICAL/HIGH)
3. Add missing specifications (MEDIUM)
4. Improve task quality (LOW)
```
**Notes**:
- TodoWrite provides real-time progress tracking
- Each finding becomes a trackable todo item
- User can monitor progress throughout remediation
- Architecture drift in IMPL_PLAN requires manual editing
```
### 7. Save Report and Execute TodoWrite-Based Remediation
**Step 7.1: Save Analysis Report**:
```bash
report_path = ".workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
Write(report_path, full_report_content)
```
**Step 7.2: Display Report Summary to User**:
- Show executive summary with counts
- Display recommendation (BLOCK/PROCEED_WITH_FIXES/PROCEED_WITH_CAUTION/PROCEED)
- List critical and high issues if any
**Step 7.3: After Report Generation**:
1. **Extract Findings**: Parse all issues by severity
2. **Create TodoWrite Task List**: Convert findings to actionable todos
3. **Execute Fixes**: Process each todo systematically
4. **Update Task Files**: Apply modifications directly to task JSON files
5. **Update IMPL_PLAN**: Apply strategic changes if needed
At end of report, provide remediation guidance:
```markdown
### 🔧 Remediation Workflow
**Recommended Approach**:
1. **Initialize TodoWrite**: Create comprehensive task list from all findings
2. **Process by Severity**: Start with CRITICAL, then HIGH, MEDIUM, LOW
3. **Apply Fixes Directly**: Modify task.json files and IMPL_PLAN.md as needed
4. **Track Progress**: Mark todos as completed after each fix
**TodoWrite Execution Pattern**:
```bash
# Step 1: Create task list from verification report
TodoWrite([
{ content: "Fix FR-03 coverage gap - add authentication task", status: "pending", activeForm: "Fixing FR-03 coverage gap" },
{ content: "Fix IMPL-1.2 consistency - align with ADR-02", status: "pending", activeForm: "Fixing IMPL-1.2 consistency" },
{ content: "Add context.artifacts to IMPL-1.2", status: "pending", activeForm: "Adding context.artifacts to IMPL-1.2" },
# ... additional todos for each finding
])
# Step 2: Process each todo systematically
# Mark as in_progress when starting
# Apply fix using Read/Edit tools
# Mark as completed when done
# Move to next priority item
```
**File Modification Workflow**:
```bash
# For task JSON modifications:
1. Read(.workflow/active/WFS-{session}/.task/IMPL-X.Y.json)
2. Edit() to apply fixes
3. Mark todo as completed
# For IMPL_PLAN modifications:
1. Read(.workflow/active/WFS-{session}/IMPL_PLAN.md)
2. Edit() to apply strategic changes
3. Mark todo as completed
```
**Note**: All fixes execute immediately after user confirmation without additional commands.


@@ -0,0 +1,804 @@
---
name: analyze-with-file
description: Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding
argument-hint: "[-y|--yes] [-c|--continue] \"topic or question\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm exploration decisions, use recommended analysis angles.
# Workflow Analyze-With-File Command (/workflow:analyze-with-file)
## Overview
Interactive collaborative analysis workflow with **documented discussion process**. Records understanding evolution, facilitates multi-round Q&A, and uses CLI tools (Gemini/Codex) for deep exploration.
**Core workflow**: Topic → Explore → Discuss → Document → Refine → Conclude
**Key features**:
- **discussion.md**: Timeline of discussions and understanding evolution
- **Multi-round Q&A**: Iterative clarification with user
- **CLI-assisted exploration**: Gemini/Codex for codebase and concept analysis
- **Consolidated insights**: Synthesizes discussions into actionable conclusions
- **Flexible continuation**: Resume analysis sessions to build on previous work
## Usage
```bash
/workflow:analyze-with-file [FLAGS] <TOPIC_OR_QUESTION>
# Flags
-y, --yes Skip confirmations, use recommended settings
-c, --continue Continue existing session (auto-detected if exists)
# Arguments
<topic-or-question> Analysis topic, question, or concept to explore (required)
# Examples
/workflow:analyze-with-file "如何优化这个项目的认证架构"
/workflow:analyze-with-file --continue "认证架构" # Continue existing session
/workflow:analyze-with-file -y "性能瓶颈分析" # Auto mode
```
## Execution Process
```
Session Detection:
├─ Check if analysis session exists for topic
├─ EXISTS + discussion.md exists → Continue mode
└─ NOT_FOUND → New session mode
Phase 1: Topic Understanding
├─ Parse topic/question
├─ Identify analysis dimensions (architecture, implementation, concept, etc.)
├─ Initial scoping with user (AskUserQuestion)
└─ Document initial understanding in discussion.md
Phase 2: CLI Exploration (Parallel)
├─ Launch cli-explore-agent for codebase context
├─ Use Gemini/Codex for deep analysis
└─ Aggregate findings into exploration summary
Phase 3: Interactive Discussion (Multi-Round)
├─ Present exploration findings
├─ Facilitate Q&A with user (AskUserQuestion)
├─ Capture user insights and requirements
├─ Update discussion.md with each round
└─ Repeat until user is satisfied or clarity achieved
Phase 4: Synthesis & Conclusion
├─ Consolidate all insights
├─ Update discussion.md with conclusions
├─ Generate actionable recommendations
└─ Optional: Create follow-up tasks or issues
Output:
├─ .workflow/.analysis/{slug}-{date}/discussion.md (evolving document)
├─ .workflow/.analysis/{slug}-{date}/explorations.json (CLI findings)
└─ .workflow/.analysis/{slug}-{date}/conclusions.json (final synthesis)
```
## Implementation
### Session Setup & Mode Detection
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const topicSlug = topic_or_question.toLowerCase().replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-').substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `ANL-${topicSlug}-${dateStr}`
const sessionFolder = `.workflow/.analysis/${sessionId}`
const discussionPath = `${sessionFolder}/discussion.md`
const explorationsPath = `${sessionFolder}/explorations.json`
const conclusionsPath = `${sessionFolder}/conclusions.json`
// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const hasDiscussion = sessionExists && fs.existsSync(discussionPath)
const forcesContinue = $ARGUMENTS.includes('--continue') || $ARGUMENTS.includes('-c')
const mode = (hasDiscussion || forcesContinue) ? 'continue' : 'new'
if (!sessionExists) {
bash(`mkdir -p ${sessionFolder}`)
}
```
---
### Phase 1: Topic Understanding
**Step 1.1: Parse Topic & Identify Dimensions**
```javascript
// Analyze topic to determine analysis dimensions
const ANALYSIS_DIMENSIONS = {
architecture: ['架构', 'architecture', 'design', 'structure', '设计'],
implementation: ['实现', 'implement', 'code', 'coding', '代码'],
performance: ['性能', 'performance', 'optimize', 'bottleneck', '优化'],
security: ['安全', 'security', 'auth', 'permission', '权限'],
concept: ['概念', 'concept', 'theory', 'principle', '原理'],
comparison: ['比较', 'compare', 'vs', 'difference', '区别'],
decision: ['决策', 'decision', 'choice', 'tradeoff', '选择']
}
function identifyDimensions(topic) {
const text = topic.toLowerCase()
const matched = []
for (const [dimension, keywords] of Object.entries(ANALYSIS_DIMENSIONS)) {
if (keywords.some(k => text.includes(k))) {
matched.push(dimension)
}
}
return matched.length > 0 ? matched : ['general']
}
const dimensions = identifyDimensions(topic_or_question)
```
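For example, a Chinese topic that mentions both optimization and architecture matches two dimensions (the keyword tables above are checked in declaration order, and the CJK keywords are why the slug regex keeps the `\u4e00-\u9fa5` range):
```javascript
identifyDimensions('如何优化这个项目的认证架构')
// => ['architecture', 'performance']  ('架构' and '优化' both match)
identifyDimensions('general housekeeping')
// => ['general']  (no keyword hits fall back to 'general')
```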
**Step 1.2: Initial Scoping (New Session Only)**
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (mode === 'new' && !autoYes) {
// Ask user to scope the analysis
AskUserQuestion({
questions: [
{
question: `Analysis scope: "${topic_or_question}"\n\nWhich aspects would you like to focus on?`,
header: "Focus",
multiSelect: true,
options: [
{ label: "Code Implementation", description: "Analyze the existing code implementation" },
{ label: "Architecture Design", description: "Architecture-level analysis" },
{ label: "Best Practices", description: "Compare against industry best practices" },
{ label: "Problem Diagnosis", description: "Identify potential issues" }
]
},
{
question: "How deep should the analysis go?",
header: "Depth",
multiSelect: false,
options: [
{ label: "Quick Overview", description: "Quick overview (10-15 minutes)" },
{ label: "Standard Analysis", description: "Standard analysis (30-60 minutes)" },
{ label: "Deep Dive", description: "Deep dive (1-2 hours)" }
]
}
]
})
}
```
**Step 1.3: Create/Update discussion.md**
For new session:
```markdown
# Analysis Discussion
**Session ID**: ${sessionId}
**Topic**: ${topic_or_question}
**Started**: ${getUtc8ISOString()}
**Dimensions**: ${dimensions.join(', ')}
---
## User Context
**Focus Areas**: ${userFocusAreas.join(', ')}
**Analysis Depth**: ${analysisDepth}
---
## Discussion Timeline
### Round 1 - Initial Understanding (${timestamp})
#### Topic Analysis
Based on the topic "${topic_or_question}":
- **Primary dimensions**: ${dimensions.join(', ')}
- **Initial scope**: ${initialScope}
- **Key questions to explore**:
- ${question1}
- ${question2}
- ${question3}
#### Next Steps
- Launch CLI exploration for codebase context
- Gather external insights via Gemini
- Prepare discussion points for user
---
## Current Understanding
${initialUnderstanding}
```
For continue session, append:
```markdown
### Round ${n} - Continuation (${timestamp})
#### Previous Context
Resuming analysis based on prior discussion.
#### New Focus
${newFocusFromUser}
```
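A minimal sketch of the write/append behind Step 1.3, using Node's `fs` (the rendered template strings above are passed in as `section`; the helper name is illustrative):
```javascript
const fs = require('fs')

function writeDiscussionRound(discussionPath, mode, section) {
  if (mode === 'new') {
    fs.writeFileSync(discussionPath, section, 'utf8') // create with the header template
  } else {
    fs.appendFileSync(discussionPath, `\n${section}\n`, 'utf8') // append a new round
  }
}
```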
---
### Phase 2: CLI Exploration
**Step 2.1: Launch Parallel Explorations**
```javascript
const explorationPromises = []
// CLI Explore Agent for codebase
if (dimensions.includes('implementation') || dimensions.includes('architecture')) {
explorationPromises.push(
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Explore codebase: ${topicSlug}`,
prompt=`
## Analysis Context
Topic: ${topic_or_question}
Dimensions: ${dimensions.join(', ')}
Session: ${sessionFolder}
## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Execute relevant searches based on topic keywords
3. Read: .workflow/project-tech.json (if exists)
## Exploration Focus
${dimensions.map(d => `- ${d}: Identify relevant code patterns and structures`).join('\n')}
## Output
Write findings to: ${sessionFolder}/exploration-codebase.json
Schema:
{
"relevant_files": [{path, relevance, rationale}],
"patterns": [],
"key_findings": [],
"questions_for_user": [],
"_metadata": { "exploration_type": "codebase", "timestamp": "..." }
}
`
)
)
}
// Gemini CLI for deep analysis
explorationPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Analyze topic '${topic_or_question}' from ${dimensions.join(', ')} perspectives
Success criteria: Actionable insights with clear reasoning
TASK:
• Identify key considerations for this topic
• Analyze common patterns and anti-patterns
• Highlight potential issues or opportunities
• Generate discussion points for user clarification
MODE: analysis
CONTEXT: @**/* | Topic: ${topic_or_question}
EXPECTED:
- Structured analysis with clear sections
- Specific insights tied to evidence
- Questions to deepen understanding
- Recommendations with rationale
CONSTRAINTS: Focus on ${dimensions.join(', ')}
" --tool gemini --mode analysis`,
run_in_background: true
})
)
```
**Step 2.2: Aggregate Findings**
```javascript
// After explorations complete, aggregate into explorations.json
const explorations = {
session_id: sessionId,
timestamp: getUtc8ISOString(),
topic: topic_or_question,
dimensions: dimensions,
sources: [
{ type: "codebase", file: "exploration-codebase.json" },
{ type: "gemini", summary: geminiOutput }
],
key_findings: [...],
discussion_points: [...],
open_questions: [...]
}
Write(explorationsPath, JSON.stringify(explorations, null, 2))
```
**Step 2.3: Update discussion.md**
```markdown
#### Exploration Results (${timestamp})
**Sources Analyzed**:
${sources.map(s => `- ${s.type}: ${s.summary}`).join('\n')}
**Key Findings**:
${keyFindings.map((f, i) => `${i+1}. ${f}`).join('\n')}
**Points for Discussion**:
${discussionPoints.map((p, i) => `${i+1}. ${p}`).join('\n')}
**Open Questions**:
${openQuestions.map((q, i) => `- ${q}`).join('\n')}
```
---
### Phase 3: Interactive Discussion (Multi-Round)
**Step 3.1: Present Findings & Gather Feedback**
```javascript
// Maximum discussion rounds
const MAX_ROUNDS = 5
let roundNumber = 1
let discussionComplete = false
while (!discussionComplete && roundNumber <= MAX_ROUNDS) {
// Display current findings
console.log(`
## Discussion Round ${roundNumber}
${currentFindings}
### Key Points for Your Input
${discussionPoints.map((p, i) => `${i+1}. ${p}`).join('\n')}
`)
// Gather user input
const userResponse = AskUserQuestion({
questions: [
{
question: "对以上分析有什么看法或补充?",
header: "Feedback",
multiSelect: false,
options: [
{ label: "同意,继续深入", description: "分析方向正确,继续探索" },
{ label: "需要调整方向", description: "我有不同的理解或重点" },
{ label: "分析完成", description: "已获得足够信息" },
{ label: "有具体问题", description: "我想问一些具体问题" }
]
}
]
})
// Process user response
switch (userResponse.feedback) {
case "同意,继续深入":
// Deepen analysis in current direction
await deepenAnalysis()
break
case "需要调整方向":
// Get user's adjusted focus
const adjustment = AskUserQuestion({
questions: [{
question: "请说明您希望调整的方向或重点:",
header: "Direction",
multiSelect: false,
options: [
{ label: "更多代码细节", description: "深入代码实现" },
{ label: "更多架构视角", description: "关注整体设计" },
{ label: "更多实践对比", description: "对比最佳实践" }
]
}]
})
await adjustAnalysisDirection(adjustment)
break
case "分析完成":
discussionComplete = true
break
case "有具体问题":
// Let user ask specific questions, then answer
await handleUserQuestions()
break
}
// Update discussion.md with this round
updateDiscussionDocument(roundNumber, userResponse, findings)
roundNumber++
}
```
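The loop relies on helpers defined by the executing agent (`deepenAnalysis`, `adjustAnalysisDirection`, `handleUserQuestions`); a minimal sketch of `updateDiscussionDocument`, reusing `getUtc8ISOString` and `discussionPath` from the session setup and assuming `findings` is a string array:
```javascript
const fs = require('fs')

function updateDiscussionDocument(roundNumber, userResponse, findings) {
  const section = [
    `### Round ${roundNumber} - Discussion (${getUtc8ISOString()})`,
    '#### User Input',
    userResponse.feedback,
    '#### Updated Understanding',
    ...findings.map(f => `- ${f}`)
  ].join('\n')
  fs.appendFileSync(discussionPath, `\n${section}\n`, 'utf8')
}
```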
**Step 3.2: Document Each Round**
Append to discussion.md:
```markdown
### Round ${n} - Discussion (${timestamp})
#### User Input
${userInputSummary}
${userResponse === 'adjustment' ? `
**Direction Adjustment**: ${adjustmentDetails}
` : ''}
${userResponse === 'questions' ? `
**User Questions**:
${userQuestions.map((q, i) => `${i+1}. ${q}`).join('\n')}
**Answers**:
${answers.map((a, i) => `${i+1}. ${a}`).join('\n')}
` : ''}
#### Updated Understanding
Based on user feedback:
- ${insight1}
- ${insight2}
#### Corrected Assumptions
${corrections.length > 0 ? corrections.map(c => `
- ~~${c.wrong}~~ → ${c.corrected}
- Reason: ${c.reason}
`).join('\n') : 'None'}
#### New Insights
${newInsights.map(i => `- ${i}`).join('\n')}
```
---
### Phase 4: Synthesis & Conclusion
**Step 4.1: Consolidate Insights**
```javascript
const conclusions = {
session_id: sessionId,
topic: topic_or_question,
completed: getUtc8ISOString(),
total_rounds: roundNumber,
summary: "...",
key_conclusions: [
{ point: "...", evidence: "...", confidence: "high|medium|low" }
],
recommendations: [
{ action: "...", rationale: "...", priority: "high|medium|low" }
],
open_questions: [...],
follow_up_suggestions: [
{ type: "issue", summary: "..." },
{ type: "task", summary: "..." }
]
}
Write(conclusionsPath, JSON.stringify(conclusions, null, 2))
```
**Step 4.2: Final discussion.md Update**
```markdown
---
## Conclusions (${timestamp})
### Summary
${summaryParagraph}
### Key Conclusions
${conclusions.key_conclusions.map((c, i) => `
${i+1}. **${c.point}** (Confidence: ${c.confidence})
- Evidence: ${c.evidence}
`).join('\n')}
### Recommendations
${conclusions.recommendations.map((r, i) => `
${i+1}. **${r.action}** (Priority: ${r.priority})
- Rationale: ${r.rationale}
`).join('\n')}
### Remaining Questions
${conclusions.open_questions.map(q => `- ${q}`).join('\n')}
---
## Current Understanding (Final)
### What We Established
${establishedPoints.map(p => `- ${p}`).join('\n')}
### What Was Clarified/Corrected
${corrections.map(c => `- ~~${c.original}~~ → ${c.corrected}`).join('\n')}
### Key Insights
${keyInsights.map(i => `- ${i}`).join('\n')}
---
## Session Statistics
- **Total Rounds**: ${totalRounds}
- **Duration**: ${duration}
- **Sources Used**: ${sources.join(', ')}
- **Artifacts Generated**: discussion.md, explorations.json, conclusions.json
```
**Step 4.3: Post-Completion Options**
```javascript
AskUserQuestion({
questions: [{
question: "分析完成。是否需要后续操作?",
header: "Next Steps",
multiSelect: true,
options: [
{ label: "创建Issue", description: "将结论转为可执行的Issue" },
{ label: "生成任务", description: "创建实施任务" },
{ label: "导出报告", description: "生成独立的分析报告" },
{ label: "完成", description: "不需要后续操作" }
]
}]
})
// Handle selections
if (selection.includes("创建Issue")) {
SlashCommand("/issue:new", `${topic_or_question} - 分析结论实施`)
}
if (selection.includes("生成任务")) {
SlashCommand("/workflow:lite-plan", `实施分析结论: ${summary}`)
}
if (selection.includes("导出报告")) {
exportAnalysisReport(sessionFolder)
}
```
---
## Session Folder Structure
```
.workflow/.analysis/ANL-{slug}-{date}/
├── discussion.md # Evolution of understanding & discussions
├── explorations.json # CLI exploration findings
├── conclusions.json # Final synthesis
└── exploration-*.json # Individual exploration results (optional)
```
## Discussion Document Template
```markdown
# Analysis Discussion
**Session ID**: ANL-xxx-2025-01-25
**Topic**: [topic or question]
**Started**: 2025-01-25T10:00:00+08:00
**Dimensions**: [architecture, implementation, ...]
---
## User Context
**Focus Areas**: [user-selected focus]
**Analysis Depth**: [quick|standard|deep]
---
## Discussion Timeline
### Round 1 - Initial Understanding (2025-01-25 10:00)
#### Topic Analysis
...
#### Exploration Results
...
### Round 2 - Discussion (2025-01-25 10:15)
#### User Input
...
#### Updated Understanding
...
#### Corrected Assumptions
- ~~[wrong]~~ → [corrected]
### Round 3 - Deep Dive (2025-01-25 10:30)
...
---
## Conclusions (2025-01-25 11:00)
### Summary
...
### Key Conclusions
...
### Recommendations
...
---
## Current Understanding (Final)
### What We Established
- [confirmed points]
### What Was Clarified/Corrected
- ~~[original assumption]~~ → [corrected understanding]
### Key Insights
- [insights gained]
---
## Session Statistics
- **Total Rounds**: 3
- **Duration**: 1 hour
- **Sources Used**: codebase exploration, Gemini analysis
- **Artifacts Generated**: discussion.md, explorations.json, conclusions.json
```
## Iteration Flow
```
First Call (/workflow:analyze-with-file "topic"):
├─ No session exists → New mode
├─ Identify analysis dimensions
├─ Scope with user (unless --yes)
├─ Create discussion.md with initial understanding
├─ Launch CLI explorations
└─ Enter discussion loop
Continue Call (/workflow:analyze-with-file --continue "topic"):
├─ Session exists → Continue mode
├─ Load discussion.md
├─ Resume from last round
└─ Continue discussion loop
Discussion Loop:
├─ Present current findings
├─ Gather user feedback (AskUserQuestion)
├─ Process response:
│ ├─ Agree → Deepen analysis
│ ├─ Adjust → Change direction
│ ├─ Question → Answer then continue
│ └─ Complete → Exit loop
├─ Update discussion.md
└─ Repeat until complete or max rounds
Completion:
├─ Generate conclusions.json
├─ Update discussion.md with final synthesis
└─ Offer follow-up options (issue, task, report)
```
## CLI Integration Points
### 1. Codebase Exploration (cli-explore-agent)
**Purpose**: Gather relevant code context
**When**: Topic involves implementation or architecture analysis
### 2. Gemini Deep Analysis
**Purpose**: Conceptual analysis, pattern identification, best practices
**Prompt Pattern**:
```
PURPOSE: Analyze topic + identify insights
TASK: Explore dimensions + generate discussion points
CONTEXT: Codebase + topic
EXPECTED: Structured analysis + questions
```
### 3. Follow-up CLI Calls
**Purpose**: Deepen specific areas based on user feedback
**Dynamic invocation** based on discussion direction
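A sketch of how the adjustment choice might be mapped to a narrower follow-up prompt before re-invoking `ccw cli` as in Phase 2 (the focus wording is illustrative):
```javascript
const FOLLOW_UP_FOCUS = {
  'More code detail': 'Trace the concrete implementation paths already identified',
  'More architectural perspective': 'Evaluate module boundaries and communication patterns',
  'More practice comparison': 'Contrast the current approach with industry best practices'
}

function buildFollowUpPrompt(adjustment, topic) {
  const focus = FOLLOW_UP_FOCUS[adjustment] || 'Deepen the current analysis direction'
  return `PURPOSE: Follow-up analysis for '${topic}'\nTASK: ${focus}\nMODE: analysis`
}
```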
## Consolidation Rules
When updating "Current Understanding":
1. **Promote confirmed insights**: Move validated findings to "What We Established"
2. **Track corrections**: Keep important wrong→right transformations
3. **Focus on current state**: Document what we know NOW
4. **Avoid timeline repetition**: Don't copy discussion details
5. **Preserve key learnings**: Keep insights valuable for future reference
**Bad (cluttered)**:
```markdown
## Current Understanding
In round 1 we discussed X, then in round 2 user said Y, and we explored Z...
```
**Good (consolidated)**:
```markdown
## Current Understanding
### What We Established
- The authentication flow uses JWT with refresh tokens
- Rate limiting is implemented at API gateway level
### What Was Clarified
- ~~Assumed Redis for sessions~~ → Actually uses database-backed sessions
### Key Insights
- Current architecture supports horizontal scaling
- Security audit recommended before production
```
## Error Handling
| Situation | Action |
|-----------|--------|
| CLI exploration fails | Continue with available context, note limitation |
| User timeout in discussion | Save state, show resume command |
| Max rounds reached | Force synthesis, offer continuation option |
| No relevant findings | Broaden search, ask user for clarification |
| Session folder conflict | Append timestamp suffix |
| Gemini unavailable | Fallback to Codex or manual analysis |
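A sketch of the Gemini-to-Codex fallback row, wrapping the same `ccw cli` invocation used in Phase 2 (the wrapper itself is an assumption, not an existing API):
```javascript
const { execFile } = require('child_process')
const { promisify } = require('util')
const execFileAsync = promisify(execFile)

// Hypothetical wrapper over `ccw cli`; flags mirror the Phase 2 invocation
async function runCli(tool, prompt) {
  const { stdout } = await execFileAsync('ccw', ['cli', '-p', prompt, '--tool', tool, '--mode', 'analysis'])
  return stdout
}

async function exploreWithFallback(prompt) {
  try {
    return await runCli('gemini', prompt)
  } catch (err) {
    // Gemini unavailable → fall back to Codex and note the limitation in discussion.md
    console.warn(`gemini failed (${err.message}); falling back to codex`)
    return runCli('codex', prompt)
  }
}
```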
## Usage Recommendations
Use `/workflow:analyze-with-file` when:
- Exploring a complex topic collaboratively
- Need documented discussion trail
- Decision-making requires multiple perspectives
- Want to iterate on understanding with user input
- Building shared understanding before implementation
Use `/workflow:debug-with-file` when:
- Diagnosing specific bugs
- Need hypothesis-driven investigation
- Focus on evidence and verification
Use `/workflow:lite-plan` when:
- Ready to implement (past analysis phase)
- Need structured task breakdown
- Focus on execution planning

File diff suppressed because it is too large

View File

@@ -1,587 +0,0 @@
---
name: api-designer
description: Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🔌 **API Designer Analysis Generator**
### Purpose
**Specialized command for generating api-designer/analysis.md** that addresses guidance-specification.md discussion points from backend API design perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **API Design Focus**: RESTful/GraphQL API design, endpoint structure, and contract definition
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **API Architecture**: RESTful/GraphQL design patterns and best practices
- **Endpoint Design**: Resource modeling, URL structure, and HTTP method selection
- **Data Contracts**: Request/response schemas, validation rules, and data transformation
- **API Documentation**: OpenAPI/Swagger specifications and developer experience
### Role Boundaries & Responsibilities
#### **What This Role OWNS (API Contract Within Architectural Framework)**
- **API Contract Definition**: Specific endpoint paths, HTTP methods, and status codes
- **Resource Modeling**: Mapping domain entities to RESTful resources or GraphQL types
- **Request/Response Schemas**: Detailed data contracts, validation rules, and error formats
- **API Versioning Strategy**: Version management, deprecation policies, and migration paths
- **Developer Experience**: API documentation (OpenAPI/Swagger), code examples, and SDKs
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **System Architecture Decisions**: Microservices vs monolithic, overall communication patterns → Defers to **System Architect**
- **Canonical Data Model**: Underlying data schemas and entity relationships → Defers to **Data Architect**
- **UI/Frontend Integration**: How clients consume the API → Defers to **UI Designer**
#### **Handoff Points**
- **FROM System Architect**: Receives architectural constraints (REST vs GraphQL, sync vs async) that define the design space
- **FROM Data Architect**: Receives canonical data model and translates it into public API data contracts (as projection/view)
- **TO Frontend Teams**: Provides complete API specifications, documentation, and integration guides
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
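A compact sketch of Phase 1 in plain Node, following the directory layout above (the first matching session is picked for brevity):
```javascript
const fs = require('fs')
const path = require('path')

function detectFramework(topicProvided) {
  const active = '.workflow/active'
  const sessions = fs.existsSync(active)
    ? fs.readdirSync(active).filter(d => d.startsWith('WFS-'))
    : []
  if (sessions.length === 0) return { frameworkMode: false, brainstormDir: null }
  const brainstormDir = path.join(active, sessions[0], '.brainstorming')
  const frameworkMode = fs.existsSync(path.join(brainstormDir, 'guidance-specification.md'))
  if (!frameworkMode && !topicProvided) {
    throw new Error('No framework found and no topic provided')
  }
  return { frameworkMode, brainstormDir }
}
```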
### Phase 2: Analysis Mode Detection
```bash
# Check existing analysis
CHECK: brainstorm_dir/api-designer/analysis.md
IF EXISTS:
SHOW existing analysis summary
ASK: "Analysis exists. Do you want to:"
OPTIONS:
1. "Update with new insights" → Update existing
2. "Replace completely" → Generate new
3. "Cancel" → Exit without changes
ELSE:
CREATE new analysis
```
### Phase 3: Agent Task Generation
**Framework-Based Analysis** (when guidance-specification.md exists):
```bash
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Generate API designer analysis addressing topic framework
## Framework Integration Required
**MANDATORY**: Load and address guidance-specification.md discussion points
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
**Output Location**: {session.brainstorm_dir}/api-designer/analysis.md
## Analysis Requirements
1. **Load Topic Framework**: Read guidance-specification.md completely
2. **Address Each Discussion Point**: Respond to all 5 framework sections from API design perspective
3. **Include Framework Reference**: Start analysis.md with @../guidance-specification.md
4. **API Design Focus**: Emphasize endpoint structure, data contracts, versioning strategies
5. **Structured Response**: Use framework structure for analysis organization
## Framework Sections to Address
- Core Requirements (from API design perspective)
- Technical Considerations (detailed API architecture analysis)
- User Experience Factors (developer experience and API usability)
- Implementation Challenges (API design risks and solutions)
- Success Metrics (API performance metrics and adoption tracking)
## Output Structure Required
```markdown
# API Designer Analysis: [Topic]
**Framework Reference**: @../guidance-specification.md
**Role Focus**: Backend API Design and Contract Definition
## Core Requirements Analysis
[Address framework requirements from API design perspective]
## Technical Considerations
[Detailed API architecture and endpoint design analysis]
## Developer Experience Factors
[API usability, documentation, and integration ease]
## Implementation Challenges
[API design risks and mitigation strategies]
## Success Metrics
[API performance metrics, adoption rates, and developer satisfaction]
## API Design-Specific Recommendations
[Detailed API design recommendations and best practices]
```",
description="Generate API designer framework-based analysis")
```
### Phase 4: Update Mechanism
**Analysis Update Process**:
```bash
# For existing analysis updates
IF update_mode = "incremental":
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Update existing API designer analysis
## Current Analysis Context
**Existing Analysis**: @{session.brainstorm_dir}/api-designer/analysis.md
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
## Update Requirements
1. **Preserve Structure**: Maintain existing analysis structure
2. **Add New Insights**: Integrate new API design insights and recommendations
3. **Framework Alignment**: Ensure continued alignment with topic framework
4. **API Updates**: Add new endpoint patterns, versioning strategies, documentation improvements
5. **Maintain References**: Keep @../guidance-specification.md reference
## Update Instructions
- Read existing analysis completely
- Identify areas for enhancement or new insights
- Add API design depth while preserving original structure
- Update recommendations with new API design patterns and approaches
- Maintain framework discussion point addressing",
description="Update API designer analysis incrementally")
```
## Document Structure
### Output Files
```
.workflow/active/WFS-[topic]/.brainstorming/
├── guidance-specification.md # Input: Framework (if exists)
└── api-designer/
└── analysis.md # ★ OUTPUT: Framework-based analysis
```
### Analysis Structure
**Required Elements**:
- **Framework Reference**: @../guidance-specification.md (if framework exists)
- **Role Focus**: Backend API Design and Contract Definition perspective
- **5 Framework Sections**: Address each framework discussion point
- **API Design Recommendations**: Endpoint-specific insights and solutions
## ⚡ **Two-Step Execution Flow**
### ⚠️ Session Management - FIRST STEP
Session detection and selection:
```bash
# Check for active sessions
active_sessions=$(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
if [ multiple_sessions ]; then
prompt_user_to_select_session()
else
use_existing_or_create_new()
fi
```
### Step 1: Context Gathering Phase
**API Designer Perspective Questioning**
Before agent assignment, gather comprehensive API design context:
#### 📋 Role-Specific Questions
1. **API Type & Architecture**
- RESTful, GraphQL, or hybrid API approach?
- Synchronous vs asynchronous communication patterns?
- Real-time requirements (WebSocket, Server-Sent Events)?
2. **Resource Modeling & Endpoints**
- What are the core domain resources/entities?
- Expected CRUD operations for each resource?
- Complex query requirements (filtering, sorting, pagination)?
3. **Data Contracts & Validation**
- Request/response data format requirements (JSON, XML, Protocol Buffers)?
- Input validation and sanitization requirements?
- Data transformation and mapping needs?
4. **API Management & Governance**
- API versioning strategy requirements?
- Authentication and authorization mechanisms?
- Rate limiting and throttling requirements?
- API documentation and developer portal needs?
5. **Integration & Compatibility**
- Client platforms consuming the API (web, mobile, third-party)?
- Backward compatibility requirements?
- External API integrations needed?
#### Context Validation
- **Minimum Response**: Each answer must be ≥50 characters
- **Re-prompting**: Insufficient detail triggers follow-up questions
- **Context Storage**: Save responses to `.brainstorming/api-designer-context.md`
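A minimal sketch of the validation gate (the follow-up wording is left to the questioner; the threshold comes from the rule above):
```javascript
const MIN_ANSWER_LENGTH = 50

// Returns the questions whose answers need a follow-up re-prompt
function findInsufficientAnswers(answers) {
  return Object.entries(answers)
    .filter(([, text]) => (text || '').trim().length < MIN_ANSWER_LENGTH)
    .map(([question]) => question)
}
```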
### Step 2: Agent Assignment with Flow Control
**Dedicated Agent Execution**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute dedicated api-designer conceptual analysis for: {topic}
ASSIGNED_ROLE: api-designer
OUTPUT_LOCATION: .brainstorming/api-designer/
USER_CONTEXT: {validated_responses_from_context_gathering}
Flow Control Steps:
[
{
\"step\": \"load_role_template\",
\"action\": \"Load api-designer planning template\",
\"command\": \"bash($(cat ~/.claude/workflows/cli-templates/planning-roles/api-designer.md))\",
\"output_to\": \"role_template\"
}
]
Conceptual Analysis Requirements:
- Apply api-designer perspective to topic analysis
- Focus on endpoint design, data contracts, and API governance
- Use loaded role template framework for analysis structure
- Generate role-specific deliverables in designated output location
- Address all user context from questioning phase
Deliverables:
- analysis.md: Main API design analysis
- api-specification.md: Detailed endpoint specifications
- data-contracts.md: Request/response schemas and validation rules
- api-documentation.md: API documentation strategy and templates
Embody api-designer role expertise for comprehensive conceptual planning."
```
### Progress Tracking
TodoWrite tracking for two-step process:
```json
[
{"content": "Gather API designer context through role-specific questioning", "status": "in_progress", "activeForm": "Gathering context"},
{"content": "Validate context responses and save to api-designer-context.md", "status": "pending", "activeForm": "Validating context"},
{"content": "Load api-designer planning template via flow control", "status": "pending", "activeForm": "Loading template"},
{"content": "Execute dedicated conceptual-planning-agent for api-designer role", "status": "pending", "activeForm": "Executing agent"}
]
```
## 📊 **Output Specification**
### Output Location
```
.workflow/active/WFS-{topic-slug}/.brainstorming/api-designer/
├── analysis.md # Primary API design analysis
├── api-specification.md # Detailed endpoint specifications (OpenAPI/Swagger)
├── data-contracts.md # Request/response schemas and validation rules
├── versioning-strategy.md # API versioning and backward compatibility plan
└── developer-guide.md # API usage documentation and integration examples
```
### Document Templates
#### analysis.md Structure
```markdown
# API Design Analysis: {Topic}
*Generated: {timestamp}*
## Executive Summary
[Key API design findings and recommendations overview]
## API Architecture Overview
### API Type Selection (REST/GraphQL/Hybrid)
### Communication Patterns
### Authentication & Authorization Strategy
## Resource Modeling
### Core Domain Resources
### Resource Relationships
### URL Structure and Naming Conventions
## Endpoint Design
### Resource Endpoints
- GET /api/v1/resources
- POST /api/v1/resources
- GET /api/v1/resources/{id}
- PUT /api/v1/resources/{id}
- DELETE /api/v1/resources/{id}
### Query Parameters
- Filtering: ?filter[field]=value
- Sorting: ?sort=field,-field2
- Pagination: ?page=1&limit=20
### HTTP Methods and Status Codes
- Success responses (2xx)
- Client errors (4xx)
- Server errors (5xx)
## Data Contracts
### Request Schemas
[JSON Schema or OpenAPI definitions]
### Response Schemas
[JSON Schema or OpenAPI definitions]
### Validation Rules
- Required fields
- Data types and formats
- Business logic constraints
## API Versioning Strategy
### Versioning Approach (URL/Header/Accept)
### Version Lifecycle Management
### Deprecation Policy
### Migration Paths
## Security & Governance
### Authentication Mechanisms
- OAuth 2.0 / JWT / API Keys
### Authorization Patterns
- RBAC / ABAC / Resource-based
### Rate Limiting & Throttling
### CORS and Security Headers
## Error Handling
### Standard Error Response Format
```json
{
"error": {
"code": "ERROR_CODE",
"message": "Human-readable error message",
"details": [],
"trace_id": "uuid"
}
}
```
### Error Code Taxonomy
### Validation Error Responses
## API Documentation
### OpenAPI/Swagger Specification
### Developer Portal Requirements
### Code Examples and SDKs
### Changelog and Migration Guides
## Performance Optimization
### Response Caching Strategies
### Compression (gzip, brotli)
### Field Selection (sparse fieldsets)
### Bulk Operations and Batch Endpoints
## Monitoring & Observability
### API Metrics
- Request count, latency, error rates
- Endpoint usage analytics
### Logging Strategy
### Distributed Tracing
## Developer Experience
### API Usability Assessment
### Integration Complexity
### SDK and Client Library Needs
### Sandbox and Testing Environments
```
#### api-specification.md Structure
```markdown
# API Specification: {Topic}
*OpenAPI 3.0 Specification*
## API Information
- **Title**: {API Name}
- **Version**: 1.0.0
- **Base URL**: https://api.example.com/v1
- **Contact**: api-team@example.com
## Endpoints
### Users API
#### GET /users
**Description**: Retrieve a list of users
**Parameters**:
- `page` (query, integer): Page number (default: 1)
- `limit` (query, integer): Items per page (default: 20, max: 100)
- `sort` (query, string): Sort field (e.g., "created_at", "-updated_at")
- `filter[status]` (query, string): Filter by user status
**Response 200**:
```json
{
"data": [
{
"id": "uuid",
"username": "string",
"email": "string",
"created_at": "2025-10-15T00:00:00Z"
}
],
"meta": {
"page": 1,
"limit": 20,
"total": 100
},
"links": {
"self": "/users?page=1",
"next": "/users?page=2",
"prev": null
}
}
```
#### POST /users
**Description**: Create a new user
**Request Body**:
```json
{
"username": "string (required, 3-50 chars)",
"email": "string (required, valid email)",
"password": "string (required, min 8 chars)",
"profile": {
"first_name": "string (optional)",
"last_name": "string (optional)"
}
}
```
**Response 201**:
```json
{
"data": {
"id": "uuid",
"username": "string",
"email": "string",
"created_at": "2025-10-15T00:00:00Z"
}
}
```
**Response 400** (Validation Error):
```json
{
"error": {
"code": "VALIDATION_ERROR",
"message": "Request validation failed",
"details": [
{
"field": "email",
"message": "Invalid email format"
}
]
}
}
```
[Continue for all endpoints...]
## Authentication
### OAuth 2.0 Flow
1. Client requests authorization
2. User grants permission
3. Client receives access token
4. Client uses token in requests
**Header Format**:
```
Authorization: Bearer {access_token}
```
## Rate Limiting
**Headers**:
- `X-RateLimit-Limit`: 1000
- `X-RateLimit-Remaining`: 999
- `X-RateLimit-Reset`: 1634270400
**Response 429** (Too Many Requests):
```json
{
"error": {
"code": "RATE_LIMIT_EXCEEDED",
"message": "API rate limit exceeded",
"retry_after": 3600
}
}
```
```
## 🔄 **Session Integration**
### Status Synchronization
Upon completion, update `workflow-session.json`:
```json
{
"phases": {
"BRAINSTORM": {
"api_designer": {
"status": "completed",
"completed_at": "timestamp",
"output_directory": ".workflow/active/WFS-{topic}/.brainstorming/api-designer/",
"key_insights": ["endpoint_design", "versioning_strategy", "data_contracts"]
}
}
}
}
```
### Cross-Role Collaboration
API designer perspective provides:
- **API Contract Specifications** → Frontend Developer
- **Data Schema Requirements** → Data Architect
- **Security Requirements** → Security Expert
- **Integration Endpoints** → System Architect
- **Performance Constraints** → DevOps Engineer
## ✅ **Quality Assurance**
### Required Analysis Elements
- [ ] Complete endpoint inventory with HTTP methods and paths
- [ ] Detailed request/response schemas with validation rules
- [ ] Clear versioning strategy and backward compatibility plan
- [ ] Comprehensive error handling and status code usage
- [ ] API documentation strategy (OpenAPI/Swagger)
### API Design Principles
- [ ] **Consistency**: Uniform naming conventions and patterns across all endpoints
- [ ] **Simplicity**: Intuitive resource modeling and URL structures
- [ ] **Flexibility**: Support for filtering, sorting, pagination, and field selection
- [ ] **Security**: Proper authentication, authorization, and input validation
- [ ] **Performance**: Caching strategies, compression, and efficient data structures
### Developer Experience Validation
- [ ] API is self-documenting with clear endpoint descriptions
- [ ] Error messages are actionable and helpful for debugging
- [ ] Response formats are consistent and predictable
- [ ] Code examples and integration guides are provided
- [ ] Sandbox environment available for testing
### Technical Completeness
- [ ] **Resource Modeling**: All domain entities mapped to API resources
- [ ] **CRUD Coverage**: Complete create, read, update, delete operations
- [ ] **Query Capabilities**: Advanced filtering, sorting, and search functionality
- [ ] **Versioning**: Clear version management and migration paths
- [ ] **Monitoring**: API metrics, logging, and tracing strategies defined
### Integration Readiness
- [ ] **Client Compatibility**: API works with all target client platforms
- [ ] **External Integration**: Third-party API dependencies identified
- [ ] **Backward Compatibility**: Changes don't break existing clients
- [ ] **Migration Path**: Clear upgrade paths for API consumers
- [ ] **SDK Support**: Client libraries and code generation considered

View File

@@ -132,55 +132,30 @@ SlashCommand(command="/workflow:brainstorm:artifacts \"{topic}\" --count {N}")
### Phase 2: Parallel Role Analysis Execution
**For Each Selected Role**:
**For Each Selected Role** (unified role-analysis command):
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute {role-name} analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: {role-name}
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/{role}/
TOPIC: {user-provided-topic}
## Flow Control Steps
1. load_topic_framework → .workflow/active/WFS-{session}/.brainstorming/guidance-specification.md
2. load_role_template → ~/.claude/workflows/cli-templates/planning-roles/{role}.md
3. load_session_metadata → .workflow/active/WFS-{session}/workflow-session.json
4. load_style_skill (ui-designer only, if style_skill_package) → .claude/skills/style-{style_skill_package}/
## Analysis Requirements
**Primary Reference**: Original user prompt from workflow-session.json is authoritative
**Framework Source**: Address all discussion points in guidance-specification.md from {role-name} perspective
**Role Focus**: {role-name} domain expertise aligned with user intent
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md** (optionally with analysis-{slug}.md sub-documents)
2. **Framework Reference**: @../guidance-specification.md
3. **User Intent Alignment**: Validate against session_context
## Completion Criteria
- Address each discussion point from guidance-specification.md with {role-name} expertise
- Provide actionable recommendations from {role-name} perspective within analysis files
- All output files MUST start with `analysis` prefix (no recommendations.md or other naming)
- Reference framework document using @ notation for integration
- Update workflow-session.json with completion status
"
SlashCommand(command="/workflow:brainstorm:role-analysis {role-name} --session {session-id} --skip-questions")
```
**Parallel Execute**:
- Launch N agents simultaneously (one message with multiple Task calls)
- Each agent task **attached** to orchestrator's TodoWrite
- All agents execute concurrently, each attaching their own analysis sub-tasks
- Each agent operates independently reading same guidance-specification.md
**What It Does**:
- Unified command execution for each role
- Loads topic framework from guidance-specification.md
- Applies role-specific template and context
- Generates analysis.md addressing framework discussion points
- Supports optional interactive context gathering (via --include-questions flag)
**Parallel Execution**:
- Launch N SlashCommand calls simultaneously (one message with multiple SlashCommand invokes)
- Each role command **attached** to orchestrator's TodoWrite
- All roles execute concurrently, each reading same guidance-specification.md
- Each role operates independently
- For ui-designer only: append `--style-skill {style_skill_package}` if provided
**Input**:
- `selected_roles[]` from Phase 1
- `session_id` from Phase 1
- guidance-specification.md path
- `guidance-specification.md` (framework reference)
- `style_skill_package` (for ui-designer only)
**Validation**:
- Each role creates `.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md`
@@ -189,6 +164,7 @@ TOPIC: {user-provided-topic}
- **FORBIDDEN**: `recommendations.md` or any non-`analysis` prefixed files
- All N role analyses completed
**TodoWrite Update (Phase 2 agents executed - tasks attached in parallel)**:
```json
[
@@ -455,4 +431,3 @@ CONTEXT_VARS:
```
**Template Source**: `~/.claude/workflows/cli-templates/planning-roles/`
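A sketch of resolving and reading one of those templates (home-directory expansion via `os.homedir()`; helper name illustrative):
```javascript
const fs = require('fs')
const os = require('os')
const path = require('path')

function loadRoleTemplate(role) {
  const file = path.join(os.homedir(), '.claude/workflows/cli-templates/planning-roles', `${role}.md`)
  return fs.readFileSync(file, 'utf8')
}
```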

View File

@@ -1,220 +0,0 @@
---
name: data-architect
description: Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 📊 **Data Architect Analysis Generator**
### Purpose
**Specialized command for generating data-architect/analysis.md** that addresses guidance-specification.md discussion points from data architecture perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Data Architecture Focus**: Data models, pipelines, governance, and analytics perspective
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Data Model Design**: Efficient and scalable data models and schemas
- **Data Flow Design**: Data collection, processing, and storage workflows
- **Data Quality Management**: Data accuracy, completeness, and consistency
- **Analytics and Insights**: Data analysis and business intelligence solutions
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Canonical Data Model - Source of Truth)**
- **Canonical Data Model**: The authoritative, system-wide data schema representing domain entities and relationships
- **Entity-Relationship Design**: Defining entities, attributes, relationships, and constraints
- **Data Normalization & Optimization**: Ensuring data integrity, reducing redundancy, and optimizing storage
- **Database Schema Design**: Physical database structures, indexes, partitioning strategies
- **Data Pipeline Architecture**: ETL/ELT processes, data warehousing, and analytics pipelines
- **Data Governance**: Data quality standards, retention policies, and compliance requirements
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **API Data Contracts**: Public-facing request/response schemas exposed by APIs → Defers to **API Designer**
- **System Integration Patterns**: How services communicate at the macro level → Defers to **System Architect**
- **UI Data Presentation**: How data is displayed to users → Defers to **UI Designer**
#### **Handoff Points**
- **TO API Designer**: Provides canonical data model that API Designer translates into public API data contracts (as projection/view)
- **TO System Architect**: Provides data flow requirements and storage constraints to inform system design
- **FROM System Architect**: Receives system-level integration requirements and scalability constraints
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
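One way to realize this branch, deriving discussion points from the framework's section headings and falling back to the standard five-section scaffold (the parsing heuristic is an assumption):
```javascript
function resolveAnalysisMode(frameworkContent, providedTopic) {
  if (frameworkContent) {
    return {
      mode: 'framework_based_analysis',
      topicRef: frameworkContent.split('\n')[0], // topic from the framework title line
      discussionPoints: frameworkContent.split('\n').filter(l => l.startsWith('## '))
    }
  }
  return {
    mode: 'standalone_analysis',
    topicRef: providedTopic,
    discussionPoints: ['Core Requirements', 'Technical Considerations',
      'User Experience Factors', 'Implementation Challenges', 'Success Metrics']
  }
}
```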
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute data-architect analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: data-architect
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/data-architect/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load data-architect planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/data-architect.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from data architecture perspective
**Role Focus**: Data models, pipelines, governance, analytics platforms
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive data architecture analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with data architecture expertise
- Provide data model designs, pipeline architectures, and governance strategies
- Include scalability, performance, and quality considerations
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute data-architect analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing data-architect framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured data-architect analysis"
},
{
content: "Update workflow-session.json with data-architect completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/data-architect/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Data Architect Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Data Architecture perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with data architecture expertise]
### Core Requirements (from framework)
[Data architecture perspective on requirements]
### Technical Considerations (from framework)
[Data model, pipeline, and storage considerations]
### User Experience Factors (from framework)
[Data access patterns and analytics user experience]
### Implementation Challenges (from framework)
[Data migration, quality, and governance challenges]
### Success Metrics (from framework)
[Data quality metrics and analytics success criteria]
## Data Architecture Specific Recommendations
[Role-specific data architecture recommendations and solutions]
---
*Generated by data-architect analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"data_architect": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/data-architect/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Data architecture insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,200 +0,0 @@
---
name: product-manager
description: Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **Product Manager Analysis Generator**
### Purpose
**Specialized command for generating product-manager/analysis.md** that addresses guidance-specification.md discussion points from product strategy perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Product Strategy Focus**: User needs, business value, and market positioning
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **User Needs Analysis**: Target users, problems, and value propositions
- **Business Impact Assessment**: ROI, metrics, and commercial outcomes
- **Market Positioning**: Competitive analysis and differentiation
- **Product Strategy**: Roadmaps, priorities, and go-to-market approaches
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute product-manager analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: product-manager
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/product-manager/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load product-manager planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/product-manager.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from product strategy perspective
**Role Focus**: User value, business impact, market positioning, product strategy
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive product strategy analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with product management expertise
- Provide actionable business strategies and user value propositions
- Include market analysis and competitive positioning insights
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute product-manager analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing product-manager framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured product-manager analysis"
},
{
content: "Update workflow-session.json with product-manager completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/product-manager/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Product Manager Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Product Strategy perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with product management expertise]
### Core Requirements (from framework)
[Product strategy perspective on user needs and requirements]
### Technical Considerations (from framework)
[Business and technical feasibility considerations]
### User Experience Factors (from framework)
[User value proposition and market positioning analysis]
### Implementation Challenges (from framework)
[Business execution and go-to-market considerations]
### Success Metrics (from framework)
[Product success metrics and business KPIs]
## Product Strategy Specific Recommendations
[Role-specific product management strategies and business solutions]
---
*Generated by product-manager analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"product_manager": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/product-manager/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Product strategy insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,200 +0,0 @@
---
name: product-owner
description: Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **Product Owner Analysis Generator**
### Purpose
**Specialized command for generating product-owner/analysis.md** that addresses guidance-specification.md discussion points from product backlog and feature prioritization perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Product Backlog Focus**: Feature prioritization, user stories, and acceptance criteria
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Backlog Management**: User story creation, refinement, and prioritization
- **Stakeholder Alignment**: Requirements gathering, value definition, and expectation management
- **Feature Prioritization**: ROI analysis, MoSCoW method, and value-driven delivery
- **Acceptance Criteria**: Definition of Done, acceptance testing, and quality standards
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute product-owner analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: product-owner
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/product-owner/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load product-owner planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/product-owner.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from product backlog and feature prioritization perspective
**Role Focus**: Backlog management, stakeholder alignment, feature prioritization, acceptance criteria
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive product ownership analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with product ownership expertise
- Provide actionable user stories and acceptance criteria definitions
- Include feature prioritization and stakeholder alignment strategies
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute product-owner analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing product-owner framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured product-owner analysis"
},
{
content: "Update workflow-session.json with product-owner completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/product-owner/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Product Owner Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Product Backlog & Feature Prioritization perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with product ownership expertise]
### Core Requirements (from framework)
[User story formulation and backlog refinement perspective]
### Technical Considerations (from framework)
[Technical feasibility and implementation sequencing considerations]
### User Experience Factors (from framework)
[User value definition and acceptance criteria analysis]
### Implementation Challenges (from framework)
[Sprint planning, dependency management, and delivery strategies]
### Success Metrics (from framework)
[Feature adoption, value delivery metrics, and stakeholder satisfaction indicators]
## Product Owner Specific Recommendations
[Role-specific backlog management and feature prioritization strategies]
---
*Generated by product-owner analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"product_owner": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/product-owner/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Product ownership insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -0,0 +1,705 @@
---
name: role-analysis
description: Unified role-specific analysis generation with interactive context gathering and incremental updates
argument-hint: "[role-name] [--session session-id] [--update] [--include-questions] [--skip-questions]"
allowed-tools: Task(conceptual-planning-agent), AskUserQuestion(*), TodoWrite(*), Read(*), Write(*), Edit(*), Glob(*)
---
## 🎯 **Unified Role Analysis Generator**
### Purpose
**Unified command for generating and updating role-specific analysis** with interactive context gathering, framework alignment, and incremental update support. Replaces nine individual role commands with a single parameterized workflow.
### Core Function
- **Multi-Role Support**: Single command supports all 9 brainstorming roles
- **Interactive Context**: Dynamic question generation based on role and framework
- **Incremental Updates**: Merge new insights into existing analyses
- **Framework Alignment**: Address guidance-specification.md discussion points
- **Agent Delegation**: Use conceptual-planning-agent with role-specific templates
### Supported Roles
| Role ID | Title | Focus Area | Context Questions |
|---------|-------|------------|-------------------|
| `ux-expert` | UX专家 | User research, information architecture, user journey | 4 |
| `ui-designer` | UI设计师 | Visual design, high-fidelity mockups, design systems | 4 |
| `system-architect` | 系统架构师 | Technical architecture, scalability, integration patterns | 5 |
| `product-manager` | 产品经理 | Product strategy, roadmap, prioritization | 4 |
| `product-owner` | 产品负责人 | Backlog management, user stories, acceptance criteria | 4 |
| `scrum-master` | 敏捷教练 | Process facilitation, impediment removal, team dynamics | 3 |
| `subject-matter-expert` | 领域专家 | Domain knowledge, business rules, compliance | 4 |
| `data-architect` | 数据架构师 | Data models, storage strategies, data flow | 5 |
| `api-designer` | API设计师 | API contracts, versioning, integration patterns | 4 |
---
## 📋 **Usage**
```bash
# Generate new analysis with interactive context
/workflow:brainstorm:role-analysis ux-expert
# Generate with existing framework + context questions
/workflow:brainstorm:role-analysis system-architect --session WFS-xxx --include-questions
# Update existing analysis (incremental merge)
/workflow:brainstorm:role-analysis ui-designer --session WFS-xxx --update
# Quick generation (skip interactive context)
/workflow:brainstorm:role-analysis product-manager --session WFS-xxx --skip-questions
```
---
## ⚙️ **Execution Protocol**
### Phase 1: Detection & Validation
**Step 1.1: Role Validation**
```bash
VALIDATE role_name IN [
ux-expert, ui-designer, system-architect, product-manager,
product-owner, scrum-master, subject-matter-expert,
data-architect, api-designer
]
IF invalid:
ERROR: "Unknown role: {role_name}. Use one of: ux-expert, ui-designer, ..."
EXIT
```
**Step 1.2: Session Detection**
```bash
IF --session PROVIDED:
session_id = --session
brainstorm_dir = .workflow/active/{session_id}/.brainstorming/
ELSE:
FIND .workflow/active/WFS-*/
IF multiple:
PROMPT user to select
ELSE IF single:
USE existing
ELSE:
ERROR: "No active session. Run /workflow:brainstorm:artifacts first"
EXIT
VALIDATE brainstorm_dir EXISTS
```
**Step 1.3: Framework Detection**
```bash
framework_file = {brainstorm_dir}/guidance-specification.md
IF framework_file EXISTS:
framework_mode = true
LOAD framework_content
ELSE:
WARN: "No framework found - will create standalone analysis"
framework_mode = false
```
**Step 1.4: Update Mode Detection**
```bash
existing_analysis = {brainstorm_dir}/{role_name}/analysis*.md
IF --update FLAG OR existing_analysis EXISTS:
update_mode = true
IF --update NOT PROVIDED:
ASK: "Analysis exists. Update or regenerate?"
OPTIONS: ["Incremental update", "Full regenerate", "Cancel"]
ELSE:
update_mode = false
```
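As a sketch of how Step 1.4 could compose with the earlier steps (the interactive prompt is reduced to a returned marker; all names here are assumptions, not the command's actual implementation):
```javascript
// Sketch of Phase 1 update-mode detection.
const fs = require('fs');
const path = require('path');

function detectUpdateMode(brainstormDir, roleName, flags) {
  const roleDir = path.join(brainstormDir, roleName);
  const existing = fs.existsSync(roleDir)
    ? fs.readdirSync(roleDir).filter((f) => /^analysis.*\.md$/.test(f))
    : [];
  if (flags.update) return 'incremental';
  if (existing.length > 0) return 'ask-user'; // prompt: update / regenerate / cancel
  return 'new';
}
```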
### Phase 2: Interactive Context Gathering
**Trigger Conditions**:
- Default: Always ask unless `--skip-questions` provided
- `--include-questions`: Force context gathering even if analysis exists
- `--skip-questions`: Skip all interactive questions
**Step 2.1: Load Role Configuration**
```javascript
const roleConfig = {
'ux-expert': {
title: 'UX专家',
focus_area: 'User research, information architecture, user journey',
question_categories: ['User Intent', 'Requirements', 'UX'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/ux-expert.md'
},
'ui-designer': {
title: 'UI设计师',
focus_area: 'Visual design, high-fidelity mockups, design systems',
question_categories: ['Requirements', 'UX', 'Feasibility'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/ui-designer.md'
},
'system-architect': {
title: '系统架构师',
focus_area: 'Technical architecture, scalability, integration patterns',
question_categories: ['Scale & Performance', 'Technical Constraints', 'Architecture Complexity', 'Non-Functional Requirements'],
question_count: 5,
template: '~/.claude/workflows/cli-templates/planning-roles/system-architect.md'
},
'product-manager': {
title: '产品经理',
focus_area: 'Product strategy, roadmap, prioritization',
question_categories: ['User Intent', 'Requirements', 'Process'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/product-manager.md'
},
'product-owner': {
title: '产品负责人',
focus_area: 'Backlog management, user stories, acceptance criteria',
question_categories: ['Requirements', 'Decisions', 'Process'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/product-owner.md'
},
'scrum-master': {
title: '敏捷教练',
focus_area: 'Process facilitation, impediment removal, team dynamics',
question_categories: ['Process', 'Risk', 'Decisions'],
question_count: 3,
template: '~/.claude/workflows/cli-templates/planning-roles/scrum-master.md'
},
'subject-matter-expert': {
title: '领域专家',
focus_area: 'Domain knowledge, business rules, compliance',
question_categories: ['Requirements', 'Feasibility', 'Terminology'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/subject-matter-expert.md'
},
'data-architect': {
title: '数据架构师',
focus_area: 'Data models, storage strategies, data flow',
question_categories: ['Architecture', 'Scale & Performance', 'Technical Constraints', 'Feasibility'],
question_count: 5,
template: '~/.claude/workflows/cli-templates/planning-roles/data-architect.md'
},
'api-designer': {
title: 'API设计师',
focus_area: 'API contracts, versioning, integration patterns',
question_categories: ['Architecture', 'Requirements', 'Feasibility', 'Decisions'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/api-designer.md'
}
};
config = roleConfig[role_name];
```
**Step 2.2: Generate Role-Specific Questions**
**Question Category Taxonomy** (the nine core categories from synthesis.md, plus four architecture-specific extensions):
| Category | Focus | Example Question Pattern |
|----------|-------|--------------------------|
| User Intent | 用户目标 (user goals) | "该分析的核心目标是什么?" |
| Requirements | 需求细化 (requirement refinement) | "需求的优先级如何排序?" |
| Architecture | 架构决策 (architecture decisions) | "技术栈的选择考量?" |
| UX | 用户体验 (user experience) | "交互复杂度的取舍?" |
| Feasibility | 可行性 (feasibility) | "资源约束下的实现范围?" |
| Risk | 风险管理 (risk management) | "风险容忍度是多少?" |
| Process | 流程规范 (process cadence) | "开发迭代的节奏?" |
| Decisions | 决策确认 (decision confirmation) | "冲突的解决方案?" |
| Terminology | 术语统一 (terminology alignment) | "统一使用哪个术语?" |
| Scale & Performance | 性能扩展 (performance & scaling) | "预期的负载和性能要求?" |
| Technical Constraints | 技术约束 (technical constraints) | "现有技术栈的限制?" |
| Architecture Complexity | 架构复杂度 (complexity trade-offs) | "架构的复杂度权衡?" |
| Non-Functional Requirements | 非功能需求 (non-functional requirements) | "可用性和可维护性要求?" |
**Question Generation Algorithm**:
```javascript
async function generateQuestions(role_name, framework_content) {
const config = roleConfig[role_name];
const questions = [];
// Parse framework for keywords
const keywords = extractKeywords(framework_content);
// Generate category-specific questions
for (const category of config.question_categories) {
const question = generateCategoryQuestion(category, keywords, role_name);
questions.push(question);
}
return questions.slice(0, config.question_count);
}
```
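`extractKeywords` and `generateCategoryQuestion` are referenced above but never defined in this command; one plausible reading, with both bodies as pure assumptions:
```javascript
// Hypothetical bodies for the two undefined helpers above.
function extractKeywords(frameworkContent) {
  // Treat markdown headings in guidance-specification.md as topic keywords.
  return (frameworkContent.match(/^#+\s+(.+)$/gm) || [])
    .map((h) => h.replace(/^#+\s+/, '').trim());
}

function generateCategoryQuestion(category, keywords, roleName) {
  // The real command phrases questions in Chinese per the quality rules below.
  const topic = keywords[0] || 'the topic';
  return {
    category,
    question: `[${roleName}] ${category}: what is the key decision for "${topic}"?`,
    options: [
      { label: 'Option A', description: 'Conservative scope, lower risk' },
      { label: 'Option B', description: 'Broader scope, higher business impact' }
    ]
  };
}
```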
**Step 2.3: Multi-Round Question Execution**
```javascript
const BATCH_SIZE = 4;
const user_context = {};
for (let i = 0; i < questions.length; i += BATCH_SIZE) {
const batch = questions.slice(i, i + BATCH_SIZE);
const currentRound = Math.floor(i / BATCH_SIZE) + 1;
const totalRounds = Math.ceil(questions.length / BATCH_SIZE);
console.log(`\n[Round ${currentRound}/${totalRounds}] ${config.title} 上下文询问\n`);
const responses = AskUserQuestion({
questions: batch.map(q => ({
question: q.question,
header: q.category.substring(0, 12),
multiSelect: false,
options: q.options.map(opt => ({
label: opt.label,
description: opt.description
}))
}))
});
// Store responses before next round
for (const answer of responses) {
user_context[answer.question] = {
answer: answer.selected,
category: answer.category,
timestamp: new Date().toISOString()
};
}
}
// Save context to file
Write(
`${brainstorm_dir}/${role_name}/${role_name}-context.md`,
formatUserContext(user_context)
);
```
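`formatUserContext` is likewise undefined here; a minimal sketch of what it would need to produce for the `{role-name}-context.md` file:
```javascript
// Hypothetical formatUserContext: serialize Q&A responses to markdown.
function formatUserContext(userContext) {
  const lines = ['# Role Context Responses', ''];
  for (const [question, info] of Object.entries(userContext)) {
    lines.push(`## ${question}`);
    lines.push(`- **Category**: ${info.category}`);
    lines.push(`- **Answer**: ${info.answer}`);
    lines.push(`- **Timestamp**: ${info.timestamp}`, '');
  }
  return lines.join('\n');
}
```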
**Question Quality Rules** (from artifacts.md):
**MUST Include**:
- ✅ All questions in Chinese (用中文提问)
- ✅ A concrete business scenario as the premise of each question
- ✅ The business impact of each technical option spelled out
- ✅ Quantified metrics and constraints
**MUST Avoid**:
- ❌ Pure technology choices with no business context
- ❌ Overly abstract, generic questions
- ❌ Repeated questions detached from the framework
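A rough mechanical gate for these rules might look like the following; the CJK regex is a heuristic for "in Chinese", and the business-context rule is only approximated by requiring option descriptions:
```javascript
// Heuristic quality gate for generated questions (a sketch, not the spec).
function passesQualityRules(q) {
  const inChinese = /[\u4e00-\u9fff]/.test(q.question); // CJK heuristic
  const hasOptions = Array.isArray(q.options) && q.options.length >= 2;
  const optionsExplained = hasOptions && q.options.every((o) => o.description);
  return inChinese && optionsExplained;
}
```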
### Phase 3: Agent Execution
**Step 3.1: Load Session Metadata**
```bash
session_metadata = Read(.workflow/active/{session_id}/workflow-session.json)
original_topic = session_metadata.topic
selected_roles = session_metadata.selected_roles
```
**Step 3.2: Prepare Agent Context**
```javascript
const agentContext = {
role_name: role_name,
role_config: roleConfig[role_name],
output_location: `${brainstorm_dir}/${role_name}/`,
framework_mode: framework_mode,
framework_path: framework_mode ? `${brainstorm_dir}/guidance-specification.md` : null,
update_mode: update_mode,
user_context: user_context,
original_topic: original_topic,
session_id: session_id
};
```
**Step 3.3: Execute Conceptual Planning Agent**
**Framework-Based Analysis** (when guidance-specification.md exists):
```javascript
Task(
subagent_type="conceptual-planning-agent",
run_in_background=false,
description=`Generate ${role_name} analysis`,
prompt=`
[FLOW_CONTROL]
Execute ${role_name} analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: ${role_name}
OUTPUT_LOCATION: ${agentContext.output_location}
ANALYSIS_MODE: ${framework_mode ? "framework_based" : "standalone"}
UPDATE_MODE: ${update_mode}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(${agentContext.framework_path})
- Output: topic_framework_content
2. **load_role_template**
- Action: Load ${role_name} planning template
- Command: Read(${roleConfig[role_name].template})
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and user intent
- Command: Read(.workflow/active/${session_id}/workflow-session.json)
- Output: session_context
4. **load_user_context** (if exists)
- Action: Load interactive context responses
- Command: Read(${brainstorm_dir}/${role_name}/${role_name}-context.md)
- Output: user_context_answers
5. **${update_mode ? 'load_existing_analysis' : 'skip'}**
${update_mode ? `
- Action: Load existing analysis for incremental update
- Command: Read(${brainstorm_dir}/${role_name}/analysis.md)
- Output: existing_analysis_content
` : ''}
## Analysis Requirements
**Primary Reference**: Original user prompt from workflow-session.json is authoritative
**Framework Source**: Address all discussion points in guidance-specification.md from ${role_name} perspective
**User Context Integration**: Incorporate interactive Q&A responses into analysis
**Role Focus**: ${roleConfig[role_name].focus_area}
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md** (main document, optionally with analysis-{slug}.md sub-documents)
2. **Framework Reference**: @../guidance-specification.md (if framework_mode)
3. **User Context Reference**: @./${role_name}-context.md (if user context exists)
4. **User Intent Alignment**: Validate against session_context
## Update Requirements (if UPDATE_MODE)
- **Preserve Structure**: Maintain existing analysis structure
- **Add "Clarifications" Section**: Document new user context with timestamp
- **Merge Insights**: Integrate new perspectives without removing existing content
- **Resolve Conflicts**: If new context contradicts existing analysis, document both and recommend resolution
## Completion Criteria
- Address each discussion point from guidance-specification.md with ${role_name} expertise
- Provide actionable recommendations from ${role_name} perspective within analysis files
- All output files MUST start with "analysis" prefix (no recommendations.md or other naming)
- Reference framework document using @ notation for integration
- Update workflow-session.json with completion status
`
);
```
### Phase 4: Validation & Finalization
**Step 4.1: Validate Output**
```bash
VERIFY EXISTS: ${brainstorm_dir}/${role_name}/analysis.md
VERIFY CONTAINS: "@../guidance-specification.md" (if framework_mode)
IF user_context EXISTS:
VERIFY CONTAINS: "@./${role_name}-context.md" OR "## Clarifications" section
```
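Expressed as runnable checks (file names follow the layout above; the function itself is an assumption):
```javascript
// Sketch of Step 4.1 output validation.
const fs = require('fs');
const path = require('path');

function validateOutput(brainstormDir, roleName, frameworkMode, hasUserContext) {
  const file = path.join(brainstormDir, roleName, 'analysis.md');
  if (!fs.existsSync(file)) throw new Error(`Missing ${file}`);
  const text = fs.readFileSync(file, 'utf8');
  if (frameworkMode && !text.includes('@../guidance-specification.md')) {
    throw new Error('analysis.md lacks framework reference');
  }
  if (hasUserContext &&
      !text.includes(`@./${roleName}-context.md`) &&
      !text.includes('## Clarifications')) {
    throw new Error('analysis.md lacks user context reference');
  }
}
```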
**Step 4.2: Update Session Metadata**
```json
{
"phases": {
"BRAINSTORM": {
"${role_name}": {
"status": "${update_mode ? 'updated' : 'completed'}",
"completed_at": "timestamp",
"framework_addressed": true,
"context_gathered": user_context ? true : false,
"output_location": "${brainstorm_dir}/${role_name}/analysis.md",
"update_history": [
{
"timestamp": "ISO8601",
"mode": "${update_mode ? 'incremental' : 'initial'}",
"context_questions": question_count
}
]
}
}
}
}
```
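The merge into `workflow-session.json` could be as simple as the following sketch (field names follow the JSON above; the helper itself is hypothetical):
```javascript
// Sketch of Step 4.2: append a completion record to workflow-session.json.
const fs = require('fs');

function updateSessionMetadata(sessionFile, roleName, updateMode, questionCount) {
  const session = JSON.parse(fs.readFileSync(sessionFile, 'utf8'));
  const brainstorm = ((session.phases ??= {}).BRAINSTORM ??= {});
  const entry = (brainstorm[roleName] ??= {});
  entry.status = updateMode ? 'updated' : 'completed';
  entry.completed_at = new Date().toISOString();
  entry.framework_addressed = true;
  entry.context_gathered = questionCount > 0;
  (entry.update_history ??= []).push({
    timestamp: new Date().toISOString(),
    mode: updateMode ? 'incremental' : 'initial',
    context_questions: questionCount
  });
  fs.writeFileSync(sessionFile, JSON.stringify(session, null, 2));
}
```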
**Step 4.3: Completion Report**
```markdown
✅ ${roleConfig[role_name].title} Analysis Complete
**Output**: ${brainstorm_dir}/${role_name}/analysis.md
**Mode**: ${update_mode ? 'Incremental Update' : 'New Generation'}
**Framework**: ${framework_mode ? '✓ Aligned' : '✗ Standalone'}
**Context Questions**: ${question_count} answered
${update_mode ? '
**Changes**:
- Added "Clarifications" section with new user context
- Merged new insights into existing sections
- Resolved conflicts with framework alignment
' : ''}
**Next Steps**:
${selected_roles.length > 1 ? `
- Continue with other roles: ${selected_roles.filter(r => r !== role_name).join(', ')}
- Run synthesis: /workflow:brainstorm:synthesis --session ${session_id}
` : `
- Clarify insights: /workflow:brainstorm:synthesis --session ${session_id}
- Generate plan: /workflow:plan --session ${session_id}
`}
```
---
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Phase 1: Detect session and validate role configuration",
status: "in_progress",
activeForm: "Detecting session and role"
},
{
content: "Phase 2: Interactive context gathering with AskUserQuestion",
status: "pending",
activeForm: "Gathering user context"
},
{
content: "Phase 3: Execute conceptual-planning-agent for role analysis",
status: "pending",
activeForm: "Executing agent analysis"
},
{
content: "Phase 4: Validate output and update session metadata",
status: "pending",
activeForm: "Finalizing and validating"
}
]
});
```
---
## 📊 **Output Structure**
### Directory Layout
```
.workflow/active/WFS-{session}/.brainstorming/
├── guidance-specification.md # Framework (if exists)
└── {role-name}/
├── {role-name}-context.md # Interactive Q&A responses
├── analysis.md # Main analysis (REQUIRED)
└── analysis-{slug}.md # Section documents (optional, max 5)
```
### Analysis Document Structure (New Generation)
```markdown
# ${roleConfig[role_name].title} Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: ${roleConfig[role_name].focus_area}
**User Context**: @./${role_name}-context.md
## User Context Summary
**Context Gathered**: ${question_count} questions answered
**Categories**: ${question_categories.join(', ')}
${user_context ? formatContextSummary(user_context) : ''}
## Discussion Points Analysis
[Address each point from guidance-specification.md with ${role_name} expertise]
### Core Requirements (from framework)
[Role-specific perspective on requirements]
### Technical Considerations (from framework)
[Role-specific technical analysis]
### User Experience Factors (from framework)
[Role-specific UX considerations]
### Implementation Challenges (from framework)
[Role-specific challenges and solutions]
### Success Metrics (from framework)
[Role-specific metrics and KPIs]
## ${roleConfig[role_name].title} Specific Recommendations
[Role-specific actionable strategies]
---
*Generated by ${role_name} analysis addressing structured framework*
*Context gathered: ${new Date().toISOString()}*
```
### Analysis Document Structure (Incremental Update)
```markdown
# ${roleConfig[role_name].title} Analysis: [Topic]
## Framework Reference
[Existing content preserved]
## Clarifications
### Session ${new Date().toISOString().split('T')[0]}
${Object.entries(user_context).map(([q, a]) => `
- **Q**: ${q} (Category: ${a.category})
**A**: ${a.answer}
`).join('\n')}
## User Context Summary
[Updated with new context]
## Discussion Points Analysis
[Existing content enhanced with new insights]
[Rest of sections updated based on clarifications]
```
---
## 🔄 **Integration with Other Commands**
### Called By
- `/workflow:brainstorm:auto-parallel` (Phase 2 - parallel role execution)
- Manual invocation for single-role analysis
### Calls To
- `conceptual-planning-agent` (agent execution)
- `AskUserQuestion` (interactive context gathering)
### Coordinates With
- `/workflow:brainstorm:artifacts` (creates framework for role analysis)
- `/workflow:brainstorm:synthesis` (reads role analyses for integration)
---
## ✅ **Quality Assurance**
### Required Analysis Elements
- [ ] Framework discussion points addressed (if framework_mode)
- [ ] User context integrated (if context gathered)
- [ ] Role template guidelines applied
- [ ] Output files follow naming convention (analysis*.md only)
- [ ] Framework reference using @ notation
- [ ] Session metadata updated
### Context Quality
- [ ] Questions in Chinese with business context
- [ ] Options include technical trade-offs
- [ ] Categories aligned with role focus
- [ ] No generic questions unrelated to framework
### Update Quality (if update_mode)
- [ ] "Clarifications" section added with timestamp
- [ ] New insights merged without content loss
- [ ] Conflicts documented and resolved
- [ ] Framework alignment maintained
---
## 🎛️ **Command Parameters**
### Required Parameters
- `[role-name]`: Role identifier (ux-expert, ui-designer, system-architect, etc.)
### Optional Parameters
- `--session [session-id]`: Specify brainstorming session (auto-detect if omitted)
- `--update`: Force incremental update mode (auto-detect if analysis exists)
- `--include-questions`: Force context gathering even if analysis exists
- `--skip-questions`: Skip all interactive context gathering
- `--style-skill [package]`: For ui-designer only, load style SKILL package
### Parameter Combinations
| Scenario | Command | Behavior |
|----------|---------|----------|
| New analysis | `role-analysis ux-expert` | Generate + ask context questions |
| Quick generation | `role-analysis ux-expert --skip-questions` | Generate without context |
| Update existing | `role-analysis ux-expert --update` | Ask clarifications + merge |
| Force questions | `role-analysis ux-expert --include-questions` | Ask even if exists |
| Specific session | `role-analysis ux-expert --session WFS-xxx` | Target specific session |
---
## 🚫 **Error Handling**
### Invalid Role Name
```
ERROR: Unknown role: "ui-expert"
Valid roles: ux-expert, ui-designer, system-architect, product-manager,
product-owner, scrum-master, subject-matter-expert,
data-architect, api-designer
```
### No Active Session
```
ERROR: No active brainstorming session found
Run: /workflow:brainstorm:artifacts "[topic]" to create session
```
### Missing Framework (with warning)
```
WARN: No guidance-specification.md found
Generating standalone analysis without framework alignment
Recommend: Run /workflow:brainstorm:artifacts first for better results
```
### Agent Execution Failure
```
ERROR: Conceptual planning agent failed
Check: ${brainstorm_dir}/${role_name}/error.log
Action: Retry with --skip-questions or check framework validity
```
---
## 🔧 **Advanced Usage**
### Batch Role Generation (via auto-parallel)
```bash
# This command handles multiple roles in parallel
/workflow:brainstorm:auto-parallel "topic" --count 3
# → Internally calls role-analysis for each selected role
```
### Manual Multi-Role Workflow
```bash
# 1. Create framework
/workflow:brainstorm:artifacts "Build real-time collaboration platform" --count 3
# 2. Generate each role with context
/workflow:brainstorm:role-analysis system-architect --include-questions
/workflow:brainstorm:role-analysis ui-designer --include-questions
/workflow:brainstorm:role-analysis product-manager --include-questions
# 3. Synthesize insights
/workflow:brainstorm:synthesis --session WFS-xxx
```
### Iterative Refinement
```bash
# Initial generation
/workflow:brainstorm:role-analysis ux-expert
# User reviews and wants more depth
/workflow:brainstorm:role-analysis ux-expert --update --include-questions
# → Asks clarification questions, merges new insights
```
---
## 📚 **Reference Information**
### Role Template Locations
- Templates: `~/.claude/workflows/cli-templates/planning-roles/`
- Format: `{role-name}.md` (e.g., `ux-expert.md`, `system-architect.md`)
### Related Commands
- `/workflow:brainstorm:artifacts` - Create framework and select roles
- `/workflow:brainstorm:auto-parallel` - Parallel multi-role execution
- `/workflow:brainstorm:synthesis` - Integrate role analyses
- `/workflow:plan` - Generate implementation plan from synthesis
### Context Package
- Location: `.workflow/active/WFS-{session}/.process/context-package.json`
- Used by: `context-search-agent` (Phase 0 of artifacts)
- Contains: Project context, tech stack, conflict risks

View File

@@ -1,200 +0,0 @@
---
name: scrum-master
description: Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **Scrum Master Analysis Generator**
### Purpose
**Specialized command for generating scrum-master/analysis.md** that addresses guidance-specification.md discussion points from agile process and team collaboration perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Agile Process Focus**: Sprint planning, team dynamics, and delivery optimization
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Sprint Planning**: Task breakdown, estimation, and iteration planning
- **Team Collaboration**: Communication patterns, impediment removal, and facilitation
- **Process Optimization**: Agile ceremonies, retrospectives, and continuous improvement
- **Delivery Management**: Velocity tracking, burndown analysis, and release planning
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute scrum-master analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: scrum-master
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/scrum-master/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load scrum-master planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/scrum-master.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from agile process and team collaboration perspective
**Role Focus**: Sprint planning, team dynamics, process optimization, delivery management
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive agile process analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with scrum mastery expertise
- Provide actionable sprint planning and team facilitation strategies
- Include process optimization and impediment removal insights
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute scrum-master analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing scrum-master framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured scrum-master analysis"
},
{
content: "Update workflow-session.json with scrum-master completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/scrum-master/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Scrum Master Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Agile Process & Team Collaboration perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with scrum mastery expertise]
### Core Requirements (from framework)
[Sprint planning and iteration breakdown perspective]
### Technical Considerations (from framework)
[Technical debt management and process considerations]
### User Experience Factors (from framework)
[User story refinement and acceptance criteria analysis]
### Implementation Challenges (from framework)
[Impediment identification and removal strategies]
### Success Metrics (from framework)
[Velocity tracking, burndown metrics, and team performance indicators]
## Scrum Master Specific Recommendations
[Role-specific agile process optimization and team facilitation strategies]
---
*Generated by scrum-master analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"scrum_master": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/scrum-master/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Agile process insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,200 +0,0 @@
---
name: subject-matter-expert
description: Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **Subject Matter Expert Analysis Generator**
### Purpose
**Specialized command for generating subject-matter-expert/analysis.md** that addresses guidance-specification.md discussion points from domain knowledge and technical expertise perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Domain Expertise Focus**: Deep technical knowledge, industry standards, and best practices
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Domain Knowledge**: Industry-specific expertise, regulatory requirements, and compliance
- **Technical Standards**: Best practices, design patterns, and architectural guidelines
- **Risk Assessment**: Technical debt, scalability concerns, and maintenance implications
- **Knowledge Transfer**: Documentation strategies, training requirements, and expertise sharing
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute subject-matter-expert analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: subject-matter-expert
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load subject-matter-expert planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/subject-matter-expert.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from domain expertise and technical standards perspective
**Role Focus**: Domain knowledge, technical standards, risk assessment, knowledge transfer
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive domain expertise analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with subject matter expertise
- Provide actionable technical standards and best practices recommendations
- Include risk assessment and compliance considerations
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute subject-matter-expert analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing subject-matter-expert framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured subject-matter-expert analysis"
},
{
content: "Update workflow-session.json with subject-matter-expert completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Subject Matter Expert Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Domain Expertise & Technical Standards perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with subject matter expertise]
### Core Requirements (from framework)
[Domain-specific requirements and industry standards perspective]
### Technical Considerations (from framework)
[Deep technical analysis, architectural patterns, and best practices]
### User Experience Factors (from framework)
[Domain-specific usability standards and industry conventions]
### Implementation Challenges (from framework)
[Technical risks, scalability concerns, and maintenance implications]
### Success Metrics (from framework)
[Domain-specific KPIs, compliance metrics, and quality standards]
## Subject Matter Expert Specific Recommendations
[Role-specific technical expertise and industry best practices]
---
*Generated by subject-matter-expert analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"subject_matter_expert": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Domain expertise insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,389 +0,0 @@
---
name: system-architect
description: Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🏗️ **System Architect Analysis Generator**
### Purpose
**Specialized command for generating system-architect/analysis.md** that addresses guidance-specification.md discussion points from system architecture perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Architecture Focus**: Technical architecture, scalability, and system design perspective
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Technical Architecture**: Scalable and maintainable system design
- **Technology Selection**: Stack evaluation and architectural decisions
- **Performance & Scalability**: Capacity planning and optimization strategies
- **Integration Patterns**: System communication and data flow design
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Macro-Architecture)**
- **System-Level Architecture**: Service boundaries, deployment topology, and system composition
- **Cross-Service Communication Patterns**: Choosing between microservices/monolithic, event-driven/request-response, sync/async patterns
- **Technology Stack Decisions**: Language, framework, database, and infrastructure choices
- **Non-Functional Requirements**: Scalability, performance, availability, disaster recovery, and monitoring strategies
- **Integration Planning**: How systems and services connect at the macro level (not specific API contracts)
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **API Contract Details**: Specific endpoint definitions, URL structures, HTTP methods → Defers to **API Designer**
- **Data Schemas**: Detailed data model design and entity relationships → Defers to **Data Architect**
- **UI/UX Design**: Interface design and user experience → Defers to **UX Expert** and **UI Designer**
#### **Handoff Points**
- **TO API Designer**: Provides architectural constraints (REST vs GraphQL, sync vs async) that define the API design space
- **TO Data Architect**: Provides system-level data flow requirements and integration patterns
- **FROM Data Architect**: Receives canonical data model to inform system integration design
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Check existing analysis
CHECK: brainstorm_dir/system-architect/analysis.md
IF EXISTS:
SHOW existing analysis summary
ASK: "Analysis exists. Do you want to:"
OPTIONS:
1. "Update with new insights" → Update existing
2. "Replace completely" → Generate new
3. "Cancel" → Exit without changes
ELSE:
CREATE new analysis
```
### Phase 3: Agent Task Generation
**Framework-Based Analysis** (when guidance-specification.md exists):
```bash
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Generate system architect analysis addressing topic framework
## Framework Integration Required
**MANDATORY**: Load and address guidance-specification.md discussion points
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
**Output Location**: {session.brainstorm_dir}/system-architect/analysis.md
## Analysis Requirements
1. **Load Topic Framework**: Read guidance-specification.md completely
2. **Address Each Discussion Point**: Respond to all 5 framework sections from system architecture perspective
3. **Include Framework Reference**: Start analysis.md with @../guidance-specification.md
4. **Technical Focus**: Emphasize scalability, architecture patterns, technology decisions
5. **Structured Response**: Use framework structure for analysis organization
## Framework Sections to Address
- Core Requirements (from architecture perspective)
- Technical Considerations (detailed architectural analysis)
- User Experience Factors (technical UX considerations)
- Implementation Challenges (architecture risks and solutions)
- Success Metrics (technical metrics and monitoring)
## Output Structure Required
```markdown
# System Architect Analysis: [Topic]
**Framework Reference**: @../guidance-specification.md
**Role Focus**: System Architecture and Technical Design
## Core Requirements Analysis
[Address framework requirements from architecture perspective]
## Technical Considerations
[Detailed architectural analysis]
## User Experience Factors
[Technical aspects of UX implementation]
## Implementation Challenges
[Architecture risks and mitigation strategies]
## Success Metrics
[Technical metrics and system monitoring]
## Architecture-Specific Recommendations
[Detailed technical recommendations]
```",
description="Generate system architect framework-based analysis")
```
### Phase 4: Update Mechanism
**Analysis Update Process**:
```bash
# For existing analysis updates
IF update_mode = "incremental":
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Update existing system architect analysis
## Current Analysis Context
**Existing Analysis**: @{session.brainstorm_dir}/system-architect/analysis.md
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
## Update Requirements
1. **Preserve Structure**: Maintain existing analysis structure
2. **Add New Insights**: Integrate new technical insights and recommendations
3. **Framework Alignment**: Ensure continued alignment with topic framework
4. **Technical Updates**: Add new architecture patterns, technology considerations
5. **Maintain References**: Keep @../guidance-specification.md reference
## Update Instructions
- Read existing analysis completely
- Identify areas for enhancement or new insights
- Add technical depth while preserving original structure
- Update recommendations with new architectural approaches
- Maintain framework discussion point addressing",
description="Update system architect analysis incrementally")
```
## Document Structure
### Output Files
```
.workflow/active/WFS-[topic]/.brainstorming/
├── guidance-specification.md # Input: Framework (if exists)
└── system-architect/
└── analysis.md # ★ OUTPUT: Framework-based analysis
```
### Analysis Structure
**Required Elements**:
- **Framework Reference**: @../guidance-specification.md (if framework exists)
- **Role Focus**: System Architecture and Technical Design perspective
- **5 Framework Sections**: Address each framework discussion point
- **Technical Recommendations**: Architecture-specific insights and solutions
**API Design and Versioning**
- How should we design APIs and manage versioning?
**Performance and Scalability**
- Where are the current system performance bottlenecks?
- How should we handle traffic growth and scaling demands?
- What database scaling and optimization strategies are needed?
## ⚡ **Two-Step Execution Flow**
### ⚠️ Session Management - FIRST STEP
Session detection and selection:
```bash
# Check for existing sessions
existing_sessions=$(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
if [ multiple_sessions ]; then
prompt_user_to_select_session()
else
use_existing_or_create_new()
fi
```
### Step 1: Context Gathering Phase
**System Architect Perspective Questioning**
Before agent assignment, gather comprehensive system architecture context:
#### 📋 Role-Specific Questions
1. **Scale & Performance Requirements**
- Expected user load and traffic patterns?
- Performance requirements (latency, throughput)?
- Data volume and growth projections?
2. **Technical Constraints & Environment**
- Existing technology stack and constraints?
- Integration requirements with external systems?
- Infrastructure and deployment environment?
3. **Architecture Complexity & Patterns**
- Microservices vs monolithic considerations?
- Data consistency and transaction requirements?
- Event-driven vs request-response patterns?
4. **Non-Functional Requirements**
- High availability and disaster recovery needs?
- Security and compliance requirements?
- Monitoring and observability expectations?
#### Context Validation
- **Minimum Response**: Each answer must be ≥50 characters
- **Re-prompting**: Insufficient detail triggers follow-up questions
- **Context Storage**: Save responses to `.brainstorming/system-architect-context.md`
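A minimal sketch of that validation loop, assuming a generic `askUser` prompt function (not a real tool in this workflow):
```javascript
// Sketch of the >=50-character answer check with a single re-prompt.
async function gatherValidatedAnswer(askUser, question) {
  let answer = await askUser(question);
  if (answer.trim().length < 50) {
    answer = await askUser(question + '\n(Please provide at least 50 characters of detail.)');
  }
  return answer;
}
```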
### Step 2: Agent Assignment with Flow Control
**Dedicated Agent Execution**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute dedicated system-architect conceptual analysis for: {topic}
ASSIGNED_ROLE: system-architect
OUTPUT_LOCATION: .brainstorming/system-architect/
USER_CONTEXT: {validated_responses_from_context_gathering}
Flow Control Steps:
[
{
\"step\": \"load_role_template\",
\"action\": \"Load system-architect planning template\",
\"command\": \"bash($(cat ~/.claude/workflows/cli-templates/planning-roles/system-architect.md))\",
\"output_to\": \"role_template\"
}
]
Conceptual Analysis Requirements:
- Apply system-architect perspective to topic analysis
- Focus on architectural patterns, scalability, and integration points
- Use loaded role template framework for analysis structure
- Generate role-specific deliverables in designated output location
- Address all user context from questioning phase
Deliverables:
- analysis.md: Main system architecture analysis
- recommendations.md: Architecture recommendations
- deliverables/: Architecture-specific outputs as defined in role template
Embody system-architect role expertise for comprehensive conceptual planning."
```
### Progress Tracking
TodoWrite tracking for two-step process:
```json
[
{"content": "Gather system architect context through role-specific questioning", "status": "in_progress", "activeForm": "Gathering context"},
{"content": "Validate context responses and save to system-architect-context.md", "status": "pending", "activeForm": "Validating context"},
{"content": "Load system-architect planning template via flow control", "status": "pending", "activeForm": "Loading template"},
{"content": "Execute dedicated conceptual-planning-agent for system-architect role", "status": "pending", "activeForm": "Executing agent"}
]
```
## 📊 **Output Specification**
### Output Location
```
.workflow/active/WFS-{topic-slug}/.brainstorming/system-architect/
├── analysis.md # Primary architecture analysis
├── architecture-design.md # Detailed system design and diagrams
├── technology-stack.md # Technology stack recommendations and justifications
└── integration-plan.md # System integration and API strategies
```
### Document Templates
#### analysis.md Structure
```markdown
# System Architecture Analysis: {Topic}
*Generated: {timestamp}*
## Executive Summary
[Key architectural findings and recommendations overview]
## Current State Assessment
### Existing Architecture Overview
### Technical Stack Analysis
### Performance Bottlenecks
### Technical Debt Assessment
## Requirements Analysis
### Functional Requirements
### Non-Functional Requirements
- Performance: [Response time, throughput requirements]
- Scalability: [User growth, data volume expectations]
- Availability: [Uptime requirements]
- Security: [Security requirements]
## Proposed Architecture
### High-Level Architecture Design
### Component Breakdown
### Data Flow Diagrams
### Technology Stack Recommendations
## Implementation Strategy
### Migration Planning
### Risk Mitigation
### Performance Optimization
### Security Considerations
## Scalability and Maintenance
### Horizontal Scaling Strategy
### Monitoring and Observability
### Deployment Strategy
### Long-term Maintenance Plan
```
## 🔄 **Session Integration**
### Status Synchronization
Upon completion, update `workflow-session.json`:
```json
{
"phases": {
"BRAINSTORM": {
"system_architect": {
"status": "completed",
"completed_at": "timestamp",
"output_directory": ".workflow/active/WFS-{topic}/.brainstorming/system-architect/",
"key_insights": ["scalability_bottleneck", "architecture_pattern", "technology_recommendation"]
}
}
}
}
```
### Cross-Role Collaboration
System architect perspective provides:
- **Technical Constraints and Possibilities** → Product Manager
- **Architecture Requirements and Limitations** → UI Designer
- **Data Architecture Requirements** → Data Architect
- **Security Architecture Framework** → Security Expert
- **Technical Implementation Framework** → Feature Planner
## ✅ **Quality Assurance**
### Required Analysis Elements
- [ ] Clear architecture diagrams and component designs
- [ ] Detailed technology stack evaluation and recommendations
- [ ] Scalability and performance analysis with metrics
- [ ] System integration and API design specifications
- [ ] Comprehensive risk assessment and mitigation strategies
### Architecture Design Principles
- [ ] **Scalability**: System can handle growth in users and data
- [ ] **Maintainability**: Clear code structure, easy to modify and extend
- [ ] **Reliability**: Built-in fault tolerance and recovery mechanisms
- [ ] **Security**: Integrated security controls and protection measures
- [ ] **Performance**: Meets response time and throughput requirements
### Technical Decision Validation
- [ ] Technology choices have thorough justification and comparison analysis
- [ ] Architectural patterns align with business requirements and constraints
- [ ] Integration solutions consider compatibility and maintenance costs
- [ ] Deployment strategies are feasible with acceptable risk levels
- [ ] Monitoring and operations strategies are comprehensive and actionable
### Implementation Readiness
- [ ] **Technical Feasibility**: All proposed solutions are technically achievable
- [ ] **Resource Planning**: Resource requirements clearly defined and realistic
- [ ] **Risk Management**: Technical risks identified with mitigation plans
- [ ] **Performance Validation**: Architecture can meet performance requirements
- [ ] **Evolution Strategy**: Design allows for future growth and changes

View File

@@ -1,221 +0,0 @@
---
name: ui-designer
description: Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎨 **UI Designer Analysis Generator**
### Purpose
**Specialized command for generating ui-designer/analysis.md** that addresses guidance-specification.md discussion points from UI/UX design perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **UI/UX Focus**: User experience, interface design, and accessibility perspective
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Visual Design**: Color palettes, typography, spacing, and visual hierarchy implementation
- **High-Fidelity Mockups**: Polished, pixel-perfect interface designs
- **Design System Implementation**: Component libraries, design tokens, and style guides
- **Micro-Interactions & Animations**: Transition effects, loading states, and interactive feedback
- **Responsive Design**: Layout adaptations for different screen sizes and devices
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Concrete Visual Interface Implementation)**
- **Visual Design Language**: Colors, typography, iconography, spacing, and overall aesthetic
- **High-Fidelity Mockups**: Polished designs showing exactly how the interface will look
- **Design System Components**: Building and documenting reusable UI components (buttons, inputs, cards, etc.)
- **Design Tokens**: Defining variables for colors, spacing, typography that can be used in code
- **Micro-Interactions**: Hover states, transitions, animations, and interactive feedback details
- **Responsive Layouts**: Adapting designs for mobile, tablet, and desktop breakpoints
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **User Research & Personas**: User behavior analysis and needs assessment → Defers to **UX Expert**
- **Information Architecture**: Content structure and navigation strategy → Defers to **UX Expert**
- **Low-Fidelity Wireframes**: Structural layouts without visual design → Defers to **UX Expert**
#### **Handoff Points**
- **FROM UX Expert**: Receives wireframes, user flows, and information architecture as the foundation for visual design
- **TO Frontend Developers**: Provides design specifications, component libraries, and design tokens for implementation
- **WITH API Designer**: Coordinates on data presentation and form validation feedback (visual aspects only)
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
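The pseudocode above reduces to ordinary filesystem checks. A minimal Node.js sketch of the same detection logic (the helper name and the pick-first-session assumption are illustrative, not part of the command contract):
```javascript
const fs = require('fs');
const path = require('path');

// Resolve the active session and decide whether a framework exists.
function detectFramework(topicProvided) {
  const activeDir = '.workflow/active';
  const sessions = fs.existsSync(activeDir)
    ? fs.readdirSync(activeDir).filter(d => d.startsWith('WFS-'))
    : [];
  if (sessions.length === 0) {
    if (!topicProvided) throw new Error('No framework found and no topic provided');
    return { frameworkMode: false };
  }
  const sessionId = sessions[0]; // assumption: a single active WFS-* session
  const specPath = path.join(activeDir, sessionId, '.brainstorming', 'guidance-specification.md');
  if (fs.existsSync(specPath)) return { frameworkMode: true, sessionId, specPath };
  if (!topicProvided) throw new Error('No framework found and no topic provided');
  return { frameworkMode: false, sessionId };
}
```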
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute ui-designer analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: ui-designer
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/ui-designer/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load ui-designer planning template
   - Command: bash(cat ~/.claude/workflows/cli-templates/planning-roles/ui-designer.md)
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from UI/UX perspective
**Role Focus**: User experience design, interface optimization, accessibility compliance
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive UI/UX analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with UI/UX design expertise
- Provide actionable design recommendations and interface solutions
- Include accessibility considerations and WCAG compliance planning
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute ui-designer analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing ui-designer framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured ui-designer analysis"
},
{
content: "Update workflow-session.json with ui-designer completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/ui-designer/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# UI Designer Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: UI/UX Design perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with UI/UX expertise]
### Core Requirements (from framework)
[UI/UX perspective on requirements]
### Technical Considerations (from framework)
[Interface and design system considerations]
### User Experience Factors (from framework)
[Detailed UX analysis and recommendations]
### Implementation Challenges (from framework)
[Design implementation and accessibility considerations]
### Success Metrics (from framework)
[UX metrics and usability success criteria]
## UI/UX Specific Recommendations
[Role-specific design recommendations and solutions]
---
*Generated by ui-designer analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"ui_designer": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/ui-designer/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
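Applying this status block is a plain read-modify-write on `workflow-session.json`; a hedged sketch (any session fields beyond the block shown above are assumptions):
```javascript
const fs = require('fs');

// Merge the ui_designer completion block into the session metadata file.
function markUiDesignerComplete(sessionJsonPath) {
  const session = JSON.parse(fs.readFileSync(sessionJsonPath, 'utf8'));
  session.ui_designer = {
    status: 'completed',
    framework_addressed: true,
    output_location: sessionJsonPath.replace(
      'workflow-session.json',
      '.brainstorming/ui-designer/analysis.md'
    ),
    framework_reference: '@../guidance-specification.md'
  };
  fs.writeFileSync(sessionJsonPath, JSON.stringify(session, null, 2));
}
```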
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: UI/UX insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance


@@ -1,221 +0,0 @@
---
name: ux-expert
description: Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **UX Expert Analysis Generator**
### Purpose
**Specialized command for generating ux-expert/analysis.md** that addresses guidance-specification.md discussion points from user experience and interface design perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **UX Design Focus**: User interface, interaction patterns, and usability optimization
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **User Research**: User personas, behavioral analysis, and needs assessment
- **Information Architecture**: Content structure, navigation hierarchy, and mental models
- **User Journey Mapping**: User flows, task analysis, and interaction models
- **Usability Strategy**: Accessibility planning, cognitive load reduction, and user testing frameworks
- **Wireframing**: Low-fidelity layouts and structural prototypes (not visual design)
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Abstract User Experience & Research)**
- **User Research & Personas**: Understanding target users, their goals, pain points, and behaviors
- **Information Architecture**: Organizing content and defining navigation structures at a conceptual level
- **User Journey Mapping**: Defining user flows, task sequences, and interaction models
- **Wireframes & Low-Fidelity Prototypes**: Structural layouts showing information hierarchy (boxes and arrows, not colors/fonts)
- **Usability Testing Strategy**: Planning user testing, A/B tests, and validation methods
- **Accessibility Planning**: WCAG compliance strategy and inclusive design principles
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **Visual Design**: Colors, typography, spacing, visual style → Defers to **UI Designer**
- **High-Fidelity Mockups**: Polished, pixel-perfect designs → Defers to **UI Designer**
- **Component Implementation**: Design system components, CSS, animations → Defers to **UI Designer**
#### **Handoff Points**
- **TO UI Designer**: Provides wireframes, user flows, and information architecture that UI Designer will transform into high-fidelity visual designs
- **FROM User Research**: May receive external research data to inform UX decisions
- **TO Product Owner**: Provides user insights and validation results to inform feature prioritization
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute ux-expert analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: ux-expert
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/ux-expert/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load ux-expert planning template
   - Command: bash(cat ~/.claude/workflows/cli-templates/planning-roles/ux-expert.md)
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from user experience and interface design perspective
**Role Focus**: User research, information architecture, interaction patterns, and usability optimization
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive UX design analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with UX design expertise
- Provide actionable interface design and usability optimization strategies
- Include accessibility considerations and interaction pattern recommendations
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute ux-expert analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing ux-expert framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured ux-expert analysis"
},
{
content: "Update workflow-session.json with ux-expert completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/ux-expert/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# UX Expert Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: User Experience & Interface Design perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with UX design expertise]
### Core Requirements (from framework)
[User interface and interaction design requirements perspective]
### Technical Considerations (from framework)
[Design system implementation and technical feasibility considerations]
### User Experience Factors (from framework)
[Usability optimization, accessibility, and user-centered design analysis]
### Implementation Challenges (from framework)
[Design implementation challenges and progressive enhancement strategies]
### Success Metrics (from framework)
[UX metrics including usability testing, user satisfaction, and design KPIs]
## UX Expert Specific Recommendations
[Role-specific interface design patterns and usability optimization strategies]
---
*Generated by ux-expert analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"ux_expert": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/ux-expert/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: UX design insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance


@@ -17,6 +17,8 @@ Enhanced evidence-based debugging with **documented exploration process**. Recor
**Core workflow**: Explore → Document → Log → Analyze → Correct Understanding → Fix → Verify
**Scope**: Adds temporary debug logging to observe program state; cleans up all instrumentation after resolution. Does NOT execute code injection, security testing, or modify program behavior.
**Key enhancements over /workflow:debug**:
- **understanding.md**: Timeline of exploration and learning
- **Gemini-assisted correction**: Validates and corrects hypotheses
@@ -44,7 +46,7 @@ Explore Mode:
├─ Locate error source in codebase
├─ Document initial understanding in understanding.md
├─ Generate testable hypotheses with Gemini validation
├─ Add NDJSON logging instrumentation
├─ Add NDJSON debug logging statements
└─ Output: Hypothesis list + await user reproduction
Analyze Mode:
@@ -216,9 +218,9 @@ Save Gemini output to `hypotheses.json`:
}
```
**Step 1.4: Add NDJSON Instrumentation**
**Step 1.4: Add NDJSON Debug Logging**
For each hypothesis, add logging (same as original debug command).
For each hypothesis, add temporary logging statements to observe program state at key execution points. Use NDJSON format for structured log parsing. These statements only observe and record state; they do not modify program behavior.
**Step 1.5: Update understanding.md**
@@ -441,7 +443,7 @@ What we learned from this debugging session:
**Step 3.3: Cleanup**
Remove debug instrumentation (same as original command).
Remove all temporary debug logging statements added during investigation. Verify no instrumentation code remains in production code.
---
@@ -647,7 +649,7 @@ Why is config value None during update?
| Feature | /workflow:debug | /workflow:debug-with-file |
|---------|-----------------|---------------------------|
| NDJSON logging | ✅ | ✅ |
| NDJSON debug logging | ✅ | ✅ |
| Hypothesis generation | Manual | Gemini-assisted |
| Exploration documentation | ❌ | ✅ understanding.md |
| Understanding evolution | ❌ | ✅ Timeline + corrections |


@@ -1,331 +0,0 @@
---
name: debug
description: Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved
argument-hint: "[-y|--yes] \"bug description or error message\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm all decisions (hypotheses, fixes, iteration), use recommended settings.
# Workflow Debug Command (/workflow:debug)
## Overview
Evidence-based interactive debugging command. Systematically identifies root causes through hypothesis-driven logging and iterative verification.
**Core workflow**: Explore → Add Logging → Reproduce → Analyze Log → Fix → Verify
## Usage
```bash
/workflow:debug <BUG_DESCRIPTION>
# Arguments
<bug-description> Bug description, error message, or stack trace (required)
```
## Execution Process
```
Session Detection:
├─ Check if debug session exists for this bug
├─ EXISTS + debug.log has content → Analyze mode
└─ NOT_FOUND or empty log → Explore mode
Explore Mode:
├─ Locate error source in codebase
├─ Generate testable hypotheses (dynamic count)
├─ Add NDJSON logging instrumentation
└─ Output: Hypothesis list + await user reproduction
Analyze Mode:
├─ Parse debug.log, validate each hypothesis
└─ Decision:
├─ Confirmed → Fix root cause
├─ Inconclusive → Add more logging, iterate
└─ All rejected → Generate new hypotheses
Fix & Cleanup:
├─ Apply fix based on confirmed hypothesis
├─ User verifies
├─ Remove debug instrumentation
└─ If not fixed → Return to Analyze mode
```
## Implementation
### Session Setup & Mode Detection
```javascript
// Render UTC+8 wall-clock time as an ISO string (offset added manually to the epoch)
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `DBG-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.debug/${sessionId}`
const debugLogPath = `${sessionFolder}/debug.log`
// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const logHasContent = sessionExists && fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0
const mode = logHasContent ? 'analyze' : 'explore'
if (!sessionExists) {
bash(`mkdir -p ${sessionFolder}`)
}
```
---
### Explore Mode
**Step 1.1: Locate Error Source**
```javascript
// Extract keywords from bug description
const keywords = extractErrorKeywords(bug_description)
// e.g., ['Stack Length', '未找到', 'registered 0']
// Search codebase for error locations
for (const keyword of keywords) {
Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
}
// Identify affected files and functions
const affectedLocations = [...] // from search results
```
**Step 1.2: Generate Hypotheses (Dynamic)**
```javascript
// Hypothesis categories based on error pattern
const HYPOTHESIS_PATTERNS = {
"not found|missing|undefined|未找到": "data_mismatch",
"0|empty|zero|registered 0": "logic_error",
"timeout|connection|sync": "integration_issue",
"type|format|parse": "type_mismatch"
}
// Generate hypotheses based on actual issue (NOT fixed count)
function generateHypotheses(bugDescription, affectedLocations) {
const hypotheses = []
// Analyze bug and create targeted hypotheses
// Each hypothesis has:
// - id: H1, H2, ... (dynamic count)
// - description: What might be wrong
// - testable_condition: What to log
// - logging_point: Where to add instrumentation
return hypotheses // Could be 1, 3, 5, or more
}
const hypotheses = generateHypotheses(bug_description, affectedLocations)
```
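For reference, a single generated hypothesis might look like the following; the concrete values are illustrative, not output of the command:
```javascript
// Example shape of one hypothesis produced by generateHypotheses().
const exampleHypothesis = {
  id: 'H1',
  description: 'Registry lookup key is built with wrong casing, so the entry is never found',
  testable_condition: 'Log available dict keys and the lookup target at the failing call site',
  logging_point: 'src/registry.py:resolve:42'
};
```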
**Step 1.3: Add NDJSON Instrumentation**
For each hypothesis, add logging at the relevant location:
**Python template**:
```python
# region debug [H{n}]
try:
import json, time
_dbg = {
"sid": "{sessionId}",
"hid": "H{n}",
"loc": "{file}:{line}",
"msg": "{testable_condition}",
"data": {
# Capture relevant values here
},
"ts": int(time.time() * 1000)
}
with open(r"{debugLogPath}", "a", encoding="utf-8") as _f:
_f.write(json.dumps(_dbg, ensure_ascii=False) + "\n")
except: pass
# endregion
```
**JavaScript/TypeScript template**:
```javascript
// region debug [H{n}]
try {
require('fs').appendFileSync("{debugLogPath}", JSON.stringify({
sid: "{sessionId}",
hid: "H{n}",
loc: "{file}:{line}",
msg: "{testable_condition}",
data: { /* Capture relevant values */ },
ts: Date.now()
}) + "\n");
} catch(_) {}
// endregion
```
**Output to user**:
```
## Hypotheses Generated
Based on error "{bug_description}", generated {n} hypotheses:
{hypotheses.map(h => `
### ${h.id}: ${h.description}
- Logging at: ${h.logging_point}
- Testing: ${h.testable_condition}
`).join('')}
**Debug log**: ${debugLogPath}
**Next**: Run reproduction steps, then come back for analysis.
```
---
### Analyze Mode
```javascript
// Parse NDJSON log
const entries = Read(debugLogPath).split('\n')
.filter(l => l.trim())
.map(l => JSON.parse(l))
// Group by hypothesis
const byHypothesis = groupBy(entries, 'hid')
// Validate each hypothesis
for (const [hid, logs] of Object.entries(byHypothesis)) {
const hypothesis = hypotheses.find(h => h.id === hid)
const latestLog = logs[logs.length - 1]
// Check if evidence confirms or rejects hypothesis
const verdict = evaluateEvidence(hypothesis, latestLog.data)
// Returns: 'confirmed' | 'rejected' | 'inconclusive'
}
```
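`evaluateEvidence` is deliberately abstract above; a minimal sketch of one way to implement it (the per-hypothesis predicate is an assumption — real checks depend on what each hypothesis logs):
```javascript
// Compare the latest captured data against the hypothesis's expectation.
function evaluateEvidence(hypothesis, data) {
  if (data == null) return 'inconclusive';
  // Example predicate: H1 expected the lookup target to be missing from the keys.
  if (hypothesis.id === 'H1' && typeof data.found === 'boolean') {
    return data.found ? 'rejected' : 'confirmed';
  }
  return 'inconclusive';
}
```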
**Output**:
```
## Evidence Analysis
Analyzed ${entries.length} log entries.
${results.map(r => `
### ${r.id}: ${r.description}
- **Status**: ${r.verdict}
- **Evidence**: ${JSON.stringify(r.evidence)}
- **Reason**: ${r.reason}
`).join('')}
${confirmedHypothesis ? `
## Root Cause Identified
**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}
Ready to fix.
` : `
## Need More Evidence
Add more logging or refine hypotheses.
`}
```
---
### Fix & Cleanup
```javascript
// Apply fix based on confirmed hypothesis
// ... Edit affected files
// After user verifies fix works:
// Remove debug instrumentation (search for region markers)
const instrumentedFiles = Grep({
pattern: "# region debug|// region debug",
output_mode: "files_with_matches"
})
for (const file of instrumentedFiles) {
// Remove content between region markers
removeDebugRegions(file)
}
console.log(`
## Debug Complete
- Root cause: ${confirmedHypothesis.description}
- Fix applied to: ${modifiedFiles.join(', ')}
- Debug instrumentation removed
`)
```
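`removeDebugRegions` is referenced but never defined; one possible implementation that deletes everything between the region markers (the regex follows the marker syntax of the templates above — treat it as a sketch):
```javascript
const fs = require('fs');

// Strip every "# region debug [Hn]" ... "# endregion" (or //-style) block.
function removeDebugRegions(file) {
  const src = fs.readFileSync(file, 'utf8');
  const cleaned = src.replace(
    /^[ \t]*(#|\/\/) region debug \[H\d+\][\s\S]*?^[ \t]*\1 endregion[ \t]*\r?\n?/gm,
    ''
  );
  fs.writeFileSync(file, cleaned);
}
```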
---
## Debug Log Format (NDJSON)
Each line is a JSON object:
```json
{"sid":"DBG-xxx-2025-12-18","hid":"H1","loc":"file.py:func:42","msg":"Check dict keys","data":{"keys":["a","b"],"target":"c","found":false},"ts":1734567890123}
```
| Field | Description |
|-------|-------------|
| `sid` | Session ID |
| `hid` | Hypothesis ID (H1, H2, ...) |
| `loc` | Code location |
| `msg` | What's being tested |
| `data` | Captured values |
| `ts` | Timestamp (ms) |
## Session Folder
```
.workflow/.debug/DBG-{slug}-{date}/
├── debug.log # NDJSON log (main artifact)
└── resolution.md # Summary after fix (optional)
```
## Iteration Flow
```
First Call (/workflow:debug "error"):
├─ No session exists → Explore mode
├─ Extract error keywords, search codebase
├─ Generate hypotheses, add logging
└─ Await user reproduction
After Reproduction (/workflow:debug "error"):
├─ Session exists + debug.log has content → Analyze mode
├─ Parse log, evaluate hypotheses
└─ Decision:
├─ Confirmed → Fix → User verify
│ ├─ Fixed → Cleanup → Done
│ └─ Not fixed → Add logging → Iterate
├─ Inconclusive → Add logging → Iterate
└─ All rejected → New hypotheses → Iterate
Output:
└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
```
## Post-Completion Expansion
After completion, ask the user whether to expand follow-ups into issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`
---
## Error Handling
| Situation | Action |
|-----------|--------|
| Empty debug.log | Verify reproduction triggered the code path |
| All hypotheses rejected | Generate new hypotheses with broader scope |
| Fix doesn't work | Iterate with more granular logging |
| >5 iterations | Escalate to `/workflow:lite-fix` with evidence |

File diff suppressed because it is too large


@@ -477,7 +477,7 @@ Task(subagent_type="{meta.agent}",
- TODO List: {session.todo_list_path}
- Summaries: {session.summaries_dir}
**Execution**: Read task JSON → Parse flow_control → Execute implementation_approach → Update TODO_LIST.md → Generate summary",
**Execution**: Read task JSON → Execute pre_analysis → Check execution_config.method → (CLI: handoff to CLI tool | Agent: direct implementation) → Update TODO_LIST.md → Generate summary",
description="Implement: {task.id}")
```
@@ -486,9 +486,11 @@ Task(subagent_type="{meta.agent}",
- `[FLOW_CONTROL]`: Triggers flow_control.pre_analysis execution
**Why Path-Based**: Agent (code-developer.md) autonomously:
- Reads and parses task JSON (requirements, acceptance, flow_control)
- Loads tech stack guidelines based on detected language
- Executes pre_analysis steps and implementation_approach
- Reads and parses task JSON (requirements, acceptance, flow_control, execution_config)
- Executes pre_analysis steps (Phase 1: context gathering)
- Checks execution_config.method (Phase 2: determine mode)
- CLI mode: Builds handoff prompt and executes via ccw cli with resume strategy
- Agent mode: Directly implements using modification_points and logic_flow
- Generates structured summary with integration points
Embedding task content in prompt creates duplication and conflicts with agent's parsing logic.
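In code, the Phase 2 dispatch amounts to a small branch on `execution_config`; a hedged sketch (all three helpers are assumed names, not existing APIs):
```javascript
// Phase 2: choose execution mode from the task's execution_config.
function dispatchTask(task) {
  const { method, cli_tool } = task.meta?.execution_config ?? { method: 'agent' };
  if (method === 'cli') {
    const prompt = buildHandoffPrompt(task);               // assumed helper
    return runCliTool(cli_tool, prompt, { resume: true }); // assumed helper
  }
  return implementDirectly(task.flow_control);             // assumed helper
}
```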


@@ -72,8 +72,8 @@ Phase 2: Clarification (optional, multi-round)
Phase 3: Planning (NO CODE EXECUTION - planning only)
└─ Decision (based on Phase 1 complexity):
├─ Low → Load schema: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json → Direct Claude planning (following schema) → plan.json → MUST proceed to Phase 4
└─ Medium/High → cli-lite-planning-agent → plan.json → MUST proceed to Phase 4
├─ Low → Load schema: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json → Direct Claude planning (following schema) → plan.json
└─ Medium/High → cli-lite-planning-agent → plan.json (agent internally executes quality check)
Phase 4: Confirmation & Selection
├─ Display plan summary (tasks, complexity, estimated time)


@@ -150,378 +150,213 @@ Create internal representations (do not include raw artifacts in output):
- Task-to-task dependencies (depends_on, blocks)
- Requirement-level dependencies (from synthesis)
### 4. Detection Passes (Token-Efficient Analysis)
### 4. Detection Passes (Agent-Driven Multi-Dimensional Analysis)
**Token Budget Strategy**:
- **Total Limit**: 50 findings maximum (aggregate remainder in overflow summary)
- **Priority Allocation**: CRITICAL (unlimited) → HIGH (15) → MEDIUM (20) → LOW (15)
- **Early Exit**: If CRITICAL findings > 0 in User Intent/Requirements Coverage, skip LOW/MEDIUM priority checks
**Execution Strategy**:
- Single `cli-explore-agent` invocation
- Agent executes multiple CLI analyses internally (different dimensions: A-H)
- Token Budget: 50 findings maximum (aggregate remainder in overflow summary)
- Priority Allocation: CRITICAL (unlimited) → HIGH (15) → MEDIUM (20) → LOW (15)
- Early Exit: If CRITICAL findings > 0 in User Intent/Requirements Coverage, skip LOW/MEDIUM checks
**Execution Order** (Process in sequence; skip if token budget exhausted):
**Execution Order** (Agent orchestrates internally):
1. **Tier 1 (CRITICAL Path)**: A, B, C - User intent, coverage, consistency (process fully)
2. **Tier 2 (HIGH Priority)**: D, E - Dependencies, synthesis alignment (limit 15 findings total)
1. **Tier 1 (CRITICAL Path)**: A, B, C - User intent, coverage, consistency (full analysis)
2. **Tier 2 (HIGH Priority)**: D, E - Dependencies, synthesis alignment (limit 15 findings)
3. **Tier 3 (MEDIUM Priority)**: F - Specification quality (limit 20 findings)
4. **Tier 4 (LOW Priority)**: G, H - Duplication, feasibility (limit 15 findings total)
4. **Tier 4 (LOW Priority)**: G, H - Duplication, feasibility (limit 15 findings)
---
#### A. User Intent Alignment (CRITICAL - Tier 1)
#### Phase 4.1: Launch Unified Verification Agent
- **Goal Alignment**: IMPL_PLAN objectives match user's original intent
- **Scope Drift**: Plan covers user's stated scope without unauthorized expansion
- **Success Criteria Match**: Plan's success criteria reflect user's expectations
- **Intent Conflicts**: Tasks contradicting user's original objectives
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description="Multi-dimensional plan verification",
prompt=`
## Plan Verification Task
#### B. Requirements Coverage Analysis
### MANDATORY FIRST STEPS
1. Read: ~/.claude/workflows/cli-templates/schemas/plan-verify-agent-schema.json (dimensions & rules)
2. Read: ~/.claude/workflows/cli-templates/schemas/verify-json-schema.json (output schema)
3. Read: ${session_file} (user intent)
4. Read: ${IMPL_PLAN} (implementation plan)
5. Glob: ${task_dir}/*.json (task files)
6. Glob: ${SYNTHESIS_DIR}/*/analysis.md (role analyses)
- **Orphaned Requirements**: Requirements in synthesis with zero associated tasks
- **Unmapped Tasks**: Tasks with no clear requirement linkage
- **NFR Coverage Gaps**: Non-functional requirements (performance, security, scalability) not reflected in tasks
### Execution Flow
#### C. Consistency Validation
**Load schema → Execute tiered CLI analysis → Aggregate findings → Write JSON**
- **Requirement Conflicts**: Tasks contradicting synthesis requirements
- **Architecture Drift**: IMPL_PLAN architecture not matching synthesis ADRs
- **Terminology Drift**: Same concept named differently across IMPL_PLAN and tasks
- **Data Model Inconsistency**: Tasks referencing entities/fields not in synthesis data model
FOR each tier in [1, 2, 3, 4]:
- Load tier config from plan-verify-agent-schema.json
- Execute: ccw cli -p "PURPOSE: Verify dimensions {tier.dimensions}
TASK: {tier.checks from schema}
CONTEXT: @${session_dir}/**/*
EXPECTED: Findings JSON with dimension, severity, location, summary, recommendation
CONSTRAINTS: Limit {tier.limit} findings
" --tool gemini --mode analysis --rule {tier.rule}
- Parse findings, check early exit condition
- IF tier == 1 AND critical_count > 0: skip tier 3-4
#### D. Dependency Integrity
### Output
Write: ${process_dir}/verification-findings.json (follow verify-json-schema.json)
Return: Quality gate decision + 2-3 sentence summary
`
)
```
- **Circular Dependencies**: Task A depends on B, B depends on C, C depends on A
- **Missing Dependencies**: Task requires outputs from another task but no explicit dependency
- **Broken Dependencies**: Task depends on non-existent task ID
- **Logical Ordering Issues**: Implementation tasks before foundational setup without dependency note
---
#### E. Synthesis Alignment
#### Phase 4.2: Load and Organize Findings
- **Priority Conflicts**: High-priority synthesis requirements mapped to low-priority tasks
- **Success Criteria Mismatch**: IMPL_PLAN success criteria not covering synthesis acceptance criteria
- **Risk Mitigation Gaps**: Critical risks in synthesis without corresponding mitigation tasks
```javascript
// Load findings (single parse for all subsequent use)
const data = JSON.parse(Read(`${process_dir}/verification-findings.json`))
const { session_id, timestamp, verification_tiers_completed, findings, summary } = data
const { critical_count, high_count, medium_count, low_count, total_findings, coverage_percentage, recommendation } = summary
#### F. Task Specification Quality
// Group by severity and dimension
const bySeverity = Object.groupBy(findings, f => f.severity)
const byDimension = Object.groupBy(findings, f => f.dimension)
- **Ambiguous Focus Paths**: Tasks with vague or missing focus_paths
- **Underspecified Acceptance**: Tasks without clear acceptance criteria
- **Missing Artifacts References**: Tasks not referencing relevant brainstorming artifacts in context.artifacts
- **Weak Flow Control**: Tasks without clear implementation_approach or pre_analysis steps
- **Missing Target Files**: Tasks without flow_control.target_files specification
// Dimension metadata (from schema)
const DIMS = {
A: "User Intent Alignment", B: "Requirements Coverage", C: "Consistency Validation",
D: "Dependency Integrity", E: "Synthesis Alignment", F: "Task Specification Quality",
G: "Duplication Detection", H: "Feasibility Assessment"
}
```
#### G. Duplication Detection
### 5. Generate Report
- **Overlapping Task Scope**: Multiple tasks with nearly identical descriptions
- **Redundant Requirements Coverage**: Same requirement covered by multiple tasks without clear partitioning
```javascript
// Helper: render dimension section
const renderDimension = (dim) => {
const items = byDimension[dim] || []
return items.length > 0
? items.map(f => `### ${f.id}: ${f.summary}\n- **Severity**: ${f.severity}\n- **Location**: ${f.location.join(', ')}\n- **Recommendation**: ${f.recommendation}`).join('\n\n')
: `> ✅ No ${DIMS[dim]} issues detected.`
}
#### H. Feasibility Assessment
// Helper: render severity section
const renderSeverity = (severity, impact) => {
const items = bySeverity[severity] || []
return items.length > 0
? items.map(f => `#### ${f.id}: ${f.summary}\n- **Dimension**: ${f.dimension_name}\n- **Location**: ${f.location.join(', ')}\n- **Impact**: ${impact}\n- **Recommendation**: ${f.recommendation}`).join('\n\n')
: `> ✅ No ${severity.toLowerCase()}-severity issues detected.`
}
- **Complexity Misalignment**: Task marked "simple" but requires multiple file modifications
- **Resource Conflicts**: Parallel tasks requiring same resources/files
- **Skill Gap Risks**: Tasks requiring skills not in team capability assessment (from synthesis)
### 5. Severity Assignment
Use this heuristic to prioritize findings:
- **CRITICAL**:
- Violates user's original intent (goal misalignment, scope drift)
- Violates synthesis authority (requirement conflict)
- Core requirement with zero coverage
- Circular dependencies
- Broken dependencies
- **HIGH**:
- NFR coverage gaps
- Priority conflicts
- Missing risk mitigation tasks
- Ambiguous acceptance criteria
- **MEDIUM**:
- Terminology drift
- Missing artifacts references
- Weak flow control
- Logical ordering issues
- **LOW**:
- Style/wording improvements
- Minor redundancy not affecting execution
### 6. Produce Compact Analysis Report
**Report Generation**: Generate report content and save to file.
Output a Markdown report with the following structure:
```markdown
// Build Markdown report
const fullReport = `
# Plan Verification Report
**Session**: WFS-{session-id}
**Generated**: {timestamp}
**Artifacts Analyzed**: role analysis documents, IMPL_PLAN.md, {N} task files
**User Intent Analysis**: {user_intent_analysis or "SKIPPED: workflow-session.json not found"}
**Session**: WFS-${session_id} | **Generated**: ${timestamp}
**Tiers Completed**: ${verification_tiers_completed.join(', ')}
---
## Executive Summary
### Quality Gate Decision
| Metric | Value | Status |
|--------|-------|--------|
| Overall Risk Level | CRITICAL \| HIGH \| MEDIUM \| LOW | {status_emoji} |
| Critical Issues | {count} | 🔴 |
| High Issues | {count} | 🟠 |
| Medium Issues | {count} | 🟡 |
| Low Issues | {count} | 🟢 |
| Risk Level | ${critical_count > 0 ? 'CRITICAL' : high_count > 0 ? 'HIGH' : medium_count > 0 ? 'MEDIUM' : 'LOW'} | ${critical_count > 0 ? '🔴' : high_count > 0 ? '🟠' : medium_count > 0 ? '🟡' : '🟢'} |
| Critical/High/Medium/Low | ${critical_count}/${high_count}/${medium_count}/${low_count} | |
| Coverage | ${coverage_percentage}% | ${coverage_percentage >= 90 ? '🟢' : coverage_percentage >= 75 ? '🟡' : '🔴'} |
### Recommendation
**{RECOMMENDATION}**
**Decision Rationale**:
{brief explanation based on severity criteria}
**Quality Gate Criteria**:
- **BLOCK_EXECUTION**: Critical issues > 0 (must fix before proceeding)
- **PROCEED_WITH_FIXES**: Critical = 0, High > 0 (fix recommended before execution)
- **PROCEED_WITH_CAUTION**: Critical = 0, High = 0, Medium > 0 (proceed with awareness)
- **PROCEED**: Only Low issues or None (safe to execute)
**Recommendation**: **${recommendation}**
---
## Findings Summary
| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| C1 | Coverage | CRITICAL | synthesis:FR-03 | Requirement "User auth" has zero task coverage | Add authentication implementation task |
| H1 | Consistency | HIGH | IMPL-1.2 vs synthesis:ADR-02 | Task uses REST while synthesis specifies GraphQL | Align task with ADR-02 decision |
| M1 | Specification | MEDIUM | IMPL-2.1 | Missing context.artifacts reference | Add @synthesis reference |
| L1 | Duplication | LOW | IMPL-3.1, IMPL-3.2 | Similar scope | Consider merging |
(Generate stable IDs prefixed by severity initial: C/H/M/L + number)
| ID | Dimension | Severity | Location | Summary |
|----|-----------|----------|----------|---------|
${findings.map(f => `| ${f.id} | ${f.dimension_name} | ${f.severity} | ${f.location.join(', ')} | ${f.summary} |`).join('\n')}
---
## User Intent Alignment Analysis
## Analysis by Dimension
{IF user_intent_analysis != "SKIPPED"}
### Goal Alignment
- **User Intent**: {user_original_intent}
- **IMPL_PLAN Objectives**: {plan_objectives}
- **Alignment Status**: {ALIGNED/MISALIGNED/PARTIAL}
- **Findings**: {specific alignment issues}
### Scope Verification
- **User Scope**: {user_defined_scope}
- **Plan Scope**: {plan_actual_scope}
- **Drift Detection**: {NONE/MINOR/MAJOR}
- **Findings**: {specific scope issues}
{ELSE}
> ⚠️ User intent alignment analysis was skipped because workflow-session.json was not found.
{END IF}
${['A','B','C','D','E','F','G','H'].map(d => `### ${d}. ${DIMS[d]}\n\n${renderDimension(d)}`).join('\n\n---\n\n')}
---
## Requirements Coverage Analysis
## Findings by Severity
### Functional Requirements
### CRITICAL (${critical_count})
${renderSeverity('CRITICAL', 'Blocks execution')}
| Requirement ID | Requirement Summary | Has Task? | Task IDs | Priority Match | Notes |
|----------------|---------------------|-----------|----------|----------------|-------|
| FR-01 | User authentication | Yes | IMPL-1.1, IMPL-1.2 | Match | Complete |
| FR-02 | Data export | Yes | IMPL-2.3 | Mismatch | High req → Med priority task |
| FR-03 | Profile management | No | - | - | **CRITICAL: Zero coverage** |
### HIGH (${high_count})
${renderSeverity('HIGH', 'Fix before execution recommended')}
### Non-Functional Requirements
### MEDIUM (${medium_count})
${renderSeverity('MEDIUM', 'Address during/after implementation')}
| Requirement ID | Requirement Summary | Has Task? | Task IDs | Notes |
|----------------|---------------------|-----------|----------|-------|
| NFR-01 | Response time <200ms | No | - | **HIGH: No performance tasks** |
| NFR-02 | Security compliance | Yes | IMPL-4.1 | Complete |
### Business Requirements
| Requirement ID | Requirement Summary | Has Task? | Task IDs | Notes |
|----------------|---------------------|-----------|----------|-------|
| BR-01 | Launch by Q2 | Yes | IMPL-1.* through IMPL-5.* | Timeline realistic |
### Coverage Metrics
| Requirement Type | Total | Covered | Coverage % |
|------------------|-------|---------|------------|
| Functional | {count} | {count} | {percent}% |
| Non-Functional | {count} | {count} | {percent}% |
| Business | {count} | {count} | {percent}% |
| **Overall** | **{total}** | **{covered}** | **{percent}%** |
### LOW (${low_count})
${renderSeverity('LOW', 'Optional improvement')}
---
## Dependency Integrity
## Next Steps
### Dependency Graph Analysis
${recommendation === 'BLOCK_EXECUTION' ? '🛑 **BLOCK**: Fix critical issues → Re-verify' :
recommendation === 'PROCEED_WITH_FIXES' ? '⚠️ **FIX RECOMMENDED**: Address high issues → Re-verify or Execute' :
'✅ **READY**: Proceed to /workflow:execute'}
**Circular Dependencies**: {None or List}
Re-verify: \`/workflow:plan-verify --session ${session_id}\`
Execute: \`/workflow:execute --resume-session="${session_id}"\`
`
**Broken Dependencies**:
- IMPL-2.3 depends on "IMPL-2.4" (non-existent)
**Missing Dependencies**:
- IMPL-5.1 (integration test) has no dependency on IMPL-1.* (implementation tasks)
**Logical Ordering Issues**:
{List or "None detected"}
---
## Synthesis Alignment Issues
| Issue Type | Synthesis Reference | IMPL_PLAN/Task | Impact | Recommendation |
|------------|---------------------|----------------|--------|----------------|
| Architecture Conflict | synthesis:ADR-01 (JWT auth) | IMPL_PLAN uses session cookies | HIGH | Update IMPL_PLAN to use JWT |
| Priority Mismatch | synthesis:FR-02 (High) | IMPL-2.3 (Medium) | MEDIUM | Elevate task priority |
| Missing Risk Mitigation | synthesis:Risk-03 (API rate limits) | No mitigation tasks | HIGH | Add rate limiting implementation task |
---
## Task Specification Quality
### Aggregate Statistics
| Quality Dimension | Tasks Affected | Percentage |
|-------------------|----------------|------------|
| Missing Artifacts References | {count} | {percent}% |
| Weak Flow Control | {count} | {percent}% |
| Missing Target Files | {count} | {percent}% |
| Ambiguous Focus Paths | {count} | {percent}% |
### Sample Issues
- **IMPL-1.2**: No context.artifacts reference to synthesis
- **IMPL-3.1**: Missing flow_control.target_files specification
- **IMPL-4.2**: Vague focus_paths ["src/"] - needs refinement
---
## Feasibility Concerns
| Concern | Tasks Affected | Issue | Recommendation |
|---------|----------------|-------|----------------|
| Skill Gap | IMPL-6.1, IMPL-6.2 | Requires Kubernetes expertise not in team | Add training task or external consultant |
| Resource Conflict | IMPL-3.1, IMPL-3.2 | Both modify src/auth/service.ts in parallel | Add dependency or serialize |
---
## Detailed Findings by Severity
### CRITICAL Issues ({count})
{Detailed breakdown of each critical issue with location, impact, and recommendation}
### HIGH Issues ({count})
{Detailed breakdown of each high issue with location, impact, and recommendation}
### MEDIUM Issues ({count})
{Detailed breakdown of each medium issue with location, impact, and recommendation}
### LOW Issues ({count})
{Detailed breakdown of each low issue with location, impact, and recommendation}
---
## Metrics Summary
| Metric | Value |
|--------|-------|
| Total Requirements | {count} ({functional} functional, {nonfunctional} non-functional, {business} business) |
| Total Tasks | {count} |
| Overall Coverage | {percent}% ({covered}/{total} requirements with ≥1 task) |
| Critical Issues | {count} |
| High Issues | {count} |
| Medium Issues | {count} |
| Low Issues | {count} |
| Total Findings | {total_findings} |
---
## Remediation Recommendations
### Priority Order
1. **CRITICAL** - Must fix before proceeding
2. **HIGH** - Fix before execution
3. **MEDIUM** - Fix during or after implementation
4. **LOW** - Optional improvements
### Next Steps
Based on the quality gate recommendation ({RECOMMENDATION}):
{IF BLOCK_EXECUTION}
**🛑 BLOCK EXECUTION**
You must resolve all CRITICAL issues before proceeding with implementation:
1. Review each critical issue in detail
2. Determine remediation approach (modify IMPL_PLAN.md, update task.json, or add new tasks)
3. Apply fixes systematically
4. Re-run verification to confirm resolution
{ELSE IF PROCEED_WITH_FIXES}
**⚠️ PROCEED WITH FIXES RECOMMENDED**
No critical issues detected, but HIGH issues exist. Recommended workflow:
1. Review high-priority issues
2. Apply fixes before execution for optimal results
3. Re-run verification (optional)
{ELSE IF PROCEED_WITH_CAUTION}
**✅ PROCEED WITH CAUTION**
Only MEDIUM issues detected. You may proceed with implementation:
- Address medium issues during or after implementation
- Maintain awareness of identified concerns
{ELSE}
**✅ PROCEED**
No significant issues detected. Safe to execute implementation workflow.
{END IF}
---
**Report End**
// Write report
Write(`${process_dir}/PLAN_VERIFICATION.md`, fullReport)
console.log(`✅ Report: ${process_dir}/PLAN_VERIFICATION.md\n📊 ${recommendation} | C:${critical_count} H:${high_count} M:${medium_count} L:${low_count} | Coverage:${coverage_percentage}%`)
```
### 7. Save and Display Report
### 6. Next Step Selection
**Step 7.1: Save Report**:
```bash
report_path = ".workflow/active/WFS-{session}/.process/PLAN_VERIFICATION.md"
Write(report_path, full_report_content)
```
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const canExecute = recommendation !== 'BLOCK_EXECUTION'
// Auto mode
if (autoYes) {
if (canExecute) {
SlashCommand("/workflow:execute --yes --resume-session=\"${session_id}\"")
} else {
console.log(`[--yes] BLOCK_EXECUTION - Fix ${critical_count} critical issues first.`)
}
return
}
// Interactive mode - build options based on quality gate
const options = canExecute
? [
{ label: recommendation === 'PROCEED_WITH_FIXES' ? "Execute Anyway" : "Execute (Recommended)",
description: "Proceed to /workflow:execute" },
{ label: "Review Report", description: "Review findings before deciding" },
{ label: "Re-verify", description: "Re-run after manual fixes" }
]
: [
{ label: "Review Report", description: "Review critical issues" },
{ label: "Re-verify", description: "Re-run after fixing issues" }
]
const selection = AskUserQuestion({
questions: [{
question: `Quality gate: ${recommendation}. Next step?`,
header: "Action",
multiSelect: false,
options
}]
})
// Handle selection
if (selection.includes("Execute")) {
SlashCommand(`/workflow:execute --resume-session="${session_id}"`)
} else if (selection === "Re-verify") {
SlashCommand(`/workflow:plan-verify --session ${session_id}`)
}
```
**Step 7.2: Display Summary to User**:
```bash
# Display executive summary in terminal
echo "=== Plan Verification Complete ==="
echo "Report saved to: {report_path}"
echo ""
echo "Quality Gate: {RECOMMENDATION}"
echo "Critical: {count} | High: {count} | Medium: {count} | Low: {count}"
echo ""
echo "Next: Review full report for detailed findings and recommendations"
```
**Step 7.3: Completion**:
- Report is saved to `.process/PLAN_VERIFICATION.md`
- User can review findings and decide on remediation approach
- No automatic modifications are made to source artifacts
- User can manually apply fixes or use separate remediation command (if available)


@@ -3,6 +3,7 @@ name: plan
description: 5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs
argument-hint: "[-y|--yes] \"text description\"|file.md"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
group: workflow
---
## Auto Mode
@@ -115,7 +116,38 @@ CONTEXT: Existing user database schema, REST API endpoints
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress
**After Phase 1**: Return to user showing Phase 1 results, then auto-continue to Phase 2
**After Phase 1**: Initialize planning-notes.md with user intent
```javascript
// Create minimal planning notes document
const planningNotesPath = `.workflow/active/${sessionId}/planning-notes.md`
const userGoal = structuredDescription.goal
const userConstraints = structuredDescription.context || "None specified"
Write(planningNotesPath, `# Planning Notes
**Session**: ${sessionId}
**Created**: ${new Date().toISOString()}
## User Intent (Phase 1)
- **GOAL**: ${userGoal}
- **KEY_CONSTRAINTS**: ${userConstraints}
---
## Context Findings (Phase 2)
(To be filled by context-gather)
## Conflict Decisions (Phase 3)
(To be filled if conflicts detected)
## Consolidated Constraints (Phase 4 Input)
1. ${userConstraints}
`)
```
Return to user showing Phase 1 results, then auto-continue to Phase 2
---
@@ -138,6 +170,7 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st
**Validation**:
- Context package path extracted
- File exists and is valid JSON
- `prioritized_context` field exists
<!-- TodoWrite: When context-gather executed, INSERT 3 context-gather tasks, mark first as in_progress -->
@@ -168,7 +201,37 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st
**Note**: Phase 2 tasks completed and collapsed to summary.
**After Phase 2**: Return to user showing Phase 2 results, then auto-continue to Phase 3/4 (depending on conflict_risk)
**After Phase 2**: Update planning-notes.md with context findings, then auto-continue
```javascript
// Read context-package to extract key findings
const contextPackage = JSON.parse(Read(contextPath))
const conflictRisk = contextPackage.conflict_detection?.risk_level || 'low'
const criticalFiles = (contextPackage.exploration_results?.aggregated_insights?.critical_files || [])
.slice(0, 5).map(f => f.path)
const archPatterns = contextPackage.project_context?.architecture_patterns || []
const constraints = contextPackage.exploration_results?.aggregated_insights?.constraints || []
// Append Phase 2 findings to planning-notes.md
Edit(planningNotesPath, {
old: '## Context Findings (Phase 2)\n(To be filled by context-gather)',
new: `## Context Findings (Phase 2)
- **CRITICAL_FILES**: ${criticalFiles.join(', ') || 'None identified'}
- **ARCHITECTURE**: ${archPatterns.join(', ') || 'Not detected'}
- **CONFLICT_RISK**: ${conflictRisk}
- **CONSTRAINTS**: ${constraints.length > 0 ? constraints.join('; ') : 'None'}`
})
// Append Phase 2 constraints to consolidated list
Edit(planningNotesPath, {
old: '## Consolidated Constraints (Phase 4 Input)',
new: `## Consolidated Constraints (Phase 4 Input)
${constraints.map((c, i) => `${i + 2}. [Context] ${c}`).join('\n')}`
})
```
Return to user showing Phase 2 results, then auto-continue to Phase 3/4 (depending on conflict_risk)
---
@@ -229,7 +292,45 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Note**: Phase 3 tasks completed and collapsed to summary.
**After Phase 3**: Return to user showing conflict resolution results (if executed) and selected strategies, then auto-continue to Phase 3.5
**After Phase 3**: Update planning-notes.md with conflict decisions (if executed), then auto-continue
```javascript
// If Phase 3 was executed, update planning-notes.md
if (['medium', 'high'].includes(conflictRisk)) {
const conflictResPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`
if (fs.existsSync(conflictResPath)) {
const conflictRes = JSON.parse(Read(conflictResPath))
const resolved = conflictRes.resolved_conflicts || []
const modifiedArtifacts = conflictRes.modified_artifacts || []
const planningConstraints = conflictRes.planning_constraints || []
// Update Phase 3 section
Edit(planningNotesPath, {
old: '## Conflict Decisions (Phase 3)\n(To be filled if conflicts detected)',
new: `## Conflict Decisions (Phase 3)
- **RESOLVED**: ${resolved.map(r => `${r.type}: ${r.strategy}`).join('; ') || 'None'}
- **MODIFIED_ARTIFACTS**: ${modifiedArtifacts.join(', ') || 'None'}
- **CONSTRAINTS**: ${planningConstraints.join('; ') || 'None'}`
})
// Append Phase 3 constraints to consolidated list
if (planningConstraints.length > 0) {
const currentNotes = Read(planningNotesPath)
const constraintCount = (currentNotes.match(/^\d+\./gm) || []).length
Edit(planningNotesPath, {
old: '## Consolidated Constraints (Phase 4 Input)',
new: `## Consolidated Constraints (Phase 4 Input)
${planningConstraints.map((c, i) => `${constraintCount + i + 1}. [Conflict] ${c}`).join('\n')}`
})
}
}
}
```
Return to user showing conflict resolution results (if executed) and selected strategies, then auto-continue to Phase 3.5
**Memory State Check**:
- Evaluate current context window usage and memory state
@@ -282,7 +383,12 @@ SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]"
**CLI Execution Note**: CLI tool usage is now determined semantically by action-planning-agent based on user's task description. If user specifies "use Codex/Gemini/Qwen for X", the agent embeds `command` fields in relevant `implementation_approach` steps.
**Input**: `sessionId` from Phase 1
**Input**:
- `sessionId` from Phase 1
- **planning-notes.md**: Consolidated constraints from all phases (Phase 1-3)
- Path: `.workflow/active/[sessionId]/planning-notes.md`
- Contains: User intent, context findings, conflict decisions, consolidated constraints
- **Purpose**: Provides structured, minimal context summary to action-planning-agent
**Validation**:
- `.workflow/active/[sessionId]/IMPL_PLAN.md` exists
@@ -315,20 +421,55 @@ SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]"
**Note**: Agent task completed. No collapse needed (single task).
**Return to User**:
```
Planning complete for session: [sessionId]
Tasks generated: [count]
Plan: .workflow/active/[sessionId]/IMPL_PLAN.md
Recommended Next Steps:
1. /workflow:plan-verify --session [sessionId] # Verify plan quality before execution
2. /workflow:status # Review task breakdown
3. /workflow:execute # Start implementation (after verification)
Quality Gate: Consider running /workflow:plan-verify to catch issues early
```
**Step 4.2: User Decision** - Choose next action
After Phase 4 completes, present user with action choices:
```javascript
console.log(`
✅ Planning complete for session: ${sessionId}
📊 Tasks generated: ${taskCount}
📋 Plan: .workflow/active/${sessionId}/IMPL_PLAN.md
`);
// Ask user for next action
const userChoice = AskUserQuestion({
questions: [{
question: "Planning complete. What would you like to do next?",
header: "Next Action",
multiSelect: false,
options: [
{
label: "Verify Plan Quality (Recommended)",
description: "Run quality verification to catch issues before execution. Checks plan structure, task dependencies, and completeness."
},
{
label: "Start Execution",
description: "Begin implementing tasks immediately. Use this if you've already reviewed the plan or want to start quickly."
},
{
label: "Review Status Only",
description: "View task breakdown and session status without taking further action. You can decide what to do next manually."
}
]
}]
});
// Execute based on user choice
if (userChoice.answers["Next Action"] === "Verify Plan Quality (Recommended)") {
console.log("\n🔍 Starting plan verification...\n");
SlashCommand(command="/workflow:plan-verify --session " + sessionId);
} else if (userChoice.answers["Next Action"] === "Start Execution") {
console.log("\n🚀 Starting task execution...\n");
SlashCommand(command="/workflow:execute --session " + sessionId);
} else if (userChoice.answers["Next Action"] === "Review Status Only") {
console.log("\n📊 Displaying session status...\n");
SlashCommand(command="/workflow:status --session " + sessionId);
}
```
**Return to User**: Based on user's choice, execute the corresponding workflow command.
## TodoWrite Pattern
**Core Concept**: Dynamic task attachment and collapse for real-time visibility into workflow execution.
@@ -404,26 +545,21 @@ User Input (task description)
Phase 1: session:start --auto "structured-description"
↓ Output: sessionId
Session Memory: Previous tasks, context, artifacts
Write: planning-notes.md (User Intent section)
Phase 2: context-gather --session sessionId "structured-description"
↓ Input: sessionId + session memory + structured description
↓ Output: contextPath (context-package.json) + conflict_risk
↓ Input: sessionId + structured description
↓ Output: contextPath (context-package.json with prioritized_context) + conflict_risk
↓ Update: planning-notes.md (Context Findings + Consolidated Constraints)
Phase 3: conflict-resolution [AUTO-TRIGGERED if conflict_risk ≥ medium]
↓ Input: sessionId + contextPath + conflict_risk
CLI-powered conflict detection (JSON output)
AskUserQuestion: Present conflicts + resolution strategies
↓ User selects strategies (or skip)
↓ Apply modifications via Edit tool:
↓ - Update guidance-specification.md
↓ - Update role analyses (*.md)
↓ - Mark context-package.json as "resolved"
↓ Output: Modified brainstorm artifacts (NO report file)
Output: Modified brainstorm artifacts
Update: planning-notes.md (Conflict Decisions + Consolidated Constraints)
↓ Skip if conflict_risk is none/low → proceed directly to Phase 4
Phase 4: task-generate-agent --session sessionId
↓ Input: sessionId + resolved brainstorm artifacts + session memory
↓ Input: sessionId + planning-notes.md + context-package.json + brainstorm artifacts
↓ Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md
Return summary to user


@@ -113,7 +113,40 @@ const taskId = taskIdMatch?.[1]
- List existing tasks
- Read `IMPL_PLAN.md` and `TODO_LIST.md`
**Output**: Session validated, context loaded, mode determined
4. **Parse Execution Intent** (from requirements text):
```javascript
// Dynamic tool detection from cli-tools.json
// Read enabled tools: ["gemini", "qwen", "codex", ...]
const enabledTools = loadEnabledToolsFromConfig(); // See ~/.claude/cli-tools.json

// Build dynamic patterns from the enabled tools
function buildExecPatterns(tools) {
  const patterns = {
    agent: /改为\s*Agent\s*执行|使用\s*Agent\s*执行/i
  };
  tools.forEach(tool => {
    // Pattern: "使用 {tool} 执行" or "改用 {tool}"
    patterns[`cli_${tool}`] = new RegExp(
      `使用\\s*(${tool})\\s*执行|改用\\s*(${tool})`, 'i'
    );
  });
  return patterns;
}

const execPatterns = buildExecPatterns(enabledTools);
let executionIntent = null;
for (const [key, pattern] of Object.entries(execPatterns)) {
  if (pattern.test(requirements)) {
    executionIntent = key.startsWith('cli_')
      ? { method: 'cli', cli_tool: key.replace('cli_', '') }
      : { method: 'agent', cli_tool: null };
    break;
  }
}
```
**Output**: Session validated, context loaded, mode determined, **executionIntent parsed**
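As a quick sanity check of the pattern builder above (assuming `codex` is among the enabled tools):
```javascript
// Hypothetical usage of buildExecPatterns from the step above
const patterns = buildExecPatterns(["codex"]);
patterns.cli_codex.test("改用 Codex");   // true → { method: 'cli', cli_tool: 'codex' }
patterns.agent.test("改为 Agent 执行");  // true → { method: 'agent', cli_tool: null }
```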
---
@@ -356,7 +389,18 @@ const updated_task = {
  flow_control: {
    ...task.flow_control,
    implementation_approach: [...updated_steps]
  },
  // Update execution config if intent detected
  ...(executionIntent && {
    meta: {
      ...task.meta,
      execution_config: {
        method: executionIntent.method,
        cli_tool: executionIntent.cli_tool,
        enable_resume: executionIntent.method !== 'agent'
      }
    }
  })
};
Write({
@@ -365,6 +409,8 @@ Write({
});
```
**Note**: Implementation approach steps are NO LONGER modified. CLI execution is controlled by task-level `meta.execution_config` only.
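For illustration, a task JSON updated this way might carry the following (field names are taken from the update logic above; the surrounding task fields are assumed):
```json
{
  "id": "IMPL-001",
  "meta": {
    "execution_config": {
      "method": "cli",
      "cli_tool": "codex",
      "enable_resume": true
    }
  }
}
```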
**Step 5.4: Create New Tasks** (if needed)
Generate complete task JSON with all required fields:
@@ -570,3 +616,33 @@ A: 是,需要同步更新依赖任务
Task replan complete! Updated 2 tasks
```
### Task Replan - Change Execution Method
```bash
/workflow:replan IMPL-001 "改用 Codex 执行"   # ("switch to Codex execution")
# Semantic parsing detects executionIntent:
# { method: 'cli', cli_tool: 'codex' }
# Execution (no interactive questions needed)
✓ Backup created
✓ Updated IMPL-001.json
- meta.execution_config = { method: 'cli', cli_tool: 'codex', enable_resume: true }
Task execution method updated: Agent → CLI (codex)
```
```bash
/workflow:replan IMPL-002 "改为 Agent 执行"   # ("switch to Agent execution")
# Semantic parsing detects executionIntent:
# { method: 'agent', cli_tool: null }
# Execution
✓ Backup created
✓ Updated IMPL-002.json
- meta.execution_config = { method: 'agent', cli_tool: null }
Task execution method updated: CLI → Agent
```

View File

@@ -1,26 +1,26 @@
---
name: review-cycle-fix
description: Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.
argument-hint: "<export-file|review-dir> [--resume] [--max-iterations=N]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*), Edit(*), Write(*)
---
# Workflow Review-Cycle-Fix Command
## Quick Start
```bash
# Fix from exported findings file (session-based path)
/workflow:review-cycle-fix .workflow/active/WFS-123/.review/fix-export-1706184622000.json
# Fix from review directory (auto-discovers latest export)
/workflow:review-cycle-fix .workflow/active/WFS-123/.review/
# Resume interrupted fix session
/workflow:review-cycle-fix --resume
# Custom max retry attempts per finding
/workflow:review-cycle-fix .workflow/active/WFS-123/.review/ --max-iterations=5
```
**Fix Source**: Exported findings from review cycle dashboard

View File

@@ -764,8 +764,8 @@ After completing a module review, use the generated findings JSON for automated
/workflow:review-module-cycle src/auth/**
# Step 2: Run automated fixes using dimension findings
/workflow:review-cycle-fix .workflow/active/WFS-{session-id}/.review/
```
See `/workflow:review-cycle-fix` for automated fixing with smart grouping, parallel execution, and test verification.

View File

@@ -775,8 +775,8 @@ After completing a review, use the generated findings JSON for automated fixing:
/workflow:review-session-cycle
# Step 2: Run automated fixes using dimension findings
/workflow:review-cycle-fix .workflow/active/WFS-{session-id}/.review/
```
See `/workflow:review-cycle-fix` for automated fixing with smart grouping, parallel execution, and test verification.

View File

@@ -25,10 +25,10 @@ Analyzes conflicts between implementation plans and existing codebase, **includi
| Responsibility | Description |
|---------------|-------------|
| **Detect Conflicts** | Analyze plan vs existing code inconsistencies |
| **Scenario Uniqueness** | Search and compare new modules with existing modules for functional overlaps |
| **Generate Strategies** | Provide 2-4 resolution options per conflict |
| **Iterative Clarification** | Ask unlimited questions until scenario boundaries are clear and unique |
| **Agent Re-analysis** | Dynamically update strategies based on user clarifications |
| **CLI Analysis** | Use Gemini/Qwen (Claude fallback) |
| **User Decision** | Present options ONE BY ONE, never auto-apply |
| **Direct Text Output** | Output questions via text directly, NEVER use bash echo/printf |
@@ -57,7 +57,7 @@ Analyzes conflicts between implementation plans and existing codebase, **includi
- Breaking updates
### 5. Module Scenario Overlap
- Functional overlap between new and existing modules
- Scenario boundary ambiguity
- Duplicate responsibility detection
- Module merge/split decisions
@@ -134,7 +134,7 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
### 1. Load Context
- Read existing files from conflict_detection.existing_files
- Load plan from .workflow/active/{session_id}/.process/context-package.json
- Load exploration_results and use aggregated_insights for enhanced analysis
- Extract role analyses and requirements
### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)
@@ -186,28 +186,18 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
- modifications.old_content: 20-100 chars for unique Edit tool matching
- modifications.new_content: preserves markdown formatting
- modification_suggestions: 2-5 actionable suggestions for custom handling
### 5. Planning Notes Record (REQUIRED)
After analysis complete, append a brief execution record to planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Under "## Conflict Decisions (Phase 3)" section
**Format**:
\`\`\`
### [Conflict-Resolution Agent] YYYY-MM-DD
- **Note**: [Smart summary: briefly describe conflict types, resolution strategies, and key decisions]
\`\`\`
`)
```
### Phase 3: User Interaction Loop

View File

@@ -35,7 +35,7 @@ Step 1: Context-Package Detection
├─ Valid package exists → Return existing (skip execution)
└─ No valid package → Continue to Step 2
Step 2: Complexity Assessment & Parallel Explore
├─ Analyze task_description → classify Low/Medium/High
├─ Select exploration angles (1-4 based on complexity)
├─ Launch N cli-explore-agents in parallel
@@ -213,19 +213,37 @@ Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationM
**Only execute after Step 2 completes**
```javascript
// Load user intent from planning-notes.md (from Phase 1)
const planningNotesPath = `.workflow/active/${session_id}/planning-notes.md`;
let userIntent = { goal: task_description, key_constraints: "None specified" };
if (file_exists(planningNotesPath)) {
const notesContent = Read(planningNotesPath);
const goalMatch = notesContent.match(/\*\*GOAL\*\*:\s*(.+)/);
const constraintsMatch = notesContent.match(/\*\*KEY_CONSTRAINTS\*\*:\s*(.+)/);
if (goalMatch) userIntent.goal = goalMatch[1].trim();
if (constraintsMatch) userIntent.key_constraints = constraintsMatch[1].trim();
}
Task(
subagent_type="context-search-agent",
run_in_background=false,
description="Gather comprehensive context for plan",
prompt=`
## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution with priority sorting
## Session Information
- **Session ID**: ${session_id}
- **Task Description**: ${task_description}
- **Output Path**: .workflow/active/${session_id}/.process/context-package.json
## User Intent (from Phase 1 - Planning Notes)
**GOAL**: ${userIntent.goal}
**KEY_CONSTRAINTS**: ${userIntent.key_constraints}
This is the PRIMARY context source - all subsequent analysis must align with user intent.
## Exploration Input (from Step 2)
- **Manifest**: ${sessionFolder}/explorations-manifest.json
- **Exploration Count**: ${explorationManifest.exploration_count}
@@ -245,7 +263,13 @@ Execute complete context-search-agent workflow for implementation planning:
4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state
### Phase 2: Multi-Source Context Discovery
Execute all discovery tracks (WITH USER INTENT INTEGRATION):
- **Track -1**: User Intent & Priority Foundation (EXECUTE FIRST)
- Load user intent (GOAL, KEY_CONSTRAINTS) from session input
- Map user requirements to codebase entities (files, modules, patterns)
- Establish baseline priority scores based on user goal alignment
- Output: user_intent_mapping.json with preliminary priority scores
- **Track 0**: Exploration Synthesis (load ${sessionFolder}/explorations-manifest.json, prioritize critical_files, deduplicate patterns/integration_points)
- **Track 1**: Historical archive analysis (query manifest.json for lessons learned)
- **Track 2**: Reference documentation (CLAUDE.md, architecture docs)
@@ -254,13 +278,45 @@ Execute all discovery tracks:
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 5-source data** (including Track -1): Merge findings from all sources
- Priority order: User Intent > Archive > Docs > Exploration > Code > Web
- **Prioritize the context from `project-tech.json`** for architecture and tech stack unless code analysis reveals it's outdated
3. **Context Priority Sorting**:
a. Combine scores from Track -1 (user intent alignment) + relevance scores + exploration critical_files
b. Classify files into priority tiers:
- **Critical** (score ≥ 0.85): Directly mentioned in user goal OR exploration critical_files
- **High** (0.70-0.84): Key dependencies, patterns required for goal
- **Medium** (0.50-0.69): Supporting files, indirect dependencies
- **Low** (< 0.50): Contextual awareness only
c. Generate dependency_order: Based on dependency graph + user goal sequence
d. Document sorting_rationale: Explain prioritization logic
4. **Populate `project_context`**: Directly use the `overview` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
5. **Populate `project_guidelines`**: Load conventions, constraints, and learnings from `project-guidelines.json` into a dedicated section.
6. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
7. Perform conflict detection with risk assessment
8. **Inject historical conflicts** from archive analysis into conflict_detection
9. **Generate prioritized_context section**:
```json
{
"prioritized_context": {
"user_intent": {
"goal": "...",
"scope": "...",
"key_constraints": ["..."]
},
"priority_tiers": {
"critical": [{ "path": "...", "relevance": 0.95, "rationale": "..." }],
"high": [...],
"medium": [...],
"low": [...]
},
"dependency_order": ["module1", "module2", "module3"],
"sorting_rationale": "Based on user goal alignment (Track -1), exploration critical files, and dependency graph analysis"
}
}
```
10. Generate and validate context-package.json with prioritized_context field
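A minimal sketch of the tier classification described in step 3 (thresholds come from the tier definitions above; the pre-combined `relevance` score is assumed to be computed in step 3a):
```javascript
// Hypothetical tier classifier over files scored in [0, 1]
function classifyTiers(files) {
  const tiers = { critical: [], high: [], medium: [], low: [] };
  for (const f of files) {
    const tier =
      f.relevance >= 0.85 ? "critical" :
      f.relevance >= 0.70 ? "high" :
      f.relevance >= 0.50 ? "medium" : "low";
    tiers[tier].push({ path: f.path, relevance: f.relevance, rationale: f.rationale });
  }
  return tiers;
}
```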
## Output Requirements
Complete context-package.json with:
@@ -272,6 +328,7 @@ Complete context-package.json with:
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy, historical_conflicts[]}
- **exploration_results**: {manifest_path, exploration_count, angles, explorations[], aggregated_insights} (from Track 0)
- **prioritized_context**: {user_intent, priority_tiers{critical, high, medium, low}, dependency_order[], sorting_rationale}
## Quality Validation
Before completion verify:
@@ -282,6 +339,17 @@ Before completion verify:
- [ ] No sensitive data exposed
- [ ] Total files ≤50 (prioritize high-relevance)
## Planning Notes Record (REQUIRED)
After completing context-package.json, append a brief execution record to planning-notes.md:
**File**: .workflow/active/${session_id}/planning-notes.md
**Location**: Under "## Context Findings (Phase 2)" section
**Format**:
\`\`\`
### [Context-Search Agent] YYYY-MM-DD
- **Note**: [Smart summary: briefly describe key findings such as exploration angles, critical files, and conflict risks]
\`\`\`
Execute autonomously following agent documentation.
Report completion with statistics.
`
@@ -326,116 +394,11 @@ Refer to `context-search-agent.md` Phase 3.7 for complete `context-package.json`
- **brainstorm_artifacts**: Brainstorm documents with full content (if exists)
- **conflict_detection**: Risk assessment with mitigation strategies and historical conflicts
- **exploration_results**: Aggregated exploration insights (from parallel explore phase)
## Historical Archive Analysis
### Track 1: Query Archive Manifest
The context-search-agent MUST perform historical archive analysis as Track 1 in Phase 2:
**Step 1: Check for Archive Manifest**
```bash
# Check if archive manifest exists
if [[ -f .workflow/archives/manifest.json ]]; then
# Manifest available for querying
fi
```
**Step 2: Extract Task Keywords**
```javascript
// From current task description, extract key entities and operations
const keywords = extractKeywords(task_description);
// Examples: ["User", "model", "authentication", "JWT", "reporting"]
```
**Step 3: Search Archive for Relevant Sessions**
```javascript
// Query manifest for sessions with matching tags or descriptions
const relevantArchives = archives.filter(archive => {
return archive.tags.some(tag => keywords.includes(tag)) ||
keywords.some(kw => archive.description.toLowerCase().includes(kw.toLowerCase()));
});
```
**Step 4: Extract Watch Patterns**
```javascript
// For each relevant archive, check watch_patterns for applicability
const historicalConflicts = [];
relevantArchives.forEach(archive => {
archive.lessons.watch_patterns?.forEach(pattern => {
// Check if pattern trigger matches current task
if (isPatternRelevant(pattern.pattern, task_description)) {
historicalConflicts.push({
source_session: archive.session_id,
pattern: pattern.pattern,
action: pattern.action,
files_to_check: pattern.related_files,
archived_at: archive.archived_at
});
}
});
});
```
**Step 5: Inject into Context Package**
```json
{
"conflict_detection": {
"risk_level": "medium",
"risk_factors": ["..."],
"affected_modules": ["..."],
"mitigation_strategy": "...",
"historical_conflicts": [
{
"source_session": "WFS-auth-feature",
"pattern": "When modifying User model",
"action": "Check reporting-service and auditing-service dependencies",
"files_to_check": ["src/models/User.ts", "src/services/reporting.ts"],
"archived_at": "2025-09-16T09:00:00Z"
}
]
}
}
```
### Risk Level Escalation
If `historical_conflicts` array is not empty, minimum risk level should be "medium":
```javascript
if (historicalConflicts.length > 0 && currentRisk === "low") {
conflict_detection.risk_level = "medium";
conflict_detection.risk_factors.push(
`${historicalConflicts.length} historical conflict pattern(s) detected from past sessions`
);
}
```
### Archive Query Algorithm
```markdown
1. IF .workflow/archives/manifest.json does NOT exist → Skip Track 1, continue to Track 2
2. IF manifest exists:
a. Load manifest.json
b. Extract keywords from task_description (nouns, verbs, technical terms)
c. Filter archives where:
- ANY tag matches keywords (case-insensitive) OR
- description contains keywords (case-insensitive substring match)
d. For each relevant archive:
- Read lessons.watch_patterns array
- Check if pattern.pattern keywords overlap with task_description
- If relevant: Add to historical_conflicts array
e. IF historical_conflicts.length > 0:
- Set risk_level = max(current_risk, "medium")
- Add to risk_factors
3. Continue to Track 2 (reference documentation)
```
- **prioritized_context**: Pre-sorted context with user intent and priority tiers (critical/high/medium/low)
## Notes
- **Detection-first**: Always check for existing package before invoking agent
- **Dual project file integration**: Agent reads both `.workflow/project-tech.json` (tech analysis) and `.workflow/project-guidelines.json` (user constraints) as primary sources
- **Guidelines injection**: Project guidelines are included in context-package to ensure task generation respects user-defined constraints
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **User intent integration**: Load user intent from planning-notes.md (Phase 1 output)
- **Output**: Generates `context-package.json` with `prioritized_context` field
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call

View File

@@ -19,6 +19,10 @@ Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.
## Core Philosophy
- **Planning Only**: Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - does NOT implement code
- **Agent-Driven Document Generation**: Delegate plan generation to action-planning-agent
- **NO Redundant Context Sorting**: Context priority sorting is ALREADY completed in context-gather Phase 2/3
- Use `context-package.json.prioritized_context` directly
- DO NOT re-sort files or re-compute priorities
- `priority_tiers` and `dependency_order` are pre-computed and ready-to-use
- **N+1 Parallel Planning**: Auto-detect multi-module projects, enable parallel planning (2+1 or 3+1 mode)
- **Progressive Loading**: Load context incrementally (Core → Selective → On-Demand) due to analysis.md file size
- **Memory-First**: Reuse loaded documents from conversation memory
@@ -161,12 +165,13 @@ const userConfig = {
### Phase 1: Context Preparation & Module Detection (Command Responsibility)
**Command prepares session paths, metadata, and detects module structure.**
**Command prepares session paths, metadata, detects module structure. Context priority sorting is NOT performed here - it's already completed in context-gather Phase 2/3.**
**Session Path Structure**:
```
.workflow/active/WFS-{session-id}/
├── workflow-session.json # Session metadata
├── planning-notes.md # Consolidated planning notes
├── .process/
│ └── context-package.json # Context package with artifact catalog
├── .task/ # Output: Task JSON files
@@ -248,9 +253,21 @@ IMPORTANT: This is PLANNING ONLY - you are generating planning documents, NOT im
CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)
## PLANNING NOTES (PHASE 1-3 CONTEXT)
Load: .workflow/active/{session-id}/planning-notes.md
This document contains:
- User Intent: Original GOAL and KEY_CONSTRAINTS from Phase 1
- Context Findings: Critical files, architecture, and constraints from Phase 2
- Conflict Decisions: Resolved conflicts and planning constraints from Phase 3
- Consolidated Constraints: All constraints from all phases
**USAGE**: Read planning-notes.md FIRST. Use Consolidated Constraints list to guide task sequencing and dependencies.
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Planning Notes: .workflow/active/{session-id}/planning-notes.md
- Context Package: .workflow/active/{session-id}/.process/context-package.json
Output:
@@ -278,7 +295,17 @@ CLI Resume Support (MANDATORY for all CLI commands):
- Read previous task's cliExecutionId from session state
- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
## PRIORITIZED CONTEXT (from context-package.prioritized_context) - ALREADY SORTED
Context sorting is ALREADY COMPLETED in context-gather Phase 2/3. DO NOT re-sort.
Direct usage:
- **user_intent**: Use goal/scope/key_constraints for task alignment
- **priority_tiers.critical**: These files are PRIMARY focus for task generation
- **priority_tiers.high**: These files are SECONDARY focus
- **dependency_order**: Use this for task sequencing - already computed
- **sorting_rationale**: Reference for understanding priority decisions
## EXPLORATION CONTEXT (from context-package.exploration_results) - SUPPLEMENT ONLY
If prioritized_context is incomplete, fall back to exploration_results:
- Load exploration_results from context-package.json
- Use aggregated_insights.critical_files for focus_paths generation
- Apply aggregated_insights.constraints to acceptance criteria
@@ -298,8 +325,10 @@ CLI Resume Support (MANDATORY for all CLI commands):
- 6-field schema (id, title, status, context_package_path, meta, context, flow_control)
- Quantified requirements with explicit counts
- Artifacts integration from context package
- **focus_paths generated directly from prioritized_context.priority_tiers (critical + high)**
- NO re-sorting or re-prioritization - use pre-computed tiers as-is
- Critical files are PRIMARY focus, High files are SECONDARY
- Flow control with pre_analysis steps (use prioritized_context.dependency_order for task sequencing)
- **CLI Execution IDs and strategies (MANDATORY)**
2. Implementation Plan (IMPL_PLAN.md)
@@ -347,6 +376,19 @@ Hard Constraints:
- IMPL_PLAN.md created with complete structure
- TODO_LIST.md generated matching task JSONs
- Return completion status with document count and task breakdown summary
## PLANNING NOTES RECORD (REQUIRED)
After completing all documents, append a brief execution record to planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Create new section after "## Consolidated Constraints"
**Format**:
\`\`\`
## Task Generation (Phase 4)
### [Action-Planning Agent] YYYY-MM-DD
- **Note**: [Smart summary: briefly describe task count, key tasks, and dependencies]
\`\`\`
`
)
```
@@ -376,16 +418,22 @@ IMPORTANT: Generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md by Phase 3 Co
CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)
## PLANNING NOTES (PHASE 1-3 CONTEXT)
Load: .workflow/active/{session-id}/planning-notes.md
This document contains consolidated constraints and user intent to guide module-scoped task generation.
## MODULE SCOPE
- Module: ${module.name} (${module.type})
- Focus Paths: ${module.paths.join(', ')}
- Task ID Prefix: IMPL-${module.prefix}
- Task Limit: ≤6 tasks (hard limit for this module)
- Other Modules: ${otherModules.join(', ')} (reference only, do NOT generate tasks for them)
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Planning Notes: .workflow/active/{session-id}/planning-notes.md
- Context Package: .workflow/active/{session-id}/.process/context-package.json
Output:
@@ -411,7 +459,16 @@ CLI Resume Support (MANDATORY for all CLI commands):
- Read previous task's cliExecutionId from session state
- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
## PRIORITIZED CONTEXT (from context-package.prioritized_context) - ALREADY SORTED
Context sorting is ALREADY COMPLETED in context-gather Phase 2/3. DO NOT re-sort.
Filter by module scope (${module.paths.join(', ')}):
- **user_intent**: Use for task alignment within module
- **priority_tiers.critical**: Filter for files in ${module.paths.join(', ')} → PRIMARY focus
- **priority_tiers.high**: Filter for files in ${module.paths.join(', ')} → SECONDARY focus
- **dependency_order**: Use module-relevant entries for task sequencing
## EXPLORATION CONTEXT (from context-package.exploration_results) - SUPPLEMENT ONLY
If prioritized_context is incomplete for this module, fall back to exploration_results:
- Load exploration_results from context-package.json
- Filter for ${module.name} module: Use aggregated_insights.critical_files matching ${module.paths.join(', ')}
- Apply module-relevant constraints from aggregated_insights.constraints
@@ -438,8 +495,10 @@ Task JSON Files (.task/IMPL-${module.prefix}*.json):
- Task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
- Quantified requirements with explicit counts
- Artifacts integration from context package (filtered for ${module.name})
- **focus_paths generated directly from prioritized_context.priority_tiers filtered by ${module.paths.join(', ')}**
- NO re-sorting - use pre-computed tiers filtered for this module
- Critical files are PRIMARY focus, High files are SECONDARY
- Flow control with pre_analysis steps (use prioritized_context.dependency_order for module task sequencing)
- **CLI Execution IDs and strategies (MANDATORY)**
- Focus ONLY on ${module.name} module scope
@@ -482,6 +541,21 @@ Hard Constraints:
- Cross-module dependencies use CROSS:: placeholder format consistently
- Focus paths scoped to ${module.paths.join(', ')} only
- Return: task count, task IDs, dependency summary (internal + cross-module)
## PLANNING NOTES RECORD (REQUIRED)
After completing module task JSONs, append a brief execution record to planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Create new section after "## Consolidated Constraints" (if not exists)
**Format**:
\`\`\`
## Task Generation (Phase 4)
### [Action-Planning Agent - ${module.name}] YYYY-MM-DD
- **Note**: [Smart summary: briefly describe this module's task count and key tasks]
\`\`\`
**Note**: Multiple module agents will append their records. Phase 3 Integration Coordinator will add final summary.
`
)
);
@@ -562,6 +636,17 @@ Module Count: ${modules.length}
- No CROSS:: placeholders remaining in task JSONs
- IMPL_PLAN.md and TODO_LIST.md generated with multi-module structure
- Return: task count, per-module breakdown, resolved dependency count
## PLANNING NOTES RECORD (REQUIRED)
After completing integration, append final summary to planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Under "## Task Generation (Phase 4)" section (after module agent records)
**Format**:
\`\`\`
### [Integration Coordinator] YYYY-MM-DD
- **Note**: [Smart summary: briefly describe total task count and cross-module dependency resolution]
\`\`\`
`
)
```
@@ -579,5 +664,4 @@ function resolveCrossModuleDependency(placeholder, allTasks) {
? candidates.sort((a, b) => a.id.localeCompare(b.id))[0].id
: placeholder; // Keep for manual resolution
}
```

View File

@@ -0,0 +1,807 @@
---
name: unified-execute-with-file
description: Universal execution engine for consuming any planning/brainstorm/analysis output with minimal progress tracking, multi-agent coordination, and incremental execution
argument-hint: "[-y|--yes] [-p|--plan <path>] [-m|--mode sequential|parallel] [\"execution context or task name\"]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm execution decisions, use default parallel strategy where possible.
# Workflow Unified-Execute-With-File Command (/workflow:unified-execute-with-file)
## Overview
Universal execution engine that consumes **any** planning/brainstorm/analysis output and executes it with minimal progress tracking. Coordinates multiple agents (subagents or CLI tools), handles dependencies, and maintains execution timeline in a single minimal document.
**Core workflow**: Load Plan → Parse Tasks → Coordinate Agents → Execute → Track Progress → Verify
**Key features**:
- **Plan Format Agnostic**: Consumes IMPL_PLAN.md, brainstorm.md, analysis conclusions, debug resolutions
- **execution.md**: Single source of truth for progress, execution timeline, and results
- **Multi-Agent Orchestration**: Parallel execution where possible, sequential where needed
- **Incremental Execution**: Resume from failure point, no re-execution of completed tasks
- **Dependency Management**: Automatic topological sort and wait strategy
- **Real-Time Progress**: TodoWrite integration for live task status
## Usage
```bash
/workflow:unified-execute-with-file [FLAGS] [EXECUTION_CONTEXT]
# Flags
-y, --yes Auto-confirm execution decisions, use defaults
-p, --plan <path> Explicitly specify plan file (auto-detected if omitted)
-m, --mode <mode> Execution strategy: sequential (strict order) | parallel (smart dependencies)
# Arguments
[execution-context] Optional: Task category, module name, or execution focus (for filtering/priority)
# Examples
/workflow:unified-execute-with-file # Auto-detect and execute latest plan
/workflow:unified-execute-with-file -p .workflow/plans/auth-plan.md # Execute specific plan
/workflow:unified-execute-with-file -y "auth module" # Auto-execute with context focus
/workflow:unified-execute-with-file -m sequential "payment feature" # Sequential execution
```
## Execution Process
```
Plan Detection:
├─ Check for IMPL_PLAN.md or task JSON files in .workflow/
├─ Or use explicit --plan path
├─ Or auto-detect from git branch/issue context
└─ Load plan metadata and task definitions
Session Initialization:
├─ Create .workflow/.execution/{sessionId}/
├─ Initialize execution.md with plan summary
├─ Parse all tasks, identify dependencies
├─ Determine execution strategy (parallel/sequential)
└─ Initialize progress tracking
Pre-Execution Validation:
├─ Check task feasibility (required files exist, tools available)
├─ Validate dependency graph (detect cycles)
├─ Ask user to confirm execution (unless --yes)
└─ Display execution plan and timeline estimate
Task Execution Loop (Parallel/Sequential):
├─ Select next executable tasks (dependencies satisfied)
├─ Launch agents in parallel (if strategy=parallel)
├─ Monitor execution, wait for completion
├─ Capture outputs, log results
├─ Update execution.md with progress
├─ Mark tasks complete/failed
└─ Repeat until all done or max failures reached
Error Handling:
├─ Task failure → Ask user: retry|skip|abort
├─ Dependency failure → Auto-skip dependent tasks
├─ Output conflict → Ask for resolution
└─ Timeout → Mark as timeout, continue or escalate
Completion:
├─ Mark session complete
├─ Summarize execution results in execution.md
├─ Generate completion report (statistics, failures, recommendations)
└─ Offer follow-up: review|debug|enhance
Output:
├─ .workflow/.execution/{sessionId}/execution.md (plan and overall status)
├─ .workflow/.execution/{sessionId}/execution-events.md (SINGLE SOURCE OF TRUTH - all task executions)
└─ Generated files in project directories (src/*, tests/*, docs/*, etc.)
```
## Implementation
### Session Setup & Plan Detection
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString() // UTC+8 wall-clock time (the ISO 'Z' suffix is kept as-is)
// Plan detection strategy
let planPath = $ARGUMENTS.match(/(?:--plan|-p)\s+(\S+)/)?.[1] // accept both --plan and -p
if (!planPath) {
// Auto-detect: check recent workflow artifacts
const candidates = [
'.workflow/.plan/IMPL_PLAN.md',
'.workflow/plans/IMPL_PLAN.md',
'.workflow/IMPL_PLAN.md',
]
// Find most recent plan
planPath = findMostRecentPlan(candidates)
if (!planPath) {
// Check for task JSONs
const taskJsons = glob('.workflow/**/*.json').filter(f => f.includes('IMPL-') || f.includes('task'))
if (taskJsons.length > 0) {
planPath = taskJsons[0] // Primary task
}
}
}
if (!planPath) {
AskUserQuestion({
questions: [{
question: "未找到执行规划。请选择方式:",
header: "Plan Source",
multiSelect: false,
options: [
{ label: "浏览文件", description: "从 .workflow 目录选择" },
{ label: "使用最近规划", description: "从git提交消息推断" },
{ label: "手动输入路径", description: "直接指定规划文件路径" }
]
}]
})
}
// Parse plan and extract tasks
const planContent = Read(planPath)
const plan = parsePlan(planContent, planPath) // Format-agnostic parser
const executionId = `EXEC-${plan.slug}-${getUtc8ISOString().substring(0, 10)}-${randomId(4)}`
const executionFolder = `.workflow/.execution/${executionId}`
const executionPath = `${executionFolder}/execution.md`
const eventLogPath = `${executionFolder}/execution-events.md`
bash(`mkdir -p ${executionFolder}`)
```
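`findMostRecentPlan` is referenced but not defined in this document; a minimal sketch, assuming candidates are filtered for existence and ranked by modification time:
```javascript
// Hypothetical helper: most recently modified existing candidate, or null
const fs = require('fs')

function findMostRecentPlan(candidates) {
  const existing = candidates
    .filter(p => fs.existsSync(p))
    .map(p => ({ path: p, mtime: fs.statSync(p).mtimeMs }))
  if (existing.length === 0) return null
  existing.sort((a, b) => b.mtime - a.mtime)
  return existing[0].path
}
```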
---
## Plan Format Parsers
Support multiple plan sources:
```javascript
function parsePlan(content, filePath) {
const ext = filePath.split('.').pop()
if (filePath.includes('IMPL_PLAN')) {
return parseImplPlan(content) // From /workflow:plan
} else if (filePath.includes('brainstorm')) {
return parseBrainstormPlan(content) // From /workflow:brainstorm-with-file
} else if (filePath.includes('synthesis')) {
return parseSynthesisPlan(content) // From /workflow:brainstorm-with-file synthesis.json
} else if (filePath.includes('conclusions')) {
return parseConclusionsPlan(content) // From /workflow:analyze-with-file conclusions.json
} else if (filePath.endsWith('.json') && content.includes('tasks')) {
return parseTaskJson(content) // Direct task JSON
}
throw new Error(`Unsupported plan format: ${filePath}`)
}
// IMPL_PLAN.md parser
function parseImplPlan(content) {
// Extract:
// - Overview/summary
// - Phase sections
// - Task list with dependencies
// - Critical files
// - Execution order
return {
type: 'impl-plan',
title: extractSection(content, 'Overview'),
phases: extractPhases(content),
tasks: extractTasks(content),
criticalFiles: extractCriticalFiles(content),
estimatedDuration: extractEstimate(content)
}
}
// Brainstorm synthesis.json parser
function parseSynthesisPlan(content) {
const synthesis = JSON.parse(content)
return {
type: 'brainstorm-synthesis',
title: synthesis.topic,
ideas: synthesis.top_ideas,
tasks: synthesis.top_ideas.map(idea => ({
id: `IDEA-${slugify(idea.title)}`,
type: 'investigation',
title: idea.title,
description: idea.description,
dependencies: [],
agent_type: 'cli-execution-agent',
prompt: `Implement: ${idea.title}\n${idea.description}`,
expected_output: idea.next_steps
})),
recommendations: synthesis.recommendations
}
}
```
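`parseTaskJson` is not shown above; a minimal sketch, assuming the file carries either a bare array or a `tasks` array in the shape normalized in Step 1.1 below:
```javascript
// Hypothetical direct task JSON parser
function parseTaskJson(content) {
  const data = JSON.parse(content)
  return {
    type: 'task-json',
    title: data.title || 'Task JSON plan',
    tasks: Array.isArray(data) ? data : (data.tasks || []),
    criticalFiles: data.critical_files || []
  }
}
```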
---
### Phase 1: Plan Loading & Validation
**Step 1.1: Parse Plan and Extract Tasks**
```javascript
const tasks = plan.tasks || parseTasksFromContent(plan)
// Normalize task structure
const normalizedTasks = tasks.map(task => ({
id: task.id || `TASK-${generateId()}`,
title: task.title || task.content,
description: task.description || task.activeForm,
type: task.type || inferTaskType(task), // 'code', 'test', 'doc', 'analysis', 'integration'
agent_type: task.agent_type || selectBestAgent(task),
dependencies: task.dependencies || [],
// Execution parameters
prompt: task.prompt || task.description,
files_to_modify: task.files_to_modify || [],
expected_output: task.expected_output || [],
// Metadata
priority: task.priority || 'normal',
parallel_safe: task.parallel_safe !== false,
estimated_duration: task.estimated_duration || null,
// Status tracking
status: 'pending',
attempts: 0,
max_retries: 2
}))
// Validate and detect issues
const validation = {
cycles: detectDependencyCycles(normalizedTasks),
missing_dependencies: findMissingDependencies(normalizedTasks),
file_conflicts: detectOutputConflicts(normalizedTasks),
warnings: []
}
if (validation.cycles.length > 0) {
throw new Error(`Circular dependencies detected: ${validation.cycles.join(', ')}`)
}
```
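`detectDependencyCycles` can be an ordinary depth-first search over the dependency edges; a sketch:
```javascript
// DFS cycle detection over task.dependencies; returns cycle descriptions
function detectDependencyCycles(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]))
  const state = new Map() // id -> 'visiting' | 'done'
  const cycles = []
  function visit(id, path) {
    if (state.get(id) === 'done') return
    if (state.get(id) === 'visiting') {
      cycles.push(path.slice(path.indexOf(id)).join(' -> '))
      return
    }
    state.set(id, 'visiting')
    for (const dep of (byId.get(id)?.dependencies || [])) visit(dep, [...path, dep])
    state.set(id, 'done')
  }
  tasks.forEach(t => visit(t.id, [t.id]))
  return cycles
}
```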
**Step 1.2: Create execution.md**
```markdown
# Execution Progress
**Execution ID**: ${executionId}
**Plan Source**: ${planPath}
**Started**: ${getUtc8ISOString()}
**Mode**: ${executionMode}
**Plan Summary**:
- Title: ${plan.title}
- Total Tasks: ${tasks.length}
- Phases: ${plan.phases?.length || 'N/A'}
---
## Execution Plan
### Task Overview
| Task ID | Title | Type | Agent | Dependencies | Status |
|---------|-------|------|-------|--------------|--------|
${normalizedTasks.map(t => `| ${t.id} | ${t.title} | ${t.type} | ${t.agent_type} | ${t.dependencies.join(',')} | ${t.status} |`).join('\n')}
### Dependency Graph
\`\`\`
${generateDependencyGraph(normalizedTasks)}
\`\`\`
### Execution Strategy
- **Mode**: ${executionMode}
- **Parallelization**: ${calculateParallel(normalizedTasks)}
- **Estimated Duration**: ${estimateTotalDuration(normalizedTasks)}
---
## Execution Timeline
*Updates as execution progresses*
---
## Current Status
${executionStatus()}
```
**Step 1.3: Pre-Execution Confirmation**
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (!autoYes) {
AskUserQuestion({
questions: [{
question: `Ready to execute ${normalizedTasks.length} tasks, mode: ${executionMode}\n\nKey tasks:\n${normalizedTasks.slice(0, 3).map(t => `${t.id}: ${t.title}`).join('\n')}\n\nContinue?`,
header: "Confirmation",
multiSelect: false,
options: [
{ label: "Start execution", description: "Execute as planned" },
{ label: "Adjust parameters", description: "Modify execution parameters" },
{ label: "View details", description: "Show the full task list" },
{ label: "Cancel", description: "Exit without executing" }
]
]
}]
})
}
```
---
## Phase 2: Execution Orchestration
**Step 2.1: Determine Execution Order**
```javascript
// Topological sort
const executionOrder = topologicalSort(normalizedTasks)
// For parallel mode, group tasks into waves
let executionWaves = []
if (executionMode === 'parallel') {
executionWaves = groupIntoWaves(executionOrder, 3) // parallelLimit: at most 3 tasks per wave
} else {
executionWaves = executionOrder.map(task => [task])
}
// Log execution plan to execution.md
// execution-events.md will track actual progress as tasks execute
```
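`topologicalSort` is assumed above; a minimal Kahn's-algorithm sketch over `task.dependencies`:
```javascript
// Kahn's algorithm: tasks in an order that satisfies dependencies
function topologicalSort(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]))
  const inDegree = new Map(tasks.map(t => [t.id, t.dependencies.length]))
  const queue = tasks.filter(t => t.dependencies.length === 0).map(t => t.id)
  const order = []
  while (queue.length > 0) {
    const id = queue.shift()
    order.push(byId.get(id))
    for (const t of tasks) {
      if (t.dependencies.includes(id)) {
        inDegree.set(t.id, inDegree.get(t.id) - 1)
        if (inDegree.get(t.id) === 0) queue.push(t.id)
      }
    }
  }
  return order // cycle-free by the Step 1.1 validation
}
```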
**Step 2.2: Execute Task Waves**
```javascript
let completedCount = 0
let failedCount = 0
const results = {}
for (let waveIndex = 0; waveIndex < executionWaves.length; waveIndex++) {
const wave = executionWaves[waveIndex]
console.log(`\n=== Wave ${waveIndex + 1}/${executionWaves.length} ===`)
console.log(`Tasks: ${wave.map(t => t.id).join(', ')}`)
// Launch tasks in parallel
const taskPromises = wave.map(task => executeTask(task, executionFolder))
// Wait for wave completion
const waveResults = await Promise.allSettled(taskPromises)
// Process results
for (let i = 0; i < waveResults.length; i++) {
const result = waveResults[i]
const task = wave[i]
if (result.status === 'fulfilled') {
results[task.id] = result.value
if (result.value.success) {
completedCount++
task.status = 'completed'
console.log(`${task.id}: Completed`)
} else if (result.value.retry) {
console.log(`⚠️ ${task.id}: Will retry`)
task.status = 'pending'
} else {
console.log(`${task.id}: Failed`)
}
} else {
console.log(`${task.id}: Execution error`)
}
// Progress is tracked in execution-events.md (appended by executeTask)
}
// Update execution.md summary
appendExecutionTimeline(executionPath, waveIndex + 1, wave, waveResults)
}
```
**Step 2.3: Execute Individual Task with Unified Event Logging**
```javascript
async function executeTask(task, executionFolder) {
const eventLogPath = `${executionFolder}/execution-events.md`
const startTime = Date.now()
try {
// Read previous execution events for context
let previousEvents = ''
if (fs.existsSync(eventLogPath)) {
previousEvents = Read(eventLogPath)
}
// Select agent based on task type
const agent = selectAgent(task.agent_type)
// Build execution context including previous agent outputs
const executionContext = `
## Previous Agent Executions (for reference)
${previousEvents}
---
## Current Task: ${task.id}
**Title**: ${task.title}
**Agent**: ${agent}
**Time**: ${getUtc8ISOString()}
### Description
${task.description}
### Context
- Modified Files: ${task.files_to_modify.join(', ')}
- Expected Output: ${task.expected_output.join(', ')}
- Previous Artifacts: [list any artifacts from previous tasks]
### Requirements
${task.requirements || 'Follow the plan'}
### Constraints
${task.constraints || 'No breaking changes'}
`
// Execute based on agent type
let result
if (agent === 'code-developer' || agent === 'tdd-developer') {
// Code implementation
result = await Task({
subagent_type: agent,
description: `Execute: ${task.title}`,
prompt: executionContext,
run_in_background: false
})
} else if (agent === 'cli-execution-agent' || agent === 'universal-executor') {
// CLI-based execution
result = await Bash({
command: `ccw cli -p "${escapeQuotes(executionContext)}" --tool gemini --mode analysis`,
run_in_background: false
})
} else if (agent === 'test-fix-agent') {
// Test execution and fixing
result = await Task({
subagent_type: 'test-fix-agent',
description: `Execute Tests: ${task.title}`,
prompt: executionContext,
run_in_background: false
})
} else {
// Generic task execution
result = await Task({
subagent_type: 'universal-executor',
description: task.title,
prompt: executionContext,
run_in_background: false
})
}
// Capture artifacts (code, tests, docs generated by this task)
const artifacts = captureArtifacts(task, executionFolder)
// Append to unified execution events log
const eventEntry = `
## Task ${task.id} - COMPLETED ✅
**Timestamp**: ${getUtc8ISOString()}
**Duration**: ${calculateDuration(startTime)}ms
**Agent**: ${agent}
### Execution Summary
${generateSummary(result)}
### Key Outputs
${formatOutputs(result)}
### Generated Artifacts
${artifacts.map(a => `- **${a.type}**: \`${a.path}\` (${a.size})`).join('\n')}
### Notes for Next Agent
${generateNotesForNextAgent(result, task)}
---
`
appendToEventLog(eventLogPath, eventEntry)
return {
success: true,
task_id: task.id,
output: result,
artifacts: artifacts,
duration: calculateDuration(startTime)
}
} catch (error) {
// Append failure event to unified log
const failureEntry = `
## Task ${task.id} - FAILED ❌
**Timestamp**: ${getUtc8ISOString()}
**Duration**: ${calculateDuration(startTime)}ms
**Agent**: ${task.agent_type}
**Error**: ${error.message}
### Error Details
\`\`\`
${error.stack}
\`\`\`
### Recovery Notes for Next Attempt
${generateRecoveryNotes(error, task)}
---
`
appendToEventLog(eventLogPath, failureEntry)
// Handle failure: retry, skip, or abort
task.attempts++
if (task.attempts < task.max_retries && autoYes) {
console.log(`⚠️ ${task.id}: Failed, retrying (${task.attempts}/${task.max_retries})`)
return { success: false, task_id: task.id, error: error.message, retry: true, duration: calculateDuration(startTime) }
} else if (task.attempts >= task.max_retries && !autoYes) {
const decision = AskUserQuestion({
questions: [{
question: `Task failed: ${task.id}\nError: ${error.message}`,
header: "Decision",
multiSelect: false,
options: [
{ label: "Retry", description: "Re-execute this task" },
{ label: "Skip", description: "Skip this task and continue with the next one" },
{ label: "Abort", description: "Stop the entire execution" }
]
}]
})
const choice = decision.answers["Decision"]
if (choice === "Retry") {
task.attempts = 0
return { success: false, task_id: task.id, error: error.message, retry: true, duration: calculateDuration(startTime) }
} else if (choice === "Skip") {
task.status = 'skipped'
skipDependentTasks(task.id, normalizedTasks)
} else {
throw new Error('Execution aborted by user')
}
} else {
task.status = 'failed'
skipDependentTasks(task.id, normalizedTasks)
}
return {
success: false,
task_id: task.id,
error: error.message,
duration: calculateDuration(startTime)
}
}
}
// Helper function to append to unified event log
function appendToEventLog(logPath, eventEntry) {
if (fs.existsSync(logPath)) {
const currentContent = Read(logPath)
Write(logPath, currentContent + eventEntry)
} else {
Write(logPath, eventEntry)
}
}
```
---
## Phase 3: Progress Tracking & Event Logging
The `execution-events.md` file is the **single source of truth** for all agent executions:
- Each agent **reads** previous execution events for context
- **Executes** its task (with full knowledge of what was done before)
- **Writes** its execution event (success or failure) in markdown format
- Next agent **reads** all previous events, creating a "knowledge chain"
**Event log format** (appended entry):
```markdown
## Task {id} - {STATUS} {emoji}
**Timestamp**: {time}
**Duration**: {ms}
**Agent**: {type}
### Execution Summary
{What was done}
### Generated Artifacts
- `src/types/auth.ts` (2.3KB)
### Notes for Next Agent
- Key decisions made
- Potential issues
- Ready for: TASK-003
```
---
## Phase 4: Completion & Summary
After all tasks complete or max failures reached:
1. **Collect results**: Count completed/failed/skipped tasks
2. **Update execution.md**: Add "Execution Completed" section with statistics
3. **execution-events.md**: Already contains all detailed execution records
```javascript
const statistics = {
total_tasks: normalizedTasks.length,
completed: normalizedTasks.filter(t => t.status === 'completed').length,
failed: normalizedTasks.filter(t => t.status === 'failed').length,
skipped: normalizedTasks.filter(t => t.status === 'skipped').length,
success_rate: (completedCount / normalizedTasks.length * 100).toFixed(1)
}
// Update execution.md with final status
appendExecutionSummary(executionPath, statistics)
```
**Post-Completion Options** (unless --yes):
```javascript
AskUserQuestion({
questions: [{
question: "执行完成。是否需要后续操作?",
header: "Next Steps",
multiSelect: true,
options: [
{ label: "查看详情", description: "查看完整执行日志" },
{ label: "调试失败项", description: "对失败任务进行调试" },
{ label: "优化执行", description: "分析执行改进建议" },
{ label: "完成", description: "不需要后续操作" }
]
}]
})
```
---
## Session Folder Structure
```
.workflow/.execution/{sessionId}/
├── execution.md # Execution plan and overall status
└── execution-events.md # 📋 Unified execution log (all agents) - SINGLE SOURCE OF TRUTH
# This is both human-readable AND machine-parseable
# Generated files go directly to project directories (not into execution folder)
# E.g. TASK-001 generates: src/types/auth.ts (not artifacts/src/types/auth.ts)
# execution-events.md records the actual project paths
```
**Key Concept**:
- **execution-events.md** is the **single source of truth** for execution state
- Human-readable: Clear markdown format with task summaries
- Machine-parseable: Status indicators (✅/❌/⏳) and structured sections
- Progress tracking: Read task count by parsing status indicators
- No redundancy: One unified log for all purposes
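Since progress is read by parsing status indicators, that parse can be as small as the following sketch (heading format taken from the `executeTask` entries above):
```javascript
// Hypothetical progress reader over execution-events.md
function readProgress(eventLog) {
  const completed = (eventLog.match(/^## Task .+ - COMPLETED ✅/gm) || []).length
  const failed = (eventLog.match(/^## Task .+ - FAILED ❌/gm) || []).length
  return { completed, failed }
}
```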
---
## Agent Selection Strategy
```javascript
function selectBestAgent(task) {
if (task.type === 'code' || task.type === 'implementation') {
return task.includes_tests ? 'tdd-developer' : 'code-developer'
} else if (task.type === 'test' || task.type === 'test-fix') {
return 'test-fix-agent'
} else if (task.type === 'doc' || task.type === 'documentation') {
return 'doc-generator'
} else if (task.type === 'analysis' || task.type === 'investigation') {
return 'cli-execution-agent'
} else if (task.type === 'debug') {
return 'debug-explore-agent'
} else {
return 'universal-executor'
}
}
```
## Parallelization Rules
```javascript
function calculateParallel(tasks) {
// Group tasks into execution waves
// Constraints:
// - Tasks with same file modifications must be sequential
// - Tasks with dependencies must wait
// - Max 3 parallel tasks per wave (resource constraint)
const waves = []
const completed = new Set()
while (completed.size < tasks.length) {
const available = tasks.filter(t =>
!completed.has(t.id) &&
t.dependencies.every(d => completed.has(d))
)
if (available.length === 0) break
// Check for file conflicts
const noConflict = []
const modifiedFiles = new Set()
for (const task of available) {
  const conflicts = task.files_to_modify.some(f => modifiedFiles.has(f))
  if (!conflicts && noConflict.length < 3) {
    noConflict.push(task)
    task.files_to_modify.forEach(f => modifiedFiles.add(f))
  }
  // Conflicting tasks (and tasks beyond the 3-per-wave limit) wait for a later wave
}
if (noConflict.length > 0) {
waves.push(noConflict)
noConflict.forEach(t => completed.add(t.id))
}
}
return waves
}
```
## Error Handling & Recovery
| Situation | Action |
|-----------|--------|
| Task timeout | Mark as timeout, ask user: retry/skip/abort |
| Missing dependency | Auto-skip dependent tasks, log warning |
| File conflict | Detect before execution, ask for resolution |
| Output mismatch | Validate against expected_output, flag for review |
| Agent unavailable | Fallback to universal-executor |
| Execution interrupted | Support resume with `/workflow:unified-execute-with-file --continue` |
## Usage Recommendations
Use `/workflow:unified-execute-with-file` when:
- Executing any planning document (IMPL_PLAN.md, brainstorm conclusions, analysis recommendations)
- Multiple tasks with dependencies need orchestration
- Want minimal progress tracking without clutter
- Need to handle failures gracefully and resume
- Want to parallelize where possible but ensure correctness
Use for consuming output from:
- `/workflow:plan` → IMPL_PLAN.md
- `/workflow:brainstorm-with-file` → synthesis.json → execution
- `/workflow:analyze-with-file` → conclusions.json → execution
- `/workflow:debug-with-file` → recommendations → execution
- `/workflow:lite-plan` → task JSONs → execution
## Session Resume
```bash
/workflow:unified-execute-with-file --continue # Resume last execution
/workflow:unified-execute-with-file --continue EXEC-xxx-2025-01-27-abcd # Resume specific
```
When resuming:
1. Load execution.md and execution-events.md
2. Parse execution-events.md to identify completed/failed/skipped tasks
3. Recalculate remaining dependencies
4. Resume from first incomplete task
5. Append to execution-events.md with "Resumed from [sessionId]" note
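A minimal sketch of step 2, assuming the event-log heading format written by `executeTask` above:
```javascript
// Hypothetical resume helper: ids of tasks that completed and must not re-run
function findCompletedTaskIds(eventLog) {
  const done = new Set()
  const re = /^## Task (\S+) - COMPLETED/gm
  let m
  while ((m = re.exec(eventLog)) !== null) done.add(m[1])
  return done
}
```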

View File

@@ -11,8 +11,8 @@ CCW 命令帮助系统,提供命令搜索、推荐、文档查看功能。
## Trigger Conditions
- Keywords: "ccw-help", "ccw-issue", "帮助", "命令", "怎么用", "ccw 怎么用", "工作流"
- Scenarios: asking how to use a command, searching commands, requesting next-step suggestions, asking which workflow fits a task
## Operation Modes
@@ -50,7 +50,35 @@ CCW 命令帮助系统,提供命令搜索、推荐、文档查看功能。
1. Query `essential_commands` array
2. Guide appropriate workflow entry point
### Mode 5: CCW Command Orchestration
**Triggers**: "ccw ", "自动工作流", "自动选择工作流", "帮我规划"
**Process**:
1. Analyze user intent (task type, complexity, clarity)
2. Auto-select workflow level (1-4 or Issue)
3. Build command chain based on workflow
4. Get user confirmation
5. Execute chain with TODO tracking
**Supported Workflows**:
- **Level 1** (Lite-Lite-Lite): Ultra-simple quick tasks
- **Level 2** (Rapid/Hotfix): Bug fixes, simple features, documentation
- **Level 2.5** (Rapid-to-Issue): Bridge from quick planning to issue workflow
- **Level 3** (Coupled): Complex features with planning, execution, review, tests
- **Level 3 Variants**:
- TDD workflows (test-first development)
- Test-fix workflows (debug failing tests)
- Review workflows (code review and fixes)
- UI design workflows
- **Level 4** (Full): Exploratory tasks with brainstorming
- **With-File Workflows**: Documented exploration with multi-CLI collaboration
- `brainstorm-with-file`: Multi-perspective ideation
- `debug-with-file`: Hypothesis-driven debugging
- `analyze-with-file`: Collaborative analysis
- **Issue Workflow**: Batch issue discovery, planning, queueing, execution
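A minimal sketch of the level auto-selection in steps 1-2 (the keyword heuristics here are illustrative assumptions, not the skill's actual rules):
```javascript
// Hypothetical intent → workflow level heuristic
function selectWorkflowLevel(task) {
  if (/issue|batch/i.test(task)) return 'Issue'
  if (/brainstorm|头脑风暴|explore/i.test(task)) return 'Level 4'
  if (/tdd|review|ui design/i.test(task)) return 'Level 3 variant'
  if (/feature|refactor|integrate/i.test(task)) return 'Level 3'
  if (/fix|bug|docs|typo/i.test(task)) return 'Level 2'
  return 'Level 1'
}
```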
### Mode 6: Issue Reporting
**Triggers**: "ccw-issue", "报告 bug"
@@ -84,28 +112,60 @@ Single source of truth: **[command.json](command.json)**
## Slash Commands
```bash
/ccw "task description" # Auto-select workflow and execute
/ccw-help # General help entry
/ccw-help search <keyword> # Search commands
/ccw-help next <command> # Get next step suggestions
/ccw-issue # Issue reporting
```
### CCW Command Examples
```bash
/ccw "Add user authentication" # → auto-select level 2-3
/ccw "Fix memory leak in WebSocket" # → auto-select bugfix workflow
/ccw "Implement with TDD" # → detect TDD, use tdd-plan → execute → tdd-verify
/ccw "头脑风暴: 用户通知系统" # → detect brainstorm, use brainstorm-with-file
/ccw "深度调试: 系统随机崩溃" # → detect debug-file, use debug-with-file
/ccw "协作分析: 认证架构设计" # → detect analyze-file, use analyze-with-file
```
## Maintenance
### Update Mechanism
CCW-Help skill supports manual updates through user confirmation dialog.
#### How to Update
**Option 1: When executing the skill, user will be prompted:**
```
Would you like to update CCW-Help command index?
- Yes: Run auto-update and regenerate command.json
- No: Use current index
```
**Option 2: Manual update**
```bash
cd D:/Claude_dms3/.claude/skills/ccw-help
python scripts/auto-update.py
```
This runs `analyze_commands.py` to scan commands/ and agents/ directories and regenerate `command.json`.
#### Update Scripts
- **`auto-update.py`**: Simple wrapper that runs analyze_commands.py
- **`analyze_commands.py`**: Scans directories and generates command index
## Statistics
- **Commands**: 50+
- **Agents**: 16
- **Workflows**: 6 main levels + 3 with-file variants
- **Essential**: 10 core commands
## Core Principle

File diff suppressed because it is too large

View File

@@ -0,0 +1,97 @@
[
{
"name": "action-planning-agent",
"description": "|",
"source": "../../../agents/action-planning-agent.md"
},
{
"name": "cli-discuss-agent",
"description": "|",
"source": "../../../agents/cli-discuss-agent.md"
},
{
"name": "cli-execution-agent",
"description": "|",
"source": "../../../agents/cli-execution-agent.md"
},
{
"name": "cli-explore-agent",
"description": "|",
"source": "../../../agents/cli-explore-agent.md"
},
{
"name": "cli-lite-planning-agent",
"description": "|",
"source": "../../../agents/cli-lite-planning-agent.md"
},
{
"name": "cli-planning-agent",
"description": "|",
"source": "../../../agents/cli-planning-agent.md"
},
{
"name": "code-developer",
"description": "|",
"source": "../../../agents/code-developer.md"
},
{
"name": "conceptual-planning-agent",
"description": "|",
"source": "../../../agents/conceptual-planning-agent.md"
},
{
"name": "context-search-agent",
"description": "|",
"source": "../../../agents/context-search-agent.md"
},
{
"name": "debug-explore-agent",
"description": "|",
"source": "../../../agents/debug-explore-agent.md"
},
{
"name": "doc-generator",
"description": "|",
"source": "../../../agents/doc-generator.md"
},
{
"name": "issue-plan-agent",
"description": "|",
"source": "../../../agents/issue-plan-agent.md"
},
{
"name": "issue-queue-agent",
"description": "|",
"source": "../../../agents/issue-queue-agent.md"
},
{
"name": "memory-bridge",
"description": "Execute complex project documentation updates using script coordination",
"source": "../../../agents/memory-bridge.md"
},
{
"name": "tdd-developer",
"description": "|",
"source": "../../../agents/tdd-developer.md"
},
{
"name": "test-context-search-agent",
"description": "|",
"source": "../../../agents/test-context-search-agent.md"
},
{
"name": "test-fix-agent",
"description": "|",
"source": "../../../agents/test-fix-agent.md"
},
{
"name": "ui-design-agent",
"description": "|",
"source": "../../../agents/ui-design-agent.md"
},
{
"name": "universal-executor",
"description": "|",
"source": "../../../agents/universal-executor.md"
}
]
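Each entry's `source` is a path relative to the skill directory, so a quick way to sanity-check an index like this is to resolve every `source` and report missing files. The snippet below is a hedged sketch: the `agents.json` filename and the skill-root location are assumptions, not confirmed by this diff.
```python
import json
from pathlib import Path

skill_root = Path(".claude/skills/ccw-help")  # assumed location of the index
entries = json.loads((skill_root / "agents.json").read_text(encoding="utf-8"))

# Resolve each source relative to the skill root and flag dead references.
missing = [e["name"] for e in entries
           if not (skill_root / e["source"]).resolve().exists()]
if missing:
    print("Missing sources:", ", ".join(missing))
else:
    print(f"All {len(entries)} agent sources resolve.")
```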


@@ -0,0 +1,805 @@
[
{
"name": "ccw-coordinator",
"command": "/ccw-coordinator",
"description": "Command orchestration tool - analyze requirements, recommend chain, execute sequentially with state persistence",
"arguments": "[task description]",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw-coordinator.md"
},
{
"name": "ccw-debug",
"command": "/ccw-debug",
"description": "Aggregated debug command - combines debugging diagnostics and test verification in a synergistic workflow supporting cli-quick / debug-first / test-first / bidirectional-verification modes",
"arguments": "[--mode cli|debug|test|bidirectional] [--yes|-y] [--hotfix] \\\"bug description or error message\\",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw-debug.md"
},
{
"name": "ccw",
"command": "/ccw",
"description": "Main workflow orchestrator - analyze intent, select workflow, execute command chain in main process",
"arguments": "\\\"task description\\",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw.md"
},
{
"name": "cli-init",
"command": "/cli:cli-init",
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
"category": "cli",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
},
{
"name": "codex-review",
"command": "/cli:codex-review",
"description": "Interactive code review using Codex CLI via ccw endpoint with configurable review target, model, and custom instructions",
"arguments": "[--uncommitted|--base <branch>|--commit <sha>] [--model <model>] [--title <title>] [prompt]",
"category": "cli",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/cli/codex-review.md"
},
{
"name": "convert-to-plan",
"command": "/issue:convert-to-plan",
"description": "Convert planning artifacts (lite-plan, workflow session, markdown) to issue solutions",
"arguments": "[-y|--yes] [--issue <id>] [--supplement] <SOURCE>",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/convert-to-plan.md"
},
{
"name": "issue:discover-by-prompt",
"command": "/issue:discover-by-prompt",
"description": "Discover issues from user prompt with Gemini-planned iterative multi-agent exploration. Uses ACE semantic search for context gathering and supports cross-module comparison (e.g., frontend vs backend API contracts).",
"arguments": "[-y|--yes] <prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover-by-prompt.md"
},
{
"name": "issue:discover",
"command": "/issue:discover",
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
"arguments": "[-y|--yes] <path-pattern> [--perspectives=bug,ux,...] [--external]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover.md"
},
{
"name": "execute",
"command": "/issue:execute",
"description": "Execute queue with DAG-based parallel orchestration (one commit per solution)",
"arguments": "[-y|--yes] --queue <queue-id> [--worktree [<existing-path>]]",
"category": "issue",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/issue/execute.md"
},
{
"name": "from-brainstorm",
"command": "/issue:from-brainstorm",
"description": "Convert brainstorm session ideas into issue with executable solution for parallel-dev-cycle",
"arguments": "SESSION=\\\"<session-id>\\\" [--idea=<index>] [--auto] [-y|--yes]",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/from-brainstorm.md"
},
{
"name": "new",
"command": "/issue:new",
"description": "Create structured issue from GitHub URL or text description",
"arguments": "[-y|--yes] <github-url | text-description> [--priority 1-5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/new.md"
},
{
"name": "plan",
"command": "/issue:plan",
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
"arguments": "[-y|--yes] --all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/plan.md"
},
{
"name": "queue",
"command": "/issue:queue",
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
"arguments": "[-y|--yes] [--queues <n>] [--issue <id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/queue.md"
},
{
"name": "compact",
"command": "/memory:compact",
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
"arguments": "[optional: session description]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/compact.md"
},
{
"name": "docs-full-cli",
"command": "/memory:docs-full-cli",
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[path] [--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-full-cli.md"
},
{
"name": "docs-related-cli",
"command": "/memory:docs-related-cli",
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
"arguments": "[--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-related-cli.md"
},
{
"name": "load",
"command": "/memory:load",
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
"arguments": "[--tool gemini|qwen] \\\"task context description\\",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load.md"
},
{
"name": "style-skill-memory",
"command": "/memory:style-skill-memory",
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
"arguments": "[package-name] [--regenerate]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/style-skill-memory.md"
},
{
"name": "tips",
"command": "/memory:tips",
"description": "Quick note-taking command to capture ideas, snippets, reminders, and insights for later reference",
"arguments": "<note content> [--tag <tag1,tag2>] [--context <context>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/tips.md"
},
{
"name": "update-full",
"command": "/memory:update-full",
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-full.md"
},
{
"name": "update-related",
"command": "/memory:update-related",
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
"arguments": "[--tool gemini|qwen|codex]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-related.md"
},
{
"name": "ccw view",
"command": "/ccw view",
"description": "Dashboard - Open CCW workflow dashboard for managing tasks and sessions",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/view.md"
},
{
"name": "analyze-with-file",
"command": "/workflow:analyze-with-file",
"description": "Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding",
"arguments": "[-y|--yes] [-c|--continue] \\\"topic or question\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Beginner",
"source": "../../../commands/workflow/analyze-with-file.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "[-y|--yes] topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
"arguments": "[-y|--yes] topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "role-analysis",
"command": "/workflow:brainstorm:role-analysis",
"description": "Unified role-specific analysis generation with interactive context gathering and incremental updates",
"arguments": "[role-name] [--session session-id] [--update] [--include-questions] [--skip-questions]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/role-analysis.md"
},
{
"name": "synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
"arguments": "[-y|--yes] [optional: --session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
},
{
"name": "brainstorm-with-file",
"command": "/workflow:brainstorm-with-file",
"description": "Interactive brainstorming with multi-CLI collaboration, idea expansion, and documented thought evolution",
"arguments": "[-y|--yes] [-c|--continue] [-m|--mode creative|structured] \\\"idea or topic\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm-with-file.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
"arguments": "[-y|--yes] [--dry-run] [\\\"focus area\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug-with-file",
"command": "/workflow:debug-with-file",
"description": "Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and Gemini-assisted correction",
"arguments": "[-y|--yes] \\\"bug description or error message\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug-with-file.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[-y|--yes] [--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
"arguments": "[--regenerate]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
"arguments": "[-y|--yes] [--in-memory] [\\\"task description\\\"|file-path]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[-y|--yes] [--hotfix] \\\"bug description or issue reference\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "workflow:lite-lite-lite",
"command": "/workflow:lite-lite-lite",
"description": "Ultra-lightweight multi-tool analysis and direct execution. No artifacts for simple tasks; auto-creates planning docs in .workflow/.scratchpad/ for complex tasks. Auto tool selection based on task analysis, user-driven iteration via AskUser.",
"arguments": "[-y|--yes] <task description>",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-lite-lite.md"
},
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution execute to lite-execute after user confirmation",
"arguments": "[-y|--yes] [-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "workflow:multi-cli-plan",
"command": "/workflow:multi-cli-plan",
"description": "Multi-CLI collaborative planning workflow with ACE context gathering and iterative cross-verification. Uses cli-discuss-agent for Gemini+Codex+Claude analysis to converge on optimal execution plan.",
"arguments": "[-y|--yes] <task description> [--max-rounds=3] [--tools=gemini,codex] [--mode=parallel|serial]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/multi-cli-plan.md"
},
{
"name": "plan-verify",
"command": "/workflow:plan-verify",
"description": "Perform READ-ONLY verification analysis between IMPL_PLAN.md, task JSONs, and brainstorming artifacts. Generates structured report with quality gate recommendation. Does NOT modify any files.",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan-verify.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "[-y|--yes] \\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
"arguments": "[-y|--yes] [--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "review-cycle-fix",
"command": "/workflow:review-cycle-fix",
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-cycle-fix.md"
},
{
"name": "review-module-cycle",
"command": "/workflow:review-module-cycle",
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "review",
"command": "/workflow:review",
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review.md"
},
{
"name": "complete",
"command": "/workflow:session:complete",
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
"arguments": "[-y|--yes] [--detailed]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/complete.md"
},
{
"name": "list",
"command": "/workflow:session:list",
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "resume",
"command": "/workflow:session:resume",
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/resume.md"
},
{
"name": "solidify",
"command": "/workflow:session:solidify",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
"arguments": "[-y|--yes] [--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/solidify.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles. Generates quality report with coverage analysis and quality gate recommendation. Orchestrates sub-commands for comprehensive validation.",
"arguments": "[optional: --session WFS-session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-verify.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-fix-gen.md"
},
{
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-gen.md"
},
{
"name": "conflict-resolution",
"command": "/workflow:tools:conflict-resolution",
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
"arguments": "[-y|--yes] --session WFS-session-id --context path/to/context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/conflict-resolution.md"
},
{
"name": "gather",
"command": "/workflow:tools:gather",
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
"arguments": "--session WFS-session-id \\\"task description\\",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/context-gather.md"
},
{
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "[-y|--yes] --session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-agent.md"
},
{
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "[-y|--yes] --session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
},
{
"name": "tdd-coverage-analysis",
"command": "/workflow:tools:tdd-coverage-analysis",
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
},
{
"name": "test-concept-enhanced",
"command": "/workflow:tools:test-concept-enhanced",
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
},
{
"name": "test-context-gather",
"command": "/workflow:tools:test-context-gather",
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-context-gather.md"
},
{
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-task-generate.md"
},
{
"name": "animation-extract",
"command": "/workflow:ui-design:animation-extract",
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/animation-extract.md"
},
{
"name": "workflow:ui-design:codify-style",
"command": "/workflow:ui-design:codify-style",
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/codify-style.md"
},
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/design-sync.md"
},
{
"name": "explore-auto",
"command": "/workflow:ui-design:explore-auto",
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/explore-auto.md"
},
{
"name": "generate",
"command": "/workflow:ui-design:generate",
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
"arguments": "[--design-id <id>] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/generate.md"
},
{
"name": "imitate-auto",
"command": "/workflow:ui-design:imitate-auto",
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
"arguments": "[--input \"<value>\"] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
},
{
"name": "workflow:ui-design:import-from-code",
"command": "/workflow:ui-design:import-from-code",
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/import-from-code.md"
},
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/layout-extract.md"
},
{
"name": "workflow:ui-design:reference-page-generator",
"command": "/workflow:ui-design:reference-page-generator",
"description": "Generate multi-component reference pages and documentation from design run extraction",
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
},
{
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/style-extract.md"
},
{
"name": "unified-execute-with-file",
"command": "/workflow:unified-execute-with-file",
"description": "Universal execution engine for consuming any planning/brainstorm/analysis output with minimal progress tracking, multi-agent coordination, and incremental execution",
"arguments": "[-y|--yes] [-p|--plan <path>] [-m|--mode sequential|parallel] [\\\"execution context or task name\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/unified-execute-with-file.md"
}
]
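With a flat array like this, a `/ccw-help search <keyword>` style lookup reduces to substring matching over each entry's name and description. The sketch below illustrates that, assuming the flat index is stored as `commands.json` (the filename and `search_commands` helper are assumptions for illustration).
```python
import json
from pathlib import Path

def search_commands(index_path: str, keyword: str) -> list[str]:
    """Case-insensitive keyword match over command name and description."""
    entries = json.loads(Path(index_path).read_text(encoding="utf-8"))
    kw = keyword.lower()
    return [e["command"] for e in entries
            if kw in e["name"].lower() or kw in e["description"].lower()]

# e.g. search_commands("commands.json", "review")
# -> ['/cli:codex-review', '/workflow:review-cycle-fix', ...]
```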


@@ -0,0 +1,833 @@
{
"general": {
"_root": [
{
"name": "ccw-coordinator",
"command": "/ccw-coordinator",
"description": "Command orchestration tool - analyze requirements, recommend chain, execute sequentially with state persistence",
"arguments": "[task description]",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw-coordinator.md"
},
{
"name": "ccw-debug",
"command": "/ccw-debug",
"description": "Aggregated debug command - combines debugging diagnostics and test verification in a synergistic workflow supporting cli-quick / debug-first / test-first / bidirectional-verification modes",
"arguments": "[--mode cli|debug|test|bidirectional] [--yes|-y] [--hotfix] \\\"bug description or error message\\",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw-debug.md"
},
{
"name": "ccw",
"command": "/ccw",
"description": "Main workflow orchestrator - analyze intent, select workflow, execute command chain in main process",
"arguments": "\\\"task description\\",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw.md"
},
{
"name": "ccw view",
"command": "/ccw view",
"description": "Dashboard - Open CCW workflow dashboard for managing tasks and sessions",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/view.md"
}
]
},
"cli": {
"_root": [
{
"name": "cli-init",
"command": "/cli:cli-init",
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
"category": "cli",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
},
{
"name": "codex-review",
"command": "/cli:codex-review",
"description": "Interactive code review using Codex CLI via ccw endpoint with configurable review target, model, and custom instructions",
"arguments": "[--uncommitted|--base <branch>|--commit <sha>] [--model <model>] [--title <title>] [prompt]",
"category": "cli",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/cli/codex-review.md"
}
]
},
"issue": {
"_root": [
{
"name": "convert-to-plan",
"command": "/issue:convert-to-plan",
"description": "Convert planning artifacts (lite-plan, workflow session, markdown) to issue solutions",
"arguments": "[-y|--yes] [--issue <id>] [--supplement] <SOURCE>",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/convert-to-plan.md"
},
{
"name": "issue:discover-by-prompt",
"command": "/issue:discover-by-prompt",
"description": "Discover issues from user prompt with Gemini-planned iterative multi-agent exploration. Uses ACE semantic search for context gathering and supports cross-module comparison (e.g., frontend vs backend API contracts).",
"arguments": "[-y|--yes] <prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover-by-prompt.md"
},
{
"name": "issue:discover",
"command": "/issue:discover",
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
"arguments": "[-y|--yes] <path-pattern> [--perspectives=bug,ux,...] [--external]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover.md"
},
{
"name": "execute",
"command": "/issue:execute",
"description": "Execute queue with DAG-based parallel orchestration (one commit per solution)",
"arguments": "[-y|--yes] --queue <queue-id> [--worktree [<existing-path>]]",
"category": "issue",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/issue/execute.md"
},
{
"name": "from-brainstorm",
"command": "/issue:from-brainstorm",
"description": "Convert brainstorm session ideas into issue with executable solution for parallel-dev-cycle",
"arguments": "SESSION=\\\"<session-id>\\\" [--idea=<index>] [--auto] [-y|--yes]",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/from-brainstorm.md"
},
{
"name": "new",
"command": "/issue:new",
"description": "Create structured issue from GitHub URL or text description",
"arguments": "[-y|--yes] <github-url | text-description> [--priority 1-5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/new.md"
},
{
"name": "plan",
"command": "/issue:plan",
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
"arguments": "[-y|--yes] --all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/plan.md"
},
{
"name": "queue",
"command": "/issue:queue",
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
"arguments": "[-y|--yes] [--queues <n>] [--issue <id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/queue.md"
}
]
},
"memory": {
"_root": [
{
"name": "compact",
"command": "/memory:compact",
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
"arguments": "[optional: session description]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/compact.md"
},
{
"name": "docs-full-cli",
"command": "/memory:docs-full-cli",
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[path] [--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-full-cli.md"
},
{
"name": "docs-related-cli",
"command": "/memory:docs-related-cli",
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
"arguments": "[--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-related-cli.md"
},
{
"name": "load",
"command": "/memory:load",
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
"arguments": "[--tool gemini|qwen] \\\"task context description\\",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load.md"
},
{
"name": "style-skill-memory",
"command": "/memory:style-skill-memory",
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
"arguments": "[package-name] [--regenerate]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/style-skill-memory.md"
},
{
"name": "tips",
"command": "/memory:tips",
"description": "Quick note-taking command to capture ideas, snippets, reminders, and insights for later reference",
"arguments": "<note content> [--tag <tag1,tag2>] [--context <context>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/tips.md"
},
{
"name": "update-full",
"command": "/memory:update-full",
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-full.md"
},
{
"name": "update-related",
"command": "/memory:update-related",
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
"arguments": "[--tool gemini|qwen|codex]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-related.md"
}
]
},
"workflow": {
"_root": [
{
"name": "analyze-with-file",
"command": "/workflow:analyze-with-file",
"description": "Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding",
"arguments": "[-y|--yes] [-c|--continue] \\\"topic or question\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Beginner",
"source": "../../../commands/workflow/analyze-with-file.md"
},
{
"name": "brainstorm-with-file",
"command": "/workflow:brainstorm-with-file",
"description": "Interactive brainstorming with multi-CLI collaboration, idea expansion, and documented thought evolution",
"arguments": "[-y|--yes] [-c|--continue] [-m|--mode creative|structured] \\\"idea or topic\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm-with-file.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
"arguments": "[-y|--yes] [--dry-run] [\\\"focus area\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug-with-file",
"command": "/workflow:debug-with-file",
"description": "Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and Gemini-assisted correction",
"arguments": "[-y|--yes] \\\"bug description or error message\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug-with-file.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[-y|--yes] [--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
"arguments": "[--regenerate]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
"arguments": "[-y|--yes] [--in-memory] [\\\"task description\\\"|file-path]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[-y|--yes] [--hotfix] \\\"bug description or issue reference\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "workflow:lite-lite-lite",
"command": "/workflow:lite-lite-lite",
"description": "Ultra-lightweight multi-tool analysis and direct execution. No artifacts for simple tasks; auto-creates planning docs in .workflow/.scratchpad/ for complex tasks. Auto tool selection based on task analysis, user-driven iteration via AskUser.",
"arguments": "[-y|--yes] <task description>",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-lite-lite.md"
},
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution execute to lite-execute after user confirmation",
"arguments": "[-y|--yes] [-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "workflow:multi-cli-plan",
"command": "/workflow:multi-cli-plan",
"description": "Multi-CLI collaborative planning workflow with ACE context gathering and iterative cross-verification. Uses cli-discuss-agent for Gemini+Codex+Claude analysis to converge on optimal execution plan.",
"arguments": "[-y|--yes] <task description> [--max-rounds=3] [--tools=gemini,codex] [--mode=parallel|serial]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/multi-cli-plan.md"
},
{
"name": "plan-verify",
"command": "/workflow:plan-verify",
"description": "Perform READ-ONLY verification analysis between IMPL_PLAN.md, task JSONs, and brainstorming artifacts. Generates structured report with quality gate recommendation. Does NOT modify any files.",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan-verify.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "[-y|--yes] \\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
"arguments": "[-y|--yes] [--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "review-cycle-fix",
"command": "/workflow:review-cycle-fix",
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-cycle-fix.md"
},
{
"name": "review-module-cycle",
"command": "/workflow:review-module-cycle",
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "review",
"command": "/workflow:review",
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles. Generates quality report with coverage analysis and quality gate recommendation. Orchestrates sub-commands for comprehensive validation.",
"arguments": "[optional: --session WFS-session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-verify.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-fix-gen.md"
},
{
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-gen.md"
},
{
"name": "unified-execute-with-file",
"command": "/workflow:unified-execute-with-file",
"description": "Universal execution engine for consuming any planning/brainstorm/analysis output with minimal progress tracking, multi-agent coordination, and incremental execution",
"arguments": "[-y|--yes] [-p|--plan <path>] [-m|--mode sequential|parallel] [\\\"execution context or task name\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/unified-execute-with-file.md"
}
],
"brainstorm": [
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "[-y|--yes] topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
"arguments": "[-y|--yes] topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "role-analysis",
"command": "/workflow:brainstorm:role-analysis",
"description": "Unified role-specific analysis generation with interactive context gathering and incremental updates",
"arguments": "[role-name] [--session session-id] [--update] [--include-questions] [--skip-questions]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/role-analysis.md"
},
{
"name": "synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
"arguments": "[-y|--yes] [optional: --session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
}
],
"session": [
{
"name": "complete",
"command": "/workflow:session:complete",
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
"arguments": "[-y|--yes] [--detailed]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/complete.md"
},
{
"name": "list",
"command": "/workflow:session:list",
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "resume",
"command": "/workflow:session:resume",
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/resume.md"
},
{
"name": "solidify",
"command": "/workflow:session:solidify",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
"arguments": "[-y|--yes] [--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/solidify.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
}
],
"tools": [
{
"name": "conflict-resolution",
"command": "/workflow:tools:conflict-resolution",
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
"arguments": "[-y|--yes] --session WFS-session-id --context path/to/context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/conflict-resolution.md"
},
{
"name": "gather",
"command": "/workflow:tools:gather",
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
"arguments": "--session WFS-session-id \\\"task description\\",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/context-gather.md"
},
{
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "[-y|--yes] --session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-agent.md"
},
{
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "[-y|--yes] --session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
},
{
"name": "tdd-coverage-analysis",
"command": "/workflow:tools:tdd-coverage-analysis",
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
},
{
"name": "test-concept-enhanced",
"command": "/workflow:tools:test-concept-enhanced",
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
},
{
"name": "test-context-gather",
"command": "/workflow:tools:test-context-gather",
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-context-gather.md"
},
{
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-task-generate.md"
}
],
"ui-design": [
{
"name": "animation-extract",
"command": "/workflow:ui-design:animation-extract",
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/animation-extract.md"
},
{
"name": "workflow:ui-design:codify-style",
"command": "/workflow:ui-design:codify-style",
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/codify-style.md"
},
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/design-sync.md"
},
{
"name": "explore-auto",
"command": "/workflow:ui-design:explore-auto",
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/explore-auto.md"
},
{
"name": "generate",
"command": "/workflow:ui-design:generate",
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
"arguments": "[--design-id <id>] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/generate.md"
},
{
"name": "imitate-auto",
"command": "/workflow:ui-design:imitate-auto",
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
"arguments": "[--input \"<value>\"] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
},
{
"name": "workflow:ui-design:import-from-code",
"command": "/workflow:ui-design:import-from-code",
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/import-from-code.md"
},
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/layout-extract.md"
},
{
"name": "workflow:ui-design:reference-page-generator",
"command": "/workflow:ui-design:reference-page-generator",
"description": "Generate multi-component reference pages and documentation from design run extraction",
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
},
{
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/style-extract.md"
}
]
}
}

View File

@@ -0,0 +1,819 @@
{
"general": [
{
"name": "ccw-coordinator",
"command": "/ccw-coordinator",
"description": "Command orchestration tool - analyze requirements, recommend chain, execute sequentially with state persistence",
"arguments": "[task description]",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw-coordinator.md"
},
{
"name": "ccw-debug",
"command": "/ccw-debug",
"description": "Aggregated debug command - combines debugging diagnostics and test verification in a synergistic workflow supporting cli-quick / debug-first / test-first / bidirectional-verification modes",
"arguments": "[--mode cli|debug|test|bidirectional] [--yes|-y] [--hotfix] \\\"bug description or error message\\",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw-debug.md"
},
{
"name": "ccw",
"command": "/ccw",
"description": "Main workflow orchestrator - analyze intent, select workflow, execute command chain in main process",
"arguments": "\\\"task description\\",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw.md"
},
{
"name": "cli-init",
"command": "/cli:cli-init",
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
"category": "cli",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
},
{
"name": "issue:discover-by-prompt",
"command": "/issue:discover-by-prompt",
"description": "Discover issues from user prompt with Gemini-planned iterative multi-agent exploration. Uses ACE semantic search for context gathering and supports cross-module comparison (e.g., frontend vs backend API contracts).",
"arguments": "[-y|--yes] <prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover-by-prompt.md"
},
{
"name": "issue:discover",
"command": "/issue:discover",
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
"arguments": "[-y|--yes] <path-pattern> [--perspectives=bug,ux,...] [--external]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover.md"
},
{
"name": "new",
"command": "/issue:new",
"description": "Create structured issue from GitHub URL or text description",
"arguments": "[-y|--yes] <github-url | text-description> [--priority 1-5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/new.md"
},
{
"name": "queue",
"command": "/issue:queue",
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
"arguments": "[-y|--yes] [--queues <n>] [--issue <id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/queue.md"
},
{
"name": "compact",
"command": "/memory:compact",
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
"arguments": "[optional: session description]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/compact.md"
},
{
"name": "load",
"command": "/memory:load",
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
"arguments": "[--tool gemini|qwen] \\\"task context description\\",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load.md"
},
{
"name": "tips",
"command": "/memory:tips",
"description": "Quick note-taking command to capture ideas, snippets, reminders, and insights for later reference",
"arguments": "<note content> [--tag <tag1,tag2>] [--context <context>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/tips.md"
},
{
"name": "update-full",
"command": "/memory:update-full",
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-full.md"
},
{
"name": "update-related",
"command": "/memory:update-related",
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
"arguments": "[--tool gemini|qwen|codex]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-related.md"
},
{
"name": "ccw view",
"command": "/ccw view",
"description": "Dashboard - Open CCW workflow dashboard for managing tasks and sessions",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/view.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "[-y|--yes] topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
"arguments": "[-y|--yes] topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "role-analysis",
"command": "/workflow:brainstorm:role-analysis",
"description": "Unified role-specific analysis generation with interactive context gathering and incremental updates",
"arguments": "[role-name] [--session session-id] [--update] [--include-questions] [--skip-questions]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/role-analysis.md"
},
{
"name": "synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
"arguments": "[-y|--yes] [optional: --session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
"arguments": "[-y|--yes] [--dry-run] [\\\"focus area\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug-with-file",
"command": "/workflow:debug-with-file",
"description": "Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and Gemini-assisted correction",
"arguments": "[-y|--yes] \\\"bug description or error message\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug-with-file.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
"arguments": "[--regenerate]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[-y|--yes] [--hotfix] \\\"bug description or issue reference\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "workflow:lite-lite-lite",
"command": "/workflow:lite-lite-lite",
"description": "Ultra-lightweight multi-tool analysis and direct execution. No artifacts for simple tasks; auto-creates planning docs in .workflow/.scratchpad/ for complex tasks. Auto tool selection based on task analysis, user-driven iteration via AskUser.",
"arguments": "[-y|--yes] <task description>",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-lite-lite.md"
},
{
"name": "list",
"command": "/workflow:session:list",
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "solidify",
"command": "/workflow:session:solidify",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
"arguments": "[-y|--yes] [--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/solidify.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "conflict-resolution",
"command": "/workflow:tools:conflict-resolution",
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
"arguments": "[-y|--yes] --session WFS-session-id --context path/to/context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/conflict-resolution.md"
},
{
"name": "gather",
"command": "/workflow:tools:gather",
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
"arguments": "--session WFS-session-id \\\"task description\\",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/context-gather.md"
},
{
"name": "animation-extract",
"command": "/workflow:ui-design:animation-extract",
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/animation-extract.md"
},
{
"name": "explore-auto",
"command": "/workflow:ui-design:explore-auto",
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/explore-auto.md"
},
{
"name": "imitate-auto",
"command": "/workflow:ui-design:imitate-auto",
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
"arguments": "[--input \"<value>\"] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
},
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/layout-extract.md"
},
{
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/style-extract.md"
}
],
"analysis": [
{
"name": "codex-review",
"command": "/cli:codex-review",
"description": "Interactive code review using Codex CLI via ccw endpoint with configurable review target, model, and custom instructions",
"arguments": "[--uncommitted|--base <branch>|--commit <sha>] [--model <model>] [--title <title>] [prompt]",
"category": "cli",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/cli/codex-review.md"
},
{
"name": "analyze-with-file",
"command": "/workflow:analyze-with-file",
"description": "Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding",
"arguments": "[-y|--yes] [-c|--continue] \\\"topic or question\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Beginner",
"source": "../../../commands/workflow/analyze-with-file.md"
},
{
"name": "review-cycle-fix",
"command": "/workflow:review-cycle-fix",
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-cycle-fix.md"
},
{
"name": "review-module-cycle",
"command": "/workflow:review-module-cycle",
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review",
"command": "/workflow:review",
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review.md"
}
],
"planning": [
{
"name": "convert-to-plan",
"command": "/issue:convert-to-plan",
"description": "Convert planning artifacts (lite-plan, workflow session, markdown) to issue solutions",
"arguments": "[-y|--yes] [--issue <id>] [--supplement] <SOURCE>",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/convert-to-plan.md"
},
{
"name": "from-brainstorm",
"command": "/issue:from-brainstorm",
"description": "Convert brainstorm session ideas into issue with executable solution for parallel-dev-cycle",
"arguments": "SESSION=\\\"<session-id>\\\" [--idea=<index>] [--auto] [-y|--yes]",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/from-brainstorm.md"
},
{
"name": "plan",
"command": "/issue:plan",
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
"arguments": "[-y|--yes] --all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/plan.md"
},
{
"name": "brainstorm-with-file",
"command": "/workflow:brainstorm-with-file",
"description": "Interactive brainstorming with multi-CLI collaboration, idea expansion, and documented thought evolution",
"arguments": "[-y|--yes] [-c|--continue] [-m|--mode creative|structured] \\\"idea or topic\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm-with-file.md"
},
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution execute to lite-execute after user confirmation",
"arguments": "[-y|--yes] [-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "workflow:multi-cli-plan",
"command": "/workflow:multi-cli-plan",
"description": "Multi-CLI collaborative planning workflow with ACE context gathering and iterative cross-verification. Uses cli-discuss-agent for Gemini+Codex+Claude analysis to converge on optimal execution plan.",
"arguments": "[-y|--yes] <task description> [--max-rounds=3] [--tools=gemini,codex] [--mode=parallel|serial]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/multi-cli-plan.md"
},
{
"name": "plan-verify",
"command": "/workflow:plan-verify",
"description": "Perform READ-ONLY verification analysis between IMPL_PLAN.md, task JSONs, and brainstorming artifacts. Generates structured report with quality gate recommendation. Does NOT modify any files.",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan-verify.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "[-y|--yes] \\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
"arguments": "[-y|--yes] [--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "workflow:ui-design:codify-style",
"command": "/workflow:ui-design:codify-style",
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/codify-style.md"
},
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/design-sync.md"
},
{
"name": "workflow:ui-design:import-from-code",
"command": "/workflow:ui-design:import-from-code",
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/import-from-code.md"
},
{
"name": "workflow:ui-design:reference-page-generator",
"command": "/workflow:ui-design:reference-page-generator",
"description": "Generate multi-component reference pages and documentation from design run extraction",
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
}
],
"implementation": [
{
"name": "execute",
"command": "/issue:execute",
"description": "Execute queue with DAG-based parallel orchestration (one commit per solution)",
"arguments": "[-y|--yes] --queue <queue-id> [--worktree [<existing-path>]]",
"category": "issue",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/issue/execute.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[-y|--yes] [--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
"arguments": "[-y|--yes] [--in-memory] [\\\"task description\\\"|file-path]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "[-y|--yes] --session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-agent.md"
},
{
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "[-y|--yes] --session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
},
{
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-task-generate.md"
},
{
"name": "generate",
"command": "/workflow:ui-design:generate",
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
"arguments": "[--design-id <id>] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/generate.md"
},
{
"name": "unified-execute-with-file",
"command": "/workflow:unified-execute-with-file",
"description": "Universal execution engine for consuming any planning/brainstorm/analysis output with minimal progress tracking, multi-agent coordination, and incremental execution",
"arguments": "[-y|--yes] [-p|--plan <path>] [-m|--mode sequential|parallel] [\\\"execution context or task name\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/unified-execute-with-file.md"
}
],
"documentation": [
{
"name": "docs-full-cli",
"command": "/memory:docs-full-cli",
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[path] [--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-full-cli.md"
},
{
"name": "docs-related-cli",
"command": "/memory:docs-related-cli",
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
"arguments": "[--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-related-cli.md"
},
{
"name": "style-skill-memory",
"command": "/memory:style-skill-memory",
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
"arguments": "[package-name] [--regenerate]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/style-skill-memory.md"
}
],
"session-management": [
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "complete",
"command": "/workflow:session:complete",
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
"arguments": "[-y|--yes] [--detailed]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/complete.md"
},
{
"name": "resume",
"command": "/workflow:session:resume",
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/resume.md"
}
],
"testing": [
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles. Generates quality report with coverage analysis and quality gate recommendation. Orchestrates sub-commands for comprehensive validation.",
"arguments": "[optional: --session WFS-session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-verify.md"
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-fix-gen.md"
},
{
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-gen.md"
},
{
"name": "tdd-coverage-analysis",
"command": "/workflow:tools:tdd-coverage-analysis",
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
},
{
"name": "test-concept-enhanced",
"command": "/workflow:tools:test-concept-enhanced",
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
},
{
"name": "test-context-gather",
"command": "/workflow:tools:test-context-gather",
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-context-gather.md"
}
]
}

View File

@@ -0,0 +1,160 @@
{
"workflow:plan": {
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather",
"workflow:tools:conflict-resolution",
"workflow:tools:task-generate-agent"
],
"next_steps": [
"workflow:plan-verify",
"workflow:status",
"workflow:execute"
],
"alternatives": [
"workflow:tdd-plan"
],
"prerequisites": []
},
"workflow:tdd-plan": {
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather",
"workflow:tools:task-generate-tdd"
],
"next_steps": [
"workflow:tdd-verify",
"workflow:status",
"workflow:execute"
],
"alternatives": [
"workflow:plan"
],
"prerequisites": []
},
"workflow:execute": {
"prerequisites": [
"workflow:plan",
"workflow:tdd-plan"
],
"related": [
"workflow:status",
"workflow:resume"
],
"next_steps": [
"workflow:review",
"workflow:tdd-verify"
]
},
"workflow:plan-verify": {
"prerequisites": [
"workflow:plan"
],
"next_steps": [
"workflow:execute"
],
"related": [
"workflow:status"
]
},
"workflow:tdd-verify": {
"prerequisites": [
"workflow:execute"
],
"related": [
"workflow:tools:tdd-coverage-analysis"
]
},
"workflow:session:start": {
"next_steps": [
"workflow:plan",
"workflow:execute"
],
"related": [
"workflow:session:list",
"workflow:session:resume"
]
},
"workflow:session:resume": {
"alternatives": [
"workflow:resume"
],
"related": [
"workflow:session:list",
"workflow:status"
]
},
"workflow:lite-plan": {
"calls_internally": [
"workflow:lite-execute"
],
"next_steps": [
"workflow:lite-execute",
"workflow:status"
],
"alternatives": [
"workflow:plan"
],
"prerequisites": []
},
"workflow:lite-fix": {
"next_steps": [
"workflow:lite-execute",
"workflow:status"
],
"alternatives": [
"workflow:lite-plan"
],
"related": [
"workflow:test-cycle-execute"
]
},
"workflow:lite-execute": {
"prerequisites": [
"workflow:lite-plan",
"workflow:lite-fix"
],
"related": [
"workflow:execute",
"workflow:status"
]
},
"workflow:review-session-cycle": {
"prerequisites": [
"workflow:execute"
],
"next_steps": [
"workflow:review-fix"
],
"related": [
"workflow:review-module-cycle"
]
},
"workflow:review-fix": {
"prerequisites": [
"workflow:review-module-cycle",
"workflow:review-session-cycle"
],
"related": [
"workflow:test-cycle-execute"
]
},
"memory:docs": {
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather"
],
"next_steps": [
"workflow:execute"
]
},
"memory:skill-memory": {
"next_steps": [
"workflow:plan",
"cli:analyze"
],
"related": [
"memory:load-skill-memory"
]
}
}
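
Read as data, this file is a small directed graph over command names. The sketch below shows one way a help surface might query it; the path and function name are illustrative, not part of the shipped code.

```python
import json
from pathlib import Path

# Illustrative location; the real file ships with the ccw-help skill.
GRAPH_PATH = Path(".claude/skills/ccw-help/command-relationships.json")

def suggest_next(command: str) -> list[str]:
    """Return the documented next_steps for a command, or an empty list."""
    graph = json.loads(GRAPH_PATH.read_text(encoding="utf-8"))
    return graph.get(command, {}).get("next_steps", [])

# suggest_next("workflow:plan")
# -> ["workflow:plan-verify", "workflow:status", "workflow:execute"]
```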

View File

@@ -0,0 +1,90 @@
[
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution execute to lite-execute after user confirmation",
"arguments": "[-y|--yes] [-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[-y|--yes] [--hotfix] \\\"bug description or issue reference\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "[-y|--yes] \\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[-y|--yes] [--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "[-y|--yes] topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "plan-verify",
"command": "/workflow:plan-verify",
"description": "Perform READ-ONLY verification analysis between IMPL_PLAN.md, task JSONs, and brainstorming artifacts. Generates structured report with quality gate recommendation. Does NOT modify any files.",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan-verify.md"
}
]

View File

@@ -0,0 +1,34 @@
#!/usr/bin/env python3
"""
Simple update script for ccw-help skill.
Runs analyze_commands.py to regenerate command index.
"""
import sys
import subprocess
from pathlib import Path
BASE_DIR = Path("D:/Claude_dms3/.claude")
SKILL_DIR = BASE_DIR / "skills" / "ccw-help"
ANALYZE_SCRIPT = SKILL_DIR / "scripts" / "analyze_commands.py"


def run_update():
    """Run command analysis update."""
    try:
        result = subprocess.run(
            [sys.executable, str(ANALYZE_SCRIPT)],
            capture_output=True,
            text=True,
            timeout=30
        )
        print(result.stdout)
        if result.stderr:
            print(result.stderr, file=sys.stderr)
        return result.returncode == 0
    except Exception as e:
        print(f"Error running update: {e}", file=sys.stderr)
        return False


if __name__ == '__main__':
    success = run_update()
    sys.exit(0 if success else 1)

View File

@@ -1,303 +0,0 @@
# CCW Loop Skill
Stateless iterative development loop workflow supporting three phases (Develop, Debug, Validate), each with its own files recording progress.
## Overview
CCW Loop is an autonomous-mode Skill that helps developers complete development tasks systematically through a file-driven, stateless loop.
### Core Features
1. **Stateless loop**: every run reads state from files; nothing depends on in-process memory
2. **File-driven**: all progress is recorded in Markdown files, so it is auditable and reviewable
3. **Gemini-assisted**: CLI tools provide deep analysis at key decision points
4. **Resumable**: work can continue after an interruption at any time
5. **Dual mode**: supports both interactive and automatic loops
### Three Phases
- **Develop**: task breakdown → code implementation → progress recording
- **Debug**: hypothesis generation → evidence collection → root-cause analysis → fix verification
- **Validate**: test execution → coverage check → quality assessment
## Installation
Already included in `.claude/skills/ccw-loop/`; no additional installation required.
## Usage
### Basic Usage
```bash
# Start a new loop
/ccw-loop "Implement user authentication feature"
# Continue an existing loop
/ccw-loop --resume LOOP-auth-2026-01-22
# Automatic loop mode
/ccw-loop --auto "Fix login bug and add tests"
```
### Interactive Flow
```
1. Start: /ccw-loop "task description"
2. Initialize: automatically analyze the task and generate a subtask list
3. Show menu:
- 📝 Continue development (Develop)
- 🔍 Start debugging (Debug)
- ✅ Run validation (Validate)
- 📊 View details (Status)
- 🏁 Complete the loop (Complete)
- 🚪 Exit (Exit)
4. Execute the selected action
5. Repeat steps 3-4 until done (a sketch of one loop iteration follows)
```
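
A minimal sketch of one iteration of this stateless loop, assuming the session directory layout described below; `choose_action` and `run_action` are hypothetical stand-ins for the menu and the action dispatch:

```python
import json
from pathlib import Path

def choose_action(state: dict) -> str:
    # Hypothetical stand-in for the interactive menu / auto-mode policy.
    return "action-develop-with-file" if state.get("status") == "running" else "action-init"

def run_action(action: str, session_dir: Path) -> str:
    # Hypothetical stand-in for dispatch; real actions write the progress files.
    return "running"

def loop_once(session_dir: Path) -> str:
    # Stateless: every iteration re-reads state.json from disk, so the loop
    # can be interrupted and resumed at any point.
    state_file = session_dir / "state.json"
    state = json.loads(state_file.read_text(encoding="utf-8"))
    action = choose_action(state)
    status = run_action(action, session_dir)
    state["last_action"] = action
    state["status"] = status
    state_file.write_text(json.dumps(state, indent=2), encoding="utf-8")
    return status
```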
### Automatic Loop Flow
```
Develop (all tasks) → Debug (if needed) → Validate → Complete
```
## Directory Structure
```
.workflow/.loop/{session-id}/
├── meta.json            # Session metadata (immutable)
├── state.json           # Current state (updated every run)
├── summary.md           # Completion report (generated at the end)
├── develop/
│   ├── progress.md      # Development progress timeline
│   ├── tasks.json       # Task list
│   └── changes.log      # Code change log (NDJSON)
├── debug/
│   ├── understanding.md # Understanding evolution document
│   ├── hypotheses.json  # Hypothesis history
│   └── debug.log        # Debug log (NDJSON)
└── validate/
    ├── validation.md    # Validation report
    ├── test-results.json # Test results
    └── coverage.json    # Coverage data
```
## Action Reference
| Action | Description | Trigger |
|--------|-------------|---------|
| action-init | Initialize the session | First start |
| action-menu | Show the action menu | Every iteration in interactive mode |
| action-develop-with-file | Execute development tasks | Pending tasks exist |
| action-debug-with-file | Hypothesis-driven debugging | Debugging needed |
| action-validate-with-file | Run test validation | Validation needed |
| action-complete | Complete and generate report | All tasks completed |
See [specs/action-catalog.md](specs/action-catalog.md) for details
## CLI Integration
CCW Loop integrates CLI tools at key decision points:
### Task Breakdown (action-init)
```bash
ccw cli -p "PURPOSE: Break down the development task..." \
  --tool gemini \
  --mode analysis \
  --rule planning-breakdown-task-steps
```
### Code Implementation (action-develop)
```bash
ccw cli -p "PURPOSE: Implement the feature code..." \
  --tool gemini \
  --mode write \
  --rule development-implement-feature
```
### Hypothesis Generation (action-debug, exploration)
```bash
ccw cli -p "PURPOSE: Generate debugging hypotheses..." \
  --tool gemini \
  --mode analysis \
  --rule analysis-diagnose-bug-root-cause
```
### Evidence Analysis (action-debug, analysis)
```bash
ccw cli -p "PURPOSE: Analyze debug log evidence..." \
  --tool gemini \
  --mode analysis \
  --rule analysis-diagnose-bug-root-cause
```
### Quality Assessment (action-validate)
```bash
ccw cli -p "PURPOSE: Analyze test results and coverage..." \
  --tool gemini \
  --mode analysis \
  --rule analysis-review-code-quality
```
## State Management
### State Schema
See [phases/state-schema.md](phases/state-schema.md)
### State Transitions
```
pending → running → completed
                 ├→ user_exit
                 └→ failed
```
### State Recovery
If `state.json` is corrupted, it can be rebuilt from the other files (a minimal sketch follows the list):
- develop/tasks.json → develop.*
- debug/hypotheses.json → debug.*
- validate/test-results.json → validate.*
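
A minimal recovery sketch under those mappings; the key names follow the bullet list above and may differ from the real schema in phases/state-schema.md, and the `"running"` default is an assumption:

```python
import json
from pathlib import Path

def rebuild_state(session_dir: Path) -> dict:
    """Rebuild state.json from the per-phase files after corruption."""
    state = {"status": "running"}  # assumed default status
    mappings = {
        "develop": session_dir / "develop" / "tasks.json",
        "debug": session_dir / "debug" / "hypotheses.json",
        "validate": session_dir / "validate" / "test-results.json",
    }
    for key, path in mappings.items():
        if path.exists():
            state[key] = json.loads(path.read_text(encoding="utf-8"))
    (session_dir / "state.json").write_text(
        json.dumps(state, indent=2), encoding="utf-8"
    )
    return state
```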
## Examples
### Example 1: Feature Development
```bash
# 1. Start the loop
/ccw-loop "Add user profile page"
# 2. The system initializes and generates tasks:
# - task-001: Create profile component
# - task-002: Add API endpoints
# - task-003: Implement tests
# 3. Choose "Continue development"
# → executes task-001 (Gemini-assisted implementation)
# → updates progress.md
# 4. Repeat development until all tasks are complete
# 5. Choose "Run validation"
# → runs tests
# → checks coverage
# → generates validation.md
# 6. Choose "Complete the loop"
# → generates summary.md
# → asks whether to expand into an Issue
```
### Example 2: Bug Fix
```bash
# 1. Start the loop
/ccw-loop "Fix login timeout issue"
# 2. Choose "Start debugging"
# → enter bug description: "Login times out after 30s"
# → Gemini generates hypotheses (H1, H2, H3)
# → adds NDJSON logging
# → prompts to reproduce the bug
# 3. Reproduce the bug (operate in the application)
# 4. Choose "Start debugging" again
# → parses debug.log
# → Gemini analyzes the evidence
# → H2 confirmed as the root cause
# → generates the fix code
# → updates understanding.md
# 5. Choose "Run validation"
# → tests pass
# 6. Done
```
## Templates
- [progress-template.md](templates/progress-template.md): development progress document template
- [understanding-template.md](templates/understanding-template.md): debugging understanding document template
- [validation-template.md](templates/validation-template.md): validation report template
## Specifications
- [loop-requirements.md](specs/loop-requirements.md): loop requirements specification
- [action-catalog.md](specs/action-catalog.md): action catalog
## Integration
### Dashboard Integration
CCW Loop integrates with the Dashboard Loop Monitor:
- Dashboard creates a Loop → triggers this Skill
- state.json → real-time display in the Dashboard
- Task lists sync in both directions
- Control buttons map to actions
### Issue System Integration
On completion, the loop can be expanded into an Issue:
- Dimensions: test, enhance, refactor, doc
- Automatically invokes `/issue:new`
- Context is filled in automatically
## Error Handling
| Situation | Handling |
|-----------|----------|
| Session does not exist | Create a new session |
| state.json corrupted | Rebuild from files |
| CLI tool failure | Fall back to manual mode (see the sketch below) |
| Test failure | Loop back to develop/debug |
| >10 iterations | Warn the user and suggest splitting the task |
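
For the CLI-failure row, a minimal sketch of the fallback, reusing the same `ccw cli` flags as the integration examples above; the function name and the 120-second timeout are assumptions:

```python
import subprocess

def run_cli_or_manual(prompt: str) -> str | None:
    """Try the Gemini CLI path; return None to signal manual fallback."""
    try:
        result = subprocess.run(
            ["ccw", "cli", "-p", prompt,
             "--tool", "gemini", "--mode", "analysis"],
            capture_output=True, text=True, timeout=120,  # timeout is an assumption
        )
        if result.returncode == 0:
            return result.stdout
    except (OSError, subprocess.TimeoutExpired):
        pass
    return None  # caller degrades to prompting the user manually
```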
## Limitations
1. **Single session**: only one active session at a time
2. **Iteration limit**: at most 10 iterations recommended
3. **CLI dependency**: some features depend on Gemini CLI availability
4. **Test framework**: requires test scripts defined in package.json
## Troubleshooting
### Q: How do I check the current session status?
A: Choose "View details (Status)" from the menu
### Q: How do I resume an interrupted session?
A: Use the `--resume` flag:
```bash
/ccw-loop --resume LOOP-xxx-2026-01-22
```
### Q: What if the CLI tool fails?
A: The Skill automatically degrades to manual mode and prompts for manual input
### Q: How do I add a custom action?
A: See the "Action Extensions" section of [specs/action-catalog.md](specs/action-catalog.md)
## Contributing
To add new functionality:
1. Create an action file under `phases/actions/`
2. Update the orchestrator decision logic
3. Add it to action-catalog.md
4. Update action-menu.md
## License
MIT
---
**Version**: 1.0.0
**Last Updated**: 2026-01-22
**Author**: CCW Team

View File

@@ -0,0 +1,650 @@
---
name: lite-skill-generator
description: Lightweight skill generator with style learning - creates simple skills using flow-based execution and style imitation. Use for quick skill scaffolding, simple workflow creation, or style-aware skill generation.
allowed-tools: Read, Write, Bash, Glob, Grep, AskUserQuestion
---
# Lite Skill Generator
Lightweight meta-skill for rapid skill creation with intelligent style learning and flow-based execution.
## Core Concept
**Simplicity First**: Generate simple, focused skills quickly with minimal overhead. Learn from existing skills to maintain consistent style and structure.
**Progressive Disclosure**: Follow Anthropic's three-layer loading principle:
1. **Metadata** - name, description, triggers (always loaded)
2. **SKILL.md** - core instructions (loaded when triggered)
3. **Bundled resources** - scripts, references, assets (loaded on demand)
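
A minimal sketch of that three-layer loading order; `parse_frontmatter` here is a naive, hypothetical helper, and real hosts resolve and load skills differently:

```python
from pathlib import Path

def parse_frontmatter(text: str) -> dict:
    # Hypothetical, naive parse of the YAML block between the leading '---' fences;
    # values keep leading whitespace.
    _, fm, _body = text.split("---", 2)
    return dict(
        line.split(":", 1) for line in fm.strip().splitlines() if ":" in line
    )

def load_skill(skill_dir: Path, triggered: bool = False):
    text = (skill_dir / "SKILL.md").read_text(encoding="utf-8")
    metadata = parse_frontmatter(text)         # layer 1: always loaded
    if not triggered:
        return metadata
    body = text.split("---", 2)[2]             # layer 2: loaded when triggered
    resources = sorted(p for p in skill_dir.rglob("*") if p.name != "SKILL.md")
    return metadata, body, resources           # layer 3: paths only, read on demand
```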
## Execution Model
**3-Phase Flow**: Style Learning → Requirements Gathering → Generation
```
User Input → Phase 1: Style Analysis → Phase 2: Requirements → Phase 3: Generate → Skill Package
↓ ↓ ↓
Learn from examples Interactive prompts Write files + validate
```
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Lite Skill Generator │
│ │
│ Input: Skill name, purpose, reference skills │
│ ↓ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Phase 1-3: Lightweight Pipeline │ │
│ │ ┌────┐ ┌────┐ ┌────┐ │ │
│ │ │ P1 │→│ P2 │→│ P3 │ │ │
│ │ │Styl│ │Req │ │Gen │ │ │
│ │ └────┘ └────┘ └────┘ │ │
│ └─────────────────────────────────────────────────────────┘ │
│ ↓ │
│ Output: .claude/skills/{skill-name}/ (minimal package) │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## 3-Phase Workflow
### Phase 1: Style Analysis & Learning
Analyze reference skills to extract language patterns, structural conventions, and writing style.
```javascript
// Phase 1 Execution Flow
async function analyzeStyle(referencePaths) {
// Step 1: Load reference skills
const references = [];
for (const path of referencePaths) {
const content = Read(path);
references.push({
path: path,
content: content,
metadata: extractYAMLFrontmatter(content)
});
}
// Step 2: Extract style patterns
const styleProfile = {
// Structural patterns
structure: {
hasFrontmatter: references.every(r => r.metadata !== null),
sectionHeaders: extractCommonSections(references),
codeBlockUsage: detectCodeBlockPatterns(references),
flowDiagramUsage: detectFlowDiagrams(references)
},
// Language patterns
language: {
instructionStyle: detectInstructionStyle(references), // 'imperative' | 'declarative' | 'procedural'
pseudocodeUsage: detectPseudocodePatterns(references),
verbosity: calculateVerbosityLevel(references), // 'concise' | 'detailed' | 'verbose'
terminology: extractCommonTerms(references)
},
// Organization patterns
organization: {
phaseStructure: detectPhasePattern(references), // 'sequential' | 'autonomous' | 'flat'
exampleDensity: calculateExampleRatio(references),
templateUsage: detectTemplateReferences(references)
}
};
// Step 3: Generate style guide
return {
profile: styleProfile,
recommendations: generateStyleRecommendations(styleProfile),
examples: extractStyleExamples(references, styleProfile)
};
}
// Structural pattern detection
function extractCommonSections(references) {
const allSections = references.map(r =>
(r.content.match(/^##? (.+)$/gm) || []).map(s => s.replace(/^##? /, ''))
).flat();
return findMostCommon(allSections);
}
// Language style detection
function detectInstructionStyle(references) {
const imperativePattern = /^(Use|Execute|Run|Call|Create|Generate)\s/gim;
const declarativePattern = /^(The|This|Each|All)\s.*\s(is|are|will be)\s/gim;
const proceduralPattern = /^(Step \d+|Phase \d+|First|Then|Finally)\s/gim;
const scores = references.map(r => ({
imperative: (r.content.match(imperativePattern) || []).length,
declarative: (r.content.match(declarativePattern) || []).length,
procedural: (r.content.match(proceduralPattern) || []).length
}));
return getMaxStyle(scores);
}
// Pseudocode pattern detection
function detectPseudocodePatterns(references) {
const hasJavaScriptBlocks = references.some(r => r.content.includes('```javascript'));
const hasFunctionDefs = references.some(r => /function\s+\w+\(/m.test(r.content));
const hasFlowComments = references.some(r => /\/\/.*/m.test(r.content));
return {
usePseudocode: hasJavaScriptBlocks && hasFunctionDefs,
flowAnnotations: hasFlowComments,
style: hasFunctionDefs ? 'functional' : 'imperative'
};
}
```
**Output**:
```
Style Analysis Complete:
Structure: Flow-based with pseudocode
Language: Procedural, detailed
Organization: Sequential phases
Key Patterns: 3-5 phases, function definitions, ASCII diagrams
Recommendations:
✓ Use phase-based structure (3-4 phases)
✓ Include pseudocode for complex logic
✓ Add ASCII flow diagrams
✓ Maintain concise documentation style
```
---
### Phase 2: Requirements Gathering
Interactive discovery of skill requirements using learned style patterns.
```javascript
async function gatherRequirements(styleProfile) {
// Step 1: Basic information
const basicInfo = await AskUserQuestion({
questions: [
{
question: "What is the skill name? (kebab-case, e.g., 'pdf-generator')",
header: "Name",
options: [
{ label: "pdf-generator", description: "Example: PDF generation skill" },
{ label: "code-analyzer", description: "Example: Code analysis skill" },
{ label: "Custom", description: "Enter custom name" }
]
},
{
question: "What is the primary purpose?",
header: "Purpose",
options: [
{ label: "Generation", description: "Create/generate artifacts" },
{ label: "Analysis", description: "Analyze/inspect code or data" },
{ label: "Transformation", description: "Convert/transform content" },
{ label: "Orchestration", description: "Coordinate multiple operations" }
]
}
]
});
// Step 2: Execution complexity
const complexity = await AskUserQuestion({
questions: [{
question: "How many main steps does this skill need?",
header: "Steps",
options: [
{ label: "2-3 steps", description: "Simple workflow (recommended for lite-skill)" },
{ label: "4-5 steps", description: "Moderate workflow" },
{ label: "6+ steps", description: "Complex workflow (consider full skill-generator)" }
]
}]
});
// Step 3: Tool requirements
const tools = await AskUserQuestion({
questions: [{
question: "Which tools will this skill use? (select multiple)",
header: "Tools",
multiSelect: true,
options: [
{ label: "Read", description: "Read files" },
{ label: "Write", description: "Write files" },
{ label: "Bash", description: "Execute commands" },
{ label: "Task", description: "Launch agents" },
{ label: "AskUserQuestion", description: "Interactive prompts" }
]
}]
});
// Step 4: Output format
const output = await AskUserQuestion({
questions: [{
question: "What does this skill produce?",
header: "Output",
options: [
{ label: "Single file", description: "One main output file" },
{ label: "Multiple files", description: "Several related files" },
{ label: "Directory structure", description: "Complete directory tree" },
{ label: "Modified files", description: "Edits to existing files" }
]
}]
});
// Step 5: Build configuration
return {
name: basicInfo.Name,
purpose: basicInfo.Purpose,
description: generateDescription(basicInfo.Name, basicInfo.Purpose),
steps: parseStepCount(complexity.Steps),
allowedTools: tools.Tools,
outputType: output.Output,
styleProfile: styleProfile,
triggerPhrases: generateTriggerPhrases(basicInfo.Name, basicInfo.Purpose)
};
}
// Generate skill description from name and purpose
function generateDescription(name, purpose) {
const templates = {
Generation: `Generate ${humanize(name)} with intelligent scaffolding`,
Analysis: `Analyze ${humanize(name)} with detailed reporting`,
Transformation: `Transform ${humanize(name)} with format conversion`,
Orchestration: `Orchestrate ${humanize(name)} workflow with multi-step coordination`
};
return templates[purpose] || `${humanize(name)} skill for ${purpose.toLowerCase()} tasks`;
}
// Generate trigger phrases
function generateTriggerPhrases(name, purpose) {
const base = [name, name.replace(/-/g, ' ')];
const purposeVariants = {
Generation: ['generate', 'create', 'build'],
Analysis: ['analyze', 'inspect', 'review'],
Transformation: ['transform', 'convert', 'format'],
Orchestration: ['orchestrate', 'coordinate', 'manage']
};
return [...base, ...purposeVariants[purpose].map(v => `${v} ${humanize(name)}`)];
}
```
**Display to User**:
```
Requirements Gathered:
Name: pdf-generator
Purpose: Generation
Steps: 3 (Setup → Generate → Validate)
Tools: Read, Write, Bash
Output: Single file (PDF document)
Triggers: "pdf-generator", "generate pdf", "create pdf"
Style Application:
Using flow-based structure (from style analysis)
Including pseudocode blocks
Adding ASCII diagrams for clarity
```
---
### Phase 3: Generate Skill Package
Create minimal skill structure with style-aware content generation.
```javascript
async function generateSkillPackage(requirements) {
const skillDir = `.claude/skills/${requirements.name}`;
const workDir = `.workflow/.scratchpad/lite-skill-gen-${Date.now()}`;
// Step 1: Create directory structure
Bash(`mkdir -p "${skillDir}" "${workDir}"`);
// Step 2: Generate SKILL.md (using learned style)
const skillContent = generateSkillMd(requirements);
Write(`${skillDir}/SKILL.md`, skillContent);
// Step 3: Conditionally add bundled resources
if (requirements.outputType === 'Directory structure') {
Bash(`mkdir -p "${skillDir}/templates"`);
const templateContent = generateTemplate(requirements);
Write(`${skillDir}/templates/base-template.md`, templateContent);
}
if (requirements.allowedTools.includes('Bash')) {
Bash(`mkdir -p "${skillDir}/scripts"`);
const scriptContent = generateScript(requirements);
Write(`${skillDir}/scripts/helper.sh`, scriptContent);
}
// Step 4: Generate README
const readmeContent = generateReadme(requirements);
Write(`${skillDir}/README.md`, readmeContent);
// Step 5: Validate structure
const validation = validateSkillStructure(skillDir, requirements);
Write(`${workDir}/validation-report.json`, JSON.stringify(validation, null, 2));
// Step 6: Return summary
return {
skillPath: skillDir,
filesCreated: [
`${skillDir}/SKILL.md`,
...(validation.hasTemplates ? [`${skillDir}/templates/`] : []),
...(validation.hasScripts ? [`${skillDir}/scripts/`] : []),
`${skillDir}/README.md`
],
validation: validation,
nextSteps: generateNextSteps(requirements)
};
}
// Generate SKILL.md with style awareness
function generateSkillMd(req) {
const { styleProfile } = req;
// YAML frontmatter
const frontmatter = `---
name: ${req.name}
description: ${req.description}
allowed-tools: ${req.allowedTools.join(', ')}
---
`;
// Main content structure (adapts to style)
let content = frontmatter;
content += `\n# ${humanize(req.name)}\n\n`;
content += `${req.description}\n\n`;
// Add architecture diagram if style uses them
if (styleProfile.structure.flowDiagramUsage) {
content += generateArchitectureDiagram(req);
}
// Add execution flow
content += `## Execution Flow\n\n`;
if (styleProfile.language.pseudocodeUsage.usePseudocode) {
content += generatePseudocodeFlow(req);
} else {
content += generateProceduralFlow(req);
}
// Add phase sections
for (let i = 0; i < req.steps; i++) {
content += generatePhaseSection(i + 1, req, styleProfile);
}
// Add examples if style is verbose
if (styleProfile.language.verbosity !== 'concise') {
content += generateExamplesSection(req);
}
return content;
}
// Generate architecture diagram
function generateArchitectureDiagram(req) {
return `## Architecture
\`\`\`
┌─────────────────────────────────────────────────┐
${humanize(req.name)}
│ │
│ Input → Phase 1 → Phase 2 → Phase 3 → Output │
${getPhaseName(1, req)}
${getPhaseName(2, req)}
${getPhaseName(3, req)}
└─────────────────────────────────────────────────┘
\`\`\`
`;
}
// Generate pseudocode flow
function generatePseudocodeFlow(req) {
return `\`\`\`javascript
async function ${toCamelCase(req.name)}(input) {
// Phase 1: ${getPhaseName(1, req)}
const prepared = await phase1Prepare(input);
// Phase 2: ${getPhaseName(2, req)}
const processed = await phase2Process(prepared);
// Phase 3: ${getPhaseName(3, req)}
const result = await phase3Finalize(processed);
return result;
}
\`\`\`
`;
}
// Generate phase section
function generatePhaseSection(phaseNum, req, styleProfile) {
const phaseName = getPhaseName(phaseNum, req);
let section = `### Phase ${phaseNum}: ${phaseName}\n\n`;
if (styleProfile.language.pseudocodeUsage.usePseudocode) {
section += `\`\`\`javascript\n`;
section += `async function phase${phaseNum}${toCamelCase(phaseName)}(input) {\n`;
section += ` // TODO: Implement ${phaseName.toLowerCase()} logic\n`;
section += ` return output;\n`;
section += `}\n\`\`\`\n\n`;
} else {
section += `**Steps**:\n`;
section += `1. Load input data\n`;
section += `2. Process according to ${phaseName.toLowerCase()} logic\n`;
section += `3. Return result to next phase\n\n`;
}
return section;
}
// Validation
function validateSkillStructure(skillDir, req) {
const requiredFiles = [`${skillDir}/SKILL.md`, `${skillDir}/README.md`];
const exists = requiredFiles.map(f => Bash(`test -f "${f}"`).exitCode === 0);
return {
valid: exists.every(e => e),
hasTemplates: Bash(`test -d "${skillDir}/templates"`).exitCode === 0,
hasScripts: Bash(`test -d "${skillDir}/scripts"`).exitCode === 0,
filesPresent: requiredFiles.filter((f, i) => exists[i]),
styleCompliance: checkStyleCompliance(skillDir, req.styleProfile)
};
}
```
**Output**:
```
Skill Package Generated:
Location: .claude/skills/pdf-generator/
Structure:
✓ SKILL.md (entry point)
✓ README.md (usage guide)
✓ templates/ (directory templates)
✓ scripts/ (helper scripts)
Validation:
✓ All required files present
✓ Style compliance: 95%
✓ Frontmatter valid
✓ Tool references correct
Next Steps:
1. Review SKILL.md and customize phases
2. Test skill: /skill:pdf-generator "test input"
3. Iterate based on usage
```
---
## Complete Execution Flow
```
User: "Create a PDF generator skill"
Phase 1: Style Analysis
|-- Read reference skills (ccw.md, ccw-coordinator.md)
|-- Extract style patterns (flow diagrams, pseudocode, structure)
|-- Generate style profile
+-- Output: Style recommendations
Phase 2: Requirements
|-- Ask: Name, purpose, steps
|-- Ask: Tools, output format
|-- Generate: Description, triggers
+-- Output: Requirements config
Phase 3: Generation
|-- Create: Directory structure
|-- Write: SKILL.md (style-aware)
|-- Write: README.md
|-- Optionally: templates/, scripts/
|-- Validate: Structure and style
+-- Output: Skill package
Return: Skill location + next steps
```
## Phase Execution Protocol
```javascript
// Main entry point
async function liteSkillGenerator(input) {
// Phase 1: Style Learning
const references = [
'.claude/commands/ccw.md',
'.claude/commands/ccw-coordinator.md',
...discoverReferenceSkills(input)
];
const styleProfile = await analyzeStyle(references);
console.log(`Style Analysis: ${styleProfile.organization.phaseStructure}, ${styleProfile.language.verbosity}`);
// Phase 2: Requirements
const requirements = await gatherRequirements(styleProfile);
console.log(`Requirements: ${requirements.name} (${requirements.steps} phases)`);
// Phase 3: Generation
const result = await generateSkillPackage(requirements);
console.log(`✅ Generated: ${result.skillPath}`);
return result;
}
```
## Output Structure
**Minimal Package** (default):
```
.claude/skills/{skill-name}/
├── SKILL.md # Entry point with frontmatter
└── README.md # Usage guide
```
**With Templates** (if needed):
```
.claude/skills/{skill-name}/
├── SKILL.md
├── README.md
└── templates/
└── base-template.md
```
**With Scripts** (if using Bash):
```
.claude/skills/{skill-name}/
├── SKILL.md
├── README.md
└── scripts/
└── helper.sh
```
## Key Design Principles
1. **Style Learning** - Analyze reference skills to maintain consistency
2. **Minimal Overhead** - Generate only essential files (SKILL.md + README)
3. **Progressive Disclosure** - Follow Anthropic's three-layer loading
4. **Flow-Based** - Use pseudocode and flow diagrams (when style appropriate)
5. **Interactive** - Guided requirements gathering via AskUserQuestion
6. **Fast Generation** - 3 phases instead of 6, focused on simplicity
7. **Style Awareness** - Adapt output based on detected patterns
## Style Pattern Detection
**Structural Patterns**:
- YAML frontmatter usage (100% in references)
- Section headers (H2 for major, H3 for sub-sections)
- Code blocks (JavaScript pseudocode, Bash examples)
- ASCII diagrams (architecture, flow charts)
**Language Patterns**:
- Instruction style: Procedural with function definitions
- Pseudocode: JavaScript-based with flow annotations
- Verbosity: Detailed but focused
- Terminology: Phase, workflow, pipeline, orchestrator
**Organization Patterns**:
- Phase structure: 3-5 sequential phases
- Example density: Moderate (1-2 per major section)
- Template usage: Minimal (only when necessary)
## Usage Examples
**Basic Generation**:
```
User: "Create a markdown formatter skill"
Lite-Skill-Generator:
→ Analyzes ccw.md style
→ Asks: Name? "markdown-formatter"
→ Asks: Purpose? "Transformation"
→ Asks: Steps? "3 steps"
→ Generates: .claude/skills/markdown-formatter/
```
**With Custom References**:
```
User: "Create a skill like software-manual but simpler"
Lite-Skill-Generator:
→ Analyzes software-manual skill
→ Learns: Multi-phase, agent-based, template-heavy
→ Simplifies: 3 phases, direct execution, minimal templates
→ Generates: Simplified version
```
## Comparison: lite-skill-generator vs skill-generator
| Aspect | lite-skill-generator | skill-generator |
|--------|---------------------|-----------------|
| **Phases** | 3 (Style → Req → Gen) | 6 (Spec → Req → Dir → Gen → Specs → Val) |
| **Style Learning** | Yes (analyze references) | No (fixed templates) |
| **Complexity** | Simple skills only | Full-featured skills |
| **Output** | Minimal (SKILL.md + README) | Complete (phases/, specs/, templates/) |
| **Generation Time** | Fast (~2 min) | Thorough (~10 min) |
| **Use Case** | Quick scaffolding | Production-ready skills |
## Workflow Integration
**Standalone**:
```bash
/skill:lite-skill-generator "Create a log analyzer skill"
```
**With References**:
```bash
/skill:lite-skill-generator "Create a skill based on ccw-coordinator.md style"
```
**Batch Generation** (for multiple simple skills):
```bash
/skill:lite-skill-generator "Create 3 skills: json-validator, yaml-parser, toml-converter"
```
---
**Next Steps After Generation**:
1. Review `.claude/skills/{name}/SKILL.md`
2. Customize phase logic for your use case
3. Add examples to README.md
4. Test skill with sample input
5. Iterate based on real usage

View File

@@ -0,0 +1,68 @@
---
name: {{SKILL_NAME}}
description: {{SKILL_DESCRIPTION}}
allowed-tools: {{ALLOWED_TOOLS}}
---
# {{SKILL_TITLE}}
{{SKILL_DESCRIPTION}}
## Architecture
```
┌─────────────────────────────────────────────────┐
│ {{SKILL_TITLE}} │
│ │
│ Input → {{PHASE_1}} → {{PHASE_2}} → Output │
└─────────────────────────────────────────────────┘
```
## Execution Flow
```javascript
async function {{SKILL_FUNCTION}}(input) {
// Phase 1: {{PHASE_1}}
const prepared = await phase1(input);
// Phase 2: {{PHASE_2}}
const result = await phase2(prepared);
return result;
}
```
### Phase 1: {{PHASE_1}}
```javascript
async function phase1(input) {
// TODO: Implement {{PHASE_1_LOWER}} logic
return output;
}
```
### Phase 2: {{PHASE_2}}
```javascript
async function phase2(input) {
// TODO: Implement {{PHASE_2_LOWER}} logic
return output;
}
```
## Usage
```bash
/skill:{{SKILL_NAME}} "input description"
```
## Examples
**Basic Usage**:
```
User: "{{EXAMPLE_INPUT}}"
{{SKILL_NAME}}:
→ Phase 1: {{PHASE_1_ACTION}}
→ Phase 2: {{PHASE_2_ACTION}}
→ Output: {{EXAMPLE_OUTPUT}}
```

View File

@@ -0,0 +1,64 @@
# Style Guide Template
Generated by lite-skill-generator style analysis phase.
## Detected Patterns
### Structural Patterns
| Pattern | Detected | Recommendation |
|---------|----------|----------------|
| YAML Frontmatter | {{HAS_FRONTMATTER}} | {{FRONTMATTER_REC}} |
| ASCII Diagrams | {{HAS_DIAGRAMS}} | {{DIAGRAMS_REC}} |
| Code Blocks | {{HAS_CODE_BLOCKS}} | {{CODE_BLOCKS_REC}} |
| Phase Structure | {{PHASE_STRUCTURE}} | {{PHASE_REC}} |
### Language Patterns
| Pattern | Value | Notes |
|---------|-------|-------|
| Instruction Style | {{INSTRUCTION_STYLE}} | imperative/declarative/procedural |
| Pseudocode Usage | {{PSEUDOCODE_USAGE}} | functional/imperative/none |
| Verbosity Level | {{VERBOSITY}} | concise/detailed/verbose |
| Common Terms | {{TERMINOLOGY}} | domain-specific vocabulary |
### Organization Patterns
| Pattern | Value |
|---------|-------|
| Phase Count | {{PHASE_COUNT}} |
| Example Density | {{EXAMPLE_DENSITY}} |
| Template Usage | {{TEMPLATE_USAGE}} |
## Style Compliance Checklist
- [ ] YAML frontmatter with name, description, allowed-tools
- [ ] Architecture diagram (if pattern detected)
- [ ] Execution flow section with pseudocode
- [ ] Phase sections (sequential numbered)
- [ ] Usage examples section
- [ ] README.md for external documentation
## Reference Skills Analyzed
{{#REFERENCES}}
- `{{REF_PATH}}`: {{REF_NOTES}}
{{/REFERENCES}}
## Generated Configuration
```json
{
"style": {
"structure": "{{STRUCTURE_TYPE}}",
"language": "{{LANGUAGE_TYPE}}",
"organization": "{{ORG_TYPE}}"
},
"recommendations": {
"usePseudocode": {{USE_PSEUDOCODE}},
"includeDiagrams": {{INCLUDE_DIAGRAMS}},
"verbosityLevel": "{{VERBOSITY}}",
"phaseCount": {{PHASE_COUNT}}
}
}
```

View File

@@ -1,6 +1,6 @@
---
name: skill-generator
description: Meta-skill for creating new Claude Code skills with configurable execution modes. Supports sequential (fixed order) and autonomous (stateless) phase patterns. Use for skill scaffolding, skill creation, or building new workflows. Triggers on "create skill", "new skill", "skill generator".
allowed-tools: Task, AskUserQuestion, Read, Bash, Glob, Grep, Write
---
@@ -12,215 +12,454 @@ Meta-skill for creating new Claude Code skills with configurable execution modes
```
┌─────────────────────────────────────────────────────────────────┐
│                        Skill Generator                          │
│                                                                 │
│  Input: User Request (skill name, purpose, mode)                │
│    ↓                                                            │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │  Phase 0-5: Sequential Pipeline                         │    │
│  │  ┌────┐ ┌────┐ ┌────┐ ┌────┐ ┌────┐ ┌────┐             │    │
│  │  │ P0 │→│ P1 │→│ P2 │→│ P3 │→│ P4 │→│ P5 │             │    │
│  │  │Spec│ │Req │ │Dir │ │Gen │ │Spec│ │Val │             │    │
│  │  └────┘ └────┘ └────┘ └─┬──┘ └────┘ └────┘             │    │
│  │                    ┌────┴────┐                          │    │
│  │                    ↓         ↓                          │    │
│  │              Sequential   Autonomous                    │    │
│  │              (phases/)    (actions/)                    │    │
│  └─────────────────────────────────────────────────────────┘    │
│    ↓                                                            │
│  Output: .claude/skills/{skill-name}/ (complete package)        │
└─────────────────────────────────────────────────────────────────┘
```
## Execution Modes
### Mode 1: Sequential (Fixed Order)
Traditional linear execution model, phases execute in numeric prefix order.
```
Phase 01 -> Phase 02 -> Phase 03 -> ... -> Phase N
```
**Use Cases**:
- Pipeline tasks (collect -> analyze -> generate)
- Strong dependencies between phases
- Fixed output structure
**Examples**: `software-manual`, `copyright-docs`
### Mode 2: Autonomous (Stateless Auto-Select)
Intelligent routing model, dynamically selects execution path based on context.
```
---------------------------------------------------
              Orchestrator Agent
  (Read state -> Select Phase -> Execute -> Update)
---------------------------------------------------
                       |
           ----------+----------+----------
           |          |          |
        Phase A    Phase B    Phase C
     (standalone) (standalone) (standalone)
```
**Use Cases**:
- Interactive tasks (chat, Q&A)
- No strong dependencies between phases
- Dynamic user intent response required
**Examples**: `issue-manage`, `workflow-debug`
## Key Design Principles
1. **Mode Awareness**: Automatically recommend execution mode based on task characteristics
2. **Skeleton Generation**: Generate complete directory structure and file skeletons
3. **Standards Compliance**: Strictly follow `_shared/SKILL-DESIGN-SPEC.md`
4. **Extensibility**: Generated Skills are easy to extend and modify
---
## Required Prerequisites
IMPORTANT: Before any generation operation, read the following specification documents. Generating without understanding these standards will result in non-conforming output.
### Core Specifications (Mandatory Read)
| Document | Purpose | Priority |
|----------|---------|----------|
| [../_shared/SKILL-DESIGN-SPEC.md](../_shared/SKILL-DESIGN-SPEC.md) | Universal design spec - defines structure, naming, quality standards for all Skills | **P0 - Critical** |
| [specs/reference-docs-spec.md](specs/reference-docs-spec.md) | Reference document generation spec - ensures generated Skills have proper phase-based Reference Documents with usage timing guidance | **P0 - Critical** |
### Template Files (Read Before Generation)
| Document | Purpose |
|----------|---------|
| [templates/skill-md.md](templates/skill-md.md) | SKILL.md entry file template |
| [templates/sequential-phase.md](templates/sequential-phase.md) | Sequential Phase template |
| [templates/autonomous-orchestrator.md](templates/autonomous-orchestrator.md) | Autonomous Orchestrator template |
| [templates/autonomous-action.md](templates/autonomous-action.md) | Autonomous Action template |
| [templates/code-analysis-action.md](templates/code-analysis-action.md) | Code Analysis Action template |
| [templates/llm-action.md](templates/llm-action.md) | LLM Action template |
| [templates/script-template.md](templates/script-template.md) | Unified Script Template (Bash + Python) |
### Specification Documents (Read as Needed)
| Document | Purpose |
|----------|---------|
| [specs/execution-modes.md](specs/execution-modes.md) | Execution Modes Specification |
| [specs/skill-requirements.md](specs/skill-requirements.md) | Skill Requirements Specification |
| [specs/cli-integration.md](specs/cli-integration.md) | CLI Integration Specification |
| [specs/scripting-integration.md](specs/scripting-integration.md) | Script Integration Specification |
### Phase Execution Guides (Reference During Execution)
| Document | Purpose |
|----------|---------|
| [phases/01-requirements-discovery.md](phases/01-requirements-discovery.md) | Collect Skill Requirements |
| [phases/02-structure-generation.md](phases/02-structure-generation.md) | Generate Directory Structure |
| [phases/03-phase-generation.md](phases/03-phase-generation.md) | Generate Phase Files |
| [phases/04-specs-templates.md](phases/04-specs-templates.md) | Generate Specs and Templates |
| [phases/05-validation.md](phases/05-validation.md) | Validation and Documentation |
---
## Execution Flow
```
Input Parsing:
└─ Convert user request to structured format (skill-name/purpose/mode)
Phase 0: Specification Study (MANDATORY - Must complete before proceeding)
- Read specification documents
- Load: ../_shared/SKILL-DESIGN-SPEC.md
- Load: All templates/*.md files
- Understand: Structure rules, naming conventions, quality standards
- Output: Internalized requirements (in-memory, no file output)
- Validation: MUST complete before Phase 1
Phase 1: Requirements Discovery
- Gather skill requirements via user interaction
- Tool: AskUserQuestion
- Collect: Skill name, purpose, execution mode
- Collect: Phase/Action definition
- Collect: Tool dependencies, output format
- Process: Generate configuration object
- Output: skill-config.json
- Contains: skill_name, execution_mode, phases/actions, allowed_tools
Phase 2: Structure Generation
- Create directory structure and entry file
- Input: skill-config.json (from Phase 1)
- Tool: Bash
- Execute: mkdir -p .claude/skills/{skill-name}/{phases,specs,templates,scripts}
- Tool: Write
- Generate: SKILL.md (entry point with architecture diagram)
- Output: Complete directory structure
Phase 3: Phase/Action Generation
- Decision (execution_mode check):
- IF execution_mode === "sequential": Generate Sequential Phases
- Read template: templates/sequential-phase.md
- Loop: For each phase in config.sequential_config.phases
- Generate: phases/{phase-id}.md
- Link: Previous phase output -> Current phase input
- Write: phases/_orchestrator.md
- Write: workflow.json
- Output: phases/01-{name}.md, phases/02-{name}.md, ...
- ELSE IF execution_mode === "autonomous": Generate Orchestrator + Actions
- Read template: templates/autonomous-orchestrator.md
- Write: phases/state-schema.md
- Write: phases/orchestrator.md
- Write: specs/action-catalog.md
- Loop: For each action in config.autonomous_config.actions
- Read template: templates/autonomous-action.md
- Generate: phases/actions/{action-id}.md
- Output: phases/orchestrator.md, phases/actions/*.md
Phase 4: Specs & Templates
- Generate domain specifications and templates
- Input: skill-config.json (domain context)
- Reference: [specs/reference-docs-spec.md](specs/reference-docs-spec.md) for document organization
- Tool: Write
- Generate: specs/{domain}-requirements.md
- Generate: specs/quality-standards.md
- Generate: templates/agent-base.md (if needed)
- Output: Domain-specific documentation
Phase 5: Validation & Documentation
- Verify completeness and generate usage guide
- Input: All generated files from previous phases
- Tool: Glob + Read
- Check: Required files exist and contain proper structure
- Tool: Write
- Generate: README.md (usage instructions)
- Generate: validation-report.json (completeness check)
- Output: Final documentation
```
## Directory Setup
**Execution Protocol**:
```javascript
// Phase 0: Read specifications (in-memory)
Read('.claude/skills/_shared/SKILL-DESIGN-SPEC.md');
Read('.claude/skills/skill-generator/templates/*.md'); // All templates
// Phase 1: Gather requirements
const answers = AskUserQuestion({
  questions: [
    { question: "Skill name?", header: "Name", options: [...] },
    { question: "Execution mode?", header: "Mode", options: ["Sequential", "Autonomous"] }
  ]
});
const config = generateConfig(answers);
const workDir = `.workflow/.scratchpad/skill-gen-${timestamp}`;
Write(`${workDir}/skill-config.json`, JSON.stringify(config));
// Phase 2: Create structure
const skillDir = `.claude/skills/${config.skill_name}`;
Bash(`mkdir -p "${skillDir}/phases" "${skillDir}/specs" "${skillDir}/templates"`);
Write(`${skillDir}/SKILL.md`, generateSkillEntry(config));
// Phase 3: Generate phases (mode-dependent)
if (config.execution_mode === 'sequential') {
Write(`${skillDir}/phases/_orchestrator.md`, generateOrchestrator(config));
Write(`${skillDir}/workflow.json`, generateWorkflowDef(config));
config.sequential_config.phases.forEach(phase => {
Write(`${skillDir}/phases/${phase.id}.md`, generatePhase(phase, config));
});
} else {
Write(`${skillDir}/phases/orchestrator.md`, generateAutonomousOrchestrator(config));
Write(`${skillDir}/phases/state-schema.md`, generateStateSchema(config));
config.autonomous_config.actions.forEach(action => {
Write(`${skillDir}/phases/actions/${action.id}.md`, generateAction(action, config));
});
}
// Phase 4: Generate specs
Write(`${skillDir}/specs/${config.skill_name}-requirements.md`, generateRequirements(config));
Write(`${skillDir}/specs/quality-standards.md`, generateQualityStandards(config));
// Phase 5: Validate & Document
const validation = validateStructure(skillDir);
Write(`${skillDir}/validation-report.json`, JSON.stringify(validation));
Write(`${skillDir}/README.md`, generateReadme(config, validation));
```
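`validateStructure` is referenced above but not defined in this file; a plausible sketch only (note that README.md is written after validation in the protocol, so it is not checked here):
```javascript
// Hypothetical completeness check for the generated package
function validateStructure(skillDir) {
  const required = ['SKILL.md', 'phases', 'specs'];
  const missing = required.filter(name =>
    Bash(`test -e "${skillDir}/${name}"`).exitCode !== 0
  );
  return { valid: missing.length === 0, missing };
}
```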
---
## Reference Documents by Phase
IMPORTANT: This section demonstrates how skill-generator organizes its own reference documentation. This is the pattern that all generated Skills should emulate. See [specs/reference-docs-spec.md](specs/reference-docs-spec.md) for details.
### Phase 0: Specification Study (Mandatory Prerequisites)
Specification documents that must be read before any generation operation
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [../_shared/SKILL-DESIGN-SPEC.md](../_shared/SKILL-DESIGN-SPEC.md) | Universal Skill design specification | Understand Skill structure and naming conventions - **REQUIRED** |
| [specs/reference-docs-spec.md](specs/reference-docs-spec.md) | Reference document generation specification | Ensure Reference Documents have proper phase-based organization - **REQUIRED** |
### Phase 1: Requirements Discovery
Collect Skill requirements and configuration
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/01-requirements-discovery.md](phases/01-requirements-discovery.md) | Phase 1 execution guide | Understand how to collect user requirements and generate configuration |
| [specs/skill-requirements.md](specs/skill-requirements.md) | Skill requirements specification | Understand what information a Skill should contain |
### Phase 2: Structure Generation
Generate directory structure and entry file
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/02-structure-generation.md](phases/02-structure-generation.md) | Phase 2 execution guide | Understand how to generate directory structure |
| [templates/skill-md.md](templates/skill-md.md) | SKILL.md template | Learn how to generate the entry file |
### Phase 3: Phase/Action Generation
Generate specific phase or action files based on execution mode
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/03-phase-generation.md](phases/03-phase-generation.md) | Phase 3 execution guide | Understand Sequential vs Autonomous generation logic |
| [templates/sequential-phase.md](templates/sequential-phase.md) | Sequential Phase template | Generate phase files for Sequential mode |
| [templates/autonomous-orchestrator.md](templates/autonomous-orchestrator.md) | Orchestrator template | Generate orchestrator for Autonomous mode |
| [templates/autonomous-action.md](templates/autonomous-action.md) | Action template | Generate action files for Autonomous mode |
### Phase 4: Specs & Templates
Generate domain-specific specifications and templates
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/04-specs-templates.md](phases/04-specs-templates.md) | Phase 4 execution guide | Understand how to generate domain-specific documentation |
| [specs/reference-docs-spec.md](specs/reference-docs-spec.md) | Reference document specification | IMPORTANT: Follow this spec when generating Specs |
### Phase 5: Validation & Documentation
Verify results and generate final documentation
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/05-validation.md](phases/05-validation.md) | Phase 5 execution guide | Understand how to verify generated Skill completeness |
### Debugging & Troubleshooting
Reference documents when encountering issues
| Issue | Solution Document |
|-------|------------------|
| Generated Skill missing Reference Documents | [specs/reference-docs-spec.md](specs/reference-docs-spec.md) - verify phase-based organization is followed |
| Reference document organization unclear | [specs/reference-docs-spec.md](specs/reference-docs-spec.md) - Core Principles section |
| Generated documentation does not meet quality standards | [../_shared/SKILL-DESIGN-SPEC.md](../_shared/SKILL-DESIGN-SPEC.md) |
### Reference & Background
Documents for deep learning and design decisions
| Document | Purpose | Notes |
|----------|---------|-------|
| [specs/execution-modes.md](specs/execution-modes.md) | Detailed execution modes specification | Comparison and use cases for Sequential vs Autonomous |
| [specs/cli-integration.md](specs/cli-integration.md) | CLI integration specification | How generated Skills integrate with CLI |
| [specs/scripting-integration.md](specs/scripting-integration.md) | Script integration specification | How to use scripts in Phases |
| [templates/script-template.md](templates/script-template.md) | Script template | Unified Bash + Python template |
---
## Output Structure
### Sequential Mode
```
.claude/skills/{skill-name}/
├── SKILL.md                     # Entry file
├── phases/
│   ├── _orchestrator.md         # Declarative orchestrator
│   ├── workflow.json            # Workflow definition
│   ├── 01-{step-one}.md         # Phase 1
│   ├── 02-{step-two}.md         # Phase 2
│   └── 03-{step-three}.md       # Phase 3
├── specs/
│   ├── {skill-name}-requirements.md
│   └── quality-standards.md
├── templates/
│   └── agent-base.md
├── scripts/
└── README.md
```
### Autonomous Mode
```
.claude/skills/{skill-name}/
├── SKILL.md                     # Entry file
├── phases/
│   ├── orchestrator.md          # Orchestrator (state-driven)
│   ├── state-schema.md          # State schema definition
│   └── actions/
│       ├── action-init.md
│       ├── action-create.md
│       └── action-list.md
├── specs/
│   ├── {skill-name}-requirements.md
│   ├── action-catalog.md
│   └── quality-standards.md
├── templates/
│   ├── orchestrator-base.md
│   └── action-base.md
├── scripts/
└── README.md
```

View File

@@ -1,125 +1,125 @@
# Phase 1: Requirements Discovery
Collect basic skill information, configuration, and execution mode based on user input.
## Objective
- Collect skill basic information (name, description, trigger words)
- Determine execution mode (Sequential/Autonomous/Hybrid)
- Define phases or actions
- Generate initial configuration file
## Execution Steps
### Step 1: Basic Information Collection
```javascript
const basicInfo = await AskUserQuestion({
questions: [
{
question: "新 Skill 的名称是什么?(英文,小写-连字符格式,如 'api-docs'",
header: "Skill 名称",
question: "What is the name of the new Skill? (English, lowercase with hyphens, e.g., 'api-docs')",
header: "Skill Name",
multiSelect: false,
options: [
{ label: "自动生成", description: "根据后续描述自动生成名称" },
{ label: "手动输入", description: "现在输入自定义名称" }
{ label: "Auto-generate", description: "Generate name automatically based on description" },
{ label: "Manual Input", description: "Enter custom name now" }
]
},
{
question: "Skill 的主要用途是什么?",
header: "用途类型",
question: "What is the primary purpose of the Skill?",
header: "Purpose Type",
multiSelect: false,
options: [
{ label: "文档生成", description: "生成 Markdown/HTML 文档(如手册、报告)" },
{ label: "代码分析", description: "分析代码结构、质量、安全性" },
{ label: "交互管理", description: "管理 Issue、任务、工作流CRUD 操作)" },
{ label: "数据处理", description: "ETL、格式转换、报告生成" }
{ label: "Document Generation", description: "Generate Markdown/HTML documents (manuals, reports)" },
{ label: "Code Analysis", description: "Analyze code structure, quality, security" },
{ label: "Interactive Management", description: "Manage Issues, tasks, workflows (CRUD operations)" },
{ label: "Data Processing", description: "ETL, format conversion, report generation" }
]
}
]
});
// 如果选择手动输入,进一步询问
if (basicInfo["Skill 名称"] === "手动输入") {
// 用户会在 "Other" 中输入
// If manual input is selected, prompt further
if (basicInfo["Skill Name"] === "Manual Input") {
// User will input in "Other"
}
// 根据用途类型推断描述模板
// Infer description template based on purpose type
const purposeTemplates = {
"文档生成": "Generate {type} documents from {source}",
"代码分析": "Analyze {target} for {purpose}",
"交互管理": "Manage {entity} with interactive operations",
"数据处理": "Process {data} and generate {output}"
"Document Generation": "Generate {type} documents from {source}",
"Code Analysis": "Analyze {target} for {purpose}",
"Interactive Management": "Manage {entity} with interactive operations",
"Data Processing": "Process {data} and generate {output}"
};
```
### Step 2: Execution Mode Selection
```javascript
const modeInfo = await AskUserQuestion({
questions: [
{
question: "选择执行模式:",
header: "执行模式",
question: "Select execution mode:",
header: "Execution Mode",
multiSelect: false,
options: [
{
label: "Sequential (顺序模式)",
description: "阶段按固定顺序执行(收集→分析→生成),适合流水线任务(推荐)"
{
label: "Sequential (Sequential Mode)",
description: "Phases execute in fixed order (collect→analyze→generate), suitable for pipeline tasks (recommended)"
},
{
label: "Autonomous (自主模式)",
description: "动态选择执行路径,适合交互式任务(如 Issue 管理)"
{
label: "Autonomous (Autonomous Mode)",
description: "Dynamically select execution path, suitable for interactive tasks (e.g., Issue management)"
},
{
label: "Hybrid (混合模式)",
description: "初始化和收尾固定,中间交互灵活"
{
label: "Hybrid (Hybrid Mode)",
description: "Fixed initialization and finalization, flexible interaction in the middle"
}
]
}
]
});
const executionMode = modeInfo["执行模式"].includes("Sequential") ? "sequential" :
modeInfo["执行模式"].includes("Autonomous") ? "autonomous" : "hybrid";
const executionMode = modeInfo["Execution Mode"].includes("Sequential") ? "sequential" :
modeInfo["Execution Mode"].includes("Autonomous") ? "autonomous" : "hybrid";
```
### Step 3: Phase/Action Definition
#### Sequential Mode
```javascript
if (executionMode === "sequential") {
const phaseInfo = await AskUserQuestion({
questions: [
{
question: "需要多少个执行阶段?",
header: "阶段数量",
question: "How many execution phases are needed?",
header: "Phase Count",
multiSelect: false,
options: [
{ label: "3 阶段(简单)", description: "收集 → 处理 → 输出" },
{ label: "5 阶段(标准)", description: "收集 → 探索 → 分析 → 组装 → 验证" },
{ label: "7 阶段(完整)", description: "含并行处理、汇总、迭代优化" }
{ label: "3 Phases (Simple)", description: "Collection → Processing → Output" },
{ label: "5 Phases (Standard)", description: "Collection → Exploration → Analysis → Assembly → Validation" },
{ label: "7 Phases (Complete)", description: "Includes parallel processing, consolidation, iterative optimization" }
]
}
]
});
// Generate phase definitions based on selection
const phaseTemplates = {
"3 阶段": [
"3 Phases": [
{ id: "01-collection", name: "Data Collection" },
{ id: "02-processing", name: "Processing" },
{ id: "03-output", name: "Output Generation" }
],
"5 阶段": [
"5 Phases": [
{ id: "01-collection", name: "Requirements Collection" },
{ id: "02-exploration", name: "Project Exploration" },
{ id: "03-analysis", name: "Deep Analysis" },
{ id: "04-assembly", name: "Document Assembly" },
{ id: "05-validation", name: "Validation" }
],
"7 阶段": [
"7 Phases": [
{ id: "01-collection", name: "Requirements Collection" },
{ id: "02-exploration", name: "Project Exploration" },
{ id: "03-parallel", name: "Parallel Analysis" },
@@ -132,23 +132,23 @@ if (executionMode === "sequential") {
}
```
#### Autonomous Mode
```javascript
if (executionMode === "autonomous") {
const actionInfo = await AskUserQuestion({
questions: [
{
question: "核心动作有哪些?(可多选)",
header: "动作定义",
question: "What are the core actions? (Multiple selection allowed)",
header: "Action Definition",
multiSelect: true,
options: [
{ label: "初始化 (init)", description: "设置初始状态" },
{ label: "列表 (list)", description: "显示当前项目列表" },
{ label: "创建 (create)", description: "创建新项目" },
{ label: "编辑 (edit)", description: "修改现有项目" },
{ label: "删除 (delete)", description: "删除项目" },
{ label: "搜索 (search)", description: "搜索/过滤项目" }
{ label: "Initialize (init)", description: "Set initial state" },
{ label: "List (list)", description: "Display current item list" },
{ label: "Create (create)", description: "Create new item" },
{ label: "Edit (edit)", description: "Modify existing item" },
{ label: "Delete (delete)", description: "Delete item" },
{ label: "Search (search)", description: "Search/filter items" }
]
}
]
@@ -156,37 +156,37 @@ if (executionMode === "autonomous") {
}
```
### Step 4: Tool and Output Configuration
```javascript
const toolsInfo = await AskUserQuestion({
questions: [
{
question: "需要哪些特殊工具?(基础工具已默认包含)",
header: "工具选择",
question: "Which special tools are needed? (Basic tools are included by default)",
header: "Tool Selection",
multiSelect: true,
options: [
{ label: "用户交互 (AskUserQuestion)", description: "需要与用户对话" },
{ label: "Chrome 截图 (mcp__chrome__*)", description: "需要网页截图" },
{ label: "外部搜索 (mcp__exa__search)", description: "需要搜索外部信息" },
{ label: "无特殊需求", description: "仅使用基础工具" }
{ label: "User Interaction (AskUserQuestion)", description: "Need to dialog with user" },
{ label: "Chrome Screenshot (mcp__chrome__*)", description: "Need web page screenshots" },
{ label: "External Search (mcp__exa__search)", description: "Need to search external information" },
{ label: "No Special Requirements", description: "Use basic tools only" }
]
},
{
question: "输出格式是什么?",
header: "输出格式",
question: "What is the output format?",
header: "Output Format",
multiSelect: false,
options: [
{ label: "Markdown", description: "适合文档和报告" },
{ label: "HTML", description: "适合交互式文档" },
{ label: "JSON", description: "适合数据和配置" }
{ label: "Markdown", description: "Suitable for documents and reports" },
{ label: "HTML", description: "Suitable for interactive documents" },
{ label: "JSON", description: "Suitable for data and configuration" }
]
}
]
});
```
### Step 5: Generate Configuration File
```javascript
const config = {
@@ -195,45 +195,44 @@ const config = {
description: description,
triggers: triggers,
execution_mode: executionMode,
// Mode-specific configuration
...(executionMode === "sequential" ? {
sequential_config: { phases: phases }
} : {
autonomous_config: {
state_schema: stateSchema,
actions: actions,
termination_conditions: ["user_exit", "error_limit", "task_completed"]
}
}),
allowed_tools: [
"Task", "Read", "Write", "Glob", "Grep", "Bash",
...selectedTools
],
output: {
format: outputFormat.toLowerCase(),
location: `.workflow/.scratchpad/${skillName}-{timestamp}`,
filename_pattern: `{name}-output.${outputFormat === "HTML" ? "html" : outputFormat === "JSON" ? "json" : "md"}`
},
created_at: new Date().toISOString(),
version: "1.0.0"
};
// Write configuration file
const workDir = `.workflow/.scratchpad/skill-gen-${timestamp}`;
Bash(`mkdir -p "${workDir}"`);
Write(`${workDir}/skill-config.json`, JSON.stringify(config, null, 2));
```
## Output
- **File**: `skill-config.json`
- **Location**: `.workflow/.scratchpad/skill-gen-{timestamp}/`
- **Format**: JSON
## Next Phase
→ [Phase 2: Structure Generation](02-structure-generation.md)
**Data Flow to Phase 2**:
- skill-config.json with all configuration parameters
- Execution mode decision drives directory structure creation

View File

@@ -1,44 +1,101 @@
# Phase 2: Structure Generation
Create Skill directory structure and entry file based on configuration.
## Objective
- Create standard directory structure
- Generate SKILL.md entry file
- Create corresponding subdirectories based on execution mode
## Input
- Depends on: `skill-config.json` (Phase 1 output)
## Execution Steps
### Step 1: Read Configuration
```javascript
const config = JSON.parse(Read(`${workDir}/skill-config.json`));
const skillDir = `.claude/skills/${config.skill_name}`;
```
### Step 2: Create Directory Structure
#### Base Directories (All Modes)
```javascript
// Base infrastructure (brace expansion must stay outside the quotes)
Bash(`mkdir -p "${skillDir}"/{phases,specs,templates,scripts}`);
```
#### Execution Mode-Specific Directories
```
config.execution_mode
├─ "sequential"
│ ↓ Creates:
│ └─ phases/ (base directory already included)
│ ├─ _orchestrator.md
│ └─ workflow.json
└─ "autonomous" | "hybrid"
↓ Creates:
└─ phases/actions/
├─ state-schema.md
└─ *.md (action files)
```
```javascript
// Additional directories for Autonomous/Hybrid mode
if (config.execution_mode === 'autonomous' || config.execution_mode === 'hybrid') {
Bash(`mkdir -p "${skillDir}/phases/actions"`);
}
```
#### Context Strategy-Specific Directories (P0 Enhancement)
```javascript
// ========== P0: Create directories based on context strategy ==========
const contextStrategy = config.context_strategy || 'file';
if (contextStrategy === 'file') {
// File strategy: Create persistent context directory
Bash(`mkdir -p "${skillDir}/.scratchpad-template/context"`);
// Create context template file
Write(
`${skillDir}/.scratchpad-template/context/.gitkeep`,
"# Runtime context storage for file-based strategy"
);
}
// Memory strategy does not require directory creation (in-memory only)
```
**Directory Tree View**:
```
Sequential + File Strategy:
.claude/skills/{skill-name}/
├── phases/
│ ├── _orchestrator.md
│ ├── workflow.json
│ ├── 01-*.md
│ └── 02-*.md
├── .scratchpad-template/
│ └── context/ <- File strategy persistent storage
└── specs/
Autonomous + Memory Strategy:
.claude/skills/{skill-name}/
├── phases/
│ ├── orchestrator.md
│ ├── state-schema.md
│ └── actions/
│ └── *.md
└── specs/
```
### Step 3: Generate SKILL.md
```javascript
const skillMdTemplate = `---
@@ -72,8 +129,8 @@ const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const workDir = \`${config.output.location.replace('{timestamp}', '${timestamp}')}\`;
Bash(\`mkdir -p "\${workDir}"\`);
${config.execution_mode === 'sequential' ?
`Bash(\`mkdir -p "\${workDir}/sections"\`);` :
`Bash(\`mkdir -p "\${workDir}/state"\`);`}
\`\`\`
@@ -91,53 +148,53 @@ ${generateReferenceTable(config)}
Write(`${skillDir}/SKILL.md`, skillMdTemplate);
```
### Step 4: Architecture Diagram Generation Functions
```javascript
function generateArchitectureDiagram(config) {
if (config.execution_mode === 'sequential') {
return config.sequential_config.phases.map((p, i) =>
`│ Phase ${i+1}: ${p.name.padEnd(15)}${p.output || 'output-' + (i+1) + '.json'}${' '.repeat(10)}`
).join('\n│ ↓' + ' '.repeat(45) + '│\n');
} else {
return `
┌─────────────────────────────────────────────────────────────────┐
│ Orchestrator (State-driven decision-making)
└───────────────┬─────────────────────────────────────────────────┘
┌───────────┼───────────┐
↓ ↓ ↓
${config.autonomous_config.actions.slice(0, 3).map(a =>
`┌─────────┐ `).join('')}
${config.autonomous_config.actions.slice(0, 3).map(a =>
`${a.name.slice(0, 7).padEnd(7)}`).join('')}
${config.autonomous_config.actions.slice(0, 3).map(a =>
`└─────────┘ `).join('')}`;
}
}
function generateDesignPrinciples(config) {
const common = [
"1. **规范遵循**: 严格遵循 `_shared/SKILL-DESIGN-SPEC.md`",
"2. **简要返回**: Agent 返回路径+摘要,避免上下文溢出"
"1. **Specification Compliance**: Strictly follow `_shared/SKILL-DESIGN-SPEC.md`",
"2. **Brief Return**: Agent returns path+summary, avoiding context overflow"
];
if (config.execution_mode === 'sequential') {
return [...common,
"3. **阶段隔离**: 每个阶段独立可测",
"4. **链式输出**: 阶段产出作为下阶段输入"
"3. **Phase Isolation**: Each phase is independently testable",
"4. **Chained Output**: Phase output becomes next phase input"
].join('\n');
} else {
return [...common,
"3. **状态驱动**: 显式状态管理,动态决策",
"4. **动作独立**: 每个动作无副作用依赖"
"3. **State-driven**: Explicit state management, dynamic decision-making",
"4. **Action Independence**: Each action has no side-effect dependencies"
].join('\n');
}
}
function generateExecutionFlow(config) {
if (config.execution_mode === 'sequential') {
return '```\n' + config.sequential_config.phases.map((p, i) =>
`├─ Phase ${i+1}: ${p.name}\n│ → Output: ${p.output || 'output.json'}`
).join('\n') + '\n```';
} else {
@@ -158,9 +215,9 @@ function generateExecutionFlow(config) {
function generateOutputStructure(config) {
const base = `${config.output.location}/
├── ${config.execution_mode === 'sequential' ? 'sections/' : 'state.json'}`;
if (config.execution_mode === 'sequential') {
return base + '\n' + config.sequential_config.phases.map(p =>
`│ └── ${p.output || 'section-' + p.id + '.md'}`
).join('\n') + `\n└── ${config.output.filename_pattern}`;
} else {
@@ -172,36 +229,33 @@ function generateOutputStructure(config) {
function generateReferenceTable(config) {
const rows = [];
if (config.execution_mode === 'sequential') {
config.sequential_config.phases.forEach(p => {
rows.push(`| [phases/${p.id}.md](phases/${p.id}.md) | ${p.name} |`);
});
} else {
rows.push(`| [phases/orchestrator.md](phases/orchestrator.md) | Orchestrator |`);
rows.push(`| [phases/state-schema.md](phases/state-schema.md) | State Definition |`);
config.autonomous_config.actions.forEach(a => {
rows.push(`| [phases/actions/${a.id}.md](phases/actions/${a.id}.md) | ${a.name} |`);
});
}
rows.push(`| [specs/${config.skill_name}-requirements.md](specs/${config.skill_name}-requirements.md) | Domain Requirements |`);
rows.push(`| [specs/quality-standards.md](specs/quality-standards.md) | Quality Standards |`);
return `| Document | Purpose |\n|----------|---------|\n` + rows.join('\n');
}
```
## Output
- **Directory**: `.claude/skills/{skill-name}/`
- **Files**:
- `SKILL.md` (entry file)
- `phases/` (execution phases directory)
- `specs/` (specification documents directory)
- `templates/` (templates directory)
- `scripts/` (deterministic Python/Bash scripts)
## Next Phase
→ [Phase 3: Phase Generation](03-phase-generation.md)
**Data Flow to Phase 3**:
- Complete directory structure in .claude/skills/{skill-name}/
- SKILL.md entry file ready for phase/action generation
- skill-config.json for template population

File diff suppressed because it is too large Load Diff

View File

@@ -1,26 +1,107 @@
# Phase 4: Specifications & Templates Generation
Generate domain requirements, quality standards, agent templates, and action catalogs.
## Objective
Generate comprehensive specifications and templates:
- Domain requirements document with validation function
- Quality standards with automated check system
- Agent base template with prompt structure
- Action catalog for autonomous mode (conditional)
## Input
**File Dependencies**:
- `skill-config.json` (from Phase 1)
- `.claude/skills/{skill-name}/` directory (from Phase 2)
- Generated phase/action files (from Phase 3)
**Required Information**:
- Skill name, display name, description
- Execution mode (determines if action-catalog.md is generated)
- Output format and location
- Phase/action definitions
## Output
**Generated Files**:
| File | Purpose | Generation Condition |
|------|---------|---------------------|
| `specs/{skill-name}-requirements.md` | Domain requirements with validation | Always |
| `specs/quality-standards.md` | Quality evaluation criteria | Always |
| `templates/agent-base.md` | Agent prompt template | Always |
| `specs/action-catalog.md` | Action dependency graph and selection priority | Autonomous/Hybrid mode only |
**File Structure**:
**Domain Requirements** (`specs/{skill-name}-requirements.md`):
```markdown
# {display_name} Requirements
- When to Use (phase/action reference table)
- Domain Requirements (Functional requirements, Output requirements, Quality requirements)
- Validation Function (JavaScript code)
- Error Handling (recovery strategies)
```
**Quality Standards** (`specs/quality-standards.md`):
```markdown
# Quality Standards
- Quality Dimensions (Completeness 25%, Consistency 25%, Accuracy 25%, Usability 25%)
- Quality Gates (Pass ≥80%, Review 60-79%, Fail <60%)
- Issue Classification (Errors, Warnings, Info)
- Automated Checks (runQualityChecks function)
```
**Agent Base** (`templates/agent-base.md`):
```markdown
# Agent Base Template
- Universal Prompt Structure (ROLE, PROJECT CONTEXT, TASK, CONSTRAINTS, OUTPUT_FORMAT, QUALITY_CHECKLIST)
- Variable Description (workDir, output_path)
- Return Format (AgentReturn interface)
- Role Definition Reference (phase/action specific agents)
```
**Action Catalog** (`specs/action-catalog.md`, Autonomous/Hybrid only):
```markdown
# Action Catalog
- Available Actions (table with Purpose, Preconditions, Effects)
- Action Dependencies (Mermaid diagram)
- State Transitions (state machine table)
- Selection Priority (ordered action list)
```
## Decision Logic
```
Decision (execution_mode check):
├─ mode === 'sequential' → Generate 3 files only
│ └─ Files: requirements.md, quality-standards.md, agent-base.md
├─ mode === 'autonomous' → Generate 4 files
│ ├─ Files: requirements.md, quality-standards.md, agent-base.md
│ └─ Additional: action-catalog.md (with action dependencies)
└─ mode === 'hybrid' → Generate 4 files
├─ Files: requirements.md, quality-standards.md, agent-base.md
└─ Additional: action-catalog.md (with hybrid logic)
```
## Execution Protocol
```javascript
// Phase 4: Generate Specifications & Templates
// Reference: phases/04-specs-templates.md
// Load config and setup
const config = JSON.parse(Read(`${workDir}/skill-config.json`));
const skillDir = `.claude/skills/${config.skill_name}`;
// Ensure specs and templates directories exist (created in Phase 2)
// skillDir structure: phases/, specs/, templates/
// Step 1: Generate domain requirements
const domainRequirements = `# ${config.display_name} Requirements
${config.description}
@@ -29,45 +110,45 @@ ${config.description}
| Phase | Usage | Reference |
|-------|-------|-----------|
${config.execution_mode === 'sequential' ?
config.sequential_config.phases.map((p, i) =>
`| Phase ${i+1} | ${p.name} | ${p.id}.md |`
).join('\n') :
`| Orchestrator | Action selection | orchestrator.md |
| Actions | Action execution | actions/*.md |`}
---
## Domain Requirements
### Functional Requirements
- [ ] Requirement 1: TODO
- [ ] Requirement 2: TODO
- [ ] Requirement 3: TODO
### Output Requirements
- [ ] Format: ${config.output.format}
- [ ] Location: ${config.output.location}
- [ ] Naming: ${config.output.filename_pattern}
### Quality Requirements
- [ ] Completeness: All necessary content exists
- [ ] Consistency: Terminology and format unified
- [ ] Accuracy: Content based on actual analysis
## Validation Function
\`\`\`javascript
function validate${toPascalCase(config.skill_name)}(output) {
const checks = [
// TODO: Add validation rules
{ name: "Format correct", pass: output.format === "${config.output.format}" },
{ name: "Content complete", pass: output.content?.length > 0 }
];
return {
passed: checks.filter(c => c.pass).length,
total: checks.length,
@@ -80,81 +161,78 @@ function validate${toPascalCase(config.skill_name)}(output) {
| Error | Recovery |
|-------|----------|
| Missing input data | Return a clear error message |
| Processing timeout | Reduce scope and retry |
| Output validation failure | Log the issue for manual review |
`;
Write(`${skillDir}/specs/${config.skill_name}-requirements.md`, domainRequirements);
```
```javascript
// Step 2: Generate quality standards
const qualityStandards = `# Quality Standards
Quality assessment standards for ${config.display_name}.
## Quality Dimensions
### 1. Completeness - 25%
| Requirement | Weight | Validation Method |
|------------|--------|-----------------|
| All necessary outputs exist | 10 | File check |
| Content coverage complete | 10 | Content analysis |
| No placeholder remnants | 5 | Text search |
### 2. Consistency - 25%
| Aspect | Check |
|--------|-------|
| Terminology | Use the same term for the same concept |
| Format | Heading levels and code block format consistent |
| Style | Tone and expression unified |
### 3. Accuracy - 25%
| Requirement | Description |
|-------------|------------|
| Data correct | References and data error-free |
| Logic correct | Process and relationship descriptions accurate |
| Code correct | Code examples runnable |
### 4. Usability - 25%
| Metric | Goal |
|--------|------|
| Readability | Clear structure, easy to understand |
| Navigability | Table of contents and links correct |
| Operability | Steps clear and executable |
## Quality Gates
| Gate | Threshold | Action |
|------|-----------|--------|
| Pass | >= 80% | Output final deliverables |
| Review | 60-79% | Process warnings, then continue |
| Fail | < 60% | Must fix |
## Issue Classification
### Errors (Must Fix)
- Necessary output missing
- Data errors
- Code not runnable
### Warnings (Should Fix)
- Format inconsistency
- Insufficient content depth
- Missing examples
### Info (Nice to Have)
- Optimization suggestions
- Enhancement opportunities
## Automated Checks
@@ -176,7 +254,7 @@ function runQualityChecks(workDir) {
return {
score: results.overall,
gate: results.overall >= 80 ? 'pass' :
results.overall >= 60 ? 'review' : 'fail',
details: results
};
@@ -185,51 +263,48 @@ function runQualityChecks(workDir) {
`;
Write(`${skillDir}/specs/quality-standards.md`, qualityStandards);
```
```javascript
// Step 3: Generate agent base template
const agentBase = `# Agent Base Template
Agent base template for ${config.display_name}.
## Universal Prompt Structure
\`\`\`
[ROLE] You are {role}, focused on {responsibility}.
[PROJECT CONTEXT]
Skill: ${config.skill_name}
Objective: ${config.description}
[TASK]
{task description}
- Output: {output_path}
- Format: ${config.output.format}
[CONSTRAINTS]
- Constraint 1
- Constraint 2
[OUTPUT_FORMAT]
1. Execute the task
2. Return a brief JSON summary
[QUALITY_CHECKLIST]
- [ ] Output format correct
- [ ] Content complete without omission
- [ ] No placeholder remnants
\`\`\`
## Variable Description
| Variable | Source | Example |
|----------|--------|---------|
| {workDir} | Runtime | .workflow/.scratchpad/${config.skill_name}-xxx |
| {output_path} | Configuration | ${config.output.location}/${config.output.filename_pattern} |
## Return Format
\`\`\`typescript
interface AgentReturn {
@@ -243,33 +318,30 @@ interface AgentReturn {
}
\`\`\`
## Role Definition Reference
${config.execution_mode === 'sequential' ?
config.sequential_config.phases.map((p, i) =>
`- **Phase ${i+1} Agent**: ${p.name} Expert`
).join('\n') :
config.autonomous_config.actions.map(a =>
`- **${a.name} Agent**: ${a.description || a.name + ' Executor'}`
).join('\n')}
`;
Write(`${skillDir}/templates/agent-base.md`, agentBase);
```
```javascript
// Step 4: Conditional - Generate action catalog for autonomous/hybrid mode
if (config.execution_mode === 'autonomous' || config.execution_mode === 'hybrid') {
const actionCatalog = `# Action Catalog
Available action catalog for ${config.display_name}.
## Available Actions
| Action | Purpose | Preconditions | Effects |
|--------|---------|---------------|---------|
${config.autonomous_config.actions.map(a =>
`| [${a.id}](../phases/actions/${a.id}.md) | ${a.description || a.name} | ${a.preconditions?.join(', ') || '-'} | ${a.effects?.join(', ') || '-'} |`
).join('\n')}
@@ -278,9 +350,9 @@ ${config.autonomous_config.actions.map(a =>
\`\`\`mermaid
graph TD
${config.autonomous_config.actions.map((a, i, arr) => {
if (i === 0) return ` ${a.id.replace(/-/g, '_')}[${a.name}]`;
const prev = arr[i-1];
return ` ${prev.id.replace(/-/g, '_')} --> ${a.id.replace(/-/g, '_')}[${a.name}]`;
}).join('\n')}
\`\`\`
@@ -289,7 +361,7 @@ ${config.autonomous_config.actions.map((a, i, arr) => {
| From State | Action | To State |
|------------|--------|----------|
| pending | action-init | running |
${config.autonomous_config.actions.slice(1).map(a =>
`| running | ${a.id} | running |`
).join('\n')}
| running | action-complete | completed |
@@ -297,32 +369,30 @@ ${config.autonomous_config.actions.slice(1).map(a =>
## Selection Priority
When multiple actions' preconditions are met, select in the following priority order:
${config.autonomous_config.actions.map((a, i) =>
`${i + 1}. \`${a.id}\` - ${a.name}`
).join('\n')}
`;
Write(`${skillDir}/specs/action-catalog.md`, actionCatalog);
}
```
```javascript
// Helper function
function toPascalCase(str) {
return str.split('-').map(s => s.charAt(0).toUpperCase() + s.slice(1)).join('');
}
// Phase output summary
console.log('Phase 4 complete: Generated specs and templates');
```
## Output
- `specs/{skill-name}-requirements.md` - Domain requirements
- `specs/quality-standards.md` - Quality standards
- `specs/action-catalog.md` - Action catalog (Autonomous mode only)
- `templates/agent-base.md` - Agent template
## Next Phase
→ [Phase 5: Validation](05-validation.md)
**Data Flow to Phase 5**:
- All generated files in `specs/` and `templates/`
- skill-config.json for validation reference
- Complete skill directory structure ready for final validation

View File

@@ -1,27 +1,119 @@
# Phase 5: Validation & Documentation
Verify generated Skill completeness and generate user documentation.
## Objective
Comprehensive validation and documentation:
- Verify all required files exist
- Check file content quality and completeness
- Generate validation report with issues and recommendations
- Generate README.md usage documentation
- Output final status and next steps
## Input
**File Dependencies**:
- `skill-config.json` (from Phase 1)
- `.claude/skills/{skill-name}/` directory (from Phase 2)
- All generated phase/action files (from Phase 3)
- All generated specs/templates files (from Phase 4)
**Required Information**:
- Skill name, display name, description
- Execution mode
- Trigger words
- Output configuration
- Complete skill directory structure
## Output
**Generated Files**:
| File | Purpose | Content |
|------|---------|---------|
| `validation-report.json` (workDir) | Validation report with detailed checks | File completeness, content quality, issues, recommendations |
| `README.md` (skillDir) | User documentation | Quick Start, Usage, Output, Directory Structure, Customization |
**Validation Report Structure** (`validation-report.json`):
```json
{
"skill_name": "...",
"execution_mode": "sequential|autonomous",
"generated_at": "ISO timestamp",
"file_checks": {
"total": N,
"existing": N,
"with_content": N,
"with_todos": N,
"details": [...]
},
"content_checks": {
"files_checked": N,
"all_passed": true|false,
"details": [...]
},
"summary": {
"status": "PASS|REVIEW|FAIL",
"issues": [...],
"recommendations": [...]
}
}
```
**README Structure** (`README.md`):
```markdown
# {display_name}
- Quick Start (Triggers, Execution Mode)
- Usage (Examples)
- Output (Format, Location, Filename)
- Directory Structure (Tree view)
- Customization (How to modify)
- Related Documents (Links)
```
**Validation Status Gates**:
| Status | Condition | Meaning |
|--------|-----------|---------|
| PASS | All files exist + All content checks passed | Ready for use |
| REVIEW | All files exist + Some content checks failed | Needs refinement |
| FAIL | Missing files | Incomplete generation |
## Decision Logic
```
Decision (Validation Flow):
├─ File Completeness Check
│ ├─ All files exist → Continue to content checks
│ └─ Missing files → Status = FAIL, collect missing file errors
├─ Content Quality Check
│ ├─ Sequential mode → Check phase files for structure
│ ├─ Autonomous mode → Check orchestrator + action files
│ └─ Common → Check SKILL.md, specs/, templates/
├─ Status Calculation
│ ├─ All files exist + All checks pass → Status = PASS
│ ├─ All files exist + Some checks fail → Status = REVIEW
│ └─ Missing files → Status = FAIL
└─ Generate Report & README
├─ validation-report.json (with issues and recommendations)
└─ README.md (with usage documentation)
```
## Execution Protocol
```javascript
// Phase 5: Validation & Documentation
// Reference: phases/05-validation.md
// Load config and setup
const config = JSON.parse(Read(`${workDir}/skill-config.json`));
const skillDir = `.claude/skills/${config.skill_name}`;
// Step 1: File completeness check
const requiredFiles = {
common: [
'SKILL.md',
@@ -64,14 +156,11 @@ const fileCheckResults = filesToCheck.map(file => {
};
}
});
```
```javascript
// Step 2: Content quality check
const contentChecks = [];
// Check SKILL.md structure
const skillMd = Read(`${skillDir}/SKILL.md`);
contentChecks.push({
file: 'SKILL.md',
@@ -83,11 +172,11 @@ contentChecks.push({
]
});
// Check phase files
const phaseFiles = Glob(`${skillDir}/phases/*.md`);
for (const phaseFile of phaseFiles) {
if (phaseFile.includes('/actions/')) continue; // Checked separately
const content = Read(phaseFile);
contentChecks.push({
file: phaseFile.replace(skillDir + '/', ''),
@@ -100,7 +189,7 @@ for (const phaseFile of phaseFiles) {
});
}
// Check specs files
const specFiles = Glob(`${skillDir}/specs/*.md`);
for (const specFile of specFiles) {
const content = Read(specFile);
@@ -113,16 +202,13 @@ for (const specFile of specFiles) {
]
});
}
```
```javascript
// Step 3: Generate validation report
const report = {
skill_name: config.skill_name,
execution_mode: config.execution_mode,
generated_at: new Date().toISOString(),
file_checks: {
total: fileCheckResults.length,
existing: fileCheckResults.filter(f => f.exists).length,
@@ -130,13 +216,13 @@ const report = {
with_todos: fileCheckResults.filter(f => f.hasTodo).length,
details: fileCheckResults
},
content_checks: {
files_checked: contentChecks.length,
all_passed: contentChecks.every(c => c.checks.every(ch => ch.pass)),
details: contentChecks
},
summary: {
status: calculateOverallStatus(fileCheckResults, contentChecks),
issues: collectIssues(fileCheckResults, contentChecks),
@@ -146,10 +232,11 @@ const report = {
Write(`${workDir}/validation-report.json`, JSON.stringify(report, null, 2));
// Helper functions
function calculateOverallStatus(fileResults, contentResults) {
const allFilesExist = fileResults.every(f => f.exists);
const allContentPassed = contentResults.every(c => c.checks.every(ch => ch.pass));
if (allFilesExist && allContentPassed) return 'PASS';
if (allFilesExist) return 'REVIEW';
return 'FAIL';
@@ -157,125 +244,122 @@ function calculateOverallStatus(fileResults, contentResults) {
function collectIssues(fileResults, contentResults) {
const issues = [];
fileResults.filter(f => !f.exists).forEach(f => {
issues.push({ type: 'ERROR', message: `Missing file: ${f.file}` });
});
fileResults.filter(f => f.hasTodo).forEach(f => {
issues.push({ type: 'WARNING', message: `Contains TODO: ${f.file}` });
});
contentResults.forEach(c => {
c.checks.filter(ch => !ch.pass).forEach(ch => {
issues.push({ type: 'WARNING', message: `${c.file}: Missing ${ch.name}` });
});
});
return issues;
}
function generateRecommendations(fileResults, contentResults) {
const recommendations = [];
if (fileResults.some(f => f.hasTodo)) {
recommendations.push('Replace all TODO placeholders with actual content');
}
contentResults.forEach(c => {
if (c.checks.some(ch => !ch.pass)) {
recommendations.push(`Improve the structure of ${c.file}`);
}
});
return recommendations;
}
```
```javascript
// Step 4: Generate README.md
const readme = `# ${config.display_name}
${config.description}
## Quick Start
### Trigger Words
${config.triggers.map(t => `- "${t}"`).join('\n')}
### Execution Mode
**${config.execution_mode === 'sequential' ? 'Sequential' : 'Autonomous'}**
${config.execution_mode === 'sequential' ?
`Phases execute in fixed order:\n${config.sequential_config.phases.map((p, i) =>
`${i + 1}. ${p.name}`
).join('\n')}` :
`Actions selected dynamically by orchestrator:\n${config.autonomous_config.actions.map(a =>
`- ${a.name}: ${a.description || ''}`
).join('\n')}`}
## Usage
\`\`\`
# Direct trigger
User: ${config.triggers[0]}
# Or use the Skill name
User: /skill ${config.skill_name}
\`\`\`
## Output
- **Format**: ${config.output.format}
- **Location**: \`${config.output.location}\`
- **Filename**: \`${config.output.filename_pattern}\`
## Directory Structure
\`\`\`
.claude/skills/${config.skill_name}/
├── SKILL.md # Entry file
├── phases/ # Execution phases
${config.execution_mode === 'sequential' ?
config.sequential_config.phases.map(p => `│ ├── ${p.id}.md`).join('\n') :
`│ ├── orchestrator.md
│ ├── state-schema.md
│ └── actions/
${config.autonomous_config.actions.map(a => `│ ├── ${a.id}.md`).join('\n')}`}
├── specs/ # Specification files
│ ├── ${config.skill_name}-requirements.md
│ ├── quality-standards.md
${config.execution_mode === 'autonomous' ? '│ └── action-catalog.md' : ''}
└── templates/ # Template files
└── agent-base.md
\`\`\`
## Customization
### Modify Execution Logic
Edit the phase files in the \`phases/\` directory.
### Adjust Quality Standards
Edit \`specs/quality-standards.md\`.
### Add a New ${config.execution_mode === 'sequential' ? 'Phase' : 'Action'}
${config.execution_mode === 'sequential' ?
`1. Create a new phase file in \`phases/\` (e.g., \`03.5-new-step.md\`)
2. Update the execution flow in SKILL.md` :
`1. Create a new action file in \`phases/actions/\`
2. Update \`specs/action-catalog.md\`
3. Add selection logic in \`phases/orchestrator.md\``}
## Related Documents
- [Design Specification](../_shared/SKILL-DESIGN-SPEC.md)
- [Execution Modes Specification](specs/../../../skill-generator/specs/execution-modes.md)
---
@@ -283,52 +367,51 @@ ${config.execution_mode === 'sequential' ?
`;
Write(`${skillDir}/README.md`, readme);
```
### Step 5: 输出最终结果
```javascript
// Step 5: Output final result
const finalResult = {
skill_name: config.skill_name,
skill_path: skillDir,
execution_mode: config.execution_mode,
generated_files: [
'SKILL.md',
'README.md',
...filesToCheck
],
validation: report.summary,
next_steps: [
'1. Review the generated file structure',
'2. Replace TODO placeholders',
'3. Adjust phase logic based on actual requirements',
'4. Test the Skill execution flow',
'5. Update trigger words and descriptions'
]
};
console.log('=== Skill Generation Complete ===');
console.log(`Path: ${skillDir}`);
console.log(`Mode: ${config.execution_mode}`);
console.log(`Status: ${report.summary.status}`);
console.log('');
console.log('Next Steps:');
finalResult.next_steps.forEach(s => console.log(s));
```
## Workflow Completion
**Final Status**: Skill generation pipeline complete
**Generated Artifacts**:
- Complete skill directory structure in `.claude/skills/{skill-name}/`
- Validation report in `{workDir}/validation-report.json`
- User documentation in `{skillDir}/README.md`
**Next Steps**:
1. Review validation report for any issues or recommendations
2. Replace TODO placeholders with actual implementation
3. Test skill execution with trigger words
4. Customize phase logic based on specific requirements
5. Update triggers and descriptions as needed

View File

@@ -1,111 +1,111 @@
# CLI Integration Specification
CCW CLI integration specification that defines how to properly call external CLI tools within Skills.
---
## Execution Modes
### 1. Synchronous Execution (Blocking)
Suitable for scenarios that need immediate results.
```javascript
// Agent call - synchronous
const result = Task({
subagent_type: 'universal-executor',
prompt: 'Execute task...',
run_in_background: false // Key: synchronous execution
});
// Result is immediately available
console.log(result);
```
### 2. Asynchronous Execution (Background)
Suitable for long-running CLI commands.
```javascript
// CLI call - asynchronous
const task = Bash({
command: 'ccw cli -p "..." --tool gemini --mode analysis',
run_in_background: true // Key: background execution
});
// Returns immediately without waiting for the result
// task.task_id is available for later queries
```
---
## CCW CLI Call Specification
### Basic Command Structure
```bash
ccw cli -p "<PROMPT>" --tool <gemini|qwen|codex> --mode <analysis|write>
```
### Parameter Description
| Parameter | Required | Description |
|-----------|----------|-------------|
| `-p "<prompt>"` | Yes | Prompt text (use double quotes) |
| `--tool <tool>` | Yes | Tool selection: gemini, qwen, codex |
| `--mode <mode>` | Yes | Execution mode: analysis, write |
| `--cd <path>` | - | Working directory |
| `--includeDirs <dirs>` | - | Additional directories (comma-separated) |
| `--resume [id]` | - | Resume session |
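For example, a read-only analysis run against a specific working directory (prompt and paths are illustrative):
```bash
ccw cli -p "PURPOSE: Analyze module boundaries
TASK: Map src/ modules and their dependencies
EXPECTED: JSON report" --tool gemini --mode analysis --cd ./my-project
```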
### Mode Selection
```
- Analysis/Documentation tasks?
→ --mode analysis (read-only)
- Implementation/Modification tasks?
→ --mode write (read-write)
```
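The same decision can be encoded when building commands; a minimal sketch (the task-kind names are assumptions):
```javascript
// Map task intent to the --mode flag; default to read-only analysis.
function selectMode(taskKind) {
  return ['implementation', 'modification'].includes(taskKind) ? 'write' : 'analysis';
}
```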
---
## Agent Types and Selection
### universal-executor
General-purpose executor, the most commonly used agent type.
```javascript
Task({
subagent_type: 'universal-executor',
prompt: `
Execute task:
1. Read configuration file
2. Analyze dependencies
3. Generate report to ${outputPath}
`,
run_in_background: false
});
```
**Applicable Scenarios**:
- Multi-step task execution
- File operations (read/write/edit)
- Tasks that require tool invocation
### Explore
Code exploration agent for quick codebase understanding.
```javascript
Task({
subagent_type: 'Explore',
prompt: `
Explore src/ directory:
- Identify main modules
- Understand directory structure
- Find entry points
Thoroughness: medium
`,
@@ -113,104 +113,104 @@ Thoroughness: medium
});
```
**Applicable Scenarios**:
- Codebase exploration
- File discovery
- Structure understanding
### cli-explore-agent
Deep code analysis agent.
```javascript
Task({
subagent_type: 'cli-explore-agent',
prompt: `
Deep analysis of src/auth/ module:
- Authentication flow
- Session management
- Security mechanisms
`,
run_in_background: false
});
```
**Applicable Scenarios**:
- Deep code understanding
- Design pattern identification
- Complex logic analysis
---
## Session Management
### Session Recovery
```javascript
// Save session ID
const session = Bash({
command: 'ccw cli -p "初始分析..." --tool gemini --mode analysis',
command: 'ccw cli -p "Initial analysis..." --tool gemini --mode analysis',
run_in_background: true
});
// Resume later
const continuation = Bash({
command: `ccw cli -p "继续分析..." --tool gemini --mode analysis --resume ${session.id}`,
command: `ccw cli -p "Continue analysis..." --tool gemini --mode analysis --resume ${session.id}`,
run_in_background: true
});
```
### Multi-Session Merge
```javascript
// Merge context from multiple sessions
const merged = Bash({
command: `ccw cli -p "汇总分析..." --tool gemini --mode analysis --resume ${id1},${id2}`,
command: `ccw cli -p "Aggregate analysis..." --tool gemini --mode analysis --resume ${id1},${id2}`,
run_in_background: true
});
```
---
## CLI Integration Patterns in Skills
### Pattern 1: Single Call
Simple tasks completed in one call.
```javascript
// Phase execution
async function executePhase(context) {
const result = Bash({
command: `ccw cli -p "
PURPOSE: Analyze project structure
TASK: Identify modules, dependencies, entry points
MODE: analysis
CONTEXT: @src/**/*
EXPECTED: Structure report in JSON format
" --tool gemini --mode analysis --cd ${context.projectRoot}`,
run_in_background: true,
timeout: 600000
});
// Wait for completion
return await waitForCompletion(result.task_id);
}
```
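`waitForCompletion` is not defined by this spec. A minimal polling sketch, assuming a `BashOutput`-style tool that reports `{ status, output }` for a background task id (both the tool shape and the status values are assumptions):
```javascript
// Hypothetical helper: poll a background task until it finishes.
async function waitForCompletion(taskId, intervalMs = 5000) {
  while (true) {
    const { status, output } = BashOutput({ task_id: taskId }); // assumed tool shape
    if (status === 'completed') return output;
    if (status === 'failed') throw new Error(`Task ${taskId} failed`);
    await sleep(intervalMs); // same sleep helper as in the retry strategy below
  }
}
```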
### Pattern 2: Chained Calls
Multi-step tasks where each step depends on the previous result.
```javascript
async function executeChain(context) {
// Step 1: Collect
const collectId = await runCLI('collect', context);
// Step 2: Analyze (depends on Step 1)
const analyzeId = await runCLI('analyze', context, `--resume ${collectId}`);
// Step 3: Generate (depends on Step 2)
const generateId = await runCLI('generate', context, `--resume ${analyzeId}`);
return generateId;
@@ -218,9 +218,9 @@ async function executeChain(context) {
async function runCLI(step, context, resumeFlag = '') {
const prompts = {
collect: 'PURPOSE: Collect code files...',
analyze: 'PURPOSE: Analyze code patterns...',
generate: 'PURPOSE: Generate documentation...'
};
const result = Bash({
@@ -232,9 +232,9 @@ async function runCLI(step, context, resumeFlag = '') {
}
```
### Pattern 3: Parallel Calls
Independent tasks executed in parallel.
```javascript
async function executeParallel(context) {
@@ -244,15 +244,15 @@ async function executeParallel(context) {
{ type: 'patterns', tool: 'qwen' }
];
// Start tasks in parallel
const taskIds = tasks.map(task =>
Bash({
command: `ccw cli -p "分析 ${task.type}..." --tool ${task.tool} --mode analysis`,
command: `ccw cli -p "Analyze ${task.type}..." --tool ${task.tool} --mode analysis`,
run_in_background: true
}).task_id
);
// Wait for all to complete
const results = await Promise.all(
taskIds.map(id => waitForCompletion(id))
);
@@ -261,9 +261,9 @@ async function executeParallel(context) {
}
```
### Pattern 4: Fallback Chain
Automatically switch tools on failure.
```javascript
async function executeWithFallback(context) {
@@ -299,9 +299,9 @@ async function runWithTool(tool, context) {
---
## Prompt Template Integration
### Reference Protocol Templates
```bash
# Analysis mode - use --rule to auto-load protocol and template (appended to prompt)
@@ -315,7 +315,7 @@ CONSTRAINTS: ...
..." --tool codex --mode write --rule development-feature
```
### Dynamic Template Building
```javascript
function buildPrompt(config) {
@@ -334,21 +334,21 @@ CONSTRAINTS: ${constraints || ''}
---
## Timeout Configuration
### Recommended Timeout Values
| Task Type | Timeout (ms) | Description |
|-----------|--------------|-------------|
| Quick analysis | 300000 | 5 minutes |
| Standard analysis | 600000 | 10 minutes |
| Deep analysis | 1200000 | 20 minutes |
| Code generation | 1800000 | 30 minutes |
| Complex tasks | 3600000 | 60 minutes |
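A small lookup that mirrors the table (the task-type keys are illustrative):
```javascript
// Timeouts in ms, matching the recommended values above.
const TIMEOUTS = {
  quick: 300000,
  standard: 600000,
  deep: 1200000,
  codegen: 1800000,
  complex: 3600000
};
function resolveTimeout(taskType, tool) {
  const base = TIMEOUTS[taskType] ?? TIMEOUTS.standard;
  return tool === 'codex' ? base * 3 : base; // Codex 3x rule, see below
}
```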
### Special Codex Handling
Codex requires a longer timeout (3x is recommended).
```javascript
const timeout = tool === 'codex' ? baseTimeout * 3 : baseTimeout;
@@ -362,17 +362,17 @@ Bash({
---
## Error Handling
### Common Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| ETIMEDOUT | Network timeout | Retry or switch tool |
| Exit code 1 | Command execution failed | Check parameters, switch tool |
| Context overflow | Input context too large | Reduce input scope |
### Retry Strategy
```javascript
async function executeWithRetry(command, maxRetries = 3) {
@@ -391,7 +391,7 @@ async function executeWithRetry(command, maxRetries = 3) {
lastError = error;
console.log(`Attempt ${attempt} failed: ${error.message}`);
// Exponential backoff
if (attempt < maxRetries) {
await sleep(Math.pow(2, attempt) * 1000);
}
@@ -404,30 +404,30 @@ async function executeWithRetry(command, maxRetries = 3) {
---
## Best Practices
### 1. run_in_background Rule
```
Agent calls (Task):
run_in_background: false → synchronous, result available immediately
CLI calls (Bash + ccw cli):
run_in_background: true → asynchronous, runs in the background
```
### 2. Tool Selection
```
Analysis tasks: gemini > qwen
Generation tasks: codex > gemini > qwen
Code modification: codex > gemini
```
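Expressed as data, these preferences can drive the fallback chain from Pattern 4 (the category names are assumptions):
```javascript
// Ordered tool preferences per task category, matching the rules above.
const TOOL_PREFERENCE = {
  analysis: ['gemini', 'qwen'],
  generation: ['codex', 'gemini', 'qwen'],
  modification: ['codex', 'gemini']
};
```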
### 3. Session Management
- Use `--resume` for related tasks to maintain context
- Do not use `--resume` for independent tasks
### 4. Prompt Specification
@@ -435,8 +435,8 @@ CLI 调用 (Bash + ccw cli):
- Use `--rule <template>` to auto-append protocol + template to prompt
- Template name format: `category-function` (e.g., `analysis-code-patterns`)
### 5. Result Processing
- Persist important results to workDir
- Brief returns: path + summary, to avoid context overflow
- JSON format is convenient for downstream processing

View File

@@ -1,40 +1,40 @@
# Execution Modes Specification
Detailed specification of the two Skill execution modes.
---
## Mode Overview
| Feature | Sequential (Fixed Order) | Autonomous (Dynamic) |
|---------|--------------------------|----------------------|
| Execution Order | Fixed (numeric prefix) | Dynamic (orchestrator decision) |
| Phase Dependencies | Strong dependencies | Weak/no dependencies |
| State Management | Implicit (phase output) | Explicit (state file) |
| Use Cases | Pipeline tasks | Interactive tasks |
| Complexity | Low | Medium-High |
| Extensibility | Insert sub-phases | Add new actions |
---
## Mode 1: Sequential (Fixed Order Mode)
### Definition
Phases execute linearly in a fixed order, with each phase's output serving as input to the next.
### Directory Structure
```
phases/
├── 01-{first-step}.md
├── 02-{second-step}.md
├── 02.5-{sub-step}.md # Optional: sub-phase
├── 03-{third-step}.md
└── ...
```
### Execution Flow
```
┌─────────┐ ┌─────────┐ ┌─────────┐
@@ -45,33 +45,33 @@ phases/
output1.json output2.md output3.md
```
### Phase File Specification
```markdown
# Phase N: {Phase Name}
{One-sentence description}
## Objective
{Detailed objective}
## Input
- Dependencies: {Previous phase output}
- Configuration: {Configuration file}
## Execution Steps
### Step 1: {Step}
{Execution code or description}
### Step 2: {Step}
{Execution code or description}
## Output
- **File**: `{Output file}`
- **Format**: {JSON/Markdown}
## Next Phase
@@ -79,74 +79,74 @@ phases/
→ [Phase N+1: xxx](0N+1-xxx.md)
```
### Applicable Scenarios
- **Document Generation**: Collect → Analyze → Assemble → Optimize
- **Code Analysis**: Scan → Parse → Report
- **Data Processing**: Extract → Transform → Load
### Advantages
- Clear logic, easy to understand
- Simple debugging; can validate phase by phase
- Predictable output
### Disadvantages
- Low flexibility
- Difficult to handle branching logic
- Limited user interaction
---
## Mode 2: Autonomous (Dynamic Mode)
### Definition
No fixed execution order; the orchestrator dynamically selects the next action based on the current state.
### Directory Structure
```
phases/
├── orchestrator.md # Orchestrator: core decision logic
├── state-schema.md # State structure definition
└── actions/ # Independent actions (no fixed order)
├── action-{a}.md
├── action-{b}.md
├── action-{c}.md
└── ...
```
### Core Components
#### 1. Orchestrator
```markdown
# Orchestrator
## Role
Select and execute the next action based on the current state.
## State Reading
Read the state file: `{workDir}/state.json`
## Decision Logic
```javascript
function selectNextAction(state) {
// 1. Check termination conditions
if (state.status === 'completed') return null;
if (state.error_count > MAX_RETRIES) return 'action-abort';
// 2. Select action based on state
if (!state.initialized) return 'action-init';
if (state.pending_items.length > 0) return 'action-process';
if (state.needs_review) return 'action-review';
// 3. Default action
return 'action-complete';
}
```
@@ -158,42 +158,42 @@ while (true) {
state = readState();
action = selectNextAction(state);
if (!action) break;
result = executeAction(action, state);
updateState(result);
}
```
```
#### 2. State Schema
```markdown
# State Schema
## State File
Location: `{workDir}/state.json`
## Structure Definition
```typescript
interface SkillState {
// Metadata
skill_name: string;
started_at: string;
updated_at: string;
// Execution state
status: 'pending' | 'running' | 'completed' | 'failed';
current_action: string | null;
completed_actions: string[];
// Business data
context: Record<string, any>;
pending_items: any[];
results: Record<string, any>;
// Error tracking
errors: Array<{
action: string;
message: string;
@@ -203,7 +203,7 @@ interface SkillState {
}
```
## Initial State
```json
{
@@ -222,23 +222,23 @@ interface SkillState {
```
```
#### 3. Action
```markdown
# Action: {action-name}
## Purpose
{Action purpose}
## Preconditions
- [ ] Condition 1
- [ ] Condition 2
## Execution
{Execution logic}
## State Updates
@@ -247,19 +247,19 @@ return {
completed_actions: [...state.completed_actions, 'action-name'],
results: {
...state.results,
action_name: { /* result */ }
},
// Other state updates
};
```
## Next Actions (Hints)
- On success: `action-{next}`
- On failure: `action-retry` or `action-abort`
```
### Execution Flow
```
┌─────────────────────────────────────────────────────────────────┐
@@ -289,9 +289,9 @@ return {
└─────────────────────────────────────────────────────────────────┘
```
### Action Catalog
Defined in `specs/action-catalog.md`:
```markdown
# Action Catalog
@@ -300,11 +300,11 @@ return {
| Action | Purpose | Preconditions | Effects |
|--------|---------|---------------|---------|
| action-init | Initialize state | status=pending | status=running |
| action-process | Process pending items | pending_items.length>0 | pending_items-- |
| action-review | User review | needs_review=true | needs_review=false |
| action-complete | Complete task | pending_items.length=0 | status=completed |
| action-abort | Abort task | error_count>MAX | status=failed |
## Action Dependencies Graph
@@ -319,78 +319,81 @@ graph TD
```
```
### Applicable Scenarios
- **Interactive Tasks**: Q&A, dialog, form filling
- **State Machine Tasks**: Issue management, workflow approval
- **Exploratory Tasks**: Debugging, diagnosis, search
### Advantages
- Highly flexible, adapts to dynamic requirements
- Supports complex branching logic
- Easy to extend with new actions
### Disadvantages
- High complexity
- State management overhead
- Harder to debug
---
## Mode Selection Guide
### Decision Flow
```
Analyze user requirements
┌────────────────────────────┐
阶段间是否有强依赖关系?
Are there strong
│ dependencies between │
│ phases? │
└────────────────────────────┘
├── → Sequential
├── Yes → Sequential
└── 继续判断
└── NoContinue decision
┌────────────────────────────┐
是否需要动态响应用户意图?
Do you need dynamic
│ response to user intent? │
└────────────────────────────┘
├── → Autonomous
├── Yes → Autonomous
└── → Sequential
└── No → Sequential
```
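A compact encoding of this flow (the predicate names are assumptions):
```javascript
// Sketch: derive the execution mode from two requirement checks.
function selectExecutionMode(req) {
  if (req.strongPhaseDependencies) return 'sequential';
  return req.needsDynamicUserResponse ? 'autonomous' : 'sequential';
}
```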
### Quick Decision Table
| Question | Sequential | Autonomous |
|----------|------------|------------|
| Is output structure fixed? | Yes | No |
| Do you need multi-turn user interaction? | No | Yes |
| Can phases be skipped/repeated? | No | Yes |
| Is there complex branching logic? | No | Yes |
| Should debugging be simple? | Yes | No |
---
## Hybrid Mode
Some complex Skills may need to combine both modes:
```
phases/
├── 01-init.md # Sequential: initialization
├── 02-orchestrator.md # Autonomous: core interaction loop
│ └── actions/
│ ├── action-a.md
│ └── action-b.md
└── 03-finalize.md # Sequential: finalization
```
**Applicable Scenarios**:
- Initialization and finalization are fixed; the middle interaction is flexible
- Multi-phase tasks where certain phases need dynamic decisions

View File

@@ -0,0 +1,271 @@
# Reference Documents Generation Specification
> **IMPORTANT**: This specification defines how to organize and present reference documents in generated skills to avoid duplication issues.
## Core Principles
### 1. Phase-Based Organization
Reference documents must be organized by skill execution phases, not as a flat list.
**Wrong Approach** (Flat List):
```markdown
## Reference Documents
| Document | Purpose |
|----------|---------|
| doc1.md | ... |
| doc2.md | ... |
| doc3.md | ... |
```
**Correct Approach** (Phase-Based Navigation):
```markdown
## Reference Documents by Phase
### Phase 1: Analysis
Documents to refer to when executing Phase 1
| Document | Purpose | When to Use |
|----------|---------|-------------|
| doc1.md | ... | Understand concept x |
### Phase 2: Implementation
Documents to refer to when executing Phase 2
| Document | Purpose | When to Use |
|----------|---------|-------------|
| doc2.md | ... | Implement feature y |
```
### 2. Four Standard Groupings
Reference documents must be divided into the following four groupings:
| Grouping | When to Use | Content |
|----------|------------|---------|
| **Phase N: [Name]** | When executing this phase | All documents related to this phase |
| **Debugging** | When encountering problems | Issue to documentation mapping table |
| **Reference** | When learning in depth | Templates, original implementations, best practices |
| (Optional) **Quick Links** | Quick navigation | Most frequently consulted 5-7 documents |
### 3. Each Document Entry Must Include
```
| [path](path) | Purpose | When to Use |
```
**When to Use Column Requirements**:
- Clear explanation of usage scenarios
- Describe what problem is solved
- Do not simply say "refer to" or "learn about"
**Good Examples**:
- "Understand issue data structure"
- "Learn about the Planning Agent role"
- "Check if implementation meets quality standards"
- "Quickly locate the reason for status anomalies"
**Poor Examples**:
- "Reference document"
- "More information"
- "Background knowledge"
### 4. Embedding Document Guidance in Execution Flow
In the "Execution Flow" section, each Phase description should include "Refer to" hints:
```markdown
### Phase 2: Planning Pipeline
**Refer to**: action-plan.md, subagent-roles.md
→ Detailed flow description...
```
### 5. Quick Troubleshooting Reference Table
Should contain common issue to documentation mapping:
```markdown
### Debugging & Troubleshooting
| Issue | Solution Document |
|-------|------------------|
| Phase execution failed | Refer to corresponding phase documentation |
| Output format incorrect | specs/quality-standards.md |
| Data validation failed | specs/schema-validation.md |
```
---
## Generation Rules
### Rule 1: Document Classification Recognition
Automatically generate groupings based on skill phases:
```javascript
const phaseEmojis = {
'discovery': '📋', // Collection, exploration
'generation': '🔧', // Generation, creation
'analysis': '🔍', // Analysis, review
'implementation': '⚙️', // Implementation, execution
'validation': '✅', // Validation, testing
'completion': '🏁', // Completion, wrap-up
};
// Generate a section for each phase
phases.forEach((phase, index) => {
const emoji = phaseEmojis[phase.type] || '📌';
const title = `### ${emoji} Phase ${index + 1}: ${phase.name}`;
// List all documents related to this phase
});
```
### Rule 2: Document to Phase Mapping
In config, specs and templates should be annotated with their belonging phases:
```json
{
"specs": [
{
"path": "specs/issue-handling.md",
"purpose": "Issue data specification",
"phases": ["phase-2", "phase-3"], // Which phases this spec is related to
"context": "Understand issue structure and validation rules"
}
]
}
```
### Rule 3: Priority and Mandatory Reading
Use visual symbols to distinguish document importance:
```markdown
| Document | When | Notes |
|----------|------|-------|
| spec.md | **Must Read Before Execution** | Mandatory prerequisite |
| action.md | Refer to during execution | Operation guide |
| template.md | Reference for learning | Optional in-depth |
```
### Rule 4: Avoid Duplication
- **Mandatory Prerequisites** section: List mandatory P0 specifications
- **Reference Documents by Phase** section: List all documents (including mandatory prerequisites)
- Documents in both sections can overlap, but their purposes differ:
- Prerequisites: Emphasize "must read first"
- Reference: Provide "complete navigation"
---
## Implementation Example
### Sequential Skill Example
```markdown
## Mandatory Prerequisites
| Document | Purpose | When |
|----------|---------|------|
| [specs/issue-handling.md](specs/issue-handling.md) | Issue data specification | **Must Read Before Execution** |
| [specs/solution-schema.md](specs/solution-schema.md) | Solution structure | **Must Read Before Execution** |
---
## Reference Documents by Phase
### Phase 1: Issue Collection
Documents to refer to when executing Phase 1
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/actions/action-list.md](phases/actions/action-list.md) | Issue loading logic | Understand how to collect issues |
| [specs/issue-handling.md](specs/issue-handling.md) | Issue data specification | Verify issue format **Required Reading** |
### Phase 2: Planning
Documents to refer to when executing Phase 2
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/actions/action-plan.md](phases/actions/action-plan.md) | Planning process | Understand issue to solution transformation |
| [specs/solution-schema.md](specs/solution-schema.md) | Solution structure | Verify solution JSON format **Required Reading** |
### Debugging & Troubleshooting
| Issue | Solution Document |
|-------|------------------|
| Phase 1 failed | [phases/actions/action-list.md](phases/actions/action-list.md) |
| Planning output incorrect | [phases/actions/action-plan.md](phases/actions/action-plan.md) + [specs/solution-schema.md](specs/solution-schema.md) |
| Data validation failed | [specs/issue-handling.md](specs/issue-handling.md) |
### Reference & Background
| Document | Purpose | Notes |
|----------|---------|-------|
| [../issue-plan.md](../../.codex/prompts/issue-plan.md) | Original implementation | Planning Agent system prompt |
```
---
## Generation Algorithm
```javascript
function generateReferenceDocuments(config) {
let result = '## Reference Documents by Phase\n\n';
// Generate a section for each phase
const phases = config.phases || config.actions || [];
phases.forEach((phase, index) => {
const phaseNum = index + 1;
const emoji = getPhaseEmoji(phase.type);
const title = phase.display_name || phase.name;
result += `### ${emoji} Phase ${phaseNum}: ${title}\n`;
result += `Documents to refer to when executing Phase ${phaseNum}\n\n`;
// Find all documents related to this phase
const docs = config.specs.filter(spec =>
(spec.phases || []).includes(`phase-${phaseNum}`) ||
matchesByName(spec.path, phase.name)
);
if (docs.length > 0) {
result += '| Document | Purpose | When to Use |\n';
result += '|----------|---------|-------------|\n';
docs.forEach(doc => {
const required = doc.phases && doc.phases[0] === `phase-${phaseNum}` ? ' **Required Reading**' : '';
result += `| [${doc.path}](${doc.path}) | ${doc.purpose} | ${doc.context}${required} |\n`;
});
result += '\n';
}
});
// Troubleshooting section
result += '### Debugging & Troubleshooting\n\n';
result += generateDebuggingTable(config);
// In-depth reference learning
result += '### Reference & Background\n\n';
result += generateReferenceTable(config);
return result;
}
```
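Neither helper referenced above is defined in this spec; minimal sketches consistent with the config shape used here (assumed structure, adjust to the real config) might be:

```javascript
// Sketch only: possible shapes for the two helpers referenced above
function getPhaseEmoji(type) {
  const phaseEmojis = {
    discovery: '📋', generation: '🔧', analysis: '🔍',
    implementation: '⚙️', validation: '✅', completion: '🏁'
  };
  return phaseEmojis[type] || '📌';
}

function generateDebuggingTable(config) {
  // Derive one troubleshooting row per spec; assumes specs carry path + purpose
  let table = '| Issue | Solution Document |\n|-------|------------------|\n';
  (config.specs || []).forEach(spec => {
    table += `| ${spec.purpose} failed | [${spec.path}](${spec.path}) |\n`;
  });
  return table + '\n';
}
```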
---
## Checklist
When generating a skill's SKILL.md, the reference documents section should satisfy:
- [ ] Has clear "## Reference Documents by Phase" heading
- [ ] Each phase has a corresponding section (identified with symbols)
- [ ] Each document entry includes "When to Use" column
- [ ] Includes "Debugging & Troubleshooting" section
- [ ] Includes "Reference & Background" section
- [ ] Mandatory reading documents are marked with **bold** text
- [ ] Execution Flow section includes "→ **Refer to**: ..." guidance
- [ ] Avoid overly long document lists (maximum 5-8 documents per phase)

---
# Scripting Integration Specification
Defines how to use external scripts in a skill for deterministic task execution.
## Core Principles
1. **Convention over configuration**: The name is the ID, the file extension is the runtime
2. **Minimal invocation**: Complete a script call in one line
3. **Standard input/output**: Command-line parameters as input, JSON on standard output
## Directory Structure
```
.claude/skills/<skill-name>/
├── scripts/ # Scripts directory
│ ├── process-data.py # id: process-data
│ ├── validate-output.sh # id: validate-output
│ └── transform-json.js # id: transform-json
└── specs/
```
## Naming Conventions
| Extension | Runtime | Execution Command |
|-----------|---------|-------------------|
| `.py` | python | `python scripts/{id}.py` |
| `.sh` | bash | `bash scripts/{id}.sh` |
| `.js` | node | `node scripts/{id}.js` |
## Declaration Format
Declare scripts in the `## Scripts` section of Phase or Action files:
```yaml
## Scripts
- validate-output
```
## Invocation Syntax
### Basic Call
```javascript
const result = await ExecuteScript('script-id', { key: value });
```
### Parameter Name Conversion
Keys in the JS input object are **automatically converted** to `kebab-case` command-line parameters:
| JS Key Name | Converted Parameter |
|-------------|---------------------|
| `input_path` | `--input-path` |
| `output_dir` | `--output-dir` |
| `max_count` | `--max-count` |
Use `--input-path` in the script to receive the value; pass `input_path` when calling.
### Complete Call (with Error Handling)
```javascript
const result = await ExecuteScript('process-data', {
input_path: '/path/to/file',
threshold: 0.9
});
if (!result.success) {
throw new Error(`Script execution failed: ${result.stderr}`);
}
const { output_file, count } = result.outputs;
```
## Return Format
```typescript
interface ScriptResult {
success: boolean; // exit code === 0
stdout: string; // Complete standard output
stderr: string; // Complete standard error
outputs: { // JSON parsed from the last line of stdout
[key: string]: any;
};
}
```
## Script Writing Specification
### Input: Command-line Parameters
```bash
# Python: argparse
--input-path /path/to/file --threshold 0.9
# Bash: manual parsing
--input-path /path/to/file
```
### Output: Single-line JSON on Standard Output
The script must print a single-line JSON object as its last line:
```json
{"output_file": "/tmp/result.json", "count": 42}
```
### Python Template
```python
import argparse
import json


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--input-path', required=True)
    parser.add_argument('--threshold', type=float, default=0.9)
    args = parser.parse_args()

    # Execution logic...
    result_path = "/tmp/result.json"

    # Output JSON
    print(json.dumps({
        "output_file": result_path,
        "items_processed": 100
    }))


if __name__ == '__main__':
    main()
```
### Bash Template
```bash
#!/bin/bash
# Parse parameters
while [[ "$#" -gt 0 ]]; do
    case $1 in
        --input-path) INPUT_PATH="$2"; shift ;;
    esac
    shift
done

# Execution logic...
LOG_FILE="/tmp/process.log"
echo "Processing $INPUT_PATH" > "$LOG_FILE"

# Output JSON
echo "{\"log_file\": \"$LOG_FILE\", \"status\": \"done\"}"
```
## ExecuteScript Implementation
```javascript
async function ExecuteScript(scriptId, inputs = {}) {
const skillDir = GetSkillDir();
// Find the script file
const extensions = ['.py', '.sh', '.js'];
let scriptPath, runtime;
// ...determine scriptPath and runtime by probing scripts/<id> with each extension (elided in diff)...
if (!scriptPath) {
throw new Error(`Script not found: ${scriptId}`);
}
// Build command-line parameters
const args = Object.entries(inputs)
.map(([k, v]) => `--${k.replace(/_/g, '-')} "${v}"`)
.join(' ');
// Execute the script
const cmd = `${runtime} "${scriptPath}" ${args}`;
const { stdout, stderr, exitCode } = await Bash(cmd);
// Parse output
let outputs = {};
try {
const lastLine = stdout.trim().split('\n').pop();
outputs = JSON.parse(lastLine);
} catch (e) {
// Unable to parse JSON; keep the empty object
}
return {
success: exitCode === 0,
stdout,
stderr,
outputs
};
}
```
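One caveat: the `--key "value"` join above breaks if a value contains double quotes. A slightly more defensive variant (a sketch, not part of the spec) escapes them first:

```javascript
// Sketch: escape embedded double quotes before interpolating values
const args = Object.entries(inputs)
  .map(([k, v]) => `--${k.replace(/_/g, '-')} "${String(v).replace(/"/g, '\\"')}"`)
  .join(' ');
```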
## Use Cases
### Suitable for Scripting
- Data processing and transformation
- File format conversion
- Batch file operations
- Complex calculation logic
- Calling external tools/libraries
### Not Suitable for Scripting
- Tasks requiring user interaction
- Tasks needing access to Claude tools
- Simple file reads/writes
- Tasks requiring dynamic decision-making
## Path Conventions
### Script Path
Script paths are relative to the directory containing `SKILL.md` (the skill root):
```
.claude/skills/<skill-name>/ # Skill root directory (location of SKILL.md)
├── SKILL.md
├── scripts/ # Scripts directory
│   └── process-data.py # Relative path: scripts/process-data.py
└── phases/
```
`ExecuteScript` automatically resolves scripts from the skill root directory:
```javascript
// Actually executes: python .claude/skills/<skill-name>/scripts/process-data.py
await ExecuteScript('process-data', { ... });
```
### Output Directory
**Recommended**: Pass the output directory from the caller rather than hardcoding `/tmp` in the script:
```javascript
// Specify the output directory when calling (inside the workflow working directory)
const result = await ExecuteScript('process-data', {
input_path: `${workDir}/data.json`,
output_dir: `${workDir}/output` // Explicitly specify the output location
});
```
Scripts should accept an `--output-dir` parameter instead of hardcoding output paths.
## Best Practices
1. **Single Responsibility**: Each script does one thing
2. **No Side Effects**: Scripts should not modify global state
3. **Idempotence**: The same input produces the same output
4. **Clear Errors**: Error messages go to stderr; normal output goes to stdout
5. **Fail Fast**: Exit immediately when parameter validation fails
6. **Parameterized Paths**: Output paths are specified by the caller, never hardcoded

---
# Skill Requirements Specification
Requirements collection specification for creating a new Skill.
---
## Required Information
### 1. Basic Information
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `skill_name` | string | Yes | Skill identifier (lowercase with hyphens) |
| `display_name` | string | Yes | Display name |
| `description` | string | Yes | One-sentence description |
| `triggers` | string[] | Yes | List of trigger keywords |
### 2. Execution Mode
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `execution_mode` | enum | Yes | `sequential` \| `autonomous` \| `hybrid` |
| `phase_count` | number | Conditional | Number of phases in Sequential mode |
| `action_count` | number | Conditional | Number of actions in Autonomous mode |
### 2.5 Context Strategy (P0 Enhancement)
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `context_strategy` | enum | Yes | `file` \| `memory` |
**Strategy Comparison**:
| Strategy | Persistence | Debuggable | Recoverable | Applicable Scenarios |
|----------|-------------|------------|-------------|----------------------|
| `file` | Yes | Yes | Yes | Complex multi-phase tasks (recommended) |
| `memory` | No | No | No | Simple linear tasks |
### 2.6 LLM Integration Configuration (P1 Enhancement)
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `llm_integration` | object | Optional | LLM invocation configuration |
| `llm_integration.enabled` | boolean | - | Enable LLM invocation |
| `llm_integration.default_tool` | enum | - | `gemini` \| `qwen` \| `codex` |
| `llm_integration.fallback_chain` | string[] | - | Fallback tool chain on failure |
### 3. Tool Dependencies
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `allowed_tools` | string[] | Yes | List of allowed tools |
| `mcp_tools` | string[] | Optional | Required MCP tools |
### 4. Output Configuration
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `output_format` | enum | Yes | `markdown` \| `html` \| `json` |
| `output_location` | string | Yes | Output directory pattern |
---
## Configuration File Structure
```typescript
interface SkillConfig {
// Basic information
skill_name: string; // "my-skill"
display_name: string; // "My Skill"
description: string; // "一句话描述"
description: string; // "One-sentence description"
triggers: string[]; // ["keyword1", "keyword2"]
// Execution mode
execution_mode: 'sequential' | 'autonomous' | 'hybrid';
// Context strategy (P0 Enhancement)
context_strategy: 'file' | 'memory'; // Default: 'file'
// LLM integration configuration (P1 Enhancement)
llm_integration?: {
enabled: boolean; // Enable LLM invocation
default_tool: 'gemini' | 'qwen' | 'codex';
fallback_chain: string[]; // ['gemini', 'qwen', 'codex']
mode: 'analysis' | 'write'; // Default mode
};
// Sequential mode configuration
sequential_config?: {
phases: Array<{
id: string; // "01-init"
name: string; // "Initialization"
description: string; // "收集初始配置"
input: string[]; // 输入依赖
output: string; // 输出文件
description: string; // "Collect initial configuration"
input: string[]; // Input dependencies
output: string; // Output file
}>;
};
// Autonomous mode configuration
autonomous_config?: {
state_schema: {
fields: Array<{
name: string;
type: string;
description: string;
}>;
};
actions: Array<{
id: string; // "action-init"
name: string; // "Initialize"
description: string; // "初始化状态"
preconditions: string[]; // 前置条件
effects: string[]; // 执行效果
description: string; // "Initialize state"
preconditions: string[]; // Preconditions
effects: string[]; // Execution effects
}>;
termination_conditions: string[];
};
// Tool dependencies
allowed_tools: string[]; // ["Task", "Read", "Write", ...]
mcp_tools?: string[]; // ["mcp__chrome__*"]
// Output configuration
output: {
format: 'markdown' | 'html' | 'json';
location: string; // ".workflow/.scratchpad/{skill}-{timestamp}"
filename_pattern: string; // "{name}-output.{ext}"
};
// Quality configuration
quality?: {
dimensions: string[]; // ["completeness", "consistency", ...]
pass_threshold: number; // 80
};
// Metadata
created_at: string;
version: string;
}
```
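Putting the pieces together, a minimal sequential instance of this interface might look like the following (illustrative values only):

```javascript
// Illustrative minimal SkillConfig for a sequential skill
const config = {
  skill_name: 'doc-gen',
  display_name: 'Doc Gen',
  description: 'Generate project documentation',
  triggers: ['doc', 'documentation'],
  execution_mode: 'sequential',
  context_strategy: 'file',
  sequential_config: {
    phases: [
      { id: '01-init', name: 'Initialization', description: 'Collect initial configuration', input: [], output: 'init.json' }
    ]
  },
  allowed_tools: ['Task', 'Read', 'Write'],
  output: {
    format: 'markdown',
    location: '.workflow/.scratchpad/doc-gen-{timestamp}',
    filename_pattern: '{name}-output.{ext}'
  },
  created_at: '2026-01-29T00:00:00Z',
  version: '1.0.0'
};
```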
---
## Requirements Collection Questions
### Phase 1: Basic Information
```javascript
AskUserQuestion({
questions: [
{
question: "Skill 的名称是什么?(英文,小写-连字符格式)",
header: "Skill 名称",
question: "What is the Skill name? (English, lowercase with hyphens)",
header: "Skill Name",
multiSelect: false,
options: [
{ label: "自动生成", description: "根据描述自动生成名称" },
{ label: "手动输入", description: "输入自定义名称" }
{ label: "Auto-generate", description: "Auto-generate name from description" },
{ label: "Manual input", description: "Enter custom name" }
]
},
{
question: "Skill 的主要用途是什么?",
header: "用途类型",
question: "What is the primary purpose of this Skill?",
header: "Purpose Type",
multiSelect: false,
options: [
{ label: "文档生成", description: "生成 Markdown/HTML 文档" },
{ label: "代码分析", description: "分析代码结构、质量、安全" },
{ label: "交互管理", description: "管理 Issue、任务、工作流" },
{ label: "数据处理", description: "ETL、转换、报告生成" },
{ label: "自定义", description: "其他用途" }
{ label: "Document Generation", description: "Generate Markdown/HTML documents" },
{ label: "Code Analysis", description: "Analyze code structure, quality, security" },
{ label: "Interactive Management", description: "Manage Issues, tasks, workflows" },
{ label: "Data Processing", description: "ETL, transformation, report generation" },
{ label: "Custom", description: "Other purposes" }
]
}
]
});
```
### Phase 2: Execution Mode
```javascript
AskUserQuestion({
questions: [
{
question: "选择执行模式:",
header: "执行模式",
question: "Select execution mode:",
header: "Execution Mode",
multiSelect: false,
options: [
{
label: "Sequential (Fixed Order)",
description: "Phases execute in fixed order, suitable for pipeline tasks (recommended)"
},
{
label: "Autonomous (Dynamic)",
description: "Dynamically select the execution path, suitable for interactive tasks"
},
{
label: "Hybrid (Mixed)",
description: "Fixed initialization and finalization, flexible interaction in between"
}
]
}
]
});
```
### Phase 3: Phase/Action Definition
#### Sequential Mode
```javascript
AskUserQuestion({
questions: [
{
question: "需要多少个执行阶段?",
header: "阶段数量",
question: "How many execution phases do you need?",
header: "Phase Count",
multiSelect: false,
options: [
{ label: "3 阶段", description: "简单: 收集 → 处理 → 输出" },
{ label: "5 阶段", description: "标准: 收集 → 探索 → 分析 → 组装 → 验证" },
{ label: "7 阶段", description: "完整: 包含并行处理和迭代优化" },
{ label: "自定义", description: "手动指定阶段" }
{ label: "3 phases", description: "Simple: Collect → Process → Output" },
{ label: "5 phases", description: "Standard: Collect → Explore → Analyze → Assemble → Validate" },
{ label: "7 phases", description: "Complete: Include parallel processing and iterative optimization" },
{ label: "Custom", description: "Manually specify phases" }
]
}
]
});
```
#### Autonomous Mode
```javascript
AskUserQuestion({
questions: [
{
question: "核心动作有哪些?",
header: "动作定义",
question: "What are the core actions?",
header: "Action Definition",
multiSelect: true,
options: [
{ label: "初始化 (init)", description: "设置初始状态" },
{ label: "列表 (list)", description: "显示当前项目" },
{ label: "创建 (create)", description: "创建新项目" },
{ label: "编辑 (edit)", description: "修改现有项目" },
{ label: "删除 (delete)", description: "删除项目" },
{ label: "完成 (complete)", description: "完成任务" }
{ label: "Initialize (init)", description: "Set initial state" },
{ label: "List (list)", description: "Display current items" },
{ label: "Create (create)", description: "Create new item" },
{ label: "Edit (edit)", description: "Modify existing item" },
{ label: "Delete (delete)", description: "Delete item" },
{ label: "Complete (complete)", description: "Complete task" }
]
}
]
});
```
### Phase 4: Context Strategy (P0 Enhancement)
```javascript
AskUserQuestion({
questions: [
{
question: "选择上下文管理策略:",
header: "上下文策略",
question: "Select context management strategy:",
header: "Context Strategy",
multiSelect: false,
options: [
{
label: "文件策略 (file)",
description: "持久化到 .scratchpad,支持调试和恢复(推荐)"
label: "File Strategy (file)",
description: "Persist to .scratchpad, supports debugging and recovery (recommended)"
},
{
label: "内存策略 (memory)",
description: "仅在运行时保持,速度快但无法恢复"
label: "Memory Strategy (memory)",
description: "Keep only at runtime, fast but no recovery"
}
]
}
]
});
```
### Phase 5: LLM Integration (P1 Enhancement)
```javascript
AskUserQuestion({
questions: [
{
question: "是否需要 LLM 调用能力?",
header: "LLM 集成",
question: "Do you need LLM invocation capability?",
header: "LLM Integration",
multiSelect: false,
options: [
{
label: "启用 LLM 调用",
description: "使用 gemini/qwen/codex 进行分析或生成"
label: "Enable LLM Invocation",
description: "Use gemini/qwen/codex for analysis or generation"
},
{
label: "不需要",
description: "仅使用本地工具"
label: "Not needed",
description: "Only use local tools"
}
]
}
]
});
// If LLM is enabled
if (llmEnabled) {
AskUserQuestion({
questions: [
{
question: "选择默认 LLM 工具:",
header: "LLM 工具",
question: "Select default LLM tool:",
header: "LLM Tool",
multiSelect: false,
options: [
{ label: "Gemini", description: "大上下文,适合分析任务(推荐)" },
{ label: "Qwen", description: "代码生成能力强" },
{ label: "Codex", description: "自主执行能力强,适合实现任务" }
{ label: "Gemini", description: "Large context, suitable for analysis tasks (recommended)" },
{ label: "Qwen", description: "Strong code generation capability" },
{ label: "Codex", description: "Strong autonomous execution, suitable for implementation tasks" }
]
}
]
});
}
```
### Phase 6: Tool Dependencies
```javascript
AskUserQuestion({
questions: [
{
question: "需要哪些工具?",
header: "工具选择",
question: "What tools do you need?",
header: "Tool Selection",
multiSelect: true,
options: [
{ label: "基础工具", description: "Task, Read, Write, Glob, Grep, Bash" },
{ label: "用户交互", description: "AskUserQuestion" },
{ label: "Chrome 截图", description: "mcp__chrome__*" },
{ label: "外部搜索", description: "mcp__exa__search" },
{ label: "CCW CLI 调用", description: "ccw cli (gemini/qwen/codex)" }
{ label: "Basic tools", description: "Task, Read, Write, Glob, Grep, Bash" },
{ label: "User interaction", description: "AskUserQuestion" },
{ label: "Chrome screenshot", description: "mcp__chrome__*" },
{ label: "External search", description: "mcp__exa__search" },
{ label: "CCW CLI invocation", description: "ccw cli (gemini/qwen/codex)" }
]
}
]
});
```
---
## Validation Rules
### Name Validation
```javascript
function validateSkillName(name) {
const rules = [
{ test: /^[a-z][a-z0-9-]*$/, msg: "Must start with a lowercase letter and contain only lowercase letters, digits, and hyphens" },
{ test: /^.{3,30}$/, msg: "Length must be 3-30 characters" },
{ test: /^(?!.*--)/, msg: "Cannot contain consecutive hyphens" },
{ test: /[^-]$/, msg: "Cannot end with a hyphen" }
];
for (const rule of rules) {
if (!rule.test.test(name)) {
return { valid: false, error: rule.msg };
}
}
return { valid: true };
}
```
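A quick sanity check of these rules (hypothetical invocations):

```javascript
// Hypothetical usage of validateSkillName
console.log(validateSkillName('my-skill')); // { valid: true }
console.log(validateSkillName('My-Skill')); // { valid: false, error: "Must start with a lowercase letter..." }
console.log(validateSkillName('a--b'));     // { valid: false, error: "Cannot contain consecutive hyphens" }
```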
### Configuration Validation
```javascript
function validateSkillConfig(config) {
const errors = [];
// Required fields
if (!config.skill_name) errors.push("Missing skill_name");
if (!config.description) errors.push("Missing description");
if (!config.execution_mode) errors.push("Missing execution_mode");
// Mode-specific validation
if (config.execution_mode === 'sequential') {
if (!config.sequential_config?.phases?.length) {
errors.push("Sequential 模式需要定义 phases");
errors.push("Sequential mode requires phases definition");
}
} else if (config.execution_mode === 'autonomous') {
if (!config.autonomous_config?.actions?.length) {
errors.push("Autonomous 模式需要定义 actions");
errors.push("Autonomous mode requires actions definition");
}
}
return { valid: errors.length === 0, errors };
}
```
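And a corresponding check for the config validator (hypothetical values):

```javascript
// Hypothetical: validating a minimal sequential config
const { valid, errors } = validateSkillConfig({
  skill_name: 'doc-gen',
  description: 'Generate docs',
  execution_mode: 'sequential',
  sequential_config: { phases: [{ id: '01-init', name: 'Init' }] }
});
// valid === true, errors === []
```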
---
## Example Configurations
### Sequential Mode Example (Enhanced)
```json
{
}
```
### Autonomous Mode Example
```json
{
"autonomous_config": {
"state_schema": {
"fields": [
{ "name": "tasks", "type": "Task[]", "description": "任务列表" },
{ "name": "current_view", "type": "string", "description": "当前视图" }
{ "name": "tasks", "type": "Task[]", "description": "Task list" },
{ "name": "current_view", "type": "string", "description": "Current view" }
]
},
"actions": [
{ "id": "action-list", "name": "List Tasks", "preconditions": [], "effects": ["显示任务列表"] },
{ "id": "action-create", "name": "Create Task", "preconditions": [], "effects": ["添加新任务"] },
{ "id": "action-edit", "name": "Edit Task", "preconditions": ["task_selected"], "effects": ["更新任务"] },
{ "id": "action-delete", "name": "Delete Task", "preconditions": ["task_selected"], "effects": ["删除任务"] }
{ "id": "action-list", "name": "List Tasks", "preconditions": [], "effects": ["Display task list"] },
{ "id": "action-create", "name": "Create Task", "preconditions": [], "effects": ["Add new task"] },
{ "id": "action-edit", "name": "Edit Task", "preconditions": ["task_selected"], "effects": ["Update task"] },
{ "id": "action-delete", "name": "Delete Task", "preconditions": ["task_selected"], "effects": ["Delete task"] }
],
"termination_conditions": ["user_exit", "error_limit"]
},

---
# Autonomous Action Template
Template for action files in Autonomous execution mode.
## Purpose
Generate Action files for Autonomous execution mode, defining independent executable action units.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 3 (Phase Generation) | Generated when `config.execution_mode === 'autonomous'` |
| Generation Trigger | Generate one action file for each entry in `config.autonomous_config.actions` |
| Output Location | `.claude/skills/{skill-name}/phases/actions/{action-id}.md` |
---
## Template Structure
```markdown
# Action: {{action_name}}
## Scripts
\`\`\`yaml
# Declare scripts used by this action (optional)
# - script-id # corresponds to scripts/script-id.py or .sh
\`\`\`
## Execution
\`\`\`javascript
async function execute(state) {
{{execution_code}}
// Script invocation example
// const result = await ExecuteScript('script-id', { input: state.context.data });
// if (!result.success) throw new Error(result.stderr);
}
{{next_actions_hints}}
```
## Variable Descriptions
| Variable | Description |
|----------|-------------|
| `{{action_name}}` | Action name |
| `{{action_description}}` | Action description |
| `{{purpose}}` | Detailed purpose |
| `{{preconditions_list}}` | List of preconditions |
| `{{execution_code}}` | Execution code |
| `{{state_updates}}` | State updates |
| `{{error_handling_table}}` | Error handling table |
| `{{next_actions_hints}}` | Next action hints |
## Action Lifecycle
```
State-driven execution flow:
state.status === 'pending'
|
v
+-- Init --+ <- 1 execution, environment preparation
| Create working directory
| Initialize context
| status -> running
+----+----+
|
v
+-- CRUD Loop --+ <- N iterations, core business
| Orchestrator selects action | List / Create / Edit / Delete
| execute(state) | Shared pattern: collect input -> operate context.items -> return updates
| Update state
+----+----+
|
v
+-- Complete --+ <- 1 execution, save results
| Serialize output
| status -> completed
+----------+
Shared state structure:
state.status -> 'pending' | 'running' | 'completed'
state.context.items -> Business data array
state.completed_actions -> List of executed action IDs
```
## Action Type Templates
### 1. Initialize Action (Init)
**Trigger condition**: `state.status === 'pending'`, executes once
```markdown
# Action: Initialize
Initialize the Skill execution state.
## Purpose
Set the initial state and prepare the execution environment.
## Preconditions
\`\`\`javascript
async function execute(state) {
// Create the working directory
Bash(\`mkdir -p "\${workDir}"\`);
// Return initial state updates
return {
stateUpdates: {
status: 'running',
started_at: new Date().toISOString(),
context: { items: [], metadata: {} }
}
};
}
\`\`\`
## Next Actions
- Success: Enter the main processing loop (the Orchestrator selects the first CRUD action)
- Failure: action-abort
```
### 2. CRUD Actions (List / Create / Edit / Delete)
**Trigger condition**: `state.status === 'running'`, loops until the user exits
> The example below shows the Create action, which demonstrates the shared pattern; List / Edit / Delete follow the same structure with different execution logic and state-update fields.
```markdown
# Action: Create Item
Create a new item.
## Purpose
Collect user input and append a new record to context.items.
## Preconditions
\`\`\`javascript
async function execute(state) {
// 1. Collect input
const input = await AskUserQuestion({
questions: [{
question: "请输入项目名称:",
header: "名称",
question: "Please enter item name:",
header: "Name",
multiSelect: false,
options: [{ label: "Manual input", description: "Enter custom name" }]
}]
});
// 2. Operate on context.items (core logic differs by action type)
const newItem = {
id: Date.now().toString(),
name: input["名称"],
name: input["Name"],
status: 'pending',
created_at: new Date().toISOString()
};
// 3. Return the state update
return {
stateUpdates: {
context: {
...state.context,
items: [...(state.context.items || []), newItem]
},
last_action: 'create'
}
};
}
\`\`\`
## Next Actions
- Continue operations: the Orchestrator selects the next action based on state
- User exit: action-complete
```
**Differences among the other CRUD actions:**
| Action | Core Logic | Extra Preconditions | Key State Field |
|--------|-----------|---------------------|-----------------|
| List | `items.forEach(-> console.log)` | None | `current_view: 'list'` |
| Create | `items.push(newItem)` | None | `last_created_id` |
| Edit | `items.map(-> replace matching)` | `selected_item_id !== null` | `updated_at` |
| Delete | `items.filter(-> exclude matching)` | `selected_item_id !== null` | Confirm dialog -> execute |
### 3. Complete Action
**Trigger condition**: User explicitly exits or termination condition met, executes once
```markdown
# Action: Complete
Complete the task and exit.
## Purpose
Serialize the final state and end Skill execution.
## Preconditions
\`\`\`javascript
async function execute(state) {
// Save final data
Write(\`\${workDir}/final-output.json\`, JSON.stringify(state.context, null, 2));
// Build the summary
const summary = {
total_items: state.context.items?.length || 0,
duration: Date.now() - new Date(state.started_at).getTime(),
actions_executed: state.completed_actions.length
};
console.log(\`Task complete: \${summary.total_items} items, \${summary.actions_executed} operations\`);
return {
stateUpdates: {
status: 'completed',
completed_at: new Date().toISOString(),
summary
}
};
}
\`\`\`
## Next Actions
- None (terminal state)
```
## Generation Function
```javascript
function generateAction(actionConfig, skillConfig) {
return `# Action: ${actionConfig.name}
${actionConfig.description || `Execute the ${actionConfig.name} operation`}
## Purpose
${actionConfig.purpose || 'TODO: Describe the detailed purpose of this action'}
## Preconditions
${actionConfig.preconditions?.map(p => `- [ ] ${p}`).join('\n') || '- [ ] No special preconditions'}
## Execution
\`\`\`javascript
async function execute(state) {
// TODO: Implement the action logic
return {
stateUpdates: {
completed_actions: [...state.completed_actions, '${actionConfig.id}']
\`\`\`javascript
return {
stateUpdates: {
// TODO: Define state updates
${actionConfig.effects?.map(e => ` // Effect: ${e}`).join('\n') || ''}
}
};
| Error Type | Recovery |
|------------|----------|
| Data validation failed | Return an error, do not update state |
| Execution exception | Log the error, increment error_count |
## Next Actions (Hints)
- Success: the Orchestrator decides based on state
- Failure: retry or action-abort
`;
}
```
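A hypothetical invocation, writing one generated action file to the skill's actions directory:

```javascript
// Hypothetical usage: render one action file from a config entry
const actionMd = generateAction(
  {
    id: 'action-create',
    name: 'Create Task',
    description: 'Create a new task',
    preconditions: ['state.status === "running"'],
    effects: ['Add new task']
  },
  skillConfig
);
Write('.claude/skills/task-manager/phases/actions/action-create.md', actionMd);
```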

---
# Autonomous Orchestrator Template
Template for the orchestrator file in Autonomous execution mode.
## Purpose
Generate the Orchestrator file for Autonomous execution mode, responsible for state-driven action selection and the execution loop.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 3 (Phase Generation) | Generated when `config.execution_mode === 'autonomous'` |
| Generation Trigger | Create orchestrator logic to manage action selection and state updates |
| Output Location | `.claude/skills/{skill-name}/phases/orchestrator.md` |
---
## Important Notes
> **Phase 0 is a mandatory prerequisite**: the Phase 0 specification review must be completed before the Orchestrator starts its execution loop.
>
> When generating the Orchestrator, ensure:
> 1. The Phase 0 specification review step is included in SKILL.md
> 2. The Orchestrator validates that the specifications have been read before starting the execution loop
> 3. All Action files reference the related specification documents
> 4. The Architecture Overview places Phase 0 before the Orchestrator
## Template Structure
```markdown
# Orchestrator
## Role
Select and execute the next action based on the current state.
## State Management
### Read State
\`\`\`javascript
const state = JSON.parse(Read(\`${workDir}/state.json\`));
\`\`\`
### Update State
\`\`\`javascript
function updateState(updates) {
const state = JSON.parse(Read(\`${workDir}/state.json\`));
const newState = {
...state,
...updates,
updated_at: new Date().toISOString()
};
Write(\`${workDir}/state.json\`, JSON.stringify(newState, null, 2));
return newState;
}
\`\`\`
\`\`\`javascript
function selectNextAction(state) {
// 1. Check termination conditions
{{termination_checks}}
// 2. Check the error limit
if (state.error_count >= 3) {
return 'action-abort';
}
// 3. Action selection logic
{{action_selection_logic}}
// 4. Default completion
return 'action-complete';
}
\`\`\`
\`\`\`javascript
async function runOrchestrator() {
console.log('=== Orchestrator Started ===');
let iteration = 0;
const MAX_ITERATIONS = 100;
while (iteration < MAX_ITERATIONS) {
iteration++;
// 1. Read current state
const state = JSON.parse(Read(\`${workDir}/state.json\`));
console.log(\`[Iteration ${iteration}] Status: ${state.status}\`);
// 2. Select next action
const actionId = selectNextAction(state);
if (!actionId) {
console.log('No action selected, terminating.');
break;
}
console.log(\`[Iteration ${iteration}] Executing: ${actionId}\`);
// 3. Update state: current action
updateState({ current_action: actionId });
// 4. Execute the action
try {
const actionPrompt = Read(\`phases/actions/${actionId}.md\`);
const result = await Task({
subagent_type: 'universal-executor',
run_in_background: false,
Return JSON with stateUpdates field.
\`
});
const actionResult = JSON.parse(result);
// 5. Update state: action completed
updateState({
current_action: null,
completed_actions: [...state.completed_actions, actionId],
...actionResult.stateUpdates
});
} catch (error) {
// Error handling
updateState({
current_action: null,
errors: [...state.errors, {
});
}
}
console.log('=== Orchestrator Finished ===');
}
\`\`\`
| Error Type | Recovery Strategy |
|------------|-------------------|
| Action execution failed | Retry up to 3 times |
| State inconsistency | Roll back to the last stable state |
| User abort | Save current state, allow recovery |
```
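For orientation, the state shape these template snippets read and write, shown as an illustrative object (not a normative schema; field names are taken from the snippets above):

```javascript
// Illustrative state.json contents, matching the fields used by the template
const exampleState = {
  status: 'running',                    // 'pending' | 'running' | 'completed'
  current_action: null,                 // set while an action is executing
  completed_actions: ['action-init'],   // IDs of executed actions
  context: { items: [], metadata: {} }, // business data
  errors: [],                           // collected execution errors
  error_count: 0,                       // checked against the limit of 3
  updated_at: '2026-01-29T08:00:00.000Z'
};
```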
## Variable Descriptions
| Variable | Description |
|----------|-------------|
| `{{termination_checks}}` | Termination condition check code |
| `{{action_selection_logic}}` | Action selection logic code |
| `{{action_catalog_table}}` | Action catalog table |
| `{{termination_conditions_list}}` | List of termination conditions |
## Generation Function
```javascript
function generateOrchestrator(config) {
const actions = config.autonomous_config.actions;
const terminations = config.autonomous_config.termination_conditions || [];
// Generate termination checks
const terminationChecks = terminations.map(t => {
const checks = {
'user_exit': 'if (state.status === "user_exit") return null;',
};
return checks[t] || `if (state.${t}) return null;`;
}).join('\n ');
// Generate action selection logic
const actionSelectionLogic = actions.map(action => {
if (!action.preconditions?.length) {
return `// ${action.name}: No preconditions, add selection logic manually`;
}
const conditions = action.preconditions.map(p => `state.${p}`).join(' && ');
return `if (${conditions}) return '${action.id}';`;
}).join('\n ');
// Generate the action catalog table
const actionCatalogTable = actions.map(a =>
`| [${a.id}](actions/${a.id}.md) | ${a.description || a.name} | ${a.preconditions?.join(', ') || '-'} |`
).join('\n');
// Generate the termination conditions list
const terminationConditionsList = terminations.map(t => `- ${t}`).join('\n');
return template
.replace('{{termination_checks}}', terminationChecks)
.replace('{{action_selection_logic}}', actionSelectionLogic)
.replace('{{action_catalog_table}}', actionCatalogTable)
.replace('{{termination_conditions_list}}', terminationConditionsList);
}
```
## Orchestration Strategies
### 1. Priority Strategy
Select the action by predefined priority:
```javascript
const PRIORITY = ['action-init', 'action-process', 'action-review', 'action-complete'];
function selectByPriority(state, availableActions) {
// Assumed body (elided in the diff): return the first PRIORITY action that is available
for (const actionId of PRIORITY) {
if (availableActions.some(a => (a.id || a) === actionId)) return actionId;
}
return null;
}
```
### 2. User-Driven Strategy
Ask the user to select the next action:
```javascript
async function selectByUser(state, availableActions) {
const response = await AskUserQuestion({
questions: [{
question: "选择下一个操作:",
header: "操作",
question: "Select next operation:",
header: "Operations",
multiSelect: false,
options: availableActions.map(a => ({
label: a.name,
}))
}]
});
return availableActions.find(a => a.name === response["Operations"])?.id;
}
```
### 3. State-Driven Strategy
Fully automatic decisions based on state:
```javascript
function selectByState(state) {
// Initialization
if (state.status === 'pending') return 'action-init';
// Has pending items
if (state.pending_items?.length > 0) return 'action-process';
// Needs review
if (state.needs_review) return 'action-review';
// Completed
return 'action-complete';
}
```
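The strategies can also be combined. A sketch (hypothetical, not part of the template) that uses deterministic state-driven selection for lifecycle transitions and falls back to asking the user during the interactive loop:

```javascript
// Hypothetical hybrid: state-driven for lifecycle steps, user-driven otherwise
async function selectNextActionHybrid(state, availableActions) {
  // Deterministic choices for initialization and review
  if (state.status === 'pending' || state.needs_review) {
    return selectByState(state);
  }
  // Let the user drive the interactive CRUD loop
  return await selectByUser(state, availableActions);
}
```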
## State Machine Example
```mermaid
stateDiagram-v2

---
# Code Analysis Action Template
Code analysis action template for integrating code exploration and analysis capabilities into a Skill.
## Purpose
Generate code analysis actions for a Skill, integrating MCP tools (ACE) and Agents for semantic search and in-depth analysis.
## Usage Context
| Phase | Usage |
|-------|-------|
| Optional | Use when Skill requires code exploration and analysis capabilities |
| Generation Trigger | User selects to add code-analysis action type |
| Agent Types | Explore, cli-explore-agent, universal-executor |
---
## Configuration Structure
```typescript
interface CodeAnalysisActionConfig {
id: string; // "analyze-structure", "explore-patterns"
name: string; // "Code Structure Analysis"
type: 'code-analysis'; // Action type identifier
// Analysis scope
scope: {
paths: string[]; // Target paths
patterns: string[]; // Glob patterns
excludes?: string[]; // Exclude patterns
};
// Analysis type
analysis_type: 'structure' | 'patterns' | 'dependencies' | 'quality' | 'security';
// Agent configuration
agent: {
type: 'Explore' | 'cli-explore-agent' | 'universal-executor';
thoroughness: 'quick' | 'medium' | 'very thorough';
};
// Output configuration
output: {
format: 'json' | 'markdown';
file: string;
};
// MCP tool enhancement
mcp_tools?: string[]; // ['mcp__ace-tool__search_context']
}
```
---
## Template Generation Function
```javascript
function generateCodeAnalysisAction(config) {
const { id, scope, analysis_type, agent, output, mcp_tools = [] } = config;
return `
## Action: ${id}
### Analysis Scope
- **Paths**: ${scope.paths.join(', ')}
- **Patterns**: ${scope.patterns.join(', ')}
${scope.excludes ? `- **Excludes**: ${scope.excludes.join(', ')}` : ''}
### Execution Logic
\`\`\`javascript
async function execute${toPascalCase(id)}(context) {
const workDir = context.workDir;
const results = [];
// 1. File discovery
const files = await discoverFiles({
paths: ${JSON.stringify(scope.paths)},
patterns: ${JSON.stringify(scope.patterns)},
excludes: ${JSON.stringify(scope.excludes || [])}
});
console.log(\`Found \${files.length} files to analyze\`);
// 2. Semantic search using MCP tools (if configured)
${mcp_tools.length > 0 ? `
const semanticResults = await mcp__ace_tool__search_context({
project_root_path: context.projectRoot,
query: '${getQueryForAnalysisType(analysis_type)}'
});
results.push({ type: 'semantic', data: semanticResults });
` : '// No MCP tools configured'}
// 3. Launch an Agent for in-depth analysis
const agentResult = await Task({
subagent_type: '${agent.type}',
prompt: \`
${generateAgentPrompt(analysis_type, scope)}
\`,
run_in_background: false
});
results.push({ type: 'agent', data: agentResult });
// 4. Aggregate results
const summary = aggregateResults(results);
// 5. Output results
const outputPath = \`\${workDir}/${output.file}\`;
${output.format === 'json'
? `Write(outputPath, JSON.stringify(summary, null, 2));`
: `Write(outputPath, formatAsMarkdown(summary));`}
return {
success: true,
analysis_type: '${analysis_type}'
};
}
\`\`\`
`;
}
function getQueryForAnalysisType(type) {
function generateAgentPrompt(type, scope) {
const prompts = {
structure: `Analyze the code structure of the following paths:
${scope.paths.map(p => `- ${p}`).join('\\n')}
Tasks:
1. Identify main modules and entry points
2. Analyze the directory organization structure
3. Extract module import/export relationships
4. Generate a structure overview diagram (Mermaid)
Output format: JSON
{
"modules": [...],
"entry_points": [...],
"structure_diagram": "mermaid code"
}`,
patterns: `Analyze design patterns in the following paths:
${scope.paths.map(p => `- ${p}`).join('\\n')}
Tasks:
1. Identify the design patterns used (Factory, Strategy, Observer, etc.)
2. Analyze abstraction levels
3. Evaluate the appropriateness of pattern usage
4. Extract reusable pattern instances
Output format: JSON
{
"patterns": [{ "name": "...", "location": "...", "usage": "..." }],
"abstractions": [...],
"reusable_components": [...]
}`,
dependencies: `Analyze dependencies in the following paths:
${scope.paths.map(p => `- ${p}`).join('\\n')}
Tasks:
1. Extract internal module dependencies
2. Identify external package dependencies
3. Analyze the degree of coupling
4. Detect circular dependencies
Output format: JSON
{
"internal_deps": [...],
"external_deps": [...],
"coupling_score": 0-100,
"circular_deps": [...]
}`,
quality: `Analyze code quality in the following paths:
${scope.paths.map(p => `- ${p}`).join('\\n')}
Tasks:
1. Assess code complexity
2. Check test coverage
3. Analyze documentation completeness
4. Identify technical debt
Output format: JSON
{
"complexity": { "avg": 0, "max": 0, "hotspots": [...] },
"test_coverage": { "percentage": 0, "gaps": [...] },
"documentation": { "score": 0, "missing": [...] },
"tech_debt": [...]
}`,
security: `Analyze security in the following paths:
${scope.paths.map(p => `- ${p}`).join('\\n')}
Tasks:
1. Check the authentication/authorization implementation
2. Analyze input validation
3. Detect sensitive data handling
4. Identify common vulnerability patterns
Output format: JSON
{
"auth": { "methods": [...], "issues": [...] },
"input_validation": { "coverage": 0, "gaps": [...] },
"sensitive_data": { "found": [...], "protected": true/false },
"vulnerabilities": [{ "type": "...", "severity": "...", "location": "..." }]
}`
};
return prompts[type] || prompts.structure;
}
```
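As a sanity check, generating one action file from a minimal config might look like this (hypothetical values, matching the first preset in the next section):

```javascript
// Hypothetical: generate the markdown for a structure-analysis action
const actionMarkdown = generateCodeAnalysisAction({
  id: 'analyze-structure',
  name: 'Code Structure Analysis',
  type: 'code-analysis',
  scope: { paths: ['src/'], patterns: ['**/*.ts'] },
  analysis_type: 'structure',
  agent: { type: 'Explore', thoroughness: 'medium' },
  output: { format: 'json', file: 'structure-analysis.json' },
  mcp_tools: ['mcp__ace-tool__search_context']
});
Write('.claude/skills/my-skill/phases/actions/analyze-structure.md', actionMarkdown);
```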
---
## Preset Code Analysis Actions
### 1. Project Structure Analysis
```yaml
id: analyze-project-structure
name: Project Structure Analysis
type: code-analysis
file: structure-analysis.json
mcp_tools:
- mcp__ace-tool__search_context
```
### 2. Design Pattern Extraction
```yaml
id: extract-design-patterns
name: Design Pattern Extraction
type: code-analysis
output:
format: markdown
file: patterns-report.md
```
### 3. Dependency Analysis
```yaml
id: analyze-dependencies
name: Dependency Analysis
type: code-analysis
output:
format: json
file: dependency-graph.json
```
### 4. Security Audit
```yaml
id: security-audit
name: Security Audit
type: code-analysis
file: security-report.json
mcp_tools:
- mcp__ace-tool__search_context
```
---
## Usage Examples
### Using in a Phase
```javascript
// phases/01-code-exploration.md
const analysisConfig = {
}
};
// Execute
const result = await executeCodeAnalysis(analysisConfig, context);
```
### Combining Multiple Analyses
```javascript
// Serial execution of multiple analyses
const analyses = [
{ type: 'structure', file: 'structure.json' },
{ type: 'patterns', file: 'patterns.json' },
}, context);
}
// Parallel execution (independent analyses)
const parallelResults = await Promise.all(
analyses.map(a => executeCodeAnalysis({
...baseConfig,
output: { format: 'json', file: a.file }
}, context))
);
```
---
## Agent Selection Guide
| Analysis Type | Recommended Agent | Thoroughness | Reason |
|---------------|-------------------|--------------|--------|
| structure | Explore | medium | Quick directory structure retrieval |
| patterns | cli-explore-agent | very thorough | Requires deep code understanding |
| dependencies | Explore | medium | Mainly analyzes import statements |
| quality | universal-executor | medium | Requires running analysis tools |
| security | universal-executor | very thorough | Requires comprehensive scanning |
---
## MCP Tool Integration
### Semantic Search Enhancement
```javascript
// Use the ACE tool for semantic search
const semanticContext = await mcp__ace_tool__search_context({
project_root_path: projectRoot,
query: 'authentication logic, user session management'
});
// Use the semantic search results as input context for the Agent
const agentResult = await Task({
subagent_type: 'Explore',
prompt: `
Based on the following semantic search results, perform an in-depth analysis:
${semanticContext}
Task: Analyze the implementation details of the authentication logic...
`,
run_in_background: false
});
```
### smart_search Integration
```javascript
// Use smart_search for exact matching
const exactMatches = await mcp__ccw_tools__smart_search({
action: 'search',
query: 'class.*Controller',
path: 'src/'
});
// Use find_files for file discovery
const configFiles = await mcp__ccw_tools__smart_search({
action: 'find_files',
pattern: '**/*.config.ts',
path: 'src/'
});
```
---
## Results Aggregation
```javascript
function aggregateResults(results) {
const aggregated = {
timestamp: new Date().toISOString(),
}
function extractKeyFindings(agentResult) {
// Extract key findings from the Agent result
// Implementation depends on the Agent output format
return {
modules: agentResult.modules?.length || 0,
patterns: agentResult.patterns?.length || 0,
issues: agentResult.issues?.length || 0
};
}
```
---
## 最佳实践
## Best Practices
1. **范围控制**
- 使用精确的 patterns 减少分析范围
- 配置 excludes 排除无关文件
1. **Scope Control**
- Use precise patterns to reduce analysis scope
- Configure excludes to ignore irrelevant files
2. **Agent 选择**
- 快速探索用 Explore
- 深度分析用 cli-explore-agent
- 需要执行操作用 universal-executor
2. **Agent Selection**
- Use Explore for quick exploration
- Use cli-explore-agent for in-depth analysis
- Use universal-executor when execution is required
3. **MCP Tool Combination**
- First use mcp__ace-tool__search_context for semantic context
- Then use Agent for in-depth analysis
- Finally use smart_search for exact matching
4. **Result Caching**
- Persist analysis results to workDir
- Subsequent phases can read directly, avoiding re-analysis
5. **Brief Returns**
- Agent returns path + summary, not full content
- Prevents context overflow
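A minimal sketch combining practices 4 and 5, assuming the `Write`/`Read` helpers used throughout this document (function names are illustrative):
\`\`\`javascript
// Persist the full result to workDir; return only a path + brief summary (illustrative)
function cacheAnalysis(workDir, name, result) {
  const path = \`\${workDir}/\${name}.json\`;
  Write(path, JSON.stringify(result, null, 2));
  return {
    path,
    summary: \`\${result.issues?.length || 0} issues across \${result.modules?.length || 0} modules\`
  };
}

// Later phases read the cached file instead of re-running the analysis
function loadCachedAnalysis(path) {
  return JSON.parse(Read(path));
}
\`\`\`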

View File

@@ -1,44 +1,56 @@
# LLM Action Template
LLM action template for integrating LLM call capabilities into a Skill.
## Purpose
Generate LLM actions for a Skill that call Gemini/Qwen/Codex through the CCW CLI unified interface for analysis or generation.
## Usage Context
| Phase | Usage |
|-------|-------|
| Optional | Use when Skill requires LLM capabilities |
| Generation Trigger | User selects to add llm action type |
| Tools | gemini, qwen, codex (supports fallback chain) |
---
## Configuration Structure
```typescript
interface LLMActionConfig {
id: string; // "llm-analyze", "llm-generate"
name: string; // "LLM Analysis"
type: 'llm'; // Action type identifier
// LLM tool config
tool: {
primary: 'gemini' | 'qwen' | 'codex';
fallback_chain: string[]; // ['gemini', 'qwen', 'codex']
};
// Execution mode
mode: 'analysis' | 'write';
// Prompt config
prompt: {
template: string; // Prompt template path or inline
variables: string[]; // Variables to replace
};
// Input/Output
input: string[]; // Dependent context files
output: string; // Output file path
// Timeout config
timeout?: number; // Milliseconds, default 600000 (10min)
}
```
---
## Template Generation Function
```javascript
function generateLLMAction(config) {
@@ -49,25 +61,25 @@ function generateLLMAction(config) {
## Action: ${id}
### Execution Logic
\`\`\`javascript
async function execute${toPascalCase(id)}(context) {
const workDir = context.workDir;
const state = context.state;
// 1. Collect input context
const inputContext = ${JSON.stringify(input)}.map(f => {
const path = \`\${workDir}/\${f}\`;
return Read(path);
}).join('\\n\\n---\\n\\n');
// 2. Build prompt
const promptTemplate = \`${prompt.template}\`;
const finalPrompt = promptTemplate
${prompt.variables.map(v => `.replace('{{${v}}}', context.${v} || '')`).join('\n ')};
// 3. Execute LLM call (with fallback)
const tools = ['${tool.primary}', ${tool.fallback_chain.map(t => `'${t}'`).join(', ')}];
let result = null;
let usedTool = null;
@@ -86,10 +98,10 @@ async function execute${toPascalCase(id)}(context) {
throw new Error('All LLM tools failed');
}
// 4. Save result
Write(\`\${workDir}/${output}\`, result);
// 5. Update state
state.llm_calls = (state.llm_calls || 0) + 1;
state.last_llm_tool = usedTool;
@@ -100,38 +112,38 @@ async function execute${toPascalCase(id)}(context) {
};
}
// LLM call wrapper
async function callLLM(tool, prompt, mode, timeout) {
const modeFlag = mode === 'write' ? '--mode write' : '--mode analysis';
// Use CCW CLI unified interface
const command = \`ccw cli -p "\${escapePrompt(prompt)}" --tool \${tool} \${modeFlag}\`;
const result = Bash({
command,
timeout,
run_in_background: true // Async execution
});
// Wait for completion
return await waitForResult(result.task_id, timeout);
}
function escapePrompt(prompt) {
// Escape double quotes and special characters
return prompt.replace(/"/g, '\\\\"').replace(/\$/g, '\\\\$');
}
\`\`\`
### Prompt Template
\`\`\`
${prompt.template}
\`\`\`
### Variable Descriptions
${prompt.variables.map(v => `- \`{{${v}}}\`: ${v} variable`).join('\n')}
`;
}
@@ -142,11 +154,11 @@ function toPascalCase(str) {
---
## Preset LLM Action Templates
### 1. Code Analysis Action
\`\`\`yaml
id: llm-code-analysis
name: LLM Code Analysis
type: llm
@@ -156,15 +168,15 @@ tool:
mode: analysis
prompt:
template: |
PURPOSE: Analyze code structure and patterns, extract key design features
TASK:
Identify main modules and components
Analyze dependencies
Extract design patterns
Evaluate code quality
MODE: analysis
CONTEXT: {{code_context}}
EXPECTED: JSON formatted analysis report with modules, dependencies, patterns, quality_score
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md)
variables:
- code_context
@@ -172,11 +184,11 @@ input:
- collected-code.md
output: analysis-report.json
timeout: 900000
\`\`\`
### 2. Documentation Generation Action
\`\`\`yaml
id: llm-doc-generation
name: LLM Documentation Generation
type: llm
@@ -186,15 +198,15 @@ tool:
mode: write
prompt:
template: |
PURPOSE: Generate high-quality documentation based on analysis results
TASK:
Generate documentation outline based on analysis report
Populate chapter content
Add code examples and explanations
Generate Mermaid diagrams
MODE: write
CONTEXT: {{analysis_report}}
EXPECTED: Complete Markdown documentation with table of contents, chapters, diagrams
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md)
variables:
- analysis_report
@@ -202,11 +214,11 @@ input:
- analysis-report.json
output: generated-doc.md
timeout: 1200000
\`\`\`
### 3. Code Refactoring Suggestions Action
\`\`\`yaml
id: llm-refactor-suggest
name: LLM Refactoring Suggestions
type: llm
@@ -216,15 +228,15 @@ tool:
mode: analysis
prompt:
template: |
PURPOSE: Analyze code and provide refactoring suggestions
TASK:
Identify code smells
Evaluate complexity hotspots
Propose specific refactoring plans
Estimate refactoring impact scope
MODE: analysis
CONTEXT: {{source_code}}
EXPECTED: List of refactoring suggestions with location, issue, suggestion, impact fields
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md)
variables:
- source_code
@@ -232,15 +244,15 @@ input:
- source-files.md
output: refactor-suggestions.json
timeout: 600000
\`\`\`
---
## Usage Examples
### Using LLM Actions in Phase
\`\`\`javascript
// phases/02-llm-analysis.md
const llmConfig = {
@@ -253,39 +265,39 @@ const llmConfig = {
},
mode: 'analysis',
prompt: {
template: \`
PURPOSE: Analyze design patterns of existing Skills
TASK:
Extract Skill structure specification
Identify Phase organization patterns
Analyze Agent invocation patterns
MODE: analysis
CONTEXT: {{skill_source}}
EXPECTED: Structured design pattern analysis
\`,
variables: ['skill_source']
},
input: ['collected-skills.md'],
output: 'skill-patterns.json'
};
// Execute
const result = await executeLLMAction(llmConfig, {
workDir: '.workflow/.scratchpad/skill-gen-xxx',
skill_source: Read('.workflow/.scratchpad/skill-gen-xxx/collected-skills.md')
});
\`\`\`
### Scheduling LLM Actions in Orchestrator
\`\`\`javascript
// Schedule LLM actions in autonomous-orchestrator
const actions = [
{ type: 'collect', priority: 100 },
{ type: 'llm', id: 'llm-analyze', priority: 90 }, // LLM analysis
{ type: 'process', priority: 80 },
{ type: 'llm', id: 'llm-generate', priority: 70 }, // LLM generation
{ type: 'validate', priority: 60 }
];
@@ -298,13 +310,13 @@ for (const action of sortByPriority(actions)) {
context.state[action.id] = llmResult;
}
}
\`\`\`
---
## Error Handling
\`\`\`javascript
async function executeLLMActionWithRetry(config, context, maxRetries = 3) {
let lastError = null;
@@ -313,43 +325,43 @@ async function executeLLMActionWithRetry(config, context, maxRetries = 3) {
return await executeLLMAction(config, context);
} catch (error) {
lastError = error;
console.log(\`Attempt ${attempt} failed: ${error.message}\`);
// Exponential backoff
if (attempt < maxRetries) {
await sleep(Math.pow(2, attempt) * 1000);
}
}
}
// All retries failed
return {
success: false,
error: lastError.message,
fallback: 'manual_review_required'
};
}
\`\`\`
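The retry wrapper above assumes a `sleep` helper; a minimal sketch (not part of the template's required API):
\`\`\`javascript
// Promise-based sleep used by the exponential backoff above (illustrative)
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
\`\`\`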
---
## Best Practices
1. **Select Appropriate Tool**
- Analysis tasks: Gemini (large context) > Qwen
- Generation tasks: Codex (autonomous execution) > Gemini > Qwen
- Code modification: Codex > Gemini
2. **Configure Fallback Chain**
- Always configure at least one fallback
- Consider tool characteristics when ordering fallbacks
3. **Timeout Settings**
- Analysis tasks: 10-15 minutes
- Generation tasks: 15-20 minutes
- Complex tasks: 20-60 minutes
4. **Prompt Design**
- Use PURPOSE/TASK/MODE/CONTEXT/EXPECTED/RULES structure
- Reference standard protocol templates
- Clearly specify output format requirements
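Putting these practices together, a hedged example instance of the `LLMActionConfig` interface defined earlier (all values are illustrative):
\`\`\`javascript
// Example LLMActionConfig (illustrative values only)
const exampleConfig = {
  id: 'llm-analyze',
  name: 'LLM Analysis',
  type: 'llm',
  tool: {
    primary: 'gemini',                 // Large context suits analysis tasks
    fallback_chain: ['qwen', 'codex']  // At least one fallback, ordered by fit
  },
  mode: 'analysis',
  prompt: {
    template: 'PURPOSE/TASK/MODE/CONTEXT/EXPECTED/RULES structured prompt with {{code_context}}',
    variables: ['code_context']
  },
  input: ['collected-code.md'],
  output: 'analysis-report.json',
  timeout: 900000                      // 15 min, within the 10-15 min analysis guideline
};
\`\`\`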

View File

@@ -1,277 +0,0 @@
# Bash Script Template
Bash script template for generating deterministic scripts in skills.
## Template Code
```bash
#!/bin/bash
# {{script_description}}
set -euo pipefail
# ============================================================
# Parameter Parsing
# ============================================================
INPUT_PATH=""
OUTPUT_DIR="" # 由调用方指定,不设默认值
show_help() {
echo "用法: $0 --input-path <path> --output-dir <dir>"
echo ""
echo "参数:"
echo " --input-path 输入文件路径 (必需)"
echo " --output-dir 输出目录 (必需,由调用方指定)"
echo " --help 显示帮助信息"
}
while [[ "$#" -gt 0 ]]; do
case $1 in
--input-path)
INPUT_PATH="$2"
shift
;;
--output-dir)
OUTPUT_DIR="$2"
shift
;;
--help)
show_help
exit 0
;;
*)
echo "错误: 未知参数 $1" >&2
show_help >&2
exit 1
;;
esac
shift
done
# ============================================================
# Parameter Validation
# ============================================================
if [[ -z "$INPUT_PATH" ]]; then
echo "错误: --input-path 是必需参数" >&2
exit 1
fi
if [[ -z "$OUTPUT_DIR" ]]; then
echo "错误: --output-dir 是必需参数" >&2
exit 1
fi
if [[ ! -f "$INPUT_PATH" ]]; then
echo "错误: 输入文件不存在: $INPUT_PATH" >&2
exit 1
fi
# Check that jq is available (used for JSON output)
if ! command -v jq &> /dev/null; then
echo "错误: 需要安装 jq" >&2
exit 1
fi
mkdir -p "$OUTPUT_DIR"
# ============================================================
# Core Logic
# ============================================================
OUTPUT_FILE="$OUTPUT_DIR/result.txt"
ITEMS_COUNT=0
# TODO: Implement processing logic
# Example: process the input file
while IFS= read -r line; do
echo "$line" >> "$OUTPUT_FILE"
((ITEMS_COUNT++))
done < "$INPUT_PATH"
# ============================================================
# Output JSON Result (built with jq to avoid special-character issues)
# ============================================================
jq -n \
--arg output_file "$OUTPUT_FILE" \
--argjson items_processed "$ITEMS_COUNT" \
'{output_file: $output_file, items_processed: $items_processed, status: "success"}'
```
## Variable Descriptions
| Variable | Description |
|----------|-------------|
| `{{script_description}}` | Description of what the script does |
## Usage Conventions
### Script Header
```bash
#!/bin/bash
set -euo pipefail  # Strict mode: exit on error, error on undefined variables, propagate pipe failures
```
### Parameter Parsing Pattern
```bash
while [[ "$#" -gt 0 ]]; do
case $1 in
--param-name)
PARAM_VAR="$2"
shift
;;
--flag)
FLAG_VAR=true
;;
*)
echo "Unknown: $1" >&2
exit 1
;;
esac
shift
done
```
### Output Format
- Print a single-line JSON object as the last line
- **`jq` is strongly recommended**: it handles escaping and types automatically
```bash
# Recommended: build with jq (safe and reliable)
jq -n \
--arg file "$FILE" \
--argjson count "$COUNT" \
'{output_file: $file, items_processed: $count}'
# Alternative: manual concatenation for simple cases (mind special-character escaping)
echo "{\"file\": \"$FILE\", \"count\": $COUNT}"
```
**jq argument types**
- `--arg name value`: string
- `--argjson name value`: number/boolean/null
### Error Handling
```bash
# Validation error
if [[ -z "$PARAM" ]]; then
echo "错误: 参数不能为空" >&2
exit 1
fi
# Missing command
if ! command -v jq &> /dev/null; then
echo "错误: 需要安装 jq" >&2
exit 1
fi
# Runtime error
if ! some_command; then
echo "错误: 命令执行失败" >&2
exit 1
fi
```
## Common Patterns
### File Iteration
```bash
for file in "$INPUT_DIR"/*.json; do
[[ -f "$file" ]] || continue
echo "处理: $file"
# 处理逻辑...
done
```
### Temp Files
```bash
TEMP_FILE=$(mktemp)
trap "rm -f $TEMP_FILE" EXIT
echo "data" > "$TEMP_FILE"
```
### Invoking Other Tools
```bash
# Check that a tool exists
require_command() {
if ! command -v "$1" &> /dev/null; then
echo "错误: 需要 $1" >&2
exit 1
fi
}
require_command jq
require_command curl
```
### JSON Processing (with jq)
```bash
# Read a JSON field
VALUE=$(jq -r '.field' "$INPUT_PATH")
# Modify JSON
jq '.field = "new_value"' "$INPUT_PATH" > "$OUTPUT_FILE"
# Merge JSON files
jq -s 'add' file1.json file2.json > merged.json
```
## Generation Function
```javascript
function generateBashScript(scriptConfig) {
return `#!/bin/bash
# ${scriptConfig.description}
set -euo pipefail
# Parameter definitions
${scriptConfig.inputs.map(i =>
`${i.name.toUpperCase().replace(/-/g, '_')}="${i.default || ''}"`
).join('\n')}
# Parameter parsing
while [[ "$#" -gt 0 ]]; do
case $1 in
${scriptConfig.inputs.map(i =>
` --${i.name})
${i.name.toUpperCase().replace(/-/g, '_')}="$2"
shift
;;`
).join('\n')}
*)
echo "未知参数: $1" >&2
exit 1
;;
esac
shift
done
# Parameter validation
${scriptConfig.inputs.filter(i => i.required).map(i =>
`if [[ -z "$${i.name.toUpperCase().replace(/-/g, '_')}" ]]; then
echo "错误: --${i.name} 是必需参数" >&2
exit 1
fi`
).join('\n\n')}
# TODO: Implement processing logic
# Output result
echo "{${scriptConfig.outputs.map(o =>
`\\"${o.name}\\": \\"\\$${o.name.toUpperCase().replace(/-/g, '_')}\\"`
).join(', ')}}"
`;
}
```

View File

@@ -1,198 +0,0 @@
# Python Script Template
Python script template for generating deterministic scripts in skills.
## Template Code
```python
#!/usr/bin/env python3
"""
{{script_description}}
"""
import argparse
import json
import sys
from pathlib import Path
def main():
# 1. Define arguments
parser = argparse.ArgumentParser(description='{{script_description}}')
parser.add_argument('--input-path', type=str, required=True,
help='Input file path')
parser.add_argument('--output-dir', type=str, required=True,
help='Output directory (specified by the caller)')
# Add more arguments...
args = parser.parse_args()
# 2. Validate input
input_path = Path(args.input_path)
if not input_path.exists():
print(f"错误: 输入文件不存在: {input_path}", file=sys.stderr)
sys.exit(1)
output_dir = Path(args.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
# 3. Execute core logic
try:
result = process(input_path, output_dir)
except Exception as e:
print(f"错误: {e}", file=sys.stderr)
sys.exit(1)
# 4. Output JSON result
print(json.dumps(result))
def process(input_path: Path, output_dir: Path) -> dict:
"""
Core processing logic
Args:
    input_path: Input file path
    output_dir: Output directory
Returns:
    dict: Dictionary containing the output results
"""
# TODO: Implement processing logic
output_file = output_dir / 'result.json'
# Example: read and process data
with open(input_path, 'r', encoding='utf-8') as f:
data = json.load(f)
# Process the data...
processed_count = len(data) if isinstance(data, list) else 1
# Write output
with open(output_file, 'w', encoding='utf-8') as f:
json.dump(data, f, indent=2, ensure_ascii=False)
return {
'output_file': str(output_file),
'items_processed': processed_count,
'status': 'success'
}
if __name__ == '__main__':
main()
```
## Variable Descriptions
| Variable | Description |
|----------|-------------|
| `{{script_description}}` | Description of what the script does |
## Usage Conventions
### Input Arguments
- Define arguments with `argparse`
- Use kebab-case argument names: `--input-path`
- Set `required=True` for required arguments
- Provide a `default` for optional arguments
### Output Format
- Print a single-line JSON object as the last line
- Include all output file paths and key data
- Write error messages to stderr
### Error Handling
```python
# Validation error - exit immediately
if not valid:
print("错误信息", file=sys.stderr)
sys.exit(1)
# Runtime error - catch and exit
try:
result = process()
except Exception as e:
print(f"错误: {e}", file=sys.stderr)
sys.exit(1)
```
## Common Patterns
### File Processing
```python
def process_files(input_dir: Path, pattern: str = '*.json') -> list:
results = []
for file in input_dir.glob(pattern):
with open(file, 'r') as f:
data = json.load(f)
results.append({'file': str(file), 'data': data})
return results
```
### Data Transformation
```python
from datetime import datetime

def transform_data(data: dict) -> dict:
return {
'id': data.get('id'),
'name': data.get('name', '').strip(),
'timestamp': datetime.now().isoformat()
}
```
### Invoking External Commands
```python
import subprocess
def run_command(cmd: list) -> str:
result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode != 0:
raise RuntimeError(result.stderr)
return result.stdout
```
## Generation Function
```javascript
function generatePythonScript(scriptConfig) {
return `#!/usr/bin/env python3
"""
${scriptConfig.description}
"""
import argparse
import json
import sys
from pathlib import Path
def main():
parser = argparse.ArgumentParser(description='${scriptConfig.description}')
${scriptConfig.inputs.map(i =>
` parser.add_argument('--${i.name}', type=${i.type || 'str'}, ${i.required ? 'required=True' : `default='${i.default}'`},
help='${i.description}')`
).join('\n')}
args = parser.parse_args()
# TODO: Implement processing logic
result = {
${scriptConfig.outputs.map(o =>
` '${o.name}': None # ${o.description}`
).join(',\n')}
}
print(json.dumps(result))
if __name__ == '__main__':
main()
`;
}
```

View File

@@ -0,0 +1,368 @@
# Script Template
Unified script template covering both Bash and Python runtimes.
## Usage Context
| Phase | Usage |
|-------|-------|
| Optional | Use when declaring `## Scripts` in Phase/Action |
| Execution | Invoke via `ExecuteScript('script-id', params)` |
| Output Location | `.claude/skills/{skill-name}/scripts/{script-id}.{ext}` |
---
## Invocation Interface Specification
All scripts share the same calling convention:
```
Caller
| ExecuteScript('script-id', { key: value })
|
Script Entry
├─ Parameter parsing (--key value)
├─ Input validation (required parameter checks, file exists)
├─ Core processing (data read -> transform -> write)
└─ Output result (last line: single-line JSON -> stdout)
├─ Success: {"status":"success", "output_file":"...", ...}
└─ Failure: error message to stderr, exit 1
```
### Return Format
```typescript
interface ScriptResult {
success: boolean; // exit code === 0
stdout: string; // Standard output
stderr: string; // Standard error
outputs: object; // JSON output parsed from stdout last line
}
```
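A minimal sketch of how a runner might assemble this result from a finished process, assuming the single-line-JSON convention above (the function name is illustrative):
```javascript
// Build a ScriptResult from a finished process (illustrative)
function toScriptResult(exitCode, stdout, stderr) {
  const lines = stdout.trim().split('\n');
  let outputs = {};
  try {
    // By convention, outputs live on the last stdout line
    outputs = JSON.parse(lines[lines.length - 1] || '{}');
  } catch (e) {
    // Last line was not valid JSON; leave outputs empty
  }
  return { success: exitCode === 0, stdout, stderr, outputs };
}
```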
### Parameter Convention
| Parameter | Required | Description |
|-----------|----------|-------------|
| `--input-path` | Yes | Input file path |
| `--output-dir` | Yes | Output directory (specified by caller) |
| Others | Optional | Script-specific parameters |
---
## Bash Implementation
```bash
#!/bin/bash
# {{script_description}}
set -euo pipefail
# ============================================================
# Parameter Parsing
# ============================================================
INPUT_PATH=""
OUTPUT_DIR=""
while [[ "$#" -gt 0 ]]; do
case $1 in
--input-path) INPUT_PATH="$2"; shift ;;
--output-dir) OUTPUT_DIR="$2"; shift ;;
--help)
echo "Usage: $0 --input-path <path> --output-dir <dir>"
exit 0
;;
*)
echo "Error: Unknown parameter $1" >&2
exit 1
;;
esac
shift
done
# ============================================================
# Parameter Validation
# ============================================================
[[ -z "$INPUT_PATH" ]] && { echo "Error: --input-path is required parameter" >&2; exit 1; }
[[ -z "$OUTPUT_DIR" ]] && { echo "Error: --output-dir is required parameter" >&2; exit 1; }
[[ ! -f "$INPUT_PATH" ]] && { echo "Error: Input file does not exist: $INPUT_PATH" >&2; exit 1; }
command -v jq &> /dev/null || { echo "Error: jq is required" >&2; exit 1; }
mkdir -p "$OUTPUT_DIR"
# ============================================================
# Core Logic
# ============================================================
OUTPUT_FILE="$OUTPUT_DIR/result.txt"
ITEMS_COUNT=0
# TODO: Implement processing logic
while IFS= read -r line; do
echo "$line" >> "$OUTPUT_FILE"
((ITEMS_COUNT++))
done < "$INPUT_PATH"
# ============================================================
# Output JSON Result (use jq to build, avoid escaping issues)
# ============================================================
jq -n \
--arg output_file "$OUTPUT_FILE" \
--argjson items_processed "$ITEMS_COUNT" \
'{output_file: $output_file, items_processed: $items_processed, status: "success"}'
```
### Bash Common Patterns
```bash
# File iteration
for file in "$INPUT_DIR"/*.json; do
[[ -f "$file" ]] || continue
# Processing logic...
done
# Temp file (auto cleanup)
TEMP_FILE=$(mktemp)
trap "rm -f $TEMP_FILE" EXIT
# Tool dependency check
require_command() {
command -v "$1" &> /dev/null || { echo "Error: $1 required" >&2; exit 1; }
}
require_command jq
# jq processing
VALUE=$(jq -r '.field' "$INPUT_PATH") # Read field
jq '.field = "new"' input.json > output.json # Modify field
jq -s 'add' file1.json file2.json > merged.json # Merge files
```
---
## Python Implementation
```python
#!/usr/bin/env python3
"""
{{script_description}}
"""
import argparse
import json
import sys
from pathlib import Path
def main():
parser = argparse.ArgumentParser(description='{{script_description}}')
parser.add_argument('--input-path', type=str, required=True, help='Input file path')
parser.add_argument('--output-dir', type=str, required=True, help='Output directory')
args = parser.parse_args()
# Validate input
input_path = Path(args.input_path)
if not input_path.exists():
print(f"Error: Input file does not exist: {input_path}", file=sys.stderr)
sys.exit(1)
output_dir = Path(args.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
# Execute processing
try:
result = process(input_path, output_dir)
except Exception as e:
print(f"Error: {e}", file=sys.stderr)
sys.exit(1)
# Output JSON result
print(json.dumps(result))
def process(input_path: Path, output_dir: Path) -> dict:
"""Core processing logic"""
# TODO: Implement processing logic
output_file = output_dir / 'result.json'
with open(input_path, 'r', encoding='utf-8') as f:
data = json.load(f)
processed_count = len(data) if isinstance(data, list) else 1
with open(output_file, 'w', encoding='utf-8') as f:
json.dump(data, f, indent=2, ensure_ascii=False)
return {
'output_file': str(output_file),
'items_processed': processed_count,
'status': 'success'
}
if __name__ == '__main__':
main()
```
### Python Common Patterns
```python
# File iteration
def process_files(input_dir: Path, pattern: str = '*.json') -> list:
return [
{'file': str(f), 'data': json.load(f.open())}
for f in input_dir.glob(pattern)
]
# Data transformation
from datetime import datetime

def transform(data: dict) -> dict:
return {
'id': data.get('id'),
'name': data.get('name', '').strip(),
'timestamp': datetime.now().isoformat()
}
# External command invocation
import subprocess
def run_command(cmd: list) -> str:
result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode != 0:
raise RuntimeError(result.stderr)
return result.stdout
```
---
## Runtime Selection Guide
```
Task Characteristics
|
├─ File processing / system commands / pipeline operations
│ └─ Choose Bash (.sh)
├─ JSON data processing / complex transformation / data analysis
│ └─ Choose Python (.py)
└─ Simple read/write / format conversion
└─ Either (Bash is lighter)
```
---
## Generation Function
```javascript
function generateScript(scriptConfig) {
const runtime = scriptConfig.runtime || 'bash'; // 'bash' | 'python'
const ext = runtime === 'python' ? '.py' : '.sh';
if (runtime === 'python') {
return generatePythonScript(scriptConfig);
}
return generateBashScript(scriptConfig);
}
function generateBashScript(scriptConfig) {
const { description, inputs = [], outputs = [] } = scriptConfig;
const paramDefs = inputs.map(i =>
`${i.name.toUpperCase().replace(/-/g, '_')}="${i.default || ''}"`
).join('\n');
const paramParse = inputs.map(i =>
` --${i.name}) ${i.name.toUpperCase().replace(/-/g, '_')}="$2"; shift ;;`
).join('\n');
const paramValidation = inputs.filter(i => i.required).map(i => {
const VAR = i.name.toUpperCase().replace(/-/g, '_');
return `[[ -z "$${VAR}" ]] && { echo "Error: --${i.name} is a required parameter" >&2; exit 1; }`;
}).join('\n');
return `#!/bin/bash
# ${description}
set -euo pipefail
${paramDefs}
while [[ "$#" -gt 0 ]]; do
case $1 in
${paramParse}
*) echo "Unknown parameter: $1" >&2; exit 1 ;;
esac
shift
done
${paramValidation}
# TODO: Implement processing logic
# Output result (jq build)
jq -n ${outputs.map(o =>
`--arg ${o.name} "$${o.name.toUpperCase().replace(/-/g, '_')}"`
).join(' \\\n ')} \
'{${outputs.map(o => `${o.name}: $${o.name}`).join(', ')}}'
`;
}
function generatePythonScript(scriptConfig) {
const { description, inputs = [], outputs = [] } = scriptConfig;
const argDefs = inputs.map(i =>
` parser.add_argument('--${i.name}', type=${i.type || 'str'}, ${
i.required ? 'required=True' : `default='${i.default || ''}'`
}, help='${i.description || i.name}')`
).join('\n');
const resultFields = outputs.map(o =>
` '${o.name}': None # ${o.description || o.name}`
).join(',\n');
return `#!/usr/bin/env python3
"""
${description}
"""
import argparse
import json
import sys
from pathlib import Path
def main():
parser = argparse.ArgumentParser(description='${description}')
${argDefs}
args = parser.parse_args()
# TODO: Implement processing logic
result = {
${resultFields}
}
print(json.dumps(result))
if __name__ == '__main__':
main()
`;
}
```
---
## Directory Convention
```
scripts/
├── process-data.py # id: process-data, runtime: python
├── validate.sh # id: validate, runtime: bash
└── transform.js # id: transform, runtime: node
```
- **Name is ID**: Filename (without extension) = script ID
- **Extension is runtime**: `.py` -> python, `.sh` -> bash, `.js` -> node
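A minimal sketch of resolving a script ID under this convention (the extension-to-runtime mapping is the one above; `fileExists` is a hypothetical helper):
```javascript
// Resolve a script ID to its file and runtime by extension (illustrative)
const RUNTIME_BY_EXT = { '.py': 'python', '.sh': 'bash', '.js': 'node' };

function resolveScript(scriptsDir, scriptId) {
  for (const [ext, runtime] of Object.entries(RUNTIME_BY_EXT)) {
    const path = `${scriptsDir}/${scriptId}${ext}`;
    if (fileExists(path)) return { path, runtime }; // fileExists: hypothetical
  }
  throw new Error(`Script not found: ${scriptId}`);
}
```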

View File

@@ -1,17 +1,31 @@
# Sequential Phase Template
Template for Phase files in Sequential execution mode.
## Purpose
Generate Phase files for Sequential execution mode, defining fixed-order execution steps.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 3 (Phase Generation) | Generated when `config.execution_mode === 'sequential'` |
| Generation Trigger | Generate one phase file for each `config.sequential_config.phases` |
| Output Location | `.claude/skills/{skill-name}/phases/{phase-id}.md` |
---
## Important Notes
> **Phase 0 is mandatory prerequisite**: Before implementing any Phase (1, 2, 3...), Phase 0 specification review must be completed first.
>
> When generating Sequential Phase, ensure:
> 1. Phase 0 specification review step is included in SKILL.md
> 2. Each Phase file references related specification documents
> 3. Execution flow clearly marks Phase 0 as non-skippable prerequisite
## Template Structure
```markdown
# Phase {{phase_number}}: {{phase_name}}
@@ -24,14 +38,14 @@
## Input
- Dependency: `{{input_dependency}}`
- Config: `{workDir}/skill-config.json`
## Scripts
\`\`\`yaml
# Declare scripts used in this phase (optional)
# - script-id # Corresponds to scripts/script-id.py or .sh
\`\`\`
## Execution Steps
@@ -48,10 +62,10 @@
{{step_2_code}}
\`\`\`
### Step 3: Execute Script (Optional)
\`\`\`javascript
// Script execution example
// const result = await ExecuteScript('script-id', { input_path: `${workDir}/data.json` });
// if (!result.success) throw new Error(result.stderr);
// console.log(result.outputs.output_file);
@@ -71,25 +85,25 @@
{{next_phase_link}}
```
## Variable Descriptions
| Variable | Description |
|----------|-------------|
| `{{phase_number}}` | Phase number (1, 2, 3...) |
| `{{phase_name}}` | Phase name |
| `{{phase_description}}` | One-line description |
| `{{objectives}}` | List of objectives |
| `{{input_dependency}}` | Input dependency file |
| `{{step_N_name}}` | Step name |
| `{{step_N_code}}` | Step code |
| `{{output_file}}` | Output filename |
| `{{output_format}}` | Output format |
| `{{quality_checklist}}` | Quality checklist items |
| `{{next_phase_link}}` | Next phase link |
## Script Invocation Guide
### Directory Convention
```
scripts/
@@ -98,154 +112,154 @@ scripts/
└── transform.js # id: transform, runtime: node
```
- **Name is ID**: Filename (without extension) = script ID
- **Extension is runtime**: `.py` → python, `.sh` → bash, `.js` → node
### Invocation Syntax
```javascript
// Single-line invocation
const result = await ExecuteScript('script-id', { key: value });
// Check result
if (!result.success) throw new Error(result.stderr);
// Get output
const { output_file } = result.outputs;
```
### Return Format
```typescript
interface ScriptResult {
success: boolean; // exit code === 0
stdout: string; // Standard output
stderr: string; // Standard error
outputs: object; // JSON output parsed from stdout
}
```
## Phase Type Templates
### 1. Collection Phase
```markdown
# Phase 1: Requirements Collection
Collect user requirements and project configuration.
## Objective
- Collect user input
- Auto-detect project information
- Generate configuration file
## Execution Steps
### Step 1: User Interaction
\`\`\`javascript
const userInput = await AskUserQuestion({
questions: [
{
question: "请选择...",
header: "选项",
question: "Please select...",
header: "Option",
multiSelect: false,
options: [
{ label: "选项A", description: "..." },
{ label: "选项B", description: "..." }
{ label: "Option A", description: "..." },
{ label: "Option B", description: "..." }
]
}
]
});
\`\`\`
### Step 2: Auto-detection
\`\`\`javascript
// Detect project information
const packageJson = JSON.parse(Read('package.json'));
const projectName = packageJson.name;
\`\`\`
### Step 3: Generate Configuration
\`\`\`javascript
const config = {
name: projectName,
userChoice: userInput["选项"],
userChoice: userInput["Option"],
// ...
};
Write(\`${workDir}/config.json\`, JSON.stringify(config, null, 2));
\`\`\`
## Output
- **File**: \`config.json\`
- **Format**: JSON
```
### 2. Analysis Phase
```markdown
# Phase 2: Deep Analysis
Analyze code structure in depth.
## Objective
- Scan code files
- Extract key information
- Generate analysis report
## Execution Steps
### Step 1: File Scanning
\`\`\`javascript
const files = Glob('src/**/*.ts');
\`\`\`
### Step 2: Content Analysis
\`\`\`javascript
const analysisResults = [];
for (const file of files) {
const content = Read(file);
// Analysis logic
analysisResults.push({ file, /* analysis results */ });
}
\`\`\`
### Step 3: Generate Report
\`\`\`javascript
Write(\`${workDir}/analysis.json\`, JSON.stringify(analysisResults, null, 2));
\`\`\`
## Output
- **File**: \`analysis.json\`
- **Format**: JSON
```
### 3. Parallel Phase
```markdown
# Phase 3: Parallel Processing
Process multiple subtasks in parallel.
## Objective
- Launch multiple agents for parallel execution
- Collect results from each agent
- Merge outputs
## Execution Steps
### Step 1: Prepare Tasks
\`\`\`javascript
const tasks = [
@@ -255,11 +269,11 @@ const tasks = [
];
\`\`\`
### Step 2: Parallel Execution
\`\`\`javascript
const results = await Promise.all(
tasks.map(task =>
Task({
subagent_type: 'universal-executor',
run_in_background: false,
@@ -269,7 +283,7 @@ const results = await Promise.all(
);
\`\`\`
### Step 3: Merge Results
\`\`\`javascript
const merged = results.map((r, i) => ({
@@ -277,83 +291,83 @@ const merged = results.map((r, i) => ({
result: JSON.parse(r)
}));
Write(\`${workDir}/parallel-results.json\`, JSON.stringify(merged, null, 2));
\`\`\`
## Output
- **File**: \`parallel-results.json\`
- **Format**: JSON
```
### 4. Assembly Phase
```markdown
# Phase 4: Document Assembly
Assemble final output documents.
## Objective
- Read outputs from each phase
- Merge content
- Generate final document
## Execution Steps
### Step 1: Read Outputs
\`\`\`javascript
const config = JSON.parse(Read(\`${workDir}/config.json\`));
const analysis = JSON.parse(Read(\`${workDir}/analysis.json\`));
const sections = Glob(\`${workDir}/sections/*.md\`).map(f => Read(f));
\`\`\`
### Step 2: Assemble Content
\`\`\`javascript
const document = \`
# \${config.name}
## Overview
\${config.description}
## Detailed Content
\${sections.join('\\n\\n')}
\`;
\`\`\`
### Step 3: Write File
\`\`\`javascript
Write(\`${workDir}/\${config.name}-output.md\`, document);
\`\`\`
## Output
- **File**: \`{name}-output.md\`
- **Format**: Markdown
```
### 5. Validation Phase
```markdown
# Phase 5: Validation
Verify output quality.
## Objective
- Check output completeness
- Verify content quality
- Generate validation report
## Execution Steps
### Step 1: Completeness Check
\`\`\`javascript
const outputFile = \`${workDir}/\${config.name}-output.md\`;
const content = Read(outputFile);
const completeness = {
hasTitle: content.includes('# '),
@@ -362,16 +376,16 @@ const completeness = {
};
\`\`\`
### Step 2: Quality Assessment
\`\`\`javascript
const quality = {
completeness: Object.values(completeness).filter(v => v).length / 3 * 100,
// Other dimensions...
};
\`\`\`
### Step 3: Generate Report
\`\`\`javascript
const report = {
@@ -380,55 +394,55 @@ const report = {
issues: []
};
Write(\`${workDir}/validation-report.json\`, JSON.stringify(report, null, 2));
\`\`\`
## Output
- **File**: \`validation-report.json\`
- **Format**: JSON
```
## Generation Function
```javascript
function generateSequentialPhase(phaseConfig, index, phases, skillConfig) {
const prevPhase = index > 0 ? phases[index - 1] : null;
const nextPhase = index < phases.length - 1 ? phases[index + 1] : null;
return `# Phase ${index + 1}: ${phaseConfig.name}
${phaseConfig.description || `Execute ${phaseConfig.name}`}
## Objective
- ${phaseConfig.objectives?.join('\n- ') || 'TODO: Define objectives'}
## Input
- Dependency: \`${prevPhase ? prevPhase.output : 'user input'}\`
- Config: \`{workDir}/skill-config.json\`
## Execution Steps
### Step 1: Preparation
\`\`\`javascript
${prevPhase ?
`const prevOutput = JSON.parse(Read(\`${workDir}/${prevPhase.output}\`));` :
'// First phase, start from configuration'}
\`\`\`
### Step 2: Processing
\`\`\`javascript
// TODO: Implement core logic
\`\`\`
### Step 3: Output
\`\`\`javascript
Write(\`${workDir}/${phaseConfig.output}\`, JSON.stringify(result, null, 2));
\`\`\`
## Output
@@ -438,13 +452,13 @@ Write(\`\${workDir}/${phaseConfig.output}\`, JSON.stringify(result, null, 2));
## Quality Checklist
- [ ] Input validation passed
- [ ] Core logic executed successfully
- [ ] Output format correct
${nextPhase ?
`## Next Phase\n\n→ [Phase ${index + 2}: ${nextPhase.name}](${nextPhase.id}.md)` :
'## Completion\n\nThis is the final phase.'}
`;
}
```

View File

@@ -1,20 +1,34 @@
# SKILL.md Template
Template for generating new Skill entry files.
## ⚠️ 重要YAML Front Matter 规范
## Purpose
Generate the entry file (SKILL.md) for new Skills, serving as the main documentation and execution entry point for the Skill.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 2 (Structure Generation) | Create SKILL.md entry file |
| Generation Trigger | `config.execution_mode` determines architecture diagram style |
| Output Location | `.claude/skills/{skill-name}/SKILL.md` |
---
## Important: YAML Front Matter Specification
> **CRITICAL**: The SKILL.md file MUST begin with YAML front matter, meaning `---` must be the first line of the file.
>
> **Do NOT use** the following formats:
> - `# Title` followed by `## Metadata` + yaml code block
> - Any content before `---`
>
> **Correct format**: The first line MUST be `---`
## Ready-to-use Template
The following is a complete SKILL.md template. When generating, **directly copy and apply** it, replacing `{{variables}}` with actual values:
---
name: {{skill_name}}
@@ -38,9 +52,9 @@ allowed-tools: {{allowed_tools}}
---
## Mandatory Prerequisites
> **Do NOT skip**: Before performing any operations, you **must** completely read the following documents. Proceeding without reading the specifications will result in outputs that do not meet quality standards.
{{mandatory_prerequisites}}
@@ -66,31 +80,33 @@ Bash(\`mkdir -p "\${workDir}"\`);
{{output_structure}}
\`\`\`
## Reference Documents by Phase
> **Important**: Reference documents should be organized by execution phase, clearly marking when and in what scenarios they are used. Avoid listing documents in a flat manner.
{{reference_table}}
---
## Variable Descriptions
| Variable | Type | Source |
|----------|------|--------|
| `{{skill_name}}` | string | config.skill_name |
| `{{display_name}}` | string | config.display_name |
| `{{description}}` | string | config.description |
| `{{triggers}}` | string | config.triggers.join(", ") |
| `{{allowed_tools}}` | string | config.allowed_tools.join(", ") |
| `{{architecture_diagram}}` | string | Generated based on execution_mode (includes Phase 0) |
| `{{design_principles}}` | string | Generated based on execution_mode |
| `{{mandatory_prerequisites}}` | string | List of mandatory prerequisite reading documents (specs + templates) |
| `{{execution_flow}}` | string | Generated from phases/actions (Phase 0 first) |
| `{{output_location}}` | string | config.output.location |
| `{{additional_dirs}}` | string | Generated based on execution_mode |
| `{{output_structure}}` | string | Generated based on configuration |
| `{{reference_table}}` | string | Generated from file list |
## Generation Function
```javascript
function generateSkillMd(config) {
@@ -102,32 +118,32 @@ function generateSkillMd(config) {
.replace(/\{\{description\}\}/g, config.description)
.replace(/\{\{triggers\}\}/g, config.triggers.map(t => `"${t}"`).join(", "))
.replace(/\{\{allowed_tools\}\}/g, config.allowed_tools.join(", "))
.replace(/\{\{architecture_diagram\}\}/g, generateArchitecture(config)) // Includes Phase 0
.replace(/\{\{design_principles\}\}/g, generatePrinciples(config))
.replace(/\{\{mandatory_prerequisites\}\}/g, generatePrerequisites(config)) // Mandatory prerequisites
.replace(/\{\{execution_flow\}\}/g, generateFlow(config)) // Phase 0 first
.replace(/\{\{output_location\}\}/g, config.output.location)
.replace(/\{\{additional_dirs\}\}/g, generateAdditionalDirs(config))
.replace(/\{\{output_structure\}\}/g, generateOutputStructure(config))
.replace(/\{\{reference_table\}\}/g, generateReferenceTable(config));
}
// Generate mandatory prerequisites table
function generatePrerequisites(config) {
const specs = config.specs || [];
const templates = config.templates || [];
let result = '### Specification Documents (Required Reading)\n\n';
result += '| Document | Purpose | When |\n';
result += '|----------|---------|------|\n';
specs.forEach((spec, index) => {
const when = index === 0 ? '**Must read before execution**' : 'Recommended before execution';
result += `| [${spec.path}](${spec.path}) | ${spec.purpose} | ${when} |\n`;
});
if (templates.length > 0) {
result += '\n### Template Files (Must read before generation)\n\n';
result += '| Document | Purpose |\n';
result += '|----------|---------|\n';
templates.forEach(tmpl => {
@@ -137,9 +153,71 @@ function generatePrerequisites(config) {
return result;
}
// Generate phase-by-phase reference document guide
function generateReferenceTable(config) {
const phases = config.phases || config.actions || [];
const specs = config.specs || [];
const templates = config.templates || [];
let result = '';
// Generate document navigation for each execution phase
phases.forEach((phase, index) => {
const phaseNum = index + 1;
const phaseTitle = phase.display_name || phase.name;
result += `### Phase ${phaseNum}: ${phaseTitle}\n`;
result += `Documents to reference when executing Phase ${phaseNum}\n\n`;
// List documents related to this phase
const relatedDocs = filterDocsByPhase(specs, phase, index);
if (relatedDocs.length > 0) {
result += '| Document | Purpose | When to Use |\n';
result += '|----------|---------|-------------|\n';
relatedDocs.forEach(doc => {
result += `| [${doc.path}](${doc.path}) | ${doc.purpose} | ${doc.context || 'Reference content'} |\n`;
});
result += '\n';
}
});
// Troubleshooting section
result += '### Debugging & Troubleshooting\n';
result += 'Documents to reference when encountering issues\n\n';
result += '| Issue | Solution Document |\n';
result += '|-------|-------------------|\n';
result += `| Phase execution failed | Refer to the relevant Phase documentation |\n`;
result += `| Output does not meet expectations | [specs/quality-standards.md](specs/quality-standards.md) - Verify quality standards |\n`;
result += '\n';
// In-depth learning reference
result += '### Reference & Background\n';
result += 'For understanding the original implementation and design decisions\n\n';
result += '| Document | Purpose | Notes |\n';
result += '|----------|---------|-------|\n';
templates.forEach(tmpl => {
result += `| [${tmpl.path}](${tmpl.path}) | ${tmpl.purpose} | Reference during generation |\n`;
});
return result;
}
// Note: Emoji helpers were removed; use phase numbers instead.
// Helper function: Filter documents by Phase
function filterDocsByPhase(specs, phase, phaseIndex) {
// Simple filtering logic: match phase name keywords
const keywords = phase.name.toLowerCase().split('-');
return specs.filter(spec => {
const specName = spec.path.toLowerCase();
return keywords.some(kw => specName.includes(kw));
});
}
```
## Sequential Mode Example
```markdown
---
@@ -155,36 +233,33 @@ Generate API documentation from source code.
## Architecture Overview
\`\`\`
Phase 0: Specification Study (mandatory prerequisite: read and understand design specifications)
    ↓
Phase 1: Scanning   → endpoints.json
    ↓
Phase 2: Parsing    → schemas.json
    ↓
Phase 3: Generation → api-docs.md
\`\`\`
## Mandatory Prerequisites
> **Do NOT skip**: Before performing any operations, you **must** completely read the following documents.
### Specification Documents (Required Reading)
| Document | Purpose | Priority |
|----------|---------|----------|
| [specs/api-standards.md](specs/api-standards.md) | API documentation standards specification | **P0 - Highest** |
### Template Files (Must read before generation)
| Document | Purpose |
|----------|---------|
| [templates/endpoint-doc.md](templates/endpoint-doc.md) | Endpoint documentation template |
```
## Autonomous Mode Example
```markdown
---
@@ -200,36 +275,34 @@ Interactive task management with CRUD operations.
## Architecture Overview
\`\`\`
┌─────────────────────────────────────────────────────────────────┐
Phase 0: Specification Study (Mandatory prerequisite)
                    ↓
┌──────────────────────────────────────┐
│ Orchestrator (State-driven decision) │
└──────────────────────────────────────┘
     ↓          ↓          ↓          ↓
┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
│  List  │ │ Create │ │  Edit  │ │ Delete │
└────────┘ └────────┘ └────────┘ └────────┘
\`\`\`
## Mandatory Prerequisites
> **Do NOT skip**: Before performing any operations, you **must** completely read the following documents.
### Specification Documents (Required Reading)
| Document | Purpose | Priority |
|----------|---------|----------|
| [specs/task-schema.md](specs/task-schema.md) | Task data structure specification | **P0 - Highest** |
| [specs/action-catalog.md](specs/action-catalog.md) | Action catalog | P1 |
### Template Files (Must read before generation)
| Document | Purpose |
|----------|---------|
| [templates/orchestrator-base.md](templates/orchestrator-base.md) | Orchestrator template |
| [templates/action-base.md](templates/action-base.md) | Action template |
```

View File

@@ -6,298 +6,162 @@ allowed-tools: Task, AskUserQuestion, Read, Write, Bash, Glob, Grep, mcp__ace-to
# Skill Tuning
Autonomous diagnosis and optimization for skill execution issues.
## Architecture
```
┌─────────────────────────────────────────────────────┐
Phase 0: Read Specs (mandatory) │
│ → problem-taxonomy.md, tuning-strategies.md │
└─────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────┐
Orchestrator (state-driven)
Read state → Select action → Execute → Update → ✓
└─────────────────────────────────────────────────────┘
──────────────────────┐ ┌──────────────────
Diagnosis Phase │ │ Gemini CLI
• Context │ │ Deep analysis
• Memory │ │ (on-demand)
• DataFlow │ │
• Agent │ │ Complex issues
• Docs │ │ Architecture
• Token Usage │ │ Performance
└──────────────────────┘ └──────────────────┘
┌───────────────────┐
│ Fix & Verify
Apply → Re-test
└───────────────────┘
```
## Core Issues Detected
| Priority | Problem | Root Cause | Fix Strategy |
|----------|---------|-----------|--------------|
| **P0** | Authoring Violation | Intermediate files, state bloat, file relay | eliminate_intermediate, minimize_state |
| **P1** | Data Flow Disruption | Scattered state, inconsistent formats | state_centralization, schema_enforcement |
| **P2** | Agent Coordination | Fragile chains, no error handling | error_wrapping, result_validation |
| **P3** | Context Explosion | Unbounded history, full content passing | sliding_window, path_reference |
| **P4** | Long-tail Forgetting | Early constraint loss | constraint_injection, checkpoint_restore |
| **P5** | Token Consumption | Verbose prompts, state bloat | prompt_compression, lazy_loading |
## Problem Categories (Detailed Specs)
See [specs/problem-taxonomy.md](specs/problem-taxonomy.md) for:
- Detection patterns (regex/checks)
- Severity calculations
- Impact assessments
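As a hedged illustration of what such a regex-based check might look like (the pattern and thresholds here are invented for the example; the real ones live in the spec):
```javascript
// Illustrative detection check; not the spec's actual patterns
function detectIntermediateFiles(skillSource) {
  const pattern = /Write\([^)]*\.(tmp|intermediate)[^)]*\)/g; // invented pattern
  const matches = skillSource.match(pattern) || [];
  return {
    issue: 'intermediate_files',
    count: matches.length,
    severity: matches.length > 3 ? 'P0' : 'P2' // invented threshold
  };
}
```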
## Tuning Strategies (Detailed Specs)
See [specs/tuning-strategies.md](specs/tuning-strategies.md) for:
- 10+ strategies per category
- Implementation patterns
- Verification methods
---
## Workflow
| Step | Action | Orchestrator Decision | Output |
|------|--------|----------------------|--------|
| 1 | `action-init` | status='pending' | Backup, session created |
| 2 | `action-analyze-requirements` | After init | Required dimensions + coverage |
| 3 | Diagnosis (6 types) | Focus areas | state.diagnosis.{type} |
| 4 | `action-gemini-analysis` | Critical issues OR user request | Deep findings |
| 5 | `action-generate-report` | All diagnosis complete | state.final_report |
| 6 | `action-propose-fixes` | Issues found | state.proposed_fixes[] |
| 7 | `action-apply-fix` | Pending fixes | Applied + verified |
| 8 | `action-complete` | Quality gates pass | session.status='completed' |
## Action Reference
| Category | Actions | Purpose |
|----------|---------|---------|
| **Setup** | action-init | Initialize backup, session state |
| **Analysis** | action-analyze-requirements | Decompose user request via Gemini CLI |
| **Diagnosis** | action-diagnose-{context,memory,dataflow,agent,docs,token_consumption} | Detect category-specific issues |
| **Deep Analysis** | action-gemini-analysis | Gemini CLI: complex/critical issues |
| **Reporting** | action-generate-report | Consolidate findings → final_report |
| **Fixing** | action-propose-fixes, action-apply-fix | Generate + apply fixes |
| **Verify** | action-verify | Re-run diagnosis, check gates |
| **Exit** | action-complete, action-abort | Finalize or rollback |
Full action details: [phases/actions/](phases/actions/)
### CLI Command Template
```bash
ccw cli -p "
PURPOSE: ${purpose}
TASK: ${task_steps}
MODE: ${mode}
CONTEXT: @${skill_path}/**/*
EXPECTED: ${expected_output}
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/${mode}-protocol.md) | ${constraints}
" --tool gemini --mode ${mode} --cd ${skill_path}
```
## State Management
**Single source of truth**: `.workflow/.scratchpad/skill-tuning-{ts}/state.json`
```json
{
  "status": "pending|running|completed|failed",
  "target_skill": { "name": "...", "path": "..." },
  "diagnosis": {
    "context": {...},
    "memory": {...},
    "dataflow": {...},
    "agent": {...},
    "docs": {...},
    "token_consumption": {...}
  },
  "issues": [{"id":"...", "severity":"...", "category":"...", "strategy":"..."}],
  "proposed_fixes": [...],
  "applied_fixes": [...],
  "quality_gate": "pass|fail",
  "final_report": "..."
}
```
See [phases/state-schema.md](phases/state-schema.md) for complete schema.
## Orchestrator Logic
See [phases/orchestrator.md](phases/orchestrator.md) for:
- Decision logic (termination checks → action selection)
- State transitions
- Error recovery
## Key Principles
1. **Problem-First**: Diagnosis before any fix
2. **Data-Driven**: Record traces, token counts, snapshots
3. **Iterative**: Multiple rounds until quality gates pass
4. **Reversible**: All changes with backup checkpoints
5. **Non-Invasive**: Minimal changes, maximum clarity
## Usage Examples
```bash
ccw cli -p "
PURPOSE: ${purpose}
TASK: ${task_steps}
MODE: ${mode}
CONTEXT: @${skill_path}/**/*
EXPECTED: ${expected_output}
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/${mode}-protocol.md) | ${constraints}
" --tool gemini --mode ${mode} --cd ${skill_path}
# Basic skill diagnosis
/skill-tuning "Fix memory leaks in my skill"
# Deep analysis with Gemini
/skill-tuning "Architecture issues in async workflow"
# Focus on specific areas
/skill-tuning "Optimize token consumption and fix agent coordination"
# Custom issue
/skill-tuning "My skill produces inconsistent outputs"
```
### Analysis Types
#### 1. Problem Root Cause Analysis
```bash
ccw cli -p "
PURPOSE: Identify root cause of skill execution issue: ${user_issue_description}
TASK: • Analyze skill structure and phase flow • Identify anti-patterns • Trace data flow issues
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON with { root_causes: [], patterns_found: [], recommendations: [] }
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Focus on execution flow
" --tool gemini --mode analysis
```
#### 2. Architecture Review
```bash
ccw cli -p "
PURPOSE: Review skill architecture for scalability and maintainability
TASK: • Evaluate phase decomposition • Check state management patterns • Assess agent coordination
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: Architecture assessment with improvement recommendations
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Focus on modularity
" --tool gemini --mode analysis
```
#### 3. Fix Strategy Generation
```bash
ccw cli -p "
PURPOSE: Generate fix strategy for issue: ${issue_id} - ${issue_description}
TASK: • Analyze issue context • Design fix approach • Generate implementation plan
MODE: analysis
CONTEXT: @**/*.md
EXPECTED: JSON with { strategy: string, changes: [], verification_steps: [] }
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Minimal invasive changes
" --tool gemini --mode analysis
```
---
## Mandatory Prerequisites
> **CRITICAL**: Read these documents before executing any action.
### Core Specs (Required)
| Document | Purpose | Priority |
|----------|---------|----------|
| [specs/skill-authoring-principles.md](specs/skill-authoring-principles.md) | **Primary principles: simplicity and efficiency, no intermediate storage, context flow** | **P0** |
| [specs/problem-taxonomy.md](specs/problem-taxonomy.md) | Problem classification and detection patterns | **P0** |
| [specs/tuning-strategies.md](specs/tuning-strategies.md) | Fix strategies for each problem type | **P0** |
| [specs/dimension-mapping.md](specs/dimension-mapping.md) | Dimension to Spec mapping rules | **P0** |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality thresholds and verification criteria | P1 |
### Templates (Reference)
| Document | Purpose |
|----------|---------|
| [templates/diagnosis-report.md](templates/diagnosis-report.md) | Diagnosis report structure |
| [templates/fix-proposal.md](templates/fix-proposal.md) | Fix proposal format |
---
## Execution Flow
```
┌─────────────────────────────────────────────────────────────────────────────┐
│ Phase 0: Specification Study (mandatory prerequisite - do not skip)         │
│ → Read: specs/problem-taxonomy.md (problem taxonomy)                        │
│ → Read: specs/tuning-strategies.md (tuning strategies)                      │
│ → Read: specs/dimension-mapping.md (dimension mapping rules)                │
│ → Read: Target skill's SKILL.md and phases/*.md                             │
│ → Output: Internalize the specs, understand the target skill structure      │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-init: Initialize Tuning Session │
│ → Create work directory: .workflow/.scratchpad/skill-tuning-{timestamp} │
│ → Initialize state.json with target skill info │
│ → Create backup of target skill files │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-analyze-requirements: Requirement Analysis │
│ → Phase 1: Dimension split (Gemini CLI) - one description → focus dimensions │
│ → Phase 2: Spec matching - each dimension → taxonomy + strategy             │
│ → Phase 3: Coverage check - satisfied once a fix strategy exists            │
│ → Phase 4: Ambiguity detection - flag ambiguous wording, ask to clarify     │
│ → Output: state.json (requirement_analysis field) │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-diagnose-*: Diagnosis Actions (context/memory/dataflow/agent/docs/ │
│ token_consumption) │
│ → Execute pattern-based detection for each category │
│ → Output: state.json (diagnosis.{category} field) │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-generate-report: Consolidated Report │
│ → Generate markdown summary from state.diagnosis │
│ → Prioritize issues by severity │
│ → Output: state.json (final_report field) │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-propose-fixes: Fix Proposal Generation │
│ → Generate fix strategies for each issue │
│ → Create implementation plan │
│ → Output: state.json (proposed_fixes field) │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-apply-fix: Apply Selected Fix │
│ → User selects fix to apply │
│ → Execute fix with backup │
│ → Update state with fix result │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-verify: Verification │
│ → Re-run affected diagnosis │
│ → Check quality gates │
│ → Update iteration count │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-complete: Finalization │
│ → Set status='completed' │
│ → Final report already in state.json (final_report field) │
│ → Output: state.json (final) │
└─────────────────────────────────────────────────────────────────────────────┘
```
## Directory Setup
```javascript
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const workDir = `.workflow/.scratchpad/skill-tuning-${timestamp}`;
// Simplified: Only backups dir needed, diagnosis results go into state.json
Bash(`mkdir -p "${workDir}/backups"`);
```
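For example, a run started at 2026-01-29T16:59:00 creates `.workflow/.scratchpad/skill-tuning-20260129165900`.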
## Output Structure
```
.workflow/.scratchpad/skill-tuning-{timestamp}/
├── state.json # Single source of truth (all results consolidated)
│ ├── diagnosis.* # All diagnosis results embedded
│ ├── issues[] # Found issues
│ ├── proposed_fixes[] # Fix proposals
│ └── final_report # Markdown summary (on completion)
└── backups/
└── {skill-name}-backup/ # Original skill files backup
```
> **Token Optimization**: All outputs consolidated into state.json. No separate diagnosis files or report files.
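In practice each diagnosis action writes its results into its own `state.json` field rather than a separate file. A minimal sketch of that pattern; `runContextDiagnosis` is an illustrative stand-in, not a real helper in this skill:
```javascript
// Consolidate a diagnosis result into state.json instead of a standalone report file
const state = JSON.parse(Read(`${workDir}/state.json`));
const findings = runContextDiagnosis(targetSkillPath); // illustrative helper
updateState(workDir, { diagnosis: { ...state.diagnosis, context: findings } });
```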
## State Schema
See [phases/state-schema.md](phases/state-schema.md) for the full state structure definition.
Core state fields:
- `status`: workflow status (pending/running/completed/failed)
- `target_skill`: target skill info
- `diagnosis`: per-dimension diagnosis results
- `issues`: list of discovered issues
- `proposed_fixes`: proposed fix plans
## Output
After completion, review:
- `.workflow/.scratchpad/skill-tuning-{ts}/state.json` - Full state with final_report
- `state.final_report` - Markdown summary (in state.json)
- `state.applied_fixes` - List of applied fixes with verification results
## Reference Documents
| Document | Purpose |
|----------|---------|
| [phases/orchestrator.md](phases/orchestrator.md) | Orchestrator decision logic and workflow orchestration |
| [phases/state-schema.md](phases/state-schema.md) | State structure definition |
| [specs/problem-taxonomy.md](specs/problem-taxonomy.md) | Problem classification + detection patterns |
| [specs/tuning-strategies.md](specs/tuning-strategies.md) | Fix strategies and implementation guide |
| [specs/dimension-mapping.md](specs/dimension-mapping.md) | Dimension ↔ Spec mapping (NEW) |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality verification criteria |
| [phases/actions/action-init.md](phases/actions/action-init.md) | Initialize tuning session |
| [phases/actions/action-analyze-requirements.md](phases/actions/action-analyze-requirements.md) | Requirement analysis (NEW) |
| [phases/actions/action-diagnose-context.md](phases/actions/action-diagnose-context.md) | Context explosion diagnosis |
| [phases/actions/action-diagnose-memory.md](phases/actions/action-diagnose-memory.md) | Long-tail forgetting diagnosis |
| [phases/actions/action-diagnose-dataflow.md](phases/actions/action-diagnose-dataflow.md) | Data flow diagnosis |
| [phases/actions/action-diagnose-agent.md](phases/actions/action-diagnose-agent.md) | Agent coordination diagnosis |
| [phases/actions/action-diagnose-docs.md](phases/actions/action-diagnose-docs.md) | Documentation structure diagnosis |
| [phases/actions/action-diagnose-token-consumption.md](phases/actions/action-diagnose-token-consumption.md) | Token consumption diagnosis |
| [phases/actions/action-generate-report.md](phases/actions/action-generate-report.md) | Report generation |
| [phases/actions/action-propose-fixes.md](phases/actions/action-propose-fixes.md) | Fix proposal |
| [phases/actions/action-apply-fix.md](phases/actions/action-apply-fix.md) | Fix application |
| [phases/actions/action-verify.md](phases/actions/action-verify.md) | Verification |
| [phases/actions/action-complete.md](phases/actions/action-complete.md) | Finalization |
| [phases/actions/](phases/actions/) | Individual action implementations |


@@ -1,28 +1,57 @@
# Orchestrator
State-driven orchestrator for the autonomous skill-tuning workflow. Reads the current session state and selects the next action based on diagnosis progress and quality gates.
## Role
Drive the tuning workflow by:
1. Reading the current session state
2. Selecting the appropriate next action
3. Executing the action via a sub-agent
4. Updating state with the results
5. Repeating until a termination condition is met
## Decision Logic
### Termination Checks (priority order)
| Condition | Action |
|-----------|--------|
| `status === 'user_exit'` | null (exit) |
| `status === 'completed'` | null (exit) |
| `error_count >= max_errors` | action-abort |
| `iteration_count >= max_iterations` | action-complete |
| `quality_gate === 'pass'` | action-complete |
### Action Selection
| Priority | Condition | Action |
|----------|-----------|--------|
| 1 | `status === 'pending'` | action-init |
| 2 | Init done, req analysis missing | action-analyze-requirements |
| 3 | Req needs clarification | null (wait) |
| 4 | Req coverage unsatisfied | action-gemini-analysis |
| 5 | Gemini requested/critical issues | action-gemini-analysis |
| 6 | Gemini running | null (wait) |
| 7 | Diagnosis pending (in order) | action-diagnose-{type} |
| 8 | All diagnosis done, no report | action-generate-report |
| 9 | Report done, issues exist | action-propose-fixes |
| 10 | Pending fixes exist | action-apply-fix |
| 11 | Fixes need verification | action-verify |
| 12 | New iteration needed | action-diagnose-context (restart) |
| 13 | Default | action-complete |
**Diagnosis Order**: context → memory → dataflow → agent → docs → token_consumption
**Gemini Triggers**:
- `gemini_analysis_requested === true`
- Critical issues detected
- Focus areas include: architecture, prompt, performance, custom
- Second iteration with unresolved issues
## State Management
### Read State
```javascript
// Read
const state = JSON.parse(Read(`${workDir}/state.json`));
```
### Update State
```javascript
// Update (with sliding window for history)
function updateState(workDir, updates) {
  const state = JSON.parse(Read(`${workDir}/state.json`));
  // Merge updates over the current state and persist; callers pre-slice
  // history/error arrays so the file stays bounded
  const newState = { ...state, ...updates };
  Write(`${workDir}/state.json`, JSON.stringify(newState, null, 2));
}
```
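Callers pass the work directory plus only the fields that change, as the execution loop below does:
```javascript
// Mark the quality gate as passed once verification succeeds
updateState(workDir, { quality_gate: 'pass' });
```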
## Decision Logic
```javascript
function selectNextAction(state) {
// === Termination Checks ===
// User exit
if (state.status === 'user_exit') return null;
// Completed
if (state.status === 'completed') return null;
// Error limit exceeded
if (state.error_count >= state.max_errors) {
return 'action-abort';
}
// Max iterations exceeded
if (state.iteration_count >= state.max_iterations) {
return 'action-complete';
}
// === Action Selection ===
// 1. Not initialized yet
if (state.status === 'pending') {
return 'action-init';
}
// 1.5. Requirement analysis (after init, before diagnosis)
if (state.status === 'running' &&
state.completed_actions.includes('action-init') &&
!state.completed_actions.includes('action-analyze-requirements')) {
return 'action-analyze-requirements';
}
// 1.6. If requirement analysis found ambiguity, pause and wait for the user
if (state.requirement_analysis?.status === 'needs_clarification') {
return null; // Resume after the user clarifies
}
// 1.7. If requirement coverage is insufficient, trigger Gemini deep analysis first
if (state.requirement_analysis?.coverage?.status === 'unsatisfied' &&
!state.completed_actions.includes('action-gemini-analysis')) {
return 'action-gemini-analysis';
}
// 2. Check if Gemini analysis is requested or needed
if (shouldTriggerGeminiAnalysis(state)) {
return 'action-gemini-analysis';
}
// 3. Check if Gemini analysis is running
if (state.gemini_analysis?.status === 'running') {
// Wait for Gemini analysis to complete
return null; // Orchestrator will be re-triggered when CLI completes
}
// 4. Run diagnosis in order (only if not completed)
const diagnosisOrder = ['context', 'memory', 'dataflow', 'agent', 'docs', 'token_consumption'];
for (const diagType of diagnosisOrder) {
if (state.diagnosis[diagType] === null) {
// Check if user wants to skip this diagnosis
if (!state.focus_areas.length || state.focus_areas.includes(diagType)) {
return `action-diagnose-${diagType}`;
}
// For docs diagnosis, also check 'all' focus_area
if (diagType === 'docs' && state.focus_areas.includes('all')) {
return 'action-diagnose-docs';
}
}
}
// 5. All diagnosis complete, generate report if not done
const allDiagnosisComplete = diagnosisOrder.every(
d => state.diagnosis[d] !== null || !state.focus_areas.includes(d)
);
if (allDiagnosisComplete && !state.completed_actions.includes('action-generate-report')) {
return 'action-generate-report';
}
// 6. Report generated, propose fixes if not done
if (state.completed_actions.includes('action-generate-report') &&
state.proposed_fixes.length === 0 &&
state.issues.length > 0) {
return 'action-propose-fixes';
}
// 7. Fixes proposed, check if user wants to apply
if (state.proposed_fixes.length > 0 && state.pending_fixes.length > 0) {
return 'action-apply-fix';
}
// 8. Fixes applied, verify
if (state.applied_fixes.length > 0 &&
state.applied_fixes.some(f => f.verification_result === 'pending')) {
return 'action-verify';
}
// 9. Quality gate check
if (state.quality_gate === 'pass') {
return 'action-complete';
}
// 10. More iterations needed
if (state.iteration_count < state.max_iterations &&
state.quality_gate !== 'pass' &&
state.issues.some(i => i.severity === 'critical' || i.severity === 'high')) {
// Reset diagnosis for re-evaluation
return 'action-diagnose-context'; // Start new iteration
}
// 11. Default: complete
return 'action-complete';
}
/**
 * Decide whether Gemini CLI analysis should be triggered
 */
function shouldTriggerGeminiAnalysis(state) {
// Gemini analysis already completed; do not trigger again
if (state.gemini_analysis?.status === 'completed') {
return false;
}
// Explicit user request
if (state.gemini_analysis_requested === true) {
return true;
}
// Critical issues found without deep analysis yet
if (state.issues.some(i => i.severity === 'critical') &&
!state.completed_actions.includes('action-gemini-analysis')) {
return true;
}
// User specified focus_areas that require Gemini analysis
const geminiAreas = ['architecture', 'prompt', 'performance', 'custom'];
if (state.focus_areas.some(area => geminiAreas.includes(area))) {
return true;
}
// Standard diagnosis finished but issues remain unresolved; deep analysis needed
const diagnosisComplete = ['context', 'memory', 'dataflow', 'agent', 'docs'].every(
d => state.diagnosis[d] !== null
);
if (diagnosisComplete &&
state.issues.length > 0 &&
state.iteration_count > 0 &&
!state.completed_actions.includes('action-gemini-analysis')) {
// Trigger Gemini analysis if issues persist into the second iteration
return true;
}
return false;
}
```
## Execution Loop
```javascript
async function runOrchestrator(workDir) {
console.log('=== Skill Tuning Orchestrator Started ===');
let iteration = 0;
const MAX_LOOP = 50; // Safety limit
while (iteration++ < MAX_LOOP) {
// 1. Read state
const state = JSON.parse(Read(`${workDir}/state.json`));
console.log(`[Loop ${iteration}] Status: ${state.status}, Action: ${state.current_action}`);
// 2. Select action
const actionId = selectNextAction(state);
if (!actionId) break;
console.log(`[Loop ${iteration}] Executing: ${actionId}`);
// 3. Update: mark current action (sliding window on history)
updateState(workDir, {
current_action: actionId,
action_history: [...state.action_history, {
action: actionId,
started_at: new Date().toISOString()
}].slice(-10) // Keep last 10
});
// 4. Execute action
try {
const actionPrompt = Read(`phases/actions/${actionId}.md`);
// Pass state path + key fields (not full state)
const stateKeyInfo = {
status: state.status,
iteration_count: state.iteration_count,
issues_by_severity: state.issues_by_severity,
quality_gate: state.quality_gate,
current_action: state.current_action,
completed_actions: state.completed_actions,
user_issue_description: state.user_issue_description,
target_skill: { name: state.target_skill.name, path: state.target_skill.path }
};
const result = await Task({
subagent_type: 'universal-executor',
run_in_background: false,
prompt: `
[CONTEXT]
You are executing action "${actionId}" for skill-tuning workflow.
Action: ${actionId}
Work directory: ${workDir}
[STATE KEY INFO]
${JSON.stringify(stateKeyInfo, null, 2)}
[FULL STATE PATH]
${workDir}/state.json
(Read full state from this file if needed)
[ACTION INSTRUCTIONS]
${actionPrompt}
[OUTPUT]
Return JSON: { stateUpdates: {}, outputFiles: [], summary: "..." }
`
});
// 5. Parse result (fall back to a raw summary if it is not valid JSON)
let actionResult = { stateUpdates: {}, outputFiles: [], summary: result };
try { actionResult = JSON.parse(result); } catch {}
// 6. Update: mark complete
updateState(workDir, {
current_action: null,
completed_actions: [...state.completed_actions, actionId],
...actionResult.stateUpdates
});
console.log(`[Loop ${iteration}] Completed: ${actionId}`);
} catch (error) {
console.log(`[Loop ${iteration}] Error in ${actionId}: ${error.message}`);
// Error handling (sliding window for errors)
updateState(workDir, {
current_action: null,
errors: [...state.errors, {
action: actionId,
message: error.message,
timestamp: new Date().toISOString()
}].slice(-5), // Keep last 5
error_count: state.error_count + 1
});
}
}
console.log('=== Skill Tuning Orchestrator Finished ===');
}
```
## Action Catalog
| Action | Purpose | Preconditions | Effects |
|--------|---------|---------------|---------|
| [action-init](actions/action-init.md) | Initialize tuning session | status === 'pending' | Creates work dirs, backup, sets status='running' |
| [action-analyze-requirements](actions/action-analyze-requirements.md) | Analyze user requirements | init completed | Sets requirement_analysis, optimizes focus_areas |
| [action-diagnose-context](actions/action-diagnose-context.md) | Analyze context explosion | status === 'running' | Sets diagnosis.context |
| [action-diagnose-memory](actions/action-diagnose-memory.md) | Analyze long-tail forgetting | status === 'running' | Sets diagnosis.memory |
| [action-diagnose-dataflow](actions/action-diagnose-dataflow.md) | Analyze data flow issues | status === 'running' | Sets diagnosis.dataflow |
| [action-diagnose-agent](actions/action-diagnose-agent.md) | Analyze agent coordination | status === 'running' | Sets diagnosis.agent |
| [action-diagnose-docs](actions/action-diagnose-docs.md) | Analyze documentation structure | status === 'running', focus includes 'docs' | Sets diagnosis.docs |
| [action-gemini-analysis](actions/action-gemini-analysis.md) | Deep analysis via Gemini CLI | User request OR critical issues | Sets gemini_analysis, adds issues |
| [action-generate-report](actions/action-generate-report.md) | Generate consolidated report | All diagnoses complete | Creates tuning-report.md |
| [action-propose-fixes](actions/action-propose-fixes.md) | Generate fix proposals | Report generated, issues > 0 | Sets proposed_fixes |
| [action-apply-fix](actions/action-apply-fix.md) | Apply selected fix | pending_fixes > 0 | Updates applied_fixes |
| [action-verify](actions/action-verify.md) | Verify applied fixes | applied_fixes with pending verification | Updates verification_result |
| [action-complete](actions/action-complete.md) | Finalize session | quality_gate='pass' OR max_iterations | Sets status='completed' |
| [action-abort](actions/action-abort.md) | Abort on errors | error_count >= max_errors | Sets status='failed' |
## Action Preconditions
| Action | Precondition |
|--------|-------------|
| action-init | status='pending' |
| action-analyze-requirements | Init complete, not done |
| action-diagnose-* | status='running', focus area includes type |
| action-gemini-analysis | Requested OR critical issues OR high complexity |
| action-generate-report | All diagnosis complete |
| action-propose-fixes | Report generated, issues > 0 |
| action-apply-fix | pending_fixes > 0 |
| action-verify | applied_fixes with pending verification |
| action-complete | Quality gates pass OR max iterations |
| action-abort | error_count >= max_errors |
## User Interaction Points
The orchestrator pauses for user input at these points:
1. **action-init**: Confirm the target skill and describe the issue
2. **action-propose-fixes**: Select which fixes to apply
3. **action-verify**: Review verification results, decide to continue or stop
4. **action-complete**: Review the final summary
## Error Recovery
| Error Type | Recovery Strategy |
|------------|-------------------|
| Action execution failed | Retry up to 3 times, then skip |
| State parse error | Restore from backup |
| File write error | Retry with alternative path |
| User abort | Save state and exit gracefully |
## Termination Conditions
- Normal: `status === 'completed'`, `quality_gate === 'pass'` (all quality criteria met)
- User exit: `status === 'user_exit'`
- Error: `status === 'failed'` or `error_count >= max_errors` (default: 3)
- Iteration limit: `iteration_count >= max_iterations` (default: 5)
- Clarification wait: `requirement_analysis.status === 'needs_clarification'` (pause, not terminate)


@@ -2,276 +2,174 @@
Classification of skill execution issues with detection patterns and severity criteria.
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| All Diagnosis Actions | Issue classification | All sections |
| action-propose-fixes | Strategy selection | Fix Mapping |
| action-generate-report | Severity assessment | Severity Criteria |
## Quick Reference
| Category | Priority | Detection | Fix Strategy |
|----------|----------|-----------|--------------|
| Authoring Violation | P0 | Intermediate files, state bloat, file relay | eliminate_intermediate, minimize_state |
| Data Flow Disruption | P1 | Scattered state, inconsistent formats | state_centralization, schema_enforcement |
| Agent Coordination | P2 | Fragile chains, no error handling | error_wrapping, result_validation |
| Context Explosion | P3 | Unbounded history, full content passing | sliding_window, path_reference |
| Long-tail Forgetting | P4 | Early constraint loss | constraint_injection, checkpoint_restore |
| Token Consumption | P5 | Verbose prompts, redundant I/O | prompt_compression, lazy_loading |
| Doc Redundancy | P6 | Repeated definitions | consolidate_to_ssot |
| Doc Conflict | P7 | Inconsistent definitions | reconcile_definitions |
---
## Problem Categories
## 0. Authoring Principles Violation (P0)
**Definition**: Violates the primary skill authoring principles (simplicity and efficiency, no intermediate storage, context passing).
**Root Causes**:
- Unnecessary intermediate file storage
- Bloated state schema
- File relay instead of context passing
- Duplicate data storage
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| APV-001 | `/Write\([^)]*temp-\|intermediate-/` | Intermediate file writes |
| APV-002 | `/Write\([^)]+\)[\s\S]{0,50}Read\([^)]+\)/` | Write-then-read relay |
| APV-003 | State schema > 15 fields | Excessive state fields |
| APV-004 | `/_history\s*[.=].*push\|concat/` | Unbounded array growth |
| APV-005 | `/debug_\|_cache\|_temp/` in state | Debug/cache field residue |
| APV-006 | Same data in multiple fields | Duplicate storage |
**Impact**: Critical (>5 intermediate files), High (>20 state fields, or file relay present), Medium (debug fields or minor redundancy), Low (minor naming issues)
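These checks can run as plain regex scans over a skill's source files. A minimal sketch for APV-001/APV-002; the `detectAuthoringViolations` helper is illustrative, not part of the spec:
```javascript
const fs = require('fs');

// Scan one file for APV-001 (intermediate file writes) and APV-002 (write-then-read relay)
function detectAuthoringViolations(filePath) {
  const src = fs.readFileSync(filePath, 'utf8');
  const findings = [];
  if (/Write\([^)]*(temp-|intermediate-)/.test(src)) findings.push({ id: 'APV-001', filePath });
  if (/Write\([^)]+\)[\s\S]{0,50}Read\([^)]+\)/.test(src)) findings.push({ id: 'APV-002', filePath });
  return findings;
}
```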
---
## 1. Context Explosion (P3)
**Definition**: Unbounded token accumulation causing prompt size growth.
**Root Causes**:
- Unbounded conversation history
- Full content passing instead of references
- Missing summarization mechanisms
- Agents returning full output instead of path + summary
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| CTX-001 | `/history\s*[.=].*push\|concat/` | History array growth |
| CTX-002 | `/JSON\.stringify\s*\(\s*state\s*\)/` | Full state serialization |
| CTX-003 | `/Read\([^)]+\)\s*[\+,]/` | Multiple file content concatenation |
| CTX-004 | `/return\s*\{[^}]*content:/` | Agent returning full content |
| CTX-005 | File > 5000 chars without summarization | Long prompt without compression |
**Impact**: Critical (context exceeds model limit, e.g. 128K tokens), High (>50K tokens per iteration), Medium (10%+ growth per iteration), Low (potential growth, currently manageable)
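The first two fix strategies are mechanical. A minimal sketch, assuming the agent result shape shown below; the helper name is illustrative:
```javascript
// sliding_window: cap a history array so context cannot grow unbounded
function appendWithWindow(history, entry, limit = 10) {
  return [...history, entry].slice(-limit);
}

// path_reference: return a file path plus a short summary instead of full content
const agentResult = { path: 'out/diagnosis.json', summary: '3 critical, 2 high findings' };
```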
---
## 2. Long-tail Forgetting (P4)
**Definition**: Loss of early instructions/constraints in long chains.
**Root Causes**:
- No explicit constraint propagation
- Reliance on implicit context
- Missing checkpoint/restore mechanisms
- State schema without a requirements field
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| MEM-001 | Later phases missing constraint reference | Constraint not forwarded |
| MEM-002 | `/\[TASK\][^[]*(?!\[CONSTRAINTS\])/` | Task without constraints section |
| MEM-003 | Key phases without checkpoint | Missing state preservation |
| MEM-004 | State lacks `original_requirements` | No constraint persistence |
| MEM-005 | No verification phase | Output not checked against intent |
**Impact**: Critical (original goal completely lost), High (key constraints ignored in output), Medium (some requirements missing), Low (minor goal drift)
---
## 3. Data Flow Disruption (P1)
**Definition**: Inconsistent state management causing data loss/corruption.
**Root Causes**:
- Multiple state storage locations
- Inconsistent field naming
- Missing schema validation
- Format transformation without normalization
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| DF-001 | Multiple state file writes | Scattered state storage |
| DF-002 | Same concept, different names | Field naming inconsistency |
| DF-003 | JSON.parse without validation | Missing schema validation |
| DF-004 | Files written but never read | Orphaned outputs |
| DF-005 | Autonomous skill without state-schema | Undefined state structure |
**Impact**: Critical (data loss or corruption), High (state inconsistency between phases), Medium (potential for inconsistency), Low (minor naming inconsistencies)
---
## 4. Agent Coordination Failure (P2)
**Definition**: Fragile agent call patterns causing cascading failures.
**Root Causes**:
- Missing error handling in Task calls
- No result validation
- Inconsistent agent configurations
- Deeply nested agent calls
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| AGT-001 | Task without try-catch | Missing error handling |
| AGT-002 | Result used without validation | No return value check |
| AGT-003 | >3 different agent types | Agent type proliferation |
| AGT-004 | Nested Task in prompt | Agent calling agent |
| AGT-005 | Task used but not in allowed-tools | Tool declaration mismatch |
| AGT-006 | Multiple return formats | Inconsistent agent output |
**Impact**: Critical (workflow crash on agent failure), High (unpredictable agent behavior), Medium (occasional coordination issues), Low (minor inconsistencies)
---
## 5. Documentation Redundancy (P6)
**Definition**: Same definition (State Schema, mapping tables, types) repeated across files, causing maintenance burden and inconsistency risk.
**Root Causes**:
- No single source of truth (SSOT)
- Copy-paste instead of references
- Hardcoded configuration instead of centralized management
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| DOC-RED-001 | Cross-file semantic comparison | State Schema duplication |
| DOC-RED-002 | Code block vs spec comparison | Hardcoded config duplication |
| DOC-RED-003 | `/interface\s+(\w+)/` same-name scan | Interface/type duplication |
**Impact**: High (core definitions such as State Schema or mapping tables), Medium (type definitions), Low (example code)
---
## 6. Token Consumption (P5)
**Definition**: Excessive token usage from verbose prompts, large state, inefficient I/O.
**Root Causes**:
- Long static prompts without compression
- State schema with too many fields
- Full content embedding instead of path references
- Arrays growing unbounded without sliding windows
- Write-then-read file relay patterns
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| TKN-001 | File size > 4KB | Verbose prompt files |
| TKN-002 | State fields > 15 | Excessive state schema |
| TKN-003 | `/Read\([^)]+\)\s*[\+,]/` | Full content passing |
| TKN-004 | `/.push\|concat(?!.*\.slice)/` | Unbounded array growth |
| TKN-005 | `/Write\([^)]+\)[\s\S]{0,100}Read\([^)]+\)/` | Write-then-read pattern |
**Impact**: High (multiple TKN-003/TKN-004 issues causing significant token waste), Medium (several verbose files or state bloat), Low (minor optimization opportunities)
---
## 7. Documentation Conflict (P7)
**Definition**: Same concept defined inconsistently across files, causing unpredictable behavior and misleading docs.
**Root Causes**:
- Definitions updated without syncing other locations
- Implementation drifting from documentation
- No consistency checks
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| DOC-CON-001 | Key-value consistency check | Same key, different values |
| DOC-CON-002 | Implementation vs docs comparison | Hardcoded vs documented mismatch |
**Impact**: Critical (priority/category definition conflicts), High (strategy mapping inconsistency), Medium (examples not matching actual behavior)
---
## Severity Criteria
### Global Severity Matrix
| Severity | Definition | Action Required |
|----------|------------|-----------------|
| **Critical** | Blocks execution or causes data loss | Immediate fix required |
| **High** | Significantly impacts reliability | Should fix before deployment |
| **Medium** | Affects quality or maintainability | Fix in next iteration |
| **Low** | Minor improvement opportunity | Optional fix |
### Severity Calculation
```javascript
function calculateSeverity(issue) {
  const weights = { execution: 40, data_integrity: 30, frequency: 20, complexity: 10 };
  let score = 0;
  if (issue.blocks_execution) score += weights.execution;
  if (issue.causes_data_loss) score += weights.data_integrity;
  if (issue.occurs_every_run) score += weights.frequency;
  else if (issue.occurs_sometimes) score += weights.frequency * 0.5;
  if (issue.fix_complexity === 'low') score += weights.complexity;
  // Map score to severity
  if (score >= 70) return 'critical';
  if (score >= 50) return 'high';
  if (score >= 30) return 'medium';
  return 'low';
}
```
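For example, a pattern that fires on every run but is trivial to fix scores 20 + 10 = 30:
```javascript
calculateSeverity({ occurs_every_run: true, fix_complexity: 'low' }); // → 'medium'
```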
## Fix Mapping
| Problem | Strategies (priority order) |
|---------|---------------------------|
| Authoring Violation | eliminate_intermediate_files, minimize_state, context_passing |
| Context Explosion | sliding_window, path_reference, context_summarization |
| Long-tail Forgetting | constraint_injection, state_constraints_field, checkpoint |
| Data Flow Disruption | state_centralization, schema_enforcement, field_normalization |
| Agent Coordination | error_wrapping, result_validation, flatten_nesting |
| Token Consumption | prompt_compression, lazy_loading, output_minimization, state_field_reduction |
| Doc Redundancy | consolidate_to_ssot, centralize_mapping_config |
| Doc Conflict | reconcile_conflicting_definitions |
---
## Cross-Category Dependencies
Some issues may trigger others:
```
Context Explosion → Long-tail Forgetting
(Large context pushes important info out)
Data Flow Disruption → Agent Coordination Failure
(Inconsistent data causes agent failures)
Agent Coordination Failure → Context Explosion
(Failed retries add to context)
```
**Fix Order**: P1 Data Flow → P2 Agent Coordination → P3 Context Explosion → P4 Long-tail Forgetting

File diff suppressed because it is too large


@@ -0,0 +1,47 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Plan Verification Agent Schema",
"description": "Defines dimensions, severity rules, and CLI templates for plan verification agent",
"dimensions": {
"A": { "name": "User Intent Alignment", "tier": 1, "severity": "CRITICAL",
"checks": ["Goal Alignment", "Scope Drift", "Success Criteria Match", "Intent Conflicts"] },
"B": { "name": "Requirements Coverage", "tier": 1, "severity": "CRITICAL",
"checks": ["Orphaned Requirements", "Unmapped Tasks", "NFR Coverage Gaps"] },
"C": { "name": "Consistency Validation", "tier": 1, "severity": "CRITICAL",
"checks": ["Requirement Conflicts", "Architecture Drift", "Terminology Drift", "Data Model Inconsistency"] },
"D": { "name": "Dependency Integrity", "tier": 2, "severity": "HIGH",
"checks": ["Circular Dependencies", "Missing Dependencies", "Broken Dependencies", "Logical Ordering"] },
"E": { "name": "Synthesis Alignment", "tier": 2, "severity": "HIGH",
"checks": ["Priority Conflicts", "Success Criteria Mismatch", "Risk Mitigation Gaps"] },
"F": { "name": "Task Specification Quality", "tier": 3, "severity": "MEDIUM",
"checks": ["Ambiguous Focus Paths", "Underspecified Acceptance", "Missing Artifacts", "Weak Flow Control"] },
"G": { "name": "Duplication Detection", "tier": 4, "severity": "LOW",
"checks": ["Overlapping Task Scope", "Redundant Coverage"] },
"H": { "name": "Feasibility Assessment", "tier": 4, "severity": "LOW",
"checks": ["Complexity Misalignment", "Resource Conflicts", "Skill Gap Risks"] }
},
"tiers": {
"1": { "dimensions": ["A", "B", "C"], "priority": "CRITICAL", "limit": null, "rule": "analysis-review-architecture" },
"2": { "dimensions": ["D", "E"], "priority": "HIGH", "limit": 15, "rule": "analysis-diagnose-bug-root-cause" },
"3": { "dimensions": ["F"], "priority": "MEDIUM", "limit": 20, "rule": "analysis-analyze-code-patterns" },
"4": { "dimensions": ["G", "H"], "priority": "LOW", "limit": 15, "rule": "analysis-analyze-code-patterns" }
},
"severity_rules": {
"CRITICAL": ["User intent violation", "Synthesis authority violation", "Zero coverage", "Circular/broken deps"],
"HIGH": ["NFR gaps", "Priority conflicts", "Missing risk mitigation"],
"MEDIUM": ["Terminology drift", "Missing refs", "Weak flow control"],
"LOW": ["Style improvements", "Minor redundancy"]
},
"quality_gate": {
"BLOCK_EXECUTION": { "condition": "critical > 0", "emoji": "🛑" },
"PROCEED_WITH_FIXES": { "condition": "critical == 0 && high > 0", "emoji": "⚠️" },
"PROCEED_WITH_CAUTION": { "condition": "critical == 0 && high == 0 && medium > 0", "emoji": "✅" },
"PROCEED": { "condition": "only low or none", "emoji": "✅" }
},
"token_budget": { "total_findings": 50, "early_exit": "CRITICAL > 0 in Tier 1 → skip Tier 3-4" }
}
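The quality gate conditions above reduce to a simple cascade. A minimal sketch, assuming the counts come from the findings summary; the `qualityGate` helper name is illustrative:
```javascript
// Map severity counts to the quality gate recommendation
function qualityGate({ critical, high, medium }) {
  if (critical > 0) return 'BLOCK_EXECUTION';
  if (high > 0) return 'PROCEED_WITH_FIXES';
  if (medium > 0) return 'PROCEED_WITH_CAUTION';
  return 'PROCEED';
}
```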


@@ -0,0 +1,158 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Plan Verification Findings Schema",
"description": "Schema for plan verification findings output from cli-explore-agent",
"type": "object",
"required": [
"session_id",
"timestamp",
"verification_tiers_completed",
"findings",
"summary"
],
"properties": {
"session_id": {
"type": "string",
"description": "Workflow session ID (e.g., WFS-20250127-143000)",
"pattern": "^WFS-[0-9]{8}-[0-9]{6}$"
},
"timestamp": {
"type": "string",
"description": "ISO 8601 timestamp when verification was completed",
"format": "date-time"
},
"verification_tiers_completed": {
"type": "array",
"description": "List of verification tiers completed (e.g., ['Tier 1', 'Tier 2'])",
"items": {
"type": "string",
"enum": ["Tier 1", "Tier 2", "Tier 3", "Tier 4"]
},
"minItems": 1,
"maxItems": 4
},
"findings": {
"type": "array",
"description": "Array of all findings across all dimensions",
"items": {
"type": "object",
"required": [
"id",
"dimension",
"dimension_name",
"severity",
"location",
"summary",
"recommendation"
],
"properties": {
"id": {
"type": "string",
"description": "Unique finding ID prefixed by severity (C1, H1, M1, L1)",
"pattern": "^[CHML][0-9]+$"
},
"dimension": {
"type": "string",
"description": "Verification dimension identifier",
"enum": ["A", "B", "C", "D", "E", "F", "G", "H"]
},
"dimension_name": {
"type": "string",
"description": "Human-readable dimension name",
"enum": [
"User Intent Alignment",
"Requirements Coverage Analysis",
"Consistency Validation",
"Dependency Integrity",
"Synthesis Alignment",
"Task Specification Quality",
"Duplication Detection",
"Feasibility Assessment"
]
},
"severity": {
"type": "string",
"description": "Severity level of the finding",
"enum": ["CRITICAL", "HIGH", "MEDIUM", "LOW"]
},
"location": {
"type": "array",
"description": "Array of locations where issue was found (e.g., 'IMPL_PLAN.md:L45', 'task:IMPL-1.2', 'synthesis:FR-03')",
"items": {
"type": "string"
},
"minItems": 1
},
"summary": {
"type": "string",
"description": "Concise summary of the issue (1-2 sentences)",
"minLength": 10,
"maxLength": 500
},
"recommendation": {
"type": "string",
"description": "Actionable recommendation to resolve the issue",
"minLength": 10,
"maxLength": 500
}
}
}
},
"summary": {
"type": "object",
"description": "Aggregate summary of verification results",
"required": [
"critical_count",
"high_count",
"medium_count",
"low_count",
"total_findings",
"coverage_percentage",
"recommendation"
],
"properties": {
"critical_count": {
"type": "integer",
"description": "Number of critical severity findings",
"minimum": 0
},
"high_count": {
"type": "integer",
"description": "Number of high severity findings",
"minimum": 0
},
"medium_count": {
"type": "integer",
"description": "Number of medium severity findings",
"minimum": 0
},
"low_count": {
"type": "integer",
"description": "Number of low severity findings",
"minimum": 0
},
"total_findings": {
"type": "integer",
"description": "Total number of findings",
"minimum": 0
},
"coverage_percentage": {
"type": "number",
"description": "Percentage of synthesis requirements covered by tasks (0-100)",
"minimum": 0,
"maximum": 100
},
"recommendation": {
"type": "string",
"description": "Quality gate recommendation",
"enum": [
"BLOCK_EXECUTION",
"PROCEED_WITH_FIXES",
"PROCEED_WITH_CAUTION",
"PROCEED"
]
}
}
}
}
}
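A minimal document that satisfies this schema (all values are illustrative):
```javascript
const exampleFindings = {
  session_id: 'WFS-20250127-143000',
  timestamp: '2025-01-27T14:30:00Z',
  verification_tiers_completed: ['Tier 1'],
  findings: [{
    id: 'C1',
    dimension: 'A',
    dimension_name: 'User Intent Alignment',
    severity: 'CRITICAL',
    location: ['IMPL_PLAN.md:L45'],
    summary: 'Plan scope drifts from the stated user goal.',
    recommendation: 'Realign task IMPL-1.2 with the original success criteria.'
  }],
  summary: {
    critical_count: 1, high_count: 0, medium_count: 0, low_count: 0,
    total_findings: 1, coverage_percentage: 92.5,
    recommendation: 'BLOCK_EXECUTION'
  }
};
```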


@@ -14,7 +14,7 @@
### Configuration File
**Path**: `~/.claude/cli-tools.json`
All tool availability, model selection, and routing are defined in this configuration file.


@@ -1,5 +1,6 @@
# Codex Code Guidelines
## Code Quality Standards
### Code Quality
@@ -21,11 +22,8 @@
- Graceful degradation
- Don't expose sensitive info
## Core Principles
**Incremental Progress**:
- Small, testable changes
- Commit working code frequently


@@ -0,0 +1,205 @@
# Unified-Execute-With-File: Claude vs Codex Versions
## Overview
Two complementary implementations of the universal execution engine:
| Aspect | Claude CLI Command | Codex Prompt |
|--------|-------------------|--------------|
| **Location** | `.claude/commands/workflow/` | `.codex/prompts/` |
| **Format** | YAML frontmatter + Markdown | Simple Markdown + Variables |
| **Execution** | `/workflow:unified-execute-with-file` | Direct Codex execution |
| **Lines** | 807 (optimized) | 722 (adapted) |
| **Parameters** | CLI flags (`-y`, `-p`, `-m`) | Substitution variables (`$PLAN_PATH`, etc) |
---
## Format Differences
### Claude Version (CLI Command)
**Header (YAML)**:
```yaml
---
name: unified-execute-with-file
description: Universal execution engine...
argument-hint: "[-y|--yes] [-p|--plan <path>] [-m|--mode sequential|parallel]"
allowed-tools: TodoWrite(*), Task(*), ...
---
```
**Parameters**: CLI-style flags with short forms
```bash
/workflow:unified-execute-with-file -y -p PLAN_PATH -m parallel
```
### Codex Version (Prompt)
**Header (Simple)**:
```yaml
---
description: Universal execution engine...
argument-hint: "PLAN_PATH=\"<path>\" [EXECUTION_MODE=\"sequential|parallel\"]"
---
```
**Parameters**: Variable substitution with named arguments
```
PLAN_PATH=".workflow/IMPL_PLAN.md"
EXECUTION_MODE="parallel"
AUTO_CONFIRM="yes"
```
---
## Functional Equivalence
### Core Features (Identical)
Both versions support:
- ✅ Format-agnostic plan parsing (IMPL_PLAN.md, synthesis.json, conclusions.json)
- ✅ Multi-agent orchestration (code-developer, test-fix-agent, doc-generator, etc)
- ✅ Automatic dependency resolution with topological sort
- ✅ Parallel execution with wave-based grouping (max 3 tasks/wave)
- ✅ Unified event logging (execution-events.md as SINGLE SOURCE OF TRUTH)
- ✅ Knowledge chain: agents read all previous executions
- ✅ Incremental execution with resume capability
- ✅ Error handling: retry/skip/abort logic
- ✅ Session management and folder organization
### Session Structure (Identical)
Both create:
```
.workflow/.execution/{executionId}/
├── execution.md # Execution plan and status
└── execution-events.md # Unified execution log (SINGLE SOURCE OF TRUTH)
```
---
## Key Adaptations
### Claude CLI Version
**Optimizations**:
- Direct access to Claude Code tools (TodoWrite, Task, AskUserQuestion)
- CLI tool integration (`ccw cli`)
- Background agent execution with run_in_background flag
- Direct file system operations via Bash
**Structure**:
- Comprehensive Implementation Details section
- Explicit allowed-tools configuration
- Integration with workflow command system
### Codex Version
**Adaptations**:
- Simplified execution context (no direct tool access)
- Variable substitution for parameter passing
- Streamlined phase explanations
- Focused on core logic and flow
- Self-contained event logging
**Benefits**:
- Works with Codex's execution model
- Simpler parameter interface
- 85 fewer lines while maintaining all core functionality
---
## Parameter Mapping
| Concept | Claude | Codex |
|---------|--------|-------|
| Plan path | `-p path/to/plan.md` | `PLAN_PATH="path/to/plan.md"` |
| Execution mode | `-m sequential\|parallel` | `EXECUTION_MODE="sequential\|parallel"` |
| Auto-confirm | `-y, --yes` | `AUTO_CONFIRM="yes"` |
| Context focus | `"execution context"` | `EXECUTION_CONTEXT="focus area"` |
---
## Recommended Usage
### Use Claude Version When:
- Using Claude Code CLI environment
- Need direct integration with workflow system
- Want full tool access (TodoWrite, Task, AskUserQuestion)
- Prefer CLI flag syntax
- Building multi-command workflows
### Use Codex Version When:
- Executing within Codex directly
- Need simpler execution model
- Prefer variable substitution
- Want standalone execution
- Integrating with Codex command chains
---
## Event Logging (Unified)
Both versions produce identical execution-events.md format:
```markdown
## Task {id} - {STATUS} {emoji}
**Timestamp**: {ISO8601}
**Duration**: {ms}
**Agent**: {agent_type}
### Execution Summary
{summary}
### Generated Artifacts
- `path/to/file` (size)
### Notes for Next Agent
- Key decisions
- Issues identified
- Ready for: NEXT_TASK_ID
---
```
---
## Migration Path
If switching between Claude and Codex versions:
1. **Same session ID format**: Both use `.workflow/.execution/{executionId}/`
2. **Same event log structure**: execution-events.md is 100% compatible
3. **Same artifact locations**: Files generated at project paths (e.g., `src/types/auth.ts`)
4. **Same agent selection**: Both use identical selectBestAgent() strategy
5. **Same parallelization rules**: Identical wave grouping and file conflict detection
You can:
- Start execution with Claude, resume with Codex
- Start with Codex, continue with Claude
- Mix both in multi-step workflows
---
## Statistics
| Metric | Claude | Codex |
|--------|--------|-------|
| **Lines** | 807 | 722 |
| **Size** | 25 KB | 22 KB |
| **Phases** | 4 full phases | 4 phases (adapted) |
| **Agent types** | 6+ supported | 6+ supported |
| **Parallelization** | Max 3 tasks/wave | Max 3 tasks/wave |
| **Error handling** | retry/skip/abort | retry/skip/abort |
---
## Implementation Timeline
1. **Initial Claude version**: Full unified-execute-with-file.md (1094 lines)
2. **Claude optimization**: Consolidated duplicates (807 lines, -26%)
3. **Codex adaptation**: Format-adapted version (722 lines)
Both versions represent same core logic with format-specific optimizations.


@@ -0,0 +1,610 @@
---
description: Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding. Supports depth control and iteration limits.
argument-hint: "TOPIC=\"<topic or question>\" [--depth=standard|deep|full] [--max-iterations=<n>] [--verbose]"
---
# Codex Analyze-With-File Prompt
## Overview
Interactive collaborative analysis workflow with **documented discussion process**. Records understanding evolution, facilitates multi-round Q&A, and uses deep analysis for codebase and concept exploration.
**Core workflow**: Topic → Explore → Discuss → Document → Refine → Conclude
**Key features**:
- **discussion.md**: Timeline of discussions and understanding evolution
- **Multi-round Q&A**: Iterative clarification with user
- **Analysis-assisted exploration**: Deep codebase and concept analysis
- **Consolidated insights**: Synthesizes discussions into actionable conclusions
- **Flexible continuation**: Resume analysis sessions to build on previous work
## Target Topic
**$TOPIC**
- `--depth`: Analysis depth (standard|deep|full)
- `--max-iterations`: Max discussion rounds
## Execution Process
```
Session Detection:
├─ Check if analysis session exists for topic
├─ EXISTS + discussion.md exists → Continue mode
└─ NOT_FOUND → New session mode
Phase 1: Topic Understanding
├─ Parse topic/question
├─ Identify analysis dimensions (architecture, implementation, concept, etc.)
├─ Initial scoping with user
└─ Document initial understanding in discussion.md
Phase 2: Exploration (Parallel)
├─ Search codebase for relevant patterns
├─ Analyze code structure and dependencies
└─ Aggregate findings into exploration summary
Phase 3: Interactive Discussion (Multi-Round)
├─ Present exploration findings
├─ Facilitate Q&A with user
├─ Capture user insights and requirements
├─ Update discussion.md with each round
└─ Repeat until user is satisfied or clarity achieved
Phase 4: Synthesis & Conclusion
├─ Consolidate all insights
├─ Update discussion.md with conclusions
├─ Generate actionable recommendations
└─ Optional: Create follow-up tasks or issues
Output:
├─ .workflow/.analysis/{slug}-{date}/discussion.md (evolving document)
├─ .workflow/.analysis/{slug}-{date}/explorations.json (findings)
└─ .workflow/.analysis/{slug}-{date}/conclusions.json (final synthesis)
```
## Implementation Details
### Session Setup & Mode Detection
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const topicSlug = "$TOPIC".toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `ANL-${topicSlug}-${dateStr}`
const sessionFolder = `.workflow/.analysis/${sessionId}`
const discussionPath = `${sessionFolder}/discussion.md`
const explorationsPath = `${sessionFolder}/explorations.json`
const conclusionsPath = `${sessionFolder}/conclusions.json`
// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const hasDiscussion = sessionExists && fs.existsSync(discussionPath)
const mode = hasDiscussion ? 'continue' : 'new'
if (!sessionExists) {
bash(`mkdir -p ${sessionFolder}`)
}
```
---
### Phase 1: Topic Understanding
#### Step 1.1: Parse Topic & Identify Dimensions
```javascript
// Analyze topic to determine analysis dimensions.
// Keyword lists mix English and Chinese terms so topics in either language match.
const ANALYSIS_DIMENSIONS = {
architecture: ['架构', 'architecture', 'design', 'structure', '设计'],
implementation: ['实现', 'implement', 'code', 'coding', '代码'],
performance: ['性能', 'performance', 'optimize', 'bottleneck', '优化'],
security: ['安全', 'security', 'auth', 'permission', '权限'],
concept: ['概念', 'concept', 'theory', 'principle', '原理'],
comparison: ['比较', 'compare', 'vs', 'difference', '区别'],
decision: ['决策', 'decision', 'choice', 'tradeoff', '选择']
}
function identifyDimensions(topic) {
const text = topic.toLowerCase()
const matched = []
for (const [dimension, keywords] of Object.entries(ANALYSIS_DIMENSIONS)) {
if (keywords.some(k => text.includes(k))) {
matched.push(dimension)
}
}
return matched.length > 0 ? matched : ['general']
}
const dimensions = identifyDimensions("$TOPIC")
```
#### Step 1.2: Initial Scoping (New Session Only)
Ask the user to scope the analysis (a prompt sketch follows this list):
- Focus areas: Code implementation / Architecture design / Best practices / Problem diagnosis
- Analysis depth: Quick Overview / Standard Analysis / Deep Dive
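A minimal sketch of this scoping prompt, assuming the same `AskUser` tool invoked later in this diff; the response keys mirror the lowercased `header`, as the other examples do, and the description strings are illustrative:
```javascript
// Hypothetical scoping prompt; option labels mirror the focus areas above
const scoping = AskUser({
  questions: [
    {
      question: "Which areas should the analysis focus on?",
      header: "Focus",
      multiSelect: true,
      options: [
        { label: "Code implementation", description: "How the relevant code works today" },
        { label: "Architecture design", description: "Structure, boundaries, and dependencies" },
        { label: "Best practices", description: "Conventions and improvement opportunities" },
        { label: "Problem diagnosis", description: "Root-cause analysis of a known issue" }
      ]
    },
    {
      question: "How deep should the analysis go?",
      header: "Depth",
      multiSelect: false,
      options: [
        { label: "Quick Overview", description: "High-level findings only" },
        { label: "Standard Analysis", description: "Default depth" },
        { label: "Deep Dive", description: "Exhaustive exploration" }
      ]
    }
  ]
})
// Normalize multi-select into an array; these feed the discussion.md template below
const userFocusAreas = [].concat(scoping.focus)
const analysisDepth = scoping.depth
```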
#### Step 1.3: Create/Update discussion.md
For new session:
```markdown
# Analysis Discussion
**Session ID**: ${sessionId}
**Topic**: $TOPIC
**Started**: ${getUtc8ISOString()}
**Dimensions**: ${dimensions.join(', ')}
---
## User Context
**Focus Areas**: ${userFocusAreas.join(', ')}
**Analysis Depth**: ${analysisDepth}
---
## Discussion Timeline
### Round 1 - Initial Understanding (${timestamp})
#### Topic Analysis
Based on topic "$TOPIC":
- **Primary dimensions**: ${dimensions.join(', ')}
- **Initial scope**: ${initialScope}
- **Key questions to explore**:
- ${question1}
- ${question2}
- ${question3}
#### Next Steps
- Search codebase for relevant patterns
- Gather insights via analysis
- Prepare discussion points for user
---
## Current Understanding
${initialUnderstanding}
```
For continue session, append:
```markdown
### Round ${n} - Continuation (${timestamp})
#### Previous Context
Resuming analysis based on prior discussion.
#### New Focus
${newFocusFromUser}
```
---
### Phase 2: Exploration
#### Step 2.1: Codebase Search
```javascript
// Extract keywords from topic
const keywords = extractTopicKeywords("$TOPIC")
// Search codebase for relevant code
const searchResults = []
for (const keyword of keywords) {
const results = Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
searchResults.push({ keyword, results })
}
// Identify affected files and patterns
const relevantLocations = analyzeSearchResults(searchResults)
```
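Neither `extractTopicKeywords` nor `analyzeSearchResults` is defined in this prompt. A minimal sketch, assuming keywords are punctuation-delimited significant terms and that relevance can be approximated from `file:line:text` content output:
```javascript
// Hypothetical helper: split the topic into significant search terms
function extractTopicKeywords(topic) {
  const STOP_WORDS = new Set(['the', 'a', 'an', 'of', 'for', 'how', 'to', 'is', 'in'])
  return topic
    .toLowerCase()
    .split(/[^a-z0-9\u4e00-\u9fff]+/)   // keep CJK runs as single terms
    .filter(w => w.length > 2 && !STOP_WORDS.has(w))
}
// Hypothetical helper: rank files by how many distinct keywords matched in them
function analyzeSearchResults(searchResults) {
  const hits = {}
  for (const { keyword, results } of searchResults) {
    for (const line of String(results).split('\n')) {
      const file = line.split(':')[0]   // assumes file:line:text content format
      if (!file) continue
      hits[file] = hits[file] || { file, keywords: new Set(), count: 0 }
      hits[file].keywords.add(keyword)
      hits[file].count++
    }
  }
  return Object.values(hits)
    .sort((a, b) => b.keywords.size - a.keywords.size || b.count - a.count)
    .slice(0, 20)
}
```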
#### Step 2.2: Pattern Analysis
Analyze the codebase along the identified dimensions (a mapping sketch follows this list):
1. Architecture patterns and structure
2. Implementation conventions
3. Dependency relationships
4. Potential issues or improvements
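One way to make these angles concrete is a dimension-to-checklist map keyed by the Phase 1 dimensions; the checklist entries below are illustrative, not prescribed by this workflow:
```javascript
// Hypothetical checklist per dimension; drives which questions Step 2.2 asks of the code
const DIMENSION_CHECKS = {
  architecture: ['module boundaries', 'layering and entry points', 'dependency direction'],
  implementation: ['naming conventions', 'error-handling style', 'test coverage of hot paths'],
  performance: ['hot loops and allocations', 'caching layers', 'known bottlenecks'],
  security: ['auth checkpoints', 'input validation', 'permission boundaries'],
  general: ['overall structure', 'notable patterns', 'apparent risks']
}
const checks = dimensions.flatMap(d => DIMENSION_CHECKS[d] || DIMENSION_CHECKS.general)
```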
#### Step 2.3: Aggregate Findings
```javascript
// Aggregate into explorations.json
const explorations = {
session_id: sessionId,
timestamp: getUtc8ISOString(),
topic: "$TOPIC",
dimensions: dimensions,
sources: [
{ type: "codebase", summary: codebaseSummary },
{ type: "analysis", summary: analysisSummary }
],
key_findings: [...],
discussion_points: [...],
open_questions: [...]
}
Write(explorationsPath, JSON.stringify(explorations, null, 2))
```
#### Step 2.4: Update discussion.md
```markdown
#### Exploration Results (${timestamp})
**Sources Analyzed**:
${sources.map(s => `- ${s.type}: ${s.summary}`).join('\n')}
**Key Findings**:
${keyFindings.map((f, i) => `${i+1}. ${f}`).join('\n')}
**Points for Discussion**:
${discussionPoints.map((p, i) => `${i+1}. ${p}`).join('\n')}
**Open Questions**:
${openQuestions.map((q, i) => `- ${q}`).join('\n')}
```
---
### Phase 3: Interactive Discussion (Multi-Round)
#### Step 3.1: Present Findings & Gather Feedback
```javascript
// Maximum discussion rounds (override with --max-iterations)
const MAX_ROUNDS = 5
let roundNumber = 1
let discussionComplete = false
while (!discussionComplete && roundNumber <= MAX_ROUNDS) {
// Display current findings
console.log(`
## Discussion Round ${roundNumber}
${currentFindings}
### Key Points for Your Input
${discussionPoints.map((p, i) => `${i+1}. ${p}`).join('\n')}
`)
// Gather user input
// Options:
// - "Agree, go deeper": deepen analysis in the current direction
// - "Adjust direction": capture the user's adjusted focus
// - "Analysis complete": set discussionComplete = true and exit the loop
// - "Specific questions": answer them, then continue
// Process user response and update understanding
if (userResponse === 'complete') discussionComplete = true
updateDiscussionDocument(roundNumber, userResponse, findings)
roundNumber++
}
```
#### Step 3.2: Document Each Round
Append to discussion.md:
```markdown
### Round ${n} - Discussion (${timestamp})
#### User Input
${userInputSummary}
${userResponse === 'adjustment' ? `
**Direction Adjustment**: ${adjustmentDetails}
` : ''}
${userResponse === 'questions' ? `
**User Questions**:
${userQuestions.map((q, i) => `${i+1}. ${q}`).join('\n')}
**Answers**:
${answers.map((a, i) => `${i+1}. ${a}`).join('\n')}
` : ''}
#### Updated Understanding
Based on user feedback:
- ${insight1}
- ${insight2}
#### Corrected Assumptions
${corrections.length > 0 ? corrections.map(c => `
- ~~${c.wrong}~~ → ${c.corrected}
- Reason: ${c.reason}
`).join('\n') : 'None'}
#### New Insights
${newInsights.map(i => `- ${i}`).join('\n')}
```
---
### Phase 4: Synthesis & Conclusion
#### Step 4.1: Consolidate Insights
```javascript
const conclusions = {
session_id: sessionId,
topic: "$TOPIC",
completed: getUtc8ISOString(),
total_rounds: roundNumber - 1, // the loop increments past the final round
summary: "...",
key_conclusions: [
{ point: "...", evidence: "...", confidence: "high|medium|low" }
],
recommendations: [
{ action: "...", rationale: "...", priority: "high|medium|low" }
],
open_questions: [...],
follow_up_suggestions: [
{ type: "issue", summary: "..." },
{ type: "task", summary: "..." }
]
}
Write(conclusionsPath, JSON.stringify(conclusions, null, 2))
```
#### Step 4.2: Final discussion.md Update
```markdown
---
## Conclusions (${timestamp})
### Summary
${summaryParagraph}
### Key Conclusions
${conclusions.key_conclusions.map((c, i) => `
${i+1}. **${c.point}** (Confidence: ${c.confidence})
- Evidence: ${c.evidence}
`).join('\n')}
### Recommendations
${conclusions.recommendations.map((r, i) => `
${i+1}. **${r.action}** (Priority: ${r.priority})
- Rationale: ${r.rationale}
`).join('\n')}
### Remaining Questions
${conclusions.open_questions.map(q => `- ${q}`).join('\n')}
---
## Current Understanding (Final)
### What We Established
${establishedPoints.map(p => `- ${p}`).join('\n')}
### What Was Clarified/Corrected
${corrections.map(c => `- ~~${c.original}~~ → ${c.corrected}`).join('\n')}
### Key Insights
${keyInsights.map(i => `- ${i}`).join('\n')}
---
## Session Statistics
- **Total Rounds**: ${totalRounds}
- **Duration**: ${duration}
- **Sources Used**: ${sources.join(', ')}
- **Artifacts Generated**: discussion.md, explorations.json, conclusions.json
```
#### Step 4.3: Post-Completion Options
Offer follow-up options:
- Create Issue: Convert conclusions to actionable issues
- Generate Task: Create implementation tasks
- Export Report: Generate standalone analysis report
- Complete: No further action needed
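A minimal sketch of this follow-up prompt, reusing the `AskUser` shape seen elsewhere in this diff (the response key mirrors the lowercased header, as in the other examples):
```javascript
// Hypothetical follow-up prompt; dispatch on followUp.next afterwards
const followUp = AskUser({
  questions: [{
    question: "What should happen with these conclusions?",
    header: "Next",
    multiSelect: false,
    options: [
      { label: "Create Issue", description: "Convert conclusions to actionable issues" },
      { label: "Generate Task", description: "Create implementation tasks" },
      { label: "Export Report", description: "Generate standalone analysis report" },
      { label: "Complete", description: "No further action needed" }
    ]
  }]
})
```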
---
## Session Folder Structure
```
.workflow/.analysis/ANL-{slug}-{date}/
├── discussion.md # Evolution of understanding & discussions
├── explorations.json # Exploration findings
├── conclusions.json # Final synthesis
└── exploration-*.json # Individual exploration results (optional)
```
## Discussion Document Template
```markdown
# Analysis Discussion
**Session ID**: ANL-xxx-2025-01-25
**Topic**: [topic or question]
**Started**: 2025-01-25T10:00:00+08:00
**Dimensions**: [architecture, implementation, ...]
---
## User Context
**Focus Areas**: [user-selected focus]
**Analysis Depth**: [quick|standard|deep]
---
## Discussion Timeline
### Round 1 - Initial Understanding (2025-01-25 10:00)
#### Topic Analysis
...
#### Exploration Results
...
### Round 2 - Discussion (2025-01-25 10:15)
#### User Input
...
#### Updated Understanding
...
#### Corrected Assumptions
- ~~[wrong]~~ → [corrected]
### Round 3 - Deep Dive (2025-01-25 10:30)
...
---
## Conclusions (2025-01-25 11:00)
### Summary
...
### Key Conclusions
...
### Recommendations
...
---
## Current Understanding (Final)
### What We Established
- [confirmed points]
### What Was Clarified/Corrected
- ~~[original assumption]~~ → [corrected understanding]
### Key Insights
- [insights gained]
---
## Session Statistics
- **Total Rounds**: 3
- **Duration**: 1 hour
- **Sources Used**: codebase exploration, analysis
- **Artifacts Generated**: discussion.md, explorations.json, conclusions.json
```
## Iteration Flow
```
First Call (TOPIC="topic"):
├─ No session exists → New mode
├─ Identify analysis dimensions
├─ Scope with user
├─ Create discussion.md with initial understanding
├─ Launch explorations
└─ Enter discussion loop
Continue Call (TOPIC="topic"):
├─ Session exists → Continue mode
├─ Load discussion.md
├─ Resume from last round
└─ Continue discussion loop
Discussion Loop:
├─ Present current findings
├─ Gather user feedback
├─ Process response:
│ ├─ Agree → Deepen analysis
│ ├─ Adjust → Change direction
│ ├─ Question → Answer then continue
│ └─ Complete → Exit loop
├─ Update discussion.md
└─ Repeat until complete or max rounds
Completion:
├─ Generate conclusions.json
├─ Update discussion.md with final synthesis
└─ Offer follow-up options
```
## Consolidation Rules
When updating "Current Understanding":
1. **Promote confirmed insights**: Move validated findings to "What We Established"
2. **Track corrections**: Keep important wrong→right transformations
3. **Focus on current state**: What do we know NOW
4. **Avoid timeline repetition**: Don't copy discussion details
5. **Preserve key learnings**: Keep insights valuable for future reference
**Bad (cluttered)**:
```markdown
## Current Understanding
In round 1 we discussed X, then in round 2 user said Y, and we explored Z...
```
**Good (consolidated)**:
```markdown
## Current Understanding
### What We Established
- The authentication flow uses JWT with refresh tokens
- Rate limiting is implemented at API gateway level
### What Was Clarified
- ~~Assumed Redis for sessions~~ → Actually uses database-backed sessions
### Key Insights
- Current architecture supports horizontal scaling
- Security audit recommended before production
```
## Error Handling
| Situation | Action |
|-----------|--------|
| Exploration fails | Continue with available context, note limitation |
| User timeout in discussion | Save state, show resume instructions |
| Max rounds reached | Force synthesis, offer continuation option |
| No relevant findings | Broaden search, ask user for clarification |
| Session folder conflict | Append timestamp suffix |
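A sketch of the last row's fallback, assuming a conflict means the folder already exists for today but holds no resumable `discussion.md`; the `HHmmss` suffix format is an assumption:
```javascript
// Hypothetical conflict handler: same-day rerun whose folder has no discussion.md to resume
if (sessionExists && !hasDiscussion) {
  const timeSuffix = getUtc8ISOString().substring(11, 19).replace(/:/g, '')  // HHmmss in UTC+8
  sessionFolder = `${sessionFolder}-${timeSuffix}`  // requires `let sessionFolder` in the setup above
  bash(`mkdir -p ${sessionFolder}`)
}
```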
---
**Now execute the analyze-with-file workflow for topic**: $TOPIC


@@ -0,0 +1,456 @@
---
description: Convert brainstorm session output to parallel-dev-cycle input with idea selection and context enrichment. Unified parameter format.
argument-hint: "--session=<id> [--idea=<index>] [--auto] [--launch]"
---
# Brainstorm to Cycle Adapter
## Overview
Bridge workflow that converts **brainstorm-with-file** output to **parallel-dev-cycle** input. Reads synthesis.json, lets the user select an idea, and formats it as an enriched TASK description.
**Core workflow**: Load Session → Select Idea → Format Task → Launch Cycle
## Inputs
| Argument | Required | Description |
|----------|----------|-------------|
| --session | Yes | Brainstorm session ID (e.g., `BS-rate-limiting-2025-01-28`) |
| --idea | No | Pre-select idea by index (0-based, from top_ideas) |
| --auto | No | Auto-select top-scored idea without confirmation |
| --launch | No | Auto-launch parallel-dev-cycle without preview |
## Output
Launches `/parallel-dev-cycle` with enriched TASK containing:
- Primary recommendation or selected idea
- Key strengths and challenges
- Suggested implementation steps
- Alternative approaches for reference
## Execution Process
```
Phase 1: Session Loading
├─ Validate session folder exists
├─ Read synthesis.json
├─ Parse top_ideas and recommendations
└─ Validate data structure
Phase 2: Idea Selection
├─ --auto mode → Select highest scored idea
├─ --idea=N → Select specified index
└─ Interactive → Present options, await selection
Phase 3: Task Formatting
├─ Build enriched task description
├─ Include context from brainstorm
└─ Generate parallel-dev-cycle command
Phase 4: Cycle Launch
├─ Confirm with user (unless --auto)
└─ Execute parallel-dev-cycle
```
## Implementation
### Phase 1: Session Loading
```javascript
// Shift the epoch by 8h so toISOString() renders UTC+8 wall-clock time (the trailing 'Z' is nominal)
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// Parse arguments
const args = "$ARGUMENTS"
const sessionId = "$SESSION"
const ideaIndexMatch = args.match(/--idea=(\d+)/)
const preSelectedIdea = ideaIndexMatch ? parseInt(ideaIndexMatch[1]) : null
const isAutoMode = args.includes('--auto')
const isLaunchMode = args.includes('--launch')
// Validate session
const sessionFolder = `.workflow/.brainstorm/${sessionId}`
const synthesisPath = `${sessionFolder}/synthesis.json`
const brainstormPath = `${sessionFolder}/brainstorm.md`
function fileExists(p) {
try { return bash(`test -f "${p}" && echo "yes"`).includes('yes') } catch { return false }
}
if (!fileExists(synthesisPath)) {
console.error(`
## Error: Session Not Found
Session ID: ${sessionId}
Expected path: ${synthesisPath}
**Available sessions**:
`)
bash(`ls -1 .workflow/.brainstorm/ 2>/dev/null | head -10`)
return { status: 'error', message: 'Session not found' }
}
// Load synthesis
const synthesis = JSON.parse(Read(synthesisPath))
// Validate structure
if (!synthesis.top_ideas || synthesis.top_ideas.length === 0) {
console.error(`
## Error: No Ideas Found
The brainstorm session has no top_ideas.
Please complete the brainstorm workflow first.
`)
return { status: 'error', message: 'No ideas in synthesis' }
}
console.log(`
## Brainstorm Session Loaded
**Session**: ${sessionId}
**Topic**: ${synthesis.topic}
**Completed**: ${synthesis.completed}
**Ideas Found**: ${synthesis.top_ideas.length}
`)
```
---
### Phase 2: Idea Selection
```javascript
let selectedIdea = null
let selectionSource = ''
// Auto mode: select highest scored
if (isAutoMode) {
selectedIdea = synthesis.top_ideas.reduce((best, idea) =>
idea.score > best.score ? idea : best
)
selectionSource = 'auto (highest score)'
console.log(`
**Auto-selected**: ${selectedIdea.title} (Score: ${selectedIdea.score}/10)
`)
}
// Pre-selected by index
else if (preSelectedIdea !== null) {
if (preSelectedIdea >= synthesis.top_ideas.length) {
console.error(`
## Error: Invalid Idea Index
Requested: --idea=${preSelectedIdea}
Available: 0 to ${synthesis.top_ideas.length - 1}
`)
return { status: 'error', message: 'Invalid idea index' }
}
selectedIdea = synthesis.top_ideas[preSelectedIdea]
selectionSource = `index ${preSelectedIdea}`
console.log(`
**Pre-selected**: ${selectedIdea.title} (Index: ${preSelectedIdea})
`)
}
// Interactive selection
else {
// Display options
console.log(`
## Select Idea for Development
| # | Title | Score | Feasibility |
|---|-------|-------|-------------|
${synthesis.top_ideas.map((idea, i) =>
`| ${i} | ${idea.title.substring(0, 40)} | ${idea.score}/10 | ${idea.feasibility || 'N/A'} |`
).join('\n')}
**Primary Recommendation**: ${synthesis.recommendations?.primary?.substring(0, 60) || 'N/A'}
`)
// Build options for AskUser
const ideaOptions = synthesis.top_ideas.slice(0, 4).map((idea, i) => ({
label: `#${i}: ${idea.title.substring(0, 30)}`,
description: `Score: ${idea.score}/10 - ${idea.description?.substring(0, 50) || ''}`
}))
// Add primary recommendation option if different
if (synthesis.recommendations?.primary) {
ideaOptions.unshift({
label: "Primary Recommendation",
description: synthesis.recommendations.primary.substring(0, 60)
})
}
const selection = AskUser({
questions: [{
question: "Which idea should be developed?",
header: "Idea",
multiSelect: false,
options: ideaOptions
}]
})
// Parse selection
if (selection.idea === "Primary Recommendation") {
// Use primary recommendation as task
selectedIdea = {
title: "Primary Recommendation",
description: synthesis.recommendations.primary,
key_strengths: synthesis.key_insights || [],
main_challenges: [],
next_steps: synthesis.follow_up?.filter(f => f.type === 'implementation').map(f => f.summary) || []
}
selectionSource = 'primary recommendation'
} else {
const match = selection.idea.match(/^#(\d+):/)
const idx = match ? parseInt(match[1]) : 0
selectedIdea = synthesis.top_ideas[idx]
selectionSource = `user selected #${idx}`
}
}
console.log(`
### Selected Idea
**Title**: ${selectedIdea.title}
**Source**: ${selectionSource}
**Description**: ${selectedIdea.description?.substring(0, 200) || 'N/A'}
`)
```
---
### Phase 3: Task Formatting
```javascript
// Build enriched task description
function formatTask(idea, synthesis) {
const sections = []
// Main objective
sections.push(`# Main Objective\n\n${idea.title}`)
// Description
if (idea.description) {
sections.push(`# Description\n\n${idea.description}`)
}
// Key strengths
if (idea.key_strengths?.length > 0) {
sections.push(`# Key Strengths\n\n${idea.key_strengths.map(s => `- ${s}`).join('\n')}`)
}
// Main challenges (important for RA agent)
if (idea.main_challenges?.length > 0) {
sections.push(`# Main Challenges to Address\n\n${idea.main_challenges.map(c => `- ${c}`).join('\n')}`)
}
// Recommended steps
if (idea.next_steps?.length > 0) {
sections.push(`# Recommended Implementation Steps\n\n${idea.next_steps.map((s, i) => `${i + 1}. ${s}`).join('\n')}`)
}
// Alternative approaches (for RA consideration)
if (synthesis.recommendations?.alternatives?.length > 0) {
sections.push(`# Alternative Approaches (for reference)\n\n${synthesis.recommendations.alternatives.map(a => `- ${a}`).join('\n')}`)
}
// Key insights from brainstorm
if (synthesis.key_insights?.length > 0) {
const relevantInsights = synthesis.key_insights.slice(0, 3)
sections.push(`# Key Insights from Brainstorm\n\n${relevantInsights.map(i => `- ${i}`).join('\n')}`)
}
// Source reference
sections.push(`# Source\n\nBrainstorm Session: ${synthesis.session_id}\nTopic: ${synthesis.topic}`)
return sections.join('\n\n')
}
const enrichedTask = formatTask(selectedIdea, synthesis)
// Display formatted task
console.log(`
## Formatted Task for parallel-dev-cycle
\`\`\`markdown
${enrichedTask}
\`\`\`
`)
// Save task to session folder for reference
Write(`${sessionFolder}/cycle-task.md`, `# Generated Task\n\n**Generated**: ${getUtc8ISOString()}\n**Idea**: ${selectedIdea.title}\n**Selection**: ${selectionSource}\n\n---\n\n${enrichedTask}`)
```
---
### Phase 4: Cycle Launch
```javascript
// Confirm launch (skipped when --auto or --launch already authorized it)
let shouldLaunch = isAutoMode || isLaunchMode
if (!shouldLaunch) {
const confirmation = AskUser({
questions: [{
question: "Launch parallel-dev-cycle with this task?",
header: "Launch",
multiSelect: false,
options: [
{ label: "Yes, launch cycle (Recommended)", description: "Start parallel-dev-cycle with enriched task" },
{ label: "No, just save task", description: "Save formatted task for manual use" }
]
}]
})
shouldLaunch = confirmation.launch.includes("Yes")
}
if (shouldLaunch) {
console.log(`
## Launching parallel-dev-cycle
**Task**: ${selectedIdea.title}
**Source Session**: ${sessionId}
`)
// Escape task for command line
const escapedTask = enrichedTask
.replace(/\\/g, '\\\\')
.replace(/"/g, '\\"')
.replace(/\$/g, '\\$')
.replace(/`/g, '\\`')
// Launch parallel-dev-cycle
// Note: In actual execution, this would invoke the skill
console.log(`
### Cycle Command
\`\`\`bash
/parallel-dev-cycle TASK="${escapedTask.substring(0, 100)}..."
\`\`\`
**Full task saved to**: ${sessionFolder}/cycle-task.md
`)
// Return success with cycle trigger
return {
status: 'success',
action: 'launch_cycle',
session_id: sessionId,
idea: selectedIdea.title,
task_file: `${sessionFolder}/cycle-task.md`,
cycle_command: `/parallel-dev-cycle TASK="${enrichedTask}"`
}
} else {
console.log(`
## Task Saved (Not Launched)
**Task file**: ${sessionFolder}/cycle-task.md
To launch manually:
\`\`\`bash
/parallel-dev-cycle TASK="$(cat ${sessionFolder}/cycle-task.md)"
\`\`\`
`)
return {
status: 'success',
action: 'saved_only',
session_id: sessionId,
task_file: `${sessionFolder}/cycle-task.md`
}
}
```
---
## Session Files
After execution:
```
.workflow/.brainstorm/{session-id}/
├── brainstorm.md # Original brainstorm
├── synthesis.json # Synthesis data (input)
├── perspectives.json # Perspectives data
├── ideas/ # Idea deep-dives
└── cycle-task.md # ⭐ Generated task (output)
```
## Task Format
The generated task includes:
| Section | Purpose | Used By |
|---------|---------|---------|
| Main Objective | Clear goal statement | RA: Primary requirement |
| Description | Detailed explanation | RA: Requirement context |
| Key Strengths | Why this approach | RA: Design decisions |
| Main Challenges | Known issues to address | RA: Edge cases, risks |
| Implementation Steps | Suggested approach | EP: Planning guidance |
| Alternatives | Other valid approaches | RA: Fallback options |
| Key Insights | Learnings from brainstorm | RA: Domain context |
## Error Handling
| Situation | Action |
|-----------|--------|
| Session not found | List available sessions, abort |
| synthesis.json missing | Suggest completing brainstorm first |
| No top_ideas | Report error, abort |
| Invalid --idea index | Show valid range, abort |
| Task too long | Truncate with reference to file |
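A sketch of the last row's guard; the 4000-character limit is an assumption, not a documented constraint:
```javascript
// Hypothetical guard against over-long CLI arguments; taskArg would feed
// the Phase 4 escaping step in place of the raw enrichedTask
const MAX_TASK_ARG = 4000
const taskArg = enrichedTask.length <= MAX_TASK_ARG
  ? enrichedTask
  : `${enrichedTask.substring(0, MAX_TASK_ARG)}\n\n(Truncated - full task: ${sessionFolder}/cycle-task.md)`
```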
## Examples
### Auto Mode (Quick Launch)
```bash
/brainstorm-to-cycle SESSION="BS-rate-limiting-2025-01-28" --auto
# → Selects highest-scored idea
# → Launches parallel-dev-cycle immediately
```
### Pre-Selected Idea
```bash
/brainstorm-to-cycle SESSION="BS-auth-system-2025-01-28" --idea=2
# → Selects top_ideas[2]
# → Confirms before launch
```
### Interactive Selection
```bash
/brainstorm-to-cycle SESSION="BS-caching-2025-01-28"
# → Displays all ideas with scores
# → User selects from options
# → Confirms and launches
```
## Integration Flow
```
brainstorm-with-file
        ↓
  synthesis.json
        ↓
brainstorm-to-cycle   ◄─── This command
        ↓
  enriched TASK
        ↓
parallel-dev-cycle
        ↓
  RA → EP → CD → VAS
```
---
**Now execute brainstorm-to-cycle** with session: $SESSION

Some files were not shown because too many files have changed in this diff.