Compare commits

...

102 Commits

Author SHA1 Message Date
catlog22
204cb20617 chore(release): version 6.3.49
- Update package.json version to 6.3.49
- Add changelog entry for v6.3.49 with all commits since v6.3.48
- New features: CLI tools enhancements, skills system improvements, security fixes
- Documentation updates and UI improvements
2026-01-28 23:34:31 +08:00
catlog22
63f0daebbb feat: Update initialization process to prioritize in-memory configuration for CLI tool selection 2026-01-28 23:20:50 +08:00
catlog22
6ac041c1d8 feat: Enhance Codex CLI settings with toggle and refresh actions 2026-01-28 23:05:31 +08:00
catlog22
279adfd391 feat: Implement Codex CLI enhancement settings with API integration and UI toggle 2026-01-28 23:01:18 +08:00
catlog22
0a07138c27 feat: Add ccw-cli-tools skill specification with unified execution framework and configuration-driven tool selection 2026-01-28 22:55:36 +08:00
catlog22
a5d9e8ca87 feat: Enhance lite-skill-generator with single file output and improved validation 2026-01-28 22:23:19 +08:00
catlog22
502c8a09a1 fix(security): Apply 3 critical security fixes
- sec-001: Add validateAllowedPath to /api/file endpoint (path traversal)
- sec-002: Enable CSRF by default with CCW_DISABLE_CSRF opt-out
- sec-003: Add validateAllowedPath to /api/dialog/browse and /api/dialog/open-file (path traversal)

Ref: fix-1738072800000
2026-01-28 22:04:18 +08:00
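A minimal sketch of the `validateAllowedPath` guard referenced in sec-001/sec-003, assuming an allow-list of permitted root directories; the function name comes from the commit, while the roots and error handling here are illustrative:

```ts
import * as os from 'os';
import * as path from 'path';

// Hypothetical allow-list; the real permitted roots live in CCW configuration.
const ALLOWED_ROOTS = [process.cwd(), path.join(os.homedir(), '.claude')].map(
  (p) => path.resolve(p)
);

// Resolve the candidate path and require it to stay inside an allowed root,
// rejecting ../ traversal out of the sandbox.
export function validateAllowedPath(candidate: string): string {
  const resolved = path.resolve(candidate);
  const ok = ALLOWED_ROOTS.some(
    (root) => resolved === root || resolved.startsWith(root + path.sep)
  );
  if (!ok) throw new Error(`Path not allowed: ${resolved}`);
  return resolved;
}

// Usage in a route handler (sketch):
// app.get('/api/file', (req, res) => {
//   res.sendFile(validateAllowedPath(String(req.query.path)));
// });
```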
catlog22
ed0255b8a2 Add skill tuning diagnosis report for skill-generator
- Introduced a new JSON file `skill-tuning-diagnosis.json` containing a comprehensive diagnosis of the skill-generator.
- Documented critical issues related to context management and data flow, including:
  - Full state serialization leading to unbounded context growth.
  - Scattered state writing without a unified schema.
  - Lack of input state schema validation in autonomous orchestrators.
- Provided detailed descriptions, impacts, root causes, and fix strategies for each identified issue.
- Summarized recommendations with priority levels for urgent fixes.
2026-01-28 22:00:20 +08:00
catlog22
6e94fc0740 chore: remove CLI endpoints section from Codex Code Guidelines 2026-01-28 21:33:43 +08:00
catlog22
b361a8c041 Add CLI endpoints documentation and unified script template for Bash and Python
- Updated AGENTS.md to include CLI tools usage and configuration details.
- Introduced a new script template for both Bash and Python, outlining usage context, calling conventions, and implementation guidelines.
- Provided examples for common patterns in both Bash and Python scripts.
- Established a directory convention for script organization and naming.
2026-01-28 21:29:21 +08:00
catlog22
24dad8cefd Refactor orchestrator logic and enhance problem taxonomy
- Updated orchestrator decision logic to improve state management and action selection.
- Introduced structured termination checks and action selection criteria.
- Enhanced state update mechanism with sliding window for action history and error tracking.
- Revised problem taxonomy for skill execution issues, consolidating categories and refining detection patterns.
- Improved severity calculation method for issue prioritization.
- Streamlined fix mapping strategies for better clarity and usability.
2026-01-28 21:08:49 +08:00
catlog22
071c98d89c feat: add brainstorm-to-cycle adapter for converting brainstorm output to parallel-dev-cycle input 2026-01-28 20:51:35 +08:00
catlog22
994718dee2 chore: remove unnecessary blank line in auto-parallel command documentation 2026-01-28 20:36:41 +08:00
catlog22
3998d24e32 Enhance skill generator documentation and templates
- Updated Phase 1 and Phase 2 documentation to include next phase links and data flow details.
- Expanded Phase 5 documentation to include comprehensive validation and README generation steps, along with validation report structure.
- Added purpose and usage context sections to various action and script templates (e.g., autonomous-action, llm-action, script-bash).
- Improved commands management by simplifying the command scanning logic and allowing commands to be enabled/disabled by renaming files.
- Enhanced dashboard command manager to format group names and display nested groups with appropriate icons and colors.
- Updated LiteLLM executor to allow model overrides during execution.
- Added action reference guide and template reference sections to the skill-tuning SKILL.md for better navigation and understanding.
2026-01-28 20:34:03 +08:00
catlog22
29274ee943 feat(codex): add brainstorm-with-file prompt for interactive brainstorming workflow
- Add multi-perspective brainstorming workflow
- Support creative, pragmatic, and systematic analysis
- Include diverge-converge cycles with user interaction
- Add deep dive, devil's advocate, and idea merging
- Document thought evolution in brainstorm.md
2026-01-28 20:33:13 +08:00
catlog22
46d5739935 fix(changelog): update workflow references from review-fix to review-cycle-fix for consistency 2026-01-28 20:30:03 +08:00
catlog22
152cab2b7e feat: update review commands to use review-cycle-fix for automated fixing 2026-01-28 20:24:59 +08:00
catlog22
0cc5101c0e feat: Add phases for document consolidation, assembly, and compliance refinement
- Introduced Phase 2.5: Consolidation Agent to summarize analysis outputs and generate design overviews.
- Added Phase 4: Document Assembly to create index-style documents linking chapter files.
- Implemented Phase 5: Compliance Review & Iterative Refinement for CPCC compliance checks and updates.
- Established CPCC Compliance Requirements document outlining mandatory sections and validation functions.
- Created a base template for analysis agents to ensure consistency and efficiency in execution.
2026-01-28 19:57:24 +08:00
catlog22
4c78f53bcc feat: add commands management feature with API endpoints and UI integration
- Implemented commands routes for listing, enabling, and disabling commands.
- Created commands manager view with accordion groups for better organization.
- Added loading states and confirmation dialogs for enabling/disabling commands.
- Enhanced error handling and user feedback for command operations.
- Introduced CSS styles for commands manager UI components.
- Updated navigation to include commands manager link.
- Refactored existing code for better maintainability and clarity.
2026-01-28 08:26:37 +08:00
catlog22
cc5a5716cf fix(skills): improve robustness of enable/disable operations
- Add rollback in moveDirectory when rmSync fails after cpSync
- Add transaction rollback in disable/enableSkill when config save fails
- Surface config corruption by throwing on JSON parse errors
- Add robust JSON error parsing with fallback in frontend
- Add loading state and double-click prevention for toggle button
2026-01-28 08:25:59 +08:00
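A sketch of the copy-then-delete move with rollback described in the first bullet; the real `moveDirectory` lives in the skills module, so the signature and error handling here are assumptions:

```ts
import * as fs from 'fs';

// Move a directory via copy-then-delete, rolling back the copy
// if the final removal of the source fails.
function moveDirectory(src: string, dest: string): void {
  fs.cpSync(src, dest, { recursive: true });
  try {
    fs.rmSync(src, { recursive: true, force: true });
  } catch (err) {
    // rmSync failed after cpSync: remove the partial destination copy
    // so the directory does not end up existing in two places.
    fs.rmSync(dest, { recursive: true, force: true });
    throw err;
  }
}
```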
catlog22
af05874510 feat(skills): enhance moveDirectory function with rollback on failure and update config handling 2026-01-28 00:50:24 +08:00
catlog22
7a40f16235 feat(skills): implement enable/disable functionality for skills
- Added new API endpoints to enable and disable skills.
- Introduced logic to manage disabled skills, including loading and saving configurations.
- Enhanced skills routes to return lists of disabled skills.
- Updated frontend to display disabled skills and allow toggling their status.
- Added internationalization support for new skill status messages.
- Created JSON schemas for plan verification agent and findings.
- Defined new types for skill management in TypeScript.
2026-01-28 00:49:39 +08:00
catlog22
8d178feaac feat: enhance plan verification and context gathering with auto-execution and interactive user selection 2026-01-28 00:12:15 +08:00
catlog22
b3c47294e7 Enhance workflow commands and context management
- Updated `plan.md` to include new fields in context-package.json: prioritized_context, user_intent, priority_tiers, dependency_order, and sorting_rationale.
- Added validation for the existence of the prioritized_context field in context-package.json.
- Modified user decision flow in task generation to present action choices after planning completion.
- Improved context-gathering process in `context-gather.md` to integrate user intent and prioritize context based on user goals.
- Revised conflict-resolution documentation to require planning notes records after conflict analysis.
- Streamlined task generation in `task-generate-agent.md` to utilize pre-sorted context without redundant sorting.
- Removed unused settings persistence functions and corresponding tests from `claude-cli-tools.ts` and `settings-persistence.test.ts`.
2026-01-28 00:02:45 +08:00
catlog22
9989cfcf21 feat: update task generation and execution limits; optimize multi-module task management 2026-01-27 23:34:31 +08:00
catlog22
1b6ace0447 feat: add planning notes to support task generation and constraint management 2026-01-27 23:16:01 +08:00
catlog22
a3b303d8e3 Enhance CLI Lite Planning Agent with Mandatory Quality Check
- Added Phase 5: Plan Quality Check to cli-lite-planning-agent.md, detailing mandatory quality validation after plan generation.
- Introduced quality dimensions: completeness, granularity, dependencies, acceptance criteria, implementation steps, and constraint compliance.
- Specified CLI command format for quality check execution and expected output structure.
- Implemented result parsing and auto-fix strategies for minor issues.
- Updated integration flow to ensure quality check is executed before returning the plan to the orchestrator.

Refactor lite-plan.md to reflect internal quality check execution for medium/high complexity plans.

Create new brainstorm-with-file.md for interactive brainstorming workflow, detailing session setup, execution process, and implementation steps.
2026-01-27 23:02:05 +08:00
catlog22
0c1c87f704 fix: correct community group QR code image extension in README_CN.md
Change the reference from .jpg to .png to match the actual filename
2026-01-26 09:47:42 +08:00
catlog22
985085c624 Refactor CLI Config Manager and Add Provider Model Routes
- Removed deprecated constants and functions from cli-config-manager.ts.
- Introduced new provider model presets in litellm-provider-models.ts for better organization and management of model information.
- Created provider-routes.ts to handle API endpoints for retrieving provider information and models.
- Added integration tests for provider routes to ensure correct functionality and response structure.
- Implemented unit tests for settings persistence functions, covering various scenarios and edge cases.
- Enhanced error handling and validation in the new routes and settings functions.
2026-01-25 17:27:58 +08:00
catlog22
7c16cc6427 feat: add convert-to-plan command to convert planning documents into issue solutions 2026-01-25 12:32:42 +08:00
catlog22
6875108dda Add interactive analysis workflow with documented discussions and CLI exploration
- Introduced `analyze-with-file` command for collaborative analysis.
- Implemented session management for new and continued analysis sessions.
- Developed structured phases for topic understanding, exploration, discussion, and conclusion synthesis.
- Created detailed documentation for the workflow, including examples and implementation details.
- Added Codex prompt for deep analysis and exploration of codebase and concepts.
2026-01-25 11:08:13 +08:00
catlog22
9cff6f5f43 refactor(issue): remove 'merged' queue status, use 'archived' instead
- Remove 'merged' from VALID_QUEUE_STATUSES constant
- Update mergeQueues() to set status to 'archived' instead of 'merged'
- Preserve merged_into/merged_at metadata for traceability
- Update frontend to use 'archived' for button visibility checks
- Fix queue --json returning fake queue ID when no active queue exists
2026-01-25 10:56:46 +08:00
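A sketch of the status transition, with `merged_into`/`merged_at` preserved as the commit describes; status names other than 'archived' and the queue shape are placeholders:

```ts
// Statuses accepted by the queue subsystem; 'merged' was removed.
const VALID_QUEUE_STATUSES = ['active', 'completed', 'archived'] as const;
type QueueStatus = (typeof VALID_QUEUE_STATUSES)[number];

interface IssueQueue {
  id: string;
  status: QueueStatus;
  merged_into?: string; // metadata preserved for traceability
  merged_at?: string;
}

function mergeQueues(source: IssueQueue, target: IssueQueue): void {
  // ...move queue items from source into target...
  source.status = 'archived'; // previously set to 'merged'
  source.merged_into = target.id;
  source.merged_at = new Date().toISOString();
}
```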
catlog22
fe2536d4cd feat: add 'merged' queue status with related validation; add status reference documentation 2026-01-25 10:11:52 +08:00
catlog22
16f27c080a docs: add Level 5 intelligent orchestration workflow guide to English version
- Added Level 5 (CCW Coordinator) chapter with complete Minimum Execution Units definition
- Added 3-Phase Workflow explanation (Analyze Requirements, Discover Commands, Execute)
- Integrated command port system for dynamic pipeline composition
- Added JavaScript task analysis and complexity assessment functions
- Included state file structure and complete Mermaid flowchart (119 lines)
- Updated Quick Selection Table to reference ccw-coordinator for complex workflows
- Updated Decision Flowchart to include Level 5 at top level
- Updated Summary Level Overview table to include Level 5 row
- Updated Core Principles with Level 5 selection criteria and usage guidelines
- Synchronized English version with WORKFLOW_GUIDE_CN.md Level 5 content
2026-01-24 21:41:31 +08:00
catlog22
874b70726d chore: archive unused test scripts and temporary documents
- Moved 6 empty test files (*.test.ts) to archive/
- Moved 5 Python test scripts (*.py) to archive/
- Moved 5 outdated/temporary documents to archive/
- Cleaned up root directory for better organization
2026-01-24 21:26:03 +08:00
catlog22
862365ffaf chore: remove Codex Subagent usage guidelines document 2026-01-24 21:17:59 +08:00
catlog22
b8c807b2f9 feat: add parent/child directory lookup for ccw cli output
- Implement findProjectWithExecution() to search upward through parent directories
- Add automatic project path discovery in outputAction
- Support explicit --project parameter for manual path specification
- Improve error messages with search scope indication
- Display project path in formatted output
- Enable cross-directory execution without working directory dependency
2026-01-24 21:17:21 +08:00
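A sketch of the upward search `findProjectWithExecution()` performs; treating a `.workflow` directory as the project marker is an assumption for illustration:

```ts
import * as fs from 'fs';
import * as path from 'path';

// Walk upward from startDir through parent directories, returning the
// first directory that looks like a CCW project with execution data.
function findProjectWithExecution(startDir: string): string | null {
  let dir = path.resolve(startDir);
  while (true) {
    if (fs.existsSync(path.join(dir, '.workflow'))) {
      return dir; // found a project root
    }
    const parent = path.dirname(dir);
    if (parent === dir) return null; // reached the filesystem root
    dir = parent;
  }
}
```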
catlog22
7ea6362c50 docs: add Level 5 workflow guide with CCW Coordinator and decision flowchart
- Add "Level 5: 智能编排 (CCW Coordinator)" chapter to WORKFLOW_GUIDE_CN.md
- Integrate 6.2.8 version full lifecycle command decision flowchart (Mermaid)
- Document minimum execution units (最小执行单元) for planning, testing, and review
- Explain 3-phase workflow: requirements analysis, command discovery & recommendation, sequential execution
- Include command port system for dynamic command chain assembly
- Add state file structure (.workflow/.ccw-coordinator/{session_id}/state.json)
- Document 3 typical scenarios: simple feature, bug fix, complex feature development
- Update overview diagram to show Level 5 with automation progression
- Update workflow selection guide, decision flowchart, and summary table
- Add Level 5 relationship documentation to other workflow levels
2026-01-24 21:15:44 +08:00
catlog22
b435391f17 docs: add /ccw and /ccw-coordinator as recommended commands
- Add prominent section highlighting /ccw and /ccw-coordinator as main features
- /ccw: Auto workflow orchestrator for general tasks
- /ccw-coordinator: Smart orchestrator with intelligent recommendations
- Include comparison table, quick examples, and key differences
- Update both English and Chinese READMEs
2026-01-24 15:12:33 +08:00
catlog22
88ff109ac4 chore: bump version to 6.3.48 2026-01-24 15:06:36 +08:00
catlog22
261196a804 docs: update CCW CLI commands with recommended commands and usage examples 2026-01-24 15:05:37 +08:00
catlog22
ea6cb8440f chore: bump version to 6.3.47
- Update ccw-coordinator.md with clarified CLI execution format
- Command-first prompt structure: /workflow:<command> -y <parameters>
- Simplified documentation with universal prompt template
- Clarify that -y is a prompt parameter, not a ccw cli parameter
2026-01-24 14:52:09 +08:00
catlog22
bf896342f4 refactor: adjust prompt structure for command execution clarity 2026-01-24 14:51:05 +08:00
catlog22
f2b0a5bbc9 Refactor code structure and remove redundant changes 2026-01-24 14:47:47 +08:00
catlog22
cf5fecd66d fix(codex-lens): resolve installation issues from frontend
- Add missing README.md file required by setuptools
- Fix deprecated license format in pyproject.toml (use SPDX string instead of TOML table)
- Add MIT LICENSE file for proper packaging
- Verified successful local installation and import

Fixes permission denied error during npm-based installation on macOS
2026-01-24 14:43:39 +08:00
catlog22
86d469ccc9 build: exclude test files from TypeScript compilation 2026-01-24 14:35:05 +08:00
catlog22
357d3524f5 chore: bump version to 6.3.46 2026-01-24 14:31:29 +08:00
catlog22
4334162ddf refactor: remove unused command definitions from ccw-coordinator 2026-01-24 14:29:51 +08:00
catlog22
2dcd1637f0 refactor: enhance documentation on Minimum Execution Units and command grouping in CCW 2026-01-24 14:27:58 +08:00
catlog22
38e1cdc737 chore(release): publish 6.3.45
## Features

- New `ccw` command: Main process workflow orchestrator with auto intent-based workflow selection
- New CommandRegistry for dynamic command discovery and metadata management

## Improvements

- Optimize ccw-coordinator: Serial blocking execution model with hook-based continuation
- Refactor execution flow: Stop after CLI launch, wait for hook callbacks (no polling)
- Add task_id tracking and state.json checkpoints for resumable execution
- Consolidate documentation: Reduce report verbosity while maintaining all core information

## Documentation

- Add Execution Model comparison (main process vs external CLI)
- Add State Management section with TodoWrite tracking examples
- Update Type Comparison table highlighting ccw vs ccw-coordinator differences
- Simplify code examples with inline comments

## Changes Summary

- ccw-coordinator.md: +272/-26 (serial blocking), -143 docs (consolidation)
- ccw.md: +121/-352 (state management, execution model)
- Rename: CCW-COORDINATOR.md → ccw-coordinator.md (lowercase)
2026-01-24 14:09:52 +08:00
catlog22
097a7346b9 refactor: optimize ccw.md with streamlined documentation and state management
- Add Execution Model section (Synchronous vs Async blocking comparison)
- Add State Management section (TodoWrite-based tracking)
- Simplify Phase 1-5 code (remove verbose comments, consolidate logic)
- Consolidate Pipeline Examples into table format (5 examples → 1 table)
- Update Type Comparison table (highlight ccw vs ccw-coordinator differences)
- Maintain all core information (no content loss)

Changes:
- -352 lines (verbose explanations, redundant code)
- +121 lines (consolidated content, new sections)
- net: -231 lines (35% reduction: 665→433 lines)

Key additions:
- Execution Model flow diagram
- State Management with TodoWrite example
- Type Comparison: Synchronous (main) vs Async (external CLI)
2026-01-24 14:06:31 +08:00
catlog22
9df8063fbd refactor: reduce documentation report, consolidate overlapping content
- Eliminate redundant Stop-Action explanations (moved to CLI Execution Model)
- Remove verbose hook/error handling examples (keep in code only)
- Consolidate 5-step CLI example into 1-line pattern
- Simplify handleCliCompletion function comments
- Streamline executor loop exit notes
- Maintain all core information (no content loss)
- Reduce report from ~1000 lines to ~900 lines

Changes:
- -143 lines (old verbose explanations)
- +21 lines (consolidated content)
- net: -122 lines
2026-01-24 14:00:34 +08:00
catlog22
d00f0bc7ca refactor: improve CCW orchestrator with serial blocking execution and hook-based continuation
- Rename file to lowercase: CCW-COORDINATOR.md → ccw-coordinator.md
- Replace polling waitForTaskCompletion with stop-action blocking model
- CLI commands execute in background with immediate stop (no polling)
- Hook callbacks (handleCliCompletion) trigger continuation to next command
- Add task_id and completed_at fields to execution_results
- Maintain state checkpoint after each command launch
- Add status flow documentation (running → waiting → completed)
- Include CLI invocation example with hook configuration
- Separate concerns: orchestrator launches, hooks handle callbacks
- Support serial execution: one command at a time with break after launch
2026-01-24 13:57:08 +08:00
catlog22
24efef7f17 feat: Add main workflow orchestrator (ccw) with intent analysis and command execution
- Implemented the ccw command as a main workflow orchestrator.
- Added a 5-phase workflow including intent analysis, requirement clarification, workflow selection, user confirmation, and command execution.
- Developed functions for analyzing user input, selecting workflows, and executing command chains.
- Integrated TODO tracking for command execution progress.
- Created comprehensive tests for the CommandRegistry, covering YAML parsing, command retrieval, and error handling.
2026-01-24 13:43:47 +08:00
catlog22
44b8269a74 feat: add CommandRegistry for command management and direct imports 2026-01-24 13:29:50 +08:00
catlog22
dd51837bbc Enhance CCW Coordinator: Refactor command execution flow, improve prompt generation, and update documentation
- Refactored the command execution process to support dynamic command chaining and intelligent prompt generation.
- Updated the architecture overview to reflect changes in the orchestrator and command execution logic.
- Improved the prompt generation strategy to directly include complete command calls, enhancing clarity and usability.
- Added detailed examples and templates for command prompts in the documentation.
- Enhanced error handling and user decision-making during command execution failures.
- Introduced logging for command execution details and state updates for better traceability.
- Updated specifications and README files to align with the new command execution and prompt generation logic.
2026-01-24 12:44:40 +08:00
catlog22
a17edc3e50 chore(release): publish 6.3.44 2026-01-24 11:29:39 +08:00
catlog22
01ab3cf3fa feat: enhance tdd-verify command with detailed compliance reporting and validation improvements 2026-01-24 11:10:31 +08:00
catlog22
a2c1b9b47c fix: replace hardcoded Windows paths with dynamic cross-platform paths in CodexLens error messages
- Remove hardcoded Windows paths (D:\Claude_dms3\codex-lens) that were displayed to macOS/Linux users
- Generate dynamic possible paths list based on runtime environment
- Support multiple installation locations (cwd, project root, home directory)
- Improve error messages with platform-appropriate paths
- Maintain consistency across both bootstrapWithUv() and installSemanticWithUv() functions

Fixes remaining issue from #104 regarding cross-platform error message compatibility
2026-01-24 11:08:02 +08:00
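A sketch of generating the candidate paths from the runtime environment instead of a hardcoded Windows path; the locations mirror the commit bullets (cwd, project root, home directory), and the helper name is invented:

```ts
import * as os from 'os';
import * as path from 'path';

// Build platform-appropriate candidate codex-lens installation paths
// for display in error messages.
function possibleCodexLensPaths(projectRoot: string): string[] {
  return [
    path.join(process.cwd(), 'codex-lens'),
    path.join(projectRoot, 'codex-lens'),
    path.join(os.homedir(), 'codex-lens'),
  ];
}

// Error message sketch:
// `CodexLens not found. Checked:\n${possibleCodexLensPaths(root).join('\n')}`
```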
catlog22
780e118844 fix: resolve CodexLens installation failure with NPM global install
Implements two-pass search strategy to support CodexLens in NPM global installations. Fixes issue #104.
2026-01-24 10:52:07 +08:00
catlog22
159dfd179e Refactor action plan verification command to plan verification
- Updated all references from `/workflow:action-plan-verify` to `/workflow:plan-verify` across various documentation and command files.
- Introduced a new command file for `/workflow:plan-verify` that performs read-only verification analysis on planning artifacts.
- Adjusted command relationships and help documentation to reflect the new command structure.
- Ensured consistency in command usage throughout the workflow guide and getting started documentation.
2026-01-24 10:46:15 +08:00
catlog22
6c80168612 feat: enhance project root detection with caching and debug logging 2026-01-24 10:04:04 +08:00
catlog22
a293a01d85 feat: add --yes flag for auto-confirmation across multiple workflows
- Enhanced lite-execute, lite-fix, lite-lite-lite, lite-plan, multi-cli-plan, plan, replan, session complete, session solidify, and various UI design commands to support a --yes or -y flag for skipping user confirmations and auto-selecting defaults.
- Updated argument hints and examples to reflect new auto mode functionality.
- Implemented auto mode defaults for confirmation, execution methods, and code review options.
- Improved error handling and validation in command parsing and execution processes.
2026-01-24 09:23:24 +08:00
jerry
ab259b1970 fix: resolve CodexLens installation failure with NPM global install
- Implement two-pass search strategy for codex-lens path detection
- First pass: prefer non-node_modules paths (development environment)
- Second pass: allow node_modules paths (NPM global install)
- Fixes CodexLens installation for all NPM global install users
- No breaking changes, maintains backward compatibility

Resolves issue where NPM global install users could not install CodexLens
because the code rejected paths containing /node_modules/, which is the
only valid location for codex-lens in NPM installations.

Tested on macOS with Node.js v22.18.0 via NPM global install.
2026-01-24 08:58:44 +08:00
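The two-pass strategy might look like the sketch below: pass 1 prefers candidates outside node_modules (development checkouts), pass 2 accepts node_modules paths (NPM global installs). Function and variable names are illustrative:

```ts
import * as fs from 'fs';

// Two-pass search for a codex-lens directory:
// pass 1 prefers development paths outside node_modules,
// pass 2 accepts node_modules paths (NPM global installs).
function findCodexLens(candidates: string[]): string | null {
  const existing = candidates.filter((p) => fs.existsSync(p));
  const devPath = existing.find((p) => !p.includes('node_modules'));
  if (devPath) return devPath; // pass 1: development environment
  return existing[0] ?? null;  // pass 2: allow node_modules paths
}
```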
catlog22
fd50adf581 feat: Update command validation tools and improve README documentation 2026-01-24 08:41:32 +08:00
catlog22
24a28f289d refactor: Rename command-registry.js to command-registry.cjs and update references 2026-01-23 23:54:08 +08:00
catlog22
e727a07fc5 feat: Implement CCW Coordinator for interactive command orchestration
- Add action files for session management, command selection, building, execution, and completion.
- Introduce orchestrator logic to drive state transitions and action selection.
- Create state schema to define session state structure.
- Develop command registry and validation tools for command metadata extraction and chain validation.
- Establish skill configuration and specifications for command library and validation rules.
- Implement tools for command registry and chain validation with CLI support.
2026-01-23 23:39:16 +08:00
catlog22
8179472e56 fix: auto-sync CLI tools availability on first config creation (Issue #95)
**Problem**:
After a fresh CCW install, all CLI tools in the default config have enabled: true, but users may not actually have these tools installed, so task execution attempts to invoke missing tools and fails.

**Root Cause**:
- All tools in DEFAULT_TOOLS_CONFIG default to enabled: true
- Tool availability is not detected when the config is first created
- The existing syncBuiltinToolsAvailability() only runs when manually triggered by the user

**Fix**:
1. Add ensureClaudeCliToolsAsync() async variant
   - Automatically calls syncBuiltinToolsAvailability() after creating the default config
   - Detects actual tool availability via which/where commands
   - Adjusts enabled status based on detection results

2. Update two key API endpoints to use the new function
   - /api/cli/endpoints - list API endpoints
   - /api/cli/tools-config - get CLI tool configuration

**Result**:
- Missing tools are detected and disabled automatically on first install
- Errors from invoking unavailable tools are avoided
- Users see accurate tool status in the Dashboard

Fixes #95
2026-01-23 23:20:58 +08:00
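A sketch of the availability probe: resolve each tool with `which` (POSIX) or `where` (Windows) and adjust `enabled` accordingly; the config shape is simplified:

```ts
import { execFileSync } from 'child_process';

interface ToolConfig { name: string; enabled: boolean; }

// Return true if the command resolves on PATH via which/where.
function isToolAvailable(command: string): boolean {
  const probe = process.platform === 'win32' ? 'where' : 'which';
  try {
    execFileSync(probe, [command], { stdio: 'ignore' });
    return true;
  } catch {
    return false;
  }
}

// Disable tools whose binaries are not actually installed.
function syncBuiltinToolsAvailability(tools: ToolConfig[]): ToolConfig[] {
  return tools.map((t) => ({ ...t, enabled: t.enabled && isToolAvailable(t.name) }));
}
```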
catlog22
277b3f86f1 feat: Enhance TDD workflow with specialized executor and optimized task generation
- Create tdd-developer.md: Specialized TDD agent with Red-Green-Refactor awareness
  - Full TDD metadata parsing (tdd_workflow, max_iterations, cli_execution)
  - Green phase Test-Fix Cycle with automatic diagnosis and repair
  - CLI session resumption strategies (new/resume/fork/merge_fork)
  - Auto-revert safety mechanism when max_iterations reached

- Optimize task-generate-tdd.md: Enhanced task generation with CLI support
  - Phase 0: User configuration questionnaire (materials, execution method, CLI tool)
  - Phase 1: Progressive loading strategy (Core → Selective → On-Demand)
  - CLI Execution ID management with dependency-based strategy selection
  - Fixed task limit to 18 (consistent with system-wide limit)
  - Fixed double-slash path issues in output structure
  - Enhanced tdd_cycles schema documentation with full structure
  - Unified resume_from type documentation (string | string[])

- Update tdd-plan.md: Workflow orchestrator improvements
  - Phase 0 user configuration details
  - Enhanced validation rules for CLI execution IDs
  - Updated error handling for 18-task limit

Validated by Gemini CLI analysis - complete execution chain compatibility confirmed.
2026-01-23 23:01:56 +08:00
catlog22
7a6f4c3f22 chore: bump version to 6.3.43 - fix parallel-dev-cycle documentation inconsistencies 2026-01-23 17:52:12 +08:00
catlog22
2f32d08d87 feat: Update documentation and file references for changes.log in parallel development cycle 2026-01-23 17:51:21 +08:00
catlog22
79d20add43 feat: Enhance Code Developer and Requirements Analyst agents with proactive debugging and self-enhancement strategies 2026-01-23 17:41:17 +08:00
catlog22
f363c635f5 feat: Enhance issue loading with intelligent grouping for batch processing 2026-01-23 17:03:27 +08:00
catlog22
61e3747768 feat: Add batch solutions endpoint (ccw issue solutions)
- Add solutionsAction() to query all bound solutions in one call
- Reduces O(N) queries to O(1) for queue formation
- Update /issue:queue command to use new endpoint
- Performance: 18 individual queries → 1 batch query

Version: 6.3.42
2026-01-23 16:56:08 +08:00
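A sketch of the O(N)-to-O(1) change: instead of one lookup per issue during queue formation, fetch all bound solutions once and filter in memory. The store interface is a stand-in for the real issue database:

```ts
interface Solution { issueId: string; plan: string; }

// Hypothetical store interface standing in for the real issue database.
interface SolutionStore {
  getSolution(issueId: string): Solution | undefined;
  getAllSolutions(): Solution[];
}

// Before: one query per issue (O(N) round trips during queue formation).
function solutionsForIssuesSlow(store: SolutionStore, issueIds: string[]): Solution[] {
  return issueIds
    .map((id) => store.getSolution(id))
    .filter((s): s is Solution => s !== undefined);
}

// After: one batch query, then filter in memory (O(1) round trips).
function solutionsAction(store: SolutionStore, issueIds: string[]): Solution[] {
  const wanted = new Set(issueIds);
  return store.getAllSolutions().filter((s) => wanted.has(s.issueId));
}
```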
catlog22
54ec6a7c57 feat: Enhance issue management with batch processing and solution querying
- Updated issue loading process to create batches based on size (max 3 per batch).
- Removed semantic grouping in favor of simple size-based batching.
- Introduced a new command to batch query solutions for multiple issues.
- Improved solution loading to fetch all planned issues with bound solutions in a single call.
- Added detailed handling for multi-solution issues, including user selection for binding.
- Created a new workflow command for multi-agent development with documented progress and incremental iteration support.
- Added .gitignore for ace-tool directory to prevent unnecessary files from being tracked.
2026-01-23 16:55:10 +08:00
catlog22
d6a3da2084 chore: bump version to 6.3.41 2026-01-23 12:40:01 +08:00
catlog22
b9f17f0fcf fix: Add required 'name' field to Codex skill YAML frontmatter
According to OpenAI Codex skill specification, SKILL.md files must have both
'name' and 'description' fields in YAML frontmatter. Added missing 'name' field
to all three skills:
- CCW Loop
- CCW Loop-B
- Parallel Dev Cycle

Also enhanced ccw-loop description with Chinese trigger keywords for better
multi-language support.
2026-01-23 12:38:45 +08:00
catlog22
88eb42f65b chore: bump version to 6.3.40 2026-01-23 10:23:43 +08:00
catlog22
b1ac0cf8ff feat: Add communication optimization and coordination protocol for multi-agent system
- Introduced a new specification for agent communication optimization focusing on file references instead of content transfer to enhance efficiency and reduce message size.
- Established a coordination protocol detailing communication channels, message formats, and dependency resolution strategies among agents (RA, EP, CD, VAS).
- Created a unified progress format specification for all agents, standardizing documentation structure and versioning practices.
2026-01-23 10:04:31 +08:00
catlog22
09eeb84cda chore: bump version to 6.3.39 2026-01-22 23:39:02 +08:00
catlog22
2fb1d1243c feat: prioritize user config, do not merge default tools
Changed loadClaudeCliTools() to only load tools explicitly defined
in user config. Previously, DEFAULT_TOOLS_CONFIG.tools was spread
before user tools, causing all default tools to be loaded even if
not present in user config.

User config now has complete control over which tools are loaded.
2026-01-22 23:37:42 +08:00
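A sketch of the `loadClaudeCliTools()` behavior change: defaults are no longer spread underneath user tools, so only tools explicitly present in user config load. Shapes are simplified:

```ts
interface ToolEntry { enabled: boolean; primaryModel?: string; }
type ToolsMap = Record<string, ToolEntry>;

const DEFAULT_TOOLS_CONFIG: { tools: ToolsMap } = {
  tools: { gemini: { enabled: true }, codex: { enabled: true }, qwen: { enabled: true } },
};

// Before: defaults were merged in, so every default tool loaded even
// when absent from user config.
function loadToolsOld(userTools: ToolsMap): ToolsMap {
  return { ...DEFAULT_TOOLS_CONFIG.tools, ...userTools };
}

// After: user config alone decides which tools exist.
function loadTools(userTools: ToolsMap): ToolsMap {
  return { ...userTools };
}
```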
catlog22
ac62bf70db fix: preserve envFile in ensureToolTags merge function
The ensureToolTags() function was only returning enabled, primaryModel,
secondaryModel, and tags - missing envFile. This caused envFile to be
lost during config merge in loadClaudeCliTools().

Related to #96 - gemini envFile setting lost after page refresh
2026-01-22 23:35:33 +08:00
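A sketch of the fix: `ensureToolTags()` rebuilt each entry from a fixed field list and dropped `envFile`; carrying the field through preserves the setting across config merges. Types are simplified:

```ts
interface CliTool {
  enabled: boolean;
  primaryModel?: string;
  secondaryModel?: string;
  tags?: string[];
  envFile?: string | null;
}

function ensureToolTags(tool: CliTool): CliTool {
  return {
    enabled: tool.enabled,
    primaryModel: tool.primaryModel,
    secondaryModel: tool.secondaryModel,
    tags: tool.tags ?? [],
    envFile: tool.envFile ?? null, // previously omitted, losing the setting on merge
  };
}
```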
catlog22
edb55c4895 fix: include envFile in getFullConfigResponse API response
Fixes #96 - gemini/qwen envFile setting was lost after page refresh
because getFullConfigResponse() was not including the envFile field
when converting config to the legacy API format.

Changes:
- Add envFile?: string | null to CliToolConfig interface
- Include envFile in getFullConfigResponse() conversion
2026-01-22 23:30:01 +08:00
catlog22
8a7f636a85 feat: Refactor intelligent cleanup command for clarity and efficiency 2026-01-22 23:25:34 +08:00
catlog22
97ab82628d Add intelligent cleanup command with mainline detection and artifact discovery
- Introduced the `/workflow:clean` command for intelligent code cleanup.
- Implemented mainline detection to identify active development branches and core modules.
- Added drift analysis to discover stale sessions, abandoned documents, and dead code.
- Included safe execution features with staged deletion and confirmation.
- Documented usage, execution process, and implementation details in `clean.md`.
2026-01-22 23:19:54 +08:00
catlog22
be89552b0a feat: Add ccw-loop-b hybrid orchestrator skill with specialized workers
Create new ccw-loop-b skill implementing coordinator + workers architecture:

**Skill Structure**:
- SKILL.md: Entry point with three execution modes (interactive/auto/parallel)
- phases/state-schema.md: Unified state structure
- specs/action-catalog.md: Complete action reference

**Worker Agents**:
- ccw-loop-b-init.md: Session initialization and task breakdown
- ccw-loop-b-develop.md: Code implementation and file operations
- ccw-loop-b-debug.md: Root cause analysis and problem diagnosis
- ccw-loop-b-validate.md: Testing, coverage, and quality checks
- ccw-loop-b-complete.md: Session finalization and commit preparation

**Execution Modes**:
- Interactive: Menu-driven, user selects actions
- Auto: Predetermined sequential workflow
- Parallel: Concurrent worker execution with batch wait

**Features**:
- Flexible coordination patterns (single/multi-agent/hybrid)
- Batch wait API for parallel execution
- Unified state management (.loop/ directory)
- Per-worker progress tracking
- No Claude/Codex comparison content (follows new guidelines)

Follows updated design principles:
- Content independence (no framework comparisons)
- Mode flexibility (no over-constraining)
- Coordinator pattern with specialized workers
2026-01-22 23:10:43 +08:00
catlog22
df25b43884 fix(install): change default for Git Bash multi-line prompt fix to false 2026-01-22 22:59:22 +08:00
catlog22
04cd536da5 chore: bump version to 6.3.38 2026-01-22 22:54:42 +08:00
catlog22
9a3608173a feat: Add multi-perspective issue discovery and structured issue creation
- Implemented issue discovery prompt to analyze code from various perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices).
- Created structured issue generation prompt from GitHub URLs or text descriptions, including clarity detection and optional clarification questions.
- Introduced CCW Loop-B hybrid orchestrator pattern for iterative development, featuring a coordinator and specialized workers with batch wait support.
- Defined state management, session structure, and output schemas for the CCW Loop-B workflow.
- Added error handling and best practices documentation for the new features.
2026-01-22 22:53:05 +08:00
catlog22
f5b6bb97bc feat(issue-manager): update queue status display logic in renderQueueCard function 2026-01-22 22:33:44 +08:00
catlog22
2819f3597f feat: Add validation action and orchestrator for CCW Loop
- Implemented the VALIDATE action to run tests, check coverage, and generate reports.
- Created orchestrator for managing CCW Loop execution using Codex subagent pattern.
- Defined state schema for unified loop state management.
- Updated action catalog with new actions and their specifications.
- Enhanced CLI and issue routes to support new features and data structures.
- Improved documentation for Codex subagent design principles and action flow.
2026-01-22 22:32:37 +08:00
catlog22
c0c1a2eb92 fix(dashboard): make showHidden state match checkbox checked state
Fixes #97 - File browser "Show Hidden Files" checkbox appeared checked
but hidden files weren't displayed until the checkbox was toggled.

Root cause: Timing mismatch where loadFileBrowserDirectory() was called
before initFileBrowserEvents(), causing the initial API request to send
showHidden: false while the checkbox was checked.

Fix: Initialize fileBrowserState.showHidden = true in showFileBrowserModal()
to match the checkbox's default checked state.
2026-01-22 22:21:54 +08:00
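A sketch of the timing fix: align the state flag with the checkbox's default-checked markup before the first directory load fires. The endpoint and selector names are assumptions:

```ts
const fileBrowserState = { showHidden: false, path: '/' };

function loadFileBrowserDirectory(dirPath: string): void {
  // Hypothetical endpoint; the flag must match the visible checkbox state.
  fetch(`/api/files?path=${encodeURIComponent(dirPath)}&showHidden=${fileBrowserState.showHidden}`);
}

function initFileBrowserEvents(): void {
  document.querySelector<HTMLInputElement>('#show-hidden')?.addEventListener('change', (e) => {
    fileBrowserState.showHidden = (e.target as HTMLInputElement).checked;
    loadFileBrowserDirectory(fileBrowserState.path);
  });
}

function showFileBrowserModal(): void {
  // Checkbox defaults to checked in the markup, so align state BEFORE the
  // initial load; otherwise the first request sends showHidden: false.
  fileBrowserState.showHidden = true;
  loadFileBrowserDirectory(fileBrowserState.path);
  initFileBrowserEvents();
}
```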
catlog22
012197a861 Remove the Plan-A vs Plan-B comparison section to simplify the documentation 2026-01-22 21:42:20 +08:00
catlog22
407b2e6930 Add lightweight interactive planning workflows: Lite-Plan-B and Lite-Plan-C
- Introduced Lite-Plan-B for hybrid mode planning with multi-agent parallel exploration and primary agent merge/clarify/plan capabilities.
- Added Lite-Plan-C for Codex subagent orchestration, featuring intelligent task analysis, parallel exploration, and adaptive planning based on task complexity.
- Both workflows include detailed execution processes, session setup, and structured output formats for exploration and planning results.
2026-01-22 21:40:57 +08:00
catlog22
6428febdf6 Add universal-executor agent and enhance Codex subagent documentation
- Introduced a new agent: universal-executor, designed for versatile task execution across various domains with a systematic approach.
- Added comprehensive documentation for Codex subagents, detailing core architecture, API usage, lifecycle management, and output templates.
- Created a new markdown file for Codex subagent usage guidelines, emphasizing parallel processing and structured deliverables.
- Updated codex_prompt.md to clarify the deprecation of custom prompts in favor of skills for reusable instructions.
2026-01-22 20:41:37 +08:00
catlog22
9f9ef1d054 Refactor code structure for improved readability and maintainability 2026-01-22 18:22:38 +08:00
catlog22
ea04663035 fix(multi-cli): populate multiCliPlan sessions in liteTaskDataStore
Fix task click handlers not working in multi-CLI planning detail page.

Root cause: liteTaskDataStore was not being populated with multiCliPlan
sessions during initialization, so task click handlers couldn't access
session data using currentSessionDetailKey.

Changes:
- navigation.js: Add code to populate multiCliPlan sessions in liteTaskDataStore
- notifications.js: Add code to populate multiCliPlan sessions when data refreshes

Now when task detail page loads, liteTaskDataStore contains the correct key
'multi-cli-${sessionId}' matching currentSessionDetailKey, allowing task
click handlers to find session data and open detail drawer.

Verified: Task clicks now properly open detail panel for all 7 tasks.
2026-01-22 15:41:01 +08:00
catlog22
f0954b3247 fix(lite-execute): pass project-guidelines.json to execution phase
Ensure buildExecutionPrompt includes project constraints reference,
maintaining consistency with full workflow (plan + execute) which passes
project_guidelines via context-package.json. This allows execution phase
to respect user-defined constraints (via /workflow:session:solidify).
2026-01-22 15:30:36 +08:00
catlog22
2fffe78dc9 fix(multi-cli): complete solution details display in summary tab (#98)
Fixed issue where multi-CLI planning solution cards only showed count,
feasibility, effort, and risk badges but had empty content area.

Changes:
- Enhanced renderMultiCliSummaryContent() to extract and display all solution fields
  - Solution name (name/title)
  - Feasibility score (feasibility)
  - Effort level (effort)
  - Risk level (risk)
  - Summary/description (summary)
  - Pros list (pros)
  - Cons list (cons)

- Added CSS styles for solution cards
  - .solution-details, .details-label, .details-list
  - .solution-header, .solution-title-row, .solution-badges
  - .badge with variants for feasibility/effort/risk

- Fixed related issues:
  - Added multiCliPlan support to backend data structures
  - Exposed liteTaskDataStore to window for global access
  - Fixed header left-alignment in detail pages
  - Added 'active' class to tab content for visibility

Files modified:
- ccw/src/templates/dashboard-js/views/lite-tasks.js
- ccw/src/templates/dashboard-css/04-lite-tasks.css
- ccw/src/core/server.ts
- ccw/src/core/routes/system-routes.ts
- ccw/src/templates/dashboard-js/state.js
- ccw/src/templates/dashboard-css/02-session.css
- ccw/src/config/litellm-api-config-manager.ts (fix homedir import)

Closes #98
2026-01-22 15:30:35 +08:00
catlog22
02531c4d15 feat: i18n for CLI history view; fix Claude session discovery path encoding
## Changes

### i18n Localization (i18n.js, cli-history.js)
- Add 60+ translation keys for CLI execution history and conversation details
- Replace hardcoded English strings in cli-history.js with t() function calls
- Coverage: execution history, conversation details, statuses, tool labels, buttons, prompts, etc.

### Fix Claude Session Tracking (native-session-discovery.ts)
- Problem: Claude stores sessions using path encoding (D:\path -> D--path), but the code used a SHA256 hash, so sessions could not be discovered
- Solution:
  - Add encodeClaudeProjectPath() function for path encoding
  - Update ClaudeSessionDiscoverer.getSessions() to use path encoding
  - Enhance extractFirstUserMessage() to support multiple message formats (string/array)
- Result: Claude sessions are now correctly associated; the "View full process conversation" UI button should display properly

## Verification
- npm run build passes
- Claude session discovery finds 1267 sessions
- Message extraction success rate: 80%
- Path encoding verified correct
2026-01-22 14:53:38 +08:00
catlog22
5fa7524ad7 feat(loop): support external CLI tools (cli-wrapper) in task management
- Fix missing i18n translations: loop.add, loop.save, loop.cancel
- Replace hardcoded validTools with dynamic tool loading from cli-tools.json
- Support external CLI wrappers (like doubao) in task creation and updates
- Add getEnabledToolsList() helper to fetch enabled tools dynamically
- Update mapIssueToolToLoopTool() to accept any string tool name
- Update validateTool() to use dynamic tool list
- Change LoopTask.tool type from specific strings to string (accepts any tool)

This allows tasks to use any enabled CLI tool from configuration,
including builtin tools, cli-wrappers, and api-endpoints, not just
the hardcoded ['bash', 'gemini', 'codex', 'qwen', 'claude'].
2026-01-22 12:43:37 +08:00
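A sketch of the dynamic tool loading: read enabled tools from cli-tools.json rather than a hardcoded array; the file shape is inferred from the commit bullets:

```ts
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

interface CliToolsFile { tools: Record<string, { enabled: boolean }>; }

// Read cli-tools.json and return the names of all enabled tools.
function getEnabledToolsList(): string[] {
  const file = path.join(os.homedir(), '.claude', 'cli-tools.json');
  const config: CliToolsFile = JSON.parse(fs.readFileSync(file, 'utf8'));
  return Object.entries(config.tools)
    .filter(([, tool]) => tool.enabled)
    .map(([name]) => name);
}

// Validate against the dynamic list instead of a hardcoded array.
function validateTool(tool: string): boolean {
  return getEnabledToolsList().includes(tool);
}
```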
catlog22
21fbdbc55e feat(loop-monitor): add 'In Development' badge and bump to v6.3.37
- Add 'In Development' (开发中) badge to Loop Monitor navigation item
- Use yellow highlight to indicate development status
- Add i18n translations: nav.inDevelopment ('In Dev' / '开发中')
- Bump version to 6.3.37

The Loop Monitor feature is now clearly marked as under development,
helping users understand it may have limited functionality.
2026-01-22 11:59:07 +08:00
390 changed files with 87589 additions and 15969 deletions

View File

@@ -9,12 +9,7 @@
**Strictly follow the cli-tools.json configuration**
Available CLI endpoints are dynamically defined by the config file:
- Built-in tools and their enable/disable status
- Custom API endpoints registered via the Dashboard
- Managed through the CCW Dashboard Status page
Available CLI endpoints are dynamically defined by the config file
## Tool Execution
- **Context Requirements**: @~/.claude/workflows/context-tools.md
@@ -25,9 +20,12 @@ Available CLI endpoints are dynamically defined by the config file:
- **TaskOutput usage**: Only use `TaskOutput({ task_id: "xxx", block: false })` + sleep loop to poll completion status. NEVER read intermediate output during agent/CLI execution - wait for final result only
### CLI Tool Calls (ccw cli)
- **Default: `run_in_background: true`** - Unless otherwise specified, always use background execution for CLI calls:
- **Default: Use Bash `run_in_background: true`** - Unless otherwise specified, always execute CLI calls in background using Bash tool's background mode:
```
Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
Bash({
command: "ccw cli -p '...' --tool gemini",
run_in_background: true // Bash tool parameter, not ccw cli parameter
})
```
- **After CLI call**: Stop output immediately - let CLI execute in background. **DO NOT use TaskOutput polling** - wait for hook callback to receive results

View File

@@ -1,4 +0,0 @@
{
"interval": "manual",
"tool": "gemini"
}

View File

@@ -55,6 +55,17 @@ color: yellow
**Step-by-step execution**:
```
0. Load planning notes → Extract phase-level constraints (NEW)
Commands: Read('.workflow/active/{session-id}/planning-notes.md')
Output: Consolidated constraints from all workflow phases
Structure:
- User Intent: Original GOAL, KEY_CONSTRAINTS
- Context Findings: Critical files, architecture notes, constraints
- Conflict Decisions: Resolved conflicts, modified artifacts
- Consolidated Constraints: Numbered list of ALL constraints (Phase 1-3)
USAGE: This is the PRIMARY source of constraints. All task generation MUST respect these constraints.
1. Load session metadata → Extract user input
- User description: Original task/feature requirements
- Project scope: User-specified boundaries and goals
@@ -299,25 +310,22 @@ function computeCliStrategy(task, allTasks) {
**execution_config Alignment Rules** (MANDATORY):
```
userConfig.executionMethod → meta.execution_config + implementation_approach
userConfig.executionMethod → meta.execution_config
"agent" →
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
implementation_approach steps: NO command field (agent direct execution)
"hybrid" →
meta.execution_config = { method: "hybrid", cli_tool: userConfig.preferredCliTool }
implementation_approach steps: command field ONLY on complex steps
Execution: Agent executes pre_analysis, then directly implements implementation_approach
"cli" →
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool }
implementation_approach steps: command field on ALL steps
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Execution: Agent executes pre_analysis, then hands off context + implementation_approach to CLI
"hybrid" →
meta.execution_config = { method: "hybrid", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Execution: Agent decides which tasks to handoff to CLI based on complexity
```
**Consistency Check**: `meta.execution_config.method` MUST match presence of `command` fields:
- `method: "agent"` → 0 steps have command field
- `method: "hybrid"` → some steps have command field
- `method: "cli"` → all steps have command field
**Note**: implementation_approach steps NO LONGER contain `command` fields. CLI execution is controlled by task-level `meta.execution_config` only.
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
@@ -638,32 +646,6 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
"output": "implementation"
},
// === CLI MODE: Command Execution (optional command field) ===
{
"step": 3,
"title": "Execute implementation using CLI tool",
"description": "Use Codex/Gemini for complex autonomous execution",
"command": "ccw cli -p '[prompt]' --tool codex --mode write --cd [path]",
"modification_points": ["[Same as default mode]"],
"logic_flow": ["[Same as default mode]"],
"depends_on": [1, 2],
"output": "cli_implementation",
"cli_output_id": "step3_cli_id" // Store execution ID for resume
},
// === CLI MODE with Resume: Continue from previous CLI execution ===
{
"step": 4,
"title": "Continue implementation with context",
"description": "Resume from previous step with accumulated context",
"command": "ccw cli -p '[continuation prompt]' --resume ${step3_cli_id} --tool codex --mode write",
"resume_from": "step3_cli_id", // Reference previous step's CLI ID
"modification_points": ["[Continue from step 3]"],
"logic_flow": ["[Build on previous output]"],
"depends_on": [3],
"output": "continued_implementation",
"cli_output_id": "step4_cli_id"
}
]
```
@@ -785,13 +767,13 @@ Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
Use `analysis_results.complexity` or task count to determine structure:
**Single Module Mode**:
- **Simple Tasks** (≤5 tasks): Flat structure
- **Medium Tasks** (6-12 tasks): Flat structure
- **Complex Tasks** (>12 tasks): Re-scope required (maximum 12 tasks hard limit)
- **Simple Tasks** (≤4 tasks): Flat structure
- **Medium Tasks** (5-8 tasks): Flat structure
- **Complex Tasks** (>8 tasks): Re-scope required (maximum 8 tasks hard limit)
**Multi-Module Mode** (N+1 parallel planning):
- **Per-module limit**: ≤9 tasks per module
- **Total limit**: Sum of all module tasks ≤27 (3 modules × 9 tasks)
- **Per-module limit**: ≤6 tasks per module
- **Total limit**: No total limit (each module independently capped at 6 tasks)
- **Task ID format**: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- **Structure**: Hierarchical by module in IMPL_PLAN.md and TODO_LIST.md
@@ -855,6 +837,7 @@ Use `analysis_results.complexity` or task count to determine structure:
### 3.3 Guidelines Checklist
**ALWAYS:**
- **Load planning-notes.md FIRST**: Read planning-notes.md before context-package.json. Use its Consolidated Constraints as primary constraint source for all task generation
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
@@ -865,7 +848,7 @@ Use `analysis_results.complexity` or task count to determine structure:
- **Compute CLI execution strategy**: Based on `depends_on`, set `cli_execution.strategy` (new/resume/fork/merge_fork)
- Map artifacts: Use artifacts_inventory to populate task.context.artifacts array
- Add MCP integration: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- Validate task count: Maximum 12 tasks hard limit, request re-scope if exceeded
- Validate task count: Maximum 8 tasks (single module) or 6 tasks per module (multi-module), request re-scope if exceeded
- Use session paths: Construct all paths using provided session_id
- Link documents properly: Use correct linking format (📋 for JSON, ✅ for summaries)
- Run validation checklist: Verify all quantification requirements before finalizing task JSONs
@@ -879,7 +862,7 @@ Use `analysis_results.complexity` or task count to determine structure:
- Load files directly (use provided context package instead)
- Assume default locations (always use session_id in paths)
- Create circular dependencies in task.depends_on
- Exceed 12 tasks without re-scoping
- Exceed 8 tasks (single module) or 6 tasks per module (multi-module) without re-scoping
- Skip artifact integration when artifacts_inventory is provided
- Ignore MCP capabilities when available
- Use fixed pre-analysis steps without task-specific adaptation

View File

@@ -13,6 +13,8 @@ color: cyan
You are a generic planning agent that generates structured plan JSON for lite workflows. Output format is determined by the schema reference provided in the prompt. You execute CLI planning tools (Gemini/Qwen), parse results, and generate planObject conforming to the specified schema.
**CRITICAL**: After generating plan.json, you MUST execute internal **Plan Quality Check** (Phase 5) using CLI analysis to validate and auto-fix plan quality before returning to orchestrator. Quality dimensions: completeness, granularity, dependencies, acceptance criteria, implementation steps, constraint compliance.
## Input Context
@@ -72,7 +74,22 @@ Phase 4: planObject Generation
├─ Build planObject conforming to schema
├─ Assign CLI execution IDs and strategies
├─ Generate flow_control from depends_on
└─ Return to orchestrator
└─ Write initial plan.json
Phase 5: Plan Quality Check (MANDATORY)
├─ Execute CLI quality check using Gemini (Qwen fallback)
├─ Analyze plan quality dimensions:
│ ├─ Task completeness (all requirements covered)
│ ├─ Task granularity (not too large/small)
│ ├─ Dependency correctness (no circular deps, proper ordering)
│ ├─ Acceptance criteria quality (quantified, testable)
│ ├─ Implementation steps sufficiency (2+ steps per task)
│ └─ Constraint compliance (follows project-guidelines.json)
├─ Parse check results and categorize issues
└─ Decision:
├─ No issues → Return plan to orchestrator
├─ Minor issues → Auto-fix → Update plan.json → Return
└─ Critical issues → Report → Suggest regeneration
```
## CLI Command Template
@@ -734,3 +751,78 @@ function validateTask(task) {
- Skip task validation
- **Skip CLI execution ID assignment**
- **Ignore schema structure**
- **Skip Phase 5 Plan Quality Check**
---
## Phase 5: Plan Quality Check (MANDATORY)
### Overview
After generating plan.json, **MUST** execute CLI quality check before returning to orchestrator. This is a mandatory step for ALL plans regardless of complexity.
### Quality Dimensions
| Dimension | Check Criteria | Critical? |
|-----------|---------------|-----------|
| **Completeness** | All user requirements reflected in tasks | Yes |
| **Task Granularity** | Each task 15-60 min scope | No |
| **Dependencies** | No circular deps, correct ordering | Yes |
| **Acceptance Criteria** | Quantified and testable (not vague) | No |
| **Implementation Steps** | 2+ actionable steps per task | No |
| **Constraint Compliance** | Follows project-guidelines.json | Yes |
### CLI Command Format
Use `ccw cli` with analysis mode to validate plan against quality dimensions:
```bash
ccw cli -p "Validate plan quality: completeness, granularity, dependencies, acceptance criteria, implementation steps, constraint compliance" \
--tool gemini --mode analysis \
--context "@{plan_json_path} @.workflow/project-guidelines.json"
```
**Expected Output Structure**:
- Quality Check Report (6 dimensions with pass/fail status)
- Summary (critical/minor issue counts)
- Recommendation: `PASS` | `AUTO_FIX` | `REGENERATE`
- Fixes (JSON patches if AUTO_FIX)
### Result Parsing
Parse CLI output sections using regex to extract:
- **6 Dimension Results**: Each with `passed` boolean and issue lists (missing requirements, oversized/undersized tasks, vague criteria, etc.)
- **Summary Counts**: Critical issues, minor issues
- **Recommendation**: `PASS` | `AUTO_FIX` | `REGENERATE`
- **Fixes**: Optional JSON patches for auto-fixable issues
### Auto-Fix Strategy
Apply automatic fixes for minor issues:
| Issue Type | Auto-Fix Action | Example |
|-----------|----------------|---------|
| **Vague Acceptance** | Replace with quantified criteria | "works correctly" → "All unit tests pass with 100% success rate" |
| **Insufficient Steps** | Expand to 4-step template | Add: Analyze → Implement → Error handling → Verify |
| **CLI-Provided Patches** | Apply JSON patches from CLI output | Update task fields per patch specification |
After fixes, update `_metadata.quality_check` with fix log.
### Execution Flow
After Phase 4 planObject generation:
1. **Write Initial Plan** → `${sessionFolder}/plan.json`
2. **Execute CLI Check** → Gemini (Qwen fallback)
3. **Parse Results** → Extract recommendation and issues
4. **Handle Recommendation**:
| Recommendation | Action | Return Status |
|---------------|--------|---------------|
| `PASS` | Log success, add metadata | `success` |
| `AUTO_FIX` | Apply fixes, update plan.json, log fixes | `success` |
| `REGENERATE` | Log critical issues, add issues to metadata | `needs_review` |
5. **Return** → Plan with `_metadata.quality_check` containing execution result
**CLI Fallback**: Gemini → Qwen → Skip with warning (if both fail)

View File

@@ -186,34 +186,150 @@ output → Variable name to store this step's result
**Execution Flow**:
```
// Read task-level execution config (Single Source of Truth)
const executionMethod = task.meta?.execution_config?.method || 'agent';
const cliTool = task.meta?.execution_config?.cli_tool || getDefaultCliTool(); // See ~/.claude/cli-tools.json

// Phase 1: Execute pre_analysis (always by Agent)
const preAnalysisResults = {};
for (const step of task.flow_control.pre_analysis || []) {
  const result = executePreAnalysisStep(step);
  preAnalysisResults[step.output_to] = result;
}

// Phase 2: Determine execution mode
const hasLegacyCommands = task.flow_control.implementation_approach
  .some(step => step.command);

IF hasLegacyCommands:
  // Backward compatibility: Old mode with step.command fields
  FOR each step in implementation_approach[]:
    IF step.command exists:
      → Execute via Bash: Bash({ command: step.command, timeout: 3600000 })
    ELSE:
      → Agent direct implementation

ELSE IF executionMethod === 'cli':
  // New mode: CLI Handoff
  → const cliPrompt = buildCliHandoffPrompt(preAnalysisResults, task)
  → const cliCommand = buildCliCommand(task, cliTool, cliPrompt)
  → Bash({ command: cliCommand, run_in_background: false, timeout: 3600000 })

ELSE IF executionMethod === 'hybrid':
  // Hybrid mode: Agent decides based on task complexity
  → IF task is complex (multiple files, complex logic):
      Use CLI Handoff (same as cli mode)
    ELSE:
      Use Agent direct implementation

ELSE (executionMethod === 'agent'):
  // Default: Agent direct implementation
  FOR each step in implementation_approach[] (ordered by step number, honoring depends_on):
    1. Variable Substitution: Replace [variable_name] in description/modification_points
       with values from preAnalysisResults
    2. Read modification_points[] as files to create/modify
    3. Read logic_flow[] as implementation sequence
    4. For each file in modification_points:
       • If "Create new file: path" → Use Write tool
       • If "Modify file: path" → Use Edit tool
       • If "Add to file: path" → Use Edit tool (append)
    5. Follow logic_flow sequence
    6. Use [focus_paths] from context as working directory scope
    7. Store result in [step.output] variable
```
**CLI Command Execution (CLI Execute Mode)**:
When a step's `command` field targets the Codex CLI, execute it via CCW CLI. For Codex session resume:
- First task (`depends_on: []`): `ccw cli -p "..." --tool codex --mode write --cd [path]`
- Subsequent tasks (has `depends_on`): Use CCW CLI with resume context to maintain session
**CLI Handoff Functions**:
```javascript
// Get default CLI tool from cli-tools.json
function getDefaultCliTool() {
// Read ~/.claude/cli-tools.json and return first enabled tool
// Fallback order: gemini → qwen → codex (first enabled in config)
return firstEnabledTool || 'gemini'; // System default fallback
}
// Build CLI prompt from pre-analysis results and task
function buildCliHandoffPrompt(preAnalysisResults, task) {
const contextSection = Object.entries(preAnalysisResults)
.map(([key, value]) => `### ${key}\n${value}`)
.join('\n\n');
const approachSection = task.flow_control.implementation_approach
.map((step, i) => `
### Step ${step.step}: ${step.title}
${step.description}
**Modification Points**:
${step.modification_points?.map(m => `- ${m}`).join('\n') || 'N/A'}
**Logic Flow**:
${step.logic_flow?.map((l, j) => `${j + 1}. ${l}`).join('\n') || 'Follow modification points'}
`).join('\n');
return `
PURPOSE: ${task.title}
Complete implementation based on pre-analyzed context.
## PRE-ANALYSIS CONTEXT
${contextSection}
## REQUIREMENTS
${task.context.requirements?.map(r => `- ${r}`).join('\n') || task.context.requirements}
## IMPLEMENTATION APPROACH
${approachSection}
## ACCEPTANCE CRITERIA
${task.context.acceptance?.map(a => `- ${a}`).join('\n') || task.context.acceptance}
## TARGET FILES
${task.flow_control.target_files?.map(f => `- ${f}`).join('\n') || 'See modification points above'}
MODE: write
CONSTRAINTS: Follow existing patterns | No breaking changes
`.trim();
}
// Build CLI command with resume strategy
function buildCliCommand(task, cliTool, cliPrompt) {
const cli = task.cli_execution || {};
const escapedPrompt = cliPrompt.replace(/"/g, '\\"');
const baseCmd = `ccw cli -p "${escapedPrompt}"`;
switch (cli.strategy) {
case 'new':
return `${baseCmd} --tool ${cliTool} --mode write --id ${task.cli_execution_id}`;
case 'resume':
return `${baseCmd} --resume ${cli.resume_from} --tool ${cliTool} --mode write`;
case 'fork':
return `${baseCmd} --resume ${cli.resume_from} --id ${task.cli_execution_id} --tool ${cliTool} --mode write`;
case 'merge_fork':
return `${baseCmd} --resume ${cli.merge_from.join(',')} --id ${task.cli_execution_id} --tool ${cliTool} --mode write`;
default:
// Fallback: no resume, no id
return `${baseCmd} --tool ${cliTool} --mode write`;
}
}
```
**Execution Config Reference** (from task.meta.execution_config):
| Field | Values | Description |
|-------|--------|-------------|
| `method` | `agent` / `cli` / `hybrid` | Execution mode (default: agent) |
| `cli_tool` | See `~/.claude/cli-tools.json` | CLI tool preference (first enabled tool as default) |
| `enable_resume` | `true` / `false` | Enable CLI session resume |
**CLI Execution Reference** (from task.cli_execution):
| Field | Values | Description |
|-------|--------|-------------|
| `strategy` | `new` / `resume` / `fork` / `merge_fork` | Resume strategy |
| `resume_from` | `{session}-{task_id}` | Parent task CLI ID (resume/fork) |
| `merge_from` | `[{id1}, {id2}]` | Parent task CLI IDs (merge_fork) |
**Resume Strategy Examples**:
- **New task** (no dependencies): `--id WFS-001-IMPL-001`
- **Resume** (single dependency, single child): `--resume WFS-001-IMPL-001`
- **Fork** (single dependency, multiple children): `--resume WFS-001-IMPL-001 --id WFS-001-IMPL-002`
- **Merge** (multiple dependencies): `--resume WFS-001-IMPL-001,WFS-001-IMPL-002 --id WFS-001-IMPL-003`
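As a usage sketch of `buildCliCommand` above (task values hypothetical), the fork strategy yields:
```javascript
// One parent (WFS-001-IMPL-001), multiple children → fork
const task = {
  title: 'Add token refresh endpoint',
  cli_execution_id: 'WFS-001-IMPL-002',
  cli_execution: { strategy: 'fork', resume_from: 'WFS-001-IMPL-001' }
};

buildCliCommand(task, 'gemini', 'PURPOSE: ...');
// → ccw cli -p "PURPOSE: ..." --resume WFS-001-IMPL-001 --id WFS-001-IMPL-002 --tool gemini --mode write
```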
**Test-Driven Development**:
- Write tests first (red → green → refactor)
@@ -389,7 +505,8 @@ Before completing any task, verify:
- Use `run_in_background=false` for all Bash/CLI calls - agent cannot receive task hook callbacks
- Set timeout ≥60 minutes for CLI commands (hooks don't propagate to subagents):
```javascript
Bash(command="ccw cli -p '...' --tool codex --mode write", timeout=3600000) // 60 min
Bash(command="ccw cli -p '...' --tool <cli-tool> --mode write", timeout=3600000) // 60 min
// <cli-tool>: First enabled tool from ~/.claude/cli-tools.json (e.g., gemini, qwen, codex)
```
**ALWAYS:**

View File

@@ -0,0 +1,530 @@
---
name: tdd-developer
description: |
TDD-aware code execution agent specialized for Red-Green-Refactor workflows. Extends code-developer with TDD cycle awareness, automatic test-fix iteration, and CLI session resumption. Executes TDD tasks with phase-specific logic and test-driven quality gates.
Examples:
- Context: TDD task with Red-Green-Refactor phases
user: "Execute TDD task IMPL-1 with test-first development"
assistant: "I'll execute the Red-Green-Refactor cycle with automatic test-fix iteration"
commentary: Parse TDD metadata, execute phases sequentially with test validation
- Context: Green phase with failing tests
user: "Green phase implementation complete but tests failing"
assistant: "Starting test-fix cycle (max 3 iterations) with Gemini diagnosis"
commentary: Iterative diagnosis and fix until tests pass or max iterations reached
color: green
extends: code-developer
tdd_aware: true
---
You are a TDD-specialized code execution agent focused on implementing high-quality, test-driven code. You receive TDD tasks with Red-Green-Refactor cycles and execute them with phase-specific logic and automatic test validation.
## TDD Core Philosophy
- **Test-First Development** - Write failing tests before implementation (Red phase)
- **Minimal Implementation** - Write just enough code to pass tests (Green phase)
- **Iterative Quality** - Refactor for clarity while maintaining test coverage (Refactor phase)
- **Automatic Validation** - Run tests after each phase, iterate on failures
## TDD Task JSON Schema Recognition
**TDD-Specific Metadata**:
```json
{
"meta": {
"tdd_workflow": true, // REQUIRED: Enables TDD mode
"max_iterations": 3, // Green phase test-fix cycle limit
"cli_execution_id": "{session}-{task}", // CLI session ID for resume
"cli_execution": { // CLI execution strategy
"strategy": "new|resume|fork|merge_fork",
"resume_from": "parent-cli-id" // For resume/fork strategies; array for merge_fork
// Note: For merge_fork, resume_from is array: ["id1", "id2", ...]
}
},
"context": {
"tdd_cycles": [ // Test cases and coverage targets
{
"test_count": 5,
"test_cases": ["case1", "case2", ...],
"implementation_scope": "...",
"expected_coverage": ">=85%"
}
],
"focus_paths": [...], // Absolute or clear relative paths
"requirements": [...],
"acceptance": [...] // Test commands for validation
},
"flow_control": {
"pre_analysis": [...], // Context gathering steps
"implementation_approach": [ // Red-Green-Refactor steps
{
"step": 1,
"title": "Red Phase: Write failing tests",
"tdd_phase": "red", // REQUIRED: Phase identifier
"description": "Write 5 test cases: [...]",
"modification_points": [...],
"command": "..." // Optional CLI command
},
{
"step": 2,
"title": "Green Phase: Implement to pass tests",
"tdd_phase": "green", // Triggers test-fix cycle
"description": "Implement N functions...",
"modification_points": [...],
"command": "..."
},
{
"step": 3,
"title": "Refactor Phase: Improve code quality",
"tdd_phase": "refactor",
"description": "Apply N refactorings...",
"modification_points": [...]
}
]
}
}
```
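A minimal concrete instance that satisfies this schema (all values hypothetical):
```json
{
  "meta": { "tdd_workflow": true, "max_iterations": 3, "cli_execution_id": "WFS-042-IMPL-001" },
  "context": {
    "tdd_cycles": [{
      "test_count": 2,
      "test_cases": ["rejects expired token", "accepts valid token"],
      "implementation_scope": "JWT expiry validation",
      "expected_coverage": ">=85%"
    }],
    "focus_paths": ["src/auth/"],
    "requirements": ["Validate JWT expiry"],
    "acceptance": ["npm test -- auth"]
  },
  "flow_control": {
    "pre_analysis": [],
    "implementation_approach": [
      { "step": 1, "title": "Red Phase: Write failing tests", "tdd_phase": "red",
        "modification_points": ["Create new file: src/auth/token.test.ts"] },
      { "step": 2, "title": "Green Phase: Implement to pass tests", "tdd_phase": "green",
        "modification_points": ["Modify file: src/auth/token.ts"] },
      { "step": 3, "title": "Refactor Phase: Improve code quality", "tdd_phase": "refactor",
        "modification_points": ["Modify file: src/auth/token.ts"] }
    ]
  }
}
```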
## TDD Execution Process
### 1. TDD Task Recognition
**Step 1.1: Detect TDD Mode**
```
IF meta.tdd_workflow == true:
→ Enable TDD execution mode
→ Parse TDD-specific metadata
→ Prepare phase-specific execution logic
ELSE:
→ Delegate to code-developer (standard execution)
```
**Step 1.2: Parse TDD Metadata**
```javascript
// Extract TDD configuration
const tddConfig = {
maxIterations: taskJson.meta.max_iterations || 3,
cliExecutionId: taskJson.meta.cli_execution_id,
cliStrategy: taskJson.meta.cli_execution?.strategy,
resumeFrom: taskJson.meta.cli_execution?.resume_from,
testCycles: taskJson.context.tdd_cycles || [],
acceptanceTests: taskJson.context.acceptance || []
}
// Identify phases
const phases = taskJson.flow_control.implementation_approach
.filter(step => step.tdd_phase)
.map(step => ({
step: step.step,
phase: step.tdd_phase, // "red", "green", or "refactor"
...step
}))
```
**Step 1.3: Validate TDD Task Structure**
```
REQUIRED CHECKS:
- [ ] meta.tdd_workflow is true
- [ ] flow_control.implementation_approach has exactly 3 steps
- [ ] Each step has tdd_phase field ("red", "green", "refactor")
- [ ] context.acceptance includes test command
- [ ] Green phase has modification_points or command
IF validation fails:
→ Report invalid TDD task structure
→ Request task regeneration with /workflow:tools:task-generate-tdd
```
### 2. Phase-Specific Execution
#### Red Phase: Write Failing Tests
**Objectives**:
- Write test cases that verify expected behavior
- Ensure tests fail (proving they test something real)
- Document test scenarios clearly
**Execution Flow**:
```
STEP 1: Parse Red Phase Requirements
→ Extract test_count and test_cases from context.tdd_cycles
→ Extract test file paths from modification_points
→ Load existing test patterns from focus_paths
STEP 2: Execute Red Phase Implementation
IF step.command exists:
→ Execute CLI command with session resume
→ Build CLI command: ccw cli -p "..." --resume {resume_from} --tool {tool} --mode write
ELSE:
→ Direct agent implementation
→ Create test files in modification_points
→ Write test cases following test_cases enumeration
→ Use context.shared_context.conventions for test style
STEP 3: Validate Red Phase (Test Must Fail)
→ Execute test command from context.acceptance
→ Parse test output
IF tests pass:
⚠️ WARNING: Tests passing in Red phase - may not test real behavior
→ Log warning, continue to Green phase
IF tests fail:
✅ SUCCESS: Tests failing as expected
→ Proceed to Green phase
```
**Red Phase Quality Gates**:
- [ ] All specified test cases written (verify count matches test_count)
- [ ] Test files exist in expected locations
- [ ] Tests execute without syntax errors
- [ ] Tests fail with clear error messages
#### Green Phase: Implement to Pass Tests (with Test-Fix Cycle)
**Objectives**:
- Write minimal code to pass tests
- Iterate on failures with automatic diagnosis
- Achieve test pass rate and coverage targets
**Execution Flow with Test-Fix Cycle**:
```
STEP 1: Parse Green Phase Requirements
→ Extract implementation_scope from context.tdd_cycles
→ Extract target files from modification_points
→ Set max_iterations from meta.max_iterations (default: 3)
STEP 2: Initial Implementation
IF step.command exists:
→ Execute CLI command with session resume
→ Build CLI command: ccw cli -p "..." --resume {resume_from} --tool {tool} --mode write
ELSE:
→ Direct agent implementation
→ Implement functions in modification_points
→ Follow logic_flow sequence
→ Use minimal code to pass tests (no over-engineering)
STEP 3: Test-Fix Cycle (CRITICAL TDD FEATURE)
FOR iteration in 1..meta.max_iterations:
STEP 3.1: Run Test Suite
→ Execute test command from context.acceptance
→ Capture test output (stdout + stderr)
→ Parse test results (pass count, fail count, coverage)
STEP 3.2: Evaluate Results
IF all tests pass AND coverage >= expected_coverage:
✅ SUCCESS: Green phase complete
→ Log final test results
→ Store pass rate and coverage
→ Break loop, proceed to Refactor phase
ELSE IF iteration < max_iterations:
⚠️ ITERATION {iteration}: Tests failing, starting diagnosis
STEP 3.3: Diagnose Failures with Gemini
→ Build diagnosis prompt:
PURPOSE: Diagnose test failures in TDD Green phase to identify root cause and generate fix strategy
TASK:
• Analyze test output: {test_output}
• Review implementation: {modified_files}
• Identify failure patterns (syntax, logic, edge cases, missing functionality)
• Generate specific fix recommendations with code snippets
MODE: analysis
CONTEXT: @{modified_files} | Test Output: {test_output}
EXPECTED: Diagnosis report with root cause and actionable fix strategy
→ Execute: Bash(
command="ccw cli -p '{diagnosis_prompt}' --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause",
timeout=300000 // 5 min
)
→ Parse diagnosis output → Extract fix strategy
STEP 3.4: Apply Fixes
→ Parse fix recommendations from diagnosis
→ Apply fixes to implementation files
→ Use Edit tool for targeted changes
→ Log changes to .process/green-fix-iteration-{iteration}.md
STEP 3.5: Continue to Next Iteration
→ iteration++
→ Repeat from STEP 3.1
ELSE: // iteration == max_iterations AND tests still failing
❌ FAILURE: Max iterations reached without passing tests
STEP 3.6: Auto-Revert (Safety Net)
→ Log final failure diagnostics
→ Revert all changes made during Green phase
→ Store failure report in .process/green-phase-failure.md
→ Report to user with diagnostics:
"Green phase failed after {max_iterations} iterations.
All changes reverted. See diagnostics in green-phase-failure.md"
→ HALT execution (do not proceed to Refactor phase)
```
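STEP 3.1's result parsing is test-runner-specific; as a minimal sketch, assuming a Jest-style summary line (`Tests: 2 failed, 3 passed, 5 total`) and an optional `All files | 87.5 |` coverage row:
```javascript
function parseTestOutput(output) {
  const summary = /Tests:\s*(?:(\d+)\s*failed,\s*)?(\d+)\s*passed,\s*(\d+)\s*total/.exec(output);
  const coverage = /All files\s*\|\s*([\d.]+)/.exec(output);
  return {
    failed: summary ? Number(summary[1] || 0) : null,
    passed: summary ? Number(summary[2]) : null,
    total: summary ? Number(summary[3]) : null,
    coveragePct: coverage ? Number(coverage[1]) : null // null if coverage tooling unavailable
  };
}
```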
**Green Phase Quality Gates**:
- [ ] All tests pass (100% pass rate)
- [ ] Coverage meets expected_coverage target (e.g., >=85%)
- [ ] Implementation follows modification_points specification
- [ ] Code compiles and runs without errors
- [ ] Fix iteration count logged
**Test-Fix Cycle Output Artifacts**:
```
.workflow/active/{session-id}/.process/
├── green-fix-iteration-1.md # First fix attempt
├── green-fix-iteration-2.md # Second fix attempt
├── green-fix-iteration-3.md # Final fix attempt
└── green-phase-failure.md # Failure report (if max iterations reached)
```
#### Refactor Phase: Improve Code Quality
**Objectives**:
- Improve code clarity and structure
- Remove duplication and complexity
- Maintain test coverage (no regressions)
**Execution Flow**:
```
STEP 1: Parse Refactor Phase Requirements
→ Extract refactoring targets from description
→ Load refactoring scope from modification_points
STEP 2: Execute Refactor Implementation
IF step.command exists:
→ Execute CLI command with session resume
ELSE:
→ Direct agent refactoring
→ Apply refactorings from logic_flow
→ Follow refactoring best practices:
• Extract functions for clarity
• Remove duplication (DRY principle)
• Simplify complex logic
• Improve naming
• Add documentation where needed
STEP 3: Regression Testing (REQUIRED)
→ Execute test command from context.acceptance
→ Verify all tests still pass
IF tests fail:
⚠️ REGRESSION DETECTED: Refactoring broke tests
→ Revert refactoring changes
→ Report regression to user
→ HALT execution
IF tests pass:
✅ SUCCESS: Refactoring complete with no regressions
→ Proceed to task completion
```
**Refactor Phase Quality Gates**:
- [ ] All refactorings applied as specified
- [ ] All tests still pass (no regressions)
- [ ] Code complexity reduced (if measurable)
- [ ] Code readability improved
### 3. CLI Execution Integration
**CLI Session Resumption** (when step.command exists):
**Build CLI Command with Resume Strategy**:
```javascript
function buildCliCommand(step, tddConfig) {
const baseCommand = step.command // From task JSON
const tool = getDefaultCliTool() // First enabled tool from ~/.claude/cli-tools.json
// Parse cli_execution strategy
switch (tddConfig.cliStrategy) {
case "new":
// First task - start fresh conversation
return `ccw cli -p "${baseCommand}" --tool ${tool} --mode write --id ${tddConfig.cliExecutionId}`
case "resume":
// Single child - continue same conversation
return `ccw cli -p "${baseCommand}" --resume ${tddConfig.resumeFrom} --tool ${tool} --mode write`
case "fork":
// Multiple children - branch with parent context
return `ccw cli -p "${baseCommand}" --resume ${tddConfig.resumeFrom} --id ${tddConfig.cliExecutionId} --tool ${tool} --mode write`
case "merge_fork":
// Multiple parents - merge contexts
// resume_from is an array for merge_fork strategy
const mergeIds = Array.isArray(tddConfig.resumeFrom)
? tddConfig.resumeFrom.join(',')
: tddConfig.resumeFrom
return `ccw cli -p "${baseCommand}" --resume ${mergeIds} --id ${tddConfig.cliExecutionId} --tool ${tool} --mode write`
default:
// Fallback - no resume
return `ccw cli -p "${baseCommand}" --tool ${tool} --mode write`
}
}
```
**Execute CLI Command**:
```javascript
// TDD agent runs in foreground - can receive hook callbacks
Bash(
command=buildCliCommand(step, tddConfig),
timeout=3600000, // 60 min for CLI execution
run_in_background=false // Agent can receive task completion hooks
)
```
### 4. Context Loading (Inherited from code-developer)
**Standard Context Sources**:
- Task JSON: `context.requirements`, `context.acceptance`, `context.focus_paths`
- Context Package: `context_package_path` → brainstorm artifacts, exploration results
- Tech Stack: `context.shared_context.tech_stack` (skip auto-detection if present)
**TDD-Enhanced Context**:
- `context.tdd_cycles`: Test case enumeration and coverage targets
- `meta.max_iterations`: Test-fix cycle configuration
- Exploration results: `context_package.exploration_results` for critical_files and integration_points
### 5. Quality Gates (TDD-Enhanced)
**Before Task Complete** (all phases):
- [ ] Red Phase: Tests written and failing
- [ ] Green Phase: All tests pass with coverage >= target
- [ ] Refactor Phase: No test regressions
- [ ] Code follows project conventions
- [ ] All modification_points addressed
**TDD-Specific Validations**:
- [ ] Test count matches tdd_cycles.test_count
- [ ] Coverage meets tdd_cycles.expected_coverage
- [ ] Green phase iteration count ≤ max_iterations
- [ ] No auto-revert triggered (Green phase succeeded)
### 6. Task Completion (TDD-Enhanced)
**Upon completing TDD task:**
1. **Verify TDD Compliance**:
- All three phases completed (Red → Green → Refactor)
- Final test run shows 100% pass rate
- Coverage meets or exceeds expected_coverage
2. **Update TODO List** (same as code-developer):
- Mark completed tasks with [x]
- Add summary links
- Update task progress
3. **Generate TDD-Enhanced Summary**:
```markdown
# Task: [Task-ID] [Name]
## TDD Cycle Summary
### Red Phase: Write Failing Tests
- Test Cases Written: {test_count} (expected: {tdd_cycles.test_count})
- Test Files: {test_file_paths}
- Initial Result: ✅ All tests failing as expected
### Green Phase: Implement to Pass Tests
- Implementation Scope: {implementation_scope}
- Test-Fix Iterations: {iteration_count}/{max_iterations}
- Final Test Results: {pass_count}/{total_count} passed ({pass_rate}%)
- Coverage: {actual_coverage} (target: {expected_coverage})
- Iteration Details: See green-fix-iteration-*.md
### Refactor Phase: Improve Code Quality
- Refactorings Applied: {refactoring_count}
- Regression Test: ✅ All tests still passing
- Final Test Results: {pass_count}/{total_count} passed
## Implementation Summary
### Files Modified
- `[file-path]`: [brief description of changes]
### Content Added
- **[ComponentName]**: [purpose/functionality]
- **[functionName()]**: [purpose/parameters/returns]
## Status: ✅ Complete (TDD Compliant)
```
## TDD-Specific Error Handling
**Red Phase Errors**:
- Tests pass immediately → Warning (may not test real behavior)
- Test syntax errors → Fix and retry
- Missing test files → Report and halt
**Green Phase Errors**:
- Max iterations reached → Auto-revert + failure report
- Tests never run → Report configuration error
- Coverage tools unavailable → Continue with pass rate only
**Refactor Phase Errors**:
- Regression detected → Revert refactoring
- Tests fail to run → Keep original code
## Key Differences from code-developer
| Feature | code-developer | tdd-developer |
|---------|----------------|---------------|
| TDD Awareness | ❌ No | ✅ Yes |
| Phase Recognition | ❌ Generic steps | ✅ Red/Green/Refactor |
| Test-Fix Cycle | ❌ No | ✅ Green phase iteration |
| Auto-Revert | ❌ No | ✅ On max iterations |
| CLI Resume | ❌ No | ✅ Full strategy support |
| TDD Metadata | ❌ Ignored | ✅ Parsed and used |
| Test Validation | ❌ Manual | ✅ Automatic per phase |
| Coverage Tracking | ❌ No | ✅ Yes (if available) |
## Quality Checklist (TDD-Enhanced)
Before completing any TDD task, verify:
- [ ] **TDD Structure Validated** - meta.tdd_workflow is true, 3 phases present
- [ ] **Red Phase Complete** - Tests written and initially failing
- [ ] **Green Phase Complete** - All tests pass, coverage >= target
- [ ] **Refactor Phase Complete** - No regressions, code improved
- [ ] **Test-Fix Iterations Logged** - green-fix-iteration-*.md exists
- [ ] Code follows project conventions
- [ ] CLI session resume used correctly (if applicable)
- [ ] TODO list updated
- [ ] TDD-enhanced summary generated
## Key Reminders
**NEVER:**
- Skip Red phase validation (must confirm tests fail)
- Proceed to Refactor if Green phase tests failing
- Exceed max_iterations without auto-reverting
- Ignore tdd_phase indicators
**ALWAYS:**
- Parse meta.tdd_workflow to detect TDD mode
- Run tests after each phase
- Use test-fix cycle in Green phase
- Auto-revert on max iterations failure
- Generate TDD-enhanced summaries
- Use CLI resume strategies when step.command exists
- Log all test-fix iterations to .process/
**Bash Tool (CLI Execution in TDD Agent)**:
- Use `run_in_background=false` - TDD agent can receive hook callbacks
- Set timeout ≥60 minutes for CLI commands:
```javascript
Bash(command="ccw cli -p '...' --tool codex --mode write", timeout=3600000)
```
## Execution Mode Decision
**When to use tdd-developer vs code-developer**:
- ✅ Use tdd-developer: `meta.tdd_workflow == true` in task JSON
- ❌ Use code-developer: No TDD metadata, generic implementation tasks
**Task Routing** (by workflow orchestrator):
```javascript
if (taskJson.meta?.tdd_workflow) {
agent = "tdd-developer" // Use TDD-aware agent
} else {
agent = "code-developer" // Use generic agent
}
```

File diff suppressed because it is too large

View File

@@ -0,0 +1,832 @@
---
name: ccw-debug
description: Aggregated debug command - combines debugging diagnostics and test verification in a synergistic workflow supporting cli-quick / debug-first / test-first / bidirectional-verification modes
argument-hint: "[--mode cli|debug|test|bidirectional] [--yes|-y] [--hotfix] \"bug description or error message\""
allowed-tools: SlashCommand(*), TodoWrite(*), AskUserQuestion(*), Read(*), Bash(*)
---
# CCW-Debug Aggregated Command
## Core Concept
**Aggregated Debug Command** - Combines debugging diagnostics and test verification in a synergistic workflow. Not a simple concatenation of two commands, but intelligent orchestration based on mode selection.
### Four Execution Modes
| Mode | Workflow | Use Case | Characteristics |
|------|----------|----------|-----------------|
| **CLI Quick** (cli) | Direct CLI Analysis → Fix Suggestions | Simple issues, quick diagnosis | Fastest, minimal workflow, recommendation-only |
| **Debug First** (debug) | Debug → Analyze Hypotheses → Apply Fix → Test Verification | Root cause unclear, requires exploration | Starts with exploration, Gemini-assisted |
| **Test First** (test) | Generate Tests → Execute → Analyze Failures → CLI Fixes | Code implemented, needs test validation | Driven by test coverage, auto-iterates |
| **Bidirectional Verification** (bidirectional) | Parallel: Debug + Test → Merge Findings → Unified Fix | Complex systems, ambiguous symptoms | Parallel execution, converged insights |
---
## Quick Start
### Basic Usage
```bash
# CLI quick mode: fastest, recommendation-only (new!)
/ccw-debug --mode cli "Login failed: token validation error"
# Default mode: debug-first (recommended for most scenarios)
/ccw-debug "Login failed: token validation error"
# Test-first mode
/ccw-debug --mode test "User permission check failure"
# Bidirectional verification mode (complex issues)
/ccw-debug --mode bidirectional "Payment flow multiple failures"
# Auto mode (skip all confirmations)
/ccw-debug --yes "Quick fix: database connection timeout"
# Production hotfix (minimal diagnostics)
/ccw-debug --hotfix --yes "Production: API returns 500"
```
### Mode Selection Guide
**Choose "CLI Quick"** when:
- Need immediate diagnosis, not execution
- Want quick recommendations without workflows
- Simple issues with clear symptoms
- Just need fix suggestions, no auto-application
- Time is critical, prefer fast output
- Want to review CLI analysis before action
**Choose "Debug First"** when:
- Root cause is unclear
- Error messages are incomplete or vague
- Need to understand code execution flow
- Issues involve multi-module interactions
**Choose "Test First"** when:
- Code is fully implemented
- Need test coverage verification
- Have clear failure cases
- Want automated iterative fixes
**Choose "Bidirectional Verification"** when:
- System is complex (multiple subsystems)
- Problem symptoms are ambiguous (multiple possible root causes)
- Need multi-angle validation
- Time allows parallel analysis
---
## Execution Flow
### Overall Process
```
Phase 1: Intent Analysis & Mode Selection
├─ Parse --mode flag or recommend mode
├─ Check --hotfix and --yes flags
└─ Determine workflow path
Phase 2: Initialization
├─ CLI Quick: Lightweight init (no session directory needed)
├─ Others: Create unified session directory (.workflow/.ccw-debug/)
├─ Setup TodoWrite tracking
└─ Prepare session context
Phase 3: Execute Corresponding Workflow
├─ CLI Quick: ccw cli → Diagnosis Report → Optional: Escalate to debug/test/apply fix
├─ Debug First: /workflow:debug-with-file → Fix → /workflow:test-fix-gen → /workflow:test-cycle-execute
├─ Test First: /workflow:test-fix-gen → /workflow:test-cycle-execute → CLI analyze failures
└─ Bidirectional: [/workflow:debug-with-file] ∥ [/workflow:test-fix-gen → test-cycle-execute]
Phase 4: Merge Findings (Bidirectional Mode) / Escalation Decision (CLI Mode)
├─ CLI Quick: Present results → Ask user: Apply fix? Escalate? Done?
├─ Bidirectional: Converge findings from both workflows
├─ Identify consistent and conflicting root cause analyses
└─ Generate unified fix plan
Phase 5: Completion & Follow-up
├─ Generate summary report
├─ Provide next step recommendations
└─ Optional: Expand to issues (testing/enhancement/refactoring/documentation)
```
---
## Workflow Details
### Mode 0: CLI Quick (Minimal Debug Method)
**Best For**: Fast recommendations without full workflow overhead
**Workflow**:
```
User Input → Quick Context Gather → ccw cli (Gemini/Qwen/Codex)
                        ↓
                 Analysis Report
                        ↓
               Fix Recommendations
                        ↓
            Optional: User Decision
          ┌──────────────┼──────────────┐
          ↓              ↓              ↓
      Apply Fix    Escalate Mode      Done
                   (debug/test)
```
**Execution Steps**:
1. **Lightweight Context Gather** (Phase 2)
```javascript
// No session directory needed for CLI mode
const tempContext = {
bug_description: bug_description,
timestamp: getUtc8ISOString(),
mode: "cli"
}
// Quick context discovery (30s max)
// - Read error file if path provided
// - Extract error patterns from description
// - Identify likely affected files (basic grep)
```
2. **Execute CLI Analysis** (Phase 3)
```bash
# Use ccw cli with bug diagnosis template
ccw cli -p "
PURPOSE: Quick bug diagnosis for immediate recommendations
TASK:
• Analyze bug symptoms: ${bug_description}
• Identify likely root cause
• Provide actionable fix recommendations (code snippets if possible)
• Assess fix confidence level
MODE: analysis
CONTEXT: ${contextFiles.length > 0 ? '@' + contextFiles.join(' @') : 'Bug description only'}
EXPECTED:
- Root cause hypothesis (1-2 sentences)
- Fix strategy (immediate/comprehensive/refactor)
- Code snippets or file modification suggestions
- Confidence level: High/Medium/Low
- Risk assessment
CONSTRAINTS: Quick analysis, 2-5 minutes max
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```
3. **Present Results** (Phase 4)
````
## CLI Quick Analysis Complete
**Issue**: [bug_description]
**Analysis Time**: [duration]
**Confidence**: [High/Medium/Low]
### Root Cause
[1-2 sentence hypothesis]
### Fix Strategy
[immediate_patch | comprehensive_fix | refactor]
### Recommended Changes
**File**: src/module/file.ts
```typescript
// Change line 45-50
- old code
+ new code
```
**Rationale**: [why this fix]
**Risk**: [Low/Medium/High] - [risk description]
### Confidence Assessment
- Analysis confidence: [percentage]
- Recommendation: [apply immediately | review first | escalate to full debug]
````
4. **User Decision** (Phase 5)
```javascript
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes && confidence === 'High') {
// Auto-apply fix
console.log('[--yes + High confidence] Auto-applying fix...')
applyFixFromCLIRecommendation(cliOutput)
} else {
// Ask user
const decision = AskUserQuestion({
questions: [{
question: `CLI analysis complete (${confidence} confidence). What next?`,
header: "Decision",
multiSelect: false,
options: [
{ label: "Apply Fix", description: "Apply recommended changes immediately" },
{ label: "Escalate to Debug", description: "Switch to debug-first for deeper analysis" },
{ label: "Escalate to Test", description: "Switch to test-first for validation" },
{ label: "Review Only", description: "Just review, no action" }
]
}]
})
if (decision === "Apply Fix") {
applyFixFromCLIRecommendation(cliOutput)
} else if (decision === "Escalate to Debug") {
// Re-invoke ccw-debug with --mode debug
SlashCommand(command=`/ccw-debug --mode debug "${bug_description}"`)
} else if (decision === "Escalate to Test") {
// Re-invoke ccw-debug with --mode test
SlashCommand(command=`/ccw-debug --mode test "${bug_description}"`)
}
}
```
**Key Characteristics**:
- **Speed**: 2-5 minutes total (fastest mode)
- **Session**: No persistent session directory (lightweight)
- **Output**: Recommendation report only
- **Execution**: Optional, user-controlled
- **Escalation**: Can upgrade to full debug/test workflows
**Limitations**:
- No hypothesis iteration (single-shot analysis)
- No automatic test generation
- No instrumentation/logging
- Best for clear symptoms with localized fixes
---
### Mode 1: Debug First
**Best For**: Issues requiring root cause exploration
**Workflow**:
```
User Input → Session Init → /workflow:debug-with-file
                  ↓
Generate understanding.md + hypotheses
                  ↓
User reproduces issue, analyze logs
                  ↓
Gemini validates hypotheses
                  ↓
Apply fix code
                  ↓
/workflow:test-fix-gen
                  ↓
/workflow:test-cycle-execute
                  ↓
Generate unified report
```
**Execution Steps**:
1. **Session Initialization** (Phase 2)
```javascript
const sessionId = `CCWD-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.ccw-debug/${sessionId}`
bash(`mkdir -p ${sessionFolder}`)
// Record mode selection
const modeConfig = {
mode: "debug",
original_input: bug_description,
timestamp: getUtc8ISOString(),
flags: { hotfix, autoYes }
}
Write(`${sessionFolder}/mode-config.json`, JSON.stringify(modeConfig, null, 2))
```
2. **Start Debug** (Phase 3)
```javascript
SlashCommand(command=`/workflow:debug-with-file "${bug_description}"`)
// Update TodoWrite
TodoWrite({
todos: [
{ content: "Phase 1: Debug & Analysis", status: "completed" },
{ content: "Phase 2: Apply Fix from Debug Findings", status: "in_progress" },
{ content: "Phase 3: Generate & Execute Tests", status: "pending" },
{ content: "Phase 4: Generate Report", status: "pending" }
]
})
```
3. **Apply Fix** (Handled by debug command)
4. **Test Generation & Execution**
```javascript
// Auto-continue after debug command completes
SlashCommand(command=`/workflow:test-fix-gen "Test validation for: ${bug_description}"`)
SlashCommand(command="/workflow:test-cycle-execute")
```
5. **Generate Report** (Phase 5)
```
## Debug-First Workflow Completed
**Issue**: [bug_description]
**Mode**: Debug First
**Session**: [sessionId]
### Debug Phase Results
- Root Cause: [extracted from understanding.md]
- Hypothesis Confirmation: [from hypotheses.json]
- Fixes Applied: [list of modified files]
### Test Phase Results
- Tests Created: [test files generated by IMPL-001]
- Pass Rate: [final test pass rate]
- Iteration Count: [fix iterations]
### Key Findings
- [learning points from debugging]
- [coverage insights from testing]
```
---
### Mode 2: Test First
**Best For**: Implemented code needing test validation
**Workflow**:
```
User Input → Session Init → /workflow:test-fix-gen
                  ↓
Generate test tasks (IMPL-001, IMPL-002)
                  ↓
/workflow:test-cycle-execute
                  ↓
Auto-iterate: Test → Analyze Failures → CLI Fix
                  ↓
Until pass rate ≥ 95%
                  ↓
Generate report
```
**Execution Steps**:
1. **Session Initialization** (Phase 2)
```javascript
const modeConfig = {
mode: "test",
original_input: bug_description,
timestamp: getUtc8ISOString(),
flags: { hotfix, autoYes }
}
```
2. **Generate Tests** (Phase 3)
```javascript
SlashCommand(command=`/workflow:test-fix-gen "${bug_description}"`)
// Update TodoWrite
TodoWrite({
todos: [
{ content: "Phase 1: Generate Tests", status: "completed" },
{ content: "Phase 2: Execute & Fix Tests", status: "in_progress" },
{ content: "Phase 3: Final Validation", status: "pending" },
{ content: "Phase 4: Generate Report", status: "pending" }
]
})
```
3. **Execute & Iterate** (Phase 3 cont.)
```javascript
SlashCommand(command="/workflow:test-cycle-execute")
// test-cycle-execute handles:
// - Execute tests
// - Analyze failures
// - Generate fix tasks via CLI
// - Iterate fixes until pass
```
4. **Generate Report** (Phase 5)
---
### Mode 3: Bidirectional Verification
**Best For**: Complex systems, multi-dimensional analysis
**Workflow**:
```
User Input → Session Init → Parallel execution:
      ┌─────────────────────────────────────┐
      │                                     │
      ↓                                     ↓
/workflow:debug-with-file        /workflow:test-fix-gen
      │                                     │
Generate hypotheses &              Generate test tasks
understanding                               │
      │                                     ↓
      ↓                         /workflow:test-cycle-execute
Apply debug fixes                           │
      │                                     │
      └─────────────────┬───────────────────┘
                        ↓
         Phase 4: Merge Findings
         ├─ Converge root cause analyses
         ├─ Identify consistency (mutual validation)
         ├─ Identify conflicts (need coordination)
         └─ Generate unified report
```
**Execution Steps**:
1. **Parallel Execution** (Phase 3)
```javascript
// Start debug
const debugTask = SlashCommand(
command=`/workflow:debug-with-file "${bug_description}"`,
run_in_background=false
)
// Start test generation (synchronous execution, SlashCommand blocks)
const testTask = SlashCommand(
command=`/workflow:test-fix-gen "${bug_description}"`,
run_in_background=false
)
// Execute test cycle
const testCycleTask = SlashCommand(
command="/workflow:test-cycle-execute",
run_in_background=false
)
```
2. **Merge Findings** (Phase 4)
```javascript
// Read debug results
const understandingMd = Read(`${debugSessionFolder}/understanding.md`)
const hypothesesJson = JSON.parse(Read(`${debugSessionFolder}/hypotheses.json`))
// Read test results
const testResultsJson = JSON.parse(Read(`${testSessionFolder}/.process/test-results.json`))
const fixPlanJson = JSON.parse(Read(`${testSessionFolder}/.task/IMPL-002.json`))
// Merge analysis
const debugRootCause = hypothesesJson.confirmed_hypothesis
const testFailures = testResultsJson.failures
const convergence = {
debug_root_cause: debugRootCause,
test_failure_pattern: testFailures,
consistency: analyzeConsistency(debugRootCause, testFailures),
conflicts: identifyConflicts(debugRootCause, testFailures),
unified_root_cause: mergeRootCauses(debugRootCause, testFailures),
recommended_fix: selectBestFix(debugRootCause, testFailures)
}
```
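The merge helpers are intentionally unspecified; as one sketch, consistency could be approximated by file overlap, assuming each failure record lists implicated files:
```javascript
// Consistent when the confirmed hypothesis touches at least one file
// that also appears in the failing tests' records.
function analyzeConsistency(debugRootCause, testFailures) {
  const debugFiles = debugRootCause?.files || [];
  const testFiles = (testFailures || []).flatMap(f => f.files || []);
  const overlap = debugFiles.filter(f => testFiles.includes(f));
  return { consistent: overlap.length > 0, overlapping_files: overlap };
}
```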
3. **Generate Report** (Phase 5)
```
## Bidirectional Verification Workflow Completed
**Issue**: [bug_description]
**Mode**: Bidirectional Verification
### Debug Findings
- Root Cause (hypothesis): [from understanding.md]
- Confidence: [from hypotheses.json]
- Key code paths: [file:line]
### Test Findings
- Failure pattern: [list of failing tests]
- Error type: [error type]
- Impact scope: [affected modules]
### Merged Analysis
- ✓ Consistent: Both workflows identified same root cause
- ⚠ Conflicts: [list any conflicts]
- → Unified Root Cause: [final confirmed root cause]
### Recommended Fix
- Strategy: [selected fix strategy]
- Rationale: [why this strategy]
- Risks: [known risks]
```
---
## Command Line Interface
### Complete Syntax
```bash
/ccw-debug [OPTIONS] <BUG_DESCRIPTION>
Options:
--mode <cli|debug|test|bidirectional> Execution mode (default: debug)
--yes, -y Auto mode (skip all confirmations)
--hotfix, -h Production hotfix mode (only for debug mode)
--no-tests Skip test generation in debug-first mode
--skip-report Don't generate final report
--resume <session-id> Resume interrupted session
Arguments:
<BUG_DESCRIPTION> Issue description, error message, or .md file path
```
### Examples
```bash
# CLI quick mode: fastest, recommendation-only (NEW!)
/ccw-debug --mode cli "User login timeout"
/ccw-debug --mode cli --yes "Quick fix: API 500 error" # Auto-apply if high confidence
# Debug first (default)
/ccw-debug "User login timeout"
# Test first
/ccw-debug --mode test "Payment validation failure"
# Bidirectional verification
/ccw-debug --mode bidirectional "Multi-module data consistency issue"
# Hotfix auto mode
/ccw-debug --hotfix --yes "API 500 error"
# Debug first, skip tests
/ccw-debug --no-tests "Understand code flow"
# Resume interrupted session
/ccw-debug --resume CCWD-login-timeout-2025-01-27
```
---
## Session Structure
### File Organization
```
.workflow/.ccw-debug/CCWD-{slug}-{date}/
├── mode-config.json # Mode configuration and flags
├── session-manifest.json # Session index and status
├── final-report.md # Final report
├── debug/ # Debug workflow (if mode includes debug)
│ ├── debug-session-id.txt
│ ├── understanding.md
│ ├── hypotheses.json
│ └── debug.log
├── test/ # Test workflow (if mode includes test)
│ ├── test-session-id.txt
│ ├── IMPL_PLAN.md
│ ├── test-results.json
│ └── iteration-state.json
└── fusion/ # Fusion analysis (bidirectional mode)
├── convergence-analysis.json
├── consistency-report.md
└── unified-root-cause.json
```
### Session State Management
```json
{
"session_id": "CCWD-login-timeout-2025-01-27",
"mode": "debug|test|bidirectional",
"status": "running|completed|failed|paused",
"phases": {
"phase_1": { "status": "completed", "timestamp": "..." },
"phase_2": { "status": "in_progress", "timestamp": "..." },
"phase_3": { "status": "pending" },
"phase_4": { "status": "pending" },
"phase_5": { "status": "pending" }
},
"sub_sessions": {
"debug_session": "DBG-...",
"test_session": "WFS-test-..."
},
"artifacts": {
"debug_understanding": "...",
"test_results": "...",
"fusion_analysis": "..."
}
}
```
---
## Mode Selection Logic
### Auto Mode Recommendation
When user doesn't specify `--mode`, recommend based on input analysis:
```javascript
function recommendMode(bugDescription) {
const indicators = {
cli_signals: [
/quick|fast|simple|immediate/,
/recommendation|suggest|advice/,
/just need|only want|quick look/,
/straightforward|obvious|clear/
],
debug_signals: [
/unclear|don't know|maybe|uncertain|why/,
/error|crash|fail|exception|stack trace/,
/execution flow|code path|how does/
],
test_signals: [
/test|coverage|verify|pass|fail/,
/implementation|implemented|complete/,
/case|scenario|should/
],
complex_signals: [
/multiple|all|system|integration/,
/module|subsystem|network|distributed/,
/concurrent|async|race/
]
}
let score = { cli: 0, debug: 0, test: 0, bidirectional: 0 }
// CLI signals (lightweight preference)
for (const pattern of indicators.cli_signals) {
if (pattern.test(bugDescription)) score.cli += 3
}
// Debug signals
for (const pattern of indicators.debug_signals) {
if (pattern.test(bugDescription)) score.debug += 2
}
// Test signals
for (const pattern of indicators.test_signals) {
if (pattern.test(bugDescription)) score.test += 2
}
// Complex signals (prefer bidirectional for complex issues)
for (const pattern of indicators.complex_signals) {
if (pattern.test(bugDescription)) {
score.bidirectional += 3
score.cli -= 2 // Complex issues not suitable for CLI quick
}
}
// If description is short and has clear error, prefer CLI
if (bugDescription.length < 100 && /error|fail|crash/.test(bugDescription)) {
score.cli += 2
}
// Return highest scoring mode
return Object.entries(score).sort((a, b) => b[1] - a[1])[0][0]
}
```
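A quick sanity check of the scoring (inputs illustrative):
```javascript
recommendMode("quick look: API returns 500 error");
// cli: 3 (quick) + 3 (quick look) + 2 (short description with "error") = 8 → 'cli'

recommendMode("multiple modules fail in the distributed payment system");
// bidirectional: 3 + 3 = 6; cli penalized twice to -2 → 'bidirectional'
```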
---
## Best Practices
### When to Use Each Mode
| Issue Characteristic | Recommended Mode | Rationale |
|----------------------|-----------------|-----------|
| Simple error, clear symptoms | CLI Quick | Fastest recommendation |
| Incomplete error info, requires exploration | Debug First | Deep diagnostic capability |
| Code complete, needs test coverage | Test First | Automated iterative fixes |
| Cross-module issue, ambiguous symptoms | Bidirectional | Multi-angle insights |
| Production failure, needs immediate guidance | CLI Quick + --yes | Fastest guidance, optional escalation |
| Production failure, needs safe fix | Debug First + --hotfix | Minimal diagnosis time |
| Want to understand why it failed | Debug First | Records understanding evolution |
| Want to ensure all scenarios pass | Test First | Complete coverage-driven |
### Performance Tips
- **CLI Quick**: 2-5 minutes, no file I/O, recommendation-only
- **Debug First**: Usually requires manual issue reproduction (after logging added), then 15-30 min
- **Test First**: Fully automated, 20-45 min depending on test suite size
- **Bidirectional**: Most comprehensive but slowest (parallel workflows), 30-60 min
### Workflow Continuity
- **CLI Quick**: Can escalate to debug/test/apply fix based on user decision
- **Debug First**: Auto-launches test generation and execution after completion
- **Test First**: With high failure rates suggests switching to debug mode for root cause
- **Bidirectional**: Always executes complete flow
---
## Follow-up Expansion
After completion, offer to expand to issues:
```
## Done! What's next?
- [ ] Create Test issue (improve test coverage)
- [ ] Create Enhancement issue (optimize code quality)
- [ ] Create Refactor issue (improve architecture)
- [ ] Create Documentation issue (record learnings)
- [ ] Don't create any issue, end workflow
```
Selected items call: `/issue:new "{issue summary} - {dimension}"`
---
## Error Handling
| Error | CLI Quick | Debug First | Test First | Bidirectional |
|-------|-----------|-------------|-----------|---------------|
| Session creation failed | N/A (no session) | Retry → abort | Retry → abort | Retry → abort |
| CLI analysis failed | Retry with fallback tool → manual | N/A | N/A | N/A |
| Diagnosis/test failed | N/A | Continue with partial results | Direct failure | Use alternate workflow results |
| Low confidence result | Ask escalate or review | N/A | N/A | N/A |
| Merge conflicts | N/A | N/A | N/A | Select highest confidence plan |
| Fix application failed | Report error, no auto-retry | Request manual fix | Mark failed, request intervention | Try alternative plan |
---
## Relationship with ccw Command
| Feature | ccw | ccw-debug |
|---------|-----|----------|
| **Design** | General workflow orchestration | Debug + test aggregation |
| **Intent Detection** | ✅ Detects task type | ✅ Detects issue type |
| **Automation** | ✅ Auto-selects workflow | ✅ Auto-selects mode |
| **Quick Mode** | ❌ None | ✅ CLI Quick (2-5 min) |
| **Parallel Execution** | ❌ Sequential | ✅ Bidirectional mode parallel |
| **Fusion Analysis** | ❌ None | ✅ Bidirectional mode fusion |
| **Workflow Scope** | Broad (feature/bugfix/tdd/ui etc.) | Deep focus (debug + test) |
| **CLI Integration** | Yes | Yes (4 levels: quick/deep/iterative/fusion) |
---
## Usage Recommendations
1. **First Time**: Use default mode (debug-first), observe workflow
2. **Quick Decision**: Use CLI Quick (--mode cli) for immediate recommendations
3. **Quick Fix**: Use `--hotfix --yes` for minimal diagnostics (debug mode)
4. **Learning**: Use debug-first, read `understanding.md`
5. **Complete Validation**: Use bidirectional for multi-dimensional insights
6. **Auto Repair**: Use test-first for automatic iteration
7. **Escalation**: Start with CLI Quick, escalate to other modes as needed
---
## Reference
### Related Commands
- `ccw cli` - Direct CLI analysis (used by CLI Quick mode)
- `/workflow:debug-with-file` - Deep debug diagnostics
- `/workflow:test-fix-gen` - Test generation
- `/workflow:test-cycle-execute` - Test execution
- `/workflow:lite-fix` - Lightweight fix
- `/ccw` - General workflow orchestration
### Configuration Files
- `~/.claude/cli-tools.json` - CLI tool configuration (Gemini/Qwen/Codex)
- `.workflow/project-tech.json` - Project technology stack
- `.workflow/project-guidelines.json` - Project conventions
### CLI Tool Fallback Chain (for CLI Quick mode)
When CLI analysis fails, fallback order:
1. **Gemini** (primary): `gemini-2.5-pro`
2. **Qwen** (fallback): `coder-model`
3. **Codex** (fallback): `gpt-5.2`
---
## Summary: Mode Selection Decision Tree
```
User calls: /ccw-debug <bug_description>
┌─ Explicit --mode specified?
│ ├─ YES → Use specified mode
│ │ ├─ cli → 2-5 min analysis, optionally escalate
│ │ ├─ debug → Full debug-with-file workflow
│ │ ├─ test → Test-first workflow
│ │ └─ bidirectional → Parallel debug + test
│ │
│ └─ NO → Auto-recommend based on bug description
│ ├─ Keywords: "quick", "fast", "simple" → CLI Quick
│ ├─ Keywords: "error", "crash", "exception" → Debug First (or CLI if simple)
│ ├─ Keywords: "test", "verify", "coverage" → Test First
│ └─ Keywords: "multiple", "system", "distributed" → Bidirectional
├─ Check --yes flag
│ ├─ YES → Auto-confirm all decisions
│ │ ├─ CLI mode: Auto-apply if confidence High
│ │ └─ Others: Auto-select default options
│ │
│ └─ NO → Interactive mode, ask user for confirmations
├─ Check --hotfix flag (debug mode only)
│ ├─ YES → Minimal diagnostics, fast fix
│ └─ NO → Full analysis
└─ Execute selected mode workflow
└─ Return results or escalation options
```

.claude/commands/ccw.md Normal file
View File

@@ -0,0 +1,567 @@
---
name: ccw
description: Main workflow orchestrator - analyze intent, select workflow, execute command chain in main process
argument-hint: "\"task description\""
allowed-tools: SlashCommand(*), TodoWrite(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*)
---
# CCW Command - Main Workflow Orchestrator
Main process orchestrator: intent analysis → workflow selection → command chain execution.
## Core Concept: Minimum Execution Units (最小执行单元)
**Definition**: A set of commands that must execute together as an atomic group to achieve a meaningful workflow milestone.
**Why This Matters**:
- **Prevents Incomplete States**: Avoid stopping after task generation without execution
- **User Experience**: User gets complete results, not intermediate artifacts requiring manual follow-up
- **Workflow Integrity**: Maintains logical coherence of multi-step operations
**Key Units in CCW**:
| Unit Type | Pattern | Example |
|-----------|---------|---------|
| **Planning + Execution** | plan-cmd → execute-cmd | lite-plan → lite-execute |
| **Testing** | test-gen-cmd → test-exec-cmd | test-fix-gen → test-cycle-execute |
| **Review** | review-cmd → fix-cmd | review-session-cycle → review-cycle-fix |
**Atomic Rules**:
1. CCW automatically groups commands into minimum units - never splits them
2. Pipeline visualization shows units with `【 】` markers
3. Error handling preserves unit boundaries (retry/skip affects whole unit)
## Execution Model
**Synchronous (Main Process)**: Commands execute via SlashCommand in main process, blocking until complete.
```
User Input → Analyze Intent → Select Workflow → [Confirm] → Execute Chain
                                                                 ↓
                                                     SlashCommand (blocking)
                                                                 ↓
                                                        Update TodoWrite
                                                                 ↓
                                                        Next Command...
```
**vs ccw-coordinator**: External CLI execution with background tasks and hook callbacks.
## 5-Phase Workflow
### Phase 1: Analyze Intent
```javascript
function analyzeIntent(input) {
return {
goal: extractGoal(input),
scope: extractScope(input),
constraints: extractConstraints(input),
task_type: detectTaskType(input), // bugfix|feature|tdd|review|exploration|...
complexity: assessComplexity(input), // low|medium|high
clarity_score: calculateClarity(input) // 0-3 (>=2 = clear)
};
}
// Task type detection (priority order)
// Note: `regexA && regexB` evaluates to regexB alone, so compound
// conditions are expressed as arrays whose patterns must ALL match.
function detectTaskType(text) {
  const patterns = {
    'bugfix-hotfix': [/urgent|production|critical/, /fix|bug/],
    // With-File workflows (documented exploration with multi-CLI collaboration)
    'brainstorm': /brainstorm|ideation|头脑风暴|创意|发散思维|creative thinking|multi-perspective.*think|compare perspectives|探索.*可能/,
    'brainstorm-to-issue': /brainstorm.*issue|头脑风暴.*issue|idea.*issue|想法.*issue|从.*头脑风暴|convert.*brainstorm/,
    'debug-file': /debug.*document|hypothesis.*debug|troubleshoot.*track|investigate.*log|调试.*记录|假设.*验证|systematic debug|深度调试/,
    'analyze-file': /analyze.*document|explore.*concept|understand.*architecture|investigate.*discuss|collaborative analysis|分析.*讨论|深度.*理解|协作.*分析/,
    // Standard workflows
    'bugfix': /fix|bug|error|crash|fail|debug/,
    'issue-batch': [/issues?|batch/, /fix|resolve/],
    'issue-transition': /issue workflow|structured workflow|queue|multi-stage/,
    'exploration': /uncertain|explore|research|what if/,
    'quick-task': [/quick|simple|small/, /feature|function/],
    'ui-design': /ui|design|component|style/,
    'tdd': /tdd|test-driven|test first/,
    'test-fix': /test fail|fix test|failing test/,
    'review': /review|code review/,
    'documentation': /docs|documentation|readme/
  };
  for (const [type, pattern] of Object.entries(patterns)) {
    const matched = Array.isArray(pattern)
      ? pattern.every(p => p.test(text))
      : pattern.test(text);
    if (matched) return type;
  }
  return 'feature';
}
```
**Output**: `Type: [task_type] | Goal: [goal] | Complexity: [complexity] | Clarity: [clarity_score]/3`
---
### Phase 1.5: Requirement Clarification (if clarity_score < 2)
```javascript
async function clarifyRequirements(analysis) {
if (analysis.clarity_score >= 2) return analysis;
const questions = generateClarificationQuestions(analysis); // Goal, Scope, Constraints
const answers = await AskUserQuestion({ questions });
return updateAnalysis(analysis, answers);
}
```
**Questions**: Goal (Create/Fix/Optimize/Analyze), Scope (Single file/Module/Cross-module/System), Constraints (Backward compat/Skip tests/Urgent hotfix)
---
### Phase 2: Select Workflow & Build Command Chain
```javascript
function selectWorkflow(analysis) {
const levelMap = {
'bugfix-hotfix': { level: 2, flow: 'bugfix.hotfix' },
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm': { level: 4, flow: 'brainstorm-with-file' }, // Multi-perspective ideation
'brainstorm-to-issue': { level: 4, flow: 'brainstorm-to-issue' }, // Brainstorm → Issue workflow
'debug-file': { level: 3, flow: 'debug-with-file' }, // Hypothesis-driven debugging
'analyze-file': { level: 3, flow: 'analyze-with-file' }, // Collaborative analysis
// Standard workflows
'bugfix': { level: 2, flow: 'bugfix.standard' },
'issue-batch': { level: 'Issue', flow: 'issue' },
'issue-transition': { level: 2.5, flow: 'rapid-to-issue' }, // Bridge workflow
'exploration': { level: 4, flow: 'full' },
'quick-task': { level: 1, flow: 'lite-lite-lite' },
'ui-design': { level: analysis.complexity === 'high' ? 4 : 3, flow: 'ui' },
'tdd': { level: 3, flow: 'tdd' },
'test-fix': { level: 3, flow: 'test-fix-gen' },
'review': { level: 3, flow: 'review-cycle-fix' },
'documentation': { level: 2, flow: 'docs' },
'feature': { level: analysis.complexity === 'high' ? 3 : 2, flow: analysis.complexity === 'high' ? 'coupled' : 'rapid' }
};
const selected = levelMap[analysis.task_type] || levelMap['feature'];
return buildCommandChain(selected, analysis);
}
// Build command chain (port-based matching with Minimum Execution Units)
function buildCommandChain(workflow, analysis) {
const chains = {
// Level 1 - Rapid
'lite-lite-lite': [
{ cmd: '/workflow:lite-lite-lite', args: `"${analysis.goal}"` }
],
// Level 2 - Lightweight
'rapid': [
// Unit: Quick Implementation【lite-plan → lite-execute】
{ cmd: '/workflow:lite-plan', args: `"${analysis.goal}"`, unit: 'quick-impl' },
{ cmd: '/workflow:lite-execute', args: '--in-memory', unit: 'quick-impl' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
...(analysis.constraints?.includes('skip-tests') ? [] : [
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
])
],
// Level 2 Bridge - Lightweight to Issue Workflow
'rapid-to-issue': [
// Unit: Quick Implementation【lite-plan → convert-to-plan】
{ cmd: '/workflow:lite-plan', args: `"${analysis.goal}"`, unit: 'quick-impl-to-issue' },
{ cmd: '/issue:convert-to-plan', args: '--latest-lite-plan -y', unit: 'quick-impl-to-issue' },
// Auto-continue to issue workflow
{ cmd: '/issue:queue', args: '' },
{ cmd: '/issue:execute', args: '--queue auto' }
],
'bugfix.standard': [
// Unit: Bug Fix【lite-fix → lite-execute】
{ cmd: '/workflow:lite-fix', args: `"${analysis.goal}"`, unit: 'bug-fix' },
{ cmd: '/workflow:lite-execute', args: '--in-memory', unit: 'bug-fix' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
...(analysis.constraints?.includes('skip-tests') ? [] : [
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
])
],
'bugfix.hotfix': [
{ cmd: '/workflow:lite-fix', args: `--hotfix "${analysis.goal}"` }
],
'multi-cli-plan': [
// Unit: Multi-CLI Planning【multi-cli-plan → lite-execute】
{ cmd: '/workflow:multi-cli-plan', args: `"${analysis.goal}"`, unit: 'multi-cli' },
{ cmd: '/workflow:lite-execute', args: '--in-memory', unit: 'multi-cli' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
...(analysis.constraints?.includes('skip-tests') ? [] : [
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
])
],
'docs': [
// Unit: Quick Implementation【lite-plan → lite-execute】
{ cmd: '/workflow:lite-plan', args: `"${analysis.goal}"`, unit: 'quick-impl' },
{ cmd: '/workflow:lite-execute', args: '--in-memory', unit: 'quick-impl' }
],
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm-with-file': [
{ cmd: '/workflow:brainstorm-with-file', args: `"${analysis.goal}"` }
// Note: Has built-in post-completion options (create plan, create issue, deep analysis)
],
// Brainstorm-to-Issue workflow (bridge from brainstorm to issue execution)
'brainstorm-to-issue': [
// Note: Assumes brainstorm session already exists, or run brainstorm first
{ cmd: '/issue:from-brainstorm', args: `SESSION="${extractBrainstormSession(analysis)}" --auto` },
{ cmd: '/issue:queue', args: '' },
{ cmd: '/issue:execute', args: '--queue auto' }
],
'debug-with-file': [
{ cmd: '/workflow:debug-with-file', args: `"${analysis.goal}"` }
// Note: Self-contained with hypothesis-driven iteration and Gemini validation
],
'analyze-with-file': [
{ cmd: '/workflow:analyze-with-file', args: `"${analysis.goal}"` }
// Note: Self-contained with multi-round discussion and CLI exploration
],
// Level 3 - Standard
'coupled': [
// Unit: Verified Planning【plan → plan-verify】
{ cmd: '/workflow:plan', args: `"${analysis.goal}"`, unit: 'verified-planning' },
{ cmd: '/workflow:plan-verify', args: '', unit: 'verified-planning' },
// Execution
{ cmd: '/workflow:execute', args: '' },
// Unit: Code Review【review-session-cycle → review-cycle-fix】
{ cmd: '/workflow:review-session-cycle', args: '', unit: 'code-review' },
{ cmd: '/workflow:review-cycle-fix', args: '', unit: 'code-review' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
...(analysis.constraints?.includes('skip-tests') ? [] : [
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
])
],
'tdd': [
// Unit: TDD Planning + Execution【tdd-plan → execute】
{ cmd: '/workflow:tdd-plan', args: `"${analysis.goal}"`, unit: 'tdd-planning' },
{ cmd: '/workflow:execute', args: '', unit: 'tdd-planning' },
// TDD Verification
{ cmd: '/workflow:tdd-verify', args: '' }
],
'test-fix-gen': [
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
{ cmd: '/workflow:test-fix-gen', args: `"${analysis.goal}"`, unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
],
'review-cycle-fix': [
// Unit: Code Review【review-session-cycle → review-cycle-fix】
{ cmd: '/workflow:review-session-cycle', args: '', unit: 'code-review' },
{ cmd: '/workflow:review-cycle-fix', args: '', unit: 'code-review' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
],
'ui': [
{ cmd: '/workflow:ui-design:explore-auto', args: `"${analysis.goal}"` },
// Unit: Planning + Execution【plan → execute】
{ cmd: '/workflow:plan', args: '', unit: 'plan-execute' },
{ cmd: '/workflow:execute', args: '', unit: 'plan-execute' }
],
// Level 4 - Brainstorm
'full': [
{ cmd: '/workflow:brainstorm:auto-parallel', args: `"${analysis.goal}"` },
// Unit: Verified Planning【plan → plan-verify】
{ cmd: '/workflow:plan', args: '', unit: 'verified-planning' },
{ cmd: '/workflow:plan-verify', args: '', unit: 'verified-planning' },
// Execution
{ cmd: '/workflow:execute', args: '' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
],
// Issue Workflow
'issue': [
{ cmd: '/issue:discover', args: '' },
{ cmd: '/issue:plan', args: '--all-pending' },
{ cmd: '/issue:queue', args: '' },
{ cmd: '/issue:execute', args: '' }
]
};
return chains[workflow.flow] || chains['rapid'];
}
```
**Output**: `Level [X] - [flow] | Pipeline: [...] | Commands: [1. /cmd1 2. /cmd2 ...]`
---
### Phase 3: User Confirmation
```javascript
async function getUserConfirmation(chain) {
const response = await AskUserQuestion({
questions: [{
question: "Execute this command chain?",
header: "Confirm",
options: [
{ label: "Confirm", description: "Start" },
{ label: "Adjust", description: "Modify" },
{ label: "Cancel", description: "Abort" }
]
}]
});
  const choice = response.answers?.['Confirm'];
  if (choice === "Cancel") throw new Error("Cancelled");
  if (choice === "Adjust") return await adjustChain(chain);
  return chain;
}
```
---
### Phase 4: Setup TODO Tracking
```javascript
function setupTodoTracking(chain, workflow) {
const todos = chain.map((step, i) => ({
content: `CCW:${workflow}: [${i + 1}/${chain.length}] ${step.cmd}`,
status: i === 0 ? 'in_progress' : 'pending',
activeForm: `Executing ${step.cmd}`
}));
TodoWrite({ todos });
}
```
**Output**: `-> CCW:rapid: [1/3] /workflow:lite-plan | CCW:rapid: [2/3] /workflow:lite-execute | ...`
---
### Phase 5: Execute Command Chain
```javascript
async function executeCommandChain(chain, workflow) {
let previousResult = null;
for (let i = 0; i < chain.length; i++) {
try {
const fullCommand = assembleCommand(chain[i], previousResult);
const result = await SlashCommand({ command: fullCommand });
previousResult = { ...result, success: true };
updateTodoStatus(i, chain.length, workflow, 'completed');
} catch (error) {
const action = await handleError(chain[i], error, i);
if (action === 'retry') {
i--; // Retry
} else if (action === 'abort') {
return { success: false, error: error.message };
}
// 'skip' - continue
}
}
return { success: true, completed: chain.length };
}
// Assemble full command with session/plan parameters
function assembleCommand(step, previousResult) {
let command = step.cmd;
if (step.args) {
command += ` ${step.args}`;
} else if (previousResult?.session_id) {
command += ` --session="${previousResult.session_id}"`;
}
return command;
}
// Update TODO: mark current as complete, next as in-progress
function updateTodoStatus(index, total, workflow, status) {
const todos = getAllCurrentTodos();
const updated = todos.map(todo => {
if (todo.content.startsWith(`CCW:${workflow}:`)) {
const stepNum = extractStepIndex(todo.content);
if (stepNum === index + 1) return { ...todo, status };
if (stepNum === index + 2 && status === 'completed') return { ...todo, status: 'in_progress' };
}
return todo;
});
TodoWrite({ todos: updated });
}
// Error handling: Retry/Skip/Abort
async function handleError(step, error, index) {
const response = await AskUserQuestion({
questions: [{
question: `${step.cmd} failed: ${error.message}`,
header: "Error",
options: [
{ label: "Retry", description: "Re-execute" },
{ label: "Skip", description: "Continue next" },
{ label: "Abort", description: "Stop" }
]
}]
});
  return { Retry: 'retry', Skip: 'skip', Abort: 'abort' }[response.answers?.['Error']] || 'abort';
}
```
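The helpers referenced above (`getAllCurrentTodos`, `extractStepIndex`) are left undefined. A minimal sketch of the parser, assuming `getAllCurrentTodos` simply returns the last list passed to TodoWrite:
```javascript
// Parse the "[n/total]" marker from a CCW todo line, e.g.
// "CCW:rapid: [2/3] /workflow:lite-execute" → 2
function extractStepIndex(content) {
  const m = content.match(/\[(\d+)\/\d+\]/);
  return m ? parseInt(m[1], 10) : -1;
}
```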
---
## Execution Flow Summary
```
User Input
|
Phase 1: Analyze Intent
|-- Extract: goal, scope, constraints, task_type, complexity, clarity
+-- If clarity < 2 -> Phase 1.5: Clarify Requirements
|
Phase 2: Select Workflow & Build Chain
|-- Map task_type -> Level (1/2/3/4/Issue)
|-- Select flow based on complexity
+-- Build command chain (port-based)
|
Phase 3: User Confirmation (optional)
|-- Show pipeline visualization
+-- Allow adjustment
|
Phase 4: Setup TODO Tracking
+-- Create todos with CCW prefix
|
Phase 5: Execute Command Chain
|-- For each command:
| |-- Assemble full command
| |-- Execute via SlashCommand
| |-- Update TODO status
| +-- Handle errors (retry/skip/abort)
+-- Return workflow result
```
---
## Pipeline Examples (with Minimum Execution Units)
**Note**: `【 】` marks Minimum Execution Units - commands execute together as atomic groups.
| Input | Type | Level | Pipeline (with Units) |
|-------|------|-------|-----------------------|
| "Add API endpoint" | feature (low) | 2 |【lite-plan → lite-execute】→【test-fix-gen → test-cycle-execute】|
| "Fix login timeout" | bugfix | 2 |【lite-fix → lite-execute】→【test-fix-gen → test-cycle-execute】|
| "Use issue workflow" | issue-transition | 2.5 |【lite-plan → convert-to-plan】→ queue → execute |
| "头脑风暴: 通知系统重构" | brainstorm | 4 | brainstorm-with-file → (built-in post-completion) |
| "从头脑风暴创建 issue" | brainstorm-to-issue | 4 | from-brainstorm → queue → execute |
| "深度调试 WebSocket 连接断开" | debug-file | 3 | debug-with-file → (hypothesis iteration) |
| "协作分析: 认证架构优化" | analyze-file | 3 | analyze-with-file → (multi-round discussion) |
| "OAuth2 system" | feature (high) | 3 |【plan → plan-verify】→ execute →【review-session-cycle → review-cycle-fix】→【test-fix-gen → test-cycle-execute】|
| "Implement with TDD" | tdd | 3 |【tdd-plan → execute】→ tdd-verify |
| "Uncertain: real-time arch" | exploration | 4 | brainstorm:auto-parallel →【plan → plan-verify】→ execute →【test-fix-gen → test-cycle-execute】|
---
## Key Design Principles
1. **Main Process Execution** - Use SlashCommand in main process, no external CLI
2. **Intent-Driven** - Auto-select workflow based on task intent
3. **Port-Based Chaining** - Build command chain using port matching
4. **Minimum Execution Units** - Commands grouped into atomic units, never split (e.g., lite-plan → lite-execute)
5. **Progressive Clarification** - Low clarity triggers clarification phase
6. **TODO Tracking** - Use CCW prefix to isolate workflow todos
7. **Unit-Aware Error Handling** - Retry/skip/abort applies to the whole unit, not individual commands (see the sketch after this list)
8. **User Control** - Optional user confirmation at each phase
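A minimal sketch of principle 7, assuming chain steps carry the optional `unit` tags shown in Phase 2; on retry, `executeCommandChain` would rewind to the unit's first step instead of re-running a single command:
```javascript
// Expand a failing step index to the whole unit it belongs to
function getUnitRange(chain, index) {
  const unit = chain[index].unit;
  if (!unit) return { start: index, end: index }; // standalone command
  let start = index, end = index;
  while (start > 0 && chain[start - 1].unit === unit) start--;
  while (end < chain.length - 1 && chain[end + 1].unit === unit) end++;
  return { start, end };
}
// In the Phase 5 catch block, 'retry' would then become:
//   const { start } = getUnitRange(chain, i);
//   i = start - 1; // the loop's i++ re-enters at the unit's first command
```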
---
## State Management
**TodoWrite-Based Tracking**: All execution state tracked via TodoWrite with `CCW:` prefix.
```javascript
// Initial state
todos = [
{ content: "CCW:rapid: [1/3] /workflow:lite-plan", status: "in_progress" },
{ content: "CCW:rapid: [2/3] /workflow:lite-execute", status: "pending" },
{ content: "CCW:rapid: [3/3] /workflow:test-cycle-execute", status: "pending" }
];
// After command 1 completes
todos = [
{ content: "CCW:rapid: [1/3] /workflow:lite-plan", status: "completed" },
{ content: "CCW:rapid: [2/3] /workflow:lite-execute", status: "in_progress" },
{ content: "CCW:rapid: [3/3] /workflow:test-cycle-execute", status: "pending" }
];
```
**vs ccw-coordinator**: Extensive state.json with task_id, status transitions, hook callbacks.
---
## With-File Workflows
**With-File workflows** provide documented exploration with multi-CLI collaboration. They are self-contained and generate comprehensive session artifacts.
| Workflow | Purpose | Key Features | Output Folder |
|----------|---------|--------------|---------------|
| **brainstorm-with-file** | Multi-perspective ideation | Gemini/Codex/Claude perspectives, diverge-converge cycles | `.workflow/.brainstorm/` |
| **debug-with-file** | Hypothesis-driven debugging | Gemini validation, understanding evolution, NDJSON logging | `.workflow/.debug/` |
| **analyze-with-file** | Collaborative analysis | Multi-round Q&A, CLI exploration, documented discussions | `.workflow/.analysis/` |
**Detection Keywords**:
- **brainstorm**: 头脑风暴, 创意, 发散思维, multi-perspective, compare perspectives
- **debug-file**: 深度调试, 假设验证, systematic debug, hypothesis debug
- **analyze-file**: 协作分析, 深度理解, collaborative analysis, explore concept
**Characteristics**:
1. **Self-Contained**: Each workflow handles its own iteration loop
2. **Documented Process**: Creates evolving documents (brainstorm.md, understanding.md, discussion.md)
3. **Multi-CLI**: Uses Gemini/Codex/Claude for different perspectives
4. **Built-in Post-Completion**: Offers follow-up options (create plan, issue, etc.)
---
## Type Comparison: ccw vs ccw-coordinator
| Aspect | ccw | ccw-coordinator |
|--------|-----|-----------------|
| **Type** | Main process (SlashCommand) | External CLI (ccw cli + hook callbacks) |
| **Execution** | Synchronous blocking | Async background with hook completion |
| **Workflow** | Auto intent-based selection | Manual chain building |
| **Intent Analysis** | 5-phase clarity check | 3-phase requirement analysis |
| **State** | TodoWrite only (in-memory) | state.json + checkpoint/resume |
| **Error Handling** | Retry/skip/abort (interactive) | Retry/skip/abort (via AskUser) |
| **Use Case** | Auto workflow for any task | Manual orchestration, large chains |
---
## Usage
```bash
# Auto-select workflow
ccw "Add user authentication"
# Complex requirement (triggers clarification)
ccw "Optimize system performance"
# Bug fix
ccw "Fix memory leak in WebSocket handler"
# TDD development
ccw "Implement user registration with TDD"
# Exploratory task
ccw "Uncertain about architecture for real-time notifications"
# With-File workflows (documented exploration with multi-CLI collaboration)
ccw "头脑风暴: 用户通知系统重新设计" # → brainstorm-with-file
ccw "从头脑风暴 BS-通知系统-2025-01-28 创建 issue" # → brainstorm-to-issue (bridge)
ccw "深度调试: 系统随机崩溃问题" # → debug-with-file
ccw "协作分析: 理解现有认证架构的设计决策" # → analyze-with-file
```

View File

@@ -3,6 +3,7 @@ name: cli-init
description: Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection
argument-hint: "[--tool gemini|qwen|all] [--output path] [--preview]"
allowed-tools: Bash(*), Read(*), Write(*), Glob(*)
group: cli
---
# CLI Initialization Command (/cli:cli-init)

View File

@@ -1,93 +0,0 @@
---
name: enhance-prompt
description: Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection
argument-hint: "user input to enhance"
---
## Overview
Systematically enhances user prompts by leveraging session memory context and intent analysis, translating ambiguous requests into actionable specifications.
## Core Protocol
**Enhancement Pipeline:**
`Intent Translation` → `Context Integration` → `Structured Output`
**Context Sources:**
- Session memory (conversation history, previous analysis)
- Implicit technical requirements
- User intent patterns
## Enhancement Rules
### Intent Translation
| User Says | Translate To | Focus |
|-----------|--------------|-------|
| "fix" | Debug and resolve | Root cause → preserve behavior |
| "improve" | Enhance/optimize | Performance/readability |
| "add" | Implement feature | Integration + edge cases |
| "refactor" | Restructure quality | Maintain behavior |
| "update" | Modernize | Version compatibility |
### Context Integration Strategy
**Session Memory:**
- Reference recent conversation context
- Reuse previously identified patterns
- Build on established understanding
- Infer technical requirements from discussion
**Example:**
```bash
# User: "add login"
# Session Memory: Previous auth discussion, JWT mentioned
# Inferred: JWT-based auth, integrate with existing session management
# Action: Implement JWT authentication with session persistence
```
## Output Structure
```bash
INTENT: [Clear technical goal]
CONTEXT: [Session memory + codebase patterns]
ACTION: [Specific implementation steps]
ATTENTION: [Critical constraints]
```
### Output Examples
**Example 1:**
```bash
# Input: "fix login button"
INTENT: Debug non-functional login button
CONTEXT: From session - OAuth flow discussed, known state issue
ACTION: Check event binding → verify state updates → test auth flow
ATTENTION: Preserve existing OAuth integration
```
**Example 2:**
```bash
# Input: "refactor payment code"
INTENT: Restructure payment module for maintainability
CONTEXT: Session memory - PCI compliance requirements, Stripe integration patterns
ACTION: Extract reusable validators → isolate payment gateway logic → maintain adapter pattern
ATTENTION: Zero behavior change, maintain PCI compliance, full test coverage
```
## Enhancement Triggers
- Ambiguous language: "fix", "improve", "clean up"
- Vague requests requiring clarification
- Complex technical requirements
- Architecture changes
- Critical systems: auth, payment, security
- Multi-step refactoring
## Key Principles
1. **Session Memory First**: Leverage conversation context and established understanding
2. **Context Reuse**: Build on previous discussions and decisions
3. **Clear Output**: Structured, actionable specifications
4. **Intent Clarification**: Transform vague requests into specific technical goals
5. **Avoid Duplication**: Reference existing context, don't repeat

View File

@@ -0,0 +1,718 @@
---
name: convert-to-plan
description: Convert planning artifacts (lite-plan, workflow session, markdown) to issue solutions
argument-hint: "[-y|--yes] [--issue <id>] [--supplement] <SOURCE>"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Skip confirmation, auto-create issue and bind solution.
# Issue Convert-to-Plan Command (/issue:convert-to-plan)
## Overview
Converts various planning artifact formats into issue workflow solutions with intelligent detection and automatic binding.
**Supported Sources** (auto-detected):
- **lite-plan**: `.workflow/.lite-plan/{slug}/plan.json`
- **workflow-session**: `WFS-xxx` ID or `.workflow/active/{session}/` folder
- **markdown**: Any `.md` file with implementation/task content
- **json**: Direct JSON files matching plan-json-schema
## Quick Reference
```bash
# Convert lite-plan to new issue (auto-creates issue)
/issue:convert-to-plan ".workflow/.lite-plan/implement-auth-2026-01-25"
# Convert workflow session to existing issue
/issue:convert-to-plan WFS-auth-impl --issue GH-123
# Supplement existing solution with additional tasks
/issue:convert-to-plan "./docs/additional-tasks.md" --issue ISS-001 --supplement
# Auto mode - skip confirmations
/issue:convert-to-plan ".workflow/.lite-plan/my-plan" -y
```
## Command Options
| Option | Description | Default |
|--------|-------------|---------|
| `<SOURCE>` | Planning artifact path or WFS-xxx ID | Required |
| `--issue <id>` | Bind to existing issue instead of creating new | Auto-create |
| `--supplement` | Add tasks to existing solution (requires --issue) | false |
| `-y, --yes` | Skip all confirmations | false |
## Core Data Access Principle
**⚠️ Important**: Use CLI commands for all issue/solution operations.
| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| Get issue | `ccw issue status <id> --json` | Read issues.jsonl directly |
| Create issue | `ccw issue init <id> --title "..."` | Write to issues.jsonl |
| Bind solution | `ccw issue bind <id> <sol-id>` | Edit issues.jsonl |
| List solutions | `ccw issue solutions --issue <id> --brief` | Read solutions/*.jsonl |
## Solution Schema Reference
Target format for all extracted data (from solution-schema.json):
```typescript
interface Solution {
id: string; // SOL-{issue-id}-{4-char-uid}
description?: string; // High-level summary
approach?: string; // Technical strategy
tasks: Task[]; // Required: at least 1 task
exploration_context?: object; // Optional: source context
analysis?: { risk, impact, complexity };
score?: number; // 0.0-1.0
is_bound: boolean;
created_at: string;
bound_at?: string;
}
interface Task {
id: string; // T1, T2, T3... (pattern: ^T[0-9]+$)
title: string; // Required: action verb + target
scope: string; // Required: module path or feature area
action: Action; // Required: Create|Update|Implement|...
description?: string;
modification_points?: Array<{file, target, change}>;
implementation: string[]; // Required: step-by-step guide
test?: { unit?, integration?, commands?, coverage_target? };
acceptance: { criteria: string[], verification: string[] }; // Required
commit?: { type, scope, message_template, breaking? };
depends_on?: string[];
priority?: number; // 1-5 (default: 3)
}
type Action = 'Create' | 'Update' | 'Implement' | 'Refactor' | 'Add' | 'Delete' | 'Configure' | 'Test' | 'Fix';
```
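For orientation, a minimal instance with hypothetical values (illustrative only, not output of the command):
```json
{
  "id": "SOL-ISS-20260128-001-a1b2",
  "description": "Add rate limiting to the public API",
  "approach": "Token bucket middleware at the gateway layer",
  "tasks": [{
    "id": "T1",
    "title": "Implement token bucket middleware",
    "scope": "src/gateway",
    "action": "Implement",
    "implementation": ["Add middleware module", "Wire into router"],
    "acceptance": { "criteria": ["Requests over limit receive 429"], "verification": ["Unit tests pass"] }
  }],
  "is_bound": false,
  "created_at": "2026-01-28T12:00:00Z"
}
```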
## Implementation
### Phase 1: Parse Arguments & Detect Source Type
```javascript
const input = userInput.trim();
const flags = parseFlags(userInput); // --issue, --supplement, -y/--yes
// Extract source path (first non-flag argument)
const source = extractSourceArg(input);
// Detect source type
function detectSourceType(source) {
// Check for WFS-xxx pattern (workflow session ID)
if (source.match(/^WFS-[\w-]+$/)) {
return { type: 'workflow-session-id', path: `.workflow/active/${source}` };
}
// Check if directory
const isDir = Bash(`test -d "${source}" && echo "dir" || echo "file"`).trim() === 'dir';
if (isDir) {
// Check for lite-plan indicator
const hasPlanJson = Bash(`test -f "${source}/plan.json" && echo "yes" || echo "no"`).trim() === 'yes';
if (hasPlanJson) {
return { type: 'lite-plan', path: source };
}
// Check for workflow session indicator
const hasSession = Bash(`test -f "${source}/workflow-session.json" && echo "yes" || echo "no"`).trim() === 'yes';
if (hasSession) {
return { type: 'workflow-session', path: source };
}
}
// Check file extensions
if (source.endsWith('.json')) {
return { type: 'json-file', path: source };
}
if (source.endsWith('.md')) {
return { type: 'markdown-file', path: source };
}
// Check if path exists at all
const exists = Bash(`test -e "${source}" && echo "yes" || echo "no"`).trim() === 'yes';
if (!exists) {
throw new Error(`E001: Source not found: ${source}`);
}
return { type: 'unknown', path: source };
}
const sourceInfo = detectSourceType(source);
if (sourceInfo.type === 'unknown') {
throw new Error(`E002: Unable to detect source format for: ${source}`);
}
console.log(`Detected source type: ${sourceInfo.type}`);
```
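`parseFlags` and `extractSourceArg` are assumed above; one possible sketch, with the flag grammar inferred from the options table (not normative):
```javascript
function parseFlags(input) {
  const yes = /(^|\s)(--yes|-y)\b/.test(input);
  return {
    issue: input.match(/--issue\s+(\S+)/)?.[1] || null,
    supplement: /--supplement\b/.test(input),
    yes, y: yes
  };
}
function extractSourceArg(input) {
  const tokens = input.split(/\s+/).filter(Boolean);
  for (let i = 0; i < tokens.length; i++) {
    if (tokens[i] === '--issue') { i++; continue; } // skip flag value
    if (tokens[i].startsWith('-')) continue;        // skip other flags
    return tokens[i].replace(/^"|"$/g, '');         // strip surrounding quotes
  }
  return null;
}
```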
### Phase 2: Extract Data Using Format-Specific Extractor
```javascript
let extracted = { title: '', approach: '', tasks: [], metadata: {} };
switch (sourceInfo.type) {
case 'lite-plan':
extracted = extractFromLitePlan(sourceInfo.path);
break;
case 'workflow-session':
case 'workflow-session-id':
extracted = extractFromWorkflowSession(sourceInfo.path);
break;
case 'markdown-file':
extracted = await extractFromMarkdownAI(sourceInfo.path);
break;
case 'json-file':
extracted = extractFromJsonFile(sourceInfo.path);
break;
}
// Validate extraction
if (!extracted.tasks || extracted.tasks.length === 0) {
throw new Error('E006: No tasks extracted from source');
}
// Ensure task IDs are normalized to T1, T2, T3...
extracted.tasks = normalizeTaskIds(extracted.tasks);
console.log(`Extracted: ${extracted.tasks.length} tasks`);
```
#### Extractor: Lite-Plan
```javascript
function extractFromLitePlan(folderPath) {
const planJson = Read(`${folderPath}/plan.json`);
const plan = JSON.parse(planJson);
return {
title: plan.summary?.split('.')[0]?.trim() || 'Untitled Plan',
description: plan.summary,
approach: plan.approach,
tasks: plan.tasks.map(t => ({
id: t.id,
title: t.title,
scope: t.scope || '',
action: t.action || 'Implement',
description: t.description || t.title,
modification_points: t.modification_points || [],
implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
test: t.verification ? {
unit: t.verification.unit_tests,
integration: t.verification.integration_tests,
commands: t.verification.manual_checks
} : {},
acceptance: {
criteria: Array.isArray(t.acceptance) ? t.acceptance : [t.acceptance || ''],
verification: t.verification?.manual_checks || []
},
depends_on: t.depends_on || [],
priority: 3
})),
metadata: {
source_type: 'lite-plan',
source_path: folderPath,
complexity: plan.complexity,
estimated_time: plan.estimated_time,
exploration_angles: plan._metadata?.exploration_angles || [],
original_timestamp: plan._metadata?.timestamp
}
};
}
```
#### Extractor: Workflow Session
```javascript
function extractFromWorkflowSession(sessionPath) {
// Load session metadata
const sessionJson = Read(`${sessionPath}/workflow-session.json`);
const session = JSON.parse(sessionJson);
// Load IMPL_PLAN.md for approach (if exists)
let approach = '';
const implPlanPath = `${sessionPath}/IMPL_PLAN.md`;
const hasImplPlan = Bash(`test -f "${implPlanPath}" && echo "yes" || echo "no"`).trim() === 'yes';
if (hasImplPlan) {
const implPlan = Read(implPlanPath);
// Extract overview/approach section
const overviewMatch = implPlan.match(/##\s*(?:Overview|Approach|Strategy)\s*\n([\s\S]*?)(?=\n##|$)/i);
approach = overviewMatch?.[1]?.trim() || implPlan.split('\n').slice(0, 10).join('\n');
}
// Load all task JSONs from .task folder
const taskFiles = Glob({ pattern: `${sessionPath}/.task/IMPL-*.json` });
const tasks = taskFiles.map(f => {
const taskJson = Read(f);
const task = JSON.parse(taskJson);
return {
id: task.id?.replace(/^IMPL-0*/, 'T') || 'T1', // IMPL-001 → T1
title: task.title,
scope: task.scope || inferScopeFromTask(task),
action: capitalizeAction(task.type) || 'Implement',
description: task.description,
modification_points: task.implementation?.modification_points || [],
implementation: task.implementation?.steps || [],
test: task.implementation?.test || {},
acceptance: {
criteria: task.acceptance_criteria || [],
verification: task.verification_steps || []
},
commit: task.commit,
depends_on: (task.depends_on || []).map(d => d.replace(/^IMPL-0*/, 'T')),
priority: task.priority || 3
};
});
return {
title: session.name || session.description?.split('.')[0] || 'Workflow Session',
description: session.description || session.name,
approach: approach || session.description,
tasks: tasks,
metadata: {
source_type: 'workflow-session',
source_path: sessionPath,
session_id: session.id,
created_at: session.created_at
}
};
}
function inferScopeFromTask(task) {
if (task.implementation?.modification_points?.length) {
const files = task.implementation.modification_points.map(m => m.file);
// Find common directory prefix
const dirs = files.map(f => f.split('/').slice(0, -1).join('/'));
return [...new Set(dirs)][0] || '';
}
return '';
}
function capitalizeAction(type) {
if (!type) return 'Implement';
const map = { feature: 'Implement', bugfix: 'Fix', refactor: 'Refactor', test: 'Test', docs: 'Update' };
return map[type.toLowerCase()] || type.charAt(0).toUpperCase() + type.slice(1);
}
```
#### Extractor: Markdown (AI-Assisted via Gemini)
```javascript
async function extractFromMarkdownAI(filePath) {
const fileContent = Read(filePath);
// Use Gemini CLI for intelligent extraction
const cliPrompt = `PURPOSE: Extract implementation plan from markdown document for issue solution conversion. Must output ONLY valid JSON.
TASK: • Analyze document structure • Identify title/summary • Extract approach/strategy section • Parse tasks from any format (lists, tables, sections, code blocks) • Normalize each task to solution schema
MODE: analysis
CONTEXT: Document content provided below
EXPECTED: Valid JSON object with format:
{
"title": "extracted title",
"approach": "extracted approach/strategy",
"tasks": [
{
"id": "T1",
"title": "task title",
"scope": "module or feature area",
"action": "Implement|Update|Create|Fix|Refactor|Add|Delete|Configure|Test",
"description": "what to do",
"implementation": ["step 1", "step 2"],
"acceptance": ["criteria 1", "criteria 2"]
}
]
}
CONSTRAINTS: Output ONLY valid JSON - no markdown, no explanation | Action must be one of: Create, Update, Implement, Refactor, Add, Delete, Configure, Test, Fix | Tasks must have id, title, scope, action, implementation (array), acceptance (array)
DOCUMENT CONTENT:
${fileContent}`;
// Execute Gemini CLI
const result = Bash(`ccw cli -p '${cliPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis`, { timeout: 120000 });
// Parse JSON from result (may be wrapped in markdown code block)
let jsonText = result.trim();
const jsonMatch = jsonText.match(/```(?:json)?\s*([\s\S]*?)```/);
if (jsonMatch) {
jsonText = jsonMatch[1].trim();
}
try {
const extracted = JSON.parse(jsonText);
// Normalize tasks
const tasks = (extracted.tasks || []).map((t, i) => ({
id: t.id || `T${i + 1}`,
title: t.title || 'Untitled task',
scope: t.scope || '',
action: validateAction(t.action) || 'Implement',
description: t.description || t.title,
modification_points: t.modification_points || [],
implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
test: t.test || {},
acceptance: {
criteria: Array.isArray(t.acceptance) ? t.acceptance : [t.acceptance || ''],
verification: t.verification || []
},
depends_on: t.depends_on || [],
priority: t.priority || 3
}));
return {
title: extracted.title || 'Extracted Plan',
description: extracted.summary || extracted.title,
approach: extracted.approach || '',
tasks: tasks,
metadata: {
source_type: 'markdown',
source_path: filePath,
extraction_method: 'gemini-ai'
}
};
} catch (e) {
// Provide more context for debugging
throw new Error(`E005: Failed to extract tasks from markdown. Gemini response was not valid JSON. Error: ${e.message}. Response preview: ${jsonText.substring(0, 200)}...`);
}
}
function validateAction(action) {
const validActions = ['Create', 'Update', 'Implement', 'Refactor', 'Add', 'Delete', 'Configure', 'Test', 'Fix'];
if (!action) return null;
const normalized = action.charAt(0).toUpperCase() + action.slice(1).toLowerCase();
return validActions.includes(normalized) ? normalized : null;
}
```
#### Extractor: JSON File
```javascript
function extractFromJsonFile(filePath) {
const content = Read(filePath);
const plan = JSON.parse(content);
// Detect if it's already solution format or plan format
if (plan.tasks && Array.isArray(plan.tasks)) {
// Map tasks to normalized format
const tasks = plan.tasks.map((t, i) => ({
id: t.id || `T${i + 1}`,
title: t.title,
scope: t.scope || '',
action: t.action || 'Implement',
description: t.description || t.title,
modification_points: t.modification_points || [],
implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
test: t.test || t.verification || {},
acceptance: normalizeAcceptance(t.acceptance),
depends_on: t.depends_on || [],
priority: t.priority || 3
}));
return {
title: plan.summary?.split('.')[0] || plan.title || 'JSON Plan',
description: plan.summary || plan.description,
approach: plan.approach,
tasks: tasks,
metadata: {
source_type: 'json',
source_path: filePath,
complexity: plan.complexity,
original_metadata: plan._metadata
}
};
}
throw new Error('E002: JSON file does not contain valid plan structure (missing tasks array)');
}
function normalizeAcceptance(acceptance) {
if (!acceptance) return { criteria: [], verification: [] };
if (typeof acceptance === 'object' && acceptance.criteria) return acceptance;
if (Array.isArray(acceptance)) return { criteria: acceptance, verification: [] };
return { criteria: [String(acceptance)], verification: [] };
}
```
### Phase 3: Normalize Task IDs
```javascript
function normalizeTaskIds(tasks) {
  // Map original IDs to positional T-IDs first, so depends_on stays
  // correct even when source IDs have gaps (e.g. IMPL-003, IMPL-007)
  const idMap = new Map(tasks.map((t, i) => [t.id, `T${i + 1}`]));
  return tasks.map((t, i) => ({
    ...t,
    id: `T${i + 1}`,
    depends_on: (t.depends_on || []).map(d => {
      if (idMap.has(d)) return idMap.get(d);
      // Fallback for loose references: IMPL-001, T1, "1", etc.
      const num = d.match(/\d+/)?.[0];
      return num ? `T${parseInt(num, 10)}` : d;
    })
  }));
}
```
### Phase 4: Resolve Issue (Create or Find)
```javascript
let issueId = flags.issue;
let existingSolution = null;
if (issueId) {
// Validate issue exists
let issueCheck;
try {
issueCheck = Bash(`ccw issue status ${issueId} --json 2>/dev/null`).trim();
if (!issueCheck || issueCheck === '') {
throw new Error('empty response');
}
} catch (e) {
throw new Error(`E003: Issue not found: ${issueId}`);
}
const issue = JSON.parse(issueCheck);
// Check if issue already has bound solution
if (issue.bound_solution_id && !flags.supplement) {
throw new Error(`E004: Issue ${issueId} already has bound solution (${issue.bound_solution_id}). Use --supplement to add tasks.`);
}
// Load existing solution for supplement mode
if (flags.supplement && issue.bound_solution_id) {
try {
const solResult = Bash(`ccw issue solution ${issue.bound_solution_id} --json`).trim();
existingSolution = JSON.parse(solResult);
console.log(`Loaded existing solution with ${existingSolution.tasks.length} tasks`);
} catch (e) {
throw new Error(`Failed to load existing solution: ${e.message}`);
}
}
} else {
// Create new issue via ccw issue create (auto-generates correct ID)
// Smart extraction: title from content, priority from complexity
const title = extracted.title || 'Converted Plan';
const context = extracted.description || extracted.approach || title;
// Auto-determine priority based on complexity
const complexityMap = { high: 2, medium: 3, low: 4 };
const priority = complexityMap[extracted.metadata.complexity?.toLowerCase()] || 3;
try {
// Use heredoc to avoid shell escaping issues
const createResult = Bash(`ccw issue create << 'EOF'
{
"title": ${JSON.stringify(title)},
"context": ${JSON.stringify(context)},
"priority": ${priority},
"source": "converted"
}
EOF`).trim();
// Parse result to get created issue ID
const created = JSON.parse(createResult);
issueId = created.id;
console.log(`Created issue: ${issueId} (priority: ${priority})`);
} catch (e) {
throw new Error(`Failed to create issue: ${e.message}`);
}
}
```
### Phase 5: Generate Solution
```javascript
// Generate solution ID
function generateSolutionId(issueId) {
const chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
let uid = '';
for (let i = 0; i < 4; i++) {
uid += chars[Math.floor(Math.random() * chars.length)];
}
return `SOL-${issueId}-${uid}`;
}
let solution;
const solutionId = generateSolutionId(issueId);
if (flags.supplement && existingSolution) {
// Supplement mode: merge with existing solution
const maxTaskId = existingSolution.tasks.length
? Math.max(...existingSolution.tasks.map(t => parseInt(t.id.slice(1))))
: 0;
const newTasks = extracted.tasks.map((t, i) => ({
...t,
id: `T${maxTaskId + i + 1}`
}));
solution = {
...existingSolution,
tasks: [...existingSolution.tasks, ...newTasks],
approach: (existingSolution.approach || '') + '\n\n[Supplementary] ' + (extracted.approach || ''),
updated_at: new Date().toISOString()
};
console.log(`Supplementing: ${existingSolution.tasks.length} existing + ${newTasks.length} new = ${solution.tasks.length} total tasks`);
} else {
// New solution
solution = {
id: solutionId,
description: extracted.description || extracted.title,
approach: extracted.approach,
tasks: extracted.tasks,
exploration_context: extracted.metadata.exploration_angles ? {
exploration_angles: extracted.metadata.exploration_angles
} : undefined,
analysis: {
risk: 'medium',
impact: 'medium',
complexity: extracted.metadata.complexity?.toLowerCase() || 'medium'
},
is_bound: false,
created_at: new Date().toISOString(),
_conversion_metadata: {
source_type: extracted.metadata.source_type,
source_path: extracted.metadata.source_path,
converted_at: new Date().toISOString()
}
};
}
```
### Phase 6: Confirm & Persist
```javascript
// Display preview
console.log(`
## Conversion Summary
**Issue**: ${issueId}
**Solution**: ${flags.supplement ? existingSolution.id : solutionId}
**Tasks**: ${solution.tasks.length}
**Mode**: ${flags.supplement ? 'Supplement' : 'New'}
### Tasks:
${solution.tasks.map(t => `- ${t.id}: ${t.title} [${t.action}]`).join('\n')}
`);
// Confirm if not auto mode
if (!flags.yes && !flags.y) {
const confirm = AskUserQuestion({
questions: [{
question: `Create solution for issue ${issueId} with ${solution.tasks.length} tasks?`,
header: 'Confirm',
multiSelect: false,
options: [
{ label: 'Yes, create solution', description: 'Create and bind solution' },
{ label: 'Cancel', description: 'Abort without changes' }
]
}]
});
if (!confirm.answers?.['Confirm']?.includes('Yes')) {
console.log('Cancelled.');
return;
}
}
// Persist solution (following issue-plan-agent pattern)
Bash(`mkdir -p .workflow/issues/solutions`);
const solutionFile = `.workflow/issues/solutions/${issueId}.jsonl`;
if (flags.supplement) {
// Supplement mode: update existing solution line atomically
try {
const existingContent = Read(solutionFile);
const lines = existingContent.trim().split('\n').filter(l => l);
const updatedLines = lines.map(line => {
const sol = JSON.parse(line);
if (sol.id === existingSolution.id) {
return JSON.stringify(solution);
}
return line;
});
// Atomic write: write entire content at once
Write({ file_path: solutionFile, content: updatedLines.join('\n') + '\n' });
console.log(`✓ Updated solution: ${existingSolution.id}`);
} catch (e) {
throw new Error(`Failed to update solution: ${e.message}`);
}
// Note: No need to rebind - solution is already bound to issue
} else {
// New solution: append to JSONL file (following issue-plan-agent pattern)
try {
const solutionLine = JSON.stringify(solution);
// Read existing content, append new line, write atomically
const existing = Bash(`test -f "${solutionFile}" && cat "${solutionFile}" || echo ""`).trim();
const newContent = existing ? existing + '\n' + solutionLine + '\n' : solutionLine + '\n';
Write({ file_path: solutionFile, content: newContent });
console.log(`✓ Created solution: ${solutionId}`);
} catch (e) {
throw new Error(`Failed to write solution: ${e.message}`);
}
// Bind solution to issue
try {
Bash(`ccw issue bind ${issueId} ${solutionId}`);
console.log(`✓ Bound solution to issue`);
} catch (e) {
// Cleanup: remove solution file on bind failure
try {
Bash(`rm -f "${solutionFile}"`);
} catch (cleanupError) {
// Ignore cleanup errors
}
throw new Error(`Failed to bind solution: ${e.message}`);
}
// Update issue status to planned
try {
Bash(`ccw issue update ${issueId} --status planned`);
} catch (e) {
throw new Error(`Failed to update issue status: ${e.message}`);
}
}
```
### Phase 7: Summary
```javascript
console.log(`
## Done
**Issue**: ${issueId}
**Solution**: ${flags.supplement ? existingSolution.id : solutionId}
**Tasks**: ${solution.tasks.length}
**Status**: planned
### Next Steps:
- \`/issue:queue\` → Form execution queue
- \`ccw issue status ${issueId}\` → View issue details
- \`ccw issue solution ${flags.supplement ? existingSolution.id : solutionId}\` → View solution
`);
```
## Error Handling
| Error | Code | Resolution |
|-------|------|------------|
| Source not found | E001 | Check path exists |
| Invalid source format | E002 | Verify file contains valid plan structure |
| Issue not found | E003 | Check issue ID or omit --issue to create new |
| Solution already bound | E004 | Use --supplement to add tasks |
| AI extraction failed | E005 | Check markdown structure, try simpler format |
| No tasks extracted | E006 | Source must contain at least 1 task |
## Related Commands
- `/issue:plan` - Generate solutions from issue exploration
- `/issue:queue` - Form execution queue from bound solutions
- `/issue:execute` - Execute queue with DAG parallelism
- `ccw issue status <id>` - View issue details
- `ccw issue solution <id>` - View solution details

View File

@@ -1,10 +1,14 @@
---
name: issue:discover-by-prompt
description: Discover issues from user prompt with Gemini-planned iterative multi-agent exploration. Uses ACE semantic search for context gathering and supports cross-module comparison (e.g., frontend vs backend API contracts).
argument-hint: "<prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]"
argument-hint: "[-y|--yes] <prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*), AskUserQuestion(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*), mcp__exa__search(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-continue all iterations, skip confirmations.
# Issue Discovery by Prompt
## Quick Start

View File

@@ -1,10 +1,14 @@
---
name: issue:discover
description: Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.
argument-hint: "<path-pattern> [--perspectives=bug,ux,...] [--external]"
argument-hint: "[-y|--yes] <path-pattern> [--perspectives=bug,ux,...] [--external]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*), AskUserQuestion(*), Glob(*), Grep(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-select all perspectives, skip confirmations.
# Issue Discovery Command
## Quick Start

View File

@@ -1,10 +1,14 @@
---
name: execute
description: Execute queue with DAG-based parallel orchestration (one commit per solution)
argument-hint: "--queue <queue-id> [--worktree [<existing-path>]]"
argument-hint: "[-y|--yes] --queue <queue-id> [--worktree [<existing-path>]]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm execution, use recommended settings.
# Issue Execute Command (/issue:execute)
## Overview
@@ -312,65 +316,60 @@ batch.forEach(id => updateTodo(id, 'completed'));
function dispatchExecutor(solutionId, executorType, worktreePath = null) {
// If worktree is provided, executor works in that directory
// No per-solution worktree creation - ONE worktree for entire queue
const cdCommand = worktreePath ? `cd "${worktreePath}"` : '';
// Pre-defined values (replaced at dispatch time, NOT by executor)
const SOLUTION_ID = solutionId;
const WORK_DIR = worktreePath || null;
// Build prompt without markdown code blocks to avoid escaping issues
const prompt = `
## Execute Solution ${solutionId}
${worktreePath ? `
### Step 0: Enter Queue Worktree
\`\`\`bash
cd "${worktreePath}"
\`\`\`
` : ''}
### Step 1: Get Solution (read-only)
\`\`\`bash
ccw issue detail ${solutionId}
\`\`\`
## Execute Solution: ${SOLUTION_ID}
${WORK_DIR ? `Working Directory: ${WORK_DIR}` : ''}
### Step 1: Get Solution Details
Run this command to get the full solution with all tasks:
ccw issue detail ${SOLUTION_ID}
### Step 2: Execute All Tasks Sequentially
The detail command returns a FULL SOLUTION with all tasks.
Execute each task in order (T1 → T2 → T3 → ...):
For each task:
1. Follow task.implementation steps
2. Run task.test commands
3. Verify task.acceptance criteria
(Do NOT commit after each task)
- Follow task.implementation steps
- Run task.test commands
- Verify task.acceptance criteria
- Do NOT commit after each task
### Step 3: Commit Solution (Once)
After ALL tasks pass, commit once with formatted summary:
\`\`\`bash
git add <all-modified-files>
git commit -m "[type](scope): [solution.description]
After ALL tasks pass, commit once with formatted summary.
## Solution Summary
- Solution-ID: ${solutionId}
- Tasks: T1, T2, ...
Command:
git add -A
git commit -m "<type>(<scope>): <description>
## Tasks Completed
- [T1] task1.title: action
- [T2] task2.title: action
Solution: ${SOLUTION_ID}
Tasks completed: <list task IDs>
## Files Modified
- file1.ts
- file2.ts
Changes:
- <file1>: <what changed>
- <file2>: <what changed>
## Verification
- All tests passed
- All acceptance criteria verified"
\`\`\`
Verified: all tests passed"
Replace <type> with: feat|fix|refactor|docs|test
Replace <scope> with: affected module name
Replace <description> with: brief summary from solution
### Step 4: Report Completion
\`\`\`bash
ccw issue done ${solutionId} --result '{"summary": "...", "files_modified": [...], "commit": {"hash": "...", "type": "feat"}, "tasks_completed": N}'
\`\`\`
On success, run:
ccw issue done ${SOLUTION_ID} --result '{"summary": "<brief>", "files_modified": ["<file1>", "<file2>"], "commit": {"hash": "<hash>", "type": "<type>"}, "tasks_completed": <N>}'
If any task failed:
\`\`\`bash
ccw issue done ${solutionId} --fail --reason '{"task_id": "TX", "error_type": "test_failure", "message": "..."}'
\`\`\`
On failure, run:
ccw issue done ${SOLUTION_ID} --fail --reason '{"task_id": "<TX>", "error_type": "<test_failure|build_error|other>", "message": "<error details>"}'
**Note**: Do NOT cleanup worktree after this solution. Worktree is shared by all solutions in the queue.
### Important Notes
- Do NOT cleanup worktree - it is shared by all solutions in the queue
- Replace all <placeholder> values with actual values from your execution
`;
// For CLI tools, pass --cd to set working directory

View File

@@ -0,0 +1,382 @@
---
name: from-brainstorm
description: Convert brainstorm session ideas into issue with executable solution for parallel-dev-cycle
argument-hint: "SESSION=\"<session-id>\" [--idea=<index>] [--auto] [-y|--yes]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-select highest-scored idea, skip confirmations, create issue directly.
# Issue From-Brainstorm Command (/issue:from-brainstorm)
## Overview
Bridge command that converts **brainstorm-with-file** session output into executable **issue + solution** for parallel-dev-cycle consumption.
**Core workflow**: Load Session → Select Idea → Convert to Issue → Generate Solution → Bind & Ready
**Input sources**:
- **synthesis.json** - Main brainstorm results with top_ideas
- **perspectives.json** - Multi-CLI perspectives (creative/pragmatic/systematic)
- **.brainstorming/** - Synthesis artifacts (clarifications, enhancements from role analyses)
**Output**:
- **Issue** (ISS-YYYYMMDD-NNN) - Full context with clarifications
- **Solution** (SOL-{issue-id}-{uid}) - Structured tasks for parallel-dev-cycle
## Quick Reference
```bash
# Interactive mode - select idea, confirm before creation
/issue:from-brainstorm SESSION="BS-rate-limiting-2025-01-28"
# Pre-select idea by index
/issue:from-brainstorm SESSION="BS-auth-system-2025-01-28" --idea=0
# Auto mode - select highest scored, no confirmations
/issue:from-brainstorm SESSION="BS-caching-2025-01-28" --auto -y
```
## Arguments
| Argument | Required | Type | Default | Description |
|----------|----------|------|---------|-------------|
| SESSION | Yes | String | - | Session ID or path to `.workflow/.brainstorm/BS-xxx` |
| --idea | No | Integer | - | Pre-select idea by index (0-based) |
| --auto | No | Flag | false | Auto-select highest-scored idea |
| -y, --yes | No | Flag | false | Skip all confirmations |
## Data Structures
### Issue Schema (Output)
```typescript
interface Issue {
id: string; // ISS-YYYYMMDD-NNN
title: string; // From idea.title
status: 'planned'; // Auto-set after solution binding
priority: number; // 1-5 (derived from idea.score)
context: string; // Full description with clarifications
source: 'brainstorm';
labels: string[]; // ['brainstorm', perspective, feasibility]
// Structured fields
expected_behavior: string; // From key_strengths
actual_behavior: string; // From main_challenges
affected_components: string[]; // Extracted from description
_brainstorm_metadata: {
session_id: string;
idea_score: number;
novelty: number;
feasibility: string;
clarifications_count: number;
};
}
```
### Solution Schema (Output)
```typescript
interface Solution {
id: string; // SOL-{issue-id}-{4-char-uid}
description: string; // idea.title
approach: string; // idea.description
tasks: Task[]; // Generated from idea.next_steps
analysis: {
risk: 'low' | 'medium' | 'high';
impact: 'low' | 'medium' | 'high';
complexity: 'low' | 'medium' | 'high';
};
is_bound: boolean; // true
created_at: string;
bound_at: string;
}
interface Task {
id: string; // T1, T2, T3...
title: string; // Actionable task name
scope: string; // design|implementation|testing|documentation
action: string; // Implement|Design|Research|Test|Document
description: string;
implementation: string[]; // Step-by-step guide
acceptance: {
criteria: string[]; // What defines success
verification: string[]; // How to verify
};
priority: number; // 1-5
depends_on: string[]; // Task dependencies
}
```
## Execution Flow
```
Phase 1: Session Loading
├─ Validate session path
├─ Load synthesis.json (required)
├─ Load perspectives.json (optional - multi-CLI insights)
├─ Load .brainstorming/** (optional - synthesis artifacts)
└─ Validate top_ideas array exists
Phase 2: Idea Selection
├─ Auto mode: Select highest scored idea
├─ Pre-selected: Use --idea=N index
└─ Interactive: Display table, ask user to select
Phase 3: Enrich Issue Context
├─ Base: idea.description + key_strengths + main_challenges
├─ Add: Relevant clarifications (Requirements/Architecture/Feasibility)
├─ Add: Multi-perspective insights (creative/pragmatic/systematic)
└─ Add: Session metadata (session_id, completion date, clarification count)
Phase 4: Create Issue
├─ Generate issue data with enriched context
├─ Calculate priority from idea.score (0-10 → 1-5)
├─ Create via: ccw issue create (heredoc for JSON)
└─ Returns: ISS-YYYYMMDD-NNN
Phase 5: Generate Solution Tasks
├─ T1: Research & Validate (if main_challenges exist)
├─ T2: Design & Specification (if key_strengths exist)
├─ T3+: Implementation tasks (from idea.next_steps)
└─ Each task includes: implementation steps + acceptance criteria
Phase 6: Bind Solution
├─ Write solution to .workflow/issues/solutions/{issue-id}.jsonl
├─ Bind via: ccw issue bind {issue-id} {solution-id}
├─ Update issue status to 'planned'
└─ Returns: SOL-{issue-id}-{uid}
Phase 7: Next Steps
└─ Offer: Form queue | Convert another idea | View details | Done
```
## Context Enrichment Logic
### Base Context (Always Included)
- **Description**: `idea.description`
- **Why This Idea**: `idea.key_strengths[]`
- **Challenges to Address**: `idea.main_challenges[]`
- **Implementation Steps**: `idea.next_steps[]`
### Enhanced Context (If Available)
**From Synthesis Artifacts** (`.brainstorming/*/analysis*.md`):
- Extract clarifications matching categories: Requirements, Architecture, Feasibility
- Format: `**{Category}** ({role}): {question} → {answer}`
- Limit: Top 3 most relevant
**From Perspectives** (`perspectives.json`):
- **Creative**: First insight from `perspectives.creative.insights[0]`
- **Pragmatic**: First blocker from `perspectives.pragmatic.blockers[0]`
- **Systematic**: First pattern from `perspectives.systematic.patterns[0]`
**Session Metadata**:
- Session ID, Topic, Completion Date
- Clarifications count (if synthesis artifacts loaded)
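A sketch of the clarification formatting above, assuming clarifications are already parsed into `{category, role, question, answer}` records (the parsing itself is out of scope here):
```javascript
// Keep only the relevant categories, cap at the top 3, and apply
// the "**{Category}** ({role}): {question} → {answer}" convention
function formatClarifications(clarifications) {
  const relevant = ['Requirements', 'Architecture', 'Feasibility'];
  return clarifications
    .filter(c => relevant.includes(c.category))
    .slice(0, 3)
    .map(c => `**${c.category}** (${c.role}): ${c.question} → ${c.answer}`)
    .join('\n');
}
```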
## Task Generation Strategy
### Task 1: Research & Validation
**Trigger**: `idea.main_challenges.length > 0`
- **Title**: "Research & Validate Approach"
- **Scope**: design
- **Action**: Research
- **Implementation**: Investigate blockers, review similar implementations, validate with team
- **Acceptance**: Blockers documented, feasibility assessed, approach validated
### Task 2: Design & Specification
**Trigger**: `idea.key_strengths.length > 0`
- **Title**: "Design & Create Specification"
- **Scope**: design
- **Action**: Design
- **Implementation**: Create design doc, define success criteria, plan phases
- **Acceptance**: Design complete, metrics defined, plan outlined
### Task 3+: Implementation Tasks
**Trigger**: `idea.next_steps[]`
- **Title**: From `next_steps[i]` (max 60 chars)
- **Scope**: Inferred from keywords (test→testing, api→backend, ui→frontend)
- **Action**: Detected from verbs (implement, create, update, fix, test, document)
- **Implementation**: Execute step + follow design + write tests
- **Acceptance**: Step implemented + tests passing + code reviewed
### Fallback Task
**Trigger**: No tasks generated from above
- **Title**: `idea.title`
- **Scope**: implementation
- **Action**: Implement
- **Generic implementation + acceptance criteria**
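A hedged sketch of the scope/action inference for Task 3+; the keyword lists are illustrative assumptions, not the command's actual tables:
```javascript
// Derive title, scope, and action for a next_steps[] entry
function inferTaskMeta(step) {
  const s = step.toLowerCase();
  const scope = /test/.test(s) ? 'testing'
              : /\bapi\b/.test(s) ? 'backend'
              : /\bui\b|frontend/.test(s) ? 'frontend'
              : 'implementation';
  const action = /document/.test(s) ? 'Document'
               : /test/.test(s) ? 'Test'
               : /fix/.test(s) ? 'Fix'
               : /update/.test(s) ? 'Update'
               : /create/.test(s) ? 'Create'
               : 'Implement';
  return { title: step.slice(0, 60), scope, action };
}
```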
## Priority Calculation
### Issue Priority (1-5)
```
idea.score: 0-10
priority = max(1, min(5, ceil((10 - score) / 2)))
Examples:
score 8-10 → priority 1 (critical)
score 6-7 → priority 2 (high)
score 4-5 → priority 3 (medium)
score 2-3 → priority 4 (low)
score 0-1 → priority 5 (lowest)
```
### Task Priority (1-5)
- Research task: 1 (highest)
- Design task: 2
- Implementation tasks: 3 by default, decrement for later tasks
- Testing/documentation: 4-5
### Complexity Analysis
```
risk: main_challenges.length > 2 ? 'high' : 'medium'
impact: score >= 8 ? 'high' : score >= 6 ? 'medium' : 'low'
complexity: (main_challenges > 3 OR tasks > 5) ? 'high'
            : (tasks > 3) ? 'medium' : 'low'
```
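A JavaScript sketch tying the priority formula and analysis rules together (thresholds copied from above; the function name is illustrative):
```javascript
function deriveIssueAnalysis(idea, taskCount) {
  const score = idea.score ?? 5;                        // 0-10 scale
  const challenges = (idea.main_challenges || []).length;
  return {
    priority: Math.max(1, Math.min(5, Math.ceil((10 - score) / 2))),
    risk: challenges > 2 ? 'high' : 'medium',
    impact: score >= 8 ? 'high' : score >= 6 ? 'medium' : 'low',
    complexity: (challenges > 3 || taskCount > 5) ? 'high'
              : taskCount > 3 ? 'medium' : 'low'
  };
}
```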
## CLI Integration
### Issue Creation
```bash
# Uses heredoc to avoid shell escaping
ccw issue create << 'EOF'
{
"title": "...",
"context": "...",
"priority": 3,
"source": "brainstorm",
"labels": ["brainstorm", "creative", "feasibility-high"],
...
}
EOF
```
### Solution Binding
```bash
# Append solution to JSONL file
echo '{"id":"SOL-xxx","tasks":[...]}' >> .workflow/issues/solutions/{issue-id}.jsonl
# Bind to issue
ccw issue bind {issue-id} {solution-id}
# Update status
ccw issue update {issue-id} --status planned
```
## Error Handling
| Error | Message | Resolution |
|-------|---------|------------|
| Session not found | synthesis.json missing | Check session ID, list available sessions |
| No ideas | top_ideas array empty | Complete brainstorm workflow first |
| Invalid idea index | Index out of range | Check valid range 0 to N-1 |
| Issue creation failed | ccw issue create error | Verify CLI endpoint working |
| Solution binding failed | Bind error | Check issue exists, retry |
## Examples
### Interactive Mode
```bash
/issue:from-brainstorm SESSION="BS-rate-limiting-2025-01-28"
# Output:
# | # | Title | Score | Feasibility |
# |---|-------|-------|-------------|
# | 0 | Token Bucket Algorithm | 8.5 | High |
# | 1 | Sliding Window Counter | 7.2 | Medium |
# | 2 | Fixed Window | 6.1 | High |
# User selects: #0
# Result:
# ✓ Created issue: ISS-20250128-001
# ✓ Created solution: SOL-ISS-20250128-001-ab3d
# ✓ Bound solution to issue
# → Next: /issue:queue
```
### Auto Mode
```bash
/issue:from-brainstorm SESSION="BS-caching-2025-01-28" --auto
# Result:
# Auto-selected: Redis Cache Layer (Score: 9.2/10)
# ✓ Created issue: ISS-20250128-002
# ✓ Solution with 4 tasks
# → Status: planned
```
## Integration Flow
```
brainstorm-with-file
├─ synthesis.json
├─ perspectives.json
└─ .brainstorming/** (optional)
/issue:from-brainstorm ◄─── This command
├─ ISS-YYYYMMDD-NNN (enriched issue)
└─ SOL-{issue-id}-{uid} (structured solution)
/issue:queue
/parallel-dev-cycle
RA → EP → CD → VAS
```
## Session Files Reference
### Input Files
```
.workflow/.brainstorm/BS-{slug}-{date}/
├── synthesis.json # REQUIRED - Top ideas with scores
├── perspectives.json # OPTIONAL - Multi-CLI insights
├── brainstorm.md # Reference only
└── .brainstorming/ # OPTIONAL - Synthesis artifacts
├── system-architect/
│ └── analysis.md # Contains clarifications + enhancements
├── api-designer/
│ └── analysis.md
└── ...
```
### Output Files
```
.workflow/issues/
├── solutions/
│ └── ISS-YYYYMMDD-001.jsonl # Created solution (JSONL)
└── (managed by ccw issue CLI)
```
## Related Commands
- `/workflow:brainstorm-with-file` - Generate brainstorm sessions
- `/workflow:brainstorm:synthesis` - Add clarifications to brainstorm
- `/issue:new` - Create issues from GitHub or text
- `/issue:plan` - Generate solutions via exploration
- `/issue:queue` - Form execution queue
- `/issue:execute` - Execute with parallel-dev-cycle
- `ccw issue status <id>` - View issue
- `ccw issue solution <id>` - View solution

View File

@@ -1,10 +1,14 @@
---
name: new
description: Create structured issue from GitHub URL or text description
argument-hint: "<github-url | text-description> [--priority 1-5]"
argument-hint: "[-y|--yes] <github-url | text-description> [--priority 1-5]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*), mcp__ace-tool__search_context(*)
---
## Auto Mode
When `--yes` or `-y`: Skip clarification questions, create issue with inferred details.
# Issue New Command (/issue:new)
## Core Principle

View File

@@ -1,10 +1,14 @@
---
name: plan
description: Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)
argument-hint: "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3] "
argument-hint: "[-y|--yes] --all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]"
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Bash(*), Read(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-bind solutions without confirmation, use recommended settings.
# Issue Plan Command (/issue:plan)
## Overview
@@ -55,11 +59,11 @@ Unified planning command using **issue-plan-agent** that combines exploration an
## Execution Process
```
Phase 1: Issue Loading
Phase 1: Issue Loading & Intelligent Grouping
├─ Parse input (single, comma-separated, or --all-pending)
├─ Fetch issue metadata (ID, title, tags)
├─ Validate issues exist (create if needed)
└─ Group by similarity (shared tags or title keywords, max 3 per batch)
└─ Intelligent grouping via Gemini (semantic similarity, max 3 per batch)
Phase 2: Unified Explore + Plan (issue-plan-agent)
├─ Launch issue-plan-agent per batch
@@ -119,46 +123,11 @@ if (useAllPending) {
}
// Note: Agent fetches full issue content via `ccw issue status <id> --json`
// Semantic grouping via Gemini CLI (max 4 issues per group)
async function groupBySimilarityGemini(issues) {
const issueSummaries = issues.map(i => ({
id: i.id, title: i.title, tags: i.tags
}));
// Intelligent grouping: Analyze issues by title/tags, group semantically similar ones
// Strategy: Same module/component, related bugs, feature clusters
// Constraint: Max ${batchSize} issues per batch
const prompt = `
PURPOSE: Group similar issues by semantic similarity for batch processing; maximize within-group coherence; max 4 issues per group
TASK: • Analyze issue titles/tags semantically • Identify functional/architectural clusters • Assign each issue to one group
MODE: analysis
CONTEXT: Issue metadata only
EXPECTED: JSON with groups array, each containing max 4 issue_ids, theme, rationale
CONSTRAINTS: Each issue in exactly one group | Max 4 issues per group | Balance group sizes
INPUT:
${JSON.stringify(issueSummaries, null, 2)}
OUTPUT FORMAT:
{"groups":[{"group_id":1,"theme":"...","issue_ids":["..."],"rationale":"..."}],"ungrouped":[]}
`;
const taskId = Bash({
command: `ccw cli -p "${prompt}" --tool gemini --mode analysis`,
run_in_background: true, timeout: 600000
});
const output = TaskOutput({ task_id: taskId, block: true });
// Extract JSON from potential markdown code blocks
function extractJsonFromMarkdown(text) {
const jsonMatch = text.match(/```json\s*\n([\s\S]*?)\n```/) ||
text.match(/```\s*\n([\s\S]*?)\n```/);
return jsonMatch ? jsonMatch[1] : text;
}
const result = JSON.parse(extractJsonFromMarkdown(output));
return result.groups.map(g => g.issue_ids.map(id => issues.find(i => i.id === id)));
}
const batches = await groupBySimilarityGemini(issues);
console.log(`Processing ${issues.length} issues in ${batches.length} batch(es) (max 4 issues/agent)`);
console.log(`Processing ${issues.length} issues in ${batches.length} batch(es)`);
TodoWrite({
todos: batches.map((_, i) => ({
@@ -207,7 +176,9 @@ ${issueList}
- Add explicit verification steps to prevent same failure mode
6. **If github_url exists**: Add final task to comment on GitHub issue
7. Write solution to: .workflow/issues/solutions/{issue-id}.jsonl
8. Single solution → auto-bind; Multiple → return for selection
8. **CRITICAL - Binding Decision**:
- Single solution → **MUST execute**: ccw issue bind <issue-id> <solution-id>
- Multiple solutions → Return pending_selection only (no bind)
### Failure-Aware Planning Rules
- **Extract failure patterns**: Parse issue.feedback where type='failure' and stage='execute'
@@ -265,35 +236,55 @@ for (let i = 0; i < agentTasks.length; i += MAX_PARALLEL) {
}
agentResults.push(summary); // Store for Phase 3 conflict aggregation
// Verify binding for bound issues (agent should have executed bind)
for (const item of summary.bound || []) {
console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks)`);
const status = JSON.parse(Bash(`ccw issue status ${item.issue_id} --json`).trim());
if (status.bound_solution_id === item.solution_id) {
console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks)`);
} else {
// Fallback: agent failed to bind, execute here
Bash(`ccw issue bind ${item.issue_id} ${item.solution_id}`);
console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks) [recovered]`);
}
}
// Collect and notify pending selections
// Collect pending selections for Phase 3
for (const pending of summary.pending_selection || []) {
console.log(`${pending.issue_id}: ${pending.solutions.length} solutions → awaiting selection`);
pendingSelections.push(pending);
}
if (summary.conflicts?.length > 0) {
console.log(`⚠ Conflicts: ${summary.conflicts.length} detected (will resolve in Phase 3)`);
}
updateTodo(`Plan batch ${batchIndex + 1}`, 'completed');
}
}
```
### Phase 3: Conflict Resolution & Solution Selection
### Phase 3: Solution Selection (if pending)
**Conflict Handling:**
- Collect `conflicts` from all agent results
- Low/Medium severity → auto-resolve with `recommended_resolution`
- High severity → use `AskUserQuestion` to let user choose resolution
**Multi-Solution Selection:**
- If `pending_selection` contains issues with multiple solutions:
  - Use `AskUserQuestion` to present options (solution ID + task count + description)
  - Extract selected solution ID from user response
  - Verify solution file exists, recover from payload if missing
  - Bind selected solution via `ccw issue bind <issue-id> <solution-id>`
```javascript
// Handle multi-solution issues
for (const pending of pendingSelections) {
if (pending.solutions.length === 0) continue;
const options = pending.solutions.slice(0, 4).map(sol => ({
label: `${sol.id} (${sol.task_count} tasks)`,
description: sol.description || sol.approach || 'No description'
}));
const answer = AskUserQuestion({
questions: [{
question: `Issue ${pending.issue_id}: which solution to bind?`,
header: pending.issue_id,
options: options,
multiSelect: false
}]
});
const selected = answer[Object.keys(answer)[0]];
if (!selected || selected === 'Other') continue;
const solId = selected.split(' ')[0];
Bash(`ccw issue bind ${pending.issue_id} ${solId}`);
console.log(`${pending.issue_id}: ${solId} bound`);
}
```
### Phase 4: Summary


@@ -1,10 +1,14 @@
---
name: queue
description: Form execution queue from bound solutions using issue-queue-agent (solution-level)
argument-hint: "[--queues <n>] [--issue <id>]"
argument-hint: "[-y|--yes] [--queues <n>] [--issue <id>]"
allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
---
# Issue Queue Command (/issue:queue)
## Auto Mode
When `--yes` or `-y`: Auto-confirm queue formation, use recommended conflict resolutions.
## Overview
@@ -28,12 +32,13 @@ Queue formation command using **issue-queue-agent** that analyzes all bound solu
| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| List issues (brief) | `ccw issue list --status planned --brief` | `Read('issues.jsonl')` |
| **Batch solutions (NEW)** | `ccw issue solutions --status planned --brief` | Loop `ccw issue solution <id>` |
| List queue (brief) | `ccw issue queue --brief` | `Read('queues/*.json')` |
| Read issue details | `ccw issue status <id> --json` | `Read('issues.jsonl')` |
| Get next item | `ccw issue next --json` | `Read('queues/*.json')` |
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
| Sync from queue | `ccw issue update --from-queue` | Direct file edit |
| Read solution (single) | `ccw issue solution <id> --brief` | `Read('solutions/*.jsonl')` |
**Output Options**:
- `--brief`: JSON with minimal fields (id, status, counts)
@@ -131,24 +136,23 @@ Phase 7: Active Queue Check & Decision (REQUIRED)
### Phase 1: Solution Loading & Distribution
**Data Loading:**
- Use `ccw issue solutions --status planned --brief` to get all planned issues with solutions in **one call**
- Returns: Array of `{ issue_id, solution_id, is_bound, task_count, files_touched[], priority }`
- If no bound solutions found → display message, suggest `/issue:plan`
**Build Solution Objects:**
```javascript
// Single CLI call replaces N individual queries
const result = Bash(`ccw issue solutions --status planned --brief`).trim();
const solutions = result ? JSON.parse(result) : [];
if (solutions.length === 0) {
console.log('No bound solutions found. Run /issue:plan first.');
return;
}
// solutions already in correct format:
// { issue_id, solution_id, is_bound, task_count, files_touched[], priority }
```
**Multi-Queue Distribution** (if `--queues > 1`):


@@ -1,687 +0,0 @@
---
name: code-map-memory
description: "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)"
argument-hint: "\"feature-keyword\" [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Code Flow Mapping Generator
## Overview
**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates code flow analysis to specialized cli-explore-agent. Orchestrator transforms agent's JSON analysis into Mermaid documentation.
**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase.
**Execution Paths**:
- **Full Path**: All 3 phases (no existing codemap OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 3 (existing codemap found AND no `--regenerate` flag)
- **Phase 3 Always Executes**: SKILL index is always generated or updated
**Agent Responsibility** (cli-explore-agent):
- Deep code flow analysis using dual-source strategy (Bash + Gemini CLI)
- Returns structured JSON with architecture, functions, data flow, conditionals, patterns
- NO file writing - analysis only
**Orchestrator Responsibility**:
- Provides feature keyword and analysis scope to agent
- Transforms agent's JSON into Mermaid-enriched markdown documentation
- Writes all files (5 docs + metadata.json + SKILL.md)
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **Feature-Specific SKILL**: Each feature creates independent `.claude/skills/codemap-{feature}/` package
3. **Specialized Agent**: Phase 2a uses cli-explore-agent for professional code analysis (Deep Scan mode)
4. **Orchestrator Documentation**: Phase 2b transforms agent JSON into Mermaid markdown files
5. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
6. **No User Prompts**: Never ask user questions or wait for input between phases
7. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
8. **Multi-Level Detail**: Generate 4 levels: architecture → function → data → conditional
---
## 3-Phase Execution
### Phase 1: Parse Feature Keyword & Check Existing
**Goal**: Normalize feature keyword, check existing codemap, prepare for analysis
**Step 1: Parse Feature Keyword**
```bash
# Get feature keyword from argument
FEATURE_KEYWORD="$1"
# Normalize: lowercase, spaces to hyphens
normalized_feature=$(echo "$FEATURE_KEYWORD" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | tr '_' '-')
# Example: "User Authentication" → "user-authentication"
# Example: "支付处理" → "支付处理" (keep non-ASCII)
```
**Step 2: Set Tool Preference**
```bash
# Default to gemini unless --tool specified
TOOL="${tool_flag:-gemini}"
```
**Step 3: Check Existing Codemap**
```bash
# Define codemap directory
CODEMAP_DIR=".claude/skills/codemap-${normalized_feature}"
# Check if codemap exists
bash(test -d "$CODEMAP_DIR" && echo "exists" || echo "not_exists")
# Count existing files
bash(find "$CODEMAP_DIR" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Step 4: Skip Decision**
```javascript
if (existing_files > 0 && !regenerate_flag) {
SKIP_GENERATION = true
message = "Codemap already exists, skipping Phase 2. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
bash(rm -rf "$CODEMAP_DIR")
SKIP_GENERATION = false
message = "Regenerating codemap from scratch."
} else {
SKIP_GENERATION = false
message = "No existing codemap found, generating new code flow analysis."
}
```
**Output Variables**:
- `FEATURE_KEYWORD`: Original feature keyword
- `normalized_feature`: Normalized feature name for directory
- `CODEMAP_DIR`: `.claude/skills/codemap-{feature}`
- `TOOL`: CLI tool to use (gemini or qwen)
- `SKIP_GENERATION`: Boolean - whether to skip Phase 2
**TodoWrite**:
- If skipping: Mark phase 1 completed, phase 2 completed, phase 3 in_progress
- If not skipping: Mark phase 1 completed, phase 2 in_progress
---
### Phase 2: Code Flow Analysis & Documentation Generation
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Use cli-explore-agent for professional code analysis, then orchestrator generates Mermaid documentation
**Architecture**: Phase 2a (Agent Analysis) → Phase 2b (Orchestrator Documentation)
---
#### Phase 2a: cli-explore-agent Analysis
**Purpose**: Leverage specialized cli-explore-agent for deep code flow analysis
**Agent Task Specification**:
```
Task(
subagent_type: "cli-explore-agent",
description: "Analyze code flow: {FEATURE_KEYWORD}",
prompt: "
Perform Deep Scan analysis for feature: {FEATURE_KEYWORD}
**Analysis Mode**: deep-scan (Dual-source: Bash structural scan + Gemini semantic analysis)
**Analysis Objectives**:
1. **Module Architecture**: Identify high-level module organization, interactions, and entry points
2. **Function Call Chains**: Trace execution paths, call sequences, and parameter flows
3. **Data Transformations**: Map data structure changes and transformation stages
4. **Conditional Paths**: Document decision trees, branches, and error handling strategies
5. **Design Patterns**: Discover architectural patterns and extract design intent
**Scope**:
- Feature: {FEATURE_KEYWORD}
- CLI Tool: {TOOL} (gemini-2.5-pro or qwen coder-model)
- File Discovery: MCP Code Index (preferred) + rg fallback
- Target: 5-15 most relevant files
**MANDATORY FIRST STEP**:
Read: ~/.claude/workflows/cli-templates/schemas/codemap-json-schema.json
**Output**: Return JSON following schema exactly. NO FILE WRITING - return JSON analysis only.
**Critical Requirements**:
- Use Deep Scan mode: Bash (Phase 1 - precise locations) + Gemini CLI (Phase 2 - semantic understanding) + Synthesis (Phase 3 - merge with attribution)
- Focus exclusively on {FEATURE_KEYWORD} feature flow
- Include file:line references for ALL findings
- Extract design intent from code structure and comments
- NO FILE WRITING - return JSON analysis only
- Handle tool failures gracefully (Gemini → Qwen fallback, MCP → rg fallback)
"
)
```
**Agent Output**: JSON analysis result with architecture, functions, data flow, conditionals, and patterns
---
#### Phase 2b: Orchestrator Documentation Generation
**Purpose**: Transform cli-explore-agent JSON into Mermaid-enriched documentation
**Input**: Agent's JSON analysis result
**Process**:
1. **Parse Agent Analysis**:
```javascript
const analysis = JSON.parse(agentResult)
const { feature, files_analyzed, architecture, function_calls, data_flow, conditional_logic, design_patterns } = analysis
```
2. **Generate Mermaid Diagrams from Structured Data**:
**a) architecture-flow.md** (~3K tokens):
```javascript
// Convert architecture.modules + architecture.interactions → Mermaid graph TD
const architectureMermaid = `
graph TD
${architecture.modules.map(m => ` ${m.name}[${m.name}]`).join('\n')}
${architecture.interactions.map(i => ` ${i.from} -->|${i.type}| ${i.to}`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/architecture-flow.md`,
content: `---
feature: ${feature}
level: architecture
detail: high-level module interactions
---
# Architecture Flow: ${feature}
## Overview
${architecture.overview}
## Module Architecture
${architecture.modules.map(m => `### ${m.name}\n- **File**: ${m.file}\n- **Role**: ${m.responsibility}\n- **Dependencies**: ${m.dependencies.join(', ')}`).join('\n\n')}
## Flow Diagram
\`\`\`mermaid
${architectureMermaid}
\`\`\`
## Key Interactions
${architecture.interactions.map(i => `- **${i.from} → ${i.to}**: ${i.description}`).join('\n')}
## Entry Points
${architecture.entry_points.map(e => `- **${e.function}** (${e.file}): ${e.description}`).join('\n')}
`
})
```
**b) function-calls.md** (~5K tokens):
```javascript
// Convert function_calls.sequences → Mermaid sequenceDiagram
const sequenceMermaid = `
sequenceDiagram
${function_calls.sequences.map(s => ` ${s.from}->>${s.to}: ${s.method}`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/function-calls.md`,
content: `---
feature: ${feature}
level: function
detail: function-level call sequences
---
# Function Call Chains: ${feature}
## Call Sequence Diagram
\`\`\`mermaid
${sequenceMermaid}
\`\`\`
## Detailed Call Chains
${function_calls.call_chains.map(chain => `
### Chain ${chain.chain_id}: ${chain.description}
${chain.sequence.map(fn => `- **${fn.function}** (${fn.file})\n - Calls: ${fn.calls.join(', ')}`).join('\n')}
`).join('\n')}
## Parameters & Returns
${function_calls.sequences.map(s => `- **${s.method}** → Returns: ${s.returns || 'void'}`).join('\n')}
`
})
```
**c) data-flow.md** (~4K tokens):
```javascript
// Convert data_flow.transformations → Mermaid flowchart LR
const dataFlowMermaid = `
flowchart LR
${data_flow.transformations.map((t, i) => ` Stage${i}[${t.from}] -->|${t.transformer}| Stage${i+1}[${t.to}]`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/data-flow.md`,
content: `---
feature: ${feature}
level: data
detail: data structure transformations
---
# Data Flow: ${feature}
## Data Transformation Diagram
\`\`\`mermaid
${dataFlowMermaid}
\`\`\`
## Data Structures
${data_flow.structures.map(s => `### ${s.name} (${s.stage})\n\`\`\`json\n${JSON.stringify(s.shape, null, 2)}\n\`\`\``).join('\n\n')}
## Transformations
${data_flow.transformations.map(t => `- **${t.from} → ${t.to}** via \`${t.transformer}\` (${t.file})`).join('\n')}
`
})
```
**d) conditional-paths.md** (~4K tokens):
```javascript
// Convert conditional_logic.branches → Mermaid flowchart TD
const conditionalMermaid = `
flowchart TD
Start[Entry Point]
${conditional_logic.branches.map((b, i) => `
Start --> Check${i}{${b.condition}}
Check${i} -->|Yes| Path${i}A[${b.true_path}]
Check${i} -->|No| Path${i}B[${b.false_path}]
`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/conditional-paths.md`,
content: `---
feature: ${feature}
level: conditional
detail: decision trees and error paths
---
# Conditional Paths: ${feature}
## Decision Tree
\`\`\`mermaid
${conditionalMermaid}
\`\`\`
## Branch Conditions
${conditional_logic.branches.map(b => `- **${b.condition}** (${b.file})\n - True: ${b.true_path}\n - False: ${b.false_path}`).join('\n')}
## Error Handling
${conditional_logic.error_handling.map(e => `- **${e.error_type}**: Handler \`${e.handler}\` (${e.file}) - Recovery: ${e.recovery}`).join('\n')}
`
})
```
**e) complete-flow.md** (~8K tokens):
```javascript
// Integrate all Mermaid diagrams
Write({
file_path: `${CODEMAP_DIR}/complete-flow.md`,
content: `---
feature: ${feature}
level: complete
detail: integrated multi-level view
---
# Complete Flow: ${feature}
## Integrated Flow Diagram
\`\`\`mermaid
graph TB
subgraph Architecture
${architecture.modules.map(m => ` ${m.name}[${m.name}]`).join('\n')}
end
subgraph "Function Calls"
${function_calls.call_chains[0]?.sequence.map(fn => ` ${fn.function}`).join('\n') || ''}
end
subgraph "Data Flow"
${data_flow.structures.map(s => ` ${s.name}[${s.name}]`).join('\n')}
end
\`\`\`
## Complete Trace
[Comprehensive end-to-end documentation combining all analysis layers]
## Design Patterns Identified
${design_patterns.map(p => `- **${p.pattern}** in ${p.location}: ${p.description}`).join('\n')}
## Recommendations
${analysis.recommendations.map(r => `- ${r}`).join('\n')}
## Cross-References
- [Architecture Flow](./architecture-flow.md) - High-level module structure
- [Function Calls](./function-calls.md) - Detailed call chains
- [Data Flow](./data-flow.md) - Data transformation stages
- [Conditional Paths](./conditional-paths.md) - Decision trees and error handling
`
})
```
3. **Write metadata.json**:
```javascript
Write({
file_path: `${CODEMAP_DIR}/metadata.json`,
content: JSON.stringify({
feature: feature,
normalized_name: normalized_feature,
generated_at: new Date().toISOString(),
tool_used: analysis.analysis_metadata.tool_used,
files_analyzed: files_analyzed.map(f => f.file),
analysis_summary: {
total_files: files_analyzed.length,
modules_traced: architecture.modules.length,
functions_traced: function_calls.call_chains.reduce((sum, c) => sum + c.sequence.length, 0),
patterns_discovered: design_patterns.length
}
}, null, 2)
})
```
4. **Report Phase 2 Completion**:
```
Phase 2 Complete: Code flow analysis and documentation generated
- Agent Analysis: cli-explore-agent with {TOOL}
- Files Analyzed: {count}
- Documentation Generated: 5 markdown files + metadata.json
- Location: {CODEMAP_DIR}
```
**Completion Criteria**:
- cli-explore-agent task completed successfully with JSON result
- 5 documentation files written with valid Mermaid diagrams
- metadata.json written with analysis summary
- All files properly formatted and cross-referenced
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
---
### Phase 3: Generate SKILL.md Index
**Note**: This phase **ALWAYS executes** - generates or updates the SKILL index.
**Goal**: Read generated flow documentation and create SKILL.md index with progressive loading
**Steps**:
1. **Verify Generated Files**:
```bash
bash(find "{CODEMAP_DIR}" -name "*.md" -type f | sort)
```
2. **Read metadata.json**:
```javascript
Read({CODEMAP_DIR}/metadata.json)
// Extract: feature, normalized_name, files_analyzed, analysis_summary
```
3. **Read File Headers** (optional, first 30 lines):
```javascript
Read({CODEMAP_DIR}/architecture-flow.md, limit: 30)
Read({CODEMAP_DIR}/function-calls.md, limit: 30)
// Extract overview and diagram counts
```
4. **Generate SKILL.md Index**:
Template structure:
```yaml
---
name: codemap-{normalized_feature}
description: Code flow mapping for {FEATURE_KEYWORD} feature (located at {project_path}). Load this SKILL when analyzing, tracing, or understanding {FEATURE_KEYWORD} execution flow, especially when no relevant context exists in memory.
version: 1.0.0
generated_at: {ISO_TIMESTAMP}
---
# Code Flow Map: {FEATURE_KEYWORD}
## Feature: `{FEATURE_KEYWORD}`
**Analysis Date**: {DATE}
**Tool Used**: {TOOL}
**Files Analyzed**: {COUNT}
## Progressive Loading
### Level 0: Quick Overview (~2K tokens)
- [Architecture Flow](./architecture-flow.md) - High-level module interactions
### Level 1: Core Flows (~10K tokens)
- [Architecture Flow](./architecture-flow.md) - Module architecture
- [Function Calls](./function-calls.md) - Function call chains
### Level 2: Complete Analysis (~20K tokens)
- [Architecture Flow](./architecture-flow.md)
- [Function Calls](./function-calls.md)
- [Data Flow](./data-flow.md) - Data transformations
### Level 3: Deep Dive (~30K tokens)
- [Architecture Flow](./architecture-flow.md)
- [Function Calls](./function-calls.md)
- [Data Flow](./data-flow.md)
- [Conditional Paths](./conditional-paths.md) - Branches and error handling
- [Complete Flow](./complete-flow.md) - Integrated comprehensive view
## Usage
Load this SKILL package when:
- Analyzing {FEATURE_KEYWORD} implementation
- Tracing execution flow for debugging
- Understanding code dependencies
- Planning refactoring or enhancements
## Analysis Summary
- **Modules Traced**: {modules_traced}
- **Functions Traced**: {functions_traced}
- **Files Analyzed**: {total_files}
## Mermaid Diagrams Included
- Architecture flow diagram (graph TD)
- Function call sequence diagram (sequenceDiagram)
- Data transformation flowchart (flowchart LR)
- Conditional decision tree (flowchart TD)
- Complete integrated diagram (graph TB)
```
5. **Write SKILL.md**:
```javascript
Write({
file_path: `{CODEMAP_DIR}/SKILL.md`,
content: generatedIndexMarkdown
})
```
**Completion Criteria**:
- SKILL.md index written
- All documentation files verified
- Progressive loading levels (0-3) properly structured
- Mermaid diagram references included
**TodoWrite**: Mark phase 3 completed
**Final Report**:
```
Code Flow Mapping Complete
Feature: {FEATURE_KEYWORD}
Location: .claude/skills/codemap-{normalized_feature}/
Files Generated:
- SKILL.md (index)
- architecture-flow.md (with Mermaid diagram)
- function-calls.md (with Mermaid sequence diagram)
- data-flow.md (with Mermaid flowchart)
- conditional-paths.md (with Mermaid decision tree)
- complete-flow.md (with integrated Mermaid diagram)
- metadata.json
Analysis:
- Files analyzed: {count}
- Modules traced: {count}
- Functions traced: {count}
Usage: Skill(command: "codemap-{normalized_feature}")
```
---
## Implementation Details
### TodoWrite Patterns
**Initialization** (Before Phase 1):
```javascript
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "in_progress", "activeForm": "Parsing feature keyword"},
{"content": "Agent analyzes code flow and generates files", "status": "pending", "activeForm": "Analyzing code flow"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
]})
```
**Full Path** (SKIP_GENERATION = false):
```javascript
// After Phase 1
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "in_progress", ...},
{"content": "Generate SKILL.md index", "status": "pending", ...}
]})
// After Phase 2
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
// After Phase 3
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "completed", ...}
]})
```
**Skip Path** (SKIP_GENERATION = true):
```javascript
// After Phase 1 (skip Phase 2)
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...}, // Skipped
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
```
### Execution Flow
**Full Path**:
```
User → TodoWrite Init → Phase 1 (parse) → Phase 2 (agent analyzes) → Phase 3 (write index) → Report
```
**Skip Path**:
```
User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
```
### Error Handling
**Phase 1 Errors**:
- Empty feature keyword: Report error, ask user to provide feature description
- Invalid characters: Normalize and continue
**Phase 2 Errors (Agent)**:
- Agent task fails: Retry once, report if fails again
- No files discovered: Warn user, ask for more specific feature keyword
- CLI failures: Agent handles internally with retries
- Invalid Mermaid syntax: Agent validates before writing
**Phase 3 Errors**:
- Write failures: Report which files failed
- Missing files: Note in SKILL.md, suggest regeneration
---
## Parameters
```bash
/memory:code-map-memory "feature-keyword" [--regenerate] [--tool <gemini|qwen>]
```
**Arguments**:
- **"feature-keyword"**: Feature or flow to analyze (required)
- Examples: `"user authentication"`, `"payment processing"`, `"数据导入流程"`
- Can be English, Chinese, or mixed
- Spaces and underscores normalized to hyphens
- **--regenerate**: Force regenerate existing codemap (deletes and recreates)
- **--tool**: CLI tool for analysis (default: gemini)
- `gemini`: Comprehensive flow analysis with gemini-2.5-pro
- `qwen`: Alternative with coder-model
---
## Examples
**Generated File Structure** (for all examples):
```
.claude/skills/codemap-{feature}/
├── SKILL.md # Index (Phase 3)
├── architecture-flow.md # Agent (Phase 2) - High-level flow
├── function-calls.md # Agent (Phase 2) - Function chains
├── data-flow.md # Agent (Phase 2) - Data transformations
├── conditional-paths.md # Agent (Phase 2) - Branches & errors
├── complete-flow.md # Agent (Phase 2) - Integrated view
└── metadata.json # Agent (Phase 2)
```
### Example 1: User Authentication Flow
```bash
/memory:code-map-memory "user authentication"
```
**Workflow**:
1. Phase 1: Normalizes to "user-authentication", checks existing codemap
2. Phase 2: Agent discovers auth-related files, executes CLI analysis, generates 5 flow docs with Mermaid
3. Phase 3: Generates SKILL.md index with progressive loading
**Output**: `.claude/skills/codemap-user-authentication/` with 6 files + metadata
### Example 2: Regenerate with Qwen
```bash
/memory:code-map-memory "payment processing" --regenerate --tool qwen
```
**Workflow**:
1. Phase 1: Deletes existing codemap due to --regenerate
2. Phase 2: Agent uses qwen with coder-model for fresh analysis
3. Phase 3: Generates updated SKILL.md
---
## Architecture
```
code-map-memory (orchestrator)
├─ Phase 1: Parse & Check (bash commands, skip decision)
├─ Phase 2: Code Analysis & Documentation (skippable)
│ ├─ Phase 2a: cli-explore-agent Analysis
│ │ └─ Deep Scan: Bash structural + Gemini semantic → JSON
│ └─ Phase 2b: Orchestrator Documentation
│ └─ Transform JSON → 5 Mermaid markdown files + metadata.json
└─ Phase 3: Write SKILL.md (index generation, always runs)
Output: .claude/skills/codemap-{feature}/
```


@@ -1,615 +0,0 @@
---
name: docs
description: Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs
argument-hint: "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]"
---
# Documentation Workflow (/memory:docs)
## Overview
Lightweight planner that analyzes project structure, decomposes documentation work into tasks, and generates execution plans. Does NOT generate documentation content itself - delegates to doc-generator agent.
**Execution Strategy**:
- **Dynamic Task Grouping**: Level 1 tasks grouped by top-level directories with document count limit
- **Primary constraint**: Each task generates ≤10 documents (API.md + README.md count)
- **Optimization goal**: Prefer grouping 2 top-level directories per task for context sharing
- **Conflict resolution**: If 2 dirs exceed 10 docs, reduce to 1 dir/task; if 1 dir exceeds 10 docs, split by subdirectories
- **Context benefit**: Same-task directories analyzed together via single Gemini call
- **Parallel Execution**: Multiple Level 1 tasks execute concurrently for faster completion
- **Pre-computed Analysis**: Phase 2 performs unified analysis once, stored in `.process/` for reuse
- **Efficient Data Loading**: All existing docs loaded once in Phase 2, shared across tasks
**Path Mirroring**: Documentation structure mirrors source code under `.workflow/docs/{project_name}/`
- Example: `my_app/src/core/` → `.workflow/docs/my_app/src/core/API.md`
**Two Execution Modes**:
- **Default (Agent Mode)**: CLI analyzes in `pre_analysis` (MODE=analysis), agent writes docs
- **--cli-execute (CLI Mode)**: CLI generates docs in `implementation_approach` (MODE=write), agent executes CLI commands
## Path Mirroring Strategy
**Principle**: Documentation structure **mirrors** source code structure under project-specific directory.
| Source Path | Project Name | Documentation Path |
|------------|--------------|-------------------|
| `my_app/src/core/` | `my_app` | `.workflow/docs/my_app/src/core/API.md` |
| `my_app/src/modules/auth/` | `my_app` | `.workflow/docs/my_app/src/modules/auth/API.md` |
| `another_project/lib/utils/` | `another_project` | `.workflow/docs/another_project/lib/utils/API.md` |
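As a rough illustration, the mirroring rule can be expressed as a small helper (a sketch only; the function name is illustrative, and the API.md/README.md split follows the code-vs-navigation folder rule described under Phase 4):
```javascript
// Sketch: compute mirrored documentation paths for a source folder
function docPathsFor(projectName, relPath, folderType) {
  // e.g. docPathsFor('my_app', 'src/core', 'code')
  //   → ['.workflow/docs/my_app/src/core/API.md', '.workflow/docs/my_app/src/core/README.md']
  const base = `.workflow/docs/${projectName}/${relPath.replace(/^\.\//, '').replace(/\/$/, '')}`;
  return folderType === 'code'
    ? [`${base}/API.md`, `${base}/README.md`]
    : [`${base}/README.md`]; // navigation folders get README.md only
}
```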
## Parameters
```bash
/memory:docs [path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]
```
- **path**: Source directory to analyze (default: current directory)
- Specifies the source code directory to be documented
- Documentation is generated in a separate `.workflow/docs/{project_name}/` directory at the workspace root, **not** within the source `path` itself
- The source path's structure is mirrored within the project-specific documentation folder
- Example: analyzing `src/modules` produces documentation at `.workflow/docs/{project_name}/src/modules/`
- **--mode**: Documentation generation mode (default: full)
- `full`: Complete documentation (modules + README + ARCHITECTURE + EXAMPLES + HTTP API)
- `partial`: Module documentation only (API.md + README.md)
- **--tool**: CLI tool selection (default: gemini)
- `gemini`: Comprehensive documentation, pattern recognition
- `qwen`: Architecture analysis, system design focus
- `codex`: Implementation validation, code quality
- **--cli-execute**: Enable CLI-based documentation generation (optional)
## Planning Workflow
### Phase 1: Initialize Session
```bash
# Get target path, project name, and root
bash(pwd && basename "$(pwd)" && (git rev-parse --show-toplevel 2>/dev/null || pwd) && date +%Y%m%d-%H%M%S)
```
```javascript
// Create docs session (type: docs)
SlashCommand(command="/workflow:session:start --type docs --new \"{project_name}-docs-{timestamp}\"")
// Parse output to get sessionId
```
```bash
# Update workflow-session.json with docs-specific fields
bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' .workflow/active/{sessionId}/workflow-session.json > tmp.json && mv tmp.json .workflow/active/{sessionId}/workflow-session.json)
```
### Phase 2: Analyze Structure
**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack.
**Commands** (collect data with simple bash):
```bash
# 1. Run folder analysis
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}')
# 2. Get top-level directories (first 2 path levels)
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}' | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
# 3. Find existing docs (if directory exists)
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null; fi)
# 4. Read existing docs content (if files exist)
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
```
**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:
```json
{
"metadata": {
"generated_at": "2025-11-03T16:57:30.469669",
"project_name": "project_name",
"project_root": "/path/to/project"
},
"folder_analysis": [
{"path": "./src/core", "type": "code", "code_count": 5, "dirs_count": 2}
],
"top_level_dirs": ["src/modules", "lib/core"],
"existing_docs": {
"file_list": [".workflow/docs/project/src/core/API.md"],
"content": "... existing docs content ..."
},
"unified_analysis": [],
"statistics": {
"total": 15,
"code": 8,
"navigation": 7,
"top_level": 3
}
}
```
**Then** use **Edit tool** to update `workflow-session.json` adding analysis field.
**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).
**Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.
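As a rough illustration, the smart filter could be a predicate built from those patterns (a sketch; the regex list mirrors the globs above, and root-level config detection is simplified away):
```javascript
// Sketch: smart-filter predicate over candidate folder paths
const SKIP_PATTERNS = [
  /\/tests?\//,             // **/test/**
  /\.test\./,               // **/*.test.*
  /node_modules|\/dist\//,  // build output
  /\/vendor\//              // vendor directories
];
const keepFolder = (path) => !SKIP_PATTERNS.some(rx => rx.test(path));
```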
### Phase 3: Detect Update Mode
**Commands**:
```bash
# Count existing docs from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
```
**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
- Add `"update_mode": "update"` if count > 0, else `"create"`
- Add `"existing_docs": <count>`
### Phase 4: Decompose Tasks
**Task Hierarchy** (Dynamic based on document count):
```
Small Projects (total ≤10 docs):
Level 1: IMPL-001 (all directories in single task, shared context)
Level 2: IMPL-002 (README, full mode only)
Level 3: IMPL-003 (ARCHITECTURE+EXAMPLES), IMPL-004 (HTTP API, optional)
Medium Projects (Example: 7 top-level dirs, 18 total docs):
Step 1: Count docs per top-level dir
├─ dir1: 3 docs, dir2: 4 docs → Group 1 (7 docs)
├─ dir3: 5 docs, dir4: 3 docs → Group 2 (8 docs)
├─ dir5: 2 docs → Group 3 (2 docs, can add more)
Step 2: Create tasks with ≤10 docs constraint
Level 1: IMPL-001 to IMPL-003 (parallel groups)
├─ IMPL-001: Group 1 (dir1 + dir2, 7 docs, shared context)
├─ IMPL-002: Group 2 (dir3 + dir4, 8 docs, shared context)
└─ IMPL-003: Group 3 (remaining dirs, ≤10 docs)
Level 2: IMPL-004 (README, depends on Level 1, full mode only)
Level 3: IMPL-005 (ARCHITECTURE+EXAMPLES), IMPL-006 (HTTP API, optional)
Large Projects (single dir >10 docs):
Step 1: Detect oversized directory
└─ src/modules/: 15 subdirs → 30 docs (exceeds limit)
Step 2: Split by subdirectories
Level 1: IMPL-001 to IMPL-003 (split oversized dir)
├─ IMPL-001: src/modules/ subdirs 1-5 (10 docs)
├─ IMPL-002: src/modules/ subdirs 6-10 (10 docs)
└─ IMPL-003: src/modules/ subdirs 11-15 (10 docs)
```
**Grouping Algorithm** (sketched in code after this list):
1. Count total docs for each top-level directory
2. Try grouping 2 directories (optimization for context sharing)
3. If group exceeds 10 docs, split to 1 dir/task
4. If single dir exceeds 10 docs, split by subdirectories
5. Create parallel Level 1 tasks with ≤10 docs each
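A minimal sketch of this greedy pairing, assuming per-directory doc counts have already been derived from `folder_analysis` (the function name and input shape are illustrative, not part of the command):
```javascript
// Sketch: greedy pairing of top-level dirs under a 10-doc budget (max 2 dirs/group)
function groupDirectories(dirDocCounts, maxDocs = 10) {
  const groups = [];
  const dirs = [...dirDocCounts]; // [{ dir, docCount }]
  while (dirs.length > 0) {
    const first = dirs.shift();
    if (first.docCount > maxDocs) {
      // Oversized dir: caller must split it by subdirectories instead
      groups.push({ directories: [first.dir], doc_count: first.docCount, split_required: true });
      continue;
    }
    // Prefer pairing with another dir if the combined total still fits
    const partnerIdx = dirs.findIndex(d => first.docCount + d.docCount <= maxDocs);
    if (partnerIdx >= 0) {
      const partner = dirs.splice(partnerIdx, 1)[0];
      groups.push({ directories: [first.dir, partner.dir], doc_count: first.docCount + partner.docCount });
    } else {
      groups.push({ directories: [first.dir], doc_count: first.docCount });
    }
  }
  return groups;
}
```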
**Commands**:
```bash
# 1. Get top-level directories from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
# 2. Get mode from workflow-session.json
bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
# 3. Check for HTTP API
bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo "NO_API")
```
**Data Processing**:
1. Count documents for each top-level directory (from folder_analysis):
- Code folders: 2 docs each (API.md + README.md)
- Navigation folders: 1 doc each (README.md only)
2. Apply grouping algorithm with ≤10 docs constraint:
- Try grouping 2 directories, calculate total docs
- If total ≤10 docs: create group
- If total >10 docs: split to 1 dir/group or subdivide
- If single dir >10 docs: split by subdirectories
3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
```json
"groups": {
"count": 3,
"assignments": [
{"group_id": "001", "directories": ["src/modules", "src/utils"], "doc_count": 5},
{"group_id": "002", "directories": ["lib/core"], "doc_count": 6},
{"group_id": "003", "directories": ["lib/helpers"], "doc_count": 3}
]
}
```
**Task ID Calculation**:
```bash
group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
readme_id=$((group_count + 1)) # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))
```
### Phase 5: Generate Task JSONs
**CLI Strategy**:
| Mode | cli_execute | Placement | CLI MODE | Approval Flag | Agent Role |
|------|-------------|-----------|----------|---------------|------------|
| **Agent** | false | pre_analysis | analysis | (none) | Generate docs in implementation_approach |
| **CLI** | true | implementation_approach | write | --mode write | Execute CLI commands, validate output |
**Command Patterns**:
- Gemini/Qwen: `ccw cli -p "..." --tool gemini --mode analysis --cd dir`
- CLI Mode: `ccw cli -p "..." --tool gemini --mode write --cd dir`
- Codex: `ccw cli -p "..." --tool codex --mode write --cd dir`
**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
2. Read group assignments from doc-planning-data.json
3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)
## Task Templates
### Level 1: Module Trees Group Task (Unified)
**Execution Model**: Each task processes assigned directory group (max 2 directories) using pre-analyzed data from Phase 2.
```json
{
"id": "IMPL-${group_number}",
"title": "Document Module Trees Group ${group_number}",
"status": "pending",
"meta": {
"type": "docs-tree-group",
"agent": "@doc-generator",
"tool": "gemini",
"cli_execute": false,
"group_number": "${group_number}",
"total_groups": "${total_groups}"
},
"context": {
"requirements": [
"Process directories from group ${group_number} in doc-planning-data.json",
"Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
"Code folders: API.md + README.md; Navigation folders: README.md only",
"Use pre-analyzed data from Phase 2 (no redundant analysis)"
],
"focus_paths": ["${group_dirs_from_json}"],
"precomputed_data": {
"phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
}
},
"flow_control": {
"pre_analysis": [
{
"step": "load_precomputed_data",
"action": "Load Phase 2 analysis and extract group directories",
"commands": [
"bash(cat ${session_dir}/.process/doc-planning-data.json)",
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
],
"output_to": "phase2_context",
"note": "Single JSON file contains all Phase 2 analysis results"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate documentation for assigned directory group",
"description": "Process directories in Group ${group_number} using pre-analyzed data",
"modification_points": [
"Read group directories from [phase2_context].groups.assignments[${group_number}].directories",
"For each directory: parse folder types from folder_analysis, parse structure from unified_analysis",
"Map source_path to .workflow/docs/${project_name}/{path}",
"Generate API.md for code folders, README.md for all folders",
"Preserve user modifications from [phase2_context].existing_docs.content"
],
"logic_flow": [
"phase2 = parse([phase2_context])",
"dirs = phase2.groups.assignments[${group_number}].directories",
"for dir in dirs:",
" folder_info = find(dir, phase2.folder_analysis)",
" outline = find(dir, phase2.unified_analysis)",
" if folder_info.type == 'code': generate API.md + README.md",
" elif folder_info.type == 'navigation': generate README.md only",
" write to .workflow/docs/${project_name}/{dir}/"
],
"depends_on": [],
"output": "group_module_docs"
}
],
"target_files": [
".workflow/docs/${project_name}/*/API.md",
".workflow/docs/${project_name}/*/README.md"
]
}
}
```
**CLI Execute Mode Note**: When `cli_execute=true`, add Step 2 in `implementation_approach`:
```json
{
"step": 2,
"title": "Batch generate documentation via CLI",
"command": "ccw cli -p 'PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure' --tool gemini --mode write --cd ${dirs_from_group}",
"depends_on": [1],
"output": "generated_docs"
}
```
### Level 2: Project README Task
**Task ID**: `IMPL-${readme_id}` (where `readme_id = group_count + 1`)
**Dependencies**: Depends on all Level 1 tasks completing.
```json
{
"id": "IMPL-${readme_id}",
"title": "Generate Project README",
"status": "pending",
"depends_on": ["IMPL-001", "...", "IMPL-${group_count}"],
"meta": {"type": "docs", "agent": "@doc-generator", "tool": "gemini", "cli_execute": false},
"flow_control": {
"pre_analysis": [
{
"step": "load_existing_readme",
"command": "bash(cat .workflow/docs/${project_name}/README.md 2>/dev/null || echo 'No existing README')",
"output_to": "existing_readme"
},
{
"step": "load_module_docs",
"command": "bash(find .workflow/docs/${project_name} -type f -name '*.md' ! -path '.workflow/docs/${project_name}/README.md' ! -path '.workflow/docs/${project_name}/ARCHITECTURE.md' ! -path '.workflow/docs/${project_name}/EXAMPLES.md' ! -path '.workflow/docs/${project_name}/api/*' | xargs cat)",
"output_to": "all_module_docs"
},
{
"step": "analyze_project",
"command": "bash(ccw cli -p \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\" --tool gemini --mode analysis)",
"output_to": "project_outline"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate project README",
"description": "Generate project README with navigation links while preserving user modifications",
"modification_points": [
"Parse [project_outline] and [all_module_docs]",
"Generate README structure with navigation links",
"Preserve [existing_readme] user modifications"
],
"logic_flow": ["Parse data", "Generate README with navigation", "Preserve modifications"],
"depends_on": [],
"output": "project_readme"
}
],
"target_files": [".workflow/docs/${project_name}/README.md"]
}
}
```
### Level 3: Architecture & Examples Documentation Task
**Task ID**: `IMPL-${arch_id}` (where `arch_id = group_count + 2`)
**Dependencies**: Depends on Level 2 (Project README).
```json
{
"id": "IMPL-${arch_id}",
"title": "Generate Architecture & Examples Documentation",
"status": "pending",
"depends_on": ["IMPL-${readme_id}"],
"meta": {"type": "docs", "agent": "@doc-generator", "tool": "gemini", "cli_execute": false},
"flow_control": {
"pre_analysis": [
{"step": "load_existing_docs", "command": "bash(cat .workflow/docs/${project_name}/{ARCHITECTURE,EXAMPLES}.md 2>/dev/null || echo 'No existing docs')", "output_to": "existing_arch_examples"},
{"step": "load_all_docs", "command": "bash(cat .workflow/docs/${project_name}/README.md && find .workflow/docs/${project_name} -type f -name '*.md' ! -path '*/README.md' ! -path '*/ARCHITECTURE.md' ! -path '*/EXAMPLES.md' ! -path '*/api/*' | xargs cat)", "output_to": "all_docs"},
{"step": "analyze_architecture", "command": "bash(ccw cli -p \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\" --tool gemini --mode analysis)", "output_to": "arch_examples_outline"}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate architecture and examples documentation",
"modification_points": [
"Parse [arch_examples_outline] and [all_docs]",
"Generate ARCHITECTURE.md (system design, patterns)",
"Generate EXAMPLES.md (code snippets, usage)",
"Preserve [existing_arch_examples] modifications"
],
"depends_on": [],
"output": "arch_examples_docs"
}
],
"target_files": [".workflow/docs/${project_name}/ARCHITECTURE.md", ".workflow/docs/${project_name}/EXAMPLES.md"]
}
}
```
### Level 4: HTTP API Documentation Task (Optional)
**Task ID**: `IMPL-${api_id}` (where `api_id = group_count + 3`)
**Dependencies**: Depends on Level 3.
```json
{
"id": "IMPL-${api_id}",
"title": "Generate HTTP API Documentation",
"status": "pending",
"depends_on": ["IMPL-${arch_id}"],
"meta": {"type": "docs", "agent": "@doc-generator", "tool": "gemini", "cli_execute": false},
"flow_control": {
"pre_analysis": [
{"step": "discover_api", "command": "bash(rg 'router\\.| @(Get|Post)' -g '*.{ts,js}')", "output_to": "endpoint_discovery"},
{"step": "load_existing_api", "command": "bash(cat .workflow/docs/${project_name}/api/README.md 2>/dev/null || echo 'No existing API docs')", "output_to": "existing_api_docs"},
{"step": "analyze_api", "command": "bash(ccw cli -p \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\" --tool gemini --mode analysis)", "output_to": "api_outline"}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate HTTP API documentation",
"modification_points": [
"Parse [api_outline] and [endpoint_discovery]",
"Document endpoints, request/response formats",
"Preserve [existing_api_docs] modifications"
],
"depends_on": [],
"output": "api_docs"
}
],
"target_files": [".workflow/docs/${project_name}/api/README.md"]
}
}
```
## Session Structure
**Unified Structure** (single JSON replaces multiple text files):
```
.workflow/active/
└── WFS-docs-{timestamp}/
├── workflow-session.json # Session metadata
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
│ └── doc-planning-data.json # All Phase 2 analysis data (replaces 7+ files)
└── .task/
├── IMPL-001.json # Small: all modules | Large: group 1
├── IMPL-00N.json # (Large only: groups 2-N)
├── IMPL-{N+1}.json # README (full mode)
├── IMPL-{N+2}.json # ARCHITECTURE+EXAMPLES (full mode)
└── IMPL-{N+3}.json # HTTP API (optional)
```
**doc-planning-data.json Structure**:
```json
{
"metadata": {
"generated_at": "2025-11-03T16:41:06+08:00",
"project_name": "Claude_dms3",
"project_root": "/d/Claude_dms3"
},
"folder_analysis": [
{"path": "./src/core", "type": "code", "code_count": 5, "dirs_count": 2},
{"path": "./src/utils", "type": "navigation", "code_count": 0, "dirs_count": 4}
],
"top_level_dirs": ["src/modules", "src/utils", "lib/core"],
"existing_docs": {
"file_list": [".workflow/docs/project/src/core/API.md"],
"content": "... concatenated existing docs ..."
},
"unified_analysis": [
{"module_path": "./src/core", "outline_summary": "Core functionality"}
],
"groups": {
"count": 4,
"assignments": [
{"group_id": "001", "directories": ["src/modules", "src/utils"], "doc_count": 6},
{"group_id": "002", "directories": ["lib/core", "lib/helpers"], "doc_count": 7}
]
},
"statistics": {
"total": 15,
"code": 8,
"navigation": 7,
"top_level": 3
}
}
```
**Workflow Session Structure** (workflow-session.json):
```json
{
"session_id": "WFS-docs-{timestamp}",
"project": "{project_name} documentation",
"status": "planning",
"timestamp": "2024-01-20T14:30:22+08:00",
"path": ".",
"target_path": "/path/to/project",
"project_root": "/path/to/project",
"project_name": "{project_name}",
"mode": "full",
"tool": "gemini",
"cli_execute": false,
"update_mode": "update",
"existing_docs": 5,
"analysis": {
"total": "15",
"code": "8",
"navigation": "7",
"top_level": "3"
}
}
```
## Generated Documentation
**Structure mirrors project source directories under project-specific folder**:
```
.workflow/docs/
└── {project_name}/ # Project-specific root
├── src/ # Mirrors src/ directory
│ ├── modules/
│ │ ├── README.md # Navigation
│ │ ├── auth/
│ │ │ ├── API.md # API signatures
│ │ │ ├── README.md # Module docs
│ │ │ └── middleware/
│ │ │ ├── API.md
│ │ │ └── README.md
│ │ └── api/
│ │ ├── API.md
│ │ └── README.md
│ └── utils/
│ └── README.md
├── lib/ # Mirrors lib/ directory
│ └── core/
│ ├── API.md
│ └── README.md
├── README.md # Project root
├── ARCHITECTURE.md # System design
├── EXAMPLES.md # Usage examples
└── api/ # Optional
└── README.md # HTTP API reference
```
## Execution Commands
```bash
# Execute entire workflow (auto-discovers active session)
/workflow:execute
# Or specify session
/workflow:execute --resume-session="WFS-docs-yyyymmdd-hhmmss"
# Individual task execution
/task:execute IMPL-001
```
## Template Reference
**Available Templates** (`~/.claude/workflows/cli-templates/prompts/documentation/`):
- `api.txt`: Code API (Part A) + HTTP API (Part B)
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders with subdirectories
- `project-readme.txt`: Project overview, getting started, navigation
- `project-architecture.txt`: System structure, module map, design patterns
- `project-examples.txt`: End-to-end usage examples
## Execution Mode Summary
| Mode | CLI Placement | CLI MODE | Approval Flag | Agent Role |
|------|---------------|----------|---------------|------------|
| **Agent (default)** | pre_analysis | analysis | (none) | Generates documentation content |
| **CLI (--cli-execute)** | implementation_approach | write | --mode write | Executes CLI commands, validates output |
**Execution Flow**:
- **Phase 2**: Unified analysis once, results in `.process/`
- **Phase 4**: Dynamic grouping (max 2 dirs per group)
- **Level 1**: Parallel processing for module tree groups
- **Level 2+**: Sequential execution for project-level docs
## Related Commands
- `/workflow:execute` - Execute documentation tasks
- `/workflow:status` - View task progress
- `/workflow:session:complete` - Mark session complete


@@ -1,182 +0,0 @@
---
name: load-skill-memory
description: Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords
argument-hint: "[skill_name] \"task intent description\""
allowed-tools: Bash(*), Read(*), Skill(*)
---
# Memory Load SKILL Command (/memory:load-skill-memory)
## 1. Overview
The `memory:load-skill-memory` command **activates SKILL package** (auto-detect from task or manual specification) and intelligently loads documentation based on user's task intent. The system automatically determines which documentation files to read based on the intent description.
**Core Philosophy**:
- **Flexible Activation**: Auto-detect skill from task description/paths, or user explicitly specifies
- **Intent-Driven Loading**: System analyzes task intent to determine documentation scope
- **Intelligent Selection**: Automatically chooses appropriate documentation level and modules
- **Direct Context Loading**: Loads selected documentation into conversation memory
**When to Use**:
- Manually activate a known SKILL package for a specific task
- Load SKILL context when system hasn't auto-triggered it
- Force reload SKILL documentation with specific intent focus
**Note**: Normal SKILL activation happens automatically via description triggers or path mentions (system extracts skill name from file paths for intelligent triggering). Use this command only when manual activation is needed.
## 2. Parameters
- `[skill_name]` (Optional): Name of SKILL package to activate
- If omitted: System auto-detects from task description or file paths
- If specified: Direct activation of named SKILL package
- Example: `my_project`, `api_service`
- Must match directory name under `.claude/skills/`
- `"task intent description"` (Required): Description of what you want to do
- Used for both: auto-detection (if skill_name omitted) and documentation scope selection
- **Analysis tasks**: "分析builder pattern实现", "理解参数系统架构"
- **Modification tasks**: "修改workflow逻辑", "增强thermal template功能"
- **Learning tasks**: "学习接口设计模式", "了解测试框架使用"
- **With paths**: "修改D:\projects\my_project\src\auth.py的认证逻辑" (auto-extracts `my_project`)
## 3. Execution Flow
### Step 1: Determine SKILL Name (if not provided)
**Auto-Detection Strategy** (when skill_name parameter is omitted):
1. **Path Extraction**: Scan task description for file paths
- Extract potential project names from path segments
- Example: `"修改D:\projects\my_project\src\auth.py"` → extracts `my_project`
2. **Keyword Matching**: Match task keywords against SKILL descriptions
- Search for project-specific terms, domain keywords
3. **Validation**: Check if extracted name matches `.claude/skills/{skill_name}/`
**Result**: Either uses provided skill_name or auto-detected name for activation
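A sketch of the path-extraction half of this strategy (the path regex and segment-probing loop are assumptions; real detection also matches keywords against SKILL descriptions):
```javascript
// Sketch: derive a candidate skill name from a file path mentioned in the task
function detectSkillName(taskDescription) {
  const pathMatch = taskDescription.match(/[A-Za-z]:\\[^\s"']+|\/[^\s"']+/); // Windows or POSIX path
  if (!pathMatch) return null;
  const segments = pathMatch[0].split(/[\\/]/).filter(Boolean);
  // Probe segments from most specific to least, validating against .claude/skills/
  for (const seg of segments.reverse()) {
    const exists = Bash(`test -d .claude/skills/${seg} && echo exists || echo missing`).trim();
    if (exists === 'exists') return seg;
  }
  return null;
}
```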
### Step 2: Activate SKILL and Analyze Intent
**Activate SKILL Package**:
```javascript
Skill(command: "${skill_name}") // Uses provided or auto-detected name
```
**What Happens After Activation**:
1. If SKILL exists in memory: System reads `.claude/skills/${skill_name}/SKILL.md`
2. If SKILL not found in memory: Error - SKILL package doesn't exist
3. SKILL description triggers are loaded into memory
4. Progressive loading mechanism becomes available
5. Documentation structure is now accessible
**Intent Analysis**:
Based on task intent description, system determines:
- **Action type**: analyzing, modifying, learning
- **Scope**: specific module, architecture overview, complete system
- **Depth**: quick reference, detailed API, full documentation
### Step 3: Intelligent Documentation Loading
**Loading Strategy**:
The system automatically selects documentation based on intent keywords:
1. **Quick Understanding** ("了解", "快速理解", "什么是"):
- Load: Level 0 (README.md only, ~2K tokens)
- Use case: Quick overview of capabilities
2. **Specific Module Analysis** ("分析XXX模块", "理解XXX实现"):
- Load: Module-specific README.md + API.md (~5K tokens)
- Use case: Deep dive into specific component
3. **Architecture Review** ("架构", "设计模式", "整体结构"):
- Load: README.md + ARCHITECTURE.md (~10K tokens)
- Use case: System design understanding
4. **Implementation/Modification** ("修改", "增强", "实现"):
- Load: Relevant module docs + EXAMPLES.md (~15K tokens)
- Use case: Code modification with examples
5. **Comprehensive Learning** ("学习", "完整了解", "深入理解"):
- Load: Level 3 (All documentation, ~40K tokens)
- Use case: Complete system mastery
**Documentation Loaded into Memory**:
After loading, the selected documentation content is available in conversation memory for subsequent operations.
## 4. Usage Examples
### Example 1: Manual Specification
**User Command**:
```bash
/memory:load-skill-memory my_project "修改认证模块增加OAuth支持"
```
**Execution**:
```javascript
// Step 1: Use provided skill_name
skill_name = "my_project" // Directly from parameter
// Step 2: Activate SKILL
Skill(command: "my_project")
// Step 3: Intent Analysis
Keywords: ["修改", "认证模块", "增加", "OAuth"]
Action: modifying (implementation)
Scope: auth module + examples
// Load documentation based on intent
Read(.workflow/docs/my_project/auth/README.md)
Read(.workflow/docs/my_project/auth/API.md)
Read(.workflow/docs/my_project/EXAMPLES.md)
```
### Example 2: Auto-Detection from Path
**User Command**:
```bash
/memory:load-skill-memory "修改D:\projects\my_project\src\services\api.py的接口逻辑"
```
**Execution**:
```javascript
// Step 1: Auto-detect skill_name from path
Path detected: "D:\projects\my_project\src\services\api.py"
Extracted: "my_project"
Validated: .claude/skills/my_project/ exists
skill_name = "my_project"
// Step 2: Activate SKILL
Skill(command: "my_project")
// Step 3: Intent Analysis
Keywords: ["修改", "services", "接口逻辑"]
Action: modifying (implementation)
Scope: services module + examples
// Load documentation based on intent
Read(.workflow/docs/my_project/services/README.md)
Read(.workflow/docs/my_project/services/API.md)
Read(.workflow/docs/my_project/EXAMPLES.md)
```
## 5. Intent Keyword Mapping
**Quick Reference**:
- **Triggers**: "了解", "快速", "什么是", "简介"
- **Loads**: README.md only (~2K)
**Module-Specific**:
- **Triggers**: "XXX模块", "XXX组件", "分析XXX"
- **Loads**: Module README + API (~5K)
**Architecture**:
- **Triggers**: "架构", "设计", "整体结构", "系统设计"
- **Loads**: README + ARCHITECTURE (~10K)
**Implementation**:
- **Triggers**: "修改", "增强", "实现", "开发", "集成"
- **Loads**: Relevant module + EXAMPLES (~15K)
**Comprehensive**:
- **Triggers**: "完整", "深入", "全面", "学习整个"
- **Loads**: All documentation (~40K)
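The mapping above could be encoded as an ordered rule table checked most-comprehensive first (a sketch; keyword lists are abbreviated and the `{module}` placeholder stands for the module resolved from the intent):
```javascript
// Sketch: pick a documentation load set from intent keywords (first match wins)
const INTENT_RULES = [
  { keywords: ['完整', '深入', '全面', '学习整个'], load: ['**/*.md'] },                                              // ~40K
  { keywords: ['修改', '增强', '实现', '开发', '集成'], load: ['{module}/README.md', '{module}/API.md', 'EXAMPLES.md'] }, // ~15K
  { keywords: ['架构', '设计', '整体结构', '系统设计'], load: ['README.md', 'ARCHITECTURE.md'] },                        // ~10K
  { keywords: ['模块', '组件', '分析'], load: ['{module}/README.md', '{module}/API.md'] },                              // ~5K
  { keywords: ['了解', '快速', '什么是', '简介'], load: ['README.md'] }                                                // ~2K
];
function selectDocs(intent) {
  const rule = INTENT_RULES.find(r => r.keywords.some(k => intent.includes(k)));
  return rule ? rule.load : ['README.md']; // default to quick overview
}
```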


@@ -1,525 +0,0 @@
---
name: skill-memory
description: "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)"
argument-hint: "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*)
---
# Memory SKILL Package Generator
## Orchestrator Role
**Pure Orchestrator**: Execute documentation generation workflow, then generate SKILL.md index. Does NOT create task JSON files.
**Auto-Continue Workflow**: This command runs **fully autonomously** once triggered. Each phase completes and automatically triggers the next phase without user interaction.
**Execution Paths**:
- **Full Path**: All 4 phases (no existing docs OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 4 (existing docs found AND no `--regenerate` flag)
- **Phase 4 Always Executes**: SKILL.md index is never skipped, always generated or updated
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **No Task JSON**: This command does not create task JSON files - delegates to /memory:docs
3. **Parse Every Output**: Extract required data from each command output (session_id, task_count, file paths)
4. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
5. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
6. **Direct Generation**: Phase 4 directly generates SKILL.md using Write tool
7. **No Manual Steps**: User should never be prompted for decisions between phases
---
## 4-Phase Execution
### Phase 1: Prepare Arguments
**Goal**: Parse command arguments and check existing documentation
**Step 1: Get Target Path and Project Name**
```bash
# Get current directory (or use provided path)
bash(pwd)
# Get project name from directory
bash(basename "$(pwd)")
# Get project root
bash(git rev-parse --show-toplevel 2>/dev/null || pwd)
```
**Output**:
- `target_path`: `/d/my_project`
- `project_name`: `my_project`
- `project_root`: `/d/my_project`
**Step 2: Set Default Parameters**
```bash
# Default values (use these unless user specifies otherwise):
# - tool: "gemini"
# - mode: "full"
# - regenerate: false (no --regenerate flag)
# - cli_execute: false (no --cli-execute flag)
```
**Step 3: Check Existing Documentation**
```bash
# Check if docs directory exists
bash(test -d .workflow/docs/my_project && echo "exists" || echo "not_exists")
# Count existing documentation files
bash(find .workflow/docs/my_project -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Output**:
- `docs_exists`: `exists` or `not_exists`
- `existing_docs`: `5` (or `0` if no docs)
**Step 4: Determine Execution Path**
**Decision Logic**:
```javascript
if (existing_docs > 0 && !regenerate_flag) {
  // Documentation exists and no regenerate flag
  SKIP_DOCS_GENERATION = true
  message = "Documentation already exists, skipping Phase 2 and Phase 3. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
  // Force regeneration: delete existing docs
  bash(rm -rf .workflow/docs/my_project 2>/dev/null || true)
  SKIP_DOCS_GENERATION = false
  message = "Regenerating documentation from scratch."
} else {
  // No existing docs
  SKIP_DOCS_GENERATION = false
  message = "No existing documentation found, generating new documentation."
}
```
**Summary Variables**:
- `PROJECT_NAME`: `my_project`
- `TARGET_PATH`: `/d/my_project`
- `DOCS_PATH`: `.workflow/docs/my_project`
- `TOOL`: `gemini` (default) or user-specified
- `MODE`: `full` (default) or user-specified
- `CLI_EXECUTE`: `false` (default) or `true` if --cli-execute flag
- `REGENERATE`: `false` (default) or `true` if --regenerate flag
- `EXISTING_DOCS`: Count of existing documentation files
- `SKIP_DOCS_GENERATION`: `true` if skipping Phase 2/3, `false` otherwise
**Completion & TodoWrite**:
- If `SKIP_DOCS_GENERATION = true`: Mark phase 1 completed, phase 2&3 completed (skipped), phase 4 in_progress
- If `SKIP_DOCS_GENERATION = false`: Mark phase 1 completed, phase 2 in_progress
**Next Action**:
- If skipping: Display skip message → Jump to Phase 4 (SKILL.md generation)
- If not skipping: Display preparation results → Continue to Phase 2 (documentation planning)
---
### Phase 2: Call /memory:docs
**Skip Condition**: This phase is **skipped if SKIP_DOCS_GENERATION = true** (documentation already exists without --regenerate flag)
**Goal**: Trigger documentation generation workflow
**Command**:
```bash
SlashCommand(command="/memory:docs [targetPath] --tool [tool] --mode [mode] [--cli-execute]")
```
**Example**:
```bash
/memory:docs /d/my_app --tool gemini --mode full
/memory:docs /d/my_app --tool gemini --mode full --cli-execute
```
**Note**: The `--regenerate` flag is handled in Phase 1 by deleting existing documentation. This command always calls `/memory:docs` without the regenerate flag, relying on docs.md's built-in update detection.
**Parse Output**:
- Extract session ID: `WFS-docs-[timestamp]` (store as `docsSessionId`)
- Extract task count (store as `taskCount`)
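A minimal extraction sketch for these two fields, assuming the session ID and task count appear verbatim in the command output (the exact wording may differ):
```javascript
// Hypothetical parsing of the /memory:docs output text.
const sessionMatch = output.match(/WFS-docs-[\w-]+/);
const docsSessionId = sessionMatch ? sessionMatch[0] : null;
const countMatch = output.match(/(\d+)\s+tasks?/i);
const taskCount = countMatch ? Number(countMatch[1]) : 0;
```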
**Completion Criteria**:
- `/memory:docs` command executed successfully
- Session ID extracted and stored
- Task count retrieved
- Task files created in `.workflow/[docsSessionId]/.task/`
- workflow-session.json exists
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
**Next Action**: Display docs planning results (session ID, task count) → Auto-continue to Phase 3
---
### Phase 3: Execute Documentation Generation
**Skip Condition**: This phase is **skipped if SKIP_DOCS_GENERATION = true** (documentation already exists without --regenerate flag)
**Goal**: Execute documentation generation tasks
**Command**:
```bash
SlashCommand(command="/workflow:execute")
```
**Note**: `/workflow:execute` automatically discovers active session from Phase 2
**Completion Criteria**:
- `/workflow:execute` command executed successfully
- Documentation files generated in `.workflow/docs/[projectName]/`
- All tasks marked as completed in session
- At minimum: module documentation files exist (API.md and/or README.md)
- For full mode: Project README, ARCHITECTURE, EXAMPLES files generated
**TodoWrite**: Mark phase 3 completed, phase 4 in_progress
**Next Action**: Display execution results (file count, module count) → Auto-continue to Phase 4
---
### Phase 4: Generate SKILL.md Index
**Note**: This phase is **NEVER skipped** - it always executes to generate or update the SKILL index.
**Step 1: Read Key Files** (Use Read tool)
- `.workflow/docs/{project_name}/README.md` (required)
- `.workflow/docs/{project_name}/ARCHITECTURE.md` (optional)
**Step 2: Discover Structure**
```bash
bash(find .workflow/docs/{project_name} -name "*.md" | sed 's|.workflow/docs/{project_name}/||' | awk -F'/' '{if(NF>=2) print $1"/"$2}' | sort -u)
```
**Step 3: Generate Intelligent Description**
Extract from README + structure: Function (capabilities), Modules (names), Keywords (API/CLI/auth/etc.)
**Format**: `{Project} {core capabilities} (located at {project_path}). Load this SKILL when analyzing, modifying, or learning about {domain_description} or files under this path, especially when no relevant context exists in memory.`
**Key Elements**:
- **Path Reference**: Use `TARGET_PATH` from Phase 1 for precise location identification
- **Domain Description**: Extract human-readable domain/feature area from README (e.g., "workflow management", "thermal modeling")
- **Trigger Optimization**: Include project path, emphasize "especially when no relevant context exists in memory"
- **Action Coverage**: analyzing (分析), modifying (修改), learning (了解)
**Example**: "Workflow orchestration system with CLI tools and documentation generation (located at /d/Claude_dms3). Load this SKILL when analyzing, modifying, or learning about workflow management or files under this path, especially when no relevant context exists in memory."
**Step 4: Write SKILL.md** (Use Write tool)
```bash
bash(mkdir -p .claude/skills/{project_name})
```
`.claude/skills/{project_name}/SKILL.md`:
```markdown
---
name: {project_name}
description: {intelligent description from Step 3}
version: 1.0.0
---
# {Project Name} SKILL Package
## Documentation: `../../../.workflow/docs/{project_name}/`
## Progressive Loading
### Level 0: Quick Start (~2K)
- [README](../../../.workflow/docs/{project_name}/README.md)
### Level 1: Core Modules (~8K)
{Module READMEs}
### Level 2: Complete (~25K)
All modules + [Architecture](../../../.workflow/docs/{project_name}/ARCHITECTURE.md)
### Level 3: Deep Dive (~40K)
Everything + [Examples](../../../.workflow/docs/{project_name}/EXAMPLES.md)
```
**Completion Criteria**:
- SKILL.md file created at `.claude/skills/{project_name}/SKILL.md`
- Intelligent description generated from documentation
- Progressive loading levels (0-3) properly structured
- Module index includes all documented modules
- All file references use relative paths
**TodoWrite**: Mark phase 4 completed
**Final Action**: Report completion summary to user
**Return to User**:
```
SKILL Package Generation Complete
Project: {project_name}
Documentation: .workflow/docs/{project_name}/ ({doc_count} files)
SKILL Index: .claude/skills/{project_name}/SKILL.md
Generated:
- {task_count} documentation tasks completed
- SKILL.md with progressive loading (4 levels)
- Module index with {module_count} modules
Usage:
- Load Level 0: Quick project overview (~2K tokens)
- Load Level 1: Core modules (~8K tokens)
- Load Level 2: Complete docs (~25K tokens)
- Load Level 3: Everything (~40K tokens)
```
---
## Implementation Details
### Critical Rules
1. **No User Prompts Between Phases**: Never ask user questions or wait for input between phases
2. **Immediate Phase Transition**: After TodoWrite update, immediately execute next phase command
3. **Status-Driven Execution**: Check TodoList status after each phase:
- If next task is "pending" → Mark it "in_progress" and execute
- If all tasks are "completed" → Report final summary
4. **Phase Completion Pattern**:
```
Phase N completes → Update TodoWrite (N=completed, N+1=in_progress) → Execute Phase N+1
```
### TodoWrite Patterns
#### Initialization (Before Phase 1)
**FIRST ACTION**: Create TodoList with all 4 phases
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "in_progress", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "pending", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "pending", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL.md"}
]})
```
**SECOND ACTION**: Execute Phase 1 immediately
#### Full Path (SKIP_DOCS_GENERATION = false)
**After Phase 1**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "in_progress", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "pending", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL.md"}
]})
// Auto-continue to Phase 2
```
**After Phase 2**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "in_progress", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL.md"}
]})
// Auto-continue to Phase 3
```
**After Phase 3**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "in_progress", "activeForm": "Generating SKILL.md"}
]})
// Auto-continue to Phase 4
```
**After Phase 4**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "completed", "activeForm": "Generating SKILL.md"}
]})
// Report completion summary to user
```
#### Skip Path (SKIP_DOCS_GENERATION = true)
**After Phase 1** (detects existing docs, skips Phase 2 & 3):
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "in_progress", "activeForm": "Generating SKILL.md"}
]})
// Display skip message: "Documentation already exists, skipping Phase 2 and Phase 3. Use --regenerate to force regeneration."
// Jump directly to Phase 4
```
**After Phase 4**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "completed", "activeForm": "Generating SKILL.md"}
]})
// Report completion summary to user
```
### Execution Flow Diagrams
#### Full Path Flow
```
User triggers command
[TodoWrite] Initialize 4 phases (Phase 1 = in_progress)
[Execute] Phase 1: Parse arguments
[TodoWrite] Phase 1 = completed, Phase 2 = in_progress
[Execute] Phase 2: Call /memory:docs
[TodoWrite] Phase 2 = completed, Phase 3 = in_progress
[Execute] Phase 3: Call /workflow:execute
[TodoWrite] Phase 3 = completed, Phase 4 = in_progress
[Execute] Phase 4: Generate SKILL.md
[TodoWrite] Phase 4 = completed
[Report] Display completion summary
```
#### Skip Path Flow
```
User triggers command
[TodoWrite] Initialize 4 phases (Phase 1 = in_progress)
[Execute] Phase 1: Parse arguments, detect existing docs
[TodoWrite] Phase 1 = completed, Phase 2&3 = completed (skipped), Phase 4 = in_progress
[Display] Skip message: "Documentation already exists, skipping Phase 2 and Phase 3"
[Execute] Phase 4: Generate SKILL.md (always runs)
[TodoWrite] Phase 4 = completed
[Report] Display completion summary
```
### Error Handling
- If any phase fails, mark it as "in_progress" (not completed)
- Report error details to user
- Do NOT auto-continue to next phase on failure
---
## Parameters
```bash
/memory:skill-memory [path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]
```
- **path**: Target directory (default: current directory)
- **--tool**: CLI tool for documentation (default: gemini)
- `gemini`: Comprehensive documentation
- `qwen`: Architecture analysis
- `codex`: Implementation validation
- **--regenerate**: Force regenerate all documentation
- When enabled: Deletes existing `.workflow/docs/{project_name}/` before regeneration
- Ensures fresh documentation from source code
- **--mode**: Documentation mode (default: full)
- `full`: Complete docs (modules + README + ARCHITECTURE + EXAMPLES)
- `partial`: Module docs only
- **--cli-execute**: Enable CLI-based documentation generation (optional)
- When enabled: CLI generates docs directly in implementation_approach
- When disabled (default): Agent generates documentation content
---
## Examples
### Example 1: Generate SKILL Package (Default)
```bash
/memory:skill-memory
```
**Workflow**:
1. Phase 1: Detects current directory, checks existing docs
2. Phase 2: Calls `/memory:docs . --tool gemini --mode full` (Agent Mode)
3. Phase 3: Executes documentation generation via `/workflow:execute`
4. Phase 4: Generates SKILL.md at `.claude/skills/{project_name}/SKILL.md`
### Example 2: Regenerate with Qwen
```bash
/memory:skill-memory /d/my_app --tool qwen --regenerate
```
**Workflow**:
1. Phase 1: Parses target path, detects regenerate flag, deletes existing docs
2. Phase 2: Calls `/memory:docs /d/my_app --tool qwen --mode full`
3. Phase 3: Executes documentation regeneration
4. Phase 4: Generates updated SKILL.md
### Example 3: Partial Mode (Modules Only)
```bash
/memory:skill-memory --mode partial
```
**Workflow**:
1. Phase 1: Detects partial mode
2. Phase 2: Calls `/memory:docs . --tool gemini --mode partial` (Agent Mode)
3. Phase 3: Executes module documentation only
4. Phase 4: Generates SKILL.md with module-only index
### Example 4: CLI Execute Mode
```bash
/memory:skill-memory --cli-execute
```
**Workflow**:
1. Phase 1: Detects CLI execute mode
2. Phase 2: Calls `/memory:docs . --tool gemini --mode full --cli-execute` (CLI Mode)
3. Phase 3: Executes CLI-based documentation generation
4. Phase 4: Generates SKILL.md at `.claude/skills/{project_name}/SKILL.md`
### Example 5: Skip Path (Existing Docs)
```bash
/memory:skill-memory
```
**Scenario**: Documentation already exists in `.workflow/docs/{project_name}/`
**Workflow**:
1. Phase 1: Detects existing docs (5 files), sets SKIP_DOCS_GENERATION = true
2. Display: "Documentation already exists, skipping Phase 2 and Phase 3. Use --regenerate to force regeneration."
3. Phase 4: Generates or updates SKILL.md index only (~5-10x faster)
---
## Architecture
```
skill-memory (orchestrator)
├─ Phase 1: Prepare (bash commands, skip decision)
├─ Phase 2: /memory:docs (task planning, skippable)
├─ Phase 3: /workflow:execute (task execution, skippable)
└─ Phase 4: Write SKILL.md (direct file generation, always runs)
No task JSON created by this command
All documentation tasks managed by /memory:docs
Smart skip logic: 5-10x faster when docs exist
```

View File

@@ -1,773 +0,0 @@
---
name: swagger-docs
description: Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests
argument-hint: "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]"
---
# Swagger API Documentation Workflow (/memory:swagger-docs)
## Overview
Professional Swagger/OpenAPI documentation generator that strictly follows RESTful API design standards to produce enterprise-grade API documentation.
**Core Features**:
- **RESTful Standards**: Strict adherence to REST architecture and HTTP semantics
- **Global Security**: Unified Authorization Token validation mechanism
- **Complete API Docs**: Descriptions, methods, URLs, parameters for each endpoint
- **Organized Structure**: Clear directory hierarchy by business domain
- **Detailed Fields**: Type, required, example, description for each field
- **Error Code Standards**: Unified error response format and code definitions
- **Validation Tests**: Boundary conditions and exception handling tests
**Output Structure** (--lang zh):
```
.workflow/docs/{project_name}/api/
├── swagger.yaml # Main OpenAPI spec file
├── 概述/
│ ├── README.md # API overview
│ ├── 认证说明.md # Authentication guide
│ ├── 错误码规范.md # Error code definitions
│ └── 版本历史.md # Version history
├── 用户模块/ # Grouped by business domain
│ ├── 用户认证.md
│ ├── 用户管理.md
│ └── 权限控制.md
├── 业务模块/
│ └── ...
└── 测试报告/
├── 接口测试.md # API test results
└── 边界测试.md # Boundary condition tests
```
**Output Structure** (--lang en):
```
.workflow/docs/{project_name}/api/
├── swagger.yaml # Main OpenAPI spec file
├── overview/
│ ├── README.md # API overview
│ ├── authentication.md # Authentication guide
│ ├── error-codes.md # Error code definitions
│ └── changelog.md # Version history
├── users/ # Grouped by business domain
│ ├── authentication.md
│ ├── management.md
│ └── permissions.md
├── orders/
│ └── ...
└── test-reports/
├── api-tests.md # API test results
└── boundary-tests.md # Boundary condition tests
```
## Parameters
```bash
/memory:swagger-docs [path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]
```
- **path**: API source code directory (default: current directory)
- **--tool**: CLI tool selection (default: gemini)
- `gemini`: Comprehensive analysis, pattern recognition
- `qwen`: Architecture analysis, system design
- `codex`: Implementation validation, code quality
- **--format**: OpenAPI spec format (default: yaml)
- `yaml`: YAML format (recommended, better readability)
- `json`: JSON format
- **--version**: OpenAPI version (default: v3.0)
- `v3.0`: OpenAPI 3.0.x
- `v3.1`: OpenAPI 3.1.0 (supports JSON Schema 2020-12)
- **--lang**: Documentation language (default: zh)
- `zh`: Chinese documentation with Chinese directory names
- `en`: English documentation with English directory names
## Planning Workflow
### Phase 1: Initialize Session
```bash
# Get project info
bash(pwd && basename "$(pwd)" && (git rev-parse --show-toplevel 2>/dev/null || pwd) && date +%Y%m%d-%H%M%S)
```
```javascript
// Create swagger-docs session
SlashCommand(command="/workflow:session:start --type swagger-docs --new \"{project_name}-swagger-{timestamp}\"")
// Parse output to get sessionId
```
```bash
# Update workflow-session.json
bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","format":"yaml","openapi_version":"3.0.3","lang":"{lang}","tool":"gemini"}' .workflow/active/{sessionId}/workflow-session.json > tmp.json && mv tmp.json .workflow/active/{sessionId}/workflow-session.json)
```
### Phase 2: Scan API Endpoints
**Discovery Patterns**: Auto-detect framework signatures and API definition styles.
**Supported Frameworks**:
| Framework | Detection Pattern | Example |
|-----------|-------------------|---------|
| Express.js | `router.get/post/put/delete` | `router.get('/users/:id')` |
| Fastify | `fastify.route`, `@Route` | `fastify.get('/api/users')` |
| NestJS | `@Controller`, `@Get/@Post` | `@Get('users/:id')` |
| Koa | `router.get`, `ctx.body` | `router.get('/users')` |
| Hono | `app.get/post`, `c.json` | `app.get('/users/:id')` |
| FastAPI | `@app.get`, `@router.post` | `@app.get("/users/{id}")` |
| Flask | `@app.route`, `@bp.route` | `@app.route('/users')` |
| Spring | `@GetMapping`, `@PostMapping` | `@GetMapping("/users/{id}")` |
| Go Gin | `r.GET`, `r.POST` | `r.GET("/users/:id")` |
| Go Chi | `r.Get`, `r.Post` | `r.Get("/users/{id}")` |
**Commands**:
```bash
# 1. Detect API framework type
bash(
if rg -q "@Controller|@Get|@Post|@Put|@Delete" --type ts 2>/dev/null; then echo "NESTJS";
elif rg -q "router\.(get|post|put|delete|patch)" --type ts --type js 2>/dev/null; then echo "EXPRESS";
elif rg -q "fastify\.(get|post|route)" --type ts --type js 2>/dev/null; then echo "FASTIFY";
elif rg -q "@app\.(get|post|put|delete)" --type py 2>/dev/null; then echo "FASTAPI";
elif rg -q "@GetMapping|@PostMapping|@RequestMapping" --type java 2>/dev/null; then echo "SPRING";
elif rg -q 'r\.(GET|POST|PUT|DELETE)' --type go 2>/dev/null; then echo "GO_GIN";
else echo "UNKNOWN"; fi
)
# 2. Scan all API endpoint definitions
bash(rg -n "(router|app|fastify)\.(get|post|put|delete|patch)|@(Get|Post|Put|Delete|Patch|Controller|RequestMapping)" --type ts --type js --type py --type java --type go -g '!*.test.*' -g '!*.spec.*' -g '!node_modules/**' 2>/dev/null | head -200)
# 3. Extract route paths
bash(rg -o "['\"](/api)?/[a-zA-Z0-9/:_-]+['\"]" --type ts --type js --type py -g '!*.test.*' 2>/dev/null | sort -u | head -100)
# 4. Detect existing OpenAPI/Swagger files
bash(find . -type f \( -name "swagger.yaml" -o -name "swagger.json" -o -name "openapi.yaml" -o -name "openapi.json" \) ! -path "*/node_modules/*" 2>/dev/null)
# 5. Extract DTO/Schema definitions
bash(rg -n "export (interface|type|class).*Dto|@ApiProperty|class.*Schema" --type ts -g '!*.test.*' 2>/dev/null | head -100)
```
**Data Processing**: Parse outputs, use **Write tool** to create `${session_dir}/.process/swagger-planning-data.json`:
```json
{
"metadata": {
"generated_at": "2025-01-01T12:00:00+08:00",
"project_name": "project_name",
"project_root": "/path/to/project",
"openapi_version": "3.0.3",
"format": "yaml",
"lang": "zh"
},
"framework": {
"type": "NESTJS",
"detected_patterns": ["@Controller", "@Get", "@Post"],
"base_path": "/api/v1"
},
"endpoints": [
{
"file": "src/modules/users/users.controller.ts",
"line": 25,
"method": "GET",
"path": "/api/v1/users/:id",
"handler": "getUser",
"controller": "UsersController"
}
],
"existing_specs": {
"found": false,
"files": []
},
"dto_schemas": [
{
"name": "CreateUserDto",
"file": "src/modules/users/dto/create-user.dto.ts",
"properties": ["email", "password", "name"]
}
],
"statistics": {
"total_endpoints": 45,
"by_method": {"GET": 20, "POST": 15, "PUT": 5, "DELETE": 5},
"by_module": {"users": 12, "auth": 8, "orders": 15, "products": 10}
}
}
```
### Phase 3: Analyze API Structure
**Commands**:
```bash
# 1. Analyze controller/route file structure
bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.endpoints[].file' | sort -u | head -20)
# 2. Extract request/response types
bash(for f in $(jq -r '.dto_schemas[].file' ${session_dir}/.process/swagger-planning-data.json | head -20); do echo "=== $f ===" && cat "$f" 2>/dev/null; done)
# 3. Analyze authentication middleware
bash(rg -n "auth|guard|middleware|jwt|bearer|token" -i --type ts --type js -g '!*.test.*' -g '!node_modules/**' 2>/dev/null | head -50)
# 4. Detect error handling patterns
bash(rg -n "HttpException|BadRequest|Unauthorized|Forbidden|NotFound|throw new" --type ts --type js -g '!*.test.*' 2>/dev/null | head -50)
```
**Deep Analysis via Gemini CLI**:
```bash
ccw cli -p "
PURPOSE: Analyze API structure and generate OpenAPI specification outline for comprehensive documentation
TASK:
• Parse all API endpoints and identify business module boundaries
• Extract request parameters, request bodies, and response formats
• Identify authentication mechanisms and security requirements
• Discover error handling patterns and error codes
• Map endpoints to logical module groups
MODE: analysis
CONTEXT: @src/**/*.controller.ts @src/**/*.routes.ts @src/**/*.dto.ts @src/**/middleware/**/*
EXPECTED: JSON format API structure analysis report with modules, endpoints, security schemes, and error codes
CONSTRAINTS: Strict RESTful standards | Identify all public endpoints | Document output language: {lang}
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {project_root}
```
**Update swagger-planning-data.json** with analysis results:
```json
{
"api_structure": {
"modules": [
{
"name": "Users",
"name_zh": "用户模块",
"base_path": "/api/v1/users",
"endpoints": [
{
"path": "/api/v1/users",
"method": "GET",
"operation_id": "listUsers",
"summary": "List all users",
"summary_zh": "获取用户列表",
"description": "Paginated list of system users with filtering by status and role",
"description_zh": "分页获取系统用户列表,支持按状态、角色筛选",
"tags": ["User Management"],
"tags_zh": ["用户管理"],
"security": ["bearerAuth"],
"parameters": {
"query": ["page", "limit", "status", "role"]
},
"responses": {
"200": "UserListResponse",
"401": "UnauthorizedError",
"403": "ForbiddenError"
}
}
]
}
],
"security_schemes": {
"bearerAuth": {
"type": "http",
"scheme": "bearer",
"bearerFormat": "JWT",
"description": "JWT Token authentication. Add Authorization: Bearer <token> to request header"
}
},
"error_codes": [
{"code": "AUTH_001", "status": 401, "message": "Invalid or expired token", "message_zh": "Token 无效或已过期"},
{"code": "AUTH_002", "status": 401, "message": "Authentication required", "message_zh": "未提供认证信息"},
{"code": "AUTH_003", "status": 403, "message": "Insufficient permissions", "message_zh": "权限不足"}
]
}
}
```
### Phase 4: Task Decomposition
**Task Hierarchy**:
```
Level 1: Infrastructure Tasks (Parallel)
├─ IMPL-001: Generate main OpenAPI spec file (swagger.yaml)
├─ IMPL-002: Generate global security config and auth documentation
└─ IMPL-003: Generate unified error code specification
Level 2: Module Documentation Tasks (Parallel, by business module)
├─ IMPL-004: Users module API documentation
├─ IMPL-005: Auth module API documentation
├─ IMPL-006: Business module N API documentation
└─ ...
Level 3: Aggregation Tasks (Depends on Level 1-2)
├─ IMPL-N+1: Generate API overview and navigation
└─ IMPL-N+2: Generate version history and changelog
Level 4: Validation Tasks (Depends on Level 1-3)
├─ IMPL-N+3: API endpoint validation tests
└─ IMPL-N+4: Boundary condition tests
```
**Grouping Strategy**:
1. Group by business module (users, orders, products, etc.)
2. Maximum 10 endpoints per task
3. Large modules (>10 endpoints) split by submodules
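A minimal sketch of this strategy, assuming `modules` is read from `swagger-planning-data.json`:
```javascript
// Illustrative grouping: one Level 2 task per module, split at 10 endpoints.
function groupTasks(modules, maxPerTask = 10) {
  const groups = [];
  for (const mod of modules) {
    for (let i = 0; i < mod.endpoints.length; i += maxPerTask) {
      groups.push({ module: mod.name, endpoints: mod.endpoints.slice(i, i + maxPerTask) });
    }
  }
  return groups;
}
```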
**Commands**:
```bash
# 1. Count endpoints by module
bash(cat ${session_dir}/.process/swagger-planning-data.json | jq '.statistics.by_module')
# 2. Calculate task groupings
bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_structure.modules[] | "\(.name):\(.endpoints | length)"')
```
**Data Processing**: Use **Edit tool** to update `swagger-planning-data.json` with task groups:
```json
{
"task_groups": {
"level1_count": 3,
"level2_count": 5,
"total_count": 12,
"assignments": [
{"task_id": "IMPL-001", "level": 1, "type": "openapi-spec", "title": "Generate OpenAPI main spec file"},
{"task_id": "IMPL-002", "level": 1, "type": "security", "title": "Generate global security config"},
{"task_id": "IMPL-003", "level": 1, "type": "error-codes", "title": "Generate error code specification"},
{"task_id": "IMPL-004", "level": 2, "type": "module-doc", "module": "users", "endpoint_count": 12},
{"task_id": "IMPL-005", "level": 2, "type": "module-doc", "module": "auth", "endpoint_count": 8}
]
}
}
```
### Phase 5: Generate Task JSONs
**Generation Process**:
1. Read configuration values from workflow-session.json
2. Read task groups from swagger-planning-data.json
3. Generate Level 1 tasks (infrastructure)
4. Generate Level 2 tasks (by module)
5. Generate Level 3-4 tasks (aggregation and validation)
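A rough sketch of that loop, assuming `sessionDir` is known; the file layout follows the Session Structure section and field names follow the templates below:
```javascript
// Illustrative only: write one task JSON per assignment in task_groups.
const fs = require("fs");
const planning = JSON.parse(
  fs.readFileSync(`${sessionDir}/.process/swagger-planning-data.json`, "utf8"));
for (const t of planning.task_groups.assignments) {
  const task = {
    id: t.task_id,
    title: t.title || `Generate ${t.module} API documentation`,
    status: "pending",
    meta: { type: t.type },
  };
  fs.writeFileSync(`${sessionDir}/.task/${t.task_id}.json`, JSON.stringify(task, null, 2));
}
```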
## Task Templates
### Level 1-1: OpenAPI Main Spec File
```json
{
"id": "IMPL-001",
"title": "Generate OpenAPI main specification file",
"status": "pending",
"meta": {
"type": "swagger-openapi-spec",
"agent": "@doc-generator",
"tool": "gemini",
"priority": "critical"
},
"context": {
"requirements": [
"Generate OpenAPI 3.0.3 compliant swagger.yaml",
"Include complete info, servers, tags, paths, components definitions",
"Follow RESTful design standards, use {lang} for descriptions"
],
"precomputed_data": {
"planning_data": "${session_dir}/.process/swagger-planning-data.json"
}
},
"flow_control": {
"pre_analysis": [
{
"step": "load_analysis_data",
"action": "Load API analysis data",
"commands": [
"bash(cat ${session_dir}/.process/swagger-planning-data.json)"
],
"output_to": "api_analysis"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate OpenAPI spec file",
"description": "Create complete swagger.yaml specification file",
"cli_prompt": "PURPOSE: Generate OpenAPI 3.0.3 specification file from analyzed API structure\nTASK:\n• Define openapi version: 3.0.3\n• Define info: title, description, version, contact, license\n• Define servers: development, staging, production environments\n• Define tags: organized by business modules\n• Define paths: all API endpoints with complete specifications\n• Define components: schemas, securitySchemes, parameters, responses\nMODE: write\nCONTEXT: @[api_analysis]\nEXPECTED: Complete swagger.yaml file following OpenAPI 3.0.3 specification\nCONSTRAINTS: Use {lang} for all descriptions | Strict RESTful standards\n--rule documentation-swagger-api",
"output": "swagger.yaml"
}
],
"target_files": [
".workflow/docs/${project_name}/api/swagger.yaml"
]
}
}
```
### Level 1-2: Global Security Configuration
```json
{
"id": "IMPL-002",
"title": "Generate global security configuration and authentication guide",
"status": "pending",
"meta": {
"type": "swagger-security",
"agent": "@doc-generator",
"tool": "gemini"
},
"context": {
"requirements": [
"Document Authorization header format in detail",
"Describe token acquisition, refresh, and expiration mechanisms",
"List permission requirements for each endpoint"
]
},
"flow_control": {
"pre_analysis": [
{
"step": "analyze_auth",
"command": "bash(rg -n 'auth|guard|jwt|bearer' -i --type ts -g '!*.test.*' 2>/dev/null | head -50)",
"output_to": "auth_patterns"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate authentication documentation",
"cli_prompt": "PURPOSE: Generate comprehensive authentication documentation for API security\nTASK:\n• Document authentication mechanism: JWT Bearer Token\n• Explain header format: Authorization: Bearer <token>\n• Describe token lifecycle: acquisition, refresh, expiration handling\n• Define permission levels: public, user, admin, super_admin\n• Document authentication failure responses: 401/403 error handling\nMODE: write\nCONTEXT: @[auth_patterns] @src/**/auth/**/* @src/**/guard/**/*\nEXPECTED: Complete authentication guide in {lang}\nCONSTRAINTS: Include code examples | Clear step-by-step instructions\n--rule development-feature",
"output": "{auth_doc_name}"
}
],
"target_files": [
".workflow/docs/${project_name}/api/{overview_dir}/{auth_doc_name}"
]
}
}
```
### Level 1-3: Unified Error Code Specification
```json
{
"id": "IMPL-003",
"title": "Generate unified error code specification",
"status": "pending",
"meta": {
"type": "swagger-error-codes",
"agent": "@doc-generator",
"tool": "gemini"
},
"context": {
"requirements": [
"Define unified error response format",
"Create categorized error code system (auth, business, system)",
"Provide detailed description and examples for each error code"
]
},
"flow_control": {
"implementation_approach": [
{
"step": 1,
"title": "Generate error code specification document",
"cli_prompt": "PURPOSE: Generate comprehensive error code specification for consistent API error handling\nTASK:\n• Define error response format: {code, message, details, timestamp}\n• Document authentication errors (AUTH_xxx): 401/403 series\n• Document parameter errors (PARAM_xxx): 400 series\n• Document business errors (BIZ_xxx): business logic errors\n• Document system errors (SYS_xxx): 500 series\n• For each error code: HTTP status, error message, possible causes, resolution suggestions\nMODE: write\nCONTEXT: @src/**/*.exception.ts @src/**/*.filter.ts\nEXPECTED: Complete error code specification in {lang} with tables and examples\nCONSTRAINTS: Include response examples | Clear categorization\n--rule development-feature",
"output": "{error_doc_name}"
}
],
"target_files": [
".workflow/docs/${project_name}/api/{overview_dir}/{error_doc_name}"
]
}
}
```
### Level 2: Module API Documentation (Template)
```json
{
"id": "IMPL-${module_task_id}",
"title": "Generate ${module_name} API documentation",
"status": "pending",
"depends_on": ["IMPL-001", "IMPL-002", "IMPL-003"],
"meta": {
"type": "swagger-module-doc",
"agent": "@doc-generator",
"tool": "gemini",
"module": "${module_name}",
"endpoint_count": "${endpoint_count}"
},
"context": {
"requirements": [
"Complete documentation for all endpoints in this module",
"Each endpoint: description, method, URL, parameters, responses",
"Include success and failure response examples",
"Mark API version and last update time"
],
"focus_paths": ["${module_source_paths}"]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_module_endpoints",
"action": "Load module endpoint information",
"commands": [
"bash(cat ${session_dir}/.process/swagger-planning-data.json | jq '.api_structure.modules[] | select(.name == \"${module_name}\")')"
],
"output_to": "module_endpoints"
},
{
"step": "read_source_files",
"action": "Read module source files",
"commands": [
"bash(cat ${module_source_files})"
],
"output_to": "source_code"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate module API documentation",
"description": "Generate complete API documentation for ${module_name}",
"cli_prompt": "PURPOSE: Generate complete RESTful API documentation for ${module_name} module\nTASK:\n• Create module overview: purpose, use cases, prerequisites\n• Generate endpoint index: grouped by functionality\n• For each endpoint document:\n - Functional description: purpose and business context\n - Request method: GET/POST/PUT/DELETE\n - URL path: complete API path\n - Request headers: Authorization and other required headers\n - Path parameters: {id} and other path variables\n - Query parameters: pagination, filters, etc.\n - Request body: JSON Schema format\n - Response body: success and error responses\n - Field description table: type, required, example, description\n• Add usage examples: cURL, JavaScript, Python\n• Add version info: v1.0.0, last updated date\nMODE: write\nCONTEXT: @[module_endpoints] @[source_code]\nEXPECTED: Complete module API documentation in {lang} with all endpoints fully documented\nCONSTRAINTS: RESTful standards | Include all response codes\n--rule documentation-swagger-api",
"output": "${module_doc_name}"
}
],
"target_files": [
".workflow/docs/${project_name}/api/${module_dir}/${module_doc_name}"
]
}
}
```
### Level 3: API Overview and Navigation
```json
{
"id": "IMPL-${overview_task_id}",
"title": "Generate API overview and navigation",
"status": "pending",
"depends_on": ["IMPL-001", "...", "IMPL-${last_module_task_id}"],
"meta": {
"type": "swagger-overview",
"agent": "@doc-generator",
"tool": "gemini"
},
"flow_control": {
"pre_analysis": [
{
"step": "load_all_docs",
"command": "bash(find .workflow/docs/${project_name}/api -type f -name '*.md' ! -path '*/{overview_dir}/*' | xargs cat)",
"output_to": "all_module_docs"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate API overview",
"cli_prompt": "PURPOSE: Generate API overview document with navigation and quick start guide\nTASK:\n• Create introduction: system features, tech stack, version\n• Write quick start guide: authentication, first request example\n• Build module navigation: categorized links to all modules\n• Document environment configuration: development, staging, production\n• List SDKs and tools: client libraries, Postman collection\nMODE: write\nCONTEXT: @[all_module_docs] @.workflow/docs/${project_name}/api/swagger.yaml\nEXPECTED: Complete API overview in {lang} with navigation links\nCONSTRAINTS: Clear structure | Quick start focus\n--rule development-feature",
"output": "README.md"
}
],
"target_files": [
".workflow/docs/${project_name}/api/{overview_dir}/README.md"
]
}
}
```
### Level 4: Validation Tasks
```json
{
"id": "IMPL-${test_task_id}",
"title": "API endpoint validation tests",
"status": "pending",
"depends_on": ["IMPL-${overview_task_id}"],
"meta": {
"type": "swagger-validation",
"agent": "@test-fix-agent",
"tool": "codex"
},
"context": {
"requirements": [
"Validate accessibility of all endpoints",
"Test various boundary conditions",
"Verify exception handling"
]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_swagger_spec",
"command": "bash(cat .workflow/docs/${project_name}/api/swagger.yaml)",
"output_to": "swagger_spec"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate test report",
"cli_prompt": "PURPOSE: Generate comprehensive API test validation report\nTASK:\n• Document test environment configuration\n• Calculate endpoint coverage statistics\n• Report test results: pass/fail counts\n• Document boundary tests: parameter limits, null values, special characters\n• Document exception tests: auth failures, permission denied, resource not found\n• List issues found with recommendations\nMODE: write\nCONTEXT: @[swagger_spec]\nEXPECTED: Complete test report in {lang} with detailed results\nCONSTRAINTS: Include test cases | Clear pass/fail status\n--rule development-tests",
"output": "{test_doc_name}"
}
],
"target_files": [
".workflow/docs/${project_name}/api/{test_dir}/{test_doc_name}"
]
}
}
```
## Language-Specific Directory Mapping
| Component | --lang zh | --lang en |
|-----------|-----------|-----------|
| Overview dir | 概述 | overview |
| Auth doc | 认证说明.md | authentication.md |
| Error doc | 错误码规范.md | error-codes.md |
| Changelog | 版本历史.md | changelog.md |
| Users module | 用户模块 | users |
| Orders module | 订单模块 | orders |
| Products module | 商品模块 | products |
| Test dir | 测试报告 | test-reports |
| API test doc | 接口测试.md | api-tests.md |
| Boundary test doc | 边界测试.md | boundary-tests.md |
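The same mapping as data, for scripts that resolve names by `--lang` (a sketch; extend with per-project module names):
```javascript
// Directory and file names keyed by --lang, mirroring the table above.
const DIR_NAMES = {
  zh: { overview: "概述", auth: "认证说明.md", errors: "错误码规范.md",
        changelog: "版本历史.md", tests: "测试报告" },
  en: { overview: "overview", auth: "authentication.md", errors: "error-codes.md",
        changelog: "changelog.md", tests: "test-reports" },
};
```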
## API Documentation Template
### Single Endpoint Format
Each endpoint must include:
````markdown
### Get User Details
**Description**: Retrieve detailed user information by ID, including profile and permissions.
**Endpoint Info**:
| Property | Value |
|----------|-------|
| Method | GET |
| URL | `/api/v1/users/{id}` |
| Version | v1.0.0 |
| Updated | 2025-01-01 |
| Auth | Bearer Token |
| Permission | user / admin |
**Request Headers**:
| Field | Type | Required | Example | Description |
|-------|------|----------|---------|-------------|
| Authorization | string | Yes | `Bearer eyJhbGc...` | JWT Token |
| Content-Type | string | No | `application/json` | Request content type |
**Path Parameters**:
| Field | Type | Required | Example | Description |
|-------|------|----------|---------|-------------|
| id | string | Yes | `usr_123456` | Unique user identifier |
**Query Parameters**:
| Field | Type | Required | Default | Example | Description |
|-------|------|----------|---------|---------|-------------|
| include | string | No | - | `roles,permissions` | Related data to include |
**Success Response** (200 OK):
```json
{
"code": 0,
"message": "success",
"data": {
"id": "usr_123456",
"email": "user@example.com",
"name": "John Doe",
"status": "active",
"roles": ["user"],
"created_at": "2025-01-01T00:00:00Z",
"updated_at": "2025-01-01T00:00:00Z"
},
"timestamp": "2025-01-01T12:00:00Z"
}
```
**Response Fields**:
| Field | Type | Description |
|-------|------|-------------|
| code | integer | Business status code, 0 = success |
| message | string | Response message |
| data.id | string | Unique user identifier |
| data.email | string | User email address |
| data.name | string | User display name |
| data.status | string | User status: active/inactive/suspended |
| data.roles | array | User role list |
| data.created_at | string | Creation timestamp (ISO 8601) |
| data.updated_at | string | Last update timestamp (ISO 8601) |
**Error Responses**:
| Status | Code | Message | Possible Cause |
|--------|------|---------|----------------|
| 401 | AUTH_001 | Invalid or expired token | Token format error or expired |
| 403 | AUTH_003 | Insufficient permissions | No access to this user info |
| 404 | USER_001 | User not found | User ID doesn't exist or deleted |
**Examples**:
```bash
# cURL
curl -X GET "https://api.example.com/api/v1/users/usr_123456" \
-H "Authorization: Bearer eyJhbGc..." \
-H "Content-Type: application/json"
```
```javascript
// JavaScript (fetch)
const response = await fetch('https://api.example.com/api/v1/users/usr_123456', {
method: 'GET',
headers: {
'Authorization': 'Bearer eyJhbGc...',
'Content-Type': 'application/json'
}
});
const data = await response.json();
```
````
## Session Structure
```
.workflow/active/
└── WFS-swagger-{timestamp}/
├── workflow-session.json
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
│ └── swagger-planning-data.json
└── .task/
├── IMPL-001.json # OpenAPI spec
├── IMPL-002.json # Security config
├── IMPL-003.json # Error codes
├── IMPL-004.json # Module 1 API
├── ...
├── IMPL-N+1.json # API overview
└── IMPL-N+2.json # Validation tests
```
## Execution Commands
```bash
# Execute entire workflow
/workflow:execute
# Specify session
/workflow:execute --resume-session="WFS-swagger-yyyymmdd-hhmmss"
# Single task execution
/task:execute IMPL-001
```
## Related Commands
- `/workflow:execute` - Execute documentation tasks
- `/workflow:status` - View task progress
- `/workflow:session:complete` - Mark session complete
- `/memory:docs` - General documentation workflow

View File

@@ -1,310 +0,0 @@
---
name: tech-research-rules
description: "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)"
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Tech Stack Rules Generator
## Overview
**Purpose**: Generate multi-layered, path-conditional rules that Claude Code automatically loads based on file context.
**Output Structure**:
```
.claude/rules/tech/{tech-stack}/
├── core.md # paths: **/*.{ext} - Core principles
├── patterns.md # paths: src/**/*.{ext} - Implementation patterns
├── testing.md # paths: **/*.{test,spec}.{ext} - Testing rules
├── config.md # paths: *.config.* - Configuration rules
├── api.md # paths: **/api/**/* - API rules (backend only)
├── components.md # paths: **/components/**/* - Component rules (frontend only)
└── metadata.json # Generation metadata
```
**Templates Location**: `~/.claude/workflows/cli-templates/prompts/rules/`
---
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization
2. **Path-Conditional Output**: Every rule file includes `paths` frontmatter
3. **Template-Driven**: Agent reads templates before generating content
4. **Agent Produces Files**: Agent writes all rule files directly
5. **No Manual Loading**: Rules auto-activate when Claude works with matching files
---
## 3-Phase Execution
### Phase 1: Prepare Context & Detect Tech Stack
**Goal**: Detect input mode, extract tech stack info, determine file extensions
**Input Mode Detection**:
```bash
input="$1"
if [[ "$input" == WFS-* ]]; then
MODE="session"
SESSION_ID="$input"
# Read workflow-session.json to extract tech stack
else
MODE="direct"
TECH_STACK_NAME="$input"
fi
```
**Tech Stack Analysis**:
```javascript
// Decompose composite tech stacks
// "typescript-react-nextjs" → ["typescript", "react", "nextjs"]
const TECH_EXTENSIONS = {
"typescript": "{ts,tsx}",
"javascript": "{js,jsx}",
"python": "py",
"rust": "rs",
"go": "go",
"java": "java",
"csharp": "cs",
"ruby": "rb",
"php": "php"
};
const FRAMEWORK_TYPE = {
"react": "frontend",
"vue": "frontend",
"angular": "frontend",
"nextjs": "fullstack",
"nuxt": "fullstack",
"fastapi": "backend",
"express": "backend",
"django": "backend",
"rails": "backend"
};
```
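A hypothetical decomposition helper matching the comment above:
```javascript
// Sketch: derive Phase 1 output variables from a composite stack name.
function decompose(stackName) {
  const components = stackName.toLowerCase().split("-");
  const primaryLang = components.find(c => c in TECH_EXTENSIONS) || components[0];
  const frameworkType = components.map(c => FRAMEWORK_TYPE[c]).find(Boolean) || "library";
  return {
    components,
    primaryLang,
    fileExt: TECH_EXTENSIONS[primaryLang] || "*",
    frameworkType,
  };
}
```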
**Check Existing Rules**:
```bash
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
rules_dir=".claude/rules/tech/${normalized_name}"
existing_count=$(find "${rules_dir}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Skip Decision**:
- If `existing_count > 0` AND no `--regenerate` → `SKIP_GENERATION = true`
- If `--regenerate` → Delete existing and regenerate
**Output Variables**:
- `TECH_STACK_NAME`: Normalized name
- `PRIMARY_LANG`: Primary language
- `FILE_EXT`: File extension pattern
- `FRAMEWORK_TYPE`: frontend | backend | fullstack | library
- `COMPONENTS`: Array of tech components
- `SKIP_GENERATION`: Boolean
**TodoWrite**: Mark phase 1 completed
---
### Phase 2: Agent Produces Path-Conditional Rules
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Delegate to agent for Exa research and rule file generation
**Template Files**:
```
~/.claude/workflows/cli-templates/prompts/rules/
├── tech-rules-agent-prompt.txt # Agent instructions
├── rule-core.txt # Core principles template
├── rule-patterns.txt # Implementation patterns template
├── rule-testing.txt # Testing rules template
├── rule-config.txt # Configuration rules template
├── rule-api.txt # API rules template (backend)
└── rule-components.txt # Component rules template (frontend)
```
**Agent Task**:
```javascript
Task({
subagent_type: "general-purpose",
description: `Generate tech stack rules: ${TECH_STACK_NAME}`,
prompt: `
You are generating path-conditional rules for Claude Code.
## Context
- Tech Stack: ${TECH_STACK_NAME}
- Primary Language: ${PRIMARY_LANG}
- File Extensions: ${FILE_EXT}
- Framework Type: ${FRAMEWORK_TYPE}
- Components: ${JSON.stringify(COMPONENTS)}
- Output Directory: .claude/rules/tech/${TECH_STACK_NAME}/
## Instructions
Read the agent prompt template for detailed instructions.
Use --rule rules-tech-rules-agent-prompt to load the template automatically.
## Execution Steps
1. Execute Exa research queries (see agent prompt)
2. Read each rule template
3. Generate rule files following template structure
4. Write files to output directory
5. Write metadata.json
6. Report completion
## Variable Substitutions
Replace in templates:
- {TECH_STACK_NAME} → ${TECH_STACK_NAME}
- {PRIMARY_LANG} → ${PRIMARY_LANG}
- {FILE_EXT} → ${FILE_EXT}
- {FRAMEWORK_TYPE} → ${FRAMEWORK_TYPE}
`
})
```
**Completion Criteria**:
- 4-6 rule files written with proper `paths` frontmatter
- metadata.json written
- Agent reports files created
**TodoWrite**: Mark phase 2 completed
---
### Phase 3: Verify & Report
**Goal**: Verify generated files and provide usage summary
**Steps**:
1. **Verify Files**:
```bash
find ".claude/rules/tech/${TECH_STACK_NAME}" -name "*.md" -type f
```
2. **Validate Frontmatter**:
```bash
head -5 ".claude/rules/tech/${TECH_STACK_NAME}/core.md"
```
3. **Read Metadata**:
```javascript
Read(`.claude/rules/tech/${TECH_STACK_NAME}/metadata.json`)
```
4. **Generate Summary Report**:
```
Tech Stack Rules Generated
Tech Stack: {TECH_STACK_NAME}
Location: .claude/rules/tech/{TECH_STACK_NAME}/
Files Created:
├── core.md → paths: **/*.{ext}
├── patterns.md → paths: src/**/*.{ext}
├── testing.md → paths: **/*.{test,spec}.{ext}
├── config.md → paths: *.config.*
├── api.md → paths: **/api/**/* (if backend)
└── components.md → paths: **/components/**/* (if frontend)
Auto-Loading:
- Rules apply automatically when editing matching files
- No manual loading required
Example Activation:
- Edit src/components/Button.tsx → core.md + patterns.md + components.md
- Edit tests/api.test.ts → core.md + testing.md
- Edit package.json → config.md
```
**TodoWrite**: Mark phase 3 completed
---
## Path Pattern Reference
| Pattern | Matches |
|---------|---------|
| `**/*.ts` | All .ts files |
| `src/**/*` | All files under src/ |
| `*.config.*` | Config files in root |
| `**/*.{ts,tsx}` | .ts and .tsx files |
**Per-Stack Patterns**:
| Tech Stack | Core Pattern | Test Pattern |
|------------|--------------|--------------|
| TypeScript | `**/*.{ts,tsx}` | `**/*.{test,spec}.{ts,tsx}` |
| Python | `**/*.py` | `**/test_*.py, **/*_test.py` |
| Rust | `**/*.rs` | `**/tests/**/*.rs` |
| Go | `**/*.go` | `**/*_test.go` |
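To sanity-check a pattern against a concrete path, a glob matcher such as the `minimatch` package can be used (an assumption; this command does not depend on it):
```javascript
// Example activation check, assuming minimatch v9+ (named export).
const { minimatch } = require("minimatch");
minimatch("src/components/Button.tsx", "**/*.{ts,tsx}");      // true  → core.md applies
minimatch("tests/api.test.ts", "**/*.{test,spec}.{ts,tsx}");  // true  → testing.md applies
minimatch("package.json", "*.config.*");                      // false → config.md not loaded
```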
---
## Parameters
```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate]
```
**Arguments**:
- **session-id**: `WFS-*` format - Extract from workflow session
- **tech-stack-name**: Direct input - `"typescript"`, `"typescript-react"`
- **--regenerate**: Force regenerate existing rules
---
## Examples
### Single Language
```bash
/memory:tech-research "typescript"
```
**Output**: `.claude/rules/tech/typescript/` with 4 rule files
### Frontend Stack
```bash
/memory:tech-research "typescript-react"
```
**Output**: `.claude/rules/tech/typescript-react/` with 5 rule files (includes components.md)
### Backend Stack
```bash
/memory:tech-research "python-fastapi"
```
**Output**: `.claude/rules/tech/python-fastapi/` with 5 rule files (includes api.md)
### From Session
```bash
/memory:tech-research WFS-user-auth-20251104
```
**Workflow**: Extract tech stack from session → Generate rules
---
## Comparison: Rules vs SKILL
| Aspect | SKILL Memory | Rules |
|--------|--------------|-------|
| Loading | Manual: `Skill("tech")` | Automatic by path |
| Scope | All files when loaded | Only matching files |
| Granularity | Monolithic packages | Per-file-type |
| Context | Full package | Only relevant rules |
**When to Use**:
- **Rules**: Tech stack conventions per file type
- **SKILL**: Reference docs, APIs, examples for manual lookup

View File

@@ -0,0 +1,332 @@
---
name: tips
description: Quick note-taking command to capture ideas, snippets, reminders, and insights for later reference
argument-hint: "<note content> [--tag <tag1,tag2>] [--context <context>]"
allowed-tools: mcp__ccw-tools__core_memory(*), Read(*)
examples:
- /memory:tips "Remember to use Redis for rate limiting"
- /memory:tips "Auth pattern: JWT with refresh tokens" --tag architecture,auth
- /memory:tips "Bug: memory leak in WebSocket handler after 24h" --context websocket-service
- /memory:tips "Performance: lazy loading reduced bundle by 40%" --tag performance
---
# Memory Tips Command (/memory:tips)
## 1. Overview
The `memory:tips` command provides **quick note-taking** for capturing:
- Quick ideas and insights
- Code snippets and patterns
- Reminders and follow-ups
- Bug notes and debugging hints
- Performance observations
- Architecture decisions
- Library/tool recommendations
**Core Philosophy**:
- **Speed First**: Minimal friction for capturing thoughts
- **Searchable**: Tagged for easy retrieval
- **Context-Aware**: Optional context linking
- **Lightweight**: No complex session analysis
## 2. Parameters
- `<note content>` (Required): The tip/note content to save
- `--tag <tags>` (Optional): Comma-separated tags for categorization
- `--context <context>` (Optional): Related context (file, module, feature)
**Examples**:
```bash
/memory:tips "Use Zod for runtime validation - better DX than class-validator"
/memory:tips "Redis connection pool: max 10, min 2" --tag config,redis
/memory:tips "Fix needed: race condition in payment processor" --tag bug,payment --context src/payments
```
## 3. Structured Output Format
```markdown
## Tip ID
TIP-YYYYMMDD-HHMMSS
## Timestamp
YYYY-MM-DD HH:MM:SS
## Project Root
[Absolute path to project root, e.g., D:\Claude_dms3]
## Content
[The tip/note content exactly as provided]
## Tags
[Comma-separated tags, or (none)]
## Context
[Optional context linking - file, module, or feature reference]
## Session Link
[WFS-ID if workflow session active, otherwise (none)]
## Auto-Detected Context
[Files/topics from current conversation if relevant]
```
## 4. Field Definitions
| Field | Purpose | Example |
|-------|---------|---------|
| **Tip ID** | Unique identifier with timestamp | TIP-20260128-143052 |
| **Timestamp** | When tip was created | 2026-01-28 14:30:52 |
| **Project Root** | Current project path | D:\Claude_dms3 |
| **Content** | The actual tip/note | "Use Redis for rate limiting" |
| **Tags** | Categorization labels | architecture, auth, performance |
| **Context** | Related code/feature | src/auth/**, payment-module |
| **Session Link** | Link to workflow session | WFS-auth-20260128 |
| **Auto-Detected Context** | Files from conversation | src/api/handler.ts |
## 5. Execution Flow
### Step 1: Parse Arguments
```javascript
const parseTipsCommand = (input) => {
  // Extract note content: quoted string, or everything up to the first flag
  const contentMatch = input.match(/^"([^"]+)"|^(.+?)(?=\s+--|$)/);
  const content = contentMatch ? (contentMatch[1] || contentMatch[2]).trim() : '';
  // Extract tags (\S+ so hyphenated tags like "error-codes" are kept whole)
  const tagsMatch = input.match(/--tag\s+(\S+)/);
  const tags = tagsMatch ? tagsMatch[1].split(',').map(t => t.trim()) : [];
  // Extract context
  const contextMatch = input.match(/--context\s+(\S+)/);
  const context = contextMatch ? contextMatch[1] : '';
  return { content, tags, context };
};
```
### Step 2: Gather Context
```javascript
const gatherTipContext = async () => {
  // Get project root
  const projectRoot = process.cwd(); // or detect from environment
  // Get current session if active
  const manifest = await mcp__ccw-tools__session_manager({
    operation: "list",
    location: "active"
  });
  const sessionId = manifest.sessions?.[0]?.id || null;
  // Auto-detect files from recent conversation
  const recentFiles = extractRecentFilesFromConversation(); // Last 5 messages
  return {
    projectRoot,
    sessionId,
    autoDetectedContext: recentFiles
  };
};
```
### Step 3: Generate Structured Text
```javascript
const generateTipText = (parsed, context) => {
const timestamp = new Date().toISOString().replace('T', ' ').slice(0, 19);
const tipId = `TIP-${new Date().toISOString().slice(0,10).replace(/-/g, '')}-${new Date().toTimeString().slice(0,8).replace(/:/g, '')}`;
return `## Tip ID
${tipId}
## Timestamp
${timestamp}
## Project Root
${context.projectRoot}
## Content
${parsed.content}
## Tags
${parsed.tags.length > 0 ? parsed.tags.join(', ') : '(none)'}
## Context
${parsed.context || '(none)'}
## Session Link
${context.sessionId || '(none)'}
## Auto-Detected Context
${context.autoDetectedContext.length > 0
? context.autoDetectedContext.map(f => `- ${f}`).join('\n')
: '(none)'}`;
};
```
### Step 4: Save to Core Memory
```javascript
mcp__ccw-tools__core_memory({
operation: "import",
text: structuredText
})
```
**Response Format**:
```json
{
"operation": "import",
"id": "CMEM-YYYYMMDD-HHMMSS",
"message": "Created memory: CMEM-YYYYMMDD-HHMMSS"
}
```
### Step 5: Confirm to User
```
✓ Tip saved successfully
ID: CMEM-YYYYMMDD-HHMMSS
Tags: architecture, auth
Context: src/auth/**
To retrieve: /memory:search "auth patterns"
Or via MCP: core_memory(operation="search", query="auth")
```
## 6. Tag Categories (Suggested)
**Technical**:
- `architecture` - Design decisions and patterns
- `performance` - Optimization insights
- `security` - Security considerations
- `bug` - Bug notes and fixes
- `config` - Configuration settings
- `api` - API design patterns
**Development**:
- `testing` - Test strategies and patterns
- `debugging` - Debugging techniques
- `refactoring` - Refactoring notes
- `documentation` - Doc improvements
**Domain Specific**:
- `auth` - Authentication/authorization
- `database` - Database patterns
- `frontend` - UI/UX patterns
- `backend` - Backend logic
- `devops` - Infrastructure and deployment
**Organizational**:
- `reminder` - Follow-up items
- `research` - Research findings
- `idea` - Feature ideas
- `review` - Code review notes
## 7. Search Integration
Tips can be retrieved using:
```bash
# Via command (if /memory:search exists)
/memory:search "rate limiting"
# Via MCP tool
mcp__ccw-tools__core_memory({
operation: "search",
query: "rate limiting",
source_type: "core_memory",
top_k: 10
})
# Via CLI
ccw core-memory search --query "rate limiting" --top-k 10
```
## 8. Quality Checklist
Before saving:
- [ ] Content is clear and actionable
- [ ] Tags are relevant and consistent
- [ ] Context provides enough reference
- [ ] Auto-detected context is accurate
- [ ] Project root is absolute path
- [ ] Timestamp is properly formatted
## 9. Best Practices
### Good Tips Examples
**Specific and Actionable**:
```
"Use connection pooling for Redis: { max: 10, min: 2, acquireTimeoutMillis: 30000 }"
--tag config,redis
```
**With Context**:
```
"Auth middleware must validate both access and refresh tokens"
--tag security,auth --context src/middleware/auth.ts
```
**Problem + Solution**:
```
"Memory leak fixed by unsubscribing event listeners in componentWillUnmount"
--tag bug,react --context src/components/Chat.tsx
```
### Poor Tips Examples
**Too Vague**:
```
"Fix the bug" --tag bug
```
**Too Long** (use /memory:compact instead):
```
"Here's the complete implementation plan for the entire auth system... [3 paragraphs]"
```
**No Context**:
```
"Remember to update this later"
```
## 10. Use Cases
### During Development
```bash
/memory:tips "JWT secret must be 256-bit minimum" --tag security,auth
/memory:tips "Use debounce (300ms) for search input" --tag performance,ux
```
### After Bug Fixes
```bash
/memory:tips "Race condition in payment: lock with Redis SETNX" --tag bug,payment
```
### Code Review Insights
```bash
/memory:tips "Prefer early returns over nested ifs" --tag style,readability
```
### Architecture Decisions
```bash
/memory:tips "Chose PostgreSQL over MongoDB for ACID compliance" --tag architecture,database
```
### Library Recommendations
```bash
/memory:tips "Zod > Yup for TypeScript validation - better type inference" --tag library,typescript
```
## 11. Notes
- **Frequency**: Use liberally - capture all valuable insights
- **Retrieval**: Search by tags, content, or context
- **Lifecycle**: Tips persist across sessions
- **Organization**: Tags enable filtering and categorization
- **Integration**: Can reference tips in later workflows
- **Lightweight**: No complex session analysis required


@@ -1,517 +0,0 @@
---
name: workflow-skill-memory
description: Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)
argument-hint: "session <session-id> | all"
allowed-tools: Task(*), TodoWrite(*), Bash(*), Read(*), Write(*)
---
# Workflow SKILL Memory Generator
## Overview
Generate SKILL package from archived workflow sessions using agent-driven analysis. Supports single-session incremental updates or parallel processing of all sessions.
**Scope**: Only processes WFS-* workflow sessions. Other session types (e.g., doc sessions) are automatically ignored.
## Usage
```bash
/memory:workflow-skill-memory session WFS-<session-id> # Process single WFS session
/memory:workflow-skill-memory all # Process all WFS sessions in parallel
```
## Execution Modes
### Mode 1: Single Session (`session <session-id>`)
**Purpose**: Incremental update - process one archived session and merge into existing SKILL package
**Workflow**:
1. **Validate session**: Check if session exists in `.workflow/.archives/{session-id}/`
2. **Invoke agent**: Call `universal-executor` to analyze session and update SKILL documents
3. **Agent tasks**:
- Read session data from `.workflow/.archives/{session-id}/`
- Extract lessons, conflicts, and outcomes
- Use Gemini for intelligent aggregation (optional)
- Update or create SKILL documents using templates
- Regenerate SKILL.md index
**Command Example**:
```bash
/memory:workflow-skill-memory session WFS-user-auth
```
**Expected Output**:
```
Session WFS-user-auth processed
Updated:
- sessions-timeline.md (1 session added)
- lessons-learned.md (3 lessons merged)
- conflict-patterns.md (1 conflict added)
- SKILL.md (index regenerated)
```
---
### Mode 2: All Sessions (`all`)
**Purpose**: Full regeneration - process all archived sessions in parallel for complete SKILL package
**Workflow**:
1. **List sessions**: Read manifest.json to get all archived session IDs
2. **Parallel invocation**: Launch multiple `universal-executor` agents in parallel (one per session)
3. **Agent coordination**:
- Each agent processes one session independently
- Agents use Gemini for analysis
- Agents collect data into JSON (no direct file writes)
- Final aggregator agent merges results and generates SKILL documents
**Command Example**:
```bash
/memory:workflow-skill-memory all
```
**Expected Output**:
```
All sessions processed in parallel
Sessions: 8 total
Updated:
- sessions-timeline.md (8 sessions)
- lessons-learned.md (24 lessons aggregated)
- conflict-patterns.md (12 conflicts documented)
- SKILL.md (index regenerated)
```
---
## Implementation Flow
### Phase 1: Validation and Setup
**Step 1.1: Parse Command Arguments**
Extract mode and session ID:
```javascript
if (args === "all") {
  mode = "all"
} else if (args.startsWith("session ")) {
  mode = "session"
  session_id = args.replace("session ", "").trim()
} else {
  ERROR = "Invalid arguments. Usage: session <session-id> | all"
  EXIT
}
```
**Step 1.2: Validate Archive Directory**
```bash
bash(test -d .workflow/.archives && echo "exists" || echo "missing")
```
If missing, report error and exit.
**Step 1.3: Mode-Specific Validation**
**Single Session Mode**:
```bash
# Validate session ID format (must start with WFS-)
if [[ ! "$session_id" =~ ^WFS- ]]; then
  ERROR = "Invalid session ID format. Only WFS-* sessions are supported"
  EXIT
fi
# Check if session exists
bash(test -d .workflow/.archives/{session_id} && echo "exists" || echo "missing")
```
If missing, report error: "Session {session_id} not found in archives"
**All Sessions Mode**:
```bash
# Read manifest and filter only WFS- sessions
bash(cat .workflow/.archives/manifest.json | jq -r '.archives[].session_id | select(startswith("WFS-"))')
```
Store filtered session IDs in array. Ignore doc sessions and other non-WFS sessions.
**Step 1.4: TodoWrite Initialization**
**Single Session Mode**:
```javascript
TodoWrite({todos: [
  {"content": "Validate session existence", "status": "completed", "activeForm": "Validating session"},
  {"content": "Invoke agent to process session", "status": "in_progress", "activeForm": "Invoking agent"},
  {"content": "Verify SKILL package updated", "status": "pending", "activeForm": "Verifying update"}
]})
```
**All Sessions Mode**:
```javascript
TodoWrite({todos: [
  {"content": "Read manifest and list sessions", "status": "completed", "activeForm": "Reading manifest"},
  {"content": "Invoke agents in parallel", "status": "in_progress", "activeForm": "Invoking agents"},
  {"content": "Verify SKILL package regenerated", "status": "pending", "activeForm": "Verifying regeneration"}
]})
```
---
### Phase 2: Agent Invocation
#### Single Session Mode - Agent Task
Invoke `universal-executor` with session-specific task:
**Agent Prompt Structure**:
```
Task: Process Workflow Session for SKILL Package
Context:
- Session ID: {session_id}
- Session Path: .workflow/.archives/{session_id}/
- Mode: Incremental update
Objectives:
1. Read session data:
- workflow-session.json (metadata)
- IMPL_PLAN.md (implementation summary)
- TODO_LIST.md (if exists)
- manifest.json entry for lessons
2. Extract key information:
- Description, tags, metrics
- Lessons (successes, challenges, watch_patterns)
- Context package path (reference only)
- Key outcomes from IMPL_PLAN
3. Use Gemini for aggregation (optional):
Command pattern:
ccw cli -p "
PURPOSE: Extract lessons and conflicts from workflow session
TASK:
• Analyze IMPL_PLAN and lessons from manifest
• Identify success patterns and challenges
• Extract conflict patterns with resolutions
• Categorize by functional domain
MODE: analysis
CONTEXT: @IMPL_PLAN.md @workflow-session.json
EXPECTED: Structured lessons and conflicts in JSON format
RULES: Template reference from skill-aggregation.txt
" --tool gemini --mode analysis --cd .workflow/.archives/{session_id}
3.5. **Generate SKILL.md Description** (CRITICAL for auto-loading):
Read skill-index.txt template Section: "Description Field Generation"
Execute command to get project root:
```bash
git rev-parse --show-toplevel # Example output: /d/Claude_dms3
```
Apply description format:
```
Progressive workflow development history (located at {project_root}).
Load this SKILL when continuing development, analyzing past implementations,
or learning from workflow history, especially when no relevant context exists in memory.
```
**Validation**:
- [ ] Path uses forward slashes (not backslashes)
- [ ] All three use cases present
- [ ] Trigger optimization phrase included
- [ ] Path is absolute (starts with / or drive letter)
4. Read templates for formatting guidance:
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-sessions-timeline.txt
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-lessons-learned.txt
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-conflict-patterns.txt
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-index.txt
**CRITICAL**: From skill-index.txt, read these sections:
- "Description Field Generation" - Rules for generating description
- "Variable Substitution Guide" - All required variables
- "Generation Instructions" - Step-by-step generation process
- "Validation Checklist" - Final validation steps
5. Update SKILL documents:
- sessions-timeline.md: Append new session, update domain grouping
- lessons-learned.md: Merge lessons into categories, update frequencies
- conflict-patterns.md: Add conflicts, update recurring pattern frequencies
- SKILL.md: Regenerate index with updated counts
**For SKILL.md generation**:
- Follow "Generation Instructions" from skill-index.txt (Steps 1-7)
- Use git command for project_root: `git rev-parse --show-toplevel`
- Apply "Description Field Generation" rules
- Validate using "Validation Checklist"
- Increment version (patch level)
6. Return result JSON:
   {
     "status": "success",
     "session_id": "{session_id}",
     "updates": {
       "sessions_added": 1,
       "lessons_merged": count,
       "conflicts_added": count
     }
   }
```
---
#### All Sessions Mode - Parallel Agent Tasks
**Step 2.1: Launch parallel session analyzers**
Invoke multiple agents in parallel (one message with multiple Task calls):
**Per-Session Agent Prompt**:
```
Task: Extract Session Data for SKILL Package
Context:
- Session ID: {session_id}
- Mode: Parallel analysis (no direct file writes)
Objectives:
1. Read session data (same as single mode)
2. Extract key information (same as single mode)
3. Use Gemini for analysis (same as single mode)
4. Return structured data JSON:
   {
     "status": "success",
     "session_id": "{session_id}",
     "data": {
       "metadata": {
         "description": "...",
         "archived_at": "...",
         "tags": [...],
         "metrics": {...}
       },
       "lessons": {
         "successes": [...],
         "challenges": [...],
         "watch_patterns": [...]
       },
       "conflicts": [
         {
           "type": "architecture|dependencies|testing|performance",
           "pattern": "...",
           "resolution": "...",
           "code_impact": [...]
         }
       ],
       "impl_summary": "First 200 chars of IMPL_PLAN",
       "context_package_path": "..."
     }
   }
```
**Step 2.2: Aggregate results**
After all session agents complete, invoke aggregator agent:
**Aggregator Agent Prompt**:
```
Task: Aggregate Session Results and Generate SKILL Package
Context:
- Mode: Full regeneration
- Input: JSON results from {session_count} session agents
Objectives:
1. Aggregate all session data:
- Collect metadata from all sessions
- Merge lessons by category
- Group conflicts by type
- Sort sessions by date
2. Use Gemini for final aggregation:
ccw cli -p "
PURPOSE: Aggregate lessons and conflicts from all workflow sessions
TASK:
• Group successes by functional domain
• Categorize challenges by severity (HIGH/MEDIUM/LOW)
• Identify recurring conflict patterns
• Calculate frequencies and prioritize
MODE: analysis
CONTEXT: [Provide aggregated JSON data]
EXPECTED: Final aggregated structure for SKILL documents
RULES: Template reference from skill-aggregation.txt
" --tool gemini --mode analysis
3. Read templates for formatting (same 4 templates as single mode)
4. Generate all SKILL documents:
- sessions-timeline.md (all sessions, sorted by date)
- lessons-learned.md (aggregated lessons with frequencies)
- conflict-patterns.md (recurring patterns with resolutions)
- SKILL.md (index with progressive loading)
5. Write files to .claude/skills/workflow-progress/
6. Return result JSON:
   {
     "status": "success",
     "sessions_processed": count,
     "files_generated": ["SKILL.md", "sessions-timeline.md", ...],
     "summary": {
       "total_sessions": count,
       "functional_domains": [...],
       "date_range": "...",
       "lessons_count": count,
       "conflicts_count": count
     }
   }
```
---
### Phase 3: Verification
**Step 3.1: Check SKILL Package Files**
```bash
bash(ls -lh .claude/skills/workflow-progress/)
```
Verify all 4 files exist:
- SKILL.md
- sessions-timeline.md
- lessons-learned.md
- conflict-patterns.md
**Step 3.2: TodoWrite Completion**
Mark all tasks as completed.
**Step 3.3: Display Summary**
**Single Session Mode**:
```
Session {session_id} processed successfully
Updated:
- sessions-timeline.md
- lessons-learned.md
- conflict-patterns.md
- SKILL.md
SKILL Location: .claude/skills/workflow-progress/SKILL.md
```
**All Sessions Mode**:
```
All sessions processed in parallel
Sessions: {count} total
Functional Domains: {domain_list}
Date Range: {earliest} - {latest}
Generated:
- sessions-timeline.md ({count} sessions)
- lessons-learned.md ({lessons_count} lessons)
- conflict-patterns.md ({conflicts_count} conflicts)
- SKILL.md (4-level progressive loading)
SKILL Location: .claude/skills/workflow-progress/SKILL.md
Usage:
- Level 0: Quick refresh (~2K tokens)
- Level 1: Recent history (~8K tokens)
- Level 2: Complete analysis (~25K tokens)
- Level 3: Deep dive (~40K tokens)
```
---
## Agent Guidelines
### Agent Capabilities
**universal-executor agents can**:
- Read files from `.workflow/.archives/`
- Execute bash commands
- Call Gemini CLI for intelligent analysis
- Read template files for formatting guidance
- Write SKILL package files (single mode) or return JSON (parallel mode)
- Return structured results
### Gemini Usage Pattern
**When to use Gemini**:
- Aggregating lessons from multiple sources
- Identifying recurring patterns
- Classifying conflicts by type and severity
- Extracting structured data from IMPL_PLAN
**Fallback Strategy**: If Gemini fails or times out, use direct file parsing with structured extraction logic.
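A minimal sketch of that fallback, with a timeout race around the Gemini call; `runGeminiAnalysis` and `parseSessionDirectly` are hypothetical stand-ins for the `ccw cli` invocation and the structured-extraction logic:
```javascript
const analyzeSession = async (sessionPath) => {
  try {
    // Fail over if Gemini errors out or exceeds two minutes
    return await Promise.race([
      runGeminiAnalysis(sessionPath),
      new Promise((_, reject) =>
        setTimeout(() => reject(new Error('gemini timeout')), 120_000)
      )
    ]);
  } catch (err) {
    console.warn(`Gemini unavailable (${err.message}); using direct parsing`);
    return parseSessionDirectly(sessionPath);
  }
};
```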
---
## Template System
### Template Files
All templates located in: `~/.claude/workflows/cli-templates/prompts/workflow/`
1. **skill-sessions-timeline.txt**: Format for sessions-timeline.md
2. **skill-lessons-learned.txt**: Format for lessons-learned.md
3. **skill-conflict-patterns.txt**: Format for conflict-patterns.md
4. **skill-index.txt**: Format for SKILL.md index
5. **skill-aggregation.txt**: Rules for Gemini aggregation (existing)
### Template Usage in Agent
**Agents read templates to understand**:
- File structure and markdown format
- Data sources (which files to read)
- Update strategy (incremental vs full)
- Formatting rules and conventions
- Aggregation logic (for Gemini)
**Templates are NOT shown in this command documentation** - agents read them directly as needed.
---
## Error Handling
### Validation Errors
- **No archives directory**: "Error: No workflow archives found at .workflow/.archives/"
- **Invalid session ID format**: "Error: Invalid session ID format. Only WFS-* sessions are supported"
- **Session not found**: "Error: Session {session_id} not found in archives"
- **No WFS sessions in manifest**: "Error: No WFS-* workflow sessions found in manifest.json"
### Agent Errors
- If agent fails, report error message from agent result
- If Gemini times out, agents use fallback direct parsing
- If template read fails, agents use inline format
### Recovery
- Single session mode: Can be retried without affecting other sessions
- All sessions mode: If one agent fails, others continue; retry failed sessions individually
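A minimal sketch of that recovery policy for all-sessions mode, assuming a hypothetical `processSession` wrapper around one agent invocation:
```javascript
const processAllSessions = async (sessionIds) => {
  // Failures do not abort the batch
  const results = await Promise.allSettled(sessionIds.map(processSession));
  const failed = sessionIds.filter((_, i) => results[i].status === 'rejected');
  // Retry failed sessions individually, once each
  for (const id of failed) {
    try {
      await processSession(id);
    } catch (err) {
      console.error(`Session ${id} failed twice: ${err.message}`);
    }
  }
};
```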
## Integration
### Called by `/workflow:session:complete`
Automatically invoked after session archival:
```bash
SlashCommand(command="/memory:workflow-skill-memory session {session_id}")
```
### Manual Invocation
Users can manually process sessions:
```bash
/memory:workflow-skill-memory session WFS-custom-feature # Single session
/memory:workflow-skill-memory all # Full regeneration
```


@@ -1,204 +0,0 @@
---
name: breakdown
description: Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order
argument-hint: "task-id"
---
# Task Breakdown Command (/task:breakdown)
## Overview
Breaks down complex tasks into executable subtasks with context inheritance and agent assignment.
## Core Principles
**File Cohesion:** Related files must stay in the same task
**10-Task Limit:** Total task count cannot exceed 10 (exceeding it triggers re-scoping)
## Core Features
**CRITICAL**: Manual breakdown with safety controls to prevent file conflicts and task limit violations.
### Breakdown Process
1. **Session Check**: Verify active session contains parent task
2. **Task Validation**: Ensure parent is `pending` status
3. **10-Task Limit Check**: Verify breakdown won't exceed total limit
4. **Manual Decomposition**: User defines subtasks with validation
5. **File Conflict Detection**: Warn if same files appear in multiple subtasks
6. **Similar Function Warning**: Alert if subtasks have overlapping functionality
7. **Context Distribution**: Inherit parent requirements and scope
8. **Agent Assignment**: Auto-assign agents based on subtask type
9. **TODO_LIST Update**: Regenerate TODO_LIST.md with new structure
### Breakdown Rules
- Only `pending` tasks can be broken down
- **Manual breakdown only**: Automated breakdown disabled to prevent violations
- Parent becomes `container` status (not executable)
- Subtasks use format: IMPL-N.M (max 2 levels)
- Context flows from parent to subtasks
- All relationships tracked in JSON
- **10-task limit enforced**: Breakdown rejected if total would exceed 10 tasks
- **File cohesion preserved**: Same files cannot be split across subtasks
## Usage
### Basic Breakdown
```bash
/task:breakdown impl-1
```
Interactive process:
```
Task: Build authentication module
Current total tasks: 6/10
MANUAL BREAKDOWN REQUIRED
Define subtasks manually (remaining capacity: 4 tasks):
1. Enter subtask title: User authentication core
Focus files: models/User.js, routes/auth.js, middleware/auth.js
2. Enter subtask title: OAuth integration
Focus files: services/OAuthService.js, routes/oauth.js
FILE CONFLICT DETECTED:
- routes/auth.js appears in multiple subtasks
- Recommendation: Merge related authentication routes
SIMILAR FUNCTIONALITY WARNING:
- "User authentication" and "OAuth integration" both handle auth
- Consider combining into single task
# Use AskUserQuestion for confirmation
AskUserQuestion({
  questions: [{
    question: "File conflicts and/or similar functionality detected. How do you want to proceed?",
    header: "Confirm",
    options: [
      { label: "Proceed with breakdown", description: "Accept the risks and create the subtasks as defined." },
      { label: "Restart breakdown", description: "Discard current subtasks and start over." },
      { label: "Cancel breakdown", description: "Abort the operation and leave the parent task as is." }
    ],
    multiSelect: false
  }]
})
User selected: "Proceed with breakdown"
Task IMPL-1 broken down:
IMPL-1: Build authentication module (container)
├── IMPL-1.1: User authentication core -> @code-developer
└── IMPL-1.2: OAuth integration -> @code-developer
Files updated: .task/IMPL-1.json + 2 subtask files + TODO_LIST.md
```
## Decomposition Logic
### Agent Assignment
- **Design/Planning** → `@planning-agent`
- **Implementation** → `@code-developer`
- **Testing** → `@code-developer` (type: "test-gen")
- **Test Validation** → `@test-fix-agent` (type: "test-fix")
- **Review** → `@universal-executor` (optional)
### Context Inheritance
- Subtasks inherit parent requirements
- Scope refined for specific subtask
- Implementation details distributed appropriately
## Safety Controls
### File Conflict Detection
**Validates file cohesion across subtasks:**
- Scans `focus_paths` in all subtasks
- Warns if same file appears in multiple subtasks
- Suggests merging subtasks with overlapping files
- Blocks breakdown if critical conflicts detected
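A minimal sketch of the scan, assuming each subtask carries a `focus_paths` array as in the task JSON schema:
```javascript
const detectFileConflicts = (subtasks) => {
  const owners = new Map(); // file -> ids of subtasks claiming it
  for (const st of subtasks) {
    for (const file of st.focus_paths || []) {
      owners.set(file, [...(owners.get(file) || []), st.id]);
    }
  }
  // Any file claimed by more than one subtask breaks file cohesion
  return [...owners.entries()]
    .filter(([, ids]) => ids.length > 1)
    .map(([file, ids]) => ({ file, subtasks: ids }));
};
```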
### Similar Functionality Detection
**Prevents functional overlap:**
- Analyzes subtask titles for similar keywords
- Warns about potential functional redundancy
- Suggests consolidation of related functionality
- Examples: "user auth" + "login system" → merge recommendation
### 10-Task Limit Enforcement
**Hard limit compliance:**
- Counts current total tasks in session
- Calculates breakdown impact on total
- Rejects breakdown if would exceed 10 tasks
- Suggests re-scoping if limit reached
### Manual Control Requirements
**User-driven breakdown only:**
- No automatic subtask generation
- User must define each subtask title and scope
- Real-time validation during input
- Confirmation required before execution
## Implementation Details
- Complete task JSON schema
- Implementation field structure
- Context inheritance rules
- Agent assignment logic
## Validation
### Pre-breakdown Checks
1. Active session exists
2. Task found in session
3. Task status is `pending`
4. Not already broken down
5. **10-task limit compliance**: Total tasks + new subtasks ≤ 10
6. **Manual mode enabled**: No automatic breakdown allowed
### Post-breakdown Actions
1. Update parent to `container` status
2. Create subtask JSON files
3. Update parent subtasks list
4. Update session stats
5. **Regenerate TODO_LIST.md** with new hierarchy
6. Validate file paths in focus_paths
7. Update session task count
## Examples
### Basic Breakdown
```bash
/task:breakdown impl-1
impl-1: Build authentication (container)
├── impl-1.1: Design schema -> @planning-agent
├── impl-1.2: Implement logic + tests -> @code-developer
└── impl-1.3: Execute & fix tests -> @test-fix-agent
```
## Error Handling
```bash
# Task not found
Task IMPL-5 not found
# Already broken down
Task IMPL-1 already has subtasks
# Wrong status
Cannot breakdown completed task IMPL-2
# 10-task limit exceeded
Breakdown would exceed 10-task limit (current: 8, proposed: 4)
Suggestion: Re-scope project into smaller iterations
# File conflicts detected
File conflict: routes/auth.js appears in IMPL-1.1 and IMPL-1.2
Recommendation: Merge subtasks or redistribute files
# Similar functionality warning
Similar functions detected: "user login" and "authentication"
Consider consolidating related functionality
# Manual breakdown required
Automatic breakdown disabled. Use manual breakdown process.
```
**System ensures**: Manual breakdown control with file cohesion enforcement, similar functionality detection, and 10-task limit compliance


@@ -1,152 +0,0 @@
---
name: create
description: Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis
argument-hint: "\"task title\""
---
# Task Create Command (/task:create)
## Overview
Creates new implementation tasks with automatic context awareness and ID generation.
## Core Principles
**Task System:** @~/.claude/workflows/task-core.md
## Core Features
### Automatic Behaviors
- **ID Generation**: Auto-generates IMPL-N format (max 2 levels)
- **Context Inheritance**: Inherits from active workflow session
- **JSON Creation**: Creates task JSON in active session
- **Status Setting**: Initial status = "pending"
- **Agent Assignment**: Suggests agent based on task type
- **Session Integration**: Updates workflow session stats
### Context Awareness
- Validates active workflow session exists
- Avoids duplicate task IDs
- Inherits session requirements and scope
- Suggests task relationships
## Usage
### Basic Creation
```bash
/task:create "Build authentication module"
```
Output:
```
Task created: IMPL-1
Title: Build authentication module
Type: feature
Agent: code-developer
Status: pending
```
### Task Types
- `feature` - New functionality (default)
- `bugfix` - Bug fixes
- `refactor` - Code improvements
- `test` - Test implementation
- `docs` - Documentation
## Task Creation Process
1. **Session Validation**: Check active workflow session
2. **ID Generation**: Auto-increment IMPL-N
3. **Context Inheritance**: Load workflow context
4. **Implementation Setup**: Initialize implementation field
5. **Agent Assignment**: Select appropriate agent
6. **File Creation**: Save JSON to .task/ directory
7. **Session Update**: Update workflow stats
**Task Schema**: See @~/.claude/workflows/task-core.md for complete JSON structure
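A minimal sketch of the ID auto-increment step: scan the session's `.task/` directory for existing `IMPL-N.json` files and take the next number (top-level tasks only):
```javascript
const fs = require('fs');

const nextTaskId = (taskDir) => {
  const nums = fs.readdirSync(taskDir)
    .map(f => /^IMPL-(\d+)\.json$/.exec(f)) // ignores subtasks like IMPL-1.2.json
    .filter(Boolean)
    .map(m => Number(m[1]));
  return `IMPL-${nums.length ? Math.max(...nums) + 1 : 1}`;
};
```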
## Implementation Field Setup
### Auto-Population Strategy
- **Detailed info**: Extract from task description and scope
- **Missing info**: Initialize `pre_analysis` as a multi-step array so the gaps can be filled by a later analysis pass
- **Basic structure**: Initialize with standard template
### Analysis Triggers
When implementation details incomplete:
```bash
Task requires analysis for implementation details
Suggest running: gemini analysis for file locations and dependencies
```
## File Management
### JSON Task File
- **Location**: `.task/IMPL-[N].json` in active session
- **Content**: Complete task with implementation field
- **Updates**: Session stats only
### Simple Process
1. Validate session and inputs
2. Generate task JSON
3. Update session stats
4. Notify completion
## Context Inheritance
Tasks inherit from:
1. **Active Session** - Requirements and scope from workflow-session.json
2. **Planning Document** - Context from IMPL_PLAN.md
3. **Parent Task** - For subtasks (IMPL-N.M format)
## Agent Assignment
Based on task type and title keywords:
- **Build/Implement** → @code-developer
- **Design/Plan** → @planning-agent
- **Test Generation** → @code-developer (type: "test-gen")
- **Test Execution/Fix** → @test-fix-agent (type: "test-fix")
- **Review/Audit** → @universal-executor (optional, only when explicitly requested)
## Validation Rules
1. **Session Check** - Active workflow session required
2. **Duplicate Check** - Avoid similar task titles
3. **ID Uniqueness** - Auto-increment task IDs
4. **Schema Validation** - Ensure proper JSON structure
## Error Handling
```bash
# No workflow session
No active workflow found
Use: /workflow init "project name"
# Duplicate task
Similar task exists: IMPL-3
Continue anyway? (y/n)
# Max depth exceeded
Cannot create IMPL-1.2.1 (max 2 levels)
Use: IMPL-2 for new main task
```
## Examples
### Feature Task
```bash
/task:create "Implement user authentication"
Created IMPL-1: Implement user authentication
Type: feature
Agent: code-developer
Status: pending
```
### Bug Fix
```bash
/task:create "Fix login validation bug" --type=bugfix
Created IMPL-2: Fix login validation bug
Type: bugfix
Agent: code-developer
Status: pending
```


@@ -1,270 +0,0 @@
---
name: execute
description: Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking
argument-hint: "task-id"
---
## Command Overview: /task:execute
**Purpose**: Executes tasks using intelligent agent selection, context preparation, and progress tracking.
## Execution Modes
- **auto (Default)**
- Fully autonomous execution with automatic agent selection.
- Provides progress updates at each checkpoint.
- Automatically completes the task when done.
- **guided**
- Executes step-by-step, requiring user confirmation at each checkpoint.
- Allows for dynamic adjustments and manual review during the process.
- **review**
- Optional manual review using `@universal-executor`.
- Used only when explicitly requested by user.
## Agent Selection Logic
The system determines the appropriate agent for a task using the following logic.
```pseudo
FUNCTION select_agent(task, agent_override):
  // A manual override always takes precedence.
  // Corresponds to the --agent=<agent-type> flag.
  IF agent_override IS NOT NULL:
    RETURN agent_override
  // If no override, select based on keywords in the task title.
  ELSE:
    CASE task.title:
      WHEN CONTAINS "Build API", "Implement":
        RETURN "@code-developer"
      WHEN CONTAINS "Design schema", "Plan":
        RETURN "@planning-agent"
      WHEN CONTAINS "Write tests", "Generate tests":
        RETURN "@code-developer" // type: test-gen
      WHEN CONTAINS "Execute tests", "Fix tests", "Validate":
        RETURN "@test-fix-agent" // type: test-fix
      WHEN CONTAINS "Review code":
        RETURN "@universal-executor" // Optional manual review
      DEFAULT:
        RETURN "@code-developer" // Default agent
    END CASE
END FUNCTION
```
## Core Execution Protocol
`Pre-Execution` -> `Execution` -> `Post-Execution`
### Pre-Execution Protocol
`Validate Task & Dependencies` **->** `Prepare Execution Context` **->** `Coordinate with TodoWrite`
- **Validation**: Checks for the task's JSON file in `.task/` and resolves its dependencies.
- **Context Preparation**: Loads task and workflow context, preparing it for the selected agent.
- **Session Context Injection**: Provides workflow directory paths to agents for TODO_LIST.md and summary management.
- **TodoWrite Coordination**: Generates execution Todos and checkpoints, syncing with `TODO_LIST.md`.
### Post-Execution Protocol
`Update Task Status` **->** `Generate Summary` **->** `Save Artifacts` **->** `Sync All Progress` **->** `Validate File Integrity`
- Updates status in the task's JSON file and `TODO_LIST.md`.
- Creates a summary in `.summaries/`.
- Stores outputs and syncs progress across the entire workflow session.
### Task & Subtask Execution Logic
This logic defines how single, multiple, or parent tasks are handled.
```pseudo
FUNCTION execute_task_command(task_id, mode, parallel_flag):
  // Handle parent tasks by executing their subtasks.
  IF is_parent_task(task_id):
    subtasks = get_subtasks(task_id)
    EXECUTE_SUBTASK_BATCH(subtasks, mode)
  // Handle wildcard execution (e.g., IMPL-001.*)
  ELSE IF task_id CONTAINS "*":
    subtasks = find_matching_tasks(task_id)
    IF parallel_flag IS true:
      EXECUTE_IN_PARALLEL(subtasks)
    ELSE:
      FOR each subtask in subtasks:
        EXECUTE_SINGLE_TASK(subtask, mode)
  // Default case for a single task ID.
  ELSE:
    EXECUTE_SINGLE_TASK(task_id, mode)
END FUNCTION
```
### Error Handling & Recovery Logic
```pseudo
FUNCTION pre_execution_check(task):
  // Ensure dependencies are met before starting.
  IF task.dependencies ARE NOT MET:
    LOG_ERROR("Cannot execute " + task.id)
    LOG_INFO("Blocked by: " + unmet_dependencies)
    HALT_EXECUTION()
END FUNCTION

FUNCTION on_execution_failure(checkpoint):
  // Provide user with recovery options upon failure.
  LOG_WARNING("Execution failed at checkpoint " + checkpoint)
  PRESENT_OPTIONS([
    "Retry from checkpoint",
    "Retry from beginning",
    "Switch to guided mode",
    "Abort execution"
  ])
  AWAIT user_input
  // System performs the selected action.
END FUNCTION
```
### Simplified Context Structure (JSON)
This is the simplified data structure loaded to provide context for task execution.
```json
{
  "task": {
    "id": "IMPL-1",
    "title": "Build authentication module",
    "type": "feature",
    "status": "active",
    "agent": "code-developer",
    "context": {
      "requirements": ["JWT authentication", "OAuth2 support"],
      "scope": ["src/auth/*", "tests/auth/*"],
      "acceptance": ["Module handles JWT tokens", "OAuth2 flow implemented"],
      "inherited_from": "WFS-user-auth"
    },
    "relations": {
      "parent": null,
      "subtasks": ["IMPL-1.1", "IMPL-1.2"],
      "dependencies": ["IMPL-0"]
    },
    "implementation": {
      "files": [
        {
          "path": "src/auth/login.ts",
          "location": {
            "function": "authenticateUser",
            "lines": "25-65",
            "description": "Main authentication logic"
          },
          "original_code": "// Code snippet extracted via gemini analysis",
          "modifications": {
            "current_state": "Basic password authentication only",
            "proposed_changes": [
              "Add JWT token generation",
              "Implement OAuth2 callback handling",
              "Add multi-factor authentication support"
            ],
            "logic_flow": [
              "validateCredentials() ───► checkUserExists()",
              "◊─── if password ───► generateJWT() ───► return token",
              "◊─── if OAuth ───► validateOAuthCode() ───► exchangeForToken()",
              "◊─── if MFA ───► sendMFACode() ───► awaitVerification()"
            ],
            "reason": "Support modern authentication standards and security requirements",
            "expected_outcome": "Comprehensive authentication system supporting multiple methods"
          }
        }
      ],
      "context_notes": {
        "dependencies": ["jsonwebtoken", "passport", "speakeasy"],
        "affected_modules": ["user-session", "auth-middleware", "api-routes"],
        "risks": [
          "Breaking changes to existing login endpoints",
          "Token storage and rotation complexity",
          "OAuth provider configuration dependencies"
        ],
        "performance_considerations": "JWT validation adds ~10ms per request, OAuth callbacks may timeout",
        "error_handling": "Ensure sensitive authentication errors don't leak user enumeration data"
      },
      "pre_analysis": [
        {
          "action": "analyze patterns",
          "template": "~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt",
          "method": "gemini"
        }
      ]
    }
  },
  "workflow": {
    "session": "WFS-user-auth",
    "phase": "IMPLEMENT",
    "session_context": {
      "workflow_directory": ".workflow/active/WFS-user-auth/",
      "todo_list_location": ".workflow/active/WFS-user-auth/TODO_LIST.md",
      "summaries_directory": ".workflow/active/WFS-user-auth/.summaries/",
      "task_json_location": ".workflow/active/WFS-user-auth/.task/"
    }
  },
  "execution": {
    "agent": "code-developer",
    "mode": "auto",
    "attempts": 0
  }
}
```
### Agent-Specific Context
Different agents receive context tailored to their function, including implementation details:
**`@code-developer`**:
- Complete implementation.files array with file paths and locations
- original_code snippets and proposed_changes for precise modifications
- logic_flow diagrams for understanding data flow
- Dependencies and affected modules for integration planning
- Performance and error handling considerations
**`@planning-agent`**:
- High-level requirements, constraints, success criteria
- Implementation risks and mitigation strategies
- Architecture implications from implementation.context_notes
**`@test-fix-agent`**:
- Test files to execute from task.context.focus_paths
- Source files to fix from implementation.files[].path
- Expected behaviors from implementation.modifications.logic_flow
- Error conditions to validate from implementation.context_notes.error_handling
- Performance requirements from implementation.context_notes.performance_considerations
**`@universal-executor`**:
- Used for optional manual reviews when explicitly requested
- Code quality standards and implementation patterns
- Security considerations from implementation.context_notes.risks
- Dependency validation from implementation.context_notes.dependencies
- Architecture compliance checks
### Simplified File Output
- **Task JSON File (`.task/<task-id>.json`)**: Updated with status and last attempt time only.
- **Session File (`workflow-session.json`)**: Updated task stats (completed count).
- **Summary File**: Generated in `.summaries/` upon completion (optional).
### Simplified Summary Template
Optional summary file generated at `.summaries/IMPL-[task-id]-summary.md`.
```markdown
# Task Summary: IMPL-1 Build Authentication Module
## What Was Done
- Created src/auth/login.ts with JWT validation
- Added tests in tests/auth.test.ts
## Execution Results
- **Agent**: code-developer
- **Status**: completed
## Files Modified
- `src/auth/login.ts` (created)
- `tests/auth.test.ts` (created)
```


@@ -1,437 +0,0 @@
---
name: replan
description: Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json
argument-hint: "task-id [\"text\"|file.md] | --batch [verification-report.md]"
allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
---
# Task Replan Command (/task:replan)
> **⚠️ DEPRECATION NOTICE**: This command is maintained for backward compatibility. For new workflows, use `/workflow:replan` which provides:
> - Session-level replanning with comprehensive artifact updates
> - Interactive boundary clarification
> - Updates to IMPL_PLAN.md, TODO_LIST.md, and session metadata
> - Better integration with workflow sessions
>
> **Migration**: Replace `/task:replan IMPL-1 "changes"` with `/workflow:replan IMPL-1 "changes"`
## Overview
Replans individual tasks or batch processes multiple tasks with change tracking and backup management.
**Modes**:
- **Single Task Mode**: Replan one task with specific changes
- **Batch Mode**: Process multiple tasks from action-plan verification report
## Key Features
- **Single/Batch Operations**: Single task or multiple tasks from verification report
- **Multiple Input Sources**: Text, files, or verification report
- **Backup Management**: Automatic backup of previous versions
- **Change Documentation**: Track all modifications
- **Progress Tracking**: TodoWrite integration for batch operations
**CRITICAL**: Validates active session before replanning
## Operation Modes
### Single Task Mode
#### Direct Text (Default)
```bash
/task:replan IMPL-1 "Add OAuth2 authentication support"
```
#### File-based Input
```bash
/task:replan IMPL-1 updated-specs.md
```
Supports: .md, .txt, .json, .yaml
#### Interactive Mode
```bash
/task:replan IMPL-1 --interactive
```
Guided step-by-step modification process with validation
### Batch Mode
#### From Verification Report
```bash
/task:replan --batch ACTION_PLAN_VERIFICATION.md
```
**Workflow**:
1. Parse verification report to extract replan recommendations
2. Create TodoWrite task list for all modifications
3. Process each task sequentially with confirmation
4. Track progress and generate summary report
**Auto-detection**: If input file contains "Action Plan Verification Report" header, automatically enters batch mode
## Replanning Process
### Single Task Process
1. **Load & Validate**: Read task JSON and validate session
2. **Parse Input**: Process changes from input source
3. **Create Backup**: Save previous version to backup folder
4. **Update Task**: Modify JSON structure and relationships
5. **Save Changes**: Write updated task and increment version
6. **Update Session**: Reflect changes in workflow stats
### Batch Process
1. **Parse Verification Report**: Extract all replan recommendations
2. **Initialize TodoWrite**: Create task list for tracking
3. **For Each Task**:
- Mark todo as in_progress
- Load and validate task JSON
- Create backup
- Apply recommended changes
- Save updated task
- Mark todo as completed
4. **Generate Summary**: Report all changes and backup locations
## Backup Management
### Backup Tracking
Tasks maintain backup history:
```json
{
  "id": "IMPL-1",
  "version": "1.2",
  "replan_history": [
    {
      "version": "1.2",
      "reason": "Add OAuth2 support",
      "input_source": "direct_text",
      "backup_location": ".task/backup/IMPL-1-v1.1.json",
      "timestamp": "2025-10-17T10:30:00Z"
    }
  ]
}
```
**Complete schema**: See @~/.claude/workflows/task-core.md
### File Structure
```
.task/
├── IMPL-1.json # Current version
├── backup/
│ ├── IMPL-1-v1.0.json # Original version
│ ├── IMPL-1-v1.1.json # Previous backup
│ └── IMPL-1-v1.2.json # Latest backup
└── [new subtasks as needed]
```
**Backup Naming**: `{task-id}-v{version}.json`
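A minimal sketch of the backup step itself, using Node's `fs` and the naming convention above: copy the current task JSON aside before writing the new version, then record the location in `replan_history`:
```javascript
const fs = require('fs');
const path = require('path');

const backupTask = (taskDir, task) => {
  const backupDir = path.join(taskDir, 'backup');
  fs.mkdirSync(backupDir, { recursive: true });
  const backupPath = path.join(backupDir, `${task.id}-v${task.version}.json`);
  fs.copyFileSync(path.join(taskDir, `${task.id}.json`), backupPath);
  return backupPath; // stored as replan_history[].backup_location
};
```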
## Implementation Updates
### Change Detection
Tracks modifications to:
- Files in implementation.files array
- Dependencies and affected modules
- Risk assessments and performance notes
- Logic flows and code locations
### Analysis Triggers
May require gemini re-analysis when:
- New files need code extraction
- Function locations change
- Dependencies require re-evaluation
## Document Updates
### Planning Document
May update IMPL_PLAN.md sections when task structure changes significantly
### TODO List Sync
If TODO_LIST.md exists, synchronizes:
- New subtasks (with [ ] checkbox)
- Modified tasks (marked as updated)
- Removed subtasks (deleted from list)
## Change Documentation
### Change Summary
Generates brief change log with:
- Version increment (1.1 → 1.2)
- Input source and reason
- Key modifications made
- Files updated/created
- Backup location
## Session Updates
Updates workflow-session.json with:
- Modified task tracking
- Task count changes (if subtasks added/removed)
- Last modification timestamps
## Rollback Support
```bash
/task:replan IMPL-1 --rollback v1.1
Rollback to version 1.1:
- Restore task from backup/.../IMPL-1-v1.1.json
- Remove new subtasks if any
- Update session stats
# Use AskUserQuestion for confirmation
AskUserQuestion({
  questions: [{
    question: "Are you sure you want to roll back this task to a previous version?",
    header: "Confirm",
    options: [
      { label: "Yes, rollback", description: "Restore the task from the selected backup." },
      { label: "No, cancel", description: "Keep the current version of the task." }
    ],
    multiSelect: false
  }]
})
User selected: "Yes, rollback"
Task rolled back to version 1.1
```
## Batch Processing with TodoWrite
### Progress Tracking
When processing multiple tasks, automatically creates TodoWrite task list:
```markdown
**Batch Replan Progress**:
- [x] IMPL-002: Add FR-12 draft saving acceptance criteria
- [x] IMPL-003: Add FR-14 history tracking acceptance criteria
- [ ] IMPL-004: Add FR-09 response surface explicit coverage
- [ ] IMPL-008: Add NFR performance validation steps
```
### Batch Report
After completion, generates summary:
```markdown
## Batch Replan Summary
**Total Tasks**: 4
**Successful**: 3
**Failed**: 1
**Skipped**: 0
### Changes Made
- IMPL-002 v1.0 → v1.1: Added FR-12 acceptance criteria
- IMPL-003 v1.0 → v1.1: Added FR-14 acceptance criteria
- IMPL-004 v1.0 → v1.1: Added FR-09 explicit coverage
### Backups Created
- .task/backup/IMPL-002-v1.0.json
- .task/backup/IMPL-003-v1.0.json
- .task/backup/IMPL-004-v1.0.json
### Errors
- IMPL-008: File not found (task may have been renamed)
```
## Examples
### Single Task - Text Input
```bash
/task:replan IMPL-1 "Add OAuth2 authentication support"
Processing changes...
Proposed updates:
+ Add OAuth2 integration
+ Update authentication flow
# Use AskUserQuestion for confirmation
AskUserQuestion({
  questions: [{
    question: "Do you want to apply these changes to the task?",
    header: "Apply",
    options: [
      { label: "Yes, apply", description: "Create new version with these changes." },
      { label: "No, cancel", description: "Discard changes and keep current version." }
    ],
    multiSelect: false
  }]
})
User selected: "Yes, apply"
Version 1.2 created
Context updated
Backup saved to .task/backup/IMPL-1-v1.1.json
```
### Single Task - File Input
```bash
/task:replan IMPL-2 requirements.md
Loading requirements.md...
Applying specification changes...
Task updated with new requirements
Version 1.1 created
Backup saved to .task/backup/IMPL-2-v1.0.json
```
### Batch Mode - From Verification Report
```bash
/task:replan --batch .workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md
Parsing verification report...
Found 4 tasks requiring replanning:
- IMPL-002: Add FR-12 draft saving acceptance criteria
- IMPL-003: Add FR-14 history tracking acceptance criteria
- IMPL-004: Add FR-09 response surface explicit coverage
- IMPL-008: Add NFR performance validation steps
Creating task tracking list...
Processing IMPL-002...
Backup created: .task/backup/IMPL-002-v1.0.json
Updated to v1.1
Processing IMPL-003...
Backup created: .task/backup/IMPL-003-v1.0.json
Updated to v1.1
Processing IMPL-004...
Backup created: .task/backup/IMPL-004-v1.0.json
Updated to v1.1
Processing IMPL-008...
Backup created: .task/backup/IMPL-008-v1.0.json
Updated to v1.1
Batch replan completed: 4/4 successful
Summary report saved
```
### Batch Mode - Auto-detection
```bash
# If file contains "Action Plan Verification Report", auto-enters batch mode
/task:replan ACTION_PLAN_VERIFICATION.md
Detected verification report format
Entering batch mode...
[same as above]
```
## Error Handling
### Single Task Errors
```bash
# Task not found
Task IMPL-5 not found
Check task ID with /workflow:status
# Task completed
Task IMPL-1 is completed (cannot replan)
Create new task for additional work
# File not found
File requirements.md not found
Check file path
# No input provided
Please specify changes needed
Provide text, file, or verification report
```
### Batch Mode Errors
```bash
# Invalid verification report
File does not contain valid verification report format
Check report structure or use single task mode
# Partial failures
Batch completed with errors: 3/4 successful
Review error details in summary report
# No replan recommendations found
Verification report contains no replan recommendations
Check report content or use /workflow:action-plan-verify first
```
## Batch Mode Integration
### Input Format Expectations
Batch mode parses verification reports looking for:
1. **Required Actions Section**: Commands like `/task:replan IMPL-X "changes"`
2. **Findings Table**: Task IDs with recommendations
3. **Next Actions Section**: Specific replan commands
**Example Patterns**:
```markdown
#### 1. HIGH Priority - Address FR Coverage Gaps
/task:replan IMPL-004 "
Add explicit acceptance criteria:
- FR-09: Response surface 3D visualization
"
#### 2. MEDIUM Priority - Enhance NFR Coverage
/task:replan IMPL-008 "
Add performance testing:
- NFR-01: Load test API endpoints
"
```
### Extraction Logic
1. Scan for `/task:replan` commands in report
2. Extract task ID and change description
3. Group by priority (HIGH, MEDIUM, LOW)
4. Process in priority order with TodoWrite tracking
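A minimal extraction sketch: one regex pass pulls each `/task:replan` command with its quoted change description (the `[\s\S]*?` body also handles the multi-line quoted form shown in the example patterns above):
```javascript
const extractReplanCommands = (reportText) => {
  const pattern = /\/task:replan\s+(IMPL-[\d.]+)\s+"([\s\S]*?)"/g;
  const commands = [];
  let match;
  while ((match = pattern.exec(reportText)) !== null) {
    commands.push({ taskId: match[1], changes: match[2].trim() });
  }
  return commands;
};
```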
### Confirmation Behavior
- **Default**: Confirm each task before applying
- **With `--auto-confirm`**: Apply all changes without prompting
```bash
/task:replan --batch report.md --auto-confirm
```
## Implementation Details
### Backup Management
```typescript
// Backup file naming convention
const backupPath = `.task/backup/${taskId}-v${previousVersion}.json`;

// Backup metadata in task JSON
{
  "replan_history": [
    {
      "version": "1.2",
      "timestamp": "2025-10-17T10:30:00Z",
      "reason": "Add FR-09 explicit coverage",
      "input_source": "batch_verification_report",
      "backup_location": ".task/backup/IMPL-004-v1.1.json"
    }
  ]
}
```
### TodoWrite Integration
```typescript
// Initialize tracking for batch mode
TodoWrite({
  todos: taskList.map(task => ({
    content: `${task.id}: ${task.changeDescription}`,
    status: "pending",
    activeForm: `Replanning ${task.id}`
  }))
});

// Update progress during processing
TodoWrite({
  todos: updateTaskStatus(taskId, "in_progress")
});

// Mark completed
TodoWrite({
  todos: updateTaskStatus(taskId, "completed")
});
```


@@ -1,254 +0,0 @@
---
name: version
description: Display Claude Code version information and check for updates
allowed-tools: Bash(*)
---
# Version Command (/version)
## Purpose
Display local and global installation versions, check for the latest updates from GitHub, and provide upgrade recommendations.
## Execution Flow
1. **Local Version Check**: Read version information from `./.claude/version.json` if it exists.
2. **Global Version Check**: Read version information from `~/.claude/version.json` if it exists.
3. **Fetch Remote Versions**: Use GitHub API to get the latest stable release tag and the latest commit hash from the main branch.
4. **Compare & Suggest**: Compare installed versions with the latest remote versions and provide upgrade suggestions if applicable.
## Step 1: Check Local Version
### Check if local version.json exists
```bash
bash(test -f ./.claude/version.json && echo "found" || echo "not_found")
```
### Read local version (if exists)
```bash
bash(cat ./.claude/version.json)
```
### Extract version with grep/cut (no jq dependency)
```bash
bash(cat ./.claude/version.json | grep -o '"version": *"[^"]*"' | cut -d'"' -f4)
```
### Extract installation date
```bash
bash(cat ./.claude/version.json | grep -o '"installation_date_utc": *"[^"]*"' | cut -d'"' -f4)
```
**Output Format**:
```
Local Version: 3.2.1
Installed: 2025-10-03T12:00:00Z
```
## Step 2: Check Global Version
### Check if global version.json exists
```bash
bash(test -f ~/.claude/version.json && echo "found" || echo "not_found")
```
### Read global version
```bash
bash(cat ~/.claude/version.json)
```
### Extract version
```bash
bash(cat ~/.claude/version.json | grep -o '"version": *"[^"]*"' | cut -d'"' -f4)
```
### Extract installation date
```bash
bash(cat ~/.claude/version.json | grep -o '"installation_date_utc": *"[^"]*"' | cut -d'"' -f4)
```
**Output Format**:
```
Global Version: 3.2.1
Installed: 2025-10-03T12:00:00Z
```
## Step 3: Fetch Latest Stable Release
### Call GitHub API for latest release (with timeout)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null, timeout: 30000)
```
### Extract tag name (version)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"tag_name": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
```
### Extract release name
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"name": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
```
### Extract published date
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"published_at": *"[^"]*"' | cut -d'"' -f4, timeout: 30000)
```
**Output Format**:
```
Latest Stable: v3.2.2
Release: v3.2.2: Independent Test-Gen Workflow with Cross-Session Context
Published: 2025-10-03T04:10:08Z
```
## Step 4: Fetch Latest Main Branch
### Call GitHub API for latest commit on main (with timeout)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null, timeout: 30000)
```
### Extract commit SHA (short)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep -o '"sha": *"[^"]*"' | head -1 | cut -d'"' -f4 | cut -c1-7, timeout: 30000)
```
### Extract commit message (first line only)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep '"message":' | cut -d'"' -f4 | cut -d'\' -f1, timeout: 30000)
```
### Extract commit date
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep -o '"date": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
```
**Output Format**:
```
Latest Dev: a03415b
Message: feat: Add version tracking and upgrade check system
Date: 2025-10-03T04:46:44Z
```
## Step 5: Compare Versions and Suggest Upgrade
### Normalize versions (remove 'v' prefix)
```bash
bash(echo "v3.2.1" | sed 's/^v//')
```
### Compare two versions
```bash
bash(printf "%s\n%s" "3.2.1" "3.2.2" | sort -V | tail -n 1)
```
### Check if versions are equal
```bash
# If equal: Up to date
# If remote newer: Upgrade available
# If local newer: Development version
```
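A minimal sketch of the three-scenario comparison, mirroring `sort -V` semantics for plain `MAJOR.MINOR.PATCH` strings (pre-release suffixes such as `-dev` are stripped before comparing):
```javascript
const cmpVersions = (a, b) => {
  const parse = (v) => v.replace(/^v/, '').split('-')[0].split('.').map(Number);
  const [pa, pb] = [parse(a), parse(b)];
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const diff = (pa[i] || 0) - (pb[i] || 0);
    if (diff) return Math.sign(diff);
  }
  return 0;
};

// cmpVersions(local, remote):
//   0 -> up to date, -1 -> upgrade available, 1 -> local is ahead (dev build)
```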
**Output Scenarios**:
**Scenario 1: Up to date**
```
You are on the latest stable version (3.2.1)
```
**Scenario 2: Upgrade available**
```
A newer stable version is available: v3.2.2
Your version: 3.2.1
To upgrade:
PowerShell: iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1)
Bash: bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.sh)
```
**Scenario 3: Development version**
```
You are running a development version (3.4.0-dev)
This is newer than the latest stable release (v3.3.0)
```
## Simple Bash Commands
### Basic Operations
```bash
# Check local version file
bash(test -f ./.claude/version.json && cat ./.claude/version.json)
# Check global version file
bash(test -f ~/.claude/version.json && cat ~/.claude/version.json)
# Extract version from JSON
bash(cat version.json | grep -o '"version": *"[^"]*"' | cut -d'"' -f4)
# Extract date from JSON
bash(cat version.json | grep -o '"installation_date_utc": *"[^"]*"' | cut -d'"' -f4)
# Fetch latest release (with timeout)
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null, timeout: 30000)
# Extract tag name
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"tag_name": *"[^"]*"' | cut -d'"' -f4, timeout: 30000)
# Extract release name
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"name": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
# Fetch latest commit (with timeout)
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null, timeout: 30000)
# Extract commit SHA
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep -o '"sha": *"[^"]*"' | head -1 | cut -d'"' -f4 | cut -c1-7, timeout: 30000)
# Extract commit message (first line)
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep '"message":' | cut -d'"' -f4 | cut -d'\' -f1, timeout: 30000)
# Compare versions
bash(printf "%s\n%s" "3.2.1" "3.2.2" | sort -V | tail -n 1)
# Remove 'v' prefix
bash(echo "v3.2.1" | sed 's/^v//')
```
## Error Handling
### No installation found
```
WARNING: Claude Code Workflow not installed
Install using:
PowerShell: iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1)
```
### Network error
```
ERROR: Could not fetch latest version from GitHub
Check your network connection
```
### Invalid version.json
```
ERROR: version.json is invalid or corrupted
```
## Design Notes
- Uses simple, direct bash commands instead of complex functions
- Each step is independent and can be executed separately
- Fallback to grep/sed for JSON parsing (no jq dependency required)
- Network calls use curl with error suppression and 30-second timeout
- Version comparison uses `sort -V` for accurate semantic versioning
- Use `/commits/main` API instead of `/branches/main` for more reliable commit info
- Extract first line of commit message using `cut -d'\' -f1` to handle JSON escape sequences
## API Endpoints
### GitHub API Used
- **Latest Release**: `https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest`
- Fields: `tag_name`, `name`, `published_at`
- **Latest Commit**: `https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main`
- Fields: `sha`, `commit.message`, `commit.author.date`
### Timeout Configuration
All network calls should use `timeout: 30000` (30 seconds) to handle slow connections.


@@ -1,447 +0,0 @@
---
name: action-plan-verify
description: Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation
argument-hint: "[optional: --session session-id]"
allowed-tools: Read(*), TodoWrite(*), Glob(*), Bash(*)
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Goal
Identify inconsistencies, duplications, ambiguities, and underspecified items between action planning artifacts (`IMPL_PLAN.md`, `task.json`) and brainstorming artifacts (`role analysis documents`) before implementation. This command MUST run only after `/workflow:plan` has successfully produced complete `IMPL_PLAN.md` and task JSON files.
## Operating Constraints
**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands).
**Synthesis Authority**: The `role analysis documents` is **authoritative** for requirements and design decisions. Any conflicts between IMPL_PLAN/tasks and synthesis are automatically CRITICAL and require adjustment of the plan/tasks—not reinterpretation of requirements.
## Execution Steps
### 1. Initialize Analysis Context
```bash
# Detect active workflow session
IF --session parameter provided:
  session_id = provided session
ELSE:
  # Auto-detect active session
  active_sessions = bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
  IF active_sessions is empty:
    ERROR: "No active workflow session found. Use --session <session-id>"
    EXIT
  ELSE IF active_sessions has multiple entries:
    # Use most recently modified session
    session_id = bash(ls -td .workflow/active/WFS-*/ 2>/dev/null | head -1 | xargs basename)
  ELSE:
    session_id = basename(active_sessions[0])

# Derive absolute paths
session_dir = .workflow/active/WFS-{session}
brainstorm_dir = session_dir/.brainstorming
task_dir = session_dir/.task

# Validate required artifacts
# Note: "role analysis documents" refers to [role]/analysis.md files (e.g., product-manager/analysis.md)
SYNTHESIS_DIR = brainstorm_dir  # Contains role analysis files: */analysis.md
IMPL_PLAN = session_dir/IMPL_PLAN.md
TASK_FILES = Glob(task_dir/*.json)

# Abort if missing
SYNTHESIS_FILES = Glob(brainstorm_dir/*/analysis.md)
IF SYNTHESIS_FILES.count == 0:
  ERROR: "No role analysis documents found in .brainstorming/*/analysis.md. Run /workflow:brainstorm:synthesis first"
  EXIT
IF NOT EXISTS(IMPL_PLAN):
  ERROR: "IMPL_PLAN.md not found. Run /workflow:plan first"
  EXIT
IF TASK_FILES.count == 0:
  ERROR: "No task JSON files found. Run /workflow:plan first"
  EXIT
```
### 2. Load Artifacts (Progressive Disclosure)
Load only minimal necessary context from each artifact:
**From workflow-session.json** (NEW - PRIMARY REFERENCE):
- Original user prompt/intent (project or description field)
- User's stated goals and objectives
- User's scope definition
**From role analysis documents**:
- Functional Requirements (IDs, descriptions, acceptance criteria)
- Non-Functional Requirements (IDs, targets)
- Business Requirements (IDs, success metrics)
- Key Architecture Decisions
- Risk factors and mitigation strategies
- Implementation Roadmap (high-level phases)
**From IMPL_PLAN.md**:
- Summary and objectives
- Context Analysis
- Implementation Strategy
- Task Breakdown Summary
- Success Criteria
- Brainstorming Artifacts References (if present)
**From task.json files**:
- Task IDs
- Titles and descriptions
- Status
- Dependencies (depends_on, blocks)
- Context (requirements, focus_paths, acceptance, artifacts)
- Flow control (pre_analysis, implementation_approach)
- Meta (complexity, priority)
### 3. Build Semantic Models
Create internal representations (do not include raw artifacts in output); a minimal sketch of these models follows the list below:
**Requirements inventory**:
- Each functional/non-functional/business requirement with stable ID
- Requirement text, acceptance criteria, priority
**Architecture decisions inventory**:
- ADRs from synthesis
- Technology choices
- Data model references
**Task coverage mapping**:
- Map each task to one or more requirements (by ID reference or keyword inference)
- Map each requirement to covering tasks
**Dependency graph**:
- Task-to-task dependencies (depends_on, blocks)
- Requirement-level dependencies (from synthesis)
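A minimal sketch of these models, with illustrative field names (the real shape is whatever the loaded artifacts dictate):
```javascript
// Internal working models built from the loaded artifacts (names are illustrative).
const models = {
  requirements: [],        // { id: 'FR-01', kind: 'functional', text, acceptance: [], priority }
  decisions: [],           // { id: 'ADR-01', choice: 'JWT auth', source: 'synthesis' }
  taskToReqs: new Map(),   // task ID → [requirement IDs], by ID reference or keyword inference
  reqToTasks: new Map(),   // requirement ID → [task IDs]; an empty array marks an orphaned requirement
  dependencies: new Map()  // task ID → { depends_on: [], blocks: [] }
}
```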
### 4. Detection Passes (Token-Efficient Analysis)
Focus on high-signal findings. Limit the report to 50 findings total; aggregate the remainder in an overflow summary.
#### A. User Intent Alignment (NEW - CRITICAL)
- **Goal Alignment**: IMPL_PLAN objectives match user's original intent
- **Scope Drift**: Plan covers user's stated scope without unauthorized expansion
- **Success Criteria Match**: Plan's success criteria reflect user's expectations
- **Intent Conflicts**: Tasks contradicting user's original objectives
#### B. Requirements Coverage Analysis
- **Orphaned Requirements**: Requirements in synthesis with zero associated tasks
- **Unmapped Tasks**: Tasks with no clear requirement linkage
- **NFR Coverage Gaps**: Non-functional requirements (performance, security, scalability) not reflected in tasks
#### C. Consistency Validation
- **Requirement Conflicts**: Tasks contradicting synthesis requirements
- **Architecture Drift**: IMPL_PLAN architecture not matching synthesis ADRs
- **Terminology Drift**: Same concept named differently across IMPL_PLAN and tasks
- **Data Model Inconsistency**: Tasks referencing entities/fields not in synthesis data model
#### D. Dependency Integrity
- **Circular Dependencies**: Task A depends on B, B depends on C, C depends on A (see the detection sketch after this list)
- **Missing Dependencies**: Task requires outputs from another task but no explicit dependency
- **Broken Dependencies**: Task depends on non-existent task ID
- **Logical Ordering Issues**: Implementation tasks before foundational setup without dependency note
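A minimal sketch of the circular-dependency check, assuming each loaded task object carries `id` and `depends_on`:
```javascript
// Depth-first search with an explicit path stack; returns the first cycle found, else null.
function findCycle(tasks) {
  const graph = new Map(tasks.map(t => [t.id, t.depends_on || []]))
  const done = new Set()
  const stack = []
  function visit(id) {
    if (done.has(id)) return null
    const at = stack.indexOf(id)
    if (at !== -1) return [...stack.slice(at), id]  // cycle closed back on itself
    if (!graph.has(id)) return null                 // broken dependency, reported separately
    stack.push(id)
    for (const dep of graph.get(id)) {
      const cycle = visit(dep)
      if (cycle) return cycle
    }
    stack.pop()
    done.add(id)
    return null
  }
  for (const id of graph.keys()) {
    const cycle = visit(id)
    if (cycle) return cycle  // e.g. ['IMPL-1.1', 'IMPL-2.3', 'IMPL-1.1']
  }
  return null
}
```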
#### E. Synthesis Alignment
- **Priority Conflicts**: High-priority synthesis requirements mapped to low-priority tasks
- **Success Criteria Mismatch**: IMPL_PLAN success criteria not covering synthesis acceptance criteria
- **Risk Mitigation Gaps**: Critical risks in synthesis without corresponding mitigation tasks
#### F. Task Specification Quality
- **Ambiguous Focus Paths**: Tasks with vague or missing focus_paths
- **Underspecified Acceptance**: Tasks without clear acceptance criteria
- **Missing Artifacts References**: Tasks not referencing relevant brainstorming artifacts in context.artifacts
- **Weak Flow Control**: Tasks without clear implementation_approach or pre_analysis steps
- **Missing Target Files**: Tasks without flow_control.target_files specification
#### G. Duplication Detection
- **Overlapping Task Scope**: Multiple tasks with nearly identical descriptions
- **Redundant Requirements Coverage**: Same requirement covered by multiple tasks without clear partitioning
#### H. Feasibility Assessment
- **Complexity Misalignment**: Task marked "simple" but requires multiple file modifications
- **Resource Conflicts**: Parallel tasks requiring same resources/files
- **Skill Gap Risks**: Tasks requiring skills not in team capability assessment (from synthesis)
### 5. Severity Assignment
Use this heuristic to prioritize findings (a minimal mapping sketch follows the list):
- **CRITICAL**:
- Violates user's original intent (goal misalignment, scope drift)
- Violates synthesis authority (requirement conflict)
- Core requirement with zero coverage
- Circular dependencies
- Broken dependencies
- **HIGH**:
- NFR coverage gaps
- Priority conflicts
- Missing risk mitigation tasks
- Ambiguous acceptance criteria
- **MEDIUM**:
- Terminology drift
- Missing artifacts references
- Weak flow control
- Logical ordering issues
- **LOW**:
- Style/wording improvements
- Minor redundancy not affecting execution
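A minimal mapping sketch of the heuristic above; the category keys are illustrative, not a fixed taxonomy:
```javascript
// Map finding categories to severities; anything unmatched falls through to LOW.
const SEVERITY_RULES = [
  ['CRITICAL', ['intent_violation', 'synthesis_conflict', 'zero_coverage',
                'circular_dependency', 'broken_dependency']],
  ['HIGH', ['nfr_gap', 'priority_conflict', 'missing_risk_mitigation', 'ambiguous_acceptance']],
  ['MEDIUM', ['terminology_drift', 'missing_artifacts_ref', 'weak_flow_control', 'ordering_issue']]
]
function assignSeverity(category) {
  const hit = SEVERITY_RULES.find(([, categories]) => categories.includes(category))
  return hit ? hit[0] : 'LOW'  // style/wording and minor redundancy land here
}
```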
### 6. Produce Compact Analysis Report
**Report Generation**: Generate report content and save to file.
Output a Markdown report with the following structure:
```markdown
## Action Plan Verification Report
**Session**: WFS-{session-id}
**Generated**: {timestamp}
**Artifacts Analyzed**: role analysis documents, IMPL_PLAN.md, {N} task files
---
### Executive Summary
- **Overall Risk Level**: CRITICAL | HIGH | MEDIUM | LOW
- **Recommendation**: (See decision matrix below)
- BLOCK_EXECUTION: Critical issues exist (must fix before proceeding)
- PROCEED_WITH_FIXES: High issues exist, no critical (fix recommended before execution)
- PROCEED_WITH_CAUTION: Medium issues only (proceed with awareness)
- PROCEED: Low issues only or no issues (safe to execute)
- **Critical Issues**: {count}
- **High Issues**: {count}
- **Medium Issues**: {count}
- **Low Issues**: {count}
---
### Findings Summary
| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| C1 | Coverage | CRITICAL | synthesis:FR-03 | Requirement "User auth" has zero task coverage | Add authentication implementation task |
| H1 | Consistency | HIGH | IMPL-1.2 vs synthesis:ADR-02 | Task uses REST while synthesis specifies GraphQL | Align task with ADR-02 decision |
| M1 | Specification | MEDIUM | IMPL-2.1 | Missing context.artifacts reference | Add @synthesis reference |
| L1 | Duplication | LOW | IMPL-3.1, IMPL-3.2 | Similar scope | Consider merging |
(Add one row per finding; generate stable IDs prefixed by severity initial.)
---
### Requirements Coverage Analysis
| Requirement ID | Requirement Summary | Has Task? | Task IDs | Priority Match | Notes |
|----------------|---------------------|-----------|----------|----------------|-------|
| FR-01 | User authentication | Yes | IMPL-1.1, IMPL-1.2 | Match | Complete |
| FR-02 | Data export | Yes | IMPL-2.3 | Mismatch | High req → Med priority task |
| FR-03 | Profile management | No | - | - | **CRITICAL: Zero coverage** |
| NFR-01 | Response time <200ms | No | - | - | **HIGH: No performance tasks** |
**Coverage Metrics**:
- Functional Requirements: 85% (17/20 covered)
- Non-Functional Requirements: 40% (2/5 covered)
- Business Requirements: 100% (5/5 covered)
---
### Unmapped Tasks
| Task ID | Title | Issue | Recommendation |
|---------|-------|-------|----------------|
| IMPL-4.5 | Refactor utils | No requirement linkage | Link to technical debt or remove |
---
### Dependency Graph Issues
**Circular Dependencies**: None detected
**Broken Dependencies**:
- IMPL-2.3 depends on "IMPL-2.4" (non-existent)
**Logical Ordering Issues**:
- IMPL-5.1 (integration test) has no dependency on IMPL-1.* (implementation tasks)
---
### Synthesis Alignment Issues
| Issue Type | Synthesis Reference | IMPL_PLAN/Task | Impact | Recommendation |
|------------|---------------------|----------------|--------|----------------|
| Architecture Conflict | synthesis:ADR-01 (JWT auth) | IMPL_PLAN uses session cookies | HIGH | Update IMPL_PLAN to use JWT |
| Priority Mismatch | synthesis:FR-02 (High) | IMPL-2.3 (Medium) | MEDIUM | Elevate task priority |
| Missing Risk Mitigation | synthesis:Risk-03 (API rate limits) | No mitigation tasks | HIGH | Add rate limiting implementation task |
---
### Task Specification Quality Issues
**Missing Artifacts References**: 12 tasks lack context.artifacts
**Weak Flow Control**: 5 tasks lack implementation_approach
**Missing Target Files**: 8 tasks lack flow_control.target_files
**Sample Issues**:
- IMPL-1.2: No context.artifacts reference to synthesis
- IMPL-3.1: Missing flow_control.target_files specification
- IMPL-4.2: Vague focus_paths ["src/"] - needs refinement
---
### Feasibility Concerns
| Concern | Tasks Affected | Issue | Recommendation |
|---------|----------------|-------|----------------|
| Skill Gap | IMPL-6.1, IMPL-6.2 | Requires Kubernetes expertise not in team | Add training task or external consultant |
| Resource Conflict | IMPL-3.1, IMPL-3.2 | Both modify src/auth/service.ts in parallel | Add dependency or serialize |
---
### Metrics
- **Total Requirements**: 30 (20 functional, 5 non-functional, 5 business)
- **Total Tasks**: 25
- **Overall Coverage**: 77% (23/30 requirements with ≥1 task)
- **Critical Issues**: 2
- **High Issues**: 5
- **Medium Issues**: 8
- **Low Issues**: 3
---
### Next Actions
#### Action Recommendations
**Recommendation Decision Matrix**:
| Condition | Recommendation | Action |
|-----------|----------------|--------|
| Critical > 0 | BLOCK_EXECUTION | Must resolve all critical issues before proceeding |
| Critical = 0, High > 0 | PROCEED_WITH_FIXES | Fix high-priority issues before execution |
| Critical = 0, High = 0, Medium > 0 | PROCEED_WITH_CAUTION | Proceed with awareness of medium issues |
| Only Low or None | PROCEED | Safe to execute workflow |
**If CRITICAL Issues Exist** (BLOCK_EXECUTION):
- Resolve all critical issues before proceeding
- Use TodoWrite to track required fixes
- Fix broken dependencies and circular references first
**If HIGH Issues Exist** (PROCEED_WITH_FIXES):
- Fix high-priority issues before execution
- Use TodoWrite to systematically track and complete improvements
**If Only MEDIUM/LOW Issues** (PROCEED_WITH_CAUTION / PROCEED):
- Can proceed with execution
- Address issues during or after implementation
#### TodoWrite-Based Remediation Workflow
**Report Location**: `.workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md`
**Recommended Workflow**:
1. **Create TodoWrite Task List**: Extract all findings from report
2. **Process by Priority**: CRITICAL → HIGH → MEDIUM → LOW
3. **Complete Each Fix**: Mark tasks as in_progress/completed as you work
4. **Validate Changes**: Verify each modification against requirements
**TodoWrite Task Structure Example**:
```markdown
Priority Order:
1. Fix coverage gaps (CRITICAL)
2. Resolve consistency conflicts (CRITICAL/HIGH)
3. Add missing specifications (MEDIUM)
4. Improve task quality (LOW)
```
**Notes**:
- TodoWrite provides real-time progress tracking
- Each finding becomes a trackable todo item
- User can monitor progress throughout remediation
- Architecture drift in IMPL_PLAN requires manual editing
```
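The decision matrix in the template reduces to a small function; a sketch, assuming the severity counts are already tallied:
```javascript
// Derive the overall recommendation from severity counts (LOW issues never block).
function recommend({ critical = 0, high = 0, medium = 0 }) {
  if (critical > 0) return 'BLOCK_EXECUTION'
  if (high > 0) return 'PROCEED_WITH_FIXES'
  if (medium > 0) return 'PROCEED_WITH_CAUTION'
  return 'PROCEED'
}
```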
### 7. Save Report and Execute TodoWrite-Based Remediation
**Step 7.1: Save Analysis Report**:
```bash
report_path = ".workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
Write(report_path, full_report_content)
```
**Step 7.2: Display Report Summary to User**:
- Show executive summary with counts
- Display recommendation (BLOCK/PROCEED_WITH_FIXES/PROCEED_WITH_CAUTION/PROCEED)
- List critical and high issues if any
**Step 7.3: After Report Generation**:
1. **Extract Findings**: Parse all issues by severity
2. **Create TodoWrite Task List**: Convert findings to actionable todos
3. **Execute Fixes**: Process each todo systematically
4. **Update Task Files**: Apply modifications directly to task JSON files
5. **Update IMPL_PLAN**: Apply strategic changes if needed
At the end of the report, provide remediation guidance:
```markdown
### 🔧 Remediation Workflow
**Recommended Approach**:
1. **Initialize TodoWrite**: Create comprehensive task list from all findings
2. **Process by Severity**: Start with CRITICAL, then HIGH, MEDIUM, LOW
3. **Apply Fixes Directly**: Modify task.json files and IMPL_PLAN.md as needed
4. **Track Progress**: Mark todos as completed after each fix
**TodoWrite Execution Pattern**:
```bash
# Step 1: Create task list from verification report
TodoWrite([
{ content: "Fix FR-03 coverage gap - add authentication task", status: "pending", activeForm: "Fixing FR-03 coverage gap" },
{ content: "Fix IMPL-1.2 consistency - align with ADR-02", status: "pending", activeForm: "Fixing IMPL-1.2 consistency" },
{ content: "Add context.artifacts to IMPL-1.2", status: "pending", activeForm: "Adding context.artifacts to IMPL-1.2" },
# ... additional todos for each finding
])
# Step 2: Process each todo systematically
# Mark as in_progress when starting
# Apply fix using Read/Edit tools
# Mark as completed when done
# Move to next priority item
```
**File Modification Workflow**:
```bash
# For task JSON modifications:
1. Read(.workflow/active/WFS-{session}/.task/IMPL-X.Y.json)
2. Edit() to apply fixes
3. Mark todo as completed
# For IMPL_PLAN modifications:
1. Read(.workflow/active/WFS-{session}/IMPL_PLAN.md)
2. Edit() to apply strategic changes
3. Mark todo as completed
```
**Note**: All fixes execute immediately after user confirmation without additional commands.

View File

@@ -0,0 +1,804 @@
---
name: analyze-with-file
description: Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding
argument-hint: "[-y|--yes] [-c|--continue] \"topic or question\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm exploration decisions, use recommended analysis angles.
# Workflow Analyze-With-File Command (/workflow:analyze-with-file)
## Overview
Interactive collaborative analysis workflow with **documented discussion process**. Records understanding evolution, facilitates multi-round Q&A, and uses CLI tools (Gemini/Codex) for deep exploration.
**Core workflow**: Topic → Explore → Discuss → Document → Refine → Conclude
**Key features**:
- **discussion.md**: Timeline of discussions and understanding evolution
- **Multi-round Q&A**: Iterative clarification with user
- **CLI-assisted exploration**: Gemini/Codex for codebase and concept analysis
- **Consolidated insights**: Synthesizes discussions into actionable conclusions
- **Flexible continuation**: Resume analysis sessions to build on previous work
## Usage
```bash
/workflow:analyze-with-file [FLAGS] <TOPIC_OR_QUESTION>
# Flags
-y, --yes Skip confirmations, use recommended settings
-c, --continue Continue existing session (auto-detected if exists)
# Arguments
<topic-or-question> Analysis topic, question, or concept to explore (required)
# Examples
/workflow:analyze-with-file "如何优化这个项目的认证架构"
/workflow:analyze-with-file --continue "认证架构" # Continue existing session
/workflow:analyze-with-file -y "性能瓶颈分析" # Auto mode
```
## Execution Process
```
Session Detection:
├─ Check if analysis session exists for topic
├─ EXISTS + discussion.md exists → Continue mode
└─ NOT_FOUND → New session mode
Phase 1: Topic Understanding
├─ Parse topic/question
├─ Identify analysis dimensions (architecture, implementation, concept, etc.)
├─ Initial scoping with user (AskUserQuestion)
└─ Document initial understanding in discussion.md
Phase 2: CLI Exploration (Parallel)
├─ Launch cli-explore-agent for codebase context
├─ Use Gemini/Codex for deep analysis
└─ Aggregate findings into exploration summary
Phase 3: Interactive Discussion (Multi-Round)
├─ Present exploration findings
├─ Facilitate Q&A with user (AskUserQuestion)
├─ Capture user insights and requirements
├─ Update discussion.md with each round
└─ Repeat until user is satisfied or clarity achieved
Phase 4: Synthesis & Conclusion
├─ Consolidate all insights
├─ Update discussion.md with conclusions
├─ Generate actionable recommendations
└─ Optional: Create follow-up tasks or issues
Output:
├─ .workflow/.analysis/{slug}-{date}/discussion.md (evolving document)
├─ .workflow/.analysis/{slug}-{date}/explorations.json (CLI findings)
└─ .workflow/.analysis/{slug}-{date}/conclusions.json (final synthesis)
```
## Implementation
### Session Setup & Mode Detection
```javascript
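// Shift the epoch by +8h so the ISO string shows UTC+8 wall-clock time (the trailing 'Z' is nominal, not true UTC)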
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const topicSlug = topic_or_question.toLowerCase().replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-').substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `ANL-${topicSlug}-${dateStr}`
const sessionFolder = `.workflow/.analysis/${sessionId}`
const discussionPath = `${sessionFolder}/discussion.md`
const explorationsPath = `${sessionFolder}/explorations.json`
const conclusionsPath = `${sessionFolder}/conclusions.json`
// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const hasDiscussion = sessionExists && fs.existsSync(discussionPath)
const forcesContinue = $ARGUMENTS.includes('--continue') || $ARGUMENTS.includes('-c')
const mode = (hasDiscussion || forcesContinue) ? 'continue' : 'new'
if (!sessionExists) {
bash(`mkdir -p ${sessionFolder}`)
}
```
---
### Phase 1: Topic Understanding
**Step 1.1: Parse Topic & Identify Dimensions**
```javascript
// Analyze topic to determine analysis dimensions
const ANALYSIS_DIMENSIONS = {
architecture: ['架构', 'architecture', 'design', 'structure', '设计'],
implementation: ['实现', 'implement', 'code', 'coding', '代码'],
performance: ['性能', 'performance', 'optimize', 'bottleneck', '优化'],
security: ['安全', 'security', 'auth', 'permission', '权限'],
concept: ['概念', 'concept', 'theory', 'principle', '原理'],
comparison: ['比较', 'compare', 'vs', 'difference', '区别'],
decision: ['决策', 'decision', 'choice', 'tradeoff', '选择']
}
function identifyDimensions(topic) {
const text = topic.toLowerCase()
const matched = []
for (const [dimension, keywords] of Object.entries(ANALYSIS_DIMENSIONS)) {
if (keywords.some(k => text.includes(k))) {
matched.push(dimension)
}
}
return matched.length > 0 ? matched : ['general']
}
const dimensions = identifyDimensions(topic_or_question)
```
**Step 1.2: Initial Scoping (New Session Only)**
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (mode === 'new' && !autoYes) {
// Ask user to scope the analysis
AskUserQuestion({
questions: [
{
question: `Analysis scope: "${topic_or_question}"\n\nWhich aspects would you like to focus on?`,
header: "Focus",
multiSelect: true,
options: [
  { label: "Code implementation", description: "Analyze the existing code implementation" },
  { label: "Architecture design", description: "Architecture-level analysis" },
  { label: "Best practices", description: "Compare against industry best practices" },
  { label: "Issue diagnosis", description: "Identify potential problems" }
]
},
{
question: "Analysis depth?",
header: "Depth",
multiSelect: false,
options: [
  { label: "Quick Overview", description: "Quick overview (10-15 minutes)" },
  { label: "Standard Analysis", description: "Standard analysis (30-60 minutes)" },
  { label: "Deep Dive", description: "Deep dive (1-2 hours)" }
]
}
]
})
}
```
**Step 1.3: Create/Update discussion.md**
For a new session:
```markdown
# Analysis Discussion
**Session ID**: ${sessionId}
**Topic**: ${topic_or_question}
**Started**: ${getUtc8ISOString()}
**Dimensions**: ${dimensions.join(', ')}
---
## User Context
**Focus Areas**: ${userFocusAreas.join(', ')}
**Analysis Depth**: ${analysisDepth}
---
## Discussion Timeline
### Round 1 - Initial Understanding (${timestamp})
#### Topic Analysis
Based on the topic "${topic_or_question}":
- **Primary dimensions**: ${dimensions.join(', ')}
- **Initial scope**: ${initialScope}
- **Key questions to explore**:
- ${question1}
- ${question2}
- ${question3}
#### Next Steps
- Launch CLI exploration for codebase context
- Gather external insights via Gemini
- Prepare discussion points for user
---
## Current Understanding
${initialUnderstanding}
```
For a continued session, append:
```markdown
### Round ${n} - Continuation (${timestamp})
#### Previous Context
Resuming analysis based on prior discussion.
#### New Focus
${newFocusFromUser}
```
---
### Phase 2: CLI Exploration
**Step 2.1: Launch Parallel Explorations**
```javascript
const explorationPromises = []
// CLI Explore Agent for codebase
if (dimensions.includes('implementation') || dimensions.includes('architecture')) {
explorationPromises.push(
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Explore codebase: ${topicSlug}`,
prompt=`
## Analysis Context
Topic: ${topic_or_question}
Dimensions: ${dimensions.join(', ')}
Session: ${sessionFolder}
## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Execute relevant searches based on topic keywords
3. Read: .workflow/project-tech.json (if exists)
## Exploration Focus
${dimensions.map(d => `- ${d}: Identify relevant code patterns and structures`).join('\n')}
## Output
Write findings to: ${sessionFolder}/exploration-codebase.json
Schema:
{
"relevant_files": [{path, relevance, rationale}],
"patterns": [],
"key_findings": [],
"questions_for_user": [],
"_metadata": { "exploration_type": "codebase", "timestamp": "..." }
}
`
)
)
}
// Gemini CLI for deep analysis
explorationPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Analyze topic '${topic_or_question}' from ${dimensions.join(', ')} perspectives
Success criteria: Actionable insights with clear reasoning
TASK:
• Identify key considerations for this topic
• Analyze common patterns and anti-patterns
• Highlight potential issues or opportunities
• Generate discussion points for user clarification
MODE: analysis
CONTEXT: @**/* | Topic: ${topic_or_question}
EXPECTED:
- Structured analysis with clear sections
- Specific insights tied to evidence
- Questions to deepen understanding
- Recommendations with rationale
CONSTRAINTS: Focus on ${dimensions.join(', ')}
" --tool gemini --mode analysis`,
run_in_background: true
})
)
```
**Step 2.2: Aggregate Findings**
```javascript
// After explorations complete, aggregate into explorations.json
const explorations = {
session_id: sessionId,
timestamp: getUtc8ISOString(),
topic: topic_or_question,
dimensions: dimensions,
sources: [
{ type: "codebase", file: "exploration-codebase.json" },
{ type: "gemini", summary: geminiOutput }
],
key_findings: [...],
discussion_points: [...],
open_questions: [...]
}
Write(explorationsPath, JSON.stringify(explorations, null, 2))
```
**Step 2.3: Update discussion.md**
```markdown
#### Exploration Results (${timestamp})
**Sources Analyzed**:
${sources.map(s => `- ${s.type}: ${s.summary}`).join('\n')}
**Key Findings**:
${keyFindings.map((f, i) => `${i+1}. ${f}`).join('\n')}
**Points for Discussion**:
${discussionPoints.map((p, i) => `${i+1}. ${p}`).join('\n')}
**Open Questions**:
${openQuestions.map((q, i) => `- ${q}`).join('\n')}
```
---
### Phase 3: Interactive Discussion (Multi-Round)
**Step 3.1: Present Findings & Gather Feedback**
```javascript
// Maximum discussion rounds
const MAX_ROUNDS = 5
let roundNumber = 1
let discussionComplete = false
while (!discussionComplete && roundNumber <= MAX_ROUNDS) {
// Display current findings
console.log(`
## Discussion Round ${roundNumber}
${currentFindings}
### Key Points for Your Input
${discussionPoints.map((p, i) => `${i+1}. ${p}`).join('\n')}
`)
// Gather user input
const userResponse = AskUserQuestion({
questions: [
{
question: "对以上分析有什么看法或补充?",
header: "Feedback",
multiSelect: false,
options: [
{ label: "同意,继续深入", description: "分析方向正确,继续探索" },
{ label: "需要调整方向", description: "我有不同的理解或重点" },
{ label: "分析完成", description: "已获得足够信息" },
{ label: "有具体问题", description: "我想问一些具体问题" }
]
}
]
})
// Process user response
switch (userResponse.feedback) {
  case "Agree, go deeper":
    // Deepen analysis in current direction
    await deepenAnalysis()
    break
  case "Adjust direction": {
    // Get user's adjusted focus
    const adjustment = AskUserQuestion({
      questions: [{
        question: "Please describe the direction or focus you want to adjust:",
        header: "Direction",
        multiSelect: false,
        options: [
          { label: "More code detail", description: "Dig deeper into the code implementation" },
          { label: "More architecture view", description: "Focus on the overall design" },
          { label: "More practice comparison", description: "Compare against best practices" }
        ]
      }]
    })
    await adjustAnalysisDirection(adjustment)
    break
  }
  case "Analysis complete":
    discussionComplete = true
    break
  case "Specific questions":
    // Let user ask specific questions, then answer
    await handleUserQuestions()
    break
}
// Update discussion.md with this round
updateDiscussionDocument(roundNumber, userResponse, findings)
roundNumber++
}
```
**Step 3.2: Document Each Round**
Append to discussion.md:
```markdown
### Round ${n} - Discussion (${timestamp})
#### User Input
${userInputSummary}
${userResponse === 'adjustment' ? `
**Direction Adjustment**: ${adjustmentDetails}
` : ''}
${userResponse === 'questions' ? `
**User Questions**:
${userQuestions.map((q, i) => `${i+1}. ${q}`).join('\n')}
**Answers**:
${answers.map((a, i) => `${i+1}. ${a}`).join('\n')}
` : ''}
#### Updated Understanding
Based on user feedback:
- ${insight1}
- ${insight2}
#### Corrected Assumptions
${corrections.length > 0 ? corrections.map(c => `
- ~~${c.wrong}~~ → ${c.corrected}
- Reason: ${c.reason}
`).join('\n') : 'None'}
#### New Insights
${newInsights.map(i => `- ${i}`).join('\n')}
```
---
### Phase 4: Synthesis & Conclusion
**Step 4.1: Consolidate Insights**
```javascript
const conclusions = {
session_id: sessionId,
topic: topic_or_question,
completed: getUtc8ISOString(),
total_rounds: roundNumber,
summary: "...",
key_conclusions: [
{ point: "...", evidence: "...", confidence: "high|medium|low" }
],
recommendations: [
{ action: "...", rationale: "...", priority: "high|medium|low" }
],
open_questions: [...],
follow_up_suggestions: [
{ type: "issue", summary: "..." },
{ type: "task", summary: "..." }
]
}
Write(conclusionsPath, JSON.stringify(conclusions, null, 2))
```
**Step 4.2: Final discussion.md Update**
```markdown
---
## Conclusions (${timestamp})
### Summary
${summaryParagraph}
### Key Conclusions
${conclusions.key_conclusions.map((c, i) => `
${i+1}. **${c.point}** (Confidence: ${c.confidence})
- Evidence: ${c.evidence}
`).join('\n')}
### Recommendations
${conclusions.recommendations.map((r, i) => `
${i+1}. **${r.action}** (Priority: ${r.priority})
- Rationale: ${r.rationale}
`).join('\n')}
### Remaining Questions
${conclusions.open_questions.map(q => `- ${q}`).join('\n')}
---
## Current Understanding (Final)
### What We Established
${establishedPoints.map(p => `- ${p}`).join('\n')}
### What Was Clarified/Corrected
${corrections.map(c => `- ~~${c.original}~~ → ${c.corrected}`).join('\n')}
### Key Insights
${keyInsights.map(i => `- ${i}`).join('\n')}
---
## Session Statistics
- **Total Rounds**: ${totalRounds}
- **Duration**: ${duration}
- **Sources Used**: ${sources.join(', ')}
- **Artifacts Generated**: discussion.md, explorations.json, conclusions.json
```
**Step 4.3: Post-Completion Options**
```javascript
AskUserQuestion({
questions: [{
question: "分析完成。是否需要后续操作?",
header: "Next Steps",
multiSelect: true,
options: [
{ label: "创建Issue", description: "将结论转为可执行的Issue" },
{ label: "生成任务", description: "创建实施任务" },
{ label: "导出报告", description: "生成独立的分析报告" },
{ label: "完成", description: "不需要后续操作" }
]
}]
})
// Handle selections
if (selection.includes("创建Issue")) {
SlashCommand("/issue:new", `${topic_or_question} - 分析结论实施`)
}
if (selection.includes("生成任务")) {
SlashCommand("/workflow:lite-plan", `实施分析结论: ${summary}`)
}
if (selection.includes("导出报告")) {
exportAnalysisReport(sessionFolder)
}
```
---
## Session Folder Structure
```
.workflow/.analysis/ANL-{slug}-{date}/
├── discussion.md # Evolution of understanding & discussions
├── explorations.json # CLI exploration findings
├── conclusions.json # Final synthesis
└── exploration-*.json # Individual exploration results (optional)
```
## Discussion Document Template
```markdown
# Analysis Discussion
**Session ID**: ANL-xxx-2025-01-25
**Topic**: [topic or question]
**Started**: 2025-01-25T10:00:00+08:00
**Dimensions**: [architecture, implementation, ...]
---
## User Context
**Focus Areas**: [user-selected focus]
**Analysis Depth**: [quick|standard|deep]
---
## Discussion Timeline
### Round 1 - Initial Understanding (2025-01-25 10:00)
#### Topic Analysis
...
#### Exploration Results
...
### Round 2 - Discussion (2025-01-25 10:15)
#### User Input
...
#### Updated Understanding
...
#### Corrected Assumptions
- ~~[wrong]~~ → [corrected]
### Round 3 - Deep Dive (2025-01-25 10:30)
...
---
## Conclusions (2025-01-25 11:00)
### Summary
...
### Key Conclusions
...
### Recommendations
...
---
## Current Understanding (Final)
### What We Established
- [confirmed points]
### What Was Clarified/Corrected
- ~~[original assumption]~~ → [corrected understanding]
### Key Insights
- [insights gained]
---
## Session Statistics
- **Total Rounds**: 3
- **Duration**: 1 hour
- **Sources Used**: codebase exploration, Gemini analysis
- **Artifacts Generated**: discussion.md, explorations.json, conclusions.json
```
## Iteration Flow
```
First Call (/workflow:analyze-with-file "topic"):
├─ No session exists → New mode
├─ Identify analysis dimensions
├─ Scope with user (unless --yes)
├─ Create discussion.md with initial understanding
├─ Launch CLI explorations
└─ Enter discussion loop
Continue Call (/workflow:analyze-with-file --continue "topic"):
├─ Session exists → Continue mode
├─ Load discussion.md
├─ Resume from last round
└─ Continue discussion loop
Discussion Loop:
├─ Present current findings
├─ Gather user feedback (AskUserQuestion)
├─ Process response:
│ ├─ Agree → Deepen analysis
│ ├─ Adjust → Change direction
│ ├─ Question → Answer then continue
│ └─ Complete → Exit loop
├─ Update discussion.md
└─ Repeat until complete or max rounds
Completion:
├─ Generate conclusions.json
├─ Update discussion.md with final synthesis
└─ Offer follow-up options (issue, task, report)
```
## CLI Integration Points
### 1. Codebase Exploration (cli-explore-agent)
**Purpose**: Gather relevant code context
**When**: Topic involves implementation or architecture analysis
### 2. Gemini Deep Analysis
**Purpose**: Conceptual analysis, pattern identification, best practices
**Prompt Pattern**:
```
PURPOSE: Analyze topic + identify insights
TASK: Explore dimensions + generate discussion points
CONTEXT: Codebase + topic
EXPECTED: Structured analysis + questions
```
### 3. Follow-up CLI Calls
**Purpose**: Deepen specific areas based on user feedback
**Dynamic invocation** based on discussion direction
## Consolidation Rules
When updating "Current Understanding":
1. **Promote confirmed insights**: Move validated findings to "What We Established"
2. **Track corrections**: Keep important wrong→right transformations
3. **Focus on current state**: What do we know NOW
4. **Avoid timeline repetition**: Don't copy discussion details
5. **Preserve key learnings**: Keep insights valuable for future reference
**Bad (cluttered)**:
```markdown
## Current Understanding
In round 1 we discussed X, then in round 2 user said Y, and we explored Z...
```
**Good (consolidated)**:
```markdown
## Current Understanding
### What We Established
- The authentication flow uses JWT with refresh tokens
- Rate limiting is implemented at API gateway level
### What Was Clarified
- ~~Assumed Redis for sessions~~ → Actually uses database-backed sessions
### Key Insights
- Current architecture supports horizontal scaling
- Security audit recommended before production
```
## Error Handling
| Situation | Action |
|-----------|--------|
| CLI exploration fails | Continue with available context, note limitation |
| User timeout in discussion | Save state, show resume command |
| Max rounds reached | Force synthesis, offer continuation option |
| No relevant findings | Broaden search, ask user for clarification |
| Session folder conflict | Append timestamp suffix |
| Gemini unavailable | Fallback to Codex or manual analysis |
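For the Gemini-unavailable row, a minimal fallback sketch; `Bash` is the pseudocode tool call used throughout this document, and the loop order is an assumption:
```javascript
// Try Gemini first, then Codex; return null to signal manual analysis.
async function runAnalysis(prompt) {
  for (const tool of ['gemini', 'codex']) {
    try {
      return await Bash({ command: `ccw cli -p "${prompt}" --tool ${tool} --mode analysis` })
    } catch (err) {
      console.warn(`${tool} unavailable, trying next option`)
    }
  }
  return null
}
```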
## Usage Recommendations
Use `/workflow:analyze-with-file` when:
- Exploring a complex topic collaboratively
- Need documented discussion trail
- Decision-making requires multiple perspectives
- Want to iterate on understanding with user input
- Building shared understanding before implementation
Use `/workflow:debug-with-file` when:
- Diagnosing specific bugs
- Need hypothesis-driven investigation
- Focus on evidence and verification
Use `/workflow:lite-plan` when:
- Ready to implement (past analysis phase)
- Need structured task breakdown
- Focus on execution planning

File diff suppressed because it is too large

View File

@@ -1,587 +0,0 @@
---
name: api-designer
description: Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🔌 **API Designer Analysis Generator**
### Purpose
**Specialized command for generating api-designer/analysis.md** that addresses guidance-specification.md discussion points from a backend API design perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **API Design Focus**: RESTful/GraphQL API design, endpoint structure, and contract definition
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **API Architecture**: RESTful/GraphQL design patterns and best practices
- **Endpoint Design**: Resource modeling, URL structure, and HTTP method selection
- **Data Contracts**: Request/response schemas, validation rules, and data transformation
- **API Documentation**: OpenAPI/Swagger specifications and developer experience
### Role Boundaries & Responsibilities
#### **What This Role OWNS (API Contract Within Architectural Framework)**
- **API Contract Definition**: Specific endpoint paths, HTTP methods, and status codes
- **Resource Modeling**: Mapping domain entities to RESTful resources or GraphQL types
- **Request/Response Schemas**: Detailed data contracts, validation rules, and error formats
- **API Versioning Strategy**: Version management, deprecation policies, and migration paths
- **Developer Experience**: API documentation (OpenAPI/Swagger), code examples, and SDKs
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **System Architecture Decisions**: Microservices vs monolithic, overall communication patterns → Defers to **System Architect**
- **Canonical Data Model**: Underlying data schemas and entity relationships → Defers to **Data Architect**
- **UI/Frontend Integration**: How clients consume the API → Defers to **UI Designer**
#### **Handoff Points**
- **FROM System Architect**: Receives architectural constraints (REST vs GraphQL, sync vs async) that define the design space
- **FROM Data Architect**: Receives canonical data model and translates it into public API data contracts (as projection/view)
- **TO Frontend Teams**: Provides complete API specifications, documentation, and integration guides
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
  session_id = get_active_session()
  brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
  CHECK: brainstorm_dir/guidance-specification.md
  IF EXISTS:
    framework_mode = true
    load_framework = true
  ELSE:
    IF topic_provided:
      framework_mode = false  # Create analysis without framework
    ELSE:
      ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Check existing analysis
CHECK: brainstorm_dir/api-designer/analysis.md
IF EXISTS:
  SHOW existing analysis summary
  ASK: "Analysis exists. Do you want to:"
  OPTIONS:
    1. "Update with new insights" → Update existing
    2. "Replace completely" → Generate new
    3. "Cancel" → Exit without changes
ELSE:
  CREATE new analysis
```
### Phase 3: Agent Task Generation
**Framework-Based Analysis** (when guidance-specification.md exists):
```bash
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Generate API designer analysis addressing topic framework
## Framework Integration Required
**MANDATORY**: Load and address guidance-specification.md discussion points
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
**Output Location**: {session.brainstorm_dir}/api-designer/analysis.md
## Analysis Requirements
1. **Load Topic Framework**: Read guidance-specification.md completely
2. **Address Each Discussion Point**: Respond to all 5 framework sections from API design perspective
3. **Include Framework Reference**: Start analysis.md with @../guidance-specification.md
4. **API Design Focus**: Emphasize endpoint structure, data contracts, versioning strategies
5. **Structured Response**: Use framework structure for analysis organization
## Framework Sections to Address
- Core Requirements (from API design perspective)
- Technical Considerations (detailed API architecture analysis)
- User Experience Factors (developer experience and API usability)
- Implementation Challenges (API design risks and solutions)
- Success Metrics (API performance metrics and adoption tracking)
## Output Structure Required
```markdown
# API Designer Analysis: [Topic]
**Framework Reference**: @../guidance-specification.md
**Role Focus**: Backend API Design and Contract Definition
## Core Requirements Analysis
[Address framework requirements from API design perspective]
## Technical Considerations
[Detailed API architecture and endpoint design analysis]
## Developer Experience Factors
[API usability, documentation, and integration ease]
## Implementation Challenges
[API design risks and mitigation strategies]
## Success Metrics
[API performance metrics, adoption rates, and developer satisfaction]
## API Design-Specific Recommendations
[Detailed API design recommendations and best practices]
```",
description="Generate API designer framework-based analysis")
```
### Phase 4: Update Mechanism
**Analysis Update Process**:
```bash
# For existing analysis updates
IF update_mode = "incremental":
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Update existing API designer analysis
## Current Analysis Context
**Existing Analysis**: @{session.brainstorm_dir}/api-designer/analysis.md
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
## Update Requirements
1. **Preserve Structure**: Maintain existing analysis structure
2. **Add New Insights**: Integrate new API design insights and recommendations
3. **Framework Alignment**: Ensure continued alignment with topic framework
4. **API Updates**: Add new endpoint patterns, versioning strategies, documentation improvements
5. **Maintain References**: Keep @../guidance-specification.md reference
## Update Instructions
- Read existing analysis completely
- Identify areas for enhancement or new insights
- Add API design depth while preserving original structure
- Update recommendations with new API design patterns and approaches
- Maintain framework discussion point addressing",
description="Update API designer analysis incrementally")
```
## Document Structure
### Output Files
```
.workflow/active/WFS-[topic]/.brainstorming/
├── guidance-specification.md # Input: Framework (if exists)
└── api-designer/
└── analysis.md # ★ OUTPUT: Framework-based analysis
```
### Analysis Structure
**Required Elements**:
- **Framework Reference**: @../guidance-specification.md (if framework exists)
- **Role Focus**: Backend API Design and Contract Definition perspective
- **5 Framework Sections**: Address each framework discussion point
- **API Design Recommendations**: Endpoint-specific insights and solutions
## ⚡ **Two-Step Execution Flow**
### ⚠️ Session Management - FIRST STEP
Session detection and selection:
```bash
# Check for active sessions
active_sessions=$(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
if [ "$(printf '%s\n' "$active_sessions" | grep -c 'WFS-')" -gt 1 ]; then
  prompt_user_to_select_session
else
  use_existing_or_create_new
fi
```
### Step 1: Context Gathering Phase
**API Designer Perspective Questioning**
Before agent assignment, gather comprehensive API design context:
#### 📋 Role-Specific Questions
1. **API Type & Architecture**
- RESTful, GraphQL, or hybrid API approach?
- Synchronous vs asynchronous communication patterns?
- Real-time requirements (WebSocket, Server-Sent Events)?
2. **Resource Modeling & Endpoints**
- What are the core domain resources/entities?
- Expected CRUD operations for each resource?
- Complex query requirements (filtering, sorting, pagination)?
3. **Data Contracts & Validation**
- Request/response data format requirements (JSON, XML, Protocol Buffers)?
- Input validation and sanitization requirements?
- Data transformation and mapping needs?
4. **API Management & Governance**
- API versioning strategy requirements?
- Authentication and authorization mechanisms?
- Rate limiting and throttling requirements?
- API documentation and developer portal needs?
5. **Integration & Compatibility**
- Client platforms consuming the API (web, mobile, third-party)?
- Backward compatibility requirements?
- External API integrations needed?
#### Context Validation
- **Minimum Response**: Each answer must be ≥50 characters
- **Re-prompting**: Insufficient detail triggers follow-up questions
- **Context Storage**: Save responses to `.brainstorming/api-designer-context.md`
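A minimal sketch of this validation, assuming answers arrive as a question-to-text map:
```javascript
// Enforce the 50-character minimum and collect questions that need re-prompting.
function validateContext(answers) {
  const reprompt = Object.entries(answers)
    .filter(([, text]) => text.trim().length < 50)
    .map(([question]) => question)
  return { valid: reprompt.length === 0, reprompt }
}
```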
### Step 2: Agent Assignment with Flow Control
**Dedicated Agent Execution**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute dedicated api-designer conceptual analysis for: {topic}
ASSIGNED_ROLE: api-designer
OUTPUT_LOCATION: .brainstorming/api-designer/
USER_CONTEXT: {validated_responses_from_context_gathering}
Flow Control Steps:
[
{
\"step\": \"load_role_template\",
\"action\": \"Load api-designer planning template\",
\"command\": \"bash($(cat ~/.claude/workflows/cli-templates/planning-roles/api-designer.md))\",
\"output_to\": \"role_template\"
}
]
Conceptual Analysis Requirements:
- Apply api-designer perspective to topic analysis
- Focus on endpoint design, data contracts, and API governance
- Use loaded role template framework for analysis structure
- Generate role-specific deliverables in designated output location
- Address all user context from questioning phase
Deliverables:
- analysis.md: Main API design analysis
- api-specification.md: Detailed endpoint specifications
- data-contracts.md: Request/response schemas and validation rules
- api-documentation.md: API documentation strategy and templates
Embody api-designer role expertise for comprehensive conceptual planning."
```
### Progress Tracking
TodoWrite tracking for two-step process:
```json
[
{"content": "Gather API designer context through role-specific questioning", "status": "in_progress", "activeForm": "Gathering context"},
{"content": "Validate context responses and save to api-designer-context.md", "status": "pending", "activeForm": "Validating context"},
{"content": "Load api-designer planning template via flow control", "status": "pending", "activeForm": "Loading template"},
{"content": "Execute dedicated conceptual-planning-agent for api-designer role", "status": "pending", "activeForm": "Executing agent"}
]
```
## 📊 **Output Specification**
### Output Location
```
.workflow/active/WFS-{topic-slug}/.brainstorming/api-designer/
├── analysis.md # Primary API design analysis
├── api-specification.md # Detailed endpoint specifications (OpenAPI/Swagger)
├── data-contracts.md # Request/response schemas and validation rules
├── versioning-strategy.md # API versioning and backward compatibility plan
└── developer-guide.md # API usage documentation and integration examples
```
### Document Templates
#### analysis.md Structure
```markdown
# API Design Analysis: {Topic}
*Generated: {timestamp}*
## Executive Summary
[Key API design findings and recommendations overview]
## API Architecture Overview
### API Type Selection (REST/GraphQL/Hybrid)
### Communication Patterns
### Authentication & Authorization Strategy
## Resource Modeling
### Core Domain Resources
### Resource Relationships
### URL Structure and Naming Conventions
## Endpoint Design
### Resource Endpoints
- GET /api/v1/resources
- POST /api/v1/resources
- GET /api/v1/resources/{id}
- PUT /api/v1/resources/{id}
- DELETE /api/v1/resources/{id}
### Query Parameters
- Filtering: ?filter[field]=value
- Sorting: ?sort=field,-field2
- Pagination: ?page=1&limit=20
### HTTP Methods and Status Codes
- Success responses (2xx)
- Client errors (4xx)
- Server errors (5xx)
## Data Contracts
### Request Schemas
[JSON Schema or OpenAPI definitions]
### Response Schemas
[JSON Schema or OpenAPI definitions]
### Validation Rules
- Required fields
- Data types and formats
- Business logic constraints
## API Versioning Strategy
### Versioning Approach (URL/Header/Accept)
### Version Lifecycle Management
### Deprecation Policy
### Migration Paths
## Security & Governance
### Authentication Mechanisms
- OAuth 2.0 / JWT / API Keys
### Authorization Patterns
- RBAC / ABAC / Resource-based
### Rate Limiting & Throttling
### CORS and Security Headers
## Error Handling
### Standard Error Response Format
```json
{
"error": {
"code": "ERROR_CODE",
"message": "Human-readable error message",
"details": [],
"trace_id": "uuid"
}
}
```
### Error Code Taxonomy
### Validation Error Responses
## API Documentation
### OpenAPI/Swagger Specification
### Developer Portal Requirements
### Code Examples and SDKs
### Changelog and Migration Guides
## Performance Optimization
### Response Caching Strategies
### Compression (gzip, brotli)
### Field Selection (sparse fieldsets)
### Bulk Operations and Batch Endpoints
## Monitoring & Observability
### API Metrics
- Request count, latency, error rates
- Endpoint usage analytics
### Logging Strategy
### Distributed Tracing
## Developer Experience
### API Usability Assessment
### Integration Complexity
### SDK and Client Library Needs
### Sandbox and Testing Environments
```
#### api-specification.md Structure
```markdown
# API Specification: {Topic}
*OpenAPI 3.0 Specification*
## API Information
- **Title**: {API Name}
- **Version**: 1.0.0
- **Base URL**: https://api.example.com/v1
- **Contact**: api-team@example.com
## Endpoints
### Users API
#### GET /users
**Description**: Retrieve a list of users
**Parameters**:
- `page` (query, integer): Page number (default: 1)
- `limit` (query, integer): Items per page (default: 20, max: 100)
- `sort` (query, string): Sort field (e.g., "created_at", "-updated_at")
- `filter[status]` (query, string): Filter by user status
**Response 200**:
```json
{
"data": [
{
"id": "uuid",
"username": "string",
"email": "string",
"created_at": "2025-10-15T00:00:00Z"
}
],
"meta": {
"page": 1,
"limit": 20,
"total": 100
},
"links": {
"self": "/users?page=1",
"next": "/users?page=2",
"prev": null
}
}
```
#### POST /users
**Description**: Create a new user
**Request Body**:
```json
{
"username": "string (required, 3-50 chars)",
"email": "string (required, valid email)",
"password": "string (required, min 8 chars)",
"profile": {
"first_name": "string (optional)",
"last_name": "string (optional)"
}
}
```
**Response 201**:
```json
{
"data": {
"id": "uuid",
"username": "string",
"email": "string",
"created_at": "2025-10-15T00:00:00Z"
}
}
```
**Response 400** (Validation Error):
```json
{
"error": {
"code": "VALIDATION_ERROR",
"message": "Request validation failed",
"details": [
{
"field": "email",
"message": "Invalid email format"
}
]
}
}
```
[Continue for all endpoints...]
## Authentication
### OAuth 2.0 Flow
1. Client requests authorization
2. User grants permission
3. Client receives access token
4. Client uses token in requests
**Header Format**:
```
Authorization: Bearer {access_token}
```
## Rate Limiting
**Headers**:
- `X-RateLimit-Limit`: 1000
- `X-RateLimit-Remaining`: 999
- `X-RateLimit-Reset`: 1634270400
**Response 429** (Too Many Requests):
```json
{
"error": {
"code": "RATE_LIMIT_EXCEEDED",
"message": "API rate limit exceeded",
"retry_after": 3600
}
}
```
```
## 🔄 **Session Integration**
### Status Synchronization
Upon completion, update `workflow-session.json`:
```json
{
"phases": {
"BRAINSTORM": {
"api_designer": {
"status": "completed",
"completed_at": "timestamp",
"output_directory": ".workflow/active/WFS-{topic}/.brainstorming/api-designer/",
"key_insights": ["endpoint_design", "versioning_strategy", "data_contracts"]
}
}
}
}
```
### Cross-Role Collaboration
API designer perspective provides:
- **API Contract Specifications** → Frontend Developer
- **Data Schema Requirements** → Data Architect
- **Security Requirements** → Security Expert
- **Integration Endpoints** → System Architect
- **Performance Constraints** → DevOps Engineer
## ✅ **Quality Assurance**
### Required Analysis Elements
- [ ] Complete endpoint inventory with HTTP methods and paths
- [ ] Detailed request/response schemas with validation rules
- [ ] Clear versioning strategy and backward compatibility plan
- [ ] Comprehensive error handling and status code usage
- [ ] API documentation strategy (OpenAPI/Swagger)
### API Design Principles
- [ ] **Consistency**: Uniform naming conventions and patterns across all endpoints
- [ ] **Simplicity**: Intuitive resource modeling and URL structures
- [ ] **Flexibility**: Support for filtering, sorting, pagination, and field selection
- [ ] **Security**: Proper authentication, authorization, and input validation
- [ ] **Performance**: Caching strategies, compression, and efficient data structures
### Developer Experience Validation
- [ ] API is self-documenting with clear endpoint descriptions
- [ ] Error messages are actionable and helpful for debugging
- [ ] Response formats are consistent and predictable
- [ ] Code examples and integration guides are provided
- [ ] Sandbox environment available for testing
### Technical Completeness
- [ ] **Resource Modeling**: All domain entities mapped to API resources
- [ ] **CRUD Coverage**: Complete create, read, update, delete operations
- [ ] **Query Capabilities**: Advanced filtering, sorting, and search functionality
- [ ] **Versioning**: Clear version management and migration paths
- [ ] **Monitoring**: API metrics, logging, and tracing strategies defined
### Integration Readiness
- [ ] **Client Compatibility**: API works with all target client platforms
- [ ] **External Integration**: Third-party API dependencies identified
- [ ] **Backward Compatibility**: Changes don't break existing clients
- [ ] **Migration Path**: Clear upgrade paths for API consumers
- [ ] **SDK Support**: Client libraries and code generation considered

View File

@@ -1,10 +1,14 @@
---
name: artifacts
description: Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis
argument-hint: "topic or challenge description [--count N]"
argument-hint: "[-y|--yes] topic or challenge description [--count N]"
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-select recommended roles, skip all clarification questions, use default answers.
## Overview
Seven-phase workflow: **Context collection** → **Topic analysis** → **Role selection** → **Role questions** → **Conflict resolution** → **Final check** → **Generate specification**

View File

@@ -1,10 +1,14 @@
---
name: auto-parallel
description: Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives
argument-hint: "topic or challenge description" [--count N]
argument-hint: "[-y|--yes] topic or challenge description [--count N]"
allowed-tools: SlashCommand(*), Task(*), TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-select recommended roles, skip all clarification questions, use default answers.
# Workflow Brainstorm Parallel Auto Command
## Coordinator Role
@@ -128,55 +132,30 @@ SlashCommand(command="/workflow:brainstorm:artifacts \"{topic}\" --count {N}")
### Phase 2: Parallel Role Analysis Execution
**For Each Selected Role**:
**For Each Selected Role** (unified role-analysis command):
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute {role-name} analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: {role-name}
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/{role}/
TOPIC: {user-provided-topic}
## Flow Control Steps
1. load_topic_framework → .workflow/active/WFS-{session}/.brainstorming/guidance-specification.md
2. load_role_template → ~/.claude/workflows/cli-templates/planning-roles/{role}.md
3. load_session_metadata → .workflow/active/WFS-{session}/workflow-session.json
4. load_style_skill (ui-designer only, if style_skill_package) → .claude/skills/style-{style_skill_package}/
## Analysis Requirements
**Primary Reference**: Original user prompt from workflow-session.json is authoritative
**Framework Source**: Address all discussion points in guidance-specification.md from {role-name} perspective
**Role Focus**: {role-name} domain expertise aligned with user intent
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md** (optionally with analysis-{slug}.md sub-documents)
2. **Framework Reference**: @../guidance-specification.md
3. **User Intent Alignment**: Validate against session_context
## Completion Criteria
- Address each discussion point from guidance-specification.md with {role-name} expertise
- Provide actionable recommendations from {role-name} perspective within analysis files
- All output files MUST start with `analysis` prefix (no recommendations.md or other naming)
- Reference framework document using @ notation for integration
- Update workflow-session.json with completion status
"
SlashCommand(command="/workflow:brainstorm:role-analysis {role-name} --session {session-id} --skip-questions")
```
**Parallel Execute**:
- Launch N agents simultaneously (one message with multiple Task calls)
- Each agent task **attached** to orchestrator's TodoWrite
- All agents execute concurrently, each attaching their own analysis sub-tasks
- Each agent operates independently reading same guidance-specification.md
**What It Does**:
- Unified command execution for each role
- Loads topic framework from guidance-specification.md
- Applies role-specific template and context
- Generates analysis.md addressing framework discussion points
- Supports optional interactive context gathering (via --include-questions flag)
**Parallel Execution**:
- Launch N SlashCommand calls simultaneously (one message with multiple SlashCommand invokes)
- Each role command **attached** to orchestrator's TodoWrite
- All roles execute concurrently, each reading same guidance-specification.md
- Each role operates independently
- For ui-designer only: append `--style-skill {style_skill_package}` if provided
**Input**:
- `selected_roles[]` from Phase 1
- `session_id` from Phase 1
- guidance-specification.md path
- `guidance-specification.md` (framework reference)
- `style_skill_package` (for ui-designer only)
**Validation**:
- Each role creates `.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md`
@@ -185,6 +164,7 @@ TOPIC: {user-provided-topic}
- **FORBIDDEN**: `recommendations.md` or any non-`analysis` prefixed files
- All N role analyses completed
**TodoWrite Update (Phase 2 agents executed - tasks attached in parallel)**:
```json
[
@@ -451,4 +431,3 @@ CONTEXT_VARS:
```
**Template Source**: `~/.claude/workflows/cli-templates/planning-roles/`

View File

@@ -1,220 +0,0 @@
---
name: data-architect
description: Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 📊 **Data Architect Analysis Generator**
### Purpose
**Specialized command for generating data-architect/analysis.md** that addresses guidance-specification.md discussion points from a data architecture perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Data Architecture Focus**: Data models, pipelines, governance, and analytics perspective
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Data Model Design**: Efficient and scalable data models and schemas
- **Data Flow Design**: Data collection, processing, and storage workflows
- **Data Quality Management**: Data accuracy, completeness, and consistency
- **Analytics and Insights**: Data analysis and business intelligence solutions
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Canonical Data Model - Source of Truth)**
- **Canonical Data Model**: The authoritative, system-wide data schema representing domain entities and relationships
- **Entity-Relationship Design**: Defining entities, attributes, relationships, and constraints
- **Data Normalization & Optimization**: Ensuring data integrity, reducing redundancy, and optimizing storage
- **Database Schema Design**: Physical database structures, indexes, partitioning strategies
- **Data Pipeline Architecture**: ETL/ELT processes, data warehousing, and analytics pipelines
- **Data Governance**: Data quality standards, retention policies, and compliance requirements
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **API Data Contracts**: Public-facing request/response schemas exposed by APIs → Defers to **API Designer**
- **System Integration Patterns**: How services communicate at the macro level → Defers to **System Architect**
- **UI Data Presentation**: How data is displayed to users → Defers to **UI Designer**
#### **Handoff Points**
- **TO API Designer**: Provides canonical data model that API Designer translates into public API data contracts (as projection/view)
- **TO System Architect**: Provides data flow requirements and storage constraints to inform system design
- **FROM System Architect**: Receives system-level integration requirements and scalability constraints
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute data-architect analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: data-architect
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/data-architect/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load data-architect planning template
- Command: Read(~/.claude/workflows/cli-templates/planning-roles/data-architect.md)
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from data architecture perspective
**Role Focus**: Data models, pipelines, governance, analytics platforms
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive data architecture analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with data architecture expertise
- Provide data model designs, pipeline architectures, and governance strategies
- Include scalability, performance, and quality considerations
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute data-architect analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing data-architect framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured data-architect analysis"
},
{
content: "Update workflow-session.json with data-architect completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/data-architect/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Data Architect Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Data Architecture perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with data architecture expertise]
### Core Requirements (from framework)
[Data architecture perspective on requirements]
### Technical Considerations (from framework)
[Data model, pipeline, and storage considerations]
### User Experience Factors (from framework)
[Data access patterns and analytics user experience]
### Implementation Challenges (from framework)
[Data migration, quality, and governance challenges]
### Success Metrics (from framework)
[Data quality metrics and analytics success criteria]
## Data Architecture Specific Recommendations
[Role-specific data architecture recommendations and solutions]
---
*Generated by data-architect analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"data_architect": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/data-architect/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Data architecture insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,200 +0,0 @@
---
name: product-manager
description: Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **Product Manager Analysis Generator**
### Purpose
**Specialized command for generating product-manager/analysis.md** that addresses guidance-specification.md discussion points from product strategy perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Product Strategy Focus**: User needs, business value, and market positioning
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **User Needs Analysis**: Target users, problems, and value propositions
- **Business Impact Assessment**: ROI, metrics, and commercial outcomes
- **Market Positioning**: Competitive analysis and differentiation
- **Product Strategy**: Roadmaps, priorities, and go-to-market approaches
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute product-manager analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: product-manager
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/product-manager/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load product-manager planning template
- Command: Read(~/.claude/workflows/cli-templates/planning-roles/product-manager.md)
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from product strategy perspective
**Role Focus**: User value, business impact, market positioning, product strategy
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive product strategy analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with product management expertise
- Provide actionable business strategies and user value propositions
- Include market analysis and competitive positioning insights
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute product-manager analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing product-manager framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured product-manager analysis"
},
{
content: "Update workflow-session.json with product-manager completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/product-manager/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Product Manager Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Product Strategy perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with product management expertise]
### Core Requirements (from framework)
[Product strategy perspective on user needs and requirements]
### Technical Considerations (from framework)
[Business and technical feasibility considerations]
### User Experience Factors (from framework)
[User value proposition and market positioning analysis]
### Implementation Challenges (from framework)
[Business execution and go-to-market considerations]
### Success Metrics (from framework)
[Product success metrics and business KPIs]
## Product Strategy Specific Recommendations
[Role-specific product management strategies and business solutions]
---
*Generated by product-manager analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"product_manager": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/product-manager/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Product strategy insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,200 +0,0 @@
---
name: product-owner
description: Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **Product Owner Analysis Generator**
### Purpose
**Specialized command for generating product-owner/analysis.md** that addresses guidance-specification.md discussion points from product backlog and feature prioritization perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Product Backlog Focus**: Feature prioritization, user stories, and acceptance criteria
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Backlog Management**: User story creation, refinement, and prioritization
- **Stakeholder Alignment**: Requirements gathering, value definition, and expectation management
- **Feature Prioritization**: ROI analysis, MoSCoW method, and value-driven delivery
- **Acceptance Criteria**: Definition of Done, acceptance testing, and quality standards
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute product-owner analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: product-owner
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/product-owner/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load product-owner planning template
- Command: Read(~/.claude/workflows/cli-templates/planning-roles/product-owner.md)
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from product backlog and feature prioritization perspective
**Role Focus**: Backlog management, stakeholder alignment, feature prioritization, acceptance criteria
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive product ownership analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with product ownership expertise
- Provide actionable user stories and acceptance criteria definitions
- Include feature prioritization and stakeholder alignment strategies
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute product-owner analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing product-owner framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured product-owner analysis"
},
{
content: "Update workflow-session.json with product-owner completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/product-owner/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Product Owner Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Product Backlog & Feature Prioritization perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with product ownership expertise]
### Core Requirements (from framework)
[User story formulation and backlog refinement perspective]
### Technical Considerations (from framework)
[Technical feasibility and implementation sequencing considerations]
### User Experience Factors (from framework)
[User value definition and acceptance criteria analysis]
### Implementation Challenges (from framework)
[Sprint planning, dependency management, and delivery strategies]
### Success Metrics (from framework)
[Feature adoption, value delivery metrics, and stakeholder satisfaction indicators]
## Product Owner Specific Recommendations
[Role-specific backlog management and feature prioritization strategies]
---
*Generated by product-owner analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"product_owner": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/product-owner/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Product ownership insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -0,0 +1,705 @@
---
name: role-analysis
description: Unified role-specific analysis generation with interactive context gathering and incremental updates
argument-hint: "[role-name] [--session session-id] [--update] [--include-questions] [--skip-questions]"
allowed-tools: Task(conceptual-planning-agent), AskUserQuestion(*), TodoWrite(*), Read(*), Write(*), Edit(*), Glob(*)
---
## 🎯 **Unified Role Analysis Generator**
### Purpose
**Unified command for generating and updating role-specific analysis** with interactive context gathering, framework alignment, and incremental update support. Replaces 9 individual role commands with a single parameterized workflow.
### Core Function
- **Multi-Role Support**: Single command supports all 9 brainstorming roles
- **Interactive Context**: Dynamic question generation based on role and framework
- **Incremental Updates**: Merge new insights into existing analyses
- **Framework Alignment**: Address guidance-specification.md discussion points
- **Agent Delegation**: Use conceptual-planning-agent with role-specific templates
### Supported Roles
| Role ID | Title | Focus Area | Context Questions |
|---------|-------|------------|-------------------|
| `ux-expert` | UX专家 (UX Expert) | User research, information architecture, user journey | 4 |
| `ui-designer` | UI设计师 (UI Designer) | Visual design, high-fidelity mockups, design systems | 4 |
| `system-architect` | 系统架构师 (System Architect) | Technical architecture, scalability, integration patterns | 5 |
| `product-manager` | 产品经理 (Product Manager) | Product strategy, roadmap, prioritization | 4 |
| `product-owner` | 产品负责人 (Product Owner) | Backlog management, user stories, acceptance criteria | 4 |
| `scrum-master` | 敏捷教练 (Agile Coach) | Process facilitation, impediment removal, team dynamics | 3 |
| `subject-matter-expert` | 领域专家 (Domain Expert) | Domain knowledge, business rules, compliance | 4 |
| `data-architect` | 数据架构师 (Data Architect) | Data models, storage strategies, data flow | 5 |
| `api-designer` | API设计师 (API Designer) | API contracts, versioning, integration patterns | 4 |
---
## 📋 **Usage**
```bash
# Generate new analysis with interactive context
/workflow:brainstorm:role-analysis ux-expert
# Generate with existing framework + context questions
/workflow:brainstorm:role-analysis system-architect --session WFS-xxx --include-questions
# Update existing analysis (incremental merge)
/workflow:brainstorm:role-analysis ui-designer --session WFS-xxx --update
# Quick generation (skip interactive context)
/workflow:brainstorm:role-analysis product-manager --session WFS-xxx --skip-questions
```
---
## ⚙️ **Execution Protocol**
### Phase 1: Detection & Validation
**Step 1.1: Role Validation**
```bash
VALIDATE role_name IN [
ux-expert, ui-designer, system-architect, product-manager,
product-owner, scrum-master, subject-matter-expert,
data-architect, api-designer
]
IF invalid:
ERROR: "Unknown role: {role_name}. Use one of: ux-expert, ui-designer, ..."
EXIT
```
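A minimal JavaScript sketch of this validation step; the role list mirrors the Supported Roles table, and the error text is illustrative:
```javascript
// Hypothetical sketch: reject unknown roles before any session work begins.
const VALID_ROLES = [
  'ux-expert', 'ui-designer', 'system-architect', 'product-manager',
  'product-owner', 'scrum-master', 'subject-matter-expert',
  'data-architect', 'api-designer'
];

function validateRole(roleName) {
  if (!VALID_ROLES.includes(roleName)) {
    throw new Error(`Unknown role: ${roleName}. Use one of: ${VALID_ROLES.join(', ')}`);
  }
  return roleName;
}
```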
**Step 1.2: Session Detection**
```bash
IF --session PROVIDED:
session_id = --session
brainstorm_dir = .workflow/active/{session_id}/.brainstorming/
ELSE:
FIND .workflow/active/WFS-*/
IF multiple:
PROMPT user to select
ELSE IF single:
USE existing
ELSE:
ERROR: "No active session. Run /workflow:brainstorm:artifacts first"
EXIT
VALIDATE brainstorm_dir EXISTS
```
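One possible Node.js reading of the auto-detect branch, assuming sessions live under `.workflow/active/` with a `WFS-` prefix; the prompt-on-multiple behavior is stubbed with an error:
```javascript
const fs = require('fs');

// Hypothetical session auto-detection sketch.
function detectSession(explicitSessionId) {
  if (explicitSessionId) return explicitSessionId;
  const activeDir = '.workflow/active';
  const sessions = fs.existsSync(activeDir)
    ? fs.readdirSync(activeDir).filter(name => name.startsWith('WFS-'))
    : [];
  if (sessions.length === 0) {
    throw new Error('No active session. Run /workflow:brainstorm:artifacts first');
  }
  if (sessions.length > 1) {
    // The real command prompts the user to pick one; here we just surface the choices.
    throw new Error(`Multiple sessions found, pass --session: ${sessions.join(', ')}`);
  }
  return sessions[0];
}
```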
**Step 1.3: Framework Detection**
```bash
framework_file = {brainstorm_dir}/guidance-specification.md
IF framework_file EXISTS:
framework_mode = true
LOAD framework_content
ELSE:
WARN: "No framework found - will create standalone analysis"
framework_mode = false
```
**Step 1.4: Update Mode Detection**
```bash
existing_analysis = {brainstorm_dir}/{role_name}/analysis*.md
IF --update FLAG OR existing_analysis EXISTS:
update_mode = true
IF --update NOT PROVIDED:
ASK: "Analysis exists. Update or regenerate?"
OPTIONS: ["Incremental update", "Full regenerate", "Cancel"]
ELSE:
update_mode = false
```
### Phase 2: Interactive Context Gathering
**Trigger Conditions**:
- Default: Always ask unless `--skip-questions` provided
- `--include-questions`: Force context gathering even if analysis exists
- `--skip-questions`: Skip all interactive questions
**Step 2.1: Load Role Configuration**
```javascript
const roleConfig = {
'ux-expert': {
title: 'UX专家',
focus_area: 'User research, information architecture, user journey',
question_categories: ['User Intent', 'Requirements', 'UX'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/ux-expert.md'
},
'ui-designer': {
title: 'UI设计师',
focus_area: 'Visual design, high-fidelity mockups, design systems',
question_categories: ['Requirements', 'UX', 'Feasibility'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/ui-designer.md'
},
'system-architect': {
title: '系统架构师',
focus_area: 'Technical architecture, scalability, integration patterns',
question_categories: ['Scale & Performance', 'Technical Constraints', 'Architecture Complexity', 'Non-Functional Requirements'],
question_count: 5,
template: '~/.claude/workflows/cli-templates/planning-roles/system-architect.md'
},
'product-manager': {
title: '产品经理',
focus_area: 'Product strategy, roadmap, prioritization',
question_categories: ['User Intent', 'Requirements', 'Process'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/product-manager.md'
},
'product-owner': {
title: '产品负责人',
focus_area: 'Backlog management, user stories, acceptance criteria',
question_categories: ['Requirements', 'Decisions', 'Process'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/product-owner.md'
},
'scrum-master': {
title: '敏捷教练',
focus_area: 'Process facilitation, impediment removal, team dynamics',
question_categories: ['Process', 'Risk', 'Decisions'],
question_count: 3,
template: '~/.claude/workflows/cli-templates/planning-roles/scrum-master.md'
},
'subject-matter-expert': {
title: '领域专家',
focus_area: 'Domain knowledge, business rules, compliance',
question_categories: ['Requirements', 'Feasibility', 'Terminology'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/subject-matter-expert.md'
},
'data-architect': {
title: '数据架构师',
focus_area: 'Data models, storage strategies, data flow',
question_categories: ['Architecture', 'Scale & Performance', 'Technical Constraints', 'Feasibility'],
question_count: 5,
template: '~/.claude/workflows/cli-templates/planning-roles/data-architect.md'
},
'api-designer': {
title: 'API设计师',
focus_area: 'API contracts, versioning, integration patterns',
question_categories: ['Architecture', 'Requirements', 'Feasibility', 'Decisions'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/api-designer.md'
}
};
config = roleConfig[role_name];
```
**Step 2.2: Generate Role-Specific Questions**
**Question Category Taxonomy** (9 core categories from synthesis.md, plus 4 architecture-focused extensions):
| Category | Focus | Example Question Pattern |
|----------|-------|--------------------------|
| User Intent | User goals | "What is the core goal of this analysis?" |
| Requirements | Requirement refinement | "How should the requirements be prioritized?" |
| Architecture | Architecture decisions | "What drives the technology stack choice?" |
| UX | User experience | "What interaction-complexity trade-offs apply?" |
| Feasibility | Feasibility | "What scope is realistic under the resource constraints?" |
| Risk | Risk management | "What is the acceptable risk tolerance?" |
| Process | Process conventions | "What is the development iteration cadence?" |
| Decisions | Decision confirmation | "How should the conflicting options be resolved?" |
| Terminology | Terminology alignment | "Which term should be used consistently?" |
| Scale & Performance | Performance and scaling | "What are the expected load and performance requirements?" |
| Technical Constraints | Technical constraints | "What limits does the existing stack impose?" |
| Architecture Complexity | Architecture complexity | "What complexity trade-offs does the architecture allow?" |
| Non-Functional Requirements | Non-functional requirements | "What are the availability and maintainability requirements?" |
(Patterns are shown here in English; questions actually presented to users are phrased in Chinese, per the quality rules below.)
**Question Generation Algorithm**:
```javascript
async function generateQuestions(role_name, framework_content) {
const config = roleConfig[role_name];
const questions = [];
// Parse framework for keywords
const keywords = extractKeywords(framework_content);
// Generate category-specific questions
for (const category of config.question_categories) {
const question = generateCategoryQuestion(category, keywords, role_name);
questions.push(question);
}
return questions.slice(0, config.question_count);
}
```
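`extractKeywords` and `generateCategoryQuestion` are not defined in this spec. A minimal sketch of what they might look like, using markdown headings as keyword sources and one templated question per category (real questions would be phrased in Chinese, per the quality rules below):
```javascript
// Hypothetical helpers assumed by generateQuestions above.
function extractKeywords(frameworkContent) {
  // Use level-2/3 markdown headings as a cheap proxy for the framework's key topics.
  return (frameworkContent.match(/^#{2,3}\s+.+$/gm) || [])
    .map(h => h.replace(/^#{2,3}\s+/, '').trim());
}

function generateCategoryQuestion(category, keywords, roleName) {
  const topic = keywords[0] || 'this topic';
  return {
    category,
    question: `[${roleName}] ${category}: what are the key constraints for "${topic}"?`,
    options: [
      { label: 'Option A', description: 'Illustrative placeholder option' },
      { label: 'Option B', description: 'Illustrative placeholder option' }
    ]
  };
}
```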
**Step 2.3: Multi-Round Question Execution**
```javascript
const BATCH_SIZE = 4;
const user_context = {};

for (let i = 0; i < questions.length; i += BATCH_SIZE) {
  const batch = questions.slice(i, i + BATCH_SIZE);
  const currentRound = Math.floor(i / BATCH_SIZE) + 1;
  const totalRounds = Math.ceil(questions.length / BATCH_SIZE);
  // Round banner ("上下文询问" = context questions for this role)
  console.log(`\n[Round ${currentRound}/${totalRounds}] ${config.title} 上下文询问\n`);

  const responses = AskUserQuestion({
    questions: batch.map(q => ({
      question: q.question,
      header: q.category.substring(0, 12),
      multiSelect: false,
      options: q.options.map(opt => ({
        label: opt.label,
        description: opt.description
      }))
    }))
  });

  // Store responses before starting the next round
  for (const answer of responses) {
    user_context[answer.question] = {
      answer: answer.selected,
      category: answer.category,
      timestamp: new Date().toISOString()
    };
  }
}

// Save gathered context to file
Write(
  `${brainstorm_dir}/${role_name}/${role_name}-context.md`,
  formatUserContext(user_context)
);
```
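`formatUserContext` is likewise left unspecified; a plausible sketch that renders the stored answers as the `{role-name}-context.md` document:
```javascript
// Hypothetical formatter for the saved Q&A context file.
function formatUserContext(userContext) {
  const lines = ['# Interactive Context', ''];
  for (const [question, entry] of Object.entries(userContext)) {
    lines.push(`## ${entry.category}`);
    lines.push(`- **Q**: ${question}`);
    lines.push(`- **A**: ${entry.answer}`);
    lines.push(`- _Recorded_: ${entry.timestamp}`);
    lines.push('');
  }
  return lines.join('\n');
}
```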
**Question Quality Rules** (from artifacts.md):
**MUST Include**:
- ✅ All questions asked in Chinese (用中文提问)
- ✅ A business scenario stated as the premise of each question
- ✅ The business impact of each technical option explained
- ✅ Quantified metrics and constraints included
**MUST Avoid**:
- ❌ Pure technology choices with no business context
- ❌ Overly abstract, generic questions
- ❌ Repeated questions detached from the framework
### Phase 3: Agent Execution
**Step 3.1: Load Session Metadata**
```bash
session_metadata = Read(.workflow/active/{session_id}/workflow-session.json)
original_topic = session_metadata.topic
selected_roles = session_metadata.selected_roles
```
**Step 3.2: Prepare Agent Context**
```javascript
const agentContext = {
role_name: role_name,
role_config: roleConfig[role_name],
output_location: `${brainstorm_dir}/${role_name}/`,
framework_mode: framework_mode,
framework_path: framework_mode ? `${brainstorm_dir}/guidance-specification.md` : null,
update_mode: update_mode,
user_context: user_context,
original_topic: original_topic,
session_id: session_id
};
```
**Step 3.3: Execute Conceptual Planning Agent**
**Framework-Based Analysis** (when guidance-specification.md exists):
```javascript
Task(
subagent_type="conceptual-planning-agent",
run_in_background=false,
description=`Generate ${role_name} analysis`,
prompt=`
[FLOW_CONTROL]
Execute ${role_name} analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: ${role_name}
OUTPUT_LOCATION: ${agentContext.output_location}
ANALYSIS_MODE: ${framework_mode ? "framework_based" : "standalone"}
UPDATE_MODE: ${update_mode}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(${agentContext.framework_path})
- Output: topic_framework_content
2. **load_role_template**
- Action: Load ${role_name} planning template
- Command: Read(${roleConfig[role_name].template})
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and user intent
- Command: Read(.workflow/active/${session_id}/workflow-session.json)
- Output: session_context
4. **load_user_context** (if exists)
- Action: Load interactive context responses
- Command: Read(${brainstorm_dir}/${role_name}/${role_name}-context.md)
- Output: user_context_answers
5. **${update_mode ? 'load_existing_analysis' : 'skip'}**
${update_mode ? `
- Action: Load existing analysis for incremental update
- Command: Read(${brainstorm_dir}/${role_name}/analysis.md)
- Output: existing_analysis_content
` : ''}
## Analysis Requirements
**Primary Reference**: Original user prompt from workflow-session.json is authoritative
**Framework Source**: Address all discussion points in guidance-specification.md from ${role_name} perspective
**User Context Integration**: Incorporate interactive Q&A responses into analysis
**Role Focus**: ${roleConfig[role_name].focus_area}
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md** (main document, optionally with analysis-{slug}.md sub-documents)
2. **Framework Reference**: @../guidance-specification.md (if framework_mode)
3. **User Context Reference**: @./${role_name}-context.md (if user context exists)
4. **User Intent Alignment**: Validate against session_context
## Update Requirements (if UPDATE_MODE)
- **Preserve Structure**: Maintain existing analysis structure
- **Add "Clarifications" Section**: Document new user context with timestamp
- **Merge Insights**: Integrate new perspectives without removing existing content
- **Resolve Conflicts**: If new context contradicts existing analysis, document both and recommend resolution
## Completion Criteria
- Address each discussion point from guidance-specification.md with ${role_name} expertise
- Provide actionable recommendations from ${role_name} perspective within analysis files
- All output files MUST start with "analysis" prefix (no recommendations.md or other naming)
- Reference framework document using @ notation for integration
- Update workflow-session.json with completion status
`
);
```
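In update mode, the "merge without removing existing content" requirement can, in its simplest form, be met by appending a dated Clarifications section; a sketch under the same file-layout assumptions (full section-level merging is left to the agent):
```javascript
const fs = require('fs');

// Hypothetical incremental-update helper: append a dated "Clarifications"
// section instead of rewriting the existing analysis in place.
function appendClarifications(analysisPath, userContext) {
  const today = new Date().toISOString().split('T')[0];
  const entries = Object.entries(userContext)
    .map(([q, a]) => `- **Q**: ${q} (Category: ${a.category})\n  **A**: ${a.answer}`)
    .join('\n');
  fs.appendFileSync(analysisPath, `\n## Clarifications\n### Session ${today}\n${entries}\n`);
}
```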
### Phase 4: Validation & Finalization
**Step 4.1: Validate Output**
```bash
VERIFY EXISTS: ${brainstorm_dir}/${role_name}/analysis.md
VERIFY CONTAINS: "@../guidance-specification.md" (if framework_mode)
IF user_context EXISTS:
VERIFY CONTAINS: "@./${role_name}-context.md" OR "## Clarifications" section
```
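A direct JavaScript rendering of these checks, under the same path assumptions:
```javascript
const fs = require('fs');

// Hypothetical output validation for Step 4.1.
function validateOutput(brainstormDir, roleName, frameworkMode, hasUserContext) {
  const analysisPath = `${brainstormDir}/${roleName}/analysis.md`;
  if (!fs.existsSync(analysisPath)) {
    throw new Error(`Missing required output: ${analysisPath}`);
  }
  const content = fs.readFileSync(analysisPath, 'utf8');
  if (frameworkMode && !content.includes('@../guidance-specification.md')) {
    throw new Error('analysis.md is missing the framework reference');
  }
  if (hasUserContext &&
      !content.includes(`@./${roleName}-context.md`) &&
      !content.includes('## Clarifications')) {
    throw new Error('analysis.md does not reference the gathered user context');
  }
}
```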
**Step 4.2: Update Session Metadata**
```json
{
"phases": {
"BRAINSTORM": {
"${role_name}": {
"status": "${update_mode ? 'updated' : 'completed'}",
"completed_at": "timestamp",
"framework_addressed": true,
"context_gathered": user_context ? true : false,
"output_location": "${brainstorm_dir}/${role_name}/analysis.md",
"update_history": [
{
"timestamp": "ISO8601",
"mode": "${update_mode ? 'incremental' : 'initial'}",
"context_questions": question_count
}
]
}
}
}
}
```
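Because several roles write to the same `workflow-session.json`, a read-merge-write update keeps the other roles' entries intact; a minimal sketch:
```javascript
const fs = require('fs');

// Hypothetical session-metadata updater: merge one role's status without
// clobbering entries written by other roles.
function updateSessionMetadata(sessionPath, roleName, entry) {
  const session = JSON.parse(fs.readFileSync(sessionPath, 'utf8'));
  session.phases = session.phases || {};
  session.phases.BRAINSTORM = session.phases.BRAINSTORM || {};
  session.phases.BRAINSTORM[roleName] = {
    ...session.phases.BRAINSTORM[roleName],
    ...entry
  };
  fs.writeFileSync(sessionPath, JSON.stringify(session, null, 2));
}
```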
**Step 4.3: Completion Report**
```markdown
✅ ${roleConfig[role_name].title} Analysis Complete
**Output**: ${brainstorm_dir}/${role_name}/analysis.md
**Mode**: ${update_mode ? 'Incremental Update' : 'New Generation'}
**Framework**: ${framework_mode ? '✓ Aligned' : '✗ Standalone'}
**Context Questions**: ${question_count} answered
${update_mode ? '
**Changes**:
- Added "Clarifications" section with new user context
- Merged new insights into existing sections
- Resolved conflicts with framework alignment
' : ''}
**Next Steps**:
${selected_roles.length > 1 ? `
- Continue with other roles: ${selected_roles.filter(r => r !== role_name).join(', ')}
- Run synthesis: /workflow:brainstorm:synthesis --session ${session_id}
` : `
- Clarify insights: /workflow:brainstorm:synthesis --session ${session_id}
- Generate plan: /workflow:plan --session ${session_id}
`}
```
---
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Phase 1: Detect session and validate role configuration",
status: "in_progress",
activeForm: "Detecting session and role"
},
{
content: "Phase 2: Interactive context gathering with AskUserQuestion",
status: "pending",
activeForm: "Gathering user context"
},
{
content: "Phase 3: Execute conceptual-planning-agent for role analysis",
status: "pending",
activeForm: "Executing agent analysis"
},
{
content: "Phase 4: Validate output and update session metadata",
status: "pending",
activeForm: "Finalizing and validating"
}
]
});
```
---
## 📊 **Output Structure**
### Directory Layout
```
.workflow/active/WFS-{session}/.brainstorming/
├── guidance-specification.md # Framework (if exists)
└── {role-name}/
├── {role-name}-context.md # Interactive Q&A responses
├── analysis.md # Main analysis (REQUIRED)
└── analysis-{slug}.md # Section documents (optional, max 5)
```
### Analysis Document Structure (New Generation)
```markdown
# ${roleConfig[role_name].title} Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: ${roleConfig[role_name].focus_area}
**User Context**: @./${role_name}-context.md
## User Context Summary
**Context Gathered**: ${question_count} questions answered
**Categories**: ${question_categories.join(', ')}
${user_context ? formatContextSummary(user_context) : ''}
## Discussion Points Analysis
[Address each point from guidance-specification.md with ${role_name} expertise]
### Core Requirements (from framework)
[Role-specific perspective on requirements]
### Technical Considerations (from framework)
[Role-specific technical analysis]
### User Experience Factors (from framework)
[Role-specific UX considerations]
### Implementation Challenges (from framework)
[Role-specific challenges and solutions]
### Success Metrics (from framework)
[Role-specific metrics and KPIs]
## ${roleConfig[role_name].title} Specific Recommendations
[Role-specific actionable strategies]
---
*Generated by ${role_name} analysis addressing structured framework*
*Context gathered: ${new Date().toISOString()}*
```
### Analysis Document Structure (Incremental Update)
```markdown
# ${roleConfig[role_name].title} Analysis: [Topic]
## Framework Reference
[Existing content preserved]
## Clarifications
### Session ${new Date().toISOString().split('T')[0]}
${Object.entries(user_context).map(([q, a]) => `
- **Q**: ${q} (Category: ${a.category})
**A**: ${a.answer}
`).join('\n')}
## User Context Summary
[Updated with new context]
## Discussion Points Analysis
[Existing content enhanced with new insights]
[Rest of sections updated based on clarifications]
```
---
## 🔄 **Integration with Other Commands**
### Called By
- `/workflow:brainstorm:auto-parallel` (Phase 2 - parallel role execution)
- Manual invocation for single-role analysis
### Calls To
- `conceptual-planning-agent` (agent execution)
- `AskUserQuestion` (interactive context gathering)
### Coordinates With
- `/workflow:brainstorm:artifacts` (creates framework for role analysis)
- `/workflow:brainstorm:synthesis` (reads role analyses for integration)
---
## ✅ **Quality Assurance**
### Required Analysis Elements
- [ ] Framework discussion points addressed (if framework_mode)
- [ ] User context integrated (if context gathered)
- [ ] Role template guidelines applied
- [ ] Output files follow naming convention (analysis*.md only)
- [ ] Framework reference using @ notation
- [ ] Session metadata updated
### Context Quality
- [ ] Questions in Chinese with business context
- [ ] Options include technical trade-offs
- [ ] Categories aligned with role focus
- [ ] No generic questions unrelated to framework
### Update Quality (if update_mode)
- [ ] "Clarifications" section added with timestamp
- [ ] New insights merged without content loss
- [ ] Conflicts documented and resolved
- [ ] Framework alignment maintained
---
## 🎛️ **Command Parameters**
### Required Parameters
- `[role-name]`: Role identifier (ux-expert, ui-designer, system-architect, etc.)
### Optional Parameters
- `--session [session-id]`: Specify brainstorming session (auto-detect if omitted)
- `--update`: Force incremental update mode (auto-detect if analysis exists)
- `--include-questions`: Force context gathering even if analysis exists
- `--skip-questions`: Skip all interactive context gathering
- `--style-skill [package]`: For ui-designer only, load style SKILL package
### Parameter Combinations
| Scenario | Command | Behavior |
|----------|---------|----------|
| New analysis | `role-analysis ux-expert` | Generate + ask context questions |
| Quick generation | `role-analysis ux-expert --skip-questions` | Generate without context |
| Update existing | `role-analysis ux-expert --update` | Ask clarifications + merge |
| Force questions | `role-analysis ux-expert --include-questions` | Ask even if exists |
| Specific session | `role-analysis ux-expert --session WFS-xxx` | Target specific session |
---
## 🚫 **Error Handling**
### Invalid Role Name
```
ERROR: Unknown role: "ui-expert"
Valid roles: ux-expert, ui-designer, system-architect, product-manager,
product-owner, scrum-master, subject-matter-expert,
data-architect, api-designer
```
### No Active Session
```
ERROR: No active brainstorming session found
Run: /workflow:brainstorm:artifacts "[topic]" to create session
```
### Missing Framework (with warning)
```
WARN: No guidance-specification.md found
Generating standalone analysis without framework alignment
Recommend: Run /workflow:brainstorm:artifacts first for better results
```
### Agent Execution Failure
```
ERROR: Conceptual planning agent failed
Check: ${brainstorm_dir}/${role_name}/error.log
Action: Retry with --skip-questions or check framework validity
```
---
## 🔧 **Advanced Usage**
### Batch Role Generation (via auto-parallel)
```bash
# This command handles multiple roles in parallel
/workflow:brainstorm:auto-parallel "topic" --count 3
# → Internally calls role-analysis for each selected role
```
### Manual Multi-Role Workflow
```bash
# 1. Create framework
/workflow:brainstorm:artifacts "Build real-time collaboration platform" --count 3
# 2. Generate each role with context
/workflow:brainstorm:role-analysis system-architect --include-questions
/workflow:brainstorm:role-analysis ui-designer --include-questions
/workflow:brainstorm:role-analysis product-manager --include-questions
# 3. Synthesize insights
/workflow:brainstorm:synthesis --session WFS-xxx
```
### Iterative Refinement
```bash
# Initial generation
/workflow:brainstorm:role-analysis ux-expert
# User reviews and wants more depth
/workflow:brainstorm:role-analysis ux-expert --update --include-questions
# → Asks clarification questions, merges new insights
```
---
## 📚 **Reference Information**
### Role Template Locations
- Templates: `~/.claude/workflows/cli-templates/planning-roles/`
- Format: `{role-name}.md` (e.g., `ux-expert.md`, `system-architect.md`)
### Related Commands
- `/workflow:brainstorm:artifacts` - Create framework and select roles
- `/workflow:brainstorm:auto-parallel` - Parallel multi-role execution
- `/workflow:brainstorm:synthesis` - Integrate role analyses
- `/workflow:plan` - Generate implementation plan from synthesis
### Context Package
- Location: `.workflow/active/WFS-{session}/.process/context-package.json`
- Used by: `context-search-agent` (Phase 0 of artifacts)
- Contains: Project context, tech stack, conflict risks

View File

@@ -1,200 +0,0 @@
---
name: scrum-master
description: Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **Scrum Master Analysis Generator**
### Purpose
**Specialized command for generating scrum-master/analysis.md** that addresses guidance-specification.md discussion points from agile process and team collaboration perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Agile Process Focus**: Sprint planning, team dynamics, and delivery optimization
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Sprint Planning**: Task breakdown, estimation, and iteration planning
- **Team Collaboration**: Communication patterns, impediment removal, and facilitation
- **Process Optimization**: Agile ceremonies, retrospectives, and continuous improvement
- **Delivery Management**: Velocity tracking, burndown analysis, and release planning
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute scrum-master analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: scrum-master
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/scrum-master/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load scrum-master planning template
- Command: Read(~/.claude/workflows/cli-templates/planning-roles/scrum-master.md)
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from agile process and team collaboration perspective
**Role Focus**: Sprint planning, team dynamics, process optimization, delivery management
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive agile process analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with scrum mastery expertise
- Provide actionable sprint planning and team facilitation strategies
- Include process optimization and impediment removal insights
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute scrum-master analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing scrum-master framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured scrum-master analysis"
},
{
content: "Update workflow-session.json with scrum-master completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/scrum-master/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Scrum Master Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Agile Process & Team Collaboration perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with scrum mastery expertise]
### Core Requirements (from framework)
[Sprint planning and iteration breakdown perspective]
### Technical Considerations (from framework)
[Technical debt management and process considerations]
### User Experience Factors (from framework)
[User story refinement and acceptance criteria analysis]
### Implementation Challenges (from framework)
[Impediment identification and removal strategies]
### Success Metrics (from framework)
[Velocity tracking, burndown metrics, and team performance indicators]
## Scrum Master Specific Recommendations
[Role-specific agile process optimization and team facilitation strategies]
---
*Generated by scrum-master analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"scrum_master": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/scrum-master/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Agile process insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,200 +0,0 @@
---
name: subject-matter-expert
description: Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **Subject Matter Expert Analysis Generator**
### Purpose
**Specialized command for generating subject-matter-expert/analysis.md** that addresses guidance-specification.md discussion points from domain knowledge and technical expertise perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Domain Expertise Focus**: Deep technical knowledge, industry standards, and best practices
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Domain Knowledge**: Industry-specific expertise, regulatory requirements, and compliance
- **Technical Standards**: Best practices, design patterns, and architectural guidelines
- **Risk Assessment**: Technical debt, scalability concerns, and maintenance implications
- **Knowledge Transfer**: Documentation strategies, training requirements, and expertise sharing
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute subject-matter-expert analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: subject-matter-expert
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load subject-matter-expert planning template
- Command: Read(~/.claude/workflows/cli-templates/planning-roles/subject-matter-expert.md)
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from domain expertise and technical standards perspective
**Role Focus**: Domain knowledge, technical standards, risk assessment, knowledge transfer
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive domain expertise analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with subject matter expertise
- Provide actionable technical standards and best practices recommendations
- Include risk assessment and compliance considerations
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute subject-matter-expert analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing subject-matter-expert framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured subject-matter-expert analysis"
},
{
content: "Update workflow-session.json with subject-matter-expert completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Subject Matter Expert Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Domain Expertise & Technical Standards perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with subject matter expertise]
### Core Requirements (from framework)
[Domain-specific requirements and industry standards perspective]
### Technical Considerations (from framework)
[Deep technical analysis, architectural patterns, and best practices]
### User Experience Factors (from framework)
[Domain-specific usability standards and industry conventions]
### Implementation Challenges (from framework)
[Technical risks, scalability concerns, and maintenance implications]
### Success Metrics (from framework)
[Domain-specific KPIs, compliance metrics, and quality standards]
## Subject Matter Expert Specific Recommendations
[Role-specific technical expertise and industry best practices]
---
*Generated by subject-matter-expert analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"subject_matter_expert": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Domain expertise insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,10 +1,14 @@
---
name: synthesis
description: Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent
argument-hint: "[optional: --session session-id]"
argument-hint: "[-y|--yes] [optional: --session session-id]"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*), Edit(*), Glob(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-select all enhancements, skip clarification questions, use default answers.
## Overview
Six-phase workflow to eliminate ambiguities and enhance conceptual depth in role analyses:

View File

@@ -1,389 +0,0 @@
---
name: system-architect
description: Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🏗️ **System Architect Analysis Generator**
### Purpose
**Specialized command for generating system-architect/analysis.md** that addresses guidance-specification.md discussion points from system architecture perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Architecture Focus**: Technical architecture, scalability, and system design perspective
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Technical Architecture**: Scalable and maintainable system design
- **Technology Selection**: Stack evaluation and architectural decisions
- **Performance & Scalability**: Capacity planning and optimization strategies
- **Integration Patterns**: System communication and data flow design
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Macro-Architecture)**
- **System-Level Architecture**: Service boundaries, deployment topology, and system composition
- **Cross-Service Communication Patterns**: Choosing between microservices/monolithic, event-driven/request-response, sync/async patterns
- **Technology Stack Decisions**: Language, framework, database, and infrastructure choices
- **Non-Functional Requirements**: Scalability, performance, availability, disaster recovery, and monitoring strategies
- **Integration Planning**: How systems and services connect at the macro level (not specific API contracts)
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **API Contract Details**: Specific endpoint definitions, URL structures, HTTP methods → Defers to **API Designer**
- **Data Schemas**: Detailed data model design and entity relationships → Defers to **Data Architect**
- **UI/UX Design**: Interface design and user experience → Defers to **UX Expert** and **UI Designer**
#### **Handoff Points**
- **TO API Designer**: Provides architectural constraints (REST vs GraphQL, sync vs async) that define the API design space
- **TO Data Architect**: Provides system-level data flow requirements and integration patterns
- **FROM Data Architect**: Receives canonical data model to inform system integration design
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
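As a concrete reference, the detection above can be implemented roughly as follows (a minimal sketch, assuming at most one active session and a `$topic` variable holding the optional argument):
```bash
# Sketch: resolve the active session and decide framework mode
session_dir=$(find .workflow/active/ -maxdepth 1 -name "WFS-*" -type d 2>/dev/null | head -n 1)
if [ -n "$session_dir" ]; then
  session_id=$(basename "$session_dir")
  brainstorm_dir="$session_dir/.brainstorming"
  if [ -f "$brainstorm_dir/guidance-specification.md" ]; then
    framework_mode=true
  elif [ -n "$topic" ]; then
    framework_mode=false   # create analysis without framework
  else
    echo "ERROR: No framework found and no topic provided" >&2
    exit 1
  fi
fi
```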
### Phase 2: Analysis Mode Detection
```bash
# Check existing analysis
CHECK: brainstorm_dir/system-architect/analysis.md
IF EXISTS:
SHOW existing analysis summary
ASK: "Analysis exists. Do you want to:"
OPTIONS:
1. "Update with new insights" → Update existing
2. "Replace completely" → Generate new
3. "Cancel" → Exit without changes
ELSE:
CREATE new analysis
```
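In tool terms, this prompt maps onto a single AskUserQuestion call; a sketch in the option format these commands use elsewhere (note that AskUserQuestion would need to be added to this command's allowed-tools):
```javascript
// Sketch: surface the three choices when analysis.md already exists
const choice = AskUserQuestion({
  questions: [{
    question: "Analysis exists. Do you want to:",
    header: "Mode",
    multiSelect: false,
    options: [
      { label: "Update with new insights", description: "Update existing analysis.md" },
      { label: "Replace completely", description: "Regenerate analysis.md from scratch" },
      { label: "Cancel", description: "Exit without changes" }
    ]
  }]
})
```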
### Phase 3: Agent Task Generation
**Framework-Based Analysis** (when guidance-specification.md exists):
```bash
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Generate system architect analysis addressing topic framework
## Framework Integration Required
**MANDATORY**: Load and address guidance-specification.md discussion points
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
**Output Location**: {session.brainstorm_dir}/system-architect/analysis.md
## Analysis Requirements
1. **Load Topic Framework**: Read guidance-specification.md completely
2. **Address Each Discussion Point**: Respond to all 5 framework sections from system architecture perspective
3. **Include Framework Reference**: Start analysis.md with @../guidance-specification.md
4. **Technical Focus**: Emphasize scalability, architecture patterns, technology decisions
5. **Structured Response**: Use framework structure for analysis organization
## Framework Sections to Address
- Core Requirements (from architecture perspective)
- Technical Considerations (detailed architectural analysis)
- User Experience Factors (technical UX considerations)
- Implementation Challenges (architecture risks and solutions)
- Success Metrics (technical metrics and monitoring)
## Output Structure Required
```markdown
# System Architect Analysis: [Topic]
**Framework Reference**: @../guidance-specification.md
**Role Focus**: System Architecture and Technical Design
## Core Requirements Analysis
[Address framework requirements from architecture perspective]
## Technical Considerations
[Detailed architectural analysis]
## User Experience Factors
[Technical aspects of UX implementation]
## Implementation Challenges
[Architecture risks and mitigation strategies]
## Success Metrics
[Technical metrics and system monitoring]
## Architecture-Specific Recommendations
[Detailed technical recommendations]
```",
description="Generate system architect framework-based analysis")
```
### Phase 4: Update Mechanism
**Analysis Update Process**:
```bash
# For existing analysis updates
IF update_mode = "incremental":
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Update existing system architect analysis
## Current Analysis Context
**Existing Analysis**: @{session.brainstorm_dir}/system-architect/analysis.md
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
## Update Requirements
1. **Preserve Structure**: Maintain existing analysis structure
2. **Add New Insights**: Integrate new technical insights and recommendations
3. **Framework Alignment**: Ensure continued alignment with topic framework
4. **Technical Updates**: Add new architecture patterns, technology considerations
5. **Maintain References**: Keep @../guidance-specification.md reference
## Update Instructions
- Read existing analysis completely
- Identify areas for enhancement or new insights
- Add technical depth while preserving original structure
- Update recommendations with new architectural approaches
- Maintain framework discussion point addressing",
description="Update system architect analysis incrementally")
```
## Document Structure
### Output Files
```
.workflow/active/WFS-[topic]/.brainstorming/
├── guidance-specification.md # Input: Framework (if exists)
└── system-architect/
└── analysis.md # ★ OUTPUT: Framework-based analysis
```
### Analysis Structure
**Required Elements**:
- **Framework Reference**: @../guidance-specification.md (if framework exists)
- **Role Focus**: System Architecture and Technical Design perspective
- **5 Framework Sections**: Address each framework discussion point
- **Technical Recommendations**: Architecture-specific insights and solutions
- How should we design APIs and manage versioning?
**4. Performance and Scalability**
- Where are the current system performance bottlenecks?
- How should we handle traffic growth and scaling demands?
- What database scaling and optimization strategies are needed?
## ⚡ **Two-Step Execution Flow**
### ⚠️ Session Management - FIRST STEP
Session detection and selection:
```bash
# Check for existing sessions
existing_sessions=$(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
if [ "$(printf '%s\n' "$existing_sessions" | grep -c 'WFS-')" -gt 1 ]; then
  prompt_user_to_select_session
else
  use_existing_or_create_new
fi
```
### Step 1: Context Gathering Phase
**System Architect Perspective Questioning**
Before agent assignment, gather comprehensive system architecture context:
#### 📋 Role-Specific Questions
1. **Scale & Performance Requirements**
- Expected user load and traffic patterns?
- Performance requirements (latency, throughput)?
- Data volume and growth projections?
2. **Technical Constraints & Environment**
- Existing technology stack and constraints?
- Integration requirements with external systems?
- Infrastructure and deployment environment?
3. **Architecture Complexity & Patterns**
- Microservices vs monolithic considerations?
- Data consistency and transaction requirements?
- Event-driven vs request-response patterns?
4. **Non-Functional Requirements**
- High availability and disaster recovery needs?
- Security and compliance requirements?
- Monitoring and observability expectations?
#### Context Validation
- **Minimum Response**: Each answer must be ≥50 characters
- **Re-prompting**: Insufficient detail triggers follow-up questions
- **Context Storage**: Save responses to `.brainstorming/system-architect-context.md`
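A minimal sketch of the validation gate, assuming answers arrive as `{question, text}` records and a hypothetical `askFollowUp` helper for re-prompting:
```javascript
// Sketch: enforce the >=50 character minimum before saving context
const MIN_ANSWER_LENGTH = 50
const validated = answers.map(a => {
  if (a.text.trim().length >= MIN_ANSWER_LENGTH) return a
  return { ...a, text: askFollowUp(a.question) }  // hypothetical re-prompt helper
})
Write(".brainstorming/system-architect-context.md",
      validated.map(a => `## ${a.question}\n${a.text}`).join('\n\n'))
```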
### Step 2: Agent Assignment with Flow Control
**Dedicated Agent Execution**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute dedicated system-architect conceptual analysis for: {topic}
ASSIGNED_ROLE: system-architect
OUTPUT_LOCATION: .brainstorming/system-architect/
USER_CONTEXT: {validated_responses_from_context_gathering}
Flow Control Steps:
[
{
\"step\": \"load_role_template\",
\"action\": \"Load system-architect planning template\",
\"command\": \"bash($(cat ~/.claude/workflows/cli-templates/planning-roles/system-architect.md))\",
\"output_to\": \"role_template\"
}
]
Conceptual Analysis Requirements:
- Apply system-architect perspective to topic analysis
- Focus on architectural patterns, scalability, and integration points
- Use loaded role template framework for analysis structure
- Generate role-specific deliverables in designated output location
- Address all user context from questioning phase
Deliverables:
- analysis.md: Main system architecture analysis
- recommendations.md: Architecture recommendations
- deliverables/: Architecture-specific outputs as defined in role template
Embody system-architect role expertise for comprehensive conceptual planning."
```
### Progress Tracking
TodoWrite tracking for two-step process:
```json
[
{"content": "Gather system architect context through role-specific questioning", "status": "in_progress", "activeForm": "Gathering context"},
{"content": "Validate context responses and save to system-architect-context.md", "status": "pending", "activeForm": "Validating context"},
{"content": "Load system-architect planning template via flow control", "status": "pending", "activeForm": "Loading template"},
{"content": "Execute dedicated conceptual-planning-agent for system-architect role", "status": "pending", "activeForm": "Executing agent"}
]
```
## 📊 **Output Specification**
### Output Location
```
.workflow/active/WFS-{topic-slug}/.brainstorming/system-architect/
├── analysis.md # Primary architecture analysis
├── architecture-design.md # Detailed system design and diagrams
├── technology-stack.md # Technology stack recommendations and justifications
└── integration-plan.md # System integration and API strategies
```
### Document Templates
#### analysis.md Structure
```markdown
# System Architecture Analysis: {Topic}
*Generated: {timestamp}*
## Executive Summary
[Key architectural findings and recommendations overview]
## Current State Assessment
### Existing Architecture Overview
### Technical Stack Analysis
### Performance Bottlenecks
### Technical Debt Assessment
## Requirements Analysis
### Functional Requirements
### Non-Functional Requirements
- Performance: [Response time, throughput requirements]
- Scalability: [User growth, data volume expectations]
- Availability: [Uptime requirements]
- Security: [Security requirements]
## Proposed Architecture
### High-Level Architecture Design
### Component Breakdown
### Data Flow Diagrams
### Technology Stack Recommendations
## Implementation Strategy
### Migration Planning
### Risk Mitigation
### Performance Optimization
### Security Considerations
## Scalability and Maintenance
### Horizontal Scaling Strategy
### Monitoring and Observability
### Deployment Strategy
### Long-term Maintenance Plan
```
## 🔄 **Session Integration**
### Status Synchronization
Upon completion, update `workflow-session.json`:
```json
{
"phases": {
"BRAINSTORM": {
"system_architect": {
"status": "completed",
"completed_at": "timestamp",
"output_directory": ".workflow/active/WFS-{topic}/.brainstorming/system-architect/",
"key_insights": ["scalability_bottleneck", "architecture_pattern", "technology_recommendation"]
}
}
}
}
```
### Cross-Role Collaboration
System architect perspective provides:
- **Technical Constraints and Possibilities** → Product Manager
- **Architecture Requirements and Limitations** → UI Designer
- **Data Architecture Requirements** → Data Architect
- **Security Architecture Framework** → Security Expert
- **Technical Implementation Framework** → Feature Planner
## ✅ **Quality Assurance**
### Required Analysis Elements
- [ ] Clear architecture diagrams and component designs
- [ ] Detailed technology stack evaluation and recommendations
- [ ] Scalability and performance analysis with metrics
- [ ] System integration and API design specifications
- [ ] Comprehensive risk assessment and mitigation strategies
### Architecture Design Principles
- [ ] **Scalability**: System can handle growth in users and data
- [ ] **Maintainability**: Clear code structure, easy to modify and extend
- [ ] **Reliability**: Built-in fault tolerance and recovery mechanisms
- [ ] **Security**: Integrated security controls and protection measures
- [ ] **Performance**: Meets response time and throughput requirements
### Technical Decision Validation
- [ ] Technology choices have thorough justification and comparison analysis
- [ ] Architectural patterns align with business requirements and constraints
- [ ] Integration solutions consider compatibility and maintenance costs
- [ ] Deployment strategies are feasible with acceptable risk levels
- [ ] Monitoring and operations strategies are comprehensive and actionable
### Implementation Readiness
- [ ] **Technical Feasibility**: All proposed solutions are technically achievable
- [ ] **Resource Planning**: Resource requirements clearly defined and realistic
- [ ] **Risk Management**: Technical risks identified with mitigation plans
- [ ] **Performance Validation**: Architecture can meet performance requirements
- [ ] **Evolution Strategy**: Design allows for future growth and changes

View File

@@ -1,221 +0,0 @@
---
name: ui-designer
description: Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎨 **UI Designer Analysis Generator**
### Purpose
**Specialized command for generating ui-designer/analysis.md** that addresses guidance-specification.md discussion points from UI/UX design perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **UI/UX Focus**: User experience, interface design, and accessibility perspective
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Visual Design**: Color palettes, typography, spacing, and visual hierarchy implementation
- **High-Fidelity Mockups**: Polished, pixel-perfect interface designs
- **Design System Implementation**: Component libraries, design tokens, and style guides
- **Micro-Interactions & Animations**: Transition effects, loading states, and interactive feedback
- **Responsive Design**: Layout adaptations for different screen sizes and devices
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Concrete Visual Interface Implementation)**
- **Visual Design Language**: Colors, typography, iconography, spacing, and overall aesthetic
- **High-Fidelity Mockups**: Polished designs showing exactly how the interface will look
- **Design System Components**: Building and documenting reusable UI components (buttons, inputs, cards, etc.)
- **Design Tokens**: Defining variables for colors, spacing, typography that can be used in code
- **Micro-Interactions**: Hover states, transitions, animations, and interactive feedback details
- **Responsive Layouts**: Adapting designs for mobile, tablet, and desktop breakpoints
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **User Research & Personas**: User behavior analysis and needs assessment → Defers to **UX Expert**
- **Information Architecture**: Content structure and navigation strategy → Defers to **UX Expert**
- **Low-Fidelity Wireframes**: Structural layouts without visual design → Defers to **UX Expert**
#### **Handoff Points**
- **FROM UX Expert**: Receives wireframes, user flows, and information architecture as the foundation for visual design
- **TO Frontend Developers**: Provides design specifications, component libraries, and design tokens for implementation
- **WITH API Designer**: Coordinates on data presentation and form validation feedback (visual aspects only)
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute ui-designer analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: ui-designer
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/ui-designer/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load ui-designer planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/ui-designer.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from UI/UX perspective
**Role Focus**: User experience design, interface optimization, accessibility compliance
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive UI/UX analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with UI/UX design expertise
- Provide actionable design recommendations and interface solutions
- Include accessibility considerations and WCAG compliance planning
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute ui-designer analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing ui-designer framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured ui-designer analysis"
},
{
content: "Update workflow-session.json with ui-designer completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/ui-designer/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# UI Designer Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: UI/UX Design perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with UI/UX expertise]
### Core Requirements (from framework)
[UI/UX perspective on requirements]
### Technical Considerations (from framework)
[Interface and design system considerations]
### User Experience Factors (from framework)
[Detailed UX analysis and recommendations]
### Implementation Challenges (from framework)
[Design implementation and accessibility considerations]
### Success Metrics (from framework)
[UX metrics and usability success criteria]
## UI/UX Specific Recommendations
[Role-specific design recommendations and solutions]
---
*Generated by ui-designer analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"ui_designer": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/ui-designer/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: UI/UX insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,221 +0,0 @@
---
name: ux-expert
description: Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **UX Expert Analysis Generator**
### Purpose
**Specialized command for generating ux-expert/analysis.md** that addresses guidance-specification.md discussion points from user experience and interface design perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **UX Design Focus**: User interface, interaction patterns, and usability optimization
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **User Research**: User personas, behavioral analysis, and needs assessment
- **Information Architecture**: Content structure, navigation hierarchy, and mental models
- **User Journey Mapping**: User flows, task analysis, and interaction models
- **Usability Strategy**: Accessibility planning, cognitive load reduction, and user testing frameworks
- **Wireframing**: Low-fidelity layouts and structural prototypes (not visual design)
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Abstract User Experience & Research)**
- **User Research & Personas**: Understanding target users, their goals, pain points, and behaviors
- **Information Architecture**: Organizing content and defining navigation structures at a conceptual level
- **User Journey Mapping**: Defining user flows, task sequences, and interaction models
- **Wireframes & Low-Fidelity Prototypes**: Structural layouts showing information hierarchy (boxes and arrows, not colors/fonts)
- **Usability Testing Strategy**: Planning user testing, A/B tests, and validation methods
- **Accessibility Planning**: WCAG compliance strategy and inclusive design principles
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **Visual Design**: Colors, typography, spacing, visual style → Defers to **UI Designer**
- **High-Fidelity Mockups**: Polished, pixel-perfect designs → Defers to **UI Designer**
- **Component Implementation**: Design system components, CSS, animations → Defers to **UI Designer**
#### **Handoff Points**
- **TO UI Designer**: Provides wireframes, user flows, and information architecture that UI Designer will transform into high-fidelity visual designs
- **FROM User Research**: May receive external research data to inform UX decisions
- **TO Product Owner**: Provides user insights and validation results to inform feature prioritization
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute ux-expert analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: ux-expert
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/ux-expert/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load ux-expert planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/ux-expert.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from user experience and interface design perspective
**Role Focus**: User research, interaction patterns, usability optimization, information architecture
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive UX design analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with UX design expertise
- Provide actionable interface design and usability optimization strategies
- Include accessibility considerations and interaction pattern recommendations
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute ux-expert analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing ux-expert framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured ux-expert analysis"
},
{
content: "Update workflow-session.json with ux-expert completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/ux-expert/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# UX Expert Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: User Experience & Interface Design perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with UX design expertise]
### Core Requirements (from framework)
[User interface and interaction design requirements perspective]
### Technical Considerations (from framework)
[Design system implementation and technical feasibility considerations]
### User Experience Factors (from framework)
[Usability optimization, accessibility, and user-centered design analysis]
### Implementation Challenges (from framework)
[Design implementation challenges and progressive enhancement strategies]
### Success Metrics (from framework)
[UX metrics including usability testing, user satisfaction, and design KPIs]
## UX Expert Specific Recommendations
[Role-specific interface design patterns and usability optimization strategies]
---
*Generated by ux-expert analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"ux_expert": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/ux-expert/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: UX design insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,7 +1,7 @@
---
name: clean
description: Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution
argument-hint: "[--dry-run] [\"focus area\"]"
argument-hint: "[-y|--yes] [--dry-run] [\"focus area\"]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Glob(*), Bash(*), Write(*)
---
@@ -21,8 +21,22 @@ Intelligent cleanup command that explores the codebase to identify the developme
```bash
/workflow:clean # Full intelligent cleanup (explore → analyze → confirm → execute)
/workflow:clean --yes # Auto mode (use safe defaults, no confirmation)
/workflow:clean --dry-run # Explore and analyze only, no execution
/workflow:clean "auth module" # Focus cleanup on specific area
/workflow:clean -y "auth module" # Auto mode with focus area
```
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
- **Categories to Clean**: Auto-selects `["Sessions"]` only (safest - only workflow sessions)
- **Risk Level**: Auto-selects `"Low only"` (only low-risk items)
- All confirmations skipped, proceeds directly to execution
**Flag Parsing**:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const dryRun = $ARGUMENTS.includes('--dry-run')
```
## Execution Process
@@ -329,39 +343,57 @@ To execute cleanup: /workflow:clean
**Step 3.3: User Confirmation**
```javascript
AskUserQuestion({
questions: [
{
question: "Which categories to clean?",
header: "Categories",
multiSelect: true,
options: [
{
label: "Sessions",
description: `${manifest.summary.by_category.stale_sessions} stale workflow sessions`
},
{
label: "Documents",
description: `${manifest.summary.by_category.drifted_documents} drifted documents`
},
{
label: "Dead Code",
description: `${manifest.summary.by_category.dead_code} unused code files`
}
]
},
{
question: "Risk level to include?",
header: "Risk",
multiSelect: false,
options: [
{ label: "Low only", description: "Safest - only obviously stale items" },
{ label: "Low + Medium", description: "Recommended - includes likely unused items" },
{ label: "All", description: "Aggressive - includes high-risk items" }
]
}
]
})
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
let userSelection
if (autoYes) {
// Auto mode: Use safe defaults
console.log(`[--yes] Auto-selecting safe cleanup defaults:`)
console.log(` - Categories: Sessions only`)
console.log(` - Risk level: Low only`)
userSelection = {
categories: ["Sessions"],
risk: "Low only"
}
} else {
// Interactive mode: Ask user
userSelection = AskUserQuestion({
questions: [
{
question: "Which categories to clean?",
header: "Categories",
multiSelect: true,
options: [
{
label: "Sessions",
description: `${manifest.summary.by_category.stale_sessions} stale workflow sessions`
},
{
label: "Documents",
description: `${manifest.summary.by_category.drifted_documents} drifted documents`
},
{
label: "Dead Code",
description: `${manifest.summary.by_category.dead_code} unused code files`
}
]
},
{
question: "Risk level to include?",
header: "Risk",
multiSelect: false,
options: [
{ label: "Low only", description: "Safest - only obviously stale items" },
{ label: "Low + Medium", description: "Recommended - includes likely unused items" },
{ label: "All", description: "Aggressive - includes high-risk items" }
]
}
]
})
}
```
---

View File

@@ -1,10 +1,14 @@
---
name: debug-with-file
description: Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and Gemini-assisted correction
argument-hint: "\"bug description or error message\""
argument-hint: "[-y|--yes] \"bug description or error message\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm all decisions (hypotheses, fixes, iteration), use recommended settings.
# Workflow Debug-With-File Command (/workflow:debug-with-file)
## Overview
@@ -13,6 +17,8 @@ Enhanced evidence-based debugging with **documented exploration process**. Recor
**Core workflow**: Explore → Document → Log → Analyze → Correct Understanding → Fix → Verify
**Scope**: Adds temporary debug logging to observe program state; cleans up all instrumentation after resolution. Does NOT perform code injection or security testing, and does not modify program behavior.
**Key enhancements over /workflow:debug**:
- **understanding.md**: Timeline of exploration and learning
- **Gemini-assisted correction**: Validates and corrects hypotheses
@@ -40,7 +46,7 @@ Explore Mode:
├─ Locate error source in codebase
├─ Document initial understanding in understanding.md
├─ Generate testable hypotheses with Gemini validation
├─ Add NDJSON logging instrumentation
├─ Add NDJSON debug logging statements
└─ Output: Hypothesis list + await user reproduction
Analyze Mode:
@@ -212,9 +218,9 @@ Save Gemini output to `hypotheses.json`:
}
```
**Step 1.4: Add NDJSON Instrumentation**
**Step 1.4: Add NDJSON Debug Logging**
For each hypothesis, add logging (same as original debug command).
For each hypothesis, add temporary logging statements to observe program state at key execution points. Use NDJSON format for structured log parsing. These are read-only observations that do not modify program behavior.
**Step 1.5: Update understanding.md**
@@ -437,7 +443,7 @@ What we learned from this debugging session:
**Step 3.3: Cleanup**
Remove debug instrumentation (same as original command).
Remove all temporary debug logging statements added during investigation. Verify no instrumentation code remains in production code.
---
@@ -643,7 +649,7 @@ Why is config value None during update?
| Feature | /workflow:debug | /workflow:debug-with-file |
|---------|-----------------|---------------------------|
| NDJSON logging | ✅ | ✅ |
| NDJSON debug logging | ✅ | ✅ |
| Hypothesis generation | Manual | Gemini-assisted |
| Exploration documentation | ❌ | ✅ understanding.md |
| Understanding evolution | ❌ | ✅ Timeline + corrections |

View File

@@ -1,327 +0,0 @@
---
name: debug
description: Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved
argument-hint: "\"bug description or error message\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
# Workflow Debug Command (/workflow:debug)
## Overview
Evidence-based interactive debugging command. Systematically identifies root causes through hypothesis-driven logging and iterative verification.
**Core workflow**: Explore → Add Logging → Reproduce → Analyze Log → Fix → Verify
## Usage
```bash
/workflow:debug <BUG_DESCRIPTION>
# Arguments
<bug-description> Bug description, error message, or stack trace (required)
```
## Execution Process
```
Session Detection:
├─ Check if debug session exists for this bug
├─ EXISTS + debug.log has content → Analyze mode
└─ NOT_FOUND or empty log → Explore mode
Explore Mode:
├─ Locate error source in codebase
├─ Generate testable hypotheses (dynamic count)
├─ Add NDJSON logging instrumentation
└─ Output: Hypothesis list + await user reproduction
Analyze Mode:
├─ Parse debug.log, validate each hypothesis
└─ Decision:
├─ Confirmed → Fix root cause
├─ Inconclusive → Add more logging, iterate
└─ All rejected → Generate new hypotheses
Fix & Cleanup:
├─ Apply fix based on confirmed hypothesis
├─ User verifies
├─ Remove debug instrumentation
└─ If not fixed → Return to Analyze mode
```
## Implementation
### Session Setup & Mode Detection
```javascript
const fs = require('fs')
// Shift the epoch by +8h so the ISO string reflects UTC+8 wall-clock time (the trailing "Z" is nominal)
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `DBG-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.debug/${sessionId}`
const debugLogPath = `${sessionFolder}/debug.log`
// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const logHasContent = sessionExists && fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0
const mode = logHasContent ? 'analyze' : 'explore'
if (!sessionExists) {
bash(`mkdir -p ${sessionFolder}`)
}
```
---
### Explore Mode
**Step 1.1: Locate Error Source**
```javascript
// Extract keywords from bug description
const keywords = extractErrorKeywords(bug_description)
// e.g., ['Stack Length', '未找到', 'registered 0']
// Search codebase for error locations
for (const keyword of keywords) {
Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
}
// Identify affected files and functions
const affectedLocations = [...] // from search results
```
**Step 1.2: Generate Hypotheses (Dynamic)**
```javascript
// Hypothesis categories based on error pattern
const HYPOTHESIS_PATTERNS = {
"not found|missing|undefined|未找到": "data_mismatch",
"0|empty|zero|registered 0": "logic_error",
"timeout|connection|sync": "integration_issue",
"type|format|parse": "type_mismatch"
}
// Generate hypotheses based on actual issue (NOT fixed count)
function generateHypotheses(bugDescription, affectedLocations) {
const hypotheses = []
// Analyze bug and create targeted hypotheses
// Each hypothesis has:
// - id: H1, H2, ... (dynamic count)
// - description: What might be wrong
// - testable_condition: What to log
// - logging_point: Where to add instrumentation
return hypotheses // Could be 1, 3, 5, or more
}
const hypotheses = generateHypotheses(bug_description, affectedLocations)
```
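A minimal sketch of how `generateHypotheses` can map the patterns above to hypothesis records; the fields mirror the comments, while `affectedLocations` entries with `file`, `line`, and `symbol` are an assumption:
```javascript
function generateHypotheses(bugDescription, affectedLocations) {
  const hypotheses = []
  for (const [pattern, category] of Object.entries(HYPOTHESIS_PATTERNS)) {
    if (!new RegExp(pattern, 'i').test(bugDescription)) continue
    for (const loc of affectedLocations) {
      hypotheses.push({
        id: `H${hypotheses.length + 1}`,
        category,
        description: `Possible ${category} near ${loc.file}:${loc.line}`,
        testable_condition: `Capture inputs/outputs of ${loc.symbol}`,
        logging_point: `${loc.file}:${loc.line}`
      })
    }
  }
  return hypotheses  // count follows matched patterns x locations, not a fixed number
}
```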
**Step 1.3: Add NDJSON Instrumentation**
For each hypothesis, add logging at the relevant location:
**Python template**:
```python
# region debug [H{n}]
try:
import json, time
_dbg = {
"sid": "{sessionId}",
"hid": "H{n}",
"loc": "{file}:{line}",
"msg": "{testable_condition}",
"data": {
# Capture relevant values here
},
"ts": int(time.time() * 1000)
}
with open(r"{debugLogPath}", "a", encoding="utf-8") as _f:
_f.write(json.dumps(_dbg, ensure_ascii=False) + "\n")
except: pass
# endregion
```
**JavaScript/TypeScript template**:
```javascript
// region debug [H{n}]
try {
require('fs').appendFileSync("{debugLogPath}", JSON.stringify({
sid: "{sessionId}",
hid: "H{n}",
loc: "{file}:{line}",
msg: "{testable_condition}",
data: { /* Capture relevant values */ },
ts: Date.now()
}) + "\n");
} catch(_) {}
// endregion
```
**Output to user**:
```
## Hypotheses Generated
Based on error "{bug_description}", generated {n} hypotheses:
{hypotheses.map(h => `
### ${h.id}: ${h.description}
- Logging at: ${h.logging_point}
- Testing: ${h.testable_condition}
`).join('')}
**Debug log**: ${debugLogPath}
**Next**: Run reproduction steps, then come back for analysis.
```
---
### Analyze Mode
```javascript
// Parse NDJSON log
const entries = Read(debugLogPath).split('\n')
.filter(l => l.trim())
.map(l => JSON.parse(l))
// Group by hypothesis
const byHypothesis = groupBy(entries, 'hid')
// Validate each hypothesis
for (const [hid, logs] of Object.entries(byHypothesis)) {
const hypothesis = hypotheses.find(h => h.id === hid)
const latestLog = logs[logs.length - 1]
// Check if evidence confirms or rejects hypothesis
const verdict = evaluateEvidence(hypothesis, latestLog.data)
// Returns: 'confirmed' | 'rejected' | 'inconclusive'
}
```
**Output**:
```
## Evidence Analysis
Analyzed ${entries.length} log entries.
${results.map(r => `
### ${r.id}: ${r.description}
- **Status**: ${r.verdict}
- **Evidence**: ${JSON.stringify(r.evidence)}
- **Reason**: ${r.reason}
`).join('')}
${confirmedHypothesis ? `
## Root Cause Identified
**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}
Ready to fix.
` : `
## Need More Evidence
Add more logging or refine hypotheses.
`}
```
---
### Fix & Cleanup
```javascript
// Apply fix based on confirmed hypothesis
// ... Edit affected files
// After user verifies fix works:
// Remove debug instrumentation (search for region markers)
const instrumentedFiles = Grep({
pattern: "# region debug|// region debug",
output_mode: "files_with_matches"
})
for (const file of instrumentedFiles) {
// Remove content between region markers
removeDebugRegions(file)
}
console.log(`
## Debug Complete
- Root cause: ${confirmedHypothesis.description}
- Fix applied to: ${modifiedFiles.join(', ')}
- Debug instrumentation removed
`)
```
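`removeDebugRegions` is referenced above but not defined; a minimal sketch, assuming the `# region debug` / `# endregion` markers (and `//` variants) from the instrumentation templates:
```javascript
// Sketch: drop every line between matching region markers, inclusive
const fs = require('fs')
function removeDebugRegions(file) {
  const lines = fs.readFileSync(file, 'utf8').split('\n')
  const kept = []
  let inRegion = false
  for (const line of lines) {
    if (/(#|\/\/) region debug/.test(line)) { inRegion = true; continue }
    if (inRegion && /(#|\/\/) endregion/.test(line)) { inRegion = false; continue }
    if (!inRegion) kept.push(line)
  }
  fs.writeFileSync(file, kept.join('\n'))
}
```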
---
## Debug Log Format (NDJSON)
Each line is a JSON object:
```json
{"sid":"DBG-xxx-2025-12-18","hid":"H1","loc":"file.py:func:42","msg":"Check dict keys","data":{"keys":["a","b"],"target":"c","found":false},"ts":1734567890123}
```
| Field | Description |
|-------|-------------|
| `sid` | Session ID |
| `hid` | Hypothesis ID (H1, H2, ...) |
| `loc` | Code location |
| `msg` | What's being tested |
| `data` | Captured values |
| `ts` | Timestamp (ms) |
## Session Folder
```
.workflow/.debug/DBG-{slug}-{date}/
├── debug.log # NDJSON log (main artifact)
└── resolution.md # Summary after fix (optional)
```
## Iteration Flow
```
First Call (/workflow:debug "error"):
├─ No session exists → Explore mode
├─ Extract error keywords, search codebase
├─ Generate hypotheses, add logging
└─ Await user reproduction
After Reproduction (/workflow:debug "error"):
├─ Session exists + debug.log has content → Analyze mode
├─ Parse log, evaluate hypotheses
└─ Decision:
├─ Confirmed → Fix → User verify
│ ├─ Fixed → Cleanup → Done
│ └─ Not fixed → Add logging → Iterate
├─ Inconclusive → Add logging → Iterate
└─ All rejected → New hypotheses → Iterate
Output:
└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
```
## Post-Completion Expansion
After completion, ask the user whether to expand the session into follow-up issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`
---
## Error Handling
| Situation | Action |
|-----------|--------|
| Empty debug.log | Verify reproduction triggered the code path |
| All hypotheses rejected | Generate new hypotheses with broader scope |
| Fix doesn't work | Iterate with more granular logging |
| >5 iterations | Escalate to `/workflow:lite-fix` with evidence |

View File

@@ -1,7 +1,7 @@
---
name: execute
description: Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking
argument-hint: "[--resume-session=\"session-id\"]"
argument-hint: "[-y|--yes] [--resume-session=\"session-id\"]"
---
# Workflow Execute Command
@@ -11,6 +11,30 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
**Resume Mode**: When called with `--resume-session` flag, skips discovery phase and directly enters TodoWrite generation and agent execution for the specified session.
## Usage
```bash
# Interactive mode (with confirmations)
/workflow:execute
/workflow:execute --resume-session="WFS-auth"
# Auto mode (skip confirmations, use defaults)
/workflow:execute --yes
/workflow:execute -y
/workflow:execute -y --resume-session="WFS-auth"
```
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
- **Session Selection**: Automatically selects the first (most recent) active session
- **Completion Choice**: Automatically completes session (runs `/workflow:session:complete --yes`)
**Flag Parsing**:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
```
## Performance Optimization Strategy
**Lazy Loading**: Task JSONs read **on-demand** during execution, not upfront. TODO_LIST.md + IMPL_PLAN.md provide metadata for planning.
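A sketch of what the lazy-loading contract means in practice; the `.task/` path is illustrative, not the actual layout:
```javascript
// Sketch: the planner only reads TODO_LIST.md / IMPL_PLAN.md metadata;
// the full task JSON is parsed here, when the task is actually picked up
function loadTaskOnDemand(sessionDir, taskId) {
  return JSON.parse(Read(`${sessionDir}/.task/${taskId}.json`))
}
```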
@@ -122,24 +146,38 @@ List sessions with metadata and prompt user selection:
bash(
  for dir in .workflow/active/WFS-*/; do
    [ -d "$dir" ] || continue
    session=$(basename "$dir")
    project=$(jq -r '.project // "Unknown"' "${dir}workflow-session.json" 2>/dev/null || echo "Unknown")
    total=$(grep -c '^\- \[' "${dir}TODO_LIST.md" 2>/dev/null || echo 0)
    completed=$(grep -c '^\- \[x\]' "${dir}TODO_LIST.md" 2>/dev/null || echo 0)
    if [ "$total" -gt 0 ]; then progress=$((completed * 100 / total)); else progress=0; fi
    echo "$session | $project | $completed/$total tasks ($progress%)"
  done
)
```
Use AskUserQuestion to present formatted options (max 4 options shown):
**Parse --yes flag**:
```javascript
// If more than 4 sessions, show most recent 4 with "Other" option for manual input
const sessions = getActiveSessions() // sorted by last modified
const displaySessions = sessions.slice(0, 4)
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
```
AskUserQuestion({
questions: [{
question: "Multiple active sessions detected. Select one:",
header: "Session",
multiSelect: false,
options: displaySessions.map(s => ({
label: s.id,
description: `${s.project} | ${s.progress}`
}))
// Note: User can select "Other" to manually enter session ID
}]
})
**Conditional Selection**:
```javascript
if (autoYes) {
// Auto mode: Select first session (most recent)
const firstSession = sessions[0]
console.log(`[--yes] Auto-selecting session: ${firstSession.id}`)
selectedSessionId = firstSession.id
// Continue to Phase 2
} else {
// Interactive mode: Use AskUserQuestion to present formatted options (max 4 options shown)
// If more than 4 sessions, show most recent 4 with "Other" option for manual input
const sessions = getActiveSessions() // sorted by last modified
const displaySessions = sessions.slice(0, 4)
AskUserQuestion({
questions: [{
question: "Multiple active sessions detected. Select one:",
header: "Session",
multiSelect: false,
options: displaySessions.map(s => ({
label: s.id,
description: `${s.project} | ${s.progress}`
}))
// Note: User can select "Other" to manually enter session ID
}]
})
}
```
**Input Validation**:
@@ -252,23 +290,33 @@ while (TODO_LIST.md has pending tasks) {
6. **User Choice**: When all tasks are finished, ask the user to choose the next step:
```javascript
AskUserQuestion({
questions: [{
question: "All tasks completed. What would you like to do next?",
header: "Next Step",
multiSelect: false,
options: [
{
label: "Enter Review",
description: "Run specialized review (security/architecture/quality/action-items)"
},
{
label: "Complete Session",
description: "Archive session and update manifest"
}
]
}]
})
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
// Auto mode: Complete session automatically
console.log(`[--yes] Auto-selecting: Complete Session`)
SlashCommand("/workflow:session:complete --yes")
} else {
// Interactive mode: Ask user
AskUserQuestion({
questions: [{
question: "All tasks completed. What would you like to do next?",
header: "Next Step",
multiSelect: false,
options: [
{
label: "Enter Review",
description: "Run specialized review (security/architecture/quality/action-items)"
},
{
label: "Complete Session",
description: "Archive session and update manifest"
}
]
}]
})
}
```
**Based on user selection**:
@@ -429,7 +477,7 @@ Task(subagent_type="{meta.agent}",
- TODO List: {session.todo_list_path}
- Summaries: {session.summaries_dir}
**Execution**: Read task JSON → Parse flow_control → Execute implementation_approach → Update TODO_LIST.md → Generate summary",
**Execution**: Read task JSON → Execute pre_analysis → Check execution_config.method → (CLI: handoff to CLI tool | Agent: direct implementation) → Update TODO_LIST.md → Generate summary",
description="Implement: {task.id}")
```
@@ -438,9 +486,11 @@ Task(subagent_type="{meta.agent}",
- `[FLOW_CONTROL]`: Triggers flow_control.pre_analysis execution
**Why Path-Based**: Agent (code-developer.md) autonomously:
- Reads and parses task JSON (requirements, acceptance, flow_control)
- Loads tech stack guidelines based on detected language
- Executes pre_analysis steps and implementation_approach
- Reads and parses task JSON (requirements, acceptance, flow_control, execution_config)
- Executes pre_analysis steps (Phase 1: context gathering)
- Checks execution_config.method (Phase 2: determine mode)
- CLI mode: Builds handoff prompt and executes via ccw cli with resume strategy
- Agent mode: Directly implements using modification_points and logic_flow
- Generates structured summary with integration points
Embedding task content in prompt creates duplication and conflicts with agent's parsing logic.
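A minimal sketch of the Phase 2 dispatch described above; `buildHandoffPrompt`, `implementDirectly`, and the ccw invocation details are illustrative assumptions:
```javascript
// Sketch: choose execution mode after pre_analysis completes
function dispatchExecution(task) {
  const method = task.execution_config?.method ?? 'agent'
  if (method === 'cli') {
    const prompt = buildHandoffPrompt(task)  // hypothetical helper
    return bash(`ccw cli "${prompt}"`)       // resume-strategy flags omitted; see execution_config
  }
  return implementDirectly(task.modification_points, task.logic_flow)  // agent mode
}
```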

View File

@@ -1,7 +1,7 @@
---
name: lite-execute
description: Execute tasks based on in-memory plan, prompt description, or file content
argument-hint: "[--in-memory] [\"task description\"|file-path]"
argument-hint: "[-y|--yes] [--in-memory] [\"task description\"|file-path]"
allowed-tools: TodoWrite(*), Task(*), Bash(*)
---
@@ -62,31 +62,49 @@ Flexible task execution command supporting three input modes: in-memory plan (fr
**User Interaction**:
```javascript
AskUserQuestion({
questions: [
{
question: "Select execution method:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent", description: "@code-developer agent" },
{ label: "Codex", description: "codex CLI tool" },
{ label: "Auto", description: "Auto-select based on complexity" }
]
},
{
question: "Enable code review after execution?",
header: "Code Review",
multiSelect: false,
options: [
{ label: "Skip", description: "No review" },
{ label: "Gemini Review", description: "Gemini CLI tool" },
{ label: "Codex Review", description: "Git-aware review (prompt OR --uncommitted)" },
{ label: "Agent Review", description: "Current agent review" }
]
}
]
})
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
let userSelection
if (autoYes) {
// Auto mode: Use defaults
console.log(`[--yes] Auto-confirming execution:`)
console.log(` - Execution method: Auto`)
console.log(` - Code review: Skip`)
userSelection = {
execution_method: "Auto",
code_review_tool: "Skip"
}
} else {
// Interactive mode: Ask user
userSelection = AskUserQuestion({
questions: [
{
question: "Select execution method:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent", description: "@code-developer agent" },
{ label: "Codex", description: "codex CLI tool" },
{ label: "Auto", description: "Auto-select based on complexity" }
]
},
{
question: "Enable code review after execution?",
header: "Code Review",
multiSelect: false,
options: [
{ label: "Skip", description: "No review" },
{ label: "Gemini Review", description: "Gemini CLI tool" },
{ label: "Codex Review", description: "Git-aware review (prompt OR --uncommitted)" },
{ label: "Agent Review", description: "Current agent review" }
]
}
]
})
}
```
### Mode 3: File Content
@@ -390,6 +408,8 @@ ${t.verification?.success_metrics?.length > 0 ? `\n**Success metrics**: ${t.veri
if (executionContext?.session?.artifacts?.plan) {
context.push(`### Artifacts\nPlan: ${executionContext.session.artifacts.plan}`)
}
// Project guidelines (user-defined constraints from /workflow:session:solidify)
context.push(`### Project Guidelines\n@.workflow/project-guidelines.json`)
if (context.length > 0) sections.push(`## Context\n${context.join('\n\n')}`)
sections.push(`Complete each task according to its "Done when" checklist.`)

View File

@@ -1,7 +1,7 @@
---
name: lite-fix
description: Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents
argument-hint: "[--hotfix] \"bug description or issue reference\""
argument-hint: "[-y|--yes] [--hotfix] \"bug description or issue reference\""
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*)
---
@@ -25,10 +25,32 @@ Intelligent lightweight bug fixing command with dynamic workflow adaptation base
/workflow:lite-fix [FLAGS] <BUG_DESCRIPTION>
# Flags
-y, --yes Skip all confirmations (auto mode)
--hotfix, -h Production hotfix mode (minimal diagnosis, fast fix)
# Arguments
<bug-description> Bug description, error message, or path to .md file (required)
# Examples
/workflow:lite-fix "用户登录失败" # Interactive mode
/workflow:lite-fix --yes "用户登录失败" # Auto mode (no confirmations)
/workflow:lite-fix -y --hotfix "生产环境数据库连接失败" # Auto + hotfix mode
```
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
- **Clarification Questions**: Skipped (no clarification phase)
- **Fix Plan Confirmation**: Auto-selected "Allow"
- **Execution Method**: Auto-selected "Auto"
- **Code Review**: Auto-selected "Skip"
- **Severity**: Uses auto-detected severity (no manual override)
- **Hotfix Mode**: Respects --hotfix flag if present, otherwise normal mode
**Flag Parsing**:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const hotfixMode = $ARGUMENTS.includes('--hotfix') || $ARGUMENTS.includes('-h')
```
## Execution Process
@@ -332,9 +354,17 @@ function deduplicateClarifications(clarifications) {
const uniqueClarifications = deduplicateClarifications(allClarifications)
// Multi-round clarification: batch questions (max 4 per round)
// ⚠️ MUST execute ALL rounds until uniqueClarifications exhausted
if (uniqueClarifications.length > 0) {
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
// Auto mode: Skip clarification phase
console.log(`[--yes] Skipping ${uniqueClarifications.length} clarification questions`)
console.log(`Proceeding to fix planning with diagnosis results...`)
// Continue to Phase 3
} else if (uniqueClarifications.length > 0) {
// Interactive mode: Multi-round clarification
// ⚠️ MUST execute ALL rounds until uniqueClarifications exhausted
const BATCH_SIZE = 4
const totalRounds = Math.ceil(uniqueClarifications.length / BATCH_SIZE)
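// A sketch of the batching itself (not shown in this hunk), assuming an
// askBatch helper that wraps AskUserQuestion with at most BATCH_SIZE questions:
const clarificationAnswers = {}
for (let round = 0; round < totalRounds; round++) {
  const batch = uniqueClarifications.slice(round * BATCH_SIZE, (round + 1) * BATCH_SIZE)
  Object.assign(clarificationAnswers, askBatch(batch))  // hypothetical helper
}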
@@ -600,40 +630,60 @@ ${fixPlan.tasks.map((t, i) => `${i+1}. ${t.title} (${t.scope})`).join('\n')}
**Step 4.2: Collect Confirmation**
```javascript
AskUserQuestion({
questions: [
{
question: `Confirm fix plan? (${fixPlan.tasks.length} tasks, ${fixPlan.severity} severity)`,
header: "Confirm",
multiSelect: true,
options: [
{ label: "Allow", description: "Proceed as-is" },
{ label: "Modify", description: "Adjust before execution" },
{ label: "Cancel", description: "Abort workflow" }
]
},
{
question: "Execution method:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent", description: "@code-developer agent" },
{ label: "Codex", description: "codex CLI tool" },
{ label: "Auto", description: `Auto: ${fixPlan.severity === 'Low' ? 'Agent' : 'Codex'}` }
]
},
{
question: "Code review after fix?",
header: "Review",
multiSelect: false,
options: [
{ label: "Gemini Review", description: "Gemini CLI" },
{ label: "Agent Review", description: "@code-reviewer" },
{ label: "Skip", description: "No review" }
]
}
]
})
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
let userSelection
if (autoYes) {
// Auto mode: Use defaults
console.log(`[--yes] Auto-confirming fix plan:`)
console.log(` - Confirmation: Allow`)
console.log(` - Execution: Auto`)
console.log(` - Review: Skip`)
userSelection = {
confirmation: "Allow",
execution_method: "Auto",
code_review_tool: "Skip"
}
} else {
// Interactive mode: Ask user
userSelection = AskUserQuestion({
questions: [
{
question: `Confirm fix plan? (${fixPlan.tasks.length} tasks, ${fixPlan.severity} severity)`,
header: "Confirm",
multiSelect: false,
options: [
{ label: "Allow", description: "Proceed as-is" },
{ label: "Modify", description: "Adjust before execution" },
{ label: "Cancel", description: "Abort workflow" }
]
},
{
question: "Execution method:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent", description: "@code-developer agent" },
{ label: "Codex", description: "codex CLI tool" },
{ label: "Auto", description: `Auto: ${fixPlan.severity === 'Low' ? 'Agent' : 'Codex'}` }
]
},
{
question: "Code review after fix?",
header: "Review",
multiSelect: false,
options: [
{ label: "Gemini Review", description: "Gemini CLI" },
{ label: "Agent Review", description: "@code-reviewer" },
{ label: "Skip", description: "No review" }
]
}
]
})
}
```
---

View File

@@ -1,10 +1,14 @@
---
name: workflow:lite-lite-lite
description: Ultra-lightweight multi-tool analysis and direct execution. No artifacts for simple tasks; auto-creates planning docs in .workflow/.scratchpad/ for complex tasks. Auto tool selection based on task analysis, user-driven iteration via AskUser.
argument-hint: "<task description>"
argument-hint: "[-y|--yes] <task description>"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), Write(*), mcp__ace-tool__search_context(*), mcp__ccw-tools__write_file(*)
---
## Auto Mode
When `--yes` or `-y`: Skip clarification questions, auto-select tools, execute directly with recommended settings.
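A minimal flag-parsing sketch, assuming the same `$ARGUMENTS` convention as the sibling lite-* commands:
```javascript
// Sketch: strip auto-mode flags before treating the remainder as the task description
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const taskDescription = $ARGUMENTS.replace(/(^|\s)(--yes|-y)(?=\s|$)/g, ' ').trim()
// autoYes → skip clarification, auto-select tools, execute with recommended settings
```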
# Ultra-Lite Multi-Tool Workflow
## Quick Start

View File

@@ -1,7 +1,7 @@
---
name: lite-plan
description: Lightweight interactive planning workflow with in-memory planning, code exploration, and handoff to lite-execute for execution after user confirmation
argument-hint: "[-e|--explore] \"task description\"|file.md"
argument-hint: "[-y|--yes] [-e|--explore] \"task description\"|file.md"
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*)
---
@@ -25,10 +25,30 @@ Intelligent lightweight planning command with dynamic workflow adaptation based
/workflow:lite-plan [FLAGS] <TASK_DESCRIPTION>
# Flags
-y, --yes Skip all confirmations (auto mode)
-e, --explore Force code exploration phase (overrides auto-detection)
# Arguments
<task-description> Task description or path to .md file (required)
# Examples
/workflow:lite-plan "实现JWT认证" # Interactive mode
/workflow:lite-plan --yes "实现JWT认证" # Auto mode (no confirmations)
/workflow:lite-plan -y -e "优化数据库查询性能" # Auto mode + force exploration
```
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
- **Clarification Questions**: Skipped (no clarification phase)
- **Plan Confirmation**: Auto-selected "Allow"
- **Execution Method**: Auto-selected "Auto"
- **Code Review**: Auto-selected "Skip"
**Flag Parsing**:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const forceExplore = $ARGUMENTS.includes('--explore') || $ARGUMENTS.includes('-e')
```
## Execution Process
@@ -52,8 +72,8 @@ Phase 2: Clarification (optional, multi-round)
Phase 3: Planning (NO CODE EXECUTION - planning only)
└─ Decision (based on Phase 1 complexity):
├─ Low → Load schema: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json → Direct Claude planning (following schema) → plan.json → MUST proceed to Phase 4
└─ Medium/High → cli-lite-planning-agent → plan.json → MUST proceed to Phase 4
├─ Low → Load schema: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json → Direct Claude planning (following schema) → plan.json
└─ Medium/High → cli-lite-planning-agent → plan.json (agent internally executes quality check)
Phase 4: Confirmation & Selection
├─ Display plan summary (tasks, complexity, estimated time)
@@ -323,8 +343,16 @@ explorations.forEach(exp => {
// - Produce dedupedClarifications with unique intents only
const dedupedClarifications = intelligentMerge(allClarifications)
// Multi-round clarification: batch questions (max 4 per round)
if (dedupedClarifications.length > 0) {
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
// Auto mode: Skip clarification phase
console.log(`[--yes] Skipping ${dedupedClarifications.length} clarification questions`)
console.log(`Proceeding to planning with exploration results...`)
// Continue to Phase 3
} else if (dedupedClarifications.length > 0) {
// Interactive mode: Multi-round clarification
const BATCH_SIZE = 4
const totalRounds = Math.ceil(dedupedClarifications.length / BATCH_SIZE)
@@ -497,42 +525,62 @@ ${plan.tasks.map((t, i) => `${i+1}. ${t.title} (${t.file})`).join('\n')}
**Step 4.2: Collect Confirmation**
```javascript
// Note: Execution "Other" option allows specifying CLI tools from ~/.claude/cli-tools.json
AskUserQuestion({
questions: [
{
question: `Confirm plan? (${plan.tasks.length} tasks, ${plan.complexity})`,
header: "Confirm",
multiSelect: true,
options: [
{ label: "Allow", description: "Proceed as-is" },
{ label: "Modify", description: "Adjust before execution" },
{ label: "Cancel", description: "Abort workflow" }
]
},
{
question: "Execution method:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent", description: "@code-developer agent" },
{ label: "Codex", description: "codex CLI tool" },
{ label: "Auto", description: `Auto: ${plan.complexity === 'Low' ? 'Agent' : 'Codex'}` }
]
},
{
question: "Code review after execution?",
header: "Review",
multiSelect: false,
options: [
{ label: "Gemini Review", description: "Gemini CLI review" },
{ label: "Codex Review", description: "Git-aware review (prompt OR --uncommitted)" },
{ label: "Agent Review", description: "@code-reviewer agent" },
{ label: "Skip", description: "No review" }
]
}
]
})
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
let userSelection
if (autoYes) {
// Auto mode: Use defaults
console.log(`[--yes] Auto-confirming plan:`)
console.log(` - Confirmation: Allow`)
console.log(` - Execution: Auto`)
console.log(` - Review: Skip`)
userSelection = {
confirmation: "Allow",
execution_method: "Auto",
code_review_tool: "Skip"
}
} else {
// Interactive mode: Ask user
// Note: Execution "Other" option allows specifying CLI tools from ~/.claude/cli-tools.json
userSelection = AskUserQuestion({
questions: [
{
question: `Confirm plan? (${plan.tasks.length} tasks, ${plan.complexity})`,
header: "Confirm",
multiSelect: false,
options: [
{ label: "Allow", description: "Proceed as-is" },
{ label: "Modify", description: "Adjust before execution" },
{ label: "Cancel", description: "Abort workflow" }
]
},
{
question: "Execution method:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent", description: "@code-developer agent" },
{ label: "Codex", description: "codex CLI tool" },
{ label: "Auto", description: `Auto: ${plan.complexity === 'Low' ? 'Agent' : 'Codex'}` }
]
},
{
question: "Code review after execution?",
header: "Review",
multiSelect: false,
options: [
{ label: "Gemini Review", description: "Gemini CLI review" },
{ label: "Codex Review", description: "Git-aware review (prompt OR --uncommitted)" },
{ label: "Agent Review", description: "@code-reviewer agent" },
{ label: "Skip", description: "No review" }
]
}
]
})
}
```
---

View File

@@ -1,10 +1,14 @@
---
name: workflow:multi-cli-plan
description: Multi-CLI collaborative planning workflow with ACE context gathering and iterative cross-verification. Uses cli-discuss-agent for Gemini+Codex+Claude analysis to converge on optimal execution plan.
argument-hint: "<task description> [--max-rounds=3] [--tools=gemini,codex] [--mode=parallel|serial]"
argument-hint: "[-y|--yes] <task description> [--max-rounds=3] [--tools=gemini,codex] [--mode=parallel|serial]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), Write(*), mcp__ace-tool__search_context(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-approve plan, use recommended solution and execution method (Agent, Skip review).
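A sketch of how auto mode might resolve the confirmation step, assuming defaults analogous to the lite-* commands (field names are illustrative):
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
  // Hypothetical defaults: accept the converged plan, Agent execution, no review
  userSelection = {
    plan_choice: "Recommended",
    execution_method: "Agent",
    code_review_tool: "Skip"
  }
}
```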
# Multi-CLI Collaborative Planning Command
## Quick Start

View File

@@ -0,0 +1,362 @@
---
name: plan-verify
description: Perform READ-ONLY verification analysis between IMPL_PLAN.md, task JSONs, and brainstorming artifacts. Generates structured report with quality gate recommendation. Does NOT modify any files.
argument-hint: "[optional: --session session-id]"
allowed-tools: Read(*), Write(*), Glob(*), Bash(*)
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Goal
Generate a comprehensive verification report that identifies inconsistencies, duplications, ambiguities, and underspecified items between action planning artifacts (`IMPL_PLAN.md`, `task.json`) and brainstorming artifacts (`role analysis documents`). This command MUST run only after `/workflow:plan` has successfully produced complete `IMPL_PLAN.md` and task JSON files.
**Output**: A structured Markdown report saved to `.workflow/active/WFS-{session}/.process/PLAN_VERIFICATION.md` containing:
- Executive summary with quality gate recommendation
- Detailed findings by severity (CRITICAL/HIGH/MEDIUM/LOW)
- Requirements coverage analysis
- Dependency integrity check
- Synthesis alignment validation
- Actionable remediation recommendations
## Operating Constraints
**STRICTLY READ-ONLY FOR SOURCE ARTIFACTS**:
- **MUST NOT** modify `IMPL_PLAN.md`, any `task.json` files, or brainstorming artifacts
- **MUST NOT** create or delete task files
- **MUST ONLY** write the verification report to `.process/PLAN_VERIFICATION.md`
**Synthesis Authority**: The `role analysis documents` are **authoritative** for requirements and design decisions. Any conflicts between IMPL_PLAN/tasks and synthesis are automatically CRITICAL and require adjustment of the plan/tasks—not reinterpretation of requirements.
**Quality Gate Authority**: The verification report provides a binding recommendation (BLOCK_EXECUTION / PROCEED_WITH_FIXES / PROCEED_WITH_CAUTION / PROCEED) based on objective severity criteria. User MUST review critical/high issues before proceeding with implementation.
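A sketch of the severity-to-gate mapping implied by these criteria and by the report's Next Steps logic (exact thresholds are an assumption):
```javascript
// Hedged sketch: map finding counts to the binding quality gate recommendation
function qualityGate({ critical, high, medium }) {
  if (critical > 0) return 'BLOCK_EXECUTION'      // any critical finding blocks execution
  if (high > 0) return 'PROCEED_WITH_FIXES'       // high findings: fix before execution
  if (medium > 0) return 'PROCEED_WITH_CAUTION'   // medium findings: address during/after
  return 'PROCEED'
}
```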
## Execution Steps
### 1. Initialize Analysis Context
```bash
# Detect active workflow session
IF --session parameter provided:
session_id = provided session
ELSE:
# Auto-detect active session
active_sessions = bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
IF active_sessions is empty:
ERROR: "No active workflow session found. Use --session <session-id>"
EXIT
ELSE IF active_sessions has multiple entries:
# Use most recently modified session
session_id = bash(ls -td .workflow/active/WFS-*/ 2>/dev/null | head -1 | xargs basename)
ELSE:
session_id = basename(active_sessions[0])
# Derive absolute paths
session_dir = .workflow/active/WFS-{session}
brainstorm_dir = session_dir/.brainstorming
task_dir = session_dir/.task
process_dir = session_dir/.process
session_file = session_dir/workflow-session.json
# Create .process directory if not exists (report output location)
IF NOT EXISTS(process_dir):
bash(mkdir -p "{process_dir}")
# Validate required artifacts
# Note: "role analysis documents" refers to [role]/analysis.md files (e.g., product-manager/analysis.md)
SYNTHESIS_DIR = brainstorm_dir # Contains role analysis files: */analysis.md
IMPL_PLAN = session_dir/IMPL_PLAN.md
TASK_FILES = Glob(task_dir/*.json)
# Abort if missing - in order of dependency
SESSION_FILE_EXISTS = EXISTS(session_file)
IF NOT SESSION_FILE_EXISTS:
WARNING: "workflow-session.json not found. User intent alignment verification will be skipped."
# Continue execution - this is optional context, not blocking
SYNTHESIS_FILES = Glob(brainstorm_dir/*/analysis.md)
IF SYNTHESIS_FILES.count == 0:
ERROR: "No role analysis documents found in .brainstorming/*/analysis.md. Run /workflow:brainstorm:synthesis first"
EXIT
IF NOT EXISTS(IMPL_PLAN):
ERROR: "IMPL_PLAN.md not found. Run /workflow:plan first"
EXIT
IF TASK_FILES.count == 0:
ERROR: "No task JSON files found. Run /workflow:plan first"
EXIT
```
### 2. Load Artifacts (Progressive Disclosure)
Load only minimal necessary context from each artifact:
**From workflow-session.json** (OPTIONAL - Primary Reference for User Intent):
- **ONLY IF EXISTS**: Load user intent context
- Original user prompt/intent (project or description field)
- User's stated goals and objectives
- User's scope definition
- **IF MISSING**: Set user_intent_analysis = "SKIPPED: workflow-session.json not found"
**From role analysis documents** (AUTHORITATIVE SOURCE):
- Functional Requirements (IDs, descriptions, acceptance criteria)
- Non-Functional Requirements (IDs, targets)
- Business Requirements (IDs, success metrics)
- Key Architecture Decisions
- Risk factors and mitigation strategies
- Implementation Roadmap (high-level phases)
**From IMPL_PLAN.md**:
- Summary and objectives
- Context Analysis
- Implementation Strategy
- Task Breakdown Summary
- Success Criteria
- Brainstorming Artifacts References (if present)
**From task.json files**:
- Task IDs
- Titles and descriptions
- Status
- Dependencies (depends_on, blocks)
- Context (requirements, focus_paths, acceptance, artifacts)
- Flow control (pre_analysis, implementation_approach)
- Meta (complexity, priority)
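As a sketch, a loaded task file can be reduced to this minimal slice before analysis (field names mirror the list above; anything else is dropped):
```javascript
// Progressive disclosure: keep only the fields verification needs
function minimalTask(taskJson) {
  const { id, title, description, status, context = {}, flow_control = {}, meta = {} } = taskJson
  return {
    id, title, description, status,
    depends_on: context.depends_on || [],
    blocks: context.blocks || [],
    requirements: context.requirements || [],
    focus_paths: context.focus_paths || [],
    acceptance: context.acceptance || [],
    pre_analysis: flow_control.pre_analysis,
    implementation_approach: flow_control.implementation_approach,
    complexity: meta.complexity,
    priority: meta.priority
  }
}
```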
### 3. Build Semantic Models
Create internal representations (do not include raw artifacts in output):
**Requirements inventory**:
- Each functional/non-functional/business requirement with stable ID
- Requirement text, acceptance criteria, priority
**Architecture decisions inventory**:
- ADRs from synthesis
- Technology choices
- Data model references
**Task coverage mapping**:
- Map each task to one or more requirements (by ID reference or keyword inference)
- Map each requirement to covering tasks
**Dependency graph**:
- Task-to-task dependencies (depends_on, blocks)
- Requirement-level dependencies (from synthesis)
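A sketch of the coverage mapping, assuming tasks reference requirement IDs in `context.requirements` and falling back to keyword inference (illustrative only):
```javascript
// Map each requirement to the tasks that cover it (by ID reference or keyword inference)
function buildCoverageMap(requirements, tasks) {
  const coverage = new Map(requirements.map(r => [r.id, []]))
  for (const task of tasks) {
    for (const req of requirements) {
      const byId = (task.requirements || []).includes(req.id)
      // Crude keyword fallback for illustration; real inference would be semantic
      const byKeyword = (task.description || '').toLowerCase().includes(req.text.toLowerCase())
      if (byId || byKeyword) coverage.get(req.id).push(task.id)
    }
  }
  return coverage // requirements mapped to [] are coverage-gap findings
}
```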
### 4. Detection Passes (Agent-Driven Multi-Dimensional Analysis)
**Execution Strategy**:
- Single `cli-explore-agent` invocation
- Agent executes multiple CLI analyses internally (different dimensions: A-H)
- Token Budget: 50 findings maximum (aggregate remainder in overflow summary)
- Priority Allocation: CRITICAL (unlimited) → HIGH (15) → MEDIUM (20) → LOW (15)
- Early Exit: If CRITICAL findings > 0 in User Intent/Requirements Coverage, skip LOW/MEDIUM checks
**Execution Order** (Agent orchestrates internally):
1. **Tier 1 (CRITICAL Path)**: A, B, C - User intent, coverage, consistency (full analysis)
2. **Tier 2 (HIGH Priority)**: D, E - Dependencies, synthesis alignment (limit 15 findings)
3. **Tier 3 (MEDIUM Priority)**: F - Specification quality (limit 20 findings)
4. **Tier 4 (LOW Priority)**: G, H - Duplication, feasibility (limit 15 findings)
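The tier orchestration the agent performs internally might look like this sketch (tier configs actually come from plan-verify-agent-schema.json; `runCliAnalysis` is a hypothetical wrapper around `ccw cli`):
```javascript
// Sketch: tiered analysis with per-tier finding budgets and CRITICAL early exit
const TIERS = [
  { id: 1, dimensions: ['A', 'B', 'C'], limit: Infinity },  // CRITICAL path: full analysis
  { id: 2, dimensions: ['D', 'E'], limit: 15 },
  { id: 3, dimensions: ['F'], limit: 20 },
  { id: 4, dimensions: ['G', 'H'], limit: 15 }
]
const findings = []
let skipLowPriority = false
for (const tier of TIERS) {
  if (skipLowPriority && tier.id >= 3) continue             // early exit: skip MEDIUM/LOW tiers
  const tierFindings = runCliAnalysis(tier)                 // hypothetical: one ccw cli call per tier
  findings.push(...tierFindings.slice(0, tier.limit))       // enforce finding budget
  if (tier.id === 1 && findings.some(f => f.severity === 'CRITICAL')) skipLowPriority = true
}
```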
---
#### Phase 4.1: Launch Unified Verification Agent
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description="Multi-dimensional plan verification",
prompt=`
## Plan Verification Task
### MANDATORY FIRST STEPS
1. Read: ~/.claude/workflows/cli-templates/schemas/plan-verify-agent-schema.json (dimensions & rules)
2. Read: ~/.claude/workflows/cli-templates/schemas/verify-json-schema.json (output schema)
3. Read: ${session_file} (user intent)
4. Read: ${IMPL_PLAN} (implementation plan)
5. Glob: ${task_dir}/*.json (task files)
6. Glob: ${SYNTHESIS_DIR}/*/analysis.md (role analyses)
### Execution Flow
**Load schema → Execute tiered CLI analysis → Aggregate findings → Write JSON**
FOR each tier in [1, 2, 3, 4]:
- Load tier config from plan-verify-agent-schema.json
- Execute: ccw cli -p "PURPOSE: Verify dimensions {tier.dimensions}
TASK: {tier.checks from schema}
CONTEXT: @${session_dir}/**/*
EXPECTED: Findings JSON with dimension, severity, location, summary, recommendation
CONSTRAINTS: Limit {tier.limit} findings
" --tool gemini --mode analysis --rule {tier.rule}
- Parse findings, check early exit condition
- IF tier == 1 AND critical_count > 0: skip tier 3-4
### Output
Write: ${process_dir}/verification-findings.json (follow verify-json-schema.json)
Return: Quality gate decision + 2-3 sentence summary
`
)
```
---
#### Phase 4.2: Load and Organize Findings
```javascript
// Load findings (single parse for all subsequent use)
const data = JSON.parse(Read(`${process_dir}/verification-findings.json`))
const { session_id, timestamp, verification_tiers_completed, findings, summary } = data
const { critical_count, high_count, medium_count, low_count, total_findings, coverage_percentage, recommendation } = summary
// Group by severity and dimension
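// Note: Object.groupBy requires Node 21+ (ES2024); use a reduce()-based fallback on older runtimes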
const bySeverity = Object.groupBy(findings, f => f.severity)
const byDimension = Object.groupBy(findings, f => f.dimension)
// Dimension metadata (from schema)
const DIMS = {
A: "User Intent Alignment", B: "Requirements Coverage", C: "Consistency Validation",
D: "Dependency Integrity", E: "Synthesis Alignment", F: "Task Specification Quality",
G: "Duplication Detection", H: "Feasibility Assessment"
}
```
### 5. Generate Report
```javascript
// Helper: render dimension section
const renderDimension = (dim) => {
const items = byDimension[dim] || []
return items.length > 0
? items.map(f => `### ${f.id}: ${f.summary}\n- **Severity**: ${f.severity}\n- **Location**: ${f.location.join(', ')}\n- **Recommendation**: ${f.recommendation}`).join('\n\n')
: `> ✅ No ${DIMS[dim]} issues detected.`
}
// Helper: render severity section
const renderSeverity = (severity, impact) => {
const items = bySeverity[severity] || []
return items.length > 0
? items.map(f => `#### ${f.id}: ${f.summary}\n- **Dimension**: ${f.dimension_name}\n- **Location**: ${f.location.join(', ')}\n- **Impact**: ${impact}\n- **Recommendation**: ${f.recommendation}`).join('\n\n')
: `> ✅ No ${severity.toLowerCase()}-severity issues detected.`
}
// Build Markdown report
const fullReport = `
# Plan Verification Report
**Session**: WFS-${session_id} | **Generated**: ${timestamp}
**Tiers Completed**: ${verification_tiers_completed.join(', ')}
---
## Executive Summary
| Metric | Value | Status |
|--------|-------|--------|
| Risk Level | ${critical_count > 0 ? 'CRITICAL' : high_count > 0 ? 'HIGH' : medium_count > 0 ? 'MEDIUM' : 'LOW'} | ${critical_count > 0 ? '🔴' : high_count > 0 ? '🟠' : medium_count > 0 ? '🟡' : '🟢'} |
| Critical/High/Medium/Low | ${critical_count}/${high_count}/${medium_count}/${low_count} | |
| Coverage | ${coverage_percentage}% | ${coverage_percentage >= 90 ? '🟢' : coverage_percentage >= 75 ? '🟡' : '🔴'} |
**Recommendation**: **${recommendation}**
---
## Findings Summary
| ID | Dimension | Severity | Location | Summary |
|----|-----------|----------|----------|---------|
${findings.map(f => `| ${f.id} | ${f.dimension_name} | ${f.severity} | ${f.location.join(', ')} | ${f.summary} |`).join('\n')}
---
## Analysis by Dimension
${['A','B','C','D','E','F','G','H'].map(d => `### ${d}. ${DIMS[d]}\n\n${renderDimension(d)}`).join('\n\n---\n\n')}
---
## Findings by Severity
### CRITICAL (${critical_count})
${renderSeverity('CRITICAL', 'Blocks execution')}
### HIGH (${high_count})
${renderSeverity('HIGH', 'Fix before execution recommended')}
### MEDIUM (${medium_count})
${renderSeverity('MEDIUM', 'Address during/after implementation')}
### LOW (${low_count})
${renderSeverity('LOW', 'Optional improvement')}
---
## Next Steps
${recommendation === 'BLOCK_EXECUTION' ? '🛑 **BLOCK**: Fix critical issues → Re-verify' :
recommendation === 'PROCEED_WITH_FIXES' ? '⚠️ **FIX RECOMMENDED**: Address high issues → Re-verify or Execute' :
'✅ **READY**: Proceed to /workflow:execute'}
Re-verify: \`/workflow:plan-verify --session ${session_id}\`
Execute: \`/workflow:execute --resume-session="${session_id}"\`
`
// Write report
Write(`${process_dir}/PLAN_VERIFICATION.md`, fullReport)
console.log(`✅ Report: ${process_dir}/PLAN_VERIFICATION.md\n📊 ${recommendation} | C:${critical_count} H:${high_count} M:${medium_count} L:${low_count} | Coverage:${coverage_percentage}%`)
```
### 6. Next Step Selection
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const canExecute = recommendation !== 'BLOCK_EXECUTION'
// Auto mode
if (autoYes) {
if (canExecute) {
SlashCommand("/workflow:execute --yes --resume-session=\"${session_id}\"")
} else {
console.log(`[--yes] BLOCK_EXECUTION - Fix ${critical_count} critical issues first.`)
}
return
}
// Interactive mode - build options based on quality gate
const options = canExecute
? [
{ label: recommendation === 'PROCEED_WITH_FIXES' ? "Execute Anyway" : "Execute (Recommended)",
description: "Proceed to /workflow:execute" },
{ label: "Review Report", description: "Review findings before deciding" },
{ label: "Re-verify", description: "Re-run after manual fixes" }
]
: [
{ label: "Review Report", description: "Review critical issues" },
{ label: "Re-verify", description: "Re-run after fixing issues" }
]
const selection = AskUserQuestion({
questions: [{
question: `Quality gate: ${recommendation}. Next step?`,
header: "Action",
multiSelect: false,
options
}]
})
// Handle selection
if (selection.includes("Execute")) {
  SlashCommand(`/workflow:execute --resume-session="${session_id}"`)
} else if (selection === "Re-verify") {
  SlashCommand(`/workflow:plan-verify --session ${session_id}`)
}
```

View File

@@ -1,10 +1,15 @@
---
name: plan
description: 5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs
argument-hint: "\"text description\"|file.md"
argument-hint: "[-y|--yes] \"text description\"|file.md"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
group: workflow
---
## Auto Mode
When `--yes` or `-y`: Auto-continue all phases (skip confirmations), use recommended conflict resolutions.
# Workflow Plan Command (/workflow:plan)
## Coordinator Role
@@ -111,7 +116,38 @@ CONTEXT: Existing user database schema, REST API endpoints
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress
**After Phase 1**: Return to user showing Phase 1 results, then auto-continue to Phase 2
**After Phase 1**: Initialize planning-notes.md with user intent
```javascript
// Create minimal planning notes document
const planningNotesPath = `.workflow/active/${sessionId}/planning-notes.md`
const userGoal = structuredDescription.goal
const userConstraints = structuredDescription.context || "None specified"
Write(planningNotesPath, `# Planning Notes
**Session**: ${sessionId}
**Created**: ${new Date().toISOString()}
## User Intent (Phase 1)
- **GOAL**: ${userGoal}
- **KEY_CONSTRAINTS**: ${userConstraints}
---
## Context Findings (Phase 2)
(To be filled by context-gather)
## Conflict Decisions (Phase 3)
(To be filled if conflicts detected)
## Consolidated Constraints (Phase 4 Input)
1. ${userConstraints}
`)
```
Return to user showing Phase 1 results, then auto-continue to Phase 2
---
@@ -134,6 +170,7 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st
**Validation**:
- Context package path extracted
- File exists and is valid JSON
- `prioritized_context` field exists
<!-- TodoWrite: When context-gather executed, INSERT 3 context-gather tasks, mark first as in_progress -->
@@ -164,7 +201,37 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st
**Note**: Phase 2 tasks completed and collapsed to summary.
**After Phase 2**: Return to user showing Phase 2 results, then auto-continue to Phase 3/4 (depending on conflict_risk)
**After Phase 2**: Update planning-notes.md with context findings, then auto-continue
```javascript
// Read context-package to extract key findings
const contextPackage = JSON.parse(Read(contextPath))
const conflictRisk = contextPackage.conflict_detection?.risk_level || 'low'
const criticalFiles = (contextPackage.exploration_results?.aggregated_insights?.critical_files || [])
.slice(0, 5).map(f => f.path)
const archPatterns = contextPackage.project_context?.architecture_patterns || []
const constraints = contextPackage.exploration_results?.aggregated_insights?.constraints || []
// Append Phase 2 findings to planning-notes.md
Edit(planningNotesPath, {
old: '## Context Findings (Phase 2)\n(To be filled by context-gather)',
new: `## Context Findings (Phase 2)
- **CRITICAL_FILES**: ${criticalFiles.join(', ') || 'None identified'}
- **ARCHITECTURE**: ${archPatterns.join(', ') || 'Not detected'}
- **CONFLICT_RISK**: ${conflictRisk}
- **CONSTRAINTS**: ${constraints.length > 0 ? constraints.join('; ') : 'None'}`
})
// Append Phase 2 constraints to consolidated list
Edit(planningNotesPath, {
old: '## Consolidated Constraints (Phase 4 Input)',
new: `## Consolidated Constraints (Phase 4 Input)
${constraints.map((c, i) => `${i + 2}. [Context] ${c}`).join('\n')}`
})
```
Return to user showing Phase 2 results, then auto-continue to Phase 3/4 (depending on conflict_risk)
---
@@ -225,7 +292,45 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Note**: Phase 3 tasks completed and collapsed to summary.
**After Phase 3**: Return to user showing conflict resolution results (if executed) and selected strategies, then auto-continue to Phase 3.5
**After Phase 3**: Update planning-notes.md with conflict decisions (if executed), then auto-continue
```javascript
// If Phase 3 was executed, update planning-notes.md
if (['medium', 'high'].includes(conflictRisk)) {
const conflictResPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`
if (fs.existsSync(conflictResPath)) {
const conflictRes = JSON.parse(Read(conflictResPath))
const resolved = conflictRes.resolved_conflicts || []
const modifiedArtifacts = conflictRes.modified_artifacts || []
const planningConstraints = conflictRes.planning_constraints || []
// Update Phase 3 section
Edit(planningNotesPath, {
old: '## Conflict Decisions (Phase 3)\n(To be filled if conflicts detected)',
new: `## Conflict Decisions (Phase 3)
- **RESOLVED**: ${resolved.map(r => `${r.type}: ${r.strategy}`).join('; ') || 'None'}
- **MODIFIED_ARTIFACTS**: ${modifiedArtifacts.join(', ') || 'None'}
- **CONSTRAINTS**: ${planningConstraints.join('; ') || 'None'}`
})
// Append Phase 3 constraints to consolidated list
if (planningConstraints.length > 0) {
const currentNotes = Read(planningNotesPath)
const constraintCount = (currentNotes.match(/^\d+\./gm) || []).length
Edit(planningNotesPath, {
old: '## Consolidated Constraints (Phase 4 Input)',
new: `## Consolidated Constraints (Phase 4 Input)
${planningConstraints.map((c, i) => `${constraintCount + i + 1}. [Conflict] ${c}`).join('\n')}`
})
}
}
}
```
Return to user showing conflict resolution results (if executed) and selected strategies, then auto-continue to Phase 3.5
**Memory State Check**:
- Evaluate current context window usage and memory state
@@ -278,7 +383,12 @@ SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]"
**CLI Execution Note**: CLI tool usage is now determined semantically by action-planning-agent based on user's task description. If user specifies "use Codex/Gemini/Qwen for X", the agent embeds `command` fields in relevant `implementation_approach` steps.
**Input**: `sessionId` from Phase 1
**Input**:
- `sessionId` from Phase 1
- **planning-notes.md**: Consolidated constraints from all phases (Phase 1-3)
- Path: `.workflow/active/[sessionId]/planning-notes.md`
- Contains: User intent, context findings, conflict decisions, consolidated constraints
- **Purpose**: Provides structured, minimal context summary to action-planning-agent
**Validation**:
- `.workflow/active/[sessionId]/IMPL_PLAN.md` exists
@@ -311,20 +421,55 @@ SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]"
**Note**: Agent task completed. No collapse needed (single task).
**Return to User**:
```
Planning complete for session: [sessionId]
Tasks generated: [count]
Plan: .workflow/active/[sessionId]/IMPL_PLAN.md
**Step 4.2: User Decision** - Choose next action
Recommended Next Steps:
1. /workflow:action-plan-verify --session [sessionId] # Verify plan quality before execution
2. /workflow:status # Review task breakdown
3. /workflow:execute # Start implementation (after verification)
After Phase 4 completes, present user with action choices:
Quality Gate: Consider running /workflow:action-plan-verify to catch issues early
```javascript
console.log(`
✅ Planning complete for session: ${sessionId}
📊 Tasks generated: ${taskCount}
📋 Plan: .workflow/active/${sessionId}/IMPL_PLAN.md
`);
// Ask user for next action
const userChoice = AskUserQuestion({
questions: [{
question: "Planning complete. What would you like to do next?",
header: "Next Action",
multiSelect: false,
options: [
{
label: "Verify Plan Quality (Recommended)",
description: "Run quality verification to catch issues before execution. Checks plan structure, task dependencies, and completeness."
},
{
label: "Start Execution",
description: "Begin implementing tasks immediately. Use this if you've already reviewed the plan or want to start quickly."
},
{
label: "Review Status Only",
description: "View task breakdown and session status without taking further action. You can decide what to do next manually."
}
]
}]
});
// Execute based on user choice
if (userChoice.answers["Next Action"] === "Verify Plan Quality (Recommended)") {
console.log("\n🔍 Starting plan verification...\n");
SlashCommand(command="/workflow:plan-verify --session " + sessionId);
} else if (userChoice.answers["Next Action"] === "Start Execution") {
console.log("\n🚀 Starting task execution...\n");
SlashCommand(command="/workflow:execute --session " + sessionId);
} else if (userChoice.answers["Next Action"] === "Review Status Only") {
console.log("\n📊 Displaying session status...\n");
SlashCommand(command="/workflow:status --session " + sessionId);
}
```
**Return to User**: Based on user's choice, execute the corresponding workflow command.
## TodoWrite Pattern
**Core Concept**: Dynamic task attachment and collapse for real-time visibility into workflow execution.
@@ -400,26 +545,21 @@ User Input (task description)
Phase 1: session:start --auto "structured-description"
↓ Output: sessionId
Session Memory: Previous tasks, context, artifacts
Write: planning-notes.md (User Intent section)
Phase 2: context-gather --session sessionId "structured-description"
↓ Input: sessionId + session memory + structured description
↓ Output: contextPath (context-package.json) + conflict_risk
↓ Input: sessionId + structured description
↓ Output: contextPath (context-package.json with prioritized_context) + conflict_risk
↓ Update: planning-notes.md (Context Findings + Consolidated Constraints)
Phase 3: conflict-resolution [AUTO-TRIGGERED if conflict_risk ≥ medium]
↓ Input: sessionId + contextPath + conflict_risk
CLI-powered conflict detection (JSON output)
AskUserQuestion: Present conflicts + resolution strategies
↓ User selects strategies (or skip)
↓ Apply modifications via Edit tool:
↓ - Update guidance-specification.md
↓ - Update role analyses (*.md)
↓ - Mark context-package.json as "resolved"
↓ Output: Modified brainstorm artifacts (NO report file)
Output: Modified brainstorm artifacts
Update: planning-notes.md (Conflict Decisions + Consolidated Constraints)
↓ Skip if conflict_risk is none/low → proceed directly to Phase 4
Phase 4: task-generate-agent --session sessionId
↓ Input: sessionId + resolved brainstorm artifacts + session memory
↓ Input: sessionId + planning-notes.md + context-package.json + brainstorm artifacts
↓ Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md
Return summary to user
@@ -546,6 +686,6 @@ CONSTRAINTS: [Limitations or boundaries]
- `/workflow:tools:task-generate-agent` - Phase 4: Generate task JSON files with agent-driven approach
**Follow-up Commands**:
- `/workflow:action-plan-verify` - Recommended: Verify plan quality and catch issues before execution
- `/workflow:plan-verify` - Recommended: Verify plan quality and catch issues before execution
- `/workflow:status` - Review task breakdown and current progress
- `/workflow:execute` - Begin implementation of generated tasks

View File

@@ -1,7 +1,7 @@
---
name: replan
description: Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning
argument-hint: "[--session session-id] [task-id] \"requirements\"|file.md [--interactive]"
argument-hint: "[-y|--yes] [--session session-id] [task-id] \"requirements\"|file.md [--interactive]"
allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
---
@@ -113,7 +113,59 @@ const taskId = taskIdMatch?.[1]
- List existing tasks
- Read `IMPL_PLAN.md` and `TODO_LIST.md`
**Output**: Session validated, context loaded, mode determined
4. **Parse Execution Intent** (from requirements text):
```javascript
// Dynamic tool detection from cli-tools.json
// Read enabled tools: ["gemini", "qwen", "codex", ...]
const enabledTools = loadEnabledToolsFromConfig(); // See ~/.claude/cli-tools.json
// Build dynamic patterns from enabled tools
function buildExecPatterns(tools) {
const patterns = {
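    // Matches "改为 Agent 执行" / "使用 Agent 执行" ("switch to Agent execution" / "execute with Agent")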
agent: /改为\s*Agent\s*执行|使用\s*Agent\s*执行/i
};
tools.forEach(tool => {
// Pattern: "使用 {tool} 执行" or "改用 {tool}"
patterns[`cli_${tool}`] = new RegExp(
`使用\\s*(${tool})\\s*执行|改用\\s*(${tool})`, 'i'
);
});
return patterns;
}
const execPatterns = buildExecPatterns(enabledTools);
let executionIntent = null
for (const [key, pattern] of Object.entries(execPatterns)) {
if (pattern.test(requirements)) {
executionIntent = key.startsWith('cli_')
? { method: 'cli', cli_tool: key.replace('cli_', '') }
: { method: 'agent', cli_tool: null }
break
}
}
```
**Output**: Session validated, context loaded, mode determined, **executionIntent parsed**
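`loadEnabledToolsFromConfig()` is referenced but not defined here; a minimal sketch, assuming `cli-tools.json` maps tool names to objects with an `enabled` flag (the exact schema is an assumption):
```javascript
function loadEnabledToolsFromConfig() {
  // Assumed shape: { "gemini": { "enabled": true }, "codex": { "enabled": false }, ... }
  const config = JSON.parse(Read('~/.claude/cli-tools.json'))
  return Object.entries(config)
    .filter(([, tool]) => tool.enabled !== false)
    .map(([name]) => name)
}
```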
---
### Auto Mode Support
When `--yes` or `-y` flag is used, the command skips interactive clarification and uses safe defaults:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
```
**Auto Mode Defaults**:
- **Modification Scope**: `tasks_only` (safest - only update task details)
- **Affected Modules**: All modules related to the task
- **Task Changes**: `update_only` (no structural changes)
- **Dependency Changes**: `no` (preserve existing dependencies)
- **User Confirmation**: Auto-confirm execution
**Note**: `--interactive` flag overrides `--yes` flag (forces interactive mode).
---
@@ -121,6 +173,25 @@ const taskId = taskIdMatch?.[1]
**Purpose**: Define modification scope through guided questioning
**Auto Mode Check**:
```javascript
if (autoYes && !interactive) {
// Use defaults and skip to Phase 3
console.log(`[--yes] Using safe defaults for replan:`)
console.log(` - Scope: tasks_only`)
console.log(` - Changes: update_only`)
console.log(` - Dependencies: preserve existing`)
userSelections = {
scope: 'tasks_only',
modules: 'all_affected',
task_changes: 'update_only',
dependency_changes: false
}
// Proceed to Phase 3
}
```
#### Session Mode Questions
**Q1: Modification Scope**
@@ -228,10 +299,29 @@ interface ImpactAnalysis {
**Step 3.3: User Confirmation**
```javascript
Options:
- Confirm & Execute: apply all modifications
- Adjust Plan: re-answer the questions to change scope
- Cancel: abandon this replan
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
  // Auto mode: Auto-confirm execution
  console.log(`[--yes] Auto-confirming replan execution`)
  userConfirmation = 'Confirm & Execute'
  // Proceed to Phase 4
} else {
  // Interactive mode: Ask user
  userConfirmation = AskUserQuestion({
    questions: [{
      question: "Modification plan generated. Please confirm:",
      header: "Confirm",
      options: [
        { label: "Confirm & Execute", description: "Apply all modifications" },
        { label: "Adjust Plan", description: "Re-answer the questions to adjust scope" },
        { label: "Cancel", description: "Abandon this replan" }
      ],
      multiSelect: false
    }]
  })
}
```
**Output**: Modification plan confirmed or adjusted
@@ -299,7 +389,18 @@ const updated_task = {
flow_control: {
...task.flow_control,
implementation_approach: [...updated_steps]
}
},
// Update execution config if intent detected
...(executionIntent && {
meta: {
...task.meta,
execution_config: {
method: executionIntent.method,
cli_tool: executionIntent.cli_tool,
enable_resume: executionIntent.method !== 'agent'
}
}
})
};
Write({
@@ -308,6 +409,8 @@ Write({
});
```
**Note**: Implementation approach steps are NO LONGER modified. CLI execution is controlled by task-level `meta.execution_config` only.
**Step 5.4: Create New Tasks** (if needed)
Generate complete task JSON with all required fields:
@@ -513,3 +616,33 @@ A: Yes, dependent tasks need to be updated in sync
Task replanning complete! 2 tasks updated
```
### Task Replan - Change Execution Method
```bash
/workflow:replan IMPL-001 "改用 Codex 执行"   # "Switch to Codex execution"
# Semantic parsing detects executionIntent:
# { method: 'cli', cli_tool: 'codex' }
# Execution (no interactive questions needed)
✓ Backup created
✓ Updated IMPL-001.json
  - meta.execution_config = { method: 'cli', cli_tool: 'codex', enable_resume: true }
Task execution method updated: Agent → CLI (codex)
```
```bash
/workflow:replan IMPL-002 "改为 Agent 执行"   # "Switch to Agent execution"
# Semantic parsing detects executionIntent:
# { method: 'agent', cli_tool: null }
# Execution
✓ Backup created
✓ Updated IMPL-002.json
  - meta.execution_config = { method: 'agent', cli_tool: null }
Task execution method updated: CLI → Agent
```

View File

@@ -1,26 +1,26 @@
---
name: review-fix
name: review-cycle-fix
description: Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.
argument-hint: "<export-file|review-dir> [--resume] [--max-iterations=N]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*), Edit(*), Write(*)
---
# Workflow Review-Fix Command
# Workflow Review-Cycle-Fix Command
## Quick Start
```bash
# Fix from exported findings file (session-based path)
/workflow:review-fix .workflow/active/WFS-123/.review/fix-export-1706184622000.json
/workflow:review-cycle-fix .workflow/active/WFS-123/.review/fix-export-1706184622000.json
# Fix from review directory (auto-discovers latest export)
/workflow:review-fix .workflow/active/WFS-123/.review/
/workflow:review-cycle-fix .workflow/active/WFS-123/.review/
# Resume interrupted fix session
/workflow:review-fix --resume
/workflow:review-cycle-fix --resume
# Custom max retry attempts per finding
/workflow:review-fix .workflow/active/WFS-123/.review/ --max-iterations=5
/workflow:review-cycle-fix .workflow/active/WFS-123/.review/ --max-iterations=5
```
**Fix Source**: Exported findings from review cycle dashboard
@@ -605,6 +605,4 @@ Use `ccw view` to open the workflow dashboard in browser:
```bash
ccw view
```
```

View File

@@ -764,8 +764,8 @@ After completing a module review, use the generated findings JSON for automated
/workflow:review-module-cycle src/auth/**
# Step 2: Run automated fixes using dimension findings
/workflow:review-fix .workflow/active/WFS-{session-id}/.review/
/workflow:review-cycle-fix .workflow/active/WFS-{session-id}/.review/
```
See `/workflow:review-fix` for automated fixing with smart grouping, parallel execution, and test verification.
See `/workflow:review-cycle-fix` for automated fixing with smart grouping, parallel execution, and test verification.

View File

@@ -775,8 +775,8 @@ After completing a review, use the generated findings JSON for automated fixing:
/workflow:review-session-cycle
# Step 2: Run automated fixes using dimension findings
/workflow:review-fix .workflow/active/WFS-{session-id}/.review/
/workflow:review-cycle-fix .workflow/active/WFS-{session-id}/.review/
```
See `/workflow:review-fix` for automated fixing with smart grouping, parallel execution, and test verification.
See `/workflow:review-cycle-fix` for automated fixing with smart grouping, parallel execution, and test verification.

View File

@@ -1,8 +1,10 @@
---
name: complete
description: Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag
argument-hint: "[-y|--yes] [--detailed]"
examples:
- /workflow:session:complete
- /workflow:session:complete --yes
- /workflow:session:complete --detailed
---
@@ -139,20 +141,41 @@ test -f .workflow/project-tech.json || echo "SKIP"
After successful archival, prompt user to capture learnings:
```javascript
AskUserQuestion({
questions: [{
question: "Would you like to solidify learnings from this session into project guidelines?",
header: "Solidify",
options: [
{ label: "Yes, solidify now", description: "Extract learnings and update project-guidelines.json" },
{ label: "Skip", description: "Archive complete, no learnings to capture" }
],
multiSelect: false
}]
})
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
// Auto mode: Skip solidify
console.log(`[--yes] Auto-selecting: Skip solidify`)
console.log(`Session archived successfully.`)
// Done - no solidify
} else {
// Interactive mode: Ask user
AskUserQuestion({
questions: [{
question: "Would you like to solidify learnings from this session into project guidelines?",
header: "Solidify",
options: [
{ label: "Yes, solidify now", description: "Extract learnings and update project-guidelines.json" },
{ label: "Skip", description: "Archive complete, no learnings to capture" }
],
multiSelect: false
}]
})
// **If "Yes, solidify now"**: Execute `/workflow:session:solidify` with the archived session ID.
}
```
**If "Yes, solidify now"**: Execute `/workflow:session:solidify` with the archived session ID.
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
- **Solidify Learnings**: Auto-selected "Skip" (archive only, no solidify)
**Flag Parsing**:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
```
**Output**:
```

View File

@@ -1,14 +1,18 @@
---
name: solidify
description: Crystallize session learnings and user-defined constraints into permanent project guidelines
argument-hint: "[--type <convention|constraint|learning>] [--category <category>] \"rule or insight\""
argument-hint: "[-y|--yes] [--type <convention|constraint|learning>] [--category <category>] \"rule or insight\""
examples:
- /workflow:session:solidify "Use functional components for all React code" --type convention
- /workflow:session:solidify "No direct DB access from controllers" --type constraint --category architecture
- /workflow:session:solidify -y "No direct DB access from controllers" --type constraint --category architecture
- /workflow:session:solidify "Cache invalidation requires event sourcing" --type learning --category architecture
- /workflow:session:solidify --interactive
---
## Auto Mode
When `--yes` or `-y`: Auto-categorize and add guideline without confirmation.
# Session Solidify Command (/workflow:session:solidify)
## Overview

View File

@@ -268,15 +268,19 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
### Phase 5: TDD Task Generation
**Step 5.1: Execute** - TDD task generation via action-planning-agent
**Step 5.1: Execute** - TDD task generation via action-planning-agent with Phase 0 user configuration
```javascript
SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
```
**Note**: CLI tool usage is determined semantically from user's task description.
**Note**: Phase 0 now includes:
- Supplementary materials collection (file paths or inline content)
- Execution method preference (Agent/Hybrid/CLI)
- CLI tool preference (Codex/Gemini/Qwen/Auto)
- These preferences are passed to agent for task generation
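For illustration, the Phase 0 preferences forwarded to the agent might look like this (field names are assumptions based on the summary fields later in this command):
```javascript
// Hypothetical Phase 0 configuration object passed to action-planning-agent
const phase0Config = {
  executionMethod: "hybrid",       // one of: agent | hybrid | cli
  cliToolPreference: "codex",      // one of: codex | gemini | qwen | auto
  supplementaryMaterials: [
    "./docs/auth-design.md"        // file paths or inline content; may be empty
  ]
}
```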
**Parse**: Extract feature count, task count (not chain count - tasks now contain internal TDD cycles)
**Parse**: Extract feature count, task count (not chain count - tasks now contain internal TDD cycles), CLI execution IDs assigned
**Validate**:
- IMPL_PLAN.md exists (unified plan with TDD Implementation Tasks section)
@@ -284,15 +288,24 @@ SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
- TODO_LIST.md exists with internal TDD phase indicators
- Each IMPL task includes:
- `meta.tdd_workflow: true`
- `flow_control.implementation_approach` with 3 steps (red/green/refactor)
- `meta.cli_execution_id: {session_id}-{task_id}`
- `meta.cli_execution: { "strategy": "new|resume|fork|merge_fork", ... }` (see the sketch after this list)
- `flow_control.implementation_approach` with exactly 3 steps (red/green/refactor)
- Green phase includes test-fix-cycle configuration
- `context.focus_paths`: absolute or clear relative paths (enhanced with exploration critical_files)
- `flow_control.pre_analysis`: includes exploration integration_points analysis
- IMPL_PLAN.md contains workflow_type: "tdd" in frontmatter
- Task count ≤10 (compliance with task limit)
- User configuration applied:
- If executionMethod == "cli" or "hybrid": command field added to steps
- CLI tool preference reflected in execution guidance
- Task count ≤18 (compliance with hard limit)
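A sketch of the `meta` block these checks expect on each IMPL task (values are illustrative):
```javascript
// Illustrative IMPL task meta, assuming session WFS-123 and task IMPL-1
const exampleMeta = {
  tdd_workflow: true,
  cli_execution_id: "WFS-123-IMPL-1",   // format: {session_id}-{task_id}
  cli_execution: {
    strategy: "resume"                  // one of: new | resume | fork | merge_fork
  }
}
```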
**Red Flag Detection** (Non-Blocking Warnings):
- Task count >10: `⚠️ High task count may indicate insufficient decomposition`
- Task count >18: `⚠️ Task count exceeds hard limit - request re-scope`
- Missing cli_execution_id: `⚠️ Task lacks CLI execution ID for resume support`
- Missing test-fix-cycle: `⚠️ Green phase lacks auto-revert configuration`
- Generic task names: `⚠️ Vague task names suggest unclear TDD cycles`
- Missing focus_paths: `⚠️ Task lacks clear file scope for implementation`
**Action**: Log warnings to `.workflow/active/[sessionId]/.process/tdd-warnings.log` (non-blocking)
@@ -338,14 +351,22 @@ SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
1. Each task contains complete TDD workflow (Red-Green-Refactor internally)
2. Task structure validation:
- `meta.tdd_workflow: true` in all IMPL tasks
- `meta.cli_execution_id` present (format: {session_id}-{task_id})
- `meta.cli_execution` strategy assigned (new/resume/fork/merge_fork)
- `flow_control.implementation_approach` has exactly 3 steps
- Each step has correct `tdd_phase`: "red", "green", "refactor"
- `context.focus_paths` are absolute or clear relative paths
- `flow_control.pre_analysis` includes exploration integration analysis
3. Dependency validation:
- Sequential features: IMPL-N depends_on ["IMPL-(N-1)"] if needed
- Complex features: IMPL-N.M depends_on ["IMPL-N.(M-1)"] for subtasks
- CLI execution strategies correctly assigned based on dependency graph
4. Agent assignment: All IMPL tasks use @code-developer
5. Test-fix cycle: Green phase step includes test-fix-cycle logic with max_iterations
6. Task count: Total tasks ≤10 (simple + subtasks)
6. Task count: Total tasks ≤18 (simple + subtasks hard limit)
7. User configuration:
- Execution method choice reflected in task structure
- CLI tool preference documented in implementation guidance (if CLI selected)
**Red Flag Checklist** (from TDD best practices):
- [ ] No tasks skip Red phase (`tdd_phase: "red"` exists in step 1)
@@ -371,7 +392,7 @@ ls -la .workflow/active/[sessionId]/.task/IMPL-*.json
echo "IMPL tasks: $(ls .workflow/active/[sessionId]/.task/IMPL-*.json 2>/dev/null | wc -l)"
# Sample task structure verification (first task)
jq '{id, tdd: .meta.tdd_workflow, phases: [.flow_control.implementation_approach[].tdd_phase]}' \
jq '{id, tdd: .meta.tdd_workflow, cli_id: .meta.cli_execution_id, phases: [.flow_control.implementation_approach[].tdd_phase]}' \
"$(ls .workflow/active/[sessionId]/.task/IMPL-*.json | head -1)"
```
@@ -379,8 +400,9 @@ jq '{id, tdd: .meta.tdd_workflow, phases: [.flow_control.implementation_approach
| Evidence Type | Verification Method | Pass Criteria |
|---------------|---------------------|---------------|
| File existence | `ls -la` artifacts | All files present |
| Task count | Count IMPL-*.json | Count matches claims |
| TDD structure | jq sample extraction | Shows red/green/refactor |
| Task count | Count IMPL-*.json | Count matches claims (≤18) |
| TDD structure | jq sample extraction | Shows red/green/refactor + cli_execution_id |
| CLI execution IDs | jq extraction | All tasks have cli_execution_id assigned |
| Warning log | Check tdd-warnings.log | Logged (may be empty) |
**Return Summary**:
@@ -393,7 +415,7 @@ Total tasks: [M] (1 task per simple feature + subtasks for complex features)
Task breakdown:
- Simple features: [K] tasks (IMPL-1 to IMPL-K)
- Complex features: [L] features with [P] subtasks
- Total task count: [M] (within 10-task limit)
- Total task count: [M] (within 18-task hard limit)
Structure:
- IMPL-1: {Feature 1 Name} (Internal: Red → Green → Refactor)
@@ -407,22 +429,31 @@ Plans generated:
- Unified Implementation Plan: .workflow/active/[sessionId]/IMPL_PLAN.md
(includes TDD Implementation Tasks section with workflow_type: "tdd")
- Task List: .workflow/active/[sessionId]/TODO_LIST.md
(with internal TDD phase indicators)
(with internal TDD phase indicators and CLI execution strategies)
- Task JSONs: .workflow/active/[sessionId]/.task/IMPL-*.json
(with cli_execution_id and execution strategies for resume support)
TDD Configuration:
- Each task contains complete Red-Green-Refactor cycle
- Green phase includes test-fix cycle (max 3 iterations)
- Auto-revert on max iterations reached
- CLI execution strategies: new/resume/fork/merge_fork based on dependency graph
User Configuration Applied:
- Execution Method: [agent|hybrid|cli]
- CLI Tool Preference: [codex|gemini|qwen|auto]
- Supplementary Materials: [included|none]
- Task generation follows cli-tools-usage.md guidelines
⚠️ ACTION REQUIRED: Before execution, ensure you understand WHY each Red phase test is expected to fail.
This is crucial for valid TDD - if you don't know why the test fails, you can't verify it tests the right thing.
Recommended Next Steps:
1. /workflow:action-plan-verify --session [sessionId] # Verify TDD plan quality and dependencies
2. /workflow:execute --session [sessionId] # Start TDD execution
1. /workflow:plan-verify --session [sessionId] # Verify TDD plan quality and dependencies
2. /workflow:execute --session [sessionId] # Start TDD execution with CLI strategies
3. /workflow:tdd-verify [sessionId] # Post-execution TDD compliance check
Quality Gate: Consider running /workflow:action-plan-verify to validate TDD task structure and dependencies
Quality Gate: Consider running /workflow:plan-verify to validate TDD task structure, dependencies, and CLI execution strategies
```
## TodoWrite Pattern
@@ -500,7 +531,7 @@ TDD Workflow Orchestrator
└─ Phase 6: TDD Structure Validation
└─ Internal validation + summary returned
└─ Recommend: /workflow:action-plan-verify
└─ Recommend: /workflow:plan-verify
Key Points:
• ← ATTACHED: SlashCommand attaches sub-tasks to orchestrator TodoWrite
@@ -547,9 +578,11 @@ Convert user input to TDD-structured format:
| Parsing failure | Empty/malformed output | Retry once, then report |
| Missing context-package | File read error | Re-run `/workflow:tools:context-gather` |
| Invalid task JSON | jq parse error | Report malformed file path |
| High task count (>10) | Count validation | Log warning, continue (non-blocking) |
| Task count exceeds 18 | Count validation ≥19 | Request re-scope, split into multiple sessions |
| Missing cli_execution_id | All tasks lack ID | Regenerate tasks with phase 0 user config |
| Test-context missing | File not found | Re-run `/workflow:tools:test-context-gather` |
| Phase timeout | No response | Retry phase, check CLI connectivity |
| CLI tool not available | Tool not in cli-tools.json | Fall back to alternative preferred tool |
## Related Commands
@@ -565,7 +598,7 @@ Convert user input to TDD-structured format:
- `/workflow:tools:task-generate-tdd` - Phase 5: Generate TDD tasks (CLI tool usage determined semantically)
**Follow-up Commands**:
- `/workflow:action-plan-verify` - Recommended: Verify TDD plan quality and structure before execution
- `/workflow:plan-verify` - Recommended: Verify TDD plan quality and structure before execution
- `/workflow:status` - Review TDD task breakdown
- `/workflow:execute` - Begin TDD implementation
- `/workflow:tdd-verify` - Post-execution: Verify TDD compliance and generate quality report
@@ -574,7 +607,7 @@ Convert user input to TDD-structured format:
| Situation | Recommended Command | Purpose |
|-----------|---------------------|---------|
| First time planning | `/workflow:action-plan-verify` | Validate task structure before execution |
| First time planning | `/workflow:plan-verify` | Validate task structure before execution |
| Warnings in tdd-warnings.log | Review log, refine tasks | Address Red Flags before proceeding |
| High task count warning | Consider `/workflow:session:start` | Split into focused sub-sessions |
| Ready to implement | `/workflow:execute` | Begin TDD Red-Green-Refactor cycles |
@@ -587,7 +620,7 @@ Convert user input to TDD-structured format:
```
/workflow:tdd-plan
[Planning Complete] ──→ /workflow:action-plan-verify (recommended)
[Planning Complete] ──→ /workflow:plan-verify (recommended)
[Verified/Ready] ─────→ /workflow:execute

View File

@@ -1,214 +1,301 @@
---
name: tdd-verify
description: Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis
argument-hint: "[optional: WFS-session-id]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(gemini:*)
description: Verify TDD workflow compliance against Red-Green-Refactor cycles. Generates quality report with coverage analysis and quality gate recommendation. Orchestrates sub-commands for comprehensive validation.
argument-hint: "[optional: --session WFS-session-id]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*)
---
# TDD Verification Command (/workflow:tdd-verify)
## Goal
Verify TDD workflow execution quality by validating Red-Green-Refactor cycle compliance, test coverage completeness, and task chain structure integrity. This command orchestrates multiple analysis phases and generates a comprehensive compliance report with quality gate recommendation.
**Output**: A structured Markdown report saved to `.workflow/active/WFS-{session}/TDD_COMPLIANCE_REPORT.md` containing:
- Executive summary with compliance score and quality gate recommendation
- Task chain validation (TEST → IMPL → REFACTOR structure)
- Test coverage metrics (line, branch, function)
- Red-Green-Refactor cycle verification
- Best practices adherence assessment
- Actionable improvement recommendations
## Operating Constraints
**ORCHESTRATOR MODE**:
- This command coordinates multiple sub-commands (`/workflow:tools:tdd-coverage-analysis`, `ccw cli`)
- MAY write output files: TDD_COMPLIANCE_REPORT.md (primary report), .process/*.json (intermediate artifacts)
- MUST NOT modify source task files or implementation code
- MUST NOT create or delete tasks in the workflow
**Quality Gate Authority**: The compliance report provides a binding recommendation (BLOCK_MERGE / REQUIRE_FIXES / PROCEED_WITH_CAVEATS / APPROVED) based on objective compliance criteria.
## Coordinator Role
**This command is a pure orchestrator**: Execute 4 phases to verify TDD workflow compliance, test coverage, and Red-Green-Refactor cycle execution.
## Core Responsibilities
- Verify TDD task chain structure
- Analyze test coverage
- Validate TDD cycle execution
- Generate compliance report
- Verify TDD task chain structure (TEST → IMPL → REFACTOR)
- Analyze test coverage metrics
- Validate TDD cycle execution quality
- Generate compliance report with quality gate recommendation
## Execution Process
```
Input Parsing:
└─ Decision (session argument):
├─ session-id provided → Use provided session
└─ No session-id → Auto-detect active session
├─ --session provided → Use provided session
└─ No session → Auto-detect active session
Phase 1: Session Discovery
├─ Validate session directory exists
TodoWrite: Mark phase 1 completed
Phase 1: Session Discovery & Validation
├─ Detect or validate session directory
Check required artifacts exist (.task/*.json, .summaries/*)
└─ ERROR if invalid or incomplete
Phase 2: Task Chain Validation
Phase 2: Task Chain Structure Validation
├─ Load all task JSONs from .task/
├─ Extract task IDs and group by feature
├─ Validate TDD structure: TEST-N.M → IMPL-N.M → REFACTOR-N.M
├─ Verify dependencies (depends_on)
├─ Validate meta fields (tdd_phase, agent)
└─ Extract chain validation data
Phase 3: Coverage & Cycle Analysis
├─ Call: /workflow:tools:tdd-coverage-analysis
├─ Parse: test-results.json, coverage-report.json, tdd-cycle-report.md
└─ Extract coverage metrics and TDD cycle verification
Phase 4: Compliance Report Generation
├─ Aggregate findings from Phases 1-3
├─ Calculate compliance score (0-100)
├─ Determine quality gate recommendation
├─ Generate TDD_COMPLIANCE_REPORT.md
└─ Display summary to user
```
## 4-Phase Execution
### Phase 1: Session Discovery & Validation
**Step 1.1: Detect Session**
```bash
IF --session parameter provided:
session_id = provided session
ELSE:
# Auto-detect active session
active_sessions = bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
IF active_sessions is empty:
ERROR: "No active workflow session found. Use --session <session-id>"
EXIT
ELSE IF active_sessions has multiple entries:
# Use most recently modified session
session_id = bash(ls -td .workflow/active/WFS-*/ 2>/dev/null | head -1 | xargs basename)
ELSE:
session_id = basename(active_sessions[0])
# Derive paths
session_dir = .workflow/active/WFS-{session_id}
task_dir = session_dir/.task
summaries_dir = session_dir/.summaries
process_dir = session_dir/.process
```
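For illustration, a minimal Node.js sketch of this detection logic, assuming the `.workflow/active/` layout above (the `fs`-based helper is a sketch, not part of the command contract):
```javascript
// Sketch: resolve the target session, preferring --session, else the most
// recently modified WFS-* directory under .workflow/active/.
const fs = require("fs");
const path = require("path");

function detectSession(provided) {
  if (provided) return provided; // --session wins
  const root = ".workflow/active";
  const sessions = fs.existsSync(root)
    ? fs.readdirSync(root).filter((name) => name.startsWith("WFS-"))
    : [];
  if (sessions.length === 0) {
    throw new Error("No active workflow session found. Use --session <session-id>");
  }
  // Multiple sessions: fall back to the most recently modified one.
  sessions.sort(
    (a, b) =>
      fs.statSync(path.join(root, b)).mtimeMs -
      fs.statSync(path.join(root, a)).mtimeMs
  );
  return sessions[0];
}
```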
**Step 1.2: Validate Required Artifacts**
```bash
# Check task files exist
task_files = Glob(task_dir/*.json)
IF task_files.count == 0:
ERROR: "No task JSON files found. Run /workflow:tdd-plan first"
EXIT
# Check summaries exist (optional but recommended for full analysis)
summaries_exist = EXISTS(summaries_dir)
IF NOT summaries_exist:
WARNING: "No .summaries/ directory found. Some analysis may be limited."
```
**Output**: session_id, session_dir, task_files list
---
### Phase 2: Task Chain Structure Validation
**Step 2.1: Load and Parse Task JSONs**
```bash
# Single-pass JSON extraction using jq
validation_data = bash("""
# Load all tasks and extract structured data
cd '{session_dir}/.task'
# Extract all task IDs
task_ids=$(jq -r '.id' *.json 2>/dev/null | sort)
# Extract dependencies for IMPL tasks
impl_deps=$(jq -r 'select(.id | startswith("IMPL")) | .id + ":" + (.context.depends_on[]? // "none")' *.json 2>/dev/null)
# Extract dependencies for REFACTOR tasks
refactor_deps=$(jq -r 'select(.id | startswith("REFACTOR")) | .id + ":" + (.context.depends_on[]? // "none")' *.json 2>/dev/null)
# Extract meta fields
meta_tdd=$(jq -r '.id + ":" + (.meta.tdd_phase // "missing")' *.json 2>/dev/null)
meta_agent=$(jq -r '.id + ":" + (.meta.agent // "missing")' *.json 2>/dev/null)
# Output as JSON
jq -n --arg ids "$task_ids" \\
--arg impl "$impl_deps" \\
--arg refactor "$refactor_deps" \\
--arg tdd "$meta_tdd" \\
--arg agent "$meta_agent" \\
'{ids: $ids, impl_deps: $impl, refactor_deps: $refactor, tdd: $tdd, agent: $agent}'
""")
```
**Step 2.2: Validate TDD Chain Structure**
```
Parse validation_data JSON and validate:
For each feature N (extracted from task IDs):
1. TEST-N.M exists?
2. IMPL-N.M exists?
3. REFACTOR-N.M exists? (optional but recommended)
4. IMPL-N.M.context.depends_on contains TEST-N.M?
5. REFACTOR-N.M.context.depends_on contains IMPL-N.M?
6. TEST-N.M.meta.tdd_phase == "red"?
7. TEST-N.M.meta.agent == "@code-review-test-agent"?
8. IMPL-N.M.meta.tdd_phase == "green"?
9. IMPL-N.M.meta.agent == "@code-developer"?
10. REFACTOR-N.M.meta.tdd_phase == "refactor"?
Calculate:
- chain_completeness_score = (complete_chains / total_chains) * 100
- dependency_accuracy = (correct_deps / total_deps) * 100
- meta_field_accuracy = (correct_meta / total_meta) * 100
```
**Output**: chain_validation_report (JSON structure with validation results)
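As a concrete sketch of the Step 2.2 checks (the task shape follows the fields referenced above — `context.depends_on`, `meta.tdd_phase`, `meta.agent`; treat the helper as illustrative, not normative):
```javascript
// Sketch: validate one TEST→IMPL→REFACTOR chain from parsed task JSONs.
function validateChain(tasks, feature) {
  const get = (prefix) => tasks.find((t) => t.id === `${prefix}-${feature}`);
  const test = get("TEST");
  const impl = get("IMPL");
  const refactor = get("REFACTOR"); // optional but recommended
  const errors = [];

  if (!test) errors.push(`TEST-${feature} missing`);          // -30 points
  if (!impl) errors.push(`IMPL-${feature} missing`);          // -30 points
  if (!refactor) errors.push(`REFACTOR-${feature} missing`);  // -10 points

  if (impl && !(impl.context?.depends_on || []).includes(`TEST-${feature}`))
    errors.push(`IMPL-${feature} must depend on TEST-${feature}`);     // -15 points
  if (refactor && !(refactor.context?.depends_on || []).includes(`IMPL-${feature}`))
    errors.push(`REFACTOR-${feature} must depend on IMPL-${feature}`); // -15 points

  if (test && test.meta?.tdd_phase !== "red")
    errors.push(`TEST-${feature}: tdd_phase must be "red"`);           // -5 points
  if (impl && impl.meta?.tdd_phase !== "green")
    errors.push(`IMPL-${feature}: tdd_phase must be "green"`);         // -5 points

  return { feature, complete: errors.length === 0, errors };
}
```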
---
### Phase 3: Coverage & Cycle Analysis
**Input**: session_id from Phase 1
**Step 3.1: Call Coverage Analysis Sub-command**
```bash
SlashCommand(command="/workflow:tools:tdd-coverage-analysis --session {session_id}")
```
**Step 3.2: Parse Output Files**
```bash
# Check required outputs exist
IF NOT EXISTS(process_dir/test-results.json):
WARNING: "test-results.json not found. Coverage analysis incomplete."
coverage_data = null
ELSE:
coverage_data = Read(process_dir/test-results.json)
IF NOT EXISTS(process_dir/coverage-report.json):
WARNING: "coverage-report.json not found. Coverage metrics incomplete."
metrics = null
ELSE:
metrics = Read(process_dir/coverage-report.json)
IF NOT EXISTS(process_dir/tdd-cycle-report.md):
WARNING: "tdd-cycle-report.md not found. Cycle validation incomplete."
cycle_data = null
ELSE:
cycle_data = Read(process_dir/tdd-cycle-report.md)
```
**Step 3.3: Extract Coverage Metrics**
```
If coverage_data exists:
- line_coverage_percent
- branch_coverage_percent
- function_coverage_percent
- uncovered_files (list)
- uncovered_lines (map: file -> line ranges)
If cycle_data exists:
- red_phase_compliance (tests failed initially?)
- green_phase_compliance (tests pass after impl?)
- refactor_phase_compliance (tests stay green during refactor?)
- minimal_implementation_score (was impl minimal?)
```
**Output**: coverage_analysis, cycle_analysis
---
### Phase 4: Compliance Report Generation
**Step 4.1: Calculate Compliance Score**
```
Base Score: 100 points
Deductions:
Chain Structure:
- Missing TEST task: -30 points per feature
- Missing IMPL task: -30 points per feature
- Missing REFACTOR task: -10 points per feature
- Wrong dependency: -15 points per error
- Wrong agent: -5 points per error
- Wrong tdd_phase: -5 points per error
TDD Cycle Compliance:
- Test didn't fail initially: -10 points per feature
- Tests didn't pass after IMPL: -20 points per feature
- Tests broke during REFACTOR: -15 points per feature
- Over-engineered IMPL: -10 points per feature
Coverage Quality:
- Line coverage < 80%: -5 points
- Branch coverage < 70%: -5 points
- Function coverage < 80%: -5 points
- Critical paths uncovered: -10 points
Final Score: Max(0, Base Score - Total Deductions)
```
**Step 4.2: Determine Quality Gate**
```
IF score >= 90 AND critical_violations == 0:
recommendation = "APPROVED"
ELSE IF score >= 70 AND critical_violations == 0:
recommendation = "PROCEED_WITH_CAVEATS"
ELSE IF score >= 50:
recommendation = "REQUIRE_FIXES"
ELSE:
recommendation = "BLOCK_MERGE"
```
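Putting Steps 4.1 and 4.2 together, a hedged sketch of the scoring and gate mapping (deduction values are copied from the table above; the violation input shape is an assumption):
```javascript
// Sketch: compute compliance score and map it to a quality gate.
function qualityGate(violations) {
  const DEDUCTIONS = {
    missing_test: 30, missing_impl: 30, missing_refactor: 10,
    wrong_dependency: 15, wrong_agent: 5, wrong_tdd_phase: 5,
    no_initial_failure: 10, tests_fail_after_impl: 20,
    tests_broke_in_refactor: 15, over_engineered_impl: 10,
  };
  // Critical violations per the Quality Gate Criteria section.
  const CRITICAL = new Set([
    "missing_test", "missing_impl",
    "no_initial_failure", "tests_fail_after_impl", "tests_broke_in_refactor",
  ]);

  let score = 100;
  let criticalCount = 0;
  for (const v of violations) {
    score -= DEDUCTIONS[v.type] ?? 0;
    if (CRITICAL.has(v.type)) criticalCount++;
  }
  score = Math.max(0, score);

  let recommendation;
  if (score >= 90 && criticalCount === 0) recommendation = "APPROVED";
  else if (score >= 70 && criticalCount === 0) recommendation = "PROCEED_WITH_CAVEATS";
  else if (score >= 50) recommendation = "REQUIRE_FIXES";
  else recommendation = "BLOCK_MERGE";

  return { score, criticalCount, recommendation };
}
```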
**Step 4.3: Generate Report**
```bash
ccw cli -p "
PURPOSE: Generate TDD compliance report
TASK: Analyze TDD workflow execution and generate quality report
CONTEXT: @{.workflow/active/{sessionId}/.task/*.json,.workflow/active/{sessionId}/.summaries/*,.workflow/active/{sessionId}/.process/tdd-cycle-report.md}
EXPECTED:
- TDD compliance score (0-100)
- Chain completeness verification
- Test coverage analysis summary
- Quality recommendations
- Red-Green-Refactor cycle validation
- Best practices adherence assessment
RULES: Focus on TDD best practices and workflow adherence. Be specific about violations and improvements.
" --tool gemini --mode analysis --cd project-root > .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
report_content = Generate markdown report (see structure below)
report_path = "{session_dir}/TDD_COMPLIANCE_REPORT.md"
Write(report_path, report_content)
```
**Output**: TDD_COMPLIANCE_REPORT.md
**Step 4.4: Display Summary to User**
```bash
echo "=== TDD Verification Complete ==="
echo "Session: {session_id}"
echo "Report: {report_path}"
echo ""
echo "Quality Gate: {recommendation}"
echo "Compliance Score: {score}/100"
echo ""
echo "Chain Validation: {chain_completeness_score}%"
echo "Line Coverage: {line_coverage}%"
echo "Branch Coverage: {branch_coverage}%"
echo ""
echo "Next: Review full report for detailed findings"
```
## TodoWrite Pattern (Optional)
**Note**: As an orchestrator command, TodoWrite tracking is optional and primarily useful for long-running verification processes. For most cases, the 4-phase execution is fast enough that progress tracking adds noise without value.
```javascript
// Initialize (before Phase 1)
TodoWrite({todos: [
{"content": "Identify target session", "status": "in_progress", "activeForm": "Identifying target session"},
{"content": "Validate task chain structure", "status": "pending", "activeForm": "Validating task chain structure"},
{"content": "Analyze test execution", "status": "pending", "activeForm": "Analyzing test execution"},
{"content": "Generate compliance report", "status": "pending", "activeForm": "Generating compliance report"}
]})
// After Phase 1
TodoWrite({todos: [
{"content": "Identify target session", "status": "completed", "activeForm": "Identifying target session"},
{"content": "Validate task chain structure", "status": "in_progress", "activeForm": "Validating task chain structure"},
{"content": "Analyze test execution", "status": "pending", "activeForm": "Analyzing test execution"},
{"content": "Generate compliance report", "status": "pending", "activeForm": "Generating compliance report"}
]})
// Continue pattern for Phase 2, 3, 4...
// Only use TodoWrite for complex multi-session verification
// Skip for single-session verification
```
## Validation Logic
@@ -229,27 +316,24 @@ TodoWrite({todos: [
5. Report incomplete or invalid chains
```
### Quality Gate Criteria
| Recommendation | Score Range | Critical Violations | Action |
|----------------|-------------|---------------------|--------|
| **APPROVED** | ≥90 | 0 | Safe to merge |
| **PROCEED_WITH_CAVEATS** | ≥70 | 0 | Can proceed, address minor issues |
| **REQUIRE_FIXES** | ≥50 | Any | Must fix before merge |
| **BLOCK_MERGE** | <50 | Any | Block merge until resolved |
**Critical Violations**:
- Missing TEST or IMPL task for any feature
- Tests didn't fail initially (Red phase violation)
- Tests didn't pass after IMPL (Green phase violation)
- Tests broke during REFACTOR (Refactor phase violation)
## Output Files
```
.workflow/active/WFS-{session-id}/
├── TDD_COMPLIANCE_REPORT.md # Comprehensive compliance report ⭐
└── .process/
├── test-results.json # From tdd-coverage-analysis
@@ -262,14 +346,14 @@ Final Score: Max(0, Base Score - Deductions)
### Session Discovery Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| No active session | No WFS-* directories | Provide --session explicitly |
| Multiple active sessions | Multiple WFS-* directories | Provide --session explicitly |
| Session not found | Invalid session-id | Check available sessions |
### Validation Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Task files missing | Incomplete planning | Run /workflow:tdd-plan first |
| Invalid JSON | Corrupted task files | Regenerate tasks |
| Missing summaries | Tasks not executed | Execute tasks before verify |
@@ -278,13 +362,13 @@ Final Score: Max(0, Base Score - Deductions)
|-------|-------|------------|
| Coverage tool missing | No test framework | Configure testing first |
| Tests fail to run | Code errors | Fix errors before verify |
| Sub-command fails | tdd-coverage-analysis error | Check sub-command logs |
## Integration & Usage
### Command Chain
- **Called After**: `/workflow:execute` (when TDD tasks completed)
- **Calls**: `/workflow:tools:tdd-coverage-analysis`
- **Related**: `/workflow:tdd-plan`, `/workflow:status`
### Basic Usage
@@ -293,7 +377,7 @@ Final Score: Max(0, Base Score - Deductions)
/workflow:tdd-verify
# Specify session
/workflow:tdd-verify --session WFS-auth
```
### When to Use
@@ -308,61 +392,125 @@ Final Score: Max(0, Base Score - Deductions)
# TDD Compliance Report - {Session ID}
**Generated**: {timestamp}
**Session**: WFS-{sessionId}
**Workflow Type**: TDD
---
## Executive Summary
### Quality Gate Decision
| Metric | Value | Status |
|--------|-------|--------|
| Compliance Score | {score}/100 | {status_emoji} |
| Chain Completeness | {percentage}% | {status} |
| Line Coverage | {percentage}% | {status} |
| Branch Coverage | {percentage}% | {status} |
| Function Coverage | {percentage}% | {status} |
### Recommendation
**{RECOMMENDATION}**
**Decision Rationale**:
{brief explanation based on score and violations}
**Quality Gate Criteria**:
- **APPROVED**: Score ≥90, no critical violations
- **PROCEED_WITH_CAVEATS**: Score ≥70, no critical violations
- **REQUIRE_FIXES**: Score ≥50 or critical violations exist
- **BLOCK_MERGE**: Score <50
---
## Chain Analysis
### Feature 1: {Feature Name}
**Status**: ✅ Complete
**Chain**: TEST-1.1 → IMPL-1.1 → REFACTOR-1.1
| Phase | Task | Status | Details |
|-------|------|--------|---------|
| Red | TEST-1.1 | ✅ Pass | Test created and failed with clear message |
| Green | IMPL-1.1 | ✅ Pass | Minimal implementation made test pass |
| Refactor | REFACTOR-1.1 | ✅ Pass | Code improved, tests remained green |
### Feature 2: {Feature Name}
**Status**: ⚠️ Incomplete
**Chain**: TEST-2.1 → IMPL-2.1 (Missing REFACTOR-2.1)
| Phase | Task | Status | Details |
|-------|------|--------|---------|
| Red | TEST-2.1 | ✅ Pass | Test created and failed |
| Green | IMPL-2.1 | ⚠️ Warning | Implementation seems over-engineered |
| Refactor | REFACTOR-2.1 | ❌ Missing | Task not completed |
**Issues**:
- REFACTOR-2.1 task not completed (-10 points)
- IMPL-2.1 implementation exceeded minimal scope (-10 points)
[Repeat for all features]
### Chain Validation Summary
| Metric | Value |
|--------|-------|
| Total Features | {count} |
| Complete Chains | {count} ({percent}%) |
| Incomplete Chains | {count} |
| Missing TEST | {count} |
| Missing IMPL | {count} |
| Missing REFACTOR | {count} |
| Dependency Errors | {count} |
| Meta Field Errors | {count} |
---
## Test Coverage Analysis
### Coverage Metrics
| Metric | Coverage | Target | Status |
|--------|----------|--------|--------|
| Line Coverage | {percentage}% | ≥80% | {status} |
| Branch Coverage | {percentage}% | ≥70% | {status} |
| Function Coverage | {percentage}% | ≥80% | {status} |
### Coverage Gaps
| File | Lines | Issue | Priority |
|------|-------|-------|----------|
| src/auth/service.ts | 45-52 | Uncovered error handling | HIGH |
| src/utils/parser.ts | 78-85 | Uncovered edge case | MEDIUM |
---
## TDD Cycle Validation
### Red Phase (Write Failing Test)
- {N}/{total} features had failing tests initially ({percent}%)
- ✅ Compliant features: {list}
- ❌ Non-compliant features: {list}
**Violations**:
- Feature 3: No evidence of initial test failure (-10 points)
### Green Phase (Make Test Pass)
- {N}/{total} implementations made tests pass ({percent}%)
- ✅ Compliant features: {list}
- ❌ Non-compliant features: {list}
**Violations**:
- Feature 2: Implementation over-engineered (-10 points)
### Refactor Phase (Improve Quality)
- {N}/{total} features completed refactoring ({percent}%)
- ✅ Compliant features: {list}
- ❌ Non-compliant features: {list}
**Violations**:
- Feature 2, 4: Refactoring step skipped (-20 points total)
---
## Best Practices Assessment
@@ -377,24 +525,61 @@ Status: {EXCELLENT | GOOD | NEEDS IMPROVEMENT | FAILED}
- Missing refactoring steps
- Test failure messages could be more descriptive
---
## Detailed Findings by Severity
### Critical Issues ({count})
{List of critical issues with impact and remediation}
### High Priority Issues ({count})
{List of high priority issues with impact and remediation}
### Medium Priority Issues ({count})
{List of medium priority issues with impact and remediation}
### Low Priority Issues ({count})
{List of low priority issues with impact and remediation}
---
## Recommendations
### Required Fixes (Before Merge)
1. Complete missing REFACTOR tasks (Features 2, 4)
2. Verify initial test failures for Feature 3
3. Fix tests that broke during refactoring
### Recommended Improvements
1. Simplify over-engineered implementations
2. Add edge case tests for Features 1, 3
3. Improve test failure message clarity
4. Increase branch coverage to >85%
### Optional Enhancements
1. Add more descriptive test names
2. Consider parameterized tests for similar scenarios
3. Document TDD process learnings
## Conclusion
{Summary of compliance status and next steps}
---
## Metrics Summary
| Metric | Value |
|--------|-------|
| Total Features | {count} |
| Complete Chains | {count} ({percent}%) |
| Compliance Score | {score}/100 |
| Critical Issues | {count} |
| High Issues | {count} |
| Medium Issues | {count} |
| Low Issues | {count} |
| Line Coverage | {percent}% |
| Branch Coverage | {percent}% |
| Function Coverage | {percent}% |
---
**Report End**
```

View File

@@ -1,12 +1,16 @@
---
name: conflict-resolution
description: Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen
argument-hint: "--session WFS-session-id --context path/to/context-package.json"
argument-hint: "[-y|--yes] --session WFS-session-id --context path/to/context-package.json"
examples:
- /workflow:tools:conflict-resolution --session WFS-auth --context .workflow/active/WFS-auth/.process/context-package.json
- /workflow:tools:conflict-resolution --session WFS-payment --context .workflow/active/WFS-payment/.process/context-package.json
- /workflow:tools:conflict-resolution -y --session WFS-payment --context .workflow/active/WFS-payment/.process/context-package.json
---
## Auto Mode
When `--yes` or `-y`: Auto-select recommended strategy for each conflict, skip clarification questions.
# Conflict Resolution Command
## Purpose
@@ -21,10 +25,10 @@ Analyzes conflicts between implementation plans and existing codebase, **includi
| Responsibility | Description |
|---------------|-------------|
| **Detect Conflicts** | Analyze plan vs existing code inconsistencies |
| **Scenario Uniqueness** | Search and compare new modules with existing modules for functional overlaps |
| **Generate Strategies** | Provide 2-4 resolution options per conflict |
| **Iterative Clarification** | Ask unlimited questions until scenario boundaries are clear and unique |
| **Agent Re-analysis** | Dynamically update strategies based on user clarifications |
| **CLI Analysis** | Use Gemini/Qwen (Claude fallback) |
| **User Decision** | Present options ONE BY ONE, never auto-apply |
| **Direct Text Output** | Output questions via text directly, NEVER use bash echo/printf |
@@ -53,7 +57,7 @@ Analyzes conflicts between implementation plans and existing codebase, **includi
- Breaking updates
### 5. Module Scenario Overlap
- Functional overlap between new and existing modules
- Scenario boundary ambiguity
- Duplicate responsibility detection
- Module merge/split decisions
@@ -130,7 +134,7 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
### 1. Load Context
- Read existing files from conflict_detection.existing_files
- Load plan from .workflow/active/{session_id}/.process/context-package.json
- Load exploration_results and use aggregated_insights for enhanced analysis
- Extract role analyses and requirements
### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)
@@ -182,33 +186,25 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
- modifications.old_content: 20-100 chars for unique Edit tool matching
- modifications.new_content: preserves markdown formatting
- modification_suggestions: 2-5 actionable suggestions for custom handling
### 5. Planning Notes Record (REQUIRED)
After analysis complete, append a brief execution record to planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Under "## Conflict Decisions (Phase 3)" section
**Format**:
\`\`\`
### [Conflict-Resolution Agent] YYYY-MM-DD
- **Note**: [Auto-filled: brief summary of conflict types, resolution strategies, key decisions]
\`\`\`
`)
```
### Phase 3: User Interaction Loop
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
FOR each conflict:
round = 0, clarified = false, userClarifications = []
@@ -216,8 +212,13 @@ FOR each conflict:
// 1. Display conflict info (text output for context)
displayConflictSummary(conflict) // id, brief, severity, overlap_analysis if ModuleOverlap
// 2. Strategy selection
if (autoYes) {
console.log(`[--yes] Auto-selecting recommended strategy`)
selectedStrategy = conflict.strategies[conflict.recommended || 0]
clarified = true // Skip clarification loop
} else {
AskUserQuestion({
questions: [{
question: formatStrategiesForDisplay(conflict.strategies),
header: "策略选择",
@@ -230,18 +231,19 @@ FOR each conflict:
{ label: "自定义修改", description: `建议: ${conflict.modification_suggestions?.slice(0,2).join('; ')}` }
]
}]
})
// 3. Handle selection
if (userChoice === "自定义修改") {
customConflicts.push({ id, brief, category, suggestions, overlap_analysis })
break
}
selectedStrategy = findStrategyByName(userChoice)
}
// 4. Clarification (if needed) - batched max 4 per call
if (!autoYes && selectedStrategy.clarification_needed?.length > 0) {
for (batch of chunk(selectedStrategy.clarification_needed, 4)) {
AskUserQuestion({
questions: batch.map((q, i) => ({

View File

@@ -35,7 +35,7 @@ Step 1: Context-Package Detection
├─ Valid package exists → Return existing (skip execution)
└─ No valid package → Continue to Step 2
Step 2: Complexity Assessment & Parallel Explore (NEW)
Step 2: Complexity Assessment & Parallel Explore
├─ Analyze task_description → classify Low/Medium/High
├─ Select exploration angles (1-4 based on complexity)
├─ Launch N cli-explore-agents in parallel
@@ -213,19 +213,37 @@ Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationM
**Only execute after Step 2 completes**
```javascript
// Load user intent from planning-notes.md (from Phase 1)
const planningNotesPath = `.workflow/active/${session_id}/planning-notes.md`;
let userIntent = { goal: task_description, key_constraints: "None specified" };
if (file_exists(planningNotesPath)) {
const notesContent = Read(planningNotesPath);
const goalMatch = notesContent.match(/\*\*GOAL\*\*:\s*(.+)/);
const constraintsMatch = notesContent.match(/\*\*KEY_CONSTRAINTS\*\*:\s*(.+)/);
if (goalMatch) userIntent.goal = goalMatch[1].trim();
if (constraintsMatch) userIntent.key_constraints = constraintsMatch[1].trim();
}
Task(
subagent_type="context-search-agent",
run_in_background=false,
description="Gather comprehensive context for plan",
prompt=`
## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution with priority sorting
## Session Information
- **Session ID**: ${session_id}
- **Task Description**: ${task_description}
- **Output Path**: .workflow/active/${session_id}/.process/context-package.json
## User Intent (from Phase 1 - Planning Notes)
**GOAL**: ${userIntent.goal}
**KEY_CONSTRAINTS**: ${userIntent.key_constraints}
This is the PRIMARY context source - all subsequent analysis must align with user intent.
## Exploration Input (from Step 2)
- **Manifest**: ${sessionFolder}/explorations-manifest.json
- **Exploration Count**: ${explorationManifest.exploration_count}
@@ -245,7 +263,13 @@ Execute complete context-search-agent workflow for implementation planning:
4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state
### Phase 2: Multi-Source Context Discovery
Execute all discovery tracks (WITH USER INTENT INTEGRATION):
- **Track -1**: User Intent & Priority Foundation (EXECUTE FIRST)
- Load user intent (GOAL, KEY_CONSTRAINTS) from session input
- Map user requirements to codebase entities (files, modules, patterns)
- Establish baseline priority scores based on user goal alignment
- Output: user_intent_mapping.json with preliminary priority scores
- **Track 0**: Exploration Synthesis (load ${sessionFolder}/explorations-manifest.json, prioritize critical_files, deduplicate patterns/integration_points)
- **Track 1**: Historical archive analysis (query manifest.json for lessons learned)
- **Track 2**: Reference documentation (CLAUDE.md, architecture docs)
@@ -254,13 +278,45 @@ Execute all discovery tracks:
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 5-source data** (including Track -1): Merge findings from all sources
- Priority order: User Intent > Archive > Docs > Exploration > Code > Web
- **Prioritize the context from `project-tech.json`** for architecture and tech stack unless code analysis reveals it's outdated
3. **Context Priority Sorting**:
a. Combine scores from Track -1 (user intent alignment) + relevance scores + exploration critical_files
b. Classify files into priority tiers:
- **Critical** (score ≥ 0.85): Directly mentioned in user goal OR exploration critical_files
- **High** (0.70-0.84): Key dependencies, patterns required for goal
- **Medium** (0.50-0.69): Supporting files, indirect dependencies
- **Low** (< 0.50): Contextual awareness only
c. Generate dependency_order: Based on dependency graph + user goal sequence
d. Document sorting_rationale: Explain prioritization logic
4. **Populate `project_context`**: Directly use the `overview` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
5. **Populate `project_guidelines`**: Load conventions, constraints, and learnings from `project-guidelines.json` into a dedicated section.
6. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
7. Perform conflict detection with risk assessment
8. **Inject historical conflicts** from archive analysis into conflict_detection
9. **Generate prioritized_context section**:
```json
{
"prioritized_context": {
"user_intent": {
"goal": "...",
"scope": "...",
"key_constraints": ["..."]
},
"priority_tiers": {
"critical": [{ "path": "...", "relevance": 0.95, "rationale": "..." }],
"high": [...],
"medium": [...],
"low": [...]
},
"dependency_order": ["module1", "module2", "module3"],
"sorting_rationale": "Based on user goal alignment (Track -1), exploration critical files, and dependency graph analysis"
}
}
```
10. Generate and validate context-package.json with prioritized_context field
## Output Requirements
Complete context-package.json with:
@@ -272,6 +328,7 @@ Complete context-package.json with:
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy, historical_conflicts[]}
- **exploration_results**: {manifest_path, exploration_count, angles, explorations[], aggregated_insights} (from Track 0)
- **prioritized_context**: {user_intent, priority_tiers{critical, high, medium, low}, dependency_order[], sorting_rationale}
## Quality Validation
Before completion verify:
@@ -282,6 +339,17 @@ Before completion verify:
- [ ] No sensitive data exposed
- [ ] Total files ≤50 (prioritize high-relevance)
## Planning Notes Record (REQUIRED)
After completing context-package.json, append a brief execution record to planning-notes.md:
**File**: .workflow/active/${session_id}/planning-notes.md
**Location**: Under "## Context Findings (Phase 2)" section
**Format**:
\`\`\`
### [Context-Search Agent] YYYY-MM-DD
- **Note**: [Auto-filled: brief summary of key findings, e.g. exploration angles, critical files, conflict risks]
\`\`\`
Execute autonomously following agent documentation.
Report completion with statistics.
`
@@ -326,116 +394,11 @@ Refer to `context-search-agent.md` Phase 3.7 for complete `context-package.json`
- **brainstorm_artifacts**: Brainstorm documents with full content (if exists)
- **conflict_detection**: Risk assessment with mitigation strategies and historical conflicts
- **exploration_results**: Aggregated exploration insights (from parallel explore phase)
- **prioritized_context**: Pre-sorted context with user intent and priority tiers (critical/high/medium/low)
## Notes
- **Detection-first**: Always check for existing package before invoking agent
- **Dual project file integration**: Agent reads both `.workflow/project-tech.json` (tech analysis) and `.workflow/project-guidelines.json` (user constraints) as primary sources
- **Guidelines injection**: Project guidelines are included in context-package to ensure task generation respects user-defined constraints
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **User intent integration**: Load user intent from planning-notes.md (Phase 1 output)
- **Output**: Generates `context-package.json` with `prioritized_context` field
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call
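To make the Phase 3 tier classification concrete, a minimal sketch using the thresholds above (the combined-score formula and input shape are illustrative assumptions, not the agent's actual algorithm):
```javascript
// Sketch: classify scored files into the four priority tiers.
function buildPriorityTiers(files) {
  // files: [{ path, relevance, mentionedInGoal, isExplorationCritical }]
  const tiers = { critical: [], high: [], medium: [], low: [] };
  for (const f of files) {
    // Illustrative combination: goal mentions and exploration hits lift a
    // file into the critical band, matching the tier definitions above.
    let score = f.relevance;
    if (f.mentionedInGoal) score = Math.max(score, 0.85);
    if (f.isExplorationCritical) score = Math.max(score, 0.85);

    const entry = { path: f.path, relevance: score };
    if (score >= 0.85) tiers.critical.push(entry);
    else if (score >= 0.7) tiers.high.push(entry);
    else if (score >= 0.5) tiers.medium.push(entry);
    else tiers.low.push(entry);
  }
  return tiers;
}
```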

View File

@@ -1,11 +1,16 @@
---
name: task-generate-agent
description: Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation
argument-hint: "--session WFS-session-id"
argument-hint: "[-y|--yes] --session WFS-session-id"
examples:
- /workflow:tools:task-generate-agent --session WFS-auth
- /workflow:tools:task-generate-agent -y --session WFS-auth
---
## Auto Mode
When `--yes` or `-y`: Skip user questions, use defaults (no materials, Agent executor, Codex CLI tool).
# Generate Implementation Plan Command
## Overview
@@ -14,6 +19,10 @@ Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.
## Core Philosophy
- **Planning Only**: Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - does NOT implement code
- **Agent-Driven Document Generation**: Delegate plan generation to action-planning-agent
- **NO Redundant Context Sorting**: Context priority sorting is ALREADY completed in context-gather Phase 2/3
- Use `context-package.json.prioritized_context` directly
- DO NOT re-sort files or re-compute priorities
- `priority_tiers` and `dependency_order` are pre-computed and ready-to-use
- **N+1 Parallel Planning**: Auto-detect multi-module projects, enable parallel planning (2+1 or 3+1 mode)
- **Progressive Loading**: Load context incrementally (Core → Selective → On-Demand) due to analysis.md file size
- **Memory-First**: Reuse loaded documents from conversation memory
@@ -67,9 +76,25 @@ Phase 3: Integration (+1 Coordinator, Multi-Module Only)
**Purpose**: Collect user preferences before task generation to ensure generated tasks match execution expectations.
**User Questions**:
**Auto Mode Check**:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
console.log(`[--yes] Using defaults: No materials, Agent executor, Codex CLI`)
userConfig = {
supplementaryMaterials: { type: "none", content: [] },
executionMethod: "agent",
preferredCliTool: "codex",
enableResume: true
}
// Skip to Phase 1
}
```
**User Questions** (skipped if autoYes):
```javascript
if (!autoYes) AskUserQuestion({
questions: [
{
question: "Do you have supplementary materials or guidelines to include?",
@@ -104,11 +129,10 @@ AskUserQuestion({
}
]
})
```
**Handle Materials Response** (skipped if autoYes):
```javascript
if (!autoYes && userConfig.materials === "Provide file paths") {
// Follow-up question for file paths
const pathsResponse = AskUserQuestion({
questions: [{
@@ -141,12 +165,13 @@ const userConfig = {
### Phase 1: Context Preparation & Module Detection (Command Responsibility)
**Command prepares session paths, metadata, detects module structure. Context priority sorting is NOT performed here - it's already completed in context-gather Phase 2/3.**
**Session Path Structure**:
```
.workflow/active/WFS-{session-id}/
├── workflow-session.json # Session metadata
├── planning-notes.md # Consolidated planning notes
├── .process/
│ └── context-package.json # Context package with artifact catalog
├── .task/ # Output: Task JSON files
@@ -228,9 +253,21 @@ IMPORTANT: This is PLANNING ONLY - you are generating planning documents, NOT im
CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)
## PLANNING NOTES (PHASE 1-3 CONTEXT)
Load: .workflow/active/{session-id}/planning-notes.md
This document contains:
- User Intent: Original GOAL and KEY_CONSTRAINTS from Phase 1
- Context Findings: Critical files, architecture, and constraints from Phase 2
- Conflict Decisions: Resolved conflicts and planning constraints from Phase 3
- Consolidated Constraints: All constraints from all phases
**USAGE**: Read planning-notes.md FIRST. Use Consolidated Constraints list to guide task sequencing and dependencies.
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Planning Notes: .workflow/active/{session-id}/planning-notes.md
- Context Package: .workflow/active/{session-id}/.process/context-package.json
Output:
@@ -258,7 +295,17 @@ CLI Resume Support (MANDATORY for all CLI commands):
- Read previous task's cliExecutionId from session state
- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
## PRIORITIZED CONTEXT (from context-package.prioritized_context) - ALREADY SORTED
Context sorting is ALREADY COMPLETED in context-gather Phase 2/3. DO NOT re-sort.
Direct usage:
- **user_intent**: Use goal/scope/key_constraints for task alignment
- **priority_tiers.critical**: These files are PRIMARY focus for task generation
- **priority_tiers.high**: These files are SECONDARY focus
- **dependency_order**: Use this for task sequencing - already computed
- **sorting_rationale**: Reference for understanding priority decisions
## EXPLORATION CONTEXT (from context-package.exploration_results) - SUPPLEMENT ONLY
If prioritized_context is incomplete, fall back to exploration_results:
- Load exploration_results from context-package.json
- Use aggregated_insights.critical_files for focus_paths generation
- Apply aggregated_insights.constraints to acceptance criteria
@@ -278,8 +325,10 @@ CLI Resume Support (MANDATORY for all CLI commands):
- 6-field schema (id, title, status, context_package_path, meta, context, flow_control)
- Quantified requirements with explicit counts
- Artifacts integration from context package
- **focus_paths generated directly from prioritized_context.priority_tiers (critical + high)**
- NO re-sorting or re-prioritization - use pre-computed tiers as-is
- Critical files are PRIMARY focus, High files are SECONDARY
- Flow control with pre_analysis steps (use prioritized_context.dependency_order for task sequencing)
- **CLI Execution IDs and strategies (MANDATORY)**
2. Implementation Plan (IMPL_PLAN.md)
@@ -327,6 +376,19 @@ Hard Constraints:
- IMPL_PLAN.md created with complete structure
- TODO_LIST.md generated matching task JSONs
- Return completion status with document count and task breakdown summary
## PLANNING NOTES RECORD (REQUIRED)
After completing all documents, append a brief execution record to planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Create new section after "## Consolidated Constraints"
**Format**:
\`\`\`
## Task Generation (Phase 4)
### [Action-Planning Agent] YYYY-MM-DD
- **Note**: [Auto-filled: brief summary of task count, key tasks, dependencies]
\`\`\`
`
)
```
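For reference, a sketch of how `focus_paths` can be derived from `prioritized_context` with the exploration fallback described above (field names follow the context-package schema; the helper itself is hypothetical):
```javascript
// Sketch: derive focus_paths from pre-computed priority tiers; fall back to
// exploration critical_files when prioritized_context is absent or empty.
function deriveFocusPaths(contextPackage) {
  const pc = contextPackage.prioritized_context;
  if (pc?.priority_tiers) {
    const { critical = [], high = [] } = pc.priority_tiers;
    const paths = [...critical, ...high].map((f) => f.path);
    if (paths.length > 0) return paths; // use pre-sorted tiers as-is, no re-sorting
  }
  // Supplement only: exploration_results fallback.
  return (
    contextPackage.exploration_results?.aggregated_insights?.critical_files ?? []
  );
}
```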
@@ -356,16 +418,22 @@ IMPORTANT: Generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md by Phase 3 Co
CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)
## PLANNING NOTES (PHASE 1-3 CONTEXT)
Load: .workflow/active/{session-id}/planning-notes.md
This document contains consolidated constraints and user intent to guide module-scoped task generation.
## MODULE SCOPE
- Module: ${module.name} (${module.type})
- Focus Paths: ${module.paths.join(', ')}
- Task ID Prefix: IMPL-${module.prefix}
- Task Limit: ≤6 tasks (hard limit for this module)
- Other Modules: ${otherModules.join(', ')} (reference only, do NOT generate tasks for them)
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Planning Notes: .workflow/active/{session-id}/planning-notes.md
- Context Package: .workflow/active/{session-id}/.process/context-package.json
Output:
@@ -391,7 +459,16 @@ CLI Resume Support (MANDATORY for all CLI commands):
- Read previous task's cliExecutionId from session state
- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
## PRIORITIZED CONTEXT (from context-package.prioritized_context) - ALREADY SORTED
Context sorting is ALREADY COMPLETED in context-gather Phase 2/3. DO NOT re-sort.
Filter by module scope (${module.paths.join(', ')}):
- **user_intent**: Use for task alignment within module
- **priority_tiers.critical**: Filter for files in ${module.paths.join(', ')} → PRIMARY focus
- **priority_tiers.high**: Filter for files in ${module.paths.join(', ')} → SECONDARY focus
- **dependency_order**: Use module-relevant entries for task sequencing
## EXPLORATION CONTEXT (from context-package.exploration_results) - SUPPLEMENT ONLY
If prioritized_context is incomplete for this module, fall back to exploration_results:
- Load exploration_results from context-package.json
- Filter for ${module.name} module: Use aggregated_insights.critical_files matching ${module.paths.join(', ')}
- Apply module-relevant constraints from aggregated_insights.constraints
@@ -418,8 +495,10 @@ Task JSON Files (.task/IMPL-${module.prefix}*.json):
- Task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
- Quantified requirements with explicit counts
- Artifacts integration from context package (filtered for ${module.name})
- **focus_paths generated directly from prioritized_context.priority_tiers filtered by ${module.paths.join(', ')}**
- NO re-sorting - use pre-computed tiers filtered for this module
- Critical files are PRIMARY focus, High files are SECONDARY
- Flow control with pre_analysis steps (use prioritized_context.dependency_order for module task sequencing)
- **CLI Execution IDs and strategies (MANDATORY)**
- Focus ONLY on ${module.name} module scope
@@ -462,6 +541,21 @@ Hard Constraints:
- Cross-module dependencies use CROSS:: placeholder format consistently
- Focus paths scoped to ${module.paths.join(', ')} only
- Return: task count, task IDs, dependency summary (internal + cross-module)
## PLANNING NOTES RECORD (REQUIRED)
After completing module task JSONs, append a brief execution record to planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Create new section after "## Consolidated Constraints" (if not exists)
**Format**:
\`\`\`
## Task Generation (Phase 4)
### [Action-Planning Agent - ${module.name}] YYYY-MM-DD
- **Note**: [Auto-filled: brief summary of this module's task count and key tasks]
\`\`\`
**Note**: Multiple module agents will append their records. Phase 3 Integration Coordinator will add final summary.
`
)
);
@@ -542,6 +636,17 @@ Module Count: ${modules.length}
- No CROSS:: placeholders remaining in task JSONs
- IMPL_PLAN.md and TODO_LIST.md generated with multi-module structure
- Return: task count, per-module breakdown, resolved dependency count
## PLANNING NOTES RECORD (REQUIRED)
After completing integration, append final summary to planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Under "## Task Generation (Phase 4)" section (after module agent records)
**Format**:
\`\`\`
### [Integration Coordinator] YYYY-MM-DD
- **Note**: [Auto-filled: brief summary of total task count and cross-module dependency resolution]
\`\`\`
`
)
```
@@ -559,5 +664,4 @@ function resolveCrossModuleDependency(placeholder, allTasks) {
? candidates.sort((a, b) => a.id.localeCompare(b.id))[0].id
: placeholder; // Keep for manual resolution
}
```

View File

@@ -1,11 +1,16 @@
---
name: task-generate-tdd
description: Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation
argument-hint: "--session WFS-session-id"
argument-hint: "[-y|--yes] --session WFS-session-id"
examples:
- /workflow:tools:task-generate-tdd --session WFS-auth
- /workflow:tools:task-generate-tdd -y --session WFS-auth
---
## Auto Mode
When `--yes` or `-y`: Skip user questions, use defaults (no materials, Agent executor).
# Autonomous TDD Task Generation Command
## Overview
@@ -78,44 +83,176 @@ Phase 2: Agent Execution (Document Generation)
## Execution Lifecycle
### Phase 1: Discovery & Context Loading
### Phase 0: User Configuration (Interactive)
**Purpose**: Collect user preferences before TDD task generation to ensure generated tasks match execution expectations and provide necessary supplementary context.
**User Questions**:
```javascript
AskUserQuestion({
questions: [
{
question: "Do you have supplementary materials or guidelines to include?",
header: "Materials",
multiSelect: false,
options: [
{ label: "No additional materials", description: "Use existing context only" },
{ label: "Provide file paths", description: "I'll specify paths to include" },
{ label: "Provide inline content", description: "I'll paste content directly" }
]
},
{
question: "Select execution method for generated TDD tasks:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent (Recommended)", description: "Claude agent executes Red-Green-Refactor cycles directly" },
{ label: "Hybrid", description: "Agent orchestrates, calls CLI for complex steps (Red/Green phases)" },
{ label: "CLI Only", description: "All TDD cycles via CLI tools (codex/gemini/qwen)" }
]
},
{
question: "If using CLI, which tool do you prefer?",
header: "CLI Tool",
multiSelect: false,
options: [
{ label: "Codex (Recommended)", description: "Best for TDD Red-Green-Refactor cycles" },
{ label: "Gemini", description: "Best for analysis and large context" },
{ label: "Qwen", description: "Alternative analysis tool" },
{ label: "Auto", description: "Let agent decide per-task" }
]
}
]
})
```
**Handle Materials Response**:
```javascript
if (userConfig.materials === "Provide file paths") {
// Follow-up question for file paths
const pathsResponse = AskUserQuestion({
questions: [{
question: "Enter file paths to include (comma-separated or one per line):",
header: "Paths",
multiSelect: false,
options: [
{ label: "Enter paths", description: "Provide paths in text input" }
]
}]
})
userConfig.supplementaryPaths = parseUserPaths(pathsResponse)
}
```
**Build userConfig**:
```javascript
const userConfig = {
supplementaryMaterials: {
type: "none|paths|inline",
content: [...], // Parsed paths or inline content
},
executionMethod: "agent|hybrid|cli",
preferredCliTool: "codex|gemini|qwen|auto",
enableResume: true // Always enable resume for CLI executions
}
```
**Pass to Agent**: Include `userConfig` in agent prompt for Phase 2.
---
### Phase 1: Context Preparation & Discovery
**Command Responsibility**: Command prepares session paths and metadata, provides to agent for autonomous context loading.
**⚡ Memory-First Rule**: Skip file loading if documents already in conversation memory
**📊 Progressive Loading Strategy**: Load context incrementally due to large analysis.md file sizes:
- **Core**: session metadata + context-package.json (always load)
- **Selective**: synthesis_output OR (guidance + relevant role analyses) - NOT all role analyses
- **On-Demand**: conflict resolution (if conflict_risk >= medium), test context
**🛤️ Path Clarity Requirement**: All `focus_paths` prefer absolute paths (e.g., `D:\\project\\src\\module`), or clear relative paths from project root (e.g., `./src/module`)
**Session Path Structure** (Provided by Command to Agent):
```
.workflow/active/WFS-{session-id}/
├── workflow-session.json # Session metadata
├── .process/
│ ├── context-package.json # Context package with artifact catalog
│ ├── test-context-package.json # Test coverage analysis
│ └── conflict-resolution.json # Conflict resolution (if exists)
├── .task/ # Output: Task JSON files
│ ├── IMPL-1.json
│ ├── IMPL-2.json
│ └── ...
├── IMPL_PLAN.md # Output: TDD implementation plan
└── TODO_LIST.md # Output: TODO list with TDD phases
```
**Command Preparation**:
1. **Assemble Session Paths** for agent prompt:
- `session_metadata_path`: `.workflow/active/{session-id}/workflow-session.json`
- `context_package_path`: `.workflow/active/{session-id}/.process/context-package.json`
- `test_context_package_path`: `.workflow/active/{session-id}/.process/test-context-package.json`
- Output directory paths
2. **Provide Metadata** (simple values):
- `session_id`: WFS-{session-id}
- `workflow_type`: "tdd"
- `mcp_capabilities`: {exa_code, exa_web, code_index}
3. **Pass userConfig** from Phase 0
**Agent Context Package** (Agent loads autonomously):
```javascript
{
"session_id": "WFS-[session-id]",
"workflow_type": "tdd",
// Note: CLI tool usage is determined semantically by action-planning-agent based on user's task description
// Core (ALWAYS load)
"session_metadata": {
// If in memory: use cached content
// Else: Load from workflow-session.json
},
"context_package": {
// If in memory: use cached content
// Else: Load from context-package.json
},
// Selective (load based on progressive strategy)
"brainstorm_artifacts": {
// Loaded from context-package.json → brainstorm_artifacts section
"role_analyses": [
"synthesis_output": {"path": "...", "exists": true}, // Load if exists (highest priority)
"guidance_specification": {"path": "...", "exists": true}, // Load if no synthesis
"role_analyses": [ // Load SELECTIVELY based on task relevance
{
"role": "system-architect",
"files": [{"path": "...", "type": "primary|supplementary"}]
}
],
"guidance_specification": {"path": "...", "exists": true},
"synthesis_output": {"path": "...", "exists": true},
"conflict_resolution": {"path": "...", "exists": true} // if conflict_risk >= medium
]
},
"context_package_path": ".workflow/active//{session-id}/.process/context-package.json",
"context_package": {
// If in memory: use cached content
// Else: Load from .workflow/active//{session-id}/.process/context-package.json
},
"test_context_package_path": ".workflow/active//{session-id}/.process/test-context-package.json",
// On-Demand (load if exists)
"test_context_package": {
// Existing test patterns and coverage analysis
// Load from test-context-package.json
// Contains existing test patterns and coverage analysis
},
"conflict_resolution": {
// Load from conflict-resolution.json if conflict_risk >= medium
// Check context-package.conflict_detection.resolution_file
},
// Capabilities
"mcp_capabilities": {
"codex_lens": true,
"exa_code": true,
"exa_web": true
"exa_web": true,
"code_index": true
},
// User configuration from Phase 0
"user_config": {
// From Phase 0 AskUserQuestion
}
}
```
@@ -124,21 +261,21 @@ Phase 2: Agent Execution (Document Generation)
1. **Load Session Context** (if not in memory)
```javascript
if (!memory.has("workflow-session.json")) {
  Read(.workflow/active/{session-id}/workflow-session.json)
}
```
2. **Load Context Package** (if not in memory)
```javascript
if (!memory.has("context-package.json")) {
  Read(.workflow/active/{session-id}/.process/context-package.json)
}
```
3. **Load Test Context Package** (if not in memory)
```javascript
if (!memory.has("test-context-package.json")) {
  Read(.workflow/active/{session-id}/.process/test-context-package.json)
}
```
@@ -180,62 +317,81 @@ Phase 2: Agent Execution (Document Generation)
)
```
### Phase 2: Agent Execution (TDD Document Generation)
**Pre-Agent Template Selection** (Command decides path before invoking agent):
```javascript
// Command checks flag and selects template PATH (not content)
const templatePath = hasCliExecuteFlag
  ? "~/.claude/workflows/cli-templates/prompts/workflow/task-json-cli-mode.txt"
  : "~/.claude/workflows/cli-templates/prompts/workflow/task-json-agent-mode.txt";
```
**Purpose**: Generate TDD planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - planning only, NOT code implementation.
**Agent Invocation**:
```javascript
Task(
  subagent_type="action-planning-agent",
  run_in_background=false,
  description="Generate TDD planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
prompt=`
## TASK OBJECTIVE
Generate TDD implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for workflow session

IMPORTANT: This is PLANNING ONLY - you are generating planning documents, NOT implementing code.

CRITICAL: Follow the progressive loading strategy (load analysis.md files incrementally due to file size):
- **Core**: session metadata + context-package.json (always)
- **Selective**: synthesis_output OR (guidance + relevant role analyses) - NOT all
- **On-Demand**: conflict resolution (if conflict_risk >= medium), test context
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Context Package: .workflow/active/{session-id}/.process/context-package.json
- Test Context: .workflow/active/{session-id}/.process/test-context-package.json
Output:
- Task Dir: .workflow/active/{session-id}/.task/
- IMPL_PLAN: .workflow/active/{session-id}/IMPL_PLAN.md
- TODO_LIST: .workflow/active/{session-id}/TODO_LIST.md
## CONTEXT METADATA
Session ID: {session-id}
Workflow Type: TDD
MCP Capabilities: {exa_code, exa_web, code_index}
## USER CONFIGURATION (from Phase 0)
Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
Supplementary Materials: ${userConfig.supplementaryMaterials}
## CLI TOOL SELECTION
Based on userConfig.executionMethod:
- "agent": No command field in implementation_approach steps
- "hybrid": Add command field to complex steps only (Red/Green phases recommended for CLI)
- "cli": Add command field to ALL Red-Green-Refactor steps
CLI Resume Support (MANDATORY for all CLI commands):
- Use --resume parameter to continue from previous task execution
- Read previous task's cliExecutionId from session state
- Format: ccw cli -p "[prompt]" --resume [previousCliId] --tool [tool] --mode write
## EXPLORATION CONTEXT (from context-package.exploration_results)
- Load exploration_results from context-package.json
- Use aggregated_insights.critical_files for focus_paths generation
- Apply aggregated_insights.constraints to acceptance criteria
- Reference aggregated_insights.all_patterns for implementation approach
- Use aggregated_insights.all_integration_points for precise modification locations
- Use conflict_indicators for risk-aware task sequencing
## CONFLICT RESOLUTION CONTEXT (if exists)
- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
- If exists, load .process/conflict-resolution.json:
- Apply planning_constraints as task constraints (for brainstorm-less workflows)
- Reference resolved_conflicts for implementation approach alignment
- Handle custom_conflicts with explicit task notes
## TEST CONTEXT INTEGRATION
- Load test-context-package.json for existing test patterns and coverage analysis
- Extract test framework configuration (Jest/Pytest/etc.)
- Identify existing test conventions and patterns
- Map coverage gaps to TDD Red phase test targets
## TDD DOCUMENT GENERATION TASK
**Agent Configuration Reference**: All TDD task generation rules, quantification requirements, Red-Green-Refactor cycle structure, quality standards, and execution details are defined in action-planning-agent.
@@ -256,31 +412,61 @@ If conflict_risk was medium/high, modifications have been applied to:
#### Required Outputs Summary
##### 1. TDD Task JSON Files (.task/IMPL-*.json)
- **Location**: `.workflow/active/{session-id}/.task/`
- **Schema**: 6-field structure with TDD-specific metadata
- `id, title, status, context_package_path, meta, context, flow_control`
- `meta.tdd_workflow`: true (REQUIRED)
- `meta.max_iterations`: 3 (Green phase test-fix cycle limit)
- `meta.cli_execution_id`: Unique CLI execution ID (format: `{session_id}-{task_id}`)
- `meta.cli_execution`: Strategy object (new|resume|fork|merge_fork)
- `context.tdd_cycles`: Array with quantified test cases and coverage
- `context.focus_paths`: Absolute or clear relative paths (enhanced with exploration critical_files)
- `flow_control.implementation_approach`: Exactly 3 steps with `tdd_phase` field
1. Red Phase (`tdd_phase: "red"`): Write failing tests
2. Green Phase (`tdd_phase: "green"`): Implement to pass tests
3. Refactor Phase (`tdd_phase: "refactor"`): Improve code quality
- `flow_control.pre_analysis`: Include exploration integration_points analysis
- CLI tool usage based on userConfig (add `command` field per executionMethod)
- **Details**: See action-planning-agent.md § TDD Task JSON Generation
##### 2. IMPL_PLAN.md (TDD Variant)
- **Location**: `.workflow/active/{session-id}/IMPL_PLAN.md`
- **Template**: `~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt`
- **TDD-Specific Frontmatter**: workflow_type="tdd", tdd_workflow=true, feature_count, task_breakdown
- **TDD Implementation Tasks Section**: Feature-by-feature with internal Red-Green-Refactor cycles
- **Context Analysis**: Artifact references and exploration insights
- **Details**: See action-planning-agent.md § TDD Implementation Plan Creation
##### 3. TODO_LIST.md
- **Location**: `.workflow/active/{session-id}/TODO_LIST.md`
- **Format**: Hierarchical task list with internal TDD phase indicators (Red → Green → Refactor)
- **Status**: ▸ (container), [ ] (pending), [x] (completed)
- **Links**: Task JSON references and summaries
- **Details**: See action-planning-agent.md § TODO List Generation
### CLI EXECUTION ID REQUIREMENTS (MANDATORY)
Each task JSON MUST include:
- **meta.cli_execution_id**: Unique ID for CLI execution (format: `{session_id}-{task_id}`)
- **meta.cli_execution**: Strategy object based on depends_on:
- No deps → `{ "strategy": "new" }`
- 1 dep (single child) → `{ "strategy": "resume", "resume_from": "parent-cli-id" }`
- 1 dep (multiple children) → `{ "strategy": "fork", "resume_from": "parent-cli-id" }`
- N deps → `{ "strategy": "merge_fork", "resume_from": ["id1", "id2", ...] }`
- **Type**: `resume_from: string | string[]` (string for resume/fork, array for merge_fork)
**CLI Execution Strategy Rules**:
1. **new**: Task has no dependencies - starts fresh CLI conversation
2. **resume**: Task has 1 parent AND that parent has only this child - continues same conversation
3. **fork**: Task has 1 parent BUT parent has multiple children - creates new branch with parent context
4. **merge_fork**: Task has multiple parents - merges all parent contexts into new conversation
**Execution Command Patterns**:
- new: `ccw cli -p "[prompt]" --tool [tool] --mode write --id [cli_execution_id]`
- resume: `ccw cli -p "[prompt]" --resume [resume_from] --tool [tool] --mode write`
- fork: `ccw cli -p "[prompt]" --resume [resume_from] --id [cli_execution_id] --tool [tool] --mode write`
- merge_fork: `ccw cli -p "[prompt]" --resume [resume_from.join(',')] --id [cli_execution_id] --tool [tool] --mode write` (resume_from is array)
### Quantification Requirements (MANDATORY)
**Core Rules**:
@@ -302,6 +488,7 @@ If conflict_risk was medium/high, modifications have been applied to:
- [ ] Every acceptance criterion includes measurable coverage percentage
- [ ] tdd_cycles array contains test_count and test_cases for each cycle
- [ ] No vague language ("comprehensive", "complete", "thorough")
- [ ] cli_execution_id and cli_execution strategy assigned to each task
### Agent Execution Summary
@@ -317,20 +504,34 @@ If conflict_risk was medium/high, modifications have been applied to:
- ✓ Quantification requirements enforced (explicit counts, measurable acceptance, exact targets)
- ✓ Task count ≤18 (hard limit)
- ✓ Each task has meta.tdd_workflow: true
- ✓ Each task has exactly 3 implementation steps with tdd_phase field ("red", "green", "refactor")
- ✓ Each task has meta.cli_execution_id and meta.cli_execution strategy
- ✓ Green phase includes test-fix cycle logic with max_iterations
- ✓ focus_paths are absolute or clear relative paths (from exploration critical_files)
- ✓ Artifact references mapped correctly from context package
- ✓ Exploration context integrated (critical_files, constraints, patterns, integration_points)
- ✓ Conflict resolution context applied (if conflict_risk >= medium)
- ✓ Test context integrated (existing test patterns and coverage analysis)
- ✓ Documents follow TDD template structure
- ✓ CLI tool selection based on userConfig.executionMethod
## SUCCESS CRITERIA
- All planning documents generated successfully:
  - Task JSONs valid and saved to .task/ directory with cli_execution_id
  - IMPL_PLAN.md created with complete TDD structure
  - TODO_LIST.md generated matching task JSONs
- CLI execution strategies assigned based on task dependencies
- Return completion status with document count and task breakdown summary

## OUTPUT SUMMARY
Generate all three documents and report:
- TDD task JSON files created: N files (IMPL-*.json) with cli_execution_id assigned
- TDD cycles configured: N cycles with quantified test cases
- CLI execution strategies: new/resume/fork/merge_fork assigned per dependency graph
- Artifacts integrated: synthesis-spec/guidance-specification, relevant role analyses
- Exploration context: critical_files, constraints, patterns, integration_points
- Test context integrated: existing patterns and coverage
- MCP enhancements: CodexLens, exa-research
- Conflict resolution: applied (if conflict_risk >= medium)
- Session ready for TDD execution: /workflow:execute
`
)
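```

The dependency-to-strategy mapping above is mechanical, so a minimal sketch of how a generator might derive `meta.cli_execution` from `depends_on` (the helper shape is illustrative, not a documented API):

```javascript
// Sketch: assign cli_execution_id and a cli_execution strategy per task.
// Assumes tasks = [{ id, depends_on: [...], meta: {...} }].
function assignCliExecution(tasks, sessionId) {
  const childCount = {}
  for (const t of tasks)
    for (const dep of t.depends_on) childCount[dep] = (childCount[dep] || 0) + 1

  for (const t of tasks) {
    const deps = t.depends_on
    let exec
    if (deps.length === 0) exec = { strategy: "new" }                        // fresh conversation
    else if (deps.length === 1 && childCount[deps[0]] === 1)
      exec = { strategy: "resume", resume_from: `${sessionId}-${deps[0]}` }  // continue parent
    else if (deps.length === 1)
      exec = { strategy: "fork", resume_from: `${sessionId}-${deps[0]}` }    // branch off parent
    else
      exec = { strategy: "merge_fork", resume_from: deps.map(d => `${sessionId}-${d}`) }
    t.meta = { ...t.meta, cli_execution_id: `${sessionId}-${t.id}`, cli_execution: exec }
  }
  return tasks
}
```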
@@ -338,50 +539,64 @@ Generate all three documents and report completion status:
### Agent Context Passing
**Context Delegation Model**: Command provides paths and metadata, agent loads context autonomously using progressive loading strategy.
**Command Provides** (in agent prompt):
```javascript
// Command assembles these simple values and paths for agent
const commandProvides = {
  // Session paths
  session_metadata_path: ".workflow/active/WFS-{id}/workflow-session.json",
  context_package_path: ".workflow/active/WFS-{id}/.process/context-package.json",
  test_context_package_path: ".workflow/active/WFS-{id}/.process/test-context-package.json",
  output_task_dir: ".workflow/active/WFS-{id}/.task/",
  output_impl_plan: ".workflow/active/WFS-{id}/IMPL_PLAN.md",
  output_todo_list: ".workflow/active/WFS-{id}/TODO_LIST.md",

  // Simple metadata
  session_id: "WFS-{id}",
  workflow_type: "tdd",
  mcp_capabilities: { exa_code: true, exa_web: true, code_index: true },

  // User configuration from Phase 0
  user_config: {
    supplementaryMaterials: { type: "...", content: [...] },
    executionMethod: "agent|hybrid|cli",
    preferredCliTool: "codex|gemini|qwen|auto",
    enableResume: true
  }
}
```
**Agent Loads Autonomously** (progressive loading):
```javascript
// Agent executes progressive loading based on memory state
const agentLoads = {
  // Core (ALWAYS load if not in memory)
  session_metadata: loadIfNotInMemory(session_metadata_path),
  context_package: loadIfNotInMemory(context_package_path),

  // Selective (based on progressive strategy)
  // Priority: synthesis_output > guidance + relevant_role_analyses
  brainstorm_content: loadSelectiveBrainstormArtifacts(context_package),

  // On-Demand (load if exists and relevant)
  test_context: loadIfExists(test_context_package_path),
  conflict_resolution: loadConflictResolution(context_package),

  // Optional (if MCP available)
  exploration_results: extractExplorationResults(context_package),
  external_research: executeMcpResearch() // If needed
}
```
**Progressive Loading Implementation** (agent responsibility; a sketch of the loader helpers follows this list):
1. **Check memory first** - skip if already loaded
2. **Load core files** - session metadata + context-package.json
3. **Smart selective loading** - synthesis_output OR (guidance + task-relevant role analyses)
4. **On-demand loading** - test context, conflict resolution (if conflict_risk >= medium)
5. **Extract references** - exploration results, artifact paths from context package
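The loader helpers named above are not defined in this document; a minimal sketch of the memory-first pattern, assuming a `memory` map keyed by file basename and the `Read` tool (both assumptions):

```javascript
// Sketch of the memory-first loaders used in the pseudocode above.
function loadIfNotInMemory(path) {
  const key = path.split("/").pop()           // e.g. "context-package.json"
  if (memory.has(key)) return memory.get(key) // skip disk if already loaded
  const content = Read(path)
  memory.set(key, content)
  return content
}

function loadIfExists(path) {
  try { return loadIfNotInMemory(path) } catch { return null } // tolerate a missing file
}
```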
## TDD Task Structure Reference
This section provides quick reference for TDD task JSON structure. For complete implementation details, see the agent invocation prompt in Phase 2 above.
@@ -389,14 +604,31 @@ This section provides quick reference for TDD task JSON structure. For complete
**Quick Reference**:
- Each TDD task contains complete Red-Green-Refactor cycle
- Task ID format: `IMPL-N` (simple) or `IMPL-N.M` (complex subtasks)
- Required metadata:
- `meta.tdd_workflow: true`
- `meta.max_iterations: 3`
- `meta.cli_execution_id: "{session_id}-{task_id}"`
- `meta.cli_execution: { "strategy": "new|resume|fork|merge_fork", ... }`
- Context: `tdd_cycles` array with quantified test cases and coverage:
```javascript
tdd_cycles: [
  {
    test_count: 5,                  // Number of test cases to write
    test_cases: ["case1", "case2"], // Enumerated test scenarios
    implementation_scope: "...",    // Files and functions to implement
    expected_coverage: ">=85%"      // Coverage target
  }
]
```
- Context: `focus_paths` use absolute or clear relative paths
- Flow control: Exactly 3 steps with `tdd_phase` field ("red", "green", "refactor")
- Flow control: `pre_analysis` includes exploration integration_points analysis
- Command field: Added per `userConfig.executionMethod` (agent/hybrid/cli)
- See Phase 2 agent prompt for full schema and requirements; a minimal example follows
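Putting the quick reference together, an abbreviated, illustrative task JSON might look like the following; all values are placeholders, not output from a real session:

```javascript
// Hypothetical IMPL-1.json sketch (abbreviated; see the Phase 2 agent prompt for the full schema)
{
  "id": "IMPL-1",
  "title": "User login validation",
  "status": "pending",
  "context_package_path": ".workflow/active/WFS-auth/.process/context-package.json",
  "meta": {
    "tdd_workflow": true,
    "max_iterations": 3,
    "cli_execution_id": "WFS-auth-IMPL-1",
    "cli_execution": { "strategy": "new" }
  },
  "context": {
    "focus_paths": ["./src/auth/login.ts"],
    "tdd_cycles": [{ "test_count": 5, "test_cases": ["..."], "expected_coverage": ">=85%" }]
  },
  "flow_control": {
    "implementation_approach": [
      { "step": 1, "tdd_phase": "red", "description": "Write 5 failing tests" },
      { "step": 2, "tdd_phase": "green", "description": "Implement until tests pass" },
      { "step": 3, "tdd_phase": "refactor", "description": "Improve code quality, tests stay green" }
    ]
  }
}
```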
## Output Files Structure
```
.workflow/active/{session-id}/
├── IMPL_PLAN.md # Unified plan with TDD Implementation Tasks section
├── TODO_LIST.md # Progress tracking with internal TDD phase indicators
├── .task/
@@ -432,9 +664,9 @@ This section provides quick reference for TDD task JSON structure. For complete
- No circular dependencies allowed
### Task Limits
- Maximum 18 total tasks (simple + subtasks) - hard limit for TDD workflows
- Flat hierarchy (≤5 tasks) or two-level (6-18 tasks with containers)
- Re-scope requirements if >18 tasks needed
### TDD Workflow Validation
- `meta.tdd_workflow` must be true
@@ -454,7 +686,7 @@ This section provides quick reference for TDD task JSON structure. For complete
### TDD Generation Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Task count exceeds 18 | Too many features or subtasks | Re-scope requirements or merge features into multiple TDD sessions |
| Missing test framework | No test config | Configure testing first |
| Invalid TDD workflow | Missing tdd_phase or incomplete flow_control | Fix TDD structure in ANALYSIS_RESULTS.md |
| Missing tdd_workflow flag | Task doesn't have meta.tdd_workflow: true | Add TDD workflow metadata |
@@ -512,6 +744,6 @@ IMPL (Green phase) tasks include automatic test-fix cycle:
## Configuration Options
- **meta.max_iterations**: Number of fix attempts in Green phase (default: 3)
- **CLI tool usage**: Determined semantically from user's task description via `command` field in implementation_approach

View File

@@ -1,10 +1,14 @@
---
name: animation-extract
description: Extract animation and transition patterns from prompt inference and image references for design system documentation
argument-hint: "[--design-id <id>] [--session <id>] [--images "<glob>"] [--focus "<types>"] [--interactive] [--refine]"
argument-hint: "[-y|--yes] [--design-id <id>] [--session <id>] [--images "<glob>"] [--focus "<types>"] [--interactive] [--refine]"
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*), Bash(*), AskUserQuestion(*), Task(ui-design-agent)
---
## Auto Mode
When `--yes` or `-y`: Skip all clarification questions, use AI-inferred animation decisions.
# Animation Extraction Command
## Overview

View File

@@ -1,10 +1,14 @@
---
name: layout-extract
description: Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode
argument-hint: "[-y|--yes] [--design-id <id>] [--session <id>] [--images "<glob>"] [--prompt "<desc>"] [--targets "<list>"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]"
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*), Bash(*), AskUserQuestion(*), Task(ui-design-agent), mcp__exa__web_search_exa(*)
---
## Auto Mode
When `--yes` or `-y`: Skip all clarification questions, use AI-inferred layout decisions.
# Layout Extraction Command
## Overview

View File

@@ -1,10 +1,14 @@
---
name: style-extract
description: Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode
argument-hint: "[--design-id <id>] [--session <id>] [--images "<glob>"] [--prompt "<desc>"] [--variants <count>] [--interactive] [--refine]"
argument-hint: "[-y|--yes] [--design-id <id>] [--session <id>] [--images "<glob>"] [--prompt "<desc>"] [--variants <count>] [--interactive] [--refine]"
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Skip all clarification questions, use AI-inferred design decisions.
# Style Extraction Command
## Overview

View File

@@ -15,7 +15,7 @@
"/workflow:review-session-cycle",
"/memory:docs",
"/workflow:brainstorm:artifacts",
"/workflow:action-plan-verify",
"/workflow:plan-verify",
"/version"
],
@@ -69,7 +69,7 @@
"difficulty": "Intermediate",
"essential": true,
"flow": {
"next_steps": ["/workflow:action-plan-verify", "/workflow:execute"],
"next_steps": ["/workflow:plan-verify", "/workflow:execute"],
"alternatives": ["/workflow:tdd-plan"]
},
"source": "../../../commands/workflow/plan.md"
@@ -89,8 +89,8 @@
"source": "../../../commands/workflow/execute.md"
},
{
"name": "action-plan-verify",
"command": "/workflow:action-plan-verify",
"name": "plan-verify",
"command": "/workflow:plan-verify",
"description": "Cross-artifact consistency analysis",
"arguments": "[--session session-id]",
"category": "workflow",
@@ -100,7 +100,7 @@
"prerequisites": ["/workflow:plan"],
"next_steps": ["/workflow:execute"]
},
"source": "../../../commands/workflow/action-plan-verify.md"
"source": "../../../commands/workflow/plan-verify.md"
},
{
"name": "init",
@@ -257,7 +257,7 @@
"essential": true,
"flow": {
"prerequisites": ["/workflow:execute"],
"next_steps": ["/workflow:review-fix"]
"next_steps": ["/workflow:review-cycle-fix"]
},
"source": "../../../commands/workflow/review-session-cycle.md"
},
@@ -271,8 +271,8 @@
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review-fix",
"command": "/workflow:review-fix",
"name": "review-cycle-fix",
"command": "/workflow:review-cycle-fix",
"description": "Automated fixing of review findings",
"arguments": "<export-file|review-dir>",
"category": "workflow",
@@ -280,7 +280,7 @@
"flow": {
"prerequisites": ["/workflow:review-session-cycle", "/workflow:review-module-cycle"]
},
"source": "../../../commands/workflow/review-fix.md"
"source": "../../../commands/workflow/review-cycle-fix.md"
},
{
"name": "test-gen",

View File

@@ -144,7 +144,7 @@ def build_command_relationships() -> Dict[str, Any]:
return {
"workflow:plan": {
"calls_internally": ["workflow:session:start", "workflow:tools:context-gather", "workflow:tools:conflict-resolution", "workflow:tools:task-generate-agent"],
"next_steps": ["workflow:action-plan-verify", "workflow:status", "workflow:execute"],
"next_steps": ["workflow:plan-verify", "workflow:status", "workflow:execute"],
"alternatives": ["workflow:tdd-plan"],
"prerequisites": []
},
@@ -159,7 +159,7 @@ def build_command_relationships() -> Dict[str, Any]:
"related": ["workflow:status", "workflow:resume"],
"next_steps": ["workflow:review", "workflow:tdd-verify"]
},
"workflow:action-plan-verify": {
"workflow:plan-verify": {
"prerequisites": ["workflow:plan"],
"next_steps": ["workflow:execute"],
"related": ["workflow:status"]
@@ -217,7 +217,7 @@ def identify_essential_commands(all_commands: List[Dict]) -> List[Dict]:
"workflow:execute", "workflow:status", "workflow:session:start",
"workflow:review-session-cycle", "cli:analyze", "cli:chat",
"memory:docs", "workflow:brainstorm:artifacts",
"workflow:action-plan-verify", "workflow:resume", "version"
"workflow:plan-verify", "workflow:resume", "version"
]
essential = []

View File

@@ -1,303 +0,0 @@
# CCW Loop Skill
A stateless, iterative development loop workflow with three phases - Develop, Debug, and Validate - each recording its progress in dedicated files.
## Overview
CCW Loop is an autonomous-mode skill that helps developers complete development tasks systematically through a file-driven, stateless loop.
### Core Features
1. **Stateless loop**: Each run reads state from files; nothing depends on in-memory state
2. **File-driven**: All progress is recorded in Markdown files - auditable and reviewable
3. **Gemini-assisted**: Key decision points use CLI tools for deep analysis
4. **Resumable**: Can continue after an interruption at any point
5. **Dual mode**: Supports both interactive and automatic looping
### Three Phases
- **Develop**: Task breakdown → code implementation → progress recording
- **Debug**: Hypothesis generation → evidence collection → root-cause analysis → fix verification
- **Validate**: Test execution → coverage check → quality assessment
## Installation
Included in `.claude/skills/ccw-loop/`; no additional installation required.
## Usage
### Basic Usage
```bash
# Start a new loop
/ccw-loop "Implement user authentication"
# Resume an existing loop
/ccw-loop --resume LOOP-auth-2026-01-22
# Automatic loop mode
/ccw-loop --auto "Fix the login bug and add tests"
```
### Interactive Flow
```
1. Start: /ccw-loop "task description"
2. Initialize: automatically analyze the task and generate a subtask list
3. Show menu:
   - 📝 Continue development (Develop)
   - 🔍 Start debugging (Debug)
   - ✅ Run validation (Validate)
   - 📊 View details (Status)
   - 🏁 Complete the loop (Complete)
   - 🚪 Exit (Exit)
4. Execute the selected action
5. Repeat steps 3-4 until done
```
### Automatic Loop Flow
```
Develop (all tasks) → Debug (if needed) → Validate → Complete
```
## Directory Structure
```
.workflow/.loop/{session-id}/
├── meta.json             # Session metadata (immutable)
├── state.json            # Current state (updated on every run)
├── summary.md            # Completion report (generated at the end)
├── develop/
│   ├── progress.md       # Development progress timeline
│   ├── tasks.json        # Task list
│   └── changes.log       # Code change log (NDJSON)
├── debug/
│   ├── understanding.md  # Evolving understanding document
│   ├── hypotheses.json   # Hypothesis history
│   └── debug.log         # Debug log (NDJSON)
└── validate/
    ├── validation.md     # Validation report
    ├── test-results.json # Test results
    └── coverage.json     # Coverage data
```
## Action Reference
| Action | Description | Trigger |
|--------|-------------|---------|
| action-init | Initialize the session | First run |
| action-menu | Show the action menu | Each iteration in interactive mode |
| action-develop-with-file | Execute development tasks | Pending tasks exist |
| action-debug-with-file | Hypothesis-driven debugging | Debugging needed |
| action-validate-with-file | Run test validation | Validation needed |
| action-complete | Finish and generate the report | All tasks completed |
Details in [specs/action-catalog.md](specs/action-catalog.md)
## CLI Integration
CCW Loop integrates CLI tools at key decision points:
### Task Breakdown (action-init)
```bash
ccw cli -p "PURPOSE: Break down the development task..." \
  --tool gemini \
  --mode analysis \
  --rule planning-breakdown-task-steps
```
### Code Implementation (action-develop)
```bash
ccw cli -p "PURPOSE: Implement the feature code..." \
  --tool gemini \
  --mode write \
  --rule development-implement-feature
```
### Hypothesis Generation (action-debug - exploration)
```bash
ccw cli -p "PURPOSE: Generate debugging hypotheses..." \
  --tool gemini \
  --mode analysis \
  --rule analysis-diagnose-bug-root-cause
```
### Evidence Analysis (action-debug - analysis)
```bash
ccw cli -p "PURPOSE: Analyze debug log evidence..." \
  --tool gemini \
  --mode analysis \
  --rule analysis-diagnose-bug-root-cause
```
### Quality Assessment (action-validate)
```bash
ccw cli -p "PURPOSE: Analyze test results and coverage..." \
  --tool gemini \
  --mode analysis \
  --rule analysis-review-code-quality
```
## State Management
### State Schema
See [phases/state-schema.md](phases/state-schema.md)
### State Transitions
```
pending → running → completed
                  ↘ user_exit
                  ↘ failed
```
### State Recovery
If `state.json` is corrupted, it can be rebuilt from the other files (a sketch follows):
- develop/tasks.json → develop.*
- debug/hypotheses.json → debug.*
- validate/test-results.json → validate.*
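A minimal sketch of that rebuild, assuming Node.js and the directory layout above (the field mapping is illustrative):

```javascript
// Hypothetical state.json rebuild from the per-phase files.
const fs = require("fs")
const path = require("path")

function rebuildState(sessionDir) {
  const readJson = (rel) => {
    const p = path.join(sessionDir, rel)
    return fs.existsSync(p) ? JSON.parse(fs.readFileSync(p, "utf8")) : null
  }
  const state = {
    develop: readJson("develop/tasks.json"),         // → develop.*
    debug: readJson("debug/hypotheses.json"),        // → debug.*
    validate: readJson("validate/test-results.json") // → validate.*
  }
  fs.writeFileSync(path.join(sessionDir, "state.json"), JSON.stringify(state, null, 2))
  return state
}
```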
## Examples
### Example 1: Feature Development
```bash
# 1. Start the loop
/ccw-loop "Add user profile page"
# 2. The system initializes and generates tasks:
#    - task-001: Create profile component
#    - task-002: Add API endpoints
#    - task-003: Implement tests
# 3. Choose "Continue development"
#    → Execute task-001 (Gemini-assisted implementation)
#    → Update progress.md
# 4. Repeat development until all tasks are complete
# 5. Choose "Run validation"
#    → Run the tests
#    → Check coverage
#    → Generate validation.md
# 6. Choose "Complete the loop"
#    → Generate summary.md
#    → Ask whether to expand into an Issue
```
### Example 2: Bug Fix
```bash
# 1. Start the loop
/ccw-loop "Fix login timeout issue"
# 2. Choose "Start debugging"
#    → Enter the bug description: "Login times out after 30s"
#    → Gemini generates hypotheses (H1, H2, H3)
#    → Add NDJSON logging
#    → Prompt to reproduce the bug
# 3. Reproduce the bug (operate the application)
# 4. Choose "Start debugging" again
#    → Parse debug.log
#    → Gemini analyzes the evidence
#    → H2 confirmed as the root cause
#    → Generate the fix code
#    → Update understanding.md
# 5. Choose "Run validation"
#    → Tests pass
# 6. Done
```
## Templates
- [progress-template.md](templates/progress-template.md): Development progress document template
- [understanding-template.md](templates/understanding-template.md): Debugging understanding document template
- [validation-template.md](templates/validation-template.md): Validation report template
## Specifications
- [loop-requirements.md](specs/loop-requirements.md): Loop requirements specification
- [action-catalog.md](specs/action-catalog.md): Action catalog
## Integration
### Dashboard Integration
CCW Loop integrates with the Dashboard Loop Monitor:
- Dashboard creates a Loop → triggers this skill
- state.json → displayed in the Dashboard in real time
- Task lists sync in both directions
- Control buttons map to actions
### Issue System Integration
After completion, a loop can be expanded into an Issue:
- Dimensions: test, enhance, refactor, doc
- Automatically invokes `/issue:new`
- Context is filled in automatically
## Error Handling
| Situation | Handling |
|-----------|----------|
| Session does not exist | Create a new session |
| state.json corrupted | Rebuild from files |
| CLI tool failure | Fall back to manual mode |
| Test failure | Loop back to develop/debug |
| >10 iterations | Warn the user and suggest splitting |
## Limitations
1. **Single session**: Only one active session at a time
2. **Iteration cap**: No more than 10 iterations recommended
3. **CLI dependency**: Some features depend on Gemini CLI availability
4. **Test framework**: Requires test scripts defined in package.json
## Troubleshooting
### Q: How do I view the current session state?
A: Choose "View details (Status)" from the menu
### Q: How do I resume an interrupted session?
A: Use the `--resume` flag:
```bash
/ccw-loop --resume LOOP-xxx-2026-01-22
```
### Q: What if a CLI tool fails?
A: The skill automatically falls back to manual mode and prompts for manual input
### Q: How do I add a custom action?
A: See the "Action Extensions" section in [specs/action-catalog.md](specs/action-catalog.md)
## Contributing
To add new functionality:
1. Create an action file in `phases/actions/`
2. Update the orchestrator decision logic
3. Add it to action-catalog.md
4. Update action-menu.md
## License
MIT
---
**Version**: 1.0.0
**Last Updated**: 2026-01-22
**Author**: CCW Team

View File

@@ -1,522 +0,0 @@
---
name: ccw
description: Stateless workflow orchestrator. Auto-selects optimal workflow based on task intent. Triggers "ccw", "workflow".
allowed-tools: Task(*), SlashCommand(*), AskUserQuestion(*), Read(*), Bash(*), Grep(*), TodoWrite(*)
---
# CCW - Claude Code Workflow Orchestrator
A stateless workflow orchestrator that automatically selects the optimal workflow based on task intent.
## Workflow System Overview
CCW provides two workflow systems - **Main Workflow** and **Issue Workflow** - which together cover the complete software development lifecycle.
```
┌─────────────────────────────────────────────────────────────────────────────┐
│ Main Workflow │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Level 1 │ → │ Level 2 │ → │ Level 3 │ → │ Level 4 │ │
│ │ Rapid │ │ Lightweight │ │ Standard │ │ Brainstorm │ │
│ │ │ │ │ │ │ │ │ │
│ │ lite-lite- │ │ lite-plan │ │ plan │ │ brainstorm │ │
│ │ lite │ │ lite-fix │ │ tdd-plan │ │ :auto- │ │
│ │ │ │ multi-cli- │ │ test-fix- │ │ parallel │ │
│ │ │ │ plan │ │ gen │ │ ↓ │ │
│ │ │ │ │ │ │ │ plan │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
│ Complexity: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━▶ │
│ Low High │
└─────────────────────────────────────────────────────────────────────────────┘
│ After development
┌─────────────────────────────────────────────────────────────────────────────┐
│ Issue Workflow │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Accumulate │ → │ Plan │ → │ Execute │ │
│ │ Discover & │ │ Batch │ │ Parallel │ │
│ │ Collect │ │ Planning │ │ Execution │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │
│ Supplementary role: Maintain main branch stability, worktree isolation │
└─────────────────────────────────────────────────────────────────────────────┘
```
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ CCW Orchestrator (CLI-Enhanced + Requirement Analysis) │
├─────────────────────────────────────────────────────────────────┤
│ Phase 1 │ Input Analysis (rule-based, fast path) │
│ Phase 1.5 │ CLI Classification (semantic, smart path) │
│ Phase 1.75 │ Requirement Clarification (clarity < 2) │
│ Phase 2 │ Level Selection (intent → level → workflow) │
│ Phase 2.5 │ CLI Action Planning (high complexity) │
│ Phase 3 │ User Confirmation (optional) │
│ Phase 4 │ TODO Tracking Setup │
│ Phase 5 │ Execution Loop │
└─────────────────────────────────────────────────────────────────┘
```
## Level Quick Reference
| Level | Name | Workflows | Artifacts | Execution |
|-------|------|-----------|-----------|-----------|
| **1** | Rapid | `lite-lite-lite` | None | Direct execute |
| **2** | Lightweight | `lite-plan`, `lite-fix`, `multi-cli-plan` | Memory/Lightweight files | → `lite-execute` |
| **3** | Standard | `plan`, `tdd-plan`, `test-fix-gen` | Session persistence | → `execute` / `test-cycle-execute` |
| **4** | Brainstorm | `brainstorm:auto-parallel` → `plan` | Multi-role analysis + Session | → `execute` |
| **-** | Issue | `discover` → `plan` → `queue` → `execute` | Issue records | Worktree isolation (optional) |
## Workflow Selection Decision Tree
```
Start
├─ Is it post-development maintenance?
│ ├─ Yes → Issue Workflow
│ └─ No ↓
├─ Are requirements clear?
│ ├─ Uncertain → Level 4 (brainstorm:auto-parallel)
│ └─ Clear ↓
├─ Need persistent Session?
│ ├─ Yes → Level 3 (plan / tdd-plan / test-fix-gen)
│ └─ No ↓
├─ Need multi-perspective / solution comparison?
│ ├─ Yes → Level 2 (multi-cli-plan)
│ └─ No ↓
├─ Is it a bug fix?
│ ├─ Yes → Level 2 (lite-fix)
│ └─ No ↓
├─ Need planning?
│ ├─ Yes → Level 2 (lite-plan)
│ └─ No → Level 1 (lite-lite-lite)
```
## Intent Classification
### Priority Order (with Level Mapping)
| Priority | Intent | Patterns | Level | Flow |
|----------|--------|----------|-------|------|
| 1 | bugfix/hotfix | `urgent,production,critical` + bug | L2 | `bugfix.hotfix` |
| 1 | bugfix | `fix,bug,error,crash,fail` | L2 | `bugfix.standard` |
| 2 | issue batch | `issues,batch` + `fix,resolve` | Issue | `issue` |
| 3 | exploration | `不确定,explore,研究,what if` | L4 | `full` |
| 3 | multi-perspective | `多视角,权衡,比较方案,cross-verify` | L2 | `multi-cli-plan` |
| 4 | quick-task | `快速,简单,small,quick` + feature | L1 | `lite-lite-lite` |
| 5 | ui design | `ui,design,component,style` | L3/L4 | `ui` |
| 6 | tdd | `tdd,test-driven,先写测试` | L3 | `tdd` |
| 7 | test-fix | `测试失败,test fail,fix test` | L3 | `test-fix-gen` |
| 8 | review | `review,审查,code review` | L3 | `review-fix` |
| 9 | documentation | `文档,docs,readme` | L2 | `docs` |
| 99 | feature | complexity-based | L2/L3 | `rapid`/`coupled` |
### Quick Selection Guide
| Scenario | Recommended Workflow | Level |
|----------|---------------------|-------|
| Quick fixes, config adjustments | `lite-lite-lite` | 1 |
| Clear single-module features | `lite-plan → lite-execute` | 2 |
| Bug diagnosis and fix | `lite-fix` | 2 |
| Production emergencies | `lite-fix --hotfix` | 2 |
| Technology selection, solution comparison | `multi-cli-plan → lite-execute` | 2 |
| Multi-module changes, refactoring | `plan → verify → execute` | 3 |
| Test-driven development | `tdd-plan → execute → tdd-verify` | 3 |
| Test failure fixes | `test-fix-gen → test-cycle-execute` | 3 |
| New features, architecture design | `brainstorm:auto-parallel → plan → execute` | 4 |
| Post-development issue fixes | Issue Workflow | - |
### Complexity Assessment
```javascript
function assessComplexity(text) {
let score = 0
if (/refactor|重构|migrate|迁移|architect|架构|system|系统/.test(text)) score += 2
if (/multiple|多个|across|跨|all|所有|entire|整个/.test(text)) score += 2
if (/integrate|集成|api|database|数据库/.test(text)) score += 1
if (/security|安全|performance|性能|scale|扩展/.test(text)) score += 1
return score >= 4 ? 'high' : score >= 2 ? 'medium' : 'low'
}
```
| Complexity | Flow |
|------------|------|
| high | `coupled` (plan → verify → execute) |
| medium/low | `rapid` (lite-plan → lite-execute) |
### Dimension Extraction (WHAT/WHERE/WHY/HOW)
Extract four dimensions from user input, used for requirement clarification and workflow selection:
| Dimension | Extracted Content | Example Patterns |
|-----------|-------------------|------------------|
| **WHAT** | action + target | `创建/修复/重构/优化/分析` (create/fix/refactor/optimize/analyze) + target object |
| **WHERE** | scope + paths | `file/module/system` + file paths |
| **WHY** | goal + motivation | `为了.../因为.../目的是...` (in order to / because / the purpose is) |
| **HOW** | constraints + preferences | `必须.../不要.../应该...` (must / don't / should) |
**Clarity Score** (0-3; a scoring sketch follows this list):
- +0.5: explicit action present
- +0.5: concrete target present
- +0.5: file path present
- +0.5: scope is not unknown
- +0.5: explicit goal present
- +0.5: constraints present
- -0.5: contains uncertainty words (`不知道/maybe/怎么`)
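A minimal sketch of how that score could be computed (the dimension field names follow the table above and are assumptions):

```javascript
// Hypothetical clarity scoring over the extracted dimensions.
function clarityScore(d) {
  let score = 0
  if (d.action) score += 0.5                            // explicit action
  if (d.target) score += 0.5                            // concrete target
  if (d.paths?.length) score += 0.5                     // file path present
  if (d.scope && d.scope !== "unknown") score += 0.5    // known scope
  if (d.goal) score += 0.5                              // explicit goal
  if (d.constraints?.length) score += 0.5               // constraints present
  if (/不知道|maybe|怎么/.test(d.rawInput)) score -= 0.5 // uncertainty words
  return Math.max(0, Math.min(3, score))
}
```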
### Requirement Clarification
Triggered when `clarity_score < 2`:
```javascript
if (dimensions.clarity_score < 2) {
  const questions = generateClarificationQuestions(dimensions)
  // Generated questions: What is the goal? What is the scope? What are the constraints?
  AskUserQuestion({ questions })
}
```
**Clarification question types**:
- Unclear target → "What do you want to operate on?"
- Unclear scope → "What is the scope of the operation?"
- Unclear purpose → "What is the main goal of this operation?"
- Complex operation → "Any special requirements or constraints?"
## TODO Tracking Protocol
### CRITICAL: Append-Only Rule
Todos created by CCW **must be appended to the existing list**; they must not overwrite the user's other todos.
### Implementation
```javascript
// 1. Use the CCW prefix to isolate workflow todos
const prefix = `CCW:${flowName}`
// 2. Use the prefix format when creating new todos
TodoWrite({
  todos: [
    ...existingNonCCWTodos, // keep the user's own todos
    { content: `${prefix}: [1/N] /command:step1`, status: "in_progress", activeForm: "..." },
    { content: `${prefix}: [2/N] /command:step2`, status: "pending", activeForm: "..." }
  ]
})
// 3. When updating status, only modify todos matching the prefix
```
### Todo Format
```
CCW:{flow}: [{N}/{Total}] /command:name
```
### Visual Example
```
✓ CCW:rapid: [1/2] /workflow:lite-plan
→ CCW:rapid: [2/2] /workflow:lite-execute
  (the user's own todos remain untouched)
```
### Status Management
- Workflow start: create todos for all steps; the first step is `in_progress`
- Step completion: mark the current step `completed` and the next step `in_progress`
- Workflow end: mark all CCW todos `completed`
## Execution Flow
```javascript
// 1. Check for an explicit command
if (input.startsWith('/workflow:') || input.startsWith('/issue:')) {
  SlashCommand(input)
  return
}
// 2. Classify intent
const intent = classifyIntent(input) // See command.json intent_rules
// 3. Select flow
const flow = selectFlow(intent) // See command.json flows
// 4. Create todos with the CCW prefix
createWorkflowTodos(flow)
// 5. Dispatch the first command
SlashCommand(flow.steps[0].command, { args: input })
```
## CLI Tool Integration
CCW automatically injects CLI calls under specific conditions:
| Condition | CLI Inject |
|-----------|------------|
| Large code context (≥50k chars) | `gemini --mode analysis` |
| High-complexity task | `gemini --mode analysis` |
| Bug diagnosis | `gemini --mode analysis` |
| Multi-task execution (≥3 tasks) | `codex --mode write` |
### CLI Enhancement Phases
**Phase 1.5: CLI-Assisted Classification**
When rule matching is ambiguous, use the CLI to assist classification:
| Trigger | Explanation |
|---------|-------------|
| matchCount < 2 | Multiple intent patterns match |
| complexity = high | High-complexity task |
| input > 100 chars | Long input needs semantic understanding |
**Phase 2.5: CLI-Assisted Action Planning**
Workflow optimization for high-complexity tasks:
| Trigger | Explanation |
|---------|-------------|
| complexity = high | High-complexity task |
| steps >= 3 | Multi-step workflow |
| input > 200 chars | Complex requirement description |
The CLI may return a suggestion: `use_default` | `modify` (adjust steps) | `upgrade` (upgrade the workflow)
## Continuation Commands
User control commands during workflow execution:
| Command | Effect |
|---------|--------|
| `continue` | Proceed to the next step |
| `skip` | Skip the current step |
| `abort` | Terminate the workflow |
| `/workflow:*` | Switch to the specified command |
| Natural language | Re-analyze intent |
## Workflow Flow Details
### Issue Workflow (Supplement to the Main Workflow)
The Issue Workflow is a **supplementary mechanism** to the Main Workflow, focused on continuous post-development maintenance.
#### Design Philosophy
| Aspect | Main Workflow | Issue Workflow |
|--------|---------------|----------------|
| **Purpose** | Primary development cycle | Post-development maintenance |
| **Timing** | Feature development phase | After the main workflow completes |
| **Scope** | Full feature implementation | Targeted fixes/enhancements |
| **Parallelism** | Dependency analysis → parallel agents | Worktree isolation (optional) |
| **Branch model** | Work on the current branch | Can use isolated worktrees |
#### Why the Main Workflow Does Not Use Worktrees Automatically
**Dependency analysis already solves the parallelism problem**
1. The planning phase (`/workflow:plan`) performs dependency analysis
2. Task dependencies and the critical path are identified automatically
3. Tasks are split into **parallel groups** (independent) and **serial chains** (dependent)
4. Agents execute independent tasks in parallel, with no need for filesystem isolation
#### Two-Phase Lifecycle
```
┌─────────────────────────────────────────────────────────────────────┐
│ Phase 1: Accumulation                                               │
│                                                                     │
│ Triggers: post-task review, code-review findings, test failures    │
│                                                                     │
│ ┌────────────┐   ┌────────────┐   ┌────────────┐                    │
│ │ discover   │   │ discover-  │   │ new        │                    │
│ │ Auto-find  │   │ by-prompt  │   │ Manual     │                    │
│ └────────────┘   └────────────┘   └────────────┘                    │
│                                                                     │
│ Issues accumulate continuously into a pending queue                 │
└─────────────────────────────────────────────────────────────────────┘
                    │ Once enough have accumulated
                    ▼
┌─────────────────────────────────────────────────────────────────────┐
│ Phase 2: Batch Resolution                                           │
│                                                                     │
│ ┌────────────┐     ┌────────────┐     ┌────────────┐                │
│ │ plan       │ ──→ │ queue      │ ──→ │ execute    │                │
│ │ --all-     │     │ Optimize   │     │ Parallel   │                │
│ │ pending    │     │ order      │     │ execution  │                │
│ └────────────┘     └────────────┘     └────────────┘                │
│                                                                     │
│ Supports worktree isolation to keep the main branch stable          │
└─────────────────────────────────────────────────────────────────────┘
```
#### Collaboration with the Main Workflow
```
                     Development iteration loop
┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│  ┌─────────┐                                   ┌─────────┐          │
│  │ Feature │ ──→ Main Workflow ──→ Done ──→    │ Review  │          │
│  │ Request │     (Level 1-4)                   └────┬────┘          │
│  └─────────┘                                        │               │
│       ▲                                             │ Issues found  │
│       │                                             ▼               │
│       │                                        ┌─────────┐          │
│  Continue with                                 │ Issue   │          │
│  new features                                  │ Workflow│          │
│       │                                        └────┬────┘          │
│       │        ┌────────────────────────────────────┘               │
│       │        │ Fixes complete                                     │
│       │        ▼                                                    │
│  ┌────┴────┐◀──────                                                 │
│  │ Main    │  Merge                                                 │
│  │ Branch  │  back                                                  │
│  └─────────┘                                                        │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```
#### Command List
**Accumulation phase:**
```bash
/issue:discover # Multi-perspective automatic discovery
/issue:discover-by-prompt # Prompt-based discovery
/issue:new # Manual creation
```
**Batch resolution phase:**
```bash
/issue:plan --all-pending # Batch-plan all pending issues
/issue:queue # Generate an optimized execution queue
/issue:execute # Execute in parallel
```
### lite-lite-lite vs multi-cli-plan
| Dimension | lite-lite-lite | multi-cli-plan |
|-----------|---------------|----------------|
| **Artifacts** | No files | IMPL_PLAN.md + plan.json + synthesis.json |
| **State** | Stateless | Persistent session |
| **CLI selection** | Automatic, based on task-type analysis | Configuration-driven |
| **Iteration** | Via AskUser | Multiple rounds until convergence |
| **Execution** | Direct execution | Via lite-execute |
| **Best for** | Quick fixes, simple features | Complex multi-step implementations |
**Selection guide**
- Task is clear and the change is small → `lite-lite-lite`
- Needs multi-perspective analysis or complex architecture → `multi-cli-plan`
### multi-cli-plan vs lite-plan
| Dimension | multi-cli-plan | lite-plan |
|-----------|---------------|-----------|
| **Context** | ACE semantic search | Manual file patterns |
| **Analysis** | Multi-CLI cross-verification | Single-pass planning |
| **Iteration** | Multiple rounds until convergence | Single round |
| **Confidence** | High (consensus-driven) | Medium (single perspective) |
| **Best for** | Complex tasks needing multiple perspectives | Direct, well-defined implementations |
**Selection guide**
- Clear requirements and paths → `lite-plan`
- Needs trade-offs or comparison of solutions → `multi-cli-plan`
## Artifact Flow Protocol
An automatic hand-off mechanism for workflow outputs, supporting intent extraction and completion evaluation across different output formats.
### Output Formats
| Command | Output Location | Format | Key Fields |
|---------|-----------------|--------|------------|
| `/workflow:lite-plan` | memory://plan | structured_plan | tasks, files, dependencies |
| `/workflow:plan` | .workflow/{session}/IMPL_PLAN.md | markdown_plan | phases, tasks, risks |
| `/workflow:execute` | execution_log.json | execution_report | completed_tasks, errors |
| `/workflow:test-cycle-execute` | test_results.json | test_report | pass_rate, failures, coverage |
| `/workflow:review-session-cycle` | review_report.md | review_report | findings, severity_counts |
### Intent Extraction
When handing off to the next step, key information is extracted automatically:
```
plan → execute:
  extract: tasks (incomplete), priority_order, files_to_modify, context_summary
execute → test:
  extract: modified_files, test_scope (inferred), pending_verification
test → fix:
  condition: pass_rate < 0.95
  extract: failures, error_messages, affected_files, suggested_fixes
review → fix:
  condition: critical > 0 OR high > 3
  extract: findings (critical/high), fix_priority, affected_files
```
### Completion Evaluation
**Test completion routing**:
```
pass_rate >= 0.95 AND coverage >= 0.80 → complete
pass_rate >= 0.95 AND coverage < 0.80 → add_more_tests
pass_rate >= 0.80 → fix_failures_then_continue
pass_rate < 0.80 → major_fix_required
```
**Review completion routing**:
```
critical == 0 AND high <= 3 → complete_or_optional_fix
critical > 0 → mandatory_fix
high > 3 → recommended_fix
```
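A minimal sketch of the routing above as code; the thresholds come from the two tables, while the function shape matches the `evaluateCompletion` call in the usage example below:

```javascript
// Completion router for test and review outputs (sketch).
function evaluateCompletion(kind, result) {
  if (kind === 'test') {
    const { pass_rate, coverage } = result
    if (pass_rate >= 0.95) return coverage >= 0.80 ? 'complete' : 'add_more_tests'
    return pass_rate >= 0.80 ? 'fix_failures_then_continue' : 'major_fix_required'
  }
  if (kind === 'review') {
    const { critical, high } = result.severity_counts
    if (critical > 0) return 'mandatory_fix'
    return high > 3 ? 'recommended_fix' : 'complete_or_optional_fix'
  }
  throw new Error(`unknown artifact kind: ${kind}`)
}
```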
### Flow Decision Patterns
**plan_execute_test**:
```
plan → execute → test
  ↓ (if tests fail)
extract_failures → fix → test (max 3 iterations)
  ↓ (if still failing)
manual_intervention
```
**iterative_improvement**:
```
execute → test → fix → test → ...
loop until: pass_rate >= 0.95 OR iterations >= 3
```
### Usage Example
```javascript
// After execution completes, decide the next step from the output
const result = await execute(plan)
// Extract intent and hand off to testing
const testContext = extractIntent('execute_to_test', result)
// testContext = { modified_files, test_scope, pending_verification }
// After tests complete, route based on completion
const testResult = await test(testContext)
const nextStep = evaluateCompletion('test', testResult)
// nextStep = 'fix_failures_then_continue' if pass_rate = 0.85
```
## Reference
- [command.json](command.json) - Command metadata, flow definitions, intent rules, and artifact flow

View File

@@ -1,641 +0,0 @@
{
"_metadata": {
"version": "2.0.0",
"description": "Unified CCW command index with capabilities, flows, and intent rules"
},
"capabilities": {
"explore": {
"description": "Codebase exploration and context gathering",
"commands": ["/workflow:init", "/workflow:tools:gather", "/memory:load"],
"agents": ["cli-explore-agent", "context-search-agent"]
},
"brainstorm": {
"description": "Multi-perspective analysis and ideation",
"commands": ["/workflow:brainstorm:auto-parallel", "/workflow:brainstorm:artifacts", "/workflow:brainstorm:synthesis"],
"roles": ["product-manager", "system-architect", "ux-expert", "data-architect", "api-designer"]
},
"plan": {
"description": "Task planning and decomposition",
"commands": ["/workflow:lite-plan", "/workflow:plan", "/workflow:tdd-plan", "/task:create", "/task:breakdown"],
"agents": ["cli-lite-planning-agent", "action-planning-agent"]
},
"verify": {
"description": "Plan and quality verification",
"commands": ["/workflow:action-plan-verify", "/workflow:tdd-verify"]
},
"execute": {
"description": "Task execution and implementation",
"commands": ["/workflow:lite-execute", "/workflow:execute", "/task:execute"],
"agents": ["code-developer", "cli-execution-agent", "universal-executor"]
},
"bugfix": {
"description": "Bug diagnosis and fixing",
"commands": ["/workflow:lite-fix"],
"agents": ["code-developer"]
},
"test": {
"description": "Test generation and execution",
"commands": ["/workflow:test-gen", "/workflow:test-fix-gen", "/workflow:test-cycle-execute"],
"agents": ["test-fix-agent"]
},
"review": {
"description": "Code review and quality analysis",
"commands": ["/workflow:review-session-cycle", "/workflow:review-module-cycle", "/workflow:review", "/workflow:review-fix"]
},
"issue": {
"description": "Issue lifecycle management - discover, accumulate, batch resolve",
"commands": ["/issue:new", "/issue:discover", "/issue:discover-by-prompt", "/issue:plan", "/issue:queue", "/issue:execute", "/issue:manage"],
"agents": ["issue-plan-agent", "issue-queue-agent", "cli-explore-agent"],
"lifecycle": {
"accumulation": {
"description": "任务完成后进行需求扩展、bug分析、测试发现",
"triggers": ["post-task review", "code review findings", "test failures"],
"commands": ["/issue:discover", "/issue:discover-by-prompt", "/issue:new"]
},
"batch_resolution": {
"description": "积累的issue集中规划和并行执行",
"flow": ["plan", "queue", "execute"],
"commands": ["/issue:plan --all-pending", "/issue:queue", "/issue:execute"]
}
}
},
"ui-design": {
"description": "UI design and prototyping",
"commands": ["/workflow:ui-design:explore-auto", "/workflow:ui-design:imitate-auto", "/workflow:ui-design:design-sync"],
"agents": ["ui-design-agent"]
},
"memory": {
"description": "Documentation and knowledge management",
"commands": ["/memory:docs", "/memory:update-related", "/memory:update-full", "/memory:skill-memory"],
"agents": ["doc-generator", "memory-bridge"]
}
},
"flows": {
"_level_guide": {
"L1": "Rapid - No artifacts, direct execution",
"L2": "Lightweight - Memory/lightweight files, → lite-execute",
"L3": "Standard - Session persistence, → execute/test-cycle-execute",
"L4": "Brainstorm - Multi-role analysis + Session, → execute"
},
"lite-lite-lite": {
"name": "Ultra-Rapid Execution",
"level": "L1",
"description": "零文件 + 自动CLI选择 + 语义描述 + 直接执行",
"complexity": ["low"],
"artifacts": "none",
"steps": [
{ "phase": "clarify", "description": "需求澄清 (AskUser if needed)" },
{ "phase": "auto-select", "description": "任务分析 → 自动选择CLI组合" },
{ "phase": "multi-cli", "description": "并行多CLI分析" },
{ "phase": "decision", "description": "展示结果 → AskUser决策" },
{ "phase": "execute", "description": "直接执行 (无中间文件)" }
],
"cli_hints": {
"analysis": { "tool": "auto", "mode": "analysis", "parallel": true },
"execution": { "tool": "auto", "mode": "write" }
},
"estimated_time": "10-30 min"
},
"rapid": {
"name": "Rapid Iteration",
"level": "L2",
"description": "内存规划 + 直接执行",
"complexity": ["low", "medium"],
"artifacts": "memory://plan",
"steps": [
{ "command": "/workflow:lite-plan", "optional": false, "auto_continue": true },
{ "command": "/workflow:lite-execute", "optional": false }
],
"cli_hints": {
"explore_phase": { "tool": "gemini", "mode": "analysis", "trigger": "needs_exploration" },
"execution": { "tool": "codex", "mode": "write", "trigger": "complexity >= medium" }
},
"estimated_time": "15-45 min"
},
"multi-cli-plan": {
"name": "Multi-CLI Collaborative Planning",
"level": "L2",
"description": "ACE上下文 + 多CLI协作分析 + 迭代收敛 + 计划生成",
"complexity": ["medium", "high"],
"artifacts": ".workflow/.multi-cli-plan/{session}/",
"steps": [
{ "command": "/workflow:multi-cli-plan", "optional": false, "phases": [
"context_gathering: ACE语义搜索",
"multi_cli_discussion: cli-discuss-agent多轮分析",
"present_options: 展示解决方案",
"user_decision: 用户选择",
"plan_generation: cli-lite-planning-agent生成计划"
]},
{ "command": "/workflow:lite-execute", "optional": false }
],
"vs_lite_plan": {
"context": "ACE semantic search vs Manual file patterns",
"analysis": "Multi-CLI cross-verification vs Single-pass planning",
"iteration": "Multiple rounds until convergence vs Single round",
"confidence": "High (consensus-based) vs Medium (single perspective)",
"best_for": "Complex tasks needing multiple perspectives vs Straightforward implementations"
},
"agents": ["cli-discuss-agent", "cli-lite-planning-agent"],
"cli_hints": {
"discussion": { "tools": ["gemini", "codex", "claude"], "mode": "analysis", "parallel": true },
"planning": { "tool": "gemini", "mode": "analysis" }
},
"estimated_time": "30-90 min"
},
"coupled": {
"name": "Standard Planning",
"level": "L3",
"description": "完整规划 + 验证 + 执行",
"complexity": ["medium", "high"],
"artifacts": ".workflow/active/{session}/",
"steps": [
{ "command": "/workflow:plan", "optional": false },
{ "command": "/workflow:action-plan-verify", "optional": false, "auto_continue": true },
{ "command": "/workflow:execute", "optional": false },
{ "command": "/workflow:review", "optional": true }
],
"cli_hints": {
"pre_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"execution": { "tool": "codex", "mode": "write", "trigger": "always" }
},
"estimated_time": "2-4 hours"
},
"full": {
"name": "Full Exploration (Brainstorm)",
"level": "L4",
"description": "头脑风暴 + 规划 + 执行",
"complexity": ["high"],
"artifacts": ".workflow/active/{session}/.brainstorming/",
"steps": [
{ "command": "/workflow:brainstorm:auto-parallel", "optional": false, "confirm_before": true },
{ "command": "/workflow:plan", "optional": false },
{ "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
{ "command": "/workflow:execute", "optional": false }
],
"cli_hints": {
"role_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
"execution": { "tool": "codex", "mode": "write", "trigger": "task_count >= 3" }
},
"estimated_time": "1-3 hours"
},
"bugfix": {
"name": "Bug Fix",
"level": "L2",
"description": "智能诊断 + 修复 (5 phases)",
"complexity": ["low", "medium"],
"artifacts": ".workflow/.lite-fix/{bug-slug}-{date}/",
"variants": {
"standard": [{ "command": "/workflow:lite-fix", "optional": false }],
"hotfix": [{ "command": "/workflow:lite-fix --hotfix", "optional": false }]
},
"phases": [
"Phase 1: Bug Analysis & Diagnosis (severity pre-assessment)",
"Phase 2: Clarification (optional, AskUserQuestion)",
"Phase 3: Fix Planning (Low/Medium → Claude, High/Critical → cli-lite-planning-agent)",
"Phase 4: Confirmation & Selection",
"Phase 5: Execute (→ lite-execute --mode bugfix)"
],
"cli_hints": {
"diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"fix": { "tool": "codex", "mode": "write", "trigger": "severity >= medium" }
},
"estimated_time": "10-30 min"
},
"issue": {
"name": "Issue Lifecycle",
"level": "Supplementary",
"description": "发现积累 → 批量规划 → 队列优化 → 并行执行 (Main Workflow 补充机制)",
"complexity": ["medium", "high"],
"artifacts": ".workflow/.issues/",
"purpose": "Post-development continuous maintenance, maintain main branch stability",
"phases": {
"accumulation": {
"description": "项目迭代中持续发现和积累issue",
"commands": ["/issue:discover", "/issue:discover-by-prompt", "/issue:new"],
"trigger": "post-task, code-review, test-failure"
},
"resolution": {
"description": "集中规划和执行积累的issue",
"steps": [
{ "command": "/issue:plan --all-pending", "optional": false },
{ "command": "/issue:queue", "optional": false },
{ "command": "/issue:execute", "optional": false }
]
}
},
"worktree_support": {
"description": "可选的 worktree 隔离,保持主分支稳定",
"use_case": "主开发完成后的 issue 修复"
},
"cli_hints": {
"discovery": { "tool": "gemini", "mode": "analysis", "trigger": "perspective_analysis", "parallel": true },
"solution_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
"batch_execution": { "tool": "codex", "mode": "write", "trigger": "always" }
},
"estimated_time": "1-4 hours"
},
"tdd": {
"name": "Test-Driven Development",
"level": "L3",
"description": "TDD规划 + 执行 + 验证 (6 phases)",
"complexity": ["medium", "high"],
"artifacts": ".workflow/active/{session}/",
"steps": [
{ "command": "/workflow:tdd-plan", "optional": false },
{ "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
{ "command": "/workflow:execute", "optional": false },
{ "command": "/workflow:tdd-verify", "optional": false }
],
"tdd_structure": {
"description": "Each IMPL task contains complete internal Red-Green-Refactor cycle",
"meta": "tdd_workflow: true",
"flow_control": "implementation_approach contains 3 steps (red/green/refactor)"
},
"cli_hints": {
"test_strategy": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"red_green_refactor": { "tool": "codex", "mode": "write", "trigger": "always" }
},
"estimated_time": "1-3 hours"
},
"test-fix": {
"name": "Test Fix Generation",
"level": "L3",
"description": "测试修复生成 + 执行循环 (5 phases)",
"complexity": ["medium", "high"],
"artifacts": ".workflow/active/WFS-test-{session}/",
"dual_mode": {
"session_mode": { "input": "WFS-xxx", "context_source": "Source session summaries" },
"prompt_mode": { "input": "Text/file path", "context_source": "Direct codebase analysis" }
},
"steps": [
{ "command": "/workflow:test-fix-gen", "optional": false },
{ "command": "/workflow:test-cycle-execute", "optional": false }
],
"task_structure": [
"IMPL-001.json (test understanding & generation)",
"IMPL-001.5-review.json (quality gate)",
"IMPL-002.json (test execution & fix cycle)"
],
"cli_hints": {
"analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"fix_cycle": { "tool": "codex", "mode": "write", "trigger": "pass_rate < 0.95" }
},
"estimated_time": "1-2 hours"
},
"ui": {
"name": "UI-First Development",
"level": "L3/L4",
"description": "UI设计 + 规划 + 执行",
"complexity": ["medium", "high"],
"artifacts": ".workflow/active/{session}/",
"variants": {
"explore": [
{ "command": "/workflow:ui-design:explore-auto", "optional": false },
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
{ "command": "/workflow:plan", "optional": false },
{ "command": "/workflow:execute", "optional": false }
],
"imitate": [
{ "command": "/workflow:ui-design:imitate-auto", "optional": false },
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
{ "command": "/workflow:plan", "optional": false },
{ "command": "/workflow:execute", "optional": false }
]
},
"estimated_time": "2-4 hours"
},
"review-fix": {
"name": "Review and Fix",
"level": "L3",
"description": "多维审查 + 自动修复",
"complexity": ["medium"],
"artifacts": ".workflow/active/{session}/review_report.md",
"steps": [
{ "command": "/workflow:review-session-cycle", "optional": false },
{ "command": "/workflow:review-fix", "optional": true }
],
"cli_hints": {
"multi_dimension_review": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
"auto_fix": { "tool": "codex", "mode": "write", "trigger": "findings_count >= 3" }
},
"estimated_time": "30-90 min"
},
"docs": {
"name": "Documentation",
"level": "L2",
"description": "批量文档生成",
"complexity": ["low", "medium"],
"variants": {
"incremental": [{ "command": "/memory:update-related", "optional": false }],
"full": [
{ "command": "/memory:docs", "optional": false },
{ "command": "/workflow:execute", "optional": false }
]
},
"estimated_time": "15-60 min"
}
},
"intent_rules": {
"_level_mapping": {
"description": "Intent → Level → Flow mapping guide",
"L1": ["lite-lite-lite"],
"L2": ["rapid", "bugfix", "multi-cli-plan", "docs"],
"L3": ["coupled", "tdd", "test-fix", "review-fix", "ui"],
"L4": ["full"],
"Supplementary": ["issue"]
},
"bugfix": {
"priority": 1,
"level": "L2",
"variants": {
"hotfix": {
"patterns": ["hotfix", "urgent", "production", "critical", "emergency", "紧急", "生产环境", "线上"],
"flow": "bugfix.hotfix"
},
"standard": {
"patterns": ["fix", "bug", "error", "issue", "crash", "broken", "fail", "wrong", "修复", "错误", "崩溃"],
"flow": "bugfix.standard"
}
}
},
"issue_batch": {
"priority": 2,
"level": "Supplementary",
"patterns": {
"batch": ["issues", "batch", "queue", "多个", "批量"],
"action": ["fix", "resolve", "处理", "解决"]
},
"require_both": true,
"flow": "issue"
},
"exploration": {
"priority": 3,
"level": "L4",
"patterns": ["不确定", "不知道", "explore", "研究", "分析一下", "怎么做", "what if", "探索"],
"flow": "full"
},
"multi_perspective": {
"priority": 3,
"level": "L2",
"patterns": ["多视角", "权衡", "比较方案", "cross-verify", "多CLI", "协作分析"],
"flow": "multi-cli-plan"
},
"quick_task": {
"priority": 4,
"level": "L1",
"patterns": ["快速", "简单", "small", "quick", "simple", "trivial", "小改动"],
"flow": "lite-lite-lite"
},
"ui_design": {
"priority": 5,
"level": "L3/L4",
"patterns": ["ui", "界面", "design", "设计", "component", "组件", "style", "样式", "layout", "布局"],
"variants": {
"imitate": { "triggers": ["参考", "模仿", "像", "类似"], "flow": "ui.imitate" },
"explore": { "triggers": [], "flow": "ui.explore" }
}
},
"tdd": {
"priority": 6,
"level": "L3",
"patterns": ["tdd", "test-driven", "测试驱动", "先写测试", "test first"],
"flow": "tdd"
},
"test_fix": {
"priority": 7,
"level": "L3",
"patterns": ["测试失败", "test fail", "fix test", "test error", "pass rate", "coverage gap"],
"flow": "test-fix"
},
"review": {
"priority": 8,
"level": "L3",
"patterns": ["review", "审查", "检查代码", "code review", "质量检查"],
"flow": "review-fix"
},
"documentation": {
"priority": 9,
"level": "L2",
"patterns": ["文档", "documentation", "docs", "readme"],
"variants": {
"incremental": { "triggers": ["更新", "增量"], "flow": "docs.incremental" },
"full": { "triggers": ["全部", "完整"], "flow": "docs.full" }
}
},
"feature": {
"priority": 99,
"complexity_map": {
"high": { "level": "L3", "flow": "coupled" },
"medium": { "level": "L2", "flow": "rapid" },
"low": { "level": "L1", "flow": "lite-lite-lite" }
}
}
},
"complexity_indicators": {
"high": {
"threshold": 4,
"patterns": {
"architecture": { "keywords": ["refactor", "重构", "migrate", "迁移", "architect", "架构", "system", "系统"], "weight": 2 },
"multi_module": { "keywords": ["multiple", "多个", "across", "跨", "all", "所有", "entire", "整个"], "weight": 2 },
"integration": { "keywords": ["integrate", "集成", "api", "database", "数据库"], "weight": 1 },
"quality": { "keywords": ["security", "安全", "performance", "性能", "scale", "扩展"], "weight": 1 }
}
},
"medium": { "threshold": 2 },
"low": { "threshold": 0 }
},
"cli_tools": {
"gemini": {
"strengths": ["超长上下文", "深度分析", "架构理解", "执行流追踪"],
"triggers": ["分析", "理解", "设计", "架构", "诊断"],
"mode": "analysis"
},
"qwen": {
"strengths": ["代码模式识别", "多维度分析"],
"triggers": ["评估", "对比", "验证"],
"mode": "analysis"
},
"codex": {
"strengths": ["精确代码生成", "自主执行"],
"triggers": ["实现", "重构", "修复", "生成"],
"mode": "write"
}
},
"cli_injection_rules": {
"context_gathering": { "trigger": "file_read >= 50k OR module_count >= 5", "inject": "gemini --mode analysis" },
"pre_planning_analysis": { "trigger": "complexity === high", "inject": "gemini --mode analysis" },
"debug_diagnosis": { "trigger": "intent === bugfix AND root_cause_unclear", "inject": "gemini --mode analysis" },
"code_review": { "trigger": "step === review", "inject": "gemini --mode analysis" },
"implementation": { "trigger": "step === execute AND task_count >= 3", "inject": "codex --mode write" }
},
"artifact_flow": {
"_description": "定义工作流产出的格式、意图提取和流转规则",
"outputs": {
"/workflow:lite-plan": {
"artifact": "memory://plan",
"format": "structured_plan",
"fields": ["tasks", "files", "dependencies", "approach"]
},
"/workflow:plan": {
"artifact": ".workflow/{session}/IMPL_PLAN.md",
"format": "markdown_plan",
"fields": ["phases", "tasks", "dependencies", "risks", "test_strategy"]
},
"/workflow:multi-cli-plan": {
"artifact": ".workflow/.multi-cli-plan/{session}/",
"format": "multi_file",
"files": ["IMPL_PLAN.md", "plan.json", "synthesis.json"],
"fields": ["consensus", "divergences", "recommended_approach", "tasks"]
},
"/workflow:lite-execute": {
"artifact": "git_changes",
"format": "code_diff",
"fields": ["modified_files", "added_files", "deleted_files", "build_status"]
},
"/workflow:execute": {
"artifact": ".workflow/{session}/execution_log.json",
"format": "execution_report",
"fields": ["completed_tasks", "pending_tasks", "errors", "warnings"]
},
"/workflow:test-cycle-execute": {
"artifact": ".workflow/{session}/test_results.json",
"format": "test_report",
"fields": ["pass_rate", "failures", "coverage", "duration"]
},
"/workflow:review-session-cycle": {
"artifact": ".workflow/{session}/review_report.md",
"format": "review_report",
"fields": ["findings", "severity_counts", "recommendations"]
},
"/workflow:lite-fix": {
"artifact": "git_changes",
"format": "fix_report",
"fields": ["root_cause", "fix_applied", "files_modified", "verification_status"]
}
},
"intent_extraction": {
"plan_to_execute": {
"from": ["lite-plan", "plan", "multi-cli-plan"],
"to": ["lite-execute", "execute"],
"extract": {
"tasks": "$.tasks[] | filter(status != 'completed')",
"priority_order": "$.tasks | sort_by(priority)",
"files_to_modify": "$.tasks[].files | flatten | unique",
"dependencies": "$.dependencies",
"context_summary": "$.approach OR $.recommended_approach"
}
},
"execute_to_test": {
"from": ["lite-execute", "execute"],
"to": ["test-cycle-execute", "test-fix-gen"],
"extract": {
"modified_files": "$.modified_files",
"test_scope": "infer_from($.modified_files)",
"build_status": "$.build_status",
"pending_verification": "$.completed_tasks | needs_test"
}
},
"test_to_fix": {
"from": ["test-cycle-execute"],
"to": ["lite-fix", "review-fix"],
"condition": "$.pass_rate < 0.95",
"extract": {
"failures": "$.failures",
"error_messages": "$.failures[].message",
"affected_files": "$.failures[].file",
"suggested_fixes": "$.failures[].suggested_fix"
}
},
"review_to_fix": {
"from": ["review-session-cycle", "review-module-cycle"],
"to": ["review-fix"],
"condition": "$.severity_counts.critical > 0 OR $.severity_counts.high > 3",
"extract": {
"findings": "$.findings | filter(severity in ['critical', 'high'])",
"fix_priority": "$.findings | group_by(category) | sort_by(severity)",
"affected_files": "$.findings[].file | unique"
}
}
},
"completion_criteria": {
"plan": {
"required": ["has_tasks", "has_files"],
"optional": ["has_tests", "no_blocking_risks"],
"threshold": 0.8,
"routing": {
"complete": "proceed_to_execute",
"incomplete": "clarify_requirements"
}
},
"execute": {
"required": ["all_tasks_attempted", "no_critical_errors"],
"optional": ["build_passes", "lint_passes"],
"threshold": 1.0,
"routing": {
"complete": "proceed_to_test_or_review",
"partial": "continue_execution",
"failed": "diagnose_and_retry"
}
},
"test": {
"metrics": {
"pass_rate": { "target": 0.95, "minimum": 0.80 },
"coverage": { "target": 0.80, "minimum": 0.60 }
},
"routing": {
"pass_rate >= 0.95 AND coverage >= 0.80": "complete",
"pass_rate >= 0.95 AND coverage < 0.80": "add_more_tests",
"pass_rate >= 0.80": "fix_failures_then_continue",
"pass_rate < 0.80": "major_fix_required"
}
},
"review": {
"metrics": {
"critical_findings": { "target": 0, "maximum": 0 },
"high_findings": { "target": 0, "maximum": 3 }
},
"routing": {
"critical == 0 AND high <= 3": "complete_or_optional_fix",
"critical > 0": "mandatory_fix",
"high > 3": "recommended_fix"
}
}
},
"flow_decisions": {
"_description": "根据产出完成度决定下一步",
"patterns": {
"plan_execute_test": {
"sequence": ["plan", "execute", "test"],
"on_test_fail": {
"action": "extract_failures_and_fix",
"max_iterations": 3,
"fallback": "manual_intervention"
}
},
"plan_execute_review": {
"sequence": ["plan", "execute", "review"],
"on_review_issues": {
"action": "prioritize_and_fix",
"auto_fix_threshold": "severity < high"
}
},
"iterative_improvement": {
"sequence": ["execute", "test", "fix"],
"loop_until": "pass_rate >= 0.95 OR iterations >= 3",
"on_loop_exit": "report_status"
}
}
}
}
}
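
To make these rules concrete, the sketch below shows how a dispatcher might consume `intent_rules` and `complexity_indicators` from the configuration above. It is a minimal sketch: it only handles rules with flat `patterns` arrays plus the `feature` fallback (the nested `variants` rules are omitted), and every function name here is illustrative rather than part of the actual ccw implementation.

```javascript
// Illustrative routing sketch; not the shipped ccw router
function scoreComplexity(request, indicators) {
  // Sum pattern weights for every high-complexity keyword group that matches
  let score = 0;
  for (const group of Object.values(indicators.high.patterns)) {
    if (group.keywords.some(k => request.includes(k))) score += group.weight;
  }
  if (score >= indicators.high.threshold) return 'high';
  return score >= indicators.medium.threshold ? 'medium' : 'low';
}

function matchIntent(request, intentRules) {
  // Walk rules in ascending priority; first pattern hit wins
  const rules = Object.entries(intentRules)
    .filter(([name]) => !name.startsWith('_'))
    .sort(([, a], [, b]) => (a.priority ?? 99) - (b.priority ?? 99));
  for (const [name, rule] of rules) {
    if (Array.isArray(rule.patterns) && rule.patterns.some(p => request.includes(p))) {
      return rule.flow ?? name;
    }
  }
  return null;
}

function routeRequest(request, config) {
  const flow = matchIntent(request, config.intent_rules);
  if (flow) return flow;
  // Catch-all: the "feature" rule maps complexity level to a flow
  const level = scoreComplexity(request, config.complexity_indicators);
  return config.intent_rules.feature.complexity_map[level].flow;
}
```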

View File

@@ -0,0 +1,650 @@
---
name: lite-skill-generator
description: Lightweight skill generator with style learning - creates simple skills using flow-based execution and style imitation. Use for quick skill scaffolding, simple workflow creation, or style-aware skill generation.
allowed-tools: Read, Write, Bash, Glob, Grep, AskUserQuestion
---
# Lite Skill Generator
Lightweight meta-skill for rapid skill creation with intelligent style learning and flow-based execution.
## Core Concept
**Simplicity First**: Generate simple, focused skills quickly with minimal overhead. Learn from existing skills to maintain consistent style and structure.
**Progressive Disclosure**: Follow Anthropic's three-layer loading principle:
1. **Metadata** - name, description, triggers (always loaded)
2. **SKILL.md** - core instructions (loaded when triggered)
3. **Bundled resources** - scripts, references, assets (loaded on demand)
## Execution Model
**3-Phase Flow**: Style Learning → Requirements Gathering → Generation
```
User Input → Phase 1: Style Analysis → Phase 2: Requirements → Phase 3: Generate → Skill Package
↓ ↓ ↓
Learn from examples Interactive prompts Write files + validate
```
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Lite Skill Generator │
│ │
│ Input: Skill name, purpose, reference skills │
│ ↓ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Phase 1-3: Lightweight Pipeline │ │
│ │ ┌────┐ ┌────┐ ┌────┐ │ │
│ │ │ P1 │→│ P2 │→│ P3 │ │ │
│ │ │Styl│ │Req │ │Gen │ │ │
│ │ └────┘ └────┘ └────┘ │ │
│ └─────────────────────────────────────────────────────────┘ │
│ ↓ │
│ Output: .claude/skills/{skill-name}/ (minimal package) │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## 3-Phase Workflow
### Phase 1: Style Analysis & Learning
Analyze reference skills to extract language patterns, structural conventions, and writing style.
```javascript
// Phase 1 Execution Flow
async function analyzeStyle(referencePaths) {
// Step 1: Load reference skills
const references = [];
for (const path of referencePaths) {
const content = Read(path);
references.push({
path: path,
content: content,
metadata: extractYAMLFrontmatter(content)
});
}
// Step 2: Extract style patterns
const styleProfile = {
// Structural patterns
structure: {
hasFrontmatter: references.every(r => r.metadata !== null),
sectionHeaders: extractCommonSections(references),
codeBlockUsage: detectCodeBlockPatterns(references),
flowDiagramUsage: detectFlowDiagrams(references)
},
// Language patterns
language: {
instructionStyle: detectInstructionStyle(references), // 'imperative' | 'declarative' | 'procedural'
pseudocodeUsage: detectPseudocodePatterns(references),
verbosity: calculateVerbosityLevel(references), // 'concise' | 'detailed' | 'verbose'
terminology: extractCommonTerms(references)
},
// Organization patterns
organization: {
phaseStructure: detectPhasePattern(references), // 'sequential' | 'autonomous' | 'flat'
exampleDensity: calculateExampleRatio(references),
templateUsage: detectTemplateReferences(references)
}
};
// Step 3: Generate style guide
return {
profile: styleProfile,
recommendations: generateStyleRecommendations(styleProfile),
examples: extractStyleExamples(references, styleProfile)
};
}
// Structural pattern detection
function extractCommonSections(references) {
const allSections = references.map(r =>
r.content.match(/^##? (.+)$/gm)?.map(s => s.replace(/^##? /, '')) ?? []
).flat();
return findMostCommon(allSections);
}
// Language style detection
function detectInstructionStyle(references) {
const imperativePattern = /^(Use|Execute|Run|Call|Create|Generate)\s/gim;
const declarativePattern = /^(The|This|Each|All)\s.*\s(is|are|will be)\s/gim;
const proceduralPattern = /^(Step \d+|Phase \d+|First|Then|Finally)\s/gim;
const scores = references.map(r => ({
imperative: (r.content.match(imperativePattern) || []).length,
declarative: (r.content.match(declarativePattern) || []).length,
procedural: (r.content.match(proceduralPattern) || []).length
}));
return getMaxStyle(scores);
}
// Pseudocode pattern detection
function detectPseudocodePatterns(references) {
const hasJavaScriptBlocks = references.some(r => r.content.includes('```javascript'));
const hasFunctionDefs = references.some(r => /function\s+\w+\(/m.test(r.content));
const hasFlowComments = references.some(r => /\/\/.*/m.test(r.content));
return {
usePseudocode: hasJavaScriptBlocks && hasFunctionDefs,
flowAnnotations: hasFlowComments,
style: hasFunctionDefs ? 'functional' : 'imperative'
};
}
```
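
The pseudocode above calls several helpers (`findMostCommon`, `getMaxStyle`) that are not defined in this file. A minimal sketch of plausible implementations, assuming style votes are simple frequency counts:

```javascript
// Possible shapes for two undefined helpers above (illustrative only)
function findMostCommon(items, topN = 5) {
  const counts = new Map();
  for (const item of items) counts.set(item, (counts.get(item) ?? 0) + 1);
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, topN)
    .map(([item]) => item);
}

function getMaxStyle(scores) {
  // Sum per-reference votes, then pick the dominant instruction style
  const totals = { imperative: 0, declarative: 0, procedural: 0 };
  for (const s of scores) {
    totals.imperative += s.imperative;
    totals.declarative += s.declarative;
    totals.procedural += s.procedural;
  }
  return Object.entries(totals).sort((a, b) => b[1] - a[1])[0][0];
}
```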
**Output**:
```
Style Analysis Complete:
Structure: Flow-based with pseudocode
Language: Procedural, detailed
Organization: Sequential phases
Key Patterns: 3-5 phases, function definitions, ASCII diagrams
Recommendations:
✓ Use phase-based structure (3-4 phases)
✓ Include pseudocode for complex logic
✓ Add ASCII flow diagrams
✓ Maintain concise documentation style
```
---
### Phase 2: Requirements Gathering
Interactive discovery of skill requirements using learned style patterns.
```javascript
async function gatherRequirements(styleProfile) {
// Step 1: Basic information
const basicInfo = await AskUserQuestion({
questions: [
{
question: "What is the skill name? (kebab-case, e.g., 'pdf-generator')",
header: "Name",
options: [
{ label: "pdf-generator", description: "Example: PDF generation skill" },
{ label: "code-analyzer", description: "Example: Code analysis skill" },
{ label: "Custom", description: "Enter custom name" }
]
},
{
question: "What is the primary purpose?",
header: "Purpose",
options: [
{ label: "Generation", description: "Create/generate artifacts" },
{ label: "Analysis", description: "Analyze/inspect code or data" },
{ label: "Transformation", description: "Convert/transform content" },
{ label: "Orchestration", description: "Coordinate multiple operations" }
]
}
]
});
// Step 2: Execution complexity
const complexity = await AskUserQuestion({
questions: [{
question: "How many main steps does this skill need?",
header: "Steps",
options: [
{ label: "2-3 steps", description: "Simple workflow (recommended for lite-skill)" },
{ label: "4-5 steps", description: "Moderate workflow" },
{ label: "6+ steps", description: "Complex workflow (consider full skill-generator)" }
]
}]
});
// Step 3: Tool requirements
const tools = await AskUserQuestion({
questions: [{
question: "Which tools will this skill use? (select multiple)",
header: "Tools",
multiSelect: true,
options: [
{ label: "Read", description: "Read files" },
{ label: "Write", description: "Write files" },
{ label: "Bash", description: "Execute commands" },
{ label: "Task", description: "Launch agents" },
{ label: "AskUserQuestion", description: "Interactive prompts" }
]
}]
});
// Step 4: Output format
const output = await AskUserQuestion({
questions: [{
question: "What does this skill produce?",
header: "Output",
options: [
{ label: "Single file", description: "One main output file" },
{ label: "Multiple files", description: "Several related files" },
{ label: "Directory structure", description: "Complete directory tree" },
{ label: "Modified files", description: "Edits to existing files" }
]
}]
});
// Step 5: Build configuration
return {
name: basicInfo.Name,
purpose: basicInfo.Purpose,
description: generateDescription(basicInfo.Name, basicInfo.Purpose),
steps: parseStepCount(complexity.Steps),
allowedTools: tools.Tools,
outputType: output.Output,
styleProfile: styleProfile,
triggerPhrases: generateTriggerPhrases(basicInfo.Name, basicInfo.Purpose)
};
}
// Generate skill description from name and purpose
function generateDescription(name, purpose) {
const templates = {
Generation: `Generate ${humanize(name)} with intelligent scaffolding`,
Analysis: `Analyze ${humanize(name)} with detailed reporting`,
Transformation: `Transform ${humanize(name)} with format conversion`,
Orchestration: `Orchestrate ${humanize(name)} workflow with multi-step coordination`
};
return templates[purpose] || `${humanize(name)} skill for ${purpose.toLowerCase()} tasks`;
}
// Generate trigger phrases
function generateTriggerPhrases(name, purpose) {
const base = [name, name.replace(/-/g, ' ')];
const purposeVariants = {
Generation: ['generate', 'create', 'build'],
Analysis: ['analyze', 'inspect', 'review'],
Transformation: ['transform', 'convert', 'format'],
Orchestration: ['orchestrate', 'coordinate', 'manage']
};
return [...base, ...(purposeVariants[purpose] ?? []).map(v => `${v} ${humanize(name)}`)];
}
```
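
`humanize` and `parseStepCount` are likewise left undefined above; one plausible shape for each (assumed, not canonical):

```javascript
// Assumed helper implementations (not part of the original file)
function humanize(name) {
  // "pdf-generator" → "Pdf Generator"
  return name
    .split('-')
    .map(w => w.charAt(0).toUpperCase() + w.slice(1))
    .join(' ');
}

function parseStepCount(label) {
  // "2-3 steps" → 3, "4-5 steps" → 5, "6+ steps" → 6
  const match = label.match(/(\d+)(?:-(\d+))?\+?/);
  return match ? parseInt(match[2] ?? match[1], 10) : 3;
}
```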
**Display to User**:
```
Requirements Gathered:
Name: pdf-generator
Purpose: Generation
Steps: 3 (Setup → Generate → Validate)
Tools: Read, Write, Bash
Output: Single file (PDF document)
Triggers: "pdf-generator", "generate pdf", "create pdf"
Style Application:
Using flow-based structure (from style analysis)
Including pseudocode blocks
Adding ASCII diagrams for clarity
```
---
### Phase 3: Generate Skill Package
Create minimal skill structure with style-aware content generation.
```javascript
async function generateSkillPackage(requirements) {
const skillDir = `.claude/skills/${requirements.name}`;
const workDir = `.workflow/.scratchpad/lite-skill-gen-${Date.now()}`;
// Step 1: Create directory structure
Bash(`mkdir -p "${skillDir}" "${workDir}"`);
// Step 2: Generate SKILL.md (using learned style)
const skillContent = generateSkillMd(requirements);
Write(`${skillDir}/SKILL.md`, skillContent);
// Step 3: Conditionally add bundled resources
if (requirements.outputType === 'Directory structure') {
Bash(`mkdir -p "${skillDir}/templates"`);
const templateContent = generateTemplate(requirements);
Write(`${skillDir}/templates/base-template.md`, templateContent);
}
if (requirements.allowedTools.includes('Bash')) {
Bash(`mkdir -p "${skillDir}/scripts"`);
const scriptContent = generateScript(requirements);
Write(`${skillDir}/scripts/helper.sh`, scriptContent);
}
// Step 4: Generate README
const readmeContent = generateReadme(requirements);
Write(`${skillDir}/README.md`, readmeContent);
// Step 5: Validate structure
const validation = validateSkillStructure(skillDir, requirements);
Write(`${workDir}/validation-report.json`, JSON.stringify(validation, null, 2));
// Step 6: Return summary
return {
skillPath: skillDir,
filesCreated: [
`${skillDir}/SKILL.md`,
...(validation.hasTemplates ? [`${skillDir}/templates/`] : []),
...(validation.hasScripts ? [`${skillDir}/scripts/`] : []),
`${skillDir}/README.md`
],
validation: validation,
nextSteps: generateNextSteps(requirements)
};
}
// Generate SKILL.md with style awareness
function generateSkillMd(req) {
const { styleProfile } = req;
// YAML frontmatter
const frontmatter = `---
name: ${req.name}
description: ${req.description}
allowed-tools: ${req.allowedTools.join(', ')}
---
`;
// Main content structure (adapts to style)
let content = frontmatter;
content += `\n# ${humanize(req.name)}\n\n`;
content += `${req.description}\n\n`;
// Add architecture diagram if style uses them
if (styleProfile.structure.flowDiagramUsage) {
content += generateArchitectureDiagram(req);
}
// Add execution flow
content += `## Execution Flow\n\n`;
if (styleProfile.language.pseudocodeUsage.usePseudocode) {
content += generatePseudocodeFlow(req);
} else {
content += generateProceduralFlow(req);
}
// Add phase sections
for (let i = 0; i < req.steps; i++) {
content += generatePhaseSection(i + 1, req, styleProfile);
}
// Add examples if style is verbose
if (styleProfile.language.verbosity !== 'concise') {
content += generateExamplesSection(req);
}
return content;
}
// Generate architecture diagram
function generateArchitectureDiagram(req) {
return `## Architecture
\`\`\`
┌─────────────────────────────────────────────────┐
│ ${humanize(req.name)} │
│ │
│ Input → Phase 1 → Phase 2 → Phase 3 → Output │
│ ${getPhaseName(1, req)} │
│ ${getPhaseName(2, req)} │
│ ${getPhaseName(3, req)} │
└─────────────────────────────────────────────────┘
\`\`\`
`;
}
// Generate pseudocode flow
function generatePseudocodeFlow(req) {
return `\`\`\`javascript
async function ${toCamelCase(req.name)}(input) {
// Phase 1: ${getPhaseName(1, req)}
const prepared = await phase1Prepare(input);
// Phase 2: ${getPhaseName(2, req)}
const processed = await phase2Process(prepared);
// Phase 3: ${getPhaseName(3, req)}
const result = await phase3Finalize(processed);
return result;
}
\`\`\`
`;
}
// Generate phase section
function generatePhaseSection(phaseNum, req, styleProfile) {
const phaseName = getPhaseName(phaseNum, req);
let section = `### Phase ${phaseNum}: ${phaseName}\n\n`;
if (styleProfile.language.pseudocodeUsage.usePseudocode) {
section += `\`\`\`javascript\n`;
section += `async function phase${phaseNum}${toCamelCase(phaseName)}(input) {\n`;
section += ` // TODO: Implement ${phaseName.toLowerCase()} logic\n`;
section += ` return output;\n`;
section += `}\n\`\`\`\n\n`;
} else {
section += `**Steps**:\n`;
section += `1. Load input data\n`;
section += `2. Process according to ${phaseName.toLowerCase()} logic\n`;
section += `3. Return result to next phase\n\n`;
}
return section;
}
// Validation
function validateSkillStructure(skillDir, req) {
const requiredFiles = [`${skillDir}/SKILL.md`, `${skillDir}/README.md`];
const exists = requiredFiles.map(f => Bash(`test -f "${f}"`).exitCode === 0);
return {
valid: exists.every(e => e),
hasTemplates: Bash(`test -d "${skillDir}/templates"`).exitCode === 0,
hasScripts: Bash(`test -d "${skillDir}/scripts"`).exitCode === 0,
filesPresent: requiredFiles.filter((f, i) => exists[i]),
styleCompliance: checkStyleCompliance(skillDir, req.styleProfile)
};
}
```
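
`getPhaseName` and `checkStyleCompliance` are referenced above but never defined; the sketches below are assumptions consistent with how they are used, not the shipped code:

```javascript
// Assumed helper shapes (illustrative only)
function getPhaseName(phaseNum, req) {
  const defaults = { 1: 'Setup', 2: 'Process', 3: 'Finalize' };
  return req.phaseNames?.[phaseNum - 1] ?? defaults[phaseNum] ?? `Step ${phaseNum}`;
}

function checkStyleCompliance(skillDir, styleProfile) {
  const content = Read(`${skillDir}/SKILL.md`);
  const checks = [
    { name: 'frontmatter', pass: content.startsWith('---') },
    { name: 'diagram', pass: !styleProfile.structure.flowDiagramUsage || content.includes('┌') },
    { name: 'pseudocode', pass: !styleProfile.language.pseudocodeUsage.usePseudocode || content.includes('async function') }
  ];
  const passed = checks.filter(c => c.pass).length;
  // Report a percentage score plus the individual check results
  return { score: Math.round((passed / checks.length) * 100), checks };
}
```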
**Output**:
```
Skill Package Generated:
Location: .claude/skills/pdf-generator/
Structure:
✓ SKILL.md (entry point)
✓ README.md (usage guide)
✓ templates/ (directory templates)
✓ scripts/ (helper scripts)
Validation:
✓ All required files present
✓ Style compliance: 95%
✓ Frontmatter valid
✓ Tool references correct
Next Steps:
1. Review SKILL.md and customize phases
2. Test skill: /skill:pdf-generator "test input"
3. Iterate based on usage
```
---
## Complete Execution Flow
```
User: "Create a PDF generator skill"
Phase 1: Style Analysis
|-- Read reference skills (ccw.md, ccw-coordinator.md)
|-- Extract style patterns (flow diagrams, pseudocode, structure)
|-- Generate style profile
+-- Output: Style recommendations
Phase 2: Requirements
|-- Ask: Name, purpose, steps
|-- Ask: Tools, output format
|-- Generate: Description, triggers
+-- Output: Requirements config
Phase 3: Generation
|-- Create: Directory structure
|-- Write: SKILL.md (style-aware)
|-- Write: README.md
|-- Optionally: templates/, scripts/
|-- Validate: Structure and style
+-- Output: Skill package
Return: Skill location + next steps
```
## Phase Execution Protocol
```javascript
// Main entry point
async function liteSkillGenerator(input) {
// Phase 1: Style Learning
const references = [
'.claude/commands/ccw.md',
'.claude/commands/ccw-coordinator.md',
...discoverReferenceSkills(input)
];
const styleProfile = await analyzeStyle(references);
console.log(`Style Analysis: ${styleProfile.organization.phaseStructure}, ${styleProfile.language.verbosity}`);
// Phase 2: Requirements
const requirements = await gatherRequirements(styleProfile);
console.log(`Requirements: ${requirements.name} (${requirements.steps} phases)`);
// Phase 3: Generation
const result = await generateSkillPackage(requirements);
console.log(`✅ Generated: ${result.skillPath}`);
return result;
}
```
## Output Structure
**Minimal Package** (default):
```
.claude/skills/{skill-name}/
├── SKILL.md # Entry point with frontmatter
└── README.md # Usage guide
```
**With Templates** (if needed):
```
.claude/skills/{skill-name}/
├── SKILL.md
├── README.md
└── templates/
└── base-template.md
```
**With Scripts** (if using Bash):
```
.claude/skills/{skill-name}/
├── SKILL.md
├── README.md
└── scripts/
└── helper.sh
```
## Key Design Principles
1. **Style Learning** - Analyze reference skills to maintain consistency
2. **Minimal Overhead** - Generate only essential files (SKILL.md + README)
3. **Progressive Disclosure** - Follow Anthropic's three-layer loading
4. **Flow-Based** - Use pseudocode and flow diagrams (when style appropriate)
5. **Interactive** - Guided requirements gathering via AskUserQuestion
6. **Fast Generation** - 3 phases instead of 6, focused on simplicity
7. **Style Awareness** - Adapt output based on detected patterns
## Style Pattern Detection
**Structural Patterns**:
- YAML frontmatter usage (100% in references)
- Section headers (H2 for major, H3 for sub-sections)
- Code blocks (JavaScript pseudocode, Bash examples)
- ASCII diagrams (architecture, flow charts)
**Language Patterns**:
- Instruction style: Procedural with function definitions
- Pseudocode: JavaScript-based with flow annotations
- Verbosity: Detailed but focused
- Terminology: Phase, workflow, pipeline, orchestrator
**Organization Patterns**:
- Phase structure: 3-5 sequential phases
- Example density: Moderate (1-2 per major section)
- Template usage: Minimal (only when necessary)
## Usage Examples
**Basic Generation**:
```
User: "Create a markdown formatter skill"
Lite-Skill-Generator:
→ Analyzes ccw.md style
→ Asks: Name? "markdown-formatter"
→ Asks: Purpose? "Transformation"
→ Asks: Steps? "3 steps"
→ Generates: .claude/skills/markdown-formatter/
```
**With Custom References**:
```
User: "Create a skill like software-manual but simpler"
Lite-Skill-Generator:
→ Analyzes software-manual skill
→ Learns: Multi-phase, agent-based, template-heavy
→ Simplifies: 3 phases, direct execution, minimal templates
→ Generates: Simplified version
```
## Comparison: lite-skill-generator vs skill-generator
| Aspect | lite-skill-generator | skill-generator |
|--------|---------------------|-----------------|
| **Phases** | 3 (Style → Req → Gen) | 6 (Spec → Req → Dir → Gen → Specs → Val) |
| **Style Learning** | Yes (analyze references) | No (fixed templates) |
| **Complexity** | Simple skills only | Full-featured skills |
| **Output** | Minimal (SKILL.md + README) | Complete (phases/, specs/, templates/) |
| **Generation Time** | Fast (~2 min) | Thorough (~10 min) |
| **Use Case** | Quick scaffolding | Production-ready skills |
## Workflow Integration
**Standalone**:
```bash
/skill:lite-skill-generator "Create a log analyzer skill"
```
**With References**:
```bash
/skill:lite-skill-generator "Create a skill based on ccw-coordinator.md style"
```
**Batch Generation** (for multiple simple skills):
```bash
/skill:lite-skill-generator "Create 3 skills: json-validator, yaml-parser, toml-converter"
```
---
**Next Steps After Generation**:
1. Review `.claude/skills/{name}/SKILL.md`
2. Customize phase logic for your use case
3. Add examples to README.md
4. Test skill with sample input
5. Iterate based on real usage

View File

@@ -0,0 +1,68 @@
---
name: {{SKILL_NAME}}
description: {{SKILL_DESCRIPTION}}
allowed-tools: {{ALLOWED_TOOLS}}
---
# {{SKILL_TITLE}}
{{SKILL_DESCRIPTION}}
## Architecture
```
┌─────────────────────────────────────────────────┐
│ {{SKILL_TITLE}} │
│ │
│ Input → {{PHASE_1}} → {{PHASE_2}} → Output │
└─────────────────────────────────────────────────┘
```
## Execution Flow
```javascript
async function {{SKILL_FUNCTION}}(input) {
// Phase 1: {{PHASE_1}}
const prepared = await phase1(input);
// Phase 2: {{PHASE_2}}
const result = await phase2(prepared);
return result;
}
```
### Phase 1: {{PHASE_1}}
```javascript
async function phase1(input) {
// TODO: Implement {{PHASE_1_LOWER}} logic
return output;
}
```
### Phase 2: {{PHASE_2}}
```javascript
async function phase2(input) {
// TODO: Implement {{PHASE_2_LOWER}} logic
return output;
}
```
## Usage
```bash
/skill:{{SKILL_NAME}} "input description"
```
## Examples
**Basic Usage**:
```
User: "{{EXAMPLE_INPUT}}"
{{SKILL_NAME}}:
→ Phase 1: {{PHASE_1_ACTION}}
→ Phase 2: {{PHASE_2_ACTION}}
→ Output: {{EXAMPLE_OUTPUT}}
```

View File

@@ -0,0 +1,64 @@
# Style Guide Template
Generated by lite-skill-generator style analysis phase.
## Detected Patterns
### Structural Patterns
| Pattern | Detected | Recommendation |
|---------|----------|----------------|
| YAML Frontmatter | {{HAS_FRONTMATTER}} | {{FRONTMATTER_REC}} |
| ASCII Diagrams | {{HAS_DIAGRAMS}} | {{DIAGRAMS_REC}} |
| Code Blocks | {{HAS_CODE_BLOCKS}} | {{CODE_BLOCKS_REC}} |
| Phase Structure | {{PHASE_STRUCTURE}} | {{PHASE_REC}} |
### Language Patterns
| Pattern | Value | Notes |
|---------|-------|-------|
| Instruction Style | {{INSTRUCTION_STYLE}} | imperative/declarative/procedural |
| Pseudocode Usage | {{PSEUDOCODE_USAGE}} | functional/imperative/none |
| Verbosity Level | {{VERBOSITY}} | concise/detailed/verbose |
| Common Terms | {{TERMINOLOGY}} | domain-specific vocabulary |
### Organization Patterns
| Pattern | Value |
|---------|-------|
| Phase Count | {{PHASE_COUNT}} |
| Example Density | {{EXAMPLE_DENSITY}} |
| Template Usage | {{TEMPLATE_USAGE}} |
## Style Compliance Checklist
- [ ] YAML frontmatter with name, description, allowed-tools
- [ ] Architecture diagram (if pattern detected)
- [ ] Execution flow section with pseudocode
- [ ] Phase sections (sequential numbered)
- [ ] Usage examples section
- [ ] README.md for external documentation
## Reference Skills Analyzed
{{#REFERENCES}}
- `{{REF_PATH}}`: {{REF_NOTES}}
{{/REFERENCES}}
## Generated Configuration
```json
{
"style": {
"structure": "{{STRUCTURE_TYPE}}",
"language": "{{LANGUAGE_TYPE}}",
"organization": "{{ORG_TYPE}}"
},
"recommendations": {
"usePseudocode": {{USE_PSEUDOCODE}},
"includeDiagrams": {{INCLUDE_DIAGRAMS}},
"verbosityLevel": "{{VERBOSITY}}",
"phaseCount": {{PHASE_COUNT}}
}
}
```

View File

@@ -12,26 +12,24 @@ Meta-skill for creating new Claude Code skills with configurable execution modes
```
┌─────────────────────────────────────────────────────────────────┐
│                         Skill Generator                         │
│                                                                 │
│  Input: User Request (skill name, purpose, mode)                │
│      ↓                                                          │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │ Phase 0-5: Sequential Pipeline                          │    │
│  │  ┌────┐ ┌────┐ ┌────┐ ┌────┐ ┌────┐ ┌────┐             │    │
│  │  │ P0 │→│ P1 │→│ P2 │→│ P3 │→│ P4 │→│ P5 │             │    │
│  │  │Spec│ │Req │ │Dir │ │Gen │ │Spec│ │Val │             │    │
│  │  └────┘ └────┘ └────┘ └─┬──┘ └────┘ └────┘             │    │
│  │                    ┌────┴────┐                          │    │
│  │                    ↓         ↓                          │    │
│  │               Sequential  Autonomous                    │    │
│  │               (phases/)   (actions/)                    │    │
│  └─────────────────────────────────────────────────────────┘    │
│      ↓                                                          │
│  Output: .claude/skills/{skill-name}/ (complete package)        │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
@@ -107,8 +105,7 @@ Phase 01 → Phase 02 → Phase 03 → ... → Phase N
| [templates/autonomous-action.md](templates/autonomous-action.md) | Autonomous action template |
| [templates/code-analysis-action.md](templates/code-analysis-action.md) | Code-analysis action template |
| [templates/llm-action.md](templates/llm-action.md) | LLM action template |
| [templates/script-template.md](templates/script-template.md) | Unified script template (Bash + Python) |
### Specification Documents (read as needed)
@@ -134,93 +131,288 @@ Phase 01 → Phase 02 → Phase 03 → ... → Phase N
## Execution Flow
```
Input Parsing:
└─ Convert user request to structured format (skill-name/purpose/mode)
Phase 0: Specification Study (⚠️ MANDATORY - must not be skipped)
└─ Read specification documents
├─ Load: ../_shared/SKILL-DESIGN-SPEC.md
├─ Load: All templates/*.md files
├─ Understand: Structure rules, naming conventions, quality standards
└─ Output: Internalized requirements (in-memory, no file output)
└─ Validation: ⛔ MUST complete before Phase 1
Phase 1: Requirements Discovery
└─ Gather skill requirements via user interaction
├─ Tool: AskUserQuestion
├─ Prompt: Skill name, purpose, execution mode
├─ Prompt: Phase/Action definition
│ └─ Prompt: Tool dependencies, output format
├─ Process: Generate configuration object
└─ Output: skill-config.json → ${workDir}/
├─ skill_name: string
├─ execution_mode: "sequential" | "autonomous"
├─ phases/actions: array
└─ allowed_tools: array
Phase 2: Structure Generation
└─ Create directory structure and entry file
├─ Input: skill-config.json (from Phase 1)
├─ Tool: Bash
│ └─ Execute: mkdir -p .claude/skills/{skill-name}/{phases,specs,templates,scripts}
├─ Tool: Write
│ └─ Generate: SKILL.md (entry point with architecture diagram)
└─ Output: Complete directory structure
├─ .claude/skills/{skill-name}/SKILL.md
├─ .claude/skills/{skill-name}/phases/
├─ .claude/skills/{skill-name}/specs/
├─ .claude/skills/{skill-name}/templates/
└─ .claude/skills/{skill-name}/scripts/
Phase 3: Phase/Action Generation
└─ Decision (execution_mode check):
├─ execution_mode === "sequential" → Generate Sequential Phases
│ ├─ Tool: Read (template: templates/sequential-phase.md)
│ ├─ Loop: For each phase in config.sequential_config.phases
│ │ ├─ Generate: phases/{phase-id}.md
│ │ └─ Link: Previous phase output → Current phase input
│ ├─ Tool: Write (orchestrator: phases/_orchestrator.md)
│ ├─ Tool: Write (workflow definition: workflow.json)
│ └─ Output: phases/01-{name}.md, phases/02-{name}.md, ...
└─ execution_mode === "autonomous" → Generate Orchestrator + Actions
├─ Tool: Read (template: templates/autonomous-orchestrator.md)
├─ Tool: Write (state schema: phases/state-schema.md)
├─ Tool: Write (orchestrator: phases/orchestrator.md)
├─ Tool: Write (action catalog: specs/action-catalog.md)
├─ Loop: For each action in config.autonomous_config.actions
│ ├─ Tool: Read (template: templates/autonomous-action.md)
│ └─ Generate: phases/actions/{action-id}.md
└─ Output: phases/orchestrator.md, phases/actions/*.md
Phase 4: Specs & Templates
└─ Generate domain specifications and templates
├─ Input: skill-config.json (domain context)
├─ Tool: Write
│ ├─ Generate: specs/{domain}-requirements.md
│ ├─ Generate: specs/quality-standards.md
│ └─ Generate: templates/agent-base.md (if needed)
└─ Output: Domain-specific documentation
├─ specs/{skill-name}-requirements.md
├─ specs/quality-standards.md
└─ templates/agent-base.md
Phase 5: Validation & Documentation
└─ Verify completeness and generate usage guide
├─ Input: All generated files from previous phases
├─ Tool: Glob + Read
│ └─ Check: Required files exist and contain proper structure
├─ Tool: Write
│ ├─ Generate: README.md (usage instructions)
│ └─ Generate: validation-report.json (completeness check)
└─ Output: Final documentation
├─ README.md (how to use this skill)
└─ validation-report.json (quality gate results)
Return:
└─ Summary with skill location and next steps
├─ Skill path: .claude/skills/{skill-name}/
├─ Status: ✅ All phases completed
└─ Suggestion: "Review SKILL.md and customize phase files as needed"
```
## Execution Protocol
```javascript
// Phase 0: Read specifications (in-memory)
Read('.claude/skills/_shared/SKILL-DESIGN-SPEC.md');
Read('.claude/skills/skill-generator/templates/*.md'); // All templates
// Phase 1: Gather requirements
const answers = AskUserQuestion({
  questions: [
    { question: "Skill name?", header: "Name", options: [...] },
    { question: "Execution mode?", header: "Mode", options: ["Sequential", "Autonomous"] }
  ]
});
const config = generateConfig(answers);
const workDir = `.workflow/.scratchpad/skill-gen-${timestamp}`;
Write(`${workDir}/skill-config.json`, JSON.stringify(config));
// Phase 2: Create structure
const skillDir = `.claude/skills/${config.skill_name}`;
Bash(`mkdir -p "${skillDir}/phases" "${skillDir}/specs" "${skillDir}/templates"`);
Write(`${skillDir}/SKILL.md`, generateSkillEntry(config));
// Phase 3: Generate phases (mode-dependent)
if (config.execution_mode === 'sequential') {
Write(`${skillDir}/phases/_orchestrator.md`, generateOrchestrator(config));
Write(`${skillDir}/workflow.json`, generateWorkflowDef(config));
config.sequential_config.phases.forEach(phase => {
Write(`${skillDir}/phases/${phase.id}.md`, generatePhase(phase, config));
});
} else {
Write(`${skillDir}/phases/orchestrator.md`, generateAutonomousOrchestrator(config));
Write(`${skillDir}/phases/state-schema.md`, generateStateSchema(config));
config.autonomous_config.actions.forEach(action => {
Write(`${skillDir}/phases/actions/${action.id}.md`, generateAction(action, config));
});
}
// Phase 4: Generate specs
Write(`${skillDir}/specs/${config.skill_name}-requirements.md`, generateRequirements(config));
Write(`${skillDir}/specs/quality-standards.md`, generateQualityStandards(config));
// Phase 5: Validate & Document
const validation = validateStructure(skillDir);
Write(`${skillDir}/validation-report.json`, JSON.stringify(validation));
Write(`${skillDir}/README.md`, generateReadme(config, validation));
```
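
`validateStructure` is referenced in Phase 5 above but not defined in this excerpt; a minimal sketch consistent with the Phase 5 description (glob for required files, report completeness), with all field names assumed:

```javascript
// Assumed shape of validateStructure (illustrative only)
function validateStructure(skillDir) {
  const required = [`${skillDir}/SKILL.md`];
  const found = Glob(`${skillDir}/**/*.md`);
  return {
    valid: required.every(f => found.includes(f)),
    missing: required.filter(f => !found.includes(f)),
    phase_files: Glob(`${skillDir}/phases/*.md`).length,
    spec_files: Glob(`${skillDir}/specs/*.md`).length
  };
}
```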
---
## Phase Reference Guide
Navigation and entry points for each execution phase:
### Phase 0: Specification Study (Mandatory)
**Document**: 🔗 [SKILL-DESIGN-SPEC.md](../_shared/SKILL-DESIGN-SPEC.md)
**Purpose**: Understand skill design standards before generating
**What to Read**:
- Skill structure conventions
- Naming standards
- Quality criteria
- Output format specifications
---
### Phase 1: Requirements Discovery
**Document**: 🔗 [phases/01-requirements-discovery.md](phases/01-requirements-discovery.md)
| Attribute | Value |
|-----------|-------|
| **Purpose** | Gather configuration from user via interactive prompts |
| **Input** | User responses (Skill name, purpose, execution mode) |
| **Output** | `skill-config.json` (configuration file) |
| **Key Decision** | Execution mode: Sequential vs Autonomous vs Hybrid |
---
### Phase 2: Structure Generation
**Document**: 🔗 [phases/02-structure-generation.md](phases/02-structure-generation.md)
| Attribute | Value |
|-----------|-------|
| **Purpose** | Create directory structure and entry file |
| **Input** | `skill-config.json` (from Phase 1) |
| **Output** | `.claude/skills/{skill-name}/` directory + `SKILL.md` |
---
### Phase 3: Phase/Action Generation
**Document**: 🔗 [phases/03-phase-generation.md](phases/03-phase-generation.md)
| Attribute | Value |
|-----------|-------|
| **Purpose** | Generate execution phases or actions based on mode |
| **Input** | `skill-config.json` + Templates |
| **Output** | Sequential: `phases/01-*.md` ... OR Autonomous: `phases/orchestrator.md` + `actions/*.md` |
**Decision Logic**:
```
IF execution_mode === "sequential":
└─ Generate sequential phases with linear flow
ELSE IF execution_mode === "autonomous":
└─ Generate orchestrator + independent actions
```
---
### Phase 4: Specs & Templates
**Document**: 🔗 [phases/04-specs-templates.md](phases/04-specs-templates.md)
| Attribute | Value |
|-----------|-------|
| **Purpose** | Generate domain specifications and template files |
| **Input** | `skill-config.json` (domain context) |
| **Output** | `specs/{skill-name}-requirements.md`, `specs/quality-standards.md` |
---
### Phase 5: Validation & Documentation
**Document**: 🔗 [phases/05-validation.md](phases/05-validation.md)
| Attribute | Value |
|-----------|-------|
| **Purpose** | Verify completeness and generate usage documentation |
| **Input** | All files from previous phases |
| **Output** | `README.md`, `validation-report.json` |
---
## Template Reference
| Template | Generated For | When Used |
|----------|--------------|-----------|
| [skill-md.md](templates/skill-md.md) | SKILL.md entry file | Phase 2 |
| [sequential-phase.md](templates/sequential-phase.md) | Sequential phase files | Phase 3 |
| [autonomous-orchestrator.md](templates/autonomous-orchestrator.md) | Orchestrator (autonomous) | Phase 3 |
| [autonomous-action.md](templates/autonomous-action.md) | Action files | Phase 3 |
| [code-analysis-action.md](templates/code-analysis-action.md) | Code analysis actions | Phase 3 |
| [llm-action.md](templates/llm-action.md) | LLM-powered actions | Phase 3 |
| [script-template.md](templates/script-template.md) | Bash + Python scripts | Phase 3/4 |
## Output Structure
### Sequential Mode
```
.claude/skills/{skill-name}/
├── SKILL.md                     # Entry file
├── phases/
│   ├── _orchestrator.md         # Declarative orchestrator
│   ├── workflow.json            # Workflow definition
│   ├── 01-{step-one}.md         # Phase 1
│   ├── 02-{step-two}.md         # Phase 2
│   └── 03-{step-three}.md       # Phase 3
├── specs/
│   ├── {skill-name}-requirements.md
│   └── quality-standards.md
├── templates/
│   └── agent-base.md
├── scripts/
└── README.md
```
### Autonomous Mode
```
.claude/skills/{skill-name}/
├── SKILL.md                     # Entry file
├── phases/
│   ├── orchestrator.md          # Orchestrator (state-driven)
│   ├── state-schema.md          # State structure definition
│   └── actions/
│       ├── action-init.md
│       ├── action-create.md
│       └── action-list.md
├── specs/
│   ├── {skill-name}-requirements.md
│   ├── action-catalog.md
│   └── quality-standards.md
├── templates/
│   ├── orchestrator-base.md
│   └── action-base.md
├── scripts/
└── README.md
```

View File

@@ -1,15 +1,4 @@
# Phase 1: Requirements Discovery
Collect requirements for the new skill and generate its configuration file.
## Execution Steps
### Step 1: Collect Basic Information
@@ -228,12 +217,12 @@ Bash(`mkdir -p "${workDir}"`);
Write(`${workDir}/skill-config.json`, JSON.stringify(config, null, 2));
```
## Output
- **File**: `skill-config.json`
- **Location**: `.workflow/.scratchpad/skill-gen-{timestamp}/`
- **Format**: JSON
## Next Phase
→ [Phase 2: Structure Generation](02-structure-generation.md)
**Data Flow to Phase 2**:
- skill-config.json with all configuration parameters
- Execution mode decision drives directory structure creation

View File

@@ -8,9 +8,7 @@
- Generate the SKILL.md entry file
- Create mode-specific subdirectories based on the execution mode
## Input
- Depends on: `skill-config.json` (Phase 1 output)
## Execution Steps
@@ -23,19 +21,79 @@ const skillDir = `.claude/skills/${config.skill_name}`;
### Step 2: Create Directory Structure
#### Base Directories (all modes)
```javascript
// Base structure (brace expansion must stay outside the quotes)
Bash(`mkdir -p "${skillDir}"/{phases,specs,templates,scripts}`);
```
#### Execution-Mode-Specific Directories
```
config.execution_mode
├─ "sequential"
│ ↓ Creates:
│ └─ phases/ (already covered by the base directories)
│ ├─ _orchestrator.md
│ └─ workflow.json
└─ "autonomous" | "hybrid"
↓ Creates:
└─ phases/actions/
├─ state-schema.md
└─ *.md (action files)
```
```javascript
// Extra directory for Autonomous/Hybrid modes
if (config.execution_mode === 'autonomous' || config.execution_mode === 'hybrid') {
Bash(`mkdir -p "${skillDir}/phases/actions"`);
}
```
#### Context-Strategy-Specific Directories (P0 enhancement)
```javascript
// ========== P0: Create directories based on the context strategy ==========
const contextStrategy = config.context_strategy || 'file';
if (contextStrategy === 'file') {
  // File strategy: create the context persistence directory
  Bash(`mkdir -p "${skillDir}/.scratchpad-template/context"`);
  // Create the context template file
  Write(
    `${skillDir}/.scratchpad-template/context/.gitkeep`,
    "# Runtime context storage for file-based strategy"
  );
}
// The memory strategy needs no directories (in-memory only)
```
**Directory Tree View**:
```
Sequential + File Strategy:
.claude/skills/{skill-name}/
├── phases/
│ ├── _orchestrator.md
│ ├── workflow.json
│ ├── 01-*.md
│ └── 02-*.md
├── .scratchpad-template/
│ └── context/ ← File strategy persistent storage
└── specs/
Autonomous + Memory Strategy:
.claude/skills/{skill-name}/
├── phases/
│ ├── orchestrator.md
│ ├── state-schema.md
│ └── actions/
│ └── *.md
└── specs/
```
### Step 3: Generate SKILL.md
@@ -192,16 +250,13 @@ function generateReferenceTable(config) {
}
```
## Output
- **Directory**: `.claude/skills/{skill-name}/`
- **Files**:
- `SKILL.md` (entry file)
- `phases/` (execution phases directory)
- `specs/` (specification documents directory)
- `templates/` (templates directory)
- `scripts/` (scripts directory for deterministic Python/Bash scripts)
## Next Phase
→ [Phase 3: Phase Generation](03-phase-generation.md)
**Data Flow to Phase 3**:
- Complete directory structure in .claude/skills/{skill-name}/
- SKILL.md entry file ready for phase/action generation
- skill-config.json for template population

View File

@@ -8,10 +8,7 @@
- Autonomous mode: generate the orchestrator and action files
- Supports both **file context** and **memory context** strategies
## Input
- Depends on: `skill-config.json`, SKILL.md (Phase 1-2 outputs)
- Templates: `templates/sequential-phase.md`, `templates/autonomous-*.md`
## Context Strategy (P0 enhancement)
@@ -55,66 +52,93 @@ const skillRoot = '.claude/skills/skill-generator';
```javascript
if (config.execution_mode === 'sequential') {
  const phases = config.sequential_config.phases;
  // ========== P0 enhancement: generate declarative orchestrator ==========
  const workflowOrchestrator = generateSequentialOrchestrator(config, phases);
  Write(`${skillDir}/phases/_orchestrator.md`, workflowOrchestrator);
  // ========== P0 enhancement: generate workflow definition ==========
  const workflowDef = generateWorkflowDefinition(config, phases);
  Write(`${skillDir}/workflow.json`, JSON.stringify(workflowDef, null, 2));
  // ========== P0 enhancement: generate Phase 0 (mandatory spec study) ==========
  const phase0Content = generatePhase0Spec(config);
  Write(`${skillDir}/phases/00-spec-study.md`, phase0Content);
  // ========== Generate the user-defined phase files ==========
  for (let i = 0; i < phases.length; i++) {
    const phase = phases[i];
    const prevPhase = i > 0 ? phases[i-1] : null;
    const nextPhase = i < phases.length - 1 ? phases[i+1] : null;
    const content = generateSequentialPhase({
      phaseNumber: i + 1,
      phaseId: phase.id,
      phaseName: phase.name,
      phaseDescription: phase.description || `Execute ${phase.name}`,
      input: prevPhase ? prevPhase.output : "phase 0 output", // Phase 0 is the first input source
      output: phase.output,
      nextPhase: nextPhase ? nextPhase.id : null,
      config: config,
      contextStrategy: contextStrategy
    });
    Write(`${skillDir}/phases/${phase.id}.md`, content);
  }
}
// ========== P0 enhancement: declarative workflow definition ==========
function generateWorkflowDefinition(config, phases) {
  // ========== P0: add the mandatory Phase 0 ==========
  const phase0 = {
    id: '00-spec-study',
    name: 'Specification Study',
    order: 0,
    input: null,
    output: 'spec-study-complete.flag',
    description: '⚠️ MANDATORY: Read all specification documents before execution',
    parallel: false,
    condition: null,
    agent: {
      type: 'universal-executor',
      run_in_background: false
    }
  };
  return {
    skill_name: config.skill_name,
    version: "1.0.0",
    execution_mode: "sequential",
    context_strategy: config.context_strategy || "file",
    // ========== P0: Phase 0 comes first ==========
    phases_to_run: ['00-spec-study', ...phases.map(p => p.id)],
    // ========== P0: Phase 0 + user-defined phases ==========
    phases: [
      phase0,
      ...phases.map((p, i) => ({
        id: p.id,
        name: p.name,
        order: i + 1,
        input: i === 0 ? phase0.output : phases[i-1].output, // First phase depends on Phase 0
        output: p.output,
        parallel: p.parallel || false,
        condition: p.condition || null,
        // Agent configuration (supports LLM integration)
        agent: p.agent || (config.llm_integration?.enabled ? {
          type: "llm",
          tool: config.llm_integration.default_tool,
          mode: config.llm_integration.mode || "analysis",
          fallback_chain: config.llm_integration.fallback_chain || [],
          run_in_background: false
        } : {
          type: "universal-executor",
          run_in_background: false
        })
      }))
    ],
    // Termination conditions
termination: {
on_success: "all_phases_completed",
@@ -236,10 +260,30 @@ async function executePhase(phaseId, phaseConfig, workDir) {
## Phase Execution Plan
**Execution Flow**:
\`\`\`
START
Phase 0: Specification Study
↓ Output: spec-study-complete.flag
Phase 1: ${phases[0]?.name || 'First Phase'}
↓ Output: ${phases[0]?.output || 'phase-1.json'}
${phases.slice(1).map((p, i) => `
Phase ${i+2}: ${p.name}
↓ Output: ${p.output}`).join('\n')}
COMPLETE
\`\`\`
**Phase List**:
| Order | Phase | Input | Output | Agent |
|-------|-------|-------|--------|-------|
| 0 | 00-spec-study | - | spec-study-complete.flag | universal-executor |
${phases.map((p, i) =>
`| ${i+1} | ${p.id} | ${i === 0 ? 'spec-study-complete.flag' : phases[i-1].output} | ${p.output} | ${p.agent?.type || 'universal-executor'} |`
).join('\n')}
## Error Recovery
@@ -754,6 +798,146 @@ ${actions.sort((a, b) => (b.priority || 0) - (a.priority || 0)).map(a =>
### Step 4: Helper Functions
```javascript
// ========== P0: Phase 0 生成函数 ==========
function generatePhase0Spec(config) {
const skillRoot = '.claude/skills/skill-generator';
const specsToRead = [
'../_shared/SKILL-DESIGN-SPEC.md',
`${skillRoot}/templates/*.md`
];
return `# Phase 0: Specification Study
⚠️ **MANDATORY PREREQUISITE** - this phase must not be skipped
## Objective
Read all specification documents in full and understand the skill design standards before generating any files.
## Why This Matters
**Skipping the specs (❌)**:
\`\`\`
Skip specs
├─ ✗ Non-compliant output
├─ ✗ Disorganized structure
└─ ✗ Quality problems
\`\`\`
**Studying the specs (✅)**:
\`\`\`
Full study
├─ ✓ Standardized output
├─ ✓ High-quality code
└─ ✓ Easy to maintain
\`\`\`
## Required Reading
### P0 - Core Design Spec
\`\`\`javascript
// Universal design standard (MUST READ)
const designSpec = Read('.claude/skills/_shared/SKILL-DESIGN-SPEC.md');
// Key content checkpoints:
const checkpoints = {
  structure: 'Directory structure conventions',
  naming: 'Naming conventions',
  quality: 'Quality standards',
  output: 'Output format requirements'
};
\`\`\`
### P1 - Template Files (read before generating)
\`\`\`javascript
// Load the templates matching the execution mode
const templates = {
all: [
'templates/skill-md.md' // SKILL.md entry file template
],
sequential: [
'templates/sequential-phase.md'
],
autonomous: [
'templates/autonomous-orchestrator.md',
'templates/autonomous-action.md'
]
};
const mode = '${config.execution_mode}';
const requiredTemplates = [...templates.all, ...templates[mode]];
requiredTemplates.forEach(template => {
const content = Read(\`.claude/skills/skill-generator/\${template}\`);
// Understand template structure, variable slots, and generation rules
});
\`\`\`
## Execution
\`\`\`javascript
// ========== Load specifications ==========
const specs = [];
// 1. Design spec (P0)
specs.push({
file: '../_shared/SKILL-DESIGN-SPEC.md',
content: Read('.claude/skills/_shared/SKILL-DESIGN-SPEC.md'),
priority: 'P0'
});
// 2. Template files (P1)
const templateFiles = Glob('.claude/skills/skill-generator/templates/*.md');
templateFiles.forEach(file => {
specs.push({
file: file,
content: Read(file),
priority: 'P1'
});
});
// ========== Internalize specifications ==========
console.log('📖 Reading specifications...');
specs.forEach(spec => {
console.log(\` [\${spec.priority}] \${spec.file}\`);
// Understand the content (in-memory processing only, no files generated)
});
// ========== Write completion flag ==========
const result = {
status: 'completed',
specs_loaded: specs.length,
timestamp: new Date().toISOString()
};
Write(\`\${workDir}/spec-study-complete.flag\`, JSON.stringify(result, null, 2));
\`\`\`
## Output
- **Flag file**: \`spec-study-complete.flag\` (proof that the reading is complete)
- **Side effect**: internalized spec knowledge; later phases follow the standards
## Success Criteria
✅ **Pass criteria**:
- [ ] Read SKILL-DESIGN-SPEC.md
- [ ] Read the template files for the chosen execution mode
- [ ] Understand directory structure conventions
- [ ] Understand naming conventions
- [ ] Understand quality standards
## Next Phase
→ [Phase 1: Requirements Discovery](01-requirements-discovery.md)
**Key point**: only after the spec study is complete can Phase 1 correctly gather requirements and produce a standards-compliant configuration.
`;
}
// ========== Other helper functions ==========
function toPascalCase(str) {
return str.split('-').map(s => s.charAt(0).toUpperCase() + s.slice(1)).join('');
}
@@ -782,21 +966,4 @@ function getPreconditionCheck(action) {
}
```
## Output
### Sequential Mode
- `phases/_orchestrator.md` (declarative orchestrator)
- `workflow.json` (workflow definition)
- `phases/01-{step}.md`, `02-{step}.md`, ...
### Autonomous Mode
- `phases/orchestrator.md` (enhanced orchestrator)
- `phases/state-schema.md`
- `specs/action-catalog.md` (declarative action catalog)
- `phases/actions/action-{name}.md` (multiple)
## Next Phase
→ [Phase 4: Specs & Templates](04-specs-templates.md)

View File

@@ -1,26 +1,107 @@
# Phase 4: Specifications & Templates Generation
Generate domain requirements, quality standards, agent templates, and action catalogs.
## Objective
Generate comprehensive specifications and templates:
- Domain requirements document with validation function
- Quality standards with automated check system
- Agent base template with prompt structure
- Action catalog for autonomous mode (conditional)
## Input
**File Dependencies**:
- `skill-config.json` (from Phase 1)
- `.claude/skills/{skill-name}/` directory (from Phase 2)
- Generated phase/action files (from Phase 3)
**Required Information**:
- Skill name, display name, description
- Execution mode (determines if action-catalog.md is generated)
- Output format and location
- Phase/action definitions
## Output
**Generated Files**:
| File | Purpose | Generation Condition |
|------|---------|---------------------|
| `specs/{skill-name}-requirements.md` | Domain requirements with validation | Always |
| `specs/quality-standards.md` | Quality evaluation criteria | Always |
| `templates/agent-base.md` | Agent prompt template | Always |
| `specs/action-catalog.md` | Action dependency graph and selection priority | Autonomous/Hybrid mode only |
**File Structure**:
**Domain Requirements** (`specs/{skill-name}-requirements.md`):
```markdown
# {display_name} Requirements
- When to Use (phase/action reference table)
- Domain Requirements (functional, output, and quality requirements)
- Validation Function (JavaScript code)
- Error Handling (recovery strategies)
```
**Quality Standards** (`specs/quality-standards.md`):
```markdown
# Quality Standards
- Quality Dimensions (Completeness 25%, Consistency 25%, Accuracy 25%, Usability 25%)
- Quality Gates (Pass ≥80%, Review 60-79%, Fail <60%)
- Issue Classification (Errors, Warnings, Info)
- Automated Checks (runQualityChecks function)
```
**Agent Base** (`templates/agent-base.md`):
```markdown
# Agent Base Template
- Common prompt structure (ROLE, PROJECT CONTEXT, TASK, CONSTRAINTS, OUTPUT_FORMAT, QUALITY_CHECKLIST)
- Variable descriptions (workDir, output_path)
- Return format (AgentReturn interface)
- Role definition reference (phase/action specific agents)
```
**Action Catalog** (`specs/action-catalog.md`, Autonomous/Hybrid only):
```markdown
# Action Catalog
- Available Actions (table with Purpose, Preconditions, Effects)
- Action Dependencies (Mermaid diagram)
- State Transitions (state machine table)
- Selection Priority (ordered action list)
```
## Decision Logic
```
Decision (execution_mode check):
├─ mode === 'sequential' → Generate 3 files only
│ └─ Files: requirements.md, quality-standards.md, agent-base.md
├─ mode === 'autonomous' → Generate 4 files
│ ├─ Files: requirements.md, quality-standards.md, agent-base.md
│ └─ Additional: action-catalog.md (with action dependencies)
└─ mode === 'hybrid' → Generate 4 files
├─ Files: requirements.md, quality-standards.md, agent-base.md
└─ Additional: action-catalog.md (with hybrid logic)
```
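A minimal sketch of this mode check (the helper name is assumed, not part of the generated skill):

```javascript
// Sketch: map the execution mode to the file set described above
function filesForMode(mode) {
  const base = [
    'specs/{skill-name}-requirements.md',
    'specs/quality-standards.md',
    'templates/agent-base.md'
  ];
  // autonomous and hybrid modes additionally get the action catalog
  return mode === 'sequential' ? base : [...base, 'specs/action-catalog.md'];
}
```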
## Execution Protocol
```javascript
// Phase 4: Generate Specifications & Templates
// Reference: phases/04-specs-templates.md
// Load config and setup
const config = JSON.parse(Read(`${workDir}/skill-config.json`));
const skillDir = `.claude/skills/${config.skill_name}`;
// Ensure specs and templates directories exist (created in Phase 2)
// skillDir structure: phases/, specs/, templates/
// Step 1: Generate domain requirements
const domainRequirements = `# ${config.display_name} Requirements
${config.description}
@@ -29,8 +110,8 @@ ${config.description}
| Phase | Usage | Reference |
|-------|-------|-----------|
${config.execution_mode === 'sequential' ?
config.sequential_config.phases.map((p, i) =>
`| Phase ${i+1} | ${p.name} | ${p.id}.md |`
).join('\n') :
`| Orchestrator | Action selection | orchestrator.md |
@@ -67,7 +148,7 @@ function validate${toPascalCase(config.skill_name)}(output) {
{ name: "格式正确", pass: output.format === "${config.output.format}" },
{ name: "内容完整", pass: output.content?.length > 0 }
];
return {
passed: checks.filter(c => c.pass).length,
total: checks.length,
@@ -86,11 +167,8 @@ function validate${toPascalCase(config.skill_name)}(output) {
`;
Write(`${skillDir}/specs/${config.skill_name}-requirements.md`, domainRequirements);
```
```javascript
// Step 2: Generate quality standards
const qualityStandards = `# Quality Standards
Quality evaluation standards for ${config.display_name}.
@@ -176,7 +254,7 @@ function runQualityChecks(workDir) {
return {
score: results.overall,
gate: results.overall >= 80 ? 'pass' :
results.overall >= 60 ? 'review' : 'fail',
details: results
};
@@ -185,11 +263,8 @@ function runQualityChecks(workDir) {
`;
Write(`${skillDir}/specs/quality-standards.md`, qualityStandards);
```
```javascript
// Step 3: Generate agent base template
const agentBase = `# Agent Base Template
Agent base template for ${config.display_name}.
@@ -246,20 +321,17 @@ interface AgentReturn {
## Role Definition Reference
${config.execution_mode === 'sequential' ?
config.sequential_config.phases.map((p, i) =>
`- **Phase ${i+1} Agent**: ${p.name} expert`
).join('\n') :
config.autonomous_config.actions.map(a =>
`- **${a.name} Agent**: ${a.description || a.name + ' executor'}`
).join('\n')}
`;
Write(`${skillDir}/templates/agent-base.md`, agentBase);
```
```javascript
// Step 4: Conditional - Generate action catalog for autonomous/hybrid mode
if (config.execution_mode === 'autonomous' || config.execution_mode === 'hybrid') {
const actionCatalog = `# Action Catalog
@@ -269,7 +341,7 @@ ${config.display_name} 的可用动作目录。
| Action | Purpose | Preconditions | Effects |
|--------|---------|---------------|---------|
${config.autonomous_config.actions.map(a =>
`| [${a.id}](../phases/actions/${a.id}.md) | ${a.description || a.name} | ${a.preconditions?.join(', ') || '-'} | ${a.effects?.join(', ') || '-'} |`
).join('\n')}
@@ -289,7 +361,7 @@ ${config.autonomous_config.actions.map((a, i, arr) => {
| From State | Action | To State |
|------------|--------|----------|
| pending | action-init | running |
${config.autonomous_config.actions.slice(1).map(a =>
`| running | ${a.id} | running |`
).join('\n')}
| running | action-complete | completed |
@@ -299,30 +371,28 @@ ${config.autonomous_config.actions.slice(1).map(a =>
When the preconditions of multiple actions are all satisfied, select by the following priority:
${config.autonomous_config.actions.map((a, i) =>
`${i + 1}. \`${a.id}\` - ${a.name}`
).join('\n')}
`;
Write(`${skillDir}/specs/action-catalog.md`, actionCatalog);
}
```
```javascript
// Helper function
function toPascalCase(str) {
return str.split('-').map(s => s.charAt(0).toUpperCase() + s.slice(1)).join('');
}
// Phase output summary
console.log('Phase 4 complete: Generated specs and templates');
```
## Output
- `specs/{skill-name}-requirements.md` - domain requirements
- `specs/quality-standards.md` - quality standards
- `specs/action-catalog.md` - action catalog (autonomous mode)
- `templates/agent-base.md` - agent template
## Next Phase
→ [Phase 5: Validation](05-validation.md)
**Data Flow to Phase 5**:
- All generated files in `specs/` and `templates/`
- skill-config.json for validation reference
- Complete skill directory structure ready for final validation

View File

@@ -1,27 +1,119 @@
# Phase 5: Validation & Documentation
Verify generated skill completeness and generate user documentation.
## Objective
Comprehensive validation and documentation:
- Verify all required files exist
- Check file content quality and completeness
- Generate validation report with issues and recommendations
- Generate README.md usage documentation
- Output final status and next steps
## Input
**File Dependencies**:
- `skill-config.json` (from Phase 1)
- `.claude/skills/{skill-name}/` directory (from Phase 2)
- All generated phase/action files (from Phase 3)
- All generated specs/templates files (from Phase 4)
**Required Information**:
- Skill name, display name, description
- Execution mode
- Trigger words
- Output configuration
- Complete skill directory structure
## Output
**Generated Files**:
| File | Purpose | Content |
|------|---------|---------|
| `validation-report.json` (workDir) | Validation report with detailed checks | File completeness, content quality, issues, recommendations |
| `README.md` (skillDir) | User documentation | Quick Start, Usage, Output, Directory Structure, Customization |
**Validation Report Structure** (`validation-report.json`):
```json
{
"skill_name": "...",
"execution_mode": "sequential|autonomous",
"generated_at": "ISO timestamp",
"file_checks": {
"total": N,
"existing": N,
"with_content": N,
"with_todos": N,
"details": [...]
},
"content_checks": {
"files_checked": N,
"all_passed": true|false,
"details": [...]
},
"summary": {
"status": "PASS|REVIEW|FAIL",
"issues": [...],
"recommendations": [...]
}
}
```
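A short sketch of how a later step might consume this report (field names taken from the structure above):

```javascript
// Sketch: read the validation report and surface any issues
const report = JSON.parse(Read(`${workDir}/validation-report.json`));
if (report.summary.status !== 'PASS') {
  report.summary.issues.forEach(issue => {
    console.log(`[${issue.type}] ${issue.message}`);
  });
}
```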
**README Structure** (`README.md`):
```markdown
# {display_name}
- Quick Start (Triggers, Execution Mode)
- Usage (Examples)
- Output (Format, Location, Filename)
- Directory Structure (Tree view)
- Customization (How to modify)
- Related Documents (Links)
```
**Validation Status Gates**:
| Status | Condition | Meaning |
|--------|-----------|---------|
| PASS | All files exist + All content checks passed | Ready for use |
| REVIEW | All files exist + Some content checks failed | Needs refinement |
| FAIL | Missing files | Incomplete generation |
## Decision Logic
```
Decision (Validation Flow):
├─ File Completeness Check
│ ├─ All files exist → Continue to content checks
│ └─ Missing files → Status = FAIL, collect missing file errors
├─ Content Quality Check
│ ├─ Sequential mode → Check phase files for structure
│ ├─ Autonomous mode → Check orchestrator + action files
│ └─ Common → Check SKILL.md, specs/, templates/
├─ Status Calculation
│ ├─ All files exist + All checks pass → Status = PASS
│ ├─ All files exist + Some checks fail → Status = REVIEW
│ └─ Missing files → Status = FAIL
└─ Generate Report & README
├─ validation-report.json (with issues and recommendations)
└─ README.md (with usage documentation)
```
## Execution Protocol
```javascript
// Phase 5: Validation & Documentation
// Reference: phases/05-validation.md
// Load config and setup
const config = JSON.parse(Read(`${workDir}/skill-config.json`));
const skillDir = `.claude/skills/${config.skill_name}`;
// Step 1: File completeness check
const requiredFiles = {
common: [
'SKILL.md',
@@ -64,14 +156,11 @@ const fileCheckResults = filesToCheck.map(file => {
};
}
});
```
```javascript
// Step 2: Content quality check
const contentChecks = [];
// Check SKILL.md structure
const skillMd = Read(`${skillDir}/SKILL.md`);
contentChecks.push({
file: 'SKILL.md',
@@ -83,11 +172,11 @@ contentChecks.push({
]
});
// Check phase files
const phaseFiles = Glob(`${skillDir}/phases/*.md`);
for (const phaseFile of phaseFiles) {
if (phaseFile.includes('/actions/')) continue; // Check separately
const content = Read(phaseFile);
contentChecks.push({
file: phaseFile.replace(skillDir + '/', ''),
@@ -100,7 +189,7 @@ for (const phaseFile of phaseFiles) {
});
}
// Check specs files
const specFiles = Glob(`${skillDir}/specs/*.md`);
for (const specFile of specFiles) {
const content = Read(specFile);
@@ -113,16 +202,13 @@ for (const specFile of specFiles) {
]
});
}
```
```javascript
// Step 3: Generate validation report
const report = {
skill_name: config.skill_name,
execution_mode: config.execution_mode,
generated_at: new Date().toISOString(),
file_checks: {
total: fileCheckResults.length,
existing: fileCheckResults.filter(f => f.exists).length,
@@ -130,13 +216,13 @@ const report = {
with_todos: fileCheckResults.filter(f => f.hasTodo).length,
details: fileCheckResults
},
content_checks: {
files_checked: contentChecks.length,
all_passed: contentChecks.every(c => c.checks.every(ch => ch.pass)),
details: contentChecks
},
summary: {
status: calculateOverallStatus(fileCheckResults, contentChecks),
issues: collectIssues(fileCheckResults, contentChecks),
@@ -146,10 +232,11 @@ const report = {
Write(`${workDir}/validation-report.json`, JSON.stringify(report, null, 2));
// Helper functions
function calculateOverallStatus(fileResults, contentResults) {
const allFilesExist = fileResults.every(f => f.exists);
const allContentPassed = contentResults.every(c => c.checks.every(ch => ch.pass));
if (allFilesExist && allContentPassed) return 'PASS';
if (allFilesExist) return 'REVIEW';
return 'FAIL';
@@ -157,44 +244,41 @@ function calculateOverallStatus(fileResults, contentResults) {
function collectIssues(fileResults, contentResults) {
const issues = [];
fileResults.filter(f => !f.exists).forEach(f => {
issues.push({ type: 'ERROR', message: `Missing file: ${f.file}` });
});
fileResults.filter(f => f.hasTodo).forEach(f => {
issues.push({ type: 'WARNING', message: `Contains TODO: ${f.file}` });
});
contentResults.forEach(c => {
c.checks.filter(ch => !ch.pass).forEach(ch => {
issues.push({ type: 'WARNING', message: `${c.file}: missing ${ch.name}` });
});
});
return issues;
}
function generateRecommendations(fileResults, contentResults) {
const recommendations = [];
if (fileResults.some(f => f.hasTodo)) {
recommendations.push('Replace all TODO placeholders with real content');
}
contentResults.forEach(c => {
if (c.checks.some(ch => !ch.pass)) {
recommendations.push(`Improve the structure of ${c.file}`);
}
});
return recommendations;
}
```
```javascript
// Step 4: Generate README.md
const readme = `# ${config.display_name}
${config.description}
@@ -210,10 +294,10 @@ ${config.triggers.map(t => `- "${t}"`).join('\n')}
**${config.execution_mode === 'sequential' ? 'Sequential' : 'Autonomous'}**
${config.execution_mode === 'sequential' ?
`Phases execute in a fixed order:\n${config.sequential_config.phases.map((p, i) =>
`${i + 1}. ${p.name}`
).join('\n')}` :
`Actions are selected dynamically by the orchestrator:\n${config.autonomous_config.actions.map(a =>
`- ${a.name}: ${a.description || ''}`
).join('\n')}`}
@@ -283,24 +367,21 @@ ${config.execution_mode === 'sequential' ?
`;
Write(`${skillDir}/README.md`, readme);
```
```javascript
// Step 5: Output final result
const finalResult = {
skill_name: config.skill_name,
skill_path: skillDir,
execution_mode: config.execution_mode,
generated_files: [
'SKILL.md',
'README.md',
...filesToCheck
],
validation: report.summary,
next_steps: [
'1. Review the generated file structure',
'2. Replace TODO placeholders',
@@ -319,16 +400,18 @@ console.log('下一步:');
finalResult.next_steps.forEach(s => console.log(s));
```
## Workflow Completion
**Final Status**: Skill generation pipeline complete
**Generated Artifacts**:
- Complete skill directory structure in `.claude/skills/{skill-name}/`
- Validation report in `{workDir}/validation-report.json`
- User documentation in `{skillDir}/README.md`
**Next Steps**:
1. Review validation report for any issues or recommendations
2. Replace TODO placeholders with actual implementation
3. Test skill execution with trigger words
4. Customize phase logic based on specific requirements
5. Update triggers and descriptions as needed

View File

@@ -2,6 +2,20 @@
Template for autonomous-mode action files.
## Purpose
Generates Action files for the Autonomous execution mode, defining independently executable action units.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 3 (Phase Generation) | Generated when `config.execution_mode === 'autonomous'` |
| Generation Trigger | One action file generated per entry in `config.autonomous_config.actions` |
| Output Location | `.claude/skills/{skill-name}/phases/actions/{action-id}.md` |
---
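A minimal sketch of how Phase 3 might apply this template (`renderTemplate` is an assumed helper that fills the `{{...}}` variables):

```javascript
// Sketch: generate one action file per configured action
const template = Read('.claude/skills/skill-generator/templates/autonomous-action.md');
config.autonomous_config.actions.forEach(action => {
  const content = renderTemplate(template, action); // fill {{...}} slots from the action config
  Write(`.claude/skills/${config.skill_name}/phases/actions/${action.id}.md`, content);
});
```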
## Template Structure
```markdown
@@ -70,10 +84,42 @@ return {
| `{{error_handling_table}}` | Error-handling table |
| `{{next_actions_hints}}` | Hints for follow-up actions |
## Action Lifecycle
```
State-driven execution flow:
state.status === 'pending'
┌─ Init ─┐ ← runs once: environment setup
│ create work directory │
│ initialize context │
│ status → running │
└────┬────┘
┌─ CRUD Loop ─┐ ← N iterations: core business logic
│ orchestrator selects action │ List / Create / Edit / Delete
│ execute(state) │ shared pattern: collect input → mutate context.items → return updates
│ update state │
└────┬────┘
┌─ Complete ─┐ ← runs once: save results
│ serialize output │
│ status → completed │
└──────────┘
Shared state structure:
state.status → 'pending' | 'running' | 'completed'
state.context.items → business data array
state.completed_actions → IDs of executed actions
```
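Expressed as code, the lifecycle is a simple state-driven loop (a sketch; `selectCrudAction` and `executeAction` are assumed helpers):

```javascript
// Sketch: orchestrator loop over the lifecycle above
let state = { status: 'pending', context: { items: [] }, completed_actions: [] };
while (state.status !== 'completed') {
  const actionId = state.status === 'pending'
    ? 'action-init'              // runs exactly once
    : selectCrudAction(state);   // List / Create / Edit / Delete, or action-complete on exit
  const { stateUpdates } = await executeAction(actionId, state);
  state = {
    ...state,
    ...stateUpdates,
    completed_actions: [...state.completed_actions, actionId]
  };
}
```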
## Action Type Templates
### 1. Initialization Action (Init)
**Trigger**: `state.status === 'pending'`; executes exactly once
```markdown
# Action: Initialize
@@ -91,101 +137,29 @@ return {
\`\`\`javascript
async function execute(state) {
// 1. Create work directory
Bash(\`mkdir -p "\${workDir}"\`);
// 2. Return state updates
return {
stateUpdates: {
status: 'running',
started_at: new Date().toISOString(),
context: { items: [], metadata: {} }
}
};
}
\`\`\`
## State Updates
\`\`\`javascript
return {
stateUpdates: {
status: 'running',
started_at: new Date().toISOString(),
context: { /* initial data */ }
}
};
\`\`\`
## Next Actions
- Success: enter the main processing loop (the orchestrator selects the first CRUD action)
- Failure: action-abort
```
### 2. CRUD Actions (List / Create / Edit / Delete)
**Trigger**: `state.status === 'running'`; loops until the user exits
> Create is shown below as the example of the shared pattern. List / Edit / Delete follow the same structure; only the execution logic and the state-update fields differ.
```markdown
# Action: Create Item
@@ -194,7 +168,7 @@ return {
## Purpose
Collect user input and append a new record to context.items
## Preconditions
@@ -204,26 +178,24 @@ return {
\`\`\`javascript
async function execute(state) {
// 1. Collect input
const input = await AskUserQuestion({
questions: [{
question: "Enter a project name:",
header: "Name",
multiSelect: false,
options: [{ label: "Manual input", description: "Enter a custom name" }]
}]
});
// 2. Mutate context.items (the core logic varies by action type)
const newItem = {
id: Date.now().toString(),
name: input["Name"],
status: 'pending',
created_at: new Date().toISOString()
};
// 3. Return state updates
return {
stateUpdates: {
@@ -231,177 +203,30 @@ async function execute(state) {
...state.context,
items: [...(state.context.items || []), newItem]
},
last_action: 'create'
}
};
}
\`\`\`
## State Updates
\`\`\`javascript
return {
stateUpdates: {
'context.items': [...items, newItem],
last_action: 'create',
last_created_id: newItem.id
}
};
\`\`\`
## Next Actions
- Continue: the orchestrator selects the next action based on state
- User exit: action-complete
```
**Differences among the other CRUD actions:**
| Action | Core Logic | Extra Preconditions | Key State Fields |
|------|---------|------------|------------|
| List | `items.forEach(→ console.log)` | None | `current_view: 'list'` |
| Create | `items.push(newItem)` | None | `last_created_id` |
| Edit | `items.map(→ replace matching item)` | `selected_item_id !== null` | `updated_at` |
| Delete | `items.filter(→ exclude matching item)` | `selected_item_id !== null` | confirm dialog → execute |
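The core-logic column above reduces to one expression per action; a sketch (parameter names assumed):

```javascript
// Sketch: per-action core logic over context.items
const core = {
  list:   (items)              => { items.forEach(i => console.log(`${i.name} - ${i.status}`)); return items; },
  create: (items, newItem)     => [...items, newItem],
  edit:   (items, id, changes) => items.map(i => i.id === id ? { ...i, ...changes, updated_at: new Date().toISOString() } : i),
  delete: (items, id)          => items.filter(i => i.id !== id)
};
```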
### 3. Completion Action (Complete)
**Trigger**: the user explicitly exits or a termination condition is met; executes exactly once
```markdown
# Action: Complete
@@ -410,7 +235,7 @@ return {
## Purpose
Serialize the final state and end Skill execution.
## Preconditions
@@ -420,41 +245,26 @@ return {
\`\`\`javascript
async function execute(state) {
// 1. Save final data
Write(\`\${workDir}/final-output.json\`, JSON.stringify(state.context, null, 2));
// 2. Generate summary
const summary = {
total_items: state.context.items?.length || 0,
duration: Date.now() - new Date(state.started_at).getTime(),
actions_executed: state.completed_actions.length
};
console.log(\`Task complete: \${summary.total_items} items, \${summary.actions_executed} actions\`);
return {
stateUpdates: {
status: 'completed',
completed_at: new Date().toISOString(),
summary
}
};
}
\`\`\`
## State Updates
\`\`\`javascript
return {
stateUpdates: {
status: 'completed',
completed_at: new Date().toISOString(),
summary: { /* statistics */ }
}
};
\`\`\`
## Next Actions
- None (terminal state)

View File

@@ -2,6 +2,20 @@
Template for the autonomous-mode orchestrator.
## Purpose
Generates the Orchestrator file for the Autonomous execution mode, responsible for state-driven action selection and the execution loop.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 3 (Phase Generation) | Generated when `config.execution_mode === 'autonomous'` |
| Generation Trigger | Creates the orchestrator logic that manages action selection and state updates |
| Output Location | `.claude/skills/{skill-name}/phases/orchestrator.md` |
---
## ⚠️ Important
> **Phase 0 is a mandatory prerequisite**: the Phase 0 specification study must be completed before the Orchestrator starts its execution loop.

View File

@@ -2,6 +2,18 @@
Template for code-analysis actions, used to integrate code exploration and analysis capabilities into a Skill.
## Purpose
Generates code-analysis actions for a Skill, integrating MCP tools (ACE) and agents for semantic search and deep analysis.
## Usage Context
| Phase | Usage |
|-------|-------|
| Optional | Used when the Skill needs code exploration and analysis |
| Generation Trigger | User opts to add the code-analysis action type |
| Agent Types | Explore, cli-explore-agent, universal-executor |
---
## Configuration Structure

View File

@@ -2,6 +2,18 @@
Template for LLM actions, used to integrate LLM invocation into a Skill.
## Purpose
Generates LLM actions for a Skill, calling Gemini/Qwen/Codex through the unified CCW CLI interface for analysis or generation.
## Usage Context
| Phase | Usage |
|-------|-------|
| Optional | Used when the Skill needs LLM capabilities |
| Generation Trigger | User opts to add the llm action type |
| Tools | gemini, qwen, codex (fallback chain supported) |
---
## Configuration Structure

View File

@@ -0,0 +1,368 @@
# Script Template
Unified script template covering the Bash and Python runtimes.
## Usage Context
| Phase | Usage |
|-------|-------|
| Optional | Used when a Phase/Action declares `## Scripts` |
| Execution | Invoked via `ExecuteScript('script-id', params)` |
| Output Location | `.claude/skills/{skill-name}/scripts/{script-id}.{ext}` |
---
## Calling Interface
All scripts share the same calling convention:
```
Caller
↓ ExecuteScript('script-id', { key: value })
Script entry
├─ Argument parsing (--key value)
├─ Input validation (required params, file existence)
├─ Core processing (read → transform → write)
└─ Result output (last line: single-line JSON → stdout)
├─ Success: {"status":"success", "output_file":"...", ...}
└─ Failure: error message to stderr, exit 1
```
### Return Format
```typescript
interface ScriptResult {
success: boolean; // exit code === 0
stdout: string; // standard output
stderr: string; // standard error
outputs: object; // JSON parsed from the last line of stdout
}
```
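A sketch of an `ExecuteScript` implementation that follows this contract (it assumes the `Bash` tool returns `{ stdout, stderr, exitCode }`, which is not shown elsewhere in this document):

```javascript
// Sketch: invoke a script and parse the last stdout line as JSON
function ExecuteScript(scriptId, params) {
  const flags = Object.entries(params)
    .map(([key, value]) => `--${key} "${value}"`)
    .join(' ');
  const { stdout, stderr, exitCode } = Bash(`scripts/${scriptId}.sh ${flags}`);
  const lastLine = stdout.trim().split('\n').pop(); // contract: last line is single-line JSON
  return {
    success: exitCode === 0,
    stdout,
    stderr,
    outputs: exitCode === 0 ? JSON.parse(lastLine) : {}
  };
}
```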
### Parameter Conventions
| Parameter | Required | Description |
|------|------|------|
| `--input-path` | ✓ | Input file path |
| `--output-dir` | ✓ | Output directory (specified by the caller) |
| Others | As needed | Script-specific parameters |
---
## Bash Implementation
```bash
#!/bin/bash
# {{script_description}}
set -euo pipefail
# ============================================================
# Argument parsing
# ============================================================
INPUT_PATH=""
OUTPUT_DIR=""
while [[ "$#" -gt 0 ]]; do
case $1 in
--input-path) INPUT_PATH="$2"; shift ;;
--output-dir) OUTPUT_DIR="$2"; shift ;;
--help)
echo "Usage: $0 --input-path <path> --output-dir <dir>"
exit 0
;;
*)
echo "Error: unknown argument $1" >&2
exit 1
;;
esac
shift
done
# ============================================================
# Argument validation
# ============================================================
[[ -z "$INPUT_PATH" ]] && { echo "Error: --input-path is required" >&2; exit 1; }
[[ -z "$OUTPUT_DIR" ]] && { echo "Error: --output-dir is required" >&2; exit 1; }
[[ ! -f "$INPUT_PATH" ]] && { echo "Error: input file not found: $INPUT_PATH" >&2; exit 1; }
command -v jq &> /dev/null || { echo "Error: jq is required" >&2; exit 1; }
mkdir -p "$OUTPUT_DIR"
# ============================================================
# Core logic
# ============================================================
OUTPUT_FILE="$OUTPUT_DIR/result.txt"
ITEMS_COUNT=0
# TODO: implement processing logic
while IFS= read -r line; do
echo "$line" >> "$OUTPUT_FILE"
ITEMS_COUNT=$((ITEMS_COUNT + 1))  # avoid ((ITEMS_COUNT++)): it returns nonzero at 0 and trips `set -e`
done < "$INPUT_PATH"
# ============================================================
# Emit JSON result (built with jq to avoid escaping issues)
# ============================================================
jq -n \
--arg output_file "$OUTPUT_FILE" \
--argjson items_processed "$ITEMS_COUNT" \
'{output_file: $output_file, items_processed: $items_processed, status: "success"}'
```
### Common Bash Patterns
```bash
# Iterate over files
for file in "$INPUT_DIR"/*.json; do
[[ -f "$file" ]] || continue
# processing...
done
# Temp file (auto cleanup)
TEMP_FILE=$(mktemp)
trap "rm -f $TEMP_FILE" EXIT
# Tool dependency check
require_command() {
command -v "$1" &> /dev/null || { echo "Error: $1 is required" >&2; exit 1; }
}
require_command jq
# jq operations
VALUE=$(jq -r '.field' "$INPUT_PATH") # read a field
jq '.field = "new"' input.json > output.json # modify a field
jq -s 'add' file1.json file2.json > merged.json # merge files
```
---
## Python Implementation
```python
#!/usr/bin/env python3
"""
{{script_description}}
"""
import argparse
import json
import sys
from pathlib import Path
def main():
parser = argparse.ArgumentParser(description='{{script_description}}')
parser.add_argument('--input-path', type=str, required=True, help='输入文件路径')
parser.add_argument('--output-dir', type=str, required=True, help='输出目录')
args = parser.parse_args()
# 验证输入
input_path = Path(args.input_path)
if not input_path.exists():
print(f"错误: 输入文件不存在: {input_path}", file=sys.stderr)
sys.exit(1)
output_dir = Path(args.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
# 执行处理
try:
result = process(input_path, output_dir)
except Exception as e:
print(f"错误: {e}", file=sys.stderr)
sys.exit(1)
# 输出 JSON 结果
print(json.dumps(result))
def process(input_path: Path, output_dir: Path) -> dict:
"""核心处理逻辑"""
# TODO: 实现处理逻辑
output_file = output_dir / 'result.json'
with open(input_path, 'r', encoding='utf-8') as f:
data = json.load(f)
processed_count = len(data) if isinstance(data, list) else 1
with open(output_file, 'w', encoding='utf-8') as f:
json.dump(data, f, indent=2, ensure_ascii=False)
return {
'output_file': str(output_file),
'items_processed': processed_count,
'status': 'success'
}
if __name__ == '__main__':
main()
```
### Common Python Patterns
```python
# Iterate over files
def process_files(input_dir: Path, pattern: str = '*.json') -> list:
    return [
        {'file': str(f), 'data': json.loads(f.read_text())}
        for f in input_dir.glob(pattern)
    ]

# Transform data
from datetime import datetime

def transform(data: dict) -> dict:
    return {
        'id': data.get('id'),
        'name': data.get('name', '').strip(),
        'timestamp': datetime.now().isoformat()
    }

# Invoke an external command
import subprocess

def run_command(cmd: list) -> str:
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(result.stderr)
    return result.stdout
```
---
## Runtime Selection Guide
```
Task characteristics
├─ File handling / system commands / pipelines
│ └─ choose Bash (.sh)
├─ JSON processing / complex transforms / data analysis
│ └─ choose Python (.py)
└─ Simple read/write / format conversion
└─ either works (Bash is lighter)
```
---
## Generation Functions
```javascript
function generateScript(scriptConfig) {
const runtime = scriptConfig.runtime || 'bash'; // 'bash' | 'python'
const ext = runtime === 'python' ? '.py' : '.sh';
if (runtime === 'python') {
return generatePythonScript(scriptConfig);
}
return generateBashScript(scriptConfig);
}
function generateBashScript(scriptConfig) {
const { description, inputs = [], outputs = [] } = scriptConfig;
const paramDefs = inputs.map(i =>
`${i.name.toUpperCase().replace(/-/g, '_')}="${i.default || ''}"`
).join('\n');
const paramParse = inputs.map(i =>
` --${i.name}) ${i.name.toUpperCase().replace(/-/g, '_')}="$2"; shift ;;`
).join('\n');
const paramValidation = inputs.filter(i => i.required).map(i => {
const VAR = i.name.toUpperCase().replace(/-/g, '_');
return `[[ -z "$${VAR}" ]] && { echo "Error: --${i.name} is required" >&2; exit 1; }`;
}).join('\n');
return `#!/bin/bash
# ${description}
set -euo pipefail
${paramDefs}
while [[ "$#" -gt 0 ]]; do
case $1 in
${paramParse}
*) echo "Unknown argument: $1" >&2; exit 1 ;;
esac
shift
done
${paramValidation}
# TODO: implement processing logic
# Output result (built with jq)
jq -n ${outputs.map(o =>
`--arg ${o.name} "$${o.name.toUpperCase().replace(/-/g, '_')}"`
).join(' \\\n ')} \
'{${outputs.map(o => `${o.name}: $${o.name}`).join(', ')}}'
`;
}
function generatePythonScript(scriptConfig) {
const { description, inputs = [], outputs = [] } = scriptConfig;
const argDefs = inputs.map(i =>
` parser.add_argument('--${i.name}', type=${i.type || 'str'}, ${
i.required ? 'required=True' : `default='${i.default || ''}'`
}, help='${i.description || i.name}')`
).join('\n');
const resultFields = outputs.map(o =>
` '${o.name}': None # ${o.description || o.name}`
).join(',\n');
return `#!/usr/bin/env python3
"""
${description}
"""
import argparse
import json
import sys
from pathlib import Path
def main():
parser = argparse.ArgumentParser(description='${description}')
${argDefs}
args = parser.parse_args()
    # TODO: implement processing logic
result = {
${resultFields}
}
print(json.dumps(result))
if __name__ == '__main__':
main()
`;
}
```
---
## Directory Conventions
```
scripts/
├── process-data.py # id: process-data, runtime: python
├── validate.sh # id: validate, runtime: bash
└── transform.js # id: transform, runtime: node
```
- **Name is the ID**: the filename (without extension) is the script ID
- **Extension is the runtime**: `.py` → python, `.sh` → bash, `.js` → node
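A sketch of the lookup this convention enables (helper name assumed):

```javascript
// Sketch: resolve a script ID to its file and runtime via the naming convention
function resolveScript(skillName, scriptId) {
  const runtimes = { py: 'python', sh: 'bash', js: 'node' };
  const matches = Glob(`.claude/skills/${skillName}/scripts/${scriptId}.*`);
  if (matches.length === 0) throw new Error(`Unknown script: ${scriptId}`);
  const ext = matches[0].split('.').pop(); // the extension encodes the runtime
  return { path: matches[0], runtime: runtimes[ext] };
}
```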

View File

@@ -2,6 +2,20 @@
Template for sequential-mode Phase files.
## Purpose
Generates Phase files for the Sequential execution mode, defining execution steps in a fixed order.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 3 (Phase Generation) | Generated when `config.execution_mode === 'sequential'` |
| Generation Trigger | One phase file generated per entry in `config.sequential_config.phases` |
| Output Location | `.claude/skills/{skill-name}/phases/{phase-id}.md` |
---
## ⚠️ Important
> **Phase 0 is a mandatory prerequisite**: the Phase 0 specification study must be completed before implementing any Phase (1, 2, 3...).

View File

@@ -2,6 +2,20 @@
Template for generating a new Skill's entry file.
## Purpose
Generates the new Skill's entry file (SKILL.md), which serves as the Skill's main document and execution entry point.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 2 (Structure Generation) | Creates the SKILL.md entry file |
| Generation Trigger | `config.execution_mode` determines the architecture diagram style |
| Output Location | `.claude/skills/{skill-name}/SKILL.md` |
---
## ⚠️ Important: YAML Front Matter
> **CRITICAL**: SKILL.md must start with YAML front matter, i.e. `---` as the very first line of the file.

View File

@@ -6,298 +6,162 @@ allowed-tools: Task, AskUserQuestion, Read, Write, Bash, Glob, Grep, mcp__ace-to
# Skill Tuning
Autonomous diagnosis and optimization for skill execution issues.
## Architecture
```
┌─────────────────────────────────────────────────────┐
Phase 0: Read Specs (mandatory) │
│ → problem-taxonomy.md, tuning-strategies.md │
└─────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────┐
Orchestrator (state-driven)
Read state → Select action → Execute → Update → ✓
└─────────────────────────────────────────────────────┘
──────────────────────┐ ┌──────────────────
Diagnosis Phase │ │ Gemini CLI
• Context │ │ Deep analysis
• Memory │ │ (on-demand)
• DataFlow │ │
• Agent │ │ Complex issues
• Docs │ │ Architecture
• Token Usage │ │ Performance
└──────────────────────┘ └──────────────────┘
┌───────────────────┐
│ Fix & Verify
Apply → Re-test
└───────────────────┘
```
## Core Issues Detected
| Priority | Problem | Root Cause | Fix Strategy |
|----------|---------|-----------|--------------|
| **P0** | Authoring Violation | Intermediate files, state bloat, file relay | eliminate_intermediate, minimize_state |
| **P1** | Data Flow Disruption | Scattered state, inconsistent formats | state_centralization, schema_enforcement |
| **P2** | Agent Coordination | Fragile chains, no error handling | error_wrapping, result_validation |
| **P3** | Context Explosion | Unbounded history, full content passing | sliding_window, path_reference |
| **P4** | Long-tail Forgetting | Early constraint loss | constraint_injection, checkpoint_restore |
| **P5** | Token Consumption | Verbose prompts, state bloat | prompt_compression, lazy_loading |
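For illustration, a detection entry in the taxonomy might look like this (structure assumed; see specs/problem-taxonomy.md for the real definitions):

```javascript
// Sketch: pattern-based detectors keyed by problem category
const detectors = {
  authoring_violation: {
    severity: 'P0',
    pattern: /Write\(`\$\{workDir\}\/(?!state\.json)/, // writes of intermediate files
    strategies: ['eliminate_intermediate', 'minimize_state']
  },
  context_explosion: {
    severity: 'P3',
    pattern: /action_history\.push\(/, // unbounded history growth
    strategies: ['sliding_window', 'path_reference']
  }
};
```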
## Problem Categories (Detailed Specs)
See [specs/problem-taxonomy.md](specs/problem-taxonomy.md) for:
- Detection patterns (regex/checks)
- Severity calculations
- Impact assessments
## Tuning Strategies (Detailed Specs)
See [specs/tuning-strategies.md](specs/tuning-strategies.md) for:
- 10+ strategies per category
- Implementation patterns
- Verification methods
---
## Workflow
| Step | Action | Orchestrator Decision | Output |
|------|--------|----------------------|--------|
| 1 | `action-init` | status='pending' | Backup, session created |
| 2 | `action-analyze-requirements` | After init | Required dimensions + coverage |
| 3 | Diagnosis (6 types) | Focus areas | state.diagnosis.{type} |
| 4 | `action-gemini-analysis` | Critical issues OR user request | Deep findings |
| 5 | `action-generate-report` | All diagnosis complete | state.final_report |
| 6 | `action-propose-fixes` | Issues found | state.proposed_fixes[] |
| 7 | `action-apply-fix` | Pending fixes | Applied + verified |
| 8 | `action-complete` | Quality gates pass | session.status='completed' |
## Action Reference
| Category | Actions | Purpose |
|----------|---------|---------|
| **Setup** | action-init | Initialize backup, session state |
| **Analysis** | action-analyze-requirements | Decompose user request via Gemini CLI |
| **Diagnosis** | action-diagnose-{context,memory,dataflow,agent,docs,token_consumption} | Detect category-specific issues |
| **Deep Analysis** | action-gemini-analysis | Gemini CLI: complex/critical issues |
| **Reporting** | action-generate-report | Consolidate findings → final_report |
| **Fixing** | action-propose-fixes, action-apply-fix | Generate + apply fixes |
| **Verify** | action-verify | Re-run diagnosis, check gates |
| **Exit** | action-complete, action-abort | Finalize or rollback |
Full action details: [phases/actions/](phases/actions/)
## State Management
**Single source of truth**: `.workflow/.scratchpad/skill-tuning-{ts}/state.json`
```json
{
"status": "pending|running|completed|failed",
"target_skill": { "name": "...", "path": "..." },
"diagnosis": {
"context": {...},
"memory": {...},
"dataflow": {...},
"agent": {...},
"docs": {...},
"token_consumption": {...}
},
"issues": [{"id":"...", "severity":"...", "category":"...", "strategy":"..."}],
"proposed_fixes": [...],
"applied_fixes": [...],
"quality_gate": "pass|fail",
"final_report": "..."
}
```
See [phases/state-schema.md](phases/state-schema.md) for complete schema.
## Orchestrator Logic
See [phases/orchestrator.md](phases/orchestrator.md) for:
- Decision logic (termination checks → action selection)
- State transitions
- Error recovery
## Key Principles
1. **Problem-First**: Diagnosis before any fix
2. **Data-Driven**: Record traces, token counts, snapshots
3. **Iterative**: Multiple rounds until quality gates pass
4. **Reversible**: All changes with backup checkpoints
5. **Non-Invasive**: Minimal changes, maximum clarity
## Gemini CLI Integration
Invoke the Gemini CLI on demand for deep analysis of complex or custom issues.
### Trigger Conditions
| Condition | Action | CLI Mode |
|-----------|--------|----------|
| User describes a complex problem | Invoke Gemini to analyze the root cause | `analysis` |
| Auto-diagnosis finds critical issues | Request deep analysis for confirmation | `analysis` |
| User requests an architecture review | Run architecture analysis | `analysis` |
| Fix code needs to be generated | Generate a fix proposal | `write` |
| Standard strategies do not apply | Request a customized strategy | `analysis` |
### CLI Command Template
```bash
ccw cli -p "
PURPOSE: ${purpose}
TASK: ${task_steps}
MODE: ${mode}
CONTEXT: @${skill_path}/**/*
EXPECTED: ${expected_output}
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/${mode}-protocol.md) | ${constraints}
" --tool gemini --mode ${mode} --cd ${skill_path}
# Basic skill diagnosis
/skill-tuning "Fix memory leaks in my skill"
# Deep analysis with Gemini
/skill-tuning "Architecture issues in async workflow"
# Focus on specific areas
/skill-tuning "Optimize token consumption and fix agent coordination"
# Custom issue
/skill-tuning "My skill produces inconsistent outputs"
```
## Output
After completion, review:
- `.workflow/.scratchpad/skill-tuning-{ts}/state.json` - Full state with final_report
- `state.final_report` - Markdown summary (in state.json)
- `state.applied_fixes` - List of applied fixes with verification results
## Reference Documents
| Document | Purpose |
|----------|---------|
| [phases/orchestrator.md](phases/orchestrator.md) | Orchestrator decision logic |
| [phases/state-schema.md](phases/state-schema.md) | State structure definition |
| [specs/problem-taxonomy.md](specs/problem-taxonomy.md) | Problem classification + detection patterns |
| [specs/tuning-strategies.md](specs/tuning-strategies.md) | Fix strategies and implementation guide |
| [specs/dimension-mapping.md](specs/dimension-mapping.md) | Dimension ↔ Spec mapping |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality verification criteria |
| [phases/actions/](phases/actions/) | Individual action implementations |

View File: phases/orchestrator.md

@@ -1,28 +1,57 @@
# Orchestrator
State-driven orchestrator for autonomous skill-tuning workflow.
## Role
Read state → Select action → Execute → Update → Repeat until termination.
## Decision Logic
### Termination Checks (priority order)
| Condition | Action |
|-----------|--------|
| `status === 'user_exit'` | null (exit) |
| `status === 'completed'` | null (exit) |
| `error_count >= max_errors` | action-abort |
| `iteration_count >= max_iterations` | action-complete |
| `quality_gate === 'pass'` | action-complete |
### Action Selection
| Priority | Condition | Action |
|----------|-----------|--------|
| 1 | `status === 'pending'` | action-init |
| 2 | Init done, req analysis missing | action-analyze-requirements |
| 3 | Req needs clarification | null (wait) |
| 4 | Req coverage unsatisfied | action-gemini-analysis |
| 5 | Gemini requested/critical issues | action-gemini-analysis |
| 6 | Gemini running | null (wait) |
| 7 | Diagnosis pending (in order) | action-diagnose-{type} |
| 8 | All diagnosis done, no report | action-generate-report |
| 9 | Report done, issues exist | action-propose-fixes |
| 10 | Pending fixes exist | action-apply-fix |
| 11 | Fixes need verification | action-verify |
| 12 | New iteration needed | action-diagnose-context (restart) |
| 13 | Default | action-complete |
**Diagnosis Order**: context → memory → dataflow → agent → docs → token_consumption
**Gemini Triggers**:
- `gemini_analysis_requested === true`
- Critical issues detected
- Focus areas include: architecture, prompt, performance, custom
- Second iteration with unresolved issues
## State Management
### Read State
```javascript
// Read
const state = JSON.parse(Read(`${workDir}/state.json`));
```
### Update State
```javascript
// Update (with sliding window for history, applied by callers)
function updateState(workDir, updates) {
  const state = JSON.parse(Read(`${workDir}/state.json`));
  const newState = {
    ...state,
    ...updates
  };
  Write(`${workDir}/state.json`, JSON.stringify(newState, null, 2)); // persist merged state
  return newState;
}
```
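The sliding-window pattern referenced above can be factored into a one-line helper; a sketch, not part of the published schema:
```javascript
// Append an entry and keep only the most recent `limit` items,
// so history arrays in state.json never grow unbounded.
function appendWithWindow(arr, entry, limit) {
  return [...arr, entry].slice(-limit);
}
// Usage: action_history capped at 10, errors capped at 5.
```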
## Decision Logic (Reference Implementation)
```javascript
function selectNextAction(state) {
  // === Termination Checks ===
  if (state.status === 'user_exit') return null;
  if (state.status === 'completed') return null;
  // Error limit exceeded
  if (state.error_count >= state.max_errors) {
    return 'action-abort';
  }
  // Max iterations exceeded
  if (state.iteration_count >= state.max_iterations) {
    return 'action-complete';
  }
  // === Action Selection ===
  // 1. Not initialized yet
  if (state.status === 'pending') {
    return 'action-init';
  }
  // 1.5. Requirement analysis (after init, before diagnosis)
  if (state.status === 'running' &&
      state.completed_actions.includes('action-init') &&
      !state.completed_actions.includes('action-analyze-requirements')) {
    return 'action-analyze-requirements';
  }
  // 1.6. Requirement analysis found ambiguities: pause and wait for the user
  if (state.requirement_analysis?.status === 'needs_clarification') {
    return null; // Resume after the user clarifies
  }
  // 1.7. Requirement coverage insufficient: prioritize Gemini deep analysis
  if (state.requirement_analysis?.coverage?.status === 'unsatisfied' &&
      !state.completed_actions.includes('action-gemini-analysis')) {
    return 'action-gemini-analysis';
  }
  // 2. Gemini analysis requested or needed
  if (shouldTriggerGeminiAnalysis(state)) {
    return 'action-gemini-analysis';
  }
  // 3. Gemini analysis still running
  if (state.gemini_analysis?.status === 'running') {
    return null; // Orchestrator is re-triggered when the CLI completes
  }
  // 4. Run diagnosis in order (only if not completed)
  const diagnosisOrder = ['context', 'memory', 'dataflow', 'agent', 'docs', 'token_consumption'];
  for (const diagType of diagnosisOrder) {
    if (state.diagnosis[diagType] === null) {
      // Skip diagnoses outside the user's focus areas
      if (!state.focus_areas.length || state.focus_areas.includes(diagType)) {
        return `action-diagnose-${diagType}`;
      }
      // For docs diagnosis, also honor the 'all' focus area
      if (diagType === 'docs' && state.focus_areas.includes('all')) {
        return 'action-diagnose-docs';
      }
    }
  }
  // 5. All diagnosis complete: generate report if not done
  const allDiagnosisComplete = diagnosisOrder.every(
    d => state.diagnosis[d] !== null || !state.focus_areas.includes(d)
  );
  if (allDiagnosisComplete && !state.completed_actions.includes('action-generate-report')) {
    return 'action-generate-report';
  }
  // 6. Report generated: propose fixes if issues exist
  if (state.completed_actions.includes('action-generate-report') &&
      state.proposed_fixes.length === 0 &&
      state.issues.length > 0) {
    return 'action-propose-fixes';
  }
  // 7. Fixes proposed: apply those the user selected
  if (state.proposed_fixes.length > 0 && state.pending_fixes.length > 0) {
    return 'action-apply-fix';
  }
  // 8. Fixes applied: verify
  if (state.applied_fixes.length > 0 &&
      state.applied_fixes.some(f => f.verification_result === 'pending')) {
    return 'action-verify';
  }
  // 9. Quality gate check
  if (state.quality_gate === 'pass') {
    return 'action-complete';
  }
  // 10. More iterations needed
  if (state.iteration_count < state.max_iterations &&
      state.quality_gate !== 'pass' &&
      state.issues.some(i => i.severity === 'critical' || i.severity === 'high')) {
    return 'action-diagnose-context'; // Start a new iteration
  }
  // 11. Default: complete
  return 'action-complete';
}

/**
 * Determine whether Gemini CLI deep analysis should be triggered.
 */
function shouldTriggerGeminiAnalysis(state) {
  // Already completed: don't re-trigger
  if (state.gemini_analysis?.status === 'completed') {
    return false;
  }
  // Explicit user request
  if (state.gemini_analysis_requested === true) {
    return true;
  }
  // Critical issues found and no deep analysis performed yet
  if (state.issues.some(i => i.severity === 'critical') &&
      !state.completed_actions.includes('action-gemini-analysis')) {
    return true;
  }
  // User specified focus areas that require Gemini analysis
  const geminiAreas = ['architecture', 'prompt', 'performance', 'custom'];
  if (state.focus_areas.some(area => geminiAreas.includes(area))) {
    return true;
  }
  // Standard diagnosis complete but issues remain unresolved: deep analysis needed
  const diagnosisComplete = ['context', 'memory', 'dataflow', 'agent', 'docs'].every(
    d => state.diagnosis[d] !== null
  );
  if (diagnosisComplete &&
      state.issues.length > 0 &&
      state.iteration_count > 0 &&
      !state.completed_actions.includes('action-gemini-analysis')) {
    // Second iteration with issues still present triggers Gemini analysis
    return true;
  }
  return false;
}
```
## Execution Loop
```javascript
async function runOrchestrator(workDir) {
  console.log('=== Skill Tuning Orchestrator Started ===');
  let iteration = 0;
  const MAX_LOOP = 50; // Safety limit
  while (iteration++ < MAX_LOOP) {
    // 1. Read state
    const state = JSON.parse(Read(`${workDir}/state.json`));
    console.log(`[Loop ${iteration}] Status: ${state.status}, Action: ${state.current_action}`);
    // 2. Select action
    const actionId = selectNextAction(state);
    if (!actionId) break;
    console.log(`[Loop ${iteration}] Executing: ${actionId}`);
    // 3. Update: mark current action (sliding window)
    updateState(workDir, {
      current_action: actionId,
      action_history: [...state.action_history, {
        action: actionId,
        started_at: new Date().toISOString()
      }].slice(-10) // Keep last 10
    });
    // 4. Execute action
    try {
      const actionPrompt = Read(`phases/actions/${actionId}.md`);
      // Pass state path + key fields (not full state)
      const stateKeyInfo = {
        status: state.status,
        iteration_count: state.iteration_count,
        issues_by_severity: state.issues_by_severity,
        quality_gate: state.quality_gate,
        current_action: state.current_action,
        completed_actions: state.completed_actions,
        user_issue_description: state.user_issue_description,
        target_skill: { name: state.target_skill.name, path: state.target_skill.path }
      };
      const result = await Task({
        subagent_type: 'universal-executor',
        run_in_background: false,
        prompt: `
[CONTEXT]
Action: ${actionId}
Work directory: ${workDir}
[STATE KEY INFO]
${JSON.stringify(stateKeyInfo, null, 2)}
[FULL STATE PATH]
${workDir}/state.json
(Read full state from this file if needed)
[ACTION INSTRUCTIONS]
${actionPrompt}
[OUTPUT]
Return JSON: { stateUpdates: {}, outputFiles: [], summary: "..." }
`
      });
      // 5. Parse result
      let actionResult = result;
      try { actionResult = JSON.parse(result); } catch {}
      // 6. Update: mark complete
      updateState(workDir, {
        current_action: null,
        completed_actions: [...state.completed_actions, actionId],
        ...actionResult.stateUpdates
      });
      console.log(`[Loop ${iteration}] Completed: ${actionId}`);
    } catch (error) {
      console.log(`[Loop ${iteration}] Error in ${actionId}: ${error.message}`);
      // Error handling (sliding window for errors)
      updateState(workDir, {
        current_action: null,
        errors: [...state.errors, {
          action: actionId,
          message: error.message,
          timestamp: new Date().toISOString()
        }].slice(-5), // Keep last 5
        error_count: state.error_count + 1
      });
    }
  }
  console.log('=== Skill Tuning Orchestrator Finished ===');
}
```
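Driving a session is then a single call with the session work directory (the timestamp shown is hypothetical):
```javascript
// Run one tuning session to completion or termination.
await runOrchestrator('.workflow/.scratchpad/skill-tuning-20260128T120000');
```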
## Action Preconditions
| Action | Precondition |
|--------|-------------|
| action-init | status='pending' |
| action-analyze-requirements | Init complete, not done |
| action-diagnose-* | status='running', focus area includes type |
| action-gemini-analysis | Requested OR critical issues OR high complexity |
| action-generate-report | All diagnosis complete |
| action-propose-fixes | Report generated, issues > 0 |
| action-apply-fix | pending_fixes > 0 |
| action-verify | applied_fixes with pending verification |
| action-complete | Quality gates pass OR max iterations |
| action-abort | error_count >= max_errors |
## User Interaction Points
The orchestrator pauses for user input at these points:
1. **action-init**: Confirm target skill, describe issue
2. **action-propose-fixes**: Select which fixes to apply
3. **action-verify**: Review verification, decide to continue or stop
4. **action-complete**: Review final summary
## Error Recovery
| Error Type | Strategy |
|------------|----------|
| Action execution failed | Retry up to 3 times, then skip |
| State parse error | Restore from backup |
| File write error | Retry with alternative path |
| User abort | Save state and exit gracefully |
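A sketch of the first strategy, retry up to 3 times then skip (helper names are illustrative, not part of the action catalog):
```javascript
// Run an action, retrying transient failures before skipping it entirely.
async function executeWithRetry(runAction, actionId, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await runAction(actionId);
    } catch (error) {
      if (attempt === maxAttempts) {
        console.log(`Skipping ${actionId} after ${maxAttempts} failed attempts: ${error.message}`);
        return null; // Caller records the error and moves on
      }
    }
  }
}
```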
## Termination Conditions
- Normal: `status === 'completed'`, `quality_gate === 'pass'`
- User: `status === 'user_exit'`
- Error: `status === 'failed'`, `error_count >= max_errors`
- Iteration limit: `iteration_count >= max_iterations`
- Clarification wait: `requirement_analysis.status === 'needs_clarification'` (pause, not terminate)

View File: specs/problem-taxonomy.md

@@ -2,276 +2,174 @@
Classification of skill execution issues with detection patterns and severity criteria.
## Quick Reference
| Category | Priority | Detection | Fix Strategy |
|----------|----------|-----------|--------------|
| Authoring Violation | P0 | Intermediate files, state bloat, file relay | eliminate_intermediate, minimize_state |
| Data Flow Disruption | P1 | Scattered state, inconsistent formats | state_centralization, schema_enforcement |
| Agent Coordination | P2 | Fragile chains, no error handling | error_wrapping, result_validation |
| Context Explosion | P3 | Unbounded history, full content passing | sliding_window, path_reference |
| Long-tail Forgetting | P4 | Early constraint loss | constraint_injection, checkpoint_restore |
| Token Consumption | P5 | Verbose prompts, redundant I/O | prompt_compression, lazy_loading |
| Doc Redundancy | P6 | Repeated definitions | consolidate_to_ssot |
| Doc Conflict | P7 | Inconsistent definitions | reconcile_definitions |
---
## 0. Authoring Principles Violation (P0)
**Definition**: Violates skill authoring principles (simplicity, no intermediate files, context passing).
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| APV-001 | `/Write\([^)]*temp-\|intermediate-/` | Intermediate file writes |
| APV-002 | `/Write\([^)]+\)[\s\S]{0,50}Read\([^)]+\)/` | Write-then-read relay |
| APV-003 | State schema > 15 fields | Excessive state fields |
| APV-004 | `/_history\s*[.=].*push\|concat/` | Unbounded array growth |
| APV-005 | `/debug_\|_cache\|_temp/` in state | Debug/cache field residue |
| APV-006 | Same data in multiple fields | Duplicate storage |
**Impact**: Critical (>5 intermediate files), High (>20 state fields), Medium (debug fields), Low (naming issues)
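A minimal scanner applying the regex-based patterns above to one skill file's text (a sketch; APV-003 and APV-006 need structural checks and are omitted):
```javascript
// Return the IDs of authoring-principle violations found in a file's content.
function scanAuthoringViolations(content) {
  const patterns = {
    'APV-001': /Write\([^)]*temp-|intermediate-/,          // intermediate file writes
    'APV-002': /Write\([^)]+\)[\s\S]{0,50}Read\([^)]+\)/,  // write-then-read relay
    'APV-004': /_history\s*[.=].*push|concat/,             // unbounded array growth
    'APV-005': /debug_|_cache|_temp/                       // debug/cache residue
  };
  return Object.entries(patterns)
    .filter(([, re]) => re.test(content))
    .map(([id]) => id);
}
```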
---
## 1. Context Explosion (P3)
**Definition**: Unbounded token accumulation causing prompt size growth.
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| CTX-001 | `/history\s*[.=].*push\|concat/` | History array growth |
| CTX-002 | `/JSON\.stringify\s*\(\s*state\s*\)/` | Full state serialization |
| CTX-003 | `/Read\([^)]+\)\s*[\+,]/` | Multiple file content concatenation |
| CTX-004 | `/return\s*\{[^}]*content:/` | Agent returning full content |
| CTX-005 | File > 5000 chars without summarization | Long prompts |
**Impact**: Critical (>128K tokens), High (>50K per iteration), Medium (10%+ growth), Low (manageable)
---
## 2. Long-tail Forgetting (P4)
**Definition**: Loss of early instructions/constraints in long chains.
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| MEM-001 | Later phases missing constraint reference | Constraint not forwarded |
| MEM-002 | `/\[TASK\][^[]*(?!\[CONSTRAINTS\])/` | Task without constraints section |
| MEM-003 | Key phases without checkpoint | Missing state preservation |
| MEM-004 | State lacks `original_requirements` | No constraint persistence |
| MEM-005 | No verification phase | Output not checked against intent |
**Impact**: Critical (goal lost), High (constraints ignored), Medium (some missing), Low (minor drift)
---
## 3. Data Flow Disruption (P1)
**Definition**: Inconsistent state management causing data loss/corruption.
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| DF-001 | Multiple state file writes | Scattered state storage |
| DF-002 | Same concept, different names | Field naming inconsistency |
| DF-003 | JSON.parse without validation | Missing schema validation |
| DF-004 | Files written but never read | Orphaned outputs |
| DF-005 | Autonomous skill without state-schema | Undefined state structure |
**Impact**: Critical (data loss), High (state inconsistency), Medium (potential inconsistency), Low (naming)
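For DF-003 specifically, a guarded-parse sketch (the required-field list is illustrative):
```javascript
// Validate required top-level fields before any phase consumes the state.
function parseStateSafely(raw) {
  const state = JSON.parse(raw); // throws on malformed JSON
  const required = ['status', 'target_skill', 'diagnosis', 'issues'];
  const missing = required.filter(key => !(key in state));
  if (missing.length > 0) {
    throw new Error(`State schema violation, missing: ${missing.join(', ')}`);
  }
  return state;
}
```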
---
## 4. Agent Coordination Failure (P2)
**Definition**: Fragile agent call patterns causing cascading failures.
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| AGT-001 | Task without try-catch | Missing error handling |
| AGT-002 | Result used without validation | No return value check |
| AGT-003 | >3 different agent types | Agent type proliferation |
| AGT-004 | Nested Task in prompt | Agent calling agent |
| AGT-005 | Task used but not in allowed-tools | Tool declaration mismatch |
| AGT-006 | Multiple return formats | Inconsistent agent output |
**Impact**: Critical (crash on failure), High (unpredictable behavior), Medium (occasional issues), Low (minor)
---
## 5. Documentation Redundancy (P6)
**Definition**: Same definition (State Schema, mappings, types) repeated across files.
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| DOC-RED-001 | Cross-file semantic comparison | State Schema duplication |
| DOC-RED-002 | Code block vs spec comparison | Hardcoded config duplication |
| DOC-RED-003 | `/interface\s+(\w+)/` same-name scan | Interface/type duplication |
**Impact**: High (core definitions), Medium (type definitions), Low (example code)
---
## 6. Token Consumption (P5)
**Definition**: Excessive token usage from verbose prompts, large state, inefficient I/O.
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| TKN-001 | File size > 4KB | Verbose prompt files |
| TKN-002 | State fields > 15 | Excessive state schema |
| TKN-003 | `/Read\([^)]+\)\s*[\+,]/` | Full content passing |
| TKN-004 | `/.push\|concat(?!.*\.slice)/` | Unbounded array growth |
| TKN-005 | `/Write\([^)]+\)[\s\S]{0,100}Read\([^)]+\)/` | Write-then-read pattern |
**Impact**: High (multiple TKN-003/004), Medium (verbose files), Low (minor optimization)
---
## 7. Documentation Conflict (P7)
**Definition**: Same concept defined inconsistently across files.
**Detection Patterns**:
| Pattern ID | Check | Description |
|------------|-------|-------------|
| DOC-CON-001 | Key-value consistency check | Same key, different values |
| DOC-CON-002 | Implementation vs docs comparison | Hardcoded vs documented mismatch |
**Impact**: Critical (priority/category conflicts), High (strategy mapping inconsistency), Medium (example mismatch)
---
## Severity Criteria
### Global Severity Matrix
| Severity | Definition | Action Required |
|----------|------------|-----------------|
| **Critical** | Blocks execution or causes data loss | Immediate fix required |
| **High** | Significantly impacts reliability | Should fix before deployment |
| **Medium** | Affects quality or maintainability | Fix in next iteration |
| **Low** | Minor improvement opportunity | Optional fix |
## Severity Calculation
```javascript
function calculateSeverity(issue) {
  const weights = { execution: 40, data_integrity: 30, frequency: 20, complexity: 10 };
  let score = 0;
  if (issue.blocks_execution) score += weights.execution;
  if (issue.causes_data_loss) score += weights.data_integrity;
  if (issue.occurs_every_run) score += weights.frequency;
  if (issue.fix_complexity === 'low') score += weights.complexity;
  // Map score to severity
  if (score >= 70) return 'critical';
  if (score >= 50) return 'high';
  if (score >= 30) return 'medium';
  return 'low';
}
```
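Worked example: a blocking issue that occurs on every run and is cheap to fix scores 40 + 20 + 10 = 70, which maps to `critical`:
```javascript
calculateSeverity({
  blocks_execution: true,   // +40
  causes_data_loss: false,  //  +0
  occurs_every_run: true,   // +20
  fix_complexity: 'low'     // +10
}); // => 'critical' (70 >= 70)
```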
## Fix Mapping
| Problem | Strategies (priority order) |
|---------|---------------------------|
| Authoring Violation | eliminate_intermediate_files, minimize_state, context_passing |
| Context Explosion | sliding_window, path_reference, context_summarization |
| Long-tail Forgetting | constraint_injection, state_constraints_field, checkpoint |
| Data Flow Disruption | state_centralization, schema_enforcement, field_normalization |
| Agent Coordination | error_wrapping, result_validation, flatten_nesting |
| Token Consumption | prompt_compression, lazy_loading, output_minimization, state_field_reduction |
| Doc Redundancy | consolidate_to_ssot, centralize_mapping_config |
| Doc Conflict | reconcile_conflicting_definitions |
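As a concrete illustration of the first two Context Explosion strategies, a before/after sketch (function and field names are illustrative):
```javascript
// Before: unbounded growth and full-content embedding
function recordResultBefore(state, result, outputPath) {
  state.history.push(result);            // grows without bound
  return { content: Read(outputPath) };  // embeds the whole file into context
}

// After: sliding_window + path_reference
function recordResultAfter(state, result, outputPath) {
  state.history = [...state.history, result].slice(-10);  // bounded window
  return { path: outputPath, summary: result.summary };   // path + short summary only
}
```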
---
## Cross-Category Dependencies
Some issues may trigger others:
```
Context Explosion → Long-tail Forgetting
(Large context pushes important info out)
Data Flow Disruption → Agent Coordination Failure
(Inconsistent data causes agent failures)
Agent Coordination Failure → Context Explosion
(Failed retries add to context)
```
**Fix Order**: P1 Data Flow → P2 Agent → P3 Context → P4 Memory

Some files were not shown because too many files have changed in this diff.