Compare commits

..

94 Commits

Author SHA1 Message Date
catlog22
f9188eb0b6 Add Chinese documentation for CLI commands and workflows 2026-02-05 23:30:06 +08:00
catlog22
05ad8110f4 Add runtime JavaScript file for Chinese documentation build 2026-02-05 21:50:52 +08:00
catlog22
4258a77dd2 Add license files and new runtime JavaScript bundle for Chinese documentation site 2026-02-05 20:57:57 +08:00
catlog22
9fc2c876f7 Add Phase 4: Task Generation documentation and execution process
- Introduced comprehensive guidelines for generating implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using the action-planning-agent.
- Defined auto mode for user configuration with default settings.
- Outlined core philosophy emphasizing planning only, agent-driven document generation, and memory-first context loading.
- Detailed execution process divided into phases: User Configuration, Context Preparation, Single Agent Planning, N Parallel Planning, and Integration.
- Included lifecycle management for user configuration and agent interactions.
- Specified document generation lifecycle with clear expectations for outputs and quality standards.
2026-02-05 20:50:21 +08:00
catlog22
01459a34a5 Add tests for CLI command generation and model alias resolution
- Implement `test-cli-command-gen.js` to verify the logic of `buildCliCommand` function.
- Create `test-e2e-model-alias.js` for end-to-end testing of model alias resolution in `ccw cli`.
- Add `test-model-alias.js` to test model alias resolution for different models.
- Introduce `test-model-alias.txt` for prompt testing with model alias.
- Develop `test-update-claude-command.js` to test command generation for `update_module_claude`.
- Create a test file in `test-update-claude/src` for future tests.
2026-02-05 20:17:10 +08:00
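The alias logic these tests exercise (and that the next commit introduces) might look like the following minimal sketch. The alias names PRIMARY_MODEL and SECONDARY_MODEL come from the commit messages; the concrete model IDs, the `resolveModelAlias` helper, and the `buildCliCommand` shape are illustrative assumptions, not the repository's actual code.

```typescript
// Hypothetical model-alias resolution for a CLI command builder.
type AliasTable = Record<string, string>;

const defaultAliases: AliasTable = {
  PRIMARY_MODEL: "claude-sonnet",  // assumed default mapping
  SECONDARY_MODEL: "gemini-flash", // assumed default mapping
};

function resolveModelAlias(model: string, aliases: AliasTable = defaultAliases): string {
  // Anything that is not a known alias passes through unchanged.
  return aliases[model] ?? model;
}

function buildCliCommand(tool: string, model: string, prompt: string): string[] {
  // Resolve aliases before constructing the argv-style command.
  return [tool, "--model", resolveModelAlias(model), "--prompt", prompt];
}
```

An end-to-end test like `test-e2e-model-alias.js` would then only need to assert that the alias never leaks into the final command line.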
catlog22
6576886457 feat: support model aliases (PRIMARY_MODEL, SECONDARY_MODEL) and update CLI command build logic 2026-02-05 20:09:04 +08:00
catlog22
47c192f584 Add orchestrator design, phase file generation, and validation processes
- Implement Phase 2: Orchestrator Design with detailed steps for generating SKILL.md
- Introduce Phase 3: Phase Files Design to create phase files with content fidelity
- Establish Phase 4: Validation & Integration to ensure structural completeness and reference integrity
- Include comprehensive validation checks for content quality and data flow consistency
- Enhance documentation with clear objectives and critical rules for each phase
2026-02-05 19:29:45 +08:00
catlog22
eac1bb81c8 feat: Add unified-execute-parallel, unified-execute-with-file, and worktree-merge workflows
- Implemented unified-execute-parallel for executing tasks in isolated Git worktrees with parallel execution capabilities.
- Developed unified-execute-with-file for universal task execution from various planning outputs with progress tracking.
- Created worktree-merge to handle merging of completed worktrees back to the main branch, including dependency management and conflict detection.
- Enhanced documentation for each workflow, detailing core functionalities, implementation phases, and configuration options.
2026-02-05 18:44:00 +08:00
catlog22
1480873008 feat: Implement parallel collaborative planning workflow with Plan Note
- Updated collaborative planning prompt to support parallel task generation with subagents.
- Enhanced workflow to include explicit lifecycle management for agents and conflict detection.
- Revised output structure to accommodate parallel planning results.
- Added new LocaleDropdownNavbarItem component for language selection in the documentation site.
- Introduced styles for the language icon in the dropdown.
- Modified issue execution process to streamline commit messages and report completion with full solution metadata.
2026-02-05 18:24:35 +08:00
catlog22
a59baf2473 feat: remove the target and confirmation options from the clean command arguments to simplify usage 2026-02-05 17:37:03 +08:00
catlog22
5cfeb59124 feat: add configuration backup, sync, and version checker services
- Implemented ConfigBackupService for backing up local configuration files.
- Added ConfigSyncService to download configuration files from GitHub with remote-first conflict resolution.
- Created VersionChecker to check application version against the latest GitHub release with caching.
- Introduced security validation utilities for input validation to prevent common vulnerabilities.
- Developed utility functions to start and stop Docusaurus documentation server.
2026-02-05 17:32:31 +08:00
catlog22
834951a08d feat(flow-coordinator): add slashCommand/slashArgs support for unified workflow
- Add buildNodeInstruction() to construct instructions from slashCommand field
- Support slashArgs with {{variable}} interpolation
- Append additional instruction as context when slashCommand is set
- Update unified-workflow-spec.md with slashCommand/slashArgs documentation

This enables frontend orchestrator JSON to be directly consumed by
flow-coordinator skill in Claude client.
2026-02-05 15:33:06 +08:00
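The buildNodeInstruction() behavior described above — {{variable}} interpolation in slashArgs, with the free-form instruction appended as context — could be sketched as follows. The field names match the commit message; the exact node shape and context format are assumptions.

```typescript
// Hypothetical sketch of buildNodeInstruction() from the flow-coordinator commit.
interface PromptNode {
  slashCommand?: string;
  slashArgs?: string;
  instruction?: string;
}

function buildNodeInstruction(node: PromptNode, vars: Record<string, string>): string {
  if (!node.slashCommand) return node.instruction ?? "";
  // Replace each {{name}} with its value; unknown names are left intact.
  const args = (node.slashArgs ?? "").replace(
    /\{\{(\w+)\}\}/g,
    (match, name) => vars[name] ?? match,
  );
  const command = `${node.slashCommand} ${args}`.trim();
  // Append the additional instruction as context when slashCommand is set.
  return node.instruction ? `${command}\n\nContext: ${node.instruction}` : command;
}
```

This is what lets a frontend orchestrator JSON node carry a slash command that the flow-coordinator skill can expand directly.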
catlog22
e555355a71 feat: update dependencies, bump TypeScript and Playwright test library versions, support Node.js 18+ 2026-02-05 14:45:57 +08:00
catlog22
23f752b975 feat: Implement slash command functionality in the orchestrator
- Refactored NodePalette to remove unused templates and streamline the UI.
- Enhanced PropertyPanel to support slash commands with input fields for command name and arguments.
- Introduced TagEditor for inline variable editing and custom template creation.
- Updated PromptTemplateNode to display slash command badges and instructions.
- Modified flow types to include slashCommand and slashArgs for structured execution.
- Adjusted flow executor to construct instructions based on slash command fields.
2026-02-05 14:29:04 +08:00
catlog22
a19ef94444 feat: add quick template functionality to NodePalette and enhance node creation experience 2026-02-05 09:57:20 +08:00
catlog22
0664937b98 feat: add gradient effects settings and enhance theme customization options 2026-02-04 23:24:16 +08:00
catlog22
369b470969 feat: enhance CoordinatorEmptyState and ThemeSelector with gradient utilities and improved layout 2026-02-04 23:06:00 +08:00
catlog22
de989aa038 feat: unify CLI output handling and enhance theme variables
- Updated `CliStreamMonitorNew`, `CliStreamMonitorLegacy`, and `CliViewerPage` components to prioritize `unitContent` from payloads, falling back to `data` when necessary.
- Enhanced `colorGenerator` to include legacy variables for compatibility with shadcn/ui.
- Refactored orchestrator index to unify node exports under a single module.
- Improved `appStore` to clear both new and legacy CSS variables when applying themes.
- Added new options to CLI execution for raw and final output modes, improving programmatic output handling.
- Enhanced `cli-output-converter` to normalize cumulative delta frames and avoid duplication in streaming outputs.
- Introduced a new unified workflow specification for prompt template-based workflows, replacing the previous multi-type node system.
- Added tests for CLI final output handling and streaming output converter to ensure correct behavior in various scenarios.
2026-02-04 22:57:41 +08:00
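The cumulative-delta normalization mentioned above can be reduced to one idea: when a stream resends the full accumulated text in each frame, emit only the new suffix. This is an assumed model of what cli-output-converter does, not its actual implementation.

```typescript
// Sketch: normalize cumulative frames so downstream consumers see each
// chunk of text exactly once, avoiding duplication in streaming outputs.
function makeDeltaNormalizer(): (frame: string) => string {
  let previous = "";
  return (frame: string): string => {
    // Cumulative frame: the previous content is a prefix of the new frame.
    if (frame.startsWith(previous)) {
      const delta = frame.slice(previous.length);
      previous = frame;
      return delta;
    }
    // Otherwise treat the frame as an independent delta.
    previous = frame;
    return frame;
  };
}
```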
catlog22
4ee165119b refactor: unify node types into a single PromptTemplate model
- Removed individual node components (SlashCommandNode, FileOperationNode, etc.) and replaced them with a unified PromptTemplateNode.
- Updated flow types and interfaces to reflect the new single node type system.
- Refactored flow execution logic to handle the new unified model, simplifying node execution and context handling.
- Adjusted UI components to support the new PromptTemplateNode, including instruction display and context references.
- Cleaned up legacy code related to removed node types and ensured compatibility with the new structure.
2026-02-04 22:22:27 +08:00
catlog22
113c14970f feat: add CommandCombobox component for selecting slash commands and update PropertyPanel to use it
refactor: remove unused liteTasks localization from common.json and zh/common.json
refactor: consolidate liteTasks localization into lite-tasks.json and zh/lite-tasks.json
refactor: simplify MultiCliTab type in LiteTaskDetailPage
refactor: enhance task display in LiteTasksPage with additional metadata
2026-02-04 19:24:31 +08:00
catlog22
7b2ac46760 refactor: streamline store access in useWebSocket by consolidating state retrieval into getStoreState function 2026-02-04 18:00:06 +08:00
catlog22
346c87a706 refactor: optimize useWebSocket hook by consolidating store references and improving handler stability 2026-02-04 17:29:30 +08:00
catlog22
e260a3f77b feat: enhance theme customization and UI components
- Implemented a new color generation module to create CSS variables based on a single hue value, supporting both light and dark modes.
- Added unit tests for the color generation logic to ensure accuracy and robustness.
- Replaced dropdown location filter with tab navigation in RulesManagerPage and SkillsManagerPage for improved UX.
- Updated app store to manage custom theme hues and states, allowing for dynamic theme adjustments.
- Sanitized notification content before persisting to localStorage to prevent sensitive data exposure.
- Refactored memory retrieval logic to handle archived status more flexibly.
- Improved Tailwind CSS configuration with new gradient utilities and animations.
- Minor adjustments to SettingsPage layout for better visual consistency.
2026-02-04 17:20:40 +08:00
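Generating CSS variables from a single hue value, as the color generation module above does, might work along these lines. The variable names and the saturation/lightness constants are illustrative assumptions — the real colorGenerator likely emits a larger set.

```typescript
// Hypothetical hue-based theme variable generation for light and dark modes.
function generateThemeVars(hue: number, dark: boolean): Record<string, string> {
  const h = ((hue % 360) + 360) % 360; // normalize to [0, 360)
  return dark
    ? {
        "--background": `hsl(${h} 30% 10%)`,
        "--foreground": `hsl(${h} 15% 95%)`,
        "--primary": `hsl(${h} 70% 60%)`,
      }
    : {
        "--background": `hsl(${h} 30% 98%)`,
        "--foreground": `hsl(${h} 15% 10%)`,
        "--primary": `hsl(${h} 70% 45%)`,
      };
}
```

A store applying a theme would write these onto `document.documentElement.style`, which is also why the later commit has to clear both new and legacy variable sets.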
catlog22
88616224e0 refactor: remove redundant usage guidelines from workflow documentation 2026-02-04 17:18:56 +08:00
catlog22
6e6c2b9e09 fix: add mountedRef guard to useWebSocket to prevent setState on unmounted component
- Add mountedRef to track component mount state in useWebSocket hook
- Guard handleMessage callback to return early if component is unmounted
- Set mountedRef.current = false in useEffect cleanup function

This addresses finding perf-011 about potential memory leaks from setState
on unmounted components. The mounted ref provides a defensive check to
ensure no state updates occur after component unmount.

Related: code review group G03 - Critical Memory Leaks
2026-02-04 17:05:20 +08:00
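The mountedRef guard described in this commit, stripped of React specifics, is just a handler that becomes a no-op once cleanup runs. A framework-free sketch (names are illustrative; the real hook stores the flag in a `useRef` and flips it in the `useEffect` cleanup):

```typescript
// Sketch of the mounted-guard pattern: after cleanup(), no callback can fire,
// so no setState can occur on an unmounted component.
function createGuardedHandler<T>(onMessage: (msg: T) => void): {
  handle: (msg: T) => void;
  cleanup: () => void;
} {
  let mounted = true; // plays the role of mountedRef.current
  return {
    handle: (msg: T) => {
      if (!mounted) return; // guard: return early once unmounted
      onMessage(msg);
    },
    cleanup: () => {
      mounted = false; // set in the useEffect cleanup in the real hook
    },
  };
}
```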
catlog22
2bfce150ec feat: enhance RecommendedMcpWizard with icon mapping; improve ExecutionTab accessibility; refine NavGroup path matching; update fetchSkills to include enabled status; add loading and error messages to localization 2026-02-04 16:38:18 +08:00
catlog22
ac95ee3161 feat: update CLI mode toggle for improved UI and functionality; refactor RecommendedMcpWizard API call; add new localization strings for installation wizard and tools 2026-02-04 15:26:36 +08:00
catlog22
8454ae4f41 feat: add quick install templates and index status to CLI hooks and home locales
feat: enhance MCP manager with interactive question feature and update locales

feat: implement tags and available models management in settings page

fix: improve process termination logic in stop command for React frontend

fix: update view command to default to 'js' frontend

feat: add Recommended MCP Wizard component for dynamic server configuration
2026-02-04 15:24:34 +08:00
catlog22
341331325c feat: Enhance A2UI with RadioGroup and Markdown support
- Added support for RadioGroup component in A2UI, allowing single selection from multiple options.
- Implemented Markdown parsing in A2UIPopupCard for better content rendering.
- Updated A2UIPopupCard to handle different question types and improved layout for multi-select and single-select questions.
- Introduced new utility functions for handling disabled items during installation.
- Enhanced installation process to restore previously disabled skills and commands.
- Updated notification store and related tests to accommodate new features.
- Adjusted Vite configuration for better development experience.
2026-02-04 13:45:47 +08:00
catlog22
1a05551d00 feat(a2ui): enhance A2UI notification handling and multi-select support 2026-02-04 11:11:55 +08:00
catlog22
c6093ef741 feat: add CLI Command Node and Prompt Node components for orchestrator
- Implemented CliCommandNode component for executing CLI tools with AI models.
- Implemented PromptNode component for constructing AI prompts with context.
- Added styling for mode and tool badges in both components.
- Enhanced user experience with command and argument previews, execution status, and error handling.

test: add comprehensive tests for ask_question tool

- Created direct test for ask_question tool execution.
- Developed end-to-end tests to validate ask_question tool integration with WebSocket and A2UI surfaces.
- Implemented simple and integrated WebSocket tests to ensure proper message handling and surface reception.
- Added tool registration test to verify ask_question tool is correctly registered.

chore: add WebSocket listener and simulation tests

- Added WebSocket listener for A2UI surfaces to facilitate testing.
- Implemented frontend simulation test to validate complete flow from backend to frontend.
- Created various test scripts to ensure robust testing of ask_question tool functionality.
2026-02-03 23:10:36 +08:00
catlog22
a806d70d9b refactor: remove lite-plan-c workflow and update orchestrator UI components
- Deleted the lite-plan-c workflow file to streamline the planning process.
- Updated orchestrator localization files to include new variable picker and multi-node selector messages.
- Modified PropertyPanel to support new execution modes and added output variable input fields.
- Enhanced SlashCommandNode to reflect changes in execution modes.
- Introduced MultiNodeSelector component for better node selection management.
- Added VariablePicker component for selecting variables with improved UX.
2026-02-03 21:24:34 +08:00
catlog22
9bb50a13fa feat(websocket): integrate A2UI message handling and answer callback registration 2026-02-03 21:18:19 +08:00
catlog22
bb4cd0529e feat(codex): add parallel execution workflows with worktree support
Add three new workflow commands for parallel development:

- collaborative-plan-parallel.md: Planning with execution group assignment
  - Supports automatic/balanced/manual group assignment strategies
  - Assigns codex instances and branch names per group
  - Detects cross-group conflicts and dependencies

- unified-execute-parallel.md: Worktree-based group execution
  - Executes tasks in isolated Git worktrees (.ccw/worktree/{group-id}/)
  - Enables true parallel execution across multiple codex instances
  - Simplified focus on execution only

- worktree-merge.md: Dependency-aware worktree merging
  - Merges completed worktrees in dependency order
  - Handles cross-group file conflicts
  - Optional worktree cleanup after merge

Key improvements:
- True parallel development via Git worktrees (vs branch switching)
- Clear separation of concerns (plan → execute → merge)
- Support for multi-codex coordination
2026-02-03 21:17:45 +08:00
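"Merges completed worktrees in dependency order" implies a topological sort over the execution groups. A minimal sketch of that ordering step — the Group shape and field names are assumptions for illustration, not the workflow's actual data model:

```typescript
// Sketch: compute a merge order in which every group is merged only after
// all groups it depends on, detecting cross-group dependency cycles.
interface Group {
  id: string;
  dependsOn: string[];
}

function mergeOrder(groups: Group[]): string[] {
  const order: string[] = [];
  const visiting = new Set<string>();
  const done = new Set<string>();
  const byId = new Map(groups.map((g) => [g.id, g]));

  function visit(id: string): void {
    if (done.has(id)) return;
    if (visiting.has(id)) throw new Error(`dependency cycle at ${id}`);
    visiting.add(id);
    for (const dep of byId.get(id)?.dependsOn ?? []) visit(dep); // deps first
    visiting.delete(id);
    done.add(id);
    order.push(id);
  }

  for (const g of groups) visit(g.id);
  return order;
}
```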
catlog22
a8385e2ea5 feat: Enhance Project Overview and Review Session pages with improved UI and functionality
- Updated ProjectOverviewPage to enhance the guidelines section with better spacing, larger icons, and improved button styles.
- Refactored ReviewSessionPage to unify filter controls, improve selection actions, and enhance the findings list with a new layout.
- Added dimension tabs and severity filters to the SessionsPage for better navigation and filtering.
- Improved SessionDetailPage to utilize a mapping for status labels, enhancing internationalization support.
- Refactored TaskListTab to remove unused priority configuration code.
- Updated store types to better reflect session metadata structure.
- Added a temporary JSON file for future use.
2026-02-03 20:58:03 +08:00
catlog22
37ba849e75 feat: add CLI Viewer Page with multi-pane layout and state management
- Implemented the CliViewerPage component for displaying CLI outputs in a configurable multi-pane layout.
- Integrated Zustand for state management, allowing for dynamic layout changes and tab management.
- Added layout options: single, split horizontal, split vertical, and 2x2 grid.
- Created viewerStore for managing layout, panes, and tabs, including actions for adding/removing panes and tabs.
- Added CoordinatorPage barrel export for easier imports.
2026-02-03 17:28:26 +08:00
catlog22
b63e254f36 fix(frontend): fix button icon and text layout in help page
- Add inline-flex items-center to all button links for proper icon alignment
- Add flex-shrink-0 to all icons to prevent shrinking
- Change support buttons container to flex-wrap for better responsiveness
- Ensure icons and text stay on same line in all buttons
2026-02-03 15:14:15 +08:00
catlog22
86b1a15671 feat(frontend): optimize help page layout and add i18n support
- Add Chinese and English translations for help page
- Fix button overlapping issues with responsive layouts
- Improve icon and button positioning with flexbox
- Add proper spacing and breakpoints for mobile/tablet/desktop
- Make cards consistent with flexbox layout
- Ensure all buttons have proper whitespace-nowrap
- Use flex-shrink-0 for icons to prevent squashing
2026-02-03 15:07:14 +08:00
catlog22
39b80b3386 feat: initialize monorepo with package.json for CCW workflow platform 2026-02-03 14:42:20 +08:00
catlog22
5483a72e9f feat: add Accordion component for UI and Zustand store for coordinator management
- Implemented Accordion component using Radix UI for collapsible sections.
- Created Zustand store to manage coordinator execution state, command chains, logs, and interactive questions.
- Added validation tests for CLI settings type definitions, ensuring type safety and correct behavior of helper functions.
2026-02-03 10:02:40 +08:00
catlog22
bcb4af3ba0 feat(commands): add ccw-plan and ccw-test coordinators with arrow-style documentation
- Add ccw-plan.md: Planning coordinator with 10 modes (lite/multi-cli/full/plan-verify/replan + cli/issue/rapid-to-issue/brainstorm-with-file/analyze-with-file)
- Add ccw-test.md: Test coordinator with 4 modes (gen/fix/verify/tdd) and auto-iteration support
- Refactor ccw-debug.md: Convert JavaScript pseudocode to arrow instruction format for better readability
- Fix command format: Use /ccw* skill call syntax instead of ccw* across all command files

Key features:
- CLI integration for quick analysis and recommendations
- Issue workflow integration (rapid-to-issue bridge)
- With-File workflows for documented multi-CLI collaboration
- Consistent arrow-based flow diagrams across all coordinators
- TodoWrite tracking with mode-specific prefixes (CCWP/CCWT/CCWD)
- Dual tracking system (TodoWrite + status.json)
2026-02-02 21:41:56 +08:00
catlog22
545679eeb9 Refactor execution modes and CLI integration across agents
- Updated code-developer, tdd-developer, and test-fix-agent to streamline execution modes based on task.meta.execution_config.method.
- Removed legacy command handling and introduced CLI handoff for 'cli' execution method.
- Enhanced buildCliHandoffPrompt to include task JSON path and improved context handling.
- Updated task-generate-agent and task-generate-tdd to reflect new execution method mappings and removed command field from implementation_approach.
- Improved CLI settings validation in CliSettingsModal with format and length checks.
- Added localization for new CLI settings messages in English and Chinese.
- Enhanced GPU selector to use localized strings for GPU types.
- Introduced TypeScript LSP setup documentation for better user guidance.
2026-02-02 19:21:30 +08:00
catlog22
2305e7b8e7 feat(tests): add verification script for execution ID with output analysis 2026-02-02 11:55:12 +08:00
catlog22
48871f0d9e feat(tests): add CLI API response format tests and output format detection 2026-02-02 11:46:12 +08:00
catlog22
a54246a46f feat: add API settings page for managing providers, endpoints, cache, model pools, and CLI settings
feat: implement semantic install dialog for CodexLens with GPU mode selection
feat: add radio group component for GPU mode selection
feat: update navigation and localization for API settings and semantic install
2026-02-02 11:16:19 +08:00
catlog22
e4b627bc76 fix(hooks): fix EventGroup undefined component error
- Add Play icon import for SessionStart trigger type
- Add SessionStart case to getEventIcon function
- Add SessionStart case to getEventColor function with purple color
- Add default cases to both functions to prevent undefined returns

Fixes runtime error: 'Element type is invalid: expected a string or class/function but got: undefined' in EventGroup component
2026-02-02 10:56:45 +08:00
catlog22
392f89f62f fix(i18n): fix JSON structure and add missing Chinese translations for API Settings
- Fix duplicate validation key in common section
- Fix messages section nesting (should be sibling to common, not child)
- Add missing Chinese translations:
  - providers.showDisabled/hideDisabled/showAll
  - endpoints.showDisabled/showAll
  - common.showAll
2026-02-02 10:52:43 +08:00
catlog22
1beb98366b refactor(frontend): change API Settings from sidebar to horizontal tabs layout
Refactor ApiSettingsPage to use horizontal Tabs component like CodexLens page,
replacing the previous vertical sidebar navigation.

Changes:
- Replace Card-based sidebar with horizontal TabsList/TabsTrigger components
- Remove split-panel layout (sidebar + main panel)
- Use TabsContent to wrap each tab's content
- Import and use Tabs, TabsList, TabsTrigger, TabsContent from ui/Tabs
- Add api-settings.json import to en/zh locale index files for i18n support

Layout comparison:
- Before: Vertical sidebar (lg:w-64) with Card + nav buttons
- After: Horizontal tabs (TabsList) with tab triggers
2026-02-02 10:49:21 +08:00
catlog22
c522681c4c fix(frontend): fix i18n and runtime error in EndpointList
Fix hardcoded English labels in cache strategy display and prevent runtime
error when filePatterns is undefined.

Changes:
- Replace hardcoded "TTL:", "Max:", "Patterns:" with formatMessage calls
- Add optional chaining for filePatterns.length to prevent undefined error
- Add "filePatterns" i18n key to en/zh api-settings.json
2026-02-02 10:46:36 +08:00
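The filePatterns fix above boils down to optional chaining with a fallback, so an endpoint without the field renders a count of zero instead of throwing. The Endpoint shape here is assumed from the commit message:

```typescript
// Sketch of the defensive access: `endpoint.filePatterns.length` would throw
// when filePatterns is undefined; `?.` plus `??` makes it safe.
interface Endpoint {
  filePatterns?: string[];
}

function patternCount(endpoint: Endpoint): number {
  return endpoint.filePatterns?.length ?? 0;
}
```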
catlog22
abce912ee5 feat(frontend): implement comprehensive API Settings Management Interface
Implement a complete API Management Interface for React frontend with split-
panel layout, migrating all features from legacy JS frontend.

New Features:
- API Settings page with 5 tabs: Providers, Endpoints, Cache, Model Pools, CLI Settings
- Provider Management: CRUD operations, multi-key rotation, health checks, test connection
- Endpoint Management: CRUD operations, cache strategy configuration, enable/disable toggle
- Cache Settings: Global configuration, statistics display, clear cache functionality
- Model Pool Management: CRUD operations, auto-discovery feature, provider exclusion
- CLI Settings Management: Provider-based and Direct modes, full CRUD support
- Multi-Key Settings Modal: Manage API keys with rotation strategies and weights
- Manage Models Modal: View and manage models per provider (LLM and Embedding)
- Sync to CodexLens: Integration handler for provider configuration sync

Technical Implementation:
- Created 12 new React components in components/api-settings/
- Extended lib/api.ts with 460+ lines of API client functions
- Created hooks/useApiSettings.ts with 772 lines of TanStack Query hooks
- Added RadioGroup UI component for form selections
- Implemented unified error handling with useNotifications across all operations
- Complete i18n support (500+ keys in English and Chinese)
- Route integration (/api-settings) and sidebar navigation

Code Quality:
- All acceptance criteria from plan.json verified
- Code review passed with Gemini (all 7 IMPL tasks complete)
- Follows existing patterns: Shadcn UI, TanStack Query, react-intl, Lucide icons
2026-02-01 23:58:04 +08:00
catlog22
690597bae8 fix(cli): fix codex review command argument handling
- Fix prompt incorrectly passed to codex review with target flags (--uncommitted/--base/--commit)
- Remove unsupported --skip-git-repo-check flag from codex review command
- Fix TypeScript type error in codex-lens.ts (missing 'installed' property)

Codex CLI constraint: --uncommitted/--base/--commit and [PROMPT] are mutually exclusive.
2026-02-01 23:51:58 +08:00
catlog22
e5252f8a77 feat: add useApiSettings hook for managing API settings, including providers, endpoints, cache, and model pools
- Implemented hooks for CRUD operations on providers and endpoints.
- Added cache management hooks for cache stats and settings.
- Introduced model pool management hooks for high availability and load balancing.
- Created localization files for English and Chinese translations of API settings.
2026-02-01 23:14:55 +08:00
catlog22
b76424feef fix(frontend): show empty state placeholders for quality rules and learnings in view mode
- Modify conditional rendering logic to always display section headers in view mode
- Add empty state placeholders when quality_rules/learnings arrays are empty
- Add localization keys for empty state messages in English and Chinese
- Improves UX by making these features visible even when no entries exist

Related files:
- ccw/frontend/src/pages/ProjectOverviewPage.tsx
- ccw/frontend/src/locales/en/project-overview.json
- ccw/frontend/src/locales/zh/project-overview.json
2026-02-01 23:11:46 +08:00
catlog22
76967a7350 feat: enhance workflow commands with dynamic analysis, multi-agent parallel exploration, and best practices
- analyze-with-file: Add dynamic direction generation from dimensions, support up to 4 parallel agent perspectives, enable cli-explore-agent parallelization
- brainstorm-with-file: Add dynamic direction generation from dimensions, add Agent-First guideline for complex tasks
- collaborative-plan-with-file: Add Understanding phase guidelines to prioritize latest documentation and resolve ambiguities with user clarification

Key improvements:
- Dimension-Direction Mapping replaces hard-coded options, enabling task-aware direction recommendations
- Multi-perspective parallel exploration (Analysis Perspectives, up to 4) with synthesis
- cli-explore-agent supports parallel execution per perspective for richer codebase context
- Agent-First principle: delegate complex tasks (code analysis, implementation) to agents/CLI instead of main process analysis
- Understanding phase emphasizes latest documentation discovery and clarification of ambiguities before planning
2026-02-01 22:51:04 +08:00
catlog22
7dcc0a1c05 feat: update usage recommendations across multiple workflow commands to require user confirmation and improve clarity 2026-02-01 22:04:26 +08:00
catlog22
5fb910610a fix(frontend): resolve URL path inconsistency caused by localStorage race condition
Fixed the issue where accessing the React frontend via ccw view with a URL path
parameter would load a different workspace due to localStorage rehydration race
condition.

Root cause: zustand persist's onRehydrateStorage callback automatically called
switchWorkspace with the cached projectPath before AppShell could process the
URL parameter.

Changes:
- workflowStore.ts: Remove automatic switchWorkspace from onRehydrateStorage
- AppShell.tsx: Centralize initialization logic with clear priority order:
  * Priority 1: URL ?path= parameter (explicit user intent)
  * Priority 2: localStorage fallback (implicit cache)
  * Added isWorkspaceInitialized state lock to prevent duplicate execution

Fixes: URL showing ?path=/new/path but loading /old/path from cache
2026-02-01 18:35:51 +08:00
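The priority order this fix centralizes in AppShell can be sketched as a single pure function — explicit URL intent beats the localStorage cache. Function and parameter names are illustrative; the real code also sets an `isWorkspaceInitialized` lock around this decision.

```typescript
// Sketch: resolve the initial workspace path with URL ?path= taking priority
// over the cached path, mirroring the fix for the rehydration race.
function resolveInitialWorkspace(
  urlSearch: string,
  cachedPath: string | null,
): string | null {
  const urlPath = new URLSearchParams(urlSearch).get("path");
  if (urlPath) return urlPath; // Priority 1: explicit user intent
  return cachedPath;           // Priority 2: implicit cache fallback
}
```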
catlog22
d46406df4a feat: Add CodexLens Manager Page with tabbed interface for managing CodexLens features
feat: Implement ConflictTab component to display conflict resolution decisions in session detail

feat: Create ImplPlanTab component to show implementation plan with modal viewer in session detail

feat: Develop ReviewTab component to display review findings by dimension in session detail

test: Add end-to-end tests for CodexLens Manager functionality including navigation, tab switching, and settings validation
2026-02-01 17:45:38 +08:00
catlog22
8dc115a894 feat(codex): convert 4 workflow commands to Codex prompt format with serial execution
Convert parallel multi-agent/multi-CLI workflows to Codex-compatible serial execution:

- brainstorm-with-file: Parallel Creative/Pragmatic/Systematic perspectives → Serial CLI execution
- analyze-with-file: cli-explore-agent + parallel CLI → Native tools (Glob/Grep/Read) + serial Gemini CLI
- collaborative-plan-with-file: Parallel sub-agents → Serial domain planning loop
- unified-execute-with-file: DAG-based parallel wave execution → Serial task-by-task execution

Key Technical Changes:
- YAML header: Remove 'name' and 'allowed-tools' fields
- Variables: $ARGUMENTS → $TOPIC/$TASK/$PLAN
- Task agents: Task() calls → ccw cli commands with --tool flag
- Parallel execution: Parallel Task/Bash calls → Sequential loops
- Session format: Match existing Codex prompt structure

Pattern: For multi-step workflows, use explicit wait points with "Wait for completion before proceeding"
2026-02-01 17:20:00 +08:00
catlog22
e9789e747a feat: update unified-execute-with-file command for sequential execution and improved dependency handling 2026-02-01 15:35:41 +08:00
catlog22
0342976c51 feat: add JSON detection utility for various formats
- Implemented a utility function `detectJsonInLine` to identify and parse JSON data from different formats including direct JSON, tool calls, tool results, embedded JSON, and code blocks.
- Introduced `JsonDetectionResult` interface to standardize the detection results.
- Added a new test results file to track the status of tests.
2026-02-01 15:17:03 +08:00
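A `detectJsonInLine` utility of the kind described above might handle two of the listed formats like this — direct JSON (the whole line) and embedded JSON (a text prefix before the payload). This is a hypothetical sketch covering only those two cases; the real utility also handles tool calls, tool results, and code blocks.

```typescript
// Sketch of JSON detection in a single output line.
interface JsonDetectionResult {
  found: boolean;
  data?: unknown;
  format?: "direct" | "embedded";
}

function detectJsonInLine(line: string): JsonDetectionResult {
  const trimmed = line.trim();
  // Direct JSON: the whole line is an object or array literal.
  if (trimmed.startsWith("{") || trimmed.startsWith("[")) {
    try {
      return { found: true, data: JSON.parse(trimmed), format: "direct" };
    } catch {
      return { found: false };
    }
  }
  // Embedded JSON: a prefix such as "tool result: " before the first brace.
  const brace = trimmed.indexOf("{");
  if (brace > 0) {
    try {
      return { found: true, data: JSON.parse(trimmed.slice(brace)), format: "embedded" };
    } catch {
      return { found: false };
    }
  }
  return { found: false };
}
```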
catlog22
f196b76064 chore(tests): remove obsolete test results and reports 2026-02-01 11:16:17 +08:00
catlog22
44cc4cad0f chore(tests): add gitignore for test results and reports 2026-02-01 11:16:00 +08:00
catlog22
fc1471396c feat(e2e): generate comprehensive E2E tests and fix TypeScript compilation errors
- Add 23 E2E test spec files covering 94 API endpoints across business domains
- Fix TypeScript compilation errors (file casing, duplicate export, implicit any)
- Update Playwright deprecated API calls (getByPlaceholderText -> getByPlaceholder)
- Tests cover: dashboard, sessions, tasks, workspace, loops, issues-queue, discovery,
  skills, commands, memory, project-overview, session-detail, cli-history,
  cli-config, cli-installations, lite-tasks, review, mcp, hooks, rules,
  index-management, prompt-memory, file-explorer

Test coverage: 100% domain coverage (23/23 domains)
API coverage: 94 endpoints across 23 business domains
Quality gates: 0 CRITICAL issues, all anti-patterns passed

Note: 700+ timeout tests require backend server (port 3456) to pass
2026-02-01 11:15:11 +08:00
catlog22
cf401d00e1 feat(cli-stream-monitor): add JSON card components and refactor tab UI
Add new components for CLI stream monitoring with JSON visualization:
- ExecutionTab: simplified tab with status indicator and active styling
- JsonCard: collapsible card for JSON data with type-based styling (6 types)
- JsonField: recursive JSON field renderer with color-coded values
- OutputLine: auto-detects JSON and renders appropriate component
- jsonDetector: smart JSON detection supporting 5 formats

Refactor CliStreamMonitorLegacy to use new components:
- Replace inline tab rendering with ExecutionTab component
- Replace inline output rendering with OutputLine component
- Add Badge import for active count display
- Fix type safety with proper id propagation

Component features:
- Type-specific styling for tool_call, metadata, system, stdout, stderr, thought
- Collapsible content (show 3 fields by default, expandable)
- Copy button and raw JSON view toggle
- Timestamp display
- Auto-detection of JSON in output lines

Fixes:
- Missing jsonDetector.ts file
- Type mismatch between OutputLine (6 types) and JsonCard (4 types)
- Unused isActive prop in ExecutionTab
2026-02-01 11:14:33 +08:00
catlog22
b66d20f5a6 Merge branch 'main' of https://github.com/catlog22/Claude-Code-Workflow 2026-01-31 23:12:45 +08:00
catlog22
a2206df50f feat: add CliStreamMonitor and related components for CLI output streaming
- Implemented CliStreamMonitor component for real-time CLI output monitoring with multi-execution support.
- Created JsonFormatter component for displaying JSON content in various formats (text, card, inline).
- Added utility functions for JSON detection and formatting in jsonUtils.ts.
- Introduced LogBlock utility functions for styling CLI output lines.
- Developed a new Collapsible component for better UI interactions.
- Created IssueHubPage for managing issues, queue, and discovery with tab navigation.
2026-01-31 23:12:39 +08:00
catlog22
2f10305945 refactor(commands): replace SlashCommand with Skill tool invocation
Update all workflow command files to use Skill tool instead of SlashCommand:
- Change allowed-tools: SlashCommand(*) → Skill(*)
- Convert SlashCommand(command="/path", args="...") → Skill(skill="path", args="...")
- Update descriptive text references from SlashCommand to Skill
- Remove javascript language tags from code blocks (25 files)

Affected 25 command files across:
- workflow: plan, execute, init, lite-plan, lite-fix, etc.
- workflow/test: test-fix-gen, test-cycle-execute, tdd-plan, tdd-verify
- workflow/review: review-cycle-fix, review-module-cycle, review-session-cycle
- workflow/ui-design: codify-style, explore-auto, imitate-auto
- workflow/brainstorm: brainstorm-with-file, auto-parallel
- issue: discover, discover-by-prompt, plan
- ccw, ccw-debug

This aligns with the actual Skill tool interface which uses 'skill' and 'args' parameters.
2026-01-31 23:09:59 +08:00
catlog22
92ded5908f feat(flowchart): update implementation section and enhance edge styling
feat(taskdrawer): improve implementation steps display and support multiple file formats
2026-01-31 21:38:02 +08:00
catlog22
1bd082a725 feat: add tests and implementation for issue discovery and queue pages
- Implemented `DiscoveryPage` with session management and findings display.
- Added tests for `DiscoveryPage` to ensure proper rendering and functionality.
- Created `QueuePage` for managing issue execution queues with stats and actions.
- Added tests for `QueuePage` to verify UI elements and translations.
- Introduced `useIssues` hooks for fetching and managing issue data.
- Added loading skeletons and error handling for better user experience.
- Created `vite-env.d.ts` for TypeScript support in Vite environment.
2026-01-31 21:20:10 +08:00
catlog22
6d225948d1 feat(workflow): add interactive init-guidelines wizard for project guidelines configuration
Add /workflow:init-guidelines command that interactively fills project-guidelines.json
based on project analysis from project-tech.json. Features:
- 5-round questionnaire covering coding style, naming, file structure, documentation,
  architecture, tech stack, performance, security, and quality rules
- Dynamic question generation based on detected language/framework/architecture
- Append/reset mode for existing guidelines
- Multi-select options with "Other" for custom input

Modify /workflow:init to ask user whether to run init-guidelines after initialization
when guidelines are empty (scaffold only).
2026-01-31 20:58:29 +08:00
catlog22
35f9116cce feat: add support for dual frontend (JS and React) in the CCW application
- Updated CLI to include `--frontend` option for selecting frontend type (js, react, both).
- Modified serve command to start React frontend when specified.
- Implemented React frontend startup and shutdown logic.
- Enhanced server routing to handle requests for both JS and React frontends.
- Added workspace selector component with i18n support.
- Updated tests to reflect changes in header and A2UI components.
- Introduced new Radix UI components for improved UI consistency.
- Refactored A2UIButton and A2UIDateTimeInput components for better code clarity.
- Created migration plan for gradual transition from JS to React frontend.
2026-01-31 16:46:45 +08:00
catlog22
345437415f Add end-to-end tests for workspace switching and backend tests for ask_question tool
- Implemented E2E tests for workspace switching functionality, covering scenarios such as switching workspaces, data isolation, language preference maintenance, and UI updates.
- Added tests to ensure workspace data is cleared on logout and handles unsaved changes during workspace switches.
- Created comprehensive backend tests for the ask_question tool, validating question creation, execution, answer handling, cancellation, and timeout scenarios.
- Included edge case tests to ensure robustness against duplicate questions and invalid answers.
2026-01-31 16:02:20 +08:00
catlog22
715ef12c92 feat(a2ui): Implement A2UI backend with question handling and WebSocket support
- Added A2UITypes for defining question structures and answers.
- Created A2UIWebSocketHandler for managing WebSocket connections and message handling.
- Developed ask-question tool for interactive user questions via A2UI.
- Introduced platformUtils for platform detection and shell command handling.
- Centralized TypeScript types in index.ts for better organization.
- Implemented compatibility checks for hook templates based on platform requirements.
2026-01-31 15:27:12 +08:00
catlog22
4e009bb03a feat: add --with-commit parameter to workflow:execute for auto-commit after each task
- Add --with-commit flag to argument-hint and usage examples
- Auto-commit changes based on summary document after each agent task
- Minimal principle: only commit files modified by completed task
- Commit message format: {type}: {task-title} - {summary}
- Type mapping: feature→feat, bugfix→fix, refactor→refactor, etc.
2026-01-31 10:31:06 +08:00
catlog22
a0f81f8841 Add API error monitoring tests and error context snapshots for various browsers
- Created error context snapshots for Firefox, WebKit, and Chromium to capture UI state during API error monitoring.
- Implemented e2e tests for API error detection, including console errors, failed API requests, and proxy errors.
- Added functionality to ignore specific API patterns in monitoring assertions.
- Ensured tests validate the monitoring system's ability to detect and report errors effectively.
2026-01-31 00:15:59 +08:00
catlog22
f1324a0bc8 feat: enhance test-task-generate with Gemini enrichment and planning-notes mechanism
Implement two major enhancements to test generation workflow:

1. Gemini Test Enhancement (Phase 1.5)
   - Add Gemini CLI analysis for comprehensive test suggestions
   - Generate enriched test specifications for L1-L3 test layers
   - Focus on API contracts, integration patterns, error scenarios
   - Create gemini-enriched-suggestions.md as artifact

2. Planning Notes Mechanism (test-planning-notes.md)
   - Similar to plan.md's planning-notes pattern
   - Record Test Intent (Phase 1)
   - Embed complete Gemini enrichment (Phase 1.5)
   - Track consolidated test requirements
   - Support N+1 context with decisions and deferred items

Implementation Details:
   - Created Gemini prompt template: test-suggestions-enhancement.txt
   - Enhanced test-task-generate.md with Phase 1, 1.5, 2 logic
   - Gemini content embedded in planning-notes for single source of truth
   - test-action-planning-agent reads both TEST_ANALYSIS_RESULTS.md and test-planning-notes.md
   - Backward compatible: Phase 1.5 is optional enhancement

Benefits:
   - Comprehensive test coverage (API, integration, error scenarios)
   - Full traceability of test planning process
   - Consolidated context in one file for easy review
   - Preserved Gemini output as independent artifact
2026-01-30 23:51:57 +08:00
catlog22
81725c94b1 Add E2E tests for internationalization across multiple pages
- Implemented navigation.spec.ts to test language switching and translation of navigation elements.
- Created sessions-page.spec.ts to verify translations on the sessions page, including headers, status badges, and date formatting.
- Developed settings-page.spec.ts to ensure settings page content is translated and persists across sessions.
- Added skills-page.spec.ts to validate translations for skill categories, action buttons, and empty states.
2026-01-30 22:54:21 +08:00
catlog22
917de6f167 Merge pull request #109 from wzp-0815/fix-install-docs
Fix missing ccw install step in installation documentation
2026-01-30 22:06:28 +08:00
catlog22
e78e95049b feat: update dependencies and refactor dropdown components for improved functionality 2026-01-30 17:26:07 +08:00
catlog22
a5c3dff8d3 feat: implement FlowExecutor for executing flow definitions with DAG traversal and node execution 2026-01-30 16:59:18 +08:00
catlog22
0a7c1454d9 feat: add task JSON status update instructions to agents
Add jq commands for agents to update task status lifecycle:
- Step 0: Mark in_progress when starting
- Task Completion: Mark completed when finishing
2026-01-30 15:59:28 +08:00
catlog22
4a8481958a feat: add planning-notes consumption to plan-verify
- Add dimensions I (Constraints Compliance) and J (N+1 Context Validation)
- Verify tasks respect Consolidated Constraints from planning-notes.md
- Detect deferred items incorrectly included in current plan
- Check decision contradictions against N+1 Decisions table
2026-01-30 15:50:22 +08:00
catlog22
78b1287ced chore: Bump version to 6.3.54 2026-01-30 15:43:20 +08:00
catlog22
c6ad8e53b9 Add flow template generator documentation for meta-skill/flow-coordinator
- Introduced a comprehensive guide for generating workflow templates.
- Detailed usage instructions with examples for creating templates.
- Outlined execution flow across three phases: Template Design, Step Definition, and JSON Generation.
- Included JavaScript code snippets for each phase to facilitate implementation.
- Provided suggested step templates for various development scenarios.
- Documented command port references and minimum execution units for clarity.
2026-01-30 15:42:08 +08:00
catlog22
4006b2a0ee feat: add N+1 planning context recording to planning-notes
- Add N+1 Context section (Decisions + Deferred) to planning-notes.md init
- Add section 3.3 N+1 Context Recording in action-planning-agent.md
- Update task-generate-agent.md Phase 2A/2B/3 prompts with N+1 recording

Supports cross-module dependency resolution tracking and deferred items
for continuous planning iterations.
2026-01-30 15:42:00 +08:00
laoman.wzp
e464d93e29 Fix missing ccw install step in installation documentation
- Added the crucial `ccw install` step after npm installation in both
  English (INSTALL.md) and Chinese (INSTALL_CN.md) documentation
- Included detailed explanation of what ccw install does:
  * Installs workflows, scripts, templates to ~/.claude/
  * Installs skill definitions to ~/.codex/
  * Configures shell integration (optional)

This step is essential for the complete installation of CCW system files.

Co-Authored-By: Claude (Kimi-K2-Instruct-0905) <noreply@anthropic.com>
2026-01-30 14:47:21 +08:00
catlog22
0bb102c56a feat: add meta-skill flow-create command for workflow template generation
Interactive command to create flow-coordinator templates with comprehensive
command selection (9 categories), minimum execution units, and command port
reference integrated from ccw/ccw-coordinator/codex-coordinator.
2026-01-30 14:27:20 +08:00
catlog22
fca03a3f9c refactor: rename meta-skill → flow-coordinator, update template cmd paths
**Major changes:**
- Rename skill from meta-skill to flow-coordinator (avoid /ccw conflicts)
- Update all 17 templates: store full /workflow: command paths in cmd field
  - Session ID prefix: ms → fc (fc-YYYYMMDD-HHMMSS)
  - Workflow path: .workflow/.meta-skill → .workflow/.flow-coordinator
- Simplify SKILL.md schema documentation
  - Streamline Status Schema section
  - Consolidate Template Schema with single example
  - Remove redundant Field Explanations and behavior tables
- All templates now store cmd as full paths (e.g. /workflow:lite-plan)
  - Eliminates need for path assembly during execution
  - Matches ccw-coordinator execution format
2026-01-30 12:29:38 +08:00
catlog22
a9df4c6659 refactor: remove usage-recommendation section for unified execution commands, simplify documentation 2026-01-30 10:54:02 +08:00
catlog22
0a3246ab36 chore: Bump version to 6.3.53 2026-01-30 10:34:02 +08:00
catlog22
b5caee6b94 rename: collaborative-plan → collaborative-plan-with-file 2026-01-30 10:30:37 +08:00
catlog22
64d2156319 refactor: unify lite workflow artifact structure, merge exploration+understanding into planning-context
- Merge exploration.md and understanding.md into a single planning-context.md
- Add Output Artifacts tables to all lite workflows
- Unify the Output Location format in agent prompts
- Update Session Folder Structure to include planning-context.md

Affected files:
- cli-lite-planning-agent.md: update artifact documentation and format templates
- collaborative-plan.md: simplify to planning-context.md + sub-plan.json
- lite-plan.md: add artifact table and output locations
- lite-fix.md: add artifact table and output locations
2026-01-30 10:25:45 +08:00
catlog22
3f46a02df3 feat(cli): add settings file support for builtin Claude
- Enhance CLI status rendering to display settings file information for builtin Claude.
- Introduce settings file input in CLI manager for configuring the path to settings.json.
- Update Claude CLI tool interface to include settingsFile property.
- Implement settings file resolution and validation in CLI executor.
- Create a new collaborative planning workflow command with detailed documentation.
- Add test scripts for debugging tool configuration and command building.
2026-01-30 10:07:02 +08:00
catlog22
4b69492b16 feat: add Codex coordinator command supporting task analysis, command-chain recommendation, and sequential execution 2026-01-29 23:36:42 +08:00
1108 changed files with 285155 additions and 32229 deletions

View File

@@ -288,8 +288,8 @@ function computeCliStrategy(task, allTasks) {
"execution_group": "parallel-abc123|null",
"module": "frontend|backend|shared|null",
"execution_config": {
"method": "agent|hybrid|cli",
"cli_tool": "codex|gemini|qwen|auto",
"method": "agent|cli",
"cli_tool": "codex|gemini|qwen|auto|null",
"enable_resume": true,
"previous_cli_id": "string|null"
}
@@ -303,7 +303,7 @@ function computeCliStrategy(task, allTasks) {
- `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks
- `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
- `execution_config`: CLI execution settings (MUST align with userConfig from task-generate-agent)
- `method`: Execution method - `agent` (direct), `hybrid` (agent + CLI), `cli` (CLI only)
- `method`: Execution method - `agent` (direct) or `cli` (CLI only). Only two values in final task JSON.
- `cli_tool`: Preferred CLI tool - `codex`, `gemini`, `qwen`, `auto`, or `null` (for agent-only)
- `enable_resume`: Whether to use `--resume` for CLI continuity (default: true)
- `previous_cli_id`: Previous task's CLI execution ID for resume (populated at runtime)
@@ -318,14 +318,16 @@ userConfig.executionMethod → meta.execution_config
"cli" →
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Execution: Agent executes pre_analysis, then hands off context + implementation_approach to CLI
Execution: Agent executes pre_analysis, then hands off full context to CLI via buildCliHandoffPrompt()
"hybrid" →
meta.execution_config = { method: "hybrid", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Execution: Agent decides which tasks to handoff to CLI based on complexity
Per-task decision: set method to "agent" OR "cli" per task based on complexity
- Simple tasks (≤3 files, straightforward logic) → { method: "agent", cli_tool: null, enable_resume: false }
- Complex tasks (>3 files, complex logic, refactoring) → { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Final task JSON always has method = "agent" or "cli", never "hybrid"
```
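The mapping above can be condensed into a small helper. This is a hypothetical sketch (the field name `targetFileCount` and the 3-file threshold are assumptions taken from the "≤3 files / >3 files" rule in the text, not an actual function from the diff):

```javascript
// Sketch: resolve userConfig.executionMethod into a per-task execution_config.
// "hybrid" is resolved per task and never survives into the final task JSON.
function resolveExecutionConfig(userConfig, task) {
  const isComplex = (task.targetFileCount || 0) > 3; // hypothetical complexity proxy
  const method =
    userConfig.executionMethod === "hybrid"
      ? (isComplex ? "cli" : "agent")
      : userConfig.executionMethod; // "agent" or "cli" pass through unchanged
  return method === "cli"
    ? { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
    : { method: "agent", cli_tool: null, enable_resume: false };
}
```

Under this sketch, a hybrid run with a 5-file task yields `method: "cli"`, while a 2-file task yields `method: "agent"` with `cli_tool: null`.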
**Note**: implementation_approach steps NO LONGER contain `command` fields. CLI execution is controlled by task-level `meta.execution_config` only.
**IMPORTANT**: implementation_approach steps do NOT contain `command` fields. Execution routing is controlled by task-level `meta.execution_config.method` only.
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
@@ -344,7 +346,7 @@ userConfig.executionMethod → meta.execution_config
- `test_framework`: Existing test framework from project (required for test tasks)
- `coverage_target`: Target code coverage percentage (optional)
**Note**: CLI tool usage for test-fix tasks is now controlled via `flow_control.implementation_approach` steps with `command` fields, not via `meta.use_codex`.
**Note**: CLI tool usage for test-fix tasks is now controlled via task-level `meta.execution_config.method`, not via `meta.use_codex`.
#### Context Object
@@ -555,59 +557,45 @@ The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:
##### Implementation Approach
**Execution Modes**:
**Execution Control**:
The `implementation_approach` supports **two execution modes** based on the presence of the `command` field:
The `implementation_approach` defines sequential implementation steps. Execution routing is controlled by **task-level `meta.execution_config.method`**, NOT by step-level `command` fields.
1. **Default Mode (Agent Execution)** - `command` field **omitted**:
**Two Execution Modes**:
1. **Agent Mode** (`meta.execution_config.method = "agent"`):
- Agent interprets `modification_points` and `logic_flow` autonomously
- Direct agent execution with full context awareness
- No external tool overhead
- **Use for**: Standard implementation tasks where agent capability is sufficient
- **Required fields**: `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, `output`
2. **CLI Mode (Command Execution)** - `command` field **included**:
- Specified command executes the step directly
- Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
- **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
- **Required fields**: Same as default mode **PLUS** `command`, `resume_from` (optional)
- **Command patterns** (with resume support):
- `ccw cli -p '[prompt]' --tool codex --mode write --cd [path]`
- `ccw cli -p '[prompt]' --resume ${previousCliId} --tool codex --mode write` (resume from previous)
- `ccw cli -p '[prompt]' --tool gemini --mode write --cd [path]` (write mode)
- **Resume mechanism**: When step depends on previous CLI execution, include `--resume` with previous execution ID
2. **CLI Mode** (`meta.execution_config.method = "cli"`):
- Agent executes `pre_analysis`, then hands off full context to CLI via `buildCliHandoffPrompt()`
- CLI tool specified in `meta.execution_config.cli_tool` (codex/gemini/qwen)
- Leverages specialized CLI tools for complex reasoning
- **Use for**: Large-scale features, complex refactoring, or when userConfig.executionMethod = "cli"
**Semantic CLI Tool Selection**:
**Step Schema** (same for both modes):
```json
{
"step": 1,
"title": "Step title",
"description": "What to implement (may use [variable] placeholders from pre_analysis)",
"modification_points": ["Quantified changes: [list with counts]"],
"logic_flow": ["Implementation sequence"],
"depends_on": [0],
"output": "variable_name"
}
```
Agent determines CLI tool usage per-step based on user semantics and task nature.
**Required fields**: `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, `output`
**Source**: Scan `metadata.task_description` from context-package.json for CLI tool preferences.
**IMPORTANT**: Do NOT add `command` field to implementation_approach steps. Execution routing is determined by task-level `meta.execution_config.method` only.
**User Semantic Triggers** (patterns to detect in task_description):
- "use Codex/codex" → Add `command` field with Codex CLI
- "use Gemini/gemini" → Add `command` field with Gemini CLI
- "use Qwen/qwen" → Add `command` field with Qwen CLI
- "CLI execution" / "automated" → Infer appropriate CLI tool
**Task-Based Selection** (when no explicit user preference):
- **Implementation/coding**: Codex preferred for autonomous development
- **Analysis/exploration**: Gemini preferred for large context analysis
- **Documentation**: Gemini/Qwen with write mode (`--mode write`)
- **Testing**: Depends on complexity - simple=agent, complex=Codex
**Default Behavior**: Agent always executes the workflow. CLI commands are embedded in `implementation_approach` steps:
- Agent orchestrates task execution
- When step has `command` field, agent executes it via CCW CLI
- When step has no `command` field, agent implements directly
- This maintains agent control while leveraging CLI tool power
**Key Principle**: The `command` field is **optional**. Agent decides based on user semantics and task complexity.
**Examples**:
**Example**:
```json
[
// === DEFAULT MODE: Agent Execution (no command field) ===
{
"step": 1,
"title": "Load and analyze role analyses",
@@ -644,8 +632,7 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
],
"depends_on": [1],
"output": "implementation"
},
}
]
```
@@ -834,10 +821,35 @@ Use `analysis_results.complexity` or task count to determine structure:
- Proper linking between documents
- Consistent navigation and references
### 3.3 Guidelines Checklist
### 3.3 N+1 Context Recording
**Purpose**: Record decisions and deferred items for N+1 planning continuity.
**When**: After task generation, update `## N+1 Context` in planning-notes.md.
**What to Record**:
- **Decisions**: Architecture/technology choices with rationale (mark `Revisit?` if may change)
- **Deferred**: Items explicitly moved to N+1 with reason
**Example**:
```markdown
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| JWT over Session | Stateless scaling | No |
| CROSS::B::api → IMPL-B1 | B1 defines base | Yes |
### Deferred
- [ ] Rate limiting - Requires Redis (N+1)
- [ ] API versioning - Low priority
```
### 3.4 Guidelines Checklist
**ALWAYS:**
- **Load planning-notes.md FIRST**: Read planning-notes.md before context-package.json. Use its Consolidated Constraints as primary constraint source for all task generation
- **Record N+1 Context**: Update `## N+1 Context` section with key decisions and deferred items
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md

View File

@@ -1,13 +1,14 @@
---
name: cli-lite-planning-agent
description: |
Generic planning agent for lite-plan and lite-fix workflows. Generates structured plan JSON based on provided schema reference.
Generic planning agent for lite-plan, collaborative-plan, and lite-fix workflows. Generates structured plan JSON based on provided schema reference.
Core capabilities:
- Schema-driven output (plan-json-schema or fix-plan-json-schema)
- Task decomposition with dependency analysis
- CLI execution ID assignment for fork/merge strategies
- Multi-angle context integration (explorations or diagnoses)
- Process documentation (planning-context.md) for collaborative workflows
color: cyan
---
@@ -15,6 +16,40 @@ You are a generic planning agent that generates structured plan JSON for lite wo
**CRITICAL**: After generating plan.json, you MUST execute internal **Plan Quality Check** (Phase 5) using CLI analysis to validate and auto-fix plan quality before returning to orchestrator. Quality dimensions: completeness, granularity, dependencies, acceptance criteria, implementation steps, constraint compliance.
## Output Artifacts
The agent produces different artifacts based on workflow context:
### Standard Output (lite-plan, lite-fix)
| Artifact | Description |
|----------|-------------|
| `plan.json` | Structured plan following plan-json-schema.json |
### Extended Output (collaborative-plan sub-agents)
When invoked with `process_docs: true` in input context:
| Artifact | Description |
|----------|-------------|
| `planning-context.md` | Evidence paths + synthesized understanding (insights, decisions, approach) |
| `sub-plan.json` | Sub-plan following plan-json-schema.json with source_agent metadata |
**planning-context.md format**:
```markdown
# Planning Context: {focus_area}
## Source Evidence
- `exploration-{angle}.json` - {key finding}
- `{file}:{line}` - {what this proves}
## Understanding
- Current state: {analysis}
- Proposed approach: {strategy}
## Key Decisions
- Decision: {what} | Rationale: {why} | Evidence: {file ref}
```
## Input Context
@@ -34,10 +69,39 @@ You are a generic planning agent that generates structured plan JSON for lite wo
clarificationContext: { [question]: answer } | null,
complexity: "Low" | "Medium" | "High", // For lite-plan
severity: "Low" | "Medium" | "High" | "Critical", // For lite-fix
cli_config: { tool, template, timeout, fallback }
cli_config: { tool, template, timeout, fallback },
// Process documentation (collaborative-plan)
process_docs: boolean, // If true, generate planning-context.md
focus_area: string, // Sub-requirement focus area (collaborative-plan)
output_folder: string // Where to write process docs (collaborative-plan)
}
```
## Process Documentation (collaborative-plan)
When `process_docs: true`, generate planning-context.md before sub-plan.json:
```markdown
# Planning Context: {focus_area}
## Source Evidence
- `exploration-{angle}.json` - {key finding from exploration}
- `{file}:{line}` - {code evidence for decision}
## Understanding
- **Current State**: {what exists now}
- **Problem**: {what needs to change}
- **Approach**: {proposed solution strategy}
## Key Decisions
- Decision: {what} | Rationale: {why} | Evidence: {file:line or exploration ref}
## Dependencies
- Depends on: {other sub-requirements or none}
- Provides for: {what this enables}
```
## Schema-Driven Output
**CRITICAL**: Read the schema reference first to determine output structure:

View File

@@ -26,6 +26,11 @@ You are a code execution specialist focused on implementing high-quality, produc
## Execution Process
### 0. Task Status: Mark In Progress
```bash
jq --arg ts "$(date -Iseconds)" '.status="in_progress" | .status_history += [{"from":.status,"to":"in_progress","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
```
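The filter can be exercised against a throwaway task file. One caveat worth noting: because the documented command runs the `.status` assignment before the history append, the `from` field would record the new status rather than the old one; appending to `status_history` first preserves the prior value. A sketch of that reordered variant:

```shell
# Sketch: exercise the status-update filter on a throwaway task file.
# History is appended BEFORE the assignment so "from" captures the old status.
tmp=$(mktemp -d) && cd "$tmp"
echo '{"status":"pending","status_history":[]}' > IMPL-X.json
jq --arg ts "$(date -Iseconds)" \
  '.status_history += [{"from":.status,"to":"in_progress","changed_at":$ts}] | .status="in_progress"' \
  IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
jq -r '.status, .status_history[0].from' IMPL-X.json   # prints in_progress, then pending
```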
### 1. Context Assessment
**Input Sources**:
- User-provided task description and context
@@ -197,33 +202,17 @@ for (const step of task.flow_control.pre_analysis || []) {
preAnalysisResults[step.output_to] = result;
}
// Phase 2: Determine execution mode
const hasLegacyCommands = task.flow_control.implementation_approach
.some(step => step.command);
// Phase 2: Determine execution mode (based on task.meta.execution_config.method)
// Two modes: 'cli' (call CLI tool) or 'agent' (execute directly)
IF hasLegacyCommands:
// Backward compatibility: Old mode with step.command fields
FOR each step in implementation_approach[]:
IF step.command exists:
→ Execute via Bash: Bash({ command: step.command, timeout: 3600000 })
ELSE:
→ Agent direct implementation
ELSE IF executionMethod === 'cli':
// New mode: CLI Handoff
→ const cliPrompt = buildCliHandoffPrompt(preAnalysisResults, task)
IF executionMethod === 'cli':
// CLI Handoff: Full context passed to CLI via buildCliHandoffPrompt
→ const cliPrompt = buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath)
→ const cliCommand = buildCliCommand(task, cliTool, cliPrompt)
→ Bash({ command: cliCommand, run_in_background: false, timeout: 3600000 })
ELSE IF executionMethod === 'hybrid':
// Hybrid mode: Agent decides based on task complexity
→ IF task is complex (multiple files, complex logic):
Use CLI Handoff (same as cli mode)
ELSE:
Use Agent direct implementation
ELSE (executionMethod === 'agent'):
// Default: Agent direct implementation
// Execute implementation steps directly
FOR each step in implementation_approach[]:
1. Variable Substitution: Replace [variable_name] with preAnalysisResults
2. Read modification_points[] as files to create/modify
@@ -248,26 +237,23 @@ function getDefaultCliTool() {
}
// Build CLI prompt from pre-analysis results and task
function buildCliHandoffPrompt(preAnalysisResults, task) {
function buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath) {
const contextSection = Object.entries(preAnalysisResults)
.map(([key, value]) => `### ${key}\n${value}`)
.join('\n\n');
const approachSection = task.flow_control.implementation_approach
.map((step, i) => `
### Step ${step.step}: ${step.title}
${step.description}
**Modification Points**:
${step.modification_points?.map(m => `- ${m}`).join('\n') || 'N/A'}
**Logic Flow**:
${step.logic_flow?.map((l, j) => `${j + 1}. ${l}`).join('\n') || 'Follow modification points'}
`).join('\n');
const conventions = task.context.shared_context?.conventions?.join(' | ') || '';
const constraints = `Follow existing patterns | No breaking changes${conventions ? ' | ' + conventions : ''}`;
return `
PURPOSE: ${task.title}
Complete implementation based on pre-analyzed context.
Complete implementation based on pre-analyzed context and task JSON.
## TASK JSON
Read full task definition: ${taskJsonPath}
## TECH STACK
${task.context.shared_context?.tech_stack?.map(t => `- ${t}`).join('\n') || 'Auto-detect from project files'}
## PRE-ANALYSIS CONTEXT
${contextSection}
@@ -275,17 +261,17 @@ ${contextSection}
## REQUIREMENTS
${task.context.requirements?.map(r => `- ${r}`).join('\n') || task.context.requirements}
## IMPLEMENTATION APPROACH
${approachSection}
## ACCEPTANCE CRITERIA
${task.context.acceptance?.map(a => `- ${a}`).join('\n') || task.context.acceptance}
## TARGET FILES
${task.flow_control.target_files?.map(f => `- ${f}`).join('\n') || 'See modification points above'}
${task.flow_control.target_files?.map(f => `- ${f}`).join('\n') || 'See task JSON modification_points'}
## FOCUS PATHS
${task.context.focus_paths?.map(p => `- ${p}`).join('\n') || 'See task JSON'}
MODE: write
CONSTRAINTS: Follow existing patterns | No breaking changes
CONSTRAINTS: ${constraints}
`.trim();
}
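The body of `buildCliCommand` is not shown in this hunk; a minimal sketch, assuming it assembles the `ccw cli` flags documented earlier in this file (`-p`, `--tool`, `--mode`, `--resume`), might look like:

```javascript
// Sketch only: the real buildCliCommand may differ. Flags are taken from the
// command patterns documented above; quote-escaping shown is a naive assumption.
function buildCliCommand(task, cliTool, cliPrompt) {
  const cfg = (task.meta && task.meta.execution_config) || {};
  const escaped = cliPrompt.replace(/'/g, "'\\''"); // survive single-quoted shell arg
  const parts = ["ccw cli", `-p '${escaped}'`, `--tool ${cliTool}`, "--mode write"];
  if (cfg.enable_resume && cfg.previous_cli_id) {
    parts.push(`--resume ${cfg.previous_cli_id}`); // continue prior CLI session
  }
  return parts.join(" ");
}
```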
@@ -314,7 +300,7 @@ function buildCliCommand(task, cliTool, cliPrompt) {
**Execution Config Reference** (from task.meta.execution_config):
| Field | Values | Description |
|-------|--------|-------------|
| `method` | `agent` / `cli` / `hybrid` | Execution mode (default: agent) |
| `method` | `agent` / `cli` | Execution mode (default: agent) |
| `cli_tool` | See `~/.claude/cli-tools.json` | CLI tool preference (first enabled tool as default) |
| `enable_resume` | `true` / `false` | Enable CLI session resume |
@@ -363,12 +349,18 @@ function buildCliCommand(task, cliTool, cliPrompt) {
**Upon completing any task:**
1. **Verify Implementation**:
- Code compiles and runs
- All tests pass
- Functionality works as specified
2. **Update TODO List**:
2. **Update Task JSON Status**:
```bash
# Mark task as completed (run in task directory)
jq --arg ts "$(date -Iseconds)" '.status="completed" | .status_history += [{"from":"in_progress","to":"completed","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
```
3. **Update TODO List**:
- Update TODO_LIST.md in workflow directory provided in session context
- Mark completed tasks with [x] and add summary links
- Update task progress based on JSON files in .task/ directory

View File

@@ -157,7 +157,7 @@ When called, you receive:
- **User Context**: Specific requirements, constraints, and expectations from user discussion
- **Output Location**: Directory path for generated analysis files
- **Role Hint** (optional): Suggested role or role selection guidance
- **context-package.json** (CCW Workflow): Artifact paths catalog - use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- **context-package.json** : Artifact paths catalog - use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- **ASSIGNED_ROLE** (optional): Specific role assignment
- **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions


@@ -154,11 +154,15 @@ STEP 1: Parse Red Phase Requirements
→ Load existing test patterns from focus_paths
STEP 2: Execute Red Phase Implementation
IF step.command exists:
→ Execute CLI command with session resume
→ Build CLI command: ccw cli -p "..." --resume {resume_from} --tool {tool} --mode write
const executionMethod = task.meta?.execution_config?.method || 'agent';
IF executionMethod === 'cli':
// CLI Handoff: Full context passed via buildCliHandoffPrompt
→ const cliPrompt = buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath)
→ const cliCommand = buildCliCommand(task, cliTool, cliPrompt)
→ Bash({ command: cliCommand, run_in_background: false, timeout: 3600000 })
ELSE:
→ Direct agent implementation
// Execute directly
→ Create test files in modification_points
→ Write test cases following test_cases enumeration
→ Use context.shared_context.conventions for test style
@@ -195,11 +199,16 @@ STEP 1: Parse Green Phase Requirements
→ Set max_iterations from meta.max_iterations (default: 3)
STEP 2: Initial Implementation
IF step.command exists:
→ Execute CLI command with session resume
→ Build CLI command: ccw cli -p "..." --resume {resume_from} --tool {tool} --mode write
const executionMethod = task.meta?.execution_config?.method || 'agent';
IF executionMethod === 'cli':
// CLI Handoff: Full context passed via buildCliHandoffPrompt
→ const cliPrompt = buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath)
→ const cliCommand = buildCliCommand(task, cliTool, cliPrompt)
→ Bash({ command: cliCommand, run_in_background: false, timeout: 3600000 })
ELSE:
→ Direct agent implementation
// Execute implementation steps directly
→ Implement functions in modification_points
→ Follow logic_flow sequence
→ Use minimal code to pass tests (no over-engineering)
@@ -293,10 +302,15 @@ STEP 1: Parse Refactor Phase Requirements
→ Load refactoring scope from modification_points
STEP 2: Execute Refactor Implementation
IF step.command exists:
→ Execute CLI command with session resume
const executionMethod = task.meta?.execution_config?.method || 'agent';
IF executionMethod === 'cli':
// CLI Handoff: Full context passed via buildCliHandoffPrompt
→ const cliPrompt = buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath)
→ const cliCommand = buildCliCommand(task, cliTool, cliPrompt)
→ Bash({ command: cliCommand, run_in_background: false, timeout: 3600000 })
ELSE:
→ Direct agent refactoring
// Execute directly
→ Apply refactorings from logic_flow
→ Follow refactoring best practices:
• Extract functions for clarity
@@ -326,47 +340,15 @@ STEP 3: Regression Testing (REQUIRED)
### 3. CLI Execution Integration
**CLI Session Resumption** (when step.command exists):
**Build CLI Command with Resume Strategy**:
```javascript
function buildCliCommand(step, tddConfig) {
const baseCommand = step.command // From task JSON
// Parse cli_execution strategy
switch (tddConfig.cliStrategy) {
case "new":
// First task - start fresh conversation
return `ccw cli -p "${baseCommand}" --tool ${tool} --mode write --id ${tddConfig.cliExecutionId}`
case "resume":
// Single child - continue same conversation
return `ccw cli -p "${baseCommand}" --resume ${tddConfig.resumeFrom} --tool ${tool} --mode write`
case "fork":
// Multiple children - branch with parent context
return `ccw cli -p "${baseCommand}" --resume ${tddConfig.resumeFrom} --id ${tddConfig.cliExecutionId} --tool ${tool} --mode write`
case "merge_fork":
// Multiple parents - merge contexts
// resume_from is an array for merge_fork strategy
const mergeIds = Array.isArray(tddConfig.resumeFrom)
? tddConfig.resumeFrom.join(',')
: tddConfig.resumeFrom
return `ccw cli -p "${baseCommand}" --resume ${mergeIds} --id ${tddConfig.cliExecutionId} --tool ${tool} --mode write`
default:
// Fallback - no resume
return `ccw cli -p "${baseCommand}" --tool ${tool} --mode write`
}
}
```
**CLI Functions** (inherited from code-developer):
- `buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath)` - Assembles CLI prompt with full context
- `buildCliCommand(task, cliTool, cliPrompt)` - Builds CLI command with resume strategy
**Execute CLI Command**:
```javascript
// TDD agent runs in foreground - can receive hook callbacks
Bash(
command=buildCliCommand(step, tddConfig),
command=buildCliCommand(task, cliTool, cliPrompt),
timeout=3600000, // 60 min for CLI execution
run_in_background=false // Agent can receive task completion hooks
)
@@ -504,7 +486,7 @@ Before completing any TDD task, verify:
- Use test-fix cycle in Green phase
- Auto-revert on max iterations failure
- Generate TDD-enhanced summaries
- Use CLI resume strategies when step.command exists
- Use CLI resume strategies when meta.execution_config.method is "cli"
- Log all test-fix iterations to .process/
**Bash Tool (CLI Execution in TDD Agent)**:


@@ -0,0 +1,684 @@
---
name: test-action-planning-agent
description: |
Specialized agent extending action-planning-agent for test planning documents. Generates test task JSONs (IMPL-001, IMPL-001.3, IMPL-001.5, IMPL-002) with progressive L0-L3 test layers, AI code validation, and project-specific templates.
Inherits from: @action-planning-agent
See: d:\Claude_dms3\.claude\agents\action-planning-agent.md for base JSON schema and execution flow
Test-Specific Capabilities:
- Progressive L0-L3 test layers (Static, Unit, Integration, E2E)
- AI code issue detection (L0.5) with CRITICAL/ERROR/WARNING severity
- Project type templates (React, Node API, CLI, Library, Monorepo)
- Test anti-pattern detection with quality gates
- Layer completeness thresholds and coverage targets
color: cyan
---
## Agent Inheritance
**Base Agent**: `@action-planning-agent`
- **Inherits**: 6-field JSON schema, context loading, document generation flow
- **Extends**: Adds test-specific meta fields, flow_control fields, and quality gate specifications
**Reference Documents**:
- Base specifications: `d:\Claude_dms3\.claude\agents\action-planning-agent.md`
- Test command: `d:\Claude_dms3\.claude\commands\workflow\tools\test-task-generate.md`
---
## Overview
**Agent Role**: Specialized execution agent that transforms test requirements from TEST_ANALYSIS_RESULTS.md into structured test planning documents with progressive test layers (L0-L3), AI code validation, and project-specific templates.
**Core Capabilities**:
- Load and synthesize test requirements from TEST_ANALYSIS_RESULTS.md
- Generate test-specific task JSON files with L0-L3 layer specifications
- Apply project type templates (React, Node API, CLI, Library, Monorepo)
- Configure AI code issue detection (L0.5) with severity levels
- Set up quality gates (IMPL-001.3 code validation, IMPL-001.5 test quality)
- Create test-focused IMPL_PLAN.md and TODO_LIST.md
**Key Principle**: All test specifications MUST follow progressive L0-L3 layers with quantified requirements, explicit coverage targets, and measurable quality gates.
---
## Test Specification Reference
This section defines the detailed specifications that this agent MUST follow when generating test task JSONs.
### Progressive Test Layers (L0-L3)
| Layer | Name | Scope | Examples |
|-------|------|-------|----------|
| **L0** | Static Analysis | Compile-time checks | TypeCheck, Lint, Import validation, AI code issues |
| **L1** | Unit Tests | Single function/class | Happy path, Negative path, Edge cases (null/undefined/empty/boundary) |
| **L2** | Integration Tests | Component interactions | Module integration, API contracts, Failure scenarios (timeout/unavailable) |
| **L3** | E2E Tests | User journeys | Critical paths, Cross-module flows (if applicable) |
#### L0: Static Analysis Details
```
L0.1 Compilation - tsc --noEmit, babel parse, no syntax errors
L0.2 Import Validity - Package exists, path resolves, no circular deps
L0.3 Type Safety - No 'any' abuse, proper generics, null checks
L0.4 Lint Rules - ESLint/Prettier, project naming conventions
L0.5 AI Issues - Hallucinated imports, placeholders, mock leakage, etc.
```
#### L1: Unit Tests Details (per function/class)
```
L1.1 Happy Path - Normal input → expected output
L1.2 Negative Path - Invalid input → proper error/rejection
L1.3 Edge Cases - null, undefined, empty, boundary values
L1.4 State Changes - Before/after assertions for stateful code
L1.5 Async Behavior - Promise resolution, timeout, cancellation
```
#### L2: Integration Tests Details (component interactions)
```
L2.1 Module Wiring - Dependencies inject correctly
L2.2 API Contracts - Request/response schema validation
L2.3 Database Ops - CRUD operations, transactions, rollback
L2.4 External APIs - Mock external services, retry logic
L2.5 Failure Modes - Timeout, unavailable, rate limit, circuit breaker
```
#### L3: E2E Tests Details (user journeys, optional)
```
L3.1 Critical Paths - Login, checkout, core workflows
L3.2 Cross-Module - Feature spanning multiple modules
L3.3 Performance - Response time, memory usage thresholds
L3.4 Accessibility - WCAG compliance, screen reader
```
### AI Code Issue Detection (L0.5)
AI-generated code commonly exhibits these issues that MUST be detected:
| Category | Issues | Detection Method | Severity |
|----------|--------|------------------|----------|
| **Hallucinated Imports** | | | |
| - Non-existent package | `import x from 'fake-pkg'` not in package.json | Validate against package.json | CRITICAL |
| - Wrong subpath | `import x from 'lodash/nonExistent'` | Path resolution check | CRITICAL |
| - Typo in package | `import x from 'reat'` (meant 'react') | Similarity matching | CRITICAL |
| **Placeholder Code** | | | |
| - TODO in implementation | `// TODO: implement` in non-test file | Pattern matching | ERROR |
| - Not implemented | `throw new Error("Not implemented")` | String literal search | ERROR |
| - Ellipsis as statement | `...` (not spread) | AST analysis | ERROR |
| **Mock Leakage** | | | |
| - Jest in production | `jest.fn()`, `jest.mock()` in `src/` | File path + pattern | CRITICAL |
| - Spy in production | `vi.spyOn()`, `sinon.stub()` in `src/` | File path + pattern | CRITICAL |
| - Test util import | `import { render } from '@testing-library'` in `src/` | Import analysis | ERROR |
| **Type Abuse** | | | |
| - Explicit any | `const x: any` | TypeScript checker | WARNING |
| - Double cast | `as unknown as T` | Pattern matching | ERROR |
| - Type assertion chain | `(x as A) as B` | AST analysis | ERROR |
| **Naming Issues** | | | |
| - Mixed conventions | `camelCase` + `snake_case` in same file | Convention checker | WARNING |
| - Typo in identifier | Common misspellings | Spell checker | WARNING |
| - Misleading name | `isValid` returns non-boolean | Type inference | ERROR |
| **Control Flow** | | | |
| - Empty catch | `catch (e) {}` | Pattern matching | ERROR |
| - Unreachable code | Code after `return`/`throw` | Control flow analysis | WARNING |
| - Infinite loop risk | `while(true)` without break | Loop analysis | WARNING |
| **Resource Leaks** | | | |
| - Missing cleanup | Event listener without removal | Lifecycle analysis | WARNING |
| - Unclosed resource | File/DB connection without close | Resource tracking | ERROR |
| - Missing unsubscribe | Observable without unsubscribe | Pattern matching | WARNING |
| **Security Issues** | | | |
| - Hardcoded secret | `password = "..."`, `apiKey = "..."` | Pattern matching | CRITICAL |
| - Console in production | `console.log` with sensitive data | File path analysis | WARNING |
| - Eval usage | `eval()`, `new Function()` | Pattern matching | CRITICAL |
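As a rough sketch of the hallucinated-import check, the detector below flags bare import specifiers that are not declared in `package.json`. Function and field names are illustrative; a production gate would extract specifiers via an AST pass rather than receive them pre-parsed:

```javascript
function findHallucinatedImports(importSpecifiers, packageJson) {
  const declared = new Set([
    ...Object.keys(packageJson.dependencies || {}),
    ...Object.keys(packageJson.devDependencies || {}),
  ]);
  const toPackageName = (spec) => {
    const parts = spec.split('/');
    // Scoped packages keep two segments: '@scope/name/sub' → '@scope/name'
    return spec.startsWith('@') ? parts.slice(0, 2).join('/') : parts[0];
  };
  return importSpecifiers
    .filter((s) => !s.startsWith('.') && !s.startsWith('node:')) // skip relative and builtin
    .map(toPackageName)
    .filter((pkg) => !declared.has(pkg))
    .map((pkg) => ({ package: pkg, severity: 'CRITICAL' }));
}

// 'fake-pkg' is not declared, so only it is flagged
const issues = findHallucinatedImports(
  ['react', 'lodash/get', 'fake-pkg', './utils'],
  { dependencies: { react: '^18.0.0', lodash: '^4.17.21' } }
);
console.log(issues); // [ { package: 'fake-pkg', severity: 'CRITICAL' } ]
```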
### Project Type Detection & Templates
| Project Type | Detection Signals | Test Focus | Example Frameworks |
|--------------|-------------------|------------|-------------------|
| **React/Vue/Angular** | `react` or `vue` in deps, `.jsx/.vue/.ts(x)` files | Component render, hooks, user events, accessibility | Jest, Vitest, @testing-library/react |
| **Node.js API** | Express/Fastify/Koa/hapi in deps, route handlers | Request/response, middleware, auth, error handling | Jest, Mocha, Supertest |
| **CLI Tool** | `bin` field, commander/yargs in deps | Argument parsing, stdout/stderr, exit codes | Jest, Commander tests |
| **Library/SDK** | `main`/`exports` field, no app entry point | Public API surface, backward compatibility, types | Jest, TSup |
| **Full-Stack** | Both frontend + backend, monorepo or separate dirs | API integration, SSR, data flow, end-to-end | Jest, Cypress/Playwright, Vitest |
| **Monorepo** | workspaces, lerna, nx, pnpm-workspaces | Cross-package integration, shared dependencies | Jest workspaces, Lerna |
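The detection signals above could be read from `package.json` roughly as follows. This is a heuristic sketch only: it omits Full-Stack detection, ignores file-tree signals, and the function name is illustrative:

```javascript
function detectProjectType(pkg) {
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  // Order matters: structural signals (workspaces, bin) take precedence
  if (pkg.workspaces) return 'Monorepo';
  if (pkg.bin) return 'CLI Tool';
  if (deps.express || deps.fastify || deps.koa) return 'Node.js API';
  if (deps.react || deps.vue || deps['@angular/core']) return 'React/Vue/Angular';
  if (pkg.main || pkg.exports) return 'Library/SDK';
  return 'Unknown';
}

console.log(detectProjectType({ dependencies: { react: '^18.0.0' } })); // React/Vue/Angular
console.log(detectProjectType({ bin: { ccw: './bin/ccw.js' } }));       // CLI Tool
```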
### Test Anti-Pattern Detection
| Category | Anti-Pattern | Detection | Severity |
|----------|--------------|-----------|----------|
| **Empty Tests** | | | |
| - No assertion | `it('test', () => {})` | Body analysis | CRITICAL |
| - Only setup | `it('test', () => { const x = 1; })` | No expect/assert | ERROR |
| - Skipped test | `it.skip('test', ...)` | Skip detection | WARNING |
| **Weak Assertions** | | | |
| - toBeDefined only | `expect(x).toBeDefined()` | Pattern match | WARNING |
| - toBeTruthy only | `expect(x).toBeTruthy()` | Pattern match | WARNING |
| - Snapshot abuse | Many `.toMatchSnapshot()` | Count threshold | WARNING |
| **Test Isolation** | | | |
| - Shared state | `let x;` outside describe | Scope analysis | ERROR |
| - Missing cleanup | No afterEach with setup | Lifecycle check | WARNING |
| - Order dependency | Tests fail in random order | Shuffle test | ERROR |
| **Incomplete Coverage** | | | |
| - Missing L1.2 | No negative path test | Pattern scan | ERROR |
| - Missing L1.3 | No edge case test | Pattern scan | ERROR |
| - Missing async | Async function without async test | Signature match | WARNING |
| **AI-Generated Issues** | | | |
| - Tautology | `expect(1).toBe(1)` | Literal detection | CRITICAL |
| - Testing mock | `expect(mockFn).toHaveBeenCalled()` only | Mock-only test | ERROR |
| - Copy-paste | Identical test bodies | Similarity check | WARNING |
| - Wrong target | Test doesn't import subject | Import analysis | CRITICAL |
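Two of the anti-patterns above lend themselves to a quick pattern-based pass. The regexes below are illustrative approximations, not the actual gate implementation; a real scanner would work on the AST:

```javascript
function scanTestSource(source) {
  const issues = [];
  // Tautology: identical literal on both sides of expect(...).toBe(...)
  const tautology = /expect\(\s*([\d'"`][^)]*)\s*\)\s*\.toBe\(\s*\1\s*\)/;
  if (tautology.test(source)) {
    issues.push({ antiPattern: 'tautology', severity: 'CRITICAL' });
  }
  // Empty test: it('...', () => {}) with a completely empty body
  const emptyTest = /\bit\(\s*['"`][^'"`]*['"`]\s*,\s*(?:async\s*)?\(\s*\)\s*=>\s*\{\s*\}\s*\)/;
  if (emptyTest.test(source)) {
    issues.push({ antiPattern: 'empty_test', severity: 'CRITICAL' });
  }
  return issues;
}

const report = scanTestSource(`
  it('does nothing', () => {});
  it('tautology', () => { expect(1).toBe(1); });
`);
console.log(report.map((i) => i.antiPattern)); // [ 'tautology', 'empty_test' ]
```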
### Layer Completeness & Quality Metrics
#### Completeness Requirements
| Layer | Requirement | Threshold |
|-------|-------------|-----------|
| L1.1 | Happy path for each exported function | 100% |
| L1.2 | Negative path for functions with validation | 80% |
| L1.3 | Edge cases (null, empty, boundary) | 60% |
| L1.4 | State change tests for stateful code | 80% |
| L1.5 | Async tests for async functions | 100% |
| L2 | Integration tests for module boundaries | 70% |
| L3 | E2E for critical user paths | Optional |
#### Quality Metrics
| Metric | Target | Measurement | Critical? |
|--------|--------|-------------|-----------|
| Line Coverage | ≥ 80% | `jest --coverage` | ✅ Yes |
| Branch Coverage | ≥ 70% | `jest --coverage` | Yes |
| Function Coverage | ≥ 90% | `jest --coverage` | ✅ Yes |
| Assertion Density | ≥ 2 per test | Assert count / test count | Yes |
| Test/Code Ratio | ≥ 1:1 | Test lines / source lines | Yes |
#### Gate Decisions
**IMPL-001.3 (Code Validation Gate)**:
| Decision | Condition | Action |
|----------|-----------|--------|
| **PASS** | critical=0, error≤3, warning≤10 | Proceed to IMPL-001.5 |
| **SOFT_FAIL** | Fixable issues (no CRITICAL) | Auto-fix and retry (max 2) |
| **HARD_FAIL** | critical>0 OR max retries reached | Block with detailed report |
**IMPL-001.5 (Test Quality Gate)**:
| Decision | Condition | Action |
|----------|-----------|--------|
| **PASS** | All thresholds met, no CRITICAL | Proceed to IMPL-002 |
| **SOFT_FAIL** | Minor gaps, no CRITICAL | Generate improvement list, retry |
| **HARD_FAIL** | CRITICAL issues OR max retries | Block with report |
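The IMPL-001.3 decision table can be condensed into a single function. The thresholds are taken verbatim from the table (critical=0, error≤3, warning≤10, max 2 retries); the function name and signature are hypothetical:

```javascript
function codeValidationGate(counts, retriesUsed, maxRetries = 2) {
  // PASS: all thresholds met, regardless of retries used
  if (counts.critical === 0 && counts.error <= 3 && counts.warning <= 10) {
    return 'PASS'; // proceed to IMPL-001.5
  }
  // HARD_FAIL: any CRITICAL issue, or retry budget exhausted
  if (counts.critical > 0 || retriesUsed >= maxRetries) {
    return 'HARD_FAIL'; // block with detailed report
  }
  return 'SOFT_FAIL'; // fixable issues remain: auto-fix and retry
}

console.log(codeValidationGate({ critical: 0, error: 1, warning: 4 }, 0)); // PASS
console.log(codeValidationGate({ critical: 0, error: 7, warning: 2 }, 0)); // SOFT_FAIL
console.log(codeValidationGate({ critical: 2, error: 0, warning: 0 }, 0)); // HARD_FAIL
```

The IMPL-001.5 gate follows the same shape, with layer-completeness and coverage thresholds substituted for issue counts.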
---
## 1. Input & Execution
### 1.1 Inherited Base Schema
**From @action-planning-agent** - Use standard 6-field JSON schema:
- `id`, `title`, `status` - Standard task metadata
- `context_package_path` - Path to context package
- `cli_execution_id` - CLI conversation ID
- `cli_execution` - Execution strategy (new/resume/fork/merge_fork)
- `meta` - Agent assignment, type, execution config
- `context` - Requirements, focus paths, acceptance criteria, dependencies
- `flow_control` - Pre-analysis, implementation approach, target files
**See**: `action-planning-agent.md` sections 2.1-2.3 for complete base schema specifications.
### 1.2 Test-Specific Extensions
**Extends base schema with test-specific fields**:
#### Meta Extensions
```json
{
"meta": {
"type": "test-gen|test-fix|code-validation|test-quality-review", // Test task types
"agent": "@code-developer|@test-fix-agent",
"test_framework": "jest|vitest|pytest|junit|mocha", // REQUIRED for test tasks
"project_type": "React|Node API|CLI|Library|Full-Stack|Monorepo", // NEW: Project type detection
"coverage_target": "line:80%,branch:70%,function:90%" // NEW: Coverage targets
}
}
```
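A consumer of `coverage_target` might split the compact string into per-metric numbers like this (a sketch; no particular parser is mandated by this schema):

```javascript
function parseCoverageTarget(target) {
  // "line:80%,branch:70%,function:90%" → { line: 80, branch: 70, function: 90 }
  return Object.fromEntries(
    target.split(',').map((pair) => {
      const [metric, pct] = pair.split(':');
      return [metric.trim(), parseInt(pct, 10)]; // parseInt drops the trailing '%'
    })
  );
}

console.log(parseCoverageTarget('line:80%,branch:70%,function:90%'));
// { line: 80, branch: 70, function: 90 }
```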
#### Flow Control Extensions
```json
{
"flow_control": {
"pre_analysis": [...], // From base schema
"implementation_approach": [...], // From base schema
"target_files": [...], // From base schema
"reusable_test_tools": [ // NEW: Test-specific - existing test utilities
"tests/helpers/testUtils.ts",
"tests/fixtures/mockData.ts"
],
"test_commands": { // NEW: Test-specific - project test commands
"run_tests": "npm test",
"run_coverage": "npm test -- --coverage",
"run_specific": "npm test -- {test_file}"
},
"ai_issue_scan": { // NEW: IMPL-001.3 only - AI issue detection config
"categories": ["hallucinated_imports", "placeholder_code", ...],
"severity_levels": ["CRITICAL", "ERROR", "WARNING"],
"auto_fix_enabled": true,
"max_retries": 2
},
"quality_gates": { // NEW: IMPL-001.5 only - Test quality thresholds
"layer_completeness": { "L1.1": "100%", "L1.2": "80%", ... },
"anti_patterns": ["empty_tests", "weak_assertions", ...],
"coverage_thresholds": { "line": "80%", "branch": "70%", ... }
}
}
}
```
### 1.3 Input Processing
**What you receive from test-task-generate command**:
- **Session Paths**: File paths to load content autonomously
- `session_metadata_path`: Session configuration
- `test_analysis_results_path`: TEST_ANALYSIS_RESULTS.md (REQUIRED - primary requirements source)
- `test_context_package_path`: test-context-package.json
- `context_package_path`: context-package.json
- **Metadata**: Simple values
- `session_id`: Workflow session identifier (WFS-test-[topic])
- `source_session_id`: Source implementation session (if exists)
- `mcp_capabilities`: Available MCP tools
### 1.4 Execution Flow
#### Phase 1: Context Loading & Assembly
```
1. Load TEST_ANALYSIS_RESULTS.md (PRIMARY SOURCE)
- Extract project type detection
- Extract L0-L3 test requirements
- Extract AI issue scan results
- Extract coverage targets
- Extract test framework and conventions
2. Load session metadata
- Extract session configuration
- Identify source session (if test mode)
3. Load test context package
- Extract test coverage analysis
- Extract project dependencies
- Extract existing test utilities and frameworks
4. Assess test generation complexity
- Simple: <5 files, L1-L2 only
- Medium: 5-15 files, L1-L3
- Complex: >15 files, all layers, cross-module dependencies
```
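The three complexity tiers map directly onto a small helper. The file-count boundaries come from the list above; the function name and second parameter are illustrative:

```javascript
function assessComplexity(fileCount, hasCrossModuleDeps = false) {
  if (fileCount < 5) return 'simple';                          // L1-L2 only
  if (fileCount <= 15 && !hasCrossModuleDeps) return 'medium'; // L1-L3
  return 'complex'; // all layers, cross-module dependencies
}

console.log(assessComplexity(3));        // simple
console.log(assessComplexity(10));       // medium
console.log(assessComplexity(20, true)); // complex
```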
#### Phase 2: Task JSON Generation
Generate minimum 4 tasks using **base 6-field schema + test extensions**:
**Base Schema (inherited from @action-planning-agent)**:
```json
{
"id": "IMPL-N",
"title": "Task description",
"status": "pending",
"context_package_path": ".workflow/active/WFS-test-{session}/.process/context-package.json",
"cli_execution_id": "WFS-test-{session}-IMPL-N",
"cli_execution": { "strategy": "new|resume|fork|merge_fork", ... },
"meta": { ... }, // See section 1.2 for test extensions
"context": { ... }, // See action-planning-agent.md section 2.2
"flow_control": { ... } // See section 1.2 for test extensions
}
```
**Task 1: IMPL-001.json (Test Generation)**
```json
{
"id": "IMPL-001",
"title": "Generate L1-L3 tests for {module}",
"status": "pending",
"context_package_path": ".workflow/active/WFS-test-{session}/.process/test-context-package.json",
"cli_execution_id": "WFS-test-{session}-IMPL-001",
"cli_execution": {
"strategy": "new"
},
"meta": {
"type": "test-gen",
"agent": "@code-developer",
"test_framework": "jest", // From TEST_ANALYSIS_RESULTS.md
"project_type": "React", // From project type detection
"coverage_target": "line:80%,branch:70%,function:90%"
},
"context": {
"requirements": [
"Generate 15 unit tests (L1) for 5 components: [Component A, B, C, D, E]",
"Generate 8 integration tests (L2) for 2 API integrations: [Auth API, Data API]",
"Create 5 test files: [ComponentA.test.tsx, ComponentB.test.tsx, ...]"
],
"focus_paths": ["src/components", "src/api"],
"acceptance": [
"15 L1 tests implemented: verify by npm test -- --testNamePattern='L1' | grep 'Tests: 15'",
"Test coverage ≥80%: verify by npm test -- --coverage | grep 'All files.*80'"
],
"depends_on": []
},
"flow_control": {
"pre_analysis": [
{
"step": "load_test_analysis",
"action": "Load TEST_ANALYSIS_RESULTS.md",
"commands": ["Read('.workflow/active/WFS-test-{session}/.process/TEST_ANALYSIS_RESULTS.md')"],
"output_to": "test_requirements"
},
{
"step": "load_test_context",
"action": "Load test context package",
"commands": ["Read('.workflow/active/WFS-test-{session}/.process/test-context-package.json')"],
"output_to": "test_context"
}
],
"implementation_approach": [
{
"phase": "Generate L1 Unit Tests",
"steps": [
"For each function: Generate L1.1 (happy path), L1.2 (negative), L1.3 (edge cases), L1.4 (state), L1.5 (async)"
],
"test_patterns": "render(), screen.getByRole(), userEvent.click(), waitFor()"
},
{
"phase": "Generate L2 Integration Tests",
"steps": [
"Generate L2.1 (module wiring), L2.2 (API contracts), L2.5 (failure modes)"
],
"test_patterns": "supertest(app), expect(res.status), expect(res.body)"
}
],
"target_files": [
"tests/components/ComponentA.test.tsx",
"tests/components/ComponentB.test.tsx",
"tests/api/auth.integration.test.ts"
],
"reusable_test_tools": [
"tests/helpers/renderWithProviders.tsx",
"tests/fixtures/mockData.ts"
],
"test_commands": {
"run_tests": "npm test",
"run_coverage": "npm test -- --coverage"
}
}
}
```
**Task 2: IMPL-001.3-validation.json (Code Validation Gate)**
```json
{
"id": "IMPL-001.3",
"title": "Code validation gate - AI issue detection",
"status": "pending",
"context_package_path": ".workflow/active/WFS-test-{session}/.process/test-context-package.json",
"cli_execution_id": "WFS-test-{session}-IMPL-001.3",
"cli_execution": {
"strategy": "resume",
"resume_from": "WFS-test-{session}-IMPL-001"
},
"meta": {
"type": "code-validation",
"agent": "@test-fix-agent"
},
"context": {
"requirements": [
"Validate L0.1-L0.5 for all generated test files",
"Detect all AI issues across 8 categories: [hallucinated_imports, placeholder_code, ...]",
"Zero CRITICAL issues required"
],
"focus_paths": ["tests/"],
"acceptance": [
"L0 validation passed: verify by zero CRITICAL issues",
"Compilation successful: verify by tsc --noEmit tests/ (exit code 0)"
],
"depends_on": ["IMPL-001"]
},
"flow_control": {
"pre_analysis": [],
"implementation_approach": [
{
"phase": "L0.1 Compilation Check",
"validation": "tsc --noEmit tests/"
},
{
"phase": "L0.2 Import Validity",
"validation": "Check all imports against package.json and node_modules"
},
{
"phase": "L0.5 AI Issue Detection",
"validation": "Scan for all 8 AI issue categories with severity levels"
}
],
"target_files": [],
"ai_issue_scan": {
"categories": [
"hallucinated_imports",
"placeholder_code",
"mock_leakage",
"type_abuse",
"naming_issues",
"control_flow",
"resource_leaks",
"security_issues"
],
"severity_levels": ["CRITICAL", "ERROR", "WARNING"],
"auto_fix_enabled": true,
"max_retries": 2,
"thresholds": {
"critical": 0,
"error": 3,
"warning": 10
}
}
}
}
```
**Task 3: IMPL-001.5-review.json (Test Quality Gate)**
```json
{
"id": "IMPL-001.5",
"title": "Test quality gate - anti-patterns and coverage",
"status": "pending",
"context_package_path": ".workflow/active/WFS-test-{session}/.process/test-context-package.json",
"cli_execution_id": "WFS-test-{session}-IMPL-001.5",
"cli_execution": {
"strategy": "resume",
"resume_from": "WFS-test-{session}-IMPL-001.3"
},
"meta": {
"type": "test-quality-review",
"agent": "@test-fix-agent"
},
"context": {
"requirements": [
"Validate layer completeness: L1.1 100%, L1.2 80%, L1.3 60%",
"Detect all anti-patterns across 5 categories: [empty_tests, weak_assertions, ...]",
"Verify coverage: line ≥80%, branch ≥70%, function ≥90%"
],
"focus_paths": ["tests/"],
"acceptance": [
"Coverage ≥80%: verify by npm test -- --coverage | grep 'All files.*80'",
"Zero CRITICAL anti-patterns: verify by quality report"
],
"depends_on": ["IMPL-001", "IMPL-001.3"]
},
"flow_control": {
"pre_analysis": [],
"implementation_approach": [
{
"phase": "Static Analysis",
"validation": "Lint test files, check anti-patterns"
},
{
"phase": "Coverage Analysis",
"validation": "Calculate coverage percentage, identify gaps"
},
{
"phase": "Quality Metrics",
"validation": "Verify thresholds, layer completeness"
}
],
"target_files": [],
"quality_gates": {
"layer_completeness": {
"L1.1": "100%",
"L1.2": "80%",
"L1.3": "60%",
"L1.4": "80%",
"L1.5": "100%",
"L2": "70%"
},
"anti_patterns": [
"empty_tests",
"weak_assertions",
"test_isolation",
"incomplete_coverage",
"ai_generated_issues"
],
"coverage_thresholds": {
"line": "80%",
"branch": "70%",
"function": "90%"
}
}
}
}
```
**Task 4: IMPL-002.json (Test Execution & Fix)**
```json
{
"id": "IMPL-002",
"title": "Test execution and fix cycle",
"status": "pending",
"context_package_path": ".workflow/active/WFS-test-{session}/.process/test-context-package.json",
"cli_execution_id": "WFS-test-{session}-IMPL-002",
"cli_execution": {
"strategy": "resume",
"resume_from": "WFS-test-{session}-IMPL-001.5"
},
"meta": {
"type": "test-fix",
"agent": "@test-fix-agent"
},
"context": {
"requirements": [
"Execute all tests and fix failures until pass rate ≥95%",
"Maximum 5 fix iterations",
"Use Gemini for diagnosis, agent for fixes"
],
"focus_paths": ["tests/", "src/"],
"acceptance": [
"All tests pass: verify by npm test (exit code 0)",
"Pass rate ≥95%: verify by test output"
],
"depends_on": ["IMPL-001", "IMPL-001.3", "IMPL-001.5"]
},
"flow_control": {
"pre_analysis": [],
"implementation_approach": [
{
"phase": "Initial Test Execution",
"command": "npm test"
},
{
"phase": "Iterative Fix Cycle",
"steps": [
"Diagnose failures with Gemini",
"Apply fixes via agent or CLI",
"Re-run tests",
"Repeat until pass rate ≥95% or max iterations"
],
"max_iterations": 5
}
],
"target_files": [],
"test_fix_cycle": {
"max_iterations": 5,
"diagnosis_tool": "gemini",
"fix_mode": "agent",
"exit_conditions": ["all_tests_pass", "max_iterations_reached"]
}
}
}
```
#### Phase 3: Document Generation
```
1. Create IMPL_PLAN.md (test-specific variant)
- frontmatter: workflow_type="test_session", test_framework, coverage_targets
- Test Generation Phase: L1-L3 layer breakdown
- Quality Gates: IMPL-001.3 and IMPL-001.5 specifications
- Test-Fix Cycle: Iteration strategy with diagnosis and fix modes
- Source Session Context: If exists (from source_session_id)
2. Create TODO_LIST.md
- Hierarchical structure with test phase containers
- Links to task JSONs with status markers
- Test layer indicators (L0, L1, L2, L3)
- Quality gate indicators (validation, review)
```
---
## 2. Output Validation
### Task JSON Validation
**IMPL-001 Requirements**:
- All L1.1-L1.5 tests explicitly defined for each target function
- Project type template correctly applied
- Reusable test tools and test commands included
- Implementation approach includes all 3 phases (L1, L2, L3)
**IMPL-001.3 Requirements**:
- All 8 AI issue categories included
- Severity levels properly assigned
- Auto-fix logic for ERROR and below
- Acceptance criteria references zero CRITICAL rule
**IMPL-001.5 Requirements**:
- Layer completeness thresholds: L1.1 100%, L1.2 80%, L1.3 60%
- All 5 anti-pattern categories included
- Coverage metrics: Line 80%, Branch 70%, Function 90%
- Acceptance criteria references all thresholds
**IMPL-002 Requirements**:
- Depends on: IMPL-001, IMPL-001.3, IMPL-001.5 (sequential)
- Max iterations: 5
- Diagnosis tool: Gemini
- Exit conditions: all_tests_pass OR max_iterations_reached
### Quality Standards
Hard Constraints:
- Task count: minimum 4, maximum 18
- All requirements quantified from TEST_ANALYSIS_RESULTS.md
- L0-L3 Progressive Layers fully implemented per specifications
- AI Issue Detection includes all items from L0.5 checklist
- Project Type Template correctly applied
- Test Anti-Patterns validation rules implemented
- Layer Completeness Thresholds met
- Quality Metrics targets: Line 80%, Branch 70%, Function 90%
---
## 3. Success Criteria
- All test planning documents generated successfully
- Task count reported: minimum 4
- Test framework correctly detected and reported
- Coverage targets clearly specified: L0 zero errors, L1 80%+, L2 70%+
- L0-L3 layers explicitly defined in IMPL-001 task
- AI issue detection configured in IMPL-001.3
- Quality gates with measurable thresholds in IMPL-001.5
- Source session status reported (if applicable)


@@ -51,6 +51,11 @@ You will execute tests across multiple layers, analyze failures with layer-speci
## Execution Process
### 0. Task Status: Mark In Progress
```bash
jq --arg ts "$(date -Iseconds)" '.status="in_progress" | .status_history += [{"from":.status,"to":"in_progress","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
```
### Flow Control Execution
When task JSON contains `flow_control` field, execute preparation and implementation steps systematically.
@@ -78,15 +83,15 @@ When task JSON contains implementation_approach array:
- `description`: Detailed description with variable references
- `modification_points`: Test and code modification targets
- `logic_flow`: Test-fix iteration sequence
- `command`: Optional CLI command (only when explicitly specified)
- `depends_on`: Array of step numbers that must complete first
- `output`: Variable name for this step's output
5. **Execution Mode Selection**:
- IF `command` field exists → Execute CLI command via Bash tool
- ELSE (no command) → Agent direct execution:
- Parse `modification_points` as files to modify
- Follow `logic_flow` for test-fix iteration
- Use test_commands from flow_control for test execution
- Based on `meta.execution_config.method`:
- `"cli"` → Build CLI command via buildCliHandoffPrompt() and execute via Bash tool
- `"agent"` (default) → Agent direct execution:
- Parse `modification_points` as files to modify
- Follow `logic_flow` for test-fix iteration
- Use test_commands from flow_control for test execution
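The mode selection above reduces to one lookup with a default. This condensed sketch names a hypothetical `selectExecution` helper; `buildCliHandoffPrompt` and `buildCliCommand` are the inherited helpers referenced elsewhere in this document:

```javascript
function selectExecution(task) {
  const method = task.meta?.execution_config?.method || 'agent';
  if (method === 'cli') {
    // Hand off to CLI: build prompt + command, execute via Bash tool
    return { mode: 'cli' };
  }
  // Default: agent executes modification_points directly
  return { mode: 'agent' };
}

console.log(selectExecution({ meta: {} }).mode); // agent
console.log(selectExecution({ meta: { execution_config: { method: 'cli' } } }).mode); // cli
```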
### 1. Context Assessment & Test Discovery
@@ -97,7 +102,7 @@ When task JSON contains implementation_approach array:
- L1 (Unit): `*.test.*`, `*.spec.*` in `__tests__/`, `tests/unit/`
- L2 (Integration): `tests/integration/`, `*.integration.test.*`
- L3 (E2E): `tests/e2e/`, `*.e2e.test.*`, `cypress/`, `playwright/`
- **context-package.json** (CCW Workflow): Use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- **context-package.json**: Use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- Identify test commands from project configuration
```bash
@@ -329,6 +334,13 @@ When generating test results for orchestrator (saved to `.process/test-results.j
- Pass rate >= 95% + any "high" or "medium" criticality failures → ⚠️ NEEDS FIX (continue iteration)
- Pass rate < 95% → ❌ FAILED (continue iteration or abort)
## Task Status Update
**Upon task completion**, update task JSON status:
```bash
jq --arg ts "$(date -Iseconds)" '.status="completed" | .status_history += [{"from":"in_progress","to":"completed","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
```
## Important Reminders
**ALWAYS:**

File diff suppressed because it is too large


@@ -0,0 +1,456 @@
---
name: ccw-plan
description: Planning coordinator - analyze requirements, select planning strategy, execute planning workflow in main process
argument-hint: "[--mode lite|multi-cli|full|plan-verify|replan|cli|issue|rapid-to-issue|brainstorm-with-file|analyze-with-file] [--yes|-y] \"task description\""
allowed-tools: Skill(*), TodoWrite(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*)
---
# CCW-Plan Command - Planning Coordinator
Planning orchestrator: requirement analysis → strategy selection → planning execution.
## Core Concept: Planning Units
**Definition**: Planning commands are grouped into logical units based on verification requirements and collaboration strategies.
**Planning Units**:
| Unit Type | Pattern | Example |
|-----------|---------|---------|
| **Quick Planning** | plan-cmd (no verify) | lite-plan |
| **Verified Planning** | plan-cmd → verify-cmd | plan → plan-verify |
| **Collaborative Planning** | multi-cli-plan (implicit verify) | multi-cli-plan |
| **With-File Planning** | brainstorm-with-file or analyze-with-file | brainstorm + plan options |
| **CLI-Assisted Planning** | ccw cli (analysis) → recommendations | quick analysis + decision |
| **Issue Workflow Planning** | plan → issue workflow (discover/queue/execute) | rapid-to-issue bridge |
**Atomic Rules**:
1. Lite mode: No verification (fast iteration)
2. Plan-verify mode: Mandatory quality gate
3. Multi-cli/Full mode: Optional verification (via --skip-verify flag)
4. With-File modes: Self-contained iteration with built-in post-completion options
5. CLI mode: Quick analysis, user-driven decisions
6. Issue modes: Planning integrated into issue workflow lifecycle
## Execution Model
**Synchronous (Main Process)**: Planning commands execute via Skill, blocking until complete.
```
User Input → Analyze Requirements → Select Strategy → [Confirm] → Execute Planning
Skill (blocking)
Update TodoWrite
Generate Artifacts
```
## 5-Phase Workflow
### Phase 1: Analyze Requirements
**Input** → Extract (goal, scope, constraints) → Assess (complexity, clarity, criticality) → **Analysis**
| Field | Values |
|-------|--------|
| complexity | low \| medium \| high |
| clarity | 0-3 (≥2 = clear) |
| criticality | normal \| high \| critical |
| scope | single-module \| cross-module \| system \| batch-issues |
**Output**: `Type: [task_type] | Goal: [goal] | Complexity: [complexity] | Clarity: [clarity]/3 | Criticality: [criticality]`
---
### Phase 1.5: Requirement Clarification (if clarity < 2)
```
Analysis → Check clarity ≥ 2?
YES → Continue to Phase 2
NO → Ask Questions → Update Analysis
```
**Questions Asked**: Goal (Create/Fix/Optimize/Analyze), Scope (Single file/Module/Cross-module/System), Constraints (Backward compat/Skip tests/Urgent hotfix)
---
### Phase 2: Select Planning Strategy & Build Command Chain
```
Analysis → Detect Mode (keywords) → Build Command Chain → Planning Workflow
```
#### Mode Detection (Priority Order)
```
Input Keywords → Mode
───────────────────────────────────────────────────────────────────────────────
quick|fast|immediate|recommendation|suggest → cli
issues?|batch|issue workflow|structured workflow|queue → issue
issue transition|rapid.*issue|plan.*issue|convert.*issue → rapid-to-issue
brainstorm|ideation|头脑风暴|创意|发散思维|multi-perspective → brainstorm-with-file
analyze.*document|explore.*concept|collaborative analysis → analyze-with-file
production|critical|payment|auth → plan-verify
adjust|modify|change plan → replan
uncertain|explore → full
complex|multiple module|integrate → multi-cli
(default) → lite
```
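The priority order above can be read as an ordered rule scan where the first match wins. A minimal sketch (the regexes and the rule list are illustrative assumptions, not the actual matcher):

```javascript
// Illustrative only: ordered rule scan mirroring the priority table above.
// The regex patterns are assumptions, not the real detector.
const MODE_RULES = [
  { mode: 'cli',                  re: /\b(quick|fast|immediate|recommendation|suggest)\b/i },
  { mode: 'issue',                re: /\b(issues?|batch|queue)\b|issue workflow|structured workflow/i },
  { mode: 'rapid-to-issue',       re: /issue transition|rapid.*issue|plan.*issue|convert.*issue/i },
  { mode: 'brainstorm-with-file', re: /brainstorm|ideation|头脑风暴|创意|发散思维|multi-perspective/i },
  { mode: 'analyze-with-file',    re: /analyze.*document|explore.*concept|collaborative analysis|协作分析/i },
  { mode: 'plan-verify',          re: /\b(production|critical|payment|auth)\b/i },
  { mode: 'replan',               re: /adjust|modify|change plan/i },
  { mode: 'full',                 re: /\b(uncertain|explore)\b/i },
  { mode: 'multi-cli',            re: /complex|multiple module|integrate/i },
];

function detectMode(input) {
  const hit = MODE_RULES.find(r => r.re.test(input)); // first match wins
  return hit ? hit.mode : 'lite';                     // default: lite
}
```

Note the word boundaries: `auth` matches as a standalone keyword but not inside `authentication`, so "Add user authentication" still falls through to the `lite` default.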
#### Command Chain Mapping
| Mode | Command Chain | Verification | Use Case |
|------|---------------|--------------|----------|
| **cli** | ccw cli --mode analysis --rule planning-* | None | Quick planning recommendation |
| **issue** | /issue:discover → /issue:plan → /issue:queue → /issue:execute | Optional | Batch issue planning & execution |
| **rapid-to-issue** | lite-plan → /issue:convert-to-plan → queue → execute | Optional | Quick planning → Issue workflow bridge |
| **brainstorm-with-file** | /workflow:brainstorm-with-file → (plan/issue options) | Self-contained | Multi-perspective ideation |
| **analyze-with-file** | /workflow:analyze-with-file → (plan/issue options) | Self-contained | Collaborative architecture analysis |
| **lite** | lite-plan | None | Fast simple planning |
| **multi-cli** | multi-cli-plan → [plan-verify] | Optional | Multi-model collaborative planning |
| **full** | brainstorm → plan → [plan-verify] | Optional | Comprehensive brainstorm + planning |
| **plan-verify** | plan → **plan-verify** | **Mandatory** | Production/critical features |
| **replan** | replan | None | Plan refinement/adjustment |
**Note**:
- `[ ]` = optional verification
- **bold** = mandatory quality gate
- With-File modes include built-in post-completion options to create plans/issues
**Output**: `Mode: [mode] | Strategy: [strategy] | Commands: [1. /cmd1 2. /cmd2]`
---
### Phase 3: User Confirmation
```
Planning Chain → Show Strategy → Ask User → User Decision:
- ✓ Confirm → Continue to Phase 4
- ⚙ Adjust → Change Mode (back to Phase 2)
- ✗ Cancel → Abort
```
---
### Phase 4: Setup TODO Tracking & Status File
```
Planning Chain → Create Session Dir → Initialize Tracking → Tracking State
```
**Session Structure**:
```
Session ID: CCWP-{goal-slug}-{date}
Session Dir: .workflow/.ccw-plan/{session_id}/
TodoWrite:
CCWP:{mode}: [1/n] /command1 [in_progress]
CCWP:{mode}: [2/n] /command2 [pending]
...
status.json:
{
"session_id": "CCWP-...",
"mode": "plan-verify",
"status": "running",
"command_chain": [...],
"quality_gate": "pending" // plan-verify mode only
}
```
**Output**:
- TODO: `-> CCWP:plan-verify: [1/2] /workflow:plan | ...`
- Status File: `.workflow/.ccw-plan/{session_id}/status.json`
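The `CCWP-{goal-slug}-{date}` convention above can be sketched as follows; the exact slug rules (lowercasing, hyphenation, length cap) are assumptions:

```javascript
// Hypothetical session-ID builder for the CCWP-{goal-slug}-{date} pattern;
// slug normalization rules are assumptions, not the actual implementation.
function planSessionId(goal, date = new Date()) {
  const slug = goal
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')   // non-alphanumeric runs become hyphens
    .replace(/^-+|-+$/g, '')       // trim leading/trailing hyphens
    .slice(0, 40);
  return `CCWP-${slug}-${date.toISOString().slice(0, 10)}`;
}
```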
---
### Phase 5: Execute Planning Chain
```
Start Command → Update status (running) → Execute via Skill → Result
```
#### For Plan-Verify Mode (Quality Gate)
```
Quality Gate → PASS → Mark completed → Next command
↓ FAIL (plan-verify mode)
Ask User → Refine: replan + re-verify
→ Override: continue anyway
→ Abort: stop planning
```
#### Error Handling Pattern
```
Command Error → Update status (failed) → Ask User:
- Retry → Re-execute (same index)
- Skip → Continue next command
- Abort → Stop execution
```
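The retry/skip/abort pattern above can be sketched as a loop over the command chain; `execute` and `askUser` are hypothetical hooks standing in for Skill execution and AskUserQuestion:

```javascript
// Sketch of the error-handling pattern above; execute and askUser are
// hypothetical hooks, not real tool APIs.
async function runWithRecovery(commands, execute, askUser) {
  for (let i = 0; i < commands.length; i++) {
    try {
      await execute(commands[i]);
    } catch (err) {
      const action = await askUser(err); // 'retry' | 'skip' | 'abort'
      if (action === 'retry') { i--; continue; } // re-execute same index
      if (action === 'abort') return { success: false, error: err.message };
      // 'skip': fall through to the next command
    }
  }
  return { success: true, completed: commands.length };
}
```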
---
## Planning Pipeline Examples
| Input | Mode | Pipeline | Use Case |
|-------|------|----------|----------|
| "Quick: should we use OAuth2?" | cli | ccw cli --mode analysis → recommendation | Immediate planning advice |
| "Plan user login system" | lite | lite-plan | Fast simple planning |
| "Implement OAuth2 auth" | multi-cli | multi-cli-plan → [plan-verify] | Multi-model collaborative planning |
| "Design notification system" | full | brainstorm → plan → [plan-verify] | Comprehensive brainstorm + planning |
| "Payment processing (prod)" | plan-verify | plan → **plan-verify** | Production critical (mandatory gate) |
| "头脑风暴: 用户通知系统重新设计" | brainstorm-with-file | brainstorm-with-file → (plan/issue options) | Multi-perspective ideation |
| "协作分析: 认证架构设计决策" | analyze-with-file | analyze-with-file → (plan/issue options) | Collaborative analysis |
| "Batch plan: handle 10 pending issues" | issue | /issue:discover → plan → queue → execute | Batch issue planning |
| "Plan and create issues" | rapid-to-issue | lite-plan → convert-to-plan → queue → execute | Quick plan → Issue workflow |
| "Update existing plan" | replan | replan | Plan refinement/adjustment |
**Legend**:
- `[ ]` = optional verification
- **bold** = mandatory quality gate
- **With-File modes** include built-in post-completion options to create plans/issues
---
## State Management
### Dual Tracking System
**1. TodoWrite-Based Tracking** (UI Display):
```
// Plan-verify mode (mandatory quality gate)
CCWP:plan-verify: [1/2] /workflow:plan [in_progress]
CCWP:plan-verify: [2/2] /workflow:plan-verify [pending]
// CLI mode (quick recommendations)
CCWP:cli: [1/1] ccw cli --mode analysis [in_progress]
// Issue mode (batch planning)
CCWP:issue: [1/4] /issue:discover [in_progress]
CCWP:issue: [2/4] /issue:plan [pending]
CCWP:issue: [3/4] /issue:queue [pending]
CCWP:issue: [4/4] /issue:execute [pending]
// Rapid-to-issue mode (planning → issue bridge)
CCWP:rapid-to-issue: [1/4] /workflow:lite-plan [in_progress]
CCWP:rapid-to-issue: [2/4] /issue:convert-to-plan [pending]
CCWP:rapid-to-issue: [3/4] /issue:queue [pending]
CCWP:rapid-to-issue: [4/4] /issue:execute [pending]
// Brainstorm-with-file mode (self-contained)
CCWP:brainstorm-with-file: [1/1] /workflow:brainstorm-with-file [in_progress]
// Analyze-with-file mode (self-contained)
CCWP:analyze-with-file: [1/1] /workflow:analyze-with-file [in_progress]
// Lite mode (fast simple planning)
CCWP:lite: [1/1] /workflow:lite-plan [in_progress]
// Multi-CLI mode (collaborative planning)
CCWP:multi-cli: [1/1] /workflow:multi-cli-plan [in_progress]
// Full mode (brainstorm + planning with optional verification)
CCWP:full: [1/2] /workflow:brainstorm [in_progress]
CCWP:full: [2/2] /workflow:plan [pending]
```
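The CCWP-prefixed entries above all follow one shape, so a small builder suffices; the payload fields here are an assumption about the TodoWrite schema, not its definition:

```javascript
// Illustrative builder for the CCWP-prefixed TodoWrite entries above;
// the payload shape is assumed, not the actual TodoWrite schema.
function buildPlanTodos(mode, commands) {
  return commands.map((cmd, i) => ({
    content: `CCWP:${mode}: [${i + 1}/${commands.length}] ${cmd}`,
    status: i === 0 ? 'in_progress' : 'pending',
    activeForm: `Executing ${cmd}`,
  }));
}
```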
**2. Status.json Tracking**: Persistent state for planning monitoring.
**Location**: `.workflow/.ccw-plan/{session_id}/status.json`
**Structure**:
```json
{
"session_id": "CCWP-oauth-auth-2025-02-02",
"mode": "plan-verify",
"status": "running|completed|failed",
"created_at": "2025-02-02T10:00:00Z",
"updated_at": "2025-02-02T10:05:00Z",
"analysis": {
"goal": "Implement OAuth2 authentication",
"complexity": "high",
"clarity_score": 2,
"criticality": "high"
},
"command_chain": [
{ "index": 0, "command": "/workflow:plan", "mandatory": false, "status": "completed" },
{ "index": 1, "command": "/workflow:plan-verify", "mandatory": true, "status": "running" }
],
"current_index": 1,
"quality_gate": "pending|PASS|FAIL"
}
```
**Status Values**:
- `running`: Planning in progress
- `completed`: Planning finished successfully
- `failed`: Planning aborted or quality gate failed
**Quality Gate Values** (plan-verify mode only):
- `pending`: Verification not started
- `PASS`: Plan meets quality standards
- `FAIL`: Plan needs refinement
**Mode-Specific Fields**:
- **plan-verify**: `quality_gate` field (pending|PASS|FAIL)
- **cli**: No command_chain, stores CLI recommendations and user decision
- **issue**: includes issue discovery results and queue configuration
- **rapid-to-issue**: includes plan output and conversion to issue
- **with-file modes**: stores session artifacts and post-completion options
- **other modes**: basic command_chain tracking
---
## Extended Planning Modes
### CLI-Assisted Planning (cli mode)
```
Quick Input → ccw cli --mode analysis --rule planning-* → Recommendations → User Decision:
- ✓ Accept → Create lite-plan from recommendations
- ↗ Escalate → Switch to multi-cli or full mode
- ✗ Done → Stop (recommendation only)
```
**Use Cases**:
- Quick architecture decision questions
- Planning approach recommendations
- Pattern/library selection advice
**CLI Rules** (auto-selected based on context):
- `planning-plan-architecture-design` - Architecture decisions
- `planning-breakdown-task-steps` - Task decomposition
- `planning-design-component-spec` - Component specifications
---
### With-File Planning Workflows
**With-File workflows** provide documented exploration with multi-CLI collaboration, generating comprehensive session artifacts.
| Mode | Purpose | Key Features | Output Folder |
|------|---------|--------------|---------------|
| **brainstorm-with-file** | Multi-perspective ideation | Gemini/Codex/Claude perspectives, diverge-converge | `.workflow/.brainstorm/` |
| **analyze-with-file** | Collaborative architecture analysis | Multi-round Q&A, CLI exploration, documented discussions | `.workflow/.analysis/` |
**Detection Keywords**:
- **brainstorm-with-file**: 头脑风暴, 创意, 发散思维, multi-perspective, ideation
- **analyze-with-file**: 协作分析, 深度理解, collaborative analysis, explore concept
**Characteristics**:
1. **Self-Contained**: Each workflow handles its own iteration loop
2. **Documented Process**: Creates evolving documents (brainstorm.md, discussion.md)
3. **Multi-CLI**: Uses Gemini/Codex/Claude for different perspectives
4. **Built-in Post-Completion**: Offers follow-up options (create plan, create issue, deep dive)
---
### Issue Workflow Integration
| Mode | Purpose | Command Chain | Typical Use |
|------|---------|---------------|-------------|
| **issue** | Batch issue planning | discover → plan → queue → execute | Multiple issues in codebase |
| **rapid-to-issue** | Quick plan → Issue workflow | lite-plan → convert-to-plan → queue → execute | Fast iteration → structured execution |
**Issue Workflow Bridge**:
```
lite-plan (in-memory) → /issue:convert-to-plan → Creates issue JSON
/issue:queue → Form execution queue
/issue:execute → DAG-based parallel execution
```
**When to use Issue Workflow**:
- Need structured multi-stage execution (queue-based)
- Want parallel DAG execution
- Multiple related changes as individual commits
- Converting brainstorm/plan output to executable tasks
---
## Key Design Principles
1. **Planning-Focused** - Pure planning coordination, no execution
2. **Mode-Driven** - 10 planning modes for different needs (lite/multi-cli/full/plan-verify/replan + cli/issue/rapid-to-issue/brainstorm-with-file/analyze-with-file)
3. **CLI Integration** - Quick analysis for immediate recommendations
4. **With-File Support** - Multi-CLI collaboration with documented artifacts
5. **Issue Workflow Bridge** - Seamless transition from planning to structured execution
6. **Quality Gates** - Mandatory verification for production features
7. **Flexible Verification** - Optional for exploration, mandatory for critical features
8. **Progressive Clarification** - Low clarity triggers requirement questions
9. **TODO Tracking** - Use CCWP prefix to isolate planning todos
10. **Handoff Ready** - Generates artifacts ready for execution phase
---
## Usage
```bash
# Auto-select mode (keyword-based detection)
/ccw-plan "Add user authentication"
# Standard planning modes
/ccw-plan --mode lite "Add logout endpoint"
/ccw-plan --mode multi-cli "Implement OAuth2"
/ccw-plan --mode full "Design notification system"
/ccw-plan --mode plan-verify "Payment processing (production)"
/ccw-plan --mode replan --session WFS-auth-2025-01-28
# CLI-assisted planning (quick recommendations)
/ccw-plan --mode cli "Quick: should we use OAuth2 or JWT?"
/ccw-plan --mode cli "Which state management pattern for React app?"
# With-File workflows (multi-CLI collaboration)
/ccw-plan --mode brainstorm-with-file "头脑风暴: 用户通知系统重新设计"
/ccw-plan --mode analyze-with-file "协作分析: 认证架构的设计决策"
# Issue workflow integration
/ccw-plan --mode issue "Batch plan: handle all pending security issues"
/ccw-plan --mode rapid-to-issue "Plan user profile feature and create issue"
# Auto mode (skip confirmations)
/ccw-plan --yes "Quick feature: user profile endpoint"
```
---
## Mode Selection Decision Tree
```
User calls: /ccw-plan "task description"
├─ Keywords: "quick", "fast", "recommendation"
│ └─ Mode: CLI (quick analysis → recommendations)
├─ Keywords: "issue", "batch", "queue"
│ └─ Mode: Issue (batch planning → execution queue)
├─ Keywords: "plan.*issue", "rapid.*issue"
│ └─ Mode: Rapid-to-Issue (lite-plan → issue bridge)
├─ Keywords: "头脑风暴", "brainstorm", "ideation"
│ └─ Mode: Brainstorm-with-file (multi-CLI ideation)
├─ Keywords: "协作分析", "analyze.*document"
│ └─ Mode: Analyze-with-file (collaborative analysis)
├─ Keywords: "production", "critical", "payment"
│ └─ Mode: Plan-Verify (mandatory quality gate)
├─ Keywords: "adjust", "modify", "change plan"
│ └─ Mode: Replan (refine existing plan)
├─ Keywords: "uncertain", "explore"
│ └─ Mode: Full (brainstorm → plan → [verify])
├─ Keywords: "complex", "multiple module"
│ └─ Mode: Multi-CLI (collaborative planning)
└─ Default → Lite (fast simple planning)
```


@@ -0,0 +1,387 @@
---
name: ccw-test
description: Test coordinator - analyze testing needs, select test strategy, execute test workflow in main process
argument-hint: "[--mode gen|fix|verify|tdd] [--yes|-y] \"test description\""
allowed-tools: Skill(*), TodoWrite(*), AskUserQuestion(*), Read(*), Bash(*)
---
# CCW-Test Command - Test Coordinator
Test orchestrator: testing needs analysis → strategy selection → test execution.
## Core Concept: Test Units
**Definition**: Test commands grouped into logical units based on testing objectives.
**Test Units**:
| Unit Type | Pattern | Example |
|-----------|---------|---------|
| **Generation Only** | test-gen (no execution) | test-fix-gen |
| **Test + Fix Cycle** | test-gen → test-execute-fix | test-fix-gen → test-cycle-execute |
| **Verification Only** | existing-tests → execute | execute-tests |
| **TDD Cycle** | tdd-plan → tdd-execute → verify | Red-Green-Refactor |
**Atomic Rules**:
1. Gen mode: Generate tests only (no execution)
2. Fix mode: Generate + auto-iteration until ≥95% pass
3. Verify mode: Execute existing tests + report
4. TDD mode: Full Red-Green-Refactor cycle compliance
## Execution Model
**Synchronous (Main Process)**: Test commands execute via Skill, blocking until complete.
```
User Input → Analyze Testing Needs → Select Strategy → [Confirm] → Execute Tests
Skill (blocking)
Update TodoWrite
Generate Tests/Results
```
## 5-Phase Workflow
### Phase 1: Analyze Testing Needs
**Input** → Extract (description, target_module, existing_tests) → Assess (testing_goal, framework, coverage_target) → **Analysis**
| Field | Values |
|-------|--------|
| testing_goal | generate \| fix \| verify \| tdd |
| framework | jest \| vitest \| pytest \| ... |
| coverage_target | 0-100 (default: 80) |
| existing_tests | true \| false |
#### Mode Detection (Priority Order)
```
Input Keywords → Mode
─────────────────────────────────────────────────────────
generate|create|write test|need test → gen
fix|repair|failing|broken → fix
verify|validate|check|run test → verify
tdd|test-driven|test first → tdd
(default) → fix
```
**Output**: `TestingGoal: [goal] | Mode: [mode] | Target: [module] | Framework: [framework]`
---
### Phase 1.5: Testing Clarification (if needed)
```
Analysis → Check testing_goal known?
YES → Check target_module set?
YES → Continue to Phase 2
NO → Ask Questions → Update Analysis
```
**Questions Asked**: Testing Goal, Target Module/Files, Coverage Requirements, Test Framework
---
### Phase 2: Select Test Strategy & Build Command Chain
```
Analysis → Detect Mode (keywords) → Build Command Chain → Test Workflow
```
#### Command Chain Mapping
| Mode | Command Chain | Behavior |
|------|---------------|----------|
| **gen** | test-fix-gen | Generate only, no execution |
| **fix** | test-fix-gen → test-cycle-execute (iterate) | Auto-iteration until ≥95% pass or max iterations |
| **verify** | execute-existing-tests → coverage-report | Execute + report only |
| **tdd** | tdd-plan → execute → tdd-verify | Red-Green-Refactor cycle compliance |
**Note**: `(iterate)` = auto-iteration until pass_rate ≥ 95% or max_iterations reached
**Output**: `Mode: [mode] | Strategy: [strategy] | Commands: [1. /cmd1 2. /cmd2]`
---
### Phase 3: User Confirmation
```
Test Chain → Show Strategy → Ask User → User Decision:
- ✓ Confirm → Continue to Phase 4
- ⚙ Change Mode → Select Different Mode (back to Phase 2)
- ✗ Cancel → Abort
```
---
### Phase 4: Setup TODO Tracking & Status File
```
Test Chain → Create Session Dir → Initialize Tracking → Tracking State
```
**Session Structure**:
```
Session ID: CCWT-{target-module-slug}-{date}
Session Dir: .workflow/.ccw-test/{session_id}/
TodoWrite:
CCWT:{mode}: [1/n] /command1 [in_progress]
CCWT:{mode}: [2/n] /command2 [pending]
...
status.json:
{
"session_id": "CCWT-...",
"mode": "gen|fix|verify|tdd",
"status": "running",
"testing": { description, target_module, framework, coverage_target },
"command_chain": [...],
"test_metrics": { total_tests, passed, failed, pass_rate, iteration_count, coverage }
}
```
**Output**:
- TODO: `-> CCWT:fix: [1/2] /workflow:test-fix-gen | CCWT:fix: [2/2] /workflow:test-cycle-execute`
- Status File: `.workflow/.ccw-test/{session_id}/status.json`
---
### Phase 5: Execute Test Chain
#### For All Modes (Sequential Execution)
```
Start Command → Update status (running) → Execute via Skill → Result
Update test_metrics → Next Command
Error? → YES → Ask Action (Retry/Skip/Abort)
→ NO → Continue
```
#### For Fix Mode (Auto-Iteration)
```
test-fix-gen completes → test-cycle-execute begins
Check pass_rate ≥ 95%?
↓ ↓
YES → Complete NO → Check iteration < max?
↓ ↓
YES → Iteration NO → Complete
| (analyze failures
| generate fix
| re-execute tests)
|
└→ Loop back to pass_rate check
```
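The loop above can be sketched directly: iterate until `pass_rate` reaches 95% or the iteration cap is hit. `runTests` and `applyFixes` are hypothetical hooks, not real commands:

```javascript
// Illustrative fix-mode loop: iterate until pass_rate >= 95% or the
// iteration cap is reached. runTests/applyFixes are hypothetical hooks.
async function testFixCycle(runTests, applyFixes, maxIterations = 3) {
  let metrics = await runTests();
  let iteration = 0;
  while (metrics.pass_rate < 95 && iteration < maxIterations) {
    iteration++;
    await applyFixes(metrics.failures); // analyze failures, generate fix
    metrics = await runTests();         // re-execute tests
  }
  return { ...metrics, iteration_count: iteration };
}
```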
#### Error Handling Pattern
```
Command Error → Update status (failed) → Ask User:
- Retry → Re-execute (same index)
- Skip → Continue next command
- Abort → Stop execution
```
#### Test Metrics Update
```
After Each Execution → Collect test_metrics:
- total_tests: number
- passed/failed: count
- pass_rate: percentage
- iteration_count: increment (fix mode)
- coverage: line/branch/function
Update status.json → Update TODO with iteration info (if fix mode)
```
---
## Execution Flow Summary
```
User Input
|
Phase 1: Analyze Testing Needs
|-- Extract: description, testing_goal, target_module, existing_tests
+-- If unclear -> Phase 1.5: Clarify Testing Needs
|
Phase 2: Select Test Strategy & Build Chain
|-- Detect mode: gen | fix | verify | tdd
|-- Build command chain based on mode
+-- Configure iteration limits (fix mode)
|
Phase 3: User Confirmation (optional)
|-- Show test strategy
+-- Allow mode change
|
Phase 4: Setup TODO Tracking & Status File
|-- Create todos with CCWT prefix
+-- Initialize .workflow/.ccw-test/{session_id}/status.json
|
Phase 5: Execute Test Chain
|-- For each command:
| |-- Update status.json (current=running)
| |-- Execute via Skill
| |-- Test-fix cycle: iterate until ≥95% pass or max iterations
| |-- Update test_metrics in status.json
| +-- Update TODO status
+-- Mark status.json as completed
```
---
## Test Pipeline Examples
| Input | Mode | Pipeline | Iteration |
|-------|------|----------|-----------|
| "Generate tests for auth module" | gen | test-fix-gen | No execution |
| "Fix failing authentication tests" | fix | test-fix-gen → test-cycle-execute (iterate) | Max 3 iterations |
| "Run existing test suite" | verify | execute-existing-tests → coverage-report | One-time |
| "Implement user profile with TDD" | tdd | tdd-plan → execute → tdd-verify | Red-Green-Refactor |
**Legend**: `(iterate)` = auto-iteration until ≥95% pass rate
---
## State Management
### Dual Tracking System
**1. TodoWrite-Based Tracking** (UI Display):
```
// Initial state (fix mode)
CCWT:fix: [1/2] /workflow:test-fix-gen [in_progress]
CCWT:fix: [2/2] /workflow:test-cycle-execute [pending]
// During iteration (fix mode, iteration 2/3)
CCWT:fix: [1/2] /workflow:test-fix-gen [completed]
CCWT:fix: [2/2] /workflow:test-cycle-execute [in_progress] (iteration 2/3, pass rate: 78%)
// Gen mode (no execution)
CCWT:gen: [1/1] /workflow:test-fix-gen [in_progress]
// Verify mode (one-time)
CCWT:verify: [1/2] execute-existing-tests [in_progress]
CCWT:verify: [2/2] generate-coverage-report [pending]
// TDD mode (Red-Green-Refactor)
CCWT:tdd: [1/3] /workflow:tdd-plan [in_progress]
CCWT:tdd: [2/3] /workflow:execute [pending]
CCWT:tdd: [3/3] /workflow:tdd-verify [pending]
```
**2. Status.json Tracking**: Persistent state for test monitoring.
**Location**: `.workflow/.ccw-test/{session_id}/status.json`
**Structure**:
```json
{
"session_id": "CCWT-auth-module-2025-02-02",
"mode": "fix",
"status": "running|completed|failed",
"created_at": "2025-02-02T10:00:00Z",
"updated_at": "2025-02-02T10:05:00Z",
"testing": {
"description": "Fix failing authentication tests",
"target_module": "src/auth/**/*.ts",
"framework": "jest",
"coverage_target": 80
},
"command_chain": [
{ "index": 0, "command": "/workflow:test-fix-gen", "unit": "sequential", "status": "completed" },
{ "index": 1, "command": "/workflow:test-cycle-execute", "unit": "test-fix-cycle", "max_iterations": 3, "status": "in_progress" }
],
"current_index": 1,
"test_metrics": {
"total_tests": 42,
"passed": 38,
"failed": 4,
"pass_rate": 90.5,
"iteration_count": 2,
"coverage": {
"line": 82.3,
"branch": 75.6,
"function": 88.1
}
}
}
```
**Status Values**:
- `running`: Test workflow in progress
- `completed`: Tests passing (≥95%) or generation complete
- `failed`: Test workflow aborted
**Test Metrics** (updated during execution):
- `total_tests`: Number of tests executed
- `pass_rate`: Percentage of passing tests (target: ≥95%)
- `iteration_count`: Number of test-fix iterations (fix mode)
- `coverage`: Line/branch/function coverage percentages
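Deriving `pass_rate` from the raw counts above (one decimal place, as in the example JSON) together with the ≥95% completion check can be sketched as:

```javascript
// Sketch: derive pass_rate (one decimal, matching the example above)
// and the >= 95% completion threshold from raw counts.
function summarizeMetrics({ total_tests, passed }) {
  const pass_rate = total_tests
    ? Math.round((passed / total_tests) * 1000) / 10
    : 0;
  return { pass_rate, meetsThreshold: pass_rate >= 95 };
}
```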
---
## Key Design Principles
1. **Testing-Focused** - Pure test coordination, no implementation
2. **Mode-Driven** - 4 test strategies for different needs
3. **Auto-Iteration** - Fix mode iterates until ≥95% pass rate
4. **Metrics Tracking** - Real-time test metrics in status.json
5. **Coverage-Driven** - Coverage targets guide test generation
6. **TODO Tracking** - Use CCWT prefix to isolate test todos
7. **TDD Compliance** - TDD mode enforces Red-Green-Refactor cycle
---
## Usage
```bash
# Auto-select mode
/ccw-test "Test user authentication module"
# Explicit mode selection
/ccw-test --mode gen "Generate tests for payment module"
/ccw-test --mode fix "Fix failing authentication tests"
/ccw-test --mode verify "Validate current test suite"
/ccw-test --mode tdd "Implement user profile with TDD"
# Custom configuration
/ccw-test --mode fix --max-iterations 5 --pass-threshold 98 "Fix all tests"
/ccw-test --target "src/auth/**/*.ts" "Test authentication module"
# Auto mode (skip confirmations)
/ccw-test --yes "Quick test validation"
```
---
## Mode Selection Decision Tree
```
User calls: /ccw-test "test description"
├─ Keywords: "generate", "create", "write test"
│ └─ Mode: Gen (generate only, no execution)
├─ Keywords: "fix", "repair", "failing"
│ └─ Mode: Fix (auto-iterate until ≥95% pass)
├─ Keywords: "verify", "validate", "run test"
│ └─ Mode: Verify (execute existing tests)
├─ Keywords: "tdd", "test-driven", "test first"
│ └─ Mode: TDD (Red-Green-Refactor cycle)
└─ Default → Fix (most common: fix failing tests)
```


@@ -2,7 +2,7 @@
name: ccw
description: Main workflow orchestrator - analyze intent, select workflow, execute command chain in main process
argument-hint: "\"task description\""
allowed-tools: Skill(*), TodoWrite(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*)
---
# CCW Command - Main Workflow Orchestrator
@@ -33,12 +33,12 @@ Main process orchestrator: intent analysis → workflow selection → command ch
## Execution Model
**Synchronous (Main Process)**: Commands execute via Skill in main process, blocking until complete.
```
User Input → Analyze Intent → Select Workflow → [Confirm] → Execute Chain
Skill (blocking)
Update TodoWrite
@@ -327,49 +327,107 @@ async function getUserConfirmation(chain) {
---
### Phase 4: Setup TODO Tracking
### Phase 4: Setup TODO Tracking & Status File
```javascript
function setupTodoTracking(chain, workflow, analysis) {
const sessionId = `ccw-${Date.now()}`;
const stateDir = `.workflow/.ccw/${sessionId}`;
Bash(`mkdir -p "${stateDir}"`);
const todos = chain.map((step, i) => ({
content: `CCW:${workflow}: [${i + 1}/${chain.length}] ${step.cmd}`,
status: i === 0 ? 'in_progress' : 'pending',
activeForm: `Executing ${step.cmd}`
}));
TodoWrite({ todos });
// Initialize status.json for hook tracking
const state = {
session_id: sessionId,
workflow: workflow,
status: 'running',
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
analysis: analysis,
command_chain: chain.map((step, idx) => ({
index: idx,
command: step.cmd,
status: idx === 0 ? 'running' : 'pending'
})),
current_index: 0
};
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
return { sessionId, stateDir, state };
}
```
**Output**:
- TODO: `-> CCW:rapid: [1/3] /workflow:lite-plan | CCW:rapid: [2/3] /workflow:lite-execute | ...`
- Status File: `.workflow/.ccw/{session_id}/status.json`
---
### Phase 5: Execute Command Chain
```javascript
async function executeCommandChain(chain, workflow, trackingState) {
let previousResult = null;
const { sessionId, stateDir, state } = trackingState;
for (let i = 0; i < chain.length; i++) {
try {
// Update status: mark current as running
state.command_chain[i].status = 'running';
state.current_index = i;
state.updated_at = new Date().toISOString();
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
const fullCommand = assembleCommand(chain[i], previousResult);
const result = await Skill({ skill: fullCommand });
previousResult = { ...result, success: true };
// Update status: mark current as completed, next as running
state.command_chain[i].status = 'completed';
if (i + 1 < chain.length) {
state.command_chain[i + 1].status = 'running';
}
state.updated_at = new Date().toISOString();
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
updateTodoStatus(i, chain.length, workflow, 'completed');
} catch (error) {
// Update status on error
state.command_chain[i].status = 'failed';
state.status = 'error';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
const action = await handleError(chain[i], error, i);
if (action === 'retry') {
state.command_chain[i].status = 'pending';
state.status = 'running';
i--; // Retry
} else if (action === 'abort') {
state.status = 'failed';
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
return { success: false, error: error.message };
}
// 'skip' - continue
state.status = 'running';
}
}
// Mark workflow as completed
state.status = 'completed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
return { success: true, completed: chain.length, sessionId };
}
// Assemble full command with session/plan parameters
@@ -434,16 +492,19 @@ Phase 3: User Confirmation (optional)
|-- Show pipeline visualization
+-- Allow adjustment
|
Phase 4: Setup TODO Tracking & Status File
|-- Create todos with CCW prefix
+-- Initialize .workflow/.ccw/{session_id}/status.json
|
Phase 5: Execute Command Chain
|-- For each command:
| |-- Update status.json (current=running)
| |-- Assemble full command
| |-- Execute via Skill
| |-- Update status.json (current=completed, next=running)
| |-- Update TODO status
| +-- Handle errors (retry/skip/abort)
+-- Return workflow result
+-- Mark status.json as completed
```
---
@@ -469,7 +530,7 @@ Phase 5: Execute Command Chain
## Key Design Principles
1. **Main Process Execution** - Use Skill in main process, no external CLI
2. **Intent-Driven** - Auto-select workflow based on task intent
3. **Port-Based Chaining** - Build command chain using port matching
4. **Minimum Execution Units** - Commands grouped into atomic units, never split (e.g., lite-plan → lite-execute)
@@ -482,7 +543,9 @@ Phase 5: Execute Command Chain
## State Management
### Dual Tracking System
**1. TodoWrite-Based Tracking** (UI Display): All execution state tracked via TodoWrite with `CCW:` prefix.
```javascript
// Initial state
@@ -500,7 +563,57 @@ todos = [
];
```
**2. Status.json Tracking**: Persistent state file for workflow monitoring.
**Location**: `.workflow/.ccw/{session_id}/status.json`
**Structure**:
```json
{
"session_id": "ccw-1706123456789",
"workflow": "rapid",
"status": "running|completed|failed|error",
"created_at": "2025-02-01T10:30:00Z",
"updated_at": "2025-02-01T10:35:00Z",
"analysis": {
"goal": "Add user authentication",
"scope": ["auth"],
"constraints": [],
"task_type": "feature",
"complexity": "medium"
},
"command_chain": [
{
"index": 0,
"command": "/workflow:lite-plan",
"status": "completed"
},
{
"index": 1,
"command": "/workflow:lite-execute",
"status": "running"
},
{
"index": 2,
"command": "/workflow:test-cycle-execute",
"status": "pending"
}
],
"current_index": 1
}
```
**Status Values**:
- `running`: Workflow executing commands
- `completed`: All commands finished
- `failed`: User aborted or unrecoverable error
- `error`: Command execution failed (during error handling)
**Command Status Values**:
- `pending`: Not started
- `running`: Currently executing
- `completed`: Successfully finished
- `failed`: Execution failed
---
@@ -527,41 +640,27 @@ todos = [
---
## Type Comparison: ccw vs ccw-coordinator
| Aspect | ccw | ccw-coordinator |
|--------|-----|-----------------|
| **Type** | Main process (SlashCommand) | External CLI (ccw cli + hook callbacks) |
| **Execution** | Synchronous blocking | Async background with hook completion |
| **Workflow** | Auto intent-based selection | Manual chain building |
| **Intent Analysis** | 5-phase clarity check | 3-phase requirement analysis |
| **State** | TodoWrite only (in-memory) | state.json + checkpoint/resume |
| **Error Handling** | Retry/skip/abort (interactive) | Retry/skip/abort (via AskUser) |
| **Use Case** | Auto workflow for any task | Manual orchestration, large chains |
---
## Usage
```bash
# Auto-select workflow
-ccw "Add user authentication"
+/ccw "Add user authentication"
# Complex requirement (triggers clarification)
-ccw "Optimize system performance"
+/ccw "Optimize system performance"
# Bug fix
-ccw "Fix memory leak in WebSocket handler"
+/ccw "Fix memory leak in WebSocket handler"
# TDD development
-ccw "Implement user registration with TDD"
+/ccw "Implement user registration with TDD"
# Exploratory task
-ccw "Uncertain about architecture for real-time notifications"
+/ccw "Uncertain about architecture for real-time notifications"
# With-File workflows (documented exploration with multi-CLI collaboration)
-ccw "头脑风暴: 用户通知系统重新设计"  # → brainstorm-with-file
-ccw "从头脑风暴 BS-通知系统-2025-01-28 创建 issue"  # → brainstorm-to-issue (bridge)
-ccw "深度调试: 系统随机崩溃问题"  # → debug-with-file
-ccw "协作分析: 理解现有认证架构的设计决策"  # → analyze-with-file
+/ccw "头脑风暴: 用户通知系统重新设计"  # → brainstorm-with-file
+/ccw "从头脑风暴 BS-通知系统-2025-01-28 创建 issue"  # → brainstorm-to-issue (bridge)
+/ccw "深度调试: 系统随机崩溃问题"  # → debug-with-file
+/ccw "协作分析: 理解现有认证架构的设计决策"  # → analyze-with-file
```


@@ -0,0 +1,513 @@
---
name: codex-coordinator
description: Command orchestration tool for Codex - analyze requirements, recommend command chain, execute sequentially with state persistence
argument-hint: "TASK=\"<task description>\" [--depth=standard|deep] [--auto-confirm] [--verbose]"
---
# Codex Coordinator Command
Interactive orchestration tool for Codex commands: analyze task → discover commands → recommend chain → execute sequentially → track state.
**Execution Model**: Intelligent agent-driven workflow. Claude analyzes each phase and orchestrates command execution.
## Core Concept: Minimum Execution Units
### What is a Minimum Execution Unit?
**Definition**: A set of commands that must execute together as an atomic group to achieve a meaningful workflow milestone. Splitting these commands breaks the logical flow and creates incomplete states.
**Why This Matters**:
- **Prevents Incomplete States**: Avoid stopping after task generation without execution
- **User Experience**: User gets complete results, not intermediate artifacts requiring manual follow-up
- **Workflow Integrity**: Maintains logical coherence of multi-step operations
### Codex Minimum Execution Units
**Planning + Execution Units**:
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Quick Implementation** | lite-plan-a → execute | Lightweight plan and immediate execution | Working code |
| **Bug Fix** | lite-fix → execute | Quick bug diagnosis and fix execution | Fixed code |
| **Issue Workflow** | issue-discover → issue-plan → issue-queue → issue-execute | Complete issue lifecycle | Completed issues |
| **Discovery & Analysis** | issue-discover → issue-discover-by-prompt | Issue discovery with multiple perspectives | Generated issues |
| **Brainstorm to Execution** | brainstorm-with-file → execute | Brainstorm ideas then implement | Working code |
**With-File Workflows** (documented units):
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Brainstorm With File** | brainstorm-with-file | Multi-perspective ideation with documentation | brainstorm.md |
| **Debug With File** | debug-with-file | Hypothesis-driven debugging with documentation | understanding.md |
| **Analyze With File** | analyze-with-file | Collaborative analysis with documentation | discussion.md |
| **Clean & Analyze** | clean → analyze-with-file | Cleanup then analyze | Cleaned code + analysis |
### Command-to-Unit Mapping
| Command | Precedes | Atomic Units |
|---------|----------|--------------|
| lite-plan-a | execute, brainstorm-with-file | Quick Implementation |
| lite-fix | execute | Bug Fix |
| issue-discover | issue-plan | Issue Workflow |
| issue-plan | issue-queue | Issue Workflow |
| issue-queue | issue-execute | Issue Workflow |
| brainstorm-with-file | execute, issue-execute | Brainstorm to Execution |
| debug-with-file | execute | Debug With File |
| analyze-with-file | (standalone) | Analyze With File |
| clean | analyze-with-file, execute | Clean & Analyze |
| quick-plan-with-file | execute | Quick Planning with File |
| merge-plans-with-file | execute | Merge Multiple Plans |
| unified-execute-with-file | (terminal) | Execute with File Tracking |
### Atomic Group Rules
1. **Never Split Units**: Coordinator must recommend complete units, not partial chains
2. **Multi-Unit Participation**: Some commands can participate in multiple units
3. **User Override**: User can explicitly request partial execution (advanced mode)
4. **Visualization**: Pipeline view shows unit boundaries with 【 】markers
5. **Validation**: Before execution, verify all unit commands are included
**Example Pipeline with Units**:
```
Requirement → 【lite-plan-a → execute】→ Code → 【issue-discover → issue-plan → issue-queue → issue-execute】→ Done
              └──── Quick Implementation ────┘       └────────────── Issue Workflow ──────────────┘
```
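Rule 5 (validation) can be enforced mechanically before execution. A minimal sketch, assuming a lookup table keyed by unit name; the table below is a small illustrative subset of the mapping above, and treating a unit's first command as its entry point is a simplifying assumption:

```javascript
// Illustrative subset of the unit table above (not the full mapping)
const UNITS = {
  'quick-implementation': ['lite-plan-a', 'execute'],
  'bug-fix': ['lite-fix', 'execute'],
  'issue-workflow': ['issue-discover', 'issue-plan', 'issue-queue', 'issue-execute']
};

// A unit is "entered" when its first command appears in the chain;
// it is "split" if any of its other members are missing.
function findSplitUnits(chain) {
  return Object.entries(UNITS)
    .filter(([, cmds]) => chain.includes(cmds[0]))
    .filter(([, cmds]) => !cmds.every(c => chain.includes(c)))
    .map(([name]) => name);
}

findSplitUnits(['lite-plan-a']);            // → ['quick-implementation']
findSplitUnits(['lite-plan-a', 'execute']); // → []
```

A non-empty result means the coordinator recommended a partial chain and should extend it before asking for confirmation.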
## 3-Phase Workflow
### Phase 1: Analyze Requirements
Parse task to extract: goal, scope, complexity, and task type.
```javascript
function analyzeRequirements(taskDescription) {
return {
goal: extractMainGoal(taskDescription), // e.g., "Fix login bug"
scope: extractScope(taskDescription), // e.g., ["auth", "login"]
complexity: determineComplexity(taskDescription), // 'simple' | 'medium' | 'complex'
task_type: detectTaskType(taskDescription) // See task type patterns below
};
}
// Task Type Detection Patterns
function detectTaskType(text) {
// Priority order (first match wins)
if (/fix|bug|error|crash|fail|debug|diagnose/.test(text)) return 'bugfix';
if (/生成|generate|discover|找出|issue|问题/.test(text)) return 'discovery';
if (/plan|规划|设计|design|analyze|分析/.test(text)) return 'analysis';
if (/清理|cleanup|clean|refactor|重构/.test(text)) return 'cleanup';
if (/头脑|brainstorm|创意|ideation/.test(text)) return 'brainstorm';
if (/合并|merge|combine|batch/.test(text)) return 'batch-planning';
return 'feature'; // Default
}
// Complexity Assessment
function determineComplexity(text) {
let score = 0;
if (/refactor|重构|migrate|迁移|architect|架构|system|系统/.test(text)) score += 2;
if (/multiple|多个|across|跨|all|所有|entire|整个/.test(text)) score += 2;
if (/integrate|集成|api|database|数据库/.test(text)) score += 1;
if (/security|安全|performance|性能|scale|扩展/.test(text)) score += 1;
return score >= 4 ? 'complex' : score >= 2 ? 'medium' : 'simple';
}
```
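Because the patterns are checked in priority order, a task that mentions several keyword families still routes deterministically. A reduced two-rule copy of `detectTaskType` (illustrative only; the full table is above) shows the effect:

```javascript
// Reduced copy: only the first (bugfix) and third (analysis) rules are kept.
function detectTaskType(text) {
  if (/fix|bug|error|crash|fail|debug|diagnose/.test(text)) return 'bugfix';
  if (/plan|design|analyze/.test(text)) return 'analysis';
  return 'feature';
}

// "Analyze and fix the crash" matches both rule sets,
// but the bugfix rule is tested first and wins:
detectTaskType('Analyze and fix the crash'); // → 'bugfix'
```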
**Display to user**:
```
Analysis Complete:
Goal: [extracted goal]
Scope: [identified areas]
Complexity: [level]
Task Type: [detected type]
```
### Phase 2: Discover Commands & Recommend Chain
Dynamic command chain assembly using task type and complexity matching.
#### Available Codex Commands (Discovery)
All commands from `~/.codex/prompts/`:
- **Planning**: @~/.codex/prompts/lite-plan-a.md, @~/.codex/prompts/lite-plan-b.md, @~/.codex/prompts/lite-plan-c.md, @~/.codex/prompts/quick-plan-with-file.md, @~/.codex/prompts/merge-plans-with-file.md
- **Execution**: @~/.codex/prompts/execute.md, @~/.codex/prompts/unified-execute-with-file.md
- **Bug Fixes**: @~/.codex/prompts/lite-fix.md, @~/.codex/prompts/debug-with-file.md
- **Discovery**: @~/.codex/prompts/issue-discover.md, @~/.codex/prompts/issue-discover-by-prompt.md, @~/.codex/prompts/issue-plan.md, @~/.codex/prompts/issue-queue.md, @~/.codex/prompts/issue-execute.md
- **Analysis**: @~/.codex/prompts/analyze-with-file.md
- **Brainstorming**: @~/.codex/prompts/brainstorm-with-file.md, @~/.codex/prompts/brainstorm-to-cycle.md
- **Cleanup**: @~/.codex/prompts/clean.md, @~/.codex/prompts/compact.md
#### Recommendation Algorithm
```javascript
async function recommendCommandChain(analysis) {
// Step 1: Determine the candidate flow and depth from the task type
const { flow, depth } = determinePortFlow(analysis.task_type, analysis.complexity);
// Step 2: Claude selects the command sequence based on command traits and task characteristics
const chain = selectChainByTaskType(analysis);
return chain;
}
// Port flow for each task type
function determinePortFlow(taskType, complexity) {
const flows = {
'bugfix': { flow: ['lite-fix', 'execute'], depth: complexity === 'complex' ? 'deep' : 'standard' },
'discovery': { flow: ['issue-discover', 'issue-plan', 'issue-queue', 'issue-execute'], depth: 'standard' },
'analysis': { flow: ['analyze-with-file'], depth: complexity === 'complex' ? 'deep' : 'standard' },
'cleanup': { flow: ['clean'], depth: 'standard' },
'brainstorm': { flow: ['brainstorm-with-file', 'execute'], depth: complexity === 'complex' ? 'deep' : 'standard' },
'batch-planning': { flow: ['merge-plans-with-file', 'execute'], depth: 'standard' },
'feature': { flow: complexity === 'complex' ? ['lite-plan-b'] : ['lite-plan-a', 'execute'], depth: complexity === 'complex' ? 'deep' : 'standard' }
};
return flows[taskType] || flows['feature'];
}
```
#### Display to User
```
Recommended Command Chain:
Pipeline view:
Requirement → @~/.codex/prompts/lite-plan-a.md → Plan → @~/.codex/prompts/execute.md → Code complete
Command list:
1. @~/.codex/prompts/lite-plan-a.md
2. @~/.codex/prompts/execute.md
Proceed? [Confirm / Show Details / Adjust / Cancel]
```
### Phase 2b: Get User Confirmation
Ask user for confirmation before proceeding with execution.
```javascript
async function getUserConfirmation(chain) {
const response = await AskUserQuestion({
questions: [{
question: 'Proceed with this command chain?',
header: 'Confirm Chain',
multiSelect: false,
options: [
{ label: 'Confirm and execute', description: 'Proceed with commands' },
{ label: 'Show details', description: 'View each command' },
{ label: 'Adjust chain', description: 'Remove or reorder' },
{ label: 'Cancel', description: 'Abort' }
]
}]
});
return response;
}
```
### Phase 3: Execute Sequential Command Chain
```javascript
async function executeCommandChain(chain, analysis) {
const sessionId = `codex-coord-${Date.now()}`;
const stateDir = `.workflow/.codex-coordinator/${sessionId}`;
// Create state directory
const state = {
session_id: sessionId,
status: 'running',
created_at: new Date().toISOString(),
analysis: analysis,
command_chain: chain.map((cmd, idx) => ({ ...cmd, index: idx, status: 'pending' })),
execution_results: [],
};
// Save initial state
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
for (let i = 0; i < chain.length; i++) {
const cmd = chain[i];
console.log(`[${i+1}/${chain.length}] Executing: @~/.codex/prompts/${cmd.name}.md`);
// Update status to running
state.command_chain[i].status = 'running';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
try {
// Build command with parameters using full path
let commandStr = `@~/.codex/prompts/${cmd.name}.md`;
// Add parameters based on previous results and task context
if (i > 0 && state.execution_results.length > 0) {
const lastResult = state.execution_results[state.execution_results.length - 1];
commandStr += ` --resume="${lastResult.session_id || lastResult.artifact}"`;
}
// For analysis-based commands, add depth parameter
if (analysis.complexity === 'complex' && (cmd.name.includes('analyze') || cmd.name.includes('plan'))) {
commandStr += ` --depth=deep`;
}
// Add task description for planning commands
if (cmd.type === 'planning' && i === 0) {
commandStr += ` TASK="${analysis.goal}"`;
}
// Execute the command (format: @~/.codex/prompts/<command-name>.md <parameters>)
// Note: this logs the invocation only; the actual implementation dispatches the
// command and tracks completion via hook callbacks
console.log(`Executing: ${commandStr}`);
// Save execution record
state.execution_results.push({
index: i,
command: cmd.name,
status: 'in-progress',
started_at: new Date().toISOString(),
session_id: null,
artifact: null
});
state.command_chain[i].status = 'completed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
console.log(`[${i+1}/${chain.length}] ✓ Completed: @~/.codex/prompts/${cmd.name}.md`);
} catch (error) {
state.command_chain[i].status = 'failed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
console.log(`❌ Command failed: ${error.message}`);
break;
}
}
state.status = 'completed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
console.log(`\n✅ Orchestration Complete: ${state.session_id}`);
return state;
}
```
## State File Structure
**Location**: `.workflow/.codex-coordinator/{session_id}/state.json`
```json
{
"session_id": "codex-coord-20250129-143025",
"status": "running|waiting|completed|failed",
"created_at": "2025-01-29T14:30:25Z",
"updated_at": "2025-01-29T14:35:45Z",
"analysis": {
"goal": "Fix login authentication bug",
"scope": ["auth", "login"],
"complexity": "medium",
"task_type": "bugfix"
},
"command_chain": [
{
"index": 0,
"name": "lite-fix",
"type": "bugfix",
"status": "completed"
},
{
"index": 1,
"name": "execute",
"type": "execution",
"status": "pending"
}
],
"execution_results": [
{
"index": 0,
"command": "lite-fix",
"status": "completed",
"started_at": "2025-01-29T14:30:25Z",
"session_id": "fix-login-2025-01-29",
"artifact": ".workflow/.lite-fix/fix-login-2025-01-29/fix-plan.json"
}
]
}
```
### Status Values
- `running`: Orchestrator actively executing
- `waiting`: Paused, waiting for external events
- `completed`: All commands finished successfully
- `failed`: Error occurred or user aborted
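The `--resume` chaining inside `executeCommandChain` can be isolated as a small helper. A sketch assuming the `execution_results` shape above; `resumeArg` is a hypothetical name, not part of the command:

```javascript
// Derive the --resume argument for the next command from the last result.
// Prefers session_id and falls back to the artifact path, as in Phase 3.
function resumeArg(state) {
  const last = state.execution_results[state.execution_results.length - 1];
  if (!last) return '';
  return ` --resume="${last.session_id || last.artifact}"`;
}
```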
## Task Type Routing (Pipeline Summary)
**Note**: 【 】 marks Minimum Execution Units - these commands must execute together.
| Task Type | Pipeline | Minimum Units |
|-----------|----------|---|
| **bugfix** | Bug report →【@~/.codex/prompts/lite-fix.md → @~/.codex/prompts/execute.md】→ Fixed code | Bug Fix |
| **discovery** | Requirement →【@~/.codex/prompts/issue-discover.md → @~/.codex/prompts/issue-plan.md → @~/.codex/prompts/issue-queue.md → @~/.codex/prompts/issue-execute.md】→ Completed issues | Issue Workflow |
| **analysis** | Requirement → @~/.codex/prompts/analyze-with-file.md → Analysis report | Analyze With File |
| **cleanup** | Codebase → @~/.codex/prompts/clean.md → Cleaned code | Cleanup |
| **brainstorm** | Topic →【@~/.codex/prompts/brainstorm-with-file.md → @~/.codex/prompts/execute.md】→ Implemented code | Brainstorm to Execution |
| **batch-planning** | Requirement set →【@~/.codex/prompts/merge-plans-with-file.md → @~/.codex/prompts/execute.md】→ Code complete | Merge Multiple Plans |
| **feature** (simple) | Requirement →【@~/.codex/prompts/lite-plan-a.md → @~/.codex/prompts/execute.md】→ Code | Quick Implementation |
| **feature** (complex) | Requirement → @~/.codex/prompts/lite-plan-b.md → Detailed plan → @~/.codex/prompts/execute.md → Code | Complex Planning |
## Available Commands Reference
### Planning Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **lite-plan-a** | Lightweight merged-mode planning | `@~/.codex/prompts/lite-plan-a.md TASK="..."` | plan.json |
| **lite-plan-b** | Multi-angle exploration planning | `@~/.codex/prompts/lite-plan-b.md TASK="..."` | plan.json |
| **lite-plan-c** | Parallel angle planning | `@~/.codex/prompts/lite-plan-c.md TASK="..."` | plan.json |
| **quick-plan-with-file** | Quick planning with file tracking | `@~/.codex/prompts/quick-plan-with-file.md TASK="..."` | plan + docs |
| **merge-plans-with-file** | Merge multiple plans | `@~/.codex/prompts/merge-plans-with-file.md PLANS="..."` | merged-plan.json |
### Execution Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **execute** | Execute tasks from plan | `@~/.codex/prompts/execute.md SESSION=".../plan/"` | Working code |
| **unified-execute-with-file** | Execute with file tracking | `@~/.codex/prompts/unified-execute-with-file.md SESSION="..."` | Code + tracking |
### Bug Fix Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **lite-fix** | Quick bug diagnosis and planning | `@~/.codex/prompts/lite-fix.md BUG="..."` | fix-plan.json |
| **debug-with-file** | Hypothesis-driven debugging | `@~/.codex/prompts/debug-with-file.md BUG="..."` | understanding.md |
### Discovery Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **issue-discover** | Multi-perspective issue discovery | `@~/.codex/prompts/issue-discover.md PATTERN="src/**"` | issues.jsonl |
| **issue-discover-by-prompt** | Prompt-based discovery | `@~/.codex/prompts/issue-discover-by-prompt.md PROMPT="..."` | issues |
| **issue-plan** | Plan issue solutions | `@~/.codex/prompts/issue-plan.md --all-pending` | issue-plans.json |
| **issue-queue** | Form execution queue | `@~/.codex/prompts/issue-queue.md --from-plan` | queue.json |
| **issue-execute** | Execute issue queue | `@~/.codex/prompts/issue-execute.md QUEUE="..."` | Completed |
### Analysis Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **analyze-with-file** | Collaborative analysis | `@~/.codex/prompts/analyze-with-file.md TOPIC="..."` | discussion.md |
### Brainstorm Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **brainstorm-with-file** | Multi-perspective brainstorming | `@~/.codex/prompts/brainstorm-with-file.md TOPIC="..."` | brainstorm.md |
| **brainstorm-to-cycle** | Bridge brainstorm to execution | `@~/.codex/prompts/brainstorm-to-cycle.md` | Executable plan |
### Utility Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **clean** | Intelligent code cleanup | `@~/.codex/prompts/clean.md` | Cleaned code |
| **compact** | Compact session memory | `@~/.codex/prompts/compact.md SESSION="..."` | Compressed state |
## Execution Flow
```
User Input: TASK="..."
Phase 1: analyzeRequirements(task)
Phase 2: recommendCommandChain(analysis)
Display pipeline and commands
User Confirmation
Phase 3: executeCommandChain(chain, analysis)
├─ For each command:
│ ├─ Update state to "running"
│ ├─ Build command string with parameters
│ ├─ Execute the command with its parameters
│ ├─ Save execution results
│ └─ Update state to "completed"
Output completion summary
```
## Key Design Principles
1. **Atomic Execution** - Never split minimum execution units
2. **State Persistence** - All state saved to JSON
3. **User Control** - Confirmation before execution
4. **Context Passing** - Parameters chain across commands
5. **Resume Support** - Can resume from state.json
6. **Intelligent Routing** - Task type determines command chain
7. **Complexity Awareness** - Different paths for simple vs complex tasks
## Command Invocation Format
**Format**: `@~/.codex/prompts/<command-name>.md <parameters>`
**Examples**:
```bash
@~/.codex/prompts/lite-plan-a.md TASK="Implement user authentication"
@~/.codex/prompts/execute.md SESSION=".workflow/.lite-plan/..."
@~/.codex/prompts/lite-fix.md BUG="Login fails with 404 error"
@~/.codex/prompts/issue-discover.md PATTERN="src/auth/**"
@~/.codex/prompts/brainstorm-with-file.md TOPIC="Improve user onboarding"
```
## Error Handling
| Situation | Action |
|-----------|--------|
| Unknown task type | Default to feature implementation |
| Command not found | Error: command not available |
| Execution fails | Report error, offer retry or skip |
| Invalid parameters | Validate and ask for correction |
| Circular dependency | Detect and report |
| All commands fail | Report and suggest manual intervention |
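The retry/skip/abort actions in the table reduce to small state transitions. One possible handler, sketched against the state.json shape above; the `handleFailure` name and return shape are assumptions, not part of the command:

```javascript
// Apply the user's error-handling choice to the saved state.
// resumeAt tells the execution loop where to continue; null means stop.
function handleFailure(action, state, index) {
  switch (action) {
    case 'retry':
      state.command_chain[index].status = 'pending';
      return { resumeAt: index };
    case 'skip':
      state.command_chain[index].status = 'failed';
      return { resumeAt: index + 1 };
    default: // 'abort'
      state.status = 'failed';
      return { resumeAt: null };
  }
}
```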
## Session Management
**Resume Previous Session**:
```
1. Find session in .workflow/.codex-coordinator/
2. Load state.json
3. Identify last completed command
4. Restart from next pending command
```
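Steps 3-4 of the resume procedure reduce to finding the first non-completed entry. A sketch against the `command_chain` shape shown in "State File Structure"; `nextPendingIndex` is a hypothetical helper:

```javascript
// Locate the next command to run when resuming a saved session.
function nextPendingIndex(state) {
  const i = state.command_chain.findIndex(c => c.status !== 'completed');
  return i === -1 ? null : i; // null means nothing left to run
}
```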
**View Session Progress**:
```
cat .workflow/.codex-coordinator/{session-id}/state.json
```
---
## Execution Instructions
The coordinator workflow follows these steps:
1. **Parse Input**: Extract task description from TASK parameter
2. **Analyze**: Determine goal, scope, complexity, and task type
3. **Recommend**: Build optimal command chain based on analysis
4. **Confirm**: Display pipeline and request user approval
5. **Execute**: Run commands sequentially with state tracking
6. **Report**: Display final results and artifacts
This coordinator is invoked as a Claude Code command, not a Codex command. From Claude Code, the individual Codex commands it orchestrates are called like:
```bash
@~/.codex/prompts/lite-plan-a.md TASK="Your task description"
```
Or with options:
```bash
@~/.codex/prompts/lite-plan-a.md TASK="..." --depth=deep
```
This coordinator orchestrates such Codex commands based on your task requirements.


@@ -0,0 +1,675 @@
# Flow Template Generator
Generate workflow templates for meta-skill/flow-coordinator.
## Usage
```
/meta-skill:flow-create [template-name] [--output <path>]
```
**Examples**:
```bash
/meta-skill:flow-create bugfix-v2
/meta-skill:flow-create my-workflow --output ~/.claude/skills/my-skill/templates/
```
## Execution Flow
```
User Input → Phase 1: Template Design → Phase 2: Step Definition → Phase 3: Generate JSON
                     ↓                          ↓                          ↓
             Name + Description         Define workflow steps       Write template file
```
---
## Phase 1: Template Design
Gather basic template information:
```javascript
async function designTemplate(input) {
const templateName = parseTemplateName(input) || await askTemplateName();
const metadata = await AskUserQuestion({
questions: [
{
question: "What is the purpose of this workflow template?",
header: "Purpose",
options: [
{ label: "Feature Development", description: "Implement new features with planning and testing" },
{ label: "Bug Fix", description: "Diagnose and fix bugs with verification" },
{ label: "TDD Development", description: "Test-driven development workflow" },
{ label: "Code Review", description: "Review cycle with findings and fixes" },
{ label: "Testing", description: "Test generation and validation" },
{ label: "Issue Workflow", description: "Complete issue lifecycle (discover → plan → queue → execute)" },
{ label: "With-File Workflow", description: "Documented exploration (brainstorm/debug/analyze)" },
{ label: "Custom", description: "Define custom workflow purpose" }
],
multiSelect: false
},
{
question: "What complexity level?",
header: "Level",
options: [
{ label: "Level 1 (Rapid)", description: "1-2 steps, ultra-lightweight (lite-lite-lite)" },
{ label: "Level 2 (Lightweight)", description: "2-4 steps, quick implementation" },
{ label: "Level 3 (Standard)", description: "4-6 steps, with verification and testing" },
{ label: "Level 4 (Full)", description: "6+ steps, brainstorm + full workflow" }
],
multiSelect: false
}
]
});
return {
name: templateName,
description: generateDescription(templateName, metadata.Purpose),
level: parseLevel(metadata.Level),
purpose: metadata.Purpose
};
}
```
---
## Phase 2: Step Definition
### Step 2.1: Select Command Category
```javascript
async function selectCommandCategory() {
return await AskUserQuestion({
questions: [{
question: "Select command category",
header: "Category",
options: [
{ label: "Planning", description: "lite-plan, plan, multi-cli-plan, tdd-plan, quick-plan-with-file" },
{ label: "Execution", description: "lite-execute, execute, unified-execute-with-file" },
{ label: "Testing", description: "test-fix-gen, test-cycle-execute, test-gen, tdd-verify" },
{ label: "Review", description: "review-session-cycle, review-module-cycle, review-cycle-fix" },
{ label: "Bug Fix", description: "lite-fix, debug-with-file" },
{ label: "Brainstorm", description: "brainstorm-with-file, brainstorm:auto-parallel" },
{ label: "Analysis", description: "analyze-with-file" },
{ label: "Issue", description: "discover, plan, queue, execute, from-brainstorm, convert-to-plan" },
{ label: "Utility", description: "clean, init, replan, status" }
],
multiSelect: false
}]
});
}
```
### Step 2.2: Select Specific Command
```javascript
async function selectCommand(category) {
const commandOptions = {
'Planning': [
{ label: "/workflow:lite-plan", description: "Lightweight merged-mode planning" },
{ label: "/workflow:plan", description: "Full planning with architecture design" },
{ label: "/workflow:multi-cli-plan", description: "Multi-CLI collaborative planning (Gemini+Codex+Claude)" },
{ label: "/workflow:tdd-plan", description: "TDD workflow planning with Red-Green-Refactor" },
{ label: "/workflow:quick-plan-with-file", description: "Rapid planning with minimal docs" },
{ label: "/workflow:plan-verify", description: "Verify plan against requirements" },
{ label: "/workflow:replan", description: "Update plan and execute changes" }
],
'Execution': [
{ label: "/workflow:lite-execute", description: "Execute from in-memory plan" },
{ label: "/workflow:execute", description: "Execute from planning session" },
{ label: "/workflow:unified-execute-with-file", description: "Universal execution engine" },
{ label: "/workflow:lite-lite-lite", description: "Ultra-lightweight multi-tool execution" }
],
'Testing': [
{ label: "/workflow:test-fix-gen", description: "Generate test tasks for specific issues" },
{ label: "/workflow:test-cycle-execute", description: "Execute iterative test-fix cycle (>=95% pass)" },
{ label: "/workflow:test-gen", description: "Generate comprehensive test suite" },
{ label: "/workflow:tdd-verify", description: "Verify TDD workflow compliance" }
],
'Review': [
{ label: "/workflow:review-session-cycle", description: "Session-based multi-dimensional code review" },
{ label: "/workflow:review-module-cycle", description: "Module-focused code review" },
{ label: "/workflow:review-cycle-fix", description: "Fix review findings with prioritization" },
{ label: "/workflow:review", description: "Post-implementation review" }
],
'Bug Fix': [
{ label: "/workflow:lite-fix", description: "Lightweight bug diagnosis and fix" },
{ label: "/workflow:debug-with-file", description: "Hypothesis-driven debugging with documentation" }
],
'Brainstorm': [
{ label: "/workflow:brainstorm-with-file", description: "Multi-perspective ideation with documentation" },
{ label: "/workflow:brainstorm:auto-parallel", description: "Parallel multi-role brainstorming" }
],
'Analysis': [
{ label: "/workflow:analyze-with-file", description: "Collaborative analysis with documentation" }
],
'Issue': [
{ label: "/issue:discover", description: "Multi-perspective issue discovery" },
{ label: "/issue:discover-by-prompt", description: "Prompt-based issue discovery with Gemini" },
{ label: "/issue:plan", description: "Plan issue solutions" },
{ label: "/issue:queue", description: "Form execution queue with conflict analysis" },
{ label: "/issue:execute", description: "Execute issue queue with DAG orchestration" },
{ label: "/issue:from-brainstorm", description: "Convert brainstorm to issue" },
{ label: "/issue:convert-to-plan", description: "Convert planning artifacts to issue solutions" }
],
'Utility': [
{ label: "/workflow:clean", description: "Intelligent code cleanup" },
{ label: "/workflow:init", description: "Initialize project-level state" },
{ label: "/workflow:replan", description: "Interactive workflow replanning" },
{ label: "/workflow:status", description: "Generate workflow status views" }
]
};
return await AskUserQuestion({
questions: [{
question: `Select ${category} command`,
header: "Command",
options: commandOptions[category] || commandOptions['Planning'],
multiSelect: false
}]
});
}
```
### Step 2.3: Select Execution Unit
```javascript
async function selectExecutionUnit() {
return await AskUserQuestion({
questions: [{
question: "Select execution unit (atomic command group)",
header: "Unit",
options: [
// Planning + Execution Units
{ label: "quick-implementation", description: "【lite-plan → lite-execute】" },
{ label: "multi-cli-planning", description: "【multi-cli-plan → lite-execute】" },
{ label: "full-planning-execution", description: "【plan → execute】" },
{ label: "verified-planning-execution", description: "【plan → plan-verify → execute】" },
{ label: "replanning-execution", description: "【replan → execute】" },
{ label: "tdd-planning-execution", description: "【tdd-plan → execute】" },
// Testing Units
{ label: "test-validation", description: "【test-fix-gen → test-cycle-execute】" },
{ label: "test-generation-execution", description: "【test-gen → execute】" },
// Review Units
{ label: "code-review", description: "【review-*-cycle → review-cycle-fix】" },
// Bug Fix Units
{ label: "bug-fix", description: "【lite-fix → lite-execute】" },
// Issue Units
{ label: "issue-workflow", description: "【discover → plan → queue → execute】" },
{ label: "rapid-to-issue", description: "【lite-plan → convert-to-plan → queue → execute】" },
{ label: "brainstorm-to-issue", description: "【from-brainstorm → queue → execute】" },
// With-File Units (self-contained)
{ label: "brainstorm-with-file", description: "Self-contained brainstorming workflow" },
{ label: "debug-with-file", description: "Self-contained debugging workflow" },
{ label: "analyze-with-file", description: "Self-contained analysis workflow" },
// Standalone
{ label: "standalone", description: "Single command, no atomic grouping" }
],
multiSelect: false
}]
});
}
```
### Step 2.4: Select Execution Mode
```javascript
async function selectExecutionMode() {
return await AskUserQuestion({
questions: [{
question: "Execution mode for this step?",
header: "Mode",
options: [
{ label: "mainprocess", description: "Run in main process (blocking, synchronous)" },
{ label: "async", description: "Run asynchronously (background, hook callbacks)" }
],
multiSelect: false
}]
});
}
```
### Complete Step Definition Flow
```javascript
async function defineSteps(templateDesign) {
// Suggest steps based on purpose
const suggestedSteps = getSuggestedSteps(templateDesign.purpose);
const customize = await AskUserQuestion({
questions: [{
question: "Use suggested steps or customize?",
header: "Steps",
options: [
{ label: "Use Suggested", description: `Suggested: ${suggestedSteps.map(s => s.cmd).join(' → ')}` },
{ label: "Customize", description: "Modify or add custom steps" },
{ label: "Start Empty", description: "Define all steps from scratch" }
],
multiSelect: false
}]
});
if (customize.Steps === "Use Suggested") {
return suggestedSteps;
}
// Interactive step definition
const steps = [];
let addMore = true;
while (addMore) {
const category = await selectCommandCategory();
const command = await selectCommand(category.Category);
const unit = await selectExecutionUnit();
const execMode = await selectExecutionMode();
const contextHint = await askContextHint(command.Command);
steps.push({
cmd: command.Command,
args: command.Command.includes('plan') || command.Command.includes('fix') ? '"{{goal}}"' : undefined,
unit: unit.Unit,
execution: {
type: "slash-command",
mode: execMode.Mode
},
contextHint: contextHint
});
const continueAdding = await AskUserQuestion({
questions: [{
question: `Added step ${steps.length}: ${command.Command}. Add another?`,
header: "Continue",
options: [
{ label: "Add More", description: "Define another step" },
{ label: "Done", description: "Finish step definition" }
],
multiSelect: false
}]
});
addMore = continueAdding.Continue === "Add More";
}
return steps;
}
```
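`getSuggestedSteps(templateDesign.purpose)` is called above but never defined in this document. A minimal sketch, assuming suggested steps are keyed by purpose and mirror the step shape used in the templates below (the mapping itself is an assumption):

```javascript
// Hypothetical purpose → suggested-steps mapping; extend with the other
// template purposes as needed. Step objects follow the template format
// used throughout this document (cmd / args / unit).
const SUGGESTED_STEPS = {
  "Feature Development": [
    { cmd: "/workflow:lite-plan", args: '"{{goal}}"', unit: "quick-implementation" },
    { cmd: "/workflow:lite-execute", args: "--in-memory", unit: "quick-implementation" },
    { cmd: "/workflow:test-fix-gen", unit: "test-validation" },
    { cmd: "/workflow:test-cycle-execute", unit: "test-validation" }
  ],
  "Bug Fix": [
    { cmd: "/workflow:lite-fix", args: '"{{goal}}"', unit: "bug-fix" },
    { cmd: "/workflow:lite-execute", args: "--in-memory", unit: "bug-fix" }
  ]
};

function getSuggestedSteps(purpose) {
  // Fall back to an empty list so the caller drops into interactive definition.
  return SUGGESTED_STEPS[purpose] || [];
}
```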
---
## Suggested Step Templates
### Feature Development (Level 2 - Rapid)
```json
{
"name": "rapid",
"description": "Quick implementation with testing",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-plan", "args": "\"{{goal}}\"", "unit": "quick-implementation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create lightweight implementation plan" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "quick-implementation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute implementation based on plan" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle until pass rate >= 95%" }
]
}
```
### Feature Development (Level 3 - Coupled)
```json
{
"name": "coupled",
"description": "Full workflow with verification, review, and testing",
"level": 3,
"steps": [
{ "cmd": "/workflow:plan", "args": "\"{{goal}}\"", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create detailed implementation plan" },
{ "cmd": "/workflow:plan-verify", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify plan against requirements" },
{ "cmd": "/workflow:execute", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute implementation" },
{ "cmd": "/workflow:review-session-cycle", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-dimensional code review" },
{ "cmd": "/workflow:review-cycle-fix", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Fix review findings" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle" }
]
}
```
### Bug Fix (Level 2)
```json
{
"name": "bugfix",
"description": "Bug diagnosis and fix with testing",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-fix", "args": "\"{{goal}}\"", "unit": "bug-fix", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Diagnose and plan bug fix" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "bug-fix", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute bug fix" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate regression tests" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Verify fix with tests" }
]
}
```
### Bug Fix Hotfix (Level 2)
```json
{
"name": "bugfix-hotfix",
"description": "Urgent production bug fix (no tests)",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-fix", "args": "--hotfix \"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Emergency hotfix mode" }
]
}
```
### TDD Development (Level 3)
```json
{
"name": "tdd",
"description": "Test-driven development with Red-Green-Refactor",
"level": 3,
"steps": [
{ "cmd": "/workflow:tdd-plan", "args": "\"{{goal}}\"", "unit": "tdd-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create TDD task chain" },
{ "cmd": "/workflow:execute", "unit": "tdd-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute TDD cycle" },
{ "cmd": "/workflow:tdd-verify", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify TDD compliance" }
]
}
```
### Code Review (Level 3)
```json
{
"name": "review",
"description": "Code review cycle with fixes and testing",
"level": 3,
"steps": [
{ "cmd": "/workflow:review-session-cycle", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-dimensional code review" },
{ "cmd": "/workflow:review-cycle-fix", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Fix review findings" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate tests for fixes" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Verify fixes pass tests" }
]
}
```
### Test Fix (Level 3)
```json
{
"name": "test-fix",
"description": "Fix failing tests",
"level": 3,
"steps": [
{ "cmd": "/workflow:test-fix-gen", "args": "\"{{goal}}\"", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test fix tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle" }
]
}
```
### Issue Workflow (Level Issue)
```json
{
"name": "issue",
"description": "Complete issue lifecycle",
"level": "Issue",
"steps": [
{ "cmd": "/issue:discover", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Discover issues from codebase" },
{ "cmd": "/issue:plan", "args": "--all-pending", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Plan issue solutions" },
{ "cmd": "/issue:queue", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
{ "cmd": "/issue:execute", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
]
}
```
### Rapid to Issue (Level 2)
```json
{
"name": "rapid-to-issue",
"description": "Bridge lightweight planning to issue workflow",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-plan", "args": "\"{{goal}}\"", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create lightweight plan" },
{ "cmd": "/issue:convert-to-plan", "args": "--latest-lite-plan -y", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Convert to issue plan" },
{ "cmd": "/issue:queue", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
{ "cmd": "/issue:execute", "args": "--queue auto", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
]
}
```
### Brainstorm to Issue (Level 4)
```json
{
"name": "brainstorm-to-issue",
"description": "Bridge brainstorm session to issue workflow",
"level": 4,
"steps": [
{ "cmd": "/issue:from-brainstorm", "args": "SESSION=\"{{session}}\" --auto", "unit": "brainstorm-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Convert brainstorm to issue" },
{ "cmd": "/issue:queue", "unit": "brainstorm-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
{ "cmd": "/issue:execute", "args": "--queue auto", "unit": "brainstorm-to-issue", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
]
}
```
### With-File: Brainstorm (Level 4)
```json
{
"name": "brainstorm",
"description": "Multi-perspective ideation with documentation",
"level": 4,
"steps": [
{ "cmd": "/workflow:brainstorm-with-file", "args": "\"{{goal}}\"", "unit": "brainstorm-with-file", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-CLI brainstorming with documented diverge-converge cycles" }
]
}
```
### With-File: Debug (Level 3)
```json
{
"name": "debug",
"description": "Hypothesis-driven debugging with documentation",
"level": 3,
"steps": [
{ "cmd": "/workflow:debug-with-file", "args": "\"{{goal}}\"", "unit": "debug-with-file", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Hypothesis-driven debugging with Gemini validation" }
]
}
```
### With-File: Analyze (Level 3)
```json
{
"name": "analyze",
"description": "Collaborative analysis with documentation",
"level": 3,
"steps": [
{ "cmd": "/workflow:analyze-with-file", "args": "\"{{goal}}\"", "unit": "analyze-with-file", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-round collaborative analysis with CLI exploration" }
]
}
```
### Full Workflow (Level 4)
```json
{
"name": "full",
"description": "Complete workflow: brainstorm → plan → execute → test",
"level": 4,
"steps": [
{ "cmd": "/workflow:brainstorm:auto-parallel", "args": "\"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Parallel multi-perspective brainstorming" },
{ "cmd": "/workflow:plan", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create detailed plan from brainstorm" },
{ "cmd": "/workflow:plan-verify", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify plan quality" },
{ "cmd": "/workflow:execute", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute implementation" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate comprehensive tests" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test cycle" }
]
}
```
### Multi-CLI Planning (Level 3)
```json
{
"name": "multi-cli-plan",
"description": "Multi-CLI collaborative planning with cross-verification",
"level": 3,
"steps": [
{ "cmd": "/workflow:multi-cli-plan", "args": "\"{{goal}}\"", "unit": "multi-cli-planning", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Gemini+Codex+Claude collaborative planning" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "multi-cli-planning", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute converged plan" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate tests" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test cycle" }
]
}
```
### Ultra-Lightweight (Level 1)
```json
{
"name": "lite-lite-lite",
"description": "Ultra-lightweight multi-tool execution",
"level": 1,
"steps": [
{ "cmd": "/workflow:lite-lite-lite", "args": "\"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Direct execution with minimal overhead" }
]
}
```
---
## Command Port Reference
Each command has input/output ports for pipeline composition:
| Command | Input Port | Output Port | Atomic Unit |
|---------|------------|-------------|-------------|
| **Planning** | | | |
| lite-plan | requirement | plan | quick-implementation |
| plan | requirement | detailed-plan | full-planning-execution |
| plan-verify | detailed-plan | verified-plan | verified-planning-execution |
| multi-cli-plan | requirement | multi-cli-plan | multi-cli-planning |
| tdd-plan | requirement | tdd-tasks | tdd-planning-execution |
| replan | session, feedback | replan | replanning-execution |
| **Execution** | | | |
| lite-execute | plan, multi-cli-plan, lite-fix | code | (multiple) |
| execute | detailed-plan, verified-plan, replan, tdd-tasks | code | (multiple) |
| **Testing** | | | |
| test-fix-gen | failing-tests, session | test-tasks | test-validation |
| test-cycle-execute | test-tasks | test-passed | test-validation |
| test-gen | code, session | test-tasks | test-generation-execution |
| tdd-verify | code | tdd-verified | standalone |
| **Review** | | | |
| review-session-cycle | code, session | review-verified | code-review |
| review-module-cycle | module-pattern | review-verified | code-review |
| review-cycle-fix | review-findings | fixed-code | code-review |
| **Bug Fix** | | | |
| lite-fix | bug-report | lite-fix | bug-fix |
| debug-with-file | bug-report | understanding-document | debug-with-file |
| **With-File** | | | |
| brainstorm-with-file | exploration-topic | brainstorm-document | brainstorm-with-file |
| analyze-with-file | analysis-topic | discussion-document | analyze-with-file |
| **Issue** | | | |
| issue:discover | codebase | pending-issues | issue-workflow |
| issue:plan | pending-issues | issue-plans | issue-workflow |
| issue:queue | issue-plans, converted-plan | execution-queue | issue-workflow |
| issue:execute | execution-queue | completed-issues | issue-workflow |
| issue:convert-to-plan | plan | converted-plan | rapid-to-issue |
| issue:from-brainstorm | brainstorm-document | converted-plan | brainstorm-to-issue |
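This port table supports a mechanical composition check: every step's input port should be satisfied by an earlier step's output, or by an external input such as `requirement`. A sketch over a trimmed-down port map (the external-port set is an assumption):

```javascript
// Subset of the Command Port Reference above; extend with the remaining rows.
const PORTS = {
  "lite-plan":          { in: ["requirement"], out: "plan" },
  "lite-execute":       { in: ["plan", "multi-cli-plan", "lite-fix"], out: "code" },
  "test-fix-gen":       { in: ["failing-tests", "session"], out: "test-tasks" },
  "test-cycle-execute": { in: ["test-tasks"], out: "test-passed" }
};

// Ports supplied from outside the pipeline rather than by a previous step.
const EXTERNAL = new Set(["requirement", "failing-tests", "session"]);

function checkPipeline(commands) {
  const produced = new Set(EXTERNAL);
  for (const cmd of commands) {
    const spec = PORTS[cmd];
    if (!spec) return { ok: false, error: `unknown command: ${cmd}` };
    // At least one input port must already have a producer.
    if (!spec.in.some(p => produced.has(p))) {
      return { ok: false, error: `${cmd}: no producer for [${spec.in.join(", ")}]` };
    }
    produced.add(spec.out);
  }
  return { ok: true };
}
```

For example, `checkPipeline(["lite-plan", "lite-execute"])` passes, while `["lite-execute"]` alone fails because nothing produced a `plan`.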
---
## Minimum Execution Units
**Definition**: Commands that must execute together as an atomic group.
| Unit Name | Commands | Purpose |
|-----------|----------|---------|
| **quick-implementation** | lite-plan → lite-execute | Lightweight plan and execution |
| **multi-cli-planning** | multi-cli-plan → lite-execute | Multi-perspective planning and execution |
| **bug-fix** | lite-fix → lite-execute | Bug diagnosis and fix |
| **full-planning-execution** | plan → execute | Detailed planning and execution |
| **verified-planning-execution** | plan → plan-verify → execute | Planning with verification |
| **replanning-execution** | replan → execute | Update plan and execute |
| **tdd-planning-execution** | tdd-plan → execute | TDD planning and execution |
| **test-validation** | test-fix-gen → test-cycle-execute | Test generation and fix cycle |
| **test-generation-execution** | test-gen → execute | Generate and execute tests |
| **code-review** | review-*-cycle → review-cycle-fix | Review and fix findings |
| **issue-workflow** | discover → plan → queue → execute | Complete issue lifecycle |
| **rapid-to-issue** | lite-plan → convert-to-plan → queue → execute | Bridge to issue workflow |
| **brainstorm-to-issue** | from-brainstorm → queue → execute | Brainstorm to issue bridge |
| **brainstorm-with-file** | (self-contained) | Multi-perspective ideation |
| **debug-with-file** | (self-contained) | Hypothesis-driven debugging |
| **analyze-with-file** | (self-contained) | Collaborative analysis |
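Because the commands in a unit must run as one atomic group, a template's step order can be checked for unit contiguity, i.e. a unit's steps may not resume after another unit has started. A sketch (treating `standalone` as ungrouped is an assumption):

```javascript
// Verify that steps sharing a unit appear as one contiguous run.
// "standalone" steps carry no atomic grouping and break any open run.
function checkUnitContiguity(steps) {
  const closed = new Set();   // units whose run has already ended
  let current = null;         // unit of the run in progress
  for (const step of steps) {
    if (step.unit === "standalone") {
      if (current) { closed.add(current); current = null; }
      continue;
    }
    if (step.unit !== current) {
      if (closed.has(step.unit)) return { ok: false, unit: step.unit };
      if (current) closed.add(current);
      current = step.unit;
    }
  }
  return { ok: true };
}
```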
---
## Phase 3: Generate JSON
```javascript
async function generateTemplate(design, steps, outputPath) {
const template = {
name: design.name,
description: design.description,
level: design.level,
steps: steps
};
const finalPath = outputPath || `~/.claude/skills/flow-coordinator/templates/${design.name}.json`;
// Write template
Write(finalPath, JSON.stringify(template, null, 2));
// Validate
const validation = validateTemplate(template);
console.log(`✅ Template created: ${finalPath}`);
console.log(` Steps: ${template.steps.length}`);
console.log(` Level: ${template.level}`);
console.log(` Units: ${[...new Set(template.steps.map(s => s.unit))].join(', ')}`);
return { path: finalPath, template, validation };
}
```
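`validateTemplate` is invoked above but not shown. A minimal sketch of the structural checks it might perform, assuming the field rules implied by the templates in this document:

```javascript
// Structural validation for a generated template object.
function validateTemplate(template) {
  const errors = [];
  if (!template.name) errors.push("missing name");
  if (!template.description) errors.push("missing description");
  if (!Array.isArray(template.steps) || template.steps.length === 0) {
    errors.push("steps must be a non-empty array");
  } else {
    template.steps.forEach((s, i) => {
      if (!s.cmd || !s.cmd.startsWith("/")) errors.push(`step ${i}: cmd must be a slash command`);
      if (!s.unit) errors.push(`step ${i}: missing unit`);
      const mode = s.execution && s.execution.mode;
      if (!["mainprocess", "async"].includes(mode)) errors.push(`step ${i}: invalid execution.mode`);
    });
  }
  return { valid: errors.length === 0, errors };
}
```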
---
## Output Format
```json
{
"name": "template-name",
"description": "Template description",
"level": 2,
"steps": [
{
"cmd": "/workflow:command",
"args": "\"{{goal}}\"",
"unit": "unit-name",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Description of what this step does"
}
]
}
```
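At execution time the `{{goal}}` placeholder in `args` presumably gets substituted with the user's goal; a sketch of that interpolation (the exact escaping rules are an assumption):

```javascript
// Substitute {{goal}} (and any other {{key}} placeholder) in a step's args.
// Unknown placeholders are left untouched rather than blanked out.
function interpolateArgs(args, vars) {
  if (!args) return args;
  return args.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in vars ? vars[key] : match
  );
}
```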
---
## Examples
**Create a quick bugfix template**:
```
/meta-skill:flow-create hotfix-simple
→ Purpose: Bug Fix
→ Level: 2 (Lightweight)
→ Steps: Use Suggested
→ Output: ~/.claude/skills/flow-coordinator/templates/hotfix-simple.json
```
**Create a custom multi-stage workflow**:
```
/meta-skill:flow-create complex-feature --output ~/.claude/skills/my-project/templates/
→ Purpose: Feature Development
→ Level: 3 (Standard)
→ Steps: Customize
→ Step 1: /workflow:brainstorm:auto-parallel (standalone, mainprocess)
→ Step 2: /workflow:plan (verified-planning-execution, mainprocess)
→ Step 3: /workflow:plan-verify (verified-planning-execution, mainprocess)
→ Step 4: /workflow:execute (verified-planning-execution, async)
→ Step 5: /workflow:review-session-cycle (code-review, mainprocess)
→ Step 6: /workflow:review-cycle-fix (code-review, mainprocess)
→ Done
→ Output: ~/.claude/skills/my-project/templates/complex-feature.json
```
@@ -2,7 +2,7 @@
name: issue:discover-by-prompt
description: Discover issues from user prompt with Gemini-planned iterative multi-agent exploration. Uses ACE semantic search for context gathering and supports cross-module comparison (e.g., frontend vs backend API contracts).
argument-hint: "[-y|--yes] <prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]"
-allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*), AskUserQuestion(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*), mcp__exa__search(*)
+allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*), Task(*), AskUserQuestion(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*), mcp__exa__search(*)
---
## Auto Mode
@@ -2,7 +2,7 @@
name: issue:discover
description: Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.
argument-hint: "[-y|--yes] <path-pattern> [--perspectives=bug,ux,...] [--external]"
-allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*), AskUserQuestion(*), Glob(*), Grep(*)
+allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*), Task(*), AskUserQuestion(*), Glob(*), Grep(*)
---
## Auto Mode
@@ -341,31 +341,59 @@ For each task:
- Do NOT commit after each task
### Step 3: Commit Solution (Once)
-After ALL tasks pass, commit once with formatted summary.
+After ALL tasks pass, commit once with clean conventional format.
Command:
git add -A
-git commit -m "<type>(<scope>): <description>
+git commit -m "<type>(<scope>): <brief description>"
-Solution: ${SOLUTION_ID}
-Tasks completed: <list task IDs>
+Examples:
+git commit -m "feat(auth): add token refresh mechanism"
+git commit -m "fix(payment): resolve timeout in checkout flow"
+git commit -m "refactor(api): simplify error handling"
-Changes:
-- <file1>: <what changed>
-- <file2>: <what changed>
-Verified: all tests passed"
-Replace <type> with: feat|fix|refactor|docs|test
+Replace <type> with: feat|fix|refactor|docs|test|chore
Replace <scope> with: affected module name
-Replace <description> with: brief summary from solution
+Replace <description> with: brief summary (NO solution/issue IDs)
### Step 4: Report Completion
On success, run:
-ccw issue done ${SOLUTION_ID} --result '{"summary": "<brief>", "files_modified": ["<file1>", "<file2>"], "commit": {"hash": "<hash>", "type": "<type>"}, "tasks_completed": <N>}'
+ccw issue done ${SOLUTION_ID} --result '{
+  "solution_id": "<solution-id>",
+  "issue_id": "<issue-id>",
+  "commit": {
+    "hash": "<commit-hash>",
+    "type": "<commit-type>",
+    "scope": "<commit-scope>",
+    "message": "<commit-message>"
+  },
+  "analysis": {
+    "risk": "<low|medium|high>",
+    "impact": "<low|medium|high>",
+    "complexity": "<low|medium|high>"
+  },
+  "tasks_completed": [
+    {"id": "T1", "title": "...", "action": "...", "scope": "..."},
+    {"id": "T2", "title": "...", "action": "...", "scope": "..."}
+  ],
+  "files_modified": ["<file1>", "<file2>"],
+  "tests_passed": true,
+  "verification": {
+    "all_tests_passed": true,
+    "acceptance_criteria_met": true,
+    "regression_checked": true
+  },
+  "summary": "<brief description of accomplishment>"
+}'
On failure, run:
-ccw issue done ${SOLUTION_ID} --fail --reason '{"task_id": "<TX>", "error_type": "<test_failure|build_error|other>", "message": "<error details>"}'
+ccw issue done ${SOLUTION_ID} --fail --reason '{
+  "task_id": "<TX>",
+  "error_type": "<test_failure|build_error|other>",
+  "message": "<error details>",
+  "files_attempted": ["<file1>", "<file2>"],
+  "commit": null
+}'
### Important Notes
- Do NOT cleanup worktree - it is shared by all solutions in the queue
@@ -2,7 +2,7 @@
name: plan
description: Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)
argument-hint: "[-y|--yes] --all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]"
-allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Bash(*), Read(*), Write(*)
+allowed-tools: TodoWrite(*), Task(*), Skill(*), AskUserQuestion(*), Bash(*), Read(*), Write(*)
---
## Auto Mode
@@ -1,367 +0,0 @@
---
name: ccw view
description: Dashboard - Open CCW workflow dashboard for managing tasks and sessions
category: general
---
# CCW View Command
Open the CCW workflow dashboard for visualizing and managing project tasks, sessions, and workflow execution status.
## Description
`ccw view` launches an interactive web dashboard that provides:
- **Workflow Overview**: Visualize current workflow status and command chain execution
- **Session Management**: View and manage active workflow sessions
- **Task Tracking**: Monitor TODO items and task progress
- **Workspace Switching**: Switch between different project workspaces
- **Real-time Updates**: Live updates of command execution and status
## Usage
```bash
# Open dashboard for current workspace
ccw view
# Specify workspace path
ccw view --path /path/to/workspace
# Custom port (default: 3456)
ccw view --port 3000
# Bind to specific host
ccw view --host 0.0.0.0 --port 3456
# Open without launching browser (the URL is printed instead)
ccw view --no-browser
```
## Options
| Option | Default | Description |
|--------|---------|-------------|
| `--path <path>` | Current directory | Workspace path to display |
| `--port <port>` | 3456 | Server port for dashboard |
| `--host <host>` | 127.0.0.1 | Server host/bind address |
| `--no-browser` | false | Don't launch browser automatically |
| `-h, --help` | - | Show help message |
## Features
### Dashboard Sections
#### 1. **Workflow Overview**
- Current workflow status
- Command chain visualization (with Minimum Execution Units marked)
- Live progress tracking
- Error alerts
#### 2. **Session Management**
- List active sessions by type (workflow, review, tdd)
- Session details (created time, last activity, session ID)
- Quick actions (resume, pause, complete)
- Session logs/history
#### 3. **Task Tracking**
- TODO list with status indicators
- Progress percentage
- Task grouping by workflow stage
- Quick inline task updates
#### 4. **Workspace Switcher**
- Browse available workspaces
- Switch context with one click
- Recent workspaces list
#### 5. **Command History**
- Recent commands executed
- Execution time and status
- Quick re-run options
### Keyboard Shortcuts
| Shortcut | Action |
|----------|--------|
| `R` | Refresh dashboard |
| `Cmd/Ctrl + J` | Jump to session search |
| `Cmd/Ctrl + K` | Open command palette |
| `?` | Show help |
## Multi-Instance Support
The dashboard supports multiple concurrent instances:
```bash
# Terminal 1: Workspace A on port 3456
ccw view --path ~/projects/workspace-a
# Terminal 2: Workspace B on port 3457
ccw view --path ~/projects/workspace-b --port 3457
# Switching workspaces on the same port
ccw view --path ~/projects/workspace-c # Auto-switches existing server
```
When the server is already running and you execute `ccw view` with a different path, the CLI:
1. Detects running server on the port
2. Sends workspace switch request
3. Updates dashboard to new workspace
4. Opens browser with updated context
## Server Lifecycle
### Startup
```
ccw view
├─ Check if server running on port
│ ├─ If yes: Send switch-path request
│ └─ If no: Start new server
├─ Launch browser (unless --no-browser)
└─ Display dashboard URL
```
### Running
The dashboard server continues running until:
- User explicitly stops it (Ctrl+C)
- All connections close after timeout
- System shutdown
### Multiple Workspaces
Switching to a different workspace keeps the same server instance:
```
Server State Before: workspace-a on port 3456
ccw view --path ~/projects/workspace-b
Server State After: workspace-b on port 3456 (same instance)
```
## Environment Variables
```bash
# Set default port
export CCW_VIEW_PORT=4000
ccw view # Uses port 4000
# Set default host
export CCW_VIEW_HOST=localhost
ccw view --port 3456 # Binds to localhost:3456
# Disable browser launch by default
export CCW_VIEW_NO_BROWSER=true
ccw view # Won't auto-launch browser
```
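The precedence implied by these examples (explicit flag over `CCW_VIEW_*` variable over built-in default) can be sketched as a pure resolver; the function name and flag shape are assumptions:

```javascript
// Resolve effective dashboard settings: CLI flag > environment variable > default.
function resolveViewConfig(flags, env) {
  const bool = v => v === true || v === "true" || v === "1";
  return {
    port: Number(flags.port ?? env.CCW_VIEW_PORT ?? 3456),
    host: flags.host ?? env.CCW_VIEW_HOST ?? "127.0.0.1",
    noBrowser: flags.noBrowser ?? bool(env.CCW_VIEW_NO_BROWSER)
  };
}
```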
## Integration with CCW Workflows
The dashboard is fully integrated with CCW commands:
### Viewing Workflow Progress
```bash
# Start a workflow
ccw "Add user authentication"
# In another terminal, view progress
ccw view # Shows execution progress in real-time
```
### Session Management from Dashboard
- Start new session: Click "New Session" button
- Resume paused session: Sessions list → Resume button
- View session logs: Click session name
- Complete session: Sessions list → Complete button
### Real-time Command Execution
- View active command chain execution
- Watch command transition through Minimum Execution Units
- See error alerts and recovery options
- View command output logs
## Troubleshooting
### Port Already in Use
```bash
# Use different port
ccw view --port 3457
# Or kill existing server
lsof -i :3456 # Find process
kill -9 <pid> # Kill it
ccw view # Start fresh
```
### Dashboard Not Loading
```bash
# Try without browser
ccw view --no-browser
# Check server logs
tail -f ~/.ccw/logs/dashboard.log
# Verify network access
curl http://localhost:3456/api/health
```
### Workspace Path Not Found
```bash
# Use full absolute path
ccw view --path "$(pwd)"
# Or specify explicit path
ccw view --path ~/projects/my-project
```
## Related Commands
- **`/ccw`** - Main workflow orchestrator
- **`/workflow:session:list`** - List workflow sessions
- **`/workflow:session:resume`** - Resume paused session
- **`/memory:compact`** - Compact session memory for dashboard display
## Examples
### Basic Dashboard View
```bash
cd ~/projects/my-app
ccw view
# → Launches http://localhost:3456 in browser
```
### Network-Accessible Dashboard
```bash
# Allow remote access
ccw view --host 0.0.0.0 --port 3000
# → Dashboard accessible at http://machine-ip:3000
```
### Multiple Workspaces on Different Ports
```bash
# Terminal 1: Main project
ccw view --path ~/projects/main --port 3456
# Terminal 2: Side project
ccw view --path ~/projects/side --port 3457
# View both simultaneously
# → http://localhost:3456 (main)
# → http://localhost:3457 (side)
```
### Headless Dashboard
```bash
# Run dashboard without browser
ccw view --port 3000 --no-browser
echo "Dashboard available at http://localhost:3000"
# Share URL with team
# Can be proxied through nginx/port forwarding
```
### Environment-Based Configuration
```bash
# Script for CI/CD
export CCW_VIEW_HOST=0.0.0.0
export CCW_VIEW_PORT=8080
ccw view --path /workspace
# → Dashboard accessible on port 8080 to all interfaces
```
## Dashboard Pages
### Overview Page (`/`)
- Current workflow status
- Active sessions summary
- Recent commands
- System health indicators
### Sessions Page (`/sessions`)
- All sessions (grouped by type)
- Session details and metadata
- Session logs viewer
- Quick actions (resume/complete)
### Tasks Page (`/tasks`)
- Current TODO items
- Progress tracking
- Inline task editing
- Workflow history
### Workspace Page (`/workspace`)
- Current workspace info
- Available workspaces
- Workspace switcher
- Workspace settings
### Settings Page (`/settings`)
- Port configuration
- Theme preferences
- Auto-refresh settings
- Export settings
## Server Health Monitoring
The dashboard includes health monitoring:
```bash
# Check health endpoint
curl http://localhost:3456/api/health
# → { "status": "ok", "uptime": 12345 }
# Monitor metrics
curl http://localhost:3456/api/metrics
# → { "sessions": 3, "tasks": 15, "lastUpdate": "2025-01-29T10:30:00Z" }
```
## Advanced Usage
### Custom Port with Dynamic Discovery
```bash
# Find next available port
available_port=$(find-available-port 3456)
ccw view --port $available_port
# Display in CI/CD
echo "Dashboard: http://localhost:$available_port"
```
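`find-available-port` above is a placeholder, not a standard utility. A pure sketch of the selection logic, assuming the set of occupied ports is already known (real discovery would probe by binding a socket, e.g. with Node's `net` module):

```javascript
// Pick the first port >= start that is not in the used set.
// This pure version assumes the occupied ports are already known.
function nextFreePort(start, used, limit = 65535) {
  for (let p = start; p <= limit; p++) {
    if (!used.has(p)) return p;
  }
  throw new Error("no free port in range");
}
```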
### Dashboard Behind Proxy
```bash
# Configure nginx reverse proxy
# Proxy http://proxy.example.com/dashboard → http://localhost:3456
ccw view --host 127.0.0.1 --port 3456
# Access via proxy
# http://proxy.example.com/dashboard
```
### Session Export from Dashboard
- View → Sessions → Export JSON
- Exports session metadata and progress
- Useful for record-keeping and reporting
## See Also
- **CCW Commands**: `/ccw` - Auto workflow orchestration
- **Session Management**: `/workflow:session:start`, `/workflow:session:list`
- **Task Tracking**: `TodoWrite` tool for programmatic task management
- **Workflow Status**: `/workflow:status` for CLI-based status view
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -2,7 +2,7 @@
name: auto-parallel
description: Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives
argument-hint: "[-y|--yes] topic or challenge description [--count N]"
-allowed-tools: SlashCommand(*), Task(*), TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*)
+allowed-tools: Skill(*), Task(*), TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*)
---
## Auto Mode
@@ -16,7 +16,7 @@ When `--yes` or `-y`: Auto-select recommended roles, skip all clarification ques
**This command is a pure orchestrator**: Executes 3 phases in sequence (interactive framework → parallel role analysis → synthesis), coordinating specialized commands/agents through task attachment model.
**Task Attachment Model**:
-- SlashCommand execute **expands workflow** by attaching sub-tasks to current TodoWrite
+- Skill execute **expands workflow** by attaching sub-tasks to current TodoWrite
- Task agent execute **attaches analysis tasks** to orchestrator's TodoWrite
- Phase 1: artifacts command attaches its internal tasks (Phase 1-5)
- Phase 2: N conceptual-planning-agent tasks attached in parallel
@@ -47,7 +47,7 @@ This workflow runs **fully autonomously** once triggered. Phase 1 (artifacts) ha
3. **Parse Every Output**: Extract selected_roles from workflow-session.json after Phase 1
4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
6. **Task Attachment Model**: SlashCommand and Task executes **attach** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
6. **Task Attachment Model**: Skill and Task executions **attach** sub-tasks to the current workflow. The orchestrator **executes** these attached tasks itself, then **collapses** them after completion
7. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
8. **Parallel Execution**: Phase 2 attaches multiple agent tasks simultaneously for concurrent execution
@@ -74,7 +74,7 @@ This workflow runs **fully autonomously** once triggered. Phase 1 (artifacts) ha
**Step 1: Execute** - Interactive framework generation via artifacts command
```javascript
SlashCommand(command="/workflow:brainstorm:artifacts \"{topic}\" --count {N}")
Skill(skill="workflow:brainstorm:artifacts", args="\"{topic}\" --count {N}")
```
**What It Does**:
@@ -95,7 +95,7 @@ SlashCommand(command="/workflow:brainstorm:artifacts \"{topic}\" --count {N}")
- workflow-session.json contains selected_roles[] (metadata only, no content duplication)
- Session directory `.workflow/active/WFS-{topic}/.brainstorming/` exists
**TodoWrite Update (Phase 1 SlashCommand executed - tasks attached)**:
**TodoWrite Update (Phase 1 Skill executed - tasks attached)**:
```json
[
{"content": "Phase 0: Parameter Parsing", "status": "completed", "activeForm": "Parsing count parameter"},
@@ -110,7 +110,7 @@ SlashCommand(command="/workflow:brainstorm:artifacts \"{topic}\" --count {N}")
]
```
**Note**: SlashCommand execute **attaches** artifacts' 5 internal tasks. Orchestrator **executes** these tasks sequentially.
**Note**: Skill execution **attaches** the artifacts command's 5 internal tasks. The orchestrator **executes** these tasks sequentially.
**Next Action**: Tasks attached → **Execute Phase 1.1-1.5** sequentially
@@ -134,7 +134,7 @@ SlashCommand(command="/workflow:brainstorm:artifacts \"{topic}\" --count {N}")
**For Each Selected Role** (unified role-analysis command):
```bash
SlashCommand(command="/workflow:brainstorm:role-analysis {role-name} --session {session-id} --skip-questions")
Skill(skill="workflow:brainstorm:role-analysis", args="{role-name} --session {session-id} --skip-questions")
```
**What It Does**:
@@ -145,7 +145,7 @@ SlashCommand(command="/workflow:brainstorm:role-analysis {role-name} --session {
- Supports optional interactive context gathering (via --include-questions flag)
**Parallel Execution**:
- Launch N SlashCommand calls simultaneously (one message with multiple SlashCommand invokes)
- Launch N Skill calls simultaneously (one message with multiple Skill invocations)
- Each role command **attached** to orchestrator's TodoWrite
- All roles execute concurrently, each reading same guidance-specification.md
- Each role operates independently
@@ -203,7 +203,7 @@ SlashCommand(command="/workflow:brainstorm:role-analysis {role-name} --session {
**Step 3: Execute** - Synthesis integration via synthesis command
```javascript
SlashCommand(command="/workflow:brainstorm:synthesis --session {sessionId}")
Skill(skill="workflow:brainstorm:synthesis", args="--session {sessionId}")
```
**What It Does**:
@@ -218,7 +218,7 @@ SlashCommand(command="/workflow:brainstorm:synthesis --session {sessionId}")
- `.workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.md` exists
- Synthesis references all role analyses
**TodoWrite Update (Phase 3 SlashCommand executed - tasks attached)**:
**TodoWrite Update (Phase 3 Skill executed - tasks attached)**:
```json
[
{"content": "Phase 0: Parameter Parsing", "status": "completed", "activeForm": "Parsing count parameter"},
@@ -231,7 +231,7 @@ SlashCommand(command="/workflow:brainstorm:synthesis --session {sessionId}")
]
```
**Note**: SlashCommand execute **attaches** synthesis' internal tasks. Orchestrator **executes** these tasks sequentially.
**Note**: Skill execution **attaches** the synthesis command's internal tasks. The orchestrator **executes** these tasks sequentially.
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
@@ -264,7 +264,7 @@ Synthesis: .workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.m
### Key Principles
1. **Task Attachment** (when SlashCommand/Task executed):
1. **Task Attachment** (when Skill/Task executed):
- Sub-command's or agent's internal tasks are **attached** to orchestrator's TodoWrite
- Phase 1: `/workflow:brainstorm:artifacts` attaches 5 internal tasks (Phase 1.1-1.5)
- Phase 2: Multiple `Task(conceptual-planning-agent)` calls attach N role analysis tasks simultaneously
@@ -289,9 +289,9 @@ Synthesis: .workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.m
### Brainstorming Workflow Specific Features
- **Phase 1**: Interactive framework generation with user Q&A (SlashCommand attachment)
- **Phase 1**: Interactive framework generation with user Q&A (Skill attachment)
- **Phase 2**: Parallel role analysis execution with N concurrent agents (Task agent attachments)
- **Phase 3**: Cross-role synthesis integration (SlashCommand attachment)
- **Phase 3**: Cross-role synthesis integration (Skill attachment)
- **Dynamic Role Count**: `--count N` parameter determines number of Phase 2 parallel tasks (default: 3, max: 9)
- **Mixed Execution**: Sequential (Phase 1, 3) and Parallel (Phase 2) task execution

View File

@@ -0,0 +1,617 @@
---
name: workflow:collaborative-plan-with-file
description: Collaborative planning with Plan Note - Understanding agent creates shared plan-note.md template, parallel agents fill pre-allocated sections, conflict detection without merge. Outputs executable plan-note.md.
argument-hint: "[-y|--yes] <task description> [--max-agents=5]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), Write(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-approve splits, skip confirmations.
# Collaborative Planning Command
## Quick Start
```bash
# Basic usage
/workflow:collaborative-plan-with-file "Implement real-time notification system"
# With options
/workflow:collaborative-plan-with-file "Refactor authentication module" --max-agents=4
/workflow:collaborative-plan-with-file "Add payment gateway support" -y
```
**Context Source**: Understanding-Agent + Per-agent exploration
**Output Directory**: `.workflow/.planning/{session-id}/`
**Default Max Agents**: 5 (actual count based on requirement complexity)
**Core Innovation**: Plan Note - shared collaborative document, no merge needed
## Output Artifacts
### Phase 1: Understanding Agent
| Artifact | Description |
|----------|-------------|
| `plan-note.md` | Shared collaborative document with pre-allocated sections |
| `requirement-analysis.json` | Sub-domain assignments and TASK ID ranges |
### Phase 2: Per Sub-Agent
| Artifact | Description |
|----------|-------------|
| `planning-context.md` | Evidence paths + synthesized understanding |
| `plan.json` | Complete agent plan (detailed implementation) |
| Updates to `plan-note.md` | Agent fills pre-allocated sections |
### Phase 3: Final Output
| Artifact | Description |
|----------|-------------|
| `plan-note.md` | ⭐ Executable plan with conflict markers |
| `conflicts.json` | Detected conflicts with resolution options |
| `plan.md` | Human-readable summary |
## Overview
Unified collaborative planning workflow using **Plan Note** architecture:
1. **Understanding**: Agent analyzes requirements and creates plan-note.md template with pre-allocated sections
2. **Parallel Planning**: Each agent generates plan.json + fills their pre-allocated section in plan-note.md
3. **Conflict Detection**: Scan plan-note.md for conflicts (no merge needed)
4. **Completion**: Generate plan.md summary, ready for execution
```
┌─────────────────────────────────────────────────────────────────────────┐
│ PLAN NOTE COLLABORATIVE PLANNING │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Phase 1: Understanding & Template Creation │
│ ├─ Understanding-Agent analyzes requirements │
│ ├─ Identify 2-5 sub-domains (focus areas) │
│ ├─ Create plan-note.md with pre-allocated sections │
│ └─ Assign TASK ID ranges (no conflicts) │
│ │
│ Phase 2: Parallel Agent Execution (No Locks Needed) │
│ ┌──────────────┬──────────────┬──────────────┐ │
│ │ Agent 1 │ Agent 2 │ Agent N │ │
│ ├──────────────┼──────────────┼──────────────┤ │
│ │ Own Section │ Own Section │ Own Section │ ← Pre-allocated │
│ │ plan.json │ plan.json │ plan.json │ ← Detailed plans │
│ └──────────────┴──────────────┴──────────────┘ │
│ │
│ Phase 3: Conflict Detection (Single Source) │
│ ├─ Parse plan-note.md (all sections) │
│ ├─ Detect file/dependency/strategy conflicts │
│ └─ Update plan-note.md conflict section │
│ │
│ Phase 4: Completion (No Merge) │
│ ├─ Generate plan.md (human-readable) │
│ └─ Ready for execution │
│ │
└─────────────────────────────────────────────────────────────────────────┘
```
## Output Structure
```
.workflow/.planning/{CPLAN-slug-YYYY-MM-DD}/
├── plan-note.md # ⭐ Core: Requirements + Tasks + Conflicts
├── requirement-analysis.json # Phase 1: Sub-domain assignments
├── agents/ # Phase 2: Per-agent detailed plans
│ ├── {focus-area-1}/
│ │ ├── planning-context.md # Evidence + understanding
│ │ └── plan.json # Complete agent plan
│ ├── {focus-area-2}/
│ │ └── ...
│ └── {focus-area-N}/
│ └── ...
├── conflicts.json # Phase 3: Conflict details
└── plan.md # Phase 4: Human-readable summary
```
## Implementation
### Session Initialization
**Objective**: Create session context and directory structure for collaborative planning.
**Required Actions**:
1. Extract task description from `$ARGUMENTS`
2. Generate session ID with format: `CPLAN-{slug}-{date}`
- slug: lowercase, alphanumeric, max 30 chars
- date: YYYY-MM-DD (UTC+8)
3. Define session folder: `.workflow/.planning/{session-id}`
4. Parse command options:
- `--max-agents=N` (default: 5)
- `-y` or `--yes` for auto-approval mode
5. Create directory structure: `{session-folder}/agents/`
**Session Variables**:
- `sessionId`: Unique session identifier
- `sessionFolder`: Base directory for all artifacts
- `maxAgents`: Maximum number of parallel agents
- `autoMode`: Boolean for auto-confirmation
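The initialization steps above can be sketched as follows. This is a minimal sketch: `makeSessionId` is a hypothetical helper name, and the UTC+8 offset handling is one way to satisfy the documented date rule.

```javascript
// Hypothetical sketch of session-ID generation per the rules above:
// slug = lowercase alphanumeric (runs of other chars become "-"), max 30 chars;
// date = YYYY-MM-DD in UTC+8.
function makeSessionId(taskDescription, now = new Date()) {
  const slug = taskDescription
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")   // collapse non-alphanumeric runs to "-"
    .replace(/^-+|-+$/g, "")       // trim leading/trailing dashes
    .slice(0, 30);
  // Shift the clock by +8h, then read the UTC date fields
  const utc8 = new Date(now.getTime() + 8 * 60 * 60 * 1000);
  const date = utc8.toISOString().slice(0, 10);
  return `CPLAN-${slug}-${date}`;
}
```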
### Phase 1: Understanding & Template Creation
**Objective**: Analyze requirements and create the plan-note.md template with pre-allocated sections for parallel agents.
**Prerequisites**:
- Session initialized with valid sessionId and sessionFolder
- Task description available from $ARGUMENTS
**Guideline**: In Understanding phase, prioritize identifying latest documentation (README, design docs, architecture guides). When ambiguities exist, ask user for clarification instead of assuming interpretations.
**Workflow Steps**:
1. **Initialize Progress Tracking**
- Create 4 todo items for workflow phases
- Set Phase 1 status to `in_progress`
2. **Launch Understanding Agent**
- Agent type: `cli-lite-planning-agent`
- Execution mode: synchronous (run_in_background: false)
3. **Agent Tasks**:
- **Identify Latest Documentation**: Search for and prioritize latest README, design docs, architecture guides
- **Understand Requirements**: Extract core objective, key points, constraints from task description and latest docs
- **Identify Ambiguities**: List any unclear points or multiple possible interpretations
- **Form Clarification Checklist**: Prepare questions for user if ambiguities found (use AskUserQuestion)
- **Split Sub-Domains**: Identify 2-{maxAgents} parallelizable focus areas
- **Create Plan Note**: Generate plan-note.md with pre-allocated sections
**Output Files**:
| File | Purpose |
|------|---------|
| `{sessionFolder}/plan-note.md` | Collaborative template with pre-allocated sections per agent |
| `{sessionFolder}/requirement-analysis.json` | Sub-domain assignments and TASK ID ranges |
**requirement-analysis.json Schema**:
- `session_id`: Session identifier
- `original_requirement`: Task description
- `complexity`: Low | Medium | High
- `sub_domains[]`: Array of focus areas with task_id_range and estimated_effort
- `total_agents`: Number of agents to spawn
**Success Criteria**:
- Latest documentation identified and referenced (if available)
- Ambiguities resolved via user clarification (if any found)
- 2-{maxAgents} clear sub-domains identified
- Each sub-domain can be planned independently
- Plan Note template includes all pre-allocated sections
- TASK ID ranges have no overlap (100 IDs per agent)
- Requirements understanding is comprehensive
**Completion**:
- Log created artifacts
- Update Phase 1 todo status to `completed`
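The no-overlap requirement for TASK ID ranges can be illustrated with a small sketch. The helper name is hypothetical, and it assumes ranges are allocated in sub-domain order starting at TASK-001, matching the 100-IDs-per-agent rule above.

```javascript
// Sketch: assign non-overlapping 100-ID ranges, one block per sub-domain.
// Agent 1 gets [1, 100], agent 2 gets [101, 200], and so on.
function assignTaskIdRanges(subDomains) {
  return subDomains.map((sub, i) => ({
    ...sub,
    task_id_range: [i * 100 + 1, (i + 1) * 100],
  }));
}
```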
**Agent Call**:
```javascript
Task(
subagent_type="cli-lite-planning-agent",
run_in_background=false,
description="Understand requirements and create plan template",
prompt=`
## Mission: Create Plan Note Template
### Key Guidelines
1. **Prioritize Latest Documentation**: Search for and reference latest README, design docs, architecture guides when available
2. **Handle Ambiguities**: When requirement ambiguities exist, ask user for clarification (use AskUserQuestion) instead of assuming interpretations
### Input Requirements
${taskDescription}
### Tasks
1. **Understand Requirements**: Extract core objective, key points, constraints (reference latest docs when available)
2. **Identify Ambiguities**: List any unclear points or multiple possible interpretations
3. **Form Clarification Checklist**: Prepare questions for user if ambiguities found
4. **Split Sub-Domains**: Identify 2-${maxAgents} parallelizable focus areas
5. **Create Plan Note**: Generate plan-note.md with pre-allocated sections
### Output Files
**File 1**: ${sessionFolder}/plan-note.md
Structure Requirements:
- YAML frontmatter: session_id, original_requirement, created_at, contributors, sub_domains, agent_sections, agent_task_id_ranges, status
- Section: ## 需求理解 (Core objectives, key points, constraints, split strategy)
- Section: ## 任务池 - {Focus Area 1} (Pre-allocated task section for agent 1, TASK-001 ~ TASK-100)
- Section: ## 任务池 - {Focus Area 2} (Pre-allocated task section for agent 2, TASK-101 ~ TASK-200)
- ... (One task pool section per sub-domain)
- Section: ## 依赖关系 (Auto-generated after all agents complete)
- Section: ## 冲突标记 (Populated in Phase 3)
- Section: ## 上下文证据 - {Focus Area 1} (Evidence for agent 1)
- Section: ## 上下文证据 - {Focus Area 2} (Evidence for agent 2)
- ... (One evidence section per sub-domain)
**File 2**: ${sessionFolder}/requirement-analysis.json
- session_id, original_requirement, complexity, sub_domains[], total_agents
### Success Criteria
- [ ] 2-${maxAgents} clear sub-domains identified
- [ ] Each sub-domain can be planned independently
- [ ] Plan Note template includes all pre-allocated sections
- [ ] TASK ID ranges have no overlap (100 IDs per agent)
`
)
```
### Phase 2: Parallel Sub-Agent Execution
**Objective**: Launch parallel planning agents to fill their pre-allocated sections in plan-note.md.
**Prerequisites**:
- Phase 1 completed successfully
- `{sessionFolder}/requirement-analysis.json` exists with sub-domain definitions
- `{sessionFolder}/plan-note.md` template created
**Workflow Steps**:
1. **Load Sub-Domain Configuration**
- Read `{sessionFolder}/requirement-analysis.json`
- Extract sub-domains array with focus_area, description, task_id_range
2. **Update Progress Tracking**
- Set Phase 2 status to `in_progress`
- Add sub-todo for each agent
3. **User Confirmation** (unless autoMode)
- Display identified sub-domains with descriptions
- Options: "开始规划" / "调整拆分" / "取消"
- Skip if autoMode enabled
4. **Create Agent Directories**
- For each sub-domain: `{sessionFolder}/agents/{focus-area}/`
5. **Launch Parallel Agents**
- Agent type: `cli-lite-planning-agent`
- Execution mode: synchronous (run_in_background: false)
- Launch ALL agents in parallel (single message with multiple Task calls)
**Per-Agent Context**:
- Focus area name and description
- Assigned TASK ID range (no overlap with other agents)
- Session ID and folder path
**Per-Agent Tasks**:
| Task | Output | Description |
|------|--------|-------------|
| Generate plan.json | `{sessionFolder}/agents/{focus-area}/plan.json` | Complete detailed plan following schema |
| Update plan-note.md | Sync to shared file | Fill pre-allocated "任务池" and "上下文证据" sections |
**Task Summary Format** (for plan-note.md):
- Task header: `### TASK-{ID}: {Title} [{focus-area}]`
- Status, Complexity, Dependencies
- Scope description
- Modification points with file:line references
- Conflict risk assessment
**Evidence Format** (for plan-note.md):
- Related files with relevance scores
- Existing patterns identified
- Constraints discovered
**Agent Execution Rules**:
- Each agent modifies ONLY its pre-allocated sections
- Use assigned TASK ID range exclusively
- No locking needed (exclusive sections)
- Include conflict_risk assessment for each task
**Completion**:
- Wait for all agents to complete
- Log generated artifacts for each agent
- Update Phase 2 todo status to `completed`
**User Confirmation** (unless autoMode):
```javascript
if (!autoMode) {
AskUserQuestion({
questions: [{
question: `已识别 ${subDomains.length} 个子领域:\n${subDomains.map((s, i) => `${i+1}. ${s.focus_area}: ${s.description}`).join('\n')}\n\n确认开始并行规划?`,
header: "Confirm Split",
multiSelect: false,
options: [
{ label: "开始规划", description: "启动并行sub-agent" },
{ label: "调整拆分", description: "修改子领域划分" },
{ label: "取消", description: "退出规划" }
]
}]
})
}
```
**Launch Parallel Agents** (single message, multiple Task calls):
```javascript
// Create agent directories
subDomains.forEach(sub => {
Bash(`mkdir -p ${sessionFolder}/agents/${sub.focus_area}`)
})
// Launch all agents in parallel
subDomains.map(sub =>
Task(
subagent_type="cli-lite-planning-agent",
run_in_background=false,
description=`Plan: ${sub.focus_area}`,
prompt=`
## Sub-Agent Context
**Focus Area**: ${sub.focus_area}
**Description**: ${sub.description}
**TASK ID Range**: ${sub.task_id_range[0]}-${sub.task_id_range[1]}
**Session**: ${sessionId}
## Dual Output Tasks
### Task 1: Generate Complete plan.json
Output: ${sessionFolder}/agents/${sub.focus_area}/plan.json
Schema: ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json
### Task 2: Sync Summary to plan-note.md
**Locate Your Sections**:
- Task Pool: "## 任务池 - ${toTitleCase(sub.focus_area)}"
- Evidence: "## 上下文证据 - ${toTitleCase(sub.focus_area)}"
**Task Summary Format**:
- Task header: ### TASK-${sub.task_id_range[0]}: Task Title [${sub.focus_area}]
- Fields: 状态, 复杂度, 依赖, 范围, 修改点, 冲突风险
**Evidence Format**:
- 相关文件, 现有模式, 约束
## Execution Steps
1. Generate complete plan.json
2. Extract summary from plan.json
3. Read ${sessionFolder}/plan-note.md
4. Locate and replace your task pool section
5. Locate and replace your evidence section
6. Write back plan-note.md
## Important
- Only modify your pre-allocated sections
- Use assigned TASK ID range: ${sub.task_id_range[0]}-${sub.task_id_range[1]}
`
)
)
```
### Phase 3: Conflict Detection
**Objective**: Analyze plan-note.md for conflicts across all agent contributions without merging files.
**Prerequisites**:
- Phase 2 completed successfully
- All agents have updated plan-note.md with their sections
- `{sessionFolder}/plan-note.md` contains all task and evidence sections
**Workflow Steps**:
1. **Update Progress Tracking**
- Set Phase 3 status to `in_progress`
2. **Parse Plan Note**
- Read `{sessionFolder}/plan-note.md`
- Extract YAML frontmatter (session metadata)
- Parse markdown sections by heading levels
- Identify all "任务池" sections
3. **Extract All Tasks**
- For each "任务池" section:
- Extract tasks matching pattern: `### TASK-{ID}: {Title} [{author}]`
- Parse task details: status, complexity, dependencies, modification points, conflict risk
- Consolidate into single task list
4. **Detect Conflicts**
**File Conflicts**:
- Group modification points by file:location
- Identify locations modified by multiple agents
- Record: severity=high, tasks involved, agents involved, suggested resolution
**Dependency Cycles**:
- Build dependency graph from task dependencies
- Detect cycles using depth-first search
- Record: severity=critical, cycle path, suggested resolution
**Strategy Conflicts**:
- Group tasks by files they modify
- Identify files with high/medium conflict risk from multiple agents
- Record: severity=medium, tasks involved, agents involved, suggested resolution
5. **Generate Conflict Artifacts**
**conflicts.json**:
- Write to `{sessionFolder}/conflicts.json`
- Include: detected_at, total_tasks, total_agents, conflicts array
- Each conflict: type, severity, tasks_involved, description, suggested_resolution
**Update plan-note.md**:
- Locate "## 冲突标记" section
- Generate markdown summary of conflicts
- Replace section content with conflict markdown
6. **Completion**
- Log conflict detection summary
- Display conflict details if any found
- Update Phase 3 todo status to `completed`
**Conflict Types**:
| Type | Severity | Detection Logic |
|------|----------|-----------------|
| file_conflict | high | Same file:location modified by multiple agents |
| dependency_cycle | critical | Circular dependencies in task graph |
| strategy_conflict | medium | Multiple high-risk tasks in same file from different agents |
**Conflict Detection Functions**:
**parsePlanNote(markdown)**:
- Input: Raw markdown content of plan-note.md
- Process:
- Extract YAML frontmatter between `---` markers
- Parse frontmatter as YAML to get session metadata
- Scan for heading patterns `^(#{2,})\s+(.+)$`
- Build sections array with: level, heading, start position, content
- Output: `{ frontmatter: object, sections: array }`
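A minimal sketch of `parsePlanNote` following the description above. The YAML handling here is deliberately naive (simple `key: value` lines only); a real implementation would use a proper YAML parser.

```javascript
// Parse plan-note.md into { frontmatter, sections } as described above.
function parsePlanNote(markdown) {
  const frontmatter = {};
  const fm = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (fm) {
    // Naive YAML: one "key: value" per line (assumption; use js-yaml in practice)
    for (const line of fm[1].split("\n")) {
      const m = line.match(/^(\w+):\s*(.+)$/);
      if (m) frontmatter[m[1]] = m[2];
    }
  }
  const sections = [];
  const re = /^(#{2,})\s+(.+)$/gm;       // heading pattern from the description
  let match;
  while ((match = re.exec(markdown)) !== null) {
    sections.push({ level: match[1].length, heading: match[2], start: match.index });
  }
  // Content of each section runs to the start of the next heading
  sections.forEach((s, i) => {
    const end = i + 1 < sections.length ? sections[i + 1].start : markdown.length;
    s.content = markdown.slice(s.start, end);
  });
  return { frontmatter, sections };
}
```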
**extractTasksFromSection(content, sectionHeading)**:
- Input: Section content text, section heading for attribution
- Process:
- Match task pattern: `### (TASK-\d+):\s+(.+?)\s+\[(.+?)\]`
- For each match: extract taskId, title, author
- Call parseTaskDetails for additional fields
- Output: Array of task objects with id, title, author, source_section, ...details
**parseTaskDetails(content)**:
- Input: Task content block
- Process:
- Extract fields via regex patterns:
- `**状态**:\s*(.+)` → status
- `**复杂度**:\s*(.+)` → complexity
- `**依赖**:\s*(.+)` → depends_on (extract TASK-\d+ references)
- `**冲突风险**:\s*(.+)` → conflict_risk
- Extract modification points: `-\s+\`([^`]+):\s*([^`]+)\`:\s*(.+)` → file, location, summary
- Output: Details object with status, complexity, depends_on[], modification_points[], conflict_risk
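A sketch of `parseTaskDetails` along the lines described above. Field labels follow the plan-note format; accepting both ASCII and fullwidth colons is an assumption.

```javascript
// Extract status/complexity/dependencies/risk and modification points
// from one task block, per the regex patterns described above.
function parseTaskDetails(content) {
  const field = (label) => {
    const m = content.match(new RegExp(`\\*\\*${label}\\*\\*[::]\\s*(.+)`));
    return m ? m[1].trim() : null;
  };
  const deps = field("依赖") || "";
  const details = {
    status: field("状态"),
    complexity: field("复杂度"),
    depends_on: deps.match(/TASK-\d+/g) || [],
    conflict_risk: field("冲突风险"),
    modification_points: [],
  };
  // Modification points: - `file: location`: summary
  const re = /-\s+`([^`]+?):\s*([^`]+)`:\s*(.+)/g;
  let m;
  while ((m = re.exec(content)) !== null) {
    details.modification_points.push({ file: m[1], location: m[2], summary: m[3].trim() });
  }
  return details;
}
```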
**detectFileConflicts(tasks)**:
- Input: All tasks array
- Process:
- Build fileMap: `{ "file:location": [{ task_id, task_title, source_agent, change }] }`
- For each location with multiple modifications from different agents:
- Create conflict with type='file_conflict', severity='high'
- Include: location, tasks_involved, agents_involved, modifications
- Suggested resolution: 'Coordinate modification order or merge changes'
- Output: Array of file conflict objects
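`detectFileConflicts` can be sketched as below, assuming each task carries the `author` and `modification_points` fields extracted earlier.

```javascript
// Group modification points by "file:location"; flag any location touched
// by more than one agent, as described above.
function detectFileConflicts(tasks) {
  const fileMap = {};
  for (const t of tasks) {
    for (const p of t.modification_points || []) {
      const key = `${p.file}:${p.location}`;
      (fileMap[key] = fileMap[key] || []).push({
        task_id: t.id, source_agent: t.author, change: p.summary,
      });
    }
  }
  return Object.entries(fileMap)
    .filter(([, mods]) => new Set(mods.map(m => m.source_agent)).size > 1)
    .map(([location, mods]) => ({
      type: "file_conflict",
      severity: "high",
      location,
      tasks_involved: mods.map(m => m.task_id),
      agents_involved: [...new Set(mods.map(m => m.source_agent))],
      suggested_resolution: "Coordinate modification order or merge changes",
    }));
}
```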
**detectDependencyCycles(tasks)**:
- Input: All tasks array
- Process:
- Build dependency graph: `{ taskId: [dependsOn_taskIds] }`
- Use DFS with recursion stack to detect cycles
- For each cycle found:
- Create conflict with type='dependency_cycle', severity='critical'
- Include: cycle path as tasks_involved
- Suggested resolution: 'Remove or reorganize dependencies'
- Output: Array of dependency cycle conflict objects
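The cycle detection described above maps to a standard depth-first search with a recursion stack; a minimal sketch:

```javascript
// Build a dependency graph and detect cycles via DFS with a recursion stack,
// emitting one critical conflict per cycle found.
function detectDependencyCycles(tasks) {
  const graph = Object.fromEntries(tasks.map(t => [t.id, t.depends_on || []]));
  const conflicts = [];
  const visited = new Set();
  const inStack = new Set();
  function dfs(node, path) {
    if (inStack.has(node)) {
      // Cycle: everything from the first occurrence of `node` back to itself
      conflicts.push({
        type: "dependency_cycle",
        severity: "critical",
        tasks_involved: [...path.slice(path.indexOf(node)), node],
        suggested_resolution: "Remove or reorganize dependencies",
      });
      return;
    }
    if (visited.has(node)) return;
    visited.add(node);
    inStack.add(node);
    for (const dep of graph[node] || []) dfs(dep, [...path, node]);
    inStack.delete(node);
  }
  for (const t of tasks) dfs(t.id, []);
  return conflicts;
}
```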
**detectStrategyConflicts(tasks)**:
- Input: All tasks array
- Process:
- Group tasks by files they modify
- For each file with tasks from multiple agents:
- Filter for high/medium conflict_risk tasks
- If >1 high-risk tasks from different agents:
- Create conflict with type='strategy_conflict', severity='medium'
- Include: file, tasks_involved, agents_involved
- Suggested resolution: 'Review approaches and align on single strategy'
- Output: Array of strategy conflict objects
**generateConflictMarkdown(conflicts)**:
- Input: Array of conflict objects
- Process:
- If empty: return '✅ 无冲突检测到'
- For each conflict:
- Generate header: `### CONFLICT-{padded_index}: {description}`
- Add fields: 严重程度, 涉及任务, 涉及Agent
- Add 问题详情 based on conflict type
- Add 建议解决方案
- Add 决策状态: [ ] 待解决
- Output: Markdown string for plan-note.md "## 冲突标记" section
**replaceSectionContent(markdown, sectionHeading, newContent)**:
- Input: Original markdown, target section heading, new content
- Process:
- Find section heading position via regex
- Find next heading of same or higher level
- Replace content between heading and next section
- If section not found: append at end
- Output: Updated markdown string
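`replaceSectionContent` can be sketched line-wise rather than via raw position arithmetic (an implementation choice, not mandated by the description above; the behavior is the same: replace content up to the next heading of equal or higher level, or append if the section is missing).

```javascript
// Replace the body of one section in a markdown document, preserving the
// heading and everything from the next same-or-higher-level heading onward.
function replaceSectionContent(markdown, sectionHeading, newContent) {
  const lines = markdown.split("\n");
  const level = (l) => (l.match(/^#+/) || [""])[0].length;
  const start = lines.findIndex(l => l.trim() === sectionHeading);
  if (start === -1) {
    // Section not found: append at end (defensive, per the description)
    return markdown + `\n${sectionHeading}\n${newContent}\n`;
  }
  const headingLevel = level(sectionHeading);
  let end = lines.length;
  for (let i = start + 1; i < lines.length; i++) {
    const lv = level(lines[i]);
    if (lv > 0 && lv <= headingLevel) { end = i; break; }
  }
  return [...lines.slice(0, start + 1), newContent, ...lines.slice(end)].join("\n");
}
```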
### Phase 4: Completion
**Objective**: Generate human-readable plan summary and finalize workflow.
**Prerequisites**:
- Phase 3 completed successfully
- Conflicts detected and documented in plan-note.md
- All artifacts generated
**Workflow Steps**:
1. **Update Progress Tracking**
- Set Phase 4 status to `in_progress`
2. **Read Final State**
- Read `{sessionFolder}/plan-note.md`
- Extract frontmatter metadata
- Load conflicts from Phase 3
3. **Generate plan.md**
- Create human-readable summary including:
- Session metadata
- Requirements understanding
- Sub-domain breakdown
- Task overview by focus area
- Conflict report
- Execution instructions
4. **Write Summary File**
- Write to `{sessionFolder}/plan.md`
5. **Display Completion Summary**
- Session statistics
- File structure
- Execution command
- Conflict status
6. **Update Todo**
- Set Phase 4 status to `completed`
**plan.md Structure**:
| Section | Content |
|---------|---------|
| Header | Session ID, created time, original requirement |
| Requirements | Copy from plan-note.md "## 需求理解" section |
| Sub-Domain Split | List each focus area with description and task ID range |
| Task Overview | Tasks grouped by focus area with complexity and dependencies |
| Conflict Report | Summary of detected conflicts or "无冲突" |
| Execution | Command to execute the plan |
**Required Function** (semantic description):
- **generateHumanReadablePlan**: Extract sections from plan-note.md and format as readable plan.md with session info, requirements, tasks, and conflicts
## Configuration
| Flag | Default | Description |
|------|---------|-------------|
| `--max-agents` | 5 | Maximum sub-agents to spawn |
| `-y, --yes` | false | Auto-confirm all decisions |
## Error Handling
| Error | Resolution |
|-------|------------|
| Understanding agent fails | Retry once, provide more context |
| Planning agent fails | Skip failed agent, continue with others |
| Section not found in plan-note | Agent creates section (defensive) |
| Conflict detection fails | Continue with empty conflicts |
## Best Practices
1. **Clear Requirements**: Detailed requirements → better sub-domain splitting
2. **Reference Latest Documentation**: Understanding agent should prioritize identifying and referencing latest docs (README, design docs, architecture guides)
3. **Ask When Uncertain**: When ambiguities or multiple interpretations exist, ask user for clarification instead of assuming
4. **Review Plan Note**: Check plan-note.md before execution
5. **Resolve Conflicts**: Address high/critical conflicts before execution
6. **Inspect Details**: Use agents/{focus-area}/plan.json for deep dive
---
**Now execute collaborative-plan-with-file for**: $ARGUMENTS

View File

@@ -658,15 +658,15 @@ Why is config value None during update?
| Hypothesis history | ❌ | ✅ hypotheses.json |
| Gemini validation | ❌ | ✅ At key decision points |
## Usage Recommendations
## Usage Recommendations (Requires User Confirmation)
Use `/workflow:debug-with-file` when:
**Use `Skill(skill="workflow:debug-with-file", args="\"bug description\"")` when:**
- Complex bugs requiring multiple investigation rounds
- Learning from debugging process is valuable
- Team needs to understand debugging rationale
- Bug might recur, documentation helps prevention
Use `/workflow:debug` when:
**Use `Skill(skill="ccw-debug", args="--mode cli \"issue\"")` when:**
- Simple, quick bugs
- One-off issues
- Documentation overhead not needed

View File

@@ -1,7 +1,7 @@
---
name: execute
description: Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking
argument-hint: "[-y|--yes] [--resume-session=\"session-id\"]"
argument-hint: "[-y|--yes] [--resume-session=\"session-id\"] [--with-commit]"
---
# Workflow Execute Command
@@ -22,6 +22,11 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
/workflow:execute --yes
/workflow:execute -y
/workflow:execute -y --resume-session="WFS-auth"
# With auto-commit (commit after each task completion)
/workflow:execute --with-commit
/workflow:execute -y --with-commit
/workflow:execute -y --with-commit --resume-session="WFS-auth"
```
## Auto Mode Defaults
@@ -30,9 +35,15 @@ When `--yes` or `-y` flag is used:
- **Session Selection**: Automatically selects the first (most recent) active session
- **Completion Choice**: Automatically completes session (runs `/workflow:session:complete --yes`)
When `--with-commit` flag is used:
- **Auto-Commit**: After each agent task completes, commit changes based on summary document
- **Commit Principle**: Minimal commits - only commit files modified by the completed task
- **Commit Message**: Generated from task summary with format: "feat/fix/refactor: {task-title} - {summary}"
**Flag Parsing**:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const withCommit = $ARGUMENTS.includes('--with-commit')
```
## Performance Optimization Strategy
@@ -95,6 +106,11 @@ Phase 4: Execution Strategy & Task Execution
├─ Mark task completed (update IMPL-*.json status)
│ # Quick fix: Update task status for ccw dashboard
│ # TS=$(date -Iseconds) && jq --arg ts "$TS" '.status="completed" | .status_history=(.status_history // [])+[{"from":"in_progress","to":"completed","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
├─ [with-commit] Commit changes based on summary (minimal principle)
│ # Read summary from .summaries/IMPL-X-summary.md
│ # Extract changed files from summary's "Files Modified" section
│ # Generate commit message: "feat/fix/refactor: {task-title} - {summary}"
│ # git add <changed-files> && git commit -m "<commit-message>"
└─ Advance to next task
Phase 5: Completion
@@ -273,7 +289,13 @@ while (TODO_LIST.md has pending tasks) {
4. **Launch Agent**: Invoke specialized agent with complete context including flow control steps
5. **Monitor Progress**: Track agent execution and handle errors without user interruption
6. **Collect Results**: Gather implementation results and outputs
7. **Continue Workflow**: Identify next pending task from TODO_LIST.md and repeat
7. **[with-commit] Auto-Commit**: If `--with-commit` flag enabled, commit changes based on summary
- Read summary from `.summaries/{task-id}-summary.md`
- Extract changed files from summary's "Files Modified" section
- Determine commit type from `meta.type` (feature→feat, bugfix→fix, refactor→refactor)
- Generate commit message: "{type}: {task-title} - {summary-first-line}"
- Commit only modified files (minimal principle): `git add <files> && git commit -m "<message>"`
8. **Continue Workflow**: Identify next pending task from TODO_LIST.md and repeat
**Note**: TODO_LIST.md updates are handled by agents (e.g., code-developer.md), not by the orchestrator.
@@ -296,7 +318,7 @@ const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
// Auto mode: Complete session automatically
console.log(`[--yes] Auto-selecting: Complete Session`)
SlashCommand("/workflow:session:complete --yes")
Skill(skill="workflow:session:complete", args="--yes")
} else {
// Interactive mode: Ask user
AskUserQuestion({
@@ -546,3 +568,26 @@ meta.agent missing → Infer from meta.type:
- **Dependency Validation**: Check all depends_on references exist
- **Context Verification**: Ensure all required context is available
## Auto-Commit Mode (--with-commit)
**Behavior**: After each agent task completes, automatically commit its changes based on the summary document.
**Minimal Principle**: Only commit files modified by the completed task.
**Commit Message Format**: `{type}: {task-title} - {summary}`
**Type Mapping** (from `meta.type`):
- `feature` → `feat` | `bugfix` → `fix` | `refactor` → `refactor`
- `test-gen` → `test` | `docs` → `docs` | `review` → `chore`
**Implementation**:
```bash
# 1. Read summary from .summaries/{task-id}-summary.md
# 2. Extract files from "Files Modified" section
# 3. Commit: git add <files> && git commit -m "{type}: {title} - {summary}"
```
**Error Handling**: Skip commit on no changes/missing summary, log errors, continue workflow.
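The extraction and type-mapping steps above can be sketched in shell. This is a minimal sketch: the summary path and the exact `## Files Modified` heading are assumptions taken from this document, not a fixed contract.

```shell
#!/usr/bin/env sh
# Map meta.type to a conventional-commit prefix (unknown types fall back to chore).
commit_type() {
  case "$1" in
    feature) echo "feat" ;;
    bugfix) echo "fix" ;;
    refactor) echo "refactor" ;;
    test-gen) echo "test" ;;
    docs) echo "docs" ;;
    *) echo "chore" ;;
  esac
}

# Extract bullet entries under "## Files Modified" until the next "## " heading.
modified_files() {
  awk '/^## Files Modified/ {found=1; next}
       /^## / {found=0}
       found && /^- / {sub(/^- /, ""); print}' "$1"
}
```

The orchestrator would then pass `modified_files` output to `git add` and build the message as `"$(commit_type "$type"): $title - $summary_first_line"`, honoring the minimal principle by never using `git add -A`.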


@@ -0,0 +1,408 @@
---
name: init-guidelines
description: Interactive wizard to fill project-guidelines.json based on project analysis
argument-hint: "[--reset]"
examples:
- /workflow:init-guidelines
- /workflow:init-guidelines --reset
---
# Workflow Init Guidelines Command (/workflow:init-guidelines)
## Overview
Interactive multi-round wizard that analyzes the current project (via `project-tech.json`) and asks targeted questions to populate `.workflow/project-guidelines.json` with coding conventions, constraints, and quality rules.
**Design Principle**: Questions are dynamically generated based on the project's tech stack, architecture, and patterns — not generic boilerplate.
**Note**: This command may be called by `/workflow:init` after initialization. Upon completion, return to the calling workflow if applicable.
## Usage
```bash
/workflow:init-guidelines # Fill guidelines interactively (skip if already populated)
/workflow:init-guidelines --reset # Reset and re-fill guidelines from scratch
```
## Execution Process
```
Input Parsing:
└─ Parse --reset flag → reset = true | false
Step 1: Check Prerequisites
├─ project-tech.json must exist (run /workflow:init first)
├─ project-guidelines.json: check if populated or scaffold-only
└─ If populated + no --reset → Ask: "Guidelines already exist. Overwrite or append?"
Step 2: Load Project Context
└─ Read project-tech.json → extract tech stack, architecture, patterns
Step 3: Multi-Round Interactive Questionnaire
├─ Round 1: Coding Conventions (coding_style, naming_patterns)
├─ Round 2: File & Documentation Conventions (file_structure, documentation)
├─ Round 3: Architecture & Tech Constraints (architecture, tech_stack)
├─ Round 4: Performance & Security Constraints (performance, security)
└─ Round 5: Quality Rules (quality_rules)
Step 4: Write project-guidelines.json
Step 5: Display Summary
```
## Implementation
### Step 1: Check Prerequisites
```bash
bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
bash(test -f .workflow/project-guidelines.json && echo "GUIDELINES_EXISTS" || echo "GUIDELINES_NOT_FOUND")
```
**If TECH_NOT_FOUND**: Exit with message
```
Project tech analysis not found. Run /workflow:init first.
```
**Parse --reset flag**:
```javascript
const reset = $ARGUMENTS.includes('--reset')
```
**If GUIDELINES_EXISTS and not --reset**: Check if guidelines are populated (not just scaffold)
```javascript
const guidelines = JSON.parse(Read('.workflow/project-guidelines.json'))
const isPopulated =
guidelines.conventions.coding_style.length > 0 ||
guidelines.conventions.naming_patterns.length > 0 ||
guidelines.constraints.architecture.length > 0 ||
guidelines.constraints.tech_stack.length > 0
if (isPopulated) {
AskUserQuestion({
questions: [{
question: "Project guidelines already contain entries. How would you like to proceed?",
header: "Mode",
multiSelect: false,
options: [
{ label: "Append", description: "Keep existing entries and add new ones from the wizard" },
{ label: "Reset", description: "Clear all existing entries and start fresh" },
{ label: "Cancel", description: "Exit without changes" }
]
}]
})
// If Cancel → exit
// If Reset → clear all arrays before proceeding
// If Append → keep existing, wizard adds to them
}
```
### Step 2: Load Project Context
```javascript
const projectTech = JSON.parse(Read('.workflow/project-tech.json'))
// Extract key info for generating smart questions
const languages = projectTech.technology_analysis?.technology_stack?.languages
|| projectTech.overview?.technology_stack?.languages || []
const primaryLang = languages.find(l => l.primary)?.name || languages[0]?.name || 'Unknown'
const frameworks = projectTech.technology_analysis?.technology_stack?.frameworks
|| projectTech.overview?.technology_stack?.frameworks || []
const testFrameworks = projectTech.technology_analysis?.technology_stack?.test_frameworks
|| projectTech.overview?.technology_stack?.test_frameworks || []
const archStyle = projectTech.technology_analysis?.architecture?.style
|| projectTech.overview?.architecture?.style || 'Unknown'
const archPatterns = projectTech.technology_analysis?.architecture?.patterns
|| projectTech.overview?.architecture?.patterns || []
const buildTools = projectTech.technology_analysis?.technology_stack?.build_tools
|| projectTech.overview?.technology_stack?.build_tools || []
```
### Step 3: Multi-Round Interactive Questionnaire
Each round uses `AskUserQuestion` with project-aware options. The user can always select "Other" to provide custom input.
**⚠️ CRITICAL**: After each round, collect the user's answers and convert them into guideline entries. Do NOT batch all rounds — process each round's answers before proceeding to the next.
---
#### Round 1: Coding Conventions
Generate options dynamically based on detected language/framework:
```javascript
// Build language-specific coding style options
const codingStyleOptions = []
if (['TypeScript', 'JavaScript'].includes(primaryLang)) {
codingStyleOptions.push(
{ label: "Strict TypeScript", description: "Use strict mode, no 'any' type, explicit return types for public APIs" },
{ label: "Functional style", description: "Prefer pure functions, immutability, avoid class-based patterns where possible" },
{ label: "Const over let", description: "Always use const; only use let when reassignment is truly needed" }
)
} else if (primaryLang === 'Python') {
codingStyleOptions.push(
{ label: "Type hints", description: "Use type hints for all function signatures and class attributes" },
{ label: "Functional style", description: "Prefer pure functions, list comprehensions, avoid mutable state" },
{ label: "PEP 8 strict", description: "Strict PEP 8 compliance with max line length 88 (Black formatter)" }
)
} else if (primaryLang === 'Go') {
codingStyleOptions.push(
{ label: "Error wrapping", description: "Always wrap errors with context using fmt.Errorf with %w" },
{ label: "Interface first", description: "Define interfaces at the consumer side, not the provider" },
{ label: "Table-driven tests", description: "Use table-driven test pattern for all unit tests" }
)
}
// Add universal options
codingStyleOptions.push(
{ label: "Early returns", description: "Prefer early returns / guard clauses over deep nesting" }
)
AskUserQuestion({
questions: [
{
question: `Your project uses ${primaryLang}. Which coding style conventions do you follow?`,
header: "Coding Style",
multiSelect: true,
options: codingStyleOptions.slice(0, 4) // Max 4 options
},
{
question: `What naming conventions does your ${primaryLang} project use?`,
header: "Naming",
multiSelect: true,
options: [
{ label: "camelCase variables", description: "Variables and functions use camelCase (e.g., getUserName)" },
{ label: "PascalCase types", description: "Classes, interfaces, type aliases use PascalCase (e.g., UserService)" },
{ label: "UPPER_SNAKE constants", description: "Constants use UPPER_SNAKE_CASE (e.g., MAX_RETRIES)" },
{ label: "Prefix interfaces", description: "Prefix interfaces with 'I' (e.g., IUserService)" }
]
}
]
})
```
**Process Round 1 answers** → add to `conventions.coding_style` and `conventions.naming_patterns` arrays.
---
#### Round 2: File Structure & Documentation
```javascript
AskUserQuestion({
questions: [
{
question: `Your project has a ${archStyle} architecture. What file organization rules apply?`,
header: "File Structure",
multiSelect: true,
options: [
{ label: "Co-located tests", description: "Test files live next to source files (e.g., foo.ts + foo.test.ts)" },
{ label: "Separate test dir", description: "Tests in a dedicated __tests__ or tests/ directory" },
{ label: "One export per file", description: "Each file exports a single main component/class/function" },
{ label: "Index barrels", description: "Use index.ts barrel files for clean imports from directories" }
]
},
{
question: "What documentation standards does your project follow?",
header: "Documentation",
multiSelect: true,
options: [
{ label: "JSDoc/docstring public APIs", description: "All public functions and classes must have JSDoc/docstrings" },
{ label: "README per module", description: "Each major module/package has its own README" },
{ label: "Inline comments for why", description: "Comments explain 'why', not 'what' — code should be self-documenting" },
{ label: "No comment requirement", description: "Code should be self-explanatory; comments only for non-obvious logic" }
]
}
]
})
```
**Process Round 2 answers** → add to `conventions.file_structure` and `conventions.documentation`.
---
#### Round 3: Architecture & Tech Stack Constraints
```javascript
// Build architecture-specific options
const archOptions = []
if (archStyle.toLowerCase().includes('monolith')) {
archOptions.push(
{ label: "No circular deps", description: "Modules must not have circular dependencies" },
{ label: "Layer boundaries", description: "Strict layer separation: UI → Service → Data (no skipping layers)" }
)
} else if (archStyle.toLowerCase().includes('microservice')) {
archOptions.push(
{ label: "Service isolation", description: "Services must not share databases or internal state" },
{ label: "API contracts", description: "All inter-service communication through versioned API contracts" }
)
}
archOptions.push(
{ label: "Stateless services", description: "Service/business logic must be stateless (state in DB/cache only)" },
{ label: "Dependency injection", description: "Use dependency injection for testability, no hardcoded dependencies" }
)
AskUserQuestion({
questions: [
{
question: `Your ${archStyle} architecture uses ${archPatterns.join(', ') || 'various'} patterns. What architecture constraints apply?`,
header: "Architecture",
multiSelect: true,
options: archOptions.slice(0, 4)
},
{
      question: `Tech stack: ${frameworks.join(', ') || 'not detected'}. What technology constraints apply?`,
header: "Tech Stack",
multiSelect: true,
options: [
{ label: "No new deps without review", description: "Adding new dependencies requires explicit justification and review" },
{ label: "Pin dependency versions", description: "All dependencies must use exact versions, not ranges" },
{ label: "Prefer native APIs", description: "Use built-in/native APIs over third-party libraries when possible" },
{ label: "Framework conventions", description: `Follow official ${frameworks[0] || 'framework'} conventions and best practices` }
]
}
]
})
```
**Process Round 3 answers** → add to `constraints.architecture` and `constraints.tech_stack`.
---
#### Round 4: Performance & Security Constraints
```javascript
AskUserQuestion({
questions: [
{
question: "What performance requirements does your project have?",
header: "Performance",
multiSelect: true,
options: [
{ label: "API response time", description: "API endpoints must respond within 200ms (p95)" },
{ label: "Bundle size limit", description: "Frontend bundle size must stay under 500KB gzipped" },
{ label: "Lazy loading", description: "Large modules/routes must use lazy loading / code splitting" },
{ label: "No N+1 queries", description: "Database access must avoid N+1 query patterns" }
]
},
{
question: "What security requirements does your project enforce?",
header: "Security",
multiSelect: true,
options: [
{ label: "Input sanitization", description: "All user input must be validated and sanitized before use" },
{ label: "No secrets in code", description: "No API keys, passwords, or tokens in source code — use env vars" },
{ label: "Auth on all endpoints", description: "All API endpoints require authentication unless explicitly public" },
{ label: "Parameterized queries", description: "All database queries must use parameterized/prepared statements" }
]
}
]
})
```
**Process Round 4 answers** → add to `constraints.performance` and `constraints.security`.
---
#### Round 5: Quality Rules
```javascript
AskUserQuestion({
questions: [
{
question: `Testing with ${testFrameworks.join(', ') || 'your test framework'}. What quality rules apply?`,
header: "Quality",
multiSelect: true,
options: [
{ label: "Min test coverage", description: "Minimum 80% code coverage for new code; no merging below threshold" },
{ label: "No skipped tests", description: "Tests must not be skipped (.skip/.only) in committed code" },
{ label: "Lint must pass", description: "All code must pass linter checks before commit (enforced by pre-commit)" },
{ label: "Type check must pass", description: "Full type checking (tsc --noEmit) must pass with zero errors" }
]
}
]
})
```
**Process Round 5 answers** → add to `quality_rules` array as `{ rule, scope, enforced_by }` objects.
### Step 4: Write project-guidelines.json
```javascript
// Build the final guidelines object
const finalGuidelines = {
conventions: {
coding_style: existingCodingStyle.concat(newCodingStyle),
naming_patterns: existingNamingPatterns.concat(newNamingPatterns),
file_structure: existingFileStructure.concat(newFileStructure),
documentation: existingDocumentation.concat(newDocumentation)
},
constraints: {
architecture: existingArchitecture.concat(newArchitecture),
tech_stack: existingTechStack.concat(newTechStack),
performance: existingPerformance.concat(newPerformance),
security: existingSecurity.concat(newSecurity)
},
quality_rules: existingQualityRules.concat(newQualityRules),
learnings: existingLearnings, // Preserve existing learnings
_metadata: {
created_at: existingMetadata?.created_at || new Date().toISOString(),
version: "1.0.0",
last_updated: new Date().toISOString(),
updated_by: "workflow:init-guidelines"
}
}
Write('.workflow/project-guidelines.json', JSON.stringify(finalGuidelines, null, 2))
```
### Step 5: Display Summary
```javascript
const cs = finalGuidelines.conventions.coding_style.length
const np = finalGuidelines.conventions.naming_patterns.length
const fs = finalGuidelines.conventions.file_structure.length
const doc = finalGuidelines.conventions.documentation.length
const countConventions = cs + np + fs + doc
const ar = finalGuidelines.constraints.architecture.length
const ts = finalGuidelines.constraints.tech_stack.length
const pf = finalGuidelines.constraints.performance.length
const sc = finalGuidelines.constraints.security.length
const countConstraints = ar + ts + pf + sc
const countQuality = finalGuidelines.quality_rules.length
console.log(`
✓ Project guidelines configured
## Summary
- Conventions: ${countConventions} rules (coding: ${cs}, naming: ${np}, files: ${fs}, docs: ${doc})
- Constraints: ${countConstraints} rules (arch: ${ar}, tech: ${ts}, perf: ${pf}, security: ${sc})
- Quality rules: ${countQuality}
File: .workflow/project-guidelines.json
Next steps:
- Use /workflow:session:solidify to add individual rules later
- Guidelines will be auto-loaded by /workflow:plan for task generation
`)
```
## Answer Processing Rules
When converting user selections to guideline entries:
1. **Selected option** → Use the option's `description` as the guideline string (it's more precise than the label)
2. **"Other" with custom text** → Use the user's text directly as the guideline string
3. **Deduplication** → Skip entries that already exist in the guidelines (exact string match)
4. **Quality rules** → Convert to `{ rule: description, scope: "all", enforced_by: "code-review" }` format
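The four rules above can be sketched as a single conversion helper. This is illustrative only: the `selections`, `customText`, and `toGuidelineEntries` names are hypothetical and assume an answer shape like AskUserQuestion's options, which the spec does not pin down.

```javascript
// Convert AskUserQuestion selections into guideline entries (hypothetical shapes).
function toGuidelineEntries(selections, existing, { asQualityRule = false } = {}) {
  const entries = []
  for (const sel of selections) {
    // Rules 1-2: prefer the option description; "Other" carries the user's own text
    const text = sel.label === 'Other' ? sel.customText : sel.description
    // Rule 3: deduplicate by exact string match against existing and new entries
    if (existing.includes(text) || entries.includes(text)) continue
    entries.push(text)
  }
  // Rule 4: quality rules use the { rule, scope, enforced_by } object format
  return asQualityRule
    ? entries.map(rule => ({ rule, scope: 'all', enforced_by: 'code-review' }))
    : entries
}
```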
## Error Handling
- **No project-tech.json**: Exit with instruction to run `/workflow:init` first
- **User cancels mid-wizard**: Save whatever was collected so far (partial is better than nothing)
- **File write failure**: Report error, suggest manual edit
## Related Commands
- `/workflow:init` - Creates scaffold; optionally calls this command
- `/workflow:session:solidify` - Add individual rules one at a time


@@ -44,11 +44,16 @@ Analysis Flow:
│ └─ Write .workflow/project-tech.json
├─ Create guidelines scaffold (if not exists)
│ └─ Write .workflow/project-guidelines.json (empty structure)
├─ Display summary
└─ Ask about guidelines configuration
├─ If guidelines empty → Ask user: "Configure now?" or "Skip"
│ ├─ Configure now → Skill(skill="workflow:init-guidelines")
│ └─ Skip → Show next steps
└─ If guidelines populated → Show next steps only
Output:
├─ .workflow/project-tech.json (+ .backup if regenerate)
└─ .workflow/project-guidelines.json (scaffold if new)
└─ .workflow/project-guidelines.json (scaffold or configured)
```
## Implementation
@@ -210,11 +215,63 @@ Files created:
- Tech analysis: .workflow/project-tech.json
- Guidelines: .workflow/project-guidelines.json ${guidelinesExists ? '(scaffold)' : ''}
${regenerate ? '- Backup: .workflow/project-tech.json.backup' : ''}
`);
```
### Step 5: Ask About Guidelines Configuration
After displaying the summary, ask the user if they want to configure project guidelines interactively.
```javascript
// Check if guidelines are just a scaffold (empty) or already populated
const guidelines = JSON.parse(Read('.workflow/project-guidelines.json'));
const isGuidelinesPopulated =
guidelines.conventions.coding_style.length > 0 ||
guidelines.conventions.naming_patterns.length > 0 ||
guidelines.constraints.architecture.length > 0 ||
guidelines.constraints.security.length > 0;
// Only ask if guidelines are not yet populated
if (!isGuidelinesPopulated) {
const userChoice = AskUserQuestion({
questions: [{
question: "Would you like to configure project guidelines now? The wizard will ask targeted questions based on your tech stack.",
header: "Guidelines",
multiSelect: false,
options: [
{
label: "Configure now (Recommended)",
description: "Interactive wizard to set up coding conventions, constraints, and quality rules"
},
{
label: "Skip for now",
description: "You can run /workflow:init-guidelines later or use /workflow:session:solidify to add rules individually"
}
]
}]
});
if (userChoice.answers["Guidelines"] === "Configure now (Recommended)") {
console.log("\n🔧 Starting guidelines configuration wizard...\n");
Skill(skill="workflow:init-guidelines");
} else {
console.log(`
Next steps:
- Use /workflow:session:solidify to add project guidelines
- Use /workflow:init-guidelines to configure guidelines interactively
- Use /workflow:session:solidify to add individual rules
- Use /workflow:plan to start planning
`);
}
} else {
console.log(`
Guidelines already configured (${guidelines.conventions.coding_style.length + guidelines.constraints.architecture.length}+ rules).
Next steps:
- Use /workflow:init-guidelines --reset to reconfigure
- Use /workflow:session:solidify to add individual rules
- Use /workflow:plan to start planning
`);
}
```
## Error Handling
@@ -222,3 +279,10 @@ Next steps:
**Agent Failure**: Fall back to basic initialization with placeholder overview
**Missing Tools**: Agent uses Qwen fallback or bash-only
**Empty Project**: Create minimal JSON with all gaps identified
## Related Commands
- `/workflow:init-guidelines` - Interactive wizard to configure project guidelines (called after init)
- `/workflow:session:solidify` - Add individual rules/constraints one at a time
- `/workflow:plan` - Start planning with initialized project context
- `/workflow:status --project` - View project state and guidelines


@@ -2,7 +2,7 @@
name: lite-fix
description: Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents
argument-hint: "[-y|--yes] [--hotfix] \"bug description or issue reference\""
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*)
allowed-tools: TodoWrite(*), Task(*), Skill(*), AskUserQuestion(*)
---
# Workflow Lite-Fix Command (/workflow:lite-fix)
@@ -37,6 +37,23 @@ Intelligent lightweight bug fixing command with dynamic workflow adaptation base
/workflow:lite-fix -y --hotfix "Production database connection failure"  # Auto + hotfix mode
```
## Output Artifacts
| Artifact | Description |
|----------|-------------|
| `diagnosis-{angle}.json` | Per-angle diagnosis results (1-4 files based on severity) |
| `diagnoses-manifest.json` | Index of all diagnosis files |
| `planning-context.md` | Evidence paths + synthesized understanding |
| `fix-plan.json` | Structured fix plan (fix-plan-json-schema.json) |
**Output Directory**: `.workflow/.lite-fix/{bug-slug}-{YYYY-MM-DD}/`
**Agent Usage**:
- Low/Medium severity → Direct Claude planning (no agent)
- High/Critical severity → `cli-lite-planning-agent` generates `fix-plan.json`
**Schema Reference**: `~/.claude/workflows/cli-templates/schemas/fix-plan-json-schema.json`
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
@@ -86,7 +103,7 @@ Phase 4: Confirmation & Selection
Phase 5: Execute
|- Build executionContext (fix-plan + diagnoses + clarifications + selections)
+- SlashCommand("/workflow:lite-execute --in-memory --mode bugfix")
+- Skill(skill="workflow:lite-execute", args="--in-memory --mode bugfix")
```
## Implementation
@@ -192,11 +209,15 @@ const diagnosisTasks = selectedAngles.map((angle, index) =>
## Task Objective
Execute **${angle}** diagnosis for bug root cause analysis. Analyze codebase from this specific angle to discover root cause, affected paths, and fix hints.
## Output Location
**Session Folder**: ${sessionFolder}
**Output File**: ${sessionFolder}/diagnosis-${angle}.json
## Assigned Context
- **Diagnosis Angle**: ${angle}
- **Bug Description**: ${bug_description}
- **Diagnosis Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/diagnosis-${angle}.json
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
@@ -225,8 +246,6 @@ Execute **${angle}** diagnosis for bug root cause analysis. Analyze codebase fro
## Expected Output
**File**: ${sessionFolder}/diagnosis-${angle}.json
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly
**Required Fields** (all ${angle} focused):
@@ -255,9 +274,9 @@ Execute **${angle}** diagnosis for bug root cause analysis. Analyze codebase fro
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended
## Output
Write: ${sessionFolder}/diagnosis-${angle}.json
Return: 2-3 sentence summary of ${angle} diagnosis findings
## Execution
**Write**: \`${sessionFolder}/diagnosis-${angle}.json\`
**Return**: 2-3 sentence summary of ${angle} diagnosis findings
`
)
)
@@ -493,6 +512,13 @@ Task(
prompt=`
Generate fix plan and write fix-plan.json.
## Output Location
**Session Folder**: ${sessionFolder}
**Output Files**:
- ${sessionFolder}/planning-context.md (evidence + understanding)
- ${sessionFolder}/fix-plan.json (fix plan)
## Output Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/fix-plan-json-schema.json (get schema reference before generating plan)
@@ -588,8 +614,9 @@ Generate fix-plan.json with:
- For High/Critical: REQUIRED new fields (rationale, verification, risks, code_skeleton, data_flow, design_decisions)
- Each task MUST have rationale (why this fix), verification (how to verify success), and risks (potential issues)
5. Parse output and structure fix-plan
6. Write JSON: Write('${sessionFolder}/fix-plan.json', jsonContent)
7. Return brief completion summary
6. **Write**: \`${sessionFolder}/planning-context.md\` (evidence paths + understanding)
7. **Write**: \`${sessionFolder}/fix-plan.json\`
8. Return brief completion summary
## Output Format for CLI
Include these sections in your fix-plan output:
@@ -740,29 +767,31 @@ executionContext = {
**Step 5.2: Execute**
```javascript
SlashCommand(command="/workflow:lite-execute --in-memory --mode bugfix")
Skill(skill="workflow:lite-execute", args="--in-memory --mode bugfix")
```
## Session Folder Structure
```
.workflow/.lite-fix/{bug-slug}-{YYYY-MM-DD}/
|- diagnosis-{angle1}.json # Diagnosis angle 1
|- diagnosis-{angle2}.json # Diagnosis angle 2
|- diagnosis-{angle3}.json # Diagnosis angle 3 (if applicable)
|- diagnosis-{angle4}.json # Diagnosis angle 4 (if applicable)
|- diagnoses-manifest.json # Diagnosis index
+- fix-plan.json # Fix plan
├── diagnosis-{angle1}.json # Diagnosis angle 1
├── diagnosis-{angle2}.json # Diagnosis angle 2
├── diagnosis-{angle3}.json # Diagnosis angle 3 (if applicable)
├── diagnosis-{angle4}.json # Diagnosis angle 4 (if applicable)
├── diagnoses-manifest.json # Diagnosis index
├── planning-context.md # Evidence + understanding
└── fix-plan.json # Fix plan
```
**Example**:
```
.workflow/.lite-fix/user-avatar-upload-fails-413-2025-11-25-14-30-25/
|- diagnosis-error-handling.json
|- diagnosis-dataflow.json
|- diagnosis-validation.json
|- diagnoses-manifest.json
+- fix-plan.json
.workflow/.lite-fix/user-avatar-upload-fails-413-2025-11-25/
├── diagnosis-error-handling.json
├── diagnosis-dataflow.json
├── diagnosis-validation.json
├── diagnoses-manifest.json
├── planning-context.md
└── fix-plan.json
```
## Error Handling


@@ -1,465 +0,0 @@
---
name: workflow:lite-lite-lite
description: Ultra-lightweight multi-tool analysis and direct execution. No artifacts for simple tasks; auto-creates planning docs in .workflow/.scratchpad/ for complex tasks. Auto tool selection based on task analysis, user-driven iteration via AskUser.
argument-hint: "[-y|--yes] <task description>"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), Write(*), mcp__ace-tool__search_context(*), mcp__ccw-tools__write_file(*)
---
## Auto Mode
When `--yes` or `-y`: Skip clarification questions, auto-select tools, execute directly with recommended settings.
# Ultra-Lite Multi-Tool Workflow
## Quick Start
```bash
/workflow:lite-lite-lite "Fix the login bug"
/workflow:lite-lite-lite "Refactor payment module for multi-gateway support"
```
**Core Philosophy**: Minimal friction, maximum velocity. Simple tasks = no artifacts. Complex tasks = lightweight planning doc in `.workflow/.scratchpad/`.
## Overview
**Complexity-aware workflow**: Clarify → Assess Complexity → Select Tools → Multi-Mode Analysis → Decision → Direct Execution
**vs multi-cli-plan**: No IMPL_PLAN.md, plan.json, or synthesis.json; state stays in memory, or in a lightweight scratchpad doc for complex tasks.
## Execution Flow
```
Phase 1: Clarify Requirements → AskUser for missing details
Phase 1.5: Assess Complexity → Determine if planning doc needed
Phase 2: Select Tools (CLI → Mode → Agent) → 3-step selection
Phase 3: Multi-Mode Analysis → Execute with --resume chaining
Phase 4: User Decision → Execute / Refine / Change / Cancel
Phase 5: Direct Execution → No plan files (simple) or scratchpad doc (complex)
```
## Phase 1: Clarify Requirements
```javascript
const taskDescription = $ARGUMENTS
if (taskDescription.length < 20 || isAmbiguous(taskDescription)) {
AskUserQuestion({
questions: [{
question: "Please provide more details: target files/modules, expected behavior, constraints?",
header: "Details",
options: [
{ label: "I'll provide more", description: "Add more context" },
{ label: "Continue analysis", description: "Let tools explore autonomously" }
],
multiSelect: false
}]
})
}
// Optional: Quick ACE Context for complex tasks
mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: `${taskDescription} implementation patterns`
})
```
## Phase 1.5: Assess Complexity
| Level | Creates Plan Doc | Trigger Keywords |
|-------|------------------|------------------|
| **simple** | ❌ | (default) |
| **moderate** | ✅ | module, system, service, integration, multiple |
| **complex** | ✅ | refactor, migrate, security, auth, payment, database |
```javascript
// Complexity detection (after ACE query)
const isComplex = /refactor|migrate|security|auth|payment|database/i.test(taskDescription)
const isModerate = /module|system|service|integration|multiple/i.test(taskDescription) || aceContext?.relevant_files?.length > 2
if (isComplex || isModerate) {
const planPath = `.workflow/.scratchpad/lite3-${taskSlug}-${dateStr}.md`
// Create planning doc with: Task, Status, Complexity, Analysis Summary, Execution Plan, Progress Log
}
```
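A minimal sketch of the `taskSlug` and `dateStr` construction assumed by the `planPath` template above. The exact slug and date formats are assumptions; this document only shows the template string.

```javascript
// Reduce a free-form task description to a filesystem-safe slug (hypothetical format).
function taskSlug(description, maxLen = 40) {
  return description
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')   // collapse non-alphanumeric runs to single dashes
    .replace(/^-+|-+$/g, '')       // trim leading/trailing dashes
    .slice(0, maxLen)
}

// Build the scratchpad planning-doc path from the template in this spec.
function planPathFor(description, date = new Date()) {
  const dateStr = date.toISOString().slice(0, 10) // YYYY-MM-DD
  return `.workflow/.scratchpad/lite3-${taskSlug(description)}-${dateStr}.md`
}
```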
## Phase 2: Select Tools
### Tool Definitions
**CLI Tools** (from cli-tools.json):
```javascript
const cliConfig = JSON.parse(Read("~/.claude/cli-tools.json"))
const cliTools = Object.entries(cliConfig.tools)
.filter(([_, config]) => config.enabled)
.map(([name, config]) => ({
name, type: 'cli',
tags: config.tags || [],
model: config.primaryModel,
toolType: config.type // builtin, cli-wrapper, api-endpoint
}))
```
**Sub Agents**:
| Agent | Strengths | canExecute |
|-------|-----------|------------|
| **code-developer** | Code implementation, test writing | ✅ |
| **Explore** | Fast code exploration, pattern discovery | ❌ |
| **cli-explore-agent** | Dual-source analysis (Bash+CLI) | ❌ |
| **cli-discuss-agent** | Multi-CLI collaboration, cross-verification | ❌ |
| **debug-explore-agent** | Hypothesis-driven debugging | ❌ |
| **context-search-agent** | Multi-layer file discovery, dependency analysis | ❌ |
| **test-fix-agent** | Test execution, failure diagnosis, code fixing | ✅ |
| **universal-executor** | General execution, multi-domain adaptation | ✅ |
**Analysis Modes**:
| Mode | Pattern | Use Case | minCLIs |
|------|---------|----------|---------|
| **Parallel** | `A \|\| B \|\| C → Aggregate` | Fast multi-perspective | 1+ |
| **Sequential** | `A → B(resume) → C(resume)` | Incremental deepening | 2+ |
| **Collaborative** | `A → B → A → B → Synthesize` | Multi-round refinement | 2+ |
| **Debate** | `A(propose) → B(challenge) → A(defend)` | Adversarial validation | 2 |
| **Challenge** | `A(analyze) → B(challenge)` | Find flaws and risks | 2 |
### Three-Step Selection Flow
```javascript
// Step 1: Select CLIs (multiSelect)
AskUserQuestion({
questions: [{
question: "Select CLI tools for analysis (1-3 for collaboration modes)",
header: "CLI Tools",
options: cliTools.map(cli => ({
label: cli.name,
description: cli.tags.length > 0 ? cli.tags.join(', ') : cli.model || 'general'
})),
multiSelect: true
}]
})
// Step 2: Select Mode (filtered by CLI count)
const availableModes = analysisModes.filter(m => selectedCLIs.length >= m.minCLIs)
AskUserQuestion({
questions: [{
question: "Select analysis mode",
header: "Mode",
options: availableModes.map(m => ({
label: m.label,
description: `${m.description} [${m.pattern}]`
})),
multiSelect: false
}]
})
// Step 3: Select Agent for execution
AskUserQuestion({
questions: [{
question: "Select Sub Agent for execution",
header: "Agent",
options: agents.map(a => ({ label: a.name, description: a.strength })),
multiSelect: false
}]
})
// Confirm selection
AskUserQuestion({
questions: [{
question: "Confirm selection?",
header: "Confirm",
options: [
{ label: "Confirm and continue", description: `${selectedMode.label} with ${selectedCLIs.length} CLIs` },
{ label: "Re-select CLIs", description: "Choose different CLI tools" },
{ label: "Re-select Mode", description: "Choose different analysis mode" },
{ label: "Re-select Agent", description: "Choose different Sub Agent" }
],
multiSelect: false
}]
})
```
## Phase 3: Multi-Mode Analysis
### Universal CLI Prompt Template
```javascript
// Unified prompt builder - used by all modes
function buildPrompt({ purpose, tasks, expected, rules, taskDescription }) {
return `
PURPOSE: ${purpose}: ${taskDescription}
TASK: ${tasks.map(t => `${t}`).join(' ')}
MODE: analysis
CONTEXT: @**/*
EXPECTED: ${expected}
CONSTRAINTS: ${rules}
`
}
// Execute CLI with prompt
function execCLI(cli, prompt, options = {}) {
const { resume, background = false } = options
const resumeFlag = resume ? `--resume ${resume}` : ''
return Bash({
command: `ccw cli -p "${prompt}" --tool ${cli.name} --mode analysis ${resumeFlag}`,
run_in_background: background
})
}
```
### Prompt Presets by Role
| Role | PURPOSE | TASKS | EXPECTED | RULES |
|------|---------|-------|----------|-------|
| **initial** | Initial analysis | Identify files, Analyze approach, List changes | Root cause, files, changes, risks | Focus on actionable insights |
| **extend** | Build on previous | Review previous, Extend, Add insights | Extended analysis building on findings | Build incrementally, avoid repetition |
| **synthesize** | Refine and synthesize | Review, Identify gaps, Synthesize | Refined synthesis with new perspectives | Add value not repetition |
| **propose** | Propose comprehensive analysis | Analyze thoroughly, Propose solution, State assumptions | Well-reasoned proposal with trade-offs | Be clear about assumptions |
| **challenge** | Challenge and stress-test | Identify weaknesses, Question assumptions, Suggest alternatives | Critique with counter-arguments | Be adversarial but constructive |
| **defend** | Respond to challenges | Address challenges, Defend valid aspects, Propose refined solution | Refined proposal incorporating feedback | Be open to criticism, synthesize |
| **criticize** | Find flaws ruthlessly | Find logical flaws, Identify edge cases, Rate criticisms | Critique with severity: [CRITICAL]/[HIGH]/[MEDIUM]/[LOW] | Be ruthlessly critical |
```javascript
const PROMPTS = {
initial: { purpose: 'Initial analysis', tasks: ['Identify affected files', 'Analyze implementation approach', 'List specific changes'], expected: 'Root cause, files to modify, key changes, risks', rules: 'Focus on actionable insights' },
extend: { purpose: 'Build on previous analysis', tasks: ['Review previous findings', 'Extend analysis', 'Add new insights'], expected: 'Extended analysis building on previous', rules: 'Build incrementally, avoid repetition' },
synthesize: { purpose: 'Refine and synthesize', tasks: ['Review previous', 'Identify gaps', 'Add insights', 'Synthesize findings'], expected: 'Refined synthesis with new perspectives', rules: 'Build collaboratively, add value' },
propose: { purpose: 'Propose comprehensive analysis', tasks: ['Analyze thoroughly', 'Propose solution', 'State assumptions clearly'], expected: 'Well-reasoned proposal with trade-offs', rules: 'Be clear about assumptions' },
challenge: { purpose: 'Challenge and stress-test', tasks: ['Identify weaknesses', 'Question assumptions', 'Suggest alternatives', 'Highlight overlooked risks'], expected: 'Constructive critique with counter-arguments', rules: 'Be adversarial but constructive' },
defend: { purpose: 'Respond to challenges', tasks: ['Address each challenge', 'Defend valid aspects', 'Acknowledge valid criticisms', 'Propose refined solution'], expected: 'Refined proposal incorporating alternatives', rules: 'Be open to criticism, synthesize best ideas' },
criticize: { purpose: 'Stress-test and find weaknesses', tasks: ['Find logical flaws', 'Identify missed edge cases', 'Propose alternatives', 'Rate criticisms (Critical/High/Medium/Low)'], expected: 'Detailed critique with severity ratings', rules: 'Be ruthlessly critical, find every flaw' }
}
```
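For illustration, composing a preset with `buildPrompt` yields a flat prompt string. A minimal self-contained sketch (re-stating simplified versions of the definitions above; the exact whitespace of the real template may differ):

```javascript
// Simplified re-statement of buildPrompt from the template above
function buildPrompt({ purpose, tasks, expected, rules, taskDescription }) {
  return [
    `PURPOSE: ${purpose}: ${taskDescription}`,
    `TASK: ${tasks.join(' ')}`,
    `MODE: analysis`,
    `CONTEXT: @**/*`,
    `EXPECTED: ${expected}`,
    `CONSTRAINTS: ${rules}`
  ].join('\n')
}

const PROMPTS = {
  initial: { purpose: 'Initial analysis', tasks: ['Identify affected files'], expected: 'Root cause, files to modify', rules: 'Focus on actionable insights' }
}

const prompt = buildPrompt({ ...PROMPTS.initial, taskDescription: 'Fix login timeout' })
// First line reads: "PURPOSE: Initial analysis: Fix login timeout"
```

Each preset is spread into the call, so adding a new role is a one-line change to `PROMPTS` with no changes to the builder.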
### Mode Implementations
```javascript
// Parallel: All CLIs run simultaneously
async function executeParallel(clis, task) {
return await Promise.all(clis.map(cli =>
execCLI(cli, buildPrompt({ ...PROMPTS.initial, taskDescription: task }), { background: true })
))
}
// Sequential: Each CLI builds on previous via --resume
async function executeSequential(clis, task) {
const results = []
let prevId = null
for (const cli of clis) {
const preset = prevId ? PROMPTS.extend : PROMPTS.initial
const result = await execCLI(cli, buildPrompt({ ...preset, taskDescription: task }), { resume: prevId })
results.push(result)
prevId = extractSessionId(result)
}
return results
}
// Collaborative: Multi-round synthesis
async function executeCollaborative(clis, task, rounds = 2) {
const results = []
let prevId = null
for (let r = 0; r < rounds; r++) {
for (const cli of clis) {
const preset = !prevId ? PROMPTS.initial : PROMPTS.synthesize
const result = await execCLI(cli, buildPrompt({ ...preset, taskDescription: task }), { resume: prevId })
results.push({ cli: cli.name, round: r, result })
prevId = extractSessionId(result)
}
}
return results
}
// Debate: Propose → Challenge → Defend
async function executeDebate(clis, task) {
const [cliA, cliB] = clis
const results = []
const propose = await execCLI(cliA, buildPrompt({ ...PROMPTS.propose, taskDescription: task }))
results.push({ phase: 'propose', cli: cliA.name, result: propose })
const challenge = await execCLI(cliB, buildPrompt({ ...PROMPTS.challenge, taskDescription: task }), { resume: extractSessionId(propose) })
results.push({ phase: 'challenge', cli: cliB.name, result: challenge })
const defend = await execCLI(cliA, buildPrompt({ ...PROMPTS.defend, taskDescription: task }), { resume: extractSessionId(challenge) })
results.push({ phase: 'defend', cli: cliA.name, result: defend })
return results
}
// Challenge: Analyze → Criticize
async function executeChallenge(clis, task) {
const [cliA, cliB] = clis
const results = []
const analyze = await execCLI(cliA, buildPrompt({ ...PROMPTS.initial, taskDescription: task }))
results.push({ phase: 'analyze', cli: cliA.name, result: analyze })
const criticize = await execCLI(cliB, buildPrompt({ ...PROMPTS.criticize, taskDescription: task }), { resume: extractSessionId(analyze) })
results.push({ phase: 'challenge', cli: cliB.name, result: criticize })
return results
}
```
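`extractSessionId` is referenced by every chained mode but never defined in this document. A minimal sketch, assuming the CLI output contains a line like `session: <id>` (the actual `ccw cli` output format is an assumption here):

```javascript
// Hypothetical parser: assumes the CLI result (string or { output }) contains
// a line of the form "session: <id>". Adjust the pattern to the real output.
function extractSessionId(result) {
  const text = typeof result === 'string' ? result : (result?.output ?? '')
  const match = text.match(/session:\s*([\w-]+)/i)
  return match ? match[1] : null
}

extractSessionId('analysis done\nsession: abc-123') // → 'abc-123'
```

Returning `null` when no id is found lets the sequential and collaborative loops fall back to the `initial` preset instead of passing a bogus `--resume` value.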
### Mode Router & Result Aggregation
```javascript
async function executeAnalysis(mode, clis, taskDescription) {
switch (mode.name) {
case 'parallel': return await executeParallel(clis, taskDescription)
case 'sequential': return await executeSequential(clis, taskDescription)
case 'collaborative': return await executeCollaborative(clis, taskDescription)
case 'debate': return await executeDebate(clis, taskDescription)
case 'challenge': return await executeChallenge(clis, taskDescription)
}
}
function aggregateResults(mode, results) {
const base = { mode: mode.name, pattern: mode.pattern, tools_used: results.map(r => r.cli || 'unknown') }
switch (mode.name) {
case 'parallel':
return { ...base, findings: results.map(parseOutput), consensus: findCommonPoints(results), divergences: findDifferences(results) }
case 'sequential':
return { ...base, evolution: results.map((r, i) => ({ step: i + 1, analysis: parseOutput(r) })), finalAnalysis: parseOutput(results.at(-1)) }
case 'collaborative':
return { ...base, rounds: groupByRound(results), synthesis: extractSynthesis(results.at(-1)) }
case 'debate':
return { ...base, proposal: parseOutput(results.find(r => r.phase === 'propose')?.result),
challenges: parseOutput(results.find(r => r.phase === 'challenge')?.result),
resolution: parseOutput(results.find(r => r.phase === 'defend')?.result), confidence: calculateDebateConfidence(results) }
case 'challenge':
return { ...base, originalAnalysis: parseOutput(results.find(r => r.phase === 'analyze')?.result),
critiques: parseCritiques(results.find(r => r.phase === 'challenge')?.result), riskScore: calculateRiskScore(results) }
}
}
// If planPath exists: update Analysis Summary & Execution Plan sections
```
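Among the aggregation helpers, `groupByRound` follows directly from the shape `executeCollaborative` produces (`{ cli, round, result }`); a sketch under that assumption:

```javascript
// Groups collaborative results by their `round` index.
function groupByRound(results) {
  return results.reduce((acc, r) => {
    (acc[r.round] ??= []).push(r)
    return acc
  }, {})
}

const grouped = groupByRound([
  { cli: 'gemini', round: 0, result: 'a' },
  { cli: 'qwen', round: 0, result: 'b' },
  { cli: 'gemini', round: 1, result: 'c' }
])
// grouped[0] has 2 entries, grouped[1] has 1
```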
## Phase 4: User Decision
```javascript
function presentSummary(analysis) {
console.log(`## Analysis Result\n**Mode**: ${analysis.mode} (${analysis.pattern})\n**Tools**: ${analysis.tools_used.join(' → ')}`)
switch (analysis.mode) {
case 'parallel':
console.log(`### Consensus\n${analysis.consensus.map(c => `- ${c}`).join('\n')}\n### Divergences\n${analysis.divergences.map(d => `- ${d}`).join('\n')}`)
break
case 'sequential':
console.log(`### Evolution\n${analysis.evolution.map(e => `**Step ${e.step}**: ${e.analysis.summary}`).join('\n')}\n### Final\n${analysis.finalAnalysis.summary}`)
break
case 'collaborative':
console.log(`### Rounds\n${Object.entries(analysis.rounds).map(([r, a]) => `**Round ${r}**: ${a.map(x => x.cli).join(' + ')}`).join('\n')}\n### Synthesis\n${analysis.synthesis}`)
break
case 'debate':
console.log(`### Debate\n**Proposal**: ${analysis.proposal.summary}\n**Challenges**: ${analysis.challenges.points?.length || 0} points\n**Resolution**: ${analysis.resolution.summary}\n**Confidence**: ${analysis.confidence}%`)
break
case 'challenge':
console.log(`### Challenge\n**Original**: ${analysis.originalAnalysis.summary}\n**Critiques**: ${analysis.critiques.length} issues\n${analysis.critiques.map(c => `- [${c.severity}] ${c.description}`).join('\n')}\n**Risk Score**: ${analysis.riskScore}/100`)
break
}
}
AskUserQuestion({
questions: [{
question: "How to proceed?",
header: "Next Step",
options: [
{ label: "Execute directly", description: "Implement immediately" },
{ label: "Refine analysis", description: "Add constraints, re-analyze" },
{ label: "Change tools", description: "Different tool combination" },
{ label: "Cancel", description: "End workflow" }
],
multiSelect: false
}]
})
// If planPath exists: record decision to Decisions Made table
// Routing: Execute → Phase 5 | Refine → Phase 3 | Change → Phase 2 | Cancel → End
```
## Phase 5: Direct Execution
```javascript
// Simple tasks: No artifacts | Complex tasks: Update scratchpad doc
const executionAgents = agents.filter(a => a.canExecute)
const executionTool = selectedAgent.canExecute ? selectedAgent : selectedCLIs[0]
if (executionTool.type === 'agent') {
Task({
subagent_type: executionTool.name,
run_in_background: false,
description: `Execute: ${taskDescription.slice(0, 30)}`,
prompt: `## Task\n${taskDescription}\n\n## Analysis Results\n${JSON.stringify(aggregatedAnalysis, null, 2)}\n\n## Instructions\n1. Apply changes to identified files\n2. Follow recommended approach\n3. Handle identified risks\n4. Verify changes work correctly`
})
} else {
Bash({
command: `ccw cli -p "
PURPOSE: Implement solution: ${taskDescription}
TASK: ${extractedTasks.join(' • ')}
MODE: write
CONTEXT: @${affectedFiles.join(' @')}
EXPECTED: Working implementation with all changes applied
CONSTRAINTS: Follow existing patterns
" --tool ${executionTool.name} --mode write`,
run_in_background: false
})
}
// If planPath exists: update Status to completed/failed, append to Progress Log
```
## TodoWrite Structure
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Clarify requirements", status: "in_progress", activeForm: "Clarifying requirements" },
{ content: "Phase 1.5: Assess complexity", status: "pending", activeForm: "Assessing complexity" },
{ content: "Phase 2: Select tools", status: "pending", activeForm: "Selecting tools" },
{ content: "Phase 3: Multi-mode analysis", status: "pending", activeForm: "Running analysis" },
{ content: "Phase 4: User decision", status: "pending", activeForm: "Awaiting decision" },
{ content: "Phase 5: Direct execution", status: "pending", activeForm: "Executing" }
]})
```
## Iteration Patterns
| Pattern | Flow |
|---------|------|
| **Direct** | Phase 1 → 2 → 3 → 4(execute) → 5 |
| **Refinement** | Phase 3 → 4(refine) → 3 → 4 → 5 |
| **Tool Adjust** | Phase 2(adjust) → 3 → 4 → 5 |
## Error Handling
| Error | Resolution |
|-------|------------|
| CLI timeout | Retry with secondary model |
| No enabled tools | Ask user to enable tools in cli-tools.json |
| Task unclear | Default to first CLI + code-developer |
| Ambiguous task | Force clarification via AskUser |
| Execution fails | Present error, ask user for direction |
| Plan doc write fails | Continue without doc (degrade to zero-artifact mode) |
| Scratchpad dir missing | Auto-create `.workflow/.scratchpad/` |
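The "CLI timeout → retry with secondary model" row can be sketched as a retry wrapper. The `PRIMARY_MODEL`/`SECONDARY_MODEL` aliases come from this changeset's model-alias support; how the model is passed to the runner is an assumption:

```javascript
// Hypothetical fallback wrapper: retries once with a secondary model alias
// when the first attempt fails with a timeout. `runCli(prompt, cli, opts)`
// is an assumed injection point, not a confirmed ccw API.
async function execWithFallback(runCli, prompt, cli) {
  try {
    return await runCli(prompt, cli, { model: 'PRIMARY_MODEL' })
  } catch (err) {
    if (/timeout/i.test(String(err?.message ?? err))) {
      return await runCli(prompt, cli, { model: 'SECONDARY_MODEL' })
    }
    throw err // non-timeout errors surface to Phase 4 for user direction
  }
}
```

Non-timeout failures are rethrown so they reach the "Execution fails" row rather than being silently retried.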
## Comparison with multi-cli-plan
| Aspect | lite-lite-lite | multi-cli-plan |
|--------|----------------|----------------|
| **Artifacts** | Conditional (scratchpad doc for complex tasks) | Always (IMPL_PLAN.md, plan.json, synthesis.json) |
| **Session** | Stateless (--resume chaining) | Persistent session folder |
| **Tool Selection** | 3-step (CLI → Mode → Agent) | Config-driven fixed tools |
| **Analysis Modes** | 5 modes with --resume | Fixed synthesis rounds |
| **Complexity** | Auto-detected (simple/moderate/complex) | Assumed complex |
| **Best For** | Quick analysis, simple-to-moderate tasks | Complex multi-step implementations |
## Post-Completion Expansion
After completion, ask the user whether to expand the result into issues (test/enhance/refactor/doc); for each selected dimension, invoke `/issue:new "{summary} - {dimension}"`
## Related Commands
```bash
/workflow:multi-cli-plan "complex task" # Full planning workflow
/workflow:lite-plan "task" # Single CLI planning
/workflow:lite-execute --in-memory # Direct execution
```


@@ -2,7 +2,7 @@
name: lite-plan
description: Lightweight interactive planning workflow with in-memory planning, code exploration, and handoff to lite-execute after user confirmation
argument-hint: "[-y|--yes] [-e|--explore] \"task description\"|file.md"
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*)
allowed-tools: TodoWrite(*), Task(*), Skill(*), AskUserQuestion(*)
---
# Workflow Lite-Plan Command (/workflow:lite-plan)
@@ -37,6 +37,23 @@ Intelligent lightweight planning command with dynamic workflow adaptation based
/workflow:lite-plan -y -e "Optimize database query performance" # Auto mode + force exploration
```
## Output Artifacts
| Artifact | Description |
|----------|-------------|
| `exploration-{angle}.json` | Per-angle exploration results (1-4 files based on complexity) |
| `explorations-manifest.json` | Index of all exploration files |
| `planning-context.md` | Evidence paths + synthesized understanding |
| `plan.json` | Structured implementation plan (plan-json-schema.json) |
**Output Directory**: `.workflow/.lite-plan/{task-slug}-{YYYY-MM-DD}/`
**Agent Usage**:
- Low complexity → Direct Claude planning (no agent)
- Medium/High complexity → `cli-lite-planning-agent` generates `plan.json`
**Schema Reference**: `~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
@@ -84,7 +101,7 @@ Phase 4: Confirmation & Selection
Phase 5: Execute
├─ Build executionContext (plan + explorations + clarifications + selections)
└─ SlashCommand("/workflow:lite-execute --in-memory")
└─ Skill(skill="workflow:lite-execute", args="--in-memory")
```
## Implementation
@@ -193,11 +210,15 @@ const explorationTasks = selectedAngles.map((angle, index) =>
## Task Objective
Execute **${angle}** exploration for task planning context. Analyze codebase from this specific angle to discover relevant structure, patterns, and constraints.
## Output Location
**Session Folder**: ${sessionFolder}
**Output File**: ${sessionFolder}/exploration-${angle}.json
## Assigned Context
- **Exploration Angle**: ${angle}
- **Task Description**: ${task_description}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
@@ -225,8 +246,6 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
## Expected Output
**File**: ${sessionFolder}/exploration-${angle}.json
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly
**Required Fields** (all ${angle} focused):
@@ -252,9 +271,9 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended
## Output
Write: ${sessionFolder}/exploration-${angle}.json
Return: 2-3 sentence summary of ${angle} findings
## Execution
**Write**: \`${sessionFolder}/exploration-${angle}.json\`
**Return**: 2-3 sentence summary of ${angle} findings
`
)
)
@@ -443,6 +462,13 @@ Task(
prompt=`
Generate implementation plan and write plan.json.
## Output Location
**Session Folder**: ${sessionFolder}
**Output Files**:
- ${sessionFolder}/planning-context.md (evidence + understanding)
- ${sessionFolder}/plan.json (implementation plan)
## Output Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json (get schema reference before generating plan)
@@ -492,8 +518,9 @@ Generate plan.json following the schema obtained above. Key constraints:
2. Execute CLI planning using Gemini (Qwen fallback)
3. Read ALL exploration files for comprehensive context
4. Synthesize findings and generate plan following schema
5. Write JSON: Write('${sessionFolder}/plan.json', jsonContent)
6. Return brief completion summary
5. **Write**: \`${sessionFolder}/planning-context.md\` (evidence paths + understanding)
6. **Write**: \`${sessionFolder}/plan.json\`
7. Return brief completion summary
`
)
```
@@ -635,7 +662,7 @@ executionContext = {
**Step 5.2: Execute**
```javascript
SlashCommand(command="/workflow:lite-execute --in-memory")
Skill(skill="workflow:lite-execute", args="--in-memory")
```
## Session Folder Structure


@@ -1,807 +0,0 @@
---
name: merge-plans-with-file
description: Merge multiple planning/brainstorm/analysis outputs, resolve conflicts, and synthesize unified plan. Designed for multi-team input aggregation and final plan crystallization
argument-hint: "[-y|--yes] [-r|--rule consensus|priority|hierarchy] \"plan or topic name\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-resolve conflicts using specified rule (consensus/priority/hierarchy), minimal user prompts.
# Workflow Merge-Plans-With-File Command (/workflow:merge-plans-with-file)
## Overview
Plan aggregation and conflict resolution workflow. Takes multiple planning artifacts (brainstorm conclusions, analysis recommendations, quick-plans, implementation plans) and synthesizes them into a unified, conflict-resolved execution plan.
**Core workflow**: Load Sources → Parse Plans → Conflict Analysis → Arbitration → Unified Plan
**Key features**:
- **Multi-Source Support**: Brainstorm, analysis, quick-plan, IMPL_PLAN, task JSONs
- **Parallel Conflict Detection**: Identify all contradictions across input plans
- **Conflict Resolution**: Consensus, priority-based, or hierarchical resolution modes
- **Unified Synthesis**: Single authoritative plan from multiple perspectives
- **Decision Tracking**: Full audit trail of conflicts and resolutions
- **Resumable**: Save intermediate states, refine resolutions
## Usage
```bash
/workflow:merge-plans-with-file [FLAGS] <PLAN_NAME_OR_PATTERN>
# Flags
-y, --yes Auto-resolve conflicts using rule, skip confirmations
-r, --rule <rule> Conflict resolution rule: consensus (default) | priority | hierarchy
-o, --output <path> Output directory (default: .workflow/.merged/{name})
# Arguments
<plan-name-or-pattern> Plan name or glob pattern to identify input files/sessions
Examples: "auth-module", "*.analysis-*.json", "PLAN-*"
# Examples
/workflow:merge-plans-with-file "authentication" # Auto-detect all auth-related plans
/workflow:merge-plans-with-file -y -r priority "payment-system" # Auto-resolve with priority rule
/workflow:merge-plans-with-file -r hierarchy "feature-complete" # Use hierarchy rule (requires user ranking)
```
## Execution Process
```
Discovery & Loading:
├─ Search for planning artifacts matching pattern
├─ Load all synthesis.json, conclusions.json, IMPL_PLAN.md
├─ Parse each into normalized task/plan structure
└─ Validate data completeness
Session Initialization:
├─ Create .workflow/.merged/{sessionId}/
├─ Initialize merge.md with plan summary
├─ Index all source plans
└─ Extract planning metadata and constraints
Phase 1: Plan Normalization
├─ Convert all formats to common task representation
├─ Extract tasks, dependencies, effort, risks
├─ Identify plan scope and boundaries
├─ Validate no duplicate tasks
└─ Aggregate recommendations from each plan
Phase 2: Conflict Detection (Parallel)
├─ Architecture conflicts: different design approaches
├─ Task conflicts: overlapping responsibilities or different priorities
├─ Effort conflicts: vastly different estimates
├─ Risk assessment conflicts: different risk levels
├─ Scope conflicts: different feature inclusions
└─ Generate conflict matrix with severity levels
Phase 3: Consensus Building / Arbitration
├─ For each conflict, analyze source rationale
├─ Apply resolution rule (consensus/priority/hierarchy)
├─ Escalate unresolvable conflicts to user (unless --yes)
├─ Document decision rationale
└─ Generate resolutions.json
Phase 4: Plan Synthesis
├─ Merge task lists (remove duplicates, combine insights)
├─ Integrate dependencies from all sources
├─ Consolidate effort and risk estimates
├─ Generate unified execution sequence
├─ Create final unified plan
└─ Output ready for execution
Output:
├─ .workflow/.merged/{sessionId}/merge.md (merge process & decisions)
├─ .workflow/.merged/{sessionId}/source-index.json (input sources)
├─ .workflow/.merged/{sessionId}/conflicts.json (conflict matrix)
├─ .workflow/.merged/{sessionId}/resolutions.json (how conflicts were resolved)
├─ .workflow/.merged/{sessionId}/unified-plan.json (final merged plan)
└─ .workflow/.merged/{sessionId}/unified-plan.md (execution-ready markdown)
```
## Implementation
### Phase 1: Plan Discovery & Loading
```javascript
// Approximates UTC+8 wall-clock time by shifting the epoch; the trailing 'Z' is nominal, not true UTC
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// Parse arguments
const planPattern = "$PLAN_NAME_OR_PATTERN"
const resolutionRule = $ARGUMENTS.match(/--rule\s+(\w+)/)?.[1] || 'consensus'
const isAutoMode = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
// Generate session ID
const mergeSlug = planPattern.toLowerCase()
.replace(/[*?]/g, '-')
.replace(/[^a-z0-9\u4e00-\u9fa5-]+/g, '-')
.substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `MERGE-${mergeSlug}-${dateStr}`
const sessionFolder = `.workflow/.merged/${sessionId}`
bash(`mkdir -p ${sessionFolder}`)
// Discover all relevant planning artifacts
const discoveryPaths = [
`.workflow/.brainstorm/*/${planPattern}*/synthesis.json`,
`.workflow/.analysis/*/${planPattern}*/conclusions.json`,
`.workflow/.planning/*/${planPattern}*/synthesis.json`,
`.workflow/.plan/${planPattern}*IMPL_PLAN.md`,
`.workflow/*/${planPattern}*.json`
]
const sourcePlans = []
for (const pattern of discoveryPaths) {
const matches = glob(pattern)
for (const path of matches) {
try {
const content = Read(path)
const plan = parsePlanFile(path, content)
if (plan && plan.tasks?.length > 0) {
sourcePlans.push({
source_path: path,
source_type: identifySourceType(path),
plan: plan,
loaded_at: getUtc8ISOString()
})
}
} catch (e) {
console.warn(`Failed to load plan from ${path}: ${e.message}`)
}
}
}
if (sourcePlans.length === 0) {
console.error(`
## Error: No Plans Found
Pattern: ${planPattern}
Searched locations:
${discoveryPaths.join('\n')}
Available plans in .workflow/:
`)
bash(`find .workflow -name "*.json" -o -name "*PLAN.md" | head -20`)
return { status: 'error', message: 'No plans found' }
}
console.log(`
## Plans Discovered
Total: ${sourcePlans.length}
${sourcePlans.map(sp => `- ${sp.source_type}: ${sp.source_path}`).join('\n')}
`)
```
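`identifySourceType` and `parsePlanFile` are used above without definitions. The former can be inferred directly from the discovery path patterns; a sketch:

```javascript
// Infers the source type from the artifact path, mirroring the
// discoveryPaths patterns above. The labels are illustrative.
function identifySourceType(path) {
  if (path.includes('.brainstorm/')) return 'brainstorm'
  if (path.includes('.analysis/')) return 'analysis'
  if (path.includes('.planning/')) return 'planning'
  if (path.endsWith('IMPL_PLAN.md')) return 'impl-plan'
  return 'generic-json'
}

identifySourceType('.workflow/.brainstorm/x/auth-module/synthesis.json') // → 'brainstorm'
```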
---
### Phase 2: Plan Normalization
```javascript
// Normalize all plans to common format
const normalizedPlans = sourcePlans.map((sourcePlan, idx) => {
const plan = sourcePlan.plan
const tasks = plan.tasks || []
return {
index: idx,
source: sourcePlan.source_path,
source_type: sourcePlan.source_type,
metadata: {
title: plan.title || `Plan ${idx + 1}`,
topic: plan.topic || plan.planning_topic || 'unknown',
timestamp: plan.completed || plan.timestamp || sourcePlan.loaded_at,
source_ideas: plan.top_ideas?.length || 0,
complexity: plan.complexity_level || 'unknown'
},
// Normalized tasks
tasks: tasks.map(task => ({
id: task.id || `T${idx}-${task.title?.substring(0, 20)}`,
title: task.title || task.content,
description: task.description || '',
type: task.type || inferType(task),
priority: task.priority || 'normal',
// Effort estimation
effort: {
estimated: task.estimated_duration || task.effort_estimate || 'unknown',
from_plan: idx
},
// Risk assessment
risk: {
level: task.risk_level || 'medium',
from_plan: idx
},
// Dependencies
dependencies: task.dependencies || [],
// Source tracking
source_plan_index: idx,
original_id: task.id,
// Quality tracking
success_criteria: task.success_criteria || [],
challenges: task.challenges || []
}))
}
})
// Save source index
const sourceIndex = {
session_id: sessionId,
merge_timestamp: getUtc8ISOString(),
pattern: planPattern,
total_source_plans: sourcePlans.length,
sources: normalizedPlans.map(p => ({
index: p.index,
source_path: p.source,
source_type: p.source_type,
topic: p.metadata.topic,
task_count: p.tasks.length
}))
}
Write(`${sessionFolder}/source-index.json`, JSON.stringify(sourceIndex, null, 2))
```
---
### Phase 3: Conflict Detection
```javascript
// Detect conflicts across plans
const conflictDetector = {
// Architecture conflicts
architectureConflicts: [],
// Task conflicts (duplicates, different scope)
taskConflicts: [],
// Effort conflicts
effortConflicts: [],
// Risk assessment conflicts
riskConflicts: [],
// Scope conflicts
scopeConflicts: [],
// Priority conflicts
priorityConflicts: []
}
// Algorithm 1: Detect similar tasks across plans
const allTasks = normalizedPlans.flatMap(p => p.tasks)
const taskGroups = groupSimilarTasks(allTasks)
for (const group of taskGroups) {
if (group.tasks.length > 1) {
// Same task appears in multiple plans
const efforts = group.tasks.map(t => t.effort.estimated)
const effortVariance = calculateVariance(efforts)
if (effortVariance > 0.5) {
// Significant difference in effort estimates
conflictDetector.effortConflicts.push({
task_group: group.title,
conflicting_tasks: group.tasks.map((t, i) => ({
id: t.id,
from_plan: t.source_plan_index,
effort: t.effort.estimated
})),
variance: effortVariance,
severity: 'high'
})
}
// Check for scope differences
const scopeDifferences = analyzeScopeDifferences(group.tasks)
if (scopeDifferences.length > 0) {
conflictDetector.taskConflicts.push({
task_group: group.title,
scope_differences: scopeDifferences,
severity: 'medium'
})
}
}
}
// Algorithm 2: Architecture conflicts
const architectures = normalizedPlans.map(p => p.metadata.complexity)
if (new Set(architectures).size > 1) {
conflictDetector.architectureConflicts.push({
different_approaches: true,
complexity_levels: architectures.map((a, i) => ({
plan: i,
complexity: a
})),
severity: 'high'
})
}
// Algorithm 3: Risk assessment conflicts
const riskLevels = allTasks.map(t => ({ task: t.id, risk: t.risk.level }))
const taskRisks = {}
for (const tr of riskLevels) {
if (!taskRisks[tr.task]) taskRisks[tr.task] = []
taskRisks[tr.task].push(tr.risk)
}
for (const [task, risks] of Object.entries(taskRisks)) {
if (new Set(risks).size > 1) {
conflictDetector.riskConflicts.push({
task: task,
conflicting_risk_levels: risks,
severity: 'medium'
})
}
}
// Save conflicts
Write(`${sessionFolder}/conflicts.json`, JSON.stringify(conflictDetector, null, 2))
```
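`groupSimilarTasks` and `calculateVariance` are left abstract above. For effort estimates, the `> 0.5` threshold reads naturally as a coefficient of variation over parsed estimates; a sketch, where parsing effort strings like `"4h"`/`"2d"` (8h workdays) is a simplifying assumption:

```javascript
// Parses effort strings like "4h" or "2d" into hours (8h per day assumed).
function parseEffortHours(effort) {
  const m = String(effort).match(/^(\d+(?:\.\d+)?)\s*(h|d)$/i)
  if (!m) return null
  return m[2].toLowerCase() === 'd' ? Number(m[1]) * 8 : Number(m[1])
}

// Normalized spread of estimates (coefficient of variation: stddev / mean).
function calculateVariance(efforts) {
  const hours = efforts.map(parseEffortHours).filter(h => h !== null)
  if (hours.length < 2) return 0
  const mean = hours.reduce((a, b) => a + b, 0) / hours.length
  const variance = hours.reduce((a, b) => a + (b - mean) ** 2, 0) / hours.length
  return Math.sqrt(variance) / mean
}

calculateVariance(['4h', '2d']) // 4h vs 16h → 0.6, above the 0.5 threshold
```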
---
### Phase 4: Conflict Resolution
```javascript
// Resolve conflicts based on selected rule
const resolutions = {
resolution_rule: resolutionRule,
timestamp: getUtc8ISOString(),
effort_resolutions: [],
architecture_resolutions: [],
risk_resolutions: [],
scope_resolutions: [],
priority_resolutions: []
}
// Resolution Strategy 1: Consensus
if (resolutionRule === 'consensus') {
for (const conflict of conflictDetector.effortConflicts) {
// Use median or average
const efforts = conflict.conflicting_tasks.map(t => parseEffort(t.effort))
const resolved_effort = calculateMedian(efforts)
resolutions.effort_resolutions.push({
conflict: conflict.task_group,
original_estimates: efforts,
resolved_estimate: resolved_effort,
method: 'consensus-median',
rationale: 'Used median of all estimates'
})
}
}
// Resolution Strategy 2: Priority-Based
else if (resolutionRule === 'priority') {
// Use the plan from highest priority source (first or most recent)
for (const conflict of conflictDetector.effortConflicts) {
const highestPriority = conflict.conflicting_tasks[0] // First plan has priority
resolutions.effort_resolutions.push({
conflict: conflict.task_group,
conflicting_estimates: conflict.conflicting_tasks.map(t => t.effort),
resolved_estimate: highestPriority.effort,
selected_from_plan: highestPriority.from_plan,
method: 'priority-based',
rationale: `Selected estimate from plan ${highestPriority.from_plan} (highest priority)`
})
}
}
// Resolution Strategy 3: Hierarchy (requires user ranking)
else if (resolutionRule === 'hierarchy') {
if (!isAutoMode) {
// Ask user to rank plan importance
const planRanking = AskUserQuestion({
questions: [{
question: "Rank these plans by importance (most to least important):",
header: "Plan Ranking",
multiSelect: false,
options: normalizedPlans.slice(0, 5).map(p => ({
label: `Plan ${p.index}: ${p.metadata.title.substring(0, 40)}`,
description: `${p.tasks.length} tasks, complexity: ${p.metadata.complexity}`
}))
}]
})
// Apply hierarchy
const hierarchy = extractHierarchy(planRanking)
for (const conflict of conflictDetector.effortConflicts) {
const topPriorityTask = conflict.conflicting_tasks
.sort((a, b) => hierarchy[a.from_plan] - hierarchy[b.from_plan])[0]
resolutions.effort_resolutions.push({
conflict: conflict.task_group,
resolved_estimate: topPriorityTask.effort,
selected_from_plan: topPriorityTask.from_plan,
method: 'hierarchy-based',
rationale: `Selected from highest-ranked plan`
})
}
}
}
Write(`${sessionFolder}/resolutions.json`, JSON.stringify(resolutions, null, 2))
```
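The consensus rule relies on `calculateMedian` (and `parseEffort`), both assumed helpers; the median itself is straightforward:

```javascript
// Median of numeric estimates; the consensus rule above uses this so a
// single outlier plan cannot skew the resolved estimate.
function calculateMedian(values) {
  const sorted = [...values].sort((a, b) => a - b)
  const mid = Math.floor(sorted.length / 2)
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2
}

calculateMedian([2, 8, 5])    // → 5
calculateMedian([2, 8, 5, 3]) // → 4
```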
---
### Phase 5: Plan Synthesis
```javascript
// Merge all tasks into unified plan
const unifiedTasks = []
const processedTaskIds = new Set()
for (const task of allTasks) {
const taskKey = generateTaskKey(task)
if (processedTaskIds.has(taskKey)) {
// Task already added, skip
continue
}
processedTaskIds.add(taskKey)
// Apply resolution if this task has conflicts
let resolvedTask = { ...task }
const effortResolution = resolutions.effort_resolutions
.find(r => r.conflict === taskKey)
if (effortResolution) {
resolvedTask.effort.estimated = effortResolution.resolved_estimate
resolvedTask.effort.resolution_method = effortResolution.method
}
unifiedTasks.push({
id: taskKey,
title: task.title,
description: task.description,
type: task.type,
priority: task.priority,
effort: resolvedTask.effort,
risk: task.risk,
dependencies: task.dependencies,
success_criteria: [...new Set([
...task.success_criteria,
...findRelatedTasks(task, allTasks)
.flatMap(t => t.success_criteria)
])],
challenges: [...new Set([
...task.challenges,
...findRelatedTasks(task, allTasks)
.flatMap(t => t.challenges)
])],
source_plans: [
...new Set(allTasks
.filter(t => generateTaskKey(t) === taskKey)
.map(t => t.source_plan_index))
]
})
}
// Generate execution sequence
const executionSequence = topologicalSort(unifiedTasks)
const criticalPath = identifyCriticalPath(unifiedTasks, executionSequence)
// Final unified plan
const unifiedPlan = {
session_id: sessionId,
merge_timestamp: getUtc8ISOString(),
summary: {
total_source_plans: normalizedPlans.length,
original_tasks_total: allTasks.length,
merged_tasks: unifiedTasks.length,
conflicts_resolved: Object.values(conflictDetector).flat().length,
resolution_rule: resolutionRule
},
merged_metadata: {
topics: [...new Set(normalizedPlans.map(p => p.metadata.topic))],
average_complexity: calculateAverage(normalizedPlans.map(p => parseComplexity(p.metadata.complexity))),
combined_scope: estimateScope(unifiedTasks)
},
tasks: unifiedTasks,
execution_sequence: executionSequence,
critical_path: criticalPath,
risks: aggregateRisks(unifiedTasks),
success_criteria: aggregateSuccessCriteria(unifiedTasks),
audit_trail: {
source_plans: normalizedPlans.length,
conflicts_detected: Object.values(conflictDetector).flat().length,
conflicts_resolved: Object.values(resolutions).flat().length,
resolution_method: resolutionRule
}
}
Write(`${sessionFolder}/unified-plan.json`, JSON.stringify(unifiedPlan, null, 2))
```
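`topologicalSort` is referenced without a definition; a minimal Kahn's-algorithm sketch over the unified task shape (`{ id, dependencies }`), where ignoring dependencies that point outside the merged task set is an assumption:

```javascript
// Kahn's algorithm over tasks of shape { id, dependencies }.
// Dependencies on ids outside the task set are ignored.
function topologicalSort(tasks) {
  const ids = new Set(tasks.map(t => t.id))
  const indegree = new Map(tasks.map(t => [t.id, 0]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) {
    for (const dep of t.dependencies.filter(d => ids.has(d))) {
      indegree.set(t.id, indegree.get(t.id) + 1)
      dependents.get(dep).push(t.id)
    }
  }
  const queue = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id)
  const order = []
  while (queue.length) {
    const id = queue.shift()
    order.push(id)
    for (const next of dependents.get(id)) {
      indegree.set(next, indegree.get(next) - 1)
      if (indegree.get(next) === 0) queue.push(next)
    }
  }
  return order // shorter than tasks.length signals a dependency cycle
}
```

An order shorter than the input signals a cycle, which should surface as an unresolvable conflict rather than a silent truncation.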
---
### Phase 6: Generate Execution Plan
```markdown
# Merged Planning Session
**Session ID**: ${sessionId}
**Pattern**: ${planPattern}
**Created**: ${getUtc8ISOString()}
---
## Merge Summary
**Source Plans**: ${unifiedPlan.summary.total_source_plans}
**Original Tasks**: ${unifiedPlan.summary.original_tasks_total}
**Merged Tasks**: ${unifiedPlan.summary.merged_tasks}
**Tasks Deduplicated**: ${unifiedPlan.summary.original_tasks_total - unifiedPlan.summary.merged_tasks}
**Conflicts Resolved**: ${unifiedPlan.summary.conflicts_resolved}
**Resolution Method**: ${unifiedPlan.summary.resolution_rule}
---
## Merged Plan Overview
**Topics**: ${unifiedPlan.merged_metadata.topics.join(', ')}
**Combined Complexity**: ${unifiedPlan.merged_metadata.average_complexity}
**Total Scope**: ${unifiedPlan.merged_metadata.combined_scope}
---
## Unified Task List
${unifiedPlan.tasks.map((task, i) => `
${i+1}. **${task.id}: ${task.title}**
- Type: ${task.type}
- Effort: ${task.effort.estimated}
- Risk: ${task.risk.level}
- Source Plans: ${task.source_plans.join(', ')}
- ${task.description}
`).join('\n')}
---
## Execution Sequence
**Critical Path**: ${unifiedPlan.critical_path.join(' → ')}
**Execution Order**:
${unifiedPlan.execution_sequence.map((id, i) => `${i+1}. ${id}`).join('\n')}
---
## Conflict Resolution Report
**Total Conflicts**: ${unifiedPlan.summary.conflicts_resolved}
**Resolved Conflicts**:
${Object.entries(resolutions).flatMap(([key, items]) =>
  items.slice(0, 3).map(item => `
  - ${key.replace(/_/g, ' ')}: ${item.rationale || item.method}
`)
).join('\n')}
**Full Report**: See \`conflicts.json\` and \`resolutions.json\`
---
## Risks & Considerations
**Aggregated Risks**:
${unifiedPlan.risks.slice(0, 5).map(r => `- **${r.title}**: ${r.mitigation}`).join('\n')}
**Combined Success Criteria**:
${unifiedPlan.success_criteria.slice(0, 5).map(c => `- ${c}`).join('\n')}
---
## Next Steps
### Option 1: Direct Execution
Execute merged plan with unified-execute-with-file:
\`\`\`
/workflow:unified-execute-with-file -p ${sessionFolder}/unified-plan.json
\`\`\`
### Option 2: Detailed Planning
Create detailed IMPL_PLAN from merged plan:
\`\`\`
/workflow:plan "Based on merged plan from ${planPattern}"
\`\`\`
### Option 3: Review Conflicts
Review detailed conflict analysis:
\`\`\`
cat ${sessionFolder}/resolutions.json
\`\`\`
---
## Artifacts
- **source-index.json** - All input plans and sources
- **conflicts.json** - Conflict detection results
- **resolutions.json** - How each conflict was resolved
- **unified-plan.json** - Merged plan data structure (for execution)
- **unified-plan.md** - This document (human-readable)
```
---
## Session Folder Structure
```
.workflow/.merged/{sessionId}/
├── merge.md # Merge process and decisions
├── source-index.json # All input plan sources
├── conflicts.json # Detected conflicts
├── resolutions.json # Conflict resolutions applied
├── unified-plan.json # Merged plan (machine-parseable, for execution)
└── unified-plan.md # Execution-ready plan (human-readable)
```
---
## Resolution Rules
### Rule 1: Consensus (default)
- Use median or average of conflicting estimates
- Good for: Multiple similar perspectives
- Tradeoff: May miss important minority viewpoints
### Rule 2: Priority-Based
- First plan has highest priority, subsequent plans are fallback
- Good for: Clear ranking of plan sources
- Tradeoff: Discards valuable alternative perspectives
### Rule 3: Hierarchy
- User explicitly ranks importance of each plan
- Good for: Mixed-source plans (engineering + product + leadership)
- Tradeoff: Requires user input
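Rule 1 can be sketched as a small estimate aggregator; a hypothetical example (the function name and numeric-hours normalization are assumptions, not the documented implementation):

```javascript
// Hypothetical consensus resolver - assumes conflicting estimates
// have been normalized to numbers (e.g. hours of effort)
function resolveByConsensus(estimates) {
  const sorted = [...estimates].sort((a, b) => a - b)
  const mid = Math.floor(sorted.length / 2)
  // Median resists a single outlier estimate; average reflects all inputs
  const median = sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2
  const average = sorted.reduce((sum, v) => sum + v, 0) / sorted.length
  return { median, average }
}
```

Whether median or average is reported is a policy choice: median suits the "may miss extremes" tradeoff noted above.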
---
## Input Format Support
| Source Type | Detection | Parsing | Notes |
|-------------|-----------|---------|-------|
| **Brainstorm** | `.brainstorm/*/synthesis.json` | Top ideas → tasks | Ideas converted to work items |
| **Analysis** | `.analysis/*/conclusions.json` | Recommendations → tasks | Recommendations prioritized |
| **Quick-Plan** | `.planning/*/synthesis.json` | Direct task list | Already normalized |
| **IMPL_PLAN** | `*IMPL_PLAN.md` | Markdown → tasks | Parsed from markdown structure |
| **Task JSON** | `.json` with `tasks` key | Direct mapping | Requires standard schema |
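The detection column above amounts to path pattern matching; a minimal sketch, where the function name and exact patterns are illustrative assumptions:

```javascript
// Hypothetical detector mapping a plan file path to a source type
function detectSourceType(path) {
  if (/\.brainstorm\/.*\/synthesis\.json$/.test(path)) return 'brainstorm'
  if (/\.analysis\/.*\/conclusions\.json$/.test(path)) return 'analysis'
  if (/\.planning\/.*\/synthesis\.json$/.test(path)) return 'quick-plan'
  if (/IMPL_PLAN\.md$/.test(path)) return 'impl-plan'
  if (/\.json$/.test(path)) return 'task-json' // still verify a `tasks` key after loading
  return 'unknown'
}
```

Note the Task JSON case only matches by extension here; per the table, the standard schema (a `tasks` key) must still be validated after parsing.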
---
## Error Handling
| Situation | Action |
|-----------|--------|
| No plans found | Suggest search terms, list available plans |
| Incompatible formats | Skip unsupported format, continue with others |
| Circular dependencies | Alert user, suggest manual review |
| Unresolvable conflicts | Require user decision (unless --yes + conflict rule) |
| Contradictory recommendations | Document both options for user consideration |
---
## Usage Patterns
### Pattern 1: Merge Multiple Brainstorms
```bash
/workflow:merge-plans-with-file "authentication" -y -r consensus
# → Finds all brainstorm sessions with "auth"
# → Merges top ideas into unified task list
# → Uses consensus method for conflicts
```
### Pattern 2: Synthesize Team Input
```bash
/workflow:merge-plans-with-file "payment-integration" -r hierarchy
# → Loads plans from different team members
# → Asks for ranking by importance
# → Applies hierarchy-based conflict resolution
```
### Pattern 3: Bridge Planning Phases
```bash
/workflow:merge-plans-with-file "user-auth" -f analysis
# → Takes analysis conclusions
# → Merges with existing quick-plans
# → Produces execution-ready plan
```
---
## Advanced: Custom Conflict Resolution
For complex conflict scenarios, create custom resolution script:
```
.workflow/.merged/{sessionId}/
└── custom-resolutions.js (optional)
- Define custom conflict resolution logic
- Applied after automatic resolution
- Override specific decisions
```
---
## Best Practices
1. **Before merging**:
   - Ensure all source plans are at a similar quality level
   - Verify the plans address the same scope/topic
   - Document any special considerations
2. **During merging**:
   - Review the conflict matrix (conflicts.json)
   - Understand the resolution rationale (resolutions.json)
   - Challenge assumptions if results seem odd
3. **After merging**:
   - Validate that the unified plan makes sense
   - Review the critical path
   - Ensure no important details were lost
   - Execute, or iterate if needed
---
## Integration with Other Workflows
```
Multiple Brainstorms / Analyses
├─ brainstorm-with-file (session 1)
├─ brainstorm-with-file (session 2)
├─ analyze-with-file (session 3)
merge-plans-with-file ◄──── This workflow
unified-plan.json
├─ /workflow:unified-execute-with-file (direct execution)
├─ /workflow:plan (detailed planning)
└─ /workflow:quick-plan-with-file (refinement)
```
---
## Comparison: When to Use Which Merge Rule
| Rule | Use When | Pros | Cons |
|------|----------|------|------|
| **Consensus** | Similar-quality inputs | Fair, balanced | May miss extremes |
| **Priority** | Clear hierarchy | Simple, predictable | May bias to first input |
| **Hierarchy** | Mixed stakeholders | Respects importance | Requires user ranking |
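Rule selection can be modeled as a simple dispatch table; a hypothetical sketch where the resolver signatures and the `ranking` convention (lower number = more important plan) are assumptions:

```javascript
// Hypothetical rule dispatch - resolver signatures are illustrative, not the real API
const RESOLVERS = {
  // Consensus: pick the middle option (assumes options pre-sorted by estimate)
  consensus: (options) => options[Math.floor(options.length / 2)],
  // Priority: first plan wins, later plans are fallback
  priority: (options) => options[0],
  // Hierarchy: pick the option from the highest-ranked plan
  hierarchy: (options, ranking) => options[ranking.indexOf(Math.min(...ranking))]
}

function resolveConflict(rule, options, ranking = []) {
  const resolver = RESOLVERS[rule]
  if (!resolver) throw new Error(`Unknown resolution rule: ${rule}`)
  return resolver(options, ranking)
}
```

An unrecognized rule fails loudly rather than silently falling back, matching the error-handling stance elsewhere in this workflow.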
---
**Ready to execute**: Run `/workflow:merge-plans-with-file` to start merging plans!


@@ -126,6 +126,13 @@ const aceQueries = [
**Core Principle**: Orchestrator only delegates and reads output - NO direct CLI execution.
**⚠️ CRITICAL - CLI EXECUTION REQUIREMENT**:
- **MUST** execute CLI calls via `Bash` with `run_in_background: true`
- **MUST** wait for hook callback to receive complete results
- **MUST NOT** proceed with next phase until CLI execution fully completes
- Do NOT use `TaskOutput` polling during CLI execution - wait passively for results
- Keep scope minimal: proceed only once the result is 100% available, never on partial output
**Agent Invocation**:
```javascript
Task({
@@ -425,7 +432,7 @@ executionContext = {
**Step 4: Hand off to Execution**:
```javascript
// Hand off to lite-execute with in-memory context
SlashCommand("/workflow:lite-execute --in-memory")
Skill(skill="workflow:lite-execute", args="--in-memory")
```
## Output File Structure


@@ -72,6 +72,7 @@ IF NOT EXISTS(process_dir):
SYNTHESIS_DIR = brainstorm_dir # Contains role analysis files: */analysis.md
IMPL_PLAN = session_dir/IMPL_PLAN.md
TASK_FILES = Glob(task_dir/*.json)
PLANNING_NOTES = session_dir/planning-notes.md # N+1 context and constraints
# Abort if missing - in order of dependency
SESSION_FILE_EXISTS = EXISTS(session_file)
@@ -79,6 +80,11 @@ IF NOT SESSION_FILE_EXISTS:
WARNING: "workflow-session.json not found. User intent alignment verification will be skipped."
# Continue execution - this is optional context, not blocking
PLANNING_NOTES_EXISTS = EXISTS(PLANNING_NOTES)
IF NOT PLANNING_NOTES_EXISTS:
WARNING: "planning-notes.md not found. Constraints/N+1 context verification will be skipped."
# Continue execution - optional context
SYNTHESIS_FILES = Glob(brainstorm_dir/*/analysis.md)
IF SYNTHESIS_FILES.count == 0:
ERROR: "No role analysis documents found in .brainstorming/*/analysis.md. Run /workflow:brainstorm:synthesis first"
@@ -104,6 +110,13 @@ Load only minimal necessary context from each artifact:
- User's scope definition
- **IF MISSING**: Set user_intent_analysis = "SKIPPED: workflow-session.json not found"
**From planning-notes.md** (OPTIONAL - Constraints & N+1 Context):
- **ONLY IF EXISTS**: Load planning context
- Consolidated Constraints (numbered list from Phase 1-3)
- N+1 Context: Decisions table (Decision | Rationale | Revisit?)
- N+1 Context: Deferred items list
- **IF MISSING**: Set planning_notes_analysis = "SKIPPED: planning-notes.md not found"
**From role analysis documents** (AUTHORITATIVE SOURCE):
- Functional Requirements (IDs, descriptions, acceptance criteria)
- Non-Functional Requirements (IDs, targets)
@@ -161,8 +174,8 @@ Create internal representations (do not include raw artifacts in output):
**Execution Order** (Agent orchestrates internally):
1. **Tier 1 (CRITICAL Path)**: A, B, C - User intent, coverage, consistency (full analysis)
2. **Tier 2 (HIGH Priority)**: D, E - Dependencies, synthesis alignment (limit 15 findings)
1. **Tier 1 (CRITICAL Path)**: A, B, C, I - User intent, coverage, consistency, constraints compliance (full analysis)
2. **Tier 2 (HIGH Priority)**: D, E, J - Dependencies, synthesis alignment, N+1 context validation (limit 15 findings)
3. **Tier 3 (MEDIUM Priority)**: F - Specification quality (limit 20 findings)
4. **Tier 4 (LOW Priority)**: G, H - Duplication, feasibility (limit 15 findings)
@@ -182,9 +195,10 @@ Task(
1. Read: ~/.claude/workflows/cli-templates/schemas/plan-verify-agent-schema.json (dimensions & rules)
2. Read: ~/.claude/workflows/cli-templates/schemas/verify-json-schema.json (output schema)
3. Read: ${session_file} (user intent)
4. Read: ${IMPL_PLAN} (implementation plan)
5. Glob: ${task_dir}/*.json (task files)
6. Glob: ${SYNTHESIS_DIR}/*/analysis.md (role analyses)
4. Read: ${PLANNING_NOTES} (constraints & N+1 context)
5. Read: ${IMPL_PLAN} (implementation plan)
6. Glob: ${task_dir}/*.json (task files)
7. Glob: ${SYNTHESIS_DIR}/*/analysis.md (role analyses)
### Execution Flow
@@ -226,7 +240,8 @@ const byDimension = Object.groupBy(findings, f => f.dimension)
const DIMS = {
A: "User Intent Alignment", B: "Requirements Coverage", C: "Consistency Validation",
D: "Dependency Integrity", E: "Synthesis Alignment", F: "Task Specification Quality",
G: "Duplication Detection", H: "Feasibility Assessment"
G: "Duplication Detection", H: "Feasibility Assessment",
I: "Constraints Compliance", J: "N+1 Context Validation"
}
```
@@ -280,7 +295,7 @@ ${findings.map(f => `| ${f.id} | ${f.dimension_name} | ${f.severity} | ${f.locat
## Analysis by Dimension
${['A','B','C','D','E','F','G','H'].map(d => `### ${d}. ${DIMS[d]}\n\n${renderDimension(d)}`).join('\n\n---\n\n')}
${['A','B','C','D','E','F','G','H','I','J'].map(d => `### ${d}. ${DIMS[d]}\n\n${renderDimension(d)}`).join('\n\n---\n\n')}
---
@@ -324,7 +339,7 @@ const canExecute = recommendation !== 'BLOCK_EXECUTION'
// Auto mode
if (autoYes) {
if (canExecute) {
SlashCommand("/workflow:execute --yes --resume-session=\"${session_id}\"")
Skill(skill="workflow:execute", args="--yes --resume-session=\"${session_id}\"")
} else {
console.log(`[--yes] BLOCK_EXECUTION - Fix ${critical_count} critical issues first.`)
}
@@ -355,8 +370,8 @@ const selection = AskUserQuestion({
// Handle selection
if (selection.includes("Execute")) {
SlashCommand("/workflow:execute --resume-session=\"${session_id}\"")
Skill(skill="workflow:execute", args="--resume-session=\"${session_id}\"")
} else if (selection === "Re-verify") {
SlashCommand("/workflow:plan-verify --session ${session_id}")
Skill(skill="workflow:plan-verify", args="--session ${session_id}")
}
```


@@ -2,7 +2,7 @@
name: plan
description: 5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs
argument-hint: "[-y|--yes] \"text description\"|file.md"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*)
group: workflow
---
@@ -28,7 +28,7 @@ This workflow runs **fully autonomously** once triggered. Phase 3 (conflict reso
5. **Phase 4 executes** → Task generation (task-generate-agent) → Reports final summary
**Task Attachment Model**:
- SlashCommand execute **expands workflow** by attaching sub-tasks to current TodoWrite
- Skill execute **expands workflow** by attaching sub-tasks to current TodoWrite
- When a sub-command is executed (e.g., `/workflow:tools:context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
- Orchestrator **executes these attached tasks** sequentially
- After completion, attached tasks are **collapsed** back to high-level phase summary
@@ -48,7 +48,7 @@ This workflow runs **fully autonomously** once triggered. Phase 3 (conflict reso
3. **Parse Every Output**: Extract required data from each command/agent output for next phase
4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
6. **Task Attachment Model**: SlashCommand execute **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
6. **Task Attachment Model**: Skill execute **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
7. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
## Execution Process
@@ -88,7 +88,7 @@ Return:
**Step 1.1: Execute** - Create or discover workflow session
```javascript
SlashCommand(command="/workflow:session:start --auto \"[structured-task-description]\"")
Skill(skill="workflow:session:start", args="--auto \"[structured-task-description]\"")
```
**Task Description Structure**:
@@ -119,7 +119,7 @@ CONTEXT: Existing user database schema, REST API endpoints
**After Phase 1**: Initialize planning-notes.md with user intent
```javascript
// Create minimal planning notes document
// Create planning notes document with N+1 context support
const planningNotesPath = `.workflow/active/${sessionId}/planning-notes.md`
const userGoal = structuredDescription.goal
const userConstraints = structuredDescription.context || "None specified"
@@ -144,6 +144,19 @@ Write(planningNotesPath, `# Planning Notes
## Consolidated Constraints (Phase 4 Input)
1. ${userConstraints}
---
## Task Generation (Phase 4)
(To be filled by action-planning-agent)
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
### Deferred
- [ ] (For N+1)
`)
```
@@ -156,7 +169,7 @@ Return to user showing Phase 1 results, then auto-continue to Phase 2
**Step 2.1: Execute** - Gather project context and analyze codebase
```javascript
SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[structured-task-description]\"")
Skill(skill="workflow:tools:context-gather", args="--session [sessionId] \"[structured-task-description]\"")
```
**Use Same Structured Description**: Pass the same structured format from Phase 1
@@ -174,7 +187,7 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st
<!-- TodoWrite: When context-gather executed, INSERT 3 context-gather tasks, mark first as in_progress -->
**TodoWrite Update (Phase 2 SlashCommand executed - tasks attached)**:
**TodoWrite Update (Phase 2 Skill executed - tasks attached)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
@@ -186,7 +199,7 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st
]
```
**Note**: SlashCommand execute **attaches** context-gather's 3 tasks. Orchestrator **executes** these tasks sequentially.
**Note**: Skill execute **attaches** context-gather's 3 tasks. Orchestrator **executes** these tasks sequentially.
<!-- TodoWrite: After Phase 2 tasks complete, REMOVE Phase 2.1-2.3, restore to orchestrator view -->
@@ -242,7 +255,7 @@ Return to user showing Phase 2 results, then auto-continue to Phase 3/4 (dependi
**Step 3.1: Execute** - Detect and resolve conflicts with CLI analysis
```javascript
SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId] --context [contextPath]")
Skill(skill="workflow:tools:conflict-resolution", args="--session [sessionId] --context [contextPath]")
```
**Input**:
@@ -263,7 +276,7 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
<!-- TodoWrite: If conflict_risk ≥ medium, INSERT 3 conflict-resolution tasks -->
**TodoWrite Update (Phase 3 SlashCommand executed - tasks attached, if conflict_risk ≥ medium)**:
**TodoWrite Update (Phase 3 Skill executed - tasks attached, if conflict_risk ≥ medium)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
@@ -276,7 +289,7 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
]
```
**Note**: SlashCommand execute **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks sequentially.
**Note**: Skill execute **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks sequentially.
<!-- TodoWrite: After Phase 3 tasks complete, REMOVE Phase 3.1-3.3, restore to orchestrator view -->
@@ -339,7 +352,7 @@ Return to user showing conflict resolution results (if executed) and selected st
**Step 3.2: Execute** - Optimize memory before proceeding
```javascript
SlashCommand(command="/compact")
Skill(skill="compact")
```
- Memory compaction is particularly important after analysis phase which may generate extensive documentation
@@ -378,7 +391,7 @@ Return to user showing conflict resolution results (if executed) and selected st
**Step 4.1: Execute** - Generate implementation plan and task JSONs
```javascript
SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]")
Skill(skill="workflow:tools:task-generate-agent", args="--session [sessionId]")
```
**CLI Execution Note**: CLI tool usage is now determined semantically by action-planning-agent based on user's task description. If user specifies "use Codex/Gemini/Qwen for X", the agent embeds `command` fields in relevant `implementation_approach` steps.
@@ -397,7 +410,7 @@ SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]"
<!-- TodoWrite: When task-generate-agent executed, ATTACH 1 agent task -->
**TodoWrite Update (Phase 4 SlashCommand executed - agent task attached)**:
**TodoWrite Update (Phase 4 Skill executed - agent task attached)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
@@ -458,13 +471,13 @@ const userChoice = AskUserQuestion({
// Execute based on user choice
if (userChoice.answers["Next Action"] === "Verify Plan Quality (Recommended)") {
console.log("\n🔍 Starting plan verification...\n");
SlashCommand(command="/workflow:plan-verify --session " + sessionId);
Skill(skill="workflow:plan-verify", args="--session " + sessionId);
} else if (userChoice.answers["Next Action"] === "Start Execution") {
console.log("\n🚀 Starting task execution...\n");
SlashCommand(command="/workflow:execute --session " + sessionId);
Skill(skill="workflow:execute", args="--session " + sessionId);
} else if (userChoice.answers["Next Action"] === "Review Status Only") {
console.log("\n📊 Displaying session status...\n");
SlashCommand(command="/workflow:status --session " + sessionId);
Skill(skill="workflow:status", args="--session " + sessionId);
}
```
@@ -476,7 +489,7 @@ if (userChoice.answers["Next Action"] === "Verify Plan Quality (Recommended)") {
### Key Principles
1. **Task Attachment** (when SlashCommand executed):
1. **Task Attachment** (when Skill executed):
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
- **Phase 2, 3**: Multiple sub-tasks attached (e.g., Phase 2.1, 2.2, 2.3)
- **Phase 4**: Single agent task attached (e.g., "Execute task-generate-agent")
@@ -582,7 +595,7 @@ User triggers: /workflow:plan "Build authentication system"
Phase 1: Session Discovery
→ sessionId extracted
Phase 2: Context Gathering (SlashCommand executed)
Phase 2: Context Gathering (Skill executed)
→ ATTACH 3 sub-tasks: ← ATTACHED
- → Analyze codebase structure
- → Identify integration points
@@ -593,7 +606,7 @@ Phase 2: Context Gathering (SlashCommand executed)
Conditional Branch: Check conflict_risk
├─ IF conflict_risk ≥ medium:
│ Phase 3: Conflict Resolution (SlashCommand executed)
│ Phase 3: Conflict Resolution (Skill executed)
│ → ATTACH 3 sub-tasks: ← ATTACHED
│ - → Detect conflicts with CLI analysis
│ - → Present conflicts to user
@@ -603,7 +616,7 @@ Conditional Branch: Check conflict_risk
└─ ELSE: Skip Phase 3, proceed to Phase 4
Phase 4: Task Generation (SlashCommand executed)
Phase 4: Task Generation (Skill executed)
→ Single agent task (no sub-tasks)
→ Agent autonomously completes internally:
(discovery → planning → output)
@@ -613,7 +626,7 @@ Return summary to user
```
**Key Points**:
- **← ATTACHED**: Tasks attached to TodoWrite when SlashCommand executed
- **← ATTACHED**: Tasks attached to TodoWrite when Skill executed
- Phase 2, 3: Multiple sub-tasks
- Phase 4: Single agent task
- **← COLLAPSED**: Sub-tasks collapsed to summary after completion (Phase 2, 3 only)


@@ -1,808 +0,0 @@
---
name: quick-plan-with-file
description: Multi-agent rapid planning with minimal documentation, conflict resolution, and actionable synthesis. Designed as a lightweight planning supplement between brainstorm and full implementation planning
argument-hint: "[-y|--yes] [-c|--continue] [-f|--from <type>] \"planning topic or task description\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm planning decisions, use aggressive parallelization, minimal user interaction.
# Workflow Quick-Plan-With-File Command (/workflow:quick-plan-with-file)
## Overview
Multi-agent rapid planning workflow with **minimal documentation overhead**. Coordinates parallel agent analysis, synthesizes conflicting perspectives into actionable decisions, and generates a lightweight implementation-ready plan.
**Core workflow**: Parse Input → Parallel Analysis → Conflict Resolution → Plan Synthesis → Output
**Key features**:
- **Plan Format Agnostic**: Consumes brainstorm conclusions, analysis recommendations, or raw task descriptions
- **Minimal Docs**: Single `plan.md` (no lengthy brainstorm.md or discussion.md)
- **Parallel Multi-Agent**: 3-4 concurrent agent perspectives (architecture, implementation, validation, risk)
- **Conflict Resolution**: Automatic conflict detection and resolution via synthesis agent
- **Actionable Output**: Direct task breakdown ready for execution
- **Session Resumable**: Continue if interrupted, checkpoint at each phase
## Usage
```bash
/workflow:quick-plan-with-file [FLAGS] <PLANNING_TOPIC>
# Flags
-y, --yes Auto-confirm decisions, use defaults
-c, --continue Continue existing session (auto-detected)
-f, --from <type> Input source type: brainstorm|analysis|task|raw
# Arguments
<planning-topic> Planning topic, task, or reference to planning artifact
# Examples
/workflow:quick-plan-with-file "Implement a distributed cache layer with Redis and in-memory backends"
/workflow:quick-plan-with-file --continue "cache layer planning"           # Continue
/workflow:quick-plan-with-file -y -f analysis "Generate an implementation plan from analysis conclusions"  # Auto mode
/workflow:quick-plan-with-file --from brainstorm BS-rate-limiting-2025-01-28 # From artifact
```
## Execution Process
```
Input Validation & Loading:
├─ Parse input (topic | artifact reference)
├─ Load artifact if referenced (synthesis.json | conclusions.json | etc.)
├─ Extract key constraints and requirements
└─ Initialize session folder and plan.md
Session Initialization:
├─ Create .workflow/.planning/{sessionId}/
├─ Initialize plan.md with input summary
├─ Parse existing output (if --from artifact)
└─ Define planning dimensions & focus areas
Phase 1: Parallel Multi-Agent Analysis (concurrent)
├─ Agent 1 (Architecture): High-level design & decomposition
├─ Agent 2 (Implementation): Technical approach & feasibility
├─ Agent 3 (Validation): Risk analysis & edge cases
├─ Agent 4 (Decision): Recommendations & tradeoffs
└─ Aggregate findings into perspectives.json
Phase 2: Conflict Detection & Resolution
├─ Analyze agent perspectives for contradictions
├─ Identify critical decision points
├─ Generate synthesis via arbitration agent
├─ Document conflicts and resolutions
└─ Update plan.md with decisive recommendations
Phase 3: Plan Synthesis
├─ Consolidate all insights
├─ Generate actionable task breakdown
├─ Create execution strategy
├─ Document assumptions & risks
└─ Generate synthesis.md with ready-to-execute tasks
Output:
├─ .workflow/.planning/{sessionId}/plan.md (minimal, actionable)
├─ .workflow/.planning/{sessionId}/perspectives.json (agent findings)
├─ .workflow/.planning/{sessionId}/conflicts.json (decision points)
├─ .workflow/.planning/{sessionId}/synthesis.md (task breakdown)
└─ Optional: Feed to /workflow:unified-execute-with-file
```
## Implementation
### Session Setup & Input Loading
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString() // shift epoch +8h so the ISO string reads as UTC+8 wall-clock time
// Parse arguments
const planningTopic = "$PLANNING_TOPIC"
const inputType = $ARGUMENTS.match(/--from\s+(\w+)/)?.[1] || 'raw'
const isAutoMode = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const isContinue = $ARGUMENTS.includes('--continue') || $ARGUMENTS.includes('-c')
// Auto-detect artifact if referenced
let artifact = null
let artifactContent = null
if (inputType === 'brainstorm' || planningTopic.startsWith('BS-')) {
const sessionId = planningTopic
const synthesisPath = `.workflow/.brainstorm/${sessionId}/synthesis.json`
if (fs.existsSync(synthesisPath)) {
artifact = { type: 'brainstorm', path: synthesisPath }
artifactContent = JSON.parse(Read(synthesisPath))
}
} else if (inputType === 'analysis' || planningTopic.startsWith('ANL-')) {
const sessionId = planningTopic
const conclusionsPath = `.workflow/.analysis/${sessionId}/conclusions.json`
if (fs.existsSync(conclusionsPath)) {
artifact = { type: 'analysis', path: conclusionsPath }
artifactContent = JSON.parse(Read(conclusionsPath))
}
}
// Generate session ID
const planSlug = planningTopic.toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `PLAN-${planSlug}-${dateStr}`
const sessionFolder = `.workflow/.planning/${sessionId}`
// Session mode detection
const sessionExists = fs.existsSync(sessionFolder)
const hasPlan = sessionExists && fs.existsSync(`${sessionFolder}/plan.md`)
const mode = (hasPlan || isContinue) ? 'continue' : 'new'
if (!sessionExists) {
  Bash(`mkdir -p ${sessionFolder}`)
}
```
---
### Phase 1: Initialize plan.md (Minimal)
```markdown
# Quick Planning Session
**Session ID**: ${sessionId}
**Topic**: ${planningTopic}
**Started**: ${getUtc8ISOString()}
**Mode**: ${mode}
---
## Input Context
${artifact ? `
**Source**: ${artifact.type} artifact
**Path**: ${artifact.path}
**Artifact Summary**:
${artifact.type === 'brainstorm' ? `
- Topic: ${artifactContent.topic}
- Top Ideas: ${artifactContent.top_ideas?.length || 0}
- Key Insights: ${artifactContent.key_insights?.slice(0, 2).join(', ') || 'N/A'}
` : artifact.type === 'analysis' ? `
- Topic: ${artifactContent.topic}
- Key Conclusions: ${artifactContent.key_conclusions?.length || 0}
- Recommendations: ${artifactContent.recommendations?.length || 0}
` : ''}
` : `
**User Input**: ${planningTopic}
`}
---
## Planning Dimensions
*To be populated after agent analysis*
---
## Key Decisions
*Conflict resolution and recommendations - to be populated*
---
## Implementation Plan
*Task breakdown - to be populated after synthesis*
---
## Progress
- [ ] Multi-agent analysis
- [ ] Conflict detection
- [ ] Plan synthesis
- [ ] Ready for execution
```
---
### Phase 2: Parallel Multi-Agent Analysis
```javascript
const analysisPrompt = artifact
? `Convert ${artifact.type} artifact to planning requirements and execute parallel analysis`
: `Create planning breakdown for: ${planningTopic}`
// Prepare context for agents
const agentContext = {
topic: planningTopic,
artifact: artifact ? {
type: artifact.type,
summary: extractArtifactSummary(artifactContent)
} : null,
planning_focus: determineFocusAreas(planningTopic),
constraints: extractConstraints(planningTopic, artifactContent)
}
// Agent 1: Architecture & Design
const archPromise = Bash({
command: `ccw cli -p "
PURPOSE: Architecture & high-level design planning for '${planningTopic}'
Success: Clear component decomposition, interface design, and data flow
TASK:
• Decompose problem into major components/modules
• Identify architectural patterns and integration points
• Design interfaces and data models
• Assess scalability and maintainability implications
• Propose architectural approach with rationale
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Component decomposition (box diagram in text)
- Module interfaces and responsibilities
- Data flow between components
- Architectural patterns applied
- Scalability assessment (1-5 rating)
- Risks from architectural perspective
CONSTRAINTS: Focus on long-term maintainability
" --tool gemini --mode analysis`,
run_in_background: true
})
// Agent 2: Implementation & Feasibility
const implPromise = Bash({
command: `ccw cli -p "
PURPOSE: Implementation approach & technical feasibility for '${planningTopic}'
Success: Concrete implementation strategy with realistic resource estimates
TASK:
• Evaluate technical feasibility of approach
• Identify required technologies and dependencies
• Estimate effort: high/medium/low + rationale
• Suggest implementation phases and milestones
• Highlight technical blockers or challenges
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Technology stack recommendation
- Implementation complexity: high|medium|low with justification
- Estimated effort breakdown (analysis/design/coding/testing/deployment)
- Key technical decisions with tradeoffs
- Potential blockers and mitigations
- Suggested implementation phases
- Reusable components or libraries
CONSTRAINTS: Realistic with current tech stack
" --tool codex --mode analysis`,
run_in_background: true
})
// Agent 3: Risk & Validation
const riskPromise = Bash({
command: `ccw cli -p "
PURPOSE: Risk analysis and validation strategy for '${planningTopic}'
Success: Comprehensive risk matrix with testing strategy
TASK:
• Identify technical risks and failure scenarios
• Assess business/timeline risks
• Define validation/testing strategy
• Suggest monitoring and observability requirements
• Rate overall risk level (low/medium/high)
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Risk matrix (likelihood × impact, 1-5 each)
- Top 3 technical risks with mitigations
- Top 3 timeline/resource risks with mitigations
- Testing strategy (unit/integration/e2e/performance)
- Deployment strategy and rollback plan
- Monitoring/observability requirements
- Overall risk rating with confidence (low/medium/high)
CONSTRAINTS: Be realistic, not pessimistic
" --tool claude --mode analysis`,
run_in_background: true
})
// Agent 4: Decisions & Recommendations
const decisionPromise = Bash({
command: `ccw cli -p "
PURPOSE: Strategic decisions and execution recommendations for '${planningTopic}'
Success: Clear recommended approach with tradeoff analysis
TASK:
• Synthesize all considerations into recommendations
• Clearly identify critical decision points
• Outline key tradeoffs (speed vs quality, scope vs timeline, etc.)
• Propose go/no-go decision criteria
• Suggest execution strategy and sequencing
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Primary recommendation with strong rationale
- Alternative approaches with pros/cons
- 2-3 critical decision points with recommended choices
- Key tradeoffs and what we're optimizing for
- Success metrics and go/no-go criteria
- Suggested execution sequencing
- Resource requirements and dependencies
CONSTRAINTS: Focus on actionable decisions, not analysis
" --tool gemini --mode analysis`,
run_in_background: true
})
// Wait for all parallel analyses
const [archResult, implResult, riskResult, decisionResult] = await Promise.all([
archPromise, implPromise, riskPromise, decisionPromise
])
```
---
### Phase 3: Aggregate Perspectives
```javascript
// Parse and structure agent findings
const perspectives = {
session_id: sessionId,
timestamp: getUtc8ISOString(),
topic: planningTopic,
source_artifact: artifact?.type || 'raw',
architecture: {
source: 'gemini (design)',
components: extractComponents(archResult),
interfaces: extractInterfaces(archResult),
patterns: extractPatterns(archResult),
scalability_rating: extractRating(archResult, 'scalability'),
risks_from_design: extractRisks(archResult)
},
implementation: {
source: 'codex (pragmatic)',
technology_stack: extractStack(implResult),
complexity: extractComplexity(implResult),
effort_breakdown: extractEffort(implResult),
blockers: extractBlockers(implResult),
phases: extractPhases(implResult)
},
validation: {
source: 'claude (systematic)',
risk_matrix: extractRiskMatrix(riskResult),
top_risks: extractTopRisks(riskResult),
testing_strategy: extractTestingStrategy(riskResult),
deployment_strategy: extractDeploymentStrategy(riskResult),
monitoring_requirements: extractMonitoring(riskResult),
overall_risk_rating: extractRiskRating(riskResult)
},
recommendation: {
source: 'gemini (synthesis)',
primary_approach: extractPrimaryApproach(decisionResult),
alternatives: extractAlternatives(decisionResult),
critical_decisions: extractDecisions(decisionResult),
tradeoffs: extractTradeoffs(decisionResult),
success_criteria: extractCriteria(decisionResult),
execution_sequence: extractSequence(decisionResult)
},
analysis_timestamp: getUtc8ISOString()
}
Write(`${sessionFolder}/perspectives.json`, JSON.stringify(perspectives, null, 2))
```
---
### Phase 4: Conflict Detection & Resolution
```javascript
// Analyze for conflicts and contradictions
const conflicts = detectConflicts({
arch_vs_impl: compareArchitectureAndImplementation(perspectives),
design_vs_risk: compareDesignAndRisk(perspectives),
effort_vs_scope: compareEffortAndScope(perspectives),
timeline_implications: extractTimingConflicts(perspectives)
})
// If conflicts exist, invoke arbitration agent
if (conflicts.critical.length > 0) {
const arbitrationResult = await Bash({
command: `ccw cli -p "
PURPOSE: Resolve planning conflicts and generate unified recommendation
Input: ${JSON.stringify(conflicts, null, 2)}
TASK:
• Review all conflicts presented
• Recommend resolution for each critical conflict
• Explain tradeoff choices
• Identify what we're optimizing for (speed/quality/risk/resource)
• Generate unified execution strategy
MODE: analysis
EXPECTED:
- For each conflict: recommended resolution + rationale
- Unified optimization criteria (what matters most?)
- Final recommendation with confidence level
- Any unresolved tensions that need user input
CONSTRAINTS: Be decisive, not fence-sitting
" --tool gemini --mode analysis`,
run_in_background: false
})
const conflictResolution = {
detected_conflicts: conflicts,
arbitration_result: arbitrationResult,
timestamp: getUtc8ISOString()
}
Write(`${sessionFolder}/conflicts.json`, JSON.stringify(conflictResolution, null, 2))
}
```
---
### Phase 5: Plan Synthesis & Task Breakdown
```javascript
const synthesisPrompt = `
Given the planning context:
- Topic: ${planningTopic}
- Architecture: ${perspectives.architecture.components.map(c => c.name).join(', ')}
- Implementation Complexity: ${perspectives.implementation.complexity}
- Timeline Risk: ${perspectives.validation.overall_risk_rating}
- Primary Recommendation: ${perspectives.recommendation.primary_approach.summary}
Generate a minimal but complete implementation plan with:
1. Task breakdown (5-8 major tasks)
2. Dependencies between tasks
3. For each task: what needs to be done, why, and key considerations
4. Success criteria for the entire effort
5. Known risks and mitigation strategies
Output as structured task list ready for execution.
`
const synthesisResult = await Bash({
command: `ccw cli -p "${synthesisPrompt}" --tool gemini --mode analysis`,
run_in_background: false
})
// Parse synthesis and generate task breakdown
const tasks = parseTaskBreakdown(synthesisResult)
const synthesis = {
session_id: sessionId,
planning_topic: planningTopic,
completed: getUtc8ISOString(),
// Summary
executive_summary: perspectives.recommendation.primary_approach.summary,
optimization_focus: extractOptimizationFocus(perspectives),
// Architecture
architecture_approach: perspectives.architecture.patterns[0] || 'TBD',
key_components: perspectives.architecture.components.slice(0, 5),
// Implementation
technology_stack: perspectives.implementation.technology_stack,
complexity_level: perspectives.implementation.complexity,
estimated_effort: perspectives.implementation.effort_breakdown,
  // Risks & Validation
  overall_risk_rating: perspectives.validation.overall_risk_rating,
  top_risks: perspectives.validation.top_risks.slice(0, 3),
testing_approach: perspectives.validation.testing_strategy,
// Execution
phases: perspectives.implementation.phases,
critical_path_tasks: extractCriticalPath(tasks),
total_tasks: tasks.length,
// Task breakdown (ready for unified-execute-with-file)
tasks: tasks.map(task => ({
id: task.id,
title: task.title,
description: task.description,
type: task.type,
dependencies: task.dependencies,
effort_estimate: task.effort,
success_criteria: task.criteria
}))
}
Write(`${sessionFolder}/synthesis.md`, formatSynthesisMarkdown(synthesis))
Write(`${sessionFolder}/synthesis.json`, JSON.stringify(synthesis, null, 2))
```
---
### Phase 6: Update plan.md with Results
```markdown
# Quick Planning Session
**Session ID**: ${sessionId}
**Topic**: ${planningTopic}
**Started**: ${startTime}
**Completed**: ${completionTime}
---
## Executive Summary
${synthesis.executive_summary}
**Optimization Focus**: ${synthesis.optimization_focus}
**Complexity**: ${synthesis.complexity_level}
**Estimated Effort**: ${formatEffort(synthesis.estimated_effort)}
---
## Architecture
**Primary Pattern**: ${synthesis.architecture_approach}
**Key Components**:
${synthesis.key_components.map((c, i) => `${i+1}. ${c.name}: ${c.responsibility}`).join('\n')}
---
## Implementation Strategy
**Technology Stack**:
${synthesis.technology_stack.map(t => `- ${t}`).join('\n')}
**Phases**:
${synthesis.phases.map((p, i) => `${i+1}. ${p.name} (${p.effort})`).join('\n')}
---
## Risk Assessment
**Overall Risk Level**: ${synthesis.overall_risk_rating}
**Top 3 Risks**:
${synthesis.top_risks.map((r, i) => `
${i+1}. **${r.title}** (Impact: ${r.impact})
- Mitigation: ${r.mitigation}
`).join('\n')}
**Testing Approach**: ${synthesis.testing_approach}
---
## Execution Plan
**Total Tasks**: ${synthesis.total_tasks}
**Critical Path**: ${synthesis.critical_path_tasks.map(t => t.id).join(' → ')}
### Task Breakdown
${synthesis.tasks.map((task, i) => `
${i+1}. **${task.id}: ${task.title}** (Effort: ${task.effort_estimate})
- ${task.description}
- Depends on: ${task.dependencies.join(', ') || 'none'}
- Success: ${task.success_criteria}
`).join('\n')}
---
## Next Steps
**Recommended**: Execute with \`/workflow:unified-execute-with-file\` using:
\`\`\`
/workflow:unified-execute-with-file -p ${sessionFolder}/synthesis.json
\`\`\`
---
## Artifacts
- **Perspectives**: ${sessionFolder}/perspectives.json (all agent findings)
- **Conflicts**: ${sessionFolder}/conflicts.json (decision points and resolutions)
- **Synthesis**: ${sessionFolder}/synthesis.json (task breakdown for execution)
```
---
## Session Folder Structure
```
.workflow/.planning/{sessionId}/
├── plan.md # Minimal, actionable planning doc
├── perspectives.json # Multi-agent findings (architecture, impl, risk, decision)
├── conflicts.json # Detected conflicts and resolutions (if any)
├── synthesis.json # Task breakdown ready for execution
└── synthesis.md # Human-readable execution plan
```
---
## Multi-Agent Roles
| Agent | Focus | Input | Output |
|-------|-------|-------|--------|
| **Gemini (Design)** | Architecture & design patterns | Topic + constraints | Components, interfaces, patterns, scalability |
| **Codex (Pragmatic)** | Implementation reality | Topic + architecture | Tech stack, effort, phases, blockers |
| **Claude (Validation)** | Risk & testing | Architecture + impl | Risk matrix, test strategy, monitoring |
| **Gemini (Decision)** | Synthesis & strategy | All findings | Recommendations, tradeoffs, execution plan |
---
## Conflict Resolution Strategy
**Auto-resolved conflict types**:
1. **Architecture vs Implementation**: Recommend design-for-feasibility approach
2. **Scope vs Timeline**: Prioritize critical path, defer nice-to-haves
3. **Quality vs Speed**: Suggest iterative approach (MVP + iterations)
4. **Resource vs Effort**: Identify parallelizable tasks
**Require User Input for**:
- Strategic choices (which feature to prioritize?)
- Tool/technology decisions with strong team preferences
- Budget/resource constraints not stated in planning topic
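The dispatch logic implied above can be sketched as a small lookup table. This is a hedged illustration only: the conflict `type` keys, the `requiresPreference` flag, and `resolveConflict` itself are assumptions for this sketch, not part of the workflow's actual schema.

```javascript
// Illustrative auto-resolution dispatch (keys and field names are assumptions)
const AUTO_RESOLUTIONS = {
  arch_vs_impl: 'design-for-feasibility',
  scope_vs_timeline: 'prioritize-critical-path',
  quality_vs_speed: 'mvp-plus-iterations',
  resource_vs_effort: 'parallelize-tasks'
};

function resolveConflict(conflict) {
  // Strategic, tooling, and budget conflicts always escalate to the user
  if (conflict.requiresPreference) {
    return { resolution: 'ask-user', reason: conflict.summary };
  }
  const strategy = AUTO_RESOLUTIONS[conflict.type];
  return strategy
    ? { resolution: strategy, auto: true }
    : { resolution: 'ask-user', reason: 'unknown conflict type' };
}
```

Anything without a known auto-resolution falls through to user input, matching the escalation list above.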
---
## Continue & Resume
```bash
/workflow:quick-plan-with-file --continue "planning-topic"
```
When continuing:
1. Load existing plan.md and perspectives.json
2. Identify what's incomplete
3. Re-run affected agents (if planning has changed)
4. Update plan.md with new findings
5. Generate updated synthesis.json
---
## Integration Flow
```
Input Source:
├─ Raw task description
├─ Brainstorm synthesis.json
└─ Analysis conclusions.json
        ↓
/workflow:quick-plan-with-file
        ↓
plan.md + synthesis.json
        ↓
/workflow:unified-execute-with-file
        ↓
Implementation
```
---
## Usage Patterns
### Pattern 1: Quick Planning from Task
```bash
# User has a task, needs rapid multi-perspective plan
/workflow:quick-plan-with-file -y "Implement a real-time notification system supporting push and WebSocket"
# → Creates plan in ~5 minutes
# → Ready for execution
```
### Pattern 2: Convert Brainstorm to Executable Plan
```bash
# User completed brainstorm, wants to convert top idea to executable plan
/workflow:quick-plan-with-file --from brainstorm BS-notifications-2025-01-28
# → Reads synthesis.json from brainstorm
# → Generates implementation plan
# → Ready for unified-execute-with-file
```
### Pattern 3: From Analysis to Implementation
```bash
# Analysis completed, now need execution plan
/workflow:quick-plan-with-file --from analysis ANL-auth-architecture-2025-01-28
# → Reads conclusions.json from analysis
# → Generates planning with recommendations
# → Output task breakdown
```
### Pattern 4: Planning with Interactive Conflict Resolution
```bash
# Full planning with user involvement in decision-making
/workflow:quick-plan-with-file "New payment flow integration"
# → Without -y flag
# → After conflict detection, asks user about tradeoffs
# → Generates plan based on user preferences
```
---
## Comparison with Other Workflows
| Feature | brainstorm | analyze | quick-plan | plan |
|---------|-----------|---------|-----------|------|
| **Purpose** | Ideation | Investigation | Lightweight planning | Detailed planning |
| **Multi-agent** | 3 perspectives | 2 CLI + explore | 4 concurrent agents | N/A (single) |
| **Documentation** | Extensive | Extensive | Minimal | Standard |
| **Output** | Ideas + synthesis | Conclusions | Executable tasks | IMPL_PLAN |
| **Typical Duration** | 30-60 min | 20-30 min | 5-10 min | 15-20 min |
| **User Interaction** | High (multi-round) | High (Q&A) | Low (decisions) | Medium |
---
## Error Handling
| Situation | Action |
|-----------|--------|
| Agents conflict on approach | Arbitration agent decides, document in conflicts.json |
| Missing critical files | Continue with available context, note limitations |
| Insufficient task breakdown | Ask user for planning focus areas |
| Effort estimate too high | Suggest MVP approach or phasing |
| Unclear requirements | Ask clarifying questions via AskUserQuestion |
| Agent timeout | Use last successful result, note partial analysis |
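The "agent timeout" row amounts to racing the agent call against a deadline and falling back to the last successful result. A sketch under that assumption; `withTimeout` is a hypothetical helper, not an existing API:

```javascript
// Race an agent call against a deadline; on timeout, reuse the last good
// result and flag the analysis as partial (per the error-handling table)
async function withTimeout(agentCall, ms, lastGoodResult) {
  const deadline = new Promise((resolve) =>
    setTimeout(() => resolve({ timedOut: true }), ms)
  );
  const result = await Promise.race([agentCall(), deadline]);
  if (result && result.timedOut) {
    return {
      ...lastGoodResult,
      partial: true,
      note: 'agent timed out; using last successful result'
    };
  }
  return result;
}
```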
---
## Best Practices
1. **Use when**:
- You have clarity on WHAT but not HOW
- Need rapid multi-perspective planning
- Converting brainstorm/analysis into execution
- Want minimal planning overhead
2. **Avoid when**:
- Requirements are highly ambiguous (use brainstorm instead)
- Need deep investigation (use analyze instead)
- Want extensive planning document (use plan instead)
- No tech stack clarity (use analyze first)
3. **For best results**:
- Provide complete task/requirement description
- Include constraints and success criteria
- Specify preferences (speed vs quality vs risk)
- Review conflicts.json and make conscious tradeoff decisions
---
## Next Steps After Planning
### Feed to Execution
```bash
/workflow:unified-execute-with-file -p .workflow/.planning/{sessionId}/synthesis.json
```
### Detailed Planning if Needed
```bash
/workflow:plan "Based on quick-plan recommendations..."
```
### Continuous Refinement
```bash
/workflow:quick-plan-with-file --continue "{topic}" # Update plan with new constraints
```

---
name: review-cycle-fix
description: Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.
argument-hint: "<export-file|review-dir> [--resume] [--max-iterations=N] [--batch-size=N]"
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*), Task(*), Edit(*), Write(*)
---
# Workflow Review-Cycle-Fix Command
# Custom max retry attempts per finding
/workflow:review-cycle-fix .workflow/active/WFS-123/.review/ --max-iterations=5
# Custom batch size for parallel planning (default: 5 findings per batch)
/workflow:review-cycle-fix .workflow/active/WFS-123/.review/ --batch-size=3
```
**Fix Source**: Exported findings from review cycle dashboard
**Output Directory**: `{review-dir}/fixes/{fix-session-id}/` (within session .review/)
**Default Max Iterations**: 3 (per finding, adjustable)
**Default Batch Size**: 5 (findings per planning batch, adjustable)
**Max Parallel Agents**: 10 (concurrent planning agents)
**CLI Tools**: @cli-planning-agent (planning), @cli-execute-agent (fixing)
## What & Why
### Core Concept
Automated fix orchestrator with **parallel planning architecture**: Multiple AI agents analyze findings concurrently in batches, then coordinate parallel/serial execution. Generates fix timeline with intelligent grouping and dependency analysis, executes fixes with conservative test verification.
**Fix Process**:
- **Batching Phase (1.5)**: Orchestrator groups findings by file+dimension similarity, creates batches
- **Planning Phase (2)**: Up to 10 agents plan batches in parallel, generate partial plans, orchestrator aggregates
- **Execution Phase (3)**: Main orchestrator coordinates agents per aggregated timeline stages
- **Parallel Efficiency**: Customizable batch size (default: 5), MAX_PARALLEL=10 agents
- **No rigid structure**: Adapts to task requirements, not bound to fixed JSON format
**vs Manual Fixing**:
- **Manual**: Developer reviews findings one-by-one, fixes sequentially
- **Automated**: AI groups related issues, multiple agents plan in parallel, executes in optimal parallel/serial order with automatic test verification
### Value Proposition
1. **Parallel Planning**: Multiple agents analyze findings concurrently, reducing planning time for large batches (10+ findings)
2. **Intelligent Batching**: Semantic similarity grouping ensures related findings are analyzed together
3. **Multi-stage Coordination**: Supports complex parallel + serial execution with cross-batch dependency management
4. **Conservative Safety**: Mandatory test verification with automatic rollback on failure
5. **Resume Support**: Checkpoint-based recovery for interrupted sessions
### Orchestrator Boundary (CRITICAL)
- **ONLY command** for automated review finding fixes
- Manages: Intelligent batching (Phase 1.5), parallel planning coordination (launch N agents), plan aggregation, stage-based execution, agent scheduling, progress tracking
- Delegates: Batch planning to @cli-planning-agent, fix execution to @cli-execute-agent
### Execution Flow
Phase 1: Discovery & Initialization
└─ Validate export file, create fix session structure, initialize state files
Phase 1.5: Intelligent Grouping & Batching
├─ Analyze findings metadata (file, dimension, severity)
├─ Group by semantic similarity (file proximity + dimension affinity)
├─ Create batches respecting --batch-size (default: 5)
└─ Output: Finding batches for parallel planning
Phase 2: Parallel Planning Coordination (@cli-planning-agent × N)
├─ Launch MAX_PARALLEL planning agents concurrently (default: 10)
├─ Each agent processes one batch:
│ ├─ Analyze findings for patterns and dependencies
│ ├─ Group by file + dimension + root cause similarity
│ ├─ Determine execution strategy (parallel/serial/hybrid)
│ ├─ Generate fix timeline with stages
│ └─ Output: partial-plan-{batch-id}.json
├─ Collect results from all agents
└─ Aggregate: Merge partial plans → fix-plan.json (resolve cross-batch dependencies)
Phase 3: Execution Orchestration (Stage-based)
For each timeline stage:
| Agent | Responsibility |
|-------|---------------|
| **Orchestrator** | Input validation, session management, intelligent batching (Phase 1.5), parallel planning coordination (launch N agents), plan aggregation (merge partial plans, resolve cross-batch dependencies), stage-based execution scheduling, progress tracking, result aggregation |
| **@cli-planning-agent** | Batch findings analysis, intelligent grouping (file+dimension+root cause), execution strategy determination (parallel/serial/hybrid), timeline generation with dependency mapping, partial plan output |
| **@cli-execute-agent** | Fix execution per group, code context analysis, Edit tool operations, test verification, git rollback on failure, completion JSON generation |
## Enhanced Features
### 1. Parallel Planning Architecture
**Batch Processing Strategy**:
| Phase | Agent Count | Input | Output | Purpose |
|-------|-------------|-------|--------|---------|
| **Batching (1.5)** | Orchestrator | All findings | Finding batches | Semantic grouping by file+dimension, respecting --batch-size |
| **Planning (2)** | N agents (≤10) | 1 batch each | partial-plan-{batch-id}.json | Analyze batch in parallel, generate execution groups and timeline |
| **Aggregation (2)** | Orchestrator | All partial plans | fix-plan.json | Merge timelines, resolve cross-batch dependencies |
| **Execution (3)** | M agents (dynamic) | 1 group each | fix-progress-{N}.json | Execute fixes per aggregated plan with test verification |
**Benefits**:
- **Speed**: N agents plan concurrently, reducing planning time for large batches
- **Scalability**: MAX_PARALLEL=10 prevents resource exhaustion
- **Flexibility**: Batch size customizable via --batch-size (default: 5)
- **Isolation**: Each planning agent focuses on related findings (semantic grouping)
- **Reusable**: Aggregated plan can be re-executed without re-planning
### 2. Intelligent Grouping Strategy
- Session creation: Generate fix-session-id (`fix-{timestamp}`)
- Directory structure: Create `{review-dir}/fixes/{fix-session-id}/` with subdirectories
- State files: Initialize active-fix-session.json (session marker)
- TodoWrite initialization: Set up 5-phase tracking (including Phase 1.5)
**Phase 1.5: Intelligent Grouping & Batching**
- Load all findings metadata (id, file, dimension, severity, title)
- Semantic similarity analysis:
- Primary: Group by file proximity (same file or related modules)
- Secondary: Group by dimension affinity (same review dimension)
- Tertiary: Analyze title/description similarity (root cause clustering)
- Create batches respecting --batch-size (default: 5 findings per batch)
- Balance workload: Distribute high-severity findings across batches
- Output: Array of finding batches for parallel planning
**Phase 2: Parallel Planning Coordination**
- Determine concurrency: MIN(batch_count, MAX_PARALLEL=10)
- For each batch chunk (≤10 batches):
- Launch all agents in parallel with run_in_background=true
- Pass batch findings + project context + batch_id to each agent
- Each agent outputs: partial-plan-{batch-id}.json
- Collect results via TaskOutput (blocking until all complete)
- Aggregate partial plans:
- Merge execution groups (renumber group_ids sequentially: G1, G2, ...)
- Merge timelines (detect cross-batch dependencies, adjust stages)
- Resolve conflicts (same file in multiple batches → serialize)
- Generate final fix-plan.json with aggregated metadata
- TodoWrite update: Mark planning complete, start execution
**Phase 3: Execution Orchestration**
.workflow/active/WFS-{session-id}/.review/
├── fix-export-{timestamp}.json # Exported findings (input)
└── fixes/{fix-session-id}/
├── partial-plan-1.json # Batch 1 partial plan (planning agent 1 output)
├── partial-plan-2.json # Batch 2 partial plan (planning agent 2 output)
├── partial-plan-N.json # Batch N partial plan (planning agent N output)
├── fix-plan.json # Aggregated execution plan (orchestrator merges partials)
├── fix-progress-1.json # Group 1 progress (planning agent init → agent updates)
├── fix-progress-2.json # Group 2 progress (planning agent init → agent updates)
├── fix-progress-3.json # Group 3 progress (planning agent init → agent updates)
```
**File Producers**:
- **Orchestrator**: Batches findings (Phase 1.5), aggregates partial plans → `fix-plan.json` (Phase 2), launches parallel planning agents
- **Planning Agents (N)**: Each outputs `partial-plan-{batch-id}.json` + initializes `fix-progress-*.json` for assigned groups
- **Execution Agents (M)**: Update assigned `fix-progress-{N}.json` in real-time
### Agent Invocation Template
**Phase 1.5: Intelligent Batching** (Orchestrator):
```javascript
// Load findings
const findings = JSON.parse(Read(exportFile));
const batchSize = flags.batchSize || 5;
// Semantic similarity analysis: group by file+dimension
const batches = [];
const grouped = new Map(); // key: "${file}:${dimension}"
for (const finding of findings) {
const key = `${finding.file || 'unknown'}:${finding.dimension || 'general'}`;
if (!grouped.has(key)) grouped.set(key, []);
grouped.get(key).push(finding);
}
// Create batches respecting batchSize
for (const [key, group] of grouped) {
while (group.length > 0) {
const batch = group.splice(0, batchSize);
batches.push({
batch_id: batches.length + 1,
findings: batch,
metadata: { primary_file: batch[0].file, primary_dimension: batch[0].dimension }
});
}
}
console.log(`Created ${batches.length} batches (up to ${batchSize} findings each)`);
```
**Phase 2: Parallel Planning** (Orchestrator launches N agents):
```javascript
const MAX_PARALLEL = 10;
const partialPlans = [];
// Process batches in chunks of MAX_PARALLEL
for (let i = 0; i < batches.length; i += MAX_PARALLEL) {
const chunk = batches.slice(i, i + MAX_PARALLEL);
const taskIds = [];
// Launch agents in parallel (run_in_background=true)
for (const batch of chunk) {
const taskId = Task({
subagent_type: "cli-planning-agent",
run_in_background: true,
description: `Plan batch ${batch.batch_id}: ${batch.findings.length} findings`,
prompt: planningPrompt(batch) // See Planning Agent template below
});
taskIds.push({ taskId, batch });
}
console.log(`Launched ${taskIds.length} planning agents...`);
// Collect results from this chunk (blocking)
for (const { taskId, batch } of taskIds) {
const result = TaskOutput({ task_id: taskId, block: true });
const partialPlan = JSON.parse(Read(`${sessionDir}/partial-plan-${batch.batch_id}.json`));
partialPlans.push(partialPlan);
updateTodo(`Batch ${batch.batch_id}`, 'completed');
}
}
// Aggregate partial plans → fix-plan.json
const aggregatedPlan = { groups: [], timeline: [], metadata: { created_at: new Date().toISOString() } };
let groupCounter = 1;
const groupIdMap = new Map();
for (const partial of partialPlans) {
for (const group of partial.groups) {
const newGroupId = `G${groupCounter}`;
groupIdMap.set(`${partial.batch_id}:${group.group_id}`, newGroupId);
aggregatedPlan.groups.push({ ...group, group_id: newGroupId, progress_file: `fix-progress-${groupCounter}.json` });
groupCounter++;
}
}
// Merge timelines, resolve cross-batch conflicts (shared files → serialize)
let stageCounter = 1;
for (const partial of partialPlans) {
for (const stage of partial.timeline) {
aggregatedPlan.timeline.push({
...stage, stage_id: stageCounter,
groups: stage.groups.map(gid => groupIdMap.get(`${partial.batch_id}:${gid}`))
});
stageCounter++;
}
}
// Write aggregated plan + initialize progress files
Write(`${sessionDir}/fix-plan.json`, JSON.stringify(aggregatedPlan, null, 2));
for (let i = 1; i <= aggregatedPlan.groups.length; i++) {
Write(`${sessionDir}/fix-progress-${i}.json`, JSON.stringify(initProgressFile(aggregatedPlan.groups[i-1]), null, 2));
}
```
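The merge step above notes "shared files → serialize" without showing the detection. A minimal sketch, assuming each aggregated group carries a `files` array (an assumption; the actual group schema comes from the plan template):

```javascript
// Find files touched by more than one group; those groups must be
// placed in serial stages rather than run in parallel
function findSharedFileConflicts(groups) {
  const byFile = new Map();
  for (const group of groups) {
    for (const file of group.files || []) {
      if (!byFile.has(file)) byFile.set(file, []);
      byFile.get(file).push(group.group_id);
    }
  }
  return [...byFile.entries()]
    .filter(([, ids]) => ids.length > 1)
    .map(([file, ids]) => ({ file, groups: ids }));
}
```

Each returned entry names the contested file and the groups that must be serialized around it.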
**Planning Agent (Batch Mode - Partial Plan Only)**:
```javascript
Task({
subagent_type: "cli-planning-agent",
run_in_background: true,
description: `Plan batch ${batch.batch_id}: ${batch.findings.length} findings`,
prompt: `
## Task Objective
Analyze code review findings in batch ${batch.batch_id} and generate **partial** execution plan.
## Input Data
Review Session: ${reviewId}
Fix Session ID: ${fixSessionId}
Batch ID: ${batch.batch_id}
Batch Findings: ${batch.findings.length}
Findings:
${JSON.stringify(batch.findings, null, 2)}
Project Context:
- Structure: ${projectStructure}
## Output Requirements
### 1. partial-plan-${batch.batch_id}.json
Generate partial execution plan with structure:
{
"batch_id": ${batch.batch_id},
"groups": [...], // Groups created from batch findings (use local IDs: G1, G2, ...)
"timeline": [...], // Local timeline for this batch only
"metadata": {
"findings_count": ${batch.findings.length},
"groups_count": N,
"created_at": "ISO-8601-timestamp"
}
}
**Key Generation Rules**:
- **Groups**: Create groups with local IDs (G1, G2, ...) using intelligent grouping (file+dimension+root cause)
- **Timeline**: Define stages for this batch only (local dependencies within batch)
- **Progress Files**: DO NOT generate fix-progress-*.json here (orchestrator handles after aggregation)
## Analysis Requirements
@@ -318,7 +451,7 @@ Group findings using these criteria (in priority order):
- Consider test isolation (different test suites → different groups)
- Balance workload across groups for parallel execution
### Execution Strategy Determination
### Execution Strategy Determination (Local Only)
**Parallel Mode**: Use when groups are independent, no shared files
**Serial Mode**: Use when groups have dependencies or shared resources
@@ -346,21 +479,16 @@ For each group, determine:
## Output Files
Write to ${sessionDir}:
- ./fix-plan.json
- ./fix-progress-1.json
- ./fix-progress-2.json
- ./fix-progress-{N}.json (as many as groups created)
- ./partial-plan-${batch.batch_id}.json
## Quality Checklist
Before finalizing outputs:
- ✅ All findings assigned to exactly one group
- ✅ Group dependencies correctly identified
- ✅ Timeline stages respect dependencies
- ✅ All progress files have complete initial structure
- ✅ All batch findings assigned to exactly one group
- ✅ Group dependencies (within batch) correctly identified
- ✅ Timeline stages respect local dependencies
- ✅ Test patterns are valid and specific
- ✅ Risk assessments are realistic
- ✅ Estimated times are reasonable
`
})
```
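The partial-plan structure described in the prompt above can be sanity-checked before aggregation. The following is an illustrative sketch (not part of the workflow tooling); field names mirror the `partial-plan-${batch.batch_id}.json` template:

```javascript
// Minimal validator for a partial plan produced by one planning agent.
// Checks only the fields the orchestrator relies on during aggregation.
function validatePartialPlan(plan) {
  const errors = [];
  if (typeof plan.batch_id !== 'number') errors.push('batch_id must be a number');
  if (!Array.isArray(plan.groups) || plan.groups.length === 0) {
    errors.push('groups must be a non-empty array');
  } else {
    plan.groups.forEach((g, i) => {
      // Local group IDs follow the G1, G2, ... convention
      if (!/^G\d+$/.test(g.id || '')) errors.push(`groups[${i}].id must match G<n>`);
    });
  }
  if (!Array.isArray(plan.timeline)) errors.push('timeline must be an array');
  const meta = plan.metadata || {};
  if (typeof meta.findings_count !== 'number') errors.push('metadata.findings_count missing');
  return { valid: errors.length === 0, errors };
}
```

A plan failing this check would fall under the "Partial plan missing" error-handling branch rather than being passed to aggregation.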
@@ -532,12 +660,18 @@ Use fix_strategy.test_pattern to run affected tests:
### Error Handling
**Planning Failures**:
- Invalid template → Abort with error message
- Insufficient findings data → Request complete export
- Planning timeout → Retry once, then fail gracefully
**Batching Failures (Phase 1.5)**:
- Invalid findings data → Abort with error message
- Empty batches after grouping → Warn and skip empty batches
**Execution Failures**:
**Planning Failures (Phase 2)**:
- Planning agent timeout → Mark batch as failed, continue with other batches
- Partial plan missing → Skip batch, warn user
- Agent crash → Collect available partial plans, proceed with aggregation
- All agents fail → Abort entire fix session with error
- Aggregation conflicts → Apply conflict resolution (serialize conflicting groups)
**Execution Failures (Phase 3)**:
- Agent crash → Mark group as failed, continue with other groups
- Test command not found → Skip test verification, warn user
- Git operations fail → Abort with error, preserve state
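The "serialize conflicting groups" conflict resolution mentioned above can be sketched as follows. The group shape (`{ id, files }`) is an assumption for illustration; the real aggregation step works on the partial-plan groups:

```javascript
// Aggregation-time conflict resolution: groups (possibly from different
// batches) that touch the same files are pushed into later stages, while
// independent groups share a stage and can run in parallel.
function buildStages(groups) {
  const stages = [];
  for (const group of groups) {
    // Find the first stage whose groups share no files with this group.
    let placed = stages.find(stage =>
      stage.every(g => !g.files.some(f => group.files.includes(f)))
    );
    if (!placed) {
      placed = [];
      stages.push(placed);
    }
    placed.push(group);
  }
  return stages.map((gs, i) => ({
    stage: i + 1,
    execution_mode: gs.length > 1 ? 'parallel' : 'serial',
    groups: gs.map(g => g.id),
  }));
}
```

Two groups editing `auth.ts` end up in separate stages; a `config.ts`-only group joins the first stage.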
@@ -549,14 +683,34 @@ Use fix_strategy.test_pattern to run affected tests:
### TodoWrite Structure
**Initialization**:
**Initialization (after Phase 1.5 batching)**:
```javascript
TodoWrite({
todos: [
{content: "Phase 1: Discovery & Initialization", status: "completed"},
{content: "Phase 2: Planning", status: "in_progress"},
{content: "Phase 3: Execution", status: "pending"},
{content: "Phase 4: Completion", status: "pending"}
{content: "Phase 1: Discovery & Initialization", status: "completed", activeForm: "Discovering"},
{content: "Phase 1.5: Intelligent Batching", status: "completed", activeForm: "Batching"},
{content: "Phase 2: Parallel Planning", status: "in_progress", activeForm: "Planning"},
{content: " → Batch 1: 4 findings (auth.ts:security)", status: "pending", activeForm: "Planning batch 1"},
{content: " → Batch 2: 3 findings (query.ts:security)", status: "pending", activeForm: "Planning batch 2"},
{content: " → Batch 3: 2 findings (config.ts:quality)", status: "pending", activeForm: "Planning batch 3"},
{content: "Phase 3: Execution", status: "pending", activeForm: "Executing"},
{content: "Phase 4: Completion", status: "pending", activeForm: "Completing"}
]
});
```
**During Planning (parallel agents running)**:
```javascript
TodoWrite({
todos: [
{content: "Phase 1: Discovery & Initialization", status: "completed", activeForm: "Discovering"},
{content: "Phase 1.5: Intelligent Batching", status: "completed", activeForm: "Batching"},
{content: "Phase 2: Parallel Planning", status: "in_progress", activeForm: "Planning"},
{content: " → Batch 1: 4 findings (auth.ts:security)", status: "completed", activeForm: "Planning batch 1"},
{content: " → Batch 2: 3 findings (query.ts:security)", status: "in_progress", activeForm: "Planning batch 2"},
{content: " → Batch 3: 2 findings (config.ts:quality)", status: "in_progress", activeForm: "Planning batch 3"},
{content: "Phase 3: Execution", status: "pending", activeForm: "Executing"},
{content: "Phase 4: Completion", status: "pending", activeForm: "Completing"}
]
});
```
@@ -565,23 +719,25 @@ TodoWrite({
```javascript
TodoWrite({
todos: [
{content: "Phase 1: Discovery & Initialization", status: "completed"},
{content: "Phase 2: Planning", status: "completed"},
{content: "Phase 3: Execution", status: "in_progress"},
{content: " → Stage 1: Parallel execution (3 groups)", status: "completed"},
{content: " • Group G1: Auth validation (2 findings)", status: "completed"},
{content: " • Group G2: Query security (3 findings)", status: "completed"},
{content: " • Group G3: Config quality (1 finding)", status: "completed"},
{content: " → Stage 2: Serial execution (1 group)", status: "in_progress"},
{content: " • Group G4: Dependent fixes (2 findings)", status: "in_progress"},
{content: "Phase 4: Completion", status: "pending"}
{content: "Phase 1: Discovery & Initialization", status: "completed", activeForm: "Discovering"},
{content: "Phase 1.5: Intelligent Batching", status: "completed", activeForm: "Batching"},
{content: "Phase 2: Parallel Planning (3 batches → 5 groups)", status: "completed", activeForm: "Planning"},
{content: "Phase 3: Execution", status: "in_progress", activeForm: "Executing"},
{content: " → Stage 1: Parallel execution (3 groups)", status: "completed", activeForm: "Executing stage 1"},
{content: " • Group G1: Auth validation (2 findings)", status: "completed", activeForm: "Fixing G1"},
{content: " • Group G2: Query security (3 findings)", status: "completed", activeForm: "Fixing G2"},
{content: " • Group G3: Config quality (1 finding)", status: "completed", activeForm: "Fixing G3"},
{content: " → Stage 2: Serial execution (1 group)", status: "in_progress", activeForm: "Executing stage 2"},
{content: " • Group G4: Dependent fixes (2 findings)", status: "in_progress", activeForm: "Fixing G4"},
{content: "Phase 4: Completion", status: "pending", activeForm: "Completing"}
]
});
```
**Update Rules**:
- Add stage items dynamically based on fix-plan.json timeline
- Add group items per stage
- Add batch items dynamically during Phase 1.5
- Mark batch items completed as parallel agents return results
- Add stage/group items dynamically after Phase 2 plan aggregation
- Mark completed immediately after each group finishes
- Update parent phase status when all child items complete
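The last update rule can be sketched as a small helper. The leading-space indentation convention for child items (" → ", " • ") follows the todo examples above; this is illustrative only:

```javascript
// Flip a parent phase to completed once every child item directly
// below it (indented entries) is completed.
function completeParentPhases(todos) {
  return todos.map((todo, i) => {
    if (todo.content.startsWith(' ')) return todo; // child item, leave as-is
    const children = [];
    for (let j = i + 1; j < todos.length && todos[j].content.startsWith(' '); j++) {
      children.push(todos[j]);
    }
    const allDone = children.length > 0 && children.every(c => c.status === 'completed');
    return allDone ? { ...todo, status: 'completed' } : todo;
  });
}
```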
@@ -591,12 +747,13 @@ TodoWrite({
## Best Practices
1. **Trust AI Planning**: Planning agent's grouping and execution strategy are based on dependency analysis
2. **Conservative Approach**: Test verification is mandatory - no fixes kept without passing tests
3. **Parallel Efficiency**: Default 3 concurrent agents balances speed and resource usage
4. **Resume Support**: Fix sessions can resume from checkpoints after interruption
5. **Manual Review**: Always review failed fixes manually - may require architectural changes
6. **Incremental Fixing**: Start with small batches (5-10 findings) before large-scale fixes
1. **Leverage Parallel Planning**: For 10+ findings, parallel batching significantly reduces planning time
2. **Tune Batch Size**: Use `--batch-size` to control granularity (smaller batches = more parallelism, larger = better grouping context)
3. **Conservative Approach**: Test verification is mandatory - no fixes kept without passing tests
4. **Parallel Efficiency**: MAX_PARALLEL=10 for planning agents, 3 concurrent execution agents per stage
5. **Resume Support**: Fix sessions can resume from checkpoints after interruption
6. **Manual Review**: Always review failed fixes manually - may require architectural changes
7. **Incremental Fixing**: Start with small batches (5-10 findings) before large-scale fixes
## Related Commands


@@ -2,7 +2,7 @@
name: review-module-cycle
description: Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.
argument-hint: "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*)
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*), Task(*)
---
# Workflow Review-Module-Cycle Command
@@ -187,7 +187,7 @@ const CATEGORIES = {
**Step 1: Session Creation**
```javascript
// Create workflow session for this review (type: review)
SlashCommand(command="/workflow:session:start --type review \"Code review for [target_pattern]\"")
Skill(skill="workflow:session:start", args="--type review \"Code review for [target_pattern]\"")
// Parse output
const sessionId = output.match(/SESSION_ID: (WFS-[^\s]+)/)[1];


@@ -2,7 +2,7 @@
name: review-session-cycle
description: Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.
argument-hint: "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*)
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*), Task(*)
---
# Workflow Review-Session-Cycle Command


@@ -50,7 +50,7 @@ bash(test -f .workflow/project-guidelines.json && echo "GUIDELINES_EXISTS" || ec
**If either NOT_FOUND**, delegate to `/workflow:init`:
```javascript
// Call workflow:init for intelligent project analysis
SlashCommand({command: "/workflow:init"});
Skill(skill="workflow:init");
// Wait for init completion
// project-tech.json and project-guidelines.json will be created


@@ -2,7 +2,7 @@
name: tdd-plan
description: TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking
argument-hint: "\"feature description\"|file.md"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*)
---
# TDD Workflow Plan Command (/workflow:tdd-plan)
@@ -14,7 +14,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
**CLI Tool Selection**: CLI tool usage is determined semantically from user's task description. Include "use Codex/Gemini/Qwen" in your request for CLI execution.
**Task Attachment Model**:
- SlashCommand execute **expands workflow** by attaching sub-tasks to current TodoWrite
- Skill execute **expands workflow** by attaching sub-tasks to current TodoWrite
- When executing a sub-command (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
- Orchestrator **executes these attached tasks** sequentially
- After completion, attached tasks are **collapsed** back to high-level phase summary
@@ -34,7 +34,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
6. **TDD Context**: All descriptions include "TDD:" prefix
7. **Task Attachment Model**: SlashCommand execute **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
7. **Task Attachment Model**: Skill execute **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
8. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
## TDD Compliance Requirements
@@ -82,7 +82,7 @@ NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
**Step 1.1: Execute** - Session discovery and initialization
```javascript
SlashCommand(command="/workflow:session:start --type tdd --auto \"TDD: [structured-description]\"")
Skill(skill="workflow:session:start", args="--type tdd --auto \"TDD: [structured-description]\"")
```
**TDD Structured Format**:
@@ -107,7 +107,7 @@ TEST_FOCUS: [Test scenarios]
**Step 2.1: Execute** - Context gathering and analysis
```javascript
SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"TDD: [structured-description]\"")
Skill(skill="workflow:tools:context-gather", args="--session [sessionId] \"TDD: [structured-description]\"")
```
**Use Same Structured Description**: Pass the same structured format from Phase 1
@@ -133,7 +133,7 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"TDD
**Step 3.1: Execute** - Test coverage analysis and framework detection
```javascript
SlashCommand(command="/workflow:tools:test-context-gather --session [sessionId]")
Skill(skill="workflow:tools:test-context-gather", args="--session [sessionId]")
```
**Purpose**: Analyze existing codebase for:
@@ -148,7 +148,7 @@ SlashCommand(command="/workflow:tools:test-context-gather --session [sessionId]"
<!-- TodoWrite: When test-context-gather executed, INSERT 3 test-context-gather tasks -->
**TodoWrite Update (Phase 3 SlashCommand executed - tasks attached)**:
**TodoWrite Update (Phase 3 Skill executed - tasks attached)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
@@ -162,7 +162,7 @@ SlashCommand(command="/workflow:tools:test-context-gather --session [sessionId]"
]
```
**Note**: SlashCommand execute **attaches** test-context-gather's 3 tasks. Orchestrator **executes** these tasks.
**Note**: Skill execute **attaches** test-context-gather's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
@@ -192,7 +192,7 @@ SlashCommand(command="/workflow:tools:test-context-gather --session [sessionId]"
**Step 4.1: Execute** - Conflict detection and resolution
```javascript
SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId] --context [contextPath]")
Skill(skill="workflow:tools:conflict-resolution", args="--session [sessionId] --context [contextPath]")
```
**Input**:
@@ -213,7 +213,7 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
<!-- TodoWrite: If conflict_risk ≥ medium, INSERT 3 conflict-resolution tasks when executed -->
**TodoWrite Update (Phase 4 SlashCommand executed - tasks attached, if conflict_risk ≥ medium)**:
**TodoWrite Update (Phase 4 Skill executed - tasks attached, if conflict_risk ≥ medium)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
@@ -228,7 +228,7 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
]
```
**Note**: SlashCommand execute **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks.
**Note**: Skill execute **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached → **Execute Phase 4.1-4.3** sequentially
@@ -257,7 +257,7 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Step 4.5: Execute** - Memory compaction
```javascript
SlashCommand(command="/compact")
Skill(skill="compact")
```
- This optimizes memory before proceeding to Phase 5
@@ -271,7 +271,7 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Step 5.1: Execute** - TDD task generation via action-planning-agent with Phase 0 user configuration
```javascript
SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
Skill(skill="workflow:tools:task-generate-tdd", args="--session [sessionId]")
```
**Note**: Phase 0 now includes:
@@ -311,7 +311,7 @@ SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
<!-- TodoWrite: When task-generate-tdd executed, INSERT 3 task-generate-tdd tasks -->
**TodoWrite Update (Phase 5 SlashCommand executed - tasks attached)**:
**TodoWrite Update (Phase 5 Skill executed - tasks attached)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
@@ -325,7 +325,7 @@ SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
]
```
**Note**: SlashCommand execute **attaches** task-generate-tdd's 3 tasks. Orchestrator **executes** these tasks. Each generated IMPL task will contain internal Red-Green-Refactor cycle.
**Note**: Skill execute **attaches** task-generate-tdd's 3 tasks. Orchestrator **executes** these tasks. Each generated IMPL task will contain internal Red-Green-Refactor cycle.
**Next Action**: Tasks attached → **Execute Phase 5.1-5.3** sequentially
@@ -462,7 +462,7 @@ Quality Gate: Consider running /workflow:plan-verify to validate TDD task struct
### Key Principles
1. **Task Attachment** (when SlashCommand executed):
1. **Task Attachment** (when Skill executed):
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
- Example: `/workflow:tools:test-context-gather` attaches 3 sub-tasks (Phase 3.1, 3.2, 3.3)
- First attached task marked as `in_progress`, others as `pending`
@@ -534,7 +534,7 @@ TDD Workflow Orchestrator
└─ Recommend: /workflow:plan-verify
Key Points:
• ← ATTACHED: SlashCommand attaches sub-tasks to orchestrator TodoWrite
• ← ATTACHED: Skill attaches sub-tasks to orchestrator TodoWrite
• ← COLLAPSED: Sub-tasks executed and collapsed to phase summary
• TDD-specific: Each generated IMPL task contains complete Red-Green-Refactor cycle
```


@@ -2,7 +2,7 @@
name: tdd-verify
description: Verify TDD workflow compliance against Red-Green-Refactor cycles. Generates quality report with coverage analysis and quality gate recommendation. Orchestrates sub-commands for comprehensive validation.
argument-hint: "[optional: --session WFS-session-id]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*)
allowed-tools: Skill(*), TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*)
---
# TDD Verification Command (/workflow:tdd-verify)
@@ -179,7 +179,7 @@ Calculate:
**Step 3.1: Call Coverage Analysis Sub-command**
```bash
SlashCommand(command="/workflow:tools:tdd-coverage-analysis --session {session_id}")
Skill(skill="workflow:tools:tdd-coverage-analysis", args="--session {session_id}")
```
**Step 3.2: Parse Output Files**


@@ -2,7 +2,7 @@
name: test-cycle-execute
description: Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.
argument-hint: "[--resume-session=\"session-id\"] [--max-iterations=N]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*)
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*), Task(*)
---
# Workflow Test-Cycle-Execute Command

File diff suppressed because it is too large


@@ -1,529 +0,0 @@
---
name: test-gen
description: Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks
argument-hint: "source-session-id"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
---
# Workflow Test Generation Command (/workflow:test-gen)
## Coordinator Role
**This command is a pure orchestrator**: Creates an independent test-fix workflow session for validating a completed implementation. It reuses the standard planning toolchain with automatic cross-session context gathering.
**Core Principles**:
- **Session Isolation**: Creates new `WFS-test-[source]` session to keep verification separate from implementation
- **Context-First**: Prioritizes gathering code changes and summaries from source session
- **Format Reuse**: Creates standard `IMPL-*.json` task, using `meta.type: "test-fix"` for agent assignment
- **Parameter Simplification**: Tools auto-detect test session type via metadata, no manual cross-session parameters needed
- **Semantic CLI Selection**: CLI tool usage is determined by user's task description (e.g., "use Codex for fixes")
**Task Attachment Model**:
- SlashCommand dispatch **expands workflow** by attaching sub-tasks to current TodoWrite
- When a sub-command is executed (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
- Orchestrator **executes these attached tasks** sequentially
- After completion, attached tasks are **collapsed** back to high-level phase summary
- This is **task expansion**, not external delegation
**Auto-Continue Mechanism**:
- TodoList tracks current phase status and dynamically manages task attachment/collapse
- When a phase finishes executing, automatically execute the next pending phase
- All phases run autonomously without user interaction
- **⚠️ CONTINUOUS EXECUTION** - Do not stop until all phases complete
**Execution Flow**:
1. Initialize TodoWrite → Create test session → Parse session ID
2. Gather cross-session context (automatic) → Parse context path
3. Analyze implementation with test-concept-enhanced → Parse TEST_ANALYSIS_RESULTS.md
4. Generate test tasks from analysis → Return summary
**Command Scope**: This command ONLY prepares test workflow artifacts. It does NOT execute tests or implementation. Task execution requires separate user action.
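The execution flow above is a sequential pipeline in which each phase parses exactly what the next phase needs. A minimal sketch (phase shape and `state` fields are illustrative assumptions, not the actual orchestrator API):

```javascript
// Run phases in order; each phase's parse() merges extracted values
// (session IDs, context paths, ...) into the shared state for the next phase.
async function runTestGenWorkflow(phases) {
  let state = {};
  for (const phase of phases) {
    const output = await phase.execute(state);
    state = { ...state, ...phase.parse(output) };
  }
  return state;
}
```

For example, Phase 1's parse step would extract `testSessionId`, which Phase 2 then uses to locate the test context package.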
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 test session creation
2. **No Preliminary Analysis**: Do not read files or analyze before Phase 1
3. **Parse Every Output**: Extract the required data from each phase's output for the next phase
4. **Sequential Execution**: Each phase depends on previous phase's output
5. **Complete All Phases**: Do not return to user until Phase 5 completes (summary returned)
6. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
7. **Automatic Detection**: context-gather auto-detects test session and gathers source session context
8. **Semantic CLI Selection**: CLI tool usage determined from user's task description, passed to Phase 4
9. **Command Boundary**: This command ends at Phase 5 summary. Test execution is NOT part of this command.
10. **Task Attachment Model**: SlashCommand dispatch **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
11. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
## 5-Phase Execution
### Phase 1: Create Test Session
**Step 1.0: Load Source Session Intent** - Preserve user's original task description for semantic CLI selection
```javascript
// Read source session metadata to get original task description
Read(".workflow/active/[sourceSessionId]/workflow-session.json")
// OR if context-package exists:
Read(".workflow/active/[sourceSessionId]/.process/context-package.json")
// Extract: metadata.task_description or project/description field
// This preserves user's CLI tool preferences (e.g., "use Codex for fixes")
```
**Step 1.1: Execute** - Create new test workflow session with preserved intent
```javascript
// Include original task description to enable semantic CLI selection
SlashCommand(command="/workflow:session:start --new \"Test validation for [sourceSessionId]: [originalTaskDescription]\"")
```
**Input**:
- `sourceSessionId` from user argument (e.g., `WFS-user-auth`)
- `originalTaskDescription` from source session metadata (preserves CLI tool preferences)
**Expected Behavior**:
- Creates new session with pattern `WFS-test-[source-slug]` (e.g., `WFS-test-user-auth`)
- Writes metadata to `workflow-session.json`:
- `workflow_type: "test_session"`
- `source_session_id: "[sourceSessionId]"`
- Description includes original user intent for semantic CLI selection
- Returns new session ID for subsequent phases
**Parse Output**:
- Extract: new test session ID (store as `testSessionId`)
- Pattern: `WFS-test-[slug]`
**Validation**:
- Source session `.workflow/[sourceSessionId]/` exists
- Source session has completed IMPL tasks (`.summaries/IMPL-*-summary.md`)
- New test session directory created
- Metadata includes `workflow_type` and `source_session_id`
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress
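The Parse Output step above can be sketched as a small extractor. The `SESSION_ID:` output line format follows the parsing example used elsewhere in this document; the exact slug alphabet is an assumption:

```javascript
// Extract the new test session ID (WFS-test-[slug]) from the
// session:start output, failing loudly if it is absent.
function parseTestSessionId(output) {
  const match = output.match(/SESSION_ID:\s*(WFS-test-[A-Za-z0-9-]+)/);
  if (!match) throw new Error('session:start did not report a SESSION_ID');
  return match[1];
}
```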
---
### Phase 2: Gather Test Context
**Step 2.1: Execute** - Gather test coverage context from source session
```javascript
SlashCommand(command="/workflow:tools:test-context-gather --session [testSessionId]")
```
**Input**: `testSessionId` from Phase 1 (e.g., `WFS-test-user-auth`)
**Expected Behavior**:
- Load source session implementation context and summaries
- Analyze test coverage using MCP tools (find existing tests)
- Identify files requiring tests (coverage gaps)
- Detect test framework and conventions
- Generate `test-context-package.json`
**Parse Output**:
- Extract: test context package path (store as `testContextPath`)
- Pattern: `.workflow/[testSessionId]/.process/test-context-package.json`
**Validation**:
- Test context package created
- Contains source session summaries
- Includes coverage gap analysis
- Test framework detected
- Test conventions documented
<!-- TodoWrite: When test-context-gather executed, INSERT 3 test-context-gather tasks -->
**TodoWrite Update (Phase 2 SlashCommand executed - tasks attached)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Phase 2.1: Load source session summaries (test-context-gather)", "status": "in_progress", "activeForm": "Loading source session summaries"},
{"content": "Phase 2.2: Analyze test coverage with MCP tools (test-context-gather)", "status": "pending", "activeForm": "Analyzing test coverage"},
{"content": "Phase 2.3: Identify coverage gaps and framework (test-context-gather)", "status": "pending", "activeForm": "Identifying coverage gaps"},
{"content": "Analyze test requirements with Gemini", "status": "pending", "activeForm": "Analyzing test requirements"},
{"content": "Generate test generation and execution tasks", "status": "pending", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "pending", "activeForm": "Returning workflow summary"}
]
```
**Note**: SlashCommand dispatch **attaches** test-context-gather's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached → **Execute Phase 2.1-2.3** sequentially
<!-- TodoWrite: After Phase 2 tasks complete, REMOVE Phase 2.1-2.3, restore to orchestrator view -->
**TodoWrite Update (Phase 2 completed - tasks collapsed)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "completed", "activeForm": "Gathering test coverage context"},
{"content": "Analyze test requirements with Gemini", "status": "pending", "activeForm": "Analyzing test requirements"},
{"content": "Generate test generation and execution tasks", "status": "pending", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "pending", "activeForm": "Returning workflow summary"}
]
```
**Note**: Phase 2 tasks completed and collapsed to summary.
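The collapse step above can be sketched as replacing all attached sub-tasks with one completed summary entry. Matching sub-tasks by their "(test-context-gather)" content tag follows the todo lists above; the helper itself is illustrative:

```javascript
// Replace every attached sub-task (identified by a tag in its content)
// with a single completed summary entry, preserving all other todos.
function collapseTasks(todos, tag, summary) {
  const collapsed = [];
  let replaced = false;
  for (const todo of todos) {
    if (todo.content.includes(tag)) {
      if (!replaced) {
        collapsed.push({ content: summary, status: 'completed', activeForm: summary });
        replaced = true;
      }
      continue; // drop the attached sub-task
    }
    collapsed.push(todo);
  }
  return collapsed;
}
```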
---
### Phase 3: Test Generation Analysis
**Step 3.1: Execute** - Analyze test requirements with Gemini
```javascript
SlashCommand(command="/workflow:tools:test-concept-enhanced --session [testSessionId] --context [testContextPath]")
```
**Input**:
- `testSessionId` from Phase 1
- `testContextPath` from Phase 2
**Expected Behavior**:
- Use Gemini to analyze coverage gaps and implementation context
- Study existing test patterns and conventions
- Generate test requirements for each missing test file
- Design test generation strategy
- Generate `TEST_ANALYSIS_RESULTS.md`
**Parse Output**:
- Verify `.workflow/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md` created
- Contains test requirements and generation strategy
- Lists test files to create with specifications
**Validation**:
- TEST_ANALYSIS_RESULTS.md exists with complete sections:
- Coverage Assessment
- Test Framework & Conventions
- Test Requirements by File
- Test Generation Strategy
- Implementation Targets (test files to create)
- Success Criteria
<!-- TodoWrite: When test-concept-enhanced executed, INSERT 3 concept-enhanced tasks -->
**TodoWrite Update (Phase 3 SlashCommand executed - tasks attached)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "completed", "activeForm": "Gathering test coverage context"},
{"content": "Phase 3.1: Analyze coverage gaps with Gemini (test-concept-enhanced)", "status": "in_progress", "activeForm": "Analyzing coverage gaps"},
{"content": "Phase 3.2: Study existing test patterns (test-concept-enhanced)", "status": "pending", "activeForm": "Studying test patterns"},
{"content": "Phase 3.3: Generate test generation strategy (test-concept-enhanced)", "status": "pending", "activeForm": "Generating test strategy"},
{"content": "Generate test generation and execution tasks", "status": "pending", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "pending", "activeForm": "Returning workflow summary"}
]
```
**Note**: SlashCommand dispatch **attaches** test-concept-enhanced's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
<!-- TodoWrite: After Phase 3 tasks complete, REMOVE Phase 3.1-3.3, restore to orchestrator view -->
**TodoWrite Update (Phase 3 completed - tasks collapsed)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "completed", "activeForm": "Gathering test coverage context"},
{"content": "Analyze test requirements with Gemini", "status": "completed", "activeForm": "Analyzing test requirements"},
{"content": "Generate test generation and execution tasks", "status": "pending", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "pending", "activeForm": "Returning workflow summary"}
]
```
**Note**: Phase 3 tasks completed and collapsed to summary.
---
### Phase 4: Generate Test Tasks
**Step 4.1: Execute** - Generate test task JSON files and planning documents
```javascript
SlashCommand(command="/workflow:tools:test-task-generate --session [testSessionId]")
```
**Input**:
- `testSessionId` from Phase 1
**Note**: CLI tool usage for fixes is determined semantically from user's task description (e.g., "use Codex for automated fixes").
**Expected Behavior**:
- Parse TEST_ANALYSIS_RESULTS.md from Phase 3
- Extract test requirements and generation strategy
- Generate **TWO task JSON files**:
- **IMPL-001.json**: Test Generation task (calls @code-developer)
- **IMPL-002.json**: Test Execution and Fix Cycle task (calls @test-fix-agent)
- Generate IMPL_PLAN.md with test generation and execution strategy
- Generate TODO_LIST.md with both tasks
**Parse Output**:
- Verify `.workflow/[testSessionId]/.task/IMPL-001.json` exists (test generation)
- Verify `.workflow/[testSessionId]/.task/IMPL-002.json` exists (test execution & fix)
- Verify `.workflow/[testSessionId]/IMPL_PLAN.md` created
- Verify `.workflow/[testSessionId]/TODO_LIST.md` created
**Validation - IMPL-001.json (Test Generation)**:
- Task ID: `IMPL-001`
- `meta.type: "test-gen"`
- `meta.agent: "@code-developer"`
- `context.requirements`: Generate tests based on TEST_ANALYSIS_RESULTS.md
- `flow_control.pre_analysis`: Load TEST_ANALYSIS_RESULTS.md and test context
- `flow_control.implementation_approach`: Test generation steps
- `flow_control.target_files`: Test files to create from analysis section 5
**Validation - IMPL-002.json (Test Execution & Fix)**:
- Task ID: `IMPL-002`
- `meta.type: "test-fix"`
- `meta.agent: "@test-fix-agent"`
- `context.depends_on: ["IMPL-001"]`
- `context.requirements`: Execute and fix tests
- `flow_control.implementation_approach.test_fix_cycle`: Complete cycle specification
- **Cycle pattern**: test → gemini_diagnose → fix (agent or CLI based on `command` field) → retest
- **Tools configuration**: Gemini for analysis with bug-fix template, agent or CLI for fixes
- **Exit conditions**: Success (all pass) or failure (max iterations)
- `flow_control.implementation_approach.modification_points`: 3-phase execution flow
- Phase 1: Initial test execution
- Phase 2: Iterative Gemini diagnosis + fixes (agent or CLI based on step's `command` field)
- Phase 3: Final validation and certification
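A minimal IMPL-002.json sketch consistent with the fields above (values are illustrative only; see `/workflow:tools:test-task-generate` for the authoritative schema):

```json
{
  "id": "IMPL-002",
  "meta": { "type": "test-fix", "agent": "@test-fix-agent", "max_iterations": 5 },
  "context": {
    "depends_on": ["IMPL-001"],
    "requirements": "Execute and fix tests"
  },
  "flow_control": {
    "implementation_approach": {
      "test_fix_cycle": {
        "pattern": "test -> gemini_diagnose -> fix -> retest",
        "exit_conditions": ["all_tests_pass", "max_iterations_reached"]
      },
      "modification_points": ["initial_execution", "iterative_fixes", "final_validation"]
    }
  }
}
```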
<!-- TodoWrite: When test-task-generate executed, INSERT 3 test-task-generate tasks -->
**TodoWrite Update (Phase 4 SlashCommand executed - tasks attached)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "completed", "activeForm": "Gathering test coverage context"},
{"content": "Analyze test requirements with Gemini", "status": "completed", "activeForm": "Analyzing test requirements"},
{"content": "Phase 4.1: Parse TEST_ANALYSIS_RESULTS.md (test-task-generate)", "status": "in_progress", "activeForm": "Parsing test analysis"},
{"content": "Phase 4.2: Generate IMPL-001.json and IMPL-002.json (test-task-generate)", "status": "pending", "activeForm": "Generating task JSONs"},
{"content": "Phase 4.3: Generate IMPL_PLAN.md and TODO_LIST.md (test-task-generate)", "status": "pending", "activeForm": "Generating plan documents"},
{"content": "Return workflow summary", "status": "pending", "activeForm": "Returning workflow summary"}
]
```
**Note**: SlashCommand dispatch **attaches** test-task-generate's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached → **Execute Phase 4.1-4.3** sequentially
<!-- TodoWrite: After Phase 4 tasks complete, REMOVE Phase 4.1-4.3, restore to orchestrator view -->
**TodoWrite Update (Phase 4 completed - tasks collapsed)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "completed", "activeForm": "Gathering test coverage context"},
{"content": "Analyze test requirements with Gemini", "status": "completed", "activeForm": "Analyzing test requirements"},
{"content": "Generate test generation and execution tasks", "status": "completed", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "in_progress", "activeForm": "Returning workflow summary"}
]
```
**Note**: Phase 4 tasks completed and collapsed to summary.
---
### Phase 5: Return Summary (Command Ends Here)
**Important**: This is the final phase of `/workflow:test-gen`. The command completes and returns control to the user. No automatic execution occurs.
**Return to User**:
```
Test workflow preparation complete!
Source Session: [sourceSessionId]
Test Session: [testSessionId]
Artifacts Created:
- Test context analysis
- Test generation strategy
- Task definitions (IMPL-001, IMPL-002)
- Implementation plan
Test Framework: [detected framework]
Test Files to Generate: [count]
Fix Mode: [Agent|CLI] (based on `command` field in implementation_approach steps)
Review Generated Artifacts:
- Test plan: .workflow/[testSessionId]/IMPL_PLAN.md
- Task list: .workflow/[testSessionId]/TODO_LIST.md
- Analysis: .workflow/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md
Ready for execution. Use appropriate workflow commands to proceed.
```
**TodoWrite**: Mark phase 5 completed
**Command Boundary**: After this phase, the command terminates and returns to user prompt.
---
## TodoWrite Pattern
**Core Concept**: Dynamic task attachment and collapse for test-gen workflow with cross-session context gathering and test generation strategy.
### Key Principles
1. **Task Attachment** (when SlashCommand executed):
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
- Example: `/workflow:tools:test-context-gather` attaches 3 sub-tasks (Phase 2.1, 2.2, 2.3)
- First attached task marked as `in_progress`, others as `pending`
- Orchestrator **executes** these attached tasks sequentially
2. **Task Collapse** (after sub-tasks complete):
- Remove detailed sub-tasks from TodoWrite
- **Collapse** to high-level phase summary
- Example: Phase 2.1-2.3 collapse to "Gather test coverage context: completed"
- Maintains clean orchestrator-level view
3. **Continuous Execution**:
- After collapse, automatically proceed to next pending phase
- No user intervention required between phases
- TodoWrite dynamically reflects current execution state
**Lifecycle Summary**: Initial pending tasks → Phase executed (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins → Repeat until all phases complete.
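The attach/collapse lifecycle above can be sketched as plain list operations (an illustration only; `attachTasks` and `collapseTasks` are hypothetical helpers, and the real mechanism is the TodoWrite tool):

```javascript
// Attach: replace the high-level phase entry at phaseIndex with the
// sub-command's tasks. First attached task becomes in_progress.
function attachTasks(todos, phaseIndex, subTasks) {
  const attached = subTasks.map((t, i) => ({
    ...t,
    status: i === 0 ? "in_progress" : "pending",
  }));
  return [
    ...todos.slice(0, phaseIndex),
    ...attached,
    ...todos.slice(phaseIndex + 1),
  ];
}

// Collapse: replace `count` completed sub-tasks starting at `start`
// with a single completed summary entry.
function collapseTasks(todos, start, count, summary) {
  return [
    ...todos.slice(0, start),
    { content: summary, status: "completed", activeForm: summary },
    ...todos.slice(start + count),
  ];
}
```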
### Test-Gen Specific Features
- **Phase 2**: Cross-session context gathering from source implementation session
- **Phase 3**: Test requirements analysis with Gemini for generation strategy
- **Phase 4**: Dual-task generation (IMPL-001 for test generation, IMPL-002 for test execution)
- **Fix Mode Configuration**: CLI tool usage determined semantically from the user's task description
**Note**: See individual Phase descriptions (Phase 2, 3, 4) for detailed TodoWrite Update examples with full JSON structures.
## Execution Flow Diagram
```
Test-Gen Workflow Orchestrator
├─ Phase 1: Create Test Session
│ └─ /workflow:session:start --new
│ └─ Returns: testSessionId (WFS-test-[source])
├─ Phase 2: Gather Test Context ← ATTACHED (3 tasks)
│ └─ /workflow:tools:test-context-gather
│ ├─ Phase 2.1: Load source session summaries
│ ├─ Phase 2.2: Analyze test coverage with MCP tools
│ └─ Phase 2.3: Identify coverage gaps and framework
│ └─ Returns: test-context-package.json ← COLLAPSED
├─ Phase 3: Test Generation Analysis ← ATTACHED (3 tasks)
│ └─ /workflow:tools:test-concept-enhanced
│ ├─ Phase 3.1: Analyze coverage gaps with Gemini
│ ├─ Phase 3.2: Study existing test patterns
│ └─ Phase 3.3: Generate test generation strategy
│ └─ Returns: TEST_ANALYSIS_RESULTS.md ← COLLAPSED
├─ Phase 4: Generate Test Tasks ← ATTACHED (3 tasks)
│ └─ /workflow:tools:test-task-generate
│ ├─ Phase 4.1: Parse TEST_ANALYSIS_RESULTS.md
│ ├─ Phase 4.2: Generate IMPL-001.json and IMPL-002.json
│ └─ Phase 4.3: Generate IMPL_PLAN.md and TODO_LIST.md
│ └─ Returns: Task JSONs and plans ← COLLAPSED
└─ Phase 5: Return Summary
└─ Command ends, control returns to user
Artifacts Created:
├── .workflow/active/WFS-test-[session]/
│ ├── workflow-session.json
│ ├── IMPL_PLAN.md
│ ├── TODO_LIST.md
│ ├── .task/
│ │ ├── IMPL-001.json (test generation task)
│ │ └── IMPL-002.json (test execution task)
│ └── .process/
│ ├── test-context-package.json
│ └── TEST_ANALYSIS_RESULTS.md
Key Points:
• ← ATTACHED: SlashCommand attaches sub-tasks to orchestrator TodoWrite
• ← COLLAPSED: Sub-tasks executed and collapsed to phase summary
```
## Session Metadata
Test session includes `workflow_type: "test_session"` and `source_session_id` for automatic context gathering.
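For example (a sketch; only `workflow_type` and `source_session_id` are specified by this section, the remaining fields are illustrative):

```json
{
  "session_id": "WFS-test-auth",
  "workflow_type": "test_session",
  "source_session_id": "WFS-auth",
  "created_at": "2025-01-30T10:00:00Z"
}
```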
## Task Output
Generates two task definition files:
- **IMPL-001.json**: Test generation task specification
- Agent: @code-developer
- Input: TEST_ANALYSIS_RESULTS.md
- Output: Test files based on analysis
- **IMPL-002.json**: Test execution and fix cycle specification
- Agent: @test-fix-agent
- Dependency: IMPL-001 must complete first
- Max iterations: 5
- Fix mode: Agent or CLI (based on `command` field in implementation_approach)
See `/workflow:tools:test-task-generate` for complete task JSON schemas.
## Error Handling
| Phase | Error | Action |
|-------|-------|--------|
| 1 | Source session not found | Return error with source session ID |
| 1 | No completed IMPL tasks | Return error, source incomplete |
| 2 | Context gathering failed | Return error, check source artifacts |
| 3 | Analysis failed | Return error, check context package |
| 4 | Task generation failed | Retry once, then error with details |
## Output Files
Created in `.workflow/active/WFS-test-[session]/`:
- `workflow-session.json` - Session metadata
- `.process/test-context-package.json` - Coverage analysis
- `.process/TEST_ANALYSIS_RESULTS.md` - Test requirements
- `.task/IMPL-001.json` - Test generation task
- `.task/IMPL-002.json` - Test execution & fix task
- `IMPL_PLAN.md` - Test plan
- `TODO_LIST.md` - Task checklist
## Task Specifications
**IMPL-001.json Structure**:
- `meta.type: "test-gen"`
- `meta.agent: "@code-developer"`
- `context.requirements`: Generate tests based on TEST_ANALYSIS_RESULTS.md
- `flow_control.target_files`: Test files to create
- `flow_control.implementation_approach`: Test generation strategy
**IMPL-002.json Structure**:
- `meta.type: "test-fix"`
- `meta.agent: "@test-fix-agent"`
- `context.depends_on: ["IMPL-001"]`
- `flow_control.implementation_approach.test_fix_cycle`: Complete cycle specification
- Gemini diagnosis template
- Fix application mode (agent or CLI based on `command` field)
- Max iterations: 5
- `flow_control.implementation_approach.modification_points`: 3-phase flow
See `/workflow:tools:test-task-generate` for complete JSON schemas.
## Best Practices
1. **Prerequisites**: Ensure source session has completed IMPL tasks with summaries
2. **Clean State**: Commit implementation changes before running test-gen
3. **Review Artifacts**: Check generated IMPL_PLAN.md and TODO_LIST.md before proceeding
4. **Understand Scope**: This command only prepares artifacts; it does not execute tests
## Related Commands
**Prerequisite Commands**:
- `/workflow:plan` or `/workflow:execute` - Complete implementation session that needs test validation
**Executed by This Command** (4 phases):
- `/workflow:session:start` - Phase 1: Create independent test workflow session
- `/workflow:tools:test-context-gather` - Phase 2: Analyze test coverage and gather source session context
- `/workflow:tools:test-concept-enhanced` - Phase 3: Generate test requirements and strategy using Gemini
- `/workflow:tools:test-task-generate` - Phase 4: Generate test task JSONs (CLI tool usage determined semantically)
**Follow-up Commands**:
- `/workflow:status` - Review generated test tasks
- `/workflow:test-cycle-execute` - Execute test generation and fix cycles
- `/workflow:execute` - Execute generated test tasks


@@ -0,0 +1,391 @@
---
name: code-validation-gate
description: Validate AI-generated code for common errors (imports, variables, types) before test execution
argument-hint: "--session WFS-test-session-id [--fix] [--strict]"
examples:
- /workflow:tools:code-validation-gate --session WFS-test-auth
- /workflow:tools:code-validation-gate --session WFS-test-auth --fix
- /workflow:tools:code-validation-gate --session WFS-test-auth --strict
---
# Code Validation Gate Command
## Overview
Pre-test validation gate that checks AI-generated code for common errors before test execution. This prevents wasted test cycles on code with fundamental issues like import errors, variable conflicts, and type mismatches.
## Core Philosophy
- **Fail Fast**: Catch fundamental errors before expensive test execution
- **AI-Aware**: Specifically targets common AI code generation mistakes
- **Auto-Remediation**: Attempt safe fixes before failing
- **Clear Feedback**: Provide actionable fix suggestions for manual intervention
## Target Error Categories
### L0.1: Compilation Errors
- TypeScript compilation failures
- Syntax errors
- Module resolution failures
### L0.2: Import Errors
- Unresolved module imports (hallucinated packages)
- Circular dependencies
- Duplicate imports
- Unused imports
### L0.3: Variable Errors
- Variable redeclaration
- Scope conflicts (shadowing)
- Undefined variable usage
- Unused variables
### L0.4: Type Errors (TypeScript)
- Type mismatches
- Missing type definitions
- Excessive `any` usage
- Implicit `any` types
### L0.5: AI-Specific Patterns
- Placeholder code (`// TODO: implement`)
- Hallucinated package imports
- Mock code in production files
- Inconsistent naming patterns
## Execution Process
```
Input Parsing:
├─ Parse flags: --session (required), --fix, --strict
└─ Load test-quality-config.json
Phase 1: Context Loading
├─ Load session metadata
├─ Identify target files (from IMPL-001 output or context-package)
└─ Detect project configuration (tsconfig, eslint, etc.)
Phase 2: Validation Execution
├─ L0.1: Run TypeScript compilation check
├─ L0.2: Run import validation
├─ L0.3: Run variable validation
├─ L0.4: Run type validation
└─ L0.5: Run AI-specific checks
Phase 3: Result Analysis
├─ Aggregate all findings by severity
├─ Calculate pass/fail status
└─ Generate fix suggestions
Phase 4: Auto-Fix (if --fix enabled)
├─ Apply safe auto-fixes (imports, formatting)
├─ Re-run validation
└─ Report remaining issues
Phase 5: Gate Decision
├─ PASS: Proceed to IMPL-001.5
├─ SOFT_FAIL: Auto-fix applied, needs re-validation
└─ HARD_FAIL: Block with detailed report
```
## Execution Lifecycle
### Phase 1: Context Loading
**Load session and identify validation targets.**
```javascript
// Load session metadata
Read(".workflow/active/{session_id}/workflow-session.json")
// Load context package for target files
Read(".workflow/active/{session_id}/.process/context-package.json")
// OR
Read(".workflow/active/{session_id}/.process/test-context-package.json")
// Identify files to validate:
// 1. Source files from context.implementation_files
// 2. Test files from IMPL-001 output (if exists)
// 3. All modified files since session start
```
**Target File Discovery**:
- Source files: `context.focus_paths` from context-package
- Generated tests: `.workflow/active/{session_id}/.task/IMPL-001-output/`
- All TypeScript/JavaScript in target directories
### Phase 2: Validation Execution
**Execute validation checks in order of dependency.**
#### L0.1: TypeScript Compilation
```bash
# Primary check - catches most fundamental errors
npx tsc --noEmit --skipLibCheck --project tsconfig.json 2>&1
# Parse output for errors
# Critical: Any compilation error blocks further validation
```
**Error Patterns**:
```
error TS2307: Cannot find module 'xxx'
error TS2451: Cannot redeclare block-scoped variable 'xxx'
error TS2322: Type 'xxx' is not assignable to type 'yyy'
```
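Diagnostic lines of this shape can be parsed with a small regex (a sketch assuming tsc's default `file(line,col): error TScode: message` output, which may differ across versions or under `--pretty`):

```javascript
// Parse a single `tsc` diagnostic line such as:
//   src/auth/service.ts(5,10): error TS2307: Cannot find module 'xxx'
// Returns null for lines that are not diagnostics.
function parseTscError(line) {
  const m = line.match(/^(.+)\((\d+),(\d+)\): error (TS\d+): (.*)$/);
  if (!m) return null;
  return {
    file: m[1],
    line: Number(m[2]),
    column: Number(m[3]),
    code: m[4],
    message: m[5],
  };
}
```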
#### L0.2: Import Validation
```bash
# Check for circular dependencies
npx madge --circular --extensions ts,tsx,js,jsx {target_dirs}
# ESLint import rules
npx eslint --rule 'import/no-duplicates: error' --rule 'import/no-unresolved: error' {files}
```
**Hallucinated Package Check**:
```javascript
// Extract all imports from files
// Verify each package exists in package.json or node_modules
// Flag any unresolvable imports as "hallucinated"
```
#### L0.3: Variable Validation
```bash
# ESLint variable rules
npx eslint --rule 'no-shadow: error' --rule 'no-undef: error' --rule 'no-redeclare: error' {files}
```
#### L0.4: Type Validation
```bash
# TypeScript strict checks
npx tsc --noEmit --strict {files}
# Check for `any` abuse (excessive explicit `any` usage)
npx eslint --rule '@typescript-eslint/no-explicit-any: warn' {files}
```
#### L0.5: AI-Specific Checks
```bash
# Check for placeholder code
grep -rn "// TODO: implement\|// Add your code here\|throw new Error.*Not implemented" {files}
# Check for mock code in production files
grep -rn "jest\.mock\|sinon\.\|vi\.mock" {source_files_only}
```
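The same checks can be applied programmatically (a sketch mirroring the grep patterns above):

```javascript
// Scan file contents for AI placeholder patterns, returning the
// 1-based line number and text of each hit.
const PLACEHOLDER_PATTERNS = [
  /\/\/ TODO: implement/,
  /\/\/ Add your code here/,
  /throw new Error\(.*[Nn]ot implemented.*\)/,
];

function findPlaceholders(source) {
  const hits = [];
  source.split("\n").forEach((line, i) => {
    if (PLACEHOLDER_PATTERNS.some((re) => re.test(line))) {
      hits.push({ line: i + 1, text: line.trim() });
    }
  });
  return hits;
}
```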
### Phase 3: Result Analysis
**Aggregate and categorize findings.**
```javascript
const findings = {
critical: [], // Blocks all progress
error: [], // Blocks with threshold
warning: [] // Advisory only
};
// Apply thresholds from config
const config = loadConfig("test-quality-config.json");
const thresholds = config.code_validation.severity_thresholds;
// Gate decision
let decision;
if (findings.critical.length > thresholds.critical) {
decision = "HARD_FAIL";
} else if (findings.error.length > thresholds.error) {
decision = "SOFT_FAIL"; // Try auto-fix
} else {
decision = "PASS";
}
```
### Phase 4: Auto-Fix (Optional)
**Apply safe automatic fixes when --fix flag provided.**
```bash
# Safe fixes only
npx eslint --fix --rule 'import/no-duplicates: error' --rule 'unused-imports/no-unused-imports: error' {files}
# Re-run validation after fixes
# Report what was fixed vs what remains
```
**Safe Fix Categories**:
- Remove unused imports
- Remove duplicate imports
- Fix import ordering
- Remove unused variables (with caution)
- Formatting fixes
**Unsafe (Manual Only)**:
- Missing imports (need to determine correct package)
- Type errors (need to understand intent)
- Variable shadowing (need to understand scope intent)
### Phase 5: Gate Decision
**Determine next action based on results.**
| Decision | Condition | Action |
|----------|-----------|--------|
| **PASS** | critical=0, error<=3, warning<=10 | Proceed to IMPL-001.5 |
| **SOFT_FAIL** | critical=0, error>3 OR fixable issues | Auto-fix and retry (max 2) |
| **HARD_FAIL** | critical>0 OR max retries exceeded | Block with report |
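The table can be expressed as a function (a sketch; default thresholds mirror the PASS row, real values come from `test-quality-config.json`, and the "fixable issues" branch of SOFT_FAIL is omitted):

```javascript
// Gate decision per the table: PASS / SOFT_FAIL / HARD_FAIL.
function gateDecision(summary, retryCount, opts = {}) {
  const { maxErrors = 3, maxRetries = 2 } = opts;
  if (summary.critical > 0 || retryCount >= maxRetries) return "HARD_FAIL";
  if (summary.error > maxErrors) return "SOFT_FAIL"; // auto-fix and retry
  return "PASS";
}
```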
## Output Artifacts
### Validation Report
**File**: `.workflow/active/{session_id}/.process/code-validation-report.md`
```markdown
# Code Validation Report
**Session**: {session_id}
**Timestamp**: {timestamp}
**Status**: PASS | SOFT_FAIL | HARD_FAIL
## Summary
- Files Validated: {count}
- Critical Issues: {count}
- Errors: {count}
- Warnings: {count}
## Critical Issues (Must Fix)
### Import Errors
- `src/auth/service.ts:5` - Cannot find module 'non-existent-package'
- **Suggestion**: Check if package exists, may be hallucinated by AI
### Variable Conflicts
- `src/utils/helper.ts:12` - Cannot redeclare block-scoped variable 'config'
- **Suggestion**: Rename one of the variables or merge declarations
## Errors (Should Fix)
...
## Warnings (Consider Fixing)
...
## Auto-Fix Applied
- Removed 3 unused imports in `src/auth/service.ts`
- Fixed import ordering in `src/utils/index.ts`
## Remaining Issues Requiring Manual Fix
...
## Next Steps
- [ ] Fix critical issues before proceeding
- [ ] Review error suggestions
- [ ] Re-run validation: `/workflow:tools:code-validation-gate --session {session_id}`
```
### JSON Report (Machine-Readable)
**File**: `.workflow/active/{session_id}/.process/code-validation-report.json`
```json
{
"session_id": "WFS-test-xxx",
"timestamp": "2025-01-30T10:00:00Z",
"status": "HARD_FAIL",
"summary": {
"files_validated": 15,
"critical": 2,
"error": 5,
"warning": 8
},
"findings": {
"critical": [
{
"category": "import",
"file": "src/auth/service.ts",
"line": 5,
"message": "Cannot find module 'non-existent-package'",
"suggestion": "Check if package exists in package.json",
"auto_fixable": false
}
],
"error": [...],
"warning": [...]
},
"auto_fixes_applied": [...],
"gate_decision": "HARD_FAIL",
"retry_count": 0,
"max_retries": 2
}
```
## Command Options
| Option | Description | Default |
|--------|-------------|---------|
| `--session` | Test session ID (required) | - |
| `--fix` | Enable auto-fix for safe issues | false |
| `--strict` | Use strict thresholds (0 errors allowed) | false |
| `--files` | Specific files to validate (comma-separated) | All target files |
| `--skip-types` | Skip TypeScript type checks | false |
## Integration
### Command Chain
- **Called By**: `/workflow:test-fix-gen` (after IMPL-001)
- **Requires**: IMPL-001 output OR context-package.json
- **Followed By**: IMPL-001.5 (Test Quality Gate) on PASS
### Task JSON Integration
When used in test-fix workflow, generates task:
```json
{
"id": "IMPL-001.3-validation",
"meta": {
"type": "code-validation",
"agent": "@test-fix-agent"
},
"context": {
"depends_on": ["IMPL-001"],
"requirements": "Validate generated code for AI common errors"
},
"flow_control": {
"validation_config": "~/.claude/workflows/test-quality-config.json",
"max_retries": 2,
"auto_fix_enabled": true
},
"acceptance_criteria": [
"Zero critical issues",
"Maximum 3 error issues",
"All imports resolvable",
"No variable redeclarations"
]
}
```
## Error Handling
| Error | Resolution |
|-------|------------|
| tsconfig.json not found | Use default compiler options |
| ESLint not installed | Skip ESLint checks, use tsc only |
| madge not installed | Skip circular dependency check |
| No files to validate | Return PASS (nothing to check) |
## Best Practices
1. **Run Early**: Execute validation immediately after code generation
2. **Use --fix First**: Let auto-fix resolve trivial issues
3. **Review Suggestions**: AI fix suggestions may need human judgment
4. **Don't Skip Critical**: Never proceed with critical errors
5. **Track Patterns**: Common failures indicate prompt improvement opportunities
## Related Commands
- `/workflow:test-fix-gen` - Parent workflow that invokes this command
- `/workflow:tools:test-quality-gate` - Next phase (IMPL-001.5) for test quality
- `/workflow:test-cycle-execute` - Execute tests after validation passes


@@ -284,16 +284,24 @@ Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
Supplementary Materials: ${userConfig.supplementaryMaterials}
## CLI TOOL SELECTION
Based on userConfig.executionMethod:
- "agent": No command field in implementation_approach steps
- "hybrid": Add command field to complex steps only (agent handles simple steps)
- "cli": Add command field to ALL implementation_approach steps
## EXECUTION METHOD MAPPING
Based on userConfig.executionMethod, set task-level meta.execution_config:
CLI Resume Support (MANDATORY for all CLI commands):
- Use --resume parameter to continue from previous task execution
- Read previous task's cliExecutionId from session state
- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
"agent" →
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
Agent executes implementation_approach steps directly
"cli" →
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Agent executes pre_analysis, then hands off full context to CLI via buildCliHandoffPrompt()
"hybrid" →
Per-task decision: Analyze task complexity, set method to "agent" OR "cli" per task
- Simple tasks (≤3 files, straightforward logic) → method: "agent"
- Complex tasks (>3 files, complex logic, refactoring) → method: "cli"
CLI tool: userConfig.preferredCliTool, enable_resume: true
IMPORTANT: Do NOT add command field to implementation_approach steps. Execution routing is controlled by task-level meta.execution_config.method only.
## PRIORITIZED CONTEXT (from context-package.prioritized_context) - ALREADY SORTED
Context sorting is ALREADY COMPLETED in context-gather Phase 2/3. DO NOT re-sort.
@@ -378,16 +386,26 @@ Hard Constraints:
- Return completion status with document count and task breakdown summary
## PLANNING NOTES RECORD (REQUIRED)
After completing all documents, append a brief execution record to planning-notes.md:
After completing, update planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Create new section after "## Consolidated Constraints"
**Format**:
\`\`\`
## Task Generation (Phase 4)
1. **Task Generation (Phase 4)**: Task count and key tasks
2. **N+1 Context**: Key decisions (with rationale) + deferred items
\`\`\`markdown
## Task Generation (Phase 4)
### [Action-Planning Agent] YYYY-MM-DD
- **Note**: [Auto-filled: brief summary of task count, key tasks, dependencies, etc.]
- **Tasks**: [count] ([IDs])
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| [choice] | [why] | [Yes/No] |
### Deferred
- [ ] [item] - [reason]
\`\`\`
`
)
@@ -448,16 +466,24 @@ Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
Supplementary Materials: ${userConfig.supplementaryMaterials}
## CLI TOOL SELECTION
Based on userConfig.executionMethod:
- "agent": No command field in implementation_approach steps
- "hybrid": Add command field to complex steps only (agent handles simple steps)
- "cli": Add command field to ALL implementation_approach steps
## EXECUTION METHOD MAPPING
Based on userConfig.executionMethod, set task-level meta.execution_config:
CLI Resume Support (MANDATORY for all CLI commands):
- Use --resume parameter to continue from previous task execution
- Read previous task's cliExecutionId from session state
- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
"agent" →
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
Agent executes implementation_approach steps directly
"cli" →
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Agent executes pre_analysis, then hands off full context to CLI via buildCliHandoffPrompt()
"hybrid" →
Per-task decision: Analyze task complexity, set method to "agent" OR "cli" per task
- Simple tasks (≤3 files, straightforward logic) → method: "agent"
- Complex tasks (>3 files, complex logic, refactoring) → method: "cli"
CLI tool: userConfig.preferredCliTool, enable_resume: true
IMPORTANT: Do NOT add command field to implementation_approach steps. Execution routing is controlled by task-level meta.execution_config.method only.
## PRIORITIZED CONTEXT (from context-package.prioritized_context) - ALREADY SORTED
Context sorting is ALREADY COMPLETED in context-gather Phase 2/3. DO NOT re-sort.
@@ -543,19 +569,13 @@ Hard Constraints:
- Return: task count, task IDs, dependency summary (internal + cross-module)
## PLANNING NOTES RECORD (REQUIRED)
After completing module task JSONs, append a brief execution record to planning-notes.md:
After completing, append to planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Create new section after "## Consolidated Constraints" (if not exists)
**Format**:
\`\`\`markdown
### [${module.name}] YYYY-MM-DD
- **Tasks**: [count] ([IDs])
- **CROSS deps**: [placeholders used]
\`\`\`
## Task Generation (Phase 4)
### [Action-Planning Agent - ${module.name}] YYYY-MM-DD
- **Note**: [Auto-filled: brief summary of this module's task count, key tasks, etc.]
\`\`\`
**Note**: Multiple module agents will append their records. Phase 3 Integration Coordinator will add final summary.
`
)
);
@@ -638,14 +658,21 @@ Module Count: ${modules.length}
- Return: task count, per-module breakdown, resolved dependency count
## PLANNING NOTES RECORD (REQUIRED)
After completing integration, append final summary to planning-notes.md:
After integration, update planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Under "## Task Generation (Phase 4)" section (after module agent records)
**Format**:
\`\`\`
### [Integration Coordinator] YYYY-MM-DD
- **Note**: [Auto-filled: brief summary of total task count, cross-module dependency resolution, etc.]
\`\`\`markdown
### [Coordinator] YYYY-MM-DD
- **Total**: [count] tasks
- **Resolved**: [CROSS:: resolutions]
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| CROSS::X → IMPL-Y | [why this resolution] | [Yes/No] |
### Deferred
- [ ] [unresolved CROSS or conflict] - [reason]
\`\`\`
`
)


@@ -359,16 +359,24 @@ Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
Supplementary Materials: ${userConfig.supplementaryMaterials}
## CLI TOOL SELECTION
Based on userConfig.executionMethod:
- "agent": No command field in implementation_approach steps
- "hybrid": Add command field to complex steps only (Red/Green phases recommended for CLI)
- "cli": Add command field to ALL Red-Green-Refactor steps
## EXECUTION METHOD MAPPING
Based on userConfig.executionMethod, set task-level meta.execution_config:
CLI Resume Support (MANDATORY for all CLI commands):
- Use --resume parameter to continue from previous task execution
- Read previous task's cliExecutionId from session state
- Format: ccw cli -p "[prompt]" --resume [previousCliId] --tool [tool] --mode write
"agent" →
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
Agent executes Red-Green-Refactor phases directly
"cli" →
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Agent executes pre_analysis, then hands off full context to CLI via buildCliHandoffPrompt()
"hybrid" →
Per-task decision: Analyze TDD cycle complexity, set method to "agent" OR "cli" per task
- Simple cycles (≤5 test cases, ≤3 files) → method: "agent"
- Complex cycles (>5 test cases, >3 files, integration tests) → method: "cli"
CLI tool: userConfig.preferredCliTool, enable_resume: true
IMPORTANT: Do NOT add command field to implementation_approach steps. Execution routing is controlled by task-level meta.execution_config.method only.
## EXPLORATION CONTEXT (from context-package.exploration_results)
- Load exploration_results from context-package.json
@@ -426,7 +434,7 @@ CLI Resume Support (MANDATORY for all CLI commands):
2. Green Phase (`tdd_phase: "green"`): Implement to pass tests
3. Refactor Phase (`tdd_phase: "refactor"`): Improve code quality
- `flow_control.pre_analysis`: Include exploration integration_points analysis
- CLI tool usage based on userConfig (add `command` field per executionMethod)
- **meta.execution_config**: Set per userConfig.executionMethod (agent/cli/hybrid)
- **Details**: See action-planning-agent.md § TDD Task JSON Generation
##### 2. IMPL_PLAN.md (TDD Variant)
@@ -623,7 +631,7 @@ This section provides quick reference for TDD task JSON structure. For complete
- Context: `focus_paths` use absolute or clear relative paths
- Flow control: Exactly 3 steps with `tdd_phase` field ("red", "green", "refactor")
- Flow control: `pre_analysis` includes exploration integration_points analysis
- Command field: Added per `userConfig.executionMethod` (agent/hybrid/cli)
- **meta.execution_config**: Set per `userConfig.executionMethod` (agent/cli/hybrid)
- See Phase 2 agent prompt for full schema and requirements
## Output Files Structure
@@ -736,7 +744,7 @@ IMPL (Green phase) tasks include automatic test-fix cycle:
3. **Success Path**: Tests pass → Complete task
4. **Failure Path**: Tests fail → Enter iterative fix cycle:
- **Gemini Diagnosis**: Analyze failures with bug-fix template
- **Fix Application**: Agent (default) or CLI (if `command` field present)
- **Fix Application**: Agent executes fixes directly
- **Retest**: Verify fix resolves failures
- **Repeat**: Up to max_iterations (default: 3)
5. **Safety Net**: Auto-revert all changes if max iterations reached
@@ -745,5 +753,5 @@ IMPL (Green phase) tasks include automatic test-fix cycle:
## Configuration Options
- **meta.max_iterations**: Number of fix attempts in Green phase (default: 3)
- **CLI tool usage**: Determined semantically from user's task description via `command` field in implementation_approach
- **meta.execution_config.method**: Execution routing (agent/cli) determined from userConfig.executionMethod


@@ -1,6 +1,6 @@
---
name: test-task-generate
description: Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests
description: Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) by invoking test-action-planning-agent
argument-hint: "--session WFS-test-session-id"
examples:
- /workflow:tools:test-task-generate --session WFS-test-auth
@@ -9,118 +9,263 @@ examples:
# Generate Test Planning Documents Command
## Overview
Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent. This command produces **test planning artifacts only** - it does NOT execute tests or implement code. Actual test execution requires a separate execution command (e.g., /workflow:test-cycle-execute).
## Core Philosophy
- **Planning Only**: Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - does NOT execute tests
- **Agent-Driven Document Generation**: Delegate test plan generation to action-planning-agent
- **Two-Phase Flow**: Context Preparation (command) → Test Document Generation (agent)
- **Memory-First**: Reuse loaded documents from conversation memory
- **MCP-Enhanced**: Use MCP tools for test pattern research and analysis
- **Path Clarity**: All `focus_paths` should use absolute paths (e.g., `D:\\project\\src\\module`) or clear relative paths from the project root
- **Leverage Existing Test Infrastructure**: Prioritize using established testing frameworks and tools present in the project
Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) by invoking **test-action-planning-agent**.
## Test-Specific Execution Modes
This command produces **test planning artifacts only** - it does NOT execute tests or implement code. Actual test execution requires a separate execution command (e.g., /workflow:test-cycle-execute).
### Test Generation (IMPL-001)
- **Agent Mode** (default): @code-developer generates tests within agent context
- **CLI Mode**: Use CLI tools when `command` field present in implementation_approach (determined semantically)
### Agent Specialization
### Test Execution & Fix (IMPL-002+)
- **Agent Mode** (default): Gemini diagnosis → agent applies fixes
- **CLI Mode**: Gemini diagnosis → CLI applies fixes (when `command` field present in implementation_approach)
This command invokes `@test-action-planning-agent` - a specialized variant of action-planning-agent with:
- Progressive L0-L3 test layers (Static, Unit, Integration, E2E)
- AI code issue detection (L0.5) with severity levels
- Project type templates (React, Node API, CLI, Library, Monorepo)
- Test anti-pattern detection with quality gates
- Layer completeness thresholds and coverage targets
**See**: `d:\Claude_dms3\.claude\agents\test-action-planning-agent.md` for complete test specifications.
---
## Execution Process
```
Input Parsing:
├─ Parse flags: --session
└─ Validation: session_id REQUIRED
Phase 1: Context Preparation (Command)
├─ Assemble test session paths
│ ├─ session_metadata_path
│ ├─ test_analysis_results_path (REQUIRED)
│ └─ test_context_package_path
├─ Provide metadata (session_id, source_session_id)
└─ Create test-planning-notes.md (User Intent section)
Phase 1.5: Gemini Test Enhancement (Command)
├─ Invoke cli-execution-agent for Gemini analysis
├─ Read TEST_ANALYSIS_RESULTS.md for context
├─ Generate enriched test suggestions (API, integration, error scenarios)
└─ Record enriched suggestions to test-planning-notes.md (Gemini Enhancement section)
Phase 2: Test Document Generation (Agent)
├─ Load TEST_ANALYSIS_RESULTS.md as primary requirements source
├─ Load TEST_ANALYSIS_RESULTS.md (with enriched suggestions)
├─ Load test-planning-notes.md (consolidated context)
├─ Generate Test Task JSON Files (.task/IMPL-*.json)
│ ├─ IMPL-001: Test generation (meta.type: "test-gen")
│ └─ IMPL-002+: Test execution & fix (meta.type: "test-fix")
│ ├─ IMPL-001: Test generation (L1-L3 layers, project-specific templates)
│ ├─ IMPL-001.3: Code validation gate (L0 + AI issue detection)
│ ├─ IMPL-001.5: Test quality gate (anti-patterns + coverage)
│ └─ IMPL-002: Test execution & fix cycle
├─ Create IMPL_PLAN.md (test_session variant)
├─ Generate TODO_LIST.md with test phase indicators
└─ Update test-planning-notes.md (Task Generation section)
```
## Document Generation Lifecycle
---
### Phase 1: Context Preparation (Command Responsibility)
## Phase 1: Context Preparation
**Command prepares test session paths and metadata for planning document generation.**
**Purpose**: Assemble test session paths, load test analysis context, and create test-planning-notes.md.
**Test Session Path Structure**:
```
.workflow/active/WFS-test-{session-id}/
├── workflow-session.json # Test session metadata
├── .process/
│ ├── TEST_ANALYSIS_RESULTS.md # Test requirements and strategy
│ ├── test-context-package.json # Test patterns and coverage
│ └── context-package.json # General context artifacts
├── .task/ # Output: Test task JSON files
├── IMPL_PLAN.md # Output: Test implementation plan
└── TODO_LIST.md # Output: Test TODO list
```
**Execution Steps**:
1. Parse `--session` flag to get test session ID
2. Load `workflow-session.json` for session metadata
3. Verify `TEST_ANALYSIS_RESULTS.md` exists (from test-concept-enhanced)
4. Load `test-context-package.json` for coverage data
5. Create `test-planning-notes.md` with initial context
**Command Preparation**:
1. **Assemble Test Session Paths** for agent prompt:
- `session_metadata_path`
- `test_analysis_results_path` (REQUIRED)
- `test_context_package_path`
- Output directory paths
**After Phase 1**: Initialize test-planning-notes.md
2. **Provide Metadata** (simple values):
- `session_id`
- `source_session_id` (if exists)
- `mcp_capabilities` (available MCP tools)
```javascript
// Create test-planning-notes.md with N+1 context support
const testPlanningNotesPath = `.workflow/active/${testSessionId}/test-planning-notes.md`
const sessionMetadata = JSON.parse(Read(`.workflow/active/${testSessionId}/workflow-session.json`))
const testAnalysis = Read(`.workflow/active/${testSessionId}/.process/TEST_ANALYSIS_RESULTS.md`)
const sourceSessionId = sessionMetadata.source_session_id || 'N/A'
**Note**: CLI tool usage is now determined semantically from the user's task description, not by flags.
// Extract key info from TEST_ANALYSIS_RESULTS.md
const projectType = testAnalysis.match(/Project Type:\s*(.+)/)?.[1] || 'Unknown'
const testFramework = testAnalysis.match(/Test Framework:\s*(.+)/)?.[1] || 'Unknown'
const coverageTarget = testAnalysis.match(/Coverage Target:\s*(.+)/)?.[1] || '80%'
### Phase 2: Test Document Generation (Agent Responsibility)
Write(testPlanningNotesPath, `# Test Planning Notes
**Purpose**: Generate test-specific IMPL_PLAN.md, task JSONs, and TODO_LIST.md - planning documents only, NOT test execution.
**Session**: ${testSessionId}
**Source Session**: ${sourceSessionId}
**Created**: ${new Date().toISOString()}
## Test Intent (Phase 1)
- **PROJECT_TYPE**: ${projectType}
- **TEST_FRAMEWORK**: ${testFramework}
- **COVERAGE_TARGET**: ${coverageTarget}
- **SOURCE_SESSION**: ${sourceSessionId}
---
## Context Findings (Phase 1)
### Files with Coverage Gaps
(Extracted from TEST_ANALYSIS_RESULTS.md)
### Test Framework & Conventions
- Framework: ${testFramework}
- Coverage Target: ${coverageTarget}
---
## Gemini Enhancement (Phase 1.5)
(To be filled by Gemini analysis)
### Enhanced Test Suggestions
- **L1 (Unit)**: (Pending)
- **L2.1 (Integration)**: (Pending)
- **L2.2 (API Contracts)**: (Pending)
- **L2.4 (External APIs)**: (Pending)
- **L2.5 (Failure Modes)**: (Pending)
### Gemini Analysis Summary
(Pending enrichment)
---
## Consolidated Test Requirements (Phase 2 Input)
1. [Context] ${testFramework} framework conventions
2. [Context] ${coverageTarget} coverage target
---
## Task Generation (Phase 2)
(To be filled by test-action-planning-agent)
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
### Deferred
- [ ] (For N+1)
`)
---
## Phase 1.5: Gemini Test Enhancement
**Purpose**: Enrich test specifications with comprehensive test suggestions and record to test-planning-notes.md.
**Execution Steps**:
1. Load TEST_ANALYSIS_RESULTS.md from `.workflow/active/{test-session-id}/.process/`
2. Invoke `cli-execution-agent` with Gemini for test enhancement analysis
3. Use template: `~/.claude/workflows/cli-templates/prompts/test-suggestions-enhancement.txt`
4. Gemini generates enriched test suggestions across L1-L3 layers and writes them to gemini-enriched-suggestions.md
5. Record enriched suggestions to test-planning-notes.md (Gemini Enhancement section)
**Agent Invocation**:
```javascript
Task(
subagent_type="action-planning-agent",
subagent_type="cli-execution-agent",
run_in_background=false,
description="Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
description="Enhance test specifications with Gemini analysis",
prompt=`
## Task Objective
Analyze TEST_ANALYSIS_RESULTS.md and generate enriched test suggestions using Gemini CLI
## Input Files
- Read: .workflow/active/{test-session-id}/.process/TEST_ANALYSIS_RESULTS.md
- Extract: Project type, test framework, coverage gaps, identified files
## Gemini Analysis Execution
Execute Gemini with comprehensive test enhancement prompt:
ccw cli -p "[comprehensive test prompt]" --tool gemini --mode analysis --rule analysis-test-strategy-enhancement --cd .workflow/active/{test-session-id}/.process
## Expected Output
Generate gemini-enriched-suggestions.md with structured test enhancements:
- L1 (Unit Tests): Edge cases, boundaries, error paths
- L2.1 (Integration): Module interactions, dependency injection
- L2.2 (API Contracts): Request/response, validation, error responses
- L2.4 (External APIs): Mock strategies, failure scenarios, timeouts
- L2.5 (Failure Modes): Exception handling, error propagation, recovery
## Validation
- gemini-enriched-suggestions.md created and complete
- Suggestions are actionable and specific (not generic)
- All L1-L3 layers covered
`
)
```
**Output**: gemini-enriched-suggestions.md (complete Gemini analysis)
**After Phase 1.5**: Update test-planning-notes.md with Gemini enhancement findings
```javascript
// Read enriched suggestions from gemini-enriched-suggestions.md
const enrichedSuggestionsPath = `.workflow/active/${testSessionId}/.process/gemini-enriched-suggestions.md`
const enrichedSuggestions = Read(enrichedSuggestionsPath)
// Update Phase 1.5 section in test-planning-notes.md with full enriched suggestions
Edit(testPlanningNotesPath, {
old: '## Gemini Enhancement (Phase 1.5)\n(To be filled by Gemini analysis)\n\n### Enhanced Test Suggestions\n- **L1 (Unit)**: (Pending)\n- **L2.1 (Integration)**: (Pending)\n- **L2.2 (API Contracts)**: (Pending)\n- **L2.4 (External APIs)**: (Pending)\n- **L2.5 (Failure Modes)**: (Pending)\n\n### Gemini Analysis Summary\n(Pending enrichment)',
new: `## Gemini Enhancement (Phase 1.5)
**Analysis Timestamp**: ${new Date().toISOString()}
**Template**: test-suggestions-enhancement.txt
**Output File**: .process/gemini-enriched-suggestions.md
### Enriched Test Suggestions (Complete Gemini Analysis)
${enrichedSuggestions}
### Gemini Analysis Summary
- **Status**: Enrichment complete
- **Layers Covered**: L1, L2.1, L2.2, L2.4, L2.5
- **Focus Areas**: API contracts, integration patterns, error scenarios, edge cases
- **Output Stored**: Full analysis in gemini-enriched-suggestions.md`
})
// Append Gemini constraints to consolidated test requirements
const geminiConstraints = [
'[Gemini] Implement all suggested L1 edge cases and boundary tests',
'[Gemini] Apply L2.1 module interaction patterns from analysis',
'[Gemini] Follow L2.2 API contract test matrix from analysis',
'[Gemini] Use L2.4 external API mock strategies from analysis',
'[Gemini] Cover L2.5 error scenarios from analysis'
]
// Gemini constraints are appended after the 2 existing [Context] items, so numbering starts at 3
Edit(testPlanningNotesPath, {
old: '## Consolidated Test Requirements (Phase 2 Input)',
new: `## Consolidated Test Requirements (Phase 2 Input)
1. [Context] ${testFramework} framework conventions
2. [Context] ${coverageTarget} coverage target
${geminiConstraints.map((c, i) => `${i + 3}. ${c}`).join('\n')}`
})
```
---
## Agent Invocation
```javascript
Task(
subagent_type="test-action-planning-agent",
run_in_background=false,
description="Generate test planning documents",
prompt=`
## TASK OBJECTIVE
Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for test workflow session
IMPORTANT: This is TEST PLANNING ONLY - you are generating planning documents, NOT executing tests.
CRITICAL:
- Use existing test frameworks and utilities from the project
- Follow the progressive loading strategy defined in your agent specification (load context incrementally, memory-first)
## AGENT CONFIGURATION REFERENCE
Refer to your specification for:
- Test Task JSON Schema (6-field structure with test-specific metadata)
- Test IMPL_PLAN.md Structure (test_session variant with test-fix cycle)
- TODO_LIST.md Format (with test phase indicators)
- Progressive Loading Strategy (memory-first, load TEST_ANALYSIS_RESULTS.md as primary source)
- Quality Validation Rules (task count limits, requirement quantification)
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{test-session-id}/workflow-session.json
- TEST_ANALYSIS_RESULTS: .workflow/active/{test-session-id}/.process/TEST_ANALYSIS_RESULTS.md (REQUIRED - primary requirements source)
- TEST_ANALYSIS_RESULTS: .workflow/active/{test-session-id}/.process/TEST_ANALYSIS_RESULTS.md (REQUIRED)
- Test Planning Notes: .workflow/active/{test-session-id}/test-planning-notes.md (REQUIRED - contains Gemini enhancement findings)
- Test Context Package: .workflow/active/{test-session-id}/.process/test-context-package.json
- Context Package: .workflow/active/{test-session-id}/.process/context-package.json
- Enriched Suggestions: .workflow/active/{test-session-id}/.process/gemini-enriched-suggestions.md (for reference)
- Source Session Summaries: .workflow/active/{source-session-id}/.summaries/IMPL-*.md (if exists)
Output:
@@ -134,103 +279,106 @@ Workflow Type: test_session
Source Session: {source-session-id} (if exists)
MCP Capabilities: {exa_code, exa_web, code_index}
## CLI TOOL SELECTION
Determine CLI tool usage per-step based on user's task description:
- If user specifies "use Codex/Gemini/Qwen for X" → Add command field to relevant steps
- Default: Agent execution (no command field) unless user explicitly requests CLI
## CONSOLIDATED CONTEXT
**From test-planning-notes.md**:
- Test Intent: Project type, test framework, coverage target
- Context Findings: Coverage gaps, file analysis
- Gemini Enhancement: Complete enriched test suggestions (L1-L3 layers)
* Full analysis embedded in planning-notes.md
* API contracts, integration patterns, error scenarios
- Consolidated Requirements: Combined constraints from all phases
## TEST-SPECIFIC REQUIREMENTS SUMMARY
(Detailed specifications in your agent definition)
## YOUR SPECIFICATIONS
You are @test-action-planning-agent. Your complete test specifications are defined in:
d:\Claude_dms3\.claude\agents\test-action-planning-agent.md
### Task Structure Requirements
- Minimum 2 tasks: IMPL-001 (test generation) + IMPL-002 (test execution & fix)
- Expandable for complex projects: Add IMPL-003+ (per-module, integration, E2E tests)
This includes:
- Progressive Test Layers (L0-L3) with L0.1-L0.5, L1.1-L1.5, L2.1-L2.5, L3.1-L3.4
- AI Code Issue Detection (L0.5) with 7 categories and severity levels
- Project Type Detection & Templates (6 project types)
- Test Anti-Pattern Detection (5 categories)
- Layer Completeness & Quality Metrics (thresholds and gate decisions)
- Task JSON structure requirements (minimum 4 tasks)
- Quality validation rules
Task Configuration:
IMPL-001 (Test Generation):
- meta.type: "test-gen"
- meta.agent: "@code-developer"
- meta.test_framework: Specify existing framework (e.g., "jest", "vitest", "pytest")
- flow_control: Test generation strategy from TEST_ANALYSIS_RESULTS.md
- CLI execution: Add `command` field when user requests (determined semantically)
IMPL-002+ (Test Execution & Fix):
- meta.type: "test-fix"
- meta.agent: "@test-fix-agent"
- flow_control: Test-fix cycle with iteration limits and diagnosis configuration
- CLI execution: Add `command` field when user requests (determined semantically)
### Test-Fix Cycle Specification (IMPL-002+)
Required flow_control fields:
- max_iterations: 5
- diagnosis_tool: "gemini"
- diagnosis_template: "~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt"
- cycle_pattern: "test → gemini_diagnose → fix → retest"
- exit_conditions: ["all_tests_pass", "max_iterations_reached"]
- auto_revert_on_failure: true
- CLI fix: Add `command` field when user specifies CLI tool usage
### Automation Framework Configuration
Select automation tools based on test requirements from TEST_ANALYSIS_RESULTS.md:
- UI interaction testing → E2E browser automation (meta.e2e_framework)
- API/database integration → integration test tools (meta.test_tools)
- Performance metrics → load testing tools (meta.perf_framework)
- Logic verification → unit test framework (meta.test_framework)
**Tool Selection**: Detect from project config > suggest based on requirements
### TEST_ANALYSIS_RESULTS.md Mapping
PRIMARY requirements source - extract and map to task JSONs:
- Test framework config → meta.test_framework (use existing framework from project)
- Existing test utilities → flow_control.reusable_test_tools (discovered test helpers, fixtures, mocks)
- Test runner commands → flow_control.test_commands (from package.json or pytest config)
- Coverage targets → meta.coverage_target
- Test requirements → context.requirements (quantified with explicit counts)
- Test generation strategy → IMPL-001 flow_control.implementation_approach
- Implementation targets → context.files_to_test (absolute paths)
**Follow your specification exactly** when generating test task JSONs.
## EXPECTED DELIVERABLES
1. Test Task JSON Files (.task/IMPL-*.json)
- 6-field schema with quantified requirements from TEST_ANALYSIS_RESULTS.md
- Test-specific metadata: type, agent, test_framework, coverage_target
- flow_control includes: reusable_test_tools, test_commands (from project config)
- CLI execution via `command` field when user requests (determined semantically)
- Artifact references from test-context-package.json
- Absolute paths in context.files_to_test
1. Test Task JSON Files (.task/IMPL-*.json) - Minimum 4:
- IMPL-001.json: Test generation (L1-L3 layers per spec)
- IMPL-001.3-validation.json: Code validation gate (L0 + AI issues per spec)
- IMPL-001.5-review.json: Test quality gate (anti-patterns + coverage per spec)
- IMPL-002.json: Test execution & fix cycle
2. Test Implementation Plan (IMPL_PLAN.md)
- Template: ~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt
- Test-specific frontmatter: workflow_type="test_session", test_framework, source_session_id
- Test-Fix-Retest Cycle section with diagnosis configuration
- Source session context integration (if applicable)
2. IMPL_PLAN.md: Test implementation plan with quality gates
3. TODO List (TODO_LIST.md)
- Hierarchical structure with test phase containers
- Links to task JSONs with status markers
- Matches task JSON hierarchy
## QUALITY STANDARDS
Hard Constraints:
- Task count: minimum 2, maximum 18
- All requirements quantified from TEST_ANALYSIS_RESULTS.md
- Test framework matches existing project framework
- flow_control includes reusable_test_tools and test_commands from project
- Absolute paths for all focus_paths
- Acceptance criteria include verification commands
- CLI `command` field added only when user explicitly requests CLI tool usage
3. TODO_LIST.md: Hierarchical task list with test phase indicators
## SUCCESS CRITERIA
- All test planning documents generated successfully
- Return completion status: task count, test framework, coverage targets, source session status
- Task count: minimum 4 (expandable for complex projects)
- Test framework: {detected from project}
- Coverage targets: L0 zero errors, L1 80%+, L2 70%+
- L0-L3 layers explicitly defined per spec
- AI issue detection configured per spec
- Quality gates with measurable thresholds
`
)
```
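The flow_control fields required for IMPL-002+ tasks can be pictured as a task fragment like the following. This is an illustrative sketch assembled from the field list in the prompt; the values and surrounding structure are examples, not a fixed schema.

```javascript
// Illustrative IMPL-002 task fragment; values are examples, not a fixed schema.
const impl002 = {
  meta: { type: 'test-fix', agent: '@test-fix-agent' },
  flow_control: {
    max_iterations: 5,
    diagnosis_tool: 'gemini',
    diagnosis_template: '~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt',
    cycle_pattern: 'test → gemini_diagnose → fix → retest',
    exit_conditions: ['all_tests_pass', 'max_iterations_reached'],
    auto_revert_on_failure: true
  }
};
```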
---
## Test-Specific Execution Modes
### Test Generation (IMPL-001)
- **Agent Mode** (default): @code-developer generates tests within agent context
- **CLI Mode**: Use CLI tools when `command` field present in implementation_approach
### Test Execution & Fix (IMPL-002+)
- **Agent Mode** (default): Gemini diagnosis → agent applies fixes
- **CLI Mode**: Gemini diagnosis → CLI applies fixes (when `command` field present)
**CLI Tool Selection**: Determined semantically from user's task description (e.g., "use Codex for fixes")
---
## Output
### Directory Structure
```
.workflow/active/WFS-test-[session]/
├── workflow-session.json # Session metadata
├── IMPL_PLAN.md # Test implementation plan
├── TODO_LIST.md # Task checklist
├── test-planning-notes.md # [NEW] Consolidated planning notes with full Gemini analysis
├── .task/
│ ├── IMPL-001.json # Test generation (L1-L3)
│ ├── IMPL-001.3-validation.json # Code validation gate (L0 + AI)
│ ├── IMPL-001.5-review.json # Test quality gate
│ └── IMPL-002.json # Test execution & fix cycle
└── .process/
├── test-context-package.json # Test coverage and patterns
├── gemini-enriched-suggestions.md # [NEW] Gemini-generated test enhancements
└── TEST_ANALYSIS_RESULTS.md # L0-L3 requirements (original from test-concept-enhanced)
```
### Task Summary
| Task | Type | Agent | Purpose |
|------|------|-------|---------|
| IMPL-001 | test-gen | @code-developer | Generate L1-L3 tests with project templates |
| IMPL-001.3 | code-validation | @test-fix-agent | Validate L0 + detect AI issues (CRITICAL/ERROR/WARNING) |
| IMPL-001.5 | test-quality-review | @test-fix-agent | Check anti-patterns, layer completeness, coverage |
| IMPL-002 | test-fix | @test-fix-agent | Execute tests, diagnose failures, apply fixes |
---
## Integration & Usage
### Command Chain
- **Called By**: `/workflow:test-gen` (Phase 4), `/workflow:test-fix-gen` (Phase 4)
- **Invokes**: `action-planning-agent` for test planning document generation
- **Called By**: `/workflow:test-fix-gen` (Phase 4)
- **Invokes**: `@test-action-planning-agent` for test planning document generation
- **Followed By**: `/workflow:test-cycle-execute` or `/workflow:execute` (user-triggered)
### Usage Examples
@@ -238,18 +386,31 @@ Hard Constraints:
# Standard execution
/workflow:tools:test-task-generate --session WFS-test-auth
# With semantic CLI request (include in task description)
# With semantic CLI request (include in task description when calling /workflow:test-fix-gen)
# e.g., "Generate tests, use Codex for implementation and fixes"
```
### CLI Tool Selection
CLI tool usage is determined semantically from user's task description:
- Include "use Codex" for automated fixes
- Include "use Gemini" for analysis
- Default: Agent execution (no `command` field)
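The semantic detection rule could be sketched as below. This is a hypothetical helper for illustration only; the actual routing decision is made by the planning agent when it reads the task description, not by a literal regex.

```javascript
// Hypothetical sketch of semantic CLI tool detection from a task description.
// An explicit "use <tool>" request yields a CLI tool; otherwise agent mode
// (no `command` field is added to the task).
function detectCliTool(description) {
  const m = /use\s+(codex|gemini|qwen)/i.exec(description);
  return m ? m[1].toLowerCase() : null; // null → default agent execution
}
```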
### Output Validation
### Output
- Test task JSON files in `.task/` directory (minimum 2)
- IMPL_PLAN.md with test strategy and fix cycle specification
- TODO_LIST.md with test phase indicators
- Session ready for test execution
**Minimum Requirements**:
- 4 task JSON files created
- IMPL_PLAN.md exists with test-specific sections
- TODO_LIST.md exists with test phase hierarchy
- All tasks reference TEST_ANALYSIS_RESULTS.md specifications
- L0-L3 layers explicitly defined in IMPL-001
- AI issue detection configured in IMPL-001.3
- Quality gates with thresholds in IMPL-001.5
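The minimum requirements above lend themselves to a small post-generation check. This is a sketch under assumptions: the file names follow the directory structure shown in the Output section, and the existence check is injected (e.g., Node's `fs.existsSync`) rather than a workflow API.

```javascript
// Sketch of a post-generation validation pass for the minimum requirements.
// `exists` is an injected filesystem check (e.g., fs.existsSync in Node).
function validateOutputs(sessionDir, exists) {
  const required = [
    `${sessionDir}/IMPL_PLAN.md`,
    `${sessionDir}/TODO_LIST.md`,
    `${sessionDir}/.task/IMPL-001.json`,
    `${sessionDir}/.task/IMPL-001.3-validation.json`,
    `${sessionDir}/.task/IMPL-001.5-review.json`,
    `${sessionDir}/.task/IMPL-002.json`
  ];
  const missing = required.filter(p => !exists(p));
  return { ok: missing.length === 0, missing };
}
```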
---
## Related Commands
**Called By**:
- `/workflow:test-fix-gen` - Phase 4: Generate test planning documents
**Prerequisite**:
- `/workflow:tools:test-concept-enhanced` - Must generate TEST_ANALYSIS_RESULTS.md first
**Follow-Up**:
- `/workflow:test-cycle-execute` - Execute generated test tasks
- `/workflow:execute` - Alternative: Standard task execution

View File

@@ -2,7 +2,7 @@
name: workflow:ui-design:codify-style
description: Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)
argument-hint: "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]"
allowed-tools: SlashCommand,Bash,Read,TodoWrite
allowed-tools: Skill,Bash,Read,TodoWrite
auto-continue: true
---
@@ -27,7 +27,7 @@ auto-continue: true
**Phase Transition Mechanism**:
- **Phase 0 (Validation)**: Validate parameters, prepare workspace → IMMEDIATELY triggers Phase 1
- **Phase 1-2 (Task Attachment)**: `SlashCommand` invocation **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these tasks itself.
- **Phase 1-2 (Task Attachment)**: `Skill` invocation **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these tasks itself.
- **Task Execution**: Orchestrator runs attached tasks sequentially, updating TodoWrite as each completes
- **Task Collapse**: After all attached tasks complete, collapse them into phase summary
- **Phase Transition**: Automatically execute next phase after collapsing completed tasks
@@ -35,7 +35,7 @@ auto-continue: true
**Auto-Continue Mechanism**: TodoWrite tracks phase status with dynamic task attachment/collapse. After executing all attached tasks, you MUST immediately collapse them, restore phase summary, and execute the next phase. No user intervention required. The workflow is NOT complete until reaching Phase 3.
**Task Attachment Model**: SlashCommand invocation is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.
**Task Attachment Model**: Skill invocation is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.
## Core Rules
@@ -45,7 +45,7 @@ auto-continue: true
4. **Intelligent Validation**: Smart parameter validation with user-friendly error messages
5. **Safety First**: Package overwrite protection, existence checks, fallback error handling
6. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
7. **⚠️ CRITICAL: Task Attachment Model** - SlashCommand invocation **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
7. **⚠️ CRITICAL: Task Attachment Model** - Skill invocation **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
8. **⚠️ CRITICAL: DO NOT STOP** - This is a continuous multi-phase workflow. After executing all attached tasks, you MUST immediately collapse them and execute the next phase. Workflow is NOT complete until Phase 3.
---
@@ -199,7 +199,7 @@ STORE: temp_id, design_run_path
<!-- TodoWrite: Update Phase 0 → completed, Phase 1 → in_progress, INSERT import-from-code tasks -->
**TodoWrite Update (Phase 1 SlashCommand invoked - tasks attached)**:
**TodoWrite Update (Phase 1 Skill invoked - tasks attached)**:
```json
[
{"content": "Phase 0: Validate parameters and prepare session", "status": "completed", "activeForm": "Validating parameters"},
@@ -212,7 +212,7 @@ STORE: temp_id, design_run_path
]
```
**Note**: SlashCommand invocation **attaches** import-from-code's 4 tasks to current workflow. Orchestrator **executes** these tasks itself.
**Note**: Skill invocation **attaches** import-from-code's 4 tasks to current workflow. Orchestrator **executes** these tasks itself.
**Next Action**: Tasks attached → **Execute Phase 1.0-1.3** sequentially
@@ -236,13 +236,13 @@ command = "/workflow:ui-design:import-from-code" +
```bash
TRY:
# SlashCommand invocation ATTACHES import-from-code's 4 tasks to current workflow
# Skill invocation ATTACHES import-from-code's 4 tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself:
# 1. Phase 1.0: Discover and categorize code files
# 2. Phase 1.1: Style Agent extraction
# 3. Phase 1.2: Animation Agent extraction
# 4. Phase 1.3: Layout Agent extraction
SlashCommand(command)
Skill(skill=command)
# After executing all attached tasks, verify extraction outputs
tokens_path = "${design_run_path}/style-extraction/style-1/design-tokens.json"
@@ -281,7 +281,7 @@ CATCH error:
<!-- TodoWrite: REMOVE Phase 1.0-1.3 tasks, INSERT reference-page-generator tasks -->
**TodoWrite Update (Phase 2 SlashCommand invoked - tasks attached)**:
**TodoWrite Update (Phase 2 Skill invoked - tasks attached)**:
```json
[
{"content": "Phase 0: Validate parameters and prepare session", "status": "completed", "activeForm": "Validating parameters"},
@@ -293,7 +293,7 @@ CATCH error:
]
```
**Note**: Phase 1 tasks completed and collapsed. SlashCommand invocation **attaches** reference-page-generator's 3 tasks. Orchestrator **executes** these tasks itself.
**Note**: Phase 1 tasks completed and collapsed. Skill invocation **attaches** reference-page-generator's 3 tasks. Orchestrator **executes** these tasks itself.
**Next Action**: Tasks attached → **Execute Phase 2.0-2.2** sequentially
@@ -316,12 +316,12 @@ command = "/workflow:ui-design:reference-page-generator " +
```bash
TRY:
# SlashCommand invocation ATTACHES reference-page-generator's 3 tasks to current workflow
# Skill invocation ATTACHES reference-page-generator's 3 tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself:
# 1. Phase 2.0: Setup and validation
# 2. Phase 2.1: Prepare component data
# 3. Phase 2.2: Generate preview pages
SlashCommand(command)
Skill(skill=command)
# After executing all attached tasks, verify package outputs
required_files = [
@@ -450,13 +450,13 @@ TodoWrite({todos: [
// ⚠️ CRITICAL: Dynamic TodoWrite task attachment strategy:
//
// **Key Concept**: SlashCommand invocation ATTACHES tasks to current workflow.
// **Key Concept**: Skill invocation ATTACHES tasks to current workflow.
// Orchestrator EXECUTES these attached tasks itself, not waiting for external completion.
//
// 1. INITIAL STATE: 4 orchestrator-level tasks only
//
// 2. PHASE 1 SlashCommand INVOCATION:
// - SlashCommand(/workflow:ui-design:import-from-code) ATTACHES 4 tasks
// 2. PHASE 1 Skill INVOCATION:
// - Skill(skill="workflow:ui-design:import-from-code") ATTACHES 4 tasks
// - TodoWrite expands to: Phase 0 (completed) + 4 import-from-code tasks + Phase 2 + Phase 3
// - Orchestrator EXECUTES these 4 tasks sequentially (Phase 1.0 → 1.1 → 1.2 → 1.3)
// - First attached task marked as in_progress
@@ -466,8 +466,8 @@ TodoWrite({todos: [
// - COLLAPSE completed tasks into Phase 1 summary
// - TodoWrite becomes: Phase 0-1 (completed) + Phase 2 + Phase 3
//
// 4. PHASE 2 SlashCommand INVOCATION:
// - SlashCommand(/workflow:ui-design:reference-page-generator) ATTACHES 3 tasks
// 4. PHASE 2 Skill INVOCATION:
// - Skill(skill="workflow:ui-design:reference-page-generator") ATTACHES 3 tasks
// - TodoWrite expands to: Phase 0-1 (completed) + 3 reference-page-generator tasks + Phase 3
// - Orchestrator EXECUTES these 3 tasks sequentially (Phase 2.0 → 2.1 → 2.2)
//
@@ -477,7 +477,7 @@ TodoWrite({todos: [
// - TodoWrite returns to: Phase 0-2 (completed) + Phase 3 (in_progress)
//
// 6. PHASE 3 EXECUTION:
// - Orchestrator's own task (no SlashCommand attachment)
// - Orchestrator's own task (no Skill attachment)
// - Mark Phase 3 as completed
// - Final state: All 4 orchestrator tasks completed
@@ -507,7 +507,7 @@ User triggers: /workflow:ui-design:codify-style ./src --package-name my-style-v1
[Phase 0 Complete] → TodoWrite: Phase 0 → completed
[Phase 1 Invoke] → SlashCommand(/workflow:ui-design:import-from-code) ATTACHES 4 tasks
[Phase 1 Invoke] → Skill(skill="workflow:ui-design:import-from-code") ATTACHES 4 tasks
├─ Phase 0 (completed)
├─ Phase 1.0: Discover and categorize code files (in_progress) ← ATTACHED
├─ Phase 1.1: Style Agent extraction (pending) ← ATTACHED
@@ -523,7 +523,7 @@ User triggers: /workflow:ui-design:codify-style ./src --package-name my-style-v1
[Phase 1 Complete] → TodoWrite: COLLAPSE Phase 1.0-1.3 into Phase 1 summary
[Phase 2 Invoke] → SlashCommand(/workflow:ui-design:reference-page-generator) ATTACHES 3 tasks
[Phase 2 Invoke] → Skill(skill="workflow:ui-design:reference-page-generator") ATTACHES 3 tasks
├─ Phase 0 (completed)
├─ Phase 1: Style extraction from source code (completed) ← COLLAPSED
├─ Phase 2.0: Setup and validation (in_progress) ← ATTACHED

View File

@@ -2,7 +2,7 @@
name: explore-auto
description: Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection
argument-hint: "[--input "<value>"] [--targets "<list>"] [--target-type "page|component"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*), Task(conceptual-planning-agent)
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*), Task(conceptual-planning-agent)
---
# UI Design Auto Workflow Command
@@ -28,7 +28,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*
**Phase Transition Mechanism**:
- **Phase 5 (User Interaction)**: User confirms targets → IMMEDIATELY executes Phase 7
- **Phase 7-10 (Autonomous)**: SlashCommand execute **ATTACHES** tasks to current workflow
- **Phase 7-10 (Autonomous)**: Skill execute **ATTACHES** tasks to current workflow
- **Task Execution**: Orchestrator **EXECUTES** these attached tasks itself
- **Task Collapse**: After tasks complete, collapse them into phase summary
- **Phase Transition**: Automatically execute next phase after collapsing
@@ -36,7 +36,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*
**Auto-Continue Mechanism**: TodoWrite tracks phase status with dynamic task attachment/collapse. After executing all attached tasks, you MUST immediately collapse them, restore phase summary, and execute the next phase. No user intervention required. The workflow is NOT complete until Phase 10 (UI assembly) finishes.
**Task Attachment Model**: SlashCommand execute is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.
**Task Attachment Model**: Skill execute is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.
**Target Type Detection**: Automatically inferred from prompt/targets, or explicitly set via `--target-type`.
@@ -92,7 +92,7 @@ Phase 10: UI Assembly
3. **Parse & Pass**: Extract data from each output for next phase
4. **Default to All**: When selecting variants/prototypes, use ALL generated items
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
6. **⚠️ CRITICAL: Task Attachment Model** - SlashCommand execute **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
6. **⚠️ CRITICAL: Task Attachment Model** - Skill execute **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
7. **⚠️ CRITICAL: DO NOT STOP** - This is a continuous multi-phase workflow. After executing all attached tasks, you MUST immediately collapse them and execute the next phase. Workflow is NOT complete until Phase 10 (UI assembly) finishes.
## Parameter Requirements
@@ -364,11 +364,11 @@ IF design_source IN ["code_only", "hybrid"]:
command = "/workflow:ui-design:import-from-code --design-id \"{design_id}\" --source \"{code_base_path}\""
TRY:
# SlashCommand execute ATTACHES import-from-code's tasks to current workflow
# Skill execute ATTACHES import-from-code's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself:
# - Phase 0: Discover and categorize code files
# - Phase 1.1-1.3: Style/Animation/Layout Agent extraction
SlashCommand(command)
Skill(skill=command)
CATCH error:
WARN: "⚠️ Code import failed: {error}"
WARN: "Cleaning up incomplete import directories"
@@ -479,9 +479,9 @@ IF design_source == "visual_only" OR needs_visual_supplement:
(prompt_text ? "--prompt \"{prompt_text}\" " : "") +
"--variants {style_variants} --interactive"
# SlashCommand execute ATTACHES style-extract's tasks to current workflow
# Skill execute ATTACHES style-extract's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(command)
Skill(skill=command)
# After executing all attached tasks, collapse them into phase summary
ELSE:
@@ -522,9 +522,9 @@ IF should_extract_animation:
command = " ".join(command_parts)
# SlashCommand execute ATTACHES animation-extract's tasks to current workflow
# Skill execute ATTACHES animation-extract's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(command)
Skill(skill=command)
# After executing all attached tasks, collapse them into phase summary
ELSE:
@@ -548,9 +548,9 @@ IF (design_source == "visual_only" OR needs_visual_supplement) OR (NOT layout_co
(prompt_text ? "--prompt \"{prompt_text}\" " : "") +
"--targets \"{targets_string}\" --variants {layout_variants} --device-type \"{device_type}\" --interactive"
# SlashCommand execute ATTACHES layout-extract's tasks to current workflow
# Skill execute ATTACHES layout-extract's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(command)
Skill(skill=command)
# After executing all attached tasks, collapse them into phase summary
ELSE:
@@ -571,9 +571,9 @@ REPORT: " → Pure assembly: Combining layout templates + design tokens"
REPORT: " → Device: {device_type} (from layout templates)"
REPORT: " → Assembly tasks: {total} combinations"
# SlashCommand execute ATTACHES generate's tasks to current workflow
# Skill execute ATTACHES generate's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(command)
Skill(skill=command)
# After executing all attached tasks, collapse them into phase summary
# Workflow complete - generate command handles preview file generation (compare.html, PREVIEW.md)
@@ -596,10 +596,10 @@ TodoWrite({todos: [
// ⚠️ CRITICAL: Dynamic TodoWrite task attachment strategy:
//
// **Key Concept**: SlashCommand execute ATTACHES tasks to current workflow.
// **Key Concept**: Skill execute ATTACHES tasks to current workflow.
// Orchestrator EXECUTES these attached tasks itself, not waiting for external completion.
//
// Phase 7-10 SlashCommand Execute Pattern (when tasks are attached):
// Phase 7-10 Skill Execute Pattern (when tasks are attached):
// Example - Phase 7 with sub-tasks:
// [
// {"content": "Phase 7: Style Extraction", "status": "in_progress", "activeForm": "Executing style extraction"},

View File

@@ -2,7 +2,7 @@
name: imitate-auto
description: UI design workflow with direct code/image input for design token extraction and prototype generation
argument-hint: "[--input "<value>"] [--session <id>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*)
allowed-tools: Skill(*), TodoWrite(*), Read(*), Write(*), Bash(*)
---
# UI Design Imitate-Auto Workflow Command
@@ -26,7 +26,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*)
7. Phase 4: Design system integration → **Execute orchestrator task** → Reports completion
**Phase Transition Mechanism**:
- **Task Attachment**: SlashCommand execute **ATTACHES** tasks to current workflow
- **Task Attachment**: Skill execute **ATTACHES** tasks to current workflow
- **Task Execution**: Orchestrator **EXECUTES** these attached tasks itself
- **Task Collapse**: After tasks complete, collapse them into phase summary
- **Phase Transition**: Automatically execute next phase after collapsing
@@ -34,7 +34,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*)
**Auto-Continue Mechanism**: TodoWrite tracks phase status with dynamic task attachment/collapse. After executing all attached tasks, you MUST immediately collapse them, restore phase summary, and execute the next phase. No user intervention required. The workflow is NOT complete until reaching Phase 4.
**Task Attachment Model**: SlashCommand execute is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.
**Task Attachment Model**: Skill execute is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.
## Execution Process
@@ -86,7 +86,7 @@ Phase 4: Design System Integration
2. **No Preliminary Validation**: Sub-commands handle their own validation
3. **Parse & Pass**: Extract data from each output for next phase
4. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
5. **⚠️ CRITICAL: Task Attachment Model** - SlashCommand execute **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
5. **⚠️ CRITICAL: Task Attachment Model** - Skill execute **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
6. **⚠️ CRITICAL: DO NOT STOP** - This is a continuous multi-phase workflow. After executing all attached tasks, you MUST immediately collapse them and execute the next phase. Workflow is NOT complete until Phase 4.
## Parameter Requirements
@@ -291,11 +291,11 @@ IF design_source == "hybrid":
"--source \"{code_base_path}\""
TRY:
# SlashCommand execute ATTACHES import-from-code's tasks to current workflow
# Skill execute ATTACHES import-from-code's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself:
# - Phase 0: Discover and categorize code files
# - Phase 1.1-1.3: Style/Animation/Layout Agent extraction
SlashCommand(command)
Skill(skill=command)
CATCH error:
WARN: "Code import failed: {error}"
WARN: "Falling back to web-only mode"
@@ -409,9 +409,9 @@ ELSE:
extract_command = " ".join(command_parts)
# SlashCommand execute ATTACHES style-extract's tasks to current workflow
# Skill execute ATTACHES style-extract's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(extract_command)
Skill(skill=extract_command)
# After executing all attached tasks, collapse them into phase summary
TodoWrite(mark_completed: "Extract style", mark_in_progress: "Extract animation")
@@ -442,9 +442,9 @@ ELSE:
animation_extract_command = " ".join(command_parts)
# SlashCommand execute ATTACHES animation-extract's tasks to current workflow
# Skill execute ATTACHES animation-extract's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(animation_extract_command)
Skill(skill=animation_extract_command)
# After executing all attached tasks, collapse them into phase summary
TodoWrite(mark_completed: "Extract animation", mark_in_progress: "Extract layout")
@@ -477,9 +477,9 @@ ELSE:
layout_extract_command = " ".join(command_parts)
# SlashCommand execute ATTACHES layout-extract's tasks to current workflow
# Skill execute ATTACHES layout-extract's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(layout_extract_command)
Skill(skill=layout_extract_command)
# After executing all attached tasks, collapse them into phase summary
TodoWrite(mark_completed: "Extract layout", mark_in_progress: "Assemble UI")
@@ -493,9 +493,9 @@ ELSE:
REPORT: "🚀 Phase 3: UI Assembly"
generate_command = f"/workflow:ui-design:generate --design-id \"{design_id}\""
# SlashCommand execute ATTACHES generate's tasks to current workflow
# Skill execute ATTACHES generate's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(generate_command)
Skill(skill=generate_command)
# After executing all attached tasks, collapse them into phase summary
TodoWrite(mark_completed: "Assemble UI", mark_in_progress: session_id ? "Integrate design system" : "Completion")
@@ -510,9 +510,9 @@ IF session_id:
REPORT: "🚀 Phase 4: Design System Integration"
update_command = f"/workflow:ui-design:update --session {session_id}"
# SlashCommand execute ATTACHES update's tasks to current workflow
# Skill execute ATTACHES update's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(update_command)
Skill(skill=update_command)
# Update metadata
metadata = Read("{base_path}/.run-metadata.json")
@@ -636,10 +636,10 @@ TodoWrite({todos: [
// ⚠️ CRITICAL: Dynamic TodoWrite task attachment strategy:
//
// **Key Concept**: SlashCommand execute ATTACHES tasks to current workflow.
// **Key Concept**: Skill execute ATTACHES tasks to current workflow.
// Orchestrator EXECUTES these attached tasks itself, not waiting for external completion.
//
// Phase 2-4 SlashCommand Execute Pattern (when tasks are attached):
// Phase 2-4 Skill Execute Pattern (when tasks are attached):
// Example - Phase 2 with sub-tasks:
// [
// {"content": "Phase 0: Initialize and Detect Design Source", "status": "completed", "activeForm": "Initializing"},

File diff suppressed because it is too large

View File

@@ -0,0 +1,453 @@
# Claude Code Hooks - Current Implementation vs Official Standard Comparison Report
## Executive Summary
The hook implementation in the current CCW codebase **does not conform to the official Claude Code standard**. The main problems are:
1. **Hook event names do not match the official standard** - incorrect event names are used
2. **The configuration structure differs** - a custom configuration format that does not follow the official spec
3. **Non-existent event types are used** - some events do not exist in the official hook system
4. **Documentation and implementation are inconsistent** - comments in the code reference the custom implementation rather than the official standard
---
## Detailed Comparison
### 1. Hook Event Name Comparison
#### ❌ Current Implementation (incorrect)
```json
{
"hooks": {
    "session-start": [],     // ❌ Wrong: should be SessionStart
    "session-end": [],       // ❌ Wrong: should be SessionEnd
    "file-modified": [],     // ❌ Wrong: no such official event
    "context-request": [],   // ❌ Wrong: no such official event
    "PostToolUse": []        // ✅ Correct
}
}
```
#### ✅ Official Standard (correct)
```json
{
"hooks": {
    "SessionStart": [],        // ✅ Fires when a session begins or resumes
    "UserPromptSubmit": [],    // ✅ Fires when the user submits a prompt
    "PreToolUse": [],          // ✅ Fires before a tool call; can block it
    "PermissionRequest": [],   // ✅ Fires when a permission dialog appears
    "PostToolUse": [],         // ✅ Fires after a tool call succeeds
    "PostToolUseFailure": [],  // ✅ Fires after a tool call fails
    "Notification": [],        // ✅ Fires when a notification is sent
    "SubagentStart": [],       // ✅ Fires when a subagent is spawned
    "SubagentStop": [],        // ✅ Fires when a subagent finishes
    "Stop": [],                // ✅ Fires when Claude finishes responding
    "PreCompact": [],          // ✅ Fires before context compaction
    "SessionEnd": []           // ✅ Fires when a session terminates
}
}
```
### 2. Configuration Structure Comparison
#### ❌ Current Implementation (custom structure)
```json
{
"hooks": {
"session-start": [
{
"name": "Progressive Disclosure",
"description": "Injects progressive disclosure index",
"enabled": true,
"handler": "internal:context",
"timeout": 5000,
"failMode": "silent"
}
]
},
"hookSettings": {
"globalTimeout": 60000,
"defaultFailMode": "silent",
"allowAsync": true,
"enableLogging": true
}
}
```
**Problems:**
- Uses non-standard fields: `name`, `description`, `enabled`, `handler`, `failMode`
- Uses a custom `hookSettings` configuration
- The structure is needlessly complex
#### ✅ Official Standard (simple and standard)
```json
{
"hooks": {
"SessionStart": [
{
"matcher": "startup|resume|clear|compact",
"hooks": [
{
"type": "command",
"command": "bash /path/to/script.sh",
"timeout": 600,
"async": false
}
]
}
]
}
}
```
**Advantages:**
- A clear three-level structure: event → matcher → handler
- A standard field set: `type`, `command`, `timeout`, `async`
- Three handler types are supported: `command`, `prompt`, `agent`
### 3. Officially Supported Hook Events and When They Fire
| Event | When it fires | Can block | Matcher support |
|-------|---------------|-----------|-----------------|
| `SessionStart` | Session begins or resumes | ❌ | startup, resume, clear, compact |
| `UserPromptSubmit` | Before a user prompt is processed | ✅ | ❌ |
| `PreToolUse` | Before a tool call | ✅ | Tool name |
| `PermissionRequest` | When a permission dialog appears | ✅ | Tool name |
| `PostToolUse` | After a tool call succeeds | ❌ | Tool name |
| `PostToolUseFailure` | After a tool call fails | ❌ | Tool name |
| `Notification` | When a notification is sent | ❌ | Notification type |
| `SubagentStart` | When a subagent is spawned | ❌ | Agent type |
| `SubagentStop` | When a subagent finishes | ✅ | Agent type |
| `Stop` | When Claude finishes responding | ✅ | ❌ |
| `PreCompact` | Before context compaction | ❌ | manual, auto |
| `SessionEnd` | When a session terminates | ❌ | Termination reason |
**Events used by the current implementation that do not exist officially:**
- `file-modified` - does not exist in the official system
- `context-request` - does not exist in the official system
### 4. Handler Type Comparison
#### ❌ Current Implementation
Only one type is supported: `handler: "internal:context"` (a custom internal handler)
#### ✅ Official Standard
Three standard types are supported:
1. **Command hooks** (`type: "command"`)
```json
{
"type": "command",
"command": "bash /path/to/script.sh",
"timeout": 600,
"async": false
}
```
2. **Prompt hooks** (`type: "prompt"`)
```json
{
"type": "prompt",
"prompt": "Evaluate if this is safe to execute: $ARGUMENTS",
"model": "haiku",
"timeout": 30
}
```
3. **Agent hooks** (`type: "agent"`)
```json
{
"type": "agent",
"prompt": "Verify tests pass: $ARGUMENTS",
"model": "haiku",
"timeout": 60
}
```
### 5. Matcher Comparison
#### ❌ Current Implementation
There is no explicit matcher concept; instead it uses:
- `handler: "internal:context"` - internal handling
- No tool-level filtering
#### ✅ Official Standard
A complete matcher system:
```json
{
"PreToolUse": [
{
      "matcher": "Edit|Write",  // Fires only for the Edit or Write tools
"hooks": [ ... ]
}
]
}
```
**Supported matchers:**
- **Tool events**: `Bash`, `Edit`, `Write`, `Read`, `Glob`, `Grep`, `Task`, `WebFetch`, `WebSearch`
- **MCP tools**: `mcp__memory__.*`, `mcp__.*__write.*`
- **Session events**: `startup`, `resume`, `clear`, `compact`
- **Notification types**: `permission_prompt`, `idle_prompt`, `auth_success`
- **Agent types**: `Bash`, `Explore`, `Plan`
### 6. Input/Output Mechanism Comparison
#### ❌ Current Implementation
- No standard stdin/stdout communication protocol is defined
- Uses custom environment variables: `$SESSION_ID`, `$FILE_PATH`, `$PROJECT_PATH`
#### ✅ Official Standard
**Standard JSON stdin input:**
```json
{
"session_id": "abc123",
"transcript_path": "/path/to/transcript.jsonl",
"cwd": "/current/dir",
"permission_mode": "default",
"hook_event_name": "PreToolUse",
"tool_name": "Bash",
"tool_input": {
"command": "npm test"
}
}
```
**Standard exit code output:**
- **exit 0**: Success; the JSON decision is parsed from stdout
- **exit 2**: Blocking error; stderr becomes Claude's feedback
- **Any other code**: Non-blocking error; stderr is shown in verbose mode
**Standard JSON stdout output:**
```json
{
"hookSpecificOutput": {
"hookEventName": "PreToolUse",
"permissionDecision": "allow|deny|ask",
"permissionDecisionReason": "explanation"
}
}
```
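A minimal command hook that speaks this protocol might look like the sketch below. This is illustrative, not CCW code: the dangerous-command check and the reason text are invented for the example, and only the envelope shape follows the standard above.

```python
#!/usr/bin/env python3
"""Sketch of a PreToolUse command hook using the standard stdin/stdout protocol."""
import json


def decide(event: dict) -> dict:
    """Build the standard hookSpecificOutput decision for a PreToolUse event."""
    command = event.get("tool_input", {}).get("command", "")
    # Hypothetical policy: deny anything containing a recursive root delete.
    decision = "deny" if "rm -rf /" in command else "allow"
    return {
        "hookSpecificOutput": {
            "hookEventName": "PreToolUse",
            "permissionDecision": decision,
            "permissionDecisionReason": f"checked command: {command!r}",
        }
    }


if __name__ == "__main__":
    # A real hook reads the event from stdin: event = json.load(sys.stdin)
    sample = {"hook_event_name": "PreToolUse", "tool_name": "Bash",
              "tool_input": {"command": "npm test"}}
    print(json.dumps(decide(sample)))  # exit 0: stdout JSON is parsed as the decision
```

Wired up as a `type: "command"` handler, the script receives the event JSON on stdin and its printed JSON becomes the permission decision.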
### 7. File Location Comparison
#### Current Implementation
Example configuration file location:
- `ccw/src/templates/hooks-config-example.json`
#### ✅ Official Standard
Standard configuration locations (in priority order):
1. `~/.claude/settings.json` - global user configuration
2. `.claude/settings.json` - project configuration (committable)
3. `.claude/settings.local.json` - project-local configuration (gitignored)
4. Plugin `hooks/hooks.json` - inside a plugin
5. Skill/Agent frontmatter - skills or agents
---
## Specific Problem Locations in the Codebase
### 1. Incorrect Configuration Example
**File:** `ccw/src/templates/hooks-config-example.json`
```json
{
"hooks": {
    "session-start": [ ... ],    // ❌ Should be SessionStart
    "session-end": [ ... ],      // ❌ Should be SessionEnd
    "file-modified": [ ... ],    // ❌ Non-existent event
    "context-request": [ ... ]   // ❌ Non-existent event
}
}
```
**Should be changed to:**
```json
{
"hooks": {
"SessionStart": [
{
"matcher": "startup|resume",
"hooks": [
{
"type": "command",
"command": "echo 'Session started'"
}
]
}
],
"PostToolUse": [
{
"matcher": "Write|Edit",
"hooks": [
{
"type": "command",
"command": "prettier --write $FILE_PATH"
}
]
}
],
"SessionEnd": [
{
"matcher": "clear",
"hooks": [
{
"type": "command",
"command": "echo 'Session ended'"
}
]
}
]
}
}
```
### 2. Incorrect Command Comments
**File:** `ccw/src/commands/hook.ts`
The current code references custom hook-handling logic that does not match the official standard.
---
## Fix Recommendations
### Priority 1: Critical Fixes
- [ ] Update `hooks-config-example.json` to use the official event names
- [ ] Update the configuration structure to match the official three-level standard
- [ ] Remove unsupported event types: `file-modified`, `context-request`
- [ ] Document the officially supported event list
### Priority 2: Feature Alignment
- [ ] Implement the official standard JSON stdin/stdout communication
- [ ] Implement standard exit code handling
- [ ] Support the standard matcher system
### Priority 3: Enhancements
- [ ] Add support for the `prompt` and `agent` handler types
- [ ] Implement standard async hook support (`async: true`)
- [ ] Add support for environment variable persistence (`CLAUDE_ENV_FILE`)
---
## Official Examples
### Example 1: Auto-format code after edits
```json
{
"hooks": {
"PostToolUse": [
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "jq -r '.tool_input.file_path' | xargs npx prettier --write"
}
]
}
]
}
}
```
### Example 2: Block edits to protected files
```json
{
"hooks": {
"PreToolUse": [
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/protect-files.sh"
}
]
}
]
}
}
```
### Example 3: Re-inject context at session start
```json
{
"hooks": {
"SessionStart": [
{
"matcher": "compact",
"hooks": [
{
"type": "command",
"command": "echo 'Use Bun, not npm. Run bun test before committing.'"
}
]
}
]
}
}
```
### Example 4: Condition-based permission decisions
```json
{
"hooks": {
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "prompt",
"prompt": "Is this a safe command to run? $ARGUMENTS"
}
]
}
]
}
}
```
---
## References
- Official guide: https://code.claude.com/docs/en/hooks-guide
- Official reference: https://code.claude.com/docs/en/hooks
- Official examples: https://github.com/anthropics/claude-code/tree/main/examples/hooks
---
## Summary
The current CCW hook implementation is based on a custom specification and is **completely incompatible** with the official Claude Code hook system. Aligning with the official standard requires a thorough refactor, including:
1. ✅ Adopt the official event names (already provided in the `.claude/docs/` files)
2. ✅ Adopt the official three-level configuration structure
3. ✅ Implement the official JSON stdin/stdout communication protocol
4. ✅ Remove the non-existent custom events
5. ✅ Support the official three handler types
Only then will CCW configurations work correctly when users migrate them to the real Claude Code CLI.

View File

@@ -0,0 +1,224 @@
# Claude Code Hooks - Documentation Index
This directory contains the complete documentation and analysis reports for the official Claude Code hook system.
---
## 📚 Official Documentation (downloaded)
### 1. HOOKS_OFFICIAL_GUIDE.md
- **Source**: https://code.claude.com/docs/en/hooks-guide
- **Content**: The official hooks guide, covering quick start, common use cases, and configuration tutorials
- **Audience**: Developers new to the hook system
### 2. HOOKS_OFFICIAL_REFERENCE.md
- **Source**: https://code.claude.com/docs/en/hooks
- **Content**: The complete technical reference, including every event's schema, input/output formats, and configuration options
- **Audience**: Developers who need to look up specific event parameters and configuration details
### 3. HOOKS_QUICK_REFERENCE.md
- **Content**: A quick-reference guide with the full event list, configuration templates, and common use cases
- **Audience**: Developers who need to quickly find a specific configuration or event
---
## 📊 Analysis Report
### 4. HOOKS_ANALYSIS_REPORT.md
- **Content**: Comparison of the current CCW hook implementation against the official standard
- **Includes**:
  - Problems in the current implementation
  - Event name comparison
  - Configuration structure comparison
  - Fix recommendations and priorities
- **Audience**: Developers who need to understand how the current implementation differs from the official standard
---
## 💻 Example Code
### 5. ../examples/hooks_bash_command_validator.py
- **Source**: https://github.com/anthropics/claude-code/blob/main/examples/hooks/bash_command_validator_example.py
- **Content**: Official example - a Bash command validator
- **Function**: Intercepts Bash commands and suggests ripgrep instead of grep
- **Audience**: Developers learning to write PreToolUse hooks
---
## 🎯 Official Hook Event List
### The 12 Officially Supported Hook Events
| # | Event | When it fires | Can block |
|---|-------|---------------|-----------|
| 1 | `SessionStart` | Session begins or resumes | ❌ |
| 2 | `UserPromptSubmit` | Before a user prompt is processed | ✅ |
| 3 | `PreToolUse` | Before a tool call | ✅ |
| 4 | `PermissionRequest` | When a permission dialog appears | ✅ |
| 5 | `PostToolUse` | After a tool call succeeds | ❌ |
| 6 | `PostToolUseFailure` | After a tool call fails | ❌ |
| 7 | `Notification` | When a notification is sent | ❌ |
| 8 | `SubagentStart` | When a subagent is spawned | ❌ |
| 9 | `SubagentStop` | When a subagent finishes | ✅ |
| 10 | `Stop` | When Claude finishes responding | ✅ |
| 11 | `PreCompact` | Before context compaction | ❌ |
| 12 | `SessionEnd` | When a session terminates | ❌ |
---
## ⚠️ Main Problems in the Current Implementation
### Problem 1: Event names do not match the official standard
**Currently used (incorrect):**
```json
{
"hooks": {
    "session-start": [],     // Wrong
    "session-end": [],       // Wrong
    "file-modified": [],     // Does not exist
    "context-request": []    // Does not exist
}
}
```
**Official standard (correct):**
```json
{
"hooks": {
    "SessionStart": [],    // Correct
    "SessionEnd": [],      // Correct
    "PostToolUse": [],     // Use other official events
    "PreToolUse": []       // in place of the custom ones
}
}
```
### Problem 2: Configuration structure does not match the official standard
**Current structure (custom):**
```json
{
"hooks": {
"session-start": [
{
"name": "...",
"enabled": true,
"handler": "internal:context",
"failMode": "silent"
}
]
}
}
```
**Official structure (standard):**
```json
{
"hooks": {
"SessionStart": [
{
"matcher": "startup|resume",
"hooks": [
{
"type": "command",
"command": "bash script.sh",
"timeout": 600
}
]
}
]
}
}
```
---
## 🔗 External Resources
### Official Resources
- **Official guide**: https://code.claude.com/docs/en/hooks-guide
- **Official reference**: https://code.claude.com/docs/en/hooks
- **GitHub examples**: https://github.com/anthropics/claude-code/tree/main/examples/hooks
- **Configuration blog**: https://claude.com/blog/how-to-configure-hooks
### Community Resources
- [Claude Code Hooks from basics to practice - Zhihu](https://zhuanlan.zhihu.com/p/1969164730326324920)
- [GitHub: claude-code-best-practices](https://github.com/xiaobei930/claude-code-best-practices)
- [Claude Code power user customization](https://claude.com/blog/how-to-configure-hooks)
---
## 📖 Recommended Reading Order
### For Beginners
1. `HOOKS_QUICK_REFERENCE.md` - Get a quick overview of hook concepts
2. `HOOKS_OFFICIAL_GUIDE.md` - Learn how to configure and use hooks
3. `hooks_bash_command_validator.py` - Review the example code
### For Developers (fixing the current implementation)
1. `HOOKS_ANALYSIS_REPORT.md` - Understand the problems and the fix plan
2. `HOOKS_OFFICIAL_REFERENCE.md` - Look up technical details
3. `HOOKS_OFFICIAL_GUIDE.md` - Learn best practices
### For Advanced Users
1. `HOOKS_OFFICIAL_REFERENCE.md` - Complete technical reference
2. The official GitHub repository - More examples
3. `HOOKS_QUICK_REFERENCE.md` - Quick lookup
---
## 🛠️ Quick Start
### View the Current Configuration
```bash
# In the Claude Code CLI
/hooks
```
### Create Your First Hook (format code)
`.claude/settings.json`:
```json
{
"hooks": {
"PostToolUse": [
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "jq -r '.tool_input.file_path' | xargs prettier --write"
}
]
}
]
}
}
```
### Debug Hooks
```bash
claude --debug  # View detailed execution logs
```
Press `Ctrl+O` in the CLI to toggle verbose mode.
---
## 📝 Document Updates
- **Created**: 2026-02-01
- **Official documentation version**: Latest (as of 2026-02-01)
- **Recommended next update**: When a new Claude Code version is released
---
## 🔍 Search Keywords
Hooks, Events, SessionStart, PreToolUse, PostToolUse, Configuration, Command, Prompt, Agent, Block, Notification
---
**Need help?**
See `HOOKS_QUICK_REFERENCE.md` for quick answers, or `HOOKS_OFFICIAL_REFERENCE.md` for full technical details.

View File

@@ -0,0 +1,124 @@
# Claude Code Hooks - Official Guide
> Complete official documentation from https://code.claude.com/docs/en/hooks-guide
## Automate workflows with hooks
Run shell commands automatically when Claude Code edits files, finishes tasks, or needs input. Format code, send notifications, validate commands, and enforce project rules.
### Hook lifecycle
Hooks fire at specific points during a Claude Code session. The official hook events are:
| Event | When it fires |
| :------------------- | :--------------------------------------------------- |
| `SessionStart` | When a session begins or resumes |
| `UserPromptSubmit` | When you submit a prompt, before Claude processes it |
| `PreToolUse` | Before a tool call executes. Can block it |
| `PermissionRequest` | When a permission dialog appears |
| `PostToolUse` | After a tool call succeeds |
| `PostToolUseFailure` | After a tool call fails |
| `Notification` | When Claude Code sends a notification |
| `SubagentStart` | When a subagent is spawned |
| `SubagentStop` | When a subagent finishes |
| `Stop` | When Claude finishes responding |
| `PreCompact` | Before context compaction |
| `SessionEnd` | When a session terminates |
### Hook handler types
There are three types of hook handlers:
1. **Command hooks** (`type: "command"`): Run a shell command
2. **Prompt hooks** (`type: "prompt"`): Use a Claude model for single-turn evaluation
3. **Agent hooks** (`type: "agent"`): Spawn subagent with tool access
### Configuration structure
```json
{
"hooks": {
"EventName": [
{
"matcher": "ToolName|AnotherTool",
"hooks": [
{
"type": "command",
"command": "bash /path/to/script.sh",
"timeout": 600,
"async": false
}
]
}
]
}
}
```
### Hook input (stdin)
Common fields for all events:
- `session_id`: Current session ID
- `transcript_path`: Path to conversation JSON
- `cwd`: Current working directory
- `permission_mode`: Current permission mode
- `hook_event_name`: Name of the event that fired
Event-specific fields depend on the event type.
### Hook output (exit codes and stdout)
- **Exit 0**: Success. Parse stdout for JSON decision
- **Exit 2**: Blocking error. stderr text becomes Claude's feedback
- **Any other code**: Non-blocking error
### Tool matchers
Available for: `PreToolUse`, `PostToolUse`, `PostToolUseFailure`, `PermissionRequest`
Tool names:
- Built-in: `Bash`, `Edit`, `Write`, `Read`, `Glob`, `Grep`, `Task`, `WebFetch`, `WebSearch`
- MCP tools: `mcp__<server>__<tool>` (e.g., `mcp__github__search_repositories`)
### Event matchers
Different events match on different fields:
- `SessionStart`: `startup`, `resume`, `clear`, `compact`
- `SessionEnd`: `clear`, `logout`, `prompt_input_exit`, `bypass_permissions_disabled`, `other`
- `Notification`: `permission_prompt`, `idle_prompt`, `auth_success`, `elicitation_dialog`
- `SubagentStart`/`SubagentStop`: agent type (e.g., `Bash`, `Explore`, `Plan`)
- `PreCompact`: `manual`, `auto`
### Hook configuration locations
| Location | Scope |
|----------|-------|
| `~/.claude/settings.json` | All your projects |
| `.claude/settings.json` | Single project |
| `.claude/settings.local.json` | Single project (gitignored) |
| Plugin `hooks/hooks.json` | When plugin enabled |
| Skill/Agent frontmatter | While component active |
### Best practices
**DO:**
- Use command hooks for deterministic actions
- Use prompt hooks for judgment-based decisions
- Use agent hooks when verification requires file inspection
- Quote all shell variables: `"$VAR"`
- Use absolute paths with `$CLAUDE_PROJECT_DIR`
- Set appropriate timeouts
- Use async hooks for long-running operations
- Keep hooks fast (< 10 seconds by default)
**DON'T:**
- Trust input data blindly
- Use relative paths
- Put sensitive data in hook output
- Create infinite loops in Stop hooks
- Run blocking operations without async
---
See https://code.claude.com/docs/en/hooks-guide for complete guide
See https://code.claude.com/docs/en/hooks for reference documentation

View File

@@ -0,0 +1,268 @@
# Claude Code Hooks - Official Reference
> Complete official reference from https://code.claude.com/docs/en/hooks
## Hooks reference
This is the complete technical reference for Claude Code hooks.
### Hook events reference
#### SessionStart
**When it fires:** When a session begins or resumes
**Matchers:**
- `startup` - New session
- `resume` - `--resume`, `--continue`, or `/resume`
- `clear` - `/clear`
- `compact` - Auto or manual compaction
**Input schema:**
```json
{
"session_id": "abc123",
"transcript_path": "/path/to/transcript.jsonl",
"cwd": "/current/working/dir",
"permission_mode": "default",
"hook_event_name": "SessionStart",
"source": "startup|resume|clear|compact",
"model": "claude-sonnet-4-5-20250929"
}
```
**Output control:**
- Exit 0: Text written to stdout is added to Claude's context
- Can use `additionalContext` in JSON output
- Cannot block session start
**Special variables:**
- `CLAUDE_ENV_FILE`: Write `export` statements to persist environment variables
#### UserPromptSubmit
**When it fires:** When user submits a prompt, before Claude processes it
**Input schema:**
```json
{
"session_id": "abc123",
"hook_event_name": "UserPromptSubmit",
"prompt": "User's prompt text here"
}
```
**Output control:**
- Exit 0: Plain text stdout is added as context
- `decision: "block"` prevents prompt processing
- `additionalContext` adds context to Claude
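As a sketch of these two output options, the hook below blocks prompts matching a (purely illustrative) policy and otherwise adds context; the exact field placement is a reading of the reference above, not verified CCW behavior.

```python
#!/usr/bin/env python3
"""Sketch of a UserPromptSubmit hook: block or add context."""
import json


def review_prompt(event: dict) -> dict:
    prompt = event.get("prompt", "")
    # Hypothetical rule: refuse prompts that reference the secrets file.
    if ".env" in prompt:
        return {"decision": "block",
                "reason": "Prompt references the .env secrets file"}
    # Otherwise attach extra context; the reminder text is invented.
    return {
        "hookSpecificOutput": {
            "hookEventName": "UserPromptSubmit",
            "additionalContext": "Reminder: run the linter before committing.",
        }
    }


if __name__ == "__main__":
    # A real hook reads the event from stdin: json.load(sys.stdin)
    print(json.dumps(review_prompt({"prompt": "summarize this repo"})))
```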
#### PreToolUse
**When it fires:** Before a tool call executes
**Matchers:** Tool names (Bash, Edit, Write, Read, etc.)
**Tool input schemas:**
- `Bash`: `command`, `description`, `timeout`, `run_in_background`
- `Write`: `file_path`, `content`
- `Edit`: `file_path`, `old_string`, `new_string`, `replace_all`
- `Read`: `file_path`, `offset`, `limit`
- `Glob`: `pattern`, `path`
- `Grep`: `pattern`, `path`, `glob`, `output_mode`, `-i`, `multiline`
- `WebFetch`: `url`, `prompt`
- `WebSearch`: `query`, `allowed_domains`, `blocked_domains`
- `Task`: `prompt`, `description`, `subagent_type`, `model`
**Output control:**
- `permissionDecision`: `"allow"`, `"deny"`, `"ask"`
- `permissionDecisionReason`: Explanation
- `updatedInput`: Modify tool input before execution
- `additionalContext`: Add context to Claude
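The `updatedInput` field can rewrite a tool call before it runs, in the spirit of the official Bash command validator example. The sketch below applies an illustrative grep-to-ripgrep rewrite; the rule itself is an assumption for the example.

```python
#!/usr/bin/env python3
"""PreToolUse sketch: rewrite `grep` invocations to `rg` via updatedInput."""
import json


def rewrite_bash(event: dict) -> dict:
    tool_input = dict(event.get("tool_input", {}))
    command = tool_input.get("command", "")
    if command.startswith("grep "):
        tool_input["command"] = "rg " + command[len("grep "):]
        return {
            "hookSpecificOutput": {
                "hookEventName": "PreToolUse",
                "permissionDecision": "allow",
                "permissionDecisionReason": "Rewrote grep to ripgrep",
                "updatedInput": tool_input,
            }
        }
    return {}  # empty output: no opinion, default permission flow applies


if __name__ == "__main__":
    # A real hook reads the event from stdin: json.load(sys.stdin)
    sample = {"tool_name": "Bash", "tool_input": {"command": "grep -n TODO src/"}}
    print(json.dumps(rewrite_bash(sample)))
```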
#### PermissionRequest
**When it fires:** When permission dialog appears
**Input schema:** Similar to PreToolUse but fires when permission needed
**Output control:**
- `decision.behavior`: `"allow"` or `"deny"`
- `decision.updatedInput`: Modify input before execution
- `decision.message`: For deny, tells Claude why
#### PostToolUse
**When it fires:** After a tool call succeeds
**Input schema:** Includes both `tool_input` and `tool_response`
**Output control:**
- `decision: "block"` to flag issue to Claude
- `additionalContext`: Add context
- `updatedMCPToolOutput`: For MCP tools, replace output
#### PostToolUseFailure
**When it fires:** After a tool call fails
**Input schema:** Includes `error` and `is_interrupt` fields
**Output control:**
- `additionalContext`: Provide context about the failure
#### Notification
**When it fires:** When Claude Code sends a notification
**Matchers:**
- `permission_prompt` - Permission needed
- `idle_prompt` - Claude idle
- `auth_success` - Auth successful
- `elicitation_dialog` - Dialog shown
**Input schema:**
```json
{
"hook_event_name": "Notification",
"message": "Notification text",
"title": "Title",
"notification_type": "permission_prompt|idle_prompt|..."
}
```
#### SubagentStart
**When it fires:** When subagent is spawned
**Matchers:** Agent types (Bash, Explore, Plan, or custom)
**Input schema:**
```json
{
"hook_event_name": "SubagentStart",
"agent_id": "agent-abc123",
"agent_type": "Explore"
}
```
**Output control:**
- `additionalContext`: Add context to subagent
#### SubagentStop
**When it fires:** When subagent finishes
**Input schema:** Similar to SubagentStart with `stop_hook_active` field
#### Stop
**When it fires:** When Claude finishes responding
**Input schema:**
```json
{
"hook_event_name": "Stop",
"stop_hook_active": false|true
}
```
**Output control:**
- `decision: "block"` prevents Claude from stopping
- `reason`: Required when blocking, tells Claude why to continue
- Check `stop_hook_active` to prevent infinite loops
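Under these rules, a Stop hook might guard against loops as in the sketch below; the `tests_passed` input stands in for whatever real check the hook would run, and the reason text is illustrative.

```python
#!/usr/bin/env python3
"""Stop hook sketch: ask Claude to continue once, guarding against loops."""
import json


def on_stop(event: dict, tests_passed: bool) -> dict:
    # stop_hook_active is true when Claude is already continuing because of
    # this hook; blocking again here would create an infinite loop.
    if event.get("stop_hook_active"):
        return {}
    if not tests_passed:
        # decision "block" prevents Claude from stopping; reason is required.
        return {"decision": "block",
                "reason": "Tests are failing; fix them before stopping."}
    return {}


if __name__ == "__main__":
    # A real hook reads the event from stdin and runs a real check.
    event = {"hook_event_name": "Stop", "stop_hook_active": False}
    print(json.dumps(on_stop(event, tests_passed=True)))
```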
#### PreCompact
**When it fires:** Before context compaction
**Matchers:**
- `manual` - `/compact`
- `auto` - Auto-compact when context full
**Input schema:**
```json
{
"hook_event_name": "PreCompact",
"trigger": "manual|auto",
"custom_instructions": ""
}
```
#### SessionEnd
**When it fires:** When session terminates
**Matchers:**
- `clear` - `/clear`
- `logout` - User logged out
- `prompt_input_exit` - User exited during prompt
- `bypass_permissions_disabled` - Bypass disabled
- `other` - Other reasons
**Input schema:**
```json
{
"hook_event_name": "SessionEnd",
"reason": "clear|logout|..."
}
```
### Prompt-based hooks
**Type:** `"prompt"`
**Configuration:**
```json
{
"type": "prompt",
"prompt": "Your prompt here. Use $ARGUMENTS for input JSON",
"model": "haiku",
"timeout": 30
}
```
**Response schema:**
```json
{
"ok": true|false,
"reason": "Explanation if ok is false"
}
```
### Agent-based hooks
**Type:** `"agent"`
**Configuration:**
```json
{
"type": "agent",
"prompt": "Your prompt here. Use $ARGUMENTS for input JSON",
"model": "haiku",
"timeout": 60
}
```
**Response schema:** Same as prompt hooks
### Async hooks
**For command hooks only:**
```json
{
"type": "command",
"command": "...",
"async": true,
"timeout": 300
}
```
- Doesn't block Claude's execution
- Cannot return decisions
- Output delivered on next conversation turn
- Max 50 turns per session
---
See https://code.claude.com/docs/en/hooks for full reference


@@ -0,0 +1,390 @@
# Claude Code Hooks - Quick Reference
## The 12 Official Hook Events
| # | Event | Fires When | Blockable | Matchers |
|---|---------|---------|--------|--------|
| 1 | `SessionStart` | Session starts/resumes | ❌ | startup, resume, clear, compact |
| 2 | `UserPromptSubmit` | Before a user prompt is processed | ✅ | ❌ not supported |
| 3 | `PreToolUse` | Before a tool call | ✅ | Tool name |
| 4 | `PermissionRequest` | When a permission dialog appears | ✅ | Tool name |
| 5 | `PostToolUse` | After a tool call succeeds | ❌ | Tool name |
| 6 | `PostToolUseFailure` | After a tool call fails | ❌ | Tool name |
| 7 | `Notification` | When a notification is sent | ❌ | Notification type |
| 8 | `SubagentStart` | When a subagent starts | ❌ | Agent type |
| 9 | `SubagentStop` | When a subagent finishes | ✅ | Agent type |
| 10 | `Stop` | When Claude finishes responding | ✅ | ❌ not supported |
| 11 | `PreCompact` | Before context compaction | ❌ | manual, auto |
| 12 | `SessionEnd` | When the session terminates | ❌ | Termination reason |
---
## Configuration Template
### Basic structure
```json
{
"hooks": {
"EventName": [
{
"matcher": "pattern",
"hooks": [
{
"type": "command|prompt|agent",
"command": "...",
"timeout": 600,
"async": false
}
]
}
]
}
}
```
---
## Tool Names (for matchers)
### Built-in tools
```
Bash, Edit, Write, Read, Glob, Grep, Task, WebFetch, WebSearch
```
### MCP tools
```
mcp__<server>__<tool>
mcp__memory__.*
mcp__.*__write.*
```
---
## Exit Codes
| Code | Meaning | Feedback to Claude |
|------|------|-----------|
| 0 | Success | stdout JSON is parsed; operation allowed |
| 2 | Block | stderr is sent to Claude; operation blocked |
| Other | Error | stderr shown only in verbose mode |
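A minimal PreToolUse command hook exercising this exit-code contract might look like this (the `rm -rf` rule is only an example policy):

```python
import sys


def check(tool_input):
    """Return an error message to block the call, or None to allow it."""
    if "rm -rf" in tool_input.get("command", ""):
        return "Blocked: rm -rf is not allowed"
    return None


if __name__ == "__main__":
    # A real hook reads the input JSON from stdin; this is a demo payload.
    msg = check({"command": "ls"})
    if msg:
        print(msg, file=sys.stderr)
        sys.exit(2)  # exit 2: block the call; stderr goes to Claude
    # falling through exits 0: allow; stdout JSON (if any) is parsed
```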
---
## Standard JSON Input (stdin)
```json
{
"session_id": "abc123",
"transcript_path": "/path/to/transcript.jsonl",
"cwd": "/current/dir",
"permission_mode": "default",
"hook_event_name": "PreToolUse",
"tool_name": "Bash",
"tool_input": {
"command": "npm test"
}
}
```
---
## Standard JSON Output (stdout, exit 0)
### PreToolUse
```json
{
"hookSpecificOutput": {
"hookEventName": "PreToolUse",
"permissionDecision": "allow|deny|ask",
"permissionDecisionReason": "explanation"
}
}
```
### UserPromptSubmit
```json
{
"decision": "block",
"reason": "explanation",
"hookSpecificOutput": {
"hookEventName": "UserPromptSubmit",
"additionalContext": "context string"
}
}
```
### Stop
```json
{
"decision": "block",
"reason": "Must complete tasks X, Y, Z"
}
```
---
## Handler Types
### 1. Command Hook
```json
{
"type": "command",
"command": "bash /path/to/script.sh",
"timeout": 600,
"async": false
}
```
### 2. Prompt Hook
```json
{
"type": "prompt",
"prompt": "Evaluate: $ARGUMENTS",
"model": "haiku",
"timeout": 30
}
```
Response format:
```json
{
"ok": true|false,
"reason": "explanation if ok is false"
}
```
### 3. Agent Hook
```json
{
"type": "agent",
"prompt": "Verify tests pass: $ARGUMENTS",
"model": "haiku",
"timeout": 60
}
```
Response format: same as Prompt Hook
---
## Environment Variables
### Standard variables
```bash
$CLAUDE_PROJECT_DIR # Project root directory
$CLAUDE_PLUGIN_ROOT # Plugin root (for use inside plugins)
$CLAUDE_CODE_REMOTE # "true" in remote environments
```
### SessionStart-specific variable
```bash
$CLAUDE_ENV_FILE # Path of the file for persisted environment variables
```
Usage:
```bash
#!/bin/bash
if [ -n "$CLAUDE_ENV_FILE" ]; then
echo 'export NODE_ENV=production' >> "$CLAUDE_ENV_FILE"
fi
```
---
## Common Use Cases
### 1. Format code
```json
{
"hooks": {
"PostToolUse": [
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "jq -r '.tool_input.file_path' | xargs prettier --write"
}
]
}
]
}
}
```
### 2. Block dangerous commands
```json
{
"hooks": {
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/block-rm.sh"
}
]
}
]
}
}
```
The script `block-rm.sh`:
```bash
#!/bin/bash
COMMAND=$(jq -r '.tool_input.command')
if echo "$COMMAND" | grep -q 'rm -rf'; then
echo "Blocked: rm -rf is not allowed" >&2
exit 2
fi
exit 0
```
### 3. Notify the user
```json
{
"hooks": {
"Notification": [
{
"matcher": "permission_prompt",
"hooks": [
{
"type": "command",
"command": "osascript -e 'display notification \"Claude needs your attention\"'"
}
]
}
]
}
}
```
### 4. Confirm task completion
```json
{
"hooks": {
"Stop": [
{
"hooks": [
{
"type": "prompt",
"prompt": "Check if all tasks complete: $ARGUMENTS"
}
]
}
]
}
}
```
### 5. Run tests asynchronously
```json
{
"hooks": {
"PostToolUse": [
{
"matcher": "Write",
"hooks": [
{
"type": "command",
"command": "/path/to/run-tests.sh",
"async": true,
"timeout": 120
}
]
}
]
}
}
```
### 6. Inject context at session start
```json
{
"hooks": {
"SessionStart": [
{
"matcher": "compact",
"hooks": [
{
"type": "command",
"command": "echo 'Reminder: use Bun, not npm'"
}
]
}
]
}
}
```
---
## Configuration Locations
| Location | Scope | Shareable |
|------|--------|--------|
| `~/.claude/settings.json` | Global (per user) | ❌ |
| `.claude/settings.json` | Single project | ✅ |
| `.claude/settings.local.json` | Single project | ❌ (gitignored) |
| Plugin `hooks/hooks.json` | While the plugin is enabled | ✅ |
| Skill/Agent frontmatter | While the component is active | ✅ |
---
## Debugging Tips
### 1. Verbose mode
```bash
claude --debug # Show full hook execution details
Ctrl+O # Toggle verbose mode (live)
```
### 2. Test hook scripts
```bash
echo '{"tool_name":"Bash","tool_input":{"command":"ls"}}' | ./my-hook.sh
echo $? # Check the exit code
```
### 3. Inspect hook configuration
```
/hooks # Interactive hook manager
```
---
## Best Practices
**Do:**
- Always quote shell variables: `"$VAR"`
- Use absolute paths: `"$CLAUDE_PROJECT_DIR"/script.sh`
- Set reasonable timeouts
- Validate and sanitize input data
- Check `stop_hook_active` in Stop hooks
- Use async for long-running operations
**Avoid:**
- Trusting input data blindly
- Using relative paths
- Exposing sensitive data in hook output
- Creating infinite loops (especially in Stop hooks)
- Blocking operations with no timeout set
---
## Official Resources
- **Guide**: https://code.claude.com/docs/en/hooks-guide
- **Reference**: https://code.claude.com/docs/en/hooks
- **Examples**: https://github.com/anthropics/claude-code/tree/main/examples/hooks
---
## Local Documentation
- `HOOKS_OFFICIAL_GUIDE.md` - Chinese translation of the official guide
- `HOOKS_OFFICIAL_REFERENCE.md` - Chinese translation of the official reference
- `HOOKS_ANALYSIS_REPORT.md` - Gap analysis against the current implementation
- `hooks_bash_command_validator.py` - Official example script


@@ -0,0 +1,85 @@
#!/usr/bin/env python3
"""
Claude Code Hook: Bash Command Validator
=========================================
This hook runs as a PreToolUse hook for the Bash tool.
It validates bash commands against a set of rules before execution.
In this case it blocks plain grep calls and suggests rg instead.
Read more about hooks here: https://code.claude.com/docs/en/hooks
Make sure to change your path to your actual script.
Configuration for .claude/settings.json:
{
"hooks": {
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "python3 /path/to/bash_command_validator.py"
}
]
}
]
}
}
"""
import json
import re
import sys
# Define validation rules as a list of (regex pattern, message) tuples
_VALIDATION_RULES = [
(
r"^grep\b(?!.*\|)",
"Use 'rg' (ripgrep) instead of 'grep' for better performance and features",
),
(
r"^find\s+\S+\s+-name\b",
"Use 'rg --files | rg pattern' or 'rg --files -g pattern' instead of 'find -name' for better performance",
),
]
def _validate_command(command: str) -> list[str]:
issues = []
for pattern, message in _VALIDATION_RULES:
if re.search(pattern, command):
issues.append(message)
return issues
def main():
try:
input_data = json.load(sys.stdin)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON input: {e}", file=sys.stderr)
# Exit code 1 shows stderr to the user but not to Claude
sys.exit(1)
tool_name = input_data.get("tool_name", "")
if tool_name != "Bash":
sys.exit(0)
tool_input = input_data.get("tool_input", {})
command = tool_input.get("command", "")
if not command:
sys.exit(0)
issues = _validate_command(command)
if issues:
for message in issues:
print(f"{message}", file=sys.stderr)
# Exit code 2 blocks tool call and shows stderr to Claude
sys.exit(2)
if __name__ == "__main__":
main()


@@ -0,0 +1,741 @@
# Command-to-Skill Conversion Spec v2.0
> A standard process distilled from the `workflow-plan` conversion practice
>
> **v2.0 change**: command-invocation references are now converted uniformly to file-path references; command syntax such as `/workflow:XX` is no longer kept
## ⚠️ Core Requirement
**Phase file content must match the original command file content.**
Conversion changes only the structure and organization, never the functional implementation details.
---
## Table of Contents
1. [Conversion Goals](#1-conversion-goals)
2. [Core Principles](#2-core-principles)
3. [Structure Mapping](#3-structure-mapping)
4. [Conversion Steps](#4-conversion-steps)
5. [SKILL.md Authoring Rules](#5-skillmd-authoring-rules)
6. [Phase File Authoring Rules](#6-phase-file-authoring-rules)
7. [Content Consistency Verification](#7-content-consistency-verification)
8. [Quality Checklist](#8-quality-checklist)
9. [Example Comparison](#9-example-comparison)
---
## 1. Conversion Goals
### 1.1 Why convert
| Command format | Skill format |
|----------|-----------|
| Single file + subcommand references | Structured directory + phased execution |
| Everything loaded at once | **Distributed reading** (load on demand) |
| Implicit execution flow | Execution flow **explicitly defined** |
| Subcommands scattered everywhere | Phase files **centrally managed** |
| No reuse mechanism | Reusable templates/specs |
### 1.2 Benefits
- **Context optimization**: Phase files load on demand, reducing token consumption
- **Maintainability**: clear structure, contained change scope
- **Extensibility**: adding a stage only requires adding a Phase file
- **Execution visibility**: TodoWrite aligns with the Phases, making progress transparent
---
## 2. Core Principles
### 2.1 Content consistency principle (highest priority)
> **Phase file content must match the original command file content.**
>
> This is the first requirement of the conversion and is non-negotiable.
**Definition of consistency**
| Dimension | Consistency requirement | Verification method |
|------|-----------|---------|
| **Code logic** | All Bash commands, functions, and conditionals identical | Compare code blocks line by line |
| **Agent Prompt** | Prompt content 100% preserved, including examples and constraints | Full-text search for key passages |
| **Data structures** | Tables, schemas, and type definitions copied in full | Compare field counts |
| **Execution steps** | Step order and sub-step hierarchy unchanged | Compare step numbering |
| **Explanatory text** | Key explanations, comments, and warnings fully preserved | Spot-check important passages |
**Allowed differences**
| Difference type | Description | Example |
|---------|------|------|
| ✅ Add phase identifiers | Phase title, source attribution | `# Phase 1: Session Discovery` |
| ✅ Add organizational structure | Improved section headings | Add `## Execution Steps` |
| ✅ Add transition notes | Post-Phase Update | Status notes after a phase completes |
| ✅ Convert command references to file paths | Convert command references such as `/workflow` and `/command` into relative file paths | Original `/workflow:session:start` → `phases/01-session-discovery.md` |
| ✅ Remove frontmatter | Remove the command's argument-hint and examples | Command-level metadata is not needed in a Skill |
| ❌ Simplify code | Any omission or rewriting of code | Not allowed |
| ❌ Simplify prompts | Any trimming of an Agent Prompt | Not allowed |
| ❌ Adjust steps | Merging, splitting, or reordering steps | Not allowed |
**Content preservation checklist**
| Content type | Preservation requirement | Example |
|---------|---------|------|
| **Bash commands** | Full commands, including all arguments, pipes, and redirections | `find . -name "*.json" \| head -1` |
| **Agent Prompt** | Full text preserved, including all sections such as OBJECTIVE, TASK, EXPECTED | The complete `Task({prompt: "..."})` |
| **Code functions** | Full function bodies with all if/else branches | All of `analyzeTaskComplexity()` |
| **Parameter tables** | All rows and columns; no parameter omitted | The Session Types table |
| **JSON Schema** | All field, type, and required definitions | The context-package.json schema |
| **Validation logic** | All check conditions, error handling, and rollback code | The conflict resolution checklist |
| **Configuration examples** | Input/output examples and config templates | The planning-notes.md template |
| **Comments** | Important inline and block comments | `// CRITICAL: ...` |
**Strictly forbidden simplifications**
1. **Replacing code with a description**
   - ❌ Wrong: `Execute context gathering agent`
   - ✅ Right: the complete `Task({ subagent_type: "context-search-agent", prompt: "...[full 200-line prompt]..." })`
2. **Omitting prompt content**
   - ❌ Wrong: `Agent prompt for context gathering (see original file)`
   - ✅ Right: copy and paste the full prompt text
3. **Compressing tables**
   - ❌ Wrong: `Session types: Discovery/Auto/Force (see details in code)`
   - ✅ Right: the full Session Types table with its type/description/use case columns
4. **Merging steps**
   - ❌ Wrong: merging Steps 1.1-1.4 into a single Step 1
   - ✅ Right: keep 1.1, 1.2, 1.3, and 1.4 as independent steps
5. **Deleting examples**
   - ❌ Wrong: omitting the JSON output examples
   - ✅ Right: keep all input/output examples
**Reference conversion rules**
Command references are converted uniformly to file-path references, as follows:
| Original command reference | Conversion | Example |
|-----------|---------|------|
| **Frontmatter** | Remove; define centrally in SKILL.md | `argument-hint`, `examples` → removed |
| **Command invocation syntax** | Convert to the Phase file's relative path | `/workflow:session:start` → `phases/01-session-discovery.md` |
| **Command path references** | Convert to paths inside the Skill directory | `commands/workflow/tools/` → `phases/` |
| **Cross-command references** | Convert to inter-Phase file references | `/workflow:tools:context-gather` → `phases/02-context-gathering.md` |
| **Command argument docs** | Remove, or move into the Phase Prerequisites | `usage: /workflow:plan [session-id]` → described in Phase Prerequisites |
**Conversion example**
In the original command file:
```markdown
## Related Commands
- `/workflow:session:start` - Start new session
- `/workflow:tools:context-gather` - Gather context
```
In the converted Phase file (using file-path references):
```markdown
## Related Phases
- [Phase 1: Session Discovery](phases/01-session-discovery.md)
- [Phase 2: Context Gathering](phases/02-context-gathering.md)
```
Or manage the reference relationships centrally in the SKILL.md Phase Reference Table:
```markdown
### Phase Reference Documents
| Phase | Document | Purpose |
|-------|----------|---------|
| Phase 1 | [phases/01-session-discovery.md](phases/01-session-discovery.md) | Session discovery and initialization |
| Phase 2 | [phases/02-context-gathering.md](phases/02-context-gathering.md) | Context gathering |
```
### 2.2 Distributed reading principle
> **SKILL.md references the Phases; Phases are loaded in execution order**
```
Execution flow:
Phase 1 → Read: phases/01-xxx.md → execute → release
Phase 2 → Read: phases/02-xxx.md → execute → release
Phase N → Read: phases/0N-xxx.md → execute → done
```
### 2.3 Separation of responsibilities
| Layer | Responsibility | Content |
|------|------|------|
| **SKILL.md** | Orchestration and coordination | Flow diagram, data flow, TodoWrite pattern, phase entry points |
| **Phase files** | Concrete execution | Full execution logic, code implementation, Agent Prompts |
| **Specs** | Normative constraints | Schema definitions, quality standards, domain rules |
| **Templates** | Reusable fragments | Base agent templates, output templates |
---
## 3. Structure Mapping
### 3.1 Command structure → Skill structure
```
commands/ skills/
├── workflow/ ├── workflow-plan/
│ ├── plan.md ────────→ │ ├── SKILL.md (orchestrator)
│ ├── session/ │ ├── phases/
│ │ └── start.md ────────→ │ │ ├── 01-session-discovery.md
│ └── tools/ │ │ ├── 02-context-gathering.md
│ ├── context-gather.md ───→ │ │ ├── 03-conflict-resolution.md
│ ├── conflict-resolution.md │ │ └── 04-task-generation.md
│ └── task-generate-agent.md │ └── (specs/, templates/ optional)
```
### 3.2 File mapping rules
| Source file type | Target file | Mapping rule |
|-----------|---------|---------|
| Main command file | `SKILL.md` | Extract metadata, flow, and coordination logic |
| Subcommand files | `phases/0N-xxx.md` | Full content + phase identifier |
| Referenced specs | `specs/xxx.md` | Extract separately or keep inside the Phase |
| Shared templates | `templates/xxx.md` | Extract when referenced in multiple places |
### 3.3 Naming conversion
| Original command path | Phase file name |
|-----------|-------------|
| `session/start.md` | `01-session-discovery.md` |
| `tools/context-gather.md` | `02-context-gathering.md` |
| `tools/conflict-resolution.md` | `03-conflict-resolution.md` |
| `tools/task-generate-agent.md` | `04-task-generation.md` |
**Naming rule**: `{number}-{action-description}.md`
- Number: two digits `01`, `02`, ...
- Action: verb-noun form, lowercase with hyphens
---
## 4. Conversion Steps
### 4.1 Preparation
```markdown
□ Step 0: Analyze the source command structure
 - Read the main command file and identify the subcommands it invokes
 - Draw the command invocation graph
 - Count each file's lines (for later verification)
```
### 4.2 Create the directory structure
```markdown
□ Step 1: Create the Skill directory
 mkdir skills/{skill-name}/
 mkdir skills/{skill-name}/phases/
```
### 4.3 Write SKILL.md
```markdown
□ Step 2: Write SKILL.md
 - Extract the frontmatter (name, description, allowed-tools)
 - Write the architecture overview diagram
 - Write the execution flow (including the Phase reference table)
 - Write the data-flow definitions
 - Write the TodoWrite pattern
 - Write the coordination logic (phase-to-phase handoff)
```
### 4.4 Convert Phase files
```markdown
□ Step 3: Convert each subcommand into a Phase file
 FOR each subcommand:
 - Read the original command's full content
 - Add the Phase title and metadata
 - Keep all code, tables, and Agent Prompts
 - Add a Post-Phase Update section (if needed)
 - Verify the line count is close to the original file's
```
### 4.5 Verify completeness
```markdown
□ Step 4: Verify content completeness
 - Compare the Phase file's line count with the original command's
 - Confirm the key code blocks are present
 - Confirm the Agent Prompts are complete
 - Confirm the tables and schemas are complete
```
---
## 5. SKILL.md Authoring Rules
### 5.1 Frontmatter template
```yaml
---
name: {skill-name}
description: {short description}. Triggers on "{trigger-phrase}".
allowed-tools: Task, AskUserQuestion, TodoWrite, Read, Write, Edit, Bash, Glob, Grep
---
```
### 5.2 Required sections
```markdown
# {Skill Name}
{one-sentence description of what it does}
## Architecture Overview
{ASCII architecture diagram}
## Key Design Principles
{list of design principles}
## Auto Mode
{auto-mode description}
## Usage
```
Skill(skill="{skill-name}", args="<task description>")
Skill(skill="{skill-name}", args="[FLAGS] \"<task description>\"")
# Flags
{flag descriptions, one flag per line}
# Examples
Skill(skill="{skill-name}", args="\"Implement JWT authentication\"") # note
Skill(skill="{skill-name}", args="--mode xxx \"Refactor payment module\"") # note
Skill(skill="{skill-name}", args="-y \"Add user profile page\"") # note
```
## Execution Flow
{flow diagram + Phase reference table}
### Phase Reference Documents
| Phase | Document | Load When |
|-------|----------|-----------|
| Phase 1 | phases/01-xxx.md | At the start of Phase 1 |
| ... | ... | ... |
## Core Rules
{execution constraints and rules}
## Data Flow
{inter-phase data-passing definitions}
## TodoWrite Pattern
{todo-list template}
## Error Handling
{error-handling strategy}
```
**Usage format requirements**
- The Usage content **must be wrapped in a code block**
- Use the `Skill()` invocation format, not the `/skill-name` command-line format
- Include both invocation forms: a basic call plus a full call with flags
- Flag descriptions list one flag per line, in the format `flag-name description`
- Examples must cover the typical invocation for every flag combination
- Escape quotes inside string arguments as `\"`
- Examples may end with a `# note` comment
### 5.3 Execution flow example
```markdown
## Execution Flow
```
Phase 1: Session Discovery
│ Ref: phases/01-session-discovery.md
Phase 2: Context Gathering
│ Ref: phases/02-context-gathering.md
Phase 3: Conflict Resolution (conditional)
│ Ref: phases/03-conflict-resolution.md
Phase 4: Task Generation
│ Ref: phases/04-task-generation.md
Complete: IMPL_PLAN.md + Task JSONs
```
### Phase Reference Documents
| Phase | Document | Load When |
|-------|----------|-----------|
| Phase 1 | `phases/01-session-discovery.md` | At the start of Phase 1 |
| Phase 2 | `phases/02-context-gathering.md` | After Phase 1 completes |
| Phase 3 | `phases/03-conflict-resolution.md` | When conflict_risk ≥ medium |
| Phase 4 | `phases/04-task-generation.md` | After Phase 3 completes or is skipped |
```
---
## 6. Phase File Authoring Rules
### 6.1 Phase file structure
```markdown
# Phase N: {Phase Name}
> Source: `commands/{path}/original.md`
## Overview
{phase goal and responsibilities}
## Prerequisites
{prerequisites and inputs}
## Execution Steps
### Step N.1: {Step Name}
{full implementation code}
### Step N.2: {Step Name}
{full implementation code}
## Agent Prompt (if any)
{the full Agent Prompt, unsimplified}
## Output Format
{output schema and examples}
## Post-Phase Update
{state updates after the phase completes}
```
### 6.2 Conversion principle: change structure only, never content
**Mindset during conversion**
> "I am **moving** content, not **rewriting** it."
| Operation | Allowed | Forbidden |
|------|------|------|
| Copy-paste code | ✅ | |
| Adjust section headings | ✅ | |
| Add phase identifiers | ✅ | |
| Rewrite code logic | | ❌ |
| Simplify prompts | | ❌ |
| Omit steps | | ❌ |
**Conversion workflow**
1. **Open the original command file and the new Phase file side by side**
2. **Copy and paste section by section** (do not rewrite from memory)
3. **Add only structural elements**: the Phase title and section headings
4. **Preserve 100% of the functional content**: code, prompts, tables, examples
### 6.3 Content completeness self-check
After converting each Phase file, complete the following checks:
| Check | Method | Pass criterion |
|--------|---------|---------|
| **Code block count** | Count ` ```bash ` and ` ```javascript ` fences | Equal to the original file |
| **Table count** | Count lines starting with ` \| ` | Equal to the original file |
| **Agent Prompt** | Search for `Task({` | The prompt argument content is complete |
| **Step numbering** | Check `### Step` | The numbering sequence matches the original file |
| **File line count** | `wc -l` | Within ±20% |
| **Key functions** | Search for the function names | All functions fully preserved |
**Example check**
```bash
# Original command file
$ grep -c "^###" commands/workflow/tools/context-gather.md
15
# The Phase file should be equal or slightly higher
$ grep -c "^###" skills/workflow-plan/phases/02-context-gathering.md
16 # ✓ only the "Post-Phase Update" section was added
# Compare code-block counts
$ grep -c '```' commands/workflow/tools/context-gather.md
24
$ grep -c '```' skills/workflow-plan/phases/02-context-gathering.md
24 # ✓ exactly equal
```
### 6.4 Content preservation checklist (detailed)
The following must be preserved in full during conversion:
| Content type | Preservation requirement |
|---------|---------|
| **Bash commands** | Full commands, including all arguments and paths |
| **Agent Prompt** | Full text, including examples and constraints |
| **Code functions** | Full function bodies, not simplified |
| **Parameter tables** | All columns and rows |
| **JSON Schema** | Complete field definitions |
| **Validation logic** | if/else and check code |
| **Error handling** | try/catch and rollback logic |
| **Examples** | Input/output examples |
### 6.5 Forbidden simplifications
**Don't do this:**
```markdown
### Step 2: Run context gathering
Execute the context-search-agent to gather project context.
```
**Do this instead:**
```markdown
### Step 2: Run context gathering
```javascript
Task({
subagent_type: "context-search-agent",
prompt: `
## Context Search Task
### OBJECTIVE
Gather comprehensive context for planning session ${sessionId}
### MULTI-SOURCE DISCOVERY STRATEGY
**Track 0: Project Foundation**
- Read CLAUDE.md files at all levels
- Check .workflow/project-tech.json for stack info
...
[full prompt content]
`,
run_in_background: false
})
```
```
---
## 7. Content Consistency Verification
### 7.1 Verification workflow
After each Phase file conversion, run the following verification:
```
┌─────────────────────────────────────────────────────────────────┐
│ Content Consistency Verification Workflow                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│ Step 1: Line-count comparison                                   │
│   $ wc -l <original file> <Phase file>                          │
│   └─→ The difference should be within ±20%                      │
│                                                                 │
│ Step 2: Code-block comparison                                   │
│   $ grep -c '```' <original file> <Phase file>                  │
│   └─→ The counts should be equal                                │
│                                                                 │
│ Step 3: Key-content spot checks                                 │
│   - Search for Task({         → Agent Prompt completeness       │
│   - Search for function names → function-body completeness      │
│   - Search for table markers  → table completeness              │
│                                                                 │
│ Step 4: Side-by-side comparison                                 │
│   - Open the original file and the Phase file                   │
│   - Compare the functional content section by section           │
│   - Confirm nothing was omitted or rewritten                    │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
### 7.2 Verification commands
```bash
# 1. Line-count comparison
wc -l commands/workflow/tools/context-gather.md skills/workflow-plan/phases/02-context-gathering.md
# 2. Code-block count
grep -c '```' commands/workflow/tools/context-gather.md
grep -c '```' skills/workflow-plan/phases/02-context-gathering.md
# 3. Table row count
grep -c '^|' commands/workflow/tools/context-gather.md
grep -c '^|' skills/workflow-plan/phases/02-context-gathering.md
# 4. Agent Prompt check
grep -c 'Task({' commands/workflow/tools/context-gather.md
grep -c 'Task({' skills/workflow-plan/phases/02-context-gathering.md
# 5. Function definition check
grep -E '^(function|const.*=.*=>|async function)' commands/workflow/tools/context-gather.md
grep -E '^(function|const.*=.*=>|async function)' skills/workflow-plan/phases/02-context-gathering.md
```
### 7.3 Consistency checklist
Fill this in for every Phase file:
| Check | Original file | Phase file | Consistent? |
|--------|--------|-----------|---------|
| File line count | ___ | ___ | □ within ±20% |
| Code block count | ___ | ___ | □ equal |
| Table row count | ___ | ___ | □ equal |
| Task call count | ___ | ___ | □ equal |
| Function definition count | ___ | ___ | □ equal |
| Step count | ___ | ___ | □ equal or +1 |
### 7.4 Handling inconsistencies
When an inconsistency is found:
| Situation | Action |
|------|---------|
| Phase file >20% shorter | ❌ **Must fix**: locate the missing content and copy it from the original file |
| Fewer code blocks | ❌ **Must fix**: find the missing code blocks |
| Agent Prompt missing | ❌ **Severe problem**: immediately copy it in full from the original file |
| Table missing | ❌ **Must fix**: copy the complete table |
| Phase file >20% longer | ✓ Acceptable (structural content was added) |
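The counters behind these checks can be wrapped in one small helper. This is a sketch: the thresholds follow the checklist above, but the function name and report shape are illustrative, not part of the spec.

```python
import re
from pathlib import Path


def consistency_report(src_path, phase_path):
    """Compare a source command file with its converted Phase file."""
    def stats(text):
        return {
            "lines": text.count("\n") + 1,
            "code_fences": text.count("```"),
            "table_rows": len(re.findall(r"^\|", text, re.M)),
            "task_calls": text.count("Task({"),
        }

    src = stats(Path(src_path).read_text(encoding="utf-8"))
    phase = stats(Path(phase_path).read_text(encoding="utf-8"))
    # Line count may drift by +/-20% (longer is tolerated per 7.4);
    # the other counters are expected to match exactly.
    consistent = (
        abs(phase["lines"] - src["lines"]) <= 0.2 * src["lines"]
        and phase["code_fences"] == src["code_fences"]
        and phase["table_rows"] == src["table_rows"]
        and phase["task_calls"] == src["task_calls"]
    )
    return {"source": src, "phase": phase, "consistent": consistent}
```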
---
## 8. Quality Checklist
### 8.1 Structure checks
```markdown
□ SKILL.md exists and contains the required sections
□ The phases/ directory exists
□ Phase files sort by their numeric prefix
□ Every subcommand has a corresponding Phase file
```
### 8.2 Content completeness checks
```markdown
□ Phase file line count ≈ original command line count (±20%)
□ All Bash commands fully preserved
□ All Agent Prompts fully preserved
□ All tables fully preserved
□ All JSON Schemas fully preserved
□ Validation logic and error handling complete
```
### 8.3 Reference checks
```markdown
□ Phase reference paths in SKILL.md are correct
□ Phase files note their source command
□ Cross-Phase data references are explicit
```
### 8.4 Line-count comparison table
| Original command | Lines | Phase file | Lines | Delta |
|--------|------|-----------|------|------|
| session/start.md | 202 | 01-session-discovery.md | 281 | +39% ✓ |
| tools/context-gather.md | 404 | 02-context-gathering.md | 427 | +6% ✓ |
| tools/conflict-resolution.md | 604 | 03-conflict-resolution.md | 645 | +7% ✓ |
| tools/task-generate-agent.md | 693 | 04-task-generation.md | 701 | +1% ✓ |
> A Phase file may run somewhat longer than the original command (phase identifiers, Post-Phase Update sections, and so on are added), but it must not be noticeably shorter.
---
## 9. Example Comparison
### 9.1 workflow-plan conversion example
**Before** (command structure):
```
commands/workflow/
├── plan.md (163 lines) ─── main command, invokes the subcommands
├── session/
│ └── start.md (202 lines) ─── session management
└── tools/
 ├── context-gather.md (404 lines) ─── context gathering
 ├── conflict-resolution.md (604 lines) ─── conflict resolution
 └── task-generate-agent.md (693 lines) ─── task generation
```
**After** (Skill structure):
```
skills/workflow-plan/
├── SKILL.md (348 lines) ─── orchestration
└── phases/
 ├── 01-session-discovery.md (281 lines)
 ├── 02-context-gathering.md (427 lines)
 ├── 03-conflict-resolution.md (645 lines)
 └── 04-task-generation.md (701 lines)
```
### 9.2 SKILL.md vs. the original main command
| Original plan.md content | SKILL.md counterpart |
|----------------|-------------------|
| Frontmatter | Frontmatter (extended) |
| argument-hint | Usage (converted to the Skill invocation format) |
| Execution flow description | Execution Flow (visualized) |
| Subcommand invocations | Phase Reference Table |
| Data passing | Data Flow (explicitly defined) |
| (none) | Usage (new - Skill invocation format) |
| (none) | TodoWrite Pattern (new) |
| (none) | Error Handling (new) |
**Usage conversion example**
Original command `argument-hint`:
```yaml
argument-hint: "[-y|--yes] \"text description\"|file.md"
```
Converted to SKILL.md Usage:
```
Skill(skill="workflow-plan", args="<task description>")
Skill(skill="workflow-plan", args="[-y|--yes] \"<task description>\"")
# Flags
-y, --yes Skip all confirmations (auto mode)
# Examples
Skill(skill="workflow-plan", args="\"Implement authentication\"") # Interactive mode
Skill(skill="workflow-plan", args="-y \"Implement authentication\"") # Auto mode
```
### 9.3 Phase file vs. the original subcommand
| Original subcommand content | Phase file counterpart |
|-------------|---------------|
| Frontmatter | Removed (not needed in a Skill) |
| Step descriptions | Execution Steps |
| Code implementation | **Fully preserved (consistency requirement)** |
| Agent Prompt | **Fully preserved (consistency requirement)** |
| Output format | Output Format |
| (none) | Post-Phase Update (new) |
---
## Appendix: Quick Conversion Commands
```bash
# 1. Count the original command line totals
wc -l commands/{path}/*.md commands/{path}/**/*.md
# 2. Create the Skill directory
mkdir -p skills/{skill-name}/phases
# 3. Verify after the conversion
wc -l skills/{skill-name}/SKILL.md skills/{skill-name}/phases/*.md
# 4. Compare line-count deltas
# Phase file line count should ≈ original command line count (±20%)
```
---
## Revision History
| Version | Date | Changes |
|------|------|------|
| v1.0 | 2025-02-05 | Created from the workflow-plan conversion practice |
| v1.1 | 2025-02-05 | Strengthened content-consistency requirements; added Chapter 7 (consistency verification); added notes on command-specific content to remove |
| v2.0 | 2026-02-05 | Command-invocation references converted uniformly to file-path references; removed the `/workflow:XX` command syntax; reworked the reference conversion rules |
| v2.1 | 2026-02-05 | Added Usage format rules (Skill invocation format); updated §5.2 required sections; added a Usage conversion example to §9.2 |


@@ -1,259 +0,0 @@
---
name: ccw-loop
description: Stateless iterative development loop workflow with documented progress. Supports develop, debug, and validate phases with file-based state tracking. Triggers on "ccw-loop", "dev loop", "development loop", "开发循环", "迭代开发".
allowed-tools: Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*), TodoWrite(*)
---
# CCW Loop - Stateless Iterative Development Workflow
A stateless iterative development loop workflow with three phases (develop, debug, validate), each recording its progress in a dedicated file.
## Arguments
| Arg | Required | Description |
|-----|----------|-------------|
| task | No | Task description (for new loop, mutually exclusive with --loop-id) |
| --loop-id | No | Existing loop ID to continue (from API or previous session) |
| --auto | No | Auto-cycle mode (develop → debug → validate → complete) |
## Unified Architecture (API + Skill Integration)
```
┌─────────────────────────────────────────────────────────────────┐
│ Dashboard (UI) │
│ [Create] [Start] [Pause] [Resume] [Stop] [View Progress] │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ loop-v2-routes.ts (Control Plane) │
│ │
│ State: .loop/{loopId}.json (MASTER) │
│ Tasks: .loop/{loopId}.tasks.jsonl │
│ │
│ /start → Trigger ccw-loop skill with --loop-id │
│ /pause → Set status='paused' (skill checks before action) │
│ /stop → Set status='failed' (skill terminates) │
│ /resume → Set status='running' (skill continues) │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ ccw-loop Skill (Execution Plane) │
│ │
│ Reads/Writes: .loop/{loopId}.json (unified state) │
│ Writes: .loop/{loopId}.progress/* (progress files) │
│ │
│ BEFORE each action: │
│ → Check status: paused/stopped → exit gracefully │
│ → running → continue with action │
│ │
│ Actions: init → develop → debug → validate → complete │
└─────────────────────────────────────────────────────────────────┘
```
## Key Design Principles
1. **Unified state**: the API and the Skill share the `.loop/{loopId}.json` state file
2. **Control signals**: the Skill checks the status field (paused/stopped) before each Action
3. **File-driven**: all progress, understanding, and results are recorded under `.loop/{loopId}.progress/`
4. **Resumable**: a previous loop can be continued at any time (`--loop-id`)
5. **Dual triggering**: supports API triggering (`--loop-id`) and direct invocation (task description)
6. **Gemini-assisted**: uses CLI tools for deep analysis and hypothesis verification
## Execution Modes
### Mode 1: Interactive
The user picks each action manually; suited to complex tasks.
```
User → pick action → execute → review result → pick next action
```
### Mode 2: Auto-Loop
Runs automatically in a preset order; suited to standard development flows.
```
Develop → Debug → Validate → (if issues remain) → Develop → ...
```
## Session Structure (Unified Location)
```
.loop/
├── {loopId}.json # Master state file (shared by API + Skill)
├── {loopId}.tasks.jsonl # Task list (managed by the API)
└── {loopId}.progress/ # Skill progress files
 ├── develop.md # Development progress log
 ├── debug.md # Evolving-understanding document
 ├── validate.md # Validation report
 ├── changes.log # Code change log (NDJSON)
 └── debug.log # Debug log (NDJSON)
```
## Directory Setup
```javascript
// Where loopId comes from:
// 1. API-triggered: taken from the --loop-id argument
// 2. Direct invocation: generate a new loop-v2-{timestamp}-{random}
const loopId = args['--loop-id'] || generateLoopId()
const loopFile = `.loop/${loopId}.json`
const progressDir = `.loop/${loopId}.progress`
// Create the progress directory
Bash(`mkdir -p "${progressDir}"`)
```
## Action Catalog
| Action | Purpose | Output Files | CLI Integration |
|--------|---------|--------------|-----------------|
| [action-init](phases/actions/action-init.md) | Initialize the loop session | meta.json, state.json | - |
| [action-develop-with-file](phases/actions/action-develop-with-file.md) | Execute development tasks | progress.md, tasks.json | gemini --mode write |
| [action-debug-with-file](phases/actions/action-debug-with-file.md) | Hypothesis-driven debugging | understanding.md, hypotheses.json | gemini --mode analysis |
| [action-validate-with-file](phases/actions/action-validate-with-file.md) | Testing and validation | validation.md, test-results.json | gemini --mode analysis |
| [action-complete](phases/actions/action-complete.md) | Complete the loop | summary.md | - |
| [action-menu](phases/actions/action-menu.md) | Show the action menu | - | - |
## Usage
```bash
# Start a new loop (direct invocation)
/ccw-loop "Implement user authentication"
# Continue an existing loop (API-triggered or manual resume)
/ccw-loop --loop-id loop-v2-20260122-abc123
# Auto-loop mode
/ccw-loop --auto "Fix the login bug and add tests"
# API-triggered auto loop
/ccw-loop --loop-id loop-v2-20260122-abc123 --auto
```
## Execution Flow
```
┌─────────────────────────────────────────────────────────────────┐
│ /ccw-loop [<task> | --loop-id <id>] [--auto] │
├─────────────────────────────────────────────────────────────────┤
│ │
│ 1. Parameter Detection: │
│ ├─ IF --loop-id provided: │
│ │ ├─ Read .loop/{loopId}.json │
│ │ ├─ Validate status === 'running' │
│ │ └─ Continue from skill_state.current_action │
│ └─ ELSE (task description): │
│ ├─ Generate new loopId │
│ ├─ Create .loop/{loopId}.json │
│ └─ Initialize with action-init │
│ │
│ 2. Orchestrator Loop: │
│ ├─ Read state from .loop/{loopId}.json │
│ ├─ Check control signals: │
│ │ ├─ status === 'paused' → Exit (wait for resume) │
│ │ ├─ status === 'failed' → Exit with error │
│ │ └─ status === 'running' → Continue │
│ ├─ Show menu / auto-select next action │
│ ├─ Execute action │
│ ├─ Update .loop/{loopId}.progress/{action}.md │
│ ├─ Update .loop/{loopId}.json (skill_state) │
│ └─ Loop or exit based on user choice / completion │
│ │
│ 3. Action Execution: │
│ ├─ BEFORE: checkControlSignals() → exit if paused/stopped │
│ ├─ Develop: Plan → Implement → Document progress │
│ ├─ Debug: Hypothesize → Instrument → Analyze → Fix │
│ ├─ Validate: Test → Check → Report │
│ └─ AFTER: Update skill_state in .loop/{loopId}.json │
│ │
│ 4. Termination: │
│ ├─ Control signal: paused (graceful exit, wait resume) │
│ ├─ Control signal: stopped (failed state) │
│ ├─ User exits (interactive mode) │
│ ├─ All tasks completed (status → completed) │
│ └─ Max iterations reached │
│ │
└─────────────────────────────────────────────────────────────────┘
```
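The control-signal check in step 2 above can be sketched as follows. This is a Python rendering of logic the skill itself performs when orchestrating; the return-value labels are illustrative, and the state fields follow the Session Structure described earlier.

```python
import json
from pathlib import Path


def check_control_signals(loop_id, loop_dir=".loop"):
    """Decide what the orchestrator should do before the next action."""
    state_file = Path(loop_dir) / f"{loop_id}.json"
    if not state_file.exists():
        return "init"              # no state yet: initialize a new loop
    status = json.loads(state_file.read_text()).get("status")
    if status == "paused":
        return "exit-wait-resume"  # graceful exit; the API re-triggers on resume
    if status == "failed":
        return "exit-error"        # stop signal from the API
    return "continue"              # status == 'running'
```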
## Reference Documents
| Document | Purpose |
|----------|---------|
| [phases/orchestrator.md](phases/orchestrator.md) | Orchestrator: state reading + action selection |
| [phases/state-schema.md](phases/state-schema.md) | State structure definition |
| [specs/loop-requirements.md](specs/loop-requirements.md) | Loop requirements spec |
| [specs/action-catalog.md](specs/action-catalog.md) | Action catalog |
| [templates/progress-template.md](templates/progress-template.md) | Progress document template |
| [templates/understanding-template.md](templates/understanding-template.md) | Understanding document template |
## Integration with Loop Monitor (Dashboard)
This Skill and the CCW Dashboard's Loop Monitor form a **control plane + execution plane** split architecture:
### Control Plane (Dashboard/API → loop-v2-routes.ts)
1. **Create loop**: `POST /api/loops/v2` → creates `.loop/{loopId}.json`
2. **Start execution**: `POST /api/loops/v2/:loopId/start` → triggers `/ccw-loop --loop-id {loopId} --auto`
3. **Pause execution**: `POST /api/loops/v2/:loopId/pause` → sets `status='paused'` (the Skill exits at its next check)
4. **Resume execution**: `POST /api/loops/v2/:loopId/resume` → sets `status='running'` → re-triggers the Skill
5. **Stop execution**: `POST /api/loops/v2/:loopId/stop` → sets `status='failed'`
### Execution Plane (ccw-loop Skill)
1. **Read state**: read the API-set status from `.loop/{loopId}.json`
2. **Check control**: check the `status` field before each Action
3. **Execute actions**: develop → debug → validate → complete
4. **Update progress**: write `.loop/{loopId}.progress/*.md` and update `skill_state`
5. **State sync**: the Dashboard reads `.loop/{loopId}.json` to get progress
## CLI Integration Points
### Develop Phase
```bash
ccw cli -p "PURPOSE: Implement {task}...
TASK: • Analyze requirements • Write code • Update progress
MODE: write
CONTEXT: @progress.md @tasks.json
EXPECTED: Implementation + updated progress.md
" --tool gemini --mode write --rule development-implement-feature
```
### Debug Phase
```bash
ccw cli -p "PURPOSE: Generate debugging hypotheses...
TASK: • Analyze error • Generate hypotheses • Add instrumentation
MODE: analysis
CONTEXT: @understanding.md @debug.log
EXPECTED: Hypotheses + instrumentation plan
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```
### Validate Phase
```bash
ccw cli -p "PURPOSE: Validate implementation...
TASK: • Run tests • Check coverage • Verify requirements
MODE: analysis
CONTEXT: @validation.md @test-results.json
EXPECTED: Validation report
" --tool gemini --mode analysis --rule analysis-review-code-quality
```
## Error Handling
| Situation | Action |
|-----------|--------|
| Session not found | Create new session |
| State file corrupted | Rebuild from file contents |
| CLI tool fails | Fallback to manual analysis |
| Tests fail | Loop back to develop/debug |
| >10 iterations | Warn user, suggest break |
## Post-Completion Expansion
After completion, ask the user whether to expand findings into issues (test/enhance/refactor/doc); for each selected dimension, invoke `/issue:new "{summary} - {dimension}"`.

View File

@@ -1,320 +0,0 @@
# Action: Complete
Complete the CCW Loop session and generate a summary report.
## Purpose
- Generate the completion report
- Aggregate results from all phases
- Provide follow-up recommendations
- Ask whether to expand findings into Issues
## Preconditions
- [ ] state.initialized === true
- [ ] state.status === 'running'
## Execution
### Step 1: Aggregate Statistics
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const sessionFolder = `.workflow/.loop/${state.session_id}`
const stats = {
// 时间统计
duration: Date.now() - new Date(state.created_at).getTime(),
iterations: state.iteration_count,
// 开发统计
develop: {
total_tasks: state.develop.total_count,
completed_tasks: state.develop.completed_count,
completion_rate: state.develop.total_count > 0
? (state.develop.completed_count / state.develop.total_count * 100).toFixed(1)
: 0
},
// 调试统计
debug: {
iterations: state.debug.iteration,
hypotheses_tested: state.debug.hypotheses.length,
root_cause_found: state.debug.confirmed_hypothesis !== null
},
// 验证统计
validate: {
runs: state.validate.test_results.length,
passed: state.validate.passed,
coverage: state.validate.coverage,
failed_tests: state.validate.failed_tests.length
}
}
console.log('\n生成完成报告...')
```
### Step 2: Generate the Summary Report
```javascript
const summaryReport = `# CCW Loop Session Summary
**Session ID**: ${state.session_id}
**Task**: ${state.task_description}
**Started**: ${state.created_at}
**Completed**: ${getUtc8ISOString()}
**Duration**: ${formatDuration(stats.duration)}
---
## Executive Summary
${state.validate.passed
? '✅ **任务成功完成** - 所有测试通过,验证成功'
: state.develop.completed_count === state.develop.total_count
? '⚠️ **开发完成,验证未通过** - 需要进一步调试'
: '⏸️ **任务部分完成** - 仍有待处理项'}
---
## Development Phase
| Metric | Value |
|--------|-------|
| Total Tasks | ${stats.develop.total_tasks} |
| Completed | ${stats.develop.completed_tasks} |
| Completion Rate | ${stats.develop.completion_rate}% |
### Completed Tasks
${state.develop.tasks.filter(t => t.status === 'completed').map(t => `
- ✅ ${t.description}
- Files: ${t.files_changed?.join(', ') || 'N/A'}
- Completed: ${t.completed_at}
`).join('\n')}
### Pending Tasks
${state.develop.tasks.filter(t => t.status !== 'completed').map(t => `
- ⏳ ${t.description}
`).join('\n') || '_None_'}
---
## Debug Phase
| Metric | Value |
|--------|-------|
| Iterations | ${stats.debug.iterations} |
| Hypotheses Tested | ${stats.debug.hypotheses_tested} |
| Root Cause Found | ${stats.debug.root_cause_found ? 'Yes' : 'No'} |
${stats.debug.root_cause_found ? `
### Confirmed Root Cause
**${state.debug.confirmed_hypothesis}**: ${state.debug.hypotheses.find(h => h.id === state.debug.confirmed_hypothesis)?.description || 'N/A'}
` : ''}
### Hypothesis Summary
${state.debug.hypotheses.map(h => `
- **${h.id}**: ${h.status.toUpperCase()}
- ${h.description}
`).join('\n') || '_No hypotheses tested_'}
---
## Validation Phase
| Metric | Value |
|--------|-------|
| Test Runs | ${stats.validate.runs} |
| Status | ${stats.validate.passed ? 'PASSED' : 'FAILED'} |
| Coverage | ${stats.validate.coverage || 'N/A'}% |
| Failed Tests | ${stats.validate.failed_tests} |
${stats.validate.failed_tests > 0 ? `
### Failed Tests
${state.validate.failed_tests.map(t => `- ❌ ${t}`).join('\n')}
` : ''}
---
## Files Modified
${listModifiedFiles(sessionFolder)}
---
## Key Learnings
${state.debug.iteration > 0 ? `
### From Debugging
${extractLearnings(state.debug.hypotheses)}
` : ''}
---
## Recommendations
${generateRecommendations(stats, state)}
---
## Session Artifacts
| File | Description |
|------|-------------|
| \`develop/progress.md\` | Development progress timeline |
| \`develop/tasks.json\` | Task list with status |
| \`debug/understanding.md\` | Debug exploration and learnings |
| \`debug/hypotheses.json\` | Hypothesis history |
| \`validate/validation.md\` | Validation report |
| \`validate/test-results.json\` | Test execution results |
---
*Generated by CCW Loop at ${getUtc8ISOString()}*
`
Write(`${sessionFolder}/summary.md`, summaryReport)
console.log(`\n报告已保存: ${sessionFolder}/summary.md`)
```
### Step 3: Ask About Follow-up Expansion
```javascript
console.log('\n' + '═'.repeat(60))
console.log(' 任务已完成')
console.log('═'.repeat(60))
const expansionResponse = await AskUserQuestion({
questions: [{
question: "是否将发现扩展为 Issue",
header: "扩展选项",
multiSelect: true,
options: [
{ label: "测试 (Test)", description: "添加更多测试用例" },
{ label: "增强 (Enhance)", description: "功能增强建议" },
{ label: "重构 (Refactor)", description: "代码重构建议" },
{ label: "文档 (Doc)", description: "文档更新需求" },
{ label: "否,直接完成", description: "不创建 Issue" }
]
}]
})
const selectedExpansions = expansionResponse["扩展选项"]
if (selectedExpansions && !selectedExpansions.includes("否,直接完成")) {
for (const expansion of selectedExpansions) {
// Extract the English keyword from the label, e.g. "测试 (Test)" → "test"
const dimension = (expansion.match(/\(([A-Za-z]+)\)/)?.[1] || expansion).toLowerCase()
const issueSummary = `${state.task_description} - ${dimension}`
console.log(`\n创建 Issue: ${issueSummary}`)
// 调用 /issue:new 创建 issue
await Bash({
command: `/issue:new "${issueSummary}"`,
run_in_background: false
})
}
}
```
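One subtle point in Step 3 is deriving the dimension from a bilingual label: splitting on the first space would return the Chinese word, not the English keyword. A sketch of the extraction, assuming labels follow the "测试 (Test)" pattern used above:

```javascript
// Extract the lowercase English dimension keyword from labels such as
// "测试 (Test)". Returns null when no parenthesized keyword exists
// (e.g. the "否,直接完成" option).
function extractDimension(label) {
  const m = label.match(/\(([A-Za-z]+)\)/)
  return m ? m[1].toLowerCase() : null
}
```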
### Step 4: Final Output
```javascript
console.log(`
═══════════════════════════════════════════════════════════
✅ CCW Loop 会话完成
═══════════════════════════════════════════════════════════
会话 ID: ${state.session_id}
用时: ${formatDuration(stats.duration)}
迭代: ${stats.iterations}
开发: ${stats.develop.completed_tasks}/${stats.develop.total_tasks} 任务完成
调试: ${stats.debug.iterations} 次迭代
验证: ${stats.validate.passed ? '通过 ✅' : '未通过 ❌'}
报告: ${sessionFolder}/summary.md
═══════════════════════════════════════════════════════════
`)
```
## State Updates
```javascript
return {
stateUpdates: {
status: 'completed',
completed_at: getUtc8ISOString(),
summary: stats
},
continue: false,
message: `会话 ${state.session_id} 已完成`
}
```
## Helper Functions
```javascript
function formatDuration(ms) {
const seconds = Math.floor(ms / 1000)
const minutes = Math.floor(seconds / 60)
const hours = Math.floor(minutes / 60)
if (hours > 0) {
return `${hours}h ${minutes % 60}m`
} else if (minutes > 0) {
return `${minutes}m ${seconds % 60}s`
} else {
return `${seconds}s`
}
}
function generateRecommendations(stats, state) {
const recommendations = []
if (stats.develop.completion_rate < 100) {
recommendations.push('- 完成剩余开发任务')
}
if (!stats.validate.passed) {
recommendations.push('- 修复失败的测试')
}
if (stats.validate.coverage && stats.validate.coverage < 80) {
recommendations.push(`- 提高测试覆盖率 (当前: ${stats.validate.coverage}%)`)
}
if (stats.debug.iterations > 3 && !stats.debug.root_cause_found) {
recommendations.push('- 考虑代码重构以简化调试')
}
if (recommendations.length === 0) {
recommendations.push('- 考虑代码审查')
recommendations.push('- 更新相关文档')
recommendations.push('- 准备部署')
}
return recommendations.join('\n')
}
```
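The summary template also calls `listModifiedFiles` and `extractLearnings`, which are not defined in this file. A minimal sketch under two assumptions: modified files can be aggregated from the tasks' `files_changed` arrays (so the sketch takes the task list rather than the `sessionFolder` path used in the template), and learnings come from confirmed or rejected hypotheses:

```javascript
// Sketch only: aggregate unique changed files from completed task records.
function listModifiedFiles(tasks) {
  const files = new Set()
  for (const t of tasks) {
    for (const f of t.files_changed || []) files.add(f)
  }
  return [...files].map(f => `- \`${f}\``).join('\n') || '_None_'
}

// Sketch only: turn hypothesis verdicts into bullet-point learnings.
function extractLearnings(hypotheses) {
  return hypotheses
    .filter(h => h.status === 'confirmed' || h.status === 'rejected')
    .map(h => `- ${h.id} was ${h.status}: ${h.description}`)
    .join('\n') || '_No learnings recorded_'
}
```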
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Report generation fails | Show basic statistics, skip the file write |
| Issue creation fails | Log the error, finish completion anyway |
## Next Actions
- None (terminal state)
- To continue: reopen the session with `ccw-loop --resume {session-id}`

View File

@@ -1,485 +0,0 @@
# Action: Debug With File
Hypothesis-driven debugging that records the evolution of understanding in understanding.md, with Gemini-assisted analysis and hypothesis generation.
## Purpose
Run the hypothesis-driven debugging workflow:
- Locate the error source
- Generate testable hypotheses
- Add NDJSON logging
- Analyze log evidence
- Correct mistaken understanding
- Apply fixes
## Preconditions
- [ ] state.initialized === true
- [ ] state.status === 'running'
## Session Setup
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const sessionFolder = `.workflow/.loop/${state.session_id}`
const debugFolder = `${sessionFolder}/debug`
const understandingPath = `${debugFolder}/understanding.md`
const hypothesesPath = `${debugFolder}/hypotheses.json`
const debugLogPath = `${debugFolder}/debug.log`
```
---
## Mode Detection
```javascript
// 自动检测模式
const understandingExists = fs.existsSync(understandingPath)
const logHasContent = fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0
const debugMode = logHasContent ? 'analyze' : (understandingExists ? 'continue' : 'explore')
console.log(`Debug mode: ${debugMode}`)
```
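The detection rule can be isolated as a pure function for testing (a sketch; the real check reads the filesystem as above):

```javascript
// Pure version of the mode-detection rule: log evidence wins over
// an existing understanding document.
function detectDebugMode(understandingExists, logHasContent) {
  if (logHasContent) return 'analyze'
  return understandingExists ? 'continue' : 'explore'
}
```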
---
## Explore Mode (First Debug Pass)
### Step 1.1: Locate the Error Source
```javascript
if (debugMode === 'explore') {
// 询问用户 bug 描述
const bugInput = await AskUserQuestion({
questions: [{
question: "请描述遇到的 bug 或错误信息:",
header: "Bug 描述",
multiSelect: false,
options: [
{ label: "手动输入", description: "输入错误描述或堆栈" },
{ label: "从测试失败", description: "从验证阶段的失败测试中获取" }
]
}]
})
const bugDescription = bugInput["Bug 描述"]
// 提取关键词并搜索
const searchResults = await Task({
subagent_type: 'Explore',
run_in_background: false,
prompt: `Search codebase for error patterns related to: ${bugDescription}`
})
// 分析搜索结果,识别受影响的位置
const affectedLocations = analyzeSearchResults(searchResults)
}
```
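`analyzeSearchResults` is referenced above but not defined. A minimal sketch, assuming each search result carries the `keyword`, `files`, and `insights` fields used in the understanding template below:

```javascript
// Sketch only: flatten Explore-agent search results into
// { file, reason } locations, deduplicated by file path.
function analyzeSearchResults(searchResults) {
  const seen = new Map()
  for (const r of searchResults) {
    for (const file of r.files || []) {
      if (!seen.has(file)) seen.set(file, { file, reason: r.insights || r.keyword })
    }
  }
  return [...seen.values()]
}
```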
### Step 1.2: Record the Initial Understanding
```javascript
// 创建 understanding.md
const initialUnderstanding = `# Understanding Document
**Session ID**: ${state.session_id}
**Bug Description**: ${bugDescription}
**Started**: ${getUtc8ISOString()}
---
## Exploration Timeline
### Iteration 1 - Initial Exploration (${getUtc8ISOString()})
#### Current Understanding
Based on bug description and initial code search:
- Error pattern: ${errorPattern}
- Affected areas: ${affectedLocations.map(l => l.file).join(', ')}
- Initial hypothesis: ${initialThoughts}
#### Evidence from Code Search
${searchResults.map(r => `
**Keyword: "${r.keyword}"**
- Found in: ${r.files.join(', ')}
- Key findings: ${r.insights}
`).join('\n')}
#### Next Steps
- Generate testable hypotheses
- Add instrumentation
- Await reproduction
---
## Current Consolidated Understanding
${initialConsolidatedUnderstanding}
`
Write(understandingPath, initialUnderstanding)
```
### Step 1.3: Gemini-Assisted Hypothesis Generation
```bash
ccw cli -p "
PURPOSE: Generate debugging hypotheses for: ${bugDescription}
Success criteria: Testable hypotheses with clear evidence criteria
TASK:
• Analyze error pattern and code search results
• Identify 3-5 most likely root causes
• For each hypothesis, specify:
- What might be wrong
- What evidence would confirm/reject it
- Where to add instrumentation
• Rank by likelihood
MODE: analysis
CONTEXT: @${understandingPath} | Search results in understanding.md
EXPECTED:
- Structured hypothesis list (JSON format)
- Each hypothesis with: id, description, testable_condition, logging_point, evidence_criteria
- Likelihood ranking (1=most likely)
CONSTRAINTS: Focus on testable conditions
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```
### Step 1.4: Save Hypotheses
```javascript
const hypotheses = {
iteration: 1,
timestamp: getUtc8ISOString(),
bug_description: bugDescription,
hypotheses: [
{
id: "H1",
description: "...",
testable_condition: "...",
logging_point: "file.ts:func:42",
evidence_criteria: {
confirm: "...",
reject: "..."
},
likelihood: 1,
status: "pending"
}
// ...
],
gemini_insights: "...",
corrected_assumptions: []
}
Write(hypothesesPath, JSON.stringify(hypotheses, null, 2))
```
### Step 1.5: Add NDJSON Logging
```javascript
// 为每个假设添加日志点
for (const hypothesis of hypotheses.hypotheses) {
const [file, func, line] = hypothesis.logging_point.split(':')
const logStatement = `console.log(JSON.stringify({
hid: "${hypothesis.id}",
ts: Date.now(),
func: "${func}",
data: { /* 相关数据 */ }
}))`
// 使用 Edit 工具添加日志
// ...
}
```
---
## Analyze Mode (Once Logs Exist)
### Step 2.1: Parse the Debug Log
```javascript
if (debugMode === 'analyze') {
// 读取 NDJSON 日志
const logContent = Read(debugLogPath)
const entries = logContent.split('\n')
.filter(l => l.trim())
.map(l => JSON.parse(l))
// 按假设分组
const byHypothesis = groupBy(entries, 'hid')
}
```
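`groupBy` is assumed above but not defined; a minimal sketch:

```javascript
// Group NDJSON log entries by a key field (e.g. 'hid'); entries
// missing the key fall into an 'unknown' bucket.
function groupBy(entries, key) {
  const groups = {}
  for (const e of entries) {
    const k = e[key] ?? 'unknown'
    if (!groups[k]) groups[k] = []
    groups[k].push(e)
  }
  return groups
}
```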
### Step 2.2: Gemini-Assisted Evidence Analysis
```bash
ccw cli -p "
PURPOSE: Analyze debug log evidence to validate/correct hypotheses for: ${bugDescription}
Success criteria: Clear verdict per hypothesis + corrected understanding
TASK:
• Parse log entries by hypothesis
• Evaluate evidence against expected criteria
• Determine verdict: confirmed | rejected | inconclusive
• Identify incorrect assumptions from previous understanding
• Suggest corrections to understanding
MODE: analysis
CONTEXT:
@${debugLogPath}
@${understandingPath}
@${hypothesesPath}
EXPECTED:
- Per-hypothesis verdict with reasoning
- Evidence summary
- List of incorrect assumptions with corrections
- Updated consolidated understanding
- Root cause if confirmed, or next investigation steps
CONSTRAINTS: Evidence-based reasoning only, no speculation
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```
### Step 2.3: Update the Understanding Document
```javascript
// 追加新迭代到 understanding.md
const iteration = state.debug.iteration + 1
const analysisEntry = `
### Iteration ${iteration} - Evidence Analysis (${getUtc8ISOString()})
#### Log Analysis Results
${results.map(r => `
**${r.id}**: ${r.verdict.toUpperCase()}
- Evidence: ${JSON.stringify(r.evidence)}
- Reasoning: ${r.reason}
`).join('\n')}
#### Corrected Understanding
Previous misunderstandings identified and corrected:
${corrections.map(c => `
- ~~${c.wrong}~~ → ${c.corrected}
- Why wrong: ${c.reason}
- Evidence: ${c.evidence}
`).join('\n')}
#### New Insights
${newInsights.join('\n- ')}
#### Gemini Analysis
${geminiAnalysis}
${confirmedHypothesis ? `
#### Root Cause Identified
**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}
Evidence supporting this conclusion:
${confirmedHypothesis.supportingEvidence}
` : `
#### Next Steps
${nextSteps}
`}
---
## Current Consolidated Understanding (Updated)
### What We Know
- ${validUnderstanding1}
- ${validUnderstanding2}
### What Was Disproven
- ~~${wrongAssumption}~~ (Evidence: ${disproofEvidence})
### Current Investigation Focus
${currentFocus}
### Remaining Questions
- ${openQuestion1}
- ${openQuestion2}
`
const existingContent = Read(understandingPath)
Write(understandingPath, existingContent + analysisEntry)
```
### Step 2.4: Update Hypothesis Status
```javascript
const hypothesesData = JSON.parse(Read(hypothesesPath))
// 更新假设状态
hypothesesData.hypotheses = hypothesesData.hypotheses.map(h => ({
...h,
status: results.find(r => r.id === h.id)?.verdict || h.status,
evidence: results.find(r => r.id === h.id)?.evidence || h.evidence,
verdict_reason: results.find(r => r.id === h.id)?.reason || h.verdict_reason
}))
hypothesesData.iteration++
hypothesesData.timestamp = getUtc8ISOString()
Write(hypothesesPath, JSON.stringify(hypothesesData, null, 2))
```
---
## Fix & Verification
### Step 3.1: Apply the Fix
```javascript
if (confirmedHypothesis) {
console.log(`\n根因确认: ${confirmedHypothesis.description}`)
console.log('准备应用修复...')
// 使用 Gemini 生成修复代码
const fixPrompt = `
PURPOSE: Fix the identified root cause
Root Cause: ${confirmedHypothesis.description}
Evidence: ${confirmedHypothesis.supportingEvidence}
TASK:
• Generate fix code
• Ensure backward compatibility
• Add tests if needed
MODE: write
CONTEXT: @${confirmedHypothesis.logging_point.split(':')[0]}
EXPECTED: Fixed code + verification steps
`
await Bash({
command: `ccw cli -p "${fixPrompt}" --tool gemini --mode write --rule development-debug-runtime-issues`,
run_in_background: false
})
}
```
### Step 3.2: Record the Resolution
```javascript
const resolutionEntry = `
### Resolution (${getUtc8ISOString()})
#### Fix Applied
- Modified files: ${modifiedFiles.join(', ')}
- Fix description: ${fixDescription}
- Root cause addressed: ${rootCause}
#### Verification Results
${verificationResults}
#### Lessons Learned
1. ${lesson1}
2. ${lesson2}
#### Key Insights for Future
- ${insight1}
- ${insight2}
`
const existingContent = Read(understandingPath)
Write(understandingPath, existingContent + resolutionEntry)
```
### Step 3.3: Clean Up Logging
```javascript
// Remove the debug logging statements
// (optional, based on the user's choice)
```
---
## State Updates
```javascript
return {
stateUpdates: {
debug: {
current_bug: bugDescription,
hypotheses: hypothesesData.hypotheses,
confirmed_hypothesis: confirmedHypothesis?.id || null,
iteration: hypothesesData.iteration,
last_analysis_at: getUtc8ISOString(),
understanding_updated: true
},
last_action: 'action-debug-with-file'
},
continue: true,
message: confirmedHypothesis
? `根因确认: ${confirmedHypothesis.description}\n修复已应用,请验证`
: `分析完成,需要更多证据\n请复现 bug 后再次执行`
}
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Empty debug.log | Prompt the user to reproduce the bug |
| All hypotheses rejected | Generate new hypotheses with Gemini |
| Fix ineffective | Record the failed attempt and iterate |
| >5 iterations | Suggest escalating to /workflow:lite-fix |
| Gemini unavailable | Fall back to manual analysis |
## Understanding Document Template
See [templates/understanding-template.md](../../templates/understanding-template.md)
## CLI Integration
### Hypothesis Generation
```bash
ccw cli -p "PURPOSE: Generate debugging hypotheses..." --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```
### Evidence Analysis
```bash
ccw cli -p "PURPOSE: Analyze debug log evidence..." --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```
### Fix Generation
```bash
ccw cli -p "PURPOSE: Fix the identified root cause..." --tool gemini --mode write --rule development-debug-runtime-issues
```
## Next Actions (Hints)
- Root cause confirmed: `action-validate-with-file` (verify the fix)
- More evidence needed: wait for the user to reproduce, then run this action again
- All hypotheses rejected: run this action again to generate new hypotheses
- User choice: `action-menu` (return to the menu)

View File

@@ -1,365 +0,0 @@
# Action: Develop With File
Incremental development task execution that records progress in progress.md, with Gemini-assisted implementation.
## Purpose
Execute development tasks and record progress:
- Analyze task requirements
- Implement code via Gemini/CLI
- Record code changes
- Update the progress document
## Preconditions
- [ ] state.status === 'running'
- [ ] state.skill_state !== null
- [ ] state.skill_state.develop.tasks.some(t => t.status === 'pending')
## Session Setup (Unified Location)
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// 统一位置: .loop/{loopId}
const loopId = state.loop_id
const loopFile = `.loop/${loopId}.json`
const progressDir = `.loop/${loopId}.progress`
const progressPath = `${progressDir}/develop.md`
const changesLogPath = `${progressDir}/changes.log`
```
---
## Execution
### Step 0: Check Control Signals (CRITICAL)
```javascript
/**
 * CRITICAL: every Action must check control signals before it starts.
 * If the API has set paused/stopped, the Skill must exit immediately.
 */
function checkControlSignals(loopId) {
const state = JSON.parse(Read(`.loop/${loopId}.json`))
switch (state.status) {
case 'paused':
console.log('⏸️ Loop paused by API. Exiting action.')
return { continue: false, reason: 'paused' }
case 'failed':
console.log('⏹️ Loop stopped by API. Exiting action.')
return { continue: false, reason: 'stopped' }
case 'running':
return { continue: true, reason: 'running' }
default:
return { continue: false, reason: 'unknown_status' }
}
}
// Execute check
const control = checkControlSignals(loopId)
if (!control.continue) {
return {
skillStateUpdates: { current_action: null },
continue: false,
message: `Action terminated: ${control.reason}`
}
}
```
### Step 1: Load the Task List
```javascript
// 读取任务列表 (从 skill_state)
let tasks = state.skill_state?.develop?.tasks || []
// 如果任务列表为空,询问用户创建
if (tasks.length === 0) {
// 使用 Gemini 分析任务描述,生成任务列表
const analysisPrompt = `
PURPOSE: 分析开发任务并分解为可执行步骤
Success: 生成 3-7 个具体、可验证的子任务
TASK:
• 分析任务描述: ${state.task_description}
• 识别关键功能点
• 分解为独立子任务
• 为每个子任务指定工具和模式
MODE: analysis
CONTEXT: @package.json @src/**/*.ts | Memory: 项目结构
EXPECTED:
JSON 格式:
{
"tasks": [
{
"id": "task-001",
"description": "任务描述",
"tool": "gemini",
"mode": "write",
"files": ["src/xxx.ts"]
}
]
}
`
const result = await Task({
subagent_type: 'cli-execution-agent',
run_in_background: false,
prompt: `Execute Gemini CLI with prompt: ${analysisPrompt}`
})
tasks = JSON.parse(result).tasks
}
// 找到第一个待处理任务
const currentTask = tasks.find(t => t.status === 'pending')
if (!currentTask) {
return {
skillStateUpdates: {
develop: { ...state.skill_state.develop, current_task: null }
},
continue: true,
message: '所有开发任务已完成'
}
}
```
### Step 2: Execute the Development Task
```javascript
console.log(`\n执行任务: ${currentTask.description}`)
// 更新任务状态
currentTask.status = 'in_progress'
// 使用 Gemini 实现
const implementPrompt = `
PURPOSE: 实现开发任务
Task: ${currentTask.description}
Success criteria: 代码实现完成,测试通过
TASK:
• 分析现有代码结构
• 实现功能代码
• 添加必要的类型定义
• 确保代码风格一致
MODE: write
CONTEXT: @${currentTask.files?.join(' @') || 'src/**/*.ts'}
EXPECTED:
- 完整的代码实现
- 代码变更列表
- 简要实现说明
CONSTRAINTS: 遵循现有代码风格 | 不破坏现有功能
`
const implementResult = await Bash({
command: `ccw cli -p "${implementPrompt}" --tool gemini --mode write --rule development-implement-feature`,
run_in_background: false
})
// 记录代码变更
const timestamp = getUtc8ISOString()
const changeEntry = {
timestamp,
task_id: currentTask.id,
description: currentTask.description,
files_changed: currentTask.files || [],
result: 'success'
}
// 追加到 changes.log (NDJSON 格式)
const changesContent = Read(changesLogPath) || ''
Write(changesLogPath, changesContent + JSON.stringify(changeEntry) + '\n')
```
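Because changes.log is NDJSON, later steps can recover per-task history with a simple parse. A sketch (the `summarizeChanges` helper is illustrative, not part of the action):

```javascript
// Parse NDJSON changes.log content back into entries and count
// successful changes per task id.
function summarizeChanges(ndjson) {
  const entries = ndjson.split('\n').filter(l => l.trim()).map(l => JSON.parse(l))
  const perTask = {}
  for (const e of entries) {
    if (e.result === 'success') perTask[e.task_id] = (perTask[e.task_id] || 0) + 1
  }
  return { total: entries.length, perTask }
}
```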
### Step 3: Update the Progress Document
```javascript
const timestamp = getUtc8ISOString()
const iteration = state.develop.completed_count + 1
// 读取现有进度文档
let progressContent = Read(progressPath) || ''
// 如果是新文档,添加头部
if (!progressContent) {
progressContent = `# Development Progress
**Session ID**: ${state.session_id}
**Task**: ${state.task_description}
**Started**: ${timestamp}
---
## Progress Timeline
`
}
// 追加本次进度
const progressEntry = `
### Iteration ${iteration} - ${currentTask.description} (${timestamp})
#### Task Details
- **ID**: ${currentTask.id}
- **Tool**: ${currentTask.tool}
- **Mode**: ${currentTask.mode}
#### Implementation Summary
${implementResult.summary || '实现完成'}
#### Files Changed
${currentTask.files?.map(f => `- \`${f}\``).join('\n') || '- No files specified'}
#### Status: COMPLETED
---
`
Write(progressPath, progressContent + progressEntry)
// 更新任务状态
currentTask.status = 'completed'
currentTask.completed_at = timestamp
```
### Step 4: Update the Task List File
```javascript
// Update tasks.json. Note: tasksPath is not declared in Session Setup
// above; define it next to the other progress files (assumed location).
const tasksPath = `${progressDir}/tasks.json`
const updatedTasks = tasks.map(t =>
t.id === currentTask.id ? currentTask : t
)
Write(tasksPath, JSON.stringify(updatedTasks, null, 2))
```
## State Updates
```javascript
return {
stateUpdates: {
develop: {
tasks: updatedTasks,
current_task_id: null,
completed_count: state.develop.completed_count + 1,
total_count: updatedTasks.length,
last_progress_at: getUtc8ISOString()
},
last_action: 'action-develop-with-file'
},
continue: true,
message: `任务完成: ${currentTask.description}\n进度: ${state.develop.completed_count + 1}/${updatedTasks.length}`
}
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Gemini CLI fails | Ask the user to implement manually; record it in progress.md |
| File write fails | Retry once; on failure, log the error |
| Task parsing fails | Ask the user to enter tasks manually |
## Progress Document Template
```markdown
# Development Progress
**Session ID**: LOOP-xxx-2026-01-22
**Task**: 实现用户认证功能
**Started**: 2026-01-22T10:00:00+08:00
---
## Progress Timeline
### Iteration 1 - 分析登录组件 (2026-01-22T10:05:00+08:00)
#### Task Details
- **ID**: task-001
- **Tool**: gemini
- **Mode**: analysis
#### Implementation Summary
分析了现有登录组件结构,识别了需要修改的文件和依赖关系。
#### Files Changed
- `src/components/Login.tsx`
- `src/hooks/useAuth.ts`
#### Status: COMPLETED
---
### Iteration 2 - 实现登录 API (2026-01-22T10:15:00+08:00)
...
---
## Current Statistics
| Metric | Value |
|--------|-------|
| Total Tasks | 5 |
| Completed | 2 |
| In Progress | 1 |
| Pending | 2 |
| Progress | 40% |
---
## Next Steps
- [ ] 完成剩余任务
- [ ] 运行测试
- [ ] 代码审查
```
## CLI Integration
### Task Analysis
```bash
ccw cli -p "PURPOSE: 分解开发任务为子任务
TASK: • 分析任务描述 • 识别功能点 • 生成任务列表
MODE: analysis
CONTEXT: @package.json @src/**/*
EXPECTED: JSON 任务列表
" --tool gemini --mode analysis --rule planning-breakdown-task-steps
```
### Code Implementation
```bash
ccw cli -p "PURPOSE: 实现功能代码
TASK: • 分析需求 • 编写代码 • 添加类型
MODE: write
CONTEXT: @src/xxx.ts
EXPECTED: 完整实现
" --tool gemini --mode write --rule development-implement-feature
```
## Next Actions (Hints)
- All tasks completed: `action-debug-with-file` (start debugging)
- Task failed: `action-develop-with-file` (retry or move to the next task)
- User choice: `action-menu` (return to the menu)

View File

@@ -1,200 +0,0 @@
# Action: Initialize
Initialize the CCW Loop session: create the directory structure and initial state.
## Purpose
- Create the session directory structure
- Initialize the state file
- Analyze the task description to generate an initial task list
- Prepare the execution environment
## Preconditions
- [ ] state.status === 'pending'
- [ ] state.initialized === false
## Execution
### Step 1: Create the Directory Structure
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const taskSlug = state.task_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `LOOP-${taskSlug}-${dateStr}`
const sessionFolder = `.workflow/.loop/${sessionId}`
Bash(`mkdir -p "${sessionFolder}/develop"`)
Bash(`mkdir -p "${sessionFolder}/debug"`)
Bash(`mkdir -p "${sessionFolder}/validate"`)
console.log(`Session created: ${sessionId}`)
console.log(`Location: ${sessionFolder}`)
```
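The session-id construction can be checked in isolation (a sketch mirroring Step 1, with the date fixed for determinism):

```javascript
// Mirror of the session-id construction above.
function buildSessionId(taskDescription, dateStr) {
  const slug = taskDescription.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
  return `LOOP-${slug}-${dateStr}`
}
```

Note that the original pattern can leave leading or trailing dashes when the description is mostly non-ASCII (e.g. a Chinese task name), since every non-alphanumeric run becomes a `-`.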
### Step 2: Create the Metadata File
```javascript
const meta = {
session_id: sessionId,
task_description: state.task_description,
created_at: getUtc8ISOString(),
mode: state.mode || 'interactive'
}
Write(`${sessionFolder}/meta.json`, JSON.stringify(meta, null, 2))
```
### Step 3: Analyze the Task and Generate the Development Task List
```javascript
// 使用 Gemini 分析任务描述
console.log('\n分析任务描述...')
const analysisPrompt = `
PURPOSE: 分析开发任务并分解为可执行步骤
Success: 生成 3-7 个具体、可验证的子任务
TASK:
• 分析任务描述: ${state.task_description}
• 识别关键功能点
• 分解为独立子任务
• 为每个子任务指定工具和模式
MODE: analysis
CONTEXT: @package.json @src/**/*.ts (如存在)
EXPECTED:
JSON 格式:
{
"tasks": [
{
"id": "task-001",
"description": "任务描述",
"tool": "gemini",
"mode": "write",
"priority": 1
}
],
"estimated_complexity": "low|medium|high",
"key_files": ["file1.ts", "file2.ts"]
}
CONSTRAINTS: 生成实际可执行的任务
`
const result = await Bash({
command: `ccw cli -p "${analysisPrompt}" --tool gemini --mode analysis --rule planning-breakdown-task-steps`,
run_in_background: false
})
const analysis = JSON.parse(result.stdout)
const tasks = analysis.tasks.map((t, i) => ({
...t,
id: t.id || `task-${String(i + 1).padStart(3, '0')}`,
status: 'pending',
created_at: getUtc8ISOString(),
completed_at: null,
files_changed: []
}))
// 保存任务列表
Write(`${sessionFolder}/develop/tasks.json`, JSON.stringify(tasks, null, 2))
```
### Step 4: Initialize the Progress Document
```javascript
const progressInitial = `# Development Progress
**Session ID**: ${sessionId}
**Task**: ${state.task_description}
**Started**: ${getUtc8ISOString()}
**Estimated Complexity**: ${analysis.estimated_complexity}
---
## Task List
${tasks.map((t, i) => `${i + 1}. [ ] ${t.description}`).join('\n')}
## Key Files
${analysis.key_files?.map(f => `- \`${f}\``).join('\n') || '- To be determined'}
---
## Progress Timeline
`
Write(`${sessionFolder}/develop/progress.md`, progressInitial)
```
### Step 5: Show Initialization Results
```javascript
console.log(`\n✅ 会话初始化完成`)
console.log(`\n任务列表 (${tasks.length} 项):`)
tasks.forEach((t, i) => {
console.log(` ${i + 1}. ${t.description} [${t.tool}/${t.mode}]`)
})
console.log(`\n预估复杂度: ${analysis.estimated_complexity}`)
console.log(`\n执行 'develop' 开始开发,或 'menu' 查看更多选项`)
```
## State Updates
```javascript
return {
stateUpdates: {
session_id: sessionId,
status: 'running',
initialized: true,
develop: {
tasks: tasks,
current_task_id: null,
completed_count: 0,
total_count: tasks.length,
last_progress_at: null
},
debug: {
current_bug: null,
hypotheses: [],
confirmed_hypothesis: null,
iteration: 0,
last_analysis_at: null,
understanding_updated: false
},
validate: {
test_results: [],
coverage: null,
passed: false,
failed_tests: [],
last_run_at: null
},
context: {
estimated_complexity: analysis.estimated_complexity,
key_files: analysis.key_files
}
},
continue: true,
message: `会话 ${sessionId} 已初始化\n${tasks.length} 个开发任务待执行`
}
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Directory creation fails | Check permissions and retry |
| Gemini analysis fails | Ask the user to enter tasks manually |
| Task parsing fails | Fall back to a default task list |
## Next Actions
- Success: `action-menu` (show the action menu) or `action-develop-with-file` (start developing immediately)
- Failure: report the error and exit

View File

@@ -1,192 +0,0 @@
# Action: Menu
Display the interactive action menu and let the user pick the next step.
## Purpose
- Show a summary of the current state
- Present the action options
- Accept the user's selection
- Return the next action
## Preconditions
- [ ] state.initialized === true
- [ ] state.status === 'running'
## Execution
### Step 1: Generate the Status Summary
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// 开发进度
const developProgress = state.develop.total_count > 0
? `${state.develop.completed_count}/${state.develop.total_count} (${(state.develop.completed_count / state.develop.total_count * 100).toFixed(0)}%)`
: '未开始'
// 调试状态
const debugStatus = state.debug.confirmed_hypothesis
? `✅ 已确认根因`
: state.debug.iteration > 0
? `🔍 迭代 ${state.debug.iteration}`
: '未开始'
// 验证状态
const validateStatus = state.validate.passed
? `✅ 通过`
: state.validate.test_results.length > 0
? `${state.validate.failed_tests.length} 个失败`
: '未运行'
const statusSummary = `
═══════════════════════════════════════════════════════════
CCW Loop - ${state.session_id}
═══════════════════════════════════════════════════════════
任务: ${state.task_description}
迭代: ${state.iteration_count}
┌─────────────────────────────────────────────────────┐
│ 开发 (Develop) │ ${developProgress.padEnd(20)}
│ 调试 (Debug) │ ${debugStatus.padEnd(20)}
│ 验证 (Validate) │ ${validateStatus.padEnd(20)}
└─────────────────────────────────────────────────────┘
═══════════════════════════════════════════════════════════
`
console.log(statusSummary)
```
### Step 2: Show Action Options
```javascript
const options = [
{
label: "📝 继续开发 (Develop)",
description: state.develop.completed_count < state.develop.total_count
? `执行下一个开发任务`
: "所有任务已完成,可添加新任务",
action: "action-develop-with-file"
},
{
label: "🔍 开始调试 (Debug)",
description: state.debug.iteration > 0
? "继续假设驱动调试"
: "开始新的调试会话",
action: "action-debug-with-file"
},
{
label: "✅ 运行验证 (Validate)",
description: "运行测试并检查覆盖率",
action: "action-validate-with-file"
},
{
label: "📊 查看详情 (Status)",
description: "查看详细进度和文件",
action: "action-status"
},
{
label: "🏁 完成循环 (Complete)",
description: "结束当前循环",
action: "action-complete"
},
{
label: "🚪 退出 (Exit)",
description: "保存状态并退出",
action: "exit"
}
]
const response = await AskUserQuestion({
questions: [{
question: "选择下一步操作:",
header: "操作",
multiSelect: false,
options: options.map(o => ({
label: o.label,
description: o.description
}))
}]
})
const selectedLabel = response["操作"]
const selectedOption = options.find(o => o.label === selectedLabel)
const nextAction = selectedOption?.action || 'action-menu'
```
### Step 3: Handle Special Options
```javascript
if (nextAction === 'exit') {
console.log('\n保存状态并退出...')
return {
stateUpdates: {
status: 'user_exit'
},
continue: false,
message: '会话已保存,使用 --resume 可继续'
}
}
if (nextAction === 'action-status') {
// 显示详细状态
const sessionFolder = `.workflow/.loop/${state.session_id}`
console.log('\n=== 开发进度 ===')
const progress = Read(`${sessionFolder}/develop/progress.md`)
console.log(progress?.substring(0, 500) + '...')
console.log('\n=== 调试状态 ===')
if (state.debug.hypotheses.length > 0) {
state.debug.hypotheses.forEach(h => {
console.log(` ${h.id}: ${h.status} - ${h.description.substring(0, 50)}...`)
})
} else {
console.log(' 尚未开始调试')
}
console.log('\n=== 验证结果 ===')
if (state.validate.test_results.length > 0) {
const latest = state.validate.test_results[state.validate.test_results.length - 1]
console.log(` 最近运行: ${latest.timestamp}`)
console.log(` 通过率: ${latest.summary.pass_rate}%`)
} else {
console.log(' 尚未运行验证')
}
// 返回菜单
return {
stateUpdates: {},
continue: true,
nextAction: 'action-menu',
message: ''
}
}
```
## State Updates
```javascript
return {
stateUpdates: {
// 不更新状态,仅返回下一个动作
},
continue: true,
nextAction: nextAction,
message: `执行: ${selectedOption?.label || nextAction}`
}
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| User cancels | Return to the menu |
| Invalid selection | Redisplay the menu |
## Next Actions
The next action is decided dynamically from the user's selection.

View File

@@ -1,307 +0,0 @@
# Action: Validate With File
Run tests and validate the implementation, recording results in validation.md, with Gemini-assisted analysis of test coverage and quality.
## Purpose
Run the validation workflow:
- Run unit tests
- Run integration tests
- Check code coverage
- Generate the validation report
- Analyze failure causes
## Preconditions
- [ ] state.initialized === true
- [ ] state.status === 'running'
- [ ] state.develop.completed_count > 0 || state.debug.confirmed_hypothesis !== null
## Session Setup
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const sessionFolder = `.workflow/.loop/${state.session_id}`
const validateFolder = `${sessionFolder}/validate`
const validationPath = `${validateFolder}/validation.md`
const testResultsPath = `${validateFolder}/test-results.json`
const coveragePath = `${validateFolder}/coverage.json`
```
---
## Execution
### Step 1: Run Tests
```javascript
console.log('\n运行测试...')
// 检测测试框架
const packageJson = JSON.parse(Read('package.json'))
const testScript = packageJson.scripts?.test || 'npm test'
// 运行测试并捕获输出
const testResult = await Bash({
command: testScript,
timeout: 300000 // 5分钟
})
// 解析测试输出
const testResults = parseTestOutput(testResult.stdout)
```
### Step 2: Check Coverage
```javascript
// Run the coverage check
let coverageData = null
if (packageJson.scripts?.['test:coverage']) {
const coverageResult = await Bash({
command: 'npm run test:coverage',
timeout: 300000
})
// Parse the coverage report
coverageData = parseCoverageReport(coverageResult.stdout)
Write(coveragePath, JSON.stringify(coverageData, null, 2))
}
```
### Step 3: Gemini-Assisted Analysis
```bash
ccw cli -p "
PURPOSE: Analyze test results and coverage
Success criteria: Identify quality issues and suggest improvements
TASK:
• Analyze test execution results
• Review code coverage metrics
• Identify missing test cases
• Suggest quality improvements
• Verify requirements coverage
MODE: analysis
CONTEXT:
@${testResultsPath}
@${coveragePath}
@${sessionFolder}/develop/progress.md
EXPECTED:
- Quality assessment report
- Failed tests analysis
- Coverage gaps identification
- Improvement recommendations
- Pass/Fail decision with rationale
CONSTRAINTS: Evidence-based quality assessment
" --tool gemini --mode analysis --rule analysis-review-code-quality
```
### Step 4: Generate the Validation Report
```javascript
const timestamp = getUtc8ISOString()
const iteration = (state.validate.test_results?.length || 0) + 1
const validationReport = `# Validation Report
**Session ID**: ${state.session_id}
**Task**: ${state.task_description}
**Validated**: ${timestamp}
---
## Iteration ${iteration} - Validation Run
### Test Execution Summary
| Metric | Value |
|--------|-------|
| Total Tests | ${testResults.total} |
| Passed | ${testResults.passed} |
| Failed | ${testResults.failed} |
| Skipped | ${testResults.skipped} |
| Duration | ${testResults.duration_ms}ms |
| **Pass Rate** | **${(testResults.passed / testResults.total * 100).toFixed(1)}%** |
### Coverage Report
${coverageData ? `
| File | Statements | Branches | Functions | Lines |
|------|------------|----------|-----------|-------|
${coverageData.files.map(f => `| ${f.path} | ${f.statements}% | ${f.branches}% | ${f.functions}% | ${f.lines}% |`).join('\n')}
**Overall Coverage**: ${coverageData.overall.statements}%
` : '_No coverage data available_'}
### Failed Tests
${testResults.failed > 0 ? `
${testResults.failures.map(f => `
#### ${f.test_name}
- **Suite**: ${f.suite}
- **Error**: ${f.error_message}
- **Stack**:
\`\`\`
${f.stack_trace}
\`\`\`
`).join('\n')}
` : '_All tests passed_'}
### Gemini Quality Analysis
${geminiAnalysis}
### Recommendations
${recommendations.map(r => `- ${r}`).join('\n')}
---
## Validation Decision
**Result**: ${testResults.passed === testResults.total ? '✅ PASS' : '❌ FAIL'}
**Rationale**: ${validationDecision}
${testResults.passed !== testResults.total ? `
### Next Actions
1. Review failed tests
2. Debug failures using action-debug-with-file
3. Fix issues and re-run validation
` : `
### Next Actions
1. Consider code review
2. Prepare for deployment
3. Update documentation
`}
`
// Write the validation report
Write(validationPath, validationReport)
```
### Step 5: Save the Test Results
```javascript
const testResultsData = {
iteration,
timestamp,
summary: {
total: testResults.total,
passed: testResults.passed,
failed: testResults.failed,
skipped: testResults.skipped,
pass_rate: (testResults.passed / testResults.total * 100).toFixed(1),
duration_ms: testResults.duration_ms
},
tests: testResults.tests,
failures: testResults.failures,
coverage: coverageData?.overall || null
}
Write(testResultsPath, JSON.stringify(testResultsData, null, 2))
```
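Both the report template and the results summary compute `pass_rate` as `passed / total`, which yields `NaN` when a run produced zero tests. A minimal standalone sketch of a guarded helper (the name `computePassRate` is illustrative, not part of the spec):

```javascript
// Hypothetical helper: compute a pass rate safely, even for an empty test run.
// The field names (passed/total) mirror the testResults summary shape used above.
function computePassRate(summary) {
  if (!summary || summary.total === 0) return 0  // avoid NaN from 0/0
  return Number(((summary.passed / summary.total) * 100).toFixed(1))
}

console.log(computePassRate({ passed: 19, total: 20 }))  // 95
console.log(computePassRate({ passed: 0, total: 0 }))    // 0
```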
---
## State Updates
```javascript
const validationPassed = testResults.failed === 0 && testResults.passed > 0
return {
stateUpdates: {
validate: {
test_results: [...(state.validate.test_results || []), testResultsData],
coverage: coverageData?.overall.statements || null,
passed: validationPassed,
failed_tests: testResults.failures.map(f => f.test_name),
last_run_at: getUtc8ISOString()
},
last_action: 'action-validate-with-file'
},
continue: true,
message: validationPassed
? `Validation passed ✅\nTests: ${testResults.passed}/${testResults.total}\nCoverage: ${coverageData?.overall.statements || 'N/A'}%`
: `Validation failed ❌\nFailed: ${testResults.failed}/${testResults.total}\nConsider switching to debug mode`
}
```
## Test Output Parsers
### Jest/Vitest Parser
```javascript
function parseJestOutput(stdout) {
// Note: recent Jest prints "Tests: X failed, Y passed, Z total";
// adjust the capture-group order to match your runner's summary line
const testPattern = /Tests:\s+(\d+) passed.*?(\d+) failed.*?(\d+) total/
const match = stdout.match(testPattern)
if (!match) {
throw new Error('Could not parse the test summary from the output')
}
return {
total: parseInt(match[3], 10),
passed: parseInt(match[1], 10),
failed: parseInt(match[2], 10),
// ... parse individual test results
}
}
```
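A quick standalone check of the summary-line parsing on a synthetic Jest summary (the parser is repeated here in trimmed form so the snippet runs on its own):

```javascript
// Trimmed copy of the Jest summary parser above, for a self-contained check.
function parseJestOutput(stdout) {
  const match = stdout.match(/Tests:\s+(\d+) passed.*?(\d+) failed.*?(\d+) total/)
  if (!match) throw new Error('Could not parse the test summary')
  return { total: parseInt(match[3], 10), passed: parseInt(match[1], 10), failed: parseInt(match[2], 10) }
}

const summary = parseJestOutput('Tests:       12 passed, 2 failed, 14 total')
console.log(summary)  // { total: 14, passed: 12, failed: 2 }
```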
### Pytest Parser
```javascript
function parsePytestOutput(stdout) {
const summaryPattern = /(\d+) passed.*?(\d+) failed.*?(\d+) error/
// ... implementation
}
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Tests don't run | Check the test script configuration and prompt the user |
| All tests fail | Suggest switching to debug mode |
| Coverage tool missing | Skip the coverage check and run tests only |
| Timeout | Increase the timeout or split the tests |
## Validation Report Template
See [templates/validation-template.md](../../templates/validation-template.md)
## CLI Integration
### Quality Analysis
```bash
ccw cli -p "PURPOSE: Analyze test results and coverage...
TASK: • Review results • Identify gaps • Suggest improvements
MODE: analysis
CONTEXT: @test-results.json @coverage.json
EXPECTED: Quality assessment
" --tool gemini --mode analysis --rule analysis-review-code-quality
```
### Test Generation (when coverage is low)
```bash
ccw cli -p "PURPOSE: Generate missing test cases...
TASK: • Analyze uncovered code • Write tests
MODE: write
CONTEXT: @coverage.json @src/**/*
EXPECTED: Test code
" --tool gemini --mode write --rule development-generate-tests
```
## Next Actions (Hints)
- Validation passed: `action-complete` (finish the loop)
- Validation failed: `action-debug-with-file` (debug the failing tests)
- Low coverage: `action-develop-with-file` (add tests)
- User choice: `action-menu` (return to the menu)


@@ -1,486 +0,0 @@
# Orchestrator
Selects and executes the next action based on the current state, implementing a stateless loop workflow. Cooperates with the API (loop-v2-routes.ts) to separate the control plane from the execution plane.
## Role
Check control signals → read state from files → select an action → execute → update files → loop, until completion or an external pause/stop.
## State Management (Unified Location)
### Reading State
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
/**
* Read the loop state (unified location)
* @param loopId - Loop ID (e.g., "loop-v2-20260122-abc123")
*/
function readLoopState(loopId) {
const stateFile = `.loop/${loopId}.json`
if (!fs.existsSync(stateFile)) {
return null
}
const state = JSON.parse(Read(stateFile))
return state
}
```
### Updating State
```javascript
/**
* Update the loop state (only the skill_state section; API fields are not modified)
* @param loopId - Loop ID
* @param updates - Fields to update (within skill_state)
*/
function updateLoopState(loopId, updates) {
const stateFile = `.loop/${loopId}.json`
const currentState = readLoopState(loopId)
if (!currentState) {
throw new Error(`Loop state not found: ${loopId}`)
}
// Only update skill_state and updated_at
const newState = {
...currentState,
updated_at: getUtc8ISOString(),
skill_state: {
...currentState.skill_state,
...updates
}
}
Write(stateFile, JSON.stringify(newState, null, 2))
return newState
}
```
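Note that `updateLoopState` spreads `updates` over `skill_state` at a single level, so nested objects such as `develop` are replaced wholesale rather than deep-merged — callers should pass complete sub-objects. A minimal standalone illustration of that merge semantics:

```javascript
// Shallow-merge semantics, as used for skill_state in updateLoopState above.
const current = { develop: { total: 3, completed: 1 }, mode: 'auto' }
const updates = { develop: { completed: 2 } }  // `total` deliberately omitted

const merged = { ...current, ...updates }
console.log(merged.mode)               // 'auto' — untouched top-level keys survive
console.log(merged.develop.completed)  // 2
console.log(merged.develop.total)      // undefined — the nested object was replaced
```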
### Creating a New Loop State (direct invocation)
```javascript
/**
* Create a new loop state (direct invocation only; when triggered via the API the state already exists)
*/
function createLoopState(loopId, taskDescription) {
const stateFile = `.loop/${loopId}.json`
const now = getUtc8ISOString()
const state = {
// API-compatible fields
loop_id: loopId,
title: taskDescription.substring(0, 100),
description: taskDescription,
max_iterations: 10,
status: 'running', // set to running for direct invocation
current_iteration: 0,
created_at: now,
updated_at: now,
// Skill extension fields
skill_state: null // initialized by action-init
}
// Ensure the directories exist
Bash(`mkdir -p ".loop"`)
Bash(`mkdir -p ".loop/${loopId}.progress"`)
Write(stateFile, JSON.stringify(state, null, 2))
return state
}
```
## Control Signal Checking
```javascript
/**
* Check the API control signals
* Must be called before every action starts
* @returns { continue: boolean, reason: string }
*/
function checkControlSignals(loopId) {
const state = readLoopState(loopId)
if (!state) {
return { continue: false, reason: 'state_not_found' }
}
switch (state.status) {
case 'paused':
// The API paused the loop; the Skill should exit and wait for resume
console.log(`⏸️ Loop paused by API. Waiting for resume...`)
return { continue: false, reason: 'paused' }
case 'failed':
// The API stopped the loop (user stopped it manually)
console.log(`⏹️ Loop stopped by API.`)
return { continue: false, reason: 'stopped' }
case 'completed':
// Already completed
console.log(`✅ Loop already completed.`)
return { continue: false, reason: 'completed' }
case 'created':
// Created by the API but not started (should not reach here)
console.log(`⚠️ Loop not started by API.`)
return { continue: false, reason: 'not_started' }
case 'running':
// Continue normally
return { continue: true, reason: 'running' }
default:
console.log(`⚠️ Unknown status: ${state.status}`)
return { continue: false, reason: 'unknown_status' }
}
}
```
## Decision Logic
```javascript
/**
* Select the next action (based on skill_state)
*/
function selectNextAction(state, mode = 'interactive') {
const skillState = state.skill_state
// 1. Termination checks (API status)
if (state.status === 'completed') return null
if (state.status === 'failed') return null
if (state.current_iteration >= state.max_iterations) {
console.warn(`Reached the maximum number of iterations (${state.max_iterations})`)
return 'action-complete'
}
// 2. Initialization check
if (!skillState || !skillState.current_action) {
return 'action-init'
}
// 3. Mode check
if (mode === 'interactive') {
return 'action-menu' // show the menu and let the user choose
}
// 4. Auto mode: select automatically based on state
if (mode === 'auto') {
// Priority order: develop → debug → validate
// If there are pending development tasks
const hasPendingDevelop = skillState.develop?.tasks?.some(t => t.status === 'pending')
if (hasPendingDevelop) {
return 'action-develop-with-file'
}
// If development is done but not yet debugged
if (skillState.last_action === 'action-develop-with-file') {
const needsDebug = skillState.develop?.completed < skillState.develop?.total
if (needsDebug) {
return 'action-debug-with-file'
}
}
// If debugging is done but not yet validated
if (skillState.last_action === 'action-debug-with-file' ||
skillState.debug?.confirmed_hypothesis) {
return 'action-validate-with-file'
}
// If validation failed, go back to development
if (skillState.last_action === 'action-validate-with-file') {
if (!skillState.validate?.passed) {
return 'action-develop-with-file'
}
}
// Everything passed: complete
if (skillState.validate?.passed && !hasPendingDevelop) {
return 'action-complete'
}
// Default: develop
return 'action-develop-with-file'
}
// 5. Default: complete
return 'action-complete'
}
```
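The auto-mode priority order can be exercised standalone. The sketch below is a trimmed re-statement of `selectNextAction`, keeping only the develop/validate branches (the debug branches are omitted for brevity; the helper name `selectAuto` is illustrative):

```javascript
// Trimmed copy of the auto-mode branch: pending work wins, failed validation
// sends the loop back to development, and a clean pass completes the loop.
function selectAuto(skillState) {
  const hasPendingDevelop = skillState.develop?.tasks?.some(t => t.status === 'pending')
  if (hasPendingDevelop) return 'action-develop-with-file'
  if (skillState.last_action === 'action-validate-with-file' && !skillState.validate?.passed) {
    return 'action-develop-with-file'
  }
  if (skillState.validate?.passed && !hasPendingDevelop) return 'action-complete'
  return 'action-develop-with-file'
}

console.log(selectAuto({ develop: { tasks: [{ status: 'pending' }] } }))        // action-develop-with-file
console.log(selectAuto({ develop: { tasks: [] }, validate: { passed: true } })) // action-complete
```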
## Execution Loop
```javascript
/**
* Run the orchestrator
* @param options.loopId - Existing loop ID (API-triggered)
* @param options.task - Task description (direct invocation)
* @param options.mode - 'interactive' | 'auto'
*/
async function runOrchestrator(options = {}) {
const { loopId: existingLoopId, task, mode = 'interactive' } = options
console.log('=== CCW Loop Orchestrator Started ===')
// 1. Determine the loopId
let loopId
let state
if (existingLoopId) {
// API-triggered: use the existing loopId
loopId = existingLoopId
state = readLoopState(loopId)
if (!state) {
console.error(`Loop not found: ${loopId}`)
return { status: 'error', message: 'Loop not found' }
}
console.log(`Resuming loop: ${loopId}`)
console.log(`Status: ${state.status}`)
} else if (task) {
// Direct invocation: create a new loopId
const timestamp = getUtc8ISOString().replace(/[-:]/g, '').split('.')[0]
const random = Math.random().toString(36).substring(2, 10)
loopId = `loop-v2-${timestamp}-${random}`
console.log(`Creating new loop: ${loopId}`)
console.log(`Task: ${task}`)
state = createLoopState(loopId, task)
} else {
console.error('Either --loop-id or task description is required')
return { status: 'error', message: 'Missing loopId or task' }
}
const progressDir = `.loop/${loopId}.progress`
// 2. Main loop
let iteration = state.current_iteration || 0
while (iteration < state.max_iterations) {
iteration++
// ========================================
// CRITICAL: Check control signals first
// ========================================
const control = checkControlSignals(loopId)
if (!control.continue) {
console.log(`\n🛑 Loop terminated: ${control.reason}`)
break
}
// Re-read the state (it may have been updated by the API)
state = readLoopState(loopId)
console.log(`\n[Iteration ${iteration}] Status: ${state.status}`)
// Select the next action
const actionId = selectNextAction(state, mode)
if (!actionId) {
console.log('No action selected, terminating.')
break
}
console.log(`[Iteration ${iteration}] Executing: ${actionId}`)
// Update current_iteration
state = {
...state,
current_iteration: iteration,
updated_at: getUtc8ISOString()
}
Write(`.loop/${loopId}.json`, JSON.stringify(state, null, 2))
// Execute the action
try {
const actionPromptFile = `.claude/skills/ccw-loop/phases/actions/${actionId}.md`
if (!fs.existsSync(actionPromptFile)) {
console.error(`Action file not found: ${actionPromptFile}`)
continue
}
const actionPrompt = Read(actionPromptFile)
// Build the agent prompt
const agentPrompt = `
[LOOP CONTEXT]
Loop ID: ${loopId}
State File: .loop/${loopId}.json
Progress Dir: ${progressDir}
[CURRENT STATE]
${JSON.stringify(state, null, 2)}
[ACTION INSTRUCTIONS]
${actionPrompt}
[TASK]
You are executing ${actionId} for loop: ${state.title || state.description}
[CONTROL SIGNALS]
Before executing, check if status is still 'running'.
If status is 'paused' or 'failed', exit gracefully.
[RETURN]
Return JSON with:
- skillStateUpdates: Object with skill_state fields to update
- continue: Boolean indicating if loop should continue
- message: String with user message
`
const result = await Task({
subagent_type: 'universal-executor',
run_in_background: false,
description: `Execute ${actionId}`,
prompt: agentPrompt
})
// Parse the result
const actionResult = JSON.parse(result)
// Update the state (skill_state only)
updateLoopState(loopId, {
current_action: null,
last_action: actionId,
completed_actions: [
...(state.skill_state?.completed_actions || []),
actionId
],
...actionResult.skillStateUpdates
})
// Show the message
if (actionResult.message) {
console.log(`\n${actionResult.message}`)
}
// Check whether to continue
if (actionResult.continue === false) {
console.log('Action requested termination.')
break
}
} catch (error) {
console.error(`Error executing ${actionId}: ${error.message}`)
// Error handling
updateLoopState(loopId, {
current_action: null,
errors: [
...(state.skill_state?.errors || []),
{
action: actionId,
message: error.message,
timestamp: getUtc8ISOString()
}
]
})
}
}
if (iteration >= state.max_iterations) {
console.log(`\n⚠️ Reached maximum iterations (${state.max_iterations})`)
console.log('Consider breaking down the task or taking a break.')
}
console.log('\n=== CCW Loop Orchestrator Finished ===')
// Return the final state
const finalState = readLoopState(loopId)
return {
status: finalState.status,
loop_id: loopId,
iterations: iteration,
final_state: finalState
}
}
```
## Action Catalog
| Action | Purpose | Preconditions | Effects |
|--------|---------|---------------|---------|
| [action-init](actions/action-init.md) | Initialize the session | status=pending | initialized=true |
| [action-menu](actions/action-menu.md) | Show the action menu | initialized=true | user selects the next action |
| [action-develop-with-file](actions/action-develop-with-file.md) | Development tasks | initialized=true | updates progress.md |
| [action-debug-with-file](actions/action-debug-with-file.md) | Hypothesis-driven debugging | initialized=true | updates understanding.md |
| [action-validate-with-file](actions/action-validate-with-file.md) | Test validation | initialized=true | updates validation.md |
| [action-complete](actions/action-complete.md) | Complete the loop | validation_passed=true | status=completed |
## Termination Conditions
1. **API pause**: `state.status === 'paused'` (the Skill exits and waits for resume)
2. **API stop**: `state.status === 'failed'` (the Skill terminates)
3. **Task completed**: `state.status === 'completed'`
4. **Iteration limit**: `state.current_iteration >= state.max_iterations`
5. **Action requests termination**: `actionResult.continue === false`
## Error Recovery
| Error Type | Recovery Strategy |
|------------|-------------------|
| Action execution failure | Log the error, increment error_count, continue with the next action |
| Corrupted state file | Rebuild state from the other files (progress.md, understanding.md, etc.) |
| User abort | Save the current state; allow recovery via --resume |
| CLI tool failure | Fall back to manual analysis mode |
## Mode Strategies
### Interactive Mode (default)
Shows a menu on each iteration and lets the user choose an action:
```
Current state: developing
Available actions:
1. Continue development (develop)
2. Start debugging (debug)
3. Run validation (validate)
4. View progress (status)
5. Exit (exit)
Select:
```
### Auto Mode (automatic loop)
Executes the preset flow automatically:
```
Develop → Debug → Validate →
↓ (if validation fails)
Develop (fix) → Debug → Validate → Done
```
## State Machine (API Status)
```mermaid
stateDiagram-v2
[*] --> created: API creates loop
created --> running: API /start → Trigger Skill
running --> paused: API /pause → Set status
running --> completed: action-complete
running --> failed: API /stop OR error
paused --> running: API /resume → Re-trigger Skill
completed --> [*]
failed --> [*]
note right of paused
Skill checks status before each action
If paused, Skill exits gracefully
end note
note right of running
Skill executes: init → develop → debug → validate
end note
```


@@ -1,474 +0,0 @@
# State Schema
State structure definitions for the CCW Loop (unified version).
## State File
**Location**: `.loop/{loopId}.json` (unified location, shared by the API and the Skill)
**Legacy location** (backward compatibility only): `.workflow/.loop/{session-id}/state.json`
## Structure Definition
### Unified Loop State Interface
```typescript
/**
* Unified Loop State - state structure shared by the API and the Skill
* The API (loop-v2-routes.ts) owns the state
* The Skill (ccw-loop) reads and updates this state
*/
interface LoopState {
// =====================================================
// API FIELDS (from loop-v2-routes.ts)
// These fields are managed by the API; the Skill treats them as read-only
// =====================================================
loop_id: string // Loop ID, e.g., "loop-v2-20260122-abc123"
title: string // Loop title
description: string // Loop description
max_iterations: number // Maximum number of iterations
status: 'created' | 'running' | 'paused' | 'completed' | 'failed'
current_iteration: number // Current iteration count
created_at: string // Creation time (ISO8601)
updated_at: string // Last update time (ISO8601)
completed_at?: string // Completion time (ISO8601)
failure_reason?: string // Failure reason
// =====================================================
// SKILL EXTENSION FIELDS
// These fields are managed by the Skill; the API treats them as read-only
// =====================================================
skill_state?: {
// Currently executing action
current_action: 'init' | 'develop' | 'debug' | 'validate' | 'complete' | null
last_action: string | null
completed_actions: string[]
mode: 'interactive' | 'auto'
// === Develop phase ===
develop: {
total: number
completed: number
current_task?: string
tasks: DevelopTask[]
last_progress_at: string | null
}
// === Debug phase ===
debug: {
active_bug?: string
hypotheses_count: number
hypotheses: Hypothesis[]
confirmed_hypothesis: string | null
iteration: number
last_analysis_at: string | null
}
// === Validate phase ===
validate: {
pass_rate: number // test pass rate (0-100)
coverage: number // coverage (0-100)
test_results: TestResult[]
passed: boolean
failed_tests: string[]
last_run_at: string | null
}
// === Error tracking ===
errors: Array<{
action: string
message: string
timestamp: string
}>
}
}
interface DevelopTask {
id: string
description: string
tool: 'gemini' | 'qwen' | 'codex' | 'bash'
mode: 'analysis' | 'write'
status: 'pending' | 'in_progress' | 'completed' | 'failed'
files_changed: string[]
created_at: string
completed_at: string | null
}
interface Hypothesis {
id: string // H1, H2, ...
description: string
testable_condition: string
logging_point: string
evidence_criteria: {
confirm: string
reject: string
}
likelihood: number // 1 = most likely
status: 'pending' | 'confirmed' | 'rejected' | 'inconclusive'
evidence: Record<string, any> | null
verdict_reason: string | null
}
interface TestResult {
test_name: string
suite: string
status: 'passed' | 'failed' | 'skipped'
duration_ms: number
error_message: string | null
stack_trace: string | null
}
```
## Initial State
### Created by the API (Dashboard-triggered)
```json
{
"loop_id": "loop-v2-20260122-abc123",
"title": "Implement user authentication",
"description": "Add login/logout functionality",
"max_iterations": 10,
"status": "created",
"current_iteration": 0,
"created_at": "2026-01-22T10:00:00+08:00",
"updated_at": "2026-01-22T10:00:00+08:00"
}
```
### After Skill Initialization (action-init)
```json
{
"loop_id": "loop-v2-20260122-abc123",
"title": "Implement user authentication",
"description": "Add login/logout functionality",
"max_iterations": 10,
"status": "running",
"current_iteration": 0,
"created_at": "2026-01-22T10:00:00+08:00",
"updated_at": "2026-01-22T10:00:05+08:00",
"skill_state": {
"current_action": "init",
"last_action": null,
"completed_actions": [],
"mode": "auto",
"develop": {
"total": 3,
"completed": 0,
"current_task": null,
"tasks": [
{ "id": "task-001", "description": "Create auth component", "status": "pending" }
],
"last_progress_at": null
},
"debug": {
"active_bug": null,
"hypotheses_count": 0,
"hypotheses": [],
"confirmed_hypothesis": null,
"iteration": 0,
"last_analysis_at": null
},
"validate": {
"pass_rate": 0,
"coverage": 0,
"test_results": [],
"passed": false,
"failed_tests": [],
"last_run_at": null
},
"errors": []
}
}
```
## Control Signal Checks
The Skill must check control signals before each action starts:
```javascript
/**
* Check the API control signals
* @returns { continue: boolean, action: 'pause_exit' | 'stop_exit' | 'continue' }
*/
function checkControlSignals(loopId) {
const state = JSON.parse(Read(`.loop/${loopId}.json`))
switch (state.status) {
case 'paused':
// The API paused the loop; the Skill should exit and wait for resume
return { continue: false, action: 'pause_exit' }
case 'failed':
// The API stopped the loop (user stopped it manually)
return { continue: false, action: 'stop_exit' }
case 'running':
// Continue normally
return { continue: true, action: 'continue' }
default:
// Abnormal status
return { continue: false, action: 'stop_exit' }
}
}
```
### Usage Within an Action
```markdown
## Execution
### Step 1: Check Control Signals
\`\`\`javascript
const control = checkControlSignals(loopId)
if (!control.continue) {
// Log the exit reason
console.log(`Loop ${control.action}: status = ${state.status}`)
// On pause_exit, save the current progress
if (control.action === 'pause_exit') {
updateSkillState(loopId, { current_action: 'paused' })
}
return // exit the action
}
\`\`\`
### Step 2: Execute Action Logic
...
```
## State Transition Rules
### 1. Initialization (action-init)
```javascript
// After Skill initialization
{
// API field updates
status: 'created' → 'running', // or stays 'running' if the API already set it
updated_at: timestamp,
// Skill field initialization
skill_state: {
current_action: 'init',
mode: 'auto',
develop: {
tasks: [...parsed_tasks],
total: N,
completed: 0
}
}
}
```
### 2. Development In Progress (action-develop-with-file)
```javascript
// After a development task runs
{
updated_at: timestamp,
current_iteration: state.current_iteration + 1,
skill_state: {
current_action: 'develop',
last_action: 'action-develop-with-file',
completed_actions: [...state.skill_state.completed_actions, 'action-develop-with-file'],
develop: {
current_task: 'task-xxx',
completed: N+1,
last_progress_at: timestamp
}
}
}
```
### 3. Debugging In Progress (action-debug-with-file)
```javascript
// After a debugging run
{
updated_at: timestamp,
current_iteration: state.current_iteration + 1,
skill_state: {
current_action: 'debug',
last_action: 'action-debug-with-file',
debug: {
active_bug: '...',
hypotheses_count: N,
hypotheses: [...new_hypotheses],
iteration: N+1,
last_analysis_at: timestamp
}
}
}
```
### 4. Validation Complete (action-validate-with-file)
```javascript
// After a validation run
{
updated_at: timestamp,
current_iteration: state.current_iteration + 1,
skill_state: {
current_action: 'validate',
last_action: 'action-validate-with-file',
validate: {
test_results: [...results],
pass_rate: 95.5,
coverage: 85.0,
passed: true | false,
failed_tests: ['test1', 'test2'],
last_run_at: timestamp
}
}
}
```
### 5. Completion (action-complete)
```javascript
// After the loop completes
{
status: 'running' → 'completed',
completed_at: timestamp,
updated_at: timestamp,
skill_state: {
current_action: 'complete',
last_action: 'action-complete'
}
}
```
## Derived State Fields
The following fields can be computed from the state and do not need to be stored:
```javascript
// Development progress (these fields live under skill_state.develop in the unified schema)
const develop = state.skill_state.develop
const developProgress = develop.total > 0
? (develop.completed / develop.total) * 100
: 0
// Are there pending development tasks?
const hasPendingDevelop = develop.tasks.some(t => t.status === 'pending')
// Is debugging complete?
const debugCompleted = state.skill_state.debug.confirmed_hypothesis !== null
// Did validation pass?
const validate = state.skill_state.validate
const validationPassed = validate.passed && validate.test_results.length > 0
// Overall progress
const overallProgress = (
(developProgress * 0.5) +
(debugCompleted ? 25 : 0) +
(validationPassed ? 25 : 0)
)
```
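The weighting above (develop counts for half, debug and validate for a quarter each) can be checked standalone; the field names here are simplified for the sketch:

```javascript
// Worked example of the overall-progress weighting: develop 50%, debug 25%, validate 25%.
function overallProgress({ completed, total, debugDone, validationPassed }) {
  const developProgress = total > 0 ? (completed / total) * 100 : 0
  return (developProgress * 0.5) + (debugDone ? 25 : 0) + (validationPassed ? 25 : 0)
}

console.log(overallProgress({ completed: 2, total: 4, debugDone: true, validationPassed: false }))  // 50
console.log(overallProgress({ completed: 4, total: 4, debugDone: true, validationPassed: true }))   // 100
```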
## File Synchronization
### Unified Location
Mapping between state fields and files:
| State field | Synced file | Sync timing |
|----------|----------|----------|
| Entire LoopState | `.loop/{loopId}.json` | On every state change (primary file) |
| `skill_state.develop` | `.loop/{loopId}.progress/develop.md` | After every develop action |
| `skill_state.debug` | `.loop/{loopId}.progress/debug.md` | After every debug action |
| `skill_state.validate` | `.loop/{loopId}.progress/validate.md` | After every validate action |
| Code change log | `.loop/{loopId}.progress/changes.log` | On every file modification (NDJSON) |
| Debug log | `.loop/{loopId}.progress/debug.log` | On every debug log entry (NDJSON) |
### Example File Layout
```
.loop/
├── loop-v2-20260122-abc123.json # Primary state file (API + Skill)
├── loop-v2-20260122-abc123.tasks.jsonl # Task list (managed by the API)
└── loop-v2-20260122-abc123.progress/ # Skill progress files
├── develop.md # Development progress
├── debug.md # Debugging understanding
├── validate.md # Validation report
├── changes.log # Code changes (NDJSON)
└── debug.log # Debug log (NDJSON)
```
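changes.log and debug.log use NDJSON (one JSON object per line), which makes appends cheap and parsing line-oriented. A minimal sketch of that append/read cycle, kept in memory here (the real files live under `.loop/{loopId}.progress/`, and the entry fields are illustrative):

```javascript
// Minimal NDJSON round-trip: append one JSON object per line, then parse line-by-line.
const lines = []

function appendChange(entry) {
  // The file-backed version would append JSON.stringify(entry) + '\n' to changes.log
  lines.push(JSON.stringify(entry))
}

appendChange({ ts: '2026-01-22T10:00:00+08:00', file: 'src/auth.ts', op: 'edit' })
appendChange({ ts: '2026-01-22T10:05:00+08:00', file: 'src/auth.test.ts', op: 'add' })

const parsed = lines.map(l => JSON.parse(l))
console.log(parsed.length)  // 2
console.log(parsed[1].op)   // 'add'
```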
## State Recovery
If the primary state file `.loop/{loopId}.json` is corrupted, skill_state can be rebuilt from the progress files:
```javascript
function rebuildSkillStateFromProgress(loopId) {
const progressDir = `.loop/${loopId}.progress`
// Try to parse state from the progress files
const skill_state = {
develop: parseProgressFile(`${progressDir}/develop.md`),
debug: parseProgressFile(`${progressDir}/debug.md`),
validate: parseProgressFile(`${progressDir}/validate.md`)
}
return skill_state
}
// Parse a progress Markdown file
function parseProgressFile(filePath) {
const content = Read(filePath)
if (!content) return null
// Extract data from the Markdown tables and structure
// ... implementation
}
```
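`parseProgressFile` is left unimplemented above. One plausible shape, assuming the develop.md progress file contains a Markdown task table — the table layout and the helper name `parseTaskTable` are hypothetical, not part of the spec:

```javascript
// Hypothetical parser: extract task rows from a Markdown table shaped like
// | task-001 | Create auth component | completed |
function parseTaskTable(markdown) {
  const tasks = []
  for (const line of markdown.split('\n')) {
    const m = line.match(/^\|\s*(task-\d{3})\s*\|\s*([^|]+?)\s*\|\s*(\w+)\s*\|$/)
    if (m) tasks.push({ id: m[1], description: m[2], status: m[3] })
  }
  return { tasks, completed: tasks.filter(t => t.status === 'completed').length, total: tasks.length }
}

const md = '| ID | Description | Status |\n|----|----|----|\n| task-001 | Create auth component | completed |\n| task-002 | Add login route | pending |'
console.log(parseTaskTable(md))  // header and separator rows are skipped
```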
### Recovery Strategy
1. **API fields**: not recoverable - re-fetch them from the API or have the user re-enter them
2. **skill_state fields**: can be parsed from the Markdown files in the `.progress/` directory
3. **Task list**: restore from `.loop/{loopId}.tasks.jsonl`
## State Validation
```javascript
function validateState(state) {
const errors = []
// Required fields (unified schema)
if (!state.loop_id) errors.push('Missing loop_id')
if (!state.description) errors.push('Missing description')
// State consistency
if (state.skill_state && state.status === 'created') {
errors.push('Inconsistent: skill_state initialized but status is created')
}
if (state.status === 'completed' && !state.skill_state?.validate?.passed) {
errors.push('Inconsistent: completed but validation not passed')
}
// Development task consistency
const develop = state.skill_state?.develop
if (develop) {
const completedTasks = develop.tasks.filter(t => t.status === 'completed').length
if (completedTasks !== develop.completed) {
errors.push('Inconsistent: completed count mismatch')
}
}
return { valid: errors.length === 0, errors }
}
```


@@ -1,300 +0,0 @@
# Action Catalog
Catalog and descriptions of all available CCW Loop actions.
## Available Actions
| Action | Purpose | Preconditions | Effects | CLI Integration |
|--------|---------|---------------|---------|-----------------|
| [action-init](../phases/actions/action-init.md) | Initialize the session | status=pending, initialized=false | status→running, initialized→true, creates directories and the task list | Gemini task decomposition |
| [action-menu](../phases/actions/action-menu.md) | Show the action menu | initialized=true, status=running | returns the user-selected action | - |
| [action-develop-with-file](../phases/actions/action-develop-with-file.md) | Execute a development task | initialized=true, pending tasks > 0 | updates progress.md, completes one task | Gemini code implementation |
| [action-debug-with-file](../phases/actions/action-debug-with-file.md) | Hypothesis-driven debugging | initialized=true | updates understanding.md, hypotheses.json | Gemini hypothesis generation and evidence analysis |
| [action-validate-with-file](../phases/actions/action-validate-with-file.md) | Run test validation | initialized=true, develop > 0 or debug confirmed | updates validation.md, test-results.json | Gemini quality analysis |
| [action-complete](../phases/actions/action-complete.md) | Complete the loop | initialized=true | status→completed, generates summary.md | - |
## Action Dependencies Graph
```mermaid
graph TD
START([User starts /ccw-loop]) --> INIT[action-init]
INIT --> MENU[action-menu]
MENU --> DEVELOP[action-develop-with-file]
MENU --> DEBUG[action-debug-with-file]
MENU --> VALIDATE[action-validate-with-file]
MENU --> STATUS[action-status]
MENU --> COMPLETE[action-complete]
MENU --> EXIT([Exit])
DEVELOP --> MENU
DEBUG --> MENU
VALIDATE --> MENU
STATUS --> MENU
COMPLETE --> END([End])
EXIT --> END
style INIT fill:#e1f5fe
style MENU fill:#fff3e0
style DEVELOP fill:#e8f5e9
style DEBUG fill:#fce4ec
style VALIDATE fill:#f3e5f5
style COMPLETE fill:#c8e6c9
```
## Action Execution Matrix
### Interactive Mode
| State | Auto-Selected Action | User Options |
|-------|---------------------|--------------|
| pending | action-init | - |
| running, !initialized | action-init | - |
| running, initialized | action-menu | All actions |
### Auto Mode
| Condition | Selected Action |
|-----------|----------------|
| pending_develop_tasks > 0 | action-develop-with-file |
| last_action=develop, !debug_completed | action-debug-with-file |
| last_action=debug, !validation_completed | action-validate-with-file |
| validation_failed | action-develop-with-file (fix) |
| validation_passed, no pending | action-complete |
## Action Inputs/Outputs
### action-init
**Inputs**:
- state.task_description
- User input (optional)
**Outputs**:
- meta.json
- state.json (initialized)
- develop/tasks.json
- develop/progress.md
**State Changes**:
```javascript
{
status: 'pending' → 'running',
initialized: false → true,
develop.tasks: [] → [task1, task2, ...]
}
```
### action-develop-with-file
**Inputs**:
- state.develop.tasks
- User selection (when multiple tasks are pending)
**Outputs**:
- develop/progress.md (appended)
- develop/tasks.json (updated)
- develop/changes.log (appended)
**State Changes**:
```javascript
{
develop.current_task_id: null → 'task-xxx' → null,
develop.completed_count: N → N+1,
last_action: X → 'action-develop-with-file'
}
```
### action-debug-with-file
**Inputs**:
- Bug description (user input, or taken from test failures)
- debug.log (if present)
**Outputs**:
- debug/understanding.md (appended)
- debug/hypotheses.json (updated)
- Code changes (added logging or fixes)
**State Changes**:
```javascript
{
debug.current_bug: null → 'bug description',
debug.hypotheses: [...updated],
debug.iteration: N → N+1,
debug.confirmed_hypothesis: null → 'H1' (if confirmed)
}
```
### action-validate-with-file
**Inputs**:
- Test script (from package.json)
- Coverage tool (optional)
**Outputs**:
- validate/validation.md (appended)
- validate/test-results.json (updated)
- validate/coverage.json (updated)
**State Changes**:
```javascript
{
validate.test_results: [...new results],
validate.coverage: null → 85.5,
validate.passed: false → true,
validate.failed_tests: ['test1', 'test2'] → []
}
```
### action-complete
**Inputs**:
- state (full state)
- User choices (extension options)
**Outputs**:
- summary.md
- Issues (if extension selected)
**State Changes**:
```javascript
{
status: 'running' → 'completed',
completed_at: null → timestamp
}
```
## Action Sequences
### Typical Happy Path
```
action-init
→ action-develop-with-file (task 1)
→ action-develop-with-file (task 2)
→ action-develop-with-file (task 3)
→ action-validate-with-file
→ PASS
→ action-complete
```
### Debug Iteration Path
```
action-init
→ action-develop-with-file (task 1)
→ action-validate-with-file
→ FAIL
→ action-debug-with-file (explore)
→ action-debug-with-file (analyze)
→ Root cause found
→ action-validate-with-file
→ PASS
→ action-complete
```
### Multi-Iteration Path
```
action-init
→ action-develop-with-file (task 1)
→ action-debug-with-file
→ action-develop-with-file (task 2)
→ action-validate-with-file
→ FAIL
→ action-debug-with-file
→ action-validate-with-file
→ PASS
→ action-complete
```
## Error Scenarios
### CLI Tool Failure
```
action-develop-with-file
→ Gemini CLI fails
→ Fallback to manual implementation
→ Prompt user for code
→ Continue
```
### Test Failure
```
action-validate-with-file
→ Tests fail
→ Record failed tests
→ Suggest action-debug-with-file
→ User chooses debug or manual fix
```
### Max Iterations Reached
```
state.iteration_count >= 10
→ Warning message
→ Suggest break or task split
→ Allow continue or exit
```
## Action Extensions
### Adding New Actions
To add a new action:
1. Create `phases/actions/action-{name}.md`
2. Define preconditions, execution, state updates
3. Add to this catalog
4. Update orchestrator.md decision logic
5. Add to action-menu.md options
### Action Template
```markdown
# Action: {Name}
{Brief description}
## Purpose
{Detailed purpose}
## Preconditions
- [ ] condition1
- [ ] condition2
## Execution
### Step 1: {Step Name}
\`\`\`javascript
// code
\`\`\`
## State Updates
\`\`\`javascript
return {
stateUpdates: {
// updates
},
continue: true,
message: "..."
}
\`\`\`
## Error Handling
| Error Type | Recovery |
|------------|----------|
| ... | ... |
## Next Actions (Hints)
- condition: next_action
```


@@ -1,192 +0,0 @@
# Loop Requirements Specification
Definitions of the core requirements and constraints for the CCW Loop.
## Core Requirements
### 1. Stateless Loop
**Requirement**: Each run reads state from files and writes it back afterwards; it never depends on in-memory state.
**Rationale**: Supports interruption and resumption at any time; state is persisted.
**Validation**:
- [ ] Every action reads state from files when it starts
- [ ] Every action writes state back to files when it finishes
- [ ] No dependence on global variables or in-memory state
### 2. File-Driven Progress
**Requirement**: All progress, understanding, and validation results are recorded in dedicated Markdown files.
**Rationale**: Auditable, reviewable, and visible to the team.
**Validation**:
- [ ] develop/progress.md records development progress
- [ ] debug/understanding.md records the evolution of understanding
- [ ] validate/validation.md records validation results
- [ ] All files use readable Markdown
### 3. CLI Tool Integration
**Requirement**: Key decision points use Gemini/CLI for deep analysis.
**Rationale**: Leverage LLM capabilities to improve quality.
**Validation**:
- [ ] Task breakdown uses Gemini
- [ ] Hypothesis generation uses Gemini
- [ ] Evidence analysis uses Gemini
- [ ] Quality assessment uses Gemini
### 4. User-Controlled Loop
**Requirement**: Support both interactive and automatic loop modes; the user can intervene at any time.
**Rationale**: Flexibility to fit different scenarios.
**Validation**:
- [ ] Interactive mode: show a menu at every step
- [ ] Automatic mode: execute the preset flow
- [ ] The user can exit at any time
- [ ] State is recoverable
### 5. Resumability
**Requirement**: After an interruption at any point, execution can continue from where it left off.
**Rationale**: Supports long-running tasks and recovery from unexpected interruptions.
**Validation**:
- [ ] State is saved in state.json
- [ ] Execution continues with --resume
- [ ] History is fully preserved
## Quality Standards
### Completeness
| Dimension | Threshold |
|-----------|-----------|
| Progress document completeness | Every task is recorded |
| Understanding document evolution | Updated on every iteration |
| Validation report detail | Includes all test results |
### Consistency
| Dimension | Threshold |
|-----------|-----------|
| File format consistency | All Markdown files use the same template |
| State synchronization | state.json matches file contents |
| Timestamp format | ISO 8601 throughout |
### Usability
| Dimension | Threshold |
|-----------|-----------|
| Menu usability | Clear options with accurate descriptions |
| Progress visibility | Current status can be checked at any time |
| Error messages | Clear errors with recovery suggestions |
## Constraints
### 1. File Structure Constraints
```
.workflow/.loop/{session-id}/
├── meta.json             # Written once, never modified
├── state.json            # Updated after every action
├── develop/
│   ├── progress.md       # Append-only, never deleted
│   ├── tasks.json        # Task status updates
│   └── changes.log       # NDJSON format, append-only
├── debug/
│   ├── understanding.md  # Append-only timeline
│   ├── hypotheses.json   # Hypothesis status updates
│   └── debug.log         # NDJSON format
└── validate/
    ├── validation.md     # Appended on every validation
    ├── test-results.json # Accumulated test results
    └── coverage.json     # Latest coverage
```
### 2. Naming Constraints
- Session ID: `LOOP-{slug}-{YYYY-MM-DD}`
- Task ID: `task-{NNN}` (three digits)
- Hypothesis ID: `H{N}` (letter plus number)
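Hypothetical helpers that produce IDs satisfying these constraints (illustrative, not part of the CCW codebase):

```javascript
// ID helpers matching the naming constraints above (illustrative only).
function sessionId(slug, date = new Date()) {
  return `LOOP-${slug}-${date.toISOString().slice(0, 10)}`; // LOOP-{slug}-{YYYY-MM-DD}
}

function taskId(n) {
  return `task-${String(n).padStart(3, '0')}`; // task-{NNN}, three digits
}

function hypothesisId(n) {
  return `H${n}`; // H{N}
}
```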
### 3. State Transition Constraints
```
pending → running → completed
                  → user_exit
                  → failed
```
Allowed transitions only: `pending→running`, `running→completed/user_exit/failed`
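The allowed transitions can be enforced with a small guard; the sketch below assumes exactly the states listed in this section.

```javascript
// Minimal guard for the transition table above.
const TRANSITIONS = {
  pending: ['running'],
  running: ['completed', 'user_exit', 'failed']
};

function canTransition(from, to) {
  return (TRANSITIONS[from] || []).includes(to);
}
```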
### 4. Error Limit Constraints
- Maximum error count: 3
- More than 3 errors → automatic termination
- Every error → recorded in state.errors[]
### 5. Iteration Limit Constraints
- Maximum iterations: 10 (warning)
- More than 10 iterations → warn the user, but do not force a stop
- Suggest splitting the task or taking a break
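Constraints 4 and 5 combine into one limit check; this sketch assumes `state.errors[]` and `state.iteration_count` as described above, with the iteration warning firing at the `iteration_count >= 10` threshold used in the Max Iterations scenario.

```javascript
// Sketch of the error and iteration limit checks (field names per this spec).
function checkLimits(state) {
  if ((state.errors || []).length > 3) {
    return { action: 'terminate', reason: 'more than 3 errors recorded' };
  }
  if (state.iteration_count >= 10) {
    return { action: 'warn', reason: 'consider splitting the task or taking a break' };
  }
  return { action: 'continue' };
}
```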
## Integration Requirements
### 1. Dashboard Integration
**Requirement**: Integrate seamlessly with the CCW Dashboard Loop Monitor.
**Specification**:
- Dashboard creates a Loop → invokes this Skill
- state.json → displayed live in the Dashboard
- Task lists sync bidirectionally
- State control buttons map to actions
### 2. Issue System Integration
**Requirement**: Completed work can be expanded into Issues.
**Specification**:
- Supported dimensions: test, enhance, refactor, doc
- Invoke `/issue:new "{summary} - {dimension}"`
- Context is filled in automatically
### 3. CLI Tool Integration
**Requirement**: Use the CCW CLI tools for analysis and implementation.
**Specification**:
- Task breakdown: `--rule planning-breakdown-task-steps`
- Code implementation: `--rule development-implement-feature`
- Root cause analysis: `--rule analysis-diagnose-bug-root-cause`
- Quality assessment: `--rule analysis-review-code-quality`
## Non-Functional Requirements
### Performance
- Session initialization: < 5s
- Action execution: < 30s (excluding CLI calls)
- State read/write: < 1s
### Reliability
- Corrupted state file: can be rebuilt from the other files
- CLI tool failure: degrade gracefully to manual mode
- Error retry: one automatic retry supported
### Maintainability
- Documented: every action has a clear description
- Modular: every action is independently testable
- Extensible: new actions are easy to add


@@ -1,175 +0,0 @@
# Progress Document Template
Standard template for the development progress document.
## Template Structure
```markdown
# Development Progress
**Session ID**: {{session_id}}
**Task**: {{task_description}}
**Started**: {{started_at}}
**Estimated Complexity**: {{complexity}}
---
## Task List
{{#each tasks}}
{{@index}}. [{{#if completed}}x{{else}} {{/if}}] {{description}}
{{/each}}
## Key Files
{{#each key_files}}
- `{{this}}`
{{/each}}
---
## Progress Timeline
{{#each iterations}}
### Iteration {{@index}} - {{task_name}} ({{timestamp}})
#### Task Details
- **ID**: {{task_id}}
- **Tool**: {{tool}}
- **Mode**: {{mode}}
#### Implementation Summary
{{summary}}
#### Files Changed
{{#each files_changed}}
- `{{this}}`
{{/each}}
#### Status: {{status}}
---
{{/each}}
## Current Statistics
| Metric | Value |
|--------|-------|
| Total Tasks | {{total_tasks}} |
| Completed | {{completed_tasks}} |
| In Progress | {{in_progress_tasks}} |
| Pending | {{pending_tasks}} |
| Progress | {{progress_percentage}}% |
---
## Next Steps
{{#each next_steps}}
- [ ] {{this}}
{{/each}}
```
## Template Variables
| Variable | Type | Source | Description |
|----------|------|--------|-------------|
| `session_id` | string | state.session_id | Session ID |
| `task_description` | string | state.task_description | Task description |
| `started_at` | string | state.created_at | Start time |
| `complexity` | string | state.context.estimated_complexity | Estimated complexity |
| `tasks` | array | state.develop.tasks | Task list |
| `key_files` | array | state.context.key_files | Key files |
| `iterations` | array | parsed from file | Iteration history |
| `total_tasks` | number | state.develop.total_count | Total task count |
| `completed_tasks` | number | state.develop.completed_count | Completed count |
## Usage Example
```javascript
const progressTemplate = Read('.claude/skills/ccw-loop/templates/progress-template.md')
function renderProgress(state) {
  let content = progressTemplate
  // Replace simple variables
  content = content.replace('{{session_id}}', state.session_id)
  content = content.replace('{{task_description}}', state.task_description)
  content = content.replace('{{started_at}}', state.created_at)
  content = content.replace('{{complexity}}', state.context?.estimated_complexity || 'unknown')
  // Expand the whole {{#each tasks}}...{{/each}} block into a rendered task list
  const taskList = state.develop.tasks.map((t, i) => {
    const checkbox = t.status === 'completed' ? 'x' : ' '
    return `${i + 1}. [${checkbox}] ${t.description}`
  }).join('\n')
  content = content.replace(/{{#each tasks}}[\s\S]*?{{\/each}}/, taskList)
  // Replace statistics (coerce numbers to strings)
  content = content.replace('{{total_tasks}}', String(state.develop.total_count))
  content = content.replace('{{completed_tasks}}', String(state.develop.completed_count))
  // ...
  return content
}
```
## Section Templates
### Task Entry
```markdown
### Iteration {{N}} - {{task_name}} ({{timestamp}})
#### Task Details
- **ID**: {{task_id}}
- **Tool**: {{tool}}
- **Mode**: {{mode}}
#### Implementation Summary
{{summary}}
#### Files Changed
{{#each files}}
- `{{this}}`
{{/each}}
#### Status: COMPLETED
---
```
### Statistics Table
```markdown
## Current Statistics
| Metric | Value |
|--------|-------|
| Total Tasks | {{total}} |
| Completed | {{completed}} |
| In Progress | {{in_progress}} |
| Pending | {{pending}} |
| Progress | {{percentage}}% |
```
### Next Steps
```markdown
## Next Steps
{{#if all_completed}}
- [ ] Run validation tests
- [ ] Code review
- [ ] Update documentation
{{else}}
- [ ] Complete remaining {{pending}} tasks
- [ ] Review completed work
{{/if}}
```


@@ -1,303 +0,0 @@
# Understanding Document Template
Standard template for the debugging understanding document, tracking how understanding evolves.
## Template Structure
```markdown
# Understanding Document
**Session ID**: {{session_id}}
**Bug Description**: {{bug_description}}
**Started**: {{started_at}}
---
## Exploration Timeline
{{#each iterations}}
### Iteration {{number}} - {{title}} ({{timestamp}})
{{#if is_exploration}}
#### Current Understanding
Based on bug description and initial code search:
- Error pattern: {{error_pattern}}
- Affected areas: {{affected_areas}}
- Initial hypothesis: {{initial_thoughts}}
#### Evidence from Code Search
{{#each search_results}}
**Keyword: "{{keyword}}"**
- Found in: {{files}}
- Key findings: {{insights}}
{{/each}}
{{/if}}
{{#if has_hypotheses}}
#### Hypotheses Generated (Gemini-Assisted)
{{#each hypotheses}}
**{{id}}** (Likelihood: {{likelihood}}): {{description}}
- Logging at: {{logging_point}}
- Testing: {{testable_condition}}
- Evidence to confirm: {{confirm_criteria}}
- Evidence to reject: {{reject_criteria}}
{{/each}}
**Gemini Insights**: {{gemini_insights}}
{{/if}}
{{#if is_analysis}}
#### Log Analysis Results
{{#each results}}
**{{id}}**: {{verdict}}
- Evidence: {{evidence}}
- Reasoning: {{reason}}
{{/each}}
#### Corrected Understanding
Previous misunderstandings identified and corrected:
{{#each corrections}}
- ~~{{wrong}}~~ → {{corrected}}
- Why wrong: {{reason}}
- Evidence: {{evidence}}
{{/each}}
#### New Insights
{{#each insights}}
- {{this}}
{{/each}}
#### Gemini Analysis
{{gemini_analysis}}
{{/if}}
{{#if root_cause_found}}
#### Root Cause Identified
**{{hypothesis_id}}**: {{description}}
Evidence supporting this conclusion:
{{supporting_evidence}}
{{else}}
#### Next Steps
{{next_steps}}
{{/if}}
---
{{/each}}
## Current Consolidated Understanding
### What We Know
{{#each valid_understandings}}
- {{this}}
{{/each}}
### What Was Disproven
{{#each disproven}}
- ~~{{assumption}}~~ (Evidence: {{evidence}})
{{/each}}
### Current Investigation Focus
{{current_focus}}
### Remaining Questions
{{#each questions}}
- {{this}}
{{/each}}
```
## Template Variables
| Variable | Type | Source | Description |
|----------|------|--------|-------------|
| `session_id` | string | state.session_id | Session ID |
| `bug_description` | string | state.debug.current_bug | Bug description |
| `iterations` | array | parsed from file | Iteration history |
| `hypotheses` | array | state.debug.hypotheses | Hypothesis list |
| `valid_understandings` | array | from Gemini analysis | Valid understandings |
| `disproven` | array | from hypothesis status | Disproven hypotheses |
## Section Templates
### Exploration Section
```markdown
### Iteration {{N}} - Initial Exploration ({{timestamp}})
#### Current Understanding
Based on bug description and initial code search:
- Error pattern: {{pattern}}
- Affected areas: {{areas}}
- Initial hypothesis: {{thoughts}}
#### Evidence from Code Search
{{#each search_results}}
**Keyword: "{{keyword}}"**
- Found in: {{files}}
- Key findings: {{insights}}
{{/each}}
#### Next Steps
- Generate testable hypotheses
- Add instrumentation
- Await reproduction
```
### Hypothesis Section
```markdown
#### Hypotheses Generated (Gemini-Assisted)
| ID | Description | Likelihood | Status |
|----|-------------|------------|--------|
{{#each hypotheses}}
| {{id}} | {{description}} | {{likelihood}} | {{status}} |
{{/each}}
**Details:**
{{#each hypotheses}}
**{{id}}**: {{description}}
- Logging at: `{{logging_point}}`
- Testing: {{testable_condition}}
- Confirm: {{evidence_criteria.confirm}}
- Reject: {{evidence_criteria.reject}}
{{/each}}
```
### Analysis Section
```markdown
### Iteration {{N}} - Evidence Analysis ({{timestamp}})
#### Log Analysis Results
{{#each results}}
**{{id}}**: **{{verdict}}**
- Evidence: \`{{evidence}}\`
- Reasoning: {{reason}}
{{/each}}
#### Corrected Understanding
| Previous Assumption | Corrected To | Reason |
|---------------------|--------------|--------|
{{#each corrections}}
| ~~{{wrong}}~~ | {{corrected}} | {{reason}} |
{{/each}}
#### Gemini Analysis
{{gemini_analysis}}
```
### Consolidated Understanding Section
```markdown
## Current Consolidated Understanding
### What We Know
{{#each valid}}
- {{this}}
{{/each}}
### What Was Disproven
{{#each disproven}}
- ~~{{this.assumption}}~~ (Evidence: {{this.evidence}})
{{/each}}
### Current Investigation Focus
{{focus}}
### Remaining Questions
{{#each questions}}
- {{this}}
{{/each}}
```
### Resolution Section
```markdown
### Resolution ({{timestamp}})
#### Fix Applied
- Modified files: {{files}}
- Fix description: {{description}}
- Root cause addressed: {{root_cause}}
#### Verification Results
{{verification}}
#### Lessons Learned
{{#each lessons}}
{{@index}}. {{this}}
{{/each}}
#### Key Insights for Future
{{#each insights}}
- {{this}}
{{/each}}
```
## Consolidation Rules
Follow these rules when updating "Current Consolidated Understanding":
1. **Compress disproven items**: Move them to "What Was Disproven", keeping only a one-line summary
2. **Keep valid insights**: Promote confirmed findings to "What We Know"
3. **Avoid duplication**: Do not repeat timeline details in the consolidated section
4. **Focus on current state**: Describe what is known now, not the process of getting there
5. **Preserve key corrections**: Keep important wrong→right transitions for future learning
## Anti-Patterns
**Bad example (redundant)**:
```markdown
## Current Consolidated Understanding
In iteration 1 we thought X, but in iteration 2 we found Y, then in iteration 3...
Also we checked A and found B, and then we checked C...
```
**Good example (concise)**:
```markdown
## Current Consolidated Understanding
### What We Know
- Error occurs during runtime update, not initialization
- Config value is None (not missing key)
### What Was Disproven
- ~~Initialization error~~ (Timing evidence)
- ~~Missing key hypothesis~~ (Key exists)
### Current Investigation Focus
Why is config value None during update?
```


@@ -1,258 +0,0 @@
# Validation Report Template
Standard template for the validation report.
## Template Structure
```markdown
# Validation Report
**Session ID**: {{session_id}}
**Task**: {{task_description}}
**Validated**: {{timestamp}}
---
## Iteration {{iteration}} - Validation Run
### Test Execution Summary
| Metric | Value |
|--------|-------|
| Total Tests | {{total_tests}} |
| Passed | {{passed_tests}} |
| Failed | {{failed_tests}} |
| Skipped | {{skipped_tests}} |
| Duration | {{duration}}ms |
| **Pass Rate** | **{{pass_rate}}%** |
### Coverage Report
{{#if has_coverage}}
| File | Statements | Branches | Functions | Lines |
|------|------------|----------|-----------|-------|
{{#each coverage_files}}
| {{path}} | {{statements}}% | {{branches}}% | {{functions}}% | {{lines}}% |
{{/each}}
**Overall Coverage**: {{overall_coverage}}%
{{else}}
_No coverage data available_
{{/if}}
### Failed Tests
{{#if has_failures}}
{{#each failures}}
#### {{test_name}}
- **Suite**: {{suite}}
- **Error**: {{error_message}}
- **Stack**:
\`\`\`
{{stack_trace}}
\`\`\`
{{/each}}
{{else}}
_All tests passed_
{{/if}}
### Gemini Quality Analysis
{{gemini_analysis}}
### Recommendations
{{#each recommendations}}
- {{this}}
{{/each}}
---
## Validation Decision
**Result**: {{#if passed}}✅ PASS{{else}}❌ FAIL{{/if}}
**Rationale**: {{rationale}}
{{#if not_passed}}
### Next Actions
1. Review failed tests
2. Debug failures using action-debug-with-file
3. Fix issues and re-run validation
{{else}}
### Next Actions
1. Consider code review
2. Prepare for deployment
3. Update documentation
{{/if}}
```
## Template Variables
| Variable | Type | Source | Description |
|----------|------|--------|-------------|
| `session_id` | string | state.session_id | Session ID |
| `task_description` | string | state.task_description | Task description |
| `timestamp` | string | current time | Validation time |
| `iteration` | number | computed from file | Validation iteration count |
| `total_tests` | number | test output | Total test count |
| `passed_tests` | number | test output | Passed count |
| `failed_tests` | number | test output | Failed count |
| `pass_rate` | number | computed | Pass rate |
| `coverage_files` | array | coverage report | Per-file coverage |
| `failures` | array | test output | Failed test details |
| `gemini_analysis` | string | Gemini CLI | Quality analysis |
| `recommendations` | array | Gemini CLI | Recommendation list |
## Section Templates
### Test Summary
```markdown
### Test Execution Summary
| Metric | Value |
|--------|-------|
| Total Tests | {{total}} |
| Passed | {{passed}} |
| Failed | {{failed}} |
| Skipped | {{skipped}} |
| Duration | {{duration}}ms |
| **Pass Rate** | **{{rate}}%** |
```
### Coverage Table
```markdown
### Coverage Report
| File | Statements | Branches | Functions | Lines |
|------|------------|----------|-----------|-------|
{{#each files}}
| `{{path}}` | {{statements}}% | {{branches}}% | {{functions}}% | {{lines}}% |
{{/each}}
**Overall Coverage**: {{overall}}%
**Coverage Thresholds**:
- ✅ Good: ≥ 80%
- ⚠️ Warning: 60-79%
- ❌ Poor: < 60%
```
### Failed Test Details
```markdown
### Failed Tests
{{#each failures}}
#### ❌ {{test_name}}
| Field | Value |
|-------|-------|
| Suite | {{suite}} |
| Error | {{error_message}} |
| Duration | {{duration}}ms |
**Stack Trace**:
\`\`\`
{{stack_trace}}
\`\`\`
**Possible Causes**:
{{#each possible_causes}}
- {{this}}
{{/each}}
---
{{/each}}
```
### Quality Analysis
```markdown
### Gemini Quality Analysis
#### Code Quality Assessment
| Dimension | Score | Status |
|-----------|-------|--------|
| Correctness | {{correctness}}/10 | {{correctness_status}} |
| Completeness | {{completeness}}/10 | {{completeness_status}} |
| Reliability | {{reliability}}/10 | {{reliability_status}} |
| Maintainability | {{maintainability}}/10 | {{maintainability_status}} |
#### Key Findings
{{#each findings}}
- **{{severity}}**: {{description}}
{{/each}}
#### Recommendations
{{#each recommendations}}
{{@index}}. {{this}}
{{/each}}
```
### Decision Section
```markdown
## Validation Decision
**Result**: {{#if passed}}✅ PASS{{else}}❌ FAIL{{/if}}
**Rationale**:
{{rationale}}
**Confidence Level**: {{confidence}}
### Decision Matrix
| Criteria | Status | Weight | Score |
|----------|--------|--------|-------|
| All tests pass | {{tests_pass}} | 40% | {{tests_score}} |
| Coverage ≥ 80% | {{coverage_pass}} | 30% | {{coverage_score}} |
| No critical issues | {{no_critical}} | 20% | {{critical_score}} |
| Quality analysis pass | {{quality_pass}} | 10% | {{quality_score}} |
| **Total** | | 100% | **{{total_score}}** |
**Threshold**: 70% to pass
### Next Actions
{{#if passed}}
1. ✅ Code review (recommended)
2. ✅ Update documentation
3. ✅ Prepare for deployment
{{else}}
1. ❌ Review failed tests
2. ❌ Debug failures
3. ❌ Fix issues and re-run
{{/if}}
```
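The decision matrix above (40/30/20/10 weights, 70% threshold) can be computed mechanically. In this sketch the criteria field names are illustrative, taken from the placeholders in the template rather than from any actual CCW code.

```javascript
// Weighted pass/fail decision per the matrix above (illustrative field names).
const WEIGHTS = { tests_pass: 0.4, coverage_pass: 0.3, no_critical: 0.2, quality_pass: 0.1 };

function validationScore(criteria) {
  let score = 0;
  for (const [key, weight] of Object.entries(WEIGHTS)) {
    if (criteria[key]) score += weight;
  }
  // Threshold: 70% to pass
  return { score: Math.round(score * 100), passed: score >= 0.7 };
}
```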
## Historical Comparison
```markdown
## Validation History
| Iteration | Date | Pass Rate | Coverage | Status |
|-----------|------|-----------|----------|--------|
{{#each history}}
| {{iteration}} | {{date}} | {{pass_rate}}% | {{coverage}}% | {{status}} |
{{/each}}
### Trend Analysis
{{#if improving}}
📈 **Improving**: Pass rate increased from {{previous_rate}}% to {{current_rate}}%
{{else if declining}}
📉 **Declining**: Pass rate decreased from {{previous_rate}}% to {{current_rate}}%
{{else}}
➡️ **Stable**: Pass rate remains at {{current_rate}}%
{{/if}}
```


@@ -0,0 +1,583 @@
---
name: flow-coordinator
description: Template-driven workflow coordinator with minimal state tracking. Executes command chains from workflow templates OR unified PromptTemplate workflows. Supports slash-command and DAG-based execution. Triggers on "flow-coordinator", "workflow template", "orchestrate".
allowed-tools: Task, AskUserQuestion, Read, Write, Bash, Glob, Grep
---
# Flow Coordinator
Lightweight workflow coordinator supporting two workflow formats:
1. **Legacy Templates**: Command chains with slash-command execution
2. **Unified Workflows**: DAG-based PromptTemplate nodes (spec: `spec/unified-workflow-spec.md`)
## Specification Reference
- **Unified Workflow Spec**: @spec/unified-workflow-spec.md
- **Demo Workflow**: `ccw/data/flows/demo-unified-workflow.json`
## Architecture
```
User Task → Detect Format → Select Workflow → Init Status → Execute → Complete
│ │
├─ Legacy Template │
│ └─ Sequential cmd execution │
│ │
└─ Unified Workflow │
└─ DAG traversal with contextRefs │
└──────────────── Resume (from status.json) ──────────────┘
Execution Modes:
├─ analysis → Read-only, CLI --mode analysis
├─ write → File changes, CLI --mode write
├─ mainprocess → Blocking, synchronous
└─ async → Background, ccw cli
```
## Core Concepts
**Dual Format Support**:
- Legacy: `templates/*.json` with `cmd`, `args`, `execution`
- Unified: `ccw/data/flows/*.json` with `nodes`, `edges`, `contextRefs`
**Unified PromptTemplate Model**: All workflow steps are natural language instructions with:
- `instruction`: What to execute (natural language)
- `slashCommand`: Optional slash command name (e.g., "workflow:plan")
- `slashArgs`: Optional arguments for slash command (supports {{variable}})
- `outputName`: Name for output reference
- `contextRefs`: References to previous step outputs
- `tool`: Optional CLI tool (gemini/qwen/codex/claude)
- `mode`: Execution mode (analysis/write/mainprocess/async)
**DAG Execution**: Unified workflows execute as directed acyclic graphs with parallel branches and conditional edges.
**Dynamic Discovery**: Both formats discovered at runtime via Glob.
---
## Execution Flow
```javascript
async function execute(task) {
// 1. Discover and select template
const templates = await discoverTemplates();
const template = await selectTemplate(templates);
// 2. Init status
const sessionId = `fc-${timestamp()}`;
const statusPath = `.workflow/.flow-coordinator/${sessionId}/status.json`;
const status = initStatus(template, task);
write(statusPath, JSON.stringify(status, null, 2));
// 3. Execute steps based on execution config
await executeSteps(status, statusPath);
}
async function executeSteps(status, statusPath) {
for (let i = status.current; i < status.steps.length; i++) {
const step = status.steps[i];
status.current = i;
// Execute based on step mode (all steps use slash-command type)
const execConfig = step.execution || { type: 'slash-command', mode: 'mainprocess' };
if (execConfig.mode === 'async') {
// Async execution - stop and wait for hook callback
await executeSlashCommandAsync(step, status, statusPath);
break;
} else {
// Mainprocess execution - continue immediately
await executeSlashCommandSync(step, status);
step.status = 'done';
write(statusPath, JSON.stringify(status, null, 2));
}
}
// All steps complete
if (status.current >= status.steps.length) {
status.complete = true;
write(statusPath, JSON.stringify(status, null, 2));
}
}
```
---
## Unified Workflow Execution
For workflows using the unified PromptTemplate format (`ccw/data/flows/*.json`):
```javascript
async function executeUnifiedWorkflow(workflow, task) {
// 1. Initialize execution state
const sessionId = `ufc-${timestamp()}`;
const statusPath = `.workflow/.flow-coordinator/${sessionId}/status.json`;
const state = {
id: sessionId,
workflow: workflow.id,
goal: task,
nodeStates: {}, // nodeId -> { status, result, error }
outputs: {}, // outputName -> result
complete: false
};
// 2. Topological sort for execution order
const executionOrder = topologicalSort(workflow.nodes, workflow.edges);
// 3. Execute nodes respecting DAG dependencies
await executeDAG(workflow, executionOrder, state, statusPath);
}
async function executeDAG(workflow, order, state, statusPath) {
for (const nodeId of order) {
const node = workflow.nodes.find(n => n.id === nodeId);
const data = node.data;
// Check if all dependencies are satisfied
if (!areDependenciesSatisfied(nodeId, workflow.edges, state)) {
continue; // Will be executed when dependencies complete
}
// Build instruction from slashCommand or raw instruction
let instruction = buildNodeInstruction(data, state.outputs);
// Execute based on mode
state.nodeStates[nodeId] = { status: 'running' };
write(statusPath, JSON.stringify(state, null, 2));
const result = await executeNode(instruction, data.tool, data.mode);
// Store output for downstream nodes
state.nodeStates[nodeId] = { status: 'completed', result };
if (data.outputName) {
state.outputs[data.outputName] = result;
}
write(statusPath, JSON.stringify(state, null, 2));
}
state.complete = true;
write(statusPath, JSON.stringify(state, null, 2));
}
/**
* Build node instruction from slashCommand or raw instruction
* Handles slashCommand/slashArgs fields from frontend orchestrator
*/
function buildNodeInstruction(data, outputs) {
const refs = data.contextRefs || [];
// If slashCommand is set, construct instruction from it
if (data.slashCommand) {
// Resolve variables in slashArgs
const args = data.slashArgs
? resolveContextRefs(data.slashArgs, refs, outputs)
: '';
// Build slash command instruction
let instruction = `/${data.slashCommand}${args ? ' ' + args : ''}`;
// Append additional instruction if provided
if (data.instruction) {
const additionalInstruction = resolveContextRefs(data.instruction, refs, outputs);
instruction = `${instruction}\n\n${additionalInstruction}`;
}
return instruction;
}
// Fallback: use raw instruction with context refs resolved
return resolveContextRefs(data.instruction || '', refs, outputs);
}
function resolveContextRefs(instruction, refs, outputs) {
  let resolved = instruction;
  for (const ref of refs) {
    const value = outputs[ref];
    const placeholder = `{{${ref}}}`;
    // split/join replaces every occurrence without having to regex-escape {{...}}
    resolved = resolved.split(placeholder).join(
      typeof value === 'object' ? JSON.stringify(value) : String(value));
  }
  return resolved;
}
async function executeNode(instruction, tool, mode) {
// Build CLI command based on tool and mode
const cliTool = tool || 'gemini';
const cliMode = mode === 'write' ? 'write' : 'analysis';
if (mode === 'async') {
// Background execution
return Bash(
`ccw cli -p "${escapePrompt(instruction)}" --tool ${cliTool} --mode ${cliMode}`,
{ run_in_background: true }
);
} else {
// Synchronous execution
return Bash(
`ccw cli -p "${escapePrompt(instruction)}" --tool ${cliTool} --mode ${cliMode}`
);
}
}
```
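The `topologicalSort` helper used above is not shown; one possible implementation is Kahn's algorithm, assuming nodes carry an `id` and edges carry `source`/`target` as in the unified format.

```javascript
// Kahn's algorithm: order DAG nodes so every node follows its dependencies.
function topologicalSort(nodes, edges) {
  const indegree = new Map(nodes.map(n => [n.id, 0]));
  for (const e of edges) indegree.set(e.target, (indegree.get(e.target) || 0) + 1);
  // Start from nodes with no incoming edges
  const queue = nodes.filter(n => indegree.get(n.id) === 0).map(n => n.id);
  const order = [];
  while (queue.length) {
    const id = queue.shift();
    order.push(id);
    for (const e of edges.filter(e => e.source === id)) {
      indegree.set(e.target, indegree.get(e.target) - 1);
      if (indegree.get(e.target) === 0) queue.push(e.target);
    }
  }
  // If some nodes were never reachable at indegree 0, the graph has a cycle
  if (order.length !== nodes.length) throw new Error('Workflow contains a cycle');
  return order;
}
```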
### Unified Workflow Discovery
```javascript
async function discoverUnifiedWorkflows() {
const files = Glob('*.json', { path: 'ccw/data/flows/' });
const workflows = [];
for (const file of files) {
const content = JSON.parse(Read(file));
// Detect unified format by checking for 'nodes' array
if (content.nodes && Array.isArray(content.nodes)) {
workflows.push({
id: content.id,
name: content.name,
description: content.description,
nodeCount: content.nodes.length,
format: 'unified',
file: file
});
}
}
return workflows;
}
```
### Format Detection
```javascript
function detectWorkflowFormat(content) {
if (content.nodes && content.edges) {
return 'unified'; // PromptTemplate DAG format
}
if (content.steps && content.steps[0]?.cmd) {
return 'legacy'; // Command chain format
}
throw new Error('Unknown workflow format');
}
```
---
## Legacy Template Discovery
**Dynamic query** - never hardcode template list:
```javascript
async function discoverTemplates() {
// Discover all JSON templates
const files = Glob('*.json', { path: 'templates/' });
// Parse each template
const templates = [];
for (const file of files) {
const content = JSON.parse(Read(file));
templates.push({
name: content.name,
description: content.description,
steps: content.steps.map(s => s.cmd).join(' → '),
file: file
});
}
return templates;
}
```
---
## Template Selection
User chooses from discovered templates:
```javascript
async function selectTemplate(templates) {
// Build options from discovered templates
const options = templates.slice(0, 4).map(t => ({
label: t.name,
description: t.steps
}));
const response = await AskUserQuestion({
questions: [{
question: 'Select workflow template:',
header: 'Template',
options: options,
multiSelect: false
}]
});
// Handle "Other" - show remaining templates or custom input
if (response.template === 'Other') {
return await selectFromRemainingTemplates(templates.slice(4));
}
return templates.find(t => t.name === response.template);
}
```
---
## Status Schema
**Creation**: Copy template JSON → Update `id`, `template`, `goal`, set all steps `status: "pending"`
**Location**: `.workflow/.flow-coordinator/{session-id}/status.json`
**Core Fields**:
- `id`: Session ID (fc-YYYYMMDD-HHMMSS)
- `template`: Template name
- `goal`: User task description
- `current`: Current step index
- `steps[]`: Step array from template (with runtime `status`, `session`, `taskId`)
- `complete`: All steps done?
**Step Status**: `pending` → `running` → `done` | `failed` | `skipped`
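The "Creation" note above (copy the template, stamp the session fields, mark every step pending) can be sketched as follows; field names follow the Core Fields list in this section.

```javascript
// Sketch of status creation from a template, per the Creation note above.
function initStatus(template, goal) {
  const ts = new Date().toISOString().slice(0, 19)
    .replace(/[-:]/g, '').replace('T', '-'); // fc-YYYYMMDD-HHMMSS
  return {
    id: `fc-${ts}`,
    template: template.name,
    goal,
    current: 0,
    steps: template.steps.map(s => ({ ...s, status: 'pending' })),
    complete: false
  };
}
```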
---
## Extended Template Schema
**Templates stored in**: `templates/*.json` (discovered at runtime via Glob)
**TemplateStep Fields**:
- `cmd`: Full command path (e.g., `/workflow:lite-plan`, `/workflow:execute`)
- `args?`: Arguments with `{{goal}}` and `{{prev}}` placeholders
- `unit?`: Minimum execution unit name (groups related commands)
- `optional?`: Can be skipped by user
- `execution`: Type and mode configuration
- `type`: Always `'slash-command'` (for all workflow commands)
- `mode`: `'mainprocess'` (blocking) or `'async'` (background)
- `contextHint?`: Natural language guidance for context assembly
**Template Example**:
```json
{
"name": "rapid",
"steps": [
{
"cmd": "/workflow:lite-plan",
"args": "\"{{goal}}\"",
"unit": "quick-implementation",
"execution": { "type": "slash-command", "mode": "mainprocess" },
"contextHint": "Create lightweight implementation plan"
},
{
"cmd": "/workflow:lite-execute",
"args": "--in-memory",
"unit": "quick-implementation",
"execution": { "type": "slash-command", "mode": "async" },
"contextHint": "Execute plan from previous step"
}
]
}
```
---
## Execution Implementation
### Mainprocess Mode (Blocking)
```javascript
async function executeSlashCommandSync(step, status) {
// Build command: /workflow:cmd -y args
const cmd = buildCommand(step, status);
const result = await SlashCommand({ command: cmd });
step.session = result.session_id;
step.status = 'done';
return result;
}
```
### Async Mode (Background)
```javascript
async function executeSlashCommandAsync(step, status, statusPath) {
// Build prompt: /workflow:cmd -y args + context
const prompt = buildCommandPrompt(step, status);
step.status = 'running';
write(statusPath, JSON.stringify(status, null, 2));
// Execute via ccw cli in background
const taskId = Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool claude --mode write`,
{ run_in_background: true }
).task_id;
step.taskId = taskId;
write(statusPath, JSON.stringify(status, null, 2));
console.log(`Executing: ${step.cmd} (async)`);
console.log(`Resume: /flow-coordinator --resume ${status.id}`);
}
```
---
## Prompt Building
Prompts are built in format: `/workflow:cmd -y args` + context
```javascript
function buildCommandPrompt(step, status) {
// step.cmd already contains full path: /workflow:lite-plan, /workflow:execute, etc.
let prompt = `${step.cmd} -y`;
// Add arguments (with placeholder replacement)
if (step.args) {
const args = step.args
.replace('{{goal}}', status.goal)
.replace('{{prev}}', getPreviousSessionId(status));
prompt += ` ${args}`;
}
// Add context based on contextHint
if (step.contextHint) {
const context = buildContextFromHint(step.contextHint, status);
prompt += `\n\nContext:\n${context}`;
} else {
// Default context: previous session IDs
const previousContext = collectPreviousResults(status);
if (previousContext) {
prompt += `\n\nPrevious results:\n${previousContext}`;
}
}
return prompt;
}
function buildContextFromHint(hint, status) {
// Parse contextHint instruction and build context accordingly
// Examples:
// "Summarize IMPL_PLAN.md" → read and summarize plan
// "List test coverage gaps" → analyze previous test results
// "Pass session ID" → just return session reference
return parseAndBuildContext(hint, status);
}
```
### Example Prompt Output
```
/workflow:lite-plan -y "Implement user registration"
Context:
Task: Implement user registration
Previous results:
- None (first step)
```
```
/workflow:execute -y --in-memory
Context:
Task: Implement user registration
Previous results:
- lite-plan: WFS-plan-20250130 (planning-context.md)
```
---
## User Interaction
### Step 1: Select Template
```
Select workflow template:
○ rapid lite-plan → lite-execute → test-cycle-execute
○ coupled plan → plan-verify → execute → review → test
○ bugfix lite-fix → lite-execute → test-cycle-execute
○ tdd tdd-plan → execute → tdd-verify
○ Other (more templates or custom)
```
### Step 2: Review Execution Plan
```
Template: coupled
Steps:
1. /workflow:plan (slash-command mainprocess)
2. /workflow:plan-verify (slash-command mainprocess)
3. /workflow:execute (slash-command async)
4. /workflow:review-session-cycle (slash-command mainprocess)
5. /workflow:review-cycle-fix (slash-command mainprocess)
6. /workflow:test-fix-gen (slash-command mainprocess)
7. /workflow:test-cycle-execute (slash-command async)
Proceed? [Confirm / Cancel]
```
---
## Resume Capability
```javascript
async function resume(sessionId) {
const statusPath = `.workflow/.flow-coordinator/${sessionId}/status.json`;
const status = JSON.parse(Read(statusPath));
// Find first incomplete step
status.current = status.steps.findIndex(s => s.status !== 'done');
if (status.current === -1) {
console.log('All steps complete');
return;
}
// Continue executing steps
await executeSteps(status, statusPath);
}
```
---
## Available Templates
Templates discovered from `templates/*.json`:
| Template | Use Case | Steps |
|----------|----------|-------|
| rapid | Simple feature | /workflow:lite-plan → /workflow:lite-execute → /workflow:test-cycle-execute |
| coupled | Complex feature | /workflow:plan → /workflow:plan-verify → /workflow:execute → /workflow:review-session-cycle → /workflow:test-fix-gen |
| bugfix | Bug fix | /workflow:lite-fix → /workflow:lite-execute → /workflow:test-cycle-execute |
| tdd | Test-driven | /workflow:tdd-plan → /workflow:execute → /workflow:tdd-verify |
| test-fix | Fix failing tests | /workflow:test-fix-gen → /workflow:test-cycle-execute |
| brainstorm | Exploration | /workflow:brainstorm-with-file |
| debug | Debug with docs | /workflow:debug-with-file |
| analyze | Collaborative analysis | /workflow:analyze-with-file |
| issue | Issue workflow | /workflow:issue:plan → /workflow:issue:queue → /workflow:issue:execute |
---
## Design Principles
1. **Minimal fields**: Only essential tracking data
2. **Flat structure**: No nested objects beyond steps array
3. **Step-level execution**: Each step defines how it's executed
4. **Resumable**: Any step can be resumed from status
5. **Human readable**: Clear JSON format
---
## Reference Documents
| Document | Purpose |
|----------|---------|
| spec/unified-workflow-spec.md | Unified PromptTemplate workflow specification |
| ccw/data/flows/*.json | Unified workflows (DAG format, dynamic discovery) |
| templates/*.json | Legacy workflow templates (command chain format) |
### Demo Workflows (Unified Format)
| File | Description | Nodes |
|------|-------------|-------|
| `demo-unified-workflow.json` | Auth implementation | 7 nodes: Analyze → Plan → Implement → Review → Tests → Report |
| `parallel-ci-workflow.json` | CI/CD pipeline | 8 nodes: Parallel checks → Merge → Conditional notify |
| `simple-analysis-workflow.json` | Analysis pipeline | 3 nodes: Explore → Analyze → Report |


@@ -0,0 +1,332 @@
# Unified Workflow Specification v1.0
> Standard format for PromptTemplate-based workflow definitions
## Overview
This specification defines the JSON schema for unified workflows where **all nodes are prompt templates** with natural language instructions. This replaces the previous multi-type node system with a single, flexible model.
**Design Philosophy**: Every workflow step is a natural language instruction that can optionally specify execution tool and mode. Data flows through named outputs referenced by subsequent steps.
---
## Schema Definition
### Root Object: `Flow`
```typescript
interface Flow {
id: string; // Unique identifier (kebab-case)
name: string; // Display name
description?: string; // Human-readable description
version: number; // Schema version (currently 1)
created_at: string; // ISO 8601 timestamp
updated_at: string; // ISO 8601 timestamp
nodes: FlowNode[]; // Workflow steps
edges: FlowEdge[]; // Step connections (DAG)
variables: Record<string, unknown>; // Global workflow variables
metadata: FlowMetadata; // Classification and source info
}
```
### FlowNode
```typescript
interface FlowNode {
id: string; // Unique node ID
type: 'prompt-template'; // Always 'prompt-template'
position: { x: number; y: number }; // Canvas position
data: PromptTemplateNodeData; // Node configuration
}
```
### PromptTemplateNodeData
```typescript
interface PromptTemplateNodeData {
// === Required ===
label: string; // Display label in editor
instruction: string; // Natural language instruction
// === Slash Command (optional, overrides instruction) ===
slashCommand?: string; // Slash command name (e.g., "workflow:plan")
slashArgs?: string; // Arguments for slash command (supports {{variable}})
// === Data Flow ===
outputName?: string; // Name for output reference
contextRefs?: string[]; // References to previous outputs
// === Execution Config ===
tool?: CliTool; // 'gemini' | 'qwen' | 'codex' | 'claude'
mode?: ExecutionMode; // 'analysis' | 'write' | 'mainprocess' | 'async'
// === Runtime State (populated during execution) ===
executionStatus?: ExecutionStatus;
executionError?: string;
executionResult?: unknown;
}
```
**Instruction Resolution Priority**:
1. If `slashCommand` is set: `/{slashCommand} {slashArgs}` + optional `instruction` as context
2. Otherwise: `instruction` directly
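This priority is small enough to sketch directly; `buildInstruction` is an illustrative name, not part of the spec:

```javascript
// Resolve the effective instruction for a PromptTemplateNodeData value:
// a slash command wins, with any instruction appended as extra context.
function buildInstruction(data) {
  if (data.slashCommand) {
    const args = data.slashArgs ? ` ${data.slashArgs}` : '';
    const context = data.instruction ? `\n\nContext: ${data.instruction}` : '';
    return `/${data.slashCommand}${args}${context}`;
  }
  return data.instruction;
}
```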
### FlowEdge
```typescript
interface FlowEdge {
id: string; // Unique edge ID
source: string; // Source node ID
target: string; // Target node ID
type?: string; // Edge type (default: 'default')
data?: {
label?: string; // Edge label (e.g., 'parallel')
condition?: string; // Conditional expression
};
}
```
### FlowMetadata
```typescript
interface FlowMetadata {
source?: 'template' | 'custom' | 'imported';
tags?: string[];
category?: string;
}
```
---
## Instruction Syntax
### Context References
Use `{{outputName}}` syntax to reference outputs from previous steps:
```
Analyze {{requirements_analysis}} and create implementation plan.
```
### Nested Property Access
```
If {{ci_report.status}} === 'failed', stop execution.
```
### Multiple References
```
Combine {{lint_result}}, {{typecheck_result}}, and {{test_result}} into report.
```
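A minimal interpolation sketch covering both plain and nested references (`interpolate` is an illustrative helper, not part of the spec; unknown references are left untouched rather than erased):

```javascript
// Replace {{name}} and {{name.prop}} references with values from
// the outputs map produced by earlier steps.
function interpolate(instruction, outputs) {
  return instruction.replace(/\{\{([\w.]+)\}\}/g, (match, path) => {
    const value = path.split('.').reduce(
      (obj, key) => (obj == null ? undefined : obj[key]),
      outputs
    );
    return value === undefined ? match : String(value);
  });
}
```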
---
## Execution Modes
| Mode | Behavior | Use Case |
|------|----------|----------|
| `analysis` | Read-only, no file changes | Code review, exploration |
| `write` | Can create/modify/delete files | Implementation, fixes |
| `mainprocess` | Blocking, synchronous | Interactive steps |
| `async` | Background, non-blocking | Long-running tasks |
---
## DAG Execution Semantics
### Sequential Execution
Nodes with a single input edge execute after their predecessor completes.
```
[A] ──▶ [B] ──▶ [C]
```
### Parallel Execution
Multiple edges from the same source trigger parallel execution:
```
┌──▶ [B]
[A] ──┤
└──▶ [C]
```
### Merge Point
A node with multiple input edges waits for all of its predecessors:
```
[B] ──┐
├──▶ [D]
[C] ──┘
```
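All three patterns reduce to standard in-degree tracking: a node may run once every predecessor has finished. A sketch of the readiness check (the function name is illustrative):

```javascript
// A node is ready when it is not yet done and every incoming edge's
// source node has finished. With no incoming edges, every() is
// vacuously true, so root nodes are ready immediately.
function readyNodes(nodes, edges, done) {
  return nodes
    .filter(n => !done.has(n.id))
    .filter(n =>
      edges
        .filter(e => e.target === n.id)
        .every(e => done.has(e.source))
    )
    .map(n => n.id);
}
```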
### Conditional Branching
Edge `data.condition` specifies branch condition:
```json
{
"id": "e-decision-success",
"source": "decision",
"target": "notify-success",
"data": { "condition": "decision.result === 'pass'" }
}
```
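A deliberately restricted sketch of evaluating such conditions against the outputs map. It supports only `path === 'literal'` comparisons, since running `eval` on expressions read from workflow files would be unsafe; the helper name is illustrative:

```javascript
// Evaluate a condition like "decision.result === 'pass'" against
// the outputs map. Unsupported expressions evaluate to false.
function evalCondition(condition, outputs) {
  const m = condition.match(/^([\w.]+)\s*===\s*'([^']*)'$/);
  if (!m) return false;
  const actual = m[1].split('.').reduce(
    (obj, key) => (obj == null ? undefined : obj[key]),
    outputs
  );
  return actual === m[2];
}
```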
---
## Example: Minimal Workflow
```json
{
"id": "simple-analysis",
"name": "Simple Analysis",
"version": 1,
"created_at": "2026-02-04T00:00:00.000Z",
"updated_at": "2026-02-04T00:00:00.000Z",
"nodes": [
{
"id": "analyze",
"type": "prompt-template",
"position": { "x": 100, "y": 100 },
"data": {
"label": "Analyze Code",
"instruction": "Analyze the authentication module for security issues.",
"outputName": "analysis",
"tool": "gemini",
"mode": "analysis"
}
},
{
"id": "report",
"type": "prompt-template",
"position": { "x": 100, "y": 250 },
"data": {
"label": "Generate Report",
"instruction": "Based on {{analysis}}, generate a security report with recommendations.",
"outputName": "report",
"contextRefs": ["analysis"]
}
}
],
"edges": [
{ "id": "e1", "source": "analyze", "target": "report" }
],
"variables": {},
"metadata": { "source": "custom", "tags": ["security"] }
}
```
---
## Example: Parallel with Merge
```json
{
"nodes": [
{
"id": "start",
"type": "prompt-template",
"position": { "x": 200, "y": 50 },
"data": {
"label": "Prepare",
"instruction": "Set up build environment",
"outputName": "env"
}
},
{
"id": "lint",
"type": "prompt-template",
"position": { "x": 100, "y": 200 },
"data": {
"label": "Lint",
"instruction": "Run linter checks",
"outputName": "lint_result",
"tool": "codex",
"mode": "analysis",
"contextRefs": ["env"]
}
},
{
"id": "test",
"type": "prompt-template",
"position": { "x": 300, "y": 200 },
"data": {
"label": "Test",
"instruction": "Run unit tests",
"outputName": "test_result",
"tool": "codex",
"mode": "analysis",
"contextRefs": ["env"]
}
},
{
"id": "merge",
"type": "prompt-template",
"position": { "x": 200, "y": 350 },
"data": {
"label": "Merge Results",
"instruction": "Combine {{lint_result}} and {{test_result}} into CI report",
"outputName": "ci_report",
"contextRefs": ["lint_result", "test_result"]
}
}
],
"edges": [
{ "id": "e1", "source": "start", "target": "lint", "data": { "label": "parallel" } },
{ "id": "e2", "source": "start", "target": "test", "data": { "label": "parallel" } },
{ "id": "e3", "source": "lint", "target": "merge" },
{ "id": "e4", "source": "test", "target": "merge" }
]
}
```
---
## Migration from Old Format
### Old Template Step
```json
{
"cmd": "/workflow:lite-plan",
"args": "\"{{goal}}\"",
"execution": { "type": "slash-command", "mode": "mainprocess" }
}
```
### New PromptTemplate Node
```json
{
"id": "plan",
"type": "prompt-template",
"data": {
"label": "Create Plan",
"instruction": "Execute /workflow:lite-plan for: {{goal}}",
"outputName": "plan_result",
"mode": "mainprocess"
}
}
```
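The mapping above can be mechanized. A sketch of a per-step converter, where the `id`, `position`, and `outputName` derivations are illustrative choices rather than part of the spec:

```javascript
// Convert one old template step into a PromptTemplate node.
function convertStep(step, index) {
  const name = step.cmd.split(':').pop(); // "/workflow:lite-plan" -> "lite-plan"
  return {
    id: name,
    type: 'prompt-template',
    position: { x: 100, y: 100 + index * 150 }, // simple vertical layout
    data: {
      label: name,
      instruction: `Execute ${step.cmd}${step.args ? ' for: ' + step.args : ''}`,
      outputName: `${name.replace(/-/g, '_')}_result`,
      mode: step.execution ? step.execution.mode : undefined
    }
  };
}
```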
---
## Validation Rules
1. **Unique IDs**: All node and edge IDs must be unique within the flow
2. **Valid References**: `contextRefs` must reference existing `outputName` values
3. **DAG Structure**: No circular dependencies allowed
4. **Required Fields**: `id`, `name`, `version`, `nodes`, `edges` are required
5. **Node Type**: All nodes must have `type: 'prompt-template'`
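Rules 1, 2, 3, and 5 can be checked mechanically; a sketch using Kahn's algorithm for the cycle check (`validateFlow` is an illustrative name, and rule 4 is assumed to be handled by schema validation):

```javascript
// Check unique ids, valid contextRefs, acyclic edges, and node type.
function validateFlow(flow) {
  const errors = [];
  const ids = flow.nodes.map(n => n.id).concat(flow.edges.map(e => e.id));
  if (new Set(ids).size !== ids.length) errors.push('duplicate ids');
  const outputs = new Set(flow.nodes.map(n => n.data.outputName).filter(Boolean));
  for (const n of flow.nodes) {
    if (n.type !== 'prompt-template') errors.push(`bad type on ${n.id}`);
    for (const ref of n.data.contextRefs || []) {
      if (!outputs.has(ref)) errors.push(`unknown ref ${ref} on ${n.id}`);
    }
  }
  // Kahn's algorithm: if every node can be removed, the graph is a DAG.
  const indeg = new Map(flow.nodes.map(n => [n.id, 0]));
  flow.edges.forEach(e => indeg.set(e.target, (indeg.get(e.target) || 0) + 1));
  const queue = [...indeg].filter(([, d]) => d === 0).map(([id]) => id);
  let seen = 0;
  while (queue.length) {
    const id = queue.shift();
    seen++;
    flow.edges.filter(e => e.source === id).forEach(e => {
      indeg.set(e.target, indeg.get(e.target) - 1);
      if (indeg.get(e.target) === 0) queue.push(e.target);
    });
  }
  if (seen !== flow.nodes.length) errors.push('cycle detected');
  return errors;
}
```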
---
## File Location
Workflow files are stored in `ccw/data/flows/*.json`.
Template discovery: `Glob('*.json', { path: 'ccw/data/flows/' })`


@@ -0,0 +1,16 @@
{
"name": "analyze",
"description": "Collaborative analysis with multi-round discussion - deep exploration and understanding",
"level": 3,
"steps": [
{
"cmd": "/workflow:analyze-with-file",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-round collaborative analysis with iterative understanding. Generate discussion.md with comprehensive analysis and conclusions"
}
]
}


@@ -0,0 +1,36 @@
{
"name": "brainstorm-to-issue",
"description": "Bridge brainstorm session to issue workflow - convert exploration insights to executable issues",
"level": 4,
"steps": [
{
"cmd": "/workflow:issue:from-brainstorm",
"args": "--auto",
"unit": "brainstorm-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Convert brainstorm session findings into issue plans and solutions"
},
{
"cmd": "/workflow:issue:queue",
"unit": "brainstorm-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Build execution queue from converted brainstorm issues"
},
{
"cmd": "/workflow:issue:execute",
"args": "--queue auto",
"unit": "brainstorm-to-issue",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute issues from queue with state tracking"
}
]
}


@@ -0,0 +1,16 @@
{
"name": "brainstorm",
"description": "Multi-perspective ideation with documentation - explore possibilities with multiple analytical viewpoints",
"level": 4,
"steps": [
{
"cmd": "/workflow:brainstorm-with-file",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-perspective ideation with interactive diverge-converge cycles. Generate brainstorm.md with synthesis of ideas and recommendations"
}
]
}


@@ -0,0 +1,16 @@
{
"name": "bugfix-hotfix",
"description": "Urgent production fix - immediate diagnosis and fix with minimal overhead",
"level": 1,
"steps": [
{
"cmd": "/workflow:lite-fix",
"args": "--hotfix \"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Urgent hotfix mode: quick diagnosis and immediate fix for critical production issue"
}
]
}


@@ -0,0 +1,47 @@
{
"name": "bugfix",
"description": "Standard bug fix workflow - lightweight diagnosis and execution with testing",
"level": 2,
"steps": [
{
"cmd": "/workflow:lite-fix",
"args": "\"{{goal}}\"",
"unit": "bug-fix",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Analyze bug report, trace execution flow, identify root cause with fix strategy"
},
{
"cmd": "/workflow:lite-execute",
"args": "--in-memory",
"unit": "bug-fix",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Implement fix based on diagnosis. Execute against in-memory state from lite-fix analysis."
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate test tasks to verify bug fix and prevent regression"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute test-fix cycle until all tests pass"
}
]
}

View File

@@ -0,0 +1,71 @@
{
"name": "coupled",
"description": "Full workflow for complex features - detailed planning with verification, execution, review, and testing",
"level": 3,
"steps": [
{
"cmd": "/workflow:plan",
"args": "\"{{goal}}\"",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create detailed implementation plan with architecture design, file structure, dependencies, and milestones"
},
{
"cmd": "/workflow:plan-verify",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Verify IMPL_PLAN.md against requirements, check for missing details, conflicts, and quality gates"
},
{
"cmd": "/workflow:execute",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute implementation based on verified plan. Resume from planning session with all context preserved."
},
{
"cmd": "/workflow:review-session-cycle",
"unit": "code-review",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Perform multi-dimensional code review across correctness, security, performance, maintainability. Reference execution session for full code context."
},
{
"cmd": "/workflow:review-cycle-fix",
"unit": "code-review",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Fix issues identified in review findings with prioritization by severity levels"
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate comprehensive test tasks for the implementation with coverage analysis"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute iterative test-fix cycle until pass rate >= 95%"
}
]
}


@@ -0,0 +1,16 @@
{
"name": "debug",
"description": "Hypothesis-driven debugging with documentation - systematic troubleshooting and logging",
"level": 3,
"steps": [
{
"cmd": "/workflow:debug-with-file",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Systematic debugging with hypothesis formation and verification. Generate understanding.md with root cause analysis and fix recommendations"
}
]
}


@@ -0,0 +1,27 @@
{
"name": "docs",
"description": "Documentation generation workflow",
"level": 2,
"steps": [
{
"cmd": "/workflow:lite-plan",
"args": "\"{{goal}}\"",
"unit": "quick-documentation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Plan documentation structure and content organization"
},
{
"cmd": "/workflow:lite-execute",
"args": "--in-memory",
"unit": "quick-documentation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute documentation generation from plan"
}
]
}


@@ -0,0 +1,61 @@
{
"name": "full",
"description": "Comprehensive workflow - brainstorm exploration, planning verification, execution, and testing",
"level": 4,
"steps": [
{
"cmd": "/workflow:brainstorm:auto-parallel",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-perspective exploration of requirements and possible approaches"
},
{
"cmd": "/workflow:plan",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create detailed implementation plan based on brainstorm insights"
},
{
"cmd": "/workflow:plan-verify",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Verify plan quality and completeness"
},
{
"cmd": "/workflow:execute",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute implementation from verified plan"
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate comprehensive test tasks"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute test-fix cycle until pass rate >= 95%"
}
]
}


@@ -0,0 +1,43 @@
{
"name": "issue",
"description": "Issue workflow - discover issues, create plans, queue execution, and resolve",
"level": "issue",
"steps": [
{
"cmd": "/workflow:issue:discover",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Discover pending issues from codebase for potential fixes"
},
{
"cmd": "/workflow:issue:plan",
"args": "--all-pending",
"unit": "issue-workflow",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create execution plans for all discovered pending issues"
},
{
"cmd": "/workflow:issue:queue",
"unit": "issue-workflow",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Build execution queue with issue prioritization and dependencies"
},
{
"cmd": "/workflow:issue:execute",
"unit": "issue-workflow",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute issues from queue with state tracking and completion reporting"
}
]
}


@@ -0,0 +1,16 @@
{
"name": "lite-lite-lite",
"description": "Ultra-lightweight for immediate simple tasks - minimal overhead, direct execution",
"level": 1,
"steps": [
{
"cmd": "/workflow:lite-lite-lite",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Direct task execution with minimal analysis and zero documentation overhead"
}
]
}


@@ -0,0 +1,47 @@
{
"name": "multi-cli-plan",
"description": "Multi-perspective planning with cross-tool verification and execution",
"level": 3,
"steps": [
{
"cmd": "/workflow:multi-cli-plan",
"args": "\"{{goal}}\"",
"unit": "multi-cli-planning",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-perspective analysis comparing different implementation approaches with trade-off analysis"
},
{
"cmd": "/workflow:lite-execute",
"args": "--in-memory",
"unit": "multi-cli-planning",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute best approach selected from multi-perspective analysis"
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate test tasks for the implementation"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute test-fix cycle until pass rate >= 95%"
}
]
}


@@ -0,0 +1,46 @@
{
"name": "rapid-to-issue",
"description": "Bridge lite workflow to issue workflow - convert simple plan to structured issue execution",
"level": 2.5,
"steps": [
{
"cmd": "/workflow:lite-plan",
"args": "\"{{goal}}\"",
"unit": "rapid-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create lightweight plan for the task"
},
{
"cmd": "/workflow:issue:convert-to-plan",
"args": "--latest-lite-plan -y",
"unit": "rapid-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Convert lite plan to structured issue plan"
},
{
"cmd": "/workflow:issue:queue",
"unit": "rapid-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Build execution queue from converted plan"
},
{
"cmd": "/workflow:issue:execute",
"args": "--queue auto",
"unit": "rapid-to-issue",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute issues from queue with state tracking"
}
]
}


@@ -0,0 +1,47 @@
{
"name": "rapid",
"description": "Quick implementation - lightweight plan and immediate execution for simple features",
"level": 2,
"steps": [
{
"cmd": "/workflow:lite-plan",
"args": "\"{{goal}}\"",
"unit": "quick-implementation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Analyze requirements and create a lightweight implementation plan with key decisions and file structure"
},
{
"cmd": "/workflow:lite-execute",
"args": "--in-memory",
"unit": "quick-implementation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Use the plan from previous step to implement code. Execute against in-memory state."
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate test tasks from the implementation session"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute test-fix cycle until all tests pass"
}
]
}


@@ -0,0 +1,43 @@
{
"name": "review",
"description": "Code review workflow - multi-dimensional review, fix issues, and test validation",
"level": 3,
"steps": [
{
"cmd": "/workflow:review-session-cycle",
"unit": "code-review",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Perform comprehensive multi-dimensional code review across correctness, security, performance, maintainability dimensions"
},
{
"cmd": "/workflow:review-cycle-fix",
"unit": "code-review",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Fix all review findings prioritized by severity level (critical → high → medium → low)"
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate test tasks for fixed code with coverage analysis"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute iterative test-fix cycle until pass rate >= 95%"
}
]
}


@@ -0,0 +1,34 @@
{
"name": "tdd",
"description": "Test-driven development - write tests first, implement to pass tests, verify Red-Green-Refactor cycles",
"level": 3,
"steps": [
{
"cmd": "/workflow:tdd-plan",
"args": "\"{{goal}}\"",
"unit": "tdd-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create TDD task plan with Red-Green-Refactor cycles, test specifications, and implementation strategy"
},
{
"cmd": "/workflow:execute",
"unit": "tdd-planning-execution",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute TDD tasks following Red-Green-Refactor workflow with test-first development"
},
{
"cmd": "/workflow:tdd-verify",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Verify TDD cycle compliance, test coverage, and code quality against Red-Green-Refactor principles"
}
]
}


@@ -0,0 +1,26 @@
{
"name": "test-fix",
"description": "Fix failing tests - generate test tasks and execute iterative test-fix cycle",
"level": 2,
"steps": [
{
"cmd": "/workflow:test-fix-gen",
"args": "\"{{goal}}\"",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Analyze failing tests, generate targeted test tasks with root cause and fix strategy"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute iterative test-fix cycle with pass rate tracking until >= 95% pass rate achieved"
}
]
}


@@ -1,650 +0,0 @@
---
name: lite-skill-generator
description: Lightweight skill generator with style learning - creates simple skills using flow-based execution and style imitation. Use for quick skill scaffolding, simple workflow creation, or style-aware skill generation.
allowed-tools: Read, Write, Bash, Glob, Grep, AskUserQuestion
---
# Lite Skill Generator
Lightweight meta-skill for rapid skill creation with intelligent style learning and flow-based execution.
## Core Concept
**Simplicity First**: Generate simple, focused skills quickly with minimal overhead. Learn from existing skills to maintain consistent style and structure.
**Progressive Disclosure**: Follow Anthropic's three-layer loading principle:
1. **Metadata** - name, description, triggers (always loaded)
2. **SKILL.md** - core instructions (loaded when triggered)
3. **Bundled resources** - scripts, references, assets (loaded on demand)
## Execution Model
**3-Phase Flow**: Style Learning → Requirements Gathering → Generation
```
User Input → Phase 1: Style Analysis → Phase 2: Requirements → Phase 3: Generate → Skill Package
↓ ↓ ↓
Learn from examples Interactive prompts Write files + validate
```
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Lite Skill Generator │
│ │
│ Input: Skill name, purpose, reference skills │
│ ↓ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Phase 1-3: Lightweight Pipeline │ │
│ │ ┌────┐ ┌────┐ ┌────┐ │ │
│ │ │ P1 │→│ P2 │→│ P3 │ │ │
│ │ │Styl│ │Req │ │Gen │ │ │
│ │ └────┘ └────┘ └────┘ │ │
│ └─────────────────────────────────────────────────────────┘ │
│ ↓ │
│ Output: .claude/skills/{skill-name}/ (minimal package) │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## 3-Phase Workflow
### Phase 1: Style Analysis & Learning
Analyze reference skills to extract language patterns, structural conventions, and writing style.
```javascript
// Phase 1 Execution Flow
async function analyzeStyle(referencePaths) {
// Step 1: Load reference skills
const references = [];
for (const path of referencePaths) {
const content = Read(path);
references.push({
path: path,
content: content,
metadata: extractYAMLFrontmatter(content)
});
}
// Step 2: Extract style patterns
const styleProfile = {
// Structural patterns
structure: {
hasFrontmatter: references.every(r => r.metadata !== null),
sectionHeaders: extractCommonSections(references),
codeBlockUsage: detectCodeBlockPatterns(references),
flowDiagramUsage: detectFlowDiagrams(references)
},
// Language patterns
language: {
instructionStyle: detectInstructionStyle(references), // 'imperative' | 'declarative' | 'procedural'
pseudocodeUsage: detectPseudocodePatterns(references),
verbosity: calculateVerbosityLevel(references), // 'concise' | 'detailed' | 'verbose'
terminology: extractCommonTerms(references)
},
// Organization patterns
organization: {
phaseStructure: detectPhasePattern(references), // 'sequential' | 'autonomous' | 'flat'
exampleDensity: calculateExampleRatio(references),
templateUsage: detectTemplateReferences(references)
}
};
// Step 3: Generate style guide
return {
profile: styleProfile,
recommendations: generateStyleRecommendations(styleProfile),
examples: extractStyleExamples(references, styleProfile)
};
}
// Structural pattern detection
function extractCommonSections(references) {
const allSections = references.flatMap(r =>
r.content.match(/^##? (.+)$/gm)?.map(s => s.replace(/^##? /, '')) ?? []
);
return findMostCommon(allSections);
}
// Language style detection
function detectInstructionStyle(references) {
const imperativePattern = /^(Use|Execute|Run|Call|Create|Generate)\s/gim;
const declarativePattern = /^(The|This|Each|All)\s.*\s(is|are|will be)\s/gim;
const proceduralPattern = /^(Step \d+|Phase \d+|First|Then|Finally)\s/gim;
const scores = references.map(r => ({
imperative: (r.content.match(imperativePattern) || []).length,
declarative: (r.content.match(declarativePattern) || []).length,
procedural: (r.content.match(proceduralPattern) || []).length
}));
return getMaxStyle(scores);
}
// Pseudocode pattern detection
function detectPseudocodePatterns(references) {
const hasJavaScriptBlocks = references.some(r => r.content.includes('```javascript'));
const hasFunctionDefs = references.some(r => /function\s+\w+\(/m.test(r.content));
const hasFlowComments = references.some(r => /^\s*\/\//m.test(r.content)); // comment lines, not any "//" (which would match URLs)
return {
usePseudocode: hasJavaScriptBlocks && hasFunctionDefs,
flowAnnotations: hasFlowComments,
style: hasFunctionDefs ? 'functional' : 'imperative'
};
}
```
**Output**:
```
Style Analysis Complete:
Structure: Flow-based with pseudocode
Language: Procedural, detailed
Organization: Sequential phases
Key Patterns: 3-5 phases, function definitions, ASCII diagrams
Recommendations:
✓ Use phase-based structure (3-4 phases)
✓ Include pseudocode for complex logic
✓ Add ASCII flow diagrams
✓ Maintain concise documentation style
```
---
### Phase 2: Requirements Gathering
Interactive discovery of skill requirements using learned style patterns.
```javascript
async function gatherRequirements(styleProfile) {
// Step 1: Basic information
const basicInfo = await AskUserQuestion({
questions: [
{
question: "What is the skill name? (kebab-case, e.g., 'pdf-generator')",
header: "Name",
options: [
{ label: "pdf-generator", description: "Example: PDF generation skill" },
{ label: "code-analyzer", description: "Example: Code analysis skill" },
{ label: "Custom", description: "Enter custom name" }
]
},
{
question: "What is the primary purpose?",
header: "Purpose",
options: [
{ label: "Generation", description: "Create/generate artifacts" },
{ label: "Analysis", description: "Analyze/inspect code or data" },
{ label: "Transformation", description: "Convert/transform content" },
{ label: "Orchestration", description: "Coordinate multiple operations" }
]
}
]
});
// Step 2: Execution complexity
const complexity = await AskUserQuestion({
questions: [{
question: "How many main steps does this skill need?",
header: "Steps",
options: [
{ label: "2-3 steps", description: "Simple workflow (recommended for lite-skill)" },
{ label: "4-5 steps", description: "Moderate workflow" },
{ label: "6+ steps", description: "Complex workflow (consider full skill-generator)" }
]
}]
});
// Step 3: Tool requirements
const tools = await AskUserQuestion({
questions: [{
question: "Which tools will this skill use? (select multiple)",
header: "Tools",
multiSelect: true,
options: [
{ label: "Read", description: "Read files" },
{ label: "Write", description: "Write files" },
{ label: "Bash", description: "Execute commands" },
{ label: "Task", description: "Launch agents" },
{ label: "AskUserQuestion", description: "Interactive prompts" }
]
}]
});
// Step 4: Output format
const output = await AskUserQuestion({
questions: [{
question: "What does this skill produce?",
header: "Output",
options: [
{ label: "Single file", description: "One main output file" },
{ label: "Multiple files", description: "Several related files" },
{ label: "Directory structure", description: "Complete directory tree" },
{ label: "Modified files", description: "Edits to existing files" }
]
}]
});
// Step 5: Build configuration
return {
name: basicInfo.Name,
purpose: basicInfo.Purpose,
description: generateDescription(basicInfo.Name, basicInfo.Purpose),
steps: parseStepCount(complexity.Steps),
allowedTools: tools.Tools,
outputType: output.Output,
styleProfile: styleProfile,
triggerPhrases: generateTriggerPhrases(basicInfo.Name, basicInfo.Purpose)
};
}
// Generate skill description from name and purpose
function generateDescription(name, purpose) {
const templates = {
Generation: `Generate ${humanize(name)} with intelligent scaffolding`,
Analysis: `Analyze ${humanize(name)} with detailed reporting`,
Transformation: `Transform ${humanize(name)} with format conversion`,
Orchestration: `Orchestrate ${humanize(name)} workflow with multi-step coordination`
};
return templates[purpose] || `${humanize(name)} skill for ${purpose.toLowerCase()} tasks`;
}
// Generate trigger phrases
function generateTriggerPhrases(name, purpose) {
const base = [name, name.replace(/-/g, ' ')];
const purposeVariants = {
Generation: ['generate', 'create', 'build'],
Analysis: ['analyze', 'inspect', 'review'],
Transformation: ['transform', 'convert', 'format'],
Orchestration: ['orchestrate', 'coordinate', 'manage']
};
return [...base, ...purposeVariants[purpose].map(v => `${v} ${humanize(name)}`)];
}
```
**Display to User**:
```
Requirements Gathered:
Name: pdf-generator
Purpose: Generation
Steps: 3 (Setup → Generate → Validate)
Tools: Read, Write, Bash
Output: Single file (PDF document)
Triggers: "pdf-generator", "generate pdf", "create pdf"
Style Application:
Using flow-based structure (from style analysis)
Including pseudocode blocks
Adding ASCII diagrams for clarity
```
---
### Phase 3: Generate Skill Package
Create minimal skill structure with style-aware content generation.
```javascript
async function generateSkillPackage(requirements) {
const skillDir = `.claude/skills/${requirements.name}`;
const workDir = `.workflow/.scratchpad/lite-skill-gen-${Date.now()}`;
// Step 1: Create directory structure
Bash(`mkdir -p "${skillDir}" "${workDir}"`);
// Step 2: Generate SKILL.md (using learned style)
const skillContent = generateSkillMd(requirements);
Write(`${skillDir}/SKILL.md`, skillContent);
// Step 3: Conditionally add bundled resources
if (requirements.outputType === 'Directory structure') {
Bash(`mkdir -p "${skillDir}/templates"`);
const templateContent = generateTemplate(requirements);
Write(`${skillDir}/templates/base-template.md`, templateContent);
}
if (requirements.allowedTools.includes('Bash')) {
Bash(`mkdir -p "${skillDir}/scripts"`);
const scriptContent = generateScript(requirements);
Write(`${skillDir}/scripts/helper.sh`, scriptContent);
}
// Step 4: Generate README
const readmeContent = generateReadme(requirements);
Write(`${skillDir}/README.md`, readmeContent);
// Step 5: Validate structure
const validation = validateSkillStructure(skillDir, requirements);
Write(`${workDir}/validation-report.json`, JSON.stringify(validation, null, 2));
// Step 6: Return summary
return {
skillPath: skillDir,
filesCreated: [
`${skillDir}/SKILL.md`,
...(validation.hasTemplates ? [`${skillDir}/templates/`] : []),
...(validation.hasScripts ? [`${skillDir}/scripts/`] : []),
`${skillDir}/README.md`
],
validation: validation,
nextSteps: generateNextSteps(requirements)
};
}
// Generate SKILL.md with style awareness
function generateSkillMd(req) {
const { styleProfile } = req;
// YAML frontmatter
const frontmatter = `---
name: ${req.name}
description: ${req.description}
allowed-tools: ${req.allowedTools.join(', ')}
---
`;
// Main content structure (adapts to style)
let content = frontmatter;
content += `\n# ${humanize(req.name)}\n\n`;
content += `${req.description}\n\n`;
// Add architecture diagram if style uses them
if (styleProfile.structure.flowDiagramUsage) {
content += generateArchitectureDiagram(req);
}
// Add execution flow
content += `## Execution Flow\n\n`;
if (styleProfile.language.pseudocodeUsage.usePseudocode) {
content += generatePseudocodeFlow(req);
} else {
content += generateProceduralFlow(req);
}
// Add phase sections
for (let i = 0; i < req.steps; i++) {
content += generatePhaseSection(i + 1, req, styleProfile);
}
// Add examples if style is verbose
if (styleProfile.language.verbosity !== 'concise') {
content += generateExamplesSection(req);
}
return content;
}
// Generate architecture diagram
function generateArchitectureDiagram(req) {
  const phases = [1, 2, 3].map(n => getPhaseName(n, req)).join(' → ');
  return `## Architecture

\`\`\`
┌─────────────────────────────────────────────────┐
│ ${humanize(req.name)}
│
│ Input → ${phases} → Output
└─────────────────────────────────────────────────┘
\`\`\`
`;
}
// Generate pseudocode flow
function generatePseudocodeFlow(req) {
return `\`\`\`javascript
async function ${toCamelCase(req.name)}(input) {
// Phase 1: ${getPhaseName(1, req)}
const prepared = await phase1Prepare(input);
// Phase 2: ${getPhaseName(2, req)}
const processed = await phase2Process(prepared);
// Phase 3: ${getPhaseName(3, req)}
const result = await phase3Finalize(processed);
return result;
}
\`\`\`
`;
}
// Generate phase section
function generatePhaseSection(phaseNum, req, styleProfile) {
const phaseName = getPhaseName(phaseNum, req);
let section = `### Phase ${phaseNum}: ${phaseName}\n\n`;
if (styleProfile.language.pseudocodeUsage.usePseudocode) {
section += `\`\`\`javascript\n`;
section += `async function phase${phaseNum}${toCamelCase(phaseName)}(input) {\n`;
section += ` // TODO: Implement ${phaseName.toLowerCase()} logic\n`;
section += ` return output;\n`;
section += `}\n\`\`\`\n\n`;
} else {
section += `**Steps**:\n`;
section += `1. Load input data\n`;
section += `2. Process according to ${phaseName.toLowerCase()} logic\n`;
section += `3. Return result to next phase\n\n`;
}
return section;
}
// Validation
function validateSkillStructure(skillDir, req) {
const requiredFiles = [`${skillDir}/SKILL.md`, `${skillDir}/README.md`];
const exists = requiredFiles.map(f => Bash(`test -f "${f}"`).exitCode === 0);
return {
valid: exists.every(e => e),
hasTemplates: Bash(`test -d "${skillDir}/templates"`).exitCode === 0,
hasScripts: Bash(`test -d "${skillDir}/scripts"`).exitCode === 0,
filesPresent: requiredFiles.filter((f, i) => exists[i]),
styleCompliance: checkStyleCompliance(skillDir, req.styleProfile)
};
}
```
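`getPhaseName` is referenced throughout the generators above but never defined. A plausible sketch, assuming purpose-specific phase names (matching the "Setup → Generate → Validate" example shown earlier) with a generic fallback; the name table is hypothetical:

```javascript
// Hypothetical sketch: derive a phase name from the phase number and purpose.
const PHASE_NAMES = {
  Generation: ['Setup', 'Generate', 'Validate'],
  Analysis: ['Collect', 'Analyze', 'Report'],
  Transformation: ['Parse', 'Transform', 'Emit'],
  Orchestration: ['Plan', 'Coordinate', 'Finalize']
};

function getPhaseName(phaseNum, req) {
  const names = PHASE_NAMES[req.purpose] || [];
  return names[phaseNum - 1] || `Phase ${phaseNum}`;
}
```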
**Output**:
```
Skill Package Generated:
Location: .claude/skills/pdf-generator/
Structure:
✓ SKILL.md (entry point)
✓ README.md (usage guide)
✓ templates/ (directory templates)
✓ scripts/ (helper scripts)
Validation:
✓ All required files present
✓ Style compliance: 95%
✓ Frontmatter valid
✓ Tool references correct
Next Steps:
1. Review SKILL.md and customize phases
2. Test skill: /skill:pdf-generator "test input"
3. Iterate based on usage
```
---
## Complete Execution Flow
```
User: "Create a PDF generator skill"
Phase 1: Style Analysis
|-- Read reference skills (ccw.md, ccw-coordinator.md)
|-- Extract style patterns (flow diagrams, pseudocode, structure)
|-- Generate style profile
+-- Output: Style recommendations
Phase 2: Requirements
|-- Ask: Name, purpose, steps
|-- Ask: Tools, output format
|-- Generate: Description, triggers
+-- Output: Requirements config
Phase 3: Generation
|-- Create: Directory structure
|-- Write: SKILL.md (style-aware)
|-- Write: README.md
|-- Optionally: templates/, scripts/
|-- Validate: Structure and style
+-- Output: Skill package
Return: Skill location + next steps
```
## Phase Execution Protocol
```javascript
// Main entry point
async function liteSkillGenerator(input) {
// Phase 1: Style Learning
const references = [
'.claude/commands/ccw.md',
'.claude/commands/ccw-coordinator.md',
...discoverReferenceSkills(input)
];
const styleProfile = await analyzeStyle(references);
console.log(`Style Analysis: ${styleProfile.organization.phaseStructure}, ${styleProfile.language.verbosity}`);
// Phase 2: Requirements
const requirements = await gatherRequirements(styleProfile);
console.log(`Requirements: ${requirements.name} (${requirements.steps} phases)`);
// Phase 3: Generation
const result = await generateSkillPackage(requirements);
console.log(`✅ Generated: ${result.skillPath}`);
return result;
}
```
## Output Structure
**Minimal Package** (default):
```
.claude/skills/{skill-name}/
├── SKILL.md # Entry point with frontmatter
└── README.md # Usage guide
```
**With Templates** (if needed):
```
.claude/skills/{skill-name}/
├── SKILL.md
├── README.md
└── templates/
└── base-template.md
```
**With Scripts** (if using Bash):
```
.claude/skills/{skill-name}/
├── SKILL.md
├── README.md
└── scripts/
└── helper.sh
```
## Key Design Principles
1. **Style Learning** - Analyze reference skills to maintain consistency
2. **Minimal Overhead** - Generate only essential files (SKILL.md + README)
3. **Progressive Disclosure** - Follow Anthropic's three-layer loading
4. **Flow-Based** - Use pseudocode and flow diagrams (when style appropriate)
5. **Interactive** - Guided requirements gathering via AskUserQuestion
6. **Fast Generation** - 3 phases instead of 6, focused on simplicity
7. **Style Awareness** - Adapt output based on detected patterns
## Style Pattern Detection
**Structural Patterns**:
- YAML frontmatter usage (100% in references)
- Section headers (H2 for major, H3 for sub-sections)
- Code blocks (JavaScript pseudocode, Bash examples)
- ASCII diagrams (architecture, flow charts)
**Language Patterns**:
- Instruction style: Procedural with function definitions
- Pseudocode: JavaScript-based with flow annotations
- Verbosity: Detailed but focused
- Terminology: Phase, workflow, pipeline, orchestrator
**Organization Patterns**:
- Phase structure: 3-5 sequential phases
- Example density: Moderate (1-2 per major section)
- Template usage: Minimal (only when necessary)
## Usage Examples
**Basic Generation**:
```
User: "Create a markdown formatter skill"
Lite-Skill-Generator:
→ Analyzes ccw.md style
→ Asks: Name? "markdown-formatter"
→ Asks: Purpose? "Transformation"
→ Asks: Steps? "3 steps"
→ Generates: .claude/skills/markdown-formatter/
```
**With Custom References**:
```
User: "Create a skill like software-manual but simpler"
Lite-Skill-Generator:
→ Analyzes software-manual skill
→ Learns: Multi-phase, agent-based, template-heavy
→ Simplifies: 3 phases, direct execution, minimal templates
→ Generates: Simplified version
```
## Comparison: lite-skill-generator vs skill-generator
| Aspect | lite-skill-generator | skill-generator |
|--------|---------------------|-----------------|
| **Phases** | 3 (Style → Req → Gen) | 6 (Spec → Req → Dir → Gen → Specs → Val) |
| **Style Learning** | Yes (analyze references) | No (fixed templates) |
| **Complexity** | Simple skills only | Full-featured skills |
| **Output** | Minimal (SKILL.md + README) | Complete (phases/, specs/, templates/) |
| **Generation Time** | Fast (~2 min) | Thorough (~10 min) |
| **Use Case** | Quick scaffolding | Production-ready skills |
## Workflow Integration
**Standalone**:
```bash
/skill:lite-skill-generator "Create a log analyzer skill"
```
**With References**:
```bash
/skill:lite-skill-generator "Create a skill based on ccw-coordinator.md style"
```
**Batch Generation** (for multiple simple skills):
```bash
/skill:lite-skill-generator "Create 3 skills: json-validator, yaml-parser, toml-converter"
```
---
**Next Steps After Generation**:
1. Review `.claude/skills/{name}/SKILL.md`
2. Customize phase logic for your use case
3. Add examples to README.md
4. Test skill with sample input
5. Iterate based on real usage

---
name: {{SKILL_NAME}}
description: {{SKILL_DESCRIPTION}}
allowed-tools: {{ALLOWED_TOOLS}}
---
# {{SKILL_TITLE}}
{{SKILL_DESCRIPTION}}
## Architecture
```
┌─────────────────────────────────────────────────┐
│ {{SKILL_TITLE}} │
│ │
│ Input → {{PHASE_1}} → {{PHASE_2}} → Output │
└─────────────────────────────────────────────────┘
```
## Execution Flow
```javascript
async function {{SKILL_FUNCTION}}(input) {
// Phase 1: {{PHASE_1}}
const prepared = await phase1(input);
// Phase 2: {{PHASE_2}}
const result = await phase2(prepared);
return result;
}
```
### Phase 1: {{PHASE_1}}
```javascript
async function phase1(input) {
// TODO: Implement {{PHASE_1_LOWER}} logic
return output;
}
```
### Phase 2: {{PHASE_2}}
```javascript
async function phase2(input) {
// TODO: Implement {{PHASE_2_LOWER}} logic
return output;
}
```
## Usage
```bash
/skill:{{SKILL_NAME}} "input description"
```
## Examples
**Basic Usage**:
```
User: "{{EXAMPLE_INPUT}}"
{{SKILL_NAME}}:
→ Phase 1: {{PHASE_1_ACTION}}
→ Phase 2: {{PHASE_2_ACTION}}
→ Output: {{EXAMPLE_OUTPUT}}
```

# Style Guide Template
Generated by lite-skill-generator style analysis phase.
## Detected Patterns
### Structural Patterns
| Pattern | Detected | Recommendation |
|---------|----------|----------------|
| YAML Frontmatter | {{HAS_FRONTMATTER}} | {{FRONTMATTER_REC}} |
| ASCII Diagrams | {{HAS_DIAGRAMS}} | {{DIAGRAMS_REC}} |
| Code Blocks | {{HAS_CODE_BLOCKS}} | {{CODE_BLOCKS_REC}} |
| Phase Structure | {{PHASE_STRUCTURE}} | {{PHASE_REC}} |
### Language Patterns
| Pattern | Value | Notes |
|---------|-------|-------|
| Instruction Style | {{INSTRUCTION_STYLE}} | imperative/declarative/procedural |
| Pseudocode Usage | {{PSEUDOCODE_USAGE}} | functional/imperative/none |
| Verbosity Level | {{VERBOSITY}} | concise/detailed/verbose |
| Common Terms | {{TERMINOLOGY}} | domain-specific vocabulary |
### Organization Patterns
| Pattern | Value |
|---------|-------|
| Phase Count | {{PHASE_COUNT}} |
| Example Density | {{EXAMPLE_DENSITY}} |
| Template Usage | {{TEMPLATE_USAGE}} |
## Style Compliance Checklist
- [ ] YAML frontmatter with name, description, allowed-tools
- [ ] Architecture diagram (if pattern detected)
- [ ] Execution flow section with pseudocode
- [ ] Phase sections (sequential numbered)
- [ ] Usage examples section
- [ ] README.md for external documentation
## Reference Skills Analyzed
{{#REFERENCES}}
- `{{REF_PATH}}`: {{REF_NOTES}}
{{/REFERENCES}}
## Generated Configuration
```json
{
"style": {
"structure": "{{STRUCTURE_TYPE}}",
"language": "{{LANGUAGE_TYPE}}",
"organization": "{{ORG_TYPE}}"
},
"recommendations": {
"usePseudocode": {{USE_PSEUDOCODE}},
"includeDiagrams": {{INCLUDE_DIAGRAMS}},
"verbosityLevel": "{{VERBOSITY}}",
"phaseCount": {{PHASE_COUNT}}
}
}
```

---
name: Prompt Enhancer
description: Transform vague prompts into actionable specs using intelligent analysis and session memory. Use when user input contains -e or --enhance flag.
allowed-tools: (none)
---
# Prompt Enhancer
**Transform**: Vague intent → Structured specification (Memory-based, Direct Output)
**Languages**: English + Chinese (中英文语义识别)
## Process (Internal → Direct Output)
**Internal Analysis**: Intelligently extract session context, identify tech stack, and structure into actionable format.
**Output**: Direct structured prompt (no intermediate steps shown)
## Output Format
**Dynamic Structure**: Adapt fields based on task type and context needs. Not all fields are required.
**Core Fields** (always present):
- **INTENT**: One-sentence technical goal
- **ACTION**: Concrete steps with technical details
**Optional Fields** (include when relevant):
- **TECH STACK**: Relevant technologies (when tech-specific)
- **CONTEXT**: Session memory findings (when context matters)
- **ATTENTION**: Critical constraints (when risks/requirements exist)
- **SCOPE**: Affected modules/files (for multi-module tasks)
- **METRICS**: Success criteria (for optimization/performance tasks)
- **DEPENDENCIES**: Related components (for integration tasks)
**Example (Simple Task)**:
```
📋 ENHANCED PROMPT
INTENT: Fix authentication token validation in JWT middleware
ACTION:
1. Review token expiration logic in auth middleware
2. Add proper error handling for expired tokens
3. Test with valid/expired/malformed tokens
```
**Example (Complex Task)**:
```
📋 ENHANCED PROMPT
INTENT: Optimize API performance with caching and database indexing
TECH STACK:
- Redis: Response caching
- PostgreSQL: Query optimization
CONTEXT:
- API response times >2s mentioned in previous conversation
- PostgreSQL slow query logs show N+1 problems
ACTION:
1. Profile endpoints to identify slow queries
2. Add PostgreSQL indexes on frequently queried columns
3. Implement Redis caching for read-heavy endpoints
4. Add cache invalidation on data updates
METRICS:
- Target: <500ms API response time
- Cache hit ratio: >80%
ATTENTION:
- Maintain backward compatibility with existing API contracts
- Handle cache invalidation correctly to avoid stale data
```
## Workflow
```
Trigger (-e/--enhance) → Internal Analysis → Dynamic Output
         ↓                       ↓                      ↓
    User Input           Assess Task Type         Select Fields
                        Extract Memory Context    Structure Prompt
```
1. **Detect**: User input contains `-e` or `--enhance`
2. **Analyze**:
- Determine task type (fix/optimize/implement/refactor)
- Extract relevant session context
- Identify tech stack and constraints
3. **Structure**:
- Always include: INTENT + ACTION
- Conditionally add: TECH STACK, CONTEXT, ATTENTION, METRICS, etc.
4. **Output**: Present dynamically structured prompt
## Enhancement Guidelines (Internal)
**Always Include**:
- Clear, actionable INTENT
- Concrete ACTION steps with technical details
**Add When Relevant**:
- TECH STACK: Task involves specific technologies
- CONTEXT: Session memory provides useful background
- ATTENTION: Security/compatibility/performance concerns exist
- SCOPE: Multi-module or cross-component changes
- METRICS: Performance/optimization goals need measurement
- DEPENDENCIES: Integration points matter
**Quality Checks**:
- Make vague intent explicit
- Resolve ambiguous references
- Add testing/validation steps
- Include constraints from memory
## Best Practices
- ✅ Trigger only on `-e`/`--enhance` flags
- ✅ Use **dynamic field selection** based on task type
- ✅ Extract **memory context ONLY** (no file reading)
- ✅ Always include INTENT + ACTION as core fields
- ✅ Add optional fields only when relevant to task
- ✅ Direct output (no intermediate steps shown)
- ❌ NO tool calls
- ❌ NO file operations (Bash, Read, Glob, Grep)
- ❌ NO fixed template - adapt to task needs

---
name: text-formatter
description: Transform and optimize text content with intelligent formatting. Output BBCode + Markdown hybrid format optimized for forums. Triggers on "format text", "text formatter", "排版", "格式化文本", "BBCode".
allowed-tools: Task, AskUserQuestion, Read, Write, Bash, Glob
---
# Text Formatter
Transform and optimize text content with intelligent structure analysis. Output format: **BBCode + Markdown hybrid** optimized for forum publishing.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Text Formatter Architecture (BBCode + MD Mode) │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Phase 1: Input Collection → receive text / file                │
│ ↓ │
│ Phase 2: Content Analysis → analyze structure, detect callouts │
│ ↓ │
│ Phase 3: Format Transform → convert to BBCode+MD format        │
│ ↓ │
│ Phase 4: Output & Preview → save file + preview                │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## Key Design Principles
1. **Single Format Output**: BBCode + Markdown hybrid (forum optimized)
2. **Pixel-Based Sizing**: size=150/120/100/80 (not 1-7 levels)
3. **Forum Compatibility**: Only use widely-supported BBCode tags
4. **Markdown Separators**: Use `---` for horizontal rules (not `[hr]`)
5. **No Alignment Tags**: `[align]` not supported, avoid usage
---
## Format Specification
### Supported BBCode Tags
| Tag | Usage | Example |
|-----|-------|---------|
| `[size=N]` | Font size (pixels) | `[size=120]Title[/size]` |
| `[color=X]` | Text color (hex/name) | `[color=#2196F3]Blue[/color]` |
| `[b]` | Bold | `[b]Bold text[/b]` |
| `[i]` | Italic | `[i]Italic[/i]` |
| `[s]` | Strikethrough | `[s]deleted[/s]` |
| `[u]` | Underline | `[u]underlined[/u]` |
| `[quote]` | Quote block | `[quote]Content[/quote]` |
| `[code]` | Code block | `[code]code[/code]` |
| `[img]` | Image | `[img]url[/img]` |
| `[url]` | Link | `[url=link]text[/url]` |
| `[list]` | List container | `[list][*]item[/list]` |
| `[spoiler]` | Collapsible content | `[spoiler=Title]hidden content[/spoiler]` |
### HTML to BBCode Conversion
| HTML Input | BBCode Output |
|------------|---------------|
| `<mark>highlight</mark>` | `[color=yellow]highlight[/color]` |
| `<u>underline</u>` | `[u]underline[/u]` |
| `<details><summary>Title</summary>content</details>` | `[spoiler=Title]content[/spoiler]` |
### Unsupported Tags (Avoid!)
| Tag | Reason | Alternative |
|-----|--------|-------------|
| `[align]` | Not rendered | Remove or use default left |
| `[hr]` | Shows as text | Use Markdown `---` |
| `<div>` | HTML not supported | Use BBCode only |
| `[table]` | Limited support | Use list or code block |
### Size Hierarchy (Pixels)
| Element | Size | Color | Usage |
|---------|------|-------|-------|
| **Main Title** | 150 | #2196F3 | Document title |
| **Section Title** | 120 | #2196F3 | Major sections (## H2) |
| **Subsection** | 100 | #333 | Sub-sections (### H3) |
| **Normal Text** | (default) | - | Body content |
| **Notes/Gray** | 80 | gray | Footnotes, metadata |
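Applied to a small document skeleton, the hierarchy above renders as (illustrative BBCode only, not tied to any specific forum):

```
[size=150][color=#2196F3][b]Document Title[/b][/color][/size]

[size=120][color=#2196F3][b]Section Title[/b][/color][/size]
Body content at the default size.

[size=80][color=gray]Footnote / metadata[/color][/size]
```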
### Color Palette
| Color | Hex | Semantic Usage |
|-------|-----|----------------|
| **Blue** | #2196F3 | Titles, links, info |
| **Green** | #4CAF50 | Success, tips, features |
| **Orange** | #FF9800 | Warnings, caution |
| **Red** | #F44336 | Errors, danger, important |
| **Purple** | #9C27B0 | Examples, code |
| **Gray** | gray | Notes, metadata |
---
## Mandatory Prerequisites
> Read before execution:
| Document | Purpose | Priority |
|----------|---------|----------|
| [specs/format-rules.md](specs/format-rules.md) | Format conversion rules | **P0** |
| [specs/element-mapping.md](specs/element-mapping.md) | Element type mappings | P1 |
| [specs/callout-types.md](specs/callout-types.md) | Callout/Admonition types | P1 |
---
## Execution Flow
```
┌────────────────────────────────────────────────────────────────┐
│ Phase 1: Input Collection │
│ - Ask: paste text OR file path │
│ - Output: input-config.json │
├────────────────────────────────────────────────────────────────┤
│ Phase 2: Content Analysis │
│ - Detect structure: headings, lists, code blocks, tables │
│ - Identify Callouts/Admonitions (>[!type]) │
│ - Output: analysis.json │
├────────────────────────────────────────────────────────────────┤
│ Phase 3: Format Transform │
│ - Apply BBCode + MD rules from specs/format-rules.md │
│ - Convert elements with pixel-based sizes │
│ - Use Markdown --- for separators │
│ - Output: formatted content │
├────────────────────────────────────────────────────────────────┤
│ Phase 4: Output & Preview │
│ - Save to .bbcode.txt file │
│ - Display preview │
│ - Output: final file │
└────────────────────────────────────────────────────────────────┘
```
## Callout/Admonition Support
Obsidian-style callout syntax is supported and converted to BBCode quotes:
```markdown
> [!NOTE]
> This is a note
> [!WARNING]
> This is a warning
```
Conversion result:
```bbcode
[quote]
[size=100][color=#2196F3][b]📝 Note[/b][/color][/size]
This is a note
[/quote]
```
| Type | Color | Icon |
|------|-------|------|
| NOTE/INFO | #2196F3 | 📝 |
| TIP/HINT | #4CAF50 | 💡 |
| SUCCESS | #4CAF50 | ✅ |
| WARNING/CAUTION | #FF9800 | ⚠️ |
| DANGER/ERROR | #F44336 | ❌ |
| EXAMPLE | #9C27B0 | 📋 |
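The detection behind this mapping can be sketched in isolation. A minimal example, assuming the same Obsidian-style pattern used in Phase 3 (the style table here is a hypothetical subset):

```javascript
// Minimal sketch: detect an Obsidian-style callout marker line and look up
// its color/icon. Pattern mirrors the Phase 3 CALLOUT_PATTERN.
const CALLOUT_PATTERN = /^>\s*\[!(\w+)\][+-]?(?:\s+(.+))?$/;
const CALLOUT_STYLE = {
  note: { icon: '📝', color: '#2196F3' },
  warning: { icon: '⚠️', color: '#FF9800' }
};

const line = '> [!WARNING] Check your config';
const match = line.match(CALLOUT_PATTERN);
const type = match[1].toLowerCase();   // callout type keyword
const title = match[2];                // optional custom title
const style = CALLOUT_STYLE[type];     // icon + color for rendering
```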
## Directory Setup
```javascript
const timestamp = new Date().toISOString().slice(0,10).replace(/-/g, '');
const workDir = `.workflow/.scratchpad/text-formatter-${timestamp}`;
Bash(`mkdir -p "${workDir}"`);
```
## Output Structure
```
.workflow/.scratchpad/text-formatter-{date}/
├── input-config.json     # input configuration
├── analysis.json         # content analysis results
└── output.bbcode.txt     # BBCode+MD output
```
## Reference Documents
| Document | Purpose |
|----------|---------|
| [phases/01-input-collection.md](phases/01-input-collection.md) | Collect input content |
| [phases/02-content-analysis.md](phases/02-content-analysis.md) | Analyze content structure |
| [phases/03-format-transform.md](phases/03-format-transform.md) | Format conversion |
| [phases/04-output-preview.md](phases/04-output-preview.md) | Output and preview |
| [specs/format-rules.md](specs/format-rules.md) | Format conversion rules |
| [specs/element-mapping.md](specs/element-mapping.md) | Element mapping table |
| [specs/callout-types.md](specs/callout-types.md) | Callout type definitions |
| [templates/bbcode-template.md](templates/bbcode-template.md) | BBCode template |

# Phase 1: Input Collection
Collect the text content provided by the user.
## Objective
- Obtain the user's input (pasted directly or via a file path)
- Generate the input configuration file
**Note**: The output format is fixed to the BBCode + Markdown hybrid (forum-optimized), so no format selection is needed.
## Input
- Source: user interaction
- Config: no upstream dependencies
## Execution Steps
### Step 1: Ask for the input method
```javascript
const inputMethod = await AskUserQuestion({
  questions: [
    {
      question: "Choose an input method",
      header: "Input Method",
      multiSelect: false,
      options: [
        { label: "Paste text directly", description: "Paste the content to format into the conversation" },
        { label: "Specify a file path", description: "Read the content of the given file" }
      ]
    }
  ]
});
```
### Step 2: Obtain the content
```javascript
let content = '';
if (inputMethod["Input Method"] === "Paste text directly") {
  // Prompt the user to paste the content
  const textInput = await AskUserQuestion({
    questions: [
      {
        question: "Paste the text to format (confirm once pasted)",
        header: "Text Content",
        multiSelect: false,
        options: [
          { label: "Pasting done", description: "Confirm the content has been pasted above" }
        ]
      }
    ]
  });
  // Extract the text from the user's message
  content = extractUserText();
} else {
  // Ask for the file path
  const filePath = await AskUserQuestion({
    questions: [
      {
        question: "Enter the file path",
        header: "File Path",
        multiSelect: false,
        options: [
          { label: "Path entered", description: "Confirm the path has been entered above" }
        ]
      }
    ]
  });
  content = Read(extractedFilePath);
}
```
### Step 3: Save the configuration
```javascript
const config = {
  input_method: inputMethod["Input Method"],
  target_format: "BBCode+MD", // fixed format
  original_content: content,
  timestamp: new Date().toISOString()
};
Write(`${workDir}/input-config.json`, JSON.stringify(config, null, 2));
```
## Output
- **File**: `input-config.json`
- **Format**: JSON
```json
{
  "input_method": "Paste text directly",
  "target_format": "BBCode+MD",
  "original_content": "...",
  "timestamp": "2026-01-13T..."
}
```
## Quality Checklist
- [ ] User input obtained successfully
- [ ] Content is non-empty and valid
- [ ] Configuration file saved
## Next Phase
→ [Phase 2: Content Analysis](02-content-analysis.md)

# Phase 2: Content Analysis
Analyze the structure and semantic elements of the input content.
## Objective
- Identify the content structure (headings, paragraphs, lists, etc.)
- Detect special elements (code blocks, tables, links, etc.)
- Produce structured analysis results
## Input
- Depends on: `input-config.json`
- Config: `{workDir}/input-config.json`
## Execution Steps
### Step 1: Load input
```javascript
const config = JSON.parse(Read(`${workDir}/input-config.json`));
const content = config.original_content;
```
### Step 2: Structure analysis
```javascript
function analyzeStructure(text) {
const analysis = {
elements: [],
stats: {
headings: 0,
paragraphs: 0,
lists: 0,
code_blocks: 0,
tables: 0,
links: 0,
images: 0,
quotes: 0,
callouts: 0
}
};
// Callout detection regex (Obsidian style)
const CALLOUT_PATTERN = /^>\s*\[!(\w+)\](?:\s+(.+))?$/;
const lines = text.split('\n');
let currentElement = null;
let inCodeBlock = false;
let inList = false;
for (let i = 0; i < lines.length; i++) {
const line = lines[i];
// Detect code blocks
if (line.match(/^```/)) {
inCodeBlock = !inCodeBlock;
if (inCodeBlock) {
analysis.elements.push({
type: 'code_block',
start: i,
language: line.replace(/^```/, '').trim()
});
analysis.stats.code_blocks++;
}
continue;
}
if (inCodeBlock) continue;
// Detect headings (Markdown or plain-text mode)
if (line.match(/^#{1,6}\s/)) {
const level = line.match(/^(#+)/)[1].length;
analysis.elements.push({
type: 'heading',
level: level,
content: line.replace(/^#+\s*/, ''),
line: i
});
analysis.stats.headings++;
continue;
}
// Detect lists
if (line.match(/^[\s]*[-*+]\s/) || line.match(/^[\s]*\d+\.\s/)) {
if (!inList) {
analysis.elements.push({
type: 'list',
start: i,
ordered: line.match(/^\d+\./) !== null
});
analysis.stats.lists++;
inList = true;
}
continue;
} else {
inList = false;
}
// Detect callouts (Obsidian style) - takes precedence over plain quotes
const calloutMatch = line.match(CALLOUT_PATTERN);
if (calloutMatch) {
const calloutType = calloutMatch[1].toLowerCase();
const calloutTitle = calloutMatch[2] || null;
// Collect the callout's content lines
const calloutContent = [];
let j = i + 1;
while (j < lines.length && lines[j].startsWith('>')) {
calloutContent.push(lines[j].replace(/^>\s*/, ''));
j++;
}
analysis.elements.push({
type: 'callout',
calloutType: calloutType,
title: calloutTitle,
content: calloutContent.join('\n'),
start: i,
end: j - 1
});
analysis.stats.callouts++;
i = j - 1; // skip the lines already consumed
continue;
}
// Detect plain quotes
if (line.match(/^>\s/)) {
analysis.elements.push({
type: 'quote',
content: line.replace(/^>\s*/, ''),
line: i
});
analysis.stats.quotes++;
continue;
}
// Detect table rows (count each table once, at its first row)
if (line.match(/^\|.*\|$/)) {
  analysis.elements.push({
    type: 'table_row',
    line: i
  });
  if (i === 0 || !lines[i - 1].match(/^\|.*\|$/)) {
    analysis.stats.tables++;
  }
  continue;
}
// Detect links
const links = line.match(/\[([^\]]+)\]\(([^)]+)\)/g);
if (links) {
analysis.stats.links += links.length;
}
// Detect images
const images = line.match(/!\[([^\]]*)\]\(([^)]+)\)/g);
if (images) {
analysis.stats.images += images.length;
}
// Plain paragraph
if (line.trim() && !line.match(/^[-=]{3,}$/)) {
analysis.elements.push({
type: 'paragraph',
line: i,
preview: line.substring(0, 50)
});
analysis.stats.paragraphs++;
}
}
return analysis;
}
const analysis = analyzeStructure(content);
```
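The heading branch of `analyzeStructure` can be exercised in isolation; a small standalone check of the level extraction and content stripping used above:

```javascript
// Standalone check of the heading detection logic from analyzeStructure.
const line = '### Subsection Title';
const isHeading = /^#{1,6}\s/.test(line);        // heading marker present?
const level = line.match(/^(#+)/)[1].length;      // number of leading '#'
const content = line.replace(/^#+\s*/, '');       // heading text only
```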
### Step 3: Semantic enhancement
```javascript
// Identify special semantics
function enhanceSemantics(text, analysis) {
const enhanced = { ...analysis };
// Detect emphasis markers
const boldPatterns = text.match(/\*\*[^*]+\*\*/g) || [];
const italicPatterns = text.match(/\*[^*]+\*/g) || [];
enhanced.semantics = {
emphasis: {
bold: boldPatterns.length,
italic: italicPatterns.length
},
estimated_reading_time: Math.ceil(text.split(/\s+/).length / 200) // 200 words/min
};
return enhanced;
}
const enhancedAnalysis = enhanceSemantics(content, analysis);
```
### Step 4: Save the analysis result
```javascript
Write(`${workDir}/analysis.json`, JSON.stringify(enhancedAnalysis, null, 2));
```
## Output
- **File**: `analysis.json`
- **Format**: JSON
```json
{
"elements": [
{ "type": "heading", "level": 1, "content": "Title", "line": 0 },
{ "type": "paragraph", "line": 2, "preview": "..." },
{ "type": "callout", "calloutType": "warning", "title": "Caution", "content": "...", "start": 4, "end": 6 },
{ "type": "code_block", "start": 8, "language": "javascript" }
],
"stats": {
"headings": 3,
"paragraphs": 10,
"lists": 2,
"code_blocks": 1,
"tables": 0,
"links": 5,
"images": 0,
"quotes": 1,
"callouts": 2
},
"semantics": {
"emphasis": { "bold": 5, "italic": 3 },
"estimated_reading_time": 2
}
}
```
## Quality Checklist
- [ ] All structural elements identified
- [ ] Statistics are accurate
- [ ] Semantic enhancement complete
- [ ] Analysis file saved
## Next Phase
→ [Phase 3: Format Transform](03-format-transform.md)

# Phase 3: Format Transform
Convert the content to the BBCode + Markdown hybrid format (forum-optimized).
## Objective
- Convert the content according to the analysis results
- Apply pixel-based font-size rules
- Handle callout/admonition syntax
- Produce forum-compatible output
## Input
- Depends on: `input-config.json`, `analysis.json`
- Specs: `specs/format-rules.md`, `specs/element-mapping.md`
## Format Specification
### Size Hierarchy (Pixels)
| Element | Size | Color | Usage |
|---------|------|-------|-------|
| **H1** | 150 | #2196F3 | Main document title |
| **H2** | 120 | #2196F3 | Section titles |
| **H3** | 100 | #333 | Subsections |
| **H4+** | (default) | - | Bold only |
| **Notes** | 80 | gray | Notes/metadata |
### Unsupported Tags (never use)
| Tag | Reason | Alternative |
|-----|--------|-------------|
| `[align]` | Not rendered | Remove; default to left alignment |
| `[hr]` | Renders as literal text | Use Markdown `---` |
| `[table]` | Limited support | Convert to a list or code block |
| HTML tags | Not supported | Use BBCode only |
## Execution Steps
### Step 1: Load config and analysis
```javascript
const config = JSON.parse(Read(`${workDir}/input-config.json`));
const analysis = JSON.parse(Read(`${workDir}/analysis.json`));
const content = config.original_content;
```
### Step 2: Callout configuration
```javascript
// Callout type mapping (pixel-based sizes)
const CALLOUT_CONFIG = {
  // Informational
  note: { icon: '📝', color: '#2196F3', label: 'Note' },
  info: { icon: '', color: '#2196F3', label: 'Info' },
  abstract: { icon: '📄', color: '#2196F3', label: 'Abstract' },
  summary: { icon: '📄', color: '#2196F3', label: 'Summary' },
  tldr: { icon: '📄', color: '#2196F3', label: 'TL;DR' },
  // Success / tips
  tip: { icon: '💡', color: '#4CAF50', label: 'Tip' },
  hint: { icon: '💡', color: '#4CAF50', label: 'Hint' },
  success: { icon: '✅', color: '#4CAF50', label: 'Success' },
  check: { icon: '✅', color: '#4CAF50', label: 'Done' },
  done: { icon: '✅', color: '#4CAF50', label: 'Done' },
  // Warnings
  warning: { icon: '⚠️', color: '#FF9800', label: 'Warning' },
  caution: { icon: '⚠️', color: '#FF9800', label: 'Caution' },
  attention: { icon: '⚠️', color: '#FF9800', label: 'Attention' },
  question: { icon: '❓', color: '#FF9800', label: 'Question' },
  help: { icon: '❓', color: '#FF9800', label: 'Help' },
  faq: { icon: '❓', color: '#FF9800', label: 'FAQ' },
  todo: { icon: '📋', color: '#FF9800', label: 'TODO' },
  // Errors / danger
  danger: { icon: '❌', color: '#F44336', label: 'Danger' },
  error: { icon: '❌', color: '#F44336', label: 'Error' },
  bug: { icon: '🐛', color: '#F44336', label: 'Bug' },
  important: { icon: '⭐', color: '#F44336', label: 'Important' },
  // Other
  example: { icon: '📋', color: '#9C27B0', label: 'Example' },
  quote: { icon: '💬', color: 'gray', label: 'Quote' },
  cite: { icon: '💬', color: 'gray', label: 'Quote' }
};
// Callout detection regex (supports +/- fold markers)
const CALLOUT_PATTERN = /^>\s*\[!(\w+)\][+-]?(?:\s+(.+))?$/;
```
### Step 3: Callout parser
```javascript
function parseCallouts(text) {
const lines = text.split('\n');
const result = [];
let i = 0;
while (i < lines.length) {
const match = lines[i].match(CALLOUT_PATTERN);
if (match) {
const type = match[1].toLowerCase();
const title = match[2] || null;
const content = [];
i++;
// Collect the callout's content lines
while (i < lines.length && lines[i].startsWith('>')) {
content.push(lines[i].replace(/^>\s*/, ''));
i++;
}
result.push({
isCallout: true,
type,
title,
content: content.join('\n')
});
} else {
result.push({ isCallout: false, line: lines[i] });
i++;
}
}
return result;
}
```
### Step 4: BBCode+MD converter
```javascript
function formatBBCodeMD(text) {
let result = text;
// ===== Heading conversion (pixel-based sizes) =====
result = result.replace(/^######\s*(.+)$/gm, '[b]$1[/b]');
result = result.replace(/^#####\s*(.+)$/gm, '[b]$1[/b]');
result = result.replace(/^####\s*(.+)$/gm, '[b]$1[/b]');
result = result.replace(/^###\s*(.+)$/gm, '[size=100][color=#333][b]$1[/b][/color][/size]');
result = result.replace(/^##\s*(.+)$/gm, '[size=120][color=#2196F3][b]$1[/b][/color][/size]');
result = result.replace(/^#\s*(.+)$/gm, '[size=150][color=#2196F3][b]$1[/b][/color][/size]');
// ===== Text styles =====
result = result.replace(/\*\*\*(.+?)\*\*\*/g, '[b][i]$1[/i][/b]');
result = result.replace(/\*\*(.+?)\*\*/g, '[b]$1[/b]');
result = result.replace(/__(.+?)__/g, '[b]$1[/b]');
result = result.replace(/\*(.+?)\*/g, '[i]$1[/i]');
result = result.replace(/_(.+?)_/g, '[i]$1[/i]');
result = result.replace(/~~(.+?)~~/g, '[s]$1[/s]');
result = result.replace(/==(.+?)==/g, '[color=yellow]$1[/color]');
// ===== HTML to BBCode =====
result = result.replace(/<mark>(.+?)<\/mark>/g, '[color=yellow]$1[/color]');
result = result.replace(/<u>(.+?)<\/u>/g, '[u]$1[/u]');
result = result.replace(/<details>\s*<summary>(.+?)<\/summary>\s*([\s\S]*?)<\/details>/g,
'[spoiler=$1]$2[/spoiler]');
// ===== Code =====
result = result.replace(/```(\w*)\n([\s\S]*?)```/g, '[code]$2[/code]');
// Inline code is left as-is (some forums do not support font=monospace)
// ===== Links and images =====
result = result.replace(/\[([^\]]+)\]\(([^)]+)\)/g, '[url=$2]$1[/url]');
result = result.replace(/!\[([^\]]*)\]\(([^)]+)\)/g, '[img]$2[/img]');
// ===== Blockquotes (non-callout) =====
// Note: each quote line becomes its own [quote] block (simple, but lossy for multi-line quotes)
result = result.replace(/^>\s+(.+)$/gm, '[quote]$1[/quote]');
// ===== Lists (use the • bullet) =====
result = result.replace(/^[-*+]\s+(.+)$/gm, '• $1');
// ===== Horizontal rules (keep Markdown syntax) =====
// `---` usually renders fine in mixed format; do not convert to [hr]
return result.trim();
}
```
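To see the replacement order in action, here is a reduced, runnable sketch using only the `##` heading, bold, link, and list rules from `formatBBCodeMD` above (`formatSubset` is a hypothetical name for this trimmed-down version):

```javascript
// Trimmed-down subset of formatBBCodeMD: heading, bold, link, and list rules only.
function formatSubset(text) {
  let result = text;
  result = result.replace(/^##\s*(.+)$/gm, '[size=120][color=#2196F3][b]$1[/b][/color][/size]');
  result = result.replace(/\*\*(.+?)\*\*/g, '[b]$1[/b]');
  result = result.replace(/\[([^\]]+)\]\(([^)]+)\)/g, '[url=$2]$1[/url]');
  result = result.replace(/^[-*+]\s+(.+)$/gm, '• $1');
  return result.trim();
}

const out = formatSubset('## Usage\n- **Bold** item\n- [Docs](https://example.com)');
console.log(out);
// [size=120][color=#2196F3][b]Usage[/b][/color][/size]
// • [b]Bold[/b] item
// • [url=https://example.com]Docs[/url]
```

The ordering matters: inline styles and links are converted before the list rule rewrites the leading `-`, so the link regex never has to cope with the `•` bullet.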
### Step 5: Callout Conversion
```javascript
function convertCallouts(text) {
const parsed = parseCallouts(text);
return parsed.map(item => {
if (item.isCallout) {
const cfg = CALLOUT_CONFIG[item.type] || CALLOUT_CONFIG.note;
const displayTitle = item.title || cfg.label;
// Wrap in [quote]; render the title at size=100
return `[quote]
[size=100][color=${cfg.color}][b]${cfg.icon} ${displayTitle}[/b][/color][/size]
${item.content}
[/quote]`;
}
return item.line;
}).join('\n');
}
```
### Step 6: Run the Conversion
```javascript
// 1. Process callouts first
let formattedContent = convertCallouts(content);
// 2. Then apply the general BBCode+MD conversion
formattedContent = formatBBCodeMD(formattedContent);
// 3. Collapse extra blank lines
formattedContent = formattedContent.replace(/\n{3,}/g, '\n\n');
```
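The final cleanup regex collapses any run of three or more newlines down to exactly one blank line:

```javascript
// Runs of 3+ newlines become exactly two (one blank line).
const cleaned = 'a\n\n\n\n\nb'.replace(/\n{3,}/g, '\n\n');
console.log(JSON.stringify(cleaned)); // "a\n\nb"
```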
### Step 7: Save the Result
```javascript
const outputFile = 'output.bbcode.txt';
Write(`${workDir}/${outputFile}`, formattedContent);
// Update the config
config.output_file = outputFile;
config.formatted_content = formattedContent;
Write(`${workDir}/input-config.json`, JSON.stringify(config, null, 2));
```
## Output
- **File**: `output.bbcode.txt`
- **Format**: mixed BBCode + Markdown
## Quality Checklist
- [ ] Headings use pixel sizes (150/120/100)
- [ ] No `[align]` tags
- [ ] No `[hr]` tags
- [ ] Horizontal rules use `---`
- [ ] Callouts correctly converted to `[quote]`
- [ ] Color values in hex format
- [ ] Content complete, nothing lost
## Next Phase
→ [Phase 4: Output & Preview](04-output-preview.md)

# Phase 4: Output & Preview
Output the final result and provide a preview.
## Objective
- Save the formatted content to a file
- Provide a preview
- Display conversion statistics
## Input
- Depends on: `input-config.json`, `output.*`
- Config: `{workDir}/input-config.json`
## Execution Steps
### Step 1: Load the Results
```javascript
const config = JSON.parse(Read(`${workDir}/input-config.json`));
const analysis = JSON.parse(Read(`${workDir}/analysis.json`));
const outputFile = `${workDir}/${config.output_file}`;
const formattedContent = Read(outputFile);
```
### Step 2: Generate a Summary
```javascript
const summary = {
input: {
method: config.input_method,
original_length: config.original_content.length,
word_count: config.original_content.split(/\s+/).length
},
output: {
format: config.target_format,
file: outputFile,
length: formattedContent.length
},
elements: analysis.stats,
reading_time: analysis.semantics?.estimated_reading_time || 1
};
console.log(`
╔════════════════════════════════════════════════════════════════╗
║ Text Formatter Summary ║
╠════════════════════════════════════════════════════════════════╣
║ Input: ${summary.input.word_count} words (${summary.input.original_length} chars)
║ Output: ${summary.output.format} → ${summary.output.file}
║ Elements Converted:
║ • Headings: ${summary.elements.headings}
║ • Paragraphs: ${summary.elements.paragraphs}
║ • Lists: ${summary.elements.lists}
║ • Code Blocks: ${summary.elements.code_blocks}
║ • Links: ${summary.elements.links}
║ Estimated Reading Time: ${summary.reading_time} min
╚════════════════════════════════════════════════════════════════╝
`);
```
### Step 3: HTML Preview (if applicable)
```javascript
if (config.target_format === 'HTML') {
// Generate a complete HTML file for preview
const previewHtml = `<!DOCTYPE html>
<html lang="zh-CN">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Text Formatter Preview</title>
<style>
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
line-height: 1.6;
max-width: 800px;
margin: 0 auto;
padding: 2rem;
background: #f5f5f5;
}
.content {
background: white;
padding: 2rem;
border-radius: 8px;
box-shadow: 0 2px 10px rgba(0,0,0,0.1);
}
h1, h2, h3, h4, h5, h6 { color: #333; margin-top: 1.5em; }
code { background: #f0f0f0; padding: 2px 6px; border-radius: 3px; }
pre { background: #282c34; color: #abb2bf; padding: 1rem; border-radius: 6px; overflow-x: auto; }
pre code { background: none; padding: 0; }
blockquote { border-left: 4px solid #ddd; margin: 0; padding-left: 1rem; color: #666; }
a { color: #0066cc; }
img { max-width: 100%; }
hr { border: none; border-top: 1px solid #ddd; margin: 2rem 0; }
</style>
</head>
<body>
<div class="content">
${formattedContent}
</div>
</body>
</html>`;
Write(`${workDir}/preview.html`, previewHtml);
// Optional: open the preview in a browser
// Bash(`start "${workDir}/preview.html"`); // Windows
// Bash(`open "${workDir}/preview.html"`); // macOS
}
```
### Step 4: Display the Output
```javascript
// Print the formatted content
console.log('\n=== Formatted Content ===\n');
console.log(formattedContent);
console.log('\n=========================\n');
// Notify the user
console.log(`
📁 Output saved to: ${outputFile}
${config.target_format === 'HTML' ? '🌐 Preview available: ' + workDir + '/preview.html' : ''}
💡 Tips:
- Copy the content above for immediate use
- Or access the saved file at the path shown
`);
```
### Step 5: Ask for Next Action
```javascript
const nextAction = await AskUserQuestion({
questions: [
{
question: "需要执行什么操作?",
header: "后续操作",
multiSelect: false,
options: [
{ label: "完成", description: "结束格式化流程" },
{ label: "转换为其他格式", description: "选择另一种输出格式" },
{ label: "重新编辑", description: "修改原始内容后重新格式化" }
]
}
]
});
if (nextAction["后续操作"] === "转换为其他格式") {
// Return to Phase 1 to pick a new format
console.log('请重新运行 /text-formatter 选择其他格式');
}
```
## Output
- **File**: `output.{ext}` (final output)
- **File**: `preview.html` (HTML preview; HTML format only)
- **Console**: summary statistics and formatted content
## Final Output Structure
```
{workDir}/
├── input-config.json      # configuration
├── analysis.json          # analysis results
├── output.md              # Markdown output (if selected)
├── output.bbcode.txt      # BBCode output (if selected)
├── output.html            # HTML output (if selected)
└── preview.html           # HTML preview page
```
## Quality Checklist
- [ ] Output file saved
- [ ] Statistics displayed correctly
- [ ] Preview works (HTML format only)
- [ ] User can access the output content
## Completion
This is the final phase; the formatting workflow is complete.
