Compare commits

...

348 Commits

Author SHA1 Message Date
catlog22
369b470969 feat: enhance CoordinatorEmptyState and ThemeSelector with gradient utilities and improved layout 2026-02-04 23:06:00 +08:00
catlog22
de989aa038 feat: unify CLI output handling and enhance theme variables
- Updated `CliStreamMonitorNew`, `CliStreamMonitorLegacy`, and `CliViewerPage` components to prioritize `unitContent` from payloads, falling back to `data` when necessary.
- Enhanced `colorGenerator` to include legacy variables for compatibility with shadcn/ui.
- Refactored orchestrator index to unify node exports under a single module.
- Improved `appStore` to clear both new and legacy CSS variables when applying themes.
- Added new options to CLI execution for raw and final output modes, improving programmatic output handling.
- Enhanced `cli-output-converter` to normalize cumulative delta frames and avoid duplication in streaming outputs.
- Introduced a new unified workflow specification for prompt template-based workflows, replacing the previous multi-type node system.
- Added tests for CLI final output handling and streaming output converter to ensure correct behavior in various scenarios.
2026-02-04 22:57:41 +08:00
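A minimal sketch of the payload handling described in the commit above, assuming a payload shape along these lines (the field names come from the commit message; the interface itself is illustrative):

```typescript
// Hypothetical payload handling: prefer unitContent, fall back to data.
interface StreamPayload {
  unitContent?: string;
  data?: string;
}

function extractContent(payload: StreamPayload): string {
  return payload.unitContent ?? payload.data ?? '';
}
```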
catlog22
4ee165119b refactor: unify node types into a single PromptTemplate model
- Removed individual node components (SlashCommandNode, FileOperationNode, etc.) and replaced them with a unified PromptTemplateNode.
- Updated flow types and interfaces to reflect the new single node type system.
- Refactored flow execution logic to handle the new unified model, simplifying node execution and context handling.
- Adjusted UI components to support the new PromptTemplateNode, including instruction display and context references.
- Cleaned up legacy code related to removed node types and ensured compatibility with the new structure.
2026-02-04 22:22:27 +08:00
catlog22
113c14970f feat: add CommandCombobox component for selecting slash commands and update PropertyPanel to use it
refactor: remove unused liteTasks localization from common.json and zh/common.json
refactor: consolidate liteTasks localization into lite-tasks.json and zh/lite-tasks.json
refactor: simplify MultiCliTab type in LiteTaskDetailPage
refactor: enhance task display in LiteTasksPage with additional metadata
2026-02-04 19:24:31 +08:00
catlog22
7b2ac46760 refactor: streamline store access in useWebSocket by consolidating state retrieval into getStoreState function 2026-02-04 18:00:06 +08:00
catlog22
346c87a706 refactor: optimize useWebSocket hook by consolidating store references and improving handler stability 2026-02-04 17:29:30 +08:00
catlog22
e260a3f77b feat: enhance theme customization and UI components
- Implemented a new color generation module to create CSS variables based on a single hue value, supporting both light and dark modes.
- Added unit tests for the color generation logic to ensure accuracy and robustness.
- Replaced dropdown location filter with tab navigation in RulesManagerPage and SkillsManagerPage for improved UX.
- Updated app store to manage custom theme hues and states, allowing for dynamic theme adjustments.
- Sanitized notification content before persisting to localStorage to prevent sensitive data exposure.
- Refactored memory retrieval logic to handle archived status more flexibly.
- Improved Tailwind CSS configuration with new gradient utilities and animations.
- Minor adjustments to SettingsPage layout for better visual consistency.
2026-02-04 17:20:40 +08:00
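A rough sketch of how a single-hue color generator might emit CSS variables for light and dark modes; the variable names and HSL values here are assumptions, not the actual colorGenerator implementation:

```typescript
// Hypothetical single-hue theme generation into CSS custom properties.
function generateThemeVariables(hue: number, mode: 'light' | 'dark'): Record<string, string> {
  const isDark = mode === 'dark';
  return {
    '--background': `hsl(${hue} 30% ${isDark ? '10%' : '98%'})`,
    '--foreground': `hsl(${hue} 15% ${isDark ? '95%' : '10%'})`,
    '--primary': `hsl(${hue} 80% ${isDark ? '60%' : '45%'})`,
  };
}

function applyTheme(hue: number, mode: 'light' | 'dark'): void {
  for (const [name, value] of Object.entries(generateThemeVariables(hue, mode))) {
    document.documentElement.style.setProperty(name, value);
  }
}
```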
catlog22
88616224e0 refactor: remove redundant usage guidelines from workflow documentation 2026-02-04 17:18:56 +08:00
catlog22
6e6c2b9e09 fix: add mountedRef guard to useWebSocket to prevent setState on unmounted component
- Add mountedRef to track component mount state in useWebSocket hook
- Guard handleMessage callback to return early if component is unmounted
- Set mountedRef.current = false in useEffect cleanup function

This addresses finding perf-011 about potential memory leaks from setState
on unmounted components. The mounted ref provides a defensive check to
ensure no state updates occur after component unmount.

Related: code review group G03 - Critical Memory Leaks
2026-02-04 17:05:20 +08:00
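A minimal sketch of the mountedRef guard this commit describes, assuming a hook shaped roughly like useWebSocket (the socket wiring and state are illustrative):

```typescript
// Hypothetical simplification of the useWebSocket mounted-guard pattern.
import { useCallback, useEffect, useRef, useState } from 'react';

export function useWebSocket(url: string) {
  const [lastMessage, setLastMessage] = useState<string | null>(null);
  const mountedRef = useRef(true);

  const handleMessage = useCallback((event: MessageEvent) => {
    // Guard: skip setState once the component has unmounted.
    if (!mountedRef.current) return;
    setLastMessage(String(event.data));
  }, []);

  useEffect(() => {
    mountedRef.current = true;
    const socket = new WebSocket(url);
    socket.addEventListener('message', handleMessage);
    return () => {
      // Cleanup flips the flag before the socket goes away.
      mountedRef.current = false;
      socket.removeEventListener('message', handleMessage);
      socket.close();
    };
  }, [url, handleMessage]);

  return lastMessage;
}
```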
catlog22
2bfce150ec feat: enhance RecommendedMcpWizard with icon mapping; improve ExecutionTab accessibility; refine NavGroup path matching; update fetchSkills to include enabled status; add loading and error messages to localization 2026-02-04 16:38:18 +08:00
catlog22
ac95ee3161 feat: update CLI mode toggle for improved UI and functionality; refactor RecommendedMcpWizard API call; add new localization strings for installation wizard and tools 2026-02-04 15:26:36 +08:00
catlog22
8454ae4f41 feat: add quick install templates and index status to CLI hooks and home locales
feat: enhance MCP manager with interactive question feature and update locales

feat: implement tags and available models management in settings page

fix: improve process termination logic in stop command for React frontend

fix: update view command to default to 'js' frontend

feat: add Recommended MCP Wizard component for dynamic server configuration
2026-02-04 15:24:34 +08:00
catlog22
341331325c feat: Enhance A2UI with RadioGroup and Markdown support
- Added support for RadioGroup component in A2UI, allowing single selection from multiple options.
- Implemented Markdown parsing in A2UIPopupCard for better content rendering.
- Updated A2UIPopupCard to handle different question types and improved layout for multi-select and single-select questions.
- Introduced new utility functions for handling disabled items during installation.
- Enhanced installation process to restore previously disabled skills and commands.
- Updated notification store and related tests to accommodate new features.
- Adjusted Vite configuration for better development experience.
2026-02-04 13:45:47 +08:00
catlog22
1a05551d00 feat(a2ui): enhance A2UI notification handling and multi-select support 2026-02-04 11:11:55 +08:00
catlog22
c6093ef741 feat: add CLI Command Node and Prompt Node components for orchestrator
- Implemented CliCommandNode component for executing CLI tools with AI models.
- Implemented PromptNode component for constructing AI prompts with context.
- Added styling for mode and tool badges in both components.
- Enhanced user experience with command and argument previews, execution status, and error handling.

test: add comprehensive tests for ask_question tool

- Created direct test for ask_question tool execution.
- Developed end-to-end tests to validate ask_question tool integration with WebSocket and A2UI surfaces.
- Implemented simple and integrated WebSocket tests to ensure proper message handling and surface reception.
- Added tool registration test to verify ask_question tool is correctly registered.

chore: add WebSocket listener and simulation tests

- Added WebSocket listener for A2UI surfaces to facilitate testing.
- Implemented frontend simulation test to validate complete flow from backend to frontend.
- Created various test scripts to ensure robust testing of ask_question tool functionality.
2026-02-03 23:10:36 +08:00
catlog22
a806d70d9b refactor: remove lite-plan-c workflow and update orchestrator UI components
- Deleted the lite-plan-c workflow file to streamline the planning process.
- Updated orchestrator localization files to include new variable picker and multi-node selector messages.
- Modified PropertyPanel to support new execution modes and added output variable input fields.
- Enhanced SlashCommandNode to reflect changes in execution modes.
- Introduced MultiNodeSelector component for better node selection management.
- Added VariablePicker component for selecting variables with improved UX.
2026-02-03 21:24:34 +08:00
catlog22
9bb50a13fa feat(websocket): integrate A2UI message handling and answer callback registration 2026-02-03 21:18:19 +08:00
catlog22
bb4cd0529e feat(codex): add parallel execution workflows with worktree support
Add three new workflow commands for parallel development:

- collaborative-plan-parallel.md: Planning with execution group assignment
  - Supports automatic/balanced/manual group assignment strategies
  - Assigns codex instances and branch names per group
  - Detects cross-group conflicts and dependencies

- unified-execute-parallel.md: Worktree-based group execution
  - Executes tasks in isolated Git worktrees (.ccw/worktree/{group-id}/)
  - Enables true parallel execution across multiple codex instances
  - Simplified focus on execution only

- worktree-merge.md: Dependency-aware worktree merging
  - Merges completed worktrees in dependency order
  - Handles cross-group file conflicts
  - Optional worktree cleanup after merge

Key improvements:
- True parallel development via Git worktrees (vs branch switching)
- Clear separation of concerns (plan → execute → merge)
- Support for multi-codex coordination
2026-02-03 21:17:45 +08:00
catlog22
a8385e2ea5 feat: Enhance Project Overview and Review Session pages with improved UI and functionality
- Updated ProjectOverviewPage to enhance the guidelines section with better spacing, larger icons, and improved button styles.
- Refactored ReviewSessionPage to unify filter controls, improve selection actions, and enhance the findings list with a new layout.
- Added dimension tabs and severity filters to the SessionsPage for better navigation and filtering.
- Improved SessionDetailPage to utilize a mapping for status labels, enhancing internationalization support.
- Refactored TaskListTab to remove unused priority configuration code.
- Updated store types to better reflect session metadata structure.
- Added a temporary JSON file for future use.
2026-02-03 20:58:03 +08:00
catlog22
37ba849e75 feat: add CLI Viewer Page with multi-pane layout and state management
- Implemented the CliViewerPage component for displaying CLI outputs in a configurable multi-pane layout.
- Integrated Zustand for state management, allowing for dynamic layout changes and tab management.
- Added layout options: single, split horizontal, split vertical, and 2x2 grid.
- Created viewerStore for managing layout, panes, and tabs, including actions for adding/removing panes and tabs.
- Added CoordinatorPage barrel export for easier imports.
2026-02-03 17:28:26 +08:00
catlog22
b63e254f36 fix(frontend): fix button icon and text layout in help page
- Add inline-flex items-center to all button links for proper icon alignment
- Add flex-shrink-0 to all icons to prevent shrinking
- Change support buttons container to flex-wrap for better responsiveness
- Ensure icons and text stay on same line in all buttons
2026-02-03 15:14:15 +08:00
catlog22
86b1a15671 feat(frontend): optimize help page layout and add i18n support
- Add Chinese and English translations for help page
- Fix button overlapping issues with responsive layouts
- Improve icon and button positioning with flexbox
- Add proper spacing and breakpoints for mobile/tablet/desktop
- Make cards consistent with flexbox layout
- Ensure all buttons have proper whitespace-nowrap
- Use flex-shrink-0 for icons to prevent squashing
2026-02-03 15:07:14 +08:00
catlog22
39b80b3386 feat: initialize monorepo with package.json for CCW workflow platform 2026-02-03 14:42:20 +08:00
catlog22
5483a72e9f feat: add Accordion component for UI and Zustand store for coordinator management
- Implemented Accordion component using Radix UI for collapsible sections.
- Created Zustand store to manage coordinator execution state, command chains, logs, and interactive questions.
- Added validation tests for CLI settings type definitions, ensuring type safety and correct behavior of helper functions.
2026-02-03 10:02:40 +08:00
catlog22
bcb4af3ba0 feat(commands): add ccw-plan and ccw-test coordinators with arrow-style documentation
- Add ccw-plan.md: Planning coordinator with 10 modes (lite/multi-cli/full/plan-verify/replan + cli/issue/rapid-to-issue/brainstorm-with-file/analyze-with-file)
- Add ccw-test.md: Test coordinator with 4 modes (gen/fix/verify/tdd) and auto-iteration support
- Refactor ccw-debug.md: Convert JavaScript pseudocode to arrow instruction format for better readability
- Fix command format: Use /ccw* skill call syntax instead of ccw* across all command files

Key features:
- CLI integration for quick analysis and recommendations
- Issue workflow integration (rapid-to-issue bridge)
- With-File workflows for documented multi-CLI collaboration
- Consistent arrow-based flow diagrams across all coordinators
- TodoWrite tracking with mode-specific prefixes (CCWP/CCWT/CCWD)
- Dual tracking system (TodoWrite + status.json)
2026-02-02 21:41:56 +08:00
catlog22
545679eeb9 Refactor execution modes and CLI integration across agents
- Updated code-developer, tdd-developer, and test-fix-agent to streamline execution modes based on task.meta.execution_config.method.
- Removed legacy command handling and introduced CLI handoff for 'cli' execution method.
- Enhanced buildCliHandoffPrompt to include task JSON path and improved context handling.
- Updated task-generate-agent and task-generate-tdd to reflect new execution method mappings and removed command field from implementation_approach.
- Improved CLI settings validation in CliSettingsModal with format and length checks.
- Added localization for new CLI settings messages in English and Chinese.
- Enhanced GPU selector to use localized strings for GPU types.
- Introduced TypeScript LSP setup documentation for better user guidance.
2026-02-02 19:21:30 +08:00
catlog22
2305e7b8e7 feat(tests): add verification script for execution ID with output analysis 2026-02-02 11:55:12 +08:00
catlog22
48871f0d9e feat(tests): add CLI API response format tests and output format detection 2026-02-02 11:46:12 +08:00
catlog22
a54246a46f feat: add API settings page for managing providers, endpoints, cache, model pools, and CLI settings
feat: implement semantic install dialog for CodexLens with GPU mode selection
feat: add radio group component for GPU mode selection
feat: update navigation and localization for API settings and semantic install
2026-02-02 11:16:19 +08:00
catlog22
e4b627bc76 fix(hooks): fix EventGroup undefined component error
- Add Play icon import for SessionStart trigger type
- Add SessionStart case to getEventIcon function
- Add SessionStart case to getEventColor function with purple color
- Add default cases to both functions to prevent undefined returns

Fixes runtime error: 'Element type is invalid: expected a string or class/function but got: undefined' in EventGroup component
2026-02-02 10:56:45 +08:00
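The fix pattern amounts to giving the lookup functions a default branch so they can never hand an undefined component to React. A hedged sketch (the fallback icon choice is an assumption):

```typescript
// Hypothetical icon lookup with a default case so EventGroup never receives undefined.
import { Play, Zap } from 'lucide-react';
import type { LucideIcon } from 'lucide-react';

function getEventIcon(triggerType: string): LucideIcon {
  switch (triggerType) {
    case 'SessionStart':
      return Play; // newly handled trigger type
    default:
      return Zap; // safe fallback instead of falling off the switch
  }
}
```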
catlog22
392f89f62f fix(i18n): fix JSON structure and add missing Chinese translations for API Settings
- Fix duplicate validation key in common section
- Fix messages section nesting (should be sibling to common, not child)
- Add missing Chinese translations:
  - providers.showDisabled/hideDisabled/showAll
  - endpoints.showDisabled/showAll
  - common.showAll
2026-02-02 10:52:43 +08:00
catlog22
1beb98366b refactor(frontend): change API Settings from sidebar to horizontal tabs layout
Refactor ApiSettingsPage to use horizontal Tabs component like CodexLens page,
replacing the previous vertical sidebar navigation.

Changes:
- Replace Card-based sidebar with horizontal TabsList/TabsTrigger components
- Remove split-panel layout (sidebar + main panel)
- Use TabsContent to wrap each tab's content
- Import and use Tabs, TabsList, TabsTrigger, TabsContent from ui/Tabs
- Add api-settings.json import to en/zh locale index files for i18n support

Layout comparison:
- Before: Vertical sidebar (lg:w-64) with Card + nav buttons
- After: Horizontal tabs (TabsList) with tab triggers
2026-02-02 10:49:21 +08:00
catlog22
c522681c4c fix(frontend): fix i18n and runtime error in EndpointList
Fix hardcoded English labels in cache strategy display and prevent runtime
error when filePatterns is undefined.

Changes:
- Replace hardcoded "TTL:", "Max:", "Patterns:" with formatMessage calls
- Add optional chaining for filePatterns.length to prevent undefined error
- Add "filePatterns" i18n key to en/zh api-settings.json
2026-02-02 10:46:36 +08:00
catlog22
abce912ee5 feat(frontend): implement comprehensive API Settings Management Interface
Implement a complete API Management Interface for React frontend with split-
panel layout, migrating all features from legacy JS frontend.

New Features:
- API Settings page with 5 tabs: Providers, Endpoints, Cache, Model Pools, CLI Settings
- Provider Management: CRUD operations, multi-key rotation, health checks, test connection
- Endpoint Management: CRUD operations, cache strategy configuration, enable/disable toggle
- Cache Settings: Global configuration, statistics display, clear cache functionality
- Model Pool Management: CRUD operations, auto-discovery feature, provider exclusion
- CLI Settings Management: Provider-based and Direct modes, full CRUD support
- Multi-Key Settings Modal: Manage API keys with rotation strategies and weights
- Manage Models Modal: View and manage models per provider (LLM and Embedding)
- Sync to CodexLens: Integration handler for provider configuration sync

Technical Implementation:
- Created 12 new React components in components/api-settings/
- Extended lib/api.ts with 460+ lines of API client functions
- Created hooks/useApiSettings.ts with 772 lines of TanStack Query hooks
- Added RadioGroup UI component for form selections
- Implemented unified error handling with useNotifications across all operations
- Complete i18n support (500+ keys in English and Chinese)
- Route integration (/api-settings) and sidebar navigation

Code Quality:
- All acceptance criteria from plan.json verified
- Code review passed with Gemini (all 7 IMPL tasks complete)
- Follows existing patterns: Shadcn UI, TanStack Query, react-intl, Lucide icons
2026-02-01 23:58:04 +08:00
catlog22
690597bae8 fix(cli): fix codex review command argument handling
- Fix prompt incorrectly passed to codex review with target flags (--uncommitted/--base/--commit)
- Remove unsupported --skip-git-repo-check flag from codex review command
- Fix TypeScript type error in codex-lens.ts (missing 'installed' property)

Codex CLI constraint: --uncommitted/--base/--commit and [PROMPT] are mutually exclusive.
2026-02-01 23:51:58 +08:00
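A sketch of an argument builder that respects the mutual exclusion noted above; the option names mirror the flags in the commit, everything else is assumed:

```typescript
// Hypothetical codex review argument builder: target flags suppress the prompt.
function buildCodexReviewArgs(opts: {
  prompt?: string;
  uncommitted?: boolean;
  base?: string;
  commit?: string;
}): string[] {
  const args = ['review'];
  if (opts.uncommitted) args.push('--uncommitted');
  else if (opts.base) args.push('--base', opts.base);
  else if (opts.commit) args.push('--commit', opts.commit);
  else if (opts.prompt) args.push(opts.prompt); // prompt only when no target flag is given
  return args;
}
```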
catlog22
e5252f8a77 feat: add useApiSettings hook for managing API settings, including providers, endpoints, cache, and model pools
- Implemented hooks for CRUD operations on providers and endpoints.
- Added cache management hooks for cache stats and settings.
- Introduced model pool management hooks for high availability and load balancing.
- Created localization files for English and Chinese translations of API settings.
2026-02-01 23:14:55 +08:00
catlog22
b76424feef fix(frontend): show empty state placeholders for quality rules and learnings in view mode
- Modify conditional rendering logic to always display section headers in view mode
- Add empty state placeholders when quality_rules/learnings arrays are empty
- Add localization keys for empty state messages in English and Chinese
- Improves UX by making these features visible even when no entries exist

Related files:
- ccw/frontend/src/pages/ProjectOverviewPage.tsx
- ccw/frontend/src/locales/en/project-overview.json
- ccw/frontend/src/locales/zh/project-overview.json
2026-02-01 23:11:46 +08:00
catlog22
76967a7350 feat: enhance workflow commands with dynamic analysis, multi-agent parallel exploration, and best practices
- analyze-with-file: Add dynamic direction generation from dimensions, support up to 4 parallel agent perspectives, enable cli-explore-agent parallelization
- brainstorm-with-file: Add dynamic direction generation from dimensions, add Agent-First guideline for complex tasks
- collaborative-plan-with-file: Add Understanding phase guidelines to prioritize latest documentation and resolve ambiguities with user clarification

Key improvements:
- Dimension-Direction Mapping replaces hard-coded options, enabling task-aware direction recommendations
- Multi-perspective parallel exploration (Analysis Perspectives, up to 4) with synthesis
- cli-explore-agent supports parallel execution per perspective for richer codebase context
- Agent-First principle: delegate complex tasks (code analysis, implementation) to agents/CLI instead of main process analysis
- Understanding phase emphasizes latest documentation discovery and clarification of ambiguities before planning
2026-02-01 22:51:04 +08:00
catlog22
7dcc0a1c05 feat: update usage recommendations across multiple workflow commands to require user confirmation and improve clarity 2026-02-01 22:04:26 +08:00
catlog22
5fb910610a fix(frontend): resolve URL path inconsistency caused by localStorage race condition
Fixed the issue where accessing the React frontend via ccw view with a URL path
parameter would load a different workspace due to localStorage rehydration race
condition.

Root cause: zustand persist's onRehydrateStorage callback automatically called
switchWorkspace with the cached projectPath before AppShell could process the
URL parameter.

Changes:
- workflowStore.ts: Remove automatic switchWorkspace from onRehydrateStorage
- AppShell.tsx: Centralize initialization logic with clear priority order:
  * Priority 1: URL ?path= parameter (explicit user intent)
  * Priority 2: localStorage fallback (implicit cache)
  * Added isWorkspaceInitialized state lock to prevent duplicate execution

Fixes: URL showing ?path=/new/path but loading /old/path from cache
2026-02-01 18:35:51 +08:00
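A sketch of the initialization priority described above; the localStorage key and function names are assumptions:

```typescript
// Hypothetical initialization order: URL parameter wins over the cached workspace.
let isWorkspaceInitialized = false;

function initializeWorkspace(switchWorkspace: (path: string) => void): void {
  if (isWorkspaceInitialized) return; // state lock against duplicate execution
  isWorkspaceInitialized = true;

  const urlPath = new URLSearchParams(window.location.search).get('path');
  if (urlPath) {
    switchWorkspace(urlPath); // Priority 1: explicit user intent
    return;
  }

  const cachedPath = localStorage.getItem('ccw:lastWorkspace');
  if (cachedPath) {
    switchWorkspace(cachedPath); // Priority 2: implicit cache fallback
  }
}
```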
catlog22
d46406df4a feat: Add CodexLens Manager Page with tabbed interface for managing CodexLens features
feat: Implement ConflictTab component to display conflict resolution decisions in session detail

feat: Create ImplPlanTab component to show implementation plan with modal viewer in session detail

feat: Develop ReviewTab component to display review findings by dimension in session detail

test: Add end-to-end tests for CodexLens Manager functionality including navigation, tab switching, and settings validation
2026-02-01 17:45:38 +08:00
catlog22
8dc115a894 feat(codex): convert 4 workflow commands to Codex prompt format with serial execution
Convert parallel multi-agent/multi-CLI workflows to Codex-compatible serial execution:

- brainstorm-with-file: Parallel Creative/Pragmatic/Systematic perspectives → Serial CLI execution
- analyze-with-file: cli-explore-agent + parallel CLI → Native tools (Glob/Grep/Read) + serial Gemini CLI
- collaborative-plan-with-file: Parallel sub-agents → Serial domain planning loop
- unified-execute-with-file: DAG-based parallel wave execution → Serial task-by-task execution

Key Technical Changes:
- YAML header: Remove 'name' and 'allowed-tools' fields
- Variables: $ARGUMENTS → $TOPIC/$TASK/$PLAN
- Task agents: Task() calls → ccw cli commands with --tool flag
- Parallel execution: Parallel Task/Bash calls → Sequential loops
- Session format: Match existing Codex prompt structure

Pattern: For multi-step workflows, use explicit wait points with "Wait for completion before proceeding"
2026-02-01 17:20:00 +08:00
catlog22
e9789e747a feat: update unified-execute-with-file command for sequential execution and improved dependency handling 2026-02-01 15:35:41 +08:00
catlog22
0342976c51 feat: add JSON detection utility for various formats
- Implemented a utility function `detectJsonInLine` to identify and parse JSON data from different formats including direct JSON, tool calls, tool results, embedded JSON, and code blocks.
- Introduced `JsonDetectionResult` interface to standardize the detection results.
- Added a new test results file to track the status of tests.
2026-02-01 15:17:03 +08:00
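A minimal sketch of a detectJsonInLine-style helper covering just two of the five formats the commit lists (direct and embedded JSON); the interface fields are assumptions:

```typescript
// Hypothetical JSON-in-line detection: direct parse first, then an embedded {...} span.
interface JsonDetectionResult {
  isJson: boolean;
  format?: 'direct' | 'embedded';
  data?: unknown;
}

export function detectJsonInLine(line: string): JsonDetectionResult {
  const trimmed = line.trim();

  try {
    return { isJson: true, format: 'direct', data: JSON.parse(trimmed) };
  } catch {
    // not a bare JSON line; try to find an embedded object
  }

  const match = trimmed.match(/\{[\s\S]*\}/);
  if (match) {
    try {
      return { isJson: true, format: 'embedded', data: JSON.parse(match[0]) };
    } catch {
      // embedded span was not valid JSON either
    }
  }

  return { isJson: false };
}
```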
catlog22
f196b76064 chore(tests): remove obsolete test results and reports 2026-02-01 11:16:17 +08:00
catlog22
44cc4cad0f chore(tests): add gitignore for test results and reports 2026-02-01 11:16:00 +08:00
catlog22
fc1471396c feat(e2e): generate comprehensive E2E tests and fix TypeScript compilation errors
- Add 23 E2E test spec files covering 94 API endpoints across business domains
- Fix TypeScript compilation errors (file casing, duplicate export, implicit any)
- Update Playwright deprecated API calls (getByPlaceholderText -> getByPlaceholder)
- Tests cover: dashboard, sessions, tasks, workspace, loops, issues-queue, discovery,
  skills, commands, memory, project-overview, session-detail, cli-history,
  cli-config, cli-installations, lite-tasks, review, mcp, hooks, rules,
  index-management, prompt-memory, file-explorer

Test coverage: 100% domain coverage (23/23 domains)
API coverage: 94 endpoints across 23 business domains
Quality gates: 0 CRITICAL issues, all anti-patterns passed

Note: 700+ tests currently time out; they require the backend server (port 3456) to be running in order to pass
2026-02-01 11:15:11 +08:00
catlog22
cf401d00e1 feat(cli-stream-monitor): add JSON card components and refactor tab UI
Add new components for CLI stream monitoring with JSON visualization:
- ExecutionTab: simplified tab with status indicator and active styling
- JsonCard: collapsible card for JSON data with type-based styling (6 types)
- JsonField: recursive JSON field renderer with color-coded values
- OutputLine: auto-detects JSON and renders appropriate component
- jsonDetector: smart JSON detection supporting 5 formats

Refactor CliStreamMonitorLegacy to use new components:
- Replace inline tab rendering with ExecutionTab component
- Replace inline output rendering with OutputLine component
- Add Badge import for active count display
- Fix type safety with proper id propagation

Component features:
- Type-specific styling for tool_call, metadata, system, stdout, stderr, thought
- Collapsible content (show 3 fields by default, expandable)
- Copy button and raw JSON view toggle
- Timestamp display
- Auto-detection of JSON in output lines

Fixes:
- Missing jsonDetector.ts file
- Type mismatch between OutputLine (6 types) and JsonCard (4 types)
- Unused isActive prop in ExecutionTab
2026-02-01 11:14:33 +08:00
catlog22
b66d20f5a6 Merge branch 'main' of https://github.com/catlog22/Claude-Code-Workflow 2026-01-31 23:12:45 +08:00
catlog22
a2206df50f feat: add CliStreamMonitor and related components for CLI output streaming
- Implemented CliStreamMonitor component for real-time CLI output monitoring with multi-execution support.
- Created JsonFormatter component for displaying JSON content in various formats (text, card, inline).
- Added utility functions for JSON detection and formatting in jsonUtils.ts.
- Introduced LogBlock utility functions for styling CLI output lines.
- Developed a new Collapsible component for better UI interactions.
- Created IssueHubPage for managing issues, queue, and discovery with tab navigation.
2026-01-31 23:12:39 +08:00
catlog22
2f10305945 refactor(commands): replace SlashCommand with Skill tool invocation
Update all workflow command files to use Skill tool instead of SlashCommand:
- Change allowed-tools: SlashCommand(*) → Skill(*)
- Convert SlashCommand(command="/path", args="...") → Skill(skill="path", args="...")
- Update descriptive text references from SlashCommand to Skill
- Remove javascript language tags from code blocks (25 files)

Affected 25 command files across:
- workflow: plan, execute, init, lite-plan, lite-fix, etc.
- workflow/test: test-fix-gen, test-cycle-execute, tdd-plan, tdd-verify
- workflow/review: review-cycle-fix, review-module-cycle, review-session-cycle
- workflow/ui-design: codify-style, explore-auto, imitate-auto
- workflow/brainstorm: brainstorm-with-file, auto-parallel
- issue: discover, discover-by-prompt, plan
- ccw, ccw-debug

This aligns with the actual Skill tool interface which uses 'skill' and 'args' parameters.
2026-01-31 23:09:59 +08:00
catlog22
92ded5908f feat(flowchart): update implementation section and enhance edge styling
feat(taskdrawer): improve implementation steps display and support multiple file formats
2026-01-31 21:38:02 +08:00
catlog22
1bd082a725 feat: add tests and implementation for issue discovery and queue pages
- Implemented `DiscoveryPage` with session management and findings display.
- Added tests for `DiscoveryPage` to ensure proper rendering and functionality.
- Created `QueuePage` for managing issue execution queues with stats and actions.
- Added tests for `QueuePage` to verify UI elements and translations.
- Introduced `useIssues` hooks for fetching and managing issue data.
- Added loading skeletons and error handling for better user experience.
- Created `vite-env.d.ts` for TypeScript support in Vite environment.
2026-01-31 21:20:10 +08:00
catlog22
6d225948d1 feat(workflow): add interactive init-guidelines wizard for project guidelines configuration
Add /workflow:init-guidelines command that interactively fills project-guidelines.json
based on project analysis from project-tech.json. Features:
- 5-round questionnaire covering coding style, naming, file structure, documentation,
  architecture, tech stack, performance, security, and quality rules
- Dynamic question generation based on detected language/framework/architecture
- Append/reset mode for existing guidelines
- Multi-select options with "Other" for custom input

Modify /workflow:init to ask user whether to run init-guidelines after initialization
when guidelines are empty (scaffold only).
2026-01-31 20:58:29 +08:00
catlog22
35f9116cce feat: add support for dual frontend (JS and React) in the CCW application
- Updated CLI to include `--frontend` option for selecting frontend type (js, react, both).
- Modified serve command to start React frontend when specified.
- Implemented React frontend startup and shutdown logic.
- Enhanced server routing to handle requests for both JS and React frontends.
- Added workspace selector component with i18n support.
- Updated tests to reflect changes in header and A2UI components.
- Introduced new Radix UI components for improved UI consistency.
- Refactored A2UIButton and A2UIDateTimeInput components for better code clarity.
- Created migration plan for gradual transition from JS to React frontend.
2026-01-31 16:46:45 +08:00
catlog22
345437415f Add end-to-end tests for workspace switching and backend tests for ask_question tool
- Implemented E2E tests for workspace switching functionality, covering scenarios such as switching workspaces, data isolation, language preference maintenance, and UI updates.
- Added tests to ensure workspace data is cleared on logout and handles unsaved changes during workspace switches.
- Created comprehensive backend tests for the ask_question tool, validating question creation, execution, answer handling, cancellation, and timeout scenarios.
- Included edge case tests to ensure robustness against duplicate questions and invalid answers.
2026-01-31 16:02:20 +08:00
catlog22
715ef12c92 feat(a2ui): Implement A2UI backend with question handling and WebSocket support
- Added A2UITypes for defining question structures and answers.
- Created A2UIWebSocketHandler for managing WebSocket connections and message handling.
- Developed ask-question tool for interactive user questions via A2UI.
- Introduced platformUtils for platform detection and shell command handling.
- Centralized TypeScript types in index.ts for better organization.
- Implemented compatibility checks for hook templates based on platform requirements.
2026-01-31 15:27:12 +08:00
catlog22
4e009bb03a feat: add --with-commit parameter to workflow:execute for auto-commit after each task
- Add --with-commit flag to argument-hint and usage examples
- Auto-commit changes based on summary document after each agent task
- Minimal principle: only commit files modified by completed task
- Commit message format: {type}: {task-title} - {summary}
- Type mapping: feature→feat, bugfix→fix, refactor→refactor, etc.
2026-01-31 10:31:06 +08:00
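The type mapping could be as simple as a small lookup; entries beyond the ones named in the commit, and the fallback prefix, are assumptions:

```typescript
// Hypothetical task-type to conventional-commit prefix mapping.
const COMMIT_TYPE_MAP: Record<string, string> = {
  feature: 'feat',
  bugfix: 'fix',
  refactor: 'refactor',
};

function buildCommitMessage(taskType: string, title: string, summary: string): string {
  const type = COMMIT_TYPE_MAP[taskType] ?? 'chore'; // fallback prefix is an assumption
  return `${type}: ${title} - ${summary}`;
}
```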
catlog22
a0f81f8841 Add API error monitoring tests and error context snapshots for various browsers
- Created error context snapshots for Firefox, WebKit, and Chromium to capture UI state during API error monitoring.
- Implemented e2e tests for API error detection, including console errors, failed API requests, and proxy errors.
- Added functionality to ignore specific API patterns in monitoring assertions.
- Ensured tests validate the monitoring system's ability to detect and report errors effectively.
2026-01-31 00:15:59 +08:00
catlog22
f1324a0bc8 feat: enhance test-task-generate with Gemini enrichment and planning-notes mechanism
Implement two major enhancements to test generation workflow:

1. Gemini Test Enhancement (Phase 1.5)
   - Add Gemini CLI analysis for comprehensive test suggestions
   - Generate enriched test specifications for L1-L3 test layers
   - Focus on API contracts, integration patterns, error scenarios
   - Create gemini-enriched-suggestions.md as artifact

2. Planning Notes Mechanism (test-planning-notes.md)
   - Similar to plan.md's planning-notes pattern
   - Record Test Intent (Phase 1)
   - Embed complete Gemini enrichment (Phase 1.5)
   - Track consolidated test requirements
   - Support N+1 context with decisions and deferred items

Implementation Details:
   - Created Gemini prompt template: test-suggestions-enhancement.txt
   - Enhanced test-task-generate.md with Phase 1, 1.5, 2 logic
   - Gemini content embedded in planning-notes for single source of truth
   - test-action-planning-agent reads both TEST_ANALYSIS_RESULTS.md and test-planning-notes.md
   - Backward compatible: Phase 1.5 is optional enhancement

Benefits:
   - Comprehensive test coverage (API, integration, error scenarios)
   - Full traceability of test planning process
   - Consolidated context in one file for easy review
   - Preserved Gemini output as independent artifact
2026-01-30 23:51:57 +08:00
catlog22
81725c94b1 Add E2E tests for internationalization across multiple pages
- Implemented navigation.spec.ts to test language switching and translation of navigation elements.
- Created sessions-page.spec.ts to verify translations on the sessions page, including headers, status badges, and date formatting.
- Developed settings-page.spec.ts to ensure settings page content is translated and persists across sessions.
- Added skills-page.spec.ts to validate translations for skill categories, action buttons, and empty states.
2026-01-30 22:54:21 +08:00
catlog22
917de6f167 Merge pull request #109 from wzp-0815/fix-install-docs
Fix missing ccw install step in installation documentation
2026-01-30 22:06:28 +08:00
catlog22
e78e95049b feat: update dependencies and refactor dropdown components for improved functionality 2026-01-30 17:26:07 +08:00
catlog22
a5c3dff8d3 feat: implement FlowExecutor for executing flow definitions with DAG traversal and node execution 2026-01-30 16:59:18 +08:00
catlog22
0a7c1454d9 feat: add task JSON status update instructions to agents
Add jq commands for agents to update task status lifecycle:
- Step 0: Mark in_progress when starting
- Task Completion: Mark completed when finishing
2026-01-30 15:59:28 +08:00
catlog22
4a8481958a feat: add planning-notes consumption to plan-verify
- Add dimensions I (Constraints Compliance) and J (N+1 Context Validation)
- Verify tasks respect Consolidated Constraints from planning-notes.md
- Detect deferred items incorrectly included in current plan
- Check decision contradictions against N+1 Decisions table
2026-01-30 15:50:22 +08:00
catlog22
78b1287ced chore: Bump version to 6.3.54 2026-01-30 15:43:20 +08:00
catlog22
c6ad8e53b9 Add flow template generator documentation for meta-skill/flow-coordinator
- Introduced a comprehensive guide for generating workflow templates.
- Detailed usage instructions with examples for creating templates.
- Outlined execution flow across three phases: Template Design, Step Definition, and JSON Generation.
- Included JavaScript code snippets for each phase to facilitate implementation.
- Provided suggested step templates for various development scenarios.
- Documented command port references and minimum execution units for clarity.
2026-01-30 15:42:08 +08:00
catlog22
4006b2a0ee feat: add N+1 planning context recording to planning-notes
- Add N+1 Context section (Decisions + Deferred) to planning-notes.md init
- Add section 3.3 N+1 Context Recording in action-planning-agent.md
- Update task-generate-agent.md Phase 2A/2B/3 prompts with N+1 recording

Supports cross-module dependency resolution tracking and deferred items
for continuous planning iterations.
2026-01-30 15:42:00 +08:00
laoman.wzp
e464d93e29 Fix missing ccw install step in installation documentation
- Added the crucial `ccw install` step after npm installation in both
  English (INSTALL.md) and Chinese (INSTALL_CN.md) documentation
- Included detailed explanation of what ccw install does:
  * Installs workflows, scripts, templates to ~/.claude/
  * Installs skill definitions to ~/.codex/
  * Configures shell integration (optional)

This step is essential for the complete installation of CCW system files.

Co-Authored-By: Claude (Kimi-K2-Instruct-0905) <noreply@anthropic.com>
2026-01-30 14:47:21 +08:00
catlog22
0bb102c56a feat: add meta-skill flow-create command for workflow template generation
Interactive command to create flow-coordinator templates with comprehensive
command selection (9 categories), minimum execution units, and command port
reference integrated from ccw/ccw-coordinator/codex-coordinator.
2026-01-30 14:27:20 +08:00
catlog22
fca03a3f9c refactor: rename meta-skill → flow-coordinator, update template cmd paths
**Major changes:**
- Rename skill from meta-skill to flow-coordinator (avoid /ccw conflicts)
- Update all 17 templates: store full /workflow: command paths in cmd field
  - Session ID prefix: ms → fc (fc-YYYYMMDD-HHMMSS)
  - Workflow path: .workflow/.meta-skill → .workflow/.flow-coordinator
- Simplify SKILL.md schema documentation
  - Streamline Status Schema section
  - Consolidate Template Schema with single example
  - Remove redundant Field Explanations and behavior tables
- All templates now store cmd as full paths (e.g. /workflow:lite-plan)
  - Eliminates need for path assembly during execution
  - Matches ccw-coordinator execution format
2026-01-30 12:29:38 +08:00
catlog22
a9df4c6659 refactor: remove the usage recommendation section from the unified execute command and simplify the documentation 2026-01-30 10:54:02 +08:00
catlog22
0a3246ab36 chore: Bump version to 6.3.53 2026-01-30 10:34:02 +08:00
catlog22
b5caee6b94 rename: collaborative-plan → collaborative-plan-with-file 2026-01-30 10:30:37 +08:00
catlog22
64d2156319 refactor: unify lite workflow artifact structure, merging exploration+understanding into planning-context
- Merge exploration.md and understanding.md into a single planning-context.md
- Add an Output Artifacts table to all lite workflows
- Unify the Output Location format in agent prompts
- Update the Session Folder Structure to include planning-context.md

Affected files:
- cli-lite-planning-agent.md: update artifact documentation and format templates
- collaborative-plan.md: simplify to planning-context.md + sub-plan.json
- lite-plan.md: add artifact table and output location
- lite-fix.md: add artifact table and output location
2026-01-30 10:25:45 +08:00
catlog22
3f46a02df3 feat(cli): add settings file support for builtin Claude
- Enhance CLI status rendering to display settings file information for builtin Claude.
- Introduce settings file input in CLI manager for configuring the path to settings.json.
- Update Claude CLI tool interface to include settingsFile property.
- Implement settings file resolution and validation in CLI executor.
- Create a new collaborative planning workflow command with detailed documentation.
- Add test scripts for debugging tool configuration and command building.
2026-01-30 10:07:02 +08:00
catlog22
4b69492b16 feat: add Codex coordinator command supporting task analysis, command-chain recommendation, and sequential execution 2026-01-29 23:36:42 +08:00
catlog22
67a578450c chore: Bump version to 6.3.52 2026-01-29 21:22:49 +08:00
catlog22
d5199ad2d4 Add merge-plans-with-file and quick-plan-with-file prompts for enhanced planning workflows
- Introduced merge-plans-with-file prompt for aggregating multiple planning artifacts, resolving conflicts, and synthesizing a unified execution plan.
- Implemented detailed execution process including discovery, normalization, conflict detection, resolution, and synthesis phases.
- Added quick-plan-with-file prompt for rapid planning with minimal documentation, utilizing multi-agent analysis for actionable outputs.
- Both prompts support various input formats and provide structured outputs for execution and documentation.
2026-01-29 21:21:29 +08:00
catlog22
f3c773a81e Refactor Codex Issue Plan-Execute Skill Documentation and CLI Options
- Deleted obsolete INDEX.md and OPTIMIZATION_SUMMARY.md files, consolidating documentation for improved clarity and organization.
- Removed skipGitRepoCheck option from CLI execution parameters to streamline command usage.
- Updated CLI executor utilities to automatically skip git repository checks, allowing execution in non-git directories.
- Enhanced documentation with new ARCHITECTURE.md and INDEX.md files for better navigation and understanding of the system architecture.
- Created CONTENT_MIGRATION_REPORT.md to verify zero content loss during the consolidation process.
2026-01-29 20:39:12 +08:00
catlog22
875b1f19bd feat: complete Codex Issue Plan-Execute Skill v2.0 optimization
- Add OPTIMIZATION_SUMMARY.md documenting the optimization process and results in detail
- Add README_OPTIMIZATION.md summarizing the optimized file structure and key metrics
- Create specs/agent-roles.md consolidating the Planning Agent and Execution Agent role definitions
- Merge multiple prompt files to reduce duplicated content and optimize token usage
- Add ARCHITECTURE.md and INDEX.md for system architecture and documentation navigation
- Add CONTENT_MIGRATION_REPORT.md verifying complete, zero-loss content migration
- Update file references for backward compatibility and add deprecation notices
2026-01-29 20:37:30 +08:00
catlog22
c08f5382d3 fix: Improve command detail modal tab handling and error reporting 2026-01-29 17:48:06 +08:00
catlog22
21d764127f Add command relationships, essential commands, and validation script
- Introduced `command-relationships.json` to define internal calls, next steps, and prerequisites for various workflows.
- Created `essential-commands.json` to document key commands, their descriptions, arguments, and usage scenarios.
- Added `validate-help.py` script to check for the existence of source files referenced in command definitions, ensuring all necessary files are present.
2026-01-29 17:29:37 +08:00
catlog22
860dbdab56 fix: Unify execution IDs between broadcast events and session storage
- Pass generated executionId to cliExecutorTool.execute as id parameter
- Ensures CLI_EXECUTION_STARTED broadcast uses same ID as saved session
- Fixes "Conversation not found" errors when querying by broadcast ID
- Add DEBUG logging for executionId tracking

This resolves the mismatch where:
  - Broadcast event used ID from Date.now() at broadcast time
  - Session saved used different ID from Date.now() at completion time
  - Now all use the same ID generated at cli.ts:868

Changes:
- cli.ts:868 - executionId generated once
- cli.ts:1001 - pass executionId to execute() as id parameter
- cli-executor-core.ts automatically uses passed id as conversationId
2026-01-29 16:59:00 +08:00
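A sketch of the ID unification: generate the execution ID once and thread it through both the broadcast and the execute() call (the option and event names approximate those in the commit; everything else is illustrative):

```typescript
// Hypothetical flow: one executionId shared by the broadcast event and the saved session.
interface ExecuteOptions {
  id?: string;
  prompt: string;
  tool: string;
}

async function runCliExecution(
  broadcast: (event: string, payload: unknown) => void,
  execute: (opts: ExecuteOptions) => Promise<void>,
  prompt: string,
  tool: string,
): Promise<void> {
  const executionId = `exec-${Date.now()}`; // generated once, cf. cli.ts:868

  broadcast('CLI_EXECUTION_STARTED', { executionId }); // same ID in the event...
  await execute({ id: executionId, prompt, tool }); // ...and as the conversation/session ID
}
```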
catlog22
113dce55c5 fix: Auto-detect JSON Lines output format for Codex CLI
Problem: Codex CLI uses --json flag to output JSONL events, but executor was using plain text parser. This prevented proper parsing of structured events, breaking session creation.

Root cause: buildCommand() added --json flag for Codex but never communicated this to the output parser. Result: JSONL events treated as raw text → session markers lost.

Solution:
- Extend buildCommand() to return outputFormat
- Auto-detect 'json-lines' when tool is 'codex'
- Use auto-detected format in executeCliTool()
- Properly parse structured events and extract session data

Files modified:
- ccw/src/tools/cli-executor-utils.ts: Add output format auto-detection
- ccw/src/tools/cli-executor-core.ts: Use auto-detected format for parser
- ccw/src/commands/cli.ts: Add debug instrumentation

Verified:
- Codex outputs valid JSONL (confirmed via direct test)
- CLI_EXECUTION_STARTED events broadcast correctly
- Issue was downstream in output parsing, not event transmission
2026-01-29 16:38:30 +08:00
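A sketch of the auto-detection: buildCommand() reports the expected output format alongside the command, so the executor can pick the JSONL parser for Codex. Everything except the --json flag is an assumption:

```typescript
// Hypothetical buildCommand() returning an output format hint for the parser.
type OutputFormat = 'plain' | 'json-lines';

interface BuiltCommand {
  command: string;
  args: string[];
  outputFormat: OutputFormat;
}

function buildCommand(tool: string, prompt: string): BuiltCommand {
  if (tool === 'codex') {
    // Codex emits JSONL events when --json is set, so tell the parser up front.
    return { command: 'codex', args: ['--json', prompt], outputFormat: 'json-lines' };
  }
  return { command: tool, args: [prompt], outputFormat: 'plain' };
}
```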
catlog22
0b791c03cf fix: Resolve API path resolution for document loading
- Fixed source paths in command.json: change ../../../ to ../../
  (sources are relative to .claude/skills/ccw-help/, need 2 levels to reach .claude/)
- Rewrote help-routes.ts /api/help/command-content endpoint:
  - Use resolve() to properly handle ../ sequences in paths
  - Resolve paths against commandJsonDir (where command.json is located)
  - Maintain security checks to prevent path traversal
- Verified all document paths now resolve correctly to .claude/commands/*

This fixes the 404 errors when loading command documentation in Help page.
2026-01-29 16:29:10 +08:00
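A sketch of the path handling the commit describes: resolve the documented source path against the command.json directory, then keep the traversal check. The allowed-root depth follows the commit's note about .claude/; the helper itself is hypothetical:

```typescript
// Hypothetical resolution of a documented source path against the command.json directory.
import { resolve, sep } from 'node:path';

function resolveDocPath(commandJsonDir: string, relativeSource: string): string {
  // resolve() collapses ../ segments relative to where command.json lives.
  const absolute = resolve(commandJsonDir, relativeSource);

  // Security check: the resolved file must stay under the allowed root.
  const allowedRoot = resolve(commandJsonDir, '..', '..'); // e.g. the .claude/ directory
  if (!absolute.startsWith(allowedRoot + sep)) {
    throw new Error(`Path traversal rejected: ${relativeSource}`);
  }
  return absolute;
}
```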
catlog22
bbc94fb73a chore: Update ccw-help command index with all 73 commands
- Regenerated by analyze_commands.py
- Now includes all workflow, issue, memory, cli, and general commands
- Updated to version 3.0.0 with 73 commands and 19 agents
- Full index sync with file system definitions
2026-01-29 16:00:29 +08:00
catlog22
f5e435f791 feat: Optimize ccw-help skill with user-prompted update mechanism
- Add auto-update.py script for simple index regeneration
- Update SKILL.md with clear update instructions
- Simplify update mechanism: prompt user on skill execution
- Support both automatic and manual update workflows
- Clean version 2.3.0 metadata in command.json
2026-01-29 15:58:51 +08:00
catlog22
86d5be8288 feat: Enhance CCW help system with new command orchestration and dashboard features 2026-01-29 15:43:07 +08:00
catlog22
9762445876 refactor: Convert skill-generator from Chinese to English and remove emoji icons
- Convert all markdown files from Chinese to English
- Remove all emoji/icon decorations (🔧📋⚙️🏁🔍📚)
- Update all section headers, descriptions, and documentation
- Keep all content logic, structure, code examples unchanged
- Maintain template variables and file paths as-is

Files converted (9 files total):
- SKILL.md: Output structure comments
- templates/skill-md.md: All Chinese descriptions and comments
- specs/reference-docs-spec.md: All section headers and explanations
- phases/01-requirements-discovery.md through 05-validation.md (5 files)
- specs/execution-modes.md, skill-requirements.md, cli-integration.md, scripting-integration.md (4 files)
- templates/sequential-phase.md, autonomous-orchestrator.md, autonomous-action.md, code-analysis-action.md, llm-action.md, script-template.md (6 files)

All 16 files in skill-generator are now fully in English.
2026-01-29 15:42:46 +08:00
catlog22
b791c09476 docs: Add reference-docs-spec and optimize skill-generator for proper document organization
- Create specs/reference-docs-spec.md with comprehensive guidelines for phase-based reference document organization
- Update skill-generator's Mandatory Prerequisites to include new reference-docs-spec
- Refactor skill-md.md template to generate phase-based reference tables with 'When to Use' guidance
- Add generateReferenceTable() function to automatically create structured reference sections
- Replace flat template reference lists with phase-based navigation
- Update skill-generator's own SKILL.md to demonstrate correct reference documentation pattern
- Ensure all generated skills will have clear document usage timing and context
2026-01-29 15:28:21 +08:00
catlog22
26283e7a5a docs: Optimize reference documents with phase-based guidance and usage timing 2026-01-29 15:24:38 +08:00
catlog22
1040459fef docs: Add comprehensive summary of unified-execute-with-file implementation 2026-01-29 15:23:41 +08:00
catlog22
0fe8c18a82 docs: Add comparison guide between Claude and Codex unified-execute versions 2026-01-29 15:22:24 +08:00
catlog22
0086413f95 feat: Add Codex unified-execute-with-file prompt
- Create codex version of unified-execute-with-file command
- Supports universal execution of planning/brainstorm/analysis output
- Coordinates multi-agents with smart dependency management
- Features parallel/sequential execution modes
- Unified event logging as single source of truth (execution-events.md)
- Agent context passing through previous execution history
- Knowledge chain: each agent reads full history of prior executions

Codex-specific adaptations:
- Use $VARIABLE format for argument substitution
- Simplified header configuration (description + argument-hint)
- Plan format agnostic parsing (IMPL_PLAN.md, synthesis.json, conclusions.json, debug recommendations)
- Multi-wave execution orchestration
- Dynamic artifact location handling

Execution flow:
1. Parse and validate plan from $PLAN_PATH
2. Extract and normalize tasks with dependencies
3. Create execution session (.workflow/.execution/{sessionId}/)
4. Group tasks into execution waves (topological sort)
5. Execute waves sequentially, tasks within wave execute in parallel
6. Unified event logging: execution-events.md (SINGLE SOURCE OF TRUTH)
7. Each agent reads previous executions for context
8. Final statistics and completion report
2026-01-29 15:21:40 +08:00
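Step 4 above (grouping tasks into execution waves via topological sort) can be sketched as a Kahn-style loop; the Task shape is an assumption:

```typescript
// Hypothetical wave grouping: each wave contains tasks whose dependencies are all done.
interface Task {
  id: string;
  deps: string[];
}

function groupIntoWaves(tasks: Task[]): Task[][] {
  const remaining = new Map<string, Task>(tasks.map((t) => [t.id, t]));
  const done = new Set<string>();
  const waves: Task[][] = [];

  while (remaining.size > 0) {
    const wave = [...remaining.values()].filter((t) => t.deps.every((d) => done.has(d)));
    if (wave.length === 0) throw new Error('Dependency cycle detected');
    for (const t of wave) {
      remaining.delete(t.id);
      done.add(t.id);
    }
    waves.push(wave); // tasks within a wave may run in parallel
  }
  return waves;
}
```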
catlog22
8ff698ae73 refactor: Optimize unified-execute-with-file command documentation
- Consolidate Phase 3 (Progress Tracking) from 205+ to 30 lines by merging redundant explanations of execution-events.md format
- Merge error handling logic from separate handleTaskFailure function into executeTask catch block
- Remove duplicate Execution Document Template section (identical to Step 1.2)
- Consolidate Phase 4 (Completion & Summary) from 90+ to 40 lines
- Overall reduction: 1094 → 807 lines (26% reduction) while preserving all technical information

Key improvements:
- Single source of truth for execution state (execution-events.md)
- Clearer knowledge chain explanation between agents
- More concise yet complete Phase documentation
- Unified event logging format is now prominently featured
2026-01-29 15:19:40 +08:00
catlog22
8cdd6a8b5f Add execution and planning agent prompts, specifications, and quality standards
- Created execution agent prompt for issue execution with detailed deliverables and validation criteria.
- Developed planning agent prompt to analyze issues and generate structured solution plans.
- Introduced issue handling specifications outlining the workflow and issue structure.
- Established quality standards for evaluating completeness, consistency, correctness, and clarity of solutions.
- Defined solution schema specification detailing the required structure and validation rules for solutions.
- Documented subagent roles and responsibilities, emphasizing the dual-agent strategy for improved workflow efficiency.
2026-01-29 15:15:42 +08:00
catlog22
b86a8afd8b feat: add unified execution engine documentation supporting multi-task coordination and incremental execution 2026-01-29 15:14:56 +08:00
catlog22
53bd5a6d4b feat: add custom prompts documentation explaining how to create and manage reusable prompts 2026-01-29 11:30:29 +08:00
catlog22
3a7bbe0e42 feat: Optimize Codex prompt commands parameter flexibility
- Enhanced 14 commands with flexible parameter support
- Standardized argument formats across all commands
- Added English parameter descriptions for clarity
- Maintained backward compatibility

Commands optimized:
- analyze-with-file: Added --depth, --max-iterations
- brainstorm-with-file: Added --perspectives, --max-ideas, --focus
- debug-with-file: Added --scope, --focus, --depth
- issue-execute: Unified format, added --skip-tests, --skip-build, --dry-run
- lite-plan-a/b/c: Added depth and execution control flags
- execute: Added --parallel, --filter, --skip-tests
- brainstorm-to-cycle: Unified to --session format, added --launch
- lite-fix: Added --hotfix, --severity, --scope
- clean: Added --focus, --target, --confirm
- lite-execute: Unified --plan format, added execution control
- compact: Added --description, --tags, --force
- issue-new: Complete flexible parameter support

Unchanged (already optimal):
- issue-plan, issue-discover, issue-queue, issue-discover-by-prompt
2026-01-29 11:29:39 +08:00
catlog22
04a84f9893 feat: Simplify issue creation documentation by removing examples and clarifying title 2026-01-29 10:51:42 +08:00
catlog22
11638facf7 feat: Add --to-file option to ccw cli for saving output to files
Adds support for saving CLI execution output directly to files with the following features:
- Support for relative paths: --to-file output.txt
- Support for nested directories: --to-file results/analysis/output.txt (auto-creates directories)
- Support for absolute paths: --to-file /tmp/output.txt or --to-file D:/results/output.txt
- Works in both streaming and non-streaming modes
- Automatically creates parent directories if they don't exist
- Proper error handling with user-friendly messages
- Shows file save location in completion feedback

Implementation details:
- Updated CLI option parser in ccw/src/cli.ts
- Added toFile parameter to CliExecOptions interface
- Implemented file saving logic in execAction() for both streaming and non-streaming modes
- Updated HTTP API endpoint /api/cli/execute to support toFile parameter
- All changes are backward compatible

Testing:
- Tested with relative paths (single and nested directories)
- Tested with absolute paths (Windows and Unix style)
- Tested with streaming mode
- All tests passed successfully
2026-01-29 09:48:30 +08:00
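A minimal sketch of the file-saving step for --to-file, assuming Node's fs/path APIs (the function name is illustrative):

```typescript
// Hypothetical file-saving step for the --to-file option.
import { mkdirSync, writeFileSync } from 'node:fs';
import { dirname, resolve } from 'node:path';

function saveOutputToFile(toFile: string, output: string): string {
  const target = resolve(process.cwd(), toFile); // relative or absolute paths both work
  mkdirSync(dirname(target), { recursive: true }); // auto-create nested directories
  writeFileSync(target, output, 'utf8');
  return target; // shown in the completion feedback
}
```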
catlog22
4d93ffb06c feat: Add migration handling for Codex old reference format in CLI manager 2026-01-28 23:37:46 +08:00
catlog22
204cb20617 chore(release): version 6.3.49
- Update package.json version to 6.3.49
- Add changelog entry for v6.3.49 with all commits since v6.3.48
- New features: CLI tools enhancements, skills system improvements, security fixes
- Documentation updates and UI improvements
2026-01-28 23:34:31 +08:00
catlog22
63f0daebbb feat: Update initialization process to prioritize in-memory configuration for CLI tool selection 2026-01-28 23:20:50 +08:00
catlog22
6ac041c1d8 feat: Enhance Codex CLI settings with toggle and refresh actions 2026-01-28 23:05:31 +08:00
catlog22
279adfd391 feat: Implement Codex CLI enhancement settings with API integration and UI toggle 2026-01-28 23:01:18 +08:00
catlog22
0a07138c27 feat: Add ccw-cli-tools skill specification with unified execution framework and configuration-driven tool selection 2026-01-28 22:55:36 +08:00
catlog22
a5d9e8ca87 feat: Enhance lite-skill-generator with single file output and improved validation 2026-01-28 22:23:19 +08:00
catlog22
502c8a09a1 fix(security): Apply 3 critical security fixes
- sec-001: Add validateAllowedPath to /api/file endpoint (path traversal)
- sec-002: Enable CSRF by default with CCW_DISABLE_CSRF opt-out
- sec-003: Add validateAllowedPath to /api/dialog/browse and /api/dialog/open-file (path traversal)

Ref: fix-1738072800000
2026-01-28 22:04:18 +08:00
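A hedged sketch of what a validateAllowedPath-style guard against path traversal typically looks like; the signature and error message are assumptions:

```typescript
// Hypothetical path-traversal guard: resolved path must sit under an allowed root.
import { resolve, sep } from 'node:path';

export function validateAllowedPath(requestedPath: string, allowedRoots: string[]): string {
  const absolute = resolve(requestedPath);
  const ok = allowedRoots.some((root) => {
    const normalizedRoot = resolve(root);
    return absolute === normalizedRoot || absolute.startsWith(normalizedRoot + sep);
  });
  if (!ok) throw new Error(`Access outside allowed roots rejected: ${requestedPath}`);
  return absolute;
}
```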
catlog22
ed0255b8a2 Add skill tuning diagnosis report for skill-generator
- Introduced a new JSON file `skill-tuning-diagnosis.json` containing a comprehensive diagnosis of the skill-generator.
- Documented critical issues related to context management and data flow, including:
  - Full state serialization leading to unbounded context growth.
  - Scattered state writing without a unified schema.
  - Lack of input state schema validation in autonomous orchestrators.
- Provided detailed descriptions, impacts, root causes, and fix strategies for each identified issue.
- Summarized recommendations with priority levels for urgent fixes.
2026-01-28 22:00:20 +08:00
catlog22
6e94fc0740 chore: remove CLI endpoints section from Codex Code Guidelines 2026-01-28 21:33:43 +08:00
catlog22
b361a8c041 Add CLI endpoints documentation and unified script template for Bash and Python
- Updated AGENTS.md to include CLI tools usage and configuration details.
- Introduced a new script template for both Bash and Python, outlining usage context, calling conventions, and implementation guidelines.
- Provided examples for common patterns in both Bash and Python scripts.
- Established a directory convention for script organization and naming.
2026-01-28 21:29:21 +08:00
catlog22
24dad8cefd Refactor orchestrator logic and enhance problem taxonomy
- Updated orchestrator decision logic to improve state management and action selection.
- Introduced structured termination checks and action selection criteria.
- Enhanced state update mechanism with sliding window for action history and error tracking.
- Revised problem taxonomy for skill execution issues, consolidating categories and refining detection patterns.
- Improved severity calculation method for issue prioritization.
- Streamlined fix mapping strategies for better clarity and usability.
2026-01-28 21:08:49 +08:00
catlog22
071c98d89c feat: add brainstorm-to-cycle adapter for converting brainstorm output to parallel-dev-cycle input 2026-01-28 20:51:35 +08:00
catlog22
994718dee2 chore: remove unnecessary blank line in auto-parallel command documentation 2026-01-28 20:36:41 +08:00
catlog22
3998d24e32 Enhance skill generator documentation and templates
- Updated Phase 1 and Phase 2 documentation to include next phase links and data flow details.
- Expanded Phase 5 documentation to include comprehensive validation and README generation steps, along with validation report structure.
- Added purpose and usage context sections to various action and script templates (e.g., autonomous-action, llm-action, script-bash).
- Improved commands management by simplifying the command scanning logic and enabling/disabling commands through renaming files.
- Enhanced dashboard command manager to format group names and display nested groups with appropriate icons and colors.
- Updated LiteLLM executor to allow model overrides during execution.
- Added action reference guide and template reference sections to the skill-tuning SKILL.md for better navigation and understanding.
2026-01-28 20:34:03 +08:00
catlog22
29274ee943 feat(codex): add brainstorm-with-file prompt for interactive brainstorming workflow
- Add multi-perspective brainstorming workflow
- Support creative, pragmatic, and systematic analysis
- Include diverge-converge cycles with user interaction
- Add deep dive, devil's advocate, and idea merging
- Document thought evolution in brainstorm.md
2026-01-28 20:33:13 +08:00
catlog22
46d5739935 fix(changelog): update workflow references from review-fix to review-cycle-fix for consistency 2026-01-28 20:30:03 +08:00
catlog22
152cab2b7e feat: update review commands to use review-cycle-fix for automated fixing 2026-01-28 20:24:59 +08:00
catlog22
0cc5101c0e feat: Add phases for document consolidation, assembly, and compliance refinement
- Introduced Phase 2.5: Consolidation Agent to summarize analysis outputs and generate design overviews.
- Added Phase 4: Document Assembly to create index-style documents linking chapter files.
- Implemented Phase 5: Compliance Review & Iterative Refinement for CPCC compliance checks and updates.
- Established CPCC Compliance Requirements document outlining mandatory sections and validation functions.
- Created a base template for analysis agents to ensure consistency and efficiency in execution.
2026-01-28 19:57:24 +08:00
catlog22
4c78f53bcc feat: add commands management feature with API endpoints and UI integration
- Implemented commands routes for listing, enabling, and disabling commands.
- Created commands manager view with accordion groups for better organization.
- Added loading states and confirmation dialogs for enabling/disabling commands.
- Enhanced error handling and user feedback for command operations.
- Introduced CSS styles for commands manager UI components.
- Updated navigation to include commands manager link.
- Refactored existing code for better maintainability and clarity.
2026-01-28 08:26:37 +08:00
catlog22
cc5a5716cf fix(skills): improve robustness of enable/disable operations
- Add rollback in moveDirectory when rmSync fails after cpSync
- Add transaction rollback in disable/enableSkill when config save fails
- Surface config corruption by throwing on JSON parse errors
- Add robust JSON error parsing with fallback in frontend
- Add loading state and double-click prevention for toggle button
2026-01-28 08:25:59 +08:00
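The moveDirectory rollback mentioned above follows a copy-then-delete pattern where a failed delete undoes the copy. A rough sketch under that assumption (not the exact ccw code):

```typescript
import { cpSync, rmSync, existsSync } from 'node:fs';

// Hypothetical move-with-rollback: if removing the source fails after the copy,
// the copied destination is removed so the skill never exists in two places.
export function moveDirectory(src: string, dest: string): void {
  cpSync(src, dest, { recursive: true });
  try {
    rmSync(src, { recursive: true, force: true });
  } catch (err) {
    if (existsSync(dest)) {
      rmSync(dest, { recursive: true, force: true }); // roll back the copy
    }
    throw err;
  }
}
```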
catlog22
af05874510 feat(skills): enhance moveDirectory function with rollback on failure and update config handling 2026-01-28 00:50:24 +08:00
catlog22
7a40f16235 feat(skills): implement enable/disable functionality for skills
- Added new API endpoints to enable and disable skills.
- Introduced logic to manage disabled skills, including loading and saving configurations.
- Enhanced skills routes to return lists of disabled skills.
- Updated frontend to display disabled skills and allow toggling their status.
- Added internationalization support for new skill status messages.
- Created JSON schemas for plan verification agent and findings.
- Defined new types for skill management in TypeScript.
2026-01-28 00:49:39 +08:00
catlog22
8d178feaac feat: enhance plan verification and context gathering with auto-execution and interactive user selection 2026-01-28 00:12:15 +08:00
catlog22
b3c47294e7 Enhance workflow commands and context management
- Updated `plan.md` to include new fields in context-package.json: prioritized_context, user_intent, priority_tiers, dependency_order, and sorting_rationale.
- Added validation for the existence of the prioritized_context field in context-package.json.
- Modified user decision flow in task generation to present action choices after planning completion.
- Improved context-gathering process in `context-gather.md` to integrate user intent and prioritize context based on user goals.
- Revised conflict-resolution documentation to require planning notes records after conflict analysis.
- Streamlined task generation in `task-generate-agent.md` to utilize pre-sorted context without redundant sorting.
- Removed unused settings persistence functions and corresponding tests from `claude-cli-tools.ts` and `settings-persistence.test.ts`.
2026-01-28 00:02:45 +08:00
catlog22
9989cfcf21 feat: update task generation and execution limits, optimizing multi-module task management 2026-01-27 23:34:31 +08:00
catlog22
1b6ace0447 feat: add planning notes to support task generation and constraint management 2026-01-27 23:16:01 +08:00
catlog22
a3b303d8e3 Enhance CLI Lite Planning Agent with Mandatory Quality Check
- Added Phase 5: Plan Quality Check to cli-lite-planning-agent.md, detailing mandatory quality validation after plan generation.
- Introduced quality dimensions: completeness, granularity, dependencies, acceptance criteria, implementation steps, and constraint compliance.
- Specified CLI command format for quality check execution and expected output structure.
- Implemented result parsing and auto-fix strategies for minor issues.
- Updated integration flow to ensure quality check is executed before returning the plan to the orchestrator.

Refactor lite-plan.md to reflect internal quality check execution for medium/high complexity plans.

Create new brainstorm-with-file.md for interactive brainstorming workflow, detailing session setup, execution process, and implementation steps.
2026-01-27 23:02:05 +08:00
catlog22
0c1c87f704 fix: correct the file extension of the community group QR code image in README_CN.md
Change the reference from .jpg to .png to match the actual filename
2026-01-26 09:47:42 +08:00
catlog22
985085c624 Refactor CLI Config Manager and Add Provider Model Routes
- Removed deprecated constants and functions from cli-config-manager.ts.
- Introduced new provider model presets in litellm-provider-models.ts for better organization and management of model information.
- Created provider-routes.ts to handle API endpoints for retrieving provider information and models.
- Added integration tests for provider routes to ensure correct functionality and response structure.
- Implemented unit tests for settings persistence functions, covering various scenarios and edge cases.
- Enhanced error handling and validation in the new routes and settings functions.
2026-01-25 17:27:58 +08:00
catlog22
7c16cc6427 feat: add convert-to-plan command to convert planning documents into issue solutions 2026-01-25 12:32:42 +08:00
catlog22
6875108dda Add interactive analysis workflow with documented discussions and CLI exploration
- Introduced `analyze-with-file` command for collaborative analysis.
- Implemented session management for new and continued analysis sessions.
- Developed structured phases for topic understanding, exploration, discussion, and conclusion synthesis.
- Created detailed documentation for the workflow, including examples and implementation details.
- Added Codex prompt for deep analysis and exploration of codebase and concepts.
2026-01-25 11:08:13 +08:00
catlog22
9cff6f5f43 refactor(issue): remove 'merged' queue status, use 'archived' instead
- Remove 'merged' from VALID_QUEUE_STATUSES constant
- Update mergeQueues() to set status to 'archived' instead of 'merged'
- Preserve merged_into/merged_at metadata for traceability
- Update frontend to use 'archived' for button visibility checks
- Fix queue --json returning fake queue ID when no active queue exists
2026-01-25 10:56:46 +08:00
catlog22
fe2536d4cd feat: add 'merged' queue status with related validation, plus a status reference document 2026-01-25 10:11:52 +08:00
catlog22
16f27c080a docs: add Level 5 intelligent orchestration workflow guide to English version
- Added Level 5 (CCW Coordinator) chapter with complete Minimum Execution Units definition
- Added 3-Phase Workflow explanation (Analyze Requirements, Discover Commands, Execute)
- Integrated command port system for dynamic pipeline composition
- Added JavaScript task analysis and complexity assessment functions
- Included state file structure and complete Mermaid flowchart (119 lines)
- Updated Quick Selection Table to reference ccw-coordinator for complex workflows
- Updated Decision Flowchart to include Level 5 at top level
- Updated Summary Level Overview table to include Level 5 row
- Updated Core Principles with Level 5 selection criteria and usage guidelines
- Synchronized English version with WORKFLOW_GUIDE_CN.md Level 5 content
2026-01-24 21:41:31 +08:00
catlog22
874b70726d chore: archive unused test scripts and temporary documents
- Moved 6 empty test files (*.test.ts) to archive/
- Moved 5 Python test scripts (*.py) to archive/
- Moved 5 outdated/temporary documents to archive/
- Cleaned up root directory for better organization
2026-01-24 21:26:03 +08:00
catlog22
862365ffaf chore: remove the Codex Subagent usage guideline document 2026-01-24 21:17:59 +08:00
catlog22
b8c807b2f9 feat: add parent/child directory lookup for ccw cli output
- Implement findProjectWithExecution() to search upward through parent directories
- Add automatic project path discovery in outputAction
- Support explicit --project parameter for manual path specification
- Improve error messages with search scope indication
- Display project path in formatted output
- Enable cross-directory execution without working directory dependency
2026-01-24 21:17:21 +08:00
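The parent-directory lookup described above amounts to walking upward from the working directory until a project marker is found. A simplified sketch, assuming a marker directory such as `.workflow` identifies a project (the real findProjectWithExecution() may use different criteria):

```typescript
import * as path from 'node:path';
import { existsSync } from 'node:fs';

// Walk upward from startDir until a directory containing the marker is found.
export function findProjectRoot(startDir: string, marker = '.workflow'): string | null {
  let current = path.resolve(startDir);
  for (;;) {
    if (existsSync(path.join(current, marker))) return current;
    const parent = path.dirname(current);
    if (parent === current) return null; // reached the filesystem root without a match
    current = parent;
  }
}
```

With an explicit --project flag the search is skipped and the given path is used directly, which matches the behavior the commit describes.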
catlog22
7ea6362c50 docs: add Level 5 workflow guide with CCW Coordinator and decision flowchart
- Add "Level 5: 智能编排 (CCW Coordinator)" chapter to WORKFLOW_GUIDE_CN.md
- Integrate 6.2.8 version full lifecycle command decision flowchart (Mermaid)
- Document minimum execution units (最小执行单元) for planning, testing, and review
- Explain 3-phase workflow: requirements analysis, command discovery & recommendation, sequential execution
- Include command port system for dynamic command chain assembly
- Add state file structure (.workflow/.ccw-coordinator/{session_id}/state.json)
- Document 3 typical scenarios: simple feature, bug fix, complex feature development
- Update overview diagram to show Level 5 with automation progression
- Update workflow selection guide, decision flowchart, and summary table
- Add Level 5 relationship documentation to other workflow levels
2026-01-24 21:15:44 +08:00
catlog22
b435391f17 docs: add /ccw and /ccw-coordinator as recommended commands
- Add prominent section highlighting /ccw and /ccw-coordinator as main features
- /ccw: Auto workflow orchestrator for general tasks
- /ccw-coordinator: Smart orchestrator with intelligent recommendations
- Include comparison table, quick examples, and key differences
- Update both English and Chinese READMEs
2026-01-24 15:12:33 +08:00
catlog22
88ff109ac4 chore: bump version to 6.3.48 2026-01-24 15:06:36 +08:00
catlog22
261196a804 docs: update CCW CLI commands with recommended commands and usage examples 2026-01-24 15:05:37 +08:00
catlog22
ea6cb8440f chore: bump version to 6.3.47
- Update ccw-coordinator.md with clarified CLI execution format
- Command-first prompt structure: /workflow:<command> -y <parameters>
- Simplified documentation with universal prompt template
- Clarify that -y is a prompt parameter, not a ccw cli parameter
2026-01-24 14:52:09 +08:00
catlog22
bf896342f4 refactor: adjust prompt structure for command execution clarity 2026-01-24 14:51:05 +08:00
catlog22
f2b0a5bbc9 Refactor code structure and remove redundant changes 2026-01-24 14:47:47 +08:00
catlog22
cf5fecd66d fix(codex-lens): resolve installation issues from frontend
- Add missing README.md file required by setuptools
- Fix deprecated license format in pyproject.toml (use SPDX string instead of TOML table)
- Add MIT LICENSE file for proper packaging
- Verified successful local installation and import

Fixes permission denied error during npm-based installation on macOS
2026-01-24 14:43:39 +08:00
catlog22
86d469ccc9 build: exclude test files from TypeScript compilation 2026-01-24 14:35:05 +08:00
catlog22
357d3524f5 chore: bump version to 6.3.46 2026-01-24 14:31:29 +08:00
catlog22
4334162ddf refactor: remove unused command definitions from ccw-coordinator 2026-01-24 14:29:51 +08:00
catlog22
2dcd1637f0 refactor: enhance documentation on Minimum Execution Units and command grouping in CCW 2026-01-24 14:27:58 +08:00
catlog22
38e1cdc737 chore(release): publish 6.3.45
## Features

- New `ccw` command: Main process workflow orchestrator with auto intent-based workflow selection
- New CommandRegistry for dynamic command discovery and metadata management

## Improvements

- Optimize ccw-coordinator: Serial blocking execution model with hook-based continuation
- Refactor execution flow: Stop after CLI launch, wait for hook callbacks (no polling)
- Add task_id tracking and state.json checkpoints for resumable execution
- Consolidate documentation: Reduce report verbosity while maintaining all core information

## Documentation

- Add Execution Model comparison (main process vs external CLI)
- Add State Management section with TodoWrite tracking examples
- Update Type Comparison table highlighting ccw vs ccw-coordinator differences
- Simplify code examples with inline comments

## Changes Summary

- ccw-coordinator.md: +272/-26 (serial blocking), -143 docs (consolidation)
- ccw.md: +121/-352 (state management, execution model)
- Rename: CCW-COORDINATOR.md → ccw-coordinator.md (lowercase)
2026-01-24 14:09:52 +08:00
catlog22
097a7346b9 refactor: optimize ccw.md with streamlined documentation and state management
- Add Execution Model section (Synchronous vs Async blocking comparison)
- Add State Management section (TodoWrite-based tracking)
- Simplify Phase 1-5 code (remove verbose comments, consolidate logic)
- Consolidate Pipeline Examples into table format (5 examples → 1 table)
- Update Type Comparison table (highlight ccw vs ccw-coordinator differences)
- Maintain all core information (no content loss)

Changes:
- -352 lines (verbose explanations, redundant code)
- +121 lines (consolidated content, new sections)
- net: -231 lines (35% reduction: 665→433 lines)

Key additions:
- Execution Model flow diagram
- State Management with TodoWrite example
- Type Comparison: Synchronous (main) vs Async (external CLI)
2026-01-24 14:06:31 +08:00
catlog22
9df8063fbd refactor: reduce documentation report, consolidate overlapping content
- Eliminate redundant Stop-Action explanations (moved to CLI Execution Model)
- Remove verbose hook/error handling examples (keep in code only)
- Consolidate 5-step CLI example into 1-line pattern
- Simplify handleCliCompletion function comments
- Streamline executor loop exit notes
- Maintain all core information (no content loss)
- Reduce report from ~1000 lines to ~900 lines

Changes:
- -143 lines (old verbose explanations)
- +21 lines (consolidated content)
- net: -122 lines
2026-01-24 14:00:34 +08:00
catlog22
d00f0bc7ca refactor: improve CCW orchestrator with serial blocking execution and hook-based continuation
- Rename file to lowercase: CCW-COORDINATOR.md → ccw-coordinator.md
- Replace polling waitForTaskCompletion with stop-action blocking model
- CLI commands execute in background with immediate stop (no polling)
- Hook callbacks (handleCliCompletion) trigger continuation to next command
- Add task_id and completed_at fields to execution_results
- Maintain state checkpoint after each command launch
- Add status flow documentation (running → waiting → completed)
- Include CLI invocation example with hook configuration
- Separate concerns: orchestrator launches, hooks handle callbacks
- Support serial execution: one command at a time with break after launch
2026-01-24 13:57:08 +08:00
catlog22
24efef7f17 feat: Add main workflow orchestrator (ccw) with intent analysis and command execution
- Implemented the ccw command as a main workflow orchestrator.
- Added a 5-phase workflow including intent analysis, requirement clarification, workflow selection, user confirmation, and command execution.
- Developed functions for analyzing user input, selecting workflows, and executing command chains.
- Integrated TODO tracking for command execution progress.
- Created comprehensive tests for the CommandRegistry, covering YAML parsing, command retrieval, and error handling.
2026-01-24 13:43:47 +08:00
catlog22
44b8269a74 feat: add CommandRegistry for command management and direct imports 2026-01-24 13:29:50 +08:00
catlog22
dd51837bbc Enhance CCW Coordinator: Refactor command execution flow, improve prompt generation, and update documentation
- Refactored the command execution process to support dynamic command chaining and intelligent prompt generation.
- Updated the architecture overview to reflect changes in the orchestrator and command execution logic.
- Improved the prompt generation strategy to directly include complete command calls, enhancing clarity and usability.
- Added detailed examples and templates for command prompts in the documentation.
- Enhanced error handling and user decision-making during command execution failures.
- Introduced logging for command execution details and state updates for better traceability.
- Updated specifications and README files to align with the new command execution and prompt generation logic.
2026-01-24 12:44:40 +08:00
catlog22
a17edc3e50 chore(release): publish 6.3.44 2026-01-24 11:29:39 +08:00
catlog22
01ab3cf3fa feat: enhance tdd-verify command with detailed compliance reporting and validation improvements 2026-01-24 11:10:31 +08:00
catlog22
a2c1b9b47c fix: replace hardcoded Windows paths with dynamic cross-platform paths in CodexLens error messages
- Remove hardcoded Windows paths (D:\Claude_dms3\codex-lens) that were displayed to macOS/Linux users
- Generate dynamic possible paths list based on runtime environment
- Support multiple installation locations (cwd, project root, home directory)
- Improve error messages with platform-appropriate paths
- Maintain consistency across both bootstrapWithUv() and installSemanticWithUv() functions

Fixes remaining issue from #104 regarding cross-platform error message compatibility
2026-01-24 11:08:02 +08:00
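Building the candidate list at runtime, rather than hardcoding a Windows path, might look roughly like this (directory names are assumptions for illustration, used only to make error messages platform-appropriate):

```typescript
import * as os from 'node:os';
import * as path from 'node:path';

// Possible CodexLens locations derived from the runtime environment.
export function candidateCodexLensPaths(projectRoot: string): string[] {
  return [
    path.join(process.cwd(), 'codex-lens'),
    path.join(projectRoot, 'codex-lens'),
    path.join(os.homedir(), '.ccw', 'codex-lens'),
  ];
}
```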
catlog22
780e118844 fix: resolve CodexLens installation failure with NPM global install
Implements two-pass search strategy to support CodexLens in NPM global installations. Fixes issue #104.
2026-01-24 10:52:07 +08:00
catlog22
159dfd179e Refactor action plan verification command to plan verification
- Updated all references from `/workflow:action-plan-verify` to `/workflow:plan-verify` across various documentation and command files.
- Introduced a new command file for `/workflow:plan-verify` that performs read-only verification analysis on planning artifacts.
- Adjusted command relationships and help documentation to reflect the new command structure.
- Ensured consistency in command usage throughout the workflow guide and getting started documentation.
2026-01-24 10:46:15 +08:00
catlog22
6c80168612 feat: enhance project root detection with caching and debug logging 2026-01-24 10:04:04 +08:00
catlog22
a293a01d85 feat: add --yes flag for auto-confirmation across multiple workflows
- Enhanced lite-execute, lite-fix, lite-lite-lite, lite-plan, multi-cli-plan, plan, replan, session complete, session solidify, and various UI design commands to support a --yes or -y flag for skipping user confirmations and auto-selecting defaults.
- Updated argument hints and examples to reflect new auto mode functionality.
- Implemented auto mode defaults for confirmation, execution methods, and code review options.
- Improved error handling and validation in command parsing and execution processes.
2026-01-24 09:23:24 +08:00
jerry
ab259b1970 fix: resolve CodexLens installation failure with NPM global install
- Implement two-pass search strategy for codex-lens path detection
- First pass: prefer non-node_modules paths (development environment)
- Second pass: allow node_modules paths (NPM global install)
- Fixes CodexLens installation for all NPM global install users
- No breaking changes, maintains backward compatibility

Resolves issue where NPM global install users could not install CodexLens
because the code rejected paths containing /node_modules/, which is the
only valid location for codex-lens in NPM installations.

Tested on macOS with Node.js v22.18.0 via NPM global install.
2026-01-24 08:58:44 +08:00
catlog22
fd50adf581 feat: Update command validation tools and improve README documentation 2026-01-24 08:41:32 +08:00
catlog22
24a28f289d refactor: Rename command-registry.js to command-registry.cjs and update references 2026-01-23 23:54:08 +08:00
catlog22
e727a07fc5 feat: Implement CCW Coordinator for interactive command orchestration
- Add action files for session management, command selection, building, execution, and completion.
- Introduce orchestrator logic to drive state transitions and action selection.
- Create state schema to define session state structure.
- Develop command registry and validation tools for command metadata extraction and chain validation.
- Establish skill configuration and specifications for command library and validation rules.
- Implement tools for command registry and chain validation with CLI support.
2026-01-23 23:39:16 +08:00
catlog22
8179472e56 fix: auto-sync CLI tools availability on first config creation (Issue #95)
**Problem**:
After a fresh CCW install, every CLI tool in the default configuration is enabled: true, but the user may not actually have these tools installed, so task execution fails when an uninstalled tool is invoked.

**Root cause**:
- All tools in DEFAULT_TOOLS_CONFIG default to enabled: true
- Actual tool availability is not checked when the config is first created
- The existing syncBuiltinToolsAvailability() only runs when triggered manually by the user

**Fix**:
1. Add an async ensureClaudeCliToolsAsync() variant
   - Automatically calls syncBuiltinToolsAvailability() after creating the default config
   - Detects actual tool availability via the which/where commands
   - Adjusts the enabled state based on the detection results

2. Update two key API endpoints to use the new function
   - /api/cli/endpoints - list API endpoints
   - /api/cli/tools-config - fetch the CLI tools configuration

**Result**:
- Uninstalled tools are detected and disabled automatically on first install
- Avoids errors caused by invoking unavailable tools
- Users see accurate tool status in the Dashboard

Fixes #95
2026-01-23 23:20:58 +08:00
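Probing a binary with which/where is a small, platform-dependent check. A sketch of the detection step, assuming Node's child_process (the surrounding config logic in ccw is omitted):

```typescript
import { execFileSync } from 'node:child_process';

// Returns true if the binary resolves on PATH: `where` on Windows, `which` elsewhere.
export function isToolAvailable(binary: string): boolean {
  const probe = process.platform === 'win32' ? 'where' : 'which';
  try {
    execFileSync(probe, [binary], { stdio: 'ignore' });
    return true;
  } catch {
    return false;
  }
}
```

After the default config is created, each tool's `enabled` flag can then be AND-ed with this check before the config is first saved.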
catlog22
277b3f86f1 feat: Enhance TDD workflow with specialized executor and optimized task generation
- Create tdd-developer.md: Specialized TDD agent with Red-Green-Refactor awareness
  - Full TDD metadata parsing (tdd_workflow, max_iterations, cli_execution)
  - Green phase Test-Fix Cycle with automatic diagnosis and repair
  - CLI session resumption strategies (new/resume/fork/merge_fork)
  - Auto-revert safety mechanism when max_iterations reached

- Optimize task-generate-tdd.md: Enhanced task generation with CLI support
  - Phase 0: User configuration questionnaire (materials, execution method, CLI tool)
  - Phase 1: Progressive loading strategy (Core → Selective → On-Demand)
  - CLI Execution ID management with dependency-based strategy selection
  - Fixed task limit to 18 (consistent with system-wide limit)
  - Fixed double-slash path issues in output structure
  - Enhanced tdd_cycles schema documentation with full structure
  - Unified resume_from type documentation (string | string[])

- Update tdd-plan.md: Workflow orchestrator improvements
  - Phase 0 user configuration details
  - Enhanced validation rules for CLI execution IDs
  - Updated error handling for 18-task limit

Validated by Gemini CLI analysis - complete execution chain compatibility confirmed.
2026-01-23 23:01:56 +08:00
catlog22
7a6f4c3f22 chore: bump version to 6.3.43 - fix parallel-dev-cycle documentation inconsistencies 2026-01-23 17:52:12 +08:00
catlog22
2f32d08d87 feat: Update documentation and file references for changes.log in parallel development cycle 2026-01-23 17:51:21 +08:00
catlog22
79d20add43 feat: Enhance Code Developer and Requirements Analyst agents with proactive debugging and self-enhancement strategies 2026-01-23 17:41:17 +08:00
catlog22
f363c635f5 feat: Enhance issue loading with intelligent grouping for batch processing 2026-01-23 17:03:27 +08:00
catlog22
61e3747768 feat: Add batch solutions endpoint (ccw issue solutions)
- Add solutionsAction() to query all bound solutions in one call
- Reduces O(N) queries to O(1) for queue formation
- Update /issue:queue command to use new endpoint
- Performance: 18 individual queries → 1 batch query

Version: 6.3.42
2026-01-23 16:56:08 +08:00
catlog22
54ec6a7c57 feat: Enhance issue management with batch processing and solution querying
- Updated issue loading process to create batches based on size (max 3 per batch).
- Removed semantic grouping in favor of simple size-based batching.
- Introduced a new command to batch query solutions for multiple issues.
- Improved solution loading to fetch all planned issues with bound solutions in a single call.
- Added detailed handling for multi-solution issues, including user selection for binding.
- Created a new workflow command for multi-agent development with documented progress and incremental iteration support.
- Added .gitignore for ace-tool directory to prevent unnecessary files from being tracked.
2026-01-23 16:55:10 +08:00
catlog22
d6a3da2084 chore: bump version to 6.3.41 2026-01-23 12:40:01 +08:00
catlog22
b9f17f0fcf fix: Add required 'name' field to Codex skill YAML frontmatter
According to OpenAI Codex skill specification, SKILL.md files must have both
'name' and 'description' fields in YAML frontmatter. Added missing 'name' field
to all three skills:
- CCW Loop
- CCW Loop-B
- Parallel Dev Cycle

Also enhanced ccw-loop description with Chinese trigger keywords for better
multi-language support.
2026-01-23 12:38:45 +08:00
catlog22
88eb42f65b chore: bump version to 6.3.40 2026-01-23 10:23:43 +08:00
catlog22
b1ac0cf8ff feat: Add communication optimization and coordination protocol for multi-agent system
- Introduced a new specification for agent communication optimization focusing on file references instead of content transfer to enhance efficiency and reduce message size.
- Established a coordination protocol detailing communication channels, message formats, and dependency resolution strategies among agents (RA, EP, CD, VAS).
- Created a unified progress format specification for all agents, standardizing documentation structure and versioning practices.
2026-01-23 10:04:31 +08:00
catlog22
09eeb84cda chore: bump version to 6.3.39 2026-01-22 23:39:02 +08:00
catlog22
2fb1d1243c feat: prioritize user config, do not merge default tools
Changed loadClaudeCliTools() to only load tools explicitly defined
in user config. Previously, DEFAULT_TOOLS_CONFIG.tools was spread
before user tools, causing all default tools to be loaded even if
not present in user config.

User config now has complete control over which tools are loaded.
2026-01-22 23:37:42 +08:00
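The difference is essentially whether defaults are spread into the user's tool map. A small sketch of the before/after behavior (config shapes are assumptions):

```typescript
interface ToolConfig { enabled: boolean; primaryModel?: string }
type ToolsMap = Record<string, ToolConfig>;

const DEFAULT_TOOLS_CONFIG: { tools: ToolsMap } = {
  tools: { claude: { enabled: true }, gemini: { enabled: true } },
};

// Before: defaults were spread first, so every default tool was always present.
export function loadToolsOld(userTools: ToolsMap): ToolsMap {
  return { ...DEFAULT_TOOLS_CONFIG.tools, ...userTools };
}

// After: only tools explicitly listed in the user config are loaded.
export function loadToolsNew(userTools: ToolsMap): ToolsMap {
  return { ...userTools };
}
```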
catlog22
ac62bf70db fix: preserve envFile in ensureToolTags merge function
The ensureToolTags() function was only returning enabled, primaryModel,
secondaryModel, and tags - missing envFile. This caused envFile to be
lost during config merge in loadClaudeCliTools().

Related to #96 - gemini envFile setting lost after page refresh
2026-01-22 23:35:33 +08:00
catlog22
edb55c4895 fix: include envFile in getFullConfigResponse API response
Fixes #96 - gemini/qwen envFile setting was lost after page refresh
because getFullConfigResponse() was not including the envFile field
when converting config to the legacy API format.

Changes:
- Add envFile?: string | null to CliToolConfig interface
- Include envFile in getFullConfigResponse() conversion
2026-01-22 23:30:01 +08:00
catlog22
8a7f636a85 feat: Refactor intelligent cleanup command for clarity and efficiency 2026-01-22 23:25:34 +08:00
catlog22
97ab82628d Add intelligent cleanup command with mainline detection and artifact discovery
- Introduced the `/workflow:clean` command for intelligent code cleanup.
- Implemented mainline detection to identify active development branches and core modules.
- Added drift analysis to discover stale sessions, abandoned documents, and dead code.
- Included safe execution features with staged deletion and confirmation.
- Documented usage, execution process, and implementation details in `clean.md`.
2026-01-22 23:19:54 +08:00
catlog22
be89552b0a feat: Add ccw-loop-b hybrid orchestrator skill with specialized workers
Create new ccw-loop-b skill implementing coordinator + workers architecture:

**Skill Structure**:
- SKILL.md: Entry point with three execution modes (interactive/auto/parallel)
- phases/state-schema.md: Unified state structure
- specs/action-catalog.md: Complete action reference

**Worker Agents**:
- ccw-loop-b-init.md: Session initialization and task breakdown
- ccw-loop-b-develop.md: Code implementation and file operations
- ccw-loop-b-debug.md: Root cause analysis and problem diagnosis
- ccw-loop-b-validate.md: Testing, coverage, and quality checks
- ccw-loop-b-complete.md: Session finalization and commit preparation

**Execution Modes**:
- Interactive: Menu-driven, user selects actions
- Auto: Predetermined sequential workflow
- Parallel: Concurrent worker execution with batch wait

**Features**:
- Flexible coordination patterns (single/multi-agent/hybrid)
- Batch wait API for parallel execution
- Unified state management (.loop/ directory)
- Per-worker progress tracking
- No Claude/Codex comparison content (follows new guidelines)

Follows updated design principles:
- Content independence (no framework comparisons)
- Mode flexibility (no over-constraining)
- Coordinator pattern with specialized workers
2026-01-22 23:10:43 +08:00
catlog22
df25b43884 fix(install): change default for Git Bash multi-line prompt fix to false 2026-01-22 22:59:22 +08:00
catlog22
04cd536da5 chore: bump version to 6.3.38 2026-01-22 22:54:42 +08:00
catlog22
9a3608173a feat: Add multi-perspective issue discovery and structured issue creation
- Implemented issue discovery prompt to analyze code from various perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices).
- Created structured issue generation prompt from GitHub URLs or text descriptions, including clarity detection and optional clarification questions.
- Introduced CCW Loop-B hybrid orchestrator pattern for iterative development, featuring a coordinator and specialized workers with batch wait support.
- Defined state management, session structure, and output schemas for the CCW Loop-B workflow.
- Added error handling and best practices documentation for the new features.
2026-01-22 22:53:05 +08:00
catlog22
f5b6bb97bc feat(issue-manager): update queue status display logic in renderQueueCard function 2026-01-22 22:33:44 +08:00
catlog22
2819f3597f feat: Add validation action and orchestrator for CCW Loop
- Implemented the VALIDATE action to run tests, check coverage, and generate reports.
- Created orchestrator for managing CCW Loop execution using Codex subagent pattern.
- Defined state schema for unified loop state management.
- Updated action catalog with new actions and their specifications.
- Enhanced CLI and issue routes to support new features and data structures.
- Improved documentation for Codex subagent design principles and action flow.
2026-01-22 22:32:37 +08:00
catlog22
c0c1a2eb92 fix(dashboard): make showHidden state match checkbox checked state
Fixes #97 - File browser "Show Hidden Files" checkbox appeared checked
but hidden files weren't displayed until the checkbox was toggled.

Root cause: Timing mismatch where loadFileBrowserDirectory() was called
before initFileBrowserEvents(), causing the initial API request to send
showHidden: false while the checkbox was checked.

Fix: Initialize fileBrowserState.showHidden = true in showFileBrowserModal()
to match the checkbox's default checked state.
2026-01-22 22:21:54 +08:00
catlog22
012197a861 Remove the Plan-A vs Plan-B comparison section to simplify the documentation 2026-01-22 21:42:20 +08:00
catlog22
407b2e6930 Add lightweight interactive planning workflows: Lite-Plan-B and Lite-Plan-C
- Introduced Lite-Plan-B for hybrid mode planning with multi-agent parallel exploration and primary agent merge/clarify/plan capabilities.
- Added Lite-Plan-C for Codex subagent orchestration, featuring intelligent task analysis, parallel exploration, and adaptive planning based on task complexity.
- Both workflows include detailed execution processes, session setup, and structured output formats for exploration and planning results.
2026-01-22 21:40:57 +08:00
catlog22
6428febdf6 Add universal-executor agent and enhance Codex subagent documentation
- Introduced a new agent: universal-executor, designed for versatile task execution across various domains with a systematic approach.
- Added comprehensive documentation for Codex subagents, detailing core architecture, API usage, lifecycle management, and output templates.
- Created a new markdown file for Codex subagent usage guidelines, emphasizing parallel processing and structured deliverables.
- Updated codex_prompt.md to clarify the deprecation of custom prompts in favor of skills for reusable instructions.
2026-01-22 20:41:37 +08:00
catlog22
9f9ef1d054 Refactor code structure for improved readability and maintainability 2026-01-22 18:22:38 +08:00
catlog22
ea04663035 fix(multi-cli): populate multiCliPlan sessions in liteTaskDataStore
Fix task click handlers not working in multi-CLI planning detail page.

Root cause: liteTaskDataStore was not being populated with multiCliPlan
sessions during initialization, so task click handlers couldn't access
session data using currentSessionDetailKey.

Changes:
- navigation.js: Add code to populate multiCliPlan sessions in liteTaskDataStore
- notifications.js: Add code to populate multiCliPlan sessions when data refreshes

Now when task detail page loads, liteTaskDataStore contains the correct key
'multi-cli-${sessionId}' matching currentSessionDetailKey, allowing task
click handlers to find session data and open detail drawer.

Verified: Task clicks now properly open detail panel for all 7 tasks.
2026-01-22 15:41:01 +08:00
catlog22
f0954b3247 fix(lite-execute): pass project-guidelines.json to execution phase
Ensure buildExecutionPrompt includes project constraints reference,
maintaining consistency with full workflow (plan + execute) which passes
project_guidelines via context-package.json. This allows execution phase
to respect user-defined constraints (via /workflow:session:solidify).
2026-01-22 15:30:36 +08:00
catlog22
2fffe78dc9 fix(multi-cli): complete solution details display in summary tab (#98)
Fixed issue where multi-CLI planning solution cards only showed count,
feasibility, effort, and risk badges but had empty content area.

Changes:
- Enhanced renderMultiCliSummaryContent() to extract and display all solution fields
  - Solution name (name/title)
  - Feasibility score (feasibility)
  - Effort level (effort)
  - Risk level (risk)
  - Summary/description (summary)
  - Pros list (pros)
  - Cons list (cons)

- Added CSS styles for solution cards
  - .solution-details, .details-label, .details-list
  - .solution-header, .solution-title-row, .solution-badges
  - .badge with variants for feasibility/effort/risk

- Fixed related issues:
  - Added multiCliPlan support to backend data structures
  - Exposed liteTaskDataStore to window for global access
  - Fixed header left-alignment in detail pages
  - Added 'active' class to tab content for visibility

Files modified:
- ccw/src/templates/dashboard-js/views/lite-tasks.js
- ccw/src/templates/dashboard-css/04-lite-tasks.css
- ccw/src/core/server.ts
- ccw/src/core/routes/system-routes.ts
- ccw/src/templates/dashboard-js/state.js
- ccw/src/templates/dashboard-css/02-session.css
- ccw/src/config/litellm-api-config-manager.ts (fix homedir import)

Closes #98
2026-01-22 15:30:35 +08:00
catlog22
02531c4d15 feat: i18n for CLI history view; fix Claude session discovery path encoding
## Changes

### i18n localization (i18n.js, cli-history.js)
- Add 60+ translation keys for CLI execution history and conversation details
- Replace hardcoded English strings in cli-history.js with t() function calls
- Coverage: execution history, conversation details, statuses, tool labels, buttons, hints, etc.

### Fix Claude session tracking (native-session-discovery.ts)
- Problem: Claude stores sessions using path encoding (D:\path -> D--path), but the code used a SHA256 hash, so sessions could not be discovered
- Solution:
  - Add an encodeClaudeProjectPath() function for path encoding
  - Update ClaudeSessionDiscoverer.getSessions() to use path encoding
  - Enhance extractFirstUserMessage() to support multiple message formats (string/array)
- Result: Claude sessions are now associated correctly, and the "View full process conversation" button in the UI should display as expected

## Verification
- npm run build passes
- Claude session discovery finds 1267 sessions
- Message extraction success rate: 80%
- Path encoding verified correct
2026-01-22 14:53:38 +08:00
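The session-discovery fix replaces hashing with the same path encoding Claude itself uses for its project directories. A hedged sketch of that encoding, based only on the example in the message (the exact replacement rules in native-session-discovery.ts may differ):

```typescript
// Encode a project path the way Claude names its session directories,
// e.g. "D:\\Claude_dms3" -> "D--Claude-dms3".
export function encodeClaudeProjectPath(projectPath: string): string {
  return projectPath.replace(/[^a-zA-Z0-9]/g, '-');
}
```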
catlog22
5fa7524ad7 feat(loop): support external CLI tools (cli-wrapper) in task management
- Fix missing i18n translations: loop.add, loop.save, loop.cancel
- Replace hardcoded validTools with dynamic tool loading from cli-tools.json
- Support external CLI wrappers (like doubao) in task creation and updates
- Add getEnabledToolsList() helper to fetch enabled tools dynamically
- Update mapIssueToolToLoopTool() to accept any string tool name
- Update validateTool() to use dynamic tool list
- Change LoopTask.tool type from specific strings to string (accepts any tool)

This allows tasks to use any enabled CLI tool from configuration,
including builtin tools, cli-wrappers, and api-endpoints, not just
the hardcoded ['bash', 'gemini', 'codex', 'qwen', 'claude'].
2026-01-22 12:43:37 +08:00
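Loading the tool list from configuration instead of a hardcoded array could look roughly like this; the cli-tools.json shape shown is an assumption based on the commit text:

```typescript
import { readFileSync } from 'node:fs';

interface CliToolsConfig { tools?: Record<string, { enabled?: boolean }> }

// Return the names of all enabled tools from cli-tools.json.
export function getEnabledToolsList(configPath: string): string[] {
  const config: CliToolsConfig = JSON.parse(readFileSync(configPath, 'utf8'));
  return Object.entries(config.tools ?? {})
    .filter(([, tool]) => tool.enabled !== false)
    .map(([name]) => name);
}

// Validation then accepts any configured tool rather than a fixed union.
export function validateTool(tool: string, configPath: string): boolean {
  return getEnabledToolsList(configPath).includes(tool);
}
```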
catlog22
21fbdbc55e feat(loop-monitor): add 'In Development' badge and bump to v6.3.37
- Add 'In Development' (开发中) badge to Loop Monitor navigation item
- Use yellow highlight to indicate development status
- Add i18n translations: nav.inDevelopment ('In Dev' / '开发中')
- Bump version to 6.3.37

The Loop Monitor feature is now clearly marked as under development,
helping users understand it may have limited functionality.
2026-01-22 11:59:07 +08:00
catlog22
1f1a078450 feat(loop-monitor): implement dynamic tool loading for task modals
- Add getEnabledTools() async function to fetch tools from /api/cli/tools-config
- Cache enabled tools in window.enabledTools to avoid repeated API calls
- Replace hardcoded tool options in add task modal with dynamic generation
- Replace hardcoded tool options in edit task modal with dynamic generation
- Fallback to ['claude'] if no tools enabled, all tools if API fails
- Make showAddTaskModal() and showEditTaskModal() async functions
- Update editTask() to use await when calling showEditTaskModal()

Fixes issue where task modals only showed hardcoded tools instead of
loading enabled tools from CLI configuration.
2026-01-22 11:53:17 +08:00
catlog22
d3aeac4e9f style: replace square icon with inbox icon for created status
Icon Update:
- created status: square → inbox

Rationale:
- Inbox icon is more visually intuitive
- Better conveys the meaning of 'new/pending task'
- Improves visual clarity and user understanding

Status Icons Now:
- created: 📥 inbox (new/pending)
- running: zap (active)
- paused: ⏸ pause (paused)
- completed: ✓ check (finished)
- failed: ⚠️ alert-triangle (error)
2026-01-22 11:46:43 +08:00
catlog22
e2e3d5a815 style: improve task list CSS styling and layout
Task List Styling Improvements:

1. Added .tasks-list-header styling
   - Flexbox layout with space-between alignment
   - Border bottom separator line
   - Proper heading (h4) with icon and spacing
   - Muted text color for counts

2. Added .task-list-empty styling
   - Full centered empty state container
   - Proper spacing and padding (3rem)
   - Icon styling with reduced opacity
   - Title and hint text with correct colors and sizes
   - Button margin adjustment

3. Enhanced .empty-state styling
   - Added button margin-top (1rem) for better spacing
   - Applied to both .empty-state and .empty-detail-state

Result: Task list now has consistent, professional styling with:
- Clear visual hierarchy for headers
- Properly centered empty states
- Better spacing and typography
- Improved user experience
2026-01-22 11:41:00 +08:00
catlog22
ddb7fb7d7a style: simplify loop cards and update status icons
Loop Cards:
- Remove left border accent from cards (cleaner look)
- Remove status-specific border-left colors
- Keep simple 1px border all around

Status Icons Updated:
- created: circle → square
- running: activity → zap (lightning bolt)
- paused: pause-circle → pause
- completed: check-circle-2 → check
- failed: x-circle → alert-triangle

Both renderLoopCard() and getStatusIcon() functions updated for consistency.
2026-01-22 11:33:45 +08:00
catlog22
62d5ce3f34 style: unify Loop Monitor UI design with improved clarity
Major visual improvements across all components:

Left Sidebar (Loop Cards):
- Enhanced card styling with better shadows and borders (4px left border)
- Improved hover states with subtle elevation
- Better selected state with primary color highlight
- Increased padding and spacing (1rem padding, 0.75rem margin)
- Cleaner status indicator badges

Right Panel (Detail View):
- Added background containers to all detail sections with borders
- Improved section headers with bottom borders for clear separation
- Enhanced progress items with individual card styling
- Better visual hierarchy with consistent spacing (1rem gaps)
- Added info-box component for V2 loop information

Meta Information:
- Detail-meta items now have pill-style backgrounds
- Dashed separator line for better visual grouping
- Improved spacing and padding

CLI Steps:
- Enhanced step cards with better borders and hover states
- 3px left accent border for status indication
- Smooth transitions on hover

Typography & Colors:
- Unified border-radius: 0.625rem for sections, 0.5rem for items
- Consistent background: hsl(var(--muted) / 0.25) for sections
- Better border opacity: hsl(var(--border) / 0.5) and 0.6 variants
- Improved font weights and sizes for clarity

Overall result: Cleaner, more professional interface with better visual hierarchy and clarity.
2026-01-22 11:20:16 +08:00
catlog22
15b3977e88 fix: reorganize left sidebar into 3-row layout
- Row 1: Tab buttons (循环 Loops | 任务 Tasks) + New Loop button
- Row 2: Filter dropdown (全部 All / 运行中 Running / 已暂停 Paused / 已完成 Completed / 失败 Failed)
- Row 3: Loop list items

This fixes the layout issue where multiple elements were stacking vertically
and appearing on multiple lines. Now creates a clear, organized left panel.
2026-01-22 11:13:07 +08:00
catlog22
d70f02abed fix: resolve Loop Monitor UI styling issues
- Add missing i18n keys: 'loop.listView' and 'loop.addTask' for both English and Chinese
- Fix kanban board header layout: wrap title and loop name in separate container (.kanban-header-left)
- Add CSS styling for .kanban-header-left and .kanban-loop-title to properly display loop titles
- Improve visual separation between 'Tasks Board' label and loop title

This fixes the issue where loop titles were appearing inline with the 'Tasks Board' header,
making them appear jumbled (e.g., '任务看板 在' instead of '任务看板' | '在').
2026-01-22 11:07:47 +08:00
catlog22
e11c4ba8ed feat: Loop Monitor UI optimization - Phases 1-6 complete
Complete comprehensive optimization of Loop Monitor interface with 6 phases:

Phase 1: Internationalization (i18n)
- Added 28 new translation keys (English + Chinese)
- Complete dual-language support for all new features
- Coverage: kanban board, task status, navigation, priority

Phase 2: CSS Styling Optimization
- 688 lines of kanban board styling system
- Task cards, status badges, priority badges
- Drag-and-drop visual feedback
- Base responsive design

Phase 3: UI Layout Design
- Left navigation panel optimization
- Kanban board layout (4 columns: Pending, In Progress, Blocked, Done)
- Task card information architecture
- Status update flow design

Phase 4: Backend API Extensions
- New PATCH /api/loops/v2/:loopId/status endpoint for quick status updates
- Extended PUT /api/loops/v2/:loopId with metadata support (tags, priority, notes)
- Enhanced V2LoopStorage interface
- Improved validation and error handling
- WebSocket broadcasting for real-time updates

Phase 5: Frontend JavaScript Implementation
- 967 lines of interactive functionality
- View switching system (Loops ↔ Kanban)
- Kanban board rendering with 4-column layout
- Drag-and-drop functionality (HTML5 API)
- Status update functions (updateLoopStatus, updateTaskStatus, updateLoopMetadata)
- Task context menu (right-click)
- Navigation grouping by status

Phase 6: Final Optimization
- Smooth animations (@keyframes slideInUp, fadeIn, modalFadeIn, pulse)
- Enhanced responsive design (desktop, tablet, mobile)
- Full ARIA accessibility support
- Complete keyboard navigation (arrow keys, Enter/Space, Ctrl+K, ?)
- Performance optimizations (debounce, throttle, will-change)
- Screen reader support

Key Features:
- Kanban board with drag-and-drop task management
- Task status management (pending, in_progress, blocked, done)
- Loop status quick update via PATCH API
- Navigation grouping with status-based filtering
- Full keyboard navigation support
- ARIA accessibility attributes
- Responsive design (mobile, tablet, desktop)
- Smooth animations and transitions
- Internationalization (English & Chinese)
- Performance optimizations

Code Statistics:
- Total: ~1798 lines
- loop-monitor.js: +967 lines (frontend logic)
- 36-loop-monitor.css: +688 lines (styling)
- loop-v2-routes.ts: +86/-3 lines (API backend)
- i18n.js: +60 lines (translations)

Technical Stack:
- JavaScript ES6+ (frontend)
- CSS3 with animations
- TypeScript (backend)
- HTML5 Drag & Drop API
- ARIA accessibility
- Responsive design

Browser Compatibility:
- Chrome/Edge 90+
- Firefox 88+
- Safari 14+

All TypeScript compilation tests pass. Ready for production deployment.
2026-01-22 11:01:05 +08:00
catlog22
60eab98782 feat: Add comprehensive tests for CCW Loop System flow state
- Implemented loop control tasks in JSON format for testing.
- Created comprehensive test scripts for loop flow and standalone tests.
- Developed a shell script to automate the testing of the entire loop system flow, including mock endpoints and state transitions.
- Added error handling and execution history tests to ensure robustness.
- Established variable substitution and success condition evaluations in tests.
- Set up cleanup and workspace management for test environments.
2026-01-22 10:13:00 +08:00
catlog22
d9f1d14d5e feat: add CCW Loop System for automated iterative workflow execution
Implements a complete loop execution system with multi-loop parallel support,
dashboard monitoring, and comprehensive security validation.

Core features:
- Loop orchestration engine (loop-manager, loop-state-manager)
- Multi-loop parallel execution with independent state management
- REST API endpoints for loop control (pause, resume, stop, retry)
- WebSocket real-time status updates
- Dashboard Loop Monitor view with live updates
- Security: path traversal protection and sandboxed JavaScript evaluation

Test coverage:
- 42 comprehensive tests covering multi-loop, API, WebSocket, security
- Security validation for success_condition injection attacks
- Edge case handling and end-to-end workflow tests
2026-01-21 22:55:24 +08:00
catlog22
64e064e775 feat(workflow): enhance lite-fix to support plan.json new fields (rationale, verification, risks)
- Add rationale/verification field generation for Medium severity bugs
- Add risks/code_skeleton/data_flow field support for High/Critical severity
- Update fix-plan.json requirements in cli-lite-planning-agent prompt
- Ensure executionContext includes complexity field for lite-execute consumption
- Align with plan-json-schema.json complexity-based field requirements
2026-01-21 22:35:54 +08:00
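Expressed as a TypeScript shape, the plan.json additions above might look like the following; field types and the severity thresholds are guesses from the commit message, not the actual plan-json-schema.json:

```typescript
// Hypothetical shape of the new fix-plan fields; consult plan-json-schema.json
// for the authoritative definition.
interface FixPlan {
  complexity: 'low' | 'medium' | 'high';
  rationale?: string;      // generated for Medium severity and above
  verification?: string[]; // generated for Medium severity and above
  risks?: string[];        // High/Critical severity
  code_skeleton?: string;  // High/Critical severity
  data_flow?: string;      // High/Critical severity
}
```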
catlog22
8c1d62208e chore: bump version to 6.3.36 2026-01-21 19:50:51 +08:00
catlog22
c4960c3e84 feat: add file-based interactive hypothesis-driven debugging, recording the exploration process and evolving understanding 2026-01-21 19:49:03 +08:00
catlog22
82b8fcc608 feat: add failure-history detail rendering to display failure feedback 2026-01-21 18:32:52 +08:00
catlog22
a7c8ea04f1 feat: add failure analysis, improving issue planning and solution generation 2026-01-21 17:46:22 +08:00
catlog22
2084ff3e21 fix: handle empty settings files, ensuring an empty object is returned 2026-01-21 17:04:16 +08:00
catlog22
890ca455b2 Revert "feat: adjust main panel position and height to improve layout"
This reverts commit 572c103fbf.
2026-01-21 16:34:36 +08:00
catlog22
1dfabf6bda fix: resolve CodexLens installation issues by correcting package name and improving local path detection
- Updated package name from `codexlens` to `codex-lens` in all relevant files to ensure consistency with `pyproject.toml`.
- Enhanced `findLocalPackagePath()` to always search for local paths, even when running from `node_modules`.
- Removed fallback logic for PyPI installation in several functions, providing clearer error messages for local installation failures.
- Added detailed documentation on installation steps and error handling for local development packages.
- Introduced a new summary document outlining the issues and fixes related to CodexLens installation.
2026-01-21 15:32:41 +08:00
catlog22
604405b2d6 chore: bump version to 6.3.35 2026-01-21 14:39:52 +08:00
catlog22
190d2280fd feat: update codex-review and lite-execute command examples with additional usage notes 2026-01-21 14:36:22 +08:00
catlog22
4e66864cfd chore: bump version to 6.3.34 2026-01-21 13:02:54 +08:00
catlog22
cac0566627 feat: update the check-for-updates button loading state and notifications, adding a tooltip 2026-01-21 13:00:25 +08:00
catlog22
572c103fbf feat: adjust main panel position and height to improve layout 2026-01-21 12:40:32 +08:00
catlog22
9d6bc92837 feat: add workflow management commands and utilities
- Implemented workflow installation, listing, and syncing commands in `workflow.ts`.
- Created utility functions for project root detection and package version retrieval in `project-root.ts`.
- Added update checker functionality to notify users of new package versions in `update-checker.ts`.
- Developed unit tests for project root utilities and update checker to ensure functionality and version comparison accuracy.
2026-01-21 12:35:33 +08:00
catlog22
ffe9898fd3 feat: add debug logging to track active execution state and hook events 2026-01-21 11:15:53 +08:00
catlog22
a602a46985 feat: update LSP tests, adjusting test files and increasing the analysis wait time 2026-01-21 10:57:36 +08:00
catlog22
f7dd3d23ff feat: add multiple LSP test samples, including capability tests, call hierarchy, and raw LSP tests 2026-01-21 10:43:53 +08:00
catlog22
200812d204 feat: update CLI auto-invocation triggers and execution principles, expanding the documentation 2026-01-20 22:14:45 +08:00
catlog22
261c98549d feat: Implement association tree for LSP-based code relationship discovery
- Add `association_tree` module with components for building and processing call association trees using LSP call hierarchy capabilities.
- Introduce `AssociationTreeBuilder` for constructing call trees from seed locations with depth-first expansion.
- Create data structures: `TreeNode`, `CallTree`, and `UniqueNode` for representing nodes and relationships in the call tree.
- Implement `ResultDeduplicator` to extract unique nodes from call trees and assign relevance scores based on depth, frequency, and kind.
- Add unit tests for `AssociationTreeBuilder` and `ResultDeduplicator` to ensure functionality and correctness.
2026-01-20 22:09:04 +08:00
catlog22
b85d9b9eb1 feat(workflow): switch multi-cli-plan to the in-memory invocation mode
- Phase 4 extension: collect execution method and code review tool options
- Phase 5 refactor: build executionContext and invoke lite-execute --in-memory
- Remove IMPL_PLAN.md generation, keeping only plan.json
- Update the execution flow diagram and related docs
- Keep the executionContext structure consistent with lite-plan
- Revise the Agent Roles responsibility descriptions
2026-01-20 20:33:38 +08:00
catlog22
4610018193 feat(lsp): update the TypeScript language server command to support Windows environments 2026-01-20 15:28:18 +08:00
catlog22
9c9b1ad01c Add TypeScript LSP setup guide and enhance debugging tests
- Created a comprehensive guide for setting up TypeScript LSP in Claude Code, detailing installation methods, configuration, and troubleshooting.
- Added multiple debugging test scripts to validate LSP communication with pyright, including direct communication tests, configuration checks, and document symbol retrieval.
- Implemented error handling and logging for better visibility during LSP interactions.
2026-01-20 14:53:18 +08:00
catlog22
2f3a14e946 Add unit tests for LspGraphBuilder class
- Implement comprehensive unit tests for the LspGraphBuilder class to validate its functionality in building code association graphs.
- Tests cover various scenarios including single level graph expansion, max nodes and depth boundaries, concurrent expansion limits, document symbol caching, error handling during node expansion, and edge cases such as empty seed lists and self-referencing nodes.
- Utilize pytest and asyncio for asynchronous testing and mocking of LspBridge methods.
2026-01-20 12:49:31 +08:00
catlog22
1376dc71d9 feat(workflow): update lite-lite-lite and tdd-plan docs with improved descriptions and tool support 2026-01-20 11:59:06 +08:00
catlog22
c1d12384c3 feat(mcp): add CCW_DISABLE_SANDBOX environment variable to disable workspace access restrictions
- Add an isSandboxDisabled() function in path-validator.ts
- Update validatePath() to skip the path restriction check when the sandbox is disabled
- Show the sandbox status in the MCP server startup log
- Support a disableSandbox parameter on the /api/mcp-install-ccw API
- Add a checkbox option in the Dashboard UI for disabling the sandbox
- Add Chinese and English i18n translations
2026-01-20 11:50:23 +08:00
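A minimal sketch of the sandbox toggle, assuming the usual truthy env-var convention (the real path-validator.ts may accept other values or report the state differently):

```typescript
import * as path from 'node:path';

export function isSandboxDisabled(): boolean {
  const value = (process.env.CCW_DISABLE_SANDBOX ?? '').toLowerCase();
  return value === '1' || value === 'true';
}

export function validatePath(requested: string, workspaceRoot: string): string {
  const resolved = path.resolve(requested);
  if (isSandboxDisabled()) return resolved; // sandbox off: skip workspace containment
  const root = path.resolve(workspaceRoot);
  if (resolved !== root && !resolved.startsWith(root + path.sep)) {
    throw new Error(`Path outside workspace: ${resolved}`);
  }
  return resolved;
}
```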
catlog22
eea859dd6f fix(cli): fix Windows path backslashes being swallowed and add cross-platform path support
- Rewrite escapeWindowsArg to correctly escape backslashes and quotes
- Add escapeUnixArg for Linux/macOS shell escaping
- Add normalizePathSeparators to convert path separators automatically
- Fix TypeScript type errors in vscode-lsp.ts
2026-01-20 09:44:49 +08:00
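Quoting arguments differently per platform is the core of this fix. A hedged sketch of the two escapers, following common Windows argument-quoting rules (the rewritten functions in ccw are not reproduced here and may handle more cases):

```typescript
// Windows: wrap in double quotes; backslash runs before a quote are doubled and
// the quote itself is escaped; a trailing backslash run is doubled so it cannot
// swallow the closing quote.
export function escapeWindowsArg(arg: string): string {
  let escaped = arg.replace(/(\\*)"/g, '$1$1\\"');
  escaped = escaped.replace(/(\\+)$/, '$1$1');
  return `"${escaped}"`;
}

// POSIX shells: wrap in single quotes and escape embedded single quotes.
export function escapeUnixArg(arg: string): string {
  return `'${arg.replace(/'/g, `'\\''`)}'`;
}
```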
catlog22
3fe630f221 Add tests and documentation for CodexLens LSP tool
- Introduced a new test script for the CodexLens LSP tool to validate core functionalities including symbol search, find definition, find references, and get hover.
- Created comprehensive documentation for the MCP endpoint design, detailing the architecture, features, and integration with the CCW MCP Manager.
- Developed a detailed implementation plan for transitioning to a real LSP server, outlining phases, architecture, and acceptance criteria.
2026-01-19 23:26:35 +08:00
catlog22
eeaefa7208 feat(queue): add queue merging, skipping duplicate items and marking the source queue as merged 2026-01-19 15:35:41 +08:00
catlog22
e58c33fb6e fix(cli-history): escape sourceDir so it works in onclick handlers 2026-01-19 12:22:33 +08:00
catlog22
6716772e0a fix(codexlens): add Yarn PnP support to improve environment detection
Problem analysis:
- Yarn PnP does not use a node_modules directory
- The previous logic only checked for node_modules, so PnP installs were misclassified as development environments
- As a result, local-path installation attempts failed in Yarn PnP projects

Fix:
- Add Yarn PnP detection to isDevEnvironment()
- Check the process.versions.pnp property to identify a Yarn PnP environment
- Treat Yarn PnP environments as production and install from PyPI

Impact:
- npm/pnpm: detected via node_modules (existing logic)
- Yarn PnP: detected via the pnp version property (new logic)
- Development environment: assumed only when neither check matches

Based on Gemini code review suggestion (ID: 1768794060352-gemini)
2026-01-19 11:43:12 +08:00
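The detection described above combines the node_modules check with a Yarn PnP probe. A sketch of how the two checks can compose (exact heuristics in the ccw installer may differ):

```typescript
// True if the given directory sits somewhere under a node_modules folder.
export function isInsideNodeModules(dir: string): boolean {
  return dir.split(/[\\/]/).includes('node_modules');
}

export function isDevEnvironment(moduleDir: string): boolean {
  // Yarn PnP installs expose process.versions.pnp and have no node_modules:
  // treat them as production, so CodexLens is installed from PyPI.
  if ((process.versions as Record<string, string | undefined>).pnp) return false;
  // npm/pnpm: running from node_modules means an installed (production) copy.
  return !isInsideNodeModules(moduleDir);
}
```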
catlog22
a8367bd4d7 fix(codexlens): fix CodexLens configuration being reset after npm install
Problem analysis:
- During npm install, `__dirname` points to a path inside node_modules
- Using `pip install -e` (editable mode) stores a reference to the source path
- After an npm upgrade the old path becomes invalid, forcing users to delete the virtual environment before reinstalling

Fix:
- Add an isInsideNodeModules() detection function
- Add isDevEnvironment() to determine whether we are in a development environment
- Add findLocalPackagePath() as a unified local package path lookup
- When running from node_modules, skip local paths and install directly from PyPI

Affected functions:
- bootstrapWithUv()
- installSemanticWithUv()
- bootstrapVenv()
- ensureLiteLLMEmbedderReady()

Behavior changes:
- Development environment (not in node_modules): install from the local path (editable mode)
- Production environment (installed via npm install): install from PyPI (stable package reference)
2026-01-19 11:32:50 +08:00
catlog22
ea13f9a575 fix(config): stop tests from polluting user configuration; support the CCW_DATA_DIR environment variable
Changes:
- getGlobalConfigPath() and getGlobalSettingsPath() now respect the CCW_DATA_DIR environment variable
- ensureClaudeCliTools(), saveClaudeCliTools(), and saveClaudeCliSettings() updated accordingly
- Tests now use an isolated temporary directory and no longer modify the user's production config file ~/.claude/cli-tools.json

Problems fixed:
- Integration tests were changing the user's gemini primaryModel to test-model
- Subsequent Codex CLI executions then read the wrong configuration

Verification:
- All integration tests pass (4/4)
- User configuration remains unchanged
- Default production behavior is unaffected
2026-01-19 11:28:06 +08:00
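The override is a one-line indirection at the point where config paths are built. A sketch, assuming ~/.claude as the default base as stated above:

```typescript
import * as os from 'node:os';
import * as path from 'node:path';

// CCW_DATA_DIR redirects config reads/writes (e.g. for tests) away from the
// user's real ~/.claude directory.
export function getGlobalConfigPath(): string {
  const base = process.env.CCW_DATA_DIR ?? path.join(os.homedir(), '.claude');
  return path.join(base, 'cli-tools.json');
}
```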
catlog22
7d152b7bf9 feat(doc): add CLI auto-trigger invocation scenarios and execution principles 2026-01-18 19:51:00 +08:00
catlog22
16c96229f9 feat(cli): add agent_message type for precise --final output filtering
Introduce dedicated agent_message IR type to distinguish final AI responses
from generic stdout. This enables --final flag to show only agent messages,
filtering out all intermediate content (JSONL events, reasoning, tool calls).

Changes:
- Add agent_message type to CliOutputUnitType
- Update JsonLinesParser to map final responses from all tools (codex,
  gemini, claude, opencode) to agent_message type
- Add final_output field to database schema with migration
- Update getCachedOutput and getConversation to return finalOutput
- Prefer finalOutput in outputAction for --final flag

Fixes issue where --final showed raw JSONL instead of filtered content.
2026-01-18 19:49:33 +08:00
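With a dedicated unit type, the --final filter reduces to a type check over the parsed stream. A simplified sketch (the real CliOutputUnitType union in ccw has more members than shown, and the filter name here is illustrative):

```typescript
type CliOutputUnitType = 'stdout' | 'reasoning' | 'tool_call' | 'agent_message';

interface CliOutputUnit { type: CliOutputUnitType; content: string }

// --final keeps only the agent's final messages, dropping JSONL events,
// reasoning traces, and tool calls.
export function extractFinalOutput(units: CliOutputUnit[]): string {
  return units
    .filter((unit) => unit.type === 'agent_message')
    .map((unit) => unit.content)
    .join('\n');
}
```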
catlog22
40b003be68 fix(cli): enhance CLI output handling with parsed output and filtering 2026-01-18 18:35:23 +08:00
catlog22
46111b3987 fix(cli): update prompt format to include protocol and template information 2026-01-18 14:22:36 +08:00
catlog22
f47726d43b fix(cli): update CLI stream viewer styles so text stays visible on dark backgrounds 2026-01-18 13:48:20 +08:00
catlog22
502d088c98 feat(cli): add interactive selection of the shell profile when installing the Git Bash fix 2026-01-18 13:03:22 +08:00
catlog22
f845e6e0ee fix(cli): fix the multi-line prompt format in the security audit example 2026-01-18 12:02:05 +08:00
catlog22
e96eed817c fix(cli): fix command examples for multi-line prompts, updating them to the correct usage 2026-01-18 12:01:42 +08:00
catlog22
6a6d1885d8 feat(install): add the Git Bash multi-line prompt fix and ask whether to remove it on uninstall
refactor(cli): remove CLI scripts and test files that are no longer used
fix(cli): remove the output for multi-line argument prompts
2026-01-18 11:53:49 +08:00
catlog22
a34eeb63bf feat(cli): add CLI prompt simulation and testing scripts
- Introduced `simulate-cli-prompt.js` to simulate various prompt formats and display the final content passed to the CLI.
- Added `test-shell-prompt.js` to test actual shell execution of different prompt formats, demonstrating correct vs incorrect multi-line prompt handling.
- Created comprehensive tests in `cli-prompt-parsing.test.ts` to validate prompt parsing, including single-line, multi-line, special characters, and template concatenation.
- Implemented edge case handling for empty lines, long prompts, and Unicode characters.
2026-01-18 11:10:05 +08:00
catlog22
56acc4f19c fix(cli): fix the universal prompt template format, removing extra newlines 2026-01-17 22:46:09 +08:00
catlog22
fdf468ed99 refactor(cli): remove the explanation of how the --rule option works 2026-01-17 22:08:36 +08:00
catlog22
680c2a0597 fix(cli): allow codex review with target flags without prompt
- Skip template concatenation when using --uncommitted/--base/--commit
- Allow empty prompt for review mode with target flags
- Add hasReviewTarget check in command routing
- Update documentation with validation constraints

codex review constraint: target flags and prompt are mutually exclusive
2026-01-17 22:07:26 +08:00
catlog22
5b5dc85677 refactor(cli): change from env var injection to direct prompt concatenation
- Replace $PROTO/$TMPL environment variable injection with systemRules/roles direct concatenation
- Append rules to END of prompt instead of prepending
- Change prompt field name from RULES to CONSTRAINTS in all prompts
- Default to universal-rigorous-style template when --rule not specified
- Update all .claude documentation, agents, commands, and skills
- Add streaming_content type support for Gemini delta messages

Breaking: Prompts now use CONSTRAINTS field instead of RULES
2026-01-17 21:30:05 +08:00
catlog22
1e691fa751 feat(cli): default to universal-rigorous-style template when --rule not specified
- Add default template fallback in cli.ts (effectiveRule)
- Update cli-tools-usage.md with English descriptions
- Add ACE semantic search to Pattern Discovery Workflow
- Simplify Template System documentation (list template names only)
- Filter CLI progress messages (auth, loading) in output converter

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-17 19:51:21 +08:00
catlog22
1f87ca0be3 refactor(routes): update rules-routes and claude-routes to use $PROTO $TMPL
- rules-routes.ts: replace 4 $(cat ...) template references with $PROTO $TMPL
- claude-routes.ts: replace 2 $(cat ...) template references with $PROTO $TMPL
- Add loadProtocol/loadTemplate imports
- Pass the rulesEnv parameter in executeCliTool calls

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-17 19:40:28 +08:00
catlog22
f14418603a feat(cli): add a --rule option with template auto-discovery
Refactor of the ccw cli template system:

- Add a new template-discovery.ts module supporting flat template auto-discovery
- Add the --rule <template> option to automatically load the protocol and template
- Migrate the template directory from a nested structure (prompts/category/file.txt) to a flat structure (prompts/category-function.txt)
- Update all agent/command files to use the $PROTO $TMPL environment variables instead of the $(cat ...) pattern
- Support fuzzy matching: --rule 02-review-architecture matches analysis-review-architecture.txt

Other updates:
- Dashboard: add Claude Manager and Issue Manager pages
- Codex-lens: enhance the chain_search and clustering modules

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-17 19:20:24 +08:00
catlog22
1fae35c05d docs: add Semantic CLI Invocation practice notes 2026-01-17 11:44:27 +08:00
catlog22
8523079a99 docs: update the ACE Tool configuration documentation 2026-01-17 11:40:30 +08:00
catlog22
4daeb0eead docs: add OpenCode to the typewriter animation 2026-01-17 11:36:03 +08:00
catlog22
86548af518 docs: fix centering of the Semantic CLI and Documentation tables
- Wrap the Semantic CLI Invocation table in div align center
- Wrap the Documentation table in div align center
- Update both the Chinese and English README files
2026-01-17 11:35:10 +08:00
catlog22
4e5eb6cd40 docs: fix centering of the core features table 2026-01-17 11:28:31 +08:00
catlog22
021ce619f0 docs: center-align tables 2026-01-17 11:26:02 +08:00
catlog22
63aaab596c docs: unify on a minimalist style
- Switch badges to the flat-square style
- Remove all emoji icons inside tables
- Keep section heading icons
- Unify the color scheme
2026-01-17 11:23:38 +08:00
catlog22
bc52af540e docs: redesign the landing page
- Add an animated gradient header (capsule-render)
- Add a typing animation effect (typing-svg)
- Use for-the-badge style badges
- Add Stars/Forks/Issues statistics
- Use HTML tables to optimize the layout
- Add quick navigation buttons
- Organize long content with collapsible panels
- Add an animated gradient footer
2026-01-17 11:17:33 +08:00
catlog22
8bbbdc61eb docs: fix title centering and CLI links
- Move the title into div align=center
- Correct the official CLI links:
  - Gemini: google-gemini/gemini-cli
  - Codex: openai/codex
  - OpenCode: opencode-ai/opencode
  - Qwen: QwenLM
2026-01-17 11:09:23 +08:00
catlog22
fd5f6c2c97 docs: simplify the CLI tool installation instructions
- Remove detailed configuration steps
- Present the information concisely in a table
- Link to the official documentation
2026-01-17 11:06:21 +08:00
catlog22
fd145c34cd docs: improve the custom CLI registration instructions
- Clarify that registration happens through the Dashboard UI
- Simplify the configuration notes into a table
- Emphasize that a single registration enables permanent semantic invocation
2026-01-17 11:05:30 +08:00
catlog22
10b3ace917 docs: add custom CLI registration instructions
- Register any API as a custom CLI via API Settings
- Once registered, custom CLIs can be invoked semantically
- Custom CLIs can be orchestrated together with built-in CLIs
2026-01-17 11:02:20 +08:00
catlog22
d6a2e0de59 docs: add semantic CLI invocation notes
- Users name CLI tools semantically and the system invokes them automatically
- Supports collaborative, parallel, iterative, and pipeline orchestration modes
- Example: 'use Gemini and Codex to analyze collaboratively'
2026-01-17 11:01:29 +08:00
catlog22
35c6605681 docs: simplify the ACE Tool configuration to links
- Official docs: docs.augmentcode.com
- Proxy version: github.com/eastxiaodong/ace-tool
2026-01-17 10:50:35 +08:00
catlog22
ef2229b0bb docs: update README.md with symbols and a CLI tool installation guide
- Add emoji symbols for visual richness
- Add installation instructions for the Gemini/Codex/OpenCode/Qwen CLIs
- Add ACE Tool configuration (official and proxy options)
- Add a note on the CodexLens development status
- Present Dashboard features in tables
- Keep the structure consistent with the Chinese README_CN.md
2026-01-17 10:44:41 +08:00
catlog22
b65977d8dc docs: update README_CN.md with a CLI tool installation guide and CodexLens notes
- Add installation instructions for the Gemini/Codex/OpenCode/Qwen CLIs
- Add ACE Tool configuration (official and proxy options)
- Add a note on the CodexLens development status
- Streamline the document structure to match the English version
- Update the 4-level workflow system description
2026-01-17 10:42:38 +08:00
catlog22
bc4176fda0 docs: consolidate documentation with 4-level workflow guide
- Add WORKFLOW_GUIDE.md (EN) and WORKFLOW_GUIDE_CN.md (CN)
- Simplify README.md to highlight 4-level workflow system
- Remove redundant docs: MCP_*.md, WORKFLOW_DECISION_GUIDE*.md, WORKFLOW_DIAGRAMS.md
- Move COMMAND_SPEC.md to docs/
- Move codex_mcp.md, CODEX_LENS_AUTO_HYBRID.md to codex-lens/docs/
- Delete temporary debug documents and outdated files

Root directory: 28 → 14 MD files
2026-01-17 10:38:06 +08:00
catlog22
464f3343f3 chore: bump version to 6.3.33 2026-01-16 15:50:32 +08:00
catlog22
bb6cf42df6 fix: update the issue execution docs to clarify queue ID requirements and the user interaction flow 2026-01-16 15:49:26 +08:00
catlog22
0f0cb7e08e refactor: streamline the brainstorm context overflow protection docs
- conceptual-planning-agent.md: 34 lines → 10 lines (-71%)
- auto-parallel.md: 42 lines → 9 lines (-79%)
- Eliminate duplicate definitions; the workflow now references the agent limits
- Remove redundant strategy lists, self-check checklists, and code examples
- Keep the essentials: limit numbers, brief strategies, recovery methods
2026-01-16 15:36:59 +08:00
catlog22
39d070eab6 fix: resolve GitHub issues (#50, #54)
- #54: Add API endpoint configuration documentation to DASHBOARD_GUIDE.md
- #50: Add brainstorm context overflow protection with output size limits

Note: #72 and #53 not changed per user feedback - existing behavior is sufficient
(users can configure envFile themselves; default Python version is appropriate)
2026-01-16 15:09:31 +08:00
catlog22
9ccaa7e2fd fix: update the CLI tool configuration cache invalidation logic 2026-01-16 14:28:10 +08:00
catlog22
eeb90949ce chore: bump version to 6.3.32
- Fix: Dashboard project overview display issue (#80)
- Refactor: Update project structure to use project-tech.json
2026-01-16 14:09:09 +08:00
catlog22
7b677b20fb fix: update the project docs to correct the project context and learning-consolidation flow descriptions 2026-01-16 14:01:27 +08:00
catlog22
e2d56bc08a refactor: update the project structure, replace project.json with project-tech.json, and add new architecture and technology analysis 2026-01-16 13:33:38 +08:00
catlog22
d515090097 feat: add --mode review support for codex CLI
- Add 'review' to mode enum in ParamsSchema and schema
- Implement codex review subcommand in buildCommand (uses --uncommitted by default)
- Other tools (gemini/qwen/claude) accept review mode but no operation change
- Update cli-tools-usage.md with review mode documentation
2026-01-16 13:01:02 +08:00
catlog22
d81dfaf143 fix: add cross-platform support for hook installation (#82)
- Add PlatformUtils module for platform detection (Windows/macOS/Linux)
- Add escapeForShell() for platform-specific shell escaping
- Add checkCompatibility() to warn about incompatible hooks before install
- Add getVariant() to support platform-specific template variants
- Fix node -e commands: use double quotes on Windows, single quotes on Unix (see the sketch below)
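A minimal sketch of the quoting helper named above; the object shape and signatures are assumptions, only the quoting rule comes from the commit:

```javascript
// Platform-aware quoting for node -e snippets; real escaping handles more edge cases
const PlatformUtils = {
  detect: () =>
    process.platform === 'win32' ? 'windows'
      : process.platform === 'darwin' ? 'macos'
      : 'linux',

  escapeForShell: (script) =>
    process.platform === 'win32'
      ? `"${script.replace(/"/g, '\\"')}"`      // cmd.exe: double quotes
      : `'${script.replace(/'/g, `'\\''`)}'`,   // POSIX shells: single quotes
};

// e.g. `node -e ${PlatformUtils.escapeForShell('console.log("hook fired")')}`
```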
2026-01-16 12:54:56 +08:00
catlog22
d7e5ee44cc fix: adapt help-routes.ts to new command.json structure (fixes #81)
- Replace getIndexDir() with getCommandFilePath() to find command.json
- Update file watcher to monitor command.json instead of index/ directory
- Modify API routes to read from unified command.json structure
- Add buildWorkflowRelationships() to dynamically build workflow data from flow fields
- Add /api/help/agents endpoint for agents list
- Add category merge logic for frontend compatibility (cli includes general)
- Add cli-init command to command.json
2026-01-16 12:46:50 +08:00
catlog22
dde39fc6f5 fix: update the post-CLI-call guidance and remove the unnecessary polling recommendation 2026-01-16 09:40:21 +08:00
catlog22
9b4fdc1868 Refactor code structure for improved readability and maintainability 2026-01-15 22:43:44 +08:00
catlog22
623afc1d35 6.3.31 2026-01-15 22:30:57 +08:00
catlog22
085652560a refactor: remove the ccw cli internal timeout parameter; timeouts are now controlled by the external bash caller
- Remove the --timeout command-line option and internal timeout handling logic
- The process lifecycle follows the state of the parent (bash) process
- Simplify the code; timeout control is delegated to the external caller
2026-01-15 22:30:22 +08:00
catlog22
af4ddb1280 feat: add queue and issue deletion, with support for archiving issues 2026-01-15 19:58:54 +08:00
catlog22
7db659f0e1 feat: enhance issue search and polish the multi-queue card UI
Search enhancements:
- Add debouncing to fix page freezes caused by rapid typing
- Extend the search scope to the solution description and approach fields
- Highlight matching keywords in search results
- Add a search suggestion dropdown with keyboard navigation

Multi-queue UI:
- Optimize the queue expanded-view card layout with CSS Grid
- Add a queue deactivation feature and API endpoint
- Improve status color distribution and stat card styles
- Add Chinese i18n for the activate/deactivate buttons

Fixes:
- Fix a routing conflict that caused 404 errors on deactivate
- Fix drag-and-drop ordering breaking after async loading
2026-01-15 19:44:44 +08:00
catlog22
ba526ea09e fix: fix the Dashboard overview page failing to display project information
Add an extractStringArray helper to handle mixed array types (string arrays and object arrays),
so that loadProjectOverview can correctly process the data structures in project-tech.json (see the sketch below).

Fields fixed:
- languages: object array [{name, file_count, primary}] → string array
- frameworks: string array, kept as-is for compatibility
- key_components: object array [{name, description, path}] → string array
- layers/patterns: mixed types, kept as-is for compatibility
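A minimal sketch of such a helper, assuming objects expose their display value under a `name` key as in the field shapes above:

```javascript
// Normalize a mixed array (strings and objects) into a plain string array
function extractStringArray(value, nameKey = 'name') {
  if (!Array.isArray(value)) return [];
  return value
    .map((item) => (typeof item === 'string' ? item : item?.[nameKey]))
    .filter((name) => typeof name === 'string' && name.length > 0);
}

// extractStringArray([{ name: 'TypeScript', file_count: 120, primary: true }, 'Python'])
// → ['TypeScript', 'Python']
```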

Closes #79
2026-01-15 18:58:42 +08:00
catlog22
c308e429f8 feat: add an incremental update command to support single-file index updates 2026-01-15 18:14:51 +08:00
catlog22
c24ed016cb feat: update the execution command docs with queue ID requirements and user prompting 2026-01-15 16:22:48 +08:00
catlog22
0c9a6d4154 chore: bump version to 6.3.29
Release 6.3.29 with:
- Multi-CLI task and discussion tabs i18n support
- Collapsible sections for discussion and summary tabs
- Post-Completion Expansion for execution commands
- Enhanced multi-CLI session handling
- Code structure refactoring
2026-01-15 15:38:15 +08:00
catlog22
7b5c3cacaa feat: add i18n support for the multi-CLI task and discussion tabs 2026-01-15 15:35:09 +08:00
catlog22
e6e7876b38 feat: Add collapsible sections and enhance layout for discussion and summary tabs 2026-01-15 15:30:11 +08:00
catlog22
0eda520fd7 feat: Enhance multi-CLI session handling and UI updates
- Added loading of plan.json in scanMultiCliDir to improve task extraction.
- Implemented normalization of tasks from plan.json format to support new UI.
- Updated CSS for multi-CLI plan summary and task item badges for better visibility.
- Refactored hook-manager to use Node.js for cross-platform compatibility in command execution.
- Improved i18n support for new CLI tool configuration in the hook wizard.
- Enhanced lite-tasks view to utilize normalized tasks and provide better fallback mechanisms.
- Updated memory-update-queue to return string messages for better integration with hooks.
2026-01-15 15:20:20 +08:00
catlog22
e22b525e9c feat: add Post-Completion Expansion to execution commands
After an execution command completes, ask the user whether to expand the work into issues (test/enhance/refactor/doc); selected items invoke /issue:new
2026-01-15 13:00:50 +08:00
catlog22
86536aaa10 Refactor code structure for improved readability and maintainability 2026-01-15 11:51:19 +08:00
catlog22
3ef766708f chore: bump version to 6.3.28
Fixes #74 - Include ccw/scripts/ in npm package files
2026-01-15 11:20:34 +08:00
catlog22
95a7f05aa9 Add unified command indices for CCW and CCW-Help with detailed capabilities, flows, and intent rules
- Introduced command.json for CCW-Help with 88 commands and 16 agents, covering essential workflows and memory management.
- Created command.json for CCW with comprehensive capabilities for exploration, planning, execution, bug fixing, testing, reviewing, and documentation.
- Defined complex flows for rapid iteration, full exploration, coupled planning, bug fixing, issue lifecycle management, and more.
- Implemented intent rules for bug fixing, issue batch processing, exploration, UI design, TDD, review, and documentation.
- Established CLI tools and injection rules to enhance command execution based on context and complexity.
2026-01-15 11:19:30 +08:00
catlog22
f692834153 fix: the Status navigation item now shows the CLI status page instead of the CLAUDE.md manager
The cli-manager view route was incorrectly calling renderClaudeManager(); it now calls the correct renderCliManager() function.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 10:38:19 +08:00
catlog22
a228bb946b fix: the Issue Manager completed filter now shows archived issues
- Add loadIssueHistory() to load archived issues from /api/issues/history
- Update filterIssuesByStatus() to load history data when the completed filter is selected
- Update renderIssueView() to merge currently completed issues with archived issues
- Update renderIssueCard() to show an "Archived" badge that marks archived issues
- Update openIssueDetail() to load archived issue details from the cache
- Add .issue-card.archived and .issue-archived-badge CSS styles

Fixes: https://github.com/catlog22/Claude-Code-Workflow/issues/76

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 10:28:52 +08:00
catlog22
4d57f47717 feat: update the search tool priority guide with a unified format for readability 2026-01-14 22:00:01 +08:00
catlog22
c8cac5b201 feat: add a search tool priority guide and optimize CLI tool invocation and execution strategy 2026-01-14 21:46:36 +08:00
catlog22
f9c1216eec feat: add token consumption diagnostics and improve output and state management 2026-01-14 21:40:00 +08:00
catlog22
266f6f11ec feat: Enhance documentation diagnosis and category mapping
- Introduced action to diagnose documentation structure, identifying redundancies and conflicts.
- Added centralized category mappings in JSON format for improved detection and strategy application.
- Updated existing functions to utilize new mappings for taxonomy and strategy matching.
- Implemented new detection patterns for documentation redundancy and conflict.
- Expanded state schema to include documentation diagnosis results.
- Enhanced severity criteria and strategy selection guide to accommodate new documentation issues.
2026-01-14 21:07:52 +08:00
catlog22
1f5ce9c03a Enhance CCW Orchestrator with Requirement Analysis Features
- Updated SKILL.md to reflect new requirement analysis capabilities, including input analysis and clarity scoring.
- Expanded issue workflow in issue.md to include discovery and creation phases, along with detailed command references.
- Introduced requirement analysis specification in requirement-analysis.md, outlining clarity scoring, dimension extraction, and validation processes.
- Added output templates specification in output-templates.md for consistent user experience across classification, planning, clarification, execution, and summary outputs.
2026-01-14 20:15:42 +08:00
catlog22
959d60b31f Enhance CLI Stream Viewer and Navigation Lifecycle Management
- Added lifecycle management for CLI Stream Viewer with destroy function to clean up event listeners and timers.
- Improved navigation state management by registering destroy functions for views and ensuring cleanup on transitions.
- Updated Claude Manager to include lifecycle functions for better resource management.
- Enhanced CLI History View with state reset functionality and improved dropdown handling for batch delete.
- Introduced round solutions rendering in Lite Tasks View, including collapsible sections for implementation plans, dependencies, and technical concerns.
2026-01-14 19:57:05 +08:00
catlog22
49845fe1ae feat: extend the multi-CLI detail page styles and update the task card and decision status display 2026-01-14 18:47:23 +08:00
catlog22
aeb111420e feat: add multi-CLI plan support and update the data aggregation and navigation components to handle the new task type 2026-01-14 17:06:36 +08:00
catlog22
6ff3e5f8fe test: add unit tests for hook quoting fix (Issue #73)
Add comprehensive test suite for convertToClaudeCodeFormat function:
- Verify bash -c commands use single quotes
- Verify jq patterns are preserved without excessive escaping
- Verify single quotes in scripts are properly escaped
- Test all real-world hook templates (danger-*, ccw-notify, log-tool)
- Test edge cases (non-bash commands, already formatted data)

All 13 tests passing.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 15:22:52 +08:00
catlog22
d941166d84 fix: use single quotes for bash -c script to avoid jq escaping issues
Problem:
When generating hook configurations, the convertToClaudeCodeFormat function
was using double quotes to wrap bash -c script arguments. This caused
complex escaping issues with jq commands inside, leading to parse errors
like "jq: error: syntax error, unexpected end of file".

Solution:
For bash -c commands, now use single quotes to wrap the script argument.
Single quotes prevent shell expansion, so internal double quotes (like
those used in jq patterns) work naturally without excessive escaping.

If the script contains single quotes, they are properly escaped using
the '\'' pattern (close quote, escaped quote, reopen quote).
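A minimal sketch of that quoting rule, assuming the converter only needs to wrap the script argument (the jq snippet is illustrative):

```javascript
// Wrap a bash -c script in single quotes; each embedded ' becomes '\''
function wrapBashScript(script) {
  const escaped = script.replace(/'/g, `'\\''`);
  return `bash -c '${escaped}'`;
}

// jq's double quotes survive untouched inside the single-quoted script
wrapBashScript(`jq -r '.tool_name // "unknown"' payload.json`);
```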

Fixes: https://github.com/catlog22/Claude-Code-Workflow/issues/73

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 15:07:04 +08:00
catlog22
ac9ba5c7e4 feat: update the CLI analysis call section to emphasize waiting for results and valuing each call; remove the background-execution default 2026-01-14 14:00:14 +08:00
catlog22
9e55f51501 feat: add requirement analysis with dimension decomposition, coverage assessment, and ambiguity detection 2026-01-14 13:42:57 +08:00
catlog22
43b8cfc7b0 feat: add CLI-assisted intent classification and action planning, improving complex input handling and execution strategy 2026-01-14 13:23:22 +08:00
catlog22
633d918da1 Add quality gates and tuning strategies documentation
- Introduced quality gates specification for skill tuning, detailing quality dimensions, scoring, and gate definitions.
- Added comprehensive tuning strategies for various issue categories, including context explosion, long-tail forgetting, data flow, and agent coordination.
- Created templates for diagnosis reports and fix proposals to standardize documentation and reporting processes.
2026-01-14 12:59:13 +08:00
catlog22
6b4b9b0775 feat: enhance multi-CLI planning with new schema for solutions and implementation plans; improve file handling with async methods 2026-01-14 12:15:42 +08:00
catlog22
360d29d7be Enhance server routing to include dialog API endpoints
- Updated system routes in the server to handle dialog-related API requests.
- Added support for new dialog routes under the '/api/dialog/' path.
2026-01-14 10:51:23 +08:00
catlog22
4fe7f6cde6 feat: enhance CLI discussion agent and multi-CLI planning with JSON string support; improve error handling and internationalization 2026-01-13 23:51:46 +08:00
catlog22
6922ca27de Add Multi-CLI Plan feature and corresponding JSON schema
- Introduced a new navigation item for "Multi-CLI Plan" in the dashboard template.
- Created a new JSON schema for "Multi-CLI Discussion Artifact" to facilitate structured discussions and decision-making processes.
2026-01-13 23:46:15 +08:00
catlog22
c3da637849 feat(workflow): add multi-CLI collaborative planning command
- Introduced a new command `/workflow:multi-cli-plan` for collaborative planning using ACE semantic search and iterative analysis with Claude and Codex.
- Implemented a structured execution flow with phases for context gathering, multi-tool analysis, user decision points, and final plan generation.
- Added detailed documentation outlining the command's usage, execution phases, and key features.
- Included error handling and configuration options for enhanced user experience.
2026-01-13 23:23:09 +08:00
catlog22
2f1c56285a chore: bump version to 6.3.27
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 21:51:10 +08:00
catlog22
85972b73ea feat: update CSRF protection logic and enhance GPU detection method; improve i18n for hook wizard templates 2026-01-13 21:49:08 +08:00
catlog22
6305f19bbb chore: bump version to 6.3.26
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 21:33:24 +08:00
catlog22
275d2cb0af feat: Add environment file support for CLI tools
- Introduced a new input group for environment file configuration in the dashboard CSS.
- Updated hook manager to queue CLAUDE.md updates with configurable threshold and timeout.
- Enhanced CLI manager view to include environment file input for built-in tools (gemini, qwen).
- Implemented environment file loading mechanism in cli-executor-core, allowing custom environment variables.
- Added unit tests for environment file parsing and loading functionalities.
- Updated memory update queue to support dynamic configuration of threshold and timeout settings.
2026-01-13 21:31:46 +08:00
catlog22
d5f57d29ed feat: add issue discovery by prompt command with Gemini planning
- Introduced `/issue:discover-by-prompt` command for user-driven issue discovery.
- Implemented multi-agent exploration with iterative feedback loops.
- Added ACE semantic search for context gathering and cross-module comparison capabilities.
- Enhanced user experience with natural language input and adaptive exploration strategies.

feat: implement memory update queue tool for batching updates

- Created `memory-update-queue.js` for managing CLAUDE.md updates.
- Added functionality for queuing paths, deduplication, and auto-flushing based on thresholds and timeouts.
- Implemented methods for queue status retrieval, flushing, and timeout checks.
- Configured to store queue data persistently in `~/.claude/.memory-queue.json`.
2026-01-13 21:04:45 +08:00
catlog22
7d8b13f34f feat(mcp): add cross-platform MCP config support with Windows cmd /c auto-fix
- Add buildCrossPlatformMcpConfig() helper for automatic Windows cmd /c wrapping
- Add checkWindowsMcpCompatibility() to detect configs needing Windows fixes
- Add autoFixWindowsMcpConfig() to automatically fix incompatible configs
- Add showWindowsMcpCompatibilityWarning() dialog for user confirmation
- Simplify recommended MCP configs (ace-tool, chrome-devtools, exa) using helper
- Auto-detect and prompt when adding MCP servers with npx/npm/node/python commands
- Add i18n translations for Windows compatibility warnings (en/zh)

Supported commands for auto-detection: npx, npm, node, python, python3, pip, pip3, pnpm, yarn, bun
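A minimal sketch of the wrapping helper, assuming an MCP server entry of the form `{ command, args }` (the example server is illustrative):

```javascript
// Commands that resolve to .cmd shims on Windows and need a cmd /c wrapper
const NEEDS_CMD_WRAP = ['npx', 'npm', 'node', 'python', 'python3', 'pip', 'pip3', 'pnpm', 'yarn', 'bun'];

function buildCrossPlatformMcpConfig({ command, args = [], ...rest }) {
  if (process.platform === 'win32' && NEEDS_CMD_WRAP.includes(command)) {
    return { command: 'cmd', args: ['/c', command, ...args], ...rest };
  }
  return { command, args, ...rest };
}

// buildCrossPlatformMcpConfig({ command: 'npx', args: ['-y', 'some-mcp-server'] })
// → on Windows: { command: 'cmd', args: ['/c', 'npx', '-y', 'some-mcp-server'] }
```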

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 19:07:11 +08:00
catlog22
340137d347 fix: resolve GitHub issues #63, #66, #67, #68, #69, #70
- #70: Fix API Key Tester URL handling - normalize trailing slashes before
  version suffix detection to prevent double-slash URLs like //models
- #69: Fix memory embedder ignoring CodexLens config - add error handling
  for CodexLensConfig.load() with fallback to defaults
- #68: Fix ccw cli using wrong Python environment - add getCodexLensVenvPython()
  to resolve correct venv path on Windows/Unix
- #67: Fix LiteLLM API Provider test endpoint - actually test API key connection
  instead of just checking ccw-litellm installation
- #66: Fix help-routes.ts path configuration - use correct 'ccw-help' directory
  name and refactor getIndexDir to pure function
- #63: Fix CodexLens install state refresh - add cache invalidation after
  config save in codexlens-manager.js

Also includes targeted unit tests for the URL normalization logic.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 18:20:54 +08:00
catlog22
61cef8019a chore: bump version to 6.3.25
- Add review-code skill with multi-dimensional code review
- Externalized rules configuration (specs/rules/*.json)
- Centralized state management module

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 17:14:07 +08:00
catlog22
08308aa9ea feat: add support for HTML tags and extend the BBCode conversion rules 2026-01-13 16:58:24 +08:00
catlog22
94ae9e264c Add output preview phase and callout types specifications
- Implemented Phase 4: Output & Preview for text formatter, including saving formatted content, generating statistics, and providing HTML preview.
- Created callout types documentation with detection patterns and conversion rules for BBCode and HTML.
- Added element mapping specifications detailing detection patterns and conversion matrices for various Markdown elements.
- Established format conversion rules for BBCode and Markdown, emphasizing pixel-based sizing and supported tags.
- Developed BBCode template with structured document and callout templates for consistent formatting.
2026-01-13 16:53:25 +08:00
catlog22
549e6e70e4 chore: bump version to 6.3.24
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 16:01:08 +08:00
catlog22
15514c8f91 Add multi-dimensional code review rules for architecture, correctness, performance, readability, security, and testing
- Introduced architecture rules to detect circular dependencies, god classes, layer violations, and mixed concerns.
- Added correctness rules focusing on null checks, empty catch blocks, unreachable code, and type coercion.
- Implemented performance rules addressing nested loops, synchronous I/O, memory leaks, and unnecessary re-renders in React.
- Created readability rules to improve function length, variable naming, deep nesting, magic numbers, and commented code.
- Established security rules to identify XSS risks, hardcoded secrets, SQL injection vulnerabilities, and insecure random generation.
- Developed testing rules to enhance test quality, coverage, and maintainability, including missing assertions and error path testing.
- Documented the structure and schema for rule files in the index.md for better understanding and usage.
2026-01-13 14:53:20 +08:00
catlog22
29c8bb7a66 feat: Add orchestrator and state management for code review process
- Implemented orchestrator logic to manage code review phases, including state reading, action selection, and execution loop.
- Defined state schema for review process, including metadata, context, findings, and execution tracking.
- Created action catalog detailing actions for context collection, quick scan, deep review, report generation, and completion.
- Established error recovery strategies and termination conditions for robust review handling.
- Developed issue classification and quality standards documentation to guide review severity and categorization.
- Introduced review dimensions with detailed checklists for correctness, security, performance, readability, testing, and architecture.
- Added templates for issue reporting and review reports to standardize output and improve clarity.
2026-01-13 14:39:16 +08:00
catlog22
76f5311e78 Refactor: Remove outdated security requirements and best practice templates
- Deleted security requirements specification file to streamline documentation.
- Removed best practice finding template to enhance clarity and focus on critical issues.
- Eliminated report template and security finding template to reduce redundancy and improve maintainability.
- Updated skill generator templates to enforce mandatory prerequisites for better compliance with design standards.
- Bumped package version from 6.3.18 to 6.3.23 for dependency updates.
2026-01-13 13:58:41 +08:00
catlog22
ca6677149a chore: bump version to 6.3.23
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 13:04:49 +08:00
catlog22
880376aefc feat: improve the Solution ID naming scheme and add GitHub link display
- Solution IDs now use a 4-character random uid (e.g., SOL-GH-123-a7x9) to avoid overwrites when an issue is planned multiple times
- Issue cards now show a GitHub link icon that jumps to the corresponding issue

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 13:03:51 +08:00
catlog22
a20f81d44a fix: support both the old and new discovery-state.json formats
- readDiscoveryProgress: auto-detect the perspectives format (object array vs. string array)
- readDiscoveryIndex: extract perspectives from both perspectives and metadata.perspectives
- List API: prefer statistics from the results object, falling back to top-level fields
- Add 3 test cases verifying new-format compatibility
- bump version to 6.3.22

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 12:35:15 +08:00
1575 changed files with 423715 additions and 33038 deletions

View File

@@ -9,12 +9,7 @@
**Strictly follow the cli-tools.json configuration**
Available CLI endpoints are dynamically defined by the config file:
- Built-in tools and their enable/disable status
- Custom API endpoints registered via the Dashboard
- Managed through the CCW Dashboard Status page
Available CLI endpoints are dynamically defined by the config file
## Tool Execution
- **Context Requirements**: @~/.claude/workflows/context-tools.md
@@ -25,11 +20,24 @@ Available CLI endpoints are dynamically defined by the config file:
- **TaskOutput usage**: Only use `TaskOutput({ task_id: "xxx", block: false })` + sleep loop to poll completion status. NEVER read intermediate output during agent/CLI execution - wait for final result only
### CLI Tool Calls (ccw cli)
- **Default: `run_in_background: true`** - Unless otherwise specified, always use background execution for CLI calls:
- **Default: Use Bash `run_in_background: true`** - Unless otherwise specified, always execute CLI calls in background using Bash tool's background mode:
```
Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
Bash({
command: "ccw cli -p '...' --tool gemini",
run_in_background: true // Bash tool parameter, not ccw cli parameter
})
```
- **After CLI call**: Stop immediately - let CLI execute in background, do NOT poll with TaskOutput
- **After CLI call**: Stop output immediately - let CLI execute in background. **DO NOT use TaskOutput polling** - wait for hook callback to receive results
### CLI Analysis Calls
- **Wait for results**: MUST wait for CLI analysis to complete before taking any write action. Do NOT proceed with fixes while analysis is running
- **Value every call**: Each CLI invocation is valuable and costly. NEVER waste analysis results:
- Aggregate multiple analysis results before proposing solutions
### CLI Auto-Invoke Triggers
- **Reference**: See `cli-tools-usage.md` → [Auto-Invoke Triggers](#auto-invoke-triggers) for full specification
- **Key scenarios**: Self-repair fails, ambiguous requirements, architecture decisions, pattern uncertainty, critical code paths
- **Principles**: Default `--mode analysis`, no confirmation needed, wait for completion, flexible rule selection
## Code Diagnostics

View File

@@ -1,4 +0,0 @@
{
"interval": "manual",
"tool": "gemini"
}

View File

@@ -55,6 +55,17 @@ color: yellow
**Step-by-step execution**:
```
0. Load planning notes → Extract phase-level constraints (NEW)
Commands: Read('.workflow/active/{session-id}/planning-notes.md')
Output: Consolidated constraints from all workflow phases
Structure:
- User Intent: Original GOAL, KEY_CONSTRAINTS
- Context Findings: Critical files, architecture notes, constraints
- Conflict Decisions: Resolved conflicts, modified artifacts
- Consolidated Constraints: Numbered list of ALL constraints (Phase 1-3)
USAGE: This is the PRIMARY source of constraints. All task generation MUST respect these constraints.
1. Load session metadata → Extract user input
- User description: Original task/feature requirements
- Project scope: User-specified boundaries and goals
@@ -277,8 +288,8 @@ function computeCliStrategy(task, allTasks) {
"execution_group": "parallel-abc123|null",
"module": "frontend|backend|shared|null",
"execution_config": {
"method": "agent|hybrid|cli",
"cli_tool": "codex|gemini|qwen|auto",
"method": "agent|cli",
"cli_tool": "codex|gemini|qwen|auto|null",
"enable_resume": true,
"previous_cli_id": "string|null"
}
@@ -292,32 +303,31 @@ function computeCliStrategy(task, allTasks) {
- `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks
- `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
- `execution_config`: CLI execution settings (MUST align with userConfig from task-generate-agent)
- `method`: Execution method - `agent` (direct), `hybrid` (agent + CLI), `cli` (CLI only)
- `method`: Execution method - `agent` (direct) or `cli` (CLI only). Only two values in final task JSON.
- `cli_tool`: Preferred CLI tool - `codex`, `gemini`, `qwen`, `auto`, or `null` (for agent-only)
- `enable_resume`: Whether to use `--resume` for CLI continuity (default: true)
- `previous_cli_id`: Previous task's CLI execution ID for resume (populated at runtime)
**execution_config Alignment Rules** (MANDATORY):
```
userConfig.executionMethod → meta.execution_config + implementation_approach
userConfig.executionMethod → meta.execution_config
"agent" →
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
implementation_approach steps: NO command field (agent direct execution)
"hybrid" →
meta.execution_config = { method: "hybrid", cli_tool: userConfig.preferredCliTool }
implementation_approach steps: command field ONLY on complex steps
Execution: Agent executes pre_analysis, then directly implements implementation_approach
"cli" →
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool }
implementation_approach steps: command field on ALL steps
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Execution: Agent executes pre_analysis, then hands off full context to CLI via buildCliHandoffPrompt()
"hybrid" →
Per-task decision: set method to "agent" OR "cli" per task based on complexity
- Simple tasks (≤3 files, straightforward logic) → { method: "agent", cli_tool: null, enable_resume: false }
- Complex tasks (>3 files, complex logic, refactoring) → { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Final task JSON always has method = "agent" or "cli", never "hybrid"
```
**Consistency Check**: `meta.execution_config.method` MUST match presence of `command` fields:
- `method: "agent"` → 0 steps have command field
- `method: "hybrid"` → some steps have command field
- `method: "cli"` → all steps have command field
**IMPORTANT**: implementation_approach steps do NOT contain `command` fields. Execution routing is controlled by task-level `meta.execution_config.method` only.
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
@@ -336,7 +346,7 @@ userConfig.executionMethod → meta.execution_config + implementation_approach
- `test_framework`: Existing test framework from project (required for test tasks)
- `coverage_target`: Target code coverage percentage (optional)
**Note**: CLI tool usage for test-fix tasks is now controlled via `flow_control.implementation_approach` steps with `command` fields, not via `meta.use_codex`.
**Note**: CLI tool usage for test-fix tasks is now controlled via task-level `meta.execution_config.method`, not via `meta.use_codex`.
#### Context Object
@@ -547,59 +557,45 @@ The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:
##### Implementation Approach
**Execution Modes**:
**Execution Control**:
The `implementation_approach` supports **two execution modes** based on the presence of the `command` field:
The `implementation_approach` defines sequential implementation steps. Execution routing is controlled by **task-level `meta.execution_config.method`**, NOT by step-level `command` fields.
1. **Default Mode (Agent Execution)** - `command` field **omitted**:
**Two Execution Modes**:
1. **Agent Mode** (`meta.execution_config.method = "agent"`):
- Agent interprets `modification_points` and `logic_flow` autonomously
- Direct agent execution with full context awareness
- No external tool overhead
- **Use for**: Standard implementation tasks where agent capability is sufficient
- **Required fields**: `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, `output`
2. **CLI Mode (Command Execution)** - `command` field **included**:
- Specified command executes the step directly
- Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
- **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
- **Required fields**: Same as default mode **PLUS** `command`, `resume_from` (optional)
- **Command patterns** (with resume support):
- `ccw cli -p '[prompt]' --tool codex --mode write --cd [path]`
- `ccw cli -p '[prompt]' --resume ${previousCliId} --tool codex --mode write` (resume from previous)
- `ccw cli -p '[prompt]' --tool gemini --mode write --cd [path]` (write mode)
- **Resume mechanism**: When step depends on previous CLI execution, include `--resume` with previous execution ID
2. **CLI Mode** (`meta.execution_config.method = "cli"`):
- Agent executes `pre_analysis`, then hands off full context to CLI via `buildCliHandoffPrompt()`
- CLI tool specified in `meta.execution_config.cli_tool` (codex/gemini/qwen)
- Leverages specialized CLI tools for complex reasoning
- **Use for**: Large-scale features, complex refactoring, or when userConfig.executionMethod = "cli"
**Semantic CLI Tool Selection**:
**Step Schema** (same for both modes):
```json
{
"step": 1,
"title": "Step title",
"description": "What to implement (may use [variable] placeholders from pre_analysis)",
"modification_points": ["Quantified changes: [list with counts]"],
"logic_flow": ["Implementation sequence"],
"depends_on": [0],
"output": "variable_name"
}
```
Agent determines CLI tool usage per-step based on user semantics and task nature.
**Required fields**: `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, `output`
**Source**: Scan `metadata.task_description` from context-package.json for CLI tool preferences.
**IMPORTANT**: Do NOT add `command` field to implementation_approach steps. Execution routing is determined by task-level `meta.execution_config.method` only.
**User Semantic Triggers** (patterns to detect in task_description):
- "use Codex/codex" → Add `command` field with Codex CLI
- "use Gemini/gemini" → Add `command` field with Gemini CLI
- "use Qwen/qwen" → Add `command` field with Qwen CLI
- "CLI execution" / "automated" → Infer appropriate CLI tool
**Task-Based Selection** (when no explicit user preference):
- **Implementation/coding**: Codex preferred for autonomous development
- **Analysis/exploration**: Gemini preferred for large context analysis
- **Documentation**: Gemini/Qwen with write mode (`--mode write`)
- **Testing**: Depends on complexity - simple=agent, complex=Codex
**Default Behavior**: Agent always executes the workflow. CLI commands are embedded in `implementation_approach` steps:
- Agent orchestrates task execution
- When step has `command` field, agent executes it via CCW CLI
- When step has no `command` field, agent implements directly
- This maintains agent control while leveraging CLI tool power
**Key Principle**: The `command` field is **optional**. Agent decides based on user semantics and task complexity.
**Examples**:
**Example**:
```json
[
// === DEFAULT MODE: Agent Execution (no command field) ===
{
"step": 1,
"title": "Load and analyze role analyses",
@@ -636,33 +632,6 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
],
"depends_on": [1],
"output": "implementation"
},
// === CLI MODE: Command Execution (optional command field) ===
{
"step": 3,
"title": "Execute implementation using CLI tool",
"description": "Use Codex/Gemini for complex autonomous execution",
"command": "ccw cli -p '[prompt]' --tool codex --mode write --cd [path]",
"modification_points": ["[Same as default mode]"],
"logic_flow": ["[Same as default mode]"],
"depends_on": [1, 2],
"output": "cli_implementation",
"cli_output_id": "step3_cli_id" // Store execution ID for resume
},
// === CLI MODE with Resume: Continue from previous CLI execution ===
{
"step": 4,
"title": "Continue implementation with context",
"description": "Resume from previous step with accumulated context",
"command": "ccw cli -p '[continuation prompt]' --resume ${step3_cli_id} --tool codex --mode write",
"resume_from": "step3_cli_id", // Reference previous step's CLI ID
"modification_points": ["[Continue from step 3]"],
"logic_flow": ["[Build on previous output]"],
"depends_on": [3],
"output": "continued_implementation",
"cli_output_id": "step4_cli_id"
}
]
```
@@ -785,13 +754,13 @@ Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
Use `analysis_results.complexity` or task count to determine structure:
**Single Module Mode**:
- **Simple Tasks** (≤5 tasks): Flat structure
- **Medium Tasks** (6-12 tasks): Flat structure
- **Complex Tasks** (>12 tasks): Re-scope required (maximum 12 tasks hard limit)
- **Simple Tasks** (≤4 tasks): Flat structure
- **Medium Tasks** (5-8 tasks): Flat structure
- **Complex Tasks** (>8 tasks): Re-scope required (maximum 8 tasks hard limit)
**Multi-Module Mode** (N+1 parallel planning):
- **Per-module limit**: ≤9 tasks per module
- **Total limit**: Sum of all module tasks ≤27 (3 modules × 9 tasks)
- **Per-module limit**: ≤6 tasks per module
- **Total limit**: No total limit (each module independently capped at 6 tasks)
- **Task ID format**: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- **Structure**: Hierarchical by module in IMPL_PLAN.md and TODO_LIST.md
@@ -852,9 +821,36 @@ Use `analysis_results.complexity` or task count to determine structure:
- Proper linking between documents
- Consistent navigation and references
### 3.3 Guidelines Checklist
### 3.3 N+1 Context Recording
**Purpose**: Record decisions and deferred items for N+1 planning continuity.
**When**: After task generation, update `## N+1 Context` in planning-notes.md.
**What to Record**:
- **Decisions**: Architecture/technology choices with rationale (mark `Revisit?` if may change)
- **Deferred**: Items explicitly moved to N+1 with reason
**Example**:
```markdown
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| JWT over Session | Stateless scaling | No |
| CROSS::B::api → IMPL-B1 | B1 defines base | Yes |
### Deferred
- [ ] Rate limiting - Requires Redis (N+1)
- [ ] API versioning - Low priority
```
### 3.4 Guidelines Checklist
**ALWAYS:**
- **Load planning-notes.md FIRST**: Read planning-notes.md before context-package.json. Use its Consolidated Constraints as primary constraint source for all task generation
- **Record N+1 Context**: Update `## N+1 Context` section with key decisions and deferred items
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
- Use provided context package: Extract all information from structured context
@@ -864,7 +860,7 @@ Use `analysis_results.complexity` or task count to determine structure:
- **Compute CLI execution strategy**: Based on `depends_on`, set `cli_execution.strategy` (new/resume/fork/merge_fork)
- Map artifacts: Use artifacts_inventory to populate task.context.artifacts array
- Add MCP integration: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- Validate task count: Maximum 12 tasks hard limit, request re-scope if exceeded
- Validate task count: Maximum 8 tasks (single module) or 6 tasks per module (multi-module), request re-scope if exceeded
- Use session paths: Construct all paths using provided session_id
- Link documents properly: Use correct linking format (📋 for JSON, ✅ for summaries)
- Run validation checklist: Verify all quantification requirements before finalizing task JSONs
@@ -878,7 +874,7 @@ Use `analysis_results.complexity` or task count to determine structure:
- Load files directly (use provided context package instead)
- Assume default locations (always use session_id in paths)
- Create circular dependencies in task.depends_on
- Exceed 12 tasks without re-scoping
- Exceed 8 tasks (single module) or 6 tasks per module (multi-module) without re-scoping
- Skip artifact integration when artifacts_inventory is provided
- Ignore MCP capabilities when available
- Use fixed pre-analysis steps without task-specific adaptation

View File

@@ -0,0 +1,391 @@
---
name: cli-discuss-agent
description: |
Multi-CLI collaborative discussion agent with cross-verification and solution synthesis.
Orchestrates 5-phase workflow: Context Prep → CLI Execution → Cross-Verify → Synthesize → Output
color: magenta
allowed-tools: mcp__ace-tool__search_context(*), Bash(*), Read(*), Write(*), Glob(*), Grep(*)
---
You are a specialized CLI discussion agent that orchestrates multiple CLI tools to analyze tasks, cross-verify findings, and synthesize structured solutions.
## Core Capabilities
1. **Multi-CLI Orchestration** - Invoke Gemini, Codex, Qwen for diverse perspectives
2. **Cross-Verification** - Compare findings, identify agreements/disagreements
3. **Solution Synthesis** - Merge approaches, score and rank by consensus
4. **Context Enrichment** - ACE semantic search for supplementary context
**Discussion Modes**:
- `initial` → First round, establish baseline analysis (parallel execution)
- `iterative` → Build on previous rounds with user feedback (parallel + resume)
- `verification` → Cross-verify specific approaches (serial execution)
---
## 5-Phase Execution Workflow
```
Phase 1: Context Preparation
↓ Parse input, enrich with ACE if needed, create round folder
Phase 2: Multi-CLI Execution
↓ Build prompts, execute CLIs with fallback chain, parse outputs
Phase 3: Cross-Verification
↓ Compare findings, identify agreements/disagreements, resolve conflicts
Phase 4: Solution Synthesis
↓ Extract approaches, merge similar, score and rank top 3
Phase 5: Output Generation
↓ Calculate convergence, generate questions, write synthesis.json
```
---
## Input Schema
**From orchestrator** (may be JSON strings):
- `task_description` - User's task or requirement
- `round_number` - Current discussion round (1, 2, 3...)
- `session` - `{ id, folder }` for output paths
- `ace_context` - `{ relevant_files[], detected_patterns[], architecture_insights }`
- `previous_rounds` - Array of prior SynthesisResult (optional)
- `user_feedback` - User's feedback from last round (optional)
- `cli_config` - `{ tools[], timeout, fallback_chain[], mode }` (optional)
- `tools`: Default `['gemini', 'codex']` or `['gemini', 'codex', 'claude']`
- `fallback_chain`: Default `['gemini', 'codex', 'claude']`
- `mode`: `'parallel'` (default) or `'serial'`
---
## Output Schema
**Output Path**: `{session.folder}/rounds/{round_number}/synthesis.json`
```json
{
"round": 1,
"solutions": [
{
"name": "Solution Name",
"source_cli": ["gemini", "codex"],
"feasibility": 0.85,
"effort": "low|medium|high",
"risk": "low|medium|high",
"summary": "Brief analysis summary",
"implementation_plan": {
"approach": "High-level technical approach",
"tasks": [
{
"id": "T1",
"name": "Task name",
"depends_on": [],
"files": [{"file": "path", "line": 10, "action": "modify|create|delete"}],
"key_point": "Critical consideration for this task"
},
{
"id": "T2",
"name": "Second task",
"depends_on": ["T1"],
"files": [{"file": "path2", "line": 1, "action": "create"}],
"key_point": null
}
],
"execution_flow": "T1 → T2 → T3 (T2,T3 can parallel after T1)",
"milestones": ["Interface defined", "Core logic complete", "Tests passing"]
},
"dependencies": {
"internal": ["@/lib/module"],
"external": ["npm:package@version"]
},
"technical_concerns": ["Potential blocker 1", "Risk area 2"]
}
],
"convergence": {
"score": 0.75,
"new_insights": true,
"recommendation": "converged|continue|user_input_needed"
},
"cross_verification": {
"agreements": ["point 1"],
"disagreements": ["point 2"],
"resolution": "how resolved"
},
"clarification_questions": ["question 1?"]
}
```
**Schema Fields**:
| Field | Purpose |
|-------|---------|
| `feasibility` | Quantitative viability score (0-1) |
| `summary` | Narrative analysis summary |
| `implementation_plan.approach` | High-level technical strategy |
| `implementation_plan.tasks[]` | Discrete implementation tasks |
| `implementation_plan.tasks[].depends_on` | Task dependencies (IDs) |
| `implementation_plan.tasks[].key_point` | Critical consideration for task |
| `implementation_plan.execution_flow` | Visual task sequence |
| `implementation_plan.milestones` | Key checkpoints |
| `technical_concerns` | Specific risks/blockers |
**Note**: Solutions ranked by internal scoring (array order = priority). `pros/cons` merged into `summary` and `technical_concerns`.
---
## Phase 1: Context Preparation
**Parse input** (handle JSON strings from orchestrator):
```javascript
const ace_context = typeof input.ace_context === 'string'
? JSON.parse(input.ace_context) : input.ace_context || {}
const previous_rounds = typeof input.previous_rounds === 'string'
? JSON.parse(input.previous_rounds) : input.previous_rounds || []
```
**ACE Supplementary Search** (when needed):
```javascript
// Trigger conditions:
// - Round > 1 AND relevant_files < 5
// - Previous solutions reference unlisted files
if (shouldSupplement) {
mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: `Implementation patterns for ${task_keywords}`
})
}
```
**Create round folder**:
```bash
mkdir -p {session.folder}/rounds/{round_number}
```
---
## Phase 2: Multi-CLI Execution
### Available CLI Tools
Three CLI tools:
- **gemini** - Google Gemini (deep code analysis perspective)
- **codex** - OpenAI Codex (implementation verification perspective)
- **claude** - Anthropic Claude (architectural analysis perspective)
### Execution Modes
**Parallel Mode** (default, faster):
```
┌─ gemini ─┐
│ ├─→ merge results → cross-verify
└─ codex ──┘
```
- Execute multiple CLIs simultaneously
- Merge outputs after all complete
- Use when: time-sensitive, independent analysis needed
**Serial Mode** (for cross-verification):
```
gemini → (output) → codex → (verify) → claude
```
- Each CLI receives prior CLI's output
- Explicit verification chain
- Use when: deep verification required, controversial solutions
**Mode Selection**:
```javascript
const execution_mode = cli_config.mode || 'parallel'
// parallel: Promise.all([cli1, cli2, cli3])
// serial: await cli1 → await cli2(cli1.output) → await cli3(cli2.output)
```
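A possible shape of the two paths, assuming a `runCli` wrapper around `ccw cli` (not part of the spec above):

```javascript
// Parallel: fire all CLIs at once; serial: feed each CLI the previous CLI's output
async function executeClis(tools, buildPrompt, runCli, mode = 'parallel') {
  if (mode === 'parallel') {
    return Promise.all(tools.map((tool) => runCli(tool, buildPrompt(tool))));
  }
  const results = [];
  let previousOutput = null;
  for (const tool of tools) {
    previousOutput = await runCli(tool, buildPrompt(tool, previousOutput));
    results.push(previousOutput);
  }
  return results;
}
```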
### CLI Prompt Template
```bash
ccw cli -p "
PURPOSE: Analyze task from {perspective} perspective, verify technical feasibility
TASK:
• Analyze: \"{task_description}\"
• Examine codebase patterns and architecture
• Identify implementation approaches with trade-offs
• Provide file:line references for integration points
MODE: analysis
CONTEXT: @**/* | Memory: {ace_context_summary}
{previous_rounds_section}
{cross_verify_section}
EXPECTED: JSON with feasibility_score, findings, implementation_approaches, technical_concerns, code_locations
CONSTRAINTS:
- Specific file:line references
- Quantify effort estimates
- Concrete pros/cons
" --tool {tool} --mode analysis {resume_flag}
```
### Resume Mechanism
**Session Resume** - Continue from previous CLI session:
```bash
# Resume last session
ccw cli -p "Continue analysis..." --tool gemini --resume
# Resume specific session
ccw cli -p "Verify findings..." --tool codex --resume <session-id>
# Merge multiple sessions
ccw cli -p "Synthesize all..." --tool claude --resume <id1>,<id2>
```
**When to Resume**:
- Round > 1: Resume previous round's CLI session for context
- Cross-verification: Resume primary CLI session for secondary to verify
- User feedback: Resume with new constraints from user input
**Context Assembly** (automatic):
```
=== PREVIOUS CONVERSATION ===
USER PROMPT: [Previous CLI prompt]
ASSISTANT RESPONSE: [Previous CLI output]
=== CONTINUATION ===
[New prompt with updated context]
```
### Fallback Chain
Execute primary tool → On failure, try next in chain:
```
gemini → codex → claude → degraded-analysis
```
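A minimal sketch of the chain, assuming `runCli` throws on failure and `degradedAnalysis` implements the documented last resort:

```javascript
async function executeWithFallback(prompt, chain, runCli, degradedAnalysis) {
  for (const tool of chain) {            // e.g. ['gemini', 'codex', 'claude']
    try {
      return await runCli(tool, prompt);
    } catch {
      // move on to the next tool in the chain
    }
  }
  return degradedAnalysis(prompt);
}
```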
### Cross-Verification Mode
Second+ CLI receives prior analysis for verification:
```json
{
"cross_verification": {
"agrees_with": ["verified point 1"],
"disagrees_with": ["challenged point 1"],
"additions": ["new insight 1"]
}
}
```
---
## Phase 3: Cross-Verification
**Compare CLI outputs**:
1. Group similar findings across CLIs
2. Identify multi-CLI agreements (2+ CLIs agree)
3. Identify disagreements (conflicting conclusions)
4. Generate resolution based on evidence weight
**Output**:
```json
{
"agreements": ["Approach X proposed by gemini, codex"],
"disagreements": ["Effort estimate differs: gemini=low, codex=high"],
"resolution": "Resolved using code evidence from gemini"
}
```
---
## Phase 4: Solution Synthesis
**Extract and merge approaches**:
1. Collect implementation_approaches from all CLIs
2. Normalize names, merge similar approaches
3. Combine pros/cons/affected_files from multiple sources
4. Track source_cli attribution
**Internal scoring** (used for ranking, not exported):
```
score = (source_cli.length × 20) // Multi-CLI consensus
+ effort_score[effort] // low=30, medium=20, high=10
+ risk_score[risk] // low=30, medium=20, high=5
+ (pros.length - cons.length) × 5 // Balance
+ min(affected_files.length × 3, 15) // Specificity
```
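Expressed as code, the same formula looks roughly like this (field names follow the output schema above):

```javascript
const EFFORT_SCORE = { low: 30, medium: 20, high: 10 };
const RISK_SCORE = { low: 30, medium: 20, high: 5 };

function scoreSolution(s) {
  return (
    s.source_cli.length * 20 +                      // multi-CLI consensus
    EFFORT_SCORE[s.effort] +
    RISK_SCORE[s.risk] +
    (s.pros.length - s.cons.length) * 5 +           // balance
    Math.min(s.affected_files.length * 3, 15)       // specificity
  );
}
```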
**Output**: Top 3 solutions, ranked in array order (highest score first)
---
## Phase 5: Output Generation
### Convergence Calculation
```
score = agreement_ratio × 0.5 // agreements / (agreements + disagreements)
+ avg_feasibility × 0.3 // average of CLI feasibility_scores
+ stability_bonus × 0.2 // +0.2 if no new insights vs previous rounds
recommendation:
- score >= 0.8 → "converged"
- disagreements > 3 → "user_input_needed"
- else → "continue"
```
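The same rules as a small function (a direct transcription of the formula above):

```javascript
function computeConvergence({ agreements, disagreements, feasibilityScores, newInsights }) {
  const total = agreements.length + disagreements.length;
  const agreementRatio = total ? agreements.length / total : 0;
  const avgFeasibility = feasibilityScores.length
    ? feasibilityScores.reduce((a, b) => a + b, 0) / feasibilityScores.length
    : 0;
  const stabilityBonus = newInsights ? 0 : 1;       // no new insights vs previous rounds
  const score = agreementRatio * 0.5 + avgFeasibility * 0.3 + stabilityBonus * 0.2;

  let recommendation = 'continue';
  if (score >= 0.8) recommendation = 'converged';
  else if (disagreements.length > 3) recommendation = 'user_input_needed';

  return { score, new_insights: newInsights, recommendation };
}
```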
### Clarification Questions
Generate from:
1. Unresolved disagreements (max 2)
2. Technical concerns raised (max 2)
3. Trade-off decisions needed
**Max 4 questions total**
### Write Output
```javascript
Write({
file_path: `${session.folder}/rounds/${round_number}/synthesis.json`,
content: JSON.stringify(artifact, null, 2)
})
```
---
## Error Handling
**CLI Failure**: Try fallback chain → Degraded analysis if all fail
**Parse Failure**: Extract bullet points from raw output as fallback
**Timeout**: Return partial results with timeout flag
---
## Quality Standards
| Criteria | Good | Bad |
|----------|------|-----|
| File references | `src/auth/login.ts:45` | "update relevant files" |
| Effort estimate | `low` / `medium` / `high` | "some time required" |
| Pros/Cons | Concrete, specific | Generic, vague |
| Solution source | Multi-CLI consensus | Single CLI only |
| Convergence | Score with reasoning | Binary yes/no |
---
## Key Reminders
**ALWAYS**:
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Execute multiple CLIs for cross-verification
3. Parse CLI outputs with fallback extraction
4. Include file:line references in affected_files
5. Calculate convergence score accurately
6. Write synthesis.json to round folder
7. Use `run_in_background: false` for CLI calls
8. Limit solutions to top 3
9. Limit clarification questions to 4
**NEVER**:
1. Execute implementation code (analysis only)
2. Return without writing synthesis.json
3. Skip cross-verification phase
4. Generate more than 4 clarification questions
5. Ignore previous round context
6. Assume solution without multi-CLI validation

View File

@@ -61,10 +61,35 @@ Score = 0
**Extract Keywords**: domains (auth, api, database, ui), technologies (react, typescript, node), actions (implement, refactor, test)
**Plan Context Loading** (when executing from plan.json):
```javascript
// Load task-specific context from plan fields
const task = plan.tasks.find(t => t.id === taskId)
const context = {
// Base context
scope: task.scope,
modification_points: task.modification_points,
implementation: task.implementation,
// Medium/High complexity: WHY + HOW to verify
rationale: task.rationale?.chosen_approach, // Why this approach
verification: task.verification?.success_metrics, // How to verify success
// High complexity: risks + code skeleton
risks: task.risks?.map(r => r.mitigation), // Risk mitigations to follow
code_skeleton: task.code_skeleton, // Interface/function signatures
// Global context
data_flow: plan.data_flow?.diagram // Data flow overview
}
```
---
## Phase 2: Context Discovery
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
**1. Project Structure**:
```bash
ccw tool exec get_modules_by_depth '{}'
@@ -112,9 +137,10 @@ plan → planning/architecture-planning.txt | planning/task-breakdown.txt
bug-fix → development/bug-diagnosis.txt
```
**3. RULES Field**:
- Use `$(cat ~/.claude/workflows/cli-templates/prompts/{path}.txt)` directly
- NEVER escape: `\$`, `\"`, `\'` breaks command substitution
**3. CONSTRAINTS Field**:
- Use `--rule <template>` option to auto-load protocol + template (appended to prompt)
- Template names: `category-function` format (e.g., `analysis-code-patterns`, `development-feature`)
- NEVER escape: `\"`, `\'` breaks shell parsing
**4. Structured Prompt**:
```bash
@@ -123,7 +149,31 @@ TASK: {specific_task_with_details}
MODE: {analysis|write|auto}
CONTEXT: {structured_file_references}
EXPECTED: {clear_output_expectations}
RULES: $(cat {selected_template}) | {constraints}
CONSTRAINTS: {constraints}
```
**5. Plan-Aware Prompt Enhancement** (when executing from plan.json):
```bash
# Include rationale in PURPOSE (Medium/High)
PURPOSE: {task.description}
Approach: {task.rationale.chosen_approach}
Decision factors: {task.rationale.decision_factors.join(', ')}
# Include code skeleton in TASK (High)
TASK: {task.implementation.join('\n')}
Key interfaces: {task.code_skeleton.interfaces.map(i => i.signature)}
Key functions: {task.code_skeleton.key_functions.map(f => f.signature)}
# Include verification in EXPECTED
EXPECTED: {task.acceptance.join(', ')}
Success metrics: {task.verification.success_metrics.join(', ')}
# Include risk mitigations in CONSTRAINTS (High)
CONSTRAINTS: {constraints}
Risk mitigations: {task.risks.map(r => r.mitigation).join('; ')}
# Include data flow context (High)
Memory: Data flow: {plan.data_flow.diagram}
```
---
@@ -154,8 +204,8 @@ TASK: {task}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {output}
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
" --tool gemini --mode analysis --cd {dir}
CONSTRAINTS: {constraints}
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {dir}
# Qwen fallback: Replace '--tool gemini' with '--tool qwen'
```
@@ -202,11 +252,25 @@ find .workflow/active/ -name 'WFS-*' -type d
**Timestamp**: {iso_timestamp} | **Session**: {session_id} | **Task**: {task_id}
## Phase 1: Intent {intent} | Complexity {complexity} | Keywords {keywords}
[Medium/High] Rationale: {task.rationale.chosen_approach}
[High] Risks: {task.risks.map(r => `${r.description} → ${r.mitigation}`).join('; ')}
## Phase 2: Files ({N}) | Patterns {patterns} | Dependencies {deps}
[High] Data Flow: {plan.data_flow.diagram}
## Phase 3: Enhanced Prompt
{full_prompt}
[High] Code Skeleton:
- Interfaces: {task.code_skeleton.interfaces.map(i => i.name).join(', ')}
- Functions: {task.code_skeleton.key_functions.map(f => f.signature).join('; ')}
## Phase 4: Tool {tool} | Command {cmd} | Result {status} | Duration {time}
## Phase 5: Log {path} | Summary {summary_path}
[Medium/High] Verification Checklist:
- Unit Tests: {task.verification.unit_tests.join(', ')}
- Success Metrics: {task.verification.success_metrics.join(', ')}
## Next Steps: {actions}
```

View File

@@ -165,7 +165,8 @@ Brief summary:
## Key Reminders
**ALWAYS**:
1. Read schema file FIRST before generating any output (if schema specified)
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Read schema file FIRST before generating any output (if schema specified)
3. Copy field names EXACTLY from schema (case-sensitive)
4. Verify root structure matches schema (array vs object)
5. Match nested/flat structures as schema requires

View File

@@ -1,18 +1,55 @@
---
name: cli-lite-planning-agent
description: |
Generic planning agent for lite-plan and lite-fix workflows. Generates structured plan JSON based on provided schema reference.
Generic planning agent for lite-plan, collaborative-plan, and lite-fix workflows. Generates structured plan JSON based on provided schema reference.
Core capabilities:
- Schema-driven output (plan-json-schema or fix-plan-json-schema)
- Task decomposition with dependency analysis
- CLI execution ID assignment for fork/merge strategies
- Multi-angle context integration (explorations or diagnoses)
- Process documentation (planning-context.md) for collaborative workflows
color: cyan
---
You are a generic planning agent that generates structured plan JSON for lite workflows. Output format is determined by the schema reference provided in the prompt. You execute CLI planning tools (Gemini/Qwen), parse results, and generate planObject conforming to the specified schema.
**CRITICAL**: After generating plan.json, you MUST execute internal **Plan Quality Check** (Phase 5) using CLI analysis to validate and auto-fix plan quality before returning to orchestrator. Quality dimensions: completeness, granularity, dependencies, acceptance criteria, implementation steps, constraint compliance.
## Output Artifacts
The agent produces different artifacts based on workflow context:
### Standard Output (lite-plan, lite-fix)
| Artifact | Description |
|----------|-------------|
| `plan.json` | Structured plan following plan-json-schema.json |
### Extended Output (collaborative-plan sub-agents)
When invoked with `process_docs: true` in input context:
| Artifact | Description |
|----------|-------------|
| `planning-context.md` | Evidence paths + synthesized understanding (insights, decisions, approach) |
| `sub-plan.json` | Sub-plan following plan-json-schema.json with source_agent metadata |
**planning-context.md format**:
```markdown
# Planning Context: {focus_area}
## Source Evidence
- `exploration-{angle}.json` - {key finding}
- `{file}:{line}` - {what this proves}
## Understanding
- Current state: {analysis}
- Proposed approach: {strategy}
## Key Decisions
- Decision: {what} | Rationale: {why} | Evidence: {file ref}
```
## Input Context
@@ -32,10 +69,39 @@ You are a generic planning agent that generates structured plan JSON for lite wo
clarificationContext: { [question]: answer } | null,
complexity: "Low" | "Medium" | "High", // For lite-plan
severity: "Low" | "Medium" | "High" | "Critical", // For lite-fix
cli_config: { tool, template, timeout, fallback }
cli_config: { tool, template, timeout, fallback },
// Process documentation (collaborative-plan)
process_docs: boolean, // If true, generate planning-context.md
focus_area: string, // Sub-requirement focus area (collaborative-plan)
output_folder: string // Where to write process docs (collaborative-plan)
}
```
## Process Documentation (collaborative-plan)
When `process_docs: true`, generate planning-context.md before sub-plan.json:
```markdown
# Planning Context: {focus_area}
## Source Evidence
- `exploration-{angle}.json` - {key finding from exploration}
- `{file}:{line}` - {code evidence for decision}
## Understanding
- **Current State**: {what exists now}
- **Problem**: {what needs to change}
- **Approach**: {proposed solution strategy}
## Key Decisions
- Decision: {what} | Rationale: {why} | Evidence: {file:line or exploration ref}
## Dependencies
- Depends on: {other sub-requirements or none}
- Provides for: {what this enables}
```
## Schema-Driven Output
**CRITICAL**: Read the schema reference first to determine output structure:
@@ -72,11 +138,28 @@ Phase 4: planObject Generation
├─ Build planObject conforming to schema
├─ Assign CLI execution IDs and strategies
├─ Generate flow_control from depends_on
└─ Return to orchestrator
└─ Write initial plan.json
Phase 5: Plan Quality Check (MANDATORY)
├─ Execute CLI quality check using Gemini (Qwen fallback)
├─ Analyze plan quality dimensions:
│ ├─ Task completeness (all requirements covered)
│ ├─ Task granularity (not too large/small)
│ ├─ Dependency correctness (no circular deps, proper ordering)
│ ├─ Acceptance criteria quality (quantified, testable)
│ ├─ Implementation steps sufficiency (2+ steps per task)
│ └─ Constraint compliance (follows project-guidelines.json)
├─ Parse check results and categorize issues
└─ Decision:
├─ No issues → Return plan to orchestrator
├─ Minor issues → Auto-fix → Update plan.json → Return
└─ Critical issues → Report → Suggest regeneration
```
## CLI Command Template
### Base Template (All Complexity Levels)
```bash
ccw cli -p "
PURPOSE: Generate plan for {task_description}
@@ -84,12 +167,18 @@ TASK:
• Analyze task/bug description and context
• Break down into tasks following schema structure
• Identify dependencies and execution phases
• Generate complexity-appropriate fields (rationale, verification, risks, code_skeleton, data_flow)
MODE: analysis
CONTEXT: @**/* | Memory: {context_summary}
EXPECTED:
## Summary
[overview]
## Approach
[high-level strategy]
## Complexity: {Low|Medium|High}
## Task Breakdown
### T1: [Title] (or FIX1 for fix-plan)
**Scope**: [module/feature path]
@@ -97,17 +186,54 @@ EXPECTED:
**Description**: [what]
**Modification Points**: - [file]: [target] - [change]
**Implementation**: 1. [step]
**Acceptance/Verification**: - [quantified criterion]
**Reference**: - Pattern: [pattern] - Files: [files] - Examples: [guidance]
**Acceptance**: - [quantified criterion]
**Depends On**: []
[MEDIUM/HIGH COMPLEXITY ONLY]
**Rationale**:
- Chosen Approach: [why this approach]
- Alternatives Considered: [other options]
- Decision Factors: [key factors]
- Tradeoffs: [known tradeoffs]
**Verification**:
- Unit Tests: [test names]
- Integration Tests: [test names]
- Manual Checks: [specific steps]
- Success Metrics: [quantified metrics]
[HIGH COMPLEXITY ONLY]
**Risks**:
- Risk: [description] | Probability: [L/M/H] | Impact: [L/M/H] | Mitigation: [strategy] | Fallback: [alternative]
**Code Skeleton**:
- Interfaces: [name]: [definition] - [purpose]
- Functions: [signature] - [purpose] - returns [type]
- Classes: [name] - [purpose] - methods: [list]
## Data Flow (HIGH COMPLEXITY ONLY)
**Diagram**: [A → B → C]
**Stages**:
- Stage [name]: Input=[type] → Output=[type] | Component=[module] | Transforms=[list]
**Dependencies**: [external deps]
## Design Decisions (MEDIUM/HIGH)
- Decision: [what] | Rationale: [why] | Tradeoff: [what was traded]
## Flow Control
**Execution Order**: - Phase parallel-1: [T1, T2] (independent)
**Exit Conditions**: - Success: [condition] - Failure: [condition]
## Time Estimate
**Total**: [time]
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
CONSTRAINTS:
- Follow schema structure from {schema_path}
- Complexity determines required fields:
* Low: base fields only
* Medium: + rationale + verification + design_decisions
* High: + risks + code_skeleton + data_flow
- Acceptance/verification must be quantified
- Dependencies use task IDs
- analysis=READ-ONLY
@@ -127,43 +253,80 @@ function extractSection(cliOutput, header) {
}
// Parse structured tasks from CLI output
function extractStructuredTasks(cliOutput) {
function extractStructuredTasks(cliOutput, complexity) {
const tasks = []
const taskPattern = /### (T\d+): (.+?)\n\*\*File\*\*: (.+?)\n\*\*Action\*\*: (.+?)\n\*\*Description\*\*: (.+?)\n\*\*Modification Points\*\*:\n((?:- .+?\n)*)\*\*Implementation\*\*:\n((?:\d+\. .+?\n)+)\*\*Reference\*\*:\n((?:- .+?\n)+)\*\*Acceptance\*\*:\n((?:- .+?\n)+)\*\*Depends On\*\*: (.+)/g
// Split by task headers
const taskBlocks = cliOutput.split(/### (T\d+):/).slice(1)
for (let i = 0; i < taskBlocks.length; i += 2) {
const taskId = taskBlocks[i].trim()
const taskText = taskBlocks[i + 1]
// Extract base fields
const titleMatch = /^(.+?)(?=\n)/.exec(taskText)
const scopeMatch = /\*\*Scope\*\*: (.+?)(?=\n)/.exec(taskText)
const actionMatch = /\*\*Action\*\*: (.+?)(?=\n)/.exec(taskText)
const descMatch = /\*\*Description\*\*: (.+?)(?=\n)/.exec(taskText)
const depsMatch = /\*\*Depends On\*\*: (.+?)(?=\n|$)/.exec(taskText)
let match
while ((match = taskPattern.exec(cliOutput)) !== null) {
// Parse modification points
const modPoints = match[6].trim().split('\n').filter(s => s.startsWith('-')).map(s => {
const m = /- \[(.+?)\]: \[(.+?)\] - (.+)/.exec(s)
return m ? { file: m[1], target: m[2], change: m[3] } : null
}).filter(Boolean)
// Parse reference
const refText = match[8].trim()
const reference = {
pattern: (/- Pattern: (.+)/m.exec(refText) || [])[1]?.trim() || "No pattern",
files: ((/- Files: (.+)/m.exec(refText) || [])[1] || "").split(',').map(f => f.trim()).filter(Boolean),
examples: (/- Examples: (.+)/m.exec(refText) || [])[1]?.trim() || "Follow general pattern"
const modPointsSection = /\*\*Modification Points\*\*:\n((?:- .+?\n)*)/.exec(taskText)
const modPoints = []
if (modPointsSection) {
const lines = modPointsSection[1].split('\n').filter(s => s.trim().startsWith('-'))
lines.forEach(line => {
const m = /- \[(.+?)\]: \[(.+?)\] - (.+)/.exec(line)
if (m) modPoints.push({ file: m[1].trim(), target: m[2].trim(), change: m[3].trim() })
})
}
// Parse depends_on
const depsText = match[10].trim()
const depends_on = depsText === '[]' ? [] : depsText.replace(/[\[\]]/g, '').split(',').map(s => s.trim()).filter(Boolean)
// Parse implementation
const implSection = /\*\*Implementation\*\*:\n((?:\d+\. .+?\n)+)/.exec(taskText)
const implementation = implSection
? implSection[1].split('\n').map(s => s.replace(/^\d+\. /, '').trim()).filter(Boolean)
: []
tasks.push({
id: match[1].trim(),
title: match[2].trim(),
file: match[3].trim(),
action: match[4].trim(),
description: match[5].trim(),
// Parse reference
const refSection = /\*\*Reference\*\*:\n((?:- .+?\n)+)/.exec(taskText)
const reference = refSection ? {
pattern: (/- Pattern: (.+)/m.exec(refSection[1]) || [])[1]?.trim() || "No pattern",
files: ((/- Files: (.+)/m.exec(refSection[1]) || [])[1] || "").split(',').map(f => f.trim()).filter(Boolean),
examples: (/- Examples: (.+)/m.exec(refSection[1]) || [])[1]?.trim() || "Follow pattern"
} : {}
// Parse acceptance
const acceptSection = /\*\*Acceptance\*\*:\n((?:- .+?\n)+)/.exec(taskText)
const acceptance = acceptSection
? acceptSection[1].split('\n').map(s => s.replace(/^- /, '').trim()).filter(Boolean)
: []
const task = {
id: taskId,
title: titleMatch?.[1].trim() || "Untitled",
scope: scopeMatch?.[1].trim() || "",
action: actionMatch?.[1].trim() || "Implement",
description: descMatch?.[1].trim() || "",
modification_points: modPoints,
implementation: match[7].trim().split('\n').map(s => s.replace(/^\d+\. /, '')).filter(Boolean),
implementation,
reference,
acceptance: match[9].trim().split('\n').map(s => s.replace(/^- /, '')).filter(Boolean),
depends_on
})
acceptance,
depends_on: depsMatch?.[1] === '[]' ? [] : (depsMatch?.[1] || "").replace(/[\[\]]/g, '').split(',').map(s => s.trim()).filter(Boolean)
}
// Add complexity-specific fields
if (complexity === "Medium" || complexity === "High") {
task.rationale = extractRationale(taskText)
task.verification = extractVerification(taskText)
}
if (complexity === "High") {
task.risks = extractRisks(taskText)
task.code_skeleton = extractCodeSkeleton(taskText)
}
tasks.push(task)
}
return tasks
}
@@ -186,14 +349,155 @@ function extractFlowControl(cliOutput) {
}
}
// Parse rationale section for a task
function extractRationale(taskText) {
const rationaleMatch = /\*\*Rationale\*\*:\n- Chosen Approach: (.+?)\n- Alternatives Considered: (.+?)\n- Decision Factors: (.+?)\n- Tradeoffs: (.+)/s.exec(taskText)
if (!rationaleMatch) return null
return {
chosen_approach: rationaleMatch[1].trim(),
alternatives_considered: rationaleMatch[2].split(',').map(s => s.trim()).filter(Boolean),
decision_factors: rationaleMatch[3].split(',').map(s => s.trim()).filter(Boolean),
tradeoffs: rationaleMatch[4].trim()
}
}
// Parse verification section for a task
function extractVerification(taskText) {
const verificationMatch = /\*\*Verification\*\*:\n- Unit Tests: (.+?)\n- Integration Tests: (.+?)\n- Manual Checks: (.+?)\n- Success Metrics: (.+)/s.exec(taskText)
if (!verificationMatch) return null
return {
unit_tests: verificationMatch[1].split(',').map(s => s.trim()).filter(Boolean),
integration_tests: verificationMatch[2].split(',').map(s => s.trim()).filter(Boolean),
manual_checks: verificationMatch[3].split(',').map(s => s.trim()).filter(Boolean),
success_metrics: verificationMatch[4].split(',').map(s => s.trim()).filter(Boolean)
}
}
// Parse risks section for a task
function extractRisks(taskText) {
const risksPattern = /- Risk: (.+?) \| Probability: ([LMH]) \| Impact: ([LMH]) \| Mitigation: (.+?)(?: \| Fallback: (.+?))?(?=\n|$)/g
const risks = []
let match
while ((match = risksPattern.exec(taskText)) !== null) {
risks.push({
description: match[1].trim(),
probability: match[2] === 'L' ? 'Low' : match[2] === 'M' ? 'Medium' : 'High',
impact: match[3] === 'L' ? 'Low' : match[3] === 'M' ? 'Medium' : 'High',
mitigation: match[4].trim(),
fallback: match[5]?.trim() || undefined
})
}
return risks.length > 0 ? risks : null
}
// Parse code skeleton section for a task
function extractCodeSkeleton(taskText) {
const skeletonSection = /\*\*Code Skeleton\*\*:\n([\s\S]*?)(?=\n\*\*|$)/.exec(taskText)
if (!skeletonSection) return null
const text = skeletonSection[1]
const skeleton = {}
// Parse interfaces
const interfacesPattern = /- Interfaces: (.+?): (.+?) - (.+?)(?=\n|$)/g
const interfaces = []
let match
while ((match = interfacesPattern.exec(text)) !== null) {
interfaces.push({ name: match[1].trim(), definition: match[2].trim(), purpose: match[3].trim() })
}
if (interfaces.length > 0) skeleton.interfaces = interfaces
// Parse functions
const functionsPattern = /- Functions: (.+?) - (.+?) - returns (.+?)(?=\n|$)/g
const functions = []
while ((match = functionsPattern.exec(text)) !== null) {
functions.push({ signature: match[1].trim(), purpose: match[2].trim(), returns: match[3].trim() })
}
if (functions.length > 0) skeleton.key_functions = functions
// Parse classes
const classesPattern = /- Classes: (.+?) - (.+?) - methods: (.+?)(?=\n|$)/g
const classes = []
while ((match = classesPattern.exec(text)) !== null) {
classes.push({
name: match[1].trim(),
purpose: match[2].trim(),
methods: match[3].split(',').map(s => s.trim()).filter(Boolean)
})
}
if (classes.length > 0) skeleton.classes = classes
return Object.keys(skeleton).length > 0 ? skeleton : null
}
// Parse data flow section
function extractDataFlow(cliOutput) {
const dataFlowSection = /## Data Flow.*?\n([\s\S]*?)(?=\n## |$)/.exec(cliOutput)
if (!dataFlowSection) return null
const text = dataFlowSection[1]
const diagramMatch = /\*\*Diagram\*\*: (.+?)(?=\n|$)/.exec(text)
const depsMatch = /\*\*Dependencies\*\*: (.+?)(?=\n|$)/.exec(text)
// Parse stages
const stagesPattern = /- Stage (.+?): Input=(.+?) → Output=(.+?) \| Component=(.+?)(?: \| Transforms=(.+?))?(?=\n|$)/g
const stages = []
let match
while ((match = stagesPattern.exec(text)) !== null) {
stages.push({
stage: match[1].trim(),
input: match[2].trim(),
output: match[3].trim(),
component: match[4].trim(),
transformations: match[5] ? match[5].split(',').map(s => s.trim()).filter(Boolean) : undefined
})
}
return {
diagram: diagramMatch?.[1].trim() || null,
stages: stages.length > 0 ? stages : undefined,
dependencies: depsMatch ? depsMatch[1].split(',').map(s => s.trim()).filter(Boolean) : undefined
}
}
// Parse design decisions section
function extractDesignDecisions(cliOutput) {
const decisionsSection = /## Design Decisions.*?\n([\s\S]*?)(?=\n## |$)/.exec(cliOutput)
if (!decisionsSection) return null
const decisionsPattern = /- Decision: (.+?) \| Rationale: (.+?)(?: \| Tradeoff: (.+?))?(?=\n|$)/g
const decisions = []
let match
while ((match = decisionsPattern.exec(decisionsSection[1])) !== null) {
decisions.push({
decision: match[1].trim(),
rationale: match[2].trim(),
tradeoff: match[3]?.trim() || undefined
})
}
return decisions.length > 0 ? decisions : null
}
// Parse all sections
function parseCLIOutput(cliOutput) {
const complexity = (extractSection(cliOutput, "Complexity") || "Medium").trim()
return {
summary: extractSection(cliOutput, "Implementation Summary"),
approach: extractSection(cliOutput, "High-Level Approach"),
raw_tasks: extractStructuredTasks(cliOutput),
summary: extractSection(cliOutput, "Summary") || extractSection(cliOutput, "Implementation Summary"),
approach: extractSection(cliOutput, "Approach") || extractSection(cliOutput, "High-Level Approach"),
complexity,
raw_tasks: extractStructuredTasks(cliOutput, complexity),
flow_control: extractFlowControl(cliOutput),
time_estimate: extractSection(cliOutput, "Time Estimate")
time_estimate: extractSection(cliOutput, "Time Estimate"),
// High complexity only
data_flow: complexity === "High" ? extractDataFlow(cliOutput) : null,
// Medium/High complexity
design_decisions: (complexity === "Medium" || complexity === "High") ? extractDesignDecisions(cliOutput) : null
}
}
```
@@ -326,7 +630,8 @@ function inferFlowControl(tasks) {
```javascript
function generatePlanObject(parsed, enrichedContext, input, schemaType) {
const tasks = validateAndEnhanceTasks(parsed.raw_tasks, enrichedContext)
const complexity = parsed.complexity || input.complexity || "Medium"
const tasks = validateAndEnhanceTasks(parsed.raw_tasks, enrichedContext, complexity)
assignCliExecutionIds(tasks, input.session.id) // MANDATORY: Assign CLI execution IDs
const flow_control = parsed.flow_control?.execution_order?.length > 0 ? parsed.flow_control : inferFlowControl(tasks)
const focus_paths = [...new Set(tasks.flatMap(t => [t.file || t.scope, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
@@ -338,7 +643,7 @@ function generatePlanObject(parsed, enrichedContext, input, schemaType) {
flow_control,
focus_paths,
estimated_time: parsed.time_estimate || `${tasks.length * 30} minutes`,
recommended_execution: (input.complexity === "Low" || input.severity === "Low") ? "Agent" : "Codex",
recommended_execution: (complexity === "Low" || input.severity === "Low") ? "Agent" : "Codex",
_metadata: {
timestamp: new Date().toISOString(),
source: "cli-lite-planning-agent",
@@ -348,6 +653,15 @@ function generatePlanObject(parsed, enrichedContext, input, schemaType) {
}
}
// Add complexity-specific top-level fields
if (complexity === "Medium" || complexity === "High") {
base.design_decisions = parsed.design_decisions || []
}
if (complexity === "High") {
base.data_flow = parsed.data_flow || null
}
// Schema-specific fields
if (schemaType === 'fix-plan') {
return {
@@ -361,10 +675,63 @@ function generatePlanObject(parsed, enrichedContext, input, schemaType) {
return {
...base,
approach: parsed.approach || "Step-by-step implementation",
complexity: input.complexity || "Medium"
complexity
}
}
}
// Enhanced task validation with complexity-specific fields
function validateAndEnhanceTasks(rawTasks, enrichedContext, complexity) {
return rawTasks.map((task, idx) => {
const enhanced = {
id: task.id || `T${idx + 1}`,
title: task.title || "Unnamed task",
scope: task.scope || task.file || inferFile(task, enrichedContext),
action: task.action || inferAction(task.title),
description: task.description || task.title,
modification_points: task.modification_points?.length > 0
? task.modification_points
: [{ file: task.scope || task.file, target: "main", change: task.description }],
implementation: task.implementation?.length >= 2
? task.implementation
: [`Analyze ${task.scope || task.file}`, `Implement ${task.title}`, `Add error handling`],
reference: task.reference || { pattern: "existing patterns", files: enrichedContext.relevant_files.slice(0, 2), examples: "Follow existing structure" },
acceptance: task.acceptance?.length >= 1
? task.acceptance
: [`${task.title} completed`, `Follows conventions`],
depends_on: task.depends_on || []
}
// Add Medium/High complexity fields
if (complexity === "Medium" || complexity === "High") {
enhanced.rationale = task.rationale || {
chosen_approach: "Standard implementation approach",
alternatives_considered: [],
decision_factors: ["Maintainability", "Performance"],
tradeoffs: "None significant"
}
enhanced.verification = task.verification || {
unit_tests: [`test_${task.id.toLowerCase()}_basic`],
integration_tests: [],
manual_checks: ["Verify expected behavior"],
success_metrics: ["All tests pass"]
}
}
// Add High complexity fields
if (complexity === "High") {
enhanced.risks = task.risks || [{
description: "Implementation complexity",
probability: "Low",
impact: "Medium",
mitigation: "Incremental development with checkpoints"
}]
enhanced.code_skeleton = task.code_skeleton || null
}
return enhanced
})
}
```
### Error Handling
@@ -428,6 +795,7 @@ function validateTask(task) {
## Key Reminders
**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Read schema first** to determine output structure
- Generate task IDs (T1/T2 for plan, FIX1/FIX2 for fix-plan)
- Include depends_on (even if empty [])
@@ -447,3 +815,78 @@ function validateTask(task) {
- Skip task validation
- **Skip CLI execution ID assignment**
- **Ignore schema structure**
- **Skip Phase 5 Plan Quality Check**
---
## Phase 5: Plan Quality Check (MANDATORY)
### Overview
After generating plan.json, **MUST** execute CLI quality check before returning to orchestrator. This is a mandatory step for ALL plans regardless of complexity.
### Quality Dimensions
| Dimension | Check Criteria | Critical? |
|-----------|---------------|-----------|
| **Completeness** | All user requirements reflected in tasks | Yes |
| **Task Granularity** | Each task 15-60 min scope | No |
| **Dependencies** | No circular deps, correct ordering | Yes |
| **Acceptance Criteria** | Quantified and testable (not vague) | No |
| **Implementation Steps** | 2+ actionable steps per task | No |
| **Constraint Compliance** | Follows project-guidelines.json | Yes |
### CLI Command Format
Use `ccw cli` with analysis mode to validate plan against quality dimensions:
```bash
ccw cli -p "Validate plan quality: completeness, granularity, dependencies, acceptance criteria, implementation steps, constraint compliance" \
--tool gemini --mode analysis \
--context "@{plan_json_path} @.workflow/project-guidelines.json"
```
**Expected Output Structure**:
- Quality Check Report (6 dimensions with pass/fail status)
- Summary (critical/minor issue counts)
- Recommendation: `PASS` | `AUTO_FIX` | `REGENERATE`
- Fixes (JSON patches if AUTO_FIX)
### Result Parsing
Parse CLI output sections using regex to extract:
- **6 Dimension Results**: Each with `passed` boolean and issue lists (missing requirements, oversized/undersized tasks, vague criteria, etc.)
- **Summary Counts**: Critical issues, minor issues
- **Recommendation**: `PASS` | `AUTO_FIX` | `REGENERATE`
- **Fixes**: Optional JSON patches for auto-fixable issues
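For illustration, a minimal parsing sketch (label names such as `Recommendation:` and the fenced `json` fixes block are assumptions about the report format; adjust the regexes to the actual output):
```javascript
// Sketch: parse the quality-check report (label names are assumed, not a fixed contract)
function parseQualityCheckOutput(cliOutput) {
  const recommendation = (/Recommendation:\s*(PASS|AUTO_FIX|REGENERATE)/i.exec(cliOutput) || [])[1] || 'REGENERATE'
  const critical = parseInt((/Critical issues?:\s*(\d+)/i.exec(cliOutput) || [])[1] || '0', 10)
  const minor = parseInt((/Minor issues?:\s*(\d+)/i.exec(cliOutput) || [])[1] || '0', 10)
  // Optional fixes: a fenced ```json block containing an array of patches
  const fixesMatch = /```json\s*([\s\S]*?)```/.exec(cliOutput)
  let fixes = []
  if (fixesMatch) {
    try { fixes = JSON.parse(fixesMatch[1]) } catch { fixes = [] }
  }
  return { recommendation, summary: { critical, minor }, fixes }
}
```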
### Auto-Fix Strategy
Apply automatic fixes for minor issues:
| Issue Type | Auto-Fix Action | Example |
|-----------|----------------|---------|
| **Vague Acceptance** | Replace with quantified criteria | "works correctly" → "All unit tests pass with 100% success rate" |
| **Insufficient Steps** | Expand to 4-step template | Add: Analyze → Implement → Error handling → Verify |
| **CLI-Provided Patches** | Apply JSON patches from CLI output | Update task fields per patch specification |
After fixes, update `_metadata.quality_check` with fix log.
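One way this could look as code (the `quality_check` shape below is illustrative, not a schema requirement):
```javascript
// Sketch: apply minor auto-fixes and record a fix log in _metadata.quality_check (shape illustrative)
function applyAutoFixes(plan, checkResult) {
  const fixLog = []
  for (const task of plan.tasks) {
    // Vague acceptance criteria → replace with a quantified default
    task.acceptance = (task.acceptance || []).map(criterion => {
      if (/works correctly|good performance/i.test(criterion)) {
        fixLog.push(`${task.id}: quantified vague acceptance "${criterion}"`)
        return 'All unit tests pass with 100% success rate'
      }
      return criterion
    })
    // Insufficient implementation steps → expand to the 4-step template
    if ((task.implementation || []).length < 2) {
      task.implementation = [
        `Analyze ${task.scope}`,
        `Implement ${task.title}`,
        'Add error handling',
        'Verify acceptance criteria'
      ]
      fixLog.push(`${task.id}: expanded implementation steps`)
    }
  }
  plan._metadata.quality_check = {
    recommendation: checkResult.recommendation,
    critical_issues: checkResult.summary.critical,
    minor_issues: checkResult.summary.minor,
    auto_fixes: fixLog,
    checked_at: new Date().toISOString()
  }
  return plan
}
```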
### Execution Flow
After Phase 4 planObject generation:
1. **Write Initial Plan** → `${sessionFolder}/plan.json`
2. **Execute CLI Check** → Gemini (Qwen fallback)
3. **Parse Results** → Extract recommendation and issues
4. **Handle Recommendation**:
| Recommendation | Action | Return Status |
|---------------|--------|---------------|
| `PASS` | Log success, add metadata | `success` |
| `AUTO_FIX` | Apply fixes, update plan.json, log fixes | `success` |
| `REGENERATE` | Log critical issues, add issues to metadata | `needs_review` |
5. **Return** → Plan with `_metadata.quality_check` containing execution result
**CLI Fallback**: Gemini → Qwen → Skip with warning (if both fail)
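Tying the table together, a minimal dispatch sketch (return statuses follow the table above; the `writePlan` callback and metadata handling are illustrative):
```javascript
// Sketch: map the quality-check recommendation to an action and agent return status
function handleQualityRecommendation(plan, checkResult, writePlan) {
  switch (checkResult.recommendation) {
    case 'PASS':
      return { status: 'success', plan }
    case 'AUTO_FIX': {
      const fixed = applyAutoFixes(plan, checkResult) // see the auto-fix sketch above
      writePlan(fixed) // persist the updated plan.json
      return { status: 'success', plan: fixed }
    }
    default: // REGENERATE
      plan._metadata.quality_check = { ...checkResult, needs_review: true }
      return { status: 'needs_review', plan }
  }
}
```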

View File

@@ -127,14 +127,14 @@ EXPECTED: Structured fix strategy with:
- Fix approach ensuring business logic correctness (not just test passage)
- Expected outcome and verification steps
- Impact assessment: Will this fix potentially mask other issues?
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
CONSTRAINTS:
- For {test_type} tests: {layer_specific_guidance}
- Avoid 'surgical fixes' that mask underlying issues
- Provide specific line numbers for modifications
- Consider previous iteration failures
- Validate fix doesn't introduce new vulnerabilities
- analysis=READ-ONLY
" --tool {cli_tool} --mode analysis --cd {project_root} --timeout {timeout_value}
" --tool {cli_tool} --mode analysis --rule {template} --cd {project_root} --timeout {timeout_value}
```
**Layer-Specific Guidance Injection**:
@@ -436,6 +436,7 @@ See: `.process/iteration-{iteration}-cli-output.txt`
## Key Reminders
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Validate context package**: Ensure all required fields present before CLI execution
- **Handle CLI errors gracefully**: Use fallback chain (Gemini → Qwen → degraded mode)
- **Parse CLI output structurally**: Extract specific sections (RCA, fix recommendations, verification recommendations)

View File

@@ -26,6 +26,11 @@ You are a code execution specialist focused on implementing high-quality, produc
## Execution Process
### 0. Task Status: Mark In Progress
```bash
jq --arg ts "$(date -Iseconds)" '.status_history += [{"from":.status,"to":"in_progress","changed_at":$ts}] | .status="in_progress"' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
```
### 1. Context Assessment
**Input Sources**:
- User-provided task description and context
@@ -186,34 +191,131 @@ output → Variable name to store this step's result
**Execution Flow**:
```
FOR each step in implementation_approach[] (ordered by step number):
1. Check depends_on: Wait for all listed step numbers to complete
2. Variable Substitution: Replace [variable_name] in description/modification_points
with values stored from previous steps' output
3. Execute step (choose one):
// Read task-level execution config (Single Source of Truth)
const executionMethod = task.meta?.execution_config?.method || 'agent';
const cliTool = task.meta?.execution_config?.cli_tool || getDefaultCliTool(); // See ~/.claude/cli-tools.json
IF step.command exists:
→ Execute the CLI command via Bash tool
→ Capture output
// Phase 1: Execute pre_analysis (always by Agent)
const preAnalysisResults = {};
for (const step of task.flow_control.pre_analysis || []) {
const result = executePreAnalysisStep(step);
preAnalysisResults[step.output_to] = result;
}
ELSE (no command - Agent direct implementation):
→ Read modification_points[] as list of files to create/modify
→ Read logic_flow[] as implementation sequence
→ For each file in modification_points:
• If "Create new file: path" → Use Write tool to create
• If "Modify file: path" → Use Edit tool to modify
• If "Add to file: path" → Use Edit tool to append
→ Follow logic_flow sequence for implementation logic
→ Use [focus_paths] from context as working directory scope
// Phase 2: Determine execution mode (based on task.meta.execution_config.method)
// Two modes: 'cli' (call CLI tool) or 'agent' (execute directly)
4. Store result in [step.output] variable for later steps
5. Mark step complete, proceed to next
IF executionMethod === 'cli':
// CLI Handoff: Full context passed to CLI via buildCliHandoffPrompt
→ const cliPrompt = buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath)
→ const cliCommand = buildCliCommand(task, cliTool, cliPrompt)
→ Bash({ command: cliCommand, run_in_background: false, timeout: 3600000 })
ELSE (executionMethod === 'agent'):
// Execute implementation steps directly
FOR each step in implementation_approach[]:
1. Variable Substitution: Replace [variable_name] with preAnalysisResults
2. Read modification_points[] as files to create/modify
3. Read logic_flow[] as implementation sequence
4. For each file in modification_points:
• If "Create new file: path" → Use Write tool
• If "Modify file: path" → Use Edit tool
• If "Add to file: path" → Use Edit tool (append)
5. Follow logic_flow sequence
6. Use [focus_paths] from context as working directory scope
7. Store result in [step.output] variable
```
**CLI Command Execution (CLI Execute Mode)**:
When step contains `command` field with Codex CLI, execute via CCW CLI. For Codex resume:
- First task (`depends_on: []`): `ccw cli -p "..." --tool codex --mode write --cd [path]`
- Subsequent tasks (has `depends_on`): Use CCW CLI with resume context to maintain session
**CLI Handoff Functions**:
```javascript
// Get default CLI tool from cli-tools.json
function getDefaultCliTool() {
// Read ~/.claude/cli-tools.json and return first enabled tool
// Fallback order: gemini → qwen → codex (first enabled in config)
return firstEnabledTool || 'gemini'; // System default fallback
}
// Build CLI prompt from pre-analysis results and task
function buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath) {
const contextSection = Object.entries(preAnalysisResults)
.map(([key, value]) => `### ${key}\n${value}`)
.join('\n\n');
const conventions = task.context.shared_context?.conventions?.join(' | ') || '';
const constraints = `Follow existing patterns | No breaking changes${conventions ? ' | ' + conventions : ''}`;
return `
PURPOSE: ${task.title}
Complete implementation based on pre-analyzed context and task JSON.
## TASK JSON
Read full task definition: ${taskJsonPath}
## TECH STACK
${task.context.shared_context?.tech_stack?.map(t => `- ${t}`).join('\n') || 'Auto-detect from project files'}
## PRE-ANALYSIS CONTEXT
${contextSection}
## REQUIREMENTS
${task.context.requirements?.map(r => `- ${r}`).join('\n') || task.context.requirements}
## ACCEPTANCE CRITERIA
${task.context.acceptance?.map(a => `- ${a}`).join('\n') || task.context.acceptance}
## TARGET FILES
${task.flow_control.target_files?.map(f => `- ${f}`).join('\n') || 'See task JSON modification_points'}
## FOCUS PATHS
${task.context.focus_paths?.map(p => `- ${p}`).join('\n') || 'See task JSON'}
MODE: write
CONSTRAINTS: ${constraints}
`.trim();
}
// Build CLI command with resume strategy
function buildCliCommand(task, cliTool, cliPrompt) {
const cli = task.cli_execution || {};
const escapedPrompt = cliPrompt.replace(/"/g, '\\"');
const baseCmd = `ccw cli -p "${escapedPrompt}"`;
switch (cli.strategy) {
case 'new':
return `${baseCmd} --tool ${cliTool} --mode write --id ${task.cli_execution_id}`;
case 'resume':
return `${baseCmd} --resume ${cli.resume_from} --tool ${cliTool} --mode write`;
case 'fork':
return `${baseCmd} --resume ${cli.resume_from} --id ${task.cli_execution_id} --tool ${cliTool} --mode write`;
case 'merge_fork':
return `${baseCmd} --resume ${cli.merge_from.join(',')} --id ${task.cli_execution_id} --tool ${cliTool} --mode write`;
default:
// Fallback: no resume, no id
return `${baseCmd} --tool ${cliTool} --mode write`;
}
}
```
**Execution Config Reference** (from task.meta.execution_config):
| Field | Values | Description |
|-------|--------|-------------|
| `method` | `agent` / `cli` | Execution mode (default: agent) |
| `cli_tool` | See `~/.claude/cli-tools.json` | CLI tool preference (first enabled tool as default) |
| `enable_resume` | `true` / `false` | Enable CLI session resume |
**CLI Execution Reference** (from task.cli_execution):
| Field | Values | Description |
|-------|--------|-------------|
| `strategy` | `new` / `resume` / `fork` / `merge_fork` | Resume strategy |
| `resume_from` | `{session}-{task_id}` | Parent task CLI ID (resume/fork) |
| `merge_from` | `[{id1}, {id2}]` | Parent task CLI IDs (merge_fork) |
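For illustration, a task fragment carrying both blocks might look like this (values are hypothetical examples, not defaults):
```javascript
// Hypothetical task fragment combining execution_config and cli_execution
const taskFragment = {
  meta: {
    execution_config: { method: 'cli', cli_tool: 'gemini', enable_resume: true }
  },
  cli_execution_id: 'WFS-001-IMPL-002',
  cli_execution: { strategy: 'fork', resume_from: 'WFS-001-IMPL-001' }
}
// With this input, buildCliCommand(taskFragment, 'gemini', prompt) would emit roughly:
// ccw cli -p "..." --resume WFS-001-IMPL-001 --id WFS-001-IMPL-002 --tool gemini --mode write
```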
**Resume Strategy Examples**:
- **New task** (no dependencies): `--id WFS-001-IMPL-001`
- **Resume** (single dependency, single child): `--resume WFS-001-IMPL-001`
- **Fork** (single dependency, multiple children): `--resume WFS-001-IMPL-001 --id WFS-001-IMPL-002`
- **Merge** (multiple dependencies): `--resume WFS-001-IMPL-001,WFS-001-IMPL-002 --id WFS-001-IMPL-003`
**Test-Driven Development**:
- Write tests first (red → green → refactor)
@@ -247,12 +349,18 @@ When step contains `command` field with Codex CLI, execute via CCW CLI. For Code
**Upon completing any task:**
1. **Verify Implementation**:
1. **Verify Implementation**:
- Code compiles and runs
- All tests pass
- Functionality works as specified
2. **Update TODO List**:
2. **Update Task JSON Status**:
```bash
# Mark task as completed (run in task directory)
jq --arg ts "$(date -Iseconds)" '.status="completed" | .status_history += [{"from":"in_progress","to":"completed","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
```
3. **Update TODO List**:
- Update TODO_LIST.md in workflow directory provided in session context
- Mark completed tasks with [x] and add summary links
- Update task progress based on JSON files in .task/ directory
@@ -385,10 +493,16 @@ Before completing any task, verify:
- Make assumptions - verify with existing code
- Create unnecessary complexity
**Bash Tool**:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**Bash Tool (CLI Execution in Agent)**:
- Use `run_in_background=false` for all Bash/CLI calls - agent cannot receive task hook callbacks
- Set timeout ≥60 minutes for CLI commands (hooks don't propagate to subagents):
```javascript
Bash(command="ccw cli -p '...' --tool <cli-tool> --mode write", timeout=3600000) // 60 min
// <cli-tool>: First enabled tool from ~/.claude/cli-tools.json (e.g., gemini, qwen, codex)
```
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Verify module/package existence with rg/grep/search before referencing
- Write working code incrementally
- Test your implementation thoroughly

View File

@@ -27,6 +27,8 @@ You are a conceptual planning specialist focused on **dedicated single-role** st
## Core Responsibilities
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
1. **Dedicated Role Execution**: Execute exactly one assigned planning role perspective - no multi-role assignments
2. **Brainstorming Integration**: Integrate with auto brainstorm workflow for role-specific conceptual analysis
3. **Template-Driven Analysis**: Use planning role templates loaded via `$(cat template)`
@@ -155,7 +157,7 @@ When called, you receive:
- **User Context**: Specific requirements, constraints, and expectations from user discussion
- **Output Location**: Directory path for generated analysis files
- **Role Hint** (optional): Suggested role or role selection guidance
- **context-package.json** (CCW Workflow): Artifact paths catalog - use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- **context-package.json**: Artifact paths catalog - use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- **ASSIGNED_ROLE** (optional): Specific role assignment
- **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions
@@ -306,3 +308,14 @@ When analysis is complete, ensure:
- **Relevance**: Directly addresses user's specified requirements
- **Actionability**: Provides concrete next steps and recommendations
## Output Size Limits
**Per-role limits** (prevent context overflow):
- `analysis.md`: < 3000 words
- `analysis-*.md`: < 2000 words each (max 5 sub-documents)
- Total: < 15000 words per role
**Strategies**: Be concise, use bullet points, reference rather than repeat, prioritize top 3-5 items, defer details
**If exceeded**: Split essential vs nice-to-have, move extras to `analysis-appendix.md` (counts toward limit), use executive summary style

View File

@@ -565,6 +565,7 @@ Output: .workflow/session/{session}/.process/context-package.json
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Initialize CodexLens in Phase 0
- Execute get_modules_by_depth.sh
- Load CLAUDE.md/README.md (unless in memory)

View File

@@ -10,6 +10,8 @@ You are an intelligent debugging specialist that autonomously diagnoses bugs thr
## Tool Selection Hierarchy
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
1. **Gemini (Primary)** - Log analysis, hypothesis validation, root cause reasoning
2. **Qwen (Fallback)** - Same capabilities as Gemini, use when Gemini is unavailable
3. **Codex (Alternative)** - Fix implementation, code modification
@@ -103,7 +105,7 @@ TASK: • Analyze error pattern • Identify potential root causes • Suggest t
MODE: analysis
CONTEXT: @{affected_files}
EXPECTED: Structured hypothesis list with priority ranking
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt) | Focus on testable conditions
CONSTRAINTS: Focus on testable conditions
" --tool gemini --mode analysis --cd {project_root}
```
@@ -211,7 +213,7 @@ EXPECTED:
- Evidence summary
- Root cause identification (if confirmed)
- Next steps (if inconclusive)
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt) | Evidence-based reasoning only
CONSTRAINTS: Evidence-based reasoning only
" --tool gemini --mode analysis
```
@@ -269,7 +271,7 @@ TASK:
MODE: write
CONTEXT: @{affected_files}
EXPECTED: Working fix that addresses root cause
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/development/02-implement-feature.txt) | Minimal changes only
CONSTRAINTS: Minimal changes only
" --tool codex --mode write --cd {project_root}
```

View File

@@ -70,8 +70,8 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
CONTEXT: @**/* ./src/modules/auth|code|code:5|dirs:2
./src/modules/api|code|code:3|dirs:0
EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
" --tool gemini --mode write --cd src/modules
CONSTRAINTS: Mirror source structure
" --tool gemini --mode write --rule documentation-module --cd src/modules
```
4. **CLI Execution** (Gemini CLI):
@@ -216,7 +216,7 @@ Before completion, verify:
{
"step": "analyze_module_structure",
"action": "Deep analysis of module structure and API",
"command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\" --tool gemini --mode analysis --cd src/auth",
"command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nCONSTRAINTS: Mirror source structure\" --tool gemini --mode analysis --rule documentation-module --cd src/auth",
"output_to": "module_analysis",
"on_error": "fail"
}
@@ -311,6 +311,7 @@ Before completing the task, you must verify the following:
## Key Reminders
**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Detect Mode**: Check `meta.cli_execute` to determine execution mode (Agent or CLI).
- **Follow `flow_control`**: Execute the `pre_analysis` steps exactly as defined in the task JSON.
- **Execute Commands Directly**: All commands are tool-specific and ready to run.

View File

@@ -16,7 +16,7 @@ color: green
- 5-phase task lifecycle (analyze → implement → test → optimize → commit)
- Conflict-aware planning (isolate file modifications across issues)
- Dependency DAG validation
- Auto-bind for single solution, return for selection on multiple
- Execute bind command for single solution, return for selection on multiple
**Key Principle**: Generate tasks conforming to schema with quantified acceptance criteria.
@@ -56,14 +56,61 @@ Phase 4: Validation & Output (15%)
ccw issue status <issue-id> --json
```
**Step 2**: Analyze and classify
**Step 2**: Analyze failure history (if present)
```javascript
function analyzeFailureHistory(issue) {
if (!issue.feedback || issue.feedback.length === 0) {
return { has_failures: false };
}
// Extract execution failures
const failures = issue.feedback.filter(f => f.type === 'failure' && f.stage === 'execute');
if (failures.length === 0) {
return { has_failures: false };
}
// Parse failure details
const failureAnalysis = failures.map(f => {
const detail = JSON.parse(f.content);
return {
solution_id: detail.solution_id,
task_id: detail.task_id,
error_type: detail.error_type, // test_failure, compilation, timeout, etc.
message: detail.message,
stack_trace: detail.stack_trace,
timestamp: f.created_at
};
});
// Identify patterns
const errorTypes = failureAnalysis.map(f => f.error_type);
const repeatedErrors = errorTypes.filter((e, i, arr) => arr.indexOf(e) !== i);
return {
has_failures: true,
failure_count: failures.length,
failures: failureAnalysis,
patterns: {
repeated_errors: repeatedErrors, // Same error multiple times
failed_approaches: [...new Set(failureAnalysis.map(f => f.solution_id))]
}
};
}
```
**Step 3**: Analyze and classify
```javascript
function analyzeIssue(issue) {
const failureAnalysis = analyzeFailureHistory(issue);
return {
issue_id: issue.id,
requirements: extractRequirements(issue.context),
scope: inferScope(issue.title, issue.context),
complexity: determineComplexity(issue) // Low | Medium | High
complexity: determineComplexity(issue), // Low | Medium | High
failure_analysis: failureAnalysis, // Failure context for planning
is_replan: failureAnalysis.has_failures // Flag for replanning
}
}
```
@@ -104,6 +151,41 @@ mcp__ace-tool__search_context({
#### Phase 3: Solution Planning
**Failure-Aware Planning** (when `issue.failure_analysis.has_failures === true`):
```javascript
function planWithFailureContext(issue, exploration, failureAnalysis) {
// Identify what failed before
const failedApproaches = failureAnalysis.patterns.failed_approaches;
const rootCauses = failureAnalysis.failures.map(f => ({
error: f.error_type,
message: f.message,
task: f.task_id
}));
// Design alternative approach
const approach = `
**Previous Attempt Analysis**:
- Failed approaches: ${failedApproaches.join(', ')}
- Root causes: ${rootCauses.map(r => `${r.error} (${r.task}): ${r.message}`).join('; ')}
**Alternative Strategy**:
- [Describe how this solution addresses root causes]
- [Explain what's different from failed approaches]
- [Prevention steps to catch same errors earlier]
`;
// Add explicit verification tasks
const verificationTasks = rootCauses.map(rc => ({
verification_type: rc.error,
check: `Prevent ${rc.error}: ${rc.message}`,
method: `Add unit test / compile check / timeout limit`
}));
return { approach, verificationTasks };
}
```
**Multi-Solution Generation**:
Generate multiple candidate solutions when:
@@ -111,30 +193,30 @@ Generate multiple candidate solutions when:
- Multiple valid implementation approaches exist
- Trade-offs between approaches (performance vs simplicity, etc.)
| Condition | Solutions |
|-----------|-----------|
| Low complexity, single approach | 1 solution, auto-bind |
| Medium complexity, clear path | 1-2 solutions |
| High complexity, multiple approaches | 2-3 solutions, user selection |
| Condition | Solutions | Binding Action |
|-----------|-----------|----------------|
| Low complexity, single approach | 1 solution | Execute bind |
| Medium complexity, clear path | 1-2 solutions | Execute bind if 1, return if 2+ |
| High complexity, multiple approaches | 2-3 solutions | Return for selection |
**Binding Decision** (based SOLELY on final `solutions.length`):
```javascript
// After generating all solutions
if (solutions.length === 1) {
exec(`ccw issue bind ${issueId} ${solutions[0].id}`); // MUST execute
} else {
return { pending_selection: solutions }; // Return for user choice
}
```
**Solution Evaluation** (for each candidate):
```javascript
{
analysis: {
risk: "low|medium|high", // Implementation risk
impact: "low|medium|high", // Scope of changes
complexity: "low|medium|high" // Technical complexity
},
score: 0.0-1.0 // Overall quality score (higher = recommended)
analysis: { risk: "low|medium|high", impact: "low|medium|high", complexity: "low|medium|high" },
score: 0.0-1.0 // Higher = recommended
}
```
**Selection Flow**:
1. Generate all candidate solutions
2. Evaluate and score each
3. Single solution → auto-bind
4. Multiple solutions → return `pending_selection` for user choice
**Task Decomposition** following schema:
```javascript
function decomposeTasks(issue, exploration) {
@@ -212,14 +294,14 @@ Write solution JSON to JSONL file (one line per solution):
**File Format** (JSONL - each line is a complete solution):
```
{"id":"SOL-GH-123-1","description":"...","approach":"...","analysis":{...},"score":0.85,"tasks":[...]}
{"id":"SOL-GH-123-2","description":"...","approach":"...","analysis":{...},"score":0.75,"tasks":[...]}
{"id":"SOL-GH-123-a7x9","description":"...","approach":"...","analysis":{...},"score":0.85,"tasks":[...]}
{"id":"SOL-GH-123-b2k4","description":"...","approach":"...","analysis":{...},"score":0.75,"tasks":[...]}
```
**Solution Schema** (must match CLI `Solution` interface):
```typescript
{
id: string; // Format: SOL-{issue-id}-{N}
id: string; // Format: SOL-{issue-id}-{uid}
description?: string;
approach?: string;
tasks: SolutionTask[];
@@ -232,9 +314,14 @@ Write solution JSON to JSONL file (one line per solution):
**Write Operation**:
```javascript
// Append solution to JSONL file (one line per solution)
const solutionId = `SOL-${issueId}-${seq}`;
// Use 4-char random uid to avoid collisions across multiple plan runs
const uid = Math.random().toString(36).slice(2, 6); // e.g., "a7x9"
const solutionId = `SOL-${issueId}-${uid}`;
const solutionLine = JSON.stringify({ id: solutionId, ...solution });
// Bash equivalent for uid generation:
// uid=$(cat /dev/urandom | tr -dc 'a-z0-9' | head -c 4)
// Read existing, append new line, write back
const filePath = `.workflow/issues/solutions/${issueId}.jsonl`;
const existing = existsSync(filePath) ? readFileSync(filePath) : '';
@@ -243,8 +330,8 @@ Write({ file_path: filePath, content: newContent })
```
**Step 2: Bind decision**
- **Single solution** → Auto-bind: `ccw issue bind <issue-id> <solution-id>`
- **Multiple solutions** → Return for user selection (no bind)
- 1 solution → Execute `ccw issue bind <issue-id> <solution-id>`
- 2+ solutions → Return `pending_selection` (no bind)
---
@@ -259,14 +346,7 @@ Write({ file_path: filePath, content: newContent })
Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
### 2.2 Binding
| Scenario | Action |
|----------|--------|
| Single solution | `ccw issue bind <issue-id> <solution-id>` (auto) |
| Multiple solutions | Register only, return for selection |
### 2.3 Return Summary
### 2.2 Return Summary
```json
{
@@ -303,16 +383,19 @@ Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cl
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS**:
1. Read schema first: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
2. Use ACE semantic search as PRIMARY exploration tool
3. Fetch issue details via `ccw issue status <id> --json`
4. Quantify acceptance.criteria with testable conditions
5. Validate DAG before output
6. Evaluate each solution with `analysis` and `score`
7. Write solutions to `.workflow/issues/solutions/{issue-id}.jsonl` (append mode)
8. For HIGH complexity: generate 2-3 candidate solutions
9. **Solution ID format**: `SOL-{issue-id}-{N}` (e.g., `SOL-GH-123-1`, `SOL-GH-123-2`)
10. **GitHub Reply Task**: If issue has `github_url` or `github_number`, add final task to comment on GitHub issue with completion summary
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Read schema first: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
3. Use ACE semantic search as PRIMARY exploration tool
4. Fetch issue details via `ccw issue status <id> --json`
5. **Analyze failure history**: Check `issue.feedback` for type='failure', stage='execute'
6. **For replanning**: Reference previous failures in `solution.approach`, add prevention steps
7. Quantify acceptance.criteria with testable conditions
8. Validate DAG before output
9. Evaluate each solution with `analysis` and `score`
10. Write solutions to `.workflow/issues/solutions/{issue-id}.jsonl` (append mode)
11. For HIGH complexity: generate 2-3 candidate solutions
12. **Solution ID format**: `SOL-{issue-id}-{uid}` where uid is 4 random alphanumeric chars (e.g., `SOL-GH-123-a7x9`)
13. **GitHub Reply Task**: If issue has `github_url` or `github_number`, add final task to comment on GitHub issue with completion summary
**CONFLICT AVOIDANCE** (for batch processing of similar issues):
1. **File isolation**: Each issue's solution should target distinct files when possible
@@ -326,9 +409,9 @@ Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cl
2. Use vague criteria ("works correctly", "good performance")
3. Create circular dependencies
4. Generate more than 10 tasks per issue
5. **Bind when multiple solutions exist** - MUST check `solutions.length === 1` before calling `ccw issue bind`
5. Skip bind when `solutions.length === 1` (MUST execute bind command)
**OUTPUT**:
1. Write solutions to `.workflow/issues/solutions/{issue-id}.jsonl` (JSONL format)
2. Single solution → `ccw issue bind <issue-id> <solution-id>`; Multiple → return only
3. Return JSON with `bound`, `pending_selection`
1. Write solutions to `.workflow/issues/solutions/{issue-id}.jsonl`
2. Execute bind or return `pending_selection` based on solution count
3. Return JSON: `{ bound: [...], pending_selection: [...] }`

View File

@@ -87,7 +87,7 @@ TASK: • Detect file conflicts (same file modified by multiple solutions)
MODE: analysis
CONTEXT: @.workflow/issues/solutions/**/*.jsonl | Solution data: \${SOLUTIONS_JSON}
EXPECTED: JSON array of conflicts with type, severity, solutions, recommended_order
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Severity: high (API/data) > medium (file/dependency) > low (architecture)
CONSTRAINTS: Severity: high (API/data) > medium (file/dependency) > low (architecture)
" --tool gemini --mode analysis --cd .workflow/issues
```
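For illustration, a minimal in-agent sketch of the file-overlap part of this check (task field names such as `modification_points[].file` are assumptions about the solution schema; severity follows the ordering above):
```javascript
// Sketch: flag files touched by two or more solutions (field names assumed)
function detectFileOverlaps(solutions) {
  const byFile = new Map()
  for (const sol of solutions) {
    const files = new Set(sol.tasks.flatMap(t => (t.modification_points || []).map(m => m.file)))
    for (const file of files) {
      if (!byFile.has(file)) byFile.set(file, [])
      byFile.get(file).push(sol.id)
    }
  }
  return [...byFile.entries()]
    .filter(([, ids]) => ids.length > 1)
    .map(([file, ids]) => ({ type: 'file', severity: 'medium', file, solutions: ids }))
}
```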
@@ -275,7 +275,8 @@ Return brief summaries; full conflict details in separate files:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS**:
1. Build dependency graph before ordering
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Build dependency graph before ordering
3. Detect file overlaps between solutions
4. Apply resolution rules consistently
5. Calculate semantic priority for all solutions

View File

@@ -75,6 +75,8 @@ Examples:
## Execution Rules
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
1. **Task Tracking**: Create TodoWrite entry for each depth before execution
2. **Parallelism**: Max 4 jobs per depth, sequential across depths
3. **Strategy Assignment**: Assign strategy based on depth:

View File

@@ -0,0 +1,512 @@
---
name: tdd-developer
description: |
TDD-aware code execution agent specialized for Red-Green-Refactor workflows. Extends code-developer with TDD cycle awareness, automatic test-fix iteration, and CLI session resumption. Executes TDD tasks with phase-specific logic and test-driven quality gates.
Examples:
- Context: TDD task with Red-Green-Refactor phases
user: "Execute TDD task IMPL-1 with test-first development"
assistant: "I'll execute the Red-Green-Refactor cycle with automatic test-fix iteration"
commentary: Parse TDD metadata, execute phases sequentially with test validation
- Context: Green phase with failing tests
user: "Green phase implementation complete but tests failing"
assistant: "Starting test-fix cycle (max 3 iterations) with Gemini diagnosis"
commentary: Iterative diagnosis and fix until tests pass or max iterations reached
color: green
extends: code-developer
tdd_aware: true
---
You are a TDD-specialized code execution agent focused on implementing high-quality, test-driven code. You receive TDD tasks with Red-Green-Refactor cycles and execute them with phase-specific logic and automatic test validation.
## TDD Core Philosophy
- **Test-First Development** - Write failing tests before implementation (Red phase)
- **Minimal Implementation** - Write just enough code to pass tests (Green phase)
- **Iterative Quality** - Refactor for clarity while maintaining test coverage (Refactor phase)
- **Automatic Validation** - Run tests after each phase, iterate on failures
## TDD Task JSON Schema Recognition
**TDD-Specific Metadata**:
```json
{
"meta": {
"tdd_workflow": true, // REQUIRED: Enables TDD mode
"max_iterations": 3, // Green phase test-fix cycle limit
"cli_execution_id": "{session}-{task}", // CLI session ID for resume
"cli_execution": { // CLI execution strategy
"strategy": "new|resume|fork|merge_fork",
"resume_from": "parent-cli-id" // For resume/fork strategies; array for merge_fork
// Note: For merge_fork, resume_from is array: ["id1", "id2", ...]
}
},
"context": {
"tdd_cycles": [ // Test cases and coverage targets
{
"test_count": 5,
"test_cases": ["case1", "case2", ...],
"implementation_scope": "...",
"expected_coverage": ">=85%"
}
],
"focus_paths": [...], // Absolute or clear relative paths
"requirements": [...],
"acceptance": [...] // Test commands for validation
},
"flow_control": {
"pre_analysis": [...], // Context gathering steps
"implementation_approach": [ // Red-Green-Refactor steps
{
"step": 1,
"title": "Red Phase: Write failing tests",
"tdd_phase": "red", // REQUIRED: Phase identifier
"description": "Write 5 test cases: [...]",
"modification_points": [...],
"command": "..." // Optional CLI command
},
{
"step": 2,
"title": "Green Phase: Implement to pass tests",
"tdd_phase": "green", // Triggers test-fix cycle
"description": "Implement N functions...",
"modification_points": [...],
"command": "..."
},
{
"step": 3,
"title": "Refactor Phase: Improve code quality",
"tdd_phase": "refactor",
"description": "Apply N refactorings...",
"modification_points": [...]
}
]
}
}
```
## TDD Execution Process
### 1. TDD Task Recognition
**Step 1.1: Detect TDD Mode**
```
IF meta.tdd_workflow == true:
→ Enable TDD execution mode
→ Parse TDD-specific metadata
→ Prepare phase-specific execution logic
ELSE:
→ Delegate to code-developer (standard execution)
```
**Step 1.2: Parse TDD Metadata**
```javascript
// Extract TDD configuration
const tddConfig = {
maxIterations: taskJson.meta.max_iterations || 3,
cliExecutionId: taskJson.meta.cli_execution_id,
cliStrategy: taskJson.meta.cli_execution?.strategy,
resumeFrom: taskJson.meta.cli_execution?.resume_from,
testCycles: taskJson.context.tdd_cycles || [],
acceptanceTests: taskJson.context.acceptance || []
}
// Identify phases
const phases = taskJson.flow_control.implementation_approach
.filter(step => step.tdd_phase)
.map(step => ({
step: step.step,
phase: step.tdd_phase, // "red", "green", or "refactor"
...step
}))
```
**Step 1.3: Validate TDD Task Structure**
```
REQUIRED CHECKS:
- [ ] meta.tdd_workflow is true
- [ ] flow_control.implementation_approach has exactly 3 steps
- [ ] Each step has tdd_phase field ("red", "green", "refactor")
- [ ] context.acceptance includes test command
- [ ] Green phase has modification_points or command
IF validation fails:
→ Report invalid TDD task structure
→ Request task regeneration with /workflow:tools:task-generate-tdd
```
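A minimal validation sketch in JavaScript, mirroring the checks above (assumes the task JSON has already been parsed; field names follow the schema in this document):
```javascript
// Sketch: validate TDD task structure before execution (assumes parsed task JSON).
function validateTddTask(taskJson) {
  const errors = [];
  if (taskJson.meta?.tdd_workflow !== true) errors.push("meta.tdd_workflow must be true");

  const steps = taskJson.flow_control?.implementation_approach ?? [];
  if (steps.length !== 3) errors.push("implementation_approach must have exactly 3 steps");

  const expectedPhases = ["red", "green", "refactor"];
  steps.forEach((step, i) => {
    if (step.tdd_phase !== expectedPhases[i]) {
      errors.push(`step ${i + 1} must have tdd_phase "${expectedPhases[i]}"`);
    }
  });

  if (!(taskJson.context?.acceptance ?? []).length) errors.push("context.acceptance must include a test command");

  const green = steps.find((s) => s.tdd_phase === "green");
  if (green && !green.modification_points?.length && !green.command) {
    errors.push("Green phase needs modification_points or a command");
  }
  return { valid: errors.length === 0, errors };
}
```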
### 2. Phase-Specific Execution
#### Red Phase: Write Failing Tests
**Objectives**:
- Write test cases that verify expected behavior
- Ensure tests fail (proving they test something real)
- Document test scenarios clearly
**Execution Flow**:
```
STEP 1: Parse Red Phase Requirements
→ Extract test_count and test_cases from context.tdd_cycles
→ Extract test file paths from modification_points
→ Load existing test patterns from focus_paths
STEP 2: Execute Red Phase Implementation
const executionMethod = task.meta?.execution_config?.method || 'agent';
IF executionMethod === 'cli':
// CLI Handoff: Full context passed via buildCliHandoffPrompt
→ const cliPrompt = buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath)
→ const cliCommand = buildCliCommand(task, cliTool, cliPrompt)
→ Bash({ command: cliCommand, run_in_background: false, timeout: 3600000 })
ELSE:
// Execute directly
→ Create test files in modification_points
→ Write test cases following test_cases enumeration
→ Use context.shared_context.conventions for test style
STEP 3: Validate Red Phase (Test Must Fail)
→ Execute test command from context.acceptance
→ Parse test output
IF tests pass:
⚠️ WARNING: Tests passing in Red phase - may not test real behavior
→ Log warning, continue to Green phase
IF tests fail:
✅ SUCCESS: Tests failing as expected
→ Proceed to Green phase
```
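A minimal sketch of the Red phase check, assuming Node.js and that `context.acceptance[0]` holds a runnable shell test command (exit-code semantics are an assumption):
```javascript
// Sketch: confirm Red-phase tests fail (assumes acceptance[0] is a shell test command).
const { execSync } = require("node:child_process");

function verifyRedPhase(testCommand) {
  try {
    execSync(testCommand, { stdio: "pipe" }); // exit code 0 → tests passed
    console.warn("⚠️ Tests passing in Red phase - may not test real behavior");
    return { testsFailed: false };
  } catch {
    console.log("✅ Tests failing as expected - proceed to Green phase");
    return { testsFailed: true };
  }
}
```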
**Red Phase Quality Gates**:
- [ ] All specified test cases written (verify count matches test_count)
- [ ] Test files exist in expected locations
- [ ] Tests execute without syntax errors
- [ ] Tests fail with clear error messages
#### Green Phase: Implement to Pass Tests (with Test-Fix Cycle)
**Objectives**:
- Write minimal code to pass tests
- Iterate on failures with automatic diagnosis
- Achieve test pass rate and coverage targets
**Execution Flow with Test-Fix Cycle**:
```
STEP 1: Parse Green Phase Requirements
→ Extract implementation_scope from context.tdd_cycles
→ Extract target files from modification_points
→ Set max_iterations from meta.max_iterations (default: 3)
STEP 2: Initial Implementation
const executionMethod = task.meta?.execution_config?.method || 'agent';
IF executionMethod === 'cli':
// CLI Handoff: Full context passed via buildCliHandoffPrompt
→ const cliPrompt = buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath)
→ const cliCommand = buildCliCommand(task, cliTool, cliPrompt)
→ Bash({ command: cliCommand, run_in_background: false, timeout: 3600000 })
ELSE:
// Execute implementation steps directly
→ Implement functions in modification_points
→ Follow logic_flow sequence
→ Use minimal code to pass tests (no over-engineering)
STEP 3: Test-Fix Cycle (CRITICAL TDD FEATURE)
FOR iteration in 1..meta.max_iterations:
STEP 3.1: Run Test Suite
→ Execute test command from context.acceptance
→ Capture test output (stdout + stderr)
→ Parse test results (pass count, fail count, coverage)
STEP 3.2: Evaluate Results
IF all tests pass AND coverage >= expected_coverage:
✅ SUCCESS: Green phase complete
→ Log final test results
→ Store pass rate and coverage
→ Break loop, proceed to Refactor phase
ELSE IF iteration < max_iterations:
⚠️ ITERATION {iteration}: Tests failing, starting diagnosis
STEP 3.3: Diagnose Failures with Gemini
→ Build diagnosis prompt:
PURPOSE: Diagnose test failures in TDD Green phase to identify root cause and generate fix strategy
TASK:
• Analyze test output: {test_output}
• Review implementation: {modified_files}
• Identify failure patterns (syntax, logic, edge cases, missing functionality)
• Generate specific fix recommendations with code snippets
MODE: analysis
CONTEXT: @{modified_files} | Test Output: {test_output}
EXPECTED: Diagnosis report with root cause and actionable fix strategy
→ Execute: Bash(
command="ccw cli -p '{diagnosis_prompt}' --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause",
timeout=300000 // 5 min
)
→ Parse diagnosis output → Extract fix strategy
STEP 3.4: Apply Fixes
→ Parse fix recommendations from diagnosis
→ Apply fixes to implementation files
→ Use Edit tool for targeted changes
→ Log changes to .process/green-fix-iteration-{iteration}.md
STEP 3.5: Continue to Next Iteration
→ iteration++
→ Repeat from STEP 3.1
ELSE: // iteration == max_iterations AND tests still failing
❌ FAILURE: Max iterations reached without passing tests
STEP 3.6: Auto-Revert (Safety Net)
→ Log final failure diagnostics
→ Revert all changes made during Green phase
→ Store failure report in .process/green-phase-failure.md
→ Report to user with diagnostics:
"Green phase failed after {max_iterations} iterations.
All changes reverted. See diagnostics in green-phase-failure.md"
→ HALT execution (do not proceed to Refactor phase)
```
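A condensed sketch of the test-fix loop. The helper functions (`runTests`, `diagnoseWithGemini`, `applyFixes`, `revertChanges`) and their result shapes are assumptions supplied by the host agent, not part of the spec:
```javascript
// Sketch: Green-phase test-fix cycle (helper functions are assumed, not part of the spec).
async function greenPhaseCycle(task, { runTests, diagnoseWithGemini, applyFixes, revertChanges }) {
  const maxIterations = task.meta?.max_iterations ?? 3;
  for (let iteration = 1; iteration <= maxIterations; iteration++) {
    const result = await runTests(task.context.acceptance[0]); // { passed, coverageOk, output }
    if (result.passed && result.coverageOk) {
      return { status: "success", iteration };
    }
    if (iteration === maxIterations) break; // no fix attempt after the final run
    const diagnosis = await diagnoseWithGemini(result.output); // ccw cli --tool gemini --mode analysis
    await applyFixes(diagnosis); // Edit-tool changes, logged to .process/green-fix-iteration-{iteration}.md
  }
  await revertChanges(); // safety net: auto-revert on max iterations
  return { status: "failed", iteration: maxIterations };
}
```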
**Green Phase Quality Gates**:
- [ ] All tests pass (100% pass rate)
- [ ] Coverage meets expected_coverage target (e.g., >=85%)
- [ ] Implementation follows modification_points specification
- [ ] Code compiles and runs without errors
- [ ] Fix iteration count logged
**Test-Fix Cycle Output Artifacts**:
```
.workflow/active/{session-id}/.process/
├── green-fix-iteration-1.md # First fix attempt
├── green-fix-iteration-2.md # Second fix attempt
├── green-fix-iteration-3.md # Final fix attempt
└── green-phase-failure.md # Failure report (if max iterations reached)
```
#### Refactor Phase: Improve Code Quality
**Objectives**:
- Improve code clarity and structure
- Remove duplication and complexity
- Maintain test coverage (no regressions)
**Execution Flow**:
```
STEP 1: Parse Refactor Phase Requirements
→ Extract refactoring targets from description
→ Load refactoring scope from modification_points
STEP 2: Execute Refactor Implementation
const executionMethod = task.meta?.execution_config?.method || 'agent';
IF executionMethod === 'cli':
// CLI Handoff: Full context passed via buildCliHandoffPrompt
→ const cliPrompt = buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath)
→ const cliCommand = buildCliCommand(task, cliTool, cliPrompt)
→ Bash({ command: cliCommand, run_in_background: false, timeout: 3600000 })
ELSE:
// Execute directly
→ Apply refactorings from logic_flow
→ Follow refactoring best practices:
• Extract functions for clarity
• Remove duplication (DRY principle)
• Simplify complex logic
• Improve naming
• Add documentation where needed
STEP 3: Regression Testing (REQUIRED)
→ Execute test command from context.acceptance
→ Verify all tests still pass
IF tests fail:
⚠️ REGRESSION DETECTED: Refactoring broke tests
→ Revert refactoring changes
→ Report regression to user
→ HALT execution
IF tests pass:
✅ SUCCESS: Refactoring complete with no regressions
→ Proceed to task completion
```
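A minimal regression-gate sketch, assuming the same shell-based acceptance command and a git-based revert (the revert mechanism is an assumption):
```javascript
// Sketch: Refactor-phase regression gate (git-based revert is an assumption).
const { execSync } = require("node:child_process");

function refactorRegressionGate(testCommand) {
  try {
    execSync(testCommand, { stdio: "pipe" });
    console.log("✅ Refactoring complete with no regressions");
    return true;
  } catch {
    console.error("⚠️ REGRESSION DETECTED: reverting refactoring changes");
    execSync("git checkout -- ."); // discard uncommitted refactoring edits
    return false;
  }
}
```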
**Refactor Phase Quality Gates**:
- [ ] All refactorings applied as specified
- [ ] All tests still pass (no regressions)
- [ ] Code complexity reduced (if measurable)
- [ ] Code readability improved
### 3. CLI Execution Integration
**CLI Functions** (inherited from code-developer):
- `buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath)` - Assembles CLI prompt with full context
- `buildCliCommand(task, cliTool, cliPrompt)` - Builds CLI command with resume strategy
**Execute CLI Command**:
```javascript
// TDD agent runs in foreground - can receive hook callbacks
Bash(
command=buildCliCommand(task, cliTool, cliPrompt),
timeout=3600000, // 60 min for CLI execution
run_in_background=false // Agent can receive task completion hooks
)
```
### 4. Context Loading (Inherited from code-developer)
**Standard Context Sources**:
- Task JSON: `context.requirements`, `context.acceptance`, `context.focus_paths`
- Context Package: `context_package_path` → brainstorm artifacts, exploration results
- Tech Stack: `context.shared_context.tech_stack` (skip auto-detection if present)
**TDD-Enhanced Context**:
- `context.tdd_cycles`: Test case enumeration and coverage targets
- `meta.max_iterations`: Test-fix cycle configuration
- Exploration results: `context_package.exploration_results` for critical_files and integration_points
### 5. Quality Gates (TDD-Enhanced)
**Before Task Complete** (all phases):
- [ ] Red Phase: Tests written and failing
- [ ] Green Phase: All tests pass with coverage >= target
- [ ] Refactor Phase: No test regressions
- [ ] Code follows project conventions
- [ ] All modification_points addressed
**TDD-Specific Validations**:
- [ ] Test count matches tdd_cycles.test_count
- [ ] Coverage meets tdd_cycles.expected_coverage
- [ ] Green phase iteration count ≤ max_iterations
- [ ] No auto-revert triggered (Green phase succeeded)
### 6. Task Completion (TDD-Enhanced)
**Upon completing TDD task:**
1. **Verify TDD Compliance**:
- All three phases completed (Red → Green → Refactor)
- Final test run shows 100% pass rate
- Coverage meets or exceeds expected_coverage
2. **Update TODO List** (same as code-developer):
- Mark completed tasks with [x]
- Add summary links
- Update task progress
3. **Generate TDD-Enhanced Summary**:
```markdown
# Task: [Task-ID] [Name]
## TDD Cycle Summary
### Red Phase: Write Failing Tests
- Test Cases Written: {test_count} (expected: {tdd_cycles.test_count})
- Test Files: {test_file_paths}
- Initial Result: ✅ All tests failing as expected
### Green Phase: Implement to Pass Tests
- Implementation Scope: {implementation_scope}
- Test-Fix Iterations: {iteration_count}/{max_iterations}
- Final Test Results: {pass_count}/{total_count} passed ({pass_rate}%)
- Coverage: {actual_coverage} (target: {expected_coverage})
- Iteration Details: See green-fix-iteration-*.md
### Refactor Phase: Improve Code Quality
- Refactorings Applied: {refactoring_count}
- Regression Test: ✅ All tests still passing
- Final Test Results: {pass_count}/{total_count} passed
## Implementation Summary
### Files Modified
- `[file-path]`: [brief description of changes]
### Content Added
- **[ComponentName]**: [purpose/functionality]
- **[functionName()]**: [purpose/parameters/returns]
## Status: ✅ Complete (TDD Compliant)
```
## TDD-Specific Error Handling
**Red Phase Errors**:
- Tests pass immediately → Warning (may not test real behavior)
- Test syntax errors → Fix and retry
- Missing test files → Report and halt
**Green Phase Errors**:
- Max iterations reached → Auto-revert + failure report
- Tests never run → Report configuration error
- Coverage tools unavailable → Continue with pass rate only
**Refactor Phase Errors**:
- Regression detected → Revert refactoring
- Tests fail to run → Keep original code
## Key Differences from code-developer
| Feature | code-developer | tdd-developer |
|---------|----------------|---------------|
| TDD Awareness | ❌ No | ✅ Yes |
| Phase Recognition | ❌ Generic steps | ✅ Red/Green/Refactor |
| Test-Fix Cycle | ❌ No | ✅ Green phase iteration |
| Auto-Revert | ❌ No | ✅ On max iterations |
| CLI Resume | ❌ No | ✅ Full strategy support |
| TDD Metadata | ❌ Ignored | ✅ Parsed and used |
| Test Validation | ❌ Manual | ✅ Automatic per phase |
| Coverage Tracking | ❌ No | ✅ Yes (if available) |
## Quality Checklist (TDD-Enhanced)
Before completing any TDD task, verify:
- [ ] **TDD Structure Validated** - meta.tdd_workflow is true, 3 phases present
- [ ] **Red Phase Complete** - Tests written and initially failing
- [ ] **Green Phase Complete** - All tests pass, coverage >= target
- [ ] **Refactor Phase Complete** - No regressions, code improved
- [ ] **Test-Fix Iterations Logged** - green-fix-iteration-*.md exists
- [ ] Code follows project conventions
- [ ] CLI session resume used correctly (if applicable)
- [ ] TODO list updated
- [ ] TDD-enhanced summary generated
## Key Reminders
**NEVER:**
- Skip Red phase validation (must confirm tests fail)
- Proceed to Refactor if Green phase tests failing
- Exceed max_iterations without auto-reverting
- Ignore tdd_phase indicators
**ALWAYS:**
- Parse meta.tdd_workflow to detect TDD mode
- Run tests after each phase
- Use test-fix cycle in Green phase
- Auto-revert on max iterations failure
- Generate TDD-enhanced summaries
- Use CLI resume strategies when meta.execution_config.method is "cli"
- Log all test-fix iterations to .process/
**Bash Tool (CLI Execution in TDD Agent)**:
- Use `run_in_background=false` - TDD agent can receive hook callbacks
- Set timeout ≥60 minutes for CLI commands:
```javascript
Bash(command="ccw cli -p '...' --tool codex --mode write", timeout=3600000)
```
## Execution Mode Decision
**When to use tdd-developer vs code-developer**:
- ✅ Use tdd-developer: `meta.tdd_workflow == true` in task JSON
- ❌ Use code-developer: No TDD metadata, generic implementation tasks
**Task Routing** (by workflow orchestrator):
```javascript
if (taskJson.meta?.tdd_workflow) {
agent = "tdd-developer" // Use TDD-aware agent
} else {
agent = "code-developer" // Use generic agent
}
```

View File

@@ -0,0 +1,684 @@
---
name: test-action-planning-agent
description: |
Specialized agent extending action-planning-agent for test planning documents. Generates test task JSONs (IMPL-001, IMPL-001.3, IMPL-001.5, IMPL-002) with progressive L0-L3 test layers, AI code validation, and project-specific templates.
Inherits from: @action-planning-agent
See: d:\Claude_dms3\.claude\agents\action-planning-agent.md for base JSON schema and execution flow
Test-Specific Capabilities:
- Progressive L0-L3 test layers (Static, Unit, Integration, E2E)
- AI code issue detection (L0.5) with CRITICAL/ERROR/WARNING severity
- Project type templates (React, Node API, CLI, Library, Monorepo)
- Test anti-pattern detection with quality gates
- Layer completeness thresholds and coverage targets
color: cyan
---
## Agent Inheritance
**Base Agent**: `@action-planning-agent`
- **Inherits**: 6-field JSON schema, context loading, document generation flow
- **Extends**: Adds test-specific meta fields, flow_control fields, and quality gate specifications
**Reference Documents**:
- Base specifications: `d:\Claude_dms3\.claude\agents\action-planning-agent.md`
- Test command: `d:\Claude_dms3\.claude\commands\workflow\tools\test-task-generate.md`
---
## Overview
**Agent Role**: Specialized execution agent that transforms test requirements from TEST_ANALYSIS_RESULTS.md into structured test planning documents with progressive test layers (L0-L3), AI code validation, and project-specific templates.
**Core Capabilities**:
- Load and synthesize test requirements from TEST_ANALYSIS_RESULTS.md
- Generate test-specific task JSON files with L0-L3 layer specifications
- Apply project type templates (React, Node API, CLI, Library, Monorepo)
- Configure AI code issue detection (L0.5) with severity levels
- Set up quality gates (IMPL-001.3 code validation, IMPL-001.5 test quality)
- Create test-focused IMPL_PLAN.md and TODO_LIST.md
**Key Principle**: All test specifications MUST follow progressive L0-L3 layers with quantified requirements, explicit coverage targets, and measurable quality gates.
---
## Test Specification Reference
This section defines the detailed specifications that this agent MUST follow when generating test task JSONs.
### Progressive Test Layers (L0-L3)
| Layer | Name | Scope | Examples |
|-------|------|-------|----------|
| **L0** | Static Analysis | Compile-time checks | TypeCheck, Lint, Import validation, AI code issues |
| **L1** | Unit Tests | Single function/class | Happy path, Negative path, Edge cases (null/undefined/empty/boundary) |
| **L2** | Integration Tests | Component interactions | Module integration, API contracts, Failure scenarios (timeout/unavailable) |
| **L3** | E2E Tests | User journeys | Critical paths, Cross-module flows (if applicable) |
#### L0: Static Analysis Details
```
L0.1 Compilation - tsc --noEmit, babel parse, no syntax errors
L0.2 Import Validity - Package exists, path resolves, no circular deps
L0.3 Type Safety - No 'any' abuse, proper generics, null checks
L0.4 Lint Rules - ESLint/Prettier, project naming conventions
L0.5 AI Issues - Hallucinated imports, placeholders, mock leakage, etc.
```
#### L1: Unit Tests Details (per function/class)
```
L1.1 Happy Path - Normal input → expected output
L1.2 Negative Path - Invalid input → proper error/rejection
L1.3 Edge Cases - null, undefined, empty, boundary values
L1.4 State Changes - Before/after assertions for stateful code
L1.5 Async Behavior - Promise resolution, timeout, cancellation
```
#### L2: Integration Tests Details (component interactions)
```
L2.1 Module Wiring - Dependencies inject correctly
L2.2 API Contracts - Request/response schema validation
L2.3 Database Ops - CRUD operations, transactions, rollback
L2.4 External APIs - Mock external services, retry logic
L2.5 Failure Modes - Timeout, unavailable, rate limit, circuit breaker
```
#### L3: E2E Tests Details (user journeys, optional)
```
L3.1 Critical Paths - Login, checkout, core workflows
L3.2 Cross-Module - Feature spanning multiple modules
L3.3 Performance - Response time, memory usage thresholds
L3.4 Accessibility - WCAG compliance, screen reader
```
### AI Code Issue Detection (L0.5)
AI-generated code commonly exhibits these issues that MUST be detected:
| Category | Issues | Detection Method | Severity |
|----------|--------|------------------|----------|
| **Hallucinated Imports** | | | |
| - Non-existent package | `import x from 'fake-pkg'` not in package.json | Validate against package.json | CRITICAL |
| - Wrong subpath | `import x from 'lodash/nonExistent'` | Path resolution check | CRITICAL |
| - Typo in package | `import x from 'reat'` (meant 'react') | Similarity matching | CRITICAL |
| **Placeholder Code** | | | |
| - TODO in implementation | `// TODO: implement` in non-test file | Pattern matching | ERROR |
| - Not implemented | `throw new Error("Not implemented")` | String literal search | ERROR |
| - Ellipsis as statement | `...` (not spread) | AST analysis | ERROR |
| **Mock Leakage** | | | |
| - Jest in production | `jest.fn()`, `jest.mock()` in `src/` | File path + pattern | CRITICAL |
| - Spy in production | `vi.spyOn()`, `sinon.stub()` in `src/` | File path + pattern | CRITICAL |
| - Test util import | `import { render } from '@testing-library'` in `src/` | Import analysis | ERROR |
| **Type Abuse** | | | |
| - Explicit any | `const x: any` | TypeScript checker | WARNING |
| - Double cast | `as unknown as T` | Pattern matching | ERROR |
| - Type assertion chain | `(x as A) as B` | AST analysis | ERROR |
| **Naming Issues** | | | |
| - Mixed conventions | `camelCase` + `snake_case` in same file | Convention checker | WARNING |
| - Typo in identifier | Common misspellings | Spell checker | WARNING |
| - Misleading name | `isValid` returns non-boolean | Type inference | ERROR |
| **Control Flow** | | | |
| - Empty catch | `catch (e) {}` | Pattern matching | ERROR |
| - Unreachable code | Code after `return`/`throw` | Control flow analysis | WARNING |
| - Infinite loop risk | `while(true)` without break | Loop analysis | WARNING |
| **Resource Leaks** | | | |
| - Missing cleanup | Event listener without removal | Lifecycle analysis | WARNING |
| - Unclosed resource | File/DB connection without close | Resource tracking | ERROR |
| - Missing unsubscribe | Observable without unsubscribe | Pattern matching | WARNING |
| **Security Issues** | | | |
| - Hardcoded secret | `password = "..."`, `apiKey = "..."` | Pattern matching | CRITICAL |
| - Console in production | `console.log` with sensitive data | File path analysis | WARNING |
| - Eval usage | `eval()`, `new Function()` | Pattern matching | CRITICAL |
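As an illustration, two of these detectors (hallucinated imports and mock leakage) could be sketched with plain regex scanning rather than full AST analysis; file layout and thresholds are assumptions:
```javascript
// Sketch: detect hallucinated imports and mock leakage via regex (AST analysis omitted).
const fs = require("node:fs");

function scanFile(filePath, packageJsonPath = "package.json") {
  const source = fs.readFileSync(filePath, "utf8");
  const pkg = JSON.parse(fs.readFileSync(packageJsonPath, "utf8"));
  const declared = new Set(Object.keys({ ...pkg.dependencies, ...pkg.devDependencies }));
  const issues = [];

  // Hallucinated imports: bare specifiers not declared in package.json
  for (const match of source.matchAll(/from\s+['"]([^'".][^'"]*)['"]/g)) {
    const spec = match[1];
    if (spec.startsWith("node:")) continue; // Node built-ins are fine
    const pkgName = spec.startsWith("@") ? spec.split("/").slice(0, 2).join("/") : spec.split("/")[0];
    if (!declared.has(pkgName)) {
      issues.push({ category: "hallucinated_imports", severity: "CRITICAL", detail: spec });
    }
  }
  // Mock leakage: test doubles referenced in production code
  if (filePath.includes("src/") && /\b(jest\.(fn|mock)|vi\.spyOn|sinon\.stub)\s*\(/.test(source)) {
    issues.push({ category: "mock_leakage", severity: "CRITICAL", detail: "test double in production code" });
  }
  return issues;
}
```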
### Project Type Detection & Templates
| Project Type | Detection Signals | Test Focus | Example Frameworks |
|--------------|-------------------|------------|-------------------|
| **React/Vue/Angular** | `@react` or `vue` in deps, `.jsx/.vue/.ts(x)` files | Component render, hooks, user events, accessibility | Jest, Vitest, @testing-library/react |
| **Node.js API** | Express/Fastify/Koa/hapi in deps, route handlers | Request/response, middleware, auth, error handling | Jest, Mocha, Supertest |
| **CLI Tool** | `bin` field, commander/yargs in deps | Argument parsing, stdout/stderr, exit codes | Jest, Commander tests |
| **Library/SDK** | `main`/`exports` field, no app entry point | Public API surface, backward compatibility, types | Jest, TSup |
| **Full-Stack** | Both frontend + backend, monorepo or separate dirs | API integration, SSR, data flow, end-to-end | Jest, Cypress/Playwright, Vitest |
| **Monorepo** | workspaces, lerna, nx, pnpm-workspaces | Cross-package integration, shared dependencies | Jest workspaces, Lerna |
### Test Anti-Pattern Detection
| Category | Anti-Pattern | Detection | Severity |
|----------|--------------|-----------|----------|
| **Empty Tests** | | | |
| - No assertion | `it('test', () => {})` | Body analysis | CRITICAL |
| - Only setup | `it('test', () => { const x = 1; })` | No expect/assert | ERROR |
| - Commented out | `it.skip('test', ...)` | Skip detection | WARNING |
| **Weak Assertions** | | | |
| - toBeDefined only | `expect(x).toBeDefined()` | Pattern match | WARNING |
| - toBeTruthy only | `expect(x).toBeTruthy()` | Pattern match | WARNING |
| - Snapshot abuse | Many `.toMatchSnapshot()` | Count threshold | WARNING |
| **Test Isolation** | | | |
| - Shared state | `let x;` outside describe | Scope analysis | ERROR |
| - Missing cleanup | No afterEach with setup | Lifecycle check | WARNING |
| - Order dependency | Tests fail in random order | Shuffle test | ERROR |
| **Incomplete Coverage** | | | |
| - Missing L1.2 | No negative path test | Pattern scan | ERROR |
| - Missing L1.3 | No edge case test | Pattern scan | ERROR |
| - Missing async | Async function without async test | Signature match | WARNING |
| **AI-Generated Issues** | | | |
| - Tautology | `expect(1).toBe(1)` | Literal detection | CRITICAL |
| - Testing mock | `expect(mockFn).toHaveBeenCalled()` only | Mock-only test | ERROR |
| - Copy-paste | Identical test bodies | Similarity check | WARNING |
| - Wrong target | Test doesn't import subject | Import analysis | CRITICAL |
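A regex-level sketch of a few of these checks (empty tests, tautologies, weak assertions); real detection would use the test framework's AST, so treat this as illustrative only:
```javascript
// Sketch: flag empty tests, literal tautologies, and weak assertions in a test file (regex-level only).
function scanTestAntiPatterns(testSource) {
  const issues = [];

  // Empty tests: it('...', () => {}) with an empty body
  for (const m of testSource.matchAll(/it\(\s*['"`][^'"`]+['"`]\s*,\s*(?:async\s*)?\(\)\s*=>\s*\{\s*\}\s*\)/g)) {
    issues.push({ antiPattern: "empty_tests", severity: "CRITICAL", snippet: m[0] });
  }
  // Tautologies: expect(<literal>).toBe(<same literal>)
  for (const m of testSource.matchAll(/expect\(\s*(\d+|'[^']*'|"[^"]*"|true|false|null)\s*\)\s*\.toBe\(\s*\1\s*\)/g)) {
    issues.push({ antiPattern: "ai_generated_issues", severity: "CRITICAL", snippet: m[0] });
  }
  // Weak assertions: the file contains only toBeDefined/toBeTruthy checks
  const assertions = testSource.match(/\.to[A-Z]\w*\(/g) ?? [];
  const weak = assertions.filter((a) => /toBeDefined|toBeTruthy/.test(a));
  if (assertions.length > 0 && weak.length === assertions.length) {
    issues.push({ antiPattern: "weak_assertions", severity: "WARNING", detail: "only toBeDefined/toBeTruthy assertions" });
  }
  return issues;
}
```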
### Layer Completeness & Quality Metrics
#### Completeness Requirements
| Layer | Requirement | Threshold |
|-------|-------------|-----------|
| L1.1 | Happy path for each exported function | 100% |
| L1.2 | Negative path for functions with validation | 80% |
| L1.3 | Edge cases (null, empty, boundary) | 60% |
| L1.4 | State change tests for stateful code | 80% |
| L1.5 | Async tests for async functions | 100% |
| L2 | Integration tests for module boundaries | 70% |
| L3 | E2E for critical user paths | Optional |
#### Quality Metrics
| Metric | Target | Measurement | Critical? |
|--------|--------|-------------|-----------|
| Line Coverage | ≥ 80% | `jest --coverage` | ✅ Yes |
| Branch Coverage | ≥ 70% | `jest --coverage` | Yes |
| Function Coverage | ≥ 90% | `jest --coverage` | ✅ Yes |
| Assertion Density | ≥ 2 per test | Assert count / test count | Yes |
| Test/Code Ratio | ≥ 1:1 | Test lines / source lines | Yes |
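These coverage targets can be enforced directly by the test runner; a minimal Jest configuration sketch (assuming Jest is the detected framework):
```javascript
// Sketch: jest.config.js enforcing the coverage targets above (assumes Jest).
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      lines: 80,     // Line Coverage ≥ 80%
      branches: 70,  // Branch Coverage ≥ 70%
      functions: 90, // Function Coverage ≥ 90%
    },
  },
};
```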
#### Gate Decisions
**IMPL-001.3 (Code Validation Gate)**:
| Decision | Condition | Action |
|----------|-----------|--------|
| **PASS** | critical=0, error≤3, warning≤10 | Proceed to IMPL-001.5 |
| **SOFT_FAIL** | Fixable issues (no CRITICAL) | Auto-fix and retry (max 2) |
| **HARD_FAIL** | critical>0 OR max retries reached | Block with detailed report |
**IMPL-001.5 (Test Quality Gate)**:
| Decision | Condition | Action |
|----------|-----------|--------|
| **PASS** | All thresholds met, no CRITICAL | Proceed to IMPL-002 |
| **SOFT_FAIL** | Minor gaps, no CRITICAL | Generate improvement list, retry |
| **HARD_FAIL** | CRITICAL issues OR max retries | Block with report |
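A compact sketch of the IMPL-001.3 decision logic; the IMPL-001.5 gate follows the same pattern with its own thresholds. Issue counts and retry state are assumed inputs:
```javascript
// Sketch: IMPL-001.3 code validation gate decision (inputs are assumed counts).
function codeValidationGate({ critical, error, warning, retries, maxRetries = 2 }) {
  if (critical === 0 && error <= 3 && warning <= 10) return "PASS"; // proceed to IMPL-001.5
  if (critical === 0 && retries < maxRetries) return "SOFT_FAIL";   // auto-fix and retry
  return "HARD_FAIL";                                               // block with detailed report
}
```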
---
## 1. Input & Execution
### 1.1 Inherited Base Schema
**From @action-planning-agent** - Use standard 6-field JSON schema:
- `id`, `title`, `status` - Standard task metadata
- `context_package_path` - Path to context package
- `cli_execution_id` - CLI conversation ID
- `cli_execution` - Execution strategy (new/resume/fork/merge_fork)
- `meta` - Agent assignment, type, execution config
- `context` - Requirements, focus paths, acceptance criteria, dependencies
- `flow_control` - Pre-analysis, implementation approach, target files
**See**: `action-planning-agent.md` sections 2.1-2.3 for complete base schema specifications.
### 1.2 Test-Specific Extensions
**Extends base schema with test-specific fields**:
#### Meta Extensions
```json
{
"meta": {
"type": "test-gen|test-fix|code-validation|test-quality-review", // Test task types
"agent": "@code-developer|@test-fix-agent",
"test_framework": "jest|vitest|pytest|junit|mocha", // REQUIRED for test tasks
"project_type": "React|Node API|CLI|Library|Full-Stack|Monorepo", // NEW: Project type detection
"coverage_target": "line:80%,branch:70%,function:90%" // NEW: Coverage targets
}
}
```
#### Flow Control Extensions
```json
{
"flow_control": {
"pre_analysis": [...], // From base schema
"implementation_approach": [...], // From base schema
"target_files": [...], // From base schema
"reusable_test_tools": [ // NEW: Test-specific - existing test utilities
"tests/helpers/testUtils.ts",
"tests/fixtures/mockData.ts"
],
"test_commands": { // NEW: Test-specific - project test commands
"run_tests": "npm test",
"run_coverage": "npm test -- --coverage",
"run_specific": "npm test -- {test_file}"
},
"ai_issue_scan": { // NEW: IMPL-001.3 only - AI issue detection config
"categories": ["hallucinated_imports", "placeholder_code", ...],
"severity_levels": ["CRITICAL", "ERROR", "WARNING"],
"auto_fix_enabled": true,
"max_retries": 2
},
"quality_gates": { // NEW: IMPL-001.5 only - Test quality thresholds
"layer_completeness": { "L1.1": "100%", "L1.2": "80%", ... },
"anti_patterns": ["empty_tests", "weak_assertions", ...],
"coverage_thresholds": { "line": "80%", "branch": "70%", ... }
}
}
}
```
### 1.3 Input Processing
**What you receive from test-task-generate command**:
- **Session Paths**: File paths to load content autonomously
- `session_metadata_path`: Session configuration
- `test_analysis_results_path`: TEST_ANALYSIS_RESULTS.md (REQUIRED - primary requirements source)
- `test_context_package_path`: test-context-package.json
- `context_package_path`: context-package.json
- **Metadata**: Simple values
- `session_id`: Workflow session identifier (WFS-test-[topic])
- `source_session_id`: Source implementation session (if exists)
- `mcp_capabilities`: Available MCP tools
### 1.4 Execution Flow
#### Phase 1: Context Loading & Assembly
```
1. Load TEST_ANALYSIS_RESULTS.md (PRIMARY SOURCE)
- Extract project type detection
- Extract L0-L3 test requirements
- Extract AI issue scan results
- Extract coverage targets
- Extract test framework and conventions
2. Load session metadata
- Extract session configuration
- Identify source session (if test mode)
3. Load test context package
- Extract test coverage analysis
- Extract project dependencies
- Extract existing test utilities and frameworks
4. Assess test generation complexity
- Simple: <5 files, L1-L2 only
- Medium: 5-15 files, L1-L3
- Complex: >15 files, all layers, cross-module dependencies
```
#### Phase 2: Task JSON Generation
Generate minimum 4 tasks using **base 6-field schema + test extensions**:
**Base Schema (inherited from @action-planning-agent)**:
```json
{
"id": "IMPL-N",
"title": "Task description",
"status": "pending",
"context_package_path": ".workflow/active/WFS-test-{session}/.process/context-package.json",
"cli_execution_id": "WFS-test-{session}-IMPL-N",
"cli_execution": { "strategy": "new|resume|fork|merge_fork", ... },
"meta": { ... }, // See section 1.2 for test extensions
"context": { ... }, // See action-planning-agent.md section 2.2
"flow_control": { ... } // See section 1.2 for test extensions
}
```
**Task 1: IMPL-001.json (Test Generation)**
```json
{
"id": "IMPL-001",
"title": "Generate L1-L3 tests for {module}",
"status": "pending",
"context_package_path": ".workflow/active/WFS-test-{session}/.process/test-context-package.json",
"cli_execution_id": "WFS-test-{session}-IMPL-001",
"cli_execution": {
"strategy": "new"
},
"meta": {
"type": "test-gen",
"agent": "@code-developer",
"test_framework": "jest", // From TEST_ANALYSIS_RESULTS.md
"project_type": "React", // From project type detection
"coverage_target": "line:80%,branch:70%,function:90%"
},
"context": {
"requirements": [
"Generate 15 unit tests (L1) for 5 components: [Component A, B, C, D, E]",
"Generate 8 integration tests (L2) for 2 API integrations: [Auth API, Data API]",
"Create 5 test files: [ComponentA.test.tsx, ComponentB.test.tsx, ...]"
],
"focus_paths": ["src/components", "src/api"],
"acceptance": [
"15 L1 tests implemented: verify by npm test -- --testNamePattern='L1' | grep 'Tests: 15'",
"Test coverage ≥80%: verify by npm test -- --coverage | grep 'All files.*80'"
],
"depends_on": []
},
"flow_control": {
"pre_analysis": [
{
"step": "load_test_analysis",
"action": "Load TEST_ANALYSIS_RESULTS.md",
"commands": ["Read('.workflow/active/WFS-test-{session}/.process/TEST_ANALYSIS_RESULTS.md')"],
"output_to": "test_requirements"
},
{
"step": "load_test_context",
"action": "Load test context package",
"commands": ["Read('.workflow/active/WFS-test-{session}/.process/test-context-package.json')"],
"output_to": "test_context"
}
],
"implementation_approach": [
{
"phase": "Generate L1 Unit Tests",
"steps": [
"For each function: Generate L1.1 (happy path), L1.2 (negative), L1.3 (edge cases), L1.4 (state), L1.5 (async)"
],
"test_patterns": "render(), screen.getByRole(), userEvent.click(), waitFor()"
},
{
"phase": "Generate L2 Integration Tests",
"steps": [
"Generate L2.1 (module wiring), L2.2 (API contracts), L2.5 (failure modes)"
],
"test_patterns": "supertest(app), expect(res.status), expect(res.body)"
}
],
"target_files": [
"tests/components/ComponentA.test.tsx",
"tests/components/ComponentB.test.tsx",
"tests/api/auth.integration.test.ts"
],
"reusable_test_tools": [
"tests/helpers/renderWithProviders.tsx",
"tests/fixtures/mockData.ts"
],
"test_commands": {
"run_tests": "npm test",
"run_coverage": "npm test -- --coverage"
}
}
}
```
**Task 2: IMPL-001.3-validation.json (Code Validation Gate)**
```json
{
"id": "IMPL-001.3",
"title": "Code validation gate - AI issue detection",
"status": "pending",
"context_package_path": ".workflow/active/WFS-test-{session}/.process/test-context-package.json",
"cli_execution_id": "WFS-test-{session}-IMPL-001.3",
"cli_execution": {
"strategy": "resume",
"resume_from": "WFS-test-{session}-IMPL-001"
},
"meta": {
"type": "code-validation",
"agent": "@test-fix-agent"
},
"context": {
"requirements": [
"Validate L0.1-L0.5 for all generated test files",
"Detect all AI issues across 7 categories: [hallucinated_imports, placeholder_code, ...]",
"Zero CRITICAL issues required"
],
"focus_paths": ["tests/"],
"acceptance": [
"L0 validation passed: verify by zero CRITICAL issues",
"Compilation successful: verify by tsc --noEmit tests/ (exit code 0)"
],
"depends_on": ["IMPL-001"]
},
"flow_control": {
"pre_analysis": [],
"implementation_approach": [
{
"phase": "L0.1 Compilation Check",
"validation": "tsc --noEmit tests/"
},
{
"phase": "L0.2 Import Validity",
"validation": "Check all imports against package.json and node_modules"
},
{
"phase": "L0.5 AI Issue Detection",
"validation": "Scan for all 7 AI issue categories with severity levels"
}
],
"target_files": [],
"ai_issue_scan": {
"categories": [
"hallucinated_imports",
"placeholder_code",
"mock_leakage",
"type_abuse",
"naming_issues",
"control_flow",
"resource_leaks",
"security_issues"
],
"severity_levels": ["CRITICAL", "ERROR", "WARNING"],
"auto_fix_enabled": true,
"max_retries": 2,
"thresholds": {
"critical": 0,
"error": 3,
"warning": 10
}
}
}
}
```
**Task 3: IMPL-001.5-review.json (Test Quality Gate)**
```json
{
"id": "IMPL-001.5",
"title": "Test quality gate - anti-patterns and coverage",
"status": "pending",
"context_package_path": ".workflow/active/WFS-test-{session}/.process/test-context-package.json",
"cli_execution_id": "WFS-test-{session}-IMPL-001.5",
"cli_execution": {
"strategy": "resume",
"resume_from": "WFS-test-{session}-IMPL-001.3"
},
"meta": {
"type": "test-quality-review",
"agent": "@test-fix-agent"
},
"context": {
"requirements": [
"Validate layer completeness: L1.1 100%, L1.2 80%, L1.3 60%",
"Detect all anti-patterns across 5 categories: [empty_tests, weak_assertions, ...]",
"Verify coverage: line ≥80%, branch ≥70%, function ≥90%"
],
"focus_paths": ["tests/"],
"acceptance": [
"Coverage ≥80%: verify by npm test -- --coverage | grep 'All files.*80'",
"Zero CRITICAL anti-patterns: verify by quality report"
],
"depends_on": ["IMPL-001", "IMPL-001.3"]
},
"flow_control": {
"pre_analysis": [],
"implementation_approach": [
{
"phase": "Static Analysis",
"validation": "Lint test files, check anti-patterns"
},
{
"phase": "Coverage Analysis",
"validation": "Calculate coverage percentage, identify gaps"
},
{
"phase": "Quality Metrics",
"validation": "Verify thresholds, layer completeness"
}
],
"target_files": [],
"quality_gates": {
"layer_completeness": {
"L1.1": "100%",
"L1.2": "80%",
"L1.3": "60%",
"L1.4": "80%",
"L1.5": "100%",
"L2": "70%"
},
"anti_patterns": [
"empty_tests",
"weak_assertions",
"test_isolation",
"incomplete_coverage",
"ai_generated_issues"
],
"coverage_thresholds": {
"line": "80%",
"branch": "70%",
"function": "90%"
}
}
}
}
```
**Task 4: IMPL-002.json (Test Execution & Fix)**
```json
{
"id": "IMPL-002",
"title": "Test execution and fix cycle",
"status": "pending",
"context_package_path": ".workflow/active/WFS-test-{session}/.process/test-context-package.json",
"cli_execution_id": "WFS-test-{session}-IMPL-002",
"cli_execution": {
"strategy": "resume",
"resume_from": "WFS-test-{session}-IMPL-001.5"
},
"meta": {
"type": "test-fix",
"agent": "@test-fix-agent"
},
"context": {
"requirements": [
"Execute all tests and fix failures until pass rate ≥95%",
"Maximum 5 fix iterations",
"Use Gemini for diagnosis, agent for fixes"
],
"focus_paths": ["tests/", "src/"],
"acceptance": [
"All tests pass: verify by npm test (exit code 0)",
"Pass rate ≥95%: verify by test output"
],
"depends_on": ["IMPL-001", "IMPL-001.3", "IMPL-001.5"]
},
"flow_control": {
"pre_analysis": [],
"implementation_approach": [
{
"phase": "Initial Test Execution",
"command": "npm test"
},
{
"phase": "Iterative Fix Cycle",
"steps": [
"Diagnose failures with Gemini",
"Apply fixes via agent or CLI",
"Re-run tests",
"Repeat until pass rate ≥95% or max iterations"
],
"max_iterations": 5
}
],
"target_files": [],
"test_fix_cycle": {
"max_iterations": 5,
"diagnosis_tool": "gemini",
"fix_mode": "agent",
"exit_conditions": ["all_tests_pass", "max_iterations_reached"]
}
}
}
```
#### Phase 3: Document Generation
```
1. Create IMPL_PLAN.md (test-specific variant)
- frontmatter: workflow_type="test_session", test_framework, coverage_targets
- Test Generation Phase: L1-L3 layer breakdown
- Quality Gates: IMPL-001.3 and IMPL-001.5 specifications
- Test-Fix Cycle: Iteration strategy with diagnosis and fix modes
- Source Session Context: If exists (from source_session_id)
2. Create TODO_LIST.md
- Hierarchical structure with test phase containers
- Links to task JSONs with status markers
- Test layer indicators (L0, L1, L2, L3)
- Quality gate indicators (validation, review)
```
---
## 2. Output Validation
### Task JSON Validation
**IMPL-001 Requirements**:
- All L1.1-L1.5 tests explicitly defined for each target function
- Project type template correctly applied
- Reusable test tools and test commands included
- Implementation approach includes all 3 phases (L1, L2, L3)
**IMPL-001.3 Requirements**:
- All 8 AI issue categories included
- Severity levels properly assigned
- Auto-fix logic for ERROR and below
- Acceptance criteria references zero CRITICAL rule
**IMPL-001.5 Requirements**:
- Layer completeness thresholds: L1.1 100%, L1.2 80%, L1.3 60%
- All 5 anti-pattern categories included
- Coverage metrics: Line 80%, Branch 70%, Function 90%
- Acceptance criteria references all thresholds
**IMPL-002 Requirements**:
- Depends on: IMPL-001, IMPL-001.3, IMPL-001.5 (sequential)
- Max iterations: 5
- Diagnosis tool: Gemini
- Exit conditions: all_tests_pass OR max_iterations_reached
### Quality Standards
Hard Constraints:
- Task count: minimum 4, maximum 18
- All requirements quantified from TEST_ANALYSIS_RESULTS.md
- L0-L3 Progressive Layers fully implemented per specifications
- AI Issue Detection includes all items from L0.5 checklist
- Project Type Template correctly applied
- Test Anti-Patterns validation rules implemented
- Layer Completeness Thresholds met
- Quality Metrics targets: Line 80%, Branch 70%, Function 90%
---
## 3. Success Criteria
- All test planning documents generated successfully
- Task count reported: minimum 4
- Test framework correctly detected and reported
- Coverage targets clearly specified: L0 zero errors, L1 80%+, L2 70%+
- L0-L3 layers explicitly defined in IMPL-001 task
- AI issue detection configured in IMPL-001.3
- Quality gates with measurable thresholds in IMPL-001.5
- Source session status reported (if applicable)

View File

@@ -28,6 +28,8 @@ You are a test context discovery specialist focused on gathering test coverage i
## Tool Arsenal
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
### 1. Session & Implementation Context
**Tools**:
- `Read()` - Load session metadata and implementation summaries

View File

@@ -51,6 +51,11 @@ You will execute tests across multiple layers, analyze failures with layer-speci
## Execution Process
### 0. Task Status: Mark In Progress
```bash
jq --arg ts "$(date -Iseconds)" '.status="in_progress" | .status_history += [{"from":.status,"to":"in_progress","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
```
### Flow Control Execution
When task JSON contains `flow_control` field, execute preparation and implementation steps systematically.
@@ -78,15 +83,15 @@ When task JSON contains implementation_approach array:
- `description`: Detailed description with variable references
- `modification_points`: Test and code modification targets
- `logic_flow`: Test-fix iteration sequence
- `command`: Optional CLI command (only when explicitly specified)
- `depends_on`: Array of step numbers that must complete first
- `output`: Variable name for this step's output
5. **Execution Mode Selection**:
- IF `command` field exists → Execute CLI command via Bash tool
- ELSE (no command) → Agent direct execution:
- Parse `modification_points` as files to modify
- Follow `logic_flow` for test-fix iteration
- Use test_commands from flow_control for test execution
- Based on `meta.execution_config.method`:
- `"cli"` → Build CLI command via buildCliHandoffPrompt() and execute via Bash tool
- `"agent"` (default) → Agent direct execution:
- Parse `modification_points` as files to modify
- Follow `logic_flow` for test-fix iteration
- Use test_commands from flow_control for test execution
### 1. Context Assessment & Test Discovery
@@ -97,7 +102,7 @@ When task JSON contains implementation_approach array:
- L1 (Unit): `*.test.*`, `*.spec.*` in `__tests__/`, `tests/unit/`
- L2 (Integration): `tests/integration/`, `*.integration.test.*`
- L3 (E2E): `tests/e2e/`, `*.e2e.test.*`, `cypress/`, `playwright/`
- **context-package.json** (CCW Workflow): Use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- **context-package.json** : Use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- Identify test commands from project configuration
```bash
@@ -329,9 +334,17 @@ When generating test results for orchestrator (saved to `.process/test-results.j
- Pass rate >= 95% + any "high" or "medium" criticality failures → ⚠️ NEEDS FIX (continue iteration)
- Pass rate < 95% → ❌ FAILED (continue iteration or abort)
## Task Status Update
**Upon task completion**, update task JSON status:
```bash
jq --arg ts "$(date -Iseconds)" '.status="completed" | .status_history += [{"from":"in_progress","to":"completed","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
```
## Important Reminders
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Execute tests first** - Understand what's failing before fixing
- **Diagnose thoroughly** - Find root cause, not just symptoms
- **Fix minimally** - Change only what's needed to pass tests

View File

@@ -284,6 +284,8 @@ You execute 6 distinct task types organized into 3 patterns. Each task includes
### ALWAYS
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
**W3C Format Compliance**: ✅ Include $schema in all token files | ✅ Use $type metadata for all tokens | ✅ Use $value wrapper for color (light/dark), duration, easing | ✅ Validate token structure against W3C spec
**Pattern Recognition**: ✅ Identify pattern from [TASK_TYPE_IDENTIFIER] first | ✅ Apply pattern-specific execution rules | ✅ Follow autonomy level

View File

@@ -124,6 +124,7 @@ Before completing any task, verify:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Verify resource/dependency existence before referencing
- Execute tasks systematically and incrementally
- Test and validate work thoroughly

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,364 @@
---
name: ccw-debug
description: Debug coordinator - analyze issue, select debug strategy, execute debug workflow in main process
argument-hint: "[--mode cli|debug|test|bidirectional] [--yes|-y] \"bug description\""
allowed-tools: Skill(*), TodoWrite(*), AskUserQuestion(*), Read(*), Bash(*)
---
# CCW-Debug Command - Debug Coordinator
Debug orchestrator: issue analysis → strategy selection → debug execution.
## Core Concept: Debug Units
**Definition**: Debug commands grouped into logical units for different root cause strategies.
**Debug Units**:
| Unit Type | Pattern | Example |
|-----------|---------|---------|
| **Quick Diagnosis** | CLI analysis only | cli → recommendation |
| **Hypothesis-Driven** | Debug exploration | debug-with-file → apply fix |
| **Test-Driven** | Test generation/iteration | test-fix-gen → test-cycle-execute |
| **Convergence** | Parallel debug + test | debug + test (parallel) |
**Atomic Rules**:
1. CLI mode: Analysis only, recommendation for user action
2. Debug/Test modes: Full cycle (analysis → fix → validate)
3. Bidirectional mode: Parallel execution, merge findings
## Execution Model
**Synchronous (Main Process)**: Debug commands execute via Skill, blocking until complete.
```
User Input → Analyze Issue → Select Strategy → [Confirm] → Execute Debug
Skill (blocking)
Update TodoWrite
Generate Fix/Report
```
## 5-Phase Workflow
### Phase 1: Analyze Issue
**Input** → Extract (description, symptoms) → Assess (error_type, clarity, complexity, scope) → **Analysis**
| Field | Values |
|-------|--------|
| error_type | syntax \| logic \| async \| integration \| unknown |
| clarity | 0-3 (≥2 = clear) |
| complexity | low \| medium \| high |
| scope | single-module \| cross-module \| system |
#### Mode Detection (Priority Order)
```
Input Keywords → Mode
─────────────────────────────────────────────────────────
quick|fast|immediate|recommendation|suggest → cli
test|fail|coverage|pass → test
multiple|system|distributed|concurrent → bidirectional
(default) → debug
```
**Output**: `IssueType: [type] | Clarity: [clarity]/3 | Complexity: [complexity] | RecommendedMode: [mode]`
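A minimal sketch of the keyword-based mode detection; the regex patterns illustrate the priority table above and are not a fixed API:
```javascript
// Sketch: keyword-driven debug mode detection, mirroring the priority order above.
function detectDebugMode(input) {
  if (/\b(quick|fast|immediate|recommendation|suggest)\b/i.test(input)) return "cli";
  if (/\b(test|fail|coverage|pass)\b/i.test(input)) return "test";
  if (/\b(multiple|system|distributed|concurrent)\b/i.test(input)) return "bidirectional";
  return "debug"; // default: full hypothesis-driven workflow
}
```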
---
### Phase 1.5: Issue Clarification (if clarity < 2)
```
Analysis → Check clarity ≥ 2?
YES → Continue to Phase 2
NO → Ask Questions → Update Analysis
```
**Questions Asked**: Error Symptoms, When It Occurs, Affected Components, Reproducibility
---
### Phase 2: Select Debug Strategy & Build Command Chain
```
Analysis → Detect Mode (keywords) → Build Command Chain → Debug Workflow
```
#### Command Chain Mapping
| Mode | Command Chain | Execution |
|------|---------------|-----------|
| **cli** | ccw cli --mode analysis --rule analysis-diagnose-bug-root-cause | Analysis only |
| **debug** | debug-with-file → test-fix-gen → test-cycle-execute | Sequential |
| **test** | test-fix-gen → test-cycle-execute | Sequential |
| **bidirectional** | (debug-with-file ∥ test-fix-gen ∥ test-cycle-execute) → merge-findings | Parallel → Merge |
**Note**: `∥` = parallel execution
**Output**: `Mode: [mode] | Strategy: [strategy] | Commands: [1. /cmd1 2. /cmd2]`
---
### Phase 3: User Confirmation
```
Debug Chain → Show Strategy → Ask User → User Decision:
- ✓ Confirm → Continue to Phase 4
- ⚙ Change Mode → Select Different Mode (back to Phase 2)
- ✗ Cancel → Abort
```
---
### Phase 4: Setup TODO Tracking & Status File
```
Debug Chain → Create Session Dir → Initialize Tracking → Tracking State
```
**Session Structure**:
```
Session ID: CCWD-{issue-slug}-{date}
Session Dir: .workflow/.ccw-debug/{session_id}/
TodoWrite:
CCWD:{mode}: [1/n] /command1 [in_progress]
CCWD:{mode}: [2/n] /command2 [pending]
...
status.json:
{
"session_id": "CCWD-...",
"mode": "debug|cli|test|bidirectional",
"status": "running",
"parallel_execution": false|true,
"issue": { description, error_type, clarity, complexity },
"command_chain": [...],
"findings": { debug, test, merged }
}
```
**Output**:
- TODO: `-> CCWD:debug: [1/3] /workflow:debug-with-file | ...`
- Status File: `.workflow/.ccw-debug/{session_id}/status.json`
---
### Phase 5: Execute Debug Chain
#### For Bidirectional Mode (Parallel Execution)
```
Start Commands (parallel) → Execute debug-with-file ∥ test-fix-gen ∥ test-cycle-execute
Collect Results → Merge Findings
Update status.json (findings.merged)
Mark completed
```
#### For Sequential Modes (cli, debug, test)
```
Start Command → Update status (running) → Execute via Skill → Result
CLI Mode? → YES → Ask Escalation → Escalate or Done
→ NO → Continue
Update status (completed) → Next Command
Error? → YES → Ask Action (Retry/Skip/Abort)
→ NO → Continue
```
#### Error Handling Pattern
```
Command Error → Update status (failed) → Ask User:
- Retry → Re-execute (same index)
- Skip → Continue next command
- Abort → Stop execution
```
#### CLI Mode Escalation
```
CLI Result → Findings.confidence?
High → Present findings → User decides:
• Done (end here)
• Escalate to debug mode
• Escalate to test mode
Low → Recommend escalation
```
---
## Execution Flow Summary
```
User Input
|
Phase 1: Analyze Issue
|-- Extract: description, error_type, clarity, complexity, scope
+-- If clarity < 2 -> Phase 1.5: Clarify Issue
|
Phase 2: Select Debug Strategy & Build Chain
|-- Detect mode: cli | debug | test | bidirectional
|-- Build command chain based on mode
|-- Parallel execution for bidirectional
+-- Consider escalation points (cli → debug/test)
|
Phase 3: User Confirmation (optional)
|-- Show debug strategy
+-- Allow mode change
|
Phase 4: Setup TODO Tracking & Status File
|-- Create todos with CCWD prefix
+-- Initialize .workflow/.ccw-debug/{session_id}/status.json
|
Phase 5: Execute Debug Chain
|-- For sequential modes: execute commands in order
|-- For bidirectional: execute debug + test in parallel
|-- CLI mode: present findings, ask for escalation
|-- Merge findings (bidirectional mode)
+-- Update status and TODO
```
---
## Debug Pipeline Examples
| Issue | Mode | Pipeline |
|-------|------|----------|
| "Login timeout error (quick)" | cli | ccw cli → analysis → (escalate or done) |
| "User login fails intermittently" | debug | debug-with-file → test-gen → test-cycle |
| "Authentication tests failing" | test | test-fix-gen → test-cycle-execute |
| "Multi-module auth + db sync issue" | bidirectional | (debug ∥ test) → merge findings |
**Legend**: `∥` = parallel execution
---
## State Management
### Dual Tracking System
**1. TodoWrite-Based Tracking** (UI Display):
```
// Initial state (debug mode)
CCWD:debug: [1/3] /workflow:debug-with-file [in_progress]
CCWD:debug: [2/3] /workflow:test-fix-gen [pending]
CCWD:debug: [3/3] /workflow:test-cycle-execute [pending]
// CLI mode: only 1 command
CCWD:cli: [1/1] ccw cli --mode analysis [in_progress]
// Bidirectional mode
CCWD:bidirectional: [1/4] /workflow:debug-with-file [in_progress] ∥
CCWD:bidirectional: [2/4] /workflow:test-fix-gen [in_progress] ∥
CCWD:bidirectional: [3/4] /workflow:test-cycle-execute [in_progress]
CCWD:bidirectional: [4/4] merge-findings [pending]
```
**2. Status.json Tracking**: Persistent state for debug monitoring.
**Location**: `.workflow/.ccw-debug/{session_id}/status.json`
**Structure**:
```json
{
"session_id": "CCWD-auth-timeout-2025-02-02",
"mode": "debug",
"status": "running|completed|failed",
"parallel_execution": false,
"created_at": "2025-02-02T10:00:00Z",
"updated_at": "2025-02-02T10:05:00Z",
"issue": {
"description": "User login timeout after 30 seconds",
"error_type": "async",
"clarity": 3,
"complexity": "medium"
},
"command_chain": [
{ "index": 0, "command": "/workflow:debug-with-file", "unit": "sequential", "status": "completed" },
{ "index": 1, "command": "/workflow:test-fix-gen", "unit": "sequential", "status": "in_progress" },
{ "index": 2, "command": "/workflow:test-cycle-execute", "unit": "sequential", "status": "pending" }
],
"current_index": 1,
"findings": {
"debug": { "root_cause": "...", "confidence": "high" },
"test": { "failure_pattern": "..." },
"merged": null
}
}
```
**Status Values**:
- `running`: Debug workflow in progress
- `completed`: Debug finished, fix applied
- `failed`: Debug aborted or unfixable
**Mode-Specific Fields**:
- `cli` mode: No findings field (recommendation-only)
- `debug`/`test`: Single finding source
- `bidirectional`: All three findings + merged result
---
## Key Design Principles
1. **Issue-Focused** - Diagnose root cause, not symptoms
2. **Mode-Driven** - 4 debug strategies for different issues
3. **Parallel Capability** - Bidirectional mode for complex systems
4. **Escalation Support** - CLI → debug/test mode progression
5. **Quick Diagnosis** - CLI mode for immediate recommendations
6. **TODO Tracking** - Use CCWD prefix to isolate debug todos
7. **Finding Convergence** - Merge parallel results for consensus
---
## Usage
```bash
# Auto-select mode
/ccw-debug "Login failed: token validation error"
# Explicit mode selection
/ccw-debug --mode cli "Quick diagnosis: API 500 error"
/ccw-debug --mode debug "User profile sync intermittent failure"
/ccw-debug --mode test "Permission check failing"
/ccw-debug --mode bidirectional "Multi-module auth + cache sync issue"
# Auto mode (skip confirmations)
/ccw-debug --yes "Production hotfix: database connection timeout"
# Resume or escalate from previous session
/ccw-debug --mode debug --source-session CCWD-login-timeout-2025-01-27
```
---
## Mode Selection Decision Tree
```
User calls: /ccw-debug "issue description"
├─ Keywords: "quick", "fast", "recommendation"
│ └─ Mode: CLI (2-5 min analysis, optional escalation)
├─ Keywords: "test", "fail", "coverage"
│ └─ Mode: Test (automated iteration, ≥95% pass)
├─ Keywords: "multiple", "system", "distributed"
│ └─ Mode: Bidirectional (parallel debug + test)
└─ Default → Debug (full hypothesis-driven workflow)
```

View File

@@ -0,0 +1,456 @@
---
name: ccw-plan
description: Planning coordinator - analyze requirements, select planning strategy, execute planning workflow in main process
argument-hint: "[--mode lite|multi-cli|full|plan-verify|replan|cli|issue|rapid-to-issue|brainstorm-with-file|analyze-with-file] [--yes|-y] \"task description\""
allowed-tools: Skill(*), TodoWrite(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*)
---
# CCW-Plan Command - Planning Coordinator
Planning orchestrator: requirement analysis → strategy selection → planning execution.
## Core Concept: Planning Units
**Definition**: Planning commands are grouped into logical units based on verification requirements and collaboration strategies.
**Planning Units**:
| Unit Type | Pattern | Example |
|-----------|---------|---------|
| **Quick Planning** | plan-cmd (no verify) | lite-plan |
| **Verified Planning** | plan-cmd → verify-cmd | plan → plan-verify |
| **Collaborative Planning** | multi-cli-plan (implicit verify) | multi-cli-plan |
| **With-File Planning** | brainstorm-with-file or analyze-with-file | brainstorm + plan options |
| **CLI-Assisted Planning** | ccw cli (analysis) → recommendations | quick analysis + decision |
| **Issue Workflow Planning** | plan → issue workflow (discover/queue/execute) | rapid-to-issue bridge |
**Atomic Rules**:
1. Lite mode: No verification (fast iteration)
2. Plan-verify mode: Mandatory quality gate
3. Multi-cli/Full mode: Optional verification (via --skip-verify flag)
4. With-File modes: Self-contained iteration with built-in post-completion options
5. CLI mode: Quick analysis, user-driven decisions
6. Issue modes: Planning integrated into issue workflow lifecycle
## Execution Model
**Synchronous (Main Process)**: Planning commands execute via Skill, blocking until complete.
```
User Input → Analyze Requirements → Select Strategy → [Confirm] → Execute Planning
Skill (blocking)
Update TodoWrite
Generate Artifacts
```
## 5-Phase Workflow
### Phase 1: Analyze Requirements
**Input** → Extract (goal, scope, constraints) → Assess (complexity, clarity, criticality) → **Analysis**
| Field | Values |
|-------|--------|
| complexity | low \| medium \| high |
| clarity | 0-3 (≥2 = clear) |
| criticality | normal \| high \| critical |
| scope | single-module \| cross-module \| system \| batch-issues |
**Output**: `Type: [task_type] | Goal: [goal] | Complexity: [complexity] | Clarity: [clarity]/3 | Criticality: [criticality]`
---
### Phase 1.5: Requirement Clarification (if clarity < 2)
```
Analysis → Check clarity ≥ 2?
YES → Continue to Phase 2
NO → Ask Questions → Update Analysis
```
**Questions Asked**: Goal (Create/Fix/Optimize/Analyze), Scope (Single file/Module/Cross-module/System), Constraints (Backward compat/Skip tests/Urgent hotfix)
---
### Phase 2: Select Planning Strategy & Build Command Chain
```
Analysis → Detect Mode (keywords) → Build Command Chain → Planning Workflow
```
#### Mode Detection (Priority Order)
```
Input Keywords → Mode
───────────────────────────────────────────────────────────────────────────────
quick|fast|immediate|recommendation|suggest → cli
issues?|batch|issue workflow|structured workflow|queue → issue
issue transition|rapid.*issue|plan.*issue|convert.*issue → rapid-to-issue
brainstorm|ideation|头脑风暴|创意|发散思维|multi-perspective → brainstorm-with-file
analyze.*document|explore.*concept|collaborative analysis → analyze-with-file
production|critical|payment|auth → plan-verify
adjust|modify|change plan → replan
uncertain|explore → full
complex|multiple module|integrate → multi-cli
(default) → lite
```
#### Command Chain Mapping
| Mode | Command Chain | Verification | Use Case |
|------|---------------|--------------|----------|
| **cli** | ccw cli --mode analysis --rule planning-* | None | Quick planning recommendation |
| **issue** | /issue:discover → /issue:plan → /issue:queue → /issue:execute | Optional | Batch issue planning & execution |
| **rapid-to-issue** | lite-plan → /issue:convert-to-plan → queue → execute | Optional | Quick planning → Issue workflow bridge |
| **brainstorm-with-file** | /workflow:brainstorm-with-file → (plan/issue options) | Self-contained | Multi-perspective ideation |
| **analyze-with-file** | /workflow:analyze-with-file → (plan/issue options) | Self-contained | Collaborative architecture analysis |
| **lite** | lite-plan | None | Fast simple planning |
| **multi-cli** | multi-cli-plan → [plan-verify] | Optional | Multi-model collaborative planning |
| **full** | brainstorm → plan → [plan-verify] | Optional | Comprehensive brainstorm + planning |
| **plan-verify** | plan → **plan-verify** | **Mandatory** | Production/critical features |
| **replan** | replan | None | Plan refinement/adjustment |
**Note**:
- `[ ]` = optional verification
- **bold** = mandatory quality gate
- With-File modes include built-in post-completion options to create plans/issues
**Output**: `Mode: [mode] | Strategy: [strategy] | Commands: [1. /cmd1 2. /cmd2]`
---
### Phase 3: User Confirmation
```
Planning Chain → Show Strategy → Ask User → User Decision:
- ✓ Confirm → Continue to Phase 4
- ⚙ Adjust → Change Mode (back to Phase 2)
- ✗ Cancel → Abort
```
---
### Phase 4: Setup TODO Tracking & Status File
```
Planning Chain → Create Session Dir → Initialize Tracking → Tracking State
```
**Session Structure**:
```
Session ID: CCWP-{goal-slug}-{date}
Session Dir: .workflow/.ccw-plan/{session_id}/
TodoWrite:
  CCWP:{mode}: [1/n] /command1 [in_progress]
  CCWP:{mode}: [2/n] /command2 [pending]
  ...
status.json:
  {
    "session_id": "CCWP-...",
    "mode": "plan-verify",
    "status": "running",
    "command_chain": [...],
    "quality_gate": "pending"   // plan-verify mode only
  }
```
**Output**:
- TODO: `-> CCWP:plan-verify: [1/2] /workflow:plan | ...`
- Status File: `.workflow/.ccw-plan/{session_id}/status.json`
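A minimal sketch of how the session tracking could be initialized from the command chain. The function name is illustrative; persisting the object would use the same pseudocode tool calls (`Bash`, `Write`) shown elsewhere in these command documents.

```javascript
// Illustrative sketch: initialize CCWP session tracking for a planning chain.
function setupPlanningSession(mode, goalSlug, commandChain) {
  const sessionId = `CCWP-${goalSlug}-${new Date().toISOString().slice(0, 10)}`;
  const sessionDir = `.workflow/.ccw-plan/${sessionId}`;
  const status = {
    session_id: sessionId,
    mode,
    status: 'running',
    command_chain: commandChain.map((cmd, index) => ({
      index,
      command: cmd,
      status: index === 0 ? 'running' : 'pending'
    })),
    current_index: 0,
    ...(mode === 'plan-verify' ? { quality_gate: 'pending' } : {})
  };
  // Persisting: Write(`${sessionDir}/status.json`, JSON.stringify(status, null, 2))
  return { sessionId, sessionDir, status };
}
```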
---
### Phase 5: Execute Planning Chain
```
Start Command → Update status (running) → Execute via Skill → Result
```
#### For Plan-Verify Mode (Quality Gate)
```
Quality Gate → PASS → Mark completed → Next command
     ↓ FAIL (plan-verify mode)
Ask User → Refine:   replan + re-verify
         → Override: continue anyway
         → Abort:    stop planning
```
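A minimal sketch of the gate decision, assuming the verify step reports PASS/FAIL and the Refine/Override/Abort prompt returns the user's choice. Function and field names are illustrative.

```javascript
// Illustrative sketch: apply the plan-verify quality gate to the session status object.
function applyQualityGate(status, verifyResult, userChoice) {
  status.quality_gate = verifyResult;                        // 'PASS' | 'FAIL'
  if (verifyResult === 'PASS') return 'continue';            // mark completed, run next command
  if (userChoice === 'Refine') return 'replan-and-reverify'; // loop: replan, then verify again
  if (userChoice === 'Override') return 'continue';          // proceed despite FAIL
  status.status = 'failed';
  return 'abort';
}
```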
#### Error Handling Pattern
```
Command Error → Update status (failed) → Ask User:
- Retry → Re-execute (same index)
- Skip → Continue next command
- Abort → Stop execution
```
---
## Planning Pipeline Examples
| Input | Mode | Pipeline | Use Case |
|-------|------|----------|----------|
| "Quick: should we use OAuth2?" | cli | ccw cli --mode analysis → recommendation | Immediate planning advice |
| "Plan user login system" | lite | lite-plan | Fast simple planning |
| "Implement OAuth2 auth" | multi-cli | multi-cli-plan → [plan-verify] | Multi-model collaborative planning |
| "Design notification system" | full | brainstorm → plan → [plan-verify] | Comprehensive brainstorm + planning |
| "Payment processing (prod)" | plan-verify | plan → **plan-verify** | Production critical (mandatory gate) |
| "头脑风暴: 用户通知系统重新设计" | brainstorm-with-file | brainstorm-with-file → (plan/issue options) | Multi-perspective ideation |
| "协作分析: 认证架构设计决策" | analyze-with-file | analyze-with-file → (plan/issue options) | Collaborative analysis |
| "Batch plan: handle 10 pending issues" | issue | /issue:discover → plan → queue → execute | Batch issue planning |
| "Plan and create issues" | rapid-to-issue | lite-plan → convert-to-plan → queue → execute | Quick plan → Issue workflow |
| "Update existing plan" | replan | replan | Plan refinement/adjustment |
**Legend**:
- `[ ]` = optional verification
- **bold** = mandatory quality gate
- **With-File modes** include built-in post-completion options to create plans/issues
---
## State Management
### Dual Tracking System
**1. TodoWrite-Based Tracking** (UI Display):
```
// Plan-verify mode (mandatory quality gate)
CCWP:plan-verify: [1/2] /workflow:plan [in_progress]
CCWP:plan-verify: [2/2] /workflow:plan-verify [pending]
// CLI mode (quick recommendations)
CCWP:cli: [1/1] ccw cli --mode analysis [in_progress]
// Issue mode (batch planning)
CCWP:issue: [1/4] /issue:discover [in_progress]
CCWP:issue: [2/4] /issue:plan [pending]
CCWP:issue: [3/4] /issue:queue [pending]
CCWP:issue: [4/4] /issue:execute [pending]
// Rapid-to-issue mode (planning → issue bridge)
CCWP:rapid-to-issue: [1/4] /workflow:lite-plan [in_progress]
CCWP:rapid-to-issue: [2/4] /issue:convert-to-plan [pending]
CCWP:rapid-to-issue: [3/4] /issue:queue [pending]
CCWP:rapid-to-issue: [4/4] /issue:execute [pending]
// Brainstorm-with-file mode (self-contained)
CCWP:brainstorm-with-file: [1/1] /workflow:brainstorm-with-file [in_progress]
// Analyze-with-file mode (self-contained)
CCWP:analyze-with-file: [1/1] /workflow:analyze-with-file [in_progress]
// Lite mode (fast simple planning)
CCWP:lite: [1/1] /workflow:lite-plan [in_progress]
// Multi-CLI mode (collaborative planning)
CCWP:multi-cli: [1/1] /workflow:multi-cli-plan [in_progress]
// Full mode (brainstorm + planning with optional verification)
CCWP:full: [1/2] /workflow:brainstorm [in_progress]
CCWP:full: [2/2] /workflow:plan [pending]
```
**2. Status.json Tracking**: Persistent state for planning monitoring.
**Location**: `.workflow/.ccw-plan/{session_id}/status.json`
**Structure**:
```json
{
  "session_id": "CCWP-oauth-auth-2025-02-02",
  "mode": "plan-verify",
  "status": "running|completed|failed",
  "created_at": "2025-02-02T10:00:00Z",
  "updated_at": "2025-02-02T10:05:00Z",
  "analysis": {
    "goal": "Implement OAuth2 authentication",
    "complexity": "high",
    "clarity_score": 2,
    "criticality": "high"
  },
  "command_chain": [
    { "index": 0, "command": "/workflow:plan", "mandatory": false, "status": "completed" },
    { "index": 1, "command": "/workflow:plan-verify", "mandatory": true, "status": "running" }
  ],
  "current_index": 1,
  "quality_gate": "pending|PASS|FAIL"
}
```
**Status Values**:
- `running`: Planning in progress
- `completed`: Planning finished successfully
- `failed`: Planning aborted or quality gate failed
**Quality Gate Values** (plan-verify mode only):
- `pending`: Verification not started
- `PASS`: Plan meets quality standards
- `FAIL`: Plan needs refinement
**Mode-Specific Fields**:
- **plan-verify**: `quality_gate` field (pending|PASS|FAIL)
- **cli**: No command_chain, stores CLI recommendations and user decision
- **issue**: includes issue discovery results and queue configuration
- **rapid-to-issue**: includes plan output and conversion to issue
- **with-file modes**: stores session artifacts and post-completion options
- **other modes**: basic command_chain tracking
---
## Extended Planning Modes
### CLI-Assisted Planning (cli mode)
```
Quick Input → ccw cli --mode analysis --rule planning-* → Recommendations → User Decision:
- ✓ Accept → Create lite-plan from recommendations
- ↗ Escalate → Switch to multi-cli or full mode
- ✗ Done → Stop (recommendation only)
```
**Use Cases**:
- Quick architecture decision questions
- Planning approach recommendations
- Pattern/library selection advice
**CLI Rules** (auto-selected based on context):
- `planning-plan-architecture-design` - Architecture decisions
- `planning-breakdown-task-steps` - Task decomposition
- `planning-design-component-spec` - Component specifications
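A minimal sketch of how rule auto-selection could work, mapping request keywords to the rule names listed above. The keyword heuristics and the function name are assumptions for illustration, not the documented selection logic.

```javascript
// Illustrative sketch: pick a planning rule for `ccw cli --rule <rule>` from request keywords.
function selectPlanningRule(request) {
  const text = request.toLowerCase();
  if (/architecture|pattern|library|framework|tech stack/.test(text)) {
    return 'planning-plan-architecture-design';
  }
  if (/component|interface|spec|api shape/.test(text)) {
    return 'planning-design-component-spec';
  }
  return 'planning-breakdown-task-steps'; // default: task decomposition
}
```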
---
### With-File Planning Workflows
**With-File workflows** provide documented exploration with multi-CLI collaboration, generating comprehensive session artifacts.
| Mode | Purpose | Key Features | Output Folder |
|------|---------|--------------|---------------|
| **brainstorm-with-file** | Multi-perspective ideation | Gemini/Codex/Claude perspectives, diverge-converge | `.workflow/.brainstorm/` |
| **analyze-with-file** | Collaborative architecture analysis | Multi-round Q&A, CLI exploration, documented discussions | `.workflow/.analysis/` |
**Detection Keywords**:
- **brainstorm-with-file**: 头脑风暴, 创意, 发散思维, multi-perspective, ideation
- **analyze-with-file**: 协作分析, 深度理解, collaborative analysis, explore concept
**Characteristics**:
1. **Self-Contained**: Each workflow handles its own iteration loop
2. **Documented Process**: Creates evolving documents (brainstorm.md, discussion.md)
3. **Multi-CLI**: Uses Gemini/Codex/Claude for different perspectives
4. **Built-in Post-Completion**: Offers follow-up options (create plan, create issue, deep dive)
---
### Issue Workflow Integration
| Mode | Purpose | Command Chain | Typical Use |
|------|---------|---------------|-------------|
| **issue** | Batch issue planning | discover → plan → queue → execute | Multiple issues in codebase |
| **rapid-to-issue** | Quick plan → Issue workflow | lite-plan → convert-to-plan → queue → execute | Fast iteration → structured execution |
**Issue Workflow Bridge**:
```
lite-plan (in-memory) → /issue:convert-to-plan → Creates issue JSON
/issue:queue → Form execution queue
/issue:execute → DAG-based parallel execution
```
**When to use Issue Workflow**:
- Need structured multi-stage execution (queue-based)
- Want parallel DAG execution
- Multiple related changes as individual commits
- Converting brainstorm/plan output to executable tasks
---
## Key Design Principles
1. **Planning-Focused** - Pure planning coordination, no execution
2. **Mode-Driven** - 10 planning modes for different needs (lite/multi-cli/full/plan-verify/replan + cli/issue/rapid-to-issue/brainstorm-with-file/analyze-with-file)
3. **CLI Integration** - Quick analysis for immediate recommendations
4. **With-File Support** - Multi-CLI collaboration with documented artifacts
5. **Issue Workflow Bridge** - Seamless transition from planning to structured execution
6. **Quality Gates** - Mandatory verification for production features
7. **Flexible Verification** - Optional for exploration, mandatory for critical features
8. **Progressive Clarification** - Low clarity triggers requirement questions
9. **TODO Tracking** - Use CCWP prefix to isolate planning todos
10. **Handoff Ready** - Generates artifacts ready for execution phase
---
## Usage
```bash
# Auto-select mode (keyword-based detection)
/ccw-plan "Add user authentication"
# Standard planning modes
/ccw-plan --mode lite "Add logout endpoint"
/ccw-plan --mode multi-cli "Implement OAuth2"
/ccw-plan --mode full "Design notification system"
/ccw-plan --mode plan-verify "Payment processing (production)"
/ccw-plan --mode replan --session WFS-auth-2025-01-28
# CLI-assisted planning (quick recommendations)
/ccw-plan --mode cli "Quick: should we use OAuth2 or JWT?"
/ccw-plan --mode cli "Which state management pattern for React app?"
# With-File workflows (multi-CLI collaboration)
/ccw-plan --mode brainstorm-with-file "头脑风暴: 用户通知系统重新设计"
/ccw-plan --mode analyze-with-file "协作分析: 认证架构的设计决策"
# Issue workflow integration
/ccw-plan --mode issue "Batch plan: handle all pending security issues"
/ccw-plan --mode rapid-to-issue "Plan user profile feature and create issue"
# Auto mode (skip confirmations)
/ccw-plan --yes "Quick feature: user profile endpoint"
```
---
## Mode Selection Decision Tree
```
User calls: /ccw-plan "task description"
├─ Keywords: "quick", "fast", "recommendation"
│ └─ Mode: CLI (quick analysis → recommendations)
├─ Keywords: "issue", "batch", "queue"
│ └─ Mode: Issue (batch planning → execution queue)
├─ Keywords: "plan.*issue", "rapid.*issue"
│ └─ Mode: Rapid-to-Issue (lite-plan → issue bridge)
├─ Keywords: "头脑风暴", "brainstorm", "ideation"
│ └─ Mode: Brainstorm-with-file (multi-CLI ideation)
├─ Keywords: "协作分析", "analyze.*document"
│ └─ Mode: Analyze-with-file (collaborative analysis)
├─ Keywords: "production", "critical", "payment"
│ └─ Mode: Plan-Verify (mandatory quality gate)
├─ Keywords: "adjust", "modify", "change plan"
│ └─ Mode: Replan (refine existing plan)
├─ Keywords: "uncertain", "explore"
│ └─ Mode: Full (brainstorm → plan → [verify])
├─ Keywords: "complex", "multiple module"
│ └─ Mode: Multi-CLI (collaborative planning)
└─ Default → Lite (fast simple planning)
```

View File

@@ -0,0 +1,387 @@
---
name: ccw-test
description: Test coordinator - analyze testing needs, select test strategy, execute test workflow in main process
argument-hint: "[--mode gen|fix|verify|tdd] [--yes|-y] \"test description\""
allowed-tools: Skill(*), TodoWrite(*), AskUserQuestion(*), Read(*), Bash(*)
---
# CCW-Test Command - Test Coordinator
Test orchestrator: testing needs analysis → strategy selection → test execution.
## Core Concept: Test Units (测试单元)
**Definition**: Test commands grouped into logical units based on testing objectives.
**Test Units**:
| Unit Type | Pattern | Example |
|-----------|---------|---------|
| **Generation Only** | test-gen (no execution) | test-fix-gen |
| **Test + Fix Cycle** | test-gen → test-execute-fix | test-fix-gen → test-cycle-execute |
| **Verification Only** | existing-tests → execute | execute-tests |
| **TDD Cycle** | tdd-plan → tdd-execute → verify | Red-Green-Refactor |
**Atomic Rules**:
1. Gen mode: Generate tests only (no execution)
2. Fix mode: Generate + auto-iteration until ≥95% pass
3. Verify mode: Execute existing tests + report
4. TDD mode: Full Red-Green-Refactor cycle compliance
## Execution Model
**Synchronous (Main Process)**: Test commands execute via Skill, blocking until complete.
```
User Input → Analyze Testing Needs → Select Strategy → [Confirm] → Execute Tests
                                                                      → Skill (blocking)
                                                                      → Update TodoWrite
                                                                      → Generate Tests/Results
```
## 5-Phase Workflow
### Phase 1: Analyze Testing Needs
**Input** → Extract (description, target_module, existing_tests) → Assess (testing_goal, framework, coverage_target) → **Analysis**
| Field | Values |
|-------|--------|
| testing_goal | generate \| fix \| verify \| tdd |
| framework | jest \| vitest \| pytest \| ... |
| coverage_target | 0-100 (default: 80) |
| existing_tests | true \| false |
#### Mode Detection (Priority Order)
```
Input Keywords → Mode
─────────────────────────────────────────────────────────
generate|create|write test|need test → gen
fix|repair|failing|broken → fix
verify|validate|check|run test → verify
tdd|test-driven|test first → tdd
(default) → fix
```
**Output**: `TestingGoal: [goal] | Mode: [mode] | Target: [module] | Framework: [framework]`
---
### Phase 1.5: Testing Clarification (if needed)
```
Analysis → Check testing_goal known?
  YES → Check target_module set?
          YES → Continue to Phase 2
          NO  → Ask Questions → Update Analysis
```
**Questions Asked**: Testing Goal, Target Module/Files, Coverage Requirements, Test Framework
---
### Phase 2: Select Test Strategy & Build Command Chain
```
Analysis → Detect Mode (keywords) → Build Command Chain → Test Workflow
```
#### Command Chain Mapping
| Mode | Command Chain | Behavior |
|------|---------------|----------|
| **gen** | test-fix-gen | Generate only, no execution |
| **fix** | test-fix-gen → test-cycle-execute (iterate) | Auto-iteration until ≥95% pass or max iterations |
| **verify** | execute-existing-tests → coverage-report | Execute + report only |
| **tdd** | tdd-plan → execute → tdd-verify | Red-Green-Refactor cycle compliance |
**Note**: `(iterate)` = auto-iteration until pass_rate ≥ 95% or max_iterations reached
**Output**: `Mode: [mode] | Strategy: [strategy] | Commands: [1. /cmd1 2. /cmd2]`
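A minimal sketch of the chain construction per mode. The command names come from the table above; the `max_iterations` field is an assumption used to carry the fix-mode iteration limit into the chain.

```javascript
// Illustrative sketch: build the test command chain for the detected mode.
function buildTestChain(mode, goal, maxIterations = 3) {
  switch (mode) {
    case 'gen':
      return [{ cmd: '/workflow:test-fix-gen', args: `"${goal}"` }];
    case 'verify':
      return [{ cmd: 'execute-existing-tests' }, { cmd: 'coverage-report' }];
    case 'tdd':
      return [
        { cmd: '/workflow:tdd-plan', args: `"${goal}"` },
        { cmd: '/workflow:execute' },
        { cmd: '/workflow:tdd-verify' }
      ];
    case 'fix':
    default:
      return [
        { cmd: '/workflow:test-fix-gen', args: `"${goal}"` },
        { cmd: '/workflow:test-cycle-execute', max_iterations: maxIterations }
      ];
  }
}
```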
---
### Phase 3: User Confirmation
```
Test Chain → Show Strategy → Ask User → User Decision:
- ✓ Confirm → Continue to Phase 4
- ⚙ Change Mode → Select Different Mode (back to Phase 2)
- ✗ Cancel → Abort
```
---
### Phase 4: Setup TODO Tracking & Status File
```
Test Chain → Create Session Dir → Initialize Tracking → Tracking State
```
**Session Structure**:
```
Session ID: CCWT-{target-module-slug}-{date}
Session Dir: .workflow/.ccw-test/{session_id}/
TodoWrite:
  CCWT:{mode}: [1/n] /command1 [in_progress]
  CCWT:{mode}: [2/n] /command2 [pending]
  ...
status.json:
  {
    "session_id": "CCWT-...",
    "mode": "gen|fix|verify|tdd",
    "status": "running",
    "testing": { description, target_module, framework, coverage_target },
    "command_chain": [...],
    "test_metrics": { total_tests, passed, failed, pass_rate, iteration_count, coverage }
  }
```
**Output**:
- TODO: `-> CCWT:fix: [1/2] /workflow:test-fix-gen | CCWT:fix: [2/2] /workflow:test-cycle-execute`
- Status File: `.workflow/.ccw-test/{session_id}/status.json`
---
### Phase 5: Execute Test Chain
#### For All Modes (Sequential Execution)
```
Start Command → Update status (running) → Execute via Skill → Result
  → Update test_metrics → Next Command
  → Error? → YES → Ask Action (Retry/Skip/Abort)
           → NO  → Continue
```
#### For Fix Mode (Auto-Iteration)
```
test-fix-gen completes → test-cycle-execute begins
  Check pass_rate ≥ 95%?
    YES → Complete
    NO  → Check iteration < max?
            NO  → Complete
            YES → Iteration
                    (analyze failures
                     generate fix
                     re-execute tests)
                  └→ Loop back to pass_rate check
```
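A minimal sketch of this loop, assuming each test run reports a pass rate and the thresholds default to the values above (95% pass, 3 iterations). `runTests` and `generateFixes` are stand-ins for what test-cycle-execute does internally, not real commands.

```javascript
// Illustrative sketch of the fix-mode iteration loop (not the real implementation).
async function fixUntilPassing(runTests, generateFixes, passThreshold = 95, maxIterations = 3) {
  let metrics = await runTests();              // → { total, passed, failed, pass_rate, ... }
  let iteration = 0;
  while (metrics.pass_rate < passThreshold && iteration < maxIterations) {
    iteration++;
    await generateFixes(metrics);              // analyze failures, apply fixes
    metrics = await runTests();                // re-execute tests
  }
  return {
    ...metrics,
    iteration_count: iteration,
    completed: metrics.pass_rate >= passThreshold
  };
}
```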
#### Error Handling Pattern
```
Command Error → Update status (failed) → Ask User:
- Retry → Re-execute (same index)
- Skip → Continue next command
- Abort → Stop execution
```
#### Test Metrics Update
```
After Each Execution → Collect test_metrics:
- total_tests: number
- passed/failed: count
- pass_rate: percentage
- iteration_count: increment (fix mode)
- coverage: line/branch/function
Update status.json → Update TODO with iteration info (if fix mode)
```
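A minimal sketch of folding a test run's results into the status file's `test_metrics` block. Field names follow the structure shown under State Management below; the helper name and the `run` shape are assumptions for illustration.

```javascript
// Illustrative sketch: merge a test run's results into the session status object.
function updateTestMetrics(status, run) {
  status.test_metrics = {
    total_tests: run.total,
    passed: run.passed,
    failed: run.failed,
    pass_rate: run.total ? +(100 * run.passed / run.total).toFixed(1) : 0,
    iteration_count: (status.test_metrics?.iteration_count ?? 0) + (status.mode === 'fix' ? 1 : 0),
    coverage: run.coverage              // { line, branch, function } percentages
  };
  status.updated_at = new Date().toISOString();
  return status;
}
```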
---
## Execution Flow Summary
```
User Input
|
Phase 1: Analyze Testing Needs
|-- Extract: description, testing_goal, target_module, existing_tests
+-- If unclear -> Phase 1.5: Clarify Testing Needs
|
Phase 2: Select Test Strategy & Build Chain
|-- Detect mode: gen | fix | verify | tdd
|-- Build command chain based on mode
+-- Configure iteration limits (fix mode)
|
Phase 3: User Confirmation (optional)
|-- Show test strategy
+-- Allow mode change
|
Phase 4: Setup TODO Tracking & Status File
|-- Create todos with CCWT prefix
+-- Initialize .workflow/.ccw-test/{session_id}/status.json
|
Phase 5: Execute Test Chain
|-- For each command:
| |-- Update status.json (current=running)
| |-- Execute via Skill
| |-- Test-fix cycle: iterate until ≥95% pass or max iterations
| |-- Update test_metrics in status.json
| +-- Update TODO status
+-- Mark status.json as completed
```
---
## Test Pipeline Examples
| Input | Mode | Pipeline | Iteration |
|-------|------|----------|-----------|
| "Generate tests for auth module" | gen | test-fix-gen | No execution |
| "Fix failing authentication tests" | fix | test-fix-gen → test-cycle-execute (iterate) | Max 3 iterations |
| "Run existing test suite" | verify | execute-tests → coverage-report | One-time |
| "Implement user profile with TDD" | tdd | tdd-plan → execute → tdd-verify | Red-Green-Refactor |
**Legend**: `(iterate)` = auto-iteration until ≥95% pass rate
---
## State Management
### Dual Tracking System
**1. TodoWrite-Based Tracking** (UI Display):
```
// Initial state (fix mode)
CCWT:fix: [1/2] /workflow:test-fix-gen [in_progress]
CCWT:fix: [2/2] /workflow:test-cycle-execute [pending]
// During iteration (fix mode, iteration 2/3)
CCWT:fix: [1/2] /workflow:test-fix-gen [completed]
CCWT:fix: [2/2] /workflow:test-cycle-execute [in_progress] (iteration 2/3, pass rate: 78%)
// Gen mode (no execution)
CCWT:gen: [1/1] /workflow:test-fix-gen [in_progress]
// Verify mode (one-time)
CCWT:verify: [1/2] execute-existing-tests [in_progress]
CCWT:verify: [2/2] generate-coverage-report [pending]
// TDD mode (Red-Green-Refactor)
CCWT:tdd: [1/3] /workflow:tdd-plan [in_progress]
CCWT:tdd: [2/3] /workflow:execute [pending]
CCWT:tdd: [3/3] /workflow:tdd-verify [pending]
```
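A minimal sketch of how the iteration annotation on the active todo line (as in the fix-mode example above) might be produced. This is purely an illustrative formatting helper.

```javascript
// Illustrative sketch: render a CCWT todo line, with iteration info in fix mode.
function renderTestTodo(mode, step, total, command, state, metrics) {
  const base = `CCWT:${mode}: [${step}/${total}] ${command} [${state}]`;
  if (mode === 'fix' && state === 'in_progress' && metrics) {
    return `${base} (iteration ${metrics.iteration_count}/${metrics.max_iterations}, pass rate: ${metrics.pass_rate}%)`;
  }
  return base;
}

// renderTestTodo('fix', 2, 2, '/workflow:test-cycle-execute', 'in_progress',
//   { iteration_count: 2, max_iterations: 3, pass_rate: 78 })
// → "CCWT:fix: [2/2] /workflow:test-cycle-execute [in_progress] (iteration 2/3, pass rate: 78%)"
```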
**2. Status.json Tracking**: Persistent state for test monitoring.
**Location**: `.workflow/.ccw-test/{session_id}/status.json`
**Structure**:
```json
{
  "session_id": "CCWT-auth-module-2025-02-02",
  "mode": "fix",
  "status": "running|completed|failed",
  "created_at": "2025-02-02T10:00:00Z",
  "updated_at": "2025-02-02T10:05:00Z",
  "testing": {
    "description": "Fix failing authentication tests",
    "target_module": "src/auth/**/*.ts",
    "framework": "jest",
    "coverage_target": 80
  },
  "command_chain": [
    { "index": 0, "command": "/workflow:test-fix-gen", "unit": "sequential", "status": "completed" },
    { "index": 1, "command": "/workflow:test-cycle-execute", "unit": "test-fix-cycle", "max_iterations": 3, "status": "in_progress" }
  ],
  "current_index": 1,
  "test_metrics": {
    "total_tests": 42,
    "passed": 38,
    "failed": 4,
    "pass_rate": 90.5,
    "iteration_count": 2,
    "coverage": {
      "line": 82.3,
      "branch": 75.6,
      "function": 88.1
    }
  }
}
```
**Status Values**:
- `running`: Test workflow in progress
- `completed`: Tests passing (≥95%) or generation complete
- `failed`: Test workflow aborted
**Test Metrics** (updated during execution):
- `total_tests`: Number of tests executed
- `pass_rate`: Percentage of passing tests (target: ≥95%)
- `iteration_count`: Number of test-fix iterations (fix mode)
- `coverage`: Line/branch/function coverage percentages
---
## Key Design Principles
1. **Testing-Focused** - Pure test coordination, no implementation
2. **Mode-Driven** - 4 test strategies for different needs
3. **Auto-Iteration** - Fix mode iterates until ≥95% pass rate
4. **Metrics Tracking** - Real-time test metrics in status.json
5. **Coverage-Driven** - Coverage targets guide test generation
6. **TODO Tracking** - Use CCWT prefix to isolate test todos
7. **TDD Compliance** - TDD mode enforces Red-Green-Refactor cycle
---
## Usage
```bash
# Auto-select mode
/ccw-test "Test user authentication module"
# Explicit mode selection
/ccw-test --mode gen "Generate tests for payment module"
/ccw-test --mode fix "Fix failing authentication tests"
/ccw-test --mode verify "Validate current test suite"
/ccw-test --mode tdd "Implement user profile with TDD"
# Custom configuration
/ccw-test --mode fix --max-iterations 5 --pass-threshold 98 "Fix all tests"
/ccw-test --target "src/auth/**/*.ts" "Test authentication module"
# Auto mode (skip confirmations)
/ccw-test --yes "Quick test validation"
```
---
## Mode Selection Decision Tree
```
User calls: /ccw-test "test description"
├─ Keywords: "generate", "create", "write test"
│ └─ Mode: Gen (generate only, no execution)
├─ Keywords: "fix", "repair", "failing"
│ └─ Mode: Fix (auto-iterate until ≥95% pass)
├─ Keywords: "verify", "validate", "run test"
│ └─ Mode: Verify (execute existing tests)
├─ Keywords: "tdd", "test-driven", "test first"
│ └─ Mode: TDD (Red-Green-Refactor cycle)
└─ Default → Fix (most common: fix failing tests)
```

666
.claude/commands/ccw.md Normal file
View File

@@ -0,0 +1,666 @@
---
name: ccw
description: Main workflow orchestrator - analyze intent, select workflow, execute command chain in main process
argument-hint: "\"task description\""
allowed-tools: Skill(*), TodoWrite(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*)
---
# CCW Command - Main Workflow Orchestrator
Main process orchestrator: intent analysis → workflow selection → command chain execution.
## Core Concept: Minimum Execution Units (最小执行单元)
**Definition**: A set of commands that must execute together as an atomic group to achieve a meaningful workflow milestone.
**Why This Matters**:
- **Prevents Incomplete States**: Avoid stopping after task generation without execution
- **User Experience**: User gets complete results, not intermediate artifacts requiring manual follow-up
- **Workflow Integrity**: Maintains logical coherence of multi-step operations
**Key Units in CCW**:
| Unit Type | Pattern | Example |
|-----------|---------|---------|
| **Planning + Execution** | plan-cmd → execute-cmd | lite-plan → lite-execute |
| **Testing** | test-gen-cmd → test-exec-cmd | test-fix-gen → test-cycle-execute |
| **Review** | review-cmd → fix-cmd | review-session-cycle → review-cycle-fix |
**Atomic Rules**:
1. CCW automatically groups commands into minimum units - never splits them
2. Pipeline visualization shows units with `【 】` markers
3. Error handling preserves unit boundaries (retry/skip affects whole unit)
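A minimal sketch of how a command chain could be grouped by its `unit` tags (the same field used by buildCommandChain in Phase 2 below); untagged commands form single-command units. The function name is illustrative.

```javascript
// Illustrative sketch: group chain steps into Minimum Execution Units by their `unit` tag.
// Steps without a `unit` field become single-step units.
function groupIntoUnits(chain) {
  const units = [];
  for (const step of chain) {
    const last = units[units.length - 1];
    if (step.unit && last && last.unit === step.unit) {
      last.steps.push(step);
    } else {
      units.push({ unit: step.unit ?? null, steps: [step] });
    }
  }
  return units; // e.g. [{ unit: 'quick-impl', steps: [lite-plan, lite-execute] }, ...]
}
```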
## Execution Model
**Synchronous (Main Process)**: Commands execute via Skill in main process, blocking until complete.
```
User Input → Analyze Intent → Select Workflow → [Confirm] → Execute Chain
                                                                → Skill (blocking)
                                                                → Update TodoWrite
                                                                → Next Command...
```
**vs ccw-coordinator**: External CLI execution with background tasks and hook callbacks.
## 5-Phase Workflow
### Phase 1: Analyze Intent
```javascript
function analyzeIntent(input) {
  return {
    goal: extractGoal(input),
    scope: extractScope(input),
    constraints: extractConstraints(input),
    task_type: detectTaskType(input),       // bugfix|feature|tdd|review|exploration|...
    complexity: assessComplexity(input),    // low|medium|high
    clarity_score: calculateClarity(input)  // 0-3 (>=2 = clear)
  };
}
// Task type detection (priority order)
function detectTaskType(text) {
  const patterns = {
    // Lookaheads require both keyword groups to match (e.g. "urgent" AND "fix")
    'bugfix-hotfix': /(?=.*(urgent|production|critical))(?=.*(fix|bug))/,
    // With-File workflows (documented exploration with multi-CLI collaboration)
    'brainstorm': /brainstorm|ideation|头脑风暴|创意|发散思维|creative thinking|multi-perspective.*think|compare perspectives|探索.*可能/,
    'brainstorm-to-issue': /brainstorm.*issue|头脑风暴.*issue|idea.*issue|想法.*issue|从.*头脑风暴|convert.*brainstorm/,
    'debug-file': /debug.*document|hypothesis.*debug|troubleshoot.*track|investigate.*log|调试.*记录|假设.*验证|systematic debug|深度调试/,
    'analyze-file': /analyze.*document|explore.*concept|understand.*architecture|investigate.*discuss|collaborative analysis|分析.*讨论|深度.*理解|协作.*分析/,
    // Standard workflows
    'bugfix': /fix|bug|error|crash|fail|debug/,
    'issue-batch': /(?=.*(issues?|batch))(?=.*(fix|resolve))/,
    'issue-transition': /issue workflow|structured workflow|queue|multi-stage/,
    'exploration': /uncertain|explore|research|what if/,
    'quick-task': /(?=.*(quick|simple|small))(?=.*(feature|function))/,
    'ui-design': /ui|design|component|style/,
    'tdd': /tdd|test-driven|test first/,
    'test-fix': /test fail|fix test|failing test/,
    'review': /review|code review/,
    'documentation': /docs|documentation|readme/
  };
  for (const [type, pattern] of Object.entries(patterns)) {
    if (pattern.test(text)) return type;
  }
  return 'feature';
}
```
**Output**: `Type: [task_type] | Goal: [goal] | Complexity: [complexity] | Clarity: [clarity_score]/3`
---
### Phase 1.5: Requirement Clarification (if clarity_score < 2)
```javascript
async function clarifyRequirements(analysis) {
if (analysis.clarity_score >= 2) return analysis;
const questions = generateClarificationQuestions(analysis); // Goal, Scope, Constraints
const answers = await AskUserQuestion({ questions });
return updateAnalysis(analysis, answers);
}
```
**Questions**: Goal (Create/Fix/Optimize/Analyze), Scope (Single file/Module/Cross-module/System), Constraints (Backward compat/Skip tests/Urgent hotfix)
---
### Phase 2: Select Workflow & Build Command Chain
```javascript
function selectWorkflow(analysis) {
const levelMap = {
'bugfix-hotfix': { level: 2, flow: 'bugfix.hotfix' },
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm': { level: 4, flow: 'brainstorm-with-file' }, // Multi-perspective ideation
'brainstorm-to-issue': { level: 4, flow: 'brainstorm-to-issue' }, // Brainstorm → Issue workflow
'debug-file': { level: 3, flow: 'debug-with-file' }, // Hypothesis-driven debugging
'analyze-file': { level: 3, flow: 'analyze-with-file' }, // Collaborative analysis
// Standard workflows
'bugfix': { level: 2, flow: 'bugfix.standard' },
'issue-batch': { level: 'Issue', flow: 'issue' },
'issue-transition': { level: 2.5, flow: 'rapid-to-issue' }, // Bridge workflow
'exploration': { level: 4, flow: 'full' },
'quick-task': { level: 1, flow: 'lite-lite-lite' },
'ui-design': { level: analysis.complexity === 'high' ? 4 : 3, flow: 'ui' },
'tdd': { level: 3, flow: 'tdd' },
'test-fix': { level: 3, flow: 'test-fix-gen' },
'review': { level: 3, flow: 'review-cycle-fix' },
'documentation': { level: 2, flow: 'docs' },
'feature': { level: analysis.complexity === 'high' ? 3 : 2, flow: analysis.complexity === 'high' ? 'coupled' : 'rapid' }
};
const selected = levelMap[analysis.task_type] || levelMap['feature'];
return buildCommandChain(selected, analysis);
}
// Build command chain (port-based matching with Minimum Execution Units)
function buildCommandChain(workflow, analysis) {
const chains = {
// Level 1 - Rapid
'lite-lite-lite': [
{ cmd: '/workflow:lite-lite-lite', args: `"${analysis.goal}"` }
],
// Level 2 - Lightweight
'rapid': [
// Unit: Quick Implementation【lite-plan → lite-execute】
{ cmd: '/workflow:lite-plan', args: `"${analysis.goal}"`, unit: 'quick-impl' },
{ cmd: '/workflow:lite-execute', args: '--in-memory', unit: 'quick-impl' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
...(analysis.constraints?.includes('skip-tests') ? [] : [
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
])
],
// Level 2 Bridge - Lightweight to Issue Workflow
'rapid-to-issue': [
// Unit: Quick Implementation【lite-plan → convert-to-plan】
{ cmd: '/workflow:lite-plan', args: `"${analysis.goal}"`, unit: 'quick-impl-to-issue' },
{ cmd: '/issue:convert-to-plan', args: '--latest-lite-plan -y', unit: 'quick-impl-to-issue' },
// Auto-continue to issue workflow
{ cmd: '/issue:queue', args: '' },
{ cmd: '/issue:execute', args: '--queue auto' }
],
'bugfix.standard': [
// Unit: Bug Fix【lite-fix → lite-execute】
{ cmd: '/workflow:lite-fix', args: `"${analysis.goal}"`, unit: 'bug-fix' },
{ cmd: '/workflow:lite-execute', args: '--in-memory', unit: 'bug-fix' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
...(analysis.constraints?.includes('skip-tests') ? [] : [
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
])
],
'bugfix.hotfix': [
{ cmd: '/workflow:lite-fix', args: `--hotfix "${analysis.goal}"` }
],
'multi-cli-plan': [
// Unit: Multi-CLI Planning【multi-cli-plan → lite-execute】
{ cmd: '/workflow:multi-cli-plan', args: `"${analysis.goal}"`, unit: 'multi-cli' },
{ cmd: '/workflow:lite-execute', args: '--in-memory', unit: 'multi-cli' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
...(analysis.constraints?.includes('skip-tests') ? [] : [
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
])
],
'docs': [
// Unit: Quick Implementation【lite-plan → lite-execute】
{ cmd: '/workflow:lite-plan', args: `"${analysis.goal}"`, unit: 'quick-impl' },
{ cmd: '/workflow:lite-execute', args: '--in-memory', unit: 'quick-impl' }
],
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm-with-file': [
{ cmd: '/workflow:brainstorm-with-file', args: `"${analysis.goal}"` }
// Note: Has built-in post-completion options (create plan, create issue, deep analysis)
],
// Brainstorm-to-Issue workflow (bridge from brainstorm to issue execution)
'brainstorm-to-issue': [
// Note: Assumes brainstorm session already exists, or run brainstorm first
{ cmd: '/issue:from-brainstorm', args: `SESSION="${extractBrainstormSession(analysis)}" --auto` },
{ cmd: '/issue:queue', args: '' },
{ cmd: '/issue:execute', args: '--queue auto' }
],
'debug-with-file': [
{ cmd: '/workflow:debug-with-file', args: `"${analysis.goal}"` }
// Note: Self-contained with hypothesis-driven iteration and Gemini validation
],
'analyze-with-file': [
{ cmd: '/workflow:analyze-with-file', args: `"${analysis.goal}"` }
// Note: Self-contained with multi-round discussion and CLI exploration
],
// Level 3 - Standard
'coupled': [
// Unit: Verified Planning【plan → plan-verify】
{ cmd: '/workflow:plan', args: `"${analysis.goal}"`, unit: 'verified-planning' },
{ cmd: '/workflow:plan-verify', args: '', unit: 'verified-planning' },
// Execution
{ cmd: '/workflow:execute', args: '' },
// Unit: Code Review【review-session-cycle → review-cycle-fix】
{ cmd: '/workflow:review-session-cycle', args: '', unit: 'code-review' },
{ cmd: '/workflow:review-cycle-fix', args: '', unit: 'code-review' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
...(analysis.constraints?.includes('skip-tests') ? [] : [
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
])
],
'tdd': [
// Unit: TDD Planning + Execution【tdd-plan → execute】
{ cmd: '/workflow:tdd-plan', args: `"${analysis.goal}"`, unit: 'tdd-planning' },
{ cmd: '/workflow:execute', args: '', unit: 'tdd-planning' },
// TDD Verification
{ cmd: '/workflow:tdd-verify', args: '' }
],
'test-fix-gen': [
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
{ cmd: '/workflow:test-fix-gen', args: `"${analysis.goal}"`, unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
],
'review-cycle-fix': [
// Unit: Code Review【review-session-cycle → review-cycle-fix】
{ cmd: '/workflow:review-session-cycle', args: '', unit: 'code-review' },
{ cmd: '/workflow:review-cycle-fix', args: '', unit: 'code-review' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
],
'ui': [
{ cmd: '/workflow:ui-design:explore-auto', args: `"${analysis.goal}"` },
// Unit: Planning + Execution【plan → execute】
{ cmd: '/workflow:plan', args: '', unit: 'plan-execute' },
{ cmd: '/workflow:execute', args: '', unit: 'plan-execute' }
],
// Level 4 - Brainstorm
'full': [
{ cmd: '/workflow:brainstorm:auto-parallel', args: `"${analysis.goal}"` },
// Unit: Verified Planning【plan → plan-verify】
{ cmd: '/workflow:plan', args: '', unit: 'verified-planning' },
{ cmd: '/workflow:plan-verify', args: '', unit: 'verified-planning' },
// Execution
{ cmd: '/workflow:execute', args: '' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
],
// Issue Workflow
'issue': [
{ cmd: '/issue:discover', args: '' },
{ cmd: '/issue:plan', args: '--all-pending' },
{ cmd: '/issue:queue', args: '' },
{ cmd: '/issue:execute', args: '' }
]
};
return chains[workflow.flow] || chains['rapid'];
}
```
**Output**: `Level [X] - [flow] | Pipeline: [...] | Commands: [1. /cmd1 2. /cmd2 ...]`
---
### Phase 3: User Confirmation
```javascript
async function getUserConfirmation(chain) {
  const response = await AskUserQuestion({
    questions: [{
      question: "Execute this command chain?",
      header: "Confirm",
      options: [
        { label: "Confirm", description: "Start" },
        { label: "Adjust", description: "Modify" },
        { label: "Cancel", description: "Abort" }
      ]
    }]
  });
  // Answers are keyed by question header (same convention as handleError below)
  if (response.Confirm === "Cancel") throw new Error("Cancelled");
  if (response.Confirm === "Adjust") return await adjustChain(chain);
  return chain;
}
```
---
### Phase 4: Setup TODO Tracking & Status File
```javascript
function setupTodoTracking(chain, workflow, analysis) {
const sessionId = `ccw-${Date.now()}`;
const stateDir = `.workflow/.ccw/${sessionId}`;
Bash(`mkdir -p "${stateDir}"`);
const todos = chain.map((step, i) => ({
content: `CCW:${workflow}: [${i + 1}/${chain.length}] ${step.cmd}`,
status: i === 0 ? 'in_progress' : 'pending',
activeForm: `Executing ${step.cmd}`
}));
TodoWrite({ todos });
// Initialize status.json for hook tracking
const state = {
session_id: sessionId,
workflow: workflow,
status: 'running',
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
analysis: analysis,
command_chain: chain.map((step, idx) => ({
index: idx,
command: step.cmd,
status: idx === 0 ? 'running' : 'pending'
})),
current_index: 0
};
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
return { sessionId, stateDir, state };
}
```
**Output**:
- TODO: `-> CCW:rapid: [1/3] /workflow:lite-plan | CCW:rapid: [2/3] /workflow:lite-execute | ...`
- Status File: `.workflow/.ccw/{session_id}/status.json`
---
### Phase 5: Execute Command Chain
```javascript
async function executeCommandChain(chain, workflow, trackingState) {
let previousResult = null;
const { sessionId, stateDir, state } = trackingState;
for (let i = 0; i < chain.length; i++) {
try {
// Update status: mark current as running
state.command_chain[i].status = 'running';
state.current_index = i;
state.updated_at = new Date().toISOString();
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
const fullCommand = assembleCommand(chain[i], previousResult);
const result = await Skill({ skill: fullCommand });
previousResult = { ...result, success: true };
// Update status: mark current as completed, next as running
state.command_chain[i].status = 'completed';
if (i + 1 < chain.length) {
state.command_chain[i + 1].status = 'running';
}
state.updated_at = new Date().toISOString();
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
updateTodoStatus(i, chain.length, workflow, 'completed');
} catch (error) {
// Update status on error
state.command_chain[i].status = 'failed';
state.status = 'error';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
const action = await handleError(chain[i], error, i);
if (action === 'retry') {
state.command_chain[i].status = 'pending';
state.status = 'running';
i--; // Retry
} else if (action === 'abort') {
state.status = 'failed';
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
return { success: false, error: error.message };
}
// 'skip' - continue
state.status = 'running';
}
}
// Mark workflow as completed
state.status = 'completed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
return { success: true, completed: chain.length, sessionId };
}
// Assemble full command with session/plan parameters
function assembleCommand(step, previousResult) {
let command = step.cmd;
if (step.args) {
command += ` ${step.args}`;
} else if (previousResult?.session_id) {
command += ` --session="${previousResult.session_id}"`;
}
return command;
}
// Update TODO: mark current as complete, next as in-progress
function updateTodoStatus(index, total, workflow, status) {
const todos = getAllCurrentTodos();
const updated = todos.map(todo => {
if (todo.content.startsWith(`CCW:${workflow}:`)) {
const stepNum = extractStepIndex(todo.content);
if (stepNum === index + 1) return { ...todo, status };
if (stepNum === index + 2 && status === 'completed') return { ...todo, status: 'in_progress' };
}
return todo;
});
TodoWrite({ todos: updated });
}
// Error handling: Retry/Skip/Abort
async function handleError(step, error, index) {
const response = await AskUserQuestion({
questions: [{
question: `${step.cmd} failed: ${error.message}`,
header: "Error",
options: [
{ label: "Retry", description: "Re-execute" },
{ label: "Skip", description: "Continue next" },
{ label: "Abort", description: "Stop" }
]
}]
});
return { Retry: 'retry', Skip: 'skip', Abort: 'abort' }[response.Error] || 'abort';
}
```
---
## Execution Flow Summary
```
User Input
|
Phase 1: Analyze Intent
|-- Extract: goal, scope, constraints, task_type, complexity, clarity
+-- If clarity < 2 -> Phase 1.5: Clarify Requirements
|
Phase 2: Select Workflow & Build Chain
|-- Map task_type -> Level (1/2/3/4/Issue)
|-- Select flow based on complexity
+-- Build command chain (port-based)
|
Phase 3: User Confirmation (optional)
|-- Show pipeline visualization
+-- Allow adjustment
|
Phase 4: Setup TODO Tracking & Status File
|-- Create todos with CCW prefix
+-- Initialize .workflow/.ccw/{session_id}/status.json
|
Phase 5: Execute Command Chain
|-- For each command:
| |-- Update status.json (current=running)
| |-- Assemble full command
| |-- Execute via Skill
| |-- Update status.json (current=completed, next=running)
| |-- Update TODO status
| +-- Handle errors (retry/skip/abort)
+-- Mark status.json as completed
```
---
## Pipeline Examples (with Minimum Execution Units)
**Note**: `【 】` marks Minimum Execution Units - commands execute together as atomic groups.
| Input | Type | Level | Pipeline (with Units) |
|-------|------|-------|-----------------------|
| "Add API endpoint" | feature (low) | 2 |【lite-plan → lite-execute】→【test-fix-gen → test-cycle-execute】|
| "Fix login timeout" | bugfix | 2 |【lite-fix → lite-execute】→【test-fix-gen → test-cycle-execute】|
| "Use issue workflow" | issue-transition | 2.5 |【lite-plan → convert-to-plan】→ queue → execute |
| "头脑风暴: 通知系统重构" | brainstorm | 4 | brainstorm-with-file → (built-in post-completion) |
| "从头脑风暴创建 issue" | brainstorm-to-issue | 4 | from-brainstorm → queue → execute |
| "深度调试 WebSocket 连接断开" | debug-file | 3 | debug-with-file → (hypothesis iteration) |
| "协作分析: 认证架构优化" | analyze-file | 3 | analyze-with-file → (multi-round discussion) |
| "OAuth2 system" | feature (high) | 3 |【plan → plan-verify】→ execute →【review-session-cycle → review-cycle-fix】→【test-fix-gen → test-cycle-execute】|
| "Implement with TDD" | tdd | 3 |【tdd-plan → execute】→ tdd-verify |
| "Uncertain: real-time arch" | exploration | 4 | brainstorm:auto-parallel →【plan → plan-verify】→ execute →【test-fix-gen → test-cycle-execute】|
---
## Key Design Principles
1. **Main Process Execution** - Use Skill in main process, no external CLI
2. **Intent-Driven** - Auto-select workflow based on task intent
3. **Port-Based Chaining** - Build command chain using port matching
4. **Minimum Execution Units** - Commands grouped into atomic units, never split (e.g., lite-plan → lite-execute)
5. **Progressive Clarification** - Low clarity triggers clarification phase
6. **TODO Tracking** - Use CCW prefix to isolate workflow todos
7. **Unit-Aware Error Handling** - Retry/skip/abort affects whole unit, not individual commands
8. **User Control** - Optional user confirmation at each phase
---
## State Management
### Dual Tracking System
**1. TodoWrite-Based Tracking** (UI Display): All execution state tracked via TodoWrite with `CCW:` prefix.
```javascript
// Initial state
todos = [
{ content: "CCW:rapid: [1/3] /workflow:lite-plan", status: "in_progress" },
{ content: "CCW:rapid: [2/3] /workflow:lite-execute", status: "pending" },
{ content: "CCW:rapid: [3/3] /workflow:test-cycle-execute", status: "pending" }
];
// After command 1 completes
todos = [
{ content: "CCW:rapid: [1/3] /workflow:lite-plan", status: "completed" },
{ content: "CCW:rapid: [2/3] /workflow:lite-execute", status: "in_progress" },
{ content: "CCW:rapid: [3/3] /workflow:test-cycle-execute", status: "pending" }
];
```
**2. Status.json Tracking**: Persistent state file for workflow monitoring.
**Location**: `.workflow/.ccw/{session_id}/status.json`
**Structure**:
```json
{
  "session_id": "ccw-1706123456789",
  "workflow": "rapid",
  "status": "running|completed|failed|error",
  "created_at": "2025-02-01T10:30:00Z",
  "updated_at": "2025-02-01T10:35:00Z",
  "analysis": {
    "goal": "Add user authentication",
    "scope": ["auth"],
    "constraints": [],
    "task_type": "feature",
    "complexity": "medium"
  },
  "command_chain": [
    {
      "index": 0,
      "command": "/workflow:lite-plan",
      "status": "completed"
    },
    {
      "index": 1,
      "command": "/workflow:lite-execute",
      "status": "running"
    },
    {
      "index": 2,
      "command": "/workflow:test-cycle-execute",
      "status": "pending"
    }
  ],
  "current_index": 1
}
```
**Status Values**:
- `running`: Workflow executing commands
- `completed`: All commands finished
- `failed`: User aborted or unrecoverable error
- `error`: Command execution failed (during error handling)
**Command Status Values**:
- `pending`: Not started
- `running`: Currently executing
- `completed`: Successfully finished
- `failed`: Execution failed
---
## With-File Workflows
**With-File workflows** provide documented exploration with multi-CLI collaboration. They are self-contained and generate comprehensive session artifacts.
| Workflow | Purpose | Key Features | Output Folder |
|----------|---------|--------------|---------------|
| **brainstorm-with-file** | Multi-perspective ideation | Gemini/Codex/Claude perspectives, diverge-converge cycles | `.workflow/.brainstorm/` |
| **debug-with-file** | Hypothesis-driven debugging | Gemini validation, understanding evolution, NDJSON logging | `.workflow/.debug/` |
| **analyze-with-file** | Collaborative analysis | Multi-round Q&A, CLI exploration, documented discussions | `.workflow/.analysis/` |
**Detection Keywords**:
- **brainstorm**: 头脑风暴, 创意, 发散思维, multi-perspective, compare perspectives
- **debug-file**: 深度调试, 假设验证, systematic debug, hypothesis debug
- **analyze-file**: 协作分析, 深度理解, collaborative analysis, explore concept
**Characteristics**:
1. **Self-Contained**: Each workflow handles its own iteration loop
2. **Documented Process**: Creates evolving documents (brainstorm.md, understanding.md, discussion.md)
3. **Multi-CLI**: Uses Gemini/Codex/Claude for different perspectives
4. **Built-in Post-Completion**: Offers follow-up options (create plan, issue, etc.)
---
## Usage
```bash
# Auto-select workflow
/ccw "Add user authentication"
# Complex requirement (triggers clarification)
/ccw "Optimize system performance"
# Bug fix
/ccw "Fix memory leak in WebSocket handler"
# TDD development
/ccw "Implement user registration with TDD"
# Exploratory task
/ccw "Uncertain about architecture for real-time notifications"
# With-File workflows (documented exploration with multi-CLI collaboration)
/ccw "头脑风暴: 用户通知系统重新设计" # → brainstorm-with-file
/ccw "从头脑风暴 BS-通知系统-2025-01-28 创建 issue" # → brainstorm-to-issue (bridge)
/ccw "深度调试: 系统随机崩溃问题" # → debug-with-file
/ccw "协作分析: 理解现有认证架构的设计决策" # → analyze-with-file
```

View File

@@ -3,6 +3,7 @@ name: cli-init
description: Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection
argument-hint: "[--tool gemini|qwen|all] [--output path] [--preview]"
allowed-tools: Bash(*), Read(*), Write(*), Glob(*)
group: cli
---
# CLI Initialization Command (/cli:cli-init)

View File

@@ -0,0 +1,361 @@
---
name: codex-review
description: Interactive code review using Codex CLI via ccw endpoint with configurable review target, model, and custom instructions
argument-hint: "[--uncommitted|--base <branch>|--commit <sha>] [--model <model>] [--title <title>] [prompt]"
allowed-tools: Bash(*), AskUserQuestion(*), Read(*)
---
# Codex Review Command (/cli:codex-review)
## Overview
Interactive code review command that invokes `codex review` via ccw cli endpoint with guided parameter selection.
**Codex Review Parameters** (from `codex review --help`):
| Parameter | Description |
|-----------|-------------|
| `[PROMPT]` | Custom review instructions (positional) |
| `-c model=<model>` | Override model via config |
| `--uncommitted` | Review staged, unstaged, and untracked changes |
| `--base <BRANCH>` | Review changes against base branch |
| `--commit <SHA>` | Review changes introduced by a commit |
| `--title <TITLE>` | Optional commit title for review summary |
## Prompt Template Format
Follow the standard ccw cli prompt template:
```
PURPOSE: [what] + [why] + [success criteria] + [constraints/scope]
TASK: • [step 1] • [step 2] • [step 3]
MODE: review
CONTEXT: [review target description] | Memory: [relevant context]
EXPECTED: [deliverable format] + [quality criteria]
CONSTRAINTS: [focus constraints]
```
## EXECUTION INSTRUCTIONS - START HERE
**When this command is triggered, follow these exact steps:**
### Step 1: Parse Arguments
Check if user provided arguments directly:
- `--uncommitted` → Record target = uncommitted
- `--base <branch>` → Record target = base, branch name
- `--commit <sha>` → Record target = commit, sha value
- `--model <model>` → Record model selection
- `--title <title>` → Record title
- Remaining text → Use as custom focus/prompt
If no target specified → Continue to Step 2 for interactive selection.
### Step 2: Interactive Parameter Selection
**2.1 Review Target Selection**
```javascript
AskUserQuestion({
questions: [{
question: "What do you want to review?",
header: "Review Target",
options: [
{ label: "Uncommitted changes (Recommended)", description: "Review staged, unstaged, and untracked changes" },
{ label: "Compare to branch", description: "Review changes against a base branch (e.g., main)" },
{ label: "Specific commit", description: "Review changes introduced by a specific commit" }
],
multiSelect: false
}]
})
```
**2.2 Branch/Commit Input (if needed)**
If "Compare to branch" selected:
```javascript
AskUserQuestion({
questions: [{
question: "Which base branch to compare against?",
header: "Base Branch",
options: [
{ label: "main", description: "Compare against main branch" },
{ label: "master", description: "Compare against master branch" },
{ label: "develop", description: "Compare against develop branch" }
],
multiSelect: false
}]
})
```
If "Specific commit" selected:
- Run `git log --oneline -10` to show recent commits
- Ask user to provide commit SHA or select from list
**2.3 Model Selection (Optional)**
```javascript
AskUserQuestion({
questions: [{
question: "Which model to use for review?",
header: "Model",
options: [
{ label: "Default", description: "Use codex default model (gpt-5.2)" },
{ label: "o3", description: "OpenAI o3 reasoning model" },
{ label: "gpt-4.1", description: "GPT-4.1 model" },
{ label: "o4-mini", description: "OpenAI o4-mini (faster)" }
],
multiSelect: false
}]
})
```
**2.4 Review Focus Selection**
```javascript
AskUserQuestion({
questions: [{
question: "What should the review focus on?",
header: "Focus Area",
options: [
{ label: "General review (Recommended)", description: "Comprehensive review: correctness, style, bugs, docs" },
{ label: "Security focus", description: "Security vulnerabilities, input validation, auth issues" },
{ label: "Performance focus", description: "Performance bottlenecks, complexity, resource usage" },
{ label: "Code quality", description: "Readability, maintainability, SOLID principles" }
],
multiSelect: false
}]
})
```
### Step 3: Build Prompt and Command
**3.1 Construct Prompt Based on Focus**
**General Review Prompt:**
```
PURPOSE: Comprehensive code review to identify issues, improve quality, and ensure best practices; success = actionable feedback with clear priorities
TASK: • Review code correctness and logic errors • Check coding standards and consistency • Identify potential bugs and edge cases • Evaluate documentation completeness
MODE: review
CONTEXT: {target_description} | Memory: Project conventions from CLAUDE.md
EXPECTED: Structured review report with: severity levels (Critical/High/Medium/Low), file:line references, specific improvement suggestions, priority ranking
CONSTRAINTS: Focus on actionable feedback
```
**Security Focus Prompt:**
```
PURPOSE: Security-focused code review to identify vulnerabilities and security risks; success = all security issues documented with remediation
TASK: • Scan for injection vulnerabilities (SQL, XSS, command) • Check authentication and authorization logic • Evaluate input validation and sanitization • Identify sensitive data exposure risks
MODE: review
CONTEXT: {target_description} | Memory: Security best practices, OWASP Top 10
EXPECTED: Security report with: vulnerability classification, CVE references where applicable, remediation code snippets, risk severity matrix
CONSTRAINTS: Security-first analysis | Flag all potential vulnerabilities
```
**Performance Focus Prompt:**
```
PURPOSE: Performance-focused code review to identify bottlenecks and optimization opportunities; success = measurable improvement recommendations
TASK: • Analyze algorithmic complexity (Big-O) • Identify memory allocation issues • Check for N+1 queries and blocking operations • Evaluate caching opportunities
MODE: review
CONTEXT: {target_description} | Memory: Performance patterns and anti-patterns
EXPECTED: Performance report with: complexity analysis, bottleneck identification, optimization suggestions with expected impact, benchmark recommendations
CONSTRAINTS: Performance optimization focus
```
**Code Quality Focus Prompt:**
```
PURPOSE: Code quality review to improve maintainability and readability; success = cleaner, more maintainable code
TASK: • Assess SOLID principles adherence • Identify code duplication and abstraction opportunities • Review naming conventions and clarity • Evaluate test coverage implications
MODE: review
CONTEXT: {target_description} | Memory: Project coding standards
EXPECTED: Quality report with: principle violations, refactoring suggestions, naming improvements, maintainability score
CONSTRAINTS: Code quality and maintainability focus
```
**3.2 Build Target Description**
Based on selection, set `{target_description}`:
- Uncommitted: `Reviewing uncommitted changes (staged + unstaged + untracked)`
- Base branch: `Reviewing changes against {branch} branch`
- Commit: `Reviewing changes introduced by commit {sha}`
### Step 4: Execute via CCW CLI
Build and execute the ccw cli command:
```bash
# Base structure
ccw cli -p "<PROMPT>" --tool codex --mode review [OPTIONS]
```
**Command Construction:**
```bash
# Variables from user selection
TARGET_FLAG="" # --uncommitted | --base <branch> | --commit <sha>
MODEL_FLAG="" # --model <model> (if not default)
TITLE_FLAG="" # --title "<title>" (if provided)
# Build target flag
if [ "$target" = "uncommitted" ]; then
TARGET_FLAG="--uncommitted"
elif [ "$target" = "base" ]; then
TARGET_FLAG="--base $branch"
elif [ "$target" = "commit" ]; then
TARGET_FLAG="--commit $sha"
fi
# Build model flag (only if not default)
if [ "$model" != "default" ] && [ -n "$model" ]; then
MODEL_FLAG="--model $model"
fi
# Build title flag (if provided)
if [ -n "$title" ]; then
TITLE_FLAG="--title \"$title\""
fi
# Execute
ccw cli -p "$PROMPT" --tool codex --mode review $TARGET_FLAG $MODEL_FLAG $TITLE_FLAG
```
**Full Example Commands:**
**Option 1: With custom prompt (reviews uncommitted by default):**
```bash
ccw cli -p "
PURPOSE: Comprehensive code review to identify issues and improve quality; success = actionable feedback with priorities
TASK: • Review correctness and logic • Check standards compliance • Identify bugs and edge cases • Evaluate documentation
MODE: review
CONTEXT: Reviewing uncommitted changes | Memory: Project conventions
EXPECTED: Structured report with severity levels, file:line refs, improvement suggestions
CONSTRAINTS: Actionable feedback
" --tool codex --mode review --rule analysis-review-code-quality
```
**Option 2: Target flag only (no prompt allowed):**
```bash
ccw cli --tool codex --mode review --uncommitted
```
### Step 5: Execute and Display Results
```bash
Bash({
command: "ccw cli -p \"$PROMPT\" --tool codex --mode review $FLAGS",
run_in_background: true
})
```
Wait for completion and display formatted results.
## Quick Usage Examples
### Direct Execution (No Interaction)
```bash
# Review uncommitted changes with default settings
/cli:codex-review --uncommitted
# Review against main branch
/cli:codex-review --base main
# Review specific commit
/cli:codex-review --commit abc123
# Review with custom model
/cli:codex-review --uncommitted --model o3
# Review with security focus
/cli:codex-review --uncommitted security
# Full options
/cli:codex-review --base main --model o3 --title "Auth Feature" security
```
### Interactive Mode
```bash
# Start interactive selection (guided flow)
/cli:codex-review
```
## Focus Area Mapping
| User Selection | Prompt Focus | Key Checks |
|----------------|--------------|------------|
| General review | Comprehensive | Correctness, style, bugs, docs |
| Security focus | Security-first | Injection, auth, validation, exposure |
| Performance focus | Optimization | Complexity, memory, queries, caching |
| Code quality | Maintainability | SOLID, duplication, naming, tests |
## Error Handling
### No Changes to Review
```
No changes found for review target. Suggestions:
- For --uncommitted: Make some code changes first
- For --base: Ensure branch exists and has diverged
- For --commit: Verify commit SHA exists
```
### Invalid Branch
```bash
# Show available branches
git branch -a --list | head -20
```
### Invalid Commit
```bash
# Show recent commits
git log --oneline -10
```
## Integration Notes
- Uses `ccw cli --tool codex --mode review` endpoint
- Model passed via prompt (codex uses `-c model=` internally)
- Target flags (`--uncommitted`, `--base`, `--commit`) passed through to codex
- Prompt follows standard ccw cli template format for consistency
## Validation Constraints
**IMPORTANT: Target flags and prompt are mutually exclusive**
The codex CLI has a constraint where target flags (`--uncommitted`, `--base`, `--commit`) cannot be used with a positional `[PROMPT]` argument:
```
error: the argument '--uncommitted' cannot be used with '[PROMPT]'
error: the argument '--base <BRANCH>' cannot be used with '[PROMPT]'
error: the argument '--commit <SHA>' cannot be used with '[PROMPT]'
```
**Behavior:**
- When ANY target flag is specified, ccw cli automatically skips template concatenation (systemRules/roles)
- The review uses codex's default review behavior for the specified target
- Custom prompts are only supported WITHOUT target flags (reviews uncommitted changes by default)
**Valid combinations:**
| Command | Result |
|---------|--------|
| `codex review "Focus on security"` | ✓ Custom prompt, reviews uncommitted (default) |
| `codex review --uncommitted` | ✓ No prompt, uses default review |
| `codex review --base main` | ✓ No prompt, uses default review |
| `codex review --commit abc123` | ✓ No prompt, uses default review |
| `codex review --uncommitted "prompt"` | ✗ Invalid - mutually exclusive |
| `codex review --base main "prompt"` | ✗ Invalid - mutually exclusive |
| `codex review --commit abc123 "prompt"` | ✗ Invalid - mutually exclusive |
**Examples:**
```bash
# ✓ Valid: prompt only (reviews uncommitted by default)
ccw cli -p "Focus on security" --tool codex --mode review
# ✓ Valid: target flag only (no prompt)
ccw cli --tool codex --mode review --uncommitted
ccw cli --tool codex --mode review --base main
ccw cli --tool codex --mode review --commit abc123
# ✗ Invalid: target flag with prompt (will fail)
ccw cli -p "Review this" --tool codex --mode review --uncommitted
ccw cli -p "Review this" --tool codex --mode review --base main
ccw cli -p "Review this" --tool codex --mode review --commit abc123
```
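As an illustration of this constraint, a small wrapper can reject invalid combinations before shelling out to `ccw cli`. This is a hedged sketch only: `buildReviewArgs` is a hypothetical helper, not part of ccw or codex.
```javascript
// Hypothetical helper: enforce the mutual-exclusion rule before invoking `ccw cli`.
function buildReviewArgs({ prompt, uncommitted, base, commit }) {
  const targetFlags = [];
  if (uncommitted) targetFlags.push('--uncommitted');
  if (base) targetFlags.push('--base', base);
  if (commit) targetFlags.push('--commit', commit);

  if (prompt && targetFlags.length > 0) {
    throw new Error('Target flags (--uncommitted/--base/--commit) cannot be combined with a prompt');
  }

  const args = ['cli', '--tool', 'codex', '--mode', 'review'];
  if (prompt) args.push('-p', prompt); // prompt-only: reviews uncommitted changes by default
  return args.concat(targetFlags);     // flag-only: codex default review for the target
}

// buildReviewArgs({ prompt: 'Focus on security' })
//   → ['cli', '--tool', 'codex', '--mode', 'review', '-p', 'Focus on security']
// buildReviewArgs({ base: 'main' })
//   → ['cli', '--tool', 'codex', '--mode', 'review', '--base', 'main']
```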

@@ -0,0 +1,513 @@
---
name: codex-coordinator
description: Command orchestration tool for Codex - analyze requirements, recommend command chain, execute sequentially with state persistence
argument-hint: "TASK=\"<task description>\" [--depth=standard|deep] [--auto-confirm] [--verbose]"
---
# Codex Coordinator Command
Interactive orchestration tool for Codex commands: analyze task → discover commands → recommend chain → execute sequentially → track state.
**Execution Model**: Intelligent agent-driven workflow. Claude analyzes each phase and orchestrates command execution.
## Core Concept: Minimum Execution Units
### What is a Minimum Execution Unit?
**Definition**: A set of commands that must execute together as an atomic group to achieve a meaningful workflow milestone. Splitting these commands breaks the logical flow and creates incomplete states.
**Why This Matters**:
- **Prevents Incomplete States**: Avoid stopping after task generation without execution
- **User Experience**: User gets complete results, not intermediate artifacts requiring manual follow-up
- **Workflow Integrity**: Maintains logical coherence of multi-step operations
### Codex Minimum Execution Units
**Planning + Execution Units**:
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Quick Implementation** | lite-plan-a → execute | Lightweight plan and immediate execution | Working code |
| **Bug Fix** | lite-fix → execute | Quick bug diagnosis and fix execution | Fixed code |
| **Issue Workflow** | issue-discover → issue-plan → issue-queue → issue-execute | Complete issue lifecycle | Completed issues |
| **Discovery & Analysis** | issue-discover → issue-discover-by-prompt | Issue discovery with multiple perspectives | Generated issues |
| **Brainstorm to Execution** | brainstorm-with-file → execute | Brainstorm ideas then implement | Working code |
**With-File Workflows** (documented units):
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Brainstorm With File** | brainstorm-with-file | Multi-perspective ideation with documentation | brainstorm.md |
| **Debug With File** | debug-with-file | Hypothesis-driven debugging with documentation | understanding.md |
| **Analyze With File** | analyze-with-file | Collaborative analysis with documentation | discussion.md |
| **Clean & Analyze** | clean → analyze-with-file | Cleanup then analyze | Cleaned code + analysis |
### Command-to-Unit Mapping
| Command | Precedes | Atomic Units |
|---------|----------|--------------|
| lite-plan-a | execute, brainstorm-with-file | Quick Implementation |
| lite-fix | execute | Bug Fix |
| issue-discover | issue-plan | Issue Workflow |
| issue-plan | issue-queue | Issue Workflow |
| issue-queue | issue-execute | Issue Workflow |
| brainstorm-with-file | execute, issue-execute | Brainstorm to Execution |
| debug-with-file | execute | Debug With File |
| analyze-with-file | (standalone) | Analyze With File |
| clean | analyze-with-file, execute | Clean & Analyze |
| quick-plan-with-file | execute | Quick Planning with File |
| merge-plans-with-file | execute | Merge Multiple Plans |
| unified-execute-with-file | (terminal) | Execute with File Tracking |
### Atomic Group Rules
1. **Never Split Units**: Coordinator must recommend complete units, not partial chains
2. **Multi-Unit Participation**: Some commands can participate in multiple units
3. **User Override**: User can explicitly request partial execution (advanced mode)
4. **Visualization**: Pipeline view shows unit boundaries with 【 】markers
5. **Validation**: Before execution, verify all unit commands are included (a sketch follows the example below)
**Example Pipeline with Units**:
```
Requirement → 【lite-plan-a → execute】→ Code → 【issue-discover → issue-plan → issue-queue → issue-execute】→ Done
              └── Quick Implementation ──┘      └─────────── Issue Workflow ───────────┘
```
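As referenced in rule 5 above, a minimal validation sketch. `UNIT_DEFINITIONS` is a hypothetical table distilled from the unit tables in this section; the coordinator could use something like it to confirm a chain covers its unit completely before execution:
```javascript
// Hypothetical unit table distilled from the tables above.
const UNIT_DEFINITIONS = {
  'Quick Implementation': ['lite-plan-a', 'execute'],
  'Bug Fix': ['lite-fix', 'execute'],
  'Issue Workflow': ['issue-discover', 'issue-plan', 'issue-queue', 'issue-execute'],
  'Brainstorm to Execution': ['brainstorm-with-file', 'execute']
};

// True only when every command of the unit appears in the recommended chain.
function isUnitComplete(unitName, chainCommands) {
  const required = UNIT_DEFINITIONS[unitName] || [];
  return required.every(cmd => chainCommands.includes(cmd));
}

// isUnitComplete('Issue Workflow', ['issue-discover', 'issue-plan']) → false (partial chain)
// isUnitComplete('Bug Fix', ['lite-fix', 'execute'])                 → true
```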
## 3-Phase Workflow
### Phase 1: Analyze Requirements
Parse task to extract: goal, scope, complexity, and task type.
```javascript
function analyzeRequirements(taskDescription) {
return {
goal: extractMainGoal(taskDescription), // e.g., "Fix login bug"
scope: extractScope(taskDescription), // e.g., ["auth", "login"]
complexity: determineComplexity(taskDescription), // 'simple' | 'medium' | 'complex'
task_type: detectTaskType(taskDescription) // See task type patterns below
};
}
// Task Type Detection Patterns
function detectTaskType(text) {
// Priority order (first match wins)
if (/fix|bug|error|crash|fail|debug|diagnose/.test(text)) return 'bugfix';
if (/生成|generate|discover|找出|issue|问题/.test(text)) return 'discovery';
if (/plan|规划|设计|design|analyze|分析/.test(text)) return 'analysis';
if (/清理|cleanup|clean|refactor|重构/.test(text)) return 'cleanup';
if (/头脑|brainstorm|创意|ideation/.test(text)) return 'brainstorm';
if (/合并|merge|combine|batch/.test(text)) return 'batch-planning';
return 'feature'; // Default
}
// Complexity Assessment
function determineComplexity(text) {
let score = 0;
if (/refactor|重构|migrate|迁移|architect|架构|system|系统/.test(text)) score += 2;
if (/multiple|多个|across|跨|all|所有|entire|整个/.test(text)) score += 2;
if (/integrate|集成|api|database|数据库/.test(text)) score += 1;
if (/security|安全|performance|性能|scale|扩展/.test(text)) score += 1;
return score >= 4 ? 'complex' : score >= 2 ? 'medium' : 'simple';
}
```
**Display to user**:
```
Analysis Complete:
Goal: [extracted goal]
Scope: [identified areas]
Complexity: [level]
Task Type: [detected type]
```
### Phase 2: Discover Commands & Recommend Chain
Dynamic command chain assembly using task type and complexity matching.
#### Available Codex Commands (Discovery)
All commands from `~/.codex/prompts/`:
- **Planning**: @~/.codex/prompts/lite-plan-a.md, @~/.codex/prompts/lite-plan-b.md, @~/.codex/prompts/lite-plan-c.md, @~/.codex/prompts/quick-plan-with-file.md, @~/.codex/prompts/merge-plans-with-file.md
- **Execution**: @~/.codex/prompts/execute.md, @~/.codex/prompts/unified-execute-with-file.md
- **Bug Fixes**: @~/.codex/prompts/lite-fix.md, @~/.codex/prompts/debug-with-file.md
- **Discovery**: @~/.codex/prompts/issue-discover.md, @~/.codex/prompts/issue-discover-by-prompt.md, @~/.codex/prompts/issue-plan.md, @~/.codex/prompts/issue-queue.md, @~/.codex/prompts/issue-execute.md
- **Analysis**: @~/.codex/prompts/analyze-with-file.md
- **Brainstorming**: @~/.codex/prompts/brainstorm-with-file.md, @~/.codex/prompts/brainstorm-to-cycle.md
- **Cleanup**: @~/.codex/prompts/clean.md, @~/.codex/prompts/compact.md
#### Recommendation Algorithm
```javascript
async function recommendCommandChain(analysis) {
  // Step 1: Determine the base flow and depth for this task type
  const { flow, depth } = determinePortFlow(analysis.task_type, analysis.complexity);
  // Step 2: Claude selects the final command sequence based on command traits and task characteristics
  const chain = selectChainByTaskType(analysis);
return chain;
}
// Port flow for each task type
function determinePortFlow(taskType, complexity) {
const flows = {
'bugfix': { flow: ['lite-fix', 'execute'], depth: complexity === 'complex' ? 'deep' : 'standard' },
'discovery': { flow: ['issue-discover', 'issue-plan', 'issue-queue', 'issue-execute'], depth: 'standard' },
'analysis': { flow: ['analyze-with-file'], depth: complexity === 'complex' ? 'deep' : 'standard' },
'cleanup': { flow: ['clean'], depth: 'standard' },
'brainstorm': { flow: ['brainstorm-with-file', 'execute'], depth: complexity === 'complex' ? 'deep' : 'standard' },
'batch-planning': { flow: ['merge-plans-with-file', 'execute'], depth: 'standard' },
'feature': { flow: complexity === 'complex' ? ['lite-plan-b'] : ['lite-plan-a', 'execute'], depth: complexity === 'complex' ? 'deep' : 'standard' }
};
return flows[taskType] || flows['feature'];
}
```
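For example, applying the mapping above:
```javascript
determinePortFlow('bugfix', 'complex');
// → { flow: ['lite-fix', 'execute'], depth: 'deep' }

determinePortFlow('feature', 'simple');
// → { flow: ['lite-plan-a', 'execute'], depth: 'standard' }
```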
#### Display to User
```
Recommended Command Chain:
Pipeline:
Requirement → @~/.codex/prompts/lite-plan-a.md → Plan → @~/.codex/prompts/execute.md → Code complete
Commands:
1. @~/.codex/prompts/lite-plan-a.md
2. @~/.codex/prompts/execute.md
Proceed? [Confirm / Show Details / Adjust / Cancel]
```
### Phase 2b: Get User Confirmation
Ask user for confirmation before proceeding with execution.
```javascript
async function getUserConfirmation(chain) {
const response = await AskUserQuestion({
questions: [{
question: 'Proceed with this command chain?',
header: 'Confirm Chain',
multiSelect: false,
options: [
{ label: 'Confirm and execute', description: 'Proceed with commands' },
{ label: 'Show details', description: 'View each command' },
{ label: 'Adjust chain', description: 'Remove or reorder' },
{ label: 'Cancel', description: 'Abort' }
]
}]
});
return response;
}
```
### Phase 3: Execute Sequential Command Chain
```javascript
async function executeCommandChain(chain, analysis) {
const sessionId = `codex-coord-${Date.now()}`;
const stateDir = `.workflow/.codex-coordinator/${sessionId}`;
// Create state directory
const state = {
session_id: sessionId,
status: 'running',
created_at: new Date().toISOString(),
analysis: analysis,
command_chain: chain.map((cmd, idx) => ({ ...cmd, index: idx, status: 'pending' })),
execution_results: [],
};
// Save initial state
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
for (let i = 0; i < chain.length; i++) {
const cmd = chain[i];
console.log(`[${i+1}/${chain.length}] Executing: @~/.codex/prompts/${cmd.name}.md`);
// Update status to running
state.command_chain[i].status = 'running';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
try {
// Build command with parameters using full path
let commandStr = `@~/.codex/prompts/${cmd.name}.md`;
// Add parameters based on previous results and task context
if (i > 0 && state.execution_results.length > 0) {
const lastResult = state.execution_results[state.execution_results.length - 1];
commandStr += ` --resume="${lastResult.session_id || lastResult.artifact}"`;
}
// For analysis-based commands, add depth parameter
if (analysis.complexity === 'complex' && (cmd.name.includes('analyze') || cmd.name.includes('plan'))) {
commandStr += ` --depth=deep`;
}
// Add task description for planning commands
if (cmd.type === 'planning' && i === 0) {
commandStr += ` TASK="${analysis.goal}"`;
}
// Execute command via Bash (spawning as background task)
// Format: @~/.codex/prompts/command-name.md <parameters>
// Note: This simulates the execution; actual implementation uses hook callbacks
console.log(`Executing: ${commandStr}`);
// Save execution record
state.execution_results.push({
index: i,
command: cmd.name,
status: 'in-progress',
started_at: new Date().toISOString(),
session_id: null,
artifact: null
});
state.command_chain[i].status = 'completed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
console.log(`[${i+1}/${chain.length}] ✓ Completed: @~/.codex/prompts/${cmd.name}.md`);
} catch (error) {
state.command_chain[i].status = 'failed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
console.log(`❌ Command failed: ${error.message}`);
break;
}
}
state.status = state.command_chain.some(c => c.status === 'failed') ? 'failed' : 'completed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
console.log(`\n${state.status === 'completed' ? '✅ Orchestration Complete' : '❌ Orchestration Failed'}: ${state.session_id}`);
return state;
}
```
## State File Structure
**Location**: `.workflow/.codex-coordinator/{session_id}/state.json`
```json
{
"session_id": "codex-coord-20250129-143025",
"status": "running|waiting|completed|failed",
"created_at": "2025-01-29T14:30:25Z",
"updated_at": "2025-01-29T14:35:45Z",
"analysis": {
"goal": "Fix login authentication bug",
"scope": ["auth", "login"],
"complexity": "medium",
"task_type": "bugfix"
},
"command_chain": [
{
"index": 0,
"name": "lite-fix",
"type": "bugfix",
"status": "completed"
},
{
"index": 1,
"name": "execute",
"type": "execution",
"status": "pending"
}
],
"execution_results": [
{
"index": 0,
"command": "lite-fix",
"status": "completed",
"started_at": "2025-01-29T14:30:25Z",
"session_id": "fix-login-2025-01-29",
"artifact": ".workflow/.lite-fix/fix-login-2025-01-29/fix-plan.json"
}
]
}
```
### Status Values
- `running`: Orchestrator actively executing
- `waiting`: Paused, waiting for external events
- `completed`: All commands finished successfully
- `failed`: Error occurred or user aborted
## Task Type Routing (Pipeline Summary)
**Note**: 【 】 marks a Minimum Execution Unit; the commands inside must execute together.
| Task Type | Pipeline | Minimum Units |
|-----------|----------|---|
| **bugfix** | Bug report →【@~/.codex/prompts/lite-fix.md → @~/.codex/prompts/execute.md】→ Fixed code | Bug Fix |
| **discovery** | Requirement →【@~/.codex/prompts/issue-discover.md → @~/.codex/prompts/issue-plan.md → @~/.codex/prompts/issue-queue.md → @~/.codex/prompts/issue-execute.md】→ Completed issues | Issue Workflow |
| **analysis** | Requirement → @~/.codex/prompts/analyze-with-file.md → Analysis report | Analyze With File |
| **cleanup** | Codebase → @~/.codex/prompts/clean.md → Cleanup complete | Cleanup |
| **brainstorm** | Topic →【@~/.codex/prompts/brainstorm-with-file.md → @~/.codex/prompts/execute.md】→ Implemented code | Brainstorm to Execution |
| **batch-planning** | Requirement set →【@~/.codex/prompts/merge-plans-with-file.md → @~/.codex/prompts/execute.md】→ Code complete | Merge Multiple Plans |
| **feature** (simple) | Requirement →【@~/.codex/prompts/lite-plan-a.md → @~/.codex/prompts/execute.md】→ Code | Quick Implementation |
| **feature** (complex) | Requirement → @~/.codex/prompts/lite-plan-b.md → Detailed plan → @~/.codex/prompts/execute.md → Code | Complex Planning |
## Available Commands Reference
### Planning Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **lite-plan-a** | Lightweight merged-mode planning | `@~/.codex/prompts/lite-plan-a.md TASK="..."` | plan.json |
| **lite-plan-b** | Multi-angle exploration planning | `@~/.codex/prompts/lite-plan-b.md TASK="..."` | plan.json |
| **lite-plan-c** | Parallel angle planning | `@~/.codex/prompts/lite-plan-c.md TASK="..."` | plan.json |
| **quick-plan-with-file** | Quick planning with file tracking | `@~/.codex/prompts/quick-plan-with-file.md TASK="..."` | plan + docs |
| **merge-plans-with-file** | Merge multiple plans | `@~/.codex/prompts/merge-plans-with-file.md PLANS="..."` | merged-plan.json |
### Execution Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **execute** | Execute tasks from plan | `@~/.codex/prompts/execute.md SESSION=".../plan/"` | Working code |
| **unified-execute-with-file** | Execute with file tracking | `@~/.codex/prompts/unified-execute-with-file.md SESSION="..."` | Code + tracking |
### Bug Fix Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **lite-fix** | Quick bug diagnosis and planning | `@~/.codex/prompts/lite-fix.md BUG="..."` | fix-plan.json |
| **debug-with-file** | Hypothesis-driven debugging | `@~/.codex/prompts/debug-with-file.md BUG="..."` | understanding.md |
### Discovery Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **issue-discover** | Multi-perspective issue discovery | `@~/.codex/prompts/issue-discover.md PATTERN="src/**"` | issues.jsonl |
| **issue-discover-by-prompt** | Prompt-based discovery | `@~/.codex/prompts/issue-discover-by-prompt.md PROMPT="..."` | issues |
| **issue-plan** | Plan issue solutions | `@~/.codex/prompts/issue-plan.md --all-pending` | issue-plans.json |
| **issue-queue** | Form execution queue | `@~/.codex/prompts/issue-queue.md --from-plan` | queue.json |
| **issue-execute** | Execute issue queue | `@~/.codex/prompts/issue-execute.md QUEUE="..."` | Completed |
### Analysis Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **analyze-with-file** | Collaborative analysis | `@~/.codex/prompts/analyze-with-file.md TOPIC="..."` | discussion.md |
### Brainstorm Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **brainstorm-with-file** | Multi-perspective brainstorming | `@~/.codex/prompts/brainstorm-with-file.md TOPIC="..."` | brainstorm.md |
| **brainstorm-to-cycle** | Bridge brainstorm to execution | `@~/.codex/prompts/brainstorm-to-cycle.md` | Executable plan |
### Utility Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **clean** | Intelligent code cleanup | `@~/.codex/prompts/clean.md` | Cleaned code |
| **compact** | Compact session memory | `@~/.codex/prompts/compact.md SESSION="..."` | Compressed state |
## Execution Flow
```
User Input: TASK="..."
Phase 1: analyzeRequirements(task)
Phase 2: recommendCommandChain(analysis)
Display pipeline and commands
User Confirmation
Phase 3: executeCommandChain(chain, analysis)
├─ For each command:
│ ├─ Update state to "running"
│ ├─ Build command string with parameters
│ ├─ Execute @command with parameters
│ ├─ Save execution results
│ └─ Update state to "completed"
Output completion summary
```
## Key Design Principles
1. **Atomic Execution** - Never split minimum execution units
2. **State Persistence** - All state saved to JSON
3. **User Control** - Confirmation before execution
4. **Context Passing** - Parameters chain across commands
5. **Resume Support** - Can resume from state.json
6. **Intelligent Routing** - Task type determines command chain
7. **Complexity Awareness** - Different paths for simple vs complex tasks
## Command Invocation Format
**Format**: `@~/.codex/prompts/<command-name>.md <parameters>`
**Examples**:
```bash
@~/.codex/prompts/lite-plan-a.md TASK="Implement user authentication"
@~/.codex/prompts/execute.md SESSION=".workflow/.lite-plan/..."
@~/.codex/prompts/lite-fix.md BUG="Login fails with 404 error"
@~/.codex/prompts/issue-discover.md PATTERN="src/auth/**"
@~/.codex/prompts/brainstorm-with-file.md TOPIC="Improve user onboarding"
```
## Error Handling
| Situation | Action |
|-----------|--------|
| Unknown task type | Default to feature implementation |
| Command not found | Error: command not available |
| Execution fails | Report error, offer retry or skip (see sketch below) |
| Invalid parameters | Validate and ask for correction |
| Circular dependency | Detect and report |
| All commands fail | Report and suggest manual intervention |
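A minimal sketch of the retry/skip flow from the table above, reusing the `AskUserQuestion` pattern from Phase 2b; the option labels and the `handleCommandFailure` helper are illustrative, not a fixed API:
```javascript
async function handleCommandFailure(cmd, error) {
  console.log(`❌ ${cmd.name} failed: ${error.message}`);
  const choice = await AskUserQuestion({
    questions: [{
      question: `Command ${cmd.name} failed. How should the chain proceed?`,
      header: 'Failure',
      multiSelect: false,
      options: [
        { label: 'Retry', description: 'Run the same command again' },
        { label: 'Skip', description: 'Mark this step as skipped and continue' },
        { label: 'Abort', description: 'Stop the chain and mark the session as failed' }
      ]
    }]
  });
  return choice; // caller decides whether to retry the loop iteration, skip it, or break out
}
```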
## Session Management
**Resume Previous Session**:
```
1. Find session in .workflow/.codex-coordinator/
2. Load state.json
3. Identify last completed command
4. Restart from next pending command
```
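A minimal sketch of this resume procedure, assuming the state file layout shown earlier; `resumeSession` is illustrative:
```javascript
function resumeSession(sessionId) {
  const stateDir = `.workflow/.codex-coordinator/${sessionId}`;
  const state = JSON.parse(Read(`${stateDir}/state.json`));

  // Step 3: identify the first command that has not completed yet
  const nextIndex = state.command_chain.findIndex(c => c.status !== 'completed');
  if (nextIndex === -1) {
    console.log('Nothing to resume: all commands already completed.');
    return { state, nextIndex: null };
  }

  // Step 4: re-enter Phase 3 starting at nextIndex, reusing the persisted analysis and chain
  console.log(`Resuming ${sessionId} from step ${nextIndex + 1}: ${state.command_chain[nextIndex].name}`);
  return { state, nextIndex };
}
```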
**View Session Progress**:
```
cat .workflow/.codex-coordinator/{session-id}/state.json
```
---
## Execution Instructions
The coordinator workflow follows these steps:
1. **Parse Input**: Extract task description from TASK parameter
2. **Analyze**: Determine goal, scope, complexity, and task type
3. **Recommend**: Build optimal command chain based on analysis
4. **Confirm**: Display pipeline and request user approval
5. **Execute**: Run commands sequentially with state tracking
6. **Report**: Display final results and artifacts
To use this coordinator, invoke it as a Claude Code command (not a Codex command). From the Claude Code CLI, the underlying Codex commands are invoked like:
```bash
@~/.codex/prompts/lite-plan-a.md TASK="Your task description"
```
Or with options:
```bash
@~/.codex/prompts/lite-plan-a.md TASK="..." --depth=deep
```
This coordinator orchestrates such Codex commands based on your task requirements.

@@ -1,93 +0,0 @@
---
name: enhance-prompt
description: Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection
argument-hint: "user input to enhance"
---
## Overview
Systematically enhances user prompts by leveraging session memory context and intent analysis, translating ambiguous requests into actionable specifications.
## Core Protocol
**Enhancement Pipeline:**
`Intent Translation` → `Context Integration` → `Structured Output`
**Context Sources:**
- Session memory (conversation history, previous analysis)
- Implicit technical requirements
- User intent patterns
## Enhancement Rules
### Intent Translation
| User Says | Translate To | Focus |
|-----------|--------------|-------|
| "fix" | Debug and resolve | Root cause → preserve behavior |
| "improve" | Enhance/optimize | Performance/readability |
| "add" | Implement feature | Integration + edge cases |
| "refactor" | Restructure quality | Maintain behavior |
| "update" | Modernize | Version compatibility |
### Context Integration Strategy
**Session Memory:**
- Reference recent conversation context
- Reuse previously identified patterns
- Build on established understanding
- Infer technical requirements from discussion
**Example:**
```bash
# User: "add login"
# Session Memory: Previous auth discussion, JWT mentioned
# Inferred: JWT-based auth, integrate with existing session management
# Action: Implement JWT authentication with session persistence
```
## Output Structure
```bash
INTENT: [Clear technical goal]
CONTEXT: [Session memory + codebase patterns]
ACTION: [Specific implementation steps]
ATTENTION: [Critical constraints]
```
### Output Examples
**Example 1:**
```bash
# Input: "fix login button"
INTENT: Debug non-functional login button
CONTEXT: From session - OAuth flow discussed, known state issue
ACTION: Check event binding → verify state updates → test auth flow
ATTENTION: Preserve existing OAuth integration
```
**Example 2:**
```bash
# Input: "refactor payment code"
INTENT: Restructure payment module for maintainability
CONTEXT: Session memory - PCI compliance requirements, Stripe integration patterns
ACTION: Extract reusable validators → isolate payment gateway logic → maintain adapter pattern
ATTENTION: Zero behavior change, maintain PCI compliance, full test coverage
```
## Enhancement Triggers
- Ambiguous language: "fix", "improve", "clean up"
- Vague requests requiring clarification
- Complex technical requirements
- Architecture changes
- Critical systems: auth, payment, security
- Multi-step refactoring
## Key Principles
1. **Session Memory First**: Leverage conversation context and established understanding
2. **Context Reuse**: Build on previous discussions and decisions
3. **Clear Output**: Structured, actionable specifications
4. **Intent Clarification**: Transform vague requests into specific technical goals
5. **Avoid Duplication**: Reference existing context, don't repeat

@@ -0,0 +1,675 @@
# Flow Template Generator
Generate workflow templates for meta-skill/flow-coordinator.
## Usage
```
/meta-skill:flow-create [template-name] [--output <path>]
```
**Examples**:
```bash
/meta-skill:flow-create bugfix-v2
/meta-skill:flow-create my-workflow --output ~/.claude/skills/my-skill/templates/
```
## Execution Flow
```
User Input → Phase 1: Template Design → Phase 2: Step Definition → Phase 3: Generate JSON
                      ↓                          ↓                          ↓
              Name + Description        Define workflow steps        Write template file
```
---
## Phase 1: Template Design
Gather basic template information:
```javascript
async function designTemplate(input) {
const templateName = parseTemplateName(input) || await askTemplateName();
const metadata = await AskUserQuestion({
questions: [
{
question: "What is the purpose of this workflow template?",
header: "Purpose",
options: [
{ label: "Feature Development", description: "Implement new features with planning and testing" },
{ label: "Bug Fix", description: "Diagnose and fix bugs with verification" },
{ label: "TDD Development", description: "Test-driven development workflow" },
{ label: "Code Review", description: "Review cycle with findings and fixes" },
{ label: "Testing", description: "Test generation and validation" },
{ label: "Issue Workflow", description: "Complete issue lifecycle (discover → plan → queue → execute)" },
{ label: "With-File Workflow", description: "Documented exploration (brainstorm/debug/analyze)" },
{ label: "Custom", description: "Define custom workflow purpose" }
],
multiSelect: false
},
{
question: "What complexity level?",
header: "Level",
options: [
{ label: "Level 1 (Rapid)", description: "1-2 steps, ultra-lightweight (lite-lite-lite)" },
{ label: "Level 2 (Lightweight)", description: "2-4 steps, quick implementation" },
{ label: "Level 3 (Standard)", description: "4-6 steps, with verification and testing" },
{ label: "Level 4 (Full)", description: "6+ steps, brainstorm + full workflow" }
],
multiSelect: false
}
]
});
return {
name: templateName,
description: generateDescription(templateName, metadata.Purpose),
level: parseLevel(metadata.Level),
purpose: metadata.Purpose
};
}
```
---
## Phase 2: Step Definition
### Step 2.1: Select Command Category
```javascript
async function selectCommandCategory() {
return await AskUserQuestion({
questions: [{
question: "Select command category",
header: "Category",
options: [
{ label: "Planning", description: "lite-plan, plan, multi-cli-plan, tdd-plan, quick-plan-with-file" },
{ label: "Execution", description: "lite-execute, execute, unified-execute-with-file" },
{ label: "Testing", description: "test-fix-gen, test-cycle-execute, test-gen, tdd-verify" },
{ label: "Review", description: "review-session-cycle, review-module-cycle, review-cycle-fix" },
{ label: "Bug Fix", description: "lite-fix, debug-with-file" },
{ label: "Brainstorm", description: "brainstorm-with-file, brainstorm:auto-parallel" },
{ label: "Analysis", description: "analyze-with-file" },
{ label: "Issue", description: "discover, plan, queue, execute, from-brainstorm, convert-to-plan" },
{ label: "Utility", description: "clean, init, replan, status" }
],
multiSelect: false
}]
});
}
```
### Step 2.2: Select Specific Command
```javascript
async function selectCommand(category) {
const commandOptions = {
'Planning': [
{ label: "/workflow:lite-plan", description: "Lightweight merged-mode planning" },
{ label: "/workflow:plan", description: "Full planning with architecture design" },
{ label: "/workflow:multi-cli-plan", description: "Multi-CLI collaborative planning (Gemini+Codex+Claude)" },
{ label: "/workflow:tdd-plan", description: "TDD workflow planning with Red-Green-Refactor" },
{ label: "/workflow:quick-plan-with-file", description: "Rapid planning with minimal docs" },
{ label: "/workflow:plan-verify", description: "Verify plan against requirements" },
{ label: "/workflow:replan", description: "Update plan and execute changes" }
],
'Execution': [
{ label: "/workflow:lite-execute", description: "Execute from in-memory plan" },
{ label: "/workflow:execute", description: "Execute from planning session" },
{ label: "/workflow:unified-execute-with-file", description: "Universal execution engine" },
{ label: "/workflow:lite-lite-lite", description: "Ultra-lightweight multi-tool execution" }
],
'Testing': [
{ label: "/workflow:test-fix-gen", description: "Generate test tasks for specific issues" },
{ label: "/workflow:test-cycle-execute", description: "Execute iterative test-fix cycle (>=95% pass)" },
{ label: "/workflow:test-gen", description: "Generate comprehensive test suite" },
{ label: "/workflow:tdd-verify", description: "Verify TDD workflow compliance" }
],
'Review': [
{ label: "/workflow:review-session-cycle", description: "Session-based multi-dimensional code review" },
{ label: "/workflow:review-module-cycle", description: "Module-focused code review" },
{ label: "/workflow:review-cycle-fix", description: "Fix review findings with prioritization" },
{ label: "/workflow:review", description: "Post-implementation review" }
],
'Bug Fix': [
{ label: "/workflow:lite-fix", description: "Lightweight bug diagnosis and fix" },
{ label: "/workflow:debug-with-file", description: "Hypothesis-driven debugging with documentation" }
],
'Brainstorm': [
{ label: "/workflow:brainstorm-with-file", description: "Multi-perspective ideation with documentation" },
{ label: "/workflow:brainstorm:auto-parallel", description: "Parallel multi-role brainstorming" }
],
'Analysis': [
{ label: "/workflow:analyze-with-file", description: "Collaborative analysis with documentation" }
],
'Issue': [
{ label: "/issue:discover", description: "Multi-perspective issue discovery" },
{ label: "/issue:discover-by-prompt", description: "Prompt-based issue discovery with Gemini" },
{ label: "/issue:plan", description: "Plan issue solutions" },
{ label: "/issue:queue", description: "Form execution queue with conflict analysis" },
{ label: "/issue:execute", description: "Execute issue queue with DAG orchestration" },
{ label: "/issue:from-brainstorm", description: "Convert brainstorm to issue" },
{ label: "/issue:convert-to-plan", description: "Convert planning artifacts to issue solutions" }
],
'Utility': [
{ label: "/workflow:clean", description: "Intelligent code cleanup" },
{ label: "/workflow:init", description: "Initialize project-level state" },
{ label: "/workflow:replan", description: "Interactive workflow replanning" },
{ label: "/workflow:status", description: "Generate workflow status views" }
]
};
return await AskUserQuestion({
questions: [{
question: `Select ${category} command`,
header: "Command",
options: commandOptions[category] || commandOptions['Planning'],
multiSelect: false
}]
});
}
```
### Step 2.3: Select Execution Unit
```javascript
async function selectExecutionUnit() {
return await AskUserQuestion({
questions: [{
question: "Select execution unit (atomic command group)",
header: "Unit",
options: [
// Planning + Execution Units
{ label: "quick-implementation", description: "【lite-plan → lite-execute】" },
{ label: "multi-cli-planning", description: "【multi-cli-plan → lite-execute】" },
{ label: "full-planning-execution", description: "【plan → execute】" },
{ label: "verified-planning-execution", description: "【plan → plan-verify → execute】" },
{ label: "replanning-execution", description: "【replan → execute】" },
{ label: "tdd-planning-execution", description: "【tdd-plan → execute】" },
// Testing Units
{ label: "test-validation", description: "【test-fix-gen → test-cycle-execute】" },
{ label: "test-generation-execution", description: "【test-gen → execute】" },
// Review Units
{ label: "code-review", description: "【review-*-cycle → review-cycle-fix】" },
// Bug Fix Units
{ label: "bug-fix", description: "【lite-fix → lite-execute】" },
// Issue Units
{ label: "issue-workflow", description: "【discover → plan → queue → execute】" },
{ label: "rapid-to-issue", description: "【lite-plan → convert-to-plan → queue → execute】" },
{ label: "brainstorm-to-issue", description: "【from-brainstorm → queue → execute】" },
// With-File Units (self-contained)
{ label: "brainstorm-with-file", description: "Self-contained brainstorming workflow" },
{ label: "debug-with-file", description: "Self-contained debugging workflow" },
{ label: "analyze-with-file", description: "Self-contained analysis workflow" },
// Standalone
{ label: "standalone", description: "Single command, no atomic grouping" }
],
multiSelect: false
}]
});
}
```
### Step 2.4: Select Execution Mode
```javascript
async function selectExecutionMode() {
return await AskUserQuestion({
questions: [{
question: "Execution mode for this step?",
header: "Mode",
options: [
{ label: "mainprocess", description: "Run in main process (blocking, synchronous)" },
{ label: "async", description: "Run asynchronously (background, hook callbacks)" }
],
multiSelect: false
}]
});
}
```
### Complete Step Definition Flow
```javascript
async function defineSteps(templateDesign) {
// Suggest steps based on purpose
const suggestedSteps = getSuggestedSteps(templateDesign.purpose);
const customize = await AskUserQuestion({
questions: [{
question: "Use suggested steps or customize?",
header: "Steps",
options: [
{ label: "Use Suggested", description: `Suggested: ${suggestedSteps.map(s => s.cmd).join(' → ')}` },
{ label: "Customize", description: "Modify or add custom steps" },
{ label: "Start Empty", description: "Define all steps from scratch" }
],
multiSelect: false
}]
});
if (customize.Steps === "Use Suggested") {
return suggestedSteps;
}
// Interactive step definition
const steps = [];
let addMore = true;
while (addMore) {
const category = await selectCommandCategory();
const command = await selectCommand(category.Category);
const unit = await selectExecutionUnit();
const execMode = await selectExecutionMode();
const contextHint = await askContextHint(command.Command);
steps.push({
cmd: command.Command,
args: command.Command.includes('plan') || command.Command.includes('fix') ? '"{{goal}}"' : undefined,
unit: unit.Unit,
execution: {
type: "slash-command",
mode: execMode.Mode
},
contextHint: contextHint
});
const continueAdding = await AskUserQuestion({
questions: [{
question: `Added step ${steps.length}: ${command.Command}. Add another?`,
header: "Continue",
options: [
{ label: "Add More", description: "Define another step" },
{ label: "Done", description: "Finish step definition" }
],
multiSelect: false
}]
});
addMore = continueAdding.Continue === "Add More";
}
return steps;
}
```
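`getSuggestedSteps` is referenced above but not defined in this file. A minimal sketch, assuming it simply maps the Purpose answer onto one of the suggested step templates listed in the next section (`loadTemplateSteps` is likewise hypothetical):
```javascript
// Hypothetical mapping from the Purpose answer to a suggested template name.
// The concrete step arrays are the ones shown under "Suggested Step Templates" below.
function getSuggestedSteps(purpose) {
  const templateByPurpose = {
    'Feature Development': 'rapid',
    'Bug Fix': 'bugfix',
    'TDD Development': 'tdd',
    'Code Review': 'review',
    'Testing': 'test-fix',
    'Issue Workflow': 'issue',
    'With-File Workflow': 'analyze'
  };
  const name = templateByPurpose[purpose] || 'rapid';
  return loadTemplateSteps(name); // hypothetical loader returning that template's steps array
}
```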
---
## Suggested Step Templates
### Feature Development (Level 2 - Rapid)
```json
{
"name": "rapid",
"description": "Quick implementation with testing",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-plan", "args": "\"{{goal}}\"", "unit": "quick-implementation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create lightweight implementation plan" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "quick-implementation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute implementation based on plan" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle until pass rate >= 95%" }
]
}
```
### Feature Development (Level 3 - Coupled)
```json
{
"name": "coupled",
"description": "Full workflow with verification, review, and testing",
"level": 3,
"steps": [
{ "cmd": "/workflow:plan", "args": "\"{{goal}}\"", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create detailed implementation plan" },
{ "cmd": "/workflow:plan-verify", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify plan against requirements" },
{ "cmd": "/workflow:execute", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute implementation" },
{ "cmd": "/workflow:review-session-cycle", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-dimensional code review" },
{ "cmd": "/workflow:review-cycle-fix", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Fix review findings" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle" }
]
}
```
### Bug Fix (Level 2)
```json
{
"name": "bugfix",
"description": "Bug diagnosis and fix with testing",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-fix", "args": "\"{{goal}}\"", "unit": "bug-fix", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Diagnose and plan bug fix" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "bug-fix", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute bug fix" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate regression tests" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Verify fix with tests" }
]
}
```
### Bug Fix Hotfix (Level 2)
```json
{
"name": "bugfix-hotfix",
"description": "Urgent production bug fix (no tests)",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-fix", "args": "--hotfix \"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Emergency hotfix mode" }
]
}
```
### TDD Development (Level 3)
```json
{
"name": "tdd",
"description": "Test-driven development with Red-Green-Refactor",
"level": 3,
"steps": [
{ "cmd": "/workflow:tdd-plan", "args": "\"{{goal}}\"", "unit": "tdd-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create TDD task chain" },
{ "cmd": "/workflow:execute", "unit": "tdd-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute TDD cycle" },
{ "cmd": "/workflow:tdd-verify", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify TDD compliance" }
]
}
```
### Code Review (Level 3)
```json
{
"name": "review",
"description": "Code review cycle with fixes and testing",
"level": 3,
"steps": [
{ "cmd": "/workflow:review-session-cycle", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-dimensional code review" },
{ "cmd": "/workflow:review-cycle-fix", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Fix review findings" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate tests for fixes" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Verify fixes pass tests" }
]
}
```
### Test Fix (Level 3)
```json
{
"name": "test-fix",
"description": "Fix failing tests",
"level": 3,
"steps": [
{ "cmd": "/workflow:test-fix-gen", "args": "\"{{goal}}\"", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test fix tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle" }
]
}
```
### Issue Workflow (Level Issue)
```json
{
"name": "issue",
"description": "Complete issue lifecycle",
"level": "Issue",
"steps": [
{ "cmd": "/issue:discover", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Discover issues from codebase" },
{ "cmd": "/issue:plan", "args": "--all-pending", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Plan issue solutions" },
{ "cmd": "/issue:queue", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
{ "cmd": "/issue:execute", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
]
}
```
### Rapid to Issue (Level 2.5)
```json
{
"name": "rapid-to-issue",
"description": "Bridge lightweight planning to issue workflow",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-plan", "args": "\"{{goal}}\"", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create lightweight plan" },
{ "cmd": "/issue:convert-to-plan", "args": "--latest-lite-plan -y", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Convert to issue plan" },
{ "cmd": "/issue:queue", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
{ "cmd": "/issue:execute", "args": "--queue auto", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
]
}
```
### Brainstorm to Issue (Level 4)
```json
{
"name": "brainstorm-to-issue",
"description": "Bridge brainstorm session to issue workflow",
"level": 4,
"steps": [
{ "cmd": "/issue:from-brainstorm", "args": "SESSION=\"{{session}}\" --auto", "unit": "brainstorm-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Convert brainstorm to issue" },
{ "cmd": "/issue:queue", "unit": "brainstorm-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
{ "cmd": "/issue:execute", "args": "--queue auto", "unit": "brainstorm-to-issue", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
]
}
```
### With-File: Brainstorm (Level 4)
```json
{
"name": "brainstorm",
"description": "Multi-perspective ideation with documentation",
"level": 4,
"steps": [
{ "cmd": "/workflow:brainstorm-with-file", "args": "\"{{goal}}\"", "unit": "brainstorm-with-file", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-CLI brainstorming with documented diverge-converge cycles" }
]
}
```
### With-File: Debug (Level 3)
```json
{
"name": "debug",
"description": "Hypothesis-driven debugging with documentation",
"level": 3,
"steps": [
{ "cmd": "/workflow:debug-with-file", "args": "\"{{goal}}\"", "unit": "debug-with-file", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Hypothesis-driven debugging with Gemini validation" }
]
}
```
### With-File: Analyze (Level 3)
```json
{
"name": "analyze",
"description": "Collaborative analysis with documentation",
"level": 3,
"steps": [
{ "cmd": "/workflow:analyze-with-file", "args": "\"{{goal}}\"", "unit": "analyze-with-file", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-round collaborative analysis with CLI exploration" }
]
}
```
### Full Workflow (Level 4)
```json
{
"name": "full",
"description": "Complete workflow: brainstorm → plan → execute → test",
"level": 4,
"steps": [
{ "cmd": "/workflow:brainstorm:auto-parallel", "args": "\"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Parallel multi-perspective brainstorming" },
{ "cmd": "/workflow:plan", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create detailed plan from brainstorm" },
{ "cmd": "/workflow:plan-verify", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify plan quality" },
{ "cmd": "/workflow:execute", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute implementation" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate comprehensive tests" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test cycle" }
]
}
```
### Multi-CLI Planning (Level 3)
```json
{
"name": "multi-cli-plan",
"description": "Multi-CLI collaborative planning with cross-verification",
"level": 3,
"steps": [
{ "cmd": "/workflow:multi-cli-plan", "args": "\"{{goal}}\"", "unit": "multi-cli-planning", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Gemini+Codex+Claude collaborative planning" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "multi-cli-planning", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute converged plan" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate tests" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test cycle" }
]
}
```
### Ultra-Lightweight (Level 1)
```json
{
"name": "lite-lite-lite",
"description": "Ultra-lightweight multi-tool execution",
"level": 1,
"steps": [
{ "cmd": "/workflow:lite-lite-lite", "args": "\"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Direct execution with minimal overhead" }
]
}
```
---
## Command Port Reference
Each command has input/output ports for pipeline composition:
| Command | Input Port | Output Port | Atomic Unit |
|---------|------------|-------------|-------------|
| **Planning** |
| lite-plan | requirement | plan | quick-implementation |
| plan | requirement | detailed-plan | full-planning-execution |
| plan-verify | detailed-plan | verified-plan | verified-planning-execution |
| multi-cli-plan | requirement | multi-cli-plan | multi-cli-planning |
| tdd-plan | requirement | tdd-tasks | tdd-planning-execution |
| replan | session, feedback | replan | replanning-execution |
| **Execution** |
| lite-execute | plan, multi-cli-plan, lite-fix | code | (multiple) |
| execute | detailed-plan, verified-plan, replan, tdd-tasks | code | (multiple) |
| **Testing** |
| test-fix-gen | failing-tests, session | test-tasks | test-validation |
| test-cycle-execute | test-tasks | test-passed | test-validation |
| test-gen | code, session | test-tasks | test-generation-execution |
| tdd-verify | code | tdd-verified | standalone |
| **Review** |
| review-session-cycle | code, session | review-verified | code-review |
| review-module-cycle | module-pattern | review-verified | code-review |
| review-cycle-fix | review-findings | fixed-code | code-review |
| **Bug Fix** |
| lite-fix | bug-report | lite-fix | bug-fix |
| debug-with-file | bug-report | understanding-document | debug-with-file |
| **With-File** |
| brainstorm-with-file | exploration-topic | brainstorm-document | brainstorm-with-file |
| analyze-with-file | analysis-topic | discussion-document | analyze-with-file |
| **Issue** |
| issue:discover | codebase | pending-issues | issue-workflow |
| issue:plan | pending-issues | issue-plans | issue-workflow |
| issue:queue | issue-plans, converted-plan | execution-queue | issue-workflow |
| issue:execute | execution-queue | completed-issues | issue-workflow |
| issue:convert-to-plan | plan | converted-plan | rapid-to-issue |
| issue:from-brainstorm | brainstorm-document | converted-plan | brainstorm-to-issue |
---
## Minimum Execution Units
**Definition**: Commands that must execute together as an atomic group.
| Unit Name | Commands | Purpose |
|-----------|----------|---------|
| **quick-implementation** | lite-plan → lite-execute | Lightweight plan and execution |
| **multi-cli-planning** | multi-cli-plan → lite-execute | Multi-perspective planning and execution |
| **bug-fix** | lite-fix → lite-execute | Bug diagnosis and fix |
| **full-planning-execution** | plan → execute | Detailed planning and execution |
| **verified-planning-execution** | plan → plan-verify → execute | Planning with verification |
| **replanning-execution** | replan → execute | Update plan and execute |
| **tdd-planning-execution** | tdd-plan → execute | TDD planning and execution |
| **test-validation** | test-fix-gen → test-cycle-execute | Test generation and fix cycle |
| **test-generation-execution** | test-gen → execute | Generate and execute tests |
| **code-review** | review-*-cycle → review-cycle-fix | Review and fix findings |
| **issue-workflow** | discover → plan → queue → execute | Complete issue lifecycle |
| **rapid-to-issue** | lite-plan → convert-to-plan → queue → execute | Bridge to issue workflow |
| **brainstorm-to-issue** | from-brainstorm → queue → execute | Brainstorm to issue bridge |
| **brainstorm-with-file** | (self-contained) | Multi-perspective ideation |
| **debug-with-file** | (self-contained) | Hypothesis-driven debugging |
| **analyze-with-file** | (self-contained) | Collaborative analysis |
---
## Phase 3: Generate JSON
```javascript
async function generateTemplate(design, steps, outputPath) {
const template = {
name: design.name,
description: design.description,
level: design.level,
steps: steps
};
const finalPath = outputPath || `~/.claude/skills/flow-coordinator/templates/${design.name}.json`;
// Write template
Write(finalPath, JSON.stringify(template, null, 2));
// Validate
const validation = validateTemplate(template);
console.log(`✅ Template created: ${finalPath}`);
console.log(` Steps: ${template.steps.length}`);
console.log(` Level: ${template.level}`);
console.log(` Units: ${[...new Set(template.steps.map(s => s.unit))].join(', ')}`);
return { path: finalPath, template, validation };
}
```
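`validateTemplate` is called above but not shown. A minimal sketch of the checks it might perform against the output format below; the exact rules are an assumption:
```javascript
function validateTemplate(template) {
  const errors = [];
  if (!template.name) errors.push('Missing template name');
  if (!Array.isArray(template.steps) || template.steps.length === 0) errors.push('At least one step is required');
  (template.steps || []).forEach((step, i) => {
    if (!step.cmd || !step.cmd.startsWith('/')) errors.push(`Step ${i + 1}: cmd must be a slash command`);
    if (!step.execution || !['mainprocess', 'async'].includes(step.execution.mode)) {
      errors.push(`Step ${i + 1}: execution.mode must be "mainprocess" or "async"`);
    }
  });
  return { valid: errors.length === 0, errors };
}
```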
---
## Output Format
```json
{
"name": "template-name",
"description": "Template description",
"level": 2,
"steps": [
{
"cmd": "/workflow:command",
"args": "\"{{goal}}\"",
"unit": "unit-name",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Description of what this step does"
}
]
}
```
---
## Examples
**Create a quick bugfix template**:
```
/meta-skill:flow-create hotfix-simple
→ Purpose: Bug Fix
→ Level: 2 (Lightweight)
→ Steps: Use Suggested
→ Output: ~/.claude/skills/flow-coordinator/templates/hotfix-simple.json
```
**Create a custom multi-stage workflow**:
```
/meta-skill:flow-create complex-feature --output ~/.claude/skills/my-project/templates/
→ Purpose: Feature Development
→ Level: 3 (Standard)
→ Steps: Customize
→ Step 1: /workflow:brainstorm:auto-parallel (standalone, mainprocess)
→ Step 2: /workflow:plan (verified-planning-execution, mainprocess)
→ Step 3: /workflow:plan-verify (verified-planning-execution, mainprocess)
→ Step 4: /workflow:execute (verified-planning-execution, async)
→ Step 5: /workflow:review-session-cycle (code-review, mainprocess)
→ Step 6: /workflow:review-cycle-fix (code-review, mainprocess)
→ Done
→ Output: ~/.claude/skills/my-project/templates/complex-feature.json
```

@@ -0,0 +1,718 @@
---
name: convert-to-plan
description: Convert planning artifacts (lite-plan, workflow session, markdown) to issue solutions
argument-hint: "[-y|--yes] [--issue <id>] [--supplement] <SOURCE>"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Skip confirmation, auto-create issue and bind solution.
# Issue Convert-to-Plan Command (/issue:convert-to-plan)
## Overview
Converts various planning artifact formats into issue workflow solutions with intelligent detection and automatic binding.
**Supported Sources** (auto-detected):
- **lite-plan**: `.workflow/.lite-plan/{slug}/plan.json`
- **workflow-session**: `WFS-xxx` ID or `.workflow/active/{session}/` folder
- **markdown**: Any `.md` file with implementation/task content
- **json**: Direct JSON files matching plan-json-schema
## Quick Reference
```bash
# Convert lite-plan to new issue (auto-creates issue)
/issue:convert-to-plan ".workflow/.lite-plan/implement-auth-2026-01-25"
# Convert workflow session to existing issue
/issue:convert-to-plan WFS-auth-impl --issue GH-123
# Supplement existing solution with additional tasks
/issue:convert-to-plan "./docs/additional-tasks.md" --issue ISS-001 --supplement
# Auto mode - skip confirmations
/issue:convert-to-plan ".workflow/.lite-plan/my-plan" -y
```
## Command Options
| Option | Description | Default |
|--------|-------------|---------|
| `<SOURCE>` | Planning artifact path or WFS-xxx ID | Required |
| `--issue <id>` | Bind to existing issue instead of creating new | Auto-create |
| `--supplement` | Add tasks to existing solution (requires --issue) | false |
| `-y, --yes` | Skip all confirmations | false |
## Core Data Access Principle
**⚠️ Important**: Use CLI commands for all issue/solution operations.
| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| Get issue | `ccw issue status <id> --json` | Read issues.jsonl directly |
| Create issue | `ccw issue init <id> --title "..."` | Write to issues.jsonl |
| Bind solution | `ccw issue bind <id> <sol-id>` | Edit issues.jsonl |
| List solutions | `ccw issue solutions --issue <id> --brief` | Read solutions/*.jsonl |
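A minimal sketch of this principle in the document's `Bash()` pseudocode; the variable names (`issueId`, `solutionId`, `extracted`) are illustrative:
```javascript
// Read an issue through the CLI (never by reading issues.jsonl directly)
const issue = JSON.parse(Bash(`ccw issue status ${issueId} --json`));

// Create a new issue and bind a solution to it
Bash(`ccw issue init ${issueId} --title "${extracted.title}"`);
Bash(`ccw issue bind ${issueId} ${solutionId}`);

// List existing solutions before supplementing one
const existing = Bash(`ccw issue solutions --issue ${issueId} --brief`);
```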
## Solution Schema Reference
Target format for all extracted data (from solution-schema.json):
```typescript
interface Solution {
id: string; // SOL-{issue-id}-{4-char-uid}
description?: string; // High-level summary
approach?: string; // Technical strategy
tasks: Task[]; // Required: at least 1 task
exploration_context?: object; // Optional: source context
analysis?: { risk, impact, complexity };
score?: number; // 0.0-1.0
is_bound: boolean;
created_at: string;
bound_at?: string;
}
interface Task {
id: string; // T1, T2, T3... (pattern: ^T[0-9]+$)
title: string; // Required: action verb + target
scope: string; // Required: module path or feature area
action: Action; // Required: Create|Update|Implement|...
description?: string;
modification_points?: Array<{file, target, change}>;
implementation: string[]; // Required: step-by-step guide
test?: { unit?, integration?, commands?, coverage_target? };
acceptance: { criteria: string[], verification: string[] }; // Required
commit?: { type, scope, message_template, breaking? };
depends_on?: string[];
priority?: number; // 1-5 (default: 3)
}
type Action = 'Create' | 'Update' | 'Implement' | 'Refactor' | 'Add' | 'Delete' | 'Configure' | 'Test' | 'Fix';
```
## Implementation
### Phase 1: Parse Arguments & Detect Source Type
```javascript
const input = userInput.trim();
const flags = parseFlags(userInput); // --issue, --supplement, -y/--yes
// Extract source path (first non-flag argument)
const source = extractSourceArg(input);
// Detect source type
function detectSourceType(source) {
// Check for WFS-xxx pattern (workflow session ID)
if (source.match(/^WFS-[\w-]+$/)) {
return { type: 'workflow-session-id', path: `.workflow/active/${source}` };
}
// Check if directory
const isDir = Bash(`test -d "${source}" && echo "dir" || echo "file"`).trim() === 'dir';
if (isDir) {
// Check for lite-plan indicator
const hasPlanJson = Bash(`test -f "${source}/plan.json" && echo "yes" || echo "no"`).trim() === 'yes';
if (hasPlanJson) {
return { type: 'lite-plan', path: source };
}
// Check for workflow session indicator
const hasSession = Bash(`test -f "${source}/workflow-session.json" && echo "yes" || echo "no"`).trim() === 'yes';
if (hasSession) {
return { type: 'workflow-session', path: source };
}
}
// Check file extensions
if (source.endsWith('.json')) {
return { type: 'json-file', path: source };
}
if (source.endsWith('.md')) {
return { type: 'markdown-file', path: source };
}
// Check if path exists at all
const exists = Bash(`test -e "${source}" && echo "yes" || echo "no"`).trim() === 'yes';
if (!exists) {
throw new Error(`E001: Source not found: ${source}`);
}
return { type: 'unknown', path: source };
}
const sourceInfo = detectSourceType(source);
if (sourceInfo.type === 'unknown') {
throw new Error(`E002: Unable to detect source format for: ${source}`);
}
console.log(`Detected source type: ${sourceInfo.type}`);
```
### Phase 2: Extract Data Using Format-Specific Extractor
```javascript
let extracted = { title: '', approach: '', tasks: [], metadata: {} };
switch (sourceInfo.type) {
case 'lite-plan':
extracted = extractFromLitePlan(sourceInfo.path);
break;
case 'workflow-session':
case 'workflow-session-id':
extracted = extractFromWorkflowSession(sourceInfo.path);
break;
case 'markdown-file':
extracted = await extractFromMarkdownAI(sourceInfo.path);
break;
case 'json-file':
extracted = extractFromJsonFile(sourceInfo.path);
break;
}
// Validate extraction
if (!extracted.tasks || extracted.tasks.length === 0) {
throw new Error('E006: No tasks extracted from source');
}
// Ensure task IDs are normalized to T1, T2, T3...
extracted.tasks = normalizeTaskIds(extracted.tasks);
console.log(`Extracted: ${extracted.tasks.length} tasks`);
```
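`normalizeTaskIds` is not defined in this file. A minimal sketch, assuming it renumbers tasks to the `^T[0-9]+$` pattern required by the solution schema and rewrites `depends_on` references to match:
```javascript
function normalizeTaskIds(tasks) {
  // Map original IDs (e.g. "IMPL-003", "task-2") to sequential T1, T2, T3...
  const idMap = new Map(tasks.map((t, i) => [t.id, `T${i + 1}`]));
  return tasks.map(t => ({
    ...t,
    id: idMap.get(t.id),
    depends_on: (t.depends_on || []).map(d => idMap.get(d) || d)
  }));
}
```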
#### Extractor: Lite-Plan
```javascript
function extractFromLitePlan(folderPath) {
const planJson = Read(`${folderPath}/plan.json`);
const plan = JSON.parse(planJson);
return {
title: plan.summary?.split('.')[0]?.trim() || 'Untitled Plan',
description: plan.summary,
approach: plan.approach,
tasks: plan.tasks.map(t => ({
id: t.id,
title: t.title,
scope: t.scope || '',
action: t.action || 'Implement',
description: t.description || t.title,
modification_points: t.modification_points || [],
implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
test: t.verification ? {
unit: t.verification.unit_tests,
integration: t.verification.integration_tests,
commands: t.verification.manual_checks
} : {},
acceptance: {
criteria: Array.isArray(t.acceptance) ? t.acceptance : [t.acceptance || ''],
verification: t.verification?.manual_checks || []
},
depends_on: t.depends_on || [],
priority: 3
})),
metadata: {
source_type: 'lite-plan',
source_path: folderPath,
complexity: plan.complexity,
estimated_time: plan.estimated_time,
exploration_angles: plan._metadata?.exploration_angles || [],
original_timestamp: plan._metadata?.timestamp
}
};
}
```
#### Extractor: Workflow Session
```javascript
function extractFromWorkflowSession(sessionPath) {
// Load session metadata
const sessionJson = Read(`${sessionPath}/workflow-session.json`);
const session = JSON.parse(sessionJson);
// Load IMPL_PLAN.md for approach (if exists)
let approach = '';
const implPlanPath = `${sessionPath}/IMPL_PLAN.md`;
const hasImplPlan = Bash(`test -f "${implPlanPath}" && echo "yes" || echo "no"`).trim() === 'yes';
if (hasImplPlan) {
const implPlan = Read(implPlanPath);
// Extract overview/approach section
const overviewMatch = implPlan.match(/##\s*(?:Overview|Approach|Strategy)\s*\n([\s\S]*?)(?=\n##|$)/i);
approach = overviewMatch?.[1]?.trim() || implPlan.split('\n').slice(0, 10).join('\n');
}
// Load all task JSONs from .task folder
const taskFiles = Glob({ pattern: `${sessionPath}/.task/IMPL-*.json` });
const tasks = taskFiles.map(f => {
const taskJson = Read(f);
const task = JSON.parse(taskJson);
return {
id: task.id?.replace(/^IMPL-0*/, 'T') || 'T1', // IMPL-001 → T1
title: task.title,
scope: task.scope || inferScopeFromTask(task),
action: capitalizeAction(task.type) || 'Implement',
description: task.description,
modification_points: task.implementation?.modification_points || [],
implementation: task.implementation?.steps || [],
test: task.implementation?.test || {},
acceptance: {
criteria: task.acceptance_criteria || [],
verification: task.verification_steps || []
},
commit: task.commit,
depends_on: (task.depends_on || []).map(d => d.replace(/^IMPL-0*/, 'T')),
priority: task.priority || 3
};
});
return {
title: session.name || session.description?.split('.')[0] || 'Workflow Session',
description: session.description || session.name,
approach: approach || session.description,
tasks: tasks,
metadata: {
source_type: 'workflow-session',
source_path: sessionPath,
session_id: session.id,
created_at: session.created_at
}
};
}
function inferScopeFromTask(task) {
if (task.implementation?.modification_points?.length) {
const files = task.implementation.modification_points.map(m => m.file);
// Find common directory prefix
const dirs = files.map(f => f.split('/').slice(0, -1).join('/'));
return [...new Set(dirs)][0] || '';
}
return '';
}
function capitalizeAction(type) {
if (!type) return 'Implement';
const map = { feature: 'Implement', bugfix: 'Fix', refactor: 'Refactor', test: 'Test', docs: 'Update' };
return map[type.toLowerCase()] || type.charAt(0).toUpperCase() + type.slice(1);
}
```
#### Extractor: Markdown (AI-Assisted via Gemini)
```javascript
async function extractFromMarkdownAI(filePath) {
const fileContent = Read(filePath);
// Use Gemini CLI for intelligent extraction
const cliPrompt = `PURPOSE: Extract implementation plan from markdown document for issue solution conversion. Must output ONLY valid JSON.
TASK: • Analyze document structure • Identify title/summary • Extract approach/strategy section • Parse tasks from any format (lists, tables, sections, code blocks) • Normalize each task to solution schema
MODE: analysis
CONTEXT: Document content provided below
EXPECTED: Valid JSON object with format:
{
"title": "extracted title",
"approach": "extracted approach/strategy",
"tasks": [
{
"id": "T1",
"title": "task title",
"scope": "module or feature area",
"action": "Implement|Update|Create|Fix|Refactor|Add|Delete|Configure|Test",
"description": "what to do",
"implementation": ["step 1", "step 2"],
"acceptance": ["criteria 1", "criteria 2"]
}
]
}
CONSTRAINTS: Output ONLY valid JSON - no markdown, no explanation | Action must be one of: Create, Update, Implement, Refactor, Add, Delete, Configure, Test, Fix | Tasks must have id, title, scope, action, implementation (array), acceptance (array)
DOCUMENT CONTENT:
${fileContent}`;
// Execute Gemini CLI
const result = Bash(`ccw cli -p '${cliPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis`, { timeout: 120000 });
// Parse JSON from result (may be wrapped in markdown code block)
let jsonText = result.trim();
const jsonMatch = jsonText.match(/```(?:json)?\s*([\s\S]*?)```/);
if (jsonMatch) {
jsonText = jsonMatch[1].trim();
}
try {
const extracted = JSON.parse(jsonText);
// Normalize tasks
const tasks = (extracted.tasks || []).map((t, i) => ({
id: t.id || `T${i + 1}`,
title: t.title || 'Untitled task',
scope: t.scope || '',
action: validateAction(t.action) || 'Implement',
description: t.description || t.title,
modification_points: t.modification_points || [],
implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
test: t.test || {},
acceptance: {
criteria: Array.isArray(t.acceptance) ? t.acceptance : [t.acceptance || ''],
verification: t.verification || []
},
depends_on: t.depends_on || [],
priority: t.priority || 3
}));
return {
title: extracted.title || 'Extracted Plan',
description: extracted.summary || extracted.title,
approach: extracted.approach || '',
tasks: tasks,
metadata: {
source_type: 'markdown',
source_path: filePath,
extraction_method: 'gemini-ai'
}
};
} catch (e) {
// Provide more context for debugging
throw new Error(`E005: Failed to extract tasks from markdown. Gemini response was not valid JSON. Error: ${e.message}. Response preview: ${jsonText.substring(0, 200)}...`);
}
}
function validateAction(action) {
const validActions = ['Create', 'Update', 'Implement', 'Refactor', 'Add', 'Delete', 'Configure', 'Test', 'Fix'];
if (!action) return null;
const normalized = action.charAt(0).toUpperCase() + action.slice(1).toLowerCase();
return validActions.includes(normalized) ? normalized : null;
}
```
#### Extractor: JSON File
```javascript
function extractFromJsonFile(filePath) {
const content = Read(filePath);
const plan = JSON.parse(content);
// Detect if it's already solution format or plan format
if (plan.tasks && Array.isArray(plan.tasks)) {
// Map tasks to normalized format
const tasks = plan.tasks.map((t, i) => ({
id: t.id || `T${i + 1}`,
title: t.title,
scope: t.scope || '',
action: t.action || 'Implement',
description: t.description || t.title,
modification_points: t.modification_points || [],
implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
test: t.test || t.verification || {},
acceptance: normalizeAcceptance(t.acceptance),
depends_on: t.depends_on || [],
priority: t.priority || 3
}));
return {
title: plan.summary?.split('.')[0] || plan.title || 'JSON Plan',
description: plan.summary || plan.description,
approach: plan.approach,
tasks: tasks,
metadata: {
source_type: 'json',
source_path: filePath,
complexity: plan.complexity,
original_metadata: plan._metadata
}
};
}
throw new Error('E002: JSON file does not contain valid plan structure (missing tasks array)');
}
function normalizeAcceptance(acceptance) {
if (!acceptance) return { criteria: [], verification: [] };
if (typeof acceptance === 'object' && acceptance.criteria) return acceptance;
if (Array.isArray(acceptance)) return { criteria: acceptance, verification: [] };
return { criteria: [String(acceptance)], verification: [] };
}
```
### Phase 3: Normalize Task IDs
```javascript
function normalizeTaskIds(tasks) {
return tasks.map((t, i) => ({
...t,
id: `T${i + 1}`,
// Also normalize depends_on references
depends_on: (t.depends_on || []).map(d => {
// Handle various ID formats: IMPL-001, T1, 1, etc.
const num = d.match(/\d+/)?.[0];
return num ? `T${parseInt(num)}` : d;
})
}));
}
```
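For example, workflow-session style IDs are renumbered by position and their dependencies remapped from the numeric part of the original ID (illustrative input):
```javascript
const normalized = normalizeTaskIds([
  { id: 'IMPL-001', title: 'Add validation helper', depends_on: [] },
  { id: 'IMPL-002', title: 'Wire helper into routes', depends_on: ['IMPL-001'] }
]);
// → [ { id: 'T1', ..., depends_on: [] }, { id: 'T2', ..., depends_on: ['T1'] } ]
// Note: depends_on remapping keys off the numeric part of the referenced ID, so it
// assumes the original numbering matches task order (IMPL-001 is the first task, etc.).
```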
### Phase 4: Resolve Issue (Create or Find)
```javascript
let issueId = flags.issue;
let existingSolution = null;
if (issueId) {
// Validate issue exists
let issueCheck;
try {
issueCheck = Bash(`ccw issue status ${issueId} --json 2>/dev/null`).trim();
if (!issueCheck || issueCheck === '') {
throw new Error('empty response');
}
} catch (e) {
throw new Error(`E003: Issue not found: ${issueId}`);
}
const issue = JSON.parse(issueCheck);
// Check if issue already has bound solution
if (issue.bound_solution_id && !flags.supplement) {
throw new Error(`E004: Issue ${issueId} already has bound solution (${issue.bound_solution_id}). Use --supplement to add tasks.`);
}
// Load existing solution for supplement mode
if (flags.supplement && issue.bound_solution_id) {
try {
const solResult = Bash(`ccw issue solution ${issue.bound_solution_id} --json`).trim();
existingSolution = JSON.parse(solResult);
console.log(`Loaded existing solution with ${existingSolution.tasks.length} tasks`);
} catch (e) {
throw new Error(`Failed to load existing solution: ${e.message}`);
}
}
} else {
// Create new issue via ccw issue create (auto-generates correct ID)
// Smart extraction: title from content, priority from complexity
const title = extracted.title || 'Converted Plan';
const context = extracted.description || extracted.approach || title;
// Auto-determine priority based on complexity
const complexityMap = { high: 2, medium: 3, low: 4 };
const priority = complexityMap[extracted.metadata.complexity?.toLowerCase()] || 3;
try {
// Use heredoc to avoid shell escaping issues
const createResult = Bash(`ccw issue create << 'EOF'
{
"title": ${JSON.stringify(title)},
"context": ${JSON.stringify(context)},
"priority": ${priority},
"source": "converted"
}
EOF`).trim();
// Parse result to get created issue ID
const created = JSON.parse(createResult);
issueId = created.id;
console.log(`Created issue: ${issueId} (priority: ${priority})`);
} catch (e) {
throw new Error(`Failed to create issue: ${e.message}`);
}
}
```
### Phase 5: Generate Solution
```javascript
// Generate solution ID
function generateSolutionId(issueId) {
const chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
let uid = '';
for (let i = 0; i < 4; i++) {
uid += chars[Math.floor(Math.random() * chars.length)];
}
return `SOL-${issueId}-${uid}`;
}
let solution;
const solutionId = generateSolutionId(issueId);
if (flags.supplement && existingSolution) {
// Supplement mode: merge with existing solution
const maxTaskId = Math.max(...existingSolution.tasks.map(t => parseInt(t.id.slice(1))));
const newTasks = extracted.tasks.map((t, i) => ({
...t,
id: `T${maxTaskId + i + 1}`
}));
solution = {
...existingSolution,
tasks: [...existingSolution.tasks, ...newTasks],
approach: existingSolution.approach + '\n\n[Supplementary] ' + (extracted.approach || ''),
updated_at: new Date().toISOString()
};
console.log(`Supplementing: ${existingSolution.tasks.length} existing + ${newTasks.length} new = ${solution.tasks.length} total tasks`);
} else {
// New solution
solution = {
id: solutionId,
description: extracted.description || extracted.title,
approach: extracted.approach,
tasks: extracted.tasks,
exploration_context: extracted.metadata.exploration_angles ? {
exploration_angles: extracted.metadata.exploration_angles
} : undefined,
analysis: {
risk: 'medium',
impact: 'medium',
complexity: extracted.metadata.complexity?.toLowerCase() || 'medium'
},
is_bound: false,
created_at: new Date().toISOString(),
_conversion_metadata: {
source_type: extracted.metadata.source_type,
source_path: extracted.metadata.source_path,
converted_at: new Date().toISOString()
}
};
}
```
### Phase 6: Confirm & Persist
```javascript
// Display preview
console.log(`
## Conversion Summary
**Issue**: ${issueId}
**Solution**: ${existingSolution ? existingSolution.id : solutionId}
**Tasks**: ${solution.tasks.length}
**Mode**: ${flags.supplement && existingSolution ? 'Supplement' : 'New'}
### Tasks:
${solution.tasks.map(t => `- ${t.id}: ${t.title} [${t.action}]`).join('\n')}
`);
// Confirm if not auto mode
if (!flags.yes && !flags.y) {
const confirm = AskUserQuestion({
questions: [{
question: `Create solution for issue ${issueId} with ${solution.tasks.length} tasks?`,
header: 'Confirm',
multiSelect: false,
options: [
{ label: 'Yes, create solution', description: 'Create and bind solution' },
{ label: 'Cancel', description: 'Abort without changes' }
]
}]
});
if (!confirm.answers?.['Confirm']?.includes('Yes')) {
console.log('Cancelled.');
return;
}
}
// Persist solution (following issue-plan-agent pattern)
Bash(`mkdir -p .workflow/issues/solutions`);
const solutionFile = `.workflow/issues/solutions/${issueId}.jsonl`;
if (flags.supplement && existingSolution) {

// Supplement mode: update existing solution line atomically
try {
const existingContent = Read(solutionFile);
const lines = existingContent.trim().split('\n').filter(l => l);
const updatedLines = lines.map(line => {
const sol = JSON.parse(line);
if (sol.id === existingSolution.id) {
return JSON.stringify(solution);
}
return line;
});
// Atomic write: write entire content at once
Write({ file_path: solutionFile, content: updatedLines.join('\n') + '\n' });
console.log(`✓ Updated solution: ${existingSolution.id}`);
} catch (e) {
throw new Error(`Failed to update solution: ${e.message}`);
}
// Note: No need to rebind - solution is already bound to issue
} else {
// New solution: append to JSONL file (following issue-plan-agent pattern)
try {
const solutionLine = JSON.stringify(solution);
// Read existing content, append new line, write atomically
const existing = Bash(`test -f "${solutionFile}" && cat "${solutionFile}" || echo ""`).trim();
const newContent = existing ? existing + '\n' + solutionLine + '\n' : solutionLine + '\n';
Write({ file_path: solutionFile, content: newContent });
console.log(`✓ Created solution: ${solutionId}`);
} catch (e) {
throw new Error(`Failed to write solution: ${e.message}`);
}
// Bind solution to issue
try {
Bash(`ccw issue bind ${issueId} ${solutionId}`);
console.log(`✓ Bound solution to issue`);
} catch (e) {
// Cleanup: remove solution file on bind failure
try {
Bash(`rm -f "${solutionFile}"`);
} catch (cleanupError) {
// Ignore cleanup errors
}
throw new Error(`Failed to bind solution: ${e.message}`);
}
// Update issue status to planned
try {
Bash(`ccw issue update ${issueId} --status planned`);
} catch (e) {
throw new Error(`Failed to update issue status: ${e.message}`);
}
}
```
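Each line of the resulting `.workflow/issues/solutions/<issueId>.jsonl` file is one complete solution object as built in Phase 5. An abridged, illustrative line (identifiers are placeholders):
```json
{"id":"SOL-ISS-001-ab12","description":"Add input validation to auth endpoints","approach":"Introduce a shared validation helper","tasks":[{"id":"T1","title":"Create validation helper","action":"Create"}],"is_bound":false,"created_at":"2026-02-04T12:00:00Z","_conversion_metadata":{"source_type":"lite-plan","source_path":"./plan-folder","converted_at":"2026-02-04T12:00:00Z"}}
```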
### Phase 7: Summary
```javascript
console.log(`
## Done
**Issue**: ${issueId}
**Solution**: ${existingSolution ? existingSolution.id : solutionId}
**Tasks**: ${solution.tasks.length}
**Status**: planned
### Next Steps:
- \`/issue:queue\` → Form execution queue
- \`ccw issue status ${issueId}\` → View issue details
- \`ccw issue solution ${existingSolution ? existingSolution.id : solutionId}\` → View solution
`);
```
## Error Handling
| Error | Code | Resolution |
|-------|------|------------|
| Source not found | E001 | Check path exists |
| Invalid source format | E002 | Verify file contains valid plan structure |
| Issue not found | E003 | Check issue ID or omit --issue to create new |
| Solution already bound | E004 | Use --supplement to add tasks |
| AI extraction failed | E005 | Check markdown structure, try simpler format |
| No tasks extracted | E006 | Source must contain at least 1 task |
## Related Commands
- `/issue:plan` - Generate solutions from issue exploration
- `/issue:queue` - Form execution queue from bound solutions
- `/issue:execute` - Execute queue with DAG parallelism
- `ccw issue status <id>` - View issue details
- `ccw issue solution <id>` - View solution details

View File

@@ -0,0 +1,768 @@
---
name: issue:discover-by-prompt
description: Discover issues from user prompt with Gemini-planned iterative multi-agent exploration. Uses ACE semantic search for context gathering and supports cross-module comparison (e.g., frontend vs backend API contracts).
argument-hint: "[-y|--yes] <prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]"
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*), Task(*), AskUserQuestion(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*), mcp__exa__search(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-continue all iterations, skip confirmations.
# Issue Discovery by Prompt
## Quick Start
```bash
# Discover issues based on user description
/issue:discover-by-prompt "Check if frontend API calls match backend implementations"
# Compare specific modules
/issue:discover-by-prompt "Verify auth flow consistency between mobile and web clients" --scope=src/auth/**,src/mobile/**
# Deep exploration with more iterations
/issue:discover-by-prompt "Find all places where error handling is inconsistent" --depth=deep --max-iterations=8
# Focused backend-frontend contract check
/issue:discover-by-prompt "Compare REST API definitions with frontend fetch calls"
```
**Core Difference from `/issue:discover`**:
- `discover`: Pre-defined perspectives (bug, security, etc.), parallel execution
- `discover-by-prompt`: User-driven prompt, Gemini-planned strategy, iterative exploration
## What & Why
### Core Concept
Prompt-driven issue discovery with intelligent planning. Instead of fixed perspectives, this command:
1. **Analyzes user intent** via Gemini to understand what to find
2. **Plans exploration strategy** dynamically based on codebase structure
3. **Executes iterative multi-agent exploration** with feedback loops
4. **Performs cross-module comparison** when detecting comparison intent
### Value Proposition
1. **Natural Language Input**: Describe what you want to find, not how to find it
2. **Intelligent Planning**: Gemini designs optimal exploration strategy
3. **Iterative Refinement**: Each round builds on previous discoveries
4. **Cross-Module Analysis**: Compare frontend/backend, mobile/web, old/new implementations
5. **Adaptive Exploration**: Adjusts direction based on findings
### Use Cases
| Scenario | Example Prompt |
|----------|----------------|
| API Contract | "Check if frontend calls match backend endpoints" |
| Error Handling | "Find inconsistent error handling patterns" |
| Migration Gap | "Compare old auth with new auth implementation" |
| Feature Parity | "Verify mobile has all web features" |
| Schema Drift | "Check if TypeScript types match API responses" |
| Integration | "Find mismatches between service A and service B" |
## How It Works
### Execution Flow
```
Phase 1: Prompt Analysis & Initialization
├─ Parse user prompt and flags
├─ Detect exploration intent (comparison/search/verification)
└─ Initialize discovery session
Phase 1.5: ACE Context Gathering
├─ Use ACE semantic search to understand codebase structure
├─ Identify relevant modules based on prompt keywords
├─ Collect architecture context for Gemini planning
└─ Build initial context package
Phase 2: Gemini Strategy Planning
├─ Feed ACE context + prompt to Gemini CLI
├─ Gemini analyzes and generates exploration strategy
├─ Create exploration dimensions with search targets
├─ Define comparison matrix (if comparison intent)
└─ Set success criteria and iteration limits
Phase 3: Iterative Agent Exploration (with ACE)
├─ Iteration 1: Initial exploration by assigned agents
│ ├─ Agent A: ACE search + explore dimension 1
│ ├─ Agent B: ACE search + explore dimension 2
│ └─ Collect findings, update shared context
├─ Iteration 2-N: Refined exploration
│ ├─ Analyze previous findings
│ ├─ ACE search for related code paths
│ ├─ Execute targeted exploration
│ └─ Update cumulative findings
└─ Termination: Max iterations or convergence
Phase 4: Cross-Analysis & Synthesis
├─ Compare findings across dimensions
├─ Identify discrepancies and issues
├─ Calculate confidence scores
└─ Generate issue candidates
Phase 5: Issue Generation & Summary
├─ Convert findings to issue format
├─ Write discovery outputs
└─ Prompt user for next action
```
### Exploration Dimensions
Dimensions are **dynamically generated by Gemini** based on the user prompt; they are not limited to predefined categories.
**Examples**:
| Prompt | Generated Dimensions |
|--------|---------------------|
| "Check API contracts" | frontend-calls, backend-handlers |
| "Find auth issues" | auth-module (single dimension) |
| "Compare old/new implementations" | legacy-code, new-code |
| "Audit payment flow" | payment-service, validation, logging |
| "Find error handling gaps" | error-handlers, error-types, recovery-logic |
Gemini analyzes the prompt + ACE context to determine:
- How many dimensions are needed (1 to N)
- What each dimension should focus on
- Whether comparison is needed between dimensions
### Iteration Strategy
```
┌─────────────────────────────────────────────────────────────┐
│ Iteration Loop │
├─────────────────────────────────────────────────────────────┤
│ 1. Plan: What to explore this iteration │
│ └─ Based on: previous findings + unexplored areas │
│ │
│ 2. Execute: Launch agents for this iteration │
│ └─ Each agent: explore → collect → return summary │
│ │
│ 3. Analyze: Process iteration results │
│ └─ New findings? Gaps? Contradictions? │
│ │
│ 4. Decide: Continue or terminate │
│ └─ Terminate if: max iterations OR convergence OR │
│ high confidence on all questions │
└─────────────────────────────────────────────────────────────┘
```
## Core Responsibilities
### Phase 1: Prompt Analysis & Initialization
```javascript
// Step 1: Parse arguments
const { prompt, scope, depth, maxIterations } = parseArgs(args);
// Step 2: Generate discovery ID
const discoveryId = `DBP-${formatDate(new Date(), 'YYYYMMDD-HHmmss')}`;
// Step 3: Create output directory
const outputDir = `.workflow/issues/discoveries/${discoveryId}`;
await mkdir(outputDir, { recursive: true });
await mkdir(`${outputDir}/iterations`, { recursive: true });
// Step 4: Detect intent type from prompt
const intentType = detectIntent(prompt);
// Returns: 'comparison' | 'search' | 'verification' | 'audit'
// Step 5: Initialize discovery state
await writeJson(`${outputDir}/discovery-state.json`, {
discovery_id: discoveryId,
type: 'prompt-driven',
prompt: prompt,
intent_type: intentType,
scope: scope || '**/*',
depth: depth || 'standard',
max_iterations: maxIterations || 5,
phase: 'initialization',
created_at: new Date().toISOString(),
iterations: [],
cumulative_findings: [],
comparison_matrix: null // filled for comparison intent
});
```
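`detectIntent` is referenced above but not defined here. A keyword-heuristic sketch (the actual implementation may use richer analysis and is an assumption on my part):
```javascript
// Heuristic sketch - assumes English prompts; comparison keywords are checked first.
function detectIntent(prompt) {
  const p = prompt.toLowerCase();
  if (/\b(compare|match|mismatch|consistency|consistent|versus|parity|drift)\b/.test(p)) return 'comparison';
  if (/\b(verify|check|confirm|ensure)\b/.test(p)) return 'verification';
  if (/\b(audit|review all|scan)\b/.test(p)) return 'audit';
  return 'search';
}
```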
### Phase 1.5: ACE Context Gathering
**Purpose**: Use ACE semantic search to gather codebase context before Gemini planning.
```javascript
// Step 1: Extract keywords from prompt for semantic search
const keywords = extractKeywords(prompt);
// e.g., "frontend API calls match backend" → ["frontend", "API", "backend", "endpoints"]
// Step 2: Use ACE to understand codebase structure
const aceQueries = [
`Project architecture and module structure for ${keywords.join(', ')}`,
`Where are ${keywords[0]} implementations located?`,
`How does ${keywords.slice(0, 2).join(' ')} work in this codebase?`
];
const aceResults = [];
for (const query of aceQueries) {
const result = await mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: query
});
aceResults.push({ query, result });
}
// Step 3: Build context package for Gemini (kept in memory)
const aceContext = {
prompt_keywords: keywords,
codebase_structure: aceResults[0].result,
relevant_modules: aceResults.slice(1).map(r => r.result),
detected_patterns: extractPatterns(aceResults)
};
// Step 4: Update state (no separate file)
await updateDiscoveryState(outputDir, {
phase: 'context-gathered',
ace_context: {
queries_executed: aceQueries.length,
modules_identified: aceContext.relevant_modules.length
}
});
// aceContext passed to Phase 2 in memory
```
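`extractKeywords` is also left undefined above. One minimal stopword-filtering sketch; a real version might delegate to the Gemini CLI or to ACE itself:
```javascript
// Minimal sketch - keyword quality matters more than this simple filter suggests.
function extractKeywords(prompt) {
  const stopwords = new Set([
    'the', 'a', 'an', 'if', 'all', 'and', 'or', 'in', 'of', 'to', 'for',
    'check', 'find', 'verify', 'compare', 'with', 'between', 'whether', 'are', 'is', 'where'
  ]);
  return [...new Set(
    prompt.toLowerCase().split(/[^a-z0-9]+/).filter(w => w.length > 2 && !stopwords.has(w))
  )];
}
```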
**ACE Query Strategy by Intent Type**:
| Intent | ACE Queries |
|--------|-------------|
| **comparison** | "frontend API calls", "backend API handlers", "API contract definitions" |
| **search** | "{keyword} implementations", "{keyword} usage patterns" |
| **verification** | "expected behavior for {feature}", "test coverage for {feature}" |
| **audit** | "all {category} patterns", "{category} security concerns" |
### Phase 2: Gemini Strategy Planning
**Purpose**: Gemini analyzes user prompt + ACE context to design optimal exploration strategy.
```javascript
// Step 1: Reuse the ACE context gathered in Phase 1.5 (held in memory; it is not persisted to disk)
// Step 2: Build Gemini planning prompt with ACE context
const planningPrompt = `
PURPOSE: Analyze discovery prompt and create exploration strategy based on codebase context
TASK:
• Parse user intent from prompt: "${prompt}"
• Use codebase context to identify specific modules and files to explore
• Create exploration dimensions with precise search targets
• Define comparison matrix structure (if comparison intent)
• Set success criteria and iteration strategy
MODE: analysis
CONTEXT: @${scope || '**/*'} | Discovery type: ${intentType}
## Codebase Context (from ACE semantic search)
${JSON.stringify(aceContext, null, 2)}
EXPECTED: JSON exploration plan following exploration-plan-schema.json:
{
"intent_analysis": { "type": "${intentType}", "primary_question": "...", "sub_questions": [...] },
"dimensions": [{ "name": "...", "description": "...", "search_targets": [...], "focus_areas": [...], "agent_prompt": "..." }],
"comparison_matrix": { "dimension_a": "...", "dimension_b": "...", "comparison_points": [...] },
"success_criteria": [...],
"estimated_iterations": N,
"termination_conditions": [...]
}
CONSTRAINTS: Use ACE context to inform targets | Focus on actionable plan
`;
// Step 3: Execute Gemini planning and capture its output (planning must complete before Phase 3)
const geminiResult = Bash({
  command: `ccw cli -p "${planningPrompt}" --tool gemini --mode analysis`,
  timeout: 300000
});
// Step 4: Parse Gemini output and validate against schema
const explorationPlan = await parseGeminiPlanOutput(geminiResult);
validateAgainstSchema(explorationPlan, 'exploration-plan-schema.json');
// Step 5: Enhance plan with ACE-discovered file paths
explorationPlan.dimensions = explorationPlan.dimensions.map(dim => ({
...dim,
ace_suggested_files: aceContext.relevant_modules
.filter(m => m.relevance_to === dim.name)
.map(m => m.file_path)
}));
// Step 6: Update state (plan kept in memory, not persisted)
await updateDiscoveryState(outputDir, {
phase: 'planned',
exploration_plan: {
dimensions_count: explorationPlan.dimensions.length,
has_comparison_matrix: !!explorationPlan.comparison_matrix,
estimated_iterations: explorationPlan.estimated_iterations
}
});
// explorationPlan passed to Phase 3 in memory
```
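`parseGeminiPlanOutput` mirrors the markdown extractor in the conversion command: strip an optional code fence, then parse JSON. A sketch, assuming the CLI returns the plan as (possibly fenced) JSON text:
```javascript
async function parseGeminiPlanOutput(rawOutput) {
  let text = String(rawOutput).trim();
  const fenced = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  if (fenced) text = fenced[1].trim();
  try {
    return JSON.parse(text);
  } catch (e) {
    throw new Error(`Failed to parse Gemini exploration plan: ${e.message}. Preview: ${text.substring(0, 200)}`);
  }
}
```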
**Gemini Planning Responsibilities**:
| Responsibility | Input | Output |
|----------------|-------|--------|
| Intent Analysis | User prompt | type, primary_question, sub_questions |
| Dimension Design | ACE context + prompt | dimensions with search_targets |
| Comparison Matrix | Intent type + modules | comparison_points (if applicable) |
| Iteration Strategy | Depth setting | estimated_iterations, termination_conditions |
**Gemini Planning Output Schema**:
```json
{
"intent_analysis": {
"type": "comparison|search|verification|audit",
"primary_question": "string",
"sub_questions": ["string"]
},
"dimensions": [
{
"name": "frontend",
"description": "Client-side API calls and error handling",
"search_targets": ["src/api/**", "src/hooks/**"],
"focus_areas": ["fetch calls", "error boundaries", "response parsing"],
"agent_prompt": "Explore frontend API consumption patterns..."
},
{
"name": "backend",
"description": "Server-side API implementations",
"search_targets": ["src/server/**", "src/routes/**"],
"focus_areas": ["endpoint handlers", "response schemas", "error responses"],
"agent_prompt": "Explore backend API implementations..."
}
],
"comparison_matrix": {
"dimension_a": "frontend",
"dimension_b": "backend",
"comparison_points": [
{"aspect": "endpoints", "frontend_check": "fetch URLs", "backend_check": "route paths"},
{"aspect": "methods", "frontend_check": "HTTP methods used", "backend_check": "methods accepted"},
{"aspect": "payloads", "frontend_check": "request body structure", "backend_check": "expected schema"},
{"aspect": "responses", "frontend_check": "response parsing", "backend_check": "response format"},
{"aspect": "errors", "frontend_check": "error handling", "backend_check": "error responses"}
]
},
"success_criteria": [
"All API endpoints mapped between frontend and backend",
"Discrepancies identified with file:line references",
"Each finding includes remediation suggestion"
],
"estimated_iterations": 3,
"termination_conditions": [
"All comparison points verified",
"No new findings in last iteration",
"Confidence > 0.8 on primary question"
]
}
```
### Phase 3: Iterative Agent Exploration (with ACE)
**Purpose**: Multi-agent iterative exploration using ACE for semantic search within each iteration.
```javascript
let iteration = 0;
let cumulativeFindings = [];
let sharedContext = { aceDiscoveries: [], crossReferences: [] };
let shouldContinue = true;
while (shouldContinue && iteration < maxIterations) {
iteration++;
const iterationDir = `${outputDir}/iterations/${iteration}`;
await mkdir(iterationDir, { recursive: true });
// Step 1: ACE-assisted iteration planning
// Use previous findings to guide ACE queries for this iteration
const iterationAceQueries = iteration === 1
? explorationPlan.dimensions.map(d => d.focus_areas[0]) // Initial queries from plan
: deriveQueriesFromFindings(cumulativeFindings); // Follow-up queries from findings
// Execute ACE searches to find related code
const iterationAceResults = [];
for (const query of iterationAceQueries) {
const result = await mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: `${query} in ${explorationPlan.scope}`
});
iterationAceResults.push({ query, result });
}
// Update shared context with ACE discoveries
sharedContext.aceDiscoveries.push(...iterationAceResults);
// Step 2: Plan this iteration based on ACE results
const iterationPlan = planIteration(iteration, explorationPlan, cumulativeFindings, iterationAceResults);
// Step 3: Launch dimension agents with ACE context
const agentPromises = iterationPlan.dimensions.map(dimension =>
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Explore ${dimension.name} (iteration ${iteration})`,
prompt: buildDimensionPromptWithACE(dimension, iteration, cumulativeFindings, iterationAceResults, iterationDir)
})
);
// Wait for iteration agents
const iterationResults = await Promise.all(agentPromises);
// Step 4: Collect and analyze iteration findings
const iterationFindings = await collectIterationFindings(iterationDir, iterationPlan.dimensions);
// Step 5: Cross-reference findings between dimensions
if (iterationPlan.dimensions.length > 1) {
const crossRefs = findCrossReferences(iterationFindings, iterationPlan.dimensions);
sharedContext.crossReferences.push(...crossRefs);
}
cumulativeFindings.push(...iterationFindings);
// Step 6: Decide whether to continue
const convergenceCheck = checkConvergence(iterationFindings, cumulativeFindings, explorationPlan);
shouldContinue = !convergenceCheck.converged;
// Step 7: Update state (iteration summary embedded in state)
await updateDiscoveryState(outputDir, {
iterations: [...state.iterations, {
number: iteration,
findings_count: iterationFindings.length,
ace_queries: iterationAceQueries.length,
cross_references: sharedContext.crossReferences.length,
new_discoveries: convergenceCheck.newDiscoveries,
confidence: convergenceCheck.confidence,
continued: shouldContinue
}],
cumulative_findings: cumulativeFindings
});
}
```
**ACE in Iteration Loop**:
```
Iteration N
├─→ ACE Search (based on previous findings)
│ └─ Query: "related code paths for {finding.category}"
│ └─ Result: Additional files to explore
├─→ Agent Exploration (with ACE context)
│ └─ Agent receives: dimension targets + ACE suggestions
│ └─ Agent can call ACE for deeper search
├─→ Cross-Reference Analysis
│ └─ Compare findings between dimensions
│ └─ Identify discrepancies
└─→ Convergence Check
└─ New findings? Continue
└─ No new findings? Terminate
```
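Two helpers used by the iteration loop are not defined in this document. Minimal sketches, consistent with the termination conditions above and the "related code paths" query pattern in the diagram; the real heuristics may weigh findings differently:
```javascript
function deriveQueriesFromFindings(findings) {
  // Turn recent finding categories into follow-up semantic queries.
  // (Leads collected from dimension outputs could be merged in here as well.)
  const categories = [...new Set(findings.map(f => f.category).filter(Boolean))];
  return categories.slice(0, 5).map(c => `related code paths for ${c}`);
}

function checkConvergence(iterationFindings, cumulativeFindings, plan) {
  // Findings seen before this iteration (iterationFindings were appended last)
  const previous = cumulativeFindings.slice(0, cumulativeFindings.length - iterationFindings.length);
  const knownKeys = new Set(previous.map(f => `${f.file}:${f.line}:${f.category}`));
  const newDiscoveries = iterationFindings
    .filter(f => !knownKeys.has(`${f.file}:${f.line}:${f.category}`)).length;
  const confidences = cumulativeFindings.map(f => f.confidence ?? 0);
  const confidence = confidences.length
    ? confidences.reduce((a, b) => a + b, 0) / confidences.length
    : 0;
  return {
    newDiscoveries,
    confidence,
    converged: newDiscoveries === 0 || confidence > 0.8
  };
}
```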
**Dimension Agent Prompt Template (with ACE)**:
```javascript
function buildDimensionPromptWithACE(dimension, iteration, previousFindings, aceResults, outputDir) {
// Filter ACE results relevant to this dimension
const relevantAceResults = aceResults.filter(r =>
r.query.includes(dimension.name) || dimension.focus_areas.some(fa => r.query.includes(fa))
);
return `
## Task Objective
Explore ${dimension.name} dimension for issue discovery (Iteration ${iteration})
## Context
- Dimension: ${dimension.name}
- Description: ${dimension.description}
- Search Targets: ${dimension.search_targets.join(', ')}
- Focus Areas: ${dimension.focus_areas.join(', ')}
## ACE Semantic Search Results (Pre-gathered)
The following files/code sections were identified by ACE as relevant to this dimension:
${JSON.stringify(relevantAceResults.map(r => ({ query: r.query, files: r.result.slice(0, 5) })), null, 2)}
**Use ACE for deeper exploration**: You have access to mcp__ace-tool__search_context.
When you find something interesting, use ACE to find related code:
- mcp__ace-tool__search_context({ project_root_path: ".", query: "related to {finding}" })
${iteration > 1 ? `
## Previous Findings to Build Upon
${summarizePreviousFindings(previousFindings, dimension.name)}
## This Iteration Focus
- Explore areas not yet covered (check ACE results for new files)
- Verify/deepen previous findings
- Follow leads from previous discoveries
- Use ACE to find cross-references between dimensions
` : ''}
## MANDATORY FIRST STEPS
1. Review the dimension objective and focus areas above (the full exploration plan is held in memory by the orchestrator, not on disk)
2. Read schema: ~/.claude/workflows/cli-templates/schemas/discovery-finding-schema.json
3. Review ACE results above for starting points
4. Explore files identified by ACE
## Exploration Instructions
${dimension.agent_prompt}
## ACE Usage Guidelines
- Use ACE when you need to find:
- Where a function/class is used
- Related implementations in other modules
- Cross-module dependencies
- Similar patterns elsewhere in codebase
- Query format: Natural language, be specific
- Example: "Where is UserService.authenticate called from?"
## Output Requirements
**1. Write JSON file**: ${outputDir}/${dimension.name}.json
Follow discovery-finding-schema.json:
- findings: [{id, title, category, description, file, line, snippet, confidence, related_dimension}]
- coverage: {files_explored, areas_covered, areas_remaining}
- leads: [{description, suggested_search}] // for next iteration
- ace_queries_used: [{query, result_count}] // track ACE usage
**2. Return summary**:
- Total findings this iteration
- Key discoveries
- ACE queries that revealed important code
- Recommended next exploration areas
## Success Criteria
- [ ] JSON written to ${outputDir}/${dimension.name}.json
- [ ] Each finding has file:line reference
- [ ] ACE used for cross-references where applicable
- [ ] Coverage report included
- [ ] Leads for next iteration identified
`;
}
```
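`summarizePreviousFindings` is also assumed rather than defined. A compact sketch that keeps the agent prompt small:
```javascript
// Sketch - trims previous findings for the given dimension to a short bullet list.
function summarizePreviousFindings(findings, dimensionName, limit = 10) {
  return findings
    .filter(f => f.related_dimension === dimensionName)
    .slice(0, limit)
    .map(f => `- [${f.category}] ${f.title} (${f.file}:${f.line}, confidence ${f.confidence})`)
    .join('\n');
}
```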
### Phase 4: Cross-Analysis & Synthesis
```javascript
// For comparison intent, perform cross-analysis
if (intentType === 'comparison' && explorationPlan.comparison_matrix) {
const comparisonResults = [];
for (const point of explorationPlan.comparison_matrix.comparison_points) {
const dimensionAFindings = cumulativeFindings.filter(f =>
f.related_dimension === explorationPlan.comparison_matrix.dimension_a &&
f.category.includes(point.aspect)
);
const dimensionBFindings = cumulativeFindings.filter(f =>
f.related_dimension === explorationPlan.comparison_matrix.dimension_b &&
f.category.includes(point.aspect)
);
// Compare and find discrepancies
const discrepancies = findDiscrepancies(dimensionAFindings, dimensionBFindings, point);
comparisonResults.push({
aspect: point.aspect,
dimension_a_count: dimensionAFindings.length,
dimension_b_count: dimensionBFindings.length,
discrepancies: discrepancies,
match_rate: calculateMatchRate(dimensionAFindings, dimensionBFindings)
});
}
// Write comparison analysis
await writeJson(`${outputDir}/comparison-analysis.json`, {
matrix: explorationPlan.comparison_matrix,
results: comparisonResults,
summary: {
total_discrepancies: comparisonResults.reduce((sum, r) => sum + r.discrepancies.length, 0),
overall_match_rate: average(comparisonResults.map(r => r.match_rate)),
critical_mismatches: comparisonResults.filter(r => r.match_rate < 0.5)
}
});
}
// Prioritize all findings
const prioritizedFindings = prioritizeFindings(cumulativeFindings, explorationPlan);
```
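`findDiscrepancies`, `calculateMatchRate`, `average`, and `prioritizeFindings` are referenced but not defined here. Simple sketches, assuming each finding carries a comparable `key` (such as an endpoint path or symbol name); real comparison logic would be aspect-specific:
```javascript
function findDiscrepancies(aFindings, bFindings, point) {
  const keysB = new Set(bFindings.map(f => f.key || f.title));
  return aFindings
    .filter(f => !keysB.has(f.key || f.title))
    .map(f => ({
      aspect: point.aspect,
      missing_in: 'dimension_b',
      item: f.key || f.title,
      file: f.file,
      line: f.line
    }));
}

function calculateMatchRate(aFindings, bFindings) {
  if (aFindings.length === 0) return 1;
  const keysB = new Set(bFindings.map(f => f.key || f.title));
  const matched = aFindings.filter(f => keysB.has(f.key || f.title)).length;
  return matched / aFindings.length;
}

function average(values) {
  return values.length ? values.reduce((a, b) => a + b, 0) / values.length : 0;
}

function prioritizeFindings(findings, plan) {
  const rank = { critical: 0, high: 1, medium: 2, low: 3 };
  return [...findings].sort((a, b) =>
    (rank[a.priority] ?? 2) - (rank[b.priority] ?? 2) || (b.confidence ?? 0) - (a.confidence ?? 0)
  );
}
```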
### Phase 5: Issue Generation & Summary
```javascript
// Convert high-confidence findings to issues
const issueWorthy = prioritizedFindings.filter(f =>
f.confidence >= 0.7 || f.priority === 'critical' || f.priority === 'high'
);
const issues = issueWorthy.map(finding => ({
id: `ISS-${discoveryId}-${finding.id}`,
title: finding.title,
description: finding.description,
source: {
discovery_id: discoveryId,
finding_id: finding.id,
dimension: finding.related_dimension
},
file: finding.file,
line: finding.line,
priority: finding.priority,
category: finding.category,
suggested_fix: finding.suggested_fix,
confidence: finding.confidence,
status: 'discovered',
created_at: new Date().toISOString()
}));
// Write issues
await writeJsonl(`${outputDir}/discovery-issues.jsonl`, issues);
// Update final state (summary embedded in state, no separate file)
await updateDiscoveryState(outputDir, {
phase: 'complete',
updated_at: new Date().toISOString(),
results: {
total_iterations: iteration,
total_findings: cumulativeFindings.length,
issues_generated: issues.length,
comparison_match_rate: comparisonResults
? average(comparisonResults.map(r => r.match_rate))
: null
}
});
// Prompt user for next action
await AskUserQuestion({
questions: [{
question: `Discovery complete: ${issues.length} issues from ${cumulativeFindings.length} findings across ${iteration} iterations. What next?`,
header: "Next Step",
multiSelect: false,
options: [
{ label: "Export to Issues (Recommended)", description: `Export ${issues.length} issues for planning` },
{ label: "Review Details", description: "View comparison analysis and iteration details" },
{ label: "Run Deeper", description: "Continue with more iterations" },
{ label: "Skip", description: "Complete without exporting" }
]
}]
});
```
## Output File Structure
```
.workflow/issues/discoveries/
└── {DBP-YYYYMMDD-HHmmss}/
├── discovery-state.json # Session state with iteration tracking
├── iterations/
│ ├── 1/
│ │ └── {dimension}.json # Dimension findings
│ ├── 2/
│ │ └── {dimension}.json
│ └── ...
├── comparison-analysis.json # Cross-dimension comparison (if applicable)
└── discovery-issues.jsonl # Generated issue candidates
```
**Simplified Design**:
- ACE context and Gemini plan kept in memory, not persisted
- Iteration summaries embedded in state
- No separate summary.md (state.json contains all needed info)
## Schema References
| Schema | Path | Used By |
|--------|------|---------|
| **Discovery State** | `discovery-state-schema.json` | Orchestrator (state tracking) |
| **Discovery Finding** | `discovery-finding-schema.json` | Dimension agents (output) |
| **Exploration Plan** | `exploration-plan-schema.json` | Gemini output validation (memory only) |
## Configuration Options
| Flag | Default | Description |
|------|---------|-------------|
| `--scope` | `**/*` | File pattern to explore |
| `--depth` | `standard` | `standard` (3 iterations) or `deep` (5+ iterations) |
| `--max-iterations` | 5 | Maximum exploration iterations |
| `--tool` | `gemini` | Planning tool (gemini/qwen) |
| `--plan-only` | `false` | Stop after Phase 2 (Gemini planning), show plan for user review |
## Examples
### Example 1: Single Module Deep Dive
```bash
/issue:discover-by-prompt "Find all potential issues in the auth module" --scope=src/auth/**
```
**Gemini plans** (single dimension):
- Dimension: auth-module
- Focus: security vulnerabilities, edge cases, error handling, test gaps
**Iterations**: 2-3 (until no new findings)
### Example 2: API Contract Comparison
```bash
/issue:discover-by-prompt "Check if API calls match implementations" --scope=src/**
```
**Gemini plans** (comparison):
- Dimension 1: api-consumers (fetch calls, hooks, services)
- Dimension 2: api-providers (handlers, routes, controllers)
- Comparison matrix: endpoints, methods, payloads, responses
### Example 3: Multi-Module Audit
```bash
/issue:discover-by-prompt "Audit the payment flow for issues" --scope=src/payment/**
```
**Gemini plans** (multi-dimension):
- Dimension 1: payment-logic (calculations, state transitions)
- Dimension 2: validation (input checks, business rules)
- Dimension 3: error-handling (failure modes, recovery)
### Example 4: Plan Only Mode
```bash
/issue:discover-by-prompt "Find inconsistent patterns" --plan-only
```
Stops after Gemini planning, outputs:
```
Gemini Plan:
- Intent: search
- Dimensions: 2 (pattern-definitions, pattern-usages)
- Estimated iterations: 3
Continue with exploration? [Y/n]
```
## Related Commands
```bash
# After discovery, plan solutions
/issue:plan DBP-001-01,DBP-001-02
# View all discoveries
/issue:manage
# Standard perspective-based discovery
/issue:discover src/auth/** --perspectives=security,bug
```
## Best Practices
1. **Be Specific in Prompts**: More specific prompts lead to better Gemini planning
2. **Scope Appropriately**: Narrow scope for focused comparison, wider for audits
3. **Review the Exploration Plan**: Run with `--plan-only` to inspect the Gemini plan before committing to long explorations
4. **Use Standard Depth First**: Start with standard, go deep only if needed
5. **Combine with `/issue:discover`**: Use prompt-based for comparisons, perspective-based for audits

View File

@@ -1,10 +1,14 @@
---
name: issue:discover
description: Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.
argument-hint: "<path-pattern> [--perspectives=bug,ux,...] [--external]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*), AskUserQuestion(*), Glob(*), Grep(*)
argument-hint: "[-y|--yes] <path-pattern> [--perspectives=bug,ux,...] [--external]"
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*), Task(*), AskUserQuestion(*), Glob(*), Grep(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-select all perspectives, skip confirmations.
# Issue Discovery Command
## Quick Start

View File

@@ -1,10 +1,14 @@
---
name: execute
description: Execute queue with DAG-based parallel orchestration (one commit per solution)
argument-hint: "[--worktree [<existing-path>]] [--queue <queue-id>]"
argument-hint: "[-y|--yes] --queue <queue-id> [--worktree [<existing-path>]]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm execution, use recommended settings.
# Issue Execute Command (/issue:execute)
## Overview
@@ -17,21 +21,64 @@ Minimal orchestrator that dispatches **solution IDs** to executors. Each executo
- `done <id>` → update solution completion status
- No race conditions: status changes only via `done`
- **Executor handles all tasks within a solution sequentially**
- **Worktree isolation**: Each executor can work in its own git worktree
- **Single worktree for entire queue**: One worktree isolates ALL queue execution from main workspace
## Queue ID Requirement (MANDATORY)
**Queue ID is REQUIRED.** You MUST specify which queue to execute via `--queue <queue-id>`.
### If Queue ID Not Provided
When `--queue` parameter is missing, you MUST:
1. **List available queues** by running:
```javascript
const result = Bash('ccw issue queue list --brief --json');
const index = JSON.parse(result);
```
2. **Display available queues** to user:
```
Available Queues:
ID Status Progress Issues
-----------------------------------------------------------
→ QUE-20251215-001 active 3/10 ISS-001, ISS-002
QUE-20251210-002 active 0/5 ISS-003
QUE-20251205-003 completed 8/8 ISS-004
```
3. **Stop and ask user** to specify which queue to execute:
```javascript
AskUserQuestion({
questions: [{
question: "Which queue would you like to execute?",
header: "Queue",
multiSelect: false,
options: index.queues
.filter(q => q.status === 'active')
.map(q => ({
label: q.id,
description: `${q.status}, ${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
}))
}]
})
```
4. **After user selection**, continue execution with the selected queue ID.
**DO NOT auto-select queues.** Explicit user confirmation is required to prevent accidental execution of wrong queue.
## Usage
```bash
/issue:execute # Execute active queue(s)
/issue:execute --queue QUE-xxx # Execute specific queue
/issue:execute --worktree # Use git worktrees for parallel isolation
/issue:execute --worktree --queue QUE-xxx
/issue:execute --worktree /path/to/existing/worktree # Resume in existing worktree
/issue:execute --queue QUE-xxx # Execute specific queue (REQUIRED)
/issue:execute --queue QUE-xxx --worktree # Execute in isolated worktree
/issue:execute --queue QUE-xxx --worktree /path/to/existing/worktree # Resume
```
**Parallelism**: Determined automatically by task dependency DAG (no manual control)
**Executor & Dry-run**: Selected via interactive prompt (AskUserQuestion)
**Worktree**: Creates isolated git worktrees for each parallel executor
**Worktree**: Creates ONE worktree for the entire queue execution (not per-solution)
**⭐ Recommended Executor**: **Codex** - Best for long-running autonomous work (2hr timeout), supports background execution and full write access
@@ -44,37 +91,101 @@ Minimal orchestrator that dispatches **solution IDs** to executors. Each executo
## Execution Flow
```
Phase 0 (if --worktree): Setup Worktree Base
Ensure .worktrees directory exists
Phase 0: Validate Queue ID (REQUIRED)
If --queue provided → use specified queue
├─ If --queue missing → list queues, prompt user to select
└─ Store QUEUE_ID for all subsequent commands
Phase 0.5 (if --worktree): Setup Queue Worktree
├─ Create ONE worktree for entire queue: .ccw/worktrees/queue-<timestamp>
├─ All subsequent execution happens in this worktree
└─ Main workspace remains clean and untouched
Phase 1: Get DAG & User Selection
├─ ccw issue queue dag [--queue QUE-xxx] → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
├─ ccw issue queue dag --queue ${QUEUE_ID} → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
└─ AskUserQuestion → executor type (codex|gemini|agent), dry-run mode, worktree mode
Phase 2: Dispatch Parallel Batch (DAG-driven)
├─ Parallelism determined by DAG (no manual limit)
├─ All executors work in the SAME worktree (or main if no worktree)
├─ For each solution ID in batch (parallel - all at once):
│ ├─ (if worktree) Create isolated worktree: git worktree add
│ ├─ Executor calls: ccw issue detail <id> (READ-ONLY)
│ ├─ Executor gets FULL SOLUTION with all tasks
│ ├─ Executor implements all tasks sequentially (T1 → T2 → T3)
│ ├─ Executor tests + verifies each task
│ ├─ Executor commits ONCE per solution (with formatted summary)
│ ├─ Executor calls: ccw issue done <id>
│ └─ (if worktree) Cleanup: merge branch, remove worktree
│ └─ Executor calls: ccw issue done <id>
└─ Wait for batch completion
Phase 3: Next Batch
Phase 3: Next Batch (repeat Phase 2)
└─ ccw issue queue dag → check for newly-ready solutions
Phase 4 (if --worktree): Worktree Completion
├─ All batches complete → prompt for merge strategy
└─ Options: Create PR / Merge to main / Keep branch
```
## Implementation
### Phase 0: Validate Queue ID
```javascript
// Check if --queue was provided
let QUEUE_ID = args.queue;
if (!QUEUE_ID) {
// List available queues
const listResult = Bash('ccw issue queue list --brief --json').trim();
const index = JSON.parse(listResult);
if (index.queues.length === 0) {
console.log('No queues found. Use /issue:queue to create one first.');
return;
}
// Filter active queues only
const activeQueues = index.queues.filter(q => q.status === 'active');
if (activeQueues.length === 0) {
console.log('No active queues found.');
console.log('Available queues:', index.queues.map(q => `${q.id} (${q.status})`).join(', '));
return;
}
// Display and prompt user
console.log('\nAvailable Queues:');
console.log('ID'.padEnd(22) + 'Status'.padEnd(12) + 'Progress'.padEnd(12) + 'Issues');
console.log('-'.repeat(70));
for (const q of index.queues) {
const marker = q.id === index.active_queue_id ? '→ ' : ' ';
console.log(marker + q.id.padEnd(20) + q.status.padEnd(12) +
`${q.completed_solutions || 0}/${q.total_solutions || 0}`.padEnd(12) +
q.issue_ids.join(', '));
}
const answer = AskUserQuestion({
questions: [{
question: "Which queue would you like to execute?",
header: "Queue",
multiSelect: false,
options: activeQueues.map(q => ({
label: q.id,
description: `${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
}))
}]
});
QUEUE_ID = answer['Queue'];
}
console.log(`\n## Executing Queue: ${QUEUE_ID}\n`);
```
### Phase 1: Get DAG & User Selection
```javascript
// Get dependency graph and parallel batches
const dagJson = Bash(`ccw issue queue dag`).trim();
// Get dependency graph and parallel batches (QUEUE_ID required)
const dagJson = Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim();
const dag = JSON.parse(dagJson);
if (dag.error || dag.ready_count === 0) {
@@ -115,12 +226,12 @@ const answer = AskUserQuestion({
]
},
{
question: 'Use git worktrees for parallel isolation?',
question: 'Use git worktree for queue isolation?',
header: 'Worktree',
multiSelect: false,
options: [
{ label: 'Yes (Recommended for parallel)', description: 'Each executor works in isolated worktree branch' },
{ label: 'No', description: 'Work directly in current directory (serial only)' }
{ label: 'Yes (Recommended)', description: 'Create ONE worktree for entire queue - main stays clean' },
{ label: 'No', description: 'Work directly in current directory' }
]
}
]
@@ -140,7 +251,7 @@ if (isDryRun) {
}
```
### Phase 2: Dispatch Parallel Batch (DAG-driven)
### Phase 0 & 2: Setup Queue Worktree & Dispatch
```javascript
// Parallelism determined by DAG - no manual limit
@@ -158,24 +269,40 @@ TodoWrite({
console.log(`\n### Executing Solutions (DAG batch 1): ${batch.join(', ')}`);
// Setup worktree base directory if needed (using absolute paths)
if (useWorktree) {
// Use absolute paths to avoid issues when running from subdirectories
const repoRoot = Bash('git rev-parse --show-toplevel').trim();
const worktreeBase = `${repoRoot}/.ccw/worktrees`;
Bash(`mkdir -p "${worktreeBase}"`);
// Prune stale worktrees from previous interrupted executions
Bash('git worktree prune');
}
// Parse existing worktree path from args if provided
// Example: --worktree /path/to/existing/worktree
const existingWorktree = args.worktree && typeof args.worktree === 'string' ? args.worktree : null;
// Setup ONE worktree for entire queue (not per-solution)
let worktreePath = null;
let worktreeBranch = null;
if (useWorktree) {
const repoRoot = Bash('git rev-parse --show-toplevel').trim();
const worktreeBase = `${repoRoot}/.ccw/worktrees`;
Bash(`mkdir -p "${worktreeBase}"`);
Bash('git worktree prune'); // Cleanup stale worktrees
if (existingWorktree) {
// Resume mode: Use existing worktree
worktreePath = existingWorktree;
worktreeBranch = Bash(`git -C "${worktreePath}" branch --show-current`).trim();
console.log(`Resuming in existing worktree: ${worktreePath} (branch: ${worktreeBranch})`);
} else {
// Create mode: ONE worktree for the entire queue
const timestamp = new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14);
worktreeBranch = `queue-exec-${dag.queue_id || timestamp}`;
worktreePath = `${worktreeBase}/${worktreeBranch}`;
Bash(`git worktree add "${worktreePath}" -b "${worktreeBranch}"`);
console.log(`Created queue worktree: ${worktreePath}`);
}
}
// Launch ALL solutions in batch in parallel (DAG guarantees no conflicts)
// All executors work in the SAME worktree (or main if no worktree)
const executions = batch.map(solutionId => {
updateTodo(solutionId, 'in_progress');
return dispatchExecutor(solutionId, executor, useWorktree, existingWorktree);
return dispatchExecutor(solutionId, executor, worktreePath);
});
await Promise.all(executions);
@@ -185,183 +312,77 @@ batch.forEach(id => updateTodo(id, 'completed'));
### Executor Dispatch
```javascript
function dispatchExecutor(solutionId, executorType, useWorktree = false, existingWorktree = null) {
// Worktree setup commands (if enabled) - using absolute paths
// Supports both creating new worktrees and resuming in existing ones
const worktreeSetup = useWorktree ? `
### Step 0: Setup Isolated Worktree
\`\`\`bash
# Use absolute paths to avoid issues when running from subdirectories
REPO_ROOT=$(git rev-parse --show-toplevel)
WORKTREE_BASE="\${REPO_ROOT}/.ccw/worktrees"
// worktreePath: path to shared worktree (null if not using worktree)
function dispatchExecutor(solutionId, executorType, worktreePath = null) {
// If worktree is provided, executor works in that directory
// No per-solution worktree creation - ONE worktree for entire queue
# Check if existing worktree path was provided
EXISTING_WORKTREE="${existingWorktree || ''}"
if [[ -n "\${EXISTING_WORKTREE}" && -d "\${EXISTING_WORKTREE}" ]]; then
# Resume mode: Use existing worktree
WORKTREE_PATH="\${EXISTING_WORKTREE}"
WORKTREE_NAME=$(basename "\${WORKTREE_PATH}")
# Verify it's a valid git worktree
if ! git -C "\${WORKTREE_PATH}" rev-parse --is-inside-work-tree &>/dev/null; then
echo "Error: \${EXISTING_WORKTREE} is not a valid git worktree"
exit 1
fi
echo "Resuming in existing worktree: \${WORKTREE_PATH}"
else
# Create mode: New worktree with timestamp
WORKTREE_NAME="exec-${solutionId}-$(date +%H%M%S)"
WORKTREE_PATH="\${WORKTREE_BASE}/\${WORKTREE_NAME}"
# Ensure worktree base exists
mkdir -p "\${WORKTREE_BASE}"
# Prune stale worktrees
git worktree prune
# Create worktree
git worktree add "\${WORKTREE_PATH}" -b "\${WORKTREE_NAME}"
echo "Created new worktree: \${WORKTREE_PATH}"
fi
# Setup cleanup trap for graceful failure handling
cleanup_worktree() {
echo "Cleaning up worktree due to interruption..."
cd "\${REPO_ROOT}" 2>/dev/null || true
git worktree remove "\${WORKTREE_PATH}" --force 2>/dev/null || true
echo "Worktree removed. Branch '\${WORKTREE_NAME}' kept for inspection."
}
trap cleanup_worktree EXIT INT TERM
cd "\${WORKTREE_PATH}"
\`\`\`
` : '';
const worktreeCleanup = useWorktree ? `
### Step 5: Worktree Completion (User Choice)
After all tasks complete, prompt for merge strategy:
\`\`\`javascript
AskUserQuestion({
questions: [{
question: "Solution ${solutionId} completed. What to do with worktree branch?",
header: "Merge",
multiSelect: false,
options: [
{ label: "Create PR (Recommended)", description: "Push branch and create pull request - safest for parallel execution" },
{ label: "Merge to main", description: "Merge branch and cleanup worktree (requires clean main)" },
{ label: "Keep branch", description: "Cleanup worktree, keep branch for manual handling" }
]
}]
})
\`\`\`
**Based on selection:**
\`\`\`bash
# Disable cleanup trap before intentional cleanup
trap - EXIT INT TERM
# Return to repo root (use REPO_ROOT from setup)
cd "\${REPO_ROOT}"
# Validate main repo state before merge
validate_main_clean() {
if [[ -n \$(git status --porcelain) ]]; then
echo "⚠️ Warning: Main repo has uncommitted changes."
echo "Cannot auto-merge. Falling back to 'Create PR' option."
return 1
fi
return 0
}
# Create PR (Recommended for parallel execution):
git push -u origin "\${WORKTREE_NAME}"
gh pr create --title "Solution ${solutionId}" --body "Issue queue execution"
git worktree remove "\${WORKTREE_PATH}"
# Merge to main (only if main is clean):
if validate_main_clean; then
git merge --no-ff "\${WORKTREE_NAME}" -m "Merge solution ${solutionId}"
git worktree remove "\${WORKTREE_PATH}" && git branch -d "\${WORKTREE_NAME}"
else
# Fallback to PR if main is dirty
git push -u origin "\${WORKTREE_NAME}"
gh pr create --title "Solution ${solutionId}" --body "Issue queue execution (main had uncommitted changes)"
git worktree remove "\${WORKTREE_PATH}"
fi
# Keep branch:
git worktree remove "\${WORKTREE_PATH}"
echo "Branch \${WORKTREE_NAME} kept for manual handling"
\`\`\`
**Parallel Execution Safety**: "Create PR" is the default and safest option for parallel executors, avoiding merge race conditions.
` : '';
// Pre-defined values (replaced at dispatch time, NOT by executor)
const SOLUTION_ID = solutionId;
const WORK_DIR = worktreePath || null;
// Build prompt without markdown code blocks to avoid escaping issues
const prompt = `
## Execute Solution ${solutionId}
${worktreeSetup}
### Step 1: Get Solution (read-only)
\`\`\`bash
ccw issue detail ${solutionId}
\`\`\`
## Execute Solution: ${SOLUTION_ID}
${WORK_DIR ? `Working Directory: ${WORK_DIR}` : ''}
### Step 1: Get Solution Details
Run this command to get the full solution with all tasks:
ccw issue detail ${SOLUTION_ID}
### Step 2: Execute All Tasks Sequentially
The detail command returns a FULL SOLUTION with all tasks.
Execute each task in order (T1 → T2 → T3 → ...):
For each task:
1. Follow task.implementation steps
2. Run task.test commands
3. Verify task.acceptance criteria
(Do NOT commit after each task)
- Follow task.implementation steps
- Run task.test commands
- Verify task.acceptance criteria
- Do NOT commit after each task
### Step 3: Commit Solution (Once)
After ALL tasks pass, commit once with formatted summary:
\`\`\`bash
git add <all-modified-files>
git commit -m "[type](scope): [solution.description]
After ALL tasks pass, commit once with formatted summary.
## Solution Summary
- Solution-ID: ${solutionId}
- Tasks: T1, T2, ...
Command:
git add -A
git commit -m "<type>(<scope>): <description>
## Tasks Completed
- [T1] task1.title: action
- [T2] task2.title: action
Solution: ${SOLUTION_ID}
Tasks completed: <list task IDs>
## Files Modified
- file1.ts
- file2.ts
Changes:
- <file1>: <what changed>
- <file2>: <what changed>
## Verification
- All tests passed
- All acceptance criteria verified"
\`\`\`
Verified: all tests passed"
Replace <type> with: feat|fix|refactor|docs|test
Replace <scope> with: affected module name
Replace <description> with: brief summary from solution
### Step 4: Report Completion
\`\`\`bash
ccw issue done ${solutionId} --result '{"summary": "...", "files_modified": [...], "commit": {"hash": "...", "type": "feat"}, "tasks_completed": N}'
\`\`\`
On success, run:
ccw issue done ${SOLUTION_ID} --result '{"summary": "<brief>", "files_modified": ["<file1>", "<file2>"], "commit": {"hash": "<hash>", "type": "<type>"}, "tasks_completed": <N>}'
If any task failed:
\`\`\`bash
ccw issue done ${solutionId} --fail --reason '{"task_id": "TX", "error_type": "test_failure", "message": "..."}'
\`\`\`
${worktreeCleanup}`;
On failure, run:
ccw issue done ${SOLUTION_ID} --fail --reason '{"task_id": "<TX>", "error_type": "<test_failure|build_error|other>", "message": "<error details>"}'
### Important Notes
- Do NOT clean up the worktree - it is shared by all solutions in the queue
- Replace all <placeholder> values with actual values from your execution
`;
// For CLI tools, pass --cd to set working directory
const cdOption = worktreePath ? ` --cd "${worktreePath}"` : '';
if (executorType === 'codex') {
return Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${solutionId}`,
`ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${solutionId}${cdOption}`,
{ timeout: 7200000, run_in_background: true } // 2hr for full solution
);
} else if (executorType === 'gemini') {
return Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${solutionId}`,
`ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${solutionId}${cdOption}`,
{ timeout: 3600000, run_in_background: true }
);
} else {
@@ -369,7 +390,7 @@ ${worktreeCleanup}`;
subagent_type: 'code-developer',
run_in_background: false,
description: `Execute solution ${solutionId}`,
prompt: prompt
prompt: worktreePath ? `Working directory: ${worktreePath}\n\n${prompt}` : prompt
});
}
}
@@ -378,8 +399,8 @@ ${worktreeCleanup}`;
### Phase 3: Check Next Batch
```javascript
// Refresh DAG after batch completes
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag`).trim());
// Refresh DAG after batch completes (use same QUEUE_ID)
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim());
console.log(`
## Batch Complete
@@ -389,46 +410,117 @@ console.log(`
`);
if (refreshedDag.ready_count > 0) {
console.log('Run `/issue:execute` again for next batch.');
console.log(`Run \`/issue:execute --queue ${QUEUE_ID}\` again for next batch.`);
// Note: If resuming, pass existing worktree path:
// /issue:execute --queue ${QUEUE_ID} --worktree <worktreePath>
}
```
### Phase 4: Worktree Completion (after ALL batches)
```javascript
// Only run when ALL solutions completed AND using worktree
if (useWorktree && refreshedDag.ready_count === 0 && refreshedDag.completed_count === refreshedDag.total) {
console.log('\n## All Solutions Completed - Worktree Cleanup');
const answer = AskUserQuestion({
questions: [{
question: `Queue complete. What to do with worktree branch "${worktreeBranch}"?`,
header: 'Merge',
multiSelect: false,
options: [
{ label: 'Create PR (Recommended)', description: 'Push branch and create pull request' },
{ label: 'Merge to main', description: 'Merge all commits and cleanup worktree' },
{ label: 'Keep branch', description: 'Cleanup worktree, keep branch for manual handling' }
]
}]
});
const repoRoot = Bash('git rev-parse --show-toplevel').trim();
if (answer['Merge'].includes('Create PR')) {
Bash(`git -C "${worktreePath}" push -u origin "${worktreeBranch}"`);
Bash(`gh pr create --title "Queue ${dag.queue_id}" --body "Issue queue execution - all solutions completed" --head "${worktreeBranch}"`);
Bash(`git worktree remove "${worktreePath}"`);
console.log(`PR created for branch: ${worktreeBranch}`);
} else if (answer['Merge'].includes('Merge to main')) {
// Check main is clean
const mainDirty = Bash('git status --porcelain').trim();
if (mainDirty) {
console.log('Warning: Main has uncommitted changes. Falling back to PR.');
Bash(`git -C "${worktreePath}" push -u origin "${worktreeBranch}"`);
Bash(`gh pr create --title "Queue ${dag.queue_id}" --body "Issue queue execution (main had uncommitted changes)" --head "${worktreeBranch}"`);
} else {
Bash(`git merge --no-ff "${worktreeBranch}" -m "Merge queue ${dag.queue_id}"`);
Bash(`git branch -d "${worktreeBranch}"`);
}
Bash(`git worktree remove "${worktreePath}"`);
} else {
Bash(`git worktree remove "${worktreePath}"`);
console.log(`Branch ${worktreeBranch} kept for manual handling`);
}
}
```
## Parallel Execution Model
```
┌──────────────────────────────────────────────────────────────┐
│ Orchestrator                                                  │
├──────────────────────────────────────────────────────────────┤
1. ccw issue queue dag
   → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
2. Dispatch batch 1 (parallel):
   ┌──────────────────────┐  ┌──────────────────────┐
   │ Executor 1           │  │ Executor 2           │
   │ detail S-1           │  │ detail S-2           │
   │ → gets full solution │  │ → gets full solution │
   │ [T1→T2→T3 sequential]│  │ [T1→T2 sequential]   │
   │ commit (1x solution) │  │ commit (1x solution) │
   │ done S-1             │  │ done S-2             │
   └──────────────────────┘  └──────────────────────┘
3. ccw issue queue dag (refresh)
   → S-3 now ready (S-1 completed, file conflict resolved)
└──────────────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────────────┐
│ Orchestrator                                                      │
├──────────────────────────────────────────────────────────────────┤
0.  Validate QUEUE_ID (required, or prompt user to select)
0.5 (if --worktree) Create ONE worktree for entire queue
    → .ccw/worktrees/queue-exec-<queue-id>
1.  ccw issue queue dag --queue ${QUEUE_ID}
    → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
2.  Dispatch batch 1 (parallel, SAME worktree):
    ┌────────────────────────────────────────────┐
    │ Shared Queue Worktree (or main)            │
    │ ┌──────────────────┐  ┌──────────────────┐ │
    │ │ Executor 1       │  │ Executor 2       │ │
    │ │ detail S-1       │  │ detail S-2       │ │
    │ │ [T1→T2→T3]       │  │ [T1→T2]          │ │
    │ │ commit S-1       │  │ commit S-2       │ │
    │ │ done S-1         │  │ done S-2         │ │
    │ └──────────────────┘  └──────────────────┘ │
    └────────────────────────────────────────────┘
3.  ccw issue queue dag (refresh)
    → S-3 now ready → dispatch batch 2 (same worktree)
4.  (if --worktree) ALL batches complete → cleanup worktree
    → Prompt: Create PR / Merge to main / Keep branch
└──────────────────────────────────────────────────────────────────┘
```
**Why this works for parallel:**
- **ONE worktree for entire queue** → all solutions share same isolated workspace
- `detail <id>` is READ-ONLY → no race conditions
- Each executor handles **all tasks within a solution** sequentially
- **One commit per solution** with formatted summary (not per-task)
- `done <id>` updates only its own solution status
- `queue dag` recalculates ready solutions after each batch
- Solutions in same batch have NO file conflicts
- Solutions in same batch have NO file conflicts (DAG guarantees)
- **Main workspace stays clean** until merge/PR decision
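As a minimal orchestration sketch of this model, following the document's pseudocode conventions (`dispatchExecutor` is a hypothetical stand-in for the executor dispatch shown earlier):

```javascript
// Sketch: walk the DAG batches, reusing ONE worktree for the whole queue.
// dispatchExecutor() is a hypothetical wrapper around the executor dispatch above.
const dag = JSON.parse(Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim());
for (const batch of dag.parallel_batches) {
  // Same-batch solutions have no file conflicts, so they can run concurrently
  const handles = batch.map(solutionId =>
    dispatchExecutor(solutionId, { worktreePath, executorType })
  );
  // Wait for the whole batch before refreshing the DAG
  handles.forEach(h => TaskOutput({ task_id: h, block: true }));
}
```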
## CLI Endpoint Contract
### `ccw issue queue dag`
Returns dependency graph with parallel batches (solution-level):
### `ccw issue queue list --brief --json`
Returns queue index for selection (used when --queue not provided):
```json
{
"active_queue_id": "QUE-20251215-001",
"queues": [
{ "id": "QUE-20251215-001", "status": "active", "issue_ids": ["ISS-001"], "total_solutions": 5, "completed_solutions": 2 }
]
}
```
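When `--queue` is omitted, the orchestrator can resolve a queue from this index before calling `dag`. A minimal sketch, assuming the JSON shape above (`providedQueueId` stands for the `--queue` value if given; the fallback to the first listed queue is illustrative, not a documented behavior):

```javascript
// Resolve QUEUE_ID from the queue index when --queue was not provided
const index = JSON.parse(Bash('ccw issue queue list --brief --json').trim());
const QUEUE_ID = providedQueueId              // from the --queue argument, if any
  || index.active_queue_id                    // prefer the active queue
  || (index.queues[0] && index.queues[0].id); // illustrative fallback: first listed queue
if (!QUEUE_ID) {
  console.log('No queues found. Run /issue:queue first.');
}
```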
### `ccw issue queue dag --queue <queue-id>`
Returns dependency graph with parallel batches (solution-level, **--queue required**):
```json
{
"queue_id": "QUE-...",
View File
@@ -0,0 +1,382 @@
---
name: from-brainstorm
description: Convert brainstorm session ideas into issue with executable solution for parallel-dev-cycle
argument-hint: "SESSION=\"<session-id>\" [--idea=<index>] [--auto] [-y|--yes]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-select highest-scored idea, skip confirmations, create issue directly.
# Issue From-Brainstorm Command (/issue:from-brainstorm)
## Overview
Bridge command that converts **brainstorm-with-file** session output into executable **issue + solution** for parallel-dev-cycle consumption.
**Core workflow**: Load Session → Select Idea → Convert to Issue → Generate Solution → Bind & Ready
**Input sources**:
- **synthesis.json** - Main brainstorm results with top_ideas
- **perspectives.json** - Multi-CLI perspectives (creative/pragmatic/systematic)
- **.brainstorming/** - Synthesis artifacts (clarifications, enhancements from role analyses)
**Output**:
- **Issue** (ISS-YYYYMMDD-NNN) - Full context with clarifications
- **Solution** (SOL-{issue-id}-{uid}) - Structured tasks for parallel-dev-cycle
## Quick Reference
```bash
# Interactive mode - select idea, confirm before creation
/issue:from-brainstorm SESSION="BS-rate-limiting-2025-01-28"
# Pre-select idea by index
/issue:from-brainstorm SESSION="BS-auth-system-2025-01-28" --idea=0
# Auto mode - select highest scored, no confirmations
/issue:from-brainstorm SESSION="BS-caching-2025-01-28" --auto -y
```
## Arguments
| Argument | Required | Type | Default | Description |
|----------|----------|------|---------|-------------|
| SESSION | Yes | String | - | Session ID or path to `.workflow/.brainstorm/BS-xxx` |
| --idea | No | Integer | - | Pre-select idea by index (0-based) |
| --auto | No | Flag | false | Auto-select highest-scored idea |
| -y, --yes | No | Flag | false | Skip all confirmations |
## Data Structures
### Issue Schema (Output)
```typescript
interface Issue {
id: string; // ISS-YYYYMMDD-NNN
title: string; // From idea.title
status: 'planned'; // Auto-set after solution binding
priority: number; // 1-5 (derived from idea.score)
context: string; // Full description with clarifications
source: 'brainstorm';
labels: string[]; // ['brainstorm', perspective, feasibility]
// Structured fields
expected_behavior: string; // From key_strengths
actual_behavior: string; // From main_challenges
affected_components: string[]; // Extracted from description
_brainstorm_metadata: {
session_id: string;
idea_score: number;
novelty: number;
feasibility: string;
clarifications_count: number;
};
}
```
### Solution Schema (Output)
```typescript
interface Solution {
id: string; // SOL-{issue-id}-{4-char-uid}
description: string; // idea.title
approach: string; // idea.description
tasks: Task[]; // Generated from idea.next_steps
analysis: {
risk: 'low' | 'medium' | 'high';
impact: 'low' | 'medium' | 'high';
complexity: 'low' | 'medium' | 'high';
};
is_bound: boolean; // true
created_at: string;
bound_at: string;
}
interface Task {
id: string; // T1, T2, T3...
title: string; // Actionable task name
scope: string; // design|implementation|testing|documentation
action: string; // Implement|Design|Research|Test|Document
description: string;
implementation: string[]; // Step-by-step guide
acceptance: {
criteria: string[]; // What defines success
verification: string[]; // How to verify
};
priority: number; // 1-5
depends_on: string[]; // Task dependencies
}
```
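For orientation, a hypothetical instance of the Solution/Task shapes above (all values are illustrative):

```javascript
// Illustrative Solution instance matching the schema above (values are made up)
const exampleSolution = {
  id: 'SOL-ISS-20250128-001-ab3d',
  description: 'Token Bucket Algorithm',
  approach: 'Add a token bucket rate limiter at the API gateway layer',
  tasks: [{
    id: 'T1',
    title: 'Research & Validate Approach',
    scope: 'design',
    action: 'Research',
    description: 'Investigate blockers and confirm feasibility',
    implementation: ['Review similar implementations', 'Document blockers'],
    acceptance: {
      criteria: ['Blockers documented', 'Approach validated'],
      verification: ['Design review sign-off']
    },
    priority: 1,
    depends_on: []
  }],
  analysis: { risk: 'medium', impact: 'high', complexity: 'medium' },
  is_bound: true,
  created_at: '2025-01-28T10:00:00Z',
  bound_at: '2025-01-28T10:00:05Z'
};
```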
## Execution Flow
```
Phase 1: Session Loading
├─ Validate session path
├─ Load synthesis.json (required)
├─ Load perspectives.json (optional - multi-CLI insights)
├─ Load .brainstorming/** (optional - synthesis artifacts)
└─ Validate top_ideas array exists
Phase 2: Idea Selection
├─ Auto mode: Select highest scored idea
├─ Pre-selected: Use --idea=N index
└─ Interactive: Display table, ask user to select
Phase 3: Enrich Issue Context
├─ Base: idea.description + key_strengths + main_challenges
├─ Add: Relevant clarifications (Requirements/Architecture/Feasibility)
├─ Add: Multi-perspective insights (creative/pragmatic/systematic)
└─ Add: Session metadata (session_id, completion date, clarification count)
Phase 4: Create Issue
├─ Generate issue data with enriched context
├─ Calculate priority from idea.score (0-10 → 1-5)
├─ Create via: ccw issue create (heredoc for JSON)
└─ Returns: ISS-YYYYMMDD-NNN
Phase 5: Generate Solution Tasks
├─ T1: Research & Validate (if main_challenges exist)
├─ T2: Design & Specification (if key_strengths exist)
├─ T3+: Implementation tasks (from idea.next_steps)
└─ Each task includes: implementation steps + acceptance criteria
Phase 6: Bind Solution
├─ Write solution to .workflow/issues/solutions/{issue-id}.jsonl
├─ Bind via: ccw issue bind {issue-id} {solution-id}
├─ Update issue status to 'planned'
└─ Returns: SOL-{issue-id}-{uid}
Phase 7: Next Steps
└─ Offer: Form queue | Convert another idea | View details | Done
```
## Context Enrichment Logic
### Base Context (Always Included)
- **Description**: `idea.description`
- **Why This Idea**: `idea.key_strengths[]`
- **Challenges to Address**: `idea.main_challenges[]`
- **Implementation Steps**: `idea.next_steps[]`
### Enhanced Context (If Available)
**From Synthesis Artifacts** (`.brainstorming/*/analysis*.md`):
- Extract clarifications matching categories: Requirements, Architecture, Feasibility
- Format: `**{Category}** ({role}): {question} → {answer}`
- Limit: Top 3 most relevant
**From Perspectives** (`perspectives.json`):
- **Creative**: First insight from `perspectives.creative.insights[0]`
- **Pragmatic**: First blocker from `perspectives.pragmatic.blockers[0]`
- **Systematic**: First pattern from `perspectives.systematic.patterns[0]`
**Session Metadata**:
- Session ID, Topic, Completion Date
- Clarifications count (if synthesis artifacts loaded)
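A minimal sketch of assembling the enriched context from the sources above (clarification and perspective field names are assumptions based on the formats described here):

```javascript
// Build issue.context from base idea fields plus optional enrichment (field names assumed)
function buildContext(idea, session, clarifications = [], perspectives = null) {
  const lines = [
    idea.description,
    '## Why This Idea', ...idea.key_strengths.map(s => `- ${s}`),
    '## Challenges to Address', ...idea.main_challenges.map(c => `- ${c}`),
    '## Implementation Steps', ...idea.next_steps.map((s, i) => `${i + 1}. ${s}`)
  ];
  // Top 3 clarifications: **{Category}** ({role}): {question} → {answer}
  for (const c of clarifications.slice(0, 3)) {
    lines.push(`**${c.category}** (${c.role}): ${c.question} → ${c.answer}`);
  }
  if (perspectives) {
    lines.push(`Creative insight: ${perspectives.creative?.insights?.[0] ?? 'n/a'}`);
    lines.push(`Pragmatic blocker: ${perspectives.pragmatic?.blockers?.[0] ?? 'n/a'}`);
    lines.push(`Systematic pattern: ${perspectives.systematic?.patterns?.[0] ?? 'n/a'}`);
  }
  lines.push(`Session: ${session.session_id}`);
  return lines.join('\n');
}
```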
## Task Generation Strategy
### Task 1: Research & Validation
**Trigger**: `idea.main_challenges.length > 0`
- **Title**: "Research & Validate Approach"
- **Scope**: design
- **Action**: Research
- **Implementation**: Investigate blockers, review similar implementations, validate with team
- **Acceptance**: Blockers documented, feasibility assessed, approach validated
### Task 2: Design & Specification
**Trigger**: `idea.key_strengths.length > 0`
- **Title**: "Design & Create Specification"
- **Scope**: design
- **Action**: Design
- **Implementation**: Create design doc, define success criteria, plan phases
- **Acceptance**: Design complete, metrics defined, plan outlined
### Task 3+: Implementation Tasks
**Trigger**: `idea.next_steps[]`
- **Title**: From `next_steps[i]` (max 60 chars)
- **Scope**: Inferred from keywords (test→testing, api→backend, ui→frontend)
- **Action**: Detected from verbs (implement, create, update, fix, test, document)
- **Implementation**: Execute step + follow design + write tests
- **Acceptance**: Step implemented + tests passing + code reviewed
### Fallback Task
**Trigger**: No tasks generated from above
- **Title**: `idea.title`
- **Scope**: implementation
- **Action**: Implement
- **Generic implementation + acceptance criteria**
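A condensed sketch of this strategy (keyword detection is simplified to a few illustrative patterns):

```javascript
// Generate solution tasks from a brainstorm idea per the strategy above (simplified)
function generateTasks(idea) {
  const tasks = [];
  if (idea.main_challenges.length > 0) {
    tasks.push({ id: 'T1', title: 'Research & Validate Approach', scope: 'design', action: 'Research' });
  }
  if (idea.key_strengths.length > 0) {
    tasks.push({ id: `T${tasks.length + 1}`, title: 'Design & Create Specification', scope: 'design', action: 'Design' });
  }
  for (const step of idea.next_steps) {
    const scope = /test/i.test(step) ? 'testing' : /api/i.test(step) ? 'backend'
      : /ui/i.test(step) ? 'frontend' : 'implementation';
    const action = /fix/i.test(step) ? 'Fix' : /test/i.test(step) ? 'Test'
      : /document/i.test(step) ? 'Document' : 'Implement';
    tasks.push({ id: `T${tasks.length + 1}`, title: step.slice(0, 60), scope, action });
  }
  if (tasks.length === 0) {
    tasks.push({ id: 'T1', title: idea.title, scope: 'implementation', action: 'Implement' });
  }
  return tasks;
}
```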
## Priority Calculation
### Issue Priority (1-5)
```
idea.score: 0-10
priority = max(1, min(5, ceil((11 - score) / 2)))
Examples:
score 9-10 → priority 1 (critical)
score 7-8 → priority 2 (high)
score 5-6 → priority 3 (medium)
score 3-4 → priority 4 (low)
score 0-2 → priority 5 (lowest)
```
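The same mapping as a runnable check (assuming the band table above is the intended behavior):

```javascript
// Map idea.score (0-10) to issue priority (1-5), matching the bands above
function scoreToPriority(score) {
  return Math.max(1, Math.min(5, Math.ceil((11 - score) / 2)));
}
// scoreToPriority(9) === 1, scoreToPriority(7) === 2, scoreToPriority(5) === 3
// scoreToPriority(3) === 4, scoreToPriority(1) === 5
```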
### Task Priority (1-5)
- Research task: 1 (highest)
- Design task: 2
- Implementation tasks: 3 by default, decrement for later tasks
- Testing/documentation: 4-5
### Complexity Analysis
```
risk: main_challenges.length > 2 ? 'high' : 'medium'
impact: score >= 8 ? 'high' : score >= 6 ? 'medium' : 'low'
complexity: (main_challenges > 3 OR tasks > 5) ? 'high'
            : tasks > 3 ? 'medium' : 'low'
```
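The same derivation as a small sketch (thresholds copied from the block above):

```javascript
// Derive solution.analysis from the idea and the generated task list
function analyzeSolution(idea, tasks) {
  const challenges = idea.main_challenges.length;
  return {
    risk: challenges > 2 ? 'high' : 'medium',
    impact: idea.score >= 8 ? 'high' : idea.score >= 6 ? 'medium' : 'low',
    complexity: (challenges > 3 || tasks.length > 5) ? 'high'
      : tasks.length > 3 ? 'medium' : 'low'
  };
}
```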
## CLI Integration
### Issue Creation
```bash
# Uses heredoc to avoid shell escaping
ccw issue create << 'EOF'
{
"title": "...",
"context": "...",
"priority": 3,
"source": "brainstorm",
"labels": ["brainstorm", "creative", "feasibility-high"],
...
}
EOF
```
### Solution Binding
```bash
# Append solution to JSONL file
echo '{"id":"SOL-xxx","tasks":[...]}' >> .workflow/issues/solutions/{issue-id}.jsonl
# Bind to issue
ccw issue bind {issue-id} {solution-id}
# Update status
ccw issue update {issue-id} --status planned
```
## Error Handling
| Error | Message | Resolution |
|-------|---------|------------|
| Session not found | synthesis.json missing | Check session ID, list available sessions |
| No ideas | top_ideas array empty | Complete brainstorm workflow first |
| Invalid idea index | Index out of range | Check valid range 0 to N-1 |
| Issue creation failed | ccw issue create error | Verify CLI endpoint working |
| Solution binding failed | Bind error | Check issue exists, retry |
## Examples
### Interactive Mode
```bash
/issue:from-brainstorm SESSION="BS-rate-limiting-2025-01-28"
# Output:
# | # | Title | Score | Feasibility |
# |---|-------|-------|-------------|
# | 0 | Token Bucket Algorithm | 8.5 | High |
# | 1 | Sliding Window Counter | 7.2 | Medium |
# | 2 | Fixed Window | 6.1 | High |
# User selects: #0
# Result:
# ✓ Created issue: ISS-20250128-001
# ✓ Created solution: SOL-ISS-20250128-001-ab3d
# ✓ Bound solution to issue
# → Next: /issue:queue
```
### Auto Mode
```bash
/issue:from-brainstorm SESSION="BS-caching-2025-01-28" --auto
# Result:
# Auto-selected: Redis Cache Layer (Score: 9.2/10)
# ✓ Created issue: ISS-20250128-002
# ✓ Solution with 4 tasks
# → Status: planned
```
## Integration Flow
```
brainstorm-with-file
├─ synthesis.json
├─ perspectives.json
└─ .brainstorming/** (optional)
/issue:from-brainstorm ◄─── This command
├─ ISS-YYYYMMDD-NNN (enriched issue)
└─ SOL-{issue-id}-{uid} (structured solution)
/issue:queue
/parallel-dev-cycle
RA → EP → CD → VAS
```
## Session Files Reference
### Input Files
```
.workflow/.brainstorm/BS-{slug}-{date}/
├── synthesis.json # REQUIRED - Top ideas with scores
├── perspectives.json # OPTIONAL - Multi-CLI insights
├── brainstorm.md # Reference only
└── .brainstorming/ # OPTIONAL - Synthesis artifacts
├── system-architect/
│ └── analysis.md # Contains clarifications + enhancements
├── api-designer/
│ └── analysis.md
└── ...
```
### Output Files
```
.workflow/issues/
├── solutions/
│ └── ISS-YYYYMMDD-001.jsonl # Created solution (JSONL)
└── (managed by ccw issue CLI)
```
## Related Commands
- `/workflow:brainstorm-with-file` - Generate brainstorm sessions
- `/workflow:brainstorm:synthesis` - Add clarifications to brainstorm
- `/issue:new` - Create issues from GitHub or text
- `/issue:plan` - Generate solutions via exploration
- `/issue:queue` - Form execution queue
- `/issue:execute` - Execute with parallel-dev-cycle
- `ccw issue status <id>` - View issue
- `ccw issue solution <id>` - View solution
View File
@@ -1,10 +1,14 @@
---
name: new
description: Create structured issue from GitHub URL or text description
argument-hint: "<github-url | text-description> [--priority 1-5]"
argument-hint: "[-y|--yes] <github-url | text-description> [--priority 1-5]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*), mcp__ace-tool__search_context(*)
---
## Auto Mode
When `--yes` or `-y`: Skip clarification questions, create issue with inferred details.
# Issue New Command (/issue:new)
## Core Principle
@@ -409,5 +413,4 @@ function parseMarkdownBody(body) {
## Related Commands
- `/issue:plan` - Plan solution for issue
- `/issue:plan` - Plan solution for issue
View File
@@ -1,10 +1,14 @@
---
name: plan
description: Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)
argument-hint: "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3] "
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Bash(*), Read(*), Write(*)
argument-hint: "[-y|--yes] --all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]"
allowed-tools: TodoWrite(*), Task(*), Skill(*), AskUserQuestion(*), Bash(*), Read(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-bind solutions without confirmation, use recommended settings.
# Issue Plan Command (/issue:plan)
## Overview
@@ -55,11 +59,11 @@ Unified planning command using **issue-plan-agent** that combines exploration an
## Execution Process
```
Phase 1: Issue Loading
Phase 1: Issue Loading & Intelligent Grouping
├─ Parse input (single, comma-separated, or --all-pending)
├─ Fetch issue metadata (ID, title, tags)
├─ Validate issues exist (create if needed)
└─ Group by similarity (shared tags or title keywords, max 3 per batch)
└─ Intelligent grouping via Gemini (semantic similarity, max 3 per batch)
Phase 2: Unified Explore + Plan (issue-plan-agent)
├─ Launch issue-plan-agent per batch
@@ -119,46 +123,11 @@ if (useAllPending) {
}
// Note: Agent fetches full issue content via `ccw issue status <id> --json`
// Semantic grouping via Gemini CLI (max 4 issues per group)
async function groupBySimilarityGemini(issues) {
const issueSummaries = issues.map(i => ({
id: i.id, title: i.title, tags: i.tags
}));
// Intelligent grouping: Analyze issues by title/tags, group semantically similar ones
// Strategy: Same module/component, related bugs, feature clusters
// Constraint: Max ${batchSize} issues per batch
const prompt = `
PURPOSE: Group similar issues by semantic similarity for batch processing; maximize within-group coherence; max 4 issues per group
TASK: • Analyze issue titles/tags semantically • Identify functional/architectural clusters • Assign each issue to one group
MODE: analysis
CONTEXT: Issue metadata only
EXPECTED: JSON with groups array, each containing max 4 issue_ids, theme, rationale
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Each issue in exactly one group | Max 4 issues per group | Balance group sizes
INPUT:
${JSON.stringify(issueSummaries, null, 2)}
OUTPUT FORMAT:
{"groups":[{"group_id":1,"theme":"...","issue_ids":["..."],"rationale":"..."}],"ungrouped":[]}
`;
const taskId = Bash({
command: `ccw cli -p "${prompt}" --tool gemini --mode analysis`,
run_in_background: true, timeout: 600000
});
const output = TaskOutput({ task_id: taskId, block: true });
// Extract JSON from potential markdown code blocks
function extractJsonFromMarkdown(text) {
const jsonMatch = text.match(/```json\s*\n([\s\S]*?)\n```/) ||
text.match(/```\s*\n([\s\S]*?)\n```/);
return jsonMatch ? jsonMatch[1] : text;
}
const result = JSON.parse(extractJsonFromMarkdown(output));
return result.groups.map(g => g.issue_ids.map(id => issues.find(i => i.id === id)));
}
const batches = await groupBySimilarityGemini(issues);
console.log(`Processing ${issues.length} issues in ${batches.length} batch(es) (max 4 issues/agent)`);
console.log(`Processing ${issues.length} issues in ${batches.length} batch(es)`);
TodoWrite({
todos: batches.map((_, i) => ({
@@ -195,15 +164,31 @@ ${issueList}
### Workflow
1. Fetch issue details: ccw issue status <id> --json
2. Load project context files
3. Explore codebase (ACE semantic search)
4. Plan solution with tasks (schema: solution-schema.json)
5. **If github_url exists**: Add final task to comment on GitHub issue
6. Write solution to: .workflow/issues/solutions/{issue-id}.jsonl
7. Single solution → auto-bind; Multiple → return for selection
2. **Analyze failure history** (if issue.feedback exists):
- Extract failure details from issue.feedback (type='failure', stage='execute')
- Parse error_type, message, task_id, solution_id from content JSON
- Identify failure patterns: repeated errors, root causes, blockers
- **Constraint**: Avoid repeating failed approaches
3. Load project context files
4. Explore codebase (ACE semantic search)
5. Plan solution with tasks (schema: solution-schema.json)
- **If previous solution failed**: Reference failure analysis in solution.approach
- Add explicit verification steps to prevent same failure mode
6. **If github_url exists**: Add final task to comment on GitHub issue
7. Write solution to: .workflow/issues/solutions/{issue-id}.jsonl
8. **CRITICAL - Binding Decision**:
- Single solution → **MUST execute**: ccw issue bind <issue-id> <solution-id>
- Multiple solutions → Return pending_selection only (no bind)
### Failure-Aware Planning Rules
- **Extract failure patterns**: Parse issue.feedback where type='failure' and stage='execute'
- **Identify root causes**: Analyze error_type (test_failure, compilation, timeout, etc.)
- **Design alternative approach**: Create solution that addresses root cause
- **Add prevention steps**: Include explicit verification to catch same error earlier
- **Document lessons**: Reference previous failures in solution.approach
### Rules
- Solution ID format: SOL-{issue-id}-{seq}
- Solution ID format: SOL-{issue-id}-{uid} (uid: 4 random alphanumeric chars, e.g., a7x9)
- Single solution per issue → auto-bind via ccw issue bind
- Multiple solutions → register only, return pending_selection
- Tasks must have quantified acceptance.criteria
@@ -251,35 +236,55 @@ for (let i = 0; i < agentTasks.length; i += MAX_PARALLEL) {
}
agentResults.push(summary); // Store for Phase 3 conflict aggregation
// Verify binding for bound issues (agent should have executed bind)
for (const item of summary.bound || []) {
console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks)`);
const status = JSON.parse(Bash(`ccw issue status ${item.issue_id} --json`).trim());
if (status.bound_solution_id === item.solution_id) {
console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks)`);
} else {
// Fallback: agent failed to bind, execute here
Bash(`ccw issue bind ${item.issue_id} ${item.solution_id}`);
console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks) [recovered]`);
}
}
// Collect and notify pending selections
// Collect pending selections for Phase 3
for (const pending of summary.pending_selection || []) {
console.log(`${pending.issue_id}: ${pending.solutions.length} solutions → awaiting selection`);
pendingSelections.push(pending);
}
if (summary.conflicts?.length > 0) {
console.log(`⚠ Conflicts: ${summary.conflicts.length} detected (will resolve in Phase 3)`);
}
updateTodo(`Plan batch ${batchIndex + 1}`, 'completed');
}
}
```
### Phase 3: Conflict Resolution & Solution Selection
### Phase 3: Solution Selection (if pending)
**Conflict Handling:**
- Collect `conflicts` from all agent results
- Low/Medium severity → auto-resolve with `recommended_resolution`
- High severity → use `AskUserQuestion` to let user choose resolution
```javascript
// Handle multi-solution issues
for (const pending of pendingSelections) {
if (pending.solutions.length === 0) continue;
**Multi-Solution Selection:**
- If `pending_selection` contains issues with multiple solutions:
- Use `AskUserQuestion` to present options (solution ID + task count + description)
- Extract selected solution ID from user response
- Verify solution file exists, recover from payload if missing
- Bind selected solution via `ccw issue bind <issue-id> <solution-id>`
const options = pending.solutions.slice(0, 4).map(sol => ({
label: `${sol.id} (${sol.task_count} tasks)`,
description: sol.description || sol.approach || 'No description'
}));
const answer = AskUserQuestion({
questions: [{
question: `Issue ${pending.issue_id}: which solution to bind?`,
header: pending.issue_id,
options: options,
multiSelect: false
}]
});
const selected = answer[Object.keys(answer)[0]];
if (!selected || selected === 'Other') continue;
const solId = selected.split(' ')[0];
Bash(`ccw issue bind ${pending.issue_id} ${solId}`);
console.log(`${pending.issue_id}: ${solId} bound`);
}
```
### Phase 4: Summary
View File
@@ -1,10 +1,14 @@
---
name: queue
description: Form execution queue from bound solutions using issue-queue-agent (solution-level)
argument-hint: "[--queues <n>] [--issue <id>]"
argument-hint: "[-y|--yes] [--queues <n>] [--issue <id>]"
allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm queue formation, use recommended conflict resolutions.
# Issue Queue Command (/issue:queue)
## Overview
@@ -28,12 +32,13 @@ Queue formation command using **issue-queue-agent** that analyzes all bound solu
| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| List issues (brief) | `ccw issue list --status planned --brief` | `Read('issues.jsonl')` |
| **Batch solutions (NEW)** | `ccw issue solutions --status planned --brief` | Loop `ccw issue solution <id>` |
| List queue (brief) | `ccw issue queue --brief` | `Read('queues/*.json')` |
| Read issue details | `ccw issue status <id> --json` | `Read('issues.jsonl')` |
| Get next item | `ccw issue next --json` | `Read('queues/*.json')` |
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
| Sync from queue | `ccw issue update --from-queue` | Direct file edit |
| **Read solution (brief)** | `ccw issue solution <id> --brief` | `Read('solutions/*.jsonl')` |
| Read solution (single) | `ccw issue solution <id> --brief` | `Read('solutions/*.jsonl')` |
**Output Options**:
- `--brief`: JSON with minimal fields (id, status, counts)
@@ -65,9 +70,13 @@ Queue formation command using **issue-queue-agent** that analyzes all bound solu
--queues <n> Number of parallel queues (default: 1)
--issue <id> Form queue for specific issue only
--append <id> Append issue to active queue (don't create new)
--force Skip active queue check, always create new queue
# CLI subcommands (ccw issue queue ...)
ccw issue queue list List all queues with status
ccw issue queue add <issue-id> Add issue to queue (interactive if active queue exists)
ccw issue queue add <issue-id> -f Add to new queue without prompt (force)
ccw issue queue merge <src> --queue <target> Merge source queue into target queue
ccw issue queue switch <queue-id> Switch active queue
ccw issue queue archive Archive current queue
ccw issue queue delete <queue-id> Delete queue from history
@@ -92,7 +101,7 @@ Phase 2-4: Agent-Driven Queue Formation (issue-queue-agent)
│ ├─ Build dependency DAG from conflicts
│ ├─ Calculate semantic priority per solution
│ └─ Assign execution groups (parallel/sequential)
└─ Each agent writes: queue JSON + index update
└─ Each agent writes: queue JSON + index update (NOT active yet)
Phase 5: Conflict Clarification (if needed)
├─ Collect `clarifications` arrays from all agents
@@ -102,7 +111,24 @@ Phase 5: Conflict Clarification (if needed)
Phase 6: Status Update & Summary
├─ Update issue statuses to 'queued'
└─ Display queue summary (N queues), next step: /issue:execute
└─ Display new queue summary (N queues)
Phase 7: Active Queue Check & Decision (REQUIRED)
├─ Read queue index: ccw issue queue list --brief
├─ Get generated queue ID from agent output
├─ If NO active queue exists:
│ ├─ Set generated queue as active_queue_id
│ ├─ Update index.json
│ └─ Display: "Queue created and activated"
└─ If active queue exists with items:
├─ Display both queues to user
├─ Use AskUserQuestion to prompt:
│ ├─ "Use new queue (keep existing)" → Set new as active, keep old inactive
│ ├─ "Merge: add new items to existing" → Merge new → existing, delete new
│ ├─ "Merge: add existing items to new" → Merge existing → new, archive old
│ └─ "Cancel" → Delete new queue, keep existing active
└─ Execute chosen action
```
## Implementation
@@ -110,24 +136,23 @@ Phase 6: Status Update & Summary
### Phase 1: Solution Loading & Distribution
**Data Loading:**
- Use `ccw issue list --status planned --brief` to get planned issues with `bound_solution_id`
- If no planned issues found → display message, suggest `/issue:plan`
**Solution Brief Loading** (for each planned issue):
```bash
ccw issue solution <issue-id> --brief
# Returns: [{ solution_id, is_bound, task_count, files_touched[] }]
```
- Use `ccw issue solutions --status planned --brief` to get all planned issues with solutions in **one call**
- Returns: Array of `{ issue_id, solution_id, is_bound, task_count, files_touched[], priority }`
- If no bound solutions found → display message, suggest `/issue:plan`
**Build Solution Objects:**
```json
{
"issue_id": "ISS-xxx",
"solution_id": "SOL-ISS-xxx-1",
"task_count": 3,
"files_touched": ["src/auth.ts", "src/utils.ts"],
"priority": "medium"
```javascript
// Single CLI call replaces N individual queries
const result = Bash(`ccw issue solutions --status planned --brief`).trim();
const solutions = result ? JSON.parse(result) : [];
if (solutions.length === 0) {
console.log('No bound solutions found. Run /issue:plan first.');
return;
}
// solutions already in correct format:
// { issue_id, solution_id, is_bound, task_count, files_touched[], priority }
```
**Multi-Queue Distribution** (if `--queues > 1`):
@@ -306,6 +331,41 @@ ccw issue update <issue-id> --status queued
- Show unplanned issues (planned but NOT in queue)
- Show next step: `/issue:execute`
### Phase 7: Active Queue Check & Decision
**After agent completes Phase 1-6, check for active queue:**
```bash
ccw issue queue list --brief
```
**Decision:**
- If `active_queue_id` is null → `ccw issue queue switch <new-queue-id>` (activate new queue)
- If active queue exists → Use **AskUserQuestion** to prompt user
**AskUserQuestion:**
```javascript
AskUserQuestion({
questions: [{
question: "Active queue exists. How would you like to proceed?",
header: "Queue Action",
options: [
{ label: "Merge into existing queue", description: "Add new items to active queue, delete new queue" },
{ label: "Use new queue", description: "Switch to new queue, keep existing in history" },
{ label: "Cancel", description: "Delete new queue, keep existing active" }
],
multiSelect: false
}]
})
```
**Action Commands:**
| User Choice | Commands |
|-------------|----------|
| **Merge into existing** | `ccw issue queue merge <new-queue-id> --queue <active-queue-id>` then `ccw issue queue delete <new-queue-id>` |
| **Use new queue** | `ccw issue queue switch <new-queue-id>` |
| **Cancel** | `ccw issue queue delete <new-queue-id>` |
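Following the document's orchestration pseudocode conventions, a sketch of applying the chosen action (`newQueueId` and `activeQueueId` are placeholders resolved from `ccw issue queue list --brief`):

```javascript
// Apply the user's Phase 7 choice (newQueueId / activeQueueId are placeholders)
const choice = answer['Queue Action'];
if (choice.includes('Merge into existing')) {
  Bash(`ccw issue queue merge ${newQueueId} --queue ${activeQueueId}`);
  Bash(`ccw issue queue delete ${newQueueId}`);
} else if (choice.includes('Use new queue')) {
  Bash(`ccw issue queue switch ${newQueueId}`);
} else {
  // Cancel: keep the existing active queue, discard the new one
  Bash(`ccw issue queue delete ${newQueueId}`);
}
```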
## Storage Structure (Queue History)
@@ -360,6 +420,9 @@ ccw issue update <issue-id> --status queued
| User cancels clarification | Abort queue formation |
| **index.json not updated** | Auto-fix: Set active_queue_id to new queue |
| **Queue file missing solutions** | Abort with error, agent must regenerate |
| **User cancels queue add** | Display message, return without changes |
| **Merge with empty source** | Skip merge, display warning |
| **All items duplicate** | Skip merge, display "All items already exist" |
## Quality Checklist
View File
@@ -1,687 +0,0 @@
---
name: code-map-memory
description: 3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)
argument-hint: "\"feature-keyword\" [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Code Flow Mapping Generator
## Overview
**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates code flow analysis to specialized cli-explore-agent. Orchestrator transforms agent's JSON analysis into Mermaid documentation.
**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase.
**Execution Paths**:
- **Full Path**: All 3 phases (no existing codemap OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 3 (existing codemap found AND no `--regenerate` flag)
- **Phase 3 Always Executes**: SKILL index is always generated or updated
**Agent Responsibility** (cli-explore-agent):
- Deep code flow analysis using dual-source strategy (Bash + Gemini CLI)
- Returns structured JSON with architecture, functions, data flow, conditionals, patterns
- NO file writing - analysis only
**Orchestrator Responsibility**:
- Provides feature keyword and analysis scope to agent
- Transforms agent's JSON into Mermaid-enriched markdown documentation
- Writes all files (5 docs + metadata.json + SKILL.md)
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **Feature-Specific SKILL**: Each feature creates independent `.claude/skills/codemap-{feature}/` package
3. **Specialized Agent**: Phase 2a uses cli-explore-agent for professional code analysis (Deep Scan mode)
4. **Orchestrator Documentation**: Phase 2b transforms agent JSON into Mermaid markdown files
5. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
6. **No User Prompts**: Never ask user questions or wait for input between phases
7. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
8. **Multi-Level Detail**: Generate 4 levels: architecture → function → data → conditional
---
## 3-Phase Execution
### Phase 1: Parse Feature Keyword & Check Existing
**Goal**: Normalize feature keyword, check existing codemap, prepare for analysis
**Step 1: Parse Feature Keyword**
```bash
# Get feature keyword from argument
FEATURE_KEYWORD="$1"
# Normalize: lowercase, spaces to hyphens
normalized_feature=$(echo "$FEATURE_KEYWORD" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | tr '_' '-')
# Example: "User Authentication" → "user-authentication"
# Example: "支付处理" → "支付处理" (keep non-ASCII)
```
**Step 2: Set Tool Preference**
```bash
# Default to gemini unless --tool specified
TOOL="${tool_flag:-gemini}"
```
**Step 3: Check Existing Codemap**
```bash
# Define codemap directory
CODEMAP_DIR=".claude/skills/codemap-${normalized_feature}"
# Check if codemap exists
bash(test -d "$CODEMAP_DIR" && echo "exists" || echo "not_exists")
# Count existing files
bash(find "$CODEMAP_DIR" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Step 4: Skip Decision**
```javascript
if (existing_files > 0 && !regenerate_flag) {
SKIP_GENERATION = true
message = "Codemap already exists, skipping Phase 2. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
bash(rm -rf "$CODEMAP_DIR")
SKIP_GENERATION = false
message = "Regenerating codemap from scratch."
} else {
SKIP_GENERATION = false
message = "No existing codemap found, generating new code flow analysis."
}
```
**Output Variables**:
- `FEATURE_KEYWORD`: Original feature keyword
- `normalized_feature`: Normalized feature name for directory
- `CODEMAP_DIR`: `.claude/skills/codemap-{feature}`
- `TOOL`: CLI tool to use (gemini or qwen)
- `SKIP_GENERATION`: Boolean - whether to skip Phase 2
**TodoWrite**:
- If skipping: Mark phase 1 completed, phase 2 completed, phase 3 in_progress
- If not skipping: Mark phase 1 completed, phase 2 in_progress
---
### Phase 2: Code Flow Analysis & Documentation Generation
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Use cli-explore-agent for professional code analysis, then orchestrator generates Mermaid documentation
**Architecture**: Phase 2a (Agent Analysis) → Phase 2b (Orchestrator Documentation)
---
#### Phase 2a: cli-explore-agent Analysis
**Purpose**: Leverage specialized cli-explore-agent for deep code flow analysis
**Agent Task Specification**:
```
Task(
subagent_type: "cli-explore-agent",
description: "Analyze code flow: {FEATURE_KEYWORD}",
prompt: "
Perform Deep Scan analysis for feature: {FEATURE_KEYWORD}
**Analysis Mode**: deep-scan (Dual-source: Bash structural scan + Gemini semantic analysis)
**Analysis Objectives**:
1. **Module Architecture**: Identify high-level module organization, interactions, and entry points
2. **Function Call Chains**: Trace execution paths, call sequences, and parameter flows
3. **Data Transformations**: Map data structure changes and transformation stages
4. **Conditional Paths**: Document decision trees, branches, and error handling strategies
5. **Design Patterns**: Discover architectural patterns and extract design intent
**Scope**:
- Feature: {FEATURE_KEYWORD}
- CLI Tool: {TOOL} (gemini-2.5-pro or qwen coder-model)
- File Discovery: MCP Code Index (preferred) + rg fallback
- Target: 5-15 most relevant files
**MANDATORY FIRST STEP**:
Read: ~/.claude/workflows/cli-templates/schemas/codemap-json-schema.json
**Output**: Return JSON following schema exactly. NO FILE WRITING - return JSON analysis only.
**Critical Requirements**:
- Use Deep Scan mode: Bash (Phase 1 - precise locations) + Gemini CLI (Phase 2 - semantic understanding) + Synthesis (Phase 3 - merge with attribution)
- Focus exclusively on {FEATURE_KEYWORD} feature flow
- Include file:line references for ALL findings
- Extract design intent from code structure and comments
- NO FILE WRITING - return JSON analysis only
- Handle tool failures gracefully (Gemini → Qwen fallback, MCP → rg fallback)
"
)
```
**Agent Output**: JSON analysis result with architecture, functions, data flow, conditionals, and patterns
---
#### Phase 2b: Orchestrator Documentation Generation
**Purpose**: Transform cli-explore-agent JSON into Mermaid-enriched documentation
**Input**: Agent's JSON analysis result
**Process**:
1. **Parse Agent Analysis**:
```javascript
const analysis = JSON.parse(agentResult)
const { feature, files_analyzed, architecture, function_calls, data_flow, conditional_logic, design_patterns } = analysis
```
2. **Generate Mermaid Diagrams from Structured Data**:
**a) architecture-flow.md** (~3K tokens):
```javascript
// Convert architecture.modules + architecture.interactions → Mermaid graph TD
const architectureMermaid = `
graph TD
${architecture.modules.map(m => ` ${m.name}[${m.name}]`).join('\n')}
${architecture.interactions.map(i => ` ${i.from} -->|${i.type}| ${i.to}`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/architecture-flow.md`,
content: `---
feature: ${feature}
level: architecture
detail: high-level module interactions
---
# Architecture Flow: ${feature}
## Overview
${architecture.overview}
## Module Architecture
${architecture.modules.map(m => `### ${m.name}\n- **File**: ${m.file}\n- **Role**: ${m.responsibility}\n- **Dependencies**: ${m.dependencies.join(', ')}`).join('\n\n')}
## Flow Diagram
\`\`\`mermaid
${architectureMermaid}
\`\`\`
## Key Interactions
${architecture.interactions.map(i => `- **${i.from} → ${i.to}**: ${i.description}`).join('\n')}
## Entry Points
${architecture.entry_points.map(e => `- **${e.function}** (${e.file}): ${e.description}`).join('\n')}
`
})
```
**b) function-calls.md** (~5K tokens):
```javascript
// Convert function_calls.sequences → Mermaid sequenceDiagram
const sequenceMermaid = `
sequenceDiagram
${function_calls.sequences.map(s => ` ${s.from}->>${s.to}: ${s.method}`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/function-calls.md`,
content: `---
feature: ${feature}
level: function
detail: function-level call sequences
---
# Function Call Chains: ${feature}
## Call Sequence Diagram
\`\`\`mermaid
${sequenceMermaid}
\`\`\`
## Detailed Call Chains
${function_calls.call_chains.map(chain => `
### Chain ${chain.chain_id}: ${chain.description}
${chain.sequence.map(fn => `- **${fn.function}** (${fn.file})\n - Calls: ${fn.calls.join(', ')}`).join('\n')}
`).join('\n')}
## Parameters & Returns
${function_calls.sequences.map(s => `- **${s.method}** → Returns: ${s.returns || 'void'}`).join('\n')}
`
})
```
**c) data-flow.md** (~4K tokens):
```javascript
// Convert data_flow.transformations → Mermaid flowchart LR
const dataFlowMermaid = `
flowchart LR
${data_flow.transformations.map((t, i) => ` Stage${i}[${t.from}] -->|${t.transformer}| Stage${i+1}[${t.to}]`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/data-flow.md`,
content: `---
feature: ${feature}
level: data
detail: data structure transformations
---
# Data Flow: ${feature}
## Data Transformation Diagram
\`\`\`mermaid
${dataFlowMermaid}
\`\`\`
## Data Structures
${data_flow.structures.map(s => `### ${s.name} (${s.stage})\n\`\`\`json\n${JSON.stringify(s.shape, null, 2)}\n\`\`\``).join('\n\n')}
## Transformations
${data_flow.transformations.map(t => `- **${t.from} → ${t.to}** via \`${t.transformer}\` (${t.file})`).join('\n')}
`
})
```
**d) conditional-paths.md** (~4K tokens):
```javascript
// Convert conditional_logic.branches → Mermaid flowchart TD
const conditionalMermaid = `
flowchart TD
Start[Entry Point]
${conditional_logic.branches.map((b, i) => `
Start --> Check${i}{${b.condition}}
Check${i} -->|Yes| Path${i}A[${b.true_path}]
Check${i} -->|No| Path${i}B[${b.false_path}]
`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/conditional-paths.md`,
content: `---
feature: ${feature}
level: conditional
detail: decision trees and error paths
---
# Conditional Paths: ${feature}
## Decision Tree
\`\`\`mermaid
${conditionalMermaid}
\`\`\`
## Branch Conditions
${conditional_logic.branches.map(b => `- **${b.condition}** (${b.file})\n - True: ${b.true_path}\n - False: ${b.false_path}`).join('\n')}
## Error Handling
${conditional_logic.error_handling.map(e => `- **${e.error_type}**: Handler \`${e.handler}\` (${e.file}) - Recovery: ${e.recovery}`).join('\n')}
`
})
```
**e) complete-flow.md** (~8K tokens):
```javascript
// Integrate all Mermaid diagrams
Write({
file_path: `${CODEMAP_DIR}/complete-flow.md`,
content: `---
feature: ${feature}
level: complete
detail: integrated multi-level view
---
# Complete Flow: ${feature}
## Integrated Flow Diagram
\`\`\`mermaid
graph TB
subgraph Architecture
${architecture.modules.map(m => ` ${m.name}[${m.name}]`).join('\n')}
end
subgraph "Function Calls"
${function_calls.call_chains[0]?.sequence.map(fn => ` ${fn.function}`).join('\n') || ''}
end
subgraph "Data Flow"
${data_flow.structures.map(s => ` ${s.name}[${s.name}]`).join('\n')}
end
\`\`\`
## Complete Trace
[Comprehensive end-to-end documentation combining all analysis layers]
## Design Patterns Identified
${design_patterns.map(p => `- **${p.pattern}** in ${p.location}: ${p.description}`).join('\n')}
## Recommendations
${analysis.recommendations.map(r => `- ${r}`).join('\n')}
## Cross-References
- [Architecture Flow](./architecture-flow.md) - High-level module structure
- [Function Calls](./function-calls.md) - Detailed call chains
- [Data Flow](./data-flow.md) - Data transformation stages
- [Conditional Paths](./conditional-paths.md) - Decision trees and error handling
`
})
```
3. **Write metadata.json**:
```javascript
Write({
file_path: `${CODEMAP_DIR}/metadata.json`,
content: JSON.stringify({
feature: feature,
normalized_name: normalized_feature,
generated_at: new Date().toISOString(),
tool_used: analysis.analysis_metadata.tool_used,
files_analyzed: files_analyzed.map(f => f.file),
analysis_summary: {
total_files: files_analyzed.length,
modules_traced: architecture.modules.length,
functions_traced: function_calls.call_chains.reduce((sum, c) => sum + c.sequence.length, 0),
patterns_discovered: design_patterns.length
}
}, null, 2)
})
```
4. **Report Phase 2 Completion**:
```
Phase 2 Complete: Code flow analysis and documentation generated
- Agent Analysis: cli-explore-agent with {TOOL}
- Files Analyzed: {count}
- Documentation Generated: 5 markdown files + metadata.json
- Location: {CODEMAP_DIR}
```
**Completion Criteria**:
- cli-explore-agent task completed successfully with JSON result
- 5 documentation files written with valid Mermaid diagrams
- metadata.json written with analysis summary
- All files properly formatted and cross-referenced
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
---
### Phase 3: Generate SKILL.md Index
**Note**: This phase **ALWAYS executes** - generates or updates the SKILL index.
**Goal**: Read generated flow documentation and create SKILL.md index with progressive loading
**Steps**:
1. **Verify Generated Files**:
```bash
bash(find "{CODEMAP_DIR}" -name "*.md" -type f | sort)
```
2. **Read metadata.json**:
```javascript
Read({CODEMAP_DIR}/metadata.json)
// Extract: feature, normalized_name, files_analyzed, analysis_summary
```
3. **Read File Headers** (optional, first 30 lines):
```javascript
Read({CODEMAP_DIR}/architecture-flow.md, limit: 30)
Read({CODEMAP_DIR}/function-calls.md, limit: 30)
// Extract overview and diagram counts
```
4. **Generate SKILL.md Index**:
Template structure:
```yaml
---
name: codemap-{normalized_feature}
description: Code flow mapping for {FEATURE_KEYWORD} feature (located at {project_path}). Load this SKILL when analyzing, tracing, or understanding {FEATURE_KEYWORD} execution flow, especially when no relevant context exists in memory.
version: 1.0.0
generated_at: {ISO_TIMESTAMP}
---
# Code Flow Map: {FEATURE_KEYWORD}
## Feature: `{FEATURE_KEYWORD}`
**Analysis Date**: {DATE}
**Tool Used**: {TOOL}
**Files Analyzed**: {COUNT}
## Progressive Loading
### Level 0: Quick Overview (~2K tokens)
- [Architecture Flow](./architecture-flow.md) - High-level module interactions
### Level 1: Core Flows (~10K tokens)
- [Architecture Flow](./architecture-flow.md) - Module architecture
- [Function Calls](./function-calls.md) - Function call chains
### Level 2: Complete Analysis (~20K tokens)
- [Architecture Flow](./architecture-flow.md)
- [Function Calls](./function-calls.md)
- [Data Flow](./data-flow.md) - Data transformations
### Level 3: Deep Dive (~30K tokens)
- [Architecture Flow](./architecture-flow.md)
- [Function Calls](./function-calls.md)
- [Data Flow](./data-flow.md)
- [Conditional Paths](./conditional-paths.md) - Branches and error handling
- [Complete Flow](./complete-flow.md) - Integrated comprehensive view
## Usage
Load this SKILL package when:
- Analyzing {FEATURE_KEYWORD} implementation
- Tracing execution flow for debugging
- Understanding code dependencies
- Planning refactoring or enhancements
## Analysis Summary
- **Modules Traced**: {modules_traced}
- **Functions Traced**: {functions_traced}
- **Files Analyzed**: {total_files}
## Mermaid Diagrams Included
- Architecture flow diagram (graph TD)
- Function call sequence diagram (sequenceDiagram)
- Data transformation flowchart (flowchart LR)
- Conditional decision tree (flowchart TD)
- Complete integrated diagram (graph TB)
```
5. **Write SKILL.md**:
```javascript
Write({
file_path: `{CODEMAP_DIR}/SKILL.md`,
content: generatedIndexMarkdown
})
```
**Completion Criteria**:
- SKILL.md index written
- All documentation files verified
- Progressive loading levels (0-3) properly structured
- Mermaid diagram references included
**TodoWrite**: Mark phase 3 completed
**Final Report**:
```
Code Flow Mapping Complete
Feature: {FEATURE_KEYWORD}
Location: .claude/skills/codemap-{normalized_feature}/
Files Generated:
- SKILL.md (index)
- architecture-flow.md (with Mermaid diagram)
- function-calls.md (with Mermaid sequence diagram)
- data-flow.md (with Mermaid flowchart)
- conditional-paths.md (with Mermaid decision tree)
- complete-flow.md (with integrated Mermaid diagram)
- metadata.json
Analysis:
- Files analyzed: {count}
- Modules traced: {count}
- Functions traced: {count}
Usage: Skill(command: "codemap-{normalized_feature}")
```
---
## Implementation Details
### TodoWrite Patterns
**Initialization** (Before Phase 1):
```javascript
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "in_progress", "activeForm": "Parsing feature keyword"},
{"content": "Agent analyzes code flow and generates files", "status": "pending", "activeForm": "Analyzing code flow"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
]})
```
**Full Path** (SKIP_GENERATION = false):
```javascript
// After Phase 1
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "in_progress", ...},
{"content": "Generate SKILL.md index", "status": "pending", ...}
]})
// After Phase 2
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
// After Phase 3
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "completed", ...}
]})
```
**Skip Path** (SKIP_GENERATION = true):
```javascript
// After Phase 1 (skip Phase 2)
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...}, // Skipped
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
```
### Execution Flow
**Full Path**:
```
User → TodoWrite Init → Phase 1 (parse) → Phase 2 (agent analyzes) → Phase 3 (write index) → Report
```
**Skip Path**:
```
User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
```
### Error Handling
**Phase 1 Errors**:
- Empty feature keyword: Report error, ask user to provide feature description
- Invalid characters: Normalize and continue
**Phase 2 Errors (Agent)**:
- Agent task fails: Retry once, report if fails again
- No files discovered: Warn user, ask for more specific feature keyword
- CLI failures: Agent handles internally with retries
- Invalid Mermaid syntax: Agent validates before writing
**Phase 3 Errors**:
- Write failures: Report which files failed
- Missing files: Note in SKILL.md, suggest regeneration
---
## Parameters
```bash
/memory:code-map-memory "feature-keyword" [--regenerate] [--tool <gemini|qwen>]
```
**Arguments**:
- **"feature-keyword"**: Feature or flow to analyze (required)
- Examples: `"user authentication"`, `"payment processing"`, `"数据导入流程"`
- Can be English, Chinese, or mixed
- Spaces and underscores normalized to hyphens
- **--regenerate**: Force regenerate existing codemap (deletes and recreates)
- **--tool**: CLI tool for analysis (default: gemini)
- `gemini`: Comprehensive flow analysis with gemini-2.5-pro
- `qwen`: Alternative with coder-model
---
## Examples
**Generated File Structure** (for all examples):
```
.claude/skills/codemap-{feature}/
├── SKILL.md # Index (Phase 3)
├── architecture-flow.md # Agent (Phase 2) - High-level flow
├── function-calls.md # Agent (Phase 2) - Function chains
├── data-flow.md # Agent (Phase 2) - Data transformations
├── conditional-paths.md # Agent (Phase 2) - Branches & errors
├── complete-flow.md # Agent (Phase 2) - Integrated view
└── metadata.json # Agent (Phase 2)
```
### Example 1: User Authentication Flow
```bash
/memory:code-map-memory "user authentication"
```
**Workflow**:
1. Phase 1: Normalizes to "user-authentication", checks existing codemap
2. Phase 2: Agent discovers auth-related files, executes CLI analysis, generates 5 flow docs with Mermaid
3. Phase 3: Generates SKILL.md index with progressive loading
**Output**: `.claude/skills/codemap-user-authentication/` with 6 files + metadata
### Example 3: Regenerate with Qwen
```bash
/memory:code-map-memory "payment processing" --regenerate --tool qwen
```
**Workflow**:
1. Phase 1: Deletes existing codemap due to --regenerate
2. Phase 2: Agent uses qwen with coder-model for fresh analysis
3. Phase 3: Generates updated SKILL.md
---
## Architecture
```
code-map-memory (orchestrator)
├─ Phase 1: Parse & Check (bash commands, skip decision)
├─ Phase 2: Code Analysis & Documentation (skippable)
│ ├─ Phase 2a: cli-explore-agent Analysis
│ │ └─ Deep Scan: Bash structural + Gemini semantic → JSON
│ └─ Phase 2b: Orchestrator Documentation
│ └─ Transform JSON → 5 Mermaid markdown files + metadata.json
└─ Phase 3: Write SKILL.md (index generation, always runs)
Output: .claude/skills/codemap-{feature}/
```
View File
@@ -1,615 +0,0 @@
---
name: docs
description: Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs
argument-hint: "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]"
---
# Documentation Workflow (/memory:docs)
## Overview
Lightweight planner that analyzes project structure, decomposes documentation work into tasks, and generates execution plans. Does NOT generate documentation content itself - delegates to doc-generator agent.
**Execution Strategy**:
- **Dynamic Task Grouping**: Level 1 tasks grouped by top-level directories with document count limit
- **Primary constraint**: Each task generates ≤10 documents (API.md + README.md count)
- **Optimization goal**: Prefer grouping 2 top-level directories per task for context sharing
- **Conflict resolution**: If 2 dirs exceed 10 docs, reduce to 1 dir/task; if 1 dir exceeds 10 docs, split by subdirectories
- **Context benefit**: Same-task directories analyzed together via single Gemini call
- **Parallel Execution**: Multiple Level 1 tasks execute concurrently for faster completion
- **Pre-computed Analysis**: Phase 2 performs unified analysis once, stored in `.process/` for reuse
- **Efficient Data Loading**: All existing docs loaded once in Phase 2, shared across tasks
**Path Mirroring**: Documentation structure mirrors source code under `.workflow/docs/{project_name}/`
- Example: `my_app/src/core/` → `.workflow/docs/my_app/src/core/API.md`
**Two Execution Modes**:
- **Default (Agent Mode)**: CLI analyzes in `pre_analysis` (MODE=analysis), agent writes docs
- **--cli-execute (CLI Mode)**: CLI generates docs in `implementation_approach` (MODE=write), agent executes CLI commands
## Path Mirroring Strategy
**Principle**: Documentation structure **mirrors** source code structure under project-specific directory.
| Source Path | Project Name | Documentation Path |
|------------|--------------|-------------------|
| `my_app/src/core/` | `my_app` | `.workflow/docs/my_app/src/core/API.md` |
| `my_app/src/modules/auth/` | `my_app` | `.workflow/docs/my_app/src/modules/auth/API.md` |
| `another_project/lib/utils/` | `another_project` | `.workflow/docs/another_project/lib/utils/API.md` |
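The mapping is purely mechanical; a minimal sketch of it (illustrative only, the `mirrorDocPath` helper is not part of the workflow itself):
```javascript
// Sketch: map a source directory to its mirrored documentation path.
// Assumes projectName contains no regex metacharacters.
function mirrorDocPath(sourcePath, projectName, docFile = "API.md") {
  const relative = sourcePath
    .replace(new RegExp("^" + projectName + "/"), "") // drop the project prefix
    .replace(/\/$/, "");                              // drop a trailing slash
  return `.workflow/docs/${projectName}/${relative}/${docFile}`;
}

// mirrorDocPath("my_app/src/core/", "my_app") → ".workflow/docs/my_app/src/core/API.md"
```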
## Parameters
```bash
/memory:docs [path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]
```
- **path**: Source directory to analyze (default: current directory)
- Specifies the source code directory to be documented
- Documentation is generated in a separate `.workflow/docs/{project_name}/` directory at the workspace root, **not** within the source `path` itself
- The source path's structure is mirrored within the project-specific documentation folder
- Example: analyzing `src/modules` produces documentation at `.workflow/docs/{project_name}/src/modules/`
- **--mode**: Documentation generation mode (default: full)
- `full`: Complete documentation (modules + README + ARCHITECTURE + EXAMPLES + HTTP API)
- `partial`: Module documentation only (API.md + README.md)
- **--tool**: CLI tool selection (default: gemini)
- `gemini`: Comprehensive documentation, pattern recognition
- `qwen`: Architecture analysis, system design focus
- `codex`: Implementation validation, code quality
- **--cli-execute**: Enable CLI-based documentation generation (optional)
## Planning Workflow
### Phase 1: Initialize Session
```bash
# Get target path, project name, and root
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
```
```javascript
// Create docs session (type: docs)
SlashCommand(command="/workflow:session:start --type docs --new \"{project_name}-docs-{timestamp}\"")
// Parse output to get sessionId
```
```bash
# Update workflow-session.json with docs-specific fields
bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' .workflow/active/{sessionId}/workflow-session.json > tmp.json && mv tmp.json .workflow/active/{sessionId}/workflow-session.json)
```
### Phase 2: Analyze Structure
**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack.
**Commands** (collect data with simple bash):
```bash
# 1. Run folder analysis
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}')
# 2. Get top-level directories (first 2 path levels)
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}' | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
# 3. Find existing docs (if directory exists)
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null; fi)
# 4. Read existing docs content (if files exist)
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
```
**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:
```json
{
"metadata": {
"generated_at": "2025-11-03T16:57:30.469669",
"project_name": "project_name",
"project_root": "/path/to/project"
},
"folder_analysis": [
{"path": "./src/core", "type": "code", "code_count": 5, "dirs_count": 2}
],
"top_level_dirs": ["src/modules", "lib/core"],
"existing_docs": {
"file_list": [".workflow/docs/project/src/core/API.md"],
"content": "... existing docs content ..."
},
"unified_analysis": [],
"statistics": {
"total": 15,
"code": 8,
"navigation": 7,
"top_level": 3
}
}
```
**Then** use **Edit tool** to update `workflow-session.json` adding analysis field.
**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).
**Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.
### Phase 3: Detect Update Mode
**Commands**:
```bash
# Count existing docs from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
```
**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
- Add `"update_mode": "update"` if count > 0, else `"create"`
- Add `"existing_docs": <count>`
### Phase 4: Decompose Tasks
**Task Hierarchy** (Dynamic based on document count):
```
Small Projects (total ≤10 docs):
Level 1: IMPL-001 (all directories in single task, shared context)
Level 2: IMPL-002 (README, full mode only)
Level 3: IMPL-003 (ARCHITECTURE+EXAMPLES), IMPL-004 (HTTP API, optional)
Medium Projects (Example: 7 top-level dirs, 18 total docs):
Step 1: Count docs per top-level dir
├─ dir1: 3 docs, dir2: 4 docs → Group 1 (7 docs)
├─ dir3: 5 docs, dir4: 3 docs → Group 2 (8 docs)
├─ dir5: 2 docs → Group 3 (2 docs, can add more)
Step 2: Create tasks with ≤10 docs constraint
Level 1: IMPL-001 to IMPL-003 (parallel groups)
├─ IMPL-001: Group 1 (dir1 + dir2, 7 docs, shared context)
├─ IMPL-002: Group 2 (dir3 + dir4, 8 docs, shared context)
└─ IMPL-003: Group 3 (remaining dirs, ≤10 docs)
Level 2: IMPL-004 (README, depends on Level 1, full mode only)
Level 3: IMPL-005 (ARCHITECTURE+EXAMPLES), IMPL-006 (HTTP API, optional)
Large Projects (single dir >10 docs):
Step 1: Detect oversized directory
└─ src/modules/: 15 subdirs → 30 docs (exceeds limit)
Step 2: Split by subdirectories
Level 1: IMPL-001 to IMPL-003 (split oversized dir)
├─ IMPL-001: src/modules/ subdirs 1-5 (10 docs)
├─ IMPL-002: src/modules/ subdirs 6-10 (10 docs)
└─ IMPL-003: src/modules/ subdirs 11-15 (10 docs)
```
**Grouping Algorithm**:
1. Count total docs for each top-level directory
2. Try grouping 2 directories (optimization for context sharing)
3. If group exceeds 10 docs, split to 1 dir/task
4. If single dir exceeds 10 docs, split by subdirectories
5. Create parallel Level 1 tasks with ≤10 docs each
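A minimal sketch of the grouping algorithm above (illustrative only; folder entries follow the `doc-planning-data.json` shape shown later, and the greedy pairing is an assumed reading of step 2):
```javascript
// Sketch of the ≤10-docs grouping: count docs per top-level dir, then greedily
// pair adjacent directories when their combined count stays within the limit.
const DOC_LIMIT = 10;

function docCount(dir, folderAnalysis) {
  // Code folders produce API.md + README.md (2 docs); navigation folders README.md only (1 doc)
  return folderAnalysis
    .filter(f => f.path.replace(/^\.\//, "").startsWith(dir))
    .reduce((sum, f) => sum + (f.type === "code" ? 2 : 1), 0);
}

function groupDirectories(topLevelDirs, folderAnalysis) {
  const groups = [];
  const pending = [...topLevelDirs];
  while (pending.length > 0) {
    const dir = pending.shift();
    const count = docCount(dir, folderAnalysis);
    if (count > DOC_LIMIT) {
      // Oversized directory: would be split by subdirectories (omitted in this sketch)
      groups.push({ directories: [dir], doc_count: count, needs_split: true });
      continue;
    }
    // Try grouping with the next directory to share analysis context
    if (pending.length > 0) {
      const nextCount = docCount(pending[0], folderAnalysis);
      if (count + nextCount <= DOC_LIMIT) {
        groups.push({ directories: [dir, pending.shift()], doc_count: count + nextCount });
        continue;
      }
    }
    groups.push({ directories: [dir], doc_count: count });
  }
  return groups;
}
```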
**Commands**:
```bash
# 1. Get top-level directories from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
# 2. Get mode from workflow-session.json
bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
# 3. Check for HTTP API
bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo "NO_API")
```
**Data Processing**:
1. Count documents for each top-level directory (from folder_analysis):
- Code folders: 2 docs each (API.md + README.md)
- Navigation folders: 1 doc each (README.md only)
2. Apply grouping algorithm with ≤10 docs constraint:
- Try grouping 2 directories, calculate total docs
- If total ≤10 docs: create group
- If total >10 docs: split to 1 dir/group or subdivide
- If single dir >10 docs: split by subdirectories
3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
```json
"groups": {
"count": 3,
"assignments": [
{"group_id": "001", "directories": ["src/modules", "src/utils"], "doc_count": 5},
{"group_id": "002", "directories": ["lib/core"], "doc_count": 6},
{"group_id": "003", "directories": ["lib/helpers"], "doc_count": 3}
]
}
```
**Task ID Calculation**:
```bash
group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
readme_id=$((group_count + 1)) # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))
```
### Phase 5: Generate Task JSONs
**CLI Strategy**:
| Mode | cli_execute | Placement | CLI MODE | Approval Flag | Agent Role |
|------|-------------|-----------|----------|---------------|------------|
| **Agent** | false | pre_analysis | analysis | (none) | Generate docs in implementation_approach |
| **CLI** | true | implementation_approach | write | --mode write | Execute CLI commands, validate output |
**Command Patterns**:
- Gemini/Qwen: `ccw cli -p "..." --tool gemini --mode analysis --cd dir`
- CLI Mode: `ccw cli -p "..." --tool gemini --mode write --cd dir`
- Codex: `ccw cli -p "..." --tool codex --mode write --cd dir`
**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
2. Read group assignments from doc-planning-data.json
3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)
## Task Templates
### Level 1: Module Trees Group Task (Unified)
**Execution Model**: Each task processes assigned directory group (max 2 directories) using pre-analyzed data from Phase 2.
```json
{
"id": "IMPL-${group_number}",
"title": "Document Module Trees Group ${group_number}",
"status": "pending",
"meta": {
"type": "docs-tree-group",
"agent": "@doc-generator",
"tool": "gemini",
"cli_execute": false,
"group_number": "${group_number}",
"total_groups": "${total_groups}"
},
"context": {
"requirements": [
"Process directories from group ${group_number} in doc-planning-data.json",
"Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
"Code folders: API.md + README.md; Navigation folders: README.md only",
"Use pre-analyzed data from Phase 2 (no redundant analysis)"
],
"focus_paths": ["${group_dirs_from_json}"],
"precomputed_data": {
"phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
}
},
"flow_control": {
"pre_analysis": [
{
"step": "load_precomputed_data",
"action": "Load Phase 2 analysis and extract group directories",
"commands": [
"bash(cat ${session_dir}/.process/doc-planning-data.json)",
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
],
"output_to": "phase2_context",
"note": "Single JSON file contains all Phase 2 analysis results"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate documentation for assigned directory group",
"description": "Process directories in Group ${group_number} using pre-analyzed data",
"modification_points": [
"Read group directories from [phase2_context].groups.assignments[${group_number}].directories",
"For each directory: parse folder types from folder_analysis, parse structure from unified_analysis",
"Map source_path to .workflow/docs/${project_name}/{path}",
"Generate API.md for code folders, README.md for all folders",
"Preserve user modifications from [phase2_context].existing_docs.content"
],
"logic_flow": [
"phase2 = parse([phase2_context])",
"dirs = phase2.groups.assignments[${group_number}].directories",
"for dir in dirs:",
" folder_info = find(dir, phase2.folder_analysis)",
" outline = find(dir, phase2.unified_analysis)",
" if folder_info.type == 'code': generate API.md + README.md",
" elif folder_info.type == 'navigation': generate README.md only",
" write to .workflow/docs/${project_name}/{dir}/"
],
"depends_on": [],
"output": "group_module_docs"
}
],
"target_files": [
".workflow/docs/${project_name}/*/API.md",
".workflow/docs/${project_name}/*/README.md"
]
}
}
```
**CLI Execute Mode Note**: When `cli_execute=true`, add Step 2 in `implementation_approach`:
```json
{
"step": 2,
"title": "Batch generate documentation via CLI",
"command": "ccw cli -p 'PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure' --tool gemini --mode write --cd ${dirs_from_group}",
"depends_on": [1],
"output": "generated_docs"
}
```
### Level 2: Project README Task
**Task ID**: `IMPL-${readme_id}` (where `readme_id = group_count + 1`)
**Dependencies**: Depends on all Level 1 tasks completing.
```json
{
"id": "IMPL-${readme_id}",
"title": "Generate Project README",
"status": "pending",
"depends_on": ["IMPL-001", "...", "IMPL-${group_count}"],
"meta": {"type": "docs", "agent": "@doc-generator", "tool": "gemini", "cli_execute": false},
"flow_control": {
"pre_analysis": [
{
"step": "load_existing_readme",
"command": "bash(cat .workflow/docs/${project_name}/README.md 2>/dev/null || echo 'No existing README')",
"output_to": "existing_readme"
},
{
"step": "load_module_docs",
"command": "bash(find .workflow/docs/${project_name} -type f -name '*.md' ! -path '.workflow/docs/${project_name}/README.md' ! -path '.workflow/docs/${project_name}/ARCHITECTURE.md' ! -path '.workflow/docs/${project_name}/EXAMPLES.md' ! -path '.workflow/docs/${project_name}/api/*' | xargs cat)",
"output_to": "all_module_docs"
},
{
"step": "analyze_project",
"command": "bash(ccw cli -p \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\" --tool gemini --mode analysis)",
"output_to": "project_outline"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate project README",
"description": "Generate project README with navigation links while preserving user modifications",
"modification_points": [
"Parse [project_outline] and [all_module_docs]",
"Generate README structure with navigation links",
"Preserve [existing_readme] user modifications"
],
"logic_flow": ["Parse data", "Generate README with navigation", "Preserve modifications"],
"depends_on": [],
"output": "project_readme"
}
],
"target_files": [".workflow/docs/${project_name}/README.md"]
}
}
```
### Level 3: Architecture & Examples Documentation Task
**Task ID**: `IMPL-${arch_id}` (where `arch_id = group_count + 2`)
**Dependencies**: Depends on Level 2 (Project README).
```json
{
"id": "IMPL-${arch_id}",
"title": "Generate Architecture & Examples Documentation",
"status": "pending",
"depends_on": ["IMPL-${readme_id}"],
"meta": {"type": "docs", "agent": "@doc-generator", "tool": "gemini", "cli_execute": false},
"flow_control": {
"pre_analysis": [
{"step": "load_existing_docs", "command": "bash(cat .workflow/docs/${project_name}/{ARCHITECTURE,EXAMPLES}.md 2>/dev/null || echo 'No existing docs')", "output_to": "existing_arch_examples"},
{"step": "load_all_docs", "command": "bash(cat .workflow/docs/${project_name}/README.md && find .workflow/docs/${project_name} -type f -name '*.md' ! -path '*/README.md' ! -path '*/ARCHITECTURE.md' ! -path '*/EXAMPLES.md' ! -path '*/api/*' | xargs cat)", "output_to": "all_docs"},
{"step": "analyze_architecture", "command": "bash(ccw cli -p \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\" --tool gemini --mode analysis)", "output_to": "arch_examples_outline"}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate architecture and examples documentation",
"modification_points": [
"Parse [arch_examples_outline] and [all_docs]",
"Generate ARCHITECTURE.md (system design, patterns)",
"Generate EXAMPLES.md (code snippets, usage)",
"Preserve [existing_arch_examples] modifications"
],
"depends_on": [],
"output": "arch_examples_docs"
}
],
"target_files": [".workflow/docs/${project_name}/ARCHITECTURE.md", ".workflow/docs/${project_name}/EXAMPLES.md"]
}
}
```
### Level 4: HTTP API Documentation Task (Optional)
**Task ID**: `IMPL-${api_id}` (where `api_id = group_count + 3`)
**Dependencies**: Depends on Level 3.
```json
{
"id": "IMPL-${api_id}",
"title": "Generate HTTP API Documentation",
"status": "pending",
"depends_on": ["IMPL-${arch_id}"],
"meta": {"type": "docs", "agent": "@doc-generator", "tool": "gemini", "cli_execute": false},
"flow_control": {
"pre_analysis": [
{"step": "discover_api", "command": "bash(rg 'router\\.| @(Get|Post)' -g '*.{ts,js}')", "output_to": "endpoint_discovery"},
{"step": "load_existing_api", "command": "bash(cat .workflow/docs/${project_name}/api/README.md 2>/dev/null || echo 'No existing API docs')", "output_to": "existing_api_docs"},
{"step": "analyze_api", "command": "bash(ccw cli -p \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\" --tool gemini --mode analysis)", "output_to": "api_outline"}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate HTTP API documentation",
"modification_points": [
"Parse [api_outline] and [endpoint_discovery]",
"Document endpoints, request/response formats",
"Preserve [existing_api_docs] modifications"
],
"depends_on": [],
"output": "api_docs"
}
],
"target_files": [".workflow/docs/${project_name}/api/README.md"]
}
}
```
## Session Structure
**Unified Structure** (single JSON replaces multiple text files):
```
.workflow/active/
└── WFS-docs-{timestamp}/
├── workflow-session.json # Session metadata
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
│ └── doc-planning-data.json # All Phase 2 analysis data (replaces 7+ files)
└── .task/
├── IMPL-001.json # Small: all modules | Large: group 1
├── IMPL-00N.json # (Large only: groups 2-N)
├── IMPL-{N+1}.json # README (full mode)
├── IMPL-{N+2}.json # ARCHITECTURE+EXAMPLES (full mode)
└── IMPL-{N+3}.json # HTTP API (optional)
```
**doc-planning-data.json Structure**:
```json
{
"metadata": {
"generated_at": "2025-11-03T16:41:06+08:00",
"project_name": "Claude_dms3",
"project_root": "/d/Claude_dms3"
},
"folder_analysis": [
{"path": "./src/core", "type": "code", "code_count": 5, "dirs_count": 2},
{"path": "./src/utils", "type": "navigation", "code_count": 0, "dirs_count": 4}
],
"top_level_dirs": ["src/modules", "src/utils", "lib/core"],
"existing_docs": {
"file_list": [".workflow/docs/project/src/core/API.md"],
"content": "... concatenated existing docs ..."
},
"unified_analysis": [
{"module_path": "./src/core", "outline_summary": "Core functionality"}
],
"groups": {
"count": 4,
"assignments": [
{"group_id": "001", "directories": ["src/modules", "src/utils"], "doc_count": 6},
{"group_id": "002", "directories": ["lib/core", "lib/helpers"], "doc_count": 7}
]
},
"statistics": {
"total": 15,
"code": 8,
"navigation": 7,
"top_level": 3
}
}
```
**Workflow Session Structure** (workflow-session.json):
```json
{
"session_id": "WFS-docs-{timestamp}",
"project": "{project_name} documentation",
"status": "planning",
"timestamp": "2024-01-20T14:30:22+08:00",
"path": ".",
"target_path": "/path/to/project",
"project_root": "/path/to/project",
"project_name": "{project_name}",
"mode": "full",
"tool": "gemini",
"cli_execute": false,
"update_mode": "update",
"existing_docs": 5,
"analysis": {
"total": "15",
"code": "8",
"navigation": "7",
"top_level": "3"
}
}
```
## Generated Documentation
**Structure mirrors project source directories under project-specific folder**:
```
.workflow/docs/
└── {project_name}/ # Project-specific root
├── src/ # Mirrors src/ directory
│ ├── modules/
│ │ ├── README.md # Navigation
│ │ ├── auth/
│ │ │ ├── API.md # API signatures
│ │ │ ├── README.md # Module docs
│ │ │ └── middleware/
│ │ │ ├── API.md
│ │ │ └── README.md
│ │ └── api/
│ │ ├── API.md
│ │ └── README.md
│ └── utils/
│ └── README.md
├── lib/ # Mirrors lib/ directory
│ └── core/
│ ├── API.md
│ └── README.md
├── README.md # Project root
├── ARCHITECTURE.md # System design
├── EXAMPLES.md # Usage examples
└── api/ # Optional
└── README.md # HTTP API reference
```
## Execution Commands
```bash
# Execute entire workflow (auto-discovers active session)
/workflow:execute
# Or specify session
/workflow:execute --resume-session="WFS-docs-yyyymmdd-hhmmss"
# Individual task execution
/task:execute IMPL-001
```
## Template Reference
**Available Templates** (`~/.claude/workflows/cli-templates/prompts/documentation/`):
- `api.txt`: Code API (Part A) + HTTP API (Part B)
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders with subdirectories
- `project-readme.txt`: Project overview, getting started, navigation
- `project-architecture.txt`: System structure, module map, design patterns
- `project-examples.txt`: End-to-end usage examples
## Execution Mode Summary
| Mode | CLI Placement | CLI MODE | Approval Flag | Agent Role |
|------|---------------|----------|---------------|------------|
| **Agent (default)** | pre_analysis | analysis | (none) | Generates documentation content |
| **CLI (--cli-execute)** | implementation_approach | write | --mode write | Executes CLI commands, validates output |
**Execution Flow**:
- **Phase 2**: Unified analysis once, results in `.process/`
- **Phase 4**: Dynamic grouping (max 2 dirs per group)
- **Level 1**: Parallel processing for module tree groups
- **Level 2+**: Sequential execution for project-level docs
## Related Commands
- `/workflow:execute` - Execute documentation tasks
- `/workflow:status` - View task progress
- `/workflow:session:complete` - Mark session complete


@@ -1,182 +0,0 @@
---
name: load-skill-memory
description: Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords
argument-hint: "[skill_name] \"task intent description\""
allowed-tools: Bash(*), Read(*), Skill(*)
---
# Memory Load SKILL Command (/memory:load-skill-memory)
## 1. Overview
The `memory:load-skill-memory` command **activates a SKILL package** (auto-detected from the task or manually specified) and intelligently loads documentation based on the user's task intent. The system automatically determines which documentation files to read based on the intent description.
**Core Philosophy**:
- **Flexible Activation**: Auto-detect skill from task description/paths, or user explicitly specifies
- **Intent-Driven Loading**: System analyzes task intent to determine documentation scope
- **Intelligent Selection**: Automatically chooses appropriate documentation level and modules
- **Direct Context Loading**: Loads selected documentation into conversation memory
**When to Use**:
- Manually activate a known SKILL package for a specific task
- Load SKILL context when system hasn't auto-triggered it
- Force reload SKILL documentation with specific intent focus
**Note**: Normal SKILL activation happens automatically via description triggers or path mentions (system extracts skill name from file paths for intelligent triggering). Use this command only when manual activation is needed.
## 2. Parameters
- `[skill_name]` (Optional): Name of SKILL package to activate
- If omitted: System auto-detects from task description or file paths
- If specified: Direct activation of named SKILL package
- Example: `my_project`, `api_service`
- Must match directory name under `.claude/skills/`
- `"task intent description"` (Required): Description of what you want to do
- Used for both: auto-detection (if skill_name omitted) and documentation scope selection
- **Analysis tasks**: "分析builder pattern实现", "理解参数系统架构"
- **Modification tasks**: "修改workflow逻辑", "增强thermal template功能"
- **Learning tasks**: "学习接口设计模式", "了解测试框架使用"
- **With paths**: "修改D:\projects\my_project\src\auth.py的认证逻辑" (auto-extracts `my_project`)
## 3. Execution Flow
### Step 1: Determine SKILL Name (if not provided)
**Auto-Detection Strategy** (when skill_name parameter is omitted):
1. **Path Extraction**: Scan task description for file paths
- Extract potential project names from path segments
- Example: `"修改D:\projects\my_project\src\auth.py"` → extracts `my_project`
2. **Keyword Matching**: Match task keywords against SKILL descriptions
- Search for project-specific terms, domain keywords
3. **Validation**: Check if extracted name matches `.claude/skills/{skill_name}/`
**Result**: Either uses provided skill_name or auto-detected name for activation
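A minimal sketch of this auto-detection (illustrative only; the path regex and the `detectSkillName` helper are assumptions based on the steps above):
```javascript
const fs = require("fs");

// Sketch: pull a candidate skill name out of a file path mentioned in the task
// text, then validate it against .claude/skills/.
function detectSkillName(taskDescription) {
  // Match a Windows-style ("D:\...") or POSIX-style ("/...") path in the text
  const pathMatch = taskDescription.match(/[A-Za-z]:\\[^\s"']+|\/[^\s"']+/);
  if (!pathMatch) return null;
  const segments = pathMatch[0].split(/[\\/]+/).filter(Boolean);
  // Accept the first path segment that matches an existing SKILL directory
  for (const candidate of segments) {
    if (fs.existsSync(`.claude/skills/${candidate}`)) return candidate;
  }
  return null;
}

// detectSkillName("修改D:\\projects\\my_project\\src\\auth.py的认证逻辑") → "my_project"
// (assuming .claude/skills/my_project/ exists)
```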
### Step 2: Activate SKILL and Analyze Intent
**Activate SKILL Package**:
```javascript
Skill(command: "${skill_name}") // Uses provided or auto-detected name
```
**What Happens After Activation**:
1. If SKILL exists in memory: System reads `.claude/skills/${skill_name}/SKILL.md`
2. If SKILL not found in memory: Error - SKILL package doesn't exist
3. SKILL description triggers are loaded into memory
4. Progressive loading mechanism becomes available
5. Documentation structure is now accessible
**Intent Analysis**:
Based on task intent description, system determines:
- **Action type**: analyzing, modifying, learning
- **Scope**: specific module, architecture overview, complete system
- **Depth**: quick reference, detailed API, full documentation
### Step 3: Intelligent Documentation Loading
**Loading Strategy**:
The system automatically selects documentation based on intent keywords:
1. **Quick Understanding** ("了解", "快速理解", "什么是"):
- Load: Level 0 (README.md only, ~2K tokens)
- Use case: Quick overview of capabilities
2. **Specific Module Analysis** ("分析XXX模块", "理解XXX实现"):
- Load: Module-specific README.md + API.md (~5K tokens)
- Use case: Deep dive into specific component
3. **Architecture Review** ("架构", "设计模式", "整体结构"):
- Load: README.md + ARCHITECTURE.md (~10K tokens)
- Use case: System design understanding
4. **Implementation/Modification** ("修改", "增强", "实现"):
- Load: Relevant module docs + EXAMPLES.md (~15K tokens)
- Use case: Code modification with examples
5. **Comprehensive Learning** ("学习", "完整了解", "深入理解"):
- Load: Level 3 (All documentation, ~40K tokens)
- Use case: Complete system mastery
**Documentation Loaded into Memory**:
After loading, the selected documentation content is available in conversation memory for subsequent operations.
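A minimal sketch of this selection logic (illustrative only; keyword groups mirror the Intent Keyword Mapping in section 5, and the file lists are simplified with `{module}` as a literal placeholder):
```javascript
// Sketch: choose which documentation files to load from intent keywords.
function selectDocs(intent, projectName) {
  const docsRoot = `.workflow/docs/${projectName}`;
  const matches = (words) => words.some(w => intent.includes(w));

  if (matches(["完整", "深入", "全面", "学习整个"]))
    return { level: 3, files: [`${docsRoot}/**/*.md`] };                      // ~40K tokens
  if (matches(["修改", "增强", "实现", "开发", "集成"]))
    return { level: "module+examples",
             files: [`${docsRoot}/{module}/README.md`, `${docsRoot}/{module}/API.md`,
                     `${docsRoot}/EXAMPLES.md`] };                            // ~15K tokens
  if (matches(["架构", "设计", "整体结构", "系统设计"]))
    return { level: 2, files: [`${docsRoot}/README.md`, `${docsRoot}/ARCHITECTURE.md`] }; // ~10K
  if (matches(["模块", "组件", "分析"]))
    return { level: 1, files: [`${docsRoot}/{module}/README.md`, `${docsRoot}/{module}/API.md`] }; // ~5K
  return { level: 0, files: [`${docsRoot}/README.md`] };                      // ~2K tokens
}
```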
## 4. Usage Examples
### Example 1: Manual Specification
**User Command**:
```bash
/memory:load-skill-memory my_project "修改认证模块增加OAuth支持"
```
**Execution**:
```javascript
// Step 1: Use provided skill_name
skill_name = "my_project" // Directly from parameter
// Step 2: Activate SKILL
Skill(command: "my_project")
// Step 3: Intent Analysis
Keywords: ["修改", "认证模块", "增加", "OAuth"]
Action: modifying (implementation)
Scope: auth module + examples
// Load documentation based on intent
Read(.workflow/docs/my_project/auth/README.md)
Read(.workflow/docs/my_project/auth/API.md)
Read(.workflow/docs/my_project/EXAMPLES.md)
```
### Example 2: Auto-Detection from Path
**User Command**:
```bash
/memory:load-skill-memory "修改D:\projects\my_project\src\services\api.py的接口逻辑"
```
**Execution**:
```javascript
// Step 1: Auto-detect skill_name from path
Path detected: "D:\projects\my_project\src\services\api.py"
Extracted: "my_project"
Validated: .claude/skills/my_project/ exists
skill_name = "my_project"
// Step 2: Activate SKILL
Skill(command: "my_project")
// Step 3: Intent Analysis
Keywords: ["修改", "services", "接口逻辑"]
Action: modifying (implementation)
Scope: services module + examples
// Load documentation based on intent
Read(.workflow/docs/my_project/services/README.md)
Read(.workflow/docs/my_project/services/API.md)
Read(.workflow/docs/my_project/EXAMPLES.md)
```
## 5. Intent Keyword Mapping
**Quick Reference**:
- **Triggers**: "了解", "快速", "什么是", "简介"
- **Loads**: README.md only (~2K)
**Module-Specific**:
- **Triggers**: "XXX模块", "XXX组件", "分析XXX"
- **Loads**: Module README + API (~5K)
**Architecture**:
- **Triggers**: "架构", "设计", "整体结构", "系统设计"
- **Loads**: README + ARCHITECTURE (~10K)
**Implementation**:
- **Triggers**: "修改", "增强", "实现", "开发", "集成"
- **Loads**: Relevant module + EXAMPLES (~15K)
**Comprehensive**:
- **Triggers**: "完整", "深入", "全面", "学习整个"
- **Loads**: All documentation (~40K)


@@ -1,525 +0,0 @@
---
name: skill-memory
description: "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)"
argument-hint: "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*)
---
# Memory SKILL Package Generator
## Orchestrator Role
**Pure Orchestrator**: Execute documentation generation workflow, then generate SKILL.md index. Does NOT create task JSON files.
**Auto-Continue Workflow**: This command runs **fully autonomously** once triggered. Each phase completes and automatically triggers the next phase without user interaction.
**Execution Paths**:
- **Full Path**: All 4 phases (no existing docs OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 4 (existing docs found AND no `--regenerate` flag)
- **Phase 4 Always Executes**: SKILL.md index is never skipped, always generated or updated
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **No Task JSON**: This command does not create task JSON files - delegates to /memory:docs
3. **Parse Every Output**: Extract required data from each command output (session_id, task_count, file paths)
4. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
5. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
6. **Direct Generation**: Phase 4 directly generates SKILL.md using Write tool
7. **No Manual Steps**: User should never be prompted for decisions between phases
---
## 4-Phase Execution
### Phase 1: Prepare Arguments
**Goal**: Parse command arguments and check existing documentation
**Step 1: Get Target Path and Project Name**
```bash
# Get current directory (or use provided path)
bash(pwd)
# Get project name from directory
bash(basename "$(pwd)")
# Get project root
bash(git rev-parse --show-toplevel 2>/dev/null || pwd)
```
**Output**:
- `target_path`: `/d/my_project`
- `project_name`: `my_project`
- `project_root`: `/d/my_project`
**Step 2: Set Default Parameters**
```bash
# Default values (use these unless user specifies otherwise):
# - tool: "gemini"
# - mode: "full"
# - regenerate: false (no --regenerate flag)
# - cli_execute: false (no --cli-execute flag)
```
**Step 3: Check Existing Documentation**
```bash
# Check if docs directory exists
bash(test -d .workflow/docs/my_project && echo "exists" || echo "not_exists")
# Count existing documentation files
bash(find .workflow/docs/my_project -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Output**:
- `docs_exists`: `exists` or `not_exists`
- `existing_docs`: `5` (or `0` if no docs)
**Step 4: Determine Execution Path**
**Decision Logic**:
```javascript
if (existing_docs > 0 && !regenerate_flag) {
// Documentation exists and no regenerate flag
SKIP_DOCS_GENERATION = true
message = "Documentation already exists, skipping Phase 2 and Phase 3. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
// Force regeneration: delete existing docs
bash(rm -rf .workflow/docs/my_project 2>/dev/null || true)
SKIP_DOCS_GENERATION = false
message = "Regenerating documentation from scratch."
} else {
// No existing docs
SKIP_DOCS_GENERATION = false
message = "No existing documentation found, generating new documentation."
}
```
**Summary Variables**:
- `PROJECT_NAME`: `my_project`
- `TARGET_PATH`: `/d/my_project`
- `DOCS_PATH`: `.workflow/docs/my_project`
- `TOOL`: `gemini` (default) or user-specified
- `MODE`: `full` (default) or user-specified
- `CLI_EXECUTE`: `false` (default) or `true` if --cli-execute flag
- `REGENERATE`: `false` (default) or `true` if --regenerate flag
- `EXISTING_DOCS`: Count of existing documentation files
- `SKIP_DOCS_GENERATION`: `true` if skipping Phase 2/3, `false` otherwise
**Completion & TodoWrite**:
- If `SKIP_DOCS_GENERATION = true`: Mark phase 1 completed, phase 2&3 completed (skipped), phase 4 in_progress
- If `SKIP_DOCS_GENERATION = false`: Mark phase 1 completed, phase 2 in_progress
**Next Action**:
- If skipping: Display skip message → Jump to Phase 4 (SKILL.md generation)
- If not skipping: Display preparation results → Continue to Phase 2 (documentation planning)
---
### Phase 2: Call /memory:docs
**Skip Condition**: This phase is **skipped if SKIP_DOCS_GENERATION = true** (documentation already exists without --regenerate flag)
**Goal**: Trigger documentation generation workflow
**Command**:
```bash
SlashCommand(command="/memory:docs [targetPath] --tool [tool] --mode [mode] [--cli-execute]")
```
**Example**:
```bash
/memory:docs /d/my_app --tool gemini --mode full
/memory:docs /d/my_app --tool gemini --mode full --cli-execute
```
**Note**: The `--regenerate` flag is handled in Phase 1 by deleting existing documentation. This command always calls `/memory:docs` without the regenerate flag, relying on docs.md's built-in update detection.
**Parse Output**:
- Extract session ID: `WFS-docs-[timestamp]` (store as `docsSessionId`)
- Extract task count (store as `taskCount`)
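A minimal sketch of that parsing (illustrative only; the exact output format of `/memory:docs` is assumed here):
```javascript
// Sketch: pull the session ID and task count out of the /memory:docs output text.
// Assumes the output mentions "WFS-docs-<timestamp>" and a phrase like "N tasks".
function parseDocsOutput(outputText) {
  const sessionMatch = outputText.match(/WFS-docs-[\d-]+/);
  const countMatch = outputText.match(/(\d+)\s+tasks?/i);
  return {
    docsSessionId: sessionMatch ? sessionMatch[0] : null,
    taskCount: countMatch ? parseInt(countMatch[1], 10) : null,
  };
}
```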
**Completion Criteria**:
- `/memory:docs` command executed successfully
- Session ID extracted and stored
- Task count retrieved
- Task files created in `.workflow/[docsSessionId]/.task/`
- workflow-session.json exists
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
**Next Action**: Display docs planning results (session ID, task count) → Auto-continue to Phase 3
---
### Phase 3: Execute Documentation Generation
**Skip Condition**: This phase is **skipped if SKIP_DOCS_GENERATION = true** (documentation already exists without --regenerate flag)
**Goal**: Execute documentation generation tasks
**Command**:
```bash
SlashCommand(command="/workflow:execute")
```
**Note**: `/workflow:execute` automatically discovers active session from Phase 2
**Completion Criteria**:
- `/workflow:execute` command executed successfully
- Documentation files generated in `.workflow/docs/[projectName]/`
- All tasks marked as completed in session
- At minimum: module documentation files exist (API.md and/or README.md)
- For full mode: Project README, ARCHITECTURE, EXAMPLES files generated
**TodoWrite**: Mark phase 3 completed, phase 4 in_progress
**Next Action**: Display execution results (file count, module count) → Auto-continue to Phase 4
---
### Phase 4: Generate SKILL.md Index
**Note**: This phase is **NEVER skipped** - it always executes to generate or update the SKILL index.
**Step 1: Read Key Files** (Use Read tool)
- `.workflow/docs/{project_name}/README.md` (required)
- `.workflow/docs/{project_name}/ARCHITECTURE.md` (optional)
**Step 2: Discover Structure**
```bash
bash(find .workflow/docs/{project_name} -name "*.md" | sed 's|.workflow/docs/{project_name}/||' | awk -F'/' '{if(NF>=2) print $1"/"$2}' | sort -u)
```
**Step 3: Generate Intelligent Description**
Extract from README + structure: Function (capabilities), Modules (names), Keywords (API/CLI/auth/etc.)
**Format**: `{Project} {core capabilities} (located at {project_path}). Load this SKILL when analyzing, modifying, or learning about {domain_description} or files under this path, especially when no relevant context exists in memory.`
**Key Elements**:
- **Path Reference**: Use `TARGET_PATH` from Phase 1 for precise location identification
- **Domain Description**: Extract human-readable domain/feature area from README (e.g., "workflow management", "thermal modeling")
- **Trigger Optimization**: Include project path, emphasize "especially when no relevant context exists in memory"
- **Action Coverage**: analyzing (分析), modifying (修改), learning (了解)
**Example**: "Workflow orchestration system with CLI tools and documentation generation (located at /d/Claude_dms3). Load this SKILL when analyzing, modifying, or learning about workflow management or files under this path, especially when no relevant context exists in memory."
**Step 4: Write SKILL.md** (Use Write tool)
```bash
bash(mkdir -p .claude/skills/{project_name})
```
`.claude/skills/{project_name}/SKILL.md`:
```yaml
---
name: {project_name}
description: {intelligent description from Step 3}
version: 1.0.0
---
# {Project Name} SKILL Package
## Documentation: `../../../.workflow/docs/{project_name}/`
## Progressive Loading
### Level 0: Quick Start (~2K)
- [README](../../../.workflow/docs/{project_name}/README.md)
### Level 1: Core Modules (~8K)
{Module READMEs}
### Level 2: Complete (~25K)
All modules + [Architecture](../../../.workflow/docs/{project_name}/ARCHITECTURE.md)
### Level 3: Deep Dive (~40K)
Everything + [Examples](../../../.workflow/docs/{project_name}/EXAMPLES.md)
```
**Completion Criteria**:
- SKILL.md file created at `.claude/skills/{project_name}/SKILL.md`
- Intelligent description generated from documentation
- Progressive loading levels (0-3) properly structured
- Module index includes all documented modules
- All file references use relative paths
**TodoWrite**: Mark phase 4 completed
**Final Action**: Report completion summary to user
**Return to User**:
```
SKILL Package Generation Complete
Project: {project_name}
Documentation: .workflow/docs/{project_name}/ ({doc_count} files)
SKILL Index: .claude/skills/{project_name}/SKILL.md
Generated:
- {task_count} documentation tasks completed
- SKILL.md with progressive loading (4 levels)
- Module index with {module_count} modules
Usage:
- Load Level 0: Quick project overview (~2K tokens)
- Load Level 1: Core modules (~8K tokens)
- Load Level 2: Complete docs (~25K tokens)
- Load Level 3: Everything (~40K tokens)
```
---
## Implementation Details
### Critical Rules
1. **No User Prompts Between Phases**: Never ask user questions or wait for input between phases
2. **Immediate Phase Transition**: After TodoWrite update, immediately execute next phase command
3. **Status-Driven Execution**: Check TodoList status after each phase:
- If next task is "pending" → Mark it "in_progress" and execute
- If all tasks are "completed" → Report final summary
4. **Phase Completion Pattern**:
```
Phase N completes → Update TodoWrite (N=completed, N+1=in_progress) → Execute Phase N+1
```
### TodoWrite Patterns
#### Initialization (Before Phase 1)
**FIRST ACTION**: Create TodoList with all 4 phases
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "in_progress", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "pending", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "pending", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL.md"}
]})
```
**SECOND ACTION**: Execute Phase 1 immediately
#### Full Path (SKIP_DOCS_GENERATION = false)
**After Phase 1**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "in_progress", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "pending", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL.md"}
]})
// Auto-continue to Phase 2
```
**After Phase 2**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "in_progress", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL.md"}
]})
// Auto-continue to Phase 3
```
**After Phase 3**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "in_progress", "activeForm": "Generating SKILL.md"}
]})
// Auto-continue to Phase 4
```
**After Phase 4**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "completed", "activeForm": "Generating SKILL.md"}
]})
// Report completion summary to user
```
#### Skip Path (SKIP_DOCS_GENERATION = true)
**After Phase 1** (detects existing docs, skips Phase 2 & 3):
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "in_progress", "activeForm": "Generating SKILL.md"}
]})
// Display skip message: "Documentation already exists, skipping Phase 2 and Phase 3. Use --regenerate to force regeneration."
// Jump directly to Phase 4
```
**After Phase 4**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "completed", "activeForm": "Generating SKILL.md"}
]})
// Report completion summary to user
```
### Execution Flow Diagrams
#### Full Path Flow
```
User triggers command
[TodoWrite] Initialize 4 phases (Phase 1 = in_progress)
[Execute] Phase 1: Parse arguments
[TodoWrite] Phase 1 = completed, Phase 2 = in_progress
[Execute] Phase 2: Call /memory:docs
[TodoWrite] Phase 2 = completed, Phase 3 = in_progress
[Execute] Phase 3: Call /workflow:execute
[TodoWrite] Phase 3 = completed, Phase 4 = in_progress
[Execute] Phase 4: Generate SKILL.md
[TodoWrite] Phase 4 = completed
[Report] Display completion summary
```
#### Skip Path Flow
```
User triggers command
[TodoWrite] Initialize 4 phases (Phase 1 = in_progress)
[Execute] Phase 1: Parse arguments, detect existing docs
[TodoWrite] Phase 1 = completed, Phase 2&3 = completed (skipped), Phase 4 = in_progress
[Display] Skip message: "Documentation already exists, skipping Phase 2 and Phase 3"
[Execute] Phase 4: Generate SKILL.md (always runs)
[TodoWrite] Phase 4 = completed
[Report] Display completion summary
```
### Error Handling
- If any phase fails, mark it as "in_progress" (not completed)
- Report error details to user
- Do NOT auto-continue to next phase on failure
---
## Parameters
```bash
/memory:skill-memory [path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]
```
- **path**: Target directory (default: current directory)
- **--tool**: CLI tool for documentation (default: gemini)
- `gemini`: Comprehensive documentation
- `qwen`: Architecture analysis
- `codex`: Implementation validation
- **--regenerate**: Force regenerate all documentation
- When enabled: Deletes existing `.workflow/docs/{project_name}/` before regeneration
- Ensures fresh documentation from source code
- **--mode**: Documentation mode (default: full)
- `full`: Complete docs (modules + README + ARCHITECTURE + EXAMPLES)
- `partial`: Module docs only
- **--cli-execute**: Enable CLI-based documentation generation (optional)
- When enabled: CLI generates docs directly in implementation_approach
- When disabled (default): Agent generates documentation content
---
## Examples
### Example 1: Generate SKILL Package (Default)
```bash
/memory:skill-memory
```
**Workflow**:
1. Phase 1: Detects current directory, checks existing docs
2. Phase 2: Calls `/memory:docs . --tool gemini --mode full` (Agent Mode)
3. Phase 3: Executes documentation generation via `/workflow:execute`
4. Phase 4: Generates SKILL.md at `.claude/skills/{project_name}/SKILL.md`
### Example 2: Regenerate with Qwen
```bash
/memory:skill-memory /d/my_app --tool qwen --regenerate
```
**Workflow**:
1. Phase 1: Parses target path, detects regenerate flag, deletes existing docs
2. Phase 2: Calls `/memory:docs /d/my_app --tool qwen --mode full`
3. Phase 3: Executes documentation regeneration
4. Phase 4: Generates updated SKILL.md
### Example 3: Partial Mode (Modules Only)
```bash
/memory:skill-memory --mode partial
```
**Workflow**:
1. Phase 1: Detects partial mode
2. Phase 2: Calls `/memory:docs . --tool gemini --mode partial` (Agent Mode)
3. Phase 3: Executes module documentation only
4. Phase 4: Generates SKILL.md with module-only index
### Example 4: CLI Execute Mode
```bash
/memory:skill-memory --cli-execute
```
**Workflow**:
1. Phase 1: Detects CLI execute mode
2. Phase 2: Calls `/memory:docs . --tool gemini --mode full --cli-execute` (CLI Mode)
3. Phase 3: Executes CLI-based documentation generation
4. Phase 4: Generates SKILL.md at `.claude/skills/{project_name}/SKILL.md`
### Example 5: Skip Path (Existing Docs)
```bash
/memory:skill-memory
```
**Scenario**: Documentation already exists in `.workflow/docs/{project_name}/`
**Workflow**:
1. Phase 1: Detects existing docs (5 files), sets SKIP_DOCS_GENERATION = true
2. Display: "Documentation already exists, skipping Phase 2 and Phase 3. Use --regenerate to force regeneration."
3. Phase 4: Generates or updates SKILL.md index only (~5-10x faster)
---
## Architecture
```
skill-memory (orchestrator)
├─ Phase 1: Prepare (bash commands, skip decision)
├─ Phase 2: /memory:docs (task planning, skippable)
├─ Phase 3: /workflow:execute (task execution, skippable)
└─ Phase 4: Write SKILL.md (direct file generation, always runs)
No task JSON created by this command
All documentation tasks managed by /memory:docs
Smart skip logic: 5-10x faster when docs exist
```


@@ -1,773 +0,0 @@
---
name: swagger-docs
description: Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests
argument-hint: "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]"
---
# Swagger API Documentation Workflow (/memory:swagger-docs)
## Overview
Professional Swagger/OpenAPI documentation generator that strictly follows RESTful API design standards to produce enterprise-grade API documentation.
**Core Features**:
- **RESTful Standards**: Strict adherence to REST architecture and HTTP semantics
- **Global Security**: Unified Authorization Token validation mechanism
- **Complete API Docs**: Descriptions, methods, URLs, parameters for each endpoint
- **Organized Structure**: Clear directory hierarchy by business domain
- **Detailed Fields**: Type, required, example, description for each field
- **Error Code Standards**: Unified error response format and code definitions
- **Validation Tests**: Boundary conditions and exception handling tests
**Output Structure** (--lang zh):
```
.workflow/docs/{project_name}/api/
├── swagger.yaml # Main OpenAPI spec file
├── 概述/
│ ├── README.md # API overview
│ ├── 认证说明.md # Authentication guide
│ ├── 错误码规范.md # Error code definitions
│ └── 版本历史.md # Version history
├── 用户模块/ # Grouped by business domain
│ ├── 用户认证.md
│ ├── 用户管理.md
│ └── 权限控制.md
├── 业务模块/
│ └── ...
└── 测试报告/
├── 接口测试.md # API test results
└── 边界测试.md # Boundary condition tests
```
**Output Structure** (--lang en):
```
.workflow/docs/{project_name}/api/
├── swagger.yaml # Main OpenAPI spec file
├── overview/
│ ├── README.md # API overview
│ ├── authentication.md # Authentication guide
│ ├── error-codes.md # Error code definitions
│ └── changelog.md # Version history
├── users/ # Grouped by business domain
│ ├── authentication.md
│ ├── management.md
│ └── permissions.md
├── orders/
│ └── ...
└── test-reports/
├── api-tests.md # API test results
└── boundary-tests.md # Boundary condition tests
```
## Parameters
```bash
/memory:swagger-docs [path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]
```
- **path**: API source code directory (default: current directory)
- **--tool**: CLI tool selection (default: gemini)
- `gemini`: Comprehensive analysis, pattern recognition
- `qwen`: Architecture analysis, system design
- `codex`: Implementation validation, code quality
- **--format**: OpenAPI spec format (default: yaml)
- `yaml`: YAML format (recommended, better readability)
- `json`: JSON format
- **--version**: OpenAPI version (default: v3.0)
- `v3.0`: OpenAPI 3.0.x
- `v3.1`: OpenAPI 3.1.0 (supports JSON Schema 2020-12)
- **--lang**: Documentation language (default: zh)
- `zh`: Chinese documentation with Chinese directory names
- `en`: English documentation with English directory names
## Planning Workflow
### Phase 1: Initialize Session
```bash
# Get project info
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
```
```javascript
// Create swagger-docs session
SlashCommand(command="/workflow:session:start --type swagger-docs --new \"{project_name}-swagger-{timestamp}\"")
// Parse output to get sessionId
```
```bash
# Update workflow-session.json
bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","format":"yaml","openapi_version":"3.0.3","lang":"{lang}","tool":"gemini"}' .workflow/active/{sessionId}/workflow-session.json > tmp.json && mv tmp.json .workflow/active/{sessionId}/workflow-session.json)
```
### Phase 2: Scan API Endpoints
**Discovery Patterns**: Auto-detect framework signatures and API definition styles.
**Supported Frameworks**:
| Framework | Detection Pattern | Example |
|-----------|-------------------|---------|
| Express.js | `router.get/post/put/delete` | `router.get('/users/:id')` |
| Fastify | `fastify.route`, `@Route` | `fastify.get('/api/users')` |
| NestJS | `@Controller`, `@Get/@Post` | `@Get('users/:id')` |
| Koa | `router.get`, `ctx.body` | `router.get('/users')` |
| Hono | `app.get/post`, `c.json` | `app.get('/users/:id')` |
| FastAPI | `@app.get`, `@router.post` | `@app.get("/users/{id}")` |
| Flask | `@app.route`, `@bp.route` | `@app.route('/users')` |
| Spring | `@GetMapping`, `@PostMapping` | `@GetMapping("/users/{id}")` |
| Go Gin | `r.GET`, `r.POST` | `r.GET("/users/:id")` |
| Go Chi | `r.Get`, `r.Post` | `r.Get("/users/{id}")` |
**Commands**:
```bash
# 1. Detect API framework type
bash(
if rg -q "@Controller|@Get|@Post|@Put|@Delete" --type ts 2>/dev/null; then echo "NESTJS";
elif rg -q "router\.(get|post|put|delete|patch)" --type ts --type js 2>/dev/null; then echo "EXPRESS";
elif rg -q "fastify\.(get|post|route)" --type ts --type js 2>/dev/null; then echo "FASTIFY";
elif rg -q "@app\.(get|post|put|delete)" --type py 2>/dev/null; then echo "FASTAPI";
elif rg -q "@GetMapping|@PostMapping|@RequestMapping" --type java 2>/dev/null; then echo "SPRING";
elif rg -q 'r\.(GET|POST|PUT|DELETE)' --type go 2>/dev/null; then echo "GO_GIN";
else echo "UNKNOWN"; fi
)
# 2. Scan all API endpoint definitions
bash(rg -n "(router|app|fastify)\.(get|post|put|delete|patch)|@(Get|Post|Put|Delete|Patch|Controller|RequestMapping)" --type ts --type js --type py --type java --type go -g '!*.test.*' -g '!*.spec.*' -g '!node_modules/**' 2>/dev/null | head -200)
# 3. Extract route paths
bash(rg -o "['\"](/api)?/[a-zA-Z0-9/:_-]+['\"]" --type ts --type js --type py -g '!*.test.*' 2>/dev/null | sort -u | head -100)
# 4. Detect existing OpenAPI/Swagger files
bash(find . -type f \( -name "swagger.yaml" -o -name "swagger.json" -o -name "openapi.yaml" -o -name "openapi.json" \) ! -path "*/node_modules/*" 2>/dev/null)
# 5. Extract DTO/Schema definitions
bash(rg -n "export (interface|type|class).*Dto|@ApiProperty|class.*Schema" --type ts -g '!*.test.*' 2>/dev/null | head -100)
```
**Data Processing**: Parse outputs, use **Write tool** to create `${session_dir}/.process/swagger-planning-data.json`:
```json
{
"metadata": {
"generated_at": "2025-01-01T12:00:00+08:00",
"project_name": "project_name",
"project_root": "/path/to/project",
"openapi_version": "3.0.3",
"format": "yaml",
"lang": "zh"
},
"framework": {
"type": "NESTJS",
"detected_patterns": ["@Controller", "@Get", "@Post"],
"base_path": "/api/v1"
},
"endpoints": [
{
"file": "src/modules/users/users.controller.ts",
"line": 25,
"method": "GET",
"path": "/api/v1/users/:id",
"handler": "getUser",
"controller": "UsersController"
}
],
"existing_specs": {
"found": false,
"files": []
},
"dto_schemas": [
{
"name": "CreateUserDto",
"file": "src/modules/users/dto/create-user.dto.ts",
"properties": ["email", "password", "name"]
}
],
"statistics": {
"total_endpoints": 45,
"by_method": {"GET": 20, "POST": 15, "PUT": 5, "DELETE": 5},
"by_module": {"users": 12, "auth": 8, "orders": 15, "products": 10}
}
}
```
### Phase 3: Analyze API Structure
**Commands**:
```bash
# 1. Analyze controller/route file structure
bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.endpoints[].file' | sort -u | head -20)
# 2. Extract request/response types
bash(for f in $(jq -r '.dto_schemas[].file' ${session_dir}/.process/swagger-planning-data.json | head -20); do echo "=== $f ===" && cat "$f" 2>/dev/null; done)
# 3. Analyze authentication middleware
bash(rg -n "auth|guard|middleware|jwt|bearer|token" -i --type ts --type js -g '!*.test.*' -g '!node_modules/**' 2>/dev/null | head -50)
# 4. Detect error handling patterns
bash(rg -n "HttpException|BadRequest|Unauthorized|Forbidden|NotFound|throw new" --type ts --type js -g '!*.test.*' 2>/dev/null | head -50)
```
**Deep Analysis via Gemini CLI**:
```bash
ccw cli -p "
PURPOSE: Analyze API structure and generate OpenAPI specification outline for comprehensive documentation
TASK:
• Parse all API endpoints and identify business module boundaries
• Extract request parameters, request bodies, and response formats
• Identify authentication mechanisms and security requirements
• Discover error handling patterns and error codes
• Map endpoints to logical module groups
MODE: analysis
CONTEXT: @src/**/*.controller.ts @src/**/*.routes.ts @src/**/*.dto.ts @src/**/middleware/**/*
EXPECTED: JSON format API structure analysis report with modules, endpoints, security schemes, and error codes
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Strict RESTful standards | Identify all public endpoints | Document output language: {lang}
" --tool gemini --mode analysis --cd {project_root}
```
**Update swagger-planning-data.json** with analysis results:
```json
{
"api_structure": {
"modules": [
{
"name": "Users",
"name_zh": "用户模块",
"base_path": "/api/v1/users",
"endpoints": [
{
"path": "/api/v1/users",
"method": "GET",
"operation_id": "listUsers",
"summary": "List all users",
"summary_zh": "获取用户列表",
"description": "Paginated list of system users with filtering by status and role",
"description_zh": "分页获取系统用户列表,支持按状态、角色筛选",
"tags": ["User Management"],
"tags_zh": ["用户管理"],
"security": ["bearerAuth"],
"parameters": {
"query": ["page", "limit", "status", "role"]
},
"responses": {
"200": "UserListResponse",
"401": "UnauthorizedError",
"403": "ForbiddenError"
}
}
]
}
],
"security_schemes": {
"bearerAuth": {
"type": "http",
"scheme": "bearer",
"bearerFormat": "JWT",
"description": "JWT Token authentication. Add Authorization: Bearer <token> to request header"
}
},
"error_codes": [
{"code": "AUTH_001", "status": 401, "message": "Invalid or expired token", "message_zh": "Token 无效或已过期"},
{"code": "AUTH_002", "status": 401, "message": "Authentication required", "message_zh": "未提供认证信息"},
{"code": "AUTH_003", "status": 403, "message": "Insufficient permissions", "message_zh": "权限不足"}
]
}
}
```
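A minimal sketch of that merge, assuming the CLI's JSON report was saved to a temporary file; the actual workflow performs this step with the Edit tool:
```javascript
const fs = require('fs');

// Sketch: fold the Gemini analysis report (assumed saved as JSON) into the planning data.
function mergeAnalysis(planningDataPath, analysisReportPath) {
  const planning = JSON.parse(fs.readFileSync(planningDataPath, 'utf8'));
  const report = JSON.parse(fs.readFileSync(analysisReportPath, 'utf8'));
  planning.api_structure = {
    modules: report.modules ?? [],
    security_schemes: report.security_schemes ?? {},
    error_codes: report.error_codes ?? [],
  };
  fs.writeFileSync(planningDataPath, JSON.stringify(planning, null, 2));
}
```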
### Phase 4: Task Decomposition
**Task Hierarchy**:
```
Level 1: Infrastructure Tasks (Parallel)
├─ IMPL-001: Generate main OpenAPI spec file (swagger.yaml)
├─ IMPL-002: Generate global security config and auth documentation
└─ IMPL-003: Generate unified error code specification
Level 2: Module Documentation Tasks (Parallel, by business module)
├─ IMPL-004: Users module API documentation
├─ IMPL-005: Auth module API documentation
├─ IMPL-006: Business module N API documentation
└─ ...
Level 3: Aggregation Tasks (Depends on Level 1-2)
├─ IMPL-N+1: Generate API overview and navigation
└─ IMPL-N+2: Generate version history and changelog
Level 4: Validation Tasks (Depends on Level 1-3)
├─ IMPL-N+3: API endpoint validation tests
└─ IMPL-N+4: Boundary condition tests
```
**Grouping Strategy**:
1. Group by business module (users, orders, products, etc.)
2. Maximum 10 endpoints per task
3. Large modules (>10 endpoints) are split by submodule, as sketched below
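A minimal sketch of this grouping, assuming the planning-data shape above; chunking by position stands in for the submodule split, which would use route prefixes in practice, and `firstTaskNumber` follows from `level1_count`:
```javascript
// Sketch: group endpoints into Level 2 tasks, at most 10 endpoints per task.
const MAX_ENDPOINTS_PER_TASK = 10;

function buildModuleTasks(modules, firstTaskNumber = 4) {
  const assignments = [];
  let n = firstTaskNumber;
  for (const mod of modules) {
    for (let i = 0; i < mod.endpoints.length; i += MAX_ENDPOINTS_PER_TASK) {
      const chunk = mod.endpoints.slice(i, i + MAX_ENDPOINTS_PER_TASK);
      assignments.push({
        task_id: `IMPL-${String(n++).padStart(3, '0')}`,
        level: 2,
        type: 'module-doc',
        module: mod.name.toLowerCase(),
        endpoint_count: chunk.length,
      });
    }
  }
  return assignments;
}
```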
**Commands**:
```bash
# 1. Count endpoints by module
bash(cat ${session_dir}/.process/swagger-planning-data.json | jq '.statistics.by_module')
# 2. Calculate task groupings
bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_structure.modules[] | "\(.name):\(.endpoints | length)"')
```
**Data Processing**: Use **Edit tool** to update `swagger-planning-data.json` with task groups:
```json
{
"task_groups": {
"level1_count": 3,
"level2_count": 5,
"total_count": 12,
"assignments": [
{"task_id": "IMPL-001", "level": 1, "type": "openapi-spec", "title": "Generate OpenAPI main spec file"},
{"task_id": "IMPL-002", "level": 1, "type": "security", "title": "Generate global security config"},
{"task_id": "IMPL-003", "level": 1, "type": "error-codes", "title": "Generate error code specification"},
{"task_id": "IMPL-004", "level": 2, "type": "module-doc", "module": "users", "endpoint_count": 12},
{"task_id": "IMPL-005", "level": 2, "type": "module-doc", "module": "auth", "endpoint_count": 8}
]
}
}
```
### Phase 5: Generate Task JSONs
**Generation Process**:
1. Read configuration values from workflow-session.json
2. Read task groups from swagger-planning-data.json
3. Generate Level 1 tasks (infrastructure)
4. Generate Level 2 tasks (by module)
5. Generate Level 3-4 tasks (aggregation and validation), as sketched below
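A minimal sketch of this generation loop; `renderTemplate` is a hypothetical helper that fills one of the task templates below with assignment fields and session config:
```javascript
const fs = require('fs');
const path = require('path');

// Sketch: write one task JSON per assignment into the session's .task/ directory.
function generateTaskFiles(sessionDir, planningData, renderTemplate) {
  const taskDir = path.join(sessionDir, '.task');
  fs.mkdirSync(taskDir, { recursive: true });

  for (const assignment of planningData.task_groups.assignments) {
    const task = renderTemplate(assignment, planningData); // fills a template from the section below
    fs.writeFileSync(
      path.join(taskDir, `${assignment.task_id}.json`),
      JSON.stringify(task, null, 2)
    );
  }
}
```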
## Task Templates
### Level 1-1: OpenAPI Main Spec File
```json
{
"id": "IMPL-001",
"title": "Generate OpenAPI main specification file",
"status": "pending",
"meta": {
"type": "swagger-openapi-spec",
"agent": "@doc-generator",
"tool": "gemini",
"priority": "critical"
},
"context": {
"requirements": [
"Generate OpenAPI 3.0.3 compliant swagger.yaml",
"Include complete info, servers, tags, paths, components definitions",
"Follow RESTful design standards, use {lang} for descriptions"
],
"precomputed_data": {
"planning_data": "${session_dir}/.process/swagger-planning-data.json"
}
},
"flow_control": {
"pre_analysis": [
{
"step": "load_analysis_data",
"action": "Load API analysis data",
"commands": [
"bash(cat ${session_dir}/.process/swagger-planning-data.json)"
],
"output_to": "api_analysis"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate OpenAPI spec file",
"description": "Create complete swagger.yaml specification file",
"cli_prompt": "PURPOSE: Generate OpenAPI 3.0.3 specification file from analyzed API structure\nTASK:\n• Define openapi version: 3.0.3\n• Define info: title, description, version, contact, license\n• Define servers: development, staging, production environments\n• Define tags: organized by business modules\n• Define paths: all API endpoints with complete specifications\n• Define components: schemas, securitySchemes, parameters, responses\nMODE: write\nCONTEXT: @[api_analysis]\nEXPECTED: Complete swagger.yaml file following OpenAPI 3.0.3 specification\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(cat ~/.claude/workflows/cli-templates/prompts/documentation/swagger-api.txt) | Use {lang} for all descriptions | Strict RESTful standards",
"output": "swagger.yaml"
}
],
"target_files": [
".workflow/docs/${project_name}/api/swagger.yaml"
]
}
}
```
### Level 1-2: Global Security Configuration
```json
{
"id": "IMPL-002",
"title": "Generate global security configuration and authentication guide",
"status": "pending",
"meta": {
"type": "swagger-security",
"agent": "@doc-generator",
"tool": "gemini"
},
"context": {
"requirements": [
"Document Authorization header format in detail",
"Describe token acquisition, refresh, and expiration mechanisms",
"List permission requirements for each endpoint"
]
},
"flow_control": {
"pre_analysis": [
{
"step": "analyze_auth",
"command": "bash(rg -n 'auth|guard|jwt|bearer' -i --type ts -g '!*.test.*' 2>/dev/null | head -50)",
"output_to": "auth_patterns"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate authentication documentation",
"cli_prompt": "PURPOSE: Generate comprehensive authentication documentation for API security\nTASK:\n• Document authentication mechanism: JWT Bearer Token\n• Explain header format: Authorization: Bearer <token>\n• Describe token lifecycle: acquisition, refresh, expiration handling\n• Define permission levels: public, user, admin, super_admin\n• Document authentication failure responses: 401/403 error handling\nMODE: write\nCONTEXT: @[auth_patterns] @src/**/auth/**/* @src/**/guard/**/*\nEXPECTED: Complete authentication guide in {lang}\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Include code examples | Clear step-by-step instructions",
"output": "{auth_doc_name}"
}
],
"target_files": [
".workflow/docs/${project_name}/api/{overview_dir}/{auth_doc_name}"
]
}
}
```
### Level 1-3: Unified Error Code Specification
```json
{
"id": "IMPL-003",
"title": "Generate unified error code specification",
"status": "pending",
"meta": {
"type": "swagger-error-codes",
"agent": "@doc-generator",
"tool": "gemini"
},
"context": {
"requirements": [
"Define unified error response format",
"Create categorized error code system (auth, business, system)",
"Provide detailed description and examples for each error code"
]
},
"flow_control": {
"implementation_approach": [
{
"step": 1,
"title": "Generate error code specification document",
"cli_prompt": "PURPOSE: Generate comprehensive error code specification for consistent API error handling\nTASK:\n• Define error response format: {code, message, details, timestamp}\n• Document authentication errors (AUTH_xxx): 401/403 series\n• Document parameter errors (PARAM_xxx): 400 series\n• Document business errors (BIZ_xxx): business logic errors\n• Document system errors (SYS_xxx): 500 series\n• For each error code: HTTP status, error message, possible causes, resolution suggestions\nMODE: write\nCONTEXT: @src/**/*.exception.ts @src/**/*.filter.ts\nEXPECTED: Complete error code specification in {lang} with tables and examples\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Include response examples | Clear categorization",
"output": "{error_doc_name}"
}
],
"target_files": [
".workflow/docs/${project_name}/api/{overview_dir}/{error_doc_name}"
]
}
}
```
### Level 2: Module API Documentation (Template)
```json
{
"id": "IMPL-${module_task_id}",
"title": "Generate ${module_name} API documentation",
"status": "pending",
"depends_on": ["IMPL-001", "IMPL-002", "IMPL-003"],
"meta": {
"type": "swagger-module-doc",
"agent": "@doc-generator",
"tool": "gemini",
"module": "${module_name}",
"endpoint_count": "${endpoint_count}"
},
"context": {
"requirements": [
"Complete documentation for all endpoints in this module",
"Each endpoint: description, method, URL, parameters, responses",
"Include success and failure response examples",
"Mark API version and last update time"
],
"focus_paths": ["${module_source_paths}"]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_module_endpoints",
"action": "Load module endpoint information",
"commands": [
"bash(cat ${session_dir}/.process/swagger-planning-data.json | jq '.api_structure.modules[] | select(.name == \"${module_name}\")')"
],
"output_to": "module_endpoints"
},
{
"step": "read_source_files",
"action": "Read module source files",
"commands": [
"bash(cat ${module_source_files})"
],
"output_to": "source_code"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate module API documentation",
"description": "Generate complete API documentation for ${module_name}",
"cli_prompt": "PURPOSE: Generate complete RESTful API documentation for ${module_name} module\nTASK:\n• Create module overview: purpose, use cases, prerequisites\n• Generate endpoint index: grouped by functionality\n• For each endpoint document:\n - Functional description: purpose and business context\n - Request method: GET/POST/PUT/DELETE\n - URL path: complete API path\n - Request headers: Authorization and other required headers\n - Path parameters: {id} and other path variables\n - Query parameters: pagination, filters, etc.\n - Request body: JSON Schema format\n - Response body: success and error responses\n - Field description table: type, required, example, description\n• Add usage examples: cURL, JavaScript, Python\n• Add version info: v1.0.0, last updated date\nMODE: write\nCONTEXT: @[module_endpoints] @[source_code]\nEXPECTED: Complete module API documentation in {lang} with all endpoints fully documented\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(cat ~/.claude/workflows/cli-templates/prompts/documentation/swagger-api.txt) | RESTful standards | Include all response codes",
"output": "${module_doc_name}"
}
],
"target_files": [
".workflow/docs/${project_name}/api/${module_dir}/${module_doc_name}"
]
}
}
```
### Level 3: API Overview and Navigation
```json
{
"id": "IMPL-${overview_task_id}",
"title": "Generate API overview and navigation",
"status": "pending",
"depends_on": ["IMPL-001", "...", "IMPL-${last_module_task_id}"],
"meta": {
"type": "swagger-overview",
"agent": "@doc-generator",
"tool": "gemini"
},
"flow_control": {
"pre_analysis": [
{
"step": "load_all_docs",
"command": "bash(find .workflow/docs/${project_name}/api -type f -name '*.md' ! -path '*/{overview_dir}/*' | xargs cat)",
"output_to": "all_module_docs"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate API overview",
"cli_prompt": "PURPOSE: Generate API overview document with navigation and quick start guide\nTASK:\n• Create introduction: system features, tech stack, version\n• Write quick start guide: authentication, first request example\n• Build module navigation: categorized links to all modules\n• Document environment configuration: development, staging, production\n• List SDKs and tools: client libraries, Postman collection\nMODE: write\nCONTEXT: @[all_module_docs] @.workflow/docs/${project_name}/api/swagger.yaml\nEXPECTED: Complete API overview in {lang} with navigation links\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Clear structure | Quick start focus",
"output": "README.md"
}
],
"target_files": [
".workflow/docs/${project_name}/api/{overview_dir}/README.md"
]
}
}
```
### Level 4: Validation Tasks
```json
{
"id": "IMPL-${test_task_id}",
"title": "API endpoint validation tests",
"status": "pending",
"depends_on": ["IMPL-${overview_task_id}"],
"meta": {
"type": "swagger-validation",
"agent": "@test-fix-agent",
"tool": "codex"
},
"context": {
"requirements": [
"Validate accessibility of all endpoints",
"Test various boundary conditions",
"Verify exception handling"
]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_swagger_spec",
"command": "bash(cat .workflow/docs/${project_name}/api/swagger.yaml)",
"output_to": "swagger_spec"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate test report",
"cli_prompt": "PURPOSE: Generate comprehensive API test validation report\nTASK:\n• Document test environment configuration\n• Calculate endpoint coverage statistics\n• Report test results: pass/fail counts\n• Document boundary tests: parameter limits, null values, special characters\n• Document exception tests: auth failures, permission denied, resource not found\n• List issues found with recommendations\nMODE: write\nCONTEXT: @[swagger_spec]\nEXPECTED: Complete test report in {lang} with detailed results\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Include test cases | Clear pass/fail status",
"output": "{test_doc_name}"
}
],
"target_files": [
".workflow/docs/${project_name}/api/{test_dir}/{test_doc_name}"
]
}
}
```
## Language-Specific Directory Mapping
| Component | --lang zh | --lang en |
|-----------|-----------|-----------|
| Overview dir | 概述 | overview |
| Auth doc | 认证说明.md | authentication.md |
| Error doc | 错误码规范.md | error-codes.md |
| Changelog | 版本历史.md | changelog.md |
| Users module | 用户模块 | users |
| Orders module | 订单模块 | orders |
| Products module | 商品模块 | products |
| Test dir | 测试报告 | test-reports |
| API test doc | 接口测试.md | api-tests.md |
| Boundary test doc | 边界测试.md | boundary-tests.md |
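A small lookup covering the table above can resolve these names from the `--lang` flag; falling back to English for unknown values is an assumption, not specified behavior, and module directory names follow the same table:
```javascript
// Sketch: resolve language-specific directory and document names from the --lang flag.
const LANG_NAMES = {
  zh: { overview_dir: '概述', auth_doc_name: '认证说明.md', error_doc_name: '错误码规范.md',
        changelog: '版本历史.md', test_dir: '测试报告', test_doc_name: '接口测试.md' },
  en: { overview_dir: 'overview', auth_doc_name: 'authentication.md', error_doc_name: 'error-codes.md',
        changelog: 'changelog.md', test_dir: 'test-reports', test_doc_name: 'api-tests.md' },
};

const resolveNames = (lang) => LANG_NAMES[lang] ?? LANG_NAMES.en;
```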
## API Documentation Template
### Single Endpoint Format
Each endpoint must include:
```markdown
### Get User Details
**Description**: Retrieve detailed user information by ID, including profile and permissions.
**Endpoint Info**:
| Property | Value |
|----------|-------|
| Method | GET |
| URL | `/api/v1/users/{id}` |
| Version | v1.0.0 |
| Updated | 2025-01-01 |
| Auth | Bearer Token |
| Permission | user / admin |
**Request Headers**:
| Field | Type | Required | Example | Description |
|-------|------|----------|---------|-------------|
| Authorization | string | Yes | `Bearer eyJhbGc...` | JWT Token |
| Content-Type | string | No | `application/json` | Request content type |
**Path Parameters**:
| Field | Type | Required | Example | Description |
|-------|------|----------|---------|-------------|
| id | string | Yes | `usr_123456` | Unique user identifier |
**Query Parameters**:
| Field | Type | Required | Default | Example | Description |
|-------|------|----------|---------|---------|-------------|
| include | string | No | - | `roles,permissions` | Related data to include |
**Success Response** (200 OK):
```json
{
"code": 0,
"message": "success",
"data": {
"id": "usr_123456",
"email": "user@example.com",
"name": "John Doe",
"status": "active",
"roles": ["user"],
"created_at": "2025-01-01T00:00:00Z",
"updated_at": "2025-01-01T00:00:00Z"
},
"timestamp": "2025-01-01T12:00:00Z"
}
```
**Response Fields**:
| Field | Type | Description |
|-------|------|-------------|
| code | integer | Business status code, 0 = success |
| message | string | Response message |
| data.id | string | Unique user identifier |
| data.email | string | User email address |
| data.name | string | User display name |
| data.status | string | User status: active/inactive/suspended |
| data.roles | array | User role list |
| data.created_at | string | Creation timestamp (ISO 8601) |
| data.updated_at | string | Last update timestamp (ISO 8601) |
**Error Responses**:
| Status | Code | Message | Possible Cause |
|--------|------|---------|----------------|
| 401 | AUTH_001 | Invalid or expired token | Token format error or expired |
| 403 | AUTH_003 | Insufficient permissions | No access to this user info |
| 404 | USER_001 | User not found | User ID doesn't exist or has been deleted |
**Examples**:
```bash
# cURL
curl -X GET "https://api.example.com/api/v1/users/usr_123456" \
-H "Authorization: Bearer eyJhbGc..." \
-H "Content-Type: application/json"
```
```javascript
// JavaScript (fetch)
const response = await fetch('https://api.example.com/api/v1/users/usr_123456', {
method: 'GET',
headers: {
'Authorization': 'Bearer eyJhbGc...',
'Content-Type': 'application/json'
}
});
const data = await response.json();
```
```
## Session Structure
```
.workflow/active/
└── WFS-swagger-{timestamp}/
├── workflow-session.json
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
│ └── swagger-planning-data.json
└── .task/
├── IMPL-001.json # OpenAPI spec
├── IMPL-002.json # Security config
├── IMPL-003.json # Error codes
├── IMPL-004.json # Module 1 API
├── ...
├── IMPL-N+1.json # API overview
└── IMPL-N+2.json # Validation tests
```
## Execution Commands
```bash
# Execute entire workflow
/workflow:execute
# Specify session
/workflow:execute --resume-session="WFS-swagger-yyyymmdd-hhmmss"
# Single task execution
/task:execute IMPL-001
```
## Related Commands
- `/workflow:execute` - Execute documentation tasks
- `/workflow:status` - View task progress
- `/workflow:session:complete` - Mark session complete
- `/memory:docs` - General documentation workflow

View File

@@ -1,310 +0,0 @@
---
name: tech-research-rules
description: "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)"
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Tech Stack Rules Generator
## Overview
**Purpose**: Generate multi-layered, path-conditional rules that Claude Code automatically loads based on file context.
**Output Structure**:
```
.claude/rules/tech/{tech-stack}/
├── core.md # paths: **/*.{ext} - Core principles
├── patterns.md # paths: src/**/*.{ext} - Implementation patterns
├── testing.md # paths: **/*.{test,spec}.{ext} - Testing rules
├── config.md # paths: *.config.* - Configuration rules
├── api.md # paths: **/api/**/* - API rules (backend only)
├── components.md # paths: **/components/**/* - Component rules (frontend only)
└── metadata.json # Generation metadata
```
**Templates Location**: `~/.claude/workflows/cli-templates/prompts/rules/`
---
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization
2. **Path-Conditional Output**: Every rule file includes `paths` frontmatter
3. **Template-Driven**: Agent reads templates before generating content
4. **Agent Produces Files**: Agent writes all rule files directly
5. **No Manual Loading**: Rules auto-activate when Claude works with matching files
---
## 3-Phase Execution
### Phase 1: Prepare Context & Detect Tech Stack
**Goal**: Detect input mode, extract tech stack info, determine file extensions
**Input Mode Detection**:
```bash
input="$1"
if [[ "$input" == WFS-* ]]; then
MODE="session"
SESSION_ID="$input"
# Read workflow-session.json to extract tech stack
else
MODE="direct"
TECH_STACK_NAME="$input"
fi
```
**Tech Stack Analysis**:
```javascript
// Decompose composite tech stacks
// "typescript-react-nextjs" → ["typescript", "react", "nextjs"]
const TECH_EXTENSIONS = {
"typescript": "{ts,tsx}",
"javascript": "{js,jsx}",
"python": "py",
"rust": "rs",
"go": "go",
"java": "java",
"csharp": "cs",
"ruby": "rb",
"php": "php"
};
const FRAMEWORK_TYPE = {
"react": "frontend",
"vue": "frontend",
"angular": "frontend",
"nextjs": "fullstack",
"nuxt": "fullstack",
"fastapi": "backend",
"express": "backend",
"django": "backend",
"rails": "backend"
};
```
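A minimal sketch of the decomposition itself, built on the two maps above; the `'library'` and `'*'` fallbacks and taking the first framework match are assumptions:
```javascript
// Sketch: decompose a composite tech stack name and derive the output variables.
function analyzeTechStack(name) {
  const components = name.toLowerCase().split('-');              // "typescript-react" → ["typescript", "react"]
  const primaryLang = components.find((c) => TECH_EXTENSIONS[c]) ?? components[0];
  const framework = components.find((c) => FRAMEWORK_TYPE[c]);   // first framework match wins
  return {
    TECH_STACK_NAME: components.join('-'),
    PRIMARY_LANG: primaryLang,
    FILE_EXT: TECH_EXTENSIONS[primaryLang] ?? '*',                // fallback is an assumption
    FRAMEWORK_TYPE: framework ? FRAMEWORK_TYPE[framework] : 'library',
    COMPONENTS: components,
  };
}
```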
**Check Existing Rules**:
```bash
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
rules_dir=".claude/rules/tech/${normalized_name}"
existing_count=$(find "${rules_dir}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Skip Decision**:
- If `existing_count > 0` AND no `--regenerate` → `SKIP_GENERATION = true`
- If `--regenerate` → Delete existing and regenerate
**Output Variables**:
- `TECH_STACK_NAME`: Normalized name
- `PRIMARY_LANG`: Primary language
- `FILE_EXT`: File extension pattern
- `FRAMEWORK_TYPE`: frontend | backend | fullstack | library
- `COMPONENTS`: Array of tech components
- `SKIP_GENERATION`: Boolean
**TodoWrite**: Mark phase 1 completed
---
### Phase 2: Agent Produces Path-Conditional Rules
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Delegate to agent for Exa research and rule file generation
**Template Files**:
```
~/.claude/workflows/cli-templates/prompts/rules/
├── tech-rules-agent-prompt.txt # Agent instructions
├── rule-core.txt # Core principles template
├── rule-patterns.txt # Implementation patterns template
├── rule-testing.txt # Testing rules template
├── rule-config.txt # Configuration rules template
├── rule-api.txt # API rules template (backend)
└── rule-components.txt # Component rules template (frontend)
```
**Agent Task**:
```javascript
Task({
subagent_type: "general-purpose",
description: `Generate tech stack rules: ${TECH_STACK_NAME}`,
prompt: `
You are generating path-conditional rules for Claude Code.
## Context
- Tech Stack: ${TECH_STACK_NAME}
- Primary Language: ${PRIMARY_LANG}
- File Extensions: ${FILE_EXT}
- Framework Type: ${FRAMEWORK_TYPE}
- Components: ${JSON.stringify(COMPONENTS)}
- Output Directory: .claude/rules/tech/${TECH_STACK_NAME}/
## Instructions
Read the agent prompt template for detailed instructions:
$(cat ~/.claude/workflows/cli-templates/prompts/rules/tech-rules-agent-prompt.txt)
## Execution Steps
1. Execute Exa research queries (see agent prompt)
2. Read each rule template
3. Generate rule files following template structure
4. Write files to output directory
5. Write metadata.json
6. Report completion
## Variable Substitutions
Replace in templates:
- {TECH_STACK_NAME} → ${TECH_STACK_NAME}
- {PRIMARY_LANG} → ${PRIMARY_LANG}
- {FILE_EXT} → ${FILE_EXT}
- {FRAMEWORK_TYPE} → ${FRAMEWORK_TYPE}
`
})
```
**Completion Criteria**:
- 4-6 rule files written with proper `paths` frontmatter
- metadata.json written
- Agent reports files created
**TodoWrite**: Mark phase 2 completed
---
### Phase 3: Verify & Report
**Goal**: Verify generated files and provide usage summary
**Steps**:
1. **Verify Files**:
```bash
find ".claude/rules/tech/${TECH_STACK_NAME}" -name "*.md" -type f
```
2. **Validate Frontmatter**:
```bash
head -5 ".claude/rules/tech/${TECH_STACK_NAME}/core.md"
```
3. **Read Metadata**:
```javascript
Read(`.claude/rules/tech/${TECH_STACK_NAME}/metadata.json`)
```
4. **Generate Summary Report**:
```
Tech Stack Rules Generated
Tech Stack: {TECH_STACK_NAME}
Location: .claude/rules/tech/{TECH_STACK_NAME}/
Files Created:
├── core.md → paths: **/*.{ext}
├── patterns.md → paths: src/**/*.{ext}
├── testing.md → paths: **/*.{test,spec}.{ext}
├── config.md → paths: *.config.*
├── api.md → paths: **/api/**/* (if backend)
└── components.md → paths: **/components/**/* (if frontend)
Auto-Loading:
- Rules apply automatically when editing matching files
- No manual loading required
Example Activation:
- Edit src/components/Button.tsx → core.md + patterns.md + components.md
- Edit tests/api.test.ts → core.md + testing.md
- Edit package.json → config.md
```
**TodoWrite**: Mark phase 3 completed
---
## Path Pattern Reference
| Pattern | Matches |
|---------|---------|
| `**/*.ts` | All .ts files |
| `src/**/*` | All files under src/ |
| `*.config.*` | Config files in root |
| `**/*.{ts,tsx}` | .ts and .tsx files |

| Tech Stack | Core Pattern | Test Pattern |
|------------|--------------|--------------|
| TypeScript | `**/*.{ts,tsx}` | `**/*.{test,spec}.{ts,tsx}` |
| Python | `**/*.py` | `**/test_*.py, **/*_test.py` |
| Rust | `**/*.rs` | `**/tests/**/*.rs` |
| Go | `**/*.go` | `**/*_test.go` |
---
## Parameters
```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate]
```
**Arguments**:
- **session-id**: `WFS-*` format - Extract from workflow session
- **tech-stack-name**: Direct input - `"typescript"`, `"typescript-react"`
- **--regenerate**: Force regenerate existing rules
---
## Examples
### Single Language
```bash
/memory:tech-research "typescript"
```
**Output**: `.claude/rules/tech/typescript/` with 4 rule files
### Frontend Stack
```bash
/memory:tech-research "typescript-react"
```
**Output**: `.claude/rules/tech/typescript-react/` with 5 rule files (includes components.md)
### Backend Stack
```bash
/memory:tech-research "python-fastapi"
```
**Output**: `.claude/rules/tech/python-fastapi/` with 5 rule files (includes api.md)
### From Session
```bash
/memory:tech-research WFS-user-auth-20251104
```
**Workflow**: Extract tech stack from session → Generate rules
---
## Comparison: Rules vs SKILL
| Aspect | SKILL Memory | Rules |
|--------|--------------|-------|
| Loading | Manual: `Skill("tech")` | Automatic by path |
| Scope | All files when loaded | Only matching files |
| Granularity | Monolithic packages | Per-file-type |
| Context | Full package | Only relevant rules |
**When to Use**:
- **Rules**: Tech stack conventions per file type
- **SKILL**: Reference docs, APIs, examples for manual lookup

View File

@@ -0,0 +1,332 @@
---
name: tips
description: Quick note-taking command to capture ideas, snippets, reminders, and insights for later reference
argument-hint: "<note content> [--tag <tag1,tag2>] [--context <context>]"
allowed-tools: mcp__ccw-tools__core_memory(*), Read(*)
examples:
- /memory:tips "Remember to use Redis for rate limiting"
- /memory:tips "Auth pattern: JWT with refresh tokens" --tag architecture,auth
- /memory:tips "Bug: memory leak in WebSocket handler after 24h" --context websocket-service
- /memory:tips "Performance: lazy loading reduced bundle by 40%" --tag performance
---
# Memory Tips Command (/memory:tips)
## 1. Overview
The `memory:tips` command provides **quick note-taking** for capturing:
- Quick ideas and insights
- Code snippets and patterns
- Reminders and follow-ups
- Bug notes and debugging hints
- Performance observations
- Architecture decisions
- Library/tool recommendations
**Core Philosophy**:
- **Speed First**: Minimal friction for capturing thoughts
- **Searchable**: Tagged for easy retrieval
- **Context-Aware**: Optional context linking
- **Lightweight**: No complex session analysis
## 2. Parameters
- `<note content>` (Required): The tip/note content to save
- `--tag <tags>` (Optional): Comma-separated tags for categorization
- `--context <context>` (Optional): Related context (file, module, feature)
**Examples**:
```bash
/memory:tips "Use Zod for runtime validation - better DX than class-validator"
/memory:tips "Redis connection pool: max 10, min 2" --tag config,redis
/memory:tips "Fix needed: race condition in payment processor" --tag bug,payment --context src/payments
```
## 3. Structured Output Format
```markdown
## Tip ID
TIP-YYYYMMDD-HHMMSS
## Timestamp
YYYY-MM-DD HH:MM:SS
## Project Root
[Absolute path to project root, e.g., D:\Claude_dms3]
## Content
[The tip/note content exactly as provided]
## Tags
[Comma-separated tags, or (none)]
## Context
[Optional context linking - file, module, or feature reference]
## Session Link
[WFS-ID if workflow session active, otherwise (none)]
## Auto-Detected Context
[Files/topics from current conversation if relevant]
```
## 4. Field Definitions
| Field | Purpose | Example |
|-------|---------|---------|
| **Tip ID** | Unique identifier with timestamp | TIP-20260128-143052 |
| **Timestamp** | When tip was created | 2026-01-28 14:30:52 |
| **Project Root** | Current project path | D:\Claude_dms3 |
| **Content** | The actual tip/note | "Use Redis for rate limiting" |
| **Tags** | Categorization labels | architecture, auth, performance |
| **Context** | Related code/feature | src/auth/**, payment-module |
| **Session Link** | Link to workflow session | WFS-auth-20260128 |
| **Auto-Detected Context** | Files from conversation | src/api/handler.ts |
## 5. Execution Flow
### Step 1: Parse Arguments
```javascript
const parseTipsCommand = (input) => {
  // Extract note content: quoted string, or everything before the first flag
  const contentMatch = input.match(/^"([^"]+)"|^(.+?)(?=\s+--|$)/);
  const content = contentMatch ? (contentMatch[1] || contentMatch[2]).trim() : '';
  // Extract comma-separated tags (values may contain hyphens)
  const tagsMatch = input.match(/--tag\s+(\S+)/);
  const tags = tagsMatch ? tagsMatch[1].split(',').map(t => t.trim()) : [];
  // Extract context reference (file path, module, or feature)
  const contextMatch = input.match(/--context\s+(\S+)/);
  const context = contextMatch ? contextMatch[1] : '';
  return { content, tags, context };
};
```
### Step 2: Gather Context
```javascript
const gatherTipContext = async () => {
// Get project root
const projectRoot = process.cwd(); // or detect from environment
// Get current session if active
const manifest = await mcp__ccw-tools__session_manager({
operation: "list",
location: "active"
});
const sessionId = manifest.sessions?.[0]?.id || null;
// Auto-detect files from recent conversation
const recentFiles = extractRecentFilesFromConversation(); // Last 5 messages
return {
projectRoot,
sessionId,
autoDetectedContext: recentFiles
};
};
```
### Step 3: Generate Structured Text
```javascript
const generateTipText = (parsed, context) => {
const timestamp = new Date().toISOString().replace('T', ' ').slice(0, 19);
const tipId = `TIP-${new Date().toISOString().slice(0,10).replace(/-/g, '')}-${new Date().toTimeString().slice(0,8).replace(/:/g, '')}`;
return `## Tip ID
${tipId}
## Timestamp
${timestamp}
## Project Root
${context.projectRoot}
## Content
${parsed.content}
## Tags
${parsed.tags.length > 0 ? parsed.tags.join(', ') : '(none)'}
## Context
${parsed.context || '(none)'}
## Session Link
${context.sessionId || '(none)'}
## Auto-Detected Context
${context.autoDetectedContext.length > 0
? context.autoDetectedContext.map(f => `- ${f}`).join('\n')
: '(none)'}`;
};
```
### Step 4: Save to Core Memory
```javascript
mcp__ccw-tools__core_memory({
operation: "import",
text: structuredText
})
```
**Response Format**:
```json
{
"operation": "import",
"id": "CMEM-YYYYMMDD-HHMMSS",
"message": "Created memory: CMEM-YYYYMMDD-HHMMSS"
}
```
### Step 5: Confirm to User
```
✓ Tip saved successfully
ID: CMEM-YYYYMMDD-HHMMSS
Tags: architecture, auth
Context: src/auth/**
To retrieve: /memory:search "auth patterns"
Or via MCP: core_memory(operation="search", query="auth")
```
## 6. Tag Categories (Suggested)
**Technical**:
- `architecture` - Design decisions and patterns
- `performance` - Optimization insights
- `security` - Security considerations
- `bug` - Bug notes and fixes
- `config` - Configuration settings
- `api` - API design patterns
**Development**:
- `testing` - Test strategies and patterns
- `debugging` - Debugging techniques
- `refactoring` - Refactoring notes
- `documentation` - Doc improvements
**Domain Specific**:
- `auth` - Authentication/authorization
- `database` - Database patterns
- `frontend` - UI/UX patterns
- `backend` - Backend logic
- `devops` - Infrastructure and deployment
**Organizational**:
- `reminder` - Follow-up items
- `research` - Research findings
- `idea` - Feature ideas
- `review` - Code review notes
## 7. Search Integration
Tips can be retrieved using:
```bash
# Via command (if /memory:search exists)
/memory:search "rate limiting"
# Via MCP tool
mcp__ccw-tools__core_memory({
operation: "search",
query: "rate limiting",
source_type: "core_memory",
top_k: 10
})
# Via CLI
ccw core-memory search --query "rate limiting" --top-k 10
```
## 8. Quality Checklist
Before saving:
- [ ] Content is clear and actionable
- [ ] Tags are relevant and consistent
- [ ] Context provides enough reference
- [ ] Auto-detected context is accurate
- [ ] Project root is absolute path
- [ ] Timestamp is properly formatted
## 9. Best Practices
### Good Tips Examples
**Specific and Actionable**:
```
"Use connection pooling for Redis: { max: 10, min: 2, acquireTimeoutMillis: 30000 }"
--tag config,redis
```
**With Context**:
```
"Auth middleware must validate both access and refresh tokens"
--tag security,auth --context src/middleware/auth.ts
```
**Problem + Solution**:
```
"Memory leak fixed by unsubscribing event listeners in componentWillUnmount"
--tag bug,react --context src/components/Chat.tsx
```
### Poor Tips Examples
**Too Vague**:
```
"Fix the bug" --tag bug
```
**Too Long** (use /memory:compact instead):
```
"Here's the complete implementation plan for the entire auth system... [3 paragraphs]"
```
**No Context**:
```
"Remember to update this later"
```
## 10. Use Cases
### During Development
```bash
/memory:tips "JWT secret must be 256-bit minimum" --tag security,auth
/memory:tips "Use debounce (300ms) for search input" --tag performance,ux
```
### After Bug Fixes
```bash
/memory:tips "Race condition in payment: lock with Redis SETNX" --tag bug,payment
```
### Code Review Insights
```bash
/memory:tips "Prefer early returns over nested ifs" --tag style,readability
```
### Architecture Decisions
```bash
/memory:tips "Chose PostgreSQL over MongoDB for ACID compliance" --tag architecture,database
```
### Library Recommendations
```bash
/memory:tips "Zod > Yup for TypeScript validation - better type inference" --tag library,typescript
```
## 11. Notes
- **Frequency**: Use liberally - capture all valuable insights
- **Retrieval**: Search by tags, content, or context
- **Lifecycle**: Tips persist across sessions
- **Organization**: Tags enable filtering and categorization
- **Integration**: Can reference tips in later workflows
- **Lightweight**: No complex session analysis required

View File

@@ -1,517 +0,0 @@
---
name: workflow-skill-memory
description: Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)
argument-hint: "session <session-id> | all"
allowed-tools: Task(*), TodoWrite(*), Bash(*), Read(*), Write(*)
---
# Workflow SKILL Memory Generator
## Overview
Generate SKILL package from archived workflow sessions using agent-driven analysis. Supports single-session incremental updates or parallel processing of all sessions.
**Scope**: Only processes WFS-* workflow sessions. Other session types (e.g., doc sessions) are automatically ignored.
## Usage
```bash
/memory:workflow-skill-memory session WFS-<session-id> # Process single WFS session
/memory:workflow-skill-memory all # Process all WFS sessions in parallel
```
## Execution Modes
### Mode 1: Single Session (`session <session-id>`)
**Purpose**: Incremental update - process one archived session and merge into existing SKILL package
**Workflow**:
1. **Validate session**: Check if session exists in `.workflow/.archives/{session-id}/`
2. **Invoke agent**: Call `universal-executor` to analyze session and update SKILL documents
3. **Agent tasks**:
- Read session data from `.workflow/.archives/{session-id}/`
- Extract lessons, conflicts, and outcomes
- Use Gemini for intelligent aggregation (optional)
- Update or create SKILL documents using templates
- Regenerate SKILL.md index
**Command Example**:
```bash
/memory:workflow-skill-memory session WFS-user-auth
```
**Expected Output**:
```
Session WFS-user-auth processed
Updated:
- sessions-timeline.md (1 session added)
- lessons-learned.md (3 lessons merged)
- conflict-patterns.md (1 conflict added)
- SKILL.md (index regenerated)
```
---
### Mode 2: All Sessions (`all`)
**Purpose**: Full regeneration - process all archived sessions in parallel for complete SKILL package
**Workflow**:
1. **List sessions**: Read manifest.json to get all archived session IDs
2. **Parallel invocation**: Launch multiple `universal-executor` agents in parallel (one per session)
3. **Agent coordination**:
- Each agent processes one session independently
- Agents use Gemini for analysis
- Agents collect data into JSON (no direct file writes)
- Final aggregator agent merges results and generates SKILL documents
**Command Example**:
```bash
/memory:workflow-skill-memory all
```
**Expected Output**:
```
All sessions processed in parallel
Sessions: 8 total
Updated:
- sessions-timeline.md (8 sessions)
- lessons-learned.md (24 lessons aggregated)
- conflict-patterns.md (12 conflicts documented)
- SKILL.md (index regenerated)
```
---
## Implementation Flow
### Phase 1: Validation and Setup
**Step 1.1: Parse Command Arguments**
Extract mode and session ID:
```javascript
let mode, session_id;
if (args === "all") {
  mode = "all";
} else if (args.startsWith("session ")) {
  mode = "session";
  session_id = args.replace("session ", "").trim();
} else {
  throw new Error("Invalid arguments. Usage: session <session-id> | all");
}
```
**Step 1.2: Validate Archive Directory**
```bash
bash(test -d .workflow/.archives && echo "exists" || echo "missing")
```
If missing, report error and exit.
**Step 1.3: Mode-Specific Validation**
**Single Session Mode**:
```bash
# Validate session ID format (must start with WFS-)
if [[ ! "$session_id" =~ ^WFS- ]]; then
ERROR = "Invalid session ID format. Only WFS-* sessions are supported"
EXIT
fi
# Check if session exists
bash(test -d .workflow/.archives/{session_id} && echo "exists" || echo "missing")
```
If missing, report error: "Session {session_id} not found in archives"
**All Sessions Mode**:
```bash
# Read manifest and filter only WFS- sessions
bash(cat .workflow/.archives/manifest.json | jq -r '.archives[].session_id | select(startswith("WFS-"))')
```
Store filtered session IDs in array. Ignore doc sessions and other non-WFS sessions.
**Step 1.4: TodoWrite Initialization**
**Single Session Mode**:
```javascript
TodoWrite({todos: [
{"content": "Validate session existence", "status": "completed", "activeForm": "Validating session"},
{"content": "Invoke agent to process session", "status": "in_progress", "activeForm": "Invoking agent"},
{"content": "Verify SKILL package updated", "status": "pending", "activeForm": "Verifying update"}
]})
```
**All Sessions Mode**:
```javascript
TodoWrite({todos: [
{"content": "Read manifest and list sessions", "status": "completed", "activeForm": "Reading manifest"},
{"content": "Invoke agents in parallel", "status": "in_progress", "activeForm": "Invoking agents"},
{"content": "Verify SKILL package regenerated", "status": "pending", "activeForm": "Verifying regeneration"}
]})
```
---
### Phase 2: Agent Invocation
#### Single Session Mode - Agent Task
Invoke `universal-executor` with session-specific task:
**Agent Prompt Structure**:
```
Task: Process Workflow Session for SKILL Package
Context:
- Session ID: {session_id}
- Session Path: .workflow/.archives/{session_id}/
- Mode: Incremental update
Objectives:
1. Read session data:
- workflow-session.json (metadata)
- IMPL_PLAN.md (implementation summary)
- TODO_LIST.md (if exists)
- manifest.json entry for lessons
2. Extract key information:
- Description, tags, metrics
- Lessons (successes, challenges, watch_patterns)
- Context package path (reference only)
- Key outcomes from IMPL_PLAN
3. Use Gemini for aggregation (optional):
Command pattern:
ccw cli -p "
PURPOSE: Extract lessons and conflicts from workflow session
TASK:
• Analyze IMPL_PLAN and lessons from manifest
• Identify success patterns and challenges
• Extract conflict patterns with resolutions
• Categorize by functional domain
MODE: analysis
CONTEXT: @IMPL_PLAN.md @workflow-session.json
EXPECTED: Structured lessons and conflicts in JSON format
RULES: Template reference from skill-aggregation.txt
" --tool gemini --mode analysis --cd .workflow/.archives/{session_id}
3.5. **Generate SKILL.md Description** (CRITICAL for auto-loading):
Read skill-index.txt template Section: "Description Field Generation"
Execute command to get project root:
```bash
git rev-parse --show-toplevel # Example output: /d/Claude_dms3
```
Apply description format:
```
Progressive workflow development history (located at {project_root}).
Load this SKILL when continuing development, analyzing past implementations,
or learning from workflow history, especially when no relevant context exists in memory.
```
**Validation**:
- [ ] Path uses forward slashes (not backslashes)
- [ ] All three use cases present
- [ ] Trigger optimization phrase included
- [ ] Path is absolute (starts with / or drive letter)
4. Read templates for formatting guidance:
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-sessions-timeline.txt
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-lessons-learned.txt
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-conflict-patterns.txt
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-index.txt
**CRITICAL**: From skill-index.txt, read these sections:
- "Description Field Generation" - Rules for generating description
- "Variable Substitution Guide" - All required variables
- "Generation Instructions" - Step-by-step generation process
- "Validation Checklist" - Final validation steps
5. Update SKILL documents:
- sessions-timeline.md: Append new session, update domain grouping
- lessons-learned.md: Merge lessons into categories, update frequencies
- conflict-patterns.md: Add conflicts, update recurring pattern frequencies
- SKILL.md: Regenerate index with updated counts
**For SKILL.md generation**:
- Follow "Generation Instructions" from skill-index.txt (Steps 1-7)
- Use git command for project_root: `git rev-parse --show-toplevel`
- Apply "Description Field Generation" rules
- Validate using "Validation Checklist"
- Increment version (patch level)
6. Return result JSON:
{
"status": "success",
"session_id": "{session_id}",
"updates": {
"sessions_added": 1,
"lessons_merged": count,
"conflicts_added": count
}
}
```
---
#### All Sessions Mode - Parallel Agent Tasks
**Step 2.1: Launch parallel session analyzers**
Invoke multiple agents in parallel (one message with multiple Task calls):
**Per-Session Agent Prompt**:
```
Task: Extract Session Data for SKILL Package
Context:
- Session ID: {session_id}
- Mode: Parallel analysis (no direct file writes)
Objectives:
1. Read session data (same as single mode)
2. Extract key information (same as single mode)
3. Use Gemini for analysis (same as single mode)
4. Return structured data JSON:
{
"status": "success",
"session_id": "{session_id}",
"data": {
"metadata": {
"description": "...",
"archived_at": "...",
"tags": [...],
"metrics": {...}
},
"lessons": {
"successes": [...],
"challenges": [...],
"watch_patterns": [...]
},
"conflicts": [
{
"type": "architecture|dependencies|testing|performance",
"pattern": "...",
"resolution": "...",
"code_impact": [...]
}
],
"impl_summary": "First 200 chars of IMPL_PLAN",
"context_package_path": "..."
}
}
```
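A minimal sketch of this fan-out, where each `Task` call carries the per-session prompt above and `sessionIds` comes from the manifest filtering in Phase 1; `buildPerSessionPrompt` is a hypothetical helper:
```javascript
// Sketch: one message with one Task call per WFS session; results feed the aggregator in Step 2.2.
const analyzerTasks = sessionIds.map((sessionId) =>
  Task({
    subagent_type: "universal-executor",
    description: `Extract session data: ${sessionId}`,
    prompt: buildPerSessionPrompt(sessionId),   // fills the per-session template above
  })
);
```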
**Step 2.2: Aggregate results**
After all session agents complete, invoke aggregator agent:
**Aggregator Agent Prompt**:
```
Task: Aggregate Session Results and Generate SKILL Package
Context:
- Mode: Full regeneration
- Input: JSON results from {session_count} session agents
Objectives:
1. Aggregate all session data:
- Collect metadata from all sessions
- Merge lessons by category
- Group conflicts by type
- Sort sessions by date
2. Use Gemini for final aggregation:
ccw cli -p "
PURPOSE: Aggregate lessons and conflicts from all workflow sessions
TASK:
• Group successes by functional domain
• Categorize challenges by severity (HIGH/MEDIUM/LOW)
• Identify recurring conflict patterns
• Calculate frequencies and prioritize
MODE: analysis
CONTEXT: [Provide aggregated JSON data]
EXPECTED: Final aggregated structure for SKILL documents
RULES: Template reference from skill-aggregation.txt
" --tool gemini --mode analysis
3. Read templates for formatting (same 4 templates as single mode)
4. Generate all SKILL documents:
- sessions-timeline.md (all sessions, sorted by date)
- lessons-learned.md (aggregated lessons with frequencies)
- conflict-patterns.md (recurring patterns with resolutions)
- SKILL.md (index with progressive loading)
5. Write files to .claude/skills/workflow-progress/
6. Return result JSON:
{
"status": "success",
"sessions_processed": count,
"files_generated": ["SKILL.md", "sessions-timeline.md", ...],
"summary": {
"total_sessions": count,
"functional_domains": [...],
"date_range": "...",
"lessons_count": count,
"conflicts_count": count
}
}
```
---
### Phase 3: Verification
**Step 3.1: Check SKILL Package Files**
```bash
bash(ls -lh .claude/skills/workflow-progress/)
```
Verify all 4 files exist:
- SKILL.md
- sessions-timeline.md
- lessons-learned.md
- conflict-patterns.md
**Step 3.2: TodoWrite Completion**
Mark all tasks as completed.
**Step 3.3: Display Summary**
**Single Session Mode**:
```
Session {session_id} processed successfully
Updated:
- sessions-timeline.md
- lessons-learned.md
- conflict-patterns.md
- SKILL.md
SKILL Location: .claude/skills/workflow-progress/SKILL.md
```
**All Sessions Mode**:
```
All sessions processed in parallel
Sessions: {count} total
Functional Domains: {domain_list}
Date Range: {earliest} - {latest}
Generated:
- sessions-timeline.md ({count} sessions)
- lessons-learned.md ({lessons_count} lessons)
- conflict-patterns.md ({conflicts_count} conflicts)
- SKILL.md (4-level progressive loading)
SKILL Location: .claude/skills/workflow-progress/SKILL.md
Usage:
- Level 0: Quick refresh (~2K tokens)
- Level 1: Recent history (~8K tokens)
- Level 2: Complete analysis (~25K tokens)
- Level 3: Deep dive (~40K tokens)
```
---
## Agent Guidelines
### Agent Capabilities
**universal-executor agents can**:
- Read files from `.workflow/.archives/`
- Execute bash commands
- Call Gemini CLI for intelligent analysis
- Read template files for formatting guidance
- Write SKILL package files (single mode) or return JSON (parallel mode)
- Return structured results
### Gemini Usage Pattern
**When to use Gemini**:
- Aggregating lessons from multiple sources
- Identifying recurring patterns
- Classifying conflicts by type and severity
- Extracting structured data from IMPL_PLAN
**Fallback Strategy**: If Gemini fails or times out, use direct file parsing with structured extraction logic.
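A minimal sketch of such a fallback, reading lessons straight from the archive manifest; nesting the lesson arrays under a `lessons` key is an assumption about the manifest entry shape:
```javascript
const fs = require('fs');

// Sketch: fallback extraction without Gemini, reading lessons directly from manifest.json.
function extractLessonsFallback(archivesDir, sessionId) {
  const manifest = JSON.parse(fs.readFileSync(`${archivesDir}/manifest.json`, 'utf8'));
  const entry = manifest.archives.find((a) => a.session_id === sessionId);
  if (!entry) return { successes: [], challenges: [], watch_patterns: [] };
  return {
    successes: entry.lessons?.successes ?? [],
    challenges: entry.lessons?.challenges ?? [],
    watch_patterns: entry.lessons?.watch_patterns ?? [],
  };
}
```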
---
## Template System
### Template Files
All templates located in: `~/.claude/workflows/cli-templates/prompts/workflow/`
1. **skill-sessions-timeline.txt**: Format for sessions-timeline.md
2. **skill-lessons-learned.txt**: Format for lessons-learned.md
3. **skill-conflict-patterns.txt**: Format for conflict-patterns.md
4. **skill-index.txt**: Format for SKILL.md index
5. **skill-aggregation.txt**: Rules for Gemini aggregation (existing)
### Template Usage in Agent
**Agents read templates to understand**:
- File structure and markdown format
- Data sources (which files to read)
- Update strategy (incremental vs full)
- Formatting rules and conventions
- Aggregation logic (for Gemini)
**Templates are NOT shown in this command documentation** - agents read them directly as needed.
---
## Error Handling
### Validation Errors
- **No archives directory**: "Error: No workflow archives found at .workflow/.archives/"
- **Invalid session ID format**: "Error: Invalid session ID format. Only WFS-* sessions are supported"
- **Session not found**: "Error: Session {session_id} not found in archives"
- **No WFS sessions in manifest**: "Error: No WFS-* workflow sessions found in manifest.json"
### Agent Errors
- If agent fails, report error message from agent result
- If Gemini times out, agents use fallback direct parsing
- If template read fails, agents use inline format
### Recovery
- Single session mode: Can be retried without affecting other sessions
- All sessions mode: If one agent fails, others continue; retry failed sessions individually
## Integration
### Called by `/workflow:session:complete`
Automatically invoked after session archival:
```bash
SlashCommand(command="/memory:workflow-skill-memory session {session_id}")
```
### Manual Invocation
Users can manually process sessions:
```bash
/memory:workflow-skill-memory session WFS-custom-feature # Single session
/memory:workflow-skill-memory all # Full regeneration
```

View File

@@ -1,204 +0,0 @@
---
name: breakdown
description: Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order
argument-hint: "task-id"
---
# Task Breakdown Command (/task:breakdown)
## Overview
Breaks down complex tasks into executable subtasks with context inheritance and agent assignment.
## Core Principles
**File Cohesion:** Related files must stay in same task
**10-Task Limit:** Total tasks cannot exceed 10 (triggers re-scoping)
## Core Features
**CRITICAL**: Manual breakdown with safety controls to prevent file conflicts and task limit violations.
### Breakdown Process
1. **Session Check**: Verify active session contains parent task
2. **Task Validation**: Ensure parent is `pending` status
3. **10-Task Limit Check**: Verify breakdown won't exceed total limit
4. **Manual Decomposition**: User defines subtasks with validation
5. **File Conflict Detection**: Warn if same files appear in multiple subtasks
6. **Similar Function Warning**: Alert if subtasks have overlapping functionality
7. **Context Distribution**: Inherit parent requirements and scope
8. **Agent Assignment**: Auto-assign agents based on subtask type
9. **TODO_LIST Update**: Regenerate TODO_LIST.md with new structure
### Breakdown Rules
- Only `pending` tasks can be broken down
- **Manual breakdown only**: Automated breakdown disabled to prevent violations
- Parent becomes `container` status (not executable)
- Subtasks use format: IMPL-N.M (max 2 levels)
- Context flows from parent to subtasks
- All relationships tracked in JSON
- **10-task limit enforced**: Breakdown rejected if total would exceed 10 tasks
- **File cohesion preserved**: Same files cannot be split across subtasks
## Usage
### Basic Breakdown
```bash
/task:breakdown impl-1
```
Interactive process:
```
Task: Build authentication module
Current total tasks: 6/10
MANUAL BREAKDOWN REQUIRED
Define subtasks manually (remaining capacity: 4 tasks):
1. Enter subtask title: User authentication core
Focus files: models/User.js, routes/auth.js, middleware/auth.js
2. Enter subtask title: OAuth integration
Focus files: services/OAuthService.js, routes/oauth.js
FILE CONFLICT DETECTED:
- routes/auth.js appears in multiple subtasks
- Recommendation: Merge related authentication routes
SIMILAR FUNCTIONALITY WARNING:
- "User authentication" and "OAuth integration" both handle auth
- Consider combining into single task
# Use AskUserQuestion for confirmation
AskUserQuestion({
questions: [{
question: "File conflicts and/or similar functionality detected. How do you want to proceed?",
header: "Confirm",
options: [
{ label: "Proceed with breakdown", description: "Accept the risks and create the subtasks as defined." },
{ label: "Restart breakdown", description: "Discard current subtasks and start over." },
{ label: "Cancel breakdown", description: "Abort the operation and leave the parent task as is." }
],
multiSelect: false
}]
})
User selected: "Proceed with breakdown"
Task IMPL-1 broken down:
IMPL-1: Build authentication module (container)
├── IMPL-1.1: User authentication core -> @code-developer
└── IMPL-1.2: OAuth integration -> @code-developer
Files updated: .task/IMPL-1.json + 2 subtask files + TODO_LIST.md
```
## Decomposition Logic
### Agent Assignment
- **Design/Planning** → `@planning-agent`
- **Implementation** → `@code-developer`
- **Testing** → `@code-developer` (type: "test-gen")
- **Test Validation** → `@test-fix-agent` (type: "test-fix")
- **Review** → `@universal-executor` (optional)
### Context Inheritance
- Subtasks inherit parent requirements
- Scope refined for specific subtask
- Implementation details distributed appropriately
## Safety Controls
### File Conflict Detection
**Validates file cohesion across subtasks:**
- Scans `focus_paths` in all subtasks
- Warns if same file appears in multiple subtasks
- Suggests merging subtasks with overlapping files
- Blocks breakdown if critical conflicts are detected (see the sketch below)
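A minimal sketch of the overlap check, assuming each subtask carries `{ id, focus_paths }` as in the task schema:
```javascript
// Sketch: detect files that appear in the focus_paths of more than one subtask.
function detectFileConflicts(subtasks) {
  const seen = new Map();          // file path → first subtask id that claimed it
  const conflicts = [];
  for (const task of subtasks) {
    for (const file of task.focus_paths ?? []) {
      if (seen.has(file) && seen.get(file) !== task.id) {
        conflicts.push({ file, tasks: [seen.get(file), task.id] });
      } else {
        seen.set(file, task.id);
      }
    }
  }
  return conflicts;                // empty array means file cohesion is preserved
}
```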
### Similar Functionality Detection
**Prevents functional overlap:**
- Analyzes subtask titles for similar keywords
- Warns about potential functional redundancy
- Suggests consolidation of related functionality
- Examples: "user auth" + "login system" → merge recommendation
### 10-Task Limit Enforcement
**Hard limit compliance:**
- Counts current total tasks in session
- Calculates breakdown impact on total
- Rejects breakdown if would exceed 10 tasks
- Suggests re-scoping if limit reached
### Manual Control Requirements
**User-driven breakdown only:**
- No automatic subtask generation
- User must define each subtask title and scope
- Real-time validation during input
- Confirmation required before execution
## Implementation Details
- Complete task JSON schema
- Implementation field structure
- Context inheritance rules
- Agent assignment logic
## Validation
### Pre-breakdown Checks
1. Active session exists
2. Task found in session
3. Task status is `pending`
4. Not already broken down
5. **10-task limit compliance**: Total tasks + new subtasks ≤ 10
6. **Manual mode enabled**: No automatic breakdown allowed
### Post-breakdown Actions
1. Update parent to `container` status
2. Create subtask JSON files
3. Update parent subtasks list
4. Update session stats
5. **Regenerate TODO_LIST.md** with new hierarchy
6. Validate file paths in focus_paths
7. Update session task count
## Examples
### Basic Breakdown
```bash
/task:breakdown impl-1
impl-1: Build authentication (container)
├── impl-1.1: Design schema -> @planning-agent
├── impl-1.2: Implement logic + tests -> @code-developer
└── impl-1.3: Execute & fix tests -> @test-fix-agent
```
## Error Handling
```bash
# Task not found
Task IMPL-5 not found
# Already broken down
Task IMPL-1 already has subtasks
# Wrong status
Cannot breakdown completed task IMPL-2
# 10-task limit exceeded
Breakdown would exceed 10-task limit (current: 8, proposed: 4)
Suggestion: Re-scope project into smaller iterations
# File conflicts detected
File conflict: routes/auth.js appears in IMPL-1.1 and IMPL-1.2
Recommendation: Merge subtasks or redistribute files
# Similar functionality warning
Similar functions detected: "user login" and "authentication"
Consider consolidating related functionality
# Manual breakdown required
Automatic breakdown disabled. Use manual breakdown process.
```
**System ensures**: Manual breakdown control with file cohesion enforcement, similar functionality detection, and 10-task limit compliance

View File

@@ -1,152 +0,0 @@
---
name: create
description: Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis
argument-hint: "\"task title\""
---
# Task Create Command (/task:create)
## Overview
Creates new implementation tasks with automatic context awareness and ID generation.
## Core Principles
**Task System:** @~/.claude/workflows/task-core.md
## Core Features
### Automatic Behaviors
- **ID Generation**: Auto-generates IMPL-N format (max 2 levels)
- **Context Inheritance**: Inherits from active workflow session
- **JSON Creation**: Creates task JSON in active session
- **Status Setting**: Initial status = "pending"
- **Agent Assignment**: Suggests agent based on task type
- **Session Integration**: Updates workflow session stats
### Context Awareness
- Validates active workflow session exists
- Avoids duplicate task IDs
- Inherits session requirements and scope
- Suggests task relationships
## Usage
### Basic Creation
```bash
/task:create "Build authentication module"
```
Output:
```
Task created: IMPL-1
Title: Build authentication module
Type: feature
Agent: code-developer
Status: pending
```
### Task Types
- `feature` - New functionality (default)
- `bugfix` - Bug fixes
- `refactor` - Code improvements
- `test` - Test implementation
- `docs` - Documentation
## Task Creation Process
1. **Session Validation**: Check active workflow session
2. **ID Generation**: Auto-increment IMPL-N
3. **Context Inheritance**: Load workflow context
4. **Implementation Setup**: Initialize implementation field
5. **Agent Assignment**: Select appropriate agent
6. **File Creation**: Save JSON to .task/ directory
7. **Session Update**: Update workflow stats
**Task Schema**: See @~/.claude/workflows/task-core.md for complete JSON structure
## Implementation Field Setup
### Auto-Population Strategy
- **Detailed info**: Extract from task description and scope
- **Missing info**: Mark `pre_analysis` as a multi-step array, to be filled in by later pre-analysis
- **Basic structure**: Initialize with standard template
### Analysis Triggers
When implementation details incomplete:
```bash
Task requires analysis for implementation details
Suggest running: gemini analysis for file locations and dependencies
```
## File Management
### JSON Task File
- **Location**: `.task/IMPL-[N].json` in active session
- **Content**: Complete task with implementation field
- **Updates**: Session stats only
### Simple Process
1. Validate session and inputs
2. Generate task JSON
3. Update session stats
4. Notify completion
## Context Inheritance
Tasks inherit from:
1. **Active Session** - Requirements and scope from workflow-session.json
2. **Planning Document** - Context from IMPL_PLAN.md
3. **Parent Task** - For subtasks (IMPL-N.M format)
## Agent Assignment
Based on task type and title keywords:
- **Build/Implement** → @code-developer
- **Design/Plan** → @planning-agent
- **Test Generation** → @code-developer (type: "test-gen")
- **Test Execution/Fix** → @test-fix-agent (type: "test-fix")
- **Review/Audit** → @universal-executor (optional, only when explicitly requested)
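As a rough illustration of this mapping, a hypothetical keyword-based helper (the keyword patterns below are assumptions, not the command's exact rules):
```typescript
// Suggest an agent from the task title; falls back to code-developer.
function suggestAgent(title: string): string {
  const t = title.toLowerCase();
  if (/\b(design|plan)\b/.test(t)) return "@planning-agent";
  if (/\b(execute tests|fix tests|validate)\b/.test(t)) return "@test-fix-agent";
  if (/\b(review|audit)\b/.test(t)) return "@universal-executor"; // only when explicitly requested
  return "@code-developer"; // build/implement/test-generation default
}
```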
## Validation Rules
1. **Session Check** - Active workflow session required
2. **Duplicate Check** - Avoid similar task titles
3. **ID Uniqueness** - Auto-increment task IDs
4. **Schema Validation** - Ensure proper JSON structure
## Error Handling
```bash
# No workflow session
No active workflow found
Use: /workflow init "project name"
# Duplicate task
Similar task exists: IMPL-3
Continue anyway? (y/n)
# Max depth exceeded
Cannot create IMPL-1.2.1 (max 2 levels)
Use: IMPL-2 for new main task
```
## Examples
### Feature Task
```bash
/task:create "Implement user authentication"
Created IMPL-1: Implement user authentication
Type: feature
Agent: code-developer
Status: pending
```
### Bug Fix
```bash
/task:create "Fix login validation bug" --type=bugfix
Created IMPL-2: Fix login validation bug
Type: bugfix
Agent: code-developer
Status: pending
```

View File

@@ -1,270 +0,0 @@
---
name: execute
description: Execute task JSON using appropriate agent (@code-developer/@planning-agent/@test-fix-agent) with pre-analysis context loading and status tracking
argument-hint: "task-id"
---
## Command Overview: /task:execute
**Purpose**: Executes tasks using intelligent agent selection, context preparation, and progress tracking.
## Execution Modes
- **auto (Default)**
- Fully autonomous execution with automatic agent selection.
- Provides progress updates at each checkpoint.
- Automatically completes the task when done.
- **guided**
- Executes step-by-step, requiring user confirmation at each checkpoint.
- Allows for dynamic adjustments and manual review during the process.
- **review**
- Optional manual review using `@universal-executor`.
- Used only when explicitly requested by user.
## Agent Selection Logic
The system determines the appropriate agent for a task using the following logic.
```pseudo
FUNCTION select_agent(task, agent_override):
// A manual override always takes precedence.
// Corresponds to the --agent=<agent-type> flag.
IF agent_override IS NOT NULL:
RETURN agent_override
// If no override, select based on keywords in the task title.
ELSE:
CASE task.title:
WHEN CONTAINS "Build API", "Implement":
RETURN "@code-developer"
WHEN CONTAINS "Design schema", "Plan":
RETURN "@planning-agent"
WHEN CONTAINS "Write tests", "Generate tests":
RETURN "@code-developer" // type: test-gen
WHEN CONTAINS "Execute tests", "Fix tests", "Validate":
RETURN "@test-fix-agent" // type: test-fix
WHEN CONTAINS "Review code":
RETURN "@universal-executor" // Optional manual review
DEFAULT:
RETURN "@code-developer" // Default agent
END CASE
END FUNCTION
```
## Core Execution Protocol
`Pre-Execution` -> `Execution` -> `Post-Execution`
### Pre-Execution Protocol
`Validate Task & Dependencies` **->** `Prepare Execution Context` **->** `Coordinate with TodoWrite`
- **Validation**: Checks for the task's JSON file in `.task/` and resolves its dependencies.
- **Context Preparation**: Loads task and workflow context, preparing it for the selected agent.
- **Session Context Injection**: Provides workflow directory paths to agents for TODO_LIST.md and summary management.
- **TodoWrite Coordination**: Generates execution Todos and checkpoints, syncing with `TODO_LIST.md`.
### Post-Execution Protocol
`Update Task Status` **->** `Generate Summary` **->** `Save Artifacts` **->** `Sync All Progress` **->** `Validate File Integrity`
- Updates status in the task's JSON file and `TODO_LIST.md`.
- Creates a summary in `.summaries/`.
- Stores outputs and syncs progress across the entire workflow session.
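A simplified sketch of this post-execution bookkeeping, assuming the `.task/` and `.summaries/` layout described elsewhere in this document; field names such as `last_attempt` are illustrative.
```typescript
import * as fs from "fs";

// Mark a task completed and drop an optional summary into .summaries/.
function finalizeTask(sessionDir: string, taskId: string, summary: string): void {
  const taskPath = `${sessionDir}/.task/${taskId}.json`;
  const task = JSON.parse(fs.readFileSync(taskPath, "utf8"));
  task.status = "completed";
  task.last_attempt = new Date().toISOString();
  fs.writeFileSync(taskPath, JSON.stringify(task, null, 2));

  const summariesDir = `${sessionDir}/.summaries`;
  fs.mkdirSync(summariesDir, { recursive: true });
  fs.writeFileSync(`${summariesDir}/${taskId}-summary.md`, summary);
}
```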
### Task & Subtask Execution Logic
This logic defines how single, multiple, or parent tasks are handled.
```pseudo
FUNCTION execute_task_command(task_id, mode, parallel_flag):
// Handle parent tasks by executing their subtasks.
IF is_parent_task(task_id):
subtasks = get_subtasks(task_id)
EXECUTE_SUBTASK_BATCH(subtasks, mode)
// Handle wildcard execution (e.g., IMPL-001.*)
ELSE IF task_id CONTAINS "*":
subtasks = find_matching_tasks(task_id)
IF parallel_flag IS true:
EXECUTE_IN_PARALLEL(subtasks)
ELSE:
FOR each subtask in subtasks:
EXECUTE_SINGLE_TASK(subtask, mode)
// Default case for a single task ID.
ELSE:
EXECUTE_SINGLE_TASK(task_id, mode)
END FUNCTION
```
### Error Handling & Recovery Logic
```pseudo
FUNCTION pre_execution_check(task):
// Ensure dependencies are met before starting.
IF task.dependencies ARE NOT MET:
LOG_ERROR("Cannot execute " + task.id)
LOG_INFO("Blocked by: " + unmet_dependencies)
HALT_EXECUTION()
END FUNCTION
FUNCTION on_execution_failure(checkpoint):
// Provide user with recovery options upon failure.
LOG_WARNING("Execution failed at checkpoint " + checkpoint)
PRESENT_OPTIONS([
"Retry from checkpoint",
"Retry from beginning",
"Switch to guided mode",
"Abort execution"
])
AWAIT user_input
// System performs the selected action.
END FUNCTION
```
### Simplified Context Structure (JSON)
This is the simplified data structure loaded to provide context for task execution.
```json
{
"task": {
"id": "IMPL-1",
"title": "Build authentication module",
"type": "feature",
"status": "active",
"agent": "code-developer",
"context": {
"requirements": ["JWT authentication", "OAuth2 support"],
"scope": ["src/auth/*", "tests/auth/*"],
"acceptance": ["Module handles JWT tokens", "OAuth2 flow implemented"],
"inherited_from": "WFS-user-auth"
},
"relations": {
"parent": null,
"subtasks": ["IMPL-1.1", "IMPL-1.2"],
"dependencies": ["IMPL-0"]
},
"implementation": {
"files": [
{
"path": "src/auth/login.ts",
"location": {
"function": "authenticateUser",
"lines": "25-65",
"description": "Main authentication logic"
},
"original_code": "// Code snippet extracted via gemini analysis",
"modifications": {
"current_state": "Basic password authentication only",
"proposed_changes": [
"Add JWT token generation",
"Implement OAuth2 callback handling",
"Add multi-factor authentication support"
],
"logic_flow": [
"validateCredentials() ───► checkUserExists()",
"◊─── if password ───► generateJWT() ───► return token",
"◊─── if OAuth ───► validateOAuthCode() ───► exchangeForToken()",
"◊─── if MFA ───► sendMFACode() ───► awaitVerification()"
],
"reason": "Support modern authentication standards and security requirements",
"expected_outcome": "Comprehensive authentication system supporting multiple methods"
}
}
],
"context_notes": {
"dependencies": ["jsonwebtoken", "passport", "speakeasy"],
"affected_modules": ["user-session", "auth-middleware", "api-routes"],
"risks": [
"Breaking changes to existing login endpoints",
"Token storage and rotation complexity",
"OAuth provider configuration dependencies"
],
"performance_considerations": "JWT validation adds ~10ms per request, OAuth callbacks may timeout",
"error_handling": "Ensure sensitive authentication errors don't leak user enumeration data"
},
"pre_analysis": [
{
"action": "analyze patterns",
"template": "~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt",
"method": "gemini"
}
]
}
},
"workflow": {
"session": "WFS-user-auth",
"phase": "IMPLEMENT",
"session_context": {
"workflow_directory": ".workflow/active/WFS-user-auth/",
"todo_list_location": ".workflow/active/WFS-user-auth/TODO_LIST.md",
"summaries_directory": ".workflow/active/WFS-user-auth/.summaries/",
"task_json_location": ".workflow/active/WFS-user-auth/.task/"
}
},
"execution": {
"agent": "code-developer",
"mode": "auto",
"attempts": 0
}
}
```
### Agent-Specific Context
Different agents receive context tailored to their function, including implementation details:
**`@code-developer`**:
- Complete implementation.files array with file paths and locations
- original_code snippets and proposed_changes for precise modifications
- logic_flow diagrams for understanding data flow
- Dependencies and affected modules for integration planning
- Performance and error handling considerations
**`@planning-agent`**:
- High-level requirements, constraints, success criteria
- Implementation risks and mitigation strategies
- Architecture implications from implementation.context_notes
**`@test-fix-agent`**:
- Test files to execute from task.context.focus_paths
- Source files to fix from implementation.files[].path
- Expected behaviors from implementation.modifications.logic_flow
- Error conditions to validate from implementation.context_notes.error_handling
- Performance requirements from implementation.context_notes.performance_considerations
**`@universal-executor`**:
- Used for optional manual reviews when explicitly requested
- Code quality standards and implementation patterns
- Security considerations from implementation.context_notes.risks
- Dependency validation from implementation.context_notes.dependencies
- Architecture compliance checks
### Simplified File Output
- **Task JSON File (`.task/<task-id>.json`)**: Updated with status and last attempt time only.
- **Session File (`workflow-session.json`)**: Updated task stats (completed count).
- **Summary File**: Generated in `.summaries/` upon completion (optional).
### Simplified Summary Template
Optional summary file generated at `.summaries/IMPL-[task-id]-summary.md`.
```markdown
# Task Summary: IMPL-1 Build Authentication Module
## What Was Done
- Created src/auth/login.ts with JWT validation
- Added tests in tests/auth.test.ts
## Execution Results
- **Agent**: code-developer
- **Status**: completed
## Files Modified
- `src/auth/login.ts` (created)
- `tests/auth.test.ts` (created)
```

View File

@@ -1,437 +0,0 @@
---
name: replan
description: Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json
argument-hint: "task-id [\"text\"|file.md] | --batch [verification-report.md]"
allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
---
# Task Replan Command (/task:replan)
> **⚠️ DEPRECATION NOTICE**: This command is maintained for backward compatibility. For new workflows, use `/workflow:replan` which provides:
> - Session-level replanning with comprehensive artifact updates
> - Interactive boundary clarification
> - Updates to IMPL_PLAN.md, TODO_LIST.md, and session metadata
> - Better integration with workflow sessions
>
> **Migration**: Replace `/task:replan IMPL-1 "changes"` with `/workflow:replan IMPL-1 "changes"`
## Overview
Replans individual tasks or batch processes multiple tasks with change tracking and backup management.
**Modes**:
- **Single Task Mode**: Replan one task with specific changes
- **Batch Mode**: Process multiple tasks from action-plan verification report
## Key Features
- **Single/Batch Operations**: Single task or multiple tasks from verification report
- **Multiple Input Sources**: Text, files, or verification report
- **Backup Management**: Automatic backup of previous versions
- **Change Documentation**: Track all modifications
- **Progress Tracking**: TodoWrite integration for batch operations
**CRITICAL**: Validates active session before replanning
## Operation Modes
### Single Task Mode
#### Direct Text (Default)
```bash
/task:replan IMPL-1 "Add OAuth2 authentication support"
```
#### File-based Input
```bash
/task:replan IMPL-1 updated-specs.md
```
Supports: .md, .txt, .json, .yaml
#### Interactive Mode
```bash
/task:replan IMPL-1 --interactive
```
Guided step-by-step modification process with validation
### Batch Mode
#### From Verification Report
```bash
/task:replan --batch ACTION_PLAN_VERIFICATION.md
```
**Workflow**:
1. Parse verification report to extract replan recommendations
2. Create TodoWrite task list for all modifications
3. Process each task sequentially with confirmation
4. Track progress and generate summary report
**Auto-detection**: If input file contains "Action Plan Verification Report" header, automatically enters batch mode
## Replanning Process
### Single Task Process
1. **Load & Validate**: Read task JSON and validate session
2. **Parse Input**: Process changes from input source
3. **Create Backup**: Save previous version to backup folder
4. **Update Task**: Modify JSON structure and relationships
5. **Save Changes**: Write updated task and increment version
6. **Update Session**: Reflect changes in workflow stats
### Batch Process
1. **Parse Verification Report**: Extract all replan recommendations
2. **Initialize TodoWrite**: Create task list for tracking
3. **For Each Task**:
- Mark todo as in_progress
- Load and validate task JSON
- Create backup
- Apply recommended changes
- Save updated task
- Mark todo as completed
4. **Generate Summary**: Report all changes and backup locations
## Backup Management
### Backup Tracking
Tasks maintain backup history:
```json
{
"id": "IMPL-1",
"version": "1.2",
"replan_history": [
{
"version": "1.2",
"reason": "Add OAuth2 support",
"input_source": "direct_text",
"backup_location": ".task/backup/IMPL-1-v1.1.json",
"timestamp": "2025-10-17T10:30:00Z"
}
]
}
```
**Complete schema**: See @~/.claude/workflows/task-core.md
### File Structure
```
.task/
├── IMPL-1.json # Current version
├── backup/
│ ├── IMPL-1-v1.0.json # Original version
│ ├── IMPL-1-v1.1.json # Previous backup
│ └── IMPL-1-v1.2.json # Latest backup
└── [new subtasks as needed]
```
**Backup Naming**: `{task-id}-v{version}.json`
## Implementation Updates
### Change Detection
Tracks modifications to:
- Files in implementation.files array
- Dependencies and affected modules
- Risk assessments and performance notes
- Logic flows and code locations
### Analysis Triggers
May require gemini re-analysis when:
- New files need code extraction
- Function locations change
- Dependencies require re-evaluation
## Document Updates
### Planning Document
May update IMPL_PLAN.md sections when task structure changes significantly
### TODO List Sync
If TODO_LIST.md exists, synchronizes:
- New subtasks (with [ ] checkbox)
- Modified tasks (marked as updated)
- Removed subtasks (deleted from list)
## Change Documentation
### Change Summary
Generates brief change log with:
- Version increment (1.1 → 1.2)
- Input source and reason
- Key modifications made
- Files updated/created
- Backup location
## Session Updates
Updates workflow-session.json with:
- Modified task tracking
- Task count changes (if subtasks added/removed)
- Last modification timestamps
## Rollback Support
```bash
/task:replan IMPL-1 --rollback v1.1
Rollback to version 1.1:
- Restore task from backup/.../IMPL-1-v1.1.json
- Remove new subtasks if any
- Update session stats
# Use AskUserQuestion for confirmation
AskUserQuestion({
questions: [{
question: "Are you sure you want to roll back this task to a previous version?",
header: "Confirm",
options: [
{ label: "Yes, rollback", description: "Restore the task from the selected backup." },
{ label: "No, cancel", description: "Keep the current version of the task." }
],
multiSelect: false
}]
})
User selected: "Yes, rollback"
Task rolled back to version 1.1
```
## Batch Processing with TodoWrite
### Progress Tracking
When processing multiple tasks, automatically creates TodoWrite task list:
```markdown
**Batch Replan Progress**:
- [x] IMPL-002: Add FR-12 draft saving acceptance criteria
- [x] IMPL-003: Add FR-14 history tracking acceptance criteria
- [ ] IMPL-004: Add FR-09 response surface explicit coverage
- [ ] IMPL-008: Add NFR performance validation steps
```
### Batch Report
After completion, generates summary:
```markdown
## Batch Replan Summary
**Total Tasks**: 4
**Successful**: 3
**Failed**: 1
**Skipped**: 0
### Changes Made
- IMPL-002 v1.0 → v1.1: Added FR-12 acceptance criteria
- IMPL-003 v1.0 → v1.1: Added FR-14 acceptance criteria
- IMPL-004 v1.0 → v1.1: Added FR-09 explicit coverage
### Backups Created
- .task/backup/IMPL-002-v1.0.json
- .task/backup/IMPL-003-v1.0.json
- .task/backup/IMPL-004-v1.0.json
### Errors
- IMPL-008: File not found (task may have been renamed)
```
## Examples
### Single Task - Text Input
```bash
/task:replan IMPL-1 "Add OAuth2 authentication support"
Processing changes...
Proposed updates:
+ Add OAuth2 integration
+ Update authentication flow
# Use AskUserQuestion for confirmation
AskUserQuestion({
questions: [{
question: "Do you want to apply these changes to the task?",
header: "Apply",
options: [
{ label: "Yes, apply", description: "Create new version with these changes." },
{ label: "No, cancel", description: "Discard changes and keep current version." }
],
multiSelect: false
}]
})
User selected: "Yes, apply"
Version 1.2 created
Context updated
Backup saved to .task/backup/IMPL-1-v1.1.json
```
### Single Task - File Input
```bash
/task:replan IMPL-2 requirements.md
Loading requirements.md...
Applying specification changes...
Task updated with new requirements
Version 1.1 created
Backup saved to .task/backup/IMPL-2-v1.0.json
```
### Batch Mode - From Verification Report
```bash
/task:replan --batch .workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md
Parsing verification report...
Found 4 tasks requiring replanning:
- IMPL-002: Add FR-12 draft saving acceptance criteria
- IMPL-003: Add FR-14 history tracking acceptance criteria
- IMPL-004: Add FR-09 response surface explicit coverage
- IMPL-008: Add NFR performance validation steps
Creating task tracking list...
Processing IMPL-002...
Backup created: .task/backup/IMPL-002-v1.0.json
Updated to v1.1
Processing IMPL-003...
Backup created: .task/backup/IMPL-003-v1.0.json
Updated to v1.1
Processing IMPL-004...
Backup created: .task/backup/IMPL-004-v1.0.json
Updated to v1.1
Processing IMPL-008...
Backup created: .task/backup/IMPL-008-v1.0.json
Updated to v1.1
Batch replan completed: 4/4 successful
Summary report saved
```
### Batch Mode - Auto-detection
```bash
# If file contains "Action Plan Verification Report", auto-enters batch mode
/task:replan ACTION_PLAN_VERIFICATION.md
Detected verification report format
Entering batch mode...
[same as above]
```
## Error Handling
### Single Task Errors
```bash
# Task not found
Task IMPL-5 not found
Check task ID with /workflow:status
# Task completed
Task IMPL-1 is completed (cannot replan)
Create new task for additional work
# File not found
File requirements.md not found
Check file path
# No input provided
Please specify changes needed
Provide text, file, or verification report
```
### Batch Mode Errors
```bash
# Invalid verification report
File does not contain valid verification report format
Check report structure or use single task mode
# Partial failures
Batch completed with errors: 3/4 successful
Review error details in summary report
# No replan recommendations found
Verification report contains no replan recommendations
Check report content or use /workflow:action-plan-verify first
```
## Batch Mode Integration
### Input Format Expectations
Batch mode parses verification reports looking for:
1. **Required Actions Section**: Commands like `/task:replan IMPL-X "changes"`
2. **Findings Table**: Task IDs with recommendations
3. **Next Actions Section**: Specific replan commands
**Example Patterns**:
```markdown
#### 1. HIGH Priority - Address FR Coverage Gaps
/task:replan IMPL-004 "
Add explicit acceptance criteria:
- FR-09: Response surface 3D visualization
"
#### 2. MEDIUM Priority - Enhance NFR Coverage
/task:replan IMPL-008 "
Add performance testing:
- NFR-01: Load test API endpoints
"
```
### Extraction Logic
1. Scan for `/task:replan` commands in report
2. Extract task ID and change description
3. Group by priority (HIGH, MEDIUM, LOW)
4. Process in priority order with TodoWrite tracking
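A hypothetical sketch of this extraction pass; the regular expressions and priority detection are assumptions about the report format, not a guaranteed parser.
```typescript
interface ReplanItem {
  taskId: string;
  change: string;
  priority: "HIGH" | "MEDIUM" | "LOW";
}

// Pull `/task:replan IMPL-X "..."` commands out of a verification report.
function extractReplanCommands(report: string): ReplanItem[] {
  const items: ReplanItem[] = [];
  let currentPriority: ReplanItem["priority"] = "LOW";
  for (const block of report.split(/\n(?=####)/)) {
    const heading = /####.*\b(HIGH|MEDIUM|LOW)\b/i.exec(block);
    if (heading) currentPriority = heading[1].toUpperCase() as ReplanItem["priority"];
    const re = /\/task:replan\s+(IMPL-[\d.]+)\s+"([\s\S]*?)"/g;
    let m: RegExpExecArray | null;
    while ((m = re.exec(block)) !== null) {
      items.push({ taskId: m[1], change: m[2].trim(), priority: currentPriority });
    }
  }
  // Process HIGH first, then MEDIUM, then LOW.
  const order = { HIGH: 0, MEDIUM: 1, LOW: 2 };
  return items.sort((a, b) => order[a.priority] - order[b.priority]);
}
```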
### Confirmation Behavior
- **Default**: Confirm each task before applying
- **With `--auto-confirm`**: Apply all changes without prompting
```bash
/task:replan --batch report.md --auto-confirm
```
## Implementation Details
### Backup Management
```typescript
// Backup file naming convention
const backupPath = `.task/backup/${taskId}-v${previousVersion}.json`;
// Backup metadata in task JSON
{
"replan_history": [
{
"version": "1.2",
"timestamp": "2025-10-17T10:30:00Z",
"reason": "Add FR-09 explicit coverage",
"input_source": "batch_verification_report",
"backup_location": ".task/backup/IMPL-004-v1.1.json"
}
]
}
```
### TodoWrite Integration
```typescript
// Initialize tracking for batch mode
TodoWrite({
todos: taskList.map(task => ({
content: `${task.id}: ${task.changeDescription}`,
status: "pending",
activeForm: `Replanning ${task.id}`
}))
});
// Update progress during processing
TodoWrite({
todos: updateTaskStatus(taskId, "in_progress")
});
// Mark completed
TodoWrite({
todos: updateTaskStatus(taskId, "completed")
});
```

View File

@@ -1,254 +0,0 @@
---
name: version
description: Display Claude Code version information and check for updates
allowed-tools: Bash(*)
---
# Version Command (/version)
## Purpose
Display local and global installation versions, check for the latest updates from GitHub, and provide upgrade recommendations.
## Execution Flow
1. **Local Version Check**: Read version information from `./.claude/version.json` if it exists.
2. **Global Version Check**: Read version information from `~/.claude/version.json` if it exists.
3. **Fetch Remote Versions**: Use GitHub API to get the latest stable release tag and the latest commit hash from the main branch.
4. **Compare & Suggest**: Compare installed versions with the latest remote versions and provide upgrade suggestions if applicable.
## Step 1: Check Local Version
### Check if local version.json exists
```bash
bash(test -f ./.claude/version.json && echo "found" || echo "not_found")
```
### Read local version (if exists)
```bash
bash(cat ./.claude/version.json)
```
### Extract version (grep/cut, no jq required)
```bash
bash(cat ./.claude/version.json | grep -o '"version": *"[^"]*"' | cut -d'"' -f4)
```
### Extract installation date
```bash
bash(cat ./.claude/version.json | grep -o '"installation_date_utc": *"[^"]*"' | cut -d'"' -f4)
```
**Output Format**:
```
Local Version: 3.2.1
Installed: 2025-10-03T12:00:00Z
```
## Step 2: Check Global Version
### Check if global version.json exists
```bash
bash(test -f ~/.claude/version.json && echo "found" || echo "not_found")
```
### Read global version
```bash
bash(cat ~/.claude/version.json)
```
### Extract version
```bash
bash(cat ~/.claude/version.json | grep -o '"version": *"[^"]*"' | cut -d'"' -f4)
```
### Extract installation date
```bash
bash(cat ~/.claude/version.json | grep -o '"installation_date_utc": *"[^"]*"' | cut -d'"' -f4)
```
**Output Format**:
```
Global Version: 3.2.1
Installed: 2025-10-03T12:00:00Z
```
## Step 3: Fetch Latest Stable Release
### Call GitHub API for latest release (with timeout)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null, timeout: 30000)
```
### Extract tag name (version)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"tag_name": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
```
### Extract release name
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"name": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
```
### Extract published date
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"published_at": *"[^"]*"' | cut -d'"' -f4, timeout: 30000)
```
**Output Format**:
```
Latest Stable: v3.2.2
Release: v3.2.2: Independent Test-Gen Workflow with Cross-Session Context
Published: 2025-10-03T04:10:08Z
```
## Step 4: Fetch Latest Main Branch
### Call GitHub API for latest commit on main (with timeout)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null, timeout: 30000)
```
### Extract commit SHA (short)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep -o '"sha": *"[^"]*"' | head -1 | cut -d'"' -f4 | cut -c1-7, timeout: 30000)
```
### Extract commit message (first line only)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep '"message":' | cut -d'"' -f4 | cut -d'\' -f1, timeout: 30000)
```
### Extract commit date
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep -o '"date": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
```
**Output Format**:
```
Latest Dev: a03415b
Message: feat: Add version tracking and upgrade check system
Date: 2025-10-03T04:46:44Z
```
## Step 5: Compare Versions and Suggest Upgrade
### Normalize versions (remove 'v' prefix)
```bash
bash(echo "v3.2.1" | sed 's/^v//')
```
### Compare two versions
```bash
bash(printf "%s\n%s" "3.2.1" "3.2.2" | sort -V | tail -n 1)
```
### Check if versions are equal
```bash
# Assuming LOCAL and REMOTE hold normalized versions (e.g. LOCAL=3.2.1, REMOTE=3.2.2)
# If equal: up to date; if remote newer: upgrade available; if local newer: development version
bash([ "$LOCAL" = "$REMOTE" ] && echo "Up to date" || { [ "$(printf '%s\n%s' "$LOCAL" "$REMOTE" | sort -V | tail -n 1)" = "$REMOTE" ] && echo "Upgrade available" || echo "Development version"; })
```
**Output Scenarios**:
**Scenario 1: Up to date**
```
You are on the latest stable version (3.2.1)
```
**Scenario 2: Upgrade available**
```
A newer stable version is available: v3.2.2
Your version: 3.2.1
To upgrade:
PowerShell: iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1)
Bash: bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.sh)
```
**Scenario 3: Development version**
```
You are running a development version (3.4.0-dev)
This is newer than the latest stable release (v3.3.0)
```
## Simple Bash Commands
### Basic Operations
```bash
# Check local version file
bash(test -f ./.claude/version.json && cat ./.claude/version.json)
# Check global version file
bash(test -f ~/.claude/version.json && cat ~/.claude/version.json)
# Extract version from JSON
bash(cat version.json | grep -o '"version": *"[^"]*"' | cut -d'"' -f4)
# Extract date from JSON
bash(cat version.json | grep -o '"installation_date_utc": *"[^"]*"' | cut -d'"' -f4)
# Fetch latest release (with timeout)
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null, timeout: 30000)
# Extract tag name
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"tag_name": *"[^"]*"' | cut -d'"' -f4, timeout: 30000)
# Extract release name
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"name": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
# Fetch latest commit (with timeout)
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null, timeout: 30000)
# Extract commit SHA
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep -o '"sha": *"[^"]*"' | head -1 | cut -d'"' -f4 | cut -c1-7, timeout: 30000)
# Extract commit message (first line)
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep '"message":' | cut -d'"' -f4 | cut -d'\' -f1, timeout: 30000)
# Compare versions
bash(printf "%s\n%s" "3.2.1" "3.2.2" | sort -V | tail -n 1)
# Remove 'v' prefix
bash(echo "v3.2.1" | sed 's/^v//')
```
## Error Handling
### No installation found
```
WARNING: Claude Code Workflow not installed
Install using:
PowerShell: iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1)
```
### Network error
```
ERROR: Could not fetch latest version from GitHub
Check your network connection
```
### Invalid version.json
```
ERROR: version.json is invalid or corrupted
```
## Design Notes
- Uses simple, direct bash commands instead of complex functions
- Each step is independent and can be executed separately
- Fallback to grep/sed for JSON parsing (no jq dependency required)
- Network calls use curl with error suppression and 30-second timeout
- Version comparison uses `sort -V` for accurate semantic versioning
- Use `/commits/main` API instead of `/branches/main` for more reliable commit info
- Extract first line of commit message using `cut -d'\' -f1` to handle JSON escape sequences
## API Endpoints
### GitHub API Used
- **Latest Release**: `https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest`
- Fields: `tag_name`, `name`, `published_at`
- **Latest Commit**: `https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main`
- Fields: `sha`, `commit.message`, `commit.author.date`
### Timeout Configuration
All network calls should use `timeout: 30000` (30 seconds) to handle slow connections.

View File

@@ -1,447 +0,0 @@
---
name: action-plan-verify
description: Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation
argument-hint: "[optional: --session session-id]"
allowed-tools: Read(*), TodoWrite(*), Glob(*), Bash(*)
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Goal
Identify inconsistencies, duplications, ambiguities, and underspecified items between action planning artifacts (`IMPL_PLAN.md`, `task.json`) and brainstorming artifacts (`role analysis documents`) before implementation. This command MUST run only after `/workflow:plan` has successfully produced complete `IMPL_PLAN.md` and task JSON files.
## Operating Constraints
**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands).
**Synthesis Authority**: The `role analysis documents` are **authoritative** for requirements and design decisions. Any conflicts between IMPL_PLAN/tasks and synthesis are automatically CRITICAL and require adjustment of the plan/tasks, not reinterpretation of requirements.
## Execution Steps
### 1. Initialize Analysis Context
```bash
# Detect active workflow session
IF --session parameter provided:
session_id = provided session
ELSE:
# Auto-detect active session
active_sessions = bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
IF active_sessions is empty:
ERROR: "No active workflow session found. Use --session <session-id>"
EXIT
ELSE IF active_sessions has multiple entries:
# Use most recently modified session
session_id = bash(ls -td .workflow/active/WFS-*/ 2>/dev/null | head -1 | xargs basename)
ELSE:
session_id = basename(active_sessions[0])
# Derive absolute paths
session_dir = .workflow/active/WFS-{session}
brainstorm_dir = session_dir/.brainstorming
task_dir = session_dir/.task
# Validate required artifacts
# Note: "role analysis documents" refers to [role]/analysis.md files (e.g., product-manager/analysis.md)
SYNTHESIS_DIR = brainstorm_dir # Contains role analysis files: */analysis.md
IMPL_PLAN = session_dir/IMPL_PLAN.md
TASK_FILES = Glob(task_dir/*.json)
# Abort if missing
SYNTHESIS_FILES = Glob(brainstorm_dir/*/analysis.md)
IF SYNTHESIS_FILES.count == 0:
ERROR: "No role analysis documents found in .brainstorming/*/analysis.md. Run /workflow:brainstorm:synthesis first"
EXIT
IF NOT EXISTS(IMPL_PLAN):
ERROR: "IMPL_PLAN.md not found. Run /workflow:plan first"
EXIT
IF TASK_FILES.count == 0:
ERROR: "No task JSON files found. Run /workflow:plan first"
EXIT
```
### 2. Load Artifacts (Progressive Disclosure)
Load only minimal necessary context from each artifact:
**From workflow-session.json** (NEW - PRIMARY REFERENCE):
- Original user prompt/intent (project or description field)
- User's stated goals and objectives
- User's scope definition
**From role analysis documents**:
- Functional Requirements (IDs, descriptions, acceptance criteria)
- Non-Functional Requirements (IDs, targets)
- Business Requirements (IDs, success metrics)
- Key Architecture Decisions
- Risk factors and mitigation strategies
- Implementation Roadmap (high-level phases)
**From IMPL_PLAN.md**:
- Summary and objectives
- Context Analysis
- Implementation Strategy
- Task Breakdown Summary
- Success Criteria
- Brainstorming Artifacts References (if present)
**From task.json files**:
- Task IDs
- Titles and descriptions
- Status
- Dependencies (depends_on, blocks)
- Context (requirements, focus_paths, acceptance, artifacts)
- Flow control (pre_analysis, implementation_approach)
- Meta (complexity, priority)
### 3. Build Semantic Models
Create internal representations (do not include raw artifacts in output):
**Requirements inventory**:
- Each functional/non-functional/business requirement with stable ID
- Requirement text, acceptance criteria, priority
**Architecture decisions inventory**:
- ADRs from synthesis
- Technology choices
- Data model references
**Task coverage mapping**:
- Map each task to one or more requirements (by ID reference or keyword inference)
- Map each requirement to covering tasks
**Dependency graph**:
- Task-to-task dependencies (depends_on, blocks)
- Requirement-level dependencies (from synthesis)
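A minimal sketch of the coverage mapping, assuming each task JSON lists requirement IDs under `context.requirements`; the shapes below are illustrative only.
```typescript
interface TaskJson {
  id: string;
  context?: { requirements?: string[] };
}

// Map requirement IDs to the tasks that reference them, then flag orphans.
function buildCoverage(requirementIds: string[], tasks: TaskJson[]) {
  const coverage = new Map<string, string[]>();
  for (const r of requirementIds) coverage.set(r, []);
  for (const task of tasks) {
    for (const req of task.context?.requirements ?? []) {
      coverage.get(req)?.push(task.id);
    }
  }
  const orphanedRequirements = requirementIds.filter((r) => coverage.get(r)!.length === 0);
  const unmappedTasks = tasks
    .filter((t) => (t.context?.requirements ?? []).length === 0)
    .map((t) => t.id);
  return { coverage, orphanedRequirements, unmappedTasks };
}
```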
### 4. Detection Passes (Token-Efficient Analysis)
Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary.
#### A. User Intent Alignment (NEW - CRITICAL)
- **Goal Alignment**: IMPL_PLAN objectives match user's original intent
- **Scope Drift**: Plan covers user's stated scope without unauthorized expansion
- **Success Criteria Match**: Plan's success criteria reflect user's expectations
- **Intent Conflicts**: Tasks contradicting user's original objectives
#### B. Requirements Coverage Analysis
- **Orphaned Requirements**: Requirements in synthesis with zero associated tasks
- **Unmapped Tasks**: Tasks with no clear requirement linkage
- **NFR Coverage Gaps**: Non-functional requirements (performance, security, scalability) not reflected in tasks
#### C. Consistency Validation
- **Requirement Conflicts**: Tasks contradicting synthesis requirements
- **Architecture Drift**: IMPL_PLAN architecture not matching synthesis ADRs
- **Terminology Drift**: Same concept named differently across IMPL_PLAN and tasks
- **Data Model Inconsistency**: Tasks referencing entities/fields not in synthesis data model
#### D. Dependency Integrity
- **Circular Dependencies**: Task A depends on B, B depends on C, C depends on A
- **Missing Dependencies**: Task requires outputs from another task but no explicit dependency
- **Broken Dependencies**: Task depends on non-existent task ID
- **Logical Ordering Issues**: Implementation tasks before foundational setup without dependency note
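For the dependency checks above, a small sketch of circular and broken dependency detection over `depends_on` edges (the graph shape is assumed):
```typescript
// tasks: map of task id -> list of ids it depends_on
function findDependencyIssues(tasks: Map<string, string[]>) {
  const broken: string[] = [];
  const cycles: string[][] = [];
  const visiting = new Set<string>();
  const done = new Set<string>();

  const visit = (id: string, path: string[]): void => {
    if (done.has(id)) return;
    if (visiting.has(id)) {
      cycles.push([...path.slice(path.indexOf(id)), id]); // cycle detected
      return;
    }
    visiting.add(id);
    for (const dep of tasks.get(id) ?? []) {
      if (!tasks.has(dep)) broken.push(`${id} -> ${dep}`); // depends on non-existent task
      else visit(dep, [...path, id]);
    }
    visiting.delete(id);
    done.add(id);
  };

  for (const id of tasks.keys()) visit(id, []);
  return { cycles, broken };
}
```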
#### E. Synthesis Alignment
- **Priority Conflicts**: High-priority synthesis requirements mapped to low-priority tasks
- **Success Criteria Mismatch**: IMPL_PLAN success criteria not covering synthesis acceptance criteria
- **Risk Mitigation Gaps**: Critical risks in synthesis without corresponding mitigation tasks
#### F. Task Specification Quality
- **Ambiguous Focus Paths**: Tasks with vague or missing focus_paths
- **Underspecified Acceptance**: Tasks without clear acceptance criteria
- **Missing Artifacts References**: Tasks not referencing relevant brainstorming artifacts in context.artifacts
- **Weak Flow Control**: Tasks without clear implementation_approach or pre_analysis steps
- **Missing Target Files**: Tasks without flow_control.target_files specification
#### G. Duplication Detection
- **Overlapping Task Scope**: Multiple tasks with nearly identical descriptions
- **Redundant Requirements Coverage**: Same requirement covered by multiple tasks without clear partitioning
#### H. Feasibility Assessment
- **Complexity Misalignment**: Task marked "simple" but requires multiple file modifications
- **Resource Conflicts**: Parallel tasks requiring same resources/files
- **Skill Gap Risks**: Tasks requiring skills not in team capability assessment (from synthesis)
### 5. Severity Assignment
Use this heuristic to prioritize findings:
- **CRITICAL**:
- Violates user's original intent (goal misalignment, scope drift)
- Violates synthesis authority (requirement conflict)
- Core requirement with zero coverage
- Circular dependencies
- Broken dependencies
- **HIGH**:
- NFR coverage gaps
- Priority conflicts
- Missing risk mitigation tasks
- Ambiguous acceptance criteria
- **MEDIUM**:
- Terminology drift
- Missing artifacts references
- Weak flow control
- Logical ordering issues
- **LOW**:
- Style/wording improvements
- Minor redundancy not affecting execution
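A compact sketch of applying this heuristic to tagged findings; the category strings are assumptions that mirror the detection passes above.
```typescript
type Severity = "CRITICAL" | "HIGH" | "MEDIUM" | "LOW";

// Rough mapping from finding category to severity, mirroring the heuristic above.
function assignSeverity(category: string): Severity {
  const critical = ["intent-violation", "requirement-conflict", "zero-coverage", "circular-dependency", "broken-dependency"];
  const high = ["nfr-gap", "priority-conflict", "missing-risk-mitigation", "ambiguous-acceptance"];
  const medium = ["terminology-drift", "missing-artifacts", "weak-flow-control", "ordering-issue"];
  if (critical.includes(category)) return "CRITICAL";
  if (high.includes(category)) return "HIGH";
  if (medium.includes(category)) return "MEDIUM";
  return "LOW";
}

// Overall recommendation follows the decision matrix in the report template below.
function recommend(counts: Record<Severity, number>): string {
  if (counts.CRITICAL > 0) return "BLOCK_EXECUTION";
  if (counts.HIGH > 0) return "PROCEED_WITH_FIXES";
  if (counts.MEDIUM > 0) return "PROCEED_WITH_CAUTION";
  return "PROCEED";
}
```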
### 6. Produce Compact Analysis Report
**Report Generation**: Generate report content and save to file.
Output a Markdown report with the following structure:
```markdown
## Action Plan Verification Report
**Session**: WFS-{session-id}
**Generated**: {timestamp}
**Artifacts Analyzed**: role analysis documents, IMPL_PLAN.md, {N} task files
---
### Executive Summary
- **Overall Risk Level**: CRITICAL | HIGH | MEDIUM | LOW
- **Recommendation**: (See decision matrix below)
- BLOCK_EXECUTION: Critical issues exist (must fix before proceeding)
- PROCEED_WITH_FIXES: High issues exist, no critical (fix recommended before execution)
- PROCEED_WITH_CAUTION: Medium issues only (proceed with awareness)
- PROCEED: Low issues only or no issues (safe to execute)
- **Critical Issues**: {count}
- **High Issues**: {count}
- **Medium Issues**: {count}
- **Low Issues**: {count}
---
### Findings Summary
| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| C1 | Coverage | CRITICAL | synthesis:FR-03 | Requirement "User auth" has zero task coverage | Add authentication implementation task |
| H1 | Consistency | HIGH | IMPL-1.2 vs synthesis:ADR-02 | Task uses REST while synthesis specifies GraphQL | Align task with ADR-02 decision |
| M1 | Specification | MEDIUM | IMPL-2.1 | Missing context.artifacts reference | Add @synthesis reference |
| L1 | Duplication | LOW | IMPL-3.1, IMPL-3.2 | Similar scope | Consider merging |
(Add one row per finding; generate stable IDs prefixed by severity initial.)
---
### Requirements Coverage Analysis
| Requirement ID | Requirement Summary | Has Task? | Task IDs | Priority Match | Notes |
|----------------|---------------------|-----------|----------|----------------|-------|
| FR-01 | User authentication | Yes | IMPL-1.1, IMPL-1.2 | Match | Complete |
| FR-02 | Data export | Yes | IMPL-2.3 | Mismatch | High req → Med priority task |
| FR-03 | Profile management | No | - | - | **CRITICAL: Zero coverage** |
| NFR-01 | Response time <200ms | No | - | - | **HIGH: No performance tasks** |
**Coverage Metrics**:
- Functional Requirements: 85% (17/20 covered)
- Non-Functional Requirements: 40% (2/5 covered)
- Business Requirements: 100% (5/5 covered)
---
### Unmapped Tasks
| Task ID | Title | Issue | Recommendation |
|---------|-------|-------|----------------|
| IMPL-4.5 | Refactor utils | No requirement linkage | Link to technical debt or remove |
---
### Dependency Graph Issues
**Circular Dependencies**: None detected
**Broken Dependencies**:
- IMPL-2.3 depends on "IMPL-2.4" (non-existent)
**Logical Ordering Issues**:
- IMPL-5.1 (integration test) has no dependency on IMPL-1.* (implementation tasks)
---
### Synthesis Alignment Issues
| Issue Type | Synthesis Reference | IMPL_PLAN/Task | Impact | Recommendation |
|------------|---------------------|----------------|--------|----------------|
| Architecture Conflict | synthesis:ADR-01 (JWT auth) | IMPL_PLAN uses session cookies | HIGH | Update IMPL_PLAN to use JWT |
| Priority Mismatch | synthesis:FR-02 (High) | IMPL-2.3 (Medium) | MEDIUM | Elevate task priority |
| Missing Risk Mitigation | synthesis:Risk-03 (API rate limits) | No mitigation tasks | HIGH | Add rate limiting implementation task |
---
### Task Specification Quality Issues
**Missing Artifacts References**: 12 tasks lack context.artifacts
**Weak Flow Control**: 5 tasks lack implementation_approach
**Missing Target Files**: 8 tasks lack flow_control.target_files
**Sample Issues**:
- IMPL-1.2: No context.artifacts reference to synthesis
- IMPL-3.1: Missing flow_control.target_files specification
- IMPL-4.2: Vague focus_paths ["src/"] - needs refinement
---
### Feasibility Concerns
| Concern | Tasks Affected | Issue | Recommendation |
|---------|----------------|-------|----------------|
| Skill Gap | IMPL-6.1, IMPL-6.2 | Requires Kubernetes expertise not in team | Add training task or external consultant |
| Resource Conflict | IMPL-3.1, IMPL-3.2 | Both modify src/auth/service.ts in parallel | Add dependency or serialize |
---
### Metrics
- **Total Requirements**: 30 (20 functional, 5 non-functional, 5 business)
- **Total Tasks**: 25
- **Overall Coverage**: 77% (23/30 requirements with ≥1 task)
- **Critical Issues**: 2
- **High Issues**: 5
- **Medium Issues**: 8
- **Low Issues**: 3
---
### Next Actions
#### Action Recommendations
**Recommendation Decision Matrix**:
| Condition | Recommendation | Action |
|-----------|----------------|--------|
| Critical > 0 | BLOCK_EXECUTION | Must resolve all critical issues before proceeding |
| Critical = 0, High > 0 | PROCEED_WITH_FIXES | Fix high-priority issues before execution |
| Critical = 0, High = 0, Medium > 0 | PROCEED_WITH_CAUTION | Proceed with awareness of medium issues |
| Only Low or None | PROCEED | Safe to execute workflow |
**If CRITICAL Issues Exist** (BLOCK_EXECUTION):
- Resolve all critical issues before proceeding
- Use TodoWrite to track required fixes
- Fix broken dependencies and circular references first
**If HIGH Issues Exist** (PROCEED_WITH_FIXES):
- Fix high-priority issues before execution
- Use TodoWrite to systematically track and complete improvements
**If Only MEDIUM/LOW Issues** (PROCEED_WITH_CAUTION / PROCEED):
- Can proceed with execution
- Address issues during or after implementation
#### TodoWrite-Based Remediation Workflow
**Report Location**: `.workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md`
**Recommended Workflow**:
1. **Create TodoWrite Task List**: Extract all findings from report
2. **Process by Priority**: CRITICAL → HIGH → MEDIUM → LOW
3. **Complete Each Fix**: Mark tasks as in_progress/completed as you work
4. **Validate Changes**: Verify each modification against requirements
**TodoWrite Task Structure Example**:
```markdown
Priority Order:
1. Fix coverage gaps (CRITICAL)
2. Resolve consistency conflicts (CRITICAL/HIGH)
3. Add missing specifications (MEDIUM)
4. Improve task quality (LOW)
```
**Notes**:
- TodoWrite provides real-time progress tracking
- Each finding becomes a trackable todo item
- User can monitor progress throughout remediation
- Architecture drift in IMPL_PLAN requires manual editing
```
### 7. Save Report and Execute TodoWrite-Based Remediation
**Step 7.1: Save Analysis Report**:
```bash
report_path = ".workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
Write(report_path, full_report_content)
```
**Step 7.2: Display Report Summary to User**:
- Show executive summary with counts
- Display recommendation (BLOCK/PROCEED_WITH_FIXES/PROCEED_WITH_CAUTION/PROCEED)
- List critical and high issues if any
**Step 7.3: After Report Generation**:
1. **Extract Findings**: Parse all issues by severity
2. **Create TodoWrite Task List**: Convert findings to actionable todos
3. **Execute Fixes**: Process each todo systematically
4. **Update Task Files**: Apply modifications directly to task JSON files
5. **Update IMPL_PLAN**: Apply strategic changes if needed
At end of report, provide remediation guidance:
```markdown
### 🔧 Remediation Workflow
**Recommended Approach**:
1. **Initialize TodoWrite**: Create comprehensive task list from all findings
2. **Process by Severity**: Start with CRITICAL, then HIGH, MEDIUM, LOW
3. **Apply Fixes Directly**: Modify task.json files and IMPL_PLAN.md as needed
4. **Track Progress**: Mark todos as completed after each fix
**TodoWrite Execution Pattern**:
```bash
# Step 1: Create task list from verification report
TodoWrite([
{ content: "Fix FR-03 coverage gap - add authentication task", status: "pending", activeForm: "Fixing FR-03 coverage gap" },
{ content: "Fix IMPL-1.2 consistency - align with ADR-02", status: "pending", activeForm: "Fixing IMPL-1.2 consistency" },
{ content: "Add context.artifacts to IMPL-1.2", status: "pending", activeForm: "Adding context.artifacts to IMPL-1.2" },
# ... additional todos for each finding
])
# Step 2: Process each todo systematically
# Mark as in_progress when starting
# Apply fix using Read/Edit tools
# Mark as completed when done
# Move to next priority item
```
**File Modification Workflow**:
```bash
# For task JSON modifications:
1. Read(.workflow/active/WFS-{session}/.task/IMPL-X.Y.json)
2. Edit() to apply fixes
3. Mark todo as completed
# For IMPL_PLAN modifications:
1. Read(.workflow/active/WFS-{session}/IMPL_PLAN.md)
2. Edit() to apply strategic changes
3. Mark todo as completed
```
**Note**: All fixes execute immediately after user confirmation without additional commands.

View File

@@ -0,0 +1,654 @@
---
name: analyze-with-file
description: Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding
argument-hint: "[-y|--yes] [-c|--continue] \"topic or question\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm exploration decisions, use recommended analysis angles.
# Workflow Analyze Command
## Quick Start
```bash
# Basic usage
/workflow:analyze-with-file "How to optimize this project's authentication architecture"
# With options
/workflow:analyze-with-file --continue "authentication architecture" # Continue existing session
/workflow:analyze-with-file -y "performance bottleneck analysis" # Auto mode
```
**Context Source**: cli-explore-agent + Gemini/Codex analysis
**Output Directory**: `.workflow/.analysis/{session-id}/`
**Core Innovation**: Documented discussion timeline with evolving understanding
## Output Artifacts
### Phase 1: Topic Understanding
| Artifact | Description |
|----------|-------------|
| `discussion.md` | Evolution of understanding & discussions (initialized) |
| Session variables | Dimensions, focus areas, analysis depth |
### Phase 2: CLI Exploration
| Artifact | Description |
|----------|-------------|
| `exploration-codebase.json` | Single codebase context from cli-explore-agent |
| `explorations/*.json` | Multi-perspective codebase explorations (parallel, up to 4) |
| `explorations.json` | Single perspective aggregated findings |
| `perspectives.json` | Multi-perspective findings (up to 4 perspectives) with synthesis |
| Updated `discussion.md` | Round 1 with exploration results |
### Phase 3: Interactive Discussion
| Artifact | Description |
|----------|-------------|
| Updated `discussion.md` | Round 2-N with user feedback and insights |
| Corrected assumptions | Tracked in discussion timeline |
### Phase 4: Synthesis & Conclusion
| Artifact | Description |
|----------|-------------|
| `conclusions.json` | Final synthesis with recommendations |
| Final `discussion.md` | ⭐ Complete analysis with conclusions |
## Overview
Interactive collaborative analysis workflow with **documented discussion process**. Records understanding evolution, facilitates multi-round Q&A, and uses CLI tools for deep exploration.
**Core workflow**: Topic → Explore → Discuss → Document → Refine → Conclude
```
┌─────────────────────────────────────────────────────────────────────────┐
│ INTERACTIVE ANALYSIS WORKFLOW │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Phase 1: Topic Understanding │
│ ├─ Parse topic/question │
│ ├─ Identify analysis dimensions (architecture, performance, etc.) │
│ ├─ Initial scoping with user │
│ └─ Initialize discussion.md │
│ │
│ Phase 2: CLI Exploration │
│ ├─ Codebase Exploration (cli-explore-agent, supports parallel ≤4) │
│ ├─ Multi-Perspective Analysis (AFTER exploration) │
│ │ ├─ Single: Comprehensive analysis │
│ │ └─ Multi (≤4): Parallel perspectives with synthesis │
│ ├─ Aggregate findings │
│ └─ Update discussion.md with Round 1 │
│ │
│ Phase 3: Interactive Discussion (Multi-Round) │
│ ├─ Present exploration findings │
│ ├─ Facilitate Q&A with user │
│ ├─ Capture user insights and corrections │
│ ├─ Actions: Deepen | Adjust direction | Answer questions │
│ ├─ Update discussion.md with each round │
│ └─ Repeat until clarity achieved (max 5 rounds) │
│ │
│ Phase 4: Synthesis & Conclusion │
│ ├─ Consolidate all insights │
│ ├─ Generate conclusions with recommendations │
│ ├─ Update discussion.md with final synthesis │
│ └─ Offer follow-up options (issue/task/report) │
│ │
└─────────────────────────────────────────────────────────────────────────┘
```
## Output Structure
```
.workflow/.analysis/ANL-{slug}-{date}/
├── discussion.md # ⭐ Evolution of understanding & discussions
├── exploration-codebase.json # Phase 2: Single codebase context
├── explorations/ # Phase 2: Multi-perspective codebase explorations (if selected)
│ ├── technical.json
│ └── architectural.json
├── explorations.json # Phase 2: Single perspective findings
├── perspectives.json # Phase 2: Multi-perspective findings (if selected)
└── conclusions.json # Phase 4: Final synthesis
```
## Implementation
### Session Initialization
**Objective**: Create session context and directory structure for analysis.
**Required Actions**:
1. Extract topic/question from `$ARGUMENTS`
2. Generate session ID: `ANL-{slug}-{date}`
- slug: lowercase, alphanumeric + Chinese, max 40 chars
- date: YYYY-MM-DD (UTC+8)
3. Define session folder: `.workflow/.analysis/{session-id}`
4. Parse command options:
- `-c` or `--continue` for session continuation
- `-y` or `--yes` for auto-approval mode
5. Auto-detect mode: If session folder + discussion.md exist → continue mode
6. Create directory structure: `{session-folder}/`
**Session Variables**:
- `sessionId`: Unique session identifier
- `sessionFolder`: Base directory for all artifacts
- `autoMode`: Boolean for auto-confirmation
- `mode`: new | continue
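A hypothetical sketch of the session-ID generation described in step 2, assuming UTC+8 for the date component; the slug regex approximates "lowercase, alphanumeric + Chinese".
```typescript
// Build an analysis session id like ANL-{slug}-{YYYY-MM-DD}.
function analysisSessionId(topic: string, now: Date = new Date()): string {
  const slug = topic
    .toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fff]+/g, "-") // keep alphanumerics and CJK, collapse the rest
    .replace(/^-+|-+$/g, "")
    .slice(0, 40); // max 40 chars
  const utc8 = new Date(now.getTime() + 8 * 60 * 60 * 1000);
  const date = utc8.toISOString().slice(0, 10); // YYYY-MM-DD in UTC+8
  return `ANL-${slug}-${date}`;
}

// e.g. analysisSessionId("Authentication architecture optimization")
//   -> "ANL-authentication-architecture-optimization-2026-02-04"
```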
### Phase 1: Topic Understanding
**Objective**: Analyze topic, identify dimensions, gather user input, initialize discussion.md.
**Prerequisites**:
- Session initialized with valid sessionId and sessionFolder
- Topic/question available from $ARGUMENTS
**Workflow Steps**:
1. **Parse Topic & Identify Dimensions**
- Match topic keywords against ANALYSIS_DIMENSIONS
- Identify relevant dimensions: architecture, implementation, performance, security, concept, comparison, decision
- Default to "general" if no match
2. **Initial Scoping** (if new session + not auto mode)
- **Focus**: Multi-select from directions generated by detected dimensions (see Dimension-Direction Mapping)
- **Perspectives**: Multi-select up to 4 analysis perspectives (see Analysis Perspectives), default: single comprehensive view
- **Depth**: Single-select from Quick Overview (10-15min) / Standard Analysis (30-60min) / Deep Dive (1-2hr)
3. **Initialize discussion.md**
- Create discussion.md with session metadata
- Add user context: focus areas, analysis depth
- Add initial understanding: dimensions, scope, key questions
- Create empty sections for discussion timeline
**Success Criteria**:
- Session folder created with discussion.md initialized
- Analysis dimensions identified
- User preferences captured (focus, depth)
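ANALYSIS_DIMENSIONS is not defined in this document; as an assumption, a keyword table like the following could back the dimension matching in step 1:
```typescript
// Hypothetical keyword table; the real ANALYSIS_DIMENSIONS mapping may differ.
const ANALYSIS_DIMENSIONS: Record<string, string[]> = {
  architecture: ["architecture", "design", "structure", "module"],
  performance: ["performance", "latency", "bottleneck", "slow"],
  security: ["security", "auth", "vulnerability", "permission"],
  implementation: ["implement", "refactor", "code", "bug"],
};

// Return matched dimensions, falling back to "general".
function detectDimensions(topic: string): string[] {
  const t = topic.toLowerCase();
  const hits = Object.entries(ANALYSIS_DIMENSIONS)
    .filter(([, keywords]) => keywords.some((k) => t.includes(k)))
    .map(([dim]) => dim);
  return hits.length > 0 ? hits : ["general"];
}
```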
### Phase 2: CLI Exploration
**Objective**: Gather codebase context, then execute deep analysis via CLI tools.
**Prerequisites**:
- Phase 1 completed successfully
- discussion.md initialized
- Dimensions identified
**Workflow Steps** (⚠️ Codebase exploration FIRST):
1. **Codebase Exploration via cli-explore-agent** (supports parallel up to 4)
- Agent type: `cli-explore-agent`
- Execution mode: parallel if multi-perspective selected, otherwise single (run_in_background: false for sequential, true for parallel)
- **Single exploration**: General codebase analysis
- **Multi-perspective**: Parallel explorations per perspective focus (max 4, each with specific angle)
- **Common tasks**: Run `ccw tool exec get_modules_by_depth '{}'`, execute searches based on topic keywords, read `.workflow/project-tech.json`
- **Output**: `{sessionFolder}/exploration-codebase.json` (single) or `{sessionFolder}/explorations/{perspective}.json` (multi)
- **Purpose**: Enrich CLI prompts with codebase context for each perspective
**Single Exploration Example**:
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Explore codebase: ${topicSlug}`,
prompt: `
## Analysis Context
Topic: ${topic_or_question}
Dimensions: ${dimensions.join(', ')}
Session: ${sessionFolder}
## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Execute relevant searches based on topic keywords
3. Read: .workflow/project-tech.json (if exists)
## Exploration Focus
${dimensions.map(d => `- ${d}: Identify relevant code patterns and structures`).join('\n')}
## Output
Write findings to: ${sessionFolder}/exploration-codebase.json
Schema: {relevant_files, patterns, key_findings, questions_for_user, _metadata}
`
})
```
**Multi-Perspective Parallel Example** (up to 4 agents):
```javascript
// Launch parallel explorations for each selected perspective
selectedPerspectives.forEach(perspective => {
Task({
subagent_type: "cli-explore-agent",
run_in_background: true, // Parallel execution across perspectives (max 4)
description: `Explore ${perspective.name}: ${topicSlug}`,
prompt: `
## Analysis Context
Topic: ${topic_or_question}
Perspective: ${perspective.name} - ${perspective.focus}
Session: ${sessionFolder}
## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Execute searches focused on ${perspective.focus}
3. Read: .workflow/project-tech.json (if exists)
## Exploration Focus (${perspective.name} angle)
${perspective.exploration_tasks.map(t => `- ${t}`).join('\n')}
## Output
Write findings to: ${sessionFolder}/explorations/${perspective.name}.json
Schema: {relevant_files, patterns, key_findings, perspective_insights, _metadata}
`
})
})
```
2. **Multi-Perspective CLI Analysis** (⚠️ AFTER exploration)
- If user selected multiple perspectives (≤4): Launch CLI calls in parallel
- If single/default perspective: Launch single comprehensive CLI analysis
- **Shared context**: Include exploration-codebase.json findings in all prompts
- **Execution**: Bash with run_in_background: true, wait for all results
- **Output**: perspectives.json with analysis from each perspective
**Single Perspective Example**:
```javascript
Bash({
command: `ccw cli -p "
PURPOSE: Analyze topic '${topic_or_question}' from ${dimensions.join(', ')} perspectives
Success: Actionable insights with clear reasoning
PRIOR EXPLORATION CONTEXT:
- Key files: ${explorationResults.relevant_files.slice(0,5).map(f => f.path).join(', ')}
- Patterns found: ${explorationResults.patterns.slice(0,3).join(', ')}
- Key findings: ${explorationResults.key_findings.slice(0,3).join(', ')}
TASK:
• Build on exploration findings above
• Analyze common patterns and anti-patterns
• Highlight potential issues or opportunities
• Generate discussion points for user clarification
MODE: analysis
CONTEXT: @**/* | Topic: ${topic_or_question}
EXPECTED: Structured analysis with clear sections, specific insights tied to evidence, questions to deepen understanding, recommendations with rationale
CONSTRAINTS: Focus on ${dimensions.join(', ')}
" --tool gemini --mode analysis`,
run_in_background: true
})
```
**Multi-Perspective Example** (parallel, up to 4):
```javascript
// Build shared context once
const explorationContext = `
PRIOR EXPLORATION CONTEXT:
- Key files: ${explorationResults.relevant_files.slice(0,5).map(f => f.path).join(', ')}
- Patterns found: ${explorationResults.patterns.slice(0,3).join(', ')}
- Key findings: ${explorationResults.key_findings.slice(0,3).join(', ')}`
// Launch parallel CLI calls based on selected perspectives (max 4)
selectedPerspectives.forEach(perspective => {
Bash({
command: `ccw cli -p "
PURPOSE: ${perspective.purpose} for '${topic_or_question}'
Success: ${perspective.success_criteria}
${explorationContext}
TASK:
${perspective.tasks.map(t => `${t}`).join('\n')}
MODE: analysis
CONTEXT: @**/* | Topic: ${topic_or_question}
EXPECTED: ${perspective.expected_output}
CONSTRAINTS: ${perspective.constraints}
" --tool ${perspective.tool} --mode analysis`,
run_in_background: true
})
})
// ⚠️ STOP POINT: Wait for hook callback to receive all results before continuing
```
3. **Aggregate Findings**
- Consolidate all codebase explorations (exploration-codebase.json or explorations/*.json) and CLI perspective findings
- If multi-perspective: Extract synthesis from both explorations and analyses (convergent themes, conflicting views, unique contributions)
- Extract aggregated findings, discussion points, open questions across all sources
- Write to explorations.json (single) or perspectives.json (multi)
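**Aggregation Sketch** (illustrative pseudo-JS in the style of the examples above; assumes Read/Write tool calls and the JSON shapes listed below):
```javascript
// Single-perspective case shown; multi-perspective would read explorations/*.json instead
const exploration = JSON.parse(Read({ file_path: `${sessionFolder}/exploration-codebase.json` }))
const cliAnalysis = cliResults  // collected from the hook callback after the CLI calls above

const aggregated = {
  session_id: sessionId,
  timestamp: new Date().toISOString(),
  topic: topic_or_question,
  dimensions,
  sources: [
    { type: "codebase", file: "exploration-codebase.json" },
    { type: "cli-analysis", summary: cliAnalysis.summary }
  ],
  key_findings: [...exploration.key_findings, ...cliAnalysis.key_findings],
  discussion_points: cliAnalysis.discussion_points,
  open_questions: [...exploration.questions_for_user, ...cliAnalysis.open_questions]
}
Write({ file_path: `${sessionFolder}/explorations.json`, content: JSON.stringify(aggregated, null, 2) })
```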
4. **Update discussion.md**
- Append Round 1 section with exploration results
- Single perspective: Include sources analyzed, key findings, discussion points, open questions
- Multi-perspective: Include per-perspective findings + synthesis section
**explorations.json Schema** (single perspective):
- `session_id`: Session identifier
- `timestamp`: Exploration completion time
- `topic`: Original topic/question
- `dimensions[]`: Analysis dimensions
- `sources[]`: {type, file/summary}
- `key_findings[]`: Main insights
- `discussion_points[]`: Questions for user
- `open_questions[]`: Unresolved questions
**perspectives.json Schema** (multi-perspective):
- `session_id`: Session identifier
- `timestamp`: Exploration completion time
- `topic`: Original topic/question
- `dimensions[]`: Analysis dimensions
- `perspectives[]`: [{name, tool, findings, insights, questions}]
- `synthesis`: {convergent_themes, conflicting_views, unique_contributions}
- `aggregated_findings[]`: Main insights across perspectives
- `discussion_points[]`: Questions for user
- `open_questions[]`: Unresolved questions
**Success Criteria**:
- exploration-codebase.json (single) or explorations/*.json (multi) created with codebase context
- explorations.json (single) or perspectives.json (multi) created with findings
- discussion.md updated with Round 1 results
- All agents and CLI calls completed successfully
### Phase 3: Interactive Discussion
**Objective**: Iteratively refine understanding through user-guided discussion cycles.
**Prerequisites**:
- Phase 2 completed successfully
- explorations.json contains initial findings
- discussion.md has Round 1 results
**Guideline**: For complex tasks (code analysis, implementation, refactoring), delegate to agents via Task tool (cli-explore-agent, code-developer, universal-executor) or CLI calls (ccw cli). Avoid direct analysis/execution in main process.
**Workflow Steps**:
1. **Present Findings**
- Display current findings from explorations.json
- Show key points for user input
2. **Gather User Feedback** (AskUserQuestion)
- **Question**: Feedback on current analysis
- **Options** (single-select):
- **同意,继续深入** (Agree, deepen): Analysis direction correct, deepen exploration
- **需要调整方向** (Adjust direction): Different understanding or focus
- **分析完成** (Analysis complete): Sufficient information obtained
- **有具体问题** (Specific questions): Specific questions to ask
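**AskUserQuestion Sketch** (the parameter shape shown here is an assumption for illustration, not a confirmed API):
```javascript
AskUserQuestion({
  questions: [{
    question: "Feedback on the current analysis?",
    header: "Discussion",
    multiSelect: false,
    options: [
      { label: "同意,继续深入", description: "Analysis direction correct, deepen exploration" },
      { label: "需要调整方向", description: "Different understanding or focus" },
      { label: "分析完成", description: "Sufficient information obtained" },
      { label: "有具体问题", description: "Specific questions to ask" }
    ]
  }]
})
```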
3. **Process User Response**
**Agree, Deepen**:
- Continue analysis in current direction
- Use CLI for deeper exploration
**Adjust Direction**:
- AskUserQuestion for adjusted focus (code details / architecture / best practices)
- Launch new CLI exploration with adjusted scope
**Specific Questions**:
- Capture user questions
- Use CLI or direct analysis to answer
- Document Q&A in discussion.md
**Complete**:
- Exit discussion loop, proceed to Phase 4
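**Deepen Action Example** (sketch only; the prompt wording is illustrative and reuses the Phase 2 `ccw cli` pattern):
```javascript
Bash({
  command: `ccw cli -p "
PURPOSE: Deepen analysis of '${topic_or_question}' in the current direction
Success: New evidence and concrete detail beyond the previous round's findings
CURRENT UNDERSTANDING:
${currentFindings.key_findings.slice(0, 5).map(f => `- ${f}`).join('\n')}
TASK:
• Drill into the files and patterns already identified
• Verify or refute open questions recorded in discussion.md
• Surface implementation details relevant to the user's focus
MODE: analysis
CONTEXT: @**/* | Topic: ${topic_or_question}
EXPECTED: Deeper findings tied to specific files, answers to open questions, remaining unknowns
" --tool gemini --mode analysis`,
  run_in_background: true
})
```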
4. **Update discussion.md**
- Append Round N section with:
- User input summary
- Direction adjustment (if any)
- User questions & answers (if any)
- Updated understanding
- Corrected assumptions
- New insights
5. **Repeat or Converge**
- Continue loop (max 5 rounds) or exit to Phase 4
**Discussion Actions**:
| User Choice | Action | Tool | Description |
|-------------|--------|------|-------------|
| Deepen | Continue current direction | Gemini CLI | Deeper analysis in same focus |
| Adjust | Change analysis angle | Selected CLI | New exploration with adjusted scope |
| Questions | Answer specific questions | CLI or analysis | Address user inquiries |
| Complete | Exit discussion loop | - | Proceed to synthesis |
**Success Criteria**:
- User feedback processed for each round
- discussion.md updated with all discussion rounds
- Assumptions corrected and documented
- Exit condition reached (user selects "完成" / complete, or max rounds reached)
### Phase 4: Synthesis & Conclusion
**Objective**: Consolidate insights, generate conclusions, offer next steps.
**Prerequisites**:
- Phase 3 completed successfully
- Multiple rounds of discussion documented
- User ready to conclude
**Workflow Steps**:
1. **Consolidate Insights**
- Extract all findings from discussion timeline
- **Key conclusions**: Main points with evidence and confidence levels (high/medium/low)
- **Recommendations**: Action items with rationale and priority (high/medium/low)
- **Open questions**: Remaining unresolved questions
- **Follow-up suggestions**: Issue/task creation suggestions
- Write to conclusions.json
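**Consolidation Sketch** (field names follow the conclusions.json schema below; the sample entries are placeholders only):
```javascript
const conclusions = {
  session_id: sessionId,
  topic: topic_or_question,
  completed: new Date().toISOString(),
  total_rounds: currentRound,
  summary: executiveSummary,
  key_conclusions: [
    { point: "Rate limiting lives at the API gateway", evidence: "discussion.md Round 2", confidence: "high" }
  ],
  recommendations: [
    { action: "Document the gateway rate-limit policy", rationale: "Avoid duplicate app-level limits", priority: "medium" }
  ],
  open_questions: remainingQuestions,
  follow_up_suggestions: [{ type: "issue", summary: "Clarify session storage strategy" }]
}
Write({ file_path: `${sessionFolder}/conclusions.json`, content: JSON.stringify(conclusions, null, 2) })
```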
2. **Final discussion.md Update**
- Append conclusions section:
- **Summary**: High-level overview
- **Key Conclusions**: Ranked with evidence and confidence
- **Recommendations**: Prioritized action items
- **Remaining Questions**: Unresolved items
- Update "Current Understanding (Final)":
- **What We Established**: Confirmed points
- **What Was Clarified/Corrected**: Important corrections
- **Key Insights**: Valuable learnings
- Add session statistics: rounds, duration, sources, artifacts
3. **Post-Completion Options** (AskUserQuestion)
- **创建Issue** (Create issue): Launch issue:new with conclusions
- **生成任务** (Generate tasks): Launch workflow:lite-plan for implementation
- **导出报告** (Export report): Generate standalone analysis report
- **完成** (Done): No further action
**conclusions.json Schema**:
- `session_id`: Session identifier
- `topic`: Original topic/question
- `completed`: Completion timestamp
- `total_rounds`: Number of discussion rounds
- `summary`: Executive summary
- `key_conclusions[]`: {point, evidence, confidence}
- `recommendations[]`: {action, rationale, priority}
- `open_questions[]`: Unresolved questions
- `follow_up_suggestions[]`: {type, summary}
**Success Criteria**:
- conclusions.json created with final synthesis
- discussion.md finalized with conclusions
- User offered next step options
- Session complete
## Configuration
### Analysis Perspectives
Multi-perspective parallel exploration is optional (a single perspective is the default; up to 4 may be selected):
| Perspective | Tool | Focus | Best For |
|------------|------|-------|----------|
| **Technical** | Gemini | Implementation, code patterns, technical feasibility | Understanding how and technical details |
| **Architectural** | Claude | System design, scalability, component interactions | Understanding structure and organization |
| **Business** | Codex | Value, ROI, stakeholder impact, strategy | Understanding business implications |
| **Domain Expert** | Gemini | Domain-specific patterns, best practices, standards | Industry-specific knowledge and practices |
**Selection**: User can multi-select up to 4 perspectives in Phase 1, or default to single comprehensive view
### Dimension-Direction Mapping
When user selects focus areas, generate directions dynamically from detected dimensions (don't use static options):
| Dimension | Possible Directions |
|-----------|-------------------|
| architecture | System Design, Component Interactions, Technology Choices, Integration Points, Design Patterns, Scalability Strategy |
| implementation | Code Structure, Implementation Details, Code Patterns, Error Handling, Testing Approach, Algorithm Analysis |
| performance | Performance Bottlenecks, Optimization Opportunities, Resource Utilization, Caching Strategy, Concurrency Issues |
| security | Security Vulnerabilities, Authentication/Authorization, Access Control, Data Protection, Input Validation |
| concept | Conceptual Foundation, Core Mechanisms, Fundamental Patterns, Theory & Principles, Trade-offs & Reasoning |
| comparison | Solution Comparison, Pros & Cons Analysis, Technology Evaluation, Approach Differences |
| decision | Decision Criteria, Trade-off Analysis, Risk Assessment, Impact Analysis, Implementation Implications |
**Implementation**: Present 2-3 top dimension-related directions, allow user to multi-select and add custom directions.
### Analysis Dimensions
Dimensions matched against topic keywords to identify focus areas:
| Dimension | Keywords |
|-----------|----------|
| architecture | 架构, architecture, design, structure, 设计 |
| implementation | 实现, implement, code, coding, 代码 |
| performance | 性能, performance, optimize, bottleneck, 优化 |
| security | 安全, security, auth, permission, 权限 |
| concept | 概念, concept, theory, principle, 原理 |
| comparison | 比较, compare, vs, difference, 区别 |
| decision | 决策, decision, choice, tradeoff, 选择 |
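A possible keyword-matching sketch for dimension detection (the map mirrors the table above; the matching logic and fallback are assumptions):
```javascript
const DIMENSION_KEYWORDS = {
  architecture: ["架构", "architecture", "design", "structure", "设计"],
  implementation: ["实现", "implement", "code", "coding", "代码"],
  performance: ["性能", "performance", "optimize", "bottleneck", "优化"],
  security: ["安全", "security", "auth", "permission", "权限"],
  concept: ["概念", "concept", "theory", "principle", "原理"],
  comparison: ["比较", "compare", "vs", "difference", "区别"],
  decision: ["决策", "decision", "choice", "tradeoff", "选择"]
}

function detectDimensions(topic) {
  const lower = topic.toLowerCase()
  const hits = Object.entries(DIMENSION_KEYWORDS)
    .filter(([, words]) => words.some(w => lower.includes(w.toLowerCase())))
    .map(([dim]) => dim)
  return hits.length > 0 ? hits : ["concept"] // fallback when nothing matches (assumption)
}
```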
### Consolidation Rules
When updating "Current Understanding":
| Rule | Description |
|------|-------------|
| Promote confirmed insights | Move validated findings to "What We Established" |
| Track corrections | Keep important wrong→right transformations |
| Focus on current state | What do we know NOW |
| Avoid timeline repetition | Don't copy discussion details |
| Preserve key learnings | Keep insights valuable for future reference |
**Example**:
**Bad (cluttered)**:
```markdown
## Current Understanding
In round 1 we discussed X, then in round 2 user said Y...
```
**Good (consolidated)**:
```markdown
## Current Understanding
### What We Established
- The authentication flow uses JWT with refresh tokens
- Rate limiting is implemented at API gateway level
### What Was Clarified
- ~~Assumed Redis for sessions~~ → Actually uses database-backed sessions
### Key Insights
- Current architecture supports horizontal scaling
```
## Error Handling
| Error | Resolution |
|-------|------------|
| cli-explore-agent fails | Continue with available context, note limitation |
| CLI timeout | Retry with shorter prompt, or skip perspective |
| User timeout in discussion | Save state, show resume command |
| Max rounds reached | Force synthesis, offer continuation option |
| No relevant findings | Broaden search, ask user for clarification |
| Session folder conflict | Append timestamp suffix |
| Gemini unavailable | Fallback to Codex or manual analysis |
## Best Practices
1. **Clear Topic Definition**: Detailed topics → better dimension identification
2. **Agent-First for Complex Tasks**: For code analysis, implementation, or refactoring tasks during discussion, delegate to agents via Task tool (cli-explore-agent, code-developer, universal-executor) or CLI calls (ccw cli). Avoid direct analysis/execution in main process
3. **Review discussion.md**: Check understanding evolution before conclusions
4. **Embrace Corrections**: Track wrong→right transformations as learnings
5. **Document Evolution**: discussion.md captures full thinking process
6. **Use Continue Mode**: Resume sessions to build on previous analysis
## Templates
### Discussion Document Structure
**discussion.md** contains:
- **Header**: Session metadata (ID, topic, started, dimensions)
- **User Context**: Focus areas, analysis depth
- **Discussion Timeline**: Round-by-round findings
- Round 1: Initial Understanding + Exploration Results
- Round 2-N: User feedback, adjusted understanding, corrections, new insights
- **Conclusions**: Summary, key conclusions, recommendations
- **Current Understanding (Final)**: Consolidated insights
- **Session Statistics**: Rounds, duration, sources, artifacts
Example sections:
```markdown
### Round 2 - Discussion (timestamp)
#### User Input
User agrees with current direction, wants deeper code analysis
#### Updated Understanding
- Identified session management uses database-backed approach
- Rate limiting applied at gateway, not application level
#### Corrected Assumptions
- ~~Assumed Redis for sessions~~ → Database-backed sessions
- Reason: User clarified architecture decision
#### New Insights
- Current design allows horizontal scaling without session affinity
```
## Usage Recommendations (Requires User Confirmation)
**When to Execute Directly:**
- Short, focused analysis tasks (single module/component)
- Clear, well-defined topics with limited scope
- Quick information gathering without multi-round iteration
- Follow-up analysis building on existing session
**Use `Skill(skill="workflow:analyze-with-file", args="\"topic\"")` when:**
- Exploring a complex topic collaboratively
- Need documented discussion trail
- Decision-making requires multiple perspectives
- Want to iterate on understanding with user input
- Building shared understanding before implementation
**Use `Skill(skill="workflow:debug-with-file", args="\"bug description\"")` when:**
- Diagnosing specific bugs
- Need hypothesis-driven investigation
- Focus on evidence and verification
**Use `Skill(skill="workflow:brainstorm-with-file", args="\"topic or question\"")` when:**
- Generating new ideas or solutions
- Need creative exploration
- Want divergent thinking before convergence
**Use `Skill(skill="workflow:collaborative-plan-with-file", args="\"task description\"")` when:**
- Complex planning requiring multiple perspectives
- Large scope needing parallel sub-domain analysis
- Want shared collaborative planning document
- Need structured task breakdown with agent coordination
**Use `Skill(skill="workflow:lite-plan", args="\"task description\"")` when:**
- Ready to implement (past analysis phase)
- Need simple task breakdown
- Focus on quick execution planning
---
**Now execute analyze-with-file for**: $ARGUMENTS


@@ -0,0 +1,779 @@
---
name: brainstorm-with-file
description: Interactive brainstorming with multi-CLI collaboration, idea expansion, and documented thought evolution
argument-hint: "[-y|--yes] [-c|--continue] [-m|--mode creative|structured] \"idea or topic\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm decisions, use recommended roles, balanced exploration mode.
# Workflow Brainstorm Command
## Quick Start
```bash
# Basic usage
/workflow:brainstorm-with-file "如何重新设计用户通知系统"
# With options
/workflow:brainstorm-with-file --continue "通知系统" # Continue existing
/workflow:brainstorm-with-file -y -m creative "创新的AI辅助功能" # Creative auto mode
/workflow:brainstorm-with-file -m structured "优化缓存策略" # Goal-oriented mode
```
**Context Source**: cli-explore-agent + Multi-CLI perspectives (Gemini/Codex/Claude or Professional Roles)
**Output Directory**: `.workflow/.brainstorm/{session-id}/`
**Core Innovation**: Diverge-Converge cycles with documented thought evolution
## Output Artifacts
### Phase 1: Seed Understanding
| Artifact | Description |
|----------|-------------|
| `brainstorm.md` | Complete thought evolution timeline (initialized) |
| Session variables | Dimensions, roles, exploration vectors |
### Phase 2: Divergent Exploration
| Artifact | Description |
|----------|-------------|
| `exploration-codebase.json` | Codebase context from cli-explore-agent |
| `perspectives.json` | Multi-CLI perspective findings (creative/pragmatic/systematic) |
| Updated `brainstorm.md` | Round 2 multi-perspective exploration |
### Phase 3: Interactive Refinement
| Artifact | Description |
|----------|-------------|
| `ideas/{idea-slug}.md` | Deep-dive analysis for selected ideas |
| Updated `brainstorm.md` | Round 3-6 refinement cycles |
### Phase 4: Convergence & Crystallization
| Artifact | Description |
|----------|-------------|
| `synthesis.json` | Final synthesis with top ideas, recommendations |
| Final `brainstorm.md` | ⭐ Complete thought evolution with conclusions |
## Overview
Interactive brainstorming workflow with **multi-CLI collaboration** and **documented thought evolution**. Expands initial ideas through questioning, multi-perspective analysis, and iterative refinement.
**Core workflow**: Seed Idea → Expand → Multi-CLI Discuss → Synthesize → Refine → Crystallize
```
┌─────────────────────────────────────────────────────────────────────────┐
│ INTERACTIVE BRAINSTORMING WORKFLOW │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Phase 1: Seed Understanding │
│ ├─ Parse initial idea/topic │
│ ├─ Identify dimensions (technical, UX, business, etc.) │
│ ├─ Select roles (professional or simple perspectives) │
│ ├─ Initial scoping questions │
│ ├─ Expand into exploration vectors │
│ └─ Initialize brainstorm.md │
│ │
│ Phase 2: Divergent Exploration │
│ ├─ cli-explore-agent: Codebase context (FIRST) │
│ ├─ Multi-CLI Perspectives (AFTER exploration) │
│ │ ├─ Creative (Gemini): Innovation, cross-domain │
│ │ ├─ Pragmatic (Codex): Implementation, feasibility │
│ │ └─ Systematic (Claude): Architecture, structure │
│ └─ Aggregate diverse viewpoints │
│ │
│ Phase 3: Interactive Refinement (Multi-Round) │
│ ├─ Present multi-perspective findings │
│ ├─ User selects promising directions │
│ ├─ Actions: Deep dive | Generate more | Challenge | Merge │
│ ├─ Update brainstorm.md with evolution │
│ └─ Repeat diverge-converge cycles (max 6 rounds) │
│ │
│ Phase 4: Convergence & Crystallization │
│ ├─ Synthesize best ideas │
│ ├─ Resolve conflicts between perspectives │
│ ├─ Generate actionable conclusions │
│ ├─ Offer next steps (plan/issue/analyze/export) │
│ └─ Final brainstorm.md update │
│ │
└─────────────────────────────────────────────────────────────────────────┘
```
## Output Structure
```
.workflow/.brainstorm/BS-{slug}-{date}/
├── brainstorm.md # ⭐ Complete thought evolution timeline
├── exploration-codebase.json # Phase 2: Codebase context
├── perspectives.json # Phase 2: Multi-CLI findings
├── synthesis.json # Phase 4: Final synthesis
└── ideas/ # Phase 3: Individual idea deep-dives
├── idea-1.md
├── idea-2.md
└── merged-idea-1.md
```
## Implementation
### Session Initialization
**Objective**: Create session context and directory structure for brainstorming.
**Required Actions**:
1. Extract idea/topic from `$ARGUMENTS`
2. Generate session ID: `BS-{slug}-{date}`
- slug: lowercase, alphanumeric + Chinese, max 40 chars
- date: YYYY-MM-DD (UTC+8)
3. Define session folder: `.workflow/.brainstorm/{session-id}`
4. Parse command options:
- `-c` or `--continue` for session continuation
- `-m` or `--mode` for brainstorm mode (creative/structured/balanced)
- `-y` or `--yes` for auto-approval mode
5. Auto-detect mode: If session folder + brainstorm.md exist → continue mode
6. Create directory structure: `{session-folder}/ideas/`
**Session Variables**:
- `sessionId`: Unique session identifier
- `sessionFolder`: Base directory for all artifacts
- `brainstormMode`: creative | structured | balanced
- `autoMode`: Boolean for auto-confirmation
- `mode`: new | continue
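**Session ID Sketch** (slug normalization details are assumptions; only the `BS-{slug}-{date}` format comes from the rules above):
```javascript
function makeSessionId(topic) {
  const slug = topic
    .toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fff]+/g, "-") // keep alphanumeric + Chinese, collapse the rest
    .replace(/^-+|-+$/g, "")
    .slice(0, 40)
  const date = new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString().slice(0, 10) // YYYY-MM-DD in UTC+8
  return `BS-${slug}-${date}`
}

const sessionId = makeSessionId(idea_or_topic)
const sessionFolder = `.workflow/.brainstorm/${sessionId}`
Bash({ command: `mkdir -p ${sessionFolder}/ideas`, run_in_background: false })
```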
### Phase 1: Seed Understanding
**Objective**: Analyze topic, select roles, gather user input, expand into exploration vectors.
**Prerequisites**:
- Session initialized with valid sessionId and sessionFolder
- Topic/idea available from $ARGUMENTS
**Workflow Steps**:
1. **Parse Seed & Identify Dimensions**
- Match topic keywords against BRAINSTORM_DIMENSIONS
- Identify relevant dimensions: technical, ux, business, innovation, feasibility, scalability, security
- Default dimensions based on brainstormMode if no match
2. **Role Selection**
- **Recommend roles** based on topic keywords (see Role Keywords mapping)
- **Options**:
- **Professional roles**: system-architect, product-manager, ui-designer, ux-expert, data-architect, test-strategist, subject-matter-expert, product-owner, scrum-master
- **Simple perspectives**: creative/pragmatic/systematic (fallback)
- **Auto mode**: Select top 3 recommended professional roles
- **Manual mode**: AskUserQuestion with recommended roles + "Use simple perspectives" option
3. **Initial Scoping Questions** (if new session + not auto mode)
- **Direction**: Multi-select from directions generated by detected dimensions (see Brainstorm Dimensions)
- **Depth**: Single-select from quick/balanced/deep (15-20min / 30-60min / 1-2hr)
- **Constraints**: Multi-select from existing architecture, time, resources, or no constraints
4. **Expand Seed into Exploration Vectors**
- Launch Gemini CLI with analysis mode
- Generate 5-7 exploration vectors:
- Core question: Fundamental problem/opportunity
- User perspective: Who benefits and how
- Technical angle: What enables this
- Alternative approaches: Other solutions
- Challenges: Potential blockers
- Innovation angle: 10x better approach
- Integration: Fit with existing systems
- Parse result into structured vectors
**CLI Call Example**:
```javascript
Bash({
command: `ccw cli -p "
Given the initial idea: '${idea_or_topic}'
User focus areas: ${userFocusAreas.join(', ')}
Constraints: ${constraints.join(', ')}
Generate 5-7 exploration vectors (questions/directions) to expand this idea:
1. Core question: What is the fundamental problem/opportunity?
2. User perspective: Who benefits and how?
3. Technical angle: What enables this technically?
4. Alternative approaches: What other ways could this be solved?
5. Challenges: What could go wrong or block success?
6. Innovation angle: What would make this 10x better?
7. Integration: How does this fit with existing systems/processes?
Output as structured exploration vectors for multi-perspective analysis.
" --tool gemini --mode analysis --model gemini-2.5-flash`,
run_in_background: false
})
```
5. **Initialize brainstorm.md**
- Create brainstorm.md with session metadata
- Add initial context: user focus, depth, constraints
- Add seed expansion: original idea + exploration vectors
- Create empty sections for thought evolution timeline
**Success Criteria**:
- Session folder created with brainstorm.md initialized
- 1-3 roles selected (professional or simple perspectives)
- 5-7 exploration vectors generated
- User preferences captured (direction, depth, constraints)
### Phase 2: Divergent Exploration
**Objective**: Gather codebase context, then execute multi-perspective analysis in parallel.
**Prerequisites**:
- Phase 1 completed successfully
- Roles selected and stored
- brainstorm.md initialized
**Workflow Steps**:
1. **Primary Codebase Exploration via cli-explore-agent** (⚠️ FIRST)
- Agent type: `cli-explore-agent`
- Execution mode: synchronous (run_in_background: false)
- **Tasks**:
- Run: `ccw tool exec get_modules_by_depth '{}'`
- Search code related to topic keywords
- Read: `.workflow/project-tech.json` if exists
- **Output**: `{sessionFolder}/exploration-codebase.json`
- relevant_files: [{path, relevance, rationale}]
- existing_patterns: []
- architecture_constraints: []
- integration_points: []
- inspiration_sources: []
- **Purpose**: Enrich CLI prompts with codebase context
**Agent Call Example**:
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Explore codebase for brainstorm: ${topicSlug}`,
prompt: `
## Brainstorm Context
Topic: ${idea_or_topic}
Dimensions: ${dimensions.join(', ')}
Mode: ${brainstormMode}
Session: ${sessionFolder}
## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Search for code related to topic keywords
3. Read: .workflow/project-tech.json (if exists)
## Exploration Focus
- Identify existing implementations related to the topic
- Find patterns that could inspire solutions
- Map current architecture constraints
- Locate integration points
## Output
Write findings to: ${sessionFolder}/exploration-codebase.json
Schema:
{
"relevant_files": [{"path": "...", "relevance": "high|medium|low", "rationale": "..."}],
"existing_patterns": [],
"architecture_constraints": [],
"integration_points": [],
"inspiration_sources": [],
"_metadata": { "exploration_type": "brainstorm-codebase", "timestamp": "..." }
}
`
})
```
2. **Multi-CLI Perspective Analysis** (⚠️ AFTER exploration)
- Launch 3 CLI calls in parallel (Gemini/Codex/Claude)
- **Perspectives**:
- **Creative (Gemini)**: Innovation, cross-domain inspiration, challenge assumptions
- **Pragmatic (Codex)**: Implementation reality, feasibility, technical blockers
- **Systematic (Claude)**: Architecture, decomposition, scalability
- **Shared context**: Include exploration-codebase.json findings in prompts
- **Execution**: Bash with run_in_background: true, wait for all results
- **Output**: perspectives.json with creative/pragmatic/systematic sections
**Multi-CLI Call Example** (parallel execution):
```javascript
// Build shared context from exploration results
const explorationContext = `
PRIOR EXPLORATION CONTEXT (from cli-explore-agent):
- Key files: ${explorationResults.relevant_files.slice(0,5).map(f => f.path).join(', ')}
- Existing patterns: ${explorationResults.existing_patterns.slice(0,3).join(', ')}
- Architecture constraints: ${explorationResults.architecture_constraints.slice(0,3).join(', ')}
- Integration points: ${explorationResults.integration_points.slice(0,3).join(', ')}`
// Launch 3 CLI calls in parallel (single message, multiple Bash calls)
Bash({
command: `ccw cli -p "
PURPOSE: Creative brainstorming for '${idea_or_topic}' - generate innovative ideas
Success: 5+ unique creative solutions that push boundaries
${explorationContext}
TASK:
• Build on existing patterns - how can they be extended creatively?
• Think beyond obvious solutions - what would be surprising/delightful?
• Explore cross-domain inspiration
• Challenge assumptions - what if the opposite were true?
• Generate 'moonshot' ideas alongside practical ones
MODE: analysis
CONTEXT: @**/* | Topic: ${idea_or_topic}
EXPECTED: 5+ creative ideas with novelty/impact ratings, challenged assumptions, cross-domain inspirations
CONSTRAINTS: ${brainstormMode === 'structured' ? 'Keep ideas technically feasible' : 'No constraints - think freely'}
" --tool gemini --mode analysis`,
run_in_background: true
})
Bash({
command: `ccw cli -p "
PURPOSE: Pragmatic brainstorming for '${idea_or_topic}' - focus on implementation reality
Success: Actionable approaches with clear implementation paths
${explorationContext}
TASK:
• Build on explored codebase - how to integrate with existing patterns?
• Evaluate technical feasibility of core concept
• Identify existing patterns/libraries that could help
• Estimate implementation complexity
• Highlight potential technical blockers
• Suggest incremental implementation approach
MODE: analysis
CONTEXT: @**/* | Topic: ${idea_or_topic}
EXPECTED: 3-5 practical approaches with effort/risk ratings, dependencies, quick wins vs long-term
CONSTRAINTS: Focus on what can actually be built with current tech stack
" --tool codex --mode analysis`,
run_in_background: true
})
Bash({
command: `ccw cli -p "
PURPOSE: Systematic brainstorming for '${idea_or_topic}' - architectural thinking
Success: Well-structured solution framework with clear tradeoffs
${explorationContext}
TASK:
• Build on explored architecture - how to extend systematically?
• Decompose the problem into sub-problems
• Identify architectural patterns that apply
• Map dependencies and interactions
• Consider scalability implications
• Propose systematic solution structure
MODE: analysis
CONTEXT: @**/* | Topic: ${idea_or_topic}
EXPECTED: Problem decomposition, 2-3 architectural approaches with tradeoffs, scalability assessment
CONSTRAINTS: Consider existing system architecture
" --tool claude --mode analysis`,
run_in_background: true
})
// ⚠️ STOP POINT: Wait for hook callback to receive all results before continuing
```
3. **Aggregate Multi-Perspective Findings**
- Consolidate creative/pragmatic/systematic results
- Extract synthesis:
- Convergent themes (all agree)
- Conflicting views (need resolution)
- Unique contributions (perspective-specific insights)
- Write to perspectives.json
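**Synthesis Sketch** (illustrative; assumes each perspective result exposes a `themes` list, and treats anything mentioned by two or more perspectives as convergent):
```javascript
const perspectiveNames = ["creative", "pragmatic", "systematic"]
const themeCounts = {}
perspectiveNames.forEach(name => {
  results[name].themes.forEach(theme => {
    themeCounts[theme] = (themeCounts[theme] || 0) + 1
  })
})

const synthesis = {
  convergent_themes: Object.keys(themeCounts).filter(t => themeCounts[t] >= 2),
  conflicting_views: conflicts, // collected while comparing perspective outputs
  unique_contributions: Object.keys(themeCounts).filter(t => themeCounts[t] === 1)
}
Write({
  file_path: `${sessionFolder}/perspectives.json`,
  content: JSON.stringify({ ...results, synthesis }, null, 2)
})
```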
4. **Update brainstorm.md**
- Append Round 2 section with multi-perspective exploration
- Include creative/pragmatic/systematic findings
- Add perspective synthesis
**CLI Prompt Template**:
- **PURPOSE**: Role brainstorming for topic - focus description
- **TASK**: Bullet list of specific actions
- **MODE**: analysis
- **CONTEXT**: @**/* | Topic + Exploration vectors + Codebase findings
- **EXPECTED**: Output format requirements
- **CONSTRAINTS**: Role-specific constraints
**Success Criteria**:
- exploration-codebase.json created with codebase context
- perspectives.json created with 3 perspective analyses
- brainstorm.md updated with Round 2 findings
- All CLI calls completed successfully
### Phase 3: Interactive Refinement
**Objective**: Iteratively refine ideas through user-guided exploration cycles.
**Prerequisites**:
- Phase 2 completed successfully
- perspectives.json contains initial ideas
- brainstorm.md has Round 2 findings
**Guideline**: For complex tasks (code analysis, implementation, POC creation), delegate to agents via Task tool (cli-explore-agent, code-developer, universal-executor) or CLI calls (ccw cli). Avoid direct analysis/execution in main process.
**Workflow Steps**:
1. **Present Current State**
- Extract top ideas from perspectives.json
- Display with: title, source, brief description, novelty/feasibility ratings
- List open questions
2. **Gather User Direction** (AskUserQuestion)
- **Question 1**: Which ideas to explore (multi-select from top ideas)
- **Question 2**: Next step (single-select):
- **深入探索** (Deep dive): Deep dive on selected ideas
- **继续发散** (Diverge more): Generate more ideas
- **挑战验证** (Challenge): Devil's advocate challenge
- **合并综合** (Merge): Merge multiple ideas
- **准备收敛** (Converge): Begin convergence (exit loop)
3. **Execute User-Selected Action**
**Deep Dive** (per selected idea):
- Launch Gemini CLI with analysis mode
- Tasks: Elaborate concept, implementation requirements, challenges, POC approach, metrics, dependencies
- Output: `{sessionFolder}/ideas/{idea-slug}.md`
**Generate More Ideas**:
- Launch CLI with new angles from unexplored vectors
- Add results to perspectives.json
**Devil's Advocate Challenge**:
- Launch Codex CLI with analysis mode
- Tasks: Identify objections, challenge assumptions, failure scenarios, alternatives, survivability rating
- Return challenge results for idea strengthening
**Merge Ideas**:
- Launch Gemini CLI with analysis mode
- Tasks: Identify complementary elements, resolve contradictions, create unified concept
- Add merged idea to perspectives.json
4. **Update brainstorm.md**
- Append Round N section with findings
- Document user direction and action results
5. **Repeat or Converge**
- Continue loop (max 6 rounds) or exit to Phase 4
**Refinement Actions**:
| Action | Tool | Output | Description |
|--------|------|--------|-------------|
| Deep Dive | Gemini CLI | ideas/{slug}.md | Comprehensive idea analysis |
| Generate More | Selected CLI | Updated perspectives.json | Additional idea generation |
| Challenge | Codex CLI | Challenge results | Critical weaknesses exposed |
| Merge | Gemini CLI | Merged idea | Synthesized concept |
**CLI Call Examples for Refinement Actions**:
**1. Deep Dive on Selected Idea**:
```javascript
Bash({
command: `ccw cli -p "
PURPOSE: Deep dive analysis on idea '${idea.title}'
Success: Comprehensive understanding with actionable next steps
TASK:
• Elaborate the core concept in detail
• Identify implementation requirements
• List potential challenges and mitigations
• Suggest proof-of-concept approach
• Define success metrics
• Map related/dependent features
MODE: analysis
CONTEXT: @**/*
Original idea: ${idea.description}
Source perspective: ${idea.source}
User interest reason: ${idea.userReason || 'Selected for exploration'}
EXPECTED:
- Detailed concept description
- Technical requirements list
- Risk/challenge matrix
- MVP definition
- Success criteria
- Recommendation: pursue/pivot/park
CONSTRAINTS: Focus on actionability
" --tool gemini --mode analysis`,
run_in_background: false
})
```
**2. Devil's Advocate Challenge**:
```javascript
Bash({
command: `ccw cli -p "
PURPOSE: Devil's advocate - rigorously challenge these brainstorm ideas
Success: Uncover hidden weaknesses and strengthen viable ideas
IDEAS TO CHALLENGE:
${ideas.map((idea, i) => `${i+1}. ${idea.title}: ${idea.brief}`).join('\n')}
TASK:
• For each idea, identify 3 strongest objections
• Challenge core assumptions
• Identify scenarios where this fails
• Consider competitive/alternative solutions
• Assess whether this solves the right problem
• Rate survivability after challenge (1-5)
MODE: analysis
EXPECTED:
- Per-idea challenge report
- Critical weaknesses exposed
- Counter-arguments to objections (if any)
- Ideas that survive the challenge
- Modified/strengthened versions
CONSTRAINTS: Be genuinely critical, not just contrarian
" --tool codex --mode analysis`,
run_in_background: false
})
```
**3. Merge Multiple Ideas**:
```javascript
Bash({
command: `ccw cli -p "
PURPOSE: Synthesize multiple ideas into unified concept
Success: Coherent merged idea that captures best elements
IDEAS TO MERGE:
${selectedIdeas.map((idea, i) => `
${i+1}. ${idea.title} (${idea.source})
${idea.description}
Strengths: ${idea.strengths.join(', ')}
`).join('\n')}
TASK:
• Identify complementary elements
• Resolve contradictions
• Create unified concept
• Preserve key strengths from each
• Describe the merged solution
• Assess viability of merged idea
MODE: analysis
EXPECTED:
- Merged concept description
- Elements taken from each source idea
- Contradictions resolved (or noted as tradeoffs)
- New combined strengths
- Implementation considerations
CONSTRAINTS: Don't force incompatible ideas together
" --tool gemini --mode analysis`,
run_in_background: false
})
```
**Success Criteria**:
- User-selected ideas processed
- brainstorm.md updated with all refinement rounds
- ideas/ folder contains deep-dive documents for selected ideas
- Exit condition reached (user selects "准备收敛" / converge, or max rounds reached)
### Phase 4: Convergence & Crystallization
**Objective**: Synthesize final ideas, generate conclusions, offer next steps.
**Prerequisites**:
- Phase 3 completed successfully
- Multiple rounds of refinement documented
- User ready to converge
**Workflow Steps**:
1. **Generate Final Synthesis**
- Consolidate all ideas from perspectives.json and refinement rounds
- **Top ideas**: Filter active ideas, sort by score, take top 5
- Include: title, description, source_perspective, score, novelty, feasibility, strengths, challenges, next_steps
- **Parked ideas**: Ideas marked as parked with reason and future trigger
- **Key insights**: Process discoveries, challenged assumptions, unexpected connections
- **Recommendations**: Primary recommendation, alternatives, not recommended
- **Follow-up**: Implementation/research/validation summaries
- Write to synthesis.json
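**Ranking Sketch** (illustrative; the fields follow the synthesis.json schema below, while the filter/sort logic is an assumption):
```javascript
const allIdeas = [...perspectiveIdeas, ...refinementIdeas]
const topIdeas = allIdeas
  .filter(idea => idea.status !== "parked")
  .sort((a, b) => b.score - a.score)
  .slice(0, 5)
  .map(idea => ({
    title: idea.title,
    description: idea.description,
    source_perspective: idea.source,
    score: idea.score,
    novelty: idea.novelty,
    feasibility: idea.feasibility,
    strengths: idea.strengths,
    challenges: idea.challenges,
    next_steps: idea.next_steps
  }))
```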
2. **Final brainstorm.md Update**
- Append synthesis & conclusions section
- **Executive summary**: High-level overview
- **Top ideas**: Ranked with descriptions, strengths, challenges, next steps
- **Primary recommendation**: Best path forward with rationale
- **Alternative approaches**: Other viable options with tradeoffs
- **Parked ideas**: Future considerations
- **Key insights**: Learnings from the process
- **Session statistics**: Rounds, ideas generated/survived, duration
3. **Post-Completion Options** (AskUserQuestion)
- **创建实施计划** (Create implementation plan): Launch workflow:plan with top idea
- **创建Issue** (Create issue): Launch issue:new for top 3 ideas
- **深入分析** (Deep analysis): Launch workflow:analyze-with-file for top idea
- **导出分享** (Export & share): Generate shareable report
- **完成** (Done): No further action
**synthesis.json Schema**:
- `session_id`: Session identifier
- `topic`: Original idea/topic
- `completed`: Completion timestamp
- `total_rounds`: Number of refinement rounds
- `top_ideas[]`: Top 5 ranked ideas
- `parked_ideas[]`: Ideas parked for future
- `key_insights[]`: Process learnings
- `recommendations`: Primary/alternatives/not_recommended
- `follow_up[]`: Next step summaries
**Success Criteria**:
- synthesis.json created with final synthesis
- brainstorm.md finalized with conclusions
- User offered next step options
- Session complete
## Configuration
### Brainstorm Dimensions
Dimensions matched against topic keywords to identify focus areas:
| Dimension | Keywords |
|-----------|----------|
| technical | 技术, technical, implementation, code, 实现, architecture |
| ux | 用户, user, experience, UX, UI, 体验, interaction |
| business | 业务, business, value, ROI, 价值, market |
| innovation | 创新, innovation, novel, creative, 新颖 |
| feasibility | 可行, feasible, practical, realistic, 实际 |
| scalability | 扩展, scale, growth, performance, 性能 |
| security | 安全, security, risk, protection, 风险 |
### Role Selection
**Professional Roles** (recommended based on topic keywords):
| Role | CLI Tool | Focus Area | Keywords |
|------|----------|------------|----------|
| system-architect | Claude | Architecture, patterns | 架构, architecture, system, 系统, design pattern |
| product-manager | Gemini | Business value, roadmap | 产品, product, feature, 功能, roadmap |
| ui-designer | Gemini | Visual design, interaction | UI, 界面, interface, visual, 视觉 |
| ux-expert | Codex | User research, usability | UX, 体验, experience, user, 用户 |
| data-architect | Claude | Data modeling, storage | 数据, data, database, 存储, storage |
| test-strategist | Codex | Quality, testing | 测试, test, quality, 质量, QA |
| subject-matter-expert | Gemini | Domain knowledge | 领域, domain, industry, 行业, expert |
| product-owner | Codex | Priority, scope | 优先, priority, scope, 范围, backlog |
| scrum-master | Gemini | Process, collaboration | 敏捷, agile, scrum, sprint, 迭代 |
**Simple Perspectives** (fallback):
| Perspective | CLI Tool | Focus | Best For |
|-------------|----------|-------|----------|
| creative | Gemini | Innovation, cross-domain | Generating novel ideas |
| pragmatic | Codex | Implementation, feasibility | Reality-checking ideas |
| systematic | Claude | Architecture, structure | Organizing solutions |
**Selection Strategy**:
1. **Auto mode** (`-y`): Choose top 3 recommended professional roles
2. **Manual mode**: Present recommended roles + "Use simple perspectives" option
3. **Continue mode**: Use roles from previous session
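A sketch of keyword-based role recommendation (the keyword map mirrors the table above; the scoring is an assumption):
```javascript
const ROLE_KEYWORDS = {
  "system-architect": ["架构", "architecture", "system", "系统", "design pattern"],
  "product-manager": ["产品", "product", "feature", "功能", "roadmap"],
  "ui-designer": ["UI", "界面", "interface", "visual", "视觉"],
  "ux-expert": ["UX", "体验", "experience", "user", "用户"],
  "data-architect": ["数据", "data", "database", "存储", "storage"],
  "test-strategist": ["测试", "test", "quality", "质量", "QA"],
  "subject-matter-expert": ["领域", "domain", "industry", "行业", "expert"],
  "product-owner": ["优先", "priority", "scope", "范围", "backlog"],
  "scrum-master": ["敏捷", "agile", "scrum", "sprint", "迭代"]
}

function recommendRoles(topic, count = 3) {
  const lower = topic.toLowerCase()
  return Object.entries(ROLE_KEYWORDS)
    .map(([role, words]) => [role, words.filter(w => lower.includes(w.toLowerCase())).length])
    .filter(([, hits]) => hits > 0)
    .sort((a, b) => b[1] - a[1])
    .slice(0, count)
    .map(([role]) => role)
}
```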
### Collaboration Patterns
| Pattern | Usage | Description |
|---------|-------|-------------|
| Parallel Divergence | New topic | All roles explore simultaneously from different angles |
| Sequential Deep-Dive | Promising idea | One role expands, others critique/refine |
| Debate Mode | Controversial approach | Roles argue for/against approaches |
| Synthesis Mode | Ready to decide | Combine insights into actionable conclusion |
### Context Overflow Protection
**Per-Role Limits**:
- Main analysis output: < 3000 words
- Sub-document (if any): < 2000 words each
- Maximum sub-documents: 5 per role
**Synthesis Protection**:
- If total analysis > 100KB, synthesis reads only main analysis files (not sub-documents)
- Large ideas automatically split into separate idea documents in ideas/ folder
**Recovery Steps**:
1. Check CLI logs for context overflow errors
2. Reduce scope: fewer roles or simpler topic
3. Use `--mode structured` for more focused output
4. Split complex topics into multiple sessions
**Prevention**:
- Start with 3 roles (default), increase if needed
- Use structured topic format: "GOAL: ... SCOPE: ... CONTEXT: ..."
- Review output sizes before final synthesis
## Error Handling
| Error | Resolution |
|-------|------------|
| cli-explore-agent fails | Continue with empty exploration context |
| CLI timeout | Retry with shorter prompt, or skip perspective |
| No good ideas | Reframe problem, adjust constraints, try new angles |
| User disengaged | Summarize progress, offer break point with resume |
| Perspectives conflict | Present as tradeoff, let user decide |
| Max rounds reached | Force synthesis, highlight unresolved questions |
| All ideas fail challenge | Return to divergent phase with new constraints |
## Best Practices
1. **Clear Topic Definition**: Detailed topics → better role selection and exploration
2. **Agent-First for Complex Tasks**: For code analysis, POC implementation, or technical validation during refinement, delegate to agents via Task tool (cli-explore-agent, code-developer, universal-executor) or CLI calls (ccw cli). Avoid direct analysis/execution in main process
3. **Review brainstorm.md**: Check thought evolution before final decisions
4. **Embrace Conflicts**: Perspective conflicts often reveal important tradeoffs
5. **Document Evolution**: brainstorm.md captures full thinking process for team review
6. **Use Continue Mode**: Resume sessions to build on previous exploration
## Templates
### Brainstorm Document Structure
**brainstorm.md** contains:
- **Header**: Session metadata (ID, topic, started, mode, dimensions)
- **Initial Context**: User focus, depth, constraints
- **Seed Expansion**: Original idea + exploration vectors
- **Thought Evolution Timeline**: Round-by-round findings
- Round 1: Seed Understanding
- Round 2: Multi-Perspective Exploration (creative/pragmatic/systematic)
- Round 3-N: Interactive Refinement (deep-dive/challenge/merge)
- **Synthesis & Conclusions**: Executive summary, top ideas, recommendations
- **Session Statistics**: Rounds, ideas, duration, artifacts
See full markdown template in original file (lines 955-1161).
## Usage Recommendations (Requires User Confirmation)
**Use `Skill(skill="workflow:brainstorm-with-file", args="\"topic or question\"")` when:**
- Starting a new feature/product without clear direction
- Facing a complex problem with multiple possible solutions
- Need to explore alternatives before committing
- Want documented thinking process for team review
- Combining multiple stakeholder perspectives
**Use `Skill(skill="workflow:analyze-with-file", args="\"topic\"")` when:**
- Investigating existing code/system
- Need factual analysis over ideation
- Debugging or troubleshooting
- Understanding current state
**Use `Skill(skill="workflow:collaborative-plan-with-file", args="\"task description\"")` when:**
- Complex planning requiring multiple perspectives
- Large scope needing parallel sub-domain analysis
- Want shared collaborative planning document
- Need structured task breakdown with agent coordination
**Use `Skill(skill="workflow:lite-plan", args="\"task description\"")` when:**
- Direction is already clear
- Ready to move from ideas to execution
- Need simple implementation breakdown
---
**Now execute brainstorm-with-file for**: $ARGUMENTS


@@ -1,587 +0,0 @@
---
name: api-designer
description: Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🔌 **API Designer Analysis Generator**
### Purpose
**Specialized command for generating api-designer/analysis.md** that addresses guidance-specification.md discussion points from backend API design perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **API Design Focus**: RESTful/GraphQL API design, endpoint structure, and contract definition
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **API Architecture**: RESTful/GraphQL design patterns and best practices
- **Endpoint Design**: Resource modeling, URL structure, and HTTP method selection
- **Data Contracts**: Request/response schemas, validation rules, and data transformation
- **API Documentation**: OpenAPI/Swagger specifications and developer experience
### Role Boundaries & Responsibilities
#### **What This Role OWNS (API Contract Within Architectural Framework)**
- **API Contract Definition**: Specific endpoint paths, HTTP methods, and status codes
- **Resource Modeling**: Mapping domain entities to RESTful resources or GraphQL types
- **Request/Response Schemas**: Detailed data contracts, validation rules, and error formats
- **API Versioning Strategy**: Version management, deprecation policies, and migration paths
- **Developer Experience**: API documentation (OpenAPI/Swagger), code examples, and SDKs
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **System Architecture Decisions**: Microservices vs monolithic, overall communication patterns → Defers to **System Architect**
- **Canonical Data Model**: Underlying data schemas and entity relationships → Defers to **Data Architect**
- **UI/Frontend Integration**: How clients consume the API → Defers to **UI Designer**
#### **Handoff Points**
- **FROM System Architect**: Receives architectural constraints (REST vs GraphQL, sync vs async) that define the design space
- **FROM Data Architect**: Receives canonical data model and translates it into public API data contracts (as projection/view)
- **TO Frontend Teams**: Provides complete API specifications, documentation, and integration guides
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Check existing analysis
CHECK: brainstorm_dir/api-designer/analysis.md
IF EXISTS:
SHOW existing analysis summary
ASK: "Analysis exists. Do you want to:"
OPTIONS:
1. "Update with new insights" → Update existing
2. "Replace completely" → Generate new
3. "Cancel" → Exit without changes
ELSE:
CREATE new analysis
```
### Phase 3: Agent Task Generation
**Framework-Based Analysis** (when guidance-specification.md exists):
```bash
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Generate API designer analysis addressing topic framework
## Framework Integration Required
**MANDATORY**: Load and address guidance-specification.md discussion points
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
**Output Location**: {session.brainstorm_dir}/api-designer/analysis.md
## Analysis Requirements
1. **Load Topic Framework**: Read guidance-specification.md completely
2. **Address Each Discussion Point**: Respond to all 5 framework sections from API design perspective
3. **Include Framework Reference**: Start analysis.md with @../guidance-specification.md
4. **API Design Focus**: Emphasize endpoint structure, data contracts, versioning strategies
5. **Structured Response**: Use framework structure for analysis organization
## Framework Sections to Address
- Core Requirements (from API design perspective)
- Technical Considerations (detailed API architecture analysis)
- User Experience Factors (developer experience and API usability)
- Implementation Challenges (API design risks and solutions)
- Success Metrics (API performance metrics and adoption tracking)
## Output Structure Required
```markdown
# API Designer Analysis: [Topic]
**Framework Reference**: @../guidance-specification.md
**Role Focus**: Backend API Design and Contract Definition
## Core Requirements Analysis
[Address framework requirements from API design perspective]
## Technical Considerations
[Detailed API architecture and endpoint design analysis]
## Developer Experience Factors
[API usability, documentation, and integration ease]
## Implementation Challenges
[API design risks and mitigation strategies]
## Success Metrics
[API performance metrics, adoption rates, and developer satisfaction]
## API Design-Specific Recommendations
[Detailed API design recommendations and best practices]
```",
description="Generate API designer framework-based analysis")
```
### Phase 4: Update Mechanism
**Analysis Update Process**:
```bash
# For existing analysis updates
IF update_mode = "incremental":
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Update existing API designer analysis
## Current Analysis Context
**Existing Analysis**: @{session.brainstorm_dir}/api-designer/analysis.md
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
## Update Requirements
1. **Preserve Structure**: Maintain existing analysis structure
2. **Add New Insights**: Integrate new API design insights and recommendations
3. **Framework Alignment**: Ensure continued alignment with topic framework
4. **API Updates**: Add new endpoint patterns, versioning strategies, documentation improvements
5. **Maintain References**: Keep @../guidance-specification.md reference
## Update Instructions
- Read existing analysis completely
- Identify areas for enhancement or new insights
- Add API design depth while preserving original structure
- Update recommendations with new API design patterns and approaches
- Maintain framework discussion point addressing",
description="Update API designer analysis incrementally")
```
## Document Structure
### Output Files
```
.workflow/active/WFS-[topic]/.brainstorming/
├── guidance-specification.md # Input: Framework (if exists)
└── api-designer/
└── analysis.md # ★ OUTPUT: Framework-based analysis
```
### Analysis Structure
**Required Elements**:
- **Framework Reference**: @../guidance-specification.md (if framework exists)
- **Role Focus**: Backend API Design and Contract Definition perspective
- **5 Framework Sections**: Address each framework discussion point
- **API Design Recommendations**: Endpoint-specific insights and solutions
## ⚡ **Two-Step Execution Flow**
### ⚠️ Session Management - FIRST STEP
Session detection and selection:
```bash
# Check for active sessions
active_sessions=$(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
if [ multiple_sessions ]; then
prompt_user_to_select_session()
else
use_existing_or_create_new()
fi
```
### Step 1: Context Gathering Phase
**API Designer Perspective Questioning**
Before agent assignment, gather comprehensive API design context:
#### 📋 Role-Specific Questions
1. **API Type & Architecture**
- RESTful, GraphQL, or hybrid API approach?
- Synchronous vs asynchronous communication patterns?
- Real-time requirements (WebSocket, Server-Sent Events)?
2. **Resource Modeling & Endpoints**
- What are the core domain resources/entities?
- Expected CRUD operations for each resource?
- Complex query requirements (filtering, sorting, pagination)?
3. **Data Contracts & Validation**
- Request/response data format requirements (JSON, XML, Protocol Buffers)?
- Input validation and sanitization requirements?
- Data transformation and mapping needs?
4. **API Management & Governance**
- API versioning strategy requirements?
- Authentication and authorization mechanisms?
- Rate limiting and throttling requirements?
- API documentation and developer portal needs?
5. **Integration & Compatibility**
- Client platforms consuming the API (web, mobile, third-party)?
- Backward compatibility requirements?
- External API integrations needed?
#### Context Validation
- **Minimum Response**: Each answer must be ≥50 characters
- **Re-prompting**: Insufficient detail triggers follow-up questions
- **Context Storage**: Save responses to `.brainstorming/api-designer-context.md`
### Step 2: Agent Assignment with Flow Control
**Dedicated Agent Execution**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute dedicated api-designer conceptual analysis for: {topic}
ASSIGNED_ROLE: api-designer
OUTPUT_LOCATION: .brainstorming/api-designer/
USER_CONTEXT: {validated_responses_from_context_gathering}
Flow Control Steps:
[
{
\"step\": \"load_role_template\",
\"action\": \"Load api-designer planning template\",
\"command\": \"bash($(cat ~/.claude/workflows/cli-templates/planning-roles/api-designer.md))\",
\"output_to\": \"role_template\"
}
]
Conceptual Analysis Requirements:
- Apply api-designer perspective to topic analysis
- Focus on endpoint design, data contracts, and API governance
- Use loaded role template framework for analysis structure
- Generate role-specific deliverables in designated output location
- Address all user context from questioning phase
Deliverables:
- analysis.md: Main API design analysis
- api-specification.md: Detailed endpoint specifications
- data-contracts.md: Request/response schemas and validation rules
- api-documentation.md: API documentation strategy and templates
Embody api-designer role expertise for comprehensive conceptual planning."
```
### Progress Tracking
TodoWrite tracking for two-step process:
```json
[
{"content": "Gather API designer context through role-specific questioning", "status": "in_progress", "activeForm": "Gathering context"},
{"content": "Validate context responses and save to api-designer-context.md", "status": "pending", "activeForm": "Validating context"},
{"content": "Load api-designer planning template via flow control", "status": "pending", "activeForm": "Loading template"},
{"content": "Execute dedicated conceptual-planning-agent for api-designer role", "status": "pending", "activeForm": "Executing agent"}
]
```
## 📊 **Output Specification**
### Output Location
```
.workflow/active/WFS-{topic-slug}/.brainstorming/api-designer/
├── analysis.md # Primary API design analysis
├── api-specification.md # Detailed endpoint specifications (OpenAPI/Swagger)
├── data-contracts.md # Request/response schemas and validation rules
├── versioning-strategy.md # API versioning and backward compatibility plan
└── developer-guide.md # API usage documentation and integration examples
```
### Document Templates
#### analysis.md Structure
```markdown
# API Design Analysis: {Topic}
*Generated: {timestamp}*
## Executive Summary
[Key API design findings and recommendations overview]
## API Architecture Overview
### API Type Selection (REST/GraphQL/Hybrid)
### Communication Patterns
### Authentication & Authorization Strategy
## Resource Modeling
### Core Domain Resources
### Resource Relationships
### URL Structure and Naming Conventions
## Endpoint Design
### Resource Endpoints
- GET /api/v1/resources
- POST /api/v1/resources
- GET /api/v1/resources/{id}
- PUT /api/v1/resources/{id}
- DELETE /api/v1/resources/{id}
### Query Parameters
- Filtering: ?filter[field]=value
- Sorting: ?sort=field,-field2
- Pagination: ?page=1&limit=20
### HTTP Methods and Status Codes
- Success responses (2xx)
- Client errors (4xx)
- Server errors (5xx)
## Data Contracts
### Request Schemas
[JSON Schema or OpenAPI definitions]
### Response Schemas
[JSON Schema or OpenAPI definitions]
### Validation Rules
- Required fields
- Data types and formats
- Business logic constraints
## API Versioning Strategy
### Versioning Approach (URL/Header/Accept)
### Version Lifecycle Management
### Deprecation Policy
### Migration Paths
## Security & Governance
### Authentication Mechanisms
- OAuth 2.0 / JWT / API Keys
### Authorization Patterns
- RBAC / ABAC / Resource-based
### Rate Limiting & Throttling
### CORS and Security Headers
## Error Handling
### Standard Error Response Format
```json
{
"error": {
"code": "ERROR_CODE",
"message": "Human-readable error message",
"details": [],
"trace_id": "uuid"
}
}
```
### Error Code Taxonomy
### Validation Error Responses
## API Documentation
### OpenAPI/Swagger Specification
### Developer Portal Requirements
### Code Examples and SDKs
### Changelog and Migration Guides
## Performance Optimization
### Response Caching Strategies
### Compression (gzip, brotli)
### Field Selection (sparse fieldsets)
### Bulk Operations and Batch Endpoints
## Monitoring & Observability
### API Metrics
- Request count, latency, error rates
- Endpoint usage analytics
### Logging Strategy
### Distributed Tracing
## Developer Experience
### API Usability Assessment
### Integration Complexity
### SDK and Client Library Needs
### Sandbox and Testing Environments
```
#### api-specification.md Structure
```markdown
# API Specification: {Topic}
*OpenAPI 3.0 Specification*
## API Information
- **Title**: {API Name}
- **Version**: 1.0.0
- **Base URL**: https://api.example.com/v1
- **Contact**: api-team@example.com
## Endpoints
### Users API
#### GET /users
**Description**: Retrieve a list of users
**Parameters**:
- `page` (query, integer): Page number (default: 1)
- `limit` (query, integer): Items per page (default: 20, max: 100)
- `sort` (query, string): Sort field (e.g., "created_at", "-updated_at")
- `filter[status]` (query, string): Filter by user status
**Response 200**:
```json
{
"data": [
{
"id": "uuid",
"username": "string",
"email": "string",
"created_at": "2025-10-15T00:00:00Z"
}
],
"meta": {
"page": 1,
"limit": 20,
"total": 100
},
"links": {
"self": "/users?page=1",
"next": "/users?page=2",
"prev": null
}
}
```
#### POST /users
**Description**: Create a new user
**Request Body**:
```json
{
"username": "string (required, 3-50 chars)",
"email": "string (required, valid email)",
"password": "string (required, min 8 chars)",
"profile": {
"first_name": "string (optional)",
"last_name": "string (optional)"
}
}
```
**Response 201**:
```json
{
"data": {
"id": "uuid",
"username": "string",
"email": "string",
"created_at": "2025-10-15T00:00:00Z"
}
}
```
**Response 400** (Validation Error):
```json
{
"error": {
"code": "VALIDATION_ERROR",
"message": "Request validation failed",
"details": [
{
"field": "email",
"message": "Invalid email format"
}
]
}
}
```
[Continue for all endpoints...]
## Authentication
### OAuth 2.0 Flow
1. Client requests authorization
2. User grants permission
3. Client receives access token
4. Client uses token in requests
**Header Format**:
```
Authorization: Bearer {access_token}
```
## Rate Limiting
**Headers**:
- `X-RateLimit-Limit`: 1000
- `X-RateLimit-Remaining`: 999
- `X-RateLimit-Reset`: 1634270400
**Response 429** (Too Many Requests):
```json
{
"error": {
"code": "RATE_LIMIT_EXCEEDED",
"message": "API rate limit exceeded",
"retry_after": 3600
}
}
```
```
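For clients consuming an API specified this way, a minimal sketch of honoring the rate-limit contract documented above; `fetchWithRateLimit`, the URL, and the token handling are illustrative assumptions, not part of the specification:
```javascript
// Illustrative client-side sketch: retry after the server-indicated wait on 429 responses.
async function fetchWithRateLimit(url, token, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
    if (res.status !== 429) return res;
    const body = await res.json().catch(() => ({}));
    // Prefer the documented retry_after field; fall back to the Retry-After header, then 60s.
    const headerWait = Number(res.headers.get('Retry-After'));
    const waitSeconds = body.error?.retry_after
      ?? (Number.isFinite(headerWait) && headerWait > 0 ? headerWait : 60);
    await new Promise(resolve => setTimeout(resolve, waitSeconds * 1000));
  }
  throw new Error(`Rate limit still exceeded after ${maxRetries} retries: ${url}`);
}
```
Reading `retry_after` from the error body first keeps the client aligned with the documented error format, with the standard `Retry-After` header as a fallback.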
## 🔄 **Session Integration**
### Status Synchronization
Upon completion, update `workflow-session.json`:
```json
{
"phases": {
"BRAINSTORM": {
"api_designer": {
"status": "completed",
"completed_at": "timestamp",
"output_directory": ".workflow/active/WFS-{topic}/.brainstorming/api-designer/",
"key_insights": ["endpoint_design", "versioning_strategy", "data_contracts"]
}
}
}
}
```
### Cross-Role Collaboration
API designer perspective provides:
- **API Contract Specifications** → Frontend Developer
- **Data Schema Requirements** → Data Architect
- **Security Requirements** → Security Expert
- **Integration Endpoints** → System Architect
- **Performance Constraints** → DevOps Engineer
## ✅ **Quality Assurance**
### Required Analysis Elements
- [ ] Complete endpoint inventory with HTTP methods and paths
- [ ] Detailed request/response schemas with validation rules
- [ ] Clear versioning strategy and backward compatibility plan
- [ ] Comprehensive error handling and status code usage
- [ ] API documentation strategy (OpenAPI/Swagger)
### API Design Principles
- [ ] **Consistency**: Uniform naming conventions and patterns across all endpoints
- [ ] **Simplicity**: Intuitive resource modeling and URL structures
- [ ] **Flexibility**: Support for filtering, sorting, pagination, and field selection
- [ ] **Security**: Proper authentication, authorization, and input validation
- [ ] **Performance**: Caching strategies, compression, and efficient data structures
### Developer Experience Validation
- [ ] API is self-documenting with clear endpoint descriptions
- [ ] Error messages are actionable and helpful for debugging
- [ ] Response formats are consistent and predictable
- [ ] Code examples and integration guides are provided
- [ ] Sandbox environment available for testing
### Technical Completeness
- [ ] **Resource Modeling**: All domain entities mapped to API resources
- [ ] **CRUD Coverage**: Complete create, read, update, delete operations
- [ ] **Query Capabilities**: Advanced filtering, sorting, and search functionality
- [ ] **Versioning**: Clear version management and migration paths
- [ ] **Monitoring**: API metrics, logging, and tracing strategies defined
### Integration Readiness
- [ ] **Client Compatibility**: API works with all target client platforms
- [ ] **External Integration**: Third-party API dependencies identified
- [ ] **Backward Compatibility**: Changes don't break existing clients
- [ ] **Migration Path**: Clear upgrade paths for API consumers
- [ ] **SDK Support**: Client libraries and code generation considered

View File

@@ -1,10 +1,14 @@
---
name: artifacts
description: Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis
argument-hint: "topic or challenge description [--count N]"
argument-hint: "[-y|--yes] topic or challenge description [--count N]"
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-select recommended roles, skip all clarification questions, use default answers.
## Overview
Seven-phase workflow: **Context collection** → **Topic analysis** → **Role selection** → **Role questions** → **Conflict resolution** → **Final check** → **Generate specification**

View File

@@ -1,10 +1,14 @@
---
name: auto-parallel
description: Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives
argument-hint: "topic or challenge description" [--count N]
allowed-tools: SlashCommand(*), Task(*), TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*)
argument-hint: "[-y|--yes] topic or challenge description [--count N]"
allowed-tools: Skill(*), Task(*), TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-select recommended roles, skip all clarification questions, use default answers.
# Workflow Brainstorm Parallel Auto Command
## Coordinator Role
@@ -12,7 +16,7 @@ allowed-tools: SlashCommand(*), Task(*), TodoWrite(*), Read(*), Write(*), Bash(*
**This command is a pure orchestrator**: Executes 3 phases in sequence (interactive framework → parallel role analysis → synthesis), coordinating specialized commands/agents through task attachment model.
**Task Attachment Model**:
- SlashCommand execute **expands workflow** by attaching sub-tasks to current TodoWrite
- Skill execute **expands workflow** by attaching sub-tasks to current TodoWrite
- Task agent execute **attaches analysis tasks** to orchestrator's TodoWrite
- Phase 1: artifacts command attaches its internal tasks (Phase 1-5)
- Phase 2: N conceptual-planning-agent tasks attached in parallel
@@ -43,7 +47,7 @@ This workflow runs **fully autonomously** once triggered. Phase 1 (artifacts) ha
3. **Parse Every Output**: Extract selected_roles from workflow-session.json after Phase 1
4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
6. **Task Attachment Model**: SlashCommand and Task executes **attach** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
6. **Task Attachment Model**: Skill and Task executes **attach** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
7. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
8. **Parallel Execution**: Phase 2 attaches multiple agent tasks simultaneously for concurrent execution
@@ -70,7 +74,7 @@ This workflow runs **fully autonomously** once triggered. Phase 1 (artifacts) ha
**Step 1: Execute** - Interactive framework generation via artifacts command
```javascript
SlashCommand(command="/workflow:brainstorm:artifacts \"{topic}\" --count {N}")
Skill(skill="workflow:brainstorm:artifacts", args="\"{topic}\" --count {N}")
```
**What It Does**:
@@ -91,7 +95,7 @@ SlashCommand(command="/workflow:brainstorm:artifacts \"{topic}\" --count {N}")
- workflow-session.json contains selected_roles[] (metadata only, no content duplication)
- Session directory `.workflow/active/WFS-{topic}/.brainstorming/` exists
**TodoWrite Update (Phase 1 SlashCommand executed - tasks attached)**:
**TodoWrite Update (Phase 1 Skill executed - tasks attached)**:
```json
[
{"content": "Phase 0: Parameter Parsing", "status": "completed", "activeForm": "Parsing count parameter"},
@@ -106,7 +110,7 @@ SlashCommand(command="/workflow:brainstorm:artifacts \"{topic}\" --count {N}")
]
```
**Note**: SlashCommand execute **attaches** artifacts' 5 internal tasks. Orchestrator **executes** these tasks sequentially.
**Note**: Skill execute **attaches** artifacts' 5 internal tasks. Orchestrator **executes** these tasks sequentially.
**Next Action**: Tasks attached → **Execute Phase 1.1-1.5** sequentially
@@ -128,55 +132,30 @@ SlashCommand(command="/workflow:brainstorm:artifacts \"{topic}\" --count {N}")
### Phase 2: Parallel Role Analysis Execution
**For Each Selected Role**:
**For Each Selected Role** (unified role-analysis command):
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute {role-name} analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: {role-name}
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/{role}/
TOPIC: {user-provided-topic}
## Flow Control Steps
1. load_topic_framework → .workflow/active/WFS-{session}/.brainstorming/guidance-specification.md
2. load_role_template → ~/.claude/workflows/cli-templates/planning-roles/{role}.md
3. load_session_metadata → .workflow/active/WFS-{session}/workflow-session.json
4. load_style_skill (ui-designer only, if style_skill_package) → .claude/skills/style-{style_skill_package}/
## Analysis Requirements
**Primary Reference**: Original user prompt from workflow-session.json is authoritative
**Framework Source**: Address all discussion points in guidance-specification.md from {role-name} perspective
**Role Focus**: {role-name} domain expertise aligned with user intent
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md** (optionally with analysis-{slug}.md sub-documents)
2. **Framework Reference**: @../guidance-specification.md
3. **User Intent Alignment**: Validate against session_context
## Completion Criteria
- Address each discussion point from guidance-specification.md with {role-name} expertise
- Provide actionable recommendations from {role-name} perspective within analysis files
- All output files MUST start with `analysis` prefix (no recommendations.md or other naming)
- Reference framework document using @ notation for integration
- Update workflow-session.json with completion status
"
Skill(skill="workflow:brainstorm:role-analysis", args="{role-name} --session {session-id} --skip-questions")
```
**Parallel Execute**:
- Launch N agents simultaneously (one message with multiple Task calls)
- Each agent task **attached** to orchestrator's TodoWrite
- All agents execute concurrently, each attaching their own analysis sub-tasks
- Each agent operates independently reading same guidance-specification.md
**What It Does**:
- Unified command execution for each role
- Loads topic framework from guidance-specification.md
- Applies role-specific template and context
- Generates analysis.md addressing framework discussion points
- Supports optional interactive context gathering (via --include-questions flag)
**Parallel Execution**:
- Launch N Skill calls simultaneously (one message with multiple Skill invokes)
- Each role command **attached** to orchestrator's TodoWrite
- All roles execute concurrently, each reading same guidance-specification.md
- Each role operates independently
- For ui-designer only: append `--style-skill {style_skill_package}` if provided
**Input**:
- `selected_roles[]` from Phase 1
- `session_id` from Phase 1
- guidance-specification.md path
- `guidance-specification.md` (framework reference)
- `style_skill_package` (for ui-designer only)
**Validation**:
- Each role creates `.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md`
@@ -185,6 +164,7 @@ TOPIC: {user-provided-topic}
- **FORBIDDEN**: `recommendations.md` or any non-`analysis` prefixed files
- All N role analyses completed
**TodoWrite Update (Phase 2 agents executed - tasks attached in parallel)**:
```json
[
@@ -223,7 +203,7 @@ TOPIC: {user-provided-topic}
**Step 3: Execute** - Synthesis integration via synthesis command
```javascript
SlashCommand(command="/workflow:brainstorm:synthesis --session {sessionId}")
Skill(skill="workflow:brainstorm:synthesis", args="--session {sessionId}")
```
**What It Does**:
@@ -238,7 +218,7 @@ SlashCommand(command="/workflow:brainstorm:synthesis --session {sessionId}")
- `.workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.md` exists
- Synthesis references all role analyses
**TodoWrite Update (Phase 3 SlashCommand executed - tasks attached)**:
**TodoWrite Update (Phase 3 Skill executed - tasks attached)**:
```json
[
{"content": "Phase 0: Parameter Parsing", "status": "completed", "activeForm": "Parsing count parameter"},
@@ -251,7 +231,7 @@ SlashCommand(command="/workflow:brainstorm:synthesis --session {sessionId}")
]
```
**Note**: SlashCommand execute **attaches** synthesis' internal tasks. Orchestrator **executes** these tasks sequentially.
**Note**: Skill execute **attaches** synthesis' internal tasks. Orchestrator **executes** these tasks sequentially.
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
@@ -284,7 +264,7 @@ Synthesis: .workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.m
### Key Principles
1. **Task Attachment** (when SlashCommand/Task executed):
1. **Task Attachment** (when Skill/Task executed):
- Sub-command's or agent's internal tasks are **attached** to orchestrator's TodoWrite
- Phase 1: `/workflow:brainstorm:artifacts` attaches 5 internal tasks (Phase 1.1-1.5)
- Phase 2: Multiple `Task(conceptual-planning-agent)` calls attach N role analysis tasks simultaneously
@@ -309,9 +289,9 @@ Synthesis: .workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.m
### Brainstorming Workflow Specific Features
- **Phase 1**: Interactive framework generation with user Q&A (SlashCommand attachment)
- **Phase 1**: Interactive framework generation with user Q&A (Skill attachment)
- **Phase 2**: Parallel role analysis execution with N concurrent agents (Task agent attachments)
- **Phase 3**: Cross-role synthesis integration (SlashCommand attachment)
- **Phase 3**: Cross-role synthesis integration (Skill attachment)
- **Dynamic Role Count**: `--count N` parameter determines number of Phase 2 parallel tasks (default: 3, max: 9)
- **Mixed Execution**: Sequential (Phase 1, 3) and Parallel (Phase 2) task execution
@@ -424,6 +404,17 @@ CONTEXT_VARS:
- **Agent execution failure**: Agent-specific retry with minimal dependencies
- **Template loading issues**: Agent handles graceful degradation
- **Synthesis conflicts**: Synthesis highlights disagreements without resolution
- **Context overflow protection**: See below for automatic context management
## Context Overflow Protection
**Per-role limits**: See `conceptual-planning-agent.md` (< 3000 words main, < 2000 words sub-docs, max 5 sub-docs)
**Synthesis protection**: If total analysis > 100KB, synthesis reads only `analysis.md` files (not sub-documents)
**Recovery**: Check logs → reduce scope (--count 2) → use --summary-only → manual synthesis
**Prevention**: Start with --count 3, use structured topic format, review output sizes before synthesis
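A minimal sketch of the size check implied by the 100KB rule above, assuming direct filesystem access to the `.brainstorming/` directory; the helper name and shape are illustrative only:
```javascript
// Sketch only: decide whether synthesis reads sub-documents or main analyses only.
const fs = require('fs');
const path = require('path');

function filesForSynthesis(brainstormDir, limitBytes = 100 * 1024) {
  const roleDirs = fs.readdirSync(brainstormDir, { withFileTypes: true })
    .filter(e => e.isDirectory())
    .map(e => path.join(brainstormDir, e.name));
  const allAnalyses = roleDirs.flatMap(dir =>
    fs.readdirSync(dir).filter(f => f.startsWith('analysis')).map(f => path.join(dir, f)));
  const totalSize = allAnalyses.reduce((sum, f) => sum + fs.statSync(f).size, 0);
  // Over the limit: fall back to each role's main analysis.md and skip sub-documents.
  return totalSize > limitBytes
    ? allAnalyses.filter(f => path.basename(f) === 'analysis.md')
    : allAnalyses;
}
```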
## Reference Information
@@ -440,4 +431,3 @@ CONTEXT_VARS:
```
**Template Source**: `~/.claude/workflows/cli-templates/planning-roles/`

View File

@@ -1,220 +0,0 @@
---
name: data-architect
description: Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 📊 **Data Architect Analysis Generator**
### Purpose
**Specialized command for generating data-architect/analysis.md** that addresses guidance-specification.md discussion points from data architecture perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Data Architecture Focus**: Data models, pipelines, governance, and analytics perspective
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Data Model Design**: Efficient and scalable data models and schemas
- **Data Flow Design**: Data collection, processing, and storage workflows
- **Data Quality Management**: Data accuracy, completeness, and consistency
- **Analytics and Insights**: Data analysis and business intelligence solutions
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Canonical Data Model - Source of Truth)**
- **Canonical Data Model**: The authoritative, system-wide data schema representing domain entities and relationships
- **Entity-Relationship Design**: Defining entities, attributes, relationships, and constraints
- **Data Normalization & Optimization**: Ensuring data integrity, reducing redundancy, and optimizing storage
- **Database Schema Design**: Physical database structures, indexes, partitioning strategies
- **Data Pipeline Architecture**: ETL/ELT processes, data warehousing, and analytics pipelines
- **Data Governance**: Data quality standards, retention policies, and compliance requirements
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **API Data Contracts**: Public-facing request/response schemas exposed by APIs → Defers to **API Designer**
- **System Integration Patterns**: How services communicate at the macro level → Defers to **System Architect**
- **UI Data Presentation**: How data is displayed to users → Defers to **UI Designer**
#### **Handoff Points**
- **TO API Designer**: Provides canonical data model that API Designer translates into public API data contracts (as projection/view)
- **TO System Architect**: Provides data flow requirements and storage constraints to inform system design
- **FROM System Architect**: Receives system-level integration requirements and scalability constraints
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute data-architect analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: data-architect
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/data-architect/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load data-architect planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/data-architect.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from data architecture perspective
**Role Focus**: Data models, pipelines, governance, analytics platforms
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive data architecture analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with data architecture expertise
- Provide data model designs, pipeline architectures, and governance strategies
- Include scalability, performance, and quality considerations
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute data-architect analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing data-architect framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured data-architect analysis"
},
{
content: "Update workflow-session.json with data-architect completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/data-architect/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Data Architect Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Data Architecture perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with data architecture expertise]
### Core Requirements (from framework)
[Data architecture perspective on requirements]
### Technical Considerations (from framework)
[Data model, pipeline, and storage considerations]
### User Experience Factors (from framework)
[Data access patterns and analytics user experience]
### Implementation Challenges (from framework)
[Data migration, quality, and governance challenges]
### Success Metrics (from framework)
[Data quality metrics and analytics success criteria]
## Data Architecture Specific Recommendations
[Role-specific data architecture recommendations and solutions]
---
*Generated by data-architect analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"data_architect": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/data-architect/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Data architecture insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,200 +0,0 @@
---
name: product-manager
description: Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **Product Manager Analysis Generator**
### Purpose
**Specialized command for generating product-manager/analysis.md** that addresses guidance-specification.md discussion points from product strategy perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Product Strategy Focus**: User needs, business value, and market positioning
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **User Needs Analysis**: Target users, problems, and value propositions
- **Business Impact Assessment**: ROI, metrics, and commercial outcomes
- **Market Positioning**: Competitive analysis and differentiation
- **Product Strategy**: Roadmaps, priorities, and go-to-market approaches
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute product-manager analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: product-manager
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/product-manager/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load product-manager planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/product-manager.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from product strategy perspective
**Role Focus**: User value, business impact, market positioning, product strategy
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive product strategy analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with product management expertise
- Provide actionable business strategies and user value propositions
- Include market analysis and competitive positioning insights
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute product-manager analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing product-manager framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured product-manager analysis"
},
{
content: "Update workflow-session.json with product-manager completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/product-manager/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Product Manager Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Product Strategy perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with product management expertise]
### Core Requirements (from framework)
[Product strategy perspective on user needs and requirements]
### Technical Considerations (from framework)
[Business and technical feasibility considerations]
### User Experience Factors (from framework)
[User value proposition and market positioning analysis]
### Implementation Challenges (from framework)
[Business execution and go-to-market considerations]
### Success Metrics (from framework)
[Product success metrics and business KPIs]
## Product Strategy Specific Recommendations
[Role-specific product management strategies and business solutions]
---
*Generated by product-manager analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"product_manager": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/product-manager/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Product strategy insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -1,200 +0,0 @@
---
name: product-owner
description: Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **Product Owner Analysis Generator**
### Purpose
**Specialized command for generating product-owner/analysis.md** that addresses guidance-specification.md discussion points from product backlog and feature prioritization perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Product Backlog Focus**: Feature prioritization, user stories, and acceptance criteria
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Backlog Management**: User story creation, refinement, and prioritization
- **Stakeholder Alignment**: Requirements gathering, value definition, and expectation management
- **Feature Prioritization**: ROI analysis, MoSCoW method, and value-driven delivery
- **Acceptance Criteria**: Definition of Done, acceptance testing, and quality standards
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute product-owner analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: product-owner
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/product-owner/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load product-owner planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/product-owner.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from product backlog and feature prioritization perspective
**Role Focus**: Backlog management, stakeholder alignment, feature prioritization, acceptance criteria
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive product ownership analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with product ownership expertise
- Provide actionable user stories and acceptance criteria definitions
- Include feature prioritization and stakeholder alignment strategies
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute product-owner analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing product-owner framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured product-owner analysis"
},
{
content: "Update workflow-session.json with product-owner completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/product-owner/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Product Owner Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Product Backlog & Feature Prioritization perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with product ownership expertise]
### Core Requirements (from framework)
[User story formulation and backlog refinement perspective]
### Technical Considerations (from framework)
[Technical feasibility and implementation sequencing considerations]
### User Experience Factors (from framework)
[User value definition and acceptance criteria analysis]
### Implementation Challenges (from framework)
[Sprint planning, dependency management, and delivery strategies]
### Success Metrics (from framework)
[Feature adoption, value delivery metrics, and stakeholder satisfaction indicators]
## Product Owner Specific Recommendations
[Role-specific backlog management and feature prioritization strategies]
---
*Generated by product-owner analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"product_owner": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/product-owner/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Product ownership insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance

View File

@@ -0,0 +1,705 @@
---
name: role-analysis
description: Unified role-specific analysis generation with interactive context gathering and incremental updates
argument-hint: "[role-name] [--session session-id] [--update] [--include-questions] [--skip-questions]"
allowed-tools: Task(conceptual-planning-agent), AskUserQuestion(*), TodoWrite(*), Read(*), Write(*), Edit(*), Glob(*)
---
## 🎯 **Unified Role Analysis Generator**
### Purpose
**Unified command for generating and updating role-specific analysis** with interactive context gathering, framework alignment, and incremental update support. Replaces 9 individual role commands with single parameterized workflow.
### Core Function
- **Multi-Role Support**: Single command supports all 9 brainstorming roles
- **Interactive Context**: Dynamic question generation based on role and framework
- **Incremental Updates**: Merge new insights into existing analyses
- **Framework Alignment**: Address guidance-specification.md discussion points
- **Agent Delegation**: Use conceptual-planning-agent with role-specific templates
### Supported Roles
| Role ID | Title | Focus Area | Context Questions |
|---------|-------|------------|-------------------|
| `ux-expert` | UX专家 | User research, information architecture, user journey | 4 |
| `ui-designer` | UI设计师 | Visual design, high-fidelity mockups, design systems | 4 |
| `system-architect` | 系统架构师 | Technical architecture, scalability, integration patterns | 5 |
| `product-manager` | 产品经理 | Product strategy, roadmap, prioritization | 4 |
| `product-owner` | 产品负责人 | Backlog management, user stories, acceptance criteria | 4 |
| `scrum-master` | 敏捷教练 | Process facilitation, impediment removal, team dynamics | 3 |
| `subject-matter-expert` | 领域专家 | Domain knowledge, business rules, compliance | 4 |
| `data-architect` | 数据架构师 | Data models, storage strategies, data flow | 5 |
| `api-designer` | API设计师 | API contracts, versioning, integration patterns | 4 |
---
## 📋 **Usage**
```bash
# Generate new analysis with interactive context
/workflow:brainstorm:role-analysis ux-expert
# Generate with existing framework + context questions
/workflow:brainstorm:role-analysis system-architect --session WFS-xxx --include-questions
# Update existing analysis (incremental merge)
/workflow:brainstorm:role-analysis ui-designer --session WFS-xxx --update
# Quick generation (skip interactive context)
/workflow:brainstorm:role-analysis product-manager --session WFS-xxx --skip-questions
```
---
## ⚙️ **Execution Protocol**
### Phase 1: Detection & Validation
**Step 1.1: Role Validation**
```bash
VALIDATE role_name IN [
ux-expert, ui-designer, system-architect, product-manager,
product-owner, scrum-master, subject-matter-expert,
data-architect, api-designer
]
IF invalid:
ERROR: "Unknown role: {role_name}. Use one of: ux-expert, ui-designer, ..."
EXIT
```
**Step 1.2: Session Detection**
```bash
IF --session PROVIDED:
session_id = --session
brainstorm_dir = .workflow/active/{session_id}/.brainstorming/
ELSE:
FIND .workflow/active/WFS-*/
IF multiple:
PROMPT user to select
ELSE IF single:
USE existing
ELSE:
ERROR: "No active session. Run /workflow:brainstorm:artifacts first"
EXIT
VALIDATE brainstorm_dir EXISTS
```
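A minimal sketch of this detection logic in plain Node.js; the real command relies on the Glob and AskUserQuestion tools rather than `fs`, so treat this as illustrative only:
```javascript
// Sketch only: resolve the active session directory when --session is not given.
const fs = require('fs');
const path = require('path');

function detectSession(explicitSession) {
  const activeRoot = '.workflow/active';
  if (explicitSession) return path.join(activeRoot, explicitSession);
  const candidates = fs.existsSync(activeRoot)
    ? fs.readdirSync(activeRoot).filter(d => d.startsWith('WFS-'))
    : [];
  if (candidates.length === 0) {
    throw new Error('No active session. Run /workflow:brainstorm:artifacts first');
  }
  if (candidates.length > 1) {
    // The real command prompts the user to choose; here we just surface the choices.
    throw new Error(`Multiple sessions found, pass --session: ${candidates.join(', ')}`);
  }
  return path.join(activeRoot, candidates[0]);
}
```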
**Step 1.3: Framework Detection**
```bash
framework_file = {brainstorm_dir}/guidance-specification.md
IF framework_file EXISTS:
framework_mode = true
LOAD framework_content
ELSE:
WARN: "No framework found - will create standalone analysis"
framework_mode = false
```
**Step 1.4: Update Mode Detection**
```bash
existing_analysis = {brainstorm_dir}/{role_name}/analysis*.md
IF --update FLAG OR existing_analysis EXISTS:
update_mode = true
IF --update NOT PROVIDED:
ASK: "Analysis exists. Update or regenerate?"
OPTIONS: ["Incremental update", "Full regenerate", "Cancel"]
ELSE:
update_mode = false
```
### Phase 2: Interactive Context Gathering
**Trigger Conditions**:
- Default: Always ask unless `--skip-questions` provided
- `--include-questions`: Force context gathering even if analysis exists
- `--skip-questions`: Skip all interactive questions
**Step 2.1: Load Role Configuration**
```javascript
const roleConfig = {
'ux-expert': {
title: 'UX专家',
focus_area: 'User research, information architecture, user journey',
question_categories: ['User Intent', 'Requirements', 'UX'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/ux-expert.md'
},
'ui-designer': {
title: 'UI设计师',
focus_area: 'Visual design, high-fidelity mockups, design systems',
question_categories: ['Requirements', 'UX', 'Feasibility'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/ui-designer.md'
},
'system-architect': {
title: '系统架构师',
focus_area: 'Technical architecture, scalability, integration patterns',
question_categories: ['Scale & Performance', 'Technical Constraints', 'Architecture Complexity', 'Non-Functional Requirements'],
question_count: 5,
template: '~/.claude/workflows/cli-templates/planning-roles/system-architect.md'
},
'product-manager': {
title: '产品经理',
focus_area: 'Product strategy, roadmap, prioritization',
question_categories: ['User Intent', 'Requirements', 'Process'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/product-manager.md'
},
'product-owner': {
title: '产品负责人',
focus_area: 'Backlog management, user stories, acceptance criteria',
question_categories: ['Requirements', 'Decisions', 'Process'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/product-owner.md'
},
'scrum-master': {
title: '敏捷教练',
focus_area: 'Process facilitation, impediment removal, team dynamics',
question_categories: ['Process', 'Risk', 'Decisions'],
question_count: 3,
template: '~/.claude/workflows/cli-templates/planning-roles/scrum-master.md'
},
'subject-matter-expert': {
title: '领域专家',
focus_area: 'Domain knowledge, business rules, compliance',
question_categories: ['Requirements', 'Feasibility', 'Terminology'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/subject-matter-expert.md'
},
'data-architect': {
title: '数据架构师',
focus_area: 'Data models, storage strategies, data flow',
question_categories: ['Architecture', 'Scale & Performance', 'Technical Constraints', 'Feasibility'],
question_count: 5,
template: '~/.claude/workflows/cli-templates/planning-roles/data-architect.md'
},
'api-designer': {
title: 'API设计师',
focus_area: 'API contracts, versioning, integration patterns',
question_categories: ['Architecture', 'Requirements', 'Feasibility', 'Decisions'],
question_count: 4,
template: '~/.claude/workflows/cli-templates/planning-roles/api-designer.md'
}
};
config = roleConfig[role_name];
```
**Step 2.2: Generate Role-Specific Questions**
**Question Category Taxonomy** (9 core categories from synthesis.md, plus 4 architecture-focused extensions):
| Category | Focus | Example Question Pattern |
|----------|-------|--------------------------|
| User Intent | User goals | "该分析的核心目标是什么?" |
| Requirements | Requirement refinement | "需求的优先级如何排序?" |
| Architecture | Architecture decisions | "技术栈的选择考量?" |
| UX | User experience | "交互复杂度的取舍?" |
| Feasibility | Feasibility | "资源约束下的实现范围?" |
| Risk | Risk management | "风险容忍度是多少?" |
| Process | Process and cadence | "开发迭代的节奏?" |
| Decisions | Decision confirmation | "冲突的解决方案?" |
| Terminology | Terminology alignment | "统一使用哪个术语?" |
| Scale & Performance | Load and scalability | "预期的负载和性能要求?" |
| Technical Constraints | Technical constraints | "现有技术栈的限制?" |
| Architecture Complexity | Complexity trade-offs | "架构的复杂度权衡?" |
| Non-Functional Requirements | Availability and maintainability | "可用性和可维护性要求?" |
**Question Generation Algorithm**:
```javascript
async function generateQuestions(role_name, framework_content) {
const config = roleConfig[role_name];
const questions = [];
// Parse framework for keywords
const keywords = extractKeywords(framework_content);
// Generate category-specific questions
for (const category of config.question_categories) {
const question = generateCategoryQuestion(category, keywords, role_name);
questions.push(question);
}
return questions.slice(0, config.question_count);
}
```
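`extractKeywords` and `generateCategoryQuestion` are referenced but not defined here; a minimal sketch under the assumption that each question carries the `question`, `category`, and `options` fields consumed in Step 2.3:
```javascript
// Hypothetical helpers, shown only to make the question data shape concrete.
function extractKeywords(frameworkContent) {
  // Rank recurring words from the framework as rough topic keywords.
  const words = (frameworkContent.match(/[A-Za-z][A-Za-z-]{3,}/g) || []).map(w => w.toLowerCase());
  const counts = {};
  for (const w of words) counts[w] = (counts[w] || 0) + 1;
  return Object.keys(counts).sort((a, b) => counts[b] - counts[a]).slice(0, 10);
}

function generateCategoryQuestion(category, keywords, roleName) {
  // Questions are asked in Chinese, per the quality rules below; options carry business trade-offs.
  return {
    category,
    question: `[${roleName}] ${category}: 围绕 "${keywords[0] || '该主题'}" 的关键业务取舍是什么?`,
    options: [
      { label: '稳健方案', description: '优先稳定性与可维护性,交付节奏较慢' },
      { label: '快速方案', description: '优先交付速度,接受一定技术债' }
    ]
  };
}
```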
**Step 2.3: Multi-Round Question Execution**
```javascript
const BATCH_SIZE = 4;
const user_context = {};
for (let i = 0; i < questions.length; i += BATCH_SIZE) {
const batch = questions.slice(i, i + BATCH_SIZE);
const currentRound = Math.floor(i / BATCH_SIZE) + 1;
const totalRounds = Math.ceil(questions.length / BATCH_SIZE);
console.log(`\n[Round ${currentRound}/${totalRounds}] ${config.title} 上下文询问\n`);
const responses = AskUserQuestion({
questions: batch.map(q => ({
question: q.question,
header: q.category.substring(0, 12),
multiSelect: false,
options: q.options.map(opt => ({
label: opt.label,
description: opt.description
}))
}))
});
// Store responses before next round
for (const answer of responses) {
user_context[answer.question] = {
answer: answer.selected,
category: answer.category,
timestamp: new Date().toISOString()
};
}
}
// Save context to file
Write(
`${brainstorm_dir}/${role_name}/${role_name}-context.md`,
formatUserContext(user_context)
);
```
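`formatUserContext` is likewise undefined in this command; a minimal sketch, assuming it only needs to render the `user_context` map as markdown for the context file:
```javascript
// Hypothetical helper: renders the user_context map written above into markdown.
function formatUserContext(userContext) {
  const lines = ['# Interactive Context Responses', ''];
  for (const [question, entry] of Object.entries(userContext)) {
    lines.push(`## ${entry.category}`);
    lines.push(`- **Q**: ${question}`);
    lines.push(`- **A**: ${entry.answer}`);
    lines.push(`- *Answered*: ${entry.timestamp}`);
    lines.push('');
  }
  return lines.join('\n');
}
```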
**Question Quality Rules** (from artifacts.md):
**MUST Include**:
- ✅ All questions in Chinese (用中文提问)
- ✅ Business scenarios as the premise for every question
- ✅ Business impact explained for each technical option
- ✅ Quantified metrics and constraints
**MUST Avoid**:
- ❌ Pure technology selection without business context
- ❌ Overly abstract, generic questions
- ❌ Repeated questions detached from the framework
### Phase 3: Agent Execution
**Step 3.1: Load Session Metadata**
```bash
session_metadata = Read(.workflow/active/{session_id}/workflow-session.json)
original_topic = session_metadata.topic
selected_roles = session_metadata.selected_roles
```
**Step 3.2: Prepare Agent Context**
```javascript
const agentContext = {
role_name: role_name,
role_config: roleConfig[role_name],
output_location: `${brainstorm_dir}/${role_name}/`,
framework_mode: framework_mode,
framework_path: framework_mode ? `${brainstorm_dir}/guidance-specification.md` : null,
update_mode: update_mode,
user_context: user_context,
original_topic: original_topic,
session_id: session_id
};
```
**Step 3.3: Execute Conceptual Planning Agent**
**Framework-Based Analysis** (when guidance-specification.md exists):
```javascript
Task(
subagent_type="conceptual-planning-agent",
run_in_background=false,
description=`Generate ${role_name} analysis`,
prompt=`
[FLOW_CONTROL]
Execute ${role_name} analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: ${role_name}
OUTPUT_LOCATION: ${agentContext.output_location}
ANALYSIS_MODE: ${framework_mode ? "framework_based" : "standalone"}
UPDATE_MODE: ${update_mode}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(${agentContext.framework_path})
- Output: topic_framework_content
2. **load_role_template**
- Action: Load ${role_name} planning template
- Command: Read(${roleConfig[role_name].template})
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and user intent
- Command: Read(.workflow/active/${session_id}/workflow-session.json)
- Output: session_context
4. **load_user_context** (if exists)
- Action: Load interactive context responses
- Command: Read(${brainstorm_dir}/${role_name}/${role_name}-context.md)
- Output: user_context_answers
5. **${update_mode ? 'load_existing_analysis' : 'skip'}**
${update_mode ? `
- Action: Load existing analysis for incremental update
- Command: Read(${brainstorm_dir}/${role_name}/analysis.md)
- Output: existing_analysis_content
` : ''}
## Analysis Requirements
**Primary Reference**: Original user prompt from workflow-session.json is authoritative
**Framework Source**: Address all discussion points in guidance-specification.md from ${role_name} perspective
**User Context Integration**: Incorporate interactive Q&A responses into analysis
**Role Focus**: ${roleConfig[role_name].focus_area}
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md** (main document, optionally with analysis-{slug}.md sub-documents)
2. **Framework Reference**: @../guidance-specification.md (if framework_mode)
3. **User Context Reference**: @./${role_name}-context.md (if user context exists)
4. **User Intent Alignment**: Validate against session_context
## Update Requirements (if UPDATE_MODE)
- **Preserve Structure**: Maintain existing analysis structure
- **Add "Clarifications" Section**: Document new user context with timestamp
- **Merge Insights**: Integrate new perspectives without removing existing content
- **Resolve Conflicts**: If new context contradicts existing analysis, document both and recommend resolution
## Completion Criteria
- Address each discussion point from guidance-specification.md with ${role_name} expertise
- Provide actionable recommendations from ${role_name} perspective within analysis files
- All output files MUST start with "analysis" prefix (no recommendations.md or other naming)
- Reference framework document using @ notation for integration
- Update workflow-session.json with completion status
`
);
```
### Phase 4: Validation & Finalization
**Step 4.1: Validate Output**
```bash
VERIFY EXISTS: ${brainstorm_dir}/${role_name}/analysis.md
VERIFY CONTAINS: "@../guidance-specification.md" (if framework_mode)
IF user_context EXISTS:
VERIFY CONTAINS: "@./${role_name}-context.md" OR "## Clarifications" section
```
**Step 4.2: Update Session Metadata**
```json
{
"phases": {
"BRAINSTORM": {
"${role_name}": {
"status": "${update_mode ? 'updated' : 'completed'}",
"completed_at": "timestamp",
"framework_addressed": true,
"context_gathered": user_context ? true : false,
"output_location": "${brainstorm_dir}/${role_name}/analysis.md",
"update_history": [
{
"timestamp": "ISO8601",
"mode": "${update_mode ? 'incremental' : 'initial'}",
"context_questions": question_count
}
]
}
}
}
}
```
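A minimal sketch of applying this update to `workflow-session.json` without overwriting other roles' entries; the merge helper is an assumption, and only the field names above are taken from the spec:
```javascript
// Sketch only: read-modify-write the session file, touching just this role's entry.
const fs = require('fs');

function updateSessionMetadata(sessionFile, roleName, updateMode, questionCount, outputLocation) {
  const session = JSON.parse(fs.readFileSync(sessionFile, 'utf8'));
  session.phases = session.phases || {};
  session.phases.BRAINSTORM = session.phases.BRAINSTORM || {};
  const entry = session.phases.BRAINSTORM[roleName] || { update_history: [] };
  Object.assign(entry, {
    status: updateMode ? 'updated' : 'completed',
    completed_at: new Date().toISOString(),
    framework_addressed: true,
    context_gathered: questionCount > 0,
    output_location: outputLocation
  });
  entry.update_history = entry.update_history || [];
  entry.update_history.push({
    timestamp: new Date().toISOString(),
    mode: updateMode ? 'incremental' : 'initial',
    context_questions: questionCount
  });
  session.phases.BRAINSTORM[roleName] = entry;
  fs.writeFileSync(sessionFile, JSON.stringify(session, null, 2));
}
```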
**Step 4.3: Completion Report**
```markdown
✅ ${roleConfig[role_name].title} Analysis Complete
**Output**: ${brainstorm_dir}/${role_name}/analysis.md
**Mode**: ${update_mode ? 'Incremental Update' : 'New Generation'}
**Framework**: ${framework_mode ? '✓ Aligned' : '✗ Standalone'}
**Context Questions**: ${question_count} answered
${update_mode ? '
**Changes**:
- Added "Clarifications" section with new user context
- Merged new insights into existing sections
- Resolved conflicts with framework alignment
' : ''}
**Next Steps**:
${selected_roles.length > 1 ? `
- Continue with other roles: ${selected_roles.filter(r => r !== role_name).join(', ')}
- Run synthesis: /workflow:brainstorm:synthesis --session ${session_id}
` : `
- Clarify insights: /workflow:brainstorm:synthesis --session ${session_id}
- Generate plan: /workflow:plan --session ${session_id}
`}
```
---
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Phase 1: Detect session and validate role configuration",
status: "in_progress",
activeForm: "Detecting session and role"
},
{
content: "Phase 2: Interactive context gathering with AskUserQuestion",
status: "pending",
activeForm: "Gathering user context"
},
{
content: "Phase 3: Execute conceptual-planning-agent for role analysis",
status: "pending",
activeForm: "Executing agent analysis"
},
{
content: "Phase 4: Validate output and update session metadata",
status: "pending",
activeForm: "Finalizing and validating"
}
]
});
```
---
## 📊 **Output Structure**
### Directory Layout
```
.workflow/active/WFS-{session}/.brainstorming/
├── guidance-specification.md # Framework (if exists)
└── {role-name}/
├── {role-name}-context.md # Interactive Q&A responses
├── analysis.md # Main analysis (REQUIRED)
└── analysis-{slug}.md # Section documents (optional, max 5)
```
### Analysis Document Structure (New Generation)
```markdown
# ${roleConfig[role_name].title} Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: ${roleConfig[role_name].focus_area}
**User Context**: @./${role_name}-context.md
## User Context Summary
**Context Gathered**: ${question_count} questions answered
**Categories**: ${question_categories.join(', ')}
${user_context ? formatContextSummary(user_context) : ''}
## Discussion Points Analysis
[Address each point from guidance-specification.md with ${role_name} expertise]
### Core Requirements (from framework)
[Role-specific perspective on requirements]
### Technical Considerations (from framework)
[Role-specific technical analysis]
### User Experience Factors (from framework)
[Role-specific UX considerations]
### Implementation Challenges (from framework)
[Role-specific challenges and solutions]
### Success Metrics (from framework)
[Role-specific metrics and KPIs]
## ${roleConfig[role_name].title} Specific Recommendations
[Role-specific actionable strategies]
---
*Generated by ${role_name} analysis addressing structured framework*
*Context gathered: ${new Date().toISOString()}*
```
### Analysis Document Structure (Incremental Update)
```markdown
# ${roleConfig[role_name].title} Analysis: [Topic]
## Framework Reference
[Existing content preserved]
## Clarifications
### Session ${new Date().toISOString().split('T')[0]}
${Object.entries(user_context).map(([q, a]) => `
- **Q**: ${q} (Category: ${a.category})
**A**: ${a.answer}
`).join('\n')}
## User Context Summary
[Updated with new context]
## Discussion Points Analysis
[Existing content enhanced with new insights]
[Rest of sections updated based on clarifications]
```
---
## 🔄 **Integration with Other Commands**
### Called By
- `/workflow:brainstorm:auto-parallel` (Phase 2 - parallel role execution)
- Manual invocation for single-role analysis
### Calls To
- `conceptual-planning-agent` (agent execution)
- `AskUserQuestion` (interactive context gathering)
### Coordinates With
- `/workflow:brainstorm:artifacts` (creates framework for role analysis)
- `/workflow:brainstorm:synthesis` (reads role analyses for integration)
---
## ✅ **Quality Assurance**
### Required Analysis Elements
- [ ] Framework discussion points addressed (if framework_mode)
- [ ] User context integrated (if context gathered)
- [ ] Role template guidelines applied
- [ ] Output files follow naming convention (analysis*.md only)
- [ ] Framework reference using @ notation
- [ ] Session metadata updated
### Context Quality
- [ ] Questions in Chinese with business context
- [ ] Options include technical trade-offs
- [ ] Categories aligned with role focus
- [ ] No generic questions unrelated to framework
### Update Quality (if update_mode)
- [ ] "Clarifications" section added with timestamp
- [ ] New insights merged without content loss
- [ ] Conflicts documented and resolved
- [ ] Framework alignment maintained
---
## 🎛️ **Command Parameters**
### Required Parameters
- `[role-name]`: Role identifier (ux-expert, ui-designer, system-architect, etc.)
### Optional Parameters
- `--session [session-id]`: Specify brainstorming session (auto-detect if omitted)
- `--update`: Force incremental update mode (auto-detect if analysis exists)
- `--include-questions`: Force context gathering even if analysis exists
- `--skip-questions`: Skip all interactive context gathering
- `--style-skill [package]`: For ui-designer only, load style SKILL package
### Parameter Combinations
| Scenario | Command | Behavior |
|----------|---------|----------|
| New analysis | `role-analysis ux-expert` | Generate + ask context questions |
| Quick generation | `role-analysis ux-expert --skip-questions` | Generate without context |
| Update existing | `role-analysis ux-expert --update` | Ask clarifications + merge |
| Force questions | `role-analysis ux-expert --include-questions` | Ask even if exists |
| Specific session | `role-analysis ux-expert --session WFS-xxx` | Target specific session |
---
## 🚫 **Error Handling**
### Invalid Role Name
```
ERROR: Unknown role: "ui-expert"
Valid roles: ux-expert, ui-designer, system-architect, product-manager,
product-owner, scrum-master, subject-matter-expert,
data-architect, api-designer
```
### No Active Session
```
ERROR: No active brainstorming session found
Run: /workflow:brainstorm:artifacts "[topic]" to create session
```
### Missing Framework (with warning)
```
WARN: No guidance-specification.md found
Generating standalone analysis without framework alignment
Recommend: Run /workflow:brainstorm:artifacts first for better results
```
### Agent Execution Failure
```
ERROR: Conceptual planning agent failed
Check: ${brainstorm_dir}/${role_name}/error.log
Action: Retry with --skip-questions or check framework validity
```
---
## 🔧 **Advanced Usage**
### Batch Role Generation (via auto-parallel)
```bash
# This command handles multiple roles in parallel
/workflow:brainstorm:auto-parallel "topic" --count 3
# → Internally calls role-analysis for each selected role
```
### Manual Multi-Role Workflow
```bash
# 1. Create framework
/workflow:brainstorm:artifacts "Build real-time collaboration platform" --count 3
# 2. Generate each role with context
/workflow:brainstorm:role-analysis system-architect --include-questions
/workflow:brainstorm:role-analysis ui-designer --include-questions
/workflow:brainstorm:role-analysis product-manager --include-questions
# 3. Synthesize insights
/workflow:brainstorm:synthesis --session WFS-xxx
```
### Iterative Refinement
```bash
# Initial generation
/workflow:brainstorm:role-analysis ux-expert
# User reviews and wants more depth
/workflow:brainstorm:role-analysis ux-expert --update --include-questions
# → Asks clarification questions, merges new insights
```
---
## 📚 **Reference Information**
### Role Template Locations
- Templates: `~/.claude/workflows/cli-templates/planning-roles/`
- Format: `{role-name}.md` (e.g., `ux-expert.md`, `system-architect.md`)
### Related Commands
- `/workflow:brainstorm:artifacts` - Create framework and select roles
- `/workflow:brainstorm:auto-parallel` - Parallel multi-role execution
- `/workflow:brainstorm:synthesis` - Integrate role analyses
- `/workflow:plan` - Generate implementation plan from synthesis
### Context Package
- Location: `.workflow/active/WFS-{session}/.process/context-package.json`
- Used by: `context-search-agent` (Phase 0 of artifacts)
- Contains: Project context, tech stack, conflict risks


@@ -1,200 +0,0 @@
---
name: scrum-master
description: Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **Scrum Master Analysis Generator**
### Purpose
**Specialized command for generating scrum-master/analysis.md** that addresses guidance-specification.md discussion points from agile process and team collaboration perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Agile Process Focus**: Sprint planning, team dynamics, and delivery optimization
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Sprint Planning**: Task breakdown, estimation, and iteration planning
- **Team Collaboration**: Communication patterns, impediment removal, and facilitation
- **Process Optimization**: Agile ceremonies, retrospectives, and continuous improvement
- **Delivery Management**: Velocity tracking, burndown analysis, and release planning
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute scrum-master analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: scrum-master
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/scrum-master/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load scrum-master planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/scrum-master.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from agile process and team collaboration perspective
**Role Focus**: Sprint planning, team dynamics, process optimization, delivery management
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive agile process analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with scrum mastery expertise
- Provide actionable sprint planning and team facilitation strategies
- Include process optimization and impediment removal insights
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute scrum-master analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing scrum-master framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured scrum-master analysis"
},
{
content: "Update workflow-session.json with scrum-master completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/scrum-master/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Scrum Master Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Agile Process & Team Collaboration perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with scrum mastery expertise]
### Core Requirements (from framework)
[Sprint planning and iteration breakdown perspective]
### Technical Considerations (from framework)
[Technical debt management and process considerations]
### User Experience Factors (from framework)
[User story refinement and acceptance criteria analysis]
### Implementation Challenges (from framework)
[Impediment identification and removal strategies]
### Success Metrics (from framework)
[Velocity tracking, burndown metrics, and team performance indicators]
## Scrum Master Specific Recommendations
[Role-specific agile process optimization and team facilitation strategies]
---
*Generated by scrum-master analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"scrum_master": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/scrum-master/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Agile process insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance


@@ -1,200 +0,0 @@
---
name: subject-matter-expert
description: Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **Subject Matter Expert Analysis Generator**
### Purpose
**Specialized command for generating subject-matter-expert/analysis.md** that addresses guidance-specification.md discussion points from domain knowledge and technical expertise perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Domain Expertise Focus**: Deep technical knowledge, industry standards, and best practices
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Domain Knowledge**: Industry-specific expertise, regulatory requirements, and compliance
- **Technical Standards**: Best practices, design patterns, and architectural guidelines
- **Risk Assessment**: Technical debt, scalability concerns, and maintenance implications
- **Knowledge Transfer**: Documentation strategies, training requirements, and expertise sharing
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute subject-matter-expert analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: subject-matter-expert
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load subject-matter-expert planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/subject-matter-expert.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from domain expertise and technical standards perspective
**Role Focus**: Domain knowledge, technical standards, risk assessment, knowledge transfer
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive domain expertise analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with subject matter expertise
- Provide actionable technical standards and best practices recommendations
- Include risk assessment and compliance considerations
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute subject-matter-expert analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing subject-matter-expert framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured subject-matter-expert analysis"
},
{
content: "Update workflow-session.json with subject-matter-expert completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# Subject Matter Expert Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Domain Expertise & Technical Standards perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with subject matter expertise]
### Core Requirements (from framework)
[Domain-specific requirements and industry standards perspective]
### Technical Considerations (from framework)
[Deep technical analysis, architectural patterns, and best practices]
### User Experience Factors (from framework)
[Domain-specific usability standards and industry conventions]
### Implementation Challenges (from framework)
[Technical risks, scalability concerns, and maintenance implications]
### Success Metrics (from framework)
[Domain-specific KPIs, compliance metrics, and quality standards]
## Subject Matter Expert Specific Recommendations
[Role-specific technical expertise and industry best practices]
---
*Generated by subject-matter-expert analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"subject_matter_expert": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: Domain expertise insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance


@@ -1,10 +1,14 @@
---
name: synthesis
description: Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent
argument-hint: "[optional: --session session-id]"
argument-hint: "[-y|--yes] [optional: --session session-id]"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*), Edit(*), Glob(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-select all enhancements, skip clarification questions, use default answers.
## Overview
Six-phase workflow to eliminate ambiguities and enhance conceptual depth in role analyses:


@@ -1,389 +0,0 @@
---
name: system-architect
description: Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🏗️ **System Architect Analysis Generator**
### Purpose
**Specialized command for generating system-architect/analysis.md** that addresses guidance-specification.md discussion points from system architecture perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **Architecture Focus**: Technical architecture, scalability, and system design perspective
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Technical Architecture**: Scalable and maintainable system design
- **Technology Selection**: Stack evaluation and architectural decisions
- **Performance & Scalability**: Capacity planning and optimization strategies
- **Integration Patterns**: System communication and data flow design
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Macro-Architecture)**
- **System-Level Architecture**: Service boundaries, deployment topology, and system composition
- **Cross-Service Communication Patterns**: Choosing between microservices/monolithic, event-driven/request-response, sync/async patterns
- **Technology Stack Decisions**: Language, framework, database, and infrastructure choices
- **Non-Functional Requirements**: Scalability, performance, availability, disaster recovery, and monitoring strategies
- **Integration Planning**: How systems and services connect at the macro level (not specific API contracts)
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **API Contract Details**: Specific endpoint definitions, URL structures, HTTP methods → Defers to **API Designer**
- **Data Schemas**: Detailed data model design and entity relationships → Defers to **Data Architect**
- **UI/UX Design**: Interface design and user experience → Defers to **UX Expert** and **UI Designer**
#### **Handoff Points**
- **TO API Designer**: Provides architectural constraints (REST vs GraphQL, sync vs async) that define the API design space
- **TO Data Architect**: Provides system-level data flow requirements and integration patterns
- **FROM Data Architect**: Receives canonical data model to inform system integration design
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Check existing analysis
CHECK: brainstorm_dir/system-architect/analysis.md
IF EXISTS:
SHOW existing analysis summary
ASK: "Analysis exists. Do you want to:"
OPTIONS:
1. "Update with new insights" → Update existing
2. "Replace completely" → Generate new
3. "Cancel" → Exit without changes
ELSE:
CREATE new analysis
```
### Phase 3: Agent Task Generation
**Framework-Based Analysis** (when guidance-specification.md exists):
```bash
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Generate system architect analysis addressing topic framework
## Framework Integration Required
**MANDATORY**: Load and address guidance-specification.md discussion points
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
**Output Location**: {session.brainstorm_dir}/system-architect/analysis.md
## Analysis Requirements
1. **Load Topic Framework**: Read guidance-specification.md completely
2. **Address Each Discussion Point**: Respond to all 5 framework sections from system architecture perspective
3. **Include Framework Reference**: Start analysis.md with @../guidance-specification.md
4. **Technical Focus**: Emphasize scalability, architecture patterns, technology decisions
5. **Structured Response**: Use framework structure for analysis organization
## Framework Sections to Address
- Core Requirements (from architecture perspective)
- Technical Considerations (detailed architectural analysis)
- User Experience Factors (technical UX considerations)
- Implementation Challenges (architecture risks and solutions)
- Success Metrics (technical metrics and monitoring)
## Output Structure Required
```markdown
# System Architect Analysis: [Topic]
**Framework Reference**: @../guidance-specification.md
**Role Focus**: System Architecture and Technical Design
## Core Requirements Analysis
[Address framework requirements from architecture perspective]
## Technical Considerations
[Detailed architectural analysis]
## User Experience Factors
[Technical aspects of UX implementation]
## Implementation Challenges
[Architecture risks and mitigation strategies]
## Success Metrics
[Technical metrics and system monitoring]
## Architecture-Specific Recommendations
[Detailed technical recommendations]
```",
description="Generate system architect framework-based analysis")
```
### Phase 4: Update Mechanism
**Analysis Update Process**:
```bash
# For existing analysis updates
IF update_mode = "incremental":
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Update existing system architect analysis
## Current Analysis Context
**Existing Analysis**: @{session.brainstorm_dir}/system-architect/analysis.md
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
## Update Requirements
1. **Preserve Structure**: Maintain existing analysis structure
2. **Add New Insights**: Integrate new technical insights and recommendations
3. **Framework Alignment**: Ensure continued alignment with topic framework
4. **Technical Updates**: Add new architecture patterns, technology considerations
5. **Maintain References**: Keep @../guidance-specification.md reference
## Update Instructions
- Read existing analysis completely
- Identify areas for enhancement or new insights
- Add technical depth while preserving original structure
- Update recommendations with new architectural approaches
- Maintain framework discussion point addressing",
description="Update system architect analysis incrementally")
```
## Document Structure
### Output Files
```
.workflow/active/WFS-[topic]/.brainstorming/
├── guidance-specification.md # Input: Framework (if exists)
└── system-architect/
└── analysis.md # ★ OUTPUT: Framework-based analysis
```
### Analysis Structure
**Required Elements**:
- **Framework Reference**: @../guidance-specification.md (if framework exists)
- **Role Focus**: System Architecture and Technical Design perspective
- **5 Framework Sections**: Address each framework discussion point
- **Technical Recommendations**: Architecture-specific insights and solutions
- How should we design APIs and manage versioning?
**4. Performance and Scalability**
- Where are the current system performance bottlenecks?
- How should we handle traffic growth and scaling demands?
- What database scaling and optimization strategies are needed?
## ⚡ **Two-Step Execution Flow**
### ⚠️ Session Management - FIRST STEP
Session detection and selection:
```bash
# Check for existing sessions
existing_sessions=$(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
if [ multiple_sessions ]; then
prompt_user_to_select_session()
else
use_existing_or_create_new()
fi
```
### Step 1: Context Gathering Phase
**System Architect Perspective Questioning**
Before agent assignment, gather comprehensive system architecture context:
#### 📋 Role-Specific Questions
1. **Scale & Performance Requirements**
- Expected user load and traffic patterns?
- Performance requirements (latency, throughput)?
- Data volume and growth projections?
2. **Technical Constraints & Environment**
- Existing technology stack and constraints?
- Integration requirements with external systems?
- Infrastructure and deployment environment?
3. **Architecture Complexity & Patterns**
- Microservices vs monolithic considerations?
- Data consistency and transaction requirements?
- Event-driven vs request-response patterns?
4. **Non-Functional Requirements**
- High availability and disaster recovery needs?
- Security and compliance requirements?
- Monitoring and observability expectations?
#### Context Validation
- **Minimum Response**: Each answer must be ≥50 characters
- **Re-prompting**: Insufficient detail triggers follow-up questions
- **Context Storage**: Save responses to `.brainstorming/system-architect-context.md`
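A minimal sketch of the ≥50-character rule above; the shape of `responses` and the helper names are assumptions, not part of the command spec:
```javascript
// Hypothetical validation of context-gathering answers (each must be ≥50 characters).
const MIN_ANSWER_LENGTH = 50

function needsFollowUp(answer) {
  return typeof answer !== 'string' || answer.trim().length < MIN_ANSWER_LENGTH
}

function validateContext(responses) {
  // responses: { question: answer } collected during Step 1
  return Object.entries(responses)
    .filter(([, answer]) => needsFollowUp(answer))
    .map(([question]) => question) // questions that need re-prompting
}
```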
### Step 2: Agent Assignment with Flow Control
**Dedicated Agent Execution**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute dedicated system-architect conceptual analysis for: {topic}
ASSIGNED_ROLE: system-architect
OUTPUT_LOCATION: .brainstorming/system-architect/
USER_CONTEXT: {validated_responses_from_context_gathering}
Flow Control Steps:
[
{
\"step\": \"load_role_template\",
\"action\": \"Load system-architect planning template\",
\"command\": \"bash($(cat ~/.claude/workflows/cli-templates/planning-roles/system-architect.md))\",
\"output_to\": \"role_template\"
}
]
Conceptual Analysis Requirements:
- Apply system-architect perspective to topic analysis
- Focus on architectural patterns, scalability, and integration points
- Use loaded role template framework for analysis structure
- Generate role-specific deliverables in designated output location
- Address all user context from questioning phase
Deliverables:
- analysis.md: Main system architecture analysis
- recommendations.md: Architecture recommendations
- deliverables/: Architecture-specific outputs as defined in role template
Embody system-architect role expertise for comprehensive conceptual planning."
```
### Progress Tracking
TodoWrite tracking for two-step process:
```json
[
{"content": "Gather system architect context through role-specific questioning", "status": "in_progress", "activeForm": "Gathering context"},
{"content": "Validate context responses and save to system-architect-context.md", "status": "pending", "activeForm": "Validating context"},
{"content": "Load system-architect planning template via flow control", "status": "pending", "activeForm": "Loading template"},
{"content": "Execute dedicated conceptual-planning-agent for system-architect role", "status": "pending", "activeForm": "Executing agent"}
]
```
## 📊 **Output Specification**
### Output Location
```
.workflow/active/WFS-{topic-slug}/.brainstorming/system-architect/
├── analysis.md # Primary architecture analysis
├── architecture-design.md # Detailed system design and diagrams
├── technology-stack.md # Technology stack recommendations and justifications
└── integration-plan.md # System integration and API strategies
```
### Document Templates
#### analysis.md Structure
```markdown
# System Architecture Analysis: {Topic}
*Generated: {timestamp}*
## Executive Summary
[Key architectural findings and recommendations overview]
## Current State Assessment
### Existing Architecture Overview
### Technical Stack Analysis
### Performance Bottlenecks
### Technical Debt Assessment
## Requirements Analysis
### Functional Requirements
### Non-Functional Requirements
- Performance: [Response time, throughput requirements]
- Scalability: [User growth, data volume expectations]
- Availability: [Uptime requirements]
- Security: [Security requirements]
## Proposed Architecture
### High-Level Architecture Design
### Component Breakdown
### Data Flow Diagrams
### Technology Stack Recommendations
## Implementation Strategy
### Migration Planning
### Risk Mitigation
### Performance Optimization
### Security Considerations
## Scalability and Maintenance
### Horizontal Scaling Strategy
### Monitoring and Observability
### Deployment Strategy
### Long-term Maintenance Plan
```
## 🔄 **Session Integration**
### Status Synchronization
Upon completion, update `workflow-session.json`:
```json
{
"phases": {
"BRAINSTORM": {
"system_architect": {
"status": "completed",
"completed_at": "timestamp",
"output_directory": ".workflow/active/WFS-{topic}/.brainstorming/system-architect/",
"key_insights": ["scalability_bottleneck", "architecture_pattern", "technology_recommendation"]
}
}
}
}
```
### Cross-Role Collaboration
System architect perspective provides:
- **Technical Constraints and Possibilities** → Product Manager
- **Architecture Requirements and Limitations** → UI Designer
- **Data Architecture Requirements** → Data Architect
- **Security Architecture Framework** → Security Expert
- **Technical Implementation Framework** → Feature Planner
## ✅ **Quality Assurance**
### Required Analysis Elements
- [ ] Clear architecture diagrams and component designs
- [ ] Detailed technology stack evaluation and recommendations
- [ ] Scalability and performance analysis with metrics
- [ ] System integration and API design specifications
- [ ] Comprehensive risk assessment and mitigation strategies
### Architecture Design Principles
- [ ] **Scalability**: System can handle growth in users and data
- [ ] **Maintainability**: Clear code structure, easy to modify and extend
- [ ] **Reliability**: Built-in fault tolerance and recovery mechanisms
- [ ] **Security**: Integrated security controls and protection measures
- [ ] **Performance**: Meets response time and throughput requirements
### Technical Decision Validation
- [ ] Technology choices have thorough justification and comparison analysis
- [ ] Architectural patterns align with business requirements and constraints
- [ ] Integration solutions consider compatibility and maintenance costs
- [ ] Deployment strategies are feasible with acceptable risk levels
- [ ] Monitoring and operations strategies are comprehensive and actionable
### Implementation Readiness
- [ ] **Technical Feasibility**: All proposed solutions are technically achievable
- [ ] **Resource Planning**: Resource requirements clearly defined and realistic
- [ ] **Risk Management**: Technical risks identified with mitigation plans
- [ ] **Performance Validation**: Architecture can meet performance requirements
- [ ] **Evolution Strategy**: Design allows for future growth and changes


@@ -1,221 +0,0 @@
---
name: ui-designer
description: Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎨 **UI Designer Analysis Generator**
### Purpose
**Specialized command for generating ui-designer/analysis.md** that addresses guidance-specification.md discussion points from UI/UX design perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **UI/UX Focus**: User experience, interface design, and accessibility perspective
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **Visual Design**: Color palettes, typography, spacing, and visual hierarchy implementation
- **High-Fidelity Mockups**: Polished, pixel-perfect interface designs
- **Design System Implementation**: Component libraries, design tokens, and style guides
- **Micro-Interactions & Animations**: Transition effects, loading states, and interactive feedback
- **Responsive Design**: Layout adaptations for different screen sizes and devices
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Concrete Visual Interface Implementation)**
- **Visual Design Language**: Colors, typography, iconography, spacing, and overall aesthetic
- **High-Fidelity Mockups**: Polished designs showing exactly how the interface will look
- **Design System Components**: Building and documenting reusable UI components (buttons, inputs, cards, etc.)
- **Design Tokens**: Defining variables for colors, spacing, typography that can be used in code
- **Micro-Interactions**: Hover states, transitions, animations, and interactive feedback details
- **Responsive Layouts**: Adapting designs for mobile, tablet, and desktop breakpoints
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **User Research & Personas**: User behavior analysis and needs assessment → Defers to **UX Expert**
- **Information Architecture**: Content structure and navigation strategy → Defers to **UX Expert**
- **Low-Fidelity Wireframes**: Structural layouts without visual design → Defers to **UX Expert**
#### **Handoff Points**
- **FROM UX Expert**: Receives wireframes, user flows, and information architecture as the foundation for visual design
- **TO Frontend Developers**: Provides design specifications, component libraries, and design tokens for implementation
- **WITH API Designer**: Coordinates on data presentation and form validation feedback (visual aspects only)
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute ui-designer analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: ui-designer
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/ui-designer/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load ui-designer planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/ui-designer.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from UI/UX perspective
**Role Focus**: User experience design, interface optimization, accessibility compliance
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive UI/UX analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with UI/UX design expertise
- Provide actionable design recommendations and interface solutions
- Include accessibility considerations and WCAG compliance planning
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute ui-designer analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing ui-designer framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured ui-designer analysis"
},
{
content: "Update workflow-session.json with ui-designer completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/ui-designer/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# UI Designer Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: UI/UX Design perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with UI/UX expertise]
### Core Requirements (from framework)
[UI/UX perspective on requirements]
### Technical Considerations (from framework)
[Interface and design system considerations]
### User Experience Factors (from framework)
[Detailed UX analysis and recommendations]
### Implementation Challenges (from framework)
[Design implementation and accessibility considerations]
### Success Metrics (from framework)
[UX metrics and usability success criteria]
## UI/UX Specific Recommendations
[Role-specific design recommendations and solutions]
---
*Generated by ui-designer analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"ui_designer": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/ui-designer/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: UI/UX insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance


@@ -1,221 +0,0 @@
---
name: ux-expert
description: Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🎯 **UX Expert Analysis Generator**
### Purpose
**Specialized command for generating ux-expert/analysis.md** that addresses guidance-specification.md discussion points from user experience and interface design perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
- **UX Design Focus**: User interface, interaction patterns, and usability optimization
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **User Research**: User personas, behavioral analysis, and needs assessment
- **Information Architecture**: Content structure, navigation hierarchy, and mental models
- **User Journey Mapping**: User flows, task analysis, and interaction models
- **Usability Strategy**: Accessibility planning, cognitive load reduction, and user testing frameworks
- **Wireframing**: Low-fidelity layouts and structural prototypes (not visual design)
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Abstract User Experience & Research)**
- **User Research & Personas**: Understanding target users, their goals, pain points, and behaviors
- **Information Architecture**: Organizing content and defining navigation structures at a conceptual level
- **User Journey Mapping**: Defining user flows, task sequences, and interaction models
- **Wireframes & Low-Fidelity Prototypes**: Structural layouts showing information hierarchy (boxes and arrows, not colors/fonts)
- **Usability Testing Strategy**: Planning user testing, A/B tests, and validation methods
- **Accessibility Planning**: WCAG compliance strategy and inclusive design principles
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **Visual Design**: Colors, typography, spacing, visual style → Defers to **UI Designer**
- **High-Fidelity Mockups**: Polished, pixel-perfect designs → Defers to **UI Designer**
- **Component Implementation**: Design system components, CSS, animations → Defers to **UI Designer**
#### **Handoff Points**
- **TO UI Designer**: Provides wireframes, user flows, and information architecture that UI Designer will transform into high-fidelity visual designs
- **FROM User Research**: May receive external research data to inform UX decisions
- **TO Product Owner**: Provides user insights and validation results to inform feature prioritization
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Determine execution mode
IF framework_mode == true:
mode = "framework_based_analysis"
topic_ref = load_framework_topic()
discussion_points = extract_framework_points()
ELSE:
mode = "standalone_analysis"
topic_ref = provided_topic
discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control
**Framework-Based Analysis Generation**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute ux-expert analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: ux-expert
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/ux-expert/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load ux-expert planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/ux-expert.md))
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from user experience and interface design perspective
**Role Focus**: UI design, interaction patterns, usability optimization, design systems
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive UX design analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
## Completion Criteria
- Address each discussion point from guidance-specification.md with UX design expertise
- Provide actionable interface design and usability optimization strategies
- Include accessibility considerations and interaction pattern recommendations
- Reference framework document using @ notation for integration
"
```
## 📋 **TodoWrite Integration**
### Workflow Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Detect active session and locate topic framework",
status: "in_progress",
activeForm: "Detecting session and framework"
},
{
content: "Load guidance-specification.md and session metadata for context",
status: "pending",
activeForm: "Loading framework and session context"
},
{
content: "Execute ux-expert analysis using conceptual-planning-agent with FLOW_CONTROL",
status: "pending",
activeForm: "Executing ux-expert framework analysis"
},
{
content: "Generate analysis.md addressing all framework discussion points",
status: "pending",
activeForm: "Generating structured ux-expert analysis"
},
{
content: "Update workflow-session.json with ux-expert completion status",
status: "pending",
activeForm: "Updating session metadata"
}
]
});
```
## 📊 **Output Structure**
### Framework-Based Analysis
```
.workflow/active/WFS-{session}/.brainstorming/ux-expert/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure
```markdown
# UX Expert Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: User Experience & Interface Design perspective
## Discussion Points Analysis
[Address each point from guidance-specification.md with UX design expertise]
### Core Requirements (from framework)
[User interface and interaction design requirements perspective]
### Technical Considerations (from framework)
[Design system implementation and technical feasibility considerations]
### User Experience Factors (from framework)
[Usability optimization, accessibility, and user-centered design analysis]
### Implementation Challenges (from framework)
[Design implementation challenges and progressive enhancement strategies]
### Success Metrics (from framework)
[UX metrics including usability testing, user satisfaction, and design KPIs]
## UX Expert Specific Recommendations
[Role-specific interface design patterns and usability optimization strategies]
---
*Generated by ux-expert analysis addressing structured framework*
```
## 🔄 **Session Integration**
### Completion Status Update
```json
{
"ux_expert": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/active/WFS-{session}/.brainstorming/ux-expert/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}
```
### Integration Points
- **Framework Reference**: @../guidance-specification.md for structured discussion points
- **Cross-Role Synthesis**: UX design insights available for synthesis-report.md integration
- **Agent Autonomy**: Independent execution with framework guidance


@@ -1,7 +1,7 @@
---
name: clean
description: Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution
argument-hint: "[--dry-run] [\"focus area\"]"
argument-hint: "[-y|--yes] [--dry-run] [\"focus area\"]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Glob(*), Bash(*), Write(*)
---
@@ -21,8 +21,22 @@ Intelligent cleanup command that explores the codebase to identify the developme
```bash
/workflow:clean # Full intelligent cleanup (explore → analyze → confirm → execute)
/workflow:clean --yes # Auto mode (use safe defaults, no confirmation)
/workflow:clean --dry-run # Explore and analyze only, no execution
/workflow:clean "auth module" # Focus cleanup on specific area
/workflow:clean -y "auth module" # Auto mode with focus area
```
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
- **Categories to Clean**: Auto-selects `["Sessions"]` only (safest - only workflow sessions)
- **Risk Level**: Auto-selects `"Low only"` (only low-risk items)
- All confirmations skipped, proceeds directly to execution
**Flag Parsing**:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const dryRun = $ARGUMENTS.includes('--dry-run')
```
## Execution Process
@@ -132,7 +146,7 @@ Scan and analyze workflow session directories:
**Staleness criteria**:
- Active sessions: No modification >7 days + no related git commits
- Archives: >30 days old + no feature references in project.json
- Archives: >30 days old + no feature references in project-tech.json
- Lite-plan: >7 days old + plan.json not executed
- Debug: >3 days old + issue not in recent commits
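As a rough illustration of the active-session rule (>7 days idle, no related commits), a sketch under the assumption that "related git commits" means commits touching the session directory; helper names are illustrative:
```javascript
// Hypothetical staleness check for active workflow sessions (>7 days idle).
const { execSync } = require('child_process')
const { statSync } = require('fs')

const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000

function isStaleSession(sessionDir) {
  const idleMs = Date.now() - statSync(sessionDir).mtimeMs
  if (idleMs <= SEVEN_DAYS_MS) return false
  // No commits touching the session directory in the last 7 days.
  const commits = execSync(
    `git log --since="7 days ago" --oneline -- "${sessionDir}"`
  ).toString().trim()
  return commits.length === 0
}
```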
@@ -329,39 +343,57 @@ To execute cleanup: /workflow:clean
**Step 3.3: User Confirmation**
```javascript
AskUserQuestion({
questions: [
{
question: "Which categories to clean?",
header: "Categories",
multiSelect: true,
options: [
{
label: "Sessions",
description: `${manifest.summary.by_category.stale_sessions} stale workflow sessions`
},
{
label: "Documents",
description: `${manifest.summary.by_category.drifted_documents} drifted documents`
},
{
label: "Dead Code",
description: `${manifest.summary.by_category.dead_code} unused code files`
}
]
},
{
question: "Risk level to include?",
header: "Risk",
multiSelect: false,
options: [
{ label: "Low only", description: "Safest - only obviously stale items" },
{ label: "Low + Medium", description: "Recommended - includes likely unused items" },
{ label: "All", description: "Aggressive - includes high-risk items" }
]
}
]
})
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
let userSelection
if (autoYes) {
// Auto mode: Use safe defaults
console.log(`[--yes] Auto-selecting safe cleanup defaults:`)
console.log(` - Categories: Sessions only`)
console.log(` - Risk level: Low only`)
userSelection = {
categories: ["Sessions"],
risk: "Low only"
}
} else {
// Interactive mode: Ask user
userSelection = AskUserQuestion({
questions: [
{
question: "Which categories to clean?",
header: "Categories",
multiSelect: true,
options: [
{
label: "Sessions",
description: `${manifest.summary.by_category.stale_sessions} stale workflow sessions`
},
{
label: "Documents",
description: `${manifest.summary.by_category.drifted_documents} drifted documents`
},
{
label: "Dead Code",
description: `${manifest.summary.by_category.dead_code} unused code files`
}
]
},
{
question: "Risk level to include?",
header: "Risk",
multiSelect: false,
options: [
{ label: "Low only", description: "Safest - only obviously stale items" },
{ label: "Low + Medium", description: "Recommended - includes likely unused items" },
{ label: "All", description: "Aggressive - includes high-risk items" }
]
}
]
})
}
```
---
@@ -443,8 +475,8 @@ if (selectedCategories.includes('Sessions')) {
}
}
// Update project.json if features referenced deleted sessions
const projectPath = '.workflow/project.json'
// Update project-tech.json if features referenced deleted sessions
const projectPath = '.workflow/project-tech.json'
if (fileExists(projectPath)) {
const project = JSON.parse(Read(projectPath))
const deletedPaths = new Set(results.deleted)


@@ -0,0 +1,617 @@
---
name: workflow:collaborative-plan-with-file
description: Collaborative planning with Plan Note - Understanding agent creates shared plan-note.md template, parallel agents fill pre-allocated sections, conflict detection without merge. Outputs executable plan-note.md.
argument-hint: "[-y|--yes] <task description> [--max-agents=5]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), Write(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-approve splits, skip confirmations.
# Collaborative Planning Command
## Quick Start
```bash
# Basic usage
/workflow:collaborative-plan-with-file "Implement real-time notification system"
# With options
/workflow:collaborative-plan-with-file "Refactor authentication module" --max-agents=4
/workflow:collaborative-plan-with-file "Add payment gateway support" -y
```
**Context Source**: Understanding-Agent + Per-agent exploration
**Output Directory**: `.workflow/.planning/{session-id}/`
**Default Max Agents**: 5 (actual count based on requirement complexity)
**Core Innovation**: Plan Note - shared collaborative document, no merge needed
## Output Artifacts
### Phase 1: Understanding Agent
| Artifact | Description |
|----------|-------------|
| `plan-note.md` | Shared collaborative document with pre-allocated sections |
| `requirement-analysis.json` | Sub-domain assignments and TASK ID ranges |
### Phase 2: Per Sub-Agent
| Artifact | Description |
|----------|-------------|
| `planning-context.md` | Evidence paths + synthesized understanding |
| `plan.json` | Complete agent plan (detailed implementation) |
| Updates to `plan-note.md` | Agent fills pre-allocated sections |
### Phase 3: Final Output
| Artifact | Description |
|----------|-------------|
| `plan-note.md` | ⭐ Executable plan with conflict markers |
| `conflicts.json` | Detected conflicts with resolution options |
| `plan.md` | Human-readable summary |
## Overview
Unified collaborative planning workflow using **Plan Note** architecture:
1. **Understanding**: Agent analyzes requirements and creates plan-note.md template with pre-allocated sections
2. **Parallel Planning**: Each agent generates plan.json + fills their pre-allocated section in plan-note.md
3. **Conflict Detection**: Scan plan-note.md for conflicts (no merge needed)
4. **Completion**: Generate plan.md summary, ready for execution
```
┌─────────────────────────────────────────────────────────────────────────┐
│ PLAN NOTE COLLABORATIVE PLANNING │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Phase 1: Understanding & Template Creation │
│ ├─ Understanding-Agent analyzes requirements │
│ ├─ Identify 2-5 sub-domains (focus areas) │
│ ├─ Create plan-note.md with pre-allocated sections │
│ └─ Assign TASK ID ranges (no conflicts) │
│ │
│ Phase 2: Parallel Agent Execution (No Locks Needed) │
│ ┌──────────────┬──────────────┬──────────────┐ │
│ │ Agent 1 │ Agent 2 │ Agent N │ │
│ ├──────────────┼──────────────┼──────────────┤ │
│ │ Own Section │ Own Section │ Own Section │ ← Pre-allocated │
│ │ plan.json │ plan.json │ plan.json │ ← Detailed plans │
│ └──────────────┴──────────────┴──────────────┘ │
│ │
│ Phase 3: Conflict Detection (Single Source) │
│ ├─ Parse plan-note.md (all sections) │
│ ├─ Detect file/dependency/strategy conflicts │
│ └─ Update plan-note.md conflict section │
│ │
│ Phase 4: Completion (No Merge) │
│ ├─ Generate plan.md (human-readable) │
│ └─ Ready for execution │
│ │
└─────────────────────────────────────────────────────────────────────────┘
```
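A minimal sketch of the file-overlap part of conflict detection, assuming each agent's tasks have already been parsed out of plan-note.md into a map of task lists with a `files` field (the field name is illustrative, not the actual plan-note.md grammar):
```javascript
// Hypothetical Phase 3 check: flag files claimed by more than one agent.
function detectFileConflicts(agentTasks) {
  // agentTasks: { [agentName]: [{ id: 'TASK-001', files: ['src/a.ts'] }, ...] }
  const claims = new Map() // file -> Set of agents claiming it
  for (const [agent, tasks] of Object.entries(agentTasks)) {
    for (const task of tasks) {
      for (const file of task.files) {
        if (!claims.has(file)) claims.set(file, new Set())
        claims.get(file).add(agent)
      }
    }
  }
  return [...claims.entries()]
    .filter(([, agents]) => agents.size > 1)
    .map(([file, agents]) => ({ file, agents: [...agents] }))
}
```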
## Output Structure
```
.workflow/.planning/{CPLAN-slug-YYYY-MM-DD}/
├── plan-note.md # ⭐ Core: Requirements + Tasks + Conflicts
├── requirement-analysis.json # Phase 1: Sub-domain assignments
├── agents/ # Phase 2: Per-agent detailed plans
│ ├── {focus-area-1}/
│ │ ├── planning-context.md # Evidence + understanding
│ │ └── plan.json # Complete agent plan
│ ├── {focus-area-2}/
│ │ └── ...
│ └── {focus-area-N}/
│ └── ...
├── conflicts.json # Phase 3: Conflict details
└── plan.md # Phase 4: Human-readable summary
```
## Implementation
### Session Initialization
**Objective**: Create session context and directory structure for collaborative planning.
**Required Actions**:
1. Extract task description from `$ARGUMENTS`
2. Generate session ID with format: `CPLAN-{slug}-{date}`
- slug: lowercase, alphanumeric, max 30 chars
- date: YYYY-MM-DD (UTC+8)
3. Define session folder: `.workflow/.planning/{session-id}`
4. Parse command options:
- `--max-agents=N` (default: 5)
- `-y` or `--yes` for auto-approval mode
5. Create directory structure: `{session-folder}/agents/`
**Session Variables**:
- `sessionId`: Unique session identifier
- `sessionFolder`: Base directory for all artifacts
- `maxAgents`: Maximum number of parallel agents
- `autoMode`: Boolean for auto-confirmation
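A sketch of the initialization steps above, assuming `$ARGUMENTS` has already been split into an argv-style array; helper and variable names are illustrative:
```javascript
// Hypothetical session initialization for /workflow:collaborative-plan-with-file.
function initSession(args) {
  const taskDescription = args.filter(a => !a.startsWith('-')).join(' ')
  // Slug: lowercase alphanumeric, hyphen-separated, max 30 chars.
  const slug = taskDescription.toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '')
    .slice(0, 30)
    .replace(/-+$/, '')
  // Date in UTC+8, formatted YYYY-MM-DD.
  const date = new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString().split('T')[0]
  const sessionId = `CPLAN-${slug}-${date}`
  const maxAgentsArg = args.find(a => a.startsWith('--max-agents='))
  return {
    sessionId,
    sessionFolder: `.workflow/.planning/${sessionId}`,
    maxAgents: maxAgentsArg ? parseInt(maxAgentsArg.split('=')[1], 10) : 5,
    autoMode: args.includes('-y') || args.includes('--yes')
  }
}
```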
### Phase 1: Understanding & Template Creation
**Objective**: Analyze requirements and create the plan-note.md template with pre-allocated sections for parallel agents.
**Prerequisites**:
- Session initialized with valid sessionId and sessionFolder
- Task description available from $ARGUMENTS
**Guideline**: During the Understanding phase, prioritize identifying the latest documentation (README, design docs, architecture guides). When ambiguities exist, ask the user for clarification instead of assuming an interpretation.
**Workflow Steps**:
1. **Initialize Progress Tracking**
- Create 4 todo items for workflow phases
- Set Phase 1 status to `in_progress`
2. **Launch Understanding Agent**
- Agent type: `cli-lite-planning-agent`
- Execution mode: synchronous (run_in_background: false)
3. **Agent Tasks**:
- **Identify Latest Documentation**: Search for and prioritize latest README, design docs, architecture guides
- **Understand Requirements**: Extract core objective, key points, constraints from task description and latest docs
- **Identify Ambiguities**: List any unclear points or multiple possible interpretations
- **Form Clarification Checklist**: Prepare questions for user if ambiguities found (use AskUserQuestion)
- **Split Sub-Domains**: Identify 2-{maxAgents} parallelizable focus areas
- **Create Plan Note**: Generate plan-note.md with pre-allocated sections
**Output Files**:
| File | Purpose |
|------|---------|
| `{sessionFolder}/plan-note.md` | Collaborative template with pre-allocated sections per agent |
| `{sessionFolder}/requirement-analysis.json` | Sub-domain assignments and TASK ID ranges |
**requirement-analysis.json Schema**:
- `session_id`: Session identifier
- `original_requirement`: Task description
- `complexity`: Low | Medium | High
- `sub_domains[]`: Array of focus areas with task_id_range and estimated_effort
- `total_agents`: Number of agents to spawn
**Success Criteria**:
- Latest documentation identified and referenced (if available)
- Ambiguities resolved via user clarification (if any found)
- 2-{maxAgents} clear sub-domains identified
- Each sub-domain can be planned independently
- Plan Note template includes all pre-allocated sections
- TASK ID ranges have no overlap (100 IDs per agent)
- Requirements understanding is comprehensive
**Completion**:
- Log created artifacts
- Update Phase 1 todo status to `completed`
**Agent Call**:
```javascript
Task(
subagent_type="cli-lite-planning-agent",
run_in_background=false,
description="Understand requirements and create plan template",
prompt=`
## Mission: Create Plan Note Template
### Key Guidelines
1. **Prioritize Latest Documentation**: Search for and reference latest README, design docs, architecture guides when available
2. **Handle Ambiguities**: When requirement ambiguities exist, ask user for clarification (use AskUserQuestion) instead of assuming interpretations
### Input Requirements
${taskDescription}
### Tasks
1. **Understand Requirements**: Extract core objective, key points, constraints (reference latest docs when available)
2. **Identify Ambiguities**: List any unclear points or multiple possible interpretations
3. **Form Clarification Checklist**: Prepare questions for user if ambiguities found
4. **Split Sub-Domains**: Identify 2-${maxAgents} parallelizable focus areas
5. **Create Plan Note**: Generate plan-note.md with pre-allocated sections
### Output Files
**File 1**: ${sessionFolder}/plan-note.md
Structure Requirements:
- YAML frontmatter: session_id, original_requirement, created_at, contributors, sub_domains, agent_sections, agent_task_id_ranges, status
- Section: ## 需求理解 (Core objectives, key points, constraints, split strategy)
- Section: ## 任务池 - {Focus Area 1} (Pre-allocated task section for agent 1, TASK-001 ~ TASK-100)
- Section: ## 任务池 - {Focus Area 2} (Pre-allocated task section for agent 2, TASK-101 ~ TASK-200)
- ... (One task pool section per sub-domain)
- Section: ## 依赖关系 (Auto-generated after all agents complete)
- Section: ## 冲突标记 (Populated in Phase 3)
- Section: ## 上下文证据 - {Focus Area 1} (Evidence for agent 1)
- Section: ## 上下文证据 - {Focus Area 2} (Evidence for agent 2)
- ... (One evidence section per sub-domain)
**File 2**: ${sessionFolder}/requirement-analysis.json
- session_id, original_requirement, complexity, sub_domains[], total_agents
### Success Criteria
- [ ] 2-${maxAgents} clear sub-domains identified
- [ ] Each sub-domain can be planned independently
- [ ] Plan Note template includes all pre-allocated sections
- [ ] TASK ID ranges have no overlap (100 IDs per agent)
`
)
```
### Phase 2: Parallel Sub-Agent Execution
**Objective**: Launch parallel planning agents to fill their pre-allocated sections in plan-note.md.
**Prerequisites**:
- Phase 1 completed successfully
- `{sessionFolder}/requirement-analysis.json` exists with sub-domain definitions
- `{sessionFolder}/plan-note.md` template created
**Workflow Steps**:
1. **Load Sub-Domain Configuration**
- Read `{sessionFolder}/requirement-analysis.json`
- Extract sub-domains array with focus_area, description, task_id_range
2. **Update Progress Tracking**
- Set Phase 2 status to `in_progress`
- Add sub-todo for each agent
3. **User Confirmation** (unless autoMode)
- Display identified sub-domains with descriptions
- Options: "开始规划" / "调整拆分" / "取消"
- Skip if autoMode enabled
4. **Create Agent Directories**
- For each sub-domain: `{sessionFolder}/agents/{focus-area}/`
5. **Launch Parallel Agents**
- Agent type: `cli-lite-planning-agent`
- Execution mode: synchronous (run_in_background: false)
- Launch ALL agents in parallel (single message with multiple Task calls)
**Per-Agent Context**:
- Focus area name and description
- Assigned TASK ID range (no overlap with other agents)
- Session ID and folder path
**Per-Agent Tasks**:
| Task | Output | Description |
|------|--------|-------------|
| Generate plan.json | `{sessionFolder}/agents/{focus-area}/plan.json` | Complete detailed plan following schema |
| Update plan-note.md | Sync to shared file | Fill pre-allocated "任务池" and "上下文证据" sections |
**Task Summary Format** (for plan-note.md):
- Task header: `### TASK-{ID}: {Title} [{focus-area}]`
- Status, Complexity, Dependencies
- Scope description
- Modification points with file:line references
- Conflict risk assessment
**Evidence Format** (for plan-note.md):
- Related files with relevance scores
- Existing patterns identified
- Constraints discovered
**Agent Execution Rules**:
- Each agent modifies ONLY its pre-allocated sections
- Use assigned TASK ID range exclusively
- No locking needed (exclusive sections)
- Include conflict_risk assessment for each task
**Completion**:
- Wait for all agents to complete
- Log generated artifacts for each agent
- Update Phase 2 todo status to `completed`
**User Confirmation** (unless autoMode):
```javascript
if (!autoMode) {
AskUserQuestion({
questions: [{
question: `已识别 ${subDomains.length} 个子领域:\n${subDomains.map((s, i) => `${i+1}. ${s.focus_area}: ${s.description}`).join('\n')}\n\n确认开始并行规划?`,
header: "Confirm Split",
multiSelect: false,
options: [
{ label: "开始规划", description: "启动并行sub-agent" },
{ label: "调整拆分", description: "修改子领域划分" },
{ label: "取消", description: "退出规划" }
]
}]
})
}
```
**Launch Parallel Agents** (single message, multiple Task calls):
```javascript
// Create agent directories
subDomains.forEach(sub => {
Bash(`mkdir -p ${sessionFolder}/agents/${sub.focus_area}`)
})
// Launch all agents in parallel
subDomains.map(sub =>
Task(
subagent_type="cli-lite-planning-agent",
run_in_background=false,
description=`Plan: ${sub.focus_area}`,
prompt=`
## Sub-Agent Context
**Focus Area**: ${sub.focus_area}
**Description**: ${sub.description}
**TASK ID Range**: ${sub.task_id_range[0]}-${sub.task_id_range[1]}
**Session**: ${sessionId}
## Dual Output Tasks
### Task 1: Generate Complete plan.json
Output: ${sessionFolder}/agents/${sub.focus_area}/plan.json
Schema: ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json
### Task 2: Sync Summary to plan-note.md
**Locate Your Sections**:
- Task Pool: "## 任务池 - ${toTitleCase(sub.focus_area)}"
- Evidence: "## 上下文证据 - ${toTitleCase(sub.focus_area)}"
**Task Summary Format**:
- Task header: ### TASK-${sub.task_id_range[0]}: Task Title [${sub.focus_area}]
- Fields: 状态, 复杂度, 依赖, 范围, 修改点, 冲突风险
**Evidence Format**:
- 相关文件, 现有模式, 约束
## Execution Steps
1. Generate complete plan.json
2. Extract summary from plan.json
3. Read ${sessionFolder}/plan-note.md
4. Locate and replace your task pool section
5. Locate and replace your evidence section
6. Write back plan-note.md
## Important
- Only modify your pre-allocated sections
- Use assigned TASK ID range: ${sub.task_id_range[0]}-${sub.task_id_range[1]}
`
)
)
```
### Phase 3: Conflict Detection
**Objective**: Analyze plan-note.md for conflicts across all agent contributions without merging files.
**Prerequisites**:
- Phase 2 completed successfully
- All agents have updated plan-note.md with their sections
- `{sessionFolder}/plan-note.md` contains all task and evidence sections
**Workflow Steps**:
1. **Update Progress Tracking**
- Set Phase 3 status to `in_progress`
2. **Parse Plan Note**
- Read `{sessionFolder}/plan-note.md`
- Extract YAML frontmatter (session metadata)
- Parse markdown sections by heading levels
- Identify all "任务池" sections
3. **Extract All Tasks**
- For each "任务池" section:
- Extract tasks matching pattern: `### TASK-{ID}: {Title} [{author}]`
- Parse task details: status, complexity, dependencies, modification points, conflict risk
- Consolidate into single task list
4. **Detect Conflicts**
**File Conflicts**:
- Group modification points by file:location
- Identify locations modified by multiple agents
- Record: severity=high, tasks involved, agents involved, suggested resolution
**Dependency Cycles**:
- Build dependency graph from task dependencies
- Detect cycles using depth-first search
- Record: severity=critical, cycle path, suggested resolution
**Strategy Conflicts**:
- Group tasks by files they modify
- Identify files with high/medium conflict risk from multiple agents
- Record: severity=medium, tasks involved, agents involved, suggested resolution
5. **Generate Conflict Artifacts**
**conflicts.json**:
- Write to `{sessionFolder}/conflicts.json`
- Include: detected_at, total_tasks, total_agents, conflicts array
- Each conflict: type, severity, tasks_involved, description, suggested_resolution
**Update plan-note.md**:
- Locate "## 冲突标记" section
- Generate markdown summary of conflicts
- Replace section content with conflict markdown
6. **Completion**
- Log conflict detection summary
- Display conflict details if any found
- Update Phase 3 todo status to `completed`
**Conflict Types**:
| Type | Severity | Detection Logic |
|------|----------|-----------------|
| file_conflict | high | Same file:location modified by multiple agents |
| dependency_cycle | critical | Circular dependencies in task graph |
| strategy_conflict | medium | Multiple high-risk tasks in same file from different agents |
**Conflict Detection Functions**:
**parsePlanNote(markdown)**:
- Input: Raw markdown content of plan-note.md
- Process:
- Extract YAML frontmatter between `---` markers
- Parse frontmatter as YAML to get session metadata
- Scan for heading patterns `^(#{2,})\s+(.+)$`
- Build sections array with: level, heading, start position, content
- Output: `{ frontmatter: object, sections: array }`
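A minimal sketch of this parser, assuming a generic YAML library for the frontmatter and the heading pattern listed above:

```javascript
const yaml = require('js-yaml')  // assumption: any YAML parser works here

function parsePlanNote(markdown) {
  // Frontmatter sits between the leading "---" markers
  const fmMatch = markdown.match(/^---\n([\s\S]*?)\n---/)
  const frontmatter = fmMatch ? yaml.load(fmMatch[1]) : {}

  // Collect every level-2+ heading, then attach the content that follows it
  const sections = []
  const headingRe = /^(#{2,})\s+(.+)$/gm
  let match
  while ((match = headingRe.exec(markdown)) !== null) {
    sections.push({ level: match[1].length, heading: match[2].trim(), start: match.index })
  }
  sections.forEach((s, i) => {
    const end = i + 1 < sections.length ? sections[i + 1].start : markdown.length
    s.content = markdown.slice(s.start, end)  // includes the heading line itself
  })

  return { frontmatter, sections }
}
```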
**extractTasksFromSection(content, sectionHeading)**:
- Input: Section content text, section heading for attribution
- Process:
- Match task pattern: `### (TASK-\d+):\s+(.+?)\s+\[(.+?)\]`
- For each match: extract taskId, title, author
- Call parseTaskDetails for additional fields
- Output: Array of task objects with id, title, author, source_section, ...details
**parseTaskDetails(content)**:
- Input: Task content block
- Process:
- Extract fields via regex patterns:
- `**状态**:\s*(.+)` → status
- `**复杂度**:\s*(.+)` → complexity
- `**依赖**:\s*(.+)` → depends_on (extract TASK-\d+ references)
- `**冲突风险**:\s*(.+)` → conflict_risk
- Extract modification points: `-\s+\`([^`]+):\s*([^`]+)\`:\s*(.+)` → file, location, summary
- Output: Details object with status, complexity, depends_on[], modification_points[], conflict_risk
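A sketch of the field extraction, built from the regex patterns listed above; accepting both half-width and full-width colons is an added assumption:

```javascript
function parseTaskDetails(content) {
  // Grab a single "**label**: value" field (assumption: either colon variant may appear)
  const grab = (label) => {
    const m = content.match(new RegExp(`\\*\\*${label}\\*\\*[::]\\s*(.+)`))
    return m ? m[1].trim() : null
  }

  const dependsRaw = grab('依赖') || ''
  const details = {
    status: grab('状态'),
    complexity: grab('复杂度'),
    conflict_risk: grab('冲突风险'),
    depends_on: [...dependsRaw.matchAll(/TASK-\d+/g)].map(m => m[0]),
    modification_points: []
  }

  // Modification points: - `file: location`: summary
  const pointRe = /-\s+`([^`]+?):\s*([^`]+)`:\s*(.+)/g
  for (const m of content.matchAll(pointRe)) {
    details.modification_points.push({ file: m[1].trim(), location: m[2].trim(), summary: m[3].trim() })
  }
  return details
}
```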
**detectFileConflicts(tasks)**:
- Input: All tasks array
- Process:
- Build fileMap: `{ "file:location": [{ task_id, task_title, source_agent, change }] }`
- For each location with multiple modifications from different agents:
- Create conflict with type='file_conflict', severity='high'
- Include: location, tasks_involved, agents_involved, modifications
- Suggested resolution: 'Coordinate modification order or merge changes'
- Output: Array of file conflict objects
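A sketch of the grouping logic, assuming each task carries the `modification_points` and `author` fields produced by the parsers above:

```javascript
function detectFileConflicts(tasks) {
  // Build fileMap: "file:location" → all modifications touching that location
  const fileMap = {}
  for (const task of tasks) {
    for (const point of task.modification_points || []) {
      const key = `${point.file}:${point.location}`
      fileMap[key] = fileMap[key] || []
      fileMap[key].push({ task_id: task.id, task_title: task.title, source_agent: task.author, change: point.summary })
    }
  }

  const conflicts = []
  for (const [location, mods] of Object.entries(fileMap)) {
    const agents = [...new Set(mods.map(m => m.source_agent))]
    if (mods.length > 1 && agents.length > 1) {
      conflicts.push({
        type: 'file_conflict',
        severity: 'high',
        location,
        tasks_involved: mods.map(m => m.task_id),
        agents_involved: agents,
        modifications: mods,
        suggested_resolution: 'Coordinate modification order or merge changes'
      })
    }
  }
  return conflicts
}
```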
**detectDependencyCycles(tasks)**:
- Input: All tasks array
- Process:
- Build dependency graph: `{ taskId: [dependsOn_taskIds] }`
- Use DFS with recursion stack to detect cycles
- For each cycle found:
- Create conflict with type='dependency_cycle', severity='critical'
- Include: cycle path as tasks_involved
- Suggested resolution: 'Remove or reorganize dependencies'
- Output: Array of dependency cycle conflict objects
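A sketch of the cycle detection, using depth-first search with a recursion stack as described above:

```javascript
function detectDependencyCycles(tasks) {
  const graph = Object.fromEntries(tasks.map(t => [t.id, t.depends_on || []]))
  const conflicts = []
  const visited = new Set()
  const stack = []          // current DFS path (recursion stack)
  const onStack = new Set()

  function dfs(node) {
    visited.add(node)
    stack.push(node)
    onStack.add(node)
    for (const dep of graph[node] || []) {
      if (onStack.has(dep)) {
        // Cycle found: slice the current path from the repeated node
        const cycle = [...stack.slice(stack.indexOf(dep)), dep]
        conflicts.push({
          type: 'dependency_cycle',
          severity: 'critical',
          tasks_involved: cycle,
          description: `Circular dependency: ${cycle.join(' → ')}`,
          suggested_resolution: 'Remove or reorganize dependencies'
        })
      } else if (!visited.has(dep)) {
        dfs(dep)
      }
    }
    stack.pop()
    onStack.delete(node)
  }

  for (const t of tasks) if (!visited.has(t.id)) dfs(t.id)
  return conflicts
}
```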
**detectStrategyConflicts(tasks)**:
- Input: All tasks array
- Process:
- Group tasks by files they modify
- For each file with tasks from multiple agents:
- Filter for high/medium conflict_risk tasks
- If >1 high-risk tasks from different agents:
- Create conflict with type='strategy_conflict', severity='medium'
- Include: file, tasks_involved, agents_involved
- Suggested resolution: 'Review approaches and align on single strategy'
- Output: Array of strategy conflict objects
**generateConflictMarkdown(conflicts)**:
- Input: Array of conflict objects
- Process:
- If empty: return '✅ 无冲突检测到'
- For each conflict:
- Generate header: `### CONFLICT-{padded_index}: {description}`
- Add fields: 严重程度, 涉及任务, 涉及Agent
- Add 问题详情 based on conflict type
- Add 建议解决方案
- Add 决策状态: [ ] 待解决
- Output: Markdown string for plan-note.md "## 冲突标记" section
**replaceSectionContent(markdown, sectionHeading, newContent)**:
- Input: Original markdown, target section heading, new content
- Process:
- Find section heading position via regex
- Find next heading of same or higher level
- Replace content between heading and next section
- If section not found: append at end
- Output: Updated markdown string
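A sketch of the section replacement, assuming headings occupy their own lines as in the plan-note template:

```javascript
function replaceSectionContent(markdown, sectionHeading, newContent) {
  const lines = markdown.split('\n')
  const level = (sectionHeading.match(/^#+/) || ['##'])[0].length
  const start = lines.findIndex(l => l.trim() === sectionHeading.trim())
  if (start === -1) {
    // Section missing: append at end (defensive, matches the error-handling table)
    return markdown + `\n\n${sectionHeading}\n\n${newContent}\n`
  }
  // Find the next heading of the same or higher level
  let end = lines.length
  for (let i = start + 1; i < lines.length; i++) {
    const m = lines[i].match(/^(#{1,6})\s/)
    if (m && m[1].length <= level) { end = i; break }
  }
  return [...lines.slice(0, start + 1), '', newContent, '', ...lines.slice(end)].join('\n')
}
```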
### Phase 4: Completion
**Objective**: Generate human-readable plan summary and finalize workflow.
**Prerequisites**:
- Phase 3 completed successfully
- Conflicts detected and documented in plan-note.md
- All artifacts generated
**Workflow Steps**:
1. **Update Progress Tracking**
- Set Phase 4 status to `in_progress`
2. **Read Final State**
- Read `{sessionFolder}/plan-note.md`
- Extract frontmatter metadata
- Load conflicts from Phase 3
3. **Generate plan.md**
- Create human-readable summary including:
- Session metadata
- Requirements understanding
- Sub-domain breakdown
- Task overview by focus area
- Conflict report
- Execution instructions
4. **Write Summary File**
- Write to `{sessionFolder}/plan.md`
5. **Display Completion Summary**
- Session statistics
- File structure
- Execution command
- Conflict status
6. **Update Todo**
- Set Phase 4 status to `completed`
**plan.md Structure**:
| Section | Content |
|---------|---------|
| Header | Session ID, created time, original requirement |
| Requirements | Copy from plan-note.md "## 需求理解" section |
| Sub-Domain Split | List each focus area with description and task ID range |
| Task Overview | Tasks grouped by focus area with complexity and dependencies |
| Conflict Report | Summary of detected conflicts or "无冲突" |
| Execution | Command to execute the plan |
**Required Function** (semantic description):
- **generateHumanReadablePlan**: Extract sections from plan-note.md and format as readable plan.md with session info, requirements, tasks, and conflicts
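A minimal sketch, reusing the `parsePlanNote` helper sketched above; the execution line is left generic because the exact command depends on the surrounding workflow:

```javascript
function generateHumanReadablePlan(planNoteMarkdown, conflicts, sessionId) {
  const { frontmatter, sections } = parsePlanNote(planNoteMarkdown)
  // Collect the content of every section whose heading starts with the given prefix
  const pick = (prefix) => sections.filter(s => s.heading.startsWith(prefix)).map(s => s.content).join('\n')

  return [
    `# Plan Summary - ${sessionId}`,
    `**Created**: ${frontmatter.created_at}`,
    `**Original Requirement**: ${frontmatter.original_requirement}`,
    '',
    '## Requirements',
    pick('需求理解'),
    '## Task Overview',
    pick('任务池'),
    '## Conflict Report',
    conflicts.length ? pick('冲突标记') : '无冲突',
    '## Execution',
    'See plan-note.md and agents/{focus-area}/plan.json for executable details'  // assumption: execute command is workflow-specific
  ].join('\n')
}
```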
## Configuration
| Flag | Default | Description |
|------|---------|-------------|
| `--max-agents` | 5 | Maximum sub-agents to spawn |
| `-y, --yes` | false | Auto-confirm all decisions |
## Error Handling
| Error | Resolution |
|-------|------------|
| Understanding agent fails | Retry once, provide more context |
| Planning agent fails | Skip failed agent, continue with others |
| Section not found in plan-note | Agent creates section (defensive) |
| Conflict detection fails | Continue with empty conflicts |
## Best Practices
1. **Clear Requirements**: Detailed requirements → better sub-domain splitting
2. **Reference Latest Documentation**: Understanding agent should prioritize identifying and referencing latest docs (README, design docs, architecture guides)
3. **Ask When Uncertain**: When ambiguities or multiple interpretations exist, ask user for clarification instead of assuming
4. **Review Plan Note**: Check plan-note.md before execution
5. **Resolve Conflicts**: Address high/critical conflicts before execution
6. **Inspect Details**: Use agents/{focus-area}/plan.json for deep dive
---
**Now execute collaborative-plan-with-file for**: $ARGUMENTS

View File

@@ -0,0 +1,672 @@
---
name: debug-with-file
description: Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and Gemini-assisted correction
argument-hint: "[-y|--yes] \"bug description or error message\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm all decisions (hypotheses, fixes, iteration), use recommended settings.
# Workflow Debug-With-File Command (/workflow:debug-with-file)
## Overview
Enhanced evidence-based debugging with a **documented exploration process**. Records how understanding evolves, consolidates insights, and uses Gemini to correct misunderstandings.
**Core workflow**: Explore → Document → Log → Analyze → Correct Understanding → Fix → Verify
**Scope**: Adds temporary debug logging to observe program state and cleans up all instrumentation after resolution. Does NOT perform code injection or security testing, and does not modify program behavior.
**Key enhancements over /workflow:debug**:
- **understanding.md**: Timeline of exploration and learning
- **Gemini-assisted correction**: Validates and corrects hypotheses
- **Consolidation**: Simplifies proven-wrong understanding to avoid clutter
- **Learning retention**: Preserves what was learned, even from failed attempts
## Usage
```bash
/workflow:debug-with-file <BUG_DESCRIPTION>
# Arguments
<bug-description> Bug description, error message, or stack trace (required)
```
## Execution Process
```
Session Detection:
├─ Check if debug session exists for this bug
├─ EXISTS + understanding.md exists → Continue mode
└─ NOT_FOUND → Explore mode
Explore Mode:
├─ Locate error source in codebase
├─ Document initial understanding in understanding.md
├─ Generate testable hypotheses with Gemini validation
├─ Add NDJSON debug logging statements
└─ Output: Hypothesis list + await user reproduction
Analyze Mode:
├─ Parse debug.log, validate each hypothesis
├─ Use Gemini to analyze evidence and correct understanding
├─ Update understanding.md with:
│ ├─ New evidence
│ ├─ Corrected misunderstandings (strikethrough + correction)
│ └─ Consolidated current understanding
└─ Decision:
├─ Confirmed → Fix root cause
├─ Inconclusive → Add more logging, iterate
└─ All rejected → Gemini-assisted new hypotheses
Fix & Cleanup:
├─ Apply fix based on confirmed hypothesis
├─ User verifies
├─ Document final understanding + lessons learned
├─ Remove debug instrumentation
└─ If not fixed → Return to Analyze mode
```
## Implementation
### Session Setup & Mode Detection
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString() // shift by 8h so the ISO string reflects UTC+8 wall-clock time
const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `DBG-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.debug/${sessionId}`
const debugLogPath = `${sessionFolder}/debug.log`
const understandingPath = `${sessionFolder}/understanding.md`
const hypothesesPath = `${sessionFolder}/hypotheses.json`
// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const hasUnderstanding = sessionExists && fs.existsSync(understandingPath)
const logHasContent = sessionExists && fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0
const mode = logHasContent ? 'analyze' : (hasUnderstanding ? 'continue' : 'explore')
if (!sessionExists) {
bash(`mkdir -p ${sessionFolder}`)
}
```
---
### Explore Mode
**Step 1.1: Locate Error Source**
```javascript
// Extract keywords from bug description
const keywords = extractErrorKeywords(bug_description)
// Search codebase for error locations
const searchResults = []
for (const keyword of keywords) {
const results = Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
searchResults.push({ keyword, results })
}
// Identify affected files and functions
const affectedLocations = analyzeSearchResults(searchResults)
```
**Step 1.2: Document Initial Understanding**
Create `understanding.md` with exploration timeline:
```markdown
# Understanding Document
**Session ID**: ${sessionId}
**Bug Description**: ${bug_description}
**Started**: ${getUtc8ISOString()}
---
## Exploration Timeline
### Iteration 1 - Initial Exploration (${timestamp})
#### Current Understanding
Based on bug description and initial code search:
- Error pattern: ${errorPattern}
- Affected areas: ${affectedLocations.map(l => l.file).join(', ')}
- Initial hypothesis: ${initialThoughts}
#### Evidence from Code Search
${searchResults.map(r => `
**Keyword: "${r.keyword}"**
- Found in: ${r.results.files.join(', ')}
- Key findings: ${r.insights}
`).join('\n')}
#### Next Steps
- Generate testable hypotheses
- Add instrumentation
- Await reproduction
---
## Current Consolidated Understanding
${initialConsolidatedUnderstanding}
```
**Step 1.3: Gemini-Assisted Hypothesis Generation**
```bash
ccw cli -p "
PURPOSE: Generate debugging hypotheses for: ${bug_description}
Success criteria: Testable hypotheses with clear evidence criteria
TASK:
• Analyze error pattern and code search results
• Identify 3-5 most likely root causes
• For each hypothesis, specify:
- What might be wrong
- What evidence would confirm/reject it
- Where to add instrumentation
• Rank by likelihood
MODE: analysis
CONTEXT: @${sessionFolder}/understanding.md | Search results in understanding.md
EXPECTED:
- Structured hypothesis list (JSON format)
- Each hypothesis with: id, description, testable_condition, logging_point, evidence_criteria
- Likelihood ranking (1=most likely)
CONSTRAINTS: Focus on testable conditions
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```
Save Gemini output to `hypotheses.json`:
```json
{
"iteration": 1,
"timestamp": "2025-01-21T10:00:00+08:00",
"hypotheses": [
{
"id": "H1",
"description": "Data structure mismatch - expected key not present",
"testable_condition": "Check if target key exists in dict",
"logging_point": "file.py:func:42",
"evidence_criteria": {
"confirm": "data shows missing key",
"reject": "key exists with valid value"
},
"likelihood": 1,
"status": "pending"
}
],
"gemini_insights": "...",
"corrected_assumptions": []
}
```
**Step 1.4: Add NDJSON Debug Logging**
For each hypothesis, add temporary logging statements to observe program state at key execution points. Use NDJSON format for structured log parsing. These are read-only observations that do not modify program behavior.
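As an illustration, a JavaScript/TypeScript instrumentation point might look like the sketch below (mirroring the region-marker template from the original /workflow:debug command); the file, location, and captured fields are hypothetical, and the surrounding markers make cleanup in Step 3.3 a simple search-and-remove:

```javascript
// region debug [H1]
try {
  require('fs').appendFileSync("<debugLogPath>", JSON.stringify({
    sid: "<sessionId>",
    hid: "H1",
    loc: "src/config/update.ts:applyUpdate:42",   // hypothetical location
    msg: "Check config value before runtime update",
    data: { configValue: config?.value ?? null }, // capture only what the hypothesis needs
    ts: Date.now()
  }) + "\n");
} catch (_) {}
// endregion
```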
**Step 1.5: Update understanding.md**
Append hypothesis section:
```markdown
#### Hypotheses Generated (Gemini-Assisted)
${hypotheses.map(h => `
**${h.id}** (Likelihood: ${h.likelihood}): ${h.description}
- Logging at: ${h.logging_point}
- Testing: ${h.testable_condition}
- Evidence to confirm: ${h.evidence_criteria.confirm}
- Evidence to reject: ${h.evidence_criteria.reject}
`).join('\n')}
**Gemini Insights**: ${geminiInsights}
```
---
### Analyze Mode
**Step 2.1: Parse Debug Log**
```javascript
// Parse NDJSON log
const entries = Read(debugLogPath).split('\n')
.filter(l => l.trim())
.map(l => JSON.parse(l))
// Group by hypothesis
const byHypothesis = groupBy(entries, 'hid')
```
**Step 2.2: Gemini-Assisted Evidence Analysis**
```bash
ccw cli -p "
PURPOSE: Analyze debug log evidence to validate/correct hypotheses for: ${bug_description}
Success criteria: Clear verdict per hypothesis + corrected understanding
TASK:
• Parse log entries by hypothesis
• Evaluate evidence against expected criteria
• Determine verdict: confirmed | rejected | inconclusive
• Identify incorrect assumptions from previous understanding
• Suggest corrections to understanding
MODE: analysis
CONTEXT:
@${debugLogPath}
@${understandingPath}
@${hypothesesPath}
EXPECTED:
- Per-hypothesis verdict with reasoning
- Evidence summary
- List of incorrect assumptions with corrections
- Updated consolidated understanding
- Root cause if confirmed, or next investigation steps
CONSTRAINTS: Evidence-based reasoning only, no speculation
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```
**Step 2.3: Update Understanding with Corrections**
Append new iteration to `understanding.md`:
```markdown
### Iteration ${n} - Evidence Analysis (${timestamp})
#### Log Analysis Results
${results.map(r => `
**${r.id}**: ${r.verdict.toUpperCase()}
- Evidence: ${JSON.stringify(r.evidence)}
- Reasoning: ${r.reason}
`).join('\n')}
#### Corrected Understanding
Previous misunderstandings identified and corrected:
${corrections.map(c => `
- ~~${c.wrong}~~ → ${c.corrected}
- Why wrong: ${c.reason}
- Evidence: ${c.evidence}
`).join('\n')}
#### New Insights
${newInsights.join('\n- ')}
#### Gemini Analysis
${geminiAnalysis}
${confirmedHypothesis ? `
#### Root Cause Identified
**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}
Evidence supporting this conclusion:
${confirmedHypothesis.supportingEvidence}
` : `
#### Next Steps
${nextSteps}
`}
---
## Current Consolidated Understanding (Updated)
${consolidatedUnderstanding}
```
**Step 2.4: Consolidate Understanding**
At the bottom of `understanding.md`, update the consolidated section:
- Remove or simplify proven-wrong assumptions
- Keep them in strikethrough for reference
- Focus on current valid understanding
- Avoid repeating details from timeline
```markdown
## Current Consolidated Understanding
### What We Know
- ${validUnderstanding1}
- ${validUnderstanding2}
### What Was Disproven
- ~~Initial assumption: ${wrongAssumption}~~ (Evidence: ${disproofEvidence})
### Current Investigation Focus
${currentFocus}
### Remaining Questions
- ${openQuestion1}
- ${openQuestion2}
```
**Step 2.5: Update hypotheses.json**
```json
{
"iteration": 2,
"timestamp": "2025-01-21T10:15:00+08:00",
"hypotheses": [
{
"id": "H1",
"status": "rejected",
"verdict_reason": "Evidence shows key exists with valid value",
"evidence": {...}
},
{
"id": "H2",
"status": "confirmed",
"verdict_reason": "Log data confirms timing issue",
"evidence": {...}
}
],
"gemini_corrections": [
{
"wrong_assumption": "...",
"corrected_to": "...",
"reason": "..."
}
]
}
```
---
### Fix & Verification
**Step 3.1: Apply Fix**
(Same as original debug command)
**Step 3.2: Document Resolution**
Append to `understanding.md`:
```markdown
### Iteration ${n} - Resolution (${timestamp})
#### Fix Applied
- Modified files: ${modifiedFiles.join(', ')}
- Fix description: ${fixDescription}
- Root cause addressed: ${rootCause}
#### Verification Results
${verificationResults}
#### Lessons Learned
What we learned from this debugging session:
1. ${lesson1}
2. ${lesson2}
3. ${lesson3}
#### Key Insights for Future
- ${insight1}
- ${insight2}
```
**Step 3.3: Cleanup**
Remove all temporary debug logging statements added during investigation. Verify no instrumentation code remains in production code.
---
## Session Folder Structure
```
.workflow/.debug/DBG-{slug}-{date}/
├── debug.log # NDJSON log (execution evidence)
├── understanding.md # NEW: Exploration timeline + consolidated understanding
├── hypotheses.json # NEW: Hypothesis history with verdicts
└── resolution.md # Optional: Final summary
```
## Understanding Document Template
```markdown
# Understanding Document
**Session ID**: DBG-xxx-2025-01-21
**Bug Description**: [original description]
**Started**: 2025-01-21T10:00:00+08:00
---
## Exploration Timeline
### Iteration 1 - Initial Exploration (2025-01-21 10:00)
#### Current Understanding
...
#### Evidence from Code Search
...
#### Hypotheses Generated (Gemini-Assisted)
...
### Iteration 2 - Evidence Analysis (2025-01-21 10:15)
#### Log Analysis Results
...
#### Corrected Understanding
- ~~[wrong]~~ → [corrected]
#### Gemini Analysis
...
---
## Current Consolidated Understanding
### What We Know
- [valid understanding points]
### What Was Disproven
- ~~[disproven assumptions]~~
### Current Investigation Focus
[current focus]
### Remaining Questions
- [open questions]
```
## Iteration Flow
```
First Call (/workflow:debug-with-file "error"):
├─ No session exists → Explore mode
├─ Extract error keywords, search codebase
├─ Document initial understanding in understanding.md
├─ Use Gemini to generate hypotheses
├─ Add logging instrumentation
└─ Await user reproduction
After Reproduction (/workflow:debug-with-file "error"):
├─ Session exists + debug.log has content → Analyze mode
├─ Parse log, use Gemini to evaluate hypotheses
├─ Update understanding.md with:
│ ├─ Evidence analysis results
│ ├─ Corrected misunderstandings (strikethrough)
│ ├─ New insights
│ └─ Updated consolidated understanding
├─ Update hypotheses.json with verdicts
└─ Decision:
├─ Confirmed → Fix → Document resolution
├─ Inconclusive → Add logging, document next steps
└─ All rejected → Gemini-assisted new hypotheses
Output:
├─ .workflow/.debug/DBG-{slug}-{date}/debug.log
├─ .workflow/.debug/DBG-{slug}-{date}/understanding.md (evolving document)
└─ .workflow/.debug/DBG-{slug}-{date}/hypotheses.json (history)
```
## Gemini Integration Points
### 1. Hypothesis Generation (Explore Mode)
**Purpose**: Generate evidence-based, testable hypotheses
**Prompt Pattern**:
```
PURPOSE: Generate debugging hypotheses + evidence criteria
TASK: Analyze error + code → testable hypotheses with clear pass/fail criteria
CONTEXT: @understanding.md (search results)
EXPECTED: JSON with hypotheses, likelihood ranking, evidence criteria
```
### 2. Evidence Analysis (Analyze Mode)
**Purpose**: Validate hypotheses and correct misunderstandings
**Prompt Pattern**:
```
PURPOSE: Analyze debug log evidence + correct understanding
TASK: Evaluate each hypothesis → identify wrong assumptions → suggest corrections
CONTEXT: @debug.log @understanding.md @hypotheses.json
EXPECTED: Verdicts + corrections + updated consolidated understanding
```
### 3. New Hypothesis Generation (After All Rejected)
**Purpose**: Generate new hypotheses based on what was disproven
**Prompt Pattern**:
```
PURPOSE: Generate new hypotheses given disproven assumptions
TASK: Review rejected hypotheses → identify knowledge gaps → new investigation angles
CONTEXT: @understanding.md (with disproven section) @hypotheses.json
EXPECTED: New hypotheses avoiding previously rejected paths
```
## Error Correction Mechanism
### Correction Format in understanding.md
```markdown
#### Corrected Understanding
- ~~Assumed dict key "config" was missing~~ → Key exists, but value is None
- Why wrong: Only checked existence, not value validity
- Evidence: H1 log shows {"config": null, "exists": true}
- ~~Thought error occurred in initialization~~ → Error happens during runtime update
- Why wrong: Stack trace misread as init code
- Evidence: H2 timestamp shows 30s after startup
```
### Consolidation Rules
When updating "Current Consolidated Understanding":
1. **Simplify disproven items**: Move to "What Was Disproven" with single-line summary
2. **Keep valid insights**: Promote confirmed findings to "What We Know"
3. **Avoid duplication**: Don't repeat timeline details in consolidated section
4. **Focus on current state**: What do we know NOW, not the journey
5. **Preserve key corrections**: Keep important wrong→right transformations for learning
**Bad (cluttered)**:
```markdown
## Current Consolidated Understanding
In iteration 1 we thought X, but in iteration 2 we found Y, then in iteration 3...
Also we checked A and found B, and then we checked C...
```
**Good (consolidated)**:
```markdown
## Current Consolidated Understanding
### What We Know
- Error occurs during runtime update, not initialization
- Config value is None (not missing key)
### What Was Disproven
- ~~Initialization error~~ (Timing evidence)
- ~~Missing key hypothesis~~ (Key exists)
### Current Investigation Focus
Why is config value None during update?
```
## Post-Completion Expansion
After completion, ask the user whether to expand the result into issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`
---
## Error Handling
| Situation | Action |
|-----------|--------|
| Empty debug.log | Verify reproduction triggered the code path |
| All hypotheses rejected | Use Gemini to generate new hypotheses based on disproven assumptions |
| Fix doesn't work | Document failed fix attempt, iterate with refined understanding |
| >5 iterations | Review consolidated understanding, escalate to `/workflow:lite-fix` with full context |
| Gemini unavailable | Fallback to manual hypothesis generation, document without Gemini insights |
| Understanding too long | Consolidate aggressively, archive old iterations to separate file |
## Comparison with /workflow:debug
| Feature | /workflow:debug | /workflow:debug-with-file |
|---------|-----------------|---------------------------|
| NDJSON debug logging | ✅ | ✅ |
| Hypothesis generation | Manual | Gemini-assisted |
| Exploration documentation | ❌ | ✅ understanding.md |
| Understanding evolution | ❌ | ✅ Timeline + corrections |
| Error correction | ❌ | ✅ Strikethrough + reasoning |
| Consolidated learning | ❌ | ✅ Current understanding section |
| Hypothesis history | ❌ | ✅ hypotheses.json |
| Gemini validation | ❌ | ✅ At key decision points |
## Usage Recommendations (Requires User Confirmation)
**Use `Skill(skill="workflow:debug-with-file", args="\"bug description\"")` when:**
- Complex bugs requiring multiple investigation rounds
- Learning from debugging process is valuable
- Team needs to understand debugging rationale
- Bug might recur, documentation helps prevention
**Use `Skill(skill="ccw-debug", args="--mode cli \"issue\"")` when:**
- Simple, quick bugs
- One-off issues
- Documentation overhead not needed

View File

@@ -1,321 +0,0 @@
---
name: debug
description: Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved
argument-hint: "\"bug description or error message\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
# Workflow Debug Command (/workflow:debug)
## Overview
Evidence-based interactive debugging command. Systematically identifies root causes through hypothesis-driven logging and iterative verification.
**Core workflow**: Explore → Add Logging → Reproduce → Analyze Log → Fix → Verify
## Usage
```bash
/workflow:debug <BUG_DESCRIPTION>
# Arguments
<bug-description> Bug description, error message, or stack trace (required)
```
## Execution Process
```
Session Detection:
├─ Check if debug session exists for this bug
├─ EXISTS + debug.log has content → Analyze mode
└─ NOT_FOUND or empty log → Explore mode
Explore Mode:
├─ Locate error source in codebase
├─ Generate testable hypotheses (dynamic count)
├─ Add NDJSON logging instrumentation
└─ Output: Hypothesis list + await user reproduction
Analyze Mode:
├─ Parse debug.log, validate each hypothesis
└─ Decision:
├─ Confirmed → Fix root cause
├─ Inconclusive → Add more logging, iterate
└─ All rejected → Generate new hypotheses
Fix & Cleanup:
├─ Apply fix based on confirmed hypothesis
├─ User verifies
├─ Remove debug instrumentation
└─ If not fixed → Return to Analyze mode
```
## Implementation
### Session Setup & Mode Detection
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `DBG-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.debug/${sessionId}`
const debugLogPath = `${sessionFolder}/debug.log`
// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const logHasContent = sessionExists && fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0
const mode = logHasContent ? 'analyze' : 'explore'
if (!sessionExists) {
bash(`mkdir -p ${sessionFolder}`)
}
```
---
### Explore Mode
**Step 1.1: Locate Error Source**
```javascript
// Extract keywords from bug description
const keywords = extractErrorKeywords(bug_description)
// e.g., ['Stack Length', '未找到', 'registered 0']
// Search codebase for error locations
for (const keyword of keywords) {
Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
}
// Identify affected files and functions
const affectedLocations = [...] // from search results
```
**Step 1.2: Generate Hypotheses (Dynamic)**
```javascript
// Hypothesis categories based on error pattern
const HYPOTHESIS_PATTERNS = {
"not found|missing|undefined|未找到": "data_mismatch",
"0|empty|zero|registered 0": "logic_error",
"timeout|connection|sync": "integration_issue",
"type|format|parse": "type_mismatch"
}
// Generate hypotheses based on actual issue (NOT fixed count)
function generateHypotheses(bugDescription, affectedLocations) {
const hypotheses = []
// Analyze bug and create targeted hypotheses
// Each hypothesis has:
// - id: H1, H2, ... (dynamic count)
// - description: What might be wrong
// - testable_condition: What to log
// - logging_point: Where to add instrumentation
return hypotheses // Could be 1, 3, 5, or more
}
const hypotheses = generateHypotheses(bug_description, affectedLocations)
```
**Step 1.3: Add NDJSON Instrumentation**
For each hypothesis, add logging at the relevant location:
**Python template**:
```python
# region debug [H{n}]
try:
import json, time
_dbg = {
"sid": "{sessionId}",
"hid": "H{n}",
"loc": "{file}:{line}",
"msg": "{testable_condition}",
"data": {
# Capture relevant values here
},
"ts": int(time.time() * 1000)
}
with open(r"{debugLogPath}", "a", encoding="utf-8") as _f:
_f.write(json.dumps(_dbg, ensure_ascii=False) + "\n")
except: pass
# endregion
```
**JavaScript/TypeScript template**:
```javascript
// region debug [H{n}]
try {
require('fs').appendFileSync("{debugLogPath}", JSON.stringify({
sid: "{sessionId}",
hid: "H{n}",
loc: "{file}:{line}",
msg: "{testable_condition}",
data: { /* Capture relevant values */ },
ts: Date.now()
}) + "\n");
} catch(_) {}
// endregion
```
**Output to user**:
```
## Hypotheses Generated
Based on error "{bug_description}", generated {n} hypotheses:
{hypotheses.map(h => `
### ${h.id}: ${h.description}
- Logging at: ${h.logging_point}
- Testing: ${h.testable_condition}
`).join('')}
**Debug log**: ${debugLogPath}
**Next**: Run reproduction steps, then come back for analysis.
```
---
### Analyze Mode
```javascript
// Parse NDJSON log
const entries = Read(debugLogPath).split('\n')
.filter(l => l.trim())
.map(l => JSON.parse(l))
// Group by hypothesis
const byHypothesis = groupBy(entries, 'hid')
// Validate each hypothesis
for (const [hid, logs] of Object.entries(byHypothesis)) {
const hypothesis = hypotheses.find(h => h.id === hid)
const latestLog = logs[logs.length - 1]
// Check if evidence confirms or rejects hypothesis
const verdict = evaluateEvidence(hypothesis, latestLog.data)
// Returns: 'confirmed' | 'rejected' | 'inconclusive'
}
```
**Output**:
```
## Evidence Analysis
Analyzed ${entries.length} log entries.
${results.map(r => `
### ${r.id}: ${r.description}
- **Status**: ${r.verdict}
- **Evidence**: ${JSON.stringify(r.evidence)}
- **Reason**: ${r.reason}
`).join('')}
${confirmedHypothesis ? `
## Root Cause Identified
**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}
Ready to fix.
` : `
## Need More Evidence
Add more logging or refine hypotheses.
`}
```
---
### Fix & Cleanup
```javascript
// Apply fix based on confirmed hypothesis
// ... Edit affected files
// After user verifies fix works:
// Remove debug instrumentation (search for region markers)
const instrumentedFiles = Grep({
pattern: "# region debug|// region debug",
output_mode: "files_with_matches"
})
for (const file of instrumentedFiles) {
// Remove content between region markers
removeDebugRegions(file)
}
console.log(`
## Debug Complete
- Root cause: ${confirmedHypothesis.description}
- Fix applied to: ${modifiedFiles.join(', ')}
- Debug instrumentation removed
`)
```
---
## Debug Log Format (NDJSON)
Each line is a JSON object:
```json
{"sid":"DBG-xxx-2025-12-18","hid":"H1","loc":"file.py:func:42","msg":"Check dict keys","data":{"keys":["a","b"],"target":"c","found":false},"ts":1734567890123}
```
| Field | Description |
|-------|-------------|
| `sid` | Session ID |
| `hid` | Hypothesis ID (H1, H2, ...) |
| `loc` | Code location |
| `msg` | What's being tested |
| `data` | Captured values |
| `ts` | Timestamp (ms) |
## Session Folder
```
.workflow/.debug/DBG-{slug}-{date}/
├── debug.log # NDJSON log (main artifact)
└── resolution.md # Summary after fix (optional)
```
## Iteration Flow
```
First Call (/workflow:debug "error"):
├─ No session exists → Explore mode
├─ Extract error keywords, search codebase
├─ Generate hypotheses, add logging
└─ Await user reproduction
After Reproduction (/workflow:debug "error"):
├─ Session exists + debug.log has content → Analyze mode
├─ Parse log, evaluate hypotheses
└─ Decision:
├─ Confirmed → Fix → User verify
│ ├─ Fixed → Cleanup → Done
│ └─ Not fixed → Add logging → Iterate
├─ Inconclusive → Add logging → Iterate
└─ All rejected → New hypotheses → Iterate
Output:
└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
```
## Error Handling
| Situation | Action |
|-----------|--------|
| Empty debug.log | Verify reproduction triggered the code path |
| All hypotheses rejected | Generate new hypotheses with broader scope |
| Fix doesn't work | Iterate with more granular logging |
| >5 iterations | Escalate to `/workflow:lite-fix` with evidence |

View File

@@ -1,7 +1,7 @@
---
name: execute
description: Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking
argument-hint: "[--resume-session=\"session-id\"]"
argument-hint: "[-y|--yes] [--resume-session=\"session-id\"] [--with-commit]"
---
# Workflow Execute Command
@@ -11,6 +11,41 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
**Resume Mode**: When called with `--resume-session` flag, skips discovery phase and directly enters TodoWrite generation and agent execution for the specified session.
## Usage
```bash
# Interactive mode (with confirmations)
/workflow:execute
/workflow:execute --resume-session="WFS-auth"
# Auto mode (skip confirmations, use defaults)
/workflow:execute --yes
/workflow:execute -y
/workflow:execute -y --resume-session="WFS-auth"
# With auto-commit (commit after each task completion)
/workflow:execute --with-commit
/workflow:execute -y --with-commit
/workflow:execute -y --with-commit --resume-session="WFS-auth"
```
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
- **Session Selection**: Automatically selects the first (most recent) active session
- **Completion Choice**: Automatically completes session (runs `/workflow:session:complete --yes`)
When `--with-commit` flag is used:
- **Auto-Commit**: After each agent task completes, commit changes based on summary document
- **Commit Principle**: Minimal commits - only commit files modified by the completed task
- **Commit Message**: Generated from task summary with format: "feat/fix/refactor: {task-title} - {summary}"
**Flag Parsing**:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const withCommit = $ARGUMENTS.includes('--with-commit')
```
## Performance Optimization Strategy
**Lazy Loading**: Task JSONs read **on-demand** during execution, not upfront. TODO_LIST.md + IMPL_PLAN.md provide metadata for planning.
@@ -71,6 +106,11 @@ Phase 4: Execution Strategy & Task Execution
├─ Mark task completed (update IMPL-*.json status)
│ # Quick fix: Update task status for ccw dashboard
│ # TS=$(date -Iseconds) && jq --arg ts "$TS" '.status="completed" | .status_history=(.status_history // [])+[{"from":"in_progress","to":"completed","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
├─ [with-commit] Commit changes based on summary (minimal principle)
│ # Read summary from .summaries/IMPL-X-summary.md
│ # Extract changed files from summary's "Files Modified" section
│ # Generate commit message: "feat/fix/refactor: {task-title} - {summary}"
│ # git add <changed-files> && git commit -m "<commit-message>"
└─ Advance to next task
Phase 5: Completion
@@ -122,24 +162,38 @@ List sessions with metadata and prompt user selection:
bash(for dir in .workflow/active/WFS-*/; do [ -d "$dir" ] || continue; session=$(basename "$dir"); project=$(jq -r '.project // "Unknown"' "${dir}workflow-session.json" 2>/dev/null || echo "Unknown"); total=$(grep -c '^\- \[' "${dir}TODO_LIST.md" 2>/dev/null || echo 0); completed=$(grep -c '^\- \[x\]' "${dir}TODO_LIST.md" 2>/dev/null || echo 0); if [ "$total" -gt 0 ]; then progress=$((completed * 100 / total)); else progress=0; fi; echo "$session | $project | $completed/$total tasks ($progress%)"; done)
```
Use AskUserQuestion to present formatted options (max 4 options shown):
**Parse --yes flag**:
```javascript
// If more than 4 sessions, show most recent 4 with "Other" option for manual input
const sessions = getActiveSessions() // sorted by last modified
const displaySessions = sessions.slice(0, 4)
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
```
AskUserQuestion({
questions: [{
question: "Multiple active sessions detected. Select one:",
header: "Session",
multiSelect: false,
options: displaySessions.map(s => ({
label: s.id,
description: `${s.project} | ${s.progress}`
}))
// Note: User can select "Other" to manually enter session ID
}]
})
**Conditional Selection**:
```javascript
if (autoYes) {
// Auto mode: Select first session (most recent)
const firstSession = sessions[0]
console.log(`[--yes] Auto-selecting session: ${firstSession.id}`)
selectedSessionId = firstSession.id
// Continue to Phase 2
} else {
// Interactive mode: Use AskUserQuestion to present formatted options (max 4 options shown)
// If more than 4 sessions, show most recent 4 with "Other" option for manual input
const sessions = getActiveSessions() // sorted by last modified
const displaySessions = sessions.slice(0, 4)
AskUserQuestion({
questions: [{
question: "Multiple active sessions detected. Select one:",
header: "Session",
multiSelect: false,
options: displaySessions.map(s => ({
label: s.id,
description: `${s.project} | ${s.progress}`
}))
// Note: User can select "Other" to manually enter session ID
}]
})
}
```
**Input Validation**:
@@ -235,7 +289,13 @@ while (TODO_LIST.md has pending tasks) {
4. **Launch Agent**: Invoke specialized agent with complete context including flow control steps
5. **Monitor Progress**: Track agent execution and handle errors without user interruption
6. **Collect Results**: Gather implementation results and outputs
7. **Continue Workflow**: Identify next pending task from TODO_LIST.md and repeat
7. **[with-commit] Auto-Commit**: If `--with-commit` flag enabled, commit changes based on summary
- Read summary from `.summaries/{task-id}-summary.md`
- Extract changed files from summary's "Files Modified" section
- Determine commit type from `meta.type` (feature→feat, bugfix→fix, refactor→refactor)
- Generate commit message: "{type}: {task-title} - {summary-first-line}"
- Commit only modified files (minimal principle): `git add <files> && git commit -m "<message>"`
8. **Continue Workflow**: Identify next pending task from TODO_LIST.md and repeat
**Note**: TODO_LIST.md updates are handled by agents (e.g., code-developer.md), not by the orchestrator.
@@ -252,29 +312,43 @@ while (TODO_LIST.md has pending tasks) {
6. **User Choice**: When all tasks finished, ask user to choose next step:
```javascript
AskUserQuestion({
questions: [{
question: "All tasks completed. What would you like to do next?",
header: "Next Step",
multiSelect: false,
options: [
{
label: "Enter Review",
description: "Run specialized review (security/architecture/quality/action-items)"
},
{
label: "Complete Session",
description: "Archive session and update manifest"
}
]
}]
})
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
// Auto mode: Complete session automatically
console.log(`[--yes] Auto-selecting: Complete Session`)
Skill(skill="workflow:session:complete", args="--yes")
} else {
// Interactive mode: Ask user
AskUserQuestion({
questions: [{
question: "All tasks completed. What would you like to do next?",
header: "Next Step",
multiSelect: false,
options: [
{
label: "Enter Review",
description: "Run specialized review (security/architecture/quality/action-items)"
},
{
label: "Complete Session",
description: "Archive session and update manifest"
}
]
}]
})
}
```
**Based on user selection**:
- **"Enter Review"**: Execute `/workflow:review`
- **"Complete Session"**: Execute `/workflow:session:complete`
### Post-Completion Expansion
After completion, ask the user whether to expand the result into issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`
## Execution Strategy (IMPL_PLAN-Driven)
### Strategy Priority
@@ -425,7 +499,7 @@ Task(subagent_type="{meta.agent}",
- TODO List: {session.todo_list_path}
- Summaries: {session.summaries_dir}
**Execution**: Read task JSON → Parse flow_control → Execute implementation_approach → Update TODO_LIST.md → Generate summary",
**Execution**: Read task JSON → Execute pre_analysis → Check execution_config.method → (CLI: handoff to CLI tool | Agent: direct implementation) → Update TODO_LIST.md → Generate summary",
description="Implement: {task.id}")
```
@@ -434,9 +508,11 @@ Task(subagent_type="{meta.agent}",
- `[FLOW_CONTROL]`: Triggers flow_control.pre_analysis execution
**Why Path-Based**: Agent (code-developer.md) autonomously:
- Reads and parses task JSON (requirements, acceptance, flow_control)
- Loads tech stack guidelines based on detected language
- Executes pre_analysis steps and implementation_approach
- Reads and parses task JSON (requirements, acceptance, flow_control, execution_config)
- Executes pre_analysis steps (Phase 1: context gathering)
- Checks execution_config.method (Phase 2: determine mode)
- CLI mode: Builds handoff prompt and executes via ccw cli with resume strategy
- Agent mode: Directly implements using modification_points and logic_flow
- Generates structured summary with integration points
Embedding task content in prompt creates duplication and conflicts with agent's parsing logic.
@@ -492,3 +568,26 @@ meta.agent missing → Infer from meta.type:
- **Dependency Validation**: Check all depends_on references exist
- **Context Verification**: Ensure all required context is available
## Auto-Commit Mode (--with-commit)
**Behavior**: After each agent task completes, automatically commit changes based on summary document.
**Minimal Principle**: Only commit files modified by the completed task.
**Commit Message Format**: `{type}: {task-title} - {summary}`
**Type Mapping** (from `meta.type`):
- `feature` → `feat` | `bugfix` → `fix` | `refactor` → `refactor`
- `test-gen` → `test` | `docs` → `docs` | `review` → `chore`
**Implementation**:
```bash
# 1. Read summary from .summaries/{task-id}-summary.md
# 2. Extract files from "Files Modified" section
# 3. Commit: git add <files> && git commit -m "{type}: {title} - {summary}"
```
**Error Handling**: Skip commit on no changes/missing summary, log errors, continue workflow.
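A minimal sketch of this commit step using the orchestrator's tool conventions; the summary layout (a "Files Modified" heading followed by bullet paths) is an assumption:

```javascript
const TYPE_MAP = { feature: 'feat', bugfix: 'fix', refactor: 'refactor', 'test-gen': 'test', docs: 'docs', review: 'chore' }

function autoCommitTask(task, summariesDir) {
  const summaryPath = `${summariesDir}/${task.id}-summary.md`
  const summary = Read(summaryPath)
  if (!summary) return  // missing summary → skip commit, continue workflow

  // Assumption: changed files are listed as "- path" bullets under a "Files Modified" heading
  const filesSection = (summary.split(/##\s*Files Modified/i)[1] || '').split(/\n##\s/)[0]
  const files = [...filesSection.matchAll(/^-\s+`?([^\s`]+)`?/gm)].map(m => m[1])
  if (files.length === 0) return  // no changes → skip commit

  const type = TYPE_MAP[task.meta.type] || 'chore'
  const firstLine = summary.split('\n').find(l => l.trim() && !l.startsWith('#')) || task.title
  const message = `${type}: ${task.title} - ${firstLine.trim()}`

  Bash(`git add ${files.join(' ')} && git commit -m "${message.replace(/"/g, '\\"')}"`)
}
```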

View File

@@ -0,0 +1,408 @@
---
name: init-guidelines
description: Interactive wizard to fill project-guidelines.json based on project analysis
argument-hint: "[--reset]"
examples:
- /workflow:init-guidelines
- /workflow:init-guidelines --reset
---
# Workflow Init Guidelines Command (/workflow:init-guidelines)
## Overview
Interactive multi-round wizard that analyzes the current project (via `project-tech.json`) and asks targeted questions to populate `.workflow/project-guidelines.json` with coding conventions, constraints, and quality rules.
**Design Principle**: Questions are dynamically generated based on the project's tech stack, architecture, and patterns — not generic boilerplate.
**Note**: This command may be called by `/workflow:init` after initialization. Upon completion, return to the calling workflow if applicable.
## Usage
```bash
/workflow:init-guidelines # Fill guidelines interactively (skip if already populated)
/workflow:init-guidelines --reset # Reset and re-fill guidelines from scratch
```
## Execution Process
```
Input Parsing:
└─ Parse --reset flag → reset = true | false
Step 1: Check Prerequisites
├─ project-tech.json must exist (run /workflow:init first)
├─ project-guidelines.json: check if populated or scaffold-only
└─ If populated + no --reset → Ask: "Guidelines already exist. Overwrite or append?"
Step 2: Load Project Context
└─ Read project-tech.json → extract tech stack, architecture, patterns
Step 3: Multi-Round Interactive Questionnaire
├─ Round 1: Coding Conventions (coding_style, naming_patterns)
├─ Round 2: File & Documentation Conventions (file_structure, documentation)
├─ Round 3: Architecture & Tech Constraints (architecture, tech_stack)
├─ Round 4: Performance & Security Constraints (performance, security)
└─ Round 5: Quality Rules (quality_rules)
Step 4: Write project-guidelines.json
Step 5: Display Summary
```
## Implementation
### Step 1: Check Prerequisites
```bash
bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
bash(test -f .workflow/project-guidelines.json && echo "GUIDELINES_EXISTS" || echo "GUIDELINES_NOT_FOUND")
```
**If TECH_NOT_FOUND**: Exit with message
```
Project tech analysis not found. Run /workflow:init first.
```
**Parse --reset flag**:
```javascript
const reset = $ARGUMENTS.includes('--reset')
```
**If GUIDELINES_EXISTS and not --reset**: Check if guidelines are populated (not just scaffold)
```javascript
const guidelines = JSON.parse(Read('.workflow/project-guidelines.json'))
const isPopulated =
guidelines.conventions.coding_style.length > 0 ||
guidelines.conventions.naming_patterns.length > 0 ||
guidelines.constraints.architecture.length > 0 ||
guidelines.constraints.tech_stack.length > 0
if (isPopulated) {
AskUserQuestion({
questions: [{
question: "Project guidelines already contain entries. How would you like to proceed?",
header: "Mode",
multiSelect: false,
options: [
{ label: "Append", description: "Keep existing entries and add new ones from the wizard" },
{ label: "Reset", description: "Clear all existing entries and start fresh" },
{ label: "Cancel", description: "Exit without changes" }
]
}]
})
// If Cancel → exit
// If Reset → clear all arrays before proceeding
// If Append → keep existing, wizard adds to them
}
```
### Step 2: Load Project Context
```javascript
const projectTech = JSON.parse(Read('.workflow/project-tech.json'))
// Extract key info for generating smart questions
const languages = projectTech.technology_analysis?.technology_stack?.languages
|| projectTech.overview?.technology_stack?.languages || []
const primaryLang = languages.find(l => l.primary)?.name || languages[0]?.name || 'Unknown'
const frameworks = projectTech.technology_analysis?.technology_stack?.frameworks
|| projectTech.overview?.technology_stack?.frameworks || []
const testFrameworks = projectTech.technology_analysis?.technology_stack?.test_frameworks
|| projectTech.overview?.technology_stack?.test_frameworks || []
const archStyle = projectTech.technology_analysis?.architecture?.style
|| projectTech.overview?.architecture?.style || 'Unknown'
const archPatterns = projectTech.technology_analysis?.architecture?.patterns
|| projectTech.overview?.architecture?.patterns || []
const buildTools = projectTech.technology_analysis?.technology_stack?.build_tools
|| projectTech.overview?.technology_stack?.build_tools || []
```
### Step 3: Multi-Round Interactive Questionnaire
Each round uses `AskUserQuestion` with project-aware options. The user can always select "Other" to provide custom input.
**⚠️ CRITICAL**: After each round, collect the user's answers and convert them into guideline entries. Do NOT batch all rounds — process each round's answers before proceeding to the next.
---
#### Round 1: Coding Conventions
Generate options dynamically based on detected language/framework:
```javascript
// Build language-specific coding style options
const codingStyleOptions = []
if (['TypeScript', 'JavaScript'].includes(primaryLang)) {
codingStyleOptions.push(
{ label: "Strict TypeScript", description: "Use strict mode, no 'any' type, explicit return types for public APIs" },
{ label: "Functional style", description: "Prefer pure functions, immutability, avoid class-based patterns where possible" },
{ label: "Const over let", description: "Always use const; only use let when reassignment is truly needed" }
)
} else if (primaryLang === 'Python') {
codingStyleOptions.push(
{ label: "Type hints", description: "Use type hints for all function signatures and class attributes" },
{ label: "Functional style", description: "Prefer pure functions, list comprehensions, avoid mutable state" },
{ label: "PEP 8 strict", description: "Strict PEP 8 compliance with max line length 88 (Black formatter)" }
)
} else if (primaryLang === 'Go') {
codingStyleOptions.push(
{ label: "Error wrapping", description: "Always wrap errors with context using fmt.Errorf with %w" },
{ label: "Interface first", description: "Define interfaces at the consumer side, not the provider" },
{ label: "Table-driven tests", description: "Use table-driven test pattern for all unit tests" }
)
}
// Add universal options
codingStyleOptions.push(
{ label: "Early returns", description: "Prefer early returns / guard clauses over deep nesting" }
)
AskUserQuestion({
questions: [
{
question: `Your project uses ${primaryLang}. Which coding style conventions do you follow?`,
header: "Coding Style",
multiSelect: true,
options: codingStyleOptions.slice(0, 4) // Max 4 options
},
{
question: `What naming conventions does your ${primaryLang} project use?`,
header: "Naming",
multiSelect: true,
options: [
{ label: "camelCase variables", description: "Variables and functions use camelCase (e.g., getUserName)" },
{ label: "PascalCase types", description: "Classes, interfaces, type aliases use PascalCase (e.g., UserService)" },
{ label: "UPPER_SNAKE constants", description: "Constants use UPPER_SNAKE_CASE (e.g., MAX_RETRIES)" },
{ label: "Prefix interfaces", description: "Prefix interfaces with 'I' (e.g., IUserService)" }
]
}
]
})
```
**Process Round 1 answers** → add to `conventions.coding_style` and `conventions.naming_patterns` arrays.
---
#### Round 2: File Structure & Documentation
```javascript
AskUserQuestion({
questions: [
{
question: `Your project has a ${archStyle} architecture. What file organization rules apply?`,
header: "File Structure",
multiSelect: true,
options: [
{ label: "Co-located tests", description: "Test files live next to source files (e.g., foo.ts + foo.test.ts)" },
{ label: "Separate test dir", description: "Tests in a dedicated __tests__ or tests/ directory" },
{ label: "One export per file", description: "Each file exports a single main component/class/function" },
{ label: "Index barrels", description: "Use index.ts barrel files for clean imports from directories" }
]
},
{
question: "What documentation standards does your project follow?",
header: "Documentation",
multiSelect: true,
options: [
{ label: "JSDoc/docstring public APIs", description: "All public functions and classes must have JSDoc/docstrings" },
{ label: "README per module", description: "Each major module/package has its own README" },
{ label: "Inline comments for why", description: "Comments explain 'why', not 'what' — code should be self-documenting" },
{ label: "No comment requirement", description: "Code should be self-explanatory; comments only for non-obvious logic" }
]
}
]
})
```
**Process Round 2 answers** → add to `conventions.file_structure` and `conventions.documentation`.
---
#### Round 3: Architecture & Tech Stack Constraints
```javascript
// Build architecture-specific options
const archOptions = []
if (archStyle.toLowerCase().includes('monolith')) {
archOptions.push(
{ label: "No circular deps", description: "Modules must not have circular dependencies" },
{ label: "Layer boundaries", description: "Strict layer separation: UI → Service → Data (no skipping layers)" }
)
} else if (archStyle.toLowerCase().includes('microservice')) {
archOptions.push(
{ label: "Service isolation", description: "Services must not share databases or internal state" },
{ label: "API contracts", description: "All inter-service communication through versioned API contracts" }
)
}
archOptions.push(
{ label: "Stateless services", description: "Service/business logic must be stateless (state in DB/cache only)" },
{ label: "Dependency injection", description: "Use dependency injection for testability, no hardcoded dependencies" }
)
AskUserQuestion({
questions: [
{
question: `Your ${archStyle} architecture uses ${archPatterns.join(', ') || 'various'} patterns. What architecture constraints apply?`,
header: "Architecture",
multiSelect: true,
options: archOptions.slice(0, 4)
},
{
question: `Tech stack: ${frameworks.join(', ')}. What technology constraints apply?`,
header: "Tech Stack",
multiSelect: true,
options: [
{ label: "No new deps without review", description: "Adding new dependencies requires explicit justification and review" },
{ label: "Pin dependency versions", description: "All dependencies must use exact versions, not ranges" },
{ label: "Prefer native APIs", description: "Use built-in/native APIs over third-party libraries when possible" },
{ label: "Framework conventions", description: `Follow official ${frameworks[0] || 'framework'} conventions and best practices` }
]
}
]
})
```
**Process Round 3 answers** → add to `constraints.architecture` and `constraints.tech_stack`.
---
#### Round 4: Performance & Security Constraints
```javascript
AskUserQuestion({
questions: [
{
question: "What performance requirements does your project have?",
header: "Performance",
multiSelect: true,
options: [
{ label: "API response time", description: "API endpoints must respond within 200ms (p95)" },
{ label: "Bundle size limit", description: "Frontend bundle size must stay under 500KB gzipped" },
{ label: "Lazy loading", description: "Large modules/routes must use lazy loading / code splitting" },
{ label: "No N+1 queries", description: "Database access must avoid N+1 query patterns" }
]
},
{
question: "What security requirements does your project enforce?",
header: "Security",
multiSelect: true,
options: [
{ label: "Input sanitization", description: "All user input must be validated and sanitized before use" },
{ label: "No secrets in code", description: "No API keys, passwords, or tokens in source code — use env vars" },
{ label: "Auth on all endpoints", description: "All API endpoints require authentication unless explicitly public" },
{ label: "Parameterized queries", description: "All database queries must use parameterized/prepared statements" }
]
}
]
})
```
**Process Round 4 answers** → add to `constraints.performance` and `constraints.security`.
---
#### Round 5: Quality Rules
```javascript
AskUserQuestion({
questions: [
{
question: `Testing with ${testFrameworks.join(', ') || 'your test framework'}. What quality rules apply?`,
header: "Quality",
multiSelect: true,
options: [
{ label: "Min test coverage", description: "Minimum 80% code coverage for new code; no merging below threshold" },
{ label: "No skipped tests", description: "Tests must not be skipped (.skip/.only) in committed code" },
{ label: "Lint must pass", description: "All code must pass linter checks before commit (enforced by pre-commit)" },
{ label: "Type check must pass", description: "Full type checking (tsc --noEmit) must pass with zero errors" }
]
}
]
})
```
**Process Round 5 answers** → add to `quality_rules` array as `{ rule, scope, enforced_by }` objects.
### Step 4: Write project-guidelines.json
```javascript
// Build the final guidelines object
const finalGuidelines = {
conventions: {
coding_style: existingCodingStyle.concat(newCodingStyle),
naming_patterns: existingNamingPatterns.concat(newNamingPatterns),
file_structure: existingFileStructure.concat(newFileStructure),
documentation: existingDocumentation.concat(newDocumentation)
},
constraints: {
architecture: existingArchitecture.concat(newArchitecture),
tech_stack: existingTechStack.concat(newTechStack),
performance: existingPerformance.concat(newPerformance),
security: existingSecurity.concat(newSecurity)
},
quality_rules: existingQualityRules.concat(newQualityRules),
learnings: existingLearnings, // Preserve existing learnings
_metadata: {
created_at: existingMetadata?.created_at || new Date().toISOString(),
version: "1.0.0",
last_updated: new Date().toISOString(),
updated_by: "workflow:init-guidelines"
}
}
Write('.workflow/project-guidelines.json', JSON.stringify(finalGuidelines, null, 2))
```
### Step 5: Display Summary
```javascript
const cs = finalGuidelines.conventions.coding_style.length
const np = finalGuidelines.conventions.naming_patterns.length
const fs = finalGuidelines.conventions.file_structure.length
const doc = finalGuidelines.conventions.documentation.length
const ar = finalGuidelines.constraints.architecture.length
const ts = finalGuidelines.constraints.tech_stack.length
const pf = finalGuidelines.constraints.performance.length
const sc = finalGuidelines.constraints.security.length
const countConventions = cs + np + fs + doc
const countConstraints = ar + ts + pf + sc
const countQuality = finalGuidelines.quality_rules.length
console.log(`
✓ Project guidelines configured
## Summary
- Conventions: ${countConventions} rules (coding: ${cs}, naming: ${np}, files: ${fs}, docs: ${doc})
- Constraints: ${countConstraints} rules (arch: ${ar}, tech: ${ts}, perf: ${pf}, security: ${sc})
- Quality rules: ${countQuality}
File: .workflow/project-guidelines.json
Next steps:
- Use /workflow:session:solidify to add individual rules later
- Guidelines will be auto-loaded by /workflow:plan for task generation
`)
```
## Answer Processing Rules
When converting user selections to guideline entries:
1. **Selected option** → Use the option's `description` as the guideline string (it's more precise than the label)
2. **"Other" with custom text** → Use the user's text directly as the guideline string
3. **Deduplication** → Skip entries that already exist in the guidelines (exact string match)
4. **Quality rules** → Convert to `{ rule: description, scope: "all", enforced_by: "code-review" }` format
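A minimal sketch of these rules, assuming each round's answers arrive as an array of `{ label, description, customText }` objects (that answer shape is an assumption, not part of the spec):
```javascript
// Sketch only: convert one round's selections into guideline strings.
function toGuidelineEntries(selections, existingEntries) {
  const entries = selections.map(s =>
    s.label === 'Other' ? s.customText : s.description  // rules 1 and 2
  )
  // Rule 3: skip exact-string duplicates already present
  return entries.filter(e => e && !existingEntries.includes(e))
}

// Rule 4: quality rules are wrapped into structured objects
function toQualityRules(selections, existingRules) {
  return toGuidelineEntries(selections, existingRules.map(r => r.rule))
    .map(rule => ({ rule, scope: 'all', enforced_by: 'code-review' }))
}
```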
## Error Handling
- **No project-tech.json**: Exit with instruction to run `/workflow:init` first
- **User cancels mid-wizard**: Save whatever was collected so far (partial is better than nothing)
- **File write failure**: Report error, suggest manual edit
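For the mid-wizard cancel case, a possible partial-save sketch, assuming completed rounds were accumulated in a `collected` object keyed like the schema (constraints and quality rules would be merged the same way):
```javascript
// Sketch: persist whatever rounds completed before the user cancelled
function savePartialGuidelines(existingGuidelines, collected) {
  const merge = (oldArr = [], newArr = []) => oldArr.concat(newArr)
  const partial = {
    ...existingGuidelines,
    conventions: {
      coding_style: merge(existingGuidelines.conventions.coding_style, collected.coding_style),
      naming_patterns: merge(existingGuidelines.conventions.naming_patterns, collected.naming_patterns),
      file_structure: merge(existingGuidelines.conventions.file_structure, collected.file_structure),
      documentation: merge(existingGuidelines.conventions.documentation, collected.documentation)
    },
    _metadata: { ...existingGuidelines._metadata, last_updated: new Date().toISOString() }
  }
  Write('.workflow/project-guidelines.json', JSON.stringify(partial, null, 2))
}
```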
## Related Commands
- `/workflow:init` - Creates scaffold; optionally calls this command
- `/workflow:session:solidify` - Add individual rules one at a time

View File

@@ -44,11 +44,16 @@ Analysis Flow:
│ └─ Write .workflow/project-tech.json
├─ Create guidelines scaffold (if not exists)
│ └─ Write .workflow/project-guidelines.json (empty structure)
└─ Display summary
├─ Display summary
└─ Ask about guidelines configuration
├─ If guidelines empty → Ask user: "Configure now?" or "Skip"
│ ├─ Configure now → Skill(skill="workflow:init-guidelines")
│ └─ Skip → Show next steps
└─ If guidelines populated → Show next steps only
Output:
├─ .workflow/project-tech.json (+ .backup if regenerate)
└─ .workflow/project-guidelines.json (scaffold if new)
└─ .workflow/project-guidelines.json (scaffold or configured)
```
## Implementation
@@ -108,11 +113,24 @@ Analyze project for workflow initialization and generate .workflow/project-tech.
2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)
## Task
Generate complete project-tech.json with:
- project_metadata: {name: ${projectName}, root_path: ${projectRoot}, initialized_at, updated_at}
- technology_analysis: {description, languages, frameworks, build_tools, test_frameworks, architecture, key_components, dependencies}
- development_status: ${regenerate ? 'preserve from backup' : '{completed_features: [], development_index: {feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}, statistics: {total_features: 0, total_sessions: 0, last_updated}}'}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp, analysis_mode}
Generate complete project-tech.json following the schema structure:
- project_name: "${projectName}"
- initialized_at: ISO 8601 timestamp
- overview: {
description: "Brief project description",
technology_stack: {
languages: [{name, file_count, primary}],
frameworks: ["string"],
build_tools: ["string"],
test_frameworks: ["string"]
},
architecture: {style, layers: [], patterns: []},
key_components: [{name, path, description, importance}]
}
- features: []
- development_index: ${regenerate ? 'preserve from backup' : '{feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}'}
- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated: ISO timestamp}'}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp: ISO timestamp, analysis_mode: "deep-scan"}
## Analysis Requirements
@@ -132,7 +150,7 @@ Generate complete project-tech.json with:
1. Structural scan: get_modules_by_depth.sh, find, wc -l
2. Semantic analysis: Gemini for patterns/architecture
3. Synthesis: Merge findings
4. ${regenerate ? 'Merge with preserved development_status from .workflow/project-tech.json.backup' : ''}
4. ${regenerate ? 'Merge with preserved development_index and statistics from .workflow/project-tech.json.backup' : ''}
5. Write JSON: Write('.workflow/project-tech.json', jsonContent)
6. Report: Return brief completion summary
@@ -181,27 +199,79 @@ console.log(`
✓ Project initialized successfully
## Project Overview
Name: ${projectTech.project_metadata.name}
Description: ${projectTech.technology_analysis.description}
Name: ${projectTech.project_name}
Description: ${projectTech.overview.description}
### Technology Stack
Languages: ${projectTech.technology_analysis.languages.map(l => l.name).join(', ')}
Frameworks: ${projectTech.technology_analysis.frameworks.join(', ')}
Languages: ${projectTech.overview.technology_stack.languages.map(l => l.name).join(', ')}
Frameworks: ${projectTech.overview.technology_stack.frameworks.join(', ')}
### Architecture
Style: ${projectTech.technology_analysis.architecture.style}
Components: ${projectTech.technology_analysis.key_components.length} core modules
Style: ${projectTech.overview.architecture.style}
Components: ${projectTech.overview.key_components.length} core modules
---
Files created:
- Tech analysis: .workflow/project-tech.json
- Guidelines: .workflow/project-guidelines.json ${guidelinesExists ? '(scaffold)' : ''}
${regenerate ? '- Backup: .workflow/project-tech.json.backup' : ''}
`);
```
### Step 5: Ask About Guidelines Configuration
After displaying the summary, ask the user if they want to configure project guidelines interactively.
```javascript
// Check if guidelines are just a scaffold (empty) or already populated
const guidelines = JSON.parse(Read('.workflow/project-guidelines.json'));
const isGuidelinesPopulated =
guidelines.conventions.coding_style.length > 0 ||
guidelines.conventions.naming_patterns.length > 0 ||
guidelines.constraints.architecture.length > 0 ||
guidelines.constraints.security.length > 0;
// Only ask if guidelines are not yet populated
if (!isGuidelinesPopulated) {
const userChoice = AskUserQuestion({
questions: [{
question: "Would you like to configure project guidelines now? The wizard will ask targeted questions based on your tech stack.",
header: "Guidelines",
multiSelect: false,
options: [
{
label: "Configure now (Recommended)",
description: "Interactive wizard to set up coding conventions, constraints, and quality rules"
},
{
label: "Skip for now",
description: "You can run /workflow:init-guidelines later or use /workflow:session:solidify to add rules individually"
}
]
}]
});
if (userChoice.answers["Guidelines"] === "Configure now (Recommended)") {
console.log("\n🔧 Starting guidelines configuration wizard...\n");
Skill(skill="workflow:init-guidelines");
} else {
console.log(`
Next steps:
- Use /workflow:session:solidify to add project guidelines
- Use /workflow:init-guidelines to configure guidelines interactively
- Use /workflow:session:solidify to add individual rules
- Use /workflow:plan to start planning
`);
}
} else {
console.log(`
Guidelines already configured (${guidelines.conventions.coding_style.length + guidelines.constraints.architecture.length}+ rules).
Next steps:
- Use /workflow:init-guidelines --reset to reconfigure
- Use /workflow:session:solidify to add individual rules
- Use /workflow:plan to start planning
`);
}
```
## Error Handling
@@ -209,3 +279,10 @@ Next steps:
**Agent Failure**: Fall back to basic initialization with placeholder overview
**Missing Tools**: Agent uses Qwen fallback or bash-only
**Empty Project**: Create minimal JSON with all gaps identified
## Related Commands
- `/workflow:init-guidelines` - Interactive wizard to configure project guidelines (called after init)
- `/workflow:session:solidify` - Add individual rules/constraints one at a time
- `/workflow:plan` - Start planning with initialized project context
- `/workflow:status --project` - View project state and guidelines

View File

@@ -1,7 +1,7 @@
---
name: lite-execute
description: Execute tasks based on in-memory plan, prompt description, or file content
argument-hint: "[--in-memory] [\"task description\"|file-path]"
argument-hint: "[-y|--yes] [--in-memory] [\"task description\"|file-path]"
allowed-tools: TodoWrite(*), Task(*), Bash(*)
---
@@ -62,30 +62,49 @@ Flexible task execution command supporting three input modes: in-memory plan (fr
**User Interaction**:
```javascript
AskUserQuestion({
questions: [
{
question: "Select execution method:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent", description: "@code-developer agent" },
{ label: "Codex", description: "codex CLI tool" },
{ label: "Auto", description: "Auto-select based on complexity" }
]
},
{
question: "Enable code review after execution?",
header: "Code Review",
multiSelect: false,
options: [
{ label: "Skip", description: "No review" },
{ label: "Gemini Review", description: "Gemini CLI tool" },
{ label: "Agent Review", description: "Current agent review" }
]
}
]
})
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
let userSelection
if (autoYes) {
// Auto mode: Use defaults
console.log(`[--yes] Auto-confirming execution:`)
console.log(` - Execution method: Auto`)
console.log(` - Code review: Skip`)
userSelection = {
execution_method: "Auto",
code_review_tool: "Skip"
}
} else {
// Interactive mode: Ask user
userSelection = AskUserQuestion({
questions: [
{
question: "Select execution method:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent", description: "@code-developer agent" },
{ label: "Codex", description: "codex CLI tool" },
{ label: "Auto", description: "Auto-select based on complexity" }
]
},
{
question: "Enable code review after execution?",
header: "Code Review",
multiSelect: false,
options: [
{ label: "Skip", description: "No review" },
{ label: "Gemini Review", description: "Gemini CLI tool" },
{ label: "Codex Review", description: "Git-aware review (prompt OR --uncommitted)" },
{ label: "Agent Review", description: "Current agent review" }
]
}
]
})
}
```
### Mode 3: File Content
@@ -171,10 +190,23 @@ Output:
**Operations**:
- Initialize result tracking for multi-execution scenarios
- Set up `previousExecutionResults` array for context continuity
- **In-Memory Mode**: Echo execution strategy from lite-plan for transparency
```javascript
// Initialize result tracking
previousExecutionResults = []
// In-Memory Mode: Echo execution strategy (transparency before execution)
if (executionContext) {
console.log(`
📋 Execution Strategy (from lite-plan):
Method: ${executionContext.executionMethod}
Review: ${executionContext.codeReviewTool}
Tasks: ${executionContext.planObject.tasks.length}
Complexity: ${executionContext.planObject.complexity}
${executionContext.executorAssignments ? ` Assignments: ${JSON.stringify(executionContext.executorAssignments)}` : ''}
`)
}
```
### Step 2: Task Grouping & Batch Creation
@@ -313,7 +345,7 @@ for (const call of sequential) {
```javascript
function buildExecutionPrompt(batch) {
// Task template (4 parts: Modification Points → How → Reference → Done)
// Task template (6 parts: Modification Points → Why → How → Reference → Risks → Done)
const formatTask = (t) => `
## ${t.title}
@@ -322,18 +354,38 @@ function buildExecutionPrompt(batch) {
### Modification Points
${t.modification_points.map(p => `- **${p.file}** → \`${p.target}\`: ${p.change}`).join('\n')}
${t.rationale ? `
### Why this approach (Medium/High)
${t.rationale.chosen_approach}
${t.rationale.decision_factors?.length > 0 ? `\nKey factors: ${t.rationale.decision_factors.join(', ')}` : ''}
${t.rationale.tradeoffs ? `\nTradeoffs: ${t.rationale.tradeoffs}` : ''}
` : ''}
### How to do it
${t.description}
${t.implementation.map(step => `- ${step}`).join('\n')}
${t.code_skeleton ? `
### Code skeleton (High)
${t.code_skeleton.interfaces?.length > 0 ? `**Interfaces**: ${t.code_skeleton.interfaces.map(i => `\`${i.name}\` - ${i.purpose}`).join(', ')}` : ''}
${t.code_skeleton.key_functions?.length > 0 ? `\n**Functions**: ${t.code_skeleton.key_functions.map(f => `\`${f.signature}\` - ${f.purpose}`).join(', ')}` : ''}
${t.code_skeleton.classes?.length > 0 ? `\n**Classes**: ${t.code_skeleton.classes.map(c => `\`${c.name}\` - ${c.purpose}`).join(', ')}` : ''}
` : ''}
### Reference
- Pattern: ${t.reference?.pattern || 'N/A'}
- Files: ${t.reference?.files?.join(', ') || 'N/A'}
${t.reference?.examples ? `- Notes: ${t.reference.examples}` : ''}
${t.risks?.length > 0 ? `
### Risk mitigations (High)
${t.risks.map(r => `- ${r.description} → **${r.mitigation}**`).join('\n')}
` : ''}
### Done when
${t.acceptance.map(c => `- [ ] ${c}`).join('\n')}`
${t.acceptance.map(c => `- [ ] ${c}`).join('\n')}
${t.verification?.success_metrics?.length > 0 ? `\n**Success metrics**: ${t.verification.success_metrics.join(', ')}` : ''}`
// Build prompt
const sections = []
@@ -350,9 +402,14 @@ ${t.acceptance.map(c => `- [ ] ${c}`).join('\n')}`
if (clarificationContext) {
context.push(`### Clarifications\n${Object.entries(clarificationContext).map(([q, a]) => `- ${q}: ${a}`).join('\n')}`)
}
if (executionContext?.planObject?.data_flow?.diagram) {
context.push(`### Data Flow\n${executionContext.planObject.data_flow.diagram}`)
}
if (executionContext?.session?.artifacts?.plan) {
context.push(`### Artifacts\nPlan: ${executionContext.session.artifacts.plan}`)
}
// Project guidelines (user-defined constraints from /workflow:session:solidify)
context.push(`### Project Guidelines\n@.workflow/project-guidelines.json`)
if (context.length > 0) sections.push(`## Context\n${context.join('\n\n')}`)
sections.push(`Complete each task according to its "Done when" checklist.`)
@@ -392,16 +449,8 @@ ccw cli -p "${buildExecutionPrompt(batch)}" --tool codex --mode write
**Execution with fixed IDs** (predictable ID pattern):
```javascript
// Launch CLI in foreground (NOT background)
// Timeout based on complexity: Low=40min, Medium=60min, High=100min
const timeoutByComplexity = {
"Low": 2400000, // 40 minutes
"Medium": 3600000, // 60 minutes
"High": 6000000 // 100 minutes
}
// Launch CLI in background, wait for task hook callback
// Generate fixed execution ID: ${sessionId}-${groupId}
// This enables predictable ID lookup without relying on resume context chains
const sessionId = executionContext?.session?.id || 'standalone'
const fixedExecutionId = `${sessionId}-${batch.groupId}` // e.g., "implement-auth-2025-12-13-P1"
@@ -413,16 +462,12 @@ const cli_command = previousCliId
? `ccw cli -p "${buildExecutionPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId} --resume ${previousCliId}`
: `ccw cli -p "${buildExecutionPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId}`
bash_result = Bash(
// Execute in background, stop output and wait for task hook callback
Bash(
command=cli_command,
timeout=timeoutByComplexity[planObject.complexity] || 3600000
run_in_background=true
)
// Execution ID is now predictable: ${fixedExecutionId}
// Can also extract from output: "ID: implement-auth-2025-12-13-P1"
const cliExecutionId = fixedExecutionId
// Update TodoWrite when execution completes
// STOP HERE - CLI executes in background, task hook will notify on completion
```
**Resume on Failure** (with fixed ID):
@@ -460,32 +505,41 @@ Progress tracked at batch level (not individual task level). Icons: ⚡ (paralle
**Skip Condition**: Only run if `codeReviewTool ≠ "Skip"`
**Review Focus**: Verify implementation against plan acceptance criteria
- Read plan.json for task acceptance criteria
**Review Focus**: Verify implementation against plan acceptance criteria and verification requirements
- Read plan.json for task acceptance criteria and verification checklist
- Check each acceptance criterion is fulfilled
- Verify success metrics from verification field (Medium/High complexity)
- Run unit/integration tests specified in verification field
- Validate code quality and identify issues
- Ensure alignment with planned approach
- Ensure alignment with planned approach and risk mitigations
**Operations**:
- Agent Review: Current agent performs direct review
- Gemini Review: Execute gemini CLI with review prompt
- Custom tool: Execute specified CLI tool (qwen, codex, etc.)
- Codex Review: Two options - (A) with prompt for complex reviews, (B) `--uncommitted` flag only for quick reviews
- Custom tool: Execute specified CLI tool (qwen, etc.)
**Unified Review Template** (All tools use same standard):
**Review Criteria**:
- **Acceptance Criteria**: Verify each criterion from plan.tasks[].acceptance
- **Verification Checklist** (Medium/High): Check unit_tests, integration_tests, success_metrics from plan.tasks[].verification
- **Code Quality**: Analyze quality, identify issues, suggest improvements
- **Plan Alignment**: Validate implementation matches planned approach
- **Plan Alignment**: Validate implementation matches planned approach and risk mitigations
**Shared Prompt Template** (used by all CLI tools):
```
PURPOSE: Code review for implemented changes against plan acceptance criteria
TASK: • Verify plan acceptance criteria fulfillment • Analyze code quality • Identify issues • Suggest improvements • Validate plan adherence
PURPOSE: Code review for implemented changes against plan acceptance criteria and verification requirements
TASK: • Verify plan acceptance criteria fulfillment • Check verification requirements (unit tests, success metrics) • Analyze code quality • Identify issues • Suggest improvements • Validate plan adherence and risk mitigations
MODE: analysis
CONTEXT: @**/* @{plan.json} [@{exploration.json}] | Memory: Review lite-execute changes against plan requirements
EXPECTED: Quality assessment with acceptance criteria verification, issue identification, and recommendations. Explicitly check each acceptance criterion from plan.json tasks.
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on plan acceptance criteria and plan adherence | analysis=READ-ONLY
CONTEXT: @**/* @{plan.json} [@{exploration.json}] | Memory: Review lite-execute changes against plan requirements including verification checklist
EXPECTED: Quality assessment with:
- Acceptance criteria verification (all tasks)
- Verification checklist validation (Medium/High: unit_tests, integration_tests, success_metrics)
- Issue identification
- Recommendations
Explicitly check each acceptance criterion and verification item from plan.json tasks.
CONSTRAINTS: Focus on plan acceptance criteria, verification requirements, and plan adherence | analysis=READ-ONLY
```
**Tool-Specific Execution** (Apply shared prompt template above):
@@ -504,8 +558,17 @@ ccw cli -p "[Shared Prompt Template with artifacts]" --tool gemini --mode analys
ccw cli -p "[Shared Prompt Template with artifacts]" --tool qwen --mode analysis
# Same prompt as Gemini, different execution engine
# Method 4: Codex Review (autonomous)
ccw cli -p "[Verify plan acceptance criteria at ${plan.json}]" --tool codex --mode write
# Method 4: Codex Review (git-aware) - Two mutually exclusive options:
# Option A: With custom prompt (reviews uncommitted by default)
ccw cli -p "[Shared Prompt Template with artifacts]" --tool codex --mode review
# Use for complex reviews with specific focus areas
# Option B: Target flag only (no prompt allowed)
ccw cli --tool codex --mode review --uncommitted
# Quick review of uncommitted changes without custom instructions
# ⚠️ IMPORTANT: -p prompt and target flags (--uncommitted/--base/--commit) are MUTUALLY EXCLUSIVE
```
**Multi-Round Review with Fixed IDs**:
@@ -531,11 +594,11 @@ if (hasUnresolvedIssues(reviewResult)) {
**Trigger**: After all executions complete (regardless of code review)
**Skip Condition**: Skip if `.workflow/project.json` does not exist
**Skip Condition**: Skip if `.workflow/project-tech.json` does not exist
**Operations**:
```javascript
const projectJsonPath = '.workflow/project.json'
const projectJsonPath = '.workflow/project-tech.json'
if (!fileExists(projectJsonPath)) return // Silent skip
const projectJson = JSON.parse(Read(projectJsonPath))
@@ -664,6 +727,10 @@ Collected after each execution call completes:
Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.
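One plausible shape for an entry in that array (the field names here are illustrative, not a fixed schema):
```javascript
// Illustrative only; the exact per-execution result schema is not specified here
previousExecutionResults.push({
  groupId: batch.groupId,
  executionId: fixedExecutionId,
  status: 'completed',   // or 'partial' | 'failed'
  summary: 'Short outcome summary passed as context to the next batch'
})
```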
## Post-Completion Expansion
After all executions complete, ask the user whether to expand follow-ups into issues (test/enhance/refactor/doc); each selected dimension calls `/issue:new "{summary} - {dimension}"`.
**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.
**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:
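A resume sketch based on the fixed-ID command pattern shown above; whether the prompt is rebuilt or shortened on retry is an assumption:
```javascript
// Sketch: resume an interrupted batch using the fixed CLI ID recorded earlier
const resumeCommand =
  `ccw cli -p "${buildExecutionPrompt(batch)}" --tool codex --mode write --resume ${fixedCliId}`
Bash(command=resumeCommand, run_in_background=true)
// Wait for the task hook callback, then re-check the batch status
```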

View File

@@ -1,8 +1,8 @@
---
name: lite-fix
description: Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents
argument-hint: "[--hotfix] \"bug description or issue reference\""
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*)
argument-hint: "[-y|--yes] [--hotfix] \"bug description or issue reference\""
allowed-tools: TodoWrite(*), Task(*), Skill(*), AskUserQuestion(*)
---
# Workflow Lite-Fix Command (/workflow:lite-fix)
@@ -25,10 +25,49 @@ Intelligent lightweight bug fixing command with dynamic workflow adaptation base
/workflow:lite-fix [FLAGS] <BUG_DESCRIPTION>
# Flags
-y, --yes Skip all confirmations (auto mode)
--hotfix, -h Production hotfix mode (minimal diagnosis, fast fix)
# Arguments
<bug-description> Bug description, error message, or path to .md file (required)
# Examples
/workflow:lite-fix "用户登录失败" # Interactive mode
/workflow:lite-fix --yes "用户登录失败" # Auto mode (no confirmations)
/workflow:lite-fix -y --hotfix "生产环境数据库连接失败" # Auto + hotfix mode
```
## Output Artifacts
| Artifact | Description |
|----------|-------------|
| `diagnosis-{angle}.json` | Per-angle diagnosis results (1-4 files based on severity) |
| `diagnoses-manifest.json` | Index of all diagnosis files |
| `planning-context.md` | Evidence paths + synthesized understanding |
| `fix-plan.json` | Structured fix plan (fix-plan-json-schema.json) |
**Output Directory**: `.workflow/.lite-fix/{bug-slug}-{YYYY-MM-DD}/`
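A sketch of how such a session folder name could be derived; only the `{bug-slug}-{YYYY-MM-DD}` shape comes from this document, the slug rules are an assumption:
```javascript
// Illustrative slug + date folder, e.g. "user-avatar-upload-fails-413-2025-11-25"
const bugSlug = bug_description
  .toLowerCase()
  .replace(/[^a-z0-9]+/g, '-')
  .replace(/^-+|-+$/g, '')
  .slice(0, 60)
const date = new Date().toISOString().slice(0, 10)   // YYYY-MM-DD
const sessionFolder = `.workflow/.lite-fix/${bugSlug}-${date}`
```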
**Agent Usage**:
- Low/Medium severity → Direct Claude planning (no agent)
- High/Critical severity → `cli-lite-planning-agent` generates `fix-plan.json`
**Schema Reference**: `~/.claude/workflows/cli-templates/schemas/fix-plan-json-schema.json`
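The severity split above can be read as a simple routing decision; a sketch, where the helper names are placeholders for the two Phase 3 branches:
```javascript
// Sketch of the severity routing described above
const usePlanningAgent = ['High', 'Critical'].includes(severity)
if (usePlanningAgent) {
  delegateToPlanningAgent(sessionFolder)   // Task(subagent_type: "cli-lite-planning-agent", ...)
} else {
  planDirectly(sessionFolder)              // Claude fills fix-plan.json from the schema itself
}
```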
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
- **Clarification Questions**: Skipped (no clarification phase)
- **Fix Plan Confirmation**: Auto-selected "Allow"
- **Execution Method**: Auto-selected "Auto"
- **Code Review**: Auto-selected "Skip"
- **Severity**: Uses auto-detected severity (no manual override)
- **Hotfix Mode**: Respects --hotfix flag if present, otherwise normal mode
**Flag Parsing**:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const hotfixMode = $ARGUMENTS.includes('--hotfix') || $ARGUMENTS.includes('-h')
```
## Execution Process
@@ -64,7 +103,7 @@ Phase 4: Confirmation & Selection
Phase 5: Execute
|- Build executionContext (fix-plan + diagnoses + clarifications + selections)
+- SlashCommand("/workflow:lite-execute --in-memory --mode bugfix")
+- Skill(skill="workflow:lite-execute", args="--in-memory --mode bugfix")
```
## Implementation
@@ -170,11 +209,15 @@ const diagnosisTasks = selectedAngles.map((angle, index) =>
## Task Objective
Execute **${angle}** diagnosis for bug root cause analysis. Analyze codebase from this specific angle to discover root cause, affected paths, and fix hints.
## Output Location
**Session Folder**: ${sessionFolder}
**Output File**: ${sessionFolder}/diagnosis-${angle}.json
## Assigned Context
- **Diagnosis Angle**: ${angle}
- **Bug Description**: ${bug_description}
- **Diagnosis Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/diagnosis-${angle}.json
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
@@ -203,8 +246,6 @@ Execute **${angle}** diagnosis for bug root cause analysis. Analyze codebase fro
## Expected Output
**File**: ${sessionFolder}/diagnosis-${angle}.json
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly
**Required Fields** (all ${angle} focused):
@@ -233,9 +274,9 @@ Execute **${angle}** diagnosis for bug root cause analysis. Analyze codebase fro
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended
## Output
Write: ${sessionFolder}/diagnosis-${angle}.json
Return: 2-3 sentence summary of ${angle} diagnosis findings
## Execution
**Write**: \`${sessionFolder}/diagnosis-${angle}.json\`
**Return**: 2-3 sentence summary of ${angle} diagnosis findings
`
)
)
@@ -332,9 +373,17 @@ function deduplicateClarifications(clarifications) {
const uniqueClarifications = deduplicateClarifications(allClarifications)
// Multi-round clarification: batch questions (max 4 per round)
// ⚠️ MUST execute ALL rounds until uniqueClarifications exhausted
if (uniqueClarifications.length > 0) {
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
// Auto mode: Skip clarification phase
console.log(`[--yes] Skipping ${uniqueClarifications.length} clarification questions`)
console.log(`Proceeding to fix planning with diagnosis results...`)
// Continue to Phase 3
} else if (uniqueClarifications.length > 0) {
// Interactive mode: Multi-round clarification
// ⚠️ MUST execute ALL rounds until uniqueClarifications exhausted
const BATCH_SIZE = 4
const totalRounds = Math.ceil(uniqueClarifications.length / BATCH_SIZE)
@@ -380,6 +429,7 @@ if (uniqueClarifications.length > 0) {
const schema = Bash(`cat ~/.claude/workflows/cli-templates/schemas/fix-plan-json-schema.json`)
// Step 2: Generate fix-plan following schema (Claude directly, no agent)
// For Medium complexity: include rationale + verification (optional, but recommended)
const fixPlan = {
summary: "...",
root_cause: "...",
@@ -389,13 +439,67 @@ const fixPlan = {
recommended_execution: "Agent",
severity: severity,
risk_level: "...",
_metadata: { timestamp: getUtc8ISOString(), source: "direct-planning", planning_mode: "direct" }
// Medium complexity fields (optional for direct planning, auto-filled for Low)
...(severity === "Medium" ? {
design_decisions: [
{
decision: "Use immediate_patch strategy for minimal risk",
rationale: "Keeps changes localized and quick to review",
tradeoff: "Defers comprehensive refactoring"
}
],
tasks_with_rationale: {
        // Keyed by task id (one example entry shown); merged into tasks in Step 3
task_rationale_example: {
rationale: {
chosen_approach: "Direct fix approach",
alternatives_considered: ["Workaround", "Refactor"],
decision_factors: ["Minimal impact", "Quick turnaround"],
tradeoffs: "Doesn't address underlying issue"
},
verification: {
unit_tests: ["test_bug_fix_basic"],
integration_tests: [],
manual_checks: ["Reproduce issue", "Verify fix"],
success_metrics: ["Issue resolved", "No regressions"]
}
}
}
} : {}),
_metadata: {
timestamp: getUtc8ISOString(),
source: "direct-planning",
planning_mode: "direct",
complexity: severity === "Medium" ? "Medium" : "Low"
}
}
// Step 3: Write fix-plan to session folder
// Step 3: Merge task rationale into tasks array
if (severity === "Medium") {
fixPlan.tasks = fixPlan.tasks.map(task => ({
...task,
rationale: fixPlan.tasks_with_rationale[task.id]?.rationale || {
chosen_approach: "Standard fix",
alternatives_considered: [],
decision_factors: ["Correctness", "Simplicity"],
tradeoffs: "None"
},
verification: fixPlan.tasks_with_rationale[task.id]?.verification || {
unit_tests: [`test_${task.id}_basic`],
integration_tests: [],
manual_checks: ["Verify fix works"],
success_metrics: ["Test pass"]
}
}))
delete fixPlan.tasks_with_rationale // Clean up temp field
}
// Step 4: Write fix-plan to session folder
Write(`${sessionFolder}/fix-plan.json`, JSON.stringify(fixPlan, null, 2))
// Step 4: MUST continue to Phase 4 (Confirmation) - DO NOT execute code here
// Step 5: MUST continue to Phase 4 (Confirmation) - DO NOT execute code here
```
**High/Critical Severity** - Invoke cli-lite-planning-agent:
@@ -408,6 +512,13 @@ Task(
prompt=`
Generate fix plan and write fix-plan.json.
## Output Location
**Session Folder**: ${sessionFolder}
**Output Files**:
- ${sessionFolder}/planning-context.md (evidence + understanding)
- ${sessionFolder}/fix-plan.json (fix plan)
## Output Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/fix-plan-json-schema.json (get schema reference before generating plan)
@@ -451,11 +562,41 @@ Generate fix-plan.json with:
- description
- modification_points: ALL files to modify for this fix (group related changes)
- implementation (2-5 steps covering all modification_points)
- verification (test criteria)
- acceptance: Quantified acceptance criteria
- depends_on: task IDs this task depends on (use sparingly)
**High/Critical complexity fields per task** (REQUIRED):
- rationale:
- chosen_approach: Why this fix approach (not alternatives)
- alternatives_considered: Other approaches evaluated
- decision_factors: Key factors influencing choice
- tradeoffs: Known tradeoffs of this approach
- verification:
- unit_tests: Test names to add/verify
- integration_tests: Integration test names
- manual_checks: Manual verification steps
- success_metrics: Quantified success criteria
- risks:
- description: Risk description
- probability: Low|Medium|High
- impact: Low|Medium|High
- mitigation: How to mitigate
- fallback: Fallback if fix fails
- code_skeleton (optional): Key interfaces/functions to implement
- interfaces: [{name, definition, purpose}]
- key_functions: [{signature, purpose, returns}]
**Top-level High/Critical fields** (REQUIRED):
- data_flow: How data flows through affected code
- diagram: "A → B → C" style flow
- stages: [{stage, input, output, component}]
- design_decisions: Global fix decisions
- [{decision, rationale, tradeoff}]
- estimated_time, recommended_execution, severity, risk_level
- _metadata:
- timestamp, source, planning_mode
- complexity: "High" | "Critical"
- diagnosis_angles: ${JSON.stringify(manifest.diagnoses.map(d => d.angle))}
## Task Grouping Rules
@@ -467,11 +608,22 @@ Generate fix-plan.json with:
## Execution
1. Read ALL diagnosis files for comprehensive context
2. Execute CLI planning using Gemini (Qwen fallback)
2. Execute CLI planning using Gemini (Qwen fallback) with --rule planning-fix-strategy template
3. Synthesize findings from multiple diagnosis angles
4. Parse output and structure fix-plan
5. Write JSON: Write('${sessionFolder}/fix-plan.json', jsonContent)
6. Return brief completion summary
4. Generate fix-plan with:
- For High/Critical: REQUIRED new fields (rationale, verification, risks, code_skeleton, data_flow, design_decisions)
- Each task MUST have rationale (why this fix), verification (how to verify success), and risks (potential issues)
5. Parse output and structure fix-plan
6. **Write**: \`${sessionFolder}/planning-context.md\` (evidence paths + understanding)
7. **Write**: \`${sessionFolder}/fix-plan.json\`
8. Return brief completion summary
## Output Format for CLI
Include these sections in your fix-plan output:
- Summary, Root Cause, Strategy (existing)
- Data Flow: Diagram showing affected code paths
- Design Decisions: Key architectural choices in the fix
- Tasks: Each with rationale (Medium/High), verification (Medium/High), risks (High), code_skeleton (High)
`
)
```
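As a reading aid, a compact example of what one High-severity task entry with the required fields above might look like (all values are invented for illustration):
```javascript
// Illustrative task entry; field names follow the requirements listed in the prompt above
const exampleTask = {
  id: "T1",
  title: "Guard avatar upload against oversized payloads",
  description: "Reject uploads above the configured limit before they reach the proxy",
  modification_points: [{ file: "src/upload/handler.ts", target: "validateUpload", change: "add size check" }],
  implementation: ["Read limit from config", "Return 413 with a clear error body"],
  acceptance: ["Upload above limit returns 413", "Upload within limit still succeeds"],
  rationale: {
    chosen_approach: "Validate early in the handler",
    alternatives_considered: ["Raise the proxy limit"],
    decision_factors: ["Smallest blast radius"],
    tradeoffs: "Does not change proxy behaviour"
  },
  verification: {
    unit_tests: ["test_upload_size_limit"],
    integration_tests: [],
    manual_checks: ["Upload a 10MB file manually"],
    success_metrics: ["No 413 responses for valid uploads"]
  },
  risks: [{
    description: "Limit too strict for legitimate files",
    probability: "Low",
    impact: "Medium",
    mitigation: "Make the limit configurable",
    fallback: "Revert the check"
  }]
}
```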
@@ -505,40 +657,60 @@ ${fixPlan.tasks.map((t, i) => `${i+1}. ${t.title} (${t.scope})`).join('\n')}
**Step 4.2: Collect Confirmation**
```javascript
AskUserQuestion({
questions: [
{
question: `Confirm fix plan? (${fixPlan.tasks.length} tasks, ${fixPlan.severity} severity)`,
header: "Confirm",
multiSelect: true,
options: [
{ label: "Allow", description: "Proceed as-is" },
{ label: "Modify", description: "Adjust before execution" },
{ label: "Cancel", description: "Abort workflow" }
]
},
{
question: "Execution method:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent", description: "@code-developer agent" },
{ label: "Codex", description: "codex CLI tool" },
{ label: "Auto", description: `Auto: ${fixPlan.severity === 'Low' ? 'Agent' : 'Codex'}` }
]
},
{
question: "Code review after fix?",
header: "Review",
multiSelect: false,
options: [
{ label: "Gemini Review", description: "Gemini CLI" },
{ label: "Agent Review", description: "@code-reviewer" },
{ label: "Skip", description: "No review" }
]
}
]
})
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
let userSelection
if (autoYes) {
// Auto mode: Use defaults
console.log(`[--yes] Auto-confirming fix plan:`)
console.log(` - Confirmation: Allow`)
console.log(` - Execution: Auto`)
console.log(` - Review: Skip`)
userSelection = {
confirmation: "Allow",
execution_method: "Auto",
code_review_tool: "Skip"
}
} else {
// Interactive mode: Ask user
userSelection = AskUserQuestion({
questions: [
{
question: `Confirm fix plan? (${fixPlan.tasks.length} tasks, ${fixPlan.severity} severity)`,
header: "Confirm",
multiSelect: false,
options: [
{ label: "Allow", description: "Proceed as-is" },
{ label: "Modify", description: "Adjust before execution" },
{ label: "Cancel", description: "Abort workflow" }
]
},
{
question: "Execution method:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent", description: "@code-developer agent" },
{ label: "Codex", description: "codex CLI tool" },
{ label: "Auto", description: `Auto: ${fixPlan.severity === 'Low' ? 'Agent' : 'Codex'}` }
]
},
{
question: "Code review after fix?",
header: "Review",
multiSelect: false,
options: [
{ label: "Gemini Review", description: "Gemini CLI" },
{ label: "Agent Review", description: "@code-reviewer" },
{ label: "Skip", description: "No review" }
]
}
]
})
}
```
---
@@ -565,7 +737,11 @@ const fixPlan = JSON.parse(Read(`${sessionFolder}/fix-plan.json`))
executionContext = {
mode: "bugfix",
severity: fixPlan.severity,
planObject: fixPlan,
planObject: {
...fixPlan,
// Ensure complexity is set based on severity for new field consumption
complexity: fixPlan.complexity || (fixPlan.severity === 'Critical' ? 'High' : (fixPlan.severity === 'High' ? 'High' : 'Medium'))
},
diagnosisContext: diagnoses,
diagnosisAngles: manifest.diagnoses.map(d => d.angle),
diagnosisManifest: manifest,
@@ -591,29 +767,31 @@ executionContext = {
**Step 5.2: Execute**
```javascript
SlashCommand(command="/workflow:lite-execute --in-memory --mode bugfix")
Skill(skill="workflow:lite-execute", args="--in-memory --mode bugfix")
```
## Session Folder Structure
```
.workflow/.lite-fix/{bug-slug}-{YYYY-MM-DD}/
|- diagnosis-{angle1}.json # Diagnosis angle 1
|- diagnosis-{angle2}.json # Diagnosis angle 2
|- diagnosis-{angle3}.json # Diagnosis angle 3 (if applicable)
|- diagnosis-{angle4}.json # Diagnosis angle 4 (if applicable)
|- diagnoses-manifest.json # Diagnosis index
+- fix-plan.json # Fix plan
├── diagnosis-{angle1}.json # Diagnosis angle 1
├── diagnosis-{angle2}.json # Diagnosis angle 2
├── diagnosis-{angle3}.json # Diagnosis angle 3 (if applicable)
├── diagnosis-{angle4}.json # Diagnosis angle 4 (if applicable)
├── diagnoses-manifest.json # Diagnosis index
├── planning-context.md # Evidence + understanding
└── fix-plan.json # Fix plan
```
**Example**:
```
.workflow/.lite-fix/user-avatar-upload-fails-413-2025-11-25-14-30-25/
|- diagnosis-error-handling.json
|- diagnosis-dataflow.json
|- diagnosis-validation.json
|- diagnoses-manifest.json
+- fix-plan.json
.workflow/.lite-fix/user-avatar-upload-fails-413-2025-11-25/
├── diagnosis-error-handling.json
├── diagnosis-dataflow.json
├── diagnosis-validation.json
├── diagnoses-manifest.json
├── planning-context.md
└── fix-plan.json
```
## Error Handling

View File

@@ -1,8 +1,8 @@
---
name: lite-plan
description: Lightweight interactive planning workflow with in-memory planning, code exploration, and execution handoff to lite-execute after user confirmation
argument-hint: "[-e|--explore] \"task description\"|file.md"
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*)
argument-hint: "[-y|--yes] [-e|--explore] \"task description\"|file.md"
allowed-tools: TodoWrite(*), Task(*), Skill(*), AskUserQuestion(*)
---
# Workflow Lite-Plan Command (/workflow:lite-plan)
@@ -25,10 +25,47 @@ Intelligent lightweight planning command with dynamic workflow adaptation based
/workflow:lite-plan [FLAGS] <TASK_DESCRIPTION>
# Flags
-y, --yes Skip all confirmations (auto mode)
-e, --explore Force code exploration phase (overrides auto-detection)
# Arguments
<task-description> Task description or path to .md file (required)
# Examples
/workflow:lite-plan "实现JWT认证" # Interactive mode
/workflow:lite-plan --yes "实现JWT认证" # Auto mode (no confirmations)
/workflow:lite-plan -y -e "优化数据库查询性能" # Auto mode + force exploration
```
## Output Artifacts
| Artifact | Description |
|----------|-------------|
| `exploration-{angle}.json` | Per-angle exploration results (1-4 files based on complexity) |
| `explorations-manifest.json` | Index of all exploration files |
| `planning-context.md` | Evidence paths + synthesized understanding |
| `plan.json` | Structured implementation plan (plan-json-schema.json) |
**Output Directory**: `.workflow/.lite-plan/{task-slug}-{YYYY-MM-DD}/`
**Agent Usage**:
- Low complexity → Direct Claude planning (no agent)
- Medium/High complexity → `cli-lite-planning-agent` generates `plan.json`
**Schema Reference**: `~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
- **Clarification Questions**: Skipped (no clarification phase)
- **Plan Confirmation**: Auto-selected "Allow"
- **Execution Method**: Auto-selected "Auto"
- **Code Review**: Auto-selected "Skip"
**Flag Parsing**:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const forceExplore = $ARGUMENTS.includes('--explore') || $ARGUMENTS.includes('-e')
```
## Execution Process
@@ -52,8 +89,8 @@ Phase 2: Clarification (optional, multi-round)
Phase 3: Planning (NO CODE EXECUTION - planning only)
└─ Decision (based on Phase 1 complexity):
├─ Low → Load schema: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json → Direct Claude planning (following schema) → plan.json → MUST proceed to Phase 4
└─ Medium/High → cli-lite-planning-agent → plan.json → MUST proceed to Phase 4
├─ Low → Load schema: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json → Direct Claude planning (following schema) → plan.json
└─ Medium/High → cli-lite-planning-agent → plan.json (agent internally executes quality check)
Phase 4: Confirmation & Selection
├─ Display plan summary (tasks, complexity, estimated time)
@@ -64,7 +101,7 @@ Phase 4: Confirmation & Selection
Phase 5: Execute
├─ Build executionContext (plan + explorations + clarifications + selections)
└─ SlashCommand("/workflow:lite-execute --in-memory")
└─ Skill(skill="workflow:lite-execute", args="--in-memory")
```
## Implementation
@@ -173,11 +210,15 @@ const explorationTasks = selectedAngles.map((angle, index) =>
## Task Objective
Execute **${angle}** exploration for task planning context. Analyze codebase from this specific angle to discover relevant structure, patterns, and constraints.
## Output Location
**Session Folder**: ${sessionFolder}
**Output File**: ${sessionFolder}/exploration-${angle}.json
## Assigned Context
- **Exploration Angle**: ${angle}
- **Task Description**: ${task_description}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
@@ -205,8 +246,6 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
## Expected Output
**File**: ${sessionFolder}/exploration-${angle}.json
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly
**Required Fields** (all ${angle} focused):
@@ -232,9 +271,9 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended
## Output
Write: ${sessionFolder}/exploration-${angle}.json
Return: 2-3 sentence summary of ${angle} findings
## Execution
**Write**: \`${sessionFolder}/exploration-${angle}.json\`
**Return**: 2-3 sentence summary of ${angle} findings
`
)
)
@@ -323,8 +362,16 @@ explorations.forEach(exp => {
// - Produce dedupedClarifications with unique intents only
const dedupedClarifications = intelligentMerge(allClarifications)
// Multi-round clarification: batch questions (max 4 per round)
if (dedupedClarifications.length > 0) {
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
// Auto mode: Skip clarification phase
console.log(`[--yes] Skipping ${dedupedClarifications.length} clarification questions`)
console.log(`Proceeding to planning with exploration results...`)
// Continue to Phase 3
} else if (dedupedClarifications.length > 0) {
// Interactive mode: Multi-round clarification
const BATCH_SIZE = 4
const totalRounds = Math.ceil(dedupedClarifications.length / BATCH_SIZE)
@@ -415,6 +462,13 @@ Task(
prompt=`
Generate implementation plan and write plan.json.
## Output Location
**Session Folder**: ${sessionFolder}
**Output Files**:
- ${sessionFolder}/planning-context.md (evidence + understanding)
- ${sessionFolder}/plan.json (implementation plan)
## Output Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json (get schema reference before generating plan)
@@ -464,8 +518,9 @@ Generate plan.json following the schema obtained above. Key constraints:
2. Execute CLI planning using Gemini (Qwen fallback)
3. Read ALL exploration files for comprehensive context
4. Synthesize findings and generate plan following schema
5. Write JSON: Write('${sessionFolder}/plan.json', jsonContent)
6. Return brief completion summary
5. **Write**: \`${sessionFolder}/planning-context.md\` (evidence paths + understanding)
6. **Write**: \`${sessionFolder}/plan.json\`
7. Return brief completion summary
`
)
```
@@ -497,40 +552,62 @@ ${plan.tasks.map((t, i) => `${i+1}. ${t.title} (${t.file})`).join('\n')}
**Step 4.2: Collect Confirmation**
```javascript
AskUserQuestion({
questions: [
{
question: `Confirm plan? (${plan.tasks.length} tasks, ${plan.complexity})`,
header: "Confirm",
multiSelect: true,
options: [
{ label: "Allow", description: "Proceed as-is" },
{ label: "Modify", description: "Adjust before execution" },
{ label: "Cancel", description: "Abort workflow" }
]
},
{
question: "Execution method:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent", description: "@code-developer agent" },
{ label: "Codex", description: "codex CLI tool" },
{ label: "Auto", description: `Auto: ${plan.complexity === 'Low' ? 'Agent' : 'Codex'}` }
]
},
{
question: "Code review after execution?",
header: "Review",
multiSelect: false,
options: [
{ label: "Gemini Review", description: "Gemini CLI" },
{ label: "Agent Review", description: "@code-reviewer" },
{ label: "Skip", description: "No review" }
]
}
]
})
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
let userSelection
if (autoYes) {
// Auto mode: Use defaults
console.log(`[--yes] Auto-confirming plan:`)
console.log(` - Confirmation: Allow`)
console.log(` - Execution: Auto`)
console.log(` - Review: Skip`)
userSelection = {
confirmation: "Allow",
execution_method: "Auto",
code_review_tool: "Skip"
}
} else {
// Interactive mode: Ask user
// Note: Execution "Other" option allows specifying CLI tools from ~/.claude/cli-tools.json
userSelection = AskUserQuestion({
questions: [
{
question: `Confirm plan? (${plan.tasks.length} tasks, ${plan.complexity})`,
header: "Confirm",
multiSelect: false,
options: [
{ label: "Allow", description: "Proceed as-is" },
{ label: "Modify", description: "Adjust before execution" },
{ label: "Cancel", description: "Abort workflow" }
]
},
{
question: "Execution method:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent", description: "@code-developer agent" },
{ label: "Codex", description: "codex CLI tool" },
{ label: "Auto", description: `Auto: ${plan.complexity === 'Low' ? 'Agent' : 'Codex'}` }
]
},
{
question: "Code review after execution?",
header: "Review",
multiSelect: false,
options: [
{ label: "Gemini Review", description: "Gemini CLI review" },
{ label: "Codex Review", description: "Git-aware review (prompt OR --uncommitted)" },
{ label: "Agent Review", description: "@code-reviewer agent" },
{ label: "Skip", description: "No review" }
]
}
]
})
}
```
---
@@ -585,7 +662,7 @@ executionContext = {
**Step 5.2: Execute**
```javascript
SlashCommand(command="/workflow:lite-execute --in-memory")
Skill(skill="workflow:lite-execute", args="--in-memory")
```
## Session Folder Structure

View File

@@ -0,0 +1,579 @@
---
name: workflow:multi-cli-plan
description: Multi-CLI collaborative planning workflow with ACE context gathering and iterative cross-verification. Uses cli-discuss-agent for Gemini+Codex+Claude analysis to converge on optimal execution plan.
argument-hint: "[-y|--yes] <task description> [--max-rounds=3] [--tools=gemini,codex] [--mode=parallel|serial]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), Write(*), mcp__ace-tool__search_context(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-approve plan, use recommended solution and execution method (Agent, Skip review).
# Multi-CLI Collaborative Planning Command
## Quick Start
```bash
# Basic usage
/workflow:multi-cli-plan "Implement user authentication"
# With options
/workflow:multi-cli-plan "Add dark mode support" --max-rounds=3
/workflow:multi-cli-plan "Refactor payment module" --tools=gemini,codex,claude
/workflow:multi-cli-plan "Fix memory leak" --mode=serial
```
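A small sketch of how the flags shown above could be parsed from `$ARGUMENTS`; only the flag names and defaults come from this document, the parsing details are an assumption:
```javascript
// Illustrative flag parsing for multi-cli-plan
const args = $ARGUMENTS
const autoYes = args.includes('--yes') || args.includes('-y')
const maxRounds = parseInt(args.match(/--max-rounds=(\d+)/)?.[1] ?? '3', 10)
const tools = (args.match(/--tools=([\w,]+)/)?.[1] ?? 'gemini,codex').split(',')
const mode = args.match(/--mode=(parallel|serial)/)?.[1] ?? 'parallel'
```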
**Context Source**: ACE semantic search + Multi-CLI analysis
**Output Directory**: `.workflow/.multi-cli-plan/{session-id}/`
**Default Max Rounds**: 3 (convergence may complete earlier)
**CLI Tools**: @cli-discuss-agent (analysis), @cli-lite-planning-agent (plan generation)
**Execution**: Auto-hands off to `/workflow:lite-execute --in-memory` after plan approval
## What & Why
### Core Concept
Multi-CLI collaborative planning with **three-phase architecture**: ACE context gathering → Iterative multi-CLI discussion → Plan generation. Orchestrator delegates analysis to agents, only handles user decisions and session management.
**Process**:
- **Phase 1**: ACE semantic search gathers codebase context
- **Phase 2**: cli-discuss-agent orchestrates Gemini/Codex/Claude for cross-verified analysis
- **Phase 3-5**: User decision → Plan generation → Execution handoff
**vs Single-CLI Planning**:
- **Single**: One model perspective, potential blind spots
- **Multi-CLI**: Cross-verification catches inconsistencies, builds consensus on solutions
### Value Proposition
1. **Multi-Perspective Analysis**: Gemini + Codex + Claude analyze from different angles
2. **Cross-Verification**: Identify agreements/disagreements, build confidence
3. **User-Driven Decisions**: Every round ends with user decision point
4. **Iterative Convergence**: Progressive refinement until consensus reached
### Orchestrator Boundary (CRITICAL)
- **ONLY command** for multi-CLI collaborative planning
- Manages: Session state, user decisions, agent delegation, phase transitions
- Delegates: CLI execution to @cli-discuss-agent, plan generation to @cli-lite-planning-agent
### Execution Flow
```
Phase 1: Context Gathering
└─ ACE semantic search, extract keywords, build context package
Phase 2: Multi-CLI Discussion (Iterative, via @cli-discuss-agent)
├─ Round N: Agent executes Gemini + Codex + Claude
├─ Cross-verify findings, synthesize solutions
├─ Write synthesis.json to rounds/{N}/
└─ Loop until convergence or max rounds
Phase 3: Present Options
└─ Display solutions with trade-offs from agent output
Phase 4: User Decision
├─ Select solution approach
├─ Select execution method (Agent/Codex/Auto)
├─ Select code review tool (Skip/Gemini/Codex/Agent)
└─ Route:
├─ Approve → Phase 5
├─ Need More Analysis → Return to Phase 2
└─ Cancel → Save session
Phase 5: Plan Generation & Execution Handoff
├─ Generate plan.json (via @cli-lite-planning-agent)
├─ Build executionContext with user selections
└─ Hand off to /workflow:lite-execute --in-memory
```
### Agent Roles
| Agent | Responsibility |
|-------|---------------|
| **Orchestrator** | Session management, ACE context, user decisions, phase transitions, executionContext assembly |
| **@cli-discuss-agent** | Multi-CLI execution (Gemini/Codex/Claude), cross-verification, solution synthesis, synthesis.json output |
| **@cli-lite-planning-agent** | Task decomposition, plan.json generation following schema |
## Core Responsibilities
### Phase 1: Context Gathering
**Session Initialization**:
```javascript
const sessionId = `MCP-${taskSlug}-${date}`
const sessionFolder = `.workflow/.multi-cli-plan/${sessionId}`
Bash(`mkdir -p ${sessionFolder}/rounds`)
```
**ACE Context Queries**:
```javascript
const aceQueries = [
`Project architecture related to ${keywords}`,
`Existing implementations of ${keywords[0]}`,
`Code patterns for ${keywords} features`,
`Integration points for ${keywords[0]}`
]
// Execute via mcp__ace-tool__search_context
```
**Context Package** (passed to agent):
- `relevant_files[]` - Files identified by ACE
- `detected_patterns[]` - Code patterns found
- `architecture_insights` - Structure understanding
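For illustration, a minimal sketch of how this package might be assembled from the ACE query results (the `runAceQuery` helper and the result fields it returns are assumptions, not part of this spec):
```javascript
// Illustrative assembly of the ACE context package passed to cli-discuss-agent as ace_context.
// runAceQuery and its result fields (files, patterns, insights) are hypothetical; the real
// mcp__ace-tool__search_context response shape may differ.
const aceResults = aceQueries.map(q => runAceQuery(q))

const contextPackage = {
  relevant_files: [...new Set(aceResults.flatMap(r => r.files || []))],        // files identified by ACE
  detected_patterns: [...new Set(aceResults.flatMap(r => r.patterns || []))],  // code patterns found
  architecture_insights: aceResults.map(r => r.insights).filter(Boolean).join('\n')
}
// Note: this is the Phase 1 ACE package; Phase 5 later builds a separate planning
// context-package.json from the selected solution.
```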
### Phase 2: Agent Delegation
**Core Principle**: Orchestrator only delegates and reads output - NO direct CLI execution.
**⚠️ CRITICAL - CLI EXECUTION REQUIREMENT**:
- **MUST** execute CLI calls via `Bash` with `run_in_background: true`
- **MUST** wait for hook callback to receive complete results
- **MUST NOT** proceed with next phase until CLI execution fully completes
- Do NOT use `TaskOutput` polling during CLI execution - wait passively for results
- Minimize scope: proceed only once the complete result is available
**Agent Invocation**:
```javascript
Task({
subagent_type: "cli-discuss-agent",
run_in_background: false,
description: `Discussion round ${currentRound}`,
prompt: `
## Input Context
- task_description: ${taskDescription}
- round_number: ${currentRound}
- session: { id: "${sessionId}", folder: "${sessionFolder}" }
- ace_context: ${JSON.stringify(contextPackage)}
- previous_rounds: ${JSON.stringify(analysisResults)}
- user_feedback: ${userFeedback || 'None'}
- cli_config: { tools: ["gemini", "codex"], mode: "parallel", fallback_chain: ["gemini", "codex", "claude"] }
## Execution Process
1. Parse input context (handle JSON strings)
2. Check if ACE supplementary search needed
3. Build CLI prompts with context
4. Execute CLIs (parallel or serial per cli_config.mode)
5. Parse CLI outputs, handle failures with fallback
6. Perform cross-verification between CLI results
7. Synthesize solutions, calculate scores
8. Calculate convergence, generate clarification questions
9. Write synthesis.json
## Output
Write: ${sessionFolder}/rounds/${currentRound}/synthesis.json
## Completion Checklist
- [ ] All configured CLI tools executed (or fallback triggered)
- [ ] Cross-verification completed with agreements/disagreements
- [ ] 2-3 solutions generated with file:line references
- [ ] Convergence score calculated (0.0-1.0)
- [ ] synthesis.json written with all Primary Fields
`
})
```
**Read Agent Output**:
```javascript
const synthesis = JSON.parse(Read(`${sessionFolder}/rounds/${round}/synthesis.json`))
// Access top-level fields: solutions, convergence, cross_verification, clarification_questions
```
**Convergence Decision**:
```javascript
if (synthesis.convergence.recommendation === 'converged') {
// Proceed to Phase 3
} else if (synthesis.convergence.recommendation === 'user_input_needed') {
// Collect user feedback, return to Phase 2
} else {
// Continue to next round if new_insights && round < maxRounds
}
```
### Phase 3: Present Options
**Display from Agent Output** (no processing):
```javascript
console.log(`
## Solution Options
${synthesis.solutions.map((s, i) => `
**Option ${i+1}: ${s.name}**
Source: ${s.source_cli.join(' + ')}
Effort: ${s.effort} | Risk: ${s.risk}
Pros: ${s.pros.join(', ')}
Cons: ${s.cons.join(', ')}
Files: ${s.affected_files.slice(0,3).map(f => `${f.file}:${f.line}`).join(', ')}
`).join('\n')}
## Cross-Verification
Agreements: ${synthesis.cross_verification.agreements.length}
Disagreements: ${synthesis.cross_verification.disagreements.length}
`)
```
### Phase 4: User Decision
**Decision Options**:
```javascript
AskUserQuestion({
questions: [
{
question: "Which solution approach?",
header: "Solution",
multiSelect: false,
options: solutions.map((s, i) => ({
label: `Option ${i+1}: ${s.name}`,
description: `${s.effort} effort, ${s.risk} risk`
})).concat([
{ label: "Need More Analysis", description: "Return to Phase 2" }
])
},
{
question: "Execution method:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent", description: "@code-developer agent" },
{ label: "Codex", description: "codex CLI tool" },
{ label: "Auto", description: "Auto-select based on complexity" }
]
},
{
question: "Code review after execution?",
header: "Review",
multiSelect: false,
options: [
{ label: "Skip", description: "No review" },
{ label: "Gemini Review", description: "Gemini CLI tool" },
{ label: "Codex Review", description: "codex review --uncommitted" },
{ label: "Agent Review", description: "Current agent review" }
]
}
]
})
```
**Routing**:
- Approve + execution method → Phase 5
- Need More Analysis → Phase 2 with feedback
- Cancel → Save session for resumption
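A minimal routing sketch, where `decision` stands for the AskUserQuestion result above and answers are read by header (the `collectUserFeedback` and `saveSessionState` helpers are illustrative assumptions):
```javascript
const solutionChoice = decision.answers["Solution"]

if (solutionChoice === "Need More Analysis") {
  userFeedback = collectUserFeedback()      // hypothetical: gather refinement notes from the user
  currentRound += 1                         // return to Phase 2 with feedback
} else if (solutionChoice) {
  userSelection = {
    execution_method: decision.answers["Execution"],  // Agent | Codex | Auto
    code_review_tool: decision.answers["Review"]      // Skip | Gemini | Codex | Agent
  }
  // proceed to Phase 5 with the selected solution
} else {
  saveSessionState(sessionFolder)           // hypothetical: persist session for later resumption
}
```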
### Phase 5: Plan Generation & Execution Handoff
**Step 1: Build Context-Package** (Orchestrator responsibility):
```javascript
// Extract key information from user decision and synthesis
const contextPackage = {
// Core solution details
solution: {
name: selectedSolution.name,
source_cli: selectedSolution.source_cli,
feasibility: selectedSolution.feasibility,
effort: selectedSolution.effort,
risk: selectedSolution.risk,
summary: selectedSolution.summary
},
// Implementation plan (tasks, flow, milestones)
implementation_plan: selectedSolution.implementation_plan,
// Dependencies
dependencies: selectedSolution.dependencies || { internal: [], external: [] },
// Technical concerns
technical_concerns: selectedSolution.technical_concerns || [],
// Consensus from cross-verification
consensus: {
agreements: synthesis.cross_verification.agreements,
resolved_conflicts: synthesis.cross_verification.resolution
},
// User constraints (from Phase 4 feedback)
constraints: userConstraints || [],
// Task context
task_description: taskDescription,
session_id: sessionId
}
// Write context-package for traceability
Write(`${sessionFolder}/context-package.json`, JSON.stringify(contextPackage, null, 2))
```
**Context-Package Schema**:
| Field | Type | Description |
|-------|------|-------------|
| `solution` | object | User-selected solution from synthesis |
| `solution.name` | string | Solution identifier |
| `solution.feasibility` | number | Viability score (0-1) |
| `solution.summary` | string | Brief analysis summary |
| `implementation_plan` | object | Task breakdown with flow and dependencies |
| `implementation_plan.approach` | string | High-level technical strategy |
| `implementation_plan.tasks[]` | array | Discrete tasks with id, name, depends_on, files |
| `implementation_plan.execution_flow` | string | Task sequence (e.g., "T1 → T2 → T3") |
| `implementation_plan.milestones` | string[] | Key checkpoints |
| `dependencies` | object | Module and package dependencies |
| `technical_concerns` | string[] | Risks and blockers |
| `consensus` | object | Cross-verified agreements from multi-CLI |
| `constraints` | string[] | User-specified constraints from Phase 4 |
```json
{
"solution": {
"name": "Strategy Pattern Refactoring",
"source_cli": ["gemini", "codex"],
"feasibility": 0.88,
"effort": "medium",
"risk": "low",
"summary": "Extract payment gateway interface, implement strategy pattern for multi-gateway support"
},
"implementation_plan": {
"approach": "Define interface → Create concrete strategies → Implement factory → Migrate existing code",
"tasks": [
{"id": "T1", "name": "Define PaymentGateway interface", "depends_on": [], "files": [{"file": "src/types/payment.ts", "line": 1, "action": "create"}], "key_point": "Include all existing Stripe methods"},
{"id": "T2", "name": "Implement StripeGateway", "depends_on": ["T1"], "files": [{"file": "src/payment/stripe.ts", "line": 1, "action": "create"}], "key_point": "Wrap existing logic"},
{"id": "T3", "name": "Create GatewayFactory", "depends_on": ["T1"], "files": [{"file": "src/payment/factory.ts", "line": 1, "action": "create"}], "key_point": null},
{"id": "T4", "name": "Migrate processor to use factory", "depends_on": ["T2", "T3"], "files": [{"file": "src/payment/processor.ts", "line": 45, "action": "modify"}], "key_point": "Backward compatible"}
],
"execution_flow": "T1 → (T2 | T3) → T4",
"milestones": ["Interface defined", "Gateway implementations complete", "Migration done"]
},
"dependencies": {
"internal": ["@/lib/payment-gateway", "@/types/payment"],
"external": ["stripe@^14.0.0"]
},
"technical_concerns": ["Existing tests must pass", "No breaking API changes"],
"consensus": {
"agreements": ["Use strategy pattern", "Keep existing API"],
"resolved_conflicts": "Factory over DI for simpler integration"
},
"constraints": ["backward compatible", "no breaking changes to PaymentResult type"],
"task_description": "Refactor payment processing for multi-gateway support",
"session_id": "MCP-payment-refactor-2026-01-14"
}
```
**Step 2: Invoke Planning Agent**:
```javascript
Task({
subagent_type: "cli-lite-planning-agent",
run_in_background: false,
description: "Generate implementation plan",
prompt: `
## Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json
## Context-Package (from orchestrator)
${JSON.stringify(contextPackage, null, 2)}
## Execution Process
1. Read plan-json-schema.json for output structure
2. Read project-tech.json and project-guidelines.json
3. Parse context-package fields:
- solution: name, feasibility, summary
- implementation_plan: tasks[], execution_flow, milestones
- dependencies: internal[], external[]
- technical_concerns: risks/blockers
- consensus: agreements, resolved_conflicts
- constraints: user requirements
4. Use implementation_plan.tasks[] as task foundation
5. Preserve task dependencies (depends_on) and execution_flow
6. Expand tasks with detailed acceptance criteria
7. Generate plan.json following schema exactly
## Output
- ${sessionFolder}/plan.json
## Completion Checklist
- [ ] plan.json preserves task dependencies from implementation_plan
- [ ] Task execution order follows execution_flow
- [ ] Key_points reflected in task descriptions
- [ ] User constraints applied to implementation
- [ ] Acceptance criteria are testable
- [ ] Schema fields match plan-json-schema.json exactly
`
})
```
**Step 3: Build executionContext**:
```javascript
// After plan.json is generated by cli-lite-planning-agent
const plan = JSON.parse(Read(`${sessionFolder}/plan.json`))
// Build executionContext (same structure as lite-plan)
executionContext = {
planObject: plan,
explorationsContext: null, // Multi-CLI doesn't use exploration files
explorationAngles: [], // No exploration angles
explorationManifest: null, // No manifest
clarificationContext: null, // Store user feedback from Phase 2 if exists
executionMethod: userSelection.execution_method, // From Phase 4
codeReviewTool: userSelection.code_review_tool, // From Phase 4
originalUserInput: taskDescription,
// Optional: Task-level executor assignments
executorAssignments: null, // Could be enhanced in future
session: {
id: sessionId,
folder: sessionFolder,
artifacts: {
explorations: [], // No explorations in multi-CLI workflow
explorations_manifest: null,
plan: `${sessionFolder}/plan.json`,
synthesis_rounds: Array.from({length: currentRound}, (_, i) =>
`${sessionFolder}/rounds/${i+1}/synthesis.json`
),
context_package: `${sessionFolder}/context-package.json`
}
}
}
```
**Step 4: Hand off to Execution**:
```javascript
// Hand off to lite-execute with in-memory context
Skill(skill="workflow:lite-execute", args="--in-memory")
```
## Output File Structure
```
.workflow/.multi-cli-plan/{MCP-task-slug-YYYY-MM-DD}/
├── session-state.json # Session tracking (orchestrator)
├── rounds/
│ ├── 1/synthesis.json # Round 1 analysis (cli-discuss-agent)
│ ├── 2/synthesis.json # Round 2 analysis (cli-discuss-agent)
│ └── .../
├── context-package.json # Extracted context for planning (orchestrator)
└── plan.json # Structured plan (cli-lite-planning-agent)
```
**File Producers**:
| File | Producer | Content |
|------|----------|---------|
| `session-state.json` | Orchestrator | Session metadata, rounds, decisions |
| `rounds/*/synthesis.json` | cli-discuss-agent | Solutions, convergence, cross-verification |
| `context-package.json` | Orchestrator | Extracted solution, dependencies, consensus for planning |
| `plan.json` | cli-lite-planning-agent | Structured tasks for lite-execute |
## synthesis.json Schema
```json
{
"round": 1,
"solutions": [{
"name": "Solution Name",
"source_cli": ["gemini", "codex"],
"feasibility": 0.85,
"effort": "low|medium|high",
"risk": "low|medium|high",
"summary": "Brief analysis summary",
"implementation_plan": {
"approach": "High-level technical approach",
"tasks": [
{"id": "T1", "name": "Task", "depends_on": [], "files": [], "key_point": "..."}
],
"execution_flow": "T1 → T2 → T3",
"milestones": ["Checkpoint 1", "Checkpoint 2"]
},
"dependencies": {"internal": [], "external": []},
"technical_concerns": ["Risk 1", "Blocker 2"]
}],
"convergence": {
"score": 0.85,
"new_insights": false,
"recommendation": "converged|continue|user_input_needed"
},
"cross_verification": {
"agreements": [],
"disagreements": [],
"resolution": "..."
},
"clarification_questions": []
}
```
**Key Planning Fields**:
| Field | Purpose |
|-------|---------|
| `feasibility` | Viability score (0-1) |
| `implementation_plan.tasks[]` | Discrete tasks with dependencies |
| `implementation_plan.execution_flow` | Task sequence visualization |
| `implementation_plan.milestones` | Key checkpoints |
| `technical_concerns` | Risks and blockers |
**Note**: Solutions ranked by internal scoring (array order = priority)
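Since array order encodes priority, a caller can treat the first entry as the recommended solution; a minimal sketch:
```javascript
// First solution in the array is the highest-ranked (recommended) option
const recommended = synthesis.solutions[0]
console.log(`Recommended: ${recommended.name} (feasibility ${recommended.feasibility}, ${recommended.effort} effort, ${recommended.risk} risk)`)
```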
## TodoWrite Structure
**Initialization**:
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Context Gathering", status: "in_progress", activeForm: "Gathering context" },
{ content: "Phase 2: Multi-CLI Discussion", status: "pending", activeForm: "Running discussion" },
{ content: "Phase 3: Present Options", status: "pending", activeForm: "Presenting options" },
{ content: "Phase 4: User Decision", status: "pending", activeForm: "Awaiting decision" },
{ content: "Phase 5: Plan Generation", status: "pending", activeForm: "Generating plan" }
]})
```
**During Discussion Rounds**:
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Context Gathering", status: "completed", activeForm: "Gathering context" },
{ content: "Phase 2: Multi-CLI Discussion", status: "in_progress", activeForm: "Running discussion" },
{ content: " → Round 1: Initial analysis", status: "completed", activeForm: "Analyzing" },
{ content: " → Round 2: Deep verification", status: "in_progress", activeForm: "Verifying" },
{ content: "Phase 3: Present Options", status: "pending", activeForm: "Presenting options" },
// ...
]})
```
## Error Handling
| Error | Resolution |
|-------|------------|
| ACE search fails | Fall back to Glob/Grep for file discovery |
| Agent fails | Retry once, then present partial results |
| CLI timeout (in agent) | Agent uses fallback: gemini → codex → claude |
| No convergence | Present best options, flag uncertainty |
| synthesis.json parse error | Request agent retry |
| User cancels | Save session for later resumption |
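As a hedged example of the first fallback row, ACE failures might degrade to plain file discovery (the `runAceQueries` wrapper, the Glob pattern, and the `Grep(file, pattern)` signature are illustrative assumptions):
```javascript
let relevantFiles = []
try {
  relevantFiles = runAceQueries(aceQueries)            // hypothetical wrapper around mcp__ace-tool__search_context
} catch (err) {
  console.log(`ACE search failed (${err.message}); falling back to Glob/Grep`)
  const candidates = Glob(`src/**/*${keywords[0]}*`)   // coarse keyword-based file discovery
  relevantFiles = candidates.filter(f => Grep(f, keywords.join('|')))
}
```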
## Configuration
| Flag | Default | Description |
|------|---------|-------------|
| `--max-rounds` | 3 | Maximum discussion rounds |
| `--tools` | gemini,codex | CLI tools for analysis |
| `--mode` | parallel | Execution mode: parallel or serial |
| `--auto-execute` | false | Auto-execute after approval |
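A minimal flag-parsing sketch over `$ARGUMENTS`, matching the defaults above (the regexes are illustrative):
```javascript
const maxRounds = parseInt(($ARGUMENTS.match(/--max-rounds=(\d+)/) || [])[1] || '3', 10)
const tools = (($ARGUMENTS.match(/--tools=([\w,]+)/) || [])[1] || 'gemini,codex').split(',')
const mode = ($ARGUMENTS.match(/--mode=(parallel|serial)/) || [])[1] || 'parallel'
const autoExecute = $ARGUMENTS.includes('--auto-execute')
```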
## Best Practices
1. **Be Specific**: Detailed task descriptions improve ACE context quality
2. **Provide Feedback**: Use clarification rounds to refine requirements
3. **Trust Cross-Verification**: Multi-CLI consensus indicates high confidence
4. **Review Trade-offs**: Consider pros/cons before selecting solution
5. **Check synthesis.json**: Review agent output for detailed analysis
6. **Iterate When Needed**: Don't hesitate to request more analysis
## Related Commands
```bash
# Simpler single-round planning
/workflow:lite-plan "task description"
# Issue-driven discovery
/issue:discover-by-prompt "find issues"
# View session files
cat .workflow/.multi-cli-plan/{session-id}/plan.json
cat .workflow/.multi-cli-plan/{session-id}/rounds/1/synthesis.json
cat .workflow/.multi-cli-plan/{session-id}/context-package.json
# Direct execution (if you have plan.json)
/workflow:lite-execute plan.json
```

View File

@@ -0,0 +1,377 @@
---
name: plan-verify
description: Perform READ-ONLY verification analysis between IMPL_PLAN.md, task JSONs, and brainstorming artifacts. Generates structured report with quality gate recommendation. Does NOT modify any files.
argument-hint: "[optional: --session session-id]"
allowed-tools: Read(*), Write(*), Glob(*), Bash(*)
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Goal
Generate a comprehensive verification report that identifies inconsistencies, duplications, ambiguities, and underspecified items between action planning artifacts (`IMPL_PLAN.md`, `task.json`) and brainstorming artifacts (`role analysis documents`). This command MUST run only after `/workflow:plan` has successfully produced complete `IMPL_PLAN.md` and task JSON files.
**Output**: A structured Markdown report saved to `.workflow/active/WFS-{session}/.process/PLAN_VERIFICATION.md` containing:
- Executive summary with quality gate recommendation
- Detailed findings by severity (CRITICAL/HIGH/MEDIUM/LOW)
- Requirements coverage analysis
- Dependency integrity check
- Synthesis alignment validation
- Actionable remediation recommendations
## Operating Constraints
**STRICTLY READ-ONLY FOR SOURCE ARTIFACTS**:
- **MUST NOT** modify `IMPL_PLAN.md`, any `task.json` files, or brainstorming artifacts
- **MUST NOT** create or delete task files
- **MUST ONLY** write the verification report to `.process/PLAN_VERIFICATION.md`
**Synthesis Authority**: The `role analysis documents` are **authoritative** for requirements and design decisions. Any conflicts between IMPL_PLAN/tasks and synthesis are automatically CRITICAL and require adjustment of the plan/tasks—not reinterpretation of requirements.
**Quality Gate Authority**: The verification report provides a binding recommendation (BLOCK_EXECUTION / PROCEED_WITH_FIXES / PROCEED_WITH_CAUTION / PROCEED) based on objective severity criteria. User MUST review critical/high issues before proceeding with implementation.
## Execution Steps
### 1. Initialize Analysis Context
```bash
# Detect active workflow session
IF --session parameter provided:
session_id = provided session
ELSE:
# Auto-detect active session
active_sessions = bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
IF active_sessions is empty:
ERROR: "No active workflow session found. Use --session <session-id>"
EXIT
ELSE IF active_sessions has multiple entries:
# Use most recently modified session
session_id = bash(ls -td .workflow/active/WFS-*/ 2>/dev/null | head -1 | xargs basename)
ELSE:
session_id = basename(active_sessions[0])
# Derive absolute paths
session_dir = .workflow/active/WFS-{session}
brainstorm_dir = session_dir/.brainstorming
task_dir = session_dir/.task
process_dir = session_dir/.process
session_file = session_dir/workflow-session.json
# Create .process directory if not exists (report output location)
IF NOT EXISTS(process_dir):
bash(mkdir -p "{process_dir}")
# Validate required artifacts
# Note: "role analysis documents" refers to [role]/analysis.md files (e.g., product-manager/analysis.md)
SYNTHESIS_DIR = brainstorm_dir # Contains role analysis files: */analysis.md
IMPL_PLAN = session_dir/IMPL_PLAN.md
TASK_FILES = Glob(task_dir/*.json)
PLANNING_NOTES = session_dir/planning-notes.md # N+1 context and constraints
# Abort if missing - in order of dependency
SESSION_FILE_EXISTS = EXISTS(session_file)
IF NOT SESSION_FILE_EXISTS:
WARNING: "workflow-session.json not found. User intent alignment verification will be skipped."
# Continue execution - this is optional context, not blocking
PLANNING_NOTES_EXISTS = EXISTS(PLANNING_NOTES)
IF NOT PLANNING_NOTES_EXISTS:
WARNING: "planning-notes.md not found. Constraints/N+1 context verification will be skipped."
# Continue execution - optional context
SYNTHESIS_FILES = Glob(brainstorm_dir/*/analysis.md)
IF SYNTHESIS_FILES.count == 0:
ERROR: "No role analysis documents found in .brainstorming/*/analysis.md. Run /workflow:brainstorm:synthesis first"
EXIT
IF NOT EXISTS(IMPL_PLAN):
ERROR: "IMPL_PLAN.md not found. Run /workflow:plan first"
EXIT
IF TASK_FILES.count == 0:
ERROR: "No task JSON files found. Run /workflow:plan first"
EXIT
```
### 2. Load Artifacts (Progressive Disclosure)
Load only minimal necessary context from each artifact:
**From workflow-session.json** (OPTIONAL - Primary Reference for User Intent):
- **ONLY IF EXISTS**: Load user intent context
- Original user prompt/intent (project or description field)
- User's stated goals and objectives
- User's scope definition
- **IF MISSING**: Set user_intent_analysis = "SKIPPED: workflow-session.json not found"
**From planning-notes.md** (OPTIONAL - Constraints & N+1 Context):
- **ONLY IF EXISTS**: Load planning context
- Consolidated Constraints (numbered list from Phase 1-3)
- N+1 Context: Decisions table (Decision | Rationale | Revisit?)
- N+1 Context: Deferred items list
- **IF MISSING**: Set planning_notes_analysis = "SKIPPED: planning-notes.md not found"
**From role analysis documents** (AUTHORITATIVE SOURCE):
- Functional Requirements (IDs, descriptions, acceptance criteria)
- Non-Functional Requirements (IDs, targets)
- Business Requirements (IDs, success metrics)
- Key Architecture Decisions
- Risk factors and mitigation strategies
- Implementation Roadmap (high-level phases)
**From IMPL_PLAN.md**:
- Summary and objectives
- Context Analysis
- Implementation Strategy
- Task Breakdown Summary
- Success Criteria
- Brainstorming Artifacts References (if present)
**From task.json files**:
- Task IDs
- Titles and descriptions
- Status
- Dependencies (depends_on, blocks)
- Context (requirements, focus_paths, acceptance, artifacts)
- Flow control (pre_analysis, implementation_approach)
- Meta (complexity, priority)
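A hedged sketch of the optional-artifact handling described above, reusing the variables from Step 1 (the `Read`/`JSON.parse` handling is illustrative):
```javascript
// Load optional context only when present; record a SKIPPED marker otherwise
const user_intent_analysis = SESSION_FILE_EXISTS
  ? JSON.parse(Read(session_file))                     // original prompt, goals, scope
  : "SKIPPED: workflow-session.json not found"

const planning_notes_analysis = PLANNING_NOTES_EXISTS
  ? Read(PLANNING_NOTES)                               // constraints + N+1 decisions/deferred items
  : "SKIPPED: planning-notes.md not found"

// Authoritative requirements always load
const roleAnalyses = SYNTHESIS_FILES.map(f => Read(f))
```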
### 3. Build Semantic Models
Create internal representations (do not include raw artifacts in output):
**Requirements inventory**:
- Each functional/non-functional/business requirement with stable ID
- Requirement text, acceptance criteria, priority
**Architecture decisions inventory**:
- ADRs from synthesis
- Technology choices
- Data model references
**Task coverage mapping**:
- Map each task to one or more requirements (by ID reference or keyword inference)
- Map each requirement to covering tasks
**Dependency graph**:
- Task-to-task dependencies (depends_on, blocks)
- Requirement-level dependencies (from synthesis)
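A minimal sketch of the task-coverage mapping (the `FR/NFR/BR` ID format and regex extraction are assumptions; real matching may also use keyword inference):
```javascript
// Map tasks to requirements by explicit ID references found in task text
const reqIdPattern = /\b(?:FR|NFR|BR)-\d+\b/g          // assumed requirement ID format
const taskToReqs = new Map()
const reqToTasks = new Map()

for (const task of tasks) {                            // tasks = parsed task.json files from Step 2
  const text = JSON.stringify(task.context || {}) + (task.description || '')
  const refs = [...new Set(text.match(reqIdPattern) || [])]
  taskToReqs.set(task.id, refs)
  refs.forEach(r => reqToTasks.set(r, [...(reqToTasks.get(r) || []), task.id]))
}

// Requirements with no covering task surface as coverage gaps
const uncovered = requirements.filter(r => !(reqToTasks.get(r.id) || []).length)
```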
### 4. Detection Passes (Agent-Driven Multi-Dimensional Analysis)
**Execution Strategy**:
- Single `cli-explore-agent` invocation
- Agent executes multiple CLI analyses internally (different dimensions: A-H)
- Token Budget: 50 findings maximum (aggregate remainder in overflow summary)
- Priority Allocation: CRITICAL (unlimited) → HIGH (15) → MEDIUM (20) → LOW (15)
- Early Exit: If CRITICAL findings > 0 in User Intent/Requirements Coverage, skip LOW/MEDIUM checks
**Execution Order** (Agent orchestrates internally):
1. **Tier 1 (CRITICAL Path)**: A, B, C, I - User intent, coverage, consistency, constraints compliance (full analysis)
2. **Tier 2 (HIGH Priority)**: D, E, J - Dependencies, synthesis alignment, N+1 context validation (limit 15 findings)
3. **Tier 3 (MEDIUM Priority)**: F - Specification quality (limit 20 findings)
4. **Tier 4 (LOW Priority)**: G, H - Duplication, feasibility (limit 15 findings)
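A hedged sketch of the tiered loop with budget limits and early exit (the tier table is condensed from the list above; `runTierAnalysis` is an illustrative stand-in for one CLI analysis pass per tier):
```javascript
const TIERS = [
  { id: 1, dimensions: ['A', 'B', 'C', 'I'], limit: Infinity },  // CRITICAL path
  { id: 2, dimensions: ['D', 'E', 'J'],      limit: 15 },        // HIGH
  { id: 3, dimensions: ['F'],                limit: 20 },        // MEDIUM
  { id: 4, dimensions: ['G', 'H'],           limit: 15 }         // LOW
]

let allFindings = []
let skipLowPriority = false
for (const tier of TIERS) {
  if (skipLowPriority && tier.id >= 3) continue        // early exit skips MEDIUM/LOW tiers
  const findings = runTierAnalysis(tier)               // hypothetical: one CLI analysis per tier
  allFindings = allFindings.concat(findings.slice(0, tier.limit))
  if (tier.id === 1 && findings.some(f => f.severity === 'CRITICAL')) {
    skipLowPriority = true
  }
}
```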
---
#### Phase 4.1: Launch Unified Verification Agent
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description="Multi-dimensional plan verification",
prompt=`
## Plan Verification Task
### MANDATORY FIRST STEPS
1. Read: ~/.claude/workflows/cli-templates/schemas/plan-verify-agent-schema.json (dimensions & rules)
2. Read: ~/.claude/workflows/cli-templates/schemas/verify-json-schema.json (output schema)
3. Read: ${session_file} (user intent)
4. Read: ${PLANNING_NOTES} (constraints & N+1 context)
5. Read: ${IMPL_PLAN} (implementation plan)
6. Glob: ${task_dir}/*.json (task files)
7. Glob: ${SYNTHESIS_DIR}/*/analysis.md (role analyses)
### Execution Flow
**Load schema → Execute tiered CLI analysis → Aggregate findings → Write JSON**
FOR each tier in [1, 2, 3, 4]:
- Load tier config from plan-verify-agent-schema.json
- Execute: ccw cli -p "PURPOSE: Verify dimensions {tier.dimensions}
TASK: {tier.checks from schema}
CONTEXT: @${session_dir}/**/*
EXPECTED: Findings JSON with dimension, severity, location, summary, recommendation
CONSTRAINTS: Limit {tier.limit} findings
" --tool gemini --mode analysis --rule {tier.rule}
- Parse findings, check early exit condition
- IF tier == 1 AND critical_count > 0: skip tier 3-4
### Output
Write: ${process_dir}/verification-findings.json (follow verify-json-schema.json)
Return: Quality gate decision + 2-3 sentence summary
`
)
```
---
#### Phase 4.2: Load and Organize Findings
```javascript
// Load findings (single parse for all subsequent use)
const data = JSON.parse(Read(`${process_dir}/verification-findings.json`))
const { session_id, timestamp, verification_tiers_completed, findings, summary } = data
const { critical_count, high_count, medium_count, low_count, total_findings, coverage_percentage, recommendation } = summary
// Group by severity and dimension
const bySeverity = Object.groupBy(findings, f => f.severity)
const byDimension = Object.groupBy(findings, f => f.dimension)
// Dimension metadata (from schema)
const DIMS = {
A: "User Intent Alignment", B: "Requirements Coverage", C: "Consistency Validation",
D: "Dependency Integrity", E: "Synthesis Alignment", F: "Task Specification Quality",
G: "Duplication Detection", H: "Feasibility Assessment",
I: "Constraints Compliance", J: "N+1 Context Validation"
}
```
### 5. Generate Report
```javascript
// Helper: render dimension section
const renderDimension = (dim) => {
const items = byDimension[dim] || []
return items.length > 0
? items.map(f => `### ${f.id}: ${f.summary}\n- **Severity**: ${f.severity}\n- **Location**: ${f.location.join(', ')}\n- **Recommendation**: ${f.recommendation}`).join('\n\n')
: `> ✅ No ${DIMS[dim]} issues detected.`
}
// Helper: render severity section
const renderSeverity = (severity, impact) => {
const items = bySeverity[severity] || []
return items.length > 0
? items.map(f => `#### ${f.id}: ${f.summary}\n- **Dimension**: ${f.dimension_name}\n- **Location**: ${f.location.join(', ')}\n- **Impact**: ${impact}\n- **Recommendation**: ${f.recommendation}`).join('\n\n')
: `> ✅ No ${severity.toLowerCase()}-severity issues detected.`
}
// Build Markdown report
const fullReport = `
# Plan Verification Report
**Session**: WFS-${session_id} | **Generated**: ${timestamp}
**Tiers Completed**: ${verification_tiers_completed.join(', ')}
---
## Executive Summary
| Metric | Value | Status |
|--------|-------|--------|
| Risk Level | ${critical_count > 0 ? 'CRITICAL' : high_count > 0 ? 'HIGH' : medium_count > 0 ? 'MEDIUM' : 'LOW'} | ${critical_count > 0 ? '🔴' : high_count > 0 ? '🟠' : medium_count > 0 ? '🟡' : '🟢'} |
| Critical/High/Medium/Low | ${critical_count}/${high_count}/${medium_count}/${low_count} | |
| Coverage | ${coverage_percentage}% | ${coverage_percentage >= 90 ? '🟢' : coverage_percentage >= 75 ? '🟡' : '🔴'} |
**Recommendation**: **${recommendation}**
---
## Findings Summary
| ID | Dimension | Severity | Location | Summary |
|----|-----------|----------|----------|---------|
${findings.map(f => `| ${f.id} | ${f.dimension_name} | ${f.severity} | ${f.location.join(', ')} | ${f.summary} |`).join('\n')}
---
## Analysis by Dimension
${['A','B','C','D','E','F','G','H','I','J'].map(d => `### ${d}. ${DIMS[d]}\n\n${renderDimension(d)}`).join('\n\n---\n\n')}
---
## Findings by Severity
### CRITICAL (${critical_count})
${renderSeverity('CRITICAL', 'Blocks execution')}
### HIGH (${high_count})
${renderSeverity('HIGH', 'Fix before execution recommended')}
### MEDIUM (${medium_count})
${renderSeverity('MEDIUM', 'Address during/after implementation')}
### LOW (${low_count})
${renderSeverity('LOW', 'Optional improvement')}
---
## Next Steps
${recommendation === 'BLOCK_EXECUTION' ? '🛑 **BLOCK**: Fix critical issues → Re-verify' :
recommendation === 'PROCEED_WITH_FIXES' ? '⚠️ **FIX RECOMMENDED**: Address high issues → Re-verify or Execute' :
'✅ **READY**: Proceed to /workflow:execute'}
Re-verify: \`/workflow:plan-verify --session ${session_id}\`
Execute: \`/workflow:execute --resume-session="${session_id}"\`
`
// Write report
Write(`${process_dir}/PLAN_VERIFICATION.md`, fullReport)
console.log(`✅ Report: ${process_dir}/PLAN_VERIFICATION.md\n📊 ${recommendation} | C:${critical_count} H:${high_count} M:${medium_count} L:${low_count} | Coverage:${coverage_percentage}%`)
```
### 6. Next Step Selection
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const canExecute = recommendation !== 'BLOCK_EXECUTION'
// Auto mode
if (autoYes) {
if (canExecute) {
Skill(skill="workflow:execute", args="--yes --resume-session=\"${session_id}\"")
} else {
console.log(`[--yes] BLOCK_EXECUTION - Fix ${critical_count} critical issues first.`)
}
return
}
// Interactive mode - build options based on quality gate
const options = canExecute
? [
{ label: recommendation === 'PROCEED_WITH_FIXES' ? "Execute Anyway" : "Execute (Recommended)",
description: "Proceed to /workflow:execute" },
{ label: "Review Report", description: "Review findings before deciding" },
{ label: "Re-verify", description: "Re-run after manual fixes" }
]
: [
{ label: "Review Report", description: "Review critical issues" },
{ label: "Re-verify", description: "Re-run after fixing issues" }
]
const selection = AskUserQuestion({
questions: [{
question: `Quality gate: ${recommendation}. Next step?`,
header: "Action",
multiSelect: false,
options
}]
})
// Handle selection
if (selection.includes("Execute")) {
Skill(skill="workflow:execute", args="--resume-session=\"${session_id}\"")
} else if (selection === "Re-verify") {
Skill(skill="workflow:plan-verify", args="--session ${session_id}")
}
```

View File

@@ -1,10 +1,15 @@
---
name: plan
description: 5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs
argument-hint: "\"text description\"|file.md"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
argument-hint: "[-y|--yes] \"text description\"|file.md"
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*)
group: workflow
---
## Auto Mode
When `--yes` or `-y`: Auto-continue all phases (skip confirmations), use recommended conflict resolutions.
# Workflow Plan Command (/workflow:plan)
## Coordinator Role
@@ -23,7 +28,7 @@ This workflow runs **fully autonomously** once triggered. Phase 3 (conflict reso
5. **Phase 4 executes** → Task generation (task-generate-agent) → Reports final summary
**Task Attachment Model**:
- SlashCommand execute **expands workflow** by attaching sub-tasks to current TodoWrite
- Skill execute **expands workflow** by attaching sub-tasks to current TodoWrite
- When a sub-command is executed (e.g., `/workflow:tools:context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
- Orchestrator **executes these attached tasks** sequentially
- After completion, attached tasks are **collapsed** back to high-level phase summary
@@ -43,7 +48,7 @@ This workflow runs **fully autonomously** once triggered. Phase 3 (conflict reso
3. **Parse Every Output**: Extract required data from each command/agent output for next phase
4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
6. **Task Attachment Model**: SlashCommand execute **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
6. **Task Attachment Model**: Skill execute **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
7. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
## Execution Process
@@ -83,7 +88,7 @@ Return:
**Step 1.1: Execute** - Create or discover workflow session
```javascript
SlashCommand(command="/workflow:session:start --auto \"[structured-task-description]\"")
Skill(skill="workflow:session:start", args="--auto \"[structured-task-description]\"")
```
**Task Description Structure**:
@@ -111,7 +116,51 @@ CONTEXT: Existing user database schema, REST API endpoints
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress
**After Phase 1**: Return to user showing Phase 1 results, then auto-continue to Phase 2
**After Phase 1**: Initialize planning-notes.md with user intent
```javascript
// Create planning notes document with N+1 context support
const planningNotesPath = `.workflow/active/${sessionId}/planning-notes.md`
const userGoal = structuredDescription.goal
const userConstraints = structuredDescription.context || "None specified"
Write(planningNotesPath, `# Planning Notes
**Session**: ${sessionId}
**Created**: ${new Date().toISOString()}
## User Intent (Phase 1)
- **GOAL**: ${userGoal}
- **KEY_CONSTRAINTS**: ${userConstraints}
---
## Context Findings (Phase 2)
(To be filled by context-gather)
## Conflict Decisions (Phase 3)
(To be filled if conflicts detected)
## Consolidated Constraints (Phase 4 Input)
1. ${userConstraints}
---
## Task Generation (Phase 4)
(To be filled by action-planning-agent)
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
### Deferred
- [ ] (For N+1)
`)
```
Return to user showing Phase 1 results, then auto-continue to Phase 2
---
@@ -120,7 +169,7 @@ CONTEXT: Existing user database schema, REST API endpoints
**Step 2.1: Execute** - Gather project context and analyze codebase
```javascript
SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[structured-task-description]\"")
Skill(skill="workflow:tools:context-gather", args="--session [sessionId] \"[structured-task-description]\"")
```
**Use Same Structured Description**: Pass the same structured format from Phase 1
@@ -134,10 +183,11 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st
**Validation**:
- Context package path extracted
- File exists and is valid JSON
- `prioritized_context` field exists
<!-- TodoWrite: When context-gather executed, INSERT 3 context-gather tasks, mark first as in_progress -->
**TodoWrite Update (Phase 2 SlashCommand executed - tasks attached)**:
**TodoWrite Update (Phase 2 Skill executed - tasks attached)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
@@ -149,7 +199,7 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st
]
```
**Note**: SlashCommand execute **attaches** context-gather's 3 tasks. Orchestrator **executes** these tasks sequentially.
**Note**: Skill execute **attaches** context-gather's 3 tasks. Orchestrator **executes** these tasks sequentially.
<!-- TodoWrite: After Phase 2 tasks complete, REMOVE Phase 2.1-2.3, restore to orchestrator view -->
@@ -164,7 +214,37 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st
**Note**: Phase 2 tasks completed and collapsed to summary.
**After Phase 2**: Return to user showing Phase 2 results, then auto-continue to Phase 3/4 (depending on conflict_risk)
**After Phase 2**: Update planning-notes.md with context findings, then auto-continue
```javascript
// Read context-package to extract key findings
const contextPackage = JSON.parse(Read(contextPath))
const conflictRisk = contextPackage.conflict_detection?.risk_level || 'low'
const criticalFiles = (contextPackage.exploration_results?.aggregated_insights?.critical_files || [])
.slice(0, 5).map(f => f.path)
const archPatterns = contextPackage.project_context?.architecture_patterns || []
const constraints = contextPackage.exploration_results?.aggregated_insights?.constraints || []
// Append Phase 2 findings to planning-notes.md
Edit(planningNotesPath, {
old: '## Context Findings (Phase 2)\n(To be filled by context-gather)',
new: `## Context Findings (Phase 2)
- **CRITICAL_FILES**: ${criticalFiles.join(', ') || 'None identified'}
- **ARCHITECTURE**: ${archPatterns.join(', ') || 'Not detected'}
- **CONFLICT_RISK**: ${conflictRisk}
- **CONSTRAINTS**: ${constraints.length > 0 ? constraints.join('; ') : 'None'}`
})
// Append Phase 2 constraints to consolidated list
Edit(planningNotesPath, {
old: '## Consolidated Constraints (Phase 4 Input)',
new: `## Consolidated Constraints (Phase 4 Input)
${constraints.map((c, i) => `${i + 2}. [Context] ${c}`).join('\n')}`
})
```
Return to user showing Phase 2 results, then auto-continue to Phase 3/4 (depending on conflict_risk)
---
@@ -175,7 +255,7 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st
**Step 3.1: Execute** - Detect and resolve conflicts with CLI analysis
```javascript
SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId] --context [contextPath]")
Skill(skill="workflow:tools:conflict-resolution", args="--session [sessionId] --context [contextPath]")
```
**Input**:
@@ -196,7 +276,7 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
<!-- TodoWrite: If conflict_risk ≥ medium, INSERT 3 conflict-resolution tasks -->
**TodoWrite Update (Phase 3 SlashCommand executed - tasks attached, if conflict_risk ≥ medium)**:
**TodoWrite Update (Phase 3 Skill executed - tasks attached, if conflict_risk ≥ medium)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
@@ -209,7 +289,7 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
]
```
**Note**: SlashCommand execute **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks sequentially.
**Note**: Skill execute **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks sequentially.
<!-- TodoWrite: After Phase 3 tasks complete, REMOVE Phase 3.1-3.3, restore to orchestrator view -->
@@ -225,7 +305,45 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Note**: Phase 3 tasks completed and collapsed to summary.
**After Phase 3**: Return to user showing conflict resolution results (if executed) and selected strategies, then auto-continue to Phase 3.5
**After Phase 3**: Update planning-notes.md with conflict decisions (if executed), then auto-continue
```javascript
// If Phase 3 was executed, update planning-notes.md
if (['medium', 'high'].includes(conflictRisk)) {
const conflictResPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`
if (fs.existsSync(conflictResPath)) {
const conflictRes = JSON.parse(Read(conflictResPath))
const resolved = conflictRes.resolved_conflicts || []
const modifiedArtifacts = conflictRes.modified_artifacts || []
const planningConstraints = conflictRes.planning_constraints || []
// Update Phase 3 section
Edit(planningNotesPath, {
old: '## Conflict Decisions (Phase 3)\n(To be filled if conflicts detected)',
new: `## Conflict Decisions (Phase 3)
- **RESOLVED**: ${resolved.map(r => `${r.type} → ${r.strategy}`).join('; ') || 'None'}
- **MODIFIED_ARTIFACTS**: ${modifiedArtifacts.join(', ') || 'None'}
- **CONSTRAINTS**: ${planningConstraints.join('; ') || 'None'}`
})
// Append Phase 3 constraints to consolidated list
if (planningConstraints.length > 0) {
const currentNotes = Read(planningNotesPath)
const constraintCount = (currentNotes.match(/^\d+\./gm) || []).length
Edit(planningNotesPath, {
old: '## Consolidated Constraints (Phase 4 Input)',
new: `## Consolidated Constraints (Phase 4 Input)
${planningConstraints.map((c, i) => `${constraintCount + i + 1}. [Conflict] ${c}`).join('\n')}`
})
}
}
}
```
Return to user showing conflict resolution results (if executed) and selected strategies, then auto-continue to Phase 3.5
**Memory State Check**:
- Evaluate current context window usage and memory state
@@ -234,7 +352,7 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Step 3.2: Execute** - Optimize memory before proceeding
```javascript
SlashCommand(command="/compact")
Skill(skill="compact")
```
- Memory compaction is particularly important after analysis phase which may generate extensive documentation
@@ -273,12 +391,17 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Step 4.1: Execute** - Generate implementation plan and task JSONs
```javascript
SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]")
Skill(skill="workflow:tools:task-generate-agent", args="--session [sessionId]")
```
**CLI Execution Note**: CLI tool usage is now determined semantically by action-planning-agent based on user's task description. If user specifies "use Codex/Gemini/Qwen for X", the agent embeds `command` fields in relevant `implementation_approach` steps.
**Input**: `sessionId` from Phase 1
**Input**:
- `sessionId` from Phase 1
- **planning-notes.md**: Consolidated constraints from all phases (Phase 1-3)
- Path: `.workflow/active/[sessionId]/planning-notes.md`
- Contains: User intent, context findings, conflict decisions, consolidated constraints
- **Purpose**: Provides structured, minimal context summary to action-planning-agent
**Validation**:
- `.workflow/active/[sessionId]/IMPL_PLAN.md` exists
@@ -287,7 +410,7 @@ SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]"
<!-- TodoWrite: When task-generate-agent executed, ATTACH 1 agent task -->
**TodoWrite Update (Phase 4 SlashCommand executed - agent task attached)**:
**TodoWrite Update (Phase 4 Skill executed - agent task attached)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
@@ -311,27 +434,62 @@ SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]"
**Note**: Agent task completed. No collapse needed (single task).
**Return to User**:
```
Planning complete for session: [sessionId]
Tasks generated: [count]
Plan: .workflow/active/[sessionId]/IMPL_PLAN.md
**Step 4.2: User Decision** - Choose next action
Recommended Next Steps:
1. /workflow:action-plan-verify --session [sessionId] # Verify plan quality before execution
2. /workflow:status # Review task breakdown
3. /workflow:execute # Start implementation (after verification)
After Phase 4 completes, present user with action choices:
Quality Gate: Consider running /workflow:action-plan-verify to catch issues early
```javascript
console.log(`
✅ Planning complete for session: ${sessionId}
📊 Tasks generated: ${taskCount}
📋 Plan: .workflow/active/${sessionId}/IMPL_PLAN.md
`);
// Ask user for next action
const userChoice = AskUserQuestion({
questions: [{
question: "Planning complete. What would you like to do next?",
header: "Next Action",
multiSelect: false,
options: [
{
label: "Verify Plan Quality (Recommended)",
description: "Run quality verification to catch issues before execution. Checks plan structure, task dependencies, and completeness."
},
{
label: "Start Execution",
description: "Begin implementing tasks immediately. Use this if you've already reviewed the plan or want to start quickly."
},
{
label: "Review Status Only",
description: "View task breakdown and session status without taking further action. You can decide what to do next manually."
}
]
}]
});
// Execute based on user choice
if (userChoice.answers["Next Action"] === "Verify Plan Quality (Recommended)") {
console.log("\n🔍 Starting plan verification...\n");
Skill(skill="workflow:plan-verify", args="--session " + sessionId);
} else if (userChoice.answers["Next Action"] === "Start Execution") {
console.log("\n🚀 Starting task execution...\n");
Skill(skill="workflow:execute", args="--session " + sessionId);
} else if (userChoice.answers["Next Action"] === "Review Status Only") {
console.log("\n📊 Displaying session status...\n");
Skill(skill="workflow:status", args="--session " + sessionId);
}
```
**Return to User**: Based on user's choice, execute the corresponding workflow command.
## TodoWrite Pattern
**Core Concept**: Dynamic task attachment and collapse for real-time visibility into workflow execution.
### Key Principles
1. **Task Attachment** (when SlashCommand executed):
1. **Task Attachment** (when Skill executed):
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
- **Phase 2, 3**: Multiple sub-tasks attached (e.g., Phase 2.1, 2.2, 2.3)
- **Phase 4**: Single agent task attached (e.g., "Execute task-generate-agent")
@@ -400,26 +558,21 @@ User Input (task description)
Phase 1: session:start --auto "structured-description"
↓ Output: sessionId
Session Memory: Previous tasks, context, artifacts
Write: planning-notes.md (User Intent section)
Phase 2: context-gather --session sessionId "structured-description"
↓ Input: sessionId + session memory + structured description
↓ Output: contextPath (context-package.json) + conflict_risk
↓ Input: sessionId + structured description
↓ Output: contextPath (context-package.json with prioritized_context) + conflict_risk
↓ Update: planning-notes.md (Context Findings + Consolidated Constraints)
Phase 3: conflict-resolution [AUTO-TRIGGERED if conflict_risk ≥ medium]
↓ Input: sessionId + contextPath + conflict_risk
CLI-powered conflict detection (JSON output)
AskUserQuestion: Present conflicts + resolution strategies
↓ User selects strategies (or skip)
↓ Apply modifications via Edit tool:
↓ - Update guidance-specification.md
↓ - Update role analyses (*.md)
↓ - Mark context-package.json as "resolved"
↓ Output: Modified brainstorm artifacts (NO report file)
Output: Modified brainstorm artifacts
Update: planning-notes.md (Conflict Decisions + Consolidated Constraints)
↓ Skip if conflict_risk is none/low → proceed directly to Phase 4
Phase 4: task-generate-agent --session sessionId
↓ Input: sessionId + resolved brainstorm artifacts + session memory
↓ Input: sessionId + planning-notes.md + context-package.json + brainstorm artifacts
↓ Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md
Return summary to user
@@ -442,7 +595,7 @@ User triggers: /workflow:plan "Build authentication system"
Phase 1: Session Discovery
→ sessionId extracted
Phase 2: Context Gathering (SlashCommand executed)
Phase 2: Context Gathering (Skill executed)
→ ATTACH 3 sub-tasks: ← ATTACHED
- → Analyze codebase structure
- → Identify integration points
@@ -453,7 +606,7 @@ Phase 2: Context Gathering (SlashCommand executed)
Conditional Branch: Check conflict_risk
├─ IF conflict_risk ≥ medium:
│ Phase 3: Conflict Resolution (SlashCommand executed)
│ Phase 3: Conflict Resolution (Skill executed)
│ → ATTACH 3 sub-tasks: ← ATTACHED
│ - → Detect conflicts with CLI analysis
│ - → Present conflicts to user
@@ -463,7 +616,7 @@ Conditional Branch: Check conflict_risk
└─ ELSE: Skip Phase 3, proceed to Phase 4
Phase 4: Task Generation (SlashCommand executed)
Phase 4: Task Generation (Skill executed)
→ Single agent task (no sub-tasks)
→ Agent autonomously completes internally:
(discovery → planning → output)
@@ -473,7 +626,7 @@ Return summary to user
```
**Key Points**:
- **← ATTACHED**: Tasks attached to TodoWrite when SlashCommand executed
- **← ATTACHED**: Tasks attached to TodoWrite when Skill executed
- Phase 2, 3: Multiple sub-tasks
- Phase 4: Single agent task
- **← COLLAPSED**: Sub-tasks collapsed to summary after completion (Phase 2, 3 only)
@@ -546,6 +699,6 @@ CONSTRAINTS: [Limitations or boundaries]
- `/workflow:tools:task-generate-agent` - Phase 4: Generate task JSON files with agent-driven approach
**Follow-up Commands**:
- `/workflow:action-plan-verify` - Recommended: Verify plan quality and catch issues before execution
- `/workflow:plan-verify` - Recommended: Verify plan quality and catch issues before execution
- `/workflow:status` - Review task breakdown and current progress
- `/workflow:execute` - Begin implementation of generated tasks

View File

@@ -1,7 +1,7 @@
---
name: replan
description: Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning
argument-hint: "[--session session-id] [task-id] \"requirements\"|file.md [--interactive]"
argument-hint: "[-y|--yes] [--session session-id] [task-id] \"requirements\"|file.md [--interactive]"
allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
---
@@ -113,7 +113,59 @@ const taskId = taskIdMatch?.[1]
- List existing tasks
- Read `IMPL_PLAN.md` and `TODO_LIST.md`
**Output**: Session validated, context loaded, mode determined
4. **Parse Execution Intent** (from requirements text):
```javascript
// Dynamic tool detection from cli-tools.json
// Read enabled tools: ["gemini", "qwen", "codex", ...]
const enabledTools = loadEnabledToolsFromConfig(); // See ~/.claude/cli-tools.json
// Build dynamic patterns from enabled tools
function buildExecPatterns(tools) {
const patterns = {
agent: /改为\s*Agent\s*执行|使用\s*Agent\s*执行/i
};
tools.forEach(tool => {
// Pattern: "使用 {tool} 执行" or "改用 {tool}"
patterns[`cli_${tool}`] = new RegExp(
`使用\\s*(${tool})\\s*执行|改用\\s*(${tool})`, 'i'
);
});
return patterns;
}
const execPatterns = buildExecPatterns(enabledTools);
let executionIntent = null
for (const [key, pattern] of Object.entries(execPatterns)) {
if (pattern.test(requirements)) {
executionIntent = key.startsWith('cli_')
? { method: 'cli', cli_tool: key.replace('cli_', '') }
: { method: 'agent', cli_tool: null }
break
}
}
```
**Output**: Session validated, context loaded, mode determined, **executionIntent parsed**
---
### Auto Mode Support
When `--yes` or `-y` flag is used, the command skips interactive clarification and uses safe defaults:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
```
**Auto Mode Defaults**:
- **Modification Scope**: `tasks_only` (safest - only update task details)
- **Affected Modules**: All modules related to the task
- **Task Changes**: `update_only` (no structural changes)
- **Dependency Changes**: `no` (preserve existing dependencies)
- **User Confirmation**: Auto-confirm execution
**Note**: `--interactive` flag overrides `--yes` flag (forces interactive mode).
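A minimal sketch of the flag precedence described in the note above:
```javascript
// --interactive forces interactive mode even when --yes is present
const interactive = $ARGUMENTS.includes('--interactive')
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const useAutoDefaults = autoYes && !interactive        // gate for skipping Phase 2 questions
```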
---
@@ -121,6 +173,25 @@ const taskId = taskIdMatch?.[1]
**Purpose**: Define modification scope through guided questioning
**Auto Mode Check**:
```javascript
if (autoYes && !interactive) {
// Use defaults and skip to Phase 3
console.log(`[--yes] Using safe defaults for replan:`)
console.log(` - Scope: tasks_only`)
console.log(` - Changes: update_only`)
console.log(` - Dependencies: preserve existing`)
userSelections = {
scope: 'tasks_only',
modules: 'all_affected',
task_changes: 'update_only',
dependency_changes: false
}
// Proceed to Phase 3
}
```
#### Session Mode Questions
**Q1: Modification Scope**
@@ -228,10 +299,29 @@ interface ImpactAnalysis {
**Step 3.3: User Confirmation**
```javascript
Options:
- 确认执行: 开始应用所有修改
- 调整计划: 重新回答问题调整范围
- 取消操作: 放弃本次重规划
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
// Auto mode: Auto-confirm execution
console.log(`[--yes] Auto-confirming replan execution`)
userConfirmation = '确认执行'
// Proceed to Phase 4
} else {
// Interactive mode: Ask user
AskUserQuestion({
questions: [{
question: "修改计划已生成,请确认操作:",
header: "Confirm",
options: [
{ label: "确认执行", description: "开始应用所有修改" },
{ label: "调整计划", description: "重新回答问题调整范围" },
{ label: "取消操作", description: "放弃本次重规划" }
],
multiSelect: false
}]
})
}
```
**Output**: Modification plan confirmed or adjusted
@@ -299,7 +389,18 @@ const updated_task = {
flow_control: {
...task.flow_control,
implementation_approach: [...updated_steps]
}
},
// Update execution config if intent detected
...(executionIntent && {
meta: {
...task.meta,
execution_config: {
method: executionIntent.method,
cli_tool: executionIntent.cli_tool,
enable_resume: executionIntent.method !== 'agent'
}
}
})
};
Write({
@@ -308,6 +409,8 @@ Write({
});
```
**Note**: Implementation approach steps are NO LONGER modified. CLI execution is controlled by task-level `meta.execution_config` only.
**Step 5.4: Create New Tasks** (if needed)
Generate complete task JSON with all required fields:
@@ -513,3 +616,33 @@ A: 是,需要同步更新依赖任务
任务重规划完成! 更新 2 个任务
```
### Task Replan - Change Execution Method
```bash
/workflow:replan IMPL-001 "改用 Codex 执行"
# Semantic parsing detects executionIntent:
# { method: 'cli', cli_tool: 'codex' }
# Execution (no interactive questions needed)
✓ 创建备份
✓ 更新 IMPL-001.json
- meta.execution_config = { method: 'cli', cli_tool: 'codex', enable_resume: true }
任务执行方式已更新: Agent → CLI (codex)
```
```bash
/workflow:replan IMPL-002 "改为 Agent 执行"
# Semantic parsing detects executionIntent:
# { method: 'agent', cli_tool: null }
# Execution
✓ 创建备份
✓ 更新 IMPL-002.json
- meta.execution_config = { method: 'agent', cli_tool: null }
任务执行方式已更新: CLI → Agent
```

View File

@@ -0,0 +1,765 @@
---
name: review-cycle-fix
description: Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.
argument-hint: "<export-file|review-dir> [--resume] [--max-iterations=N] [--batch-size=N]"
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*), Task(*), Edit(*), Write(*)
---
# Workflow Review-Cycle-Fix Command
## Quick Start
```bash
# Fix from exported findings file (session-based path)
/workflow:review-cycle-fix .workflow/active/WFS-123/.review/fix-export-1706184622000.json
# Fix from review directory (auto-discovers latest export)
/workflow:review-cycle-fix .workflow/active/WFS-123/.review/
# Resume interrupted fix session
/workflow:review-cycle-fix --resume
# Custom max retry attempts per finding
/workflow:review-cycle-fix .workflow/active/WFS-123/.review/ --max-iterations=5
# Custom batch size for parallel planning (default: 5 findings per batch)
/workflow:review-cycle-fix .workflow/active/WFS-123/.review/ --batch-size=3
```
**Fix Source**: Exported findings from review cycle dashboard
**Output Directory**: `{review-dir}/fixes/{fix-session-id}/` (within session .review/)
**Default Max Iterations**: 3 (per finding, adjustable)
**Default Batch Size**: 5 (findings per planning batch, adjustable)
**Max Parallel Agents**: 10 (concurrent planning agents)
**CLI Tools**: @cli-planning-agent (planning), @cli-execute-agent (fixing)
## What & Why
### Core Concept
Automated fix orchestrator with a **parallel planning architecture**: multiple AI agents analyze findings concurrently in batches, and the orchestrator then coordinates parallel/serial execution. It generates a fix timeline with intelligent grouping and dependency analysis, and executes fixes with conservative test verification.
**Fix Process**:
- **Batching Phase (1.5)**: Orchestrator groups findings by file+dimension similarity, creates batches
- **Planning Phase (2)**: Up to 10 agents plan batches in parallel, generate partial plans, orchestrator aggregates
- **Execution Phase (3)**: Main orchestrator coordinates agents per aggregated timeline stages
- **Parallel Efficiency**: Customizable batch size (default: 5), MAX_PARALLEL=10 agents
- **No rigid structure**: Adapts to task requirements, not bound to fixed JSON format
**vs Manual Fixing**:
- **Manual**: Developer reviews findings one-by-one, fixes sequentially
- **Automated**: AI groups related issues, multiple agents plan in parallel, executes in optimal parallel/serial order with automatic test verification
### Value Proposition
1. **Parallel Planning**: Multiple agents analyze findings concurrently, reducing planning time for large batches (10+ findings)
2. **Intelligent Batching**: Semantic similarity grouping ensures related findings are analyzed together
3. **Multi-stage Coordination**: Supports complex parallel + serial execution with cross-batch dependency management
4. **Conservative Safety**: Mandatory test verification with automatic rollback on failure
5. **Resume Support**: Checkpoint-based recovery for interrupted sessions
### Orchestrator Boundary (CRITICAL)
- **ONLY command** for automated review finding fixes
- Manages: Intelligent batching (Phase 1.5), parallel planning coordination (launch N agents), plan aggregation, stage-based execution, agent scheduling, progress tracking
- Delegates: Batch planning to @cli-planning-agent, fix execution to @cli-execute-agent
### Execution Flow
```
Phase 1: Discovery & Initialization
└─ Validate export file, create fix session structure, initialize state files
Phase 1.5: Intelligent Grouping & Batching
├─ Analyze findings metadata (file, dimension, severity)
├─ Group by semantic similarity (file proximity + dimension affinity)
├─ Create batches respecting --batch-size (default: 5)
└─ Output: Finding batches for parallel planning
Phase 2: Parallel Planning Coordination (@cli-planning-agent × N)
├─ Launch MAX_PARALLEL planning agents concurrently (default: 10)
├─ Each agent processes one batch:
│ ├─ Analyze findings for patterns and dependencies
│ ├─ Group by file + dimension + root cause similarity
│ ├─ Determine execution strategy (parallel/serial/hybrid)
│ ├─ Generate fix timeline with stages
│ └─ Output: partial-plan-{batch-id}.json
├─ Collect results from all agents
└─ Aggregate: Merge partial plans → fix-plan.json (resolve cross-batch dependencies)
Phase 3: Execution Orchestration (Stage-based)
For each timeline stage:
├─ Load groups for this stage
├─ If parallel: Launch all group agents simultaneously
├─ If serial: Execute groups sequentially
├─ Each agent:
│ ├─ Analyze code context
│ ├─ Apply fix per strategy
│ ├─ Run affected tests
│ ├─ On test failure: Rollback, retry up to max_iterations
│ └─ On success: Commit, update fix-progress-{N}.json
└─ Advance to next stage
Phase 4: Completion & Aggregation
└─ Aggregate results → Generate fix-summary.md → Update history → Output summary
Phase 5: Session Completion (Optional)
└─ If all fixes successful → Prompt to complete workflow session
```
### Agent Roles
| Agent | Responsibility |
|-------|---------------|
| **Orchestrator** | Input validation, session management, intelligent batching (Phase 1.5), parallel planning coordination (launch N agents), plan aggregation (merge partial plans, resolve cross-batch dependencies), stage-based execution scheduling, progress tracking, result aggregation |
| **@cli-planning-agent** | Batch findings analysis, intelligent grouping (file+dimension+root cause), execution strategy determination (parallel/serial/hybrid), timeline generation with dependency mapping, partial plan output |
| **@cli-execute-agent** | Fix execution per group, code context analysis, Edit tool operations, test verification, git rollback on failure, completion JSON generation |
## Enhanced Features
### 1. Parallel Planning Architecture
**Batch Processing Strategy**:
| Phase | Agent Count | Input | Output | Purpose |
|-------|-------------|-------|--------|---------|
| **Batching (1.5)** | Orchestrator | All findings | Finding batches | Semantic grouping by file+dimension, respecting --batch-size |
| **Planning (2)** | N agents (≤10) | 1 batch each | partial-plan-{batch-id}.json | Analyze batch in parallel, generate execution groups and timeline |
| **Aggregation (2)** | Orchestrator | All partial plans | fix-plan.json | Merge timelines, resolve cross-batch dependencies |
| **Execution (3)** | M agents (dynamic) | 1 group each | fix-progress-{N}.json | Execute fixes per aggregated plan with test verification |
**Benefits**:
- **Speed**: N agents plan concurrently, reducing planning time for large batches
- **Scalability**: MAX_PARALLEL=10 prevents resource exhaustion
- **Flexibility**: Batch size customizable via --batch-size (default: 5)
- **Isolation**: Each planning agent focuses on related findings (semantic grouping)
- **Reusable**: Aggregated plan can be re-executed without re-planning
### 2. Intelligent Grouping Strategy
**Three-Level Grouping**:
```javascript
// Level 1: Primary grouping by file + dimension
{file: "auth.ts", dimension: "security"} → Group A
{file: "auth.ts", dimension: "quality"} → Group B
{file: "query-builder.ts", dimension: "security"} → Group C
// Level 2: Secondary grouping by root cause similarity
Group A findings → Semantic similarity analysis (threshold 0.7)
  → Sub-group A1: "missing-input-validation" (findings 1, 2)
  → Sub-group A2: "insecure-crypto" (finding 3)
// Level 3: Dependency analysis
Sub-group A1 creates validation utilities
Sub-group C4 depends on those utilities
→ A1 must execute before C4 (serial stage dependency)
```
**Similarity Computation**:
- Combine: `description + recommendation + category`
- Vectorize: TF-IDF or LLM embedding
- Cluster: Greedy algorithm with cosine similarity > 0.7
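As an illustration of the clustering step, here is a minimal sketch of greedy clustering over pre-computed vectors (the vectorization itself — TF-IDF or an embedding call — is left out, and the helper names are illustrative):
```javascript
// Greedy clustering: attach each finding to the first cluster whose centroid is
// similar enough (cosine > 0.7), otherwise start a new cluster.
function clusterFindings(findings, vectors, threshold = 0.7) {
  const clusters = []; // each: { members: finding[], vectors: number[][], centroid: number[] }
  findings.forEach((finding, i) => {
    const v = vectors[i];
    const match = clusters.find(c => cosineSimilarity(c.centroid, v) > threshold);
    if (match) {
      match.members.push(finding);
      match.vectors.push(v);
      match.centroid = averageVectors(match.vectors);
    } else {
      clusters.push({ members: [finding], vectors: [v], centroid: v });
    }
  });
  return clusters;
}

function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
  return dot / (Math.sqrt(na * nb) || 1);
}

function averageVectors(vs) {
  return vs[0].map((_, d) => vs.reduce((sum, v) => sum + v[d], 0) / vs.length);
}
```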
### 3. Execution Strategy Determination
**Strategy Types**:
| Strategy | When to Use | Stage Structure |
|----------|-------------|-----------------|
| **Parallel** | All groups independent, different files | Single stage, all groups in parallel |
| **Serial** | Strong dependencies, shared resources | Multiple stages, one group per stage |
| **Hybrid** | Mixed dependencies | Multiple stages, parallel within stages |
**Dependency Detection**:
- Shared file modifications
- Utility creation + usage patterns
- Test dependency chains
- Risk level clustering (high-risk groups isolated)
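For illustration, the shared-file check could look like the sketch below (the group shape follows the fix-plan structure used elsewhere in this command; the helper name is hypothetical):
```javascript
// Groups that touch the same file must not run in the same parallel stage.
function findSharedFileConflicts(groups) {
  const conflicts = [];
  for (let i = 0; i < groups.length; i++) {
    const filesA = new Set(groups[i].findings.map(f => f.file));
    for (let j = i + 1; j < groups.length; j++) {
      const shared = groups[j].findings.map(f => f.file).filter(f => filesA.has(f));
      if (shared.length > 0) {
        conflicts.push({
          groups: [groups[i].group_id, groups[j].group_id],
          shared_files: [...new Set(shared)]
        });
      }
    }
  }
  return conflicts; // any entry here forces the two groups into separate (serial) stages
}
```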
### 4. Conservative Test Verification
**Test Strategy** (per fix):
```javascript
// 1. Identify affected tests
const testPattern = identifyTestPattern(finding.file);
// e.g., "tests/auth/**/*.test.*" for src/auth/service.ts
// 2. Run tests
const result = await runTests(testPattern);
// 3. Evaluate
if (result.passRate < 100) { // require 100% pass rate
// Rollback
await gitCheckout(finding.file);
// Retry with failure context
if (attempts < maxIterations) {
const fixContext = analyzeFailure(result.stderr);
regenerateFix(finding, fixContext);
retry();
} else {
markFailed(finding.id);
}
} else {
// Commit
await gitCommit(`Fix: ${finding.title} [${finding.id}]`);
markFixed(finding.id);
}
```
**Pass Criteria**: 100% test pass rate (no partial fixes)
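`identifyTestPattern` above is pseudocode; a minimal sketch under the assumption that tests mirror the `src/` layout (real projects may need a different mapping):
```javascript
// Map a source file to a test glob, e.g. "src/auth/service.ts" → "tests/auth/**/*.test.*".
// Falls back to the whole suite when the file does not live under src/.
function identifyTestPattern(sourceFile) {
  const match = sourceFile.match(/^src\/(.+)\/[^/]+\.\w+$/);
  return match ? `tests/${match[1]}/**/*.test.*` : "tests/**/*.test.*";
}
```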
## Core Responsibilities
### Orchestrator
**Phase 1: Discovery & Initialization**
- Input validation: Check export file exists and is valid JSON
- Auto-discovery: If review-dir provided, find latest `*-fix-export.json`
- Session creation: Generate fix-session-id (`fix-{timestamp}`)
- Directory structure: Create `{review-dir}/fixes/{fix-session-id}/` with subdirectories
- State files: Initialize active-fix-session.json (session marker)
- TodoWrite initialization: Set up 5-phase tracking (including Phase 1.5)
**Phase 1.5: Intelligent Grouping & Batching**
- Load all findings metadata (id, file, dimension, severity, title)
- Semantic similarity analysis:
- Primary: Group by file proximity (same file or related modules)
- Secondary: Group by dimension affinity (same review dimension)
- Tertiary: Analyze title/description similarity (root cause clustering)
- Create batches respecting --batch-size (default: 5 findings per batch)
- Balance workload: Distribute high-severity findings across batches
- Output: Array of finding batches for parallel planning
**Phase 2: Parallel Planning Coordination**
- Determine concurrency: MIN(batch_count, MAX_PARALLEL=10)
- For each batch chunk (≤10 batches):
- Launch all agents in parallel with run_in_background=true
- Pass batch findings + project context + batch_id to each agent
- Each agent outputs: partial-plan-{batch-id}.json
- Collect results via TaskOutput (blocking until all complete)
- Aggregate partial plans:
- Merge execution groups (renumber group_ids sequentially: G1, G2, ...)
- Merge timelines (detect cross-batch dependencies, adjust stages)
- Resolve conflicts (same file in multiple batches → serialize)
- Generate final fix-plan.json with aggregated metadata
- TodoWrite update: Mark planning complete, start execution
**Phase 3: Execution Orchestration**
- Load fix-plan.json timeline stages
- For each stage:
- If parallel mode: Launch all group agents via `Promise.all()`
- If serial mode: Execute groups sequentially with `await`
- Assign agent IDs (agents update their fix-progress-{N}.json)
- Handle agent failures gracefully (mark group as failed, continue)
- Advance to next stage only when current stage complete
**Phase 4: Completion & Aggregation**
- Collect final status from all fix-progress-{N}.json files
- Generate fix-summary.md with timeline and results
- Update fix-history.json with new session entry
- Remove active-fix-session.json
- TodoWrite completion: Mark all phases done
- Output summary to user
**Phase 5: Session Completion (Optional)**
- If all findings fixed successfully (no failures):
- Prompt user: "All fixes complete. Complete workflow session? [Y/n]"
- If confirmed: Execute `/workflow:session:complete` to archive session with lessons learned
- If partial success (some failures):
- Output: "Some findings failed. Review fix-summary.md before completing session."
- Do NOT auto-complete session
### Output File Structure
```
.workflow/active/WFS-{session-id}/.review/
├── fix-export-{timestamp}.json # Exported findings (input)
└── fixes/{fix-session-id}/
├── partial-plan-1.json # Batch 1 partial plan (planning agent 1 output)
├── partial-plan-2.json # Batch 2 partial plan (planning agent 2 output)
├── partial-plan-N.json # Batch N partial plan (planning agent N output)
├── fix-plan.json # Aggregated execution plan (orchestrator merges partials)
├── fix-progress-1.json # Group 1 progress (planning agent init → agent updates)
├── fix-progress-2.json # Group 2 progress (planning agent init → agent updates)
├── fix-progress-3.json # Group 3 progress (planning agent init → agent updates)
├── fix-summary.md # Final report (orchestrator generates)
├── active-fix-session.json # Active session marker
└── fix-history.json # All sessions history
```
**File Producers**:
- **Orchestrator**: Batches findings (Phase 1.5), aggregates partial plans → `fix-plan.json` (Phase 2), launches parallel planning agents
- **Planning Agents (N)**: Each outputs `partial-plan-{batch-id}.json` + initializes `fix-progress-*.json` for assigned groups
- **Execution Agents (M)**: Update assigned `fix-progress-{N}.json` in real-time
### Agent Invocation Template
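**Phase 1: Discovery & Initialization** (Orchestrator) — a minimal sketch of export auto-discovery and session setup, assuming a `Bash(command)` helper that returns stdout (analogous to the `Read`/`Write` helpers used below); file naming follows the Quick Start examples:
```javascript
// Accept either an export file or a review directory as input.
const isFile = inputPath.endsWith('.json');
const reviewDir = isFile ? inputPath.replace(/\/[^/]+$/, '') : inputPath.replace(/\/$/, '');
const exportFile = isFile
  ? inputPath
  // Glob covers both fix-export-{timestamp}.json and {prefix}-fix-export.json naming.
  : Bash(`ls -t "${reviewDir}"/*fix-export*.json 2>/dev/null | head -n 1`).trim();
if (!exportFile) throw new Error(`No fix export found in ${reviewDir}`);
// Create the fix session skeleton plus the active-session marker.
const fixSessionId = `fix-${Date.now()}`;
const sessionDir = `${reviewDir}/fixes/${fixSessionId}`;
Bash(`mkdir -p "${sessionDir}"`);
Write(`${sessionDir}/active-fix-session.json`,
  JSON.stringify({ fix_session_id: fixSessionId, export_file: exportFile, status: "active" }, null, 2));
```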
**Phase 1.5: Intelligent Batching** (Orchestrator):
```javascript
// Load findings
const findings = JSON.parse(Read(exportFile));
const batchSize = flags.batchSize || 5;
// Semantic similarity analysis: group by file+dimension
const batches = [];
const grouped = new Map(); // key: "${file}:${dimension}"
for (const finding of findings) {
const key = `${finding.file || 'unknown'}:${finding.dimension || 'general'}`;
if (!grouped.has(key)) grouped.set(key, []);
grouped.get(key).push(finding);
}
// Create batches respecting batchSize
for (const [key, group] of grouped) {
while (group.length > 0) {
const batch = group.splice(0, batchSize);
batches.push({
batch_id: batches.length + 1,
findings: batch,
metadata: { primary_file: batch[0].file, primary_dimension: batch[0].dimension }
});
}
}
console.log(`Created ${batches.length} batches (up to ${batchSize} findings per batch)`);
```
**Phase 2: Parallel Planning** (Orchestrator launches N agents):
```javascript
const MAX_PARALLEL = 10;
const partialPlans = [];
// Process batches in chunks of MAX_PARALLEL
for (let i = 0; i < batches.length; i += MAX_PARALLEL) {
const chunk = batches.slice(i, i + MAX_PARALLEL);
const taskIds = [];
// Launch agents in parallel (run_in_background=true)
for (const batch of chunk) {
const taskId = Task({
subagent_type: "cli-planning-agent",
run_in_background: true,
description: `Plan batch ${batch.batch_id}: ${batch.findings.length} findings`,
prompt: planningPrompt(batch) // See Planning Agent template below
});
taskIds.push({ taskId, batch });
}
console.log(`Launched ${taskIds.length} planning agents...`);
// Collect results from this chunk (blocking)
for (const { taskId, batch } of taskIds) {
const result = TaskOutput({ task_id: taskId, block: true });
const partialPlan = JSON.parse(Read(`${sessionDir}/partial-plan-${batch.batch_id}.json`));
partialPlans.push(partialPlan);
updateTodo(`Batch ${batch.batch_id}`, 'completed');
}
}
// Aggregate partial plans → fix-plan.json
const aggregatedPlan = { groups: [], timeline: [] }; // merged from partial plans below
let groupCounter = 1;
const groupIdMap = new Map();
for (const partial of partialPlans) {
for (const group of partial.groups) {
const newGroupId = `G${groupCounter}`;
groupIdMap.set(`${partial.batch_id}:${group.group_id}`, newGroupId);
aggregatedPlan.groups.push({ ...group, group_id: newGroupId, progress_file: `fix-progress-${groupCounter}.json` });
groupCounter++;
}
}
// Merge timelines, resolve cross-batch conflicts (shared files → serialize)
let stageCounter = 1;
for (const partial of partialPlans) {
for (const stage of partial.timeline) {
aggregatedPlan.timeline.push({
...stage, stage_id: stageCounter,
groups: stage.groups.map(gid => groupIdMap.get(`${partial.batch_id}:${gid}`))
});
stageCounter++;
}
}
// Write aggregated plan + initialize progress files
Write(`${sessionDir}/fix-plan.json`, JSON.stringify(aggregatedPlan, null, 2));
for (let i = 1; i <= aggregatedPlan.groups.length; i++) {
Write(`${sessionDir}/fix-progress-${i}.json`, JSON.stringify(initProgressFile(aggregatedPlan.groups[i-1]), null, 2));
}
```
**Planning Agent (Batch Mode - Partial Plan Only)**:
```javascript
Task({
subagent_type: "cli-planning-agent",
run_in_background: true,
description: `Plan batch ${batch.batch_id}: ${batch.findings.length} findings`,
prompt: `
## Task Objective
Analyze code review findings in batch ${batch.batch_id} and generate **partial** execution plan.
## Input Data
Review Session: ${reviewId}
Fix Session ID: ${fixSessionId}
Batch ID: ${batch.batch_id}
Batch Findings: ${batch.findings.length}
Findings:
${JSON.stringify(batch.findings, null, 2)}
Project Context:
- Structure: ${projectStructure}
- Test Framework: ${testFramework}
- Git Status: ${gitStatus}
## Output Requirements
### 1. partial-plan-${batch.batch_id}.json
Generate partial execution plan with structure:
{
"batch_id": ${batch.batch_id},
"groups": [...], // Groups created from batch findings (use local IDs: G1, G2, ...)
"timeline": [...], // Local timeline for this batch only
"metadata": {
"findings_count": ${batch.findings.length},
"groups_count": N,
"created_at": "ISO-8601-timestamp"
}
}
**Key Generation Rules**:
- **Groups**: Create groups with local IDs (G1, G2, ...) using intelligent grouping (file+dimension+root cause)
- **Timeline**: Define stages for this batch only (local dependencies within batch)
- **Progress Files**: DO NOT generate fix-progress-*.json here (orchestrator handles after aggregation)
## Analysis Requirements
### Intelligent Grouping Strategy
Group findings using these criteria (in priority order):
1. **File Proximity**: Findings in same file or related files
2. **Dimension Affinity**: Same dimension (security, performance, etc.)
3. **Root Cause Similarity**: Similar underlying issues
4. **Fix Approach Commonality**: Can be fixed with similar approach
**Grouping Guidelines**:
- Optimal group size: 2-5 findings per group
- Avoid cross-cutting concerns in same group
- Consider test isolation (different test suites → different groups)
- Balance workload across groups for parallel execution
### Execution Strategy Determination (Local Only)
**Parallel Mode**: Use when groups are independent, no shared files
**Serial Mode**: Use when groups have dependencies or shared resources
**Hybrid Mode**: Use for mixed dependency graphs (recommended for most cases)
**Dependency Analysis**:
- Identify shared files between groups
- Detect test dependency chains
- Evaluate risk of concurrent modifications
### Risk Assessment
For each group, evaluate:
- **Complexity**: Based on code structure, file size, existing tests
- **Impact Scope**: Number of files affected, API surface changes
- **Rollback Feasibility**: Ease of reverting changes if tests fail
### Test Strategy
For each group, determine:
- **Test Pattern**: Glob pattern matching affected tests
- **Pass Criteria**: All tests must pass (100% pass rate)
- **Test Command**: Infer from project (package.json, pytest.ini, etc.)
## Output Files
Write to ${sessionDir}:
- ./partial-plan-${batch.batch_id}.json
## Quality Checklist
Before finalizing outputs:
- ✅ All batch findings assigned to exactly one group
- ✅ Group dependencies (within batch) correctly identified
- ✅ Timeline stages respect local dependencies
- ✅ Test patterns are valid and specific
- ✅ Risk assessments are realistic
`
})
```
**Execution Agent** (per group):
```javascript
Task({
subagent_type: "cli-execute-agent",
description: `Fix ${group.findings.length} issues: ${group.group_name}`,
prompt: `
## Task Objective
Execute fixes for code review findings in group ${group.group_id}. Update progress file in real-time with flow control tracking.
## Assignment
- Group ID: ${group.group_id}
- Group Name: ${group.group_name}
- Progress File: ${sessionDir}/${group.progress_file}
- Findings Count: ${group.findings.length}
- Max Iterations: ${maxIterations} (per finding)
## Fix Strategy
${JSON.stringify(group.fix_strategy, null, 2)}
## Risk Assessment
${JSON.stringify(group.risk_assessment, null, 2)}
## Execution Flow
### Initialization (Before Starting)
1. Read ${group.progress_file} to load initial state
2. Update progress file:
- assigned_agent: "${agentId}"
- status: "in-progress"
- started_at: Current ISO 8601 timestamp
- last_update: Current ISO 8601 timestamp
3. Write updated state back to ${group.progress_file}
### Main Execution Loop
For EACH finding in ${group.progress_file}.findings:
#### Step 1: Analyze Context
**Before Step**:
- Update finding: status→"in-progress", started_at→now()
- Update current_finding: Populate with finding details, status→"analyzing", action→"Reading file and understanding code structure"
- Update phase→"analyzing"
- Update flow_control: Add "analyze_context" step to implementation_approach (status→"in-progress"), set current_step→"analyze_context"
- Update last_update→now(), write to ${group.progress_file}
**Action**:
- Read file: finding.file
- Understand code structure around line: finding.line
- Analyze surrounding context (imports, dependencies, related functions)
- Review recommendations: finding.recommendations
**After Step**:
- Update flow_control: Mark "analyze_context" step as "completed" with completed_at→now()
- Update last_update→now(), write to ${group.progress_file}
#### Step 2: Apply Fix
**Before Step**:
- Update current_finding: status→"fixing", action→"Applying code changes per recommendations"
- Update phase→"fixing"
- Update flow_control: Add "apply_fix" step to implementation_approach (status→"in-progress"), set current_step→"apply_fix"
- Update last_update→now(), write to ${group.progress_file}
**Action**:
- Use Edit tool to implement code changes per finding.recommendations
- Follow fix_strategy.approach
- Maintain code style and existing patterns
**After Step**:
- Update flow_control: Mark "apply_fix" step as "completed" with completed_at→now()
- Update last_update→now(), write to ${group.progress_file}
#### Step 3: Test Verification
**Before Step**:
- Update current_finding: status→"testing", action→"Running test suite to verify fix"
- Update phase→"testing"
- Update flow_control: Add "run_tests" step to implementation_approach (status→"in-progress"), set current_step→"run_tests"
- Update last_update→now(), write to ${group.progress_file}
**Action**:
- Run tests using fix_strategy.test_pattern
- Require 100% pass rate
- Capture test output
**On Test Failure**:
- Git rollback: \`git checkout -- \${finding.file}\`
- Increment finding.attempts
- Update flow_control: Mark "run_tests" step as "failed" with completed_at→now()
- Update errors: Add entry (finding_id, error_type→"test_failure", message, timestamp)
- If finding.attempts < ${maxIterations}:
- Reset flow_control: implementation_approach→[], current_step→null
- Retry from Step 1
- Else:
- Update finding: status→"completed", result→"failed", error_message→"Max iterations reached", completed_at→now()
- Update summary counts, move to next finding
**On Test Success**:
- Update flow_control: Mark "run_tests" step as "completed" with completed_at→now()
- Update last_update→now(), write to ${group.progress_file}
- Proceed to Step 4
#### Step 4: Commit Changes
**Before Step**:
- Update current_finding: status→"committing", action→"Creating git commit for successful fix"
- Update phase→"committing"
- Update flow_control: Add "commit_changes" step to implementation_approach (status→"in-progress"), set current_step→"commit_changes"
- Update last_update→now(), write to ${group.progress_file}
**Action**:
- Git commit: \`git commit -m "fix(${finding.dimension}): ${finding.title} [${finding.id}]"\`
- Capture commit hash
**After Step**:
- Update finding: status→"completed", result→"fixed", commit_hash→<captured>, test_passed→true, completed_at→now()
- Update flow_control: Mark "commit_changes" step as "completed" with completed_at→now()
- Update last_update→now(), write to ${group.progress_file}
#### After Each Finding
- Update summary: Recalculate counts (pending/in_progress/fixed/failed) and percent_complete
- If all findings completed: Clear current_finding, reset flow_control
- Update last_update→now(), write to ${group.progress_file}
### Final Completion
When all findings processed:
- Update status→"completed", phase→"done", summary.percent_complete→100.0
- Update last_update→now(), write final state to ${group.progress_file}
## Critical Requirements
### Progress File Updates
- **MUST update after every significant action** (before/after each step)
- **Always maintain complete structure** - never write partial updates
- **Use ISO 8601 timestamps** - e.g., "2025-01-25T14:36:00Z"
### Flow Control Format
Follow action-planning-agent flow_control.implementation_approach format:
- step: Identifier (e.g., "analyze_context", "apply_fix")
- action: Human-readable description
- status: "pending" | "in-progress" | "completed" | "failed"
- started_at: ISO 8601 timestamp or null
- completed_at: ISO 8601 timestamp or null
### Error Handling
- Capture all errors in errors[] array
- Never leave progress file in invalid state
- Always write complete updates, never partial
- On unrecoverable error: Mark group as failed, preserve state
## Test Patterns
Use fix_strategy.test_pattern to run affected tests:
- Pattern: ${group.fix_strategy.test_pattern}
- Command: Infer from project (npm test, pytest, etc.)
- Pass Criteria: 100% pass rate required
`
})
```
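**Phase 3: Stage-Based Execution** (Orchestrator) — a minimal sketch of the stage loop described under Core Responsibilities, reusing the `Task`/`TaskOutput` pattern from Phase 2; `executionPrompt` and `markGroupFailed` are hypothetical helpers (the former stands for the Execution Agent template above):
```javascript
const plan = JSON.parse(Read(`${sessionDir}/fix-plan.json`));
const launchGroup = (group) => Task({
  subagent_type: "cli-execute-agent",
  run_in_background: true,
  description: `Fix ${group.findings.length} issues: ${group.group_name}`,
  prompt: executionPrompt(group) // hypothetical wrapper around the Execution Agent template above
});

for (const stage of plan.timeline) {
  const stageGroups = plan.groups.filter(g => stage.groups.includes(g.group_id));
  if (stage.execution_mode === 'parallel') {
    // Launch every group agent in this stage, then block until all of them return.
    const running = stageGroups.map(group => ({ group, taskId: launchGroup(group) }));
    for (const { group, taskId } of running) {
      try { TaskOutput({ task_id: taskId, block: true }); }
      catch (err) { markGroupFailed(group, err); } // hypothetical: record failure, keep going
    }
  } else {
    // Serial stage: one group at a time.
    for (const group of stageGroups) {
      try { TaskOutput({ task_id: launchGroup(group), block: true }); }
      catch (err) { markGroupFailed(group, err); }
    }
  }
  updateTodo(`Stage ${stage.stage_id}`, 'completed'); // same updateTodo helper as in Phase 2
}
```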
### Error Handling
**Batching Failures (Phase 1.5)**:
- Invalid findings data → Abort with error message
- Empty batches after grouping → Warn and skip empty batches
**Planning Failures (Phase 2)**:
- Planning agent timeout → Mark batch as failed, continue with other batches
- Partial plan missing → Skip batch, warn user
- Agent crash → Collect available partial plans, proceed with aggregation
- All agents fail → Abort entire fix session with error
- Aggregation conflicts → Apply conflict resolution (serialize conflicting groups)
**Execution Failures (Phase 3)**:
- Agent crash → Mark group as failed, continue with other groups
- Test command not found → Skip test verification, warn user
- Git operations fail → Abort with error, preserve state
**Rollback Scenarios**:
- Test failure after fix → Automatic `git checkout` rollback
- Max iterations reached → Leave file unchanged, mark as failed
- Unrecoverable error → Rollback entire group, save checkpoint
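A sketch of how the planning-failure policy above could be applied in the Phase 2 collection loop (same variables as the Phase 2 snippet; a crashed or timed-out agent only costs its own batch):
```javascript
// Tolerant collection: a missing or unreadable partial plan skips that batch instead of aborting.
for (const { taskId, batch } of taskIds) {
  try {
    TaskOutput({ task_id: taskId, block: true });
    partialPlans.push(JSON.parse(Read(`${sessionDir}/partial-plan-${batch.batch_id}.json`)));
    updateTodo(`Batch ${batch.batch_id}`, 'completed');
  } catch (err) {
    console.warn(`Batch ${batch.batch_id} planning failed, skipping: ${err.message}`);
  }
}
if (partialPlans.length === 0) {
  throw new Error("All planning agents failed - aborting fix session");
}
```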
### TodoWrite Structure
**Initialization (after Phase 1.5 batching)**:
```javascript
TodoWrite({
todos: [
{content: "Phase 1: Discovery & Initialization", status: "completed", activeForm: "Discovering"},
{content: "Phase 1.5: Intelligent Batching", status: "completed", activeForm: "Batching"},
{content: "Phase 2: Parallel Planning", status: "in_progress", activeForm: "Planning"},
{content: " → Batch 1: 4 findings (auth.ts:security)", status: "pending", activeForm: "Planning batch 1"},
{content: " → Batch 2: 3 findings (query.ts:security)", status: "pending", activeForm: "Planning batch 2"},
{content: " → Batch 3: 2 findings (config.ts:quality)", status: "pending", activeForm: "Planning batch 3"},
{content: "Phase 3: Execution", status: "pending", activeForm: "Executing"},
{content: "Phase 4: Completion", status: "pending", activeForm: "Completing"}
]
});
```
**During Planning (parallel agents running)**:
```javascript
TodoWrite({
todos: [
{content: "Phase 1: Discovery & Initialization", status: "completed", activeForm: "Discovering"},
{content: "Phase 1.5: Intelligent Batching", status: "completed", activeForm: "Batching"},
{content: "Phase 2: Parallel Planning", status: "in_progress", activeForm: "Planning"},
{content: " → Batch 1: 4 findings (auth.ts:security)", status: "completed", activeForm: "Planning batch 1"},
{content: " → Batch 2: 3 findings (query.ts:security)", status: "in_progress", activeForm: "Planning batch 2"},
{content: " → Batch 3: 2 findings (config.ts:quality)", status: "in_progress", activeForm: "Planning batch 3"},
{content: "Phase 3: Execution", status: "pending", activeForm: "Executing"},
{content: "Phase 4: Completion", status: "pending", activeForm: "Completing"}
]
});
```
**During Execution**:
```javascript
TodoWrite({
todos: [
{content: "Phase 1: Discovery & Initialization", status: "completed", activeForm: "Discovering"},
{content: "Phase 1.5: Intelligent Batching", status: "completed", activeForm: "Batching"},
{content: "Phase 2: Parallel Planning (3 batches → 5 groups)", status: "completed", activeForm: "Planning"},
{content: "Phase 3: Execution", status: "in_progress", activeForm: "Executing"},
{content: " → Stage 1: Parallel execution (3 groups)", status: "completed", activeForm: "Executing stage 1"},
{content: " • Group G1: Auth validation (2 findings)", status: "completed", activeForm: "Fixing G1"},
{content: " • Group G2: Query security (3 findings)", status: "completed", activeForm: "Fixing G2"},
{content: " • Group G3: Config quality (1 finding)", status: "completed", activeForm: "Fixing G3"},
{content: " → Stage 2: Serial execution (1 group)", status: "in_progress", activeForm: "Executing stage 2"},
{content: " • Group G4: Dependent fixes (2 findings)", status: "in_progress", activeForm: "Fixing G4"},
{content: "Phase 4: Completion", status: "pending", activeForm: "Completing"}
]
});
```
**Update Rules**:
- Add batch items dynamically during Phase 1.5
- Mark batch items completed as parallel agents return results
- Add stage/group items dynamically after Phase 2 plan aggregation
- Mark completed immediately after each group finishes
- Update parent phase status when all child items complete
## Post-Completion Expansion
After completion, ask the user whether to expand fixed findings into follow-up issues (test/enhance/refactor/doc); for each selected item, invoke `/issue:new "{summary} - {dimension}"`
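A sketch of this prompt, reusing the `AskUserQuestion` shape shown in the session-complete command below; the exact mechanism for invoking `/issue:new` is left to the orchestrator:
```javascript
AskUserQuestion({
  questions: [{
    question: "Expand fixed findings into follow-up issues?",
    header: "Expand",
    options: [
      { label: "test", description: "Create test-coverage issues for the fixed findings" },
      { label: "enhance", description: "Create enhancement issues" },
      { label: "refactor", description: "Create refactoring issues" },
      { label: "doc", description: "Create documentation issues" }
    ],
    multiSelect: true
  }]
});
// For each selected option, invoke: /issue:new "{summary} - {dimension}"
```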
## Best Practices
1. **Leverage Parallel Planning**: For 10+ findings, parallel batching significantly reduces planning time
2. **Tune Batch Size**: Use `--batch-size` to control granularity (smaller batches = more parallelism, larger = better grouping context)
3. **Conservative Approach**: Test verification is mandatory - no fixes kept without passing tests
4. **Parallel Efficiency**: MAX_PARALLEL=10 for planning agents, 3 concurrent execution agents per stage
5. **Resume Support**: Fix sessions can resume from checkpoints after interruption
6. **Manual Review**: Always review failed fixes manually - may require architectural changes
7. **Incremental Fixing**: Start with small batches (5-10 findings) before large-scale fixes
## Related Commands
### View Fix Progress
Use `ccw view` to open the workflow dashboard in browser:
```bash
ccw view
```

View File

@@ -1,606 +0,0 @@
---
name: review-fix
description: Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.
argument-hint: "<export-file|review-dir> [--resume] [--max-iterations=N]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*), Edit(*), Write(*)
---
# Workflow Review-Fix Command
## Quick Start
```bash
# Fix from exported findings file (session-based path)
/workflow:review-fix .workflow/active/WFS-123/.review/fix-export-1706184622000.json
# Fix from review directory (auto-discovers latest export)
/workflow:review-fix .workflow/active/WFS-123/.review/
# Resume interrupted fix session
/workflow:review-fix --resume
# Custom max retry attempts per finding
/workflow:review-fix .workflow/active/WFS-123/.review/ --max-iterations=5
```
**Fix Source**: Exported findings from review cycle dashboard
**Output Directory**: `{review-dir}/fixes/{fix-session-id}/` (within session .review/)
**Default Max Iterations**: 3 (per finding, adjustable)
**CLI Tools**: @cli-planning-agent (planning), @cli-execute-agent (fixing)
## What & Why
### Core Concept
Automated fix orchestrator with **two-phase architecture**: AI-powered planning followed by coordinated parallel/serial execution. Generates fix timeline with intelligent grouping and dependency analysis, then executes fixes with conservative test verification.
**Fix Process**:
- **Planning Phase**: AI analyzes findings, generates fix plan with grouping and execution strategy
- **Execution Phase**: Main orchestrator coordinates agents per timeline stages
- **No rigid structure**: Adapts to task requirements, not bound to fixed JSON format
**vs Manual Fixing**:
- **Manual**: Developer reviews findings one-by-one, fixes sequentially
- **Automated**: AI groups related issues, executes in optimal parallel/serial order with automatic test verification
### Value Proposition
1. **Intelligent Planning**: AI-powered analysis identifies optimal grouping and execution strategy
2. **Multi-stage Coordination**: Supports complex parallel + serial execution with dependency management
3. **Conservative Safety**: Mandatory test verification with automatic rollback on failure
4. **Resume Support**: Checkpoint-based recovery for interrupted sessions
### Orchestrator Boundary (CRITICAL)
- **ONLY command** for automated review finding fixes
- Manages: Planning phase coordination, stage-based execution, agent scheduling, progress tracking
- Delegates: Fix planning to @cli-planning-agent, fix execution to @cli-execute-agent
### Execution Flow
```
Phase 1: Discovery & Initialization
└─ Validate export file, create fix session structure, initialize state files
Phase 2: Planning Coordination (@cli-planning-agent)
├─ Analyze findings for patterns and dependencies
├─ Group by file + dimension + root cause similarity
├─ Determine execution strategy (parallel/serial/hybrid)
├─ Generate fix timeline with stages
└─ Output: fix-plan.json
Phase 3: Execution Orchestration (Stage-based)
For each timeline stage:
├─ Load groups for this stage
├─ If parallel: Launch all group agents simultaneously
├─ If serial: Execute groups sequentially
├─ Each agent:
│ ├─ Analyze code context
│ ├─ Apply fix per strategy
│ ├─ Run affected tests
│ ├─ On test failure: Rollback, retry up to max_iterations
│ └─ On success: Commit, update fix-progress-{N}.json
└─ Advance to next stage
Phase 4: Completion & Aggregation
└─ Aggregate results → Generate fix-summary.md → Update history → Output summary
Phase 5: Session Completion (Optional)
└─ If all fixes successful → Prompt to complete workflow session
```
### Agent Roles
| Agent | Responsibility |
|-------|---------------|
| **Orchestrator** | Input validation, session management, planning coordination, stage-based execution scheduling, progress tracking, aggregation |
| **@cli-planning-agent** | Findings analysis, intelligent grouping (file+dimension+root cause), execution strategy determination (parallel/serial/hybrid), timeline generation with dependency mapping |
| **@cli-execute-agent** | Fix execution per group, code context analysis, Edit tool operations, test verification, git rollback on failure, completion JSON generation |
## Enhanced Features
### 1. Two-Phase Architecture
**Phase Separation**:
| Phase | Agent | Output | Purpose |
|-------|-------|--------|---------|
| **Planning** | @cli-planning-agent | fix-plan.json | Analyze findings, group intelligently, determine optimal execution strategy |
| **Execution** | @cli-execute-agent | completions/*.json | Execute fixes per plan with test verification and rollback |
**Benefits**:
- Clear separation of concerns (analysis vs execution)
- Reusable plans (can re-execute without re-planning)
- Better error isolation (planning failures vs execution failures)
### 2. Intelligent Grouping Strategy
**Three-Level Grouping**:
```javascript
// Level 1: Primary grouping by file + dimension
{file: "auth.ts", dimension: "security"} → Group A
{file: "auth.ts", dimension: "quality"} → Group B
{file: "query-builder.ts", dimension: "security"} → Group C
// Level 2: Secondary grouping by root cause similarity
Group A findings → Semantic similarity analysis (threshold 0.7)
  → Sub-group A1: "missing-input-validation" (findings 1, 2)
  → Sub-group A2: "insecure-crypto" (finding 3)
// Level 3: Dependency analysis
Sub-group A1 creates validation utilities
Sub-group C4 depends on those utilities
→ A1 must execute before C4 (serial stage dependency)
```
**Similarity Computation**:
- Combine: `description + recommendation + category`
- Vectorize: TF-IDF or LLM embedding
- Cluster: Greedy algorithm with cosine similarity > 0.7
### 3. Execution Strategy Determination
**Strategy Types**:
| Strategy | When to Use | Stage Structure |
|----------|-------------|-----------------|
| **Parallel** | All groups independent, different files | Single stage, all groups in parallel |
| **Serial** | Strong dependencies, shared resources | Multiple stages, one group per stage |
| **Hybrid** | Mixed dependencies | Multiple stages, parallel within stages |
**Dependency Detection**:
- Shared file modifications
- Utility creation + usage patterns
- Test dependency chains
- Risk level clustering (high-risk groups isolated)
### 4. Conservative Test Verification
**Test Strategy** (per fix):
```javascript
// 1. Identify affected tests
const testPattern = identifyTestPattern(finding.file);
// e.g., "tests/auth/**/*.test.*" for src/auth/service.ts
// 2. Run tests
const result = await runTests(testPattern);
// 3. Evaluate
if (result.passRate < 100) { // require 100% pass rate
// Rollback
await gitCheckout(finding.file);
// Retry with failure context
if (attempts < maxIterations) {
const fixContext = analyzeFailure(result.stderr);
regenerateFix(finding, fixContext);
retry();
} else {
markFailed(finding.id);
}
} else {
// Commit
await gitCommit(`Fix: ${finding.title} [${finding.id}]`);
markFixed(finding.id);
}
```
**Pass Criteria**: 100% test pass rate (no partial fixes)
## Core Responsibilities
### Orchestrator
**Phase 1: Discovery & Initialization**
- Input validation: Check export file exists and is valid JSON
- Auto-discovery: If review-dir provided, find latest `*-fix-export.json`
- Session creation: Generate fix-session-id (`fix-{timestamp}`)
- Directory structure: Create `{review-dir}/fixes/{fix-session-id}/` with subdirectories
- State files: Initialize active-fix-session.json (session marker)
- TodoWrite initialization: Set up 4-phase tracking
**Phase 2: Planning Coordination**
- Launch @cli-planning-agent with findings data and project context
- Validate fix-plan.json output (schema conformance, includes metadata with session status)
- Load plan into memory for execution phase
- TodoWrite update: Mark planning complete, start execution
**Phase 3: Execution Orchestration**
- Load fix-plan.json timeline stages
- For each stage:
- If parallel mode: Launch all group agents via `Promise.all()`
- If serial mode: Execute groups sequentially with `await`
- Assign agent IDs (agents update their fix-progress-{N}.json)
- Handle agent failures gracefully (mark group as failed, continue)
- Advance to next stage only when current stage complete
**Phase 4: Completion & Aggregation**
- Collect final status from all fix-progress-{N}.json files
- Generate fix-summary.md with timeline and results
- Update fix-history.json with new session entry
- Remove active-fix-session.json
- TodoWrite completion: Mark all phases done
- Output summary to user
**Phase 5: Session Completion (Optional)**
- If all findings fixed successfully (no failures):
- Prompt user: "All fixes complete. Complete workflow session? [Y/n]"
- If confirmed: Execute `/workflow:session:complete` to archive session with lessons learned
- If partial success (some failures):
- Output: "Some findings failed. Review fix-summary.md before completing session."
- Do NOT auto-complete session
### Output File Structure
```
.workflow/active/WFS-{session-id}/.review/
├── fix-export-{timestamp}.json # Exported findings (input)
└── fixes/{fix-session-id}/
├── fix-plan.json # Planning agent output (execution plan with metadata)
├── fix-progress-1.json # Group 1 progress (planning agent init → agent updates)
├── fix-progress-2.json # Group 2 progress (planning agent init → agent updates)
├── fix-progress-3.json # Group 3 progress (planning agent init → agent updates)
├── fix-summary.md # Final report (orchestrator generates)
├── active-fix-session.json # Active session marker
└── fix-history.json # All sessions history
```
**File Producers**:
- **Planning Agent**: `fix-plan.json` (with metadata), all `fix-progress-*.json` (initial state)
- **Execution Agents**: Update assigned `fix-progress-{N}.json` in real-time
### Agent Invocation Template
**Planning Agent**:
```javascript
Task({
subagent_type: "cli-planning-agent",
description: `Generate fix plan and initialize progress files for ${findings.length} findings`,
prompt: `
## Task Objective
Analyze ${findings.length} code review findings and generate execution plan with intelligent grouping and timeline coordination.
## Input Data
Review Session: ${reviewId}
Fix Session ID: ${fixSessionId}
Total Findings: ${findings.length}
Findings:
${JSON.stringify(findings, null, 2)}
Project Context:
- Structure: ${projectStructure}
- Test Framework: ${testFramework}
- Git Status: ${gitStatus}
## Output Requirements
### 1. fix-plan.json
Execute: cat ~/.claude/workflows/cli-templates/fix-plan-template.json
Generate execution plan following template structure:
**Key Generation Rules**:
- **Metadata**: Populate fix_session_id, review_session_id, status ("planning"), created_at, started_at timestamps
- **Execution Strategy**: Choose approach (parallel/serial/hybrid) based on dependency analysis, set parallel_limit and stages count
- **Groups**: Create groups (G1, G2, ...) with intelligent grouping (see Analysis Requirements below), assign progress files (fix-progress-1.json, ...), populate fix_strategy with approach/complexity/test_pattern, assess risks, identify dependencies
- **Timeline**: Define stages respecting dependencies, set execution_mode per stage, map groups to stages, calculate critical path
### 2. fix-progress-{N}.json (one per group)
Execute: cat ~/.claude/workflows/cli-templates/fix-progress-template.json
For each group (G1, G2, G3, ...), generate fix-progress-{N}.json following template structure:
**Initial State Requirements**:
- Status: "pending", phase: "waiting"
- Timestamps: Set last_update to now, others null
- Findings: Populate from review findings with status "pending", all operation fields null
- Summary: Initialize all counters to zero
- Flow control: Empty implementation_approach array
- Errors: Empty array
**CRITICAL**: Ensure complete template structure - all fields must be present.
## Analysis Requirements
### Intelligent Grouping Strategy
Group findings using these criteria (in priority order):
1. **File Proximity**: Findings in same file or related files
2. **Dimension Affinity**: Same dimension (security, performance, etc.)
3. **Root Cause Similarity**: Similar underlying issues
4. **Fix Approach Commonality**: Can be fixed with similar approach
**Grouping Guidelines**:
- Optimal group size: 2-5 findings per group
- Avoid cross-cutting concerns in same group
- Consider test isolation (different test suites → different groups)
- Balance workload across groups for parallel execution
### Execution Strategy Determination
**Parallel Mode**: Use when groups are independent, no shared files
**Serial Mode**: Use when groups have dependencies or shared resources
**Hybrid Mode**: Use for mixed dependency graphs (recommended for most cases)
**Dependency Analysis**:
- Identify shared files between groups
- Detect test dependency chains
- Evaluate risk of concurrent modifications
### Risk Assessment
For each group, evaluate:
- **Complexity**: Based on code structure, file size, existing tests
- **Impact Scope**: Number of files affected, API surface changes
- **Rollback Feasibility**: Ease of reverting changes if tests fail
### Test Strategy
For each group, determine:
- **Test Pattern**: Glob pattern matching affected tests
- **Pass Criteria**: All tests must pass (100% pass rate)
- **Test Command**: Infer from project (package.json, pytest.ini, etc.)
## Output Files
Write to ${sessionDir}:
- ./fix-plan.json
- ./fix-progress-1.json
- ./fix-progress-2.json
- ./fix-progress-{N}.json (as many as groups created)
## Quality Checklist
Before finalizing outputs:
- ✅ All findings assigned to exactly one group
- ✅ Group dependencies correctly identified
- ✅ Timeline stages respect dependencies
- ✅ All progress files have complete initial structure
- ✅ Test patterns are valid and specific
- ✅ Risk assessments are realistic
- ✅ Estimated times are reasonable
`
})
```
**Execution Agent** (per group):
```javascript
Task({
subagent_type: "cli-execute-agent",
description: `Fix ${group.findings.length} issues: ${group.group_name}`,
prompt: `
## Task Objective
Execute fixes for code review findings in group ${group.group_id}. Update progress file in real-time with flow control tracking.
## Assignment
- Group ID: ${group.group_id}
- Group Name: ${group.group_name}
- Progress File: ${sessionDir}/${group.progress_file}
- Findings Count: ${group.findings.length}
- Max Iterations: ${maxIterations} (per finding)
## Fix Strategy
${JSON.stringify(group.fix_strategy, null, 2)}
## Risk Assessment
${JSON.stringify(group.risk_assessment, null, 2)}
## Execution Flow
### Initialization (Before Starting)
1. Read ${group.progress_file} to load initial state
2. Update progress file:
- assigned_agent: "${agentId}"
- status: "in-progress"
- started_at: Current ISO 8601 timestamp
- last_update: Current ISO 8601 timestamp
3. Write updated state back to ${group.progress_file}
### Main Execution Loop
For EACH finding in ${group.progress_file}.findings:
#### Step 1: Analyze Context
**Before Step**:
- Update finding: status→"in-progress", started_at→now()
- Update current_finding: Populate with finding details, status→"analyzing", action→"Reading file and understanding code structure"
- Update phase→"analyzing"
- Update flow_control: Add "analyze_context" step to implementation_approach (status→"in-progress"), set current_step→"analyze_context"
- Update last_update→now(), write to ${group.progress_file}
**Action**:
- Read file: finding.file
- Understand code structure around line: finding.line
- Analyze surrounding context (imports, dependencies, related functions)
- Review recommendations: finding.recommendations
**After Step**:
- Update flow_control: Mark "analyze_context" step as "completed" with completed_at→now()
- Update last_update→now(), write to ${group.progress_file}
#### Step 2: Apply Fix
**Before Step**:
- Update current_finding: status→"fixing", action→"Applying code changes per recommendations"
- Update phase→"fixing"
- Update flow_control: Add "apply_fix" step to implementation_approach (status→"in-progress"), set current_step→"apply_fix"
- Update last_update→now(), write to ${group.progress_file}
**Action**:
- Use Edit tool to implement code changes per finding.recommendations
- Follow fix_strategy.approach
- Maintain code style and existing patterns
**After Step**:
- Update flow_control: Mark "apply_fix" step as "completed" with completed_at→now()
- Update last_update→now(), write to ${group.progress_file}
#### Step 3: Test Verification
**Before Step**:
- Update current_finding: status→"testing", action→"Running test suite to verify fix"
- Update phase→"testing"
- Update flow_control: Add "run_tests" step to implementation_approach (status→"in-progress"), set current_step→"run_tests"
- Update last_update→now(), write to ${group.progress_file}
**Action**:
- Run tests using fix_strategy.test_pattern
- Require 100% pass rate
- Capture test output
**On Test Failure**:
- Git rollback: \`git checkout -- \${finding.file}\`
- Increment finding.attempts
- Update flow_control: Mark "run_tests" step as "failed" with completed_at→now()
- Update errors: Add entry (finding_id, error_type→"test_failure", message, timestamp)
- If finding.attempts < ${maxIterations}:
- Reset flow_control: implementation_approach→[], current_step→null
- Retry from Step 1
- Else:
- Update finding: status→"completed", result→"failed", error_message→"Max iterations reached", completed_at→now()
- Update summary counts, move to next finding
**On Test Success**:
- Update flow_control: Mark "run_tests" step as "completed" with completed_at→now()
- Update last_update→now(), write to ${group.progress_file}
- Proceed to Step 4
#### Step 4: Commit Changes
**Before Step**:
- Update current_finding: status→"committing", action→"Creating git commit for successful fix"
- Update phase→"committing"
- Update flow_control: Add "commit_changes" step to implementation_approach (status→"in-progress"), set current_step→"commit_changes"
- Update last_update→now(), write to ${group.progress_file}
**Action**:
- Git commit: \`git commit -m "fix(${finding.dimension}): ${finding.title} [${finding.id}]"\`
- Capture commit hash
**After Step**:
- Update finding: status→"completed", result→"fixed", commit_hash→<captured>, test_passed→true, completed_at→now()
- Update flow_control: Mark "commit_changes" step as "completed" with completed_at→now()
- Update last_update→now(), write to ${group.progress_file}
#### After Each Finding
- Update summary: Recalculate counts (pending/in_progress/fixed/failed) and percent_complete
- If all findings completed: Clear current_finding, reset flow_control
- Update last_update→now(), write to ${group.progress_file}
### Final Completion
When all findings processed:
- Update status→"completed", phase→"done", summary.percent_complete→100.0
- Update last_update→now(), write final state to ${group.progress_file}
## Critical Requirements
### Progress File Updates
- **MUST update after every significant action** (before/after each step)
- **Always maintain complete structure** - never write partial updates
- **Use ISO 8601 timestamps** - e.g., "2025-01-25T14:36:00Z"
### Flow Control Format
Follow action-planning-agent flow_control.implementation_approach format:
- step: Identifier (e.g., "analyze_context", "apply_fix")
- action: Human-readable description
- status: "pending" | "in-progress" | "completed" | "failed"
- started_at: ISO 8601 timestamp or null
- completed_at: ISO 8601 timestamp or null
### Error Handling
- Capture all errors in errors[] array
- Never leave progress file in invalid state
- Always write complete updates, never partial
- On unrecoverable error: Mark group as failed, preserve state
## Test Patterns
Use fix_strategy.test_pattern to run affected tests:
- Pattern: ${group.fix_strategy.test_pattern}
- Command: Infer from project (npm test, pytest, etc.)
- Pass Criteria: 100% pass rate required
`
})
```
### Error Handling
**Planning Failures**:
- Invalid template → Abort with error message
- Insufficient findings data → Request complete export
- Planning timeout → Retry once, then fail gracefully
**Execution Failures**:
- Agent crash → Mark group as failed, continue with other groups
- Test command not found → Skip test verification, warn user
- Git operations fail → Abort with error, preserve state
**Rollback Scenarios**:
- Test failure after fix → Automatic `git checkout` rollback
- Max iterations reached → Leave file unchanged, mark as failed
- Unrecoverable error → Rollback entire group, save checkpoint
### TodoWrite Structure
**Initialization**:
```javascript
TodoWrite({
todos: [
{content: "Phase 1: Discovery & Initialization", status: "completed"},
{content: "Phase 2: Planning", status: "in_progress"},
{content: "Phase 3: Execution", status: "pending"},
{content: "Phase 4: Completion", status: "pending"}
]
});
```
**During Execution**:
```javascript
TodoWrite({
todos: [
{content: "Phase 1: Discovery & Initialization", status: "completed"},
{content: "Phase 2: Planning", status: "completed"},
{content: "Phase 3: Execution", status: "in_progress"},
{content: " → Stage 1: Parallel execution (3 groups)", status: "completed"},
{content: " • Group G1: Auth validation (2 findings)", status: "completed"},
{content: " • Group G2: Query security (3 findings)", status: "completed"},
{content: " • Group G3: Config quality (1 finding)", status: "completed"},
{content: " → Stage 2: Serial execution (1 group)", status: "in_progress"},
{content: " • Group G4: Dependent fixes (2 findings)", status: "in_progress"},
{content: "Phase 4: Completion", status: "pending"}
]
});
```
**Update Rules**:
- Add stage items dynamically based on fix-plan.json timeline
- Add group items per stage
- Mark completed immediately after each group finishes
- Update parent phase status when all child items complete
## Best Practices
1. **Trust AI Planning**: Planning agent's grouping and execution strategy are based on dependency analysis
2. **Conservative Approach**: Test verification is mandatory - no fixes kept without passing tests
3. **Parallel Efficiency**: Default 3 concurrent agents balances speed and resource usage
4. **Resume Support**: Fix sessions can resume from checkpoints after interruption
5. **Manual Review**: Always review failed fixes manually - may require architectural changes
6. **Incremental Fixing**: Start with small batches (5-10 findings) before large-scale fixes
## Related Commands
### View Fix Progress
Use `ccw view` to open the workflow dashboard in browser:
```bash
ccw view
```

View File

@@ -2,7 +2,7 @@
name: review-module-cycle
description: Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.
argument-hint: "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*)
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*), Task(*)
---
# Workflow Review-Module-Cycle Command
@@ -187,7 +187,7 @@ const CATEGORIES = {
**Step 1: Session Creation**
```javascript
// Create workflow session for this review (type: review)
SlashCommand(command="/workflow:session:start --type review \"Code review for [target_pattern]\"")
Skill(skill="workflow:session:start", args="--type review \"Code review for [target_pattern]\"")
// Parse output
const sessionId = output.match(/SESSION_ID: (WFS-[^\s]+)/)[1];
@@ -764,8 +764,8 @@ After completing a module review, use the generated findings JSON for automated
/workflow:review-module-cycle src/auth/**
# Step 2: Run automated fixes using dimension findings
/workflow:review-fix .workflow/active/WFS-{session-id}/.review/
/workflow:review-cycle-fix .workflow/active/WFS-{session-id}/.review/
```
See `/workflow:review-fix` for automated fixing with smart grouping, parallel execution, and test verification.
See `/workflow:review-cycle-fix` for automated fixing with smart grouping, parallel execution, and test verification.

View File

@@ -2,7 +2,7 @@
name: review-session-cycle
description: Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.
argument-hint: "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*)
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*), Task(*)
---
# Workflow Review-Session-Cycle Command
@@ -775,8 +775,8 @@ After completing a review, use the generated findings JSON for automated fixing:
/workflow:review-session-cycle
# Step 2: Run automated fixes using dimension findings
/workflow:review-fix .workflow/active/WFS-{session-id}/.review/
/workflow:review-cycle-fix .workflow/active/WFS-{session-id}/.review/
```
See `/workflow:review-fix` for automated fixing with smart grouping, parallel execution, and test verification.
See `/workflow:review-cycle-fix` for automated fixing with smart grouping, parallel execution, and test verification.

View File

@@ -1,8 +1,10 @@
---
name: complete
description: Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag
argument-hint: "[-y|--yes] [--detailed]"
examples:
- /workflow:session:complete
- /workflow:session:complete --yes
- /workflow:session:complete --detailed
---
@@ -107,13 +109,13 @@ rm -f .workflow/archives/$SESSION_ID/.archiving
Manifest: Updated with N total sessions
```
### Phase 4: Update project.json (Optional)
### Phase 4: Update project-tech.json (Optional)
**Skip if**: `.workflow/project.json` doesn't exist
**Skip if**: `.workflow/project-tech.json` doesn't exist
```bash
# Check
test -f .workflow/project.json || echo "SKIP"
test -f .workflow/project-tech.json || echo "SKIP"
```
**If exists**, add feature entry:
@@ -134,6 +136,53 @@ test -f .workflow/project.json || echo "SKIP"
✓ Feature added to project registry
```
### Phase 5: Ask About Solidify (Always)
After successful archival, prompt user to capture learnings:
```javascript
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
// Auto mode: Skip solidify
console.log(`[--yes] Auto-selecting: Skip solidify`)
console.log(`Session archived successfully.`)
// Done - no solidify
} else {
// Interactive mode: Ask user
AskUserQuestion({
questions: [{
question: "Would you like to solidify learnings from this session into project guidelines?",
header: "Solidify",
options: [
{ label: "Yes, solidify now", description: "Extract learnings and update project-guidelines.json" },
{ label: "Skip", description: "Archive complete, no learnings to capture" }
],
multiSelect: false
}]
})
// **If "Yes, solidify now"**: Execute `/workflow:session:solidify` with the archived session ID.
}
```
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
- **Solidify Learnings**: Auto-selected "Skip" (archive only, no solidify)
**Flag Parsing**:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
```
**Output**:
```
Session archived successfully.
→ Run /workflow:session:solidify to capture learnings (recommended)
```
## Error Recovery
| Phase | Symptom | Recovery |
@@ -149,5 +198,6 @@ test -f .workflow/project.json || echo "SKIP"
Phase 1: find session → create .archiving marker
Phase 2: read key files → build manifest entry (no writes)
Phase 3: mkdir → mv → update manifest.json → rm marker
Phase 4: update project.json features array (optional)
Phase 4: update project-tech.json features array (optional)
Phase 5: ask user → solidify learnings (optional)
```

View File

@@ -1,14 +1,18 @@
---
name: solidify
description: Crystallize session learnings and user-defined constraints into permanent project guidelines
argument-hint: "[--type <convention|constraint|learning>] [--category <category>] \"rule or insight\""
argument-hint: "[-y|--yes] [--type <convention|constraint|learning>] [--category <category>] \"rule or insight\""
examples:
- /workflow:session:solidify "Use functional components for all React code" --type convention
- /workflow:session:solidify "No direct DB access from controllers" --type constraint --category architecture
- /workflow:session:solidify -y "No direct DB access from controllers" --type constraint --category architecture
- /workflow:session:solidify "Cache invalidation requires event sourcing" --type learning --category architecture
- /workflow:session:solidify --interactive
---
## Auto Mode
When `--yes` or `-y`: Auto-categorize and add guideline without confirmation.
# Session Solidify Command (/workflow:session:solidify)
## Overview

View File

@@ -16,7 +16,7 @@ examples:
Manages workflow sessions with three operation modes: discovery (manual), auto (intelligent), and force-new.
**Dual Responsibility**:
1. **Project-level initialization** (first-time only): Creates `.workflow/project.json` for feature registry
1. **Project-level initialization** (first-time only): Creates `.workflow/project-tech.json` for feature registry
2. **Session-level initialization** (always): Creates session directory structure
## Session Types
@@ -50,7 +50,7 @@ bash(test -f .workflow/project-guidelines.json && echo "GUIDELINES_EXISTS" || ec
**If either NOT_FOUND**, delegate to `/workflow:init`:
```javascript
// Call workflow:init for intelligent project analysis
SlashCommand({command: "/workflow:init"});
Skill(skill="workflow:init");
// Wait for init completion
// project-tech.json and project-guidelines.json will be created

View File

@@ -2,7 +2,7 @@
name: tdd-plan
description: TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking
argument-hint: "\"feature description\"|file.md"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*)
---
# TDD Workflow Plan Command (/workflow:tdd-plan)
@@ -14,7 +14,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
**CLI Tool Selection**: CLI tool usage is determined semantically from user's task description. Include "use Codex/Gemini/Qwen" in your request for CLI execution.
**Task Attachment Model**:
- SlashCommand execute **expands workflow** by attaching sub-tasks to current TodoWrite
- Skill execute **expands workflow** by attaching sub-tasks to current TodoWrite
- When executing a sub-command (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
- Orchestrator **executes these attached tasks** sequentially
- After completion, attached tasks are **collapsed** back to high-level phase summary
@@ -34,9 +34,47 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
6. **TDD Context**: All descriptions include "TDD:" prefix
7. **Task Attachment Model**: SlashCommand execute **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
7. **Task Attachment Model**: Skill execute **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
8. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
## TDD Compliance Requirements
### The Iron Law
```
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
```
**Enforcement Method**:
- Phase 5: `implementation_approach` includes test-first steps (Red → Green → Refactor)
- Green phase: Includes test-fix-cycle configuration (max 3 iterations)
- Auto-revert: Triggered when max iterations reached without passing tests
**Verification**: Phase 6 validates Red-Green-Refactor structure in all generated tasks
### TDD Compliance Checkpoint
| Checkpoint | Validation Phase | Evidence Required |
|------------|------------------|-------------------|
| Test-first structure | Phase 5 | `implementation_approach` has 3 steps |
| Red phase exists | Phase 6 | Step 1: `tdd_phase: "red"` |
| Green phase with test-fix | Phase 6 | Step 2: `tdd_phase: "green"` + test-fix-cycle |
| Refactor phase exists | Phase 6 | Step 3: `tdd_phase: "refactor"` |
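As an illustration, the Phase 6 checkpoints above could be spot-checked with `jq` against a generated task JSON. This is a sketch, not part of the command itself; `$SESSION_ID` and the task filename are placeholders, and the field paths assume the task schema described in this section.
```bash
# Illustrative spot-check of the TDD compliance checkpoints for one task JSON.
TASK_FILE=".workflow/active/$SESSION_ID/.task/IMPL-1.json"

# Checkpoint: implementation_approach has exactly 3 steps
STEPS=$(jq '.flow_control.implementation_approach | length' "$TASK_FILE")
[ "$STEPS" -eq 3 ] || echo "⚠️ Expected 3 steps, found $STEPS"

# Checkpoint: steps are ordered red → green → refactor
jq -e '[.flow_control.implementation_approach[].tdd_phase] == ["red","green","refactor"]' \
  "$TASK_FILE" >/dev/null || echo "⚠️ tdd_phase sequence is not red/green/refactor"
```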
### Core TDD Principles (from ref skills)
**Red Flags - STOP and Reassess**:
- Code written before test
- Test passes immediately (no Red phase witnessed)
- Cannot explain why test should fail
- "Just this once" rationalization
- "Tests after achieve same goals" thinking
**Why Order Matters**:
- Tests written after code pass immediately → proves nothing
- Test-first forces edge case discovery before implementation
- Tests-after verify what was built, not what's required
## 6-Phase Execution (with Conflict Resolution)
### Phase 1: Session Discovery
@@ -44,7 +82,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
**Step 1.1: Execute** - Session discovery and initialization
```javascript
SlashCommand(command="/workflow:session:start --type tdd --auto \"TDD: [structured-description]\"")
Skill(skill="workflow:session:start", args="--type tdd --auto \"TDD: [structured-description]\"")
```
**TDD Structured Format**:
@@ -69,7 +107,7 @@ TEST_FOCUS: [Test scenarios]
**Step 2.1: Execute** - Context gathering and analysis
```javascript
SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"TDD: [structured-description]\"")
Skill(skill="workflow:tools:context-gather", args="--session [sessionId] \"TDD: [structured-description]\"")
```
**Use Same Structured Description**: Pass the same structured format from Phase 1
@@ -95,7 +133,7 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"TDD
**Step 3.1: Execute** - Test coverage analysis and framework detection
```javascript
SlashCommand(command="/workflow:tools:test-context-gather --session [sessionId]")
Skill(skill="workflow:tools:test-context-gather", args="--session [sessionId]")
```
**Purpose**: Analyze existing codebase for:
@@ -110,7 +148,7 @@ SlashCommand(command="/workflow:tools:test-context-gather --session [sessionId]"
<!-- TodoWrite: When test-context-gather executed, INSERT 3 test-context-gather tasks -->
**TodoWrite Update (Phase 3 SlashCommand executed - tasks attached)**:
**TodoWrite Update (Phase 3 Skill executed - tasks attached)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
@@ -124,7 +162,7 @@ SlashCommand(command="/workflow:tools:test-context-gather --session [sessionId]"
]
```
**Note**: SlashCommand execute **attaches** test-context-gather's 3 tasks. Orchestrator **executes** these tasks.
**Note**: Skill execute **attaches** test-context-gather's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
@@ -154,7 +192,7 @@ SlashCommand(command="/workflow:tools:test-context-gather --session [sessionId]"
**Step 4.1: Execute** - Conflict detection and resolution
```javascript
SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId] --context [contextPath]")
Skill(skill="workflow:tools:conflict-resolution", args="--session [sessionId] --context [contextPath]")
```
**Input**:
@@ -175,7 +213,7 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
<!-- TodoWrite: If conflict_risk ≥ medium, INSERT 3 conflict-resolution tasks when executed -->
**TodoWrite Update (Phase 4 SlashCommand executed - tasks attached, if conflict_risk ≥ medium)**:
**TodoWrite Update (Phase 4 Skill executed - tasks attached, if conflict_risk ≥ medium)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
@@ -183,14 +221,14 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
{"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
{"content": "Phase 4: Conflict Resolution", "status": "in_progress", "activeForm": "Executing conflict resolution"},
{"content": " → Detect conflicts with CLI analysis", "status": "in_progress", "activeForm": "Detecting conflicts"},
{"content": " → Present conflicts to user", "status": "pending", "activeForm": "Presenting conflicts"},
{"content": " → Log and analyze detected conflicts", "status": "pending", "activeForm": "Analyzing conflicts"},
{"content": " → Apply resolution strategies", "status": "pending", "activeForm": "Applying resolution strategies"},
{"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
]
```
**Note**: SlashCommand execute **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks.
**Note**: Skill execute **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached → **Execute Phase 4.1-4.3** sequentially
@@ -219,7 +257,7 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Step 4.5: Execute** - Memory compaction
```javascript
SlashCommand(command="/compact")
Skill(skill="compact")
```
- This optimizes memory before proceeding to Phase 5
@@ -230,15 +268,19 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
### Phase 5: TDD Task Generation
**Step 5.1: Execute** - TDD task generation via action-planning-agent
**Step 5.1: Execute** - TDD task generation via action-planning-agent with Phase 0 user configuration
```javascript
SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
Skill(skill="workflow:tools:task-generate-tdd", args="--session [sessionId]")
```
**Note**: CLI tool usage is determined semantically from user's task description.
**Note**: Phase 0 now includes:
- Supplementary materials collection (file paths or inline content)
- Execution method preference (Agent/Hybrid/CLI)
- CLI tool preference (Codex/Gemini/Qwen/Auto)
- These preferences are passed to agent for task generation
**Parse**: Extract feature count, task count (not chain count - tasks now contain internal TDD cycles)
**Parse**: Extract feature count, task count (not chain count - tasks now contain internal TDD cycles), CLI execution IDs assigned
**Validate**:
- IMPL_PLAN.md exists (unified plan with TDD Implementation Tasks section)
@@ -246,14 +288,30 @@ SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
- TODO_LIST.md exists with internal TDD phase indicators
- Each IMPL task includes:
- `meta.tdd_workflow: true`
- `flow_control.implementation_approach` with 3 steps (red/green/refactor)
- `meta.cli_execution_id: {session_id}-{task_id}`
- `meta.cli_execution: { "strategy": "new|resume|fork|merge_fork", ... }`
- `flow_control.implementation_approach` with exactly 3 steps (red/green/refactor)
- Green phase includes test-fix-cycle configuration
- `context.focus_paths`: absolute or clear relative paths (enhanced with exploration critical_files)
- `flow_control.pre_analysis`: includes exploration integration_points analysis
- IMPL_PLAN.md contains workflow_type: "tdd" in frontmatter
- Task count ≤10 (compliance with task limit)
- User configuration applied:
- If executionMethod == "cli" or "hybrid": command field added to steps
- CLI tool preference reflected in execution guidance
- Task count ≤18 (compliance with hard limit)
**Red Flag Detection** (Non-Blocking Warnings):
- Task count >18: `⚠️ Task count exceeds hard limit - request re-scope`
- Missing cli_execution_id: `⚠️ Task lacks CLI execution ID for resume support`
- Missing test-fix-cycle: `⚠️ Green phase lacks auto-revert configuration`
- Generic task names: `⚠️ Vague task names suggest unclear TDD cycles`
- Missing focus_paths: `⚠️ Task lacks clear file scope for implementation`
**Action**: Log warnings to `.workflow/active/[sessionId]/.process/tdd-warnings.log` (non-blocking)
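A minimal sketch of this non-blocking logging, assuming the warning texts above; `$SESSION_ID` is a placeholder and the count check mirrors the hard limit defined earlier. Nothing here halts execution.
```bash
# Append a Red Flag warning without interrupting the workflow (sketch).
WARN_LOG=".workflow/active/$SESSION_ID/.process/tdd-warnings.log"
mkdir -p "$(dirname "$WARN_LOG")"

TASK_COUNT=$(ls .workflow/active/"$SESSION_ID"/.task/IMPL-*.json 2>/dev/null | wc -l)
if [ "$TASK_COUNT" -gt 18 ]; then
  echo "⚠️ Task count exceeds hard limit - request re-scope (count: $TASK_COUNT)" >> "$WARN_LOG"
fi
# Execution continues regardless of how many warnings were appended.
```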
<!-- TodoWrite: When task-generate-tdd executed, INSERT 3 task-generate-tdd tasks -->
**TodoWrite Update (Phase 5 SlashCommand executed - tasks attached)**:
**TodoWrite Update (Phase 5 Skill executed - tasks attached)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
@@ -267,7 +325,7 @@ SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
]
```
**Note**: SlashCommand execute **attaches** task-generate-tdd's 3 tasks. Orchestrator **executes** these tasks. Each generated IMPL task will contain internal Red-Green-Refactor cycle.
**Note**: Skill execute **attaches** task-generate-tdd's 3 tasks. Orchestrator **executes** these tasks. Each generated IMPL task will contain internal Red-Green-Refactor cycle.
**Next Action**: Tasks attached → **Execute Phase 5.1-5.3** sequentially
@@ -293,14 +351,59 @@ SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
1. Each task contains complete TDD workflow (Red-Green-Refactor internally)
2. Task structure validation:
- `meta.tdd_workflow: true` in all IMPL tasks
- `meta.cli_execution_id` present (format: {session_id}-{task_id})
- `meta.cli_execution` strategy assigned (new/resume/fork/merge_fork)
- `flow_control.implementation_approach` has exactly 3 steps
- Each step has correct `tdd_phase`: "red", "green", "refactor"
- `context.focus_paths` are absolute or clear relative paths
- `flow_control.pre_analysis` includes exploration integration analysis
3. Dependency validation:
- Sequential features: IMPL-N depends_on ["IMPL-(N-1)"] if needed
- Complex features: IMPL-N.M depends_on ["IMPL-N.(M-1)"] for subtasks
- CLI execution strategies correctly assigned based on dependency graph
4. Agent assignment: All IMPL tasks use @code-developer
5. Test-fix cycle: Green phase step includes test-fix-cycle logic with max_iterations
6. Task count: Total tasks ≤10 (simple + subtasks)
6. Task count: Total tasks ≤18 (simple + subtasks hard limit)
7. User configuration:
- Execution method choice reflected in task structure
- CLI tool preference documented in implementation guidance (if CLI selected)
**Red Flag Checklist** (from TDD best practices):
- [ ] No tasks skip Red phase (`tdd_phase: "red"` exists in step 1)
- [ ] Test files referenced in Red phase (explicit paths, not placeholders)
- [ ] Green phase has test-fix-cycle with `max_iterations` configured
- [ ] Refactor phase has clear completion criteria
**Non-Compliance Warning Format**:
```
⚠️ TDD Red Flag: [issue description]
Task: [IMPL-N]
Recommendation: [action to fix]
```
**Evidence Gathering** (Before Completion Claims):
```bash
# Verify session artifacts exist
ls -la .workflow/active/[sessionId]/{IMPL_PLAN.md,TODO_LIST.md}
ls -la .workflow/active/[sessionId]/.task/IMPL-*.json
# Count generated artifacts
echo "IMPL tasks: $(ls .workflow/active/[sessionId]/.task/IMPL-*.json 2>/dev/null | wc -l)"
# Sample task structure verification (first task)
jq '{id, tdd: .meta.tdd_workflow, cli_id: .meta.cli_execution_id, phases: [.flow_control.implementation_approach[].tdd_phase]}' \
"$(ls .workflow/active/[sessionId]/.task/IMPL-*.json | head -1)"
```
**Evidence Required Before Summary**:
| Evidence Type | Verification Method | Pass Criteria |
|---------------|---------------------|---------------|
| File existence | `ls -la` artifacts | All files present |
| Task count | Count IMPL-*.json | Count matches claims (≤18) |
| TDD structure | jq sample extraction | Shows red/green/refactor + cli_execution_id |
| CLI execution IDs | jq extraction | All tasks have cli_execution_id assigned |
| Warning log | Check tdd-warnings.log | Logged (may be empty) |
**Return Summary**:
```
@@ -312,7 +415,7 @@ Total tasks: [M] (1 task per simple feature + subtasks for complex features)
Task breakdown:
- Simple features: [K] tasks (IMPL-1 to IMPL-K)
- Complex features: [L] features with [P] subtasks
- Total task count: [M] (within 10-task limit)
- Total task count: [M] (within 18-task hard limit)
Structure:
- IMPL-1: {Feature 1 Name} (Internal: Red → Green → Refactor)
@@ -326,19 +429,31 @@ Plans generated:
- Unified Implementation Plan: .workflow/active/[sessionId]/IMPL_PLAN.md
(includes TDD Implementation Tasks section with workflow_type: "tdd")
- Task List: .workflow/active/[sessionId]/TODO_LIST.md
(with internal TDD phase indicators)
(with internal TDD phase indicators and CLI execution strategies)
- Task JSONs: .workflow/active/[sessionId]/.task/IMPL-*.json
(with cli_execution_id and execution strategies for resume support)
TDD Configuration:
- Each task contains complete Red-Green-Refactor cycle
- Green phase includes test-fix cycle (max 3 iterations)
- Auto-revert on max iterations reached
- CLI execution strategies: new/resume/fork/merge_fork based on dependency graph
User Configuration Applied:
- Execution Method: [agent|hybrid|cli]
- CLI Tool Preference: [codex|gemini|qwen|auto]
- Supplementary Materials: [included|none]
- Task generation follows cli-tools-usage.md guidelines
⚠️ ACTION REQUIRED: Before execution, ensure you understand WHY each Red phase test is expected to fail.
This is crucial for valid TDD - if you don't know why the test fails, you can't verify it tests the right thing.
Recommended Next Steps:
1. /workflow:action-plan-verify --session [sessionId] # Verify TDD plan quality and dependencies
2. /workflow:execute --session [sessionId] # Start TDD execution
1. /workflow:plan-verify --session [sessionId] # Verify TDD plan quality and dependencies
2. /workflow:execute --session [sessionId] # Start TDD execution with CLI strategies
3. /workflow:tdd-verify [sessionId] # Post-execution TDD compliance check
Quality Gate: Consider running /workflow:action-plan-verify to validate TDD task structure and dependencies
Quality Gate: Consider running /workflow:plan-verify to validate TDD task structure, dependencies, and CLI execution strategies
```
## TodoWrite Pattern
@@ -347,7 +462,7 @@ Quality Gate: Consider running /workflow:action-plan-verify to validate TDD task
### Key Principles
1. **Task Attachment** (when SlashCommand executed):
1. **Task Attachment** (when Skill executed):
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
- Example: `/workflow:tools:test-context-gather` attaches 3 sub-tasks (Phase 3.1, 3.2, 3.3)
- First attached task marked as `in_progress`, others as `pending`
@@ -400,7 +515,7 @@ TDD Workflow Orchestrator
│ IF conflict_risk ≥ medium:
│ └─ /workflow:tools:conflict-resolution ← ATTACHED (3 tasks)
│ ├─ Phase 4.1: Detect conflicts with CLI
│ ├─ Phase 4.2: Present conflicts to user
│ ├─ Phase 4.2: Log and analyze detected conflicts
│ └─ Phase 4.3: Apply resolution strategies
│ └─ Returns: conflict-resolution.json ← COLLAPSED
│ ELSE:
@@ -416,10 +531,10 @@ TDD Workflow Orchestrator
└─ Phase 6: TDD Structure Validation
└─ Internal validation + summary returned
└─ Recommend: /workflow:action-plan-verify
└─ Recommend: /workflow:plan-verify
Key Points:
• ← ATTACHED: SlashCommand attaches sub-tasks to orchestrator TodoWrite
• ← ATTACHED: Skill attaches sub-tasks to orchestrator TodoWrite
• ← COLLAPSED: Sub-tasks executed and collapsed to phase summary
• TDD-specific: Each generated IMPL task contains complete Red-Green-Refactor cycle
```
@@ -439,6 +554,36 @@ Convert user input to TDD-structured format:
- **Command failure**: Keep phase in_progress, report error
- **TDD validation failure**: Report incomplete chains or wrong dependencies
### TDD Warning Patterns
| Pattern | Warning Message | Recommended Action |
|---------|----------------|-------------------|
| Task count >10 | High task count detected | Consider splitting into multiple sessions |
| Missing test-fix-cycle | Green phase lacks auto-revert | Add `max_iterations: 3` to task config |
| Red phase missing test path | Test file path not specified | Add explicit test file paths |
| Generic task names | Vague names like "Add feature" | Use specific behavior descriptions |
| No refactor criteria | Refactor phase lacks completion criteria | Define clear refactor scope |
### Non-Blocking Warning Policy
**All warnings are advisory** - they do not halt execution:
1. Warnings logged to `.process/tdd-warnings.log`
2. Summary displayed in Phase 6 output
3. User decides whether to address before `/workflow:execute`
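For example, the Phase 6 summary could surface the advisory log like this (a sketch; the log path matches the one used above):
```bash
# Summarize advisory warnings in the Phase 6 output without blocking (sketch).
WARN_LOG=".workflow/active/$SESSION_ID/.process/tdd-warnings.log"
if [ -s "$WARN_LOG" ]; then
  echo "TDD warnings ($(wc -l < "$WARN_LOG")) - review before /workflow:execute:"
  cat "$WARN_LOG"
else
  echo "No TDD warnings logged."
fi
```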
### Error Handling Quick Reference
| Error Type | Detection | Recovery Action |
|------------|-----------|-----------------|
| Parsing failure | Empty/malformed output | Retry once, then report |
| Missing context-package | File read error | Re-run `/workflow:tools:context-gather` |
| Invalid task JSON | jq parse error | Report malformed file path |
| Task count exceeds 18 | Count validation ≥19 | Request re-scope, split into multiple sessions |
| Missing cli_execution_id | All tasks lack ID | Regenerate tasks with phase 0 user config |
| Test-context missing | File not found | Re-run `/workflow:tools:test-context-gather` |
| Phase timeout | No response | Retry phase, check CLI connectivity |
| CLI tool not available | Tool not in cli-tools.json | Fall back to alternative preferred tool |
## Related Commands
**Prerequisite Commands**:
@@ -453,8 +598,33 @@ Convert user input to TDD-structured format:
- `/workflow:tools:task-generate-tdd` - Phase 5: Generate TDD tasks (CLI tool usage determined semantically)
**Follow-up Commands**:
- `/workflow:action-plan-verify` - Recommended: Verify TDD plan quality and structure before execution
- `/workflow:plan-verify` - Recommended: Verify TDD plan quality and structure before execution
- `/workflow:status` - Review TDD task breakdown
- `/workflow:execute` - Begin TDD implementation
- `/workflow:tdd-verify` - Post-execution: Verify TDD compliance and generate quality report
## Next Steps Decision Table
| Situation | Recommended Command | Purpose |
|-----------|---------------------|---------|
| First time planning | `/workflow:plan-verify` | Validate task structure before execution |
| Warnings in tdd-warnings.log | Review log, refine tasks | Address Red Flags before proceeding |
| High task count warning | Consider `/workflow:session:start` | Split into focused sub-sessions |
| Ready to implement | `/workflow:execute` | Begin TDD Red-Green-Refactor cycles |
| After implementation | `/workflow:tdd-verify` | Generate TDD compliance report |
| Need to review tasks | `/workflow:status --session [id]` | Inspect current task breakdown |
| Plan needs changes | `/task:replan` | Update task JSON with new requirements |
### TDD Workflow State Transitions
```
/workflow:tdd-plan
[Planning Complete] ──→ /workflow:plan-verify (recommended)
[Verified/Ready] ─────→ /workflow:execute
[Implementation] ─────→ /workflow:tdd-verify (post-execution)
[Quality Report] ─────→ Done or iterate
```

View File

@@ -1,214 +1,301 @@
---
name: tdd-verify
description: Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis
argument-hint: "[optional: WFS-session-id]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(gemini:*)
description: Verify TDD workflow compliance against Red-Green-Refactor cycles. Generates quality report with coverage analysis and quality gate recommendation. Orchestrates sub-commands for comprehensive validation.
argument-hint: "[optional: --session WFS-session-id]"
allowed-tools: Skill(*), TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*)
---
# TDD Verification Command (/workflow:tdd-verify)
## Goal
Verify TDD workflow execution quality by validating Red-Green-Refactor cycle compliance, test coverage completeness, and task chain structure integrity. This command orchestrates multiple analysis phases and generates a comprehensive compliance report with quality gate recommendation.
**Output**: A structured Markdown report saved to `.workflow/active/WFS-{session}/TDD_COMPLIANCE_REPORT.md` containing:
- Executive summary with compliance score and quality gate recommendation
- Task chain validation (TEST → IMPL → REFACTOR structure)
- Test coverage metrics (line, branch, function)
- Red-Green-Refactor cycle verification
- Best practices adherence assessment
- Actionable improvement recommendations
## Operating Constraints
**ORCHESTRATOR MODE**:
- This command coordinates multiple sub-commands (`/workflow:tools:tdd-coverage-analysis`, `ccw cli`)
- MAY write output files: TDD_COMPLIANCE_REPORT.md (primary report), .process/*.json (intermediate artifacts)
- MUST NOT modify source task files or implementation code
- MUST NOT create or delete tasks in the workflow
**Quality Gate Authority**: The compliance report provides a binding recommendation (BLOCK_MERGE / REQUIRE_FIXES / PROCEED_WITH_CAVEATS / APPROVED) based on objective compliance criteria.
## Coordinator Role
**This command is a pure orchestrator**: Execute 4 phases to verify TDD workflow compliance, test coverage, and Red-Green-Refactor cycle execution.
## Core Responsibilities
- Verify TDD task chain structure
- Analyze test coverage
- Validate TDD cycle execution
- Generate compliance report
- Verify TDD task chain structure (TEST → IMPL → REFACTOR)
- Analyze test coverage metrics
- Validate TDD cycle execution quality
- Generate compliance report with quality gate recommendation
## Execution Process
```
Input Parsing:
└─ Decision (session argument):
├─ session-id provided → Use provided session
└─ No session-id → Auto-detect active session
├─ --session provided → Use provided session
└─ No session → Auto-detect active session
Phase 1: Session Discovery
├─ Validate session directory exists
TodoWrite: Mark phase 1 completed
Phase 1: Session Discovery & Validation
├─ Detect or validate session directory
├─ Check required artifacts exist (.task/*.json, .summaries/*)
└─ ERROR if invalid or incomplete
Phase 2: Task Chain Validation
Phase 2: Task Chain Structure Validation
├─ Load all task JSONs from .task/
├─ Extract task IDs and group by feature
├─ Validate TDD structure:
│ ├─ TEST-N.M → IMPL-N.M → REFACTOR-N.M chain
│ ├─ Dependency verification
│ └─ Meta field validation (tdd_phase, agent)
└─ TodoWrite: Mark phase 2 completed
├─ Validate TDD structure: TEST-N.M → IMPL-N.M → REFACTOR-N.M
├─ Verify dependencies (depends_on)
├─ Validate meta fields (tdd_phase, agent)
└─ Extract chain validation data
Phase 3: Test Execution Analysis
└─ /workflow:tools:tdd-coverage-analysis
├─ Coverage metrics extraction
├─ TDD cycle verification
└─ Compliance score calculation
Phase 3: Coverage & Cycle Analysis
├─ Call: /workflow:tools:tdd-coverage-analysis
├─ Parse: test-results.json, coverage-report.json, tdd-cycle-report.md
└─ Extract coverage metrics and TDD cycle verification
Phase 4: Compliance Report Generation
├─ Gemini analysis for comprehensive report
├─ Aggregate findings from Phases 1-3
├─ Calculate compliance score (0-100)
├─ Determine quality gate recommendation
├─ Generate TDD_COMPLIANCE_REPORT.md
└─ Return summary to user
└─ Display summary to user
```
## 4-Phase Execution
### Phase 1: Session Discovery
**Auto-detect or use provided session**
### Phase 1: Session Discovery & Validation
**Step 1.1: Detect Session**
```bash
# If session-id provided
sessionId = argument
IF --session parameter provided:
session_id = provided session
ELSE:
# Auto-detect active session
active_sessions = bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
IF active_sessions is empty:
ERROR: "No active workflow session found. Use --session <session-id>"
EXIT
ELSE IF active_sessions has multiple entries:
# Use most recently modified session
session_id = bash(ls -td .workflow/active/WFS-*/ 2>/dev/null | head -1 | xargs basename)
ELSE:
session_id = basename(active_sessions[0])
# Else auto-detect active session
find .workflow/active/ -name "WFS-*" -type d | head -1 | sed 's/.*\///'
# Derive paths
session_dir = .workflow/active/WFS-{session_id}
task_dir = session_dir/.task
summaries_dir = session_dir/.summaries
process_dir = session_dir/.process
```
**Extract**: sessionId
**Step 1.2: Validate Required Artifacts**
```bash
# Check task files exist
task_files = Glob(task_dir/*.json)
IF task_files.count == 0:
ERROR: "No task JSON files found. Run /workflow:tdd-plan first"
EXIT
**Validation**: Session directory exists
# Check summaries exist (optional but recommended for full analysis)
summaries_exist = EXISTS(summaries_dir)
IF NOT summaries_exist:
WARNING: "No .summaries/ directory found. Some analysis may be limited."
```
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress
**Output**: session_id, session_dir, task_files list
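The detection logic above, written out as runnable bash for reference. This is a sketch under the assumptions stated in Steps 1.1 and 1.2: sessions live under `.workflow/active/WFS-*`, and the most recently modified session wins when several are active.
```bash
# Sketch of Step 1.1/1.2 as runnable bash.
SESSION_ID="${1:-}"                      # optional --session value from the caller
if [ -z "$SESSION_ID" ]; then
  LATEST=$(ls -td .workflow/active/WFS-*/ 2>/dev/null | head -1)
  if [ -z "$LATEST" ]; then
    echo "ERROR: No active workflow session found. Use --session <session-id>"; exit 1
  fi
  SESSION_ID=$(basename "$LATEST")
fi

SESSION_DIR=".workflow/active/$SESSION_ID"
TASK_COUNT=$(ls "$SESSION_DIR"/.task/*.json 2>/dev/null | wc -l)
if [ "$TASK_COUNT" -eq 0 ]; then
  echo "ERROR: No task JSON files found. Run /workflow:tdd-plan first"; exit 1
fi
[ -d "$SESSION_DIR/.summaries" ] || echo "WARNING: No .summaries/ directory found. Some analysis may be limited."
```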
---
### Phase 2: Task Chain Validation
**Validate TDD structure using bash commands**
### Phase 2: Task Chain Structure Validation
**Step 2.1: Load and Parse Task JSONs**
```bash
# Load all task JSONs
for task_file in .workflow/active/{sessionId}/.task/*.json; do
cat "$task_file"
done
# Single-pass JSON extraction using jq
validation_data = bash("""
# Load all tasks and extract structured data
cd '{session_dir}/.task'
# Extract task IDs
for task_file in .workflow/active/{sessionId}/.task/*.json; do
cat "$task_file" | jq -r '.id'
done
# Extract all task IDs
task_ids=$(jq -r '.id' *.json 2>/dev/null | sort)
# Check dependencies - read tasks and filter for IMPL/REFACTOR
for task_file in .workflow/active/{sessionId}/.task/IMPL-*.json; do
cat "$task_file" | jq -r '.context.depends_on[]?'
done
# Extract dependencies for IMPL tasks
impl_deps=$(jq -r 'select(.id | startswith("IMPL")) | .id + ":" + (.context.depends_on[]? // "none")' *.json 2>/dev/null)
for task_file in .workflow/active/{sessionId}/.task/REFACTOR-*.json; do
cat "$task_file" | jq -r '.context.depends_on[]?'
done
# Extract dependencies for REFACTOR tasks
refactor_deps=$(jq -r 'select(.id | startswith("REFACTOR")) | .id + ":" + (.context.depends_on[]? // "none")' *.json 2>/dev/null)
# Check meta fields
for task_file in .workflow/active/{sessionId}/.task/*.json; do
cat "$task_file" | jq -r '.meta.tdd_phase'
done
# Extract meta fields
meta_tdd=$(jq -r '.id + ":" + (.meta.tdd_phase // "missing")' *.json 2>/dev/null)
meta_agent=$(jq -r '.id + ":" + (.meta.agent // "missing")' *.json 2>/dev/null)
for task_file in .workflow/active/{sessionId}/.task/*.json; do
cat "$task_file" | jq -r '.meta.agent'
done
# Output as JSON
jq -n --arg ids "$task_ids" \\
--arg impl "$impl_deps" \\
--arg refactor "$refactor_deps" \\
--arg tdd "$meta_tdd" \\
--arg agent "$meta_agent" \\
'{ids: $ids, impl_deps: $impl, refactor_deps: $refactor, tdd: $tdd, agent: $agent}'
""")
```
**Validation**:
- For each feature N, verify TEST-N.M → IMPL-N.M → REFACTOR-N.M exists
- IMPL-N.M.context.depends_on includes TEST-N.M
- REFACTOR-N.M.context.depends_on includes IMPL-N.M
- TEST tasks have tdd_phase="red" and agent="@code-review-test-agent"
- IMPL/REFACTOR tasks have tdd_phase="green"/"refactor" and agent="@code-developer"
**Step 2.2: Validate TDD Chain Structure**
```
Parse validation_data JSON and validate:
**Extract**: Chain validation report
For each feature N (extracted from task IDs):
1. TEST-N.M exists?
2. IMPL-N.M exists?
3. REFACTOR-N.M exists? (optional but recommended)
4. IMPL-N.M.context.depends_on contains TEST-N.M?
5. REFACTOR-N.M.context.depends_on contains IMPL-N.M?
6. TEST-N.M.meta.tdd_phase == "red"?
7. TEST-N.M.meta.agent == "@code-review-test-agent"?
8. IMPL-N.M.meta.tdd_phase == "green"?
9. IMPL-N.M.meta.agent == "@code-developer"?
10. REFACTOR-N.M.meta.tdd_phase == "refactor"?
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
Calculate:
- chain_completeness_score = (complete_chains / total_chains) * 100
- dependency_accuracy = (correct_deps / total_deps) * 100
- meta_field_accuracy = (correct_meta / total_meta) * 100
```
**Output**: chain_validation_report (JSON structure with validation results)
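One possible way to compute `chain_completeness_score` with `jq`, shown as a sketch; it assumes the TEST/IMPL/REFACTOR naming and `context.depends_on` conventions described above, and `$SESSION_ID` is a placeholder.
```bash
# Sketch: chain completeness for TEST-N.M → IMPL-N.M → REFACTOR-N.M.
cd ".workflow/active/$SESSION_ID/.task"

total=0; complete=0
for test_file in TEST-*.json; do
  [ -e "$test_file" ] || continue
  suffix="${test_file#TEST-}"            # e.g. "1.1.json"
  total=$((total + 1))
  if [ -e "IMPL-$suffix" ] && [ -e "REFACTOR-$suffix" ] && \
     jq -e --arg dep "TEST-${suffix%.json}" \
       '(.context.depends_on // []) | index($dep)' "IMPL-$suffix" >/dev/null; then
    complete=$((complete + 1))
  fi
done
echo "chain_completeness_score: $(( total > 0 ? complete * 100 / total : 0 ))%"
```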
---
### Phase 3: Test Execution Analysis
**Command**: `SlashCommand(command="/workflow:tools:tdd-coverage-analysis --session [sessionId]")`
### Phase 3: Coverage & Cycle Analysis
**Input**: sessionId from Phase 1
**Step 3.1: Call Coverage Analysis Sub-command**
```bash
Skill(skill="workflow:tools:tdd-coverage-analysis", args="--session {session_id}")
```
**Parse Output**:
- Coverage metrics (line, branch, function percentages)
- TDD cycle verification results
- Compliance score
**Step 3.2: Parse Output Files**
```bash
# Check required outputs exist
IF NOT EXISTS(process_dir/test-results.json):
WARNING: "test-results.json not found. Coverage analysis incomplete."
coverage_data = null
ELSE:
coverage_data = Read(process_dir/test-results.json)
**Validation**:
- `.workflow/active/{sessionId}/.process/test-results.json` exists
- `.workflow/active/{sessionId}/.process/coverage-report.json` exists
- `.workflow/active/{sessionId}/.process/tdd-cycle-report.md` exists
IF NOT EXISTS(process_dir/coverage-report.json):
WARNING: "coverage-report.json not found. Coverage metrics incomplete."
metrics = null
ELSE:
metrics = Read(process_dir/coverage-report.json)
**TodoWrite**: Mark phase 3 completed, phase 4 in_progress
IF NOT EXISTS(process_dir/tdd-cycle-report.md):
WARNING: "tdd-cycle-report.md not found. Cycle validation incomplete."
cycle_data = null
ELSE:
cycle_data = Read(process_dir/tdd-cycle-report.md)
```
**Step 3.3: Extract Coverage Metrics**
```
If coverage_data exists:
- line_coverage_percent
- branch_coverage_percent
- function_coverage_percent
- uncovered_files (list)
- uncovered_lines (map: file -> line ranges)
If cycle_data exists:
- red_phase_compliance (tests failed initially?)
- green_phase_compliance (tests pass after impl?)
- refactor_phase_compliance (tests stay green during refactor?)
- minimal_implementation_score (was impl minimal?)
```
**Output**: coverage_analysis, cycle_analysis
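A hedged sketch of the metric extraction; the field names (`lines.pct`, `branches.pct`, `functions.pct`) are assumptions about the coverage report schema and should be adapted to whatever `tdd-coverage-analysis` actually emits.
```bash
# Pull headline metrics out of coverage-report.json (field names are assumed).
COVERAGE=".workflow/active/$SESSION_ID/.process/coverage-report.json"
if [ -f "$COVERAGE" ]; then
  jq -r '"line: \(.lines.pct // "n/a")%  branch: \(.branches.pct // "n/a")%  function: \(.functions.pct // "n/a")%"' "$COVERAGE"
else
  echo "WARNING: coverage-report.json not found. Coverage metrics incomplete."
fi
```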
---
### Phase 4: Compliance Report Generation
**Gemini analysis for comprehensive TDD compliance report**
**Step 4.1: Calculate Compliance Score**
```
Base Score: 100 points
Deductions:
Chain Structure:
- Missing TEST task: -30 points per feature
- Missing IMPL task: -30 points per feature
- Missing REFACTOR task: -10 points per feature
- Wrong dependency: -15 points per error
- Wrong agent: -5 points per error
- Wrong tdd_phase: -5 points per error
TDD Cycle Compliance:
- Test didn't fail initially: -10 points per feature
- Tests didn't pass after IMPL: -20 points per feature
- Tests broke during REFACTOR: -15 points per feature
- Over-engineered IMPL: -10 points per feature
Coverage Quality:
- Line coverage < 80%: -5 points
- Branch coverage < 70%: -5 points
- Function coverage < 80%: -5 points
- Critical paths uncovered: -10 points
Final Score: Max(0, Base Score - Total Deductions)
```
**Step 4.2: Determine Quality Gate**
```
IF score >= 90 AND no_critical_violations:
recommendation = "APPROVED"
ELSE IF score >= 70 AND critical_violations == 0:
recommendation = "PROCEED_WITH_CAVEATS"
ELSE IF score >= 50:
recommendation = "REQUIRE_FIXES"
ELSE:
recommendation = "BLOCK_MERGE"
```
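A sketch of the same decision in bash, using the thresholds from Steps 4.1 and 4.2; `SCORE` and `CRITICAL` stand in for the values computed earlier in Phase 4.
```bash
# Map compliance score and critical violations to a quality gate (sketch).
SCORE=${SCORE:-0}
CRITICAL=${CRITICAL:-0}

if [ "$SCORE" -ge 90 ] && [ "$CRITICAL" -eq 0 ]; then
  RECOMMENDATION="APPROVED"
elif [ "$SCORE" -ge 70 ] && [ "$CRITICAL" -eq 0 ]; then
  RECOMMENDATION="PROCEED_WITH_CAVEATS"
elif [ "$SCORE" -ge 50 ]; then
  RECOMMENDATION="REQUIRE_FIXES"
else
  RECOMMENDATION="BLOCK_MERGE"
fi
echo "Quality Gate: $RECOMMENDATION (score: $SCORE/100)"
```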
**Step 4.3: Generate Report**
```bash
ccw cli -p "
PURPOSE: Generate TDD compliance report
TASK: Analyze TDD workflow execution and generate quality report
CONTEXT: @{.workflow/active/{sessionId}/.task/*.json,.workflow/active/{sessionId}/.summaries/*,.workflow/active/{sessionId}/.process/tdd-cycle-report.md}
EXPECTED:
- TDD compliance score (0-100)
- Chain completeness verification
- Test coverage analysis summary
- Quality recommendations
- Red-Green-Refactor cycle validation
- Best practices adherence assessment
RULES: Focus on TDD best practices and workflow adherence. Be specific about violations and improvements.
" --tool gemini --mode analysis --cd project-root > .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
report_content = Generate markdown report (see structure below)
report_path = "{session_dir}/TDD_COMPLIANCE_REPORT.md"
Write(report_path, report_content)
```
**Output**: TDD_COMPLIANCE_REPORT.md
**TodoWrite**: Mark phase 4 completed
**Return to User**:
```
TDD Verification Report - Session: {sessionId}
## Chain Validation
[COMPLETE] Feature 1: TEST-1.1 → IMPL-1.1 → REFACTOR-1.1 (Complete)
[COMPLETE] Feature 2: TEST-2.1 → IMPL-2.1 → REFACTOR-2.1 (Complete)
[INCOMPLETE] Feature 3: TEST-3.1 → IMPL-3.1 (Missing REFACTOR phase)
## Test Execution
All TEST tasks produced failing tests
All IMPL tasks made tests pass
All REFACTOR tasks maintained green tests
## Coverage Metrics
Line Coverage: {percentage}%
Branch Coverage: {percentage}%
Function Coverage: {percentage}%
## Compliance Score: {score}/100
Detailed report: .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
Recommendations:
- Complete missing REFACTOR-3.1 task
- Consider additional edge case tests for Feature 2
- Improve test failure message clarity in Feature 1
**Step 4.4: Display Summary to User**
```bash
echo "=== TDD Verification Complete ==="
echo "Session: {session_id}"
echo "Report: {report_path}"
echo ""
echo "Quality Gate: {recommendation}"
echo "Compliance Score: {score}/100"
echo ""
echo "Chain Validation: {chain_completeness_score}%"
echo "Line Coverage: {line_coverage}%"
echo "Branch Coverage: {branch_coverage}%"
echo ""
echo "Next: Review full report for detailed findings"
```
## TodoWrite Pattern
## TodoWrite Pattern (Optional)
**Note**: As an orchestrator command, TodoWrite tracking is optional and primarily useful for long-running verification processes. For most cases, the 4-phase execution is fast enough that progress tracking adds noise without value.
```javascript
// Initialize (before Phase 1)
TodoWrite({todos: [
{"content": "Identify target session", "status": "in_progress", "activeForm": "Identifying target session"},
{"content": "Validate task chain structure", "status": "pending", "activeForm": "Validating task chain structure"},
{"content": "Analyze test execution", "status": "pending", "activeForm": "Analyzing test execution"},
{"content": "Generate compliance report", "status": "pending", "activeForm": "Generating compliance report"}
]})
// After Phase 1
TodoWrite({todos: [
{"content": "Identify target session", "status": "completed", "activeForm": "Identifying target session"},
{"content": "Validate task chain structure", "status": "in_progress", "activeForm": "Validating task chain structure"},
{"content": "Analyze test execution", "status": "pending", "activeForm": "Analyzing test execution"},
{"content": "Generate compliance report", "status": "pending", "activeForm": "Generating compliance report"}
]})
// Continue pattern for Phase 2, 3, 4...
// Only use TodoWrite for complex multi-session verification
// Skip for single-session verification
```
## Validation Logic
@@ -229,27 +316,24 @@ TodoWrite({todos: [
5. Report incomplete or invalid chains
```
### Compliance Scoring
```
Base Score: 100 points
### Quality Gate Criteria
Deductions:
- Missing TEST task: -30 points per feature
- Missing IMPL task: -30 points per feature
- Missing REFACTOR task: -10 points per feature
- Wrong dependency: -15 points per error
- Wrong agent: -5 points per error
- Wrong tdd_phase: -5 points per error
- Test didn't fail initially: -10 points per feature
- Tests didn't pass after IMPL: -20 points per feature
- Tests broke during REFACTOR: -15 points per feature
| Recommendation | Score Range | Critical Violations | Action |
|----------------|-------------|---------------------|--------|
| **APPROVED** | ≥90 | 0 | Safe to merge |
| **PROCEED_WITH_CAVEATS** | ≥70 | 0 | Can proceed, address minor issues |
| **REQUIRE_FIXES** | ≥50 | Any | Must fix before merge |
| **BLOCK_MERGE** | <50 | Any | Block merge until resolved |
Final Score: Max(0, Base Score - Deductions)
```
**Critical Violations**:
- Missing TEST or IMPL task for any feature
- Tests didn't fail initially (Red phase violation)
- Tests didn't pass after IMPL (Green phase violation)
- Tests broke during REFACTOR (Refactor phase violation)
## Output Files
```
.workflow/active/{session-id}/
.workflow/active/WFS-{session-id}/
├── TDD_COMPLIANCE_REPORT.md # Comprehensive compliance report ⭐
└── .process/
├── test-results.json # From tdd-coverage-analysis
@@ -262,14 +346,14 @@ Final Score: Max(0, Base Score - Deductions)
### Session Discovery Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| No active session | No WFS-* directories | Provide session-id explicitly |
| Multiple active sessions | Multiple WFS-* directories | Provide session-id explicitly |
| No active session | No WFS-* directories | Provide --session explicitly |
| Multiple active sessions | Multiple WFS-* directories | Provide --session explicitly |
| Session not found | Invalid session-id | Check available sessions |
### Validation Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Task files missing | Incomplete planning | Run tdd-plan first |
| Task files missing | Incomplete planning | Run /workflow:tdd-plan first |
| Invalid JSON | Corrupted task files | Regenerate tasks |
| Missing summaries | Tasks not executed | Execute tasks before verify |
@@ -278,13 +362,13 @@ Final Score: Max(0, Base Score - Deductions)
|-------|-------|------------|
| Coverage tool missing | No test framework | Configure testing first |
| Tests fail to run | Code errors | Fix errors before verify |
| Gemini analysis fails | Token limit / API error | Retry or reduce context |
| Sub-command fails | tdd-coverage-analysis error | Check sub-command logs |
## Integration & Usage
### Command Chain
- **Called After**: `/workflow:execute` (when TDD tasks completed)
- **Calls**: `/workflow:tools:tdd-coverage-analysis`, Gemini CLI
- **Calls**: `/workflow:tools:tdd-coverage-analysis`
- **Related**: `/workflow:tdd-plan`, `/workflow:status`
### Basic Usage
@@ -293,7 +377,7 @@ Final Score: Max(0, Base Score - Deductions)
/workflow:tdd-verify
# Specify session
/workflow:tdd-verify WFS-auth
/workflow:tdd-verify --session WFS-auth
```
### When to Use
@@ -308,61 +392,125 @@ Final Score: Max(0, Base Score - Deductions)
# TDD Compliance Report - {Session ID}
**Generated**: {timestamp}
**Session**: {sessionId}
**Session**: WFS-{sessionId}
**Workflow Type**: TDD
---
## Executive Summary
Overall Compliance Score: {score}/100
Status: {EXCELLENT | GOOD | NEEDS IMPROVEMENT | FAILED}
### Quality Gate Decision
| Metric | Value | Status |
|--------|-------|--------|
| Compliance Score | {score}/100 | {status_emoji} |
| Chain Completeness | {percentage}% | {status} |
| Line Coverage | {percentage}% | {status} |
| Branch Coverage | {percentage}% | {status} |
| Function Coverage | {percentage}% | {status} |
### Recommendation
**{RECOMMENDATION}**
**Decision Rationale**:
{brief explanation based on score and violations}
**Quality Gate Criteria**:
- **APPROVED**: Score ≥90, no critical violations
- **PROCEED_WITH_CAVEATS**: Score ≥70, no critical violations
- **REQUIRE_FIXES**: Score ≥50 or critical violations exist
- **BLOCK_MERGE**: Score <50
---
## Chain Analysis
### Feature 1: {Feature Name}
**Status**: Complete
**Status**: ✅ Complete
**Chain**: TEST-1.1 → IMPL-1.1 → REFACTOR-1.1
- **Red Phase**: Test created and failed with clear message
- **Green Phase**: Minimal implementation made test pass
- **Refactor Phase**: Code improved, tests remained green
| Phase | Task | Status | Details |
|-------|------|--------|---------|
| Red | TEST-1.1 | ✅ Pass | Test created and failed with clear message |
| Green | IMPL-1.1 | ✅ Pass | Minimal implementation made test pass |
| Refactor | REFACTOR-1.1 | ✅ Pass | Code improved, tests remained green |
### Feature 2: {Feature Name}
**Status**: Incomplete
**Status**: ⚠️ Incomplete
**Chain**: TEST-2.1 → IMPL-2.1 (Missing REFACTOR-2.1)
- **Red Phase**: Test created and failed
- **Green Phase**: Implementation seems over-engineered
- **Refactor Phase**: Missing
| Phase | Task | Status | Details |
|-------|------|--------|---------|
| Red | TEST-2.1 | ✅ Pass | Test created and failed |
| Green | IMPL-2.1 | ⚠️ Warning | Implementation seems over-engineered |
| Refactor | REFACTOR-2.1 | ❌ Missing | Task not completed |
**Issues**:
- REFACTOR-2.1 task not completed
- IMPL-2.1 implementation exceeded minimal scope
- REFACTOR-2.1 task not completed (-10 points)
- IMPL-2.1 implementation exceeded minimal scope (-10 points)
[Repeat for all features]
### Chain Validation Summary
| Metric | Value |
|--------|-------|
| Total Features | {count} |
| Complete Chains | {count} ({percent}%) |
| Incomplete Chains | {count} |
| Missing TEST | {count} |
| Missing IMPL | {count} |
| Missing REFACTOR | {count} |
| Dependency Errors | {count} |
| Meta Field Errors | {count} |
---
## Test Coverage Analysis
### Coverage Metrics
- Line Coverage: {percentage}% {status}
- Branch Coverage: {percentage}% {status}
- Function Coverage: {percentage}% {status}
| Metric | Coverage | Target | Status |
|--------|----------|--------|--------|
| Line Coverage | {percentage}% | ≥80% | {status} |
| Branch Coverage | {percentage}% | ≥70% | {status} |
| Function Coverage | {percentage}% | ≥80% | {status} |
### Coverage Gaps
- {file}:{lines} - Uncovered error handling
- {file}:{lines} - Uncovered edge case
| File | Lines | Issue | Priority |
|------|-------|-------|----------|
| src/auth/service.ts | 45-52 | Uncovered error handling | HIGH |
| src/utils/parser.ts | 78-85 | Uncovered edge case | MEDIUM |
---
## TDD Cycle Validation
### Red Phase (Write Failing Test)
- {N}/{total} features had failing tests initially
- Feature 3: No evidence of initial test failure
- {N}/{total} features had failing tests initially ({percent}%)
- ✅ Compliant features: {list}
- ❌ Non-compliant features: {list}
**Violations**:
- Feature 3: No evidence of initial test failure (-10 points)
### Green Phase (Make Test Pass)
- {N}/{total} implementations made tests pass
- All implementations minimal and focused
- {N}/{total} implementations made tests pass ({percent}%)
- ✅ Compliant features: {list}
- ❌ Non-compliant features: {list}
**Violations**:
- Feature 2: Implementation over-engineered (-10 points)
### Refactor Phase (Improve Quality)
- {N}/{total} features completed refactoring
- Feature 2, 4: Refactoring step skipped
- {N}/{total} features completed refactoring ({percent}%)
- ✅ Compliant features: {list}
- ❌ Non-compliant features: {list}
**Violations**:
- Feature 2, 4: Refactoring step skipped (-20 points total)
---
## Best Practices Assessment
@@ -377,24 +525,61 @@ Status: {EXCELLENT | GOOD | NEEDS IMPROVEMENT | FAILED}
- Missing refactoring steps
- Test failure messages could be more descriptive
---
## Detailed Findings by Severity
### Critical Issues ({count})
{List of critical issues with impact and remediation}
### High Priority Issues ({count})
{List of high priority issues with impact and remediation}
### Medium Priority Issues ({count})
{List of medium priority issues with impact and remediation}
### Low Priority Issues ({count})
{List of low priority issues with impact and remediation}
---
## Recommendations
### High Priority
### Required Fixes (Before Merge)
1. Complete missing REFACTOR tasks (Features 2, 4)
2. Verify initial test failures for Feature 3
3. Simplify over-engineered implementations
3. Fix tests that broke during refactoring
### Medium Priority
1. Add edge case tests for Features 1, 3
2. Improve test failure message clarity
3. Increase branch coverage to >85%
### Recommended Improvements
1. Simplify over-engineered implementations
2. Add edge case tests for Features 1, 3
3. Improve test failure message clarity
4. Increase branch coverage to >85%
### Low Priority
### Optional Enhancements
1. Add more descriptive test names
2. Consider parameterized tests for similar scenarios
3. Document TDD process learnings
## Conclusion
{Summary of compliance status and next steps}
---
## Metrics Summary
| Metric | Value |
|--------|-------|
| Total Features | {count} |
| Complete Chains | {count} ({percent}%) |
| Compliance Score | {score}/100 |
| Critical Issues | {count} |
| High Issues | {count} |
| Medium Issues | {count} |
| Low Issues | {count} |
| Line Coverage | {percent}% |
| Branch Coverage | {percent}% |
| Function Coverage | {percent}% |
---
**Report End**
```

View File

@@ -2,7 +2,7 @@
name: test-cycle-execute
description: Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.
argument-hint: "[--resume-session=\"session-id\"] [--max-iterations=N]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*)
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*), Task(*)
---
# Workflow Test-Cycle-Execute Command
@@ -491,6 +491,10 @@ The orchestrator automatically creates git commits at key checkpoints to enable
**Note**: Final session completion creates an additional commit with the full summary.
## Post-Completion Expansion
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`
## Best Practices
1. **Default Settings Work**: 10 iterations sufficient for most cases

File diff suppressed because it is too large

View File

@@ -1,529 +0,0 @@
---
name: test-gen
description: Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks
argument-hint: "source-session-id"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
---
# Workflow Test Generation Command (/workflow:test-gen)
## Coordinator Role
**This command is a pure orchestrator**: Creates an independent test-fix workflow session for validating a completed implementation. It reuses the standard planning toolchain with automatic cross-session context gathering.
**Core Principles**:
- **Session Isolation**: Creates new `WFS-test-[source]` session to keep verification separate from implementation
- **Context-First**: Prioritizes gathering code changes and summaries from source session
- **Format Reuse**: Creates standard `IMPL-*.json` task, using `meta.type: "test-fix"` for agent assignment
- **Parameter Simplification**: Tools auto-detect test session type via metadata, no manual cross-session parameters needed
- **Semantic CLI Selection**: CLI tool usage is determined by user's task description (e.g., "use Codex for fixes")
**Task Attachment Model**:
- SlashCommand dispatch **expands workflow** by attaching sub-tasks to current TodoWrite
- When a sub-command is executed (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
- Orchestrator **executes these attached tasks** sequentially
- After completion, attached tasks are **collapsed** back to high-level phase summary
- This is **task expansion**, not external delegation
**Auto-Continue Mechanism**:
- TodoList tracks current phase status and dynamically manages task attachment/collapse
- When each phase finishes executing, automatically execute next pending phase
- All phases run autonomously without user interaction
- **⚠️ CONTINUOUS EXECUTION** - Do not stop until all phases complete
**Execution Flow**:
1. Initialize TodoWrite → Create test session → Parse session ID
2. Gather cross-session context (automatic) → Parse context path
3. Analyze implementation with concept-enhanced → Parse ANALYSIS_RESULTS.md
4. Generate test task from analysis → Return summary
**Command Scope**: This command ONLY prepares test workflow artifacts. It does NOT execute tests or implementation. Task execution requires separate user action.
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 test session creation
2. **No Preliminary Analysis**: Do not read files or analyze before Phase 1
3. **Parse Every Output**: Extract required data from each phase for next phase
4. **Sequential Execution**: Each phase depends on previous phase's output
5. **Complete All Phases**: Do not return to user until Phase 5 completes (summary returned)
6. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
7. **Automatic Detection**: context-gather auto-detects test session and gathers source session context
8. **Semantic CLI Selection**: CLI tool usage determined from user's task description, passed to Phase 4
9. **Command Boundary**: This command ends at Phase 5 summary. Test execution is NOT part of this command.
10. **Task Attachment Model**: SlashCommand dispatch **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
11. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
## 5-Phase Execution
### Phase 1: Create Test Session
**Step 1.0: Load Source Session Intent** - Preserve user's original task description for semantic CLI selection
```javascript
// Read source session metadata to get original task description
Read(".workflow/active/[sourceSessionId]/workflow-session.json")
// OR if context-package exists:
Read(".workflow/active/[sourceSessionId]/.process/context-package.json")
// Extract: metadata.task_description or project/description field
// This preserves user's CLI tool preferences (e.g., "use Codex for fixes")
```
**Step 1.1: Execute** - Create new test workflow session with preserved intent
```javascript
// Include original task description to enable semantic CLI selection
SlashCommand(command="/workflow:session:start --new \"Test validation for [sourceSessionId]: [originalTaskDescription]\"")
```
**Input**:
- `sourceSessionId` from user argument (e.g., `WFS-user-auth`)
- `originalTaskDescription` from source session metadata (preserves CLI tool preferences)
**Expected Behavior**:
- Creates new session with pattern `WFS-test-[source-slug]` (e.g., `WFS-test-user-auth`)
- Writes metadata to `workflow-session.json`:
- `workflow_type: "test_session"`
- `source_session_id: "[sourceSessionId]"`
- Description includes original user intent for semantic CLI selection
- Returns new session ID for subsequent phases
**Parse Output**:
- Extract: new test session ID (store as `testSessionId`)
- Pattern: `WFS-test-[slug]`
**Validation**:
- Source session `.workflow/[sourceSessionId]/` exists
- Source session has completed IMPL tasks (`.summaries/IMPL-*-summary.md`)
- New test session directory created
- Metadata includes `workflow_type` and `source_session_id`
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress
---
### Phase 2: Gather Test Context
**Step 2.1: Execute** - Gather test coverage context from source session
```javascript
SlashCommand(command="/workflow:tools:test-context-gather --session [testSessionId]")
```
**Input**: `testSessionId` from Phase 1 (e.g., `WFS-test-user-auth`)
**Expected Behavior**:
- Load source session implementation context and summaries
- Analyze test coverage using MCP tools (find existing tests)
- Identify files requiring tests (coverage gaps)
- Detect test framework and conventions
- Generate `test-context-package.json`
**Parse Output**:
- Extract: test context package path (store as `testContextPath`)
- Pattern: `.workflow/[testSessionId]/.process/test-context-package.json`
**Validation**:
- Test context package created
- Contains source session summaries
- Includes coverage gap analysis
- Test framework detected
- Test conventions documented
<!-- TodoWrite: When test-context-gather executed, INSERT 3 test-context-gather tasks -->
**TodoWrite Update (Phase 2 SlashCommand executed - tasks attached)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Phase 2.1: Load source session summaries (test-context-gather)", "status": "in_progress", "activeForm": "Loading source session summaries"},
{"content": "Phase 2.2: Analyze test coverage with MCP tools (test-context-gather)", "status": "pending", "activeForm": "Analyzing test coverage"},
{"content": "Phase 2.3: Identify coverage gaps and framework (test-context-gather)", "status": "pending", "activeForm": "Identifying coverage gaps"},
{"content": "Analyze test requirements with Gemini", "status": "pending", "activeForm": "Analyzing test requirements"},
{"content": "Generate test generation and execution tasks", "status": "pending", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "pending", "activeForm": "Returning workflow summary"}
]
```
**Note**: SlashCommand dispatch **attaches** test-context-gather's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached → **Execute Phase 2.1-2.3** sequentially
<!-- TodoWrite: After Phase 2 tasks complete, REMOVE Phase 2.1-2.3, restore to orchestrator view -->
**TodoWrite Update (Phase 2 completed - tasks collapsed)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "completed", "activeForm": "Gathering test coverage context"},
{"content": "Analyze test requirements with Gemini", "status": "pending", "activeForm": "Analyzing test requirements"},
{"content": "Generate test generation and execution tasks", "status": "pending", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "pending", "activeForm": "Returning workflow summary"}
]
```
**Note**: Phase 2 tasks completed and collapsed to summary.
---
### Phase 3: Test Generation Analysis
**Step 3.1: Execute** - Analyze test requirements with Gemini
```javascript
SlashCommand(command="/workflow:tools:test-concept-enhanced --session [testSessionId] --context [testContextPath]")
```
**Input**:
- `testSessionId` from Phase 1
- `testContextPath` from Phase 2
**Expected Behavior**:
- Use Gemini to analyze coverage gaps and implementation context
- Study existing test patterns and conventions
- Generate test requirements for each missing test file
- Design test generation strategy
- Generate `TEST_ANALYSIS_RESULTS.md`
**Parse Output**:
- Verify `.workflow/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md` created
- Contains test requirements and generation strategy
- Lists test files to create with specifications
**Validation**:
- TEST_ANALYSIS_RESULTS.md exists with complete sections:
- Coverage Assessment
- Test Framework & Conventions
- Test Requirements by File
- Test Generation Strategy
- Implementation Targets (test files to create)
- Success Criteria
<!-- TodoWrite: When test-concept-enhanced executed, INSERT 3 concept-enhanced tasks -->
**TodoWrite Update (Phase 3 SlashCommand executed - tasks attached)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "completed", "activeForm": "Gathering test coverage context"},
{"content": "Phase 3.1: Analyze coverage gaps with Gemini (test-concept-enhanced)", "status": "in_progress", "activeForm": "Analyzing coverage gaps"},
{"content": "Phase 3.2: Study existing test patterns (test-concept-enhanced)", "status": "pending", "activeForm": "Studying test patterns"},
{"content": "Phase 3.3: Generate test generation strategy (test-concept-enhanced)", "status": "pending", "activeForm": "Generating test strategy"},
{"content": "Generate test generation and execution tasks", "status": "pending", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "pending", "activeForm": "Returning workflow summary"}
]
```
**Note**: SlashCommand dispatch **attaches** test-concept-enhanced's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
<!-- TodoWrite: After Phase 3 tasks complete, REMOVE Phase 3.1-3.3, restore to orchestrator view -->
**TodoWrite Update (Phase 3 completed - tasks collapsed)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "completed", "activeForm": "Gathering test coverage context"},
{"content": "Analyze test requirements with Gemini", "status": "completed", "activeForm": "Analyzing test requirements"},
{"content": "Generate test generation and execution tasks", "status": "pending", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "pending", "activeForm": "Returning workflow summary"}
]
```
**Note**: Phase 3 tasks completed and collapsed to summary.
---
### Phase 4: Generate Test Tasks
**Step 4.1: Execute** - Generate test task JSON files and planning documents
```javascript
SlashCommand(command="/workflow:tools:test-task-generate --session [testSessionId]")
```
**Input**:
- `testSessionId` from Phase 1
**Note**: CLI tool usage for fixes is determined semantically from the user's task description (e.g., "use Codex for automated fixes").
**Expected Behavior**:
- Parse TEST_ANALYSIS_RESULTS.md from Phase 3
- Extract test requirements and generation strategy
- Generate **TWO task JSON files**:
- **IMPL-001.json**: Test Generation task (calls @code-developer)
- **IMPL-002.json**: Test Execution and Fix Cycle task (calls @test-fix-agent)
- Generate IMPL_PLAN.md with test generation and execution strategy
- Generate TODO_LIST.md with both tasks
**Parse Output**:
- Verify `.workflow/[testSessionId]/.task/IMPL-001.json` exists (test generation)
- Verify `.workflow/[testSessionId]/.task/IMPL-002.json` exists (test execution & fix)
- Verify `.workflow/[testSessionId]/IMPL_PLAN.md` created
- Verify `.workflow/[testSessionId]/TODO_LIST.md` created
**Validation - IMPL-001.json (Test Generation)**:
- Task ID: `IMPL-001`
- `meta.type: "test-gen"`
- `meta.agent: "@code-developer"`
- `context.requirements`: Generate tests based on TEST_ANALYSIS_RESULTS.md
- `flow_control.pre_analysis`: Load TEST_ANALYSIS_RESULTS.md and test context
- `flow_control.implementation_approach`: Test generation steps
- `flow_control.target_files`: Test files to create from analysis section 5
**Validation - IMPL-002.json (Test Execution & Fix)**:
- Task ID: `IMPL-002`
- `meta.type: "test-fix"`
- `meta.agent: "@test-fix-agent"`
- `context.depends_on: ["IMPL-001"]`
- `context.requirements`: Execute and fix tests
- `flow_control.implementation_approach.test_fix_cycle`: Complete cycle specification
- **Cycle pattern**: test → gemini_diagnose → fix (agent or CLI based on `command` field) → retest
- **Tools configuration**: Gemini for analysis with bug-fix template, agent or CLI for fixes
- **Exit conditions**: Success (all pass) or failure (max iterations)
- `flow_control.implementation_approach.modification_points`: 3-phase execution flow
- Phase 1: Initial test execution
- Phase 2: Iterative Gemini diagnosis + fixes (agent or CLI based on step's `command` field)
- Phase 3: Final validation and certification
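To make the fix cycle concrete, a hedged sketch of how the `test_fix_cycle` block might be shaped (step and field names are illustrative; see `/workflow:tools:test-task-generate` for the authoritative schema):
```javascript
// Illustrative sketch of IMPL-002's test_fix_cycle; not the authoritative schema
const testFixCycle = {
  max_iterations: 5,
  steps: [
    { step: "run_tests", command: "npm test" },        // command value is an assumption
    { step: "gemini_diagnose", template: "bug-fix" },  // Gemini analysis with bug-fix template
    { step: "apply_fix" },                             // agent fix when no `command` field is present
    { step: "retest", command: "npm test" }
  ],
  exit_conditions: {
    success: "all tests pass",
    failure: "max_iterations reached without a green run"
  }
};
```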
<!-- TodoWrite: When test-task-generate executed, INSERT 3 test-task-generate tasks -->
**TodoWrite Update (Phase 4 SlashCommand executed - tasks attached)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "completed", "activeForm": "Gathering test coverage context"},
{"content": "Analyze test requirements with Gemini", "status": "completed", "activeForm": "Analyzing test requirements"},
{"content": "Phase 4.1: Parse TEST_ANALYSIS_RESULTS.md (test-task-generate)", "status": "in_progress", "activeForm": "Parsing test analysis"},
{"content": "Phase 4.2: Generate IMPL-001.json and IMPL-002.json (test-task-generate)", "status": "pending", "activeForm": "Generating task JSONs"},
{"content": "Phase 4.3: Generate IMPL_PLAN.md and TODO_LIST.md (test-task-generate)", "status": "pending", "activeForm": "Generating plan documents"},
{"content": "Return workflow summary", "status": "pending", "activeForm": "Returning workflow summary"}
]
```
**Note**: SlashCommand dispatch **attaches** test-task-generate's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached → **Execute Phase 4.1-4.3** sequentially
<!-- TodoWrite: After Phase 4 tasks complete, REMOVE Phase 4.1-4.3, restore to orchestrator view -->
**TodoWrite Update (Phase 4 completed - tasks collapsed)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "completed", "activeForm": "Gathering test coverage context"},
{"content": "Analyze test requirements with Gemini", "status": "completed", "activeForm": "Analyzing test requirements"},
{"content": "Generate test generation and execution tasks", "status": "completed", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "in_progress", "activeForm": "Returning workflow summary"}
]
```
**Note**: Phase 4 tasks completed and collapsed to summary.
---
### Phase 5: Return Summary (Command Ends Here)
**Important**: This is the final phase of `/workflow:test-gen`. The command completes and returns control to the user. No automatic execution occurs.
**Return to User**:
```
Test workflow preparation complete!
Source Session: [sourceSessionId]
Test Session: [testSessionId]
Artifacts Created:
- Test context analysis
- Test generation strategy
- Task definitions (IMPL-001, IMPL-002)
- Implementation plan
Test Framework: [detected framework]
Test Files to Generate: [count]
Fix Mode: [Agent|CLI] (based on `command` field in implementation_approach steps)
Review Generated Artifacts:
- Test plan: .workflow/[testSessionId]/IMPL_PLAN.md
- Task list: .workflow/[testSessionId]/TODO_LIST.md
- Analysis: .workflow/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md
Ready for execution. Use appropriate workflow commands to proceed.
```
**TodoWrite**: Mark phase 5 completed
**Command Boundary**: After this phase, the command terminates and returns to user prompt.
---
## TodoWrite Pattern
**Core Concept**: Dynamic task attachment and collapse for the test-gen workflow, which gathers cross-session context and produces a test generation strategy.
### Key Principles
1. **Task Attachment** (when SlashCommand executed):
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
- Example: `/workflow:tools:test-context-gather` attaches 3 sub-tasks (Phase 2.1, 2.2, 2.3)
- First attached task marked as `in_progress`, others as `pending`
- Orchestrator **executes** these attached tasks sequentially
2. **Task Collapse** (after sub-tasks complete):
- Remove detailed sub-tasks from TodoWrite
- **Collapse** to high-level phase summary
- Example: Phase 2.1-2.3 collapse to "Gather test coverage context: completed"
- Maintains clean orchestrator-level view
3. **Continuous Execution**:
- After collapse, automatically proceed to next pending phase
- No user intervention required between phases
- TodoWrite dynamically reflects current execution state
**Lifecycle Summary**: Initial pending tasks → Phase executed (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins → Repeat until all phases complete.
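A minimal sketch of the attach/collapse mechanics, assuming the TodoWrite list is a plain array of `{content, status, activeForm}` items (helper names are illustrative, not an existing API):
```javascript
// Illustrative helpers; not part of any real TodoWrite API
function attachSubTasks(todos, phaseIndex, subTasks) {
  // Replace the single phase entry with its detailed sub-tasks;
  // the first attached task becomes in_progress, the rest stay pending.
  const detailed = subTasks.map((t, i) => ({ ...t, status: i === 0 ? "in_progress" : "pending" }));
  return [...todos.slice(0, phaseIndex), ...detailed, ...todos.slice(phaseIndex + 1)];
}

function collapseSubTasks(todos, phaseIndex, subTaskCount, summary) {
  // Remove the completed sub-tasks and restore a single completed summary entry.
  return [
    ...todos.slice(0, phaseIndex),
    { ...summary, status: "completed" },
    ...todos.slice(phaseIndex + subTaskCount)
  ];
}
```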
### Test-Gen Specific Features
- **Phase 2**: Cross-session context gathering from source implementation session
- **Phase 3**: Test requirements analysis with Gemini for generation strategy
- **Phase 4**: Dual-task generation (IMPL-001 for test generation, IMPL-002 for test execution)
- **Fix Mode Configuration**: CLI tool usage is determined semantically from the user's task description
**Note**: See individual Phase descriptions (Phase 2, 3, 4) for detailed TodoWrite Update examples with full JSON structures.
## Execution Flow Diagram
```
Test-Gen Workflow Orchestrator
├─ Phase 1: Create Test Session
│ └─ /workflow:session:start --new
│ └─ Returns: testSessionId (WFS-test-[source])
├─ Phase 2: Gather Test Context ← ATTACHED (3 tasks)
│ └─ /workflow:tools:test-context-gather
│ ├─ Phase 2.1: Load source session summaries
│ ├─ Phase 2.2: Analyze test coverage with MCP tools
│ └─ Phase 2.3: Identify coverage gaps and framework
│ └─ Returns: test-context-package.json ← COLLAPSED
├─ Phase 3: Test Generation Analysis ← ATTACHED (3 tasks)
│ └─ /workflow:tools:test-concept-enhanced
│ ├─ Phase 3.1: Analyze coverage gaps with Gemini
│ ├─ Phase 3.2: Study existing test patterns
│ └─ Phase 3.3: Generate test generation strategy
│ └─ Returns: TEST_ANALYSIS_RESULTS.md ← COLLAPSED
├─ Phase 4: Generate Test Tasks ← ATTACHED (3 tasks)
│ └─ /workflow:tools:test-task-generate
│ ├─ Phase 4.1: Parse TEST_ANALYSIS_RESULTS.md
│ ├─ Phase 4.2: Generate IMPL-001.json and IMPL-002.json
│ └─ Phase 4.3: Generate IMPL_PLAN.md and TODO_LIST.md
│ └─ Returns: Task JSONs and plans ← COLLAPSED
└─ Phase 5: Return Summary
└─ Command ends, control returns to user
Artifacts Created:
├── .workflow/active/WFS-test-[session]/
│ ├── workflow-session.json
│ ├── IMPL_PLAN.md
│ ├── TODO_LIST.md
│ ├── .task/
│ │ ├── IMPL-001.json (test generation task)
│ │ └── IMPL-002.json (test execution task)
│ └── .process/
│ ├── test-context-package.json
│ └── TEST_ANALYSIS_RESULTS.md
Key Points:
• ← ATTACHED: SlashCommand attaches sub-tasks to orchestrator TodoWrite
• ← COLLAPSED: Sub-tasks executed and collapsed to phase summary
```
## Session Metadata
Test session includes `workflow_type: "test_session"` and `source_session_id` for automatic context gathering.
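A minimal sketch of the relevant metadata (only `workflow_type` and `source_session_id` are documented here; the remaining fields are assumptions for illustration):
```javascript
// Only workflow_type and source_session_id are specified above; other fields are illustrative
const workflowSession = {
  session_id: "WFS-test-user-auth",   // assumed naming: WFS-test-[source]
  workflow_type: "test_session",
  source_session_id: "WFS-user-auth",
  created_at: "2025-01-30T10:00:00Z"  // assumed field
};
```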
## Task Output
Generates two task definition files:
- **IMPL-001.json**: Test generation task specification
- Agent: @code-developer
- Input: TEST_ANALYSIS_RESULTS.md
- Output: Test files based on analysis
- **IMPL-002.json**: Test execution and fix cycle specification
- Agent: @test-fix-agent
- Dependency: IMPL-001 must complete first
- Max iterations: 5
- Fix mode: Agent or CLI (based on `command` field in implementation_approach)
See `/workflow:tools:test-task-generate` for complete task JSON schemas.
## Error Handling
| Phase | Error | Action |
|-------|-------|--------|
| 1 | Source session not found | Return error with source session ID |
| 1 | No completed IMPL tasks | Return error, source incomplete |
| 2 | Context gathering failed | Return error, check source artifacts |
| 3 | Analysis failed | Return error, check context package |
| 4 | Task generation failed | Retry once, then error with details |
## Output Files
Created in `.workflow/active/WFS-test-[session]/`:
- `workflow-session.json` - Session metadata
- `.process/test-context-package.json` - Coverage analysis
- `.process/TEST_ANALYSIS_RESULTS.md` - Test requirements
- `.task/IMPL-001.json` - Test generation task
- `.task/IMPL-002.json` - Test execution & fix task
- `IMPL_PLAN.md` - Test plan
- `TODO_LIST.md` - Task checklist
## Task Specifications
**IMPL-001.json Structure**:
- `meta.type: "test-gen"`
- `meta.agent: "@code-developer"`
- `context.requirements`: Generate tests based on TEST_ANALYSIS_RESULTS.md
- `flow_control.target_files`: Test files to create
- `flow_control.implementation_approach`: Test generation strategy
**IMPL-002.json Structure**:
- `meta.type: "test-fix"`
- `meta.agent: "@test-fix-agent"`
- `context.depends_on: ["IMPL-001"]`
- `flow_control.implementation_approach.test_fix_cycle`: Complete cycle specification
- Gemini diagnosis template
- Fix application mode (agent or CLI based on `command` field)
- Max iterations: 5
- `flow_control.implementation_approach.modification_points`: 3-phase flow
See `/workflow:tools:test-task-generate` for complete JSON schemas.
## Best Practices
1. **Prerequisites**: Ensure source session has completed IMPL tasks with summaries
2. **Clean State**: Commit implementation changes before running test-gen
3. **Review Artifacts**: Check generated IMPL_PLAN.md and TODO_LIST.md before proceeding
4. **Understand Scope**: This command only prepares artifacts; it does not execute tests
## Related Commands
**Prerequisite Commands**:
- `/workflow:plan` or `/workflow:execute` - Complete implementation session that needs test validation
**Executed by This Command** (4 phases):
- `/workflow:session:start` - Phase 1: Create independent test workflow session
- `/workflow:tools:test-context-gather` - Phase 2: Analyze test coverage and gather source session context
- `/workflow:tools:test-concept-enhanced` - Phase 3: Generate test requirements and strategy using Gemini
- `/workflow:tools:test-task-generate` - Phase 4: Generate test task JSONs (CLI tool usage determined semantically)
**Follow-up Commands**:
- `/workflow:status` - Review generated test tasks
- `/workflow:test-cycle-execute` - Execute test generation and fix cycles
- `/workflow:execute` - Execute generated test tasks

View File

@@ -0,0 +1,391 @@
---
name: code-validation-gate
description: Validate AI-generated code for common errors (imports, variables, types) before test execution
argument-hint: "--session WFS-test-session-id [--fix] [--strict]"
examples:
- /workflow:tools:code-validation-gate --session WFS-test-auth
- /workflow:tools:code-validation-gate --session WFS-test-auth --fix
- /workflow:tools:code-validation-gate --session WFS-test-auth --strict
---
# Code Validation Gate Command
## Overview
Pre-test validation gate that checks AI-generated code for common errors before test execution. This prevents wasted test cycles on code with fundamental issues like import errors, variable conflicts, and type mismatches.
## Core Philosophy
- **Fail Fast**: Catch fundamental errors before expensive test execution
- **AI-Aware**: Specifically targets common AI code generation mistakes
- **Auto-Remediation**: Attempt safe fixes before failing
- **Clear Feedback**: Provide actionable fix suggestions for manual intervention
## Target Error Categories
### L0.1: Compilation Errors
- TypeScript compilation failures
- Syntax errors
- Module resolution failures
### L0.2: Import Errors
- Unresolved module imports (hallucinated packages)
- Circular dependencies
- Duplicate imports
- Unused imports
### L0.3: Variable Errors
- Variable redeclaration
- Scope conflicts (shadowing)
- Undefined variable usage
- Unused variables
### L0.4: Type Errors (TypeScript)
- Type mismatches
- Missing type definitions
- Excessive `any` usage
- Implicit `any` types
### L0.5: AI-Specific Patterns
- Placeholder code (`// TODO: implement`)
- Hallucinated package imports
- Mock code in production files
- Inconsistent naming patterns
## Execution Process
```
Input Parsing:
├─ Parse flags: --session (required), --fix, --strict
└─ Load test-quality-config.json
Phase 1: Context Loading
├─ Load session metadata
├─ Identify target files (from IMPL-001 output or context-package)
└─ Detect project configuration (tsconfig, eslint, etc.)
Phase 2: Validation Execution
├─ L0.1: Run TypeScript compilation check
├─ L0.2: Run import validation
├─ L0.3: Run variable validation
├─ L0.4: Run type validation
└─ L0.5: Run AI-specific checks
Phase 3: Result Analysis
├─ Aggregate all findings by severity
├─ Calculate pass/fail status
└─ Generate fix suggestions
Phase 4: Auto-Fix (if --fix enabled)
├─ Apply safe auto-fixes (imports, formatting)
├─ Re-run validation
└─ Report remaining issues
Phase 5: Gate Decision
├─ PASS: Proceed to IMPL-001.5
├─ SOFT_FAIL: Auto-fix applied, needs re-validation
└─ HARD_FAIL: Block with detailed report
```
## Execution Lifecycle
### Phase 1: Context Loading
**Load session and identify validation targets.**
```javascript
// Load session metadata
Read(".workflow/active/{session_id}/workflow-session.json")
// Load context package for target files
Read(".workflow/active/{session_id}/.process/context-package.json")
// OR
Read(".workflow/active/{session_id}/.process/test-context-package.json")
// Identify files to validate:
// 1. Source files from context.implementation_files
// 2. Test files from IMPL-001 output (if exists)
// 3. All modified files since session start
```
**Target File Discovery**:
- Source files: `context.focus_paths` from context-package
- Generated tests: `.workflow/active/{session_id}/.task/IMPL-001-output/`
- All TypeScript/JavaScript in target directories
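A hedged sketch of how the discovery sources might be combined into one validation target list (directory layout and helper names are assumptions):
```javascript
// Illustrative aggregation of validation targets; field and directory names are assumptions
const fs = require("fs");
const path = require("path");

function discoverTargets(sessionDir) {
  const ctxPath = path.join(sessionDir, ".process", "context-package.json");
  const ctx = fs.existsSync(ctxPath) ? JSON.parse(fs.readFileSync(ctxPath, "utf8")) : {};
  const sourceFiles = ctx.focus_paths ?? [];

  // Generated tests from IMPL-001 output, if that directory exists
  const implOutputDir = path.join(sessionDir, ".task", "IMPL-001-output");
  const generatedTests = fs.existsSync(implOutputDir)
    ? fs.readdirSync(implOutputDir, { recursive: true })
        .map((f) => path.join(implOutputDir, String(f)))
        .filter((f) => /\.(ts|tsx|js|jsx)$/.test(f) && fs.statSync(f).isFile())
    : [];

  // Deduplicate while preserving order
  return [...new Set([...sourceFiles, ...generatedTests])];
}
```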
### Phase 2: Validation Execution
**Execute validation checks in order of dependency.**
#### L0.1: TypeScript Compilation
```bash
# Primary check - catches most fundamental errors
npx tsc --noEmit --skipLibCheck --project tsconfig.json 2>&1
# Parse output for errors
# Critical: Any compilation error blocks further validation
```
**Error Patterns**:
```
error TS2307: Cannot find module 'xxx'
error TS2451: Cannot redeclare block-scoped variable 'xxx'
error TS2322: Type 'xxx' is not assignable to type 'yyy'
```
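A minimal sketch of turning `tsc` output into structured findings; the regular expression targets the standard `file(line,col): error TSxxxx: message` format, and the surrounding handling is an assumption rather than the gate's actual implementation:
```javascript
// Illustrative parser for `tsc --noEmit` diagnostics; not the gate's actual implementation
const { execSync } = require("child_process");

function runCompilationCheck() {
  let output = "";
  try {
    execSync("npx tsc --noEmit --skipLibCheck --project tsconfig.json", {
      encoding: "utf8",
      stdio: ["ignore", "pipe", "pipe"]
    });
  } catch (err) {
    // tsc exits non-zero when diagnostics exist; collect them from stdout/stderr
    output = (err.stdout ?? "") + (err.stderr ?? "");
  }
  const pattern = /^(.+)\((\d+),(\d+)\): error (TS\d+): (.+)$/gm;
  return [...output.matchAll(pattern)].map(([, file, line, , code, message]) => ({
    category: "compilation",
    file,
    line: Number(line),
    message: `${code}: ${message}`
  }));
}
```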
#### L0.2: Import Validation
```bash
# Check for circular dependencies
npx madge --circular --extensions ts,tsx,js,jsx {target_dirs}
# ESLint import rules
npx eslint --rule 'import/no-duplicates: error' --rule 'import/no-unresolved: error' {files}
```
**Hallucinated Package Check**:
```javascript
// Extract all imports from files
// Verify each package exists in package.json or node_modules
// Flag any unresolvable imports as "hallucinated"
```
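A hedged sketch of that check, assuming a Node project with a `package.json` at the root (the import regex and resolution strategy are deliberate simplifications):
```javascript
// Illustrative hallucinated-import detector; a simplification, not the actual gate logic
const fs = require("fs");
const path = require("path");

function findHallucinatedImports(files, projectRoot) {
  const pkg = JSON.parse(fs.readFileSync(path.join(projectRoot, "package.json"), "utf8"));
  const declared = new Set([
    ...Object.keys(pkg.dependencies ?? {}),
    ...Object.keys(pkg.devDependencies ?? {})
  ]);
  const importRe = /from\s+['"]([^'"]+)['"]|require\(\s*['"]([^'"]+)['"]\s*\)/g;
  const findings = [];

  for (const file of files) {
    const source = fs.readFileSync(file, "utf8");
    for (const match of source.matchAll(importRe)) {
      const spec = match[1] ?? match[2];
      if (spec.startsWith(".") || spec.startsWith("node:")) continue; // relative or builtin import
      const packageName = spec.startsWith("@")
        ? spec.split("/").slice(0, 2).join("/")
        : spec.split("/")[0];
      const inNodeModules = fs.existsSync(path.join(projectRoot, "node_modules", packageName));
      if (!declared.has(packageName) && !inNodeModules) {
        findings.push({
          file,
          import: spec,
          message: `Unresolvable package '${packageName}' (possibly hallucinated)`
        });
      }
    }
  }
  return findings;
}
```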
#### L0.3: Variable Validation
```bash
# ESLint variable rules
npx eslint --rule 'no-shadow: error' --rule 'no-undef: error' --rule 'no-redeclare: error' {files}
```
#### L0.4: Type Validation
```bash
# TypeScript strict checks
npx tsc --noEmit --strict {files}
# Check for overuse of the 'any' type
npx eslint --rule '@typescript-eslint/no-explicit-any: warn' {files}
```
#### L0.5: AI-Specific Checks
```bash
# Check for placeholder code
grep -rn "// TODO: implement\|// Add your code here\|throw new Error.*Not implemented" {files}
# Check for mock code in production files
grep -rn "jest\.mock\|sinon\.\|vi\.mock" {source_files_only}
```
### Phase 3: Result Analysis
**Aggregate and categorize findings.**
```javascript
const findings = {
critical: [], // Blocks all progress
error: [], // Blocks with threshold
warning: [] // Advisory only
};
// Apply thresholds from config
const config = loadConfig("test-quality-config.json");
const thresholds = config.code_validation.severity_thresholds;
// Gate decision
if (findings.critical.length > thresholds.critical) {
decision = "HARD_FAIL";
} else if (findings.error.length > thresholds.error) {
decision = "SOFT_FAIL"; // Try auto-fix
} else {
decision = "PASS";
}
```
### Phase 4: Auto-Fix (Optional)
**Apply safe automatic fixes when --fix flag provided.**
```bash
# Safe fixes only
npx eslint --fix --rule 'import/no-duplicates: error' --rule 'unused-imports/no-unused-imports: error' {files}
# Re-run validation after fixes
# Report what was fixed vs what remains
```
**Safe Fix Categories**:
- Remove unused imports
- Remove duplicate imports
- Fix import ordering
- Remove unused variables (with caution)
- Formatting fixes
**Unsafe (Manual Only)**:
- Missing imports (need to determine correct package)
- Type errors (need to understand intent)
- Variable shadowing (need to understand scope intent)
### Phase 5: Gate Decision
**Determine next action based on results.**
| Decision | Condition | Action |
|----------|-----------|--------|
| **PASS** | critical=0, error<=3, warning<=10 | Proceed to IMPL-001.5 |
| **SOFT_FAIL** | critical=0, error>3 OR fixable issues | Auto-fix and retry (max 2) |
| **HARD_FAIL** | critical>0 OR max retries exceeded | Block with report |
## Output Artifacts
### Validation Report
**File**: `.workflow/active/{session_id}/.process/code-validation-report.md`
```markdown
# Code Validation Report
**Session**: {session_id}
**Timestamp**: {timestamp}
**Status**: PASS | SOFT_FAIL | HARD_FAIL
## Summary
- Files Validated: {count}
- Critical Issues: {count}
- Errors: {count}
- Warnings: {count}
## Critical Issues (Must Fix)
### Import Errors
- `src/auth/service.ts:5` - Cannot find module 'non-existent-package'
- **Suggestion**: Check if package exists, may be hallucinated by AI
### Variable Conflicts
- `src/utils/helper.ts:12` - Cannot redeclare block-scoped variable 'config'
- **Suggestion**: Rename one of the variables or merge declarations
## Errors (Should Fix)
...
## Warnings (Consider Fixing)
...
## Auto-Fix Applied
- Removed 3 unused imports in `src/auth/service.ts`
- Fixed import ordering in `src/utils/index.ts`
## Remaining Issues Requiring Manual Fix
...
## Next Steps
- [ ] Fix critical issues before proceeding
- [ ] Review error suggestions
- [ ] Re-run validation: `/workflow:tools:code-validation-gate --session {session_id}`
```
### JSON Report (Machine-Readable)
**File**: `.workflow/active/{session_id}/.process/code-validation-report.json`
```json
{
"session_id": "WFS-test-xxx",
"timestamp": "2025-01-30T10:00:00Z",
"status": "HARD_FAIL",
"summary": {
"files_validated": 15,
"critical": 2,
"error": 5,
"warning": 8
},
"findings": {
"critical": [
{
"category": "import",
"file": "src/auth/service.ts",
"line": 5,
"message": "Cannot find module 'non-existent-package'",
"suggestion": "Check if package exists in package.json",
"auto_fixable": false
}
],
"error": [...],
"warning": [...]
},
"auto_fixes_applied": [...],
"gate_decision": "HARD_FAIL",
"retry_count": 0,
"max_retries": 2
}
```
## Command Options
| Option | Description | Default |
|--------|-------------|---------|
| `--session` | Test session ID (required) | - |
| `--fix` | Enable auto-fix for safe issues | false |
| `--strict` | Use strict thresholds (0 errors allowed) | false |
| `--files` | Specific files to validate (comma-separated) | All target files |
| `--skip-types` | Skip TypeScript type checks | false |
## Integration
### Command Chain
- **Called By**: `/workflow:test-fix-gen` (after IMPL-001)
- **Requires**: IMPL-001 output OR context-package.json
- **Followed By**: IMPL-001.5 (Test Quality Gate) on PASS
### Task JSON Integration
When used in test-fix workflow, generates task:
```json
{
"id": "IMPL-001.3-validation",
"meta": {
"type": "code-validation",
"agent": "@test-fix-agent"
},
"context": {
"depends_on": ["IMPL-001"],
"requirements": "Validate generated code for AI common errors"
},
"flow_control": {
"validation_config": "~/.claude/workflows/test-quality-config.json",
"max_retries": 2,
"auto_fix_enabled": true
},
"acceptance_criteria": [
"Zero critical issues",
"Maximum 3 error issues",
"All imports resolvable",
"No variable redeclarations"
]
}
```
## Error Handling
| Error | Resolution |
|-------|------------|
| tsconfig.json not found | Use default compiler options |
| ESLint not installed | Skip ESLint checks, use tsc only |
| madge not installed | Skip circular dependency check |
| No files to validate | Return PASS (nothing to check) |
## Best Practices
1. **Run Early**: Execute validation immediately after code generation
2. **Use --fix First**: Let auto-fix resolve trivial issues
3. **Review Suggestions**: AI fix suggestions may need human judgment
4. **Don't Skip Critical**: Never proceed with critical errors
5. **Track Patterns**: Common failures indicate prompt improvement opportunities
## Related Commands
- `/workflow:test-fix-gen` - Parent workflow that invokes this command
- `/workflow:tools:test-quality-gate` - Next phase (IMPL-001.5) for test quality
- `/workflow:test-cycle-execute` - Execute tests after validation passes

View File

@@ -1,12 +1,16 @@
---
name: conflict-resolution
description: Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen
argument-hint: "--session WFS-session-id --context path/to/context-package.json"
argument-hint: "[-y|--yes] --session WFS-session-id --context path/to/context-package.json"
examples:
- /workflow:tools:conflict-resolution --session WFS-auth --context .workflow/active/WFS-auth/.process/context-package.json
- /workflow:tools:conflict-resolution --session WFS-payment --context .workflow/active/WFS-payment/.process/context-package.json
- /workflow:tools:conflict-resolution -y --session WFS-payment --context .workflow/active/WFS-payment/.process/context-package.json
---
## Auto Mode
When `--yes` or `-y`: Auto-select recommended strategy for each conflict, skip clarification questions.
# Conflict Resolution Command
## Purpose
@@ -21,10 +25,10 @@ Analyzes conflicts between implementation plans and existing codebase, **includi
| Responsibility | Description |
|---------------|-------------|
| **Detect Conflicts** | Analyze plan vs existing code inconsistencies |
| **Scenario Uniqueness** | **NEW**: Search and compare new modules with existing modules for functional overlaps |
| **Scenario Uniqueness** | Search and compare new modules with existing modules for functional overlaps |
| **Generate Strategies** | Provide 2-4 resolution options per conflict |
| **Iterative Clarification** | **NEW**: Ask unlimited questions until scenario boundaries are clear and unique |
| **Agent Re-analysis** | **NEW**: Dynamically update strategies based on user clarifications |
| **Iterative Clarification** | Ask unlimited questions until scenario boundaries are clear and unique |
| **Agent Re-analysis** | Dynamically update strategies based on user clarifications |
| **CLI Analysis** | Use Gemini/Qwen (Claude fallback) |
| **User Decision** | Present options ONE BY ONE, never auto-apply |
| **Direct Text Output** | Output questions via text directly, NEVER use bash echo/printf |
@@ -53,7 +57,7 @@ Analyzes conflicts between implementation plans and existing codebase, **includi
- Breaking updates
### 5. Module Scenario Overlap
- **NEW**: Functional overlap between new and existing modules
- Functional overlap between new and existing modules
- Scenario boundary ambiguity
- Duplicate responsibility detection
- Module merge/split decisions
@@ -130,7 +134,7 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
### 1. Load Context
- Read existing files from conflict_detection.existing_files
- Load plan from .workflow/active/{session_id}/.process/context-package.json
- **NEW**: Load exploration_results and use aggregated_insights for enhanced analysis
- Load exploration_results and use aggregated_insights for enhanced analysis
- Extract role analyses and requirements
### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)
@@ -154,8 +158,8 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
- Validation of exploration conflict_indicators
- ModuleOverlap conflicts with overlap_analysis
- Targeted clarification questions
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
" --tool gemini --mode analysis --cd {project_root}
CONSTRAINTS: Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {project_root}
Fallback: Qwen (same prompt) → Claude (manual analysis)
@@ -182,33 +186,25 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
- modifications.old_content: 20-100 chars for unique Edit tool matching
- modifications.new_content: preserves markdown formatting
- modification_suggestions: 2-5 actionable suggestions for custom handling
`)
```
**Agent Internal Flow** (Enhanced):
```
1. Load context package
2. Check conflict_risk (exit if none/low)
3. Read existing files + plan artifacts
4. Run CLI analysis (Gemini→Qwen→Claude) with enhanced tasks:
- Standard conflict detection (Architecture/API/Data/Dependency)
- **NEW: Module scenario uniqueness detection**
* Extract new module functionality from plan
* Search all existing modules with similar keywords/functionality
* Compare scenario coverage and responsibilities
* Identify functional overlaps and boundary ambiguities
* Generate ModuleOverlap conflicts with overlap_analysis
5. Parse conflict findings (including ModuleOverlap category)
6. Generate 2-4 strategies per conflict:
- Include modifications for each strategy
- **For ModuleOverlap**: Add clarification_needed questions for boundary definition
7. Return JSON to stdout (NOT file write)
8. Return execution log path
### 5. Planning Notes Record (REQUIRED)
After analysis complete, append a brief execution record to planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Under "## Conflict Decisions (Phase 3)" section
**Format**:
\`\`\`
### [Conflict-Resolution Agent] YYYY-MM-DD
- **Note**: [Auto-filled: brief summary of conflict types, resolution strategies, key decisions, etc.]
\`\`\`
`)
```
### Phase 3: User Interaction Loop
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
FOR each conflict:
round = 0, clarified = false, userClarifications = []
@@ -216,8 +212,13 @@ FOR each conflict:
// 1. Display conflict info (text output for context)
displayConflictSummary(conflict) // id, brief, severity, overlap_analysis if ModuleOverlap
// 2. Strategy selection via AskUserQuestion
AskUserQuestion({
// 2. Strategy selection
if (autoYes) {
console.log(`[--yes] Auto-selecting recommended strategy`)
selectedStrategy = conflict.strategies[conflict.recommended || 0]
clarified = true // Skip clarification loop
} else {
AskUserQuestion({
questions: [{
question: formatStrategiesForDisplay(conflict.strategies),
header: "策略选择",
@@ -230,18 +231,19 @@ FOR each conflict:
{ label: "自定义修改", description: `建议: ${conflict.modification_suggestions?.slice(0,2).join('; ')}` }
]
}]
})
})
// 3. Handle selection
if (userChoice === "自定义修改") {
customConflicts.push({ id, brief, category, suggestions, overlap_analysis })
break
// 3. Handle selection
if (userChoice === "自定义修改") {
customConflicts.push({ id, brief, category, suggestions, overlap_analysis })
break
}
selectedStrategy = findStrategyByName(userChoice)
}
selectedStrategy = findStrategyByName(userChoice)
// 4. Clarification (if needed) - batched max 4 per call
if (selectedStrategy.clarification_needed?.length > 0) {
if (!autoYes && selectedStrategy.clarification_needed?.length > 0) {
for (batch of chunk(selectedStrategy.clarification_needed, 4)) {
AskUserQuestion({
questions: batch.map((q, i) => ({

View File

@@ -35,7 +35,7 @@ Step 1: Context-Package Detection
├─ Valid package exists → Return existing (skip execution)
└─ No valid package → Continue to Step 2
Step 2: Complexity Assessment & Parallel Explore (NEW)
Step 2: Complexity Assessment & Parallel Explore
├─ Analyze task_description → classify Low/Medium/High
├─ Select exploration angles (1-4 based on complexity)
├─ Launch N cli-explore-agents in parallel
@@ -213,19 +213,37 @@ Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationM
**Only execute after Step 2 completes**
```javascript
// Load user intent from planning-notes.md (from Phase 1)
const planningNotesPath = `.workflow/active/${session_id}/planning-notes.md`;
let userIntent = { goal: task_description, key_constraints: "None specified" };
if (file_exists(planningNotesPath)) {
const notesContent = Read(planningNotesPath);
const goalMatch = notesContent.match(/\*\*GOAL\*\*:\s*(.+)/);
const constraintsMatch = notesContent.match(/\*\*KEY_CONSTRAINTS\*\*:\s*(.+)/);
if (goalMatch) userIntent.goal = goalMatch[1].trim();
if (constraintsMatch) userIntent.key_constraints = constraintsMatch[1].trim();
}
Task(
subagent_type="context-search-agent",
run_in_background=false,
description="Gather comprehensive context for plan",
prompt=`
## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution with priority sorting
## Session Information
- **Session ID**: ${session_id}
- **Task Description**: ${task_description}
- **Output Path**: .workflow/${session_id}/.process/context-package.json
## User Intent (from Phase 1 - Planning Notes)
**GOAL**: ${userIntent.goal}
**KEY_CONSTRAINTS**: ${userIntent.key_constraints}
This is the PRIMARY context source - all subsequent analysis must align with user intent.
## Exploration Input (from Step 2)
- **Manifest**: ${sessionFolder}/explorations-manifest.json
- **Exploration Count**: ${explorationManifest.exploration_count}
@@ -237,7 +255,7 @@ Execute complete context-search-agent workflow for implementation planning:
### Phase 1: Initialization & Pre-Analysis
1. **Project State Loading**:
- Read and parse `.workflow/project-tech.json`. Use its `technology_analysis` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components.
- Read and parse `.workflow/project-tech.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components.
- Read and parse `.workflow/project-guidelines.json`. Load `conventions`, `constraints`, and `learnings` into a `project_guidelines` section.
- If files don't exist, proceed with fresh analysis.
2. **Detection**: Check for existing context-package (early exit if valid)
@@ -245,7 +263,13 @@ Execute complete context-search-agent workflow for implementation planning:
4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state
### Phase 2: Multi-Source Context Discovery
Execute all discovery tracks:
Execute all discovery tracks (WITH USER INTENT INTEGRATION):
- **Track -1**: User Intent & Priority Foundation (EXECUTE FIRST)
- Load user intent (GOAL, KEY_CONSTRAINTS) from session input
- Map user requirements to codebase entities (files, modules, patterns)
- Establish baseline priority scores based on user goal alignment
- Output: user_intent_mapping.json with preliminary priority scores
- **Track 0**: Exploration Synthesis (load ${sessionFolder}/explorations-manifest.json, prioritize critical_files, deduplicate patterns/integration_points)
- **Track 1**: Historical archive analysis (query manifest.json for lessons learned)
- **Track 2**: Reference documentation (CLAUDE.md, architecture docs)
@@ -254,13 +278,45 @@ Execute all discovery tracks:
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project-tech.json`** for architecture and tech stack unless code analysis reveals it's outdated.
3. **Populate `project_context`**: Directly use the `technology_analysis` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
4. **Populate `project_guidelines`**: Load conventions, constraints, and learnings from `project-guidelines.json` into a dedicated section.
5. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
6. Perform conflict detection with risk assessment
7. **Inject historical conflicts** from archive analysis into conflict_detection
8. Generate and validate context-package.json
2. **Synthesize 5-source data** (including Track -1): Merge findings from all sources
- Priority order: User Intent > Archive > Docs > Exploration > Code > Web
- **Prioritize the context from `project-tech.json`** for architecture and tech stack unless code analysis reveals it's outdated
3. **Context Priority Sorting**:
a. Combine scores from Track -1 (user intent alignment) + relevance scores + exploration critical_files
b. Classify files into priority tiers:
- **Critical** (score ≥ 0.85): Directly mentioned in user goal OR exploration critical_files
- **High** (0.70-0.84): Key dependencies, patterns required for goal
- **Medium** (0.50-0.69): Supporting files, indirect dependencies
- **Low** (< 0.50): Contextual awareness only
c. Generate dependency_order: Based on dependency graph + user goal sequence
d. Document sorting_rationale: Explain prioritization logic
4. **Populate `project_context`**: Directly use the `overview` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
5. **Populate `project_guidelines`**: Load conventions, constraints, and learnings from `project-guidelines.json` into a dedicated section.
6. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
7. Perform conflict detection with risk assessment
8. **Inject historical conflicts** from archive analysis into conflict_detection
9. **Generate prioritized_context section**:
```json
{
"prioritized_context": {
"user_intent": {
"goal": "...",
"scope": "...",
"key_constraints": ["..."]
},
"priority_tiers": {
"critical": [{ "path": "...", "relevance": 0.95, "rationale": "..." }],
"high": [...],
"medium": [...],
"low": [...]
},
"dependency_order": ["module1", "module2", "module3"],
"sorting_rationale": "Based on user goal alignment (Track -1), exploration critical files, and dependency graph analysis"
}
}
```
10. Generate and validate context-package.json with prioritized_context field
## Output Requirements
Complete context-package.json with:
@@ -272,6 +328,7 @@ Complete context-package.json with:
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy, historical_conflicts[]}
- **exploration_results**: {manifest_path, exploration_count, angles, explorations[], aggregated_insights} (from Track 0)
- **prioritized_context**: {user_intent, priority_tiers{critical, high, medium, low}, dependency_order[], sorting_rationale}
## Quality Validation
Before completion verify:
@@ -282,6 +339,17 @@ Before completion verify:
- [ ] No sensitive data exposed
- [ ] Total files ≤50 (prioritize high-relevance)
## Planning Notes Record (REQUIRED)
After completing context-package.json, append a brief execution record to planning-notes.md:
**File**: .workflow/active/${session_id}/planning-notes.md
**Location**: Under "## Context Findings (Phase 2)" section
**Format**:
\`\`\`
### [Context-Search Agent] YYYY-MM-DD
- **Note**: [Auto-filled: brief summary of key findings, e.g. exploration angles, critical files, conflict risks]
\`\`\`
Execute autonomously following agent documentation.
Report completion with statistics.
`
@@ -326,116 +394,11 @@ Refer to `context-search-agent.md` Phase 3.7 for complete `context-package.json`
- **brainstorm_artifacts**: Brainstorm documents with full content (if exists)
- **conflict_detection**: Risk assessment with mitigation strategies and historical conflicts
- **exploration_results**: Aggregated exploration insights (from parallel explore phase)
## Historical Archive Analysis
### Track 1: Query Archive Manifest
The context-search-agent MUST perform historical archive analysis as Track 1 in Phase 2:
**Step 1: Check for Archive Manifest**
```bash
# Check if archive manifest exists
if [[ -f .workflow/archives/manifest.json ]]; then
# Manifest available for querying
fi
```
**Step 2: Extract Task Keywords**
```javascript
// From current task description, extract key entities and operations
const keywords = extractKeywords(task_description);
// Examples: ["User", "model", "authentication", "JWT", "reporting"]
```
**Step 3: Search Archive for Relevant Sessions**
```javascript
// Query manifest for sessions with matching tags or descriptions
const relevantArchives = archives.filter(archive => {
return archive.tags.some(tag => keywords.includes(tag)) ||
keywords.some(kw => archive.description.toLowerCase().includes(kw.toLowerCase()));
});
```
**Step 4: Extract Watch Patterns**
```javascript
// For each relevant archive, check watch_patterns for applicability
const historicalConflicts = [];
relevantArchives.forEach(archive => {
archive.lessons.watch_patterns?.forEach(pattern => {
// Check if pattern trigger matches current task
if (isPatternRelevant(pattern.pattern, task_description)) {
historicalConflicts.push({
source_session: archive.session_id,
pattern: pattern.pattern,
action: pattern.action,
files_to_check: pattern.related_files,
archived_at: archive.archived_at
});
}
});
});
```
**Step 5: Inject into Context Package**
```json
{
"conflict_detection": {
"risk_level": "medium",
"risk_factors": ["..."],
"affected_modules": ["..."],
"mitigation_strategy": "...",
"historical_conflicts": [
{
"source_session": "WFS-auth-feature",
"pattern": "When modifying User model",
"action": "Check reporting-service and auditing-service dependencies",
"files_to_check": ["src/models/User.ts", "src/services/reporting.ts"],
"archived_at": "2025-09-16T09:00:00Z"
}
]
}
}
```
### Risk Level Escalation
If `historical_conflicts` array is not empty, minimum risk level should be "medium":
```javascript
if (historicalConflicts.length > 0 && currentRisk === "low") {
conflict_detection.risk_level = "medium";
conflict_detection.risk_factors.push(
`${historicalConflicts.length} historical conflict pattern(s) detected from past sessions`
);
}
```
### Archive Query Algorithm
```markdown
1. IF .workflow/archives/manifest.json does NOT exist → Skip Track 1, continue to Track 2
2. IF manifest exists:
a. Load manifest.json
b. Extract keywords from task_description (nouns, verbs, technical terms)
c. Filter archives where:
- ANY tag matches keywords (case-insensitive) OR
- description contains keywords (case-insensitive substring match)
d. For each relevant archive:
- Read lessons.watch_patterns array
- Check if pattern.pattern keywords overlap with task_description
- If relevant: Add to historical_conflicts array
e. IF historical_conflicts.length > 0:
- Set risk_level = max(current_risk, "medium")
- Add to risk_factors
3. Continue to Track 2 (reference documentation)
```
- **prioritized_context**: Pre-sorted context with user intent and priority tiers (critical/high/medium/low)
## Notes
- **Detection-first**: Always check for existing package before invoking agent
- **Dual project file integration**: Agent reads both `.workflow/project-tech.json` (tech analysis) and `.workflow/project-guidelines.json` (user constraints) as primary sources
- **Guidelines injection**: Project guidelines are included in context-package to ensure task generation respects user-defined constraints
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **User intent integration**: Load user intent from planning-notes.md (Phase 1 output)
- **Output**: Generates `context-package.json` with `prioritized_context` field
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call

View File

@@ -1,11 +1,16 @@
---
name: task-generate-agent
description: Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation
argument-hint: "--session WFS-session-id"
argument-hint: "[-y|--yes] --session WFS-session-id"
examples:
- /workflow:tools:task-generate-agent --session WFS-auth
- /workflow:tools:task-generate-agent -y --session WFS-auth
---
## Auto Mode
When `--yes` or `-y`: Skip user questions, use defaults (no materials, Agent executor, Codex CLI tool).
# Generate Implementation Plan Command
## Overview
@@ -14,6 +19,10 @@ Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.
## Core Philosophy
- **Planning Only**: Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - does NOT implement code
- **Agent-Driven Document Generation**: Delegate plan generation to action-planning-agent
- **NO Redundant Context Sorting**: Context priority sorting is ALREADY completed in context-gather Phase 2/3
- Use `context-package.json.prioritized_context` directly
- DO NOT re-sort files or re-compute priorities
- `priority_tiers` and `dependency_order` are pre-computed and ready-to-use
- **N+1 Parallel Planning**: Auto-detect multi-module projects, enable parallel planning (2+1 or 3+1 mode)
- **Progressive Loading**: Load context incrementally (Core → Selective → On-Demand) due to analysis.md file size
- **Memory-First**: Reuse loaded documents from conversation memory
@@ -67,9 +76,25 @@ Phase 3: Integration (+1 Coordinator, Multi-Module Only)
**Purpose**: Collect user preferences before task generation to ensure generated tasks match execution expectations.
**User Questions**:
**Auto Mode Check**:
```javascript
AskUserQuestion({
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
console.log(`[--yes] Using defaults: No materials, Agent executor, Codex CLI`)
userConfig = {
supplementaryMaterials: { type: "none", content: [] },
executionMethod: "agent",
preferredCliTool: "codex",
enableResume: true
}
// Skip to Phase 1
}
```
**User Questions** (skipped if autoYes):
```javascript
if (!autoYes) AskUserQuestion({
questions: [
{
question: "Do you have supplementary materials or guidelines to include?",
@@ -104,11 +129,10 @@ AskUserQuestion({
}
]
})
```
**Handle Materials Response**:
**Handle Materials Response** (skipped if autoYes):
```javascript
if (userConfig.materials === "Provide file paths") {
if (!autoYes && userConfig.materials === "Provide file paths") {
// Follow-up question for file paths
const pathsResponse = AskUserQuestion({
questions: [{
@@ -141,12 +165,13 @@ const userConfig = {
### Phase 1: Context Preparation & Module Detection (Command Responsibility)
**Command prepares session paths, metadata, and detects module structure.**
**Command prepares session paths, metadata, detects module structure. Context priority sorting is NOT performed here - it's already completed in context-gather Phase 2/3.**
**Session Path Structure**:
```
.workflow/active/WFS-{session-id}/
├── workflow-session.json # Session metadata
├── planning-notes.md # Consolidated planning notes
├── .process/
│ └── context-package.json # Context package with artifact catalog
├── .task/ # Output: Task JSON files
@@ -228,9 +253,21 @@ IMPORTANT: This is PLANNING ONLY - you are generating planning documents, NOT im
CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)
## PLANNING NOTES (PHASE 1-3 CONTEXT)
Load: .workflow/active/{session-id}/planning-notes.md
This document contains:
- User Intent: Original GOAL and KEY_CONSTRAINTS from Phase 1
- Context Findings: Critical files, architecture, and constraints from Phase 2
- Conflict Decisions: Resolved conflicts and planning constraints from Phase 3
- Consolidated Constraints: All constraints from all phases
**USAGE**: Read planning-notes.md FIRST. Use Consolidated Constraints list to guide task sequencing and dependencies.
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Planning Notes: .workflow/active/{session-id}/planning-notes.md
- Context Package: .workflow/active/{session-id}/.process/context-package.json
Output:
@@ -247,18 +284,36 @@ Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
Supplementary Materials: ${userConfig.supplementaryMaterials}
## CLI TOOL SELECTION
Based on userConfig.executionMethod:
- "agent": No command field in implementation_approach steps
- "hybrid": Add command field to complex steps only (agent handles simple steps)
- "cli": Add command field to ALL implementation_approach steps
## EXECUTION METHOD MAPPING
Based on userConfig.executionMethod, set task-level meta.execution_config:
CLI Resume Support (MANDATORY for all CLI commands):
- Use --resume parameter to continue from previous task execution
- Read previous task's cliExecutionId from session state
- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
"agent" →
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
Agent executes implementation_approach steps directly
## EXPLORATION CONTEXT (from context-package.exploration_results)
"cli" →
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Agent executes pre_analysis, then hands off full context to CLI via buildCliHandoffPrompt()
"hybrid" →
Per-task decision: Analyze task complexity, set method to "agent" OR "cli" per task
- Simple tasks (≤3 files, straightforward logic) → method: "agent"
- Complex tasks (>3 files, complex logic, refactoring) → method: "cli"
CLI tool: userConfig.preferredCliTool, enable_resume: true
IMPORTANT: Do NOT add command field to implementation_approach steps. Execution routing is controlled by task-level meta.execution_config.method only.
## PRIORITIZED CONTEXT (from context-package.prioritized_context) - ALREADY SORTED
Context sorting is ALREADY COMPLETED in context-gather Phase 2/3. DO NOT re-sort.
Direct usage:
- **user_intent**: Use goal/scope/key_constraints for task alignment
- **priority_tiers.critical**: These files are PRIMARY focus for task generation
- **priority_tiers.high**: These files are SECONDARY focus
- **dependency_order**: Use this for task sequencing - already computed
- **sorting_rationale**: Reference for understanding priority decisions
## EXPLORATION CONTEXT (from context-package.exploration_results) - SUPPLEMENT ONLY
If prioritized_context is incomplete, fall back to exploration_results:
- Load exploration_results from context-package.json
- Use aggregated_insights.critical_files for focus_paths generation
- Apply aggregated_insights.constraints to acceptance criteria
@@ -278,8 +333,10 @@ CLI Resume Support (MANDATORY for all CLI commands):
- 6-field schema (id, title, status, context_package_path, meta, context, flow_control)
- Quantified requirements with explicit counts
- Artifacts integration from context package
- **focus_paths enhanced with exploration critical_files**
- Flow control with pre_analysis steps (include exploration integration_points analysis)
- **focus_paths generated directly from prioritized_context.priority_tiers (critical + high)**
- NO re-sorting or re-prioritization - use pre-computed tiers as-is
- Critical files are PRIMARY focus, High files are SECONDARY
- Flow control with pre_analysis steps (use prioritized_context.dependency_order for task sequencing)
- **CLI Execution IDs and strategies (MANDATORY)**
2. Implementation Plan (IMPL_PLAN.md)
@@ -327,6 +384,29 @@ Hard Constraints:
- IMPL_PLAN.md created with complete structure
- TODO_LIST.md generated matching task JSONs
- Return completion status with document count and task breakdown summary
## PLANNING NOTES RECORD (REQUIRED)
After completing, update planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
1. **Task Generation (Phase 4)**: Task count and key tasks
2. **N+1 Context**: Key decisions (with rationale) + deferred items
\`\`\`markdown
## Task Generation (Phase 4)
### [Action-Planning Agent] YYYY-MM-DD
- **Tasks**: [count] ([IDs])
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| [choice] | [why] | [Yes/No] |
### Deferred
- [ ] [item] - [reason]
\`\`\`
`
)
```
@@ -356,16 +436,22 @@ IMPORTANT: Generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md by Phase 3 Co
CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)
## PLANNING NOTES (PHASE 1-3 CONTEXT)
Load: .workflow/active/{session-id}/planning-notes.md
This document contains consolidated constraints and user intent to guide module-scoped task generation.
## MODULE SCOPE
- Module: ${module.name} (${module.type})
- Focus Paths: ${module.paths.join(', ')}
- Task ID Prefix: IMPL-${module.prefix}
- Task Limit: ≤9 tasks (hard limit for this module)
- Task Limit: ≤6 tasks (hard limit for this module)
- Other Modules: ${otherModules.join(', ')} (reference only, do NOT generate tasks for them)
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Planning Notes: .workflow/active/{session-id}/planning-notes.md
- Context Package: .workflow/active/{session-id}/.process/context-package.json
Output:
@@ -380,18 +466,35 @@ Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
Supplementary Materials: ${userConfig.supplementaryMaterials}
## CLI TOOL SELECTION
Based on userConfig.executionMethod:
- "agent": No command field in implementation_approach steps
- "hybrid": Add command field to complex steps only (agent handles simple steps)
- "cli": Add command field to ALL implementation_approach steps
## EXECUTION METHOD MAPPING
Based on userConfig.executionMethod, set task-level meta.execution_config:
CLI Resume Support (MANDATORY for all CLI commands):
- Use --resume parameter to continue from previous task execution
- Read previous task's cliExecutionId from session state
- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
"agent" →
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
Agent executes implementation_approach steps directly
## EXPLORATION CONTEXT (from context-package.exploration_results)
"cli" →
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Agent executes pre_analysis, then hands off full context to CLI via buildCliHandoffPrompt()
"hybrid" →
Per-task decision: Analyze task complexity, set method to "agent" OR "cli" per task
- Simple tasks (≤3 files, straightforward logic) → method: "agent"
- Complex tasks (>3 files, complex logic, refactoring) → method: "cli"
CLI tool: userConfig.preferredCliTool, enable_resume: true
IMPORTANT: Do NOT add command field to implementation_approach steps. Execution routing is controlled by task-level meta.execution_config.method only.
## PRIORITIZED CONTEXT (from context-package.prioritized_context) - ALREADY SORTED
Context sorting is ALREADY COMPLETED in context-gather Phase 2/3. DO NOT re-sort.
Filter by module scope (${module.paths.join(', ')}):
- **user_intent**: Use for task alignment within module
- **priority_tiers.critical**: Filter for files in ${module.paths.join(', ')} → PRIMARY focus
- **priority_tiers.high**: Filter for files in ${module.paths.join(', ')} → SECONDARY focus
- **dependency_order**: Use module-relevant entries for task sequencing
## EXPLORATION CONTEXT (from context-package.exploration_results) - SUPPLEMENT ONLY
If prioritized_context is incomplete for this module, fall back to exploration_results:
- Load exploration_results from context-package.json
- Filter for ${module.name} module: Use aggregated_insights.critical_files matching ${module.paths.join(', ')}
- Apply module-relevant constraints from aggregated_insights.constraints
@@ -418,8 +521,10 @@ Task JSON Files (.task/IMPL-${module.prefix}*.json):
- Task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
- Quantified requirements with explicit counts
- Artifacts integration from context package (filtered for ${module.name})
- **focus_paths enhanced with exploration critical_files (module-scoped)**
- Flow control with pre_analysis steps (include exploration integration_points analysis)
- **focus_paths generated directly from prioritized_context.priority_tiers filtered by ${module.paths.join(', ')}**
- NO re-sorting - use pre-computed tiers filtered for this module
- Critical files are PRIMARY focus, High files are SECONDARY
- Flow control with pre_analysis steps (use prioritized_context.dependency_order for module task sequencing)
- **CLI Execution IDs and strategies (MANDATORY)**
- Focus ONLY on ${module.name} module scope
@@ -462,6 +567,15 @@ Hard Constraints:
- Cross-module dependencies use CROSS:: placeholder format consistently
- Focus paths scoped to ${module.paths.join(', ')} only
- Return: task count, task IDs, dependency summary (internal + cross-module)
## PLANNING NOTES RECORD (REQUIRED)
After completing, append to planning-notes.md:
\`\`\`markdown
### [${module.name}] YYYY-MM-DD
- **Tasks**: [count] ([IDs])
- **CROSS deps**: [placeholders used]
\`\`\`
`
)
);
@@ -542,6 +656,24 @@ Module Count: ${modules.length}
- No CROSS:: placeholders remaining in task JSONs
- IMPL_PLAN.md and TODO_LIST.md generated with multi-module structure
- Return: task count, per-module breakdown, resolved dependency count
## PLANNING NOTES RECORD (REQUIRED)
After integration, update planning-notes.md:
\`\`\`markdown
### [Coordinator] YYYY-MM-DD
- **Total**: [count] tasks
- **Resolved**: [CROSS:: resolutions]
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| CROSS::X → IMPL-Y | [why this resolution] | [Yes/No] |
### Deferred
- [ ] [unresolved CROSS or conflict] - [reason]
\`\`\`
`
)
```
@@ -559,5 +691,4 @@ function resolveCrossModuleDependency(placeholder, allTasks) {
? candidates.sort((a, b) => a.id.localeCompare(b.id))[0].id
: placeholder; // Keep for manual resolution
}
```
```


@@ -1,11 +1,16 @@
---
name: task-generate-tdd
description: Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation
argument-hint: "--session WFS-session-id"
argument-hint: "[-y|--yes] --session WFS-session-id"
examples:
- /workflow:tools:task-generate-tdd --session WFS-auth
- /workflow:tools:task-generate-tdd -y --session WFS-auth
---
## Auto Mode
When `--yes` or `-y`: Skip user questions, use defaults (no materials, Agent executor).
# Autonomous TDD Task Generation Command
## Overview
@@ -78,44 +83,176 @@ Phase 2: Agent Execution (Document Generation)
## Execution Lifecycle
### Phase 1: Discovery & Context Loading
### Phase 0: User Configuration (Interactive)
**Purpose**: Collect user preferences before TDD task generation to ensure generated tasks match execution expectations and to capture any necessary supplementary context.
**User Questions**:
```javascript
AskUserQuestion({
questions: [
{
question: "Do you have supplementary materials or guidelines to include?",
header: "Materials",
multiSelect: false,
options: [
{ label: "No additional materials", description: "Use existing context only" },
{ label: "Provide file paths", description: "I'll specify paths to include" },
{ label: "Provide inline content", description: "I'll paste content directly" }
]
},
{
question: "Select execution method for generated TDD tasks:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent (Recommended)", description: "Claude agent executes Red-Green-Refactor cycles directly" },
{ label: "Hybrid", description: "Agent orchestrates, calls CLI for complex steps (Red/Green phases)" },
{ label: "CLI Only", description: "All TDD cycles via CLI tools (codex/gemini/qwen)" }
]
},
{
question: "If using CLI, which tool do you prefer?",
header: "CLI Tool",
multiSelect: false,
options: [
{ label: "Codex (Recommended)", description: "Best for TDD Red-Green-Refactor cycles" },
{ label: "Gemini", description: "Best for analysis and large context" },
{ label: "Qwen", description: "Alternative analysis tool" },
{ label: "Auto", description: "Let agent decide per-task" }
]
}
]
})
```
**Handle Materials Response**:
```javascript
if (userConfig.materials === "Provide file paths") {
// Follow-up question for file paths
const pathsResponse = AskUserQuestion({
questions: [{
question: "Enter file paths to include (comma-separated or one per line):",
header: "Paths",
multiSelect: false,
options: [
{ label: "Enter paths", description: "Provide paths in text input" }
]
}]
})
userConfig.supplementaryPaths = parseUserPaths(pathsResponse)
}
```
**Build userConfig**:
```javascript
const userConfig = {
supplementaryMaterials: {
type: "none|paths|inline",
content: [...], // Parsed paths or inline content
},
executionMethod: "agent|hybrid|cli",
preferredCliTool: "codex|gemini|qwen|auto",
enableResume: true // Always enable resume for CLI executions
}
```
**Pass to Agent**: Include `userConfig` in agent prompt for Phase 2.
---
### Phase 1: Context Preparation & Discovery
**Command Responsibility**: The command prepares session paths and metadata, then provides them to the agent for autonomous context loading.
**⚡ Memory-First Rule**: Skip file loading if documents are already in conversation memory
**Agent Context Package**:
**📊 Progressive Loading Strategy**: Load context incrementally because analysis.md files can be large:
- **Core**: session metadata + context-package.json (always load)
- **Selective**: synthesis_output OR (guidance + relevant role analyses) - NOT all role analyses
- **On-Demand**: conflict resolution (if conflict_risk >= medium), test context
**🛤️ Path Clarity Requirement**: All `focus_paths` should prefer absolute paths (e.g., `D:\\project\\src\\module`) or clear relative paths from the project root (e.g., `./src/module`)
**Session Path Structure** (Provided by Command to Agent):
```
.workflow/active/WFS-{session-id}/
├── workflow-session.json # Session metadata
├── .process/
│ ├── context-package.json # Context package with artifact catalog
│ ├── test-context-package.json # Test coverage analysis
│ └── conflict-resolution.json # Conflict resolution (if exists)
├── .task/ # Output: Task JSON files
│ ├── IMPL-1.json
│ ├── IMPL-2.json
│ └── ...
├── IMPL_PLAN.md # Output: TDD implementation plan
└── TODO_LIST.md # Output: TODO list with TDD phases
```
**Command Preparation**:
1. **Assemble Session Paths** for agent prompt:
- `session_metadata_path`: `.workflow/active/{session-id}/workflow-session.json`
- `context_package_path`: `.workflow/active/{session-id}/.process/context-package.json`
- `test_context_package_path`: `.workflow/active/{session-id}/.process/test-context-package.json`
- Output directory paths
2. **Provide Metadata** (simple values):
- `session_id`: WFS-{session-id}
- `workflow_type`: "tdd"
- `mcp_capabilities`: {exa_code, exa_web, code_index}
3. **Pass userConfig** from Phase 0
**Agent Context Package** (Agent loads autonomously):
```javascript
{
"session_id": "WFS-[session-id]",
"workflow_type": "tdd",
// Note: CLI tool usage is determined semantically by action-planning-agent based on user's task description
// Core (ALWAYS load)
"session_metadata": {
// If in memory: use cached content
// Else: Load from .workflow/active//{session-id}/workflow-session.json
// Else: Load from workflow-session.json
},
"context_package": {
// If in memory: use cached content
// Else: Load from context-package.json
},
// Selective (load based on progressive strategy)
"brainstorm_artifacts": {
// Loaded from context-package.json → brainstorm_artifacts section
"role_analyses": [
"synthesis_output": {"path": "...", "exists": true}, // Load if exists (highest priority)
"guidance_specification": {"path": "...", "exists": true}, // Load if no synthesis
"role_analyses": [ // Load SELECTIVELY based on task relevance
{
"role": "system-architect",
"files": [{"path": "...", "type": "primary|supplementary"}]
}
],
"guidance_specification": {"path": "...", "exists": true},
"synthesis_output": {"path": "...", "exists": true},
"conflict_resolution": {"path": "...", "exists": true} // if conflict_risk >= medium
]
},
"context_package_path": ".workflow/active//{session-id}/.process/context-package.json",
"context_package": {
// If in memory: use cached content
// Else: Load from .workflow/active//{session-id}/.process/context-package.json
},
"test_context_package_path": ".workflow/active//{session-id}/.process/test-context-package.json",
// On-Demand (load if exists)
"test_context_package": {
// Existing test patterns and coverage analysis
// Load from test-context-package.json
// Contains existing test patterns and coverage analysis
},
"conflict_resolution": {
// Load from conflict-resolution.json if conflict_risk >= medium
// Check context-package.conflict_detection.resolution_file
},
// Capabilities
"mcp_capabilities": {
"codex_lens": true,
"exa_code": true,
"exa_web": true
"exa_web": true,
"code_index": true
},
// User configuration from Phase 0
"user_config": {
// From Phase 0 AskUserQuestion
}
}
```
@@ -124,21 +261,21 @@ Phase 2: Agent Execution (Document Generation)
1. **Load Session Context** (if not in memory)
```javascript
if (!memory.has("workflow-session.json")) {
Read(.workflow/active//{session-id}/workflow-session.json)
Read(.workflow/active/{session-id}/workflow-session.json)
}
```
2. **Load Context Package** (if not in memory)
```javascript
if (!memory.has("context-package.json")) {
Read(.workflow/active//{session-id}/.process/context-package.json)
Read(.workflow/active/{session-id}/.process/context-package.json)
}
```
3. **Load Test Context Package** (if not in memory)
```javascript
if (!memory.has("test-context-package.json")) {
Read(.workflow/active//{session-id}/.process/test-context-package.json)
Read(.workflow/active/{session-id}/.process/test-context-package.json)
}
```
@@ -180,62 +317,89 @@ Phase 2: Agent Execution (Document Generation)
)
```
### Phase 2: Agent Execution (Document Generation)
### Phase 2: Agent Execution (TDD Document Generation)
**Pre-Agent Template Selection** (Command decides path before invoking agent):
```javascript
// Command checks flag and selects template PATH (not content)
const templatePath = hasCliExecuteFlag
? "~/.claude/workflows/cli-templates/prompts/workflow/task-json-cli-mode.txt"
: "~/.claude/workflows/cli-templates/prompts/workflow/task-json-agent-mode.txt";
```
**Purpose**: Generate TDD planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - planning only, NOT code implementation.
**Agent Invocation**:
```javascript
Task(
subagent_type="action-planning-agent",
run_in_background=false,
description="Generate TDD task JSON and implementation plan",
description="Generate TDD planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
prompt=`
## Execution Context
## TASK OBJECTIVE
Generate TDD implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for the workflow session identified below.
**Session ID**: WFS-{session-id}
**Workflow Type**: TDD
**Note**: CLI tool usage is determined semantically from user's task description
IMPORTANT: This is PLANNING ONLY - you are generating planning documents, NOT implementing code.
## Phase 1: Discovery Results (Provided Context)
CRITICAL: Follow the progressive loading strategy (load analysis.md files incrementally due to file size):
- **Core**: session metadata + context-package.json (always)
- **Selective**: synthesis_output OR (guidance + relevant role analyses) - NOT all
- **On-Demand**: conflict resolution (if conflict_risk >= medium), test context
### Session Metadata
{session_metadata_content}
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Context Package: .workflow/active/{session-id}/.process/context-package.json
- Test Context: .workflow/active/{session-id}/.process/test-context-package.json
### Role Analyses (Enhanced by Synthesis)
{role_analyses_content}
- Includes requirements, design specs, enhancements, and clarifications from synthesis phase
Output:
- Task Dir: .workflow/active/{session-id}/.task/
- IMPL_PLAN: .workflow/active/{session-id}/IMPL_PLAN.md
- TODO_LIST: .workflow/active/{session-id}/TODO_LIST.md
### Artifacts Inventory
- **Guidance Specification**: {guidance_spec_path}
- **Role Analyses**: {role_analyses_list}
## CONTEXT METADATA
Session ID: {session-id}
Workflow Type: TDD
MCP Capabilities: {exa_code, exa_web, code_index}
### Context Package
{context_package_summary}
- Includes conflict_risk assessment
## USER CONFIGURATION (from Phase 0)
Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
Supplementary Materials: ${userConfig.supplementaryMaterials}
### Test Context Package
{test_context_package_summary}
- Existing test patterns, framework config, coverage analysis
## EXECUTION METHOD MAPPING
Based on userConfig.executionMethod, set task-level meta.execution_config:
### Conflict Resolution (Conditional)
If conflict_risk was medium/high, modifications have been applied to:
- **guidance-specification.md**: Design decisions updated to resolve conflicts
- **Role analyses (*.md)**: Recommendations adjusted for compatibility
- **context-package.json**: Marked as "resolved" with conflict IDs
- Conflict resolution results stored in conflict-resolution.json
"agent" →
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
Agent executes Red-Green-Refactor phases directly
### MCP Analysis Results (Optional)
**Code Structure**: {mcp_code_index_results}
**External Research**: {mcp_exa_research_results}
"cli" →
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Agent executes pre_analysis, then hands off full context to CLI via buildCliHandoffPrompt()
## Phase 2: TDD Document Generation Task
"hybrid" →
Per-task decision: Analyze TDD cycle complexity, set method to "agent" OR "cli" per task
- Simple cycles (≤5 test cases, ≤3 files) → method: "agent"
- Complex cycles (>5 test cases, >3 files, integration tests) → method: "cli"
CLI tool: userConfig.preferredCliTool, enable_resume: true
IMPORTANT: Do NOT add command field to implementation_approach steps. Execution routing is controlled by task-level meta.execution_config.method only.
## EXPLORATION CONTEXT (from context-package.exploration_results)
- Load exploration_results from context-package.json
- Use aggregated_insights.critical_files for focus_paths generation
- Apply aggregated_insights.constraints to acceptance criteria
- Reference aggregated_insights.all_patterns for implementation approach
- Use aggregated_insights.all_integration_points for precise modification locations
- Use conflict_indicators for risk-aware task sequencing
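As a non-normative sketch, these insights can be pulled into planning inputs roughly as follows; contextPackage is a placeholder for the loaded context-package.json, and the field locations follow the names listed above:
\`\`\`javascript
// Illustrative sketch only: field locations follow the names listed above.
const insights = contextPackage.exploration_results.aggregated_insights;
const focusPathCandidates = insights.critical_files;        // feeds focus_paths generation
const acceptanceConstraints = insights.constraints;         // folded into acceptance criteria
const knownPatterns = insights.all_patterns;                // referenced in implementation approach
const modificationPoints = insights.all_integration_points; // precise modification locations
\`\`\`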
## CONFLICT RESOLUTION CONTEXT (if exists)
- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
- If exists, load .process/conflict-resolution.json:
- Apply planning_constraints as task constraints (for brainstorm-less workflows)
- Reference resolved_conflicts for implementation approach alignment
- Handle custom_conflicts with explicit task notes
## TEST CONTEXT INTEGRATION
- Load test-context-package.json for existing test patterns and coverage analysis
- Extract test framework configuration (Jest/Pytest/etc.)
- Identify existing test conventions and patterns
- Map coverage gaps to TDD Red phase test targets
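A minimal sketch of the gap-to-Red-phase mapping; the coverage_gaps, uncovered_functions, and framework field names are assumptions about test-context-package.json, not a documented schema:
\`\`\`javascript
// Illustrative sketch only: field names on testContext are assumed, not guaranteed.
const redPhaseTargets = (testContext.coverage_gaps || []).map(gap => ({
  file: gap.file,
  test_targets: gap.uncovered_functions,
  framework: testContext.framework   // e.g. "jest" or "pytest"
}));
\`\`\`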
## TDD DOCUMENT GENERATION TASK
**Agent Configuration Reference**: All TDD task generation rules, quantification requirements, Red-Green-Refactor cycle structure, quality standards, and execution details are defined in action-planning-agent.
@@ -256,31 +420,61 @@ If conflict_risk was medium/high, modifications have been applied to:
#### Required Outputs Summary
##### 1. TDD Task JSON Files (.task/IMPL-*.json)
- **Location**: `.workflow/active//{session-id}/.task/`
- **Schema**: 5-field structure with TDD-specific metadata
- **Location**: `.workflow/active/{session-id}/.task/`
- **Schema**: 6-field structure with TDD-specific metadata
- `id, title, status, context_package_path, meta, context, flow_control`
- `meta.tdd_workflow`: true (REQUIRED)
- `meta.max_iterations`: 3 (Green phase test-fix cycle limit)
- `meta.cli_execution_id`: Unique CLI execution ID (format: `{session_id}-{task_id}`)
- `meta.cli_execution`: Strategy object (new|resume|fork|merge_fork)
- `context.tdd_cycles`: Array with quantified test cases and coverage
- `context.focus_paths`: Absolute or clear relative paths (enhanced with exploration critical_files)
- `flow_control.implementation_approach`: Exactly 3 steps with `tdd_phase` field
1. Red Phase (`tdd_phase: "red"`): Write failing tests
2. Green Phase (`tdd_phase: "green"`): Implement to pass tests
3. Refactor Phase (`tdd_phase: "refactor"`): Improve code quality
- CLI tool usage determined semantically (add `command` field when user requests CLI execution)
- `flow_control.pre_analysis`: Include exploration integration_points analysis
- **meta.execution_config**: Set per userConfig.executionMethod (agent/cli/hybrid)
- **Details**: See action-planning-agent.md § TDD Task JSON Generation
##### 2. IMPL_PLAN.md (TDD Variant)
- **Location**: `.workflow/active//{session-id}/IMPL_PLAN.md`
- **Location**: `.workflow/active/{session-id}/IMPL_PLAN.md`
- **Template**: `~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt`
- **TDD-Specific Frontmatter**: workflow_type="tdd", tdd_workflow=true, feature_count, task_breakdown
- **TDD Implementation Tasks Section**: Feature-by-feature with internal Red-Green-Refactor cycles
- **Context Analysis**: Artifact references and exploration insights
- **Details**: See action-planning-agent.md § TDD Implementation Plan Creation
##### 3. TODO_LIST.md
- **Location**: `.workflow/active//{session-id}/TODO_LIST.md`
- **Location**: `.workflow/active/{session-id}/TODO_LIST.md`
- **Format**: Hierarchical task list with internal TDD phase indicators (Red → Green → Refactor)
- **Status**: ▸ (container), [ ] (pending), [x] (completed)
- **Links**: Task JSON references and summaries
- **Details**: See action-planning-agent.md § TODO List Generation
### CLI EXECUTION ID REQUIREMENTS (MANDATORY)
Each task JSON MUST include:
- **meta.cli_execution_id**: Unique ID for CLI execution (format: `{session_id}-{task_id}`)
- **meta.cli_execution**: Strategy object based on depends_on:
- No deps → `{ "strategy": "new" }`
- 1 dep (single child) → `{ "strategy": "resume", "resume_from": "parent-cli-id" }`
- 1 dep (multiple children) → `{ "strategy": "fork", "resume_from": "parent-cli-id" }`
- N deps → `{ "strategy": "merge_fork", "resume_from": ["id1", "id2", ...] }`
- **Type**: `resume_from: string | string[]` (string for resume/fork, array for merge_fork)
**CLI Execution Strategy Rules**:
1. **new**: Task has no dependencies - starts fresh CLI conversation
2. **resume**: Task has 1 parent AND that parent has only this child - continues same conversation
3. **fork**: Task has 1 parent BUT parent has multiple children - creates new branch with parent context
4. **merge_fork**: Task has multiple parents - merges all parent contexts into new conversation
**Execution Command Patterns**:
- new: `ccw cli -p "[prompt]" --tool [tool] --mode write --id [cli_execution_id]`
- resume: `ccw cli -p "[prompt]" --resume [resume_from] --tool [tool] --mode write`
- fork: `ccw cli -p "[prompt]" --resume [resume_from] --id [cli_execution_id] --tool [tool] --mode write`
- merge_fork: `ccw cli -p "[prompt]" --resume [resume_from.join(',')] --id [cli_execution_id] --tool [tool] --mode write` (resume_from is array)
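The four strategy rules above can be summarized as a small selection function. This is a sketch only: childCount is a hypothetical helper returning how many tasks depend on a given task, and is not a real API:
\`\`\`javascript
// Illustrative sketch only: childCount(taskId) is a hypothetical helper, not a real API.
function selectCliExecutionStrategy(task, sessionId, childCount) {
  const deps = task.depends_on || [];
  const cliIdOf = (taskId) => sessionId + "-" + taskId;   // matches {session_id}-{task_id}
  if (deps.length === 0) return { strategy: "new" };
  if (deps.length === 1) {
    return childCount(deps[0]) === 1
      ? { strategy: "resume", resume_from: cliIdOf(deps[0]) }
      : { strategy: "fork", resume_from: cliIdOf(deps[0]) };
  }
  return { strategy: "merge_fork", resume_from: deps.map(cliIdOf) };
}
\`\`\`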
### Quantification Requirements (MANDATORY)
**Core Rules**:
@@ -302,6 +496,7 @@ If conflict_risk was medium/high, modifications have been applied to:
- [ ] Every acceptance criterion includes measurable coverage percentage
- [ ] tdd_cycles array contains test_count and test_cases for each cycle
- [ ] No vague language ("comprehensive", "complete", "thorough")
- [ ] cli_execution_id and cli_execution strategy assigned to each task
### Agent Execution Summary
@@ -317,20 +512,34 @@ If conflict_risk was medium/high, modifications have been applied to:
- ✓ Quantification requirements enforced (explicit counts, measurable acceptance, exact targets)
- ✓ Task count ≤18 (hard limit)
- ✓ Each task has meta.tdd_workflow: true
- ✓ Each task has exactly 3 implementation steps with tdd_phase field
- ✓ Green phase includes test-fix cycle logic
- ✓ Artifact references mapped correctly
- ✓ MCP tool integration added
- ✓ Each task has exactly 3 implementation steps with tdd_phase field ("red", "green", "refactor")
- ✓ Each task has meta.cli_execution_id and meta.cli_execution strategy
- ✓ Green phase includes test-fix cycle logic with max_iterations
- ✓ focus_paths are absolute or clear relative paths (from exploration critical_files)
- ✓ Artifact references mapped correctly from context package
- ✓ Exploration context integrated (critical_files, constraints, patterns, integration_points)
- ✓ Conflict resolution context applied (if conflict_risk >= medium)
- ✓ Test context integrated (existing test patterns and coverage analysis)
- ✓ Documents follow TDD template structure
- ✓ CLI tool selection based on userConfig.executionMethod
## Output
## SUCCESS CRITERIA
- All planning documents generated successfully:
- Task JSONs valid and saved to .task/ directory with cli_execution_id
- IMPL_PLAN.md created with complete TDD structure
- TODO_LIST.md generated matching task JSONs
- CLI execution strategies assigned based on task dependencies
- Return completion status with document count and task breakdown summary
Generate all three documents and report completion status:
- TDD task JSON files created: N files (IMPL-*.json)
## OUTPUT SUMMARY
Generate all three documents and report:
- TDD task JSON files created: N files (IMPL-*.json) with cli_execution_id assigned
- TDD cycles configured: N cycles with quantified test cases
- Artifacts integrated: synthesis-spec, guidance-specification, N role analyses
- CLI execution strategies: new/resume/fork/merge_fork assigned per dependency graph
- Artifacts integrated: synthesis-spec/guidance-specification, relevant role analyses
- Exploration context: critical_files, constraints, patterns, integration_points
- Test context integrated: existing patterns and coverage
- MCP enhancements: CodexLens, exa-research
- Conflict resolution: applied (if conflict_risk >= medium)
- Session ready for TDD execution: /workflow:execute
`
)
@@ -338,50 +547,64 @@ Generate all three documents and report completion status:
### Agent Context Passing
**Memory-Aware Context Assembly**:
**Context Delegation Model**: The command provides paths and metadata; the agent loads context autonomously using the progressive loading strategy.
**Command Provides** (in agent prompt):
```javascript
// Assemble context package for agent
const agentContext = {
session_id: "WFS-[id]",
// Command assembles these simple values and paths for agent
const commandProvides = {
// Session paths
session_metadata_path: ".workflow/active/WFS-{id}/workflow-session.json",
context_package_path: ".workflow/active/WFS-{id}/.process/context-package.json",
test_context_package_path: ".workflow/active/WFS-{id}/.process/test-context-package.json",
output_task_dir: ".workflow/active/WFS-{id}/.task/",
output_impl_plan: ".workflow/active/WFS-{id}/IMPL_PLAN.md",
output_todo_list: ".workflow/active/WFS-{id}/TODO_LIST.md",
// Simple metadata
session_id: "WFS-{id}",
workflow_type: "tdd",
mcp_capabilities: { exa_code: true, exa_web: true, code_index: true },
// Use memory if available, else load
session_metadata: memory.has("workflow-session.json")
? memory.get("workflow-session.json")
: Read(.workflow/active/WFS-[id]/workflow-session.json),
context_package_path: ".workflow/active/WFS-[id]/.process/context-package.json",
context_package: memory.has("context-package.json")
? memory.get("context-package.json")
: Read(".workflow/active/WFS-[id]/.process/context-package.json"),
test_context_package_path: ".workflow/active/WFS-[id]/.process/test-context-package.json",
test_context_package: memory.has("test-context-package.json")
? memory.get("test-context-package.json")
: Read(".workflow/active/WFS-[id]/.process/test-context-package.json"),
// Extract brainstorm artifacts from context package
brainstorm_artifacts: extractBrainstormArtifacts(context_package),
// Load role analyses using paths from context package
role_analyses: brainstorm_artifacts.role_analyses
.flatMap(role => role.files)
.map(file => Read(file.path)),
// Load conflict resolution if exists (prefer new JSON format)
conflict_resolution: context_package.conflict_detection?.resolution_file
? Read(context_package.conflict_detection.resolution_file) // .process/conflict-resolution.json
: (brainstorm_artifacts?.conflict_resolution?.exists
? Read(brainstorm_artifacts.conflict_resolution.path)
: null),
// Optional MCP enhancements
mcp_analysis: executeMcpDiscovery()
// User configuration from Phase 0
user_config: {
supplementaryMaterials: { type: "...", content: [...] },
executionMethod: "agent|hybrid|cli",
preferredCliTool: "codex|gemini|qwen|auto",
enableResume: true
}
}
```
**Agent Loads Autonomously** (progressive loading):
```javascript
// Agent executes progressive loading based on memory state
const agentLoads = {
// Core (ALWAYS load if not in memory)
session_metadata: loadIfNotInMemory(session_metadata_path),
context_package: loadIfNotInMemory(context_package_path),
// Selective (based on progressive strategy)
// Priority: synthesis_output > guidance + relevant_role_analyses
brainstorm_content: loadSelectiveBrainstormArtifacts(context_package),
// On-Demand (load if exists and relevant)
test_context: loadIfExists(test_context_package_path),
conflict_resolution: loadConflictResolution(context_package),
// Optional (if MCP available)
exploration_results: extractExplorationResults(context_package),
external_research: executeMcpResearch() // If needed
}
```
**Progressive Loading Implementation** (agent responsibility):
1. **Check memory first** - skip if already loaded
2. **Load core files** - session metadata + context-package.json
3. **Smart selective loading** - synthesis_output OR (guidance + task-relevant role analyses)
4. **On-demand loading** - test context, conflict resolution (if conflict_risk >= medium)
5. **Extract references** - exploration results, artifact paths from context package
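As a small sketch of steps 1–2, assuming the same `memory` and `Read` helpers used in the snippets above:
```javascript
// Sketch only: mirrors the memory-first rule; memory and Read are the helpers shown above.
function loadIfNotInMemory(path) {
  const key = path.split("/").pop();   // e.g. "context-package.json"
  return memory.has(key) ? memory.get(key) : Read(path);
}
```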
## TDD Task Structure Reference
This section provides quick reference for TDD task JSON structure. For complete implementation details, see the agent invocation prompt in Phase 2 above.
@@ -389,14 +612,31 @@ This section provides quick reference for TDD task JSON structure. For complete
**Quick Reference**:
- Each TDD task contains complete Red-Green-Refactor cycle
- Task ID format: `IMPL-N` (simple) or `IMPL-N.M` (complex subtasks)
- Required metadata: `meta.tdd_workflow: true`, `meta.max_iterations: 3`
- Flow control: Exactly 3 steps with `tdd_phase` field (red, green, refactor)
- Context: `tdd_cycles` array with quantified test cases and coverage
- Required metadata:
- `meta.tdd_workflow: true`
- `meta.max_iterations: 3`
- `meta.cli_execution_id: "{session_id}-{task_id}"`
- `meta.cli_execution: { "strategy": "new|resume|fork|merge_fork", ... }`
- Context: `tdd_cycles` array with quantified test cases and coverage:
```javascript
tdd_cycles: [
{
test_count: 5, // Number of test cases to write
test_cases: ["case1", "case2"], // Enumerated test scenarios
implementation_scope: "...", // Files and functions to implement
expected_coverage: ">=85%" // Coverage target
}
]
```
- Context: `focus_paths` use absolute or clear relative paths
- Flow control: Exactly 3 steps with `tdd_phase` field ("red", "green", "refactor")
- Flow control: `pre_analysis` includes exploration integration_points analysis
- **meta.execution_config**: Set per `userConfig.executionMethod` (agent/cli/hybrid)
- See Phase 2 agent prompt for full schema and requirements
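Putting the quick reference together, a minimal illustrative skeleton (placeholder values only, not a normative example; see the Phase 2 prompt for the authoritative schema) might look like:
```javascript
// Sketch only: all values are placeholders.
const exampleTask = {
  id: "IMPL-1",
  title: "Add token refresh to auth service",
  status: "pending",
  context_package_path: ".workflow/active/WFS-auth/.process/context-package.json",
  meta: {
    tdd_workflow: true,
    max_iterations: 3,
    cli_execution_id: "WFS-auth-IMPL-1",
    cli_execution: { strategy: "new" },
    execution_config: { method: "agent", cli_tool: null, enable_resume: false }
  },
  context: {
    tdd_cycles: [{
      test_count: 5,
      test_cases: ["expired token", "missing token"],
      implementation_scope: "src/auth/refresh.ts",
      expected_coverage: ">=85%"
    }],
    focus_paths: ["./src/auth"]
  },
  flow_control: {
    pre_analysis: ["Review integration points for src/auth"],
    implementation_approach: [
      { tdd_phase: "red", step: "Write 5 failing tests for token refresh" },
      { tdd_phase: "green", step: "Implement refresh logic until tests pass" },
      { tdd_phase: "refactor", step: "Remove duplication, keep tests green" }
    ]
  }
};
```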
## Output Files Structure
```
.workflow/active//{session-id}/
.workflow/active/{session-id}/
├── IMPL_PLAN.md # Unified plan with TDD Implementation Tasks section
├── TODO_LIST.md # Progress tracking with internal TDD phase indicators
├── .task/
@@ -432,9 +672,9 @@ This section provides quick reference for TDD task JSON structure. For complete
- No circular dependencies allowed
### Task Limits
- Maximum 10 total tasks (simple + subtasks)
- Flat hierarchy (≤5 tasks) or two-level (6-10 tasks with containers)
- Re-scope requirements if >10 tasks needed
- Maximum 18 total tasks (simple + subtasks) - hard limit for TDD workflows
- Flat hierarchy (≤5 tasks) or two-level (6-18 tasks with containers)
- Re-scope requirements if >18 tasks needed
### TDD Workflow Validation
- `meta.tdd_workflow` must be true
@@ -454,7 +694,7 @@ This section provides quick reference for TDD task JSON structure. For complete
### TDD Generation Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Task count exceeds 10 | Too many features or subtasks | Re-scope requirements or merge features |
| Task count exceeds 18 | Too many features or subtasks | Re-scope requirements or merge features into multiple TDD sessions |
| Missing test framework | No test config | Configure testing first |
| Invalid TDD workflow | Missing tdd_phase or incomplete flow_control | Fix TDD structure in ANALYSIS_RESULTS.md |
| Missing tdd_workflow flag | Task doesn't have meta.tdd_workflow: true | Add TDD workflow metadata |
@@ -504,7 +744,7 @@ IMPL (Green phase) tasks include automatic test-fix cycle:
3. **Success Path**: Tests pass → Complete task
4. **Failure Path**: Tests fail → Enter iterative fix cycle:
- **Gemini Diagnosis**: Analyze failures with bug-fix template
- **Fix Application**: Agent (default) or CLI (if `command` field present)
- **Fix Application**: Agent executes fixes directly
- **Retest**: Verify fix resolves failures
- **Repeat**: Up to max_iterations (default: 3)
5. **Safety Net**: Auto-revert all changes if max iterations reached
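A compact sketch of this cycle; the helper names (runTests, diagnoseWithGemini, applyFix, revertChanges) are placeholders for the steps above, not real APIs:
```javascript
// Sketch only: helper names are placeholders for the steps described above.
async function greenPhaseWithFixCycle(task) {
  let result = await runTests(task);
  for (let attempt = 1; !result.passed && attempt <= task.meta.max_iterations; attempt++) {
    const diagnosis = await diagnoseWithGemini(result.failures);  // bug-fix template
    await applyFix(diagnosis);                                    // agent applies the fix directly
    result = await runTests(task);
  }
  if (!result.passed) await revertChanges(task);                  // safety net after max iterations
  return result.passed;
}
```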
@@ -512,6 +752,6 @@ IMPL (Green phase) tasks include automatic test-fix cycle:
## Configuration Options
- **meta.max_iterations**: Number of fix attempts (default: 3 for TDD, 5 for test-gen)
- **CLI tool usage**: Determined semantically from user's task description via `command` field in implementation_approach
- **meta.max_iterations**: Number of fix attempts in Green phase (default: 3)
- **meta.execution_config.method**: Execution routing (agent/cli) determined from userConfig.executionMethod
