Compare commits

225 Commits

Author SHA1 Message Date
catlog22
19acaea0f9 feat: add exploration and plan JSON schemas for task context and planning 2025-11-24 21:57:19 +08:00
catlog22
481a716c09 docs: update workflow initialization documentation and add project metadata schema 2025-11-24 21:46:44 +08:00
catlog22
07775cda30 fix: correct spelling of 'cli-execution-agent' in documentation 2025-11-24 21:46:08 +08:00
catlog22
3acf6fcba8 docs: enhance test task generation documentation with existing test infrastructure and framework usage 2025-11-24 21:14:32 +08:00
catlog22
f798dd4172 refactor: consolidate workflow commands and add UI design templates
1. Remove deprecated task-generate.md
   - Replaced by task-generate-agent.md in previous commits
   - Clean up duplicate command documentation

2. Update ui-design-agent.md
   - Refine agent specifications and workflow integration

3. Add UI design template files
   - Create ~/.claude/workflows/cli-templates/ui-design/systems/
   - Add animation-tokens.json for animation and transition patterns
   - Add design-tokens.json for color, typography, spacing tokens
   - Add layout-templates.json for structural layout patterns
   - Modular design system management

4. Update analyze_commands.py
   - Enhance command analysis capabilities
   - Improve metadata extraction

Key improvements:
- Remove duplicate/deprecated documentation
- Modular UI design template management
- Better workflow command organization
2025-11-24 19:36:33 +08:00
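As a loose illustration of what a modular token file such as design-tokens.json might hold (the commit only says it covers color, typography, and spacing tokens; the names and values below are hypothetical):

```bash
# Hypothetical minimal design-tokens.json; actual token names/values are not shown in this log.
cat > /tmp/design-tokens-example.json <<'EOF'
{
  "color":      { "primary": "#2563eb", "surface": "#ffffff" },
  "typography": { "font-family": "Inter, sans-serif", "base-size": "16px" },
  "spacing":    { "sm": "8px", "md": "16px", "lg": "24px" }
}
EOF
```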
catlog22
aabc6294f4 refactor: optimize test workflow commands and extract templates
1. Optimize test-task-generate.md structure
   - Simplify from 417 lines to 217 lines (~48% reduction)
   - Reference task-generate-agent.md structure for consistency
   - Enhance agent prompt with clear specifications:
     * Explicit agent configuration reference
     * Detailed TEST_ANALYSIS_RESULTS.md integration mapping
     * Complete test-fix cycle configuration
     * Clear task structure requirements (IMPL-001 test-gen, IMPL-002+ test-fix)
   - Fix format errors (remove nested code blocks in agent prompt)
   - Emphasize "planning only" nature (does NOT execute tests)

2. Refactor test-concept-enhanced.md to use cli-execute-agent
   - Simplify from 463 lines to 142 lines (~69% reduction)
   - Change from direct Gemini execution to cli-execute-agent delegation
   - Extract Gemini prompt template to separate file:
     * Create ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt
     * Use $(cat ~/.claude/...) pattern for template reference
   - Clarify responsibility separation:
     * Command: Workflow coordination and validation
     * Agent: Execute Gemini analysis and generate outputs
   - Agent generates both gemini-test-analysis.md and TEST_ANALYSIS_RESULTS.md
   - Remove verbose embedded prompts and output format descriptions

Key improvements:
- Modular template management for easier maintenance
- Clear command vs agent responsibility separation
- Consistent documentation structure across workflow commands
- Enhanced agent prompts ensure correct output generation
- Use ~/.claude paths for installed workflow location
2025-11-24 19:33:02 +08:00
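A minimal bash sketch of the `$(cat ~/.claude/...)` template-reference pattern named in the commit above; the template path comes from the commit, while the gemini invocation and appended context are assumptions:

```bash
# Template path is taken from the commit; everything else is illustrative.
TEMPLATE=~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt

# Compose the analysis prompt from the shared template plus task-specific context.
PROMPT="$(cat "$TEMPLATE")

Target feature: ${FEATURE:-user login flow}"

gemini -p "$PROMPT"   # assumed CLI invocation; the agent may pass the prompt differently
```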
catlog22
adbb2070bb docs: update action-planning-agent and task-generate-agent documentation to clarify loading strategies and steps 2025-11-24 10:47:37 +08:00
catlog22
3c9cf3a677 docs: enhance action-planning-agent and task-generate-agent documentation with progressive loading strategy and smart artifact selection 2025-11-24 10:36:06 +08:00
catlog22
ff808ed539 docs: update task-generate-agent documentation for clarity on planning document generation and execution phases 2025-11-23 22:57:43 +08:00
catlog22
99a5c75b13 docs: enhance task generation agent documentation with progressive loading strategy and smart artifact selection 2025-11-23 22:44:11 +08:00
catlog22
7453987cfe Add documentation for new CLI commands: docs-related-cli and lite-fix
- Implemented the `docs-related-cli` command for context-aware documentation generation and updates to changed modules, using CLI execution with tool fallback.
- Introduced the `lite-fix` command for lightweight bug diagnosis and fix workflow, featuring intelligent severity assessment and optional hotfix mode for production incidents.
2025-11-23 22:18:39 +08:00
catlog22
4bb4bdc124 refactor: migrate agents to path-based context loading
- Update action-planning-agent and task-generate-agent to load context
  via file paths instead of embedded context packages
- Streamline test-cycle-execute workflow execution
- Remove redundant content from conflict-resolution and context-gather
- Update SKILL.md and tdd-plan documentation
2025-11-23 22:06:13 +08:00
catlog22
3915f7cb35 docs: update execute.md to clarify task loading and resume mode behavior 2025-11-23 21:18:30 +08:00
catlog22
657af628fd refactor: remove performance metrics and context management details from execute.md 2025-11-23 21:13:49 +08:00
catlog22
b649360cd6 docs: update agent documentation for context package and execution responsibilities 2025-11-23 21:04:25 +08:00
catlog22
20aa0f3a0b feat: add agent-task execution rules to ensure one agent per task JSON 2025-11-23 20:37:03 +08:00
catlog22
c8dd1adc69 docs: update WORKFLOW_DECISION_GUIDE with lite-fix workflow integration
- Add Bug Fix decision branch at workflow entry point (Q0)
- Integrate lite-fix workflow into decision flowchart
  - Standard bug fix path with adaptive severity assessment
  - Hotfix mode for production incidents
  - Diagnostic fallback for unclear root causes
- Add comprehensive lite-fix documentation:
  - 6-phase workflow explanation
  - Session artifacts structure
  - Risk scoring and workflow adaptation
  - Standard vs hotfix mode comparison
- Add two new scenarios:
  - Scenario D: Standard bug fix workflow
  - Scenario E: Production hotfix workflow
- Update quick reference tables:
  - Add bug fix commands by knowledge level
  - Add bug fix stage in project lifecycle
  - Add Lite-Fix workflow mode
- Update best practices and common pitfalls
- Update version to 5.9.2

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 20:16:30 +08:00
catlog22
d53e7e18db docs: update README files to version 5.9.2 with lite-fix enhancements
- Add English README.md based on Chinese version
- Update version badges to v5.9.2 in both README files
- Add Lite-Fix workflow documentation and examples
- Document session artifact structure and management
- Include lite-fix quick start examples (standard and hotfix modes)
- Align with latest lite-fix session management features

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 20:11:48 +08:00
catlog22
0207677857 feat: enhance lite-fix workflow with session folder setup and artifact management 2025-11-23 20:05:34 +08:00
catlog22
72f27fb2f8 feat: add implementation_approach command field execution mode
- Add CLI mode example to implementation_approach showing optional command field
- Document two execution modes:
  1. Default Mode (agent execution): No command field, agent interprets steps
  2. CLI Mode (command execution): With command field, uses CLI tools (codex/gemini/qwen)

- Add Implementation Approach Execution Modes section with:
  - Mode descriptions and use cases
  - Required fields for each mode
  - Command patterns for CLI mode (codex, gemini)
  - Mode selection strategy

- Simplify implementation_approach examples to use generic placeholders
- Emphasize command field is optional, agent decides based on task complexity

Benefits:
- Clear documentation of both execution patterns
- Agents understand when to use CLI tools vs direct execution
- Pattern examples show structure without over-specifying content
- Supports both autonomous agent work and CLI tool delegation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 19:42:08 +08:00
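A hypothetical task JSON fragment showing the two execution modes described above: one step with no command field (default agent execution) and one with a command field (CLI execution). Only `implementation_approach` and `command` are taken from the commit; the step text and the codex invocation are assumptions.

```bash
# Hypothetical fragment; only "implementation_approach" and "command" come from the commit.
cat > /tmp/implementation-approach-example.json <<'EOF'
{
  "implementation_approach": [
    { "step": "Refactor the session loader to read from the active sessions directory" },
    {
      "step": "Generate unit tests for the session loader",
      "command": "codex exec \"generate unit tests for [module]\""
    }
  ]
}
EOF
```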
catlog22
be129f5821 refactor: simplify pre_analysis JSON examples to show patterns only
- Simplify pre_analysis JSON examples to focus on structure patterns
- Replace specific commands with generic placeholders ([pattern], [lang], [N])
- Reduce OPTIONAL section to 5 key pattern examples (vs 12 detailed examples)
- Condense dynamic adaptation guide to essential principles
- Remove overly detailed command examples that could mislead

Benefits:
- Clearer focus on structure patterns rather than specific implementations
- Reduced cognitive load - easier to understand the schema
- Less risk of agents copying specific examples blindly
- More flexibility for agents to create task-appropriate steps

Pattern examples now show:
- Project structure analysis pattern
- Local search pattern (bash/rg/find)
- Gemini CLI pattern
- Qwen CLI pattern
- MCP tools pattern

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 19:39:34 +08:00
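The "local search pattern" category above, first in its placeholder form and then with one purely illustrative instantiation (the concrete pattern, language, and limit are assumptions):

```bash
# Placeholder form, as in the schema docs: rg "[pattern]" --type [lang] -l | head -n [N]
# One illustrative instantiation:
rg "context_package_path" --type md -l | head -n 5
find . -maxdepth 3 -name "*.json" -path "*cli-templates*"
```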
catlog22
b1bb74af0d feat: enhance pre_analysis with comprehensive CLI examples and dynamic guidance
- Add extensive pre_analysis examples organized by category:
  - Required context loading steps
  - Project structure analysis
  - Local codebase exploration (bash/rg/find)
  - Gemini CLI analysis (architecture, execution flow)
  - Qwen CLI analysis (code quality, fallback)
  - MCP tools integration
  - Tech-specific searches (React, database, etc.)

- Add "举一反三" (draw inferences) principle with detailed guidance:
  - Dynamic step selection based on task type
  - Tool selection strategy (Gemini/Qwen/Bash/MCP)
  - Task-specific adaptation examples (security, performance, API)
  - Command composition patterns

- Emphasize that examples are reference templates, not fixed requirements
- Agent must dynamically select and adapt steps based on actual task needs
- Provide clear guidelines for when to use each type of analysis tool

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 19:36:47 +08:00
catlog22
a7a654805c refactor: remove context_signature field from task JSON schema
- Remove context_signature field from meta object
- Simplify meta object to focus on essential fields
- Keep execution_group for parallelization support

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 19:33:29 +08:00
catlog22
c0c894ced1 feat: enhance action-planning-agent task JSON schema definition
- Add missing top-level field: context_package_path
- Add missing context fields: parent, inherited, shared_context
- Add meta fields for parallelization: execution_group, context_signature
- Fix status enumeration: pending|active|completed|blocked|container
- Fix meta.type enumeration: distinguish test-gen from test-fix
- Expand meta.agent options: add action-planning-agent, test-fix-agent, universal-executor
- Enhance artifacts structure: add source, usage, contains fields
- Restructure schema presentation into 4 clear sections
- Add practical pre_analysis examples with CLI bash search commands
- Emphasize quantification requirements for requirements/acceptance fields

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 19:24:46 +08:00
catlog22
7517f4f8ec docs: clarify manifest.json location in plan.md Phase 1
Add explicit note in Phase 1 validation to prevent main Claude from
incorrectly looking for manifest.json in .workflow/active/WFS-xxx/.

manifest.json only exists in .workflow/archives/ (for archived sessions).
Active sessions only have workflow-session.json (metadata).

This prevents confusion during plan execution before agent invocation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 13:44:02 +08:00
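A small bash check mirroring the note above: active sessions under .workflow/active/ carry only workflow-session.json, while manifest.json exists only under .workflow/archives/. The find-based session lookup is an assumption about how a command would locate the active session.

```bash
SESSION_DIR=$(find .workflow/active -maxdepth 1 -type d -name "WFS-*" | head -n 1)

# Active session: metadata only, no manifest.
test -f "$SESSION_DIR/workflow-session.json" && echo "session metadata: OK"
test -f "$SESSION_DIR/manifest.json"         || echo "no manifest here (expected)"

# Archives: the only place manifest.json lives.
test -f .workflow/archives/manifest.json && echo "archive manifest: OK"
```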
catlog22
0b45ff7345 docs: clarify optional historical archive analysis in context-search-agent
Add note in Phase 2 to clarify that historical archive analysis
(querying .workflow/archives/manifest.json) is optional, addressing
discrepancy between context-gather.md (4 tracks) and agent implementation
(3 tracks).

This prevents confusion about manifest.json location and makes the
optional nature of archive querying explicit.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 13:39:36 +08:00
catlog22
0416b23186 feat: add context-package.json sync to synthesis and design-sync workflows
Add Phase 6 to both synthesis.md and design-sync.md to automatically
update context-package.json when artifacts are modified, preventing
stale cache issues when transitioning to /workflow:plan.

Changes:
- synthesis.md: Add Phase 6 to sync updated role analyses to context-package
- design-sync.md: Add Phase 5 to sync design system refs + role analyses

This ensures:
- synthesis → plan: context-package reflects synthesis enhancements
- ui-design → plan: context-package includes design system references
- No manual context-gather re-run needed after artifact updates

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 12:53:08 +08:00
catlog22
948cf3fcd7 fix: prevent unwanted auto-commits during CLI tool execution 2025-11-23 12:29:44 +08:00
catlog22
4272ca9ebd Add CLI commands for full and related documentation generation
- Implemented `/memory:docs-full-cli` for comprehensive project documentation generation using CLI execution with batched agents and tool fallback.
- Introduced `/memory:docs-related-cli` to generate or update documentation for git-changed modules, using direct parallel execution for small module sets and agent batch processing for larger ones.
- Defined execution flows, strategies, and error handling for both commands, ensuring robust documentation processes.
2025-11-23 11:27:23 +08:00
catlog22
73fed4893b docs: Generate ARCHITECTURE.md and EXAMPLES.md
Generated system design and usage examples documentation based on project
context and provided templates.

ARCHITECTURE.md provides an overview of the system's architecture,
including core principles, structure, module map, interactions, design patterns,
API overview, data flow, security, and scalability.

EXAMPLES.md offers end-to-end usage examples, covering quick start, core use cases,
advanced scenarios, testing examples, and best practices.
2025-11-23 11:23:21 +08:00
catlog22
f09c6e2a7a docs: Generate comprehensive project overview README.md 2025-11-23 11:20:35 +08:00
catlog22
65a204a563 fix: enforce synchronous bash execution in memory update workflows
Explicitly set run_in_background: false for all Bash commands in update-full and update-related workflows to prevent unwanted background execution. This ensures synchronous execution without requiring BashOutput polling.

Changes:
- Phase 1-4: Add run_in_background: false to all Bash calls
- Simplify code examples and remove redundant explanations
- Update both direct execution and agent batch execution patterns

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 10:23:30 +08:00
catlog22
ffbc440a7e feat: optimize file tracking in installation process with bulk operations 2025-11-23 09:50:51 +08:00
catlog22
3c28c61bea fix: update minimal documentation output guideline for clarity 2025-11-22 23:21:26 +08:00
catlog22
b0b99a4217 Enhance installation and uninstallation processes
- Added optional quiet mode for backup notifications to reduce output clutter.
- Improved file merging with progress reporting and summary of backed up files.
- Implemented a cleanup function to move old installations to a backup directory before new installations.
- Enhanced manifest handling to distinguish between Global and Path installations.
- Updated uninstallation process to preserve global files if a Global installation exists.
- Improved error handling and user feedback during file operations.
2025-11-22 23:20:39 +08:00
catlog22
4f533f6fd5 feat(lite-execute): intelligent task grouping with parallel/sequential execution
Major improvements:
- Implement dependency-aware task grouping algorithm
  * Same file modifications → sequential batch
  * Different files + no dependencies → parallel batch
  * Keyword-based dependency inference (use/integrate/call/import)
  * Batch limits: Agent 7 tasks, CLI 4 tasks

- Add parallel execution strategy
  * All parallel batches launch concurrently (single Claude message)
  * Sequential batches execute in order
  * No concurrency limit for CLI tools
  * Execution flow: parallel-first → sequential-after

- Enhance code review with task.json acceptance criteria
  * Unified review template for all tools (Gemini/Qwen/Codex/Agent)
  * task.json in CONTEXT field for CLI tools
  * Explicit acceptance criteria verification

- Clarify execution requirements
  * Remove ambiguous strategy instructions
  * Add explicit "MUST complete ALL tasks" requirement
  * Prevent CLI tools from partial execution misunderstanding

- Simplify documentation (~260 lines reduced)
  * Remove redundant examples and explanations
  * Keep core algorithm logic with inline comments
  * Eliminate duplicate descriptions across sections

Breaking changes: None (backward compatible)
2025-11-22 21:05:43 +08:00
catlog22
530c348e95 feat: enhance task execution with intelligent grouping and dependency analysis 2025-11-22 20:57:16 +08:00
catlog22
a98b26b111 feat: add session folder structure for lite-plan artifacts
- Create dedicated session folder (.workflow/.lite-plan/{task-slug}-{timestamp}/)
  for each lite-plan execution to organize all planning artifacts
- Always export task.json (removed optional export question from Phase 4)
- Save exploration.json, plan.json, and task.json to session folder
- Add session artifact paths to executionContext for lite-execute delegation
- Update lite-execute to use artifact file paths for CLI/agent context
- Enable CLI tools (Gemini/Qwen/Codex) and agents to access detailed
  planning context via file references

Benefits:
- Clean separation between different task executions
- All artifacts automatically saved for reusability
- Enhanced context available for execution phase
- Natural audit trail of planning sessions
2025-11-22 20:32:01 +08:00
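A sketch of the session folder layout described above; the folder path pattern and artifact names come from the commit, while the slug/timestamp format and the empty-JSON stand-ins are assumptions:

```bash
SESSION=".workflow/.lite-plan/add-login-form-20251122-203201"   # {task-slug}-{timestamp}, format assumed
mkdir -p "$SESSION"
for artifact in exploration.json plan.json task.json; do
  echo '{}' > "$SESSION/$artifact"    # stand-ins for the real planning artifacts
done
ls "$SESSION"    # exploration.json  plan.json  task.json
```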
catlog22
9f7e33cbde Merge branch 'main' of https://github.com/catlog22/Claude-Code-Workflow 2025-11-22 20:22:21 +08:00
catlog22
a25464ce28 Merge pull request #29 from catlog22/claude/analyze-workflow-logic-015GvXNP62281T3ogUSaJdP4
Analyze workflow execution logic and document findings
2025-11-20 20:16:28 +08:00
catlog22
0a3f2a5b03 Merge pull request #28 from catlog22/claude/check-active-workflow-files-01S7quRK9xHWStgpQ9WeTnF2
Check for .active identifier in workflow files
2025-11-20 20:15:39 +08:00
Claude
1929b7f72d docs: remove obsolete .active marker file references
Remove outdated references to .active-session marker files that are no longer used in the workflow implementation. The system now uses directory-based session management where active sessions are identified by their location in .workflow/active/ directory.

Changes:
- WORKFLOW_DIAGRAMS.md: Replace .active-session marker with actual directory structure
- COMMAND_SPEC.md: Update session:complete description to reflect directory-based archival

The .archiving marker is still valid and used for transactional session completion.
2025-11-20 12:13:45 +00:00
Claude
b8889d99c9 refactor: simplify session selection logic expression
- Condense Case A/B/C headers for clarity
- Merge bash if-else into single line using && ||
- Combine steps 1-4 in Case C into compact flow
- Remove redundant explanations while keeping key info
- Reduce from 64 lines to 39 lines (39% reduction)
2025-11-20 12:12:56 +00:00
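The kind of condensation described above, shown on a generic session check (variable names are illustrative). The && || form is safe here because the middle command is an echo that cannot fail; otherwise an if/else remains the more defensive choice.

```bash
# Before:
#   if [ -d "$SESSION_DIR" ]; then
#     echo "Resuming $SESSION_DIR"
#   else
#     echo "No active session found"
#   fi
# After:
[ -d "$SESSION_DIR" ] && echo "Resuming $SESSION_DIR" || echo "No active session found"
```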
Claude
a79a3221ce fix: enhance workflow execute session selection with user interaction
- Add detailed session discovery logic with 3 cases (none, single, multiple)
- Implement AskUserQuestion for multiple active sessions
- Display rich session metadata (project name, task progress, completion %)
- Support flexible user input (session number, full ID, or partial ID)
- Maintain backward compatibility for single-session scenarios
- Improve user experience with clear confirmation messages

This minimal change addresses session conflict issues without complex
locking mechanisms, focusing on explicit user choice when ambiguity exists.
2025-11-20 12:08:35 +00:00
catlog22
67c18d1b03 Merge pull request #27 from catlog22/claude/cleanup-root-docs-01SAjhmQa3Wc4EWxZoCnFb77
Remove unnecessary documentation from root directory
2025-11-20 19:55:15 +08:00
Claude
2301f263cd docs: cleanup unnecessary root directory documentation
Remove 7 unnecessary documentation files from root directory:
- COMMAND_DOCS_AUDIT_REPORT.md (temporary audit report)
- PLANNING_GAP_ANALYSIS.md (temporary analysis document)
- PROJECT_INTRODUCTION.md (duplicate of README)
- COMMAND_FLOW_STANDARD.md (internal standard)
- COMMAND_TEMPLATE_EXECUTOR.md (internal design)
- COMMAND_TEMPLATE_ORCHESTRATOR.md (internal design)
- LITE_FIX_DESIGN.md (internal design)

Retained essential documentation:
- User guides (README, GETTING_STARTED, FAQ, EXAMPLES)
- Developer docs (CLAUDE.md, CONTRIBUTING, ARCHITECTURE)
- Command references (COMMAND_REFERENCE, COMMAND_SPEC)
- Workflow guides (WORKFLOW_DECISION_GUIDE, WORKFLOW_DIAGRAMS)
2025-11-20 11:53:40 +00:00
catlog22
8d828e8762 Merge branch 'main' of https://github.com/catlog22/Claude-Code-Workflow 2025-11-20 19:44:31 +08:00
catlog22
b573450821 Merge pull request #26 from catlog22/claude/fix-ui-workflow-docs-01BxfT5RLJ8Txc5N8oToBiYn
Remove unsupported URL input from UI workflow docs
2025-11-20 19:39:14 +08:00
Claude
229a9867e6 docs: remove URL support from UI workflow commands
Removed all URL-related parameters and functionality from UI design commands
as they are no longer supported. All commands now only accept local files.

Changes:
- style-extract.md: Removed --urls parameter and all URL mode logic
- layout-extract.md: Removed --urls parameter and DOM extraction via URLs
- COMMAND_SPEC.md: Deleted capture and explore-layers commands, updated syntax

Affected commands:
- /workflow:ui-design:style-extract: Only accepts --images and --prompt
- /workflow:ui-design:layout-extract: Only accepts --images and --prompt
- Removed: /workflow:ui-design:capture (command deleted)
- Removed: /workflow:ui-design:explore-layers (command deleted)

All UI workflow commands now require manual input of local resources
(images, code files, HTML, CSS, JS) instead of fetching from URLs.
2025-11-20 11:35:00 +00:00
Claude
6fe31cc408 docs: fix UI workflow documentation - remove outdated URL support
The /workflow:ui-design:imitate-auto command no longer supports URL input.
Updated all documentation to reflect that it now only accepts:
- Local image files (glob patterns)
- Local code files/directories
- Text prompts

Changes:
- COMMAND_SPEC.md: Updated syntax and examples
- WORKFLOW_DECISION_GUIDE.md/.._EN.md: Replaced URL examples with local file examples
- EXAMPLES.md: Updated design system creation example
- GETTING_STARTED.md/.._CN.md: Fixed command descriptions
- ui-design-workflow-guide.md: Updated multiple sections and examples

Note: layout-extract still supports --urls parameter (not changed)
2025-11-20 11:25:21 +00:00
catlog22
196951ff4f Merge pull request #19 from catlog22/claude/analyze-planning-phase-01LgSNX3QxNDKmaUGpwn11gc
Analyze planning phase for overlooked scenarios
2025-11-20 19:07:20 +08:00
Claude
61c08e1585 docs: update LITE_FIX_DESIGN.md to v2.0 simplified design
Complete rewrite reflecting simplified architecture:

Version Change: 1.0.0 → 2.0.0 (Simplified Design)

Major Updates:
1. Mode Simplification (3 → 2)
   - Removed: Regular, Critical, Hotfix
   - Now: Default (auto-adaptive), Hotfix
   - Added: Intelligent self-adaptation mechanism

2. Parameter Reduction (3 → 1)
   - Removed: --critical, --incident
   - Kept: --hotfix only
   - Simplified: 67% fewer parameters

3. New Core Innovation: Intelligent Self-Adaptation
   - Phase 2 auto-calculates risk score (0-10)
   - Workflow adapts automatically (diagnosis depth, test strategy, review)
   - 4 risk levels: <3.0 (Low), 3.0-5.0 (Medium), 5.0-8.0 (High), ≥8.0 (Critical)

4. Updated All Sections:
   - Design comparison with lite-plan
   - Command syntax before/after
   - Intelligent adaptive workflow details
   - Phase-by-phase adaptation logic
   - Data structure extensions (confidence_level, workflow_adaptation)
   - Implementation roadmap updates
   - Success metrics (mode selection accuracy now 100%)
   - User experience flow comparison

5. New ADRs (Architecture Decision Records):
   - ADR-001: Why remove Critical mode?
   - ADR-002: Why keep Hotfix as separate mode?
   - ADR-003: Why adaptive confirmation dimensions?
   - ADR-004: Why remove --incident parameter?

6. Risk Assessment:
   - Auto-severity detection errors (mitigation: transparent scoring)
   - Users miss --hotfix flag (mitigation: keyword detection)
   - Adaptive workflow confusion (mitigation: clear explanations)

Key Philosophy Shift:
- v1.0: "Provide multiple modes for different scenarios"
- v2.0: "Intelligent single mode that adapts to reality"

Document Status: Design Complete, Development Pending
2025-11-20 11:02:32 +00:00
catlog22
07caf20e0d Merge branch 'main' of https://github.com/catlog22/Claude-Code-Workflow 2025-11-20 18:51:24 +08:00
Claude
1e9ca574ed refactor: simplify lite-fix command modes and parameters
Reduced complexity from 3 modes to 2 modes with intelligent adaptation:

Before (complex):
- 3 modes: Regular, Critical, Hotfix
- 3 parameters: --critical, --hotfix, --incident

After (simplified):
- 2 modes: Default (auto-adaptive), Hotfix
- 1 parameter: --hotfix

Key improvements:
- Default mode intelligently adapts based on risk score (Phase 2)
- Automatic workflow adjustment (diagnosis depth, test strategy, review)
- Risk score thresholds determine process complexity automatically
- Removed manual Critical mode selection (now auto-detected)
- Removed --incident parameter (can include in bug description)

Adaptive behavior:
- risk_score <3.0: Full test suite, comprehensive diagnosis
- 3.0-5.0: Focused integration tests, moderate diagnosis
- 5.0-8.0: Smoke+critical tests, focused diagnosis
- ≥8.0: Smoke tests only, minimal diagnosis

Line count: 652 lines (reduced from 707)
Matches lite-plan simplicity while maintaining full functionality
2025-11-20 10:44:20 +00:00
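A literal rendering of the adaptive thresholds listed above, purely to make the mapping concrete; how lite-fix actually computes risk_score is not shown in this log.

```bash
risk_score=${1:-4.2}   # example value; the real score comes from Phase 2 of lite-fix
score_x10=$(awk -v r="$risk_score" 'BEGIN { printf "%d", r * 10 }')

if   [ "$score_x10" -lt 30 ]; then echo "full test suite, comprehensive diagnosis"
elif [ "$score_x10" -lt 50 ]; then echo "focused integration tests, moderate diagnosis"
elif [ "$score_x10" -lt 80 ]; then echo "smoke + critical tests, focused diagnosis"
else                               echo "smoke tests only, minimal diagnosis"
fi
```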
catlog22
d0ceb835b5 Merge pull request #25 from catlog22/claude/add-document-analysis-template-01T8qhUKLCGSev5RunZyxBwp
Add document analysis prompt template
2025-11-20 18:43:05 +08:00
catlog22
fad32d7caf Merge pull request #24 from catlog22/claude/optimize-cli-prompts-01QJm4wjGas8z5BEF5zGTM7H
Optimize CLI prompt templates for clarity
2025-11-20 18:42:27 +08:00
catlog22
806b782b03 Merge pull request #23 from catlog22/claude/workflow-status-dashboard-01MWTxt5mbpC4G8ADPCLHZ73
Add workflow status dashboard with task board
2025-11-20 18:34:55 +08:00
Claude
a62bbd6a7f refactor: simplify document analysis template and update strategy guide
Changes:
- Simplify 02-analyze-technical-document.txt to match other template formats
- Reduce from 280 lines to 33 lines with concise checklist structure
- Update intelligent-tools-strategy.md to include document analysis
- Add document-analysis to Quick Decision Matrix and Task-Template Matrix

Template now follows standard format:
- Brief description + CORE CHECKLIST + REQUIRED ANALYSIS
- OUTPUT REQUIREMENTS + VERIFICATION CHECKLIST + Focus statement
- Maintains all key requirements: pre-planning, evidence-based, self-critique
2025-11-20 10:34:36 +00:00
catlog22
2a7d55264d Merge pull request #22 from catlog22/claude/update-readme-workflow-guide-01AgqTs4ZTPmJqJHJRkzrYzZ
Update English README with workflow guide
2025-11-20 18:33:57 +08:00
Claude
837bee79c7 feat: add document analysis template for technical documents and papers
Add new CLI mode for systematic technical document analysis with:
- CLI command: /cli:mode:document-analysis for Gemini/Qwen/Codex
- Comprehensive analysis template with 6-phase protocol
- Support for README, API docs, research papers, specifications, tutorials
- Evidence-based analysis with pre-planning and self-critique requirements
- Precise language constraints and structured output format

Template features:
- Pre-analysis planning phase for approach definition
- Multi-phase analysis: assessment, extraction, critical analysis, synthesis
- Self-critique requirement before final output
- Mandatory section references and evidence citations
- Output length control proportional to document size
2025-11-20 10:05:09 +00:00
Claude
d8ead86b67 refactor: optimize CLI prompt templates for clarity and directness
Optimized 7 key CLI prompt templates following best practices:

Key improvements:
- Prioritize critical instructions at the top (role, constraints, output format)
- Replace verbose/persuasive language with direct, precise wording
- Add explicit planning requirements before final output
- Remove emojis and unnecessary adjectives
- Simplify section headers and structure
- Convert verbose checklists to concise bullet points
- Add self-review checklists for quality control

Files optimized:
- analysis/01-diagnose-bug-root-cause.txt: Simplified persona, added planning steps
- analysis/02-analyze-code-patterns.txt: Removed emojis, added planning requirements
- planning/01-plan-architecture-design.txt: Streamlined capabilities, direct language
- documentation/module-readme.txt: Concise structure, planning requirements
- development/02-implement-feature.txt: Clear planning phase, simplified checklist
- development/02-generate-tests.txt: Direct requirements, focused verification
- planning-roles/product-owner.md: Simplified role definition, added planning process

Benefits:
- Clearer expectations for model output
- Reduced token usage through conciseness
- Better focus on critical instructions
- Consistent structure across templates
- Explicit planning/self-critique requirements
2025-11-20 10:03:57 +00:00
Claude
8c2a7b6983 docs: update README to reference English workflow decision guide
Updated the English README.md to reference WORKFLOW_DECISION_GUIDE_EN.md
instead of WORKFLOW_DECISION_GUIDE.md for proper language consistency.
2025-11-20 10:00:09 +00:00
catlog22
f5ca033ee8 Merge pull request #20 from catlog22/claude/add-cli-workflow-guide-01XnbftHLPsFZwGDFdXSteRN
Add CLI usage steps to workflow guide
2025-11-20 17:58:02 +08:00
Claude
842ed624e8 fix: use ~/.claude path format for template reference
Update template path from .claude/templates to ~/.claude/templates
to follow project convention for path references.
2025-11-20 09:54:41 +00:00
Claude
c34a6042c0 docs: add CLI results as memory/context section
Add comprehensive explanation of how CLI tool results can be saved and
reused as context for subsequent operations:
- Result persistence in workflow sessions (.chat/ directory)
- Using analysis results as planning basis
- Using analysis results as implementation basis
- Cross-session references
- Memory update loops with iterative optimization
- Visual memory flow diagram showing phase-to-phase context passing
- Best practices for maintaining continuity and quality

This enables intelligent workflows where Gemini/Qwen analysis informs
Codex implementation, and all results accumulate as project memory for
future decision-making. Integrates with /workflow:plan and
/workflow:lite-plan commands.
2025-11-20 09:53:21 +00:00
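A rough sketch of the persistence loop described above; the session path, file names, and both CLI invocations are assumptions about how analysis output might be stored in .chat/ and reused as context.

```bash
SESSION=.workflow/active/WFS-example          # example session path
mkdir -p "$SESSION/.chat"

# Step 1: store the analysis result as session memory (invocation assumed).
gemini -p "Analyze the module structure of src/ and summarize key interfaces" \
  > "$SESSION/.chat/gemini-module-analysis.md"

# Step 2: reuse the stored analysis as implementation context (invocation assumed).
codex exec "Implement the refactor outlined in the following analysis:
$(cat "$SESSION/.chat/gemini-module-analysis.md")"
```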
Claude
383da9ebb7 docs: add CLI tools collaboration mode to Workflow Decision Guide
Add comprehensive section on multi-model CLI collaboration (Gemini/Qwen/Codex):
- Three execution modes: serial, parallel, and hybrid
- Semantic invocation vs command invocation patterns
- Integration examples with Lite and Full workflows
- Best practices for tool selection and execution strategies

Updates both Chinese and English versions with practical examples showing
how to leverage ultra-long context models (Gemini/Qwen) for analysis and
Codex for precise code implementation.
2025-11-20 09:51:50 +00:00
Claude
4693527a8e feat: add HTML dashboard generation to workflow status command
Add --dashboard parameter to /workflow:status command that generates
an interactive HTML task board displaying active and archived sessions.

Features:
- Responsive layout with modern CSS design
- Real-time statistics for sessions and tasks
- Task cards with status indicators (pending/in_progress/completed)
- Progress bars for each session
- Search and filter functionality
- Dark/light theme toggle
- Separate panels for active and archived sessions

Implementation:
- Added Dashboard Mode section to .claude/commands/workflow/status.md
- Created HTML template at .claude/templates/workflow-dashboard.html
- Template uses data injection pattern with {{WORKFLOW_DATA}} placeholder
- Generated dashboard saved to .workflow/dashboard.html

Usage:
  /workflow:status --dashboard

The dashboard provides a visual overview of all workflow sessions
and their tasks, making it easier to track project progress.
2025-11-20 09:49:54 +00:00
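A rough sketch of the {{WORKFLOW_DATA}} injection step; the template and output paths come from the commit, while the jq-based data collection and the bash substitution are assumptions about the mechanism.

```bash
# Collect session metadata as a compact JSON array (collection mechanism assumed).
WORKFLOW_DATA=$(jq -cs '.' .workflow/active/*/workflow-session.json 2>/dev/null || echo '[]')

# Splice the data into the template at the placeholder and write the dashboard.
template=$(cat .claude/templates/workflow-dashboard.html)
placeholder='{{WORKFLOW_DATA}}'
printf '%s\n' "${template//"$placeholder"/$WORKFLOW_DATA}" > .workflow/dashboard.html
```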
Claude
5f0dab409b refactor: convert lite-fix.md to full English
Changed all Chinese text to English for consistency:
- Table headers: "适用场景" → "Use Case", "流程特点" → "Workflow Characteristics"
- Example comments: Chinese descriptions → English descriptions
- All mixed language content now fully in English

Maintains same structure and functionality (707 lines).
2025-11-20 09:49:05 +00:00
Claude
c679253c30 refactor: streamline lite-fix.md to match lite-plan's concise style (707 lines)
Reduced from 1700+ lines to 707 lines while preserving core functionality:

Preserved:
- Complete 6-phase execution flow
- Three severity modes (Regular/Critical/Hotfix)
- Data structure definitions
- Best practices and quality gates
- Related commands and comparisons

Removed/Condensed:
- Redundant examples (kept 3 essential ones)
- Verbose implementation details
- Duplicate explanations
- Extended discussion sections

Format matches lite-plan.md (667 lines) for consistency.
Detailed design rationale remains in LITE_FIX_DESIGN.md.
2025-11-20 09:41:32 +00:00
catlog22
fc965c87d7 Merge pull request #17 from catlog22/claude/optimize-doc-phase2-filename-012ABaeYT56XeMJ73zUeA3kt
Optimize JSON filename in doc command phase2
2025-11-20 17:40:30 +08:00
catlog22
50a36ded97 Merge pull request #18 from catlog22/claude/add-english-workflow-guide-01EfMQYn5ZBavQWNB5PgSN85
Add English version of Workflow Decision Guide
2025-11-20 17:40:11 +08:00
Claude
c5a0f635f4 docs: add English version of Workflow Decision Guide
Add WORKFLOW_DECISION_GUIDE_EN.md with complete English translation of the workflow decision guide, including:
- Full lifecycle command selection flowchart
- Decision point explanations with examples
- Testing and review strategies
- Complete flows for typical scenarios
- Quick reference tables by knowledge level, project phase, and work mode
- Best practices and common pitfalls
2025-11-20 09:35:45 +00:00
Claude
ca9653c2e6 refactor: rename phase2-analysis.json to doc-planning-data.json
Optimize the Phase 2 output filename in /memory:docs command for better clarity:
- Old: phase2-analysis.json (generic, non-descriptive)
- New: doc-planning-data.json (clear purpose, self-documenting)

The new name better reflects that this file contains comprehensive
documentation planning data including folder analysis, grouping
information, existing docs, and statistics.

Updated all references in command documentation and skill guides.
2025-11-20 09:22:32 +00:00
Claude
38f2355573 feat: add lite-fix command design for bug diagnosis and emergency fixes
Introduces /workflow:lite-fix - a lightweight bug fixing workflow optimized
for rapid diagnosis, targeted fixes, and streamlined verification.

Command Design:
- Three severity modes: Regular (2-4h), Critical (30-60min), Hotfix (15-30min)
- Six-phase execution: Diagnosis → Impact → Planning → Verification → Confirmation → Execution
- Intelligent code search: cli-explore-agent (regular) → direct search (critical) → minimal (hotfix)
- Risk-aware verification: Full test suite → Focused tests → Smoke tests

Key Features:
- Structured root cause analysis (file:line, reproduction steps, blame info)
- Quantitative impact assessment (risk score 0-10, user/business impact)
- Multi-strategy fix planning (immediate patch vs comprehensive refactor)
- Adaptive branch strategy (feature branch vs hotfix branch from production tag)
- Automatic follow-up task generation for hotfixes (tech debt management)
- Real-time deployment monitoring with auto-rollback triggers

Integration:
- Complements /workflow:lite-plan (fix vs feature development)
- Reuses /workflow:lite-execute for execution layer
- Integrates with /cli:mode:bug-diagnosis for preliminary analysis
- Escalation path to /workflow:plan for complex refactors

Design Documents:
- .claude/commands/workflow/lite-fix.md - Complete command specification
- LITE_FIX_DESIGN.md - Architecture design and decision records

Addresses: PLANNING_GAP_ANALYSIS.md Scenario #8 (Emergency Fix)

Expected Impact:
- Reduce bug fix time by 50-70%
- Improve diagnosis accuracy to 85%+
- Reduce production hotfix risks
- Systematize technical debt from quick fixes
2025-11-20 09:21:26 +00:00
Claude
2fb1015038 docs: add comprehensive planning gap analysis for 15 software development scenarios
Analysis identifies critical gaps in current planning workflows:

High Priority Gaps:
- Legacy code refactoring (no test coverage safety nets)
- Emergency hotfix workflows (production incidents)
- Data migration planning (rollback and validation)
- Dependency upgrade management (breaking changes)

Medium Priority Gaps:
- Incremental rollout with feature flags
- Multi-team coordination and API contracts
- Tech debt systematic management
- Performance optimization with profiling

Analysis includes:
- 15 detailed scenario analyses with gap identification
- Enhanced Task JSON schema extension proposals
- Implementation roadmap (4 phases)
- Priority recommendations based on real-world impact

Impact: Extends planning coverage from ~40% to ~90% of software development scenarios
2025-11-20 09:08:27 +00:00
catlog22
d7bee9bdf2 docs: clarify path parameter description in /memory:docs command
Improve the path parameter documentation to eliminate ambiguity:
- Change "Target directory" to "Source directory to analyze"
- Explicitly state documentation is generated in .workflow/docs/{project_name}/ at workspace root
- Emphasize docs are NOT created within the source path itself
- Add concrete example showing path mirroring behavior

This resolves potential confusion about where documentation files are created.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 15:48:06 +08:00
catlog22
751d251433 Merge pull request #16 from catlog22/claude/fix-lexical-error-01QGNCxKABnMVrn1PSTCMLCW
Fix lexical error in rich display rendering
2025-11-20 13:58:18 +08:00
Claude
51b1eb5da6 fix: correct Mermaid parallelogram syntax for command nodes
Fixed lexical error in Mermaid diagram where nodes starting with
[/workflow: were not properly closed. Changed to use correct
parallelogram syntax [/ text /] to resolve "Unrecognized text" error.

Affected nodes:
- BrainIdea, BrainDesign (brainstorm commands)
- UIImitate, UIExplore, UISync (UI design commands)
- LitePlanE, LitePlanNormal, LiteAgent (lite workflow commands)
- Verify, Execute variants (verification/execution commands)
- TDD, TestGen, TestCycle (testing commands)
- SecurityReview, ArchReview, QualityReview, GeneralReview (review commands)
2025-11-20 05:45:41 +00:00
catlog22
275ed051c6 Merge pull request #15 from catlog22/claude/add-project-documentation-01KoecebaGvjvFiUYgAWyL6F
fix: remove quotes from Mermaid node labels to resolve rendering error
2025-11-20 13:32:26 +08:00
Claude
fa7f37695e fix: remove quotes from Mermaid node labels to resolve rendering error 2025-11-20 05:26:25 +00:00
catlog22
5e69748016 Merge pull request #14 from catlog22/claude/add-project-documentation-01KoecebaGvjvFiUYgAWyL6F
fix: replace <br/> with <br> for GitHub Mermaid compatibility
2025-11-20 13:22:49 +08:00
Claude
f1fff34a9d fix: replace <br/> with <br> for GitHub Mermaid compatibility
GitHub's Mermaid renderer requires <br> instead of <br/> for line breaks in node labels.
Fixed all 32 occurrences in the workflow decision flowchart.
2025-11-20 05:20:23 +00:00
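A one-liner equivalent of the fix described above; the target file name is an assumption based on the decision-guide flowchart referenced in neighboring commits.

```bash
# GitHub's Mermaid renderer wants <br>, not <br/>, in node labels.
sed -i 's|<br/>|<br>|g' WORKFLOW_DECISION_GUIDE.md
```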
catlog22
8ae3da8f61 Merge pull request #13 from catlog22/claude/add-project-documentation-01KoecebaGvjvFiUYgAWyL6F
docs: clarify brainstorm vs plan workflow and add decision guide
2025-11-20 13:19:03 +08:00
Claude
62ffc5c645 docs: clarify brainstorm vs plan workflow and add decision guide
Major improvements to workflow understanding and guidance:

1. New Document: WORKFLOW_DECISION_GUIDE.md
   - Comprehensive decision flowchart for full software lifecycle
   - Interactive Mermaid diagram with 40+ decision points
   - Clear guidance on when to use each command
   - Real-world scenarios for all workflow types
   - Quick reference tables by knowledge level, project stage, and mode
   - Expert tips and common pitfalls

2. Clarified Brainstorm vs Plan Relationship
   Core concept correction:
   - Brainstorm: Use when you know WHAT to build, NOT HOW
   - Plan: Use when you know both WHAT and HOW
   - Plan can accept user input OR brainstorm output

3. Updated Documentation Files
   - GETTING_STARTED.md: Added clear distinction and decision criteria
   - GETTING_STARTED_CN.md: Chinese version with same clarifications
   - FAQ.md: Enhanced brainstorm usage explanation with workflow comparison table
   - README.md: Added Workflow Decision Guide link
   - README_CN.md: Added Chinese link to decision guide

4. Key Improvements
   When to Use Brainstorming:
   - Unclear solution approach (multiple ways to solve)
   - Architectural exploration needed
   - Requirements clarification (high-level clear, details not)
   - Multiple trade-offs to analyze
   - Unfamiliar domain

   When to Skip Brainstorming:
   - Clear implementation approach
   - Similar to existing code patterns
   - Well-defined requirements
   - Simple features

5. Decision Flowchart Covers:
   - Ideation phase (know what to build?)
   - Design phase (know how to build?)
   - UI design phase (need UI design?)
   - Planning phase (complexity level?)
   - Testing phase (TDD, post-test, test-fix?)
   - Review phase (security, architecture, quality?)
   - Completion

Benefits:
- Eliminates confusion about brainstorm usage
- Provides clear decision criteria for all commands
- Visual flowchart helps users navigate workflows
- Comprehensive coverage of all development stages
- Reduces trial-and-error in workflow selection

Files modified: 5
Files added: 1 (WORKFLOW_DECISION_GUIDE.md)
2025-11-20 05:14:57 +00:00
catlog22
758321b829 Merge pull request #12 from catlog22/claude/add-project-documentation-01KoecebaGvjvFiUYgAWyL6F
docs: add comprehensive project documentation
2025-11-20 13:08:54 +08:00
Claude
85d7fd9340 docs: fix command name inconsistencies across documentation
Fix multiple command naming and parameter inconsistencies:

1. Command Name Fixes:
   - Change /cli:mode:bug-index to /cli:mode:bug-diagnosis
   - Affected files: COMMAND_REFERENCE.md, COMMAND_SPEC.md,
     EXAMPLES.md, GETTING_STARTED.md, GETTING_STARTED_CN.md,
     PROJECT_INTRODUCTION.md

2. Parameter Fixes:
   - Remove invalid --tool parameter from /workflow:lite-plan
   - The lite-plan command does not support --tool flag
   - Affected files: README.md, README_CN.md, COMMAND_SPEC.md

3. Invalid Command Fixes in PROJECT_INTRODUCTION.md:
   - Change /gemini:analyze to /cli:analyze --tool gemini
   - Change /qwen:mode:plan to /cli:mode:plan --tool qwen
   - Change /codex:mode:auto to /cli:execute --tool codex
   - Change /workflow:plan-deep to /workflow:plan
   - These legacy command formats are no longer valid

Impact:
- All documentation now matches actual command implementations
- Users will not encounter "command not found" errors
- Examples are now executable and correct
- Consistent command syntax across all docs

Files modified: 8
- COMMAND_REFERENCE.md
- COMMAND_SPEC.md
- EXAMPLES.md
- GETTING_STARTED.md
- GETTING_STARTED_CN.md
- PROJECT_INTRODUCTION.md
- README.md
- README_CN.md
2025-11-20 05:07:36 +00:00
Claude
fbd41a0851 docs: add comprehensive project documentation
Add four new comprehensive documentation files and update READMEs:

New Documentation:
- ARCHITECTURE.md: High-level system architecture overview
  * Design philosophy and core principles
  * System components and data flow
  * Multi-agent system details
  * CLI tool integration strategy
  * Session and memory management
  * Performance optimizations and best practices

- CONTRIBUTING.md: Contributor guidelines
  * Code of conduct
  * Development setup instructions
  * Coding standards and best practices
  * Testing guidelines
  * Documentation requirements
  * Pull request process
  * Release process

- FAQ.md: Frequently asked questions
  * General questions about CCW
  * Installation and setup
  * Usage and workflows
  * Commands and syntax
  * Sessions and tasks
  * Agents and tools
  * Memory system
  * Troubleshooting
  * Advanced topics

- EXAMPLES.md: Real-world use cases
  * Quick start examples
  * Web development (Express, React, e-commerce)
  * API development (REST, GraphQL)
  * Testing & quality assurance (TDD, test generation)
  * Refactoring (monolith to microservices)
  * UI/UX design (design systems, landing pages)
  * Bug fixes (simple and complex)
  * Documentation generation
  * DevOps & automation (CI/CD, Docker)
  * Complex projects (chat app, analytics dashboard)

Updated Documentation:
- README.md: Added comprehensive documentation section
  * Organized docs into categories
  * Links to all new documentation files
  * Improved navigation structure

- README_CN.md: Added documentation section (Chinese)
  * Same organization as English version
  * Links to all documentation resources

Benefits:
- Provides clear entry points for new users
- Comprehensive reference for all features
- Real-world examples for practical learning
- Complete contributor onboarding
- Better project discoverability
2025-11-20 04:44:15 +00:00
catlog22
2a63ab5e0a Merge pull request #11 from catlog22/claude/refactor-task-workflow-01WBDUE6UgN74q4JhmwDJ6c1
Refactor task replan into workflow structure
2025-11-20 12:34:31 +08:00
Claude
46527c5b9a docs: remove Core Principles sections from replan commands
Remove unnecessary Core Principles sections that reference task-core.md
and workflow-architecture.md from both workflow:replan and task:replan
documentation to align with execute.md style and maintain simplicity.
2025-11-20 04:32:00 +00:00
Claude
b9e893245b refactor(workflow): simplify workflow:replan documentation
## Changes

### Documentation Simplification
- Removed version information and comparison tables
- Removed "Migration from task:replan" section
- Removed "Related Commands" section
- Removed redundant "Implementation Notes" section
- Streamlined examples to essential use cases only

### Followed execute.md Style
- Clean, focused structure
- Direct entry into core functionality
- Clear phase-based execution lifecycle
- Concise error handling
- Essential file structure reference

### Maintained Core Content
- Overview with core capabilities
- Operation modes (Session/Task)
- 6-phase execution lifecycle
- Interactive clarification questions
- TodoWrite progress tracking
- Error handling examples
- Practical usage examples

The documentation now follows the established pattern from execute.md
and other workflow commands, providing essential information without
redundant sections or comparisons.
2025-11-20 03:19:47 +00:00
Claude
d96a8a06a0 feat(workflow): add workflow:replan command with session-level replanning support
## Changes

### New Command: workflow:replan
- Created `.claude/commands/workflow/replan.md` with comprehensive session-level replanning
- Supports dual modes: session-wide and task-specific replanning
- Interactive boundary clarification using guided questioning
- Updates multiple artifacts: IMPL_PLAN.md, TODO_LIST.md, task JSONs, session metadata
- Comprehensive backup management with restore capability

### Key Features
1. **Interactive Clarification**: Guided questions to define modification scope
   - Modification scope (tasks only, plan update, task restructure, comprehensive)
   - Affected modules detection
   - Task change types (add, remove, merge, split, update)
   - Dependency relationship updates

2. **Session-Aware Operations**:
   - Auto-detects active sessions from `.workflow/active/`
   - Updates all session artifacts consistently
   - Maintains backup history in `.process/backup/replan-{timestamp}/`

3. **Comprehensive Artifact Updates**:
   - IMPL_PLAN.md section modifications
   - TODO_LIST.md task list updates
   - Task JSON files (requirements, implementation, dependencies)
   - Session metadata tracking

4. **Impact Analysis**: Automatically determines affected files and operations

### Deprecation
- Added deprecation notice to `/task:replan` in both command and reference files
- Updated `COMMAND_REFERENCE.md` to:
  - Add workflow:replan to Core Workflow section
  - Mark task:replan as deprecated with migration guidance

### Migration Path
- Old: `/task:replan IMPL-1 "changes"`
- New: `/workflow:replan IMPL-1 "changes"`

### Files Modified
- `.claude/commands/task/replan.md`: Added deprecation notice
- `.claude/commands/workflow/replan.md`: New command implementation
- `.claude/skills/command-guide/reference/commands/task/replan.md`: Deprecation notice
- `.claude/skills/command-guide/reference/commands/workflow/replan.md`: Reference documentation
- `COMMAND_REFERENCE.md`: Updated command listings

This change enhances workflow replanning capabilities by providing interactive,
session-aware modifications that maintain consistency across all workflow artifacts.
2025-11-20 03:11:28 +00:00
catlog22
957473aa71 Merge pull request #10 from catlog22/claude/audit-command-docs-agents-011tVdT2942cX3W3fib9pYQz
Claude/audit command docs agents 011t vd t2942c x3 w3fib9p y qz
2025-11-20 11:05:05 +08:00
Claude
c56bf68d87 Remove version evolution content from tdd-plan.md
Removed 104 lines of version history and enhancement documentation (lines 420-523):
- TDD Workflow Enhancements section
- Key Improvements (adopted features from test-gen and plan workflows)
- Workflow Comparison table (Previous vs Current)
- Migration Notes and backward compatibility info
- Configuration examples

This content represented ~19% of the file and was historical/evolutionary
information rather than core command functionality. Focuses the documentation
on what the command does, not how it evolved.

Improves task focus and documentation clarity as identified in audit report.
2025-11-20 02:57:21 +00:00
Claude
9627b42c03 Add comprehensive audit report for command docs and agent files
Completed systematic audit of 87 files (73 commands + 14 agents) to verify:
- No version information in docs (except version.md)
- No extraneous content unrelated to core tasks
- Focus on single responsibility

Key findings:
- Overall compliance: 89.7% (78/87 files)
- Commands: 95.9% compliant (70/73)
- Agents: 57.1% compliant (8/14)

Issues identified:
1. tdd-plan.md contains 103 lines of version evolution content
2. 6 agents have duplicate Windows path guidelines
3. ui-design-agent.md contains 607 lines of JSON schema templates

Report provides actionable recommendations with priority levels and estimated effort.
2025-11-20 02:49:36 +00:00
catlog22
292dc113e3 refactor(workflow): complete migration from .workflow/sessions/ to .workflow/active/ directory structure
Update all command files to use the new standardized directory structure:
- Active sessions: .workflow/active/WFS-{session}/
- Archived sessions: .workflow/archives/WFS-{session}/

Changes:
- Updated workflow-architecture.md with new directory structure (active/ not sessions/)
- Fixed 31 path references across 12 command files
- Verified compliance across all 73 command files using parallel agent checks
- Confirmed directory naming: active/ (adjective, no plural) + archives/ (noun plural)

Files modified:
- .claude/workflows/workflow-architecture.md (directory structure definition)
- .claude/commands/cli/*.md (8 files: analyze, chat, codex-execute, discuss-plan, execute, mode/*)
- .claude/commands/memory/docs.md (4 path fixes)
- .claude/commands/task/*.md (execute, replan)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 10:25:28 +08:00
catlog22
c3818fdb79 Refactor workflow file paths from .workflow/sessions/ to .workflow/active/ for improved session management and consistency across commands. Updated documentation and command references to reflect the new directory structure, ensuring all relevant commands and outputs are correctly aligned with the active session model. 2025-11-20 09:40:58 +08:00
catlog22
9f322e0f34 refactor(workflow): simplify explore-auto command and remove Phase 11 duplication
- Simplify Phase 1-3: Use unified principles (Detect→Classify→Store) to avoid string concatenation and escaping
- Reduce Phase 1-3 from 70+ lines to 30 lines with clear priority chains
- Remove Phase 11 (preview generation): Delegated to generate.md command to eliminate duplication
- Update workflow from 11 phases to 10 phases
- Convert all content to English for internationalization
- Update TodoWrite pattern from 5 tasks to 4 tasks (remove preview generation task)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 09:35:53 +08:00
catlog22
89a61acb71 Refactor workflow output paths to use a standardized sessions directory structure
- Updated output paths in various command files to reflect the new structure: `.workflow/sessions/{session_id}/` instead of `.workflow/{session_id}/`.
- Adjusted documentation and code comments to ensure consistency across all agents and commands.
- Ensured that all references to session-related files are correctly pointing to the new directory format.
2025-11-19 23:04:10 +08:00
catlog22
9b07310d68 refactor(workflow): enhance init command documentation with memory discovery features 2025-11-19 20:46:37 +08:00
catlog22
487b359266 refactor(workflow): remove all .active marker file references and sync documentation
Core Changes (10 files):
- commands: cli/execute.md, memory/docs.md, workflow/review.md, workflow/brainstorm/*.md
- agents: cli-execution-agent.md
- workflows: task-core.md, workflow-architecture.md

Transformations:
- Removed all .active-* marker file operations (touch/rm/find)
- Updated session discovery to directory-based (.workflow/sessions/)
- Updated directory structure examples to show sessions/ subdirectory
- Replaced marker-based state with location-based state

Reference Documentation (57 files):
- Auto-synced via analyze_commands.py script
- Includes all core file changes
- Updated command indexes (all-commands.json, by-category.json, etc.)

Migration complete: 100% .active marker references removed
Session state now determined by directory location only
2025-11-19 20:24:14 +08:00
catlog22
bc5ddb3670 refactor(agents): remove Performance Limits section from context-search-agent
- Removed Performance Limits section (18 lines)
- Deleted file counts, size filtering, depth control, and tool priority sections
- Performance constraints should not appear in agent responsibilities
- Agent documentation now focuses on core capabilities and execution flow
2025-11-19 15:35:50 +08:00
catlog22
45a082d963 refactor(agents): remove Performance Optimization section from cli-explore-agent
- Removed Performance Optimization section (49 lines)
- Performance considerations should not appear in agent responsibilities
- Deleted caching strategy, parallel execution, and resource limits sections
- Agent documentation now focuses solely on core capabilities and execution flow
2025-11-18 13:07:47 +08:00
catlog22
19ebb2dc82 fix(workflow): complete path migration and enhance session:complete
Comprehensive update to eliminate all old path references and implement transactional archival in session:complete.

## Part 1: Complete Path Migration (43 files)

### Files Updated
- action-plan-verify.md
- session/list.md, session/resume.md
- tdd-verify.md, tdd-plan.md, test-*.md
- All brainstorm/*.md files (13 files)
- All tools/*.md files (10 files)
- All ui-design/*.md files (10 files)
- lite-plan.md, lite-execute.md, plan.md

### Path Changes
- `.workflow/.active-*` → `find .workflow/sessions/ -name "WFS-*" -type d`
- `.workflow/WFS-{session}` → `.workflow/sessions/WFS-{session}`
- `.workflow/.archives/` → `.workflow/archives/`
- Removed all marker file operations (touch/rm .active-*)

### Verification
- ✓ 0 references to .active-* markers remain
- ✓ 0 references to direct .workflow/WFS-* paths
- ✓ 0 references to .workflow/.archives/ (dot prefix)

## Part 2: Transactional Archival Enhancement

### session/complete.md - Complete Rewrite

**New Four-Phase Architecture**:

1. **Phase 1: Pre-Archival Preparation**
   - Find active session in .workflow/sessions/
   - Check for existing .archiving marker (resume detection)
   - Create .archiving marker to prevent concurrent operations

2. **Phase 2: Agent Analysis (In-Place)**
   - Agent processes session WHILE STILL in sessions/ directory
   - Pure analysis - no file moves or manifest updates
   - Returns complete metadata package for atomic commit
   - Failure here → session remains active, safe to retry

3. **Phase 3: Atomic Commit**
   - Only executes if Phase 2 succeeds
   - Move session to archives/
   - Update manifest.json
   - Remove .archiving marker
   - All-or-nothing guarantee

4. **Phase 4: Project Registry Update**
   - Extract feature metadata and update project.json

**Key Improvements**:
- **Agent-first, move-second**: Prevents inconsistent states
- **Transactional guarantees**: All-or-nothing file operations
- **Resume capability**: .archiving marker enables safe retry
- **Error recovery**: Comprehensive failure handling documented

**Old Design Flaw (Fixed)**:
- Old: Move first → Agent processes → If agent fails, session moved but metadata incomplete
- New: Agent processes → If success, atomic commit → Guaranteed consistency
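
A minimal Bash sketch of this agent-first, move-second flow, assuming hypothetical `analyze_session` and `update_manifest` helpers (only the .archiving marker, the sessions/ and archives/ paths, and the phase ordering come from this commit):

```
# Phase 1: locate the active session and mark archival in progress.
session_dir=$(find .workflow/sessions/ -maxdepth 1 -name "WFS-*" -type d | head -n 1)
[ -n "$session_dir" ] || { echo "No active session found"; exit 1; }
marker="$session_dir/.archiving"
[ -f "$marker" ] && echo "Resuming interrupted archival" || touch "$marker"

# Phase 2: analyze in place; on failure the session stays active and retry is safe.
if ! analyze_session "$session_dir" > /tmp/session-metadata.json; then
  echo "Analysis failed; session remains active" >&2
  exit 1
fi

# Phase 3: atomic commit - move the session, update the manifest, clear the marker.
mkdir -p .workflow/archives
mv "$session_dir" .workflow/archives/
update_manifest /tmp/session-metadata.json .workflow/archives/manifest.json
rm -f ".workflow/archives/$(basename "$session_dir")/.archiving"
```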

## Benefits

1. **Consistency**: All commands use identical path patterns
2. **Robustness**: Transactional archival prevents data loss
3. **Maintainability**: Single source of truth for directory structure
4. **Recoverability**: Failed operations can be safely retried
5. **Alignment**: Closer to OpenSpec's clean three-layer model

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 12:39:23 +08:00
catlog22
d9fcdad949 refactor(workflow): migrate to sessions/ subdirectory structure
Refactor workflow directory structure to improve clarity and robustness by introducing dedicated sessions/ subdirectory for active sessions.

## Directory Structure Changes

### Before (Old Structure)
```
.workflow/
├── .active-WFS-session-1    # Marker files (fragile)
├── WFS-session-1/           # Active sessions mixed with config
├── WFS-session-2/
├── project.json
└── .archives/               # Archived sessions
```

### After (New Structure)
```
.workflow/
├── project.json             # Project metadata
├── sessions/                # [NEW] All active sessions
│   └── WFS-session-1/
└── archives/                # Archived sessions (removed dot prefix)
```

## Key Improvements

1. **Physical Isolation**: Active sessions now in dedicated sessions/ subdirectory
2. **No Marker Files**: Removed fragile .active-* marker file mechanism
3. **Directory-Based State**: Session state determined by directory location
   - In sessions/ = active
   - In archives/ = archived
4. **Simpler Discovery**: `ls .workflow/sessions/` instead of `find .workflow/ -name ".active-*"`
5. **Cleaner Root**: .workflow/ root now only contains project.json, sessions/, archives/

## Path Transformations

| Old Pattern | New Pattern |
|-------------|-------------|
| `find .workflow/ -name ".active-*"` | `find .workflow/sessions/ -name "WFS-*" -type d` |
| `.workflow/WFS-[slug]/` | `.workflow/sessions/WFS-[slug]/` |
| `.workflow/.archives/` | `.workflow/archives/` |
| `touch .workflow/.active-*` | *Removed* |
| `rm .workflow/.active-*` | *Removed* |
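
For reference, a tiny sketch of the discovery pattern from the table above; the loop body is illustrative:

```
# Active sessions: state is implied by location, no marker files involved.
find .workflow/sessions/ -maxdepth 1 -name "WFS-*" -type d | while read -r session; do
  echo "active: $(basename "$session")"
done

# Archived sessions live under archives/ (no leading dot).
ls .workflow/archives/ 2>/dev/null || echo "no archived sessions yet"
```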

## Modified Commands

### session/start.md
- Update session creation path to .workflow/sessions/WFS-*/
- Remove .active-* marker file creation
- Update session discovery to use directory listing
- Updated all 3 modes: Discovery, Auto, Force New

### session/complete.md
- Update session move from sessions/ to archives/
- Remove .active-* marker file deletion
- Update manifest path to archives/manifest.json
- Update agent prompts with new paths

### status.md
- Update session discovery to find .workflow/sessions/
- Update all session file path references
- Update archive query examples

### execute.md
- Update session discovery logic
- Update all task file paths to include sessions/ prefix
- Update context package paths
- Update error handling documentation

## Benefits

- **Robustness**: Eliminates marker file state inconsistency risk
- **Clarity**: Clear separation between active and archived sessions
- **Simplicity**: Session discovery is straightforward directory listing
- **Alignment**: Closer to OpenSpec's three-layer structure (specs/changes/archive)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 11:34:52 +08:00
catlog22
2aacc34c24 refactor(workflow): remove redundant sections, focus on core responsibilities
Streamline workflow command documentation by removing sections that don't belong to core command responsibilities.

## Changes

### init.md (-46 lines)
- Remove Integration with Existing Commands section
- Remove Performance Considerations section
- Remove Related Commands section
- Focus: Project initialization and analysis only

### session/complete.md (-30 lines)
- Remove Quick Commands reference section
- Remove Archive Query Commands section
- Focus: Session archival and feature registry updates only

### session/start.md (-34 lines)
- Remove Simple Bash Commands reference section
- Keep: Session ID Format (specification/standard)
- Focus: Session creation and discovery only

### status.md (-165 lines)
- Remove Simple Bash Commands section
- Remove Simple Output Format examples
- Remove Project Mode Quick Commands section
- Focus: Status display logic only

### context-gather.md (-29 lines)
- Remove Usage Examples section
- Remove Success Criteria section
- Remove Error Handling table
- Keep: Notes (core design principles)
- Focus: Context gathering orchestration only

## Benefits
- Reduced documentation redundancy (~304 lines removed)
- Clearer command responsibilities and boundaries
- Easier maintenance and updates
- Commands focus on their specific roles in workflow

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 11:21:35 +08:00
catlog22
4dafec7054 feat(workflow): add project-level state management with intelligent initialization
Implement comprehensive project.json system for tracking project state, completed features, and architecture overview.

## New Features

### 1. /workflow:init Command (NEW)
- Independent project initialization command using cli-explore-agent
- Deep Scan mode analyzes: technology stack, architecture, key components, metrics
- Generates project.json with comprehensive overview field
- Supports --regenerate flag to update analysis while preserving features
- One-time initialization with intelligent project cognition

### 2. Enhanced session:start
- Added Step 0: Check project.json existence before session creation
- Auto-calls /workflow:init if project.json missing
- Separated project-level vs session-level initialization responsibilities

### 3. Enhanced session:complete
- Added Phase 3: Update Project Feature Registry
- Extracts feature metadata from IMPL_PLAN.md
- Records completed sessions as features in project.json
- Includes: title, description, tags, timeline, traceability (archive_path, commit_hash)
- Auto-generates feature IDs and tags from implementation plans

### 4. Enhanced status Command
- Added --project flag for project overview mode
- Displays: technology stack, architecture patterns, key components, metrics
- Shows completed features with full traceability
- Provides query commands for feature exploration

### 5. Enhanced context-gather
- Integrated project.json reading in context-search-agent workflow
- Phase 1: Load project.json overview as foundational context
- Phase 3: Populate context-package.json from project.json
- Prioritizes project.json context for architecture and tech stack
- Avoids redundant project analysis on every planning session

## Data Flow
project.json (init) → context-gather → context-package.json → task-generation
session-complete → project.json (features registry)
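
A hypothetical jq sketch of querying the features registry; the field names (title, tags, archive_path, commit_hash) come from the metadata listed above, but the exact project.json schema is assumed:

```
# List completed features with their traceability fields (schema assumed).
jq -r '.features[] | "\(.title)\t\(.archive_path)\t\(.commit_hash)"' .workflow/project.json

# Filter features by tag.
jq --arg tag "auth" '[.features[] | select(.tags | index($tag))]' .workflow/project.json
```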

## Benefits
- Single source of truth for project state
- Efficient context reuse across sessions
- Complete feature traceability and history
- Consistent architectural baseline for all workflows

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 11:10:07 +08:00
catlog22
b4e09213e4 fix(installer): prevent deletion of current manifest in Bash version
Issue:
- In Bash version, new_install_manifest creates file immediately
- save_install_manifest calls remove_old_manifests_for_path
- This deletes ALL manifests for the path, including the new one
- Result: "WARNING: Failed to save installation manifest"

Solution:
- Add current_manifest_file parameter to remove_old_manifests_for_path
- Skip deletion if file matches current manifest
- Pass manifest_file to exclude it from deletion

Note: PowerShell version does not have this issue because it creates
the manifest file AFTER deleting old ones (hashtable → file conversion)
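
A rough Bash sketch of the fix; the manifest directory and JSON field are illustrative, but the added current_manifest_file parameter and the skip check mirror the description above:

```
remove_old_manifests_for_path() {
  local install_path="$1"
  local current_manifest_file="${2:-}"    # parameter added by this fix
  for manifest in "$MANIFEST_DIR"/*.json; do
    [ -e "$manifest" ] || continue
    [ "$manifest" = "$current_manifest_file" ] && continue   # never delete the manifest just written
    if [ "$(jq -r '.install_path' "$manifest")" = "$install_path" ]; then
      rm -f "$manifest"
    fi
  done
}

# save_install_manifest now passes the file it created so it survives cleanup.
remove_old_manifests_for_path "$install_path" "$manifest_file"
```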
2025-11-17 22:51:14 +08:00
catlog22
3f7db2fdbc refactor(installer): implement manifest deduplication for same installation paths
- Add Remove-OldManifestsForPath function to automatically clean old manifests
- Modify Save-InstallManifest to remove old manifests before saving new one
- Update Get-AllInstallManifests to return only latest manifest per installation path
- Apply same strategy to both Install-Claude.ps1 (PowerShell) and Install-Claude.sh (Bash)

Benefits:
- Each installation location registers only once
- Only latest version manifest is retained
- Uninstall UI shows only latest version per location
- Prevents manifest file accumulation
2025-11-17 22:48:54 +08:00
catlog22
7bcf7f24a3 refactor(workflow): reorganize lite-plan and lite-execute docs for improved clarity
Restructure lite-plan.md (844→668 lines, -176) and lite-execute.md (597→569 lines, -28) following agent document patterns. Move Data Structures sections to end as reference, simplify repeated content, improve hierarchical organization. All original content preserved.

Key changes:
- Data Structures moved to end (from beginning)
- Simplified Execution Process to avoid duplication
- Improved section hierarchy and flow
- Consistent structure across both documents
2025-11-17 22:10:44 +08:00
catlog22
0a6c90c345 refactor(lite-plan): remove timeout and color attributes for streamlined configuration 2025-11-17 22:05:29 +08:00
catlog22
4a0eef03a2 refactor(agents): reorganize cli-planning agent docs for improved clarity and consistency
Restructure cli-lite-planning-agent.md and cli-planning-agent.md following action-planning-agent.md's clear hierarchical pattern. Merge duplicate content, consolidate sections, and improve readability while preserving all original information.
2025-11-17 21:51:01 +08:00
catlog22
9cb9b2213b fix(lite-plan): correct executionContext.planObject.tasks type documentation
Issue found by Gemini analysis:
- executionContext.planObject.tasks was incorrectly documented as string[]
- Actual runtime structure is array of structured task objects (7 fields)
- This caused documentation mismatch between lite-plan and lite-execute

Fix:
- Update executionContext definition to show full task object structure
- Add comment clarifying it's an array of structured objects (7 fields each)
- Aligns with actual implementation and lite-execute consumption logic

Impact: Documentation only (no code changes, runtime behavior was already correct)
2025-11-17 20:58:51 +08:00
catlog22
0e21c0dba7 feat(lite-plan): upgrade task structure to detailed objects with implementation guidance
Major changes:
- Add cli-lite-planning-agent.md for generating structured task objects
- Upgrade planObject.tasks from string[] to structured objects with 7 fields:
  - title, file, action, description (what to do)
  - implementation (3-7 steps on how to do it)
  - reference (pattern, files, examples to follow)
  - acceptance (verification criteria)
- Update lite-execute.md to format structured tasks for Agent/Codex execution
- Clean up agent files: remove "how to call me" sections (cli-planning-agent, cli-explore-agent)
- Update lite-plan.md to use cli-lite-planning-agent for Medium/High complexity tasks
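
An invented example of one structured task object using the seven fields named above (only the field names are taken from this commit; all values are illustrative):

```
# Hypothetical task object produced by cli-lite-planning-agent (values are invented).
cat <<'EOF' > /tmp/example-task.json
{
  "title": "Add input validation to login form",
  "file": "src/auth/login.ts",
  "action": "modify",
  "description": "Reject empty or malformed email/password before submission",
  "implementation": [
    "Add a validateCredentials() helper",
    "Call it from the submit handler",
    "Surface validation errors in form state"
  ],
  "reference": { "pattern": "existing form validation", "files": ["src/auth/register.ts"] },
  "acceptance": "Submitting invalid credentials shows an inline error and sends no request"
}
EOF
jq -r '.title + " -> " + .file' /tmp/example-task.json
```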

Benefits:
- Execution agents receive complete "how to do" guidance instead of vague descriptions
- Each task includes specific file paths, implementation steps, and verification criteria
- Clear separation of concerns: agents only describe what they do, not how they are called
- Architecture validated by Gemini: 100% consistency, no responsibility leakage

Breaking changes: None (backward compatible via task.title || task fallback in lite-execute)
2025-11-17 20:52:49 +08:00
catlog22
8e4e751655 fix(lite-plan): update user confirmation step to collect four inputs instead of three 2025-11-17 18:38:44 +08:00
catlog22
6ebb1801d1 feat(lite-plan): add detailed data structures and enhanced task JSON export for improved planning context 2025-11-17 17:06:13 +08:00
catlog22
0380cbb7b8 feat(lite-execute, lite-plan): store original user input for enhanced task execution context 2025-11-17 16:34:23 +08:00
catlog22
85ef755c12 feat(lite-execute): add new command for executing tasks with in-memory plans and flexible input modes 2025-11-17 16:16:15 +08:00
catlog22
a5effb9784 refactor(intelligent-tools): clarify write permission and session resume instructions 2025-11-16 23:16:16 +08:00
catlog22
1d766ed4ad fix(lite-plan): clarify executionId definition and tracking flow
Changes:
1. Initialize previousExecutionResults array in Step 5.1
2. Add id field to executionCalls objects (format: [Method-Index])
3. Add execution loop structure in Step 5.2 showing sequential processing
4. Clarify executionId comes from executionCalls[currentIndex].id
5. Add comments explaining ID storage for result collection

Benefits:
- Clear definition of where executionId comes from
- Explicit initialization of tracking variables
- Better understanding of execution flow and result collection
- Proper context continuity across multiple execution calls
2025-11-16 23:14:21 +08:00
catlog22
fe0d30256c feat(lite-plan): enhance execution context continuity for multi-call scenarios
Improvements:
1. Add plan summary in confirmation question for quick review
2. Add previousExecutionResults tracking for multi-execution flows
3. Include execution result collection mechanism after each call
4. Update both Agent and Codex execution prompts with context continuity

Benefits:
- Subsequent executions can see what previous calls completed
- Avoid duplicate work across multiple execution calls
- Better dependency management and task flow
- Clear context propagation: executionId, status, tasks, outputs, notes
2025-11-16 23:10:18 +08:00
catlog22
1c416b538d refactor(lite-plan): separate plan display from user confirmation in Phase 4
Change Phase 4 confirmation flow from single-step to two-step process:
- Step 4.1: Display complete plan as readable text output
- Step 4.2: Collect three-dimensional input via AskUserQuestion

Benefits:
- Clearer plan presentation (not embedded in question)
- Simpler question interface focused on user decisions
- Better user experience with logical separation
2025-11-16 22:56:48 +08:00
catlog22
81362c14de refactor(lite-plan): enhance execution call tracking and user interaction clarity 2025-11-16 21:52:09 +08:00
catlog22
fa6257ecae refactor(lite-plan): streamline planning execution guidelines and enhance user confirmation process 2025-11-16 21:35:47 +08:00
catlog22
ccb4490ed4 refactor(cli-tools): remove redundant model parameters for improved command clarity 2025-11-16 21:11:16 +08:00
catlog22
58206f1996 refactor(lite-plan): update execution method options for clarity and complexity-based selection 2025-11-16 21:03:16 +08:00
catlog22
564bcb72ea refactor(lite-plan): simplify command format for Codex and Qwen by removing redundant parameters 2025-11-16 20:58:25 +08:00
catlog22
965a80b54e docs: Release v5.8.1 - Lite-Plan Workflow & CLI Tools Enhancement
Major Features:
- Add /workflow:lite-plan - Lightweight interactive planning workflow
  - Three-dimensional multi-select confirmation
  - Smart code exploration with auto-detection
  - Parallel task execution support
  - Flexible execution (Agent/CLI) with optional code review
- Optimize CLI tools - Remove -m parameter requirement
  - Auto-model-selection for Gemini, Qwen, Codex
  - Updated models: gpt-5.1, gpt-5.1-codex, gpt-5.1-codex-mini

Documentation Updates:
- Update README.md with Lite-Plan usage examples
- Update README_CN.md (Chinese translation)
- Add /workflow:lite-plan to COMMAND_SPEC.md
- Add /workflow:lite-plan to COMMAND_REFERENCE.md
- Update CHANGELOG.md with v5.8.1 release notes
- Update intelligent-tools-strategy.md with model selection guidelines

See CHANGELOG.md for full details.
2025-11-16 20:50:10 +08:00
catlog22
8f55bf2ece refactor(intelligent-tools-strategy): remove optional model parameter for improved command clarity and auto-selection guidance 2025-11-16 20:41:39 +08:00
catlog22
a721c50ba3 refactor(lite-plan): enhance three-dimensional confirmation to support multi-select interactions for task approval, execution method, and code review tool 2025-11-15 19:59:38 +08:00
catlog22
4a5c8490b1 refactor(cli-explore-agent): update color scheme from blue to yellow for improved visibility 2025-11-15 18:17:08 +08:00
catlog22
2f0ca988f4 refactor(lite-plan): enhance confirmation process with three-dimensional user interaction and optional code review 2025-11-15 18:13:10 +08:00
catlog22
a45f5e9dc2 refactor(lite-plan): enhance task dependency management and introduce parallel execution guidelines 2025-11-15 17:57:11 +08:00
catlog22
b8dc3018d4 refactor(lite-plan): enhance argument hints and clarify exploration flags in documentation 2025-11-15 17:39:28 +08:00
catlog22
9d4c9ef212 refactor(execute): clarify resume mode description and enhance error handling details 2025-11-15 17:03:35 +08:00
catlog22
d7ffd6ee32 refactor(execute): streamline execution phases and enhance lazy loading strategy 2025-11-15 16:50:51 +08:00
catlog22
02ee426af0 Refactor UI Design Commands: Replace /workflow:ui-design:update with /workflow:ui-design:design-sync
- Deleted the `list` command for design runs.
- Removed the `update` command and its associated documentation.
- Introduced `design-sync` command to synchronize finalized design system references to brainstorming artifacts.
- Updated command references in `COMMAND_REFERENCE.md`, `GETTING_STARTED.md`, and `GETTING_STARTED_CN.md` to reflect the new command structure.
- Ensured all related documentation and output styles are consistent with the new command naming and functionality.
2025-11-15 16:30:40 +08:00
catlog22
e76e5bbf5c refactor(enhance-prompt): update description and streamline enhancement pipeline by removing codebase analysis references 2025-11-15 16:19:36 +08:00
catlog22
763c51cb28 refactor(workflow): remove redundant key characteristics and usage examples from lite-plan documentation 2025-11-15 16:13:08 +08:00
catlog22
c7542d95c8 refactor(memory): streamline SKILL.md usage instructions and remove redundant references 2025-11-15 11:22:01 +08:00
catlog22
02bf6e296c feat(memory): enhance SKILL.md template by adding comprehensive jq usage guide and improving package overview 2025-11-15 11:12:23 +08:00
catlog22
f839a3afb8 refactor(memory): remove outdated template file references from style-skill-memory documentation 2025-11-15 11:05:38 +08:00
catlog22
79714edc9a fix(memory): update path to skill-md-template.md for accurate template processing 2025-11-15 10:16:44 +08:00
catlog22
f9c33bd0ba Add SKILL.md template for Style Memory Package with comprehensive jq usage guide and design principles 2025-11-15 10:09:45 +08:00
catlog22
e4a29c0b2e feat(memory): enhance style-skill-memory process by adding design system analysis and dynamic principle generation 2025-11-14 23:53:12 +08:00
catlog22
ca18043b14 feat(memory): optimize style-skill-memory process by simplifying data extraction and enhancing documentation clarity 2025-11-14 23:29:54 +08:00
catlog22
871a02c1f8 feat(memory): enhance style-skill-memory documentation with design principles and jq usage guide 2025-11-14 23:14:53 +08:00
catlog22
3747a7b083 feat(memory): optimize SKILL.md generation with concise structure and embedded jq commands 2025-11-14 15:43:23 +08:00
catlog22
c05dbb2791 docs(skill): update command guide for v5.8.0 release
Major updates:
- Add UI Design Style Memory workflow documentation
  - /memory:style-skill-memory command
  - /workflow:ui-design:codify-style orchestrator
  - /workflow:ui-design:reference-page-generator
- Add /workflow:lite-plan intelligent planning command (testing)
- Add /memory:code-map-memory code flow mapping (testing)

Agent enhancements:
- Add cli-explore-agent for code exploration
- Enhance cli-planning-agent task generation
- Major ui-design-agent refactoring

Statistics:
- Commands: 69 → 75 (+6)
- Agents: 11 → 13 (+2)
- Reference docs: 80 → 88 files

Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-12 21:33:23 +08:00
catlog22
167034aaa7 feat(workflow): add --style-skill parameter for enhanced UI design capabilities 2025-11-12 21:27:00 +08:00
catlog22
a8e8412477 feat(agents): add cli-explore-agent and enhance workflow documentation
Add new cli-explore-agent for code structure analysis and dependency mapping:
- Dual-source strategy (Bash + Gemini CLI) for comprehensive code exploration
- Three analysis modes: quick-scan, deep-scan, dependency-map
- Language-agnostic support (TypeScript, Python, Go, Java, Rust)

Enhance lite-plan workflow documentation:
- Clarify agent call prompts with structured return formats
- Add expected return structures for cli-explore-agent and cli-planning-agent
- Simplify AskUserQuestion usage with clearer examples
- Document data flow between workflow phases

Add code-map-memory command:
- Generate Mermaid code flow diagrams from feature keywords
- Create SKILL packages for code understanding
- Auto-continue workflow with phase skipping

Improve UI design system:
- Add theme colors guide to ui-design-agent
- Enhance code import workflow documentation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-12 21:13:42 +08:00
catlog22
158df6acfa feat(workflow): add lite-plan command for intelligent task planning and execution 2025-11-12 20:11:11 +08:00
catlog22
2788cf7da4 refactor(installer): implement incremental merge strategy with critical config backup
Changes:
- Switch from full directory replacement to incremental merge for all directories (.claude, .codex, .gemini, .qwen)
- Preserve user's custom files and folders during installation
- Add priority backup for critical config files:
  - .codex/AGENTS.md
  - .gemini/GEMINI.md, CLAUDE.md
  - .qwen/QWEN.md
- Only overwrite files present in installation package
- Auto-cleanup empty backup folders
- Update both PowerShell and Bash installers with consistent behavior

Benefits:
- Non-destructive updates that preserve user customizations
- Safer upgrade path with automatic backup
- Better user experience during reinstallation
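
A rough Bash sketch of the incremental merge with critical-config backup; the preserved file list comes from this commit, while the variable names and backup location are illustrative:

```
# Back up critical config files before merging.
backup_dir="$target/.backup-$(date +%Y%m%d%H%M%S)"
for f in .codex/AGENTS.md .gemini/GEMINI.md .gemini/CLAUDE.md .qwen/QWEN.md; do
  if [ -f "$target/$f" ]; then
    mkdir -p "$backup_dir/$(dirname "$f")"
    cp "$target/$f" "$backup_dir/$f"
  fi
done

# Incremental merge: copy only files shipped in the package; user files stay untouched.
(cd "$package" && find . -type f) | while read -r rel; do
  mkdir -p "$target/$(dirname "$rel")"
  cp "$package/$rel" "$target/$rel"
done

# Auto-cleanup: drop the backup folder if it ended up empty (or was never created).
[ -d "$backup_dir" ] && find "$backup_dir" -type d -empty -delete
```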

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-12 16:13:19 +08:00
catlog22
9ccf348827 refactor(ui-design): enhance agent operation documentation and optimize layout structure 2025-11-12 10:23:25 +08:00
catlog22
fdcdf73d60 Implement feature X to enhance user experience and fix bug Y in module Z 2025-11-12 09:50:36 +08:00
catlog22
8f8467e016 Implement feature X to enhance user experience and fix bug Y in module Z 2025-11-12 09:40:48 +08:00
catlog22
9851163fc8 refactor(ui-design): enhance component classification and emphasize project independence in style SKILL memory generation 2025-11-12 08:55:21 +08:00
catlog22
02d6604283 refactor(ui-design): update preview generation script path and simplify execution command 2025-11-11 21:59:29 +08:00
catlog22
1abf1e24a9 refactor(ui-design): update workflow phases and enhance preview generation process in explore-auto command 2025-11-11 21:54:57 +08:00
catlog22
d602ca052b refactor(ui-design): enhance usage recommendations and extraction processes in design workflows 2025-11-11 21:52:47 +08:00
catlog22
8786b8c34d refactor(ui-design): enhance conflict detection methods and improve semantic search for primary color definitions 2025-11-11 21:38:44 +08:00
catlog22
e209799756 refactor(ui-design): enhance conflict detection and validation processes in import-from-code workflow 2025-11-11 21:36:18 +08:00
catlog22
136d17b118 refactor(script): enhance file synchronization and add delay for metadata update in discover-design-files.sh 2025-11-11 21:13:13 +08:00
catlog22
3cd8c18182 refactor(memory): update package data handling and enhance component counting in style SKILL memory generation 2025-11-11 21:04:06 +08:00
catlog22
e5349146df refactor(ui-design): update workflow phases and streamline task execution in codify-style and reference-page-generator 2025-11-11 21:00:22 +08:00
catlog22
836bf4cd1c Add UI Design Commands: List and Reference Page Generator
- Implemented the '/workflow:ui-design:list' command to list all available design runs with metadata including session, created time, and prototype count.
- Developed the '/workflow:ui-design:reference-page-generator' command to generate multi-component reference pages and documentation from design run extraction, including setup, validation, and preview generation phases.
- Added detailed error handling and usage examples for both commands to enhance user experience and clarity.
2025-11-11 20:53:42 +08:00
catlog22
ab09aa4621 refactor(ui-design): update workflow phases and enhance task execution flow in explore-auto and import-from-code commands 2025-11-11 20:48:21 +08:00
catlog22
7ca1d06cfe refactor(ui-design): enhance component classification in import and reference page generation 2025-11-11 20:36:49 +08:00
catlog22
7184a3be66 Implement feature X to enhance user experience and fix bug Y in module Z 2025-11-11 20:31:28 +08:00
catlog22
30071f48e8 refactor(ui-design): streamline output files for animation, layout, and style extraction commands by removing unnecessary guide files 2025-11-11 20:23:46 +08:00
catlog22
19351cd938 refactor(ui-design): enhance design token and layout template handling in reference page generation 2025-11-11 20:12:15 +08:00
catlog22
a393d95cf9 Refactor workflows to enhance task attachment and execution model
- Updated auto-parallel.md to clarify the task attachment model, emphasizing the orchestration of tasks through attachment rather than delegation. Improved descriptions of phases and execution flow.
- Revised plan.md, tdd-plan.md, test-fix-gen.md, and test-gen.md to simplify lifecycle patterns, replacing detailed patterns with concise lifecycle summaries.
- Modified reference-page-generator.md to streamline the component extraction process, focusing on layout templates and removing unnecessary complexity in the operations.
- Enhanced error handling and output messages across various workflows to improve clarity and user guidance.
2025-11-11 19:52:58 +08:00
catlog22
7d77b0e6f7 refactor(workflow): enhance TodoWrite patterns for plan, tdd-plan, test-fix-gen, and test-gen commands with dynamic task attachment and collapse strategies 2025-11-11 19:37:11 +08:00
catlog22
0a773b9411 Enhance test workflow documentation with Task Attachment Model and Auto-Continue Mechanism
- Introduced Task Attachment Model to clarify how SlashCommand invocations expand workflows by attaching sub-tasks to the current TodoWrite.
- Added Auto-Continue Mechanism details to explain dynamic task management and continuous execution across phases.
- Updated TodoWrite examples to reflect task attachment and collapsing behavior during workflow execution.
- Improved execution flow diagrams for both test-fix-gen and test-gen workflows to illustrate task attachment and phase completion.
- Emphasized critical aspects of continuous execution and task management in the documentation.
2025-11-11 19:28:50 +08:00
catlog22
be176ac4b3 refactor(ui-design): remove usage examples from codify-style and import-from-code documentation for clarity 2025-11-11 18:58:56 +08:00
catlog22
52c8fe1d5c refactor(ui-design): enhance task attachment model and phase execution flow for improved clarity and automation 2025-11-11 18:48:04 +08:00
catlog22
4048ed4cc0 refactor(ui-design): enhance import-from-code with CLI analysis and fix discovery script
- fix(discover-design-files.sh): Fix script errors and expand framework support
  * Fix variable quoting and conditional checks
  * Remove keyword restrictions for JS file discovery
  * Add support for .mjs, .cjs, .vue, .svelte files
  * Now correctly discovers all JS/TS framework files recursively

- feat(import-from-code): Add optional gemini/qwen CLI analysis for agents
  * Add CLI-Assisted Analysis step to Style/Animation/Layout agents
  * Optional usage for large codebases (>20 files) or complex frameworks
  * All CLI calls use analysis mode (READ-ONLY)
  * Agent can use CLI output to guide file reading and token extraction
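
A simplified sketch of the kind of recursive discovery the fixed script performs; the real discover-design-files.sh emits richer JSON, and parts of the extension list here are assumed:

```
# Recursively find framework source files, including the newly supported extensions.
find "$project_root" -type f \
  \( -name '*.js' -o -name '*.ts' -o -name '*.jsx' -o -name '*.tsx' \
     -o -name '*.mjs' -o -name '*.cjs' -o -name '*.vue' -o -name '*.svelte' \) \
  -not -path '*/node_modules/*' |
  jq -R . | jq -s '{design_files: .}'    # wrap the list as a JSON object
```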

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-11 17:39:43 +08:00
catlog22
a496dc382a refactor(ui-design): update TodoWrite phases and clarify agent task outputs 2025-11-11 17:13:10 +08:00
catlog22
8507231a81 refactor(ui-design): enhance layout extraction documentation and output files 2025-11-11 16:51:26 +08:00
catlog22
92f77ad997 refactor(file-discovery): implement script for discovering design-related files and outputting JSON 2025-11-11 16:32:55 +08:00
catlog22
40f3d44ed4 refactor(ui-design): enhance automatic file discovery and simplify command parameters 2025-11-11 15:47:16 +08:00
catlog22
0767d6f2d3 Refactor UI design workflows to unify input parameters and enhance detection logic
- Updated `explore-auto.md`, `imitate-auto.md`, and `import-from-code.md` to replace legacy parameters with a unified `--input` parameter for better clarity and functionality.
- Enhanced input detection logic to support glob patterns, file paths, and text descriptions, allowing for more flexible input combinations.
- Deprecated old parameters (`--images`, `--prompt`, etc.) with warnings and provided clear migration paths.
- Improved error handling for missing inputs and validation of existing directories in `reference-page-generator.md`.
- Streamlined command execution phases to utilize new input structures across workflows.
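
A rough sketch of how --input detection might branch between the three supported kinds; only the input kinds (glob pattern, file path, text description) come from this commit, the branching rules are assumptions:

```
classify_input() {
  local input="$1"
  case "$input" in
    *'*'*|*'?'*) echo "glob" ;;          # contains glob metacharacters
    *) if [ -e "$input" ]; then
         echo "path"                      # existing file or directory
       else
         echo "text"                      # anything else is treated as a description
       fi ;;
  esac
}

classify_input "designs/**/*.png"   # -> glob
classify_input "src/App.vue"        # -> path (if it exists)
classify_input "dark dashboard UI"  # -> text
```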
2025-11-11 15:30:53 +08:00
catlog22
feae69470e refactor(ui-design): convert codify-style to orchestrator pattern with enhanced safety
Refactor style codification workflow into orchestrator pattern with three specialized commands:

Changes:
- Refactor codify-style.md as pure orchestrator coordinating sub-commands
  • Delegates to import-from-code for style extraction
  • Calls reference-page-generator for packaging
  • Creates temporary design run as intermediate storage
  • Added --overwrite flag with package protection logic
  • Improved component counting using jq with grep fallback

- Create reference-page-generator.md for multi-component reference pages
  • Generates interactive preview with all components
  • Creates design tokens, style guide, and component patterns
  • Package validation to prevent invalid directory overwrites
  • Outputs to .workflow/reference_style/{package-name}/

- Create style-skill-memory.md for SKILL memory generation
  • Converts style reference packages to SKILL memory
  • Progressive loading levels (0-3) for efficient token usage
  • Intelligent description generation from metadata
  • --regenerate flag support

Improvements based on Gemini analysis:
- Overwrite protection in codify-style prevents accidental data loss
- Reliable component counting via jq JSON parsing (grep fallback)
- Package validation in reference-page-generator ensures data integrity
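
A small sketch of the jq counting with grep fallback mentioned above; the JSON field and file name are illustrative:

```
components_file="$package_dir/component-patterns.json"   # illustrative file name
if command -v jq >/dev/null 2>&1; then
  count=$(jq '.components // [] | length' "$components_file")
else
  count=$(grep -c '"name":' "$components_file")           # rough fallback when jq is missing
fi
echo "components: $count"
```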

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-11 14:30:40 +08:00
catlog22
bc959b1a0f refactor(cli): remove --agent flag and enhance agent execution context
- Remove --agent parameter from all CLI commands (now default behavior)
- Restore and enhance Task() invocation context for cli-execution-agent
- Add detailed execution phases and context discovery strategies
- Simplify documentation by removing redundant CLI command templates
- Consolidate output documentation to unified format

Modified commands:
- /cli:analyze, /cli:chat, /cli:execute
- /cli:mode:plan, /cli:mode:code-analysis, /cli:mode:bug-diagnosis

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-10 16:12:04 +08:00
catlog22
ccbec186b2 refactor(cli-tools): update Gemini default model to 2.5-pro
- Remove gemini-3-pro-preview-11-2025 model references
- Set gemini-2.5-pro as default analysis model
- Clean up error handling documentation (remove 404 fallback)
- Update all command examples and templates

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-10 15:44:47 +08:00
catlog22
a795538182 refactor(test-workflow): implement multi-layered testing strategy with quality gates
Introduce comprehensive test quality assurance framework to prevent "hollow tests"
from masking real issues. Optimize JSON data structures following minimal-but-sufficient principle.

Major Changes:
- Multi-layered test strategy (L0: Static, L1: Unit, L2: Integration, L3: E2E)
- New quality gate task (IMPL-001.5-review) validates tests before fix cycle
- Layer-aware failure diagnosis with test_type field support
- JSON simplification: removed redundant failure_context (~44% size reduction)

File Changes:
- new: cli-planning-agent.md - CLI analysis executor with layer-specific guidance
- mod: test-fix-gen.md - multi-layered test planning and quality gate generation
- mod: test-fix-agent.md - layer-aware test execution and failure classification
- mod: test-cycle-execute.md - 95% pass rate threshold with criticality assessment

Technical Details:
- test_type field tracks test layer (static/unit/integration/e2e)
- IMPL-fix-N.json simplified: removed 350 lines of redundant data
- Single source of truth: iteration-N-analysis.md contains full context
- Quality config: ~/.claude/workflows/test-quality-config.json (not in repo)

Benefits:
- Prevents symptom-level fixes through layer-specific diagnosis
- Ensures test quality with static analysis and coverage validation
- Reduces JSON file size by 44% while maintaining information completeness
- Enforces comprehensive test coverage (happy path + negative + edge cases)
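
A hypothetical sketch of consuming the test_type field and the 95% pass-rate threshold; the JSON layout of the results file is assumed:

```
results="IMPL-fix-1.json"   # file name pattern from the commit; structure is assumed

# Group failures by test layer (static/unit/integration/e2e).
jq '[.tests[] | select(.status == "failed")] | group_by(.test_type)
    | map({layer: .[0].test_type, failures: length})' "$results"

# Enforce the 95% pass-rate threshold.
pass_rate=$(jq '([.tests[] | select(.status == "passed")] | length) / (.tests | length) * 100' "$results")
awk -v r="$pass_rate" 'BEGIN { exit !(r >= 95) }' && echo "threshold met" || echo "below 95% threshold"
```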

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-10 15:34:17 +08:00
catlog22
78e7e7663b refactor(ui-design): shift from automatic capture to user-driven input model
Major Changes:
- Remove MCP-based automatic screenshot capture dependency
- Remove automatic code extraction mechanisms
- Shift to user-provided images and prompts as primary input

animation-extract.md:
- Remove MCP chrome-devtools integration
- Change from URL-based to image-based inference
- Use AI analysis of images instead of CSS extraction
- Update parameters: --urls → --images (glob pattern)

explore-auto.md:
- Fix critical parameter passing bug in animation-extract phase
- Add --images and --prompt parameter passing to animation-extract
- Ensure consistent context propagation across all extraction phases

imitate-auto.md:
- Add --refine parameter to all extraction commands (style, animation, layout)
- Add --images parameter passing to animation-extract phase
- Add --prompt parameter passing to animation-extract phase
- Align with refinement-focused design intent

Parameter Transmission Matrix (Fixed):
- style-extract: --images ✓ --prompt ✓ --refine ✓ (imitate only)
- animation-extract: --images ✓ --prompt ✓ --refine ✓ (imitate only)
- layout-extract: --images ✓ --prompt ✓ --refine ✓ (imitate only)

Deprecation Notice:
- capture.md and explore-layers.md will be removed in future versions
- Workflows now rely on user-provided inputs instead of automated capture

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-10 10:29:15 +08:00
catlog22
6a50b714d0 Enhance layout and style extraction commands with dual mode support for exploration and refinement.
- Updated `layout-extract` command to include a refinement mode (`--refine`) for generating single refined layouts, alongside exploration mode for multiple contrasting variants.
- Added detailed reporting and validation for variants count based on the selected mode.
- Implemented logic to load existing layouts for refinement and generate specific refinement options categorized by density, responsiveness, grid specifics, and component arrangement.
- Updated `style-extract` command similarly to support refinement mode, allowing for detailed adjustments to existing design systems.
- Enhanced user interaction phases to accommodate selection of refinements or design directions based on the active mode.
2025-11-10 09:49:10 +08:00
catlog22
b471e881a9 refactor(ui-design): clarify agent distribution and animation extraction logic
- **generate.md**: Eliminate ambiguity in agent task allocation
  - Add explicit "one agent per style" principle with warnings
  - Fix distribution strategy: balanced split (12→6+6) instead of fill-first (12→10+2)
  - Add distribution formula and concrete examples table
  - Simplify redundant constraints and consolidate documentation
  - Remove verbose descriptions per minimal information principle

- **explore-auto.md**: Optimize animation extraction conditional logic
  - Add user confirmation for animation reuse in code import scenarios
  - Implement should_extract_animation flag with comprehensive conditions
  - Add skip_animation_extraction for explicit user preference handling
  - Update autonomous flow description with conditional animation phase
  - Add animation extraction task to TodoWrite tracking

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-09 22:53:37 +08:00
catlog22
22b2cecd1f refactor(ui-design): unify ID architecture with design-id parameter
Replace --base-path with --design-id/--session across all UI design commands to eliminate ambiguity and improve consistency.

Key changes:
- New command: list.md for viewing available design runs
- Unified ID format: design_id = directory_name = "design-run-YYYYMMDD-RANDOM"
- Consistent path resolution: --design-id > --session > auto-detect/create
- Updated 11 commands: explore-auto, imitate-auto, capture, style/layout/animation-extract, generate, update, explore-layers, import-from-code
- Enhanced error handling with /workflow:ui-design:list hints
- Fixed import race condition with cleanup logic in explore-auto

Verified by Gemini: 5.0/5.0 consistency score
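
A minimal sketch of the documented resolution order (--design-id > --session > auto-detect); the directory layout reflects this commit's era and the fallbacks are illustrative:

```
# Resolve the design run directory; design_id doubles as the directory name.
if [ -n "${design_id:-}" ]; then
  base_path=$(find .workflow -type d -name "$design_id" | head -n 1)
elif [ -n "${session:-}" ]; then
  base_path=$(find ".workflow/WFS-$session" -maxdepth 1 -type d -name "design-run-*" | sort | tail -n 1)
else
  base_path=$(find .workflow -type d -name "design-run-*" | sort | tail -n 1)   # auto-detect newest
fi
[ -n "$base_path" ] || { echo "No design run found; see /workflow:ui-design:list"; exit 1; }
```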

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-09 21:14:54 +08:00
catlog22
f88beb9a49 refactor(workflow): eliminate ambiguous wait/completion terminology
Remove ambiguous "wait for completion" expressions that could be misinterpreted as waiting for an external command. Replace with clear execution-blocking language.

Changes:
- "WAIT for completion" → "Execute phase (blocks until finished)"
- "Upon each phase completion" → "When each phase finishes executing"
- "execution pauses until completion" → "execution pauses until the command finishes"

Affected files (6):
- brainstorm/auto-parallel.md: 3 changes
- plan.md: 3 changes
- tdd-plan.md: 1 change
- test-gen.md: 1 change
- ui-design/explore-auto.md: 12 changes
- ui-design/imitate-auto.md: 10 changes

Impact: Clarifies that SlashCommand is blocking execution, not waiting for separate completion task

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-09 19:35:31 +08:00
catlog22
ac799872c3 refactor(ui-design): optimize agent task allocation strategy for generate command
Optimized agent grouping to process multiple layouts per agent:
- Changed from one-layout-per-agent to max-10-layouts-per-agent
- Grouped by target × style for better efficiency
- Reduced agent count from O(T×S×L) to O(T×S×⌈L/10⌉)
- Shared token loading within agent (read once, use for all layouts)
- Enforced target isolation (different targets use different agents)

Benefits:
- Reduced agent overhead by ~83% in typical scenarios
- Minimized redundant token reads
- Maintained parallel execution capability (max 6 concurrent agents)
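
A small worked example of the grouping arithmetic, assuming 2 targets, 3 styles, and 12 layouts:

```
T=2; S=3; L=12
before=$(( T * S * L ))                 # one agent per layout: 72 agents
after=$(( T * S * ( (L + 9) / 10 ) ))   # ceil(L/10) = 2 per target x style: 12 agents
echo "agents: $before -> $after (~$(( (before - after) * 100 / before ))% fewer)"
```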

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-09 17:30:13 +08:00
catlog22
a2862090e1 docs(ui-design): remove key features and examples sections from explore-auto and imitate-auto documentation 2025-11-09 15:23:53 +08:00
catlog22
fb65e8f90f docs(ui-design-agent): add remote assets reference guidelines
Add comprehensive remote assets reference section:
- Image CDN services (Unsplash, Picsum, Placeholder.com)
- Icon libraries (Lucide, Font Awesome, Material Icons)
- Usage patterns with HTML examples
- Best practices for external resource loading

Guidelines include:
- HTTPS URLs mandatory
- Width/height attributes for layout stability
- Lazy loading for below-fold images
- Accessibility requirements (alt attributes)
- Prohibit local file paths and base64 embedding

Position: Between Typography System and Visual Effects System

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-09 15:05:47 +08:00
catlog22
0d11a29577 docs(explore-auto): add animation-extraction folder structure
Add missing animation-extraction output structure:
- .intermediates/animation-analysis/: analysis-options.json (with embedded user_selection), animations-*.json (URL mode)
- animation-extraction/: animation-tokens.json, animation-guide.md

Complete file structure now includes all three extraction phases:
- style-extraction
- animation-extraction
- layout-extraction

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-09 15:04:07 +08:00
catlog22
e488352f15 fix(explore-auto): correct intermediate file structure references
Fix file structure documentation to reflect actual implementation:
- user_selection is embedded in analysis-options.json, not separate file
- computed-styles.json only exists in URL mode
- dom-structure-*.json only exists in URL mode
- Remove obsolete design-space-analysis.json reference

Updated:
- .intermediates/style-analysis/: analysis-options.json (with embedded user_selection), computed-styles.json (URL mode)
- .intermediates/layout-analysis/: analysis-options.json (with embedded user_selection), dom-structure-*.json (URL mode)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-09 15:02:29 +08:00
catlog22
dd0348c3eb docs(ui-design): add batch text format interaction strategy for user questions
Add universal question interaction rules to UI design commands:
- Use batch text format (1a 2b) when questions > 4 OR options > 3
- Otherwise use AskUserQuestion tool
- Support multi-selection: [N][key1,key2] format for layout/style commands
- Variable-based templates with option descriptions for clarity

Updated commands:
- animation-extract.md: Single selection per question
- layout-extract.md: Multi-selection per target support
- style-extract.md: Multi-selection for variants

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-09 14:59:27 +08:00
catlog22
3be5663ab0 Remove batch-generate command documentation and related references; update explore-auto and layout-extract commands to enable interactive mode by default and embed user selections in analysis-options.json; enhance style-extract command to capture user selections and update analysis JSON; revise command reference and specification documents accordingly. 2025-11-09 14:35:30 +08:00
catlog22
d410ed20d6 docs(command-guide): add standardized update guideline and enhance documentation
- Add UPDATE-GUIDELINE.md: version-agnostic update process (6 phases)
- Update SKILL.md: remove version info, reference update guideline
- Update ui-design-workflow-guide.md: document explore-auto default non-interactive mode
- Sync reference docs: latest UI design command changes (9 files)
- Rebuild indexes: reflect explore-auto --interactive parameter addition

Refs: 47e05f2 (explore-auto default mode change)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-09 13:38:34 +08:00
catlog22
47e05f2142 fix: change explore-auto default to non-interactive mode
- Update default interactive_mode from true to false
- Add --interactive parameter documentation
- Non-interactive batch generation is now the default behavior

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-09 13:30:06 +08:00
catlog22
6caf5eed49 Merge branch 'main' of https://github.com/catlog22/Claude-Code-Workflow 2025-11-09 13:29:02 +08:00
catlog22
084f7b7254 docs(command-guide): sync reference docs and rebuild indexes
- Sync latest agent files (action-planning-agent, ui-design-agent)
- Sync latest UI design workflow commands (11 files)
- Sync latest test workflow commands (8 files)
- Rebuild all 5 index files with updated metadata

Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-09 13:24:49 +08:00
catlog22
c647787b86 feat: add interactive multi-selection mechanism for UI design workflow
- Add --interactive flag to style-extract and layout-extract commands
- Enhance animation-extract --mode interactive with selection storage
- Implement unified user-selections storage in .intermediates/user-selections/
- Add parameter passthrough in explore-auto (default: enabled) and imitate-auto (always enabled)
- Support fallback to generate all variants when no selection file exists
- Fix explore-auto.md bug: duplicate --base-path in import-from-code call (line 313)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-08 23:53:27 +08:00
catlog22
d213885f52 Refactor layout and style extraction workflows for multi-selection capabilities
- Updated `layout-extract` command to support user multi-selection of layout concepts, allowing for multiple layout templates to be generated based on user preferences.
- Revised argument hints and output structures to reflect changes in selection handling and output file formats.
- Enhanced user interaction by enabling multi-select options in both layout and style extraction phases.
- Streamlined the generation of layout templates and design systems to accommodate multiple selected variants, improving flexibility and user experience.
- Adjusted validation and output verification processes to ensure consistency with new multi-selection features.
2025-11-08 23:23:06 +08:00
catlog22
7269f20f57 refactor: reorganize ui-design-agent with 3 task patterns covering 6 task types
Restructure agent documentation to avoid confusion and clarify responsibilities.
All 6 task types now organized into 3 clear patterns with shared standards.

## New Structure

**Task Patterns** (replaces Core Capabilities):
- Pattern 1: Option Generation (2 tasks)
  * DESIGN_DIRECTION_GENERATION_TASK
  * LAYOUT_CONCEPT_GENERATION_TASK

- Pattern 2: System Generation (3 tasks)
  * DESIGN_SYSTEM_GENERATION_TASK
  * LAYOUT_TEMPLATE_GENERATION_TASK
  * ANIMATION_TOKEN_GENERATION_TASK

- Pattern 3: Assembly (1 task)
  * LAYOUT_STYLE_ASSEMBLY

Each pattern documents:
- Purpose and autonomy level
- Task types covered
- Common process flow
- Task-specific inputs/outputs
- Key principles

## Benefits

1. **Complete Coverage**: All 6 task types explicitly documented
2. **Clear Organization**: Tasks grouped by similar characteristics
3. **Reduced Duplication**: Shared design standards and execution principles
4. **Pattern Recognition**: Agent knows which rules apply per task type
5. **Maintainability**: Single agent easier to maintain than multiple agents

## Updated Sections

- **Task Patterns**: New section replacing fragmented Core Capabilities
- **Execution Flow**: Generic pattern-based flow (was task-specific)
- **Core Execution Principles**: Simplified with pattern-specific autonomy levels
- **Key Reminders**: Updated to reference patterns, removed external invocation terms

## Verification

- ✓ Pattern 1 covers: DESIGN_DIRECTION_GENERATION, LAYOUT_CONCEPT_GENERATION
- ✓ Pattern 2 covers: DESIGN_SYSTEM_GENERATION, LAYOUT_TEMPLATE_GENERATION, ANIMATION_TOKEN_GENERATION
- ✓ Pattern 3 covers: LAYOUT_STYLE_ASSEMBLY

All commands (style-extract, layout-extract, animation-extract, generate) now properly supported.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-08 21:20:23 +08:00
catlog22
3199b91255 refactor: remove external invocation references from ui-design-agent
Agent should focus on task execution, not who invokes it. Remove all
references to orchestrator commands and external callers.

Changes:
1. **Agent description** (line 19):
   - Before: "invoked by orchestrator commands (e.g., generate.md)"
   - After: Focus on autonomous execution

2. **Core Capabilities** (line 25):
   - Removed: "Invoked by: generate.md Phase 2"
   - Agent doesn't need to know its caller

3. **Execution Process** (line 250):
   - Before: "When invoked by orchestrator command"
   - After: "Standard execution flow for design generation tasks"

4. **Task prompt terminology** (line 259):
   - Changed: "orchestrator prompt" → "task prompt"

5. **Invocation Model → Task Responsibilities** (lines 295-305):
   - Before: "You are invoked by orchestrator commands..."
   - After: Focus on task responsibilities and expected output

6. **Execution Principles** (lines 310, 315):
   - Changed: "orchestrator command" → "task prompt"
   - Changed: "Each invocation" → "Each task"

7. **Performance Optimization → Generation Approach** (lines 342-348):
   - Removed: Two-layer architecture with orchestrator responsibility
   - Added: Single-pass assembly focus (matches current implementation)

8. **Scope & Boundaries** (line 359):
   - Simplified "NOT Your Responsibilities" to "Out of Scope"
   - Removed: workflow orchestration, task scheduling references
   - Focus on what agent does, not external workflow

9. **Path Handling** (lines 423-426):
   - Changed: "Orchestrator provides" → "Task prompts provide"
   - Changed: "Agent uses" → "Use" (more direct)

Result: Agent documentation now focuses on task execution without
references to external callers or orchestration.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-08 21:12:06 +08:00
catlog22
e20604bb21 refactor: align ui-design-agent.md with current generate.md implementation
Remove obsolete content and update documentation to match actual workflow:

1. **Remove consolidate.md references**:
   - Updated agent invocation description (line 19)
   - Replaced "Invocation Model" section (lines 296-308)
   - consolidate.md doesn't exist in current workflow

2. **Replace obsolete Core Capabilities section** (lines 21-101):
   - Removed outdated task definitions:
     * Design Token Synthesis (referenced consolidate.md)
     * Layout Strategy Generation (referenced consolidate.md)
     * Obsolete UI Prototype Generation with CSS placeholders
     * Consistency Validation (no longer used)
   - Added correct "Layout & Style Assembly" documentation matching generate.md:
     * Task type: [LAYOUT_STYLE_ASSEMBLY]
     * Direct CSS reference: <link href="{target}-style-{style_id}-layout-{layout_id}.css">
     * Self-contained CSS architecture (no placeholders)
     * Assembly process matching generate.md lines 130-174

3. **Remove CSS placeholder instructions**:
   - Removed {{STRUCTURAL_CSS}} and {{TOKEN_CSS}} placeholder pattern
   - These placeholders are NOT used in current implementation
   - Current system uses direct CSS file references with resolved var() values

Key improvements:
- Accurate documentation of current [LAYOUT_STYLE_ASSEMBLY] task
- Correct CSS reference pattern matching generate.md:134
- Removed all references to non-existent consolidate.md
- Self-contained CSS architecture properly documented

Verified consistency with generate.md implementation (lines 130-174).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-08 21:07:21 +08:00
catlog22
1fb5b3cbe9 refactor: standardize UI design workflow folder naming and add absolute path conversion
Unified folder naming format across all UI design commands:
- Session mode: .workflow/WFS-{session}/design-run-YYYYMMDD-RANDOM
- Standalone mode: .workflow/.design/design-run-YYYYMMDD-RANDOM

Key changes:
- Standardized run directory naming to design-run-* format
- Added absolute path conversion for all base_path variables
- Updated path discovery patterns from design-* to design-run-*
- Maintained backward compatibility for existing directory discovery

Modified commands (10 files):
- explore-auto.md, imitate-auto.md: Unified standalone path format
- explore-layers.md: Changed from design-layers-* to design-run-*
- capture.md: Added design- prefix to standalone mode
- animation-extract.md, style-extract.md, layout-extract.md,
  generate.md, batch-generate.md: Added absolute path conversion
- update.md: Updated path discovery pattern

Technical improvements:
- Eliminates path ambiguity between command and agent execution contexts
- Ensures all file operations use absolute paths for reliability
- Provides consistent directory structure across workflows
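
A minimal sketch of the absolute path conversion added to these commands; the base_path variable follows the commit, the conversion idiom is the standard one:

```
# Convert base_path to an absolute path so command and agent contexts resolve the same files.
case "$base_path" in
  /*) ;;                                        # already absolute
  *)  base_path="$(cd "$base_path" && pwd)" ;;  # resolve relative to the current directory
esac
```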

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-08 20:52:10 +08:00
catlog22
a8ab192c72 refactor: remove Related Commands section from workflow tool commands
Only orchestrator commands (plan, execute, resume, test-gen, test-fix-gen,
tdd-plan) retain Related Commands section to document workflow phases.
Tool commands (conflict-resolution, task-generate-tdd, test-task-generate,
test-concept-enhanced, test-context-gather, tdd-coverage-analysis) have
Related Commands removed to reduce documentation redundancy.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-08 17:55:55 +08:00
catlog22
b62b42e9f4 feat: migrate test-task-generate to agent-driven architecture
- Refactor test-task-generate.md to use action-planning-agent
- Add two-phase execution flow (Discovery → Agent Execution)
- Integrate Memory-First principle and MCP tool enhancements
- Support both agent-mode (default) and cli-execute-mode
- Add test-specific context package with TEST_ANALYSIS_RESULTS.md
- Align with task-generate-agent.md architecture
- Remove 556 lines of redundant old content (Phase 1-4 old structure)
- Update test-gen.md and test-fix-gen.md to reflect agent-driven changes

Changes include:
- Core Philosophy with agent-driven principles
- Agent Context Package structure for test sessions
- Discovery Actions for test context loading
- Agent Invocation with test-specific requirements
- Test Task Structure Reference
- Updated Integration & Usage sections
- Enhanced Related Commands documentation

Now test task generation uses autonomous agent planning with MCP enhancements for better test coverage analysis and generation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-08 17:23:55 +08:00
catlog22
52fce757f8 refactor: update tdd-plan to use --cli-execute parameter
- Change --agent flag to --cli-execute for consistency
- Make Agent Mode the default execution mode
- Update CLI Mode to use --cli-execute flag
- Align with agent-driven task generation architecture
- Update Phase 5 command documentation
- Update Related Commands section

This completes the migration to agent-driven architecture for both /workflow:plan and /workflow:tdd-plan commands.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-08 17:17:09 +08:00
catlog22
c12f6b888a feat: migrate task-generate-tdd to agent-driven architecture
- Refactor task-generate-tdd.md to use action-planning-agent
- Add two-phase execution flow (Discovery → Agent Execution)
- Integrate Memory-First principle and MCP tool enhancements
- Support both agent-mode (default) and cli-execute-mode
- Change --agent flag to --cli-execute for consistency
- Add TDD-specific context package with test coverage integration
- Align with task-generate-agent.md architecture

Now both /workflow:plan and /workflow:tdd-plan use agent-driven task generation for autonomous planning and execution.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-08 17:11:17 +08:00
catlog22
47667b8360 refactor: streamline task generation workflow architecture
- Remove 150+ lines of duplicate content from task-generate-agent.md
- Implement reference-based design following Content Uniqueness Rules
- Simplify plan.md to use task-generate-agent exclusively
- Remove --agent parameter (agent mode is now default)
- Improve separation of concerns between command and agent layers

Changes:
- task-generate-agent.md: Replace detailed specs with references to action-planning-agent.md
- plan.md: Remove task-generate command, unify on agent-driven approach

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-08 17:03:18 +08:00
catlog22
915eb396e7 feat: enhance quantification requirements across workflow tools
Enhanced task generation with mandatory quantification standards to eliminate ambiguity:

- Add Quantification Requirements section to all task generation commands
- Enforce explicit counts and enumerations in requirements, acceptance criteria, and modification points
- Standardize formats: "Implement N items: [list]" vs vague "implement features"
- Include verification commands for measurable acceptance criteria
- Simplify documentation by removing verbose examples while preserving all key information

Changes:
- task-generate.md: Add quantification section, streamline Task JSON schema, remove CLI examples
- task-generate-agent.md: Add quantification rules, improve template selection clarity
- task-generate-tdd.md: Add TDD-specific quantification formats for Red-Green-Refactor phases
- action-planning-agent.md: Add quantification requirements with validation checklist and updated examples

Impact:
- Reduces task documentation from ~900 lines to ~600 lines (33% reduction)
- All requirements now require explicit counts: "5 files", "15 test cases", ">=85% coverage"
- Acceptance criteria must include verification commands
- Modification points must specify exact targets with line numbers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-08 15:02:07 +08:00
catlog22
1cb83c07e0 feat: strengthen task generation commands with quantification requirements to eliminate ambiguity 2025-11-08 14:39:45 +08:00
catlog22
0404a7eb7c docs: strengthen conflict-resolution core rules to prohibit bash command output
Add core responsibility rule: output text directly; do not use bash echo/printf commands

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-08 12:07:45 +08:00
catlog22
b98d28df3d docs(command-guide): add Pattern 0 brainstorming workflow for 0-to-1 development
- Add Pattern 0: Brainstorming as the first workflow pattern
- Distinguish FROM-ZERO-TO-ONE vs FEATURE-ADDITION scenarios
- Update decision tree to require brainstorming for new projects
- Enhance SKILL.md Mode 4 with project stage identification
- Remove duplicate Pattern 7 UI design workflow
- Add warning in best practices about not skipping brainstorming

Fixes an issue where the system incorrectly guided users to start new projects with /workflow:plan
instead of /workflow:brainstorm:artifacts
2025-11-06 22:14:14 +08:00
catlog22
1e67f5780d feat: enhance command-guide skill with a UI design workflow guide and automatic sync
## Key Updates

### 1. New complete UI design workflow guide
- Add `ui-design-workflow-guide.md` (12KB)
- Analyze 11 UI design command files with Gemini
- Detailed guidance for 4 workflow modes:
  - Exploratory design (new concepts)
  - Design replication (imitating existing websites)
  - Code-first import
  - Batch generation (high volume)
- Includes architecture best practices, performance optimization, and troubleshooting

### 2. Updated workflow pattern guide
- Add Pattern 7: UI design workflow to `workflow-patterns.md`
- Provide Chinese-language examples for the three sub-modes
- Add cross-references to the UI design guide

### 3. Enhanced index build script
- Update `analyze_commands.py` to auto-sync the reference directory
- Add `sync_reference_directory()` function:
  - Automatically delete stale reference files
  - Copy the latest files from `.claude/agents` and `.claude/commands`
  - Ensure the reference directory is up to date before index builds
- Enhance statistics output to show reference directory sync status

### 4. Updated index files
- Rebuild all index files (all-commands.json, by-category.json, etc.)
- Optimize command metadata and categorization
- Sync the latest UI design commands (including the new import-from-code.md)

## Technical Details

**Command taxonomy**:
- Orchestrators: explore-auto, imitate-auto, batch-generate
- Core Extractors: style-extract, layout-extract, animation-extract
- Input & Capture: capture, explore-layers, import-from-code
- Assemblers: generate, update

**Architecture principles**:
- Separation of concerns: Style, Structure, and Motion handled independently
- Token-First CSS: use CSS variables instead of hard-coded values
- Parallel execution: supports up to 6 concurrent tasks

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-06 21:37:19 +08:00
catlog22
581b46b494 feat: update UI design workflow with enhanced design completeness checks and detailed report generation 2025-11-06 21:20:19 +08:00
catlog22
eeffa8a9e8 Enhance UI Design Workflows with Intelligent Path Detection and Source Selection
- Updated explore-auto.md to include a new phase for intelligent path detection and source selection, allowing the system to identify existing file paths from prompts and images.
- Introduced conditional code import and completeness assessment based on detected design sources (code only, visual only, hybrid).
- Modified phase execution flow to accommodate new checks for style, animation, and layout completeness based on the design source.
- Added error handling for missing design elements and user prompts for visual supplementation.
- Enhanced imitate-auto.md with intelligent path detection and a new phase for code import and completeness assessment, ensuring a hybrid approach when applicable.
- Implemented detailed reporting for each phase, including missing elements and recommendations for improvement.
- Created a comprehensive import-from-code.md file outlining the workflow for importing design systems from code files, detailing the execution process and error handling.
2025-11-06 21:00:49 +08:00
catlog22
096621eee7 docs: emphasize the SKILL intelligent-integration principle and avoid copying templates verbatim
Core changes:
1. Add a "Core Principle: Intelligent Integration" section
   - Clarify that the SKILL provides reference material for intelligent integration, not templates to copy verbatim
   - Define the response strategy: analyze context → extract relevant information → synthesize a tailored response → deliver the targeted response
   - List prohibited behaviors and mandatory principles

2. Update the Process description for all 6 Modes:
   - Mode 1 (Command Search): emphasize intelligent filtering and ranking rather than dumping JSON data
   - Mode 2 (Smart Recommendations): analyze workflow context and prioritize recommendations
   - Mode 3 (Full Documentation): extract user-relevant sections with progressive disclosure
   - Mode 4 (Beginner Onboarding): assess user background and design a personalized learning path
   - Mode 5 (Issue Reporting): guide information gathering intelligently and adapt template sections
   - Mode 6 (Deep Command Analysis): synthesize focused explanations and integrate CLI analysis results

3. Update example descriptions:
   - Every Mode's examples emphasize "NOT: [return the template verbatim]"
   - Show how to tailor responses to context

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-06 16:56:34 +08:00
catlog22
e8a5980c88 docs: correct the CLI tool semantic-invocation concept and trim the documentation
Core changes:
1. Correct the semantic-invocation trigger: the user must explicitly say "use gemini", "use qwen", or "let codex"
2. Distinguish the two invocation styles:
   - CLI tool semantic invocation: user explicitly names a tool → Claude generates the CLI command
   - Slash command invocation: user enters /workflow:* or /cli:* → a predefined flow executes
3. Trim the documentation structure:
   - Remove excessive duplicate examples and detailed steps
   - Simplify the capability checklist
   - Remove the advanced-tips and anti-patterns sections
   - Add a guide to common workflows
4. Update all usage-scenario examples to ensure the trigger wording is correct

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-06 16:52:31 +08:00
catlog22
38b070551c fix: use absolute paths so the skill works when installed globally
Problem
- The original code used relative paths (cd reference &&)
- After the skill is installed to ~/.claude/skills/, relative paths break

Fix
- All file operations use the absolute path: ~/.claude/skills/command-guide/reference/
- CLI commands switch to the --include-directories parameter pointing at absolute paths
- Update paths in example output to absolute paths

Impact
- handleSimpleQuery: basePath uses an absolute path
- locateCommandFile: uses the basePath parameter
- executeCLIAnalysis: uses --include-directories instead of cd
- resolveEntityPath: glob uses absolute paths
- buildCLIPrompt: CONTEXT uses @**/* + --include-directories

Version
- v1.3.0 → v1.3.1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-06 16:37:21 +08:00
catlog22
1897ba4e82 feat: extend the command-guide skill with deep command analysis and CLI-assisted queries
Add Mode 6: Deep Command Analysis
- Create a reference backup directory (80 documents: 11 agents + 69 commands)
- Support simple queries (direct file lookup) and complex queries (CLI-assisted analysis)
- Integrate gemini/qwen for cross-command comparison, best practices, and workflow analysis
- Add automatic query-complexity classification and a degradation strategy

Documentation updates
- SKILL.md: add the Mode 6 description and a Reference Documentation section
- implementation-details.md: add the full Mode 6 implementation logic
- Bump version to v1.3.0

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-06 16:27:58 +08:00
catlog22
0ab3d0e1af fix: update triggers in the command-guide description and unify their format for consistency 2025-11-06 15:46:08 +08:00
catlog22
5aa1b75e95 feat: enhance issue-reporting and diagnostics templates with execution-flow emphasis and privacy-protection guidance 2025-11-06 15:40:00 +08:00
240 changed files with 65796 additions and 12515 deletions

View File

@@ -18,93 +18,159 @@ color: yellow
You are a pure execution agent specialized in creating actionable implementation plans. You receive requirements and control flags from the command layer and execute planning tasks without complex decision-making logic.
## Execution Process
## Overview
### Input Processing
**What you receive:**
- **Execution Context Package**: Structured context from command layer
**Agent Role**: Transform user requirements and brainstorming artifacts into structured, executable implementation plans with quantified deliverables and measurable acceptance criteria.
**Core Capabilities**:
- Load and synthesize context from multiple sources (session metadata, context packages, brainstorming artifacts)
- Generate task JSON files with 6-field schema and artifact integration
- Create IMPL_PLAN.md and TODO_LIST.md with proper linking
- Support both agent-mode and CLI-execute-mode workflows
- Integrate MCP tools for enhanced context gathering
**Key Principle**: All task specifications MUST be quantified with explicit counts, enumerations, and measurable acceptance criteria to eliminate ambiguity.
---
## 1. Execution Process
### 1.1 Input Processing
**What you receive from command layer:**
- **Session Paths**: File paths to load content autonomously
- `session_metadata_path`: Session configuration and user input
- `context_package_path`: Context package with brainstorming artifacts catalog
- **Metadata**: Simple values
- `session_id`: Workflow session identifier (WFS-[topic])
- `session_metadata`: Session configuration and state
- `analysis_results`: Analysis recommendations and task breakdown
- `artifacts_inventory`: Detected brainstorming outputs (synthesis specification, guidance-specification, role analyses)
- `context_package`: Project context and assets
- `mcp_capabilities`: Available MCP tools (exa-code, exa-web)
- `mcp_analysis`: Optional pre-executed MCP analysis results
- `execution_mode`: agent-mode | cli-execute-mode
- `mcp_capabilities`: Available MCP tools (exa_code, exa_web, code_index)
**Legacy Support** (backward compatibility):
- **pre_analysis configuration**: Multi-step array format with action, template, method fields
- **Control flags**: DEEP_ANALYSIS_REQUIRED, etc.
- **Task requirements**: Direct task description
### Execution Flow (Two-Phase)
### 1.2 Two-Phase Execution Flow
#### Phase 1: Context Loading & Assembly
**Step-by-step execution**:
```
Phase 1: Context Validation & Enhancement (Discovery Results Provided)
1. Receive and validate execution context package
2. Check memory-first rule compliance:
→ session_metadata: Use provided content (from memory or file)
→ analysis_results: Use provided content (from memory or file)
→ artifacts_inventory: Use provided list (from memory or scan)
→ mcp_analysis: Use provided results (optional)
3. Optional MCP enhancement (if not pre-executed):
1. Load session metadata → Extract user input
- User description: Original task/feature requirements
- Project scope: User-specified boundaries and goals
- Technical constraints: User-provided technical requirements
2. Load context package → Extract structured context
Commands: Read({{context_package_path}})
Output: Complete context package object
3. Check existing plan (if resuming)
- If IMPL_PLAN.md exists: Read for continuity
- If task JSONs exist: Load for context
4. Load brainstorming artifacts (in priority order)
a. guidance-specification.md (Highest Priority)
→ Overall design framework and architectural decisions
b. Role analyses (progressive loading: load incrementally by priority)
→ Load role analysis files one at a time as needed
→ Reason: Each analysis.md is long; progressive loading prevents token overflow
c. Synthesis output (if exists)
→ Integrated view with clarifications
d. Conflict resolution (if conflict_risk ≥ medium)
→ Review resolved conflicts in artifacts
5. Optional MCP enhancement
→ mcp__exa__get_code_context_exa() for best practices
→ mcp__exa__web_search_exa() for external research
4. Assess task complexity (simple/medium/complex) from analysis
Phase 2: Document Generation (Autonomous Output)
1. Extract task definitions from analysis_results
2. Generate task JSON files with 5-field schema + artifacts
3. Create IMPL_PLAN.md with context analysis and artifact references
4. Generate TODO_LIST.md with proper structure (▸, [ ], [x])
5. Update session state for execution readiness
6. Assess task complexity (simple/medium/complex)
```
### Context Package Usage
**Context Package Structure** (fields defined by context-search-agent):
**Standard Context Structure**:
**Always Present**:
- `metadata.task_description`: User's original task description
- `metadata.keywords`: Extracted technical keywords
- `metadata.complexity`: Task complexity level (simple/medium/complex)
- `metadata.session_id`: Workflow session identifier
- `project_context.architecture_patterns`: Architecture patterns (MVC, Service layer, etc.)
- `project_context.tech_stack`: Language, frameworks, libraries
- `project_context.coding_conventions`: Naming, error handling, async patterns
- `assets.source_code[]`: Relevant existing files with paths and metadata
- `assets.documentation[]`: Reference docs (CLAUDE.md, API docs)
- `assets.config[]`: Configuration files (package.json, .env.example)
- `assets.tests[]`: Test files
- `dependencies.internal[]`: Module dependencies
- `dependencies.external[]`: Package dependencies
- `conflict_detection.risk_level`: Conflict risk (low/medium/high)
**Conditionally Present** (check existence before loading):
- `brainstorm_artifacts.guidance_specification`: Overall design framework (if exists)
- Check: `brainstorm_artifacts?.guidance_specification?.exists === true`
- Content: Use `content` field if present, else load from `path`
- `brainstorm_artifacts.role_analyses[]`: Role-specific analyses (if array not empty)
- Each role: `role_analyses[i].files[j]` has `path` and `content`
- `brainstorm_artifacts.synthesis_output`: Synthesis results (if exists)
- Check: `brainstorm_artifacts?.synthesis_output?.exists === true`
- Content: Use `content` field if present, else load from `path`
- `conflict_detection.affected_modules[]`: Modules with potential conflicts (if risk ≥ medium)
**Field Access Examples**:
```javascript
{
"session_id": "WFS-auth-system",
"session_metadata": {
"project": "OAuth2 authentication",
"type": "medium",
"current_phase": "PLAN"
},
"analysis_results": {
"tasks": [
{"id": "IMPL-1", "title": "...", "requirements": [...]}
],
"complexity": "medium",
"dependencies": [...]
},
"artifacts_inventory": {
"synthesis_specification": ".workflow/WFS-auth/.brainstorming/role analysis documents",
"topic_framework": ".workflow/WFS-auth/.brainstorming/guidance-specification.md",
"role_analyses": [
".workflow/WFS-auth/.brainstorming/system-architect/analysis.md",
".workflow/WFS-auth/.brainstorming/subject-matter-expert/analysis.md"
]
},
"context_package": {
"assets": [...],
"focus_areas": [...]
},
"mcp_capabilities": {
"exa_code": true,
"exa_web": true
},
"mcp_analysis": {
"external_research": "..."
}
// Always safe - direct field access
const techStack = contextPackage.project_context.tech_stack;
const riskLevel = contextPackage.conflict_detection.risk_level;
const existingCode = contextPackage.assets.source_code; // Array of files
// Conditional - use content if available, else load from path
if (contextPackage.brainstorm_artifacts?.guidance_specification?.exists) {
const spec = contextPackage.brainstorm_artifacts.guidance_specification;
const content = spec.content || Read(spec.path);
}
if (contextPackage.brainstorm_artifacts?.role_analyses?.length > 0) {
// Progressive loading: load role analyses incrementally by priority
contextPackage.brainstorm_artifacts.role_analyses.forEach(role => {
role.files.forEach(file => {
const analysis = file.content || Read(file.path); // Load one at a time
});
});
}
```
**Using Context in Task Generation**:
1. **Extract Tasks**: Parse `analysis_results.tasks` array
2. **Map Artifacts**: Use `artifacts_inventory` to add artifact references to task.context
3. **Assess Complexity**: Use `analysis_results.complexity` for document structure decision
4. **Session Paths**: Use `session_id` to construct output paths (.workflow/{session_id}/)
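A minimal TypeScript sketch of steps 1 and 4 above: turning `analysis_results.tasks` into pending task JSON skeletons whose output paths are derived from `session_id`. The `AnalysisTask` shape and the `feature`/`@code-developer` defaults are illustrative assumptions; artifact mapping (step 2) is sketched separately under the Context Object section.

```typescript
// Assumed shape of entries in analysis_results.tasks (see the example package above).
interface AnalysisTask { id: string; title: string; requirements: string[] }

// Build pending task JSON skeletons; all output paths are constructed from session_id.
function buildTaskSkeletons(sessionId: string, tasks: AnalysisTask[]) {
  const sessionDir = `.workflow/active/${sessionId}`;   // "active/" prefix follows the context_package_path convention below
  return tasks.map((t) => ({
    id: t.id,                                            // "IMPL-N"
    title: t.title,
    status: "pending" as const,
    context_package_path: `${sessionDir}/.process/context-package.json`,
    meta: { type: "feature", agent: "@code-developer", execution_group: null },   // illustrative defaults
    context: { requirements: t.requirements, focus_paths: [], acceptance: [], depends_on: [], artifacts: [] },
    flow_control: { pre_analysis: [], implementation_approach: [], target_files: [] },
  }));
}

// Each skeleton would be written to `${sessionDir}/.task/${task.id}.json`.
```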
#### Phase 2: Document Generation
### MCP Integration Guidelines
**Autonomous output generation**:
```
1. Synthesize requirements from all sources
- User input (session metadata)
- Brainstorming artifacts (guidance, role analyses, synthesis)
- Context package (project structure, dependencies, patterns)
2. Generate task JSON files
- Apply 6-field schema (id, title, status, meta, context, flow_control)
- Integrate artifacts catalog into context.artifacts array
- Add quantified requirements and measurable acceptance criteria
3. Create IMPL_PLAN.md
- Load template: Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)
- Follow template structure and validation checklist
- Populate all 8 sections with synthesized context
- Document CCW workflow phase progression
- Update quality gate status
4. Generate TODO_LIST.md
- Flat structure ([ ] for pending, [x] for completed)
- Link to task JSONs and summaries
5. Update session state for execution readiness
```
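A minimal sketch of step 4 above (TODO_LIST.md generation), assuming a simple `TaskRecord` shape; the link formats follow the linking rules described later in this document, and the output path is the session layout shown above.

```typescript
import { writeFileSync } from "node:fs";

interface TaskRecord { id: string; title: string; status: "pending" | "active" | "completed" | "blocked" }

// Render a flat checklist: pending tasks link to their task JSON, completed tasks to their summary.
function renderTodoList(topic: string, tasks: TaskRecord[]): string {
  const lines = tasks.map((t) =>
    t.status === "completed"
      ? `- [x] **${t.id}**: ${t.title} → [✅](./.summaries/${t.id}-summary.md)`
      : `- [ ] **${t.id}**: ${t.title} → [📋](./.task/${t.id}.json)`
  );
  return [`# Tasks: ${topic}`, "", "## Task Progress", ...lines, ""].join("\n");
}

// Usage (hypothetical session id):
// writeFileSync(`.workflow/active/WFS-auth-system/TODO_LIST.md`, renderTodoList("Auth System", tasks));
```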
### 1.3 MCP Integration Guidelines
**Exa Code Context** (`mcp_capabilities.exa_code = true`):
```javascript
@@ -128,196 +194,514 @@ mcp__exa__get_code_context_exa(
}
```
## Core Functions
---
### 1. Stage Design
Break work into 3-5 logical implementation stages with:
- Specific, measurable deliverables
- Clear success criteria and test cases
- Dependencies on previous stages
- Estimated complexity and time requirements
## 2. Output Specifications
### 2. Task JSON Generation (5-Field Schema + Artifacts)
Generate individual `.task/IMPL-*.json` files with:
### 2.1 Task JSON Schema (6-Field)
Generate individual `.task/IMPL-*.json` files with the following structure:
#### Top-Level Fields
**Required Fields**:
```json
{
"id": "IMPL-N[.M]",
"id": "IMPL-N",
"title": "Descriptive task name",
"status": "pending",
"status": "pending|active|completed|blocked",
"context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json"
}
```
**Field Descriptions**:
- `id`: Task identifier (format: `IMPL-N`)
- `title`: Descriptive task name summarizing the work
- `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies)
- `context_package_path`: Path to smart context package containing project structure, dependencies, and brainstorming artifacts catalog
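For reference, a sketch of the 6-field schema described above expressed as TypeScript types. Field names mirror this document; nested shapes are abbreviated where the spec leaves them open, and the test-specific extensions are omitted.

```typescript
type TaskStatus = "pending" | "active" | "completed" | "blocked";

interface TaskMeta {
  type: "feature" | "bugfix" | "refactor" | "test-gen" | "test-fix" | "docs";
  agent: string;                      // e.g. "@code-developer"
  execution_group: string | null;     // same group ID => tasks may run concurrently
}

interface TaskContext {
  requirements: string[];             // quantified: "Implement N items: [...]"
  focus_paths: string[];
  acceptance: string[];               // measurable, with verification commands
  depends_on: string[];
  artifacts: Array<{ type: string; path: string; priority: string }>;
}

interface FlowControl {
  pre_analysis: unknown[];
  implementation_approach: unknown[];
  target_files: string[];
}

interface TaskJson {
  id: string;                         // "IMPL-N"
  title: string;
  status: TaskStatus;
  context_package_path: string;
  meta: TaskMeta;
  context: TaskContext;
  flow_control: FlowControl;
}
```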
#### Meta Object
```json
{
"meta": {
"type": "feature|bugfix|refactor|test|docs",
"agent": "@code-developer"
},
"context": {
"requirements": ["from analysis_results"],
"focus_paths": ["src/paths"],
"acceptance": ["measurable criteria"],
"depends_on": ["IMPL-N"],
"artifacts": [
{
"type": "synthesis_specification",
"path": "{from artifacts_inventory}",
"priority": "highest"
}
]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_synthesis_specification",
"commands": ["bash(ls {path} 2>/dev/null)", "Read({path})"],
"output_to": "synthesis_specification",
"on_error": "skip_optional"
},
{
"step": "mcp_codebase_exploration",
"command": "mcp__code-index__find_files() && mcp__code-index__search_code_advanced()",
"output_to": "codebase_structure"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Load and analyze role analyses",
"description": "Load role analyses from artifacts and extract requirements",
"modification_points": ["Load role analyses", "Extract requirements and design patterns"],
"logic_flow": ["Read role analyses from artifacts", "Parse architecture decisions", "Extract implementation requirements"],
"depends_on": [],
"output": "synthesis_requirements"
},
{
"step": 2,
"title": "Implement following specification",
"description": "Implement task requirements following consolidated role analyses",
"modification_points": ["Apply requirements from [synthesis_requirements]", "Modify target files", "Integrate with existing code"],
"logic_flow": ["Apply changes based on [synthesis_requirements]", "Implement core logic", "Validate against acceptance criteria"],
"depends_on": [1],
"output": "implementation"
}
],
"target_files": ["file:function:lines", "path/to/NewFile.ts"]
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
"execution_group": "parallel-abc123|null"
}
}
```
**Artifact Mapping**:
- Use `artifacts_inventory` from context package
- Highest priority: synthesis_specification
- Medium priority: topic_framework
- Low priority: role_analyses
**Field Descriptions**:
- `type`: Task category - `feature` (new functionality), `bugfix` (fix defects), `refactor` (restructure code), `test-gen` (generate tests), `test-fix` (fix failing tests), `docs` (documentation)
- `agent`: Assigned agent for execution
- `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks
### 3. Implementation Plan Creation
Generate `IMPL_PLAN.md` at `.workflow/{session_id}/IMPL_PLAN.md`:
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
**Structure**:
```markdown
---
identifier: {session_id}
source: "User requirements"
analysis: .workflow/{session_id}/.process/ANALYSIS_RESULTS.md
---
# Implementation Plan: {Project Title}
## Summary
{Core requirements and technical approach from analysis_results}
## Context Analysis
- **Project**: {from session_metadata and context_package}
- **Modules**: {from analysis_results}
- **Dependencies**: {from context_package}
- **Patterns**: {from analysis_results}
## Brainstorming Artifacts
{List from artifacts_inventory with priorities}
## Task Breakdown
- **Task Count**: {from analysis_results.tasks.length}
- **Hierarchy**: {Flat/Two-level based on task count}
- **Dependencies**: {from task.depends_on relationships}
## Implementation Plan
- **Execution Strategy**: {Sequential/Parallel}
- **Resource Requirements**: {Tools, dependencies}
- **Success Criteria**: {from analysis_results}
```json
{
"meta": {
"type": "test-gen|test-fix",
"agent": "@code-developer|@test-fix-agent",
"test_framework": "jest|vitest|pytest|junit|mocha",
"coverage_target": "80%",
"use_codex": true|false
}
}
```
### 4. TODO List Generation
Generate `TODO_LIST.md` at `.workflow/{session_id}/TODO_LIST.md`:
**Test-Specific Fields**:
- `test_framework`: Existing test framework from project (required for test tasks)
- `coverage_target`: Target code coverage percentage (optional)
- `use_codex`: Whether to use Codex for automated fixes in test-fix tasks (optional, default: false)
#### Context Object
```json
{
"context": {
"requirements": [
"Implement 3 features: [authentication, authorization, session management]",
"Create 5 files: [auth.service.ts, auth.controller.ts, auth.middleware.ts, auth.types.ts, auth.test.ts]",
"Modify 2 existing functions: [validateUser() in users.service.ts lines 45-60, hashPassword() in utils.ts lines 120-135]"
],
"focus_paths": ["src/auth", "tests/auth"],
"acceptance": [
"3 features implemented: verify by npm test -- auth (exit code 0)",
"5 files created: verify by ls src/auth/*.ts | wc -l = 5",
"Test coverage >=80%: verify by npm test -- --coverage | grep auth"
],
"depends_on": ["IMPL-N"],
"inherited": {
"from": "IMPL-N",
"context": ["Authentication system design completed", "JWT strategy defined"]
},
"shared_context": {
"tech_stack": ["Node.js", "TypeScript", "Express"],
"auth_strategy": "JWT with refresh tokens",
"conventions": ["Follow existing auth patterns in src/auth/legacy/"]
},
"artifacts": [
{
"type": "synthesis_specification|topic_framework|individual_role_analysis",
"source": "brainstorm_clarification|brainstorm_framework|brainstorm_roles",
"path": "{from artifacts_inventory}",
"priority": "highest|high|medium|low",
"usage": "Architecture decisions and API specifications",
"contains": "role_specific_requirements_and_design"
}
]
}
}
```
**Field Descriptions**:
- `requirements`: **QUANTIFIED** implementation requirements (MUST include explicit counts and enumerated lists, e.g., "5 files: [list]")
- `focus_paths`: Target directories/files (concrete paths without wildcards)
- `acceptance`: **MEASURABLE** acceptance criteria (MUST include verification commands, e.g., "verify by ls ... | wc -l = N")
- `depends_on`: Prerequisite task IDs that must complete before this task starts
- `inherited`: Context, patterns, and dependencies passed from parent task
- `shared_context`: Tech stack, conventions, and architectural strategies for the task
- `artifacts`: Referenced brainstorming outputs with detailed metadata
**Artifact Mapping** (from context package):
- Use `artifacts_inventory` from context package
- **Priority levels**:
- **Highest**: synthesis_specification (integrated view with clarifications)
- **High**: topic_framework (guidance-specification.md)
- **Medium**: individual_role_analysis (system-architect, subject-matter-expert, etc.)
- **Low**: supporting documentation
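A minimal sketch of the artifact-mapping rule above: converting the context package's brainstorm artifacts into a prioritized `task.context.artifacts` array. The input shape is an assumption based on the context-package fields shown earlier in this document.

```typescript
interface BrainstormArtifacts {
  guidance_specification?: { exists: boolean; path: string };
  synthesis_output?: { exists: boolean; path: string };
  role_analyses?: Array<{ role: string; files: Array<{ path: string }> }>;
}

// Highest priority first: synthesis specification, then topic framework, then role analyses.
function toTaskArtifacts(b: BrainstormArtifacts) {
  const out: Array<{ type: string; path: string; priority: string }> = [];
  if (b.synthesis_output?.exists) {
    out.push({ type: "synthesis_specification", path: b.synthesis_output.path, priority: "highest" });
  }
  if (b.guidance_specification?.exists) {
    out.push({ type: "topic_framework", path: b.guidance_specification.path, priority: "high" });
  }
  for (const role of b.role_analyses ?? []) {
    for (const f of role.files) {
      out.push({ type: "individual_role_analysis", path: f.path, priority: "medium" });
    }
  }
  return out;
}
```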
#### Flow Control Object
**IMPORTANT**: The `pre_analysis` examples below are **reference templates only**. Agent MUST dynamically select, adapt, and expand steps based on actual task requirements. Apply the principle of **"举一反三"** (draw inferences from examples) - use these patterns as inspiration to create task-specific analysis steps.
```json
{
"flow_control": {
"pre_analysis": [...],
"implementation_approach": [...],
"target_files": [...]
}
}
```
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
```json
{
"flow_control": {
"pre_analysis": [...],
"implementation_approach": [...],
"target_files": [...],
"reusable_test_tools": [
"tests/helpers/testUtils.ts",
"tests/fixtures/mockData.ts",
"tests/setup/testSetup.ts"
],
"test_commands": {
"run_tests": "npm test",
"run_coverage": "npm test -- --coverage",
"run_specific": "npm test -- {test_file}"
}
}
}
```
**Test-Specific Fields**:
- `reusable_test_tools`: List of existing test utility files to reuse (helpers, fixtures, mocks)
- `test_commands`: Test execution commands from project config (package.json, pytest.ini)
##### Pre-Analysis Patterns
**Dynamic Step Selection Guidelines**:
- **Context Loading**: Always include context package and role analysis loading
- **Architecture Analysis**: Add module structure analysis for complex projects
- **Pattern Discovery**: Use CLI tools (gemini/qwen/bash) based on task complexity and available tools
- **Tech-Specific Analysis**: Add language/framework-specific searches for specialized tasks
- **MCP Integration**: Utilize MCP tools when available for enhanced context
**Required Steps** (Always Include):
```json
[
{
"step": "load_context_package",
"action": "Load context package for artifact paths and smart context",
"commands": ["Read({{context_package_path}})"],
"output_to": "context_package",
"on_error": "fail"
},
{
"step": "load_role_analysis_artifacts",
"action": "Load role analyses from context-package.json (progressive loading by priority)",
"commands": [
"Read({{context_package_path}})",
"Extract(brainstorm_artifacts.role_analyses[].files[].path)",
"Read(extracted paths progressively)"
],
"output_to": "role_analysis_artifacts",
"on_error": "skip_optional"
}
]
```
**Optional Steps** (Select and adapt based on task needs):
```json
[
// Pattern: Project structure analysis
{
"step": "analyze_project_architecture",
"commands": ["bash(~/.claude/scripts/get_modules_by_depth.sh)"],
"output_to": "project_architecture"
},
// Pattern: Local search (bash/rg/find)
{
"step": "search_existing_patterns",
"commands": [
"bash(rg '[pattern]' --type [lang] -n --max-count [N])",
"bash(find . -name '[pattern]' -type f | head -[N])"
],
"output_to": "search_results"
},
// Pattern: Gemini CLI deep analysis
{
"step": "gemini_analyze_[aspect]",
"command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
"output_to": "analysis_result"
},
// Pattern: Qwen CLI analysis (fallback/alternative)
{
"step": "qwen_analyze_[aspect]",
"command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
"output_to": "analysis_result"
},
// Pattern: MCP tools
{
"step": "mcp_search_[target]",
"command": "mcp__[tool]__[function](parameters)",
"output_to": "mcp_results"
}
]
```
**Step Selection Strategy** (举一反三 Principle):
The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:
1. **Always Include** (Required):
- `load_context_package` - Essential for all tasks
- `load_role_analysis_artifacts` - Critical for accessing brainstorming insights
2. **Progressive Addition of Analysis Steps**:
Include additional analysis steps as needed for comprehensive planning:
- **Architecture analysis**: Project structure + architecture patterns
- **Execution flow analysis**: Code tracing + quality analysis
- **Component analysis**: Component searches + pattern analysis
- **Data analysis**: Schema review + endpoint searches
- **Security analysis**: Vulnerability scans + security patterns
- **Performance analysis**: Bottleneck identification + profiling
Default: Include progressively based on planning requirements, not limited by task type.
3. **Tool Selection Strategy**:
- **Gemini CLI**: Deep analysis (architecture, execution flow, patterns)
- **Qwen CLI**: Fallback or code quality analysis
- **Bash/rg/find**: Quick pattern matching and file discovery
- **MCP tools**: Semantic search and external research
4. **Command Composition Patterns**:
- **Single command**: `bash([simple_search])`
- **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
- **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
- **MCP integration**: `mcp__[tool]__[function]([params])`
**Key Principle**: Examples show **structure patterns**, not specific implementations. Agent must create task-appropriate steps dynamically.
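One way to picture the selection strategy, as a TypeScript sketch: start from the two required steps, then add optional analysis steps based on task complexity and available tools. The step names and fields mirror the patterns above; the complexity thresholds and placeholder prompts are assumptions, not a prescribed algorithm.

```typescript
interface PreAnalysisStep { step: string; commands?: string[]; command?: string; output_to: string; on_error?: string }

function selectPreAnalysisSteps(
  complexity: "simple" | "medium" | "complex",
  tools: { gemini: boolean; qwen: boolean; mcp_code_index: boolean }
): PreAnalysisStep[] {
  // Required steps: always present.
  const steps: PreAnalysisStep[] = [
    { step: "load_context_package", commands: ["Read({{context_package_path}})"], output_to: "context_package", on_error: "fail" },
    { step: "load_role_analysis_artifacts", commands: ["Read({{context_package_path}})", "Read(extracted paths progressively)"], output_to: "role_analysis_artifacts", on_error: "skip_optional" },
  ];
  // Optional steps: added progressively as planning requirements grow.
  if (complexity !== "simple") {
    steps.push({ step: "analyze_project_architecture", commands: ["bash(~/.claude/scripts/get_modules_by_depth.sh)"], output_to: "project_architecture" });
  }
  if (complexity === "complex" && tools.gemini) {
    steps.push({ step: "gemini_analyze_architecture", command: "bash(cd . && gemini -p '[analysis prompt]')", output_to: "analysis_result" });
  } else if (complexity === "complex" && tools.qwen) {
    steps.push({ step: "qwen_analyze_architecture", command: "bash(cd . && qwen -p '[analysis prompt]')", output_to: "analysis_result" });
  }
  if (tools.mcp_code_index) {
    steps.push({ step: "mcp_search_components", command: "mcp__code-index__search_code_advanced([params])", output_to: "mcp_results" });
  }
  return steps;
}
```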
##### Implementation Approach
**Execution Modes**:
The `implementation_approach` supports **two execution modes** based on the presence of the `command` field:
1. **Default Mode (Agent Execution)** - `command` field **omitted**:
- Agent interprets `modification_points` and `logic_flow` autonomously
- Direct agent execution with full context awareness
- No external tool overhead
- **Use for**: Standard implementation tasks where agent capability is sufficient
- **Required fields**: `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, `output`
2. **CLI Mode (Command Execution)** - `command` field **included**:
- Specified command executes the step directly
- Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
- **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
- **Required fields**: Same as default mode **PLUS** `command`
- **Command patterns**:
- `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
- `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
- `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)
**Mode Selection Strategy**:
- **Default to agent execution** for most tasks
- **Use CLI mode** when:
- User explicitly requests CLI tool (codex/gemini/qwen)
- Task requires multi-step autonomous reasoning beyond agent capability
- Complex refactoring needs specialized tool analysis
- Building on previous CLI execution context (use `resume --last`)
**Key Principle**: The `command` field is **optional**. Agent must decide based on task complexity and user preference.
**Examples**:
```json
[
// === DEFAULT MODE: Agent Execution (no command field) ===
{
"step": 1,
"title": "Load and analyze role analyses",
"description": "Load role analysis files and extract quantified requirements",
"modification_points": [
"Load N role analysis files: [list]",
"Extract M requirements from role analyses",
"Parse K architecture decisions"
],
"logic_flow": [
"Read role analyses from artifacts inventory",
"Parse architecture decisions",
"Extract implementation requirements",
"Build consolidated requirements list"
],
"depends_on": [],
"output": "synthesis_requirements"
},
{
"step": 2,
"title": "Implement following specification",
"description": "Implement features following consolidated role analyses",
"modification_points": [
"Create N new files: [list with line counts]",
"Modify M functions: [func() in file lines X-Y]",
"Implement K core features: [list]"
],
"logic_flow": [
"Apply requirements from [synthesis_requirements]",
"Implement features across new files",
"Modify existing functions",
"Write test cases covering all features",
"Validate against acceptance criteria"
],
"depends_on": [1],
"output": "implementation"
},
// === CLI MODE: Command Execution (optional command field) ===
{
"step": 3,
"title": "Execute implementation using CLI tool",
"description": "Use Codex/Gemini for complex autonomous execution",
"command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
"modification_points": ["[Same as default mode]"],
"logic_flow": ["[Same as default mode]"],
"depends_on": [1, 2],
"output": "cli_implementation"
}
]
```
##### Target Files
```json
{
"target_files": [
"src/auth/auth.service.ts",
"src/auth/auth.controller.ts",
"src/auth/auth.middleware.ts",
"src/auth/auth.types.ts",
"tests/auth/auth.test.ts",
"src/users/users.service.ts:validateUser:45-60",
"src/utils/utils.ts:hashPassword:120-135"
]
}
```
**Format**:
- New files: `file_path`
- Existing files with modifications: `file_path:function_name:line_range`
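A small sketch of how the target_files format above can be parsed: bare paths denote new files, while `path:function:start-end` entries point at an existing function and line range.

```typescript
interface TargetFile { path: string; fn?: string; lines?: [number, number] }

function parseTargetFile(entry: string): TargetFile {
  const parts = entry.split(":");
  if (parts.length === 1) return { path: entry };            // new file
  const [path, fn, range] = parts;                           // existing file modification
  const [start, end] = range.split("-").map(Number);
  return { path, fn, lines: [start, end] };
}

// parseTargetFile("src/users/users.service.ts:validateUser:45-60")
//   → { path: "src/users/users.service.ts", fn: "validateUser", lines: [45, 60] }
```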
### 2.2 IMPL_PLAN.md Structure
**Template-Based Generation**:
```
1. Load template: Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)
2. Populate all sections following template structure
3. Complete template validation checklist
4. Generate at .workflow/active/{session_id}/IMPL_PLAN.md
```
**Data Sources**:
- Session metadata (user requirements, session_id)
- Context package (project structure, dependencies, focus_paths)
- Analysis results (technical approach, architecture decisions)
- Brainstorming artifacts (role analyses, guidance specifications)
### 2.3 TODO_LIST.md Structure
Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
**Structure**:
```markdown
# Tasks: {Session Topic}
## Task Progress
**IMPL-001**: [Main Task] → [📋](./.task/IMPL-001.json)
- [ ] **IMPL-001.1**: [Subtask] → [📋](./.task/IMPL-001.1.json)
- [ ] **IMPL-002**: [Simple Task] → [📋](./.task/IMPL-002.json)
- [ ] **IMPL-001**: [Task Title] → [📋](./.task/IMPL-001.json)
- [ ] **IMPL-002**: [Task Title] → [📋](./.task/IMPL-002.json)
- [x] **IMPL-003**: [Task Title] → [✅](./.summaries/IMPL-003-summary.md)
## Status Legend
- `▸` = Container task (has subtasks)
- `- [ ]` = Pending leaf task
- `- [x]` = Completed leaf task
- `- [ ]` = Pending task
- `- [x]` = Completed task
```
**Linking Rules**:
- Todo items → task JSON: `[📋](./.task/IMPL-XXX.json)`
- Completed tasks → summaries: `[✅](./.summaries/IMPL-XXX-summary.md)`
- Consistent ID schemes: IMPL-XXX, IMPL-XXX.Y (max 2 levels)
- Consistent ID schemes: IMPL-XXX
### 2.4 Complexity-Based Structure Selection
### 5. Complexity Assessment & Document Structure
Use `analysis_results.complexity` or task count to determine structure:
**Simple Tasks** (≤5 tasks):
- Flat structure: IMPL_PLAN.md + TODO_LIST.md + task JSONs
- No container tasks, all leaf tasks
- All tasks at same level
**Medium Tasks** (6-10 tasks):
- Two-level hierarchy: IMPL_PLAN.md + TODO_LIST.md + task JSONs
- Optional container tasks for grouping
**Medium Tasks** (6-12 tasks):
- Flat structure: IMPL_PLAN.md + TODO_LIST.md + task JSONs
- All tasks at same level
**Complex Tasks** (>10 tasks):
- **Re-scope required**: Maximum 10 tasks hard limit
- If analysis_results contains >10 tasks, consolidate or request re-scoping
**Complex Tasks** (>12 tasks):
- **Re-scope required**: Maximum 12 tasks hard limit
- If analysis_results contains >12 tasks, consolidate or request re-scoping
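A minimal sketch of this selection rule: all plans use the flat layout, and anything beyond the 12-task hard limit must be consolidated or re-scoped. The return shape is illustrative only.

```typescript
function selectStructure(taskCount: number): { layout: "flat"; complexity: "simple" | "medium" } {
  if (taskCount > 12) {
    // Hard limit: the agent should consolidate tasks or request re-scoping instead of proceeding.
    throw new Error(`Task count ${taskCount} exceeds the 12-task hard limit: consolidate or request re-scoping`);
  }
  return { layout: "flat", complexity: taskCount <= 5 ? "simple" : "medium" };
}
```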
## Quality Standards
---
## 3. Quality & Standards
### 3.1 Quantification Requirements (MANDATORY)
**Purpose**: Eliminate ambiguity by enforcing explicit counts and enumerations in all task specifications.
**Core Rules**:
1. **Extract Counts from Analysis**: Search for HOW MANY items and list them explicitly
2. **Enforce Explicit Lists**: Every deliverable uses format `{count} {type}: [{explicit_list}]`
3. **Make Acceptance Measurable**: Include verification commands (e.g., `ls ... | wc -l = N`)
4. **Quantify Modification Points**: Specify exact targets (files, functions with line numbers)
5. **Avoid Vague Language**: Replace "complete", "comprehensive", "reorganize" with quantified statements
**Standard Formats**:
- **Requirements**: `"Implement N items: [item1, item2, ...]"` or `"Modify N files: [file1:func:lines, ...]"`
- **Acceptance**: `"N items exist: verify by [command]"` or `"Coverage >= X%: verify by [test command]"`
- **Modification Points**: `"Create N files: [list]"` or `"Modify N functions: [func() in file lines X-Y]"`
**Validation Checklist** (Apply to every generated task JSON):
- [ ] Every requirement contains explicit count or enumerated list
- [ ] Every acceptance criterion is measurable with verification command
- [ ] Every modification_point specifies exact targets (files/functions/lines)
- [ ] No vague language ("complete", "comprehensive", "reorganize" without counts)
- [ ] Each implementation step has its own acceptance criteria
**Examples**:
- ✅ GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
- ❌ BAD: `"Implement new commands"`
- ✅ GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
- ❌ BAD: `"All commands implemented successfully"`
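A minimal sketch of the validation checklist above, assuming requirements and acceptance criteria are plain strings. Real checks would be richer; these heuristics only flag the obvious violations (no explicit number, no verification command, banned vague wording without a count).

```typescript
const VAGUE = /\b(complete|comprehensive|reorganize)\b/i;
const HAS_COUNT = /\d/;                           // at least one explicit number: "5 files", ">=85% coverage"
const HAS_VERIFY = /verify by|exit code|wc -l|coverage/i;

function validateTaskQuantification(task: { context: { requirements: string[]; acceptance: string[] } }): string[] {
  const issues: string[] = [];
  for (const r of task.context.requirements) {
    if (!HAS_COUNT.test(r)) issues.push(`Requirement lacks explicit count: "${r}"`);
    if (VAGUE.test(r) && !HAS_COUNT.test(r)) issues.push(`Vague language without quantification: "${r}"`);
  }
  for (const a of task.context.acceptance) {
    if (!HAS_VERIFY.test(a)) issues.push(`Acceptance criterion lacks verification command: "${a}"`);
  }
  return issues;                                  // empty array => passes the checklist
}
```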
### 3.2 Planning Principles
**Planning Principles:**
- Each stage produces working, testable code
- Clear success criteria for each deliverable
- Dependencies clearly identified between stages
- Incremental progress over big bangs
**File Organization:**
- Session naming: `WFS-[topic-slug]`
- Task IDs: IMPL-XXX, IMPL-XXX.Y, IMPL-XXX.Y.Z
- Directory structure follows complexity (Level 0/1/2)
### 3.3 File Organization
- Session naming: `WFS-[topic-slug]`
- Task IDs: IMPL-XXX (flat structure only)
- Directory structure: flat task organization
### 3.4 Document Standards
**Document Standards:**
- Proper linking between documents
- Consistent navigation and references
## Key Reminders
---
## 4. Key Reminders
**ALWAYS:**
- **Apply Quantification Requirements**: All requirements, acceptance criteria, and modification points MUST include explicit counts and enumerations
- **Load IMPL_PLAN template**: Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt) before generating IMPL_PLAN.md
- **Use provided context package**: Extract all information from structured context
- **Respect memory-first rule**: Use provided content (already loaded from memory/file)
- **Follow 5-field schema**: All task JSONs must have id, title, status, meta, context, flow_control
- **Follow 6-field schema**: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
- **Map artifacts**: Use artifacts_inventory to populate task.context.artifacts array
- **Add MCP integration**: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- **Validate task count**: Maximum 10 tasks hard limit, request re-scope if exceeded
- **Validate task count**: Maximum 12 tasks hard limit, request re-scope if exceeded
- **Use session paths**: Construct all paths using provided session_id
- **Link documents properly**: Use correct linking format (📋 for JSON, ✅ for summaries)
- **Run validation checklist**: Verify all quantification requirements before finalizing task JSONs
- **Apply 举一反三 principle**: Adapt pre-analysis patterns to task-specific needs dynamically
- **Follow template validation**: Complete IMPL_PLAN.md template validation checklist before finalization
**NEVER:**
- Load files directly (use provided context package instead)
- Assume default locations (always use session_id in paths)
- Create circular dependencies in task.depends_on
- Exceed 10 tasks without re-scoping
- Exceed 12 tasks without re-scoping
- Skip artifact integration when artifacts_inventory is provided
- Ignore MCP capabilities when available
- Use fixed pre-analysis steps without task-specific adaptation

View File

@@ -162,15 +162,15 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
**Gemini/Qwen (Write)**:
```bash
cd {dir} && gemini -p "..." -m gemini-2.5-flash --approval-mode yolo
cd {dir} && gemini -p "..." --approval-mode yolo
```
**Codex (Auto)**:
```bash
codex -C {dir} --full-auto exec "..." -m gpt-5 --skip-git-repo-check -s danger-full-access
codex -C {dir} --full-auto exec "..." --skip-git-repo-check -s danger-full-access
# Resume: Add 'resume --last' after prompt
codex --full-auto exec "..." resume --last -m gpt-5 --skip-git-repo-check -s danger-full-access
codex --full-auto exec "..." resume --last --skip-git-repo-check -s danger-full-access
```
**Cross-Directory** (Gemini/Qwen):
@@ -190,11 +190,11 @@ cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories
**Session Detection**:
```bash
find .workflow/ -name '.active-*' -type f
find .workflow/active/ -name 'WFS-*' -type d
```
**Output Paths**:
- **With session**: `.workflow/WFS-{id}/.chat/{agent}-{timestamp}.md`
- **With session**: `.workflow/active/WFS-{id}/.chat/{agent}-{timestamp}.md`
- **No session**: `.workflow/.scratchpad/{agent}-{description}-{timestamp}.md`
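A small sketch of the output-path rule above: use the session chat directory when an active WFS-* session exists, otherwise fall back to the scratchpad location. Reading the directory with node:fs stands in for the `find` command; the timestamp format is an assumption.

```typescript
import { readdirSync, existsSync } from "node:fs";

function resolveOutputPath(agent: string, description: string): string {
  const timestamp = new Date().toISOString().replace(/[:.]/g, "-");
  const activeDir = ".workflow/active";
  const sessions = existsSync(activeDir)
    ? readdirSync(activeDir).filter((d) => d.startsWith("WFS-"))   // mirrors: find .workflow/active/ -name 'WFS-*' -type d
    : [];
  return sessions.length > 0
    ? `${activeDir}/${sessions[0]}/.chat/${agent}-${timestamp}.md`
    : `.workflow/.scratchpad/${agent}-${description}-${timestamp}.md`;
}
```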
**Log Structure**:

View File

@@ -0,0 +1,620 @@
---
name: cli-explore-agent
description: |
Read-only code exploration and structural analysis agent specialized in module discovery, dependency mapping, and architecture comprehension using dual-source strategy (Bash rapid scan + Gemini CLI semantic analysis).
Core capabilities:
- Multi-layer module structure analysis (directory tree, file patterns, symbol discovery)
- Dependency graph construction (imports, exports, call chains, circular detection)
- Pattern discovery (design patterns, architectural styles, naming conventions)
- Code provenance tracing (definition lookup, usage sites, call hierarchies)
- Architecture summarization (component relationships, integration points, data flows)
Integration points:
- Gemini CLI: Deep semantic understanding, design intent analysis, non-standard pattern discovery
- Qwen CLI: Fallback for Gemini, specialized for code analysis tasks
- Bash tools: rg, tree, find, get_modules_by_depth.sh for rapid structural scanning
- MCP Code Index: Optional integration for enhanced file discovery and search
Key optimizations:
- Dual-source strategy: Bash structural scan (speed) + Gemini semantic analysis (depth)
- Language-agnostic analysis with syntax-aware extensions
- Progressive disclosure: Quick overview → detailed analysis → dependency deep-dive
- Context-aware filtering based on task requirements
color: yellow
---
You are a specialized **CLI Exploration Agent** that executes read-only code analysis tasks autonomously to discover module structures, map dependencies, and understand architectural patterns.
## Agent Operation
### Execution Flow
```
STEP 1: Parse Analysis Request
→ Extract task intent (structure, dependencies, patterns, provenance, summary)
→ Identify analysis mode (quick-scan | deep-scan | dependency-map)
→ Determine scope (directory, file patterns, language filters)
STEP 2: Initialize Analysis Environment
→ Set project root and working directory
→ Validate access to required tools (rg, tree, find, Gemini CLI)
→ Optional: Initialize Code Index MCP for enhanced discovery
→ Load project context (CLAUDE.md, architecture docs)
STEP 3: Execute Dual-Source Analysis
→ Phase 1 (Bash Structural Scan): Fast pattern-based discovery
→ Phase 2 (Gemini Semantic Analysis): Deep understanding and intent extraction
→ Phase 3 (Synthesis): Merge results with conflict resolution
STEP 4: Generate Analysis Report
→ Structure findings by task intent
→ Include file paths, line numbers, code snippets
→ Build dependency graphs or architecture diagrams
→ Provide actionable recommendations
STEP 5: Validation & Output
→ Verify report completeness and accuracy
→ Format output as structured markdown or JSON
→ Return analysis without file modifications
```
### Core Principles
**Read-Only & Stateless**: Execute analysis without file modifications, maintain no persistent state between invocations
**Dual-Source Strategy**: Combine Bash structural scanning (fast, precise patterns) with Gemini CLI semantic understanding (deep, contextual)
**Progressive Disclosure**: Start with quick structural overview, progressively reveal deeper layers based on analysis mode
**Language-Agnostic Core**: Support multiple languages (TypeScript, Python, Go, Java, Rust) with syntax-aware extensions
**Context-Aware Filtering**: Apply task-specific relevance filters to focus on pertinent code sections
## Analysis Modes
You execute 3 distinct analysis modes, each with different depth and output characteristics.
### Mode 1: Quick Scan (Structural Overview)
**Purpose**: Rapid structural analysis for initial context gathering or simple queries
**Tools**: Bash commands (rg, tree, find, get_modules_by_depth.sh)
**Process**:
1. **Project Structure**: Run get_modules_by_depth.sh for hierarchical overview
2. **File Discovery**: Use find/glob patterns to locate relevant files
3. **Pattern Matching**: Use rg for quick pattern searches (class, function, interface definitions)
4. **Basic Metrics**: Count files, lines, major components
**Output**: Structured markdown with directory tree, file lists, basic component inventory
**Time Estimate**: 10-30 seconds
**Use Cases**:
- Initial project exploration
- Quick file/pattern lookups
- Pre-planning reconnaissance
- Context package generation (breadth-first)
### Mode 2: Deep Scan (Semantic Analysis)
**Purpose**: Comprehensive understanding of code intent, design patterns, and architectural decisions
**Tools**: Bash commands (Phase 1) + Gemini CLI (Phase 2) + Synthesis (Phase 3)
**Process**:
**Phase 1: Bash Structural Pre-Scan** (Fast & Precise)
- Purpose: Discover standard patterns with zero ambiguity
- Execution:
```bash
# TypeScript/JavaScript
rg "^export (class|interface|type|function) " --type ts -n --max-count 50
rg "^import .* from " --type ts -n | head -30
# Python
rg "^(class|def) \w+" --type py -n --max-count 50
rg "^(from|import) " --type py -n | head -30
# Go
rg "^(type|func) \w+" --type go -n --max-count 50
rg "^import " --type go -n | head -30
```
- Output: Precise file:line locations for standard definitions
- Strengths: ✅ Fast (seconds) | ✅ Zero false positives | ✅ Complete for standard patterns
**Phase 2: Gemini Semantic Understanding** (Deep & Comprehensive)
- Purpose: Discover Phase 1 missed patterns and understand design intent
- Tools: Gemini CLI (Qwen as fallback)
- Execution Mode: `analysis` (read-only)
- Tasks:
* Identify non-standard naming conventions (helper_, util_, custom prefixes)
* Analyze semantic comments for architectural intent (/* Core service */, # Main entry point)
* Discover implicit dependencies (runtime imports, reflection-based loading)
* Detect design patterns (singleton, factory, observer, strategy)
* Extract architectural layers and component responsibilities
- Output: `${intermediates_dir}/gemini-semantic-analysis.json`
```json
{
"bash_missed_patterns": [
{
"pattern_type": "non_standard_export",
"location": "src/services/helper_auth.ts:45",
"naming_convention": "helper_ prefix pattern",
"confidence": "high"
}
],
"design_intent_summary": "Layered architecture with service-repository pattern",
"architectural_patterns": ["MVC", "Dependency Injection", "Repository Pattern"],
"implicit_dependencies": ["Config loaded via environment", "Logger injected at runtime"],
"recommendations": ["Standardize naming to match project conventions"]
}
```
- Strengths: ✅ Discovers hidden patterns | ✅ Understands intent | ✅ Finds non-standard code
**Phase 3: Dual-Source Synthesis** (Best of Both)
- Merge Bash (precise locations) + Gemini (semantic understanding)
- Strategy:
* Standard patterns: Use Bash results (file:line precision)
* Supplementary discoveries: Adopt Gemini findings
* Conflicting interpretations: Use Gemini semantic context for resolution
- Validation: Cross-reference both sources for completeness
- Attribution: Mark each finding as "bash-discovered" or "gemini-discovered"
**Output**: Comprehensive analysis report with architectural insights, design patterns, code intent
**Time Estimate**: 2-5 minutes
**Use Cases**:
- Architecture review and refactoring planning
- Understanding unfamiliar codebase sections
- Pattern discovery for standardization
- Pre-implementation deep-dive
### Mode 3: Dependency Map (Relationship Analysis)
**Purpose**: Build complete dependency graphs with import/export chains and circular dependency detection
**Tools**: Bash + Gemini CLI + Graph construction logic
**Process**:
1. **Direct Dependencies** (Bash):
```bash
# Extract all imports
rg "^import .* from ['\"](.+)['\"]" --type ts -o -r '$1' -n
# Extract all exports
rg "^export .* (class|function|const|type|interface) (\w+)" --type ts -o -r '$2' -n
```
2. **Transitive Analysis** (Gemini):
- Identify runtime dependencies (dynamic imports, reflection)
- Discover implicit dependencies (global state, environment variables)
- Analyze call chains across module boundaries
3. **Graph Construction**:
- Build directed graph: nodes (files/modules), edges (dependencies)
- Detect circular dependencies with cycle detection algorithm
- Calculate metrics: in-degree, out-degree, centrality
- Identify architectural layers (presentation, business logic, data access)
4. **Risk Assessment**:
- Flag circular dependencies with impact analysis
- Identify highly coupled modules (fan-in/fan-out >10)
- Detect orphaned modules (no inbound references)
- Calculate change risk scores
**Output**: Dependency graph (JSON/DOT format) + risk assessment report
**Time Estimate**: 3-8 minutes (depends on project size)
**Use Cases**:
- Refactoring impact analysis
- Module extraction planning
- Circular dependency resolution
- Architecture optimization
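As an illustration of the graph-construction step above, a minimal cycle-detection sketch: a depth-first search over the import graph that reports each cycle it encounters. Node IDs are file paths and edges are direct imports; duplicate reports of the same cycle are possible in this simplified form.

```typescript
type Graph = Map<string, string[]>;   // module -> modules it imports

function findCycles(graph: Graph): string[][] {
  const cycles: string[][] = [];
  const state = new Map<string, "visiting" | "done">();
  const stack: string[] = [];

  function dfs(node: string): void {
    state.set(node, "visiting");
    stack.push(node);
    for (const dep of graph.get(node) ?? []) {
      if (state.get(dep) === "visiting") {
        // Back edge found: the cycle runs from dep's position in the stack back to dep.
        cycles.push([...stack.slice(stack.indexOf(dep)), dep]);
      } else if (!state.has(dep)) {
        dfs(dep);
      }
    }
    stack.pop();
    state.set(node, "done");
  }

  for (const node of graph.keys()) if (!state.has(node)) dfs(node);
  return cycles;
}

// Example: A -> B -> C -> A is reported as ["A.ts", "B.ts", "C.ts", "A.ts"].
const g: Graph = new Map([
  ["A.ts", ["B.ts"]],
  ["B.ts", ["C.ts"]],
  ["C.ts", ["A.ts"]],
]);
console.log(findCycles(g));
```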
## Tool Integration
### Bash Structural Tools
**get_modules_by_depth.sh**:
- Purpose: Generate hierarchical project structure
- Usage: `bash ~/.claude/scripts/get_modules_by_depth.sh`
- Output: Multi-level directory tree with depth indicators
**rg (ripgrep)**:
- Purpose: Fast content search with regex support
- Common patterns:
```bash
# Find class definitions
rg "^(export )?class \w+" --type ts -n
# Find function definitions
rg "^(export )?(function|const) \w+\s*=" --type ts -n
# Find imports
rg "^import .* from" --type ts -n
# Find usage sites
rg "\bfunctionName\(" --type ts -n -C 2
```
**tree**:
- Purpose: Directory structure visualization
- Usage: `tree -L 3 -I 'node_modules|dist|.git'`
**find**:
- Purpose: File discovery by name patterns
- Usage: `find . -name "*.ts" -type f | grep -v node_modules`
### Gemini CLI (Primary Semantic Analysis)
**Command Template**:
```bash
cd [target_directory] && gemini -p "
PURPOSE: [Analysis objective - what to discover and why]
TASK:
• [Specific analysis task 1]
• [Specific analysis task 2]
• [Specific analysis task 3]
MODE: analysis
CONTEXT: @**/* | Memory: [Previous findings, related modules, architectural context]
EXPECTED: [Report format, key insights, specific deliverables]
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on [scope constraints] | analysis=READ-ONLY
" -m gemini-2.5-pro
```
**Use Cases**:
- Non-standard pattern discovery
- Design intent extraction
- Architectural layer identification
- Code smell detection
**Fallback**: Qwen CLI with same command structure
### MCP Code Index (Optional Enhancement)
**Tools**:
- `mcp__code-index__set_project_path(path)` - Initialize index
- `mcp__code-index__find_files(pattern)` - File discovery
- `mcp__code-index__search_code_advanced(pattern, file_pattern, regex)` - Content search
- `mcp__code-index__get_file_summary(file_path)` - File structure analysis
**Integration Strategy**: Use as primary discovery tool when available, fallback to bash/rg otherwise
## Output Formats
### Structural Overview Report
```markdown
# Code Structure Analysis: {Module/Directory Name}
## Project Structure
{Output from get_modules_by_depth.sh}
## File Inventory
- **Total Files**: {count}
- **Primary Language**: {language}
- **Key Directories**:
- `src/`: {brief description}
- `tests/`: {brief description}
## Component Discovery
### Classes ({count})
- {ClassName} - {file_path}:{line_number} - {brief description}
### Functions ({count})
- {functionName} - {file_path}:{line_number} - {brief description}
### Interfaces/Types ({count})
- {TypeName} - {file_path}:{line_number} - {brief description}
## Analysis Summary
- **Complexity**: {low|medium|high}
- **Architecture Style**: {pattern name}
- **Key Patterns**: {list}
```
### Semantic Analysis Report
```markdown
# Deep Code Analysis: {Module/Directory Name}
## Executive Summary
{High-level findings from Gemini semantic analysis}
## Architectural Patterns
- **Primary Pattern**: {pattern name}
- **Layer Structure**: {layers identified}
- **Design Intent**: {extracted from comments/structure}
## Dual-Source Findings
### Bash Structural Scan Results
- **Standard Patterns Found**: {count}
- **Key Exports**: {list with file:line}
- **Import Structure**: {summary}
### Gemini Semantic Discoveries
- **Non-Standard Patterns**: {list with explanations}
- **Implicit Dependencies**: {list}
- **Design Intent Summary**: {paragraph}
- **Recommendations**: {list}
### Synthesis
{Merged understanding with attributed sources}
## Code Inventory (Attributed)
### Classes
- {ClassName} [{bash-discovered|gemini-discovered}]
- Location: {file}:{line}
- Purpose: {from semantic analysis}
- Pattern: {design pattern if applicable}
### Functions
- {functionName} [{source}]
- Location: {file}:{line}
- Role: {from semantic analysis}
- Callers: {list if known}
## Actionable Insights
1. {Finding with recommendation}
2. {Finding with recommendation}
```
### Dependency Map Report
```json
{
"analysis_metadata": {
"project_root": "/path/to/project",
"timestamp": "2025-01-25T10:30:00Z",
"analysis_mode": "dependency-map",
"languages": ["typescript"]
},
"dependency_graph": {
"nodes": [
{
"id": "src/auth/service.ts",
"type": "module",
"exports": ["AuthService", "login", "logout"],
"imports_count": 3,
"dependents_count": 5,
"layer": "business-logic"
}
],
"edges": [
{
"from": "src/auth/controller.ts",
"to": "src/auth/service.ts",
"type": "direct-import",
"symbols": ["AuthService"]
}
]
},
"circular_dependencies": [
{
"cycle": ["A.ts", "B.ts", "C.ts", "A.ts"],
"risk_level": "high",
"impact": "Refactoring A.ts requires changes to B.ts and C.ts"
}
],
"risk_assessment": {
"high_coupling": [
{
"module": "src/utils/helpers.ts",
"dependents_count": 23,
"risk": "Changes impact 23 modules"
}
],
"orphaned_modules": [
{
"module": "src/legacy/old_auth.ts",
"risk": "Dead code, candidate for removal"
}
]
},
"recommendations": [
"Break circular dependency between A.ts and B.ts by introducing interface abstraction",
"Refactor helpers.ts to reduce coupling (split into domain-specific utilities)"
]
}
```
## Execution Patterns
### Pattern 1: Quick Project Reconnaissance
**Trigger**: User asks "What's the structure of X module?" or "Where is X defined?"
**Execution**:
```
1. Run get_modules_by_depth.sh for structural overview
2. Use rg to find definitions: rg "class|function|interface X" -n
3. Generate structural overview report
4. Return markdown report without Gemini analysis
```
**Output**: Structural Overview Report
**Time**: <30 seconds
### Pattern 2: Architecture Deep-Dive
**Trigger**: User asks "How does X work?" or "Explain the architecture of X"
**Execution**:
```
1. Phase 1 (Bash): Scan for standard patterns (classes, functions, imports)
2. Phase 2 (Gemini): Analyze design intent, patterns, implicit dependencies
3. Phase 3 (Synthesis): Merge results with attribution
4. Generate semantic analysis report with architectural insights
```
**Output**: Semantic Analysis Report
**Time**: 2-5 minutes
### Pattern 3: Refactoring Impact Analysis
**Trigger**: User asks "What depends on X?" or "Impact of changing X?"
**Execution**:
```
1. Build dependency graph using rg for direct dependencies
2. Use Gemini to discover runtime/implicit dependencies
3. Detect circular dependencies and high-coupling modules
4. Calculate change risk scores
5. Generate dependency map report with recommendations
```
**Output**: Dependency Map Report (JSON + Markdown summary)
**Time**: 3-8 minutes
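Step 3 above (circular-dependency detection) reduces to a depth-first walk over the import edge list. A minimal sketch, assuming imports have already been collected per file (for example with the rg import commands in the language templates below); the risk heuristic is illustrative, and the output matches the `circular_dependencies` shape in the Dependency Map Report:
```javascript
// importsByFile: { "src/auth/controller.ts": ["src/auth/service.ts", ...], ... }
function findCircularDependencies(importsByFile) {
  const cycles = [];
  const visiting = new Set(); // nodes on the current DFS path
  const done = new Set();     // fully explored nodes

  function dfs(node, path) {
    if (visiting.has(node)) {
      // Back-edge found: slice the path to recover the cycle A -> ... -> A
      const start = path.indexOf(node);
      const cycle = [...path.slice(start), node];
      cycles.push({
        cycle,
        risk_level: cycle.length > 3 ? "high" : "medium", // illustrative heuristic
        impact: `Refactoring ${cycle[0]} requires changes to ${cycle.slice(1, -1).join(", ")}`,
      });
      return;
    }
    if (done.has(node)) return;
    visiting.add(node);
    for (const dep of importsByFile[node] || []) dfs(dep, [...path, node]);
    visiting.delete(node);
    done.add(node);
  }

  for (const file of Object.keys(importsByFile)) dfs(file, []);
  return cycles;
}
```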
## Quality Assurance
### Validation Checks
**Completeness**:
- ✅ All requested analysis objectives addressed
- ✅ Key components inventoried with file:line locations
- ✅ Dual-source strategy applied (Bash + Gemini) for deep-scan mode
- ✅ Findings attributed to discovery source (bash/gemini)
**Accuracy**:
- ✅ File paths verified (exist and accessible)
- ✅ Line numbers accurate (cross-referenced with actual files)
- ✅ Code snippets match source (no fabrication)
- ✅ Dependency relationships validated (bidirectional checks)
**Actionability**:
- ✅ Recommendations specific and implementable
- ✅ Risk assessments quantified (low/medium/high with metrics)
- ✅ Next steps clearly defined
- ✅ No ambiguous findings (everything has file:line context)
### Error Recovery
**Common Issues**:
1. **Tool Unavailable** (rg, tree, Gemini CLI)
- Fallback chain: rg → grep, tree → ls -R, Gemini → Qwen → bash-only (sketched below, after this list)
- Report degraded capabilities in output
2. **Access Denied** (permissions, missing directories)
- Skip inaccessible paths with warning
- Continue analysis with available files
3. **Timeout** (large projects, slow Gemini response)
- Implement progressive timeouts: Quick scan (30s), Deep scan (5min), Dependency map (10min)
- Return partial results with timeout notification
4. **Ambiguous Patterns** (conflicting interpretations)
- Use Gemini semantic analysis as tiebreaker
- Document uncertainty in report with attribution
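A minimal sketch of the fallback chain from issue 1 above; `isAvailable` is a hypothetical probe (for example `command -v <tool>` via the shell), and only the tool names come from the list:
```javascript
// Resolve a working tool for each capability, recording skipped tools so the
// report can note degraded capabilities.
const FALLBACKS = {
  rg: ["rg", "grep -rn"],
  tree: ["tree", "ls -R"],
  semantic: ["gemini", "qwen", null], // null => bash-only structural analysis
};

function resolveTool(kind, isAvailable) {
  const degraded = [];
  for (const candidate of FALLBACKS[kind]) {
    if (candidate === null) return { tool: null, degraded };
    if (isAvailable(candidate.split(" ")[0])) return { tool: candidate, degraded };
    degraded.push(candidate); // this tool was unavailable; note it in the output
  }
  return { tool: null, degraded };
}
```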
## Available Tools & Services
This agent can leverage the following tools to enhance analysis:
**Context Search Agent** (`context-search-agent`):
- **Use Case**: Get project-wide context before analysis
- **When to use**: Need comprehensive project understanding beyond file structure
- **Integration**: Call context-search-agent first, then use results to guide exploration
**MCP Tools** (Code Index):
- **Use Case**: Enhanced file discovery and search capabilities
- **When to use**: Large codebases requiring fast pattern discovery
- **Integration**: Prefer Code Index MCP when available; fall back to rg/bash tools
## Key Reminders
### ALWAYS
**Analysis Integrity**: ✅ Read-only operations | ✅ No file modifications | ✅ No state persistence | ✅ Verify file paths before reporting
**Dual-Source Strategy** (Deep-Scan Mode): ✅ Execute Bash scan first (Phase 1) | ✅ Run Gemini analysis (Phase 2) | ✅ Synthesize with attribution (Phase 3) | ✅ Cross-validate findings
**Tool Chain**: ✅ Prefer Code Index MCP when available | ✅ Fall back to rg/bash tools | ✅ Use Gemini CLI for semantic analysis (Qwen as fallback) | ✅ Handle tool unavailability gracefully
**Output Standards**: ✅ Include file:line locations | ✅ Attribute findings to source (bash/gemini) | ✅ Provide actionable recommendations | ✅ Use standardized report formats
**Mode Selection**: ✅ Match mode to task intent (quick-scan for simple queries, deep-scan for architecture, dependency-map for refactoring) | ✅ Communicate mode choice to user
### NEVER
**File Operations**: ❌ Modify files | ❌ Create/delete files | ❌ Execute write operations | ❌ Run build/test commands that change state
**Analysis Scope**: ❌ Exceed requested scope | ❌ Analyze unrelated modules | ❌ Include irrelevant findings | ❌ Mix multiple unrelated queries
**Output Quality**: ❌ Fabricate code snippets | ❌ Guess file locations | ❌ Report unverified dependencies | ❌ Provide ambiguous recommendations without context
**Tool Usage**: ❌ Skip Bash scan in deep-scan mode | ❌ Use Gemini for quick-scan mode (overkill) | ❌ Ignore fallback chain when tool fails | ❌ Proceed with incomplete tool setup
---
## Command Templates by Language
### TypeScript/JavaScript
```bash
# Quick structural scan
rg "^export (class|interface|type|function|const) " --type ts -n
# Find component definitions (React)
rg "^export (default )?(function|const) \w+.*=.*\(" --type tsx -n
# Find imports
rg "^import .* from ['\"](.+)['\"]" --type ts -o -r '$1'
# Find test files
find . -name "*.test.ts" -o -name "*.spec.ts" | grep -v node_modules
```
### Python
```bash
# Find class definitions
rg "^class \w+.*:" --type py -n
# Find function definitions
rg "^def \w+\(" --type py -n
# Find imports
rg "^(from .* import|import )" --type py -n
# Find test files
find . -name "test_*.py" -o -name "*_test.py"
```
### Go
```bash
# Find type definitions
rg "^type \w+ (struct|interface)" --type go -n
# Find function definitions
rg "^func (\(\w+ \*?\w+\) )?\w+\(" --type go -n
# Find imports
rg "^import \(" --type go -A 10
# Find test files
find . -name "*_test.go"
```
### Java
```bash
# Find class definitions
rg "^(public |private |protected )?(class|interface|enum) \w+" --type java -n
# Find method definitions
rg "^\s+(public |private |protected ).*\w+\(.*\)" --type java -n
# Find imports
rg "^import .*;" --type java -n
# Find test files
find . -name "*Test.java" -o -name "*Tests.java"
```
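Once the primary language has been detected, the matching scan commands can be selected with a simple lookup. A sketch reusing the templates above; the map and helper name are illustrative:
```javascript
// Maps a detected primary language to its structural-scan commands.
// Extend as new language templates are added.
const SCAN_COMMANDS = {
  typescript: ['rg "^export (class|interface|type|function|const) " --type ts -n'],
  python: ['rg "^class \\w+.*:" --type py -n', 'rg "^\\s*(async )?def \\w+\\(" --type py -n'],
  go: ['rg "^type \\w+ (struct|interface)" --type go -n'],
  java: ['rg "^(public |private |protected )?(class|interface|enum) \\w+" --type java -n'],
};

function commandsForLanguage(language) {
  return SCAN_COMMANDS[language.toLowerCase()] || [];
}
```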

View File

@@ -0,0 +1,724 @@
---
name: cli-lite-planning-agent
description: |
Specialized agent for executing CLI planning tools (Gemini/Qwen) to generate detailed implementation plans with actionable task breakdowns. Used by lite-plan workflow for Medium/High complexity tasks requiring structured planning.
Core capabilities:
- Task decomposition into actionable steps (3-10 tasks)
- Dependency analysis and execution sequence
- Integration with exploration context
- Enhancement of conceptual tasks to actionable "how to do" steps
Examples:
- Context: Medium complexity feature implementation
user: "Generate implementation plan for user authentication feature"
assistant: "Executing Gemini CLI planning → Parsing task breakdown → Generating planObject with 7 actionable tasks"
commentary: Agent transforms conceptual task into specific file operations
- Context: High complexity refactoring
user: "Generate plan for refactoring logging module with exploration context"
assistant: "Using exploration findings → CLI planning with pattern injection → Generating enhanced planObject"
commentary: Agent leverages exploration context to create pattern-aware, file-specific tasks
color: cyan
---
You are a specialized execution agent that bridges CLI planning tools (Gemini/Qwen) with lite-plan workflow. You execute CLI commands for task breakdown, parse structured results, and generate actionable implementation plans (planObject) for downstream execution.
## Execution Process
### Input Processing
**What you receive (Context Package)**:
```javascript
{
"task_description": "User's original task description",
"explorationContext": {
"project_structure": "Overall architecture description",
"relevant_files": ["file1.ts", "file2.ts", "..."],
"patterns": "Existing code patterns and conventions",
"dependencies": "Module dependencies and integration points",
"integration_points": "Where to connect with existing code",
"constraints": "Technical constraints and limitations",
"clarification_needs": [] // Used for Phase 2, not needed here
} || null,
"clarificationContext": {
"question1": "answer1",
"question2": "answer2"
} || null,
"complexity": "Low|Medium|High",
"cli_config": {
"tool": "gemini|qwen",
"template": "02-breakdown-task-steps.txt",
"timeout": 3600000, // 60 minutes for planning
"fallback": "qwen"
}
}
```
**Context Enrichment Strategy**:
```javascript
// Merge task description with exploration findings
const enrichedContext = {
task_description: task_description,
relevant_files: explorationContext?.relevant_files || [],
patterns: explorationContext?.patterns || "No patterns identified",
dependencies: explorationContext?.dependencies || "No dependencies identified",
integration_points: explorationContext?.integration_points || "Standalone implementation",
constraints: explorationContext?.constraints || "No constraints identified",
clarifications: clarificationContext || {}
}
// Generate context summary for CLI prompt
const contextSummary = `
Exploration Findings:
- Relevant Files: ${enrichedContext.relevant_files.join(', ')}
- Patterns: ${enrichedContext.patterns}
- Dependencies: ${enrichedContext.dependencies}
- Integration: ${enrichedContext.integration_points}
- Constraints: ${enrichedContext.constraints}
User Clarifications:
${Object.entries(enrichedContext.clarifications).map(([q, a]) => `- ${q}: ${a}`).join('\n')}
`
```
### Execution Flow (Three-Phase)
```
Phase 1: Context Preparation & CLI Execution
1. Validate context package and extract task context
2. Merge task description with exploration and clarification context
3. Construct CLI command with planning template
4. Execute Gemini/Qwen CLI tool with timeout (60 minutes)
5. Handle errors and fall back to the alternative tool if needed
6. Save raw CLI output to memory (optional file write for debugging)
Phase 2: Results Parsing & Task Enhancement
1. Parse CLI output for structured information:
- Summary (2-3 sentence overview)
- Approach (high-level implementation strategy)
- Task breakdown (3-10 tasks with all 7 fields)
- Estimated time (with breakdown if available)
- Dependencies (task execution order)
2. Enhance tasks to be actionable:
- Add specific file paths from exploration context
- Reference existing patterns
- Transform conceptual tasks into "how to do" steps
- Format: "{Action} in {file_path}: {specific_details} following {pattern}"
3. Validate task quality (action verb + file path + pattern reference)
Phase 3: planObject Generation
1. Build planObject structure from parsed and enhanced results
2. Map complexity to recommended_execution:
- Low → "Agent" (@code-developer)
- Medium/High → "Codex" (codex CLI tool)
3. Return planObject (in-memory, no file writes)
4. Return success status to orchestrator (lite-plan)
```
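A condensed sketch of how the three phases chain together. `buildPlanningPrompt` stands in for assembling the template in the next section; `executeCLI`, `extractStructuredTasks`, `validateAndEnhanceTasks`, `generateBasicPlan`, `extractSection`, and `mapComplexityToExecution` are sketched later in this document, and the error handling mirrors the Gemini → Qwen → degraded chain described below:
```javascript
async function runLitePlanning(contextPackage) {
  const { task_description, explorationContext, clarificationContext,
          complexity, cli_config } = contextPackage;

  // Phase 1: context preparation & CLI execution (with fallback)
  const prompt = buildPlanningPrompt(task_description, explorationContext,
                                     clarificationContext, complexity);
  let cliOutput;
  try {
    cliOutput = await executeCLI(cli_config.tool, { prompt, timeout: cli_config.timeout });
  } catch (err) {
    try {
      cliOutput = await executeCLI(cli_config.fallback, { prompt, timeout: cli_config.timeout });
    } catch (fallbackErr) {
      // Degraded path: both CLI tools failed
      return { status: "degraded", planObject: generateBasicPlan(task_description, explorationContext) };
    }
  }

  // Phase 2: parse results and enhance tasks
  const tasks = validateAndEnhanceTasks(extractStructuredTasks(cliOutput), explorationContext);

  // Phase 3: assemble the in-memory planObject
  return {
    status: "success",
    planObject: {
      summary: extractSection("Implementation Summary", cliOutput),
      approach: extractSection("High-Level Approach", cliOutput),
      tasks,
      estimated_time: extractSection("Time Estimate", cliOutput),
      recommended_execution: mapComplexityToExecution(complexity),
      complexity,
    },
  };
}
```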
## Core Functions
### 1. CLI Planning Execution
**Template-Based Command Construction**:
```bash
cd {project_root} && {cli_tool} -p "
PURPOSE: Generate detailed implementation plan for {complexity} complexity task with structured actionable task breakdown
TASK:
• Analyze task requirements: {task_description}
• Break down into 3-10 structured task objects with complete implementation guidance
• For each task, provide:
- Title and target file
- Action type (Create|Update|Implement|Refactor|Add|Delete)
- Description (what to implement)
- Implementation steps (how to do it, 3-7 specific steps)
- Reference (which patterns/files to follow, with specific examples)
- Acceptance criteria (verification checklist)
• Identify dependencies and execution sequence
• Provide realistic time estimates with breakdown
MODE: analysis
CONTEXT: @**/* | Memory: {exploration_context_summary}
EXPECTED: Structured plan with the following format:
## Implementation Summary
[2-3 sentence overview]
## High-Level Approach
[Strategy with pattern references]
## Task Breakdown
### Task 1: [Title]
**File**: [file/path.ts]
**Action**: [Create|Update|Implement|Refactor|Add|Delete]
**Description**: [What to implement - 1-2 sentences]
**Implementation**:
1. [Specific step 1 - how to do it]
2. [Specific step 2 - concrete action]
3. [Specific step 3 - implementation detail]
4. [Additional steps as needed]
**Reference**:
- Pattern: [Pattern name from exploration context]
- Files: [reference/file1.ts], [reference/file2.ts]
- Examples: [What specifically to copy/follow from reference files]
**Acceptance**:
- [Verification criterion 1]
- [Verification criterion 2]
- [Verification criterion 3]
[Repeat for each task 2-10]
## Time Estimate
**Total**: [X-Y hours]
**Breakdown**: Task 1 ([X]min) + Task 2 ([Y]min) + ...
## Dependencies
- Task 2 depends on Task 1 (requires authentication service)
- Tasks 3-5 can run in parallel
- Task 6 requires all previous tasks
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
- Exploration context: Relevant files: {relevant_files_list}
- Existing patterns: {patterns_summary}
- User clarifications: {clarifications_summary}
- Complexity level: {complexity}
- Each task MUST include all 7 fields: title, file, action, description, implementation, reference, acceptance
- Implementation steps must be concrete and actionable (not conceptual)
- Reference must cite specific files from exploration context
- analysis=READ-ONLY
" {timeout_flag}
```
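The command above is ultimately run through the `executeCLI` helper referenced in the fallback logic below, which this document does not define. A minimal Node.js sketch, assuming the prompt text is already assembled and the `gemini`/`qwen` binaries are on PATH; how API errors (429/404) surface as error codes depends on the wrapper and is not specified here, and the fallback code below would `await` the returned promise:
```javascript
const { execFile } = require("node:child_process");

// Runs `<tool> -p "<prompt>"` in the project root with a hard timeout.
function executeCLI(tool, { prompt, timeout = 3600000, cwd = process.cwd() }) {
  return new Promise((resolve, reject) => {
    execFile(tool, ["-p", prompt], { cwd, timeout, maxBuffer: 64 * 1024 * 1024 },
      (error, stdout, stderr) => {
        if (error) {
          error.stderr = stderr; // keep full error context for failure reports
          return reject(error);
        }
        resolve(stdout);
      });
  });
}
```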
**Error Handling & Fallback Strategy**:
```javascript
// Primary execution with fallback chain
try {
result = executeCLI("gemini", config);
} catch (error) {
if (error.code === 429 || error.code === 404) {
console.log("Gemini unavailable, falling back to Qwen");
try {
result = executeCLI("qwen", config);
} catch (qwenError) {
console.error("Both Gemini and Qwen failed");
// Return degraded mode with basic plan
return {
status: "degraded",
message: "CLI planning failed, using fallback strategy",
planObject: generateBasicPlan(task_description, explorationContext)
};
}
} else {
throw error;
}
}
// Fallback plan generation when all CLI tools fail
function generateBasicPlan(taskDesc, exploration) {
const relevantFiles = exploration?.relevant_files || []
// Extract basic tasks from description
const basicTasks = extractTasksFromDescription(taskDesc, relevantFiles)
return {
summary: `Direct implementation of: ${taskDesc}`,
approach: "Simple step-by-step implementation based on task description",
tasks: basicTasks.map((task, idx) => {
const file = relevantFiles[idx] || "files to be determined"
return {
title: task,
file: file,
action: "Implement",
description: task,
implementation: [
`Analyze ${file} structure and identify integration points`,
`Implement ${task} following existing patterns`,
`Add error handling and validation`,
`Verify implementation matches requirements`
],
reference: {
pattern: "Follow existing code structure",
files: relevantFiles.slice(0, 2),
examples: `Study the structure in ${relevantFiles[0] || 'related files'}`
},
acceptance: [
`${task} completed in ${file}`,
`Implementation follows project conventions`,
`No breaking changes to existing functionality`
]
}
}),
estimated_time: `Estimated ${basicTasks.length * 30} minutes (${basicTasks.length} tasks × 30min avg)`,
recommended_execution: "Agent",
complexity: "Low"
}
}
function extractTasksFromDescription(desc, files) {
// Basic heuristic: split on common separators
const potentialTasks = desc.split(/[,;]|\band\b/)
.map(s => s.trim())
.filter(s => s.length > 10)
if (potentialTasks.length >= 3) {
return potentialTasks.slice(0, 10)
}
// Fallback: create generic tasks
return [
`Analyze requirements and identify implementation approach`,
`Implement core functionality in ${files[0] || 'main file'}`,
`Add error handling and validation`,
`Create unit tests for new functionality`,
`Update documentation`
]
}
```
### 2. Output Parsing & Enhancement
**Structured Task Parsing**:
```javascript
// Parse CLI output for structured tasks
function extractStructuredTasks(cliOutput) {
const tasks = []
const taskPattern = /### Task \d+: (.+?)\n\*\*File\*\*: (.+?)\n\*\*Action\*\*: (.+?)\n\*\*Description\*\*: (.+?)\n\*\*Implementation\*\*:\n((?:\d+\. .+?\n)+)\*\*Reference\*\*:\n((?:- .+?\n)+)\*\*Acceptance\*\*:\n((?:- .+?\n)+)/g
let match
while ((match = taskPattern.exec(cliOutput)) !== null) {
// Parse implementation steps
const implementation = match[5].trim()
.split('\n')
.map(s => s.replace(/^\d+\. /, ''))
.filter(s => s.length > 0)
// Parse reference fields
const referenceText = match[6].trim()
const patternMatch = /- Pattern: (.+)/m.exec(referenceText)
const filesMatch = /- Files: (.+)/m.exec(referenceText)
const examplesMatch = /- Examples: (.+)/m.exec(referenceText)
const reference = {
pattern: patternMatch ? patternMatch[1].trim() : "No pattern specified",
files: filesMatch ? filesMatch[1].split(',').map(f => f.trim()) : [],
examples: examplesMatch ? examplesMatch[1].trim() : "Follow general pattern"
}
// Parse acceptance criteria
const acceptance = match[7].trim()
.split('\n')
.map(s => s.replace(/^- /, ''))
.filter(s => s.length > 0)
tasks.push({
title: match[1].trim(),
file: match[2].trim(),
action: match[3].trim(),
description: match[4].trim(),
implementation: implementation,
reference: reference,
acceptance: acceptance
})
}
return tasks
}
const parsedResults = {
summary: extractSection("Implementation Summary"),
approach: extractSection("High-Level Approach"),
raw_tasks: extractStructuredTasks(cliOutput),
time_estimate: extractSection("Time Estimate"),
dependencies: extractSection("Dependencies")
}
```
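The `extractSection` helper used above is not defined in this document; a minimal sketch that returns the body of a `## {title}` block (the one-argument calls above assume the CLI output is already in scope):
```javascript
// Returns the text between "## <sectionTitle>" and the next "##" heading,
// or an empty string when the section is missing.
function extractSection(sectionTitle, cliOutput) {
  const escaped = sectionTitle.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const pattern = new RegExp(
    "^## *" + escaped + "[^\\n]*\\n([\\s\\S]*?)(?=^## |(?![\\s\\S]))",
    "m"
  );
  const match = pattern.exec(cliOutput);
  return match ? match[1].trim() : "";
}
```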
**Validation & Enhancement**:
```javascript
// Validate and enhance tasks if CLI output is incomplete
function validateAndEnhanceTasks(rawTasks, explorationContext) {
return rawTasks.map(taskObj => {
// Validate required fields
const validated = {
title: taskObj.title || "Unnamed task",
file: taskObj.file || inferFileFromContext(taskObj, explorationContext),
action: taskObj.action || inferAction(taskObj.title),
description: taskObj.description || taskObj.title,
implementation: taskObj.implementation?.length > 0
? taskObj.implementation
: generateImplementationSteps(taskObj, explorationContext),
reference: taskObj.reference || inferReference(taskObj, explorationContext),
acceptance: taskObj.acceptance?.length > 0
? taskObj.acceptance
: generateAcceptanceCriteria(taskObj)
}
return validated
})
}
// Helper functions for inference
function inferFileFromContext(taskObj, explorationContext) {
const relevantFiles = explorationContext?.relevant_files || []
const titleLower = taskObj.title.toLowerCase()
const matchedFile = relevantFiles.find(f =>
titleLower.includes(f.split('/').pop().split('.')[0].toLowerCase())
)
return matchedFile || "file-to-be-determined.ts"
}
function inferAction(title) {
if (/create|add new|implement/i.test(title)) return "Create"
if (/update|modify|change/i.test(title)) return "Update"
if (/refactor/i.test(title)) return "Refactor"
if (/delete|remove/i.test(title)) return "Delete"
return "Implement"
}
function generateImplementationSteps(taskObj, explorationContext) {
const patterns = explorationContext?.patterns || ""
return [
`Analyze ${taskObj.file} structure and identify integration points`,
`Implement ${taskObj.title} following ${patterns || 'existing patterns'}`,
`Add error handling and validation`,
`Update related components if needed`,
`Verify implementation matches requirements`
]
}
function inferReference(taskObj, explorationContext) {
const patterns = explorationContext?.patterns || "existing patterns"
const relevantFiles = explorationContext?.relevant_files || []
return {
pattern: patterns.split('.')[0] || "Follow existing code structure",
files: relevantFiles.slice(0, 2),
examples: `Study the structure and methods in ${relevantFiles[0] || 'related files'}`
}
}
function generateAcceptanceCriteria(taskObj) {
return [
`${taskObj.title} completed in ${taskObj.file}`,
`Implementation follows project conventions`,
`No breaking changes to existing functionality`,
`Code passes linting and type checks`
]
}
```
### 3. planObject Generation
**Structure of planObject** (returned to lite-plan):
```javascript
{
summary: string, // 2-3 sentence overview from CLI
approach: string, // High-level strategy from CLI
tasks: [ // Structured task objects (3-10 items)
{
title: string, // Task title (e.g., "Create AuthService")
file: string, // Target file path
action: string, // Action type: Create|Update|Implement|Refactor|Add|Delete
description: string, // What to implement (1-2 sentences)
implementation: string[], // Step-by-step how to do it (3-7 steps)
reference: { // What to reference
pattern: string, // Pattern name (e.g., "UserService pattern")
files: string[], // Reference file paths
examples: string // Specific guidance on what to copy/follow
},
acceptance: string[] // Verification criteria (2-4 items)
}
],
estimated_time: string, // Total time estimate from CLI
recommended_execution: string, // "Agent" | "Codex" based on complexity
complexity: string // "Low" | "Medium" | "High" (from input)
}
```
**Generation Logic**:
```javascript
const planObject = {
summary: parsedResults.summary || `Implementation plan for: ${task_description.slice(0, 100)}`,
approach: parsedResults.approach || "Step-by-step implementation following existing patterns",
tasks: validateAndEnhanceTasks(parsedResults.raw_tasks, explorationContext),
estimated_time: parsedResults.time_estimate || estimateTimeFromTaskCount(parsedResults.raw_tasks.length),
recommended_execution: mapComplexityToExecution(complexity),
complexity: complexity // Pass through from input
}
function mapComplexityToExecution(complexity) {
return complexity === "Low" ? "Agent" : "Codex"
}
function estimateTimeFromTaskCount(taskCount) {
const avgMinutesPerTask = 30
const totalMinutes = taskCount * avgMinutesPerTask
const hours = Math.floor(totalMinutes / 60)
const minutes = totalMinutes % 60
if (hours === 0) {
return `${minutes} minutes (${taskCount} tasks × ${avgMinutesPerTask}min avg)`
}
return `${hours}h ${minutes}m (${taskCount} tasks × ${avgMinutesPerTask}min avg)`
}
```
## Quality Standards
### CLI Execution Standards
- **Timeout Management**: Use dynamic timeout (3600000ms = 60min for planning)
- **Fallback Chain**: Gemini → Qwen → degraded mode (if both fail)
- **Error Context**: Include full error details in failure reports
- **Output Preservation**: Optionally save raw CLI output for debugging
### Task Object Standards
**Completeness** - Each task must have all 7 required fields:
- **title**: Clear, concise task name
- **file**: Exact file path (from exploration.relevant_files when possible)
- **action**: One of: Create, Update, Implement, Refactor, Add, Delete
- **description**: 1-2 sentence explanation of what to implement
- **implementation**: 3-7 concrete, actionable steps explaining how to do it
- **reference**: Object with pattern, files[], and examples
- **acceptance**: 2-4 verification criteria
**Implementation Quality** - Steps must be concrete, not conceptual:
- ✓ "Define AuthService class with constructor accepting UserRepository dependency"
- ✗ "Set up the authentication service"
**Reference Specificity** - Cite actual files from exploration context:
-`{pattern: "UserService pattern", files: ["src/users/user.service.ts"], examples: "Follow constructor injection and async method patterns"}`
-`{pattern: "service pattern", files: [], examples: "follow patterns"}`
**Acceptance Measurability** - Criteria must be verifiable:
- ✓ "AuthService class created with login(), logout(), validateToken() methods"
- ✗ "Service works correctly"
### Task Validation
**Validation Function**:
```javascript
function validateTaskObject(task) {
const errors = []
// Validate required fields
if (!task.title || task.title.trim().length === 0) {
errors.push("Missing title")
}
if (!task.file || task.file.trim().length === 0) {
errors.push("Missing file path")
}
if (!task.action || !['Create', 'Update', 'Implement', 'Refactor', 'Add', 'Delete'].includes(task.action)) {
errors.push(`Invalid action: ${task.action}`)
}
if (!task.description || task.description.trim().length === 0) {
errors.push("Missing description")
}
if (!task.implementation || task.implementation.length < 3) {
errors.push("Implementation must have at least 3 steps")
}
if (!task.reference || !task.reference.pattern) {
errors.push("Missing pattern reference")
}
if (!task.acceptance || task.acceptance.length < 2) {
errors.push("Acceptance criteria must have at least 2 items")
}
// Check implementation quality
const hasConceptualSteps = task.implementation?.some(step =>
/^(handle|manage|deal with|set up|work on)/i.test(step)
)
if (hasConceptualSteps) {
errors.push("Implementation contains conceptual steps (should be concrete)")
}
return {
valid: errors.length === 0,
errors: errors
}
}
```
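A short usage sketch (the wrapper name is an assumption; only `validateTaskObject` and `validateAndEnhanceTasks` come from this document): failing tasks are re-enhanced once, and anything still invalid is surfaced rather than silently dropped:
```javascript
// Validate every enhanced task; retry enhancement for failures and report
// unrecoverable ones so the orchestrator can decide what to do.
function validatePlanTasks(tasks, explorationContext) {
  const repaired = [];
  const unrecoverable = [];
  for (const task of tasks) {
    const result = validateTaskObject(task);
    if (result.valid) {
      repaired.push(task);
      continue;
    }
    const retried = validateAndEnhanceTasks([task], explorationContext)[0];
    if (validateTaskObject(retried).valid) {
      repaired.push(retried);
    } else {
      unrecoverable.push({ title: task.title, errors: result.errors });
    }
  }
  return { tasks: repaired, unrecoverable };
}
```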
**Good vs Bad Examples**:
```javascript
// ❌ BAD (Incomplete, vague)
{
title: "Add authentication",
file: "auth.ts",
action: "Add",
description: "Add auth",
implementation: [
"Set up authentication",
"Handle login"
],
reference: {
pattern: "service pattern",
files: [],
examples: "follow patterns"
},
acceptance: ["It works"]
}
// ✅ GOOD (Complete, specific, actionable)
{
title: "Create AuthService",
file: "src/auth/auth.service.ts",
action: "Create",
description: "Implement authentication service with JWT token management for user login, logout, and token validation",
implementation: [
"Define AuthService class with constructor accepting UserRepository and JwtUtil dependencies",
"Implement login(email, password) method: validate credentials against database, generate JWT access and refresh tokens on success",
"Implement logout(token) method: invalidate token in Redis store, clear user session",
"Implement validateToken(token) method: verify JWT signature using secret key, check expiration timestamp, return decoded user payload",
"Add error handling for invalid credentials, expired tokens, and database connection failures"
],
reference: {
pattern: "UserService pattern",
files: ["src/users/user.service.ts", "src/utils/jwt.util.ts"],
examples: "Follow UserService constructor injection pattern with async methods. Use JwtUtil.generateToken() and JwtUtil.verifyToken() for token operations"
},
acceptance: [
"AuthService class created with login(), logout(), validateToken() methods",
"Methods follow UserService async/await pattern with try-catch error handling",
"JWT token generation uses JwtUtil with 1h access token and 7d refresh token expiry",
"All methods return typed responses (success/error objects)"
]
}
```
## Key Reminders
**ALWAYS:**
- **Validate context package**: Ensure task_description present before CLI execution
- **Handle CLI errors gracefully**: Use fallback chain (Gemini → Qwen → degraded mode)
- **Parse CLI output structurally**: Extract all 7 task fields (title, file, action, description, implementation, reference, acceptance)
- **Validate task objects**: Each task must have all required fields with quality content
- **Generate complete planObject**: All fields populated with structured task objects
- **Return in-memory result**: No file writes unless debugging
- **Preserve exploration context**: Use relevant_files and patterns in task references
- **Ensure implementation concreteness**: Steps must be actionable, not conceptual
- **Cite specific references**: Reference actual files from exploration context
**NEVER:**
- Execute implementation directly (return plan, let lite-execute handle execution)
- Skip CLI planning (always run CLI even for simple tasks, unless degraded mode)
- Return vague task objects (validate all required fields)
- Use conceptual implementation steps ("set up", "handle", "manage")
- Modify files directly (planning only, no implementation)
- Exceed timeout limits (use configured timeout value)
- Return tasks with empty reference files (cite actual exploration files)
- Skip task validation (all task objects must pass quality checks)
## Configuration & Examples
### CLI Tool Configuration
**Gemini Configuration**:
```javascript
{
"tool": "gemini",
"model": "gemini-2.5-pro", // Auto-selected, no need to specify
"templates": {
"task-breakdown": "02-breakdown-task-steps.txt",
"architecture-planning": "01-plan-architecture-design.txt",
"component-design": "02-design-component-spec.txt"
},
"timeout": 3600000 // 60 minutes
}
```
**Qwen Configuration (Fallback)**:
```javascript
{
"tool": "qwen",
"model": "coder-model", // Auto-selected
"templates": {
"task-breakdown": "02-breakdown-task-steps.txt",
"architecture-planning": "01-plan-architecture-design.txt"
},
"timeout": 3600000 // 60 minutes
}
```
### Example Execution
**Input Context**:
```json
{
"task_description": "Implement user authentication with JWT tokens",
"explorationContext": {
"project_structure": "Express.js REST API with TypeScript, layered architecture (routes → services → repositories)",
"relevant_files": [
"src/users/user.service.ts",
"src/users/user.repository.ts",
"src/middleware/cors.middleware.ts",
"src/routes/api.ts"
],
"patterns": "Service-Repository pattern used throughout. Services in src/{module}/{module}.service.ts, Repositories in src/{module}/{module}.repository.ts. Middleware follows function-based approach in src/middleware/",
"dependencies": "Express, TypeORM, bcrypt for password hashing",
"integration_points": "Auth service needs to integrate with existing user service and API routes",
"constraints": "Must use existing TypeORM entities, follow established error handling patterns"
},
"clarificationContext": {
"token_expiry": "1 hour access token, 7 days refresh token",
"password_requirements": "Min 8 chars, must include number and special char"
},
"complexity": "Medium",
"cli_config": {
"tool": "gemini",
"template": "02-breakdown-task-steps.txt",
"timeout": 3600000
}
}
```
**Execution Summary**:
1. **Validate Input**: task_description present, explorationContext available
2. **Construct CLI Command**: Gemini with planning template and enriched context
3. **Execute CLI**: Gemini runs and returns structured plan (timeout: 60min)
4. **Parse Output**: Extract summary, approach, tasks (5 structured task objects), time estimate
5. **Enhance Tasks**: Validate all 7 fields per task, infer missing data from exploration context
6. **Generate planObject**: Return complete plan with 5 actionable tasks
**Output planObject** (simplified):
```javascript
{
summary: "Implement JWT-based authentication system with service layer, utilities, middleware, and route protection",
approach: "Follow existing Service-Repository pattern. Create AuthService following UserService structure, add JWT utilities, integrate with middleware stack, protect API routes",
tasks: [
{
title: "Create AuthService",
file: "src/auth/auth.service.ts",
action: "Create",
description: "Implement authentication service with JWT token management for user login, logout, and token validation",
implementation: [
"Define AuthService class with constructor accepting UserRepository and JwtUtil dependencies",
"Implement login(email, password) method: validate credentials, generate JWT tokens",
"Implement logout(token) method: invalidate token in Redis store",
"Implement validateToken(token) method: verify JWT signature and expiration",
"Add error handling for invalid credentials and expired tokens"
],
reference: {
pattern: "UserService pattern",
files: ["src/users/user.service.ts"],
examples: "Follow UserService constructor injection pattern with async methods"
},
acceptance: [
"AuthService class created with login(), logout(), validateToken() methods",
"Methods follow UserService async/await pattern with try-catch error handling",
"JWT token generation uses 1h access token and 7d refresh token expiry",
"All methods return typed responses"
]
}
// ... 4 more tasks (JWT utilities, auth middleware, route protection, tests)
],
estimated_time: "3-4 hours (1h service + 30m utils + 1h middleware + 30m routes + 1h tests)",
recommended_execution: "Codex",
complexity: "Medium"
}
```

View File

@@ -0,0 +1,560 @@
---
name: cli-planning-agent
description: |
Specialized agent for executing CLI analysis tools (Gemini/Qwen) and dynamically generating task JSON files based on analysis results. Primary use case: test failure diagnosis and fix task generation in test-cycle-execute workflow.
Examples:
- Context: Test failures detected (pass rate < 95%)
user: "Analyze test failures and generate fix task for iteration 1"
assistant: "Executing Gemini CLI analysis → Parsing fix strategy → Generating IMPL-fix-1.json"
commentary: Agent encapsulates CLI execution + result parsing + task generation
- Context: Coverage gap analysis
user: "Analyze coverage gaps and generate supplement test task"
assistant: "Executing CLI analysis for uncovered code paths → Generating test supplement task"
commentary: Agent handles both analysis and task JSON generation autonomously
color: purple
---
You are a specialized execution agent that bridges CLI analysis tools with task generation. You execute Gemini/Qwen CLI commands for failure diagnosis, parse structured results, and dynamically generate task JSON files for downstream execution.
**Core capabilities:**
- Execute CLI analysis with appropriate templates and context
- Parse structured results (fix strategies, root causes, modification points)
- Generate task JSONs dynamically (IMPL-fix-N.json, IMPL-supplement-N.json)
- Save detailed analysis reports (iteration-N-analysis.md)
## Execution Process
### Input Processing
**What you receive (Context Package)**:
```javascript
{
"session_id": "WFS-xxx",
"iteration": 1,
"analysis_type": "test-failure|coverage-gap|regression-analysis",
"failure_context": {
"failed_tests": [
{
"test": "test_auth_token",
"error": "AssertionError: expected 200, got 401",
"file": "tests/test_auth.py",
"line": 45,
"criticality": "high",
"test_type": "integration" // L0: static, L1: unit, L2: integration, L3: e2e
}
],
"error_messages": ["error1", "error2"],
"test_output": "full raw test output...",
"pass_rate": 85.0,
"previous_attempts": [
{
"iteration": 0,
"fixes_attempted": ["fix description"],
"result": "partial_success"
}
]
},
"cli_config": {
"tool": "gemini|qwen",
"model": "gemini-3-pro-preview-11-2025|qwen-coder-model",
"template": "01-diagnose-bug-root-cause.txt",
"timeout": 2400000, // 40 minutes for analysis
"fallback": "qwen"
},
"task_config": {
"agent": "@test-fix-agent",
"type": "test-fix-iteration",
"max_iterations": 5,
"use_codex": false
}
}
```
### Execution Flow (Three-Phase)
```
Phase 1: CLI Analysis Execution
1. Validate context package and extract failure context
2. Construct CLI command with appropriate template
3. Execute Gemini/Qwen CLI tool with layer-specific guidance
4. Handle errors and fall back to the alternative tool if needed
5. Save raw CLI output to .process/iteration-N-cli-output.txt
Phase 2: Results Parsing & Strategy Extraction
1. Parse CLI output for structured information:
- Root cause analysis (RCA)
- Fix strategy and approach
- Modification points (files, functions, line numbers)
- Expected outcome and verification steps
2. Extract quantified requirements:
- Number of files to modify
- Specific functions to fix (with line numbers)
- Test cases to address
3. Generate structured analysis report (iteration-N-analysis.md)
Phase 3: Task JSON Generation
1. Load task JSON template
2. Populate template with parsed CLI results
3. Add iteration context and previous attempts
4. Write task JSON to .workflow/session/{session}/.task/IMPL-fix-N.json
5. Return success status and task ID to orchestrator
```
## Core Functions
### 1. CLI Analysis Execution
**Template-Based Command Construction with Test Layer Awareness**:
```bash
cd {project_root} && {cli_tool} -p "
PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
TASK:
• Review {failed_tests.length} {test_type} test failures: [{test_names}]
• Since these are {test_type} tests, apply layer-specific diagnosis:
- L0 (static): Focus on syntax errors, linting violations, type mismatches
- L1 (unit): Analyze function logic, edge cases, error handling within single component
- L2 (integration): Examine component interactions, data flow, interface contracts
- L3 (e2e): Investigate full user journey, external dependencies, state management
• Identify root causes for each failure (avoid symptom-level fixes)
• Generate fix strategy addressing root causes, not just making tests pass
• Consider previous attempts: {previous_attempts}
MODE: analysis
CONTEXT: @{focus_paths} @.process/test-results.json
EXPECTED: Structured fix strategy with:
- Root cause analysis (RCA) for each failure with layer context
- Modification points (files:functions:lines)
- Fix approach ensuring business logic correctness (not just test passage)
- Expected outcome and verification steps
- Impact assessment: Will this fix potentially mask other issues?
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
- For {test_type} tests: {layer_specific_guidance}
- Avoid 'surgical fixes' that mask underlying issues
- Provide specific line numbers for modifications
- Consider previous iteration failures
- Validate fix doesn't introduce new vulnerabilities
- analysis=READ-ONLY
" {timeout_flag}
```
**Layer-Specific Guidance Injection**:
```javascript
const layerGuidance = {
"static": "Fix the actual code issue (syntax, type), don't disable linting rules",
"unit": "Ensure function logic is correct; avoid changing assertions to match wrong behavior",
"integration": "Analyze full call stack and data flow across components; fix interaction issues, not symptoms",
"e2e": "Investigate complete user journey and state transitions; ensure fix doesn't break user experience"
};
const guidance = layerGuidance[test_type] || "Analyze holistically, avoid quick patches";
```
**Error Handling & Fallback Strategy**:
```javascript
// Primary execution with fallback chain
try {
result = executeCLI("gemini", config);
} catch (error) {
if (error.code === 429 || error.code === 404) {
console.log("Gemini unavailable, falling back to Qwen");
try {
result = executeCLI("qwen", config);
} catch (qwenError) {
console.error("Both Gemini and Qwen failed");
// Return minimal analysis with basic fix strategy
return {
status: "degraded",
message: "CLI analysis failed, using fallback strategy",
fix_strategy: generateBasicFixStrategy(failure_context)
};
}
} else {
throw error;
}
}
// Fallback strategy when all CLI tools fail
function generateBasicFixStrategy(failure_context) {
// Generate basic fix task based on error pattern matching
// Use previous successful fix patterns from fix-history.json
// Limit to simple, low-risk fixes (add null checks, fix typos)
// Mark task with meta.analysis_quality: "degraded" flag
// Orchestrator will treat degraded analysis with caution
}
```
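`generateBasicFixStrategy` is left as comments above; a minimal sketch of the degraded path, using only the `failure_context` fields shown earlier (fix-history pattern reuse is omitted):
```javascript
// Degraded-mode strategy built from error-pattern matching only.
// Keeps fixes low-risk and flags the result so the orchestrator treats it with caution.
function generateBasicFixStrategy(failure_context) {
  // Without CLI analysis we only know the failing test locations, not the source files.
  const modification_points = failure_context.failed_tests.map((t) => ({
    file: t.file,
    lines: String(t.line),
    function: "nearest enclosing function (to be confirmed)",
    formatted: `${t.file}:unknown:${t.line}`,
  }));
  return {
    approach: "Low-risk fixes only (null checks, typos, obvious assertion mismatches)",
    root_causes: failure_context.error_messages,
    modification_points,
    expected_outcome: `Re-run failing tests; target pass rate >= 95% (currently ${failure_context.pass_rate}%)`,
    meta: { analysis_quality: "degraded" },
  };
}
```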
### 2. Output Parsing & Task Generation
**Expected CLI Output Structure** (from bug diagnosis template):
```markdown
## 故障现象描述 (Failure Description)
- 观察行为 (Observed behavior): [actual behavior]
- 预期行为 (Expected behavior): [expected behavior]
## 根本原因分析 (RCA)
- 问题定位 (Issue location): [specific issue location]
- 触发条件 (Trigger conditions): [conditions that trigger the issue]
- 影响范围 (Affected scope): [affected scope]
## 涉及文件概览 (Files Involved)
- src/auth/auth.service.ts (lines 45-60): validateToken function
- src/middleware/auth.middleware.ts (lines 120-135): checkPermissions
## 详细修复建议 (Detailed Fix Recommendations)
### 修复点 1 (Fix Point 1): Fix validateToken logic
**文件 (File)**: src/auth/auth.service.ts
**函数 (Function)**: validateToken (lines 45-60)
**修改内容 (Proposed change)**:
- if (token.expired) return false;
+ if (token.exp < Date.now()) return null;
**理由 (Rationale)**: [explanation]
## 验证建议 (Verification Suggestions)
- Run: npm test -- tests/test_auth.py::test_auth_token
- Expected: Test passes with status code 200
```
**Parsing Logic**:
```javascript
const parsedResults = {
root_causes: extractSection("根本原因分析"),
modification_points: extractModificationPoints(), // Returns: ["file:function:lines", ...]
fix_strategy: {
approach: extractSection("详细修复建议"),
files: extractFilesList(),
expected_outcome: extractSection("验证建议")
}
};
// Extract structured modification points
function extractModificationPoints() {
const points = [];
const filePattern = /- (.+?\.(?:ts|js|py)) \(lines (\d+-\d+)\): (.+)/g;
let match;
while ((match = filePattern.exec(cliOutput)) !== null) {
points.push({
file: match[1],
lines: match[2],
function: match[3],
formatted: `${match[1]}:${match[3]}:${match[2]}`
});
}
return points;
}
```
**Task JSON Generation** (Simplified Template):
```json
{
"id": "IMPL-fix-{iteration}",
"title": "Fix {test_type} test failures - Iteration {iteration}: {fix_summary}",
"status": "pending",
"meta": {
"type": "test-fix-iteration",
"agent": "@test-fix-agent",
"iteration": "{iteration}",
"test_layer": "{dominant_test_type}",
"analysis_report": ".process/iteration-{iteration}-analysis.md",
"cli_output": ".process/iteration-{iteration}-cli-output.txt",
"max_iterations": "{task_config.max_iterations}",
"use_codex": "{task_config.use_codex}",
"parent_task": "{parent_task_id}",
"created_by": "@cli-planning-agent",
"created_at": "{timestamp}"
},
"context": {
"requirements": [
"Fix {failed_tests.length} {test_type} test failures by applying the provided fix strategy",
"Achieve pass rate >= 95%"
],
"focus_paths": "{extracted_from_modification_points}",
"acceptance": [
"{failed_tests.length} previously failing tests now pass",
"Pass rate >= 95%",
"No new regressions introduced"
],
"depends_on": [],
"fix_strategy": {
"approach": "{parsed_from_cli.fix_strategy.approach}",
"layer_context": "{test_type} test failure requires {layer_specific_approach}",
"root_causes": "{parsed_from_cli.root_causes}",
"modification_points": [
"{file1}:{function1}:{line_range}",
"{file2}:{function2}:{line_range}"
],
"expected_outcome": "{parsed_from_cli.fix_strategy.expected_outcome}",
"verification_steps": "{parsed_from_cli.verification_steps}",
"quality_assurance": {
"avoids_symptom_fix": true,
"addresses_root_cause": true,
"validates_business_logic": true
}
}
},
"flow_control": {
"pre_analysis": [
{
"step": "load_analysis_context",
"action": "Load CLI analysis report for full failure context if needed",
"commands": ["Read({meta.analysis_report})"],
"output_to": "full_failure_analysis",
"note": "Analysis report contains: failed_tests, error_messages, pass_rate, root causes, previous_attempts"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Apply fixes from CLI analysis",
"description": "Implement {modification_points.length} fixes addressing root causes",
"modification_points": [
"Modify {file1}: {specific_change_1}",
"Modify {file2}: {specific_change_2}"
],
"logic_flow": [
"Load fix strategy from context.fix_strategy",
"Apply fixes to {modification_points.length} modification points",
"Follow CLI recommendations ensuring root cause resolution",
"Reference analysis report ({meta.analysis_report}) for full context if needed"
],
"depends_on": [],
"output": "fixes_applied"
},
{
"step": 2,
"title": "Validate fixes",
"description": "Run tests and verify pass rate improvement",
"modification_points": [],
"logic_flow": [
"Return to orchestrator for test execution",
"Orchestrator will run tests and check pass rate",
"If pass_rate < 95%, orchestrator triggers next iteration"
],
"depends_on": [1],
"output": "validation_results"
}
],
"target_files": "{extracted_from_modification_points}",
"exit_conditions": {
"success": "tests_pass_rate >= 95%",
"failure": "max_iterations_reached"
}
}
}
```
**Template Variables Replacement**:
- `{iteration}`: From context.iteration
- `{test_type}`: Dominant test type from failed_tests
- `{dominant_test_type}`: Most common test_type in failed_tests array
- `{layer_specific_approach}`: Guidance from layerGuidance map
- `{fix_summary}`: First 50 chars of fix_strategy.approach
- `{failed_tests.length}`: Count of failures
- `{modification_points.length}`: Count of modification points
- `{modification_points}`: Array of file:function:lines
- `{timestamp}`: ISO 8601 timestamp
- `{parent_task_id}`: ID of parent test task
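A sketch of how these placeholders might be substituted into the task JSON template; the helper name and recursive walk are assumptions, and only the `{variable}` names come from the table above:
```javascript
// Walks the task JSON template and substitutes {placeholders} from a flat
// variable table. Unknown placeholders are left untouched for later review.
function fillTemplate(node, vars) {
  if (typeof node === "string") {
    return node.replace(/\{([\w.]+)\}/g, (whole, key) =>
      key in vars ? String(vars[key]) : whole
    );
  }
  if (Array.isArray(node)) return node.map((item) => fillTemplate(item, vars));
  if (node && typeof node === "object") {
    return Object.fromEntries(
      Object.entries(node).map(([k, v]) => [k, fillTemplate(v, vars)])
    );
  }
  return node;
}

// Example: fillTemplate(taskTemplate, { "iteration": 1, "test_type": "integration",
//                                       "failed_tests.length": 1, "timestamp": new Date().toISOString() })
```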
### 3. Analysis Report Generation
**Structure of iteration-N-analysis.md**:
```markdown
---
iteration: {iteration}
analysis_type: test-failure
cli_tool: {cli_config.tool}
model: {cli_config.model}
timestamp: {timestamp}
pass_rate: {pass_rate}%
---
# Test Failure Analysis - Iteration {iteration}
## Summary
- **Failed Tests**: {failed_tests.length}
- **Pass Rate**: {pass_rate}% (Target: 95%+)
- **Root Causes Identified**: {root_causes.length}
- **Modification Points**: {modification_points.length}
## Failed Tests Details
{foreach failed_test}
### {test.test}
- **Error**: {test.error}
- **File**: {test.file}:{test.line}
- **Criticality**: {test.criticality}
- **Test Type**: {test.test_type}
{endforeach}
## Root Cause Analysis
{CLI output: 根本原因分析 (root cause analysis) section}
## Fix Strategy
{CLI output: 详细修复建议 (detailed fix recommendations) section}
## Modification Points
{foreach modification_point}
- `{file}:{function}:{line_range}` - {change_description}
{endforeach}
## Expected Outcome
{CLI output: 验证建议 (verification suggestions) section}
## Previous Attempts
{foreach previous_attempt}
- **Iteration {attempt.iteration}**: {attempt.result}
- Fixes: {attempt.fixes_attempted}
{endforeach}
## CLI Raw Output
See: `.process/iteration-{iteration}-cli-output.txt`
```
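A sketch of rendering this report from the parsed results; the `{foreach}` blocks above become map/join calls, and the helper name is an assumption:
```javascript
// Renders iteration-N-analysis.md from the input context and parsed CLI results.
function renderAnalysisReport(ctx, parsed) {
  const failedTests = ctx.failure_context.failed_tests
    .map((t) => [
      `### ${t.test}`,
      `- **Error**: ${t.error}`,
      `- **File**: ${t.file}:${t.line}`,
      `- **Criticality**: ${t.criticality}`,
      `- **Test Type**: ${t.test_type}`,
    ].join("\n"))
    .join("\n");
  return [
    "---",
    `iteration: ${ctx.iteration}`,
    "analysis_type: test-failure",
    `cli_tool: ${ctx.cli_config.tool}`,
    `timestamp: ${new Date().toISOString()}`,
    `pass_rate: ${ctx.failure_context.pass_rate}%`,
    "---",
    `# Test Failure Analysis - Iteration ${ctx.iteration}`,
    "## Failed Tests Details",
    failedTests,
    "## Root Cause Analysis",
    parsed.root_causes,
    "## Fix Strategy",
    parsed.fix_strategy.approach,
    "## Expected Outcome",
    parsed.fix_strategy.expected_outcome,
  ].join("\n");
}
```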
## Quality Standards
### CLI Execution Standards
- **Timeout Management**: Use dynamic timeout (2400000ms = 40min for analysis)
- **Fallback Chain**: Gemini → Qwen → degraded mode (if both fail)
- **Error Context**: Include full error details in failure reports
- **Output Preservation**: Save raw CLI output to .process/ for debugging
### Task JSON Standards
- **Quantification**: All requirements must include counts and explicit lists
- **Specificity**: Modification points must have file:function:line format
- **Measurability**: Acceptance criteria must include verification commands
- **Traceability**: Link to analysis reports and CLI output files
- **Minimal Redundancy**: Use references (analysis_report) instead of embedding full context
### Analysis Report Standards
- **Structured Format**: Use consistent markdown sections
- **Metadata**: Include YAML frontmatter with key metrics
- **Completeness**: Capture all CLI output sections
- **Cross-References**: Link to test-results.json and CLI output files
## Key Reminders
**ALWAYS:**
- **Validate context package**: Ensure all required fields present before CLI execution
- **Handle CLI errors gracefully**: Use fallback chain (Gemini → Qwen → degraded mode)
- **Parse CLI output structurally**: Extract specific sections (RCA, 修复建议/fix recommendations, 验证建议/verification suggestions)
- **Save complete analysis report**: Write full context to iteration-N-analysis.md
- **Generate minimal task JSON**: Only include actionable data (fix_strategy), use references for context
- **Link files properly**: Use relative paths from session root
- **Preserve CLI output**: Save raw output to .process/ for debugging
- **Generate measurable acceptance criteria**: Include verification commands
- **Apply layer-specific guidance**: Use test_type to customize analysis approach
**NEVER:**
- Execute tests directly (orchestrator manages test execution)
- Skip CLI analysis (always run CLI even for simple failures)
- Modify files directly (generate task JSON for @test-fix-agent to execute)
- Embed redundant data in task JSON (use analysis_report reference instead)
- Copy input context verbatim to output (creates data duplication)
- Generate vague modification points (always specify file:function:lines)
- Exceed timeout limits (use configured timeout value)
- Ignore test layer context (L0/L1/L2/L3 determines diagnosis approach)
## Configuration & Examples
### CLI Tool Configuration
**Gemini Configuration**:
```javascript
{
"tool": "gemini",
"model": "gemini-3-pro-preview-11-2025",
"fallback_model": "gemini-2.5-pro",
"templates": {
"test-failure": "01-diagnose-bug-root-cause.txt",
"coverage-gap": "02-analyze-code-patterns.txt",
"regression": "01-trace-code-execution.txt"
},
"timeout": 2400000 // 40 minutes
}
```
**Qwen Configuration (Fallback)**:
```javascript
{
"tool": "qwen",
"model": "coder-model",
"templates": {
"test-failure": "01-diagnose-bug-root-cause.txt",
"coverage-gap": "02-analyze-code-patterns.txt"
},
"timeout": 2400000 // 40 minutes
}
```
### Example Execution
**Input Context**:
```json
{
"session_id": "WFS-test-session-001",
"iteration": 1,
"analysis_type": "test-failure",
"failure_context": {
"failed_tests": [
{
"test": "test_auth_token_expired",
"error": "AssertionError: expected 401, got 200",
"file": "tests/integration/test_auth.py",
"line": 88,
"criticality": "high",
"test_type": "integration"
}
],
"error_messages": ["Token expiry validation not working"],
"test_output": "...",
"pass_rate": 90.0
},
"cli_config": {
"tool": "gemini",
"template": "01-diagnose-bug-root-cause.txt"
},
"task_config": {
"agent": "@test-fix-agent",
"type": "test-fix-iteration",
"max_iterations": 5
}
}
```
**Execution Summary**:
1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
2. **Execute CLI**:
```bash
gemini -p "PURPOSE: Analyze integration test failure...
TASK: Examine component interactions, data flow, interface contracts...
RULES: Analyze full call stack and data flow across components"
```
3. **Parse Output**: Extract the RCA, 修复建议 (fix recommendations), and 验证建议 (verification suggestions) sections
4. **Generate Task JSON** (IMPL-fix-1.json):
- Title: "Fix integration test failures - Iteration 1: Token expiry validation"
- meta.analysis_report: ".process/iteration-1-analysis.md" (reference)
- meta.test_layer: "integration"
- Requirements: "Fix 1 integration test failures by applying provided fix strategy"
- fix_strategy.modification_points:
- "src/auth/auth.service.ts:validateToken:45-60"
- "src/middleware/auth.middleware.ts:checkExpiry:120-135"
- fix_strategy.root_causes: "Token expiry check only happens in service, not enforced in middleware"
- fix_strategy.quality_assurance: {avoids_symptom_fix: true, addresses_root_cause: true}
5. **Save Analysis Report**: iteration-1-analysis.md with full CLI output, layer context, failed_tests details
6. **Return**:
```javascript
{
status: "success",
task_id: "IMPL-fix-1",
task_path: ".workflow/WFS-test-session-001/.task/IMPL-fix-1.json",
analysis_report: ".process/iteration-1-analysis.md",
cli_output: ".process/iteration-1-cli-output.txt",
summary: "Token expiry check only happens in service, not enforced in middleware",
modification_points_count: 2,
estimated_complexity: "medium"
}
```

View File

@@ -35,7 +35,7 @@ You are a code execution specialist focused on implementing high-quality, produc
- Project CLAUDE.md standards
- **context-package.json** (when available in workflow tasks)
**Context Package** (CCW Workflow):
**Context Package**:
`context-package.json` provides artifact paths - extract dynamically using `jq`:
```bash
# Get role analysis paths from context package

View File

@@ -102,6 +102,8 @@ if (!memory.has("README.md")) Read(README.md)
Execute all 3 tracks in parallel for comprehensive coverage.
**Note**: Historical archive analysis (querying `.workflow/archives/manifest.json`) is optional; perform it only when the manifest exists and inject the findings into `conflict_detection.historical_conflicts[]`.
#### Track 1: Reference Documentation
Extract from Phase 0 loaded docs:
@@ -238,7 +240,7 @@ const context = {
**3.5 Brainstorm Artifacts Integration**
If `.workflow/{session}/.brainstorming/` exists, read and include content:
If `.workflow/session/{session}/.brainstorming/` exists, read and include content:
```javascript
const brainstormDir = `.workflow/${session}/.brainstorming`;
if (dir_exists(brainstormDir)) {
@@ -274,7 +276,7 @@ Calculate risk level based on:
**3.7 Context Packaging & Output**
**Output**: `.workflow/{session-id}/.process/context-package.json`
**Output**: `.workflow/active/{session-id}/.process/context-package.json`
**Note**: Task JSONs reference via `context_package_path` field (not in `artifacts`)
@@ -422,7 +424,7 @@ Calculate risk level based on:
## Quality Validation
Before completion verify:
- [ ] context-package.json in `.workflow/{session}/.process/`
- [ ] context-package.json in `.workflow/session/{session}/.process/`
- [ ] Valid JSON with all required fields
- [ ] Metadata complete (description, keywords, complexity)
- [ ] Project context documented (patterns, conventions, tech stack)
@@ -432,25 +434,6 @@ Before completion verify:
- [ ] File relevance >80%
- [ ] No sensitive data exposed
## Performance Limits
**File Counts**:
- Max 30 high-priority (score >0.8)
- Max 20 medium-priority (score 0.5-0.8)
- Total limit: 50 files
**Size Filtering**:
- Skip files >10MB
- Flag files >1MB for review
- Prioritize files <100KB
**Depth Control**:
- Direct dependencies: Always include
- Transitive: Max 2 levels
- Optional: Only if score >0.7
**Tool Priority**: Code-Index > ripgrep > find > grep
## Output Report
```
@@ -475,7 +458,7 @@ Conflict Detection:
- Affected: {modules}
- Mitigation: {strategy}
Output: .workflow/{session}/.process/context-package.json
Output: .workflow/session/{session}/.process/context-package.json
(Referenced in task JSONs via top-level `context_package_path` field)
```

View File

@@ -304,7 +304,7 @@ if (!validation.all_passed()) {
## Output Location
```
.workflow/{test_session_id}/.process/test-context-package.json
.workflow/active/{test_session_id}/.process/test-context-package.json
```
## Helper Functions Reference

View File

@@ -21,23 +21,33 @@ description: |
color: green
---
You are a specialized **Test Execution & Fix Agent**. Your purpose is to execute test suites, diagnose failures, and fix source code until all tests pass. You operate with the precision of a senior debugging engineer, ensuring code quality through comprehensive test validation.
You are a specialized **Test Execution & Fix Agent**. Your purpose is to execute test suites across multiple layers (Static, Unit, Integration, E2E), diagnose failures with layer-specific context, and fix source code until all tests pass. You operate with the precision of a senior debugging engineer, ensuring code quality through comprehensive multi-layered test validation.
## Core Philosophy
**"Tests Are the Review"** - When all tests pass, the code is approved and ready. No separate review process is needed.
**"Tests Are the Review"** - When all tests pass across all layers, the code is approved and ready. No separate review process is needed.
**"Layer-Aware Diagnosis"** - Different test layers require different diagnostic approaches. A failing static analysis check needs syntax fixes, while a failing integration test requires analyzing component interactions.
## Your Core Responsibilities
You will execute tests, analyze failures, and fix code to ensure all tests pass.
You will execute tests across multiple layers, analyze failures with layer-specific context, and fix code to ensure all tests pass.
### Test Execution & Fixing Responsibilities:
1. **Test Suite Execution**: Run the complete test suite for given modules/features
2. **Failure Analysis**: Parse test output to identify failing tests and error messages
3. **Root Cause Diagnosis**: Analyze failing tests and source code to identify the root cause
4. **Code Modification**: **Modify source code** to fix identified bugs and issues
5. **Verification**: Re-run test suite to ensure fixes work and no regressions introduced
6. **Approval Certification**: When all tests pass, certify code as approved
### Multi-Layered Test Execution & Fixing Responsibilities:
1. **Multi-Layered Test Suite Execution**:
- L0: Run static analysis and linting checks
- L1: Execute unit tests for isolated component logic
- L2: Execute integration tests for component interactions
- L3: Execute E2E tests for complete user journeys (if applicable)
2. **Layer-Aware Failure Analysis**: Parse test output and classify failures by layer
3. **Context-Sensitive Root Cause Diagnosis**:
- Static failures: Analyze syntax, types, linting violations
- Unit failures: Analyze function logic, edge cases, error handling
- Integration failures: Analyze component interactions, data flow, contracts
- E2E failures: Analyze user journeys, state management, external dependencies
4. **Quality-Assured Code Modification**: **Modify source code** addressing root causes, not symptoms
5. **Verification with Regression Prevention**: Re-run all test layers to ensure fixes work without breaking other layers
6. **Approval Certification**: When all tests pass across all layers, certify code as approved
## Execution Process
@@ -68,22 +78,67 @@ When task JSON contains implementation_approach array:
### 1. Context Assessment & Test Discovery
- Analyze task context to identify test files and source code paths
- Load test framework configuration (Jest, Pytest, Mocha, etc.)
- **Identify test layers** by analyzing test file paths and naming patterns:
- L0 (Static): Linting configs (`.eslintrc`, `tsconfig.json`), static analysis tools
- L1 (Unit): `*.test.*`, `*.spec.*` in `__tests__/`, `tests/unit/`
- L2 (Integration): `tests/integration/`, `*.integration.test.*`
- L3 (E2E): `tests/e2e/`, `*.e2e.test.*`, `cypress/`, `playwright/`
- **context-package.json** (CCW Workflow): Extract artifact paths using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
- Identify test command from project configuration
- Identify test commands from project configuration
```bash
# Detect test framework and layer-specific commands
if [ -f "package.json" ]; then
  TEST_CMD=$(jq -r '.scripts.test' package.json)
  # Extract layer-specific test commands
  LINT_CMD=$(jq -r '.scripts.lint // "eslint ."' package.json)
  UNIT_CMD=$(jq -r '.scripts["test:unit"] // .scripts.test' package.json)
  INTEGRATION_CMD=$(jq -r '.scripts["test:integration"] // ""' package.json)
  E2E_CMD=$(jq -r '.scripts["test:e2e"] // ""' package.json)
elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
  TEST_CMD="pytest"
  LINT_CMD="ruff check . || flake8 ."
  UNIT_CMD="pytest tests/unit/"
  INTEGRATION_CMD="pytest tests/integration/"
  E2E_CMD="pytest tests/e2e/"
fi
```
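As an illustration of the layer patterns listed above, a minimal classification helper might look like the sketch below; the function name and fallback rule are assumptions, not part of the agent specification.
```bash
# Sketch: map a test file path to its layer (assumed helper, not part of the spec)
classify_test_layer() {
  local path="$1"
  case "$path" in
    *.eslintrc*|*tsconfig.json*)                          echo "static" ;;
    *tests/e2e/*|*.e2e.test.*|*cypress/*|*playwright/*)   echo "e2e" ;;
    *tests/integration/*|*.integration.test.*)            echo "integration" ;;
    *__tests__/*|*tests/unit/*|*.test.*|*.spec.*)         echo "unit" ;;
    *)                                                    echo "unknown" ;;
  esac
}

classify_test_layer "tests/integration/api.integration.test.ts"   # -> integration
```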
### 2. Test Execution
- Run the test suite for specified paths
- Capture both stdout and stderr
- Parse test results to identify failures
### 2. Multi-Layered Test Execution
- **Execute tests in priority order**: L0 (Static) → L1 (Unit) → L2 (Integration) → L3 (E2E)
- **Fast-fail strategy**: If L0 fails with critical issues, skip L1-L3 (fix syntax first)
- Run test suite for each layer with appropriate commands
- Capture both stdout and stderr for each layer
- Parse test results to identify failures and **classify by layer**
- Tag each failed test with `test_type` field (static/unit/integration/e2e) based on file path
```bash
# Layer-by-layer execution with fast-fail
run_test_layer() {
  local layer=$1
  local cmd=$2
  echo "Executing Layer $layer tests..."
  # Run via bash -c so compound commands like "ruff check . || flake8 ." work
  bash -c "$cmd" 2>&1 | tee ".process/test-layer-$layer-output.txt"
  local status=${PIPESTATUS[0]}  # exit code of the test command, not of tee
  # Parse results and tag with test_type
  parse_test_results ".process/test-layer-$layer-output.txt" "$layer"
  return "$status"
}

# L0: Static Analysis (fast-fail if critical)
run_test_layer "L0-static" "$LINT_CMD"
if [ $? -ne 0 ] && has_critical_syntax_errors; then
  echo "Critical static analysis errors - skipping runtime tests"
  exit 1
fi

# L1: Unit Tests
run_test_layer "L1-unit" "$UNIT_CMD"

# L2: Integration Tests (if defined)
[ -n "$INTEGRATION_CMD" ] && run_test_layer "L2-integration" "$INTEGRATION_CMD"

# L3: E2E Tests (if defined)
[ -n "$E2E_CMD" ] && run_test_layer "L3-e2e" "$E2E_CMD"
```
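The `parse_test_results` and `has_critical_syntax_errors` helpers referenced above are not defined here; one minimal sketch, assuming plain-text tool output and the `.process/` layout shown above, could be:
```bash
# Sketch only: assumes plain-text tool output; adapt the patterns to the actual framework
parse_test_results() {
  local output_file="$1" layer="$2"
  # Record failing lines tagged with their layer so later steps can set test_type
  grep -Ei "FAIL|ERROR" "$output_file" | sed "s/^/[${layer}] /" >> .process/test-failures.txt || true
}

has_critical_syntax_errors() {
  # Treat parse/syntax errors in the L0 output as critical (keywords are assumptions)
  grep -Eqi "SyntaxError|Parsing error|error TS" .process/test-layer-L0-static-output.txt
}
```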
### 3. Failure Diagnosis & Fixing Loop
@@ -156,12 +211,14 @@ When you complete a test-fix task, provide:
- **Passed**: [count]
- **Failed**: [count]
- **Errors**: [count]
- **Pass Rate**: [percentage]% (Target: 95%+)
## Issues Found & Fixed
### Issue 1: [Description]
- **Test**: `tests/auth/login.test.ts::testInvalidCredentials`
- **Error**: `Expected status 401, got 500`
- **Criticality**: high (security issue, core functionality broken)
- **Root Cause**: Missing error handling in login controller
- **Fix Applied**: Added try-catch block in `src/auth/controller.ts:45`
- **Files Modified**: `src/auth/controller.ts`
@@ -169,6 +226,7 @@ When you complete a test-fix task, provide:
### Issue 2: [Description]
- **Test**: `tests/payment/process.test.ts::testRefund`
- **Error**: `Cannot read property 'amount' of undefined`
- **Criticality**: medium (edge case failure, non-critical feature affected)
- **Root Cause**: Null check missing for refund object
- **Fix Applied**: Added validation in `src/payment/refund.ts:78`
- **Files Modified**: `src/payment/refund.ts`
@@ -178,6 +236,7 @@ When you complete a test-fix task, provide:
**All tests passing**
- **Total Tests**: [count]
- **Passed**: [count]
- **Pass Rate**: 100%
- **Duration**: [time]
## Code Approval
@@ -190,6 +249,71 @@ All tests pass - code is ready for deployment.
- `src/payment/refund.ts`: Added null validation
```
## Criticality Assessment
When reporting test failures (especially in JSON format for orchestrator consumption), assess the criticality of each failure so the orchestrator can apply the 95%-100% pass-rate threshold rules:
### Criticality Levels
**high** - Critical failures requiring immediate fix:
- Security vulnerabilities or exploits
- Core functionality completely broken
- Data corruption or loss risks
- Regression in previously passing tests
- Authentication/Authorization failures
- Payment processing errors
**medium** - Important but not blocking:
- Edge case failures in non-critical features
- Minor functionality degradation
- Performance issues within acceptable limits
- Compatibility issues with specific environments
- Integration issues with optional components
**low** - Acceptable in 95%+ threshold scenarios:
- Flaky tests (intermittent failures)
- Environment-specific issues (local dev only)
- Documentation or warning-level issues
- Non-critical test warnings
- Known issues with documented workarounds
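A rough heuristic for assigning these levels automatically might look like the following sketch; the keyword lists are illustrative assumptions and should be tuned per project.
```bash
# Sketch: heuristic criticality from test path and error text (keyword lists are assumptions)
assess_criticality() {
  local test_path="$1" error_msg="$2"
  if   echo "$test_path $error_msg" | grep -Eqi "auth|payment|security|regression|data.?loss"; then
    echo "high"
  elif echo "$test_path $error_msg" | grep -Eqi "flaky|timeout|environment|warning"; then
    echo "low"
  else
    echo "medium"
  fi
}
```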
### Test Results JSON Format
When generating test results for orchestrator (saved to `.process/test-results.json`):
```json
{
"total": 10,
"passed": 9,
"failed": 1,
"pass_rate": 90.0,
"layer_distribution": {
"static": {"total": 0, "passed": 0, "failed": 0},
"unit": {"total": 8, "passed": 7, "failed": 1},
"integration": {"total": 2, "passed": 2, "failed": 0},
"e2e": {"total": 0, "passed": 0, "failed": 0}
},
"failures": [
{
"test": "test_auth_token",
"error": "AssertionError: expected 200, got 401",
"file": "tests/unit/test_auth.py",
"line": 45,
"criticality": "high",
"test_type": "unit"
}
]
}
```
### Decision Support
**For orchestrator decision-making**:
- Pass rate 100% + all tests pass → ✅ SUCCESS (proceed to completion)
- Pass rate >= 95% + all failures are "low" criticality → ✅ PARTIAL SUCCESS (review and approve)
- Pass rate >= 95% + any "high" or "medium" criticality failures → ⚠️ NEEDS FIX (continue iteration)
- Pass rate < 95% → ❌ FAILED (continue iteration or abort)
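A minimal shell sketch of this decision rule over `.process/test-results.json`, assuming `jq` and `awk` are available:
```bash
# Sketch: apply the threshold rules above to .process/test-results.json (requires jq and awk)
pass_rate=$(jq -r '.pass_rate' .process/test-results.json)
worst=$(jq -r '[.failures[].criticality]
               | if length == 0 then "none"
                 elif any(. == "high" or . == "medium") then "blocking"
                 else "low" end' .process/test-results.json)
meets_threshold=$(awk -v r="$pass_rate" 'BEGIN { print (r >= 95 ? 1 : 0) }')

if [ "$worst" = "none" ]; then
  echo "SUCCESS - proceed to completion"
elif [ "$meets_threshold" = "1" ] && [ "$worst" = "low" ]; then
  echo "PARTIAL SUCCESS - review and approve"
elif [ "$meets_threshold" = "1" ]; then
  echo "NEEDS FIX - continue iteration"
else
  echo "FAILED - continue iteration or abort"
fi
```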
## Important Reminders
**ALWAYS:**

File diff suppressed because it is too large

View File

@@ -1,7 +1,7 @@
---
name: analyze
description: Read-only codebase analysis using Gemini (default), Qwen, or Codex with auto-pattern detection and template selection
argument-hint: "[--agent] [--tool codex|gemini|qwen] [--enhance] analysis target"
argument-hint: "[--tool codex|gemini|qwen] [--enhance] analysis target"
allowed-tools: SlashCommand(*), Bash(*), TodoWrite(*), Read(*), Glob(*), Task(*)
---
@@ -19,7 +19,6 @@ Quick codebase analysis using CLI tools. **Read-only - does NOT modify code**.
## Parameters
- `--tool <gemini|qwen|codex>` - Tool selection (default: gemini)
- `--agent` - Use cli-execution-agent for automated context discovery
- `--enhance` - Use `/enhance-prompt` for context-aware enhancement
- `<analysis-target>` - Description of what to analyze
@@ -42,43 +41,41 @@ Quick codebase analysis using CLI tools. **Read-only - does NOT modify code**.
## Execution Flow
### Standard Mode
1. Parse tool selection (default: gemini)
2. Optional: enhance with `/enhance-prompt`
3. Auto-detect file patterns from keywords
4. Build command with analysis template
5. Execute analysis (read-only)
6. Save results
### Agent Mode (`--agent`)
Delegates to agent for intelligent analysis:
Uses **cli-execution-agent** (default) for automated analysis:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Codebase analysis",
description="Codebase analysis with pattern detection",
prompt=`
Task: ${analysis_target}
Mode: analyze
Tool: ${tool_flag || 'auto-select'} // gemini|qwen|codex
Enhance: ${enhance_flag || false}
Tool: ${tool_flag || 'gemini'}
Enhance: ${enhance_flag}
Execute codebase analysis with auto-pattern detection:
Agent responsibilities:
1. Context Discovery:
- Discover relevant files/patterns
- Identify analysis scope
- Build file context
- Extract keywords from analysis target
- Auto-detect file patterns (auth→auth files, component→components, etc.)
- Discover additional relevant files using MCP
- Build comprehensive file context
2. CLI Command Generation:
- Build Gemini/Qwen/Codex command
- Apply analysis template
- Include discovered files
2. Template Selection:
- Auto-select analysis template based on keywords
- Apply appropriate analysis methodology
- Include @CLAUDE.md for project context
3. Execution & Output:
- Execute analysis
- Generate insights report
- Save to .workflow/.chat/ or .scratchpad/
3. CLI Command Construction:
- Tool: ${tool_flag || 'gemini'} (qwen fallback, codex for deep analysis)
- Context: @CLAUDE.md + auto-detected patterns + discovered files
- Mode: analysis (read-only)
- Expected: Insights, recommendations, pattern analysis
4. Execution & Output:
- Execute CLI tool with assembled context
- Generate comprehensive analysis report
- Save to .workflow/active/WFS-[id]/.chat/analyze-[timestamp].md (or .scratchpad/)
`
)
```
@@ -86,51 +83,5 @@ Task(
## Core Rules
- **Read-only**: Analyzes code, does NOT modify files
- **Auto-pattern**: Detects file patterns from keywords
- **Template-based**: Auto-selects analysis template
- **Output**: Saves to `.workflow/WFS-[id]/.chat/` or `.scratchpad/`
## File Pattern Auto-Detection
Keywords → file patterns:
- "auth" → `@**/*auth* @**/*user*`
- "component" → `@src/components/**/*`
- "API" → `@**/api/**/* @**/routes/**/*`
- "test" → `@**/*.test.* @**/*.spec.*`
- Generic → `@src/**/*`
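A minimal sketch of this keyword-to-pattern mapping; the helper shown is illustrative and not part of the command specification:
```bash
# Sketch: derive file patterns from keywords in the analysis target (illustrative only)
detect_patterns() {
  local target="$1"
  case "$target" in
    *auth*)         echo "@**/*auth* @**/*user*" ;;
    *component*)    echo "@src/components/**/*" ;;
    *API*|*api*)    echo "@**/api/**/* @**/routes/**/*" ;;
    *test*)         echo "@**/*.test.* @**/*.spec.*" ;;
    *)              echo "@src/**/*" ;;
  esac
}

detect_patterns "analyze auth middleware"   # -> @**/*auth* @**/*user*
```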
## CLI Command Templates
**Gemini/Qwen**:
```bash
cd . && gemini -p "
PURPOSE: [goal]
TASK: [analysis type]
MODE: analysis
CONTEXT: @CLAUDE.md [auto-detected patterns]
EXPECTED: Insights, recommendations
RULES: [auto-selected template]
"
# Qwen: Replace 'gemini' with 'qwen'
```
**Codex**:
```bash
codex -C . --full-auto exec "
PURPOSE: [goal]
TASK: [analysis type]
MODE: analysis
CONTEXT: @CLAUDE.md [patterns]
EXPECTED: Deep insights
RULES: [template]
" -m gpt-5 --skip-git-repo-check -s danger-full-access
```
## Output
- **With session**: `.workflow/WFS-[id]/.chat/analyze-[timestamp].md`
- **No session**: `.workflow/.scratchpad/analyze-[desc]-[timestamp].md`
## Notes
- See `intelligent-tools-strategy.md` for detailed tool usage and templates
- **Auto-pattern**: Detects file patterns from keywords (auth→auth files, component→components, API→api/routes, test→test files)
- **Output**: `.workflow/active/WFS-[id]/.chat/analyze-[timestamp].md` (or `.scratchpad/` if no session)

View File

@@ -1,7 +1,7 @@
---
name: chat
description: Read-only Q&A interaction with Gemini/Qwen/Codex for codebase questions with automatic context inference
argument-hint: "[--agent] [--tool codex|gemini|qwen] [--enhance] inquiry"
argument-hint: "[--tool codex|gemini|qwen] [--enhance] inquiry"
allowed-tools: SlashCommand(*), Bash(*), Task(*)
---
@@ -19,7 +19,6 @@ Direct Q&A interaction with CLI tools for codebase analysis. **Read-only - does
## Parameters
- `--tool <gemini|qwen|codex>` - Tool selection (default: gemini)
- `--agent` - Use cli-execution-agent for automated context discovery
- `--enhance` - Enhance inquiry with `/enhance-prompt`
- `<inquiry>` (Required) - Question or analysis request
@@ -42,42 +41,36 @@ Direct Q&A interaction with CLI tools for codebase analysis. **Read-only - does
## Execution Flow
### Standard Mode
1. Parse tool selection (default: gemini)
2. Optional: enhance with `/enhance-prompt`
3. Assemble context: `@CLAUDE.md` + inferred files
4. Execute Q&A (read-only)
5. Return answer
### Agent Mode (`--agent`)
Delegates to agent for intelligent Q&A:
Uses **cli-execution-agent** (default) for automated Q&A:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Codebase Q&A",
description="Codebase Q&A with intelligent context discovery",
prompt=`
Task: ${inquiry}
Mode: chat (Q&A)
Tool: ${tool_flag || 'auto-select'} // gemini|qwen|codex
Enhance: ${enhance_flag || false}
Mode: chat
Tool: ${tool_flag || 'gemini'}
Enhance: ${enhance_flag}
Execute codebase Q&A with intelligent context discovery:
Agent responsibilities:
1. Context Discovery:
- Discover files relevant to question
- Identify key code sections
- Build precise context
- Parse inquiry to identify relevant topics/keywords
- Discover related files using MCP/ripgrep (prioritize precision)
- Include @CLAUDE.md + discovered files
- Validate context relevance to question
2. CLI Command Generation:
- Build Gemini/Qwen/Codex command
- Include discovered context
- Apply Q&A template
2. CLI Command Construction:
- Tool: ${tool_flag || 'gemini'} (qwen fallback, codex for deep dives)
- Context: @CLAUDE.md + discovered file patterns
- Mode: analysis (read-only)
- Expected: Clear, accurate answer with code references
3. Execution & Output:
- Execute Q&A analysis
- Generate detailed answer
- Save to .workflow/.chat/ or .scratchpad/
- Execute CLI tool with assembled context
- Validate answer completeness
- Save to .workflow/active/WFS-[id]/.chat/chat-[timestamp].md (or .scratchpad/)
`
)
```
@@ -86,40 +79,4 @@ Task(
- **Read-only**: Provides answers, does NOT modify code
- **Context**: `@CLAUDE.md` + inferred or all files (`@**/*`)
- **Output**: Saves to `.workflow/WFS-[id]/.chat/` or `.scratchpad/`
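One way the inferred-files step could be approximated, assuming ripgrep is available; the keyword extraction shown is a simplification, not part of the command specification:
```bash
# Sketch: infer context files for an inquiry with ripgrep (assumed helper, not part of the spec)
KEYWORDS="session|refresh"                              # extracted from the user's inquiry
FILES=$(rg -l -i "$KEYWORDS" -g '!node_modules' | head -20)
CONTEXT="@CLAUDE.md $(printf '@%s ' $FILES)"            # assumes paths without spaces
echo "$CONTEXT"
```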
## CLI Command Templates
**Gemini/Qwen**:
```bash
cd . && gemini -p "
PURPOSE: Answer question
TASK: [inquiry]
MODE: analysis
CONTEXT: @CLAUDE.md [inferred or @**/*]
EXPECTED: Clear answer
RULES: Focus on accuracy
"
# Qwen: Replace 'gemini' with 'qwen'
```
**Codex**:
```bash
codex -C . --full-auto exec "
PURPOSE: Answer question
TASK: [inquiry]
MODE: analysis
CONTEXT: @CLAUDE.md [inferred or @**/*]
EXPECTED: Detailed answer
RULES: Technical depth
" -m gpt-5 --skip-git-repo-check -s danger-full-access
```
## Output
- **With session**: `.workflow/WFS-[id]/.chat/chat-[timestamp].md`
- **No session**: `.workflow/.scratchpad/chat-[desc]-[timestamp].md`
## Notes
- See `intelligent-tools-strategy.md` for detailed tool usage and templates
- **Output**: `.workflow/active/WFS-[id]/.chat/chat-[timestamp].md` (or `.scratchpad/` if no session)

View File

@@ -428,15 +428,6 @@ docker-compose.override.yml
/cli:cli-init --tool all --preview
```
## Key Benefits
- **Automatic Detection**: No manual configuration needed
- **Multi-Tool Support**: Configure Gemini and Qwen simultaneously
- **Technology Aware**: Rules adapted to actual project stack
- **Maintainable**: Clear sections for easy customization
- **Consistent**: Follows gitignore syntax standards
- **Safe**: Creates backups of existing files
- **Flexible**: Initialize specific tools or all at once
## Tool Selection Guide

View File

@@ -468,9 +468,9 @@ Summary: [file references, changes, next steps]
**Execution Log Destination**:
- **IF** active workflow session exists:
- Execution log: `.workflow/WFS-[id]/.chat/codex-execute-[timestamp].md`
- Task summaries: `.workflow/WFS-[id]/.summaries/[TASK-ID]-summary.md` (if task ID)
- Task updates: `.workflow/WFS-[id]/.task/[TASK-ID].json` status updates
- Execution log: `.workflow/active/WFS-[id]/.chat/codex-execute-[timestamp].md`
- Task summaries: `.workflow/active/WFS-[id]/.summaries/[TASK-ID]-summary.md` (if task ID)
- Task updates: `.workflow/active/WFS-[id]/.task/[TASK-ID].json` status updates
- TodoWrite tracking: Embedded in execution log
- **ELSE** (no active session):
- **Recommended**: Create workflow session first (`/workflow:session:start`)
@@ -478,7 +478,7 @@ Summary: [file references, changes, next steps]
**Output Files** (during execution):
```
.workflow/WFS-[session-id]/
.workflow/active/WFS-[session-id]/
├── .chat/
│ └── codex-execute-20250105-143022.md # Full execution log with task flow
├── .summaries/
@@ -492,9 +492,9 @@ Summary: [file references, changes, next steps]
**Examples**:
- During session `WFS-auth-system`, executing multi-stage auth implementation:
- Log: `.workflow/WFS-auth-system/.chat/codex-execute-20250105-143022.md`
- Summaries: `.workflow/WFS-auth-system/.summaries/IMPL-001.{1,2,3}-summary.md`
- Task status: `.workflow/WFS-auth-system/.task/IMPL-001.json` (status: completed)
- Log: `.workflow/active/WFS-auth-system/.chat/codex-execute-20250105-143022.md`
- Summaries: `.workflow/active/WFS-auth-system/.summaries/IMPL-001.{1,2,3}-summary.md`
- Task status: `.workflow/active/WFS-auth-system/.task/IMPL-001.json` (status: completed)
- No session, ad-hoc multi-stage task:
- Log: `.workflow/.scratchpad/codex-execute-auth-refactor-20250105-143045.md`

View File

@@ -167,9 +167,9 @@ TodoWrite({
## Output Routing
- **Primary Log**: Entire multi-round discussion logged to single file:
- `.workflow/WFS-[id]/.chat/discuss-plan-[topic]-[timestamp].md`
- `.workflow/active/WFS-[id]/.chat/discuss-plan-[topic]-[timestamp].md`
- **Final Plan**: Clean final version saved upon conclusion:
- `.workflow/WFS-[id]/.summaries/plan-[topic].md`
- `.workflow/active/WFS-[id]/.summaries/plan-[topic].md`
- **Scratchpad**: If no session active:
- `.workflow/.scratchpad/discuss-plan-[topic]-[timestamp].md`

View File

@@ -1,7 +1,7 @@
---
name: execute
description: Autonomous code implementation with YOLO auto-approval using Gemini/Qwen/Codex, supports task ID or description input with automatic file pattern detection
argument-hint: "[--agent] [--tool codex|gemini|qwen] [--enhance] description or task-id"
argument-hint: "[--tool codex|gemini|qwen] [--enhance] description or task-id"
allowed-tools: SlashCommand(*), Bash(*), Task(*)
---
@@ -39,7 +39,7 @@ Auto-approves: file pattern inference, execution, **file modifications**, summar
- Input: Workflow task identifier (e.g., `IMPL-001`)
- Process: Task JSON parsing → Scope analysis → Execute
**3. Agent Mode** (`--agent` flag):
**3. Agent Mode** (default):
- Input: Description or task-id
- Process: 5-Phase Workflow → Context Discovery → Optimal Tool Selection → Execute
@@ -68,8 +68,7 @@ Use `resume --last` when current task extends/relates to previous execution. See
## Parameters
- `--agent` - Use cli-execution-agent for automated context discovery (5-phase intelligent mode)
- `--tool <codex|gemini|qwen>` - Select CLI tool (default: gemini, ignored in agent mode unless specified)
- `--tool <codex|gemini|qwen>` - Select CLI tool (default: auto-select by agent based on complexity)
- `--enhance` - Enhance input with `/enhance-prompt` first (Description Mode only)
- `<description|task-id>` - Natural language description or task identifier
- `--debug` - Verbose logging
@@ -77,102 +76,89 @@ Use `resume --last` when current task extends/relates to previous execution. See
## Workflow Integration
**Session Management**: Auto-detects `.workflow/.active-*` marker
- Active session: Save to `.workflow/WFS-[id]/.chat/execute-[timestamp].md`
**Session Management**: Auto-detects active session from `.workflow/active/` directory
- Active session: Save to `.workflow/active/WFS-[id]/.chat/execute-[timestamp].md`
- No session: Create new session or save to scratchpad
**Task Integration**: Load from `.task/[TASK-ID].json`, update status, generate summary
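A minimal sketch of the task-status update, assuming `jq` is available; the `completed_at` field and the session path are illustrative assumptions:
```bash
# Sketch: read a task spec and mark it completed (completed_at is an illustrative field)
TASK_ID="IMPL-001"
TASK_FILE=".workflow/active/WFS-auth-system/.task/${TASK_ID}.json"   # example session path

jq -r '.status' "$TASK_FILE"                                          # current status
jq '.status = "completed" | .completed_at = (now | todate)' "$TASK_FILE" \
  > "${TASK_FILE}.tmp" && mv "${TASK_FILE}.tmp" "$TASK_FILE"
```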
## Output Routing
## Execution Flow
**Execution Log Destination**:
- **IF** active workflow session exists:
- Save to `.workflow/WFS-[id]/.chat/execute-[timestamp].md`
- Update task status in `.task/[TASK-ID].json` (if task ID provided)
- Generate summary in `.workflow/WFS-[id]/.summaries/[TASK-ID]-summary.md`
- **ELSE** (no active session):
- **Option 1**: Create new workflow session for task
- **Option 2**: Save to `.workflow/.scratchpad/execute-[description]-[timestamp].md`
Uses **cli-execution-agent** (default) for automated implementation:
**Output Files** (when active session exists):
- Execution log: `.workflow/WFS-[id]/.chat/execute-[timestamp].md`
- Task summary: `.workflow/WFS-[id]/.summaries/[TASK-ID]-summary.md` (if task ID)
- Modified code: Project files per implementation
**Examples**:
- During session `WFS-auth-system`, executing `IMPL-001`:
- Log: `.workflow/WFS-auth-system/.chat/execute-20250105-143022.md`
- Summary: `.workflow/WFS-auth-system/.summaries/IMPL-001-summary.md`
- No session, ad-hoc implementation:
- Log: `.workflow/.scratchpad/execute-jwt-auth-20250105-143045.md`
## Execution Modes
### Standard Mode (Default)
```bash
# Gemini/Qwen: MODE=write with --approval-mode yolo
cd . && gemini --approval-mode yolo "
PURPOSE: [implementation goal]
TASK: [specific implementation]
MODE: write
CONTEXT: @CLAUDE.md [auto-detected files]
EXPECTED: Working implementation with code changes
RULES: [constraints] | Auto-approve all changes
"
# Codex: MODE=auto with danger-full-access
codex -C . --full-auto exec "
PURPOSE: [implementation goal]
TASK: [specific implementation]
MODE: auto
CONTEXT: [auto-detected files]
EXPECTED: Complete implementation with tests
" --skip-git-repo-check -s danger-full-access
```
### Agent Mode (`--agent` flag)
Delegate implementation to `cli-execution-agent` for intelligent execution with automated context discovery.
**Agent invocation**:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Implement with automated context discovery and optimal tool selection",
description="Autonomous code implementation with YOLO auto-approval",
prompt=`
Task: ${description_or_task_id}
Mode: execute
Tool Preference: ${tool_flag || 'auto-select'}
${enhance_flag ? 'Enhance: true' : ''}
Tool: ${tool_flag || 'auto-select'}
Enhance: ${enhance_flag}
Task-ID: ${task_id}
Agent will autonomously:
- Discover implementation files and dependencies
- Assess complexity and select optimal tool
- Execute with YOLO permissions (auto-approve)
- Generate task summary if task-id provided
Execute autonomous code implementation with full modification permissions:
1. Task Analysis:
${task_id ? '- Load task spec from .task/' + task_id + '.json' : ''}
- Parse requirements and implementation scope
- Classify complexity (simple/medium/complex)
- Extract keywords for context discovery
2. Context Discovery:
- Discover implementation files using MCP/ripgrep
- Identify existing patterns and conventions (CLAUDE.md)
- Map dependencies and integration points
- Gather related tests and documentation
- Auto-detect file patterns from keywords
3. Tool Selection & Execution:
- Complexity assessment:
* Simple/Medium → Gemini/Qwen (MODE=write, --approval-mode yolo)
* Complex → Codex (MODE=auto, --skip-git-repo-check -s danger-full-access)
- Tool preference: ${tool_flag || 'auto-select based on complexity'}
- Apply appropriate implementation template
- Execute with YOLO auto-approval (bypasses all confirmations)
4. Implementation:
- Modify/create/delete code files per requirements
- Follow existing code patterns and conventions
- Include comprehensive context in CLI command
- Ensure working implementation with proper error handling
5. Output & Documentation:
- Save execution log: .workflow/active/WFS-[id]/.chat/execute-[timestamp].md
${task_id ? '- Generate task summary: .workflow/active/WFS-[id]/.summaries/' + task_id + '-summary.md' : ''}
${task_id ? '- Update task status in .task/' + task_id + '.json' : ''}
- Document all code changes made
⚠️ YOLO Mode: All file operations auto-approved without confirmation
`
)
```
The agent handles all phases internally, including complexity-based tool selection.
**Output**: `.workflow/active/WFS-[id]/.chat/execute-[timestamp].md` + `.workflow/active/WFS-[id]/.summaries/[TASK-ID]-summary.md` (or `.scratchpad/` if no session)
## Examples
**Basic Implementation (Standard Mode)** (modifies code):
**Basic Implementation** (modifies code):
```bash
/cli:execute "implement JWT authentication with middleware"
# Executes: Creates auth middleware, updates routes, modifies config
# Agent Phase 1: Classifies intent=execute, complexity=medium, keywords=['jwt', 'auth', 'middleware']
# Agent Phase 2: Discovers auth patterns, existing middleware structure
# Agent Phase 3: Selects Gemini (medium complexity)
# Agent Phase 4: Executes with auto-approval
# Result: NEW/MODIFIED code files with JWT implementation
```
**Intelligent Implementation (Agent Mode)** (modifies code):
**Complex Implementation** (modifies code):
```bash
/cli:execute --agent "implement OAuth2 authentication with token refresh"
# Phase 1: Classifies intent=execute, complexity=complex, keywords=['oauth2', 'auth', 'token', 'refresh']
# Phase 2: MCP discovers auth patterns, existing middleware, JWT dependencies
# Phase 3: Enhances prompt with discovered patterns and best practices
# Phase 4: Selects Codex (complex task), executes with comprehensive context
# Phase 5: Saves execution log + generates implementation summary
/cli:execute "implement OAuth2 authentication with token refresh"
# Agent Phase 1: Classifies intent=execute, complexity=complex, keywords=['oauth2', 'auth', 'token', 'refresh']
# Agent Phase 2: MCP discovers auth patterns, existing middleware, JWT dependencies
# Agent Phase 3: Enhances prompt with discovered patterns and best practices
# Agent Phase 4: Selects Codex (complex task), executes with comprehensive context
# Agent Phase 5: Saves execution log + generates implementation summary
# Result: Complete OAuth2 implementation + detailed execution log
```
@@ -214,8 +200,3 @@ The agent handles all phases internally, including complexity-based tool selecti
| `/cli:analyze` | Understand code | NO | N/A |
| `/cli:chat` | Ask questions | NO | N/A |
| `/cli:execute` | **Implement** | **YES** | **YES** |
## Notes
- Command templates, YOLO mode details, and session management: see intelligent-tools-strategy.md (loaded in memory)
- **Code Modification**: This command modifies code - execution logs document changes made

View File

@@ -1,7 +1,7 @@
---
name: bug-diagnosis
description: Read-only bug root cause analysis using Gemini/Qwen/Codex with systematic diagnosis template for fix suggestions
argument-hint: "[--agent] [--tool codex|gemini|qwen] [--enhance] [--cd path] bug description"
argument-hint: "[--tool codex|gemini|qwen] [--enhance] [--cd path] bug description"
allowed-tools: SlashCommand(*), Bash(*), Task(*)
---
@@ -19,7 +19,6 @@ Systematic bug diagnosis with root cause analysis template (`~/.claude/workflows
## Parameters
- `--tool <gemini|qwen|codex>` - Tool selection (default: gemini)
- `--agent` - Use cli-execution-agent for automated context discovery
- `--enhance` - Enhance bug description with `/enhance-prompt`
- `--cd "path"` - Target directory for focused diagnosis
- `<bug-description>` (Required) - Bug description or error details
@@ -44,44 +43,48 @@ Systematic bug diagnosis with root cause analysis template (`~/.claude/workflows
## Execution Flow
### Standard Mode
1. Parse tool selection (default: gemini)
2. Optional: enhance with `/enhance-prompt`
3. Detect directory from `--cd` or auto-infer
4. Build command with template
5. Execute diagnosis (read-only)
6. Save to `.workflow/WFS-[id]/.chat/`
### Agent Mode (`--agent`)
Delegates to agent for intelligent diagnosis:
Uses **cli-execution-agent** (default) for automated bug diagnosis:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Bug root cause diagnosis",
description="Bug root cause diagnosis with fix suggestions",
prompt=`
Task: ${bug_description}
Mode: bug-diagnosis
Tool: ${tool_flag || 'auto-select'} // gemini|qwen|codex
Directory: ${cd_path || 'auto-detect'}
Tool: ${tool_flag || 'gemini'}
Directory: ${cd_path || '.'}
Enhance: ${enhance_flag}
Template: ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt
Agent responsibilities:
Execute systematic bug diagnosis and root cause analysis:
1. Context Discovery:
- Locate error traces and logs
- Find related code sections
- Identify data flow paths
- Locate error traces, stack traces, and log messages
- Find related code sections and affected modules
- Identify data flow paths leading to the bug
- Discover test cases related to bug area
- Use MCP/ripgrep for comprehensive context gathering
2. CLI Command Generation:
- Build Gemini/Qwen/Codex command
- Include diagnostic context
- Apply ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt template
2. Root Cause Analysis:
- Apply diagnostic template methodology
- Trace execution to identify failure point
- Analyze state, data, and logic causing issue
- Document potential root causes with evidence
- Assess bug severity and impact scope
3. Execution & Output:
- Execute root cause analysis
- Generate fix suggestions
- Save to .workflow/.chat/
3. CLI Command Construction:
- Tool: ${tool_flag || 'gemini'} (qwen fallback, codex for complex bugs)
- Directory: cd ${cd_path || '.'} &&
- Context: @**/* + error traces + affected code
- Mode: analysis (read-only)
- Template: analysis/01-diagnose-bug-root-cause.txt
4. Output Generation:
- Root cause diagnosis with evidence
- Fix suggestions and recommendations
- Prevention strategies
- Save to .workflow/active/WFS-[id]/.chat/bug-diagnosis-[timestamp].md (or .scratchpad/)
`
)
```
@@ -89,42 +92,5 @@ Task(
## Core Rules
- **Read-only**: Diagnoses bugs, does NOT modify code
- **Template**: Uses `~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt` for root cause analysis
- **Output**: Saves to `.workflow/WFS-[id]/.chat/`
## CLI Command Templates
**Gemini/Qwen** (default, diagnosis only):
```bash
cd [dir] && gemini -p "
PURPOSE: [goal]
TASK: Root cause analysis
MODE: analysis
CONTEXT: @**/*
EXPECTED: Diagnosis, fix plan
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt)
"
# Qwen: Replace 'gemini' with 'qwen'
```
**Codex** (diagnosis + potential fixes):
```bash
codex -C [dir] --full-auto exec "
PURPOSE: [goal]
TASK: Bug diagnosis
MODE: analysis
CONTEXT: @**/*
EXPECTED: Diagnosis, fix suggestions
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt)
" -m gpt-5 --skip-git-repo-check -s danger-full-access
```
## Output
- **With session**: `.workflow/WFS-[id]/.chat/bug-diagnosis-[timestamp].md`
- **No session**: `.workflow/.scratchpad/bug-diagnosis-[desc]-[timestamp].md`
## Notes
- Template: `~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt`
- See `intelligent-tools-strategy.md` for detailed tool usage
- **Template**: `~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt`
- **Output**: `.workflow/active/WFS-[id]/.chat/bug-diagnosis-[timestamp].md` (or `.scratchpad/` if no session)

View File

@@ -1,7 +1,7 @@
---
name: code-analysis
description: Read-only execution path tracing using Gemini/Qwen/Codex with specialized analysis template for call flow and optimization
argument-hint: "[--agent] [--tool codex|gemini|qwen] [--enhance] [--cd path] analysis target"
argument-hint: "[--tool codex|gemini|qwen] [--enhance] [--cd path] analysis target"
allowed-tools: SlashCommand(*), Bash(*), Task(*)
---
@@ -21,7 +21,6 @@ Systematic code analysis with execution path tracing template (`~/.claude/workfl
## Parameters
- `--tool <gemini|qwen|codex>` - Tool selection (default: gemini)
- `--agent` - Use cli-execution-agent for automated context discovery
- `--enhance` - Enhance analysis target with `/enhance-prompt` first
- `--cd "path"` - Target directory for focused analysis
- `<analysis-target>` (Required) - Code analysis target or question
@@ -47,91 +46,53 @@ Systematic code analysis with execution path tracing template (`~/.claude/workfl
## Execution Flow
### Standard Mode (Default)
1. Parse tool selection (default: gemini)
2. Optional: enhance analysis target with `/enhance-prompt`
3. Detect target directory from `--cd` or auto-infer
4. Build command with template
5. Execute analysis (read-only)
6. Save to `.workflow/WFS-[id]/.chat/code-analysis-[timestamp].md`
### Agent Mode (`--agent` flag)
Delegates to `cli-execution-agent` for intelligent context discovery and analysis.
## Core Rules
- **Read-only**: Analyzes code, does NOT modify files
- **Template**: Uses `~/.claude/workflows/cli-templates/prompts/analysis/01-trace-code-execution.txt` for systematic analysis
- **Output**: Saves to `.workflow/WFS-[id]/.chat/`
## CLI Command Templates
**Gemini/Qwen** (default, read-only analysis):
```bash
cd [dir] && gemini -p "
PURPOSE: [goal]
TASK: Execution path tracing
MODE: analysis
CONTEXT: @**/*
EXPECTED: Trace, call diagram
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/01-trace-code-execution.txt)
"
# Qwen: Replace 'gemini' with 'qwen'
```
**Codex** (analysis + optimization suggestions):
```bash
codex -C [dir] --full-auto exec "
PURPOSE: [goal]
TASK: Path analysis
MODE: analysis
CONTEXT: @**/*
EXPECTED: Trace, optimization
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/01-trace-code-execution.txt)
" -m gpt-5 --skip-git-repo-check -s danger-full-access
```
## Agent Execution Context
When `--agent` flag is used, delegate to agent:
Uses **cli-execution-agent** (default) for automated code analysis:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Code execution path analysis",
description="Execution path tracing and call flow analysis",
prompt=`
Task: ${analysis_target}
Mode: code-analysis
Tool: ${tool_flag || 'auto-select'} // gemini|qwen|codex
Directory: ${cd_path || 'auto-detect'}
Tool: ${tool_flag || 'gemini'}
Directory: ${cd_path || '.'}
Enhance: ${enhance_flag}
Template: ~/.claude/workflows/cli-templates/prompts/analysis/01-trace-code-execution.txt
Agent responsibilities:
Execute systematic code analysis with execution path tracing:
1. Context Discovery:
- Identify entry points and call chains
- Discover related files (MCP/ripgrep)
- Map execution flow paths
- Identify entry points and function signatures
- Trace call chains and execution flows
- Discover related files (implementations, dependencies, tests)
- Map data flow and state transformations
- Use MCP/ripgrep for comprehensive file discovery
2. CLI Command Generation:
- Build Gemini/Qwen/Codex command
- Include discovered context
- Apply ~/.claude/workflows/cli-templates/prompts/analysis/01-trace-code-execution.txt template
2. Analysis Execution:
- Apply execution tracing template
- Generate call flow diagrams (textual)
- Document execution paths and branching logic
- Identify optimization opportunities
3. Execution & Output:
- Execute analysis with selected tool
- Save to .workflow/WFS-[id]/.chat/
3. CLI Command Construction:
- Tool: ${tool_flag || 'gemini'} (qwen fallback, codex for complex analysis)
- Directory: cd ${cd_path || '.'} &&
- Context: @**/* + discovered execution context
- Mode: analysis (read-only)
- Template: analysis/01-trace-code-execution.txt
4. Output Generation:
- Execution trace documentation
- Call flow analysis with diagrams
- Performance and optimization insights
- Save to .workflow/active/WFS-[id]/.chat/code-analysis-[timestamp].md (or .scratchpad/)
`
)
```
## Output
## Core Rules
- **With session**: `.workflow/WFS-[id]/.chat/code-analysis-[timestamp].md`
- **No session**: `.workflow/.scratchpad/code-analysis-[desc]-[timestamp].md`
## Notes
- Template: `~/.claude/workflows/cli-templates/prompts/analysis/01-trace-code-execution.txt`
- See `intelligent-tools-strategy.md` for detailed tool usage
- **Read-only**: Analyzes code, does NOT modify files
- **Template**: `~/.claude/workflows/cli-templates/prompts/analysis/01-trace-code-execution.txt`
- **Output**: `.workflow/active/WFS-[id]/.chat/code-analysis-[timestamp].md` (or `.scratchpad/` if no session)

View File

@@ -0,0 +1,126 @@
---
name: document-analysis
description: Read-only technical document/paper analysis using Gemini/Qwen/Codex with systematic comprehension template for insights extraction
argument-hint: "[--tool codex|gemini|qwen] [--enhance] [--cd path] document path or topic"
allowed-tools: SlashCommand(*), Bash(*), Task(*), Read(*)
---
# CLI Mode: Document Analysis (/cli:mode:document-analysis)
## Purpose
Systematic analysis of technical documents, research papers, API documentation, and technical specifications.
**Tool Selection**:
- **gemini** (default) - Best for document comprehension and structure analysis
- **qwen** - Fallback when Gemini unavailable
- **codex** - Alternative for complex technical documents
**Key Feature**: `--cd` flag for directory-scoped document discovery
## Parameters
- `--tool <gemini|qwen|codex>` - Tool selection (default: gemini)
- `--enhance` - Enhance analysis target with `/enhance-prompt`
- `--cd "path"` - Target directory for document search
- `<document-path-or-topic>` (Required) - File path or topic description
## Tool Usage
**Gemini** (Primary):
```bash
/cli:mode:document-analysis "README.md"
/cli:mode:document-analysis --tool gemini "analyze API documentation"
```
**Qwen** (Fallback):
```bash
/cli:mode:document-analysis --tool qwen "docs/architecture.md"
```
**Codex** (Alternative):
```bash
/cli:mode:document-analysis --tool codex "research paper in docs/"
```
## Execution Flow
Uses **cli-execution-agent** for automated document analysis:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Systematic document comprehension and insights extraction",
prompt=`
Task: ${document_path_or_topic}
Mode: document-analysis
Tool: ${tool_flag || 'gemini'}
Directory: ${cd_path || '.'}
Enhance: ${enhance_flag}
Template: ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-technical-document.txt
Execute systematic document analysis:
1. Document Discovery:
- Locate target document(s) via path or topic keywords
- Identify document type (README, API docs, research paper, spec, tutorial)
- Detect document format (Markdown, PDF, plain text, reStructuredText)
- Discover related documents (references, appendices, examples)
- Use MCP/ripgrep for comprehensive file discovery
2. Pre-Analysis Planning (Required):
- Determine document structure (sections, hierarchy, flow)
- Identify key components (abstract, methodology, implementation details)
- Map dependencies and cross-references
- Assess document scope and complexity
- Plan analysis approach based on document type
3. CLI Command Construction:
- Tool: ${tool_flag || 'gemini'} (qwen fallback, codex for complex docs)
- Directory: cd ${cd_path || '.'} &&
- Context: @{document_paths} + @CLAUDE.md + related files
- Mode: analysis (read-only)
- Template: analysis/02-analyze-technical-document.txt
4. Analysis Execution:
- Apply 6-field template structure (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
- Execute multi-phase analysis protocol with pre-planning
- Perform self-critique before final output
- Generate structured report with evidence-based insights
5. Output Generation:
- Comprehensive document analysis report
- Structured insights with section references
- Critical assessment with evidence
- Actionable recommendations
- Save to .workflow/active/WFS-[id]/.chat/doc-analysis-[timestamp].md (or .scratchpad/)
`
)
```
## Core Rules
- **Read-only**: Analyzes documents, does NOT modify files
- **Evidence-based**: All claims must reference specific sections/pages
- **Pre-planning**: Requires analysis approach planning before execution
- **Precise language**: Direct, accurate wording - no persuasive embellishment
- **Output**: `.workflow/active/WFS-[id]/.chat/doc-analysis-[timestamp].md` (or `.scratchpad/` if no session)
## Document Types Supported
| Type | Focus Areas | Key Outputs |
|------|-------------|-------------|
| README | Purpose, setup, usage | Integration steps, quick-start guide |
| API Documentation | Endpoints, parameters, responses | API usage patterns, integration points |
| Research Paper | Methodology, findings, validity | Applicable techniques, implementation feasibility |
| Specification | Requirements, standards, constraints | Compliance checklist, implementation requirements |
| Tutorial | Learning path, examples, exercises | Key concepts, practical applications |
| Architecture Docs | System design, components, patterns | Design decisions, integration points, trade-offs |
## Best Practices
1. **Scope Definition**: Clearly define what aspects to analyze before starting
2. **Layered Reading**: Structure/Overview → Details → Critical Analysis → Synthesis
3. **Evidence Trail**: Track section references for all extracted information
4. **Gap Identification**: Note missing information or unclear sections explicitly
5. **Actionable Output**: Focus on insights that inform decisions or actions

View File

@@ -1,7 +1,7 @@
---
name: plan
description: Read-only architecture planning using Gemini/Qwen/Codex with strategic planning template for modification plans and impact analysis
argument-hint: "[--agent] [--tool codex|gemini|qwen] [--enhance] [--cd path] topic"
argument-hint: "[--tool codex|gemini|qwen] [--enhance] [--cd path] topic"
allowed-tools: SlashCommand(*), Bash(*), Task(*)
---
@@ -19,7 +19,6 @@ Strategic software architecture planning template (`~/.claude/workflows/cli-temp
## Parameters
- `--tool <gemini|qwen|codex>` - Tool selection (default: gemini)
- `--agent` - Use cli-execution-agent for automated context discovery
- `--enhance` - Enhance task with `/enhance-prompt`
- `--cd "path"` - Target directory for focused planning
- `<planning-task>` (Required) - Architecture planning task or modification requirements
@@ -43,87 +42,52 @@ Strategic software architecture planning template (`~/.claude/workflows/cli-temp
## Execution Flow
### Standard Mode
1. Parse tool selection (default: gemini)
2. Optional: enhance with `/enhance-prompt`
3. Detect directory from `--cd` or auto-infer
4. Build command with template
5. Execute planning (read-only, no code generation)
6. Save to `.workflow/WFS-[id]/.chat/`
### Agent Mode (`--agent`)
Delegates to agent for intelligent planning:
Uses **cli-execution-agent** (default) for automated planning:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Architecture modification planning",
description="Architecture planning with impact analysis",
prompt=`
Task: ${planning_task}
Mode: architecture-planning
Tool: ${tool_flag || 'auto-select'} // gemini|qwen|codex
Directory: ${cd_path || 'auto-detect'}
Mode: plan
Tool: ${tool_flag || 'gemini'}
Directory: ${cd_path || '.'}
Enhance: ${enhance_flag}
Template: ~/.claude/workflows/cli-templates/prompts/planning/01-plan-architecture-design.txt
Agent responsibilities:
Execute strategic architecture planning:
1. Context Discovery:
- Analyze current architecture
- Identify affected components
- Map dependencies and impacts
- Analyze current architecture structure
- Identify affected components and modules
- Map dependencies and integration points
- Assess modification impacts (scope, complexity, risks)
2. CLI Command Generation:
- Build Gemini/Qwen/Codex command
- Include architecture context
- Apply ~/.claude/workflows/cli-templates/prompts/planning/01-plan-architecture-design.txt template
2. Planning Analysis:
- Apply strategic planning template
- Generate modification plan with phases
- Document architectural decisions and rationale
- Identify potential conflicts and mitigation strategies
3. Execution & Output:
- Execute strategic planning
- Generate modification plan
- Save to .workflow/.chat/
3. CLI Command Construction:
- Tool: ${tool_flag || 'gemini'} (qwen fallback, codex for implementation guidance)
- Directory: cd ${cd_path || '.'} &&
- Context: @**/* (full architecture context)
- Mode: analysis (read-only, no code generation)
- Template: planning/01-plan-architecture-design.txt
4. Output Generation:
- Strategic modification plan
- Impact analysis and risk assessment
- Implementation roadmap
- Save to .workflow/active/WFS-[id]/.chat/plan-[timestamp].md (or .scratchpad/)
`
)
```
## Core Rules
- **Planning only**: Creates modification plans, does NOT generate code
- **Template**: Uses `~/.claude/workflows/cli-templates/prompts/planning/01-plan-architecture-design.txt` for strategic planning
- **Output**: Saves to `.workflow/WFS-[id]/.chat/`
## CLI Command Templates
**Gemini/Qwen** (default, planning only):
```bash
cd [dir] && gemini -p "
PURPOSE: [goal]
TASK: Architecture planning
MODE: analysis
CONTEXT: @**/*
EXPECTED: Modification plan, impact analysis
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/01-plan-architecture-design.txt)
"
# Qwen: Replace 'gemini' with 'qwen'
```
**Codex** (planning + implementation guidance):
```bash
codex -C [dir] --full-auto exec "
PURPOSE: [goal]
TASK: Architecture planning
MODE: analysis
CONTEXT: @**/*
EXPECTED: Plan, implementation roadmap
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/01-plan-architecture-design.txt)
" -m gpt-5 --skip-git-repo-check -s danger-full-access
```
## Output
- **With session**: `.workflow/WFS-[id]/.chat/plan-[timestamp].md`
- **No session**: `.workflow/.scratchpad/plan-[desc]-[timestamp].md`
## Notes
- Template: `~/.claude/workflows/cli-templates/prompts/planning/01-plan-architecture-design.txt`
- See `intelligent-tools-strategy.md` for detailed tool usage
- **Read-only**: Creates modification plans, does NOT generate code
- **Template**: `~/.claude/workflows/cli-templates/prompts/planning/01-plan-architecture-design.txt`
- **Output**: `.workflow/active/WFS-[id]/.chat/plan-[timestamp].md` (or `.scratchpad/` if no session)

View File

@@ -1,37 +1,22 @@
---
name: enhance-prompt
description: Enhanced prompt transformation using session memory and codebase analysis with --enhance flag detection
description: Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection
argument-hint: "user input to enhance"
---
## Overview
Systematically enhances user prompts by combining session memory context with codebase patterns, translating ambiguous requests into actionable specifications.
Systematically enhances user prompts by leveraging session memory context and intent analysis, translating ambiguous requests into actionable specifications.
## Core Protocol
**Enhancement Pipeline:**
`Intent Translation` → `Context Integration` → `Gemini Analysis (if needed)` → `Structured Output`
`Intent Translation` → `Context Integration` → `Structured Output`
**Context Sources:**
- Session memory (conversation history, previous analysis)
- Codebase patterns (via Gemini when triggered)
- Implicit technical requirements
## Gemini Trigger Logic
```pseudo
FUNCTION should_use_gemini(user_prompt):
critical_keywords = ["refactor", "migrate", "redesign", "auth", "payment", "security"]
RETURN (
prompt_affects_multiple_modules(user_prompt, threshold=3) OR
any_keyword_in_prompt(critical_keywords, user_prompt)
)
END
```
**Gemini Integration:** ~/.claude/workflows/intelligent-tools-strategy.md
- User intent patterns
## Enhancement Rules
@@ -47,22 +32,18 @@ END
### Context Integration Strategy
**Session Memory First:**
**Session Memory:**
- Reference recent conversation context
- Reuse previously identified patterns
- Build on established understanding
**Codebase Analysis (via Gemini):**
- Only when complexity requires it
- Focus on integration points
- Identify existing patterns
- Infer technical requirements from discussion
**Example:**
```bash
# User: "add login"
# Session Memory: Previous auth discussion, JWT mentioned
# Inferred: JWT-based auth, integrate with existing session management
# Gemini (if multi-module): Analyze AuthService patterns, middleware structure
# Action: Implement JWT authentication with session persistence
```
## Output Structure
@@ -76,7 +57,7 @@ ATTENTION: [Critical constraints]
### Output Examples
**Simple (no Gemini):**
**Example 1:**
```bash
# Input: "fix login button"
INTENT: Debug non-functional login button
@@ -85,28 +66,28 @@ ACTION: Check event binding → verify state updates → test auth flow
ATTENTION: Preserve existing OAuth integration
```
**Complex (with Gemini):**
**Example 2:**
```bash
# Input: "refactor payment code"
INTENT: Restructure payment module for maintainability
CONTEXT: Session memory - PCI compliance requirements
Gemini - PaymentService → StripeAdapter pattern identified
ACTION: Extract reusable validators → isolate payment gateway logic
CONTEXT: Session memory - PCI compliance requirements, Stripe integration patterns
ACTION: Extract reusable validators → isolate payment gateway logic → maintain adapter pattern
ATTENTION: Zero behavior change, maintain PCI compliance, full test coverage
```
## Automatic Triggers
## Enhancement Triggers
- Ambiguous language: "fix", "improve", "clean up"
- Multi-module impact (>3 modules)
- Vague requests requiring clarification
- Complex technical requirements
- Architecture changes
- Critical systems: auth, payment, security
- Complex refactoring
- Multi-step refactoring
## Key Principles
1. **Memory First**: Leverage session context before analysis
2. **Minimal Gemini**: Only when complexity demands it
3. **Context Reuse**: Build on previous understanding
4. **Clear Output**: Structured, actionable specifications
1. **Session Memory First**: Leverage conversation context and established understanding
2. **Context Reuse**: Build on previous discussions and decisions
3. **Clear Output**: Structured, actionable specifications
4. **Intent Clarification**: Transform vague requests into specific technical goals
5. **Avoid Duplication**: Reference existing context, don't repeat

View File

@@ -0,0 +1,745 @@
---
name: code-map-memory
description: "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)"
argument-hint: "\"feature-keyword\" [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Code Flow Mapping Generator
## Overview
**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates code flow analysis to specialized cli-explore-agent. Orchestrator transforms agent's JSON analysis into Mermaid documentation.
**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase.
**Execution Paths**:
- **Full Path**: All 3 phases (no existing codemap OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 3 (existing codemap found AND no `--regenerate` flag)
- **Phase 3 Always Executes**: SKILL index is always generated or updated
**Agent Responsibility** (cli-explore-agent):
- Deep code flow analysis using dual-source strategy (Bash + Gemini CLI)
- Returns structured JSON with architecture, functions, data flow, conditionals, patterns
- NO file writing - analysis only
**Orchestrator Responsibility**:
- Provides feature keyword and analysis scope to agent
- Transforms agent's JSON into Mermaid-enriched markdown documentation
- Writes all files (5 docs + metadata.json + SKILL.md)
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **Feature-Specific SKILL**: Each feature creates independent `.claude/skills/codemap-{feature}/` package
3. **Specialized Agent**: Phase 2a uses cli-explore-agent for professional code analysis (Deep Scan mode)
4. **Orchestrator Documentation**: Phase 2b transforms agent JSON into Mermaid markdown files
5. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
6. **No User Prompts**: Never ask user questions or wait for input between phases
7. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
8. **Multi-Level Detail**: Generate 4 levels: architecture → function → data → conditional
---
## 3-Phase Execution
### Phase 1: Parse Feature Keyword & Check Existing
**Goal**: Normalize feature keyword, check existing codemap, prepare for analysis
**Step 1: Parse Feature Keyword**
```bash
# Get feature keyword from argument
FEATURE_KEYWORD="$1"
# Normalize: lowercase, spaces to hyphens
normalized_feature=$(echo "$FEATURE_KEYWORD" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | tr '_' '-')
# Example: "User Authentication" → "user-authentication"
# Example: "支付处理" → "支付处理" (keep non-ASCII)
```
**Step 2: Set Tool Preference**
```bash
# Default to gemini unless --tool specified
TOOL="${tool_flag:-gemini}"
```
**Step 3: Check Existing Codemap**
```bash
# Define codemap directory
CODEMAP_DIR=".claude/skills/codemap-${normalized_feature}"
# Check if codemap exists
bash(test -d "$CODEMAP_DIR" && echo "exists" || echo "not_exists")
# Count existing files
bash(find "$CODEMAP_DIR" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Step 4: Skip Decision**
```javascript
if (existing_files > 0 && !regenerate_flag) {
SKIP_GENERATION = true
message = "Codemap already exists, skipping Phase 2. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
bash(rm -rf "$CODEMAP_DIR")
SKIP_GENERATION = false
message = "Regenerating codemap from scratch."
} else {
SKIP_GENERATION = false
message = "No existing codemap found, generating new code flow analysis."
}
```
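Since the other Phase 1 steps are shell commands, the same skip decision can be sketched in bash; variable values are assumed to come from the earlier steps:
```bash
# Sketch: the same skip decision in shell form (existing_files / regenerate_flag come from earlier steps)
if [ "${existing_files:-0}" -gt 0 ] && [ "${regenerate_flag:-false}" != "true" ]; then
  SKIP_GENERATION=true
  echo "Codemap already exists, skipping Phase 2. Use --regenerate to force regeneration."
elif [ "${regenerate_flag:-false}" = "true" ]; then
  rm -rf "$CODEMAP_DIR"
  SKIP_GENERATION=false
  echo "Regenerating codemap from scratch."
else
  SKIP_GENERATION=false
  echo "No existing codemap found, generating new code flow analysis."
fi
```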
**Output Variables**:
- `FEATURE_KEYWORD`: Original feature keyword
- `normalized_feature`: Normalized feature name for directory
- `CODEMAP_DIR`: `.claude/skills/codemap-{feature}`
- `TOOL`: CLI tool to use (gemini or qwen)
- `SKIP_GENERATION`: Boolean - whether to skip Phase 2
**TodoWrite**:
- If skipping: Mark phase 1 completed, phase 2 completed, phase 3 in_progress
- If not skipping: Mark phase 1 completed, phase 2 in_progress
---
### Phase 2: Code Flow Analysis & Documentation Generation
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Use cli-explore-agent for professional code analysis, then orchestrator generates Mermaid documentation
**Architecture**: Phase 2a (Agent Analysis) → Phase 2b (Orchestrator Documentation)
---
#### Phase 2a: cli-explore-agent Analysis
**Purpose**: Leverage specialized cli-explore-agent for deep code flow analysis
**Agent Task Specification**:
```
Task(
subagent_type: "cli-explore-agent",
description: "Analyze code flow: {FEATURE_KEYWORD}",
prompt: "
Perform Deep Scan analysis for feature: {FEATURE_KEYWORD}
**Analysis Mode**: deep-scan (Dual-source: Bash structural scan + Gemini semantic analysis)
**Analysis Objectives**:
1. **Module Architecture**: Identify high-level module organization, interactions, and entry points
2. **Function Call Chains**: Trace execution paths, call sequences, and parameter flows
3. **Data Transformations**: Map data structure changes and transformation stages
4. **Conditional Paths**: Document decision trees, branches, and error handling strategies
5. **Design Patterns**: Discover architectural patterns and extract design intent
**Scope**:
- Feature: {FEATURE_KEYWORD}
- CLI Tool: {TOOL} (gemini-2.5-pro or qwen coder-model)
- File Discovery: MCP Code Index (preferred) + rg fallback
- Target: 5-15 most relevant files
**Expected Output Format**:
Return comprehensive analysis as structured JSON:
{
\"feature\": \"{FEATURE_KEYWORD}\",
\"analysis_metadata\": {
\"tool_used\": \"gemini|qwen\",
\"timestamp\": \"ISO_TIMESTAMP\",
\"analysis_mode\": \"deep-scan\"
},
\"files_analyzed\": [
{\"file\": \"path/to/file.ts\", \"relevance\": \"high|medium|low\", \"role\": \"brief description\"}
],
\"architecture\": {
\"overview\": \"High-level description\",
\"modules\": [
{\"name\": \"ModuleName\", \"file\": \"file:line\", \"responsibility\": \"description\", \"dependencies\": [...]}
],
\"interactions\": [
{\"from\": \"ModuleA\", \"to\": \"ModuleB\", \"type\": \"import|call|data-flow\", \"description\": \"...\"}
],
\"entry_points\": [
{\"function\": \"main\", \"file\": \"file:line\", \"description\": \"...\"}
]
},
\"function_calls\": {
\"call_chains\": [
{
\"chain_id\": 1,
\"description\": \"User authentication flow\",
\"sequence\": [
{\"function\": \"login\", \"file\": \"file:line\", \"calls\": [\"validateCredentials\", \"createSession\"]}
]
}
],
\"sequences\": [
{\"from\": \"Client\", \"to\": \"AuthService\", \"method\": \"login(username, password)\", \"returns\": \"Session\"}
]
},
\"data_flow\": {
\"structures\": [
{\"name\": \"UserData\", \"stage\": \"input\", \"shape\": {\"username\": \"string\", \"password\": \"string\"}}
],
\"transformations\": [
{\"from\": \"RawInput\", \"to\": \"ValidatedData\", \"transformer\": \"validateUser\", \"file\": \"file:line\"}
]
},
\"conditional_logic\": {
\"branches\": [
{\"condition\": \"isAuthenticated\", \"file\": \"file:line\", \"true_path\": \"...\", \"false_path\": \"...\"}
],
\"error_handling\": [
{\"error_type\": \"AuthenticationError\", \"handler\": \"handleAuthError\", \"file\": \"file:line\", \"recovery\": \"retry|fail\"}
]
},
\"design_patterns\": [
{\"pattern\": \"Repository Pattern\", \"location\": \"src/repositories\", \"description\": \"...\"}
],
\"recommendations\": [
\"Consider extracting authentication logic into separate module\",
\"Add error recovery for network failures\"
]
}
**Critical Requirements**:
- Use Deep Scan mode: Bash (Phase 1 - precise locations) + Gemini CLI (Phase 2 - semantic understanding) + Synthesis (Phase 3 - merge with attribution)
- Focus exclusively on {FEATURE_KEYWORD} feature flow
- Include file:line references for ALL findings
- Extract design intent from code structure and comments
- NO FILE WRITING - return JSON analysis only
- Handle tool failures gracefully (Gemini → Qwen fallback, MCP → rg fallback)
"
)
```
**Agent Output**: JSON analysis result with architecture, functions, data flow, conditionals, and patterns
---
#### Phase 2b: Orchestrator Documentation Generation
**Purpose**: Transform cli-explore-agent JSON into Mermaid-enriched documentation
**Input**: Agent's JSON analysis result
**Process**:
1. **Parse Agent Analysis**:
```javascript
const analysis = JSON.parse(agentResult)
const { feature, files_analyzed, architecture, function_calls, data_flow, conditional_logic, design_patterns } = analysis
```
2. **Generate Mermaid Diagrams from Structured Data**:
**a) architecture-flow.md** (~3K tokens):
```javascript
// Convert architecture.modules + architecture.interactions → Mermaid graph TD
const architectureMermaid = `
graph TD
${architecture.modules.map(m => ` ${m.name}[${m.name}]`).join('\n')}
${architecture.interactions.map(i => ` ${i.from} -->|${i.type}| ${i.to}`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/architecture-flow.md`,
content: `---
feature: ${feature}
level: architecture
detail: high-level module interactions
---
# Architecture Flow: ${feature}
## Overview
${architecture.overview}
## Module Architecture
${architecture.modules.map(m => `### ${m.name}\n- **File**: ${m.file}\n- **Role**: ${m.responsibility}\n- **Dependencies**: ${m.dependencies.join(', ')}`).join('\n\n')}
## Flow Diagram
\`\`\`mermaid
${architectureMermaid}
\`\`\`
## Key Interactions
${architecture.interactions.map(i => `- **${i.from} → ${i.to}**: ${i.description}`).join('\n')}
## Entry Points
${architecture.entry_points.map(e => `- **${e.function}** (${e.file}): ${e.description}`).join('\n')}
`
})
```
**b) function-calls.md** (~5K tokens):
```javascript
// Convert function_calls.sequences → Mermaid sequenceDiagram
const sequenceMermaid = `
sequenceDiagram
${function_calls.sequences.map(s => ` ${s.from}->>${s.to}: ${s.method}`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/function-calls.md`,
content: `---
feature: ${feature}
level: function
detail: function-level call sequences
---
# Function Call Chains: ${feature}
## Call Sequence Diagram
\`\`\`mermaid
${sequenceMermaid}
\`\`\`
## Detailed Call Chains
${function_calls.call_chains.map(chain => `
### Chain ${chain.chain_id}: ${chain.description}
${chain.sequence.map(fn => `- **${fn.function}** (${fn.file})\n - Calls: ${fn.calls.join(', ')}`).join('\n')}
`).join('\n')}
## Parameters & Returns
${function_calls.sequences.map(s => `- **${s.method}** → Returns: ${s.returns || 'void'}`).join('\n')}
`
})
```
**c) data-flow.md** (~4K tokens):
```javascript
// Convert data_flow.transformations → Mermaid flowchart LR
const dataFlowMermaid = `
flowchart LR
${data_flow.transformations.map((t, i) => ` Stage${i}[${t.from}] -->|${t.transformer}| Stage${i+1}[${t.to}]`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/data-flow.md`,
content: `---
feature: ${feature}
level: data
detail: data structure transformations
---
# Data Flow: ${feature}
## Data Transformation Diagram
\`\`\`mermaid
${dataFlowMermaid}
\`\`\`
## Data Structures
${data_flow.structures.map(s => `### ${s.name} (${s.stage})\n\`\`\`json\n${JSON.stringify(s.shape, null, 2)}\n\`\`\``).join('\n\n')}
## Transformations
${data_flow.transformations.map(t => `- **${t.from} → ${t.to}** via \`${t.transformer}\` (${t.file})`).join('\n')}
`
})
```
**d) conditional-paths.md** (~4K tokens):
```javascript
// Convert conditional_logic.branches → Mermaid flowchart TD
const conditionalMermaid = `
flowchart TD
Start[Entry Point]
${conditional_logic.branches.map((b, i) => `
Start --> Check${i}{${b.condition}}
Check${i} -->|Yes| Path${i}A[${b.true_path}]
Check${i} -->|No| Path${i}B[${b.false_path}]
`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/conditional-paths.md`,
content: `---
feature: ${feature}
level: conditional
detail: decision trees and error paths
---
# Conditional Paths: ${feature}
## Decision Tree
\`\`\`mermaid
${conditionalMermaid}
\`\`\`
## Branch Conditions
${conditional_logic.branches.map(b => `- **${b.condition}** (${b.file})\n - True: ${b.true_path}\n - False: ${b.false_path}`).join('\n')}
## Error Handling
${conditional_logic.error_handling.map(e => `- **${e.error_type}**: Handler \`${e.handler}\` (${e.file}) - Recovery: ${e.recovery}`).join('\n')}
`
})
```
**e) complete-flow.md** (~8K tokens):
```javascript
// Integrate all Mermaid diagrams
Write({
file_path: `${CODEMAP_DIR}/complete-flow.md`,
content: `---
feature: ${feature}
level: complete
detail: integrated multi-level view
---
# Complete Flow: ${feature}
## Integrated Flow Diagram
\`\`\`mermaid
graph TB
subgraph Architecture
${architecture.modules.map(m => ` ${m.name}[${m.name}]`).join('\n')}
end
subgraph "Function Calls"
${function_calls.call_chains[0]?.sequence.map(fn => ` ${fn.function}`).join('\n') || ''}
end
subgraph "Data Flow"
${data_flow.structures.map(s => ` ${s.name}[${s.name}]`).join('\n')}
end
\`\`\`
## Complete Trace
[Comprehensive end-to-end documentation combining all analysis layers]
## Design Patterns Identified
${design_patterns.map(p => `- **${p.pattern}** in ${p.location}: ${p.description}`).join('\n')}
## Recommendations
${analysis.recommendations.map(r => `- ${r}`).join('\n')}
## Cross-References
- [Architecture Flow](./architecture-flow.md) - High-level module structure
- [Function Calls](./function-calls.md) - Detailed call chains
- [Data Flow](./data-flow.md) - Data transformation stages
- [Conditional Paths](./conditional-paths.md) - Decision trees and error handling
`
})
```
3. **Write metadata.json**:
```javascript
Write({
file_path: `${CODEMAP_DIR}/metadata.json`,
content: JSON.stringify({
feature: feature,
normalized_name: normalized_feature,
generated_at: new Date().toISOString(),
tool_used: analysis.analysis_metadata.tool_used,
files_analyzed: files_analyzed.map(f => f.file),
analysis_summary: {
total_files: files_analyzed.length,
modules_traced: architecture.modules.length,
functions_traced: function_calls.call_chains.reduce((sum, c) => sum + c.sequence.length, 0),
patterns_discovered: design_patterns.length
}
}, null, 2)
})
```
4. **Report Phase 2 Completion**:
```
Phase 2 Complete: Code flow analysis and documentation generated
- Agent Analysis: cli-explore-agent with {TOOL}
- Files Analyzed: {count}
- Documentation Generated: 5 markdown files + metadata.json
- Location: {CODEMAP_DIR}
```
**Completion Criteria**:
- cli-explore-agent task completed successfully with JSON result
- 5 documentation files written with valid Mermaid diagrams
- metadata.json written with analysis summary
- All files properly formatted and cross-referenced
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
---
### Phase 3: Generate SKILL.md Index
**Note**: This phase **ALWAYS executes** - generates or updates the SKILL index.
**Goal**: Read generated flow documentation and create SKILL.md index with progressive loading
**Steps**:
1. **Verify Generated Files**:
```bash
bash(find "{CODEMAP_DIR}" -name "*.md" -type f | sort)
```
2. **Read metadata.json**:
```javascript
Read({CODEMAP_DIR}/metadata.json)
// Extract: feature, normalized_name, files_analyzed, analysis_summary
```
3. **Read File Headers** (optional, first 30 lines):
```javascript
Read({CODEMAP_DIR}/architecture-flow.md, limit: 30)
Read({CODEMAP_DIR}/function-calls.md, limit: 30)
// Extract overview and diagram counts
```
4. **Generate SKILL.md Index**:
Template structure:
```yaml
---
name: codemap-{normalized_feature}
description: Code flow mapping for {FEATURE_KEYWORD} feature (located at {project_path}). Load this SKILL when analyzing, tracing, or understanding {FEATURE_KEYWORD} execution flow, especially when no relevant context exists in memory.
version: 1.0.0
generated_at: {ISO_TIMESTAMP}
---
# Code Flow Map: {FEATURE_KEYWORD}
## Feature: `{FEATURE_KEYWORD}`
**Analysis Date**: {DATE}
**Tool Used**: {TOOL}
**Files Analyzed**: {COUNT}
## Progressive Loading
### Level 0: Quick Overview (~2K tokens)
- [Architecture Flow](./architecture-flow.md) - High-level module interactions
### Level 1: Core Flows (~10K tokens)
- [Architecture Flow](./architecture-flow.md) - Module architecture
- [Function Calls](./function-calls.md) - Function call chains
### Level 2: Complete Analysis (~20K tokens)
- [Architecture Flow](./architecture-flow.md)
- [Function Calls](./function-calls.md)
- [Data Flow](./data-flow.md) - Data transformations
### Level 3: Deep Dive (~30K tokens)
- [Architecture Flow](./architecture-flow.md)
- [Function Calls](./function-calls.md)
- [Data Flow](./data-flow.md)
- [Conditional Paths](./conditional-paths.md) - Branches and error handling
- [Complete Flow](./complete-flow.md) - Integrated comprehensive view
## Usage
Load this SKILL package when:
- Analyzing {FEATURE_KEYWORD} implementation
- Tracing execution flow for debugging
- Understanding code dependencies
- Planning refactoring or enhancements
## Analysis Summary
- **Modules Traced**: {modules_traced}
- **Functions Traced**: {functions_traced}
- **Files Analyzed**: {total_files}
## Mermaid Diagrams Included
- Architecture flow diagram (graph TD)
- Function call sequence diagram (sequenceDiagram)
- Data transformation flowchart (flowchart LR)
- Conditional decision tree (flowchart TD)
- Complete integrated diagram (graph TB)
```
5. **Write SKILL.md**:
```javascript
Write({
file_path: `{CODEMAP_DIR}/SKILL.md`,
content: generatedIndexMarkdown
})
```
**Completion Criteria**:
- SKILL.md index written
- All documentation files verified
- Progressive loading levels (0-3) properly structured
- Mermaid diagram references included
**TodoWrite**: Mark phase 3 completed
**Final Report**:
```
Code Flow Mapping Complete
Feature: {FEATURE_KEYWORD}
Location: .claude/skills/codemap-{normalized_feature}/
Files Generated:
- SKILL.md (index)
- architecture-flow.md (with Mermaid diagram)
- function-calls.md (with Mermaid sequence diagram)
- data-flow.md (with Mermaid flowchart)
- conditional-paths.md (with Mermaid decision tree)
- complete-flow.md (with integrated Mermaid diagram)
- metadata.json
Analysis:
- Files analyzed: {count}
- Modules traced: {count}
- Functions traced: {count}
Usage: Skill(command: "codemap-{normalized_feature}")
```
---
## Implementation Details
### TodoWrite Patterns
**Initialization** (Before Phase 1):
```javascript
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "in_progress", "activeForm": "Parsing feature keyword"},
{"content": "Agent analyzes code flow and generates files", "status": "pending", "activeForm": "Analyzing code flow"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
]})
```
**Full Path** (SKIP_GENERATION = false):
```javascript
// After Phase 1
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "in_progress", ...},
{"content": "Generate SKILL.md index", "status": "pending", ...}
]})
// After Phase 2
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
// After Phase 3
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "completed", ...}
]})
```
**Skip Path** (SKIP_GENERATION = true):
```javascript
// After Phase 1 (skip Phase 2)
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...}, // Skipped
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
```
### Execution Flow
**Full Path**:
```
User → TodoWrite Init → Phase 1 (parse) → Phase 2 (agent analyzes) → Phase 3 (write index) → Report
```
**Skip Path**:
```
User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
```
### Error Handling
**Phase 1 Errors**:
- Empty feature keyword: Report error, ask user to provide feature description
- Invalid characters: Normalize and continue
**Phase 2 Errors (Agent)**:
- Agent task fails: Retry once, report if it fails again (see the retry sketch below)
- No files discovered: Warn user, ask for more specific feature keyword
- CLI failures: Agent handles internally with retries
- Invalid Mermaid syntax: Agent validates before writing
**Phase 3 Errors**:
- Write failures: Report which files failed
- Missing files: Note in SKILL.md, suggest regeneration
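A minimal sketch of the Phase 2 retry-once behavior described above, assuming the orchestrator can re-issue the same `Task` call and check whether parseable JSON came back (the helper name and call shape are illustrative, not part of the command):
```javascript
// Sketch (assumption): retry the cli-explore-agent Task once before reporting failure
async function runExploreAgentWithRetry(taskArgs, maxAttempts = 2) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = await Task(taskArgs);   // same prompt on both attempts
    try {
      return JSON.parse(result);           // valid JSON analysis: done
    } catch (e) {
      if (attempt === maxAttempts) {
        report(`❌ cli-explore-agent failed after ${maxAttempts} attempts`);
        throw e;
      }
      report(`⚠️ Agent attempt ${attempt} returned invalid JSON, retrying once...`);
    }
  }
}
```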
---
## Parameters
```bash
/memory:code-map-memory "feature-keyword" [--regenerate] [--tool <gemini|qwen>]
```
**Arguments**:
- **"feature-keyword"**: Feature or flow to analyze (required)
- Examples: `"user authentication"`, `"payment processing"`, `"数据导入流程"`
- Can be English, Chinese, or mixed
- Spaces and underscores are normalized to hyphens (see the sketch after this list)
- **--regenerate**: Force regenerate existing codemap (deletes and recreates)
- **--tool**: CLI tool for analysis (default: gemini)
- `gemini`: Comprehensive flow analysis with gemini-2.5-pro
- `qwen`: Alternative with coder-model
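A minimal sketch of the keyword normalization described above; the lowercasing step is an assumption (the documented rules only cover spaces and underscores):
```javascript
// Sketch: derive {normalized_feature} from the raw feature keyword
function normalizeFeatureKeyword(keyword) {
  return keyword
    .trim()
    .toLowerCase()              // assumption: case-fold ASCII; no-op for Chinese text
    .replace(/[\s_]+/g, "-")    // spaces and underscores → hyphens
    .replace(/-{2,}/g, "-");    // collapse repeated separators
}
// normalizeFeatureKeyword("user authentication") → "user-authentication"
// normalizeFeatureKeyword("数据导入流程") → "数据导入流程" (kept as-is)
```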
---
## Examples
**Generated File Structure** (for all examples):
```
.claude/skills/codemap-{feature}/
├── SKILL.md # Index (Phase 3)
├── architecture-flow.md # Agent (Phase 2) - High-level flow
├── function-calls.md # Agent (Phase 2) - Function chains
├── data-flow.md # Agent (Phase 2) - Data transformations
├── conditional-paths.md # Agent (Phase 2) - Branches & errors
├── complete-flow.md # Agent (Phase 2) - Integrated view
└── metadata.json # Agent (Phase 2)
```
### Example 1: User Authentication Flow
```bash
/memory:code-map-memory "user authentication"
```
**Workflow**:
1. Phase 1: Normalizes to "user-authentication", checks existing codemap
2. Phase 2: Agent discovers auth-related files, executes CLI analysis, generates 5 flow docs with Mermaid
3. Phase 3: Generates SKILL.md index with progressive loading
**Output**: `.claude/skills/codemap-user-authentication/` with 6 files + metadata
### Example 2: Regenerate with Qwen
```bash
/memory:code-map-memory "payment processing" --regenerate --tool qwen
```
**Workflow**:
1. Phase 1: Deletes existing codemap due to --regenerate
2. Phase 2: Agent uses qwen with coder-model for fresh analysis
3. Phase 3: Generates updated SKILL.md
---
## Architecture
```
code-map-memory (orchestrator)
├─ Phase 1: Parse & Check (bash commands, skip decision)
├─ Phase 2: Code Analysis & Documentation (skippable)
│ ├─ Phase 2a: cli-explore-agent Analysis
│ │ └─ Deep Scan: Bash structural + Gemini semantic → JSON
│ └─ Phase 2b: Orchestrator Documentation
│ └─ Transform JSON → 5 Mermaid markdown files + metadata.json
└─ Phase 3: Write SKILL.md (index generation, always runs)
Output: .claude/skills/codemap-{feature}/
```


@@ -0,0 +1,471 @@
---
name: docs-full-cli
description: Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel
argument-hint: "[path] [--tool <gemini|qwen|codex>]"
---
# Full Documentation Generation - CLI Mode (/memory:docs-full-cli)
## Overview
Orchestrates project-wide documentation generation using CLI-based execution with batched agents and automatic tool fallback.
**Parameters**:
- `path`: Target directory (default: current directory)
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)
**Execution Flow**: Discovery → Plan Presentation → Execution → Verification
## 3-Layer Architecture & Auto-Strategy Selection
### Layer Definition & Strategy Assignment
| Layer | Depth | Strategy | Purpose | Context Pattern |
|-------|-------|----------|---------|----------------|
| **Layer 3** (Deepest) | ≥3 | `full` | Generate docs for all subdirectories with code | `@**/*` (all files) |
| **Layer 2** (Middle) | 1-2 | `single` | Current dir + child docs | `@*/API.md @*/README.md @*.{ts,tsx,js,...}` |
| **Layer 1** (Top) | 0 | `single` | Current dir + child docs | `@*/API.md @*/README.md @*.{ts,tsx,js,...}` |
**Generation Direction**: Layer 3 → Layer 2 → Layer 1 (bottom-up dependency flow)
**Strategy Auto-Selection**: Strategies are automatically determined by directory depth - no user configuration needed.
### Strategy Details
#### Full Strategy (Layer 3 Only)
- **Use Case**: Deepest directories with comprehensive file coverage
- **Behavior**: Generates API.md + README.md for current directory AND subdirectories containing code
- **Context**: All files in current directory tree (`@**/*`)
- **Output**: `.workflow/docs/{project_name}/{path}/API.md` + `README.md`
#### Single Strategy (Layers 1-2)
- **Use Case**: Upper layers that aggregate from existing documentation
- **Behavior**: Generates API.md + README.md only in current directory
- **Context**: Direct children docs + current directory code files
- **Output**: `.workflow/docs/{project_name}/{path}/API.md` + `README.md`
### Example Flow
```
src/auth/handlers/ (depth 3) → FULL STRATEGY
CONTEXT: @**/* (all files in handlers/ and subdirs)
GENERATES: .workflow/docs/project/src/auth/handlers/{API.md,README.md} + subdirs
src/auth/ (depth 2) → SINGLE STRATEGY
CONTEXT: @*/API.md @*/README.md @*.ts (handlers docs + current code)
GENERATES: .workflow/docs/project/src/auth/{API.md,README.md} only
src/ (depth 1) → SINGLE STRATEGY
CONTEXT: @*/API.md @*/README.md (auth docs, utils docs)
GENERATES: .workflow/docs/project/src/{API.md,README.md} only
./ (depth 0) → SINGLE STRATEGY
CONTEXT: @*/API.md @*/README.md (src docs, tests docs)
GENERATES: .workflow/docs/project/{API.md,README.md} only
```
## Core Execution Rules
1. **Analyze First**: Module discovery + folder classification before generation
2. **Wait for Approval**: Present plan, no execution without user confirmation
3. **Execution Strategy**:
- **<20 modules**: Direct parallel execution (max 4 concurrent per layer)
- **≥20 modules**: Agent batch processing (4 modules/agent, 73% overhead reduction)
4. **Tool Fallback**: Auto-retry with fallback tools on failure
5. **Layer Sequential**: Process layers 3→2→1 (bottom-up), parallel batches within layer
6. **Safety Check**: Verify only docs files modified in .workflow/docs/
7. **Layer-based Grouping**: Group modules by LAYER (not depth) for execution (see the grouping sketch below)
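A minimal sketch of the grouping and batching behind rules 5-7, assuming each module record carries the `depth` parsed in Phase 1 (helper names follow the pseudocode in Phase 3A/3B below):
```javascript
// Sketch: map depth → layer, then group and batch modules per layer
function layerOf(depth) {
  if (depth >= 3) return 3;   // Layer 3: full strategy
  if (depth >= 1) return 2;   // Layer 2: single strategy
  return 1;                   // Layer 1 (depth 0): single strategy
}

function group_by_layer(modules) {
  const groups = { 1: [], 2: [], 3: [] };
  for (const m of modules) groups[layerOf(m.depth)].push(m);
  return groups;
}

function batch_modules(modules, batch_size = 4) {
  const batches = [];
  for (let i = 0; i < modules.length; i += batch_size) {
    batches.push(modules.slice(i, i + batch_size));
  }
  return batches;
}
```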
## Tool Fallback Hierarchy
```javascript
--tool gemini [gemini, qwen, codex] // default
--tool qwen [qwen, gemini, codex]
--tool codex [codex, gemini, qwen]
```
**Trigger**: Non-zero exit code from generation script
| Tool | Best For | Fallback To |
|--------|--------------------------------|----------------|
| gemini | Documentation, patterns | qwen → codex |
| qwen | Architecture, system design | gemini → codex |
| codex | Implementation, code quality | gemini → qwen |
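A minimal sketch of `construct_tool_order`, the helper used in Phase 3A/3B to implement the hierarchy above:
```javascript
// Sketch: primary tool first, then the remaining tools as fallbacks
function construct_tool_order(primary_tool = "gemini") {
  const orders = {
    gemini: ["gemini", "qwen", "codex"],
    qwen:   ["qwen", "gemini", "codex"],
    codex:  ["codex", "gemini", "qwen"]
  };
  return orders[primary_tool] || orders.gemini;
}
```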
## Execution Phases
### Phase 1: Discovery & Analysis
```javascript
// Get project metadata
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
// Get module structure with classification
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
// OR with path parameter
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|type:<code|navigation>|...` to extract module paths, types, and count.
**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack.
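A minimal sketch of parsing that output into module records; the skip patterns shown are illustrative, and any fields beyond `depth`, `path`, and `type` are passed through untouched:
```javascript
// Sketch: parse "depth:N|path:<PATH>|type:<code|navigation>|..." lines into objects
function parse_module_list(stdout) {
  const skip = /\/(tests?|__pycache__|node_modules|dist|build|vendor)(\/|$)/; // illustrative filter
  return stdout
    .split("\n")
    .filter(line => line.includes("depth:"))
    .map(line => Object.fromEntries(
      line.split("|").map(field => {
        const [key, ...rest] = field.split(":");
        return [key, rest.join(":")];
      })
    ))
    .map(m => ({ ...m, depth: Number(m.depth) }))
    .filter(m => !skip.test(m.path));
}
```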
### Phase 2: Plan Presentation
**For <20 modules**:
```
Documentation Generation Plan:
Tool: gemini (fallback: qwen → codex)
Total: 7 modules
Execution: Direct parallel (< 20 modules threshold)
Project: myproject
Output: .workflow/docs/myproject/
Will generate docs for:
- ./core/interfaces (12 files, type: code) - depth 2 [Layer 2] - single strategy
- ./core (22 files, type: code) - depth 1 [Layer 2] - single strategy
- ./models (9 files, type: code) - depth 1 [Layer 2] - single strategy
- ./utils (12 files, type: navigation) - depth 1 [Layer 2] - single strategy
- . (5 files, type: code) - depth 0 [Layer 1] - single strategy
Documentation Strategy (Auto-Selected):
- Layer 2 (depth 1-2): API.md + README.md (current dir only, reference child docs)
- Layer 1 (depth 0): API.md + README.md (current dir only, reference child docs)
Output Structure:
- Code folders: API.md + README.md
- Navigation folders: README.md only
Auto-skipped: ./tests, __pycache__, node_modules (15 paths)
Execution order: Layer 2 → Layer 1
Estimated time: ~5-10 minutes
Confirm execution? (y/n)
```
**For ≥20 modules**:
```
Documentation Generation Plan:
Tool: gemini (fallback: qwen → codex)
Total: 31 modules
Execution: Agent batch processing (4 modules/agent)
Project: myproject
Output: .workflow/docs/myproject/
Will generate docs for:
- ./src/features/auth (12 files, type: code) - depth 3 [Layer 3] - full strategy
- ./.claude/commands/cli (6 files, type: code) - depth 3 [Layer 3] - full strategy
- ./src/utils (8 files, type: code) - depth 2 [Layer 2] - single strategy
...
Documentation Strategy (Auto-Selected):
- Layer 3 (depth ≥3): API.md + README.md (all subdirs with code)
- Layer 2 (depth 1-2): API.md + README.md (current dir only)
- Layer 1 (depth 0): API.md + README.md (current dir only)
Output Structure:
- Code folders: API.md + README.md
- Navigation folders: README.md only
Auto-skipped: ./tests, __pycache__, node_modules (15 paths)
Execution order: Layer 3 → Layer 2 → Layer 1
Agent allocation (by LAYER):
- Layer 3 (14 modules, depth ≥3): 4 agents [4, 4, 4, 2]
- Layer 2 (15 modules, depth 1-2): 4 agents [4, 4, 4, 3]
- Layer 1 (2 modules, depth 0): 1 agent [2]
Estimated time: ~15-25 minutes
Confirm execution? (y/n)
```
### Phase 3A: Direct Execution (<20 modules)
**Strategy**: Parallel execution within layer (max 4 concurrent), no agent overhead.
**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.
```javascript
let project_name = detect_project_name();
let modules_by_layer = group_by_layer(module_list);
let tool_order = construct_tool_order(primary_tool);
for (let layer of [3, 2, 1]) {
  if (modules_by_layer[layer].length === 0) continue;
  let batches = batch_modules(modules_by_layer[layer], 4);
  for (let batch of batches) {
    let parallel_tasks = batch.map(module => {
      return async () => {
        let strategy = module.depth >= 3 ? "full" : "single";
        for (let tool of tool_order) {
          let bash_result = Bash({
            command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "${strategy}" "." "${project_name}" "${tool}"`,
            run_in_background: false
          });
          if (bash_result.exit_code === 0) {
            report(`✅ ${module.path} (Layer ${layer}) docs generated with ${tool}`);
            return true;
          }
        }
        report(`❌ FAILED: ${module.path} (Layer ${layer}) failed all tools`);
        return false;
      };
    });
    await Promise.all(parallel_tasks.map(task => task()));
  }
}
```
### Phase 3B: Agent Batch Execution (≥20 modules)
**Strategy**: Batch modules into groups of 4, spawn memory-bridge agents per batch.
```javascript
// Group modules by LAYER and batch within each layer
let modules_by_layer = group_by_layer(module_list);
let tool_order = construct_tool_order(primary_tool);
let project_name = detect_project_name();
for (let layer of [3, 2, 1]) {
if (modules_by_layer[layer].length === 0) continue;
let batches = batch_modules(modules_by_layer[layer], 4);
let worker_tasks = [];
for (let batch of batches) {
worker_tasks.push(
Task(
subagent_type="memory-bridge",
description=`Generate docs for ${batch.length} modules in Layer ${layer}`,
prompt=generate_batch_worker_prompt(batch, tool_order, layer, project_name)
)
);
}
await parallel_execute(worker_tasks);
}
```
**Batch Worker Prompt Template**:
```
PURPOSE: Generate documentation for assigned modules with tool fallback
TASK: Generate API.md + README.md for assigned modules using specified strategies.
PROJECT: {{project_name}}
OUTPUT: .workflow/docs/{{project_name}}/
MODULES:
{{module_path_1}} (strategy: {{strategy_1}}, type: {{folder_type_1}})
{{module_path_2}} (strategy: {{strategy_2}}, type: {{folder_type_2}})
...
TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
EXECUTION SCRIPT: ~/.claude/scripts/generate_module_docs.sh
- Accepts strategy parameter: full | single
- Accepts folder type detection: code | navigation
- Tool execution via direct CLI commands (gemini/qwen/codex)
- Output path: .workflow/docs/{{project_name}}/{module_path}/
EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "{{strategy}}" "." "{{project_name}}" "${tool}"`,
run_in_background: false
})
exit_code=$?
if [ $exit_code -eq 0 ]; then
report "✅ {{module_path}} docs generated with $tool"
break
else
report "⚠️ {{module_path}} failed with $tool, trying next..."
continue
fi
done
2. Handle complete failure (all tools failed):
if [ $exit_code -ne 0 ]; then
report "❌ FAILED: {{module_path}} - all tools exhausted"
# Continue to next module (do not abort batch)
fi
FOLDER TYPE HANDLING:
- code: Generate API.md + README.md
- navigation: Generate README.md only
FAILURE HANDLING:
- Module-level isolation: One module's failure does not affect others
- Exit code detection: Non-zero exit code triggers next tool
- Exhaustion reporting: Log modules where all tools failed
- Batch continuation: Always process remaining modules
REPORTING FORMAT:
Per-module status:
✅ path/to/module docs generated with {tool}
⚠️ path/to/module failed with {tool}, trying next...
❌ FAILED: path/to/module - all tools exhausted
```
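A minimal sketch of `generate_batch_worker_prompt` filling that template; only the parameters come from the coordinator code above, and the exact fields and their order are a simplification:
```javascript
// Sketch: assemble the batch worker prompt from module, tool, and project details
function generate_batch_worker_prompt(batch, tool_order, layer, project_name) {
  const moduleLines = batch
    .map(m => `${m.path} (strategy: ${m.depth >= 3 ? "full" : "single"}, type: ${m.type})`)
    .join("\n");
  return [
    `PURPOSE: Generate documentation for assigned modules with tool fallback`,
    `PROJECT: ${project_name}`,
    `OUTPUT: .workflow/docs/${project_name}/`,
    `LAYER: ${layer}`,
    `MODULES:\n${moduleLines}`,
    `TOOLS (try in order): ${tool_order.join(", ")}`,
    `EXECUTION SCRIPT: ~/.claude/scripts/generate_module_docs.sh`
  ].join("\n\n");
}
```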
### Phase 4: Project-Level Documentation
**After all module documentation is generated, create project-level documentation files.**
```javascript
let project_name = detect_project_name();
let project_root = get_project_root();
// Step 1: Generate Project README
report("Generating project README.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-readme" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
report(`✅ Project README generated with ${tool}`);
break;
}
}
// Step 2: Generate Architecture & Examples
report("Generating ARCHITECTURE.md and EXAMPLES.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-architecture" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
report(`✅ Architecture docs generated with ${tool}`);
break;
}
}
// Step 3: Generate HTTP API documentation (if API routes detected)
Bash({command: 'rg "router\\.|@Get|@Post" -g "*.{ts,js,py}" 2>/dev/null && echo "API_FOUND" || echo "NO_API"', run_in_background: false});
if (bash_result.stdout.includes("API_FOUND")) {
report("Generating HTTP API documentation...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "http-api" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
report(`✅ HTTP API docs generated with ${tool}`);
break;
}
}
}
```
**Expected Output**:
```
Project-Level Documentation:
✅ README.md (project root overview)
✅ ARCHITECTURE.md (system design)
✅ EXAMPLES.md (usage examples)
✅ api/README.md (HTTP API reference) [optional]
```
### Phase 5: Verification
```javascript
// Check documentation files created
Bash({command: 'find .workflow/docs -type f -name "*.md" 2>/dev/null | wc -l', run_in_background: false});
// Display structure
Bash({command: 'tree -L 3 .workflow/docs/', run_in_background: false});
```
**Result Summary**:
```
Documentation Generation Summary:
Total: 31 | Success: 29 | Failed: 2
Tool usage: gemini: 25, qwen: 4, codex: 0
Failed: path1, path2
Generated documentation:
.workflow/docs/myproject/
├── src/
│ ├── auth/
│ │ ├── API.md
│ │ └── README.md
│ └── utils/
│ └── README.md
└── README.md
```
## Error Handling
**Batch Worker**: Tool fallback per module, batch isolation, clear status reporting
**Coordinator**: Invalid path abort, user decline handling, verification with cleanup
**Fallback Triggers**: Non-zero exit code, script timeout, unexpected output
## Output Structure
```
.workflow/docs/{project_name}/
├── src/ # Mirrors source structure
│ ├── modules/
│ │ ├── README.md # Navigation
│ │ ├── auth/
│ │ │ ├── API.md # API signatures
│ │ │ ├── README.md # Module docs
│ │ │ └── middleware/
│ │ │ ├── API.md
│ │ │ └── README.md
│ │ └── api/
│ │ ├── API.md
│ │ └── README.md
│ └── utils/
│ └── README.md
├── lib/
│ └── core/
│ ├── API.md
│ └── README.md
├── README.md # ✨ Project root overview (auto-generated)
├── ARCHITECTURE.md # ✨ System design (auto-generated)
├── EXAMPLES.md # ✨ Usage examples (auto-generated)
└── api/ # ✨ Optional (auto-generated if HTTP API detected)
└── README.md # HTTP API reference
```
## Usage Examples
```bash
# Full project documentation generation
/memory:docs-full-cli
# Target specific directory
/memory:docs-full-cli src/features/auth
/memory:docs-full-cli .claude
# Use specific tool
/memory:docs-full-cli --tool qwen
/memory:docs-full-cli src --tool qwen
```
## Key Advantages
- **Efficiency**: 30 modules → 8 agents (73% reduction from sequential)
- **Resilience**: 3-tier tool fallback per module
- **Performance**: Parallel batches within each layer (agent batches run concurrently; direct mode runs at most 4 modules at a time)
- **Observability**: Per-module tool usage, batch-level metrics
- **Automation**: Zero configuration - strategy auto-selected by directory depth
- **Path Mirroring**: Clear 1:1 mapping between source and documentation structure
## Template Reference
Templates used from `~/.claude/workflows/cli-templates/prompts/documentation/`:
- `api.txt`: Code API documentation (Part A: Code API, Part B: HTTP API)
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders with subdirectories
## Related Commands
- `/memory:docs` - Agent-based documentation planning workflow
- `/memory:docs-related-cli` - Update docs for changed modules only
- `/workflow:execute` - Execute documentation tasks (when using agent mode)


@@ -0,0 +1,386 @@
---
name: docs-related-cli
description: Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel
argument-hint: "[--tool <gemini|qwen|codex>]"
---
# Related Documentation Generation - CLI Mode (/memory:docs-related-cli)
## Overview
Orchestrates context-aware documentation generation/update for changed modules using CLI-based execution with batched agents and automatic tool fallback (gemini→qwen→codex).
**Parameters**:
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)
**Execution Flow**:
1. Change Detection → 2. Plan Presentation → 3. Batched Execution → 4. Verification
## Core Rules
1. **Detect Changes First**: Use git diff to identify affected modules
2. **Wait for Approval**: Present plan, no execution without user confirmation
3. **Execution Strategy**:
- **<15 modules**: Direct parallel execution (max 4 concurrent per depth, no agent overhead)
- **≥15 modules**: Agent batch processing (4 modules/agent, 73% overhead reduction)
4. **Tool Fallback**: Auto-retry with fallback tools on failure
5. **Depth Sequential**: Process depths N→0, parallel batches within depth (both modes)
6. **Related Mode**: Generate/update only changed modules and their parent contexts
7. **Single Strategy**: Always use `single` strategy (incremental update)
## Tool Fallback Hierarchy
```javascript
--tool gemini [gemini, qwen, codex] // default
--tool qwen [qwen, gemini, codex]
--tool codex [codex, gemini, qwen]
```
**Trigger**: Non-zero exit code from generation script
| Tool | Best For | Fallback To |
|--------|--------------------------------|----------------|
| gemini | Documentation, patterns | qwen → codex |
| qwen | Architecture, system design | gemini → codex |
| codex | Implementation, code quality | gemini → qwen |
## Phase 1: Change Detection & Analysis
```javascript
// Get project metadata
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|change:<TYPE>|type:<code|navigation>` to extract affected modules.
**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack (Node.js/Python/Go/Rust/etc).
**Fallback**: If no changes detected, use recent modules (first 10 by depth).
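A minimal sketch of that fallback, assuming the deepest-first ordering used elsewhere in this command (if the detection script orders differently, the sort should follow it):
```javascript
// Sketch (assumption): when git reports no changes, take the first 10 modules by depth
function fallback_modules(all_modules, limit = 10) {
  return [...all_modules]
    .sort((a, b) => b.depth - a.depth)   // deepest first, matching the N → 0 processing order
    .slice(0, limit);
}
```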
## Phase 2: Plan Presentation
**Present filtered plan**:
```
Related Documentation Generation Plan:
Tool: gemini (fallback: qwen → codex)
Changed: 4 modules | Execution: Direct parallel (< 15 modules threshold)
Project: myproject
Output: .workflow/docs/myproject/
Will generate/update docs for:
- ./src/api/auth (5 files, type: code) [new module]
- ./src/api (12 files, type: code) [parent of changed auth/]
- ./src (8 files, type: code) [parent context]
- . (14 files, type: code) [root level]
Documentation Strategy:
- Strategy: single (all modules - incremental update)
- Output: API.md + README.md (code folders), README.md only (navigation folders)
- Context: Current dir code + child docs
Auto-skipped (12 paths):
- Tests: ./src/api/auth.test.ts (8 paths)
- Config: tsconfig.json (3 paths)
- Other: node_modules (1 path)
Execution order (by depth):
- Depth 3 (1 module) → Depth 2 (1 module) → Depth 1 (1 module) → Depth 0 (1 module)
- Mode: direct parallel within each depth (below the 15-module agent threshold)
Estimated time: ~5-10 minutes
Confirm execution? (y/n)
```
**Decision logic**:
- User confirms "y": Proceed with execution
- User declines "n": Abort, no changes
- <15 modules: Direct execution
- ≥15 modules: Agent batch execution
## Phase 3A: Direct Execution (<15 modules)
**Strategy**: Parallel execution within depth (max 4 concurrent), no agent overhead.
**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.
```javascript
let project_name = detect_project_name();
let modules_by_depth = group_by_depth(changed_modules);
let tool_order = construct_tool_order(primary_tool);
let sorted_depths = Object.keys(modules_by_depth).map(Number).sort((a, b) => a - b);
for (let depth of sorted_depths.reverse()) { // N → 0
  let batches = batch_modules(modules_by_depth[depth], 4);
  for (let batch of batches) {
    let parallel_tasks = batch.map(module => {
      return async () => {
        for (let tool of tool_order) {
          let bash_result = Bash({
            command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "single" "." "${project_name}" "${tool}"`,
            run_in_background: false
          });
          if (bash_result.exit_code === 0) {
            report(`✅ ${module.path} docs generated with ${tool}`);
            return true;
          }
        }
        report(`❌ FAILED: ${module.path} failed all tools`);
        return false;
      };
    });
    await Promise.all(parallel_tasks.map(task => task()));
  }
}
```
## Phase 3B: Agent Batch Execution (≥15 modules)
### Batching Strategy
```javascript
// Batch modules into groups of 4
function batch_modules(modules, batch_size = 4) {
let batches = [];
for (let i = 0; i < modules.length; i += batch_size) {
batches.push(modules.slice(i, i + batch_size));
}
return batches;
}
// Examples: 10→[4,4,2] | 8→[4,4] | 3→[3]
```
### Coordinator Orchestration
```javascript
let modules_by_depth = group_by_depth(changed_modules);
let tool_order = construct_tool_order(primary_tool);
let project_name = detect_project_name();
for (let depth of sorted_depths.reverse()) { // N → 0
let batches = batch_modules(modules_by_depth[depth], 4);
let worker_tasks = [];
for (let batch of batches) {
worker_tasks.push(
Task(
subagent_type="memory-bridge",
description=`Generate docs for ${batch.length} modules at depth ${depth}`,
prompt=generate_batch_worker_prompt(batch, tool_order, depth, project_name, "related")
)
);
}
await parallel_execute(worker_tasks); // Batches run in parallel
}
```
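A minimal sketch of the `group_by_depth` and `sorted_depths` helpers used above; the names follow the pseudocode, the implementation is an assumption:
```javascript
// Sketch: bucket changed modules by depth and derive the depth list for N → 0 iteration
function group_by_depth(modules) {
  const groups = {};
  for (const m of modules) (groups[m.depth] ||= []).push(m);
  return groups;
}

function get_sorted_depths(modules_by_depth) {
  return Object.keys(modules_by_depth)
    .map(Number)
    .sort((a, b) => a - b);   // ascending; .reverse() above yields N → 0
}
```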
### Batch Worker Prompt Template
```
PURPOSE: Generate/update documentation for assigned modules with tool fallback (related mode)
TASK:
Generate documentation for the following modules based on recent changes. For each module, try tools in order until success.
PROJECT: {{project_name}}
OUTPUT: .workflow/docs/{{project_name}}/
MODULES:
{{module_path_1}} (type: {{folder_type_1}})
{{module_path_2}} (type: {{folder_type_2}})
{{module_path_3}} (type: {{folder_type_3}})
{{module_path_4}} (type: {{folder_type_4}})
TOOLS (try in order):
1. {{tool_1}}
2. {{tool_2}}
3. {{tool_3}}
EXECUTION:
For each module above:
1. Try tool 1:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_1}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_1}}", proceed to next module
→ Failure: Try tool 2
2. Try tool 2:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_2}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_2}}", proceed to next module
→ Failure: Try tool 3
3. Try tool 3:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_3}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_3}}", proceed to next module
→ Failure: Report "❌ FAILED: {{module_path}} failed all tools", proceed to next module
FOLDER TYPE HANDLING:
- code: Generate API.md + README.md
- navigation: Generate README.md only
REPORTING:
Report final summary with:
- Total processed: X modules
- Successful: Y modules
- Failed: Z modules
- Tool usage: {{tool_1}}:X, {{tool_2}}:Y, {{tool_3}}:Z
```
## Phase 4: Verification
```javascript
// Check documentation files created/updated
Bash({command: 'find .workflow/docs -type f -name "*.md" 2>/dev/null | wc -l', run_in_background: false});
// Display recent changes
Bash({command: 'find .workflow/docs -type f -name "*.md" -mmin -60 2>/dev/null', run_in_background: false});
```
**Aggregate results**:
```
Documentation Generation Summary:
Total: 4 | Success: 4 | Failed: 0
Tool usage:
- gemini: 4 modules
- qwen: 0 modules (fallback)
- codex: 0 modules
Changes:
.workflow/docs/myproject/src/api/auth/API.md (new)
.workflow/docs/myproject/src/api/auth/README.md (new)
.workflow/docs/myproject/src/api/API.md (updated)
.workflow/docs/myproject/src/api/README.md (updated)
.workflow/docs/myproject/src/API.md (updated)
.workflow/docs/myproject/src/README.md (updated)
.workflow/docs/myproject/API.md (updated)
.workflow/docs/myproject/README.md (updated)
```
## Execution Summary
**Module Count Threshold**:
- **<15 modules**: Coordinator executes Phase 3A (Direct Execution)
- **≥15 modules**: Coordinator executes Phase 3B (Agent Batch Execution)
**Agent Hierarchy** (for ≥15 modules):
- **Coordinator**: Handles batch division, spawns worker agents per depth
- **Worker Agents**: Each processes 4 modules with tool fallback (related mode)
## Error Handling
**Batch Worker**:
- Tool fallback per module (auto-retry)
- Batch isolation (failures don't propagate)
- Clear per-module status reporting
**Coordinator**:
- No changes: Use fallback (recent 10 modules)
- User decline: No execution
- Verification fail: Report incomplete modules
- Partial failures: Continue execution, report failed modules
**Fallback Triggers**:
- Non-zero exit code
- Script timeout
- Unexpected output
## Output Structure
```
.workflow/docs/{project_name}/
├── src/ # Mirrors source structure
│ ├── modules/
│ │ ├── README.md
│ │ ├── auth/
│ │ │ ├── API.md # Updated based on code changes
│ │ │ └── README.md # Updated based on code changes
│ │ └── api/
│ │ ├── API.md
│ │ └── README.md
│ └── utils/
│ └── README.md
└── README.md
```
## Usage Examples
```bash
# Daily development documentation update
/memory:docs-related-cli
# After feature work with specific tool
/memory:docs-related-cli --tool qwen
# Code quality documentation review after implementation
/memory:docs-related-cli --tool codex
```
## Key Advantages
- **Efficiency**: 30 modules → 8 agents (73% reduction)
- **Resilience**: 3-tier fallback per module
- **Performance**: Parallel batches within each depth (agent batches run concurrently; direct mode runs at most 4 modules at a time)
- **Context-aware**: Updates based on actual git changes
- **Fast**: Only affected modules, not entire project
- **Incremental**: Single strategy for focused updates
## Coordinator Checklist
- Parse `--tool` (default: gemini)
- Get project metadata (name, root)
- Detect changed modules via detect_changed_modules.sh
- **Smart filter modules** (auto-detect tech stack, skip tests/build/config/vendor)
- Cache git changes
- Apply fallback if no changes (recent 10 modules)
- Construct tool fallback order
- **Present filtered plan** with skip reasons and change types
- **Wait for y/n confirmation**
- Determine execution mode:
- **<15 modules**: Direct execution (Phase 3A)
- For each depth (N→0): Sequential module updates with tool fallback
- **≥15 modules**: Agent batch execution (Phase 3B)
- For each depth (N→0): Batch modules (4 per batch), spawn batch workers in parallel
- Wait for depth/batch completion
- Aggregate results
- Verification check (documentation files created/updated)
- Display summary + recent changes
## Comparison with Full Documentation Generation
| Aspect | Related Generation | Full Generation |
|--------|-------------------|-----------------|
| **Scope** | Changed modules only | All project modules |
| **Speed** | Fast (minutes) | Slower (10-30 min) |
| **Use case** | Daily development | Initial setup, major refactoring |
| **Strategy** | `single` (all) | `full` (L3) + `single` (L1-2) |
| **Trigger** | After commits | After setup or major changes |
| **Batching** | 4 modules/agent | 4 modules/agent |
| **Fallback** | gemini→qwen→codex | gemini→qwen→codex |
| **Direct execution threshold** | <15 modules | <20 modules |
## Template Reference
Templates used from `~/.claude/workflows/cli-templates/prompts/documentation/`:
- `api.txt`: Code API documentation
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders
## Related Commands
- `/memory:docs-full-cli` - Full project documentation generation
- `/memory:docs` - Agent-based documentation planning workflow
- `/memory:update-related` - Update CLAUDE.md for changed modules


@@ -36,7 +36,6 @@ Lightweight planner that analyzes project structure, decomposes documentation wo
| `my_app/src/modules/auth/` | `my_app` | `.workflow/docs/my_app/src/modules/auth/API.md` |
| `another_project/lib/utils/` | `another_project` | `.workflow/docs/another_project/lib/utils/API.md` |
**Benefits**: Easy to locate documentation, maintains logical organization, clear 1:1 mapping, supports any project structure.
## Parameters
@@ -44,7 +43,11 @@ Lightweight planner that analyzes project structure, decomposes documentation wo
/memory:docs [path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]
```
- **path**: Target directory (default: current directory)
- **path**: Source directory to analyze (default: current directory)
- Specifies the source code directory to be documented
- Documentation is generated in a separate `.workflow/docs/{project_name}/` directory at the workspace root, **not** within the source `path` itself
- The source path's structure is mirrored within the project-specific documentation folder
- Example: analyzing `src/modules` produces documentation at `.workflow/docs/{project_name}/src/modules/`
- **--mode**: Documentation generation mode (default: full)
- `full`: Complete documentation (modules + README + ARCHITECTURE + EXAMPLES + HTTP API)
- `partial`: Module documentation only (API.md + README.md)
@@ -63,10 +66,10 @@ Lightweight planner that analyzes project structure, decomposes documentation wo
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
# Create session directories (replace timestamp)
bash(mkdir -p .workflow/WFS-docs-{timestamp}/.{task,process,summaries} && touch .workflow/.active-WFS-docs-{timestamp})
bash(mkdir -p .workflow/active/WFS-docs-{timestamp}/.{task,process,summaries})
# Create workflow-session.json (replace values)
bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentation","status":"planning","timestamp":"2024-01-20T14:30:22+08:00","path":".","target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' | jq '.' > .workflow/WFS-docs-{timestamp}/workflow-session.json)
bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentation","status":"planning","timestamp":"2024-01-20T14:30:22+08:00","path":".","target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' | jq '.' > .workflow/active/WFS-docs-{timestamp}/workflow-session.json)
```
### Phase 2: Analyze Structure
@@ -89,7 +92,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
```
**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/phase2-analysis.json` with structure:
**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:
```json
{
@@ -118,7 +121,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
**Then** use **Edit tool** to update `workflow-session.json` adding analysis field.
**Output**: Single `phase2-analysis.json` with all analysis data (no temp files or Python scripts).
**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).
**Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.
@@ -127,8 +130,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
**Commands**:
```bash
# Count existing docs from phase2-analysis.json
bash(cat .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq '.existing_docs.file_list | length')
# Count existing docs from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
```
**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
@@ -177,16 +180,15 @@ Large Projects (single dir >10 docs):
4. If single dir exceeds 10 docs, split by subdirectories
5. Create parallel Level 1 tasks with ≤10 docs each
**Benefits**: Parallel execution, failure isolation, progress visibility, context sharing, document count control.
**Commands**:
```bash
# 1. Get top-level directories from phase2-analysis.json
bash(cat .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq -r '.top_level_dirs[]')
# 1. Get top-level directories from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
# 2. Get mode from workflow-session.json
bash(cat .workflow/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
# 3. Check for HTTP API
bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo "NO_API")
@@ -201,7 +203,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
- If total ≤10 docs: create group
- If total >10 docs: split to 1 dir/group or subdivide
- If single dir >10 docs: split by subdirectories
3. Use **Edit tool** to update `phase2-analysis.json` adding groups field:
3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
```json
"groups": {
"count": 3,
@@ -215,7 +217,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
**Task ID Calculation**:
```bash
group_count=$(jq '.groups.count' .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json)
group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
readme_id=$((group_count + 1)) # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))
@@ -237,7 +239,7 @@ api_id=$((group_count + 3))
**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
2. Read group assignments from phase2-analysis.json
2. Read group assignments from doc-planning-data.json
3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)
@@ -262,14 +264,14 @@ api_id=$((group_count + 3))
},
"context": {
"requirements": [
"Process directories from group ${group_number} in phase2-analysis.json",
"Process directories from group ${group_number} in doc-planning-data.json",
"Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
"Code folders: API.md + README.md; Navigation folders: README.md only",
"Use pre-analyzed data from Phase 2 (no redundant analysis)"
],
"focus_paths": ["${group_dirs_from_json}"],
"precomputed_data": {
"phase2_analysis": "${session_dir}/.process/phase2-analysis.json"
"phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
}
},
"flow_control": {
@@ -278,8 +280,8 @@ api_id=$((group_count + 3))
"step": "load_precomputed_data",
"action": "Load Phase 2 analysis and extract group directories",
"commands": [
"bash(cat ${session_dir}/.process/phase2-analysis.json)",
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/phase2-analysis.json)"
"bash(cat ${session_dir}/.process/doc-planning-data.json)",
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
],
"output_to": "phase2_context",
"note": "Single JSON file contains all Phase 2 analysis results"
@@ -324,7 +326,7 @@ api_id=$((group_count + 3))
{
"step": 2,
"title": "Batch generate documentation via CLI",
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/phase2-analysis.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
"depends_on": [1],
"output": "generated_docs"
}
@@ -458,14 +460,13 @@ api_id=$((group_count + 3))
**Unified Structure** (single JSON replaces multiple text files):
```
.workflow/
├── .active-WFS-docs-{timestamp}
.workflow/active/
└── WFS-docs-{timestamp}/
├── workflow-session.json # Session metadata
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
│ └── phase2-analysis.json # All Phase 2 analysis data (replaces 7+ files)
│ └── doc-planning-data.json # All Phase 2 analysis data (replaces 7+ files)
└── .task/
├── IMPL-001.json # Small: all modules | Large: group 1
├── IMPL-00N.json # (Large only: groups 2-N)
@@ -474,7 +475,7 @@ api_id=$((group_count + 3))
└── IMPL-{N+3}.json # HTTP API (optional)
```
**phase2-analysis.json Structure**:
**doc-planning-data.json Structure**:
```json
{
"metadata": {


@@ -508,16 +508,7 @@ User triggers command
---
## Benefits
- **Pure Orchestrator**: No task JSON generation, delegates to /memory:docs
- **Auto-Continue**: Autonomous 4-phase execution without user interaction
- **Intelligent Skip**: Detects existing docs and skips regeneration for fast SKILL updates
- **Always Fresh Index**: Phase 4 always executes to ensure SKILL.md stays synchronized
- **Simplified**: ~70% less code than previous version
- **Maintainable**: Changes to /memory:docs automatically apply
- **Direct Generation**: Phase 4 directly writes SKILL.md
- **Flexible**: Supports all /memory:docs options (tool, mode, cli-execute)
## Architecture


@@ -0,0 +1,396 @@
---
name: style-skill-memory
description: Generate SKILL memory package from style reference for easy loading and consistent design system usage
argument-hint: "[package-name] [--regenerate]"
allowed-tools: Bash,Read,Write,TodoWrite
auto-continue: true
---
# Memory: Style SKILL Memory Generator
## Overview
**Purpose**: Convert style reference package into SKILL memory for easy loading and context management.
**Input**: Style reference package at `.workflow/reference_style/{package-name}/`
**Output**: SKILL memory index at `.claude/skills/style-{package-name}/SKILL.md`
**Use Case**: Load design system context when working with UI components, analyzing design patterns, or implementing style guidelines.
**Key Features**:
- Extracts primary design references (colors, typography, spacing, etc.)
- Provides dynamic adjustment guidelines for design tokens
- Includes prerequisites and tooling requirements (browsers, PostCSS, dark mode)
- Progressive loading structure for efficient token usage
- Complete implementation examples with React components
- Interactive preview showcase
---
## Quick Reference
### Command Syntax
```bash
/memory:style-skill-memory [package-name] [--regenerate]
# Arguments
package-name Style reference package name (required)
--regenerate Force regenerate SKILL.md even if it exists (optional)
```
### Usage Examples
```bash
# Generate SKILL memory for package
/memory:style-skill-memory main-app-style-v1
# Regenerate SKILL memory
/memory:style-skill-memory main-app-style-v1 --regenerate
# Package name from current directory or default
/memory:style-skill-memory
```
### Key Variables
**Input Variables**:
- `PACKAGE_NAME`: Style reference package name
- `PACKAGE_DIR`: `.workflow/reference_style/${package_name}`
- `SKILL_DIR`: `.claude/skills/style-${package_name}`
- `REGENERATE`: `true` if --regenerate flag, `false` otherwise
**Data Sources** (Phase 2):
- `DESIGN_TOKENS_DATA`: Complete design-tokens.json content (from Read)
- `LAYOUT_TEMPLATES_DATA`: Complete layout-templates.json content (from Read)
- `ANIMATION_TOKENS_DATA`: Complete animation-tokens.json content (from Read, if exists)
**Metadata** (Phase 2):
- `COMPONENT_COUNT`: Total components
- `UNIVERSAL_COUNT`: Universal components count
- `SPECIALIZED_COUNT`: Specialized components count
- `UNIVERSAL_COMPONENTS`: Universal component names (first 5)
- `HAS_ANIMATIONS`: Whether animation-tokens.json exists
**Analysis Output** (`DESIGN_ANALYSIS` - Phase 2):
- `has_colors`: Colors exist
- `color_semantic`: Has semantic naming (primary/secondary/accent)
- `uses_oklch`: Uses modern color spaces (oklch, lab, etc.)
- `has_dark_mode`: Has separate light/dark mode color tokens
- `spacing_pattern`: Pattern type ("linear", "geometric", "custom")
- `spacing_scale`: Actual scale values (e.g., [4, 8, 16, 32, 64])
- `has_typography`: Typography system exists
- `typography_hierarchy`: Has size scale for hierarchy
- `uses_calc`: Uses calc() expressions in token values
- `has_radius`: Border radius exists
- `radius_style`: Style characteristic ("sharp" <4px, "moderate" 4-8px, "rounded" >8px; see the classification sketch after this list)
- `has_shadows`: Shadow system exists
- `shadow_pattern`: Elevation naming pattern
- `has_animations`: Animation tokens exist
- `animation_range`: Duration range (fast to slow)
- `easing_variety`: Types of easing functions
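A minimal sketch of how `spacing_pattern` and `radius_style` could be derived from numeric token values; the thresholds follow the descriptions above, while the tolerance handling and the use of the largest radius are assumptions (see also Phase 2, Step 3):
```javascript
// Sketch: classify spacing scale and border-radius style from numeric token values (px)
function classify_spacing(values) {                 // e.g. [4, 8, 16, 32, 64]
  const s = [...values].sort((a, b) => a - b);      // assumes no zero values in the scale
  const diffs  = s.slice(1).map((v, i) => v - s[i]);
  const ratios = s.slice(1).map((v, i) => v / s[i]);
  const allEqual = arr => arr.every(x => Math.abs(x - arr[0]) < 1e-6);
  if (allEqual(diffs))  return "linear";            // 4-8-12-16
  if (allEqual(ratios)) return "geometric";         // 4-8-16-32
  return "custom";
}

function classify_radius(pixelValues) {
  const max = Math.max(...pixelValues);             // assumption: judge by the largest radius token
  if (max < 4)  return "sharp";
  if (max <= 8) return "moderate";
  return "rounded";
}
```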
### Common Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Package not found | Invalid package name or doesn't exist | Run `/workflow:ui-design:codify-style` first |
| SKILL already exists | SKILL.md already generated | Use `--regenerate` flag |
| Missing layout-templates.json | Incomplete package | Verify package integrity, re-run codify-style |
| Invalid JSON format | Corrupted package files | Regenerate package with codify-style |
---
## Execution Process
### Phase 1: Validate Package
**TodoWrite** (First Action):
```json
[
{
"content": "Validate package exists and check SKILL status",
"activeForm": "Validating package and SKILL status",
"status": "in_progress"
},
{
"content": "Read package data and analyze design system",
"activeForm": "Reading package data and analyzing design system",
"status": "pending"
},
{
"content": "Generate SKILL.md with design principles and token values",
"activeForm": "Generating SKILL.md with design principles and token values",
"status": "pending"
}
]
```
**Step 1: Parse Package Name**
```bash
# Get package name from argument or auto-detect
bash(if [ -n "${package_name}" ]; then echo "${package_name}"; else basename "$(pwd)" | sed 's/^style-//'; fi)
```
**Step 2: Validate Package Exists**
```bash
bash(test -d .workflow/reference_style/${package_name} && echo "exists" || echo "missing")
```
**Error Handling**:
```javascript
if (package_not_exists) {
error("ERROR: Style reference package not found: ${package_name}")
error("HINT: Run '/workflow:ui-design:codify-style' first to create package")
error("Available packages:")
bash(ls -1 .workflow/reference_style/ 2>/dev/null || echo " (none)")
exit(1)
}
```
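Expressed as plain bash, the same validation could look like the sketch below (the command itself drives this via the pseudocode above; the script form is only an illustration):
```bash
# Sketch: package validation error path
PACKAGE_DIR=".workflow/reference_style/${package_name}"
if [ ! -d "$PACKAGE_DIR" ]; then
  echo "ERROR: Style reference package not found: ${package_name}" >&2
  echo "HINT: Run '/workflow:ui-design:codify-style' first to create package" >&2
  echo "Available packages:" >&2
  ls -1 .workflow/reference_style/ 2>/dev/null || echo "  (none)"
  exit 1
fi
```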
**Step 3: Check SKILL Already Exists**
```bash
bash(test -f .claude/skills/style-${package_name}/SKILL.md && echo "exists" || echo "missing")
```
**Decision Logic**:
```javascript
if (skill_exists && !regenerate_flag) {
echo("SKILL memory already exists for: ${package_name}")
echo("Use --regenerate to force regeneration")
exit(0)
}
if (regenerate_flag && skill_exists) {
echo("Regenerating SKILL memory for: ${package_name}")
}
```
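The same decision in plain bash (a sketch; `$REGENERATE` is the flag parsed in Phase 1):
```bash
# Sketch: skip when SKILL.md exists and --regenerate was not passed
SKILL_FILE=".claude/skills/style-${package_name}/SKILL.md"
if [ -f "$SKILL_FILE" ] && [ "$REGENERATE" != "true" ]; then
  echo "SKILL memory already exists for: ${package_name}"
  echo "Use --regenerate to force regeneration"
  exit 0
elif [ -f "$SKILL_FILE" ]; then
  echo "Regenerating SKILL memory for: ${package_name}"
fi
```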
**TodoWrite Update**: Mark "Validate" as completed, "Read package data" as in_progress
---
### Phase 2: Read Package Data & Analyze Design System
**Step 1: Read All JSON Files**
```bash
# Read layout templates
Read(file_path=".workflow/reference_style/${package_name}/layout-templates.json")
# Read design tokens
Read(file_path=".workflow/reference_style/${package_name}/design-tokens.json")
# Read animation tokens (if exists)
bash(test -f .workflow/reference_style/${package_name}/animation-tokens.json && echo "exists" || echo "missing")
Read(file_path=".workflow/reference_style/${package_name}/animation-tokens.json") # if exists
```
**Step 2: Extract Metadata for Description**
```bash
# Count components and classify by type (paths relative to ${PACKAGE_DIR})
bash(jq '.layout_templates | length' layout-templates.json)
bash(jq '[.layout_templates[] | select(.component_type == "universal")] | length' layout-templates.json)
bash(jq '[.layout_templates[] | select(.component_type == "specialized")] | length' layout-templates.json)
bash(jq -r '.layout_templates | to_entries[] | select(.value.component_type == "universal") | .key' layout-templates.json | head -5)
```
Store results in metadata variables (see [Key Variables](#key-variables))
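One way to capture these values, assuming the jq commands above run from `${PACKAGE_DIR}` (a sketch, not the mandated mechanism):
```bash
# Sketch: store extraction results in shell variables
COMPONENT_COUNT=$(jq '.layout_templates | length' layout-templates.json)
UNIVERSAL_COUNT=$(jq '[.layout_templates[] | select(.component_type == "universal")] | length' layout-templates.json)
SPECIALIZED_COUNT=$(jq '[.layout_templates[] | select(.component_type == "specialized")] | length' layout-templates.json)
UNIVERSAL_COMPONENTS=$(jq -r '.layout_templates | to_entries[] | select(.value.component_type == "universal") | .key' layout-templates.json | head -5 | paste -sd, -)
HAS_ANIMATIONS=$([ -f animation-tokens.json ] && echo true || echo false)
```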
**Step 3: Analyze Design System for Dynamic Principles**
Analyze design-tokens.json to extract characteristics and patterns (jq commands below assume `${PACKAGE_DIR}` as the working directory):
```bash
# Color system characteristics
bash(jq '.colors | keys' design-tokens.json)
bash(jq '.colors | to_entries[0:2] | map(.value)' design-tokens.json)
# Check for modern color spaces
bash(jq '.colors | to_entries[] | .value | tostring | test("oklch|lab|lch")' design-tokens.json)
# Check for dark mode variants
bash(jq '.colors | keys | map(select(contains("dark") or contains("light")))' design-tokens.json)
# → Store: has_colors, color_semantic, uses_oklch, has_dark_mode
# Spacing pattern detection
bash(jq '.spacing | to_entries | map(.value) | map(gsub("[^0-9.]"; "") | tonumber)' design-tokens.json)
# Analyze pattern: linear (4-8-12-16) vs geometric (4-8-16-32) vs custom
# → Store: spacing_pattern, spacing_scale
# Typography characteristics
bash(jq '.typography | keys | map(select(contains("family") or contains("weight")))' design-tokens.json)
bash(jq '.typography | to_entries | map(select(.key | contains("size"))) | .[].value' design-tokens.json)
# Check for calc() usage
bash(jq '. | tostring | test("calc\\(")' design-tokens.json)
# → Store: has_typography, typography_hierarchy, uses_calc
# Border radius style
bash(jq '.border_radius | to_entries | map(.value)' design-tokens.json)
# Check range: small (sharp <4px) vs moderate (4-8px) vs large (rounded >8px)
# → Store: has_radius, radius_style
# Shadow characteristics
bash(jq '.shadows | keys' design-tokens.json)
bash(jq '.shadows | to_entries[0].value' design-tokens.json)
# → Store: has_shadows, shadow_pattern
# Animations (if available)
bash(jq '.duration | to_entries | map(.value)' animation-tokens.json)
bash(jq '.easing | keys' animation-tokens.json)
# → Store: has_animations, animation_range, easing_variety
```
Store analysis results in `DESIGN_ANALYSIS` (see [Key Variables](#key-variables))
**Note**: Analysis focuses on characteristics and patterns, not counts. Include technical feature detection (oklch, calc, dark mode) for Prerequisites section.
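As one example, the spacing-pattern detection mentioned above could be classified roughly as follows (a sketch: the exact-equality comparison is an assumption, and a real pass would tolerate rounding and missing values):
```bash
# Sketch: classify the spacing scale as linear / geometric / custom
values=$(jq -r '.spacing | to_entries | map(.value | gsub("[^0-9.]"; "") | tonumber) | sort | map(tostring) | join(" ")' design-tokens.json)
echo "$values" | awk '{
  linear = 1; geometric = 1
  for (i = 2; i < NF; i++) {
    if (($(i+1) - $i) != ($2 - $1)) linear = 0                           # constant step, e.g. 4-8-12-16
    if ($i == 0 || $1 == 0 || ($(i+1) / $i) != ($2 / $1)) geometric = 0  # constant ratio, e.g. 4-8-16-32
  }
  print (linear ? "linear" : (geometric ? "geometric" : "custom"))
}'
```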
**TodoWrite Update**: Mark "Read package data" as completed, "Generate SKILL.md" as in_progress
---
### Phase 3: Generate SKILL.md
**Step 1: Create SKILL Directory**
```bash
bash(mkdir -p .claude/skills/style-${package_name})
```
**Step 2: Generate Intelligent Description**
**Format**:
```
{package_name} project-independent design system with {universal_count} universal layout templates and interactive preview (located at .workflow/reference_style/{package_name}). Load when working with reusable UI components, design tokens, layout patterns, or implementing visual consistency. Excludes {specialized_count} project-specific components.
```
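In bash terms this is plain string interpolation over the Phase 2 metadata (a sketch):
```bash
# Sketch: assemble the intelligent description from Phase 2 metadata
DESCRIPTION="${PACKAGE_NAME} project-independent design system with ${UNIVERSAL_COUNT} universal layout templates and interactive preview (located at .workflow/reference_style/${PACKAGE_NAME}). Load when working with reusable UI components, design tokens, layout patterns, or implementing visual consistency. Excludes ${SPECIALIZED_COUNT} project-specific components."
```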
**Step 3: Load and Process SKILL.md Template**
**⚠️ CRITICAL - Execute First**:
```bash
bash(cat ~/.claude/workflows/cli-templates/memory/style-skill-memory/skill-md-template.md)
```
**Template Processing**:
1. **Replace variables**: Substitute all `{variable}` placeholders with actual values from Phase 2
2. **Generate dynamic sections**:
- **Prerequisites & Tooling**: Generate based on `DESIGN_ANALYSIS` technical features (oklch, calc, dark mode)
- **Design Principles**: Generate based on `DESIGN_ANALYSIS` characteristics
- **Complete Implementation Example**: Include React component example with token adaptation
- **Design Token Values**: Iterate `DESIGN_TOKENS_DATA`, `ANIMATION_TOKENS_DATA` and display all key-value pairs with DEFAULT annotations
3. **Write to file**: Use Write tool to save to `.claude/skills/style-{package_name}/SKILL.md`
**Variable Replacement Map** (substitution sketch after this list):
- `{package_name}` → PACKAGE_NAME
- `{intelligent_description}` → Generated description from Step 2
- `{component_count}` → COMPONENT_COUNT
- `{universal_count}` → UNIVERSAL_COUNT
- `{specialized_count}` → SPECIALIZED_COUNT
- `{universal_components_list}` → UNIVERSAL_COMPONENTS (comma-separated)
- `{has_animations}` → HAS_ANIMATIONS
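A minimal substitution sketch is shown below. The real step performs the replacement in context and saves via the Write tool; values containing `|` or newlines (such as `{intelligent_description}`) would need escaping or a templating step rather than raw sed:
```bash
# Sketch: placeholder substitution over the loaded template
sed -e "s|{package_name}|${PACKAGE_NAME}|g" \
    -e "s|{component_count}|${COMPONENT_COUNT}|g" \
    -e "s|{universal_count}|${UNIVERSAL_COUNT}|g" \
    -e "s|{specialized_count}|${SPECIALIZED_COUNT}|g" \
    -e "s|{universal_components_list}|${UNIVERSAL_COMPONENTS}|g" \
    -e "s|{has_animations}|${HAS_ANIMATIONS}|g" \
    ~/.claude/workflows/cli-templates/memory/style-skill-memory/skill-md-template.md \
    > ".claude/skills/style-${PACKAGE_NAME}/SKILL.md"
```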
**Dynamic Content Generation**:
See template file for complete structure. Key dynamic sections:
1. **Prerequisites & Tooling** (based on DESIGN_ANALYSIS technical features):
- IF uses_oklch → Include PostCSS plugin requirement (`postcss-oklab-function`)
- IF uses_calc → Include preprocessor requirement for calc() expressions
- IF has_dark_mode → Include dark mode implementation mechanism (class or media query)
- ALWAYS include browser support, jq installation, and local server setup
2. **Design Principles** (based on DESIGN_ANALYSIS):
- IF has_colors → Include "Color System" principle with semantic pattern
- IF spacing_pattern detected → Include "Spatial Rhythm" with unified scale description (actual token values)
- IF typography_hierarchy → Include "Typographic System" with scale examples
- IF has_radius → Include "Shape Language" with style characteristic
- IF has_shadows → Include "Depth & Elevation" with elevation pattern
- IF has_animations → Include "Motion & Timing" with duration range
- ALWAYS include "Accessibility First" principle
3. **Design Token Values** (iterate from read data):
- Colors: Iterate `DESIGN_TOKENS_DATA.colors`
- Typography: Iterate `DESIGN_TOKENS_DATA.typography`
- Spacing: Iterate `DESIGN_TOKENS_DATA.spacing`
- Border Radius: Iterate `DESIGN_TOKENS_DATA.border_radius` with calc() explanations
- Shadows: Iterate `DESIGN_TOKENS_DATA.shadows` with DEFAULT token annotations
- Animations (if available): Iterate `ANIMATION_TOKENS_DATA.duration` and `ANIMATION_TOKENS_DATA.easing`
**Step 4: Verify SKILL.md Created**
```bash
bash(test -f .claude/skills/style-${package_name}/SKILL.md && echo "success" || echo "failed")
```
**TodoWrite Update**: Mark all todos as completed
---
### Completion Message
Display a simple completion message with key information:
```
✅ SKILL memory generated for style package: {package_name}
📁 Location: .claude/skills/style-{package_name}/SKILL.md
📊 Package Summary:
- {component_count} components ({universal_count} universal, {specialized_count} specialized)
- Design tokens: colors, typography, spacing, shadows{animations_note}
💡 Usage: /memory:load-skill-memory style-{package_name} "your task description"
```
Variables: `{package_name}`, `{component_count}`, `{universal_count}`, `{specialized_count}`, `{animations_note}` (", animations" if exists)
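The `{animations_note}` suffix reduces to a one-line conditional (a sketch):
```bash
# Sketch: optional ", animations" suffix for the summary line
animations_note=$([ "$HAS_ANIMATIONS" = "true" ] && echo ", animations" || echo "")
```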
---
## Implementation Details
### Critical Rules
1. **Check Before Generate**: Verify package exists before attempting SKILL generation
2. **Respect Existing SKILL**: Don't overwrite unless --regenerate flag provided
3. **Load Templates via cat**: Use `cat ~/.claude/workflows/cli-templates/memory/style-skill-memory/{template}` to load templates
4. **Variable Substitution**: Replace all `{variable}` placeholders with actual values
5. **Technical Feature Detection**: Analyze tokens for modern features (oklch, calc, dark mode) and generate appropriate Prerequisites section
6. **Dynamic Content Generation**: Generate sections based on DESIGN_ANALYSIS characteristics
7. **Unified Spacing Scale**: Use actual token values as primary scale reference, avoid contradictory pattern descriptions
8. **Direct Iteration**: Iterate data structures (DESIGN_TOKENS_DATA, etc.) for token values
9. **Annotate Special Tokens**: Add comments for DEFAULT tokens and calc() expressions
10. **Embed jq Commands**: Include bash/jq commands in SKILL.md for dynamic loading
11. **Progressive Loading**: Include all 3 levels (0-2) with specific jq commands (see the sketch after this list)
12. **Complete Examples**: Include end-to-end implementation examples (React components)
13. **Intelligent Description**: Extract component count and key features from metadata
14. **Emphasize Flexibility**: Strongly warn against rigid copying - values are references for creative adaptation
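As a rough illustration of rule 11, the embedded commands might look like the sketch below (the level split and jq paths are assumptions about the generated SKILL.md, not a fixed specification):
```bash
# Level 0: metadata only (cheapest)
jq '{components: (.layout_templates | length)}' .workflow/reference_style/${package_name}/layout-templates.json
# Level 1: universal component names
jq -r '.layout_templates | to_entries[] | select(.value.component_type == "universal") | .key' .workflow/reference_style/${package_name}/layout-templates.json
# Level 2: full token values
jq '{colors, typography, spacing}' .workflow/reference_style/${package_name}/design-tokens.json
```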
### Execution Flow Summary
```
Phase 1: Validate
├─ Parse package_name
├─ Check PACKAGE_DIR exists
└─ Check SKILL_DIR exists (skip if exists and no --regenerate)
Phase 2: Read & Analyze
├─ Read design-tokens.json → DESIGN_TOKENS_DATA
├─ Read layout-templates.json → LAYOUT_TEMPLATES_DATA
├─ Read animation-tokens.json → ANIMATION_TOKENS_DATA (if exists)
├─ Extract Metadata → COMPONENT_COUNT, UNIVERSAL_COUNT, etc.
└─ Analyze Design System → DESIGN_ANALYSIS (characteristics)
Phase 3: Generate
├─ Create SKILL directory
├─ Generate intelligent description
├─ Load SKILL.md template (cat command)
├─ Replace variables and generate dynamic content
├─ Write SKILL.md
├─ Verify creation
├─ Load completion message template (cat command)
└─ Display completion message
```

View File

@@ -128,8 +128,8 @@ Generate a complete tech stack SKILL package with Exa research.
1. **Extract Tech Stack Information**:
IF MODE == 'session':
- Read `.workflow/{SESSION_ID}/workflow-session.json`
- Read `.workflow/{SESSION_ID}/.process/context-package.json`
- Read `.workflow/active/{session_id}/workflow-session.json`
- Read `.workflow/active/{session_id}/.process/context-package.json`
- Extract tech_stack: {language, frameworks, libraries}
- Build tech stack name: \"{language}-{framework1}-{framework2}\"
- Example: \"typescript-react-nextjs\"

View File

@@ -36,13 +36,12 @@ Orchestrates project-wide CLAUDE.md updates using batched agent execution with a
- **Use Case**: Deepest directories with unstructured file layouts
- **Behavior**: Generates CLAUDE.md for current directory AND each subdirectory containing files
- **Context**: All files in current directory tree (`@**/*`)
- **Benefits**: Creates foundation documentation for upper layers to reference
#### Single-Layer Strategy (Layers 1-2)
- **Use Case**: Upper layers that aggregate from existing documentation
- **Behavior**: Generates CLAUDE.md only for current directory
- **Context**: Direct children CLAUDE.md files + current directory code files
- **Benefits**: Minimal context consumption, clear layer separation
### Example Flow
```
@@ -95,14 +94,15 @@ src/ (depth 1) → SINGLE-LAYER STRATEGY
### Phase 1: Discovery & Analysis
```bash
# Cache git changes
bash(git add -A 2>/dev/null || true)
```javascript
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
# Get module structure
bash(~/.claude/scripts/get_modules_by_depth.sh list)
# OR with --path
bash(cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list)
// Get module structure
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
// OR with --path
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|...` to extract module paths and count.
@@ -172,26 +172,23 @@ Update Plan:
**Strategy**: Parallel execution within layer (max 4 concurrent), no agent overhead.
```javascript
// Group modules by LAYER (not depth)
let modules_by_layer = group_by_layer(module_list);
let tool_order = construct_tool_order(primary_tool);
**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.
// Process by LAYER (3 → 2 → 1), not by depth
```javascript
for (let layer of [3, 2, 1]) {
if (modules_by_layer[layer].length === 0) continue;
let batches = batch_modules(modules_by_layer[layer], 4);
for (let batch of batches) {
let parallel_tasks = batch.map(module => {
return async () => {
// Auto-determine strategy based on depth
let strategy = module.depth >= 3 ? "multi-layer" : "single-layer";
for (let tool of tool_order) {
let exit_code = bash(`cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "${strategy}" "." "${tool}"`);
if (exit_code === 0) {
Bash({
command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "${strategy}" "." "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
report(`${module.path} (Layer ${layer}) updated with ${tool}`);
return true;
}
@@ -200,7 +197,6 @@ for (let layer of [3, 2, 1]) {
return false;
};
});
await Promise.all(parallel_tasks.map(task => task()));
}
}
@@ -255,7 +251,10 @@ EXECUTION SCRIPT: ~/.claude/scripts/update_module_claude.sh
EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
bash(cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "{{strategy}}" "." "${tool}")
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "{{strategy}}" "." "${tool}"`,
run_in_background: false
})
exit_code=$?
if [ $exit_code -eq 0 ]; then
@@ -287,12 +286,12 @@ REPORTING FORMAT:
```
### Phase 4: Safety Verification
```bash
# Check only CLAUDE.md modified
bash(git diff --cached --name-only | grep -v "CLAUDE.md" || echo "Only CLAUDE.md files modified")
```javascript
// Check only CLAUDE.md files modified
Bash({command: 'git diff --cached --name-only | grep -v "CLAUDE.md" || echo "Only CLAUDE.md files modified"', run_in_background: false});
# Display status
bash(git status --short)
// Display status
Bash({command: "git status --short", run_in_background: false});
```
**Result Summary**:

View File

@@ -39,12 +39,12 @@ Orchestrates context-aware CLAUDE.md updates for changed modules using batched a
## Phase 1: Change Detection & Analysis
```bash
# Detect changed modules (no index refresh needed)
bash(~/.claude/scripts/detect_changed_modules.sh list)
```javascript
// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
# Cache git changes
bash(git add -A 2>/dev/null || true)
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|change:<TYPE>` to extract affected modules.
@@ -89,47 +89,36 @@ Related Update Plan:
## Phase 3A: Direct Execution (<15 modules)
**Strategy**: Parallel execution within depth (max 4 concurrent), no agent overhead, tool fallback per module.
**Strategy**: Parallel execution within depth (max 4 concurrent), no agent overhead.
**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.
```javascript
let modules_by_depth = group_by_depth(changed_modules);
let tool_order = construct_tool_order(primary_tool);
for (let depth of sorted_depths.reverse()) { // N → 0
let modules = modules_by_depth[depth];
let batches = batch_modules(modules, 4); // Split into groups of 4
let batches = batch_modules(modules_by_depth[depth], 4);
for (let batch of batches) {
// Execute batch in parallel (max 4 concurrent)
let parallel_tasks = batch.map(module => {
return async () => {
let success = false;
for (let tool of tool_order) {
let exit_code = bash(cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "${tool}");
if (exit_code === 0) {
report("${module.path} updated with ${tool}");
success = true;
break;
Bash({
command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
report(`${module.path} updated with ${tool}`);
return true;
}
}
if (!success) {
report("FAILED: ${module.path} failed all tools");
}
report(`❌ FAILED: ${module.path} failed all tools`);
return false;
};
});
await Promise.all(parallel_tasks.map(task => task())); // Run batch in parallel
await Promise.all(parallel_tasks.map(task => task()));
}
}
```
**Benefits**:
- No agent startup overhead
- Parallel execution within depth (max 4 concurrent)
- Tool fallback still applies per module
- Faster for small changesets (<15 modules)
- Same batching strategy as Phase 3B but without agent layer
---
## Phase 3B: Agent Batch Execution (≥15 modules)
@@ -193,19 +182,27 @@ TOOLS (try in order):
EXECUTION:
For each module above:
1. cd "{{module_path}}"
2. Try tool 1:
bash(cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_1}}")
→ Success: Report "{{module_path}} updated with {{tool_1}}", proceed to next module
1. Try tool 1:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_1}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_1}}", proceed to next module
→ Failure: Try tool 2
3. Try tool 2:
bash(cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_2}}")
→ Success: Report "{{module_path}} updated with {{tool_2}}", proceed to next module
2. Try tool 2:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_2}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_2}}", proceed to next module
→ Failure: Try tool 3
4. Try tool 3:
bash(cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_3}}")
→ Success: Report "{{module_path}} updated with {{tool_3}}", proceed to next module
→ Failure: Report "FAILED: {{module_path}} failed all tools", proceed to next module
3. Try tool 3:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_3}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_3}}", proceed to next module
→ Failure: Report "❌ FAILED: {{module_path}} failed all tools", proceed to next module
REPORTING:
Report final summary with:
@@ -213,30 +210,16 @@ Report final summary with:
- Successful: Y modules
- Failed: Z modules
- Tool usage: {{tool_1}}:X, {{tool_2}}:Y, {{tool_3}}:Z
- Detailed results for each module
```
### Example Execution
**Depth 3 (new module)**:
```javascript
Task(subagent_type="memory-bridge", batch=[./src/api/auth], mode="related")
```
**Benefits**:
- 4 modules → 1 agent (75% reduction)
- Parallel batches, sequential within batch
- Each module gets full fallback chain
- Context-aware updates based on git changes
## Phase 4: Safety Verification
```bash
# Check only CLAUDE.md modified
bash(git diff --cached --name-only | grep -v "CLAUDE.md" || echo "Only CLAUDE.md files modified")
```javascript
// Check only CLAUDE.md modified
Bash({command: 'git diff --cached --name-only | grep -v "CLAUDE.md" || echo "Only CLAUDE.md files modified"', run_in_background: false});
# Display statistics
bash(git diff --stat)
// Display statistics
Bash({command: "git diff --stat", run_in_background: false});
```
**Aggregate results**:

View File

@@ -199,10 +199,10 @@ This is the simplified data structure loaded to provide context for task executi
"session": "WFS-user-auth",
"phase": "IMPLEMENT",
"session_context": {
"workflow_directory": ".workflow/WFS-user-auth/",
"todo_list_location": ".workflow/WFS-user-auth/TODO_LIST.md",
"summaries_directory": ".workflow/WFS-user-auth/.summaries/",
"task_json_location": ".workflow/WFS-user-auth/.task/"
"workflow_directory": ".workflow/active/WFS-user-auth/",
"todo_list_location": ".workflow/active/WFS-user-auth/TODO_LIST.md",
"summaries_directory": ".workflow/active/WFS-user-auth/.summaries/",
"task_json_location": ".workflow/active/WFS-user-auth/.task/"
}
},
"execution": {

View File

@@ -7,6 +7,14 @@ allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
# Task Replan Command (/task:replan)
> **⚠️ DEPRECATION NOTICE**: This command is maintained for backward compatibility. For new workflows, use `/workflow:replan` which provides:
> - Session-level replanning with comprehensive artifact updates
> - Interactive boundary clarification
> - Updates to IMPL_PLAN.md, TODO_LIST.md, and session metadata
> - Better integration with workflow sessions
>
> **Migration**: Replace `/task:replan IMPL-1 "changes"` with `/workflow:replan IMPL-1 "changes"`
## Overview
Replans individual tasks or batch processes multiple tasks with change tracking and backup management.
@@ -14,9 +22,6 @@ Replans individual tasks or batch processes multiple tasks with change tracking
- **Single Task Mode**: Replan one task with specific changes
- **Batch Mode**: Process multiple tasks from action-plan verification report
## Core Principles
**Task System:** @~/.claude/workflows/task-core.md
## Key Features
- **Single/Batch Operations**: Single task or multiple tasks from verification report
- **Multiple Input Sources**: Text, files, or verification report
@@ -274,7 +279,7 @@ Backup saved to .task/backup/IMPL-2-v1.0.json
### Batch Mode - From Verification Report
```bash
/task:replan --batch .workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md
/task:replan --batch .workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md
Parsing verification report...
Found 4 tasks requiring replanning:

View File

@@ -32,7 +32,7 @@ Identify inconsistencies, duplications, ambiguities, and underspecified items be
IF --session parameter provided:
session_id = provided session
ELSE:
CHECK: .workflow/.active-* marker files
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
ELSE:
@@ -40,7 +40,7 @@ ELSE:
EXIT
# Derive absolute paths
session_dir = .workflow/WFS-{session}
session_dir = .workflow/active/WFS-{session}
brainstorm_dir = session_dir/.brainstorming
task_dir = session_dir/.task
@@ -333,7 +333,7 @@ Output a Markdown report (no file writes) with the following structure:
#### TodoWrite-Based Remediation Workflow
**Report Location**: `.workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md`
**Report Location**: `.workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md`
**Recommended Workflow**:
1. **Create TodoWrite Task List**: Extract all findings from report
@@ -361,7 +361,7 @@ Priority Order:
**Save Analysis Report**:
```bash
report_path = ".workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
report_path = ".workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
Write(report_path, full_report_content)
```
@@ -404,12 +404,12 @@ TodoWrite([
**File Modification Workflow**:
```bash
# For task JSON modifications:
1. Read(.workflow/WFS-{session}/.task/IMPL-X.Y.json)
1. Read(.workflow/active/WFS-{session}/.task/IMPL-X.Y.json)
2. Edit() to apply fixes
3. Mark todo as completed
# For IMPL_PLAN modifications:
1. Read(.workflow/WFS-{session}/IMPL_PLAN.md)
1. Read(.workflow/active/WFS-{session}/IMPL_PLAN.md)
2. Edit() to apply strategic changes
3. Mark todo as completed
```

View File

@@ -46,10 +46,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: .workflow/.active-* marker files
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
@@ -162,7 +162,7 @@ IF update_mode = "incremental":
### Output Files
```
.workflow/WFS-[topic]/.brainstorming/
.workflow/active/WFS-[topic]/.brainstorming/
├── guidance-specification.md # Input: Framework (if exists)
└── api-designer/
└── analysis.md # ★ OUTPUT: Framework-based analysis
@@ -181,7 +181,7 @@ IF update_mode = "incremental":
Session detection and selection:
```bash
# Check for active sessions
active_sessions=$(find .workflow -name ".active-*" 2>/dev/null)
active_sessions=$(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
if [ multiple_sessions ]; then
prompt_user_to_select_session()
else
@@ -280,7 +280,7 @@ TodoWrite tracking for two-step process:
### Output Location
```
.workflow/WFS-{topic-slug}/.brainstorming/api-designer/
.workflow/active/WFS-{topic-slug}/.brainstorming/api-designer/
├── analysis.md # Primary API design analysis
├── api-specification.md # Detailed endpoint specifications (OpenAPI/Swagger)
├── data-contracts.md # Request/response schemas and validation rules
@@ -531,7 +531,7 @@ Upon completion, update `workflow-session.json`:
"api_designer": {
"status": "completed",
"completed_at": "timestamp",
"output_directory": ".workflow/WFS-{topic}/.brainstorming/api-designer/",
"output_directory": ".workflow/active/WFS-{topic}/.brainstorming/api-designer/",
"key_insights": ["endpoint_design", "versioning_strategy", "data_contracts"]
}
}

View File

@@ -10,7 +10,7 @@ allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*)
Six-phase workflow: **Automatic project context collection** → Extract topic challenges → Select roles → Generate task-specific questions → Detect conflicts → Generate confirmed guidance (declarative statements only).
**Input**: `"GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--count N]`
**Output**: `.workflow/WFS-{topic}/.brainstorming/guidance-specification.md` (CONFIRMED/SELECTED format)
**Output**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md` (CONFIRMED/SELECTED format)
**Core Principle**: Questions dynamically generated from project context + topic keywords/challenges, NOT from generic templates
**Parameters**:
@@ -32,7 +32,7 @@ Six-phase workflow: **Automatic project context collection** → Extract topic c
**Standalone Mode**:
```json
[
{"content": "Initialize session (.workflow/.active-* check, parse --count parameter)", "status": "pending", "activeForm": "Initializing"},
{"content": "Initialize session (.workflow/active/ session check, parse --count parameter)", "status": "pending", "activeForm": "Initializing"},
{"content": "Phase 0: Automatic project context collection (call context-gather)", "status": "pending", "activeForm": "Phase 0 context collection"},
{"content": "Phase 1: Extract challenges, output 2-4 task-specific questions, wait for user input", "status": "pending", "activeForm": "Phase 1 topic analysis"},
{"content": "Phase 2: Recommend count+2 roles, output role selection, wait for user input", "status": "pending", "activeForm": "Phase 2 role selection"},
@@ -133,7 +133,7 @@ b) {role-name} ({中文名})
## Execution Phases
### Session Management
- Check `.workflow/.active-*` markers first
- Check `.workflow/active/` for existing sessions
- Multiple sessions → Prompt selection | Single → Use it | None → Create `WFS-[topic-slug]`
- Parse `--count N` parameter from user input (default: 3 if not specified)
- Store decisions in `workflow-session.json` including count parameter
@@ -145,7 +145,7 @@ b) {role-name} ({中文名})
**Detection Mechanism** (execute first):
```javascript
// Check if context-package already exists
const contextPackagePath = `.workflow/WFS-{session-id}/.process/context-package.json`;
const contextPackagePath = `.workflow/active/WFS-{session-id}/.process/context-package.json`;
if (file_exists(contextPackagePath)) {
// Validate package
@@ -229,7 +229,7 @@ Report completion with statistics.
**Steps**:
1. **Load Phase 0 context** (if available):
- Read `.workflow/WFS-{session-id}/.process/context-package.json`
- Read `.workflow/active/WFS-{session-id}/.process/context-package.json`
- Extract: tech_stack, existing modules, conflict_risk, relevant files
2. **Deep topic analysis** (context-aware):
@@ -449,7 +449,7 @@ FOR each selected role:
## Output Document Template
**File**: `.workflow/WFS-{topic}/.brainstorming/guidance-specification.md`
**File**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md`
```markdown
# [Project] - Confirmed Guidance Specification
@@ -596,8 +596,7 @@ ELSE:
## File Structure
```
.workflow/WFS-[topic]/
├── .active-brainstorming
.workflow/active/WFS-[topic]/
├── workflow-session.json # Session metadata ONLY
└── .brainstorming/
└── guidance-specification.md # Full guidance content

View File

@@ -9,22 +9,32 @@ allowed-tools: SlashCommand(*), Task(*), TodoWrite(*), Read(*), Write(*), Bash(*
## Coordinator Role
**This command is a pure orchestrator**: Execute 3 phases in sequence (interactive framework → parallel role analysis → synthesis), delegate to specialized commands/agents, and ensure complete execution through **automatic continuation**.
**This command is a pure orchestrator**: Execute 3 phases in sequence (interactive framework → parallel role analysis → synthesis), coordinating specialized commands/agents through task attachment model.
**Task Attachment Model**:
- SlashCommand invocation **expands workflow** by attaching sub-tasks to current TodoWrite
- Task agent execution **attaches analysis tasks** to orchestrator's TodoWrite
- Phase 1: artifacts command attaches its internal tasks (Phase 1-5)
- Phase 2: N conceptual-planning-agent tasks attached in parallel
- Phase 3: synthesis command attaches its internal tasks
- Orchestrator **executes these attached tasks** sequentially (Phase 1, 3) or in parallel (Phase 2)
- After completion, attached tasks are **collapsed** back to high-level phase summary
- This is **task expansion**, not external delegation
**Execution Model - Auto-Continue Workflow**:
This workflow runs **fully autonomously** once triggered. Phase 1 (artifacts) handles user interaction, Phase 2 (role agents) runs in parallel.
1. **User triggers**: `/workflow:brainstorm:auto-parallel "topic" [--count N]`
2. **Phase 1 executes** → artifacts command (interactive framework) → Auto-continues
3. **Phase 2 executes** → Parallel role agents (N agents run concurrently) → Auto-continues
4. **Phase 3 executes** → Synthesis command → Reports final summary
2. **Phase 1 executes** → artifacts command (tasks ATTACHED) → Auto-continues
3. **Phase 2 executes** → Parallel role agents (N tasks ATTACHED concurrently) → Auto-continues
4. **Phase 3 executes** → Synthesis command (tasks ATTACHED) → Reports final summary
**Auto-Continue Mechanism**:
- TodoList tracks current phase status
- After Phase 1 (artifacts) completion, automatically load roles and launch Phase 2 agents
- After Phase 2 (all agents) completion, automatically execute Phase 3 synthesis
- Progress updates shown at each phase for visibility
- TodoList tracks current phase status and dynamically manages task attachment/collapse
- When Phase 1 (artifacts) finishes executing, automatically load roles and launch Phase 2 agents
- When Phase 2 (all agents) finishes executing, automatically execute Phase 3 synthesis
- **⚠️ CONTINUOUS EXECUTION** - Do not stop until all phases complete
## Core Rules
@@ -32,23 +42,26 @@ This workflow runs **fully autonomously** once triggered. Phase 1 (artifacts) ha
2. **No Preliminary Analysis**: Do not analyze topic before Phase 1 - artifacts handles all analysis
3. **Parse Every Output**: Extract selected_roles from workflow-session.json after Phase 1
4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
5. **Track Progress**: Update TodoWrite after every phase completion
6. **TodoWrite Extension**: artifacts command EXTENDS parent TodoList (NOT replaces)
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
6. **Task Attachment Model**: SlashCommand and Task invocations **attach** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
7. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
8. **Parallel Execution**: Phase 2 attaches multiple agent tasks simultaneously for concurrent execution
## Usage
```bash
/workflow:brainstorm:auto-parallel "<topic>" [--count N]
/workflow:brainstorm:auto-parallel "<topic>" [--count N] [--style-skill package-name]
```
**Recommended Structured Format**:
```bash
/workflow:brainstorm:auto-parallel "GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--count N]
/workflow:brainstorm:auto-parallel "GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--count N] [--style-skill package-name]
```
**Parameters**:
- `topic` (required): Topic or challenge description (structured format recommended)
- `--count N` (optional): Number of roles to select (default: 3, max: 9)
- `--style-skill package-name` (optional): Style SKILL package to load for UI design (located at `.claude/skills/style-{package-name}/`)
## 3-Phase Execution
@@ -72,17 +85,39 @@ This workflow runs **fully autonomously** once triggered. Phase 1 (artifacts) ha
**Validation**:
- guidance-specification.md created with confirmed decisions
- workflow-session.json contains selected_roles[] (metadata only, no content duplication)
- Session directory `.workflow/WFS-{topic}/.brainstorming/` exists
- Session directory `.workflow/active/WFS-{topic}/.brainstorming/` exists
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress
**TodoWrite Update (Phase 1 SlashCommand invoked - tasks attached)**:
```json
[
{"content": "Parse --count parameter from user input", "status": "completed", "activeForm": "Parsing count parameter"},
{"content": "Phase 1.1: Topic analysis and question generation (artifacts)", "status": "in_progress", "activeForm": "Analyzing topic"},
{"content": "Phase 1.2: Role selection and user confirmation (artifacts)", "status": "pending", "activeForm": "Selecting roles"},
{"content": "Phase 1.3: Role questions and user decisions (artifacts)", "status": "pending", "activeForm": "Collecting role questions"},
{"content": "Phase 1.4: Conflict detection and resolution (artifacts)", "status": "pending", "activeForm": "Resolving conflicts"},
{"content": "Phase 1.5: Guidance specification generation (artifacts)", "status": "pending", "activeForm": "Generating guidance"},
{"content": "Execute parallel role analysis", "status": "pending", "activeForm": "Executing parallel role analysis"},
{"content": "Execute synthesis integration", "status": "pending", "activeForm": "Executing synthesis integration"}
]
```
**After Phase 1**: Auto-continue to Phase 2 (role agent assignment)
**Note**: SlashCommand invocation **attaches** artifacts' 5 internal tasks. Orchestrator **executes** these tasks sequentially.
**⚠️ TodoWrite Coordination**: artifacts EXTENDS parent TodoList by:
- Marking parent task "Execute artifacts..." as in_progress
- APPENDING artifacts sub-tasks (Phase 1-5) after parent task
- PRESERVING all other auto-parallel tasks (role agents, synthesis)
- When artifacts Phase 5 completes, marking parent task as completed
**Next Action**: Tasks attached → **Execute Phase 1.1-1.5** sequentially
**TodoWrite Update (Phase 1 completed - tasks collapsed)**:
```json
[
{"content": "Parse --count parameter from user input", "status": "completed", "activeForm": "Parsing count parameter"},
{"content": "Execute artifacts interactive framework generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
{"content": "Execute parallel role analysis", "status": "pending", "activeForm": "Executing parallel role analysis"},
{"content": "Execute synthesis integration", "status": "pending", "activeForm": "Executing synthesis integration"}
]
```
**Note**: Phase 1 tasks completed and collapsed to summary.
**After Phase 1**: Auto-continue to Phase 2 (parallel role agent execution)
---
@@ -97,13 +132,13 @@ Execute {role-name} analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: {role-name}
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/{role}/
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/{role}/
TOPIC: {user-provided-topic}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
@@ -113,9 +148,15 @@ TOPIC: {user-provided-topic}
3. **load_session_metadata**
- Action: Load session metadata and original user intent
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context (contains original user prompt as PRIMARY reference)
4. **load_style_skill** (ONLY for ui-designer role when style_skill_package exists)
- Action: Load style SKILL package for design system reference
- Command: Read(.claude/skills/style-{style_skill_package}/SKILL.md) AND Read(.workflow/reference_style/{style_skill_package}/design-tokens.json)
- Output: style_skill_content, design_tokens
- Usage: Apply design tokens in ui-designer analysis and artifacts
## Analysis Requirements
**Primary Reference**: Original user prompt from workflow-session.json is authoritative
**Framework Source**: Address all discussion points in guidance-specification.md from {role-name} perspective
@@ -143,8 +184,9 @@ TOPIC: {user-provided-topic}
**Parallel Execution**:
- Launch N agents simultaneously (one message with multiple Task calls)
- Each agent task **attached** to orchestrator's TodoWrite
- All agents execute concurrently, each attaching their own analysis sub-tasks
- Each agent operates independently reading same guidance-specification.md
- All agents update progress concurrently
**Input**:
- `selected_roles[]` from Phase 1
@@ -152,13 +194,39 @@ TOPIC: {user-provided-topic}
- guidance-specification.md path
**Validation**:
- Each role creates `.workflow/WFS-{topic}/.brainstorming/{role}/analysis.md` (primary file)
- Each role creates `.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md` (primary file)
- If content is large (>800 lines), may split to `analysis-1.md`, `analysis-2.md` (max 3 files total)
- **File naming pattern**: ALL files MUST start with `analysis` prefix (use `analysis*.md` for globbing)
- **FORBIDDEN naming**: No `recommendations.md`, `recommendations-*.md`, or any non-`analysis` prefixed files
- All N role analyses completed
**TodoWrite**: Mark all N role agent tasks completed, phase 3 in_progress
**TodoWrite Update (Phase 2 agents invoked - tasks attached in parallel)**:
```json
[
{"content": "Parse --count parameter from user input", "status": "completed", "activeForm": "Parsing count parameter"},
{"content": "Execute artifacts interactive framework generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
{"content": "Phase 2.1: Execute system-architect analysis [conceptual-planning-agent]", "status": "in_progress", "activeForm": "Executing system-architect analysis"},
{"content": "Phase 2.2: Execute ui-designer analysis [conceptual-planning-agent]", "status": "in_progress", "activeForm": "Executing ui-designer analysis"},
{"content": "Phase 2.3: Execute product-manager analysis [conceptual-planning-agent]", "status": "in_progress", "activeForm": "Executing product-manager analysis"},
{"content": "Execute synthesis integration", "status": "pending", "activeForm": "Executing synthesis integration"}
]
```
**Note**: Multiple Task invocations **attach** N role analysis tasks simultaneously. Orchestrator **executes** these tasks in parallel.
**Next Action**: Tasks attached → **Execute Phase 2.1-2.N** concurrently
**TodoWrite Update (Phase 2 completed - tasks collapsed)**:
```json
[
{"content": "Parse --count parameter from user input", "status": "completed", "activeForm": "Parsing count parameter"},
{"content": "Execute artifacts interactive framework generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
{"content": "Execute parallel role analysis", "status": "completed", "activeForm": "Executing parallel role analysis"},
{"content": "Execute synthesis integration", "status": "pending", "activeForm": "Executing synthesis integration"}
]
```
**Note**: Phase 2 parallel tasks completed and collapsed to summary.
**After Phase 2**: Auto-continue to Phase 3 (synthesis)
@@ -177,16 +245,42 @@ TOPIC: {user-provided-topic}
**Input**: `sessionId` from Phase 1
**Validation**:
- `.workflow/WFS-{topic}/.brainstorming/synthesis-specification.md` exists
- `.workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.md` exists
- Synthesis references all role analyses
**TodoWrite**: Mark phase 3 completed
**TodoWrite Update (Phase 3 SlashCommand invoked - tasks attached)**:
```json
[
{"content": "Parse --count parameter from user input", "status": "completed", "activeForm": "Parsing count parameter"},
{"content": "Execute artifacts interactive framework generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
{"content": "Execute parallel role analysis", "status": "completed", "activeForm": "Executing parallel role analysis"},
{"content": "Phase 3.1: Load role analysis files (synthesis)", "status": "in_progress", "activeForm": "Loading role analyses"},
{"content": "Phase 3.2: Integrate insights across roles (synthesis)", "status": "pending", "activeForm": "Integrating insights"},
{"content": "Phase 3.3: Generate synthesis specification (synthesis)", "status": "pending", "activeForm": "Generating synthesis"}
]
```
**Note**: SlashCommand invocation **attaches** synthesis' internal tasks. Orchestrator **executes** these tasks sequentially.
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
**TodoWrite Update (Phase 3 completed - tasks collapsed)**:
```json
[
{"content": "Parse --count parameter from user input", "status": "completed", "activeForm": "Parsing count parameter"},
{"content": "Execute artifacts interactive framework generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
{"content": "Execute parallel role analysis", "status": "completed", "activeForm": "Executing parallel role analysis"},
{"content": "Execute synthesis integration", "status": "completed", "activeForm": "Executing synthesis integration"}
]
```
**Note**: Phase 3 tasks completed and collapsed to summary.
**Return to User**:
```
Brainstorming complete for session: {sessionId}
Roles analyzed: {count}
Synthesis: .workflow/WFS-{topic}/.brainstorming/synthesis-specification.md
Synthesis: .workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.md
✅ Next Steps:
1. /workflow:concept-clarify --session {sessionId} # Optional refinement
@@ -195,49 +289,41 @@ Synthesis: .workflow/WFS-{topic}/.brainstorming/synthesis-specification.md
## TodoWrite Pattern
```javascript
// Initialize (before Phase 1)
TodoWrite({todos: [
{"content": "Parse --count parameter from user input", "status": "in_progress", "activeForm": "Parsing count parameter"},
{"content": "Execute artifacts command for interactive framework generation", "status": "pending", "activeForm": "Executing artifacts interactive framework"},
{"content": "Load selected_roles from workflow-session.json", "status": "pending", "activeForm": "Loading selected roles"},
// Role agent tasks added dynamically after Phase 1 based on selected_roles count
{"content": "Execute synthesis command for final integration", "status": "pending", "activeForm": "Executing synthesis integration"}
]})
**Core Concept**: Dynamic task attachment and collapse for parallel brainstorming workflow with interactive framework generation and concurrent role analysis.
// After Phase 1 (artifacts completes, roles loaded)
// Note: artifacts EXTENDS this list by appending its Phase 1-5 sub-tasks
TodoWrite({todos: [
{"content": "Parse --count parameter from user input", "status": "completed", "activeForm": "Parsing count parameter"},
{"content": "Execute artifacts command for interactive framework generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
{"content": "Load selected_roles from workflow-session.json", "status": "in_progress", "activeForm": "Loading selected roles"},
{"content": "Execute system-architect analysis [conceptual-planning-agent]", "status": "pending", "activeForm": "Executing system-architect analysis"},
{"content": "Execute ui-designer analysis [conceptual-planning-agent]", "status": "pending", "activeForm": "Executing ui-designer analysis"},
{"content": "Execute product-manager analysis [conceptual-planning-agent]", "status": "pending", "activeForm": "Executing product-manager analysis"},
// ... (N role tasks based on --count parameter)
{"content": "Execute synthesis command for final integration", "status": "pending", "activeForm": "Executing synthesis integration"}
]})
### Key Principles
// After Phase 2 (all agents launched in parallel)
TodoWrite({todos: [
// ... previous completed tasks
{"content": "Load selected_roles from workflow-session.json", "status": "completed", "activeForm": "Loading selected roles"},
{"content": "Execute system-architect analysis [conceptual-planning-agent]", "status": "in_progress", "activeForm": "Executing system-architect analysis"},
{"content": "Execute ui-designer analysis [conceptual-planning-agent]", "status": "in_progress", "activeForm": "Executing ui-designer analysis"},
{"content": "Execute product-manager analysis [conceptual-planning-agent]", "status": "in_progress", "activeForm": "Executing product-manager analysis"},
// ... (all N agents in_progress simultaneously)
{"content": "Execute synthesis command for final integration", "status": "pending", "activeForm": "Executing synthesis integration"}
]})
1. **Task Attachment** (when SlashCommand/Task invoked):
- Sub-command's or agent's internal tasks are **attached** to orchestrator's TodoWrite
- Phase 1: `/workflow:brainstorm:artifacts` attaches 5 internal tasks (Phase 1.1-1.5)
- Phase 2: Multiple `Task(conceptual-planning-agent)` calls attach N role analysis tasks simultaneously
- Phase 3: `/workflow:brainstorm:synthesis` attaches 3 internal tasks (Phase 3.1-3.3)
- First attached task marked as `in_progress`, others as `pending`
- Orchestrator **executes** these attached tasks (sequentially for Phase 1, 3; in parallel for Phase 2)
2. **Task Collapse** (after sub-tasks complete):
- Remove detailed sub-tasks from TodoWrite
- **Collapse** to high-level phase summary
- Example: Phase 1.1-1.5 collapse to "Execute artifacts interactive framework generation: completed"
- Phase 2: Multiple role tasks collapse to "Execute parallel role analysis: completed"
- Phase 3: Synthesis tasks collapse to "Execute synthesis integration: completed"
- Maintains clean orchestrator-level view
3. **Continuous Execution**:
- After collapse, automatically proceed to next pending phase
- No user intervention required between phases
- TodoWrite dynamically reflects current execution state
**Lifecycle Summary**: Initial pending tasks → Phase 1 invoked (artifacts tasks ATTACHED) → Artifacts sub-tasks executed → Phase 1 completed (tasks COLLAPSED) → Phase 2 invoked (N role tasks ATTACHED in parallel) → Role analyses executed concurrently → Phase 2 completed (tasks COLLAPSED) → Phase 3 invoked (synthesis tasks ATTACHED) → Synthesis sub-tasks executed → Phase 3 completed (tasks COLLAPSED) → Workflow complete.
### Brainstorming Workflow Specific Features
- **Phase 1**: Interactive framework generation with user Q&A (SlashCommand attachment)
- **Phase 2**: Parallel role analysis execution with N concurrent agents (Task agent attachments)
- **Phase 3**: Cross-role synthesis integration (SlashCommand attachment)
- **Dynamic Role Count**: `--count N` parameter determines number of Phase 2 parallel tasks (default: 3, max: 9)
- **Mixed Execution**: Sequential (Phase 1, 3) and Parallel (Phase 2) task execution
// After Phase 2 (all agents complete)
TodoWrite({todos: [
// ... previous completed tasks
{"content": "Execute system-architect analysis [conceptual-planning-agent]", "status": "completed", "activeForm": "Executing system-architect analysis"},
{"content": "Execute ui-designer analysis [conceptual-planning-agent]", "status": "completed", "activeForm": "Executing ui-designer analysis"},
{"content": "Execute product-manager analysis [conceptual-planning-agent]", "status": "completed", "activeForm": "Executing product-manager analysis"},
{"content": "Execute synthesis command for final integration", "status": "in_progress", "activeForm": "Executing synthesis integration"}
]})
```
## Input Processing
@@ -255,6 +341,34 @@ ELSE:
EXECUTE: /workflow:brainstorm:artifacts "{topic}" --count {count_value}
```
**Style-Skill Parameter Parsing**:
```javascript
// Extract --style-skill from user input
IF user_input CONTAINS "--style-skill":
EXTRACT style_skill_name FROM "--style-skill package-name" pattern
// Validate SKILL package exists
skill_path = ".claude/skills/style-{style_skill_name}/SKILL.md"
IF file_exists(skill_path):
style_skill_package = style_skill_name
style_reference_path = ".workflow/reference_style/{style_skill_name}"
echo("✓ Style SKILL package found: style-{style_skill_name}")
echo(" Design reference: {style_reference_path}")
ELSE:
echo("⚠ WARNING: Style SKILL package not found: {style_skill_name}")
echo(" Expected location: {skill_path}")
echo(" Continuing without style reference...")
style_skill_package = null
ELSE:
style_skill_package = null
echo("No style-skill specified, ui-designer will use default workflow")
// Store for Phase 2 ui-designer context
CONTEXT_VARS:
- style_skill_package: {style_skill_package}
- style_reference_path: {style_reference_path}
```
**Topic Structuring**:
1. **Already Structured** → Pass directly to artifacts
```
@@ -270,32 +384,34 @@ EXECUTE: /workflow:brainstorm:artifacts "{topic}" --count {count_value}
## Session Management
**⚡ FIRST ACTION**: Check for `.workflow/.active-*` markers before Phase 1
**⚡ FIRST ACTION**: Check `.workflow/active/` for existing sessions before Phase 1
**Multiple Sessions Support**:
- Different Claude instances can have different active brainstorming sessions
- If multiple active sessions found, prompt user to select
- If single active session found, use it
- If no active session exists, create `WFS-[topic-slug]`
- Different Claude instances can have different brainstorming sessions
- If multiple sessions found, prompt user to select
- If single session found, use it
- If no session exists, create `WFS-[topic-slug]`
**Session Continuity**:
- MUST use selected active session for all phases
- MUST use selected session for all phases
- Each role's context stored in session directory
- Session isolation: Each session maintains independent state
## Output Structure
**Phase 1 Output**:
- `.workflow/WFS-{topic}/.brainstorming/guidance-specification.md` (framework content)
- `.workflow/WFS-{topic}/workflow-session.json` (metadata: selected_roles[], topic, timestamps)
- `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md` (framework content)
- `.workflow/active/WFS-{topic}/workflow-session.json` (metadata: selected_roles[], topic, timestamps, style_skill_package)
**Phase 2 Output**:
- `.workflow/WFS-{topic}/.brainstorming/{role}/analysis.md` (one per role)
- `.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md` (one per role)
- `.superdesign/design_iterations/` (ui-designer artifacts, if --style-skill provided)
**Phase 3 Output**:
- `.workflow/WFS-{topic}/.brainstorming/synthesis-specification.md` (integrated analysis)
- `.workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.md` (integrated analysis)
**⚠️ Storage Separation**: Guidance content in .md files, metadata in .json (no duplication)
**⚠️ Style References**: When --style-skill provided, workflow-session.json stores style_skill_package name, ui-designer loads from `.claude/skills/style-{package-name}/`
## Available Roles
@@ -322,8 +438,7 @@ EXECUTE: /workflow:brainstorm:artifacts "{topic}" --count {count_value}
**File Structure**:
```
.workflow/WFS-[topic]/
├── .active-brainstorming
.workflow/active/WFS-[topic]/
├── workflow-session.json # Session metadata ONLY
└── .brainstorming/
├── guidance-specification.md # Framework (Phase 1)

View File

@@ -47,10 +47,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: .workflow/.active-* marker files
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
@@ -87,13 +87,13 @@ Execute data-architect analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: data-architect
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/data-architect/
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/data-architect/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
@@ -103,7 +103,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
@@ -163,7 +163,7 @@ TodoWrite({
### Framework-Based Analysis
```
.workflow/WFS-{session}/.brainstorming/data-architect/
.workflow/active/WFS-{session}/.brainstorming/data-architect/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
@@ -208,7 +208,7 @@ TodoWrite({
"data_architect": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/WFS-{session}/.brainstorming/data-architect/analysis.md",
"output_location": ".workflow/active/WFS-{session}/.brainstorming/data-architect/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}

View File

@@ -27,10 +27,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: .workflow/.active-* marker files
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
@@ -67,13 +67,13 @@ Execute product-manager analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: product-manager
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/product-manager/
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/product-manager/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
@@ -83,7 +83,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
@@ -143,7 +143,7 @@ TodoWrite({
### Framework-Based Analysis
```
.workflow/WFS-{session}/.brainstorming/product-manager/
.workflow/active/WFS-{session}/.brainstorming/product-manager/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
@@ -188,7 +188,7 @@ TodoWrite({
"product_manager": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/WFS-{session}/.brainstorming/product-manager/analysis.md",
"output_location": ".workflow/active/WFS-{session}/.brainstorming/product-manager/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}


@@ -27,10 +27,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: .workflow/.active-* marker files
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
@@ -67,13 +67,13 @@ Execute product-owner analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: product-owner
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/product-owner/
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/product-owner/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
@@ -83,7 +83,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
@@ -143,7 +143,7 @@ TodoWrite({
### Framework-Based Analysis
```
.workflow/WFS-{session}/.brainstorming/product-owner/
.workflow/active/WFS-{session}/.brainstorming/product-owner/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
@@ -188,7 +188,7 @@ TodoWrite({
"product_owner": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/WFS-{session}/.brainstorming/product-owner/analysis.md",
"output_location": ".workflow/active/WFS-{session}/.brainstorming/product-owner/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}


@@ -27,10 +27,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: .workflow/.active-* marker files
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
@@ -67,13 +67,13 @@ Execute scrum-master analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: scrum-master
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/scrum-master/
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/scrum-master/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
@@ -83,7 +83,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
@@ -143,7 +143,7 @@ TodoWrite({
### Framework-Based Analysis
```
.workflow/WFS-{session}/.brainstorming/scrum-master/
.workflow/active/WFS-{session}/.brainstorming/scrum-master/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
@@ -188,7 +188,7 @@ TodoWrite({
"scrum_master": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/WFS-{session}/.brainstorming/scrum-master/analysis.md",
"output_location": ".workflow/active/WFS-{session}/.brainstorming/scrum-master/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}


@@ -27,10 +27,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: .workflow/.active-* marker files
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
@@ -67,13 +67,13 @@ Execute subject-matter-expert analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: subject-matter-expert
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/subject-matter-expert/
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
@@ -83,7 +83,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
@@ -143,7 +143,7 @@ TodoWrite({
### Framework-Based Analysis
```
.workflow/WFS-{session}/.brainstorming/subject-matter-expert/
.workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
@@ -188,7 +188,7 @@ TodoWrite({
"subject_matter_expert": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/WFS-{session}/.brainstorming/subject-matter-expert/analysis.md",
"output_location": ".workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}


@@ -48,7 +48,7 @@ Three-phase workflow to eliminate ambiguities and enhance conceptual depth in ro
### Phase 1: Discovery & Validation
1. **Detect Session**: Use `--session` parameter or `.workflow/.active-*` marker
1. **Detect Session**: Use `--session` parameter or find `.workflow/active/WFS-*` directories
2. **Validate Files**:
- `guidance-specification.md` (optional, warn if missing)
- `*/analysis*.md` (required, error if empty)
@@ -59,7 +59,7 @@ Three-phase workflow to eliminate ambiguities and enhance conceptual depth in ro
**Main flow prepares file paths for Agent**:
1. **Discover Analysis Files**:
- Glob(.workflow/WFS-{session}/.brainstorming/*/analysis*.md)
- Glob(.workflow/active/WFS-{session}/.brainstorming/*/analysis*.md)
- Supports: analysis.md, analysis-1.md, analysis-2.md, analysis-3.md
- Validate: At least one file exists (error if empty)
@@ -69,7 +69,7 @@ Three-phase workflow to eliminate ambiguities and enhance conceptual depth in ro
3. **Pass to Agent** (Phase 3):
- `session_id`
- `brainstorm_dir`: .workflow/WFS-{session}/.brainstorming/
- `brainstorm_dir`: .workflow/active/WFS-{session}/.brainstorming/
- `role_analysis_paths`: ["product-manager/analysis.md", "system-architect/analysis-1.md", ...]
- `participating_roles`: ["product-manager", "system-architect", ...]
@@ -361,7 +361,7 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
## Output
**Location**: `.workflow/WFS-{session}/.brainstorming/[role]/analysis*.md` (in-place updates)
**Location**: `.workflow/active/WFS-{session}/.brainstorming/[role]/analysis*.md` (in-place updates)
**Updated Structure**:
```markdown
@@ -381,6 +381,64 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
- Ambiguities resolved, placeholders removed
- Consistent terminology
### Phase 6: Update Context Package
**Purpose**: Sync updated role analyses to context-package.json to avoid stale cache
**Operations**:
```bash
context_pkg_path = ".workflow/active/WFS-{session}/.process/context-package.json"
# 1. Read existing package
context_pkg = Read(context_pkg_path)
# 2. Re-read brainstorm artifacts (now with synthesis enhancements)
brainstorm_dir = ".workflow/active/WFS-{session}/.brainstorming"
# 2.1 Update guidance-specification if exists
IF exists({brainstorm_dir}/guidance-specification.md):
context_pkg.brainstorm_artifacts.guidance_specification.content = Read({brainstorm_dir}/guidance-specification.md)
context_pkg.brainstorm_artifacts.guidance_specification.updated_at = NOW()
# 2.2 Update synthesis-specification if exists
IF exists({brainstorm_dir}/synthesis-specification.md):
IF context_pkg.brainstorm_artifacts.synthesis_output:
context_pkg.brainstorm_artifacts.synthesis_output.content = Read({brainstorm_dir}/synthesis-specification.md)
context_pkg.brainstorm_artifacts.synthesis_output.updated_at = NOW()
# 2.3 Re-read all role analysis files
role_analysis_files = Glob({brainstorm_dir}/*/analysis*.md)
context_pkg.brainstorm_artifacts.role_analyses = []
FOR file IN role_analysis_files:
role_name = extract_role_from_path(file) # e.g., "ui-designer"
relative_path = file.replace({brainstorm_dir}/, "")
context_pkg.brainstorm_artifacts.role_analyses.push({
"role": role_name,
"files": [{
"path": relative_path,
"type": "primary",
"content": Read(file),
"updated_at": NOW()
}]
})
# 3. Update metadata
context_pkg.metadata.updated_at = NOW()
context_pkg.metadata.synthesis_timestamp = NOW()
# 4. Write back
Write(context_pkg_path, JSON.stringify(context_pkg, indent=2))
REPORT: "✅ Updated context-package.json with synthesis results"
```
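The helper `extract_role_from_path` is referenced in the pseudocode above but not defined; a minimal sketch of one plausible implementation, assuming the role name is simply the directory that contains the analysis file:

```javascript
// Sketch only: derives the role from the analysis file path, matching the
// "ui-designer" example in the pseudocode above.
function extractRoleFromPath(filePath) {
  // ".workflow/active/WFS-x/.brainstorming/ui-designer/analysis.md" -> "ui-designer"
  const parts = filePath.split("/").filter(Boolean);
  return parts[parts.length - 2];
}
```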
**TodoWrite Update**:
```json
{"content": "Update context package with synthesis results", "status": "completed", "activeForm": "Updating context package"}
```
## Session Metadata
Update `workflow-session.json`:


@@ -46,10 +46,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: .workflow/.active-* marker files
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
@@ -162,7 +162,7 @@ IF update_mode = "incremental":
### Output Files
```
.workflow/WFS-[topic]/.brainstorming/
.workflow/active/WFS-[topic]/.brainstorming/
├── guidance-specification.md # Input: Framework (if exists)
└── system-architect/
└── analysis.md # ★ OUTPUT: Framework-based analysis
@@ -186,8 +186,8 @@ IF update_mode = "incremental":
### ⚠️ Session Management - FIRST STEP
Session detection and selection:
```bash
# Check for active sessions
active_sessions=$(find .workflow -name ".active-*" 2>/dev/null)
# Check for existing sessions
existing_sessions=$(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
if [ multiple_sessions ]; then
prompt_user_to_select_session()
else
@@ -279,7 +279,7 @@ TodoWrite tracking for two-step process:
### Output Location
```
.workflow/WFS-{topic-slug}/.brainstorming/system-architect/
.workflow/active/WFS-{topic-slug}/.brainstorming/system-architect/
├── analysis.md # Primary architecture analysis
├── architecture-design.md # Detailed system design and diagrams
├── technology-stack.md # Technology stack recommendations and justifications
@@ -340,7 +340,7 @@ Upon completion, update `workflow-session.json`:
"system_architect": {
"status": "completed",
"completed_at": "timestamp",
"output_directory": ".workflow/WFS-{topic}/.brainstorming/system-architect/",
"output_directory": ".workflow/active/WFS-{topic}/.brainstorming/system-architect/",
"key_insights": ["scalability_bottleneck", "architecture_pattern", "technology_recommendation"]
}
}


@@ -48,10 +48,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: .workflow/.active-* marker files
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
@@ -88,13 +88,13 @@ Execute ui-designer analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: ui-designer
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/ui-designer/
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/ui-designer/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
@@ -104,7 +104,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
@@ -164,7 +164,7 @@ TodoWrite({
### Framework-Based Analysis
```
.workflow/WFS-{session}/.brainstorming/ui-designer/
.workflow/active/WFS-{session}/.brainstorming/ui-designer/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
@@ -209,7 +209,7 @@ TodoWrite({
"ui_designer": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/WFS-{session}/.brainstorming/ui-designer/analysis.md",
"output_location": ".workflow/active/WFS-{session}/.brainstorming/ui-designer/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}


@@ -48,10 +48,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: .workflow/.active-* marker files
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
@@ -88,13 +88,13 @@ Execute ux-expert analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: ux-expert
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/ux-expert/
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/ux-expert/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
@@ -104,7 +104,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
@@ -164,7 +164,7 @@ TodoWrite({
### Framework-Based Analysis
```
.workflow/WFS-{session}/.brainstorming/ux-expert/
.workflow/active/WFS-{session}/.brainstorming/ux-expert/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```
@@ -209,7 +209,7 @@ TodoWrite({
"ux_expert": {
"status": "completed",
"framework_addressed": true,
"output_location": ".workflow/WFS-{session}/.brainstorming/ux-expert/analysis.md",
"output_location": ".workflow/active/WFS-{session}/.brainstorming/ux-expert/analysis.md",
"framework_reference": "@../guidance-specification.md"
}
}


@@ -7,7 +7,7 @@ argument-hint: "[--resume-session=\"session-id\"]"
# Workflow Execute Command
## Overview
Orchestrates autonomous workflow execution through systematic task discovery, agent coordination, and progress tracking. **Executes entire workflow without user interruption**, providing complete context to agents and ensuring proper flow control execution with comprehensive TodoWrite tracking.
Orchestrates autonomous workflow execution through systematic task discovery, agent coordination, and progress tracking. **Executes entire workflow without user interruption** (except initial session selection if multiple active sessions exist), providing complete context to agents and ensuring proper flow control execution with comprehensive TodoWrite tracking.
**Resume Mode**: When called with `--resume-session` flag, skips discovery phase and directly enters TodoWrite generation and agent execution for the specified session.
@@ -15,130 +15,170 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
**Lazy Loading**: Task JSONs read **on-demand** during execution, not upfront. TODO_LIST.md + IMPL_PLAN.md provide metadata for planning.
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| **Initial Load** | All task JSONs (~2,300 lines) | TODO_LIST.md only (~650 lines) | **72% reduction** |
| **Startup Time** | Seconds | Milliseconds | **~90% faster** |
| **Memory** | All tasks | 1-2 tasks | **90% less** |
| **Scalability** | 10-20 tasks | 100+ tasks | **5-10x** |
**Loading Strategy**:
- **TODO_LIST.md**: Read in Phase 2 (task metadata, status, dependencies)
- **IMPL_PLAN.md**: Read existence in Phase 2, parse execution strategy when needed
- **Task JSONs**: Complete lazy loading (read only during execution)
## Core Rules
**Complete entire workflow autonomously without user interruption, using TodoWrite for comprehensive progress tracking.**
**Execute all discovered pending tasks sequentially until workflow completion or blocking dependency.**
**Execute all discovered pending tasks until workflow completion or blocking dependency.**
**Auto-complete session when all tasks finished: Call `/workflow:session:complete` upon workflow completion.**
**ONE AGENT = ONE TASK JSON: Each agent instance executes exactly one task JSON file - never batch multiple tasks into single agent execution.**
## Core Responsibilities
- **Session Discovery**: Identify and select active workflow sessions
- **Task Dependency Resolution**: Analyze task relationships and execution order
- **Execution Strategy Parsing**: Extract execution model from IMPL_PLAN.md
- **TodoWrite Progress Tracking**: Maintain real-time execution status throughout entire workflow
- **Agent Orchestration**: Coordinate specialized agents with complete context
- **Flow Control Execution**: Execute pre-analysis steps and context accumulation
- **Status Synchronization**: Update task JSON files and workflow state
- **Autonomous Completion**: Continue execution until all tasks complete or reach blocking state
- **Session Auto-Complete**: Call `/workflow:session:complete` when all workflow tasks finished
## Execution Philosophy
- **Discovery-first**: Auto-discover existing plans and tasks
- **Status-aware**: Execute only ready tasks with resolved dependencies
- **Context-rich**: Provide complete task JSON and accumulated context to agents
- **Progress tracking**: **Continuous TodoWrite updates throughout entire workflow execution**
- **Flow control**: Sequential step execution with variable passing
- **Autonomous completion**: **Execute all tasks without user interruption until workflow complete**
## Flow Control Execution
**[FLOW_CONTROL]** marker indicates task JSON contains `flow_control.pre_analysis` steps for context preparation.
### Orchestrator Responsibility
- Pass complete task JSON to agent (including `flow_control` block)
- Provide session paths for artifact access
- Monitor agent completion
### Agent Responsibility
- Parse `flow_control.pre_analysis` array from JSON
- Execute steps sequentially with variable substitution
- Accumulate context from artifacts and dependencies
- Follow error handling per `step.on_error`
- Complete implementation using accumulated context
**Orchestrator does NOT execute flow control steps - Agent interprets and executes them from JSON.**
- **Progress tracking**: Continuous TodoWrite updates throughout entire workflow execution
- **Autonomous completion**: Execute all tasks without user interruption until workflow complete
## Execution Lifecycle
### Resume Mode Detection
**Special Flag Processing**: When `--resume-session="session-id"` is provided:
1. **Skip Discovery Phase**: Use provided session ID directly
2. **Load Specified Session**: Read session state from `.workflow/{session-id}/`
3. **Direct TodoWrite Generation**: Skip to Phase 3 (Planning) immediately
4. **Accelerated Execution**: Enter agent coordination without validation delays
### Phase 1: Discovery
**Applies to**: Normal mode only (skipped in resume mode)
### Phase 1: Discovery (Normal Mode Only)
1. **Check Active Sessions**: Find `.workflow/.active-*` markers
2. **Select Session**: If multiple found, prompt user selection
3. **Load Session Metadata**: Read `workflow-session.json` ONLY (minimal context)
4. **DO NOT read task JSONs yet** - defer until execution phase
**Purpose**: Find and select active workflow session with user confirmation when multiple sessions exist
**Note**: In resume mode, this phase is completely skipped.
**Process**:
#### Step 1.1: Count Active Sessions
```bash
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | wc -l)
```
#### Step 1.2: Handle Session Selection
**Case A: No Sessions** (count = 0)
```
ERROR: No active workflow sessions found
Run /workflow:plan "task description" to create a session
```
**Case B: Single Session** (count = 1)
```bash
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs basename)
```
Auto-select and continue to Phase 2.
**Case C: Multiple Sessions** (count > 1)
List sessions with metadata and prompt user selection:
```bash
bash(for dir in .workflow/active/WFS-*/; do
session=$(basename "$dir")
project=$(jq -r '.project // "Unknown"' "$dir/workflow-session.json" 2>/dev/null)
total=$(grep -c "^- \[" "$dir/TODO_LIST.md" 2>/dev/null || echo "0")
completed=$(grep -c "^- \[x\]" "$dir/TODO_LIST.md" 2>/dev/null || echo "0")
[ "$total" -gt 0 ] && progress=$((completed * 100 / total)) || progress=0
echo "${session} | ${project} | ${completed}/${total} tasks (${progress}%)"
done)
```
Use AskUserQuestion to present formatted options:
```
Multiple active workflow sessions detected. Please select one:
1. WFS-auth-system | Authentication System | 3/5 tasks (60%)
2. WFS-payment-module | Payment Integration | 0/8 tasks (0%)
Enter number, full session ID, or partial match:
```
Parse user input (supports: number "1", full ID "WFS-auth-system", or partial "auth"), validate selection, and continue to Phase 2.
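A minimal sketch of this selection parsing; the function name is illustrative, and `sessions` is assumed to be the list of session IDs discovered in `.workflow/active/`:

```javascript
// Resolves user input to a session ID: numeric choice, exact ID, or unique partial match.
function resolveSessionSelection(input, sessions) {
  const trimmed = input.trim();
  const index = Number.parseInt(trimmed, 10);
  if (!Number.isNaN(index) && index >= 1 && index <= sessions.length) {
    return sessions[index - 1];                        // numeric choice, e.g. "1"
  }
  if (sessions.includes(trimmed)) return trimmed;      // full ID, e.g. "WFS-auth-system"
  const partial = sessions.filter(s => s.includes(trimmed)); // partial match, e.g. "auth"
  return partial.length === 1 ? partial[0] : null;     // null -> re-prompt the user
}
```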
#### Step 1.3: Load Session Metadata
```bash
bash(cat .workflow/active/${sessionId}/workflow-session.json)
```
**Output**: Store session metadata in memory
**DO NOT read task JSONs yet** - defer until execution phase (lazy loading)
**Resume Mode**: This entire phase is skipped when `--resume-session="session-id"` flag is provided.
### Phase 2: Planning Document Analysis
**Applies to**: Normal mode only (skipped in resume mode)
### Phase 2: Planning Document Analysis (Normal Mode Only)
**Optimized to avoid reading all task JSONs upfront**
1. **Read IMPL_PLAN.md**: Understand overall strategy, task breakdown summary, dependencies
**Process**:
1. **Check IMPL_PLAN.md**: Verify file exists and has valid structure (defer detailed parsing to Phase 4)
2. **Read TODO_LIST.md**: Get current task statuses and execution progress
3. **Extract Task Metadata**: Parse task IDs, titles, and dependency relationships from TODO_LIST.md
4. **Build Execution Queue**: Determine ready tasks based on TODO_LIST.md status and dependencies
**Key Optimization**: Use IMPL_PLAN.md and TODO_LIST.md as primary sources instead of reading all task JSONs
**Key Optimization**: Use IMPL_PLAN.md (existence check only) and TODO_LIST.md as primary sources instead of reading all task JSONs. Detailed IMPL_PLAN.md parsing happens in Phase 4A.
**Note**: In resume mode, this phase is also skipped as session analysis was already completed by `/workflow:status`.
**Resume Mode**: When `--resume-session` flag is provided, **session discovery** (Phase 1) is skipped, but **task metadata loading** (TODO_LIST.md reading) still occurs in Phase 3 for TodoWrite generation.
### Phase 3: TodoWrite Generation (Resume Mode Entry Point)
**This is where resume mode directly enters after skipping Phases 1 & 2**
### Phase 3: TodoWrite Generation
**Applies to**: Both normal and resume modes (resume mode entry point)
**Process**:
1. **Create TodoWrite List**: Generate task list from TODO_LIST.md (not from task JSONs)
- Parse TODO_LIST.md to extract all tasks with current statuses
- Identify first pending task with met dependencies
- Generate comprehensive TodoWrite covering entire workflow
2. **Mark Initial Status**: Set first ready task as `in_progress` in TodoWrite
3. **Prepare Session Context**: Inject workflow paths for agent use (using provided session-id)
4. **Validate Prerequisites**: Ensure IMPL_PLAN.md and TODO_LIST.md exist and are valid
2. **Prepare Session Context**: Inject workflow paths for agent use (using provided session-id)
3. **Validate Prerequisites**: Ensure IMPL_PLAN.md and TODO_LIST.md exist and are valid
**Resume Mode Behavior**:
- Load existing TODO_LIST.md directly from `.workflow/{session-id}/`
- Load existing TODO_LIST.md directly from `.workflow/active/{session-id}/`
- Extract current progress from TODO_LIST.md
- Generate TodoWrite from TODO_LIST.md state (see the sketch after this list)
- Proceed immediately to agent execution (Phase 4)
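A minimal sketch of this TodoWrite generation step (both modes), assuming task objects shaped like the output of `parseTodoListTasks()` defined later in this document; for brevity it treats the first pending task as ready and omits the agent/flow-control tags shown in the later examples:

```javascript
// Maps parsed TODO_LIST.md tasks to TodoWrite items, marking the first pending task in_progress.
function buildTodos(tasks) {
  let marked = false;
  return tasks.map(task => {
    let status = task.status === "completed" ? "completed" : "pending";
    if (!marked && status === "pending") {   // first ready task becomes in_progress
      status = "in_progress";
      marked = true;
    }
    return {
      content: `Execute ${task.id}: ${task.title}`,
      status,
      activeForm: `Executing ${task.id}: ${task.title}`
    };
  });
}
```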
### Phase 4: Execution (Lazy Task Loading)
**Key Optimization**: Read task JSON **only when needed** for execution
### Phase 4: Execution Strategy Selection & Task Execution
**Applies to**: Both normal and resume modes
1. **Identify Next Task**: From TodoWrite, get the next `in_progress` task ID
2. **Load Task JSON on Demand**: Read `.task/{task-id}.json` for current task ONLY
3. **Validate Task Structure**: Ensure all six required fields exist (id, title, status, meta, context, flow_control)
4. **Pass Task with Flow Control**: Include complete task JSON with `pre_analysis` steps for agent execution
5. **Launch Agent**: Invoke specialized agent with complete context including flow control steps
6. **Monitor Progress**: Track agent execution and handle errors without user interruption
7. **Collect Results**: Gather implementation results and outputs
8. **Update TODO_LIST.md**: Mark current task as completed in TODO_LIST.md
9. **Continue Workflow**: Identify next pending task from TODO_LIST.md and repeat from step 1
**Step 4A: Parse Execution Strategy from IMPL_PLAN.md**
Read IMPL_PLAN.md Section 4 to extract:
- **Execution Model**: Sequential | Parallel | Phased | TDD Cycles
- **Parallelization Opportunities**: Which tasks can run in parallel
- **Serialization Requirements**: Which tasks must run sequentially
- **Critical Path**: Priority execution order
If IMPL_PLAN.md lacks execution strategy, use intelligent fallback (analyze task structure).
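A minimal sketch of Step 4A, assuming IMPL_PLAN.md Section 4 contains an "Execution Model:" line as described above (the exact markdown formatting is an assumption):

```javascript
// Extracts the execution model from IMPL_PLAN.md content; returns null to trigger the fallback.
function parseExecutionStrategy(implPlanContent) {
  const match = implPlanContent.match(
    /\*{0,2}Execution Model\*{0,2}:\s*(Sequential|Parallel|Phased|TDD[^\n]*)/i
  );
  return {
    model: match ? match[1].trim() : null,  // null -> fall back to task-structure analysis
    raw: match ? match[0] : null
  };
}
```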
**Step 4B: Execute Tasks with Lazy Loading**
**Key Optimization**: Read task JSON **only when needed** for execution
**Execution Loop Pattern**:
```
while (TODO_LIST.md has pending tasks) {
next_task_id = getTodoWriteInProgressTask()
task_json = Read(.workflow/{session}/.task/{next_task_id}.json) // Lazy load
  task_json = Read(.workflow/active/{session}/.task/{next_task_id}.json)  // Lazy load
executeTaskWithAgent(task_json)
updateTodoListMarkCompleted(next_task_id)
advanceTodoWriteToNextTask()
}
```
**Benefits**:
- Reduces initial context loading by ~90%
- Only reads task JSON when actually executing
- Scales better for workflows with many tasks
- Faster startup time for workflow execution
**Execution Process per Task**:
1. **Identify Next Task**: From TodoWrite, get the next `in_progress` task ID
2. **Load Task JSON on Demand**: Read `.task/{task-id}.json` for current task ONLY
3. **Validate Task Structure**: Ensure all six required fields exist (id, title, status, meta, context, flow_control)
4. **Launch Agent**: Invoke specialized agent with complete context including flow control steps
5. **Monitor Progress**: Track agent execution and handle errors without user interruption
6. **Collect Results**: Gather implementation results and outputs
7. **Continue Workflow**: Identify next pending task from TODO_LIST.md and repeat
**Note**: TODO_LIST.md updates are handled by agents (e.g., code-developer.md), not by the orchestrator.
### Phase 5: Completion
**Applies to**: Both normal and resume modes
**Process**:
1. **Update Task Status**: Mark completed tasks in JSON files
2. **Generate Summary**: Create task summary in `.summaries/`
3. **Update TodoWrite**: Mark current task complete, advance to next
@@ -146,34 +186,53 @@ while (TODO_LIST.md has pending tasks) {
5. **Check Workflow Complete**: Verify all tasks are completed
6. **Auto-Complete Session**: Call `/workflow:session:complete` when all tasks finished
## Task Discovery & Queue Building
## Execution Strategy (IMPL_PLAN-Driven)
### Session Discovery Process (Normal Mode - Optimized)
```
├── Check for .active-* markers in .workflow/
├── If multiple active sessions found → Prompt user to select
├── Locate selected session's workflow folder
├── Load session metadata: workflow-session.json (minimal context)
├── Read IMPL_PLAN.md (strategy overview and task summary)
├── Read TODO_LIST.md (current task statuses and dependencies)
├── Parse TODO_LIST.md to extract task metadata (NO JSON loading)
├── Build execution queue from TODO_LIST.md
└── Generate TodoWrite from TODO_LIST.md state
```
### Strategy Priority
**Key Change**: Task JSONs are NOT loaded during discovery - they are loaded lazily during execution
**IMPL_PLAN-Driven Execution (Recommended)**:
1. **Read IMPL_PLAN.md execution strategy** (Section 4: Implementation Strategy)
2. **Follow explicit guidance**:
- Execution Model (Sequential/Parallel/Phased/TDD)
- Parallelization Opportunities (which tasks can run in parallel)
- Serialization Requirements (which tasks must run sequentially)
- Critical Path (priority execution order)
3. **Use TODO_LIST.md for status tracking** only
4. **IMPL_PLAN decides "HOW"**, execute.md implements it
### Resume Mode Process (--resume-session flag - Optimized)
```
├── Use provided session-id directly (skip discovery)
├── Validate .workflow/{session-id}/ directory exists
├── Read TODO_LIST.md for current progress
├── Parse TODO_LIST.md to extract task IDs and statuses
├── Generate TodoWrite from TODO_LIST.md (prioritize in-progress/pending tasks)
└── Enter Phase 4 (Execution) with lazy task JSON loading
```
**Intelligent Fallback (When IMPL_PLAN lacks execution details)**:
1. **Analyze task structure**:
- Check `meta.execution_group` in task JSONs
- Analyze `depends_on` relationships
- Understand task complexity and risk
2. **Apply smart defaults**:
- No dependencies + same execution_group → Parallel
- Has dependencies → Sequential (wait for deps)
- Critical/high-risk tasks → Sequential
3. **Conservative approach**: When uncertain, prefer sequential execution
**Key Change**: Completely skip IMPL_PLAN.md and task JSON loading - use TODO_LIST.md only
### Execution Models
#### 1. Sequential Execution
**When**: IMPL_PLAN specifies "Sequential" OR no clear parallelization guidance
**Pattern**: Execute tasks one by one in TODO_LIST order
**TodoWrite**: ONE task marked as `in_progress` at a time
#### 2. Parallel Execution
**When**: IMPL_PLAN specifies "Parallel" with clear parallelization opportunities
**Pattern**: Execute independent task groups concurrently by launching multiple agent instances
**TodoWrite**: MULTIPLE tasks (in same batch) marked as `in_progress` simultaneously
**Agent Instantiation**: Launch one agent instance per task (respects ONE AGENT = ONE TASK JSON rule)
#### 3. Phased Execution
**When**: IMPL_PLAN specifies "Phased" with phase breakdown
**Pattern**: Execute tasks in phases, respect phase boundaries
**TodoWrite**: Within each phase, follow Sequential or Parallel rules
#### 4. Intelligent Fallback
**When**: IMPL_PLAN lacks execution strategy details
**Pattern**: Analyze task structure and apply smart defaults
**TodoWrite**: Follow Sequential or Parallel rules based on analysis
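A minimal sketch of choosing among these models: prefer the IMPL_PLAN strategy when present, otherwise apply the fallback defaults (no dependencies plus an `execution_group` → Parallel; anything else → Sequential). Task objects follow the JSON field reference later in this document.

```javascript
// Selects an execution model for the current batch; conservative when uncertain.
function selectExecutionModel(strategyModel, batchTasks) {
  if (strategyModel) return strategyModel;             // IMPL_PLAN decides "HOW"
  const parallelizable = batchTasks.length > 0 && batchTasks.every(t =>
    (t.context?.depends_on ?? []).length === 0 && t.meta?.execution_group);
  return parallelizable ? "Parallel" : "Sequential";   // sequential by default
}
```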
### Task Status Logic
```
@@ -182,155 +241,36 @@ completed → skip
blocked → skip until dependencies clear
```
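A minimal sketch of this status logic, assuming task objects that carry `id`, `status`, and the dependency list extracted from TODO_LIST.md in Phase 2 (the helper name is illustrative):

```javascript
// A task is ready when it is pending and every declared dependency is already completed.
function getReadyTasks(tasks) {
  const done = new Set(tasks.filter(t => t.status === "completed").map(t => t.id));
  return tasks.filter(t =>
    t.status === "pending" &&
    (t.dependencies || []).every(dep => done.has(dep))
  );
}
```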
## Batch Execution with Dependency Graph
### Parallel Execution Algorithm
**Core principle**: Execute independent tasks concurrently in batches based on dependency graph.
#### Algorithm Steps (Optimized with Lazy Loading)
```javascript
function executeBatchWorkflow(sessionId) {
// 1. Build dependency graph from TODO_LIST.md (NOT task JSONs)
const graph = buildDependencyGraphFromTodoList(`.workflow/${sessionId}/TODO_LIST.md`);
// 2. Process batches until graph is empty
while (!graph.isEmpty()) {
// 3. Identify current batch (tasks with in-degree = 0)
const batch = graph.getNodesWithInDegreeZero();
// 4. Load task JSONs ONLY for current batch (lazy loading)
const batchTaskJsons = batch.map(taskId =>
Read(`.workflow/${sessionId}/.task/${taskId}.json`)
);
// 5. Check for parallel execution opportunities
const parallelGroups = groupByExecutionGroup(batchTaskJsons);
// 6. Execute batch concurrently
await Promise.all(
parallelGroups.map(group => executeBatch(group))
);
// 7. Update graph: remove completed tasks and their edges
graph.removeNodes(batch);
// 8. Update TODO_LIST.md and TodoWrite to reflect completed batch
updateTodoListAfterBatch(batch);
updateTodoWriteAfterBatch(batch);
}
// 9. All tasks complete - auto-complete session
SlashCommand("/workflow:session:complete");
}
function buildDependencyGraphFromTodoList(todoListPath) {
const todoContent = Read(todoListPath);
const tasks = parseTodoListTasks(todoContent);
const graph = new DirectedGraph();
tasks.forEach(task => {
graph.addNode(task.id, { id: task.id, title: task.title, status: task.status });
task.dependencies?.forEach(depId => graph.addEdge(depId, task.id));
});
return graph;
}
function parseTodoListTasks(todoContent) {
// Parse: - [ ] **IMPL-001**: Task title → [📋](./.task/IMPL-001.json)
const taskPattern = /- \[([ x])\] \*\*([A-Z]+-\d+(?:\.\d+)?)\*\*: (.+?) →/g;
const tasks = [];
let match;
while ((match = taskPattern.exec(todoContent)) !== null) {
tasks.push({
status: match[1] === 'x' ? 'completed' : 'pending',
id: match[2],
title: match[3]
});
}
return tasks;
}
function groupByExecutionGroup(tasks) {
const groups = {};
tasks.forEach(task => {
const groupId = task.meta.execution_group || task.id;
if (!groups[groupId]) groups[groupId] = [];
groups[groupId].push(task);
});
return Object.values(groups);
}
async function executeBatch(tasks) {
// Execute all tasks in batch concurrently
return Promise.all(
tasks.map(task => executeTask(task))
);
}
```
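A usage sketch for `parseTodoListTasks()` above, applied to a TODO_LIST.md fragment in the format shown in that function's comment (task IDs and titles here are illustrative):

```javascript
// Demonstrates the parser output shape for one completed and one pending task.
const sample = [
  "- [x] **IMPL-001**: Design auth schema → [📋](./.task/IMPL-001.json)",
  "- [ ] **IMPL-002**: Build auth API → [📋](./.task/IMPL-002.json)"
].join("\n");
console.log(parseTodoListTasks(sample));
// -> [ { status: "completed", id: "IMPL-001", title: "Design auth schema" },
//      { status: "pending",   id: "IMPL-002", title: "Build auth API" } ]
```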
#### Execution Group Rules
1. **Same `execution_group` ID** → Execute in parallel (independent, different contexts)
2. **No `execution_group` (null)** → Execute sequentially (has dependencies)
3. **Different `execution_group` IDs** → Execute in parallel (independent batches)
4. **Same `context_signature`** → Should have been merged (warning if not)
#### Parallel Execution Example
```
Batch 1 (no dependencies):
- IMPL-1.1 (execution_group: "parallel-auth-api") → Agent 1
- IMPL-1.2 (execution_group: "parallel-ui-comp") → Agent 2
- IMPL-1.3 (execution_group: "parallel-db-schema") → Agent 3
Wait for Batch 1 completion...
Batch 2 (depends on Batch 1):
- IMPL-2.1 (execution_group: null, depends_on: [IMPL-1.1, IMPL-1.2]) → Agent 1
Wait for Batch 2 completion...
Batch 3 (independent of Batch 2):
- IMPL-3.1 (execution_group: "parallel-tests-1") → Agent 1
- IMPL-3.2 (execution_group: "parallel-tests-2") → Agent 2
```
## TodoWrite Coordination
**Comprehensive workflow tracking** with immediate status updates throughout entire execution without user interruption:
#### TodoWrite Workflow Rules
1. **Initial Creation**: Generate TodoWrite from discovered pending tasks for entire workflow
- **Normal Mode**: Create from discovery results
- **Resume Mode**: Create from existing session state and current progress
2. **Parallel Task Support**:
- **Single-task execution**: Mark ONLY ONE task as `in_progress` at a time
- **Batch execution**: Mark ALL tasks in current batch as `in_progress` simultaneously
- **Execution group indicator**: Show `[execution_group: group-id]` for parallel tasks
3. **Immediate Updates**: Update status after each task/batch completion without user interruption
4. **Status Synchronization**: Sync with JSON task files after updates
5. **Continuous Tracking**: Maintain TodoWrite throughout entire workflow execution until completion
### TodoWrite Rules (Unified)
#### Resume Mode TodoWrite Generation
**Special behavior when `--resume-session` flag is present**:
- Load existing session progress from `.workflow/{session-id}/TODO_LIST.md`
- Identify currently in-progress or next pending task
- Generate TodoWrite starting from interruption point
- Preserve completed task history in TodoWrite display
- Focus on remaining pending tasks for execution
**Rule 1: Initial Creation**
- **Normal Mode**: Generate TodoWrite from discovered pending tasks for entire workflow
- **Resume Mode**: Generate from existing session state and current progress
#### TodoWrite Tool Usage
**Use Claude Code's built-in TodoWrite tool** to track workflow progress in real-time:
**Rule 2: In-Progress Task Count (Execution-Model-Dependent)**
- **Sequential execution**: Mark ONLY ONE task as `in_progress` at a time
- **Parallel batch execution**: Mark ALL tasks in current batch as `in_progress` simultaneously
- **Execution group indicator**: Show `[execution_group: group-id]` for parallel tasks
**Rule 3: Status Updates**
- **Immediate Updates**: Update status after each task/batch completion without user interruption
- **Status Synchronization**: Sync with JSON task files after updates
- **Continuous Tracking**: Maintain TodoWrite throughout entire workflow execution until completion
**Rule 4: Workflow Completion Check**
- When all tasks marked `completed`, auto-call `/workflow:session:complete`
### TodoWrite Tool Usage
**Example 1: Sequential Execution**
```javascript
// Example 1: Sequential execution (traditional)
TodoWrite({
todos: [
{
content: "Execute IMPL-1.1: Design auth schema [code-developer] [FLOW_CONTROL]",
status: "in_progress", // Single task in progress
status: "in_progress", // ONE task in progress
activeForm: "Executing IMPL-1.1: Design auth schema"
},
{
@@ -340,8 +280,10 @@ TodoWrite({
}
]
});
```
// Example 2: Batch execution (parallel tasks with execution_group)
**Example 2: Parallel Batch Execution**
```javascript
TodoWrite({
todos: [
{
@@ -366,255 +308,50 @@ TodoWrite({
}
]
});
// Example 3: After batch completion
TodoWrite({
todos: [
{
content: "Execute IMPL-1.1: Build Auth API [code-developer] [execution_group: parallel-auth-api]",
status: "completed", // Batch completed
activeForm: "Executing IMPL-1.1: Build Auth API"
},
{
content: "Execute IMPL-1.2: Build User UI [code-developer] [execution_group: parallel-ui-comp]",
status: "completed", // Batch completed
activeForm: "Executing IMPL-1.2: Build User UI"
},
{
content: "Execute IMPL-1.3: Setup Database [code-developer] [execution_group: parallel-db-schema]",
status: "completed", // Batch completed
activeForm: "Executing IMPL-1.3: Setup Database"
},
{
content: "Execute IMPL-2.1: Integration Tests [test-fix-agent]",
status: "in_progress", // Next batch started
activeForm: "Executing IMPL-2.1: Integration Tests"
}
]
});
```
**TodoWrite Integration Rules**:
- **Continuous Workflow Tracking**: Use TodoWrite tool throughout entire workflow execution
- **Real-time Updates**: Immediate progress tracking without user interruption
- **Single Active Task**: Only ONE task marked as `in_progress` at any time
- **Immediate Completion**: Mark tasks `completed` immediately after finishing
- **Status Sync**: Sync TodoWrite status with JSON task files after each update
- **Full Execution**: Continue TodoWrite tracking until all workflow tasks complete
- **Workflow Completion Check**: When all tasks marked `completed`, auto-call `/workflow:session:complete`
## Agent Execution Pattern
#### TODO_LIST.md Update Timing
**Single source of truth for task status** - enables lazy loading by providing task metadata without reading JSONs
### Flow Control Execution
**[FLOW_CONTROL]** marker indicates task JSON contains `flow_control.pre_analysis` steps for context preparation.
- **Before Agent Launch**: Mark task as `in_progress`
- **After Task Complete**: Mark as `completed`, advance to next
- **On Error**: Keep as `in_progress`, add error note
- **Workflow Complete**: Call `/workflow:session:complete`
**Note**: Orchestrator does NOT execute flow control steps - Agent interprets and executes them autonomously.
### 3. Agent Context Management
**Comprehensive context preparation** for autonomous agent execution:
### Agent Prompt Template
**Dynamic Generation**: Before agent invocation, read task JSON and extract key requirements.
#### Context Sources (Priority Order)
1. **Complete Task JSON**: Full task definition including all fields and artifacts
2. **Artifacts Context**: Brainstorming outputs and role analyses from task.context.artifacts
3. **Flow Control Context**: Accumulated outputs from pre_analysis steps (including artifact loading)
4. **Dependency Summaries**: Previous task completion summaries
5. **Session Context**: Workflow paths and session metadata
6. **Inherited Context**: Parent task context and shared variables
#### Context Assembly Process
```
1. Load Task JSON → Base context (including artifacts array)
2. Load Artifacts → Synthesis specifications and brainstorming outputs
3. Execute Flow Control → Accumulated context (with artifact loading steps)
4. Load Dependencies → Dependency context
5. Prepare Session Paths → Session context
6. Combine All → Complete agent context with artifact integration
```
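A minimal sketch of this assembly, assuming all reads have already been performed and are passed in as loaded objects; field names mirror the package structure shown next (priority/conditional annotations omitted for brevity):

```javascript
// Combines task JSON, context-package artifacts, flow outputs, and session paths into one context object.
function assembleAgentContext({ taskJson, contextPackage, stepOutputs, dependencySummaries, sessionPaths }) {
  const artifacts = contextPackage?.brainstorm_artifacts ?? {};
  return {
    task: taskJson,
    artifacts: {
      synthesis_specification: artifacts.synthesis_output ?? null,
      guidance_specification: artifacts.guidance_specification ?? null,
      role_analyses: artifacts.role_analyses ?? [],
      conflict_resolution: artifacts.conflict_resolution ?? null
    },
    flow_context: { step_outputs: stepOutputs ?? {} },
    session: sessionPaths,
    dependencies: dependencySummaries ?? [],
    inherited: taskJson?.context?.inherited ?? {}
  };
}
```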
#### Agent Context Package Structure
```json
{
"task": { /* Complete task JSON with artifacts array */ },
"artifacts": {
"synthesis_specification": { "path": "{{from context-package.json → brainstorm_artifacts.synthesis_output.path}}", "priority": "highest" },
"guidance_specification": { "path": "{{from context-package.json → brainstorm_artifacts.guidance_specification.path}}", "priority": "medium" },
"role_analyses": [ /* From context-package.json brainstorm_artifacts.role_analyses[] */ ],
"conflict_resolution": { "path": "{{from context-package.json → brainstorm_artifacts.conflict_resolution.path}}", "conditional": true }
},
"flow_context": {
"step_outputs": {
"synthesis_specification": "...",
"individual_artifacts": "...",
"pattern_analysis": "...",
"dependency_context": "..."
}
},
"session": {
"workflow_dir": ".workflow/WFS-session/",
"context_package_path": ".workflow/WFS-session/.process/context-package.json",
"todo_list_path": ".workflow/WFS-session/TODO_LIST.md",
"summaries_dir": ".workflow/WFS-session/.summaries/",
"task_json_path": ".workflow/WFS-session/.task/IMPL-1.1.json"
},
"dependencies": [ /* Task summaries from depends_on */ ],
"inherited": { /* Parent task context */ }
}
```
#### Context Validation Rules
- **Task JSON Complete**: All six fields present and valid, including the artifacts array in context
- **Artifacts Available**: All artifacts loaded from context-package.json
- **Flow Control Ready**: All pre_analysis steps completed including artifact loading steps
- **Dependencies Loaded**: All depends_on summaries available
- **Session Paths Valid**: All workflow paths exist and accessible (verified via context-package.json)
- **Agent Assignment**: Valid agent type specified in meta.agent
### 4. Agent Execution Pattern
**Structured agent invocation** with complete context and clear instructions:
#### Agent Prompt Template
```bash
Task(subagent_type="{meta.agent}",
prompt="**EXECUTE TASK FROM JSON**
prompt="Execute task: {task.title}
## Task JSON Location
{session.task_json_path}
{[FLOW_CONTROL]}
## Instructions
1. **Load Complete Task JSON**: Read and validate all fields (id, title, status, meta, context, flow_control)
2. **Execute Flow Control**: If `flow_control.pre_analysis` exists, execute steps sequentially:
   - Load artifacts (synthesis specifications, role analysis documents) using commands in each step
- Accumulate context from step outputs using variable substitution [variable_name]
- Handle errors per step.on_error (skip_optional | fail | retry_once)
3. **Implement Solution**: Follow `flow_control.implementation_approach` using accumulated context
4. **Complete Task**:
- Update task status: `jq '.status = \"completed\"' {session.task_json_path} > temp.json && mv temp.json {session.task_json_path}`
- Update TODO_LIST.md: Mark task as [x] completed in {session.todo_list_path}
- Generate summary: {session.summaries_dir}/{task.id}-summary.md
- Check workflow completion and call `/workflow:session:complete` if all tasks done
**Task Objectives** (from task JSON):
{task.context.objective}
## Context Sources (All from JSON)
- Requirements: `context.requirements`
- Focus Paths: `context.focus_paths`
- Acceptance: `context.acceptance`
- Artifacts: `context.artifacts` (synthesis specs, brainstorming outputs)
- Dependencies: `context.depends_on`
- Target Files: `flow_control.target_files`
**Expected Deliverables** (from task JSON):
{task.context.deliverables}
## Session Paths
**Quality Standards** (from task JSON):
{task.context.acceptance_criteria}
**MANDATORY FIRST STEPS**:
1. Read complete task JSON: {session.task_json_path}
2. Load context package: {session.context_package_path}
Follow complete execution guidelines in @.claude/agents/{meta.agent}.md
**Session Paths**:
- Workflow Dir: {session.workflow_dir}
- TODO List: {session.todo_list_path}
- Summaries: {session.summaries_dir}
- Flow Context: {flow_context.step_outputs}
- Summaries Dir: {session.summaries_dir}
- Context Package: {session.context_package_path}
**Complete JSON structure is authoritative - load and follow it exactly.**"),
description="Execute task: {task.id}")
**Success Criteria**: Complete all objectives, meet all quality standards, deliver all outputs as specified above.",
description="Executing: {task.title}")
```
#### Agent JSON Loading Specification
**MANDATORY AGENT PROTOCOL**: All agents must follow this exact loading sequence:
1. **JSON Loading**: First action must be `cat {session.task_json_path}`
2. **Field Validation**: Verify all six required fields exist: `id`, `title`, `status`, `meta`, `context`, `flow_control`
3. **Structure Parsing**: Parse nested fields correctly:
- `meta.type` and `meta.agent` (NOT flat `task_type`)
- `context.requirements`, `context.focus_paths`, `context.acceptance`
- `context.depends_on`, `context.inherited`
- `flow_control.pre_analysis` array, `flow_control.target_files`
4. **Flow Control Execution**: If `flow_control.pre_analysis` exists, execute steps sequentially
5. **Status Management**: Update JSON status upon completion
**JSON Field Reference**:
```json
{
"id": "IMPL-1.2",
"title": "Task title",
"status": "pending|active|completed|blocked",
"meta": {
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
"agent": "@code-developer|@test-fix-agent|@universal-executor"
},
"context": {
"requirements": ["req1", "req2"],
"focus_paths": ["src/path1", "src/path2"],
"acceptance": ["criteria1", "criteria2"],
"depends_on": ["IMPL-1.1"],
"inherited": { "from": "parent", "context": ["info"] },
"artifacts": [
{
"type": "synthesis_specification",
"source": "context-package.json → brainstorm_artifacts.synthesis_output",
"path": "{{loaded dynamically from context-package.json}}",
"priority": "highest",
"contains": "complete_integrated_specification"
},
{
"type": "individual_role_analysis",
"source": "context-package.json → brainstorm_artifacts.role_analyses[]",
"path": "{{loaded dynamically from context-package.json}}",
"note": "Supports analysis*.md pattern (analysis.md, analysis-01.md, analysis-api.md, etc.)",
"priority": "low",
"contains": "role_specific_analysis_fallback"
}
]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_synthesis_specification",
"action": "Load synthesis specification from context-package.json",
"commands": [
"Read(.workflow/WFS-[session]/.process/context-package.json)",
"Extract(brainstorm_artifacts.synthesis_output.path)",
"Read(extracted path)"
],
"output_to": "synthesis_specification",
"on_error": "skip_optional"
},
{
"step": "step_name",
"command": "bash_command",
"output_to": "variable",
"on_error": "skip_optional|fail|retry_once"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Implement task following role analyses",
"description": "Implement '[title]' following role analyses. PRIORITY: Use role analysis documents as primary requirement source. When implementation needs technical details (e.g., API schemas, caching configs, design tokens), refer to artifacts[] for detailed specifications from original role analyses.",
"modification_points": [
"Apply consolidated requirements from role analysis documents",
"Follow technical guidelines from synthesis",
"Consult artifacts for implementation details when needed",
"Integrate with existing patterns"
],
"logic_flow": [
"Load role analyses",
"Parse architecture and requirements",
"Implement following specification",
"Consult artifacts for technical details when needed",
"Validate against acceptance criteria"
],
"depends_on": [],
"output": "implementation"
}
],
"target_files": ["file:function:lines", "path/to/NewFile.ts"]
}
}
```
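A minimal sketch of the mandatory validation from the loading protocol above: check the six top-level fields, then the nested fields agents rely on (the function name is illustrative):

```javascript
// Throws on any structural problem so the orchestrator can mark the task blocked.
function validateTaskJson(task) {
  const required = ["id", "title", "status", "meta", "context", "flow_control"];
  const missing = required.filter(field => !(field in task));
  if (missing.length) throw new Error(`Task JSON missing fields: ${missing.join(", ")}`);
  if (!task.meta?.type || !task.meta?.agent) throw new Error("meta.type and meta.agent are required");
  if (task.flow_control.pre_analysis && !Array.isArray(task.flow_control.pre_analysis)) {
    throw new Error("flow_control.pre_analysis must be an array of steps");
  }
  return true;
}
```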
#### Execution Flow
1. **Load Task JSON**: Agent reads and validates complete JSON structure
2. **Execute Flow Control**: Agent runs pre_analysis steps if present
3. **Prepare Implementation**: Agent uses implementation_approach from JSON
4. **Launch Implementation**: Agent follows focus_paths and target_files
5. **Update Status**: Agent marks JSON status as completed
6. **Generate Summary**: Agent creates completion summary
#### Agent Assignment Rules
### Agent Assignment Rules
```
meta.agent specified → Use specified agent
meta.agent missing → Infer from meta.type:
@@ -625,18 +362,12 @@ meta.agent missing → Infer from meta.type:
- "docs" → @doc-generator
```
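A minimal sketch of this inference. Only the "docs" mapping is shown explicitly in the excerpt above; the remaining entries are assumptions based on the agents referenced elsewhere in this document and should be adjusted to the full rule block.

```javascript
// Resolves the executing agent: explicit meta.agent wins, otherwise infer from meta.type.
function resolveAgent(meta) {
  if (meta.agent) return meta.agent;
  const byType = {
    feature: "@code-developer",      // assumed
    bugfix: "@code-developer",       // assumed
    refactor: "@code-developer",     // assumed
    "test-gen": "@test-fix-agent",   // assumed
    "test-fix": "@test-fix-agent",   // assumed
    docs: "@doc-generator"           // shown above
  };
  return byType[meta.type] ?? "@universal-executor";  // conservative fallback
}
```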
#### Error Handling During Execution
- **Agent Failure**: Retry once with adjusted context
- **Flow Control Error**: Skip optional steps, fail on critical
- **Context Missing**: Reload from JSON files and retry
- **Timeout**: Mark as blocked, continue with next task
## Workflow File Structure Reference
```
.workflow/WFS-[topic-slug]/
.workflow/active/WFS-[topic-slug]/
├── workflow-session.json # Session state and metadata
├── IMPL_PLAN.md # Planning document and requirements
├── TODO_LIST.md # Progress tracking (auto-updated)
├── TODO_LIST.md # Progress tracking (updated by agents)
├── .task/ # Task definitions (JSON only)
│ ├── IMPL-1.json # Main task definitions
│ └── IMPL-1.1.json # Subtask definitions
@@ -644,78 +375,26 @@ meta.agent missing → Infer from meta.type:
│ ├── IMPL-1-summary.md # Task completion details
│ └── IMPL-1.1-summary.md # Subtask completion details
└── .process/ # Planning artifacts
├── context-package.json # Smart context package
└── ANALYSIS_RESULTS.md # Planning analysis results
```
## Error Handling & Recovery
### Discovery Phase Errors
| Error | Cause | Resolution | Command |
|-------|-------|------------|---------|
| No active session | No `.active-*` markers found | Create or resume session | `/workflow:plan "project"` |
| Multiple sessions | Multiple `.active-*` markers | Select specific session | Manual choice prompt |
| Corrupted session | Invalid JSON files | Recreate session structure | `/workflow:session:status --validate` |
| Missing task files | Broken task references | Regenerate tasks | `/task:create` or repair |
### Common Errors & Recovery
### Execution Phase Errors
| Error | Cause | Recovery Strategy | Max Attempts |
|-------|-------|------------------|--------------|
| Error Type | Cause | Recovery Strategy | Max Attempts |
|-----------|-------|------------------|--------------|
| **Discovery Errors** |
| No active session | No sessions in `.workflow/active/` | Create or resume session: `/workflow:plan "project"` | N/A |
| Multiple sessions | Multiple sessions in `.workflow/active/` | Prompt user selection | N/A |
| Corrupted session | Invalid JSON files | Recreate session structure or validate files | N/A |
| **Execution Errors** |
| Agent failure | Agent crash/timeout | Retry with simplified context | 2 |
| Flow control error | Command failure | Skip optional, fail critical | 1 per step |
| Context loading error | Missing dependencies | Reload from JSON, use defaults | 3 |
| JSON file corruption | File system issues | Restore from backup/recreate | 1 |
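A minimal sketch of the "retry with simplified context" strategy from the table above (max 2 attempts for agent failures); `executeTask` and `simplifyContext` are illustrative names for the orchestrator's own helpers.

```javascript
// Retries a failing task with reduced context, then marks it blocked so the workflow continues.
async function executeWithRetry(task, executeTask, simplifyContext, maxAttempts = 2) {
  let context = task;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await executeTask(context);
    } catch (err) {
      if (attempt === maxAttempts) {
        return { taskId: task.id, status: "blocked", error: String(err) }; // continue with next task
      }
      context = simplifyContext(context);  // retry once with adjusted context
    }
  }
}
```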
### Recovery Procedures
#### Session Recovery
```bash
# Check session integrity
find .workflow -name ".active-*" | while read marker; do
session=$(basename "$marker" | sed 's/^\.active-//')
if [ ! -d ".workflow/$session" ]; then
echo "Removing orphaned marker: $marker"
rm "$marker"
fi
done
# Recreate corrupted session files
if [ ! -f ".workflow/$session/workflow-session.json" ]; then
echo '{"session_id":"'$session'","status":"active"}' > ".workflow/$session/workflow-session.json"
fi
```
#### Task Recovery
```bash
# Validate task JSON integrity
for task_file in .workflow/$session/.task/*.json; do
if ! jq empty "$task_file" 2>/dev/null; then
echo "Corrupted task file: $task_file"
# Backup and regenerate or restore from backup
fi
done
# Fix missing dependencies
missing_deps=$(jq -r '.context.depends_on[]?' .workflow/$session/.task/*.json | sort -u)
for dep in $missing_deps; do
if [ ! -f ".workflow/$session/.task/$dep.json" ]; then
echo "Missing dependency: $dep - creating placeholder"
fi
done
```
#### Context Recovery
```bash
# Reload context from available sources
if [ -f ".workflow/$session/.process/ANALYSIS_RESULTS.md" ]; then
echo "Reloading planning context..."
fi
# Restore from documentation if available
if [ -d ".workflow/docs/" ]; then
echo "Using documentation context as fallback..."
fi
```
### Error Prevention
- **Pre-flight Checks**: Validate session integrity before execution
- **Backup Strategy**: Create task snapshots before major operations
@@ -723,16 +402,3 @@ fi
- **Dependency Validation**: Check all depends_on references exist
- **Context Verification**: Ensure all required context is available
## Usage Examples
### Basic Usage
```bash
/workflow:execute # Execute all pending tasks autonomously
/workflow:session:status # Check progress
/task:execute IMPL-1.2 # Execute specific task
```
### Integration
- **Planning**: `/workflow:plan` → `/workflow:execute` → `/workflow:review`
- **Recovery**: `/workflow:status --validate` → `/workflow:execute`


@@ -0,0 +1,160 @@
---
name: init
description: Initialize project-level state with intelligent project analysis using cli-explore-agent
argument-hint: "[--regenerate]"
examples:
- /workflow:init
- /workflow:init --regenerate
---
# Workflow Init Command (/workflow:init)
## Overview
Initialize `.workflow/project.json` with comprehensive project understanding by delegating analysis to **cli-explore-agent**.
**Note**: This command may be called by other workflow commands. Upon completion, return immediately to continue the calling workflow without interrupting the task flow.
## Usage
```bash
/workflow:init # Initialize (skip if exists)
/workflow:init --regenerate # Force regeneration
```
## Implementation
### Step 1: Check Existing State
```bash
bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
```
**If EXISTS and no --regenerate**: Exit early
```
Project already initialized at .workflow/project.json
Use /workflow:init --regenerate to rebuild
Use /workflow:status --project to view state
```
### Step 2: Get Project Metadata
```bash
bash(basename "$(git rev-parse --show-toplevel 2>/dev/null || pwd)")
bash(git rev-parse --show-toplevel 2>/dev/null || pwd)
bash(mkdir -p .workflow)
```
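The Step 3 prompt below interpolates `${projectName}`, `${projectRoot}`, and `${regenerate}`, which are not defined elsewhere in this command. A minimal sketch of capturing them, assuming the `bash(...)`-returns-stdout pseudocode style used throughout this document:
```javascript
// Sketch only: capture Step 2 outputs into the variables the Step 3 prompt interpolates
const projectName = bash(`basename "$(git rev-parse --show-toplevel 2>/dev/null || pwd)"`)
const projectRoot = bash(`git rev-parse --show-toplevel 2>/dev/null || pwd`)
const regenerate  = args.includes('--regenerate')   // assumed flag-parsing helper, drives backup/preserve behavior
bash(`mkdir -p .workflow`)
```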
### Step 3: Invoke cli-explore-agent
**For --regenerate**: Backup and preserve existing data
```bash
bash(cp .workflow/project.json .workflow/project.json.backup)
```
**Delegate analysis to agent**:
```javascript
Task(
subagent_type="cli-explore-agent",
description="Deep project analysis",
prompt=`
Analyze project for workflow initialization and generate .workflow/project.json.
## Output Schema Reference
~/.claude/workflows/cli-templates/schemas/project-json-schema.json
## Task
Generate complete project.json with:
- project_name: ${projectName}
- initialized_at: current ISO timestamp
- overview: {description, technology_stack, architecture, key_components, entry_points, metrics}
- features: ${regenerate ? 'preserve from backup' : '[] (empty)'}
- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated}'}
- memory_resources: {skills, documentation, module_docs, gaps, last_scanned}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp, analysis_mode}
## Analysis Requirements
**Technology Stack**:
- Languages: File counts, mark primary
- Frameworks: From package.json, requirements.txt, go.mod, etc.
- Build tools: npm, cargo, maven, webpack, vite
- Test frameworks: jest, pytest, go test, junit
**Architecture**:
- Style: MVC, microservices, layered (from structure & imports)
- Layers: presentation, business-logic, data-access
- Patterns: singleton, factory, repository
- Key components: 5-10 modules {name, path, description, importance}
**Metrics**:
- total_files: Source files (exclude tests/configs)
- lines_of_code: Use find + wc -l
- module_count: Use ~/.claude/scripts/get_modules_by_depth.sh
- complexity: low | medium | high
**Entry Points**:
- main: index.ts, main.py, main.go
- cli_commands: package.json scripts, Makefile targets
- api_endpoints: HTTP/REST routes (if applicable)
**Memory Resources**:
- skills: Scan .claude/skills/ → [{name, type, path}]
- documentation: Scan .workflow/docs/ → [{name, path, has_readme, has_architecture}]
- module_docs: Find **/CLAUDE.md (exclude node_modules, .git)
- gaps: Identify missing resources
## Execution
1. Structural scan: get_modules_by_depth.sh, find, wc -l
2. Semantic analysis: Gemini for patterns/architecture
3. Synthesis: Merge findings
4. ${regenerate ? 'Merge with preserved features/statistics from .workflow/project.json.backup' : ''}
5. Write JSON: Write('.workflow/project.json', jsonContent)
6. Report: Return brief completion summary
Project root: ${projectRoot}
`
)
```
### Step 4: Display Summary
```javascript
const projectJson = JSON.parse(Read('.workflow/project.json'));
console.log(`
✓ Project initialized successfully
## Project Overview
Name: ${projectJson.project_name}
Description: ${projectJson.overview.description}
### Technology Stack
Languages: ${projectJson.overview.technology_stack.languages.map(l => l.name).join(', ')}
Frameworks: ${projectJson.overview.technology_stack.frameworks.join(', ')}
### Architecture
Style: ${projectJson.overview.architecture.style}
Components: ${projectJson.overview.key_components.length} core modules
### Metrics
Files: ${projectJson.overview.metrics.total_files}
LOC: ${projectJson.overview.metrics.lines_of_code}
Complexity: ${projectJson.overview.metrics.complexity}
### Memory Resources
SKILL Packages: ${projectJson.memory_resources.skills.length}
Documentation: ${projectJson.memory_resources.documentation.length}
Module Docs: ${projectJson.memory_resources.module_docs.length}
Gaps: ${projectJson.memory_resources.gaps.join(', ') || 'none'}
---
Project state: .workflow/project.json
${regenerate ? 'Backup: .workflow/project.json.backup' : ''}
`);
```
## Error Handling
**Agent Failure**: Fall back to basic initialization with placeholder overview
**Missing Tools**: Agent uses Qwen fallback or bash-only
**Empty Project**: Create minimal JSON with all gaps identified
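A minimal fallback `project.json` sketch for these cases, assuming the field list from the Step 3 prompt (the authoritative shape is `project-json-schema.json`):
```json
{
  "project_name": "my-project",
  "initialized_at": "2025-11-24T00:00:00Z",
  "overview": { "description": "TBD", "technology_stack": {}, "architecture": {}, "key_components": [], "entry_points": {}, "metrics": {} },
  "features": [],
  "statistics": { "total_features": 0, "total_sessions": 0, "last_updated": "2025-11-24T00:00:00Z" },
  "memory_resources": { "skills": [], "documentation": [], "module_docs": [], "gaps": ["overview", "skills", "documentation"], "last_scanned": null },
  "_metadata": { "initialized_by": "fallback", "analysis_mode": "basic" }
}
```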


@@ -0,0 +1,585 @@
---
name: lite-execute
description: Execute tasks based on in-memory plan, prompt description, or file content
argument-hint: "[--in-memory] [\"task description\"|file-path]"
allowed-tools: TodoWrite(*), Task(*), Bash(*)
---
# Workflow Lite-Execute Command (/workflow:lite-execute)
## Overview
Flexible task execution command supporting three input modes: in-memory plan (from lite-plan), direct prompt description, or file content. Handles execution orchestration, progress tracking, and optional code review.
**Core capabilities:**
- Multi-mode input (in-memory plan, prompt description, or file path)
- Execution orchestration (Agent or Codex) with full context
- Live progress tracking via TodoWrite at execution call level
- Optional code review with selected tool (Gemini, Agent, or custom)
- Context continuity across multiple executions
- Intelligent format detection (Enhanced Task JSON vs plain text)
## Usage
### Command Syntax
```bash
/workflow:lite-execute [FLAGS] <INPUT>
# Flags
--in-memory Use plan from memory (called by lite-plan)
# Arguments
<input> Task description string, or path to file (required)
```
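For reference, the three input modes might be invoked like this; the file path is an example session artifact, not a required location:
```bash
# Mode 1: called by lite-plan after plan approval
/workflow:lite-execute --in-memory
# Mode 2: plain prompt description
/workflow:lite-execute "Add unit tests for auth module"
# Mode 3: file input (Enhanced Task JSON or plain text)
/workflow:lite-execute .workflow/.lite-plan/add-unit-tests-2025-01-15-14-30-45/task.json
```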
## Input Modes
### Mode 1: In-Memory Plan
**Trigger**: Called by lite-plan after Phase 4 approval with `--in-memory` flag
**Input Source**: `executionContext` global variable set by lite-plan
**Content**: Complete execution context (see Data Structures section)
**Behavior**:
- Skip execution method selection (already set by lite-plan)
- Directly proceed to execution with full context
- All planning artifacts available (exploration, clarifications, plan)
### Mode 2: Prompt Description
**Trigger**: User calls with task description string
**Input**: Simple task description (e.g., "Add unit tests for auth module")
**Behavior**:
- Store prompt as `originalUserInput`
- Create simple execution plan from prompt
- AskUserQuestion: Select execution method (Agent/Codex/Auto)
- AskUserQuestion: Select code review tool (Skip/Gemini/Agent/Other)
- Proceed to execution with `originalUserInput` included
**User Interaction**:
```javascript
AskUserQuestion({
questions: [
{
question: "Select execution method:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent", description: "@code-developer agent" },
{ label: "Codex", description: "codex CLI tool" },
{ label: "Auto", description: "Auto-select based on complexity" }
]
},
{
question: "Enable code review after execution?",
header: "Code Review",
multiSelect: false,
options: [
{ label: "Skip", description: "No review" },
{ label: "Gemini Review", description: "Gemini CLI tool" },
{ label: "Agent Review", description: "Current agent review" }
]
}
]
})
```
### Mode 3: File Content
**Trigger**: User calls with file path
**Input**: Path to file containing task description or Enhanced Task JSON
**Step 1: Read and Detect Format**
```javascript
fileContent = Read(filePath)
// Attempt JSON parsing
try {
jsonData = JSON.parse(fileContent)
// Check if Enhanced Task JSON from lite-plan
if (jsonData.meta?.workflow === "lite-plan") {
// Extract plan data
planObject = {
summary: jsonData.context.plan.summary,
approach: jsonData.context.plan.approach,
tasks: jsonData.context.plan.tasks,
estimated_time: jsonData.meta.estimated_time,
recommended_execution: jsonData.meta.recommended_execution,
complexity: jsonData.meta.complexity
}
explorationContext = jsonData.context.exploration || null
clarificationContext = jsonData.context.clarifications || null
originalUserInput = jsonData.title
isEnhancedTaskJson = true
} else {
// Valid JSON but not Enhanced Task JSON - treat as plain text
originalUserInput = fileContent
isEnhancedTaskJson = false
}
} catch {
// Not valid JSON - treat as plain text prompt
originalUserInput = fileContent
isEnhancedTaskJson = false
}
```
**Step 2: Create Execution Plan**
If `isEnhancedTaskJson === true`:
- Use extracted `planObject` directly
- Skip planning, use lite-plan's existing plan
- User still selects execution method and code review
If `isEnhancedTaskJson === false`:
- Treat file content as prompt (same behavior as Mode 2)
- Create simple execution plan from content
**Step 3: User Interaction**
- AskUserQuestion: Select execution method (Agent/Codex/Auto)
- AskUserQuestion: Select code review tool
- Proceed to execution with full context
## Execution Process
### Workflow Overview
```
Input Processing → Mode Detection
|
v
[Mode 1] --in-memory: Load executionContext → Skip selection
[Mode 2] Prompt: Create plan → User selects method + review
[Mode 3] File: Detect format → Extract plan OR treat as prompt → User selects
|
v
Execution & Progress Tracking
├─ Step 1: Initialize execution tracking
├─ Step 2: Create TodoWrite execution list
├─ Step 3: Launch execution (Agent or Codex)
├─ Step 4: Track execution progress
└─ Step 5: Code review (optional)
|
v
Execution Complete
```
## Detailed Execution Steps
### Step 1: Initialize Execution Tracking
**Operations**:
- Initialize result tracking for multi-execution scenarios
- Set up `previousExecutionResults` array for context continuity
```javascript
// Initialize result tracking
previousExecutionResults = []
```
### Step 2: Task Grouping & Batch Creation
**Dependency Analysis & Grouping Algorithm**:
```javascript
// Infer dependencies: same file → sequential, keywords (use/integrate) → sequential
function inferDependencies(tasks) {
return tasks.map((task, i) => {
const deps = []
const file = task.file || task.title.match(/in\s+([^\s:]+)/)?.[1]
const keywords = (task.description || task.title).toLowerCase()
for (let j = 0; j < i; j++) {
const prevFile = tasks[j].file || tasks[j].title.match(/in\s+([^\s:]+)/)?.[1]
if (file && prevFile === file) deps.push(j) // Same file
else if (/use|integrate|call|import/.test(keywords)) deps.push(j) // Keyword dependency
}
return { ...task, taskIndex: i, dependencies: deps }
})
}
// Group into batches: independent → parallel [P1,P2...], dependent → sequential [S1,S2...]
function createExecutionCalls(tasks, executionMethod) {
const tasksWithDeps = inferDependencies(tasks)
const maxBatch = executionMethod === "Codex" ? 4 : 7
const calls = []
const processed = new Set()
// Parallel: independent tasks, different files, max batch size
const parallelGroups = []
tasksWithDeps.forEach(t => {
if (t.dependencies.length === 0 && !processed.has(t.taskIndex)) {
const group = [t]
processed.add(t.taskIndex)
tasksWithDeps.forEach(o => {
if (!o.dependencies.length && !processed.has(o.taskIndex) &&
group.length < maxBatch && t.file !== o.file) {
group.push(o)
processed.add(o.taskIndex)
}
})
parallelGroups.push(group)
}
})
// Sequential: dependent tasks, batch when deps satisfied
const remaining = tasksWithDeps.filter(t => !processed.has(t.taskIndex))
while (remaining.length > 0) {
const batch = remaining.filter((t, i) =>
i < maxBatch && t.dependencies.every(d => processed.has(d))
)
if (!batch.length) break
batch.forEach(t => processed.add(t.taskIndex))
calls.push({ executionType: "sequential", groupId: `S${calls.length + 1}`, tasks: batch })
remaining.splice(0, remaining.length, ...remaining.filter(t => !processed.has(t.taskIndex)))
}
// Combine results
return [
...parallelGroups.map((g, i) => ({
method: executionMethod, executionType: "parallel", groupId: `P${i+1}`,
taskSummary: g.map(t => t.title).join(' | '), tasks: g
})),
...calls.map(c => ({ ...c, method: executionMethod, taskSummary: c.tasks.map(t => t.title).join(' → ') }))
]
}
executionCalls = createExecutionCalls(planObject.tasks, executionMethod).map(c => ({ ...c, id: `[${c.groupId}]` }))
TodoWrite({
todos: executionCalls.map(c => ({
content: `${c.executionType === "parallel" ? "⚡" : "→"} ${c.id} (${c.tasks.length} tasks)`,
status: "pending",
activeForm: `Executing ${c.id}`
}))
})
```
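As a worked example, consider a hypothetical three-task plan: tasks 0 and 1 touch different files with no dependencies, and task 2's description says it will "use" the new validator, so it is inferred as dependent. The grouping above would then yield roughly:
```javascript
// Hypothetical output of createExecutionCalls(tasks, "Agent") for the three-task plan described above
[
  { method: "Agent", executionType: "parallel",   groupId: "P1", id: "[P1]",
    taskSummary: "Add validator util | Add config flag", tasks: [/* tasks 0 and 1 */] },
  { method: "Agent", executionType: "sequential", groupId: "S1", id: "[S1]",
    taskSummary: "Use validator in login route",         tasks: [/* task 2 */] }
]
```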
### Step 3: Launch Execution
**Execution Flow**: Parallel batches concurrently → Sequential batches in order
```javascript
const parallel = executionCalls.filter(c => c.executionType === "parallel")
const sequential = executionCalls.filter(c => c.executionType === "sequential")
// Phase 1: Launch all parallel batches (single message with multiple tool calls)
if (parallel.length > 0) {
  TodoWrite({ todos: executionCalls.map(c => ({ ...c, status: c.executionType === "parallel" ? "in_progress" : "pending" })) })
  parallelResults = await Promise.all(parallel.map(c => executeBatch(c)))
  previousExecutionResults.push(...parallelResults)
  parallel.forEach(c => { c.status = "completed" })
  TodoWrite({ todos: executionCalls.map(c => ({ ...c, status: c.status || "pending" })) })
}
// Phase 2: Execute sequential batches one by one
for (const call of sequential) {
  TodoWrite({ todos: executionCalls.map(c => ({ ...c, status: c === call ? "in_progress" : (c.status || "pending") })) })
  result = await executeBatch(call)
  previousExecutionResults.push(result)
  call.status = "completed"
  TodoWrite({ todos: executionCalls.map(c => ({ ...c, status: c.status || "pending" })) })
}
```
**Option A: Agent Execution**
When to use:
- `executionMethod = "Agent"`
- `executionMethod = "Auto" AND complexity = "Low"`
Agent call format:
```javascript
function formatTaskForAgent(task, index) {
return `
### Task ${index + 1}: ${task.title}
**File**: ${task.file}
**Action**: ${task.action}
**Description**: ${task.description}
**Implementation Steps**:
${task.implementation.map((step, i) => `${i + 1}. ${step}`).join('\n')}
**Reference**:
- Pattern: ${task.reference.pattern}
- Example Files: ${task.reference.files.join(', ')}
- Guidance: ${task.reference.examples}
**Acceptance Criteria**:
${task.acceptance.map((criterion, i) => `${i + 1}. ${criterion}`).join('\n')}
`
}
Task(
subagent_type="code-developer",
description="Implement planned tasks",
prompt=`
${originalUserInput ? `## Original User Request\n${originalUserInput}\n\n` : ''}
## Implementation Plan
**Summary**: ${planObject.summary}
**Approach**: ${planObject.approach}
## Task Breakdown (${planObject.tasks.length} tasks)
${planObject.tasks.map((task, i) => formatTaskForAgent(task, i)).join('\n')}
${previousExecutionResults.length > 0 ? `\n## Previous Execution Results\n${previousExecutionResults.map(result => `
[${result.executionId}] ${result.status}
Tasks: ${result.tasksSummary}
Completion: ${result.completionSummary}
Outputs: ${result.keyOutputs || 'See git diff'}
${result.notes ? `Notes: ${result.notes}` : ''}
`).join('\n---\n')}` : ''}
## Code Context
${explorationContext || "No exploration performed"}
${clarificationContext ? `\n## Clarifications\n${JSON.stringify(clarificationContext, null, 2)}` : ''}
${executionContext?.session?.artifacts ? `\n## Planning Artifacts
Detailed planning context available in:
${executionContext.session.artifacts.exploration ? `- Exploration: ${executionContext.session.artifacts.exploration}` : ''}
- Plan: ${executionContext.session.artifacts.plan}
- Task: ${executionContext.session.artifacts.task}
Read these files for detailed architecture, patterns, and constraints.` : ''}
## Requirements
MUST complete ALL ${planObject.tasks.length} tasks listed above in this single execution.
Return only after all tasks are fully implemented and tested.
`
)
```
**Result Collection**: After completion, collect result following `executionResult` structure (see Data Structures section)
**Option B: CLI Execution (Codex)**
When to use:
- `executionMethod = "Codex"`
- `executionMethod = "Auto" AND complexity = "Medium" or "High"`
**Artifact Path Delegation**:
- Include artifact file paths in CLI prompt for enhanced context
- Codex can read artifact files for detailed planning information
- Example: Reference exploration.json for architecture patterns
Command format:
```bash
function formatTaskForCodex(task, index) {
return `
${index + 1}. ${task.title} (${task.file})
Action: ${task.action}
What: ${task.description}
How:
${task.implementation.map((step, i) => ` ${i + 1}. ${step}`).join('\n')}
Reference: ${task.reference.pattern} (see ${task.reference.files.join(', ')})
Guidance: ${task.reference.examples}
Verify:
${task.acceptance.map((criterion, i) => ` - ${criterion}`).join('\n')}
`
}
codex --full-auto exec "
${originalUserInput ? `## Original User Request\n${originalUserInput}\n\n` : ''}
## Implementation Plan
TASK: ${planObject.summary}
APPROACH: ${planObject.approach}
### Task Breakdown (${planObject.tasks.length} tasks)
${planObject.tasks.map((task, i) => formatTaskForCodex(task, i)).join('\n')}
${previousExecutionResults.length > 0 ? `\n### Previous Execution Results\n${previousExecutionResults.map(result => `
[${result.executionId}] ${result.status}
Tasks: ${result.tasksSummary}
Status: ${result.completionSummary}
Outputs: ${result.keyOutputs || 'See git diff'}
${result.notes ? `Notes: ${result.notes}` : ''}
`).join('\n---\n')}
IMPORTANT: Review previous results. Build on completed work. Avoid duplication.
` : ''}
### Code Context from Exploration
${explorationContext ? `
Project Structure: ${explorationContext.project_structure || 'Standard structure'}
Relevant Files: ${explorationContext.relevant_files?.join(', ') || 'TBD'}
Current Patterns: ${explorationContext.patterns || 'Follow existing conventions'}
Integration Points: ${explorationContext.dependencies || 'None specified'}
Constraints: ${explorationContext.constraints || 'None'}
` : 'No prior exploration - analyze codebase as needed'}
${clarificationContext ? `\n### User Clarifications\n${Object.entries(clarificationContext).map(([q, a]) => `${q}: ${a}`).join('\n')}` : ''}
${executionContext?.session?.artifacts ? `\n### Planning Artifact Files
Detailed planning context available in session folder:
${executionContext.session.artifacts.exploration ? `- Exploration: ${executionContext.session.artifacts.exploration}` : ''}
- Plan: ${executionContext.session.artifacts.plan}
- Task: ${executionContext.session.artifacts.task}
Read these files for complete architecture details, code patterns, and integration constraints.
` : ''}
## Requirements
MUST complete ALL ${planObject.tasks.length} tasks listed above in this single execution.
Return only after all tasks are fully implemented and tested.
Complexity: ${planObject.complexity}
" --skip-git-repo-check -s danger-full-access
```
**Execution with tracking**:
```javascript
// Launch CLI in foreground (NOT background)
bash_result = Bash(
command=cli_command,
timeout=600000 // 10 minutes
)
// Update TodoWrite when execution completes
```
**Result Collection**: After completion, analyze output and collect result following `executionResult` structure
### Step 4: Progress Tracking
Progress tracked at batch level (not individual task level). Icons: ⚡ (parallel, concurrent), → (sequential, one-by-one)
### Step 5: Code Review (Optional)
**Skip Condition**: Only run if `codeReviewTool ≠ "Skip"`
**Review Focus**: Verify implementation against task.json acceptance criteria
- Read task.json from session artifacts for acceptance criteria
- Check each acceptance criterion is fulfilled
- Validate code quality and identify issues
- Ensure alignment with planned approach
**Operations**:
- Agent Review: Current agent performs direct review (read task.json for acceptance criteria)
- Gemini Review: Execute gemini CLI with review prompt (task.json in CONTEXT)
- Custom tool: Execute specified CLI tool (qwen, codex, etc.) with task.json reference
**Unified Review Template** (All tools use same standard):
**Review Criteria**:
- **Acceptance Criteria**: Verify each criterion from task.json `context.acceptance`
- **Code Quality**: Analyze quality, identify issues, suggest improvements
- **Plan Alignment**: Validate implementation matches planned approach
**Shared Prompt Template** (used by all CLI tools):
```
PURPOSE: Code review for implemented changes against task.json acceptance criteria
TASK: • Verify task.json acceptance criteria fulfillment • Analyze code quality • Identify issues • Suggest improvements • Validate plan adherence
MODE: analysis
CONTEXT: @**/* @{task.json} @{plan.json} [@{exploration.json}] | Memory: Review lite-execute changes against task.json requirements
EXPECTED: Quality assessment with acceptance criteria verification, issue identification, and recommendations. Explicitly check each acceptance criterion from task.json.
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on task.json acceptance criteria and plan adherence | analysis=READ-ONLY
```
**Tool-Specific Execution** (Apply shared prompt template above):
```bash
# Method 1: Agent Review (current agent)
# - Read task.json: ${executionContext.session.artifacts.task}
# - Apply unified review criteria (see Shared Prompt Template)
# - Report findings directly
# Method 2: Gemini Review (recommended)
gemini -p "[Shared Prompt Template with artifacts]"
# CONTEXT includes: @**/* @${task.json} @${plan.json} [@${exploration.json}]
# Method 3: Qwen Review (alternative)
qwen -p "[Shared Prompt Template with artifacts]"
# Same prompt as Gemini, different execution engine
# Method 4: Codex Review (autonomous)
codex --full-auto exec "[Verify task.json acceptance criteria at ${task.json}]" --skip-git-repo-check -s danger-full-access
```
**Implementation Note**: Replace the `[Shared Prompt Template with artifacts]` placeholder with the actual template content, substituting (an illustrative substituted command follows this list):
- `@{task.json}` → `@${executionContext.session.artifacts.task}`
- `@{plan.json}` → `@${executionContext.session.artifacts.plan}`
- `[@{exploration.json}]` → `@${executionContext.session.artifacts.exploration}` (if exists)
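For example, a substituted Gemini invocation might look like the following; the session paths are hypothetical and come from `executionContext.session.artifacts` at runtime:
```bash
# Illustrative only - substitute artifact paths from executionContext.session.artifacts
gemini -p "PURPOSE: Code review for implemented changes against task.json acceptance criteria
TASK: • Verify task.json acceptance criteria fulfillment • Analyze code quality • Identify issues • Suggest improvements • Validate plan adherence
MODE: analysis
CONTEXT: @**/* @.workflow/.lite-plan/add-unit-tests-2025-01-15-14-30-45/task.json @.workflow/.lite-plan/add-unit-tests-2025-01-15-14-30-45/plan.json | Memory: Review lite-execute changes against task.json requirements
EXPECTED: Quality assessment with acceptance criteria verification, issue identification, and recommendations.
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on task.json acceptance criteria and plan adherence | analysis=READ-ONLY"
```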
## Best Practices
**Input Modes**: In-memory (lite-plan), prompt (standalone), file (JSON/text)
**Batch Limits**: Agent 7 tasks, CLI 4 tasks
**Execution**: Parallel batches use single Claude message with multiple tool calls (no concurrency limit)
## Error Handling
| Error | Cause | Resolution |
|-------|-------|------------|
| Missing executionContext | --in-memory without context | Error: "No execution context found. Only available when called by lite-plan." |
| File not found | File path doesn't exist | Error: "File not found: {path}. Check file path." |
| Empty file | File exists but no content | Error: "File is empty: {path}. Provide task description." |
| Invalid Enhanced Task JSON | JSON missing required fields | Warning: "Missing required fields. Treating as plain text." |
| Malformed JSON | JSON parsing fails | Treat as plain text (expected for non-JSON files) |
| Execution failure | Agent/Codex crashes | Display error, save partial progress, suggest retry |
| Codex unavailable | Codex not installed | Show installation instructions, offer Agent execution |
## Data Structures
### executionContext (Input - Mode 1)
Passed from lite-plan via global variable:
```javascript
{
planObject: {
summary: string,
approach: string,
tasks: [...],
estimated_time: string,
recommended_execution: string,
complexity: string
},
explorationContext: {...} | null,
clarificationContext: {...} | null,
executionMethod: "Agent" | "Codex" | "Auto",
codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
originalUserInput: string,
// Session artifacts location (saved by lite-plan)
session: {
id: string, // Session identifier: {taskSlug}-{shortTimestamp}
folder: string, // Session folder path: .workflow/.lite-plan/{session-id}
artifacts: {
exploration: string | null, // exploration.json path (if exploration performed)
plan: string, // plan.json path (always present)
task: string // task.json path (always exported)
}
}
}
```
**Artifact Usage**:
- Artifact files contain detailed planning context
- Pass artifact paths to CLI tools and agents for enhanced context
- See execution options below for usage examples
### executionResult (Output)
Collected after each execution call completes:
```javascript
{
executionId: string, // e.g., "[Agent-1]", "[Codex-1]"
status: "completed" | "partial" | "failed",
tasksSummary: string, // Brief description of tasks handled
completionSummary: string, // What was completed
keyOutputs: string, // Files created/modified, key changes
notes: string // Important context for next execution
}
```
Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.
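A populated example (all values hypothetical):
```javascript
{
  executionId: "[P1]",
  status: "completed",
  tasksSummary: "Add validator util | Add config flag",
  completionSummary: "Both tasks implemented; unit tests added and passing",
  keyOutputs: "src/utils/validator.ts (new), src/config/flags.ts (modified)",
  notes: "Validator exported as named export; later batches should import from src/utils/validator"
}
```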


@@ -0,0 +1,760 @@
---
name: lite-fix
description: Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents
argument-hint: "[--hotfix] \"bug description or issue reference\""
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Read(*), Bash(*)
---
# Workflow Lite-Fix Command (/workflow:lite-fix)
## Overview
Fast-track bug fixing workflow optimized for quick diagnosis, targeted fixes, and streamlined verification. Automatically adjusts process complexity based on impact assessment.
**Core capabilities:**
- Rapid root cause diagnosis with intelligent code search
- Automatic severity assessment and adaptive workflow
- Fix strategy selection (immediate patch vs comprehensive refactor)
- Risk-aware verification (smoke tests to full suite)
- Optional hotfix mode for production incidents with branch management
- Automatic follow-up task generation for hotfixes
## Usage
### Command Syntax
```bash
/workflow:lite-fix [FLAGS] <BUG_DESCRIPTION>
# Flags
--hotfix, -h Production hotfix mode (creates hotfix branch, auto follow-up)
# Arguments
<bug-description> Bug description or issue reference (required)
```
### Modes
| Mode | Time Budget | Use Case | Workflow Characteristics |
|------|-------------|----------|--------------------------|
| **Default** | Auto-adapt (15min-4h) | All standard bugs | Intelligent severity assessment + adaptive process |
| **Hotfix** (`--hotfix`) | 15-30 min | Production outage | Minimal diagnosis + hotfix branch + auto follow-up |
### Examples
```bash
# Default mode: Automatically adjusts based on impact
/workflow:lite-fix "User avatar upload fails with 413 error"
/workflow:lite-fix "Shopping cart randomly loses items at checkout"
# Hotfix mode: Production incident
/workflow:lite-fix --hotfix "Payment gateway 5xx errors"
```
## Execution Process
### Workflow Overview
```
Bug Input → Diagnosis (Phase 1) → Impact Assessment (Phase 2)
Severity Auto-Detection → Fix Planning (Phase 3)
Verification Strategy (Phase 4) → User Confirmation (Phase 5) → Execution (Phase 6)
```
### Phase Summary
| Phase | Default Mode | Hotfix Mode |
|-------|--------------|-------------|
| 1. Diagnosis | Adaptive search depth | Minimal (known issue) |
| 2. Impact Assessment | Full risk scoring | Critical path only |
| 3. Fix Planning | Strategy options based on complexity | Single surgical fix |
| 4. Verification | Test level matches risk score | Smoke tests only |
| 5. User Confirmation | 3 dimensions | 2 dimensions |
| 6. Execution | Via lite-execute | Via lite-execute + monitoring |
---
## Detailed Phase Execution
### Phase 1: Diagnosis & Root Cause Analysis
**Goal**: Identify root cause and affected code paths
**Session Folder Setup**:
```javascript
// Generate session identifiers for artifact storage
const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
const timestamp = new Date().toISOString().replace(/[:.]/g, '-')
const shortTimestamp = timestamp.substring(0, 19).replace('T', '-') // YYYY-MM-DD-HH-mm-ss
const sessionId = `${bugSlug}-${shortTimestamp}`
const sessionFolder = `.workflow/.lite-fix/${sessionId}`
```
**Execution Strategy**:
**Default Mode** - Adaptive search:
- **High confidence keywords** (e.g., specific error messages): Direct grep search (5min)
- **Medium confidence**: cli-explore-agent with focused search (10-15min)
- **Low confidence** (vague symptoms): cli-explore-agent with broad search (20min)
```javascript
// Confidence-based strategy selection
if (has_specific_error_message || has_file_path_hint) {
// Quick targeted search
grep -r '${error_message}' src/ --include='*.ts' -n | head -10
git log --oneline --since='1 week ago' -- '*affected*'
} else {
// Deep exploration
Task(subagent_type="cli-explore-agent", prompt=`
Bug: ${bug_description}
Execute diagnostic search:
1. Search error patterns and similar issues
2. Trace execution path in affected modules
3. Check recent changes
Return: Root cause hypothesis, affected paths, reproduction steps
`)
}
```
**Hotfix Mode** - Minimal search:
```bash
Read(suspected_file) # User typically knows the file
git blame ${suspected_file}
```
**Output Structure**:
```javascript
diagnosisContext = {
symptom: string,
error_message: string | null,
keywords: string[],
confidence_level: "high" | "medium" | "low",
root_cause: {
file: "src/auth/tokenValidator.ts",
line_range: "45-52",
issue: "Token expiration check uses wrong comparison",
introduced_by: "commit abc123"
},
reproduction_steps: ["Login", "Wait 15min", "Access protected route"],
affected_scope: {
users: "All authenticated users",
features: ["login", "API access"],
data_risk: "none"
}
}
// Save diagnosis results for CLI/agent access in lite-execute
const diagnosisFile = `${sessionFolder}/diagnosis.json`
Write(diagnosisFile, JSON.stringify(diagnosisContext, null, 2))
```
**Output**: `diagnosisContext` (in-memory)
**Artifact**: Saved to `{sessionFolder}/diagnosis.json` for CLI/agent use
**TodoWrite**: Mark Phase 1 completed, Phase 2 in_progress
---
### Phase 2: Impact Assessment & Severity Auto-Detection
**Goal**: Quantify blast radius and auto-determine severity
**Risk Score Calculation**:
```javascript
risk_score = (user_impact × 0.4) + (system_risk × 0.3) + (business_impact × 0.3)
// Auto-severity mapping
if (risk_score >= 8.0) severity = "critical"
else if (risk_score >= 5.0) severity = "high"
else if (risk_score >= 3.0) severity = "medium"
else severity = "low"
// Workflow adaptation
if (severity === "critical" || severity === "high") {
diagnosis_depth = "focused"
test_strategy = "smoke_and_critical"
review_optional = true
} else {
diagnosis_depth = "comprehensive"
test_strategy = "full_suite"
review_optional = false
}
```
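A quick worked example, using hypothetical component scores that reproduce the sample assessment below:
```javascript
// Hypothetical inputs on a 0-10 scale
const user_impact = 8, system_risk = 6, business_impact = 7
const risk_score = user_impact * 0.4 + system_risk * 0.3 + business_impact * 0.3  // 3.2 + 1.8 + 2.1 = 7.1
// 7.1 falls in the 5.0-8.0 band, so severity = "high" (matches the sample impactContext below)
```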
**Assessment Output**:
```javascript
impactContext = {
affected_users: {
count: "5000 active users (100%)",
severity: "high"
},
system_risk: {
availability: "degraded_30%",
cascading_failures: "possible_logout_storm"
},
business_impact: {
revenue: "medium",
reputation: "high",
sla_breach: "yes"
},
risk_score: 7.1,
severity: "high",
workflow_adaptation: {
test_strategy: "focused_integration",
review_required: false,
time_budget: "1_hour"
}
}
// Save impact assessment for CLI/agent access
const impactFile = `${sessionFolder}/impact.json`
Write(impactFile, JSON.stringify(impactContext, null, 2))
```
**Output**: `impactContext` (in-memory)
**Artifact**: Saved to `{sessionFolder}/impact.json` for CLI/agent use
**Hotfix Mode**: Skip detailed assessment, assume critical
**TodoWrite**: Mark Phase 2 completed, Phase 3 in_progress
---
### Phase 3: Fix Planning & Strategy Selection
**Goal**: Generate fix options with trade-off analysis
**Strategy Generation**:
**Default Mode** - Complexity-adaptive:
- **Low risk score (<5.0)**: Generate 2-3 strategy options for user selection
- **High risk score (≥5.0)**: Generate single best strategy for speed
```javascript
strategies = generateFixStrategies(root_cause, risk_score)
if (risk_score >= 5.0 || mode === "hotfix") {
// Single best strategy
return strategies[0] // Fastest viable fix
} else {
// Multiple options with trade-offs
return strategies // Let user choose
}
```
**Example Strategies**:
```javascript
// Low risk: Multiple options
[
{
strategy: "immediate_patch",
description: "Fix comparison operator",
estimated_time: "15 minutes",
risk: "low",
pros: ["Quick fix"],
cons: ["Doesn't address underlying issue"]
},
{
strategy: "comprehensive_fix",
description: "Refactor token validation logic",
estimated_time: "2 hours",
risk: "medium",
pros: ["Addresses root cause"],
cons: ["Longer implementation"]
}
]
// High risk or hotfix: Single option
{
strategy: "surgical_fix",
description: "Minimal change to fix comparison",
files: ["src/auth/tokenValidator.ts:47"],
estimated_time: "5 minutes",
risk: "minimal"
}
```
**Complexity Assessment**:
```javascript
if (complexity === "high" && risk_score < 5.0) {
suggestCommand("/workflow:plan --mode bugfix")
return // Escalate to full planning
}
// Save fix plan for CLI/agent access
const planFile = `${sessionFolder}/fix-plan.json`
Write(planFile, JSON.stringify(fixPlan, null, 2))
```
**Output**: `fixPlan` (in-memory)
**Artifact**: Saved to `{sessionFolder}/fix-plan.json` for CLI/agent use
**TodoWrite**: Mark Phase 3 completed, Phase 4 in_progress
---
### Phase 4: Verification Strategy
**Goal**: Define testing approach based on severity
**Adaptive Test Strategy**:
| Risk Score | Test Scope | Duration | Automation |
|------------|------------|----------|------------|
| **< 3.0** (Low) | Full test suite | 15-20 min | `npm test` |
| **3.0-5.0** (Medium) | Focused integration | 8-12 min | `npm test -- affected-module.test.ts` |
| **5.0-8.0** (High) | Smoke + critical | 5-8 min | `npm test -- critical.smoke.test.ts` |
| **≥ 8.0** (Critical) | Smoke only | 2-5 min | `npm test -- smoke.test.ts` |
| **Hotfix** | Production smoke | 2-3 min | `npm test -- production.smoke.test.ts` |
**Branch Strategy**:
**Default Mode**:
```javascript
{
type: "feature_branch",
base: "main",
name: "fix/token-expiration-edge-case",
merge_target: "main"
}
```
**Hotfix Mode**:
```javascript
{
type: "hotfix_branch",
base: "production_tag_v2.3.1", // ⚠️ From production tag
name: "hotfix/token-validation-fix",
merge_target: ["main", "production"] // Dual merge
}
```
**TodoWrite**: Mark Phase 4 completed, Phase 5 in_progress
---
### Phase 5: User Confirmation & Execution Selection
**Adaptive Confirmation Dimensions**:
**Default Mode** - 3 dimensions (adapted by risk score):
```javascript
dimensions = [
{
question: "Confirm fix approach?",
options: ["Proceed", "Modify", "Escalate to /workflow:plan"]
},
{
question: "Execution method:",
options: ["Agent", "CLI Tool (Codex/Gemini)", "Manual (plan only)"]
},
{
question: "Verification level:",
options: adaptedByRiskScore() // Auto-suggest based on Phase 2
}
]
// If risk_score >= 5.0, auto-skip code review dimension
// If risk_score < 5.0, add optional code review dimension
if (risk_score < 5.0) {
dimensions.push({
question: "Post-fix review:",
options: ["Gemini", "Skip"]
})
}
```
**Hotfix Mode** - 2 dimensions (minimal):
```javascript
[
{
question: "Confirm hotfix deployment:",
options: ["Deploy", "Stage First", "Abort"]
},
{
question: "Post-deployment monitoring:",
options: ["Real-time (15 min)", "Passive (alerts only)"]
}
]
```
**TodoWrite**: Mark Phase 5 completed, Phase 6 in_progress
---
### Phase 6: Execution Dispatch & Follow-up
**Export Enhanced Task JSON**:
```javascript
const taskId = `BUGFIX-${shortTimestamp}`
const taskFile = `${sessionFolder}/task.json`
const enhancedTaskJson = {
id: taskId,
title: bug_description,
status: "pending",
meta: {
type: "bugfix",
created_at: new Date().toISOString(),
severity: impactContext.severity,
risk_score: impactContext.risk_score,
estimated_time: fixPlan.estimated_time,
workflow: mode === "hotfix" ? "lite-fix-hotfix" : "lite-fix",
session_id: sessionId,
session_folder: sessionFolder
},
context: {
requirements: [bug_description],
diagnosis: diagnosisContext,
impact: impactContext,
plan: fixPlan,
verification_strategy: verificationStrategy,
branch_strategy: branchStrategy
}
}
Write(taskFile, JSON.stringify(enhancedTaskJson, null, 2))
```
**Dispatch to lite-execute**:
```javascript
executionContext = {
mode: "bugfix",
severity: impactContext.severity,
planObject: fixPlan,
diagnosisContext: diagnosisContext,
impactContext: impactContext,
verificationStrategy: verificationStrategy,
branchStrategy: branchStrategy,
executionMethod: user_selection.execution_method,
// Session artifacts location
session: {
id: sessionId,
folder: sessionFolder,
artifacts: {
diagnosis: `${sessionFolder}/diagnosis.json`,
impact: `${sessionFolder}/impact.json`,
plan: `${sessionFolder}/fix-plan.json`,
task: `${sessionFolder}/task.json`
}
}
}
SlashCommand("/workflow:lite-execute --in-memory --mode bugfix")
```
**Hotfix Auto Follow-up**:
```javascript
if (mode === "hotfix") {
follow_up_tasks = [
{
id: `FOLLOWUP-${taskId}-comprehensive`,
title: "Replace hotfix with comprehensive fix",
priority: "high",
due_date: "within_3_days",
description: "Refactor quick hotfix into proper solution with full test coverage"
},
{
id: `FOLLOWUP-${taskId}-postmortem`,
title: "Incident postmortem",
priority: "medium",
due_date: "within_1_week",
sections: ["Timeline", "Root cause", "Prevention measures"]
}
]
  Write(`${sessionFolder}/followup.json`, JSON.stringify(follow_up_tasks, null, 2))
  console.log(`
⚠️ Hotfix follow-up tasks generated:
- Comprehensive fix: ${follow_up_tasks[0].id} (due in 3 days)
- Postmortem: ${follow_up_tasks[1].id} (due in 1 week)
- Location: ${sessionFolder}/followup.json
`)
}
```
**TodoWrite**: Mark Phase 6 completed
---
## Data Structures
### diagnosisContext
```javascript
{
symptom: string,
error_message: string | null,
keywords: string[],
confidence_level: "high" | "medium" | "low", // Search confidence
root_cause: {
file: string,
line_range: string,
issue: string,
introduced_by: string
},
reproduction_steps: string[],
affected_scope: {...}
}
```
### impactContext
```javascript
{
affected_users: { count: string, severity: string },
system_risk: { availability: string, cascading_failures: string },
business_impact: { revenue: string, reputation: string, sla_breach: string },
risk_score: number, // 0-10
severity: "low" | "medium" | "high" | "critical",
workflow_adaptation: {
diagnosis_depth: string,
test_strategy: string,
review_optional: boolean,
time_budget: string
}
}
```
### fixPlan
```javascript
{
strategy: string,
summary: string,
tasks: [{
title: string,
file: string,
action: "Update" | "Create" | "Delete",
implementation: string[],
verification: string[]
}],
estimated_time: string,
recommended_execution: "Agent" | "CLI" | "Manual"
}
```
### executionContext
Context passed to lite-execute via --in-memory (Phase 6):
```javascript
{
mode: "bugfix",
severity: "high" | "medium" | "low" | "critical",
// Core data objects
planObject: {...}, // Complete fixPlan (see above)
diagnosisContext: {...}, // Complete diagnosisContext (see above)
impactContext: {...}, // Complete impactContext (see above)
// Verification and branch strategies
verificationStrategy: {...},
branchStrategy: {...},
executionMethod: "Agent" | "CLI" | "Manual",
// Session artifacts location (for lite-execute to access saved files)
session: {
id: string, // Session identifier: {bugSlug}-{shortTimestamp}
folder: string, // Session folder path: .workflow/.lite-fix/{session-id}
artifacts: {
diagnosis: string, // diagnosis.json path
impact: string, // impact.json path
plan: string, // fix-plan.json path
task: string // task.json path
}
}
}
```
### Enhanced Task JSON Export
Task JSON structure exported in Phase 6:
```json
{
"id": "BUGFIX-{timestamp}",
"title": "Original bug description",
"status": "pending",
"meta": {
"type": "bugfix",
"created_at": "ISO timestamp",
"severity": "low|medium|high|critical",
"risk_score": 7.1,
"estimated_time": "X minutes",
"workflow": "lite-fix|lite-fix-hotfix",
"session_id": "{bugSlug}-{shortTimestamp}",
"session_folder": ".workflow/.lite-fix/{session-id}"
},
"context": {
"requirements": ["Original bug description"],
"diagnosis": {/* diagnosisContext */},
"impact": {/* impactContext */},
"plan": {/* fixPlan */},
"verification_strategy": {/* test strategy */},
"branch_strategy": {/* branch strategy */}
}
}
```
**Schema Notes**:
- Aligns with Enhanced Task JSON Schema (6-field structure)
- `context_package_path` omitted (not used by lite-fix)
- `flow_control` omitted (handled by lite-execute)
---
## Best Practices
### When to Use Default Mode
**Use for all standard bugs:**
- Automatically adapts to severity (no manual mode selection needed)
- Risk score determines workflow complexity
- Handles 90% of bug fixing scenarios
**Typical scenarios:**
- UI bugs, logic errors, edge cases
- Performance issues (non-critical)
- Integration failures
- Data validation bugs
### When to Use Hotfix Mode
**Only use for production incidents:**
- Production is down or critically degraded
- Revenue/reputation at immediate risk
- SLA breach occurring
- Issue is well-understood (minimal diagnosis needed)
**Hotfix characteristics:**
- Creates hotfix branch from production tag
- Minimal diagnosis (assumes known issue)
- Smoke tests only
- Auto-generates follow-up tasks
- Requires incident tracking
### Branching Strategy
**Default Mode (feature branch)**:
```bash
# Standard feature branch workflow
git checkout -b fix/issue-description main
# ... implement fix
git checkout main && git merge fix/issue-description
```
**Hotfix Mode (dual merge)**:
```bash
# ✅ Correct: Branch from production tag
git checkout -b hotfix/fix-name v2.3.1
# Merge to both targets
git checkout main && git merge hotfix/fix-name
git checkout production && git merge hotfix/fix-name
git tag v2.3.2
# ❌ Wrong: Branch from main
git checkout -b hotfix/fix-name main # Contains unreleased code!
```
---
## Error Handling
| Error | Cause | Resolution |
|-------|-------|------------|
| Root cause unclear | Vague symptoms | Extend diagnosis time or use /cli:mode:bug-diagnosis |
| Multiple potential causes | Complex interaction | Use /cli:discuss-plan for analysis |
| Fix too complex | High-risk refactor | Escalate to /workflow:plan --mode bugfix |
| High risk score but unsure | Uncertain severity | Default mode will adapt, proceed normally |
---
## Session Folder Structure
Each lite-fix execution creates a dedicated session folder to organize all artifacts:
```
.workflow/.lite-fix/{bug-slug}-{short-timestamp}/
├── diagnosis.json # Phase 1: Root cause analysis
├── impact.json # Phase 2: Impact assessment
├── fix-plan.json # Phase 3: Fix strategy
├── task.json # Phase 6: Enhanced Task JSON
└── followup.json # Hotfix mode only: Follow-up tasks
```
**Folder Naming Convention**:
- `{bug-slug}`: First 40 characters of bug description, lowercased, non-alphanumeric replaced with `-`
- `{short-timestamp}`: YYYY-MM-DD-HH-mm-ss format
- Example: `.workflow/.lite-fix/user-avatar-upload-fails-413-2025-01-15-14-30-45/`
**File Contents**:
- `diagnosis.json`: Complete diagnosisContext object (Phase 1)
- `impact.json`: Complete impactContext object (Phase 2)
- `fix-plan.json`: Complete fixPlan object (Phase 3)
- `task.json`: Enhanced Task JSON with all context (Phase 6)
- `followup.json`: Follow-up tasks (hotfix mode only)
**Access Patterns**:
- **lite-fix**: Creates folder and writes all artifacts during execution, passes paths via `executionContext.session.artifacts`
- **lite-execute**: Reads artifact paths from `executionContext.session.artifacts`
- **User**: Can inspect artifacts for debugging or reference
- **Reuse**: Pass `task.json` path to `/workflow:lite-execute {path}` for re-execution
**Legacy Cache** (deprecated, use session folder instead):
```
.workflow/.lite-fix-cache/
└── diagnosis-cache/
└── ${bug_hash}.json
```
## Quality Gates
**Before execution** (auto-checked):
- [ ] Root cause identified (>70% confidence for default, >90% for hotfix)
- [ ] Impact scope defined
- [ ] Fix strategy reviewed
- [ ] Verification plan matches risk level
**Hotfix-specific**:
- [ ] Production tag identified
- [ ] Rollback plan documented
- [ ] Follow-up tasks generated
- [ ] Monitoring configured
---
## When to Use lite-fix
**Perfect for:**
- Any bug with clear symptoms
- Localized fixes (1-5 files)
- Known technology stack
- Time-sensitive but not catastrophic (default mode adapts)
- Production incidents (use --hotfix)
**Not suitable for:**
- Root cause completely unclear → use `/cli:mode:bug-diagnosis` first
- Requires architectural changes → use `/workflow:plan`
- Complex legacy code without tests → use `/workflow:plan --legacy-refactor`
- Performance deep-dive → use `/workflow:plan --performance-optimization`
- Data migration → use `/workflow:plan --data-migration`
---
**Last Updated**: 2025-11-20
**Version**: 2.0.0
**Status**: Design Document (Simplified)


@@ -0,0 +1,359 @@
---
name: lite-plan
description: Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation
argument-hint: "[-e|--explore] \"task description\"|file.md"
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*)
---
# Workflow Lite-Plan Command (/workflow:lite-plan)
## Overview
Intelligent lightweight planning command with dynamic workflow adaptation based on task complexity. Focuses on planning phases (exploration, clarification, planning, confirmation) and delegates execution to `/workflow:lite-execute`.
**Core capabilities:**
- Intelligent task analysis with automatic exploration detection
- Dynamic code exploration (cli-explore-agent) when codebase understanding needed
- Interactive clarification after exploration to gather missing information
- Adaptive planning strategy (direct Claude vs cli-lite-planning-agent) based on complexity
- Two-step confirmation: plan display → multi-dimensional input collection
- Execution dispatch with complete context handoff to lite-execute
## Usage
```bash
/workflow:lite-plan [FLAGS] <TASK_DESCRIPTION>
# Flags
-e, --explore Force code exploration phase (overrides auto-detection)
# Arguments
<task-description> Task description or path to .md file (required)
```
## Execution Process
```
User Input → Task Analysis & Exploration Decision (Phase 1)
Clarification (Phase 2, optional)
Complexity Assessment & Planning (Phase 3)
Task Confirmation & Execution Selection (Phase 4)
Dispatch to Execution (Phase 5)
```
## Implementation
### Phase 1: Task Analysis & Exploration Decision
**Session Setup**:
```javascript
const taskSlug = task_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
const timestamp = new Date().toISOString().replace(/[:.]/g, '-')
const shortTimestamp = timestamp.substring(0, 19).replace('T', '-')
const sessionId = `${taskSlug}-${shortTimestamp}`
const sessionFolder = `.workflow/.lite-plan/${sessionId}`
bash(`mkdir -p ${sessionFolder}`)
```
**Decision Logic**:
```javascript
needsExploration = (
flags.includes('--explore') || flags.includes('-e') ||
task.mentions_specific_files ||
task.requires_codebase_context ||
task.needs_architecture_understanding ||
task.modifies_existing_code
)
```
**If exploration needed** - Invoke cli-explore-agent:
```javascript
Task(
subagent_type="cli-explore-agent",
description="Analyze codebase for task context",
prompt=`
Analyze codebase for task context and generate exploration.json.
## Output Schema Reference
~/.claude/workflows/cli-templates/schemas/explore-json-schema.json
## Task Description
${task_description}
## Requirements
Generate exploration.json with:
- project_structure: Architecture and module organization
- relevant_files: File paths to be affected
- patterns: Code patterns, conventions, styles
- dependencies: External/internal module dependencies
- integration_points: Task connections with existing code
- constraints: Technical limitations/requirements
- clarification_needs: Ambiguities requiring user input
- _metadata: timestamp, task_description, source
## Execution
1. Structural scan: get_modules_by_depth.sh, find, rg
2. Semantic analysis: Gemini for patterns/architecture
3. Write JSON: Write('${sessionFolder}/exploration.json', jsonContent)
4. Return brief completion summary
Time Limit: 60 seconds
`
)
```
**Output**: `${sessionFolder}/exploration.json` (if exploration performed)
---
### Phase 2: Clarification (Optional)
**Skip if**: No exploration or `clarification_needs` is empty
**If clarification needed**:
```javascript
const exploration = JSON.parse(Read(`${sessionFolder}/exploration.json`))
if (exploration.clarification_needs?.length > 0) {
AskUserQuestion({
questions: exploration.clarification_needs.map(need => ({
question: `${need.context}\n\n${need.question}`,
header: "Clarification",
multiSelect: false,
options: need.options.map(opt => ({
label: opt,
description: `Use ${opt} approach`
}))
}))
})
}
```
**Output**: `clarificationContext` (in-memory)
---
### Phase 3: Complexity Assessment & Planning
**Complexity Assessment**:
```javascript
complexityScore = {
file_count: exploration?.relevant_files?.length || 0,
integration_points: exploration?.dependencies?.length || 0,
architecture_changes: exploration?.constraints?.includes('architecture'),
task_scope: estimated_steps > 5
}
// Low: score < 3, Medium: 3-5, High: > 5
```
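One possible way to collapse these signals into the Low/Medium/High bands; the weights are assumptions, not specified by this command:
```javascript
// Sketch: convert the complexityScore signals into a numeric score, then band it (weights assumed)
function assessComplexity(s) {
  let score = 0
  score += Math.min(s.file_count, 4)            // up to 4 points for breadth of affected files
  score += Math.min(s.integration_points, 2)    // up to 2 points for coupling
  if (s.architecture_changes) score += 2
  if (s.task_scope) score += 1
  return score < 3 ? "Low" : score <= 5 ? "Medium" : "High"
}
```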
**Low Complexity** - Direct planning by Claude:
- Generate plan directly, write to `${sessionFolder}/plan.json`
- No agent invocation
**Medium/High Complexity** - Invoke cli-lite-planning-agent:
```javascript
Task(
subagent_type="cli-lite-planning-agent",
description="Generate detailed implementation plan",
prompt=`
Generate implementation plan and write plan.json.
## Output Schema Reference
~/.claude/workflows/cli-templates/schemas/plan-json-schema.json
## Task Description
${task_description}
## Exploration Context
${sessionFolder}/exploration.json (if exists)
## User Clarifications
${JSON.stringify(clarificationContext) || "None"}
## Complexity Level
${complexity}
## Requirements
Generate plan.json with:
- summary: 2-3 sentence overview
- approach: High-level implementation strategy
- tasks: 3-10 structured tasks with:
- title, file, action, description
- implementation (3-7 steps)
- reference (pattern, files, examples)
- acceptance (2-4 criteria)
- estimated_time, recommended_execution, complexity
- _metadata: timestamp, source, planning_mode
## Execution
1. Execute CLI planning using Gemini (Qwen fallback)
2. Parse output and structure plan
3. Write JSON: Write('${sessionFolder}/plan.json', jsonContent)
4. Return brief completion summary
`
)
```
**Output**: `${sessionFolder}/plan.json`
---
### Phase 4: Task Confirmation & Execution Selection
**Step 4.1: Display Plan**
```javascript
const plan = JSON.parse(Read(`${sessionFolder}/plan.json`))
console.log(`
## Implementation Plan
**Summary**: ${plan.summary}
**Approach**: ${plan.approach}
**Tasks** (${plan.tasks.length}):
${plan.tasks.map((t, i) => `${i+1}. ${t.title} (${t.file})`).join('\n')}
**Complexity**: ${plan.complexity}
**Estimated Time**: ${plan.estimated_time}
**Recommended**: ${plan.recommended_execution}
`)
```
**Step 4.2: Collect Confirmation**
```javascript
AskUserQuestion({
questions: [
{
question: `Confirm plan? (${plan.tasks.length} tasks, ${plan.complexity})`,
header: "Confirm",
multiSelect: true,
options: [
{ label: "Allow", description: "Proceed as-is" },
{ label: "Modify", description: "Adjust before execution" },
{ label: "Cancel", description: "Abort workflow" }
]
},
{
question: "Execution method:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent", description: "@code-developer agent" },
{ label: "Codex", description: "codex CLI tool" },
{ label: "Auto", description: `Auto: ${plan.complexity === 'Low' ? 'Agent' : 'Codex'}` }
]
},
{
question: "Code review after execution?",
header: "Review",
multiSelect: false,
options: [
{ label: "Gemini Review", description: "Gemini CLI" },
{ label: "Agent Review", description: "@code-reviewer" },
{ label: "Skip", description: "No review" }
]
}
]
})
```
---
### Phase 5: Dispatch to Execution
**Step 5.1: Generate task.json** (by command, not agent)
```javascript
const taskId = `LP-${shortTimestamp}`
const exploration = file_exists(`${sessionFolder}/exploration.json`)
? JSON.parse(Read(`${sessionFolder}/exploration.json`))
: null
const plan = JSON.parse(Read(`${sessionFolder}/plan.json`))
const taskJson = {
id: taskId,
title: task_description,
status: "pending",
meta: {
type: "planning",
created_at: new Date().toISOString(),
complexity: plan.complexity,
estimated_time: plan.estimated_time,
recommended_execution: plan.recommended_execution,
workflow: "lite-plan",
session_id: sessionId,
session_folder: sessionFolder
},
context: {
requirements: [task_description],
plan: {
summary: plan.summary,
approach: plan.approach,
tasks: plan.tasks
},
exploration: exploration,
clarifications: clarificationContext || null,
focus_paths: exploration?.relevant_files || [],
acceptance: plan.tasks.flatMap(t => t.acceptance)
}
}
Write(`${sessionFolder}/task.json`, JSON.stringify(taskJson, null, 2))
```
**Step 5.2: Store executionContext**
```javascript
executionContext = {
planObject: plan,
explorationContext: exploration,
clarificationContext: clarificationContext || null,
executionMethod: userSelection.execution_method,
codeReviewTool: userSelection.code_review_tool,
originalUserInput: task_description,
session: {
id: sessionId,
folder: sessionFolder,
artifacts: {
exploration: exploration ? `${sessionFolder}/exploration.json` : null,
plan: `${sessionFolder}/plan.json`,
task: `${sessionFolder}/task.json`
}
}
}
```
**Step 5.3: Dispatch**
```javascript
SlashCommand(command="/workflow:lite-execute --in-memory")
```
## Session Folder Structure
```
.workflow/.lite-plan/{task-slug}-{timestamp}/
├── exploration.json # If exploration performed (by cli-explore-agent)
├── plan.json # Always created (by agent or direct planning)
└── task.json # Always created (by command)
```
## Error Handling
| Error | Resolution |
|-------|------------|
| Exploration agent failure | Skip exploration, continue with task description only |
| Planning agent failure | Fallback to direct planning by Claude |
| Clarification timeout | Use exploration findings as-is |
| Confirmation timeout | Save context, display resume instructions |
| Modify loop > 3 times | Suggest breaking task or using /workflow:plan |


@@ -1,7 +1,7 @@
---
name: plan
description: 5-phase planning workflow with Gemini analysis and action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs with optional CLI auto-execution
argument-hint: "[--agent] [--cli-execute] \"text description\"|file.md"
description: 5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs with optional CLI auto-execution
argument-hint: "[--cli-execute] \"text description\"|file.md"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
---
@@ -20,18 +20,21 @@ This workflow runs **fully autonomously** once triggered. Phase 3 (conflict reso
2. **Phase 1 executes** → Session discovery → Auto-continues
3. **Phase 2 executes** → Context gathering → Auto-continues
4. **Phase 3 executes** (optional, if conflict_risk ≥ medium) → Conflict resolution → Auto-continues
5. **Phase 4 executes** (task-generate-agent if --agent) → Task generation → Reports final summary
5. **Phase 4 executes** → Task generation (task-generate-agent) → Reports final summary
**Task Attachment Model**:
- SlashCommand invocation **expands workflow** by attaching sub-tasks to current TodoWrite
- When a sub-command is invoked (e.g., `/workflow:tools:context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
- Orchestrator **executes these attached tasks** sequentially
- After completion, attached tasks are **collapsed** back to high-level phase summary
- This is **task expansion**, not external delegation
**Auto-Continue Mechanism**:
- TodoList tracks current phase status
- After each phase completion, automatically executes next pending phase
- TodoList tracks current phase status and dynamically manages task attachment/collapse
- When each phase finishes executing, automatically execute next pending phase
- All phases run autonomously without user interaction (clarification handled in brainstorm phase)
- Progress updates shown at each phase for visibility
**Execution Modes**:
- **Manual Mode** (default): Use `/workflow:tools:task-generate`
- **Agent Mode** (`--agent`): Use `/workflow:tools:task-generate-agent`
- **CLI Execute Mode** (`--cli-execute`): Generate tasks with Codex execution commands
- **⚠️ CONTINUOUS EXECUTION** - Do not stop until all phases complete
## Core Rules
@@ -39,8 +42,9 @@ This workflow runs **fully autonomously** once triggered. Phase 3 (conflict reso
2. **No Preliminary Analysis**: Do not read files, analyze structure, or gather context before Phase 1
3. **Parse Every Output**: Extract required data from each command/agent output for next phase
4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
5. **Track Progress**: Update TodoWrite after every phase completion
6. **Agent Delegation**: Phase 3 uses cli-execution-agent for autonomous intelligent analysis
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
6. **Task Attachment Model**: SlashCommand invocation **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
7. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
## 5-Phase Execution
@@ -66,7 +70,9 @@ CONTEXT: Existing user database schema, REST API endpoints
**Validation**:
- Session ID successfully extracted
- Session directory `.workflow/[sessionId]/` exists
- Session directory `.workflow/active/[sessionId]/` exists
**Note**: Session directory contains `workflow-session.json` (metadata). Do NOT look for `manifest.json` here - it only exists in `.workflow/archives/` for archived sessions.
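A minimal validation sketch, assuming the session ID extracted in Phase 1 is stored in `$sessionId`:
```bash
# Assumes the session ID extracted in Phase 1 is in $sessionId
if [ -d ".workflow/active/${sessionId}" ] && [ -f ".workflow/active/${sessionId}/workflow-session.json" ]; then
  echo "Session ${sessionId} validated"
else
  echo "ERROR: session directory or workflow-session.json missing" >&2
fi
```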
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress
@@ -83,15 +89,41 @@ CONTEXT: Existing user database schema, REST API endpoints
**Parse Output**:
- Extract: context-package.json path (store as `contextPath`)
- Typical pattern: `.workflow/[sessionId]/.process/context-package.json`
- Typical pattern: `.workflow/active/[sessionId]/.process/context-package.json`
**Validation**:
- Context package path extracted
- File exists and is valid JSON
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
<!-- TodoWrite: When context-gather invoked, INSERT 3 context-gather tasks, mark first as in_progress -->
**After Phase 2**: Return to user showing Phase 2 results, then auto-continue to Phase 3
**TodoWrite Update (Phase 2 SlashCommand invoked - tasks attached)**:
```json
[
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Phase 2.1: Analyze codebase structure (context-gather)", "status": "in_progress", "activeForm": "Analyzing codebase structure"},
{"content": "Phase 2.2: Identify integration points (context-gather)", "status": "pending", "activeForm": "Identifying integration points"},
{"content": "Phase 2.3: Generate context package (context-gather)", "status": "pending", "activeForm": "Generating context package"},
{"content": "Execute task generation", "status": "pending", "activeForm": "Executing task generation"}
]
```
**Note**: SlashCommand invocation **attaches** context-gather's 3 tasks. Orchestrator **executes** these tasks sequentially.
<!-- TodoWrite: After Phase 2 tasks complete, REMOVE Phase 2.1-2.3, restore to orchestrator view -->
**TodoWrite Update (Phase 2 completed - tasks collapsed)**:
```json
[
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Execute task generation", "status": "pending", "activeForm": "Executing task generation"}
]
```
**Note**: Phase 2 tasks completed and collapsed to summary.
**After Phase 2**: Return to user showing Phase 2 results, then auto-continue to Phase 3/4 (depending on conflict_risk)
---
@@ -111,13 +143,41 @@ CONTEXT: Existing user database schema, REST API endpoints
- Verify: CONFLICT_RESOLUTION.md file path (if executed)
**Validation**:
- File `.workflow/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
- File `.workflow/active/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
**Skip Behavior**:
- If conflict_risk is "none" or "low", skip directly to Phase 3.5
- Display: "No significant conflicts detected, proceeding to clarification"
**TodoWrite**: Mark phase 3 completed (if executed) or skipped, phase 3.5 in_progress
<!-- TodoWrite: If conflict_risk ≥ medium, INSERT 3 conflict-resolution tasks -->
**TodoWrite Update (Phase 3 SlashCommand invoked - tasks attached, if conflict_risk ≥ medium)**:
```json
[
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Phase 3.1: Detect conflicts with CLI analysis (conflict-resolution)", "status": "in_progress", "activeForm": "Detecting conflicts"},
{"content": "Phase 3.2: Present conflicts to user (conflict-resolution)", "status": "pending", "activeForm": "Presenting conflicts"},
{"content": "Phase 3.3: Apply resolution strategies (conflict-resolution)", "status": "pending", "activeForm": "Applying resolution strategies"},
{"content": "Execute task generation", "status": "pending", "activeForm": "Executing task generation"}
]
```
**Note**: SlashCommand invocation **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks sequentially.
<!-- TodoWrite: After Phase 3 tasks complete, REMOVE Phase 3.1-3.3, restore to orchestrator view -->
**TodoWrite Update (Phase 3 completed - tasks collapsed)**:
```json
[
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Resolve conflicts and apply fixes", "status": "completed", "activeForm": "Resolving conflicts"},
{"content": "Execute task generation", "status": "pending", "activeForm": "Executing task generation"}
]
```
**Note**: Phase 3 tasks completed and collapsed to summary.
**After Phase 3**: Return to user showing conflict resolution results (if executed) and selected strategies, then auto-continue to Phase 3.5
@@ -159,38 +219,56 @@ CONTEXT: Existing user database schema, REST API endpoints
- Task generation translates high-level role analyses into concrete, actionable work items
- **Intent priority**: Current user prompt > role analysis.md files > guidance-specification.md
**Command Selection**:
- Manual: `SlashCommand(command="/workflow:tools:task-generate --session [sessionId]")`
- Agent: `SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]")`
- CLI Execute: Add `--cli-execute` flag to either command
**Flag Combination**:
- `--cli-execute` alone: Manual task generation with CLI execution
- `--agent --cli-execute`: Agent task generation with CLI execution
**Command Examples**:
**Command**:
```bash
# Manual with CLI execution
/workflow:tools:task-generate --session WFS-auth --cli-execute
# Default (agent mode)
SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]")
# Agent with CLI execution
/workflow:tools:task-generate-agent --session WFS-auth --cli-execute
# With CLI execution
SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId] --cli-execute")
```
**Flag**:
- `--cli-execute`: Generate tasks with Codex execution commands
**Input**: `sessionId` from Phase 1
**Validation**:
- `.workflow/[sessionId]/IMPL_PLAN.md` exists
- `.workflow/[sessionId]/.task/IMPL-*.json` exists (at least one)
- `.workflow/[sessionId]/TODO_LIST.md` exists
- `.workflow/active/[sessionId]/IMPL_PLAN.md` exists
- `.workflow/active/[sessionId]/.task/IMPL-*.json` exists (at least one)
- `.workflow/active/[sessionId]/TODO_LIST.md` exists
**TodoWrite**: Mark phase 4 completed
<!-- TodoWrite: When task-generate-agent invoked, ATTACH 1 agent task -->
**TodoWrite Update (Phase 4 SlashCommand invoked - agent task attached)**:
```json
[
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Execute task-generate-agent", "status": "in_progress", "activeForm": "Executing task-generate-agent"}
]
```
**Note**: Single agent task attached. Agent autonomously completes discovery, planning, and output generation internally.
<!-- TodoWrite: After agent completes, mark task as completed -->
**TodoWrite Update (Phase 4 completed)**:
```json
[
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Execute task-generate-agent", "status": "completed", "activeForm": "Executing task-generate-agent"}
]
```
**Note**: Agent task completed. No collapse needed (single task).
**Return to User**:
```
Planning complete for session: [sessionId]
Tasks generated: [count]
Plan: .workflow/[sessionId]/IMPL_PLAN.md
Plan: .workflow/active/[sessionId]/IMPL_PLAN.md
Recommended Next Steps:
1. /workflow:action-plan-verify --session [sessionId] # Verify plan quality before execution
@@ -202,39 +280,36 @@ Quality Gate: Consider running /workflow:action-plan-verify to catch issues earl
## TodoWrite Pattern
```javascript
// Initialize (before Phase 1)
// Note: Phase 3 todo only added dynamically after Phase 2 if conflict_risk ≥ medium
TodoWrite({todos: [
{"content": "Execute session discovery", "status": "in_progress", "activeForm": "Executing session discovery"},
{"content": "Execute context gathering", "status": "pending", "activeForm": "Executing context gathering"},
// Phase 3 todo added dynamically after Phase 2 if conflict_risk ≥ medium
{"content": "Execute task generation", "status": "pending", "activeForm": "Executing task generation"}
]})
**Core Concept**: Dynamic task attachment and collapse for real-time visibility into workflow execution.
// After Phase 2 (if conflict_risk ≥ medium, insert Phase 3 todo)
TodoWrite({todos: [
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Resolve conflicts and apply fixes", "status": "in_progress", "activeForm": "Resolving conflicts"},
{"content": "Execute task generation", "status": "pending", "activeForm": "Executing task generation"}
]})
### Key Principles
// After Phase 2 (if conflict_risk is none/low, skip Phase 3, go directly to Phase 4)
TodoWrite({todos: [
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Execute task generation", "status": "in_progress", "activeForm": "Executing task generation"}
]})
1. **Task Attachment** (when SlashCommand invoked):
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
- **Phase 2, 3**: Multiple sub-tasks attached (e.g., Phase 2.1, 2.2, 2.3)
- **Phase 4**: Single agent task attached (e.g., "Execute task-generate-agent")
- First attached task marked as `in_progress`, others as `pending`
- Orchestrator **executes** these attached tasks sequentially
// After Phase 3 (if executed), continue to Phase 4
TodoWrite({todos: [
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Resolve conflicts and apply fixes", "status": "completed", "activeForm": "Resolving conflicts"},
{"content": "Execute task generation", "status": "in_progress", "activeForm": "Executing task generation"}
]})
```
2. **Task Collapse** (after sub-tasks complete):
- **Applies to Phase 2, 3**: Remove detailed sub-tasks from TodoWrite
- **Collapse** to high-level phase summary
- Example: Phase 2.1-2.3 collapse to "Execute context gathering: completed"
- **Phase 4**: No collapse needed (single task, just mark completed)
- Maintains clean orchestrator-level view
3. **Continuous Execution**:
- After completion, automatically proceed to next pending phase
- No user intervention required between phases
- TodoWrite dynamically reflects current execution state
**Lifecycle Summary**: Initial pending tasks → Phase invoked (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary for Phase 2/3, or marked completed for Phase 4) → Next phase begins → Repeat until all phases complete.
**Note**: See individual Phase descriptions for detailed TodoWrite Update examples:
- **Phase 2, 3**: Multiple sub-tasks with attach/collapse pattern
- **Phase 4**: Single agent task (no collapse needed)
## Input Processing
@@ -296,7 +371,7 @@ Phase 3: conflict-resolution [AUTO-TRIGGERED if conflict_risk ≥ medium]
↓ Output: Modified brainstorm artifacts (NO report file)
↓ Skip if conflict_risk is none/low → proceed directly to Phase 4
Phase 4: task-generate[--agent] --session sessionId
Phase 4: task-generate-agent --session sessionId [--cli-execute]
↓ Input: sessionId + resolved brainstorm artifacts + session memory
↓ Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md
@@ -309,11 +384,56 @@ Return summary to user
- Brainstorming artifacts (potentially modified by Phase 3)
- Session-specific configuration
**Structured Description Benefits**:
- **Clarity**: Clear separation of goal, scope, and context
- **Consistency**: Same format across all phases
- **Traceability**: Easy to track what was requested
- **Precision**: Better context gathering and analysis
## Execution Flow Diagram
```
User triggers: /workflow:plan "Build authentication system"
[TodoWrite Init] 3 orchestrator-level tasks
Phase 1: Session Discovery
→ sessionId extracted
Phase 2: Context Gathering (SlashCommand invoked)
→ ATTACH 3 tasks: ← ATTACHED
- Phase 2.1: Analyze codebase structure
- Phase 2.2: Identify integration points
- Phase 2.3: Generate context package
→ Execute Phase 2.1-2.3
→ COLLAPSE tasks ← COLLAPSED
→ contextPath + conflict_risk extracted
Conditional Branch: Check conflict_risk
├─ IF conflict_risk ≥ medium:
│ Phase 3: Conflict Resolution (SlashCommand invoked)
│ → ATTACH 3 tasks: ← ATTACHED
│ - Phase 3.1: Detect conflicts with CLI analysis
│ - Phase 3.2: Present conflicts to user
│ - Phase 3.3: Apply resolution strategies
│ → Execute Phase 3.1-3.3
│ → COLLAPSE tasks ← COLLAPSED
└─ ELSE: Skip Phase 3, proceed to Phase 4
Phase 4: Task Generation (SlashCommand invoked)
→ ATTACH 1 agent task: ← ATTACHED
- Execute task-generate-agent
→ Agent autonomously completes internally:
(discovery → planning → output)
→ Outputs: IMPL_PLAN.md, IMPL-*.json, TODO_LIST.md
Return summary to user
```
**Key Points**:
- **← ATTACHED**: Tasks attached to TodoWrite when SlashCommand invoked
- Phase 2, 3: Multiple sub-tasks
- Phase 4: Single agent task
- **← COLLAPSED**: Sub-tasks collapsed to summary after completion (Phase 2, 3 only)
- **Phase 4**: Single agent task, no collapse (just mark completed)
- **Conditional Branch**: Phase 3 only executes if conflict_risk ≥ medium
- **Continuous Flow**: No user intervention between phases
## Error Handling
@@ -331,11 +451,10 @@ Return summary to user
- Parse context path from Phase 2 output, store in memory
- **Extract conflict_risk from context-package.json**: Determine Phase 3 execution (see the jq sketch after this list)
- **If conflict_risk ≥ medium**: Launch Phase 3 conflict-resolution with sessionId and contextPath
- Wait for Phase 3 completion (if executed), verify CONFLICT_RESOLUTION.md created
- Wait for Phase 3 to finish executing (if executed), verify CONFLICT_RESOLUTION.md created
- **If conflict_risk is none/low**: Skip Phase 3, proceed directly to Phase 4
- **Build Phase 4 command** based on flags:
- Base command: `/workflow:tools:task-generate` (or `-agent` if `--agent` flag)
- Add `--session [sessionId]`
- **Build Phase 4 command**:
- Base command: `/workflow:tools:task-generate-agent --session [sessionId]`
- Add `--cli-execute` if flag present
- Pass session ID to Phase 4 command
- Verify all Phase 4 outputs
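A hedged sketch of the conflict_risk extraction mentioned above; the exact field name inside context-package.json is an assumption:
```bash
# Field name inside context-package.json is assumed; adjust to the actual schema
conflict_risk=$(jq -r '.conflict_risk // "none"' ".workflow/active/${sessionId}/.process/context-package.json")
if [ "$conflict_risk" = "medium" ] || [ "$conflict_risk" = "high" ]; then
  echo "conflict_risk=${conflict_risk} → launch Phase 3 conflict-resolution"
else
  echo "conflict_risk=${conflict_risk} → skip Phase 3, proceed to Phase 4"
fi
```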
@@ -380,8 +499,7 @@ CONSTRAINTS: [Limitations or boundaries]
- `/workflow:tools:context-gather` - Phase 2: Gather project context and analyze codebase
- `/workflow:tools:conflict-resolution` - Phase 3: Detect and resolve conflicts (auto-triggered if conflict_risk ≥ medium)
- `/compact` - Phase 3: Memory optimization (if context approaching limits)
- `/workflow:tools:task-generate` - Phase 4: Generate task JSON files with manual approach
- `/workflow:tools:task-generate-agent` - Phase 4: Generate task JSON files with agent-driven approach (when `--agent` flag used)
- `/workflow:tools:task-generate-agent` - Phase 4: Generate task JSON files with agent-driven approach
**Follow-up Commands**:
- `/workflow:action-plan-verify` - Recommended: Verify plan quality and catch issues before execution

View File

@@ -0,0 +1,470 @@
---
name: replan
description: Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning
argument-hint: "[--session session-id] [task-id] \"requirements\"|file.md [--interactive]"
allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
---
# Workflow Replan Command
## Overview
Intelligently replans workflow sessions or individual tasks with interactive boundary clarification and comprehensive artifact updates.
**Core Capabilities**:
- **Session Replan**: Updates multiple artifacts (IMPL_PLAN.md, TODO_LIST.md, task JSONs)
- **Task Replan**: Focused updates within session context
- **Interactive Clarification**: Guided questioning to define modification boundaries
- **Impact Analysis**: Automatic detection of affected files and dependencies
- **Backup Management**: Preserves previous versions with restore capability
## Operation Modes
### Session Replan Mode
```bash
# Auto-detect active session
/workflow:replan "添加双因素认证支持"
# Explicit session
/workflow:replan --session WFS-oauth "添加双因素认证支持"
# File-based input
/workflow:replan --session WFS-oauth requirements-update.md
# Interactive mode
/workflow:replan --interactive
```
### Task Replan Mode
```bash
# Direct task update
/workflow:replan IMPL-1 "修改为使用 OAuth2.0 标准"
# Task with explicit session
/workflow:replan --session WFS-oauth IMPL-2 "增加单元测试覆盖率到 90%"
# Interactive mode
/workflow:replan IMPL-1 --interactive
```
## Execution Lifecycle
### Phase 1: Mode Detection & Session Discovery
**Process**:
1. **Detect Operation Mode**:
- Check if task ID provided (IMPL-N or IMPL-N.M format) → Task mode
- Otherwise → Session mode
2. **Discover/Validate Session**:
- Use `--session` flag if provided
- Otherwise auto-detect from `.workflow/active/`
- Validate session exists
3. **Load Session Context**:
- Read `workflow-session.json`
- List existing tasks
- Read `IMPL_PLAN.md` and `TODO_LIST.md`
**Output**: Session validated, context loaded, mode determined
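A minimal bash sketch of Steps 1-2 above; the task-ID pattern and the auto-detection path follow this document, while the `$SESSION_ARG` variable name is illustrative:
```bash
arg="$1"   # e.g. "IMPL-1" or a requirements string / file.md

# Step 1: task ID (IMPL-N or IMPL-N.M) → Task mode, otherwise Session mode
if echo "$arg" | grep -Eq '^IMPL-[0-9]+(\.[0-9]+)?$'; then
  mode="task"
else
  mode="session"
fi

# Step 2: honor --session if provided, otherwise auto-detect from .workflow/active/
if [ -n "$SESSION_ARG" ]; then
  sessionId="$SESSION_ARG"
else
  sessionId=$(find .workflow/active/ -name "WFS-*" -type d | head -1 | xargs basename)
fi
[ -d ".workflow/active/${sessionId}" ] || { echo "ERROR: session not found"; exit 1; }
echo "mode=${mode} session=${sessionId}"
```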
---
### Phase 2: Interactive Requirement Clarification
**Purpose**: Define modification scope through guided questioning
#### Session Mode Questions
**Q1: Modification Scope**
```javascript
Options:
- Update task details only (tasks_only)
- Modify the plan (plan_update)
- Restructure tasks (task_restructure)
- Comprehensive replan (comprehensive)
```
**Q2: Affected Modules** (if scope >= plan_update)
```javascript
Options: Dynamically generated from existing tasks' focus_paths
- Authentication module (src/auth)
- User management (src/user)
- All modules
```
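A hedged sketch of how these options could be derived from existing tasks' `focus_paths`; the `context.focus_paths` location follows this document's task schema, while grouping to the first two path segments is an assumption:
```bash
# Collect unique top-level module paths (e.g. "src/auth") from all task JSONs
jq -r '.context.focus_paths[]?' .workflow/active/${SESSION_ID}/.task/IMPL-*.json \
  | awk -F/ '{ if (NF >= 2) print $1 "/" $2; else print $1 }' \
  | sort -u
# Offer each resulting path as an "affected module" option, plus an "All modules" option
```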
**Q3: Task Changes** (if scope >= task_restructure)
```javascript
Options:
- Add new tasks
- Remove existing tasks
- Merge tasks
- Split tasks
- Update content only
```
**Q4: Dependency Changes**
```javascript
Options:
- Yes, dependencies need to be re-analyzed
- No, keep existing dependencies
```
#### Task Mode Questions
**Q1: Update Type**
```javascript
Options:
- Requirements and acceptance criteria (requirements & acceptance)
- Implementation approach (implementation_approach)
- File scope (focus_paths)
- Dependencies (depends_on)
- Update everything
```
**Q2: Ripple Effect**
```javascript
Options:
- Yes, dependent tasks need synchronized updates
- No, only the current task is affected
- Not sure, please help me analyze
```
**Output**: User selections stored, modification boundaries defined
---
### Phase 3: Impact Analysis & Planning
**Step 3.1: Analyze Required Changes**
Determine affected files based on clarification:
```typescript
interface ImpactAnalysis {
affected_files: {
impl_plan: boolean;
todo_list: boolean;
session_meta: boolean;
tasks: string[];
};
operations: {
type: 'create' | 'update' | 'delete' | 'merge' | 'split';
target: string;
reason: string;
}[];
backup_strategy: {
timestamp: string;
files: string[];
};
}
```
**Step 3.2: Generate Modification Plan**
```markdown
## Modification Plan
### Impact Scope
- [ ] IMPL_PLAN.md: Update technical approach section 3
- [ ] TODO_LIST.md: Add 2 new tasks, remove 1 obsolete task
- [ ] IMPL-001.json: Update implementation approach
- [ ] workflow-session.json: Update task count
### Change Operations
1. **Create**: IMPL-004.json (two-factor authentication implementation)
2. **Update**: IMPL-001.json (add 2FA preparation work)
3. **Delete**: IMPL-003.json (superseded by the new approach)
```
**Step 3.3: User Confirmation**
```javascript
Options:
- Confirm execution: apply all modifications
- Adjust plan: re-answer the questions to adjust scope
- Cancel: abandon this replanning
```
**Output**: Modification plan confirmed or adjusted
---
### Phase 4: Backup Creation
**Process**:
1. **Create Backup Directory**:
```bash
timestamp=$(date -u +"%Y-%m-%dT%H-%M-%S")
backup_dir=".workflow/active/$SESSION_ID/.process/backup/replan-$timestamp"
mkdir -p "$backup_dir"
```
2. **Backup All Affected Files** (see the sketch after this list):
- IMPL_PLAN.md
- TODO_LIST.md
- workflow-session.json
- Affected task JSONs
3. **Create Backup Manifest**:
```markdown
# Replan Backup Manifest
**Timestamp**: {timestamp}
**Reason**: {replan_reason}
**Scope**: {modification_scope}
## Restoration Command
cp {backup_dir}/* .workflow/active/{session}/
```
**Output**: All files safely backed up with manifest
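A minimal sketch of Step 2, assuming `$backup_dir` from Step 1 and an `$affected_tasks` list produced by the Phase 3 impact analysis:
```bash
# Back up session-level artifacts
cp ".workflow/active/$SESSION_ID/IMPL_PLAN.md" \
   ".workflow/active/$SESSION_ID/TODO_LIST.md" \
   ".workflow/active/$SESSION_ID/workflow-session.json" \
   "$backup_dir/"

# Back up only the task JSONs touched by this replan ($affected_tasks from impact analysis)
for task_id in $affected_tasks; do
  cp ".workflow/active/$SESSION_ID/.task/${task_id}.json" "$backup_dir/"
done
```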
---
### Phase 5: Apply Modifications
**Step 5.1: Update IMPL_PLAN.md** (if needed)
Use Edit tool to modify specific sections:
- Update affected technical sections
- Update modification date
**Step 5.2: Update TODO_LIST.md** (if needed)
- Add new tasks with `[ ]` checkbox
- Mark deleted tasks as `[x] ~~task~~ (obsolete)`
- Update modified task descriptions
**Step 5.3: Update Task JSONs**
For each affected task:
```typescript
const updated_task = {
...task,
context: {
...task.context,
requirements: [...updated_requirements],
acceptance: [...updated_acceptance]
},
flow_control: {
...task.flow_control,
implementation_approach: [...updated_steps]
}
};
Write({
file_path: `.workflow/active/${SESSION_ID}/.task/${task_id}.json`,
content: JSON.stringify(updated_task, null, 2)
});
```
**Step 5.4: Create New Tasks** (if needed)
Generate complete task JSON with all required fields:
- id, title, status
- meta (type, agent)
- context (requirements, focus_paths, acceptance)
- flow_control (pre_analysis, implementation_approach, target_files)
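A hedged sketch of creating a new task file with the required fields listed above; the task ID, type, agent, and all field values are illustrative placeholders, not the canonical schema:
```bash
# IMPL-004 and all field values below are illustrative placeholders
cat > ".workflow/active/$SESSION_ID/.task/IMPL-004.json" <<'EOF'
{
  "id": "IMPL-004",
  "title": "Add two-factor authentication",
  "status": "pending",
  "meta": { "type": "feature", "agent": "universal-executor" },
  "context": {
    "requirements": ["Support TOTP-based second factor"],
    "focus_paths": ["src/auth"],
    "acceptance": ["2FA can be enabled and verified for a test user"]
  },
  "flow_control": {
    "pre_analysis": [],
    "implementation_approach": ["Generate TOTP secret", "Add verification endpoint"],
    "target_files": ["src/auth/totp.ts"]
  }
}
EOF
```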
**Step 5.5: Delete Obsolete Tasks** (if needed)
Move to backup instead of hard delete:
```bash
mv ".workflow/active/$SESSION_ID/.task/{task-id}.json" "$backup_dir/"
```
**Step 5.6: Update Session Metadata**
Update workflow-session.json:
- progress.current_tasks
- progress.last_replan
- replan_history array
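A minimal jq sketch for this step, assuming the field names listed above; `$REPLAN_REASON` and `$TASK_COUNT` are assumed inputs from earlier phases:
```bash
# Field names follow the list above; $REPLAN_REASON and $TASK_COUNT are assumed inputs
timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
session_json=".workflow/active/$SESSION_ID/workflow-session.json"
jq --arg ts "$timestamp" --arg reason "$REPLAN_REASON" --argjson count "$TASK_COUNT" '
  .progress.current_tasks = $count
  | .progress.last_replan = $ts
  | .replan_history = ((.replan_history // []) + [{timestamp: $ts, reason: $reason}])
' "$session_json" > temp.json && mv temp.json "$session_json"
```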
**Output**: All modifications applied, artifacts updated
---
### Phase 6: Verification & Summary
**Step 6.1: Verify Consistency**
1. Validate all task JSONs are valid JSON
2. Check task count within limits (max 10)
3. Verify dependency graph is acyclic
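A partial bash sketch of these checks (JSON validity, task-count limit, and a basic self-dependency guard); the `depends_on` field location is assumed, and a full acyclicity check would need a proper graph traversal:
```bash
task_dir=".workflow/active/$SESSION_ID/.task"

# 1. Every task JSON must parse
for f in "$task_dir"/IMPL-*.json; do
  jq empty "$f" || echo "Invalid JSON: $f"
done

# 2. Task count within limit (max 10)
count=$(find "$task_dir" -name "IMPL-*.json" | wc -l)
[ "$count" -le 10 ] || echo "ERROR: $count tasks exceeds limit of 10"

# 3. Minimal dependency sanity check: no task depends on itself
#    (depends_on location assumed top-level; full acyclicity needs a graph traversal)
jq -r '.id as $id | select((.depends_on // []) | index($id)) | "Self-dependency: \($id)"' "$task_dir"/IMPL-*.json
```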
**Step 6.2: Generate Change Summary**
```markdown
## Replanning Complete
### Session Info
- **Session**: {session-id}
- **Time**: {timestamp}
- **Backup**: {backup-path}
### Change Summary
**Scope**: {scope}
**Reason**: {reason}
### Modified Files
- ✓ IMPL_PLAN.md: {changes}
- ✓ TODO_LIST.md: {changes}
- ✓ Task JSONs: {count} files updated
### Task Changes
- **Added**: {task-ids}
- **Removed**: {task-ids}
- **Updated**: {task-ids}
### Rollback
cp {backup-path}/* .workflow/active/{session}/
```
**Output**: Summary displayed, replan complete
---
## TodoWrite Progress Tracking
### Session Mode Progress
```json
[
{"content": "检测模式和发现会话", "status": "completed", "activeForm": "检测模式和发现会话"},
{"content": "交互式需求明确", "status": "completed", "activeForm": "交互式需求明确"},
{"content": "影响分析和计划生成", "status": "completed", "activeForm": "影响分析和计划生成"},
{"content": "创建备份", "status": "completed", "activeForm": "创建备份"},
{"content": "更新会话产出文件", "status": "completed", "activeForm": "更新会话产出文件"},
{"content": "验证一致性", "status": "completed", "activeForm": "验证一致性"}
]
```
### Task Mode Progress
```json
[
{"content": "检测会话和加载任务", "status": "completed", "activeForm": "检测会话和加载任务"},
{"content": "交互式更新确认", "status": "completed", "activeForm": "交互式更新确认"},
{"content": "应用任务修改", "status": "completed", "activeForm": "应用任务修改"}
]
```
## Error Handling
### Session Errors
```bash
# No active session found
ERROR: No active session found
Run /workflow:session:start to create a session
# Session not found
ERROR: Session WFS-invalid not found
Available sessions: [list]
# No changes specified
WARNING: No modifications specified
Use --interactive mode or provide requirements
```
### Task Errors
```bash
# Task not found
ERROR: Task IMPL-999 not found in session
Available tasks: [list]
# Task completed
WARNING: Task IMPL-001 is completed
Consider creating new task for additional work
# Circular dependency
ERROR: Circular dependency detected
Resolve dependency conflicts before proceeding
```
### Validation Errors
```bash
# Task limit exceeded
ERROR: Replan would create 12 tasks (limit: 10)
Consider: combining tasks, splitting sessions, or removing tasks
# Invalid JSON
ERROR: Generated invalid JSON
Backup preserved, rolling back changes
```
## File Structure
```
.workflow/active/WFS-session-name/
├── workflow-session.json
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .task/
│ ├── IMPL-001.json
│ ├── IMPL-002.json
│ └── IMPL-003.json
└── .process/
├── context-package.json
└── backup/
└── replan-{timestamp}/
├── MANIFEST.md
├── IMPL_PLAN.md
├── TODO_LIST.md
├── workflow-session.json
└── IMPL-*.json
```
## Examples
### Session Replan - Add Feature
```bash
/workflow:replan "添加双因素认证支持"
# Interactive clarification
Q: 修改范围?
A: 全面重规划
Q: 受影响模块?
A: 认证模块, API接口
Q: 任务变更?
A: 添加新任务, 更新内容
# Execution
✓ 创建备份
✓ 更新 IMPL_PLAN.md
✓ 更新 TODO_LIST.md
✓ 创建 IMPL-004.json
✓ 更新 IMPL-001.json, IMPL-002.json
重规划完成! 新增 1 任务,更新 2 任务
```
### Task Replan - Update Requirements
```bash
/workflow:replan IMPL-001 "Support the OAuth2.0 standard"
# Interactive clarification
Q: Which parts to update?
A: Requirements and acceptance criteria, implementation approach
Q: Does this affect other tasks?
A: Yes, dependent tasks need synchronized updates
# Execution
✓ Backup created
✓ IMPL-001.json updated
✓ IMPL-002.json updated (dependent task)
Task replanning complete! Updated 2 tasks
```

View File

@@ -1,105 +0,0 @@
---
name: resume
description: Resume paused workflow session with automatic progress analysis, pending task identification, and conflict detection
argument-hint: "session-id for workflow session to resume"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
---
# Sequential Workflow Resume Command
## Usage
```bash
/workflow:resume "<session-id>"
```
## Purpose
**Sequential command coordination for workflow resumption**: first analyze the current session status, then continue execution with special resume context. This command orchestrates intelligent session resumption through a two-step process.
## Command Coordination Workflow
### Phase 1: Status Analysis
1. **Call status command**: Execute `/workflow:status` to analyze current session state
2. **Verify session information**: Check session ID, progress, and current task status
3. **Identify resume point**: Determine where workflow was interrupted
### Phase 2: Resume Execution
1. **Call execute with resume flag**: Execute `/workflow:execute --resume-session="{session-id}"`
2. **Pass session context**: Provide analyzed session information to execute command
3. **Direct agent execution**: Skip discovery phase, directly enter TodoWrite and agent execution
## Implementation Protocol
### Sequential Command Execution
```bash
# Phase 1: Analyze current session status
SlashCommand(command="/workflow:status")
# Phase 2: Resume execution with special flag
SlashCommand(command="/workflow:execute --resume-session=\"{session-id}\"")
```
### Progress Tracking
```javascript
TodoWrite({
todos: [
{
content: "Analyze current session status and progress",
status: "in_progress",
activeForm: "Analyzing session status"
},
{
content: "Resume workflow execution with session context",
status: "pending",
activeForm: "Resuming workflow execution"
}
]
});
```
## Resume Information Flow
### Status Analysis Results
The `/workflow:status` command provides:
- **Session ID**: Current active session identifier
- **Current Progress**: Completed, in-progress, and pending tasks
- **Interruption Point**: Last executed task and next pending task
- **Session State**: Overall workflow status
### Execute Command Context
The special `--resume-session` flag tells `/workflow:execute`:
- **Skip Discovery**: Don't search for sessions, use provided session ID
- **Direct Execution**: Go straight to TodoWrite generation and agent launching
- **Context Restoration**: Use existing session state and summaries
- **Resume Point**: Continue from identified interruption point
## Error Handling
### Session Validation Failures
- **Session not found**: Report missing session, suggest available sessions
- **Session inactive**: Recommend activating session first
- **Status command fails**: Retry once, then report analysis failure
### Execute Resumption Failures
- **No pending tasks**: Report workflow completion status
- **Execute command fails**: Report resumption failure, suggest manual intervention
## Success Criteria
1. **Status analysis complete**: Session state properly analyzed and reported
2. **Execute command launched**: Resume execution started with proper context
3. **Agent coordination**: TodoWrite and agent execution initiated successfully
4. **Context preservation**: Session state and progress properly maintained
## Related Commands
**Prerequisite Commands**:
- `/workflow:plan` or `/workflow:execute` - Workflow must be in progress or paused
**Called by This Command** (2 phases):
- `/workflow:status` - Phase 1: Analyze current session status and identify resume point
- `/workflow:execute` - Phase 2: Resume execution with `--resume-session` flag
**Follow-up Commands**:
- None - Workflow continues automatically via `/workflow:execute`
---
*Sequential command coordination for workflow session resumption*

View File

@@ -39,17 +39,17 @@ argument-hint: "[--type=security|architecture|action-items|quality] [optional: s
if [ -n "$SESSION_ARG" ]; then
sessionId="$SESSION_ARG"
else
sessionId=$(find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//')
sessionId=$(find .workflow/active/ -name "WFS-*" -type d | head -1 | xargs basename)
fi
# Step 2: Validation
if [ ! -d ".workflow/${sessionId}" ]; then
if [ ! -d ".workflow/active/${sessionId}" ]; then
echo "Session ${sessionId} not found"
exit 1
fi
# Check for completed tasks
if [ ! -d ".workflow/${sessionId}/.summaries" ] || [ -z "$(find .workflow/${sessionId}/.summaries/ -name "IMPL-*.md" -type f 2>/dev/null)" ]; then
if [ ! -d ".workflow/active/${sessionId}/.summaries" ] || [ -z "$(find .workflow/active/${sessionId}/.summaries/ -name "IMPL-*.md" -type f 2>/dev/null)" ]; then
echo "No completed implementation found. Complete implementation first"
exit 1
fi
@@ -80,13 +80,13 @@ After bash validation, the model takes control to:
1. **Load Context**: Read completed task summaries and changed files
```bash
# Load implementation summaries
cat .workflow/${sessionId}/.summaries/IMPL-*.md
cat .workflow/active/${sessionId}/.summaries/IMPL-*.md
# Load test results (if available)
cat .workflow/${sessionId}/.summaries/TEST-FIX-*.md 2>/dev/null
cat .workflow/active/${sessionId}/.summaries/TEST-FIX-*.md 2>/dev/null
# Get changed files
git log --since="$(cat .workflow/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
git log --since="$(cat .workflow/active/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
```
2. **Perform Specialized Review**: Based on `review_type`
@@ -99,7 +99,7 @@ After bash validation, the model takes control to:
```
- Use Gemini for security analysis:
```bash
cd .workflow/${sessionId} && gemini -p "
cd .workflow/active/${sessionId} && gemini -p "
PURPOSE: Security audit of completed implementation
TASK: Review code for security vulnerabilities, insecure patterns, auth/authz issues
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -111,7 +111,7 @@ After bash validation, the model takes control to:
**Architecture Review** (`--type=architecture`):
- Use Qwen for architecture analysis:
```bash
cd .workflow/${sessionId} && qwen -p "
cd .workflow/active/${sessionId} && qwen -p "
PURPOSE: Architecture compliance review
TASK: Evaluate adherence to architectural patterns, identify technical debt, review design decisions
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -123,7 +123,7 @@ After bash validation, the model takes control to:
**Quality Review** (`--type=quality`):
- Use Gemini for code quality:
```bash
cd .workflow/${sessionId} && gemini -p "
cd .workflow/active/${sessionId} && gemini -p "
PURPOSE: Code quality and best practices review
TASK: Assess code readability, maintainability, adherence to best practices
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -136,14 +136,14 @@ After bash validation, the model takes control to:
- Verify all requirements and acceptance criteria met:
```bash
# Load task requirements and acceptance criteria
find .workflow/${sessionId}/.task -name "IMPL-*.json" -exec jq -r '
find .workflow/active/${sessionId}/.task -name "IMPL-*.json" -exec jq -r '
"Task: " + .id + "\n" +
"Requirements: " + (.context.requirements | join(", ")) + "\n" +
"Acceptance: " + (.context.acceptance | join(", "))
' {} \;
# Check implementation summaries against requirements
cd .workflow/${sessionId} && gemini -p "
cd .workflow/active/${sessionId} && gemini -p "
PURPOSE: Verify all requirements and acceptance criteria are met
TASK: Cross-check implementation summaries against original requirements
CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -195,7 +195,7 @@ After bash validation, the model takes control to:
4. **Output Files**:
```bash
# Save review report
Write(.workflow/${sessionId}/REVIEW-${review_type}.md)
Write(.workflow/active/${sessionId}/REVIEW-${review_type}.md)
# Update session metadata
# (optional) Update workflow-session.json with review status

View File

@@ -19,129 +19,461 @@ Mark the currently active workflow session as complete, analyze it for lessons l
## Implementation Flow
### Phase 1: Prepare for Archival (Minimal Manual Operations)
### Phase 1: Pre-Archival Preparation (Transactional Setup)
**Purpose**: Find active session, move to archive location, pass control to agent. Minimal operations.
**Purpose**: Find active session, create archiving marker to prevent concurrent operations. Session remains in active location for agent processing.
#### Step 1.1: Find Active Session and Get Name
```bash
# Find active marker
bash(find .workflow/ -name ".active-*" -type f | head -1)
# Find active session directory
bash(find .workflow/active/ -name "WFS-*" -type d | head -1)
# Extract session name from marker path
bash(basename .workflow/.active-WFS-session-name | sed 's/^\.active-//')
# Extract session name from directory path
bash(basename .workflow/active/WFS-session-name)
```
**Output**: Session name `WFS-session-name`
#### Step 1.2: Move Session to Archive
#### Step 1.2: Check for Existing Archiving Marker (Resume Detection)
```bash
# Create archive directory if needed
bash(mkdir -p .workflow/.archives/)
# Move session to archive location
bash(mv .workflow/WFS-session-name .workflow/.archives/WFS-session-name)
# Check if session is already being archived
bash(test -f .workflow/active/WFS-session-name/.archiving && echo "RESUMING" || echo "NEW")
```
**Result**: Session now at `.workflow/.archives/WFS-session-name/`
### Phase 2: Agent-Orchestrated Completion (All Data Processing)
**If RESUMING**:
- Previous archival attempt was interrupted
- Skip to Phase 2 to resume agent analysis
**Purpose**: Agent analyzes archived session, generates metadata, updates manifest, and removes active marker.
**If NEW**:
- Continue to Step 1.3
#### Step 1.3: Create Archiving Marker
```bash
# Mark session as "archiving in progress"
bash(touch .workflow/active/WFS-session-name/.archiving)
```
**Purpose**:
- Prevents concurrent operations on this session
- Enables recovery if archival fails
- Session remains in `.workflow/active/` for agent analysis
**Result**: Session still at `.workflow/active/WFS-session-name/` with `.archiving` marker
### Phase 2: Agent Analysis (In-Place Processing)
**Purpose**: Agent analyzes session WHILE STILL IN ACTIVE LOCATION. Generates metadata but does NOT move files or update manifest.
#### Agent Invocation
Invoke `universal-executor` agent to complete the archival process.
Invoke `universal-executor` agent to analyze session and prepare archive metadata.
**Agent Task**:
```
Task(
subagent_type="universal-executor",
description="Complete session archival",
description="Analyze session for archival",
prompt=`
Complete workflow session archival. Session already moved to archive location.
Analyze workflow session for archival preparation. Session is STILL in active location.
## Context
- Session: .workflow/.archives/WFS-session-name/
- Active marker: .workflow/.active-WFS-session-name
- Session: .workflow/active/WFS-session-name/
- Status: Marked as archiving (.archiving marker present)
- Location: Active sessions directory (NOT archived yet)
## Tasks
1. **Extract session data** from workflow-session.json (session_id, description/topic, started_at/timestamp, completed_at, status)
1. **Extract session data** from workflow-session.json
- session_id, description/topic, started_at, completed_at, status
- If status != "completed", update it with timestamp
2. **Count files**: tasks (.task/*.json) and summaries (.summaries/*.md)
3. **Generate lessons**: Use gemini with ~/.claude/workflows/cli-templates/prompts/archive/analysis-simple.txt (fallback: analyze files directly)
3. **Generate lessons**: Use gemini with ~/.claude/workflows/cli-templates/prompts/archive/analysis-simple.txt
- Return: {successes, challenges, watch_patterns}
4. **Build archive entry**:
- Calculate: duration_hours, success_rate, tags (3-5 keywords)
- Construct complete JSON with session_id, description, archived_at, archive_path, metrics, tags, lessons
- Construct complete JSON with session_id, description, archived_at, metrics, tags, lessons
- Include archive_path: ".workflow/archives/WFS-session-name" (future location)
5. **Update manifest**: Initialize .workflow/.archives/manifest.json if needed, append entry
5. **Extract feature metadata** (for Phase 4):
- Parse IMPL_PLAN.md for title (first # heading)
- Extract description (first paragraph, max 200 chars)
- Generate feature tags (3-5 keywords from content)
6. **Remove active marker**
6. **Return result**: Complete metadata package for atomic commit
{
"status": "success",
"session_id": "WFS-session-name",
"archive_entry": {
"session_id": "...",
"description": "...",
"archived_at": "...",
"archive_path": ".workflow/archives/WFS-session-name",
"metrics": {...},
"tags": [...],
"lessons": {...}
},
"feature_metadata": {
"title": "...",
"description": "...",
"tags": [...]
}
}
7. **Return result**: {"status": "success", "session_id": "...", "archived_at": "...", "metrics": {...}, "lessons_summary": {...}}
## Important Constraints
- DO NOT move or delete any files
- DO NOT update manifest.json yet
- Session remains in .workflow/active/ during analysis
- Return complete metadata package for orchestrator to commit atomically
## Error Handling
- On failure: return {"status": "error", "task": "...", "message": "..."}
- Do NOT remove marker if failed
- Do NOT modify any files on error
`
)
```
**Expected Output**:
- Agent returns JSON result confirming successful archival
- Display completion summary to user based on agent response
- Agent returns complete metadata package
- Session remains in `.workflow/active/` with `.archiving` marker
- No files moved or manifests updated yet
### Phase 3: Atomic Commit (Transactional File Operations)
**Purpose**: Atomically commit all changes. Only execute if Phase 2 succeeds.
#### Step 3.1: Create Archive Directory
```bash
bash(mkdir -p .workflow/archives/)
```
#### Step 3.2: Move Session to Archive
```bash
bash(mv .workflow/active/WFS-session-name .workflow/archives/WFS-session-name)
```
**Result**: Session now at `.workflow/archives/WFS-session-name/`
#### Step 3.3: Update Manifest
```bash
# Read current manifest (or create empty array if not exists)
bash(test -f .workflow/archives/manifest.json && cat .workflow/archives/manifest.json || echo "[]")
```
**JSON Update Logic**:
```javascript
// Read agent result from Phase 2
const agentResult = JSON.parse(agentOutput);
const archiveEntry = agentResult.archive_entry;
// Read existing manifest
let manifest = [];
try {
const manifestContent = Read('.workflow/archives/manifest.json');
manifest = JSON.parse(manifestContent);
} catch {
manifest = []; // Initialize if not exists
}
// Append new entry
manifest.push(archiveEntry);
// Write back
Write('.workflow/archives/manifest.json', JSON.stringify(manifest, null, 2));
```
#### Step 3.4: Remove Archiving Marker
```bash
bash(rm .workflow/archives/WFS-session-name/.archiving)
```
**Result**: Clean archived session without temporary markers
**Output Confirmation**:
```
✓ Session "${sessionId}" archived successfully
Location: .workflow/archives/WFS-session-name/
Lessons: ${archiveEntry.lessons.successes.length} successes, ${archiveEntry.lessons.challenges.length} challenges
Manifest: Updated with ${manifest.length} total sessions
```
### Phase 4: Update Project Feature Registry
**Purpose**: Record completed session as a project feature in `.workflow/project.json`.
**Execution**: Uses feature metadata from Phase 2 agent result to update project registry.
#### Step 4.1: Check Project State Exists
```bash
bash(test -f .workflow/project.json && echo "EXISTS" || echo "SKIP")
```
**If SKIP**: Output warning and skip Phase 4
```
WARNING: No project.json found. Run /workflow:session:start to initialize.
```
#### Step 4.2: Extract Feature Information from Agent Result
**Data Processing** (Uses Phase 2 agent output):
```javascript
// Extract feature metadata from agent result
const agentResult = JSON.parse(agentOutput);
const featureMeta = agentResult.feature_metadata;
// Data already prepared by agent:
const title = featureMeta.title;
const description = featureMeta.description;
const tags = featureMeta.tags;
// Create feature ID (lowercase slug)
const featureId = title.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 50);
```
#### Step 4.3: Update project.json
```bash
# Read current project state
bash(cat .workflow/project.json)
```
**JSON Update Logic**:
```javascript
// Read existing project.json (created by /workflow:init)
// Note: overview field is managed by /workflow:init, not modified here
const projectMeta = JSON.parse(Read('.workflow/project.json'));
const currentTimestamp = new Date().toISOString();
const currentDate = currentTimestamp.split('T')[0]; // YYYY-MM-DD
// Tags already prepared by the Phase 2 agent result (featureMeta.tags); extractTags() below remains as a fallback
const tags = featureMeta.tags; // e.g., ["auth", "security"]
// Build feature object with complete metadata
const newFeature = {
id: featureId,
title: title,
description: description,
status: "completed",
tags: tags,
timeline: {
created_at: currentTimestamp,
implemented_at: currentDate,
updated_at: currentTimestamp
},
traceability: {
session_id: sessionId,
archive_path: archivePath, // e.g., ".workflow/archives/WFS-auth-system"
commit_hash: getLatestCommitHash() || "" // Optional: git rev-parse HEAD
},
docs: [], // Placeholder for future doc links
relations: [] // Placeholder for feature dependencies
};
// Add new feature to array
projectMeta.features.push(newFeature);
// Update statistics
projectMeta.statistics.total_features = projectMeta.features.length;
projectMeta.statistics.total_sessions += 1;
projectMeta.statistics.last_updated = currentTimestamp;
// Write back
Write('.workflow/project.json', JSON.stringify(projectMeta, null, 2));
```
**Helper Functions**:
```javascript
// Extract tags from IMPL_PLAN.md content
function extractTags(planContent) {
const tags = [];
// Look for common keywords
const keywords = {
'auth': /authentication|login|oauth|jwt/i,
'security': /security|encrypt|hash|token/i,
'api': /api|endpoint|rest|graphql/i,
'ui': /component|page|interface|frontend/i,
'database': /database|schema|migration|sql/i,
'test': /test|testing|spec|coverage/i
};
for (const [tag, pattern] of Object.entries(keywords)) {
if (pattern.test(planContent)) {
tags.push(tag);
}
}
return tags.slice(0, 5); // Max 5 tags
}
// Get latest git commit hash (optional)
function getLatestCommitHash() {
try {
const result = Bash({
command: "git rev-parse --short HEAD 2>/dev/null",
description: "Get latest commit hash"
});
return result.trim();
} catch {
return "";
}
}
```
#### Step 4.4: Output Confirmation
```
✓ Feature "${title}" added to project registry
ID: ${featureId}
Session: ${sessionId}
Location: .workflow/project.json
```
**Error Handling**:
- If project.json malformed: Output error, skip update
- If feature_metadata missing from agent result: Skip Phase 4
- If extraction fails: Use minimal defaults
**Phase 4 Total Commands**: 1 bash read + JSON manipulation
## Error Recovery
### If Agent Fails (Phase 2)
**Symptoms**:
- Agent returns `{"status": "error", ...}`
- Agent crashes or times out
- Analysis incomplete
**Recovery Steps**:
```bash
# Session still in .workflow/active/WFS-session-name
# Remove archiving marker
bash(rm .workflow/active/WFS-session-name/.archiving)
```
**User Notification**:
```
ERROR: Session archival failed during analysis phase
Reason: [error message from agent]
Session remains active in: .workflow/active/WFS-session-name
Recovery:
1. Fix any issues identified in error message
2. Retry: /workflow:session:complete
Session state: SAFE (no changes committed)
```
### If Move Fails (Phase 3)
**Symptoms**:
- `mv` command fails
- Permission denied
- Disk full
**Recovery Steps**:
```bash
# Archiving marker still present
# Session still in .workflow/active/ (move failed)
# No manifest updated yet
```
**User Notification**:
```
ERROR: Session archival failed during move operation
Reason: [mv error message]
Session remains in: .workflow/active/WFS-session-name
Recovery:
1. Fix filesystem issues (permissions, disk space)
2. Retry: /workflow:session:complete
- System will detect .archiving marker
- Will resume from Phase 2 (agent analysis)
Session state: SAFE (analysis complete, ready to retry move)
```
### If Manifest Update Fails (Phase 3)
**Symptoms**:
- JSON parsing error
- Write permission denied
- Session moved but manifest not updated
**Recovery Steps**:
```bash
# Session moved to .workflow/archives/WFS-session-name
# Manifest NOT updated
# Archiving marker still present in archived location
```
**User Notification**:
```
ERROR: Session archived but manifest update failed
Reason: [error message]
Session location: .workflow/archives/WFS-session-name
Recovery:
1. Fix manifest.json issues (syntax, permissions)
2. Manual manifest update:
- Add archive entry from agent output
- Remove .archiving marker: rm .workflow/archives/WFS-session-name/.archiving
Session state: PARTIALLY COMPLETE (session archived, manifest needs update)
```
## Workflow Execution Strategy
### Two-Phase Approach (Optimized)
### Transactional Four-Phase Approach
**Phase 1: Minimal Manual Setup** (2 simple operations)
**Phase 1: Pre-Archival Preparation** (Marker creation)
- Find active session and extract name
- Move session to archive location
- **No data extraction** - agent handles all data processing
- **No counting** - agent does this from archive location
- **Total**: 2 bash commands (find + move)
- Check for existing `.archiving` marker (resume detection)
- Create `.archiving` marker if new
- **No data processing** - just state tracking
- **Total**: 2-3 bash commands (find + marker check/create)
**Phase 2: Agent-Driven Completion** (1 agent invocation)
- Extract all session data from archived location
**Phase 2: Agent Analysis** (Read-only data processing)
- Extract all session data from active location
- Count tasks and summaries
- Generate lessons learned analysis
- Build complete archive metadata
- Update manifest
- Remove active marker
- Return success/error result
- Extract feature metadata from IMPL_PLAN.md
- Build complete archive + feature metadata package
- **No file modifications** - pure analysis
- **Total**: 1 agent invocation
## Quick Commands
**Phase 3: Atomic Commit** (Transactional file operations)
- Create archive directory
- Move session to archive location
- Update manifest.json with archive entry
- Remove `.archiving` marker
- **All-or-nothing**: Either all succeed or session remains in safe state
- **Total**: 4 bash commands + JSON manipulation
```bash
# Phase 1: Find and move
bash(find .workflow/ -name ".active-*" -type f | head -1)
bash(basename .workflow/.active-WFS-session-name | sed 's/^\.active-//')
bash(mkdir -p .workflow/.archives/)
bash(mv .workflow/WFS-session-name .workflow/.archives/WFS-session-name)
**Phase 4: Project Registry Update** (Optional feature tracking)
- Check project.json exists
- Use feature metadata from Phase 2 agent result
- Build feature object with complete traceability
- Update project statistics
- **Independent**: Can fail without affecting archival
- **Total**: 1 bash read + JSON manipulation
# Phase 2: Agent completes archival
Task(subagent_type="universal-executor", description="Complete session archival", prompt=`...`)
```
### Transactional Guarantees
## Archive Query Commands
**State Consistency**:
- Session NEVER in inconsistent state
- `.archiving` marker enables safe resume
- Agent failure leaves session in recoverable state
- Move/manifest operations grouped in Phase 3
After archival, you can query the manifest:
**Failure Isolation**:
- Phase 1 failure: No changes made
- Phase 2 failure: Session still active, can retry
- Phase 3 failure: Clear error state, manual recovery documented
- Phase 4 failure: Does not affect archival success
```bash
# List all archived sessions
jq '.[].session_id' .workflow/archives/manifest.json
**Resume Capability**:
- Detect interrupted archival via `.archiving` marker
- Resume from Phase 2 (skip marker creation)
- Idempotent operations (safe to retry)
# Find sessions by keyword
jq '.[] | select(.description | test("auth"; "i"))' .workflow/archives/manifest.json
# Get specific session details
jq '.[] | select(.session_id == "WFS-user-auth")' .workflow/archives/manifest.json
# List all watch patterns across sessions
jq '.[].lessons.watch_patterns[]' .workflow/archives/manifest.json
```

View File

@@ -19,35 +19,35 @@ Display all workflow sessions with their current status, progress, and metadata.
### Step 1: Find All Sessions
```bash
ls .workflow/WFS-* 2>/dev/null
ls .workflow/active/WFS-* 2>/dev/null
```
### Step 2: Check Active Session
```bash
ls .workflow/.active-* 2>/dev/null | head -1
find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1
```
### Step 3: Read Session Metadata
```bash
jq -r '.session_id, .status, .project' .workflow/WFS-session/workflow-session.json
jq -r '.session_id, .status, .project' .workflow/active/WFS-session/workflow-session.json
```
### Step 4: Count Task Progress
```bash
find .workflow/WFS-session/.task/ -name "*.json" -type f 2>/dev/null | wc -l
find .workflow/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
find .workflow/active/WFS-session/.task/ -name "*.json" -type f 2>/dev/null | wc -l
find .workflow/active/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
```
### Step 5: Get Creation Time
```bash
jq -r '.created_at // "unknown"' .workflow/WFS-session/workflow-session.json
jq -r '.created_at // "unknown"' .workflow/active/WFS-session/workflow-session.json
```
## Simple Bash Commands
### Basic Operations
- **List sessions**: `find .workflow/ -maxdepth 1 -type d -name "WFS-*"`
- **Find active**: `find .workflow/ -name ".active-*" -type f`
- **List sessions**: `find .workflow/active/ -name "WFS-*" -type d`
- **Find active**: `find .workflow/active/ -name "WFS-*" -type d`
- **Read session data**: `jq -r '.session_id, .status' session.json`
- **Count tasks**: `find .task/ -name "*.json" -type f | wc -l`
- **Count completed**: `find .summaries/ -name "*.md" -type f 2>/dev/null | wc -l`
@@ -89,11 +89,8 @@ Total: 3 sessions (1 active, 1 paused, 1 completed)
### Quick Commands
```bash
# Count all sessions
ls .workflow/WFS-* | wc -l
# Show only active
ls .workflow/.active-* | basename | sed 's/^\.active-//'
ls .workflow/active/WFS-* | wc -l
# Show recent sessions
ls -t .workflow/WFS-*/workflow-session.json | head -3
ls -t .workflow/active/WFS-*/workflow-session.json | head -3
```

View File

@@ -17,45 +17,39 @@ Resume the most recently paused workflow session, restoring all context and stat
### Step 1: Find Paused Sessions
```bash
ls .workflow/WFS-* 2>/dev/null
ls .workflow/active/WFS-* 2>/dev/null
```
### Step 2: Check Session Status
```bash
jq -r '.status' .workflow/WFS-session/workflow-session.json
jq -r '.status' .workflow/active/WFS-session/workflow-session.json
```
### Step 3: Find Most Recent Paused
```bash
ls -t .workflow/WFS-*/workflow-session.json | head -1
ls -t .workflow/active/WFS-*/workflow-session.json | head -1
```
### Step 4: Update Session Status
```bash
jq '.status = "active"' .workflow/WFS-session/workflow-session.json > temp.json
mv temp.json .workflow/WFS-session/workflow-session.json
jq '.status = "active"' .workflow/active/WFS-session/workflow-session.json > temp.json
mv temp.json .workflow/active/WFS-session/workflow-session.json
```
### Step 5: Add Resume Timestamp
```bash
jq '.resumed_at = "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"' .workflow/WFS-session/workflow-session.json > temp.json
mv temp.json .workflow/WFS-session/workflow-session.json
```
### Step 6: Create Active Marker
```bash
touch .workflow/.active-WFS-session-name
jq '.resumed_at = "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"' .workflow/active/WFS-session/workflow-session.json > temp.json
mv temp.json .workflow/active/WFS-session/workflow-session.json
```
## Simple Bash Commands
### Basic Operations
- **List sessions**: `ls .workflow/WFS-*`
- **List sessions**: `ls .workflow/active/WFS-*`
- **Check status**: `jq -r '.status' session.json`
- **Find recent**: `ls -t .workflow/*/workflow-session.json | head -1`
- **Find recent**: `ls -t .workflow/active/*/workflow-session.json | head -1`
- **Update status**: `jq '.status = "active"' session.json > temp.json`
- **Add timestamp**: `jq '.resumed_at = "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"'`
- **Create marker**: `touch .workflow/.active-session`
### Resume Result
```

View File

@@ -13,6 +13,35 @@ examples:
## Overview
Manages workflow sessions with three operation modes: discovery (manual), auto (intelligent), and force-new.
**Dual Responsibility**:
1. **Project-level initialization** (first-time only): Creates `.workflow/project.json` for feature registry
2. **Session-level initialization** (always): Creates session directory structure
## Step 0: Initialize Project State (First-time Only)
**Executed before all modes** - Ensures project-level state file exists by calling `/workflow:init`.
### Check and Initialize
```bash
# Check if project state exists
bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
```
**If NOT_FOUND**, delegate to `/workflow:init`:
```javascript
// Call workflow:init for intelligent project analysis
SlashCommand({command: "/workflow:init"});
// Wait for init completion
// project.json will be created with comprehensive project overview
```
**Output**:
- If EXISTS: `PROJECT_STATE: initialized`
- If NOT_FOUND: Calls `/workflow:init` → creates `.workflow/project.json` with full project analysis
**Note**: `/workflow:init` uses cli-explore-agent to build comprehensive project understanding (technology stack, architecture, key components). This step runs once per project. Subsequent executions skip initialization.
## Mode 1: Discovery Mode (Default)
### Usage
@@ -20,19 +49,14 @@ Manages workflow sessions with three operation modes: discovery (manual), auto (
/workflow:session:start
```
### Step 1: Check Active Sessions
### Step 1: List Active Sessions
```bash
bash(ls .workflow/.active-* 2>/dev/null)
bash(ls -1 .workflow/active/ 2>/dev/null | head -5)
```
### Step 2: List All Sessions
### Step 2: Display Session Metadata
```bash
bash(ls -1 .workflow/WFS-* 2>/dev/null | head -5)
```
### Step 3: Display Session Metadata
```bash
bash(cat .workflow/WFS-promptmaster-platform/workflow-session.json)
bash(cat .workflow/active/WFS-promptmaster-platform/workflow-session.json)
```
### Step 4: User Decision
@@ -49,7 +73,7 @@ Present session information and wait for user to select or create session.
### Step 1: Check Active Sessions Count
```bash
bash(ls .workflow/.active-* 2>/dev/null | wc -l)
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | wc -l)
```
### Step 2a: No Active Sessions → Create New
@@ -58,15 +82,12 @@ bash(ls .workflow/.active-* 2>/dev/null | wc -l)
bash(echo "implement OAuth2 auth" | sed 's/[^a-zA-Z0-9]/-/g' | tr '[:upper:]' '[:lower:]' | cut -c1-50)
# Create directory structure
bash(mkdir -p .workflow/WFS-implement-oauth2-auth/.process)
bash(mkdir -p .workflow/WFS-implement-oauth2-auth/.task)
bash(mkdir -p .workflow/WFS-implement-oauth2-auth/.summaries)
bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.process)
bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.task)
bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.summaries)
# Create metadata
bash(echo '{"session_id":"WFS-implement-oauth2-auth","project":"implement OAuth2 auth","status":"planning"}' > .workflow/WFS-implement-oauth2-auth/workflow-session.json)
# Mark as active
bash(touch .workflow/.active-WFS-implement-oauth2-auth)
bash(echo '{"session_id":"WFS-implement-oauth2-auth","project":"implement OAuth2 auth","status":"planning"}' > .workflow/active/WFS-implement-oauth2-auth/workflow-session.json)
```
**Output**: `SESSION_ID: WFS-implement-oauth2-auth`
@@ -74,10 +95,10 @@ bash(touch .workflow/.active-WFS-implement-oauth2-auth)
### Step 2b: Single Active Session → Check Relevance
```bash
# Extract session ID
bash(ls .workflow/.active-* 2>/dev/null | head -1 | xargs basename | sed 's/^\.active-//')
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs basename)
# Read project name from metadata
bash(cat .workflow/WFS-promptmaster-platform/workflow-session.json | grep -o '"project":"[^"]*"' | cut -d'"' -f4)
bash(cat .workflow/active/WFS-promptmaster-platform/workflow-session.json | grep -o '"project":"[^"]*"' | cut -d'"' -f4)
# Check keyword match (manual comparison)
# If task contains project keywords → Reuse session
@@ -90,7 +111,7 @@ bash(cat .workflow/WFS-promptmaster-platform/workflow-session.json | grep -o '"p
### Step 2c: Multiple Active Sessions → Use First
```bash
# Get first active session
bash(ls .workflow/.active-* 2>/dev/null | head -1 | xargs basename | sed 's/^\.active-//')
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs basename)
# Output warning and session ID
# WARNING: Multiple active sessions detected
@@ -110,25 +131,19 @@ bash(ls .workflow/.active-* 2>/dev/null | head -1 | xargs basename | sed 's/^\.a
bash(echo "fix login bug" | sed 's/[^a-zA-Z0-9]/-/g' | tr '[:upper:]' '[:lower:]' | cut -c1-50)
# Check if exists, add counter if needed
bash(ls .workflow/WFS-fix-login-bug 2>/dev/null && echo "WFS-fix-login-bug-2" || echo "WFS-fix-login-bug")
bash(ls .workflow/active/WFS-fix-login-bug 2>/dev/null && echo "WFS-fix-login-bug-2" || echo "WFS-fix-login-bug")
```
### Step 2: Create Session Structure
```bash
bash(mkdir -p .workflow/WFS-fix-login-bug/.process)
bash(mkdir -p .workflow/WFS-fix-login-bug/.task)
bash(mkdir -p .workflow/WFS-fix-login-bug/.summaries)
bash(mkdir -p .workflow/active/WFS-fix-login-bug/.process)
bash(mkdir -p .workflow/active/WFS-fix-login-bug/.task)
bash(mkdir -p .workflow/active/WFS-fix-login-bug/.summaries)
```
### Step 3: Create Metadata
```bash
bash(echo '{"session_id":"WFS-fix-login-bug","project":"fix login bug","status":"planning"}' > .workflow/WFS-fix-login-bug/workflow-session.json)
```
### Step 4: Mark Active and Clean Old Markers
```bash
bash(rm .workflow/.active-* 2>/dev/null)
bash(touch .workflow/.active-WFS-fix-login-bug)
bash(echo '{"session_id":"WFS-fix-login-bug","project":"fix login bug","status":"planning"}' > .workflow/active/WFS-fix-login-bug/workflow-session.json)
```
**Output**: `SESSION_ID: WFS-fix-login-bug`
@@ -173,41 +188,6 @@ SlashCommand(command="/workflow:session:start")
SlashCommand(command="/workflow:session:start --new \"experimental feature\"")
```
## Simple Bash Commands
### Basic Operations
```bash
# Check active sessions
bash(ls .workflow/.active-*)
# List all sessions
bash(ls .workflow/WFS-*)
# Read session metadata
bash(cat .workflow/WFS-[session-id]/workflow-session.json)
# Create session directories
bash(mkdir -p .workflow/WFS-[session-id]/.process)
bash(mkdir -p .workflow/WFS-[session-id]/.task)
bash(mkdir -p .workflow/WFS-[session-id]/.summaries)
# Mark session as active
bash(touch .workflow/.active-WFS-[session-id])
# Clean active markers
bash(rm .workflow/.active-*)
```
### Generate Session Slug
```bash
bash(echo "Task Description" | sed 's/[^a-zA-Z0-9]/-/g' | tr '[:upper:]' '[:lower:]' | cut -c1-50)
```
### Create Metadata JSON
```bash
bash(echo '{"session_id":"WFS-test","project":"test project","status":"planning"}' > .workflow/WFS-test/workflow-session.json)
```
## Session ID Format
- Pattern: `WFS-[lowercase-slug]`
- Characters: `a-z`, `0-9`, `-` only
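A small sketch for checking a candidate ID against this pattern (the example ID is illustrative):
```bash
# Validate a session ID: "WFS-" followed by lowercase slug characters only
id="WFS-fix-login-bug"
[[ "$id" =~ ^WFS-[a-z0-9-]+$ ]] && echo "VALID: $id" || echo "INVALID: $id"
```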

View File

@@ -1,47 +1,185 @@
---
name: workflow:status
description: Generate on-demand task status views from JSON task data with optional task-id filtering for detailed view
argument-hint: "[optional: task-id]"
description: Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view
argument-hint: "[optional: --project|task-id|--validate|--dashboard]"
---
# Workflow Status Command (/workflow:status)
## Overview
Generates on-demand views from JSON task data. No synchronization needed - all views are calculated from the current state of JSON files.
Generates on-demand views from project and session data. Supports multiple modes:
1. **Project Overview** (`--project`): Shows completed features and project statistics
2. **Workflow Tasks** (default): Shows current session task progress
3. **HTML Dashboard** (`--dashboard`): Generates interactive HTML task board with active and archived sessions
No synchronization needed - all views are calculated from current JSON state.
## Usage
```bash
/workflow:status # Show current workflow overview
/workflow:status # Show current workflow session overview
/workflow:status --project # Show project-level feature registry
/workflow:status impl-1 # Show specific task details
/workflow:status --validate # Validate workflow integrity
/workflow:status --dashboard # Generate HTML dashboard board
```
## Implementation Flow
### Mode Selection
**Check for --project flag**:
- If `--project` flag present → Execute **Project Overview Mode**
- Otherwise → Execute **Workflow Session Mode** (default)
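A minimal dispatch sketch for the documented flags (output strings are illustrative):
```bash
# Map the command argument to an execution mode
case "${1:-}" in
  --project)   echo "MODE: project-overview" ;;
  --dashboard) echo "MODE: dashboard" ;;
  --validate)  echo "MODE: validate" ;;
  "")          echo "MODE: workflow-session (default)" ;;
  *)           echo "MODE: task-detail for $1" ;;
esac
```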
## Project Overview Mode
### Step 1: Check Project State
```bash
bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
```
**If NOT_FOUND**:
```
No project state found.
Run /workflow:session:start to initialize project.
```
### Step 2: Read Project Data
```bash
bash(cat .workflow/project.json)
```
### Step 3: Parse and Display
**Data Processing**:
```javascript
const projectData = JSON.parse(Read('.workflow/project.json'));
const features = projectData.features || [];
const stats = projectData.statistics || {};
const overview = projectData.overview || null;
// Sort features by implementation date (newest first)
const sortedFeatures = features.sort((a, b) =>
new Date(b.implemented_at) - new Date(a.implemented_at)
);
```
**Output Format** (with extended overview):
```
## Project: ${projectData.project_name}
Initialized: ${projectData.initialized_at}
${overview ? `
### Overview
${overview.description}
**Technology Stack**:
${overview.technology_stack.languages.map(l => `- ${l.name}${l.primary ? ' (primary)' : ''}: ${l.file_count} files`).join('\n')}
Frameworks: ${overview.technology_stack.frameworks.join(', ')}
**Architecture**:
Style: ${overview.architecture.style}
Patterns: ${overview.architecture.patterns.join(', ')}
**Key Components** (${overview.key_components.length}):
${overview.key_components.map(c => `- ${c.name} (${c.path})\n ${c.description}`).join('\n')}
**Metrics**:
- Files: ${overview.metrics.total_files}
- Lines of Code: ${overview.metrics.lines_of_code}
- Complexity: ${overview.metrics.complexity}
---
` : ''}
### Completed Features (${stats.total_features})
${sortedFeatures.map(f => `
- ${f.title} (${f.timeline?.implemented_at || f.implemented_at})
${f.description}
Tags: ${f.tags?.join(', ') || 'none'}
Session: ${f.traceability?.session_id || f.session_id}
Archive: ${f.traceability?.archive_path || 'unknown'}
${f.traceability?.commit_hash ? `Commit: ${f.traceability.commit_hash}` : ''}
`).join('\n')}
### Project Statistics
- Total Features: ${stats.total_features}
- Total Sessions: ${stats.total_sessions}
- Last Updated: ${stats.last_updated}
### Quick Access
- View session details: /workflow:status
- Archive query: jq '.archives[] | select(.session_id == "SESSION_ID")' .workflow/archives/manifest.json
- Documentation: .workflow/docs/${projectData.project_name}/
### Query Commands
# Find by tag
cat .workflow/project.json | jq '.features[] | select(.tags[] == "auth")'
# View archive
cat ${feature.traceability.archive_path}/IMPL_PLAN.md
# List all tags
cat .workflow/project.json | jq -r '.features[].tags[]' | sort -u
```
**Empty State**:
```
## Project: ${projectData.project_name}
Initialized: ${projectData.initialized_at}
No features completed yet.
Complete your first workflow session to add features:
1. /workflow:plan "feature description"
2. /workflow:execute
3. /workflow:session:complete
```
### Step 4: Show Recent Sessions (Optional)
```bash
# List 5 most recent archived sessions
bash(ls -1t .workflow/archives/WFS-* 2>/dev/null | head -5 | xargs -I {} basename {})
```
**Output**:
```
### Recent Sessions
- WFS-auth-system (archived)
- WFS-payment-flow (archived)
- WFS-user-dashboard (archived)
Use /workflow:session:complete to archive current session.
```
## Workflow Session Mode (Default)
### Step 1: Find Active Session
```bash
find .workflow/ -name ".active-*" -type f 2>/dev/null | head -1
find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1
```
### Step 2: Load Session Data
```bash
cat .workflow/WFS-session/workflow-session.json
cat .workflow/active/WFS-session/workflow-session.json
```
### Step 3: Scan Task Files
```bash
find .workflow/WFS-session/.task/ -name "*.json" -type f 2>/dev/null
find .workflow/active/WFS-session/.task/ -name "*.json" -type f 2>/dev/null
```
### Step 4: Generate Task Status
```bash
cat .workflow/WFS-session/.task/impl-1.json | jq -r '.status'
cat .workflow/active/WFS-session/.task/impl-1.json | jq -r '.status'
```
### Step 5: Count Task Progress
```bash
find .workflow/WFS-session/.task/ -name "*.json" -type f | wc -l
find .workflow/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
find .workflow/active/WFS-session/.task/ -name "*.json" -type f | wc -l
find .workflow/active/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
```
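A small sketch that combines the scans above into a per-status breakdown for the active session (assumes the `.workflow/active/` layout):
```bash
# Count tasks by status for the single active session
session=$(find .workflow/active/ -maxdepth 1 -name "WFS-*" -type d 2>/dev/null | head -1)
find "$session/.task/" -name "*.json" -type f -exec jq -r '.status' {} \; | sort | uniq -c
```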
### Step 6: Display Overview
@@ -58,62 +196,133 @@ find .workflow/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
- [COMPLETED] impl-0: Setup completed
```
## Simple Bash Commands
### Basic Operations
- **Find active session**: `find .workflow/ -name ".active-*" -type f`
- **Read session info**: `cat .workflow/session/workflow-session.json`
- **List tasks**: `find .workflow/session/.task/ -name "*.json" -type f`
- **Check task status**: `cat task.json | jq -r '.status'`
- **Count completed**: `find .summaries/ -name "*.md" -type f | wc -l`
### Task Status Check
- **pending**: Not started yet
- **active**: Currently in progress
- **completed**: Finished with summary
- **blocked**: Waiting for dependencies
### Validation Commands
```bash
# Check session exists
test -f .workflow/.active-* && echo "Session active"
# Validate task files
for f in .workflow/session/.task/*.json; do jq empty "$f" && echo "Valid: $f"; done
# Check summaries match
find .task/ -name "*.json" -type f | wc -l
find .summaries/ -name "*.md" -type f 2>/dev/null | wc -l
```
## Dashboard Mode (HTML Board)
### Step 1: Check for --dashboard flag
```bash
# If --dashboard flag present → Execute Dashboard Mode
```
### Step 2: Collect Workflow Data
**Collect Active Sessions**:
```bash
# Find all active sessions
find .workflow/active/ -name "WFS-*" -type d 2>/dev/null
# For each active session, read metadata and tasks
for session in $(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null); do
  cat "$session/workflow-session.json"
  find "$session/.task/" -name "*.json" -type f 2>/dev/null
done
```
**Collect Archived Sessions**:
```bash
# Find all archived sessions
find .workflow/archives/ -name "WFS-*" -type d 2>/dev/null
# Read manifest if exists
cat .workflow/archives/manifest.json 2>/dev/null
# For each archived session, read metadata
for archive in $(find .workflow/archives/ -name "WFS-*" -type d 2>/dev/null); do
  cat "$archive/workflow-session.json" 2>/dev/null
  # Count completed tasks
  find "$archive/.task/" -name "*.json" -type f 2>/dev/null | wc -l
done
```
## Simple Output Format
### Default Overview
```
Session: WFS-user-auth
Status: ACTIVE
Progress: 5/12 tasks
Current: impl-3 (Building API endpoints)
Next: impl-4 (Adding authentication)
Completed: impl-1, impl-2
```
### Task Details
```
Task: impl-1
Title: Build authentication module
Status: completed
Agent: code-developer
Created: 2025-09-15
Completed: 2025-09-15
Summary: .summaries/impl-1-summary.md
```
### Validation Results
```
Session file valid
8 task files found
3 summaries found
5 tasks pending completion
```
### Step 3: Process and Structure Data
**Build data structure for dashboard**:
```javascript
const dashboardData = {
  activeSessions: [],
  archivedSessions: [],
  generatedAt: new Date().toISOString()
};
// Process active sessions
for each active_session in active_sessions:
  const sessionData = JSON.parse(Read(active_session/workflow-session.json));
  const tasks = [];
  // Load all tasks for this session
  for each task_file in find(active_session/.task/*.json):
    const taskData = JSON.parse(Read(task_file));
    tasks.push({
      task_id: taskData.task_id,
      title: taskData.title,
      status: taskData.status,
      type: taskData.type
    });
  dashboardData.activeSessions.push({
    session_id: sessionData.session_id,
    project: sessionData.project,
    status: sessionData.status,
    created_at: sessionData.created_at || sessionData.initialized_at,
    tasks: tasks
  });
// Process archived sessions
for each archived_session in archived_sessions:
  const sessionData = JSON.parse(Read(archived_session/workflow-session.json));
  const taskCount = bash(find archived_session/.task/*.json | wc -l);
  dashboardData.archivedSessions.push({
    session_id: sessionData.session_id,
    project: sessionData.project,
    archived_at: sessionData.completed_at || sessionData.archived_at,
    taskCount: parseInt(taskCount),
    archive_path: archived_session
  });
```
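A shell-only sketch that cross-checks the per-session counts assembled above, using the summaries-per-completed-task convention from earlier steps:
```bash
# Print "session: completed/total tasks" for each active session
for session in .workflow/active/WFS-*/; do
  [ -d "$session" ] || continue
  id=$(basename "$session")
  total=$(find "${session}.task" -name '*.json' -type f 2>/dev/null | wc -l)
  completed=$(find "${session}.summaries" -name '*.md' -type f 2>/dev/null | wc -l)
  echo "$id: $completed/$total tasks completed"
done
```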
### Step 4: Generate HTML from Template
**Load template and inject data**:
```javascript
// Read the HTML template
const template = Read("~/.claude/templates/workflow-dashboard.html");
// Prepare data for injection
const dataJson = JSON.stringify(dashboardData, null, 2);
// Replace placeholder with actual data
const htmlContent = template.replace('{{WORKFLOW_DATA}}', dataJson);
// Ensure .workflow directory exists
bash(mkdir -p .workflow);
```
### Step 5: Write HTML File
```bash
# Write the generated HTML to .workflow/dashboard.html
Write({
file_path: ".workflow/dashboard.html",
content: htmlContent
})
```
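A shell-only sketch of the Step 4-5 template injection; the intermediate data file name is hypothetical:
```bash
# Inject the dashboard JSON into the template placeholder and write the HTML
data=$(cat .workflow/.dashboard-data.json)                   # JSON built in Step 3 (assumed file)
template=$(cat ~/.claude/templates/workflow-dashboard.html)
placeholder='{{WORKFLOW_DATA}}'
mkdir -p .workflow
printf '%s\n' "${template//$placeholder/$data}" > .workflow/dashboard.html
```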
### Step 6: Display Success Message
```markdown
Dashboard generated successfully!
Location: .workflow/dashboard.html
Open in browser:
file://$(pwd)/.workflow/dashboard.html
Features:
- 📊 Active sessions overview
- 📦 Archived sessions history
- 🔍 Search and filter
- 📈 Progress tracking
- 🎨 Dark/light theme
Refresh data: Re-run /workflow:status --dashboard
```

View File

@@ -1,7 +1,7 @@
---
name: tdd-plan
description: TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking
argument-hint: "[--agent] \"feature description\"|file.md"
argument-hint: "[--cli-execute] \"feature description\"|file.md"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
---
@@ -9,11 +9,24 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
## Coordinator Role
**This command is a pure orchestrator**: Execute 5 slash commands in sequence, parse outputs, pass context, and ensure complete TDD workflow creation.
**This command is a pure orchestrator**: Execute 6 slash commands in sequence, parse outputs, pass context, and ensure complete TDD workflow creation with Red-Green-Refactor task generation.
**Execution Modes**:
- **Manual Mode** (default): Use `/workflow:tools:task-generate-tdd`
- **Agent Mode** (`--agent`): Use `/workflow:tools:task-generate-tdd --agent`
- **Agent Mode** (default): Use `/workflow:tools:task-generate-tdd` (autonomous agent-driven)
- **CLI Mode** (`--cli-execute`): Use `/workflow:tools:task-generate-tdd --cli-execute` (Gemini/Qwen)
**Task Attachment Model**:
- SlashCommand invocation **expands workflow** by attaching sub-tasks to current TodoWrite
- When a sub-command is invoked (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
- Orchestrator **executes these attached tasks** sequentially
- After completion, attached tasks are **collapsed** back to high-level phase summary
- This is **task expansion**, not external delegation
**Auto-Continue Mechanism**:
- TodoList tracks current phase status and dynamically manages task attachment/collapse
- When each phase finishes executing, automatically execute the next pending phase
- All phases run autonomously without user interaction
- **⚠️ CONTINUOUS EXECUTION** - Do not stop until all phases complete
## Core Rules
@@ -21,9 +34,10 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
2. **No Preliminary Analysis**: Do not read files before Phase 1
3. **Parse Every Output**: Extract required data for next phase
4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
5. **Track Progress**: Update TodoWrite after every phase completion
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
6. **TDD Context**: All descriptions include "TDD:" prefix
7. **Quality Gate**: Phase 4 conflict resolution (optional, auto-triggered) validates compatibility before task generation
7. **Task Attachment Model**: SlashCommand invocation **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
8. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
## 6-Phase Execution (with Conflict Resolution)
@@ -56,7 +70,7 @@ TEST_FOCUS: [Test scenarios]
**Parse Output**:
- Extract: context-package.json path (store as `contextPath`)
- Typical pattern: `.workflow/[sessionId]/.process/context-package.json`
- Typical pattern: `.workflow/active/[sessionId]/.process/context-package.json`
**Validation**:
- Context package path extracted
@@ -77,17 +91,45 @@ TEST_FOCUS: [Test scenarios]
- Related components and integration points
- Test framework detection
**Parse**: Extract testContextPath (`.workflow/[sessionId]/.process/test-context-package.json`)
**Parse**: Extract testContextPath (`.workflow/active/[sessionId]/.process/test-context-package.json`)
**Benefits**:
- Makes TDD aware of existing environment
- Identifies reusable test patterns
- Prevents duplicate test creation
- Enables integration with existing tests
**TodoWrite**: Mark phase 3 completed, phase 4 in_progress
**After Phase 3**: Return to user showing test coverage results, then auto-continue to Phase 4
<!-- TodoWrite: When test-context-gather invoked, INSERT 3 test-context-gather tasks -->
**TodoWrite Update (Phase 3 SlashCommand invoked - tasks attached)**:
```json
[
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Phase 3.1: Detect test framework and conventions (test-context-gather)", "status": "in_progress", "activeForm": "Detecting test framework"},
{"content": "Phase 3.2: Analyze existing test coverage (test-context-gather)", "status": "pending", "activeForm": "Analyzing test coverage"},
{"content": "Phase 3.3: Identify coverage gaps (test-context-gather)", "status": "pending", "activeForm": "Identifying coverage gaps"},
{"content": "Execute TDD task generation", "status": "pending", "activeForm": "Executing TDD task generation"},
{"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
]
```
**Note**: SlashCommand invocation **attaches** test-context-gather's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
<!-- TodoWrite: After Phase 3 tasks complete, REMOVE Phase 3.1-3.3, restore to orchestrator view -->
**TodoWrite Update (Phase 3 completed - tasks collapsed)**:
```json
[
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Execute test coverage analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
{"content": "Execute TDD task generation", "status": "pending", "activeForm": "Executing TDD task generation"},
{"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
]
```
**Note**: Phase 3 tasks completed and collapsed to summary.
**After Phase 3**: Return to user showing test coverage results, then auto-continue to Phase 4/5 (depending on conflict_risk)
---
@@ -107,13 +149,47 @@ TEST_FOCUS: [Test scenarios]
- Verify: CONFLICT_RESOLUTION.md file path (if executed)
**Validation**:
- File `.workflow/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
- File `.workflow/active/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
**Skip Behavior**:
- If conflict_risk is "none" or "low", skip directly to Phase 5
- Display: "No significant conflicts detected, proceeding to TDD task generation"
**TodoWrite**: Mark phase 4 completed (if executed) or skipped, phase 5 in_progress
<!-- TodoWrite: If conflict_risk ≥ medium, INSERT 3 conflict-resolution tasks -->
**TodoWrite Update (Phase 4 SlashCommand invoked - tasks attached, if conflict_risk ≥ medium)**:
```json
[
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Execute test coverage analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
{"content": "Phase 4.1: Detect conflicts with CLI analysis (conflict-resolution)", "status": "in_progress", "activeForm": "Detecting conflicts"},
{"content": "Phase 4.2: Present conflicts to user (conflict-resolution)", "status": "pending", "activeForm": "Presenting conflicts"},
{"content": "Phase 4.3: Apply resolution strategies (conflict-resolution)", "status": "pending", "activeForm": "Applying resolution strategies"},
{"content": "Execute TDD task generation", "status": "pending", "activeForm": "Executing TDD task generation"},
{"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
]
```
**Note**: SlashCommand invocation **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached → **Execute Phase 4.1-4.3** sequentially
<!-- TodoWrite: After Phase 4 tasks complete, REMOVE Phase 4.1-4.3, restore to orchestrator view -->
**TodoWrite Update (Phase 4 completed - tasks collapsed)**:
```json
[
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Execute test coverage analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
{"content": "Execute conflict resolution", "status": "completed", "activeForm": "Executing conflict resolution"},
{"content": "Execute TDD task generation", "status": "pending", "activeForm": "Executing TDD task generation"},
{"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
]
```
**Note**: Phase 4 tasks completed and collapsed to summary.
**After Phase 4**: Return to user showing conflict resolution results (if executed) and selected strategies, then auto-continue to Phase 5
@@ -129,8 +205,8 @@ TEST_FOCUS: [Test scenarios]
### Phase 5: TDD Task Generation
**Command**:
- Manual: `/workflow:tools:task-generate-tdd --session [sessionId]`
- Agent: `/workflow:tools:task-generate-tdd --session [sessionId] --agent`
- Agent Mode (default): `/workflow:tools:task-generate-tdd --session [sessionId]`
- CLI Mode (`--cli-execute`): `/workflow:tools:task-generate-tdd --session [sessionId] --cli-execute`
**Parse**: Extract feature count, task count (not chain count - tasks now contain internal TDD cycles)
@@ -145,6 +221,40 @@ TEST_FOCUS: [Test scenarios]
- IMPL_PLAN.md contains workflow_type: "tdd" in frontmatter
- Task count ≤10 (compliance with task limit)
<!-- TodoWrite: When task-generate-tdd invoked, INSERT 3 task-generate-tdd tasks -->
**TodoWrite Update (Phase 5 SlashCommand invoked - tasks attached)**:
```json
[
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Execute test coverage analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
{"content": "Phase 5.1: Discovery - analyze TDD requirements (task-generate-tdd)", "status": "in_progress", "activeForm": "Analyzing TDD requirements"},
{"content": "Phase 5.2: Planning - design Red-Green-Refactor cycles (task-generate-tdd)", "status": "pending", "activeForm": "Designing TDD cycles"},
{"content": "Phase 5.3: Output - generate IMPL tasks with internal TDD phases (task-generate-tdd)", "status": "pending", "activeForm": "Generating TDD tasks"},
{"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
]
```
**Note**: SlashCommand invocation **attaches** task-generate-tdd's 3 tasks. Orchestrator **executes** these tasks. Each generated IMPL task will contain internal Red-Green-Refactor cycle.
**Next Action**: Tasks attached → **Execute Phase 5.1-5.3** sequentially
<!-- TodoWrite: After Phase 5 tasks complete, REMOVE Phase 5.1-5.3, restore to orchestrator view -->
**TodoWrite Update (Phase 5 completed - tasks collapsed)**:
```json
[
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Execute test coverage analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
{"content": "Execute TDD task generation", "status": "completed", "activeForm": "Executing TDD task generation"},
{"content": "Validate TDD structure", "status": "in_progress", "activeForm": "Validating TDD structure"}
]
```
**Note**: Phase 5 tasks completed and collapsed to summary. Each generated IMPL task contains complete Red-Green-Refactor cycle internally.
### Phase 6: TDD Structure Validation & Action Plan Verification (RECOMMENDED)
**Internal validation first, then recommend external verification**
@@ -182,9 +292,9 @@ Structure:
[...]
Plans generated:
- Unified Implementation Plan: .workflow/[sessionId]/IMPL_PLAN.md
- Unified Implementation Plan: .workflow/active/[sessionId]/IMPL_PLAN.md
(includes TDD Implementation Tasks section with workflow_type: "tdd")
- Task List: .workflow/[sessionId]/TODO_LIST.md
- Task List: .workflow/active/[sessionId]/TODO_LIST.md
(with internal TDD phase indicators)
TDD Configuration:
@@ -202,45 +312,85 @@ Quality Gate: Consider running /workflow:action-plan-verify to validate TDD task
## TodoWrite Pattern
```javascript
// Initialize (Phase 4 added dynamically after Phase 3 if conflict_risk ≥ medium)
TodoWrite({todos: [
  {"content": "Execute session discovery", "status": "in_progress", "activeForm": "Executing session discovery"},
  {"content": "Execute context gathering", "status": "pending", "activeForm": "Executing context gathering"},
  {"content": "Execute test coverage analysis", "status": "pending", "activeForm": "Executing test coverage analysis"},
  // Phase 4 todo added dynamically after Phase 3 if conflict_risk ≥ medium
  {"content": "Execute TDD task generation", "status": "pending", "activeForm": "Executing TDD task generation"},
  {"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
]})

// After Phase 3 (if conflict_risk ≥ medium, insert Phase 4 todo)
TodoWrite({todos: [
  {"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
  {"content": "Execute test coverage analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
  {"content": "Execute conflict resolution", "status": "in_progress", "activeForm": "Executing conflict resolution"},
  {"content": "Execute TDD task generation", "status": "pending", "activeForm": "Executing TDD task generation"},
  {"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
]})

// After Phase 3 (if conflict_risk is none/low, skip Phase 4, go directly to Phase 5)
TodoWrite({todos: [
  {"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
  {"content": "Execute test coverage analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
  {"content": "Execute TDD task generation", "status": "in_progress", "activeForm": "Executing TDD task generation"},
  {"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
]})

// After Phase 4 (if executed), continue to Phase 5
TodoWrite({todos: [
  {"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
  {"content": "Execute test coverage analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
  {"content": "Execute conflict resolution", "status": "completed", "activeForm": "Executing conflict resolution"},
  {"content": "Execute TDD task generation", "status": "in_progress", "activeForm": "Executing TDD task generation"},
  {"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
]})
```
**Core Concept**: Dynamic task attachment and collapse for TDD workflow with test coverage analysis and Red-Green-Refactor cycle generation.
### Key Principles
1. **Task Attachment** (when SlashCommand invoked):
   - Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
   - Example: `/workflow:tools:test-context-gather` attaches 3 sub-tasks (Phase 3.1, 3.2, 3.3)
   - First attached task marked as `in_progress`, others as `pending`
   - Orchestrator **executes** these attached tasks sequentially
2. **Task Collapse** (after sub-tasks complete):
   - Remove detailed sub-tasks from TodoWrite
   - **Collapse** to high-level phase summary
   - Example: Phase 3.1-3.3 collapse to "Execute test coverage analysis: completed"
   - Maintains clean orchestrator-level view
3. **Continuous Execution**:
   - After collapse, automatically proceed to next pending phase
   - No user intervention required between phases
   - TodoWrite dynamically reflects current execution state
**Lifecycle Summary**: Initial pending tasks → Phase invoked (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins (conditional Phase 4 if conflict_risk ≥ medium) → Repeat until all phases complete.
### TDD-Specific Features
- **Phase 3**: Test coverage analysis detects existing patterns and gaps
- **Phase 5**: Generated IMPL tasks contain internal Red-Green-Refactor cycles
- **Conditional Phase 4**: Conflict resolution only if conflict_risk ≥ medium
**Note**: See individual Phase descriptions (Phase 3, 4, 5) for detailed TodoWrite Update examples with full JSON structures.
## Execution Flow Diagram
```
TDD Workflow Orchestrator
├─ Phase 1: Session Discovery
│ └─ /workflow:session:start --auto
│ └─ Returns: sessionId
├─ Phase 2: Context Gathering
│ └─ /workflow:tools:context-gather
│ └─ Returns: context-package.json path
├─ Phase 3: Test Coverage Analysis ← ATTACHED (3 tasks)
│ └─ /workflow:tools:test-context-gather
│ ├─ Phase 3.1: Detect test framework
│ ├─ Phase 3.2: Analyze existing test coverage
│ └─ Phase 3.3: Identify coverage gaps
│ └─ Returns: test-context-package.json ← COLLAPSED
├─ Phase 4: Conflict Resolution (conditional)
│ IF conflict_risk ≥ medium:
│ └─ /workflow:tools:conflict-resolution ← ATTACHED (3 tasks)
│ ├─ Phase 4.1: Detect conflicts with CLI
│ ├─ Phase 4.2: Present conflicts to user
│ └─ Phase 4.3: Apply resolution strategies
│ └─ Returns: CONFLICT_RESOLUTION.md ← COLLAPSED
│ ELSE:
│ └─ Skip to Phase 5
├─ Phase 5: TDD Task Generation ← ATTACHED (3 tasks)
│ └─ /workflow:tools:task-generate-tdd
│ ├─ Phase 5.1: Discovery - analyze TDD requirements
│ ├─ Phase 5.2: Planning - design Red-Green-Refactor cycles
│ └─ Phase 5.3: Output - generate IMPL tasks with internal TDD phases
│ └─ Returns: IMPL-*.json, IMPL_PLAN.md ← COLLAPSED
│ (Each IMPL task contains internal Red-Green-Refactor cycle)
└─ Phase 6: TDD Structure Validation
└─ Internal validation + summary returned
└─ Recommend: /workflow:action-plan-verify
Key Points:
• ← ATTACHED: SlashCommand attaches sub-tasks to orchestrator TodoWrite
• ← COLLAPSED: Sub-tasks executed and collapsed to phase summary
• TDD-specific: Each generated IMPL task contains complete Red-Green-Refactor cycle
```
## Input Processing
@@ -258,110 +408,6 @@ Convert user input to TDD-structured format:
- **Command failure**: Keep phase in_progress, report error
- **TDD validation failure**: Report incomplete chains or wrong dependencies
## TDD Workflow Enhancements
### Overview
The TDD workflow has been significantly enhanced by integrating best practices from both the traditional `plan --agent` and `test-gen` workflows, creating a hybrid approach that bridges the gap between idealized TDD and real-world development complexity.
### Key Improvements
#### 1. Test Coverage Analysis (Phase 3)
**Adopted from test-gen workflow**
Before planning TDD tasks, the workflow now analyzes the existing codebase:
- Detects existing test patterns and conventions
- Identifies current test coverage
- Discovers related components and integration points
- Detects test framework automatically
**Benefits**:
- Context-aware TDD planning
- Avoids duplicate test creation
- Enables integration with existing tests
- No longer assumes greenfield scenarios
#### 2. Iterative Green Phase with Test-Fix Cycle
**Adopted from test-gen workflow**
IMPL (Green phase) tasks now include automatic test-fix cycle for resilient implementation:
**Enhanced IMPL Task Flow**:
```
1. Write minimal implementation code
2. Execute test suite
3. IF tests pass → Complete task
4. IF tests fail → Enter fix cycle:
a. Gemini diagnoses with bug-fix template
b. Apply fix (manual or Codex)
c. Retest
d. Repeat (max 3 iterations)
5. IF max iterations → Auto-revert changes 🔄
```
**Benefits**:
- Faster feedback within Green phase
- Autonomous recovery from implementation errors
- Systematic debugging with Gemini
- Safe rollback prevents broken state
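A control-flow sketch of the flow above; the `npm test` command and git rollback are assumptions, and the actual diagnosis and fix steps are delegated to Gemini/Codex as described:
```bash
# Green-phase fix cycle: retest up to max_iterations, then revert
max_iterations=3
for i in $(seq 1 "$max_iterations"); do
  if npm test; then
    echo "GREEN: tests pass after attempt $i"
    exit 0
  fi
  echo "Attempt $i failed -> diagnose with Gemini bug-fix template, apply fix, retest"
  # (fix application is handled by the agent / Codex in the real workflow)
done
echo "Max iterations reached -> auto-revert uncommitted changes"
git checkout -- .   # rollback sketch; discards uncommitted edits
```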
#### 3. Agent-Driven Planning
**From plan --agent workflow**
Supports action-planning-agent for more autonomous TDD planning with:
- MCP tool integration (code-index, exa)
- Memory-first principles
- Brainstorming artifact integration
- Task merging over decomposition
### Workflow Comparison
| Aspect | Previous | Current (Optimized) |
|--------|----------|---------------------|
| **Phases** | 6 (with test coverage) | 7 (added concept verification) |
| **Context** | Greenfield assumption | Existing codebase aware |
| **Task Structure** | 1 feature = 3 tasks (TEST/IMPL/REFACTOR) | 1 feature = 1 task (internal TDD cycle) |
| **Task Count** | 5 features = 15 tasks | 5 features = 5 tasks (70% reduction) |
| **Green Phase** | Single implementation | Iterative with fix cycle |
| **Failure Handling** | Manual intervention | Auto-diagnose + fix + revert |
| **Test Analysis** | None | Deep coverage analysis |
| **Feedback Loop** | Post-execution | During Green phase |
| **Task Management** | High overhead (15 tasks) | Low overhead (5 tasks) |
| **Execution Efficiency** | Frequent context switching | Continuous context per feature |
### Migration Notes
**Backward Compatibility**: Fully compatible
- Existing TDD workflows continue to work
- New features are additive, not breaking
- Phase 3 can be skipped if test-context-gather not available
**Session Structure**:
```
.workflow/WFS-xxx/
├── IMPL_PLAN.md (unified plan with TDD Implementation Tasks section)
├── TODO_LIST.md (with internal TDD phase indicators)
├── .process/
│ ├── context-package.json
│ ├── test-context-package.json
│ ├── ANALYSIS_RESULTS.md (enhanced with TDD breakdown)
│ └── green-fix-iteration-*.md (fix logs from Green phase cycles)
└── .task/
├── IMPL-1.json (Complete TDD task: Red-Green-Refactor internally)
├── IMPL-2.json (Complete TDD task)
├── IMPL-3.json (Complex feature container, if needed)
├── IMPL-3.1.json (Complex feature subtask, if needed)
└── IMPL-3.2.json (Complex feature subtask, if needed)
```
**File Count Comparison**:
- **Old structure**: 5 features = 15 task files (TEST/IMPL/REFACTOR × 5)
- **New structure**: 5 features = 5 task files (IMPL-N × 5)
- **Complex features**: Add container + subtasks only when necessary
**Configuration Options** (in IMPL tasks):
- `meta.max_iterations`: Fix attempts (default: 3)
- `meta.use_codex`: Auto-fix mode (default: false)
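Example of adjusting these options on a generated task file (the session path is illustrative):
```bash
# Raise the fix-attempt limit and enable auto-fix on one IMPL task
task=".workflow/active/WFS-example/.task/IMPL-1.json"
jq '.meta.max_iterations = 5 | .meta.use_codex = true' "$task" > "$task.tmp" && mv "$task.tmp" "$task"
```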
## Related Commands
**Prerequisite Commands**:
@@ -373,8 +419,8 @@ Supports action-planning-agent for more autonomous TDD planning with:
- `/workflow:tools:test-context-gather` - Phase 3: Analyze existing test patterns and coverage
- `/workflow:tools:conflict-resolution` - Phase 4: Detect and resolve conflicts (auto-triggered if conflict_risk ≥ medium)
- `/compact` - Phase 4: Memory optimization (if context approaching limits)
- `/workflow:tools:task-generate-tdd` - Phase 5: Generate TDD task chains with Red-Green-Refactor cycles
- `/workflow:tools:task-generate-tdd --agent` - Phase 5: Generate TDD tasks with agent-driven approach (when `--agent` flag used)
- `/workflow:tools:task-generate-tdd` - Phase 5: Generate TDD tasks with agent-driven approach (default, autonomous)
- `/workflow:tools:task-generate-tdd --cli-execute` - Phase 5: Generate TDD tasks with CLI tools (Gemini/Qwen, when `--cli-execute` flag used)
**Follow-up Commands**:
- `/workflow:action-plan-verify` - Recommended: Verify TDD plan quality and structure before execution

View File

@@ -28,7 +28,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(gemini:*)
sessionId = argument
# Else auto-detect active session
find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//'
find .workflow/active/ -name "WFS-*" -type d | head -1 | sed 's/.*\///'
```
**Extract**: sessionId
@@ -44,18 +44,18 @@ find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//'
```bash
# Load all task JSONs
find .workflow/{sessionId}/.task/ -name '*.json'
find .workflow/active/{sessionId}/.task/ -name '*.json'
# Extract task IDs
find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.id' {} \;
find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.id' {} \;
# Check dependencies
find .workflow/{sessionId}/.task/ -name 'IMPL-*.json' -exec jq -r '.context.depends_on[]?' {} \;
find .workflow/{sessionId}/.task/ -name 'REFACTOR-*.json' -exec jq -r '.context.depends_on[]?' {} \;
find .workflow/active/{sessionId}/.task/ -name 'IMPL-*.json' -exec jq -r '.context.depends_on[]?' {} \;
find .workflow/active/{sessionId}/.task/ -name 'REFACTOR-*.json' -exec jq -r '.context.depends_on[]?' {} \;
# Check meta fields
find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.tdd_phase' {} \;
find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.tdd_phase' {} \;
find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
```
**Validation**:
@@ -82,9 +82,9 @@ find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
- Compliance score
**Validation**:
- `.workflow/{sessionId}/.process/test-results.json` exists
- `.workflow/{sessionId}/.process/coverage-report.json` exists
- `.workflow/{sessionId}/.process/tdd-cycle-report.md` exists
- `.workflow/active/{sessionId}/.process/test-results.json` exists
- `.workflow/active/{sessionId}/.process/coverage-report.json` exists
- `.workflow/active/{sessionId}/.process/tdd-cycle-report.md` exists
**TodoWrite**: Mark phase 3 completed, phase 4 in_progress
@@ -97,7 +97,7 @@ find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
cd project-root && gemini -p "
PURPOSE: Generate TDD compliance report
TASK: Analyze TDD workflow execution and generate quality report
CONTEXT: @{.workflow/{sessionId}/.task/*.json,.workflow/{sessionId}/.summaries/*,.workflow/{sessionId}/.process/tdd-cycle-report.md}
CONTEXT: @{.workflow/active/{sessionId}/.task/*.json,.workflow/active/{sessionId}/.summaries/*,.workflow/active/{sessionId}/.process/tdd-cycle-report.md}
EXPECTED:
- TDD compliance score (0-100)
- Chain completeness verification
@@ -106,7 +106,7 @@ EXPECTED:
- Red-Green-Refactor cycle validation
- Best practices adherence assessment
RULES: Focus on TDD best practices and workflow adherence. Be specific about violations and improvements.
" > .workflow/{sessionId}/TDD_COMPLIANCE_REPORT.md
" > .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
```
**Output**: TDD_COMPLIANCE_REPORT.md
@@ -134,7 +134,7 @@ Function Coverage: {percentage}%
## Compliance Score: {score}/100
Detailed report: .workflow/{sessionId}/TDD_COMPLIANCE_REPORT.md
Detailed report: .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
Recommendations:
- Complete missing REFACTOR-3.1 task
@@ -168,7 +168,7 @@ TodoWrite({todos: [
### Chain Validation Algorithm
```
1. Load all task JSONs from .workflow/{sessionId}/.task/
1. Load all task JSONs from .workflow/active/{sessionId}/.task/
2. Extract task IDs and group by feature number
3. For each feature:
- Check TEST-N.M exists
@@ -202,7 +202,7 @@ Final Score: Max(0, Base Score - Deductions)
## Output Files
```
.workflow/{session-id}/
.workflow/active/{session-id}/
├── TDD_COMPLIANCE_REPORT.md # Comprehensive compliance report ⭐
└── .process/
├── test-results.json # From tdd-coverage-analysis
@@ -215,8 +215,8 @@ Final Score: Max(0, Base Score - Deductions)
### Session Discovery Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| No active session | No .active-* file | Provide session-id explicitly |
| Multiple active sessions | Multiple .active-* files | Provide session-id explicitly |
| No active session | No WFS-* directories | Provide session-id explicitly |
| Multiple active sessions | Multiple WFS-* directories | Provide session-id explicitly |
| Session not found | Invalid session-id | Check available sessions |
### Validation Errors

View File

@@ -1,6 +1,6 @@
---
name: test-cycle-execute
description: Execute test-fix workflow with dynamic task generation and iterative fix cycles until all tests pass or max iterations reached
description: Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.
argument-hint: "[--resume-session=\"session-id\"] [--max-iterations=N]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*)
---
@@ -10,10 +10,13 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*)
## Overview
Orchestrates dynamic test-fix workflow execution through iterative cycles of testing, analysis, and fixing. **Unlike standard execute, this command dynamically generates intermediate tasks** during execution based on test results and CLI analysis, enabling adaptive problem-solving.
**Quality Gate**: Iterates until test pass rate >= 95% (with criticality assessment) or 100% for full approval.
**CRITICAL - Orchestrator Boundary**:
- This command is the **ONLY place** where test failures are handled
- All CLI analysis (Gemini/Qwen), fix task generation (IMPL-fix-N.json), and iteration management happen HERE
- Agents (@test-fix-agent) only execute single tasks and return results
- All failure analysis and fix task generation delegated to **@cli-planning-agent**
- Orchestrator calculates pass rate, assesses criticality, and manages iteration loop
- @test-fix-agent executes tests and applies fixes, reports results back
- **Do NOT handle test failures in main workflow or other commands** - always delegate to this orchestrator
**Resume Mode**: When called with `--resume-session` flag, skips discovery and continues from interruption point.
@@ -35,44 +38,53 @@ Orchestrates dynamic test-fix workflow execution through iterative cycles of tes
### Agent Coordination
- **@code-developer**: Understands requirements, generates implementations
- **@test-fix-agent**: Executes tests, applies fixes, validates results
- **CLI Tools (Gemini/Qwen)**: Analyzes failures, suggests fix strategies
- **@test-fix-agent**: Executes tests, applies fixes, validates results, assigns criticality
- **@cli-planning-agent**: Executes CLI analysis (Gemini/Qwen), parses results, generates fix task JSONs
## Core Rules
1. **Dynamic Task Generation**: Create intermediate fix tasks based on test failures
2. **Iterative Execution**: Repeat test-fix cycles until success or max iterations
3. **CLI-Driven Analysis**: Use Gemini/Qwen to analyze failures and plan fixes
4. **Agent Delegation**: All execution delegated to specialized agents
5. **Context Accumulation**: Each iteration builds on previous attempt context
6. **Autonomous Completion**: Continue until all tests pass without user interruption
1. **Dynamic Task Generation**: Create intermediate fix tasks via @cli-planning-agent based on test failures
2. **Iterative Execution**: Repeat test-fix cycles until pass rate >= 95% (with criticality assessment) or max iterations
3. **Pass Rate Threshold**: Target 95%+ pass rate; 100% for full approval; assess criticality for 95-100% range
4. **Agent-Based Analysis**: Delegate CLI execution and task generation to @cli-planning-agent
5. **Agent Delegation**: All execution delegated to specialized agents (@cli-planning-agent, @test-fix-agent)
6. **Context Accumulation**: Each iteration builds on previous attempt context
7. **Autonomous Completion**: Continue until pass rate >= 95% without user interruption
## Core Responsibilities
- **Session Discovery**: Identify test-fix workflow sessions
- **Task Queue Management**: Maintain dynamic task queue with runtime additions
- **Test Execution**: Run tests through @test-fix-agent
- **Failure Analysis**: Use CLI tools to diagnose test failures
- **Fix Task Generation**: Create intermediate fix tasks dynamically
- **Iteration Control**: Manage fix cycles with max iteration limits
- **Context Propagation**: Pass failure context and fix history between iterations
- **Pass Rate Calculation**: Calculate test pass rate from test-results.json (passed/total * 100)
- **Criticality Assessment**: Evaluate failure severity using test-results.json criticality field
- **Threshold Decision**: Determine if pass rate >= 95% with criticality consideration
- **Failure Analysis Delegation**: Invoke @cli-planning-agent for CLI analysis and task generation
- **Iteration Control**: Manage fix cycles with max iteration limits (5 default)
- **Context Propagation**: Pass failure context and fix history to @cli-planning-agent
- **Progress Tracking**: TodoWrite updates for entire iteration cycle
- **Session Auto-Complete**: Call `/workflow:session:complete` when all tests pass
- **Session Auto-Complete**: Call `/workflow:session:complete` when pass rate >= 95% (or 100%)
## Responsibility Matrix
**CRITICAL - Clear division of labor between orchestrator and agents:**
| Responsibility | test-cycle-execute (Orchestrator) | @test-fix-agent (Executor) |
|----------------|----------------------------|---------------------------|
| Manage iteration loop | Yes - Controls loop flow | No - Executes single task |
| Run CLI analysis (Gemini/Qwen) | Yes - Runs between agent tasks | No - Not involved |
| Generate IMPL-fix-N.json | Yes - Creates task files | No - Not involved |
| Run tests | No - Delegates to agent | Yes - Executes test command |
| Apply fixes | No - Delegates to agent | Yes - Modifies code |
| Detect test failures | Yes - Analyzes results and decides next action | Yes - Executes tests and reports outcomes |
| Add tasks to queue | Yes - Manages queue | No - Not involved |
| Update iteration state | Yes - Maintains overall iteration state | Yes - Updates individual task status only |
| Responsibility | test-cycle-execute (Orchestrator) | @cli-planning-agent | @test-fix-agent (Executor) |
|----------------|----------------------------|---------------------|---------------------------|
| Manage iteration loop | Yes - Controls loop flow | No - Not involved | No - Executes single task |
| Calculate pass rate | Yes - From test-results.json | No - Not involved | No - Reports test results |
| Assess criticality | Yes - From test-results.json | No - Not involved | Yes - Assigns criticality in test results |
| Run CLI analysis (Gemini/Qwen) | No - Delegates to cli-planning-agent | Yes - Executes CLI internally | No - Not involved |
| Parse CLI output | No - Delegated | Yes - Extracts fix strategy | No - Not involved |
| Generate IMPL-fix-N.json | No - Delegated | Yes - Creates task files | No - Not involved |
| Run tests | No - Delegates to agent | No - Not involved | Yes - Executes test command |
| Apply fixes | No - Delegates to agent | No - Not involved | Yes - Modifies code |
| Detect test failures | Yes - Analyzes pass rate and decides next action | No - Not involved | Yes - Executes tests and reports outcomes |
| Add tasks to queue | Yes - Manages queue | No - Returns task ID | No - Not involved |
| Update iteration state | Yes - Maintains overall iteration state | No - Not involved | Yes - Updates individual task status only |
**Key Principle**: Orchestrator manages the "what" and "when"; agents execute the "how".
**Key Principles**:
- Orchestrator manages the "what" (iteration flow, threshold decisions) and "when" (task scheduling)
- @cli-planning-agent executes the "analysis" (CLI execution, result parsing, task generation)
- @test-fix-agent executes the "how" (run tests, apply fixes)
**ENFORCEMENT**: If test failures occur outside this orchestrator, do NOT handle them inline - always call `/workflow:test-cycle-execute` instead.
@@ -97,17 +109,29 @@ For each task in queue:
1. [Orchestrator] Load task JSON and context
2. [Orchestrator] Determine task type (test-gen, test-fix, fix-iteration)
3. [Orchestrator] Execute task through appropriate agent
4. [Orchestrator] Collect agent results and check exit conditions
5. If test failures detected:
a. [Orchestrator] Run CLI analysis (Gemini/Qwen)
b. [Orchestrator] Generate fix task JSON (IMPL-fix-N.json)
4. [Orchestrator] Collect agent results from .process/test-results.json
5. [Orchestrator] Calculate test pass rate:
a. Parse test-results.json: passRate = (passed / total) * 100
b. Assess failure criticality (from test-results.json)
c. Evaluate fix effectiveness (NEW):
- Compare passRate with previous iteration
- If passRate decreased by >10%: REGRESSION detected
- If regression: Rollback last fix, skip to next strategy
6. [Orchestrator] Make threshold decision:
IF passRate === 100%:
→ SUCCESS: Mark task complete, update TodoWrite, continue
ELSE IF passRate >= 95%:
→ REVIEW: Check failure criticality
→ If all failures are "low" criticality: PARTIAL SUCCESS (approve with note)
→ If any "high" or "medium" criticality: Enter fix loop (step 7)
ELSE IF passRate < 95%:
→ FAILED: Enter fix loop (step 7)
7. If entering fix loop (pass rate < 95% OR critical failures exist):
a. [Orchestrator] Invoke @cli-planning-agent with failure context
b. [Agent] Executes CLI analysis + generates IMPL-fix-N.json
c. [Orchestrator] Insert fix task at front of queue
d. [Orchestrator] Continue loop
6. If test success:
a. [Orchestrator] Mark task complete
b. [Orchestrator] Update TodoWrite
c. [Orchestrator] Continue to next task
7. [Orchestrator] Check max iterations limit
8. [Orchestrator] Check max iterations limit (abort if exceeded)
```
**Note**: The orchestrator controls the loop. Agents execute individual tasks and return results.
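A minimal sketch of the pass-rate and threshold decision, assuming `test-results.json` exposes `passed`, `total`, and a per-failure `criticality` field (the exact schema is an assumption):
```bash
# Compute pass rate and apply the 95%/100% threshold rules
results=".workflow/active/WFS-example/.process/test-results.json"
passed=$(jq '.passed' "$results")
total=$(jq '.total' "$results")                      # assumes total > 0
pass_rate=$(( passed * 100 / total ))
critical=$(jq '[.failures[]? | select(.criticality != "low")] | length' "$results")
if [ "$pass_rate" -eq 100 ]; then
  echo "SUCCESS"
elif [ "$pass_rate" -ge 95 ] && [ "$critical" -eq 0 ]; then
  echo "PARTIAL SUCCESS (approve with note)"
else
  echo "ENTER FIX LOOP (pass rate ${pass_rate}%, critical failures: $critical)"
fi
```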
@@ -119,33 +143,33 @@ For each task in queue:
#### Iteration Structure
```
Iteration N (managed by test-cycle-execute orchestrator):
├── 1. Test Execution
├── 1. Test Execution & Pass Rate Validation
│ ├── [Orchestrator] Launch @test-fix-agent with test task
│ ├── [Agent] Run test suite
│ ├── [Agent] Collect failures and report back
── [Orchestrator] Receive failure report
├── 2. Failure Analysis
│ ├── [Orchestrator] Run CLI tool (Gemini/Qwen)
│ ├── [CLI Tool] Analyze error messages and failure context
│ ├── [CLI Tool] Identify root causes
── [CLI Tool] Generate fix strategy → saved to iteration-N-analysis.md
├── 3. Fix Task Generation
│ ├── [Orchestrator] Parse CLI analysis results
│ ├── [Orchestrator] Create IMPL-fix-N.json with:
│ ├── meta.agent: "@test-fix-agent"
│ │ ├── Failure context (content, not just path)
│ └── Fix strategy from CLI analysis
── [Orchestrator] Insert into task queue (front position)
├── 4. Fix Execution
│ ├── [Orchestrator] Launch @test-fix-agent with fix task
│ ├── [Agent] Load fix strategy from task context
│ ├── [Agent] Apply fixes to code/tests
│ ├── [Agent] Run test suite and save results to test-results.json
│ ├── [Agent] Report completion back to orchestrator
── [Orchestrator] Calculate pass rate: (passed / total) * 100
│ └── [Orchestrator] Assess failure criticality from test-results.json
├── 2. Failure Analysis & Task Generation (via @cli-planning-agent)
│ ├── [Orchestrator] Assemble failure context package (tests, errors, pass_rate)
│ ├── [Orchestrator] Invoke @cli-planning-agent with context
── [@cli-planning-agent] Execute CLI tool (Gemini/Qwen) internally
│ ├── [@cli-planning-agent] Parse CLI output for root causes and fix strategy
│ ├── [@cli-planning-agent] Generate IMPL-fix-N.json with structured task
│ ├── [@cli-planning-agent] Save analysis to iteration-N-analysis.md
└── [Orchestrator] Receive task ID and insert into queue (front position)
├── 3. Fix Execution
├── [Orchestrator] Launch @test-fix-agent with IMPL-fix-N task
── [Agent] Load fix strategy from task.context.fix_strategy
│ ├── [Agent] Apply surgical fixes to identified files
│ └── [Agent] Report completion
└── 5. Re-test
└── 4. Re-test
└── [Orchestrator] Return to step 1 with updated code
```
**Key**: Orchestrator runs CLI analysis between agent tasks, then generates new fix tasks.
**Key Changes**:
- CLI analysis + task generation encapsulated in @cli-planning-agent
- Pass rate calculation added to test execution step
- Orchestrator only assembles context and invokes agent
#### Iteration Task JSON Template
```json
@@ -185,13 +209,13 @@ Iteration N (managed by test-cycle-execute orchestrator):
"pre_analysis": [
{
"step": "load_failure_context",
"command": "Read(.workflow/{session}/.process/iteration-{N-1}-failures.json)",
"command": "Read(.workflow/session/{session}/.process/iteration-{N-1}-failures.json)",
"output_to": "previous_failures",
"on_error": "skip_optional"
},
{
"step": "load_fix_strategy",
"command": "Read(.workflow/{session}/.process/iteration-{N}-strategy.md)",
"command": "Read(.workflow/session/{session}/.process/iteration-{N}-strategy.md)",
"output_to": "fix_strategy",
"on_error": "fail"
}
@@ -220,74 +244,167 @@ Iteration N (managed by test-cycle-execute orchestrator):
}
```
### Phase 4: CLI Analysis Integration
### Phase 4: Agent-Based Failure Analysis & Task Generation
**Orchestrator executes CLI analysis between agent tasks:**
**Orchestrator delegates CLI analysis and task generation to @cli-planning-agent:**
#### When Test Failures Occur
1. **[Orchestrator]** Detects failures from agent test execution output
2. **[Orchestrator]** Collects failure context from `.process/test-results.json` and logs
3. **[Orchestrator]** Executes Gemini/Qwen CLI tool with failure context
4. **[Orchestrator]** Interprets CLI tool output to extract fix strategy
5. **[Orchestrator]** Saves analysis to `.process/iteration-N-analysis.md`
6. **[Orchestrator]** Generates `IMPL-fix-N.json` with strategy content (not just path)
#### When Test Failures Occur (Pass Rate < 95% OR Critical Failures)
1. **[Orchestrator]** Detects failures from test-results.json
2. **[Orchestrator]** Check for repeated failures (NEW):
- Compare failed_tests with previous 2 iterations
- If same test failed 3 times consecutively: Mark as "stuck"
- If >50% of failures are "stuck": Switch analysis strategy or abort
3. **[Orchestrator]** Extracts failure context:
- Failed tests with criticality assessment
- Error messages and stack traces
- Current pass rate
- Previous iteration attempts (if any)
- Stuck test markers (NEW)
4. **[Orchestrator]** Assembles context package for @cli-planning-agent
5. **[Orchestrator]** Invokes @cli-planning-agent via Task tool
6. **[@cli-planning-agent]** Executes internally:
- Runs Gemini/Qwen CLI analysis with bug diagnosis template
- Parses CLI output to extract root causes and fix strategy
- Generates `IMPL-fix-N.json` with structured task definition
- Saves analysis report to `.process/iteration-N-analysis.md`
- Saves raw CLI output to `.process/iteration-N-cli-output.txt`
7. **[Orchestrator]** Receives task ID from agent and inserts into queue
**Note**: The orchestrator executes CLI analysis tools and processes their output. CLI tools provide the analysis; the orchestrator manages the workflow.
**Key Change**: CLI execution + result parsing + task generation are now encapsulated in @cli-planning-agent, simplifying orchestrator logic.
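A sketch of the repeated-failure check in step 2 above, assuming `fix-history.json` records a `failed_tests` array per iteration (the exact schema is an assumption):
```javascript
// Sketch: mark tests that failed three iterations in a row as "stuck"
function detectStuckTests(history) {
  // history: [{ iteration: 1, failed_tests: ["test_a", ...] }, ...] -- assumed shape
  if (history.length < 3) return { stuck: [], abortRecommended: false };

  const lastThree = history.slice(-3).map(entry => new Set(entry.failed_tests));
  const currentFailures = [...lastThree[2]];
  const stuck = currentFailures.filter(t => lastThree[0].has(t) && lastThree[1].has(t));

  const stuckRatio = currentFailures.length ? stuck.length / currentFailures.length : 0;
  return { stuck, abortRecommended: stuckRatio > 0.5 };
}
```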
#### CLI Analysis Command (executed by orchestrator)
```bash
cd {project_root} && gemini -p "
PURPOSE: Analyze test failures and generate fix strategy
TASK: Review test failures and identify root causes
MODE: analysis
CONTEXT: @test files @implementation files
#### Agent Invocation Pattern (executed by orchestrator)
```javascript
Task(
subagent_type="cli-planning-agent",
description=`Analyze test failures and generate fix task (iteration ${currentIteration})`,
prompt=`
## Task Objective
Analyze test failures and generate structured fix task JSON for iteration ${currentIteration}
[Test failure context and requirements...]
## MANDATORY FIRST STEPS
1. Read test results: {session.test_results_path}
2. Read test output: {session.test_output_path}
3. Read iteration state: {session.iteration_state_path}
4. Read fix history (if exists): {session.fix_history_path}
EXPECTED: Detailed fix strategy in markdown format
RULES: Focus on minimal changes, avoid over-engineering
"
## Session Paths
- Workflow Dir: {session.workflow_dir}
- Test Results: {session.test_results_path}
- Test Output: {session.test_output_path}
- Iteration State: {session.iteration_state_path}
- Fix History: {session.fix_history_path}
- Task Output Dir: {session.task_dir}
- Analysis Output: {session.process_dir}/iteration-${currentIteration}-analysis.md
- CLI Output: {session.process_dir}/iteration-${currentIteration}-cli-output.txt
## Context Metadata
- Session ID: ${sessionId}
- Current Iteration: ${currentIteration}
- Max Iterations: ${maxIterations}
- Current Pass Rate: ${passRate}%
## CLI Configuration
- Tool: gemini (fallback: qwen)
- Model: gemini-3-pro-preview-11-2025
- Template: 01-diagnose-bug-root-cause.txt
- Timeout: 2400000ms
## Expected Deliverables
1. Task JSON file: {session.task_dir}/IMPL-fix-${currentIteration}.json
2. Analysis report: {session.process_dir}/iteration-${currentIteration}-analysis.md
3. CLI raw output: {session.process_dir}/iteration-${currentIteration}-cli-output.txt
4. Return task ID to orchestrator
## Quality Standards
- Fix strategy must include specific modification points (file:function:lines)
- Analysis must identify root causes, not just symptoms
- Task JSON must be valid and complete with all required fields
- All deliverables saved to specified paths
## Success Criteria
Generate valid IMPL-fix-${currentIteration}.json with:
- Concrete fix strategy with modification points
- Root cause analysis from CLI tool
- All required task JSON fields (id, title, status, meta, context, flow_control)
- Return task ID for orchestrator to queue
`
)
```
#### Analysis Output Structure
#### Agent Response Format
**Agent must return structured response with deliverable paths:**
```javascript
{
"status": "success",
"task_id": "IMPL-fix-${iteration}",
"deliverables": {
"task_json": ".workflow/${session}/.task/IMPL-fix-${iteration}.json",
"analysis_report": ".workflow/${session}/.process/iteration-${iteration}-analysis.md",
"cli_output": ".workflow/${session}/.process/iteration-${iteration}-cli-output.txt"
},
"summary": {
"root_causes": ["Authentication token validation missing", "Null check missing"],
"modification_points": [
"src/auth/client.ts:sendRequest:45-50",
"src/validators/user.ts:validateUser:23-25"
],
"estimated_complexity": "low",
"expected_pass_rate_improvement": "85% → 95%"
}
}
```
**Orchestrator validates all deliverable paths exist before proceeding.**
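For example, that validation step might be as simple as the following sketch (the response shape follows the example above):
```javascript
// Sketch: verify the agent response and confirm every deliverable path exists on disk
const fs = require("fs");

function validateAgentResponse(response) {
  if (response.status !== "success") {
    return { ok: false, reason: `agent returned status "${response.status}"` };
  }
  const missing = Object.entries(response.deliverables)
    .filter(([, filePath]) => !fs.existsSync(filePath))
    .map(([name, filePath]) => `${name}: ${filePath}`);
  return missing.length
    ? { ok: false, reason: `missing deliverables: ${missing.join(", ")}` }
    : { ok: true, taskId: response.task_id };
}
```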
#### Generated Analysis Report Structure
The @cli-planning-agent generates `.process/iteration-N-analysis.md`:
```markdown
# Test Failure Analysis - Iteration {N}
---
iteration: N
analysis_type: test-failure
cli_tool: gemini
model: gemini-3-pro-preview-11-2025
timestamp: 2025-11-10T10:00:00Z
pass_rate: 85.0%
---
# Test Failure Analysis - Iteration N
## Summary
- **Failed Tests**: 2
- **Pass Rate**: 85.0% (Target: 95%+)
- **Root Causes Identified**: 2
- **Modification Points**: 2
## Failed Tests Details
### test_auth_flow
- **Error**: Expected 200, got 401
- **File**: tests/test_auth.test.ts:45
- **Criticality**: high
### test_data_validation
- **Error**: TypeError: Cannot read property 'name' of undefined
- **File**: tests/test_validators.test.ts:23
- **Criticality**: medium
## Root Cause Analysis
1. **Test: test_auth_flow**
- Error: `Expected 200, got 401`
- Root Cause: Missing authentication token in request headers
- Affected Code: `src/auth/client.ts:45`
2. **Test: test_data_validation**
- Error: `TypeError: Cannot read property 'name' of undefined`
- Root Cause: Null check missing before property access
- Affected Code: `src/validators/user.ts:23`
[CLI output: Root Cause Analysis (根本原因分析) section]
## Fix Strategy
[CLI output: Detailed Fix Recommendations (详细修复建议) section]
### Priority 1: Authentication Issue
- **File**: src/auth/client.ts
- **Function**: sendRequest (line 45)
- **Change**: Add token header: `headers['Authorization'] = 'Bearer ' + token`
- **Verification**: Run test_auth_flow
## Modification Points
- `src/auth/client.ts:sendRequest:45-50` - Add authentication token header
- `src/validators/user.ts:validateUser:23-25` - Add null check before property access
### Priority 2: Null Check
- **File**: src/validators/user.ts
- **Function**: validateUser (line 23)
- **Change**: Add check: `if (!user?.name) return false`
- **Verification**: Run test_data_validation
## Expected Outcome
[CLI output: Verification Recommendations (验证建议) section]
## Verification Plan
1. Apply fixes in order
2. Run test suite after each fix
3. Check for regressions
4. Validate all tests pass
## Risk Assessment
- Low risk: Changes are surgical and isolated
- No breaking changes expected
- Existing tests should remain green
## CLI Raw Output
See: `.process/iteration-N-cli-output.txt`
```
### Phase 5: Task Queue Management
@@ -321,27 +438,56 @@ After IMPL-fix-2 execution (success):
#### Success Conditions
- All initial tasks completed
- All generated fix tasks completed
- All tests passing
- **Test pass rate === 100%** (all tests passing)
- No pending tasks in queue
#### Partial Success Conditions (NEW)
- All initial tasks completed
- Test pass rate >= 95% AND < 100%
- All failures are "low" criticality (flaky tests, env-specific issues)
- **Automatic Approval with Warning**: System auto-approves but marks session with review flag
- Note: Generate completion summary with detailed warnings about low-criticality failures
#### Completion Steps
1. **Final Validation**: Run full test suite one more time
2. **Update Session State**: Mark all tasks completed
3. **Generate Summary**: Create session completion summary
4. **Update TodoWrite**: Mark all items completed
5. **Auto-Complete**: Call `/workflow:session:complete`
2. **Calculate Final Pass Rate**: Parse test-results.json
3. **Assess Completion Status**:
- If pass_rate === 100% → Full Success
- If pass_rate >= 95% + all "low" criticality → Partial Success (add review note)
- If pass_rate >= 95% + any "high"/"medium" criticality → Continue iteration
- If pass_rate < 95% → Failure
4. **Update Session State**: Mark all tasks completed (or blocked if failure)
5. **Generate Summary**: Create session completion summary with pass rate metrics
6. **Update TodoWrite**: Mark all items completed
7. **Auto-Complete**: Call `/workflow:session:complete` (for Full or Partial Success)
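The status assessment in step 3 could be expressed as a small helper like this sketch (criticality labels follow the high/medium/low values used throughout this document):
```javascript
// Sketch: map final pass rate + failure criticality to a completion status
function assessCompletion(passRate, failures) {
  // failures: [{ name: "test_auth_flow", criticality: "high" | "medium" | "low" }]
  if (passRate === 100) return "full_success";
  if (passRate >= 95) {
    const allLow = failures.every(f => f.criticality === "low");
    return allLow ? "partial_success_with_review_flag" : "continue_iteration";
  }
  return "failure";
}
```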
#### Failure Conditions
- Max iterations reached without success
- Unrecoverable test failures
- Max iterations (5) reached without achieving 95% pass rate
- **Test pass rate < 95% after max iterations** (NEW)
- Pass rate >= 95% but critical failures exist and max iterations reached
- Unrecoverable test failures (infinite loop detection)
- Agent execution errors
#### Failure Handling
1. **Document State**: Save current iteration context
2. **Generate Report**: Create failure analysis report
3. **Preserve Context**: Keep all iteration logs
1. **Document State**: Save current iteration context with final pass rate
2. **Generate Failure Report**: Include:
- Final pass rate (e.g., "85% after 5 iterations")
- Remaining failures with criticality assessment
- Iteration history and attempted fixes
- CLI analysis quality (normal/degraded)
- Recommendations for manual intervention
3. **Preserve Context**: Keep all iteration logs and analysis reports
4. **Mark Blocked**: Update task status to blocked
5. **Return Control**: Return to user with detailed report
5. **Return Control**: Return to user with detailed failure report
#### Degraded Analysis Handling (NEW)
When @cli-planning-agent returns `status: "degraded"` (both Gemini and Qwen failed):
1. **Log Warning**: Record CLI analysis failure in iteration-state.json
2. **Assess Risk**: Check if degraded analysis is acceptable:
- If iteration < 3 AND pass_rate improved: Accept degraded analysis, continue
- If iteration >= 3 OR pass_rate stagnant: Skip iteration, mark as blocked
3. **User Notification**: Include CLI failure in completion summary
4. **Fallback Strategy**: Use basic pattern matching from fix-history.json
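A sketch of the acceptance rule for degraded analysis, assuming the orchestrator keeps the previous iteration's pass rate in `iteration-state.json`:
```javascript
// Sketch: decide whether a degraded CLI analysis is acceptable for this iteration
function acceptDegradedAnalysis(iteration, passRate, previousPassRate) {
  const improved = passRate > previousPassRate;
  if (iteration < 3 && improved) {
    return { accept: true, action: "continue_with_degraded_analysis" };
  }
  return { accept: false, action: "skip_iteration_and_mark_blocked" };
}
```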
## TodoWrite Coordination
@@ -385,59 +531,60 @@ TodoWrite({
4. **Iteration Complete**: Mark iteration item completed
5. **All Complete**: Mark parent task completed
## Agent Context Package
## Agent Context Loading
**Generated by test-cycle-execute orchestrator before launching agents.**
**Orchestrator provides file paths, agents load content themselves.**
The orchestrator assembles this context package from:
- Task JSON file (IMPL-*.json)
- Iteration state files
- Test results and failure context
- Session metadata
### Path References Provided to Agents
This package is passed to agents via the Task tool's prompt context.
**Orchestrator passes these paths via Task tool prompt:**
### Enhanced Context for Test-Fix Agent
```json
```javascript
{
"task": { /* IMPL-fix-N.json */ },
"iteration_context": {
"current_iteration": N,
"max_iterations": 5,
"previous_attempts": [
{
"iteration": N-1,
"failures": ["test1", "test2"],
"fixes_attempted": ["fix1", "fix2"],
"result": "partial_success"
}
],
"failure_analysis": {
"source": "gemini_cli",
"analysis_file": ".process/iteration-N-analysis.md",
"fix_strategy": { /* from CLI */ }
}
},
"test_context": {
"test_framework": "jest|pytest|...",
"test_files": ["path/to/test1.test.ts"],
"test_command": "npm test",
"coverage_target": 80
},
"session": {
"workflow_dir": ".workflow/WFS-test-{session}/",
"iteration_state_file": ".process/iteration-state.json",
"test_results_file": ".process/test-results.json",
"fix_history_file": ".process/fix-history.json"
}
// Primary Task Definition
"task_json_path": ".workflow/active/WFS-test-{session}/.task/IMPL-fix-N.json",
// Iteration Context Paths
"iteration_state_path": ".workflow/active/WFS-test-{session}/.process/iteration-state.json",
"fix_history_path": ".workflow/active/WFS-test-{session}/.process/fix-history.json",
// Test Context Paths
"test_results_path": ".workflow/active/WFS-test-{session}/.process/test-results.json",
"test_output_path": ".workflow/active/WFS-test-{session}/.process/test-output.log",
"test_context_path": ".workflow/active/WFS-test-{session}/.process/TEST_ANALYSIS_RESULTS.md",
// Analysis & Strategy Paths (for fix-iteration tasks)
"analysis_path": ".workflow/active/WFS-test-{session}/.process/iteration-N-analysis.md",
"cli_output_path": ".workflow/active/WFS-test-{session}/.process/iteration-N-cli-output.txt",
// Session Management Paths
"workflow_dir": ".workflow/active/WFS-test-{session}/",
"summaries_dir": ".workflow/active/WFS-test-{session}/.summaries/",
"todo_list_path": ".workflow/active/WFS-test-{session}/TODO_LIST.md",
// Metadata (simple values, not file content)
"session_id": "WFS-test-{session}",
"current_iteration": N,
"max_iterations": 5
}
```
### Agent Loading Sequence
**Agents must load files in this order:**
1. **Task JSON** (`task_json_path`) - Get task definition, requirements, fix strategy
2. **Iteration State** (`iteration_state_path`) - Understand current iteration context
3. **Test Results** (`test_results_path`) - Analyze current test status
4. **Test Output** (`test_output_path`) - Review detailed test execution logs
5. **Analysis Report** (`analysis_path`, for fix tasks) - Load CLI-generated fix strategy
6. **Fix History** (`fix_history_path`) - Review previous fix attempts to avoid repetition
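From the agent's side, that loading order might look like the following sketch, using the path keys shown above (error handling and schema validation omitted; optional files are skipped when absent):
```javascript
// Sketch: load context files in the documented order; `ctx` is the path package from the orchestrator
const fs = require("fs");

const readJson = p => JSON.parse(fs.readFileSync(p, "utf8"));
const readIfExists = p => (p && fs.existsSync(p) ? fs.readFileSync(p, "utf8") : null);

function loadAgentContext(ctx) {
  return {
    task: readJson(ctx.task_json_path),                      // 1. task definition and fix strategy
    iterationState: readIfExists(ctx.iteration_state_path),  // 2. current iteration context
    testResults: readIfExists(ctx.test_results_path),        // 3. current test status
    testOutput: readIfExists(ctx.test_output_path),          // 4. detailed execution logs
    analysis: readIfExists(ctx.analysis_path),               // 5. CLI-generated fix strategy (fix tasks)
    fixHistory: readIfExists(ctx.fix_history_path),          // 6. previous attempts
  };
}
```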
## File Structure
### Test-Fix Session Files
```
.workflow/WFS-test-{session}/
.workflow/active/WFS-test-{session}/
├── workflow-session.json # Session metadata with workflow_type
├── IMPL_PLAN.md # Test plan
├── TODO_LIST.md # Progress tracking
@@ -498,77 +645,66 @@ This package is passed to agents via the Task tool's prompt context.
## Agent Prompt Template
**Unified template for all agent tasks (orchestrator invokes with Task tool):**
**Dynamic Generation**: Before agent invocation, orchestrator reads task JSON and extracts key requirements.
```bash
Task(subagent_type="{meta.agent}",
prompt="**TASK EXECUTION: {task.title}**
prompt="Execute task: {task.title}
## STEP 1: Load Complete Task JSON
**MANDATORY**: First load the complete task JSON from: {session.task_json_path}
{[FLOW_CONTROL]}
cat {session.task_json_path}
**Task Objectives** (from task JSON):
{task.context.requirements}
**CRITICAL**: Validate all required fields present
**Expected Deliverables**:
- For test-gen: Test files in target directories, test coverage report
- For test-fix: Test execution results saved to test-results.json, test-output.log
- For test-fix-iteration: Fixed code files, updated test results, iteration summary
## STEP 2: Task Context (From Loaded JSON)
**ID**: {task.id}
**Type**: {task.meta.type}
**Agent**: {task.meta.agent}
**Quality Standards**:
- All tests must execute without errors
- Test results must be saved in structured JSON format
- All deliverables must be saved to specified paths
- Task status must be updated in task JSON
## STEP 3: Execute Task Based on Type
**MANDATORY FIRST STEPS**:
1. Read complete task JSON: {session.task_json_path}
2. Load iteration state (if applicable): {session.iteration_state_path}
3. Load test context: {session.test_context_path}
### For test-gen (IMPL-001):
- Generate tests based on TEST_ANALYSIS_RESULTS.md
- Follow test framework conventions
- Create test files in target_files
Follow complete execution guidelines in @.claude/agents/{meta.agent}.md
### For test-fix (IMPL-002):
- Run test suite: {test_command}
- Collect results to .process/test-results.json
- Report results to orchestrator (do NOT analyze failures)
- Orchestrator will handle failure detection and iteration decisions
- If success: Mark complete
**Session Paths** (use these for all file operations):
- Workflow Dir: {session.workflow_dir}
- Task JSON: {session.task_json_path}
- TODO List: {session.todo_list_path}
- Summaries Dir: {session.summaries_dir}
- Test Results: {session.test_results_path}
- Test Output Log: {session.test_output_path}
- Iteration State: {session.iteration_state_path}
- Fix History: {session.fix_history_path}
### For test-fix-iteration (IMPL-fix-N):
- Load fix strategy from context.fix_strategy (CONTENT, not path)
- Apply surgical fixes to identified files
- Return results to orchestrator
- Do NOT run tests independently - orchestrator manages all test execution
- Do NOT handle failures - orchestrator analyzes and decides next iteration
**Critical Rules**:
- For test-fix tasks: Run tests and save results, do NOT analyze failures
- For fix-iteration tasks: Apply fixes from task JSON, do NOT run tests independently
- Orchestrator manages iteration loop and failure analysis
- Return results to orchestrator for next-step decisions
## STEP 4: Implementation Context (From JSON)
**Requirements**: {context.requirements}
**Fix Strategy**: {context.fix_strategy} (full content provided in task JSON)
**Failure Context**: {context.failure_context}
**Iteration History**: {context.inherited.iteration_history}
## STEP 5: Flow Control Execution
If flow_control.pre_analysis exists, execute steps sequentially
## STEP 6: Agent Completion
1. Execute task following implementation_approach
2. Update task status in JSON
3. Update TODO_LIST.md
4. Generate summary in .summaries/
5. **CRITICAL**: Save results for orchestrator to analyze
**Output Requirements**:
- test-results.json: Structured test results
- test-output.log: Full test output
- iteration-state.json: Current iteration state (if applicable)
- task-summary.md: Completion summary
**Return to Orchestrator**: Agent completes and returns. Orchestrator decides next action.
"),
description="Execute {task.type} task with JSON validation")
**Success Criteria**:
- Complete all task objectives as specified in task JSON
- Deliver all required outputs to specified paths
- Update task status and TODO_LIST.md
- Generate completion summary in .summaries/
",
description="Executing: {task.title}")
```
**Key Points**:
- Agent executes single task and returns
- Orchestrator analyzes results and decides next step
- Fix strategy content (not path) embedded in task JSON by orchestrator
- Agent does not manage iteration loop
**Key Changes from Previous Version**:
1. **Paths over Content**: Provide JSON paths for agent to read, not embedded content
2. **MANDATORY FIRST STEPS**: Explicit requirement to load task JSON and context
3. **Complete Session Paths**: All file paths provided for agent operations
4. **Emphasized Deliverables**: Clear deliverable requirements per task type
5. **Simplified Structure**: Removed type-specific instructions (agent reads from JSON)
## Error Handling & Recovery
@@ -586,7 +722,7 @@ Task(subagent_type="{meta.agent}",
#### Resume from Interruption
```bash
# Load iteration state
iteration_state=$(cat .workflow/{session}/.process/iteration-state.json)
iteration_state=$(cat .workflow/session/{session}/.process/iteration-state.json)
current_iteration=$(jq -r '.current_iteration' <<< "$iteration_state")
# Determine resume point
@@ -605,7 +741,7 @@ fi
git revert HEAD
# Remove failed fix task
rm .workflow/{session}/.task/IMPL-fix-{N}.json
rm .workflow/session/{session}/.task/IMPL-fix-{N}.json
# Restore iteration state
jq '.current_iteration -= 1' iteration-state.json > temp.json

View File

@@ -58,6 +58,19 @@ This command is a **pure planning coordinator**:
- Creates independent test workflow session
- **All execution delegated to `/workflow:test-cycle-execute`**
**Task Attachment Model**:
- SlashCommand invocation **expands workflow** by attaching sub-tasks to current TodoWrite
- When a sub-command is invoked (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
- Orchestrator **executes these attached tasks** sequentially
- After completion, attached tasks are **collapsed** back to high-level phase summary
- This is **task expansion**, not external delegation
**Auto-Continue Mechanism**:
- TodoList tracks current phase status and dynamically manages task attachment/collapse
- When each phase finishes executing, automatically execute next pending phase
- All phases run autonomously without user interaction
- **⚠️ CONTINUOUS EXECUTION** - Do not stop until all phases complete
---
## Usage
@@ -128,9 +141,11 @@ This command is a **pure planning coordinator**:
3. **Parse Every Output**: Extract required data from each phase for next phase
4. **Sequential Execution**: Each phase depends on previous phase's output
5. **Complete All Phases**: Do not return until Phase 5 completes
6. **Track Progress**: Update TodoWrite after every phase
6. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
7. **Automatic Detection**: Mode auto-detected from input pattern
8. **Parse Flags**: Extract `--use-codex` and `--cli-execute` flags for Phase 4
9. **Task Attachment Model**: SlashCommand invocation **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
10. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
### 5-Phase Execution
@@ -202,9 +217,25 @@ This command is a **pure planning coordinator**:
**Expected Behavior**:
- Use Gemini to analyze coverage gaps and implementation
- Study existing test patterns and conventions
- Generate test requirements for missing test files
- Design test generation strategy
- Generate `TEST_ANALYSIS_RESULTS.md`
- Generate **multi-layered test requirements** (L0: Static Analysis, L1: Unit, L2: Integration, L3: E2E)
- Design test generation strategy with quality assurance criteria
- Generate `TEST_ANALYSIS_RESULTS.md` with structured test layers
**Enhanced Test Requirements**:
For each targeted file/function, Gemini MUST generate:
1. **L0: Static Analysis Requirements**:
- Linting rules to enforce (ESLint, Prettier)
- Type checking requirements (TypeScript)
- Anti-pattern detection rules
2. **L1: Unit Test Requirements**:
- Happy path scenarios (valid inputs → expected outputs)
- Negative path scenarios (invalid inputs → error handling)
- Edge cases (null, undefined, 0, empty strings/arrays)
3. **L2: Integration Test Requirements**:
- Successful component interactions
- Failure handling scenarios (service unavailable, timeout)
4. **L3: E2E Test Requirements** (if applicable):
- Key user journeys from start to finish
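As a concrete illustration of the L1 layer, the happy-path/negative-path/edge-case requirements might map onto tests like the sketch below (Jest syntax; `validateUser` and the import path are assumptions used only for illustration):
```javascript
// Sketch: L1 unit tests covering happy path, negative path, and edge cases
const { validateUser } = require("../src/validators/user"); // hypothetical target module

describe("validateUser (L1 unit)", () => {
  test("happy path: valid user object passes", () => {
    expect(validateUser({ name: "Ada", email: "ada@example.com" })).toBe(true);
  });

  test("negative path: missing name is rejected", () => {
    expect(validateUser({ email: "ada@example.com" })).toBe(false);
  });

  test("edge cases: null, undefined, and empty object", () => {
    expect(validateUser(null)).toBe(false);
    expect(validateUser(undefined)).toBe(false);
    expect(validateUser({})).toBe(false);
  });
});
```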
**Parse Output**:
- Verify `.workflow/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md` created
@@ -213,9 +244,18 @@ This command is a **pure planning coordinator**:
- TEST_ANALYSIS_RESULTS.md exists with complete sections:
- Coverage Assessment
- Test Framework & Conventions
- Test Requirements by File
- **Multi-Layered Test Plan** (NEW):
- L0: Static Analysis Plan
- L1: Unit Test Plan
- L2: Integration Test Plan
- L3: E2E Test Plan (if applicable)
- Test Requirements by File (with layer annotations)
- Test Generation Strategy
- Implementation Targets
- Quality Assurance Criteria (NEW):
- Minimum coverage thresholds
- Required test types per function
- Acceptance criteria for test quality
- Success Criteria
**TodoWrite**: Mark phase 3 completed, phase 4 in_progress
@@ -232,16 +272,18 @@ This command is a **pure planning coordinator**:
- `--cli-execute` flag (if present) - Controls IMPL-001 generation mode
**Expected Behavior**:
- Parse TEST_ANALYSIS_RESULTS.md from Phase 3
- Generate **minimum 2 task JSON files** (expandable based on complexity):
- Parse TEST_ANALYSIS_RESULTS.md from Phase 3 (multi-layered test plan)
- Generate **minimum 3 task JSON files** (expandable based on complexity):
- **IMPL-001.json**: Test Understanding & Generation (`@code-developer`)
- **IMPL-001.5-review.json**: Test Quality Gate (`@test-fix-agent`) ← **NEW**
- **IMPL-002.json**: Test Execution & Fix Cycle (`@test-fix-agent`)
- **IMPL-003+**: Additional tasks if needed for complex projects
- Generate `IMPL_PLAN.md` with test strategy
- Generate `IMPL_PLAN.md` with multi-layered test strategy
- Generate `TODO_LIST.md` with task checklist
**Parse Output**:
- Verify `.workflow/[testSessionId]/.task/IMPL-001.json` exists
- Verify `.workflow/[testSessionId]/.task/IMPL-001.5-review.json` exists ← **NEW**
- Verify `.workflow/[testSessionId]/.task/IMPL-002.json` exists
- Verify additional `.task/IMPL-*.json` if applicable
- Verify `IMPL_PLAN.md` and `TODO_LIST.md` created
@@ -262,11 +304,16 @@ Test Session: [testSessionId]
Tasks Created:
- IMPL-001: Test Understanding & Generation (@code-developer)
- IMPL-001.5: Test Quality Gate - Static Analysis & Coverage (@test-fix-agent) ← NEW
- IMPL-002: Test Execution & Fix Cycle (@test-fix-agent)
[- IMPL-003+: Additional tasks if applicable]
Test Strategy: Multi-Layered (L0: Static, L1: Unit, L2: Integration, L3: E2E)
Test Framework: [detected framework]
Test Files to Generate: [count]
Quality Thresholds:
- Minimum Coverage: 80%
- Static Analysis: Zero critical issues
Max Fix Iterations: 5
Fix Mode: [Manual|Codex Automated]
@@ -275,11 +322,12 @@ Review artifacts:
- Task list: .workflow/[testSessionId]/TODO_LIST.md
CRITICAL - Next Steps:
1. Review IMPL_PLAN.md
1. Review IMPL_PLAN.md (now includes multi-layered test strategy)
2. **MUST execute: /workflow:test-cycle-execute**
- This command only generated task JSON files
- Test execution and fix iterations happen in test-cycle-execute
- Do NOT attempt to run tests or fixes in main workflow
3. IMPL-001.5 will validate test quality before fix cycle begins
```
**TodoWrite**: Mark phase 5 completed
@@ -291,52 +339,125 @@ CRITICAL - Next Steps:
---
### TodoWrite Progress Tracking
### TodoWrite Pattern
Track all 5 phases:
**Core Concept**: Dynamic task attachment and collapse for test-fix-gen workflow with dual-mode support (Session Mode and Prompt Mode).
```javascript
TodoWrite({todos: [
{"content": "Create independent test session", "status": "in_progress|completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "pending|in_progress|completed", "activeForm": "Gathering test coverage context"},
{"content": "Analyze test requirements with Gemini", "status": "pending|in_progress|completed", "activeForm": "Analyzing test requirements"},
{"content": "Generate test generation and execution tasks", "status": "pending|in_progress|completed", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "pending|in_progress|completed", "activeForm": "Returning workflow summary"}
]})
```
#### Key Principles
1. **Task Attachment** (when SlashCommand invoked):
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
- Example: `/workflow:tools:test-context-gather` (Session Mode) or `/workflow:tools:context-gather` (Prompt Mode) attaches 3 sub-tasks (Phase 2.1, 2.2, 2.3)
- First attached task marked as `in_progress`, others as `pending`
- Orchestrator **executes** these attached tasks sequentially
2. **Task Collapse** (after sub-tasks complete):
- Remove detailed sub-tasks from TodoWrite
- **Collapse** to high-level phase summary
- Example: Phase 2.1-2.3 collapse to "Gather test coverage context: completed"
- Maintains clean orchestrator-level view
3. **Continuous Execution**:
- After collapse, automatically proceed to next pending phase
- No user intervention required between phases
- TodoWrite dynamically reflects current execution state
**Lifecycle Summary**: Initial pending tasks → Phase invoked (tasks ATTACHED with mode-specific context gathering) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins → Repeat until all phases complete.
#### Test-Fix-Gen Specific Features
- **Dual-Mode Support**: Automatic mode detection based on input pattern
- **Session Mode**: Input pattern `WFS-*` → uses `test-context-gather` for cross-session context
- **Prompt Mode**: Text or file path → uses `context-gather` for direct codebase analysis
- **Phase 2**: Mode-specific context gathering (session summaries vs codebase analysis)
- **Phase 3**: Multi-layered test requirements analysis (L0: Static, L1: Unit, L2: Integration, L3: E2E)
- **Phase 4**: Multi-task generation with quality gate (IMPL-001, IMPL-001.5-review, IMPL-002)
- **Fix Mode Configuration**: `--use-codex` flag controls IMPL-002 fix mode (manual vs automated)
Update status to `in_progress` when starting each phase, `completed` when done.
---
## Task Specifications
Generates minimum 2 tasks (expandable for complex projects):
Generates minimum 3 tasks (expandable for complex projects):
### IMPL-001: Test Understanding & Generation
**Agent**: `@code-developer`
**Purpose**: Understand source implementation and generate test files
**Purpose**: Understand source implementation and generate test files following multi-layered test strategy
**Task Configuration**:
- Task ID: `IMPL-001`
- `meta.type: "test-gen"`
- `meta.agent: "@code-developer"`
- `context.requirements`: Understand source implementation and generate tests
- `context.requirements`: Understand source implementation and generate tests across all layers (L0-L3)
- `flow_control.target_files`: Test files to create from TEST_ANALYSIS_RESULTS.md section 5
**Execution Flow**:
1. **Understand Phase**:
- Load TEST_ANALYSIS_RESULTS.md and test context
- Understand source code implementation patterns
- Analyze test requirements and conventions
- Identify test scenarios and edge cases
- Analyze multi-layered test requirements (L0: Static, L1: Unit, L2: Integration, L3: E2E)
- Identify test scenarios, edge cases, and error paths
2. **Generation Phase**:
- Generate test files following existing patterns
- Ensure test coverage aligns with requirements
- Generate L1 unit test files following existing patterns
- Generate L2 integration test files (if applicable)
- Generate L3 E2E test files (if applicable)
- Ensure test coverage aligns with multi-layered requirements
- Include both positive and negative test cases
3. **Verification Phase**:
- Verify test completeness and correctness
- Ensure each test has meaningful assertions
- Check for test anti-patterns (tests without assertions, overly broad mocks)
### IMPL-001.5: Test Quality Gate ← **NEW**
**Agent**: `@test-fix-agent`
**Purpose**: Validate test quality before entering fix cycle - prevent "hollow tests" from becoming the source of truth
**Task Configuration**:
- Task ID: `IMPL-001.5-review`
- `meta.type: "test-quality-review"`
- `meta.agent: "@test-fix-agent"`
- `context.depends_on: ["IMPL-001"]`
- `context.requirements`: Validate generated tests meet quality standards
- `context.quality_config`: Load from `.claude/workflows/test-quality-config.json`
**Execution Flow**:
1. **L0: Static Analysis**:
- Run linting on test files (ESLint, Prettier)
- Check for test anti-patterns:
- Tests without assertions (`expect()` missing)
- Empty test bodies (`it('should...', () => {})`)
- Disabled tests without justification (`it.skip`, `xit`)
- Verify TypeScript type safety (if applicable)
2. **Coverage Analysis**:
- Run coverage analysis on generated tests
- Calculate coverage percentage for target source files
- Identify uncovered branches and edge cases
3. **Test Quality Metrics**:
- Verify minimum coverage threshold met (default: 80%)
- Verify all critical functions have negative test cases
- Verify integration tests cover key component interactions
4. **Quality Gate Decision**:
- **PASS**: Coverage ≥ 80%, zero critical anti-patterns → Proceed to IMPL-002
- **FAIL**: Coverage < 80% OR critical anti-patterns found → Loop back to IMPL-001 with feedback
**Acceptance Criteria**:
- Static analysis: Zero critical issues
- Test coverage: ≥ 80% for target files
- Test completeness: All targeted functions have unit tests
- Negative test coverage: Each public API has at least one error handling test
- Integration coverage: Key component interactions have integration tests (if applicable)
**Failure Handling**:
If quality gate fails:
1. Generate detailed feedback report (`.process/test-quality-report.md`)
2. Update IMPL-001 task with specific improvement requirements
3. Trigger IMPL-001 re-execution with enhanced context
4. Maximum 2 quality gate retries before escalating to user
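A minimal sketch of the L0 anti-pattern scan described above (regex heuristics only; a real gate would combine ESLint results and the coverage report):
```javascript
// Sketch: heuristic scan of a generated test file for "hollow test" anti-patterns
const fs = require("fs");

function scanTestFile(filePath) {
  const src = fs.readFileSync(filePath, "utf8");
  const issues = [];
  if (!/expect\s*\(/.test(src)) issues.push("no assertions (expect() missing)");
  if (/\b(it|test)\.skip\s*\(|\bxit\s*\(/.test(src)) issues.push("disabled tests without justification");
  if (/=>\s*\{\s*\}\s*\)/.test(src)) issues.push("possible empty test body");
  return { filePath, issues, pass: issues.length === 0 };
}
```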
### IMPL-002: Test Execution & Fix Cycle
@@ -384,7 +505,7 @@ Generates minimum 2 tasks (expandable for complex projects):
### Output Files Structure
Created in `.workflow/WFS-test-[session]/`:
Created in `.workflow/active/WFS-test-[session]/`:
```
WFS-test-[session]/
@@ -412,18 +533,62 @@ WFS-test-[session]/
- `workflow_type: "test_session"`
- No `source_session_id` field
### Complete Data Flow
### Execution Flow Diagram
**Example Command**: `/workflow:test-fix-gen WFS-user-auth`
```
Test-Fix-Gen Workflow Orchestrator (Dual-Mode Support)
├─ Phase 1: Create Test Session
│ ├─ Session Mode: /workflow:session:start --new (with source_session_id)
│ └─ Prompt Mode: /workflow:session:start --new (without source_session_id)
│ └─ Returns: testSessionId (WFS-test-[slug])
├─ Phase 2: Gather Context ← ATTACHED (3 tasks)
│ ├─ Session Mode: /workflow:tools:test-context-gather
│ │ └─ Load source session summaries + analyze coverage
│ └─ Prompt Mode: /workflow:tools:context-gather
│ └─ Analyze codebase from description
│ ├─ Phase 2.1: Load context and analyze coverage
│ ├─ Phase 2.2: Detect test framework and conventions
│ └─ Phase 2.3: Generate context package
│ └─ Returns: [test-]context-package.json ← COLLAPSED
├─ Phase 3: Test Generation Analysis ← ATTACHED (3 tasks)
│ └─ /workflow:tools:test-concept-enhanced
│ ├─ Phase 3.1: Analyze coverage gaps with Gemini
│ ├─ Phase 3.2: Study existing test patterns
│ └─ Phase 3.3: Generate test generation strategy
│ └─ Returns: TEST_ANALYSIS_RESULTS.md ← COLLAPSED
├─ Phase 4: Generate Test Tasks ← ATTACHED (3 tasks)
│ └─ /workflow:tools:test-task-generate
│ ├─ Phase 4.1: Parse TEST_ANALYSIS_RESULTS.md
│ ├─ Phase 4.2: Generate task JSONs (IMPL-001, IMPL-002)
│ └─ Phase 4.3: Generate IMPL_PLAN.md and TODO_LIST.md
│ └─ Returns: Task JSONs and plans ← COLLAPSED
└─ Phase 5: Return Summary
└─ Command ends, control returns to user
**Phase Execution Chain**:
1. Phase 1: `session-start` → `WFS-test-user-auth`
2. Phase 2: `test-context-gather` → `test-context-package.json`
3. Phase 3: `test-concept-enhanced` → `TEST_ANALYSIS_RESULTS.md`
4. Phase 4: `test-task-generate` → `IMPL-001.json` + `IMPL-002.json` (+ additional if needed)
5. Phase 5: Return summary
Artifacts Created:
├── .workflow/active/WFS-test-[session]/
│ ├── workflow-session.json
│ ├── IMPL_PLAN.md
│ ├── TODO_LIST.md
│ ├── .task/
│ │ ├── IMPL-001.json (test understanding & generation)
│ │ ├── IMPL-002.json (test execution & fix cycle)
│ │ └── IMPL-003.json (optional: test review & certification)
│ └── .process/
│ ├── [test-]context-package.json
│ └── TEST_ANALYSIS_RESULTS.md
**Command completes after Phase 5**
Key Points:
• ← ATTACHED: SlashCommand attaches sub-tasks to orchestrator TodoWrite
• ← COLLAPSED: Sub-tasks executed and collapsed to phase summary
• Dual-Mode: Session Mode and Prompt Mode share same attachment pattern
• Command Boundary: Execution delegated to /workflow:test-cycle-execute
```
---
@@ -473,9 +638,9 @@ WFS-test-[session]/
- `/workflow:tools:test-context-gather` - Phase 2 (Session Mode): Gather source session context
- `/workflow:tools:context-gather` - Phase 2 (Prompt Mode): Analyze codebase directly
- `/workflow:tools:test-concept-enhanced` - Phase 3: Generate test requirements using Gemini
- `/workflow:tools:test-task-generate` - Phase 4: Generate test task JSONs with fix cycle specification
- `/workflow:tools:test-task-generate --use-codex` - Phase 4: With automated Codex fixes (when `--use-codex` flag used)
- `/workflow:tools:test-task-generate --cli-execute` - Phase 4: With CLI execution mode (when `--cli-execute` flag used)
- `/workflow:tools:test-task-generate` - Phase 4: Generate test task JSONs using action-planning-agent (autonomous, default)
- `/workflow:tools:test-task-generate --use-codex` - Phase 4: With automated Codex fixes for IMPL-002 (when `--use-codex` flag used)
- `/workflow:tools:test-task-generate --cli-execute` - Phase 4: With CLI execution mode for IMPL-001 test generation (when `--cli-execute` flag used)
**Follow-up Commands**:
- `/workflow:status` - Review generated test tasks

View File

@@ -18,6 +18,19 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
- **Parameter Simplification**: Tools auto-detect test session type via metadata, no manual cross-session parameters needed
- **Manual First**: Default to manual fixes, use `--use-codex` flag for automated Codex fix application
**Task Attachment Model**:
- SlashCommand invocation **expands workflow** by attaching sub-tasks to current TodoWrite
- When a sub-command is invoked (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
- Orchestrator **executes these attached tasks** sequentially
- After completion, attached tasks are **collapsed** back to high-level phase summary
- This is **task expansion**, not external delegation
**Auto-Continue Mechanism**:
- TodoList tracks current phase status and dynamically manages task attachment/collapse
- When each phase finishes executing, automatically execute next pending phase
- All phases run autonomously without user interaction
- **⚠️ CONTINUOUS EXECUTION** - Do not stop until all phases complete
**Execution Flow**:
1. Initialize TodoWrite → Create test session → Parse session ID
2. Gather cross-session context (automatic) → Parse context path
@@ -33,10 +46,12 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
3. **Parse Every Output**: Extract required data from each phase for next phase
4. **Sequential Execution**: Each phase depends on previous phase's output
5. **Complete All Phases**: Do not return to user until Phase 5 completes (summary returned)
6. **Track Progress**: Update TodoWrite after every phase completion
6. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
7. **Automatic Detection**: context-gather auto-detects test session and gathers source session context
8. **Parse --use-codex Flag**: Extract flag from arguments and pass to Phase 4 (test-task-generate)
9. **Command Boundary**: This command ends at Phase 5 summary. Test execution is NOT part of this command.
10. **Task Attachment Model**: SlashCommand invocation **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
11. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
## 5-Phase Execution
@@ -89,7 +104,39 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
- Test framework detected
- Test conventions documented
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
<!-- TodoWrite: When test-context-gather invoked, INSERT 3 test-context-gather tasks -->
**TodoWrite Update (Phase 2 SlashCommand invoked - tasks attached)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Phase 2.1: Load source session summaries (test-context-gather)", "status": "in_progress", "activeForm": "Loading source session summaries"},
{"content": "Phase 2.2: Analyze test coverage with MCP tools (test-context-gather)", "status": "pending", "activeForm": "Analyzing test coverage"},
{"content": "Phase 2.3: Identify coverage gaps and framework (test-context-gather)", "status": "pending", "activeForm": "Identifying coverage gaps"},
{"content": "Analyze test requirements with Gemini", "status": "pending", "activeForm": "Analyzing test requirements"},
{"content": "Generate test generation and execution tasks", "status": "pending", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "pending", "activeForm": "Returning workflow summary"}
]
```
**Note**: SlashCommand invocation **attaches** test-context-gather's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached → **Execute Phase 2.1-2.3** sequentially
<!-- TodoWrite: After Phase 2 tasks complete, REMOVE Phase 2.1-2.3, restore to orchestrator view -->
**TodoWrite Update (Phase 2 completed - tasks collapsed)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "completed", "activeForm": "Gathering test coverage context"},
{"content": "Analyze test requirements with Gemini", "status": "pending", "activeForm": "Analyzing test requirements"},
{"content": "Generate test generation and execution tasks", "status": "pending", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "pending", "activeForm": "Returning workflow summary"}
]
```
**Note**: Phase 2 tasks completed and collapsed to summary.
---
@@ -121,7 +168,39 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
- Implementation Targets (test files to create)
- Success Criteria
**TodoWrite**: Mark phase 3 completed, phase 4 in_progress
<!-- TodoWrite: When test-concept-enhanced invoked, INSERT 3 concept-enhanced tasks -->
**TodoWrite Update (Phase 3 SlashCommand invoked - tasks attached)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "completed", "activeForm": "Gathering test coverage context"},
{"content": "Phase 3.1: Analyze coverage gaps with Gemini (test-concept-enhanced)", "status": "in_progress", "activeForm": "Analyzing coverage gaps"},
{"content": "Phase 3.2: Study existing test patterns (test-concept-enhanced)", "status": "pending", "activeForm": "Studying test patterns"},
{"content": "Phase 3.3: Generate test generation strategy (test-concept-enhanced)", "status": "pending", "activeForm": "Generating test strategy"},
{"content": "Generate test generation and execution tasks", "status": "pending", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "pending", "activeForm": "Returning workflow summary"}
]
```
**Note**: SlashCommand invocation **attaches** test-concept-enhanced's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
<!-- TodoWrite: After Phase 3 tasks complete, REMOVE Phase 3.1-3.3, restore to orchestrator view -->
**TodoWrite Update (Phase 3 completed - tasks collapsed)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "completed", "activeForm": "Gathering test coverage context"},
{"content": "Analyze test requirements with Gemini", "status": "completed", "activeForm": "Analyzing test requirements"},
{"content": "Generate test generation and execution tasks", "status": "pending", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "pending", "activeForm": "Returning workflow summary"}
]
```
**Note**: Phase 3 tasks completed and collapsed to summary.
---
@@ -173,7 +252,39 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
- Phase 2: Iterative Gemini diagnosis + manual/Codex fixes (based on flag)
- Phase 3: Final validation and certification
**TodoWrite**: Mark phase 4 completed, phase 5 in_progress
<!-- TodoWrite: When test-task-generate invoked, INSERT 3 test-task-generate tasks -->
**TodoWrite Update (Phase 4 SlashCommand invoked - tasks attached)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "completed", "activeForm": "Gathering test coverage context"},
{"content": "Analyze test requirements with Gemini", "status": "completed", "activeForm": "Analyzing test requirements"},
{"content": "Phase 4.1: Parse TEST_ANALYSIS_RESULTS.md (test-task-generate)", "status": "in_progress", "activeForm": "Parsing test analysis"},
{"content": "Phase 4.2: Generate IMPL-001.json and IMPL-002.json (test-task-generate)", "status": "pending", "activeForm": "Generating task JSONs"},
{"content": "Phase 4.3: Generate IMPL_PLAN.md and TODO_LIST.md (test-task-generate)", "status": "pending", "activeForm": "Generating plan documents"},
{"content": "Return workflow summary", "status": "pending", "activeForm": "Returning workflow summary"}
]
```
**Note**: SlashCommand invocation **attaches** test-task-generate's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached → **Execute Phase 4.1-4.3** sequentially
<!-- TodoWrite: After Phase 4 tasks complete, REMOVE Phase 4.1-4.3, restore to orchestrator view -->
**TodoWrite Update (Phase 4 completed - tasks collapsed)**:
```json
[
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "completed", "activeForm": "Gathering test coverage context"},
{"content": "Analyze test requirements with Gemini", "status": "completed", "activeForm": "Analyzing test requirements"},
{"content": "Generate test generation and execution tasks", "status": "completed", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "in_progress", "activeForm": "Returning workflow summary"}
]
```
**Note**: Phase 4 tasks completed and collapsed to summary.
---
@@ -214,40 +325,75 @@ Ready for execution. Use appropriate workflow commands to proceed.
## TodoWrite Pattern
Track progress through 5 phases:
**Core Concept**: Dynamic task attachment and collapse for test-gen workflow with cross-session context gathering and test generation strategy.
```javascript
TodoWrite({todos: [
{"content": "Create independent test session", "status": "in_progress|completed", "activeForm": "Creating test session"},
{"content": "Gather test coverage context", "status": "pending|in_progress|completed", "activeForm": "Gathering test coverage context"},
{"content": "Analyze test requirements with Gemini", "status": "pending|in_progress|completed", "activeForm": "Analyzing test requirements"},
{"content": "Generate test generation and execution tasks", "status": "pending|in_progress|completed", "activeForm": "Generating test tasks"},
{"content": "Return workflow summary", "status": "pending|in_progress|completed", "activeForm": "Returning workflow summary"}
]})
```
### Key Principles
Update status to `in_progress` when starting each phase, mark `completed` when done.
1. **Task Attachment** (when SlashCommand invoked):
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
- Example: `/workflow:tools:test-context-gather` attaches 3 sub-tasks (Phase 2.1, 2.2, 2.3)
- First attached task marked as `in_progress`, others as `pending`
- Orchestrator **executes** these attached tasks sequentially
## Data Flow
2. **Task Collapse** (after sub-tasks complete):
- Remove detailed sub-tasks from TodoWrite
- **Collapse** to high-level phase summary
- Example: Phase 2.1-2.3 collapse to "Gather test coverage context: completed"
- Maintains clean orchestrator-level view
3. **Continuous Execution**:
- After collapse, automatically proceed to next pending phase
- No user intervention required between phases
- TodoWrite dynamically reflects current execution state
**Lifecycle Summary**: Initial pending tasks → Phase invoked (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins → Repeat until all phases complete.
### Test-Gen Specific Features
- **Phase 2**: Cross-session context gathering from source implementation session
- **Phase 3**: Test requirements analysis with Gemini for generation strategy
- **Phase 4**: Dual-task generation (IMPL-001 for test generation, IMPL-002 for test execution)
- **Fix Mode Configuration**: `--use-codex` flag controls IMPL-002 fix mode (manual vs automated)
**Note**: See individual Phase descriptions (Phase 2, 3, 4) for detailed TodoWrite Update examples with full JSON structures.
## Execution Flow Diagram
```
┌─────────────────────────────────────────────────────────┐
│ /workflow:test-gen WFS-user-auth
├─────────────────────────────────────────────────────────┤
│ Phase 1: session-start → WFS-test-user-auth
│   ↓
│ Phase 2: test-context-gather → test-context-package.json
│   ↓
│ Phase 3: test-concept-enhanced → TEST_ANALYSIS_RESULTS.md
│   ↓
│ Phase 4: test-task-generate → IMPL-001.json + IMPL-002.json
│ Phase 5: Return summary
└─────────────────────────────────────────────────────────┘
COMMAND ENDS - Control returns to user
Test-Gen Workflow Orchestrator
├─ Phase 1: Create Test Session
│ └─ /workflow:session:start --new
│ └─ Returns: testSessionId (WFS-test-[source])
├─ Phase 2: Gather Test Context ← ATTACHED (3 tasks)
│ └─ /workflow:tools:test-context-gather
│ ├─ Phase 2.1: Load source session summaries
│ ├─ Phase 2.2: Analyze test coverage with MCP tools
│ └─ Phase 2.3: Identify coverage gaps and framework
│ └─ Returns: test-context-package.json ← COLLAPSED
├─ Phase 3: Test Generation Analysis ← ATTACHED (3 tasks)
│ └─ /workflow:tools:test-concept-enhanced
│ ├─ Phase 3.1: Analyze coverage gaps with Gemini
│ ├─ Phase 3.2: Study existing test patterns
│ └─ Phase 3.3: Generate test generation strategy
│ └─ Returns: TEST_ANALYSIS_RESULTS.md ← COLLAPSED
├─ Phase 4: Generate Test Tasks ← ATTACHED (3 tasks)
│ └─ /workflow:tools:test-task-generate
│ ├─ Phase 4.1: Parse TEST_ANALYSIS_RESULTS.md
│ ├─ Phase 4.2: Generate IMPL-001.json and IMPL-002.json
│ └─ Phase 4.3: Generate IMPL_PLAN.md and TODO_LIST.md
│ └─ Returns: Task JSONs and plans ← COLLAPSED
└─ Phase 5: Return Summary
└─ Command ends, control returns to user
Artifacts Created:
├── .workflow/WFS-test-[session]/
├── .workflow/active/WFS-test-[session]/
│ ├── workflow-session.json
│ ├── IMPL_PLAN.md
│ ├── TODO_LIST.md
@@ -257,6 +403,10 @@ Artifacts Created:
│ └── .process/
│ ├── test-context-package.json
│ └── TEST_ANALYSIS_RESULTS.md
Key Points:
• ← ATTACHED: SlashCommand attaches sub-tasks to orchestrator TodoWrite
• ← COLLAPSED: Sub-tasks executed and collapsed to phase summary
```
## Session Metadata
@@ -290,7 +440,7 @@ See `/workflow:tools:test-task-generate` for complete task JSON schemas.
## Output Files
Created in `.workflow/WFS-test-[session]/`:
Created in `.workflow/active/WFS-test-[session]/`:
- `workflow-session.json` - Session metadata
- `.process/test-context-package.json` - Coverage analysis
- `.process/TEST_ANALYSIS_RESULTS.md` - Test requirements
@@ -337,9 +487,9 @@ See `/workflow:tools:test-task-generate` for complete JSON schemas.
- `/workflow:session:start` - Phase 1: Create independent test workflow session
- `/workflow:tools:test-context-gather` - Phase 2: Analyze test coverage and gather source session context
- `/workflow:tools:test-concept-enhanced` - Phase 3: Generate test requirements and strategy using Gemini
- `/workflow:tools:test-task-generate` - Phase 4: Generate test generation and execution task JSONs
- `/workflow:tools:test-task-generate --use-codex` - Phase 4: With automated Codex fixes (when `--use-codex` flag used)
- `/workflow:tools:test-task-generate --cli-execute` - Phase 4: With CLI execution mode (when `--cli-execute` flag used)
- `/workflow:tools:test-task-generate` - Phase 4: Generate test task JSONs using action-planning-agent (autonomous, default)
- `/workflow:tools:test-task-generate --use-codex` - Phase 4: With automated Codex fixes for IMPL-002 (when `--use-codex` flag used)
- `/workflow:tools:test-task-generate --cli-execute` - Phase 4: With CLI execution mode for IMPL-001 test generation (when `--cli-execute` flag used)
**Follow-up Commands**:
- `/workflow:status` - Review generated test tasks

View File

@@ -3,14 +3,14 @@ name: conflict-resolution
description: Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen
argument-hint: "--session WFS-session-id --context path/to/context-package.json"
examples:
- /workflow:tools:conflict-resolution --session WFS-auth --context .workflow/WFS-auth/.process/context-package.json
- /workflow:tools:conflict-resolution --session WFS-payment --context .workflow/WFS-payment/.process/context-package.json
- /workflow:tools:conflict-resolution --session WFS-auth --context .workflow/active/WFS-auth/.process/context-package.json
- /workflow:tools:conflict-resolution --session WFS-payment --context .workflow/active/WFS-payment/.process/context-package.json
---
# Conflict Resolution Command
## Purpose
Analyzes conflicts between implementation plans and existing codebase, generating multiple resolution strategies.
Analyzes conflicts between implementation plans and existing codebase, **including module scenario uniqueness detection**, generating multiple resolution strategies with **iterative clarification until boundaries are clear**.
**Scope**: Detection and strategy generation only - NO code modification or task creation.
@@ -21,10 +21,14 @@ Analyzes conflicts between implementation plans and existing codebase, generatin
| Responsibility | Description |
|---------------|-------------|
| **Detect Conflicts** | Analyze plan vs existing code inconsistencies |
| **Scenario Uniqueness** | **NEW**: Search and compare new modules with existing modules for functional overlaps |
| **Generate Strategies** | Provide 2-4 resolution options per conflict |
| **Iterative Clarification** | **NEW**: Ask unlimited questions until scenario boundaries are clear and unique |
| **Agent Re-analysis** | **NEW**: Dynamically update strategies based on user clarifications |
| **CLI Analysis** | Use Gemini/Qwen (Claude fallback) |
| **User Decision** | Present options, never auto-apply |
| **Single Output** | `CONFLICT_RESOLUTION.md` with findings |
| **User Decision** | Present options ONE BY ONE, never auto-apply |
| **Direct Text Output** | Output questions via text directly, NEVER use bash echo/printf |
| **Structured Data** | JSON output for programmatic processing, NO file generation |
## Conflict Categories
@@ -48,6 +52,13 @@ Analyzes conflicts between implementation plans and existing codebase, generatin
- Setup conflicts
- Breaking updates
### 5. Module Scenario Overlap
- **NEW**: Functional overlap between new and existing modules
- Scenario boundary ambiguity
- Duplicate responsibility detection
- Module merge/split decisions
- **Requires iterative clarification until uniqueness confirmed**
## Execution Flow
### Phase 1: Validation
@@ -72,23 +83,31 @@ Task(subagent_type="cli-execution-agent", prompt=`
### 1. Load Context
- Read existing files from conflict_detection.existing_files
- Load plan from .workflow/{session_id}/.process/context-package.json
- Load plan from .workflow/active/{session_id}/.process/context-package.json
- Extract role analyses and requirements
### 2. Execute CLI Analysis
### 2. Execute CLI Analysis (Enhanced with Scenario Uniqueness Detection)
Primary (Gemini):
cd {project_root} && gemini -p "
PURPOSE: Detect conflicts between plan and codebase
PURPOSE: Detect conflicts between plan and codebase, including module scenario overlaps
TASK:
• Compare architectures
• Identify breaking API changes
• Detect data model incompatibilities
• Assess dependency conflicts
• **NEW: Analyze module scenario uniqueness**
- Extract new module functionality from plan
- Search all existing modules with similar functionality
- Compare scenario coverage and identify overlaps
- Generate clarification questions for boundary definition
MODE: analysis
CONTEXT: @{existing_files} @.workflow/{session_id}/**/*
EXPECTED: Conflict list with severity ratings
RULES: Focus on breaking changes and migration needs
CONTEXT: @**/*.ts @**/*.js @**/*.tsx @**/*.jsx @.workflow/active/{session_id}/**/*
EXPECTED: Conflict list with severity ratings, including ModuleOverlap conflicts with:
- Existing module list with scenarios
- Overlap analysis matrix
- Targeted clarification questions
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | analysis=READ-ONLY
"
Fallback: Qwen (same prompt) → Claude (manual analysis)
@@ -97,9 +116,11 @@ Task(subagent_type="cli-execution-agent", prompt=`
Template per conflict:
- Severity: Critical/High/Medium
- Category: Architecture/API/Data/Dependency
- Category: Architecture/API/Data/Dependency/ModuleOverlap
- Affected files + impact
- **For ModuleOverlap**: Include overlap_analysis with existing modules and scenarios
- Options with pros/cons, effort, risk
- **For ModuleOverlap strategies**: Add clarification_needed questions for boundary definition
- Recommended strategy + rationale
### 4. Return Structured Conflict Data
@@ -115,10 +136,10 @@ Task(subagent_type="cli-execution-agent", prompt=`
"id": "CON-001",
"brief": "一行中文冲突摘要",
"severity": "Critical|High|Medium",
"category": "Architecture|API|Data|Dependency",
"category": "Architecture|API|Data|Dependency|ModuleOverlap",
"affected_files": [
".workflow/{session}/.brainstorm/guidance-specification.md",
".workflow/{session}/.brainstorm/system-architect/analysis.md"
".workflow/active/{session}/.brainstorm/guidance-specification.md",
".workflow/active/{session}/.brainstorm/system-architect/analysis.md"
],
"description": "详细描述冲突 - 什么不兼容",
"impact": {
@@ -127,6 +148,23 @@ Task(subagent_type="cli-execution-agent", prompt=`
"migration_required": true|false,
"estimated_effort": "人天估计"
},
"overlap_analysis": {
"// NOTE": "仅当 category=ModuleOverlap 时需要此字段",
"new_module": {
"name": "新模块名称",
"scenarios": ["场景1", "场景2", "场景3"],
"responsibilities": "职责描述"
},
"existing_modules": [
{
"file": "src/existing/module.ts",
"name": "现有模块名称",
"scenarios": ["场景A", "场景B"],
"overlap_scenarios": ["重叠场景1", "重叠场景2"],
"responsibilities": "现有模块职责"
}
]
},
"strategies": [
{
"name": "策略名称(中文)",
@@ -136,9 +174,15 @@ Task(subagent_type="cli-execution-agent", prompt=`
"effort": "时间估计",
"pros": ["优点1", "优点2"],
"cons": ["缺点1", "缺点2"],
"clarification_needed": [
"// NOTE: 仅当需要用户进一步澄清时需要此字段(尤其是 ModuleOverlap",
"新模块的核心职责边界是什么?",
"如何与现有模块 X 协作?",
"哪些场景应该由新模块处理?"
],
"modifications": [
{
"file": ".workflow/{session}/.brainstorm/guidance-specification.md",
"file": ".workflow/active/{session}/.brainstorm/guidance-specification.md",
"section": "## 2. System Architect Decisions",
"change_type": "update",
"old_content": "原始内容片段(用于定位)",
@@ -146,7 +190,7 @@ Task(subagent_type="cli-execution-agent", prompt=`
"rationale": "为什么这样改"
},
{
"file": ".workflow/{session}/.brainstorm/system-architect/analysis.md",
"file": ".workflow/active/{session}/.brainstorm/system-architect/analysis.md",
"section": "## Design Decisions",
"change_type": "update",
"old_content": "原始内容片段",
@@ -203,155 +247,251 @@ Task(subagent_type="cli-execution-agent", prompt=`
`)
```
**Agent Internal Flow**:
**Agent Internal Flow** (Enhanced):
```
1. Load context package
2. Check conflict_risk (exit if none/low)
3. Read existing files + plan artifacts
4. Run CLI analysis (Gemini→Qwen→Claude)
5. Parse conflict findings
6. Generate 2-4 strategies per conflict with modifications
4. Run CLI analysis (Gemini→Qwen→Claude) with enhanced tasks:
- Standard conflict detection (Architecture/API/Data/Dependency)
- **NEW: Module scenario uniqueness detection**
* Extract new module functionality from plan
* Search all existing modules with similar keywords/functionality
* Compare scenario coverage and responsibilities
* Identify functional overlaps and boundary ambiguities
* Generate ModuleOverlap conflicts with overlap_analysis
5. Parse conflict findings (including ModuleOverlap category)
6. Generate 2-4 strategies per conflict:
- Include modifications for each strategy
- **For ModuleOverlap**: Add clarification_needed questions for boundary definition
7. Return JSON to stdout (NOT file write)
8. Return execution log path
```
### Phase 3: User Confirmation via Text Interaction
### Phase 3: Iterative User Interaction with Clarification Loop
**Command parses agent JSON output and presents conflicts to user via text**:
**Execution Flow**:
```
FOR each conflict (逐个处理,无数量限制):
clarified = false
round = 0
userClarifications = []
WHILE (!clarified && round < 10):
round++
// 1. Display conflict (包含所有关键字段)
- category, id, brief, severity, description
- IF ModuleOverlap: 展示 overlap_analysis
* new_module: {name, scenarios, responsibilities}
* existing_modules[]: {file, name, scenarios, overlap_scenarios, responsibilities}
// 2. Display strategies (2-4个策略 + 自定义选项)
- FOR each strategy: {name, approach, complexity, risk, effort, pros, cons}
* IF clarification_needed: 展示待澄清问题列表
- 自定义选项: {suggestions: modification_suggestions[]}
// 3. User selects strategy
userChoice = readInput()
IF userChoice == "自定义":
customConflicts.push({id, brief, category, suggestions, overlap_analysis})
clarified = true
BREAK
selectedStrategy = strategies[userChoice]
// 4. Clarification loop
IF selectedStrategy.clarification_needed.length > 0:
// 收集澄清答案
FOR each question:
answer = readInput()
userClarifications.push({question, answer})
// Agent 重新分析
reanalysisResult = Task(cli-execution-agent, prompt={
冲突信息: {id, brief, category, 策略}
用户澄清: userClarifications[]
场景分析: overlap_analysis (if ModuleOverlap)
输出: {
uniqueness_confirmed: bool,
rationale: string,
updated_strategy: {name, approach, complexity, risk, effort, modifications[]},
remaining_questions: [] (如果仍有歧义)
}
})
IF reanalysisResult.uniqueness_confirmed:
selectedStrategy = updated_strategy
selectedStrategy.clarifications = userClarifications
clarified = true
ELSE:
// 更新澄清问题,继续下一轮
selectedStrategy.clarification_needed = remaining_questions
ELSE:
clarified = true
resolvedConflicts.push({conflict, strategy: selectedStrategy})
END WHILE
END FOR
// Build output
selectedStrategies = resolvedConflicts.map(r => ({
conflict_id, strategy, clarifications[]
}))
```
**Key Data Structures**:
```javascript
// 1. Parse agent JSON output
const conflictData = JSON.parse(agentOutput);
const conflicts = conflictData.conflicts; // No 4-conflict limit
// 2. Format conflicts as text output (max 10 per round)
const batchSize = 10;
const batches = chunkArray(conflicts, batchSize);
for (const [batchIdx, batch] of batches.entries()) {
const totalBatches = batches.length;
// Output batch header
console.log(`===== 冲突解决 (第 ${batchIdx + 1}/${totalBatches} 轮) =====\n`);
// Output each conflict in batch
batch.forEach((conflict, idx) => {
const questionNum = batchIdx * batchSize + idx + 1;
console.log(`【问题${questionNum} - ${conflict.category}】${conflict.id}: ${conflict.brief}`);
conflict.strategies.forEach((strategy, sIdx) => {
const optionLetter = String.fromCharCode(97 + sIdx); // a, b, c, ...
console.log(`${optionLetter}) ${strategy.name}`);
console.log(` 说明:${strategy.approach}`);
console.log(` 复杂度: ${strategy.complexity} | 风险: ${strategy.risk} | 工作量: ${strategy.effort}`);
});
// Add custom option
const customLetter = String.fromCharCode(97 + conflict.strategies.length);
console.log(`${customLetter}) 自定义修改`);
console.log(` 说明:根据修改建议自行处理,不应用预设策略`);
// Show modification suggestions
if (conflict.modification_suggestions && conflict.modification_suggestions.length > 0) {
console.log(` 修改建议:`);
conflict.modification_suggestions.forEach(suggestion => {
console.log(` - ${suggestion}`);
});
}
console.log();
});
console.log(`请回答 (格式: 1a 2b 3c...)`);
// Wait for user input
const userInput = await readUserInput();
// Parse answers
const answers = parseUserAnswers(userInput, batch);
// Custom conflict tracking
customConflicts[] = {
id, brief, category,
suggestions: modification_suggestions[],
overlap_analysis: { new_module{}, existing_modules[] } // ModuleOverlap only
}
// 3. Build selected strategies (exclude custom selections)
const selectedStrategies = answers.filter(a => !a.isCustom).map(a => a.strategy);
const customConflicts = answers.filter(a => a.isCustom).map(a => ({
id: a.conflict.id,
brief: a.conflict.brief,
suggestions: a.conflict.modification_suggestions
}));
// Agent re-analysis prompt output
{
uniqueness_confirmed: bool,
rationale: string,
updated_strategy: {
name, approach, complexity, risk, effort,
modifications: [{file, section, change_type, old_content, new_content, rationale}]
},
remaining_questions: string[]
}
```
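A condensed sketch of the per-conflict clarification loop described in the execution flow above; `displayConflict`, `readUserInput`, and the `Task()` call shape are placeholders for the behaviors named in the pseudocode, not this command's exact implementation:

```javascript
// Illustrative sketch of the ONE-BY-ONE clarification loop (max 10 rounds per conflict).
async function resolveConflict(conflict) {
  const clarifications = [];
  let strategy = null;

  for (let round = 1; round <= 10; round++) {
    displayConflict(conflict);                               // includes overlap_analysis for ModuleOverlap
    const choiceIdx = parseInt(await readUserInput(), 10) - 1;
    if (choiceIdx >= conflict.strategies.length) {           // last option = 自定义修改
      return { custom: true, conflict };
    }
    strategy = conflict.strategies[choiceIdx];

    const questions = strategy.clarification_needed || [];
    if (questions.length === 0) break;                       // nothing to clarify

    for (const question of questions) {
      clarifications.push({ question, answer: await readUserInput() });
    }

    // Agent re-analysis decides whether scenario boundaries are now unique
    const result = await Task("cli-execution-agent", { conflict, strategy, clarifications });
    if (result.uniqueness_confirmed) {
      strategy = result.updated_strategy;
      break;
    }
    strategy.clarification_needed = result.remaining_questions;
  }

  return { custom: false, conflict, strategy, clarifications };
}
```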
**Text Output Example**:
**Text Output Example** (展示关键字段):
```markdown
===== 冲突解决 (第 1/1 轮) =====
============================================================
冲突 1/3 - 第 1 轮
============================================================
【ModuleOverlap】CON-001: 新增用户认证服务与现有模块功能重叠
严重程度: High | 描述: 计划中的 UserAuthService 与现有 AuthManager 场景重叠
【问题1 - Architecture】CON-001: 现有认证系统与计划不兼容
a) 渐进式迁移
说明:保留现有系统,逐步迁移到新方案
复杂度: Medium | 风险: Low | 工作量: 3-5天
b) 完全重写
说明:废弃旧系统,从零实现新认证
复杂度: High | 风险: Medium | 工作量: 7-10天
c) 自定义修改
说明:根据修改建议自行处理,不应用预设策略
修改建议:
- 评估现有认证系统的兼容性,考虑是否可以通过适配器模式桥接
- 检查JWT token格式和验证逻辑是否需要调整
- 确保用户会话管理与新架构保持一致
--- 场景重叠分析 ---
新模块: UserAuthService | 场景: 登录, Token验证, 权限, MFA
现有模块: AuthManager (src/auth/AuthManager.ts) | 重叠: 登录, Token验证
【问题2 - Data】CON-002: 数据库 schema 冲突
a) 添加迁移脚本
说明:创建数据库迁移脚本处理 schema 变更
复杂度: Low | 风险: Low | 工作量: 1-2天
b) 自定义修改
说明:根据修改建议自行处理,不应用预设策略
修改建议:
- 检查现有表结构是否支持新增字段,避免破坏性变更
- 考虑使用数据库版本控制工具如Flyway或Liquibase
- 准备数据迁移和回滚策略
--- 解决策略 ---
1) 合并 (Low复杂度 | Low风险 | 2-3天)
⚠️ 需澄清: AuthManager是否能承担MFA
请回答 (格式: 1a 2b)
2) 拆分边界 (Medium复杂度 | Medium风险 | 4-5天)
⚠️ 需澄清: 基础/高级认证边界? Token验证归谁?
3) 自定义修改
建议: 评估扩展性; 策略模式分离; 定义接口边界
请选择 (1-3): > 2
--- 澄清问答 (第1轮) ---
Q: 基础/高级认证边界?
A: 基础=密码登录+token验证, 高级=MFA+OAuth+SSO
Q: Token验证归谁?
A: 统一由 AuthManager 负责
🔄 重新分析...
✅ 唯一性已确认 | 理由: 边界清晰 - AuthManager(基础+token), UserAuthService(MFA+OAuth+SSO)
============================================================
冲突 2/3 - 第 1 轮 [下一个冲突]
============================================================
```
**User Input Examples**:
- `1a 2a` → Conflict 1: 渐进式迁移, Conflict 2: 添加迁移脚本
- `1b 2b` → Conflict 1: 完全重写, Conflict 2: 自定义修改
- `1c 2c` → Both choose custom modification (user handles manually with suggestions)
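For the batch answer format shown above, a hypothetical parser could look like the following; `parseUserAnswers` is referenced earlier but not defined in this command, and `batchStartNum` is an assumed parameter for batches after the first:

```javascript
// Hypothetical parser for answers like "1a 2b"; illustrative only.
function parseUserAnswers(userInput, batch, batchStartNum = 1) {
  return userInput.trim().split(/\s+/).map(token => {
    const match = token.match(/^(\d+)([a-z])$/);
    if (!match) throw new Error(`无法解析回答: ${token}`);
    const conflict = batch[parseInt(match[1], 10) - batchStartNum];
    const optionIdx = match[2].charCodeAt(0) - 97;            // a=0, b=1, ...
    const isCustom = optionIdx >= conflict.strategies.length; // last letter = 自定义修改
    return { conflict, isCustom, strategy: isCustom ? null : conflict.strategies[optionIdx] };
  });
}
```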
**Loop Characteristics**: 逐个处理 | 迭代澄清(最多 10 轮) | 动态问题生成 | Agent重新分析判断唯一性 | ModuleOverlap场景边界澄清
### Phase 4: Apply Modifications
```javascript
// 1. Extract modifications from selected strategies
// 1. Extract modifications from resolved strategies
const modifications = [];
selectedStrategies.forEach(strategy => {
if (strategy !== "skip") {
modifications.push(...strategy.modifications);
selectedStrategies.forEach(item => {
if (item.strategy && item.strategy.modifications) {
modifications.push(...item.strategy.modifications.map(mod => ({
...mod,
conflict_id: item.conflict_id,
clarifications: item.clarifications
})));
}
});
console.log(`\n正在应用 ${modifications.length} 个修改...`);
// 2. Apply each modification using Edit tool
modifications.forEach(mod => {
if (mod.change_type === "update") {
Edit({
file_path: mod.file,
old_string: mod.old_content,
new_string: mod.new_content
});
const appliedModifications = [];
const failedModifications = [];
modifications.forEach((mod, idx) => {
try {
console.log(`[${idx + 1}/${modifications.length}] 修改 ${mod.file}...`);
if (mod.change_type === "update") {
Edit({
file_path: mod.file,
old_string: mod.old_content,
new_string: mod.new_content
});
} else if (mod.change_type === "add") {
// Handle addition - append or insert based on section
const fileContent = Read(mod.file);
const updated = insertContentAfterSection(fileContent, mod.section, mod.new_content);
Write(mod.file, updated);
} else if (mod.change_type === "remove") {
Edit({
file_path: mod.file,
old_string: mod.old_content,
new_string: ""
});
}
appliedModifications.push(mod);
console.log(` ✓ 成功`);
} catch (error) {
console.log(` ✗ 失败: ${error.message}`);
failedModifications.push({ ...mod, error: error.message });
}
// Handle "add" and "remove" similarly
});
// 3. Update context-package.json
// 3. Update context-package.json with resolution details
const contextPackage = JSON.parse(Read(contextPath));
contextPackage.conflict_detection.conflict_risk = "resolved";
contextPackage.conflict_detection.resolved_conflicts = conflicts.map(c => c.id);
contextPackage.conflict_detection.resolved_conflicts = selectedStrategies.map(s => ({
conflict_id: s.conflict_id,
strategy_name: s.strategy.name,
clarifications: s.clarifications
}));
contextPackage.conflict_detection.custom_conflicts = customConflicts.map(c => c.id);
contextPackage.conflict_detection.resolved_at = new Date().toISOString();
Write(contextPath, JSON.stringify(contextPackage, null, 2));
// 4. Output custom conflict summary (if any)
// 4. Output custom conflict summary with overlap analysis (if any)
if (customConflicts.length > 0) {
console.log("\n===== 需要自定义处理的冲突 =====\n");
console.log(`\n${'='.repeat(60)}`);
console.log(`需要自定义处理的冲突 (${customConflicts.length})`);
console.log(`${'='.repeat(60)}\n`);
customConflicts.forEach(conflict => {
console.log(`${conflict.id}${conflict.brief}`);
console.log("修改建议:");
console.log(`${conflict.category}${conflict.id}: ${conflict.brief}`);
// Show overlap analysis for ModuleOverlap conflicts
if (conflict.category === 'ModuleOverlap' && conflict.overlap_analysis) {
console.log(`\n场景重叠信息:`);
console.log(` 新模块: ${conflict.overlap_analysis.new_module.name}`);
console.log(` 场景: ${conflict.overlap_analysis.new_module.scenarios.join(', ')}`);
console.log(`\n 与以下模块重叠:`);
conflict.overlap_analysis.existing_modules.forEach(mod => {
console.log(` - ${mod.name} (${mod.file})`);
console.log(` 重叠场景: ${mod.overlap_scenarios.join(', ')}`);
});
}
console.log(`\n修改建议:`);
conflict.suggestions.forEach(suggestion => {
console.log(` - ${suggestion}`);
});
@@ -359,25 +499,43 @@ if (customConflicts.length > 0) {
});
}
// 5. Return summary
// 5. Output failure summary (if any)
if (failedModifications.length > 0) {
console.log(`\n⚠️ 部分修改失败 (${failedModifications.length}):`);
failedModifications.forEach(mod => {
console.log(` - ${mod.file}: ${mod.error}`);
});
}
// 6. Return summary
return {
resolved: modifications.length,
custom: customConflicts.length,
modified_files: [...new Set(modifications.map(m => m.file))],
custom_conflicts: customConflicts
total_conflicts: conflicts.length,
resolved_with_strategy: selectedStrategies.length,
custom_handling: customConflicts.length,
modifications_applied: appliedModifications.length,
modifications_failed: failedModifications.length,
modified_files: [...new Set(appliedModifications.map(m => m.file))],
custom_conflicts: customConflicts,
clarification_records: selectedStrategies.filter(s => (s.clarifications || []).length > 0)
};
```
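The `insertContentAfterSection` helper used in the `add` branch above is not defined in this command; a minimal sketch, assuming section headings appear verbatim on their own line:

```javascript
// Illustrative helper: insert new content right after a markdown section heading.
function insertContentAfterSection(fileContent, sectionHeading, newContent) {
  const lines = fileContent.split("\n");
  const idx = lines.findIndex(line => line.trim() === sectionHeading.trim());
  if (idx === -1) {
    // Section not found: append at the end so the modification is not silently dropped
    return `${fileContent}\n\n${sectionHeading}\n\n${newContent}\n`;
  }
  lines.splice(idx + 1, 0, "", newContent);
  return lines.join("\n");
}
```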
**Validation**:
```
✓ Agent returns valid JSON structure
Text output displays all conflicts (max 10 per round)
User selections captured correctly
✓ Agent returns valid JSON structure with ModuleOverlap conflicts
Conflicts processed ONE BY ONE (not in batches)
ModuleOverlap conflicts include overlap_analysis field
✓ Strategies with clarification_needed display questions
✓ User selections captured correctly per conflict
✓ Clarification loop continues until uniqueness confirmed
✓ Agent re-analysis returns uniqueness_confirmed and updated_strategy
✓ Maximum 10 rounds per conflict safety limit enforced
✓ Edit tool successfully applies modifications
✓ guidance-specification.md updated
✓ Role analyses (*.md) updated
✓ context-package.json marked as resolved
Agent log saved to .workflow/{session_id}/.chat/
✓ context-package.json marked as resolved with clarification records
Custom conflicts display overlap_analysis for manual handling
✓ Agent log saved to .workflow/active/{session_id}/.chat/
```
## Output Format: Agent JSON Response
@@ -434,38 +592,45 @@ If Edit tool fails mid-application:
**Output**:
- Modified files:
- `.workflow/{session_id}/.brainstorm/guidance-specification.md`
- `.workflow/{session_id}/.brainstorm/{role}/analysis.md`
- `.workflow/{session_id}/.process/context-package.json` (conflict_risk → resolved)
- `.workflow/active/{session_id}/.brainstorm/guidance-specification.md`
- `.workflow/active/{session_id}/.brainstorm/{role}/analysis.md`
- `.workflow/active/{session_id}/.process/context-package.json` (conflict_risk → resolved)
- NO report file generation
**User Interaction**:
- Text-based strategy selection (max 10 conflicts per round)
- **Iterative conflict processing**: One conflict at a time, not in batches
- Each conflict: 2-4 strategy options + "自定义修改" option (with suggestions)
- **Clarification loop**: Iterative questions per conflict until uniqueness confirmed (up to 10 rounds)
- **ModuleOverlap conflicts**: Display overlap_analysis with existing modules
- **Agent re-analysis**: Dynamic strategy updates based on user clarifications
### Success Criteria
```
✓ CLI analysis returns valid JSON structure
Conflicts presented in batches (max 10 per round)
✓ CLI analysis returns valid JSON structure with ModuleOverlap category
Agent performs scenario uniqueness detection (searches existing modules)
✓ Conflicts processed ONE BY ONE with iterative clarification
✓ Min 2 strategies per conflict with modifications
✓ ModuleOverlap conflicts include overlap_analysis with existing modules
✓ Strategies requiring clarification include clarification_needed questions
✓ Each conflict includes 2-5 modification_suggestions
✓ Text output displays all conflicts correctly with suggestions
✓ User selections captured and processed
✓ Text output displays conflict with overlap analysis (if ModuleOverlap)
✓ User selections captured per conflict
✓ Clarification loop continues until uniqueness confirmed (up to 10 rounds per conflict)
✓ Agent re-analysis with user clarifications updates strategy
✓ Uniqueness confirmation based on clear scenario boundaries
✓ Edit tool applies modifications successfully
✓ Custom conflicts displayed with suggestions for manual handling
✓ Custom conflicts displayed with overlap_analysis for manual handling
✓ guidance-specification.md updated with resolved conflicts
✓ Role analyses (*.md) updated with resolved conflicts
✓ context-package.json marked as "resolved"
✓ context-package.json marked as "resolved" with clarification records
✓ No CONFLICT_RESOLUTION.md file generated
✓ Modification summary includes custom conflict count
✓ Agent log saved to .workflow/{session_id}/.chat/
✓ Modification summary includes:
- Total conflicts
- Resolved with strategy (count)
- Custom handling (count)
- Clarification records
- Overlap analysis for custom ModuleOverlap conflicts
✓ Agent log saved to .workflow/active/{session_id}/.chat/
✓ Error handling robust (validate/retry/degrade)
```
## Related Commands
| Command | Relationship |
|---------|--------------|
| `/workflow:tools:context-gather` | Generates input conflict_detection data |
| `/workflow:plan` | Auto-triggers this when risk ≥ medium |
| `/workflow:tools:task-generate` | Uses resolved conflicts from updated brainstorm files |
| `/workflow:brainstorm:artifacts` | Generates guidance-specification.md (modified by this command) |

View File

@@ -22,7 +22,7 @@ Orchestrator command that invokes `context-search-agent` to gather comprehensive
- **Agent Delegation**: Delegate all discovery to `context-search-agent` for autonomous execution
- **Detection-First**: Check for existing context-package before executing
- **Plan Mode**: Full comprehensive analysis (vs lightweight brainstorm mode)
- **Standardized Output**: Generate `.workflow/{session}/.process/context-package.json`
- **Standardized Output**: Generate `.workflow/active/{session}/.process/context-package.json`
## Execution Flow
@@ -57,8 +57,6 @@ Task(
subagent_type="context-search-agent",
description="Gather comprehensive context for plan",
prompt=`
You are executing as context-search-agent (.claude/agents/context-search-agent.md).
## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution
@@ -71,9 +69,10 @@ You are executing as context-search-agent (.claude/agents/context-search-agent.m
Execute complete context-search-agent workflow for implementation planning:
### Phase 1: Initialization & Pre-Analysis
1. **Detection**: Check for existing context-package (early exit if valid)
2. **Foundation**: Initialize code-index, get project structure, load docs
3. **Analysis**: Extract keywords, determine scope, classify complexity
1. **Project State Loading**: Read and parse `.workflow/project.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components. If the file doesn't exist, proceed with fresh analysis.
2. **Detection**: Check for existing context-package (early exit if valid)
3. **Foundation**: Initialize code-index, get project structure, load docs
4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state
### Phase 2: Multi-Source Context Discovery
Execute all 4 discovery tracks:
@@ -84,16 +83,17 @@ Execute all 4 discovery tracks:
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. Synthesize 4-source data (archive > docs > code > web)
3. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
4. Perform conflict detection with risk assessment
5. **Inject historical conflicts** from archive analysis into conflict_detection
6. Generate and validate context-package.json
2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project.json`** for architecture and tech stack unless code analysis reveals it's outdated.
3. **Populate `project_context`**: Directly use the `overview` from `project.json` to fill the `project_context` section of the output `context-package.json`. Include technology_stack, architecture, key_components, and entry_points.
4. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
5. Perform conflict detection with risk assessment
6. **Inject historical conflicts** from archive analysis into conflict_detection
7. Generate and validate context-package.json
## Output Requirements
Complete context-package.json with:
- **metadata**: task_description, keywords, complexity, tech_stack, session_id
- **project_context**: architecture_patterns, coding_conventions, tech_stack
- **project_context**: architecture_patterns, coding_conventions, tech_stack (sourced from `project.json` overview)
- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
- **dependencies**: {internal[], external[]} with dependency graph
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
@@ -139,7 +139,7 @@ Refer to `context-search-agent.md` Phase 3.7 for complete `context-package.json`
**Key Sections**:
- **metadata**: Session info, keywords, complexity, tech stack
- **project_context**: Architecture patterns, conventions, tech stack
- **project_context**: Architecture patterns, conventions, tech stack (populated from `project.json` overview)
- **assets**: Categorized files with relevance scores (documentation, source_code, config, tests)
- **dependencies**: Internal and external dependency graphs
- **brainstorm_artifacts**: Brainstorm documents with full content (if exists)
@@ -154,7 +154,7 @@ The context-search-agent MUST perform historical archive analysis as Track 1 in
**Step 1: Check for Archive Manifest**
```bash
# Check if archive manifest exists
if [[ -f .workflow/.archives/manifest.json ]]; then
if [[ -f .workflow/archives/manifest.json ]]; then
# Manifest available for querying
fi
```
@@ -233,7 +233,7 @@ if (historicalConflicts.length > 0 && currentRisk === "low") {
### Archive Query Algorithm
```markdown
1. IF .workflow/.archives/manifest.json does NOT exist → Skip Track 1, continue to Track 2
1. IF .workflow/archives/manifest.json does NOT exist → Skip Track 1, continue to Track 2
2. IF manifest exists:
a. Load manifest.json
b. Extract keywords from task_description (nouns, verbs, technical terms)
@@ -250,33 +250,10 @@ if (historicalConflicts.length > 0 && currentRisk === "low") {
3. Continue to Track 2 (reference documentation)
```
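A minimal sketch of steps 1-2, assuming the manifest holds an `archives` array with per-entry `keywords` (the real manifest schema and the `fileExists`/`extractKeywords` helpers are assumptions, not defined here):

```javascript
// Illustrative keyword match against the archive manifest (Track 1).
function queryArchiveManifest(taskDescription) {
  const manifestPath = ".workflow/archives/manifest.json";
  if (!fileExists(manifestPath)) return [];                // Skip Track 1, continue to Track 2
  const manifest = JSON.parse(Read(manifestPath));
  const keywords = extractKeywords(taskDescription)        // nouns, verbs, technical terms
    .map(k => k.toLowerCase());
  return (manifest.archives || []).filter(entry =>
    (entry.keywords || []).some(k => keywords.includes(k.toLowerCase()))
  );
}
```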
## Usage Examples
### Basic Usage
```bash
/workflow:tools:context-gather --session WFS-auth-feature "Implement JWT authentication with refresh tokens"
```
## Success Criteria
- ✅ Valid context-package.json generated in `.workflow/{session}/.process/`
- ✅ Contains >80% relevant files based on task keywords
- ✅ Execution completes within 2 minutes
- ✅ All required schema fields present and valid
- ✅ Conflict risk accurately assessed
- ✅ Agent reports completion with statistics
## Error Handling
| Error | Cause | Resolution |
|-------|-------|------------|
| Package validation failed | Invalid session_id in existing package | Re-run agent to regenerate |
| Agent execution timeout | Large codebase or slow MCP | Increase timeout, check code-index status |
| Missing required fields | Agent incomplete execution | Check agent logs, verify schema compliance |
| File count exceeds limit | Too many relevant files | Agent should auto-prioritize top 50 by relevance |
## Notes
- **Detection-first**: Always check for existing package before invoking agent
- **Project.json integration**: Agent reads `.workflow/project.json` as primary source for project context, avoiding redundant analysis
- **Agent autonomy**: Agent handles all discovery logic per `.claude/agents/context-search-agent.md`
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call

View File

@@ -1,363 +1,120 @@
---
name: task-generate-agent
description: Autonomous task generation using action-planning-agent with discovery and output phases for workflow planning
description: Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation
argument-hint: "--session WFS-session-id [--cli-execute]"
examples:
- /workflow:tools:task-generate-agent --session WFS-auth
- /workflow:tools:task-generate-agent --session WFS-auth --cli-execute
---
# Autonomous Task Generation Command
# Generate Implementation Plan Command
## Overview
Autonomous task JSON and IMPL_PLAN.md generation using action-planning-agent with two-phase execution: discovery and document generation. Supports both agent-driven execution (default) and CLI tool execution modes.
Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent. This command produces **planning artifacts only** - it does NOT execute code implementation. Actual code implementation requires a separate execution command (e.g., /workflow:execute).
## Core Philosophy
- **Agent-Driven**: Delegate execution to action-planning-agent for autonomous operation
- **Two-Phase Flow**: Discovery (context gathering) → Output (document generation)
- **Planning Only**: Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - does NOT implement code
- **Agent-Driven Document Generation**: Delegate plan generation to action-planning-agent
- **Progressive Loading**: Load context incrementally (Core → Selective → On-Demand) due to analysis.md file size
- **Two-Phase Flow**: Discovery (context gathering) → Output (planning document generation)
- **Memory-First**: Reuse loaded documents from conversation memory
- **Smart Selection**: Load synthesis_output OR guidance + relevant role analyses, NOT all role analyses
- **MCP-Enhanced**: Use MCP tools for advanced code analysis and research
- **Pre-Selected Templates**: Command selects correct template based on `--cli-execute` flag **before** invoking agent
- **Agent Simplicity**: Agent receives pre-selected template and focuses only on content generation
- **Path Clarity**: All `focus_paths` should use absolute paths (e.g., `D:\\project\\src\\module`) or clear relative paths from the project root (e.g., `./src/module`)
## Execution Lifecycle
## Document Generation Lifecycle
### Phase 1: Discovery & Context Loading
**⚡ Memory-First Rule**: Skip file loading if documents already in conversation memory
### Phase 1: Context Preparation (Command Responsibility)
**Agent Context Package**:
```javascript
{
"session_id": "WFS-[session-id]",
"execution_mode": "agent-mode" | "cli-execute-mode", // Determined by flag
"task_json_template_path": "~/.claude/workflows/cli-templates/prompts/workflow/task-json-agent-mode.txt"
| "~/.claude/workflows/cli-templates/prompts/workflow/task-json-cli-mode.txt",
// Path selected by command based on --cli-execute flag, agent reads it
"session_metadata": {
// If in memory: use cached content
// Else: Load from .workflow/{session-id}/workflow-session.json
},
"brainstorm_artifacts": {
// Loaded from context-package.json → brainstorm_artifacts section
"role_analyses": [
{
"role": "system-architect",
"files": [{"path": "...", "type": "primary|supplementary"}]
}
],
"guidance_specification": {"path": "...", "exists": true},
"synthesis_output": {"path": "...", "exists": true},
"conflict_resolution": {"path": "...", "exists": true} // if conflict_risk >= medium
},
"context_package_path": ".workflow/{session-id}/.process/context-package.json",
"context_package": {
// If in memory: use cached content
// Else: Load from .workflow/{session-id}/.process/context-package.json
},
"mcp_capabilities": {
"code_index": true,
"exa_code": true,
"exa_web": true
}
}
**Command prepares session paths and metadata for planning document generation.**
**Session Path Structure**:
```
.workflow/active/WFS-{session-id}/
├── workflow-session.json # Session metadata
├── .process/
│   └── context-package.json # Context package with artifact catalog
├── .task/ # Output: Task JSON files
├── IMPL_PLAN.md # Output: Implementation plan
└── TODO_LIST.md # Output: TODO list
```
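As an illustration of the command-side preparation (not part of the agent prompt itself), the paths above might be assembled roughly as follows; variable names are assumptions:

```javascript
// Illustrative assembly of the session paths and metadata handed to the agent.
const sessionRoot = `.workflow/active/${sessionId}`;
const agentPromptContext = {
  session_id: sessionId,
  execution_mode: hasCliExecuteFlag ? "cli-execute-mode" : "agent-mode",
  session_metadata_path: `${sessionRoot}/workflow-session.json`,
  context_package_path: `${sessionRoot}/.process/context-package.json`,
  output_paths: {
    task_dir: `${sessionRoot}/.task/`,
    impl_plan: `${sessionRoot}/IMPL_PLAN.md`,
    todo_list: `${sessionRoot}/TODO_LIST.md`
  },
  mcp_capabilities: { code_index: true, exa_code: true, exa_web: true }
};
```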
**Discovery Actions**:
1. **Load Session Context** (if not in memory)
```javascript
if (!memory.has("workflow-session.json")) {
Read(.workflow/{session-id}/workflow-session.json)
}
```
**Command Preparation**:
1. **Assemble Session Paths** for agent prompt:
- `session_metadata_path`
- `context_package_path`
- Output directory paths
2. **Load Context Package** (if not in memory)
```javascript
if (!memory.has("context-package.json")) {
Read(.workflow/{session-id}/.process/context-package.json)
}
```
2. **Provide Metadata** (simple values):
- `session_id`
- `execution_mode` (agent-mode | cli-execute-mode)
- `mcp_capabilities` (available MCP tools)
3. **Extract & Load Role Analyses** (from context-package.json)
```javascript
// Extract role analysis paths from context package
const roleAnalysisPaths = contextPackage.brainstorm_artifacts.role_analyses
.flatMap(role => role.files.map(f => f.path));
### Phase 2: Planning Document Generation (Agent Responsibility)
// Load each role analysis file
roleAnalysisPaths.forEach(path => Read(path));
```
4. **Load Conflict Resolution** (from context-package.json, if exists)
```javascript
if (contextPackage.brainstorm_artifacts.conflict_resolution?.exists) {
Read(contextPackage.brainstorm_artifacts.conflict_resolution.path)
}
```
5. **Code Analysis with Native Tools** (optional - enhance understanding)
```bash
# Find relevant files for task context
find . -name "*auth*" -type f
rg "authentication|oauth" -g "*.ts"
```
6. **MCP External Research** (optional - gather best practices)
```javascript
// Get external examples for implementation
mcp__exa__get_code_context_exa(
query="TypeScript JWT authentication best practices",
tokensNum="dynamic"
)
```
### Phase 2: Agent Execution (Document Generation)
**Pre-Agent Template Selection** (Command decides path before invoking agent):
```javascript
// Command checks flag and selects template PATH (not content)
const templatePath = hasCliExecuteFlag
? "~/.claude/workflows/cli-templates/prompts/workflow/task-json-cli-mode.txt"
: "~/.claude/workflows/cli-templates/prompts/workflow/task-json-agent-mode.txt";
```
**Purpose**: Generate IMPL_PLAN.md, task JSONs, and TODO_LIST.md - planning documents only, NOT code implementation.
**Agent Invocation**:
```javascript
Task(
subagent_type="action-planning-agent",
description="Generate task JSON and implementation plan",
description="Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
prompt=`
## Execution Context
## TASK OBJECTIVE
Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for workflow session
**Session ID**: WFS-{session-id}
**Execution Mode**: {agent-mode | cli-execute-mode}
**Task JSON Template Path**: {template_path}
IMPORTANT: This is PLANNING ONLY - you are generating planning documents, NOT implementing code.
## Phase 1: Discovery Results (Provided Context)
CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)
### Session Metadata
{session_metadata_content}
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Context Package: .workflow/active/{session-id}/.process/context-package.json
### Role Analyses (Enhanced by Synthesis)
{role_analyses_content}
- Includes requirements, design specs, enhancements, and clarifications from synthesis phase
Output:
- Task Dir: .workflow/active/{session-id}/.task/
- IMPL_PLAN: .workflow/active/{session-id}/IMPL_PLAN.md
- TODO_LIST: .workflow/active/{session-id}/TODO_LIST.md
### Artifacts Inventory
- **Guidance Specification**: {guidance_spec_path}
- **Role Analyses**: {role_analyses_list}
## CONTEXT METADATA
Session ID: {session-id}
Planning Mode: {agent-mode | cli-execute-mode}
MCP Capabilities: {exa_code, exa_web, code_index}
### Context Package
{context_package_summary}
- Includes conflict_risk assessment
## EXPECTED DELIVERABLES
1. Task JSON Files (.task/IMPL-*.json)
- 7-field schema (id, title, status, context_package_path, meta, context, flow_control)
- Quantified requirements with explicit counts
- Artifacts integration from context package
- Flow control with pre_analysis steps
### Conflict Resolution (Conditional)
If conflict_risk was medium/high, modifications have been applied to:
- **guidance-specification.md**: Design decisions updated to resolve conflicts
- **Role analyses (*.md)**: Recommendations adjusted for compatibility
- **context-package.json**: Marked as "resolved" with conflict IDs
- NO separate CONFLICT_RESOLUTION.md file (conflicts resolved in-place)
2. Implementation Plan (IMPL_PLAN.md)
- Context analysis and artifact references
- Task breakdown and execution strategy
- Complete structure per agent definition
### MCP Analysis Results (Optional)
**Code Structure**: {mcp_code_index_results}
**External Research**: {mcp_exa_research_results}
3. TODO List (TODO_LIST.md)
- Hierarchical structure (containers, pending, completed markers)
- Links to task JSONs and summaries
- Matches task JSON hierarchy
## Phase 2: Document Generation Task
## QUALITY STANDARDS
Hard Constraints:
- Task count <= 12 (hard limit - request re-scope if exceeded)
- All requirements quantified (explicit counts and enumerated lists)
- Acceptance criteria measurable (include verification commands)
- Artifact references mapped from context package
- All documents follow agent-defined structure
### Task Decomposition Standards
**Core Principle**: Task Merging Over Decomposition
- **Merge Rule**: Execute together when possible
- **Decompose Only When**:
- Excessive workload (>2500 lines or >6 files)
- Different tech stacks or domains
- Sequential dependency blocking
- Parallel execution needed
**Task Limits**:
- **Maximum 10 tasks** (hard limit)
- **Function-based**: Complete units (logic + UI + tests + config)
- **Hierarchy**: Flat (≤5) | Two-level (6-10) | Re-scope (>10)
### Required Outputs
#### 1. Task JSON Files (.task/IMPL-*.json)
**Location**: .workflow/{session-id}/.task/
**Template**: Read from the template path provided above
**Task JSON Template Loading**:
\`\`\`
Read({template_path})
\`\`\`
**Important**:
- Read the template from the path provided in context
- Use the template structure exactly as written
- Replace placeholder variables ({synthesis_spec_path}, {role_analysis_path}, etc.) with actual session-specific paths
- Include MCP tool integration in pre_analysis steps
- Map artifacts based on task domain (UI → ui-designer, Backend → system-architect)
#### 2. IMPL_PLAN.md
**Location**: .workflow/{session-id}/IMPL_PLAN.md
**IMPL_PLAN Template**:
\`\`\`
$(cat ~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)
\`\`\`
**Important**:
- Use the template above for IMPL_PLAN.md generation
- Replace all {placeholder} variables with actual session-specific values
- Populate CCW Workflow Context based on actual phase progression
- Extract content from role analyses and context-package.json
- List all detected brainstorming artifacts with correct paths (role analyses, guidance-specification.md)
- Include conflict resolution status if CONFLICT_RESOLUTION.md exists
#### 3. TODO_LIST.md
**Location**: .workflow/{session-id}/TODO_LIST.md
**Structure**:
\`\`\`markdown
# Tasks: {Session Topic}
## Task Progress
**IMPL-001**: [Main Task Group] → [📋](./.task/IMPL-001.json)
- [ ] **IMPL-001.1**: [Subtask] → [📋](./.task/IMPL-001.1.json)
- [ ] **IMPL-001.2**: [Subtask] → [📋](./.task/IMPL-001.2.json)
- [ ] **IMPL-002**: [Simple Task] → [📋](./.task/IMPL-002.json)
## Status Legend
- \`▸\` = Container task (has subtasks)
- \`- [ ]\` = Pending leaf task
- \`- [x]\` = Completed leaf task
\`\`\`
### Execution Instructions for Agent
**Agent Task**: Generate task JSON files, IMPL_PLAN.md, and TODO_LIST.md based on analysis results
**Note**: The correct task JSON template path has been pre-selected by the command based on the `--cli-execute` flag and is provided in the context as `{template_path}`.
**Step 1: Load Task JSON Template**
- Read template from the provided path: `Read({template_path})`
- This template is already the correct one based on execution mode
**Step 2: Extract and Decompose Tasks**
- Parse role analysis.md files for requirements, design specs, and task recommendations
- Review synthesis enhancements and clarifications in role analyses
- Apply conflict resolution strategies (if CONFLICT_RESOLUTION.md exists)
- Apply task merging rules (merge when possible, decompose only when necessary)
- Map artifacts to tasks based on domain (UI → ui-designer, Backend → system-architect, Data → data-architect)
- Ensure task count ≤10
**Step 3: Generate Task JSON Files**
- Use the template structure from Step 1
- Create .task/IMPL-*.json files with proper structure
- Replace all {placeholder} variables with actual session paths
- Embed artifacts array with brainstorming outputs
- Include MCP tool integration in pre_analysis steps
**Step 4: Create IMPL_PLAN.md**
- Use IMPL_PLAN template
- Populate all sections with session-specific content
- List artifacts with priorities and usage guidelines
- Document execution strategy and dependencies
**Step 5: Generate TODO_LIST.md**
- Create task progress checklist matching generated JSONs
- Use proper status indicators (▸, [ ], [x])
- Link to task JSON files
**Step 6: Update Session State**
- Update workflow-session.json with task count and artifact inventory
- Mark session ready for execution
### MCP Enhancement Examples
**Code Index Usage**:
\`\`\`javascript
// Discover authentication-related files
bash(find . -name "*auth*" -type f)
// Search for OAuth patterns
bash(rg "oauth|jwt|authentication" -g "*.{ts,js}")
// Get file summary for key components
bash(rg "^(class|function|export|interface)" src/auth/index.ts)
\`\`\`
**Exa Research Usage**:
\`\`\`javascript
// Get best practices for task implementation
mcp__exa__get_code_context_exa(
query="TypeScript OAuth2 implementation patterns",
tokensNum="dynamic"
)
// Research specific API usage
mcp__exa__get_code_context_exa(
query="Express.js JWT middleware examples",
tokensNum=5000
)
\`\`\`
### Quality Validation
Before completion, verify:
- [ ] All task JSON files created in .task/ directory
- [ ] Each task JSON has 5 required fields
- [ ] Artifact references correctly mapped
- [ ] Flow control includes artifact loading steps
- [ ] MCP tool integration added where appropriate
- [ ] IMPL_PLAN.md follows required structure
- [ ] TODO_LIST.md matches task JSONs
- [ ] Dependency graph is acyclic
- [ ] Task count within limits (≤10)
- [ ] Session state updated
## Output
Generate all three documents and report completion status:
- Task JSON files created: N files
- Artifacts integrated: synthesis-spec, guidance-specification, N role analyses
- MCP enhancements: code-index, exa-research
- Session ready for execution: /workflow:execute
## SUCCESS CRITERIA
- All planning documents generated successfully:
- Task JSONs valid and saved to .task/ directory
- IMPL_PLAN.md created with complete structure
- TODO_LIST.md generated matching task JSONs
- Return completion status with document count and task breakdown summary
`
)
```
### Agent Context Passing
**Memory-Aware Context Assembly**:
```javascript
// Assemble context package for agent
const agentContext = {
session_id: "WFS-[id]",
// Use memory if available, else load
session_metadata: memory.has("workflow-session.json")
? memory.get("workflow-session.json")
: Read(.workflow/WFS-[id]/workflow-session.json),
context_package_path: ".workflow/WFS-[id]/.process/context-package.json",
context_package: memory.has("context-package.json")
? memory.get("context-package.json")
: Read(".workflow/WFS-[id]/.process/context-package.json"),
// Extract brainstorm artifacts from context package
brainstorm_artifacts: extractBrainstormArtifacts(context_package),
// Load role analyses using paths from context package
role_analyses: brainstorm_artifacts.role_analyses
.flatMap(role => role.files)
.map(file => Read(file.path)),
// Load conflict resolution if exists (from context package)
conflict_resolution: brainstorm_artifacts.conflict_resolution?.exists
? Read(brainstorm_artifacts.conflict_resolution.path)
: null,
// Optional MCP enhancements
mcp_analysis: executeMcpDiscovery()
}
```

View File

@@ -1,14 +1,28 @@
---
name: task-generate-tdd
description: Generate TDD task chains with Red-Green-Refactor dependencies, test-first structure, and cycle validation
argument-hint: "--session WFS-session-id [--agent]"
allowed-tools: Read(*), Write(*), Bash(gemini:*), TodoWrite(*)
description: Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation
argument-hint: "--session WFS-session-id [--cli-execute]"
examples:
- /workflow:tools:task-generate-tdd --session WFS-auth
- /workflow:tools:task-generate-tdd --session WFS-auth --cli-execute
---
# TDD Task Generation Command
# Autonomous TDD Task Generation Command
## Overview
Generate TDD-specific tasks from analysis results with complete Red-Green-Refactor cycles contained within each task.
Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent with two-phase execution: discovery and document generation. Supports both agent-driven execution (default) and CLI tool execution modes. Generates complete Red-Green-Refactor cycles contained within each task.
## Core Philosophy
- **Agent-Driven**: Delegate execution to action-planning-agent for autonomous operation
- **Two-Phase Flow**: Discovery (context gathering) → Output (document generation)
- **Memory-First**: Reuse loaded documents from conversation memory
- **MCP-Enhanced**: Use MCP tools for advanced code analysis and research
- **Pre-Selected Templates**: Command selects correct TDD template based on `--cli-execute` flag **before** invoking agent
- **Agent Simplicity**: Agent receives pre-selected template and focuses only on content generation
- **Path Clarity**: All `focus_paths` should use absolute paths (e.g., `D:\\project\\src\\module`) or clear relative paths from the project root (e.g., `./src/module`)
- **TDD-First**: Every feature starts with a failing test (Red phase)
- **Feature-Complete Tasks**: Each task contains complete Red-Green-Refactor cycle
- **Quantification-Enforced**: All test cases, coverage requirements, and implementation scope MUST include explicit counts and enumerations
## Task Strategy & Philosophy
@@ -16,11 +30,6 @@ Generate TDD-specific tasks from analysis results with complete Red-Green-Refact
- **1 feature = 1 task** containing complete TDD cycle internally
- Each task executes Red-Green-Refactor phases sequentially
- Task count = Feature count (typically 5 features = 5 tasks)
- **Benefits**:
- 70% reduction in task management overhead
- Continuous context per feature (no switching between TEST/IMPL/REFACTOR)
- Simpler dependency management
- Maintains TDD rigor through internal phase structure
**Previous Approach** (Deprecated):
- 1 feature = 3 separate tasks (TEST-N.M, IMPL-N.M, REFACTOR-N.M)
@@ -44,363 +53,333 @@ Generate TDD-specific tasks from analysis results with complete Red-Green-Refact
- **Current approach**: 1 feature = 1 task (IMPL-N with internal Red-Green-Refactor phases)
- **Complex features**: 1 container (IMPL-N) + subtasks (IMPL-N.M) when necessary
### Core Principles
- **TDD-First**: Every feature starts with a failing test (Red phase)
- **Feature-Complete Tasks**: Each task contains complete Red-Green-Refactor cycle
- **Phase-Explicit**: Internal phases clearly marked in flow_control.implementation_approach
- **Task Merging**: Prefer single task per feature over decomposition
- **Path Clarity**: All `focus_paths` should use absolute paths (e.g., `D:\\project\\src\\module`) or clear relative paths from the project root (e.g., `./src/module`)
- **Artifact-Aware**: Integrates brainstorming outputs
- **Memory-First**: Reuse loaded documents from memory
- **Context-Aware**: Analyzes existing codebase and test patterns
- **Iterative Green Phase**: Auto-diagnose and fix test failures with Gemini + optional Codex
- **Safety-First**: Auto-revert on max iterations to prevent broken state
## Core Responsibilities
- Parse analysis results and identify testable features
- Generate feature-complete tasks with internal TDD cycles (1 task per simple feature)
- Apply task merging strategy by default, create subtasks only when complexity requires
- Generate IMPL_PLAN.md with TDD Implementation Tasks section
- Generate TODO_LIST.md with internal TDD phase indicators
- Update session state for TDD execution with task count compliance
## Execution Lifecycle
### Phase 1: Input Validation & Discovery
**Memory-First Rule**: Skip file loading if documents already in conversation memory
### Phase 1: Discovery & Context Loading
**Memory-First Rule**: Skip file loading if documents already in conversation memory
1. **Session Validation**
- If session metadata in memory → Skip loading
- Else: Load `.workflow/{session_id}/workflow-session.json`
2. **Conflict Resolution Check** (NEW - Priority Input)
- If CONFLICT_RESOLUTION.md exists → Load selected strategies
- Else: Skip to brainstorming artifacts
- Path: `.workflow/{session_id}/.process/CONFLICT_RESOLUTION.md`
3. **Artifact Discovery**
- If artifact inventory in memory → Skip scanning
- Else: Scan `.workflow/{session_id}/.brainstorming/` directory
- Detect: role analysis documents, guidance-specification.md, role analyses
4. **Context Package Loading**
- Load `.workflow/{session_id}/.process/context-package.json`
- Load `.workflow/{session_id}/.process/test-context-package.json` (if exists)
### Phase 2: TDD Task JSON Generation
**Input Sources** (priority order):
1. **Conflict Resolution** (if exists): `.process/CONFLICT_RESOLUTION.md` - Selected resolution strategies
2. **Brainstorming Artifacts**: Role analysis documents (system-architect, product-owner, etc.)
3. **Context Package**: `.process/context-package.json` - Project structure and requirements
4. **Test Context**: `.process/test-context-package.json` - Existing test patterns
**TDD Task Structure includes**:
- Feature list with testable requirements
- Test cases for Red phase
- Implementation requirements for Green phase (with test-fix cycle)
- Refactoring opportunities
- Task dependencies and execution order
- Conflict resolution decisions (if applicable)
### Phase 3: Task JSON & IMPL_PLAN.md Generation
#### Task Structure (Feature-Complete with Internal TDD)
For each feature, generate task(s) with ID format:
- **IMPL-N** - Single task containing complete TDD cycle (Red-Green-Refactor)
- **IMPL-N.M** - Sub-tasks only when feature is complex (>2500 lines or technical blocking)
**Task Dependency Rules**:
- **Sequential features**: IMPL-2 depends_on ["IMPL-1"] if Feature 2 needs Feature 1
- **Independent features**: No dependencies, can execute in parallel
- **Complex features**: IMPL-N.2 depends_on ["IMPL-N.1"] for subtask ordering
**Agent Assignment**:
- **All IMPL tasks** → `@code-developer` (handles full TDD cycle)
- Agent executes Red, Green, Refactor phases sequentially within task
**Meta Fields**:
- `meta.type`: "feature" (TDD-driven feature implementation)
- `meta.agent`: "@code-developer"
- `meta.tdd_workflow`: true (enables TDD-specific flow)
- `meta.tdd_phase`: Not used (phases are in flow_control.implementation_approach)
- `meta.max_iterations`: 3 (for Green phase test-fix cycle)
- `meta.use_codex`: false (manual fixes by default)
#### Task JSON Structure Reference
**Simple Feature Task (IMPL-N.json)** - Recommended for most features:
```json
**Agent Context Package**:
```javascript
{
"id": "IMPL-N", // Task identifier
"title": "Feature description with TDD", // Human-readable title
"status": "pending", // pending | in_progress | completed | container
"context_package_path": ".workflow/{session-id}/.process/context-package.json", // Path to smart context package
"meta": {
"type": "feature", // Task type
"agent": "@code-developer", // Assigned agent
"tdd_workflow": true, // REQUIRED: Enables TDD flow
"max_iterations": 3, // Green phase test-fix cycle limit
"use_codex": false // false=manual fixes, true=Codex automated fixes
"session_id": "WFS-[session-id]",
"execution_mode": "agent-mode" | "cli-execute-mode", // Determined by flag
"task_json_template_path": "~/.claude/workflows/cli-templates/prompts/workflow/task-json-agent-mode.txt"
| "~/.claude/workflows/cli-templates/prompts/workflow/task-json-cli-mode.txt",
// Path selected by command based on --cli-execute flag, agent reads it
"workflow_type": "tdd",
"session_metadata": {
// If in memory: use cached content
// Else: Load from .workflow/active/{session-id}/workflow-session.json
},
"context": {
"requirements": [ // Feature requirements with TDD phases
"Feature description",
"Red: Test scenarios to write",
"Green: Implementation approach with test-fix cycle",
"Refactor: Code quality improvements"
],
"tdd_cycles": [ // OPTIONAL: Detailed test cycles
"brainstorm_artifacts": {
// Loaded from context-package.json → brainstorm_artifacts section
"role_analyses": [
{
"cycle": 1,
"feature": "Specific functionality",
"test_focus": "What to test",
"expected_failure": "Why test should fail initially"
"role": "system-architect",
"files": [{"path": "...", "type": "primary|supplementary"}]
}
],
"focus_paths": ["D:\\project\\src\\path", "./tests/path"], // Absolute or clear relative paths from project root
"acceptance": [ // Success criteria
"All tests pass (Red → Green)",
"Code refactored (Refactor complete)",
"Test coverage ≥80%"
],
"depends_on": [] // Task dependencies
"guidance_specification": {"path": "...", "exists": true},
"synthesis_output": {"path": "...", "exists": true},
"conflict_resolution": {"path": "...", "exists": true} // if conflict_risk >= medium
},
"flow_control": {
"pre_analysis": [ // OPTIONAL: Pre-execution checks
{
"step": "check_test_framework",
"action": "Verify test framework",
"command": "bash(npm list jest)",
"output_to": "test_framework_info",
"on_error": "warn"
}
],
"implementation_approach": [ // REQUIRED: 3 TDD phases
{
"step": 1,
"title": "RED Phase: Write failing tests",
"tdd_phase": "red", // REQUIRED: Phase identifier
"description": "Write comprehensive failing tests",
"modification_points": ["Files/changes to make"],
"logic_flow": ["Step-by-step process"],
"acceptance": ["Phase success criteria"],
"depends_on": [],
"output": "failing_tests"
},
{
"step": 2,
"title": "GREEN Phase: Implement to pass tests",
"tdd_phase": "green", // REQUIRED: Phase identifier
"description": "Minimal implementation with test-fix cycle",
"modification_points": ["Implementation files"],
"logic_flow": [
"Implement minimal code",
"Run tests",
"If fail → Enter iteration loop (max 3):",
" 1. Extract failure messages",
" 2. Gemini bug-fix diagnosis",
" 3. Apply fixes",
" 4. Rerun tests",
"If max_iterations → Auto-revert"
],
"acceptance": ["All tests pass"],
"command": "bash(npm test -- tests/path/)",
"depends_on": [1],
"output": "passing_implementation"
},
{
"step": 3,
"title": "REFACTOR Phase: Improve code quality",
"tdd_phase": "refactor", // REQUIRED: Phase identifier
"description": "Refactor while keeping tests green",
"modification_points": ["Quality improvements"],
"logic_flow": ["Incremental refactoring with test verification"],
"acceptance": ["Tests still pass", "Code quality improved"],
"command": "bash(npm run lint && npm test)",
"depends_on": [2],
"output": "refactored_implementation"
}
],
"post_completion": [ // OPTIONAL: Final verification
{
"step": "verify_full_tdd_cycle",
"action": "Confirm complete TDD cycle",
"command": "bash(npm test && echo 'TDD complete')",
"output_to": "final_validation",
"on_error": "fail"
}
],
"error_handling": { // OPTIONAL: Error recovery
"green_phase_max_iterations": {
"action": "revert_all_changes",
"commands": ["bash(git reset --hard HEAD)"],
"report": "Generate failure report"
}
}
"context_package_path": ".workflow/active//{session-id}/.process/context-package.json",
"context_package": {
// If in memory: use cached content
// Else: Load from .workflow/active/{session-id}/.process/context-package.json
},
"test_context_package_path": ".workflow/active//{session-id}/.process/test-context-package.json",
"test_context_package": {
// Existing test patterns and coverage analysis
},
"mcp_capabilities": {
"code_index": true,
"exa_code": true,
"exa_web": true
}
}
```
**Key JSON Fields Summary**:
- `meta.tdd_workflow`: Must be `true`
- `meta.max_iterations`: Green phase fix cycle limit (default: 3)
- `meta.use_codex`: Automated fixes (false=manual, true=Codex)
- `flow_control.implementation_approach`: Exactly 3 steps with `tdd_phase`: "red", "green", "refactor"
- `context.tdd_cycles`: Optional detailed test cycle specifications
- `context.parent`: Required for subtasks (IMPL-N.M)
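A hedged sketch of the Green-phase test-fix cycle that `meta.max_iterations` and the auto-revert error handling describe; `runTests`, `geminiDiagnose`, and `applyFixes` are placeholders for the behaviors named in the logic_flow, not real APIs:

```javascript
// Illustrative Green-phase loop: run tests, diagnose failures, retry, auto-revert.
function greenPhase(maxIterations = 3) {
  let result = runTests();                               // e.g. bash("npm test -- tests/path/")
  let iteration = 0;
  while (!result.passed && iteration < maxIterations) {
    iteration++;
    const diagnosis = geminiDiagnose(result.failures);   // Gemini bug-fix diagnosis
    applyFixes(diagnosis);                                // manual or Codex, per meta.use_codex
    result = runTests();                                  // rerun after fixes
  }
  if (result.passed) return { status: "green", iterations: iteration };
  // Max iterations exhausted: auto-revert to avoid leaving a broken state
  bash("git reset --hard HEAD");
  return { status: "failed", reverted: true };
}
```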
**Discovery Actions**:
1. **Load Session Context** (if not in memory)
```javascript
if (!memory.has("workflow-session.json")) {
Read(.workflow/active/{session-id}/workflow-session.json)
}
```
#### IMPL_PLAN.md Structure
2. **Load Context Package** (if not in memory)
```javascript
if (!memory.has("context-package.json")) {
Read(.workflow/active/{session-id}/.process/context-package.json)
}
```
Generate IMPL_PLAN.md with 8-section structure:
3. **Load Test Context Package** (if not in memory)
```javascript
if (!memory.has("test-context-package.json")) {
Read(.workflow/active/{session-id}/.process/test-context-package.json)
}
```
**Frontmatter** (required fields):
```yaml
---
identifier: WFS-{session-id}
source: "User requirements" | "File: path"
conflict_resolution: .workflow/{session-id}/.process/CONFLICT_RESOLUTION.md # if exists
context_package: .workflow/{session-id}/.process/context-package.json
context_package_path: .workflow/{session-id}/.process/context-package.json
test_context: .workflow/{session-id}/.process/test-context-package.json # if exists
workflow_type: "tdd"
verification_history:
conflict_resolution: "executed | skipped" # based on conflict_risk
action_plan_verify: "pending"
phase_progression: "brainstorm → context → test_context → conflict_resolution → tdd_planning"
feature_count: N
task_count: N # ≤10 total
task_breakdown:
simple_features: K
complex_features: L
total_subtasks: M
tdd_workflow: true
---
4. **Extract & Load Role Analyses** (from context-package.json)
```javascript
// Extract role analysis paths from context package
const roleAnalysisPaths = contextPackage.brainstorm_artifacts.role_analyses
.flatMap(role => role.files.map(f => f.path));
// Load each role analysis file
roleAnalysisPaths.forEach(path => Read(path));
```
5. **Load Conflict Resolution** (from context-package.json, if exists)
```javascript
if (contextPackage.brainstorm_artifacts.conflict_resolution?.exists) {
Read(contextPackage.brainstorm_artifacts.conflict_resolution.path)
}
```
6. **Code Analysis with Native Tools** (optional - enhance understanding)
```bash
# Find relevant test files and patterns
find . -name "*test*" -type f
rg "describe|it\(|test\(" -g "*.ts"
```
7. **MCP External Research** (optional - gather TDD best practices)
```javascript
// Get external TDD examples and patterns
mcp__exa__get_code_context_exa(
query="TypeScript TDD best practices Red-Green-Refactor",
tokensNum="dynamic"
)
```
### Phase 2: Agent Execution (Document Generation)
**Pre-Agent Template Selection** (Command decides path before invoking agent):
```javascript
// Command checks flag and selects template PATH (not content)
const templatePath = hasCliExecuteFlag
? "~/.claude/workflows/cli-templates/prompts/workflow/task-json-cli-mode.txt"
: "~/.claude/workflows/cli-templates/prompts/workflow/task-json-agent-mode.txt";
```
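For reference, a shell-level equivalent of the same selection. The `CLI_EXECUTE` variable and the early-exit check are illustrative; the template paths are the ones listed above:
```bash
#!/usr/bin/env bash
CLI_EXECUTE=false   # set to true when --cli-execute was passed
if [ "$CLI_EXECUTE" = true ]; then
  TEMPLATE_PATH="$HOME/.claude/workflows/cli-templates/prompts/workflow/task-json-cli-mode.txt"
else
  TEMPLATE_PATH="$HOME/.claude/workflows/cli-templates/prompts/workflow/task-json-agent-mode.txt"
fi
# Only the PATH is handed to the agent; fail early if the template is missing.
[ -f "$TEMPLATE_PATH" ] || { echo "template not found: $TEMPLATE_PATH" >&2; exit 1; }
echo "selected template: $TEMPLATE_PATH"
```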
#### IMPL_PLAN.md Structure
Generate IMPL_PLAN.md with 8-section structure:
**Frontmatter** (required fields):
```yaml
---
identifier: WFS-{session-id}
source: "User requirements" | "File: path"
conflict_resolution: .workflow/{session-id}/.process/CONFLICT_RESOLUTION.md # if exists
context_package: .workflow/{session-id}/.process/context-package.json
context_package_path: .workflow/{session-id}/.process/context-package.json
test_context: .workflow/{session-id}/.process/test-context-package.json # if exists
workflow_type: "tdd"
verification_history:
  conflict_resolution: "executed | skipped" # based on conflict_risk
  action_plan_verify: "pending"
phase_progression: "brainstorm → context → test_context → conflict_resolution → tdd_planning"
feature_count: N
task_count: N # ≤10 total
task_breakdown:
  simple_features: K
  complex_features: L
  total_subtasks: M
tdd_workflow: true
---
```
**8 Sections Structure**:
```markdown
# Implementation Plan: {Project Title}
## 1. Summary
- Core requirements and objectives (2-3 paragraphs)
- TDD-specific technical approach
## 2. Context Analysis
- CCW Workflow Context (Phase progression, Quality gates)
- Context Package Summary (Focus paths, Test context)
- Project Profile (Type, Scale, Tech Stack, Timeline)
- Module Structure (Directory tree)
- Dependencies (Primary, Testing, Development)
- Patterns & Conventions
## 3. Brainstorming Artifacts Reference
- Artifact Usage Strategy
- CONFLICT_RESOLUTION.md (if exists - selected resolution strategies)
- role analysis documents (primary reference)
- test-context-package.json (test patterns)
- context-package.json (smart context)
- Artifact Priority in Development
## 4. Implementation Strategy
- Execution Strategy (TDD Cycles: Red-Green-Refactor)
- Architectural Approach
- Key Dependencies (Task dependency graph)
- Testing Strategy (Coverage targets, Quality gates)
## 5. TDD Implementation Tasks
- Feature-by-Feature TDD Tasks
- Each task: IMPL-N with internal Red → Green → Refactor
- Dependencies and complexity metrics
- Complex Feature Examples (when subtasks needed)
- TDD Task Breakdown Summary
## 6. Implementation Plan (Detailed Phased Breakdown)
- Execution Strategy (feature-by-feature sequential)
- Phase breakdown (Phase 1, Phase 2, etc.)
- Resource Requirements (Team, Dependencies, Infrastructure)
## 7. Risk Assessment & Mitigation
- Risk table (Risk, Impact, Probability, Mitigation, Owner)
- Critical Risks (TDD-specific)
- Monitoring Strategy
## 8. Success Criteria
- Functional Completeness
- Technical Quality (Test coverage ≥80%)
- Operational Readiness
- TDD Compliance
```
**Agent Invocation**:
```javascript
Task(
  subagent_type="action-planning-agent",
  description="Generate TDD task JSON and implementation plan",
  prompt=`
## Execution Context
**Session ID**: WFS-{session-id}
**Workflow Type**: TDD
**Execution Mode**: {agent-mode | cli-execute-mode}
**Task JSON Template Path**: {template_path}
## Phase 1: Discovery Results (Provided Context)
### Session Metadata
{session_metadata_content}
### Role Analyses (Enhanced by Synthesis)
{role_analyses_content}
- Includes requirements, design specs, enhancements, and clarifications from synthesis phase
### Artifacts Inventory
- **Guidance Specification**: {guidance_spec_path}
- **Role Analyses**: {role_analyses_list}
### Context Package
{context_package_summary}
- Includes conflict_risk assessment
### Test Context Package
{test_context_package_summary}
- Existing test patterns, framework config, coverage analysis
### Conflict Resolution (Conditional)
If conflict_risk was medium/high, modifications have been applied to:
- **guidance-specification.md**: Design decisions updated to resolve conflicts
- **Role analyses (*.md)**: Recommendations adjusted for compatibility
- **context-package.json**: Marked as "resolved" with conflict IDs
- NO separate CONFLICT_RESOLUTION.md file (conflicts resolved in-place)
### MCP Analysis Results (Optional)
**Code Structure**: {mcp_code_index_results}
**External Research**: {mcp_exa_research_results}
## Phase 2: TDD Document Generation Task
**Agent Configuration Reference**: All TDD task generation rules, quantification requirements, Red-Green-Refactor cycle structure, quality standards, and execution details are defined in action-planning-agent.
Refer to: @.claude/agents/action-planning-agent.md for:
- TDD Task Decomposition Standards
- Red-Green-Refactor Cycle Requirements
- Quantification Requirements (MANDATORY)
- 5-Field Task JSON Schema
- IMPL_PLAN.md Structure (TDD variant)
- TODO_LIST.md Format
- TDD Execution Flow & Quality Validation
### TDD-Specific Requirements Summary
#### Task Structure Philosophy
- **1 feature = 1 task** containing complete TDD cycle internally
- Each task executes Red-Green-Refactor phases sequentially
- Task count = Feature count (typically 5 features = 5 tasks)
- Subtasks only when complexity >2500 lines or >6 files per cycle
- **Maximum 10 tasks** (hard limit for TDD workflows)
#### TDD Cycle Mapping
- **Simple features**: IMPL-N with internal Red-Green-Refactor phases
- **Complex features**: IMPL-N (container) + IMPL-N.M (subtasks)
- Each cycle includes: test_count, test_cases array, implementation_scope, expected_coverage
#### Required Outputs Summary
##### 1. TDD Task JSON Files (.task/IMPL-*.json)
- **Location**: `.workflow/active/{session-id}/.task/`
- **Template**: Read from `{template_path}` (pre-selected by command based on `--cli-execute` flag)
- **Schema**: 5-field structure with TDD-specific metadata
- `meta.tdd_workflow`: true (REQUIRED)
- `meta.max_iterations`: 3 (Green phase test-fix cycle limit)
- `meta.use_codex`: false (manual fixes by default)
- `context.tdd_cycles`: Array with quantified test cases and coverage
- `flow_control.implementation_approach`: Exactly 3 steps with `tdd_phase` field
1. Red Phase (`tdd_phase: "red"`): Write failing tests
2. Green Phase (`tdd_phase: "green"`): Implement to pass tests
3. Refactor Phase (`tdd_phase: "refactor"`): Improve code quality
- **Details**: See action-planning-agent.md § TDD Task JSON Generation
##### 2. IMPL_PLAN.md (TDD Variant)
- **Location**: `.workflow/active/{session-id}/IMPL_PLAN.md`
- **Template**: `~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt`
- **TDD-Specific Frontmatter**: workflow_type="tdd", tdd_workflow=true, feature_count, task_breakdown
- **TDD Implementation Tasks Section**: Feature-by-feature with internal Red-Green-Refactor cycles
- **Details**: See action-planning-agent.md § TDD Implementation Plan Creation
##### 3. TODO_LIST.md
- **Location**: `.workflow/active/{session-id}/TODO_LIST.md`
- **Format**: Hierarchical task list with internal TDD phase indicators (Red → Green → Refactor)
- **Status**: ▸ (container), [ ] (pending), [x] (completed)
- **Details**: See action-planning-agent.md § TODO List Generation
### Quantification Requirements (MANDATORY)
**Core Rules**:
1. **Explicit Test Case Counts**: Red phase specifies exact number with enumerated list
2. **Quantified Coverage**: Acceptance includes measurable percentage (e.g., ">=85%")
3. **Detailed Implementation Scope**: Green phase enumerates files, functions, line counts
4. **Enumerated Refactoring Targets**: Refactor phase lists specific improvements with counts
**TDD Phase Formats**:
- **Red Phase**: "Write N test cases: [test1, test2, ...]"
- **Green Phase**: "Implement N functions in file lines X-Y: [func1() X1-Y1, func2() X2-Y2, ...]"
- **Refactor Phase**: "Apply N refactorings: [improvement1 (details), improvement2 (details), ...]"
- **Acceptance**: "All N tests pass with >=X% coverage: verify by [test command]"
**Validation Checklist**:
- [ ] Every Red phase specifies exact test case count with enumerated list
- [ ] Every Green phase enumerates files, functions, and estimated line counts
- [ ] Every Refactor phase lists specific improvements with counts
- [ ] Every acceptance criterion includes measurable coverage percentage
- [ ] tdd_cycles array contains test_count and test_cases for each cycle
- [ ] No vague language ("comprehensive", "complete", "thorough")
### Agent Execution Summary
**Key Steps** (Detailed instructions in action-planning-agent.md):
1. Load task JSON template from provided path
2. Extract and decompose features with TDD cycles
3. Generate TDD task JSON files enforcing quantification requirements
4. Create IMPL_PLAN.md using TDD template variant
5. Generate TODO_LIST.md with TDD phase indicators
6. Update session state with TDD metadata
**Quality Gates** (Full checklist in action-planning-agent.md):
- ✓ Quantification requirements enforced (explicit counts, measurable acceptance, exact targets)
- ✓ Task count ≤10 (hard limit)
- ✓ Each task has meta.tdd_workflow: true
- ✓ Each task has exactly 3 implementation steps with tdd_phase field
- ✓ Green phase includes test-fix cycle logic
- ✓ Artifact references mapped correctly
- ✓ MCP tool integration added
- ✓ Documents follow TDD template structure
## Output
Generate all three documents and report completion status:
- TDD task JSON files created: N files (IMPL-*.json)
- TDD cycles configured: N cycles with quantified test cases
- Artifacts integrated: synthesis-spec, guidance-specification, N role analyses
- Test context integrated: existing patterns and coverage
- MCP enhancements: code-index, exa-research
- Session ready for TDD execution: /workflow:execute
`
)
```
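Once the agent returns, the command can spot-check the quality gates above. A minimal sketch, assuming `jq` and a completed session; `WFS-auth` and the paths are illustrative:
```bash
#!/usr/bin/env bash
SESSION_DIR=".workflow/active/WFS-auth"   # illustrative session id

# The agent must have produced both planning documents.
for doc in IMPL_PLAN.md TODO_LIST.md; do
  [ -s "$SESSION_DIR/$doc" ] || echo "missing or empty: $doc"
done

# Enforce the 10-task hard limit on generated task JSON files.
TASK_COUNT=$(find "$SESSION_DIR/.task" -name 'IMPL-*.json' 2>/dev/null | wc -l)
echo "generated tasks: $TASK_COUNT"
[ "$TASK_COUNT" -le 10 ] || echo "task limit exceeded (>10)"

# Every task should carry the TDD marker.
for t in "$SESSION_DIR"/.task/IMPL-*.json; do
  [ -e "$t" ] || continue
  jq -e '.meta.tdd_workflow == true' "$t" >/dev/null || echo "not a TDD task: $t"
done
```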
### Agent Context Passing
**Memory-Aware Context Assembly**:
```javascript
// Assemble context package for agent
const agentContext = {
  session_id: "WFS-[id]",
  workflow_type: "tdd",
  // Use memory if available, else load
  session_metadata: memory.has("workflow-session.json")
    ? memory.get("workflow-session.json")
    : Read(.workflow/active/WFS-[id]/workflow-session.json),
  context_package_path: ".workflow/active/WFS-[id]/.process/context-package.json",
  context_package: memory.has("context-package.json")
    ? memory.get("context-package.json")
    : Read(".workflow/active/WFS-[id]/.process/context-package.json"),
  test_context_package_path: ".workflow/active/WFS-[id]/.process/test-context-package.json",
  test_context_package: memory.has("test-context-package.json")
    ? memory.get("test-context-package.json")
    : Read(".workflow/active/WFS-[id]/.process/test-context-package.json"),
  // Extract brainstorm artifacts from context package
  brainstorm_artifacts: extractBrainstormArtifacts(context_package),
  // Load role analyses using paths from context package
  role_analyses: brainstorm_artifacts.role_analyses
    .flatMap(role => role.files)
    .map(file => Read(file.path)),
  // Load conflict resolution if exists (from context package)
  conflict_resolution: brainstorm_artifacts.conflict_resolution?.exists
    ? Read(brainstorm_artifacts.conflict_resolution.path)
    : null,
  // Optional MCP enhancements
  mcp_analysis: executeMcpDiscovery()
}
```
### Phase 4: TODO_LIST.md Generation
Generate task list with internal TDD phase indicators:
**For Simple Features (1 task per feature)**:
```markdown
## TDD Implementation Tasks
### Feature 1: {Feature Name}
- [ ] **IMPL-1**: Implement {feature} with TDD → [Task](./.task/IMPL-1.json)
  - Internal phases: Red → Green → Refactor
  - Dependencies: None
### Feature 2: {Feature Name}
- [ ] **IMPL-2**: Implement {feature} with TDD → [Task](./.task/IMPL-2.json)
  - Internal phases: Red → Green → Refactor
  - Dependencies: IMPL-1
```
**For Complex Features (with subtasks)**:
```markdown
### Feature 3: {Complex Feature Name}
**IMPL-3**: Implement {complex feature} with TDD → [Task](./.task/IMPL-3.json)
- [ ] **IMPL-3.1**: {Sub-feature A} with TDD → [Task](./.task/IMPL-3.1.json)
  - Internal phases: Red → Green → Refactor
- [ ] **IMPL-3.2**: {Sub-feature B} with TDD → [Task](./.task/IMPL-3.2.json)
  - Internal phases: Red → Green → Refactor
  - Dependencies: IMPL-3.1
```
**Status Legend**:
```markdown
## Status Legend
- ▸ = Container task (has subtasks)
- [ ] = Pending task
- [x] = Completed task
- Red = Write failing tests
- Green = Implement to pass tests (with test-fix cycle)
- Refactor = Improve code quality
```
### Phase 5: Session State Update
Update workflow-session.json with TDD metadata:
```json
{
  "workflow_type": "tdd",
  "feature_count": 5,
  "task_count": 5,
  "task_breakdown": {
    "simple_features": 4,
    "complex_features": 1,
    "total_subtasks": 2
  },
  "tdd_workflow": true,
  "task_limit_compliance": true
}
```
**Task Count Calculation**:
- **Simple features**: 1 task each (IMPL-N with internal TDD cycle)
- **Complex features**: 1 container + M subtasks (IMPL-N + IMPL-N.M)
- **Total**: Simple feature count + Complex feature subtask count
- **Example**: 4 simple + 1 complex (with 2 subtasks) = 6 total tasks (not 15)
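The worked example above maps directly to a small calculation. A sketch with the same illustrative numbers:
```bash
#!/usr/bin/env bash
# Worked example of the task count formula (illustrative numbers).
SIMPLE_FEATURES=4      # IMPL-1 .. IMPL-4
COMPLEX_SUBTASKS=2     # IMPL-5.1, IMPL-5.2 (IMPL-5 itself is only a container)
TOTAL=$((SIMPLE_FEATURES + COMPLEX_SUBTASKS))
echo "total executable tasks: $TOTAL"            # prints 6
[ "$TOTAL" -le 10 ] && echo "within the 10-task limit"
```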
## TDD Task Structure Reference
This section provides quick reference for TDD task JSON structure. For complete implementation details, see the agent invocation prompt in Phase 2 above.
**Quick Reference**:
- Each TDD task contains complete Red-Green-Refactor cycle
- Task ID format: `IMPL-N` (simple) or `IMPL-N.M` (complex subtasks)
- Required metadata: `meta.tdd_workflow: true`, `meta.max_iterations: 3`
- Flow control: Exactly 3 steps with `tdd_phase` field (red, green, refactor)
- Context: `tdd_cycles` array with quantified test cases and coverage
- See Phase 2 agent prompt for full schema and requirements
## Output Files Structure
```
.workflow/{session-id}/
.workflow/active/{session-id}/
├── IMPL_PLAN.md # Unified plan with TDD Implementation Tasks section
├── TODO_LIST.md # Progress tracking with internal TDD phase indicators
├── .task/
@@ -465,52 +444,30 @@ Update workflow-session.json with TDD metadata:
## Integration & Usage
### Command Chain
- **Called By**: `/workflow:tdd-plan` (Phase 4)
- **Calls**: Gemini CLI for TDD breakdown
- **Followed By**: `/workflow:execute`, `/workflow:tdd-verify`
**Command Chain**:
- Called by: `/workflow:tdd-plan` (Phase 4)
- Invokes: `action-planning-agent` for autonomous task generation
- Followed by: `/workflow:execute`, `/workflow:tdd-verify`
### Basic Usage
**Basic Usage**:
```bash
# Manual mode (default)
# Agent mode (default, autonomous execution)
/workflow:tools:task-generate-tdd --session WFS-auth
# Agent mode (autonomous task generation)
/workflow:tools:task-generate-tdd --session WFS-auth --agent
# CLI tool mode (use Gemini/Qwen for generation)
/workflow:tools:task-generate-tdd --session WFS-auth --cli-execute
```
### Expected Output
```
TDD task generation complete for session: WFS-auth
Features analyzed: 5
Total tasks: 5 (1 task per feature with internal TDD cycles)
Task breakdown:
- Simple features: 4 tasks (IMPL-1 to IMPL-4)
- Complex features: 1 task with 2 subtasks (IMPL-5, IMPL-5.1, IMPL-5.2)
- Total task count: 6 (within 10-task limit)
Structure:
- IMPL-1: User Authentication (Internal: Red → Green → Refactor)
- IMPL-2: Password Reset (Internal: Red → Green → Refactor)
- IMPL-3: Email Verification (Internal: Red → Green → Refactor)
- IMPL-4: Role Management (Internal: Red → Green → Refactor)
- IMPL-5: Payment System (Container)
  - IMPL-5.1: Gateway Integration (Internal: Red → Green → Refactor)
  - IMPL-5.2: Transaction Management (Internal: Red → Green → Refactor)
Plans generated:
- Unified Plan: .workflow/WFS-auth/IMPL_PLAN.md (includes TDD Implementation Tasks section)
- Task List: .workflow/WFS-auth/TODO_LIST.md (with internal TDD phase indicators)
TDD Configuration:
- Each task contains complete Red-Green-Refactor cycle
- Green phase includes test-fix cycle (max 3 iterations)
- Auto-revert on max iterations reached
Next: /workflow:action-plan-verify --session WFS-auth (recommended) or /workflow:execute --session WFS-auth
```
**Execution Modes**:
- **Agent mode** (default): Uses `action-planning-agent` with agent-mode task template
- **CLI mode** (`--cli-execute`): Uses Gemini/Qwen with cli-mode task template
**Output**:
- TDD task JSON files in `.task/` directory (IMPL-N.json format)
- IMPL_PLAN.md with TDD Implementation Tasks section
- TODO_LIST.md with internal TDD phase indicators
- Session state updated with task count and TDD metadata
- MCP enhancements integrated (if available)
## Test Coverage Analysis Integration
@@ -537,19 +494,9 @@ IMPL (Green phase) tasks include automatic test-fix cycle:
- **Repeat**: Up to max_iterations (default: 3)
5. **Safety Net**: Auto-revert all changes if max iterations reached
**Key Benefits**:
- Faster feedback loop within Green phase
- Autonomous recovery from initial implementation errors
- Systematic debugging with Gemini's bug-fix template
- Safe rollback prevents broken TDD state
## Configuration Options
- **meta.max_iterations**: Number of fix attempts (default: 3 for TDD, 5 for test-gen)
- **meta.use_codex**: Enable Codex automated fixes (default: false, manual)
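A minimal sketch of the Green-phase loop described above, assuming `npm test` as the test command and git as the rollback safety net; the diagnosis step is only a placeholder, and in practice the test command is detected from the project:
```bash
#!/usr/bin/env bash
MAX_ITERATIONS=3        # meta.max_iterations for TDD tasks
TEST_CMD="npm test"     # assumed test command; detect from the project in practice

if $TEST_CMD; then echo "all tests pass"; exit 0; fi

git add -A              # baseline so a failed cycle can be reverted cleanly
for i in $(seq 1 "$MAX_ITERATIONS"); do
  echo "iteration $i: diagnose failure (Gemini bug-fix template) and apply a surgical fix"
  # manual fix by default; Codex applies the fix only when meta.use_codex is true
  if $TEST_CMD; then
    echo "all tests pass after $i fix iteration(s)"
    exit 0
  fi
done

git reset --hard HEAD   # safety net: revert all changes after max iterations
echo "max iterations reached - changes reverted, task marked blocked"
exit 1
```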
## Related Commands
- `/workflow:tdd-plan` - Orchestrates TDD workflow planning (6 phases)
- `/workflow:tools:test-context-gather` - Analyzes test coverage
- `/workflow:execute` - Executes TDD tasks in order
- `/workflow:tdd-verify` - Verifies TDD compliance
- `/workflow:test-gen` - Post-implementation test generation

View File

@@ -21,7 +21,7 @@ Analyze test coverage and verify Red-Green-Refactor cycle execution for TDD work
### Phase 1: Extract Test Tasks
```bash
find .workflow/{session_id}/.task/ -name 'TEST-*.json' -exec jq -r '.context.focus_paths[]' {} \;
find .workflow/active/{session_id}/.task/ -name 'TEST-*.json' -exec jq -r '.context.focus_paths[]' {} \;
```
**Output**: List of test directories/files from all TEST tasks
@@ -29,20 +29,20 @@ find .workflow/{session_id}/.task/ -name 'TEST-*.json' -exec jq -r '.context.foc
### Phase 2: Run Test Suite
```bash
# Node.js/JavaScript
npm test -- --coverage --json > .workflow/{session_id}/.process/test-results.json
npm test -- --coverage --json > .workflow/active/{session_id}/.process/test-results.json
# Python
pytest --cov --json-report > .workflow/{session_id}/.process/test-results.json
pytest --cov --json-report > .workflow/active/{session_id}/.process/test-results.json
# Other frameworks (detect from project)
[test_command] --coverage --json-output .workflow/{session_id}/.process/test-results.json
[test_command] --coverage --json-output .workflow/active/{session_id}/.process/test-results.json
```
**Output**: test-results.json with coverage data
### Phase 3: Parse Coverage Data
```bash
jq '.coverage' .workflow/{session_id}/.process/test-results.json > .workflow/{session_id}/.process/coverage-report.json
jq '.coverage' .workflow/active/{session_id}/.process/test-results.json > .workflow/active/{session_id}/.process/coverage-report.json
```
**Extract**:
@@ -58,7 +58,7 @@ For each TDD chain (TEST-N.M -> IMPL-N.M -> REFACTOR-N.M):
**1. Red Phase Verification**
```bash
# Check TEST task summary
cat .workflow/{session_id}/.summaries/TEST-N.M-summary.md
cat .workflow/active/{session_id}/.summaries/TEST-N.M-summary.md
```
Verify:
@@ -69,7 +69,7 @@ Verify:
**2. Green Phase Verification**
```bash
# Check IMPL task summary
cat .workflow/{session_id}/.summaries/IMPL-N.M-summary.md
cat .workflow/active/{session_id}/.summaries/IMPL-N.M-summary.md
```
Verify:
@@ -80,7 +80,7 @@ Verify:
**3. Refactor Phase Verification**
```bash
# Check REFACTOR task summary
cat .workflow/{session_id}/.summaries/REFACTOR-N.M-summary.md
cat .workflow/active/{session_id}/.summaries/REFACTOR-N.M-summary.md
```
Verify:
@@ -90,7 +90,7 @@ Verify:
### Phase 5: Generate Analysis Report
Create `.workflow/{session_id}/.process/tdd-cycle-report.md`:
Create `.workflow/active/{session_id}/.process/tdd-cycle-report.md`:
```markdown
# TDD Cycle Analysis - {Session ID}
@@ -146,7 +146,7 @@ Create `.workflow/{session_id}/.process/tdd-cycle-report.md`:
## Output Files
```
.workflow/{session-id}/
.workflow/active/{session-id}/
└── .process/
├── test-results.json # Raw test execution results
├── coverage-report.json # Parsed coverage data
@@ -272,10 +272,6 @@ Function Coverage: 91%
Overall Compliance: 93/100
Detailed report: .workflow/WFS-auth/.process/tdd-cycle-report.md
Detailed report: .workflow/active/WFS-auth/.process/tdd-cycle-report.md
```
## Related Commands
- `/workflow:tdd-verify` - Uses this tool for verification
- `/workflow:tools:task-generate-tdd` - Generates tasks this tool analyzes
- `/workflow:execute` - Executes tasks before analysis

View File

@@ -1,15 +1,15 @@
---
name: test-concept-enhanced
description: Analyze test requirements and generate test generation strategy using Gemini with test-context package
description: Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini
argument-hint: "--session WFS-test-session-id --context path/to/test-context-package.json"
examples:
- /workflow:tools:test-concept-enhanced --session WFS-test-auth --context .workflow/WFS-test-auth/.process/test-context-package.json
- /workflow:tools:test-concept-enhanced --session WFS-test-auth --context .workflow/active/WFS-test-auth/.process/test-context-package.json
---
# Test Concept Enhanced Command
## Overview
Specialized analysis tool for test generation workflows that uses Gemini to analyze test coverage gaps, implementation context, and generate comprehensive test generation strategies.
Workflow coordinator that delegates test analysis to cli-execution-agent. Agent executes Gemini to analyze test coverage gaps, implementation context, and generate comprehensive test generation strategies.
## Core Philosophy
- **Coverage-Driven**: Focus on identified test gaps from context analysis
@@ -19,18 +19,19 @@ Specialized analysis tool for test generation workflows that uses Gemini to anal
- **No Code Generation**: Strategy and planning only, actual test generation happens in task execution
## Core Responsibilities
- Parse test-context-package.json from test-context-gather
- Analyze implementation summaries and coverage gaps
- Study existing test patterns and conventions
- Generate test generation strategy using Gemini
- Produce TEST_ANALYSIS_RESULTS.md for task generation
- Coordinate test analysis workflow using cli-execution-agent
- Validate test-context-package.json prerequisites
- Execute Gemini analysis via agent for test strategy generation
- Validate agent outputs (gemini-test-analysis.md, TEST_ANALYSIS_RESULTS.md)
## Execution Lifecycle
### Phase 1: Validation & Preparation
### Phase 1: Context Preparation (Command Responsibility)
**Command prepares session context and validates prerequisites.**
1. **Session Validation**
- Load `.workflow/{test_session_id}/workflow-session.json`
- Load `.workflow/active/{test_session_id}/workflow-session.json`
- Verify test session type is "test-gen"
- Extract source session reference
@@ -40,428 +41,100 @@ Specialized analysis tool for test generation workflows that uses Gemini to anal
- Extract coverage gaps and framework details
3. **Strategy Determination**
- **Simple Test Generation** (1-3 files): Single Gemini analysis
- **Medium Test Generation** (4-6 files): Gemini comprehensive analysis
- **Complex Test Generation** (>6 files): Gemini analysis with modular approach
- **Simple** (1-3 files): Single Gemini analysis
- **Medium** (4-6 files): Comprehensive analysis
- **Complex** (>6 files): Modular analysis approach
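The strategy tiers above can be derived from the context package with a small check. A sketch, assuming `jq` and the `test_coverage.missing_tests[]` field referenced later in this command; the session path is illustrative:
```bash
#!/usr/bin/env bash
PKG=".workflow/active/WFS-test-auth/.process/test-context-package.json"  # illustrative session

MISSING=$(jq '.test_coverage.missing_tests | length' "$PKG")
if   [ "$MISSING" -le 3 ]; then STRATEGY="simple"
elif [ "$MISSING" -le 6 ]; then STRATEGY="medium"
else                            STRATEGY="complex"
fi
echo "files without tests: $MISSING -> $STRATEGY analysis"
```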
### Phase 2: Gemini Test Analysis
### Phase 2: Test Analysis Execution (Agent Responsibility)
**Tool Configuration**:
```bash
cd .workflow/{test_session_id}/.process && gemini -p "
PURPOSE: Analyze test coverage gaps and design comprehensive test generation strategy
TASK: Study implementation context, existing tests, and generate test requirements for missing coverage
MODE: analysis
CONTEXT: @{.workflow/{test_session_id}/.process/test-context-package.json}
**Purpose**: Analyze test coverage gaps and generate comprehensive test strategy.
**MANDATORY FIRST STEP**: Read and analyze test-context-package.json to understand:
- Test coverage gaps from test_coverage.missing_tests[]
- Implementation context from source_context.implementation_summaries[]
- Existing test patterns from test_framework.conventions
- Changed files requiring tests from source_context.implementation_summaries[].changed_files
**Agent Invocation**:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Analyze test coverage gaps and generate test strategy",
prompt=`
## TASK OBJECTIVE
Analyze test requirements and generate comprehensive test generation strategy using Gemini CLI
**ANALYSIS REQUIREMENTS**:
## EXECUTION CONTEXT
Session: {test_session_id}
Source Session: {source_session_id}
Working Dir: .workflow/active/{test_session_id}/.process
Template: ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt
1. **Implementation Understanding**
- Load all implementation summaries from source session
- Understand implemented features, APIs, and business logic
- Extract key functions, classes, and modules
- Identify integration points and dependencies
## EXECUTION STEPS
1. Execute Gemini analysis:
cd .workflow/active/{test_session_id}/.process && gemini -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --approval-mode yolo
2. **Existing Test Pattern Analysis**
- Study existing test files for patterns and conventions
- Identify test structure (describe/it, test suites, fixtures)
- Analyze assertion patterns and mocking strategies
- Extract test setup/teardown patterns
2. Generate TEST_ANALYSIS_RESULTS.md:
Synthesize gemini-test-analysis.md into standardized format for task generation
Include: coverage assessment, test framework, test requirements, generation strategy, implementation targets
3. **Coverage Gap Assessment**
- For each file in missing_tests[], analyze:
- File purpose and functionality
- Public APIs requiring test coverage
- Critical paths and edge cases
- Integration points requiring tests
- Prioritize tests: high (core logic), medium (utilities), low (helpers)
## EXPECTED OUTPUTS
1. gemini-test-analysis.md - Raw Gemini analysis
2. TEST_ANALYSIS_RESULTS.md - Standardized test requirements document
4. **Test Requirements Specification**
- For each missing test file, specify:
- **Test scope**: What needs to be tested
- **Test scenarios**: Happy path, error cases, edge cases, integration
- **Test data**: Required fixtures, mocks, test data
- **Dependencies**: External services, databases, APIs to mock
- **Coverage targets**: Functions/methods requiring tests
5. **Test Generation Strategy**
- Determine test generation approach for each file
- Identify reusable test patterns from existing tests
- Plan test data and fixture requirements
- Define mocking strategy for dependencies
- Specify expected test file structure
EXPECTED OUTPUT - Write to gemini-test-analysis.md:
# Test Generation Analysis
## 1. Implementation Context Summary
- **Source Session**: {source_session_id}
- **Implemented Features**: {feature_summary}
- **Changed Files**: {list_of_implementation_files}
- **Tech Stack**: {technologies_used}
## 2. Test Coverage Assessment
- **Existing Tests**: {count} files
- **Missing Tests**: {count} files
- **Coverage Percentage**: {percentage}%
- **Priority Breakdown**:
- High Priority: {count} files (core business logic)
- Medium Priority: {count} files (utilities, helpers)
- Low Priority: {count} files (configuration, constants)
## 3. Existing Test Pattern Analysis
- **Test Framework**: {framework_name_and_version}
- **File Naming Convention**: {pattern}
- **Test Structure**: {describe_it_or_other}
- **Assertion Style**: {expect_assert_should}
- **Mocking Strategy**: {mocking_framework_and_patterns}
- **Setup/Teardown**: {beforeEach_afterEach_patterns}
- **Test Data**: {fixtures_factories_builders}
## 4. Test Requirements by File
### File: {implementation_file_path}
**Test File**: {suggested_test_file_path}
**Priority**: {high|medium|low}
#### Scope
- {description_of_what_needs_testing}
#### Test Scenarios
1. **Happy Path Tests**
- {scenario_1}
- {scenario_2}
2. **Error Handling Tests**
- {error_scenario_1}
- {error_scenario_2}
3. **Edge Case Tests**
- {edge_case_1}
- {edge_case_2}
4. **Integration Tests** (if applicable)
- {integration_scenario_1}
- {integration_scenario_2}
#### Test Data & Fixtures
- {required_test_data}
- {required_mocks}
- {required_fixtures}
#### Dependencies to Mock
- {external_service_1}
- {external_service_2}
#### Coverage Targets
- Function: {function_name} - {test_requirements}
- Function: {function_name} - {test_requirements}
---
[Repeat for each missing test file]
---
## 5. Test Generation Strategy
### Overall Approach
- {strategy_description}
### Test Generation Order
1. {file_1} - {rationale}
2. {file_2} - {rationale}
3. {file_3} - {rationale}
### Reusable Patterns
- {pattern_1_from_existing_tests}
- {pattern_2_from_existing_tests}
### Test Data Strategy
- {approach_to_test_data_and_fixtures}
### Mocking Strategy
- {approach_to_mocking_dependencies}
### Quality Criteria
- Code coverage target: {percentage}%
- Test scenarios per function: {count}
- Integration test coverage: {approach}
## 6. Implementation Targets
**Purpose**: Identify new test files to create
**Format**: New test files only (no existing files to modify)
**Test Files to Create**:
1. **Target**: `tests/auth/TokenValidator.test.ts`
- **Type**: Create new test file
- **Purpose**: Test TokenValidator class
- **Scenarios**: 15 test cases covering validation logic, error handling, edge cases
- **Dependencies**: Mock JWT library, test fixtures for tokens
2. **Target**: `tests/middleware/errorHandler.test.ts`
- **Type**: Create new test file
- **Purpose**: Test error handling middleware
- **Scenarios**: 8 test cases for different error types and response formats
- **Dependencies**: Mock Express req/res/next, error fixtures
[List all test files to create]
## 7. Success Metrics
- **Test Coverage Goal**: {target_percentage}%
- **Test Quality**: All scenarios covered (happy, error, edge, integration)
- **Convention Compliance**: Follow existing test patterns
- **Maintainability**: Clear test descriptions, reusable fixtures
RULES:
- Focus on TEST REQUIREMENTS and GENERATION STRATEGY, NOT code generation
- Study existing test patterns thoroughly for consistency
- Prioritize critical business logic tests
- Specify clear test scenarios and coverage targets
- Identify all dependencies requiring mocks
- **MUST write output to .workflow/{test_session_id}/.process/gemini-test-analysis.md**
- Do NOT generate actual test code or implementation
- Output ONLY test analysis and generation strategy
" --approval-mode yolo
## QUALITY VALIDATION
- Both output files exist and are complete
- All required sections present in TEST_ANALYSIS_RESULTS.md
- Test requirements are actionable and quantified
- Test scenarios cover happy path, errors, edge cases
- Dependencies and mocks clearly identified
`
)
```
**Output Location**: `.workflow/{test_session_id}/.process/gemini-test-analysis.md`
**Output Files**:
- `.workflow/active/{test_session_id}/.process/gemini-test-analysis.md`
- `.workflow/active/{test_session_id}/.process/TEST_ANALYSIS_RESULTS.md`
### Phase 3: Results Synthesis
### Phase 3: Output Validation (Command Responsibility)
1. **Output Validation**
- Verify `gemini-test-analysis.md` exists and is complete
- Validate all required sections present
- Check test requirements are actionable
**Command validates agent outputs.**
2. **Quality Assessment**
- Test scenarios cover happy path, errors, edge cases
- Dependencies and mocks clearly identified
- Test generation strategy is practical
- Coverage targets are reasonable
### Phase 4: TEST_ANALYSIS_RESULTS.md Generation
Synthesize Gemini analysis into standardized format:
```markdown
# Test Generation Analysis Results
## Executive Summary
- **Test Session**: {test_session_id}
- **Source Session**: {source_session_id}
- **Analysis Timestamp**: {timestamp}
- **Coverage Gap**: {missing_test_count} files require tests
- **Test Framework**: {framework}
- **Overall Strategy**: {high_level_approach}
---
## 1. Coverage Assessment
### Current Coverage
- **Existing Tests**: {count} files
- **Implementation Files**: {count} files
- **Coverage Percentage**: {percentage}%
### Missing Tests (Priority Order)
1. **High Priority** ({count} files)
- {file_1} - {reason}
- {file_2} - {reason}
2. **Medium Priority** ({count} files)
- {file_1} - {reason}
3. **Low Priority** ({count} files)
- {file_1} - {reason}
---
## 2. Test Framework & Conventions
### Framework Configuration
- **Framework**: {framework_name}
- **Version**: {version}
- **Test Pattern**: {file_pattern}
- **Test Directory**: {directory_structure}
### Conventions
- **File Naming**: {convention}
- **Test Structure**: {describe_it_blocks}
- **Assertions**: {assertion_library}
- **Mocking**: {mocking_framework}
- **Setup/Teardown**: {beforeEach_afterEach}
### Example Pattern (from existing tests)
```
{example_test_structure_from_analysis}
```
---
## 3. Test Requirements by File
[For each missing test, include:]
### Test File: {test_file_path}
**Implementation**: {implementation_file}
**Priority**: {high|medium|low}
**Estimated Test Count**: {count}
#### Test Scenarios
1. **Happy Path**: {scenarios}
2. **Error Handling**: {scenarios}
3. **Edge Cases**: {scenarios}
4. **Integration**: {scenarios}
#### Dependencies & Mocks
- {dependency_1_to_mock}
- {dependency_2_to_mock}
#### Test Data Requirements
- {fixture_1}
- {fixture_2}
---
## 4. Test Generation Strategy
### Generation Approach
{overall_strategy_description}
### Generation Order
1. {test_file_1} - {rationale}
2. {test_file_2} - {rationale}
3. {test_file_3} - {rationale}
### Reusable Components
- **Test Fixtures**: {common_fixtures}
- **Mock Patterns**: {common_mocks}
- **Helper Functions**: {test_helpers}
### Quality Targets
- **Coverage Goal**: {percentage}%
- **Scenarios per Function**: {min_count}
- **Integration Coverage**: {approach}
---
## 5. Implementation Targets
**Purpose**: New test files to create (code-developer will generate these)
**Test Files to Create**:
1. **Target**: `tests/auth/TokenValidator.test.ts`
- **Implementation Source**: `src/auth/TokenValidator.ts`
- **Test Scenarios**: 15 (validation, error handling, edge cases)
- **Dependencies**: Mock JWT library, token fixtures
- **Priority**: High
2. **Target**: `tests/middleware/errorHandler.test.ts`
- **Implementation Source**: `src/middleware/errorHandler.ts`
- **Test Scenarios**: 8 (error types, response formats)
- **Dependencies**: Mock Express, error fixtures
- **Priority**: High
[List all test files with full specifications]
---
## 6. Success Criteria
### Coverage Metrics
- Achieve {target_percentage}% code coverage
- All public APIs have tests
- Critical paths fully covered
### Quality Standards
- All test scenarios covered (happy, error, edge, integration)
- Follow existing test conventions
- Clear test descriptions and assertions
- Maintainable test structure
### Validation Approach
- Run full test suite after generation
- Verify coverage with coverage tool
- Manual review of test quality
- Integration test validation
---
## 7. Reference Information
### Source Context
- **Implementation Summaries**: {paths}
- **Existing Tests**: {example_tests}
- **Documentation**: {relevant_docs}
### Analysis Tools
- **Gemini Analysis**: gemini-test-analysis.md
- **Coverage Tools**: {coverage_tool_if_detected}
```
**Output Location**: `.workflow/{test_session_id}/.process/TEST_ANALYSIS_RESULTS.md`
- Verify `gemini-test-analysis.md` exists and is complete
- Validate `TEST_ANALYSIS_RESULTS.md` generated by agent
- Check required sections present
- Confirm test requirements are actionable
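A minimal validation sketch for this step, assuming the session path below and the section names used by TEST_ANALYSIS_RESULTS.md elsewhere in this document:
```bash
#!/usr/bin/env bash
PROC=".workflow/active/WFS-test-auth/.process"   # illustrative session

for f in gemini-test-analysis.md TEST_ANALYSIS_RESULTS.md; do
  [ -s "$PROC/$f" ] || echo "missing or empty: $f"
done

# Spot-check that the key sections of TEST_ANALYSIS_RESULTS.md are present.
for section in "Coverage Assessment" "Test Requirements" "Test Generation Strategy" "Implementation Targets"; do
  grep -q "$section" "$PROC/TEST_ANALYSIS_RESULTS.md" || echo "section not found: $section"
done
```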
## Error Handling
### Validation Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Missing context package | test-context-gather not run | Run test-context-gather first |
| No coverage gaps | All files have tests | Skip test generation, proceed to test execution |
| No test framework detected | Missing test dependencies | Request user to configure test framework |
| Invalid source session | Source session incomplete | Complete implementation first |
| Error | Resolution |
|-------|------------|
| Missing context package | Run test-context-gather first |
| No coverage gaps | Skip test generation, proceed to execution |
| No test framework detected | Configure test framework |
| Invalid source session | Complete implementation first |
### Gemini Execution Errors
| Error | Cause | Recovery |
|-------|-------|----------|
| Timeout | Large project analysis | Reduce scope, analyze by module |
| Output incomplete | Token limit exceeded | Retry with focused analysis |
| No output file | Write permission error | Check directory permissions |
### Execution Errors
| Error | Recovery |
|-------|----------|
| Gemini timeout | Reduce scope, analyze by module |
| Output incomplete | Retry with focused analysis |
| No output file | Check directory permissions |
### Fallback Strategy
- If Gemini fails, generate basic TEST_ANALYSIS_RESULTS.md from context package
- Use coverage gaps and framework info to create minimal requirements
- Provide guidance for manual test planning
**Fallback Strategy**: Generate basic TEST_ANALYSIS_RESULTS.md from context package if Gemini fails
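One possible shape for that fallback, sketched here with hypothetical field names (`test_coverage.missing_tests` and `test_framework.framework`) and an illustrative session path; the real context-package schema should be consulted:
```bash
#!/usr/bin/env bash
PKG=".workflow/active/WFS-test-auth/.process/test-context-package.json"   # illustrative session
OUT=".workflow/active/WFS-test-auth/.process/TEST_ANALYSIS_RESULTS.md"

{
  echo "# Test Generation Analysis Results (fallback)"
  echo "## 1. Coverage Assessment"
  echo "### Missing Tests"
  jq -r '.test_coverage.missing_tests[]? | "- \(.)"' "$PKG"          # hypothetical field
  echo "## 2. Test Framework & Conventions"
  jq -r '"- Framework: \(.test_framework.framework // "unknown")"' "$PKG"  # hypothetical field
} > "$OUT"
echo "fallback analysis written to $OUT"
```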
## Performance Optimization
## Integration & Usage
- **Focused Analysis**: Only analyze files with missing tests
- **Pattern Reuse**: Study existing tests for quick pattern extraction
- **Parallel Operations**: Load implementation summaries in parallel
- **Timeout Management**: 20-minute limit for Gemini analysis
### Command Chain
- **Called By**: `/workflow:test-gen` (Phase 4: Analysis)
- **Requires**: `test-context-package.json` from `/workflow:tools:test-context-gather`
- **Followed By**: `/workflow:tools:test-task-generate`
## Integration
### Performance
- Focused analysis: Only analyze files with missing tests
- Pattern reuse: Study existing tests for quick extraction
- Timeout: 20-minute limit for analysis
### Called By
- `/workflow:test-gen` (Phase 4: Analysis)
### Success Criteria
- Valid TEST_ANALYSIS_RESULTS.md generated
- All missing tests documented with actionable requirements
- Test scenarios cover happy path, errors, edge cases, integration
- Dependencies and mocks clearly identified
- Test generation strategy is practical
- Output follows existing test conventions
### Requires
- `/workflow:tools:test-context-gather` output (test-context-package.json)
### Followed By
- `/workflow:tools:test-task-generate` - Generates test task JSON with code-developer invocation
## Success Criteria
- ✅ Valid TEST_ANALYSIS_RESULTS.md generated
- ✅ All missing tests documented with requirements
- ✅ Test scenarios cover happy path, errors, edge cases
- ✅ Dependencies and mocks identified
- ✅ Test generation strategy is actionable
- ✅ Execution time < 20 minutes
- ✅ Output follows existing test conventions
## Related Commands
- `/workflow:tools:test-context-gather` - Provides input context
- `/workflow:tools:test-task-generate` - Consumes analysis results
- `/workflow:test-gen` - Main test generation workflow

View File

@@ -22,7 +22,7 @@ Orchestrator command that invokes `test-context-search-agent` to gather comprehe
- **Detection-First**: Check for existing test-context-package before executing
- **Coverage-First**: Analyze existing test coverage before planning new tests
- **Source Context Loading**: Import implementation summaries from source session
- **Standardized Output**: Generate `.workflow/{test_session_id}/.process/test-context-package.json`
- **Standardized Output**: Generate `.workflow/active/{test_session_id}/.process/test-context-package.json`
## Execution Flow
@@ -164,7 +164,7 @@ Refer to `test-context-search-agent.md` Phase 3.2 for complete `test-context-pac
## Success Criteria
- ✅ Valid test-context-package.json generated in `.workflow/{test_session_id}/.process/`
- ✅ Valid test-context-package.json generated in `.workflow/active/{test_session_id}/.process/`
- ✅ Source session context loaded successfully
- ✅ Test coverage gaps identified (>90% accuracy)
- ✅ Test framework detected and documented
@@ -203,8 +203,3 @@ Refer to `test-context-search-agent.md` Phase 3.2 for complete `test-context-pac
- **Framework agnostic**: Supports Jest, Mocha, pytest, RSpec, Go testing, etc.
- **Coverage focus**: Primary goal is identifying implementation files without tests
## Related Commands
- `/workflow:test-gen` - Main test generation workflow
- `/workflow:tools:test-concept-enhanced` - Test generation analysis
- `/workflow:tools:test-task-generate` - Test task JSON generation

View File

@@ -1,695 +1,224 @@
---
name: test-task-generate
description: Generate test-fix task JSON with iterative test-fix-retest cycle specification using Gemini/Qwen/Codex
description: Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests
argument-hint: "[--use-codex] [--cli-execute] --session WFS-test-session-id"
examples:
- /workflow:tools:test-task-generate --session WFS-test-auth
- /workflow:tools:test-task-generate --use-codex --session WFS-test-auth
- /workflow:tools:test-task-generate --cli-execute --session WFS-test-auth
- /workflow:tools:test-task-generate --cli-execute --use-codex --session WFS-test-auth
---
# Test Task Generation Command
# Generate Test Planning Documents Command
## Overview
Generate specialized test-fix task JSON with comprehensive test-fix-retest cycle specification, including Gemini diagnosis (using bug-fix template) and manual fix workflow (Codex automation only when explicitly requested).
## Execution Modes
### Test Generation (IMPL-001)
- **Agent Mode (Default)**: @code-developer generates tests within agent context
- **CLI Execute Mode (`--cli-execute`)**: Use Codex CLI for autonomous test generation
### Test Fix (IMPL-002)
- **Manual Mode (Default)**: Gemini diagnosis → user applies fixes
- **Codex Mode (`--use-codex`)**: Gemini diagnosis → Codex applies fixes with resume mechanism
Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using the action-planning-agent. This command produces **test planning artifacts only** - it does NOT execute tests or implement code. Actual test execution requires a separate execution command (e.g., /workflow:test-cycle-execute).
## Core Philosophy
- **Analysis-Driven Test Generation**: Use TEST_ANALYSIS_RESULTS.md from test-concept-enhanced
- **Agent-Based Test Creation**: Call @code-developer agent for comprehensive test generation
- **Coverage-First**: Generate all missing tests before execution
- **Test Execution**: Execute complete test suite after generation
- **Gemini Diagnosis**: Use Gemini for root cause analysis and fix suggestions (references bug-fix template)
- **Manual Fixes First**: Apply fixes manually by default, codex only when explicitly needed
- **Iterative Refinement**: Repeat test-analyze-fix-retest cycle until all tests pass
- **Surgical Fixes**: Minimal code changes, no refactoring during test fixes
- **Auto-Revert**: Rollback all changes if max iterations reached
- **Planning Only**: Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - does NOT execute tests
- **Agent-Driven Document Generation**: Delegate test plan generation to action-planning-agent
- **Two-Phase Flow**: Context Preparation (command) → Test Document Generation (agent)
- **Memory-First**: Reuse loaded documents from conversation memory
- **MCP-Enhanced**: Use MCP tools for test pattern research and analysis
- **Path Clarity**: All `focus_paths` prefer absolute paths (e.g., `D:\\project\\src\\module`), or clear relative paths from project root
- **Leverage Existing Test Infrastructure**: Prioritize using established testing frameworks and tools present in the project
## Core Responsibilities
- Parse TEST_ANALYSIS_RESULTS.md from test-concept-enhanced
- Extract test requirements and generation strategy
- Parse `--use-codex` flag to determine fix mode (manual vs automated)
- Generate test generation subtask calling @code-developer
- Generate test execution and fix cycle task JSON with appropriate fix mode
- Configure Gemini diagnosis workflow (bug-fix template) and manual/Codex fix application
- Create test-oriented IMPL_PLAN.md and TODO_LIST.md with test generation phase
## Test-Specific Execution Modes
## Execution Lifecycle
### Test Generation (IMPL-001)
- **Agent Mode** (default): @code-developer generates tests within agent context
- **CLI Execute Mode** (`--cli-execute`): Use Codex CLI for autonomous test generation
### Phase 1: Input Validation & Discovery
### Test Execution & Fix (IMPL-002+)
- **Manual Mode** (default): Gemini diagnosis → user applies fixes
- **Codex Mode** (`--use-codex`): Gemini diagnosis → Codex applies fixes with resume mechanism
1. **Parameter Parsing**
- Parse `--use-codex` flag from command arguments → Controls IMPL-002 fix mode
- Parse `--cli-execute` flag from command arguments → Controls IMPL-001 generation mode
- Store flag values for task JSON generation
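For illustration, the flag parsing can look like the sketch below; variable names are illustrative, while the flags themselves are the ones documented for this command:
```bash
#!/usr/bin/env bash
USE_CODEX=false; CLI_EXECUTE=false; SESSION=""
while [ $# -gt 0 ]; do
  case "$1" in
    --use-codex)   USE_CODEX=true ;;      # controls the IMPL-002 fix mode
    --cli-execute) CLI_EXECUTE=true ;;    # controls the IMPL-001 generation mode
    --session)     SESSION="$2"; shift ;;
    *)             echo "unknown argument: $1" >&2 ;;
  esac
  shift
done
echo "session=$SESSION use_codex=$USE_CODEX cli_execute=$CLI_EXECUTE"
```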
## Document Generation Lifecycle
2. **Test Session Validation**
- Load `.workflow/{test-session-id}/workflow-session.json`
- Verify `workflow_type: "test_session"`
- Extract `source_session_id` from metadata
### Phase 1: Context Preparation (Command Responsibility)
3. **Test Analysis Results Loading**
- **REQUIRED**: Load `.workflow/{test-session-id}/.process/TEST_ANALYSIS_RESULTS.md`
- Parse test requirements by file
- Extract test generation strategy
- Identify test files to create with specifications
**Command prepares test session paths and metadata for planning document generation.**
4. **Test Context Package Loading**
- Load `.workflow/{test-session-id}/.process/test-context-package.json`
- Extract test framework configuration
- Extract coverage gaps and priorities
- Load source session implementation summaries
### Phase 2: Task JSON Generation
Generate **TWO task JSON files**:
1. **IMPL-001.json** - Test Generation (calls @code-developer)
2. **IMPL-002.json** - Test Execution and Fix Cycle (calls @test-fix-agent)
#### IMPL-001.json - Test Generation Task
```json
{
"id": "IMPL-001",
"title": "Generate comprehensive tests for [sourceSessionId]",
"status": "pending",
"meta": {
"type": "test-gen",
"agent": "@code-developer",
"source_session": "[sourceSessionId]",
"test_framework": "jest|pytest|cargo|detected"
},
"context": {
"requirements": [
"Generate comprehensive test files based on TEST_ANALYSIS_RESULTS.md",
"Follow existing test patterns and conventions from test framework",
"Create tests for all missing coverage identified in analysis",
"Include happy path, error handling, edge cases, and integration tests",
"Use test data and mocks as specified in analysis",
"Ensure tests follow project coding standards"
],
"focus_paths": [
"tests/**/*",
"src/**/*.test.*",
"{paths_from_analysis}"
],
"acceptance": [
"All test files from TEST_ANALYSIS_RESULTS.md section 5 are created",
"Tests follow existing test patterns and conventions",
"Test scenarios cover happy path, errors, edge cases, integration",
"All dependencies are properly mocked",
"Test files are syntactically valid and can be executed",
"Test coverage meets analysis requirements"
],
"depends_on": [],
"source_context": {
"session_id": "[sourceSessionId]",
"test_analysis": ".workflow/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md",
"test_context": ".workflow/[testSessionId]/.process/test-context-package.json",
"implementation_summaries": [
".workflow/[sourceSessionId]/.summaries/IMPL-001-summary.md"
]
}
},
"flow_control": {
"pre_analysis": [
{
"step": "load_test_analysis",
"action": "Load test generation requirements and strategy",
"commands": [
"Read(.workflow/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md)",
"Read(.workflow/[testSessionId]/.process/test-context-package.json)"
],
"output_to": "test_generation_requirements",
"on_error": "fail"
},
{
"step": "load_implementation_context",
"action": "Load source implementation for test generation context",
"commands": [
"bash(for f in .workflow/[sourceSessionId]/.summaries/IMPL-*-summary.md; do echo \"=== $(basename $f) ===\"&& cat \"$f\"; done)"
],
"output_to": "implementation_context",
"on_error": "skip_optional"
},
{
"step": "load_existing_test_patterns",
"action": "Study existing tests for pattern reference",
"commands": [
"bash(find . -name \"*.test.*\" -type f)",
"bash(# Read first 2 existing test files as examples)",
"bash(test_files=$(find . -name \"*.test.*\" -type f | head -2))",
"bash(for f in $test_files; do echo \"=== $f ===\"&& cat \"$f\"; done)"
],
"output_to": "existing_test_patterns",
"on_error": "skip_optional"
}
],
// Agent Mode (Default): Agent implements tests
"implementation_approach": [
{
"step": 1,
"title": "Generate comprehensive test suite",
"description": "Generate comprehensive test suite based on TEST_ANALYSIS_RESULTS.md. Follow test generation strategy and create all test files listed in section 5 (Implementation Targets).",
"modification_points": [
"Read TEST_ANALYSIS_RESULTS.md sections 3 and 4",
"Study existing test patterns",
"Create test files with all required scenarios",
"Implement happy path, error handling, edge case, and integration tests",
"Add required mocks and fixtures"
],
"logic_flow": [
"Read TEST_ANALYSIS_RESULTS.md section 3 (Test Requirements by File)",
"Read TEST_ANALYSIS_RESULTS.md section 4 (Test Generation Strategy)",
"Study existing test patterns from test_context.test_framework.conventions",
"For each test file in section 5 (Implementation Targets): Create test file with specified scenarios, Implement happy path tests, Implement error handling tests, Implement edge case tests, Implement integration tests (if specified), Add required mocks and fixtures",
"Follow test framework conventions and project standards",
"Ensure all tests are executable and syntactically valid"
],
"depends_on": [],
"output": "test_suite"
}
],
// CLI Execute Mode (--cli-execute): Use Codex command (alternative format shown below)
"implementation_approach": [{
"step": 1,
"title": "Generate tests using Codex",
"description": "Use Codex CLI to autonomously generate comprehensive test suite based on TEST_ANALYSIS_RESULTS.md",
"modification_points": [
"Codex loads TEST_ANALYSIS_RESULTS.md and existing test patterns",
"Codex generates all test files listed in analysis section 5",
"Codex ensures tests follow framework conventions"
],
"logic_flow": [
"Start new Codex session",
"Pass TEST_ANALYSIS_RESULTS.md to Codex",
"Codex studies existing test patterns",
"Codex generates comprehensive test suite",
"Codex validates test syntax and executability"
],
"command": "bash(codex -C [focus_paths] --full-auto exec \"PURPOSE: Generate comprehensive test suite TASK: Create test files based on TEST_ANALYSIS_RESULTS.md section 5 MODE: write CONTEXT: @.workflow/WFS-test-[session]/.process/TEST_ANALYSIS_RESULTS.md @.workflow/WFS-test-[session]/.process/test-context-package.json EXPECTED: All test files with happy path, error handling, edge cases, integration tests RULES: Follow test framework conventions, ensure tests are executable\" --skip-git-repo-check -s danger-full-access)",
"depends_on": [],
"output": "test_generation"
}],
"target_files": [
"{test_file_1 from TEST_ANALYSIS_RESULTS.md section 5}",
"{test_file_2 from TEST_ANALYSIS_RESULTS.md section 5}",
"{test_file_N from TEST_ANALYSIS_RESULTS.md section 5}"
]
}
}
**Test Session Path Structure**:
```
#### IMPL-002.json - Test Execution & Fix Cycle Task
```json
{
"id": "IMPL-002",
"title": "Execute and fix tests for [sourceSessionId]",
"status": "pending",
"meta": {
"type": "test-fix",
"agent": "@test-fix-agent",
"source_session": "[sourceSessionId]",
"test_framework": "jest|pytest|cargo|detected",
"max_iterations": 5,
"use_codex": false // Set to true if --use-codex flag present
},
"context": {
"requirements": [
"Execute complete test suite (generated in IMPL-001)",
"Diagnose test failures using Gemini analysis with bug-fix template",
"Present fixes to user for manual application (default)",
"Use Codex ONLY if user explicitly requests automation",
"Iterate until all tests pass or max iterations reached",
"Revert changes if unable to fix within iteration limit"
],
"focus_paths": [
"tests/**/*",
"src/**/*.test.*",
"{implementation_files_from_source_session}"
],
"acceptance": [
"All tests pass successfully (100% pass rate)",
"No test failures or errors in final run",
"Code changes are minimal and surgical",
"All fixes are verified through retest",
"Iteration logs document fix progression"
],
"depends_on": ["IMPL-001"],
"source_context": {
"session_id": "[sourceSessionId]",
"test_generation_summary": ".workflow/[testSessionId]/.summaries/IMPL-001-summary.md",
"implementation_summaries": [
".workflow/[sourceSessionId]/.summaries/IMPL-001-summary.md"
]
}
},
"flow_control": {
"pre_analysis": [
{
"step": "load_source_session_summaries",
"action": "Load implementation context from source session",
"commands": [
"bash(find .workflow/[sourceSessionId]/.summaries/ -name 'IMPL-*-summary.md' 2>/dev/null)",
"bash(for f in .workflow/[sourceSessionId]/.summaries/IMPL-*-summary.md; do echo \"=== $(basename $f) ===\"&& cat \"$f\"; done)"
],
"output_to": "implementation_context",
"on_error": "skip_optional"
},
{
"step": "discover_test_framework",
"action": "Identify test framework and test command",
"commands": [
"bash(jq -r '.scripts.test // \"npm test\"' package.json 2>/dev/null || echo 'pytest' || echo 'cargo test')",
"bash([ -f 'package.json' ] && echo 'jest/npm' || [ -f 'pytest.ini' ] && echo 'pytest' || [ -f 'Cargo.toml' ] && echo 'cargo' || echo 'unknown')"
],
"output_to": "test_command",
"on_error": "fail"
},
{
"step": "analyze_test_coverage",
"action": "Analyze test coverage and identify missing tests",
"commands": [
"bash(find . -name \"*.test.*\" -type f)",
"bash(rg \"test|describe|it|def test_\" -g \"*.test.*\")",
"bash(# Count implementation files vs test files)",
"bash(impl_count=$(find [changed_files_dirs] -type f \\( -name '*.ts' -o -name '*.js' -o -name '*.py' \\) ! -name '*.test.*' 2>/dev/null | wc -l))",
"bash(test_count=$(find . -name \"*.test.*\" -type f | wc -l))",
"bash(echo \"Implementation files: $impl_count, Test files: $test_count\")"
],
"output_to": "test_coverage_analysis",
"on_error": "skip_optional"
},
{
"step": "identify_files_without_tests",
"action": "List implementation files that lack corresponding test files",
"commands": [
"bash(# For each changed file from source session, check if test exists)",
"bash(for file in [changed_files]; do test_file=$(echo $file | sed 's/\\(.*\\)\\.\\(ts\\|js\\|py\\)$/\\1.test.\\2/'); [ ! -f \"$test_file\" ] && echo \"$file\"; done)"
],
"output_to": "files_without_tests",
"on_error": "skip_optional"
},
{
"step": "prepare_test_environment",
"action": "Ensure test environment is ready",
"commands": [
"bash([ -f 'package.json' ] && npm install 2>/dev/null || true)",
"bash([ -f 'requirements.txt' ] && pip install -q -r requirements.txt 2>/dev/null || true)"
],
"output_to": "environment_status",
"on_error": "skip_optional"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Execute iterative test-fix-retest cycle",
"description": "Execute iterative test-fix-retest cycle using Gemini diagnosis (bug-fix template) and manual fixes (Codex only if meta.use_codex=true). Max 5 iterations with automatic revert on failure.",
"test_fix_cycle": {
"max_iterations": 5,
"cycle_pattern": "test → gemini_diagnose → manual_fix (or codex if needed) → retest",
"tools": {
"test_execution": "bash(test_command)",
"diagnosis": "gemini (MODE: analysis, uses bug-fix template)",
"fix_application": "manual (default) or codex exec resume --last (if explicitly needed)",
"verification": "bash(test_command) + regression_check"
},
"exit_conditions": {
"success": "all_tests_pass",
"failure": "max_iterations_reached",
"error": "test_command_not_found"
}
},
"modification_points": [
"PHASE 1: Initial Test Execution",
" 1.1. Discover test command from framework detection",
" 1.2. Execute initial test run: bash([test_command])",
" 1.3. Parse test output and count failures",
" 1.4. If all pass → Skip to PHASE 3 (success)",
" 1.5. If failures → Store failure output, proceed to PHASE 2",
"",
"PHASE 2: Iterative Test-Fix-Retest Cycle (max 5 iterations)",
" Note: This phase handles test failures, NOT test generation failures",
" Initialize: max_iterations=5, current_iteration=0",
" ",
" WHILE (tests failing AND current_iteration < max_iterations):",
" current_iteration++",
" ",
" STEP 2.1: Gemini Diagnosis (using bug-fix template)",
" - Prepare diagnosis context:",
" * Test failure output from previous run",
" * Source files from focus_paths",
" * Implementation summaries from source session",
" - Execute Gemini analysis with bug-fix template:",
" bash(cd .workflow/WFS-test-[session]/.process && gemini \"",
" PURPOSE: Diagnose test failure iteration [N] and propose minimal fix",
" TASK: Systematic bug analysis and fix recommendations for test failure",
" MODE: analysis",
" CONTEXT: @CLAUDE.md,**/*CLAUDE.md",
" Test output: [test_failures]",
" Source files: [focus_paths]",
" Implementation: [implementation_context]",
" EXPECTED: Root cause analysis, code path tracing, targeted fixes",
" RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt) | Bug: [test_failure_description]",
" Minimal surgical fixes only - no refactoring",
" \" > fix-iteration-[N]-diagnosis.md)",
" - Parse diagnosis → extract fix_suggestion and target_files",
" - Present fix to user for manual application (default)",
" ",
" STEP 2.2: Apply Fix (Based on meta.use_codex Flag)",
" ",
" IF meta.use_codex = false (DEFAULT):",
" - Present Gemini diagnosis to user for manual fix",
" - User applies fix based on diagnosis recommendations",
" - Stage changes: bash(git add -A)",
" - Store fix log: .process/fix-iteration-[N]-changes.log",
" ",
" IF meta.use_codex = true (--use-codex flag present):",
" - Stage current changes (if valid git repo): bash(git add -A)",
" - First iteration: Start new Codex session",
" codex -C [project_root] --full-auto exec \"",
" PURPOSE: Fix test failure iteration 1",
" TASK: [fix_suggestion from Gemini]",
" MODE: write",
" CONTEXT: Diagnosis: .workflow/.process/fix-iteration-1-diagnosis.md",
" Target files: [target_files]",
" Implementation context: [implementation_context]",
" EXPECTED: Minimal code changes to resolve test failure",
" RULES: Apply ONLY suggested changes, no refactoring",
" Preserve existing code style",
" \" --skip-git-repo-check -s danger-full-access",
" - Subsequent iterations: Resume session for context continuity",
" codex exec \"",
" CONTINUE TO NEXT FIX:",
" Iteration [N] of 5: Fix test failure",
" ",
" PURPOSE: Fix remaining test failures",
" TASK: [fix_suggestion from Gemini iteration N]",
" CONTEXT: Previous fixes applied, diagnosis: .process/fix-iteration-[N]-diagnosis.md",
" EXPECTED: Surgical fix for current failure",
" RULES: Build on previous fixes, maintain consistency",
" \" resume --last --skip-git-repo-check -s danger-full-access",
" - Store fix log: .process/fix-iteration-[N]-changes.log",
" ",
" STEP 2.3: Retest and Verification",
" - Re-execute test suite: bash([test_command])",
" - Capture output: .process/fix-iteration-[N]-retest.log",
" - Count failures: bash(grep -c 'FAIL\\|ERROR' .process/fix-iteration-[N]-retest.log)",
" - Check for regression:",
" IF new_failures > previous_failures:",
" WARN: Regression detected",
" Include in next Gemini diagnosis context",
" - Analyze results:",
" IF all_tests_pass:",
" BREAK loop → Proceed to PHASE 3",
" ELSE:",
" Update test_failures context",
" CONTINUE loop",
" ",
" IF max_iterations reached AND tests still failing:",
" EXECUTE: git reset --hard HEAD (revert all changes)",
" MARK: Task status = blocked",
" GENERATE: Detailed failure report with iteration logs",
" EXIT: Require manual intervention",
"",
"PHASE 3: Final Validation and Certification",
" 3.1. Execute final confirmation test run",
" 3.2. Generate success summary:",
" - Iterations required: [current_iteration]",
" - Fixes applied: [summary from iteration logs]",
" - Test results: All passing ✅",
" 3.3. Mark task status: completed",
" 3.4. Update TODO_LIST.md: Mark as ✅",
" 3.5. Certify code: APPROVED for deployment"
],
"logic_flow": [
"Load source session implementation context",
"Discover test framework and command",
"PHASE 0: Test Coverage Check",
" Analyze existing test files",
" Identify files without tests",
" IF tests missing:",
" Report to user (no automatic generation)",
" Wait for user to generate tests or request automation",
" ELSE:",
" Skip to Phase 1",
"PHASE 1: Initial Test Execution",
" Execute test suite",
" IF all pass → Success (Phase 3)",
" ELSE → Store failures, proceed to Phase 2",
"PHASE 2: Iterative Fix Cycle (max 5 iterations)",
" LOOP (max 5 times):",
" 1. Gemini diagnoses failure with bug-fix template → fix suggestion",
" 2. Check meta.use_codex flag:",
" - IF false (default): Present fix to user for manual application",
" - IF true (--use-codex): Codex applies fix with resume for continuity",
" 3. Retest and check results",
" 4. IF pass → Exit loop to Phase 3",
" 5. ELSE → Continue with updated context",
" IF max iterations → Revert + report failure",
"PHASE 3: Final Validation",
" Confirm all tests pass",
" Generate summary (include test generation info)",
" Certify code APPROVED"
],
"error_handling": {
"max_iterations_reached": {
"action": "revert_all_changes",
"commands": [
"bash(git reset --hard HEAD)",
"bash(jq '.status = \"blocked\"' .workflow/[session]/.task/IMPL-001.json > temp.json && mv temp.json .workflow/[session]/.task/IMPL-001.json)"
],
"report": "Generate failure report with iteration logs in .summaries/IMPL-001-failure-report.md"
},
"test_command_fails": {
"action": "treat_as_test_failure",
"context": "Use stderr as failure context for Gemini diagnosis"
},
"codex_apply_fails": {
"action": "retry_once_then_skip",
"fallback": "Mark iteration as skipped, continue to next"
},
"gemini_diagnosis_fails": {
"action": "retry_with_simplified_context",
"fallback": "Use previous diagnosis, continue"
},
"regression_detected": {
"action": "log_warning_continue",
"context": "Include regression info in next Gemini diagnosis"
}
},
"depends_on": [],
"output": "test_fix_results"
}
],
"target_files": [
"Auto-discovered from test failures",
"Extracted from Gemini diagnosis each iteration",
"Format: file:function:lines or file (for new files)"
],
"codex_session": {
"strategy": "resume_for_continuity",
"first_iteration": "codex exec \"fix iteration 1\" --full-auto",
"subsequent_iterations": "codex exec \"fix iteration N\" resume --last",
"benefits": [
"Maintains conversation context across fixes",
"Remembers previous decisions and patterns",
"Ensures consistency in fix approach",
"Reduces redundant context injection"
]
}
}
}
```
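Illustrative only: the sketch below shows how the cycle's failure counting, iteration cap, and revert-on-failure could be wired together in plain bash. It assumes an npm-based project and a hypothetical session path; real runs use the detected test command and interleave the Gemini diagnosis and manual/Codex fix between retests.
```bash
# Minimal sketch of the test-fix-retest loop (hypothetical paths and commands).
test_cmd="npm test"
max_iterations=5
iteration=0
logdir=".workflow/WFS-test-demo/.process"
mkdir -p "$logdir"

$test_cmd > "$logdir/initial-test.log" 2>&1 || true
failures=$(grep -cE 'FAIL|ERROR' "$logdir/initial-test.log" || true)

while [ "${failures:-0}" -gt 0 ] && [ "$iteration" -lt "$max_iterations" ]; do
  iteration=$((iteration + 1))
  # Gemini diagnosis + manual/Codex fix would run here before the retest.
  $test_cmd > "$logdir/fix-iteration-$iteration-retest.log" 2>&1 || true
  failures=$(grep -cE 'FAIL|ERROR' "$logdir/fix-iteration-$iteration-retest.log" || true)
done

if [ "${failures:-0}" -gt 0 ]; then
  git reset --hard HEAD   # max iterations reached: revert changes, mark task blocked
fi
```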
### Phase 3: IMPL_PLAN.md Generation
#### Document Structure
```markdown
---
identifier: WFS-test-[session-id]
source_session: WFS-[source-session-id]
workflow_type: test_session
test_framework: jest|pytest|cargo|detected
---
# Test Validation Plan: [Source Session Topic]
## Summary
Execute comprehensive test suite for implementation from session WFS-[source-session-id].
Diagnose and fix all test failures through iterative Gemini analysis, with fixes applied manually by default or by Codex when `--use-codex` is set.
## Source Session Context
- **Implementation Session**: WFS-[source-session-id]
- **Completed Tasks**: IMPL-001, IMPL-002, ...
- **Changed Files**: [list from git log]
- **Implementation Summaries**: [references to source session summaries]
## Test Framework
- **Detected Framework**: jest|pytest|cargo|other
- **Test Command**: npm test|pytest|cargo test
- **Test Files**: [discovered test files]
- **Coverage**: [estimated test coverage]
## Test-Fix-Retest Cycle
- **Max Iterations**: 5
- **Diagnosis Tool**: Gemini (analysis mode with bug-fix template from bug-index.md)
- **Fix Tool**: Manual (default, meta.use_codex=false) or Codex (if --use-codex flag, meta.use_codex=true)
- **Verification**: Bash test execution + regression check
### Cycle Workflow
1. **Initial Test**: Execute full suite, capture failures
2. **Iterative Fix Loop** (max 5 times):
- Gemini diagnoses failure using bug-fix template → surgical fix suggestion
- Check meta.use_codex flag:
- If false (default): Present fix to user for manual application
- If true (--use-codex): Codex applies fix with resume for context continuity
- Retest and verify (check for regressions)
- Continue until all pass or max iterations reached
3. **Final Validation**: Confirm all tests pass, certify code
### Error Recovery
- **Max iterations reached**: Revert all changes, report failure
- **Test command fails**: Treat as test failure, diagnose with Gemini
- **Codex fails**: Retry once, skip iteration if still failing
- **Regression detected**: Log warning, include in next diagnosis
## Task Breakdown
- **IMPL-001**: Execute and validate tests with iterative fix cycle
## Implementation Strategy
- **Phase 1**: Initial test execution and failure capture
- **Phase 2**: Iterative Gemini diagnosis + Codex fix + retest
- **Phase 3**: Final validation and code certification
## Success Criteria
- All tests pass (100% pass rate)
- No test failures or errors in final run
- Minimal, surgical code changes
- Iteration logs document fix progression
- Code certified APPROVED for deployment
```
### Phase 4: TODO_LIST.md Generation
```markdown
# Tasks: Test Validation for [Source Session]
## Task Progress
- [ ] **IMPL-001**: Execute and validate tests with iterative fix cycle → [📋](./.task/IMPL-001.json)
## Execution Details
- **Source Session**: WFS-[source-session-id]
- **Test Framework**: jest|pytest|cargo
- **Max Iterations**: 5
- **Tools**: Gemini diagnosis + manual fixes (or Codex resume fixes with `--use-codex`)
## Status Legend
- `- [ ]` = Pending
- `- [x]` = Completed
```
## Output Files Structure
```
.workflow/active/WFS-test-{session-id}/
├── workflow-session.json                # Test session metadata
├── IMPL_PLAN.md                         # Output: Test validation plan
├── TODO_LIST.md                         # Output: Test TODO list
├── .task/                               # Output: Test task JSON files
│   └── IMPL-001.json                    # Test-fix task with cycle spec
├── .process/
│   ├── ANALYSIS_RESULTS.md              # From concept-enhanced (optional)
│   ├── TEST_ANALYSIS_RESULTS.md         # Test requirements and strategy
│   ├── context-package.json             # From context-gather (general context artifacts)
│   ├── test-context-package.json        # Test patterns and coverage
│   ├── initial-test.log                 # Phase 1: Initial test results
│   ├── fix-iteration-1-diagnosis.md     # Gemini diagnosis iteration 1
│   ├── fix-iteration-1-changes.log      # Fix changes iteration 1
│   ├── fix-iteration-1-retest.log       # Retest results iteration 1
│   ├── fix-iteration-N-*.md/log         # Subsequent iterations
│   └── final-test.log                   # Phase 3: Final validation
└── .summaries/
    └── IMPL-001-summary.md              # Success report OR failure report
```
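A quick sanity check (hypothetical session id) that the planning documents landed where the executor expects them:
```bash
# Verify the generated planning outputs for a hypothetical session.
session=".workflow/active/WFS-test-auth"
for f in workflow-session.json IMPL_PLAN.md TODO_LIST.md .task/IMPL-001.json; do
  if [ -e "$session/$f" ]; then echo "ok:      $f"; else echo "missing: $f"; fi
done
```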
## Error Handling
### Input Validation Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Not a test session | Missing workflow_type: "test_session" | Verify session created by test-gen |
| Source session not found | Invalid source_session_id | Check source session exists |
| No implementation summaries | Source session incomplete | Ensure source session has completed tasks |
### Test Framework Discovery Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| No test command found | Unknown framework | Manual test command specification |
| No test files found | Tests not written | Request user to write tests first |
| Test dependencies missing | Incomplete setup | Run dependency installation |
### Generation Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Invalid JSON structure | Template error | Fix task generation logic |
| Missing required fields | Incomplete metadata | Validate session metadata |
### Phase 2: Test Document Generation (Agent Responsibility)
**Purpose**: Generate test-specific IMPL_PLAN.md, task JSONs, and TODO_LIST.md - planning documents only, NOT test execution.
**Command Preparation**:
1. **Assemble Test Session Paths** for agent prompt:
   - `session_metadata_path`
   - `test_analysis_results_path` (REQUIRED)
   - `test_context_package_path`
   - Output directory paths
2. **Provide Metadata** (simple values):
   - `session_id`
   - `execution_mode` (agent-mode | cli-execute-mode)
   - `use_codex` flag (true | false)
   - `source_session_id` (if exists)
   - `mcp_capabilities` (available MCP tools)
**Agent Invocation**:
```javascript
Task(
subagent_type="action-planning-agent",
description="Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
prompt=`
## TASK OBJECTIVE
Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for test workflow session
IMPORTANT: This is TEST PLANNING ONLY - you are generating planning documents, NOT executing tests.
CRITICAL:
- Use existing test frameworks and utilities from the project
- Follow the progressive loading strategy defined in your agent specification (load context incrementally from memory-first approach)
## AGENT CONFIGURATION REFERENCE
All test task generation rules, schemas, and quality standards are defined in your agent specification:
@.claude/agents/action-planning-agent.md
Refer to your specification for:
- Test Task JSON Schema (6-field structure with test-specific metadata)
- Test IMPL_PLAN.md Structure (test_session variant with test-fix cycle)
- TODO_LIST.md Format (with test phase indicators)
- Progressive Loading Strategy (memory-first, load TEST_ANALYSIS_RESULTS.md as primary source)
- Quality Validation Rules (task count limits, requirement quantification)
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{test-session-id}/workflow-session.json
- TEST_ANALYSIS_RESULTS: .workflow/active/{test-session-id}/.process/TEST_ANALYSIS_RESULTS.md (REQUIRED - primary requirements source)
- Test Context Package: .workflow/active/{test-session-id}/.process/test-context-package.json
- Context Package: .workflow/active/{test-session-id}/.process/context-package.json
- Source Session Summaries: .workflow/active/{source-session-id}/.summaries/IMPL-*.md (if exists)
Output:
- Task Dir: .workflow/active/{test-session-id}/.task/
- IMPL_PLAN: .workflow/active/{test-session-id}/IMPL_PLAN.md
- TODO_LIST: .workflow/active/{test-session-id}/TODO_LIST.md
## CONTEXT METADATA
Session ID: {test-session-id}
Workflow Type: test_session
Planning Mode: {agent-mode | cli-execute-mode}
Use Codex: {true | false}
Source Session: {source-session-id} (if exists)
MCP Capabilities: {exa_code, exa_web, code_index}
## TEST-SPECIFIC REQUIREMENTS SUMMARY
(Detailed specifications in your agent definition)
### Task Structure Requirements
- Minimum 2 tasks: IMPL-001 (test generation) + IMPL-002 (test execution & fix)
- Expandable for complex projects: Add IMPL-003+ (per-module, integration, E2E tests)
Task Configuration:
IMPL-001 (Test Generation):
- meta.type: "test-gen"
- meta.agent: "@code-developer" (agent-mode) OR CLI execution (cli-execute-mode)
- meta.test_framework: Specify existing framework (e.g., "jest", "vitest", "pytest")
- flow_control: Test generation strategy from TEST_ANALYSIS_RESULTS.md
IMPL-002+ (Test Execution & Fix):
- meta.type: "test-fix"
- meta.agent: "@test-fix-agent"
- meta.use_codex: true/false (based on flag)
- flow_control: Test-fix cycle with iteration limits and diagnosis configuration
### Test-Fix Cycle Specification (IMPL-002+)
Required flow_control fields:
- max_iterations: 5
- diagnosis_tool: "gemini"
- diagnosis_template: "~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt"
- fix_mode: "manual" OR "codex" (based on use_codex flag)
- cycle_pattern: "test → gemini_diagnose → fix → retest"
- exit_conditions: ["all_tests_pass", "max_iterations_reached"]
- auto_revert_on_failure: true
### TEST_ANALYSIS_RESULTS.md Mapping
PRIMARY requirements source - extract and map to task JSONs:
- Test framework config → meta.test_framework (use existing framework from project)
- Existing test utilities → flow_control.reusable_test_tools (discovered test helpers, fixtures, mocks)
- Test runner commands → flow_control.test_commands (from package.json or pytest config)
- Coverage targets → meta.coverage_target
- Test requirements → context.requirements (quantified with explicit counts)
- Test generation strategy → IMPL-001 flow_control.implementation_approach
- Implementation targets → context.files_to_test (absolute paths)
## EXPECTED DELIVERABLES
1. Test Task JSON Files (.task/IMPL-*.json)
- 6-field schema with quantified requirements from TEST_ANALYSIS_RESULTS.md
- Test-specific metadata: type, agent, use_codex, test_framework, coverage_target
- flow_control includes: reusable_test_tools, test_commands (from project config)
- Artifact references from test-context-package.json
- Absolute paths in context.files_to_test
2. Test Implementation Plan (IMPL_PLAN.md)
- Template: ~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt
- Test-specific frontmatter: workflow_type="test_session", test_framework, source_session_id
- Test-Fix-Retest Cycle section with diagnosis configuration
- Source session context integration (if applicable)
3. TODO List (TODO_LIST.md)
- Hierarchical structure with test phase containers
- Links to task JSONs with status markers
- Matches task JSON hierarchy
## QUALITY STANDARDS
Hard Constraints:
- Task count: minimum 2, maximum 12
- All requirements quantified from TEST_ANALYSIS_RESULTS.md
- Test framework matches existing project framework
- flow_control includes reusable_test_tools and test_commands from project
- use_codex flag correctly set in IMPL-002+ tasks
- Absolute paths for all focus_paths
- Acceptance criteria include verification commands
## SUCCESS CRITERIA
- All test planning documents generated successfully
- Return completion status: task count, test framework, coverage targets, source session status
`
)
```
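After the agent returns, a small jq pass can confirm the generated tasks carry the expected test metadata before execution. This is a sketch with a hypothetical session path; it requires `jq`:
```bash
# Spot-check generated task JSONs (hypothetical session id).
task_dir=".workflow/active/WFS-test-auth/.task"
for task in "$task_dir"/IMPL-*.json; do
  echo "$(basename "$task"): $(jq -r '(.meta.type // "?") + " / " + (.meta.agent // "?")' "$task")"
done
# Expect IMPL-001 -> test-gen and IMPL-002+ -> test-fix (@test-fix-agent).
```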
## Integration & Usage
### Command Chain
- **Called By**: `/workflow:test-gen` (Phase 4), `/workflow:test-fix-gen` (Phase 4)
- **Invokes**: `action-planning-agent` for test planning document generation
- **Followed By**: `/workflow:test-cycle-execute` or `/workflow:execute` (user-triggered)
### Usage Examples
```bash
# Agent mode with manual fixes (default)
/workflow:tools:test-task-generate --session WFS-test-auth
# With automated Codex fixes
/workflow:tools:test-task-generate --use-codex --session WFS-test-auth
# CLI execution mode for test generation
/workflow:tools:test-task-generate --cli-execute --session WFS-test-auth
```
### Flag Behavior
- **No flags**: `meta.use_codex=false` (manual fixes presented to user), agent-mode test generation
- **--use-codex**: `meta.use_codex=true` (Codex automatically applies fixes in IMPL-002+ via the resume mechanism)
- **--cli-execute**: CLI tool execution mode for IMPL-001 test generation
- **Both flags**: CLI generation + automated Codex fixes
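To confirm the flag landed in the generated task JSON, a minimal check (hypothetical path, requires `jq`):
```bash
# true  -> Codex applies fixes automatically (--use-codex)
# false -> diagnosis is presented to the user for manual fixes (default)
jq '.meta.use_codex' .workflow/active/WFS-test-auth/.task/IMPL-002.json
```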
## Related Commands
- `/workflow:test-gen` - Creates test session and calls this tool
- `/workflow:tools:context-gather` - Provides cross-session context
- `/workflow:tools:concept-enhanced` - Provides test strategy analysis
- `/workflow:execute` - Executes the generated test-fix task
- `@test-fix-agent` - Agent that executes the iterative test-fix cycle
## Agent Execution Notes
The `@test-fix-agent` will execute the task by following the `flow_control.implementation_approach` specification:
1. **Load task JSON**: Read complete test-fix task from `.task/IMPL-002.json`
2. **Check meta.use_codex**: Determine fix mode (manual or automated)
3. **Execute pre_analysis**: Load source context, discover framework, analyze tests
4. **Phase 1**: Run initial test suite
5. **Phase 2**: If failures, enter iterative loop:
- Use Gemini for diagnosis (analysis mode with bug-fix template)
- Check meta.use_codex flag:
- If false (default): Present fix suggestions to user for manual application
- If true (--use-codex): Use Codex resume for automated fixes (maintains context)
- Retest and check for regressions
- Repeat max 5 times
6. **Phase 3**: Generate summary and certify code
7. **Error Recovery**: Revert changes if max iterations reached
**Bug Diagnosis Template**: Uses `~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt` template for systematic root cause analysis, code path tracing, and targeted fix recommendations.
**Codex Usage**: The agent uses `codex exec "..." resume --last` pattern ONLY when meta.use_codex=true (--use-codex flag present) to maintain conversation context across multiple fix iterations, ensuring consistency and learning from previous attempts.
### Output
- Test task JSON files in `.task/` directory (minimum 2)
- IMPL_PLAN.md with test strategy and fix cycle specification
- TODO_LIST.md with test phase indicators
- Session ready for test execution

File diff suppressed because it is too large


@@ -1,480 +0,0 @@
---
name: batch-generate
description: Prompt-driven batch UI generation using target-style-centric parallel execution with design token application
argument-hint: --prompt "<description>" [--targets "<list>"] [--target-type "page|component"] [--device-type "desktop|mobile|tablet|responsive"] [--base-path <path>] [--session <id>] [--style-variants <count>] [--layout-variants <count>]
allowed-tools: TodoWrite(*), Read(*), Write(*), Task(ui-design-agent), Bash(*), mcp__exa__web_search_exa(*)
---
# Batch Generate UI Prototypes (/workflow:ui-design:batch-generate)
## Overview
Prompt-driven UI generation with intelligent target extraction and **target-style-centric batch execution**. Each agent handles all layouts for one target × style combination.
**Strategy**: Prompt → Targets → Batched Generation
- **Prompt-driven**: Describe what to build, command extracts targets
- **Agent scope**: Each of `T × S` agents generates `L` layouts
- **Parallel batching**: Max 6 concurrent agents for optimal throughput
- **Component isolation**: Complete task independence
- **Style-aware**: HTML adapts to design_attributes
- **Self-contained CSS**: Direct token values (no var() refs)
**Supports**: Pages (full layouts) and components (isolated elements)
## Phase 1: Setup & Validation
### Step 1: Parse Prompt & Resolve Configuration
```bash
# Parse required parameters
prompt_text = --prompt
device_type = --device-type OR "responsive"
# Extract targets from prompt
IF --targets:
target_list = split_and_clean(--targets)
ELSE:
target_list = extract_targets_from_prompt(prompt_text) # See helpers
IF NOT target_list: target_list = ["home"] # Fallback
# Detect target type
target_type = --target-type OR detect_target_type(target_list)
# Resolve base path
IF --base-path:
base_path = --base-path
ELSE IF --session:
bash(find .workflow/WFS-{session} -type d -name "design-*" -printf "%T@ %p\n" | sort -nr | head -1 | cut -d' ' -f2)
ELSE:
bash(find .workflow -type d -name "design-*" -printf "%T@ %p\n" | sort -nr | head -1 | cut -d' ' -f2)
# Get variant counts
style_variants = --style-variants OR bash(ls {base_path}/style-extraction/style-* -d | wc -l)
layout_variants = --layout-variants OR 3
```
**Output**: `base_path`, `target_list[]`, `target_type`, `device_type`, `style_variants`, `layout_variants`
### Step 2: Validate Design Tokens
```bash
# Check design tokens exist
bash(test -f {base_path}/style-extraction/style-1/design-tokens.json && echo "valid")
# Load design space analysis (optional, from intermediates)
IF exists({base_path}/.intermediates/style-analysis/design-space-analysis.json):
design_space_analysis = Read({base_path}/.intermediates/style-analysis/design-space-analysis.json)
```
**Output**: `design_tokens_valid`, `design_space_analysis`
### Step 3: Gather Layout Inspiration (Reuse or Create)
```bash
# Check if layout inspirations already exist from layout-extract phase
inspiration_source = "{base_path}/.intermediates/layout-analysis/inspirations"
FOR target IN target_list:
# Priority 1: Reuse existing inspiration from layout-extract
IF exists({inspiration_source}/{target}-layout-ideas.txt):
# Reuse existing inspiration (no action needed)
REPORT: "Using existing layout inspiration for {target}"
ELSE:
# Priority 2: Generate new inspiration via MCP
bash(mkdir -p {inspiration_source})
search_query = "{target} {target_type} layout patterns variations"
mcp__exa__web_search_exa(query=search_query, numResults=5)
# Extract context from prompt for this target
target_requirements = extract_relevant_context_from_prompt(prompt_text, target)
# Write inspiration file to centralized location
Write({inspiration_source}/{target}-layout-ideas.txt, inspiration_content)
REPORT: "Created new layout inspiration for {target}"
```
**Output**: `T` inspiration text files (reused or created in `.intermediates/layout-analysis/inspirations/`)
## Phase 2: Target-Style-Centric Batch Generation (Agent)
**Executor**: `Task(ui-design-agent)` × `T × S` tasks in **batched parallel** (max 6 concurrent)
### Step 1: Calculate Batch Execution Plan
```bash
bash(mkdir -p {base_path}/prototypes)
# Build task list: T × S combinations
MAX_PARALLEL = 6
total_tasks = T × S
total_batches = ceil(total_tasks / MAX_PARALLEL)
# Initialize batch tracking
TodoWrite({todos: [
{content: "Batch 1/{batches}: Generate 6 tasks", status: "in_progress"},
{content: "Batch 2/{batches}: Generate 6 tasks", status: "pending"},
...
]})
```
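The batch count is a plain ceiling division; a standalone sketch with hypothetical counts:
```bash
# Compute batches of at most 6 parallel agent tasks (T and S are hypothetical).
T=5; S=4; MAX_PARALLEL=6
total_tasks=$((T * S))
total_batches=$(( (total_tasks + MAX_PARALLEL - 1) / MAX_PARALLEL ))
echo "$total_tasks tasks -> $total_batches batches"   # 20 tasks -> 4 batches (6+6+6+2)
```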
### Step 2: Launch Batched Agent Tasks
For each batch (up to 6 parallel tasks):
```javascript
Task(ui-design-agent): `
[TARGET_STYLE_UI_GENERATION_FROM_PROMPT]
🎯 ONE component: {target} × Style-{style_id} ({philosophy_name})
Generate: {layout_variants} × 2 files (HTML + CSS per layout)
PROMPT CONTEXT: {target_requirements} # Extracted from original prompt
TARGET: {target} | TYPE: {target_type} | STYLE: {style_id}/{style_variants}
BASE_PATH: {base_path}
DEVICE: {device_type}
${design_attributes ? "DESIGN_ATTRIBUTES: " + JSON.stringify(design_attributes) : ""}
## Reference
- Layout inspiration: Read("{base_path}/.intermediates/layout-analysis/inspirations/{target}-layout-ideas.txt")
- Design tokens: Read("{base_path}/style-extraction/style-{style_id}/design-tokens.json")
Parse ALL token values including:
* colors, typography (with combinations), spacing, opacity
* border_radius, shadows, breakpoints
* component_styles (button, card, input variants)
${design_attributes ? "- Adapt DOM to: density, visual_weight, formality, organic_vs_geometric" : ""}
## Generation
For EACH layout (1 to {layout_variants}):
1. HTML: {base_path}/prototypes/{target}-style-{style_id}-layout-N.html
- Complete HTML5: <!DOCTYPE>, <head>, <body>
- CSS ref: <link href="{target}-style-{style_id}-layout-N.css">
- Semantic: <header>, <nav>, <main>, <footer>
- A11y: ARIA labels, landmarks, responsive meta
- Viewport: <meta name="viewport" content="width=device-width, initial-scale=1.0">
- Follow user requirements from prompt
${design_attributes ? `
- DOM adaptation:
* density='spacious' → flatter hierarchy
* density='compact' → deeper nesting
* visual_weight='heavy' → extra wrappers
* visual_weight='minimal' → direct structure` : ""}
- Device-specific: Optimize for {device_type}
2. CSS: {base_path}/prototypes/{target}-style-{style_id}-layout-N.css
- Self-contained: Direct token VALUES (no var())
- Use tokens: colors, fonts, spacing, opacity, borders, shadows
- IF tokens.component_styles exists: Use component presets for buttons, cards, inputs
- IF tokens.typography.combinations exists: Use typography presets for headings and body text
- Device-optimized: {device_type} styles
${device_type === 'responsive' ? '- Responsive: Mobile-first @media' : '- Fixed: ' + device_type}
${design_attributes ? `
- Token selection: density → spacing, visual_weight → shadows` : ""}
## Notes
- ✅ Token VALUES directly from design-tokens.json (with typography.combinations, opacity, component_styles support)
- ✅ Follow prompt requirements for {target}
- ✅ Optimize for {device_type}
- ❌ NO var() refs, NO external deps
- Layouts structurally DISTINCT
- Write files IMMEDIATELY (per layout)
- CSS filename MUST match HTML <link href>
`
# After each batch completes
TodoWrite: Mark batch completed, next batch in_progress
```
## Phase 3: Verify & Generate Previews
### Step 1: Verify Generated Files
```bash
# Count expected vs found
bash(ls {base_path}/prototypes/{target}-style-*-layout-*.html | wc -l)
# Expected HTML count: S × L × T (matching CSS files bring the total to S × L × T × 2)
# Validate samples
Read({base_path}/prototypes/{target}-style-{style_id}-layout-{layout_id}.html)
# Check: <!DOCTYPE html>, correct CSS href, sufficient CSS length
```
**Output**: `S × L × T × 2` files verified
### Step 2: Run Preview Generation Script
```bash
bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
```
**Script generates**:
- `compare.html` (interactive matrix)
- `index.html` (navigation)
- `PREVIEW.md` (instructions)
### Step 3: Verify Preview Files
```bash
bash(ls {base_path}/prototypes/compare.html {base_path}/prototypes/index.html {base_path}/prototypes/PREVIEW.md)
```
**Output**: 3 preview files
## Completion
### Todo Update
```javascript
TodoWrite({todos: [
{content: "Setup and parse prompt", status: "completed", activeForm: "Parsing prompt"},
{content: "Detect token sources", status: "completed", activeForm: "Loading design systems"},
{content: "Gather layout inspiration", status: "completed", activeForm: "Researching layouts"},
{content: "Batch 1/{batches}: Generate 6 tasks", status: "completed", activeForm: "Generating batch 1"},
... (all batches completed)
{content: "Verify files & generate previews", status: "completed", activeForm: "Creating previews"}
]});
```
### Output Message
```
✅ Prompt-driven batch UI generation complete!
Prompt: {prompt_text}
Configuration:
- Style Variants: {style_variants}
- Layout Variants: {layout_variants}
- Target Type: {target_type}
- Device Type: {device_type}
- Targets: {target_list} ({T} targets)
- Total Prototypes: {S × L × T}
Batch Execution:
- Total tasks: {T × S} (targets × styles)
- Batches: {batches} (max 6 parallel per batch)
- Agent scope: {L} layouts per target×style
- Component isolation: Complete task independence
- Device-specific: All layouts optimized for {device_type}
Quality:
- Style-aware: {design_space_analysis ? 'HTML adapts to design_attributes' : 'Standard structure'}
- CSS: Self-contained (direct token values, no var())
- Device-optimized: {device_type} layouts
- Tokens: Production-ready (WCAG AA compliant)
Generated Files:
{base_path}/prototypes/
├── {target}-style-{s}-layout-{l}.html ({S×L×T} prototypes)
├── {target}-style-{s}-layout-{l}.css
├── compare.html (interactive matrix)
├── index.html (navigation)
└── PREVIEW.md (instructions)
Layout Inspirations:
{base_path}/.intermediates/layout-analysis/inspirations/ ({T} text files, reused or created)
Preview:
1. Open compare.html (recommended)
2. Open index.html
3. Read PREVIEW.md
Next: /workflow:ui-design:update
```
## Simple Bash Commands
### Path Operations
```bash
# Find design directory
bash(find .workflow -type d -name "design-*" -printf "%T@ %p\n" | sort -nr | head -1 | cut -d' ' -f2)
# Count style variants
bash(ls {base_path}/style-extraction/style-* -d | wc -l)
# Check design tokens exist
bash(test -f {base_path}/style-extraction/style-1/design-tokens.json && echo "valid")
```
### Validation Commands
```bash
# Count generated files
bash(ls {base_path}/prototypes/{target}-style-*-layout-*.html | wc -l)
# Verify preview
bash(test -f {base_path}/prototypes/compare.html && echo "exists")
```
### File Operations
```bash
# Create prototypes directory
bash(mkdir -p {base_path}/prototypes)
# Create inspirations directory (if needed)
bash(mkdir -p {base_path}/.intermediates/layout-analysis/inspirations)
# Run preview script
bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
```
## Output Structure
```
{base_path}/
├── .intermediates/
│   └── layout-analysis/
│       └── inspirations/
│           └── {target}-layout-ideas.txt       # Layout inspiration (reused or created)
├── prototypes/
│   ├── {target}-style-{s}-layout-{l}.html      # Final prototypes
│   ├── {target}-style-{s}-layout-{l}.css
│   ├── compare.html
│   ├── index.html
│   └── PREVIEW.md
└── style-extraction/
    └── style-{s}/
        ├── design-tokens.json
        └── style-guide.md
```
## Error Handling
### Common Errors
```
ERROR: No design tokens found
→ Run /workflow:ui-design:style-extract first
ERROR: No targets extracted from prompt
→ Use --targets explicitly or rephrase prompt
ERROR: MCP search failed
→ Check network, retry
ERROR: Batch {N} agent tasks failed
→ Check agent output, retry specific target×style combinations
ERROR: Script permission denied
→ chmod +x ~/.claude/scripts/ui-generate-preview.sh
```
### Recovery Strategies
- **Partial success**: Keep successful target×style combinations
- **Missing design_attributes**: Works without (less style-aware)
- **Invalid tokens**: Validate design-tokens.json structure
- **Failed batch**: Re-run command, only failed combinations will retry
## Quality Checklist
- [ ] Prompt clearly describes targets
- [ ] CSS uses direct token values (no var())
- [ ] HTML adapts to design_attributes (if available)
- [ ] Semantic HTML5 structure
- [ ] ARIA attributes present
- [ ] Device-optimized layouts
- [ ] Layouts structurally distinct
- [ ] compare.html works
## Key Features
- **Prompt-Driven**: Describe what to build, command extracts targets
- **Target-Style-Centric**: `T×S` agents, each handles `L` layouts
- **Parallel Batching**: Max 6 concurrent agents with progress tracking
- **Component Isolation**: Complete task independence
- **Style-Aware**: HTML adapts to design_attributes
- **Self-Contained CSS**: Direct token values (no var())
- **Device-Specific**: Optimized for desktop/mobile/tablet/responsive
- **Inspiration-Based**: MCP-powered layout research
- **Production-Ready**: Semantic, accessible, responsive
## Integration
**Input**:
- Required: Prompt, design-tokens.json
- Optional: design-space-analysis.json (from `.intermediates/style-analysis/`)
- Reuses: Layout inspirations from `.intermediates/layout-analysis/inspirations/` (if available from layout-extract)
**Output**: S×L×T prototypes for `/workflow:ui-design:update`
**Compatible**: style-extract, explore-auto, imitate-auto outputs
**Optimization**: Reuses layout inspirations from layout-extract phase, avoiding duplicate MCP searches
## Usage Examples
### Basic: Auto-detection
```bash
/workflow:ui-design:batch-generate \
--prompt "Dashboard with metric cards and charts"
# Auto: latest design run, extracts "dashboard" target
# Output: S × L × 1 prototypes
```
### With Session
```bash
/workflow:ui-design:batch-generate \
--prompt "Auth pages: login, signup, password reset" \
--session WFS-auth
# Uses WFS-auth's design run
# Extracts: ["login", "signup", "password-reset"]
# Batches: 2 (if S=3: 9 tasks = 6+3)
# Output: S × L × 3 prototypes
```
### Components with Device Type
```bash
/workflow:ui-design:batch-generate \
--prompt "Mobile UI components: navbar, card, footer" \
--target-type component \
--device-type mobile
# Mobile-optimized component generation
# Output: S × L × 3 prototypes
```
### Large Scale (Multi-batch)
```bash
/workflow:ui-design:batch-generate \
--prompt "E-commerce site" \
--targets "home,shop,product,cart,checkout" \
--style-variants 4 \
--layout-variants 2
# Tasks: 5 × 4 = 20 (4 batches: 6+6+6+2)
# Output: 4 × 2 × 5 = 40 prototypes
```
## Helper Functions Reference
### Target Extraction
```python
# extract_targets_from_prompt(prompt_text)
# Patterns: "Create X and Y", "Generate X, Y, Z pages", "Build X"
# Returns: ["x", "y", "z"] (normalized lowercase with hyphens)
# detect_target_type(target_list)
# Keywords: page (home, dashboard, login) vs component (navbar, card, button)
# Returns: "page" or "component"
# extract_relevant_context_from_prompt(prompt_text, target)
# Extracts sentences mentioning specific target
# Returns: Relevant context string
```
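A rough shell equivalent of `detect_target_type` (the keyword lists here are illustrative, not the command's full set):
```bash
# Classify a normalized target as "page" or "component" by keyword.
detect_target_type() {
  case "$1" in
    navbar|card|button|footer|modal|sidebar|form) echo "component" ;;
    *) echo "page" ;;
  esac
}
detect_target_type "navbar"      # -> component
detect_target_type "dashboard"   # -> page
```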
## Batch Execution Details
### Parallel Control
- **Max concurrent**: 6 agents per batch
- **Task distribution**: T × S tasks = ceil(T×S/6) batches
- **Progress tracking**: TodoWrite per-batch status
- **Examples**:
- 3 tasks → 1 batch
- 9 tasks → 2 batches (6+3)
- 20 tasks → 4 batches (6+6+6+2)
### Performance
| Tasks | Batches | Est. Time | Efficiency |
|-------|---------|-----------|------------|
| 1-6 | 1 | 3-5 min | 100% |
| 7-12 | 2 | 6-10 min | ~85% |
| 13-18 | 3 | 9-15 min | ~80% |
| 19-30 | 4-5 | 12-25 min | ~75% |
### Optimization Tips
1. **Reduce tasks**: Fewer targets or styles
2. **Adjust layouts**: L=2 instead of L=3 for faster iteration
3. **Stage generation**: Core pages first, secondary pages later
## Notes
- **Prompt quality**: Clear descriptions improve target extraction
- **Token sources**: Consolidated (production) or proposed (fast-track)
- **Batch parallelism**: Max 6 concurrent for stability
- **Scalability**: Tested up to 30+ tasks (5+ batches)
- **Dependencies**: MCP web search, ui-generate-preview.sh script


@@ -1,361 +0,0 @@
---
name: capture
description: Batch screenshot capture for UI design workflows using MCP puppeteer or local fallback with URL mapping
argument-hint: --url-map "target:url,..." [--base-path path] [--session id]
allowed-tools: TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*), ListMcpResourcesTool(*), mcp__chrome-devtools__*, mcp__playwright__*
---
# Batch Screenshot Capture (/workflow:ui-design:capture)
## Overview
Batch screenshot tool with MCP-first strategy and multi-tier fallback. Processes multiple URLs in parallel.
**Strategy**: MCP → Playwright → Chrome → Manual
**Output**: Flat structure `screenshots/{target}.png`
## Phase 1: Initialize & Parse
### Step 1: Determine Base Path
```bash
# Priority: --base-path > session > standalone
bash(if [ -n "$BASE_PATH" ]; then
echo "$BASE_PATH"
elif [ -n "$SESSION_ID" ]; then
find .workflow/WFS-$SESSION_ID/design-* -type d | head -1 || \
echo ".workflow/WFS-$SESSION_ID/design-run-$(date +%Y%m%d-%H%M%S)"
else
echo ".workflow/.design/run-$(date +%Y%m%d-%H%M%S)"
fi)
bash(mkdir -p $BASE_PATH/screenshots)
```
### Step 2: Parse URL Map
```javascript
// Input: "home:https://linear.app, pricing:https://linear.app/pricing"
url_entries = []
FOR pair IN split(params["--url-map"], ","):
parts = pair.split(":", 1)
IF len(parts) != 2:
ERROR: "Invalid format: {pair}. Expected: 'target:url'"
EXIT 1
target = parts[0].strip().lower().replace(" ", "-")
url = parts[1].strip()
// Validate target name
IF NOT regex_match(target, r"^[a-z0-9][a-z0-9_-]*$"):
ERROR: "Invalid target: {target}"
EXIT 1
// Add https:// if missing
IF NOT url.startswith("http"):
url = f"https://{url}"
url_entries.append({target, url})
```
**Output**: `base_path`, `url_entries[]`
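A shell sketch of the same parsing with a hypothetical url-map; it splits on the first colon only, so the URL's own `https://` colon survives:
```bash
url_map="home:https://linear.app, pricing:https://linear.app/pricing"
IFS=',' read -ra pairs <<< "$url_map"
for pair in "${pairs[@]}"; do
  pair="$(echo "$pair" | xargs)"       # trim surrounding whitespace
  target="${pair%%:*}"                 # text before the first colon
  url="${pair#*:}"                     # everything after the first colon
  [[ "$url" == http* ]] || url="https://$url"
  echo "$target -> $url"
done
```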
### Step 3: Initialize Todos
```javascript
TodoWrite({todos: [
{content: "Parse url-map", status: "completed", activeForm: "Parsing"},
{content: "Detect MCP tools", status: "in_progress", activeForm: "Detecting"},
{content: "Capture screenshots", status: "pending", activeForm: "Capturing"},
{content: "Verify results", status: "pending", activeForm: "Verifying"}
]})
```
## Phase 2: Detect Screenshot Tools
### Step 1: Check MCP Availability
```javascript
// List available MCP servers
all_resources = ListMcpResourcesTool()
available_servers = unique([r.server for r in all_resources])
// Check Chrome DevTools MCP
chrome_devtools = "chrome-devtools" IN available_servers
chrome_screenshot = check_tool_exists("mcp__chrome-devtools__take_screenshot")
// Check Playwright MCP
playwright_mcp = "playwright" IN available_servers
playwright_screenshot = check_tool_exists("mcp__playwright__screenshot")
// Determine primary tool
IF chrome_devtools AND chrome_screenshot:
tool = "chrome-devtools"
ELSE IF playwright_mcp AND playwright_screenshot:
tool = "playwright"
ELSE:
tool = null
```
**Output**: `tool` (chrome-devtools | playwright | null)
### Step 2: Check Local Fallback
```bash
# Only if MCP unavailable
bash(which playwright 2>/dev/null || echo "")
bash(which google-chrome || which chrome || which chromium 2>/dev/null || echo "")
```
**Output**: `local_tools[]`
## Phase 3: Capture Screenshots
### Screenshot Format Options
**PNG Format** (default, lossless):
- **Pros**: Lossless quality, best for detailed UI screenshots
- **Cons**: Larger file sizes (typically 200-500 KB per screenshot)
- **Parameters**: `format: "png"` (no quality parameter)
- **Use case**: High-fidelity UI replication, design system extraction
**WebP Format** (optional, lossy/lossless):
- **Pros**: Smaller file sizes with good quality (50-70% smaller than PNG)
- **Cons**: Requires quality parameter, slight quality loss at high compression
- **Parameters**: `format: "webp", quality: 90` (80-100 recommended)
- **Use case**: Batch captures, network-constrained environments
**JPEG Format** (optional, lossy):
- **Pros**: Smallest file sizes
- **Cons**: Lossy compression, not recommended for UI screenshots
- **Parameters**: `format: "jpeg", quality: 90`
- **Use case**: Photo-heavy pages, not recommended for UI design
### Step 1: MCP Capture (If Available)
```javascript
IF tool == "chrome-devtools":
// Get or create page
pages = mcp__chrome-devtools__list_pages()
IF pages.length == 0:
mcp__chrome-devtools__new_page({url: url_entries[0].url})
page_idx = 0
ELSE:
page_idx = 0
mcp__chrome-devtools__select_page({pageIdx: page_idx})
// Capture each URL
FOR entry IN url_entries:
mcp__chrome-devtools__navigate_page({url: entry.url, timeout: 30000})
bash(sleep 2)
// PNG format doesn't support quality parameter
// Use PNG for lossless quality (larger files)
mcp__chrome-devtools__take_screenshot({
fullPage: true,
format: "png",
filePath: f"{base_path}/screenshots/{entry.target}.png"
})
// Alternative: Use WebP with quality for smaller files
// mcp__chrome-devtools__take_screenshot({
// fullPage: true,
// format: "webp",
// quality: 90,
// filePath: f"{base_path}/screenshots/{entry.target}.webp"
// })
ELSE IF tool == "playwright":
FOR entry IN url_entries:
mcp__playwright__screenshot({
url: entry.url,
output_path: f"{base_path}/screenshots/{entry.target}.png",
full_page: true,
timeout: 30000
})
```
### Step 2: Local Fallback (If MCP Failed)
```bash
# Try Playwright CLI
bash(playwright screenshot "$url" "$output_file" --full-page --timeout 30000)
# Try Chrome headless
bash($chrome --headless --screenshot="$output_file" --window-size=1920,1080 "$url")
```
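A combined fallback loop might look like the sketch below (entries are hypothetical, `$base_path` comes from Phase 1, and it assumes the Playwright CLI and/or a local Chrome are installed):
```bash
# Try Playwright CLI first, then headless Chrome, for each target:url entry.
chrome=$(command -v google-chrome || command -v chromium || true)
while IFS= read -r entry; do
  target="${entry%%:*}"; url="${entry#*:}"
  out="$base_path/screenshots/$target.png"
  playwright screenshot "$url" "$out" --full-page --timeout 30000 2>/dev/null \
    || { [ -n "$chrome" ] && "$chrome" --headless --screenshot="$out" --window-size=1920,1080 "$url"; }
done <<'EOF'
home:https://linear.app
pricing:https://linear.app/pricing
EOF
```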
### Step 3: Manual Mode (If All Failed)
```
⚠️ Manual Screenshot Required
Failed URLs:
home: https://linear.app
Save to: .workflow/.design/run-20250110/screenshots/home.png
Steps:
1. Visit URL in browser
2. Take full-page screenshot
3. Save to path above
4. Type 'ready' to continue
Options: ready | skip | abort
```
## Phase 4: Verification
### Step 1: Scan Captured Files
```bash
bash(ls -1 $base_path/screenshots/*.{png,jpg,jpeg,webp} 2>/dev/null)
bash(du -h $base_path/screenshots/*.png 2>/dev/null)
```
### Step 2: Generate Metadata
```javascript
captured_files = Glob(f"{base_path}/screenshots/*.{{png,jpg,jpeg,webp}}")
captured_targets = [basename_no_ext(f) for f in captured_files]
metadata = {
"timestamp": current_timestamp(),
"total_requested": len(url_entries),
"total_captured": len(captured_targets),
"screenshots": []
}
FOR entry IN url_entries:
is_captured = entry.target IN captured_targets
metadata.screenshots.append({
"target": entry.target,
"url": entry.url,
"captured": is_captured,
"path": f"{base_path}/screenshots/{entry.target}.png" IF is_captured ELSE null,
"size_kb": file_size_kb IF is_captured ELSE null
})
Write(f"{base_path}/screenshots/capture-metadata.json", JSON.stringify(metadata))
```
**Output**: `capture-metadata.json`
## Completion
### Todo Update
```javascript
TodoWrite({todos: [
{content: "Parse url-map", status: "completed", activeForm: "Parsing"},
{content: "Detect MCP tools", status: "completed", activeForm: "Detecting"},
{content: "Capture screenshots", status: "completed", activeForm: "Capturing"},
{content: "Verify results", status: "completed", activeForm: "Verifying"}
]})
```
### Output Message
```
✅ Batch screenshot capture complete!
Summary:
- Requested: {total_requested}
- Captured: {total_captured}
- Success rate: {success_rate}%
- Method: {tool || "Local fallback"}
Output:
{base_path}/screenshots/
├── home.png (245.3 KB)
├── pricing.png (198.7 KB)
└── capture-metadata.json
Next: /workflow:ui-design:extract --images "screenshots/*.png"
```
## Simple Bash Commands
### Path Operations
```bash
# Find design directory
bash(find .workflow -type d -name "design-*" | head -1)
# Create screenshot directory
bash(mkdir -p $BASE_PATH/screenshots)
```
### Tool Detection
```bash
# Check MCP
all_resources = ListMcpResourcesTool()
# Check local tools
bash(which playwright 2>/dev/null)
bash(which google-chrome 2>/dev/null)
```
### Verification
```bash
# List captures
bash(ls -1 $base_path/screenshots/*.png 2>/dev/null)
# File sizes
bash(du -h $base_path/screenshots/*.png)
```
## Output Structure
```
{base_path}/
└── screenshots/
    ├── home.png
    ├── pricing.png
    ├── about.png
    └── capture-metadata.json
```
## Error Handling
### Common Errors
```
ERROR: Invalid url-map format
→ Use: "target:url, target2:url2"
ERROR: png screenshots do not support 'quality'
→ PNG format is lossless, no quality parameter needed
→ Remove quality parameter OR switch to webp/jpeg format
ERROR: MCP unavailable
→ Using local fallback
ERROR: All tools failed
→ Manual mode activated
```
### Format-Specific Errors
```
❌ Wrong: format: "png", quality: 90
✅ Right: format: "png"
✅ Or use: format: "webp", quality: 90
✅ Or use: format: "jpeg", quality: 90
```
### Recovery
- **Partial success**: Keep successful captures
- **Retry**: Re-run with failed targets only
- **Manual**: Follow interactive guidance
## Quality Checklist
- [ ] All requested URLs processed
- [ ] File sizes > 1KB (valid images)
- [ ] Metadata JSON generated
- [ ] No missing targets (or documented)
## Key Features
- **MCP-first**: Prioritize managed tools
- **Multi-tier fallback**: 4 layers (MCP → Local → Manual)
- **Batch processing**: Parallel capture
- **Error tolerance**: Partial failures handled
- **Structured output**: Flat, predictable
## Integration
**Input**: `--url-map` (multiple target:url pairs)
**Output**: `screenshots/*.png` + `capture-metadata.json`
**Called by**: `/workflow:ui-design:imitate-auto`, `/workflow:ui-design:explore-auto`
**Next**: `/workflow:ui-design:extract` or `/workflow:ui-design:explore-layers`


@@ -0,0 +1,652 @@
---
name: workflow:ui-design:codify-style
description: Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)
argument-hint: "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]"
allowed-tools: SlashCommand,Bash,Read,TodoWrite
auto-continue: true
---
# UI Design: Codify Style (Orchestrator)
## Overview & Execution Model
**Fully autonomous orchestrator**: Coordinates style extraction from codebase and generates shareable reference packages.
**Pure Orchestrator Pattern**: Does NOT directly execute agent tasks. Delegates to specialized commands:
1. `/workflow:ui-design:import-from-code` - Extract styles from source code
2. `/workflow:ui-design:reference-page-generator` - Generate versioned reference package with interactive preview
**Output**: Shareable, versioned style reference package at `.workflow/reference_style/{package-name}/`
**Autonomous Flow** (⚠️ CONTINUOUS EXECUTION - DO NOT STOP):
1. User triggers: `/workflow:ui-design:codify-style <path> --package-name <name>`
2. Phase 0: Parameter validation & preparation → **IMMEDIATELY triggers Phase 1**
3. Phase 1 (import-from-code) → **Attach 4 tasks → Execute tasks → Collapse** → Auto-continues to Phase 2
4. Phase 2 (reference-page-generator) → **Attach 4 tasks → Execute tasks → Collapse** → Auto-continues to Phase 3
5. Phase 3 (cleanup & verification) → **Execute orchestrator task** → Reports completion
**Phase Transition Mechanism**:
- **Phase 0 (Validation)**: Validate parameters, prepare workspace → IMMEDIATELY triggers Phase 1
- **Phase 1-2 (Task Attachment)**: `SlashCommand` invocation **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these tasks itself.
- **Task Execution**: Orchestrator runs attached tasks sequentially, updating TodoWrite as each completes
- **Task Collapse**: After all attached tasks complete, collapse them into phase summary
- **Phase Transition**: Automatically execute next phase after collapsing completed tasks
- No user interaction required after initial command
**Auto-Continue Mechanism**: TodoWrite tracks phase status with dynamic task attachment/collapse. After executing all attached tasks, you MUST immediately collapse them, restore phase summary, and execute the next phase. No user intervention required. The workflow is NOT complete until reaching Phase 3.
**Task Attachment Model**: SlashCommand invocation is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.
## Core Rules
1. **Start Immediately**: TodoWrite initialization → Phase 0 validation → Phase 1 execution
2. **No Task JSON**: This command does not create task JSON files - pure orchestrator pattern
3. **Parse & Pass**: Extract required data from each command output (design run path, metadata)
4. **Intelligent Validation**: Smart parameter validation with user-friendly error messages
5. **Safety First**: Package overwrite protection, existence checks, fallback error handling
6. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
7. **⚠️ CRITICAL: Task Attachment Model** - SlashCommand invocation **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
8. **⚠️ CRITICAL: DO NOT STOP** - This is a continuous multi-phase workflow. After executing all attached tasks, you MUST immediately collapse them and execute the next phase. Workflow is NOT complete until Phase 3.
---
## Usage
### Command Syntax
```bash
/workflow:ui-design:codify-style <path> [OPTIONS]
# Required
<path> Source code directory to analyze
# Optional
--package-name <name> Custom name for the style reference package
(default: auto-generated from directory name)
--output-dir <path> Output directory (default: .workflow/reference_style)
--overwrite Overwrite existing package without prompting
```
**Note**: File discovery is fully automatic. The command will scan the source directory and find all style-related files (CSS, SCSS, JS, HTML) automatically.
---
## 4-Phase Execution
### Phase 0: Intelligent Parameter Validation & Session Preparation
**Goal**: Validate parameters, check safety constraints, prepare session, and get user confirmation
**TodoWrite** (First Action):
```json
[
{"content": "Phase 0: Validate parameters and prepare session", "status": "in_progress", "activeForm": "Validating parameters"},
{"content": "Phase 1: Style extraction from source code (import-from-code)", "status": "pending", "activeForm": "Extracting styles"},
{"content": "Phase 2: Reference package generation (reference-page-generator)", "status": "pending", "activeForm": "Generating package"},
{"content": "Phase 3: Cleanup and verify package", "status": "pending", "activeForm": "Cleanup and verification"}
]
```
**Note**: Orchestrator tracks only high-level phases. Sub-command details shown when executed.
**Step 0a: Parse and Validate Required Parameters**
```bash
# Parse positional path parameter (first non-flag argument)
source_path = FIRST_POSITIONAL_ARG
# Validate source path
IF NOT source_path:
REPORT: "❌ ERROR: Missing required parameter: <path>"
REPORT: "USAGE: /workflow:ui-design:codify-style <path> [OPTIONS]"
REPORT: "EXAMPLE: /workflow:ui-design:codify-style ./src"
REPORT: "EXAMPLE: /workflow:ui-design:codify-style ./app --package-name design-system-v2"
EXIT 1
# Validate source path existence
TRY:
source_exists = Bash(test -d "${source_path}" && echo "exists" || echo "not_exists")
IF source_exists != "exists":
REPORT: "❌ ERROR: Source directory not found: ${source_path}"
REPORT: "Please provide a valid directory path."
EXIT 1
CATCH error:
REPORT: "❌ ERROR: Cannot validate source path: ${error}"
EXIT 1
source = source_path
STORE: source
# Auto-generate package name if not provided
IF NOT --package-name:
# Extract directory name from path
dir_name = Bash(basename "${source}")
# Normalize to package name format (lowercase, replace special chars with hyphens)
normalized_name = dir_name.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '')
# Add version suffix
package_name = "${normalized_name}-style-v1"
ELSE:
package_name = --package-name
# Validate custom package name format (lowercase, alphanumeric, hyphens only)
IF NOT package_name MATCHES /^[a-z0-9][a-z0-9-]*$/:
REPORT: "❌ ERROR: Invalid package name format: ${package_name}"
REPORT: "Requirements:"
REPORT: " • Must start with lowercase letter or number"
REPORT: " • Only lowercase letters, numbers, and hyphens allowed"
REPORT: " • No spaces or special characters"
REPORT: "EXAMPLES: main-app-style-v1, design-system-v2, component-lib-v1"
EXIT 1
STORE: package_name, output_dir (default: ".workflow/reference_style"), overwrite_flag
```
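For reference, the auto-naming rule above can be reproduced as a standalone shell sketch (the input path is hypothetical):
```bash
source_path="./MainApp"                       # hypothetical source directory
dir_name=$(basename "$source_path")
normalized=$(echo "$dir_name" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9]+/-/g; s/^-+|-+$//g')
package_name="${normalized}-style-v1"
echo "$package_name"                          # -> mainapp-style-v1
```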
**Step 0b: Intelligent Package Safety Check**
```bash
# Set default output directory
output_dir = --output-dir OR ".workflow/reference_style"
package_path = "${output_dir}/${package_name}"
TRY:
package_exists = Bash(test -d "${package_path}" && echo "exists" || echo "not_exists")
IF package_exists == "exists":
IF NOT --overwrite:
REPORT: "❌ ERROR: Package '${package_name}' already exists at ${package_path}/"
REPORT: "Use --overwrite flag to replace, or choose a different package name"
EXIT 1
ELSE:
REPORT: "⚠️ Overwriting existing package: ${package_name}"
CATCH error:
REPORT: "⚠️ Warning: Cannot check package existence: ${error}"
REPORT: "Continuing with package creation..."
```
**Step 0c: Session Preparation**
```bash
# Create temporary workspace for processing
TRY:
# Step 1: Ensure .workflow directory exists and generate unique ID
Bash(mkdir -p .workflow)
temp_id = Bash(echo "codify-temp-$(date +%Y%m%d-%H%M%S)")
# Step 2: Create temporary directory
Bash(mkdir -p ".workflow/${temp_id}")
# Step 3: Get absolute path using bash
design_run_path = Bash(cd ".workflow/${temp_id}" && pwd)
CATCH error:
REPORT: "❌ ERROR: Failed to create temporary workspace: ${error}"
EXIT 1
STORE: temp_id, design_run_path
```
**Summary Variables**:
- `SOURCE`: Validated source directory path
- `PACKAGE_NAME`: Validated package name (lowercase, alphanumeric, hyphens)
- `PACKAGE_PATH`: Full output path `${output_dir}/${package_name}`
- `OUTPUT_DIR`: `.workflow/reference_style` (default) or user-specified
- `OVERWRITE`: `true` if --overwrite flag present
- `CSS/SCSS/JS/HTML/STYLE_FILES`: Optional glob patterns
- `TEMP_ID`: `codify-temp-{timestamp}` (temporary workspace identifier)
- `DESIGN_RUN_PATH`: Absolute path to temporary workspace
<!-- TodoWrite: Update Phase 0 → completed, Phase 1 → in_progress, INSERT import-from-code tasks -->
**TodoWrite Update (Phase 1 SlashCommand invoked - tasks attached)**:
```json
[
{"content": "Phase 0: Validate parameters and prepare session", "status": "completed", "activeForm": "Validating parameters"},
{"content": "Phase 1.0: Discover and categorize code files (import-from-code)", "status": "in_progress", "activeForm": "Discovering code files"},
{"content": "Phase 1.1: Style Agent extraction (import-from-code)", "status": "pending", "activeForm": "Extracting style tokens"},
{"content": "Phase 1.2: Animation Agent extraction (import-from-code)", "status": "pending", "activeForm": "Extracting animation tokens"},
{"content": "Phase 1.3: Layout Agent extraction (import-from-code)", "status": "pending", "activeForm": "Extracting layout patterns"},
{"content": "Phase 2: Reference package generation (reference-page-generator)", "status": "pending", "activeForm": "Generating package"},
{"content": "Phase 3: Cleanup and verify package", "status": "pending", "activeForm": "Cleanup and verification"}
]
```
**Note**: SlashCommand invocation **attaches** import-from-code's 4 tasks to current workflow. Orchestrator **executes** these tasks itself.
**Next Action**: Tasks attached → **Execute Phase 1.0-1.3** sequentially
---
### Phase 1: Style Extraction from Source Code
**Goal**: Extract design tokens, style patterns, and component styles from codebase
**Command Construction**:
```bash
# Build command with required parameters only
# Use temp_id as design-id since it's the workspace directory name
command = "/workflow:ui-design:import-from-code" +
" --design-id \"${temp_id}\"" +
" --source \"${source}\""
```
**Execute Command (Task Attachment Pattern)**:
```bash
TRY:
# SlashCommand invocation ATTACHES import-from-code's 4 tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself:
# 1. Phase 1.0: Discover and categorize code files
# 2. Phase 1.1: Style Agent extraction
# 3. Phase 1.2: Animation Agent extraction
# 4. Phase 1.3: Layout Agent extraction
SlashCommand(command)
# After executing all attached tasks, verify extraction outputs
tokens_path = "${design_run_path}/style-extraction/style-1/design-tokens.json"
guide_path = "${design_run_path}/style-extraction/style-1/style-guide.md"
tokens_exists = Bash(test -f "${tokens_path}" && echo "exists" || echo "missing")
guide_exists = Bash(test -f "${guide_path}" && echo "exists" || echo "missing")
IF tokens_exists != "exists" OR guide_exists != "exists":
REPORT: "⚠️ WARNING: Expected extraction files not found"
REPORT: "Continuing with available outputs..."
CATCH error:
REPORT: "❌ ERROR: Style extraction failed"
REPORT: "Error: ${error}"
REPORT: "Possible cause: Source directory contains no style files"
Bash(rm -rf .workflow/${temp_id})
EXIT 1
```
**Example Command**:
```bash
# Automatic file discovery
/workflow:ui-design:import-from-code --design-id codify-temp-20250111-123456 --source ./src
```
**Completion Criteria**:
- ✅ `import-from-code` command executed successfully
- ✅ Design run created at `${design_run_path}`
- ✅ Required files exist:
- `design-tokens.json` - Complete design token system
- `style-guide.md` - Style documentation
- ⭕ Optional files:
- `animation-tokens.json` - Animation specifications
- `component-patterns.json` - Component catalog
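As a quick manual spot-check of these criteria, a minimal bash sketch (required paths follow the verification step above; the optional animation-tokens location is an assumption based on the workspace layout):
```bash
# Sketch: spot-checking Phase 1 outputs (required paths from the verification step above)
style_dir="${DESIGN_RUN_PATH}/style-extraction/style-1"
for f in design-tokens.json style-guide.md; do
  [ -f "${style_dir}/${f}" ] && echo "✅ ${f}" || echo "❌ missing required: ${f}"
done
# Optional output (location assumed from the workspace layout)
[ -f "${DESIGN_RUN_PATH}/animation-extraction/animation-tokens.json" ] \
  && echo "⭕ animation-tokens.json present" \
  || echo "⭕ animation-tokens.json absent (optional)"
```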
<!-- TodoWrite: REMOVE Phase 1.0-1.3 tasks, INSERT reference-page-generator tasks -->
**TodoWrite Update (Phase 2 SlashCommand invoked - tasks attached)**:
```json
[
{"content": "Phase 0: Validate parameters and prepare session", "status": "completed", "activeForm": "Validating parameters"},
{"content": "Phase 1: Style extraction from source code (import-from-code)", "status": "completed", "activeForm": "Extracting styles"},
{"content": "Phase 2.0: Setup and validation (reference-page-generator)", "status": "in_progress", "activeForm": "Validating parameters"},
{"content": "Phase 2.1: Prepare component data (reference-page-generator)", "status": "pending", "activeForm": "Copying layout templates"},
{"content": "Phase 2.2: Generate preview pages (reference-page-generator)", "status": "pending", "activeForm": "Generating preview"},
{"content": "Phase 3: Cleanup and verify package", "status": "pending", "activeForm": "Cleanup and verification"}
]
```
**Note**: Phase 1 tasks completed and collapsed. SlashCommand invocation **attaches** reference-page-generator's 3 tasks. Orchestrator **executes** these tasks itself.
**Next Action**: Tasks attached → **Execute Phase 2.0-2.2** sequentially
---
### Phase 2: Reference Package Generation
**Goal**: Generate shareable reference package with interactive preview and documentation
**Command Construction**:
```bash
command = "/workflow:ui-design:reference-page-generator " +
"--design-run \"${design_run_path}\" " +
"--package-name \"${package_name}\" " +
"--output-dir \"${output_dir}\""
```
**Execute Command (Task Attachment Pattern)**:
```bash
TRY:
# SlashCommand invocation ATTACHES reference-page-generator's 3 tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself:
# 1. Phase 2.0: Setup and validation
# 2. Phase 2.1: Prepare component data
# 3. Phase 2.2: Generate preview pages
SlashCommand(command)
# After executing all attached tasks, verify package outputs
required_files = [
"layout-templates.json",
"design-tokens.json",
"preview.html",
"preview.css"
]
missing_files = []
FOR file IN required_files:
file_path = "${package_path}/${file}"
exists = Bash(test -f "${file_path}" && echo "exists" || echo "missing")
IF exists != "exists":
missing_files.append(file)
IF missing_files.length > 0:
REPORT: "⚠️ WARNING: Some expected files are missing"
REPORT: "Package may be incomplete. Continuing with cleanup..."
CATCH error:
REPORT: "❌ ERROR: Reference package generation failed"
REPORT: "Error: ${error}"
Bash(rm -rf .workflow/${temp_id})
EXIT 1
```
**Example Command**:
```bash
/workflow:ui-design:reference-page-generator \
--design-run .workflow/codify-temp-20250111-123456 \
--package-name main-app-style-v1 \
--output-dir .workflow/reference_style
```
**Completion Criteria**:
- ✅ `reference-page-generator` executed successfully
- ✅ Reference package created at `${package_path}/`
- ✅ All required files present:
- `layout-templates.json` - Layout templates from design run
- `design-tokens.json` - Complete design token system
- `preview.html` - Interactive multi-component showcase
- `preview.css` - Showcase styling
- ⭕ Optional files:
- `animation-tokens.json` - Animation specifications (if available from extraction)
<!-- TodoWrite: REMOVE Phase 2.0-2.2 tasks, restore to orchestrator view -->
**TodoWrite Update (Phase 2 completed - tasks collapsed)**:
```json
[
{"content": "Phase 0: Validate parameters and prepare session", "status": "completed", "activeForm": "Validating parameters"},
{"content": "Phase 1: Style extraction from source code (import-from-code)", "status": "completed", "activeForm": "Extracting styles"},
{"content": "Phase 2: Reference package generation (reference-page-generator)", "status": "completed", "activeForm": "Generating package"},
{"content": "Phase 3: Cleanup and verify package", "status": "in_progress", "activeForm": "Cleanup and verification"}
]
```
**Note**: Phase 2 tasks completed and collapsed to summary.
**Next Action**: TodoWrite restored → **Execute Phase 3** (orchestrator's own task)
---
### Phase 3: Cleanup & Verification
**Goal**: Clean up temporary workspace and report completion
**Operations**:
```bash
# Cleanup temporary workspace
TRY:
Bash(rm -rf .workflow/${temp_id})
CATCH error:
# Silent failure - not critical
# Quick verification
package_exists = Bash(test -d "${package_path}" && echo "exists" || echo "missing")
IF package_exists != "exists":
REPORT: "❌ ERROR: Package generation failed - directory not found"
EXIT 1
# Get absolute path and component count for final report
absolute_package_path = Bash(cd "${package_path}" && pwd 2>/dev/null || echo "${package_path}")
component_count = Bash(jq -r '.layout_templates | length // "unknown"' "${package_path}/layout-templates.json" 2>/dev/null || echo "unknown")
anim_exists = Bash(test -f "${package_path}/animation-tokens.json" && echo "✓" || echo "○")
```
<!-- TodoWrite: Update Phase 3 → completed -->
**Final Action**: Display completion summary to user
---
## Completion Message
```
✅ Style reference package generated successfully
📦 Package: {package_name}
📂 Location: {absolute_package_path}/
📄 Source: {source}
📊 Components: {component_count}
Files: layout-templates.json, design-tokens.json, animation-tokens.json (optional), preview.html, preview.css
Preview: file://{absolute_package_path}/preview.html
Next: /memory:style-skill-memory {package_name}
```
---
## TodoWrite Pattern
```javascript
// Initialize IMMEDIATELY at the start to track orchestrator workflow (4 high-level tasks)
TodoWrite({todos: [
{"content": "Phase 0: Validate parameters and prepare session", "status": "in_progress", "activeForm": "Validating parameters"},
{"content": "Phase 1: Style extraction from source code (import-from-code)", "status": "pending", "activeForm": "Extracting styles"},
{"content": "Phase 2: Reference package generation (reference-page-generator)", "status": "pending", "activeForm": "Generating package"},
{"content": "Phase 3: Cleanup and verify package", "status": "pending", "activeForm": "Cleanup and verification"}
]})
// ⚠️ CRITICAL: Dynamic TodoWrite task attachment strategy:
//
// **Key Concept**: SlashCommand invocation ATTACHES tasks to current workflow.
// Orchestrator EXECUTES these attached tasks itself, not waiting for external completion.
//
// 1. INITIAL STATE: 4 orchestrator-level tasks only
//
// 2. PHASE 1 SlashCommand INVOCATION:
// - SlashCommand(/workflow:ui-design:import-from-code) ATTACHES 4 tasks
// - TodoWrite expands to: Phase 0 (completed) + 4 import-from-code tasks + Phase 2 + Phase 3
// - Orchestrator EXECUTES these 4 tasks sequentially (Phase 1.0 → 1.1 → 1.2 → 1.3)
// - First attached task marked as in_progress
//
// 3. PHASE 1 TASKS COMPLETED:
// - All 4 import-from-code tasks executed and completed
// - COLLAPSE completed tasks into Phase 1 summary
// - TodoWrite becomes: Phase 0-1 (completed) + Phase 2 + Phase 3
//
// 4. PHASE 2 SlashCommand INVOCATION:
// - SlashCommand(/workflow:ui-design:reference-page-generator) ATTACHES 3 tasks
// - TodoWrite expands to: Phase 0-1 (completed) + 3 reference-page-generator tasks + Phase 3
// - Orchestrator EXECUTES these 3 tasks sequentially (Phase 2.0 → 2.1 → 2.2)
//
// 5. PHASE 2 TASKS COMPLETED:
// - All 3 reference-page-generator tasks executed and completed
// - COLLAPSE completed tasks into Phase 2 summary
// - TodoWrite returns to: Phase 0-2 (completed) + Phase 3 (in_progress)
//
// 6. PHASE 3 EXECUTION:
// - Orchestrator's own task (no SlashCommand attachment)
// - Mark Phase 3 as completed
// - Final state: All 4 orchestrator tasks completed
// ✓ Dynamic attachment/collapse maintains clarity
```
---
## Execution Flow Diagram
```
User triggers: /workflow:ui-design:codify-style ./src --package-name my-style-v1
[TodoWrite Init] 4 orchestrator-level tasks
├─ Phase 0: Validate parameters and prepare session (in_progress)
├─ Phase 1: Style extraction from source code (pending)
├─ Phase 2: Reference package generation (pending)
└─ Phase 3: Cleanup and verify package (pending)
[Phase 0] Parameter validation & preparation
├─ Parse positional path parameter
├─ Validate source directory exists
├─ Auto-generate or validate package name
├─ Check package overwrite protection
├─ Create temporary workspace
└─ Display configuration summary
[Phase 0 Complete] → TodoWrite: Phase 0 → completed
[Phase 1 Invoke] → SlashCommand(/workflow:ui-design:import-from-code) ATTACHES 4 tasks
├─ Phase 0 (completed)
├─ Phase 1.0: Discover and categorize code files (in_progress) ← ATTACHED
├─ Phase 1.1: Style Agent extraction (pending) ← ATTACHED
├─ Phase 1.2: Animation Agent extraction (pending) ← ATTACHED
├─ Phase 1.3: Layout Agent extraction (pending) ← ATTACHED
├─ Phase 2: Reference package generation (pending)
└─ Phase 3: Cleanup and verify package (pending)
[Execute Phase 1.0] → Discover files (orchestrator executes this)
[Execute Phase 1.1-1.3] → Run 3 agents in parallel (orchestrator executes these)
└─ Outputs: design-tokens.json, style-guide.md, animation-tokens.json, layout-templates.json
[Phase 1 Complete] → TodoWrite: COLLAPSE Phase 1.0-1.3 into Phase 1 summary
[Phase 2 Invoke] → SlashCommand(/workflow:ui-design:reference-page-generator) ATTACHES 3 tasks
├─ Phase 0 (completed)
├─ Phase 1: Style extraction from source code (completed) ← COLLAPSED
├─ Phase 2.0: Setup and validation (in_progress) ← ATTACHED
├─ Phase 2.1: Prepare component data (pending) ← ATTACHED
├─ Phase 2.2: Generate preview pages (pending) ← ATTACHED
└─ Phase 3: Cleanup and verify package (pending)
[Execute Phase 2.0] → Setup and validation (orchestrator executes this)
[Execute Phase 2.1] → Prepare component data (orchestrator executes this)
[Execute Phase 2.2] → Generate preview pages (orchestrator executes this)
└─ Outputs: layout-templates.json, design-tokens.json, animation-tokens.json (optional), preview.html, preview.css
[Phase 2 Complete] → TodoWrite: COLLAPSE Phase 2.0-2.2 into Phase 2 summary
├─ Phase 0 (completed)
├─ Phase 1: Style extraction from source code (completed)
├─ Phase 2: Reference package generation (completed) ← COLLAPSED
└─ Phase 3: Cleanup and verify package (in_progress)
[Execute Phase 3] → Orchestrator's own task (no attachment needed)
├─ Remove temporary workspace (.workflow/codify-temp-{timestamp}/)
├─ Verify package directory
├─ Extract component count
└─ Display completion summary
[Phase 3 Complete] → TodoWrite: Phase 3 → completed
├─ Phase 0 (completed)
├─ Phase 1 (completed)
├─ Phase 2 (completed)
└─ Phase 3 (completed)
```
---
## Error Handling
### Common Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Missing source path | Required positional path parameter not provided | Provide the source directory path (e.g., `./src`) |
| Invalid package name | Contains uppercase, special chars | Use lowercase, alphanumeric, hyphens only |
| import-from-code failed | Source path invalid or no style files found | Verify the source directory contains CSS/SCSS/JS/HTML style files |
| reference-page-generator failed | Design run incomplete | Check import-from-code output, verify extraction files |
| Package verification failed | Output directory creation failed | Check file permissions |
### Error Recovery
- If Phase 1 (style extraction) fails: Clean up the temporary workspace and report the error
- If Phase 2 (package generation) fails: Keep the design run for debugging and report the error
- User can manually inspect `${design_run_path}` if needed
---
## Implementation Details
### Critical Rules
1. **No User Prompts Between Phases**: Never ask user questions or wait for input between phases
2. **Immediate Phase Transition**: After TodoWrite update, immediately execute next phase command
3. **Status-Driven Execution**: Check TodoList status after each phase
4. **Phase Completion Pattern**:
```
Phase N completes → Update TodoWrite (N=completed, N+1=in_progress) → Execute Phase N+1
```
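A compact sketch of this status-driven loop (the phase runner is a placeholder for the phase's actual operations, not a real tool call):
```bash
# Sketch: status-driven phase loop (run_phase is a stub for illustration only)
run_phase() { echo "executing $1"; }
phases=("Phase 1" "Phase 2" "Phase 3")
for p in "${phases[@]}"; do
  echo "TodoWrite: ${p} -> in_progress"
  run_phase "$p"
  echo "TodoWrite: ${p} -> completed"    # then immediately continue to the next phase
done
```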
### Parameter Pass-Through
Only essential parameters are passed to `import-from-code`:
- `--design-id` → temporary design run ID (auto-generated)
- `--source` → user-specified source directory
File discovery is fully automatic - no glob patterns needed.
### Output Directory Structure
```
.workflow/
├── reference_style/ # Default output directory
│ └── {package-name}/
│ ├── layout-templates.json
│ ├── design-tokens.json
│ ├── animation-tokens.json (optional)
│ ├── preview.html
│ └── preview.css
└── codify-temp-{timestamp}/ # Temporary workspace (cleaned up after completion)
├── style-extraction/
├── animation-extraction/
└── layout-extraction/
```
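After a run completes, the package can be inspected from the shell with a short sketch like this (the package name is an example):
```bash
# Sketch: inspecting a generated reference package (default paths from the structure above)
pkg=".workflow/reference_style/main-app-style-v1"
ls -l "${pkg}"
jq -r '.layout_templates | length' "${pkg}/layout-templates.json" 2>/dev/null || echo "layout count unavailable"
# Open the interactive preview (platform-dependent):
#   xdg-open "${pkg}/preview.html"   # Linux
#   open "${pkg}/preview.html"       # macOS
```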
---
## Architecture
```
codify-style (orchestrator - simplified interface)
├─ Phase 0: Intelligent Validation
│ ├─ Parse positional path parameter
│ ├─ Auto-generate package name (if not provided)
│ ├─ Safety checks (overwrite protection)
│ └─ User confirmation
├─ Phase 1: /workflow:ui-design:import-from-code (style extraction)
│ ├─ Extract design tokens from source code
│ ├─ Generate style guide
│ └─ Extract component patterns
├─ Phase 2: /workflow:ui-design:reference-page-generator (reference package)
│ ├─ Generate shareable package
│ ├─ Create interactive preview
│ └─ Generate documentation
└─ Phase 3: Cleanup & Verification
├─ Remove temporary workspace
├─ Verify package integrity
└─ Report completion
Design Principles:
✓ No task JSON created by this command
✓ All extraction delegated to import-from-code
✓ All packaging delegated to reference-page-generator
✓ Pure orchestration with intelligent defaults
✓ Single path parameter for maximum simplicity
```

View File

@@ -1,11 +1,11 @@
---
name: update
description: Update brainstorming artifacts with finalized design system references from selected prototypes
name: design-sync
description: Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption
argument-hint: --session <session_id> [--selected-prototypes "<list>"]
allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
---
# Design Update Command
# Design Sync Command
## Overview
@@ -25,10 +25,10 @@ Synchronize finalized design system references to brainstorming artifacts, prepa
```bash
# Validate session
CHECK: .workflow/.active-* marker files; VALIDATE: session_id matches active session
CHECK: find .workflow/active/ -name "WFS-*" -type d; VALIDATE: session_id matches active session
# Verify design artifacts in latest design run
latest_design = find_latest_path_matching(".workflow/WFS-{session}/design-*")
latest_design = find_latest_path_matching(".workflow/active/WFS-{session}/design-run-*")
# Detect design system structure
IF exists({latest_design}/style-extraction/style-1/design-tokens.json):
@@ -51,7 +51,7 @@ REPORT: "Found {count} design artifacts, {prototype_count} prototypes"
```bash
# Check if role analysis documents contains current design run reference
synthesis_spec_path = ".workflow/WFS-{session}/.brainstorming/role analysis documents"
synthesis_spec_path = ".workflow/active/WFS-{session}/.brainstorming/role analysis documents"
current_design_run = basename(latest_design) # e.g., "design-run-20250109-143022"
IF exists(synthesis_spec_path):
@@ -68,8 +68,8 @@ IF exists(synthesis_spec_path):
```bash
# Load target brainstorming artifacts (files to be updated)
Read(.workflow/WFS-{session}/.brainstorming/role analysis documents)
IF exists(.workflow/WFS-{session}/.brainstorming/ui-designer/analysis.md): Read(analysis.md)
Read(.workflow/active/WFS-{session}/.brainstorming/role analysis documents)
IF exists(.workflow/active/WFS-{session}/.brainstorming/ui-designer/analysis.md): Read(analysis.md)
# Optional: Read prototype notes for descriptions (minimal context)
FOR each selected_prototype IN selected_list:
@@ -89,8 +89,8 @@ Update `.brainstorming/role analysis documents` with design system references.
## UI/UX Guidelines
### Design System Reference
**Finalized Design Tokens**: @../design-{run_id}/{design_tokens_path}
**Style Guide**: @../design-{run_id}/{style_guide_path}
**Finalized Design Tokens**: @../{design_id}/{design_tokens_path}
**Style Guide**: @../{design_id}/{style_guide_path}
**Design System Mode**: {design_system_mode}
### Implementation Requirements
@@ -101,19 +101,19 @@ Update `.brainstorming/role analysis documents` with design system references.
### Reference Prototypes
{FOR each selected_prototype:
- **{page_name}**: @../design-{run_id}/prototypes/{prototype}.html | Layout: {layout_strategy from notes}
- **{page_name}**: @../{design_id}/prototypes/{prototype}.html | Layout: {layout_strategy from notes}
}
### Design System Assets
```json
{"design_tokens": "design-{run_id}/{design_tokens_path}", "style_guide": "design-{run_id}/{style_guide_path}", "design_system_mode": "{design_system_mode}", "prototypes": [{FOR each: "design-{run_id}/prototypes/{prototype}.html"}]}
{"design_tokens": "{design_id}/{design_tokens_path}", "style_guide": "{design_id}/{style_guide_path}", "design_system_mode": "{design_system_mode}", "prototypes": [{FOR each: "{design_id}/prototypes/{prototype}.html"}]}
```
```
**Implementation**:
```bash
# Option 1: Edit existing section
Edit(file_path=".workflow/WFS-{session}/.brainstorming/role analysis documents",
Edit(file_path=".workflow/active/WFS-{session}/.brainstorming/role analysis documents",
old_string="## UI/UX Guidelines\n[existing content]",
new_string="## UI/UX Guidelines\n\n[new design reference content]")
@@ -128,15 +128,15 @@ IF section not found:
```bash
# Always update ui-designer
ui_designer_files = Glob(".workflow/WFS-{session}/.brainstorming/ui-designer/analysis*.md")
ui_designer_files = Glob(".workflow/active/WFS-{session}/.brainstorming/ui-designer/analysis*.md")
# Conditionally update other roles
has_animations = exists({latest_design}/animation-extraction/animation-tokens.json)
has_layouts = exists({latest_design}/layout-extraction/layout-templates.json)
IF has_animations: ux_expert_files = Glob(".workflow/WFS-{session}/.brainstorming/ux-expert/analysis*.md")
IF has_layouts: architect_files = Glob(".workflow/WFS-{session}/.brainstorming/system-architect/analysis*.md")
IF selected_list: pm_files = Glob(".workflow/WFS-{session}/.brainstorming/product-manager/analysis*.md")
IF has_animations: ux_expert_files = Glob(".workflow/active/WFS-{session}/.brainstorming/ux-expert/analysis*.md")
IF has_layouts: architect_files = Glob(".workflow/active/WFS-{session}/.brainstorming/system-architect/analysis*.md")
IF selected_list: pm_files = Glob(".workflow/active/WFS-{session}/.brainstorming/product-manager/analysis*.md")
```
**Content Templates**:
@@ -145,9 +145,9 @@ IF selected_list: pm_files = Glob(".workflow/WFS-{session}/.brainstorming/produc
```markdown
## Design System Implementation Reference
**Design Tokens**: @../../design-{run_id}/{design_tokens_path}
**Style Guide**: @../../design-{run_id}/{style_guide_path}
**Prototypes**: {FOR each: @../../design-{run_id}/prototypes/{prototype}.html}
**Design Tokens**: @../../{design_id}/{design_tokens_path}
**Style Guide**: @../../{design_id}/{style_guide_path}
**Prototypes**: {FOR each: @../../{design_id}/prototypes/{prototype}.html}
*Reference added by /workflow:ui-design:update*
```
@@ -156,8 +156,8 @@ IF selected_list: pm_files = Glob(".workflow/WFS-{session}/.brainstorming/produc
```markdown
## Animation & Interaction Reference
**Animations**: @../../design-{run_id}/animation-extraction/animation-tokens.json
**Prototypes**: {FOR each: @../../design-{run_id}/prototypes/{prototype}.html}
**Animations**: @../../{design_id}/animation-extraction/animation-tokens.json
**Prototypes**: {FOR each: @../../{design_id}/prototypes/{prototype}.html}
*Reference added by /workflow:ui-design:update*
```
@@ -166,7 +166,7 @@ IF selected_list: pm_files = Glob(".workflow/WFS-{session}/.brainstorming/produc
```markdown
## Layout Structure Reference
**Layout Templates**: @../../design-{run_id}/layout-extraction/layout-templates.json
**Layout Templates**: @../../{design_id}/layout-extraction/layout-templates.json
*Reference added by /workflow:ui-design:update*
```
@@ -175,7 +175,7 @@ IF selected_list: pm_files = Glob(".workflow/WFS-{session}/.brainstorming/produc
```markdown
## Prototype Validation Reference
**Prototypes**: {FOR each: @../../design-{run_id}/prototypes/{prototype}.html}
**Prototypes**: {FOR each: @../../{design_id}/prototypes/{prototype}.html}
*Reference added by /workflow:ui-design:update*
```
@@ -197,8 +197,8 @@ Create or update `.brainstorming/ui-designer/design-system-reference.md`:
## Design System Integration
This style guide references the finalized design system from the design refinement phase.
**Design Tokens**: @../../design-{run_id}/{design_tokens_path}
**Style Guide**: @../../design-{run_id}/{style_guide_path}
**Design Tokens**: @../../{design_id}/{design_tokens_path}
**Style Guide**: @../../{design_id}/{style_guide_path}
**Design System Mode**: {design_system_mode}
## Implementation Guidelines
@@ -209,13 +209,13 @@ This style guide references the finalized design system from the design refineme
## Reference Prototypes
{FOR each selected_prototype:
- **{page_name}**: @../../design-{run_id}/prototypes/{prototype}.html
- **{page_name}**: @../../{design_id}/prototypes/{prototype}.html
}
## Token System
For complete token definitions and usage examples, see:
- Design Tokens: @../../design-{run_id}/{design_tokens_path}
- Style Guide: @../../design-{run_id}/{style_guide_path}
- Design Tokens: @../../{design_id}/{design_tokens_path}
- Style Guide: @../../{design_id}/{style_guide_path}
---
*Auto-generated by /workflow:ui-design:update | Last updated: {timestamp}*
@@ -223,11 +223,72 @@ For complete token definitions and usage examples, see:
**Implementation**:
```bash
Write(file_path=".workflow/WFS-{session}/.brainstorming/ui-designer/design-system-reference.md",
Write(file_path=".workflow/active/WFS-{session}/.brainstorming/ui-designer/design-system-reference.md",
content="[generated content with @ references]")
```
### Phase 5: Completion
### Phase 5: Update Context Package
**Purpose**: Sync design system references to context-package.json
**Operations**:
```bash
context_pkg_path = ".workflow/active/WFS-{session}/.process/context-package.json"
# 1. Read existing package
context_pkg = Read(context_pkg_path)
# 2. Update brainstorm_artifacts (role analyses now contain @ design references)
brainstorm_dir = ".workflow/active/WFS-{session}/.brainstorming"
role_analysis_files = Glob({brainstorm_dir}/*/analysis*.md)
context_pkg.brainstorm_artifacts.role_analyses = []
FOR file IN role_analysis_files:
role_name = extract_role_from_path(file)
relative_path = file.replace({brainstorm_dir}/, "")
context_pkg.brainstorm_artifacts.role_analyses.push({
"role": role_name,
"files": [{
"path": relative_path,
"type": "primary",
"content": Read(file), # Contains @ design system references
"updated_at": NOW()
}]
})
# 3. Add design_system_references field
context_pkg.design_system_references = {
"design_run_id": design_id,
"tokens": `${design_id}/${design_tokens_path}`,
"style_guide": `${design_id}/${style_guide_path}`,
"prototypes": selected_list.map(p => `${design_id}/prototypes/${p}.html`),
"updated_at": NOW()
}
# 4. Optional: Add animations and layouts if they exist
IF exists({latest_design}/animation-extraction/animation-tokens.json):
context_pkg.design_system_references.animations = `${design_id}/animation-extraction/animation-tokens.json`
IF exists({latest_design}/layout-extraction/layout-templates.json):
context_pkg.design_system_references.layouts = `${design_id}/layout-extraction/layout-templates.json`
# 5. Update metadata
context_pkg.metadata.updated_at = NOW()
context_pkg.metadata.design_sync_timestamp = NOW()
# 6. Write back
Write(context_pkg_path, JSON.stringify(context_pkg, indent=2))
REPORT: "✅ Updated context-package.json with design system references"
```
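As a concrete illustration of steps 3 and 6 above, a minimal jq-based sketch (paths and field names follow the pseudocode; the token and style-guide sub-paths are assumptions):
```bash
# Sketch: write design_system_references into context-package.json (field names from the pseudocode above)
pkg=".workflow/active/WFS-${session}/.process/context-package.json"
ts="$(date +%Y-%m-%dT%H:%M:%S)"
jq --arg id "$design_id" --arg ts "$ts" '
  .design_system_references = {
    design_run_id: $id,
    tokens:      ($id + "/style-extraction/style-1/design-tokens.json"),
    style_guide: ($id + "/style-extraction/style-1/style-guide.md"),
    updated_at:  $ts
  }
  | .metadata.updated_at = $ts
  | .metadata.design_sync_timestamp = $ts
' "$pkg" > "${pkg}.tmp" && mv "${pkg}.tmp" "$pkg"
```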
**TodoWrite Update**:
```json
{"content": "Update context package with design references", "status": "completed", "activeForm": "Updating context package"}
```
### Phase 6: Completion
```javascript
TodoWrite({todos: [
@@ -259,7 +320,7 @@ Next: /workflow:plan [--agent] "<task description>"
**Updated Files**:
```
.workflow/WFS-{session}/.brainstorming/
.workflow/active/WFS-{session}/.brainstorming/
├── role analysis documents # Updated with UI/UX Guidelines section
├── ui-designer/
│ ├── analysis*.md # Updated with design system references
@@ -271,24 +332,24 @@ Next: /workflow:plan [--agent] "<task description>"
**@ Reference Format** (role analysis documents):
```
@../design-{run_id}/style-extraction/style-1/design-tokens.json
@../design-{run_id}/style-extraction/style-1/style-guide.md
@../design-{run_id}/prototypes/{prototype}.html
@../{design_id}/style-extraction/style-1/design-tokens.json
@../{design_id}/style-extraction/style-1/style-guide.md
@../{design_id}/prototypes/{prototype}.html
```
**@ Reference Format** (ui-designer/design-system-reference.md):
```
@../../design-{run_id}/style-extraction/style-1/design-tokens.json
@../../design-{run_id}/style-extraction/style-1/style-guide.md
@../../design-{run_id}/prototypes/{prototype}.html
@../../{design_id}/style-extraction/style-1/design-tokens.json
@../../{design_id}/style-extraction/style-1/style-guide.md
@../../{design_id}/prototypes/{prototype}.html
```
**@ Reference Format** (role analysis.md files):
```
@../../design-{run_id}/style-extraction/style-1/design-tokens.json
@../../design-{run_id}/animation-extraction/animation-tokens.json
@../../design-{run_id}/layout-extraction/layout-templates.json
@../../design-{run_id}/prototypes/{prototype}.html
@../../{design_id}/style-extraction/style-1/design-tokens.json
@../../{design_id}/animation-extraction/animation-tokens.json
@../../{design_id}/layout-extraction/layout-templates.json
@../../{design_id}/prototypes/{prototype}.html
```
## Integration with /workflow:plan
@@ -307,9 +368,9 @@ After this update, `/workflow:plan` will discover design assets through:
"task_id": "IMPL-001",
"context": {
"design_system": {
"tokens": "design-{run_id}/style-extraction/style-1/design-tokens.json",
"style_guide": "design-{run_id}/style-extraction/style-1/style-guide.md",
"prototypes": ["design-{run_id}/prototypes/dashboard-variant-1.html"]
"tokens": "{design_id}/style-extraction/style-1/design-tokens.json",
"style_guide": "{design_id}/style-extraction/style-1/style-guide.md",
"prototypes": ["{design_id}/prototypes/dashboard-variant-1.html"]
}
}
}
@@ -349,15 +410,3 @@ After update, verify:
- **Next Phase**: `/workflow:plan` discovers and utilizes design system through @ references
- **Auto Integration**: Automatically triggered by `/workflow:ui-design:auto` workflow
## Why Main Claude Execution?
This command is executed directly by main Claude (not delegated to an Agent) because:
1. **Simple Reference Generation**: Only generating file paths, not complex synthesis
2. **Context Preservation**: Main Claude has full session and conversation context
3. **Minimal Transformation**: Primarily updating references, not analyzing content
4. **Path Resolution**: Requires precise relative path calculation
5. **Edit Operations**: Better error recovery for Edit conflicts
6. **Synthesis Pattern**: Follows same direct-execution pattern as other reference updates
This ensures reliable, lightweight integration without Agent handoff overhead.

View File

@@ -1,7 +1,7 @@
---
name: explore-auto
description: Exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution
argument-hint: "[--prompt "<desc>"] [--images "<glob>"] [--targets "<list>"] [--target-type "page|component"] [--session <id>] [--style-variants <count>] [--layout-variants <count>] [--batch-plan]""
description: Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection
argument-hint: "[--input "<value>"] [--targets "<list>"] [--target-type "page|component"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*), Task(conceptual-planning-agent)
---
@@ -18,35 +18,58 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*
**Autonomous Flow** (⚠️ CONTINUOUS EXECUTION - DO NOT STOP):
1. User triggers: `/workflow:ui-design:explore-auto [params]`
2. Phase 0c: Target confirmation → User confirms → **IMMEDIATELY triggers Phase 1**
3. Phase 1 (style-extract) → **WAIT for completion** → Auto-continues
4. Phase 2.3 (animation-extract, optional) → **WAIT for completion** → Auto-continues
5. Phase 2.5 (layout-extract) → **WAIT for completion** → Auto-continues
6. **Phase 3 (ui-assembly)** → **WAIT for completion** → Auto-continues
7. Phase 4 (design-update) → **WAIT for completion** → Auto-continues
8. Phase 5 (batch-plan, optional) → Reports completion
2. Phase 5: Target confirmation → User confirms → **IMMEDIATELY triggers Phase 7**
3. Phase 7 (style-extract) → **Attach tasks → Execute → Collapse** → Auto-continues to Phase 8
4. Phase 8 (animation-extract, conditional):
- **IF should_extract_animation**: **Attach tasks → Execute → Collapse** → Auto-continues to Phase 9
- **ELSE**: Skip (use code import) → Auto-continues to Phase 9
5. Phase 9 (layout-extract) → **Attach tasks → Execute → Collapse** → Auto-continues to Phase 10
6. **Phase 10 (ui-assembly)** → **Attach tasks → Execute → Collapse** → Workflow complete
**Phase Transition Mechanism**:
- **Phase 0c (User Interaction)**: User confirms targets → IMMEDIATELY triggers Phase 1
- **Phase 1-5 (Autonomous)**: `SlashCommand` is BLOCKING - execution pauses until completion
- Upon each phase completion: Automatically process output and execute next phase
- No additional user interaction after Phase 0c confirmation
- **Phase 5 (User Interaction)**: User confirms targets → IMMEDIATELY triggers Phase 7
- **Phase 7-10 (Autonomous)**: `SlashCommand` invocation **ATTACHES** tasks to current workflow
- **Task Execution**: Orchestrator **EXECUTES** these attached tasks itself
- **Task Collapse**: After tasks complete, collapse them into phase summary
- **Phase Transition**: Automatically execute next phase after collapsing
- No additional user interaction after Phase 5 confirmation
**Auto-Continue Mechanism**: TodoWrite tracks phase status. Upon each phase completion, you MUST immediately construct and execute the next phase command. No user intervention required. The workflow is NOT complete until reaching Phase 4 (or Phase 5 if --batch-plan).
**Auto-Continue Mechanism**: TodoWrite tracks phase status with dynamic task attachment/collapse. After executing all attached tasks, you MUST immediately collapse them, restore phase summary, and execute the next phase. No user intervention required. The workflow is NOT complete until Phase 10 (UI assembly) finishes.
**Task Attachment Model**: SlashCommand invocation is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.
**Target Type Detection**: Automatically inferred from prompt/targets, or explicitly set via `--target-type`.
## Core Rules
1. **Start Immediately**: TodoWrite initialization → Phase 1 execution
1. **Start Immediately**: TodoWrite initialization → Phase 7 execution
2. **No Preliminary Validation**: Sub-commands handle their own validation
3. **Parse & Pass**: Extract data from each output for next phase
4. **Default to All**: When selecting variants/prototypes, use ALL generated items
5. **Track Progress**: Update TodoWrite after each phase
6. **⚠️ CRITICAL: DO NOT STOP** - This is a continuous multi-phase workflow. After each SlashCommand completes, you MUST wait for completion, then immediately execute the next phase. Workflow is NOT complete until Phase 4 (or Phase 5 if --batch-plan).
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
6. **⚠️ CRITICAL: Task Attachment Model** - SlashCommand invocation **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
7. **⚠️ CRITICAL: DO NOT STOP** - This is a continuous multi-phase workflow. After executing all attached tasks, you MUST immediately collapse them and execute the next phase. Workflow is NOT complete until Phase 10 (UI assembly) finishes.
## Parameter Requirements
**Recommended Parameter**:
- `--input "<value>"`: Unified input source (auto-detects type)
- **Glob pattern** (images): `"design-refs/*"`, `"screenshots/*.png"`
- **File/directory path** (code): `"./src/components"`, `"/path/to/styles"`
- **Text description** (prompt): `"modern dashboard with 3 styles"`, `"minimalist design"`
- **Combination**: `"design-refs/* modern dashboard"` (glob + description)
- Multiple inputs: Separate with `|` → `"design-refs/*|modern style"`
**Detection Logic**:
- Contains `*` or matches existing files → **glob pattern** (images)
- Existing file/directory path → **code import**
- Pure text without paths → **design prompt**
- Contains `|` separator → **multiple inputs** (glob|prompt or path|prompt)
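A minimal bash sketch of this detection logic (`classify_input` is an illustrative helper, not part of the command):
```bash
# Sketch: input classification following the rules above
classify_input() {
  local part="$1"
  if [[ "$part" == *"*"* ]] || compgen -G "$part" > /dev/null; then
    echo "images"    # glob pattern or existing file matches
  elif [ -e "$part" ]; then
    echo "code"      # existing file or directory path
  else
    echo "prompt"    # plain text description
  fi
}
# Multiple inputs separated by "|"
IFS='|' read -ra parts <<< "design-refs/*|modern dashboard"
for p in "${parts[@]}"; do
  echo "${p} -> $(classify_input "$p")"
done
```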
**Legacy Parameters** (deprecated, use `--input` instead):
- `--images "<glob>"`: Reference image paths (shows deprecation warning)
- `--prompt "<description>"`: Design description (shows deprecation warning)
**Optional Parameters** (all have smart defaults):
- `--targets "<list>"`: Comma-separated targets (pages/components) to generate (inferred from prompt/session if omitted)
- `--target-type "page|component|auto"`: Explicitly set target type (default: `auto` - intelligent detection)
@@ -56,19 +79,16 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*
- **Tablet**: 768×1024px - Hybrid touch/mouse layouts
- **Responsive**: 1920×1080px base with mobile-first breakpoints
- `--session <id>`: Workflow session ID (standalone mode if omitted)
- `--images "<glob>"`: Reference image paths (default: `design-refs/*`)
- `--prompt "<description>"`: Design style and target description
- `--style-variants <count>`: Style variants (default: inferred from prompt or 3, range: 1-5)
- `--layout-variants <count>`: Layout variants per style (default: inferred or 3, range: 1-5)
- `--batch-plan`: Auto-generate implementation tasks after design-update
**Legacy Parameters** (maintained for backward compatibility):
**Legacy Target Parameters** (maintained for backward compatibility):
- `--pages "<list>"`: Alias for `--targets` with `--target-type page`
- `--components "<list>"`: Alias for `--targets` with `--target-type component`
**Input Rules**:
- Must provide at least one: `--images` or `--prompt` or `--targets`
- Multiple parameters can be combined for guided analysis
- Must provide: `--input` OR (legacy: `--images`/`--prompt`) OR `--targets`
- `--input` can combine multiple input types
- If `--targets` not provided, intelligently inferred from prompt/session
**Supported Target Types**:
@@ -105,87 +125,109 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*
**Integrated vs. Standalone**:
- `--session` flag determines session integration or standalone execution
## 6-Phase Execution
## 10-Phase Execution
### Phase 0a: Intelligent Prompt Parsing
### Phase 1: Parameter Parsing & Input Detection
**Unified Principle**: Detect → Classify → Store (avoid string concatenation and escaping)
**Step 1: Parameter Normalization**
```bash
# Parse variant counts from prompt or use explicit/default values
IF --prompt AND (NOT --style-variants OR NOT --layout-variants):
style_variants = regex_extract(prompt, r"(\d+)\s*style") OR --style-variants OR 3
layout_variants = regex_extract(prompt, r"(\d+)\s*layout") OR --layout-variants OR 3
ELSE:
style_variants = --style-variants OR 3
layout_variants = --layout-variants OR 3
# Legacy parameters (deprecated)
IF --images OR --prompt:
WARN: "⚠️ --images/--prompt deprecated. Use --input"
images_input = --images; prompt_text = --prompt
VALIDATE: 1 <= style_variants <= 5, 1 <= layout_variants <= 5
# Unified --input (split by "|")
ELSE IF --input:
FOR part IN split(--input, "|"):
IF "*" IN part OR glob_exists(part): images_input = part
ELSE IF path_exists(part): prompt_text += part
ELSE: prompt_text += part
```
### Phase 0a-2: Device Type Inference
**Step 2: Design Source Detection**
```bash
# Device type inference
device_type = "auto"
code_base_path = extract_first_valid_path(prompt_text)
has_visual_input = (images_input AND glob_exists(images_input))
# Step 1: Explicit parameter (highest priority)
IF --device-type AND --device-type != "auto":
device_type = --device-type
device_source = "explicit"
ELSE:
# Step 2: Prompt analysis
IF --prompt:
device_keywords = {
"desktop": ["desktop", "web", "laptop", "widescreen", "large screen"],
"mobile": ["mobile", "phone", "smartphone", "ios", "android"],
"tablet": ["tablet", "ipad", "medium screen"],
"responsive": ["responsive", "adaptive", "multi-device", "cross-platform"]
}
detected_device = detect_device_from_prompt(--prompt, device_keywords)
IF detected_device:
device_type = detected_device
device_source = "prompt_inference"
# Step 3: Target type inference
IF device_type == "auto":
# Components are typically desktop-first, pages can vary
device_type = target_type == "component" ? "desktop" : "responsive"
device_source = "target_type_inference"
STORE: device_type, device_source
design_source = classify_source(code_base_path, has_visual_input):
• code + visual → "hybrid"
• code only → "code_only"
• visual/prompt → "visual_only"
• none → ERROR
```
**Device Type Presets**:
- **Desktop**: 1920×1080px - Mouse-driven, spacious layouts
- **Mobile**: 375×812px - Touch-friendly, compact layouts
- **Tablet**: 768×1024px - Hybrid touch/mouse layouts
- **Responsive**: 1920×1080px base with mobile-first breakpoints
**Stored Variables**: `design_source`, `code_base_path`, `has_visual_input`, `images_input`, `prompt_text`
**Detection Keywords**:
- Prompt contains "mobile", "phone", "smartphone" → mobile
- Prompt contains "tablet", "ipad" → tablet
- Prompt contains "desktop", "web", "laptop" → desktop
- Prompt contains "responsive", "adaptive" → responsive
- Otherwise: Inferred from target type (components→desktop, pages→responsive)
---
### Phase 2: Intelligent Prompt Parsing
**Unified Principle**: explicit > inferred > default
### Phase 0b: Run Initialization & Directory Setup
```bash
run_id = "run-$(date +%Y%m%d-%H%M%S)"
base_path = --session ? ".workflow/WFS-{session}/design-${run_id}" : ".workflow/.design/${run_id}"
# Variant counts (priority chain)
style_variants = --style-variants OR extract_number(prompt_text, "style") OR 3
layout_variants = --layout-variants OR extract_number(prompt_text, "layout") OR 3
Bash(mkdir -p "${base_path}/{style-extraction,style-consolidation,prototypes}")
VALIDATE: 1 ≤ variants ≤ 5
```
**Stored Variables**: `style_variants`, `layout_variants`
---
### Phase 3: Device Type Inference
**Unified Principle**: explicit > prompt keywords > target_type > default
```bash
# Device type (priority chain)
device_type = --device-type (if != "auto")
OR detect_keywords(prompt_text, ["mobile", "desktop", "tablet", "responsive"])
OR infer_from_target(target_type) # component→desktop, page→responsive
OR "responsive"
device_source = track_detection_source()
```
**Detection Keywords**: mobile, phone, smartphone → mobile | desktop, web, laptop → desktop | tablet, ipad → tablet | responsive, adaptive → responsive
**Device Presets**: Desktop (1920×1080) | Mobile (375×812) | Tablet (768×1024) | Responsive (1920×1080 + breakpoints)
**Stored Variables**: `device_type`, `device_source`
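A minimal bash sketch of this priority chain (helper and variable names are illustrative):
```bash
# Sketch: device-type inference following the priority chain above
detect_device() {
  local text="${1,,}"                      # lowercase (bash 4+)
  case "$text" in
    *mobile*|*phone*|*smartphone*)  echo "mobile" ;;
    *tablet*|*ipad*)                echo "tablet" ;;
    *desktop*|*laptop*|*web*)       echo "desktop" ;;
    *responsive*|*adaptive*)        echo "responsive" ;;
  esac
}
device_type="${explicit_device_type:-$(detect_device "$prompt_text")}"
if [ -z "$device_type" ]; then
  # Fall back to target-type inference: component → desktop, page → responsive
  [ "$target_type" = "component" ] && device_type="desktop" || device_type="responsive"
fi
```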
### Phase 4: Run Initialization & Directory Setup
```bash
design_id = "design-run-$(date +%Y%m%d)-$RANDOM"
relative_base_path = --session ? ".workflow/active/WFS-{session}/${design_id}" : ".workflow/${design_id}"
# Create directory and convert to absolute path
Bash(mkdir -p "${relative_base_path}/style-extraction")
Bash(mkdir -p "${relative_base_path}/prototypes")
base_path=$(cd "${relative_base_path}" && pwd)
Write({base_path}/.run-metadata.json): {
"run_id": "${run_id}", "session_id": "${session_id}", "timestamp": "...",
"design_id": "${design_id}", "session_id": "${session_id}", "timestamp": "...",
"workflow": "ui-design:auto",
"architecture": "style-centric-batch-generation",
"parameters": { "style_variants": ${style_variants}, "layout_variants": ${layout_variants},
"targets": "${inferred_target_list}", "target_type": "${target_type}",
"prompt": "${prompt_text}", "images": "${images_pattern}",
"prompt": "${prompt_text}", "images": "${images_input}",
"input": "${--input}",
"device_type": "${device_type}", "device_source": "${device_source}" },
"status": "in_progress",
"performance_mode": "optimized"
}
# Initialize default flags for animation extraction logic
animation_complete = false # Default: always extract animations unless code import proves complete
needs_visual_supplement = false # Will be set to true in hybrid mode
skip_animation_extraction = false # User preference for code import scenario
```
### Phase 0c: Unified Target Inference with Intelligent Type Detection
### Phase 5: Unified Target Inference with Intelligent Type Detection
```bash
# Priority: --pages/--components (legacy) → --targets → --prompt analysis → synthesis → default
target_list = []; target_type = "auto"; target_source = "none"
@@ -198,8 +240,8 @@ ELSE IF --targets:
target_type = --target-type != "auto" ? --target-type : detect_target_type(target_list)
# Step 3: Prompt analysis (Claude internal analysis)
ELSE IF --prompt:
analysis_result = analyze_prompt("{prompt_text}") # Extract targets, types, purpose
ELSE IF prompt_text:
analysis_result = analyze_prompt(prompt_text) # Extract targets, types, purpose
target_list = analysis_result.targets
target_type = analysis_result.primary_type OR detect_target_type(target_list)
target_source = "prompt_analysis"
@@ -250,7 +292,7 @@ MATCH user_input:
STORE: inferred_target_list, target_type, target_inference_source
# ⚠️ CRITICAL: User confirmation complete, IMMEDIATELY initialize TodoWrite and execute Phase 1
# ⚠️ CRITICAL: User confirmation complete, IMMEDIATELY initialize TodoWrite and execute Phase 7
# This is the only user interaction point in the workflow
# After this point, all subsequent phases execute automatically without user intervention
```
@@ -267,175 +309,245 @@ detect_target_type(target_list):
RETURN "component" IF component_matches > page_matches ELSE "page"
```
### Phase 1: Style Extraction
### Phase 6: Code Import & Completeness Assessment (Conditional)
```bash
command = "/workflow:ui-design:style-extract --base-path \"{base_path}\" " +
(--images ? "--images \"{images}\" " : "") +
(--prompt ? "--prompt \"{prompt}\" " : "") +
"--mode explore --variants {style_variants}"
SlashCommand(command)
IF design_source IN ["code_only", "hybrid"]:
REPORT: "🔍 Phase 6: Code Import ({design_source})"
command = "/workflow:ui-design:import-from-code --design-id \"{design_id}\" --source \"{code_base_path}\""
# Output: {style_variants} style cards with design_attributes
# SlashCommand blocks until phase complete
# Upon completion, IMMEDIATELY execute Phase 2.3 (auto-continue)
TRY:
# SlashCommand invocation ATTACHES import-from-code's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself:
# - Phase 0: Discover and categorize code files
# - Phase 1.1-1.3: Style/Animation/Layout Agent extraction
SlashCommand(command)
CATCH error:
WARN: "⚠️ Code import failed: {error}"
WARN: "Cleaning up incomplete import directories"
Bash(rm -rf "{base_path}/style-extraction" "{base_path}/animation-extraction" "{base_path}/layout-extraction" 2>/dev/null)
IF design_source == "code_only":
REPORT: "Cannot proceed with code-only mode after import failure"
EXIT 1
ELSE: # hybrid mode
WARN: "Continuing with visual-only mode"
design_source = "visual_only"
# Check file existence and assess completeness
style_exists = exists("{base_path}/style-extraction/style-1/design-tokens.json")
animation_exists = exists("{base_path}/animation-extraction/animation-tokens.json")
layout_count = bash(ls {base_path}/layout-extraction/layout-*.json 2>/dev/null | wc -l)
layout_exists = (layout_count > 0)
style_complete = false
animation_complete = false
layout_complete = false
missing_categories = []
# Style completeness check
IF style_exists:
tokens = Read("{base_path}/style-extraction/style-1/design-tokens.json")
style_complete = (
tokens.colors?.brand && tokens.colors?.surface &&
tokens.typography?.font_family && tokens.spacing &&
Object.keys(tokens.colors.brand || {}).length >= 3 &&
Object.keys(tokens.spacing || {}).length >= 8
)
IF NOT style_complete AND tokens._metadata?.completeness?.missing_categories:
missing_categories.extend(tokens._metadata.completeness.missing_categories)
ELSE:
missing_categories.push("style tokens")
# Animation completeness check
IF animation_exists:
anim = Read("{base_path}/animation-extraction/animation-tokens.json")
animation_complete = (
anim.duration && anim.easing &&
Object.keys(anim.duration || {}).length >= 3 &&
Object.keys(anim.easing || {}).length >= 3
)
IF NOT animation_complete AND anim._metadata?.completeness?.missing_items:
missing_categories.extend(anim._metadata.completeness.missing_items)
ELSE:
missing_categories.push("animation tokens")
# Layout completeness check
IF layout_exists:
# Read first layout file to verify structure
first_layout = bash(ls {base_path}/layout-extraction/layout-*.json 2>/dev/null | head -1)
layout_data = Read(first_layout)
layout_complete = (
layout_count >= 1 &&
layout_data.template?.dom_structure &&
layout_data.template?.css_layout_rules
)
IF NOT layout_complete:
missing_categories.push("complete layout structure")
ELSE:
missing_categories.push("layout templates")
needs_visual_supplement = false
IF design_source == "code_only" AND NOT (style_complete AND layout_complete):
REPORT: "⚠️ Missing: {', '.join(missing_categories)}"
REPORT: "Options: 'continue' | 'supplement: <images>' | 'cancel'"
user_response = WAIT_FOR_USER_INPUT()
MATCH user_response:
"continue" → needs_visual_supplement = false
"supplement: ..." → needs_visual_supplement = true; --images = extract_path(user_response)
"cancel" → EXIT 0
default → needs_visual_supplement = false
ELSE IF design_source == "hybrid":
needs_visual_supplement = true
# Animation reuse confirmation (code import with complete animations)
IF design_source == "code_only" AND animation_complete:
REPORT: "✅ Complete animation system detected (from code import)"
REPORT: " Duration scales: {duration_count} | Easing functions: {easing_count}"
REPORT: ""
REPORT: "Options:"
REPORT: " • 'reuse' (default) - Reuse existing animation system"
REPORT: " • 'regenerate' - Regenerate animation system (interactive)"
REPORT: " • 'cancel' - Cancel workflow"
user_response = WAIT_FOR_USER_INPUT()
MATCH user_response:
"reuse" → skip_animation_extraction = true
"regenerate" → skip_animation_extraction = false
"cancel" → EXIT 0
default → skip_animation_extraction = true # Default: reuse
STORE: needs_visual_supplement, style_complete, animation_complete, layout_complete, skip_animation_extraction
```
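For reference, the style completeness check above could be approximated with a short jq-based sketch (thresholds mirror the pseudocode; the token structure is an assumption):
```bash
# Sketch: style completeness check with jq (thresholds from the pseudocode above)
tokens="${base_path}/style-extraction/style-1/design-tokens.json"
if [ -f "$tokens" ] && jq -e '
     ((.colors.brand // {} | length) >= 3)
     and (.colors.surface != null)
     and (.typography.font_family != null)
     and ((.spacing // {} | length) >= 8)
   ' "$tokens" > /dev/null; then
  style_complete=true
else
  style_complete=false
fi
echo "style_complete=${style_complete}"
```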
### Phase 2.3: Animation Extraction (Optional - Interactive Mode)
### Phase 7: Style Extraction
```bash
# Animation extraction for motion design patterns
REPORT: "🚀 Phase 2.3: Animation Extraction (interactive mode)"
REPORT: " → Mode: Interactive specification"
REPORT: " → Purpose: Define motion design patterns"
IF design_source == "visual_only" OR needs_visual_supplement:
REPORT: "🎨 Phase 7: Style Extraction (variants: {style_variants})"
command = "/workflow:ui-design:style-extract --design-id \"{design_id}\" " +
(images_input ? "--images \"{images_input}\" " : "") +
(prompt_text ? "--prompt \"{prompt_text}\" " : "") +
"--variants {style_variants} --interactive"
command = "/workflow:ui-design:animation-extract --base-path \"{base_path}\" --mode interactive"
# SlashCommand invocation ATTACHES style-extract's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(command)
SlashCommand(command)
# After executing all attached tasks, collapse them into phase summary
ELSE:
REPORT: "✅ Phase 7: Style (Using Code Import)"
```
### Phase 8: Animation Extraction
```bash
# Determine if animation extraction is needed
should_extract_animation = false
IF (design_source == "visual_only" OR needs_visual_supplement):
# Pure visual input or hybrid mode requiring visual supplement
should_extract_animation = true
ELSE IF NOT animation_complete:
# Code import but animations are incomplete
should_extract_animation = true
ELSE IF design_source == "code_only" AND animation_complete AND NOT skip_animation_extraction:
# Code import with complete animations, but user chose to regenerate
should_extract_animation = true
IF should_extract_animation:
REPORT: "🚀 Phase 8: Animation Extraction"
# Build command with available inputs
command_parts = [f"/workflow:ui-design:animation-extract --design-id \"{design_id}\""]
IF images_input:
command_parts.append(f"--images \"{images_input}\"")
IF prompt_text:
command_parts.append(f"--prompt \"{prompt_text}\"")
command_parts.append("--interactive")
command = " ".join(command_parts)
# SlashCommand invocation ATTACHES animation-extract's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(command)
# After executing all attached tasks, collapse them into phase summary
ELSE:
REPORT: "✅ Phase 8: Animation (Using Code Import)"
# Output: animation-tokens.json + animation-guide.md
# SlashCommand blocks until phase complete
# Upon completion, IMMEDIATELY execute Phase 2.5 (auto-continue)
# When phase finishes, IMMEDIATELY execute Phase 9 (auto-continue)
```
### Phase 2.5: Layout Extraction
### Phase 9: Layout Extraction
```bash
targets_string = ",".join(inferred_target_list)
command = "/workflow:ui-design:layout-extract --base-path \"{base_path}\" " +
(--images ? "--images \"{images}\" " : "") +
(--prompt ? "--prompt \"{prompt}\" " : "") +
"--targets \"{targets_string}\" " +
"--mode explore --variants {layout_variants} " +
"--device-type \"{device_type}\""
REPORT: "🚀 Phase 2.5: Layout Extraction (explore mode)"
REPORT: " → Targets: {targets_string}"
REPORT: " → Layout variants: {layout_variants}"
REPORT: " → Device: {device_type}"
IF (design_source == "visual_only" OR needs_visual_supplement) OR (NOT layout_complete):
REPORT: "🚀 Phase 9: Layout Extraction ({targets_string}, variants: {layout_variants}, device: {device_type})"
command = "/workflow:ui-design:layout-extract --design-id \"{design_id}\" " +
(images_input ? "--images \"{images_input}\" " : "") +
(prompt_text ? "--prompt \"{prompt_text}\" " : "") +
"--targets \"{targets_string}\" --variants {layout_variants} --device-type \"{device_type}\" --interactive"
SlashCommand(command)
# SlashCommand invocation ATTACHES layout-extract's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(command)
# Output: layout-templates.json with {targets × layout_variants} layout structures
# SlashCommand blocks until phase complete
# Upon completion, IMMEDIATELY execute Phase 3 (auto-continue)
# After executing all attached tasks, collapse them into phase summary
ELSE:
REPORT: "✅ Phase 9: Layout (Using Code Import)"
```
### Phase 3: UI Assembly
### Phase 10: UI Assembly
```bash
command = "/workflow:ui-design:generate --base-path \"{base_path}\" " +
"--style-variants {style_variants} --layout-variants {layout_variants}"
command = "/workflow:ui-design:generate --design-id \"{design_id}\"" + (--session ? " --session {session_id}" : "")
total = style_variants × layout_variants × len(inferred_target_list)
REPORT: "🚀 Phase 3: UI Assembly | Matrix: {s}×{l}×{n} = {total} prototypes"
REPORT: "🚀 Phase 10: UI Assembly | Matrix: {s}×{l}×{n} = {total} prototypes"
REPORT: " → Pure assembly: Combining layout templates + design tokens"
REPORT: " → Device: {device_type} (from layout templates)"
REPORT: " → Assembly tasks: {total} combinations"
# SlashCommand invocation ATTACHES generate's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(command)
# SlashCommand blocks until phase complete
# Upon completion, IMMEDIATELY execute Phase 4 (auto-continue)
# Output:
# After executing all attached tasks, collapse them into phase summary
# Workflow complete - generate command handles preview file generation (compare.html, PREVIEW.md)
# Output (generated by generate command):
# - {target}-style-{s}-layout-{l}.html (assembled prototypes)
# - {target}-style-{s}-layout-{l}.css
# - compare.html (interactive matrix view)
# - PREVIEW.md (usage instructions)
```
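For reference, the expected prototype matrix can be enumerated with a short bash sketch (targets and counts are example values; filenames follow the naming listed above):
```bash
# Sketch: enumerate expected prototype files for the {s}×{l}×{n} matrix (example values)
targets=(home dashboard settings)
style_variants=3
layout_variants=3
for t in "${targets[@]}"; do
  for s in $(seq 1 "$style_variants"); do
    for l in $(seq 1 "$layout_variants"); do
      echo "${base_path}/prototypes/${t}-style-${s}-layout-${l}.html"
    done
  done
done
echo "Total: $(( ${#targets[@]} * style_variants * layout_variants )) prototypes"
```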
### Phase 4: Design System Integration
```bash
command = "/workflow:ui-design:update" + (--session ? " --session {session_id}" : "")
SlashCommand(command)
# SlashCommand blocks until phase complete
# Upon completion:
# - If --batch-plan flag present: IMMEDIATELY execute Phase 5 (auto-continue)
# - If no --batch-plan: Workflow complete, display final report
```
### Phase 5: Batch Task Generation (Optional)
```bash
IF --batch-plan:
FOR target IN inferred_target_list:
task_desc = "Implement {target} {target_type} based on design system"
SlashCommand("/workflow:plan --agent \"{task_desc}\"")
```
## TodoWrite Pattern
```javascript
// Initialize IMMEDIATELY after Phase 0c user confirmation to track multi-phase execution
// Initialize IMMEDIATELY after Phase 5 user confirmation to track multi-phase execution (4 orchestrator-level tasks)
TodoWrite({todos: [
{"content": "Execute style extraction", "status": "in_progress", "activeForm": "Executing..."},
{"content": "Execute layout extraction", "status": "pending", "activeForm": "Executing..."},
{"content": "Execute UI assembly", "status": "pending", "activeForm": "Executing..."},
{"content": "Execute design integration", "status": "pending", "activeForm": "Executing..."}
{"content": "Execute style extraction", "status": "in_progress", "activeForm": "Executing style extraction"},
{"content": "Execute animation extraction", "status": "pending", "activeForm": "Executing animation extraction"},
{"content": "Execute layout extraction", "status": "pending", "activeForm": "Executing layout extraction"},
{"content": "Execute UI assembly", "status": "pending", "activeForm": "Executing UI assembly"}
]})
// ⚠️ CRITICAL: After EACH SlashCommand completion (Phase 1-5), you MUST:
// 1. SlashCommand blocks and returns when phase is complete
// 2. Update current phase: status → "completed"
// 3. Update next phase: status → "in_progress"
// 4. IMMEDIATELY execute next phase SlashCommand (auto-continue)
// This ensures continuous workflow tracking and prevents premature stopping
```
## Key Features
- **🚀 Performance**: Style-centric batch generation with S agent calls
- **🎨 Style-Aware**: HTML structure adapts to design_attributes
- **✅ Perfect Consistency**: Each style by single agent
- **📦 Autonomous**: No user intervention required between phases
- **🧠 Intelligent**: Parses natural language, infers targets/types
- **🔄 Reproducible**: Deterministic flow with isolated run directories
- **🎯 Flexible**: Supports pages, components, or mixed targets
## Examples
### 1. Page Mode (Prompt Inference)
```bash
/workflow:ui-design:explore-auto --prompt "Modern blog: home, article, author"
# Result: 27 prototypes (3×3×3) - responsive layouts (default)
```
### 2. Mobile-First Design
```bash
/workflow:ui-design:explore-auto --prompt "Mobile shopping app: home, product, cart" --device-type mobile
# Result: 27 prototypes (3×3×3) - mobile layouts (375×812px)
```
### 3. Desktop Application
```bash
/workflow:ui-design:explore-auto --targets "dashboard,analytics,settings" --device-type desktop --style-variants 2 --layout-variants 2
# Result: 12 prototypes (2×2×3) - desktop layouts (1920×1080px)
```
### 4. Tablet Interface
```bash
/workflow:ui-design:explore-auto --prompt "Educational app for tablets" --device-type tablet --targets "courses,lessons,profile"
# Result: 27 prototypes (3×3×3) - tablet layouts (768×1024px)
```
### 5. Custom Matrix with Session
```bash
/workflow:ui-design:explore-auto --session WFS-ecommerce --images "refs/*.png" --style-variants 2 --layout-variants 2
# Result: 2×2×N prototypes - device type inferred from session
```
### 6. Component Mode (Desktop)
```bash
/workflow:ui-design:explore-auto --targets "navbar,hero" --target-type "component" --device-type desktop --style-variants 3 --layout-variants 2
# Result: 12 prototypes (3×2×2) - desktop components
```
### 7. Intelligent Parsing + Batch Planning
```bash
/workflow:ui-design:explore-auto --prompt "Create 4 styles with 2 layouts for mobile dashboard and settings" --batch-plan
# Result: 16 prototypes (4×2×2) + auto-generated tasks - mobile-optimized (inferred from prompt)
```
### 8. Large Scale Responsive
```bash
/workflow:ui-design:explore-auto --targets "home,dashboard,settings,profile" --device-type responsive --style-variants 3 --layout-variants 3
# Result: 36 prototypes (3×3×4) - responsive layouts
// ⚠️ CRITICAL: Dynamic TodoWrite task attachment strategy:
//
// **Key Concept**: SlashCommand invocation ATTACHES tasks to current workflow.
// Orchestrator EXECUTES these attached tasks itself, not waiting for external completion.
//
// Phase 7-10 SlashCommand Invocation Pattern:
// 1. SlashCommand invocation ATTACHES sub-command tasks to TodoWrite
// 2. TodoWrite expands to include attached tasks
// 3. Orchestrator EXECUTES attached tasks sequentially
// 4. After all attached tasks complete, COLLAPSE them into phase summary
// 5. Update next phase to in_progress
// 6. IMMEDIATELY execute next phase (auto-continue)
// 7. After Phase 10 completes, workflow finishes (generate command handles preview files)
//
```
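The attach/collapse cycle described in the comments above can be sketched as two list operations. The helper names (`attachTasks`, `collapsePhase`) are hypothetical and only illustrate how sub-command tasks expand into, and later collapse out of, the orchestrator's todo list.
```javascript
// Hypothetical sketch of the attach/collapse cycle (names are illustrative).
function attachTasks(todos, phaseContent, subTasks) {
  // Insert sub-command tasks directly after the phase entry they belong to.
  const i = todos.findIndex(t => t.content === phaseContent);
  return [...todos.slice(0, i + 1), ...subTasks, ...todos.slice(i + 1)];
}

function collapsePhase(todos, phaseContent, subTaskContents) {
  // Drop the attached sub-tasks and mark the parent phase as completed.
  return todos
    .filter(t => !subTaskContents.includes(t.content))
    .map(t => (t.content === phaseContent ? { ...t, status: "completed" } : t));
}
```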
## Completion Output
@@ -446,34 +558,37 @@ Architecture: Style-Centric Batch Generation
Run ID: {run_id} | Session: {session_id or "standalone"}
Type: {icon} {target_type} | Device: {device_type} | Matrix: {s}×{l}×{n} = {total} prototypes
Phase 1: {s} complete design systems (style-extract)
Phase 2: {n×l} layout templates (layout-extract explore mode)
Phase 7: {s} complete design systems (style-extract with multi-select)
Phase 9: {n×l} layout templates (layout-extract with multi-select)
- Device: {device_type} layouts
- {n} targets × {l} layout variants = {n×l} structural templates
Phase 3: UI Assembly (generate)
- User-selected concepts generated in parallel
Phase 10: UI Assembly (generate)
- Pure assembly: layout templates + design tokens
- {s}×{l}×{n} = {total} final prototypes
Phase 4: Brainstorming artifacts updated
[Phase 5: {n} implementation tasks created] # if --batch-plan
- Preview files: compare.html, PREVIEW.md (auto-generated by generate command)
Assembly Process:
✅ Separation of Concerns: Layout (structure) + Style (tokens) kept separate
✅ Layout Extraction: {n×l} reusable structural templates
✅ Multi-Selection Workflow: User selects multiple variants from generated options
✅ Pure Assembly: No design decisions in generate phase
✅ Device-Optimized: Layouts designed for {device_type}
Design Quality:
✅ Token-Driven Styling: 100% var() usage
✅ Structural Variety: {l} distinct layouts per target
✅ Style Variety: {s} independent design systems
✅ Structural Variety: {l} distinct layouts per target (user-selected)
✅ Style Variety: {s} independent design systems (user-selected)
✅ Device-Optimized: Layouts designed for {device_type}
📂 {base_path}/
├── .intermediates/ (Intermediate analysis files)
│ ├── style-analysis/ (computed-styles.json, design-space-analysis.json)
│   └── layout-analysis/ (dom-structure-*.json, inspirations/*.txt)
│ ├── style-analysis/ (analysis-options.json with embedded user_selection, computed-styles.json if URL mode)
│   ├── animation-analysis/ (analysis-options.json with embedded user_selection, animations-*.json if URL mode)
│ └── layout-analysis/ (analysis-options.json with embedded user_selection, dom-structure-*.json if URL mode)
├── style-extraction/ ({s} complete design systems)
├── layout-extraction/ ({n×l} layout templates + layout-space-analysis.json)
├── animation-extraction/ (animation-tokens.json, animation-guide.md)
├── layout-extraction/ ({n×l} layout template files: layout-{target}-{variant}.json)
├── prototypes/ ({total} assembled prototypes)
└── .run-metadata.json (includes device type)
@@ -489,6 +604,6 @@ Design Quality:
- Layout plans stored as structured JSON
- Optimized for {device_type} viewing
Next: [/workflow:execute] OR [Open compare.html → Select → /workflow:plan]
Next: Open compare.html to preview all design variants
```


@@ -1,589 +0,0 @@
---
name: explore-layers
description: Interactive deep UI capture with depth-controlled layer exploration using MCP puppeteer
argument-hint: --url <url> --depth <1-5> [--session id] [--base-path path]
allowed-tools: TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*), mcp__chrome-devtools__*
---
# Interactive Layer Exploration (/workflow:ui-design:explore-layers)
## Overview
Single-URL depth-controlled interactive capture. Progressively explores UI layers from pages to Shadow DOM.
**Depth Levels**:
- `1` = Page (full-page screenshot)
- `2` = Elements (key components)
- `3` = Interactions (modals, dropdowns)
- `4` = Embedded (iframes, widgets)
- `5` = Shadow DOM (web components)
**Requirements**: Chrome DevTools MCP
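The depth names listed above are later referenced as `DEPTH_NAMES[level]` in the todo-initialization pseudocode (Phase 1, Step 4). A minimal definition consistent with this list might look like the following sketch:
```javascript
// Mapping assumed by the DEPTH_NAMES[level] lookup used in Phase 1, Step 4.
const DEPTH_NAMES = {
  1: "Page",
  2: "Elements",
  3: "Interactions",
  4: "Embedded",
  5: "Shadow DOM",
};
```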
## Phase 1: Setup & Validation
### Step 1: Parse Parameters
```javascript
url = params["--url"]
depth = int(params["--depth"])
// Validate URL
IF NOT url.startswith("http"):
url = f"https://{url}"
// Validate depth
IF depth NOT IN [1, 2, 3, 4, 5]:
ERROR: "Invalid depth: {depth}. Use 1-5"
EXIT 1
```
### Step 2: Determine Base Path
```bash
bash(if [ -n "$BASE_PATH" ]; then
echo "$BASE_PATH"
elif [ -n "$SESSION_ID" ]; then
# Fall back to a new timestamped dir when no design-* dir exists
existing=$(find .workflow/WFS-$SESSION_ID -maxdepth 1 -type d -name "design-*" 2>/dev/null | head -1)
echo "${existing:-.workflow/WFS-$SESSION_ID/design-layers-$(date +%Y%m%d-%H%M%S)}"
else
echo ".workflow/.design/layers-$(date +%Y%m%d-%H%M%S)"
fi)
# Create depth directories
bash(for i in $(seq 1 $depth); do mkdir -p $BASE_PATH/screenshots/depth-$i; done)
```
**Output**: `url`, `depth`, `base_path`
### Step 3: Validate MCP Availability
```javascript
all_resources = ListMcpResourcesTool()
chrome_devtools = "chrome-devtools" IN [r.server for r in all_resources]
IF NOT chrome_devtools:
ERROR: "explore-layers requires Chrome DevTools MCP"
ERROR: "Install: npm i -g @modelcontextprotocol/server-chrome-devtools"
EXIT 1
```
### Step 4: Initialize Todos
```javascript
todos = [
{content: "Setup and validation", status: "completed", activeForm: "Setting up"}
]
FOR level IN range(1, depth + 1):
todos.append({
content: f"Depth {level}: {DEPTH_NAMES[level]}",
status: "pending",
activeForm: f"Capturing depth {level}"
})
todos.append({content: "Generate layer map", status: "pending", activeForm: "Mapping"})
TodoWrite({todos})
```
## Phase 2: Navigate & Load Page
### Step 1: Get or Create Browser Page
```javascript
pages = mcp__chrome-devtools__list_pages()
IF pages.length == 0:
mcp__chrome-devtools__new_page({url: url, timeout: 30000})
page_idx = 0
ELSE:
page_idx = 0
mcp__chrome-devtools__select_page({pageIdx: page_idx})
mcp__chrome-devtools__navigate_page({url: url, timeout: 30000})
bash(sleep 3) // Wait for page load
```
**Output**: `page_idx`
## Phase 3: Depth 1 - Page Level
### Step 1: Capture Full Page
```javascript
TodoWrite(mark_in_progress: "Depth 1: Page")
output_file = f"{base_path}/screenshots/depth-1/full-page.png"
mcp__chrome-devtools__take_screenshot({
fullPage: true,
format: "png",
quality: 90,
filePath: output_file
})
layer_map = {
"url": url,
"depth": depth,
"layers": {
"depth-1": {
"type": "page",
"captures": [{
"name": "full-page",
"path": output_file,
"size_kb": file_size_kb(output_file)
}]
}
}
}
TodoWrite(mark_completed: "Depth 1: Page")
```
**Output**: `depth-1/full-page.png`
## Phase 4: Depth 2 - Element Level (If depth >= 2)
### Step 1: Analyze Page Structure
```javascript
IF depth < 2: SKIP
TodoWrite(mark_in_progress: "Depth 2: Elements")
snapshot = mcp__chrome-devtools__take_snapshot()
// Filter key elements
key_types = ["nav", "header", "footer", "aside", "button", "form", "article"]
key_elements = [
el for el in snapshot.interactiveElements
if el.type IN key_types OR el.role IN ["navigation", "banner", "main"]
][:10] // Limit to top 10
```
### Step 2: Capture Element Screenshots
```javascript
depth_2_captures = []
FOR idx, element IN enumerate(key_elements):
element_name = sanitize(element.text[:20] or element.type) or f"element-{idx}"
output_file = f"{base_path}/screenshots/depth-2/{element_name}.png"
TRY:
mcp__chrome-devtools__take_screenshot({
uid: element.uid,
format: "png",
quality: 85,
filePath: output_file
})
depth_2_captures.append({
"name": element_name,
"type": element.type,
"path": output_file,
"size_kb": file_size_kb(output_file)
})
CATCH error:
REPORT: f"Skip {element_name}: {error}"
layer_map.layers["depth-2"] = {
"type": "elements",
"captures": depth_2_captures
}
TodoWrite(mark_completed: "Depth 2: Elements")
```
**Output**: `depth-2/{element}.png` × N
## Phase 5: Depth 3 - Interaction Level (If depth >= 3)
### Step 1: Analyze Interactive Triggers
```javascript
IF depth < 3: SKIP
TodoWrite(mark_in_progress: "Depth 3: Interactions")
// Detect structure
structure = mcp__chrome-devtools__evaluate_script({
function: `() => ({
modals: document.querySelectorAll('[role="dialog"], .modal').length,
dropdowns: document.querySelectorAll('[role="menu"], .dropdown').length,
tooltips: document.querySelectorAll('[role="tooltip"], [title]').length
})`
})
// Identify triggers
triggers = []
FOR element IN snapshot.interactiveElements:
IF element.attributes CONTAINS ("data-toggle", "aria-haspopup"):
triggers.append({
uid: element.uid,
type: "modal" IF "modal" IN element.classes ELSE "dropdown",
trigger: "click",
text: element.text
})
ELSE IF element.attributes CONTAINS ("title", "data-tooltip"):
triggers.append({
uid: element.uid,
type: "tooltip",
trigger: "hover",
text: element.text
})
triggers = triggers[:10] // Limit
```
### Step 2: Trigger Interactions & Capture
```javascript
depth_3_captures = []
FOR idx, trigger IN enumerate(triggers):
layer_name = f"{trigger.type}-{sanitize(trigger.text[:15]) or idx}"
output_file = f"{base_path}/screenshots/depth-3/{layer_name}.png"
TRY:
// Trigger interaction
IF trigger.trigger == "click":
mcp__chrome-devtools__click({uid: trigger.uid})
ELSE:
mcp__chrome-devtools__hover({uid: trigger.uid})
bash(sleep 1)
// Capture
mcp__chrome-devtools__take_screenshot({
fullPage: false, // Viewport only
format: "png",
quality: 90,
filePath: output_file
})
depth_3_captures.append({
"name": layer_name,
"type": trigger.type,
"trigger_method": trigger.trigger,
"path": output_file,
"size_kb": file_size_kb(output_file)
})
// Dismiss (ESC key)
mcp__chrome-devtools__evaluate_script({
function: `() => {
document.dispatchEvent(new KeyboardEvent('keydown', {key: 'Escape'}));
}`
})
bash(sleep 0.5)
CATCH error:
REPORT: f"Skip {layer_name}: {error}"
layer_map.layers["depth-3"] = {
"type": "interactions",
"triggers": structure,
"captures": depth_3_captures
}
TodoWrite(mark_completed: "Depth 3: Interactions")
```
**Output**: `depth-3/{interaction}.png` × N
## Phase 6: Depth 4 - Embedded Level (If depth >= 4)
### Step 1: Detect Iframes
```javascript
IF depth < 4: SKIP
TodoWrite(mark_in_progress: "Depth 4: Embedded")
iframes = mcp__chrome-devtools__evaluate_script({
function: `() => {
return Array.from(document.querySelectorAll('iframe')).map(iframe => ({
src: iframe.src,
id: iframe.id || 'iframe',
title: iframe.title || 'untitled'
})).filter(i => i.src && i.src.startsWith('http'));
}`
})
```
### Step 2: Capture Iframe Content
```javascript
depth_4_captures = []
FOR idx, iframe IN enumerate(iframes):
iframe_name = f"iframe-{sanitize(iframe.title or iframe.id)}-{idx}"
output_file = f"{base_path}/screenshots/depth-4/{iframe_name}.png"
TRY:
// Navigate to iframe URL in new tab
mcp__chrome-devtools__new_page({url: iframe.src, timeout: 30000})
bash(sleep 2)
mcp__chrome-devtools__take_screenshot({
fullPage: true,
format: "png",
quality: 90,
filePath: output_file
})
depth_4_captures.append({
"name": iframe_name,
"url": iframe.src,
"path": output_file,
"size_kb": file_size_kb(output_file)
})
// Close iframe tab
current_pages = mcp__chrome-devtools__list_pages()
mcp__chrome-devtools__close_page({pageIdx: current_pages.length - 1})
CATCH error:
REPORT: f"Skip {iframe_name}: {error}"
layer_map.layers["depth-4"] = {
"type": "embedded",
"captures": depth_4_captures
}
TodoWrite(mark_completed: "Depth 4: Embedded")
```
**Output**: `depth-4/iframe-*.png` × N
## Phase 7: Depth 5 - Shadow DOM (If depth = 5)
### Step 1: Detect Shadow Roots
```javascript
IF depth < 5: SKIP
TodoWrite(mark_in_progress: "Depth 5: Shadow DOM")
shadow_elements = mcp__chrome-devtools__evaluate_script({
function: `() => {
const elements = Array.from(document.querySelectorAll('*'));
return elements
.filter(el => el.shadowRoot)
.map((el, idx) => ({
tag: el.tagName.toLowerCase(),
id: el.id || \`shadow-\${idx}\`,
innerHTML: el.shadowRoot.innerHTML.substring(0, 100)
}));
}`
})
```
### Step 2: Capture Shadow DOM Components
```javascript
depth_5_captures = []
FOR idx, shadow IN enumerate(shadow_elements):
shadow_name = f"shadow-{sanitize(shadow.id)}"
output_file = f"{base_path}/screenshots/depth-5/{shadow_name}.png"
TRY:
// Inject highlight script
mcp__chrome-devtools__evaluate_script({
function: `() => {
const el = document.querySelector('${shadow.tag}${shadow.id ? "#" + shadow.id : ""}');
if (el) {
el.scrollIntoView({behavior: 'smooth', block: 'center'});
el.style.outline = '3px solid red';
}
}`
})
bash(sleep 0.5)
// Viewport screenshot (component scrolled into view and highlighted)
mcp__chrome-devtools__take_screenshot({
fullPage: false,
format: "png",
quality: 90,
filePath: output_file
})
depth_5_captures.append({
"name": shadow_name,
"tag": shadow.tag,
"path": output_file,
"size_kb": file_size_kb(output_file)
})
CATCH error:
REPORT: f"Skip {shadow_name}: {error}"
layer_map.layers["depth-5"] = {
"type": "shadow-dom",
"captures": depth_5_captures
}
TodoWrite(mark_completed: "Depth 5: Shadow DOM")
```
**Output**: `depth-5/shadow-*.png` × N
## Phase 8: Generate Layer Map
### Step 1: Compile Metadata
```javascript
TodoWrite(mark_in_progress: "Generate layer map")
// Calculate totals
total_captures = sum(len(layer.captures) for layer in layer_map.layers.values())
total_size_kb = sum(
sum(c.size_kb for c in layer.captures)
for layer in layer_map.layers.values()
)
layer_map["summary"] = {
"timestamp": current_timestamp(),
"total_depth": depth,
"total_captures": total_captures,
"total_size_kb": total_size_kb
}
Write(f"{base_path}/screenshots/layer-map.json", JSON.stringify(layer_map, indent=2))
TodoWrite(mark_completed: "Generate layer map")
```
**Output**: `layer-map.json`
## Completion
### Todo Update
```javascript
all_todos_completed = true
TodoWrite({todos: all_completed_todos})
```
### Output Message
```
✅ Interactive layer exploration complete!
Configuration:
- URL: {url}
- Max depth: {depth}
- Layers explored: {len(layer_map.layers)}
Capture Summary:
Depth 1 (Page): {depth_1_count} screenshot(s)
Depth 2 (Elements): {depth_2_count} screenshot(s)
Depth 3 (Interactions): {depth_3_count} screenshot(s)
Depth 4 (Embedded): {depth_4_count} screenshot(s)
Depth 5 (Shadow DOM): {depth_5_count} screenshot(s)
Total: {total_captures} captures ({total_size_kb:.1f} KB)
Output Structure:
{base_path}/screenshots/
├── depth-1/
│ └── full-page.png
├── depth-2/
│ ├── navbar.png
│ └── footer.png
├── depth-3/
│ ├── modal-login.png
│ └── dropdown-menu.png
├── depth-4/
│ └── iframe-analytics.png
├── depth-5/
│ └── shadow-button.png
└── layer-map.json
Next: /workflow:ui-design:extract --images "screenshots/**/*.png"
```
## Simple Bash Commands
### Directory Setup
```bash
# Create depth directories
bash(for i in $(seq 1 $depth); do mkdir -p $BASE_PATH/screenshots/depth-$i; done)
```
### Validation
```bash
# Check MCP availability (ListMcpResourcesTool() is an MCP tool call, not a bash command — see Phase 1, Step 3)
all_resources = ListMcpResourcesTool()
# Count captures per depth
bash(ls $base_path/screenshots/depth-{1..5}/*.png 2>/dev/null | wc -l)
```
### File Operations
```bash
# List all captures
bash(find $base_path/screenshots -name "*.png" -type f)
# Total size
bash(du -sh $base_path/screenshots)
```
## Output Structure
```
{base_path}/screenshots/
├── depth-1/
│ └── full-page.png
├── depth-2/
│ ├── {element}.png
│ └── ...
├── depth-3/
│ ├── {interaction}.png
│ └── ...
├── depth-4/
│ ├── iframe-*.png
│ └── ...
├── depth-5/
│ ├── shadow-*.png
│ └── ...
└── layer-map.json
```
## Depth Level Details
| Depth | Name | Captures | Time | Use Case |
|-------|------|----------|------|----------|
| 1 | Page | Full page | 30s | Quick preview |
| 2 | Elements | Key components | 1-2min | Component library |
| 3 | Interactions | Modals, dropdowns | 2-4min | UI flows |
| 4 | Embedded | Iframes | 3-6min | Complete context |
| 5 | Shadow DOM | Web components | 4-8min | Full coverage |
## Error Handling
### Common Errors
```
ERROR: Chrome DevTools MCP required
→ Install: npm i -g @modelcontextprotocol/server-chrome-devtools
ERROR: Invalid depth
→ Use: 1-5
ERROR: Interaction trigger failed
→ Some modals may be skipped, check layer-map.json
```
### Recovery
- **Partial success**: Lower depth captures preserved
- **Trigger failures**: Interaction layer may be incomplete
- **Iframe restrictions**: Cross-origin iframes skipped
## Quality Checklist
- [ ] All depths up to specified level captured
- [ ] layer-map.json generated with metadata
- [ ] File sizes valid (> 500 bytes)
- [ ] Interaction triggers executed
- [ ] Shadow DOM elements highlighted
## Key Features
- **Depth-controlled**: Progressive capture 1-5 levels
- **Interactive triggers**: Click/hover for hidden layers
- **Iframe support**: Embedded content captured
- **Shadow DOM**: Web component internals
- **Structured output**: Organized by depth
## Integration
**Input**: Single URL + depth level (1-5)
**Output**: Hierarchical screenshots + layer-map.json
**Complements**: `/workflow:ui-design:capture` (multi-URL batch)
**Next**: `/workflow:ui-design:extract` for design analysis


@@ -1,7 +1,7 @@
---
name: generate
description: Assemble UI prototypes by combining layout templates with design tokens, pure assembler without new content generation
argument-hint: [--base-path <path>] [--session <id>] [--style-variants <count>] [--layout-variants <count>]
description: Assemble UI prototypes by combining layout templates with design tokens (animation support enabled by default), pure assembler without new content generation
argument-hint: [--design-id <id>] [--session <id>]
allowed-tools: TodoWrite(*), Read(*), Write(*), Task(ui-design-agent), Bash(*)
---
@@ -11,7 +11,7 @@ allowed-tools: TodoWrite(*), Read(*), Write(*), Task(ui-design-agent), Bash(*)
Pure assembler that combines pre-extracted layout templates with design tokens to generate UI prototypes (`style × layout × targets`). No layout design logic - purely combines existing components.
**Strategy**: Pure Assembly
- **Input**: `layout-templates.json` + `design-tokens.json` (+ reference images if available)
- **Input**: `layout-*.json` files + `design-tokens.json` (+ reference images if available)
- **Process**: Combine structure (DOM) with style (tokens)
- **Output**: Complete HTML/CSS prototypes
- **No Design Logic**: All layout and style decisions already made
@@ -25,23 +25,48 @@ Pure assembler that combines pre-extracted layout templates with design tokens t
### Step 1: Resolve Base Path & Parse Configuration
```bash
# Determine working directory
bash(find .workflow -type d -name "design-*" | head -1) # Auto-detect
# Determine base path with priority: --design-id > --session > auto-detect
if [ -n "$DESIGN_ID" ]; then
# Exact match by design ID
relative_path=$(find .workflow -name "${DESIGN_ID}" -type d -print -quit)
elif [ -n "$SESSION_ID" ]; then
# Latest in session
relative_path=$(find .workflow/active/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
else
# Latest globally
relative_path=$(find .workflow -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
fi
# Validate and convert to absolute path
if [ -z "$relative_path" ] || [ ! -d "$relative_path" ]; then
echo "❌ ERROR: Design run not found"
echo "💡 HINT: Run '/workflow:ui-design:list' to see available design runs"
exit 1
fi
base_path=$(cd "$relative_path" && pwd)
bash(echo "✓ Base path: $base_path")
# Get style count
bash(ls {base_path}/style-extraction/style-* -d | wc -l)
bash(ls "$base_path"/style-extraction/style-* -d | wc -l)
# Image reference auto-detected from layout template source_image_path
```
### Step 2: Load Layout Templates
```bash
# Check layout templates exist
bash(test -f {base_path}/layout-extraction/layout-templates.json && echo "exists")
# Check layout templates exist (multi-file pattern)
bash(find {base_path}/layout-extraction -name "layout-*.json" -print -quit | grep -q . && echo "exists")
# Load layout templates
Read({base_path}/layout-extraction/layout-templates.json)
# Extract: targets, layout_variants count, device_type, template structures
# Get list of all layout files
bash(ls {base_path}/layout-extraction/layout-*.json 2>/dev/null)
# Load each layout template file
FOR each layout_file in layout_files:
template_data = Read(layout_file)
# Extract: target, variant_id, device_type, dom_structure, css_layout_rules
# Aggregate: targets[], layout_variants count, device_type, all template structures
```
**Output**: `base_path`, `style_variants`, `layout_templates[]`, `targets[]`, `device_type`
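As a concrete illustration of the multi-file loading step above, the following Node sketch (hypothetical, not part of the command itself) aggregates `layout-*.json` files into the values listed in **Output**, assuming each file exposes the `target`, `variant_id`, and `device_type` fields named above.
```javascript
// Hypothetical Node sketch: aggregate layout-*.json files from layout-extraction/.
const fs = require("fs");
const path = require("path");

function loadLayoutTemplates(basePath) {
  const dir = path.join(basePath, "layout-extraction");
  const files = fs.readdirSync(dir).filter(f => /^layout-.*\.json$/.test(f));
  const templates = files.map(f =>
    JSON.parse(fs.readFileSync(path.join(dir, f), "utf8"))
  );
  const targets = [...new Set(templates.map(t => t.target))];
  const layoutVariants = new Set(templates.map(t => t.variant_id)).size;
  const deviceType = templates[0]?.device_type;
  return { templates, targets, layoutVariants, deviceType };
}
```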
@@ -74,30 +99,97 @@ ELSE:
## Phase 2: Assembly (Agent)
**Executor**: `Task(ui-design-agent)` × `T × S × L` tasks (can be batched)
**Executor**: `Task(ui-design-agent)` grouped by `target × style` (max 10 layouts per agent, max 6 concurrent agents)
### Step 1: Launch Assembly Tasks
```bash
bash(mkdir -p {base_path}/prototypes)
**⚠️ Core Principle**: **Each agent processes ONLY ONE style** (but can process multiple layouts for that style)
### Agent Grouping Strategy
**Grouping Rules**:
1. **Style Isolation**: Each agent processes ONLY ONE style (never mixed)
2. **Balanced Distribution**: Layouts evenly split (e.g., 12→6+6, not 10+2)
3. **Target Separation**: Different targets use different agents
**Distribution Formula**:
```
agents_needed = ceil(layout_count / MAX_LAYOUTS_PER_AGENT)
base_count = floor(layout_count / agents_needed)
remainder = layout_count % agents_needed
# First 'remainder' agents get (base_count + 1), others get base_count
```
For each `target × style_id × layout_id`:
**Examples** (MAX=10):
| Scenario | Result | Explanation |
|----------|--------|-------------|
| 3 styles × 3 layouts | 3 agents | Each style: 1 agent (3 layouts) |
| 3 styles × 12 layouts | 6 agents | Each style: 2 agents (6+6 layouts) |
| 2 styles × 5 layouts × 2 targets | 4 agents | Each (target, style): 1 agent (5 layouts) |
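The distribution formula and grouping rules above translate directly into a few lines of code. The sketch below is illustrative (the function names are not part of the spec); it reproduces the 12 → 6+6 behaviour and groups work per (target, style) so that no agent ever mixes styles.
```javascript
// Hypothetical sketch of the balanced distribution formula (MAX = 10 layouts/agent).
function balancedChunks(layouts, maxPerAgent = 10) {
  const agentsNeeded = Math.ceil(layouts.length / maxPerAgent);
  const base = Math.floor(layouts.length / agentsNeeded);
  const remainder = layouts.length % agentsNeeded;
  const chunks = [];
  let start = 0;
  for (let i = 0; i < agentsNeeded; i++) {
    const size = base + (i < remainder ? 1 : 0); // first `remainder` agents get one extra
    chunks.push(layouts.slice(start, start + size));
    start += size;
  }
  return chunks; // e.g. 12 layouts → [6, 6], never [10, 2]
}

function planAgentGroups(targets, styleIds, layoutsByTarget) {
  const groups = [];
  for (const target of targets)
    for (const styleId of styleIds)
      for (const chunk of balancedChunks(layoutsByTarget[target] || []))
        groups.push({ target, styleId, layoutIds: chunk }); // one style per agent
  return groups;
}
```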
### Step 1: Calculate Agent Grouping Plan
```bash
bash(mkdir -p {base_path}/prototypes)
MAX_LAYOUTS_PER_AGENT = 10
MAX_PARALLEL = 6
agent_groups = []
FOR each target in targets:
FOR each style_id in [1..S]:
layouts_for_this_target_style = filter layouts by current target
layout_count = len(layouts_for_this_target_style)
# Balanced distribution (e.g., 12 layouts → 6+6)
agents_needed = ceil(layout_count / MAX_LAYOUTS_PER_AGENT)
base_count = floor(layout_count / agents_needed)
remainder = layout_count % agents_needed
layout_chunks = []
start_idx = 0
FOR i in range(agents_needed):
chunk_size = base_count + 1 if i < remainder else base_count
layout_chunks.append(layouts[start_idx : start_idx + chunk_size])
start_idx += chunk_size
FOR each chunk in layout_chunks:
agent_groups.append({
target: target, # Single target
style_id: style_id, # Single style
layout_ids: chunk # Balanced layouts (≤10)
})
total_agents = len(agent_groups)
total_batches = ceil(total_agents / MAX_PARALLEL)
TodoWrite({todos: [
{content: "Setup and validation", status: "completed", activeForm: "Loading design systems"},
{content: "Batch 1/{total_batches}: Assemble up to 6 agent groups", status: "in_progress", activeForm: "Assembling batch 1"},
{content: "Batch 2/{total_batches}: Assemble up to 6 agent groups", status: "pending", activeForm: "Assembling batch 2"},
... (continue for all batches)
]})
```
### Step 2: Launch Batched Assembly Tasks
For each batch (up to 6 parallel agents per batch):
For each agent group `{target, style_id, layout_ids[]}` in current batch:
```javascript
Task(ui-design-agent): `
[LAYOUT_STYLE_ASSEMBLY]
🎯 Assembly task: {target} × Style-{style_id} × Layout-{layout_id}
Combine: Pre-extracted layout structure + design tokens → Final HTML/CSS
🎯 {target} × Style-{style_id} × Layouts-{layout_ids}
⚠️ CONSTRAINT: Use ONLY style-{style_id}/design-tokens.json (never mix styles)
TARGET: {target} | STYLE: {style_id} | LAYOUT: {layout_id}
TARGET: {target} | STYLE: {style_id} | LAYOUTS: {layout_ids} (max 10)
BASE_PATH: {base_path}
## Inputs (READ ONLY - NO DESIGN DECISIONS)
1. Layout Template:
Read("{base_path}/layout-extraction/layout-templates.json")
Find template where: target={target} AND variant_id="layout-{layout_id}"
Extract: dom_structure, css_layout_rules, device_type, source_image_path
1. Layout Templates (LOOP THROUGH):
FOR each layout_id in layout_ids:
Read("{base_path}/layout-extraction/layout-{target}-{layout_id}.json")
This file contains the specific layout template for this target and variant.
Extract: dom_structure, css_layout_rules, device_type, source_image_path (from template field)
2. Design Tokens:
2. Design Tokens (SHARED - READ ONCE):
Read("{base_path}/style-extraction/style-{style_id}/design-tokens.json")
Extract: ALL token values including:
* colors, typography (with combinations), spacing, opacity
@@ -121,61 +213,70 @@ Task(ui-design-agent): `
ELSE:
Use generic placeholder content
## Assembly Process
1. Build HTML: {base_path}/prototypes/{target}-style-{style_id}-layout-{layout_id}.html
- Recursively build from template.dom_structure
- Add: <!DOCTYPE html>, <head>, <meta viewport>
- CSS link: <link href="{target}-style-{style_id}-layout-{layout_id}.css">
- Inject placeholder content:
* Default: Use Lorem ipsum, generic sample data
* If reference image available: Generate more contextually appropriate placeholders
(e.g., realistic headings, meaningful text snippets that match the visual context)
- Preserve all attributes from dom_structure
## Assembly Process (LOOP FOR EACH LAYOUT)
FOR each layout_id in layout_ids:
2. Build CSS: {base_path}/prototypes/{target}-style-{style_id}-layout-{layout_id}.css
- Start with template.css_layout_rules
- Replace ALL var(--*) with actual token values from design-tokens.json
Example: var(--spacing-4) → 1rem (from tokens.spacing.4)
Example: var(--breakpoint-md) → 768px (from tokens.breakpoints.md)
Example: var(--opacity-80) → 0.8 (from tokens.opacity.80)
- Add visual styling using design tokens:
* Colors: tokens.colors.*
* Typography: tokens.typography.* (including combinations)
* Opacity: tokens.opacity.*
* Shadows: tokens.shadows.*
* Border radius: tokens.border_radius.*
- IF tokens.component_styles exists: Add component style classes
* Generate classes for button variants (.btn-primary, .btn-secondary)
* Generate classes for card variants (.card-default, .card-interactive)
* Generate classes for input variants (.input-default, .input-focus, .input-error)
* Use var() references that resolve to actual token values
- IF tokens.typography.combinations exists: Add typography preset classes
* Generate classes for typography presets (.text-heading-primary, .text-body-regular, .text-caption)
* Use var() references for family, size, weight, line-height, letter-spacing
- IF has_animations == true: Inject animation tokens
* Add CSS Custom Properties for animations at :root level:
--duration-instant, --duration-fast, --duration-normal, etc.
--easing-linear, --easing-ease-out, etc.
* Add @keyframes rules from animation_tokens.keyframes
* Add interaction classes (.button-hover, .card-hover) from animation_tokens.interactions
* Add utility classes (.transition-color, .transition-transform) from animation_tokens.transitions
* Include prefers-reduced-motion media query for accessibility
- Device-optimized for template.device_type
1. Build HTML: {base_path}/prototypes/{target}-style-{style_id}-layout-{layout_id}.html
- Recursively build from template.dom_structure
- Add: <!DOCTYPE html>, <head>, <meta viewport>
- CSS link: <link href="{target}-style-{style_id}-layout-{layout_id}.css">
- Inject placeholder content:
* Default: Use Lorem ipsum, generic sample data
* If reference image available: Generate more contextually appropriate placeholders
(e.g., realistic headings, meaningful text snippets that match the visual context)
- Preserve all attributes from dom_structure
2. Build CSS: {base_path}/prototypes/{target}-style-{style_id}-layout-{layout_id}.css
- Start with template.css_layout_rules
- Replace ALL var(--*) with actual token values from design-tokens.json
Example: var(--spacing-4) → 1rem (from tokens.spacing.4)
Example: var(--breakpoint-md) → 768px (from tokens.breakpoints.md)
Example: var(--opacity-80) → 0.8 (from tokens.opacity.80)
- Add visual styling using design tokens:
* Colors: tokens.colors.*
* Typography: tokens.typography.* (including combinations)
* Opacity: tokens.opacity.*
* Shadows: tokens.shadows.*
* Border radius: tokens.border_radius.*
- IF tokens.component_styles exists: Add component style classes
* Generate classes for button variants (.btn-primary, .btn-secondary)
* Generate classes for card variants (.card-default, .card-interactive)
* Generate classes for input variants (.input-default, .input-focus, .input-error)
* Use var() references that resolve to actual token values
- IF tokens.typography.combinations exists: Add typography preset classes
* Generate classes for typography presets (.text-heading-primary, .text-body-regular, .text-caption)
* Use var() references for family, size, weight, line-height, letter-spacing
- IF has_animations == true: Inject animation tokens (ONCE, shared across layouts)
* Add CSS Custom Properties for animations at :root level:
--duration-instant, --duration-fast, --duration-normal, etc.
--easing-linear, --easing-ease-out, etc.
* Add @keyframes rules from animation_tokens.keyframes
* Add interaction classes (.button-hover, .card-hover) from animation_tokens.interactions
* Add utility classes (.transition-color, .transition-transform) from animation_tokens.transitions
* Include prefers-reduced-motion media query for accessibility
- Device-optimized for template.device_type
3. Write files IMMEDIATELY after each layout completes
## Assembly Rules
- ✅ Pure assembly: Combine existing structure + existing style
- ❌ NO layout design decisions (structure pre-defined)
- ❌ NO style design decisions (tokens pre-defined)
- ✅ Pure assembly: Combine pre-extracted structure + tokens
- ❌ NO design decisions (layout/style pre-defined)
- ✅ Read tokens ONCE, apply to all layouts in this batch
- ✅ Replace var() with actual values
- ✅ Add placeholder content only
- Write files IMMEDIATELY
- CSS filename MUST match HTML <link href="...">
- ✅ CSS filename MUST match HTML <link href="...">
## Output
- Files: {len(layout_ids) × 2} (HTML + CSS pairs)
- Each layout generates 2 files independently
`
# After each batch completes
TodoWrite: Mark current batch completed, next batch in_progress
```
### Step 2: Verify Generated Files
### Step 3: Verify Generated Files
```bash
# Count expected vs found
# Count expected vs found (should equal S × L × T)
bash(ls {base_path}/prototypes/{target}-style-*-layout-*.html | wc -l)
# Validate samples
@@ -183,7 +284,7 @@ Read({base_path}/prototypes/{target}-style-{style_id}-layout-{layout_id}.html)
# Check: <!DOCTYPE html>, correct CSS href, sufficient CSS length
```
**Output**: `S × L × T × 2` files verified
**Output**: `total_files = S × L × T × 2` files verified (HTML + CSS pairs)
## Phase 3: Generate Preview Files
@@ -210,10 +311,10 @@ bash(ls {base_path}/prototypes/compare.html {base_path}/prototypes/index.html {b
```javascript
TodoWrite({todos: [
{content: "Setup and validation", status: "completed", activeForm: "Loading design systems"},
{content: "Load layout templates", status: "completed", activeForm: "Reading layout templates"},
{content: "Assembly (agent)", status: "completed", activeForm: "Assembling prototypes"},
{content: "Verify files", status: "completed", activeForm: "Validating output"},
{content: "Generate previews", status: "completed", activeForm: "Creating preview files"}
{content: "Batch 1/{total_batches}: Assemble 6 tasks", status: "completed", activeForm: "Assembling batch 1"},
{content: "Batch 2/{total_batches}: Assemble 6 tasks", status: "completed", activeForm: "Assembling batch 2"},
... (all batches completed)
{content: "Verify files & generate previews", status: "completed", activeForm: "Creating previews"}
]});
```
@@ -223,33 +324,41 @@ TodoWrite({todos: [
Configuration:
- Style Variants: {style_variants}
- Layout Variants: {layout_variants} (from layout-templates.json)
- Layout Variants: {layout_variants} (from layout-*.json files)
- Device Type: {device_type}
- Targets: {targets}
- Total Prototypes: {S × L × T}
- Image Reference: Auto-detected (uses source images when available in layout templates)
- Animation Support: {has_animations ? 'Enabled (animation-tokens.json loaded)' : 'Not available'}
Assembly Process:
- Pure assembly: Combined pre-extracted layouts + design tokens
- No design decisions: All structure and style pre-defined
- Assembly tasks: T×S×L = {T}×{S}×{L} = {T×S×L} combinations
- Agent grouping: target × style (max 10 layouts per agent)
- Balanced distribution: Layouts evenly split (e.g., 12 → 6+6, not 10+2)
Batch Execution:
- Total agents: {total_agents} (each processes ONE style only)
- Batches: {total_batches} (max 6 agents parallel)
- Token efficiency: Read once per agent, apply to all layouts
Quality:
- Structure: From layout-extract (DOM, CSS layout rules)
- Style: From style-extract (design tokens)
- CSS: Token values directly applied (var() replaced)
- Device-optimized: Layouts match device_type from templates
- Animations: {has_animations ? 'CSS custom properties and @keyframes injected' : 'Static styles only'}
Generated Files:
{base_path}/prototypes/
├── _templates/
│ └── layout-templates.json (input, pre-extracted)
├── {target}-style-{s}-layout-{l}.html ({S×L×T} prototypes)
├── {target}-style-{s}-layout-{l}.css
├── compare.html (interactive matrix)
├── index.html (navigation)
└── PREVIEW.md (instructions)
Input Files (from layout-extraction/):
├── layout-{target}-{variant}.json (multiple files, one per target-variant combination)
Preview:
1. Open compare.html (recommended)
2. Open index.html
@@ -263,7 +372,7 @@ Next: /workflow:ui-design:update
### Path Operations
```bash
# Find design directory
bash(find .workflow -type d -name "design-*" | head -1)
bash(find .workflow -type d -name "design-run-*" | head -1)
# Count style variants
bash(ls {base_path}/style-extraction/style-* -d | wc -l)
@@ -271,8 +380,11 @@ bash(ls {base_path}/style-extraction/style-* -d | wc -l)
### Validation Commands
```bash
# Check layout templates exist
bash(test -f {base_path}/layout-extraction/layout-templates.json && echo "exists")
# Check layout templates exist (multi-file pattern)
bash(find {base_path}/layout-extraction -name "layout-*.json" -print -quit | grep -q . && echo "exists")
# Count layout files
bash(ls {base_path}/layout-extraction/layout-*.json 2>/dev/null | wc -l)
# Check design tokens exist
bash(test -f {base_path}/style-extraction/style-1/design-tokens.json && echo "valid")
@@ -298,10 +410,10 @@ bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
```
{base_path}/
├── layout-extraction/
│ └── layout-templates.json # Input (from layout-extract)
│ └── layout-{target}-{variant}.json # Input (multiple files from layout-extract)
├── style-extraction/
│ └── style-{s}/
│ ├── design-tokens.json # Input (from style-extract)
│ ├── design-tokens.json # Input (from style-extract)
│ └── style-guide.md
└── prototypes/
├── {target}-style-{s}-layout-{l}.html # Assembled prototypes
@@ -330,7 +442,7 @@ ERROR: Script permission denied
### Recovery Strategies
- **Partial success**: Keep successful assembly combinations
- **Invalid template structure**: Validate layout-templates.json
- **Invalid template structure**: Validate layout-*.json files
- **Invalid tokens**: Validate design-tokens.json structure
## Quality Checklist
@@ -346,18 +458,17 @@ ERROR: Script permission denied
## Key Features
- **Pure Assembly**: No design decisions, only combination
- **Separation of Concerns**: Layout (structure) + Style (tokens) kept separate until final assembly
- **Token Resolution**: var() placeholders replaced with actual values
- **Pre-validated**: Inputs already validated by extract/consolidate
- **Efficient**: Simple assembly vs complex generation
- **Token Resolution**: var() → actual values
- **Efficient Grouping**: target × style (max 10 layouts/agent, balanced split)
- **Style Isolation**: Each agent processes ONE style only
- **Production-Ready**: Semantic, accessible, token-driven
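To make the "var() → actual values" step concrete, here is a minimal sketch of how a flat token map could be substituted into template CSS. The helper and token shape are assumptions for illustration, not the agent's actual implementation.
```javascript
// Minimal sketch: replace var(--name) placeholders with concrete token values.
// Assumes a flat map such as { "--spacing-4": "1rem", "--breakpoint-md": "768px" }.
function resolveCssVars(css, tokens) {
  return css.replace(/var\((--[\w-]+)\)/g, (match, name) =>
    tokens[name] !== undefined ? tokens[name] : match // leave unknown vars untouched
  );
}

// Example: ".card { padding: var(--spacing-4); }" → ".card { padding: 1rem; }"
```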
## Integration
**Prerequisites**:
- `/workflow:ui-design:style-extract``design-tokens.json` + `style-guide.md`
- `/workflow:ui-design:layout-extract``layout-templates.json`
- `/workflow:ui-design:layout-extract``layout-{target}-{variant}.json` files
**Input**: `layout-templates.json` + `design-tokens.json`
**Input**: `layout-*.json` files + `design-tokens.json`
**Output**: S×L×T prototypes for `/workflow:ui-design:update`
**Called by**: `/workflow:ui-design:explore-auto`, `/workflow:ui-design:imitate-auto`

File diff suppressed because it is too large


@@ -0,0 +1,518 @@
---
name: workflow:ui-design:import-from-code
description: Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis
argument-hint: "[--design-id <id>] [--session <id>] [--source <path>]"
allowed-tools: Read,Write,Bash,Glob,Grep,Task,TodoWrite
auto-continue: true
---
# UI Design: Import from Code
## Overview
Extract design system tokens from source code files (CSS/SCSS/JS/TS/HTML) using parallel agent analysis. Each agent can reference any file type for cross-source token extraction, and directly generates completeness reports with findings and gaps.
**Key Characteristics**:
- Executes parallel agent analysis (3 agents: Style, Animation, Layout)
- Each agent can read ALL file types (CSS/SCSS/JS/TS/HTML) for cross-reference
- Direct completeness reporting without synthesis phase
- Graceful failure handling with detailed missing content analysis
- Returns concrete analysis results with recommendations
## Core Functionality
- **File Discovery**: Auto-discover or target specific CSS/SCSS/JS/HTML files
- **Parallel Analysis**: 3 agents extract tokens simultaneously with cross-file-type support
- **Completeness Reporting**: Each agent reports found tokens, missing content, and recommendations
- **Cross-Source Extraction**: Agents can reference any file type (e.g., Style agent can read JS theme configs)
## Usage
### Command Syntax
```bash
/workflow:ui-design:import-from-code [FLAGS]
# Flags
--design-id <id> Design run ID to import into (must exist)
--session <id> Session ID (uses latest design run in session)
--source <path> Source code directory to analyze (required)
```
**Note**: All file discovery is automatic. The command will scan the source directory and find all relevant style files (CSS, SCSS, JS, HTML) automatically.
## Execution Process
### Step 1: Setup & File Discovery
**Purpose**: Initialize session, discover and categorize code files
**Operations**:
```bash
# 1. Determine base path with priority: --design-id > --session > error
if [ -n "$DESIGN_ID" ]; then
# Exact match by design ID
relative_path=$(find .workflow -name "${DESIGN_ID}" -type d -print -quit)
if [ -z "$relative_path" ]; then
echo "ERROR: Design run not found: $DESIGN_ID"
echo "HINT: Run '/workflow:ui-design:list' to see available design runs"
exit 1
fi
elif [ -n "$SESSION_ID" ]; then
# Latest in session
relative_path=$(find .workflow/active/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
if [ -z "$relative_path" ]; then
echo "ERROR: No design run found in session: $SESSION_ID"
echo "HINT: Create a design run first or provide --design-id"
exit 1
fi
else
echo "ERROR: Must provide --design-id or --session parameter"
exit 1
fi
base_path=$(cd "$relative_path" && pwd)
design_id=$(basename "$base_path")
# 2. Initialize directories
source="${source:-.}"
intermediates_dir="${base_path}/.intermediates/import-analysis"
mkdir -p "$intermediates_dir"
echo "[Phase 0] File Discovery Started"
echo " Design ID: $design_id"
echo " Source: $source"
echo " Output: $base_path"
# 3. Discover files using script
discovery_file="${intermediates_dir}/discovered-files.json"
~/.claude/scripts/discover-design-files.sh "$source" "$discovery_file"
echo " Output: $discovery_file"
```
<!-- TodoWrite: Initialize todo list -->
**TodoWrite**:
```json
[
{"content": "Phase 0: 发现和分类代码文件", "status": "in_progress", "activeForm": "发现代码文件"},
{"content": "Phase 1.1: Style Agent - 提取视觉token及代码片段 (design-tokens.json + code_snippets)", "status": "pending", "activeForm": "提取视觉token"},
{"content": "Phase 1.2: Animation Agent - 提取动画token及代码片段 (animation-tokens.json + code_snippets)", "status": "pending", "activeForm": "提取动画token"},
{"content": "Phase 1.3: Layout Agent - 提取布局模式及代码片段 (layout-templates.json + code_snippets)", "status": "pending", "activeForm": "提取布局模式"}
]
```
**File Discovery Behavior**:
- **Automatic discovery**: Intelligently scans source directory for all style-related files
- **Supported file types**: CSS, SCSS, JavaScript, TypeScript, HTML
- **Smart filtering**: Finds theme-related JS/TS files (e.g., tailwind.config.js, theme.js, styled-components)
- **Exclusions**: Automatically excludes `node_modules/`, `dist/`, `.git/`, and build directories
- **Output**: Single JSON file `discovered-files.json` in `.intermediates/import-analysis/`
- Structure: `{ "css": [...], "js": [...], "html": [...], "counts": {...}, "discovery_time": "..." }`
- Generated via bash commands using `find` + JSON formatting
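The exact behaviour of `~/.claude/scripts/discover-design-files.sh` is defined by the script itself; the Node sketch below is only a hypothetical approximation that produces the documented `discovered-files.json` shape (css/js/html arrays plus counts and a timestamp) while skipping the excluded directories listed above.
```javascript
// Hypothetical approximation of the discovery step (the real logic lives in
// discover-design-files.sh). Produces { css, js, html, counts, discovery_time }.
const fs = require("fs");
const path = require("path");

function discoverDesignFiles(source) {
  const skip = new Set(["node_modules", "dist", ".git", "build"]);
  const buckets = { css: [], js: [], html: [] };
  (function walk(dir) {
    for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
      const full = path.join(dir, entry.name);
      if (entry.isDirectory()) {
        if (!skip.has(entry.name)) walk(full);
      } else if (/\.(css|scss)$/.test(entry.name)) buckets.css.push(full);
      else if (/\.(js|ts)$/.test(entry.name)) buckets.js.push(full);
      else if (/\.html$/.test(entry.name)) buckets.html.push(full);
    }
  })(source);
  return {
    ...buckets,
    counts: { css: buckets.css.length, js: buckets.js.length, html: buckets.html.length },
    discovery_time: new Date().toISOString(),
  };
}
```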
<!-- TodoWrite: Update Phase 0 → completed, Phase 1.1-1.3 → in_progress (all 3 agents in parallel) -->
---
### Step 2: Parallel Agent Analysis
**Purpose**: Three agents analyze all file types in parallel, each producing completeness-report.json
**Operations**:
- **Style Agent**: Extracts visual tokens (colors, typography, spacing) from ALL files (CSS/SCSS/JS/HTML)
- **Animation Agent**: Extracts animations/transitions from ALL files
- **Layout Agent**: Extracts layout patterns/component structures from ALL files
**Validation**:
- Each agent can reference any file type (not restricted to single type)
- Direct output: Each agent generates completeness-report.json with findings + missing content
- No synthesis needed: Agents produce final output directly
```bash
echo "[Phase 1] Starting parallel agent analysis (3 agents)"
```
#### Style Agent Task (design-tokens.json, style-guide.md)
**Agent Task**:
```javascript
Task(subagent_type="ui-design-agent",
prompt="[STYLE_TOKENS_EXTRACTION]
Extract visual design tokens from code files using code import extraction pattern.
MODE: style-extraction | SOURCE: ${source} | BASE_PATH: ${base_path}
## Input Files
**Discovered Files**: ${intermediates_dir}/discovered-files.json
$(cat \"${intermediates_dir}/discovered-files.json\" 2>/dev/null | grep -E '(count|files)' | head -30)
## Code Import Extraction Strategy
**Step 0: Fast Conflict Detection** (Use Bash/Grep for quick global scan)
- Quick scan: \`rg --color=never -n "^\\s*--primary:|^\\s*--secondary:|^\\s*--accent:" --type css ${source}\` to find core color definitions with line numbers
- Semantic search: \`rg --color=never -B3 -A1 "^\\s*--primary:" --type css ${source}\` to capture surrounding context and comments
- Core token scan: Search for --primary, --secondary, --accent, --background patterns to detect all theme-critical definitions
- Pattern: rg → Extract values → Compare → If different → Read full context with comments → Record conflict
- Alternative (if many files): Execute CLI analysis for comprehensive report:
\`\`\`bash
cd ${source} && gemini -p \"
PURPOSE: Detect color token conflicts across all CSS/SCSS/JS files
TASK: • Scan all files for color definitions • Identify conflicting values • Extract semantic comments
MODE: analysis
CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts
EXPECTED: JSON report listing conflicts with file:line, values, semantic context
RULES: Focus on core tokens | Report ALL variants | analysis=READ-ONLY
\"
\`\`\`
**Step 1: Load file list**
- Read(${intermediates_dir}/discovered-files.json)
- Extract: file_types.css.files, file_types.js.files, file_types.html.files
**Step 2: Cross-source token extraction**
- CSS/SCSS: Colors, typography, spacing, shadows, borders
- JavaScript/TypeScript: Theme configs (Tailwind, styled-components, CSS-in-JS)
- HTML: Inline styles, usage patterns
**Step 3: Validation and Conflict Detection**
- Report missing tokens WITHOUT inference (mark as "missing" in _metadata.completeness)
- Detect and report inconsistent values across files (list ALL variants with file:line sources)
- Report missing categories WITHOUT auto-filling (document gaps for manual review)
- CRITICAL: Verify core tokens (primary, secondary, accent) against semantic comments in source code
## Output Files
**Target Directory**: ${base_path}/style-extraction/style-1/
**Files to Generate**:
1. **design-tokens.json**
- Follow [DESIGN_SYSTEM_GENERATION_TASK] standard token structure
- Add \"_metadata.extraction_source\": \"code_import\"
- Add \"_metadata.files_analyzed\": {css, js, html file lists}
- Add \"_metadata.completeness\": {status, missing_categories, recommendations}
- Add \"_metadata.conflicts\": Array of conflicting definitions (MANDATORY if conflicts exist)
- Add \"_metadata.code_snippets\": Map of code snippets (see below)
- Add \"_metadata.usage_recommendations\": Usage patterns from code (see below)
- Include \"source\" field for each token (e.g., \"file.css:23\")
**Code Snippet Recording**:
- For each extracted token, record the actual code snippet in `_metadata.code_snippets`
- Structure:
```json
\"code_snippets\": {
\"file.css:23\": {
\"lines\": \"23-27\",
\"snippet\": \":root {\\n --color-primary: oklch(0.5555 0.15 270);\\n /* Primary brand color */\\n --color-primary-hover: oklch(0.6 0.15 270);\\n}\",
\"context\": \"css-variable\"
}
}
```
- Context types: \"css-variable\" | \"css-class\" | \"js-object\" | \"js-theme-config\" | \"inline-style\"
- Record complete code blocks with all dependencies and relevant comments
- Typical ranges: Simple declarations (1-5 lines), Utility classes (5-15 lines), Complete configs (15-50 lines)
- Preserve original formatting and indentation
**Conflict Detection and Reporting**:
- When the same token is defined differently across multiple files, record in `_metadata.conflicts`
- Follow Agent schema for conflicts array structure (see ui-design-agent.md)
- Each conflict MUST include: token_name, category, all definitions with context, selected_value, selection_reason
- Selection priority:
1. Definitions with semantic comments explaining intent (/* Blue theme */, /* Primary brand color */)
2. Definitions that align with overall color scheme described in comments
3. When in doubt, report ALL variants and flag for manual review in completeness.recommendations
**Usage Recommendations Generation**:
- Analyze code usage patterns to extract `_metadata.usage_recommendations` (see ui-design-agent.md schema)
- **Typography recommendations**:
* `common_sizes`: Identify most frequent font size usage (e.g., \"body_text\": \"base (1rem)\")
* `common_combinations`: Extract heading+body pairings from actual usage (e.g., h1 with p tags)
- **Spacing recommendations**:
* `size_guide`: Categorize spacing values into tight/normal/loose based on frequency
* `common_patterns`: Extract frequent padding/margin combinations from components
- Analysis method: Scan code for class/style usage frequency, extract patterns from component implementations
- Optional: If insufficient usage data, mark fields as empty arrays/objects with note in completeness.recommendations
## Code Import Specific Requirements
- ✅ Read discovered-files.json FIRST to get file paths
- ✅ Track extraction source for each token (file:line)
- ✅ Record complete code snippets in _metadata.code_snippets (complete blocks with dependencies/comments)
- ✅ Include completeness assessment in _metadata
- ✅ Report inconsistent values with ALL source locations in _metadata.conflicts (DO NOT auto-normalize or choose)
- ✅ CRITICAL: Verify core theme tokens (primary, secondary, accent) match source code semantic intent
- ✅ When conflicts exist, prefer definitions with semantic comments explaining intent
- ❌ NO inference, NO smart filling, NO automatic conflict resolution
- ❌ NO external research or web searches (code-only extraction)
")
```
#### Animation Agent Task (animation-tokens.json, animation-guide.md)
**Agent Task**:
```javascript
Task(subagent_type="ui-design-agent",
prompt="[ANIMATION_TOKEN_GENERATION_TASK]
Extract animation tokens from code files using code import extraction pattern.
MODE: animation-extraction | SOURCE: ${source} | BASE_PATH: ${base_path}
## Input Files
**Discovered Files**: ${intermediates_dir}/discovered-files.json
$(cat \"${intermediates_dir}/discovered-files.json\" 2>/dev/null | grep -E '(count|files)' | head -30)
## Code Import Extraction Strategy
**Step 0: Fast Animation Discovery** (Use Bash/Grep for quick pattern detection)
- Quick scan: \`rg --color=never -n "@keyframes|animation:|transition:" --type css ${source}\` to find animation definitions with line numbers
- Framework detection: \`rg --color=never "framer-motion|gsap|@react-spring|react-spring" --type js --type ts ${source}\` to detect animation frameworks
- Pattern categorization: \`rg --color=never -B2 -A5 "@keyframes" --type css ${source}\` to extract keyframe animations with context
- Pattern: rg → Identify animation types → Map framework usage → Prioritize extraction targets
- Alternative (if complex framework mix): Execute CLI analysis for comprehensive report:
\`\`\`bash
cd ${source} && gemini -p \"
PURPOSE: Detect animation frameworks and patterns
TASK: • Identify frameworks • Map animation patterns • Categorize by complexity
MODE: analysis
CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts
EXPECTED: JSON report listing frameworks, animation types, file locations
RULES: Focus on framework consistency | Map all animations | analysis=READ-ONLY
\"
\`\`\`
**Step 1: Load file list**
- Read(${intermediates_dir}/discovered-files.json)
- Extract: file_types.css.files, file_types.js.files, file_types.html.files
**Step 2: Cross-source animation extraction**
- CSS/SCSS: @keyframes, transitions, animation properties
- JavaScript/TypeScript: Animation frameworks (Framer Motion, GSAP), CSS-in-JS
- HTML: Inline styles, data-animation attributes
**Step 3: Framework detection & normalization**
- Detect animation frameworks used (css-animations | framer-motion | gsap | none)
- Normalize into semantic token system
- Cross-reference CSS animations with JS configs
## Output Files
**Target Directory**: ${base_path}/animation-extraction/
**Files to Generate**:
1. **animation-tokens.json**
- Follow [ANIMATION_TOKEN_GENERATION_TASK] standard structure
- Add \"_metadata.framework_detected\"
- Add \"_metadata.files_analyzed\"
- Add \"_metadata.completeness\"
- Add \"_metadata.code_snippets\": Map of code snippets (same format as Style Agent)
- Include \"source\" field for each token
**Code Snippet Recording**:
- Record actual animation/transition code in `_metadata.code_snippets`
- Context types: \"css-keyframes\" | \"css-transition\" | \"js-animation\" | \"framer-motion\" | \"gsap\"
- Record complete blocks: @keyframes animations (10-30 lines), transition configs (5-15 lines), JS animation objects (15-50 lines)
- Include all animation steps, timing functions, and related comments
- Preserve original formatting and framework-specific syntax
## Code Import Specific Requirements
- ✅ Read discovered-files.json FIRST to get file paths
- ✅ Detect animation framework if present
- ✅ Track extraction source for each token (file:line)
- ✅ Record complete code snippets in _metadata.code_snippets (complete animation blocks with all steps/timing)
- ✅ Normalize framework-specific syntax into standard tokens
- ❌ NO external research or web searches (code-only extraction)
")
```
#### Layout Agent Task (layout-templates.json, layout-guide.md)
**Agent Task**:
```javascript
Task(subagent_type="ui-design-agent",
prompt="[LAYOUT_TEMPLATE_GENERATION_TASK]
Extract layout patterns from code files using code import extraction pattern.
MODE: layout-extraction | SOURCE: ${source} | BASE_PATH: ${base_path}
## Input Files
**Discovered Files**: ${intermediates_dir}/discovered-files.json
$(cat \"${intermediates_dir}/discovered-files.json\" 2>/dev/null | grep -E '(count|files)' | head -30)
## Code Import Extraction Strategy
**Step 0: Fast Component Discovery** (Use Bash/Grep for quick component scan)
- Layout pattern scan: \`rg --color=never -n "display:\\s*(grid|flex)|grid-template" --type css ${source}\` to find layout systems
- Component class scan: \`rg --color=never "class.*=.*\\"[^\"]*\\b(btn|button|card|input|modal|dialog|dropdown)" --type html --type js --type ts ${source}\` to identify UI components
- Universal component heuristic: Components appearing in 3+ files = universal, <3 files = specialized
- Pattern: rg → Count occurrences → Classify by frequency → Prioritize universal components
- Alternative (if large codebase): Execute CLI analysis for comprehensive categorization:
\`\`\`bash
cd ${source} && gemini -p \"
PURPOSE: Classify components as universal vs specialized
TASK: • Identify UI components • Classify reusability • Map layout systems
MODE: analysis
CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts @**/*.html
EXPECTED: JSON report categorizing components, layout patterns, naming conventions
RULES: Focus on component reusability | Identify layout systems | analysis=READ-ONLY
\"
\`\`\`
**Step 1: Load file list**
- Read(${intermediates_dir}/discovered-files.json)
- Extract: file_types.css.files, file_types.js.files, file_types.html.files
**Step 2: Cross-source layout extraction**
- CSS/SCSS: Grid systems, flexbox utilities, layout classes, media queries
- JavaScript/TypeScript: Layout components (React/Vue), grid configs
- HTML: Semantic structure, component hierarchies
**Component Classification** (MUST annotate in extraction):
- **Universal Components**: Reusable multi-component templates (buttons, inputs, cards, modals, etc.)
- **Specialized Components**: Module-specific components from code (feature-specific layouts, custom widgets, domain components)
**Step 3: System identification**
- Detect naming convention (BEM | SMACSS | utility-first | css-modules)
- Identify layout system (12-column | flexbox | css-grid | custom)
- Extract responsive strategy and breakpoints
## Output Files
**Target Directory**: ${base_path}/layout-extraction/
**Files to Generate**:
1. **layout-templates.json**
- Follow [LAYOUT_TEMPLATE_GENERATION_TASK] standard structure
- Add \"extraction_metadata\" section:
* extraction_source: \"code_import\"
* naming_convention: detected convention
* layout_system: {type, confidence, source_files}
* responsive: {breakpoints, mobile_first, source}
* completeness: {status, missing_items, recommendations}
* code_snippets: Map of code snippets (same format as Style Agent)
- For each component in \"layout_templates\":
* Include \"source\" field (file:line)
* **Include \"component_type\" field: \"universal\" | \"specialized\"**
* dom_structure with semantic HTML5
* css_layout_rules using var() placeholders
* Add \"description\" field explaining component purpose and classification rationale
* **Add \"usage_guide\" field for universal components** (see ui-design-agent.md schema):
- common_sizes: Extract size variants (small/medium/large) from code
- variant_recommendations: Document when to use each variant (primary/secondary/etc)
- usage_context: List typical usage scenarios from actual implementation
- accessibility_tips: Extract ARIA patterns and a11y notes from code
**Code Snippet Recording**:
- Record actual layout/component code in `extraction_metadata.code_snippets`
- Context types: \"css-grid\" | \"css-flexbox\" | \"css-utility\" | \"html-structure\" | \"react-component\"
- Record complete blocks: Utility classes (5-15 lines), HTML structures (10-30 lines), React components (20-100 lines)
- For components: include HTML structure + associated CSS rules + component logic
- Preserve original formatting and framework-specific syntax
## Code Import Specific Requirements
- ✅ Read discovered-files.json FIRST to get file paths
- ✅ Detect and document naming conventions
- ✅ Identify layout system with confidence level
- ✅ Extract component variants and states from usage patterns
- ✅ **Classify each component as \"universal\" or \"specialized\"** based on:
* Universal: Reusable across multiple features (buttons, inputs, cards, modals)
* Specialized: Feature-specific or domain-specific (checkout form, dashboard widget)
- ✅ Record complete code snippets in extraction_metadata.code_snippets (complete components/structures)
- ✅ **Document classification rationale** in component description
- ✅ **Generate usage_guide for universal components** (REQUIRED):
* Analyze code to extract size variants (scan for size-related classes/props)
* Document variant usage from code comments and implementation patterns
* List usage contexts from component instances in codebase
* Extract accessibility patterns from ARIA attributes and a11y comments
* If insufficient data, populate with minimal valid structure and note in completeness
- ❌ NO external research or web searches (code-only extraction)
")
```
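For manual inspection, the same file lists the Layout Agent reads in Step 1 can be pulled with `jq` (a sketch; the JSON shape follows `discovered-files.json` as documented in the Output Files section, and `jq` availability is assumed):
```bash
# Print every discovered CSS/JS/HTML file the agent will analyze
jq -r '.file_types.css.files[], .file_types.js.files[], .file_types.html.files[]' \
  "${intermediates_dir}/discovered-files.json"
```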
**Wait for All Agents**:
```bash
# Note: Agents run in parallel and write separate completeness reports
# Each agent generates its own completeness-report.json directly
# No synthesis phase needed
echo "[Phase 1] Parallel agent analysis complete"
```
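A minimal post-run check, assuming the output paths listed in the next section, is to confirm that all three extraction files exist before continuing:
```bash
# Sketch: verify the three parallel agents wrote their output files
for f in \
  "${base_path}/style-extraction/style-1/design-tokens.json" \
  "${base_path}/animation-extraction/animation-tokens.json" \
  "${base_path}/layout-extraction/layout-templates.json"; do
  [ -f "$f" ] && echo "✓ $f" || echo "✗ missing: $f"
done
```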
<!-- TodoWrite: Update Phase 1.1-1.3 → completed (all 3 agents complete together) -->
---
## Output Files
### Generated Files
**Location**: `${base_path}/`
**Directory Structure**:
```
${base_path}/
├── style-extraction/
│   └── style-1/
│       └── design-tokens.json       # Production-ready design tokens with code snippets
├── animation-extraction/
│   └── animation-tokens.json        # Animation/transition tokens with code snippets
├── layout-extraction/
│   └── layout-templates.json        # Layout patterns with code snippets
└── .intermediates/
    └── import-analysis/
        └── discovered-files.json    # All discovered files (JSON format)
```
**Files**:
1. **style-extraction/style-1/design-tokens.json**
- Production-ready design tokens
- Categories: colors, typography, spacing, opacity, border_radius, shadows, breakpoints
- Metadata: extraction_source, files_analyzed, completeness assessment, **code_snippets**
- **Code snippets**: Complete code blocks from source files (CSS variables, theme configs, inline styles)
2. **animation-extraction/animation-tokens.json**
- Animation tokens: duration, easing, transitions, keyframes, interactions
- Framework detection: css-animations, framer-motion, gsap, etc.
- Metadata: extraction_source, completeness assessment, **code_snippets**
- **Code snippets**: Complete animation blocks (@keyframes, transition configs, JS animations)
3. **layout-extraction/layout-templates.json**
- Layout templates for each discovered component
- Extraction metadata: naming_convention, layout_system, responsive strategy, **code_snippets**
- Component patterns with DOM structure and CSS rules
- **Code snippets**: Complete component/structure code (HTML, CSS utilities, React components)
**Intermediate Files**: `.intermediates/import-analysis/`
- `discovered-files.json` - All discovered files in JSON format with counts and metadata
---
## Error Handling
### Common Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| No files discovered | Wrong --source path or no style files in directory | Verify --source parameter points to correct directory with style files |
| Agent reports "failed" status | No tokens found in any file | Review file content, check if files contain design tokens |
| Empty completeness reports | Files exist but contain no extractable tokens | Manually verify token definitions in source files |
| Missing file type | File extensions not recognized | Check that files use standard extensions (.css, .scss, .js, .ts, .html) |
---
## Best Practices
1. **Point to the right directory**: Use `--source` to specify the directory containing your style files (e.g., `./src`, `./app`, `./styles`)
2. **Let automatic discovery work**: The command will find all relevant files - no need to specify patterns
3. **Specify target design run**: Use `--design-id` for existing design run or `--session` to use session's latest design run (one of these is required)
4. **Cross-reference agent reports**: Compare all three completeness reports (style, animation, layout) to identify gaps
5. **Review missing content**: Check `_metadata.completeness` field in reports for actionable improvements
6. **Verify file discovery**: Check `${base_path}/.intermediates/import-analysis/discovered-files.json` if agents report no data
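For Best Practice 6, a quick way to summarize the discovery results is a `jq` query (a sketch; path as documented above, `jq` assumed to be installed):
```bash
jq '{total: .total_files,
     css: .file_types.css.count,
     js: .file_types.js.count,
     html: .file_types.html.count}' \
  "${base_path}/.intermediates/import-analysis/discovered-files.json"
```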

File diff suppressed because it is too large


@@ -0,0 +1,333 @@
---
name: workflow:ui-design:reference-page-generator
description: Generate multi-component reference pages and documentation from design run extraction
argument-hint: "[--design-run <path>] [--package-name <name>] [--output-dir <path>]"
allowed-tools: Read,Write,Bash,Task,TodoWrite
auto-continue: true
---
# UI Design: Reference Page Generator
## Overview
Converts design run extraction results into shareable reference package with:
- Interactive multi-component preview (preview.html + preview.css)
- Component layout templates (layout-templates.json)
**Role**: Takes existing design run (from `import-from-code` or other extraction commands) and generates preview pages for easy reference.
## Usage
### Command Syntax
```bash
/workflow:ui-design:reference-page-generator [FLAGS]
# Flags
--design-run <path> Design run directory path (required)
--package-name <name> Package name for reference (required)
--output-dir <path> Output directory (default: .workflow/reference_style)
```
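Example invocation (the design run path and package name below are hypothetical):
```bash
/workflow:ui-design:reference-page-generator \
  --design-run .workflow/active/WFS-123/design-run-20251124 \
  --package-name main-app-style-v1
```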
---
## Execution Process
### Phase 0: Setup & Validation
**Purpose**: Validate inputs, prepare output directory
**Operations**:
```bash
# 1. Validate required parameters
if [ -z "$design_run" ] || [ -z "$package_name" ]; then
echo "ERROR: --design-run and --package-name are required"
echo "USAGE: /workflow:ui-design:reference-page-generator --design-run <path> --package-name <name>"
exit 1
fi
# 2. Validate package name format (lowercase, alphanumeric, hyphens only)
if ! [[ "$package_name" =~ ^[a-z0-9][a-z0-9-]*$ ]]; then
echo "ERROR: Invalid package name. Use lowercase, alphanumeric, and hyphens only."
echo "EXAMPLE: main-app-style-v1"
exit 1
fi
# 3. Validate design run exists
if [ ! -d "$design_run" ]; then
echo "ERROR: Design run not found: $design_run"
echo "HINT: Run '/workflow:ui-design:import-from-code' first to create design run"
exit 1
fi
# 4. Check required extraction files exist
required_files=(
"$design_run/style-extraction/style-1/design-tokens.json"
"$design_run/layout-extraction/layout-templates.json"
)
for file in "${required_files[@]}"; do
if [ ! -f "$file" ]; then
echo "ERROR: Required file not found: $file"
echo "HINT: Ensure design run has style and layout extraction results"
exit 1
fi
done
# 5. Setup output directory and validate
output_dir="${output_dir:-.workflow/reference_style}"
package_dir="${output_dir}/${package_name}"
# Check if package directory exists and is not empty
if [ -d "$package_dir" ] && [ "$(ls -A $package_dir 2>/dev/null)" ]; then
# Directory exists - check if it's a valid package or just a directory
if [ -f "$package_dir/metadata.json" ]; then
# Valid package - safe to overwrite
existing_version=$(jq -r '.version // "unknown"' "$package_dir/metadata.json" 2>/dev/null || echo "unknown")
echo "INFO: Overwriting existing package '$package_name' (version: $existing_version)"
else
# Directory exists but not a valid package
echo "ERROR: Directory '$package_dir' exists but is not a valid package"
echo "Use a different package name or remove the directory manually"
exit 1
fi
fi
mkdir -p "$package_dir"
echo "[Phase 0] Setup Complete"
echo " Design Run: $design_run"
echo " Package: $package_name"
echo " Output: $package_dir"
```
<!-- TodoWrite: Initialize todo list -->
**TodoWrite**:
```json
[
{"content": "Phase 0: 验证和准备", "status": "completed", "activeForm": "验证参数"},
{"content": "Phase 1: 准备组件数据", "status": "in_progress", "activeForm": "复制布局模板"},
{"content": "Phase 2: 生成预览页面", "status": "pending", "activeForm": "生成预览"}
]
```
---
### Phase 1: Prepare Component Data
**Purpose**: Copy required files from design run to package directory
**Operations**:
```bash
echo "[Phase 1] Preparing component data from design run"
# 1. Copy layout templates as component patterns
cp "${design_run}/layout-extraction/layout-templates.json" "${package_dir}/layout-templates.json"
if [ ! -f "${package_dir}/layout-templates.json" ]; then
echo "ERROR: Failed to copy layout templates"
exit 1
fi
# Count components from layout templates
component_count=$(jq -r '.layout_templates | length // 0' "${package_dir}/layout-templates.json" 2>/dev/null || echo 0)
echo " ✓ Layout templates copied (${component_count} components)"
# 2. Copy design tokens (required for preview generation)
cp "${design_run}/style-extraction/style-1/design-tokens.json" "${package_dir}/design-tokens.json"
if [ ! -f "${package_dir}/design-tokens.json" ]; then
echo "ERROR: Failed to copy design tokens"
exit 1
fi
echo " ✓ Design tokens copied"
# 3. Copy animation tokens if exists (optional)
if [ -f "${design_run}/animation-extraction/animation-tokens.json" ]; then
cp "${design_run}/animation-extraction/animation-tokens.json" "${package_dir}/animation-tokens.json"
echo " ✓ Animation tokens copied"
else
echo " ○ Animation tokens not found (optional)"
fi
echo "[Phase 1] Component data preparation complete"
```
<!-- TodoWrite: Mark Phase 1 complete, start Phase 2 -->
**TodoWrite**:
```json
[
{"content": "Phase 1: 准备组件数据", "status": "completed", "activeForm": "复制布局模板"},
{"content": "Phase 2: 生成预览页面", "status": "in_progress", "activeForm": "生成预览"}
]
```
---
### Phase 2: Preview Generation (Final Phase)
**Purpose**: Generate interactive multi-component preview (preview.html + preview.css)
**Agent Task**:
```javascript
Task(ui-design-agent): `
[PREVIEW_SHOWCASE_GENERATION]
Generate interactive multi-component showcase panel for reference package
PACKAGE_DIR: ${package_dir} | PACKAGE_NAME: ${package_name}
## Input Files (MUST READ ALL)
1. ${package_dir}/layout-templates.json (component layout patterns - REQUIRED)
2. ${package_dir}/design-tokens.json (design tokens - REQUIRED)
3. ${package_dir}/animation-tokens.json (optional, if exists)
## Generation Task
Create interactive showcase with these sections:
### Section 1: Colors
- Display all color categories as color swatches
- Show hex/rgb values
- Group by: brand, semantic, surface, text, border
### Section 2: Typography
- Display typography scale (font sizes, weights)
- Show typography combinations if available
- Include font family examples
- **Display usage recommendations** (from design-tokens.json _metadata.usage_recommendations.typography):
* Common sizes table (small_text, body_text, heading)
* Common combinations with use cases
### Section 3: Components
- Render all components from layout-templates.json (use layout_templates field)
- **Universal Components**: Showcase reusable cross-feature components (buttons, inputs, cards, etc.)
* **Display usage_guide** (from layout-templates.json):
- Common sizes table with dimensions and use cases
- Variant recommendations (when to use primary/secondary/etc)
- Usage context list (typical scenarios)
- Accessibility tips checklist
- **Specialized Components**: Display module-specific components from code (feature-specific layouts, custom widgets)
- Display all variants side-by-side
- Show DOM structure with proper styling
- Include usage code snippets in <details> tags
- Clearly label component types (universal vs specialized)
### Section 4: Spacing & Layout
- Visual spacing scale
- Border radius examples
- Shadow depth examples
- **Display spacing recommendations** (from design-tokens.json _metadata.usage_recommendations.spacing):
* Size guide table (tight/normal/loose categories)
* Common patterns with use cases and pixel values
### Section 5: Animations (if available)
- Animation duration examples
- Easing function demonstrations
## Output Requirements
Generate 2 files:
1. ${package_dir}/preview.html
2. ${package_dir}/preview.css
### preview.html Structure:
- Complete standalone HTML file
- Responsive design with mobile-first approach
- Sticky navigation for sections
- Interactive component demonstrations
- Code snippets in collapsible <details> elements
- Footer with package metadata
### preview.css Structure:
- CSS Custom Properties from design-tokens.json
- Typography combination classes
- Component classes from layout-templates.json
- Preview page layout styles
- Interactive demo styles
## Critical Requirements
- ✅ Read ALL input files (layout-templates.json, design-tokens.json, animation-tokens.json if exists)
- ✅ Generate complete, interactive showcase HTML
- ✅ All CSS uses var() references to design tokens
- ✅ Display ALL components from layout-templates.json
- ✅ **Separate universal components from specialized components** in the showcase
- ✅ Display component DOM structures with proper styling
- ✅ Include usage code snippets
- ✅ Label each component type clearly (Universal / Specialized)
- ✅ **Display usage recommendations** when available:
- Typography: common_sizes, common_combinations (from _metadata.usage_recommendations)
- Components: usage_guide for universal components (from layout-templates)
- Spacing: size_guide, common_patterns (from _metadata.usage_recommendations)
- ✅ Gracefully handle missing usage data (display sections only if data exists)
- ✅ Use Write() to save both files:
- ${package_dir}/preview.html
- ${package_dir}/preview.css
- ❌ NO external research or MCP calls
`
```
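After the agent reports completion, a lightweight sanity check (a sketch; it only verifies that both files exist and that the CSS references design tokens via `var()`):
```bash
test -f "${package_dir}/preview.html" && test -f "${package_dir}/preview.css" \
  && echo "var() references in preview.css: $(grep -c 'var(--' "${package_dir}/preview.css")"
```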
<!-- TodoWrite: Mark all complete -->
**TodoWrite**:
```json
[
{"content": "Phase 0: 验证和准备", "status": "completed", "activeForm": "验证参数"},
{"content": "Phase 1: 准备组件数据", "status": "completed", "activeForm": "复制布局模板"},
{"content": "Phase 2: 生成预览页面", "status": "completed", "activeForm": "生成预览"}
]
```
---
## Output Structure
```
${output_dir}/
└── ${package_name}/
    ├── layout-templates.json    # Layout templates (copied from design run)
    ├── design-tokens.json       # Design tokens (copied from design run)
    ├── animation-tokens.json    # Animation tokens (copied from design run, optional)
    ├── preview.html             # Interactive showcase (NEW)
    └── preview.css              # Showcase styling (NEW)
```
## Completion Message
```
✅ Preview package generated!
Package: {package_name}
Location: {package_dir}
Files:
✓ layout-templates.json {component_count} components
✓ design-tokens.json Design tokens
{if exists: "✓" else: "○"} animation-tokens.json Animation tokens (optional)
✓ preview.html Interactive showcase
✓ preview.css Showcase styling
Open preview:
file://{absolute_path_to_package_dir}/preview.html
```
## Error Handling
### Common Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Missing --design-run or --package-name | Required parameters not provided | Provide both flags |
| Invalid package name | Contains uppercase, special chars | Use lowercase, alphanumeric, hyphens only |
| Design run not found | Incorrect path or design run doesn't exist | Verify design run path, run import-from-code first |
| Missing extraction files | Design run incomplete | Ensure design run has style-extraction and layout-extraction results |
| Layout templates copy failed | layout-templates.json not found | Run import-from-code with Layout Agent first |
| Preview generation failed | Invalid design tokens | Check design-tokens.json format |
---


@@ -1,20 +1,22 @@
---
name: style-extract
description: Extract design style from reference images or text prompts using Claude analysis with variant generation
argument-hint: "[--base-path <path>] [--session <id>] [--images "<glob>"] [--urls "<list>"] [--prompt "<desc>"] [--mode <imitate|explore>] [--variants <count>]"
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*), mcp__chrome-devtools__navigate_page(*), mcp__chrome-devtools__evaluate_script(*)
description: Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode
argument-hint: "[--design-id <id>] [--session <id>] [--images "<glob>"] [--prompt "<desc>"] [--variants <count>] [--interactive] [--refine]"
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---
# Style Extraction Command
## Overview
Extract design style from reference images or text prompts using Claude's built-in analysis. Directly generates production-ready design systems with complete `design-tokens.json` and `style-guide.md` for each variant.
Extract design style from reference images or text prompts using Claude's built-in analysis. Supports two modes:
1. **Exploration Mode** (default): Generate multiple contrasting design variants
2. **Refinement Mode** (`--refine`): Refine a single existing design through detailed adjustments
**Strategy**: AI-Driven Design Space Exploration
- **Claude-Native**: 100% Claude analysis, no external tools
- **Direct Output**: Complete design systems (design-tokens.json + style-guide.md)
- **Direct Output**: Complete design systems (design-tokens.json)
- **Flexible Input**: Images, text prompts, or both (hybrid mode)
- **Maximum Contrast**: AI generates maximally divergent design directions
- **Dual Mode**: Exploration (multiple contrasting variants) or Refinement (single design fine-tuning)
- **Production-Ready**: WCAG AA compliant, OKLCH colors, semantic naming
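Typical invocations (all argument values below are hypothetical):
```bash
# Exploration mode: generate 3 contrasting variants with interactive selection
/workflow:ui-design:style-extract --session WFS-123 --prompt "clean fintech dashboard" --variants 3 --interactive

# Refinement mode: fine-tune an existing design run
/workflow:ui-design:style-extract --design-id design-run-20251124-001 --refine --prompt "slightly rounder corners"
```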
## Phase 0: Setup & Input Validation
@@ -22,98 +24,46 @@ Extract design style from reference images or text prompts using Claude's built-
### Step 1: Detect Input Mode, Extraction Mode & Base Path
```bash
# Detect input source
# Priority: --urls + --images + --prompt → hybrid-url | --urls + --images → url-image | --urls → url | --images + --prompt → hybrid | --images → image | --prompt → text
# Priority: --images + --prompt → hybrid | --images → image | --prompt → text
# Parse URLs if provided (format: "target:url,target:url,...")
IF --urls:
url_list = []
FOR pair IN split(--urls, ","):
IF ":" IN pair:
target, url = pair.split(":", 1)
url_list.append({target: target.strip(), url: url.strip()})
ELSE:
# Single URL without target
url_list.append({target: "page", url: pair.strip()})
# Detect refinement mode
refine_mode = --refine OR false
has_urls = true
primary_url = url_list[0].url # First URL as primary source
# Set variants count
# Refinement mode: Force variants_count = 1 (ignore user-provided --variants)
# Exploration mode: Use --variants or default to 3 (range: 1-5)
IF refine_mode:
variants_count = 1
REPORT: "🔧 Refinement mode enabled: Will generate 1 refined design system"
ELSE:
has_urls = false
# Determine extraction mode
# Priority: --mode parameter → default "imitate"
extraction_mode = --mode OR "imitate" # "imitate" or "explore"
# Set variants count based on mode
IF extraction_mode == "imitate":
variants_count = 1 # Force single variant for imitate mode (ignore --variants)
ELSE IF extraction_mode == "explore":
variants_count = --variants OR 3 # Default to 3 for explore mode
variants_count = --variants OR 3
VALIDATE: 1 <= variants_count <= 5
REPORT: "🔍 Exploration mode: Will generate {variants_count} contrasting design directions"
# Determine base path
bash(find .workflow -type d -name "design-*" | head -1) # Auto-detect
# OR use --base-path / --session parameters
# Determine base path with priority: --design-id > --session > auto-detect
if [ -n "$DESIGN_ID" ]; then
# Exact match by design ID
relative_path=$(find .workflow -name "${DESIGN_ID}" -type d -print -quit)
elif [ -n "$SESSION_ID" ]; then
# Latest in session
relative_path=$(find .workflow/active/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
else
# Latest globally
relative_path=$(find .workflow -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
fi
# Validate and convert to absolute path
if [ -z "$relative_path" ] || [ ! -d "$relative_path" ]; then
echo "❌ ERROR: Design run not found"
echo "💡 HINT: Run '/workflow:ui-design:list' to see available design runs"
exit 1
fi
base_path=$(cd "$relative_path" && pwd)
bash(echo "✓ Base path: $base_path")
```
### Step 2: Extract Computed Styles (URL Mode - Auto-Trigger)
```bash
# AUTO-TRIGGER: If URLs are available (from --urls parameter or capture metadata), automatically extract real CSS values
# This provides accurate design tokens to supplement visual analysis
# Priority 1: Check for --urls parameter
IF has_urls:
url_to_extract = primary_url
url_source = "--urls parameter"
# Priority 2: Check for URL metadata from capture phase
ELSE IF exists({base_path}/.metadata/capture-urls.json):
capture_urls = Read({base_path}/.metadata/capture-urls.json)
url_to_extract = capture_urls[0] # Use first URL
url_source = "capture metadata"
ELSE:
url_to_extract = null
# Execute extraction if URL available
IF url_to_extract AND mcp_chrome_devtools_available:
REPORT: "🔍 Auto-triggering URL mode: Extracting computed styles from {url_source}"
REPORT: " URL: {url_to_extract}"
# Read extraction script
script_content = Read(~/.claude/scripts/extract-computed-styles.js)
# Open page in Chrome DevTools
mcp__chrome-devtools__navigate_page(url=url_to_extract)
# Execute extraction script directly
result = mcp__chrome-devtools__evaluate_script(function=script_content)
# Save computed styles to intermediates directory
bash(mkdir -p {base_path}/.intermediates/style-analysis)
Write({base_path}/.intermediates/style-analysis/computed-styles.json, result)
computed_styles_available = true
REPORT: " ✅ Computed styles extracted and saved"
ELSE:
computed_styles_available = false
IF url_to_extract:
REPORT: "⚠️ Chrome DevTools MCP not available, falling back to visual analysis"
```
**Extraction Script Reference**: `~/.claude/scripts/extract-computed-styles.js`
**Usage**: Read the script file and use content directly in `mcp__chrome-devtools__evaluate_script()`
**Script returns**:
- `metadata`: Extraction timestamp, URL, method
- `tokens`: Organized design tokens (colors, borderRadii, shadows, fontSizes, fontWeights, spacing)
**Benefits**:
- ✅ Pixel-perfect accuracy for border-radius, box-shadow, padding, etc.
- ✅ Eliminates guessing from visual analysis
- ✅ Provides ground truth for design tokens
### Step 3: Load Inputs
### Step 2: Load Inputs
```bash
# For image mode
bash(ls {images_pattern}) # Expand glob pattern
@@ -138,74 +88,133 @@ IF exists: SKIP to completion
---
**Phase 0 Output**: `input_mode`, `base_path`, `extraction_mode`, `variants_count`, `loaded_images[]` or `prompt_guidance`, `has_urls`, `url_list[]`, `computed_styles_available`
**Phase 0 Output**: `input_mode`, `base_path`, `extraction_mode`, `variants_count`, `loaded_images[]` or `prompt_guidance`
## Phase 1: Design Direction Generation (Explore Mode Only)
## Phase 1: Design Direction or Refinement Options Generation
### Step 1: Check Extraction Mode
```bash
# Check extraction mode
# extraction_mode == "imitate" → skip this phase
# extraction_mode == "explore" → execute this phase
```
**If imitate mode**: Skip to Phase 2
### Step 2: Load Project Context (Explore Mode)
### Step 1: Load Project Context
```bash
# Load brainstorming context if available
bash(test -f {base_path}/.brainstorming/{role-analysis-document} && cat {base_path}/.brainstorming/{role-analysis-document})
# Load existing design system if refinement mode
IF refine_mode:
existing_tokens = Read({base_path}/style-extraction/style-1/design-tokens.json)
```
### Step 3: Generate Design Direction Options (Agent Task 1)
### Step 2: Generate Options (Agent Task 1 - Mode-Specific)
**Executor**: `Task(ui-design-agent)`
Launch agent to generate `variants_count` design direction options with previews:
**Exploration Mode** (default): Generate contrasting design directions
**Refinement Mode** (`--refine`): Generate refinement options for existing design
```javascript
Task(ui-design-agent): `
[DESIGN_DIRECTION_GENERATION_TASK]
Generate {variants_count} maximally contrasting design directions with visual previews
// Conditional agent task based on refine_mode
IF NOT refine_mode:
// EXPLORATION MODE
Task(ui-design-agent): `
[DESIGN_DIRECTION_GENERATION_TASK]
Generate {variants_count} maximally contrasting design directions with visual previews
SESSION: {session_id} | MODE: explore | BASE_PATH: {base_path}
SESSION: {session_id} | MODE: explore | BASE_PATH: {base_path}
## Input Analysis
- User prompt: {prompt_guidance}
- Visual references: {loaded_images if available}
- Project context: {brainstorming_context if available}
## Input Analysis
- User prompt: {prompt_guidance}
- Visual references: {loaded_images if available}
- Project context: {brainstorming_context if available}
## Analysis Rules
- Analyze 6D attribute space: color saturation, visual weight, formality, organic/geometric, innovation, density
- Generate {variants_count} directions with MAXIMUM contrast
- Each direction must be distinctly different (min distance score: 0.7)
## Analysis Rules
- Analyze 6D attribute space: color saturation, visual weight, formality, organic/geometric, innovation, density
- Generate {variants_count} directions with MAXIMUM contrast
- Each direction must be distinctly different (min distance score: 0.7)
## Generate for EACH Direction
1. **Core Philosophy**:
- philosophy_name (2-3 words, e.g., "Minimalist & Airy")
- design_attributes (6D scores 0-1)
- search_keywords (3-5 keywords)
- anti_keywords (2-3 keywords to avoid)
- rationale (why this is distinct from others)
## Generate for EACH Direction
1. **Core Philosophy**:
- philosophy_name (2-3 words, e.g., "Minimalist & Airy")
- design_attributes (6D scores 0-1)
- search_keywords (3-5 keywords)
- anti_keywords (2-3 keywords to avoid)
- rationale (why this is distinct from others)
2. **Visual Preview Elements**:
- primary_color (OKLCH format)
- secondary_color (OKLCH format)
- accent_color (OKLCH format)
- font_family_heading (specific font name)
- font_family_body (specific font name)
- border_radius_base (e.g., "0.5rem")
- mood_description (1-2 sentences describing the feel)
2. **Visual Preview Elements**:
- primary_color (OKLCH format)
- secondary_color (OKLCH format)
- accent_color (OKLCH format)
- font_family_heading (specific font name)
- font_family_body (specific font name)
- border_radius_base (e.g., "0.5rem")
- mood_description (1-2 sentences describing the feel)
## Output
Write single JSON file: {base_path}/.intermediates/style-analysis/analysis-options.json
## Output
Write single JSON file: {base_path}/.intermediates/style-analysis/analysis-options.json
Use schema from INTERACTIVE-DATA-SPEC.md (Style Extract: analysis-options.json)
Use schema from INTERACTIVE-DATA-SPEC.md (Style Extract: analysis-options.json)
CRITICAL: Use Write() tool immediately after generating complete JSON
`
CRITICAL: Use Write() tool immediately after generating complete JSON
`
ELSE:
// REFINEMENT MODE
Task(ui-design-agent): `
[DESIGN_REFINEMENT_OPTIONS_TASK]
Generate refinement options for existing design system
SESSION: {session_id} | MODE: refine | BASE_PATH: {base_path}
## Existing Design System
- design-tokens.json: {existing_tokens}
## Input Guidance
- User prompt: {prompt_guidance}
- Visual references: {loaded_images if available}
## Refinement Categories
Generate 8-12 refinement options across these categories:
1. **Intensity/Degree Adjustments** (2-3 options):
- Color intensity: more vibrant ↔ more muted
- Visual weight: bolder ↔ lighter
- Density: more compact ↔ more spacious
2. **Specific Attribute Tuning** (3-4 options):
- Border radius: sharper corners ↔ rounder edges
- Shadow depth: subtler shadows ↔ deeper elevation
- Typography scale: tighter scale ↔ more contrast
- Spacing scale: tighter rhythm ↔ more breathing room
3. **Context-Specific Variations** (2-3 options):
- Dark mode optimization
- Mobile-specific adjustments
- High-contrast accessibility mode
4. **Component-Level Customization** (1-2 options):
- Button styling emphasis
- Card/container treatment
- Form input refinements
## Output Format
Each option:
- category: "intensity|attribute|context|component"
- option_id: unique identifier
- label: Short descriptive name (e.g., "More Vibrant Colors")
- description: What changes (2-3 sentences)
- preview_changes: Key token adjustments preview
- impact_scope: Which tokens affected
## Output
Write single JSON file: {base_path}/.intermediates/style-analysis/analysis-options.json
Use refinement schema:
{
"mode": "refinement",
"base_design": "style-1",
"refinement_options": [array of refinement options]
}
CRITICAL: Use Write() tool immediately after generating complete JSON
`
```
### Step 4: Verify Options File Created
### Step 3: Verify Options File Created
```bash
bash(test -f {base_path}/.intermediates/style-analysis/analysis-options.json && echo "created")
@@ -217,20 +226,44 @@ bash(cat {base_path}/.intermediates/style-analysis/analysis-options.json | grep
---
## Phase 1.5: User Confirmation (Explore Mode Only - INTERACTIVE)
## Phase 1.5: User Confirmation (Optional - Triggered by --interactive)
**Purpose**: Allow user to select preferred design direction(s) before generating full design systems
**Purpose**:
- **Exploration Mode**: Allow user to select preferred design direction(s)
- **Refinement Mode**: Allow user to select refinement options to apply
### Step 1: Load and Present Options
**Trigger Condition**: Execute this phase ONLY if `--interactive` flag is present
### Step 1: Check Interactive Flag
```bash
# Skip this entire phase if --interactive flag is not present
IF NOT --interactive:
SKIP to Phase 2
IF refine_mode:
REPORT: " Non-interactive refinement mode: Will apply all suggested refinements"
ELSE:
REPORT: " Non-interactive mode: Will generate all {variants_count} variants"
REPORT: "🎯 Interactive mode enabled: User selection required"
```
### Step 2: Load and Present Options (Mode-Specific)
```bash
# Read options file
options = Read({base_path}/.intermediates/style-analysis/analysis-options.json)
# Parse design directions
design_directions = options.design_directions
# Branch based on mode
IF NOT refine_mode:
# EXPLORATION MODE
design_directions = options.design_directions
ELSE:
# REFINEMENT MODE
refinement_options = options.refinement_options
```
### Step 2: Present Options to User
### Step 3: Present Options to User (Mode-Specific)
**Exploration Mode**:
```
📋 Design Direction Options
@@ -263,14 +296,39 @@ Please select the direction(s) you'd like to develop into complete design system
═══════════════════════════════════════════════════
```
### Step 3: Capture User Selection
**Refinement Mode**:
```
📋 Design Refinement Options
We've analyzed your existing design system and generated refinement options.
Please select the refinement(s) you'd like to apply.
Base Design: style-1
{FOR each option in refinement_options grouped by category:
━━━ {category_name} ━━━
{FOR each refinement in category_options:
[{refinement.option_id}] {refinement.label}
{refinement.description}
Preview: {refinement.preview_changes}
Affects: {refinement.impact_scope}
}
}
═══════════════════════════════════════════════════
```
### Step 4: Capture User Selection and Update Analysis JSON
**Exploration Mode Interaction**:
```javascript
// Use AskUserQuestion tool for selection
// Use AskUserQuestion tool for multi-selection
AskUserQuestion({
questions: [{
question: "Which design direction would you like to develop into a complete design system?",
question: "Which design direction(s) would you like to develop into complete design systems?",
header: "Style Choice",
multiSelect: false, // Single selection for Phase 1
multiSelect: true, // Multi-selection enabled
options: [
{FOR each direction:
label: "Option {direction.index}: {direction.philosophy_name}",
@@ -280,142 +338,186 @@ AskUserQuestion({
}]
})
// Parse user response (e.g., "Option 1: Minimalist & Airy")
selected_option_text = user_answer
// Parse user response (array of selections)
selected_options = user_answer
// Check for user cancellation
IF selected_option_text == null OR selected_option_text == "":
IF selected_options == null OR selected_options.length == 0:
REPORT: "⚠️ User canceled selection. Workflow terminated."
EXIT workflow
// Extract option index from response format "Option N: Name"
match = selected_option_text.match(/Option (\d+):/)
IF match:
selected_index = parseInt(match[1])
ELSE:
ERROR: "Invalid selection format. Expected 'Option N: ...' format"
EXIT workflow
```
// Extract option indices
selected_indices = []
FOR each selected_option_text IN selected_options:
match = selected_option_text.match(/Option (\d+):/)
IF match:
selected_indices.push(parseInt(match[1]))
### Step 4: Write User Selection File
```bash
# Create user selection JSON
selection_data = {
"metadata": {
"selected_at": "{current_timestamp}",
"selection_type": "single",
"session_id": "{session_id}"
},
"selected_indices": [selected_index],
"refinements": {
"enabled": false
}
REPORT: "✅ Selected {selected_indices.length} design direction(s)"
// Update analysis-options.json
options_file = Read({base_path}/.intermediates/style-analysis/analysis-options.json)
options_file.user_selection = {
"selected_at": NOW(),
"selected_indices": selected_indices,
"selection_count": selected_indices.length
}
# Write to file
bash(echo '{selection_data}' > {base_path}/.intermediates/style-analysis/user-selection.json)
# Verify
bash(test -f {base_path}/.intermediates/style-analysis/user-selection.json && echo "saved")
Write({base_path}/.intermediates/style-analysis/analysis-options.json, JSON.stringify(options_file, indent=2))
```
### Step 5: Confirmation Message
**Refinement Mode Interaction**:
```javascript
// Use AskUserQuestion tool for multi-selection of refinements
AskUserQuestion({
questions: [{
question: "Which refinement(s) would you like to apply to your design system?",
header: "Refinements",
multiSelect: true, // Multi-selection enabled
options: [
{FOR each refinement:
label: "{refinement.label}",
description: "{refinement.description} (Affects: {refinement.impact_scope})"
}
]
}]
})
// Parse user response
selected_refinements = user_answer
// Check for user cancellation
IF selected_refinements == null OR selected_refinements.length == 0:
REPORT: "⚠️ User canceled selection. Workflow terminated."
EXIT workflow
// Extract refinement option_ids
selected_option_ids = []
FOR each selected_text IN selected_refinements:
# Match against refinement labels to find option_ids
FOR each refinement IN refinement_options:
IF refinement.label IN selected_text:
selected_option_ids.push(refinement.option_id)
REPORT: "✅ Selected {selected_option_ids.length} refinement(s)"
// Update analysis-options.json
options_file = Read({base_path}/.intermediates/style-analysis/analysis-options.json)
options_file.user_selection = {
"selected_at": NOW(),
"selected_option_ids": selected_option_ids,
"selection_count": selected_option_ids.length
}
Write({base_path}/.intermediates/style-analysis/analysis-options.json, JSON.stringify(options_file, indent=2))
```
### Step 5: Confirmation Message (Mode-Specific)
**Exploration Mode**:
```
✅ Selection recorded!
You selected: Option {selected_index} - {selected_direction.philosophy_name}
You selected {selected_indices.length} design direction(s):
{FOR each index IN selected_indices:
• Option {index} - {design_directions[index-1].philosophy_name}
}
Proceeding to generate complete design system based on your selection...
Proceeding to generate {selected_indices.length} complete design system(s)...
```
**Output**: `user-selection.json` with user's choice
**Refinement Mode**:
```
✅ Selection recorded!
You selected {selected_option_ids.length} refinement(s):
{FOR each id IN selected_option_ids:
• {refinement_by_id[id].label} ({refinement_by_id[id].category})
}
Proceeding to apply refinements and generate updated design system...
```
**Output**: Updated `analysis-options.json` with user's selection embedded
## Phase 2: Design System Generation (Agent Task 2)
**Executor**: `Task(ui-design-agent)` for selected variant(s)
### Step 1: Load User Selection (Explore Mode)
### Step 1: Load User Selection or Default to All
```bash
# For explore mode, read user selection
IF extraction_mode == "explore":
selection = Read({base_path}/.intermediates/style-analysis/user-selection.json)
selected_indices = selection.selected_indices
refinements = selection.refinements
# Read analysis-options.json which may contain user_selection
options = Read({base_path}/.intermediates/style-analysis/analysis-options.json)
# Also read the selected direction details from options
options = Read({base_path}/.intermediates/style-analysis/analysis-options.json)
selected_directions = [options.design_directions[i-1] for i in selected_indices] # 0-indexed
# Check if user_selection field exists (interactive mode)
IF options.user_selection AND options.user_selection.selected_indices:
# Interactive mode: Use user-selected variants
selected_indices = options.user_selection.selected_indices # Array of selected indices (e.g., [1, 3])
# For Phase 1, we only allow single selection
selected_direction = selected_directions[0]
actual_variants_count = 1
REPORT: "🎯 Interactive mode: Using {selected_indices.length} user-selected variant(s)"
ELSE:
# Imitate mode - generate single variant without selection
selected_direction = null
actual_variants_count = 1
# Non-interactive mode: Generate ALL variants (default behavior)
selected_indices = [1, 2, ..., variants_count] # All indices from 1 to variants_count
REPORT: " Non-interactive mode: Generating all {variants_count} variant(s)"
# Extract the selected direction details from options
selected_directions = [options.design_directions[i-1] for i in selected_indices] # 0-indexed array
actual_variants_count = selected_indices.length
REPORT: "📦 Generating {actual_variants_count} design system(s)..."
```
### Step 2: Create Output Directory
### Step 2: Create Output Directories
```bash
# Create directory for selected variant only
bash(mkdir -p {base_path}/style-extraction/style-1)
# Create directories for each selected variant
FOR index IN 1..actual_variants_count:
bash(mkdir -p {base_path}/style-extraction/style-{index})
```
### Step 3: Launch Agent Task
Generate design system for selected direction:
### Step 3: Launch Agent Tasks (Parallel)
Generate design systems for ALL selected directions in parallel (max 5 concurrent):
```javascript
Task(ui-design-agent): `
[DESIGN_SYSTEM_GENERATION_TASK]
Generate production-ready design system based on user-selected direction
// Launch parallel tasks, one for each selected direction
FOR variant_index IN 1..actual_variants_count:
selected_direction = selected_directions[variant_index - 1] // 0-indexed
SESSION: {session_id} | MODE: {extraction_mode} | BASE_PATH: {base_path}
Task(ui-design-agent): `
[DESIGN_SYSTEM_GENERATION_TASK #{variant_index}/{actual_variants_count}]
Generate production-ready design system based on user-selected direction
${extraction_mode == "explore" ? `
USER SELECTION:
- Selected Direction: ${selected_direction.philosophy_name}
- Design Attributes: ${JSON.stringify(selected_direction.design_attributes)}
- Search Keywords: ${selected_direction.search_keywords.join(", ")}
- Anti-keywords: ${selected_direction.anti_keywords.join(", ")}
- Rationale: ${selected_direction.rationale}
- Preview Colors: Primary=${selected_direction.preview.primary_color}, Accent=${selected_direction.preview.accent_color}
- Preview Typography: Heading=${selected_direction.preview.font_family_heading}, Body=${selected_direction.preview.font_family_body}
- Preview Border Radius: ${selected_direction.preview.border_radius_base}
SESSION: {session_id} | VARIANT: {variant_index}/{actual_variants_count} | BASE_PATH: {base_path}
${refinements.enabled ? `
USER REFINEMENTS:
${refinements.primary_color ? "- Primary Color Override: " + refinements.primary_color : ""}
${refinements.font_family_heading ? "- Heading Font Override: " + refinements.font_family_heading : ""}
${refinements.font_family_body ? "- Body Font Override: " + refinements.font_family_body : ""}
` : ""}
` : ""}
USER SELECTION:
- Selected Direction: ${selected_direction.philosophy_name}
- Design Attributes: ${JSON.stringify(selected_direction.design_attributes)}
- Search Keywords: ${selected_direction.search_keywords.join(", ")}
- Anti-keywords: ${selected_direction.anti_keywords.join(", ")}
- Rationale: ${selected_direction.rationale}
- Preview Colors: Primary=${selected_direction.preview.primary_color}, Accent=${selected_direction.preview.accent_color}
- Preview Typography: Heading=${selected_direction.preview.font_family_heading}, Body=${selected_direction.preview.font_family_body}
- Preview Border Radius: ${selected_direction.preview.border_radius_base}
## Input Analysis
- Input mode: {input_mode} (image/text/hybrid${has_urls ? "/url" : ""})
- Visual references: {loaded_images OR prompt_guidance}
${computed_styles_available ? "- Computed styles: Use as ground truth (Read from .intermediates/style-analysis/computed-styles.json)" : ""}
## Input Analysis
- Input mode: {input_mode} (image/text/hybrid)
- Visual references: {loaded_images OR prompt_guidance}
## Generation Rules
${extraction_mode == "explore" ? `
- **Explore Mode**: Develop the selected design direction into a complete design system
- Use preview elements as foundation and expand with full token coverage
- Apply design_attributes to all token values:
* color_saturation → OKLCH chroma values
* visual_weight → font weights, shadow depths
* density → spacing scale compression/expansion
* formality → typography choices, border radius
* organic_geometric → border radius, shape patterns
* innovation → token naming, experimental values
- Honor search_keywords for design inspiration
- Avoid anti_keywords patterns
` : `
- **Imitate Mode**: High-fidelity replication of reference design
${computed_styles_available ? "- Use computed styles as ground truth for all measurements" : "- Use visual inference for measurements"}
`}
- All colors in OKLCH format ${computed_styles_available ? "(convert from computed RGB)" : ""}
- WCAG AA compliance: 4.5:1 text contrast, 3:1 UI contrast
## Generation Rules
- Develop the selected design direction into a complete design system
- Use preview elements as foundation and expand with full token coverage
- Apply design_attributes to all token values:
* color_saturation → OKLCH chroma values
* visual_weight → font weights, shadow depths
* density → spacing scale compression/expansion
* formality → typography choices, border radius
* organic_geometric → border radius, shape patterns
* innovation → token naming, experimental values
- Honor search_keywords for design inspiration
- Avoid anti_keywords patterns
- All colors in OKLCH format
- WCAG AA compliance: 4.5:1 text contrast, 3:1 UI contrast
## Generate
Create complete design system in {base_path}/style-extraction/style-1/
## Generate
Create complete design system in {base_path}/style-extraction/style-{variant_index}/
1. **design-tokens.json**:
- Complete token structure with ALL fields:
@@ -433,29 +535,15 @@ Task(ui-design-agent): `
${extraction_mode == "explore" && refinements.enabled ? "- Apply user refinements where specified" : ""}
- Common Tailwind CSS usage patterns in project (if extracting from existing project)
2. **style-guide.md**:
- Design philosophy (${extraction_mode == "explore" ? "expand on: " + selected_direction.philosophy_name : "describe the reference design"})
- Complete color system documentation with accessibility notes
- Typography scale and usage guidelines
- Typography Combinations section: Document each preset (heading-primary, heading-secondary, body-regular, body-emphasis, caption, label) with usage context and code examples
- Spacing system explanation
- Opacity & Transparency section: Opacity scale usage, common use cases (disabled states, overlays, hover effects), accessibility considerations
- Shadows & Elevation section: Shadow hierarchy and semantic usage
- Component Styles section: Document button, card, and input variants with code examples and visual descriptions
- Border Radius system and semantic usage
- Component examples and usage patterns
- Common Tailwind CSS patterns (if applicable)
## Critical Requirements
- ✅ Use Write() tool immediately for each file
- ✅ Write to style-1/ directory (single output)
- ❌ NO external research or MCP calls (pure AI generation)
- ✅ Write to style-{variant_index}/ directory
- ✅ Can use Exa MCP to research modern design patterns and obtain code examples (Explore/Text mode)
- ✅ Maintain consistency with user-selected direction
${refinements.enabled ? "- ✅ Apply user refinements precisely" : ""}
`
`
```
**Output**: Agent generates 2 files (design-tokens.json, style-guide.md) for selected direction
**Output**: {actual_variants_count} parallel agent tasks generate design-tokens.json for each variant
## Phase 3: Verify Output
@@ -494,16 +582,9 @@ TodoWrite({todos: [
Configuration:
- Session: {session_id}
- Extraction Mode: {extraction_mode} (imitate/explore)
- Input Mode: {input_mode} (image/text/hybrid{"/url" if has_urls else ""})
- Input Mode: {input_mode} (image/text/hybrid)
- Variants: {variants_count}
- Production-Ready: Complete design systems generated
{IF has_urls AND computed_styles_available:
- 🔍 URL Mode: Computed styles extracted from {len(url_list)} URL(s)
- Accuracy: Pixel-perfect design tokens from DOM
}
{IF has_urls AND NOT computed_styles_available:
- ⚠️ URL Mode: Chrome DevTools unavailable, used visual analysis fallback
}
{IF extraction_mode == "explore":
Design Direction Selection:
@@ -513,15 +594,9 @@ Design Direction Selection:
Generated Files:
{base_path}/style-extraction/
└── style-1/ (design-tokens.json, style-guide.md)
{IF computed_styles_available:
Intermediate Analysis:
{base_path}/.intermediates/style-analysis/computed-styles.json (extracted from {primary_url})
}
└── style-1/design-tokens.json
{IF extraction_mode == "explore":
{base_path}/.intermediates/style-analysis/analysis-options.json (design direction options)
{base_path}/.intermediates/style-analysis/user-selection.json (your selection)
{base_path}/.intermediates/style-analysis/analysis-options.json (design direction options + user selection)
}
Next: /workflow:ui-design:layout-extract --session {session_id} --targets "..."
@@ -533,7 +608,7 @@ Next: /workflow:ui-design:layout-extract --session {session_id} --targets "..."
### Path Operations
```bash
# Find design directory
bash(find .workflow -type d -name "design-*" | head -1)
bash(find .workflow -type d -name "design-run-*" | head -1)
# Expand image pattern
bash(ls {images_pattern})
@@ -559,8 +634,10 @@ bash(cat {base_path}/style-extraction/style-1/design-tokens.json | grep -q "colo
# Load brainstorming context
bash(test -f .brainstorming/{role-analysis-document} && cat .brainstorming/{role-analysis-document})
# Create directories
bash(mkdir -p {base_path}/style-extraction/style-{{1..3}})
# Create directories (example for multiple variants)
bash(mkdir -p {base_path}/style-extraction/style-1)
bash(mkdir -p {base_path}/style-extraction/style-2)
bash(mkdir -p {base_path}/style-extraction/style-3)
# Verify output
bash(ls {base_path}/style-extraction/style-1/)
@@ -574,12 +651,10 @@ bash(test -f {base_path}/.intermediates/style-analysis/analysis-options.json &&
├── .intermediates/                      # Intermediate analysis files
│   └── style-analysis/
│       ├── computed-styles.json         # Extracted CSS values from DOM (if URL available)
│       ├── analysis-options.json        # Design direction options (explore mode only)
│       └── user-selection.json          # User's selected direction (explore mode only)
│       └── analysis-options.json        # Design direction options + user selection (explore mode only)
└── style-extraction/                    # Final design system
    └── style-1/
        ├── design-tokens.json           # Production-ready design tokens
        └── style-guide.md               # Design philosophy and usage guide
        └── design-tokens.json           # Production-ready design tokens
```
## design-tokens.json Format
@@ -650,22 +725,12 @@ ERROR: Claude JSON parsing error
## Key Features
- **Auto-Trigger URL Mode** - Automatically extracts computed styles when --urls provided (no manual flag needed)
- **Direct Design System Generation** - Complete design-tokens.json + style-guide.md in one step
- **Hybrid Extraction Strategy** - Combines computed CSS values (ground truth) with AI visual analysis
- **Pixel-Perfect Accuracy** - Chrome DevTools extracts exact border-radius, shadows, spacing values
- **AI-Driven Design Space Exploration** - 6D attribute space analysis for maximum contrast
- **Variant-Specific Directions** - Each variant has unique philosophy, keywords, anti-patterns
- **Maximum Contrast Guarantee** - Variants maximally distant in attribute space
- **Flexible Input** - Images, text, URLs, or hybrid mode
- **Graceful Fallback** - Falls back to pure visual inference if Chrome DevTools unavailable
- **Flexible Input** - Images, text, or hybrid mode
- **Production-Ready** - OKLCH colors, WCAG AA compliance, semantic naming
- **Agent-Driven** - Autonomous multi-file generation with ui-design-agent
## Integration
**Input**: Reference images or text prompts
**Output**: `style-extraction/style-{N}/` with design-tokens.json + style-guide.md
**Next**: `/workflow:ui-design:layout-extract --session {session_id}` OR `/workflow:ui-design:generate --session {session_id}`
**Note**: This command extracts visual style (colors, typography, spacing) and generates production-ready design systems. For layout structure extraction, use `/workflow:ui-design:layout-extract`.


@@ -1,21 +0,0 @@
---
name: Workflow Coordination
description: Core coordination principles for multi-agent development workflows
---
## Core Agent Coordination Principles
### Planning First Principle
**Purpose**: Thorough upfront planning reduces risk, improves quality, and prevents costly rework.
### TodoWrite Coordination Rules
1. **TodoWrite FIRST**: Always create TodoWrite entries *before* complex task execution begins.
2. **Context Before Implementation**: Context gathering must complete before implementation tasks begin.
3. **Agent Coordination**: Each agent is responsible for updating the status of its assigned todo.
4. **Progress Visibility**: Provides clear workflow state visibility to stakeholders.
5. **Checkpoint Safety**: State is saved automatically after each agent completes its work.
6. **Interrupt/Resume**: The system must support full state preservation and restoration.


@@ -0,0 +1,83 @@
#!/usr/bin/env bash
# discover-design-files.sh - Discover design-related files and output JSON
# Usage: discover-design-files.sh <source_dir> <output_json>
set -euo pipefail
source_dir="${1:-.}"
output_json="${2:-discovered-files.json}"
# Function to find and format files as JSON array
find_files() {
local pattern="$1"
local files
files=$(eval "find \"$source_dir\" -type f $pattern \
! -path \"*/node_modules/*\" \
! -path \"*/dist/*\" \
! -path \"*/.git/*\" \
! -path \"*/build/*\" \
! -path \"*/coverage/*\" \
2>/dev/null | sort || true")
local count
if [ -z "$files" ]; then
count=0
else
count=$(echo "$files" | grep -c . || echo 0)
fi
local json_files=""
if [ "$count" -gt 0 ]; then
json_files=$(echo "$files" | awk '{printf "\"%s\"%s\n", $0, (NR<'$count'?",":"")}' | tr '\n' ' ')
fi
echo "$count|$json_files"
}
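# Illustrative example (not executed): for two CSS files the return value looks like
#   2|"src/styles/app.css", "src/styles/theme.scss"
# i.e. "<count>|<comma-separated quoted paths>", split below with ${var%%|*} and ${var#*|}.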
# Discover CSS/SCSS files
css_result=$(find_files '\( -name "*.css" -o -name "*.scss" \)')
css_count=${css_result%%|*}
css_files=${css_result#*|}
# Discover JS/TS files (all framework files)
js_result=$(find_files '\( -name "*.js" -o -name "*.ts" -o -name "*.jsx" -o -name "*.tsx" -o -name "*.mjs" -o -name "*.cjs" -o -name "*.vue" -o -name "*.svelte" \)')
js_count=${js_result%%|*}
js_files=${js_result#*|}
# Discover HTML files
html_result=$(find_files '-name "*.html"')
html_count=${html_result%%|*}
html_files=${html_result#*|}
# Calculate total
total_count=$((css_count + js_count + html_count))
# Generate JSON
cat > "$output_json" << EOF
{
"discovery_time": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
"source_directory": "$(cd "$source_dir" && pwd)",
"file_types": {
"css": {
"count": $css_count,
"files": [${css_files}]
},
"js": {
"count": $js_count,
"files": [${js_files}]
},
"html": {
"count": $html_count,
"files": [${html_files}]
}
},
"total_files": $total_count
}
EOF
# Ensure file is fully written and synchronized to disk
# This prevents race conditions when the file is immediately read by another process
sync "$output_json" 2>/dev/null || sync # Sync specific file, fallback to full sync
sleep 0.1 # Additional safety: 100ms delay for filesystem metadata update
echo "Discovered: CSS=$css_count, JS=$js_count, HTML=$html_count (Total: $total_count)" >&2


@@ -0,0 +1,713 @@
#!/bin/bash
# Generate documentation for modules and projects with multiple strategies
# Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]
# strategy: full|single|project-readme|project-architecture|http-api
# source_path: Path to the source module directory (or project root for project-level docs)
# project_name: Project name for output path (e.g., "myproject")
# tool: gemini|qwen|codex (default: gemini)
# model: Model name (optional, uses tool defaults)
#
# Default Models:
# gemini: gemini-2.5-flash
# qwen: coder-model
# codex: gpt5-codex
#
# Module-Level Strategies:
# full: Full documentation generation
# - Read: All files in current and subdirectories (@**/*)
# - Generate: API.md + README.md for each directory containing code files
# - Use: Deep directories (Layer 3), comprehensive documentation
#
# single: Single-layer documentation
# - Read: Current directory code + child API.md/README.md files
# - Generate: API.md + README.md only in current directory
# - Use: Upper layers (Layer 1-2), incremental updates
#
# Project-Level Strategies:
# project-readme: Project overview documentation
# - Read: All module API.md and README.md files
# - Generate: README.md (project root)
# - Use: After all module docs are generated
#
# project-architecture: System design documentation
# - Read: All module docs + project README
# - Generate: ARCHITECTURE.md + EXAMPLES.md
# - Use: After project README is generated
#
# http-api: HTTP API documentation
# - Read: API route files + existing docs
# - Generate: api/README.md
# - Use: For projects with HTTP APIs
#
# Output Structure:
# Module docs: .workflow/docs/{project_name}/{source_path}/API.md
# Module docs: .workflow/docs/{project_name}/{source_path}/README.md
# Project docs: .workflow/docs/{project_name}/README.md
# Project docs: .workflow/docs/{project_name}/ARCHITECTURE.md
# Project docs: .workflow/docs/{project_name}/EXAMPLES.md
# API docs: .workflow/docs/{project_name}/api/README.md
#
# Features:
# - Path mirroring: source structure → docs structure
# - Template-driven generation
# - Respects .gitignore patterns
# - Detects code vs navigation folders
# - Tool fallback support
# Build exclusion filters from .gitignore
build_exclusion_filters() {
local filters=""
# Common system/cache directories to exclude
local system_excludes=(
".git" "__pycache__" "node_modules" ".venv" "venv" "env"
"dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
"coverage" ".nyc_output" "logs" "tmp" "temp" ".workflow"
)
for exclude in "${system_excludes[@]}"; do
filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
done
# Find and parse .gitignore (current dir first, then git root)
local gitignore_file=""
# Check current directory first
if [ -f ".gitignore" ]; then
gitignore_file=".gitignore"
else
# Try to find git root and check for .gitignore there
local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
gitignore_file="$git_root/.gitignore"
fi
fi
# Parse .gitignore if found
if [ -n "$gitignore_file" ]; then
while IFS= read -r line; do
# Skip empty lines and comments
[[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue
# Remove trailing slash and whitespace
line=$(echo "$line" | sed 's|/$||' | xargs)
# Skip wildcards patterns (too complex for simple find)
[[ "$line" =~ \* ]] && continue
# Add to filters
filters+=" -not -path '*/$line' -not -path '*/$line/*'"
done < "$gitignore_file"
fi
echo "$filters"
}
# Detect folder type (code vs navigation)
detect_folder_type() {
local target_path="$1"
local exclusion_filters="$2"
# Count code files (primary indicators)
local code_count=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
if [ $code_count -gt 0 ]; then
echo "code"
else
echo "navigation"
fi
}
# Scan directory structure and generate structured information
scan_directory_structure() {
local target_path="$1"
local strategy="$2"
if [ ! -d "$target_path" ]; then
echo "Directory not found: $target_path"
return 1
fi
local exclusion_filters=$(build_exclusion_filters)
local structure_info=""
# Get basic directory info
local dir_name=$(basename "$target_path")
local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)
local folder_type=$(detect_folder_type "$target_path" "$exclusion_filters")
structure_info+="Directory: $dir_name\n"
structure_info+="Total files: $total_files\n"
structure_info+="Total directories: $total_dirs\n"
structure_info+="Folder type: $folder_type\n\n"
if [ "$strategy" = "full" ]; then
# For full: show all subdirectories with file counts
structure_info+="Subdirectories with files:\n"
while IFS= read -r dir; do
if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
local rel_path=${dir#$target_path/}
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
if [ $file_count -gt 0 ]; then
local subdir_type=$(detect_folder_type "$dir" "$exclusion_filters")
structure_info+=" - $rel_path/ ($file_count files, type: $subdir_type)\n"
fi
fi
done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
else
# For single: show direct children only
structure_info+="Direct subdirectories:\n"
while IFS= read -r dir; do
if [ -n "$dir" ]; then
local dir_name=$(basename "$dir")
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
local has_api=$([ -f "$dir/API.md" ] && echo " [has API.md]" || echo "")
local has_readme=$([ -f "$dir/README.md" ] && echo " [has README.md]" || echo "")
structure_info+=" - $dir_name/ ($file_count files)$has_api$has_readme\n"
fi
done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
fi
# Show main file types in current directory
structure_info+="\nCurrent directory files:\n"
local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)
structure_info+=" - Code files: $code_files\n"
structure_info+=" - Config files: $config_files\n"
structure_info+=" - Documentation: $doc_files\n"
printf "%b" "$structure_info"
}
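# Sample output shape (illustrative, "single" strategy):
#   Directory: auth
#   Total files: 12
#   Folder type: code
#   Direct subdirectories:
#    - middleware/ (4 files) [has README.md]
#   Current directory files:
#    - Code files: 6 ...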
# Calculate output path based on source path and project name
calculate_output_path() {
local source_path="$1"
local project_name="$2"
local project_root="$3"
# Get absolute path of source (normalize to Unix-style path)
local abs_source=$(cd "$source_path" && pwd)
# Normalize project root to same format
local norm_project_root=$(cd "$project_root" && pwd)
# Calculate relative path from project root
local rel_path="${abs_source#$norm_project_root}"
# Remove leading slash if present
rel_path="${rel_path#/}"
# If source is project root, use project name directly
if [ "$abs_source" = "$norm_project_root" ] || [ -z "$rel_path" ]; then
echo "$norm_project_root/.workflow/docs/$project_name"
else
echo "$norm_project_root/.workflow/docs/$project_name/$rel_path"
fi
}
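# Example (illustrative):
#   calculate_output_path ./src/auth myproject /repo
#   → /repo/.workflow/docs/myproject/src/auth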
generate_module_docs() {
local strategy="$1"
local source_path="$2"
local project_name="$3"
local tool="${4:-gemini}"
local model="$5"
# Validate parameters
if [ -z "$strategy" ] || [ -z "$source_path" ] || [ -z "$project_name" ]; then
echo "❌ Error: Strategy, source path, and project name are required"
echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
echo "Module strategies: full, single"
echo "Project strategies: project-readme, project-architecture, http-api"
return 1
fi
# Validate strategy
local valid_strategies=("full" "single" "project-readme" "project-architecture" "http-api")
local strategy_valid=false
for valid_strategy in "${valid_strategies[@]}"; do
if [ "$strategy" = "$valid_strategy" ]; then
strategy_valid=true
break
fi
done
if [ "$strategy_valid" = false ]; then
echo "❌ Error: Invalid strategy '$strategy'"
echo "Valid module strategies: full, single"
echo "Valid project strategies: project-readme, project-architecture, http-api"
return 1
fi
if [ ! -d "$source_path" ]; then
echo "❌ Error: Source directory '$source_path' does not exist"
return 1
fi
# Set default models if not specified
if [ -z "$model" ]; then
case "$tool" in
gemini)
model="gemini-2.5-flash"
;;
qwen)
model="coder-model"
;;
codex)
model="gpt5-codex"
;;
*)
model=""
;;
esac
fi
# Build exclusion filters
local exclusion_filters=$(build_exclusion_filters)
# Get project root
local project_root=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
# Determine if this is a project-level strategy
local is_project_level=false
if [[ "$strategy" =~ ^project- ]] || [ "$strategy" = "http-api" ]; then
is_project_level=true
fi
# Calculate output path
local output_path
if [ "$is_project_level" = true ]; then
# Project-level docs go to project root
if [ "$strategy" = "http-api" ]; then
output_path="$project_root/.workflow/docs/$project_name/api"
else
output_path="$project_root/.workflow/docs/$project_name"
fi
else
output_path=$(calculate_output_path "$source_path" "$project_name" "$project_root")
fi
# Create output directory
mkdir -p "$output_path"
# Detect folder type (only for module-level strategies)
local folder_type=""
if [ "$is_project_level" = false ]; then
folder_type=$(detect_folder_type "$source_path" "$exclusion_filters")
fi
# Load templates based on strategy
local api_template=""
local readme_template=""
local template_content=""
if [ "$is_project_level" = true ]; then
# Project-level templates
case "$strategy" in
project-readme)
local proj_readme_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-readme.txt"
if [ -f "$proj_readme_path" ]; then
template_content=$(cat "$proj_readme_path")
echo " 📋 Loaded Project README template: $(wc -l < "$proj_readme_path") lines"
fi
;;
project-architecture)
local arch_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-architecture.txt"
local examples_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-examples.txt"
if [ -f "$arch_path" ]; then
template_content=$(cat "$arch_path")
echo " 📋 Loaded Architecture template: $(wc -l < "$arch_path") lines"
fi
if [ -f "$examples_path" ]; then
template_content="$template_content
EXAMPLES TEMPLATE:
$(cat "$examples_path")"
echo " 📋 Loaded Examples template: $(wc -l < "$examples_path") lines"
fi
;;
http-api)
local api_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
if [ -f "$api_path" ]; then
template_content=$(cat "$api_path")
echo " 📋 Loaded HTTP API template: $(wc -l < "$api_path") lines"
fi
;;
esac
else
# Module-level templates
local api_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
local readme_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/module-readme.txt"
local nav_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/folder-navigation.txt"
if [ "$folder_type" = "code" ]; then
if [ -f "$api_template_path" ]; then
api_template=$(cat "$api_template_path")
echo " 📋 Loaded API template: $(wc -l < "$api_template_path") lines"
fi
if [ -f "$readme_template_path" ]; then
readme_template=$(cat "$readme_template_path")
echo " 📋 Loaded README template: $(wc -l < "$readme_template_path") lines"
fi
else
# Navigation folder uses navigation template
if [ -f "$nav_template_path" ]; then
readme_template=$(cat "$nav_template_path")
echo " 📋 Loaded Navigation template: $(wc -l < "$nav_template_path") lines"
fi
fi
fi
# Scan directory structure (only for module-level strategies)
local structure_info=""
if [ "$is_project_level" = false ]; then
echo " 🔍 Scanning directory structure..."
structure_info=$(scan_directory_structure "$source_path" "$strategy")
fi
# Prepare logging info
local module_name=$(basename "$source_path")
echo "⚡ Generating docs: $source_path$output_path"
echo " Strategy: $strategy | Tool: $tool | Model: $model | Type: $folder_type"
echo " Output: $output_path"
# Build strategy-specific prompt
local final_prompt=""
# Project-level strategies
if [ "$strategy" = "project-readme" ]; then
final_prompt="PURPOSE: Generate comprehensive project overview documentation
PROJECT: $project_name
OUTPUT: Current directory (file will be moved to final location)
Read: @.workflow/docs/$project_name/**/*.md
Context: All module documentation files from the project
Generate ONE documentation file in current directory:
- README.md - Project root documentation
Template:
$template_content
Instructions:
- Create README.md in CURRENT DIRECTORY
- Synthesize information from all module docs
- Include project overview, getting started, and navigation
- Create clear module navigation with links
- Follow template structure exactly"
elif [ "$strategy" = "project-architecture" ]; then
final_prompt="PURPOSE: Generate system design and usage examples documentation
PROJECT: $project_name
OUTPUT: Current directory (files will be moved to final location)
Read: @.workflow/docs/$project_name/**/*.md
Context: All project documentation including module docs and project README
Generate TWO documentation files in current directory:
1. ARCHITECTURE.md - System architecture and design patterns
2. EXAMPLES.md - End-to-end usage examples
Template:
$template_content
Instructions:
- Create both ARCHITECTURE.md and EXAMPLES.md in CURRENT DIRECTORY
- Synthesize architectural patterns from module documentation
- Document system structure, module relationships, and design decisions
- Provide practical code examples and usage scenarios
- Follow template structure for both files"
elif [ "$strategy" = "http-api" ]; then
final_prompt="PURPOSE: Generate HTTP API reference documentation
PROJECT: $project_name
OUTPUT: Current directory (file will be moved to final location)
Read: @**/*.{ts,js,py,go,rs} @.workflow/docs/$project_name/**/*.md
Context: API route files and existing documentation
Generate ONE documentation file in current directory:
- README.md - HTTP API documentation (in api/ subdirectory)
Template:
$template_content
Instructions:
- Create README.md in CURRENT DIRECTORY
- Document all HTTP endpoints (routes, methods, parameters, responses)
- Include authentication requirements and error codes
- Provide request/response examples
- Follow template structure (Part B: HTTP API documentation)"
# Module-level strategies
elif [ "$strategy" = "full" ]; then
# Full strategy: read all files, generate for each directory
if [ "$folder_type" = "code" ]; then
final_prompt="PURPOSE: Generate comprehensive API and module documentation
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)
Read: @**/*
Generate TWO documentation files in current directory:
1. API.md - Code API documentation (functions, classes, interfaces)
Template:
$api_template
2. README.md - Module overview documentation
Template:
$readme_template
Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- If subdirectories contain code files, generate their docs too (recursive)
- Work bottom-up: deepest directories first
- Follow template structure exactly
- Use structure analysis for context"
else
# Navigation folder - README only
final_prompt="PURPOSE: Generate navigation documentation for folder structure
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (file will be moved to final location)
Read: @**/*
Generate ONE documentation file in current directory:
- README.md - Navigation and folder overview
Template:
$readme_template
Instructions:
- Create README.md in CURRENT DIRECTORY
- Focus on folder structure and navigation
- Link to subdirectory documentation
- Use structure analysis for context"
fi
else
# Single strategy: read current + child docs only
if [ "$folder_type" = "code" ]; then
final_prompt="PURPOSE: Generate API and module documentation for current directory
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)
Read: @*/API.md @*/README.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.go @*.rs @*.md @*.json @*.yaml @*.yml
Generate TWO documentation files in current directory:
1. API.md - Code API documentation
Template:
$api_template
2. README.md - Module overview
Template:
$readme_template
Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- Reference child documentation, do not duplicate
- Follow template structure
- Use structure analysis for current directory context"
else
# Navigation folder - README only
final_prompt="PURPOSE: Generate navigation documentation
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (file will be moved to final location)
Read: @*/API.md @*/README.md @*.md
Generate ONE documentation file in current directory:
- README.md - Navigation and overview
Template:
$readme_template
Instructions:
- Create README.md in CURRENT DIRECTORY
- Link to child documentation
- Use structure analysis for navigation context"
fi
fi
# Execute documentation generation
local start_time=$(date +%s)
echo " 🔄 Starting documentation generation..."
if cd "$source_path" 2>/dev/null; then
local tool_result=0
# Store current output path for CLI context
export DOC_OUTPUT_PATH="$output_path"
# Record git HEAD before CLI execution (to detect unwanted auto-commits)
local git_head_before=""
if git rev-parse --git-dir >/dev/null 2>&1; then
git_head_before=$(git rev-parse HEAD 2>/dev/null)
fi
# Execute with selected tool
case "$tool" in
qwen)
if [ "$model" = "coder-model" ]; then
qwen -p "$final_prompt" --yolo 2>&1
else
qwen -p "$final_prompt" -m "$model" --yolo 2>&1
fi
tool_result=$?
;;
codex)
codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
tool_result=$?
;;
gemini)
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
tool_result=$?
;;
*)
echo " ⚠️ Unknown tool: $tool, defaulting to gemini"
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
tool_result=$?
;;
esac
# Move generated files to output directory
local docs_created=0
local moved_files=""
if [ $tool_result -eq 0 ]; then
if [ "$is_project_level" = true ]; then
# Project-level documentation files
case "$strategy" in
project-readme)
if [ -f "README.md" ]; then
mv "README.md" "$output_path/README.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="README.md "
}
fi
;;
project-architecture)
if [ -f "ARCHITECTURE.md" ]; then
mv "ARCHITECTURE.md" "$output_path/ARCHITECTURE.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="ARCHITECTURE.md "
}
fi
if [ -f "EXAMPLES.md" ]; then
mv "EXAMPLES.md" "$output_path/EXAMPLES.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="EXAMPLES.md "
}
fi
;;
http-api)
if [ -f "README.md" ]; then
mv "README.md" "$output_path/README.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="api/README.md "
}
fi
;;
esac
else
# Module-level documentation files
# Check and move API.md if it exists
if [ "$folder_type" = "code" ] && [ -f "API.md" ]; then
mv "API.md" "$output_path/API.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="API.md "
}
fi
# Check and move README.md if it exists
if [ -f "README.md" ]; then
mv "README.md" "$output_path/README.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="README.md "
}
fi
fi
fi
# Check if CLI tool auto-committed (and revert if needed)
if [ -n "$git_head_before" ]; then
local git_head_after=$(git rev-parse HEAD 2>/dev/null)
if [ "$git_head_before" != "$git_head_after" ]; then
echo " ⚠️ Detected unwanted auto-commit by CLI tool, reverting..."
git reset --soft "$git_head_before" 2>/dev/null
echo " ✅ Auto-commit reverted (files remain staged)"
fi
fi
if [ $docs_created -gt 0 ]; then
local end_time=$(date +%s)
local duration=$((end_time - start_time))
echo " ✅ Generated $docs_created doc(s) in ${duration}s: $moved_files"
cd - > /dev/null
return 0
else
echo " ❌ Documentation generation failed for $source_path"
cd - > /dev/null
return 1
fi
else
echo " ❌ Cannot access directory: $source_path"
return 1
fi
}
# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
# Show help if no arguments or help requested
if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
echo ""
echo "Module-Level Strategies:"
echo " full - Generate docs for all subdirectories with code"
echo " single - Generate docs only for current directory"
echo ""
echo "Project-Level Strategies:"
echo " project-readme - Generate project root README.md"
echo " project-architecture - Generate ARCHITECTURE.md + EXAMPLES.md"
echo " http-api - Generate HTTP API documentation (api/README.md)"
echo ""
echo "Tools: gemini (default), qwen, codex"
echo "Models: Use tool defaults if not specified"
echo ""
echo "Module Examples:"
echo " ./generate_module_docs.sh full ./src/auth myproject"
echo " ./generate_module_docs.sh single ./components myproject gemini"
echo ""
echo "Project Examples:"
echo " ./generate_module_docs.sh project-readme . myproject"
echo " ./generate_module_docs.sh project-architecture . myproject qwen"
echo " ./generate_module_docs.sh http-api . myproject"
exit 0
fi
generate_module_docs "$@"
fi

View File

@@ -126,13 +126,13 @@ Options:
Examples:
# Simple usage (auto-detect everything)
ui-instantiate-prototypes.sh .design/prototypes
ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes
# With options
ui-instantiate-prototypes.sh .design/prototypes --session-id WFS-auth
ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes --session-id WFS-auth
# Full manual mode
ui-instantiate-prototypes.sh .design/prototypes "login,dashboard" 3 3 --session-id WFS-auth
ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes "login,dashboard" 3 3 --session-id WFS-auth
EOF
}

View File

@@ -1,12 +1,69 @@
---
name: command-guide
description: Workflow command guide for Claude DMS3 (69 commands). Search/browse commands, get next-step recommendations, view documentation, report issues. Triggers "CCW-help", "CCW-issue", "how to use", "search commands", "what's next", beginner onboarding questions
description: Workflow command guide for Claude DMS3 (78 commands). Search/browse commands, get next-step recommendations, view documentation, report issues. Triggers "CCW-help", "CCW-issue", "ccw-help", "ccw-issue", "ccw"
allowed-tools: Read, Grep, Glob, AskUserQuestion
version: 5.8.0
---
# Command Guide Skill
Comprehensive command guide for Claude DMS3 workflow system covering 69 commands across 4 categories (workflow, cli, memory, task).
Comprehensive command guide for Claude DMS3 workflow system covering 78 commands across 5 categories (workflow, cli, memory, task, general).
## 🆕 What's New in v5.8.0
### Major Features
**🎨 UI Design Style Memory Workflow** (Primary Focus)
- **`/memory:style-skill-memory`** - Generate reusable SKILL packages from design systems
- **`/workflow:ui-design:codify-style`** - Extract design tokens from code with automatic file discovery
- **`/workflow:ui-design:reference-page-generator`** - Generate multi-component reference pages
- **Workflow**: Design extraction → Token documentation → SKILL package → Easy loading
**`/workflow:lite-plan`** - Intelligent Planning & Execution (Testing Phase)
- Dynamic workflow adaptation (smart exploration, adaptive planning, progressive clarification)
- Two-dimensional confirmation (task approval + execution method selection)
- Direct execution with live TodoWrite progress tracking
- Faster than `/workflow:plan` (1-3 min vs 5-10 min) for simple to medium tasks
**🗺️ `/memory:code-map-memory`** - Code Flow Mapping Generator (Testing Phase)
- Uses cli-explore-agent for deep code flow analysis with dual-source strategy
- Generates Mermaid diagrams for architecture, functions, data flow, conditional paths
- Creates feature-specific SKILL packages for code understanding
- Progressive loading (2K → 30K tokens) for efficient context management
### Agents
- **cli-explore-agent** - Specialized code exploration with Deep Scan mode (Bash + Gemini)
- **cli-planning-agent** - Enhanced task generation with improved context handling
- **ui-design-agent** - Major refactoring for better design system extraction
### Additional Improvements
- Enhanced brainstorming workflows with parallel execution
- Improved test workflow documentation and task attachment models
- Updated CLI tool default models (Gemini 2.5-pro)
## 🧠 Core Principle: Intelligent Integration
**⚠️ IMPORTANT**: This SKILL provides **reference materials** for intelligent integration, NOT templates for direct copying.
**Response Strategy**:
1. **Analyze user's specific context** - Understand their exact need, workflow stage, and technical level
2. **Extract relevant information** - Select only the pertinent parts from reference guides
3. **Synthesize and customize** - Combine multiple sources, add context-specific examples
4. **Deliver targeted response** - Provide concise, actionable guidance tailored to the user's situation
**Never**:
- ❌ Copy-paste entire template sections verbatim
- ❌ Return raw reference documentation without processing
- ❌ Provide generic responses that ignore user context
**Always**:
- ✅ Understand the user's specific situation first
- ✅ Integrate information from multiple sources (indexes, guides, reference docs)
- ✅ Customize examples and explanations to match user's use case
- ✅ Provide progressive depth - brief answers with "more detail available" prompts
---
## 🎯 Operation Modes
@@ -19,10 +76,11 @@ Comprehensive command guide for Claude DMS3 workflow system covering 69 commands
**Process**:
1. Identify search type (keyword/category/use-case)
2. Query appropriate index (all-commands/by-category/by-use-case)
3. Return matching commands with metadata
4. Suggest related commands
3. **Intelligently filter and rank** results based on user's implied context
4. **Synthesize concise response** with command names, brief descriptions, and use-case fit
5. **Suggest next steps** - related commands or workflow patterns
**Example**: "搜索 planning 命令" → Lists planning commands from `index/by-use-case.json`
**Example**: "搜索 planning 命令" → Analyze user's likely goal → Present top 3-5 most relevant planning commands with context-specific usage hints, NOT raw JSON dump
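A minimal query sketch for this mode. It assumes `index/all-commands.json` exposes a top-level `commands` array whose entries carry `name` and `description` fields — verify against the actual index schema before relying on it:

```bash
# Keyword search over the command index (field names are assumptions)
jq -r --arg kw "planning" \
  '.commands[]
   | select((.name + " " + (.description // "")) | test($kw; "i"))
   | "\(.name)\t\(.description)"' index/all-commands.json
```

The raw matches are then filtered and ranked per steps 3-5 above rather than returned verbatim.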
---
@@ -33,12 +91,16 @@ Comprehensive command guide for Claude DMS3 workflow system covering 69 commands
**Triggers**: "下一步", "what's next", "after /workflow:plan", "推荐"
**Process**:
1. Parse current context (last command/workflow state)
2. Query `index/command-relationships.json`
3. Return recommended next commands with rationale
4. Show common workflow patterns
1. **Analyze workflow context** - Understand where user is in their development cycle
2. Query `index/command-relationships.json` for possible next commands
3. **Evaluate and prioritize** recommendations based on:
- User's stated goals
- Common workflow patterns
- Project complexity indicators
4. **Craft contextual guidance** - Explain WHY each recommendation fits, not just WHAT to run
5. **Provide workflow examples** - Show complete flow, not isolated commands
**Example**: "执行完 /workflow:plan 后做什么?" → Recommends /workflow:execute or /workflow:action-plan-verify
**Example**: "执行完 /workflow:plan 后做什么?" → Analyze plan output quality → Recommend `/workflow:action-plan-verify` (if complex) OR `/workflow:execute` (if straightforward) with reasoning for each choice
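As a sketch of the relationship lookup in step 2, assuming `index/command-relationships.json` maps each command to an array of follow-up commands (the real field names may differ):

```bash
# Possible next commands after /workflow:plan (schema is an assumption)
jq -r '.relationships["/workflow:plan"].next[]?' index/command-relationships.json
```

The resulting candidates are then prioritized and explained per steps 3-5, not returned as-is.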
---
@@ -51,10 +113,11 @@ Comprehensive command guide for Claude DMS3 workflow system covering 69 commands
**Process**:
1. Locate command in `index/all-commands.json`
2. Read original command file for full details
3. Present parameters, arguments, examples
4. Link to related commands
3. **Extract user-relevant sections** - Focus on what they asked about (parameters OR examples OR workflow)
4. **Enhance with context** - Add use-case specific examples if user's scenario is clear
5. **Progressive disclosure** - Provide core info first, offer "need more details?" prompts
**Example**: "/workflow:plan 的参数是什么?" → Shows full parameter list and usage examples
**Example**: "/workflow:plan 的参数是什么?" → Identify user's experience level → Present parameters with context-appropriate explanations (beginner: verbose + examples; advanced: concise + edge cases), NOT raw documentation dump
---
@@ -62,15 +125,23 @@ Comprehensive command guide for Claude DMS3 workflow system covering 69 commands
**When**: New user needs guidance
**Triggers**: "新手", "getting started", "如何开始", "常用命令"
**Triggers**: "新手", "getting started", "如何开始", "常用命令", **"从0到1"**, **"全新项目"**
**Process**:
1. Present progressive learning path
2. Show `index/essential-commands.json` (Top 14 commands)
3. Link to getting-started guide
4. Provide first workflow example
1. **Assess user background** - Ask clarifying questions if needed (coding experience? project type?)
2. **⚠️ Identify project stage** - FROM-ZERO-TO-ONE vs FEATURE-ADDITION:
- **从0到1场景** (全新项目/产品/架构决策) → **MUST START with brainstorming workflow**
- **功能新增场景** (已有项目中添加功能) → Start with planning workflow
3. **Design personalized learning path** based on their goals and stage
4. **Curate essential commands** from `index/essential-commands.json` - Select 3-5 most relevant for their use case
5. **Provide guided first example** - Walk through ONE complete workflow with explanation, **emphasizing brainstorming for 0-to-1 scenarios**
6. **Set clear next steps** - What to try next, where to get help
**Example**: "我是新手,如何开始?" → Learning path + Top 14 commands + quick start guide
**Example 1 (从0到1)**: "我是新手,如何开始全新项目" → Identify as FROM-ZERO-TO-ONE → Emphasize brainstorming workflow (`/workflow:brainstorm:artifacts`) as mandatory first step → Explain brainstorm → plan → execute flow
**Example 2 (功能新增)**: "我是新手,如何在已有项目中添加功能?" → Identify as FEATURE-ADDITION → Guide to planning workflow (`/workflow:plan`) → Explain plan → execute → test flow
**Example 3 (探索)**: "我是新手,如何开始?" → Ask clarifying question: "是全新项目启动从0到1还是在已有项目中添加功能" → Based on answer, route to appropriate workflow
---
@@ -78,15 +149,103 @@ Comprehensive command guide for Claude DMS3 workflow system covering 69 commands
**When**: User wants to report issue or request feature
**Triggers**: **"CCW-issue"**, **"CCW-help"**, "报告 bug", "功能建议", "问题咨询"
**Triggers**: **"CCW-issue"**, **"CCW-help"**, **"ccw-issue"**, **"ccw-help"**, **"ccw"**, "报告 bug", "功能建议", "问题咨询", "交互式诊断"
**Process**:
1. Use AskUserQuestion to confirm type (bug/feature/question)
2. Collect required information interactively
3. Select appropriate template (`templates/issue-{type}.md`)
4. Generate filled template and save/display
1. **Understand issue context** - Use AskUserQuestion to confirm type AND gather initial context
2. **Intelligently guide information collection**:
- Adapt questions based on previous answers
- Skip irrelevant sections
- Probe for missing critical details
3. **Select and customize template**:
- `issue-diagnosis.md`, `issue-bug.md`, `issue-feature.md`, or `issue-question.md`
- **Adapt template sections** to match user's specific scenario
4. **Synthesize coherent issue report**:
- Integrate collected information with appropriate template sections
- **Highlight key details** - Don't bury critical info in boilerplate
- Add privacy-protected command history
5. **Provide actionable next steps** - Immediate troubleshooting OR submission guidance
**Example**: "CCW-issue" → Interactive Q&A → Generates GitHub issue template
**Example**: "CCW-issue" → Detect user frustration level → For urgent: fast-track to critical info collection; For exploratory: comprehensive diagnostic flow, NOT one-size-fits-all questionnaire
**🆕 Enhanced Features**:
- Complete command history with privacy protection
- Interactive diagnostic checklists
- Decision tree navigation (diagnosis template)
- Execution environment capture
---
### Mode 6: Deep Command Analysis 🔬
**When**: User asks detailed questions about specific commands or agents
**Triggers**: "详细说明", "命令原理", "agent 如何工作", "实现细节", specific command/agent name mentioned
**Data Sources**:
- `reference/agents/*.md` - All agent documentation (11 agents)
- `reference/commands/**/*.md` - All command documentation (69 commands)
**Process**:
**Simple Query** (direct documentation lookup):
1. Identify target command/agent from user query
2. Locate corresponding markdown file in `reference/`
3. **Extract contextually relevant sections** - Not entire document
4. **Synthesize focused explanation**:
- Address user's specific question
- Add context-appropriate examples
- Link related concepts
5. **Offer progressive depth** - "Want to know more about X?"
**Complex Query** (CLI-assisted analysis):
1. **Detect complexity indicators** (多个命令对比、工作流程分析、最佳实践)
2. **Design targeted analysis prompt** for gemini/qwen:
- Frame user's question precisely
- Specify required analysis depth
- Request structured comparison/synthesis
```bash
gemini -p "
PURPOSE: Analyze command documentation to answer user query
TASK: [extracted user question with context]
MODE: analysis
CONTEXT: @**/*
EXPECTED: Comprehensive answer with examples and recommendations
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on practical usage | analysis=READ-ONLY
" -m gemini-3-pro-preview-11-2025 --include-directories ~/.claude/skills/command-guide/reference
```
Note: Use absolute path `~/.claude/skills/command-guide/reference` for reference documentation access
3. **Process and integrate CLI analysis**:
- Extract key insights from CLI output
- Add context-specific examples
- Synthesize actionable recommendations
4. **Deliver tailored response** - Not raw CLI output
**Query Classification**:
- **Simple**: Single command explanation, parameter list, basic usage
- **Complex**: Cross-command workflows, performance comparison, architectural analysis, best practices across multiple commands
**Examples**:
*Simple Query*:
```
User: "action-planning-agent 如何工作?"
→ Read reference/agents/action-planning-agent.md
→ **Identify user's knowledge gap** (mechanism? inputs/outputs? when to use?)
→ **Extract relevant sections** addressing their need
→ **Synthesize focused explanation** with examples
→ NOT: Dump entire agent documentation
```
*Complex Query*:
```
User: "对比 workflow:plan 和 workflow:tdd-plan 的使用场景和最佳实践"
→ Detect: 多命令对比 + 最佳实践
→ **Design comparison framework** (when to use, trade-offs, workflow integration)
→ Use gemini to analyze both commands with structured comparison prompt
→ **Synthesize insights** into decision matrix and usage guidelines
→ NOT: Raw command documentation side-by-side
```
---
@@ -113,15 +272,40 @@ All command metadata is stored in JSON indexes for fast querying:
- **[Implementation Details](guides/implementation-details.md)** - Detailed logic for each mode
- **[Usage Examples](guides/examples.md)** - Example dialogues and edge cases
## 📦 Reference Documentation
Complete backup of all command and agent documentation for deep analysis:
- **[reference/agents/](reference/agents/)** - 14 agent markdown files with implementation details
- **New in v5.8**: cli-explore-agent (code exploration), cli-planning-agent (enhanced)
- **[reference/commands/](reference/commands/)** - 78 command markdown files organized by category
- `cli/` - CLI tool commands (10 files) - **New**: document-analysis mode
- `memory/` - Memory management commands (12 files) - **New**: docs-full-cli, docs-related-cli, code-map-memory, style-skill-memory
- `task/` - Task management commands (4 files)
- `workflow/` - Workflow commands (50 files) - **New**: lite-plan, lite-fix, ui-design enhancements
**Installation Path**: `~/.claude/skills/command-guide/` (skill designed for global installation)
**Absolute Reference Path**: `~/.claude/skills/command-guide/reference/`
**Usage**: Mode 6 queries these files directly for detailed command/agent analysis, or uses CLI tools (gemini/qwen) with absolute paths for complex cross-command analysis.
---
## 🛠️ Issue Templates
Generate standardized GitHub issue templates:
Generate standardized GitHub issue templates with **execution flow emphasis**:
- **[Bug Report](templates/issue-bug.md)** - Report command errors or system bugs
- **[Feature Request](templates/issue-feature.md)** - Suggest new features or improvements
- **[Question](templates/issue-question.md)** - Ask usage questions or request help
- **[Interactive Diagnosis](templates/issue-diagnosis.md)** - 🆕 Comprehensive diagnostic workflow with decision tree, checklists, and full command history
- **[Bug Report](templates/issue-bug.md)** - Report command errors with complete execution flow and environment details
- **[Feature Request](templates/issue-feature.md)** - Suggest improvements with current workflow analysis and pain points
- **[Question](templates/issue-question.md)** - Ask usage questions with detailed attempt history and context
**All templates now include**:
- ✅ Complete command history sections (with privacy protection)
- ✅ Execution environment details
- ✅ Interactive problem-locating checklists
- ✅ Structured troubleshooting guidance
Templates are auto-populated during Mode 5 (Issue Reporting) interaction.
@@ -129,11 +313,13 @@ Templates are auto-populated during Mode 5 (Issue Reporting) interaction.
## 📊 System Statistics
- **Total Commands**: 69
- **Categories**: 4 (workflow: 46, cli: 9, memory: 8, task: 4, general: 2)
- **Use Cases**: 5 (planning, implementation, testing, documentation, session-management)
- **Total Commands**: 78
- **Total Agents**: 14
- **Categories**: 5 (workflow: 50, cli: 10, memory: 12, task: 4, general: 2)
- **Use Cases**: 7 (planning, implementation, testing, documentation, session-management, analysis, general)
- **Difficulty Levels**: 3 (Beginner, Intermediate, Advanced)
- **Essential Commands**: 14
- **Essential Commands**: 13
- **Reference Docs**: 92 markdown files (14 agents + 78 commands)
---
@@ -144,7 +330,8 @@ Templates are auto-populated during Mode 5 (Issue Reporting) interaction.
When commands are added/modified/removed:
```bash
bash scripts/update-index.sh
cd /d/Claude_dms3/.claude/skills/command-guide
python scripts/analyze_commands.py
```
This script:
@@ -156,7 +343,7 @@ This script:
### Committing Updates
```bash
git add .claude/skills/command-guide/index/
git add .claude/skills/command-guide/
git commit -m "docs: update command indexes"
git push
```
@@ -174,6 +361,28 @@ Team members get latest indexes via `git pull`.
---
**Version**: 1.1.0
**Last Updated**: 2025-11-06
## 🔄 Maintenance
### Documentation Updates
This SKILL documentation is kept in sync with command implementations through a standardized update process.
**Update Guideline**: See [UPDATE-GUIDELINE.md](UPDATE-GUIDELINE.md) for the complete documentation maintenance process.
**Update Process**:
1. **Analyze**: Identify changed commands/agents from git commits
2. **Extract**: Gather change information and impact assessment
3. **Update**: Sync reference docs, guides, and examples
4. **Regenerate**: Run `scripts/analyze_commands.py` to rebuild indexes
5. **Validate**: Test examples and verify consistency
6. **Commit**: Follow standardized commit message format
**Key Capabilities**:
- 6 operation modes (Search, Recommendations, Full Docs, Onboarding, Issue Reporting, Deep Analysis)
- 80 reference documentation files (11 agents + 69 commands)
- 5 JSON indexes for fast command lookup
- 8 comprehensive guides covering all workflow patterns
- 4 issue templates for standardized problem reporting
- CLI-assisted complex query analysis with gemini/qwen integration
**Maintainer**: Claude DMS3 Team

View File

@@ -0,0 +1,592 @@
# Command Guide Update Guideline
## 📋 Purpose
This document defines a **standardized, repeatable process** for updating command-guide documentation when command changes are detected. Use this guideline every time you need to update command-guide SKILL documentation to ensure consistency and completeness.
---
## 🎯 Update Trigger Conditions
Execute this update process when ANY of the following conditions are met:
1. **New commands added** to `.claude/commands/`
2. **Command parameters changed** (new flags, modified behavior)
3. **Command architecture refactored** (workflow reorganization)
4. **Agent implementations updated** in `.claude/agents/`
5. **User explicitly requests** command-guide update
---
## 📊 Phase 1: Analysis & Discovery
### Step 1.1: Identify Changed Files
**Objective**: Discover what has changed since last update
**Actions**:
```bash
# Find recent commits affecting commands/agents
git log --oneline --since="<last-update-date>" --grep="command\|agent\|workflow" -20
# Show detailed changes
git diff <last-commit>..<current-commit> --stat .claude/commands/ .claude/agents/
# Identify modified command files
git diff <last-commit>..<current-commit> --name-only .claude/commands/
```
**Output**: List of changed files and commit messages
**Document**:
- Changed command files
- Changed agent files
- Key commit messages
- Change patterns (new features, refactoring, fixes)
---
### Step 1.2: Analyze Change Scope
**Objective**: Understand the nature and impact of changes
**Questions to Answer**:
1. **What changed?** (parameters, workflow, architecture, behavior)
2. **Why changed?** (new feature, optimization, bug fix, refactoring)
3. **Impact scope?** (single command, workflow pattern, system-wide)
4. **User-facing?** (affects user commands, internal only)
**Analysis Matrix**:
| Change Type | Detection Method | Impact Level |
|-------------|--------------------|--------------|
| **New Parameter** | Diff `argument-hint` field | Medium |
| **Workflow Reorganization** | Multiple command changes | High |
| **Architecture Change** | Agent file changes + command changes | High |
| **Bug Fix** | Single file, small change | Low |
| **New Command** | New file in `.claude/commands/` | Medium-High |
**Output**: Change classification and impact assessment
---
### Step 1.3: Map Affected Documentation
**Objective**: Identify which documentation files need updates
**Mapping Rules**:
**Command Changes** → Affects:
- `reference/commands/<category>/<command-name>.md` (copy from source)
- `index/all-commands.json` (regenerate)
- `index/by-category.json` (if new category)
- `guides/ui-design-workflow-guide.md` (if UI workflow affected)
- `guides/workflow-patterns.md` (if workflow pattern changed)
**Agent Changes** → Affects:
- `reference/agents/<agent-name>.md` (copy from source)
- `guides/implementation-details.md` (if agent behavior changed)
**Workflow Reorganization** → Affects:
- All related command references
- Workflow guides
- Examples in guides
**Output**: Checklist of files to update
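As a starting point, a quick pass like the following can seed that checklist by pairing each changed command file with its reference-doc copy (paths follow the layout described above; adjust to your checkout):

```bash
# Map changed command files to their reference/ counterparts
git diff <last-commit>..HEAD --name-only -- .claude/commands/ |
while read -r src; do
  echo "$src -> .claude/skills/command-guide/reference/commands/${src#.claude/commands/}"
done
```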
---
## 🔧 Phase 2: Content Preparation
### Step 2.1: Extract Key Information
**Objective**: Gather information needed for documentation updates
**Extract from Git Commits**:
```bash
# Get commit details
git show <commit-hash> --stat
# Extract commit message
git log --format=%B -n 1 <commit-hash>
```
**Information to Extract**:
1. **Feature Name** (from commit message)
2. **Change Description** (what was added/modified/removed)
3. **Rationale** (why the change was made)
4. **New Parameters** (from diff)
5. **Breaking Changes** (backward compatibility impact)
6. **Usage Examples** (from commit or command file)
**Output**: Structured data for documentation
---
### Step 2.2: Categorize Changes
**Objective**: Organize changes into logical categories
**Categories**:
1. **Major Features**
- New commands
- New workflows
- Architecture changes
- User-facing feature additions
2. **Enhancements**
- New parameters
- Improved behavior
- Performance optimizations
- Better error handling
3. **Refactoring**
- Code reorganization (no user impact)
- Internal structure changes
- Consistency improvements
4. **Bug Fixes**
- Corrected behavior
- Fixed edge cases
- Parameter validation fixes
5. **Documentation**
- Updated descriptions
- New examples
- Clarified usage
**Output**: Changes grouped by category with priority
---
### Step 2.3: Analyze User Impact
**Objective**: Determine what users need to know
**User Impact Questions**:
1. **Do existing workflows break?** → Migration guide needed
2. **Are new features optional?** → Enhancement documentation
3. **Is behavior significantly different?** → Usage pattern updates
4. **Do examples need updates?** → Example refresh required
**Impact Levels**:
- **Critical** (Breaking changes, migration required)
- **Important** (New features users should adopt)
- **Nice-to-have** (Enhancements, optional)
- **Internal** (No user action needed)
**Output**: User-facing change summary with impact levels
---
## 📝 Phase 3: Documentation Updates
### Step 3.1: Update Reference Documentation
**Objective**: Sync reference docs with source command files
**Actions**:
1. **Run Python Script to Sync & Rebuild**:
```bash
cd /d/Claude_dms3/.claude/skills/command-guide
python scripts/analyze_commands.py
```
This script automatically:
- Deletes existing `reference/` directory
- Copies all agent files from `.claude/agents/` to `reference/agents/`
- Copies all command files from `.claude/commands/` to `reference/commands/`
- Regenerates all 5 index files with updated metadata
2. **Verify Completeness**:
- Check sync output for file counts (11 agents + 70 commands)
- Verify all 5 index files regenerated successfully
- Ensure YAML frontmatter integrity in copied files
**Output**: Updated reference documentation matching source + regenerated indexes
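As a quick sanity check for step 2, a count comparison along these lines (run from the skill directory, mirroring the relative paths used in Phase 4) can confirm the sync completed:

```bash
# Compare source markdown counts against the synced reference/ copies
echo "agents:   $(find ../../agents -name '*.md' | wc -l) source / $(find reference/agents -name '*.md' | wc -l) synced"
echo "commands: $(find ../../commands -name '*.md' | wc -l) source / $(find reference/commands -name '*.md' | wc -l) synced"
```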
---
### Step 3.2: Update Workflow Guides
**Objective**: Reflect changes in user-facing workflow guides
**Workflow Guide Update Pattern**:
**IF** (UI workflow commands changed):
1. Open `guides/ui-design-workflow-guide.md`
2. Locate affected workflow pattern sections
3. Update examples to use new parameters/behavior
4. Add "New!" badges for new features
5. Update performance metrics if applicable
6. Add troubleshooting entries for new issues
**IF** (General workflow patterns changed):
1. Open `guides/workflow-patterns.md`
2. Update affected workflow examples
3. Add new pattern sections if applicable
**Update Template for New Features**:
````markdown
### [Feature Name] (New!)
**Purpose**: [What this feature enables]
**Usage**:
```bash
[Example command with new feature]
```
**Benefits**:
- [Benefit 1]
- [Benefit 2]
**When to Use**:
- [Use case 1]
- [Use case 2]
````
**Output**: Updated workflow guides with new features documented
---
### Step 3.3: Update Examples and Best Practices
**Objective**: Ensure examples reflect current best practices
**Example Update Checklist**:
- [ ] Remove deprecated parameter usage
- [ ] Add examples for new parameters
- [ ] Update command syntax if changed
- [ ] Verify all examples are runnable
- [ ] Add "Note" sections for common pitfalls
**Best Practices Update**:
- [ ] Add recommendations for new features
- [ ] Update "When to Use" guidelines
- [ ] Revise performance optimization tips
- [ ] Update troubleshooting entries
**Output**: Current, runnable examples
---
### Step 3.4: Update SKILL.md Metadata
**Objective**: Keep SKILL.md current without version-specific details
**Update Sections**:
1. **Supporting Guides** (if new guide added):
```markdown
- **[New Guide Name](guides/new-guide.md)** - Description
```
2. **System Statistics** (if counts changed):
```markdown
- **Total Commands**: <new-count>
- **Total Agents**: <new-count>
```
3. **Remove Old Changelog Entries**:
- Keep only last 3 changelog entries
- Archive older entries to separate file if needed
**DO NOT**:
- Add version numbers
- Add specific dates
- Create time-based changelog entries
**Output**: Updated SKILL.md metadata
---
## 🧪 Phase 4: Validation
### Step 4.1: Consistency Check
**Objective**: Ensure documentation is internally consistent
**Checklist**:
- [ ] All command references use correct names
- [ ] Parameter descriptions match command files
- [ ] Examples use valid parameter combinations
- [ ] Links between documents are not broken
- [ ] Index files reflect current command count
**Validation Commands**:
```bash
# Check for broken internal links
grep -r "\[.*\](.*\.md)" guides/ reference/ | grep -v "http"
# Verify command count consistency
actual=$(find ../../commands -name "*.md" | wc -l)
indexed=$(jq '.commands | length' index/all-commands.json)
echo "Actual: $actual, Indexed: $indexed"
```
**Output**: Validation report
---
### Step 4.2: Example Testing
**Objective**: Verify all examples are runnable
**Test Cases**:
- [ ] Copy example commands from guides
- [ ] Run in test environment
- [ ] Verify expected output
- [ ] Document any prerequisites
**Note**: Some examples may be illustrative only; mark these clearly
**Output**: Tested examples
---
### Step 4.3: Peer Review Checklist
**Objective**: Prepare documentation for review
**Review Points**:
- [ ] Is the change clearly explained?
- [ ] Are examples helpful and clear?
- [ ] Is migration guidance complete (if breaking)?
- [ ] Are troubleshooting tips adequate?
- [ ] Is the documentation easy to scan?
**Output**: Review-ready documentation
---
## 📤 Phase 5: Commit & Distribution
### Step 5.1: Git Commit Structure
**Objective**: Create clear, traceable commits
**Commit Pattern**:
```bash
git add .claude/skills/command-guide/
# Commit message format
git commit -m "docs(command-guide): update for <feature-name> changes
- Update reference docs for <changed-commands>
- Enhance <guide-name> with <feature> documentation
- Regenerate indexes (new count: <count>)
- Add troubleshooting for <new-issues>
Refs: <commit-hashes-of-source-changes>
"
```
**Commit Message Rules**:
- **Type**: `docs(command-guide)`
- **Scope**: Always `command-guide`
- **Summary**: Concise, imperative mood
- **Body**: Bullet points for each change type
- **Refs**: Link to source change commits
**Output**: Clean commit history
---
### Step 5.2: Update Distribution
**Objective**: Make updates available to users
**Actions**:
```bash
# Push to remote
git push origin main
# Verify GitHub reflects changes
# Check: https://github.com/<org>/<repo>/tree/main/.claude/skills/command-guide
```
**User Notification** (if breaking changes):
- Update project README
- Add note to main documentation
- Consider announcement in team channels
**Output**: Published updates
---
## 🔄 Phase 6: Iteration & Improvement
### Step 6.1: Gather Feedback
**Objective**: Improve documentation based on usage
**Feedback Sources**:
- User questions about changed commands
- Confusion points in examples
- Missing information requests
- Error reports
**Track**:
- Common questions → Add to troubleshooting
- Confusing examples → Simplify or expand
- Missing use cases → Add to guides
**Output**: Improvement backlog
---
### Step 6.2: Continuous Refinement
**Objective**: Keep documentation evolving
**Regular Tasks**:
- [ ] Review index statistics monthly
- [ ] Update examples with real-world usage
- [ ] Consolidate redundant sections
- [ ] Expand troubleshooting based on issues
- [ ] Refresh screenshots/outputs if UI changed
**Output**: Living documentation
---
## 📐 Update Decision Matrix
Use this matrix to determine update depth:
| Change Scope | Reference Docs | Workflow Guides | Examples | Indexes | Migration Guide |
|--------------|----------------|-----------------|----------|---------|-----------------|
| **New Parameter** | Update command file | Add parameter note | Add usage example | Regenerate | No |
| **Workflow Refactor** | Update all affected | Major revision | Update all examples | Regenerate | If breaking |
| **New Command** | Copy new file | Add workflow pattern | Add examples | Regenerate | No |
| **Architecture Change** | Update all affected | Major revision | Comprehensive update | Regenerate | Yes |
| **Bug Fix** | Update description | Add note if user-visible | Fix incorrect examples | No change | No |
| **New Feature** | Update affected files | Add feature section | Add feature examples | Regenerate | No |
---
## 🎯 Quality Gates
Before considering documentation update complete, verify:
### Gate 1: Completeness
- [ ] All changed commands have updated reference docs
- [ ] All new features are documented in guides
- [ ] All examples are current and correct
- [ ] Indexes reflect current state
### Gate 2: Clarity
- [ ] Non-expert can understand changes
- [ ] Examples demonstrate key use cases
- [ ] Migration path is clear (if breaking)
- [ ] Troubleshooting covers common issues
### Gate 3: Consistency
- [ ] Terminology is consistent across docs
- [ ] Parameter descriptions match everywhere
- [ ] Cross-references are accurate
- [ ] Formatting follows established patterns
### Gate 4: Accessibility
- [ ] Table of contents is current
- [ ] Search/navigation works
- [ ] Related docs are linked
- [ ] Issue templates reference new content
---
## 🚀 Quick Start Template
When updates are needed, follow this abbreviated workflow:
```bash
# 1. ANALYZE (5 min)
git log --oneline --since="<last-update>" --grep="command\|agent" -20
# → Identify what changed
# 2. EXTRACT (10 min)
git show <commit-hash> --stat
git diff <commit>..HEAD --stat .claude/commands/
# → Understand changes
# 3. UPDATE (30 min)
# - Update affected guide sections (ui-design-workflow-guide.md, etc.)
# - Add examples for new features
# - Document parameter changes
# 4. SYNC & REGENERATE (2 min)
cd /d/Claude_dms3/.claude/skills/command-guide
python scripts/analyze_commands.py
# → Syncs reference docs + regenerates all 5 indexes
# 5. VALIDATE (10 min)
# - Test examples
# - Check consistency
# - Verify links
# 6. COMMIT (5 min)
git add .claude/skills/command-guide/
git commit -m "docs(command-guide): update for <feature> changes"
git push origin main
```
**Total Time**: ~1 hour for typical update
---
## 🔗 Related Resources
- **Python Index Script**: `.claude/skills/command-guide/scripts/analyze_commands.py`
- **Issue Templates**: `.claude/skills/command-guide/templates/`
- **SKILL Entry Point**: `.claude/skills/command-guide/SKILL.md`
- **Reference Source**: `.claude/commands/` and `.claude/agents/`
---
## 📌 Appendix: Common Patterns
### Pattern 1: New Parameter Addition
**Example**: `--interactive` flag added to `explore-auto`
**Update Sequence**:
1. Update `guides/ui-design-workflow-guide.md` with interactive examples
2. Add "When to Use" guidance
3. Run Python script to sync reference docs and regenerate indexes
4. Update argument-hint in examples
---
### Pattern 2: Workflow Reorganization
**Example**: Layout extraction split into concept generation + selection
**Update Sequence**:
1. Major revision of workflow guide section
2. Update all workflow examples
3. Add migration notes for existing users
4. Update troubleshooting
5. Run Python script to sync and regenerate indexes
---
### Pattern 3: Architecture Change
**Example**: Agent execution model changed
**Update Sequence**:
1. Update `guides/implementation-details.md`
2. Revise all workflow patterns using affected agents
3. Create migration guide
4. Update examples comprehensively
5. Run Python script to sync and regenerate indexes
6. Add extensive troubleshooting
---
**End of Update Guideline**
This guideline is a living document. Improve it based on update experience.

View File

@@ -1,64 +1,410 @@
# CLI 智能工具指南
# CLI 工具使用指南
Gemini CLI 能够集成多种智能工具(如大型语言模型 LLM)以增强您的开发工作流。本指南将帮助您理解这些智能工具如何工作以及如何有效地利用它们。
> **SKILL 参考文档**:用于回答用户关于 CLI 工具(Gemini、Qwen、Codex)的使用问题
>
> **用途**:当用户询问 CLI 工具的能力、使用方法、调用方式时,从本文档中提取相关信息,根据用户具体需求加工后返回
## 1. 什么是智能工具
## 🎯 快速理解:CLI 工具是什么
智能工具是集成到 Gemini CLI 中的高级 AI 模型,它们能够理解自然语言、分析代码、生成文本和代码,并协助完成复杂的开发任务。它们就像您的智能助手,可以自动化重复性工作,提供洞察,并帮助您做出更好的决策
CLI 工具是集成在 Claude DMS3 中的**智能分析和执行助手**
## 2. 核心工作原理
**工作流程**
1. **用户** → 用自然语言向 Claude Code 描述需求(如"分析认证模块的安全性")
2. **Claude Code** → 识别用户意图,决定使用哪种方式:
- **CLI 工具语义调用**:生成并执行 `gemini`/`qwen`/`codex` 命令
- **Slash 命令调用**:执行预定义的工作流命令(如 `/workflow:plan`)
3. **工具** → 自动完成任务并返回结果
Gemini CLI 中的智能工具遵循以下核心原则:
**核心理念**:用户用自然语言描述需求 → Claude Code 选择最佳方式 → 工具执行 → 返回结果
- **上下文感知**: 工具会理解您当前的项目状态、代码内容和开发意图,从而提供更相关的建议和操作。这意味着它们不会“凭空”回答问题,而是基于您的实际工作环境提供帮助。
- **模块化**: 智能工具被设计为可互换的后端。您可以根据任务需求选择或组合不同的 LLM。例如某些模型可能擅长代码生成而另一些则擅长复杂推理。
- **自动化与增强**: 智能工具可以自动化重复或复杂的任务(如生成样板代码、基础重构、测试脚手架),并通过提供智能建议、解释和问题解决协助来增强您的开发能力。
- **用户控制与透明**: 您始终对工具的操作拥有最终控制权。工具会清晰地解释其建议的更改,并允许您轻松审查和修改。工具的选择和执行过程也是透明的。
---
详情可参考 `../../workflows/intelligent-tools-strategy.md`
## 📋 三大工具能力对比
## 3. 智能工具的应用场景
| 工具 | 擅长领域 | 典型场景 | 何时使用 |
|------|----------|----------|----------|
| **Gemini** | 分析、理解、规划 | 代码分析、架构设计、问题诊断 | 需要深入理解代码或系统 |
| **Qwen** | 分析、备选方案 | 代码审查、模式识别 | Gemini 不可用时的备选 |
| **Codex** | 实现、测试、执行 | 功能开发、测试生成、自动化任务 | 需要生成代码或自动执行 |
智能工具被集成到 CLI 工作流的多个关键点,以提供全面的帮助
**简单记忆**
- 想**理解**什么 → Gemini / Qwen
- 想**实现**什么 → Codex
- **提示增强**: 优化您的输入提示,使智能工具更好地理解您的需求。
- **命令**: `enhance-prompt`
- **代码分析与审查**: 快速洞察代码,识别潜在问题,并提出改进建议。
- **命令**: `analyze`, `chat`, `code-analysis`, `bug-diagnosis`
- **规划与任务分解**: 协助将复杂问题分解为可管理的小任务,并生成实施计划。
- **命令**: `plan`, `discuss-plan`, `breakdown`
- **代码生成与实现**: 根据您的规范生成代码片段、函数、测试甚至整个模块。
- **命令**: `execute`, `create`, `codex-execute`
- **测试与调试**: 生成测试用例,诊断错误,并建议修复方案。
- **命令**: `test-gen`, `test-fix-gen`, `tdd-plan`, `tdd-verify`
- **文档生成**: 自动化代码、API 和项目模块的文档创建。
- **命令**: `docs`, `skill-memory`, `update-full`, `update-related`
- **工作流编排**: 智能选择和协调工具与代理来执行复杂的工作流。
- **命令**: `workflow:status`, `resume`, `plan`, `execute` (工作流级别)
---
## 4. 如何选择和使用工具
## 🚀 Claude Code 的两种响应方式
Gemini CLI 允许您使用 `--tool` 标志来指定用于特定操作的智能工具。这为您提供了灵活性,可以根据任务的性质选择最合适的模型。
当用户用自然语言描述需求时Claude Code 会根据任务特性选择最佳方式:
- `--tool codex`: 优先使用 Codex或兼容的专注于代码的 LLM。非常适合精确的代码生成、重构和类似 linting 的任务。
- **何时使用**: 当您需要生成高质量代码、进行代码审查或修复代码错误时。
- `--tool gemini`: 优先使用 Gemini或兼容的通用 LLM。擅长复杂的推理、更广泛的分析、规划和自然语言理解任务。
- **何时使用**: 当您需要进行高层次的规划、理解复杂概念或进行广泛的代码库分析时。
- `--tool qwen`: 优先使用 Qwen或兼容的特定领域/语言 LLM。适用于需要专业知识或特定语言支持的任务。
- **何时使用**: 当您的项目涉及特定技术栈或需要特定语言的专业知识时。
- **(默认/自动)**: 如果未指定工具CLI 会根据任务上下文和可用配置智能地选择最合适的工具。
### 方式 1CLI 工具语义调用(灵活、强大)
**示例**:
**用户明确指示使用 CLI 工具**,Claude Code 生成并执行相应命令。
```bash
# 使用 Gemini 进行项目规划
gemini plan --tool gemini "设计一个新的微服务架构"
#### 示例 1:Gemini 语义分析
# 使用 Codex 生成代码
gemini execute --tool codex "task-id-for-code-generation"
# 使用 Qwen 分析特定领域的代码
gemini analyze --tool qwen "分析医疗数据处理模块"
**用户输入**(必须明确提到工具):
```
使用 gemini 分析认证模块的安全性,识别潜在的安全漏洞,
关注 JWT 令牌处理、密码存储、会话管理等方面,
提供具体的修复建议
```
通过理解这些智能工具及其用法,您可以更有效地利用 Gemini CLI加速您的开发过程。
**Claude Code 生成并执行**(CLI 命令):
```bash
cd src/auth && gemini -p "
分析认证模块的安全性,识别潜在的安全漏洞,
关注 JWT 令牌处理、密码存储、会话管理等方面,
提供具体的修复建议
" -m gemini-3-pro-preview-11-2025
```
**工具输出**:Gemini 自动理解任务,组合多个分析步骤,返回结构化报告
---
#### 示例 2:Qwen 语义分析(Gemini 的备选)
**用户输入**:
```
用 qwen 追踪用户登录的完整执行流程,
从 API 入口到数据库查询,
列出所有调用的函数和依赖关系
```
**Claude Code 生成并执行**
```bash
cd src/auth && qwen -p "
追踪用户登录的完整执行流程,
从 API 入口到数据库查询,
列出所有调用的函数和依赖关系
"
```
**工具输出**:Qwen 理解需求,自动追踪执行路径
---
#### 示例 3:Codex 语义实现
**用户输入**:
```
让 codex 实现 JWT 令牌刷新机制:
1. 添加 refreshToken 端点
2. 验证旧令牌的有效性
3. 生成新的访问令牌
4. 确保安全性(防重放攻击)
```
**Claude Code 生成并执行**
```bash
codex -C src/auth --full-auto exec "
实现 JWT 令牌刷新机制:
1. 添加 refreshToken 端点
2. 验证旧令牌的有效性
3. 生成新的访问令牌
4. 确保安全性(防重放攻击)
" -m gpt-5 --skip-git-repo-check -s danger-full-access
```
**工具输出**:Codex 理解需求,自动生成代码并集成到现有系统
---
**核心特点**
-**用户明确指定工具**:必须说"使用 gemini"、"用 qwen"、"让 codex"等触发工具调用
-**Claude 生成命令**:识别工具名称后,自动构造最优的 CLI 工具调用
-**工具自动理解**CLI 工具解析需求,组合分析/实现步骤
-**灵活强大**:不受预定义工作流限制
-**精确控制**Claude 可指定工作目录、文件范围、模型参数
**触发方式**
- "使用 gemini ..."
- "用 qwen ..."
- "让 codex ..."
- "通过 gemini 工具..."
**Claude Code 何时选择此方式**
- 用户明确指定使用某个 CLI 工具
- 复杂分析任务(跨模块、多维度)
- 自定义工作流需求
- 需要精确控制上下文范围
---
### 方式 2Slash 命令调用(标准工作流)
**用户直接输入 Slash 命令**,或 **Claude Code 建议使用 Slash 命令**,系统执行预定义工作流(内部调用 CLI 工具)。
#### Workflow 类命令(系统自动选择工具)
**示例 1:规划任务**
**用户输入**
```
/workflow:plan --agent "实现用户认证功能"
```
**系统执行**:内部调用 gemini/qwen 分析 + action-planning-agent 生成任务
---
**示例 2:执行任务**
**用户输入**
```
/workflow:execute
```
**系统执行**:内部调用 codex 实现代码
---
**示例 3生成测试**
**用户输入**
```
/workflow:test-gen WFS-xxx
```
**系统执行**:内部调用 gemini 分析 + codex 生成测试
---
#### CLI 类命令(指定工具)
**示例 1:封装的分析命令**
**用户输入**
```
/cli:analyze --tool gemini "分析认证模块"
```
**系统执行**:使用 gemini 工具进行分析
---
**示例 2:封装的执行命令**
**用户输入**
```
/cli:execute --tool codex "实现 JWT 刷新"
```
**系统执行**:使用 codex 工具实现功能
---
**示例 3:快速执行(YOLO 模式)**
**用户输入**
```
/cli:codex-execute "添加用户头像上传"
```
**系统执行**:使用 codex 快速实现
---
**核心特点**
-**用户可直接输入**Slash 命令格式固定,用户可以直接输入(如 `/workflow:plan`
-**Claude 可建议**Claude Code 也可以识别需求后建议或执行 Slash 命令
-**预定义流程**:标准化的工作流模板
-**自动工具选择**workflow 命令内部自动选择合适的 CLI 工具
-**集成完整**:包含规划、执行、测试、文档等环节
-**简单易用**:无需了解底层 CLI 工具细节
**Claude Code 何时选择此方式**
- 标准开发任务(功能开发、测试、重构)
- 团队协作(统一工作流)
- 适合新手(降低学习曲线)
- 快速开发(减少配置时间)
---
## 🔄 两种方式对比
| 维度 | CLI 工具语义调用 | Slash 命令调用 |
|------|------------------|----------------|
| **用户输入** | 纯自然语言描述需求 | `/` 开头的固定命令格式 |
| **Claude Code 行为** | 生成并执行 `gemini`/`qwen`/`codex` 命令 | 执行预定义工作流(内部调用 CLI 工具) |
| **灵活性** | 完全自定义任务和执行方式 | 固定工作流模板 |
| **学习曲线** | 用户无需学习(纯自然语言) | 需要知道 Slash 命令名称 |
| **适用复杂度** | 复杂、探索性、定制化任务 | 标准、重复性、工作流化任务 |
| **工具选择** | Claude 自动选择最佳 CLI 工具 | 系统自动选择(workflow 类)<br>或用户指定(cli 类) |
| **典型场景** | 深度分析、自定义流程、探索研究 | 日常开发、团队协作、标准流程 |
**使用建议**
- **日常开发** → 优先使用 Slash 命令(标准化、快速)
- **复杂分析** → Claude 自动选择 CLI 工具语义调用(灵活、强大)
- **用户角度** → 只需用自然语言描述需求Claude Code 会选择最佳方式
---
## 💡 工具能力速查
### Gemini - 分析与规划
- 执行流程追踪、依赖分析、代码模式识别
- 架构设计、技术方案评估、任务分解
- 文档生成API 文档、模块说明)
**触发示例**`使用 gemini 追踪用户登录的完整流程`
---
### Qwen - Gemini 的备选
- 代码分析、模式识别、架构评审
- 作为 Gemini 不可用时的备选方案
**触发示例**`用 qwen 分析数据处理模块`
---
### Codex - 实现与执行
- 功能开发、组件实现、API 创建
- 单元测试、集成测试、TDD 支持
- 代码重构、性能改进、Bug 修复
**触发示例**`让 codex 实现用户注册功能,包含邮箱验证`
---
## 🔄 典型使用场景
### 场景 1理解陌生代码库
**需求**:接手新项目,需要快速理解代码结构
**方式 1CLI 工具语义调用**(推荐,灵活)
- **用户输入**`使用 gemini 分析这个项目的架构设计,识别主要模块、依赖关系和架构模式`
- **Claude Code 生成并执行**`cd project-root && gemini -p "..."`
**方式 2Slash 命令**
- **用户输入**`/cli:analyze --tool gemini "分析项目架构"`
---
### 场景 2实现新功能
**需求**:实现用户认证功能
**方式 1CLI 工具语义调用**
- **用户输入**`让 codex 实现用户认证功能:注册(邮箱+密码+验证)、登录JWT token、刷新令牌技术栈 Node.js + Express`
- **Claude Code 生成并执行**`codex -C src/auth --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
**方式 2Slash 命令**(工作流化)
- **用户输入**`/workflow:plan --agent "实现用户认证功能"``/workflow:execute`
---
### 场景 3诊断 Bug
**需求**:登录功能偶尔超时
**方式 1CLI 工具语义调用**
- **用户输入**`使用 gemini 诊断登录超时问题,分析处理流程、性能瓶颈、数据库查询效率`
- **Claude Code 生成并执行**`cd src/auth && gemini -p "..."`
- **用户输入**`让 codex 根据上述分析修复登录超时,优化查询、添加缓存`
- **Claude Code 生成并执行**`codex -C src/auth --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
**方式 2Slash 命令**
- **用户输入**`/cli:mode:bug-diagnosis --tool gemini "诊断登录超时"``/cli:execute --tool codex "修复登录超时"`
---
### 场景 4生成文档
**需求**:为 API 模块生成完整文档
**方式 1CLI 工具语义调用**
- **用户输入**`使用 gemini 为 API 模块生成技术文档,包含端点说明、数据模型、使用示例`
- **Claude Code 生成并执行**`cd src/api && gemini -p "..." --approval-mode yolo`
**方式 2Slash 命令**
- **用户输入**`/memory:docs src/api --tool gemini --mode full`
---
## 🎯 常用工作流程
### 简单 Bug 修复
```
使用 gemini 诊断问题(可选其他 cli 工具)
→ Claude 分析
→ Claude 直接执行修复
```
### 复杂 Bug 修复
```
/cli:mode:plan 或 /cli:mode:bug-diagnosis
→ Claude 分析
→ Claude 执行修复
```
### 简单功能增加
```
/cli:mode:plan
→ Claude 执行
```
### 复杂功能增加
```
/cli:mode:plan --agent
→ Claude 执行 或 /cli:codex-execute
/cli:mode:plan
→ 进入工作流模式(/workflow:execute
```
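上面"复杂功能增加"的两条路径,可以展开为如下命令序列示意(功能描述为占位,按实际需求替换):
```bash
# 路径 A规划后由 Codex 快速实现
/cli:mode:plan --agent "实现用户认证功能"
/cli:codex-execute "按照上述计划实现用户认证功能"

# 路径 B规划后进入标准工作流
/cli:mode:plan "实现用户认证功能"
/workflow:execute
```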
### 项目内存管理
**建立技术栈文档**(为项目提供技术参考)
```
/memory:tech-research [session-id | tech-stack-name]
```
**为项目重建多级结构的 CLAUDE.md 内存**
```
/memory:docs [path] [--tool gemini|qwen|codex] [--mode full|partial]
```
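一个具体调用示意(`/memory:docs` 的参数取自本指南其他示例,技术栈名称仅为占位):
```bash
# 为项目沉淀某一技术栈的参考文档(技术栈名称为示例占位)
/memory:tech-research react

# 为 src/api 模块重建多级 CLAUDE.md 内存
/memory:docs src/api --tool gemini --mode full
```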
---
## 📚 常用命令速查
| 需求 | 推荐命令 |
|------|----------|
| **代码分析** | `使用 gemini 分析...``/cli:analyze --tool gemini` |
| **Bug 诊断** | `/cli:mode:bug-diagnosis` |
| **功能实现** | `/cli:codex-execute``让 codex 实现...` |
| **架构规划** | `/cli:mode:plan` |
| **生成测试** | `/workflow:test-gen WFS-xxx` |
| **完整工作流** | `/workflow:plan``/workflow:execute` |
| **技术文档** | `/memory:tech-research [tech-name]` |
| **项目文档** | `/memory:docs [path]` |
---
## 🆘 快速提示
**触发 CLI 工具语义调用**
- "使用 gemini ..."
- "用 qwen ..."
- "让 codex ..."
**选择工具**
- **理解/分析/规划** → Gemini
- **实现/测试/执行** → Codex
- **不确定** → 使用 Slash 命令让系统选择
**提升质量**
- 清晰描述需求和期望
- 提供上下文信息
- 使用 `--agent` 处理复杂任务
---
**最后更新**: 2025-11-06


@@ -1,95 +1,242 @@
# 5分钟快速上手指南
欢迎来到 Gemini CLI本指南将帮助您快速了解核心命令并通过一个简单的工作流示例让您在5分钟内开始使用
> 欢迎使用 Claude DMS3本指南将帮助您快速上手5分钟内开始第一个工作流
## 1. Gemini CLI 简介
## 🎯 Claude DMS3 是什么?
Gemini CLI 是一个强大的命令行工具,旨在通过智能代理和自动化工作流,提升您的开发效率。它能够帮助您进行代码分析、任务规划、代码生成、测试以及文档编写等。
Claude DMS3 是一个**智能开发管理系统**,集成了 69 个命令,帮助您:
- 📋 规划和分解复杂任务
- ⚡ 自动化代码实现
- 🧪 生成和执行测试
- 📚 生成项目文档
- 🤖 使用 AI 工具Gemini、Qwen、Codex加速开发
## 2. 核心命令速览
**核心理念**:用自然语言描述需求 → 系统自动规划和执行 → 获得结果
以下是一些您将频繁使用的核心命令:
---
### `version` - 查看版本信息
- **用途**: 检查当前 CLI 版本并获取更新信息。
- **示例**:
```bash
gemini version
```
## 🚀 最常用的14个命令
### `enhance-prompt` - 智能提示增强
- **用途**: 根据当前会话记忆和代码库分析,智能地优化您的输入提示,让代理更好地理解您的意图。
- **示例**:
```bash
gemini enhance-prompt "如何修复这个bug"
```
### 工作流类(必会)
### `analyze` - 快速代码分析
- **用途**: 使用 CLI 工具(如 Codex, Gemini, Qwen对代码库进行快速分析获取洞察。
- **示例**:
```bash
gemini analyze "分析 src/main.py 中的性能瓶颈"
```
| 命令 | 用途 | 何时使用 |
|------|------|----------|
| `/workflow:plan` | 规划任务 | 开始新功能、新项目 |
| `/workflow:execute` | 执行任务 | plan 之后,实现功能 |
| `/workflow:test-gen` | 生成测试 | 实现完成后,生成测试 |
| `/workflow:status` | 查看进度 | 查看工作流状态 |
| `/workflow:resume` | 恢复任务 | 继续之前的工作流 |
### `chat` - 交互式对话
- **用途**: 与 CLI 进行简单的交互式对话,直接进行代码分析或提问。
- **示例**:
```bash
gemini chat "解释一下 UserService.java 的主要功能"
```
### CLI 工具类(常用)
### `plan` - 项目规划与架构分析
- **用途**: 启动项目规划和架构分析工作流,帮助您将复杂问题分解为可执行的任务。
- **示例**:
```bash
gemini plan "实现用户认证模块"
```
| 命令 | 用途 | 何时使用 |
|------|------|----------|
| `/cli:analyze` | 代码分析 | 理解代码、分析架构 |
| `/cli:execute` | 执行实现 | 精确控制实现过程 |
| `/cli:codex-execute` | 自动化实现 | 快速实现功能 |
| `/cli:chat` | 问答交互 | 询问代码库问题 |
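结合上表,几个典型调用示意(提示语可按需替换,其中 `/cli:chat` 的问题内容为示例假设):
```bash
/cli:analyze --tool gemini "分析项目架构和模块关系"
/cli:chat "解释一下认证模块的主要职责"
/cli:execute --tool codex "实现 JWT 刷新"
/cli:codex-execute "添加用户头像上传功能,支持图片裁剪和压缩"
```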
### `create` - 创建实现任务
- **用途**: 根据上下文创建具体的实现任务。
- **示例**:
```bash
gemini create "编写用户注册接口"
```
### Memory 类(知识管理)
### `execute` - 自动执行任务
- **用途**: 自动执行实现任务,智能地推断上下文并协调代理完成工作。
- **示例**:
```bash
gemini execute "task-id-123"
```
| 命令 | 用途 | 何时使用 |
|------|------|----------|
| `/memory:docs` | 生成文档 | 生成模块文档 |
| `/memory:load` | 加载上下文 | 获取任务相关上下文 |
## 3. 第一个工作流示例:规划与执行一个简单任务
### Task 类(任务管理)
让我们通过一个简单的例子来体验 Gemini CLI 的工作流:**规划并实现一个“Hello World”函数**。
| 命令 | 用途 | 何时使用 |
|------|------|----------|
| `/task:create` | 创建任务 | 手动创建单个任务 |
| `/task:execute` | 执行任务 | 执行特定任务 |
1. **规划任务**:
首先,我们使用 `plan` 命令来规划我们的“Hello World”功能。
```bash
gemini plan "实现一个打印 'Hello World!' 的 Python 函数"
```
CLI 将会启动一个规划工作流,可能会询问您一些问题,并最终生成一个或多个任务。
---
2. **创建具体任务**:
假设 `plan` 命令为您生成了一个任务 ID或者您想手动创建一个任务。
```bash
gemini create "编写 Python 函数 `say_hello` 打印 'Hello World!'"
```
这个命令会创建一个新的任务,并返回一个任务 ID。
## 📝 第一个工作流:实现一个新功能
3. **执行任务**:
现在,我们使用 `execute` 命令来让 CLI 自动完成这个任务。请将 `your-task-id` 替换为上一步中获得的实际任务 ID。
```bash
gemini execute "your-task-id"
```
CLI 将会调用智能代理,根据任务描述生成代码,并尝试将其写入文件。
让我们通过一个实际例子来体验完整的工作流:**实现用户登录功能**
通过这三个简单的步骤,您就完成了一个从规划到执行的完整工作流。
### 步骤 1规划任务
## 4. 接下来做什么?
```bash
/workflow:plan --agent "实现用户登录功能包括邮箱密码验证和JWT令牌"
```
- 探索更多命令:使用 `gemini help` 查看所有可用命令。
- 查阅其他指南深入了解工作流模式、CLI 工具使用和故障排除。
- 尝试更复杂的任务:挑战自己,使用 Gemini CLI 解决实际项目中的问题。
**发生什么**
- 系统分析需求
- 自动生成任务计划IMPL_PLAN.md
- 创建多个子任务task JSON 文件)
- 返回 workflow session ID如 WFS-20251106-xxx
祝您使用愉快!
**你会看到**
- ✅ 规划完成
- 📋 任务列表task-001-user-model, task-002-login-api 等)
- 📁 Session 目录创建
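规划产物会写入对应的 session 目录,可以用下面的方式快速确认(目录与文件名为示意,以实际输出为准):
```bash
# 查看最新 session 的规划产物(路径为示意)
ls .workflow/WFS-20251106-xxx/
# 预期包含 IMPL_PLAN.md 以及若干 task JSON 文件
```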
---
### 步骤 2执行实现
```bash
/workflow:execute
```
**发生什么**
- 系统自动发现最新的 workflow session
- 按顺序执行所有任务
- 使用 Codex 自动生成代码
- 实时显示进度
**你会看到**
- ⏳ Task 1 执行中...
- ✅ Task 1 完成
- ⏳ Task 2 执行中...
- (依次执行所有任务)
---
### 步骤 3生成测试
```bash
/workflow:test-gen WFS-20251106-xxx
```
**发生什么**
- 分析实现的代码
- 生成测试策略
- 创建测试任务
---
### 步骤 4查看状态
```bash
/workflow:status
```
**发生什么**
- 显示当前工作流状态
- 列出所有任务及其状态
- 显示已完成/进行中/待执行任务
---
## 🎓 其他常用场景
### 场景 1快速代码分析
**需求**:理解陌生代码
```bash
# 分析整体架构
/cli:analyze --tool gemini "分析项目架构和模块关系"
# 追踪执行流程
/cli:mode:code-analysis --tool gemini "追踪用户注册的执行流程"
```
---
### 场景 2快速实现功能
**需求**:实现一个简单功能
```bash
# 方式 1完整工作流(推荐)
/workflow:plan "添加用户头像上传功能"
/workflow:execute
# 方式 2直接实现(快速)
/cli:codex-execute "添加用户头像上传功能,支持图片裁剪和压缩"
```
---
### 场景 3恢复之前的工作
**需求**:继续上次的任务
```bash
# 查看可恢复的 session
/workflow:status
# 恢复特定 session
/workflow:resume WFS-20251106-xxx
```
---
### 场景 4生成文档
**需求**:为模块生成文档
```bash
/memory:docs src/auth --tool gemini --mode full
```
---
## 💡 快速记忆法
记住这个流程,就能完成大部分任务:
```
规划 → 执行 → 测试 → 完成
↓ ↓ ↓
plan → execute → test-gen
```
**扩展场景**
- 需要分析理解 → 使用 `/cli:analyze`
- 需要精确控制 → 使用 `/cli:execute`
- 需要快速实现 → 使用 `/cli:codex-execute`
---
## 🆘 遇到问题?
### 命令记不住?
使用 Command Guide SKILL
```bash
ccw # 或 ccw-help
```
然后说:
- "搜索 planning 命令"
- "执行完 /workflow:plan 后做什么"
- "我是新手,如何开始"
---
### 执行失败?
1. **查看错误信息**:仔细阅读错误提示
2. **使用诊断模板**`ccw-issue` → 选择 "诊断模板"
3. **查看排查指南**[Troubleshooting Guide](troubleshooting.md)
---
### 想深入学习?
- **工作流模式**[Workflow Patterns](workflow-patterns.md) - 学习更多工作流组合
- **CLI 工具使用**[CLI Tools Guide](cli-tools-guide.md) - 了解 Gemini/Qwen/Codex 的高级用法
- **完整命令列表**:查看 `index/essential-commands.json`
---
## 🎯 下一步
现在你已经掌握了基础!尝试:
1. **实践基础工作流**:选择一个小功能,走一遍 plan → execute → test-gen 流程
2. **探索 CLI 工具**:尝试用 `/cli:analyze` 分析你的代码库
3. **学习工作流模式**:阅读 [Workflow Patterns](workflow-patterns.md) 了解更多高级用法
**记住**Claude DMS3 的设计理念是让你用自然语言描述需求,系统自动完成繁琐的工作。不要担心命令记不住,随时可以使用 `ccw` 获取帮助!
---
**祝你使用愉快!** 🎉
有任何问题,使用 `ccw-issue` 提交问题或查询帮助。


@@ -9,9 +9,11 @@ User Query
Intent Recognition
Mode Selection (1 of 5)
Mode Selection (1 of 6)
Index/File Query
Index/File/Reference Query
Optional CLI Analysis (Mode 6)
Response Formation
@@ -51,6 +53,15 @@ function recognizeIntent(userQuery) {
// Mode 3: Documentation
if (query.includes('参数') || query.includes('怎么用') ||
query.includes('如何使用') || query.match(/\/\w+:\w+.*详情/)) {
// Special case: CLI tools usage guide
if (query.match(/cli.*工具/) || query.match(/如何.*使用.*cli/) ||
query.match(/gemini|qwen|codex.*使用/) || query.match(/优雅.*使用/) ||
query.includes('cli能力') || query.includes('cli特性') ||
query.includes('语义调用') || query.includes('命令调用')) {
return 'CLI_TOOLS_GUIDE';
}
return 'DOCUMENTATION';
}
@@ -60,6 +71,17 @@ function recognizeIntent(userQuery) {
return 'ONBOARDING';
}
// Mode 6: Deep Command Analysis
// Triggered by specific command/agent names or complexity indicators
if (query.match(/\/\w+:\w+/) || // Contains command name pattern
query.match(/agent.*工作|实现.*原理|命令.*细节/) || // Asks about internals
query.includes('详细说明') || query.includes('实现细节') ||
query.match(/对比.*命令|workflow.*对比/) || // Comparison queries
query.match(/\w+-agent/) || // Agent name pattern
query.includes('最佳实践') && query.match(/\w+:\w+/)) { // Best practices for specific command
return 'DEEP_ANALYSIS';
}
// Default: Ask for clarification
return 'CLARIFY';
}
@@ -225,6 +247,15 @@ async function getRecommendations(currentCommand) {
- "如何使用 /cli:execute"
- "task:create 详细文档"
**Special Case - CLI Tools Guide**:
**Keywords**: cli工具, 如何使用cli, gemini/qwen/codex使用, 优雅使用, cli能力, cli特性, 语义调用, 命令调用
**Examples**:
- "如何优雅的使用cli工具"
- "cli工具能做什么"
- "gemini和codex的区别"
- "语义调用是什么"
### Processing Flow
```
@@ -253,7 +284,32 @@ async function getRecommendations(currentCommand) {
### Implementation
```javascript
async function getDocumentation(commandName) {
async function getDocumentation(commandName, queryType = 'DOCUMENTATION') {
// Special case: CLI tools usage guide
if (queryType === 'CLI_TOOLS_GUIDE') {
const guideContent = await readFile('guides/cli-tools-guide.md');
return {
type: 'CLI_TOOLS_GUIDE',
title: 'CLI 工具使用指南',
content: guideContent,
sections: {
introduction: extractSection(guideContent, '## 🎯 快速理解'),
comparison: extractSection(guideContent, '## 📋 三大工具能力对比'),
how_to_use: extractSection(guideContent, '## 🚀 如何调用'),
capabilities: extractSection(guideContent, '## 💡 能力特性清单'),
scenarios: extractSection(guideContent, '## 🔄 典型使用场景'),
quick_reference: extractSection(guideContent, '## 📚 快速参考'),
faq: extractSection(guideContent, '## 🆘 常见问题')
},
related_docs: [
'intelligent-tools-strategy.md',
'workflow-patterns.md',
'getting-started.md'
]
};
}
// Normal command documentation
// Normalize command name
const normalized = normalizeCommandName(commandName);
@@ -460,6 +516,432 @@ async function reportIssue(issueType) {
---
## Mode 6: Deep Command Analysis 🔬
**Path Configuration Note**:
This mode uses absolute paths (`~/.claude/skills/command-guide/reference`) to ensure the skill works correctly regardless of where it's installed. The skill is designed to be installed in `~/.claude/skills/` (user's global Claude configuration directory).
### Trigger Analysis
**Keywords**: 详细说明, 命令原理, agent 如何工作, 实现细节, 对比命令, 最佳实践
**Examples**:
- "action-planning-agent 如何工作?"
- "/workflow:plan 的实现原理是什么?"
- "对比 workflow:plan 和 workflow:tdd-plan"
- "ui-design-agent 详细说明"
### Processing Flow
```
1. Parse Query
├─ Identify target command(s)/agent(s)
├─ Determine query complexity
└─ Extract specific questions
2. Classify Query Type
├─ Simple: Single entity, basic explanation
└─ Complex: Multi-entity comparison, best practices, workflows
3. Simple Query Path
├─ Locate file in reference/
├─ Read markdown content
├─ Extract relevant sections
└─ Format response
4. Complex Query Path
├─ Identify all relevant files
├─ Construct CLI analysis prompt
├─ Execute gemini/qwen analysis
└─ Return comprehensive results
5. Response Enhancement
├─ Add usage examples
├─ Link to related docs
└─ Suggest next steps
```
### Query Classification Logic
```javascript
function classifyDeepAnalysisQuery(query) {
const complexityIndicators = {
multiEntity: query.match(/对比|比较|区别/) && query.match(/(\/\w+:\w+.*){2,}/),
bestPractices: query.includes('最佳实践') || query.includes('推荐用法'),
workflowAnalysis: query.match(/工作流.*分析|流程.*说明/),
architecturalDepth: query.includes('架构') || query.includes('设计思路'),
crossReference: query.match(/和.*一起用|配合.*使用/)
};
const isComplex = Object.values(complexityIndicators).some(v => v);
return {
isComplex,
indicators: complexityIndicators,
requiresCLI: isComplex
};
}
```
### Simple Query Implementation
```javascript
async function handleSimpleQuery(query) {
// Extract entity name (command or agent)
const entityName = extractEntityName(query); // e.g., "action-planning-agent" or "workflow:plan"
// Determine if command or agent
const isAgent = entityName.includes('-agent') || entityName.includes('agent');
const isCommand = entityName.includes(':') || entityName.startsWith('/');
// Base path for reference documentation
const basePath = '~/.claude/skills/command-guide/reference';
let filePath;
if (isAgent) {
// Agent query - use absolute path
const agentFileName = entityName.replace(/^\//, ''); // agent names already end with '-agent'
filePath = `${basePath}/agents/${agentFileName}.md`;
} else if (isCommand) {
// Command query - need to find in command hierarchy
const cmdName = entityName.replace(/^\//, '');
filePath = await locateCommandFile(cmdName, basePath);
}
// Read documentation
const docContent = await readFile(filePath);
// Extract relevant sections based on query keywords
const sections = extractRelevantSections(docContent, query);
// Format response
return {
entity: entityName,
type: isAgent ? 'agent' : 'command',
documentation: sections,
full_path: filePath,
related: await findRelatedEntities(entityName)
};
}
async function locateCommandFile(commandName, basePath) {
// Parse command category from name
// e.g., "workflow:plan" → "~/.claude/skills/command-guide/reference/commands/workflow/plan.md"
const [category, name] = commandName.split(':');
// Search in reference/commands hierarchy using absolute paths
const possiblePaths = [
`${basePath}/commands/${category}/${name}.md`,
`${basePath}/commands/${category}/${name}/*.md`,
`${basePath}/commands/${name}.md`
];
for (const path of possiblePaths) {
if (await fileExists(path)) {
return path;
}
}
throw new Error(`Command file not found: ${commandName}`);
}
function extractRelevantSections(markdown, query) {
// Parse markdown into sections
const sections = parseMarkdownSections(markdown);
// Determine which sections are relevant
const keywords = extractKeywords(query);
const relevantSections = {};
// Always include overview/description
if (sections['## Overview'] || sections['## Description']) {
relevantSections.overview = sections['## Overview'] || sections['## Description'];
}
// Include specific sections based on keywords
if (keywords.includes('参数') || keywords.includes('参数说明')) {
relevantSections.parameters = sections['## Parameters'] || sections['## Arguments'];
}
if (keywords.includes('例子') || keywords.includes('示例') || keywords.includes('example')) {
relevantSections.examples = sections['## Examples'] || sections['## Usage'];
}
if (keywords.includes('工作流') || keywords.includes('流程')) {
relevantSections.workflow = sections['## Workflow'] || sections['## Process Flow'];
}
if (keywords.includes('最佳实践') || keywords.includes('建议')) {
relevantSections.best_practices = sections['## Best Practices'] || sections['## Recommendations'];
}
return relevantSections;
}
```
### Complex Query Implementation (CLI-Assisted)
```javascript
async function handleComplexQuery(query, classification) {
// Identify all entities mentioned in query
const entities = extractAllEntities(query); // Returns array of command/agent names
// Build file context for CLI analysis
const contextPaths = [];
for (const entity of entities) {
const path = await resolveEntityPath(entity);
contextPaths.push(path);
}
// Construct CLI prompt based on query type
const prompt = buildCLIPrompt(query, classification, contextPaths);
// Execute CLI analysis
const cliResult = await executeCLIAnalysis(prompt);
return {
query_type: 'complex',
analysis_method: 'CLI-assisted (gemini)',
entities_analyzed: entities,
result: cliResult,
source_files: contextPaths
};
}
function buildCLIPrompt(userQuery, classification, contextPaths) {
// Extract key question
const question = extractCoreQuestion(userQuery);
// Build context reference
const contextRef = contextPaths.map(p => `@${p}`).join(' ');
// Determine analysis focus based on classification
let taskDescription = '';
if (classification.indicators.multiEntity) {
taskDescription = `• Compare the entities mentioned in terms of:
- Use cases and scenarios
- Capabilities and features
- When to use each
- Workflow integration
• Provide side-by-side comparison
• Recommend usage guidelines`;
} else if (classification.indicators.bestPractices) {
taskDescription = `• Analyze best practices for the mentioned entities
• Provide practical usage recommendations
• Include common pitfalls to avoid
• Show example workflows`;
} else if (classification.indicators.workflowAnalysis) {
taskDescription = `• Trace the workflow execution
• Explain process flow and dependencies
• Identify key integration points
• Provide usage examples`;
} else {
taskDescription = `• Provide comprehensive analysis
• Explain implementation details
• Show practical examples
• Include related concepts`;
}
// Construct full prompt using Standard Template
// Note: CONTEXT uses @**/* because we'll use --include-directories to specify the reference path
return `PURPOSE: Analyze command/agent documentation to provide comprehensive answer to user query
TASK:
${taskDescription}
MODE: analysis
CONTEXT: @**/*
EXPECTED: Comprehensive answer with examples, comparisons, and recommendations in markdown format
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on practical usage and real-world scenarios | analysis=READ-ONLY
User Question: ${question}`;
}
async function executeCLIAnalysis(prompt) {
// Use absolute path for reference directory
// This ensures the command works regardless of where the skill is installed
const referencePath = '~/.claude/skills/command-guide/reference';
// Execute gemini with analysis prompt using --include-directories
// This allows gemini to access reference docs while maintaining correct file context
const command = `gemini -p "${escapePrompt(prompt)}" --include-directories ${referencePath}`;
try {
const result = await execBash(command, { timeout: 120000 }); // 2 min timeout
return parseAnalysisResult(result.stdout);
} catch (error) {
// Fallback to qwen if gemini fails
console.warn('Gemini failed, falling back to qwen');
const fallbackCmd = `qwen -p "${escapePrompt(prompt)}" --include-directories ${referencePath}`;
const result = await execBash(fallbackCmd, { timeout: 120000 });
return parseAnalysisResult(result.stdout);
}
}
function parseAnalysisResult(rawOutput) {
// Extract main content from CLI output
// Remove CLI wrapper/metadata, keep analysis content
const lines = rawOutput.split('\n');
const contentStart = lines.findIndex(l => l.trim().startsWith('#') || l.length > 50);
// Fall back to the full output if no content line was detected (findIndex returns -1)
const content = (contentStart >= 0 ? lines.slice(contentStart) : lines).join('\n');
return {
raw: rawOutput,
parsed: content,
format: 'markdown'
};
}
```
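For reference, the command that `executeCLIAnalysis()` assembles looks roughly like the sketch below (prompt abbreviated; the `--include-directories` flag and reference path are the ones used in the implementation above):
```bash
# Rough shape of the assembled analysis call (prompt truncated for brevity)
gemini -p "PURPOSE: Analyze command/agent documentation ... User Question: 对比 workflow:plan 和 workflow:tdd-plan" \
  --include-directories ~/.claude/skills/command-guide/reference
```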
### Helper Functions
```javascript
function extractEntityName(query) {
// Extract command name pattern: /workflow:plan or workflow:plan
const cmdMatch = query.match(/\/?(\w+:\w+)/);
if (cmdMatch) return cmdMatch[1];
// Extract agent name pattern: action-planning-agent or action planning agent
const agentMatch = query.match(/(\w+(?:-\w+)*-agent|\w+\s+agent)/);
if (agentMatch) return agentMatch[1].replace(/\s+/g, '-');
return null;
}
function extractAllEntities(query) {
const entities = [];
// Find all command patterns
const commands = query.match(/\/?(\w+:\w+)/g);
if (commands) {
entities.push(...commands.map(c => c.replace('/', '')));
}
// Find all agent patterns
const agents = query.match(/(\w+(?:-\w+)*-agent)/g);
if (agents) {
entities.push(...agents);
}
return [...new Set(entities)]; // Deduplicate
}
async function resolveEntityPath(entityName) {
// Base path for reference documentation
const basePath = '~/.claude/skills/command-guide/reference';
const isAgent = entityName.includes('-agent');
if (isAgent) {
// Return relative path within reference directory (used for @context in CLI)
return `agents/${entityName}.md`;
} else {
// Command - need to find in hierarchy
const [category] = entityName.split(':');
// Use glob to find the file (glob pattern uses absolute path)
const matches = await glob(`${basePath}/commands/${category}/**/${entityName.split(':')[1]}.md`);
if (matches.length > 0) {
// Return relative path within reference directory
return matches[0].replace(`${basePath}/`, '');
}
throw new Error(`Entity file not found: ${entityName}`);
}
}
function extractCoreQuestion(query) {
// Remove common prefixes
const cleaned = query
.replace(/^(请|帮我|能否|可以)/g, '')
.replace(/^(ccw|CCW)[:\s]*/gi, '')
.trim();
// Ensure it ends with question mark if it's interrogative
if (cleaned.match(/什么|如何|为什么|怎么|哪个/) && !cleaned.endsWith('?') && !cleaned.endsWith('')) {
return cleaned + '';
}
return cleaned;
}
function escapePrompt(prompt) {
// Escape special characters for bash
return prompt
.replace(/\\/g, '\\\\')
.replace(/"/g, '\\"')
.replace(/\$/g, '\\$')
.replace(/`/g, '\\`');
}
```
### Example Outputs
**Simple Query Example**:
```javascript
// Input: "action-planning-agent 如何工作?"
{
entity: "action-planning-agent",
type: "agent",
documentation: {
overview: "# Action Planning Agent\n\nGenerates structured task plans...",
workflow: "## Workflow\n1. Analyze requirements\n2. Break down into tasks...",
examples: "## Examples\n```bash\n/workflow:plan --agent \"feature\"\n```"
},
full_path: "~/.claude/skills/command-guide/reference/agents/action-planning-agent.md",
related: ["workflow:plan", "task:create", "conceptual-planning-agent"]
}
```
**Complex Query Example**:
```javascript
// Input: "对比 workflow:plan 和 workflow:tdd-plan 的使用场景和最佳实践"
{
query_type: "complex",
analysis_method: "CLI-assisted (gemini)",
entities_analyzed: ["workflow:plan", "workflow:tdd-plan"],
result: {
parsed: `# 对比分析: workflow:plan vs workflow:tdd-plan
## 使用场景对比
### workflow:plan
- **适用场景**: 通用功能开发,无特殊测试要求
- **特点**: 灵活的任务分解focus on implementation
...
### workflow:tdd-plan
- **适用场景**: 测试驱动开发,需要严格测试覆盖
- **特点**: Red-Green-Refactor 循环test-first
...
## 最佳实践
### workflow:plan 最佳实践
1. 先分析需求,明确目标
2. 合理分解任务,避免过大或过小
...
### workflow:tdd-plan 最佳实践
1. 先写测试,明确预期行为
2. 保持 Red-Green-Refactor 节奏
...
## 选择建议
| 情况 | 推荐命令 |
|------|----------|
| 新功能开发,无特殊测试要求 | workflow:plan |
| 核心模块,需要高测试覆盖 | workflow:tdd-plan |
| 快速原型,验证想法 | workflow:plan |
| 关键业务逻辑 | workflow:tdd-plan |
`,
format: "markdown"
},
source_files: [
"~/.claude/skills/command-guide/reference/commands/workflow/plan.md",
"~/.claude/skills/command-guide/reference/commands/workflow/tdd-plan.md"
]
}
```
---
## Error Handling
### Not Found
@@ -523,4 +1005,6 @@ async function readIndex(filename) {
---
**Last Updated**: 2025-01-06
**Last Updated**: 2025-11-06
**Version**: 1.3.0 - Added Mode 6: Deep Command Analysis with reference documentation backup and CLI-assisted complex queries


@@ -0,0 +1,316 @@
# UI Design Workflow Guide
## Overview
The UI Design Workflow System is a comprehensive suite of 11 autonomous commands designed to transform intent (prompts), references (images/URLs), or existing code into functional, production-ready UI prototypes. It employs a **Separation of Concerns** architecture, treating Style (visual tokens), Structure (layout templates), and Motion (animation tokens) as independent, mix-and-match components.
## Command Taxonomy
### 1. Orchestrators (High-Level Workflows)
These commands automate end-to-end processes by chaining specialized sub-commands.
- **`/workflow:ui-design:explore-auto`**: For creating *new* designs. Generates multiple style and layout variants from a prompt to explore design directions.
- **`/workflow:ui-design:imitate-auto`**: For *replicating* existing designs. Creates design systems from local reference files (images, code) or text prompts.
### 2. Core Extractors (Specialized Analysis)
Agents dedicated to analyzing specific aspects of design.
- **`/workflow:ui-design:style-extract`**: Extracts visual tokens (colors, typography, spacing, shadows) into `design-tokens.json`.
- **`/workflow:ui-design:layout-extract`**: Extracts DOM structure and CSS layout rules into `layout-templates.json`.
- **`/workflow:ui-design:animation-extract`**: Extracts motion patterns into `animation-tokens.json`.
### 3. Input & Capture Utilities
Tools for gathering raw data for analysis.
- **`/workflow:ui-design:capture`**: High-speed batch screenshot capture for multiple URLs.
- **`/workflow:ui-design:explore-layers`**: Interactive, depth-controlled capture (e.g., capturing modals, dropdowns, shadow DOM).
- **`/workflow:ui-design:import-from-code`**: Bootstraps a design system by analyzing existing CSS/JS/HTML files.
### 4. Assemblers & Integrators
Tools for combining components and integrating results.
- **`/workflow:ui-design:generate`**: Pure assembler that combines Layout Templates + Design Tokens into HTML/CSS prototypes.
- **`/workflow:ui-design:update`**: Synchronizes generated design artifacts with the main project session for implementation planning.
---
## Common Workflow Patterns
### Workflow A: Exploratory Design (New Concepts)
**Goal:** Create multiple design options for a new project from a text description.
**Primary Command:** `explore-auto`
**Steps:**
1. **Initiate**: User runs `/workflow:ui-design:explore-auto --prompt "Modern fintech dashboard" --style-variants 3`
2. **Style Extraction**: System generates 3 distinct visual design systems based on the prompt.
3. **Layout Extraction**: System researches and generates responsive layout templates for a dashboard.
4. **Assembly**: System generates a matrix of prototypes (3 styles × 1 layout = 3 prototypes).
5. **Review**: User views `compare.html` to select the best direction.
**Example (Non-Interactive - Default):**
```bash
/workflow:ui-design:explore-auto \
--prompt "Modern SaaS landing page with hero, features, pricing sections" \
--style-variants 3 \
--layout-variants 2 \
--session WFS-001
```
**Output:**
- `design-tokens-v1.json`, `design-tokens-v2.json`, `design-tokens-v3.json` (3 style variants)
- `layout-templates-v1.json`, `layout-templates-v2.json` (2 layout variants)
- 6 HTML prototypes (3 × 2 combinations)
- `compare.html` for side-by-side comparison
**Example (Interactive Mode):**
```bash
/workflow:ui-design:explore-auto \
--prompt "Modern SaaS landing page with hero, features, pricing sections" \
--style-variants 5 \
--layout-variants 4 \
--interactive \
--session WFS-001
```
**Interactive Flow:**
1. System generates 5 style concepts
2. **User selects** 2-3 preferred styles (multi-select)
3. System generates 4 layout concepts
4. **User selects** 2 preferred layouts (multi-select)
5. System generates only 4-6 final prototypes (selected combinations)
---
### Workflow B: Design Replication (Imitation)
**Goal:** Create a design system and prototypes based on existing local references.
**Primary Command:** `imitate-auto`
**Steps:**
1. **Initiate**: User runs `/workflow:ui-design:imitate-auto --input "design-refs/*.png"` with local reference files
2. **Input Detection**: System detects input type (images, code files, or text)
3. **Extraction**: System extracts a unified design system (style, layout, animation) from the references.
4. **Assembly**: System creates prototypes using the extracted system.
**Example:**
```bash
# Using reference images
/workflow:ui-design:imitate-auto \
--input "design-refs/*.png" \
--session WFS-002
# Or importing from existing code
/workflow:ui-design:imitate-auto \
--input "./src/components" \
--session WFS-002
```
**Output:**
- `design-tokens.json` (unified style system)
- `layout-templates.json` (page structures)
- HTML prototypes based on the input references
---
### Workflow C: Code-First Bootstrap
**Goal:** Create a design system from an existing codebase.
**Primary Command:** `import-from-code`
**Steps:**
1. **Initiate**: User runs `/workflow:ui-design:import-from-code --base-path ./src`
2. **Analysis**: Parallel agents analyze CSS, JS, and HTML to find tokens, layouts, and animations.
3. **Reporting**: Generates completeness reports and initial token files.
4. **Supplement (Optional)**: Run `style-extract` or `layout-extract` to fill gaps identified in the reports.
**Example:**
```bash
/workflow:ui-design:import-from-code \
--base-path ./src/components \
--session WFS-003
```
**Output:**
- `design-tokens.json` (extracted from CSS variables, theme files)
- `layout-templates.json` (extracted from component structures)
- `completeness-report.md` (gaps and recommendations)
- `import-summary.json` (statistics and findings)
---
## Architecture & Best Practices
### Separation of Concerns
**Always keep design tokens separate from layout templates:**
- `design-tokens.json` (Style) - Colors, typography, spacing, shadows
- `layout-templates.json` (Structure) - DOM hierarchy, CSS layout rules
- `animation-tokens.json` (Motion) - Transitions, keyframes, timing functions
### Token-First CSS
Generated CSS should primarily use CSS custom properties:
```css
/* Good - Token-based */
.button {
background: var(--color-primary);
padding: var(--spacing-md);
border-radius: var(--radius-md);
}
/* Avoid - Hardcoded */
.button {
background: #3b82f6;
padding: 16px;
border-radius: 8px;
}
```
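One way to keep generated CSS token-based is to derive the custom properties directly from `design-tokens.json`. The sketch below assumes flat `colors`/`spacing` maps such as `{"colors": {"primary": "#3b82f6"}}`; adjust the `jq` paths to the actual schema produced by `style-extract`.
```bash
# Generate :root custom properties from design-tokens.json (assumes flat token maps)
{
  echo ":root {"
  jq -r '.colors  | to_entries[] | "  --color-\(.key): \(.value);"'   design-tokens.json
  jq -r '.spacing | to_entries[] | "  --spacing-\(.key): \(.value);"' design-tokens.json
  echo "}"
} > tokens.css
```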
### Style-Centric Batching
For high-volume generation:
- Group tasks by style to minimize context switching
- Use parallel generation with multiple targets
- Reuse existing layout inspirations
### Input Quality Guidelines
**For Prompts:**
- Specify the desired *vibe* (e.g., "minimalist, high-contrast")
- Specify the *targets* (e.g., "dashboard, settings page")
- Include functional requirements (e.g., "responsive, mobile-first")
**For Local References:**
- Use high-quality reference images (PNG, JPG)
- Organize files in accessible directories
- For code imports, ensure files are properly structured (CSS, JS, HTML)
---
## Advanced Usage
### Multi-Session Workflows
You can run UI design workflows within an existing workflow session:
```bash
# 1. Start a workflow session
/workflow:session:start --new
# 2. Run exploratory design
/workflow:ui-design:explore-auto --prompt "E-commerce checkout flow" --session <session-id>
# 3. Update main session with design artifacts
/workflow:ui-design:update --session <session-id> --selected-prototypes "v1,v2"
```
### Combining Workflows
**Example: Imitation + Custom Extraction**
```bash
# 1. Import design from local references
/workflow:ui-design:imitate-auto --input "design-refs/*.png"
# 2. Extract additional layouts and generate prototypes
/workflow:ui-design:layout-extract --targets "new-page-1,new-page-2"
/workflow:ui-design:generate
```
### Deep Interactive Capture
For complex UIs with overlays, modals, or dynamic content:
```bash
/workflow:ui-design:explore-layers \
--url https://complex-app.com \
--depth 3 \
--session WFS-005
```
---
## Troubleshooting
| Issue | Likely Cause | Resolution |
|-------|--------------|------------|
| **Missing Design Tokens** | `style-extract` failed or wasn't run | Run `/workflow:ui-design:style-extract` manually or check logs |
| **Inaccurate Layouts** | Complex DOM structure not captured | Use `--urls` in `layout-extract` for Chrome DevTools analysis |
| **Empty Screenshots** | Anti-bot measures or timeout | Use `explore-layers` interactive mode or increase timeout |
| **Generation Stalls** | Too many concurrent tasks | System defaults to max 6 parallel tasks; check resources |
| **Integration Failures** | Session ID mismatch or missing markers | Ensure `--session <id>` matches active workflow session |
| **Low Quality Tokens** | Insufficient reference material | Provide multiple reference images/URLs for better token extraction |
| **Inconsistent Styles** | Multiple token files without merge | Use single unified `design-tokens.json` or explicit variants |
---
## Command Reference Quick Links
### Orchestrators
- `/workflow:ui-design:explore-auto` - Create new designs from prompts
- `/workflow:ui-design:imitate-auto` - Replicate existing designs
### Extractors
- `/workflow:ui-design:style-extract` - Extract visual design tokens
- `/workflow:ui-design:layout-extract` - Extract layout structures
- `/workflow:ui-design:animation-extract` - Extract motion patterns
### Utilities
- `/workflow:ui-design:capture` - Batch screenshot capture
- `/workflow:ui-design:explore-layers` - Interactive deep capture
- `/workflow:ui-design:import-from-code` - Bootstrap from existing code
- `/workflow:ui-design:generate` - Assemble prototypes from tokens
- `/workflow:ui-design:update` - Integrate with workflow sessions
---
## Performance Optimization
### Parallel Execution
The system is designed to run extraction phases in parallel:
- Animation and layout extraction can run concurrently
- Multiple target generation runs in parallel (default: max 6)
- Style variant generation is parallelized
### Reuse Intermediates
- Generation commands reuse existing layout inspirations
- Cached screenshots avoid redundant captures
- Token files can be versioned and reused
### Resource Management
- Each agent task consumes memory and CPU
- Monitor system resources with large batch operations
- Consider splitting large batches into smaller chunks
---
## Related Guides
- **Getting Started** - Basic workflow concepts
- **Workflow Patterns** - General workflow guidance
- **CLI Tools Guide** - CLI integration strategies
- **Troubleshooting** - Common issues and solutions


@@ -1,100 +1,662 @@
# 常见工作流模式
Gemini CLI 不仅提供单个命令,更能通过智能编排将一系列命令组合成强大的工作流,帮助您高效完成复杂任务。本指南将介绍几种常见的工作流模式。
> 学习如何组合命令完成复杂任务,提升开发效率
## 1. 工作流核心概念
## 🎯 什么是工作流?
在深入了解具体模式之前理解工作流的架构至关重要。Gemini CLI 的工作流管理系统旨在提供一个灵活、可扩展的框架,用于定义、执行和协调复杂的开发任务
工作流是**一系列命令的组合**用于完成特定的开发目标。Claude DMS3 提供了多种工作流模式,覆盖从规划到测试的完整开发周期
- **工作流 (Workflows)**:一系列任务的组合,旨在实现特定的开发目标。
- **任务 (Tasks)**:工作流中的独立工作单元,可以简单也可以复杂,有状态、输入和输出。
- **智能体 (Agents)**:通常由大型语言模型驱动,负责执行任务或在工作流中做出决策。
- **上下文 (Context)**当前工作流的相关动态信息,包括项目状态、代码片段、文档、用户输入等,是智能决策的关键。
- **记忆 (Memory)**:持久存储上下文、工作流历史和学习模式,支持工作流的恢复、适应和改进。
**核心概念**
- **工作流Workflow**:一组相关任务的集合
- **任务Task**:独立的工作单元,有明确的输入和输出
- **Session**:工作流的执行实例,记录所有任务状态
- **上下文Context**:任务执行所需的代码、文档、配置等信息
详情可参考 `../../workflows/workflow-architecture.md`
---
## 2. 规划 -> 执行 (Plan -> Execute) 模式
## 💡 Pattern 0: 头脑风暴从0到1的第一步
这是最基础也是最常用的工作流模式,它将一个大的目标分解为可执行的步骤,并逐步实现
**⚠️ 重要**:这是**从0到1开发的起点**!在开始编码之前,通过多角色头脑风暴明确需求、技术选型和架构决策
**场景**: 您有一个需要从头开始实现的新功能或模块。
**适用场景**
- 全新项目启动,需求和技术方案不明确
- 重大功能开发,涉及多个技术领域和权衡
- 架构决策,需要多角色视角分析
**主要命令**:
- `plan`: 启动高级规划过程,分解目标。
- `breakdown`: 进一步细化和分解 `plan` 生成的任务。
- `create`: 创建具体的实施任务。
- `execute`: 执行创建好的任务以实现代码或解决方案。
**流程**:话题分析 → 角色选择 → 角色问答 → 冲突解决 → 生成指导文档
**工作流示例**:
1. **启动规划**: `gemini plan "开发一个用户认证服务"`
- CLI 会与您互动,明确需求,并生成一个初步的规划(可能包含多个子任务)。
2. **任务分解** (可选,如果规划足够细致可跳过):
- 假设 `plan` 产生了一个任务 ID `task-auth-service`
- `gemini breakdown task-auth-service`
- 可能进一步分解为 `task-register`, `task-login`, `task-password-reset`等。
3. **创建具体实现任务**:
- `gemini create "实现用户注册 API 接口"`
- 这会生成一个专门针对此任务的 ID例如 `task-id-register-api`
4. **执行实现任务**:
- `gemini execute task-id-register-api`
- CLI 将调用智能体自动编写和集成代码。
## 3. 测试驱动开发 (TDD) 模式
TDD 模式强调先编写测试再编写满足测试的代码然后重构。Gemini CLI 通过自动化 TDD 流程来支持这一模式。
**场景**: 您正在开发一个新功能,并希望通过 TDD 确保代码质量和正确性。
**主要命令**:
- `tdd-plan`: 规划 TDD 工作流,生成红-绿-重构任务链。
- `test-gen`: 根据功能描述生成测试用例。
- `execute`: 执行代码生成和测试。
- `tdd-verify`: 验证 TDD 工作流的合规性并生成质量报告。
**工作流示例**:
1. **TDD 规划**: `gemini tdd-plan "实现一个购物车功能"`
   - CLI 将为您创建一个 TDD 任务链,包括测试生成、代码实现和验证。
2. **生成测试**: (通常包含在 `tdd-plan` 的早期阶段,或可以单独调用)
   - `gemini test-gen source-session-id` (如果已有一个实现会话)
   - 这会产生失败的测试(红)。
3. **执行代码实现和测试**:
   - `gemini execute task-id-for-code-implementation`
   - 智能体会编写代码以通过测试,并将执行测试(变为绿)。
4. **TDD 验证**: `gemini tdd-verify`
   - 验证整个 TDD 周期是否规范执行,以及生成测试覆盖率等报告。
## 4. UI 设计与实现工作流
Gemini CLI 可以辅助您进行 UI 的设计、提取和代码生成,加速前端开发。
**场景**: 您需要基于一些设计稿或现有网站来快速构建 UI 原型或实现页面。
**主要命令**:
- `ui-designer`: 启动 UI 设计分析。
- `layout-extract`: 从参考图像或 URL 提取布局信息。
- `style-extract`: 从参考图像或 URL 提取设计风格。
- `generate`: 组合布局和设计令牌生成 UI 原型。
- `update`: 使用最终设计系统参考更新设计产物。
**工作流示例**:
1. **启动 UI 设计分析**: `gemini ui-designer`
   - 开始一个引导式的流程,定义您的 UI 设计目标。
2. **提取布局**: `gemini layout-extract --urls "https://example.com/some-page"`
   - 从给定 URL 提取页面布局结构。
3. **提取样式**: `gemini style-extract --images "./design-mockup.png"`
   - 从设计图中提取颜色、字体等视觉风格。
4. **生成 UI 原型**: `gemini generate --base-path ./my-ui-project`
   - 结合提取的布局和样式,生成可工作的 UI 代码或原型。
5. **更新与迭代**: `gemini update --session ui-design-session-id --selected-prototypes "proto-01,proto-03"`
   - 根据反馈和最终设计系统,迭代并更新生成的 UI 产物。
## 5. 上下文搜索策略
所有这些工作流都依赖于高效的上下文管理。Gemini CLI 采用多层次的上下文搜索策略,以确保智能代理获得最相关的信息。
- **相关性优先**: 优先收集与当前任务直接相关的上下文,而非大量数据。
- **分层搜索**: 从最直接的来源(如当前打开文件)开始,逐步扩展到项目文件、记忆库和外部资源。
- **语义理解**: 利用智能搜索理解查询的意图,而非仅仅是关键词匹配。
更多细节请查阅 `../../workflows/context-search-strategy.md`
### 模式 A交互式头脑风暴(推荐)
**特点**:通过问答交互,逐步明确需求和决策
```bash
# 步骤 1启动头脑风暴
/workflow:brainstorm:artifacts "
GOAL: 实现实时协作编辑平台
SCOPE: 支持100+用户同时在线,低延迟(<100ms),冲突自动解决
CONTEXT: MVP阶段3个月上线团队5人2前端+2后端+1全栈
" --count 3

# 系统输出 Phase 0自动收集项目上下文
# ✅ 分析现有代码库结构
# ✅ 加载相关文档
# ✅ 识别技术栈和依赖

# 系统输出 Phase 1意图分析2-4个问题
# 【问题1 - 核心挑战】实时协作的主要技术挑战?
# a) 实时数据同步
# b) 可扩展性架构
# c) 冲突解决机制
# 【问题2 - 优先级】MVP阶段最关注
# a) 功能完整性
# b) 用户体验
# c) 系统稳定性
# 请回答 (格式: 1a 2c)

# 用户输入1c 2b

# 系统输出 Phase 2角色选择(推荐 count+2 个角色)
# 【角色选择】请选择 3 个角色参与头脑风暴
# a) system-architect (系统架构师) - 实时同步架构设计
# b) ui-designer (UI设计师) - 协作界面用户体验
# c) product-manager (产品经理) - 功能优先级决策
# d) data-architect (数据架构师) - 数据同步模型设计
# e) ux-expert (UX专家) - 多用户协作交互流程
# 请输入选择 (格式: 2acd 或 2a 2c 2d)

# 用户输入2acd

# 系统输出 Phase 3角色问题(每个角色 3-4 个问题)
# ===== system-architect 问题 =====
# 【问题1】100+ 用户实时状态同步方案?
# a) Event Sourcing - 完整事件历史,易于回溯
# b) 集中式状态管理 - 简单直接适合MVP
# c) CRDT - 自动冲突解决,分布式友好
# 【问题2】低延迟通信协议选择
# ...
# 请回答:

# 用户输入1c 2a 3b 4c

# 系统输出 Phase 4冲突检测和解决
# 【冲突1】CRDT 与 UI 回滚期望冲突
# Background: system-architect 选择 CRDT但 ui-designer 期望回滚 UI
# a) 采用 CRDT显示合并状态
# b) 切换到 OT 算法支持回滚
# c) 混合方案CRDT + 本地撤销栈
# ...

# 系统输出 Phase 5生成指导文档
# ✅ 生成 guidance-specification.md
# ✅ 记录所有决策和理由
# ✅ 标注冲突解决方案
# 📁 文件位置:.workflow/WFS-realtime-collab/.brainstorming/guidance-specification.md

# 步骤 2查看生成的指导文档
cat .workflow/WFS-*//.brainstorming/guidance-specification.md
```
### 模式 B自动并行头脑风暴(快速)
**特点**:自动选择角色,并行执行,快速生成多角色分析
```bash
# 步骤 1一键启动并行头脑风暴
/workflow:brainstorm:auto-parallel "
GOAL: 实现支付处理模块
SCOPE: 支持微信/支付宝/银行卡日交易10万笔99.99%可用性
CONTEXT: 金融合规要求PCI DSS认证风控系统集成
" --count 4
# 系统输出:
# ✅ Phase 0: 收集项目上下文
# ✅ Phase 1-2: artifacts 交互式框架生成
# ⏳ Phase 3: 4个角色并行分析
# - system-architect → 分析中...
# - data-architect → 分析中...
# - product-manager → 分析中...
# - subject-matter-expert → 分析中...
# ✅ Phase 4: synthesis 综合分析
# 📁 输出文件:
# - .brainstorming/guidance-specification.md (框架)
# - system-architect/analysis.md
# - data-architect/analysis.md
# - product-manager/analysis.md
# - subject-matter-expert/analysis.md
# - synthesis/final-recommendations.md
# 步骤 2查看综合建议
cat .workflow/WFS-*//.brainstorming/synthesis/final-recommendations.md
```
### 模式 C单角色深度分析(特定领域)
**特点**:针对特定领域问题,调用单个角色深度分析
```bash
# 系统架构分析
/workflow:brainstorm:system-architect "API 网关架构设计支持10万QPS微服务集成"
# UI 设计分析
/workflow:brainstorm:ui-designer "管理后台界面设计,复杂数据展示,操作效率优先"
# 数据架构分析
/workflow:brainstorm:data-architect "分布式数据存储方案MySQL+Redis+ES 组合"
```
### 关键点
1. **Phase 0 自动上下文收集**
- 自动分析现有代码库、文档、技术栈
- 识别潜在冲突和集成点
- 为后续问题生成提供上下文
2. **动态问题生成**
- 基于话题关键词和项目上下文生成问题
- 不使用预定义模板
- 问题直接针对你的具体场景
3. **智能角色推荐**
- 基于话题分析推荐最相关的角色
- 推荐 count+2 个角色供选择
- 每个角色都有基于话题的推荐理由
4. **输出物**
- `guidance-specification.md` - 确认的指导规范(决策、理由、集成点)
- `{role}/analysis.md` - 各角色详细分析(仅 auto-parallel 模式)
- `synthesis/final-recommendations.md` - 综合建议(仅 auto-parallel 模式)
5. **下一步**
- 头脑风暴完成后,使用 `/workflow:plan` 基于指导文档生成实施计划
- 指导文档作为规划和实现的权威参考
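一个从头脑风暴衔接到规划的操作示意(指导文档路径取自上文示例输出,规划描述按实际话题替换):
```bash
# 确认头脑风暴产出的指导规范
cat .workflow/WFS-realtime-collab/.brainstorming/guidance-specification.md

# 以指导规范为权威参考,生成并执行实施计划
/workflow:plan --agent "按照 guidance-specification.md 中确认的架构决策,实现实时协作编辑平台 MVP"
/workflow:execute
```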
### 使用场景对比
| 场景 | 推荐模式 | 原因 |
|------|---------|------|
| 全新项目启动 | 交互式 (artifacts) | 需要充分澄清需求和约束 |
| 重大架构决策 | 交互式 (artifacts) | 需要深入讨论权衡 |
| 快速原型验证 | 自动并行 (auto-parallel) | 快速获得多角色建议 |
| 特定技术问题 | 单角色 (specific role) | 专注某个领域深度分析 |
---
## 📋 Pattern 1: 规划→执行(最常用)
**适用场景**:实现新功能、新模块
**流程**:规划 → 执行 → 查看状态
### 完整示例
```bash
# 步骤 1规划任务
/workflow:plan --agent "实现用户认证模块"
# 系统输出:
# ✅ 规划完成
# 📁 Session: WFS-20251106-123456
# 📋 生成 5 个任务
# 步骤 2执行任务
/workflow:execute
# 系统输出:
# ⏳ 执行 task-001-user-model...
# ✅ task-001 完成
# ⏳ 执行 task-002-login-api...
# ...
# 步骤 3查看状态
/workflow:status
# 系统输出:
# Session: WFS-20251106-123456
# Total: 5 | Completed: 5 | Pending: 0
```
**关键点**
- `--agent` 参数使用 AI 生成更详细的计划
- 系统自动发现最新 session无需手动指定
- 所有任务按依赖顺序自动执行
---
## 🧪 Pattern 2: TDD测试驱动开发
**适用场景**:需要高质量代码和测试覆盖
**流程**TDD规划 → 执行(红→绿→重构)→ 验证
### 完整示例
```bash
# 步骤 1TDD 规划
/workflow:tdd-plan --agent "实现购物车功能"
# 系统输出:
# ✅ TDD 任务链生成
# 📋 Red-Green-Refactor 周期:
# - task-001-cart-tests (RED)
# - task-002-cart-implement (GREEN)
# - task-003-cart-refactor (REFACTOR)
# 步骤 2执行 TDD 周期
/workflow:execute
# 系统会自动:
# 1. 生成失败的测试RED
# 2. 实现代码让测试通过GREEN
# 3. 重构代码REFACTOR
# 步骤 3验证 TDD 合规性
/workflow:tdd-verify
# 系统输出:
# ✅ TDD 周期完整
# ✅ 测试覆盖率: 95%
# ✅ Red-Green-Refactor 合规
```
**关键点**
- TDD 模式自动生成测试优先的任务链
- 每个任务有依赖关系,确保正确的顺序
- 验证命令检查 TDD 合规性
---
## 🔄 Pattern 3: 测试生成
**适用场景**:已有代码,需要生成测试
**流程**:分析代码 → 生成测试策略 → 执行测试生成
### 完整示例
```bash
# 步骤 1实现功能已完成
# 假设已经完成实现session 为 WFS-20251106-123456
# 步骤 2生成测试
/workflow:test-gen WFS-20251106-123456
# 系统输出:
# ✅ 分析实现代码
# ✅ 生成测试策略
# 📋 创建测试任务WFS-test-20251106-789
# 步骤 3执行测试生成
/workflow:test-cycle-execute --resume-session WFS-test-20251106-789
# 系统输出:
# ⏳ 生成测试用例...
# ⏳ 执行测试...
# ❌ 3 tests failed
# ⏳ 修复失败测试...
# ✅ All tests passed
```
**关键点**
- `test-gen` 分析现有代码生成测试
- `test-cycle-execute` 自动生成→测试→修复循环
- 最多迭代 N 次直到所有测试通过
---
## 🎨 Pattern 4: UI 设计工作流
**适用场景**:基于设计稿或现有网站实现 UI
**流程**:提取样式 → 提取布局 → 生成原型 → 更新
### 完整示例
```bash
# 步骤 1提取设计样式
/workflow:ui-design:style-extract \
--images "design/*.png" \
--mode imitate \
--variants 3
# 系统输出:
# ✅ 提取颜色系统
# ✅ 提取字体系统
# ✅ 生成 3 个样式变体
# 步骤 2提取页面布局
/workflow:ui-design:layout-extract \
--urls "https://example.com/dashboard" \
--device-type responsive
# 系统输出:
# ✅ 提取布局结构
# ✅ 识别组件层次
# ✅ 生成响应式布局
# 步骤 3生成 UI 原型
/workflow:ui-design:generate \
--style-variants 2 \
--layout-variants 2
# 系统输出:
# ✅ 生成 4 个原型组合
# 📁 输出:.workflow/ui-design/prototypes/
# 步骤 4更新最终版本
/workflow:ui-design:update \
--session ui-session-id \
--selected-prototypes "proto-1,proto-3"
# 系统输出:
# ✅ 应用最终设计系统
# ✅ 更新所有原型
```
**关键点**
- 支持从图片或 URL 提取设计
- 可生成多个变体供选择
- 最终更新使用确定的设计系统
---
## 🔍 Pattern 5: 代码分析→重构
**适用场景**:优化现有代码,提高可维护性
**流程**:分析现状 → 制定计划 → 执行重构 → 生成测试
### 完整示例
```bash
# 步骤 1分析代码质量
/cli:analyze --tool gemini --cd src/auth \
"评估认证模块的代码质量、可维护性和潜在问题"
# 系统输出:
# ✅ 识别 3 个设计问题
# ✅ 发现 5 个性能瓶颈
# ✅ 建议 7 项改进
# 步骤 2制定重构计划
/cli:mode:plan --tool gemini --cd src/auth \
"基于上述分析,制定认证模块重构方案"
# 系统输出:
# ✅ 重构计划生成
# 📋 包含 8 个重构任务
# 步骤 3执行重构
/cli:execute --tool codex \
"按照重构计划执行认证模块重构"
# 步骤 4生成测试确保正确性
/workflow:test-gen WFS-refactor-session-id
```
**关键点**
- Gemini 用于分析和规划(理解)
- Codex 用于执行实现(重构)
- 重构后必须生成测试验证
---
## 📚 Pattern 6: 文档生成
**适用场景**:为项目或模块生成文档
**流程**:分析代码 → 生成文档 → 更新索引
### 完整示例
```bash
# 方式 1为单个模块生成文档
/memory:docs src/auth --tool gemini --mode full
# 系统输出:
# ✅ 分析模块结构
# ✅ 生成 CLAUDE.md
# ✅ 生成 API 文档
# ✅ 生成使用指南
# 方式 2更新所有模块文档
/memory:update-full --tool gemini
# 系统输出:
# ⏳ 按层级更新文档...
# ✅ Layer 3: 12 modules updated
# ✅ Layer 2: 5 modules updated
# ✅ Layer 1: 2 modules updated
# 方式 3只更新修改过的模块
/memory:update-related --tool gemini
# 系统输出:
# ✅ 检测 git 变更
# ✅ 更新 3 个相关模块
```
**关键点**
- `--mode full` 生成完整文档
- `update-full` 适用于初始化或大规模更新
- `update-related` 适用于日常增量更新
---
## 🔄 Pattern 7: 恢复和继续
**适用场景**:中断后继续工作,或修复失败的任务
**流程**:查看状态 → 恢复 session → 继续执行
### 完整示例
```bash
# 步骤 1查看所有 session
/workflow:status
# 系统输出:
# Session: WFS-20251106-123456 (5/10 completed)
# Session: WFS-20251105-234567 (10/10 completed)
# 步骤 2恢复特定 session
/workflow:resume WFS-20251106-123456
# 系统输出:
# ✅ Session 恢复
# 📋 5/10 tasks completed
# ⏳ 待执行: task-006, task-007, ...
# 步骤 3继续执行
/workflow:execute --resume-session WFS-20251106-123456
# 系统输出:
# ⏳ 继续执行 task-006...
# ✅ task-006 完成
# ...
```
**关键点**
- 所有 session 状态都被保存
- 可以随时恢复中断的工作流
- 恢复时自动分析进度和待办任务
---
## 🎯 Pattern 8: 快速实现Codex YOLO
**适用场景**:快速实现简单功能,跳过规划
**流程**:直接执行 → 完成
### 完整示例
```bash
# 一键实现功能
/cli:codex-execute --verify-git \
"实现用户头像上传功能:
- 支持 jpg/png 格式
- 自动裁剪为 200x200
- 压缩到 100KB 以下
- 上传到 OSS
"
# 系统输出:
# ⏳ 分析需求...
# ⏳ 生成代码...
# ⏳ 集成现有代码...
# ✅ 功能实现完成
# 📁 修改文件:
# - src/api/upload.ts
# - src/utils/image.ts
```
**关键点**
- 适合简单、独立的功能
- `--verify-git` 确保 git 状态干净
- 自动分析需求并完整实现
---
## 🤝 Pattern 9: 多工具协作
**适用场景**:复杂任务需要多个 AI 工具配合
**流程**Gemini 分析 → Gemini/Qwen 规划 → Codex 实现
### 完整示例
```bash
# 步骤 1Gemini 深度分析
/cli:analyze --tool gemini \
"分析支付模块的安全性和性能问题"
# 步骤 2多工具讨论方案
/cli:discuss-plan --topic "支付模块重构方案" --rounds 3
# 系统输出:
# Round 1:
# Gemini: 建议方案 A关注安全
# Codex: 建议方案 B关注性能
# Round 2:
# Gemini: 综合分析...
# Codex: 技术实现评估...
# Round 3:
# 最终方案: 方案 C安全+性能)
# 步骤 3Codex 执行实现
/cli:execute --tool codex "按照方案 C 重构支付模块"
```
**关键点**
- `discuss-plan` 让多个 AI 讨论方案
- 每个工具贡献自己的专长
- 最终选择综合最优方案
---
## 📊 工作流选择指南
**核心区分**从0到1 vs 功能新增
- **从0到1**:全新项目、新产品、重大架构决策 → **必须头脑风暴**
- **功能新增**:已有项目中添加功能 → **可直接规划**
```mermaid
graph TD
A[我要做什么?] --> B{项目阶段?}
B -->|从0到1<br/>全新项目/产品| Z[💡头脑风暴<br/>必经阶段]
B -->|功能新增<br/>已有项目| C{任务类型?}
Z --> Z1[/workflow:brainstorm:artifacts<br/>或<br/>/workflow:brainstorm:auto-parallel]
Z1 --> Z2[⬇️ 生成指导文档]
Z2 --> C
C -->|新功能| D[规划→执行]
C -->|需要测试| E{代码是否存在?}
C -->|UI开发| F[UI设计工作流]
C -->|代码优化| G[分析→重构]
C -->|生成文档| H[文档生成]
C -->|快速实现| I[Codex YOLO]
E -->|不存在| J[TDD工作流]
E -->|已存在| K[测试生成]
D --> L[/workflow:plan<br/>↓<br/>/workflow:execute]
J --> M[/workflow:tdd-plan<br/>↓<br/>/workflow:execute]
K --> N[/workflow:test-gen<br/>↓<br/>/workflow:test-cycle-execute]
F --> O[/workflow:ui-design:*]
G --> P[/cli:analyze<br/>↓<br/>/cli:mode:plan<br/>↓<br/>/cli:execute]
H --> Q[/memory:docs]
I --> R[/cli:codex-execute]
```
**说明**
- **从0到1场景**:创业项目、新产品线、系统重构 → 头脑风暴明确方向后再规划
- **功能新增场景**:现有系统添加模块、优化现有功能 → 直接进入规划或分析
---
## 💡 最佳实践
### ✅ 推荐做法
1. **复杂任务使用完整工作流**
```bash
/workflow:plan → /workflow:execute → /workflow:test-gen
```
2. **简单任务使用 Codex YOLO**
```bash
/cli:codex-execute "快速实现xxx"
```
3. **重要代码使用 TDD**
```bash
/workflow:tdd-plan → /workflow:execute → /workflow:tdd-verify
```
4. **定期更新文档**
```bash
/memory:update-related # 每次提交前
```
5. **善用恢复功能**
```bash
/workflow:status → /workflow:resume
```
---
### ❌ 避免做法
1. **⚠️ 不要在从0到1场景跳过头脑风暴**
- ❌ 全新项目直接 `/workflow:plan`
- ✅ 先 `/workflow:brainstorm:artifacts` 明确方向再规划
2. **不要跳过规划直接执行复杂任务**
- ❌ 直接 `/cli:execute` 实现复杂功能
- ✅ 先 `/workflow:plan` 再 `/workflow:execute`
3. **不要忽略测试**
- ❌ 实现完成后不生成测试
- ✅ 使用 `/workflow:test-gen` 生成测试
4. **不要遗忘文档**
- ❌ 代码实现后忘记更新文档
- ✅ 使用 `/memory:update-related` 自动更新
---
## 🔗 相关资源
- **快速入门**[Getting Started](getting-started.md) - 5分钟上手
- **CLI 工具**[CLI Tools Guide](cli-tools-guide.md) - Gemini/Qwen/Codex 详解
- **UI设计工作流**[UI Design Workflow Guide](ui-design-workflow-guide.md) - UI设计完整指南
- **问题排查**[Troubleshooting](troubleshooting.md) - 常见问题解决
- **完整命令列表**:查看 `index/all-commands.json`
---
**最后更新**: 2025-11-06
记住:选择合适的工作流模式,事半功倍!不确定用哪个?使用 `ccw` 询问 Command Guide

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,616 +1,243 @@
{
"cli:analyze": {
"related_commands": [
"cli:chat",
"cli:mode:code-analysis"
],
"next_steps": [],
"prerequisites": []
},
"cli:chat": {
"related_commands": [
"cli:analyze",
"memory:load"
],
"next_steps": [],
"prerequisites": []
},
"cli:cli-init": {
"related_commands": [],
"next_steps": [],
"prerequisites": []
},
"cli:codex-execute": {
"related_commands": [
"cli:execute",
"task:execute"
],
"next_steps": [],
"prerequisites": []
},
"cli:discuss-plan": {
"related_commands": [],
"next_steps": [],
"prerequisites": []
},
"cli:execute": {
"related_commands": [
"cli:codex-execute",
"task:execute"
],
"next_steps": [
"workflow:status"
],
"prerequisites": []
},
"cli:mode:bug-diagnosis": {
"related_commands": [
"cli:mode:code-analysis",
"cli:execute"
],
"next_steps": [],
"prerequisites": []
},
"cli:mode:code-analysis": {
"related_commands": [
"cli:analyze",
"cli:mode:bug-diagnosis"
],
"next_steps": [],
"prerequisites": []
},
"cli:mode:plan": {
"related_commands": [
"workflow:plan",
"cli:analyze"
],
"next_steps": [],
"prerequisites": []
},
"enhance-prompt": {
"related_commands": [
"workflow:plan",
"cli:execute"
],
"next_steps": [],
"prerequisites": []
},
"memory:docs": {
"related_commands": [
"memory:update-full",
"memory:update-related"
],
"next_steps": [],
"prerequisites": []
},
"memory:load-skill-memory": {
"related_commands": [
"memory:load"
],
"next_steps": [],
"prerequisites": [
"memory:skill-memory"
]
},
"memory:load": {
"related_commands": [
"memory:skill-memory",
"memory:load-skill-memory"
],
"next_steps": [
"workflow:plan",
"cli:execute"
],
"prerequisites": []
},
"memory:skill-memory": {
"related_commands": [
"memory:docs",
"memory:update-full"
],
"next_steps": [
"memory:load-skill-memory"
],
"prerequisites": []
},
"memory:tech-research": {
"related_commands": [],
"next_steps": [
"memory:load-skill-memory"
],
"prerequisites": []
},
"memory:update-full": {
"related_commands": [
"memory:update-related"
],
"next_steps": [],
"prerequisites": [
"memory:docs"
]
},
"memory:update-related": {
"related_commands": [
"memory:update-full",
"memory:docs"
],
"next_steps": [],
"prerequisites": []
},
"memory:workflow-skill-memory": {
"related_commands": [
"memory:skill-memory",
"workflow:session:list"
],
"next_steps": [],
"prerequisites": []
},
"task:breakdown": {
"related_commands": [],
"next_steps": [
"task:execute"
],
"prerequisites": [
"task:create"
]
},
"task:create": {
"related_commands": [
"task:breakdown",
"workflow:plan"
],
"next_steps": [
"task:execute"
],
"prerequisites": []
},
"task:execute": {
"related_commands": [
"workflow:execute",
"workflow:status",
"task:create"
],
"next_steps": [],
"prerequisites": []
},
"task:replan": {
"related_commands": [
"task:execute",
"workflow:action-plan-verify"
],
"next_steps": [],
"prerequisites": []
},
"version": {
"related_commands": [],
"next_steps": [],
"prerequisites": []
},
"workflow:action-plan-verify": {
"related_commands": [
"workflow:status"
],
"next_steps": [
"workflow:execute"
],
"prerequisites": [
"workflow:plan"
]
},
"workflow:brainstorm:api-designer": {
"related_commands": [
"workflow:brainstorm:synthesis",
"workflow:plan"
],
"next_steps": [],
"prerequisites": []
},
"workflow:brainstorm:artifacts": {
"related_commands": [
"workflow:brainstorm:synthesis",
"workflow:plan"
],
"next_steps": [
"workflow:plan"
],
"prerequisites": []
},
"workflow:brainstorm:auto-parallel": {
"related_commands": [
"workflow:brainstorm:synthesis",
"workflow:plan"
],
"next_steps": [
"workflow:brainstorm:synthesis"
],
"prerequisites": []
},
"workflow:brainstorm:data-architect": {
"related_commands": [
"workflow:brainstorm:synthesis",
"workflow:plan"
],
"next_steps": [],
"prerequisites": []
},
"workflow:brainstorm:product-manager": {
"related_commands": [
"workflow:brainstorm:synthesis",
"workflow:plan"
],
"next_steps": [],
"prerequisites": []
},
"workflow:brainstorm:product-owner": {
"related_commands": [
"workflow:brainstorm:synthesis",
"workflow:plan"
],
"next_steps": [],
"prerequisites": []
},
"workflow:brainstorm:scrum-master": {
"related_commands": [
"workflow:brainstorm:synthesis",
"workflow:plan"
],
"next_steps": [],
"prerequisites": []
},
"workflow:brainstorm:subject-matter-expert": {
"related_commands": [
"workflow:brainstorm:synthesis",
"workflow:plan"
],
"next_steps": [],
"prerequisites": []
},
"workflow:brainstorm:synthesis": {
"related_commands": [
"workflow:brainstorm:synthesis",
"workflow:plan",
"workflow:brainstorm:artifacts"
],
"next_steps": [],
"prerequisites": []
},
"workflow:brainstorm:system-architect": {
"related_commands": [
"workflow:brainstorm:synthesis",
"workflow:plan"
],
"next_steps": [],
"prerequisites": []
},
"workflow:brainstorm:ui-designer": {
"related_commands": [
"workflow:brainstorm:synthesis",
"workflow:plan"
],
"next_steps": [],
"prerequisites": []
},
"workflow:brainstorm:ux-expert": {
"related_commands": [
"workflow:brainstorm:synthesis",
"workflow:plan"
],
"next_steps": [],
"prerequisites": []
},
"workflow:execute": {
"related_commands": [
"workflow:status",
"task:execute",
"workflow:session:start"
],
"next_steps": [
"workflow:review",
"workflow:test-cycle-execute"
],
"prerequisites": [
"workflow:plan"
]
},
"workflow:plan": {
"related_commands": [
"workflow:status",
"workflow:tdd-plan",
"workflow:session:start"
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather",
"workflow:tools:conflict-resolution",
"workflow:tools:task-generate",
"workflow:tools:task-generate-agent"
],
"next_steps": [
"workflow:action-plan-verify",
"workflow:execute"
],
"prerequisites": []
},
"workflow:resume": {
"related_commands": [
"workflow:session:start",
"workflow:status",
"workflow:execute"
],
"next_steps": [],
"prerequisites": []
},
"workflow:review": {
"related_commands": [
"workflow:status"
"alternatives": [
"workflow:tdd-plan"
],
"next_steps": [],
"prerequisites": [
"workflow:execute"
]
},
"workflow:session:complete": {
"related_commands": [
"workflow:review",
"workflow:session:list"
],
"next_steps": [],
"prerequisites": [
"workflow:execute"
]
},
"workflow:session:list": {
"related_commands": [
"workflow:session:start",
"workflow:session:resume"
],
"next_steps": [],
"prerequisites": []
},
"workflow:session:resume": {
"related_commands": [
"workflow:resume",
"workflow:status"
],
"next_steps": [],
"prerequisites": [
"workflow:session:start"
]
},
"workflow:session:start": {
"related_commands": [
"workflow:session:list",
"workflow:session:resume"
],
"next_steps": [
"workflow:plan",
"workflow:execute"
],
"prerequisites": []
},
"workflow:status": {
"related_commands": [
"workflow:execute",
"workflow:plan",
"task:execute"
],
"next_steps": [],
"prerequisites": []
},
"workflow:tdd-plan": {
"related_commands": [
"workflow:plan",
"workflow:test-cycle-execute"
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather",
"workflow:tools:task-generate-tdd"
],
"next_steps": [
"workflow:execute",
"workflow:tdd-verify"
"workflow:tdd-verify",
"workflow:status",
"workflow:execute"
],
"alternatives": [
"workflow:plan"
],
"prerequisites": []
},
"workflow:tdd-verify": {
"related_commands": [
"workflow:test-cycle-execute",
"workflow:review"
],
"next_steps": [],
"workflow:execute": {
"prerequisites": [
"workflow:plan",
"workflow:tdd-plan"
],
"related": [
"workflow:status",
"workflow:resume"
],
"next_steps": [
"workflow:review",
"workflow:tdd-verify"
]
},
"workflow:test-cycle-execute": {
"related_commands": [
"workflow:tdd-verify",
"workflow:action-plan-verify": {
"prerequisites": [
"workflow:plan"
],
"next_steps": [
"workflow:execute"
],
"next_steps": [],
"related": [
"workflow:status"
]
},
"workflow:tdd-verify": {
"prerequisites": [
"workflow:test-gen",
"workflow:test-fix-gen"
"workflow:execute"
],
"related": [
"workflow:tools:tdd-coverage-analysis"
]
},
"workflow:session:start": {
"next_steps": [
"workflow:plan",
"workflow:execute"
],
"related": [
"workflow:session:list",
"workflow:session:resume"
]
},
"workflow:session:resume": {
"alternatives": [
"workflow:resume"
],
"related": [
"workflow:session:list",
"workflow:status"
]
},
"workflow:resume": {
"alternatives": [
"workflow:session:resume"
],
"related": [
"workflow:status"
]
},
"task:create": {
"next_steps": [
"task:execute"
],
"related": [
"task:breakdown"
]
},
"task:breakdown": {
"next_steps": [
"task:execute"
],
"related": [
"task:create"
]
},
"task:replan": {
"prerequisites": [
"workflow:plan"
],
"related": [
"workflow:action-plan-verify"
]
},
"task:execute": {
"prerequisites": [
"task:create",
"task:breakdown",
"workflow:plan"
],
"related": [
"workflow:status"
]
},
"memory:docs": {
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather"
],
"next_steps": [
"workflow:execute"
]
},
"memory:skill-memory": {
"next_steps": [
"workflow:plan",
"cli:analyze"
],
"related": [
"memory:load-skill-memory"
]
},
"memory:workflow-skill-memory": {
"related": [
"memory:skill-memory"
],
"next_steps": [
"workflow:plan"
]
},
"cli:execute": {
"alternatives": [
"cli:codex-execute"
],
"related": [
"cli:analyze",
"cli:chat"
]
},
"cli:analyze": {
"related": [
"cli:chat",
"cli:mode:code-analysis"
],
"next_steps": [
"cli:execute"
]
},
"workflow:brainstorm:artifacts": {
"next_steps": [
"workflow:brainstorm:synthesis",
"workflow:plan"
],
"related": [
"workflow:brainstorm:auto-parallel"
]
},
"workflow:brainstorm:synthesis": {
"prerequisites": [
"workflow:brainstorm:artifacts"
],
"next_steps": [
"workflow:plan"
]
},
"workflow:brainstorm:auto-parallel": {
"next_steps": [
"workflow:brainstorm:synthesis",
"workflow:plan"
],
"related": [
"workflow:brainstorm:artifacts"
]
},
"workflow:test-gen": {
"prerequisites": [
"workflow:execute"
],
"next_steps": [
"workflow:test-cycle-execute"
]
},
"workflow:test-fix-gen": {
"related_commands": [],
"alternatives": [
"workflow:test-gen"
],
"next_steps": [
"workflow:test-cycle-execute"
],
"prerequisites": []
},
"workflow:test-gen": {
"related_commands": [],
"next_steps": [
"workflow:test-cycle-execute"
],
"prerequisites": [
"workflow:execute"
]
},
"workflow:tools:conflict-resolution": {
"related_commands": [],
"next_steps": [
"workflow:tools:task-generate"
],
"workflow:test-cycle-execute": {
"prerequisites": [
"workflow:tools:context-gather"
]
},
"workflow:tools:context-gather": {
"related_commands": [
"memory:load",
"workflow:tools:conflict-resolution"
"workflow:test-gen",
"workflow:test-fix-gen"
],
"next_steps": [],
"prerequisites": [
"workflow:plan"
]
},
"workflow:tools:task-generate-agent": {
"related_commands": [
"workflow:tools:task-generate"
],
"next_steps": [
"workflow:execute"
],
"prerequisites": [
"workflow:plan"
]
},
"workflow:tools:task-generate-tdd": {
"related_commands": [
"workflow:tools:task-generate"
],
"next_steps": [],
"prerequisites": [
"workflow:tdd-plan"
]
},
"workflow:tools:task-generate": {
"related_commands": [],
"next_steps": [
"workflow:execute"
],
"prerequisites": [
"workflow:plan"
]
},
"workflow:tools:tdd-coverage-analysis": {
"related_commands": [
"related": [
"workflow:tdd-verify"
],
"next_steps": [],
"prerequisites": [
"workflow:tdd-plan"
]
},
"workflow:tools:test-concept-enhanced": {
"related_commands": [],
"next_steps": [],
"prerequisites": [
"workflow:tools:test-context-gather"
]
},
"workflow:tools:test-context-gather": {
"related_commands": [
"workflow:tools:context-gather"
],
"next_steps": [],
"prerequisites": [
"workflow:test-gen"
]
},
"workflow:tools:test-task-generate": {
"related_commands": [
"workflow:tools:task-generate"
],
"next_steps": [],
"prerequisites": [
"workflow:test-gen"
]
},
"workflow:ui-design:animation-extract": {
"related_commands": [
"workflow:plan",
"workflow:brainstorm:ui-designer"
],
"next_steps": [],
"prerequisites": []
},
"workflow:ui-design:batch-generate": {
"related_commands": [
"workflow:plan",
"workflow:brainstorm:ui-designer"
],
"next_steps": [],
"prerequisites": []
},
"workflow:ui-design:capture": {
"related_commands": [
"workflow:plan",
"workflow:brainstorm:ui-designer"
],
"next_steps": [
"workflow:ui-design:layout-extract",
"workflow:ui-design:style-extract"
],
"prerequisites": []
},
"workflow:ui-design:explore-auto": {
"related_commands": [
"workflow:plan",
"workflow:brainstorm:ui-designer"
"calls_internally": [
"workflow:ui-design:capture",
"workflow:ui-design:style-extract",
"workflow:ui-design:layout-extract"
],
"next_steps": [
"workflow:ui-design:generate",
"workflow:ui-design:update"
],
"prerequisites": []
},
"workflow:ui-design:explore-layers": {
"related_commands": [
"workflow:plan",
"workflow:brainstorm:ui-designer"
],
"next_steps": [],
"prerequisites": []
},
"workflow:ui-design:generate": {
"related_commands": [
"workflow:plan",
"workflow:brainstorm:ui-designer"
],
"next_steps": [
"workflow:ui-design:update",
"workflow:plan"
],
"prerequisites": []
"workflow:ui-design:generate"
]
},
"workflow:ui-design:imitate-auto": {
"related_commands": [
"workflow:plan",
"workflow:brainstorm:ui-designer"
],
"next_steps": [],
"prerequisites": []
},
"workflow:ui-design:layout-extract": {
"related_commands": [
"workflow:plan",
"workflow:brainstorm:ui-designer"
"calls_internally": [
"workflow:ui-design:capture"
],
"next_steps": [
"workflow:ui-design:generate"
],
"prerequisites": []
},
"workflow:ui-design:style-extract": {
"related_commands": [
"workflow:plan",
"workflow:brainstorm:ui-designer"
],
"next_steps": [
"workflow:ui-design:generate"
],
"prerequisites": []
},
"workflow:ui-design:update": {
"related_commands": [
"workflow:plan",
"workflow:brainstorm:ui-designer"
],
"next_steps": [],
"prerequisites": []
]
}
}

Some files were not shown because too many files have changed in this diff.